CapBnd: ffffffffffffffff
voluntary_ctxt_switches: 0
nonvoluntary_ctxt_switches: 1
+ Stack usage: 12 kB
This shows you nearly the same information you would get if you viewed it with
the ps command. In fact, ps uses the proc file system to obtain its
Mems_allowed_list Same as previous, but in "list format"
voluntary_ctxt_switches number of voluntary context switches
nonvoluntary_ctxt_switches number of nonvoluntary context switches
+ Stack usage: stack usage high water mark (rounded up to page size)
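
For illustration, a minimal userspace sketch that prints the context-switch
counters and the "Stack usage:" field from /proc/self/status (the latter only
appears on kernels that provide it):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[256];
      FILE *f = fopen("/proc/self/status", "r");

      if (!f)
          return 1;
      while (fgets(line, sizeof(line), f))
          if (strstr(line, "ctxt_switches") || strstr(line, "Stack usage:"))
              fputs(line, stdout);    /* print only the fields discussed above */
      fclose(f);
      return 0;
  }
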
..............................................................................
Table 1-3: Contents of the statm files (as of 2.6.8-rc3)
08049000-0804a000 rw-p 00001000 03:00 8312 /opt/test
0804a000-0806b000 rw-p 00000000 00:00 0 [heap]
a7cb1000-a7cb2000 ---p 00000000 00:00 0
-a7cb2000-a7eb2000 rw-p 00000000 00:00 0
+a7cb2000-a7eb2000 rw-p 00000000 00:00 0 [threadstack:001ff4b4]
a7eb2000-a7eb3000 ---p 00000000 00:00 0
a7eb3000-a7ed5000 rw-p 00000000 00:00 0
a7ed5000-a8008000 r-xp 00000000 03:00 4222 /lib/libc.so.6
[stack] = the stack of the main process
[vdso] = the "virtual dynamic shared object",
the kernel system call handler
+ [threadstack:xxxxxxxx] = the stack of the thread; xxxxxxxx is the stack size
or if empty, the mapping is anonymous.
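
For illustration, a small sketch that lists the bracketed pseudo-path
annotations described above from /proc/self/maps; which annotations appear
depends on the running kernel:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[512];
      FILE *f = fopen("/proc/self/maps", "r");

      if (!f)
          return 1;
      while (fgets(line, sizeof(line), f))
          if (strchr(line, '['))      /* [heap], [stack], [vdso], [threadstack:...] */
              fputs(line, stdout);
      fclose(f);
      return 0;
  }
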
all files in that instance (if CONFIG_NUMA is enabled) - which can be
adjusted on the fly via 'mount -o remount ...'
-mpol=default use the process allocation policy
- (see set_mempolicy(2))
+mpol=default prefers to allocate memory from the local node
mpol=prefer:Node prefers to allocate memory from the given Node
mpol=bind:NodeList allocates memory only from nodes in NodeList
mpol=interleave prefers to allocate from each node in turn
mpol=interleave:NodeList allocates from each node of NodeList in turn
-mpol=local prefers to allocate memory from the local node
NodeList format is a comma-separated list of decimal numbers and ranges,
a range being two hyphen-separated decimal numbers, the smallest and
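
For illustration, a minimal sketch of mounting a tmpfs instance with one of
the policies above; the mount point /mnt/numa-tmp is an assumption, and mpol=
only takes effect when CONFIG_NUMA is enabled:

  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
      /* "/mnt/numa-tmp" is only a placeholder mount point */
      if (mount("tmpfs", "/mnt/numa-tmp", "tmpfs", 0,
                "size=512m,mpol=interleave:0-3") < 0) {
          perror("mount");
          return 1;
      }
      return 0;
  }
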
Christoph Rohland <cr@sap.com>, 1.12.01
Updated:
Hugh Dickins, 4 June 2007
-Updated:
- KOSAKI Motohiro, 16 Mar 2010
in7_min_alarm 3v output undervoltage alarm
in8_min_alarm Vee (-12v) output undervoltage alarm
-in9_input GPIO voltage data
+in9_input GPIO #1 voltage data
+in10_input GPIO #2 voltage data
+in11_input GPIO #3 voltage data
power1_input 12v power usage (mW)
power2_input 5v power usage (mW)
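
For illustration, a sketch that reads one of these attributes through the
hwmon sysfs class; the hwmon0 instance name is an assumption and differs per
system:

  #include <stdio.h>

  int main(void)
  {
      long val;
      /* hwmon0 is only a placeholder; find the right instance at runtime */
      FILE *f = fopen("/sys/class/hwmon/hwmon0/power1_input", "r");

      if (!f) {
          perror("power1_input");
          return 1;
      }
      if (fscanf(f, "%ld", &val) == 1)
          printf("power1_input: %ld\n", val);   /* units as documented above */
      fclose(f);
      return 0;
  }
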
* Intel 82801I (ICH9)
* Intel EP80579 (Tolapai)
* Intel 82801JI (ICH10)
- * Intel 3400/5 Series (PCH)
- * Intel Cougar Point (PCH)
+ * Intel PCH
Datasheets: Publicly available at the Intel website
Authors:
acpi_sleep= [HW,ACPI] Sleep options
Format: { s3_bios, s3_mode, s3_beep, s4_nohwsig,
- old_ordering, s4_nonvs, sci_force_enable }
+ old_ordering, s4_nonvs }
See Documentation/power/video.txt for information on
s3_bios and s3_mode.
s3_beep is for debugging; it makes the PC's speaker beep
of _PTS is used by default).
s4_nonvs prevents the kernel from saving/restoring the
ACPI NVS memory during hibernation.
- sci_force_enable causes the kernel to set SCI_EN directly
- on resume from S1/S3 (which is against the ACPI spec,
- but some broken systems don't work without it).
acpi_use_timer_override [HW,ACPI]
Use timer override. For some broken Nvidia NF5 boards
medium is write-protected).
Example: quirks=0419:aaf5:rl,0421:0433:rc
- userpte=
- [X86] Flags controlling user PTE allocations.
-
- nohigh = do not allocate PTE pages in
- HIGHMEM regardless of setting
- of CONFIG_HIGHPTE.
-
vdso= [X86,SH]
vdso=2: enable compat VDSO (default with COMPAT_VDSO)
vdso=1: enable VDSO (default)
For Lenovo ThinkPads with a new
BIOS, it has to be handled either
by the ACPI OSI, or by userspace.
- The driver does the right thing,
- never mess with this.
0x1011 0x10 FN+END Brightness down. See brightness
up for details.
Brightness hotkey notes:
-Don't mess with the brightness hotkeys in a Thinkpad. If you want
-notifications for OSD, use the sysfs backlight class event support.
+These are the current sane choices for brightness key mapping in
+thinkpad-acpi:
-The driver will issue KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN events
-automatically for the cases were userspace has to do something to
-implement brightness changes. When you override these events, you will
-either fail to handle properly the ThinkPads that require explicit
-action to change backlight brightness, or the ThinkPads that require
-that no action be taken to work properly.
+For IBM and Lenovo models *without* ACPI backlight control (the ones on
+which thinkpad-acpi will autoload its backlight interface by default,
+and on which ACPI video does not export a backlight interface):
+
+1. Don't enable or map the brightness hotkeys in thinkpad-acpi, as
+ these older firmware versions unfortunately won't respect the hotkey
+   mask for brightness keys anyway, and always react to them. This
+   usually works fine, unless X.org drivers are doing something to block
+ the BIOS. In that case, use (3) below. This is the default mode of
+ operation.
+
+2. Enable the hotkeys, but map them to something else that is NOT
+ KEY_BRIGHTNESS_UP/DOWN or any other keycode that would cause
+ userspace to try to change the backlight level, and use that as an
+ on-screen-display hint.
+
+3. IF AND ONLY IF X.org drivers find a way to block the firmware from
+ automatically changing the brightness, enable the hotkeys and map
+ them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN, and feed that to
+ something that calls xbacklight. thinkpad-acpi will not be able to
+ change brightness in that case either, so you should disable its
+ backlight interface.
+
+For Lenovo models *with* ACPI backlight control:
+
+1. Load up ACPI video and use that. ACPI video will report ACPI
+ events for brightness change keys. Do not mess with thinkpad-acpi
+ defaults in this case. thinkpad-acpi should not have anything to do
+ with backlight events in a scenario where ACPI video is loaded:
+ brightness hotkeys must be disabled, and the backlight interface is
+ to be kept disabled as well. This is the default mode of operation.
+
+2. Do *NOT* load up ACPI video, enable the hotkeys in thinkpad-acpi,
+ and map them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN. Process
+   these keys in userspace somehow (e.g. by calling xbacklight).
+ The driver will do this automatically if it detects that ACPI video
+ has been disabled.
Bluetooth
echo expand_toggle > /proc/acpi/ibm/video
echo video_switch > /proc/acpi/ibm/video
-NOTE: Access to this feature is restricted to processes owning the
-CAP_SYS_ADMIN capability for safety reasons, as it can interact badly
-enough with some versions of X.org to crash it.
-
Each video output device can be enabled or disabled individually.
Reading /proc/acpi/ibm/video shows the status of each device.
and it is always able to disable hot keys. Very old
thinkpads are properly supported. hotkey_bios_mask
is deprecated and marked for removal.
-
-0x020600: Marker for backlight change event support.
This configures the first found 3c509 card for IRQ 10, base I/O 0x310, and
transceiver type 3 (10base2). The flag "0x3c509" must be set to avoid conflicts
with other card types when overriding the I/O address. When the driver is
-loaded as a module, only the IRQ may be overridden. For example,
-setting two cards to IRQ10 and IRQ11 is done by using the irq module
-option:
+loaded as a module, only the IRQ and transceiver setting may be overridden.
+For example, setting two cards to 10base2/IRQ10 and AUI/IRQ11 is done by using
+the xcvr and irq module options:
- options 3c509 irq=10,11
+ options 3c509 xcvr=3,1 irq=10,11
(2) Full-duplex mode
itself full-duplex capable. This is almost certainly one of two things: a full-
duplex-capable Ethernet switch (*not* a hub), or a full-duplex-capable NIC on
another system that's connected directly to the 3c509B via a crossover cable.
-
-Full-duplex mode can be enabled using 'ethtool'.
/////Extremely important caution concerning full-duplex mode/////
Understand that the 3c509B's hardware's full-duplex support is much more
never automatically enable full-duplex mode in an existing installation;
it must always be explicitly enabled via one of these methods in order to be
activated.
-
-The transceiver type can be changed using 'ethtool'.
(4a) Interpretation of error messages and common problems
S: Maintained
F: drivers/platform/x86/eeepc-laptop.c
-EFIFB FRAMEBUFFER DRIVER
-L: linux-fbdev@vger.kernel.org
-M: Peter Jones <pjones@redhat.com>
-S: Maintained
-F: drivers/video/efifb.c
-
EFS FILESYSTEM
W: http://aeschi.ch.eu.org/efs/
S: Orphan
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 32
-EXTRAVERSION = .27
+EXTRAVERSION = .9
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*
tristate "OProfile system profiling (EXPERIMENTAL)"
depends on PROFILING
depends on HAVE_OPROFILE
+ depends on TRACING_SUPPORT
+ select TRACING
select RING_BUFFER
select RING_BUFFER_ALLOW_SWAP
help
#define IO7__ERR_CYC__CYCLE__M (0x7)
printk("%s Packet In Error: %s\n"
- "%s Error in %s, cycle %lld%s%s\n",
+ "%s Error in %s, cycle %ld%s%s\n",
err_print_prefix,
packet_desc[EXTRACT(err_cyc, IO7__ERR_CYC__PACKET)],
err_print_prefix,
}
printk("%s Up Hose Garbage Symptom:\n"
- "%s Source Port: %lld - Dest PID: %lld - OpCode: %s\n",
+ "%s Source Port: %ld - Dest PID: %ld - OpCode: %s\n",
err_print_prefix,
err_print_prefix,
EXTRACT(ugbge_sym, IO7__PO7_UGBGE_SYM__UPH_SRC_PORT),
#define IO7__POX_SPLCMPLT__REM_BYTE_COUNT__M (0xfff)
printk("%s Split Completion Error:\n"
- "%s Source (Bus:Dev:Func): %lld:%lld:%lld\n",
+ "%s Source (Bus:Dev:Func): %ld:%ld:%ld\n",
err_print_prefix,
err_print_prefix,
EXTRACT(spl_cmplt, IO7__POX_SPLCMPLT__SOURCE_BUS),
ACTLR register. Note that setting specific bits in the ACTLR register
may not be available in non-secure mode.
-config ARM_ERRATA_720789
- bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
- depends on CPU_V7 && SMP
- help
- This option enables the workaround for the 720789 Cortex-A9 (prior to
- r2p0) erratum. A faulty ASID can be sent to the other CPUs for the
- broadcasted CP15 TLB maintenance operations TLBIASIDIS and TLBIMVAIS.
- As a consequence of this erratum, some TLB entries which should be
- invalidated are not, resulting in an incoherency in the system page
- tables. The workaround changes the TLB flushing routines to invalidate
- entries regardless of the ASID.
-
endmenu
source "arch/arm/common/Kconfig"
.text
adr r0, LC0
- ARM( ldmia r0, {r1, r2, r3, r4, r5, r6, r11, ip, sp})
- THUMB( ldmia r0, {r1, r2, r3, r4, r5, r6, r11, ip} )
- THUMB( ldr sp, [r0, #32] )
+ ARM( ldmia r0, {r1, r2, r3, r4, r5, r6, ip, sp} )
+ THUMB( ldmia r0, {r1, r2, r3, r4, r5, r6, ip} )
+ THUMB( ldr sp, [r0, #28] )
subs r0, r0, r1 @ calculate the delta offset
@ if delta is zero, we are
/*
* We're running at a different address. We need to fix
* up various pointers:
- * r5 - zImage base address (_start)
- * r6 - size of decompressed image
- * r11 - GOT start
+ * r5 - zImage base address
+ * r6 - GOT start
* ip - GOT end
*/
add r5, r5, r0
- add r11, r11, r0
+ add r6, r6, r0
add ip, ip, r0
#ifndef CONFIG_ZBOOT_ROM
/*
* Relocate all entries in the GOT table.
*/
-1: ldr r1, [r11, #0] @ relocate entries in the GOT
+1: ldr r1, [r6, #0] @ relocate entries in the GOT
add r1, r1, r0 @ table. This fixes up the
- str r1, [r11], #4 @ C references.
- cmp r11, ip
+ str r1, [r6], #4 @ C references.
+ cmp r6, ip
blo 1b
#else
* Relocate entries in the GOT table. We only relocate
* the entries that are outside the (relocated) BSS region.
*/
-1: ldr r1, [r11, #0] @ relocate entries in the GOT
+1: ldr r1, [r6, #0] @ relocate entries in the GOT
cmp r1, r2 @ entry < bss_start ||
cmphs r3, r1 @ _end < entry
addlo r1, r1, r0 @ table. This fixes up the
- str r1, [r11], #4 @ C references.
- cmp r11, ip
+ str r1, [r6], #4 @ C references.
+ cmp r6, ip
blo 1b
#endif
* Check to see if we will overwrite ourselves.
* r4 = final kernel address
* r5 = start of this image
- * r6 = size of decompressed image
* r2 = end of malloc space (and therefore this image)
* We basically want:
* r4 >= r2 -> OK
*/
cmp r4, r2
bhs wont_overwrite
- add r0, r4, r6
+ sub r3, sp, r5 @ > compressed kernel size
+ add r0, r4, r3, lsl #2 @ allow for 4x expansion
cmp r0, r5
bls wont_overwrite
* r1-r3 = unused
* r4 = kernel execution address
* r5 = decompressed kernel start
+ * r6 = processor ID
* r7 = architecture ID
* r8 = atags pointer
* r9-r12,r14 = corrupted
.word _end @ r3
.word zreladdr @ r4
.word _start @ r5
- .word _image_size @ r6
- .word _got_start @ r11
+ .word _got_start @ r6
.word _got_end @ ip
.word user_stack+4096 @ sp
LC1: .word reloc_end - reloc_start
*
* On entry,
* r4 = kernel execution address
+ * r6 = processor ID
* r7 = architecture number
* r8 = atags pointer
* r9 = run-time address of "start" (???)
* r1-r3 = unused
* r4 = kernel execution address
* r5 = decompressed kernel start
+ * r6 = processor ID
* r7 = architecture ID
* r8 = atags pointer
* r9-r12,r14 = corrupted
* r1 = corrupted
* r2 = corrupted
* r3 = block offset
- * r9 = corrupted
+ * r6 = corrupted
* r12 = corrupted
*/
call_cache_fn: adr r12, proc_types
#ifdef CONFIG_CPU_CP15
- mrc p15, 0, r9, c0, c0 @ get processor ID
+ mrc p15, 0, r6, c0, c0 @ get processor ID
#else
- ldr r9, =CONFIG_PROCESSOR_ID
+ ldr r6, =CONFIG_PROCESSOR_ID
#endif
1: ldr r1, [r12, #0] @ get value
ldr r2, [r12, #4] @ get mask
- eor r1, r1, r9 @ (real ^ match)
+ eor r1, r1, r6 @ (real ^ match)
tst r1, r2 @ & mask
ARM( addeq pc, r12, r3 ) @ call cache function
THUMB( addeq r12, r3 )
* Turn off the Cache and MMU. ARMv3 does not support
* reading the control register, but ARMv4 does.
*
- * On exit, r0, r1, r2, r3, r9, r12 corrupted
+ * On entry, r6 = processor ID
+ * On exit, r0, r1, r2, r3, r12 corrupted
* This routine must preserve: r4, r6, r7
*/
.align 5
/*
* Clean and flush the cache to maintain consistency.
*
+ * On entry,
+ * r6 = processor ID
* On exit,
- * r1, r2, r3, r9, r11, r12 corrupted
+ * r1, r2, r3, r11, r12 corrupted
* This routine must preserve:
* r0, r4, r5, r6, r7
*/
mov r2, #64*1024 @ default: 32K dcache size (*2)
mov r11, #32 @ default: 32 byte line size
mrc p15, 0, r3, c0, c0, 1 @ read cache type
- teq r3, r9 @ cache ID register present?
+ teq r3, r6 @ cache ID register present?
beq no_cache_id
mov r1, r3, lsr #18
and r1, r1, #7
_etext = .;
- /* Assume size of decompressed image is 4x the compressed image */
- _image_size = (_etext - _text) * 4;
-
_got_start = .;
.got : { *(.got) }
_got_end = .;
if (!save)
return 0;
+ spin_lock_irqsave(&sachip->lock, flags);
+
/*
* Ensure that the SA1111 is still here.
* FIXME: shouldn't do this here.
* First of all, wake up the chip.
*/
sa1111_wake(sachip);
-
- /*
- * Only lock for write ops. Also, sa1111_wake must be called with
- * released spinlock!
- */
- spin_lock_irqsave(&sachip->lock, flags);
-
sa1111_writel(0, sachip->base + SA1111_INTC + SA1111_INTEN0);
sa1111_writel(0, sachip->base + SA1111_INTC + SA1111_INTEN1);
@ Slightly optimised to avoid incrementing the pointer twice
usraccoff \instr, \reg, \ptr, \inc, 0, \cond, \abort
.if \rept == 2
- usraccoff \instr, \reg, \ptr, \inc, \inc, \cond, \abort
+ usraccoff \instr, \reg, \ptr, \inc, 4, \cond, \abort
.endif
add\cond \ptr, #\rept * \inc
*/
static inline int valid_user_regs(struct pt_regs *regs)
{
- unsigned long mode = regs->ARM_cpsr & MODE_MASK;
-
- /*
- * Always clear the F (FIQ) and A (delayed abort) bits
- */
- regs->ARM_cpsr &= ~(PSR_F_BIT | PSR_A_BIT);
-
- if ((regs->ARM_cpsr & PSR_I_BIT) == 0) {
- if (mode == USR_MODE)
- return 1;
- if (elf_hwcap & HWCAP_26BIT && mode == USR26_MODE)
- return 1;
+ if (user_mode(regs) && (regs->ARM_cpsr & PSR_I_BIT) == 0) {
+ regs->ARM_cpsr &= ~(PSR_F_BIT | PSR_A_BIT);
+ return 1;
}
/*
* Force CPSR to something logical...
*/
- regs->ARM_cpsr &= PSR_f | PSR_s | PSR_x | PSR_T_BIT | MODE32_BIT;
+ regs->ARM_cpsr &= PSR_f | PSR_s | (PSR_x & ~PSR_A_BIT) | PSR_T_BIT | MODE32_BIT;
if (!(elf_hwcap & HWCAP_26BIT))
regs->ARM_cpsr |= USR_MODE;
if (tlb_flag(TLB_V6_I_ASID))
asm("mcr p15, 0, %0, c8, c5, 2" : : "r" (asid) : "cc");
if (tlb_flag(TLB_V7_UIS_ASID))
-#ifdef CONFIG_ARM_ERRATA_720789
- asm("mcr p15, 0, %0, c8, c3, 0" : : "r" (zero) : "cc");
-#else
asm("mcr p15, 0, %0, c8, c3, 2" : : "r" (asid) : "cc");
-#endif
if (tlb_flag(TLB_BTB)) {
/* flush the branch target cache */
if (tlb_flag(TLB_V6_I_PAGE))
asm("mcr p15, 0, %0, c8, c5, 1" : : "r" (uaddr) : "cc");
if (tlb_flag(TLB_V7_UIS_PAGE))
-#ifdef CONFIG_ARM_ERRATA_720789
- asm("mcr p15, 0, %0, c8, c3, 3" : : "r" (uaddr & PAGE_MASK) : "cc");
-#else
asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (uaddr) : "cc");
-#endif
if (tlb_flag(TLB_BTB)) {
/* flush the branch target cache */
sys_sigreturn_wrapper:
add r0, sp, #S_OFF
- mov why, #0 @ prevent syscall restart handling
b sys_sigreturn
ENDPROC(sys_sigreturn_wrapper)
sys_rt_sigreturn_wrapper:
add r0, sp, #S_OFF
- mov why, #0 @ prevent syscall restart handling
b sys_rt_sigreturn
ENDPROC(sys_rt_sigreturn_wrapper)
{
insn_llret_3arg_fn_t *i_fn = (insn_llret_3arg_fn_t *)&p->ainsn.insn[0];
kprobe_opcode_t insn = p->opcode;
- long ppc = (long)p->addr + 8;
union reg_pair fnr;
int rd = (insn >> 12) & 0xf;
int rn = (insn >> 16) & 0xf;
int rm = insn & 0xf;
long rdv;
- long rnv = (rn == 15) ? ppc : regs->uregs[rn];
- long rmv = (rm == 15) ? ppc : regs->uregs[rm];
+ long rnv = regs->uregs[rn];
+ long rmv = regs->uregs[rm]; /* rm/rmv may be invalid, don't care. */
long cpsr = regs->ARM_cpsr;
fnr.dr = insnslot_llret_3arg_rflags(rnv, 0, rmv, cpsr, i_fn);
*/
.L_found:
#if __LINUX_ARM_ARCH__ >= 5
- rsb r0, r3, #0
- and r3, r3, r0
+ rsb r1, r3, #0
+ and r3, r3, r1
clz r3, r3
rsb r3, r3, #31
add r0, r2, r3
addeq r2, r2, #1
mov r0, r2
#endif
- cmp r1, r0 @ Clamp to maxbit
- movlo r0, r1
mov pc, lr
.end = AT91_BASE_SYS + AT91_DMA + SZ_512 - 1,
.flags = IORESOURCE_MEM,
},
- [1] = {
+ [2] = {
.start = AT91SAM9G45_ID_DMA,
.end = AT91SAM9G45_ID_DMA,
.flags = IORESOURCE_IRQ,
#define SYSTEM_REV_S_USES_VAUX3 0x8
static int board_keymap[] = {
- /*
- * Note that KEY(x, 8, KEY_XXX) entries represent "entrire row
- * connected to the ground" matrix state.
- */
KEY(0, 0, KEY_Q),
KEY(0, 1, KEY_O),
KEY(0, 2, KEY_P),
KEY(0, 4, KEY_BACKSPACE),
KEY(0, 6, KEY_A),
KEY(0, 7, KEY_S),
-
KEY(1, 0, KEY_W),
KEY(1, 1, KEY_D),
KEY(1, 2, KEY_F),
KEY(1, 5, KEY_J),
KEY(1, 6, KEY_K),
KEY(1, 7, KEY_L),
-
KEY(2, 0, KEY_E),
KEY(2, 1, KEY_DOT),
KEY(2, 2, KEY_UP),
KEY(2, 5, KEY_Z),
KEY(2, 6, KEY_X),
KEY(2, 7, KEY_C),
- KEY(2, 8, KEY_F9),
-
KEY(3, 0, KEY_R),
KEY(3, 1, KEY_V),
KEY(3, 2, KEY_B),
KEY(3, 5, KEY_SPACE),
KEY(3, 6, KEY_SPACE),
KEY(3, 7, KEY_LEFT),
-
KEY(4, 0, KEY_T),
KEY(4, 1, KEY_DOWN),
KEY(4, 2, KEY_RIGHT),
KEY(4, 4, KEY_LEFTCTRL),
KEY(4, 5, KEY_RIGHTALT),
KEY(4, 6, KEY_LEFTSHIFT),
- KEY(4, 8, KEY_F10),
-
KEY(5, 0, KEY_Y),
- KEY(5, 8, KEY_F11),
-
KEY(6, 0, KEY_U),
-
KEY(7, 0, KEY_I),
KEY(7, 1, KEY_F7),
KEY(7, 2, KEY_F8),
+ KEY(0xff, 2, KEY_F9),
+ KEY(0xff, 4, KEY_F10),
+ KEY(0xff, 5, KEY_F11),
};
static struct matrix_keymap_data board_map_data = {
#define _COLIBRI_H_
#include <net/ax88796.h>
-#include <mach/mfp.h>
/*
* common settings for all modules
bool "Support ARM11MPCore tile"
depends on MACH_REALVIEW_EB
select CPU_V6
- select ARCH_HAS_BARRIERS if SMP
help
Enable support for the ARM11MPCore tile on the Realview platform.
select CPU_V6
select ARM_GIC
select HAVE_PATA_PLATFORM
- select ARCH_HAS_BARRIERS if SMP
help
Include support for the ARM(R) RealView MPCore Platform Baseboard.
PB11MPCore is a platform with an on-board ARM11MPCore and has
+++ /dev/null
-/*
- * Barriers redefined for RealView ARM11MPCore platforms with L220 cache
- * controller to work around hardware errata causing the outer_sync()
- * operation to deadlock the system.
- */
-#define mb() dsb()
-#define rmb() dmb()
-#define wmb() mb()
{
asm("\
stmfd sp!, {r4-r9, lr} \n\
- mov ip, %2 \n\
+ mov ip, %0 \n\
1: mov lr, r1 \n\
ldmia r1!, {r2 - r9} \n\
pld [lr, #32] \n\
mcr p15, 0, ip, c7, c10, 4 @ drain WB\n\
ldmfd sp!, {r4-r9, pc}"
:
- : "r" (kto), "r" (kfrom), "I" (PAGE_SIZE));
+ : "I" (PAGE_SIZE));
}
void feroceon_copy_user_highpage(struct page *to, struct page *from,
{
asm("\
stmfd sp!, {r4, lr} @ 2\n\
- mov r2, %2 @ 1\n\
+ mov r2, %0 @ 1\n\
ldmia r1!, {r3, r4, ip, lr} @ 4\n\
1: mcr p15, 0, r0, c7, c6, 1 @ 1 invalidate D line\n\
stmia r0!, {r3, r4, ip, lr} @ 4\n\
mcr p15, 0, r1, c7, c10, 4 @ 1 drain WB\n\
ldmfd sp!, {r4, pc} @ 3"
:
- : "r" (kto), "r" (kfrom), "I" (PAGE_SIZE / 64));
+ : "I" (PAGE_SIZE / 64));
}
void v4wb_copy_user_highpage(struct page *to, struct page *from,
{
asm("\
stmfd sp!, {r4, lr} @ 2\n\
- mov r2, %2 @ 1\n\
+ mov r2, %0 @ 1\n\
ldmia r1!, {r3, r4, ip, lr} @ 4\n\
1: stmia r0!, {r3, r4, ip, lr} @ 4\n\
ldmia r1!, {r3, r4, ip, lr} @ 4+1\n\
mcr p15, 0, r2, c7, c7, 0 @ flush ID cache\n\
ldmfd sp!, {r4, pc} @ 3"
:
- : "r" (kto), "r" (kfrom), "I" (PAGE_SIZE / 64));
+ : "I" (PAGE_SIZE / 64));
}
void v4wt_copy_user_highpage(struct page *to, struct page *from,
{
asm("\
stmfd sp!, {r4, r5, lr} \n\
- mov lr, %2 \n\
+ mov lr, %0 \n\
\n\
pld [r1, #0] \n\
pld [r1, #32] \n\
\n\
ldmfd sp!, {r4, r5, pc}"
:
- : "r" (kto), "r" (kfrom), "I" (PAGE_SIZE / 64 - 1));
+ : "I" (PAGE_SIZE / 64 - 1));
}
void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
if (addr < TASK_SIZE)
return do_page_fault(addr, fsr, regs);
- if (user_mode(regs))
- goto bad_area;
-
index = pgd_index(addr);
/*
struct mxc_gpio_port *port =
container_of(chip, struct mxc_gpio_port, chip);
u32 l;
- unsigned long flags;
- spin_lock_irqsave(&port->lock, flags);
l = __raw_readl(port->base + GPIO_GDIR);
if (dir)
l |= 1 << offset;
else
l &= ~(1 << offset);
__raw_writel(l, port->base + GPIO_GDIR);
- spin_unlock_irqrestore(&port->lock, flags);
}
static void mxc_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
container_of(chip, struct mxc_gpio_port, chip);
void __iomem *reg = port->base + GPIO_DR;
u32 l;
- unsigned long flags;
- spin_lock_irqsave(&port->lock, flags);
l = (__raw_readl(reg) & (~(1 << offset))) | (value << offset);
__raw_writel(l, reg);
- spin_unlock_irqrestore(&port->lock, flags);
}
static int mxc_gpio_get(struct gpio_chip *chip, unsigned offset)
port[i].chip.base = i * 32;
port[i].chip.ngpio = 32;
- spin_lock_init(&port[i].lock);
-
/* its a serious configuration bug when it fails */
BUG_ON( gpiochip_add(&port[i].chip) < 0 );
#ifndef __ASM_ARCH_MXC_GPIO_H__
#define __ASM_ARCH_MXC_GPIO_H__
-#include <linux/spinlock.h>
#include <mach/hardware.h>
#include <asm-generic/gpio.h>
int virtual_irq_start;
struct gpio_chip chip;
u32 both_edges;
- spinlock_t lock;
};
int mxc_gpio_init(struct mxc_gpio_port*, int);
#ifdef CONFIG_VFPv3
@ d16 - d31 registers
.irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-1: mcrr p11, 3, r0, r1, c\dr @ fmdrr r0, r1, d\dr
+1: mcrr p11, 3, r1, r2, c\dr @ fmdrr r1, r2, d\dr
mov pc, lr
.org 1b + 8
.endr
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define SMP_CACHE_BYTES L1_CACHE_BYTES
-#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
-
#ifdef CONFIG_SMP
#define __cacheline_aligned
#else
#define L1_CACHE_SHIFT (CONFIG_FRV_L1_CACHE_SHIFT)
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
-#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
-
#define __cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
spin_unlock_irqrestore(&ioc->saved_lock, flags);
pide = sba_search_bitmap(ioc, dev, pages_needed, 0);
- if (unlikely(pide >= (ioc->res_size << 3))) {
- printk(KERN_WARNING "%s: I/O MMU @ %p is"
- "out of mapping resources, %u %u %lx\n",
- __func__, ioc->ioc_hpa, ioc->res_size,
- pages_needed, dma_get_seg_boundary(dev));
- return -1;
- }
+ if (unlikely(pide >= (ioc->res_size << 3)))
+ panic(__FILE__ ": I/O MMU @ %p is out of mapping resources\n",
+ ioc->ioc_hpa);
#else
- printk(KERN_WARNING "%s: I/O MMU @ %p is"
- "out of mapping resources, %u %u %lx\n",
- __func__, ioc->ioc_hpa, ioc->res_size,
- pages_needed, dma_get_seg_boundary(dev));
- return -1;
+ panic(__FILE__ ": I/O MMU @ %p is out of mapping resources\n",
+ ioc->ioc_hpa);
#endif
}
}
#endif
pide = sba_alloc_range(ioc, dev, size);
- if (pide < 0)
- return 0;
iovp = (dma_addr_t) pide << iovp_shift;
unsigned long dma_offset, dma_len; /* start/len of DMA stream */
int n_mappings = 0;
unsigned int max_seg_size = dma_get_max_seg_size(dev);
- int idx;
while (nents > 0) {
unsigned long vaddr = (unsigned long) sba_sg_address(startsg);
vcontig_sg->dma_length = vcontig_len;
dma_len = (dma_len + dma_offset + ~iovp_mask) & iovp_mask;
ASSERT(dma_len <= DMA_CHUNK_SIZE);
- idx = sba_alloc_range(ioc, dev, dma_len);
- if (idx < 0) {
- dma_sg->dma_length = 0;
- return -1;
- }
- dma_sg->dma_address = (dma_addr_t)(PIDE_FLAG | (idx << iovp_shift)
- | dma_offset);
+ dma_sg->dma_address = (dma_addr_t) (PIDE_FLAG
+ | (sba_alloc_range(ioc, dev, dma_len) << iovp_shift)
+ | dma_offset);
n_mappings++;
}
return n_mappings;
}
-static void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
- int nents, enum dma_data_direction dir,
- struct dma_attrs *attrs);
+
/**
* sba_map_sg - map Scatter/Gather list
* @dev: instance of PCI owned by the driver that's asking.
** Access to the virtual address is what forces a two pass algorithm.
*/
coalesced = sba_coalesce_chunks(ioc, dev, sglist, nents);
- if (coalesced < 0) {
- sba_unmap_sg_attrs(dev, sglist, nents, dir, attrs);
- return 0;
- }
/*
** Program the I/O Pdir
#define acpi_noirq 0 /* ACPI always enabled on IA64 */
#define acpi_pci_disabled 0 /* ACPI PCI always enabled on IA64 */
#define acpi_strict 1 /* no ACPI spec workarounds on IA64 */
-#define acpi_ht 0 /* no HT-only mode on IA64 */
#endif
#define acpi_processor_cstate_check(x) (x) /* no idle limits on IA64 :) */
static inline void disable_acpi(void) { }
}
static __inline__ void __user *
-arch_compat_alloc_user_space (long len)
+compat_alloc_user_space (long len)
{
struct pt_regs *regs = task_pt_regs(current);
return (void __user *) (((regs->r12 & 0xffffffff) & -16) - len);
;;
RSM_PSR_I(p0, r18, r19) // mask interrupt delivery
+ mov ar.ccv=0
andcm r14=r14,r17 // filter out SIGKILL & SIGSTOP
- mov r8=EINVAL // default to EINVAL
#ifdef CONFIG_SMP
- // __ticket_spin_trylock(r31)
- ld4 r17=[r31]
- ;;
- mov.m ar.ccv=r17
- extr.u r9=r17,17,15
- adds r19=1,r17
- extr.u r18=r17,0,15
- ;;
- cmp.eq p6,p7=r9,r18
+ mov r17=1
;;
-(p6) cmpxchg4.acq r9=[r31],r19,ar.ccv
-(p6) dep.z r20=r19,1,15 // next serving ticket for unlock
-(p7) br.cond.spnt.many .lock_contention
+ cmpxchg4.acq r18=[r31],r17,ar.ccv // try to acquire the lock
+ mov r8=EINVAL // default to EINVAL
;;
- cmp4.eq p0,p7=r9,r17
- adds r31=2,r31
-(p7) br.cond.spnt.many .lock_contention
ld8 r3=[r2] // re-read current->blocked now that we hold the lock
+ cmp4.ne p6,p0=r18,r0
+(p6) br.cond.spnt.many .lock_contention
;;
#else
ld8 r3=[r2] // re-read current->blocked now that we hold the lock
+ mov r8=EINVAL // default to EINVAL
#endif
add r18=IA64_TASK_PENDING_OFFSET+IA64_SIGPENDING_SIGNAL_OFFSET,r16
add r19=IA64_TASK_SIGNAL_OFFSET,r16
(p6) br.cond.spnt.few 1b // yes -> retry
#ifdef CONFIG_SMP
- // __ticket_spin_unlock(r31)
- st2.rel [r31]=r20
- mov r20=0 // i must not leak kernel bits...
+ st4.rel [r31]=r0 // release the lock
#endif
SSM_PSR_I(p0, p9, r31)
;;
.sig_pending:
#ifdef CONFIG_SMP
- // __ticket_spin_unlock(r31)
- st2.rel [r31]=r20 // release the lock
+ st4.rel [r31]=r0 // release the lock
#endif
SSM_PSR_I(p0, p9, r17)
;;
if (irq_prepare_move(irq, cpu))
return -1;
- get_cached_msi_msg(irq, &msg);
+ read_msi_msg(irq, &msg);
addr = msg.address_lo;
addr &= MSI_ADDR_DEST_ID_MASK;
{
}
-void update_vsyscall(struct timespec *wall, struct clocksource *c, u32 mult)
+void update_vsyscall(struct timespec *wall, struct clocksource *c)
{
unsigned long flags;
/* copy fsyscall clock data */
fsyscall_gtod_data.clk_mask = c->mask;
- fsyscall_gtod_data.clk_mult = mult;
+ fsyscall_gtod_data.clk_mult = c->mult;
fsyscall_gtod_data.clk_shift = c->shift;
fsyscall_gtod_data.clk_fsys_mmio = c->fsys_mmio;
fsyscall_gtod_data.clk_cycle_last = c->cycle_last;
{
struct kvm_memory_slot *memslot;
int r, i;
- long base;
- unsigned long n;
+ long n, base;
unsigned long *dirty_bitmap = (unsigned long *)(kvm->arch.vm_base +
offsetof(struct kvm_vm_data, kvm_mem_dirty_log));
if (!memslot->dirty_bitmap)
goto out;
- n = kvm_dirty_bitmap_bytes(memslot);
+ n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
base = memslot->base_gfn / BITS_PER_LONG;
for (i = 0; i < n/sizeof(long); ++i) {
struct kvm_dirty_log *log)
{
int r;
- unsigned long n;
+ int n;
struct kvm_memory_slot *memslot;
int is_dirty = 0;
if (is_dirty) {
kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot];
- n = kvm_dirty_bitmap_bytes(memslot);
+ n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
memset(memslot->dirty_bitmap, 0, n);
}
r = 0;
ia64_invala();
for (;;) {
- asm volatile ("ld8.c.nc %0=[%1]" : "=r"(serve) : "r"(&ss->serve) : "memory");
+ asm volatile ("ld4.c.nc %0=[%1]" : "=r"(serve) : "r"(&ss->serve) : "memory");
if (time_before(t, serve))
return;
cpu_relax();
* Release XIO resources for the old MSI PCI address
*/
- get_cached_msi_msg(irq, &msg);
+ read_msi_msg(irq, &msg);
sn_pdev = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
pdev = sn_pdev->pdi_linux_pcidev;
provider = SN_PCIDEV_BUSPROVIDER(pdev);
#define L1_CACHE_SHIFT 4
#define L1_CACHE_BYTES (1<< L1_CACHE_SHIFT)
-#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
-
#endif
all: linux.bin
-# With make 3.82 we cannot mix normal and wildcard targets
-BOOT_TARGETS1 = linux.bin linux.bin.gz
-BOOT_TARGETS2 = simpleImage.%
+BOOT_TARGETS = linux.bin linux.bin.gz simpleImage.%
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
-$(BOOT_TARGETS1): vmlinux
- $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
-$(BOOT_TARGETS2): vmlinux
+$(BOOT_TARGETS): vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
define archhelp
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %0, %1 # atomic64_add \n"
- " daddu %0, %2 \n"
+ " addu %0, %2 \n"
" scd %0, %1 \n"
" beqzl %0, 1b \n"
" .set mips0 \n"
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %0, %1 # atomic64_add \n"
- " daddu %0, %2 \n"
+ " addu %0, %2 \n"
" scd %0, %1 \n"
" beqz %0, 2f \n"
" .subsection 2 \n"
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %0, %1 # atomic64_sub \n"
- " dsubu %0, %2 \n"
+ " subu %0, %2 \n"
" scd %0, %1 \n"
" beqzl %0, 1b \n"
" .set mips0 \n"
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %0, %1 # atomic64_sub \n"
- " dsubu %0, %2 \n"
+ " subu %0, %2 \n"
" scd %0, %1 \n"
" beqz %0, 2f \n"
" .subsection 2 \n"
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %1, %2 # atomic64_add_return \n"
- " daddu %0, %1, %3 \n"
+ " addu %0, %1, %3 \n"
" scd %0, %2 \n"
" beqzl %0, 1b \n"
- " daddu %0, %1, %3 \n"
+ " addu %0, %1, %3 \n"
" .set mips0 \n"
: "=&r" (result), "=&r" (temp), "=m" (v->counter)
: "Ir" (i), "m" (v->counter)
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %1, %2 # atomic64_add_return \n"
- " daddu %0, %1, %3 \n"
+ " addu %0, %1, %3 \n"
" scd %0, %2 \n"
" beqz %0, 2f \n"
- " daddu %0, %1, %3 \n"
+ " addu %0, %1, %3 \n"
" .subsection 2 \n"
"2: b 1b \n"
" .previous \n"
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %1, %2 # atomic64_sub_return \n"
- " dsubu %0, %1, %3 \n"
+ " subu %0, %1, %3 \n"
" scd %0, %2 \n"
" beqzl %0, 1b \n"
- " dsubu %0, %1, %3 \n"
+ " subu %0, %1, %3 \n"
" .set mips0 \n"
: "=&r" (result), "=&r" (temp), "=m" (v->counter)
: "Ir" (i), "m" (v->counter)
__asm__ __volatile__(
" .set mips3 \n"
"1: lld %1, %2 # atomic64_sub_return \n"
- " dsubu %0, %1, %3 \n"
+ " subu %0, %1, %3 \n"
" scd %0, %2 \n"
" beqz %0, 2f \n"
- " dsubu %0, %1, %3 \n"
+ " subu %0, %1, %3 \n"
" .subsection 2 \n"
"2: b 1b \n"
" .previous \n"
return (u32)(unsigned long)uptr;
}
-static inline void __user *arch_compat_alloc_user_space(long len)
+static inline void __user *compat_alloc_user_space(long len)
{
struct pt_regs *regs = (struct pt_regs *)
((unsigned long) current_thread_info() + THREAD_SIZE - 32) - 1;
#if defined(CONFIG_SB1_PASS_1_WORKAROUNDS) || \
defined(CONFIG_SB1_PASS_2_WORKAROUNDS)
-#ifndef __ASSEMBLY__
-extern int sb1250_m3_workaround_needed(void);
-#endif
-
-#define BCM1250_M3_WAR sb1250_m3_workaround_needed()
+#define BCM1250_M3_WAR 1
#define SIBYTE_1956_WAR 1
#else
#define FPU_CSR_COND6 0x40000000 /* $fcc6 */
#define FPU_CSR_COND7 0x80000000 /* $fcc7 */
-/*
- * Bits 18 - 20 of the FPU Status Register will be read as 0,
- * and should be written as zero.
- */
-#define FPU_CSR_RSVD 0x001c0000
-
/*
* X the exception cause indicator
* E the exception enable
#define FPU_CSR_UDF_S 0x00000008
#define FPU_CSR_INE_S 0x00000004
-/* Bits 0 and 1 of FPU Status Register specify the rounding mode */
-#define FPU_CSR_RM 0x00000003
+/* rounding mode */
#define FPU_CSR_RN 0x0 /* nearest */
#define FPU_CSR_RZ 0x1 /* towards zero */
#define FPU_CSR_RU 0x2 /* towards +Infinity */
#define FPCREG_RID 0 /* $0 = revision id */
#define FPCREG_CSR 31 /* $31 = csr */
-/* Determine rounding mode from the RM bits of the FCSR */
-#define modeindex(v) ((v) & FPU_CSR_RM)
-
/* Convert Mips rounding mode (0..3) to IEEE library modes. */
static const unsigned char ieee_rm[4] = {
[FPU_CSR_RN] = IEEE754_RN,
(void *) (xcp->cp0_epc),
MIPSInst_RT(ir), value);
#endif
-
- /*
- * Don't write reserved bits,
- * and convert to ieee library modes
- */
- ctx->fcr31 = (value &
- ~(FPU_CSR_RSVD | FPU_CSR_RM)) |
- ieee_rm[modeindex(value)];
+ value &= (FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03);
+ ctx->fcr31 &= ~(FPU_CSR_FLUSH | FPU_CSR_ALL_E | FPU_CSR_ALL_S | 0x03);
+ /* convert to ieee library modes */
+ ctx->fcr31 |= (value & ~0x3) | ieee_rm[value & 0x3];
}
if ((ctx->fcr31 >> 5) & ctx->fcr31 & FPU_CSR_ALL_E) {
return SIGFPE;
enum label_id {
label_second_part = 1,
label_leave,
+#ifdef MODULE_START
+ label_module_alloc,
+#endif
label_vmalloc,
label_vmalloc_done,
label_tlbw_hazard,
UASM_L_LA(_second_part)
UASM_L_LA(_leave)
+#ifdef MODULE_START
+UASM_L_LA(_module_alloc)
+#endif
UASM_L_LA(_vmalloc)
UASM_L_LA(_vmalloc_done)
UASM_L_LA(_tlbw_hazard)
* create the plain linear handler
*/
if (bcm1250_m3_war()) {
- unsigned int segbits = 44;
-
- uasm_i_dmfc0(&p, K0, C0_BADVADDR);
- uasm_i_dmfc0(&p, K1, C0_ENTRYHI);
+ UASM_i_MFC0(&p, K0, C0_BADVADDR);
+ UASM_i_MFC0(&p, K1, C0_ENTRYHI);
uasm_i_xor(&p, K0, K0, K1);
- uasm_i_dsrl32(&p, K1, K0, 62 - 32);
- uasm_i_dsrl(&p, K0, K0, 12 + 1);
- uasm_i_dsll32(&p, K0, K0, 64 + 12 + 1 - segbits - 32);
- uasm_i_or(&p, K0, K0, K1);
+ UASM_i_SRL(&p, K0, K0, PAGE_SHIFT + 1);
uasm_il_bnez(&p, &r, K0, label_leave);
/* No need for uasm_i_nop */
}
} else {
#if defined(CONFIG_HUGETLB_PAGE)
const enum label_id ls = label_tlb_huge_update;
+#elif defined(MODULE_START)
+ const enum label_id ls = label_module_alloc;
#else
const enum label_id ls = label_vmalloc;
#endif
memset(relocs, 0, sizeof(relocs));
if (bcm1250_m3_war()) {
- unsigned int segbits = 44;
-
- uasm_i_dmfc0(&p, K0, C0_BADVADDR);
- uasm_i_dmfc0(&p, K1, C0_ENTRYHI);
+ UASM_i_MFC0(&p, K0, C0_BADVADDR);
+ UASM_i_MFC0(&p, K1, C0_ENTRYHI);
uasm_i_xor(&p, K0, K0, K1);
- uasm_i_dsrl32(&p, K1, K0, 62 - 32);
- uasm_i_dsrl(&p, K0, K0, 12 + 1);
- uasm_i_dsll32(&p, K0, K0, 64 + 12 + 1 - segbits - 32);
- uasm_i_or(&p, K0, K0, K1);
+ UASM_i_SRL(&p, K0, K0, PAGE_SHIFT + 1);
uasm_il_bnez(&p, &r, K0, label_leave);
/* No need for uasm_i_nop */
}
insn_dmtc0, insn_dsll, insn_dsll32, insn_dsra, insn_dsrl,
insn_dsrl32, insn_dsubu, insn_eret, insn_j, insn_jal, insn_jr,
insn_ld, insn_ll, insn_lld, insn_lui, insn_lw, insn_mfc0,
- insn_mtc0, insn_or, insn_ori, insn_pref, insn_rfe, insn_sc, insn_scd,
+ insn_mtc0, insn_ori, insn_pref, insn_rfe, insn_sc, insn_scd,
insn_sd, insn_sll, insn_sra, insn_srl, insn_subu, insn_sw,
insn_tlbp, insn_tlbwi, insn_tlbwr, insn_xor, insn_xori
};
{ insn_lw, M(lw_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
{ insn_mfc0, M(cop0_op, mfc_op, 0, 0, 0, 0), RT | RD | SET},
{ insn_mtc0, M(cop0_op, mtc_op, 0, 0, 0, 0), RT | RD | SET},
- { insn_or, M(spec_op, 0, 0, 0, 0, or_op), RS | RT | RD },
{ insn_ori, M(ori_op, 0, 0, 0, 0, 0), RS | RT | UIMM },
{ insn_pref, M(pref_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
{ insn_rfe, M(cop0_op, cop_op, 0, 0, 0, rfe_op), 0 },
I_u1u2u3(_mfc0)
I_u1u2u3(_mtc0)
I_u2u1u3(_ori)
-I_u3u1u2(_or)
I_u2s3u1(_pref)
I_0(_rfe)
I_u2s3u1(_sc)
Ip_u1u2u3(_mfc0);
Ip_u1u2u3(_mtc0);
Ip_u2u1u3(_ori);
-Ip_u3u1u2(_or);
Ip_u2s3u1(_pref);
Ip_0(_rfe);
Ip_u2s3u1(_sc);
iomem_resource.end &= 0xfffffffffULL; /* 64 GB */
ioport_resource.end = controller->io_resource->end;
- controller->io_map_base = mips_io_port_base;
-
register_pci_controller(controller);
}
static struct pci_controller pnx8550_controller = {
.pci_ops = &pnx8550_pci_ops,
- .io_map_base = PNX8550_PORT_BASE,
.io_resource = &pci_io_resource,
.mem_resource = &pci_mem_resource,
};
PNX8550_GLB2_ENAB_INTA_O = 0;
/* IO/MEM resources. */
- set_io_port_base(PNX8550_PORT_BASE);
+ set_io_port_base(KSEG1);
ioport_resource.start = 0;
ioport_resource.end = ~0;
iomem_resource.start = 0;
.pci_ops = &msp_pci_ops,
.mem_resource = &pci_mem_resource,
.mem_offset = 0,
- .io_map_base = MSP_PCI_IOSPACE_BASE,
.io_resource = &pci_io_resource,
.io_offset = 0
};
panic(ioremap_failed);
set_io_port_base(io_v_base);
- py_controller.io_map_base = io_v_base;
TITAN_WRITE(RM9000x2_OCD_LKM7, TITAN_READ(RM9000x2_OCD_LKM7) | 1);
ioport_resource.end = TITAN_IO_SIZE - 1;
return ret;
}
-int sb1250_m3_workaround_needed(void)
-{
- switch (soc_type) {
- case K_SYS_SOC_TYPE_BCM1250:
- case K_SYS_SOC_TYPE_BCM1250_ALT:
- case K_SYS_SOC_TYPE_BCM1250_ALT2:
- case K_SYS_SOC_TYPE_BCM1125:
- case K_SYS_SOC_TYPE_BCM1125H:
- return soc_pass < K_SYS_REVISION_BCM1250_C0;
-
- default:
- return 0;
- }
-}
-
static int __init setup_bcm112x(void)
{
int ret = 0;
#define L1_CACHE_DISPARITY L1_CACHE_NENTRIES * L1_CACHE_BYTES
#endif
-#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
-
/* data cache purge registers
* - read from the register to unconditionally purge that cache line
* - write address & 0xffffff00 to conditionally purge that cache line
return (u32)(unsigned long)uptr;
}
-static __inline__ void __user *arch_compat_alloc_user_space(long len)
+static __inline__ void __user *compat_alloc_user_space(long len)
{
struct pt_regs *regs = &current->thread.regs;
return (void __user *)regs->gr[30];
*/
int pdc_iodc_print(const unsigned char *str, unsigned count)
{
+ static int posx; /* for simple TAB-Simulation... */
unsigned int i;
unsigned long flags;
iodc_dbuf[i+0] = '\r';
iodc_dbuf[i+1] = '\n';
i += 2;
+ posx = 0;
goto print;
+ case '\t':
+ while (posx & 7) {
+ iodc_dbuf[i] = ' ';
+ i++, posx++;
+ }
+ break;
case '\b': /* BS */
- i--; /* overwrite last */
+ posx -= 2;
default:
iodc_dbuf[i] = str[i];
- i++;
+ i++, posx++;
break;
}
}
return SIGNALCODE(SIGFPE, FPE_FLTINV);
case DIVISIONBYZEROEXCEPTION:
update_trap_counts(Fpu_register, aflags, bflags, trap_counts);
- Clear_excp_register(exception_index);
return SIGNALCODE(SIGFPE, FPE_FLTDIV);
case INEXACTEXCEPTION:
update_trap_counts(Fpu_register, aflags, bflags, trap_counts);
# Default to zImage, override when needed
all: zImage
-# With make 3.82 we cannot mix normal and wildcard targets
-BOOT_TARGETS1 := zImage zImage.initrd uImage
-BOOT_TARGETS2 := zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
+BOOT_TARGETS = zImage zImage.initrd uImage zImage% dtbImage% treeImage.% cuImage.% simpleImage.%
-PHONY += $(BOOT_TARGETS1) $(BOOT_TARGETS2)
+PHONY += $(BOOT_TARGETS)
boot := arch/$(ARCH)/boot
zImage: relocs_check
endif
-$(BOOT_TARGETS1): vmlinux
- $(Q)$(MAKE) ARCH=ppc64 $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
-$(BOOT_TARGETS2): vmlinux
- $(Q)$(MAKE) ARCH=ppc64 $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
-
-
-bootwrapper_install:
+$(BOOT_TARGETS): vmlinux
$(Q)$(MAKE) ARCH=ppc64 $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
-%.dtb:
+bootwrapper_install %.dtb:
$(Q)$(MAKE) ARCH=ppc64 $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
define archhelp
return (u32)(unsigned long)uptr;
}
-static inline void __user *arch_compat_alloc_user_space(long len)
+static inline void __user *compat_alloc_user_space(long len)
{
struct pt_regs *regs = current->thread.regs;
unsigned long usp = regs->gpr[1];
*/
struct irq_chip;
+#ifdef CONFIG_PERF_EVENTS
+
+#ifdef CONFIG_PPC64
+static inline unsigned long test_perf_event_pending(void)
+{
+ unsigned long x;
+
+ asm volatile("lbz %0,%1(13)"
+ : "=r" (x)
+ : "i" (offsetof(struct paca_struct, perf_event_pending)));
+ return x;
+}
+
+static inline void set_perf_event_pending(void)
+{
+ asm volatile("stb %0,%1(13)" : :
+ "r" (1),
+ "i" (offsetof(struct paca_struct, perf_event_pending)));
+}
+
+static inline void clear_perf_event_pending(void)
+{
+ asm volatile("stb %0,%1(13)" : :
+ "r" (0),
+ "i" (offsetof(struct paca_struct, perf_event_pending)));
+}
+#endif /* CONFIG_PPC64 */
+
+#else /* CONFIG_PERF_EVENTS */
+
+static inline unsigned long test_perf_event_pending(void)
+{
+ return 0;
+}
+
+static inline void clear_perf_event_pending(void) {}
+#endif /* CONFIG_PERF_EVENTS */
+
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_HW_IRQ_H */
void eeh_sysfs_add_device(struct pci_dev *pdev);
void eeh_sysfs_remove_device(struct pci_dev *pdev);
-static inline const char *eeh_pci_name(struct pci_dev *pdev)
-{
- return pdev ? pci_name(pdev) : "<null>";
-}
-
#endif /* CONFIG_EEH */
#else /* CONFIG_PCI */
DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr));
DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled));
DEFINE(PACAHARDIRQEN, offsetof(struct paca_struct, hard_enabled));
+ DEFINE(PACAPERFPEND, offsetof(struct paca_struct, perf_event_pending));
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
#ifdef CONFIG_PPC_MM_SLICES
DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
2:
TRACE_AND_RESTORE_IRQ(r5);
+#ifdef CONFIG_PERF_EVENTS
+ /* check paca->perf_event_pending if we're enabling ints */
+ lbz r3,PACAPERFPEND(r13)
+ and. r3,r3,r5
+ beq 27f
+ bl .perf_event_do_pending
+27:
+#endif /* CONFIG_PERF_EVENTS */
+
/* extract EE bit and use it to restore paca->hard_enabled */
ld r3,_MSR(r1)
rldicl r4,r3,49,63 /* r0 = (r3 >> 15) & 1 */
/* Set thread priority to MEDIUM */
HMT_MEDIUM
- /* Initialize the kernel stack. Just a repeat for iSeries. */
- LOAD_REG_ADDR(r3, current_set)
- sldi r28,r24,3 /* get current_set[cpu#] */
- ldx r14,r3,r28
- addi r14,r14,THREAD_SIZE-STACK_FRAME_OVERHEAD
- std r14,PACAKSAVE(r13)
-
/* Do early setup for that CPU (stab, slb, hash table pointer) */
bl .early_setup_secondary
- /*
- * setup the new stack pointer, but *don't* use this until
- * translation is on.
- */
- mr r1, r14
+ /* Initialize the kernel stack. Just a repeat for iSeries. */
+ LOAD_REG_ADDR(r3, current_set)
+ sldi r28,r24,3 /* get current_set[cpu#] */
+ ldx r1,r3,r28
+ addi r1,r1,THREAD_SIZE-STACK_FRAME_OVERHEAD
+ std r1,PACAKSAVE(r13)
/* Clear backchain so we get nice backtraces */
li r7,0
#include <linux/bootmem.h>
#include <linux/pci.h>
#include <linux/debugfs.h>
+#include <linux/perf_event.h>
#include <asm/uaccess.h>
#include <asm/system.h>
}
#endif /* CONFIG_PPC_STD_MMU_64 */
+ if (test_perf_event_pending()) {
+ clear_perf_event_pending();
+ perf_event_do_pending();
+ }
+
/*
* if (get_paca()->hard_enabled) return;
* But again we need to take care that gcc gets hard_enabled directly
switch (unit) {
case PM_VPU:
mask = 0x4c; /* byte 0 bits 2,3,6 */
- break;
case PM_LSU0:
/* byte 2 bits 0,2,3,4,6; all of byte 1 */
mask = 0x085dff00;
- break;
case PM_LSU1L:
mask = 0x50 << 24; /* byte 3 bits 4,6 */
break;
}
#endif /* CONFIG_PPC_ISERIES */
-#ifdef CONFIG_PERF_EVENTS
-
-/*
- * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
- */
-#ifdef CONFIG_PPC64
-static inline unsigned long test_perf_event_pending(void)
-{
- unsigned long x;
-
- asm volatile("lbz %0,%1(13)"
- : "=r" (x)
- : "i" (offsetof(struct paca_struct, perf_event_pending)));
- return x;
-}
-
-static inline void set_perf_event_pending_flag(void)
-{
- asm volatile("stb %0,%1(13)" : :
- "r" (1),
- "i" (offsetof(struct paca_struct, perf_event_pending)));
-}
-
-static inline void clear_perf_event_pending(void)
-{
- asm volatile("stb %0,%1(13)" : :
- "r" (0),
- "i" (offsetof(struct paca_struct, perf_event_pending)));
-}
-
-#else /* 32-bit */
-
+#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_PPC32)
DEFINE_PER_CPU(u8, perf_event_pending);
-#define set_perf_event_pending_flag() __get_cpu_var(perf_event_pending) = 1
-#define test_perf_event_pending() __get_cpu_var(perf_event_pending)
-#define clear_perf_event_pending() __get_cpu_var(perf_event_pending) = 0
-
-#endif /* 32 vs 64 bit */
-
void set_perf_event_pending(void)
{
- preempt_disable();
- set_perf_event_pending_flag();
+ get_cpu_var(perf_event_pending) = 1;
set_dec(1);
- preempt_enable();
+ put_cpu_var(perf_event_pending);
}
-#else /* CONFIG_PERF_EVENTS */
+#define test_perf_event_pending() __get_cpu_var(perf_event_pending)
+#define clear_perf_event_pending() __get_cpu_var(perf_event_pending) = 0
+
+#else /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */
#define test_perf_event_pending() 0
#define clear_perf_event_pending()
-#endif /* CONFIG_PERF_EVENTS */
+#endif /* CONFIG_PERF_EVENTS && CONFIG_PPC32 */
/*
* For iSeries shared processors, we have to let the hypervisor
set_dec(DECREMENTER_MAX);
#ifdef CONFIG_PPC32
+ if (test_perf_event_pending()) {
+ clear_perf_event_pending();
+ perf_event_do_pending();
+ }
if (atomic_read(&ppc_n_lost_interrupts) != 0)
do_IRQ(regs);
#endif
calculate_steal_time();
- if (test_perf_event_pending()) {
- clear_perf_event_pending();
- perf_event_do_pending();
- }
-
#ifdef CONFIG_PPC_ISERIES
if (firmware_has_feature(FW_FEATURE_ISERIES))
get_lppaca()->int_dword.fields.decr_int = 0;
return (cycle_t)get_tb();
}
-void update_vsyscall(struct timespec *wall_time, struct clocksource *clock,
- u32 mult)
+void update_vsyscall(struct timespec *wall_time, struct clocksource *clock)
{
u64 t2x, stamp_xsec;
/* XXX this assumes clock->shift == 22 */
/* 4611686018 ~= 2^(20+64-22) / 1e9 */
- t2x = (u64) mult * 4611686018ULL;
+ t2x = (u64) clock->mult * 4611686018ULL;
stamp_xsec = (u64) xtime.tv_nsec * XSEC_PER_SEC;
do_div(stamp_xsec, 1000000000);
stamp_xsec += (u64) xtime.tv_sec * XSEC_PER_SEC;
{
struct kvm_vcpu *vcpu;
vcpu = kvmppc_core_vcpu_create(kvm, id);
- if (!IS_ERR(vcpu))
- kvmppc_create_vcpu_debugfs(vcpu, id);
+ kvmppc_create_vcpu_debugfs(vcpu, id);
return vcpu;
}
_GLOBAL(strncmp)
PPC_LCMPI r5,0
- ble- 2f
+ beqlr
mtctr r5
addi r5,r3,-1
addi r4,r4,-1
beqlr 1
bdnzt eq,1b
blr
-2: li r3,0
- blr
_GLOBAL(strlen)
addi r4,r3,-1
TLBCAM[index].MAS3 = (phys & PAGE_MASK) | MAS3_SX | MAS3_SR;
TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_SW : 0);
+#ifndef CONFIG_KGDB /* want user access for breakpoints */
if (flags & _PAGE_USER) {
TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
}
+#else
+ TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
+ TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
+#endif
tlbcam_addrs[index].start = virt;
tlbcam_addrs[index].limit = virt + size - 1;
index = ENTRIES-1;
/* make sure index is valid */
- if ((index >= ENTRIES) || (index < 0))
+ if ((index > ENTRIES) || (index < 0))
index = ENTRIES-1;
return initial_lfsr[index];
pdn->eeh_mode & EEH_MODE_NOCHECK) {
ignored_check++;
pr_debug("EEH: Ignored check (%x) for %s %s\n",
- pdn->eeh_mode, eeh_pci_name(dev), dn->full_name);
+ pdn->eeh_mode, pci_name (dev), dn->full_name);
return 0;
}
printk (KERN_ERR "EEH: %d reads ignored for recovering device at "
"location=%s driver=%s pci addr=%s\n",
pdn->eeh_check_count, location,
- dev->driver->name, eeh_pci_name(dev));
+ dev->driver->name, pci_name(dev));
printk (KERN_ERR "EEH: Might be infinite loop in %s driver\n",
dev->driver->name);
dump_stack();
location = location ? location : "unknown";
printk(KERN_ERR "EEH: Error: Cannot find partition endpoint "
"for location=%s pci addr=%s\n",
- location, eeh_pci_name(event->dev));
+ location, pci_name(event->dev));
return NULL;
}
pci_str = pci_name (frozen_pdn->pcidev);
drv_str = pcid_name (frozen_pdn->pcidev);
} else {
- pci_str = eeh_pci_name(event->dev);
+ pci_str = pci_name (event->dev);
drv_str = pcid_name (event->dev);
}
eeh_mark_slot(event->dn, EEH_MODE_RECOVERING);
printk(KERN_INFO "EEH: Detected PCI bus error on device %s\n",
- eeh_pci_name(event->dev));
+ pci_name(event->dev));
pdn = handle_eeh_events(event);
for(;;);
}
+static int qcss_tok; /* query-cpu-stopped-state token */
+
+/* Get state of physical CPU.
+ * Return codes:
+ * 0 - The processor is in the RTAS stopped state
+ * 1 - stop-self is in progress
+ * 2 - The processor is not in the RTAS stopped state
+ * -1 - Hardware Error
+ * -2 - Hardware Busy, Try again later.
+ */
+static int query_cpu_stopped(unsigned int pcpu)
+{
+ int cpu_status, status;
+
+ status = rtas_call(qcss_tok, 1, 2, &cpu_status, pcpu);
+ if (status != 0) {
+ printk(KERN_ERR
+ "RTAS query-cpu-stopped-state failed: %i\n", status);
+ return status;
+ }
+
+ return cpu_status;
+}
+
static int pseries_cpu_disable(void)
{
int cpu = smp_processor_id();
unsigned int pcpu = get_hard_smp_processor_id(cpu);
for (tries = 0; tries < 25; tries++) {
- cpu_status = smp_query_cpu_stopped(pcpu);
- if (cpu_status == QCSS_STOPPED ||
- cpu_status == QCSS_HARDWARE_ERROR)
+ cpu_status = query_cpu_stopped(pcpu);
+ if (cpu_status == 0 || cpu_status == -1)
break;
cpu_relax();
}
{
struct device_node *np;
const char *typep;
- int qcss_tok;
for_each_node_by_name(np, "interrupt-controller") {
typep = of_get_property(np, "compatible", NULL);
#include <asm/hvcall.h>
#include <asm/page.h>
-/* Get state of physical CPU from query_cpu_stopped */
-int smp_query_cpu_stopped(unsigned int pcpu);
-#define QCSS_STOPPED 0
-#define QCSS_STOPPING 1
-#define QCSS_NOT_STOPPED 2
-#define QCSS_HARDWARE_ERROR -1
-#define QCSS_HARDWARE_BUSY -2
-
static inline long poll_pending(void)
{
return plpar_hcall_norets(H_POLL_PENDING);
*/
static cpumask_t of_spin_map;
-/* Query where a cpu is now. Return codes #defined in plpar_wrappers.h */
-int smp_query_cpu_stopped(unsigned int pcpu)
-{
- int cpu_status, status;
- int qcss_tok = rtas_token("query-cpu-stopped-state");
-
- if (qcss_tok == RTAS_UNKNOWN_SERVICE) {
- printk(KERN_INFO "Firmware doesn't support "
- "query-cpu-stopped-state\n");
- return QCSS_HARDWARE_ERROR;
- }
-
- status = rtas_call(qcss_tok, 1, 2, &cpu_status, pcpu);
- if (status != 0) {
- printk(KERN_ERR
- "RTAS query-cpu-stopped-state failed: %i\n", status);
- return status;
- }
-
- return cpu_status;
-}
-
/**
* smp_startup_cpu() - start the given cpu
*
pcpu = get_hard_smp_processor_id(lcpu);
- /* Check to see if the CPU out of FW already for kexec */
- if (smp_query_cpu_stopped(pcpu) == QCSS_NOT_STOPPED){
- cpu_set(lcpu, of_spin_map);
- return 1;
- }
-
/* Fixup atomic count: it exited inside IRQ handler. */
task_thread_info(paca[lcpu].__current)->preempt_count = 0;
#endif
-static inline void __user *arch_compat_alloc_user_space(long len)
+static inline void __user *compat_alloc_user_space(long len)
{
unsigned long stack;
unsigned long long idle_count;
unsigned long long idle_enter;
unsigned long long idle_time;
- int nohz_delay;
};
DECLARE_PER_CPU(struct s390_idle_data, s390_idle);
vtime_start_cpu();
}
-static inline int s390_nohz_delay(int cpu)
-{
- return per_cpu(s390_idle, cpu).nohz_delay != 0;
-}
-
-#define arch_needs_cpu(cpu) s390_nohz_delay(cpu)
-
#endif /* _S390_CPUTIME_H */
static int notrace s390_revalidate_registers(struct mci *mci)
{
int kill_task;
+ u64 tmpclock;
u64 zero;
void *fpt_save_area, *fpt_creg_save_area;
: "0", "cc");
#endif
/* Revalidate clock comparator register */
- if (S390_lowcore.clock_comparator == -1)
- set_clock_comparator(get_clock());
- else
- set_clock_comparator(S390_lowcore.clock_comparator);
+ asm volatile(
+ " stck 0(%1)\n"
+ " sckc 0(%1)"
+ : "=m" (tmpclock) : "a" (&(tmpclock)) : "cc", "memory");
+
/* Check if old PSW is valid */
if (!mci->wp)
/*
asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
{
- long ret = 0;
+ long ret;
/* Do the secure computing check first. */
secure_computing(regs->gprs[2]);
* The sysc_tracesys code in entry.S stored the system
* call number to gprs[2].
*/
+ ret = regs->gprs[2];
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
(tracehook_report_syscall_entry(regs) ||
regs->gprs[2] >= NR_syscalls)) {
regs->gprs[2], regs->orig_gpr2,
regs->gprs[3], regs->gprs[4],
regs->gprs[5]);
- return ret ?: regs->gprs[2];
+ return ret;
}
asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)
/* Serve timer interrupts first. */
clock_comparator_work();
kstat_cpu(smp_processor_id()).irqs[EXTERNAL_INTERRUPT]++;
- if (code != 0x1004)
- __get_cpu_var(s390_idle).nohz_delay = 1;
index = ext_hash(code);
for (p = ext_int_hash[index]; p; p = p->next) {
if (likely(p->code == code))
return &clocksource_tod;
}
-void update_vsyscall(struct timespec *wall_time, struct clocksource *clock,
- u32 mult)
+void update_vsyscall(struct timespec *wall_time, struct clocksource *clock)
{
if (clock != &clocksource_tod)
return;
/* Wait for external, I/O or machine check interrupt. */
psw.mask = psw_kernel_bits | PSW_MASK_WAIT | PSW_MASK_IO | PSW_MASK_EXT;
- idle->nohz_delay = 0;
-
/* Check if the CPU timer needs to be reprogrammed. */
if (vq->do_spt) {
__u64 vmax = VTIMER_MAX_SLICE;
rc = kvm_vcpu_init(vcpu, kvm, id);
if (rc)
- goto out_free_sie_block;
+ goto out_free_cpu;
VM_EVENT(kvm, 3, "create cpu %d at %p, sie block at %p", id, vcpu,
vcpu->arch.sie_block);
return vcpu;
-out_free_sie_block:
- free_page((unsigned long)(vcpu->arch.sie_block));
out_free_cpu:
kfree(vcpu);
out_nomem:
{
unsigned long mask, cr0, cr0_saved;
u64 clock_saved;
- u64 end;
- mask = psw_kernel_bits | PSW_MASK_WAIT | PSW_MASK_EXT;
- end = get_clock() + (usecs << 12);
clock_saved = local_tick_disable();
+ set_clock_comparator(get_clock() + (usecs << 12));
__ctl_store(cr0_saved, 0, 0);
cr0 = (cr0_saved & 0xffff00e0) | 0x00000800;
__ctl_load(cr0 , 0, 0);
+ mask = psw_kernel_bits | PSW_MASK_WAIT | PSW_MASK_EXT;
lockdep_off();
- do {
- set_clock_comparator(end);
- trace_hardirqs_on();
- __load_psw_mask(mask);
- local_irq_disable();
- } while (get_clock() < end);
+ trace_hardirqs_on();
+ __load_psw_mask(mask);
+ local_irq_disable();
lockdep_on();
__ctl_load(cr0_saved, 0, 0);
local_tick_enable(clock_saved);
output_addr = (CONFIG_MEMORY_START + 0x2000);
#else
output_addr = PHYSADDR((unsigned long)&_text+PAGE_SIZE);
-#if defined(CONFIG_29BIT) || defined(CONFIG_PMB_FIXED)
+#ifdef CONFIG_29BIT
output_addr |= P2SEG;
#endif
#endif
#define VSYSCALL_AUX_ENT \
if (vdso_enabled) \
- NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE); \
- else \
- NEW_AUX_ENT(AT_IGNORE, 0);
+ NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE);
#else
#define VSYSCALL_AUX_ENT
#endif /* CONFIG_VSYSCALL */
#ifdef CONFIG_SH_FPU
#define FPU_AUX_ENT NEW_AUX_ENT(AT_FPUCW, FPSCR_INIT)
#else
-#define FPU_AUX_ENT NEW_AUX_ENT(AT_IGNORE, 0)
+#define FPU_AUX_ENT
#endif
extern int l1i_cache_shape, l1d_cache_shape, l2_cache_shape;
unsigned int cpu;
struct mm_struct *mm = &init_mm;
- enable_mmu();
atomic_inc(&mm->mm_count);
atomic_inc(&mm->mm_users);
current->active_mm = mm;
#define atomic64_set(v, i) (((v)->counter) = i)
extern void atomic_add(int, atomic_t *);
-extern void atomic64_add(long, atomic64_t *);
+extern void atomic64_add(int, atomic64_t *);
extern void atomic_sub(int, atomic_t *);
-extern void atomic64_sub(long, atomic64_t *);
+extern void atomic64_sub(int, atomic64_t *);
extern int atomic_add_ret(int, atomic_t *);
-extern long atomic64_add_ret(long, atomic64_t *);
+extern int atomic64_add_ret(int, atomic64_t *);
extern int atomic_sub_ret(int, atomic_t *);
-extern long atomic64_sub_ret(long, atomic64_t *);
+extern int atomic64_sub_ret(int, atomic64_t *);
#define atomic_dec_return(v) atomic_sub_ret(1, v)
#define atomic64_dec_return(v) atomic64_sub_ret(1, v)
((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
-static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
+static inline int atomic64_add_unless(atomic64_t *v, long a, long u)
{
long c, old;
c = atomic64_read(v);
return (u32)(unsigned long)uptr;
}
-static inline void __user *arch_compat_alloc_user_space(long len)
+static inline void __user *compat_alloc_user_space(long len)
{
struct pt_regs *regs = current_thread_info()->kregs;
unsigned long usp = regs->u_regs[UREG_I6];
#include <asm/page.h> /* IO address mapping routines need this */
#include <asm/system.h>
-#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
+#define page_to_phys(page) (((page) - mem_map) << PAGE_SHIFT)
static inline u32 flip_dword (u32 l)
{
#define ioread8(X) readb(X)
#define ioread16(X) readw(X)
-#define ioread16be(X) __raw_readw(X)
#define ioread32(X) readl(X)
-#define ioread32be(X) __raw_readl(X)
#define iowrite8(val,X) writeb(val,X)
#define iowrite16(val,X) writew(val,X)
-#define iowrite16be(val,X) __raw_writew(val,X)
#define iowrite32(val,X) writel(val,X)
-#define iowrite32be(val,X) __raw_writel(val,X)
static inline void ioread8_rep(void __iomem *port, void *buf, unsigned long count)
{
#define ioread8(X) readb(X)
#define ioread16(X) readw(X)
-#define ioread16be(X) __raw_readw(X)
#define ioread32(X) readl(X)
-#define ioread32be(X) __raw_readl(X)
#define iowrite8(val,X) writeb(val,X)
#define iowrite16(val,X) writew(val,X)
-#define iowrite16be(val,X) __raw_writew(val,X)
#define iowrite32(val,X) writel(val,X)
-#define iowrite32be(val,X) __raw_writel(val,X)
/* Create a virtual mapping cookie for an IO port range */
extern void __iomem *ioport_map(unsigned long port, unsigned int nr);
char *buf, int buflen);
/* Retain physical memory to the caller across soft resets. */
-extern int prom_retain(const char *name, unsigned long size,
- unsigned long align, unsigned long *paddr);
+extern unsigned long prom_retain(const char *name,
+ unsigned long pa_low, unsigned long pa_high,
+ long size, long align);
/* Load explicit I/D TLB entries into the calling processor. */
extern long prom_itlb_load(unsigned long index,
extern int prom_ihandle2path(int handle, char *buffer, int bufsize);
/* Client interface level routines. */
-extern void p1275_cmd_direct(unsigned long *);
+extern long p1275_cmd(const char *, long, ...);
+
+#if 0
+#define P1275_SIZE(x) ((((long)((x) / 32)) << 32) | (x))
+#else
+#define P1275_SIZE(x) x
+#endif
+
+/* We support at most 16 input and 1 output argument */
+#define P1275_ARG_NUMBER 0
+#define P1275_ARG_IN_STRING 1
+#define P1275_ARG_OUT_BUF 2
+#define P1275_ARG_OUT_32B 3
+#define P1275_ARG_IN_FUNCTION 4
+#define P1275_ARG_IN_BUF 5
+#define P1275_ARG_IN_64B 6
+
+#define P1275_IN(x) ((x) & 0xf)
+#define P1275_OUT(x) (((x) << 4) & 0xf0)
+#define P1275_INOUT(i,o) (P1275_IN(i)|P1275_OUT(o))
+#define P1275_ARG(n,x) ((x) << ((n)*3 + 8))
#endif /* !(__SPARC64_OPLIB_H) */
#define phys_to_virt __va
#define ARCH_PFN_OFFSET (pfn_base)
-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_page(kaddr) (mem_map + ((((unsigned long)(kaddr)-PAGE_OFFSET)>>PAGE_SHIFT)))
#define pfn_valid(pfn) (((pfn) >= (pfn_base)) && (((pfn)-(pfn_base)) < max_mapnr))
#define virt_addr_valid(kaddr) ((((unsigned long)(kaddr)-PAGE_OFFSET)>>PAGE_SHIFT) < max_mapnr)
.name = "parallel",
.compatible = "ns87317-ecpp",
},
- {
- .name = "parallel",
- .compatible = "pnpALI,1533,3",
- },
{},
};
#define RWSEM_UNLOCKED_VALUE 0x00000000
#define RWSEM_ACTIVE_BIAS 0x00000001
#define RWSEM_ACTIVE_MASK 0x0000ffff
-#define RWSEM_WAITING_BIAS (-0x00010000)
+#define RWSEM_WAITING_BIAS 0xffff0000
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
ino_t st_ino;
mode_t st_mode;
short st_nlink;
- unsigned short st_uid;
- unsigned short st_gid;
+ uid_t st_uid;
+ gid_t st_gid;
unsigned short st_rdev;
off_t st_size;
time_t st_atime;
p->leds_resource.start = (unsigned long)
(p->clock_regs + CLOCK_CTRL);
- p->leds_resource.end = p->leds_resource.start;
+ p->leds_resource.end = p->leds_resource.end;
p->leds_resource.name = "leds";
p->leds_pdev.name = "sunfire-clockboard-leds";
if (!p->central) {
p->leds_resource.start = (unsigned long)
(p->pregs + FHC_PREGS_CTRL);
- p->leds_resource.end = p->leds_resource.start;
+ p->leds_resource.end = p->leds_resource.end;
p->leds_resource.name = "leds";
p->leds_pdev.name = "sunfire-fhc-leds";
* Set some valid stack frames to give to the child.
*/
childstack = (struct sparc_stackf __user *)
- (sp & ~0xfUL);
+ (sp & ~0x7UL);
parentstack = (struct sparc_stackf __user *)
regs->u_regs[UREG_FP];
} else
__get_user(fp, &(((struct reg_window32 __user *)psp)->ins[6]));
- /* Now align the stack as this is mandatory in the Sparc ABI
- * due to how register windows work. This hides the
- * restriction from thread libraries etc.
+ /* Now 8-byte align the stack as this is mandatory in the
+ * Sparc ABI due to how register windows work. This hides
+ * the restriction from thread libraries etc. -DaveM
*/
- csp &= ~15UL;
+ csp &= ~7UL;
distance = fp - psp;
rval = (csp - distance);
};
/* Align macros */
-#define SF_ALIGNEDSZ (((sizeof(struct signal_frame32) + 15) & (~15)))
-#define RT_ALIGNEDSZ (((sizeof(struct rt_signal_frame32) + 15) & (~15)))
+#define SF_ALIGNEDSZ (((sizeof(struct signal_frame32) + 7) & (~7)))
+#define RT_ALIGNEDSZ (((sizeof(struct rt_signal_frame32) + 7) & (~7)))
int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
{
sp = current->sas_ss_sp + current->sas_ss_size;
}
- sp -= framesize;
-
/* Always align the stack frame. This handles two cases. First,
* sigaltstack need not be mindful of platform specific stack
* alignment. Second, if we took this signal because the stack
* is not aligned properly, we'd like to take the signal cleanly
* and report that.
*/
- sp &= ~15UL;
+ sp &= ~7UL;
- return (void __user *) sp;
+ return (void __user *)(sp - framesize);
}
static int save_fpu_state32(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
return err;
}
-/* The I-cache flush instruction only works in the primary ASI, which
- * right now is the nucleus, aka. kernel space.
- *
- * Therefore we have to kick the instructions out using the kernel
- * side linear mapping of the physical address backing the user
- * instructions.
- */
-static void flush_signal_insns(unsigned long address)
-{
- unsigned long pstate, paddr;
- pte_t *ptep, pte;
- pgd_t *pgdp;
- pud_t *pudp;
- pmd_t *pmdp;
-
- /* Commit all stores of the instructions we are about to flush. */
- wmb();
-
- /* Disable cross-call reception. In this way even a very wide
- * munmap() on another cpu can't tear down the page table
- * hierarchy from underneath us, since that can't complete
- * until the IPI tlb flush returns.
- */
-
- __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
- __asm__ __volatile__("wrpr %0, %1, %%pstate"
- : : "r" (pstate), "i" (PSTATE_IE));
-
- pgdp = pgd_offset(current->mm, address);
- if (pgd_none(*pgdp))
- goto out_irqs_on;
- pudp = pud_offset(pgdp, address);
- if (pud_none(*pudp))
- goto out_irqs_on;
- pmdp = pmd_offset(pudp, address);
- if (pmd_none(*pmdp))
- goto out_irqs_on;
-
- ptep = pte_offset_map(pmdp, address);
- pte = *ptep;
- if (!pte_present(pte))
- goto out_unmap;
-
- paddr = (unsigned long) page_address(pte_page(pte));
-
- __asm__ __volatile__("flush %0 + %1"
- : /* no outputs */
- : "r" (paddr),
- "r" (address & (PAGE_SIZE - 1))
- : "memory");
-
-out_unmap:
- pte_unmap(ptep);
-out_irqs_on:
- __asm__ __volatile__("wrpr %0, 0x0, %%pstate" : : "r" (pstate));
-
-}
-
-static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
- int signo, sigset_t *oldset)
+static void setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset)
{
struct signal_frame32 __user *sf;
int sigframe_size;
if (ka->ka_restorer) {
regs->u_regs[UREG_I7] = (unsigned long)ka->ka_restorer;
} else {
+ /* Flush instruction space. */
unsigned long address = ((unsigned long)&(sf->insns[0]));
+ pgd_t *pgdp = pgd_offset(current->mm, address);
+ pud_t *pudp = pud_offset(pgdp, address);
+ pmd_t *pmdp = pmd_offset(pudp, address);
+ pte_t *ptep;
+ pte_t pte;
regs->u_regs[UREG_I7] = (unsigned long) (&(sf->insns[0]) - 2);
if (err)
goto sigsegv;
- flush_signal_insns(address);
+ preempt_disable();
+ ptep = pte_offset_map(pmdp, address);
+ pte = *ptep;
+ if (pte_present(pte)) {
+ unsigned long page = (unsigned long)
+ page_address(pte_page(pte));
+
+ wmb();
+ __asm__ __volatile__("flush %0 + %1"
+ : /* no outputs */
+ : "r" (page),
+ "r" (address & (PAGE_SIZE - 1))
+ : "memory");
+ }
+ pte_unmap(ptep);
+ preempt_enable();
}
- return 0;
+ return;
sigill:
do_exit(SIGILL);
- return -EINVAL;
-
sigsegv:
force_sigsegv(signo, current);
- return -EFAULT;
}
-static int setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
- unsigned long signr, sigset_t *oldset,
- siginfo_t *info)
+static void setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ unsigned long signr, sigset_t *oldset,
+ siginfo_t *info)
{
struct rt_signal_frame32 __user *sf;
int sigframe_size;
if (ka->ka_restorer)
regs->u_regs[UREG_I7] = (unsigned long)ka->ka_restorer;
else {
+ /* Flush instruction space. */
unsigned long address = ((unsigned long)&(sf->insns[0]));
+ pgd_t *pgdp = pgd_offset(current->mm, address);
+ pud_t *pudp = pud_offset(pgdp, address);
+ pmd_t *pmdp = pmd_offset(pudp, address);
+ pte_t *ptep;
regs->u_regs[UREG_I7] = (unsigned long) (&(sf->insns[0]) - 2);
if (err)
goto sigsegv;
- flush_signal_insns(address);
+ preempt_disable();
+ ptep = pte_offset_map(pmdp, address);
+ if (pte_present(*ptep)) {
+ unsigned long page = (unsigned long)
+ page_address(pte_page(*ptep));
+
+ wmb();
+ __asm__ __volatile__("flush %0 + %1"
+ : /* no outputs */
+ : "r" (page),
+ "r" (address & (PAGE_SIZE - 1))
+ : "memory");
+ }
+ pte_unmap(ptep);
+ preempt_enable();
}
- return 0;
+ return;
sigill:
do_exit(SIGILL);
- return -EINVAL;
-
sigsegv:
force_sigsegv(signr, current);
- return -EFAULT;
}
-static inline int handle_signal32(unsigned long signr, struct k_sigaction *ka,
- siginfo_t *info,
- sigset_t *oldset, struct pt_regs *regs)
+static inline void handle_signal32(unsigned long signr, struct k_sigaction *ka,
+ siginfo_t *info,
+ sigset_t *oldset, struct pt_regs *regs)
{
- int err;
-
if (ka->sa.sa_flags & SA_SIGINFO)
- err = setup_rt_frame32(ka, regs, signr, oldset, info);
+ setup_rt_frame32(ka, regs, signr, oldset, info);
else
- err = setup_frame32(ka, regs, signr, oldset);
-
- if (err)
- return err;
+ setup_frame32(ka, regs, signr, oldset);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
sigaddset(&current->blocked,signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
-
- tracehook_signal_handler(signr, info, ka, regs, 0);
-
- return 0;
}
static inline void syscall_restart32(unsigned long orig_i0, struct pt_regs *regs,
if (signr > 0) {
if (restart_syscall)
syscall_restart32(orig_i0, regs, &ka.sa);
- if (handle_signal32(signr, &ka, &info, oldset, regs) == 0) {
- /* A signal was successfully delivered; the saved
- * sigmask will have been stored in the signal frame,
- * and will be restored by sigreturn, so we can simply
- * clear the TS_RESTORE_SIGMASK flag.
- */
- current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
- }
+ handle_signal32(signr, &ka, &info, oldset, regs);
+
+ /* A signal was successfully delivered; the saved
+ * sigmask will have been stored in the signal frame,
+ * and will be restored by sigreturn, so we can simply
+ * clear the TS_RESTORE_SIGMASK flag.
+ */
+ current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
+
+ tracehook_signal_handler(signr, &info, &ka, regs, 0);
return;
}
if (restart_syscall &&
regs->u_regs[UREG_I0] = orig_i0;
regs->tpc -= 4;
regs->tnpc -= 4;
- pt_regs_clear_syscall(regs);
}
if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->tpc -= 4;
regs->tnpc -= 4;
- pt_regs_clear_syscall(regs);
}
/* If there's no signal to deliver, we just put the saved sigmask
sp = current->sas_ss_sp + current->sas_ss_size;
}
- sp -= framesize;
-
/* Always align the stack frame. This handles two cases. First,
* sigaltstack need not be mindful of platform specific stack
* alignment. Second, if we took this signal because the stack
* is not aligned properly, we'd like to take the signal cleanly
* and report that.
*/
- sp &= ~15UL;
+ sp &= ~7UL;
- return (void __user *) sp;
+ return (void __user *)(sp - framesize);
}
static inline int
return err;
}
-static int setup_frame(struct k_sigaction *ka, struct pt_regs *regs,
- int signo, sigset_t *oldset)
+static void setup_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset)
{
struct signal_frame __user *sf;
int sigframe_size, err;
/* Flush instruction space. */
flush_sig_insns(current->mm, (unsigned long) &(sf->insns[0]));
}
- return 0;
+ return;
sigill_and_return:
do_exit(SIGILL);
- return -EINVAL;
-
sigsegv:
force_sigsegv(signo, current);
- return -EFAULT;
}
-static int setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
- int signo, sigset_t *oldset, siginfo_t *info)
+static void setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset, siginfo_t *info)
{
struct rt_signal_frame __user *sf;
int sigframe_size;
/* Flush instruction space. */
flush_sig_insns(current->mm, (unsigned long) &(sf->insns[0]));
}
- return 0;
+ return;
sigill:
do_exit(SIGILL);
- return -EINVAL;
-
sigsegv:
force_sigsegv(signo, current);
- return -EFAULT;
}
-static inline int
+static inline void
handle_signal(unsigned long signr, struct k_sigaction *ka,
siginfo_t *info, sigset_t *oldset, struct pt_regs *regs)
{
- int err;
-
if (ka->sa.sa_flags & SA_SIGINFO)
- err = setup_rt_frame(ka, regs, signr, oldset, info);
+ setup_rt_frame(ka, regs, signr, oldset, info);
else
- err = setup_frame(ka, regs, signr, oldset);
-
- if (err)
- return err;
+ setup_frame(ka, regs, signr, oldset);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
-
- tracehook_signal_handler(signr, info, ka, regs, 0);
-
- return 0;
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,
if (signr > 0) {
if (restart_syscall)
syscall_restart(orig_i0, regs, &ka.sa);
- if (handle_signal(signr, &ka, &info, oldset, regs) == 0) {
- /* a signal was successfully delivered; the saved
- * sigmask will have been stored in the signal frame,
- * and will be restored by sigreturn, so we can simply
- * clear the TIF_RESTORE_SIGMASK flag.
- */
- if (test_thread_flag(TIF_RESTORE_SIGMASK))
- clear_thread_flag(TIF_RESTORE_SIGMASK);
- }
+ handle_signal(signr, &ka, &info, oldset, regs);
+
+ /* a signal was successfully delivered; the saved
+ * sigmask will have been stored in the signal frame,
+ * and will be restored by sigreturn, so we can simply
+ * clear the TIF_RESTORE_SIGMASK flag.
+ */
+ if (test_thread_flag(TIF_RESTORE_SIGMASK))
+ clear_thread_flag(TIF_RESTORE_SIGMASK);
+
+ tracehook_signal_handler(signr, &info, &ka, regs, 0);
return;
}
if (restart_syscall &&
regs->u_regs[UREG_I0] = orig_i0;
regs->pc -= 4;
regs->npc -= 4;
- pt_regs_clear_syscall(regs);
}
if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->pc -= 4;
regs->npc -= 4;
- pt_regs_clear_syscall(regs);
}
/* if there's no signal to deliver, we just put the saved sigmask
/* Checks if the fp is valid */
static int invalid_frame_pointer(void __user *fp, int fplen)
{
- if (((unsigned long) fp) & 15)
+ if (((unsigned long) fp) & 7)
return 1;
return 0;
}
sp = current->sas_ss_sp + current->sas_ss_size;
}
- sp -= framesize;
-
/* Always align the stack frame. This handles two cases. First,
* sigaltstack need not be mindful of platform specific stack
* alignment. Second, if we took this signal because the stack
* is not aligned properly, we'd like to take the signal cleanly
* and report that.
*/
- sp &= ~15UL;
+ sp &= ~7UL;
- return (void __user *) sp;
+ return (void __user *)(sp - framesize);
}
-static inline int
+static inline void
setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
int signo, sigset_t *oldset, siginfo_t *info)
{
}
/* 4. return to kernel instructions */
regs->u_regs[UREG_I7] = (unsigned long)ka->ka_restorer;
- return 0;
+ return;
sigill:
do_exit(SIGILL);
- return -EINVAL;
-
sigsegv:
force_sigsegv(signo, current);
- return -EFAULT;
}
-static inline int handle_signal(unsigned long signr, struct k_sigaction *ka,
- siginfo_t *info,
- sigset_t *oldset, struct pt_regs *regs)
+static inline void handle_signal(unsigned long signr, struct k_sigaction *ka,
+ siginfo_t *info,
+ sigset_t *oldset, struct pt_regs *regs)
{
- int err;
-
- err = setup_rt_frame(ka, regs, signr, oldset,
- (ka->sa.sa_flags & SA_SIGINFO) ? info : NULL);
- if (err)
- return err;
+ setup_rt_frame(ka, regs, signr, oldset,
+ (ka->sa.sa_flags & SA_SIGINFO) ? info : NULL);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked,signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
-
- tracehook_signal_handler(signr, info, ka, regs, 0);
-
- return 0;
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,
if (signr > 0) {
if (restart_syscall)
syscall_restart(orig_i0, regs, &ka.sa);
- if (handle_signal(signr, &ka, &info, oldset, regs) == 0) {
- /* A signal was successfully delivered; the saved
- * sigmask will have been stored in the signal frame,
- * and will be restored by sigreturn, so we can simply
- * clear the TS_RESTORE_SIGMASK flag.
- */
- current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
- }
+ handle_signal(signr, &ka, &info, oldset, regs);
+
+ /* A signal was successfully delivered; the saved
+ * sigmask will have been stored in the signal frame,
+ * and will be restored by sigreturn, so we can simply
+ * clear the TS_RESTORE_SIGMASK flag.
+ */
+ current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
+
+ tracehook_signal_handler(signr, &info, &ka, regs, 0);
return;
}
if (restart_syscall &&
regs->u_regs[UREG_I0] = orig_i0;
regs->tpc -= 4;
regs->tnpc -= 4;
- pt_regs_clear_syscall(regs);
}
if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->tpc -= 4;
regs->tnpc -= 4;
- pt_regs_clear_syscall(regs);
}
/* If there's no signal to deliver, we just put the saved sigmask
tsb_itlb_load:
/* Executable bit must be set. */
-661: sethi %hi(_PAGE_EXEC_4U), %g4
- andcc %g5, %g4, %g0
- .section .sun4v_2insn_patch, "ax"
+661: andcc %g5, _PAGE_EXEC_4U, %g0
+ .section .sun4v_1insn_patch, "ax"
.word 661b
andcc %g5, _PAGE_EXEC_4V, %g0
- nop
.previous
be,pn %xcc, tsb_do_fault
#include <asm/thread_info.h>
.text
- .globl prom_cif_direct
-prom_cif_direct:
- sethi %hi(p1275buf), %o1
- or %o1, %lo(p1275buf), %o1
- ldx [%o1 + 0x0010], %o2 ! prom_cif_stack
- save %o2, -192, %sp
- ldx [%i1 + 0x0008], %l2 ! prom_cif_handler
+ .globl prom_cif_interface
+prom_cif_interface:
+ sethi %hi(p1275buf), %o0
+ or %o0, %lo(p1275buf), %o0
+ ldx [%o0 + 0x010], %o1 ! prom_cif_stack
+ save %o1, -192, %sp
+ ldx [%i0 + 0x008], %l2 ! prom_cif_handler
mov %g4, %l0
mov %g5, %l1
mov %g6, %l3
call %l2
- mov %i0, %o0 ! prom_args
+ add %i0, 0x018, %o0 ! prom_args
mov %l0, %g4
mov %l1, %g5
mov %l3, %g6
inline int
prom_nbgetchar(void)
{
- unsigned long args[7];
char inc;
- args[0] = (unsigned long) "read";
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) prom_stdin;
- args[4] = (unsigned long) &inc;
- args[5] = 1;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- if (args[6] == 1)
+ if (p1275_cmd("read", P1275_ARG(1,P1275_ARG_OUT_BUF)|
+ P1275_INOUT(3,1),
+ prom_stdin, &inc, P1275_SIZE(1)) == 1)
return inc;
- return -1;
+ else
+ return -1;
}
/* Non blocking put character to console device, returns -1 if
inline int
prom_nbputchar(char c)
{
- unsigned long args[7];
char outc;
outc = c;
-
- args[0] = (unsigned long) "write";
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) prom_stdout;
- args[4] = (unsigned long) &outc;
- args[5] = 1;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- if (args[6] == 1)
+ if (p1275_cmd("write", P1275_ARG(1,P1275_ARG_IN_BUF)|
+ P1275_INOUT(3,1),
+ prom_stdout, &outc, P1275_SIZE(1)) == 1)
return 0;
else
return -1;
void
prom_puts(const char *s, int len)
{
- unsigned long args[7];
-
- args[0] = (unsigned long) "write";
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) prom_stdout;
- args[4] = (unsigned long) s;
- args[5] = len;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
+ p1275_cmd("write", P1275_ARG(1,P1275_ARG_IN_BUF)|
+ P1275_INOUT(3,1),
+ prom_stdout, s, P1275_SIZE(len));
}
int
prom_devopen(const char *dstr)
{
- unsigned long args[5];
-
- args[0] = (unsigned long) "open";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned long) dstr;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[4];
+ return p1275_cmd ("open", P1275_ARG(0,P1275_ARG_IN_STRING)|
+ P1275_INOUT(1,1),
+ dstr);
}
/* Close the device described by device handle 'dhandle'. */
int
prom_devclose(int dhandle)
{
- unsigned long args[4];
-
- args[0] = (unsigned long) "close";
- args[1] = 1;
- args[2] = 0;
- args[3] = (unsigned int) dhandle;
-
- p1275_cmd_direct(args);
-
+ p1275_cmd ("close", P1275_INOUT(1,0), dhandle);
return 0;
}
void
prom_seek(int dhandle, unsigned int seekhi, unsigned int seeklo)
{
- unsigned long args[7];
-
- args[0] = (unsigned long) "seek";
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) dhandle;
- args[4] = seekhi;
- args[5] = seeklo;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
+ p1275_cmd ("seek", P1275_INOUT(3,1), dhandle, seekhi, seeklo);
}
int prom_service_exists(const char *service_name)
{
- unsigned long args[5];
+ int err = p1275_cmd("test", P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_INOUT(1, 1), service_name);
- args[0] = (unsigned long) "test";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned long) service_name;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- if (args[4])
+ if (err)
return 0;
return 1;
}
void prom_sun4v_guest_soft_state(void)
{
const char *svc = "SUNW,soft-state-supported";
- unsigned long args[3];
if (!prom_service_exists(svc))
return;
- args[0] = (unsigned long) svc;
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd(svc, P1275_INOUT(0, 0));
}
/* Reset and reboot the machine with the command 'bcommand'. */
void prom_reboot(const char *bcommand)
{
- unsigned long args[4];
-
#ifdef CONFIG_SUN_LDOMS
if (ldom_domaining_enabled)
ldom_reboot(bcommand);
#endif
- args[0] = (unsigned long) "boot";
- args[1] = 1;
- args[2] = 0;
- args[3] = (unsigned long) bcommand;
-
- p1275_cmd_direct(args);
+ p1275_cmd("boot", P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_INOUT(1, 0), bcommand);
}
/* Forth evaluate the expression contained in 'fstring'. */
void prom_feval(const char *fstring)
{
- unsigned long args[5];
-
if (!fstring || fstring[0] == 0)
return;
- args[0] = (unsigned long) "interpret";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned long) fstring;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
+ p1275_cmd("interpret", P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_INOUT(1, 1), fstring);
}
EXPORT_SYMBOL(prom_feval);
*/
void prom_cmdline(void)
{
- unsigned long args[3];
unsigned long flags;
local_irq_save(flags);
smp_capture();
#endif
- args[0] = (unsigned long) "enter";
- args[1] = 0;
- args[2] = 0;
-
- p1275_cmd_direct(args);
+ p1275_cmd("enter", P1275_INOUT(0, 0));
#ifdef CONFIG_SMP
smp_release();
*/
void notrace prom_halt(void)
{
- unsigned long args[3];
-
#ifdef CONFIG_SUN_LDOMS
if (ldom_domaining_enabled)
ldom_power_off();
#endif
again:
- args[0] = (unsigned long) "exit";
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd("exit", P1275_INOUT(0, 0));
goto again; /* PROM is out to get me -DaveM */
}
void prom_halt_power_off(void)
{
- unsigned long args[3];
-
#ifdef CONFIG_SUN_LDOMS
if (ldom_domaining_enabled)
ldom_power_off();
#endif
- args[0] = (unsigned long) "SUNW,power-off";
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,power-off", P1275_INOUT(0, 0));
/* if nothing else helps, we just halt */
prom_halt();
/* Set prom sync handler to call function 'funcp'. */
void prom_setcallback(callback_func_t funcp)
{
- unsigned long args[5];
if (!funcp)
return;
- args[0] = (unsigned long) "set-callback";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned long) funcp;
- args[4] = (unsigned long) -1;
- p1275_cmd_direct(args);
+ p1275_cmd("set-callback", P1275_ARG(0, P1275_ARG_IN_FUNCTION) |
+ P1275_INOUT(1, 1), funcp);
}
/* Get the idprom and stuff it into buffer 'idbuf'. Returns the
}
/* Load explicit I/D TLB entries. */
-static long tlb_load(const char *type, unsigned long index,
- unsigned long tte_data, unsigned long vaddr)
-{
- unsigned long args[9];
-
- args[0] = (unsigned long) prom_callmethod_name;
- args[1] = 5;
- args[2] = 1;
- args[3] = (unsigned long) type;
- args[4] = (unsigned int) prom_get_mmu_ihandle();
- args[5] = vaddr;
- args[6] = tte_data;
- args[7] = index;
- args[8] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (long) args[8];
-}
-
long prom_itlb_load(unsigned long index,
unsigned long tte_data,
unsigned long vaddr)
{
- return tlb_load("SUNW,itlb-load", index, tte_data, vaddr);
+ return p1275_cmd(prom_callmethod_name,
+ (P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_ARG(2, P1275_ARG_IN_64B) |
+ P1275_ARG(3, P1275_ARG_IN_64B) |
+ P1275_INOUT(5, 1)),
+ "SUNW,itlb-load",
+ prom_get_mmu_ihandle(),
+ /* And then our actual args are pushed backwards. */
+ vaddr,
+ tte_data,
+ index);
}
long prom_dtlb_load(unsigned long index,
unsigned long tte_data,
unsigned long vaddr)
{
- return tlb_load("SUNW,dtlb-load", index, tte_data, vaddr);
+ return p1275_cmd(prom_callmethod_name,
+ (P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_ARG(2, P1275_ARG_IN_64B) |
+ P1275_ARG(3, P1275_ARG_IN_64B) |
+ P1275_INOUT(5, 1)),
+ "SUNW,dtlb-load",
+ prom_get_mmu_ihandle(),
+ /* And then our actual args are pushed backwards. */
+ vaddr,
+ tte_data,
+ index);
}
int prom_map(int mode, unsigned long size,
unsigned long vaddr, unsigned long paddr)
{
- unsigned long args[11];
- int ret;
-
- args[0] = (unsigned long) prom_callmethod_name;
- args[1] = 7;
- args[2] = 1;
- args[3] = (unsigned long) prom_map_name;
- args[4] = (unsigned int) prom_get_mmu_ihandle();
- args[5] = (unsigned int) mode;
- args[6] = size;
- args[7] = vaddr;
- args[8] = 0;
- args[9] = paddr;
- args[10] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- ret = (int) args[10];
+ int ret = p1275_cmd(prom_callmethod_name,
+ (P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_ARG(3, P1275_ARG_IN_64B) |
+ P1275_ARG(4, P1275_ARG_IN_64B) |
+ P1275_ARG(6, P1275_ARG_IN_64B) |
+ P1275_INOUT(7, 1)),
+ prom_map_name,
+ prom_get_mmu_ihandle(),
+ mode,
+ size,
+ vaddr,
+ 0,
+ paddr);
+
if (ret == 0)
ret = -1;
return ret;
void prom_unmap(unsigned long size, unsigned long vaddr)
{
- unsigned long args[7];
-
- args[0] = (unsigned long) prom_callmethod_name;
- args[1] = 4;
- args[2] = 0;
- args[3] = (unsigned long) prom_unmap_name;
- args[4] = (unsigned int) prom_get_mmu_ihandle();
- args[5] = size;
- args[6] = vaddr;
-
- p1275_cmd_direct(args);
+ p1275_cmd(prom_callmethod_name,
+ (P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_ARG(2, P1275_ARG_IN_64B) |
+ P1275_ARG(3, P1275_ARG_IN_64B) |
+ P1275_INOUT(4, 0)),
+ prom_unmap_name,
+ prom_get_mmu_ihandle(),
+ size,
+ vaddr);
}
/* Set aside physical memory which is not touched or modified
* across soft resets.
*/
-int prom_retain(const char *name, unsigned long size,
- unsigned long align, unsigned long *paddr)
+unsigned long prom_retain(const char *name,
+ unsigned long pa_low, unsigned long pa_high,
+ long size, long align)
{
- unsigned long args[11];
-
- args[0] = (unsigned long) prom_callmethod_name;
- args[1] = 5;
- args[2] = 3;
- args[3] = (unsigned long) "SUNW,retain";
- args[4] = (unsigned int) prom_get_memory_ihandle();
- args[5] = align;
- args[6] = size;
- args[7] = (unsigned long) name;
- args[8] = (unsigned long) -1;
- args[9] = (unsigned long) -1;
- args[10] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- if (args[8])
- return (int) args[8];
-
- /* Next we get "phys_high" then "phys_low". On 64-bit
- * the phys_high cell is don't care since the phys_low
- * cell has the full value.
+ /* XXX I don't think we return multiple values correctly.
+ * XXX OBP supposedly returns pa_low/pa_high here, how does
+ * XXX it work?
*/
- *paddr = args[10];
- return 0;
+ /* If align is zero, the pa_low/pa_high args are passed,
+ * else they are not.
+ */
+ if (align == 0)
+ return p1275_cmd("SUNW,retain",
+ (P1275_ARG(0, P1275_ARG_IN_BUF) | P1275_INOUT(5, 2)),
+ name, pa_low, pa_high, size, align);
+ else
+ return p1275_cmd("SUNW,retain",
+ (P1275_ARG(0, P1275_ARG_IN_BUF) | P1275_INOUT(3, 2)),
+ name, size, align);
}
/* Get "Unumber" string for the SIMM at the given
unsigned long phys_addr,
char *buf, int buflen)
{
- unsigned long args[12];
-
- args[0] = (unsigned long) prom_callmethod_name;
- args[1] = 7;
- args[2] = 2;
- args[3] = (unsigned long) "SUNW,get-unumber";
- args[4] = (unsigned int) prom_get_memory_ihandle();
- args[5] = buflen;
- args[6] = (unsigned long) buf;
- args[7] = 0;
- args[8] = phys_addr;
- args[9] = (unsigned int) syndrome_code;
- args[10] = (unsigned long) -1;
- args[11] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[10];
+ return p1275_cmd(prom_callmethod_name,
+ (P1275_ARG(0, P1275_ARG_IN_STRING) |
+ P1275_ARG(3, P1275_ARG_OUT_BUF) |
+ P1275_ARG(6, P1275_ARG_IN_64B) |
+ P1275_INOUT(8, 2)),
+ "SUNW,get-unumber", prom_get_memory_ihandle(),
+ buflen, buf, P1275_SIZE(buflen),
+ 0, phys_addr, syndrome_code);
}
/* Power management extensions. */
void prom_sleepself(void)
{
- unsigned long args[3];
-
- args[0] = (unsigned long) "SUNW,sleep-self";
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,sleep-self", P1275_INOUT(0, 0));
}
int prom_sleepsystem(void)
{
- unsigned long args[4];
-
- args[0] = (unsigned long) "SUNW,sleep-system";
- args[1] = 0;
- args[2] = 1;
- args[3] = (unsigned long) -1;
- p1275_cmd_direct(args);
-
- return (int) args[3];
+ return p1275_cmd("SUNW,sleep-system", P1275_INOUT(0, 1));
}
int prom_wakeupsystem(void)
{
- unsigned long args[4];
-
- args[0] = (unsigned long) "SUNW,wakeup-system";
- args[1] = 0;
- args[2] = 1;
- args[3] = (unsigned long) -1;
- p1275_cmd_direct(args);
-
- return (int) args[3];
+ return p1275_cmd("SUNW,wakeup-system", P1275_INOUT(0, 1));
}
#ifdef CONFIG_SMP
void prom_startcpu(int cpunode, unsigned long pc, unsigned long arg)
{
- unsigned long args[6];
-
- args[0] = (unsigned long) "SUNW,start-cpu";
- args[1] = 3;
- args[2] = 0;
- args[3] = (unsigned int) cpunode;
- args[4] = pc;
- args[5] = arg;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,start-cpu", P1275_INOUT(3, 0), cpunode, pc, arg);
}
void prom_startcpu_cpuid(int cpuid, unsigned long pc, unsigned long arg)
{
- unsigned long args[6];
-
- args[0] = (unsigned long) "SUNW,start-cpu-by-cpuid";
- args[1] = 3;
- args[2] = 0;
- args[3] = (unsigned int) cpuid;
- args[4] = pc;
- args[5] = arg;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,start-cpu-by-cpuid", P1275_INOUT(3, 0),
+ cpuid, pc, arg);
}
void prom_stopcpu_cpuid(int cpuid)
{
- unsigned long args[4];
-
- args[0] = (unsigned long) "SUNW,stop-cpu-by-cpuid";
- args[1] = 1;
- args[2] = 0;
- args[3] = (unsigned int) cpuid;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,stop-cpu-by-cpuid", P1275_INOUT(1, 0),
+ cpuid);
}
void prom_stopself(void)
{
- unsigned long args[3];
-
- args[0] = (unsigned long) "SUNW,stop-self";
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,stop-self", P1275_INOUT(0, 0));
}
void prom_idleself(void)
{
- unsigned long args[3];
-
- args[0] = (unsigned long) "SUNW,idle-self";
- args[1] = 0;
- args[2] = 0;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,idle-self", P1275_INOUT(0, 0));
}
void prom_resumecpu(int cpunode)
{
- unsigned long args[4];
-
- args[0] = (unsigned long) "SUNW,resume-cpu";
- args[1] = 1;
- args[2] = 0;
- args[3] = (unsigned int) cpunode;
- p1275_cmd_direct(args);
+ p1275_cmd("SUNW,resume-cpu", P1275_INOUT(1, 0), cpunode);
}
#endif
long prom_callback; /* 0x00 */
void (*prom_cif_handler)(long *); /* 0x08 */
unsigned long prom_cif_stack; /* 0x10 */
+ unsigned long prom_args [23]; /* 0x18 */
+ char prom_buffer [3000];
} p1275buf;
extern void prom_world(int);
-extern void prom_cif_direct(unsigned long *args);
+extern void prom_cif_interface(void);
extern void prom_cif_callback(void);
/*
- * This provides SMP safety on the p1275buf.
+ * This provides SMP safety on the p1275buf. prom_callback() drops this lock
+ * to allow recursive acquisition.
*/
DEFINE_SPINLOCK(prom_entry_lock);
-void p1275_cmd_direct(unsigned long *args)
+long p1275_cmd(const char *service, long fmt, ...)
{
+ char *p, *q;
unsigned long flags;
+ int nargs, nrets, i;
+ va_list list;
+ long attrs, x;
+
+ p = p1275buf.prom_buffer;
- raw_local_save_flags(flags);
- raw_local_irq_restore(PIL_NMI);
- spin_lock(&prom_entry_lock);
+ spin_lock_irqsave(&prom_entry_lock, flags);
+
+ p1275buf.prom_args[0] = (unsigned long)p; /* service */
+ strcpy (p, service);
+ p = (char *)(((long)(strchr (p, 0) + 8)) & ~7);
+ p1275buf.prom_args[1] = nargs = (fmt & 0x0f); /* nargs */
+ p1275buf.prom_args[2] = nrets = ((fmt & 0xf0) >> 4); /* nrets */
+ attrs = fmt >> 8;
+ va_start(list, fmt);
+ for (i = 0; i < nargs; i++, attrs >>= 3) {
+ switch (attrs & 0x7) {
+ case P1275_ARG_NUMBER:
+ p1275buf.prom_args[i + 3] =
+ (unsigned)va_arg(list, long);
+ break;
+ case P1275_ARG_IN_64B:
+ p1275buf.prom_args[i + 3] =
+ va_arg(list, unsigned long);
+ break;
+ case P1275_ARG_IN_STRING:
+ strcpy (p, va_arg(list, char *));
+ p1275buf.prom_args[i + 3] = (unsigned long)p;
+ p = (char *)(((long)(strchr (p, 0) + 8)) & ~7);
+ break;
+ case P1275_ARG_OUT_BUF:
+ (void) va_arg(list, char *);
+ p1275buf.prom_args[i + 3] = (unsigned long)p;
+ x = va_arg(list, long);
+ i++; attrs >>= 3;
+ p = (char *)(((long)(p + (int)x + 7)) & ~7);
+ p1275buf.prom_args[i + 3] = x;
+ break;
+ case P1275_ARG_IN_BUF:
+ q = va_arg(list, char *);
+ p1275buf.prom_args[i + 3] = (unsigned long)p;
+ x = va_arg(list, long);
+ i++; attrs >>= 3;
+ memcpy (p, q, (int)x);
+ p = (char *)(((long)(p + (int)x + 7)) & ~7);
+ p1275buf.prom_args[i + 3] = x;
+ break;
+ case P1275_ARG_OUT_32B:
+ (void) va_arg(list, char *);
+ p1275buf.prom_args[i + 3] = (unsigned long)p;
+ p += 32;
+ break;
+ case P1275_ARG_IN_FUNCTION:
+ p1275buf.prom_args[i + 3] =
+ (unsigned long)prom_cif_callback;
+ p1275buf.prom_callback = va_arg(list, long);
+ break;
+ }
+ }
+ va_end(list);
prom_world(1);
- prom_cif_direct(args);
+ prom_cif_interface();
prom_world(0);
- spin_unlock(&prom_entry_lock);
- raw_local_irq_restore(flags);
+ attrs = fmt >> 8;
+ va_start(list, fmt);
+ for (i = 0; i < nargs; i++, attrs >>= 3) {
+ switch (attrs & 0x7) {
+ case P1275_ARG_NUMBER:
+ (void) va_arg(list, long);
+ break;
+ case P1275_ARG_IN_STRING:
+ (void) va_arg(list, char *);
+ break;
+ case P1275_ARG_IN_FUNCTION:
+ (void) va_arg(list, long);
+ break;
+ case P1275_ARG_IN_BUF:
+ (void) va_arg(list, char *);
+ (void) va_arg(list, long);
+ i++; attrs >>= 3;
+ break;
+ case P1275_ARG_OUT_BUF:
+ p = va_arg(list, char *);
+ x = va_arg(list, long);
+ memcpy (p, (char *)(p1275buf.prom_args[i + 3]), (int)x);
+ i++; attrs >>= 3;
+ break;
+ case P1275_ARG_OUT_32B:
+ p = va_arg(list, char *);
+ memcpy (p, (char *)(p1275buf.prom_args[i + 3]), 32);
+ break;
+ }
+ }
+ va_end(list);
+ x = p1275buf.prom_args [nargs + 3];
+
+ spin_unlock_irqrestore(&prom_entry_lock, flags);
+
+ return x;
}
void prom_cif_init(void *cif_handler, void *cif_stack)
#include <asm/oplib.h>
#include <asm/ldc.h>
-static int prom_node_to_node(const char *type, int node)
-{
- unsigned long args[5];
-
- args[0] = (unsigned long) type;
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[4];
-}
-
/* Return the child of node 'node' or zero if this node has no
 * direct descendant.
*/
inline int __prom_getchild(int node)
{
- return prom_node_to_node("child", node);
+ return p1275_cmd ("child", P1275_INOUT(1, 1), node);
}
inline int prom_getchild(int node)
{
int cnode;
- if (node == -1)
- return 0;
+ if(node == -1) return 0;
cnode = __prom_getchild(node);
- if (cnode == -1)
- return 0;
- return cnode;
+ if(cnode == -1) return 0;
+ return (int)cnode;
}
EXPORT_SYMBOL(prom_getchild);
{
int cnode;
- if (node == -1)
- return 0;
- cnode = prom_node_to_node("parent", node);
- if (cnode == -1)
- return 0;
- return cnode;
+ if(node == -1) return 0;
+ cnode = p1275_cmd ("parent", P1275_INOUT(1, 1), node);
+ if(cnode == -1) return 0;
+ return (int)cnode;
}
/* Return the next sibling of node 'node' or zero if no more siblings
*/
inline int __prom_getsibling(int node)
{
- return prom_node_to_node(prom_peer_name, node);
+ return p1275_cmd(prom_peer_name, P1275_INOUT(1, 1), node);
}
inline int prom_getsibling(int node)
*/
inline int prom_getproplen(int node, const char *prop)
{
- unsigned long args[6];
-
- if (!node || !prop)
- return -1;
-
- args[0] = (unsigned long) "getproplen";
- args[1] = 2;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = (unsigned long) prop;
- args[5] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[5];
+ if((!node) || (!prop)) return -1;
+ return p1275_cmd ("getproplen",
+ P1275_ARG(1,P1275_ARG_IN_STRING)|
+ P1275_INOUT(2, 1),
+ node, prop);
}
EXPORT_SYMBOL(prom_getproplen);
inline int prom_getproperty(int node, const char *prop,
char *buffer, int bufsize)
{
- unsigned long args[8];
int plen;
plen = prom_getproplen(node, prop);
- if ((plen > bufsize) || (plen == 0) || (plen == -1))
+ if ((plen > bufsize) || (plen == 0) || (plen == -1)) {
return -1;
-
- args[0] = (unsigned long) prom_getprop_name;
- args[1] = 4;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = (unsigned long) prop;
- args[5] = (unsigned long) buffer;
- args[6] = bufsize;
- args[7] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[7];
+ } else {
+ /* Ok, things seem all right. */
+ return p1275_cmd(prom_getprop_name,
+ P1275_ARG(1,P1275_ARG_IN_STRING)|
+ P1275_ARG(2,P1275_ARG_OUT_BUF)|
+ P1275_INOUT(4, 1),
+ node, prop, buffer, P1275_SIZE(plen));
+ }
}
EXPORT_SYMBOL(prom_getproperty);
{
int intprop;
- if (prom_getproperty(node, prop, (char *) &intprop, sizeof(int)) != -1)
+ if(prom_getproperty(node, prop, (char *) &intprop, sizeof(int)) != -1)
return intprop;
return -1;
int retval;
retval = prom_getint(node, property);
- if (retval == -1)
- return deflt;
+ if(retval == -1) return deflt;
return retval;
}
int retval;
retval = prom_getproplen(node, prop);
- if (retval == -1)
- return 0;
+ if(retval == -1) return 0;
return 1;
}
EXPORT_SYMBOL(prom_getbool);
int len;
len = prom_getproperty(node, prop, user_buf, ubuf_size);
- if (len != -1)
- return;
+ if(len != -1) return;
user_buf[0] = 0;
return;
}
{
char namebuf[128];
prom_getproperty(node, "name", namebuf, sizeof(namebuf));
- if (strcmp(namebuf, name) == 0)
- return 1;
+ if(strcmp(namebuf, name) == 0) return 1;
return 0;
}
}
EXPORT_SYMBOL(prom_searchsiblings);
-static const char *prom_nextprop_name = "nextprop";
-
/* Return the first property type for node 'node'.
* buffer should be at least 32B in length
*/
inline char *prom_firstprop(int node, char *buffer)
{
- unsigned long args[7];
-
*buffer = 0;
- if (node == -1)
- return buffer;
-
- args[0] = (unsigned long) prom_nextprop_name;
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = 0;
- args[5] = (unsigned long) buffer;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
+ if(node == -1) return buffer;
+ p1275_cmd ("nextprop", P1275_ARG(2,P1275_ARG_OUT_32B)|
+ P1275_INOUT(3, 0),
+ node, (char *) 0x0, buffer);
return buffer;
}
EXPORT_SYMBOL(prom_firstprop);
*/
inline char *prom_nextprop(int node, const char *oprop, char *buffer)
{
- unsigned long args[7];
char buf[32];
- if (node == -1) {
+ if(node == -1) {
*buffer = 0;
return buffer;
}
strcpy (buf, oprop);
oprop = buf;
}
-
- args[0] = (unsigned long) prom_nextprop_name;
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = (unsigned long) oprop;
- args[5] = (unsigned long) buffer;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
+ p1275_cmd ("nextprop", P1275_ARG(1,P1275_ARG_IN_STRING)|
+ P1275_ARG(2,P1275_ARG_OUT_32B)|
+ P1275_INOUT(3, 0),
+ node, oprop, buffer);
return buffer;
}
EXPORT_SYMBOL(prom_nextprop);
int
prom_finddevice(const char *name)
{
- unsigned long args[5];
-
if (!name)
return 0;
- args[0] = (unsigned long) "finddevice";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned long) name;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[4];
+ return p1275_cmd(prom_finddev_name,
+ P1275_ARG(0,P1275_ARG_IN_STRING)|
+ P1275_INOUT(1, 1),
+ name);
}
EXPORT_SYMBOL(prom_finddevice);
*buf = 0;
do {
prom_nextprop(node, buf, buf);
- if (!strcmp(buf, prop))
+ if(!strcmp(buf, prop))
return 1;
} while (*buf);
return 0;
int
prom_setprop(int node, const char *pname, char *value, int size)
{
- unsigned long args[8];
-
if (size == 0)
return 0;
if ((pname == 0) || (value == 0))
return 0;
}
#endif
- args[0] = (unsigned long) "setprop";
- args[1] = 4;
- args[2] = 1;
- args[3] = (unsigned int) node;
- args[4] = (unsigned long) pname;
- args[5] = (unsigned long) value;
- args[6] = size;
- args[7] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[7];
+ return p1275_cmd ("setprop", P1275_ARG(1,P1275_ARG_IN_STRING)|
+ P1275_ARG(2,P1275_ARG_IN_BUF)|
+ P1275_INOUT(4, 1),
+ node, pname, value, P1275_SIZE(size));
}
EXPORT_SYMBOL(prom_setprop);
inline int prom_inst2pkg(int inst)
{
- unsigned long args[5];
int node;
- args[0] = (unsigned long) "instance-to-package";
- args[1] = 1;
- args[2] = 1;
- args[3] = (unsigned int) inst;
- args[4] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- node = (int) args[4];
- if (node == -1)
- return 0;
+ node = p1275_cmd ("instance-to-package", P1275_INOUT(1, 1), inst);
+ if (node == -1) return 0;
return node;
}
int node, inst;
inst = prom_devopen (path);
- if (inst == 0)
- return 0;
- node = prom_inst2pkg(inst);
- prom_devclose(inst);
- if (node == -1)
- return 0;
+ if (inst == 0) return 0;
+ node = prom_inst2pkg (inst);
+ prom_devclose (inst);
+ if (node == -1) return 0;
return node;
}
int prom_ihandle2path(int handle, char *buffer, int bufsize)
{
- unsigned long args[7];
-
- args[0] = (unsigned long) "instance-to-path";
- args[1] = 3;
- args[2] = 1;
- args[3] = (unsigned int) handle;
- args[4] = (unsigned long) buffer;
- args[5] = bufsize;
- args[6] = (unsigned long) -1;
-
- p1275_cmd_direct(args);
-
- return (int) args[6];
+ return p1275_cmd("instance-to-path",
+ P1275_ARG(1,P1275_ARG_OUT_BUF)|
+ P1275_INOUT(3, 1),
+ handle, buffer, P1275_SIZE(bufsize));
}
static void free_winch(struct winch *winch, int free_irq_ok)
{
- if (free_irq_ok)
- free_irq(WINCH_IRQ, winch);
-
list_del(&winch->list);
if (winch->pid != -1)
os_close_file(winch->fd);
if (winch->stack != 0)
free_stack(winch->stack, 0);
+ if (free_irq_ok)
+ free_irq(WINCH_IRQ, winch);
kfree(winch);
}
struct scatterlist sg[MAX_SG];
struct request *request;
int start_sg, end_sg;
- sector_t rq_pos;
};
#define DEFAULT_COW { \
.request = NULL, \
.start_sg = 0, \
.end_sg = 0, \
- .rq_pos = 0, \
}
/* Protected by ubd_lock */
{
struct io_thread_req *io_req;
struct request *req;
+ sector_t sector;
int n;
while(1){
return;
dev->request = req;
- dev->rq_pos = blk_rq_pos(req);
dev->start_sg = 0;
dev->end_sg = blk_rq_map_sg(q, req, dev->sg);
}
req = dev->request;
+ sector = blk_rq_pos(req);
while(dev->start_sg < dev->end_sg){
struct scatterlist *sg = &dev->sg[dev->start_sg];
return;
}
prepare_request(req, io_req,
- (unsigned long long)dev->rq_pos << 9,
+ (unsigned long long)sector << 9,
sg->offset, sg->length, sg_page(sg));
+ sector += sg->length >> 9;
n = os_write_file(thread_fd, &io_req,
sizeof(struct io_thread_req *));
if(n != sizeof(struct io_thread_req *)){
return;
}
- dev->rq_pos += sg->length >> 9;
dev->start_sg++;
}
dev->end_sg = 0;
_text = .;
_stext = .;
__init_begin = .;
- INIT_TEXT_SECTION(0)
+ INIT_TEXT_SECTION(PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
.text :
long long disable_timer(void)
{
struct itimerval time = ((struct itimerval) { { 0, 0 }, { 0, 0 } });
- long long remain, max = UM_NSEC_PER_SEC / UM_HZ;
+ int remain, max = UM_NSEC_PER_SEC / UM_HZ;
if (setitimer(ITIMER_VIRTUAL, &time, &time) < 0)
printk(UM_KERN_ERR "disable_timer - setitimer failed, "
setjmp.o signal.o stub.o stub_segv.o syscalls.o syscall_table.o \
sysrq.o ksyms.o tls.o
-subarch-obj-y = lib/csum-partial_64.o lib/memcpy_64.o lib/thunk_64.o \
- lib/rwsem_64.o
+subarch-obj-y = lib/csum-partial_64.o lib/memcpy_64.o lib/thunk_64.o
subarch-obj-$(CONFIG_MODULES) += kernel/module.o
ldt-y = ../sys-i386/ldt.o
config KTIME_SCALAR
def_bool X86_32
-
-config ARCH_CPU_PROBE_RELEASE
- def_bool y
- depends on HOTPLUG_CPU
-
source "init/Kconfig"
source "kernel/Kconfig.freezer"
bool "GART IOMMU support" if EMBEDDED
default y
select SWIOTLB
- depends on X86_64 && PCI && K8_NB
+ depends on X86_64 && PCI
---help---
Support for full DMA access of devices with 32bit memory access only
on systems with more than 3GB. This is usually needed for USB,
def_bool X86_64
depends on MEMORY_HOTPLUG
-config ILLEGAL_POINTER_VALUE
- hex
- default 0 if X86_32
- default 0xdead000000000000 if X86_64
-
source "mm/Kconfig"
config HIGHPTE
config K8_NB
def_bool y
- depends on CPU_SUP_AMD && PCI
+ depends on AGP_AMD64 || (X86_64 && (GART_IOMMU || (PCI && NUMA)))
source "drivers/pcmcia/Kconfig"
config X86_XADD
def_bool y
- depends on X86_64 || !M386
+ depends on X86_32 && !M386
config X86_PPRO_FENCE
bool "PentiumPro memory ordering errata workaround"
current->mm->free_area_cache = TASK_UNMAPPED_BASE;
current->mm->cached_hole_size = 0;
+ current->mm->mmap = NULL;
install_exec_creds(bprm);
current->flags &= ~PF_FORKNOEXEC;
/*
* Reload arg registers from stack in case ptrace changed them.
* We don't reload %eax because syscall_trace_enter() returned
- * the %rax value we should see. Instead, we just truncate that
- * value to 32 bits again as we did on entry from user mode.
- * If it's a new value set by user_regset during entry tracing,
- * this matches the normal truncation of the user-mode value.
- * If it's -1 to make us punt the syscall, then (u32)-1 is still
- * an appropriately invalid value.
+ * the value it wants us to use in the table lookup.
*/
.macro LOAD_ARGS32 offset, _r9=0
.if \_r9
movl \offset+48(%rsp),%edx
movl \offset+56(%rsp),%esi
movl \offset+64(%rsp),%edi
- movl %eax,%eax /* zero extension */
.endm
.macro CFI_STARTPROC32 simple
testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%r10)
CFI_REMEMBER_STATE
jnz sysenter_tracesys
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja ia32_badsys
sysenter_do_call:
IA32_ARG_FIXUP
movl $AUDIT_ARCH_I386,%edi /* 1st arg: audit arch */
call audit_syscall_entry
movl RAX-ARGOFFSET(%rsp),%eax /* reload syscall number */
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja ia32_badsys
movl %ebx,%edi /* reload 1st syscall arg */
movl RCX-ARGOFFSET(%rsp),%esi /* reload 2nd syscall arg */
call syscall_trace_enter
LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */
RESTORE_REST
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja int_ret_from_sys_call /* sysenter_tracesys has set RAX(%rsp) */
jmp sysenter_do_call
CFI_ENDPROC
testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%r10)
CFI_REMEMBER_STATE
jnz cstar_tracesys
- cmpq $IA32_NR_syscalls-1,%rax
+ cmpl $IA32_NR_syscalls-1,%eax
ja ia32_badsys
cstar_do_call:
IA32_ARG_FIXUP 1
LOAD_ARGS32 ARGOFFSET, 1 /* reload args from stack in case ptrace changed it */
RESTORE_REST
xchgl %ebp,%r9d
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja int_ret_from_sys_call /* cstar_tracesys has set RAX(%rsp) */
jmp cstar_do_call
END(ia32_cstar_target)
orl $TS_COMPAT,TI_status(%r10)
testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags(%r10)
jnz ia32_tracesys
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja ia32_badsys
ia32_do_call:
IA32_ARG_FIXUP
call syscall_trace_enter
LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */
RESTORE_REST
- cmpq $(IA32_NR_syscalls-1),%rax
+ cmpl $(IA32_NR_syscalls-1),%eax
ja int_ret_from_sys_call /* ia32_tracesys has set RAX(%rsp) */
jmp ia32_do_call
END(ia32_syscall)
/* capabilities of that IOMMU read from ACPI */
u32 cap;
- /* flags read from acpi table */
- u8 acpi_flags;
-
/*
* Capability pointer. There could be more than one IOMMU per PCI
* device function if there are more than one AMD IOMMU capability
/* default dma_ops domain for that IOMMU */
struct dma_ops_domain *default_dom;
-
- /*
- * This array is required to work around a potential BIOS bug.
- * The BIOS may miss to restore parts of the PCI configuration
- * space when the system resumes from S3. The result is that the
- * IOMMU does not execute commands anymore which leads to system
- * failure.
- */
- u32 cache_cfg[4];
};
/*
/* some function prototypes */
extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
-static inline bool is_rd890_iommu(struct pci_dev *pdev)
-{
- return (pdev->vendor == PCI_VENDOR_ID_ATI) &&
- (pdev->device == PCI_DEVICE_ID_RD890_IOMMU);
-}
-
#endif /* _ASM_X86_AMD_IOMMU_TYPES_H */
#define __xg(x) ((struct __xchg_dummy *)(x))
/*
- * CMPXCHG8B only writes to the target if we had the previous
- * value in registers, otherwise it acts as a read and gives us the
- * "new previous" value. That is why there is a loop. Preloading
- * EDX:EAX is a performance optimization: in the common case it means
- * we need only one locked operation.
+ * The semantics of CMPXCHG8B are a bit strange, this is why
+ * there is a loop and the loading of %%eax and %%edx has to
+ * be inside. This inlines well in most cases, the cached
+ * cost is around ~38 cycles. (in the future we might want
+ * to do an SIMD/3DNOW!/MMX/FPU 64-bit store here, but that
+ * might have an implicit FPU-save as a cost, so it's not
+ * clear which path to go.)
*
- * A SIMD/3DNOW!/MMX/FPU 64-bit store here would require at the very
- * least an FPU save and/or %cr0.ts manipulation.
- *
- * cmpxchg8b must be used with the lock prefix here to allow the
- * instruction to be executed atomically. We need to have the reader
- * side to see the coherent 64bit value.
+ * cmpxchg8b must be used with the lock prefix here to allow
+ * the instruction to be executed atomically, see page 3-102
+ * of the instruction set reference 24319102.pdf. We need
+ * the reader side to see the coherent 64bit value.
*/
-static inline void set_64bit(volatile u64 *ptr, u64 value)
+static inline void __set_64bit(unsigned long long *ptr,
+ unsigned int low, unsigned int high)
{
- u32 low = value;
- u32 high = value >> 32;
- u64 prev = *ptr;
-
asm volatile("\n1:\t"
- LOCK_PREFIX "cmpxchg8b %0\n\t"
+ "movl (%0), %%eax\n\t"
+ "movl 4(%0), %%edx\n\t"
+ LOCK_PREFIX "cmpxchg8b (%0)\n\t"
"jnz 1b"
- : "=m" (*ptr), "+A" (prev)
- : "b" (low), "c" (high)
- : "memory");
+ : /* no outputs */
+ : "D"(ptr),
+ "b"(low),
+ "c"(high)
+ : "ax", "dx", "memory");
+}
+
+static inline void __set_64bit_constant(unsigned long long *ptr,
+ unsigned long long value)
+{
+ __set_64bit(ptr, (unsigned int)value, (unsigned int)(value >> 32));
}
+#define ll_low(x) *(((unsigned int *)&(x)) + 0)
+#define ll_high(x) *(((unsigned int *)&(x)) + 1)
+
+static inline void __set_64bit_var(unsigned long long *ptr,
+ unsigned long long value)
+{
+ __set_64bit(ptr, ll_low(value), ll_high(value));
+}
+
+#define set_64bit(ptr, value) \
+ (__builtin_constant_p((value)) \
+ ? __set_64bit_constant((ptr), (value)) \
+ : __set_64bit_var((ptr), (value)))
+
+#define _set_64bit(ptr, value) \
+ (__builtin_constant_p(value) \
+ ? __set_64bit(ptr, (unsigned int)(value), \
+ (unsigned int)((value) >> 32)) \
+ : __set_64bit(ptr, ll_low((value)), ll_high((value))))
+
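
The reason for the retry loop described above can also be seen in plain C. The sketch below is illustrative only, not kernel code: it assumes a toolchain that provides the GCC __sync builtins for 64-bit operands and simply mirrors the idea that the compare-and-swap stores only when the guess matches memory, otherwise the observed value becomes the next guess.

/* Illustrative sketch (not kernel code): the same retry idea using the
 * GCC __sync builtins, assuming the target supports a 64-bit
 * compare-and-swap.  The swap stores 'value' only when 'prev' matches
 * memory; otherwise it returns what it saw, which becomes the next guess.
 */
void set_64bit_sketch(volatile unsigned long long *ptr, unsigned long long value)
{
	unsigned long long prev = *ptr;	/* preload: usually already current */
	unsigned long long seen;

	while ((seen = __sync_val_compare_and_swap(ptr, prev, value)) != prev)
		prev = seen;		/* lost a race: retry with the value we saw */
}
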
/*
* Note: no "lock" prefix even on SMP: xchg always implies lock anyway
* Note 2: xchg has side effect, so that attribute volatile is necessary,
switch (size) {
case 1:
asm volatile("xchgb %b0,%1"
- : "=q" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=q" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
case 2:
asm volatile("xchgw %w0,%1"
- : "=r" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=r" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
case 4:
asm volatile("xchgl %0,%1"
- : "=r" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=r" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile(LOCK_PREFIX "cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile(LOCK_PREFIX "cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile(LOCK_PREFIX "cmpxchgl %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile("lock; cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile("lock; cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile("lock; cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("lock; cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile("lock; cmpxchgl %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("lock; cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile("cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile("cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile("cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile("cmpxchgl %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
unsigned long long new)
{
unsigned long long prev;
- asm volatile(LOCK_PREFIX "cmpxchg8b %1"
- : "=A"(prev), "+m" (*__xg(ptr))
+ asm volatile(LOCK_PREFIX "cmpxchg8b %3"
+ : "=A"(prev)
: "b"((unsigned long)new),
"c"((unsigned long)(new >> 32)),
+ "m"(*__xg(ptr)),
"0"(old)
: "memory");
return prev;
unsigned long long new)
{
unsigned long long prev;
- asm volatile("cmpxchg8b %1"
- : "=A"(prev), "+m"(*__xg(ptr))
+ asm volatile("cmpxchg8b %3"
+ : "=A"(prev)
: "b"((unsigned long)new),
"c"((unsigned long)(new >> 32)),
+ "m"(*__xg(ptr)),
"0"(old)
: "memory");
return prev;
#define __xg(x) ((volatile long *)(x))
-static inline void set_64bit(volatile u64 *ptr, u64 val)
+static inline void set_64bit(volatile unsigned long *ptr, unsigned long val)
{
*ptr = val;
}
+#define _set_64bit set_64bit
+
/*
* Note: no "lock" prefix even on SMP: xchg always implies lock anyway
* Note 2: xchg has side effect, so that attribute volatile is necessary,
switch (size) {
case 1:
asm volatile("xchgb %b0,%1"
- : "=q" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=q" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
case 2:
asm volatile("xchgw %w0,%1"
- : "=r" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=r" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
case 4:
asm volatile("xchgl %k0,%1"
- : "=r" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=r" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
case 8:
asm volatile("xchgq %0,%1"
- : "=r" (x), "+m" (*__xg(ptr))
- : "0" (x)
+ : "=r" (x)
+ : "m" (*__xg(ptr)), "0" (x)
: "memory");
break;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile(LOCK_PREFIX "cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile(LOCK_PREFIX "cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile(LOCK_PREFIX "cmpxchgl %k2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgl %k1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 8:
- asm volatile(LOCK_PREFIX "cmpxchgq %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile(LOCK_PREFIX "cmpxchgq %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile("lock; cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile("lock; cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile("lock; cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("lock; cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile("lock; cmpxchgl %k2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
- : "memory");
- return prev;
- case 8:
- asm volatile("lock; cmpxchgq %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("lock; cmpxchgl %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
unsigned long prev;
switch (size) {
case 1:
- asm volatile("cmpxchgb %b2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "q"(new), "0"(old)
+ asm volatile("cmpxchgb %b1,%2"
+ : "=a"(prev)
+ : "q"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 2:
- asm volatile("cmpxchgw %w2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("cmpxchgw %w1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 4:
- asm volatile("cmpxchgl %k2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("cmpxchgl %k1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
case 8:
- asm volatile("cmpxchgq %2,%1"
- : "=a"(prev), "+m"(*__xg(ptr))
- : "r"(new), "0"(old)
+ asm volatile("cmpxchgq %1,%2"
+ : "=a"(prev)
+ : "r"(new), "m"(*__xg(ptr)), "0"(old)
: "memory");
return prev;
}
return (u32)(unsigned long)uptr;
}
-static inline void __user *arch_compat_alloc_user_space(long len)
+static inline void __user *compat_alloc_user_space(long len)
{
struct pt_regs *regs = task_pt_regs(current);
return (void __user *)regs->sp - len;
#define X86_FEATURE_3DNOWPREFETCH (6*32+ 8) /* 3DNow prefetch instructions */
#define X86_FEATURE_OSVW (6*32+ 9) /* OS Visible Workaround */
#define X86_FEATURE_IBS (6*32+10) /* Instruction Based Sampling */
-#define X86_FEATURE_XOP (6*32+11) /* extended AVX instructions */
+#define X86_FEATURE_SSE5 (6*32+11) /* SSE-5 */
#define X86_FEATURE_SKINIT (6*32+12) /* SKINIT/STGI instructions */
#define X86_FEATURE_WDT (6*32+13) /* Watchdog timer */
-#define X86_FEATURE_NODEID_MSR (6*32+19) /* NodeId MSR */
/*
* Auxiliary flags: Linux defined - For features scattered in various
#endif
FIX_DBGP_BASE,
FIX_EARLYCON_MEM_BASE,
-#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
- FIX_OHCI1394_BASE,
-#endif
#ifdef CONFIG_X86_LOCAL_APIC
FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
#endif
FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
(__end_of_permanent_fixed_addresses & 255),
FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
+#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
+ FIX_OHCI1394_BASE,
+#endif
#ifdef CONFIG_X86_32
FIX_WP_TEST,
#endif
extern void iounmap(volatile void __iomem *addr);
-extern void set_iounmap_nonlazy(void);
#ifdef CONFIG_X86_32
# include "io_32.h"
struct io_apic_irq_attr;
extern int io_apic_set_pci_routing(struct device *dev, int irq,
struct io_apic_irq_attr *irq_attr);
-void setup_IO_APIC_irq_extra(u32 gsi);
extern int (*ioapic_renumber_irq)(int ioapic, int irq);
extern void ioapic_init_mappings(void);
extern void ioapic_insert_resources(void);
extern int k8_scan_nodes(unsigned long start, unsigned long end);
#ifdef CONFIG_K8_NB
-extern int num_k8_northbridges;
-
static inline struct pci_dev *node_to_k8_nb_misc(int node)
{
return (node < num_k8_northbridges) ? k8_northbridges[node] : NULL;
}
-
#else
-#define num_k8_northbridges 0
-
static inline struct pci_dev *node_to_k8_nb_misc(int node)
{
return NULL;
struct x86_emulate_ops {
/*
* read_std: Read bytes of standard (non-emulated/special) memory.
- * Used for descriptor reading.
+ * Used for instruction fetch, stack operations, and others.
* @addr: [IN ] Linear address from which to read.
* @val: [OUT] Value read from memory, zero-extended to 'u_long'.
* @bytes: [IN ] Number of bytes to read from memory.
*/
int (*read_std)(unsigned long addr, void *val,
- unsigned int bytes, struct kvm_vcpu *vcpu, u32 *error);
-
- /*
- * fetch: Read bytes of standard (non-emulated/special) memory.
- * Used for instruction fetch.
- * @addr: [IN ] Linear address from which to read.
- * @val: [OUT] Value read from memory, zero-extended to 'u_long'.
- * @bytes: [IN ] Number of bytes to read from memory.
- */
- int (*fetch)(unsigned long addr, void *val,
- unsigned int bytes, struct kvm_vcpu *vcpu, u32 *error);
+ unsigned int bytes, struct kvm_vcpu *vcpu);
/*
* read_emulated: Read bytes from emulated/special memory area.
/* Execution mode, passed to the emulator. */
#define X86EMUL_MODE_REAL 0 /* Real mode. */
-#define X86EMUL_MODE_VM86 1 /* Virtual 8086 mode. */
#define X86EMUL_MODE_PROT16 2 /* 16-bit protected mode. */
#define X86EMUL_MODE_PROT32 4 /* 32-bit protected mode. */
#define X86EMUL_MODE_PROT64 8 /* 64-bit (long) mode. */
unsigned invalid:1;
unsigned cr4_pge:1;
unsigned nxe:1;
- unsigned cr0_wp:1;
};
};
void (*new_cr3)(struct kvm_vcpu *vcpu);
int (*page_fault)(struct kvm_vcpu *vcpu, gva_t gva, u32 err);
void (*free)(struct kvm_vcpu *vcpu);
- gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t gva, u32 access,
- u32 *error);
+ gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t gva);
void (*prefetch_page)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *page);
int (*sync_page)(struct kvm_vcpu *vcpu,
unsigned long value);
void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
-int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg);
+int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
+ int type_bits, int seg);
int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason);
int kvm_mmu_load(struct kvm_vcpu *vcpu);
void kvm_mmu_unload(struct kvm_vcpu *vcpu);
void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu);
-gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
-gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
-gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
-gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva, u32 *error);
int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
int complete_pio(struct kvm_vcpu *vcpu);
-bool kvm_check_iopl(struct kvm_vcpu *vcpu);
struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn);
return (struct kvm_mmu_page *)page_private(page);
}
+static inline u16 kvm_read_fs(void)
+{
+ u16 seg;
+ asm("mov %%fs, %0" : "=g"(seg));
+ return seg;
+}
+
+static inline u16 kvm_read_gs(void)
+{
+ u16 seg;
+ asm("mov %%gs, %0" : "=g"(seg));
+ return seg;
+}
+
static inline u16 kvm_read_ldt(void)
{
u16 ldt;
return ldt;
}
+static inline void kvm_load_fs(u16 sel)
+{
+ asm("mov %0, %%fs" : : "rm"(sel));
+}
+
+static inline void kvm_load_gs(u16 sel)
+{
+ asm("mov %0, %%gs" : : "rm"(sel));
+}
+
static inline void kvm_load_ldt(u16 sel)
{
asm("lldt %0" : : "rm"(sel));
#define MSR_AMD64_PATCH_LEVEL 0x0000008b
#define MSR_AMD64_NB_CFG 0xc001001f
#define MSR_AMD64_PATCH_LOADER 0xc0010020
-#define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140
-#define MSR_AMD64_OSVW_STATUS 0xc0010141
-#define MSR_AMD64_DC_CFG 0xc0011022
#define MSR_AMD64_IBSFETCHCTL 0xc0011030
#define MSR_AMD64_IBSFETCHLINAD 0xc0011031
#define MSR_AMD64_IBSFETCHPHYSAD 0xc0011032
#define FAM10H_MMIO_CONF_BUSRANGE_SHIFT 2
#define FAM10H_MMIO_CONF_BASE_MASK 0xfffffff
#define FAM10H_MMIO_CONF_BASE_SHIFT 20
-#define MSR_FAM10H_NODE_ID 0xc001100c
/* K8 MSRs */
#define MSR_K8_TOP_MEM1 0xc001001a
#define MSR_IA32_EBL_CR_POWERON 0x0000002a
#define MSR_IA32_FEATURE_CONTROL 0x0000003a
-#define FEATURE_CONTROL_LOCKED (1<<0)
-#define FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX (1<<1)
-#define FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX (1<<2)
+#define FEATURE_CONTROL_LOCKED (1<<0)
+#define FEATURE_CONTROL_VMXON_ENABLED (1<<2)
#define MSR_IA32_APICBASE 0x0000001b
#define MSR_IA32_APICBASE_BSP (1<<8)
static inline void paravirt_release_pud(unsigned long pfn) {}
#endif
-/*
- * Flags to use when allocating a user page table page.
- */
-extern gfp_t __userpte_alloc_gfp;
-
/*
* Allocate and free page tables.
*/
struct vm_area_struct;
extern pgd_t swapper_pg_dir[1024];
-extern pgd_t trampoline_pg_dir[1024];
static inline void pgtable_cache_init(void) { }
static inline void check_pgt_cache(void) { }
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/lockdep.h>
-#include <asm/asm.h>
struct rwsem_waiter;
/*
* the semaphore definition
- *
- * The bias values and the counter type limits the number of
- * potential readers/writers to 32767 for 32 bits and 2147483647
- * for 64 bits.
*/
-#ifdef CONFIG_X86_64
-# define RWSEM_ACTIVE_MASK 0xffffffffL
-#else
-# define RWSEM_ACTIVE_MASK 0x0000ffffL
-#endif
-
-#define RWSEM_UNLOCKED_VALUE 0x00000000L
-#define RWSEM_ACTIVE_BIAS 0x00000001L
-#define RWSEM_WAITING_BIAS (-RWSEM_ACTIVE_MASK-1)
+#define RWSEM_UNLOCKED_VALUE 0x00000000
+#define RWSEM_ACTIVE_BIAS 0x00000001
+#define RWSEM_ACTIVE_MASK 0x0000ffff
+#define RWSEM_WAITING_BIAS (-0x00010000)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-typedef signed long rwsem_count_t;
-
struct rw_semaphore {
- rwsem_count_t count;
+ signed long count;
spinlock_t wait_lock;
struct list_head wait_list;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
static inline void __down_read(struct rw_semaphore *sem)
{
asm volatile("# beginning down_read\n\t"
- LOCK_PREFIX _ASM_INC "(%1)\n\t"
+ LOCK_PREFIX " incl (%%eax)\n\t"
/* adds 0x00000001, returns the old value */
" jns 1f\n"
" call call_rwsem_down_read_failed\n"
*/
static inline int __down_read_trylock(struct rw_semaphore *sem)
{
- rwsem_count_t result, tmp;
+ __s32 result, tmp;
asm volatile("# beginning __down_read_trylock\n\t"
- " mov %0,%1\n\t"
+ " movl %0,%1\n\t"
"1:\n\t"
- " mov %1,%2\n\t"
- " add %3,%2\n\t"
+ " movl %1,%2\n\t"
+ " addl %3,%2\n\t"
" jle 2f\n\t"
- LOCK_PREFIX " cmpxchg %2,%0\n\t"
+ LOCK_PREFIX " cmpxchgl %2,%0\n\t"
" jnz 1b\n\t"
"2:\n\t"
"# ending __down_read_trylock\n\t"
*/
static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
{
- rwsem_count_t tmp;
+ int tmp;
tmp = RWSEM_ACTIVE_WRITE_BIAS;
asm volatile("# beginning down_write\n\t"
- LOCK_PREFIX " xadd %1,(%2)\n\t"
+ LOCK_PREFIX " xadd %%edx,(%%eax)\n\t"
/* subtract 0x0000ffff, returns the old value */
- " test %1,%1\n\t"
+ " testl %%edx,%%edx\n\t"
/* was the count 0 before? */
" jz 1f\n"
" call call_rwsem_down_write_failed\n"
*/
static inline int __down_write_trylock(struct rw_semaphore *sem)
{
- rwsem_count_t ret = cmpxchg(&sem->count,
- RWSEM_UNLOCKED_VALUE,
- RWSEM_ACTIVE_WRITE_BIAS);
+ signed long ret = cmpxchg(&sem->count,
+ RWSEM_UNLOCKED_VALUE,
+ RWSEM_ACTIVE_WRITE_BIAS);
if (ret == RWSEM_UNLOCKED_VALUE)
return 1;
return 0;
*/
static inline void __up_read(struct rw_semaphore *sem)
{
- rwsem_count_t tmp = -RWSEM_ACTIVE_READ_BIAS;
+ __s32 tmp = -RWSEM_ACTIVE_READ_BIAS;
asm volatile("# beginning __up_read\n\t"
- LOCK_PREFIX " xadd %1,(%2)\n\t"
+ LOCK_PREFIX " xadd %%edx,(%%eax)\n\t"
/* subtracts 1, returns the old value */
" jns 1f\n\t"
" call call_rwsem_wake\n"
*/
static inline void __up_write(struct rw_semaphore *sem)
{
- rwsem_count_t tmp;
asm volatile("# beginning __up_write\n\t"
- LOCK_PREFIX " xadd %1,(%2)\n\t"
+ " movl %2,%%edx\n\t"
+ LOCK_PREFIX " xaddl %%edx,(%%eax)\n\t"
/* tries to transition
0xffff0001 -> 0x00000000 */
" jz 1f\n"
" call call_rwsem_wake\n"
"1:\n\t"
"# ending __up_write\n"
- : "+m" (sem->count), "=d" (tmp)
- : "a" (sem), "1" (-RWSEM_ACTIVE_WRITE_BIAS)
- : "memory", "cc");
+ : "+m" (sem->count)
+ : "a" (sem), "i" (-RWSEM_ACTIVE_WRITE_BIAS)
+ : "memory", "cc", "edx");
}
/*
static inline void __downgrade_write(struct rw_semaphore *sem)
{
asm volatile("# beginning __downgrade_write\n\t"
- LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t"
- /*
- * transitions 0xZZZZ0001 -> 0xYYYY0001 (i386)
- * 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64)
- */
+ LOCK_PREFIX " addl %2,(%%eax)\n\t"
+ /* transitions 0xZZZZ0001 -> 0xYYYY0001 */
" jns 1f\n\t"
" call call_rwsem_downgrade_wake\n"
"1:\n\t"
"# ending __downgrade_write\n"
: "+m" (sem->count)
- : "a" (sem), "er" (-RWSEM_WAITING_BIAS)
+ : "a" (sem), "i" (-RWSEM_WAITING_BIAS)
: "memory", "cc");
}
/*
* implement atomic add functionality
*/
-static inline void rwsem_atomic_add(rwsem_count_t delta,
- struct rw_semaphore *sem)
+static inline void rwsem_atomic_add(int delta, struct rw_semaphore *sem)
{
- asm volatile(LOCK_PREFIX _ASM_ADD "%1,%0"
+ asm volatile(LOCK_PREFIX "addl %1,%0"
: "+m" (sem->count)
- : "er" (delta));
+ : "ir" (delta));
}
/*
* implement exchange and add functionality
*/
-static inline rwsem_count_t rwsem_atomic_update(rwsem_count_t delta,
- struct rw_semaphore *sem)
+static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
{
- rwsem_count_t tmp = delta;
+ int tmp = delta;
asm volatile(LOCK_PREFIX "xadd %0,%1"
: "+r" (tmp), "+m" (sem->count)
void (*smp_prepare_cpus)(unsigned max_cpus);
void (*smp_cpus_done)(unsigned max_cpus);
- void (*stop_other_cpus)(int wait);
+ void (*smp_send_stop)(void);
void (*smp_send_reschedule)(int cpu);
int (*cpu_up)(unsigned cpu);
static inline void smp_send_stop(void)
{
- smp_ops.stop_other_cpus(0);
-}
-
-static inline void stop_other_cpus(void)
-{
- smp_ops.stop_other_cpus(1);
+ smp_ops.smp_send_stop();
}
static inline void smp_prepare_boot_cpu(void)
void native_cpu_die(unsigned int cpu);
void native_play_dead(void);
void play_dead_common(void);
-void wbinvd_on_cpu(int cpu);
-int wbinvd_on_all_cpus(void);
void native_send_call_func_ipi(const struct cpumask *mask);
void native_send_call_func_single_ipi(int cpu);
{
return cpumask_weight(cpu_callout_mask);
}
-#else /* !CONFIG_SMP */
-#define wbinvd_on_cpu(cpu) wbinvd()
-static inline int wbinvd_on_all_cpus(void)
-{
- wbinvd();
- return 0;
-}
#endif /* CONFIG_SMP */
extern unsigned disabled_cpus __cpuinitdata;
struct saved_context {
u16 es, fs, gs, ss;
unsigned long cr0, cr2, cr3, cr4;
- u64 misc_enable;
- bool misc_enable_saved;
struct desc_ptr gdt;
struct desc_ptr idt;
u16 ldt;
u16 ds, es, fs, gs, ss;
unsigned long gs_base, gs_kernel_base, fs_base;
unsigned long cr0, cr2, cr3, cr4, cr8;
- u64 misc_enable;
- bool misc_enable_saved;
unsigned long efer;
u16 gdt_pad;
u16 gdt_limit;
*
* (Could use an alternative three way for this if there was one.)
*/
-static __always_inline void rdtsc_barrier(void)
+static inline void rdtsc_barrier(void)
{
alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
extern unsigned long init_rsp;
extern unsigned long initial_code;
-extern unsigned long initial_page_table;
extern unsigned long initial_gs;
#define TRAMPOLINE_SIZE roundup(trampoline_end - trampoline_data, PAGE_SIZE)
#define TRAMPOLINE_BASE 0x6000
extern unsigned long setup_trampoline(void);
-extern void __init setup_trampoline_page_table(void);
extern void __init reserve_trampoline_memory(void);
#else
-static inline void setup_trampoline_page_table(void) {}
-static inline void reserve_trampoline_memory(void) {}
+static inline void reserve_trampoline_memory(void) {};
#endif /* CONFIG_X86_TRAMPOLINE */
#endif /* __ASSEMBLY__ */
extern void check_tsc_sync_target(void);
extern int notsc_setup(char *);
-extern void save_sched_clock_state(void);
-extern void restore_sched_clock_state(void);
#endif /* _ASM_X86_TSC_H */
CFLAGS_REMOVE_tsc.o = -pg
CFLAGS_REMOVE_rtc.o = -pg
CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
-CFLAGS_REMOVE_pvclock.o = -pg
-CFLAGS_REMOVE_kvmclock.o = -pg
CFLAGS_REMOVE_ftrace.o = -pg
CFLAGS_REMOVE_early_printk.o = -pg
endif
int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
{
*irq = gsi;
-
-#ifdef CONFIG_X86_IO_APIC
- if (acpi_irq_model == ACPI_IRQ_MODEL_IOAPIC)
- setup_IO_APIC_irq_extra(gsi);
-#endif
-
return 0;
}
plat_gsi = mp_register_gsi(dev, gsi, trigger, polarity);
}
#endif
- irq = plat_gsi;
-
+ acpi_gsi_to_irq(plat_gsi, &irq);
return irq;
}
if (!error) {
acpi_lapic = 1;
+#ifdef CONFIG_X86_BIGSMP
+ generic_bigsmp_probe();
+#endif
/*
* Parse MADT IO-APIC entries
*/
acpi_ioapic = 1;
smp_found_config = 1;
+ if (apic->setup_apic_routing)
+ apic->setup_apic_routing();
}
}
if (error == -EINVAL) {
DMI_MATCH(DMI_PRODUCT_NAME, "Workstation W8000"),
},
},
+ {
+ .callback = force_acpi_ht,
+ .ident = "ASUS P2B-DS",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
+ DMI_MATCH(DMI_BOARD_NAME, "P2B-DS"),
+ },
+ },
{
.callback = force_acpi_ht,
.ident = "ASUS CUR-DLS",
percpu_entry->states[cx->index].eax = cx->address;
percpu_entry->states[cx->index].ecx = MWAIT_ECX_INTERRUPT_BREAK;
}
-
- /*
- * For _CST FFH on Intel, if GAS.access_size bit 1 is cleared,
- * then we should skip checking BM_STS for this C-state.
- * ref: "Intel Processor Vendor-Specific ACPI Interface Specification"
- */
- if ((c->x86_vendor == X86_VENDOR_INTEL) && !(reg->access_size & 0x2))
- cx->bm_sts_skip = 1;
-
return retval;
}
EXPORT_SYMBOL_GPL(acpi_processor_ffh_cstate_probe);
for (i = 0; i <= amd_iommu_last_bdf; ++i) {
if ((domain == NULL && amd_iommu_pd_table[i] == NULL) ||
- (domain != NULL && amd_iommu_pd_table[i] != domain))
+ (amd_iommu_pd_table[i] != domain))
continue;
iommu = amd_iommu_rlookup_table[i];
size_t size,
int dir)
{
- dma_addr_t flush_addr;
dma_addr_t i, start;
unsigned int pages;
(dma_addr + size > dma_dom->aperture_size))
return;
- flush_addr = dma_addr;
pages = iommu_num_pages(dma_addr, size, PAGE_SIZE);
dma_addr &= PAGE_MASK;
start = dma_addr;
dma_ops_free_addresses(dma_dom, dma_addr, pages);
if (amd_iommu_unmap_flush || dma_dom->need_flush) {
- iommu_flush_pages(iommu, dma_dom->domain.id, flush_addr, size);
+ iommu_flush_pages(iommu, dma_dom->domain.id, dma_addr, size);
dma_dom->need_flush = false;
}
}
free_pagetable(domain);
- protection_domain_free(domain);
+ domain_id_free(domain->id);
+
+ kfree(domain);
dom->priv = NULL;
}
iommu->last_device = calc_devid(MMIO_GET_BUS(range),
MMIO_GET_LD(range));
iommu->evt_msi_num = MMIO_MSI_NUM(misc);
-
- if (is_rd890_iommu(iommu->dev)) {
- pci_read_config_dword(iommu->dev, 0xf0, &iommu->cache_cfg[0]);
- pci_read_config_dword(iommu->dev, 0xf4, &iommu->cache_cfg[1]);
- pci_read_config_dword(iommu->dev, 0xf8, &iommu->cache_cfg[2]);
- pci_read_config_dword(iommu->dev, 0xfc, &iommu->cache_cfg[3]);
- }
}
/*
struct ivhd_entry *e;
/*
- * First save the recommended feature enable bits from ACPI
+ * First set the recommended feature enable bits from ACPI
+ * into the IOMMU control registers
+ */
+ h->flags & IVHD_FLAG_HT_TUN_EN_MASK ?
+ iommu_feature_enable(iommu, CONTROL_HT_TUN_EN) :
+ iommu_feature_disable(iommu, CONTROL_HT_TUN_EN);
+
+ h->flags & IVHD_FLAG_PASSPW_EN_MASK ?
+ iommu_feature_enable(iommu, CONTROL_PASSPW_EN) :
+ iommu_feature_disable(iommu, CONTROL_PASSPW_EN);
+
+ h->flags & IVHD_FLAG_RESPASSPW_EN_MASK ?
+ iommu_feature_enable(iommu, CONTROL_RESPASSPW_EN) :
+ iommu_feature_disable(iommu, CONTROL_RESPASSPW_EN);
+
+ h->flags & IVHD_FLAG_ISOC_EN_MASK ?
+ iommu_feature_enable(iommu, CONTROL_ISOC_EN) :
+ iommu_feature_disable(iommu, CONTROL_ISOC_EN);
+
+ /*
+ * make IOMMU memory accesses cache coherent
*/
- iommu->acpi_flags = h->flags;
+ iommu_feature_enable(iommu, CONTROL_COHERENT_EN);
/*
* Done. Now parse the device entries
}
}
-static void iommu_init_flags(struct amd_iommu *iommu)
-{
- iommu->acpi_flags & IVHD_FLAG_HT_TUN_EN_MASK ?
- iommu_feature_enable(iommu, CONTROL_HT_TUN_EN) :
- iommu_feature_disable(iommu, CONTROL_HT_TUN_EN);
-
- iommu->acpi_flags & IVHD_FLAG_PASSPW_EN_MASK ?
- iommu_feature_enable(iommu, CONTROL_PASSPW_EN) :
- iommu_feature_disable(iommu, CONTROL_PASSPW_EN);
-
- iommu->acpi_flags & IVHD_FLAG_RESPASSPW_EN_MASK ?
- iommu_feature_enable(iommu, CONTROL_RESPASSPW_EN) :
- iommu_feature_disable(iommu, CONTROL_RESPASSPW_EN);
-
- iommu->acpi_flags & IVHD_FLAG_ISOC_EN_MASK ?
- iommu_feature_enable(iommu, CONTROL_ISOC_EN) :
- iommu_feature_disable(iommu, CONTROL_ISOC_EN);
-
- /*
- * make IOMMU memory accesses cache coherent
- */
- iommu_feature_enable(iommu, CONTROL_COHERENT_EN);
-}
-
-static void iommu_apply_quirks(struct amd_iommu *iommu)
-{
- if (is_rd890_iommu(iommu->dev)) {
- pci_write_config_dword(iommu->dev, 0xf0, iommu->cache_cfg[0]);
- pci_write_config_dword(iommu->dev, 0xf4, iommu->cache_cfg[1]);
- pci_write_config_dword(iommu->dev, 0xf8, iommu->cache_cfg[2]);
- pci_write_config_dword(iommu->dev, 0xfc, iommu->cache_cfg[3]);
- }
-}
-
/*
* This function finally enables all IOMMUs found in the system after
* they have been initialized
for_each_iommu(iommu) {
iommu_disable(iommu);
- iommu_apply_quirks(iommu);
- iommu_init_flags(iommu);
iommu_set_device_table(iommu);
iommu_enable_command_buffer(iommu);
iommu_enable_event_buffer(iommu);
if (ret)
goto free;
- enable_iommus();
-
if (iommu_pass_through)
ret = amd_iommu_init_passthrough();
else
amd_iommu_init_api();
+ enable_iommus();
+
if (iommu_pass_through)
goto out;
return ret;
free:
- disable_iommus();
-
free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
get_order(MAX_DOMAIN_ID/8));
for (i = 0; i < ARRAY_SIZE(bus_dev_ranges); i++) {
int bus;
int dev_base, dev_limit;
- u32 ctl;
bus = bus_dev_ranges[i].bus;
dev_base = bus_dev_ranges[i].dev_base;
iommu_detected = 1;
gart_iommu_aperture = 1;
- ctl = read_pci_config(bus, slot, 3,
- AMD64_GARTAPERTURECTL);
-
- /*
- * Before we do anything else disable the GART. It may
- * still be enabled if we boot into a crash-kernel here.
- * Reconfiguring the GART while it is enabled could have
- * unknown side-effects.
- */
- ctl &= ~GARTEN;
- write_pci_config(bus, slot, 3, AMD64_GARTAPERTURECTL, ctl);
-
- aper_order = (ctl >> 1) & 7;
+ aper_order = (read_pci_config(bus, slot, 3, AMD64_GARTAPERTURECTL) >> 1) & 7;
aper_size = (32 * 1024 * 1024) << aper_order;
aper_base = read_pci_config(bus, slot, 3, AMD64_GARTAPERTUREBASE) & 0x7fff;
aper_base <<= 25;
#include <asm/smp.h>
#include <asm/mce.h>
#include <asm/kvm_para.h>
-#include <asm/tsc.h>
unsigned int num_processors;
unsigned int value;
/* APIC hasn't been mapped yet */
- if (!x2apic_mode && !apic_phys)
+ if (!apic_phys)
return;
clear_local_APIC();
*/
void __cpuinit setup_local_APIC(void)
{
- unsigned int value, queued;
- int i, j, acked = 0;
- unsigned long long tsc = 0, ntsc;
- long long max_loops = cpu_khz;
-
- if (cpu_has_tsc)
- rdtscll(tsc);
+ unsigned int value;
+ int i, j;
if (disable_apic) {
arch_disable_smp_support();
* the interrupt. Hence a vector might get locked. It was noticed
* for timer irq (vector 0x31). Issue an extra EOI to clear ISR.
*/
- do {
- queued = 0;
- for (i = APIC_ISR_NR - 1; i >= 0; i--)
- queued |= apic_read(APIC_IRR + i*0x10);
-
- for (i = APIC_ISR_NR - 1; i >= 0; i--) {
- value = apic_read(APIC_ISR + i*0x10);
- for (j = 31; j >= 0; j--) {
- if (value & (1<<j)) {
- ack_APIC_irq();
- acked++;
- }
- }
+ for (i = APIC_ISR_NR - 1; i >= 0; i--) {
+ value = apic_read(APIC_ISR + i*0x10);
+ for (j = 31; j >= 0; j--) {
+ if (value & (1<<j))
+ ack_APIC_irq();
}
- if (acked > 256) {
- printk(KERN_ERR "LAPIC pending interrupts after %d EOI\n",
- acked);
- break;
- }
- if (cpu_has_tsc) {
- rdtscll(ntsc);
- max_loops = (cpu_khz << 10) - (ntsc - tsc);
- } else
- max_loops--;
- } while (queued && max_loops > 0);
- WARN_ON(max_loops <= 0);
+ }
/*
* Now that we are all set up, enable the APIC
}
#endif
-#ifndef CONFIG_SMP
enable_IR_x2apic();
+#ifdef CONFIG_X86_64
default_setup_apic_routing();
#endif
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
+#ifdef CONFIG_X86_32
+ switch (boot_cpu_data.x86_vendor) {
+ case X86_VENDOR_INTEL:
+ if (num_processors > 8)
+ def_to_bigsmp = 1;
+ break;
+ case X86_VENDOR_AMD:
+ if (max_physical_apicid >= 8)
+ def_to_bigsmp = 1;
+ }
+#endif
+
#if defined(CONFIG_SMP) || defined(CONFIG_X86_64)
early_per_cpu(x86_cpu_to_apicid, cpu) = apicid;
early_per_cpu(x86_bios_cpu_apicid, cpu) = apicid;
old_cfg = old_desc->chip_data;
- cfg->vector = old_cfg->vector;
- cfg->move_in_progress = old_cfg->move_in_progress;
- cpumask_copy(cfg->domain, old_cfg->domain);
- cpumask_copy(cfg->old_domain, old_cfg->old_domain);
+ memcpy(cfg, old_cfg, sizeof(struct irq_cfg));
init_copy_irq_2_pin(old_cfg, cfg, node);
}
-static void free_irq_cfg(struct irq_cfg *cfg)
+static void free_irq_cfg(struct irq_cfg *old_cfg)
{
- free_cpumask_var(cfg->domain);
- free_cpumask_var(cfg->old_domain);
- kfree(cfg);
+ kfree(old_cfg);
}
void arch_free_chip_data(struct irq_desc *old_desc, struct irq_desc *desc)
irte.dlvry_mode = apic->irq_delivery_mode;
irte.vector = vector;
irte.dest_id = IRTE_DEST(destination);
- irte.redir_hint = 1;
/* Set source-id of interrupt request */
set_ioapic_sid(&irte, apic_id);
static void __init setup_IO_APIC_irqs(void)
{
- int apic_id, pin, idx, irq;
+ int apic_id = 0, pin, idx, irq;
int notcon = 0;
struct irq_desc *desc;
struct irq_cfg *cfg;
apic_printk(APIC_VERBOSE, KERN_DEBUG "init IO_APIC IRQs\n");
- for (apic_id = 0; apic_id < nr_ioapics; apic_id++)
+#ifdef CONFIG_ACPI
+ if (!acpi_disabled && acpi_ioapic) {
+ apic_id = mp_find_ioapic(0);
+ if (apic_id < 0)
+ apic_id = 0;
+ }
+#endif
+
for (pin = 0; pin < nr_ioapic_registers[apic_id]; pin++) {
idx = find_irq_entry(apic_id, pin, mp_INT);
if (idx == -1) {
irq = pin_2_irq(idx, apic_id, pin);
- if ((apic_id > 0) && (irq > 16))
- continue;
-
/*
* Skip the timer IRQ if there's a quirk handler
* installed and if it returns 1:
" (apicid-pin) not connected\n");
}
-/*
- * for the gsit that is not in first ioapic
- * but could not use acpi_register_gsi()
- * like some special sci in IBM x3330
- */
-void setup_IO_APIC_irq_extra(u32 gsi)
-{
- int apic_id = 0, pin, idx, irq;
- int node = cpu_to_node(boot_cpu_id);
- struct irq_desc *desc;
- struct irq_cfg *cfg;
-
- /*
- * Convert 'gsi' to 'ioapic.pin'.
- */
- apic_id = mp_find_ioapic(gsi);
- if (apic_id < 0)
- return;
-
- pin = mp_find_ioapic_pin(apic_id, gsi);
- idx = find_irq_entry(apic_id, pin, mp_INT);
- if (idx == -1)
- return;
-
- irq = pin_2_irq(idx, apic_id, pin);
-#ifdef CONFIG_SPARSE_IRQ
- desc = irq_to_desc(irq);
- if (desc)
- return;
-#endif
- desc = irq_to_desc_alloc_node(irq, node);
- if (!desc) {
- printk(KERN_INFO "can not get irq_desc for %d\n", irq);
- return;
- }
-
- cfg = desc->chip_data;
- add_pin_to_irq_node(cfg, node, apic_id, pin);
-
- if (test_bit(pin, mp_ioapic_routing[apic_id].pin_programmed)) {
- pr_debug("Pin %d-%d already programmed\n",
- mp_ioapics[apic_id].apicid, pin);
- return;
- }
- set_bit(pin, mp_ioapic_routing[apic_id].pin_programmed);
-
- setup_IO_APIC_irq(apic_id, pin, irq, desc,
- irq_trigger(idx), irq_polarity(idx));
-}
-
/*
* Set up the timer pin, possibly with the 8259A-master behind.
*/
struct irq_pin_list *entry;
cfg = desc->chip_data;
- if (!cfg)
- continue;
entry = cfg->irq_2_pin;
if (!entry)
continue;
}
spin_unlock_irqrestore(&vector_lock, flags);
- if (irq > 0)
- dynamic_irq_init_keep_chip_data(irq);
-
+ if (irq > 0) {
+ dynamic_irq_init(irq);
+ /* restore it, in case dynamic_irq_init clear it */
+ if (desc_new)
+ desc_new->chip_data = cfg_new;
+ }
return irq;
}
{
unsigned long flags;
struct irq_cfg *cfg;
+ struct irq_desc *desc;
- dynamic_irq_cleanup_keep_chip_data(irq);
+ /* store it, in case dynamic_irq_cleanup clear it */
+ desc = irq_to_desc(irq);
+ cfg = desc->chip_data;
+ dynamic_irq_cleanup(irq);
+ /* connect back irq_cfg */
+ desc->chip_data = cfg;
free_irte(irq);
spin_lock_irqsave(&vector_lock, flags);
- cfg = irq_to_desc(irq)->chip_data;
__clear_irq_vector(irq, cfg);
spin_unlock_irqrestore(&vector_lock, flags);
}
irte.dlvry_mode = apic->irq_delivery_mode;
irte.vector = cfg->vector;
irte.dest_id = IRTE_DEST(dest);
- irte.redir_hint = 1;
/* Set source-id of interrupt request */
set_msi_sid(&irte, pdev);
cfg = desc->chip_data;
- get_cached_msi_msg_desc(desc, &msg);
+ read_msi_msg_desc(desc, &msg);
msg.data &= ~MSI_DATA_VECTOR_MASK;
msg.data |= MSI_DATA_VECTOR(cfg->vector);
#ifdef CONFIG_SMP
void __init setup_ioapic_dest(void)
{
- int pin, ioapic, irq, irq_entry;
+ int pin, ioapic = 0, irq, irq_entry;
struct irq_desc *desc;
const struct cpumask *mask;
if (skip_ioapic_setup == 1)
return;
- for (ioapic = 0; ioapic < nr_ioapics; ioapic++)
+#ifdef CONFIG_ACPI
+ if (!acpi_disabled && acpi_ioapic) {
+ ioapic = mp_find_ioapic(0);
+ if (ioapic < 0)
+ ioapic = 0;
+ }
+#endif
+
for (pin = 0; pin < nr_ioapic_registers[ioapic]; pin++) {
irq_entry = find_irq_entry(ioapic, pin, mp_INT);
if (irq_entry == -1)
continue;
irq = pin_2_irq(irq_entry, ioapic, pin);
- if ((ioapic > 0) && (irq > 16))
- continue;
-
desc = irq_to_desc(irq);
/*
late_initcall(print_ipi_mode);
void default_setup_apic_routing(void)
-{
- int version = apic_version[boot_cpu_physical_apicid];
-
- if (num_possible_cpus() > 8) {
- switch (boot_cpu_data.x86_vendor) {
- case X86_VENDOR_INTEL:
- if (!APIC_XAPIC(version)) {
- def_to_bigsmp = 0;
- break;
- }
- /* If P4 and above fall through */
- case X86_VENDOR_AMD:
- def_to_bigsmp = 1;
- }
- }
-
-#ifdef CONFIG_X86_BIGSMP
- generic_bigsmp_probe();
-#endif
-
- if (apic->setup_apic_routing)
- apic->setup_apic_routing();
-}
-
-void setup_apic_flat_routing(void)
{
#ifdef CONFIG_X86_IO_APIC
printk(KERN_INFO
.init_apic_ldr = default_init_apic_ldr,
.ioapic_phys_id_map = default_ioapic_phys_id_map,
- .setup_apic_routing = setup_apic_flat_routing,
+ .setup_apic_routing = default_setup_apic_routing,
.multi_timer_check = NULL,
.apicid_to_node = default_apicid_to_node,
.cpu_to_logical_apicid = default_cpu_to_logical_apicid,
}
#endif
- if (apic == &apic_flat && num_possible_cpus() > 8)
- apic = &apic_physflat;
+ if (apic == &apic_flat) {
+ switch (boot_cpu_data.x86_vendor) {
+ case X86_VENDOR_INTEL:
+ if (num_processors > 8)
+ apic = &apic_physflat;
+ break;
+ case X86_VENDOR_AMD:
+ if (max_physical_apicid >= 8)
+ apic = &apic_physflat;
+ }
+ }
printk(KERN_INFO "Setting APIC routing to %s\n", apic->name);
for (j = 0; j < 64; j++) {
if (!test_bit(j, &present))
continue;
- pnode = (i * 64 + j);
- uv_blade_info[blade].pnode = pnode;
+ uv_blade_info[blade].pnode = (i * 64 + j);
uv_blade_info[blade].nr_possible_cpus = 0;
uv_blade_info[blade].nr_online_cpus = 0;
- max_pnode = max(pnode, max_pnode);
blade++;
}
}
uv_cpu_hub_info(cpu)->scir.offset = uv_scir_offset(apicid);
uv_node_to_blade[nid] = blade;
uv_cpu_to_blade[cpu] = blade;
+ max_pnode = max(pnode, max_pnode);
+
+ printk(KERN_DEBUG "UV: cpu %d, apicid 0x%x, pnode %d, nid %d, lcpu %d, blade %d\n",
+ cpu, apicid, pnode, nid, lcpu, blade);
}
/* Add blade/pnode info for nodes without cpus */
pnode = (paddr >> m_val) & pnode_mask;
blade = boot_pnode_to_blade(pnode);
uv_node_to_blade[nid] = blade;
+ max_pnode = max(pnode, max_pnode);
}
map_gru_high(max_pnode);
/*
* Fixup core topology information for AMD multi-node processors.
- * Assumption: Number of cores in each internal node is the same.
+ * Assumption 1: Number of cores in each internal node is the same.
+ * Assumption 2: Mixed systems with both single-node and dual-node
+ * processors are not supported.
*/
#ifdef CONFIG_X86_HT
static void __cpuinit amd_fixup_dcm(struct cpuinfo_x86 *c)
{
- unsigned long long value;
- u32 nodes, cores_per_node;
+#ifdef CONFIG_PCI
+ u32 t, cpn;
+ u8 n, n_id;
int cpu = smp_processor_id();
- if (!cpu_has(c, X86_FEATURE_NODEID_MSR))
- return;
-
/* fixup topology information only once for a core */
if (cpu_has(c, X86_FEATURE_AMD_DCM))
return;
- rdmsrl(MSR_FAM10H_NODE_ID, value);
-
- nodes = ((value >> 3) & 7) + 1;
- if (nodes == 1)
+ /* check for multi-node processor on boot cpu */
+ t = read_pci_config(0, 24, 3, 0xe8);
+ if (!(t & (1 << 29)))
return;
set_cpu_cap(c, X86_FEATURE_AMD_DCM);
- cores_per_node = c->x86_max_cores / nodes;
- /* store NodeID, use llc_shared_map to store sibling info */
- per_cpu(cpu_llc_id, cpu) = value & 7;
+ /* cores per node: each internal node has half the number of cores */
+ cpn = c->x86_max_cores >> 1;
- /* fixup core id to be in range from 0 to (cores_per_node - 1) */
- c->cpu_core_id = c->cpu_core_id % cores_per_node;
+ /* even-numbered NB_id of this dual-node processor */
+ n = c->phys_proc_id << 1;
+
+ /*
+ * determine internal node id and assign cores fifty-fifty to
+ * each node of the dual-node processor
+ */
+ t = read_pci_config(0, 24 + n, 3, 0xe8);
+ n = (t>>30) & 0x3;
+ if (n == 0) {
+ if (c->cpu_core_id < cpn)
+ n_id = 0;
+ else
+ n_id = 1;
+ } else {
+ if (c->cpu_core_id < cpn)
+ n_id = 1;
+ else
+ n_id = 0;
+ }
+
+ /* compute entire NodeID, use llc_shared_map to store sibling info */
+ per_cpu(cpu_llc_id, cpu) = (c->phys_proc_id << 1) + n_id;
+
+ /* fixup core id to be in range from 0 to cpn */
+ c->cpu_core_id = c->cpu_core_id % cpn;
+#endif
}
#endif
}
}
-void __cpuinit get_cpu_cap(struct cpuinfo_x86 *c)
+static void __cpuinit get_cpu_cap(struct cpuinfo_x86 *c)
{
u32 tfms, xlvl;
u32 ebx;
if (c->extended_cpuid_level >= 0x80000007)
c->x86_power = cpuid_edx(0x80000007);
- init_scattered_cpuid_features(c);
}
static void __cpuinit identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
get_model_name(c); /* Default name */
+ init_scattered_cpuid_features(c);
detect_nopl(c);
}
*const __x86_cpu_dev_end[];
extern void display_cacheinfo(struct cpuinfo_x86 *c);
-extern void get_cpu_cap(struct cpuinfo_x86 *c);
#endif
per_cpu(drv_data, policy->cpu) = NULL;
acpi_processor_unregister_performance(data->acpi_data,
policy->cpu);
- kfree(data->freq_table);
kfree(data);
}
powernow_table[i].index = index;
/* Frequency may be rounded for these */
- if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
- || boot_cpu_data.x86 == 0x11) {
+ if (boot_cpu_data.x86 == 0x10 || boot_cpu_data.x86 == 0x11) {
powernow_table[i].frequency =
freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7);
} else
misc_enable &= ~MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
c->cpuid_level = cpuid_eax(0);
- get_cpu_cap(c);
}
}
(c->x86 == 0x6 && c->x86_model >= 0x0e))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
- /*
- * Atom erratum AAE44/AAF40/AAG38/AAH41:
- *
- * A race condition between speculative fetches and invalidating
- * a large page. This is worked around in microcode, but we
- * need the microcode to have already been loaded... so if it is
- * not, recommend a BIOS update and disable large pages.
- */
- if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_mask <= 2) {
- u32 ucode, junk;
-
- wrmsr(MSR_IA32_UCODE_REV, 0, 0);
- sync_core();
- rdmsr(MSR_IA32_UCODE_REV, junk, ucode);
-
- if (ucode < 0x20e) {
- printk(KERN_WARNING "Atom PSE erratum detected, BIOS microcode update recommended\n");
- clear_cpu_cap(c, X86_FEATURE_PSE);
- }
- }
-
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#else
if (c->x86_power & (1 << 8)) {
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
- if (!check_tsc_unstable())
- sched_clock_stable = 1;
+ sched_clock_stable = 1;
}
/*
#include <asm/processor.h>
#include <linux/smp.h>
#include <asm/k8.h>
-#include <asm/smp.h>
#define LVL_1_INST 1
#define LVL_1_DATA 2
union _cpuid4_leaf_ebx ebx;
union _cpuid4_leaf_ecx ecx;
unsigned long size;
- bool can_disable;
- unsigned int l3_indices;
+ unsigned long can_disable;
DECLARE_BITMAP(shared_cpu_map, NR_CPUS);
};
union _cpuid4_leaf_ebx ebx;
union _cpuid4_leaf_ecx ecx;
unsigned long size;
- bool can_disable;
- unsigned int l3_indices;
+ unsigned long can_disable;
};
unsigned short num_cache_leaves;
(ebx->split.ways_of_associativity + 1) - 1;
}
-struct _cache_attr {
- struct attribute attr;
- ssize_t (*show)(struct _cpuid4_info *, char *);
- ssize_t (*store)(struct _cpuid4_info *, const char *, size_t count);
-};
-
-#ifdef CONFIG_CPU_SUP_AMD
-static unsigned int __cpuinit amd_calc_l3_indices(void)
-{
- /*
- * We're called over smp_call_function_single() and therefore
- * are on the correct cpu.
- */
- int cpu = smp_processor_id();
- int node = cpu_to_node(cpu);
- struct pci_dev *dev = node_to_k8_nb_misc(node);
- unsigned int sc0, sc1, sc2, sc3;
- u32 val = 0;
-
- pci_read_config_dword(dev, 0x1C4, &val);
-
- /* calculate subcache sizes */
- sc0 = !(val & BIT(0));
- sc1 = !(val & BIT(4));
- sc2 = !(val & BIT(8)) + !(val & BIT(9));
- sc3 = !(val & BIT(12)) + !(val & BIT(13));
-
- return (max(max(max(sc0, sc1), sc2), sc3) << 10) - 1;
-}
-
static void __cpuinit
amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf)
{
if (boot_cpu_data.x86 == 0x11)
return;
- /* see errata #382 and #388 */
- if ((boot_cpu_data.x86 == 0x10) &&
- ((boot_cpu_data.x86_model < 0x8) ||
- (boot_cpu_data.x86_mask < 0x1)))
+ /* see erratum #382 */
+ if ((boot_cpu_data.x86 == 0x10) && (boot_cpu_data.x86_model < 0x8))
return;
- /* not in virtualized environments */
- if (num_k8_northbridges == 0)
- return;
-
- this_leaf->can_disable = true;
- this_leaf->l3_indices = amd_calc_l3_indices();
-}
-
-static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf,
- unsigned int index)
-{
- int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
- int node = amd_get_nb_id(cpu);
- struct pci_dev *dev = node_to_k8_nb_misc(node);
- unsigned int reg = 0;
-
- if (!this_leaf->can_disable)
- return -EINVAL;
-
- if (!dev)
- return -EINVAL;
-
- pci_read_config_dword(dev, 0x1BC + index * 4, &reg);
- return sprintf(buf, "0x%08x\n", reg);
-}
-
-#define SHOW_CACHE_DISABLE(index) \
-static ssize_t \
-show_cache_disable_##index(struct _cpuid4_info *this_leaf, char *buf) \
-{ \
- return show_cache_disable(this_leaf, buf, index); \
-}
-SHOW_CACHE_DISABLE(0)
-SHOW_CACHE_DISABLE(1)
-
-static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
- const char *buf, size_t count, unsigned int index)
-{
- int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
- int node = amd_get_nb_id(cpu);
- struct pci_dev *dev = node_to_k8_nb_misc(node);
- unsigned long val = 0;
-
-#define SUBCACHE_MASK (3UL << 20)
-#define SUBCACHE_INDEX 0xfff
-
- if (!this_leaf->can_disable)
- return -EINVAL;
-
- if (!capable(CAP_SYS_ADMIN))
- return -EPERM;
-
- if (!dev)
- return -EINVAL;
-
- if (strict_strtoul(buf, 10, &val) < 0)
- return -EINVAL;
-
- /* do not allow writes outside of allowed bits */
- if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) ||
- ((val & SUBCACHE_INDEX) > this_leaf->l3_indices))
- return -EINVAL;
-
- val |= BIT(30);
- pci_write_config_dword(dev, 0x1BC + index * 4, val);
- /*
- * We need to WBINVD on a core on the node containing the L3 cache which
- * indices we disable therefore a simple wbinvd() is not sufficient.
- */
- wbinvd_on_cpu(cpu);
- pci_write_config_dword(dev, 0x1BC + index * 4, val | BIT(31));
- return count;
+ this_leaf->can_disable = 1;
}
-#define STORE_CACHE_DISABLE(index) \
-static ssize_t \
-store_cache_disable_##index(struct _cpuid4_info *this_leaf, \
- const char *buf, size_t count) \
-{ \
- return store_cache_disable(this_leaf, buf, count, index); \
-}
-STORE_CACHE_DISABLE(0)
-STORE_CACHE_DISABLE(1)
-
-static struct _cache_attr cache_disable_0 = __ATTR(cache_disable_0, 0644,
- show_cache_disable_0, store_cache_disable_0);
-static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644,
- show_cache_disable_1, store_cache_disable_1);
-
-#else /* CONFIG_CPU_SUP_AMD */
-static void __cpuinit
-amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf)
-{
-};
-#endif /* CONFIG_CPU_SUP_AMD */
-
static int
__cpuinit cpuid4_cache_lookup_regs(int index,
struct _cpuid4_info_regs *this_leaf)
{
struct _cpuid4_info *this_leaf, *sibling_leaf;
unsigned long num_threads_sharing;
- int index_msb, i, sibling;
+ int index_msb, i;
struct cpuinfo_x86 *c = &cpu_data(cpu);
if ((index == 3) && (c->x86_vendor == X86_VENDOR_AMD)) {
- for_each_cpu(i, c->llc_shared_map) {
+ struct cpuinfo_x86 *d;
+ for_each_online_cpu(i) {
if (!per_cpu(cpuid4_info, i))
continue;
+ d = &cpu_data(i);
this_leaf = CPUID4_INFO_IDX(i, index);
- for_each_cpu(sibling, c->llc_shared_map) {
- if (!cpu_online(sibling))
- continue;
- set_bit(sibling, this_leaf->shared_cpu_map);
- }
+ cpumask_copy(to_cpumask(this_leaf->shared_cpu_map),
+ d->llc_shared_map);
}
return;
}
#define to_object(k) container_of(k, struct _index_kobject, kobj)
#define to_attr(a) container_of(a, struct _cache_attr, attr)
+static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf,
+ unsigned int index)
+{
+ int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
+ int node = cpu_to_node(cpu);
+ struct pci_dev *dev = node_to_k8_nb_misc(node);
+ unsigned int reg = 0;
+
+ if (!this_leaf->can_disable)
+ return -EINVAL;
+
+ if (!dev)
+ return -EINVAL;
+
+ pci_read_config_dword(dev, 0x1BC + index * 4, &reg);
+ return sprintf(buf, "%x\n", reg);
+}
+
+#define SHOW_CACHE_DISABLE(index) \
+static ssize_t \
+show_cache_disable_##index(struct _cpuid4_info *this_leaf, char *buf) \
+{ \
+ return show_cache_disable(this_leaf, buf, index); \
+}
+SHOW_CACHE_DISABLE(0)
+SHOW_CACHE_DISABLE(1)
+
+static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
+ const char *buf, size_t count, unsigned int index)
+{
+ int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
+ int node = cpu_to_node(cpu);
+ struct pci_dev *dev = node_to_k8_nb_misc(node);
+ unsigned long val = 0;
+ unsigned int scrubber = 0;
+
+ if (!this_leaf->can_disable)
+ return -EINVAL;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (strict_strtoul(buf, 10, &val) < 0)
+ return -EINVAL;
+
+ val |= 0xc0000000;
+
+ pci_read_config_dword(dev, 0x58, &scrubber);
+ scrubber &= ~0x1f000000;
+ pci_write_config_dword(dev, 0x58, scrubber);
+
+ pci_write_config_dword(dev, 0x1BC + index * 4, val & ~0x40000000);
+ wbinvd();
+ pci_write_config_dword(dev, 0x1BC + index * 4, val);
+ return count;
+}
+
+#define STORE_CACHE_DISABLE(index) \
+static ssize_t \
+store_cache_disable_##index(struct _cpuid4_info *this_leaf, \
+ const char *buf, size_t count) \
+{ \
+ return store_cache_disable(this_leaf, buf, count, index); \
+}
+STORE_CACHE_DISABLE(0)
+STORE_CACHE_DISABLE(1)
+
+struct _cache_attr {
+ struct attribute attr;
+ ssize_t (*show)(struct _cpuid4_info *, char *);
+ ssize_t (*store)(struct _cpuid4_info *, const char *, size_t count);
+};
+
#define define_one_ro(_name) \
static struct _cache_attr _name = \
__ATTR(_name, 0444, show_##_name, NULL)
define_one_ro(shared_cpu_map);
define_one_ro(shared_cpu_list);
-#define DEFAULT_SYSFS_CACHE_ATTRS \
- &type.attr, \
- &level.attr, \
- &coherency_line_size.attr, \
- &physical_line_partition.attr, \
- &ways_of_associativity.attr, \
- &number_of_sets.attr, \
- &size.attr, \
- &shared_cpu_map.attr, \
- &shared_cpu_list.attr
+static struct _cache_attr cache_disable_0 = __ATTR(cache_disable_0, 0644,
+ show_cache_disable_0, store_cache_disable_0);
+static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644,
+ show_cache_disable_1, store_cache_disable_1);
static struct attribute *default_attrs[] = {
- DEFAULT_SYSFS_CACHE_ATTRS,
- NULL
-};
-
-static struct attribute *default_l3_attrs[] = {
- DEFAULT_SYSFS_CACHE_ATTRS,
-#ifdef CONFIG_CPU_SUP_AMD
+ &type.attr,
+ &level.attr,
+ &coherency_line_size.attr,
+ &physical_line_partition.attr,
+ &ways_of_associativity.attr,
+ &number_of_sets.attr,
+ &size.attr,
+ &shared_cpu_map.attr,
+ &shared_cpu_list.attr,
&cache_disable_0.attr,
&cache_disable_1.attr,
-#endif
NULL
};
unsigned int cpu = sys_dev->id;
unsigned long i, j;
struct _index_kobject *this_object;
- struct _cpuid4_info *this_leaf;
int retval;
retval = cpuid4_cache_sysfs_init(cpu);
this_object = INDEX_KOBJECT_PTR(cpu, i);
this_object->cpu = cpu;
this_object->index = i;
-
- this_leaf = CPUID4_INFO_IDX(cpu, i);
-
- if (this_leaf->can_disable)
- ktype_cache.default_attrs = default_l3_attrs;
- else
- ktype_cache.default_attrs = default_attrs;
-
retval = kobject_init_and_add(&(this_object->kobj),
&ktype_cache,
per_cpu(cache_kobject, cpu),
address = (low & MASK_BLKPTR_LO) >> 21;
if (!address)
break;
-
address += MCG_XBLK_ADDR;
} else
++address;
if (rdmsr_safe(address, &low, &high))
break;
- if (!(high & MASK_VALID_HI))
- continue;
+ if (!(high & MASK_VALID_HI)) {
+ if (block)
+ continue;
+ else
+ break;
+ }
if (!(high & MASK_CNTP_HI) ||
(high & MASK_LOCKED_HI))
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
return 0;
- if (boot_cpu_data.x86 < 0xf)
+ if (boot_cpu_data.x86 < 0xf || boot_cpu_data.x86 > 0x11)
return 0;
/* In case some hypervisor doesn't pass SYSCFG through: */
if (rdmsr_safe(MSR_K8_SYSCFG, &l, &h) < 0)
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX];
-static const u64 westmere_hw_cache_event_ids
- [PERF_COUNT_HW_CACHE_MAX]
- [PERF_COUNT_HW_CACHE_OP_MAX]
- [PERF_COUNT_HW_CACHE_RESULT_MAX] =
-{
- [ C(L1D) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x010b, /* MEM_INST_RETIRED.LOADS */
- [ C(RESULT_MISS) ] = 0x0151, /* L1D.REPL */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = 0x020b, /* MEM_INST_RETIRED.STORES */
- [ C(RESULT_MISS) ] = 0x0251, /* L1D.M_REPL */
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = 0x014e, /* L1D_PREFETCH.REQUESTS */
- [ C(RESULT_MISS) ] = 0x024e, /* L1D_PREFETCH.MISS */
- },
- },
- [ C(L1I ) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x0380, /* L1I.READS */
- [ C(RESULT_MISS) ] = 0x0280, /* L1I.MISSES */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = -1,
- [ C(RESULT_MISS) ] = -1,
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = 0x0,
- [ C(RESULT_MISS) ] = 0x0,
- },
- },
- [ C(LL ) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x0324, /* L2_RQSTS.LOADS */
- [ C(RESULT_MISS) ] = 0x0224, /* L2_RQSTS.LD_MISS */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = 0x0c24, /* L2_RQSTS.RFOS */
- [ C(RESULT_MISS) ] = 0x0824, /* L2_RQSTS.RFO_MISS */
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = 0x4f2e, /* LLC Reference */
- [ C(RESULT_MISS) ] = 0x412e, /* LLC Misses */
- },
- },
- [ C(DTLB) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x010b, /* MEM_INST_RETIRED.LOADS */
- [ C(RESULT_MISS) ] = 0x0108, /* DTLB_LOAD_MISSES.ANY */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = 0x020b, /* MEM_INST_RETIRED.STORES */
- [ C(RESULT_MISS) ] = 0x010c, /* MEM_STORE_RETIRED.DTLB_MISS */
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = 0x0,
- [ C(RESULT_MISS) ] = 0x0,
- },
- },
- [ C(ITLB) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x01c0, /* INST_RETIRED.ANY_P */
- [ C(RESULT_MISS) ] = 0x0185, /* ITLB_MISSES.ANY */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = -1,
- [ C(RESULT_MISS) ] = -1,
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = -1,
- [ C(RESULT_MISS) ] = -1,
- },
- },
- [ C(BPU ) ] = {
- [ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x00c4, /* BR_INST_RETIRED.ALL_BRANCHES */
- [ C(RESULT_MISS) ] = 0x03e8, /* BPU_CLEARS.ANY */
- },
- [ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = -1,
- [ C(RESULT_MISS) ] = -1,
- },
- [ C(OP_PREFETCH) ] = {
- [ C(RESULT_ACCESS) ] = -1,
- [ C(RESULT_MISS) ] = -1,
- },
- },
-};
-
static const u64 nehalem_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
if (atomic_read(&active_events) == 0) {
if (!reserve_pmc_hardware())
err = -EBUSY;
- else {
+ else
err = reserve_bts_hardware();
- if (err)
- release_pmc_hardware();
- }
}
if (!err)
atomic_inc(&active_events);
* Install the hw-cache-events table:
*/
switch (boot_cpu_data.x86_model) {
-
case 15: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
case 22: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
case 23: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
pr_cont("Core2 events, ");
break;
default:
- case 26: /* 45 nm nehalem, "Bloomfield" */
- case 30: /* 45 nm nehalem, "Lynnfield" */
- case 46: /* 45 nm nehalem-ex, "Beckton" */
+ case 26:
memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
pr_cont("Atom events, ");
break;
-
- case 37: /* 32 nm nehalem, "Clarkdale" */
- case 44: /* 32 nm nehalem, "Gulftown" */
- memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids,
- sizeof(hw_cache_event_ids));
-
- pr_cont("Westmere events, ");
- break;
}
return 0;
}
*/
#include <linux/dmi.h>
-#include <linux/jiffies.h>
#include <asm/div64.h>
#include <asm/vmware.h>
#include <asm/x86_init.h>
static unsigned long vmware_get_tsc_khz(void)
{
- uint64_t tsc_hz, lpj;
+ uint64_t tsc_hz;
uint32_t eax, ebx, ecx, edx;
VMWARE_PORT(GETHZ, eax, ebx, ecx, edx);
printk(KERN_INFO "TSC freq read from hypervisor : %lu.%03lu MHz\n",
(unsigned long) tsc_hz / 1000,
(unsigned long) tsc_hz % 1000);
-
- if (!preset_lpj) {
- lpj = ((u64)tsc_hz * 1000);
- do_div(lpj, HZ);
- preset_lpj = lpj;
- }
-
return tsc_hz;
}
#include <asm/cpu.h>
#include <asm/reboot.h>
#include <asm/virtext.h>
+#include <asm/iommu.h>
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
#ifdef CONFIG_HPET_TIMER
hpet_disable();
#endif
+
+#ifdef CONFIG_X86_64
+ pci_iommu_shutdown();
+#endif
+
crash_save_cpu(regs, safe_smp_processor_id());
}
if (!csize)
return 0;
- vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+ vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
if (!vaddr)
return -ENOMEM;
} else
memcpy(buf, vaddr + offset, csize);
- set_iounmap_nonlazy();
iounmap(vaddr);
return csize;
}
/*
* Enable paging
*/
- movl pa(initial_page_table), %eax
+ movl $pa(swapper_pg_dir),%eax
movl %eax,%cr3 /* set the page table pointer.. */
movl %cr0,%eax
orl $X86_CR0_PG,%eax
.align 4
ENTRY(initial_code)
.long i386_start_kernel
-ENTRY(initial_page_table)
- .long pa(swapper_pg_dir)
/*
* BSS section
#endif
swapper_pg_fixmap:
.fill 1024,4,0
-#ifdef CONFIG_X86_TRAMPOLINE
-ENTRY(trampoline_pg_dir)
- .fill 1024,4,0
-#endif
ENTRY(empty_zero_page)
.fill 4096,1,0
hpet_writel(cnt, HPET_Tn_CMP(timer));
/*
- * We need to read back the CMP register on certain HPET
- * implementations (ATI chipsets) which seem to delay the
- * transfer of the compare register into the internal compare
- * logic. With small deltas this might actually be too late as
- * the counter could already be higher than the compare value
- * at that point and we would wait for the next hpet interrupt
- * forever. We found out that reading the CMP register back
- * forces the transfer so we can rely on the comparison with
- * the counter register below. If the read back from the
- * compare register does not match the value we programmed
- * then we might have a real hardware problem. We can not do
- * much about it here, but at least alert the user/admin with
- * a prominent warning.
- * An erratum on some chipsets (ICH9,..), results in comparator read
- * immediately following a write returning old value. Workaround
- * for this is to read this value second time, when first
- * read returns old value.
+ * We need to read back the CMP register to make sure that
+ * what we wrote hit the chip before we compare it to the
+ * counter.
*/
- if (unlikely((u32)hpet_readl(HPET_Tn_CMP(timer)) != cnt)) {
- WARN_ONCE((u32)hpet_readl(HPET_Tn_CMP(timer)) != cnt,
- KERN_WARNING "hpet: compare register read back failed.\n");
- }
+ WARN_ON_ONCE((u32)hpet_readl(HPET_Tn_CMP(timer)) != cnt);
return (s32)((u32)hpet_readl(HPET_COUNTER) - cnt) >= 0 ? -ETIME : 0;
}
{
unsigned int irq;
- irq = create_irq_nr(0, -1);
+ irq = create_irq();
if (!irq)
return -EINVAL;
void hpet_disable(void)
{
- if (is_hpet_capable() && hpet_virt_address) {
+ if (is_hpet_capable()) {
unsigned long cfg = hpet_readl(HPET_CFG);
if (hpet_legacy_int_enabled) {
}
EXPORT_SYMBOL_GPL(k8_flush_garts);
-static __init int init_k8_nbs(void)
-{
- int err = 0;
-
- err = cache_k8_northbridges();
-
- if (err < 0)
- printk(KERN_NOTICE "K8 NB: Cannot enumerate AMD northbridges.\n");
-
- return err;
-}
-
-/* This has to go after the PCI subsystem */
-fs_initcall(init_k8_nbs);
x86_init.mpparse.mpc_record(1);
}
+#ifdef CONFIG_X86_BIGSMP
+ generic_bigsmp_probe();
+#endif
+
+ if (apic->setup_apic_routing)
+ apic->setup_apic_routing();
+
if (!num_processors)
printk(KERN_ERR "MPTABLE: no processors registered!\n");
return num_processors;
unsigned long flags;
int ret = -EIO;
int i;
- int restarts = 0;
spin_lock_irqsave(&ec_lock, flags);
if (wait_on_obf(0x6c, 1)) {
printk(KERN_ERR "olpc-ec: timeout waiting for"
" EC to provide data!\n");
- if (restarts++ < 10)
- goto restart;
- goto err;
+ goto restart;
}
outbuf[i] = inb(0x68);
printk(KERN_DEBUG "olpc-ec: received 0x%x\n",
#define PMR_SOFTSTOPFAULT 0x40000000
#define PMR_HARDSTOP 0x20000000
-/*
- * The maximum PHB bus number.
- * x3950M2 (rare): 8 chassis, 48 PHBs per chassis = 384
- * x3950M2: 4 chassis, 48 PHBs per chassis = 192
- * x3950 (PCIE): 8 chassis, 32 PHBs per chassis = 256
- * x3950 (PCIX): 8 chassis, 16 PHBs per chassis = 128
- */
-#define MAX_PHB_BUS_NUM 256
-
-#define PHBS_PER_CALGARY 4
+#define MAX_NUM_OF_PHBS 8 /* how many PHBs in total? */
+#define MAX_NUM_CHASSIS 8 /* max number of chassis */
+/* MAX_PHB_BUS_NUM is the maximal possible dev->bus->number */
+#define MAX_PHB_BUS_NUM (MAX_NUM_OF_PHBS * MAX_NUM_CHASSIS * 2)
+#define PHBS_PER_CALGARY 4
/* register offsets in Calgary's internal register space */
static const unsigned long tar_offsets[] = {
struct iommu_table *tbl;
int ret;
+ BUG_ON(dev->bus->number >= MAX_PHB_BUS_NUM);
+
bbar = busno_to_bbar(dev->bus->number);
ret = calgary_setup_tar(dev, bbar);
if (ret)
enable_gart_translation(dev, __pa(agp_gatt_table));
}
-
- /* Flush the GART-TLB to remove stale entries */
- k8_flush_garts();
}
/*
unsigned long scratch;
long i;
- if (num_k8_northbridges == 0)
+ if (cache_k8_northbridges() < 0 || num_k8_northbridges == 0)
return;
#ifndef CONFIG_AGP_AMD64
}
/*
- * Check for AMD CPUs, where APIC timer interrupt does not wake up CPU from C1e.
- * For more information see
- * - Erratum #400 for NPT family 0xf and family 0x10 CPUs
- * - Erratum #365 for family 0x11 (not affected because C1e not in use)
+ * Check for AMD CPUs, which have potentially C1E support
*/
static int __cpuinit check_c1e_idle(const struct cpuinfo_x86 *c)
{
- u64 val;
if (c->x86_vendor != X86_VENDOR_AMD)
- goto no_c1e_idle;
+ return 0;
- /* Family 0x0f models < rev F do not have C1E */
- if (c->x86 == 0x0F && c->x86_model >= 0x40)
- return 1;
+ if (c->x86 < 0x0F)
+ return 0;
- if (c->x86 == 0x10) {
- /*
- * check OSVW bit for CPUs that are not affected
- * by erratum #400
- */
- if (cpu_has(c, X86_FEATURE_OSVW)) {
- rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val);
- if (val >= 2) {
- rdmsrl(MSR_AMD64_OSVW_STATUS, val);
- if (!(val & BIT(1)))
- goto no_c1e_idle;
- }
- }
- return 1;
- }
+ /* Family 0x0f models < rev F do not have C1E */
+ if (c->x86 == 0x0f && c->x86_model < 0x40)
+ return 0;
-no_c1e_idle:
- return 0;
+ return 1;
}
static cpumask_var_t c1e_mask;
set_tsk_thread_flag(p, TIF_FORK);
+ p->thread.fs = me->thread.fs;
+ p->thread.gs = me->thread.gs;
+
savesegment(gs, p->thread.gsindex);
- p->thread.gs = p->thread.gsindex ? 0 : me->thread.gs;
savesegment(fs, p->thread.fsindex);
- p->thread.fs = p->thread.fsindex ? 0 : me->thread.fs;
savesegment(es, p->thread.es);
savesegment(ds, p->thread.ds);
/* Make sure to be in 32bit mode */
set_thread_flag(TIF_IA32);
- current->personality |= force_personality32;
/* Prepare the first "return" to user space */
current_thread_info()->status |= TS_COMPAT;
return pv_tsc_khz;
}
-static atomic64_t last_value = ATOMIC64_INIT(0);
-
cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
{
struct pvclock_shadow_time shadow;
unsigned version;
cycle_t ret, offset;
- u64 last;
do {
version = pvclock_get_time_values(&shadow, src);
barrier();
} while (version != src->version);
- /*
- * Assumption here is that last_value, a global accumulator, always goes
- * forward. If we are less than that, we should not be much smaller.
- * We assume there is an error margin we're inside, and then the correction
- * does not sacrifice accuracy.
- *
- * For reads: global may have changed between test and return,
- * but this means someone else updated poked the clock at a later time.
- * We just need to make sure we are not seeing a backwards event.
- *
- * For updates: last_value = ret is not enough, since two vcpus could be
- * updating at the same time, and one of them could be slightly behind,
- * making the assumption that last_value always go forward fail to hold.
- */
- last = atomic64_read(&last_value);
- do {
- if (ret < last)
- return last;
- last = atomic64_cmpxchg(&last_value, last, ret);
- } while (unlikely(last != ret));
-
return ret;
}
{
struct pci_dev *nb_ht;
unsigned int devfn;
- u32 node;
u32 val;
devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0);
return;
pci_read_config_dword(nb_ht, 0x60, &val);
- node = val & 7;
- /*
- * Some hardware may return an invalid node ID,
- * so check it first:
- */
- if (node_online(node))
- set_dev_node(&dev->dev, node);
+ set_dev_node(&dev->dev, val & 7);
pci_dev_put(nb_ht);
}
DMI_MATCH(DMI_PRODUCT_NAME, "Macmini3,1"),
},
},
- { /* Handle problems with rebooting on the iMac9,1. */
- .callback = set_pci_reboot,
- .ident = "Apple iMac9,1",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "iMac9,1"),
- },
- },
{ }
};
/* O.K Now that I'm on the appropriate processor,
* stop all of the others.
*/
- stop_other_cpus();
+ smp_send_stop();
#endif
lapic_shutdown();
#include <asm/numa_64.h>
#endif
#include <asm/mce.h>
-#include <asm/trampoline.h>
/*
* end_pfn only includes RAM, while max_pfn_mapped includes all e820 entries.
DMI_MATCH(DMI_BOARD_NAME, "DG45FC"),
},
},
- /*
- * The Dell Inspiron Mini 1012 has DMI_BIOS_VENDOR = "Dell Inc.", so
- * match on the product name.
- */
- {
- .callback = dmi_low_memory_corruption,
- .ident = "Phoenix BIOS",
- .matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 1012"),
- },
- },
#endif
{}
};
paging_init();
x86_init.paging.pagetable_setup_done(swapper_pg_dir);
- setup_trampoline_page_table();
-
tboot_probe();
#ifdef CONFIG_X86_64
irq_exit();
}
-static void native_stop_other_cpus(int wait)
+static void native_smp_send_stop(void)
{
unsigned long flags;
- unsigned long timeout;
+ unsigned long wait;
if (reboot_force)
return;
if (num_online_cpus() > 1) {
apic->send_IPI_allbutself(REBOOT_VECTOR);
- /*
- * Don't wait longer than a second if the caller
- * didn't ask us to wait.
- */
- timeout = USEC_PER_SEC;
- while (num_online_cpus() > 1 && (wait || timeout--))
+ /* Don't wait longer than a second */
+ wait = USEC_PER_SEC;
+ while (num_online_cpus() > 1 && wait--)
udelay(1);
}
.smp_prepare_cpus = native_smp_prepare_cpus,
.smp_cpus_done = native_smp_cpus_done,
- .stop_other_cpus = native_stop_other_cpus,
+ .smp_send_stop = native_smp_send_stop,
.smp_send_reschedule = native_smp_send_reschedule,
.cpu_up = native_cpu_up,
#ifdef CONFIG_X86_32
u8 apicid_2_node[MAX_APICID];
+static int low_mappings;
#endif
/* State of each CPU */
static DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
#define get_idle_for_cpu(x) (per_cpu(idle_thread_array, x))
#define set_idle_for_cpu(x, p) (per_cpu(idle_thread_array, x) = (p))
-
-/*
- * We need this for trampoline_base protection from concurrent accesses when
- * off- and onlining cores wildly.
- */
-static DEFINE_MUTEX(x86_cpu_hotplug_driver_mutex);
-
-void cpu_hotplug_driver_lock()
-{
- mutex_lock(&x86_cpu_hotplug_driver_mutex);
-}
-
-void cpu_hotplug_driver_unlock()
-{
- mutex_unlock(&x86_cpu_hotplug_driver_mutex);
-}
-
-ssize_t arch_cpu_probe(const char *buf, size_t count) { return -1; }
-ssize_t arch_cpu_release(const char *buf, size_t count) { return -1; }
#else
static struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ;
#define get_idle_for_cpu(x) (idle_thread_array[(x)])
* fragile that we want to limit the things done here to the
* most necessary things.
*/
-
-#ifdef CONFIG_X86_32
- /*
- * Switch away from the trampoline page-table
- *
- * Do this before cpu_init() because it needs to access per-cpu
- * data which may not be mapped in the trampoline page-table.
- */
- load_cr3(swapper_pg_dir);
- __flush_tlb_all();
-#endif
-
vmi_bringup();
cpu_init();
preempt_disable();
enable_8259A_irq(0);
}
+#ifdef CONFIG_X86_32
+ while (low_mappings)
+ cpu_relax();
+ __flush_tlb_all();
+#endif
+
/* This must be done before setting cpu_online_mask */
set_cpu_sibling_map(raw_smp_processor_id());
wmb();
#ifdef CONFIG_X86_32
/* Stack for startup_32 can be just as for start_secondary onwards */
irq_ctx_init(cpu);
- initial_page_table = __pa(&trampoline_pg_dir);
#else
clear_tsk_thread_flag(c_idle.idle, TIF_FORK);
initial_gs = per_cpu_offset(cpu);
per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
+#ifdef CONFIG_X86_32
+ /* init low mem mapping */
+ clone_pgd_range(swapper_pg_dir, swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+ min_t(unsigned long, KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
+ flush_tlb_all();
+ low_mappings = 1;
+
err = do_boot_cpu(apicid, cpu);
+ zap_low_mappings(false);
+ low_mappings = 0;
+#else
+ err = do_boot_cpu(apicid, cpu);
+#endif
if (err) {
pr_debug("do_boot_cpu failed %d\n", err);
return -EIO;
set_cpu_sibling_map(0);
enable_IR_x2apic();
+#ifdef CONFIG_X86_64
default_setup_apic_routing();
+#endif
if (smp_sanity_check(max_cpus) < 0) {
printk(KERN_INFO "SMP disabled\n");
/* Global pointer to shared data; NULL means no measured launch. */
struct tboot *tboot __read_mostly;
-EXPORT_SYMBOL(tboot);
/* timeout for APs (in secs) to enter wait-for-SIPI state during shutdown */
#define AP_WAIT_TIMEOUT 1
#include <linux/io.h>
#include <asm/trampoline.h>
-#include <asm/pgtable.h>
#include <asm/e820.h>
#if defined(CONFIG_X86_64) && defined(CONFIG_ACPI_SLEEP)
memcpy(trampoline_base, trampoline_data, TRAMPOLINE_SIZE);
return virt_to_phys(trampoline_base);
}
-
-void __init setup_trampoline_page_table(void)
-{
-#ifdef CONFIG_X86_32
- /* Copy kernel address range */
- clone_pgd_range(trampoline_pg_dir + KERNEL_PGD_BOUNDARY,
- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
- KERNEL_PGD_PTRS);
-
- /* Initialize low mappings */
- clone_pgd_range(trampoline_pg_dir,
- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
- min_t(unsigned long, KERNEL_PGD_PTRS,
- KERNEL_PGD_BOUNDARY));
-#endif
-}
local_irq_restore(flags);
}
-static unsigned long long cyc2ns_suspend;
-
-void save_sched_clock_state(void)
-{
- if (!sched_clock_stable)
- return;
-
- cyc2ns_suspend = sched_clock();
-}
-
-/*
- * Even on processors with invariant TSC, the TSC gets reset in some of the
- * ACPI system sleep states. And in some systems the BIOS seems to reinit the
- * TSC to an arbitrary value (still sync'd across cpus) during resume from such
- * sleep states. To cope with this, recompute the cyc2ns_offset for each cpu so
- * that sched_clock() continues from the point where it was left off during
- * suspend.
- */
-void restore_sched_clock_state(void)
-{
- unsigned long long offset;
- unsigned long flags;
- int cpu;
-
- if (!sched_clock_stable)
- return;
-
- local_irq_save(flags);
-
- __get_cpu_var(cyc2ns_offset) = 0;
- offset = cyc2ns_suspend - sched_clock();
-
- for_each_possible_cpu(cpu)
- per_cpu(cyc2ns_offset, cpu) = offset;
-
- local_irq_restore(flags);
-}
-
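The save/restore pair removed here records sched_clock() before suspend and, on resume, recomputes the per-cpu cyc2ns offset so the clock continues from the saved point even if firmware reset the underlying counter. A self-contained sketch of that idea, with a plain variable standing in for the TSC and a single CPU for simplicity; all names below are made up for the illustration.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t hw_counter;       /* pretend TSC; firmware may reset it */
static uint64_t cyc2ns_offset;    /* one CPU only, for simplicity */
static uint64_t saved_clock;

static uint64_t sched_clock_sketch(void)
{
        return hw_counter + cyc2ns_offset;
}

static void save_clock_state(void)
{
        saved_clock = sched_clock_sketch();
}

static void restore_clock_state(void)
{
        /* Pick a new offset so the clock resumes at the saved point,
         * whatever value the counter came back with. */
        cyc2ns_offset = saved_clock - hw_counter;
}

int main(void)
{
        hw_counter = 5000;                /* running before suspend */
        save_clock_state();
        hw_counter = 7;                   /* counter reset during sleep */
        restore_clock_state();
        assert(sched_clock_sketch() == 5000);
        printf("clock continued at %llu\n",
               (unsigned long long)sched_clock_sketch());
        return 0;
}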
#ifdef CONFIG_CPU_FREQ
/* Frequency scaling support. Adjust the TSC based timer when the cpu frequency
write_sequnlock_irqrestore(&vsyscall_gtod_data.lock, flags);
}
-void update_vsyscall(struct timespec *wall_time, struct clocksource *clock,
- u32 mult)
+void update_vsyscall(struct timespec *wall_time, struct clocksource *clock)
{
unsigned long flags;
vsyscall_gtod_data.clock.vread = clock->vread;
vsyscall_gtod_data.clock.cycle_last = clock->cycle_last;
vsyscall_gtod_data.clock.mask = clock->mask;
- vsyscall_gtod_data.clock.mult = mult;
+ vsyscall_gtod_data.clock.mult = clock->mult;
vsyscall_gtod_data.clock.shift = clock->shift;
vsyscall_gtod_data.wall_time_sec = wall_time->tv_sec;
vsyscall_gtod_data.wall_time_nsec = wall_time->tv_nsec;
#define Group (1<<14) /* Bits 3:5 of modrm byte extend opcode */
#define GroupDual (1<<15) /* Alternate decoding of mod == 3 */
#define GroupMask 0xff /* Group number stored in bits 0:7 */
-#define Priv (1<<27) /* instruction generates #GP if current CPL != 0 */
/* Source 2 operand type */
#define Src2None (0<<29)
#define Src2CL (1<<29)
enum {
Group1_80, Group1_81, Group1_82, Group1_83,
Group1A, Group3_Byte, Group3, Group4, Group5, Group7,
- Group8, Group9,
};
static u32 opcode_table[256] = {
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
/* 0xF0 - 0xF7 */
0, 0, 0, 0,
- ImplicitOps | Priv, ImplicitOps, Group | Group3_Byte, Group | Group3,
+ ImplicitOps, ImplicitOps, Group | Group3_Byte, Group | Group3,
/* 0xF8 - 0xFF */
ImplicitOps, 0, ImplicitOps, ImplicitOps,
ImplicitOps, ImplicitOps, Group | Group4, Group | Group5,
static u32 twobyte_table[256] = {
/* 0x00 - 0x0F */
- 0, Group | GroupDual | Group7, 0, 0,
- 0, ImplicitOps, ImplicitOps | Priv, 0,
- ImplicitOps | Priv, ImplicitOps | Priv, 0, 0,
- 0, ImplicitOps | ModRM, 0, 0,
+ 0, Group | GroupDual | Group7, 0, 0, 0, ImplicitOps, ImplicitOps, 0,
+ ImplicitOps, ImplicitOps, 0, 0, 0, ImplicitOps | ModRM, 0, 0,
/* 0x10 - 0x1F */
0, 0, 0, 0, 0, 0, 0, 0, ImplicitOps | ModRM, 0, 0, 0, 0, 0, 0, 0,
/* 0x20 - 0x2F */
- ModRM | ImplicitOps | Priv, ModRM | Priv,
- ModRM | ImplicitOps | Priv, ModRM | Priv,
- 0, 0, 0, 0,
+ ModRM | ImplicitOps, ModRM, ModRM | ImplicitOps, ModRM, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
/* 0x30 - 0x3F */
- ImplicitOps | Priv, 0, ImplicitOps | Priv, 0,
- ImplicitOps, ImplicitOps | Priv, 0, 0,
+ ImplicitOps, 0, ImplicitOps, 0,
+ ImplicitOps, ImplicitOps, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
/* 0x40 - 0x47 */
DstReg | SrcMem | ModRM | Mov, DstReg | SrcMem | ModRM | Mov,
0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
DstReg | SrcMem16 | ModRM | Mov,
/* 0xB8 - 0xBF */
- 0, 0, Group | Group8, DstMem | SrcReg | ModRM | BitOp,
+ 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM | BitOp,
0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
DstReg | SrcMem16 | ModRM | Mov,
/* 0xC0 - 0xCF */
- 0, 0, 0, DstMem | SrcReg | ModRM | Mov,
- 0, 0, 0, Group | GroupDual | Group9,
+ 0, 0, 0, DstMem | SrcReg | ModRM | Mov, 0, 0, 0, ImplicitOps | ModRM,
0, 0, 0, 0, 0, 0, 0, 0,
/* 0xD0 - 0xDF */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
SrcMem | ModRM | Stack, 0,
SrcMem | ModRM | Stack, 0, SrcMem | ModRM | Stack, 0,
[Group7*8] =
- 0, 0, ModRM | SrcMem | Priv, ModRM | SrcMem | Priv,
+ 0, 0, ModRM | SrcMem, ModRM | SrcMem,
SrcNone | ModRM | DstMem | Mov, 0,
- SrcMem16 | ModRM | Mov | Priv, SrcMem | ModRM | ByteOp | Priv,
- [Group8*8] =
- 0, 0, 0, 0,
- DstMem | SrcImmByte | ModRM, DstMem | SrcImmByte | ModRM,
- DstMem | SrcImmByte | ModRM, DstMem | SrcImmByte | ModRM,
- [Group9*8] =
- 0, ImplicitOps | ModRM, 0, 0, 0, 0, 0, 0,
+ SrcMem16 | ModRM | Mov, SrcMem | ModRM | ByteOp,
};
static u32 group2_table[] = {
[Group7*8] =
- SrcNone | ModRM | Priv, 0, 0, SrcNone | ModRM,
+ SrcNone | ModRM, 0, 0, SrcNone | ModRM,
SrcNone | ModRM | DstMem | Mov, 0,
SrcMem16 | ModRM | Mov, 0,
- [Group9*8] =
- 0, 0, 0, 0, 0, 0, 0, 0,
};
/* EFLAGS bit definitions. */
-#define EFLG_ID (1<<21)
-#define EFLG_VIP (1<<20)
-#define EFLG_VIF (1<<19)
-#define EFLG_AC (1<<18)
#define EFLG_VM (1<<17)
#define EFLG_RF (1<<16)
-#define EFLG_IOPL (3<<12)
-#define EFLG_NT (1<<14)
#define EFLG_OF (1<<11)
#define EFLG_DF (1<<10)
#define EFLG_IF (1<<9)
-#define EFLG_TF (1<<8)
#define EFLG_SF (1<<7)
#define EFLG_ZF (1<<6)
#define EFLG_AF (1<<4)
if (linear < fc->start || linear >= fc->end) {
size = min(15UL, PAGE_SIZE - offset_in_page(linear));
- rc = ops->fetch(linear, fc->data, size, ctxt->vcpu, NULL);
+ rc = ops->read_std(linear, fc->data, size, ctxt->vcpu);
if (rc)
return rc;
fc->start = linear;
op_bytes = 3;
*address = 0;
rc = ops->read_std((unsigned long)ptr, (unsigned long *)size, 2,
- ctxt->vcpu, NULL);
+ ctxt->vcpu);
if (rc)
return rc;
rc = ops->read_std((unsigned long)ptr + 2, address, op_bytes,
- ctxt->vcpu, NULL);
+ ctxt->vcpu);
return rc;
}
switch (mode) {
case X86EMUL_MODE_REAL:
- case X86EMUL_MODE_VM86:
case X86EMUL_MODE_PROT16:
def_op_bytes = def_ad_bytes = 2;
break;
return rc;
}
-static int emulate_popf(struct x86_emulate_ctxt *ctxt,
- struct x86_emulate_ops *ops,
- void *dest, int len)
-{
- int rc;
- unsigned long val, change_mask;
- int iopl = (ctxt->eflags & X86_EFLAGS_IOPL) >> IOPL_SHIFT;
- int cpl = kvm_x86_ops->get_cpl(ctxt->vcpu);
-
- rc = emulate_pop(ctxt, ops, &val, len);
- if (rc != X86EMUL_CONTINUE)
- return rc;
-
- change_mask = EFLG_CF | EFLG_PF | EFLG_AF | EFLG_ZF | EFLG_SF | EFLG_OF
- | EFLG_TF | EFLG_DF | EFLG_NT | EFLG_RF | EFLG_AC | EFLG_ID;
-
- switch(ctxt->mode) {
- case X86EMUL_MODE_PROT64:
- case X86EMUL_MODE_PROT32:
- case X86EMUL_MODE_PROT16:
- if (cpl == 0)
- change_mask |= EFLG_IOPL;
- if (cpl <= iopl)
- change_mask |= EFLG_IF;
- break;
- case X86EMUL_MODE_VM86:
- if (iopl < 3) {
- kvm_inject_gp(ctxt->vcpu, 0);
- return X86EMUL_PROPAGATE_FAULT;
- }
- change_mask |= EFLG_IF;
- break;
- default: /* real mode */
- change_mask |= (EFLG_IOPL | EFLG_IF);
- break;
- }
-
- *(unsigned long *)dest =
- (ctxt->eflags & ~change_mask) | (val & change_mask);
-
- return rc;
-}
-
static inline int emulate_grp1a(struct x86_emulate_ctxt *ctxt,
struct x86_emulate_ops *ops)
{
rc = emulate_pop(ctxt, ops, &cs, c->op_bytes);
if (rc)
return rc;
- rc = kvm_load_segment_descriptor(ctxt->vcpu, (u16)cs, VCPU_SREG_CS);
+ rc = kvm_load_segment_descriptor(ctxt->vcpu, (u16)cs, 1, VCPU_SREG_CS);
return rc;
}
/* syscall is not available in real mode */
if (c->lock_prefix || ctxt->mode == X86EMUL_MODE_REAL
- || ctxt->mode == X86EMUL_MODE_VM86)
+ || !(ctxt->vcpu->arch.cr0 & X86_CR0_PE))
return -1;
setup_syscalls_segments(ctxt, &cs, &ss);
if (c->lock_prefix)
return -1;
- /* inject #GP if in real mode */
- if (ctxt->mode == X86EMUL_MODE_REAL) {
+ /* inject #GP if in real mode or paging is disabled */
+ if (ctxt->mode == X86EMUL_MODE_REAL ||
+ !(ctxt->vcpu->arch.cr0 & X86_CR0_PE)) {
kvm_inject_gp(ctxt->vcpu, 0);
return -1;
}
if (c->lock_prefix)
return -1;
- /* inject #GP if in real mode or Virtual 8086 mode */
- if (ctxt->mode == X86EMUL_MODE_REAL ||
- ctxt->mode == X86EMUL_MODE_VM86) {
+ /* inject #GP if in real mode or paging is disabled */
+ if (ctxt->mode == X86EMUL_MODE_REAL
+ || !(ctxt->vcpu->arch.cr0 & X86_CR0_PE)) {
+ kvm_inject_gp(ctxt->vcpu, 0);
+ return -1;
+ }
+
+ /* sysexit must be called from CPL 0 */
+ if (kvm_x86_ops->get_cpl(ctxt->vcpu) != 0) {
kvm_inject_gp(ctxt->vcpu, 0);
return -1;
}
return 0;
}
-static bool emulator_bad_iopl(struct x86_emulate_ctxt *ctxt)
-{
- int iopl;
- if (ctxt->mode == X86EMUL_MODE_REAL)
- return false;
- if (ctxt->mode == X86EMUL_MODE_VM86)
- return true;
- iopl = (ctxt->eflags & X86_EFLAGS_IOPL) >> IOPL_SHIFT;
- return kvm_x86_ops->get_cpl(ctxt->vcpu) > iopl;
-}
-
-static bool emulator_io_port_access_allowed(struct x86_emulate_ctxt *ctxt,
- struct x86_emulate_ops *ops,
- u16 port, u16 len)
-{
- struct kvm_segment tr_seg;
- int r;
- u16 io_bitmap_ptr;
- u8 perm, bit_idx = port & 0x7;
- unsigned mask = (1 << len) - 1;
-
- kvm_get_segment(ctxt->vcpu, &tr_seg, VCPU_SREG_TR);
- if (tr_seg.unusable)
- return false;
- if (tr_seg.limit < 103)
- return false;
- r = ops->read_std(tr_seg.base + 102, &io_bitmap_ptr, 2, ctxt->vcpu,
- NULL);
- if (r != X86EMUL_CONTINUE)
- return false;
- if (io_bitmap_ptr + port/8 > tr_seg.limit)
- return false;
- r = ops->read_std(tr_seg.base + io_bitmap_ptr + port/8, &perm, 1,
- ctxt->vcpu, NULL);
- if (r != X86EMUL_CONTINUE)
- return false;
- if ((perm >> bit_idx) & mask)
- return false;
- return true;
-}
-
-static bool emulator_io_permited(struct x86_emulate_ctxt *ctxt,
- struct x86_emulate_ops *ops,
- u16 port, u16 len)
-{
- if (emulator_bad_iopl(ctxt))
- if (!emulator_io_port_access_allowed(ctxt, ops, port, len))
- return false;
- return true;
-}
-
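The removed emulator_io_port_access_allowed() locates the I/O permission bitmap through the guest TSS and then tests one bit per port covered by the access. A hedged userspace sketch of just the bitmap arithmetic follows, with the bitmap as a plain array instead of guest memory read through ops->read_std(); like the removed code it checks a single byte and does not handle an access that straddles a byte boundary.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IO_BITMAP_BYTES (65536 / 8)     /* one bit per I/O port */

/* A set bit means access to that port is denied. */
static bool io_port_allowed(const uint8_t *bitmap, uint16_t port, unsigned len)
{
        unsigned mask = (1u << len) - 1;        /* len ports starting at `port` */
        uint8_t perm = bitmap[port / 8];

        return ((perm >> (port & 7)) & mask) == 0;
}

int main(void)
{
        uint8_t bitmap[IO_BITMAP_BYTES] = { 0 };

        bitmap[0x70 / 8] |= 1u << (0x70 & 7);   /* deny port 0x70 */

        printf("port 0x70: %s\n", io_port_allowed(bitmap, 0x70, 1) ? "ok" : "denied");
        printf("port 0x80: %s\n", io_port_allowed(bitmap, 0x80, 1) ? "ok" : "denied");
        return 0;
}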
int
x86_emulate_insn(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
{
memcpy(c->regs, ctxt->vcpu->arch.regs, sizeof c->regs);
saved_eip = c->eip;
- /* Privileged instruction can be executed only in CPL=0 */
- if ((c->d & Priv) && kvm_x86_ops->get_cpl(ctxt->vcpu)) {
- kvm_inject_gp(ctxt->vcpu, 0);
- goto done;
- }
-
if (((c->d & ModRM) && (c->modrm_mod != 3)) || (c->d & MemAbs))
memop = c->modrm_ea;
break;
case 0x6c: /* insb */
case 0x6d: /* insw/insd */
- if (!emulator_io_permited(ctxt, ops, c->regs[VCPU_REGS_RDX],
- (c->d & ByteOp) ? 1 : c->op_bytes)) {
- kvm_inject_gp(ctxt->vcpu, 0);
- goto done;
- }
- if (kvm_emulate_pio_string(ctxt->vcpu, NULL,
+ if (kvm_emulate_pio_string(ctxt->vcpu, NULL,
1,
(c->d & ByteOp) ? 1 : c->op_bytes,
c->rep_prefix ?
return 0;
case 0x6e: /* outsb */
case 0x6f: /* outsw/outsd */
- if (!emulator_io_permited(ctxt, ops, c->regs[VCPU_REGS_RDX],
- (c->d & ByteOp) ? 1 : c->op_bytes)) {
- kvm_inject_gp(ctxt->vcpu, 0);
- goto done;
- }
if (kvm_emulate_pio_string(ctxt->vcpu, NULL,
0,
(c->d & ByteOp) ? 1 : c->op_bytes,
break;
case 0x8e: { /* mov seg, r/m16 */
uint16_t sel;
+ int type_bits;
+ int err;
sel = c->src.val;
-
- if (c->modrm_reg == VCPU_SREG_CS ||
- c->modrm_reg > VCPU_SREG_GS) {
- kvm_queue_exception(ctxt->vcpu, UD_VECTOR);
- goto done;
- }
-
if (c->modrm_reg == VCPU_SREG_SS)
toggle_interruptibility(ctxt, X86_SHADOW_INT_MOV_SS);
- rc = kvm_load_segment_descriptor(ctxt->vcpu, sel, c->modrm_reg);
+ if (c->modrm_reg <= 5) {
+ type_bits = (c->modrm_reg == 1) ? 9 : 1;
+ err = kvm_load_segment_descriptor(ctxt->vcpu, sel,
+ type_bits, c->modrm_reg);
+ } else {
+ printk(KERN_INFO "Invalid segreg in modrm byte 0x%02x\n",
+ c->modrm);
+ goto cannot_emulate;
+ }
+
+ if (err < 0)
+ goto cannot_emulate;
c->dst.type = OP_NONE; /* Disable writeback. */
break;
c->dst.type = OP_REG;
c->dst.ptr = (unsigned long *) &ctxt->eflags;
c->dst.bytes = c->op_bytes;
- rc = emulate_popf(ctxt, ops, &c->dst.val, c->op_bytes);
- if (rc != X86EMUL_CONTINUE)
- goto done;
- break;
+ goto pop_instruction;
case 0xa0 ... 0xa1: /* mov */
c->dst.ptr = (unsigned long *)&c->regs[VCPU_REGS_RAX];
c->dst.val = c->src.val;
case 0xe9: /* jmp rel */
goto jmp;
case 0xea: /* jmp far */
- if (kvm_load_segment_descriptor(ctxt->vcpu, c->src2.val,
- VCPU_SREG_CS))
- goto done;
+ if (kvm_load_segment_descriptor(ctxt->vcpu, c->src2.val, 9,
+ VCPU_SREG_CS) < 0) {
+ DPRINTF("jmp far: Failed to load CS descriptor\n");
+ goto cannot_emulate;
+ }
c->eip = c->src.val;
break;
case 0xef: /* out (e/r)ax,dx */
port = c->regs[VCPU_REGS_RDX];
io_dir_in = 0;
- do_io:
- if (!emulator_io_permited(ctxt, ops, port,
- (c->d & ByteOp) ? 1 : c->op_bytes)) {
- kvm_inject_gp(ctxt->vcpu, 0);
- goto done;
- }
- if (kvm_emulate_pio(ctxt->vcpu, NULL, io_dir_in,
+ do_io: if (kvm_emulate_pio(ctxt->vcpu, NULL, io_dir_in,
(c->d & ByteOp) ? 1 : c->op_bytes,
port) != 0) {
c->eip = saved_eip;
c->dst.type = OP_NONE; /* Disable writeback. */
break;
case 0xfa: /* cli */
- if (emulator_bad_iopl(ctxt))
- kvm_inject_gp(ctxt->vcpu, 0);
- else {
- ctxt->eflags &= ~X86_EFLAGS_IF;
- c->dst.type = OP_NONE; /* Disable writeback. */
- }
+ ctxt->eflags &= ~X86_EFLAGS_IF;
+ c->dst.type = OP_NONE; /* Disable writeback. */
break;
case 0xfb: /* sti */
- if (emulator_bad_iopl(ctxt))
- kvm_inject_gp(ctxt->vcpu, 0);
- else {
- toggle_interruptibility(ctxt, X86_SHADOW_INT_STI);
- ctxt->eflags |= X86_EFLAGS_IF;
- c->dst.type = OP_NONE; /* Disable writeback. */
- }
+ toggle_interruptibility(ctxt, X86_SHADOW_INT_STI);
+ ctxt->eflags |= X86_EFLAGS_IF;
+ c->dst.type = OP_NONE; /* Disable writeback. */
break;
case 0xfc: /* cld */
ctxt->eflags &= ~EFLG_DF;
#define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | PT_USER_MASK \
| PT64_NX_MASK)
+#define PFERR_PRESENT_MASK (1U << 0)
+#define PFERR_WRITE_MASK (1U << 1)
+#define PFERR_USER_MASK (1U << 2)
+#define PFERR_RSVD_MASK (1U << 3)
+#define PFERR_FETCH_MASK (1U << 4)
+
#define PT_PDPE_LEVEL 3
#define PT_DIRECTORY_LEVEL 2
#define PT_PAGE_TABLE_LEVEL 1
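The PFERR_* masks above mirror the x86 page-fault error code bits. As a quick illustration (not part of the patch), here is a small program that decodes such an error code:

#include <stdint.h>
#include <stdio.h>

#define PFERR_PRESENT_MASK (1u << 0)
#define PFERR_WRITE_MASK   (1u << 1)
#define PFERR_USER_MASK    (1u << 2)
#define PFERR_RSVD_MASK    (1u << 3)
#define PFERR_FETCH_MASK   (1u << 4)

static void print_pferr(uint32_t ec)
{
        printf("%s, %s access, %s mode%s%s\n",
               ec & PFERR_PRESENT_MASK ? "protection violation" : "not-present page",
               ec & PFERR_WRITE_MASK   ? "write" : "read",
               ec & PFERR_USER_MASK    ? "user" : "supervisor",
               ec & PFERR_RSVD_MASK    ? ", reserved bit set" : "",
               ec & PFERR_FETCH_MASK   ? ", instruction fetch" : "");
}

int main(void)
{
        /* e.g. a user-mode write to a present but read-only page */
        print_pferr(PFERR_PRESENT_MASK | PFERR_WRITE_MASK | PFERR_USER_MASK);
        return 0;
}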
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
-static bool is_write_protection(struct kvm_vcpu *vcpu)
+static int is_write_protection(struct kvm_vcpu *vcpu)
{
return vcpu->arch.cr0 & X86_CR0_WP;
}
for_each_sp(pages, sp, parents, i) {
kvm_mmu_zap_page(kvm, sp);
mmu_pages_clear_parents(&parents);
- zapped++;
}
+ zapped += pages.nr;
kvm_mmu_pages_init(parent, &parents, &pages);
}
*/
if (used_pages > kvm_nr_mmu_pages) {
- while (used_pages > kvm_nr_mmu_pages &&
- !list_empty(&kvm->arch.active_mmu_pages)) {
+ while (used_pages > kvm_nr_mmu_pages) {
struct kvm_mmu_page *page;
page = container_of(kvm->arch.active_mmu_pages.prev,
struct kvm_mmu_page, link);
- used_pages -= kvm_mmu_zap_page(kvm, page);
+ kvm_mmu_zap_page(kvm, page);
used_pages--;
}
- kvm_nr_mmu_pages = used_pages;
kvm->arch.n_free_mmu_pages = 0;
}
else
&& !sp->role.invalid) {
pgprintk("%s: zap %lx %x\n",
__func__, gfn, sp->role.word);
- if (kvm_mmu_zap_page(kvm, sp))
- nn = bucket->first;
+ kvm_mmu_zap_page(kvm, sp);
}
}
}
{
struct page *page;
- gpa_t gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL);
+ gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, gva);
if (gpa == UNMAPPED_GVA)
return NULL;
spte |= PT_WRITABLE_MASK;
- if (!tdp_enabled && !(pte_access & ACC_WRITE_MASK))
- spte &= ~PT_USER_MASK;
-
/*
* Optimization: for pte sync, if spte was writable the hash
* lookup is unnecessary (and expensive). Write protection
child = page_header(pte & PT64_BASE_ADDR_MASK);
mmu_page_remove_parent_pte(child, sptep);
- __set_spte(sptep, shadow_trap_nonpresent_pte);
- kvm_flush_remote_tlbs(vcpu->kvm);
} else if (pfn != spte_to_pfn(*sptep)) {
pgprintk("hfn old %lx new %lx\n",
spte_to_pfn(*sptep), pfn);
direct = 1;
if (mmu_check_root(vcpu, root_gfn))
return 1;
- spin_lock(&vcpu->kvm->mmu_lock);
sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
PT64_ROOT_LEVEL, direct,
ACC_ALL, NULL);
root = __pa(sp->spt);
++sp->root_count;
- spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.root_hpa = root;
return 0;
}
root_gfn = 0;
if (mmu_check_root(vcpu, root_gfn))
return 1;
- spin_lock(&vcpu->kvm->mmu_lock);
sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
PT32_ROOT_LEVEL, direct,
ACC_ALL, NULL);
root = __pa(sp->spt);
++sp->root_count;
- spin_unlock(&vcpu->kvm->mmu_lock);
-
vcpu->arch.mmu.pae_root[i] = root | PT_PRESENT_MASK;
}
vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
spin_unlock(&vcpu->kvm->mmu_lock);
}
-static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t vaddr,
- u32 access, u32 *error)
+static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t vaddr)
{
- if (error)
- *error = 0;
return vaddr;
}
r = paging32_init_context(vcpu);
vcpu->arch.mmu.base_role.glevels = vcpu->arch.mmu.root_level;
- vcpu->arch.mmu.base_role.cr0_wp = is_write_protection(vcpu);
return r;
}
goto out;
spin_lock(&vcpu->kvm->mmu_lock);
kvm_mmu_free_some_pages(vcpu);
- spin_unlock(&vcpu->kvm->mmu_lock);
r = mmu_alloc_roots(vcpu);
- spin_lock(&vcpu->kvm->mmu_lock);
mmu_sync_roots(vcpu);
spin_unlock(&vcpu->kvm->mmu_lock);
if (r)
if (tdp_enabled)
return 0;
- gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL);
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, gva);
spin_lock(&vcpu->kvm->mmu_lock);
r = kvm_mmu_unprotect_page(vcpu->kvm, gpa >> PAGE_SHIFT);
if (is_shadow_present_pte(ent) && !is_last_spte(ent, level))
audit_mappings_page(vcpu, ent, va, level - 1);
else {
- gpa_t gpa = kvm_mmu_gva_to_gpa_read(vcpu, va, NULL);
+ gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, va);
gfn_t gfn = gpa >> PAGE_SHIFT;
pfn_t pfn = gfn_to_pfn(vcpu->kvm, gfn);
hpa_t hpa = (hpa_t)pfn << PAGE_SHIFT;
#define PT32_ROOT_LEVEL 2
#define PT32E_ROOT_LEVEL 3
-#define PFERR_PRESENT_MASK (1U << 0)
-#define PFERR_WRITE_MASK (1U << 1)
-#define PFERR_USER_MASK (1U << 2)
-#define PFERR_RSVD_MASK (1U << 3)
-#define PFERR_FETCH_MASK (1U << 4)
-
int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
break;
}
- if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
- struct kvm_mmu_page *child;
- unsigned direct_access;
-
- if (level != gw->level)
- continue;
-
- /*
- * For the direct sp, if the guest pte's dirty bit
- * changed from clean to dirty, it would corrupt the
- * sp's access, allowing writes through a read-only sp,
- * so we should update the spte at this point to get
- * a new sp with the correct access.
- */
- direct_access = gw->pt_access & gw->pte_access;
- if (!is_dirty_gpte(gw->ptes[gw->level - 1]))
- direct_access &= ~ACC_WRITE_MASK;
-
- child = page_header(*sptep & PT64_BASE_ADDR_MASK);
- if (child->role.access == direct_access)
- continue;
-
- mmu_page_remove_parent_pte(child, sptep);
- __set_spte(sptep, shadow_trap_nonpresent_pte);
- kvm_flush_remote_tlbs(vcpu->kvm);
- }
+ if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
+ continue;
if (is_large_pte(*sptep)) {
rmap_remove(vcpu->kvm, sptep);
/* advance table_gfn when emulating 1gb pages with 4k */
if (delta == 0)
table_gfn += PT_INDEX(addr, level);
- access &= gw->pte_access;
} else {
direct = 0;
table_gfn = gw->table_gfn[level - 2];
spin_unlock(&vcpu->kvm->mmu_lock);
}
-static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr, u32 access,
- u32 *error)
+static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
{
struct guest_walker walker;
gpa_t gpa = UNMAPPED_GVA;
int r;
- r = FNAME(walk_addr)(&walker, vcpu, vaddr,
- !!(access & PFERR_WRITE_MASK),
- !!(access & PFERR_USER_MASK),
- !!(access & PFERR_FETCH_MASK));
+ r = FNAME(walk_addr)(&walker, vcpu, vaddr, 0, 0, 0);
if (r) {
gpa = gfn_to_gpa(walker.gfn);
gpa |= vaddr & ~PAGE_MASK;
- } else if (error)
- *error = walker.error_code;
+ }
return gpa;
}
#include <linux/sched.h>
#include <linux/ftrace_event.h>
-#include <asm/tlbflush.h>
#include <asm/desc.h>
#include <asm/virtext.h>
#define nsvm_printk(fmt, args...) do {} while(0)
#endif
-static bool erratum_383_found __read_mostly;
-
static const u32 host_save_user_msrs[] = {
#ifdef CONFIG_X86_64
MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
svm_set_interrupt_shadow(vcpu, 0);
}
-static void svm_init_erratum_383(void)
-{
- u32 low, high;
- int err;
- u64 val;
-
- /* Only Fam10h is affected */
- if (boot_cpu_data.x86 != 0x10)
- return;
-
- /* Use _safe variants to not break nested virtualization */
- val = native_read_msr_safe(MSR_AMD64_DC_CFG, &err);
- if (err)
- return;
-
- val |= (1ULL << 47);
-
- low = lower_32_bits(val);
- high = upper_32_bits(val);
-
- native_write_msr_safe(MSR_AMD64_DC_CFG, low, high);
-
- erratum_383_found = true;
-}
-
static int has_svm(void)
{
const char *msg;
static void svm_hardware_enable(void *garbage)
{
+
struct svm_cpu_data *svm_data;
uint64_t efer;
struct descriptor_table gdt_descr;
wrmsrl(MSR_VM_HSAVE_PA,
page_to_pfn(svm_data->save_area) << PAGE_SHIFT);
-
- svm_init_erratum_383();
-
- return;
}
static void svm_cpu_uninit(int cpu)
control->iopm_base_pa = iopm_base;
control->msrpm_base_pa = __pa(svm->msrpm);
+ control->tsc_offset = 0;
control->int_ctl = V_INTR_MASKING_MASK;
init_seg(&save->es);
save->rip = 0x0000fff0;
svm->vcpu.arch.regs[VCPU_REGS_RIP] = save->rip;
- /* This is the guest-visible cr0 value.
- * svm_set_cr0() sets PG and WP and clears NW and CD on save->cr0.
+ /*
+ * The CR0 value at CPU init should be 0x60000010; we enable the CPU
+ * cache by default. The orderly way is to enable the cache in the BIOS.
*/
- svm->vcpu.arch.cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET;
- kvm_set_cr0(&svm->vcpu, svm->vcpu.arch.cr0);
-
+ save->cr0 = 0x00000010 | X86_CR0_PG | X86_CR0_WP;
save->cr4 = X86_CR4_PAE;
/* rdx = ?? */
if (err)
goto free_svm;
- err = -ENOMEM;
page = alloc_page(GFP_KERNEL);
- if (!page)
+ if (!page) {
+ err = -ENOMEM;
goto uninit;
+ }
+ err = -ENOMEM;
msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
if (!msrpm_pages)
- goto free_page1;
+ goto uninit;
nested_msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
if (!nested_msrpm_pages)
- goto free_page2;
+ goto uninit;
+
+ svm->msrpm = page_address(msrpm_pages);
+ svm_vcpu_init_msrpm(svm->msrpm);
hsave_page = alloc_page(GFP_KERNEL);
if (!hsave_page)
- goto free_page3;
-
+ goto uninit;
svm->nested.hsave = page_address(hsave_page);
- svm->msrpm = page_address(msrpm_pages);
- svm_vcpu_init_msrpm(svm->msrpm);
-
svm->nested.msrpm = page_address(nested_msrpm_pages);
svm->vmcb = page_address(page);
svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
svm->asid_generation = 0;
init_vmcb(svm);
- svm->vmcb->control.tsc_offset = 0-native_read_tsc();
fx_init(&svm->vcpu);
svm->vcpu.fpu_active = 1;
return &svm->vcpu;
-free_page3:
- __free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
-free_page2:
- __free_pages(msrpm_pages, MSRPM_ALLOC_ORDER);
-free_page1:
- __free_page(page);
uninit:
kvm_vcpu_uninit(&svm->vcpu);
free_svm:
int i;
if (unlikely(cpu != vcpu->cpu)) {
- u64 delta;
-
- if (check_tsc_unstable()) {
- /*
- * Make sure that the guest sees a monotonically
- * increasing TSC.
- */
- delta = vcpu->arch.host_tsc - native_read_tsc();
- svm->vmcb->control.tsc_offset += delta;
- if (is_nested(svm))
- svm->nested.hsave->control.tsc_offset += delta;
- }
+ u64 tsc_this, delta;
+
+ /*
+ * Make sure that the guest sees a monotonically
+ * increasing TSC.
+ */
+ rdtscll(tsc_this);
+ delta = vcpu->arch.host_tsc - tsc_this;
+ svm->vmcb->control.tsc_offset += delta;
+ if (is_nested(svm))
+ svm->nested.hsave->control.tsc_offset += delta;
vcpu->cpu = cpu;
kvm_migrate_timers(vcpu);
svm->asid_generation = 0;
return 1;
}
-static bool is_erratum_383(void)
-{
- int err, i;
- u64 value;
-
- if (!erratum_383_found)
- return false;
-
- value = native_read_msr_safe(MSR_IA32_MC0_STATUS, &err);
- if (err)
- return false;
-
- /* Bit 62 may or may not be set for this mce */
- value &= ~(1ULL << 62);
-
- if (value != 0xb600000000010015ULL)
- return false;
-
- /* Clear MCi_STATUS registers */
- for (i = 0; i < 6; ++i)
- native_write_msr_safe(MSR_IA32_MCx_STATUS(i), 0, 0);
-
- value = native_read_msr_safe(MSR_IA32_MCG_STATUS, &err);
- if (!err) {
- u32 low, high;
-
- value &= ~(1ULL << 2);
- low = lower_32_bits(value);
- high = upper_32_bits(value);
-
- native_write_msr_safe(MSR_IA32_MCG_STATUS, low, high);
- }
-
- /* Flush tlb to evict multi-match entries */
- __flush_tlb_all();
-
- return true;
-}
-
-static void svm_handle_mce(struct vcpu_svm *svm)
+static int mc_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
- if (is_erratum_383()) {
- /*
- * Erratum 383 triggered. Guest state is corrupt so kill the
- * guest.
- */
- pr_err("KVM: Guest triggered AMD Erratum 383\n");
-
- set_bit(KVM_REQ_TRIPLE_FAULT, &svm->vcpu.requests);
-
- return;
- }
-
/*
* On an #MC intercept the MCE handler is not called automatically in
* the host. So do it by hand here.
"int $0x12\n");
/* not sure if we ever come back to this point */
- return;
-}
-
-static int mc_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
-{
return 1;
}
static int iret_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
++svm->vcpu.stat.nmi_window_exits;
- svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_IRET);
+ svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET);
svm->vcpu.arch.hflags |= HF_IRET_MASK;
return 1;
}
svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI;
vcpu->arch.hflags |= HF_NMI_MASK;
- svm->vmcb->control.intercept |= (1ULL << INTERCEPT_IRET);
+ svm->vmcb->control.intercept |= (1UL << INTERCEPT_IRET);
++vcpu->stat.nmi_injections;
}
sync_lapic_to_cr8(vcpu);
save_host_msrs(vcpu);
- savesegment(fs, fs_selector);
- savesegment(gs, gs_selector);
+ fs_selector = kvm_read_fs();
+ gs_selector = kvm_read_gs();
ldt_selector = kvm_read_ldt();
svm->vmcb->save.cr2 = vcpu->arch.cr2;
/* required for live migration with NPT */
vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
- load_host_msrs(vcpu);
- loadsegment(fs, fs_selector);
-#ifdef CONFIG_X86_64
- load_gs_index(gs_selector);
- wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
-#else
- loadsegment(gs, gs_selector);
-#endif
+ kvm_load_fs(fs_selector);
+ kvm_load_gs(gs_selector);
kvm_load_ldt(ldt_selector);
+ load_host_msrs(vcpu);
reload_tss(vcpu);
vcpu->arch.regs_avail &= ~(1 << VCPU_EXREG_PDPTR);
vcpu->arch.regs_dirty &= ~(1 << VCPU_EXREG_PDPTR);
}
-
- /*
- * We need to handle MC intercepts here before the vcpu has a chance to
- * change the physical cpu
- */
- if (unlikely(svm->vmcb->control.exit_code ==
- SVM_EXIT_EXCP_BASE + MC_VECTOR))
- svm_handle_mce(svm);
}
#undef R
#include <linux/sched.h>
#include <linux/moduleparam.h>
#include <linux/ftrace_event.h>
-#include <linux/tboot.h>
#include "kvm_cache_regs.h"
#include "x86.h"
static int __read_mostly emulate_invalid_guest_state = 0;
module_param(emulate_invalid_guest_state, bool, S_IRUGO);
-#define RMODE_GUEST_OWNED_EFLAGS_BITS (~(X86_EFLAGS_IOPL | X86_EFLAGS_VM))
-
struct vmcs {
u32 revision_id;
u32 abort;
} host_state;
struct {
int vm86_active;
- ulong save_rflags;
+ u8 save_iopl;
struct kvm_save_segment {
u16 selector;
unsigned long base;
static DEFINE_PER_CPU(struct vmcs *, vmxarea);
static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
static DEFINE_PER_CPU(struct list_head, vcpus_on_cpu);
-static DEFINE_PER_CPU(struct desc_ptr, host_gdt);
static unsigned long *vmx_io_bitmap_a;
static unsigned long *vmx_io_bitmap_b;
*/
vmx->host_state.ldt_sel = kvm_read_ldt();
vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel;
- savesegment(fs, vmx->host_state.fs_sel);
+ vmx->host_state.fs_sel = kvm_read_fs();
if (!(vmx->host_state.fs_sel & 7)) {
vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
vmx->host_state.fs_reload_needed = 0;
vmcs_write16(HOST_FS_SELECTOR, 0);
vmx->host_state.fs_reload_needed = 1;
}
- savesegment(gs, vmx->host_state.gs_sel);
+ vmx->host_state.gs_sel = kvm_read_gs();
if (!(vmx->host_state.gs_sel & 7))
vmcs_write16(HOST_GS_SELECTOR, vmx->host_state.gs_sel);
else {
#endif
#ifdef CONFIG_X86_64
- save_msrs(vmx->host_msrs + vmx->msr_offset_kernel_gs_base, 1);
+ if (is_long_mode(&vmx->vcpu))
+ save_msrs(vmx->host_msrs +
+ vmx->msr_offset_kernel_gs_base, 1);
+
#endif
load_msrs(vmx->guest_msrs, vmx->save_nmsrs);
load_transition_efer(vmx);
static void __vmx_load_host_state(struct vcpu_vmx *vmx)
{
+ unsigned long flags;
+
if (!vmx->host_state.loaded)
return;
++vmx->vcpu.stat.host_state_reload;
vmx->host_state.loaded = 0;
if (vmx->host_state.fs_reload_needed)
- loadsegment(fs, vmx->host_state.fs_sel);
-#ifdef CONFIG_X86_64
- if (is_long_mode(&vmx->vcpu))
- save_msrs(vmx->guest_msrs + vmx->msr_offset_kernel_gs_base, 1);
-#endif
+ kvm_load_fs(vmx->host_state.fs_sel);
if (vmx->host_state.gs_ldt_reload_needed) {
kvm_load_ldt(vmx->host_state.ldt_sel);
+ /*
+ * If we have to reload gs, we must take care to
+ * preserve our gs base.
+ */
+ local_irq_save(flags);
+ kvm_load_gs(vmx->host_state.gs_sel);
#ifdef CONFIG_X86_64
- load_gs_index(vmx->host_state.gs_sel);
-#else
- loadsegment(gs, vmx->host_state.gs_sel);
+ wrmsrl(MSR_GS_BASE, vmcs_readl(HOST_GS_BASE));
#endif
+ local_irq_restore(flags);
}
reload_tss();
-#ifdef CONFIG_X86_64
- save_msrs(vmx->guest_msrs, vmx->msr_offset_kernel_gs_base);
- save_msrs(vmx->guest_msrs + vmx->msr_offset_kernel_gs_base + 1,
- vmx->save_nmsrs - vmx->msr_offset_kernel_gs_base - 1);
-#else
save_msrs(vmx->guest_msrs, vmx->save_nmsrs);
-#endif
load_msrs(vmx->host_msrs, vmx->save_nmsrs);
reload_host_efer(vmx);
- load_gdt(&__get_cpu_var(host_gdt));
}
static void vmx_load_host_state(struct vcpu_vmx *vmx)
static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
{
- unsigned long rflags, save_rflags;
+ unsigned long rflags;
rflags = vmcs_readl(GUEST_RFLAGS);
- if (to_vmx(vcpu)->rmode.vm86_active) {
- rflags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
- save_rflags = to_vmx(vcpu)->rmode.save_rflags;
- rflags |= save_rflags & ~RMODE_GUEST_OWNED_EFLAGS_BITS;
- }
+ if (to_vmx(vcpu)->rmode.vm86_active)
+ rflags &= ~(unsigned long)(X86_EFLAGS_IOPL | X86_EFLAGS_VM);
return rflags;
}
static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
- if (to_vmx(vcpu)->rmode.vm86_active) {
- to_vmx(vcpu)->rmode.save_rflags = rflags;
+ if (to_vmx(vcpu)->rmode.vm86_active)
rflags |= X86_EFLAGS_IOPL | X86_EFLAGS_VM;
- }
vmcs_writel(GUEST_RFLAGS, rflags);
}
u64 msr;
rdmsrl(MSR_IA32_FEATURE_CONTROL, msr);
- if (msr & FEATURE_CONTROL_LOCKED) {
- if (!(msr & FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX)
- && tboot_enabled())
- return 1;
- if (!(msr & FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX)
- && !tboot_enabled())
- return 1;
- }
-
- return 0;
+ return (msr & (FEATURE_CONTROL_LOCKED |
+ FEATURE_CONTROL_VMXON_ENABLED))
+ == FEATURE_CONTROL_LOCKED;
/* locked but not enabled */
}
{
int cpu = raw_smp_processor_id();
u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
- u64 old, test_bits;
+ u64 old;
INIT_LIST_HEAD(&per_cpu(vcpus_on_cpu, cpu));
rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
-
- test_bits = FEATURE_CONTROL_LOCKED;
- test_bits |= FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
- if (tboot_enabled())
- test_bits |= FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX;
-
- if ((old & test_bits) != test_bits) {
+ if ((old & (FEATURE_CONTROL_LOCKED |
+ FEATURE_CONTROL_VMXON_ENABLED))
+ != (FEATURE_CONTROL_LOCKED |
+ FEATURE_CONTROL_VMXON_ENABLED))
/* enable and lock */
- wrmsrl(MSR_IA32_FEATURE_CONTROL, old | test_bits);
- }
+ wrmsrl(MSR_IA32_FEATURE_CONTROL, old |
+ FEATURE_CONTROL_LOCKED |
+ FEATURE_CONTROL_VMXON_ENABLED);
write_cr4(read_cr4() | X86_CR4_VMXE); /* FIXME: not cpu hotplug safe */
asm volatile (ASM_VMX_VMXON_RAX
: : "a"(&phys_addr), "m"(phys_addr)
: "memory", "cc");
-
- store_gdt(&__get_cpu_var(host_gdt));
}
static void vmclear_local_vcpus(void)
vmcs_write32(GUEST_TR_AR_BYTES, vmx->rmode.tr.ar);
flags = vmcs_readl(GUEST_RFLAGS);
- flags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
- flags |= vmx->rmode.save_rflags & ~RMODE_GUEST_OWNED_EFLAGS_BITS;
+ flags &= ~(X86_EFLAGS_IOPL | X86_EFLAGS_VM);
+ flags |= (vmx->rmode.save_iopl << IOPL_SHIFT);
vmcs_writel(GUEST_RFLAGS, flags);
vmcs_writel(GUEST_CR4, (vmcs_readl(GUEST_CR4) & ~X86_CR4_VME) |
vmcs_write32(GUEST_TR_AR_BYTES, 0x008b);
flags = vmcs_readl(GUEST_RFLAGS);
- vmx->rmode.save_rflags = flags;
+ vmx->rmode.save_iopl
+ = (flags & X86_EFLAGS_IOPL) >> IOPL_SHIFT;
flags |= X86_EFLAGS_IOPL | X86_EFLAGS_VM;
~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
if (vmx->vpid == 0)
exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
- if (!enable_ept) {
+ if (!enable_ept)
exec_control &= ~SECONDARY_EXEC_ENABLE_EPT;
- enable_unrestricted_guest = 0;
- }
if (!enable_unrestricted_guest)
exec_control &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
vmcs_write16(HOST_CS_SELECTOR, __KERNEL_CS); /* 22.2.4 */
vmcs_write16(HOST_DS_SELECTOR, __KERNEL_DS); /* 22.2.4 */
vmcs_write16(HOST_ES_SELECTOR, __KERNEL_DS); /* 22.2.4 */
- vmcs_write16(HOST_FS_SELECTOR, 0); /* 22.2.4 */
- vmcs_write16(HOST_GS_SELECTOR, 0); /* 22.2.4 */
+ vmcs_write16(HOST_FS_SELECTOR, kvm_read_fs()); /* 22.2.4 */
+ vmcs_write16(HOST_GS_SELECTOR, kvm_read_gs()); /* 22.2.4 */
vmcs_write16(HOST_SS_SELECTOR, __KERNEL_DS); /* 22.2.4 */
#ifdef CONFIG_X86_64
rdmsrl(MSR_FS_BASE, a);
if (vmx->vpid != 0)
vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
- vmx->vcpu.arch.cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET;
+ vmx->vcpu.arch.cr0 = 0x60000010;
vmx_set_cr0(&vmx->vcpu, vmx->vcpu.arch.cr0); /* enter rmode */
vmx_set_cr4(&vmx->vcpu, 0);
vmx_set_efer(&vmx->vcpu, 0);
kvm_queue_exception(vcpu, vec);
return 1;
case BP_VECTOR:
- /*
- * Update instruction length as we may reinject the exception
- * from user space while in guest debugging mode.
- */
- to_vmx(vcpu)->vcpu.arch.event_exit_inst_len =
- vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
return 0;
/* fall through */
kvm_run->debug.arch.dr7 = vmcs_readl(GUEST_DR7);
/* fall through */
case BP_VECTOR:
- /*
- * Update instruction length as we may reinject #BP from
- * user space while in guest debugging mode. Reading it for
- * #DB as well causes no harm, it is not used in that case.
- */
- vmx->vcpu.arch.event_exit_inst_len =
- vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
kvm_run->exit_reason = KVM_EXIT_DEBUG;
kvm_run->debug.arch.pc = vmcs_readl(GUEST_CS_BASE) + rip;
kvm_run->debug.arch.exception = ex_no;
void kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
{
if (cr0 & CR0_RESERVED_BITS) {
+ printk(KERN_DEBUG "set_cr0: 0x%lx #GP, reserved bits 0x%lx\n",
+ cr0, vcpu->arch.cr0);
kvm_inject_gp(vcpu, 0);
return;
}
if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD)) {
+ printk(KERN_DEBUG "set_cr0: #GP, CD == 0 && NW == 1\n");
kvm_inject_gp(vcpu, 0);
return;
}
if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE)) {
+ printk(KERN_DEBUG "set_cr0: #GP, set PG flag "
+ "and a clear PE flag\n");
kvm_inject_gp(vcpu, 0);
return;
}
int cs_db, cs_l;
if (!is_pae(vcpu)) {
+ printk(KERN_DEBUG "set_cr0: #GP, start paging "
+ "in long mode while PAE is disabled\n");
kvm_inject_gp(vcpu, 0);
return;
}
kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
if (cs_l) {
+ printk(KERN_DEBUG "set_cr0: #GP, start paging "
+ "in long mode while CS.L == 1\n");
kvm_inject_gp(vcpu, 0);
return;
} else
#endif
if (is_pae(vcpu) && !load_pdptrs(vcpu, vcpu->arch.cr3)) {
+ printk(KERN_DEBUG "set_cr0: #GP, pdptrs "
+ "reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
{
- kvm_set_cr0(vcpu, (vcpu->arch.cr0 & ~0x0eul) | (msw & 0x0f));
+ kvm_set_cr0(vcpu, (vcpu->arch.cr0 & ~0x0ful) | (msw & 0x0f));
}
EXPORT_SYMBOL_GPL(kvm_lmsw);
unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE;
if (cr4 & CR4_RESERVED_BITS) {
+ printk(KERN_DEBUG "set_cr4: #GP, reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
if (is_long_mode(vcpu)) {
if (!(cr4 & X86_CR4_PAE)) {
+ printk(KERN_DEBUG "set_cr4: #GP, clearing PAE while "
+ "in long mode\n");
kvm_inject_gp(vcpu, 0);
return;
}
} else if (is_paging(vcpu) && (cr4 & X86_CR4_PAE)
&& ((cr4 ^ old_cr4) & pdptr_bits)
&& !load_pdptrs(vcpu, vcpu->arch.cr3)) {
+ printk(KERN_DEBUG "set_cr4: #GP, pdptrs reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
if (cr4 & X86_CR4_VMXE) {
+ printk(KERN_DEBUG "set_cr4: #GP, setting VMXE\n");
kvm_inject_gp(vcpu, 0);
return;
}
if (is_long_mode(vcpu)) {
if (cr3 & CR3_L_MODE_RESERVED_BITS) {
+ printk(KERN_DEBUG "set_cr3: #GP, reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
} else {
if (is_pae(vcpu)) {
if (cr3 & CR3_PAE_RESERVED_BITS) {
+ printk(KERN_DEBUG
+ "set_cr3: #GP, reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
if (is_paging(vcpu) && !load_pdptrs(vcpu, cr3)) {
+ printk(KERN_DEBUG "set_cr3: #GP, pdptrs "
+ "reserved bits\n");
kvm_inject_gp(vcpu, 0);
return;
}
void kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
{
if (cr8 & CR8_RESERVED_BITS) {
+ printk(KERN_DEBUG "set_cr8: #GP, reserved bits 0x%lx\n", cr8);
kvm_inject_gp(vcpu, 0);
return;
}
MSR_IA32_MISC_ENABLE,
};
-static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
+static void set_efer(struct kvm_vcpu *vcpu, u64 efer)
{
- if (efer & efer_reserved_bits)
- return 1;
+ if (efer & efer_reserved_bits) {
+ printk(KERN_DEBUG "set_efer: 0x%llx #GP, reserved bits\n",
+ efer);
+ kvm_inject_gp(vcpu, 0);
+ return;
+ }
if (is_paging(vcpu)
- && (vcpu->arch.shadow_efer & EFER_LME) != (efer & EFER_LME))
- return 1;
+ && (vcpu->arch.shadow_efer & EFER_LME) != (efer & EFER_LME)) {
+ printk(KERN_DEBUG "set_efer: #GP, change LME while paging\n");
+ kvm_inject_gp(vcpu, 0);
+ return;
+ }
if (efer & EFER_FFXSR) {
struct kvm_cpuid_entry2 *feat;
feat = kvm_find_cpuid_entry(vcpu, 0x80000001, 0);
- if (!feat || !(feat->edx & bit(X86_FEATURE_FXSR_OPT)))
- return 1;
+ if (!feat || !(feat->edx & bit(X86_FEATURE_FXSR_OPT))) {
+ printk(KERN_DEBUG "set_efer: #GP, enable FFXSR w/o CPUID capability\n");
+ kvm_inject_gp(vcpu, 0);
+ return;
+ }
}
if (efer & EFER_SVME) {
struct kvm_cpuid_entry2 *feat;
feat = kvm_find_cpuid_entry(vcpu, 0x80000001, 0);
- if (!feat || !(feat->ecx & bit(X86_FEATURE_SVM)))
- return 1;
+ if (!feat || !(feat->ecx & bit(X86_FEATURE_SVM))) {
+ printk(KERN_DEBUG "set_efer: #GP, enable SVM w/o SVM\n");
+ kvm_inject_gp(vcpu, 0);
+ return;
+ }
}
+ kvm_x86_ops->set_efer(vcpu, efer);
+
efer &= ~EFER_LMA;
efer |= vcpu->arch.shadow_efer & EFER_LMA;
- kvm_x86_ops->set_efer(vcpu, efer);
-
vcpu->arch.shadow_efer = efer;
vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
kvm_mmu_reset_context(vcpu);
-
- return 0;
}
void kvm_enable_efer_bits(u64 mask)
static void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock)
{
- int version;
- int r;
+ static int version;
struct pvclock_wall_clock wc;
struct timespec boot;
if (!wall_clock)
return;
- r = kvm_read_guest(kvm, wall_clock, &version, sizeof(version));
- if (r)
- return;
-
- if (version & 1)
- ++version; /* first time write, random junk */
-
- ++version;
+ version++;
kvm_write_guest(kvm, wall_clock, &version, sizeof(version));
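The version handling removed here follows the usual pvclock convention: the version field is left odd while the host rewrites the structure and even once it is consistent, so a guest retries any read that sees an odd or changing version. Below is a simplified sketch of the reader side, assuming a flattened structure and GCC's __sync_synchronize() barriers; it is illustrative only and not the guest's actual layout.

#include <stdint.h>
#include <stdio.h>

struct wall_clock_sketch {
        volatile uint32_t version;      /* odd while an update is in progress */
        volatile uint32_t sec;
        volatile uint32_t nsec;
};

static void read_wall_clock(const struct wall_clock_sketch *wc,
                            uint32_t *sec, uint32_t *nsec)
{
        uint32_t v;

        do {
                while ((v = wc->version) & 1)
                        ;                       /* writer active, spin */
                __sync_synchronize();           /* read fields after version */
                *sec  = wc->sec;
                *nsec = wc->nsec;
                __sync_synchronize();           /* then re-check the version */
        } while (wc->version != v);
}

int main(void)
{
        struct wall_clock_sketch wc = { .version = 2, .sec = 100, .nsec = 5 };
        uint32_t sec, nsec;

        read_wall_clock(&wc, &sec, &nsec);
        printf("%u.%09u\n", sec, nsec);
        return 0;
}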
if (msr >= MSR_IA32_MC0_CTL &&
msr < MSR_IA32_MC0_CTL + 4 * bank_num) {
u32 offset = msr - MSR_IA32_MC0_CTL;
- /* only 0 or all 1s can be written to IA32_MCi_CTL
- * some Linux kernels, though, clear bit 10 in bank 4 to
- * work around a BIOS/GART TBL issue on AMD K8s; ignore
- * this to avoid an uncaught #GP in the guest
- */
+ /* only 0 or all 1s can be written to IA32_MCi_CTL */
if ((offset & 0x3) == 0 &&
- data != 0 && (data | (1 << 10)) != ~(u64)0)
+ data != 0 && data != ~(u64)0)
return -1;
vcpu->arch.mce_banks[offset] = data;
break;
{
switch (msr) {
case MSR_EFER:
- return set_efer(vcpu, data);
+ set_efer(vcpu, data);
+ break;
case MSR_K7_HWCR:
data &= ~(u64)0x40; /* ignore flush filter disable */
if (data != 0) {
case KVM_CAP_NR_MEMSLOTS:
r = KVM_MEMORY_SLOTS;
break;
- case KVM_CAP_PV_MMU: /* obsolete */
- r = 0;
+ case KVM_CAP_PV_MMU:
+ r = !tdp_enabled;
break;
case KVM_CAP_IOMMU:
r = iommu_found();
{
int r;
- vcpu_load(vcpu);
r = -E2BIG;
if (cpuid->nent < vcpu->arch.cpuid_nent)
goto out;
out:
cpuid->nent = vcpu->arch.cpuid_nent;
- vcpu_put(vcpu);
return r;
}
const u32 kvm_supported_word6_x86_features =
F(LAHF_LM) | F(CMP_LEGACY) | F(SVM) | 0 /* ExtApicSpace */ |
F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
- F(3DNOWPREFETCH) | 0 /* OSVW */ | 0 /* IBS */ | F(XOP) |
+ F(3DNOWPREFETCH) | 0 /* OSVW */ | 0 /* IBS */ | F(SSE5) |
0 /* SKINIT */ | 0 /* WDT */;
/* all calls to cpuid_count() should be made on the same cpu */
int r;
unsigned bank_num = mcg_cap & 0xff, bank;
- vcpu_load(vcpu);
r = -EINVAL;
if (!bank_num || bank_num >= KVM_MAX_MCE_BANKS)
goto out;
for (bank = 0; bank < bank_num; bank++)
vcpu->arch.mce_banks[bank*4] = ~(u64)0;
out:
- vcpu_put(vcpu);
return r;
}
r = -EFAULT;
if (copy_from_user(&mce, argp, sizeof mce))
goto out;
- vcpu_load(vcpu);
r = kvm_vcpu_ioctl_x86_set_mce(vcpu, &mce);
- vcpu_put(vcpu);
break;
}
default:
sizeof(ps->channels));
ps->flags = kvm->arch.vpit->pit_state.flags;
mutex_unlock(&kvm->arch.vpit->pit_state.lock);
- memset(&ps->reserved, 0, sizeof(ps->reserved));
return r;
}
struct kvm_dirty_log *log)
{
int r;
- unsigned long n;
+ int n;
struct kvm_memory_slot *memslot;
int is_dirty = 0;
kvm_mmu_slot_remove_write_access(kvm, log->slot);
spin_unlock(&kvm->mmu_lock);
memslot = &kvm->memslots[log->slot];
- n = kvm_dirty_bitmap_bytes(memslot);
+ n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
memset(memslot->dirty_bitmap, 0, n);
}
r = 0;
now_ns = timespec_to_ns(&now);
user_ns.clock = kvm->arch.kvmclock_offset + now_ns;
user_ns.flags = 0;
- memset(&user_ns.pad, 0, sizeof(user_ns.pad));
r = -EFAULT;
if (copy_to_user(argp, &user_ns, sizeof(user_ns)))
return kvm_io_bus_read(&vcpu->kvm->mmio_bus, addr, len, v);
}
-gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva, u32 *error)
-{
- u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
- return vcpu->arch.mmu.gva_to_gpa(vcpu, gva, access, error);
-}
-
- gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva, u32 *error)
-{
- u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
- access |= PFERR_FETCH_MASK;
- return vcpu->arch.mmu.gva_to_gpa(vcpu, gva, access, error);
-}
-
-gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva, u32 *error)
-{
- u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
- access |= PFERR_WRITE_MASK;
- return vcpu->arch.mmu.gva_to_gpa(vcpu, gva, access, error);
-}
-
-/* uses this to access any guest's mapped memory without checking CPL */
-gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva, u32 *error)
-{
- return vcpu->arch.mmu.gva_to_gpa(vcpu, gva, 0, error);
-}
-
-static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
- struct kvm_vcpu *vcpu, u32 access,
- u32 *error)
+static int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
+ struct kvm_vcpu *vcpu)
{
void *data = val;
int r = X86EMUL_CONTINUE;
while (bytes) {
- gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr, access, error);
+ gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr);
unsigned offset = addr & (PAGE_SIZE-1);
unsigned toread = min(bytes, (unsigned)PAGE_SIZE - offset);
int ret;
return r;
}
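kvm_read_guest_virt_helper() above copies page by page because every guest virtual page has to be translated separately before it can be read. A standalone sketch of that copy pattern follows, with memcpy() standing in for the per-page translate-and-read step and host pointers standing in for guest addresses; PAGE_SIZE here is assumed, not taken from the patch.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Copy `bytes` from a source that can only be accessed one page at a
 * time; each iteration stops at the next page boundary of `src`. */
static void copy_pagewise(void *dst, const void *src, size_t bytes)
{
        const char *s = src;
        char *d = dst;

        while (bytes) {
                size_t offset = (uintptr_t)s & (PAGE_SIZE - 1);
                size_t chunk = bytes < PAGE_SIZE - offset ?
                               bytes : PAGE_SIZE - offset;

                /* In the emulator this is where the per-page translation
                 * (gva -> gpa) and the guest read happen. */
                memcpy(d, s, chunk);

                bytes -= chunk;
                d += chunk;
                s += chunk;
        }
}

int main(void)
{
        static char src[10000], dst[10000];

        memset(src, 'x', sizeof(src));
        copy_pagewise(dst, src, sizeof(src));
        printf("%s\n", memcmp(dst, src, sizeof(dst)) ? "mismatch" : "ok");
        return 0;
}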
-/* used for instruction fetching */
-static int kvm_fetch_guest_virt(gva_t addr, void *val, unsigned int bytes,
- struct kvm_vcpu *vcpu, u32 *error)
-{
- u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
- return kvm_read_guest_virt_helper(addr, val, bytes, vcpu,
- access | PFERR_FETCH_MASK, error);
-}
-
-static int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
- struct kvm_vcpu *vcpu, u32 *error)
-{
- u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
- return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
- error);
-}
-
-static int kvm_read_guest_virt_system(gva_t addr, void *val, unsigned int bytes,
- struct kvm_vcpu *vcpu, u32 *error)
-{
- return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, 0, error);
-}
-
static int kvm_write_guest_virt(gva_t addr, void *val, unsigned int bytes,
- struct kvm_vcpu *vcpu, u32 *error)
+ struct kvm_vcpu *vcpu)
{
void *data = val;
int r = X86EMUL_CONTINUE;
while (bytes) {
- gpa_t gpa = kvm_mmu_gva_to_gpa_write(vcpu, addr, error);
+ gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr);
unsigned offset = addr & (PAGE_SIZE-1);
unsigned towrite = min(bytes, (unsigned)PAGE_SIZE - offset);
int ret;
struct kvm_vcpu *vcpu)
{
gpa_t gpa;
- u32 error_code;
if (vcpu->mmio_read_completed) {
memcpy(val, vcpu->mmio_data, bytes);
return X86EMUL_CONTINUE;
}
- gpa = kvm_mmu_gva_to_gpa_read(vcpu, addr, &error_code);
-
- if (gpa == UNMAPPED_GVA) {
- kvm_inject_page_fault(vcpu, addr, error_code);
- return X86EMUL_PROPAGATE_FAULT;
- }
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr);
/* For APIC access vmexit */
if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
goto mmio;
- if (kvm_read_guest_virt(addr, val, bytes, vcpu, NULL)
+ if (kvm_read_guest_virt(addr, val, bytes, vcpu)
== X86EMUL_CONTINUE)
return X86EMUL_CONTINUE;
+ if (gpa == UNMAPPED_GVA)
+ return X86EMUL_PROPAGATE_FAULT;
mmio:
/*
struct kvm_vcpu *vcpu)
{
gpa_t gpa;
- u32 error_code;
- gpa = kvm_mmu_gva_to_gpa_write(vcpu, addr, &error_code);
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr);
if (gpa == UNMAPPED_GVA) {
- kvm_inject_page_fault(vcpu, addr, error_code);
+ kvm_inject_page_fault(vcpu, addr, 2);
return X86EMUL_PROPAGATE_FAULT;
}
char *kaddr;
u64 val;
- gpa = kvm_mmu_gva_to_gpa_write(vcpu, addr, NULL);
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, addr);
if (gpa == UNMAPPED_GVA ||
(gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
{
struct kvm_vcpu *vcpu = ctxt->vcpu;
- if (!kvm_x86_ops->get_dr)
- return X86EMUL_UNHANDLEABLE;
-
switch (dr) {
case 0 ... 3:
*dest = kvm_x86_ops->get_dr(vcpu, dr);
unsigned long mask = (ctxt->mode == X86EMUL_MODE_PROT64) ? ~0ULL : ~0U;
int exception;
- if (!kvm_x86_ops->set_dr)
- return X86EMUL_UNHANDLEABLE;
-
kvm_x86_ops->set_dr(ctxt->vcpu, dr, value & mask, &exception);
if (exception) {
/* FIXME: better handling */
rip_linear = rip + get_segment_base(vcpu, VCPU_SREG_CS);
- kvm_read_guest_virt(rip_linear, (void *)opcodes, 4, vcpu, NULL);
+ kvm_read_guest_virt(rip_linear, (void *)opcodes, 4, vcpu);
printk(KERN_ERR "emulation failed (%s) rip %lx %02x %02x %02x %02x\n",
context, rip, opcodes[0], opcodes[1], opcodes[2], opcodes[3]);
EXPORT_SYMBOL_GPL(kvm_report_emulation_failure);
static struct x86_emulate_ops emulate_ops = {
- .read_std = kvm_read_guest_virt_system,
- .fetch = kvm_fetch_guest_virt,
+ .read_std = kvm_read_guest_virt,
.read_emulated = emulator_read_emulated,
.write_emulated = emulator_write_emulated,
.cmpxchg_emulated = emulator_cmpxchg_emulated,
vcpu->arch.emulate_ctxt.vcpu = vcpu;
vcpu->arch.emulate_ctxt.eflags = kvm_x86_ops->get_rflags(vcpu);
vcpu->arch.emulate_ctxt.mode =
- (!(vcpu->arch.cr0 & X86_CR0_PE)) ? X86EMUL_MODE_REAL :
(vcpu->arch.emulate_ctxt.eflags & X86_EFLAGS_VM)
- ? X86EMUL_MODE_VM86 : cs_l
+ ? X86EMUL_MODE_REAL : cs_l
? X86EMUL_MODE_PROT64 : cs_db
? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
gva_t q = vcpu->arch.pio.guest_gva;
unsigned bytes;
int ret;
- u32 error_code;
bytes = vcpu->arch.pio.size * vcpu->arch.pio.cur_count;
if (vcpu->arch.pio.in)
- ret = kvm_write_guest_virt(q, p, bytes, vcpu, &error_code);
+ ret = kvm_write_guest_virt(q, p, bytes, vcpu);
else
- ret = kvm_read_guest_virt(q, p, bytes, vcpu, &error_code);
-
- if (ret == X86EMUL_PROPAGATE_FAULT)
- kvm_inject_page_fault(vcpu, q, error_code);
-
+ ret = kvm_read_guest_virt(q, p, bytes, vcpu);
return ret;
}
if (io->in) {
r = pio_copy_data(vcpu);
if (r)
- goto out;
+ return r;
}
delta = 1;
kvm_register_write(vcpu, VCPU_REGS_RSI, val);
}
}
-out:
+
io->count -= io->cur_count;
io->cur_count = 0;
{
unsigned long val;
- trace_kvm_pio(!in, port, size, 1);
-
vcpu->run->exit_reason = KVM_EXIT_IO;
vcpu->run->io.direction = in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT;
vcpu->run->io.size = vcpu->arch.pio.size = size;
vcpu->arch.pio.down = 0;
vcpu->arch.pio.rep = 0;
+ trace_kvm_pio(vcpu->run->io.direction == KVM_EXIT_IO_OUT, port,
+ size, 1);
+
val = kvm_register_read(vcpu, VCPU_REGS_RAX);
memcpy(vcpu->arch.pio_data, &val, 4);
unsigned now, in_page;
int ret = 0;
- trace_kvm_pio(!in, port, size, count);
-
vcpu->run->exit_reason = KVM_EXIT_IO;
vcpu->run->io.direction = in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT;
vcpu->run->io.size = vcpu->arch.pio.size = size;
vcpu->arch.pio.down = down;
vcpu->arch.pio.rep = rep;
+ trace_kvm_pio(vcpu->run->io.direction == KVM_EXIT_IO_OUT, port,
+ size, count);
+
if (!count) {
kvm_x86_ops->skip_emulated_instruction(vcpu);
return 1;
if (!vcpu->arch.pio.in) {
/* string PIO write */
ret = pio_copy_data(vcpu);
- if (ret == X86EMUL_PROPAGATE_FAULT)
+ if (ret == X86EMUL_PROPAGATE_FAULT) {
+ kvm_inject_gp(vcpu, 0);
return 1;
+ }
if (ret == 0 && !pio_string_write(vcpu)) {
complete_pio(vcpu);
if (vcpu->arch.pio.count == 0)
kvm_queue_exception_e(vcpu, GP_VECTOR, selector & 0xfffc);
return 1;
}
- return kvm_read_guest_virt_system(dtable.base + index*8,
- seg_desc, sizeof(*seg_desc),
- vcpu, NULL);
+ return kvm_read_guest_virt(dtable.base + index*8, seg_desc, sizeof(*seg_desc), vcpu);
}
/* allowed just for 8 bytes segments */
if (dtable.limit < index * 8 + 7)
return 1;
- return kvm_write_guest_virt(dtable.base + index*8, seg_desc, sizeof(*seg_desc), vcpu, NULL);
+ return kvm_write_guest_virt(dtable.base + index*8, seg_desc, sizeof(*seg_desc), vcpu);
}
-static gpa_t get_tss_base_addr_write(struct kvm_vcpu *vcpu,
- struct desc_struct *seg_desc)
-{
- u32 base_addr = get_desc_base(seg_desc);
-
- return kvm_mmu_gva_to_gpa_write(vcpu, base_addr, NULL);
-}
-
-static gpa_t get_tss_base_addr_read(struct kvm_vcpu *vcpu,
+static gpa_t get_tss_base_addr(struct kvm_vcpu *vcpu,
struct desc_struct *seg_desc)
{
u32 base_addr = get_desc_base(seg_desc);
- return kvm_mmu_gva_to_gpa_read(vcpu, base_addr, NULL);
+ return vcpu->arch.mmu.gva_to_gpa(vcpu, base_addr);
}
static u16 get_segment_selector(struct kvm_vcpu *vcpu, int seg)
return kvm_seg.selector;
}
+static int load_segment_descriptor_to_kvm_desct(struct kvm_vcpu *vcpu,
+ u16 selector,
+ struct kvm_segment *kvm_seg)
+{
+ struct desc_struct seg_desc;
+
+ if (load_guest_segment_descriptor(vcpu, selector, &seg_desc))
+ return 1;
+ seg_desct_to_kvm_desct(&seg_desc, selector, kvm_seg);
+ return 0;
+}
+
static int kvm_load_realmode_segment(struct kvm_vcpu *vcpu, u16 selector, int seg)
{
struct kvm_segment segvar = {
.unusable = 0,
};
kvm_x86_ops->set_segment(vcpu, &segvar, seg);
- return X86EMUL_CONTINUE;
+ return 0;
}
static int is_vm86_segment(struct kvm_vcpu *vcpu, int seg)
(kvm_x86_ops->get_rflags(vcpu) & X86_EFLAGS_VM);
}
-int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector, int seg)
+int kvm_load_segment_descriptor(struct kvm_vcpu *vcpu, u16 selector,
+ int type_bits, int seg)
{
struct kvm_segment kvm_seg;
- struct desc_struct seg_desc;
- u8 dpl, rpl, cpl;
- unsigned err_vec = GP_VECTOR;
- u32 err_code = 0;
- bool null_selector = !(selector & ~0x3); /* 0000-0003 are null */
- int ret;
if (is_vm86_segment(vcpu, seg) || !(vcpu->arch.cr0 & X86_CR0_PE))
return kvm_load_realmode_segment(vcpu, selector, seg);
+ if (load_segment_descriptor_to_kvm_desct(vcpu, selector, &kvm_seg))
+ return 1;
+ kvm_seg.type |= type_bits;
+ if (seg != VCPU_SREG_SS && seg != VCPU_SREG_CS &&
+ seg != VCPU_SREG_LDTR)
+ if (!kvm_seg.s)
+ kvm_seg.unusable = 1;
- /* NULL selector is not valid for TR, CS and SS */
- if ((seg == VCPU_SREG_CS || seg == VCPU_SREG_SS || seg == VCPU_SREG_TR)
- && null_selector)
- goto exception;
-
- /* TR should be in GDT only */
- if (seg == VCPU_SREG_TR && (selector & (1 << 2)))
- goto exception;
-
- ret = load_guest_segment_descriptor(vcpu, selector, &seg_desc);
- if (ret)
- return ret;
-
- seg_desct_to_kvm_desct(&seg_desc, selector, &kvm_seg);
-
- if (null_selector) { /* for NULL selector skip all following checks */
- kvm_seg.unusable = 1;
- goto load;
- }
-
- err_code = selector & 0xfffc;
- err_vec = GP_VECTOR;
-
- /* can't load system descriptor into segment selector */
- if (seg <= VCPU_SREG_GS && !kvm_seg.s)
- goto exception;
-
- if (!kvm_seg.present) {
- err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
- goto exception;
- }
-
- rpl = selector & 3;
- dpl = kvm_seg.dpl;
- cpl = kvm_x86_ops->get_cpl(vcpu);
-
- switch (seg) {
- case VCPU_SREG_SS:
- /*
- * segment is not a writable data segment, or the segment
- * selector's RPL != CPL, or the segment's DPL != CPL
- */
- if (rpl != cpl || (kvm_seg.type & 0xa) != 0x2 || dpl != cpl)
- goto exception;
- break;
- case VCPU_SREG_CS:
- if (!(kvm_seg.type & 8))
- goto exception;
-
- if (kvm_seg.type & 4) {
- /* conforming */
- if (dpl > cpl)
- goto exception;
- } else {
- /* nonconforming */
- if (rpl > cpl || dpl != cpl)
- goto exception;
- }
- /* CS(RPL) <- CPL */
- selector = (selector & 0xfffc) | cpl;
- break;
- case VCPU_SREG_TR:
- if (kvm_seg.s || (kvm_seg.type != 1 && kvm_seg.type != 9))
- goto exception;
- break;
- case VCPU_SREG_LDTR:
- if (kvm_seg.s || kvm_seg.type != 2)
- goto exception;
- break;
- default: /* DS, ES, FS, or GS */
- /*
- * segment is not a data or readable code segment or
- * ((segment is a data or nonconforming code segment)
- * and (both RPL and CPL > DPL))
- */
- if ((kvm_seg.type & 0xa) == 0x8 ||
- (((kvm_seg.type & 0xc) != 0xc) && (rpl > dpl && cpl > dpl)))
- goto exception;
- break;
- }
-
- if (!kvm_seg.unusable && kvm_seg.s) {
- /* mark segment as accessed */
- kvm_seg.type |= 1;
- seg_desc.type |= 1;
- save_guest_segment_descriptor(vcpu, selector, &seg_desc);
- }
-load:
kvm_set_segment(vcpu, &kvm_seg, seg);
- return X86EMUL_CONTINUE;
-exception:
- kvm_queue_exception_e(vcpu, err_vec, err_code);
- return X86EMUL_PROPAGATE_FAULT;
+ return 0;
}
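The SS case removed above encodes the hardware rule that the stack segment must be a writable data segment whose RPL and DPL both equal the CPL. A minimal C sketch of that predicate, with a hypothetical helper name and the 4-bit descriptor type passed in directly:

/* Sketch only: bit 3 of 'type' clear means a data segment, bit 1 set means
 * it is writable, so (type & 0xa) == 0x2 selects "writable data segment". */
static bool ss_selector_valid(u8 rpl, u8 dpl, u8 cpl, u8 type)
{
	return rpl == cpl && dpl == cpl && (type & 0xa) == 0x2;
}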
static void save_state_to_tss32(struct kvm_vcpu *vcpu,
tss->ldt_selector = get_segment_selector(vcpu, VCPU_SREG_LDTR);
}
-static void kvm_load_segment_selector(struct kvm_vcpu *vcpu, u16 sel, int seg)
-{
- struct kvm_segment kvm_seg;
- kvm_get_segment(vcpu, &kvm_seg, seg);
- kvm_seg.selector = sel;
- kvm_set_segment(vcpu, &kvm_seg, seg);
-}
-
static int load_state_from_tss32(struct kvm_vcpu *vcpu,
struct tss_segment_32 *tss)
{
kvm_register_write(vcpu, VCPU_REGS_RSI, tss->esi);
kvm_register_write(vcpu, VCPU_REGS_RDI, tss->edi);
- /*
- * SDM says that segment selectors are loaded before segment
- * descriptors
- */
- kvm_load_segment_selector(vcpu, tss->ldt_selector, VCPU_SREG_LDTR);
- kvm_load_segment_selector(vcpu, tss->es, VCPU_SREG_ES);
- kvm_load_segment_selector(vcpu, tss->cs, VCPU_SREG_CS);
- kvm_load_segment_selector(vcpu, tss->ss, VCPU_SREG_SS);
- kvm_load_segment_selector(vcpu, tss->ds, VCPU_SREG_DS);
- kvm_load_segment_selector(vcpu, tss->fs, VCPU_SREG_FS);
- kvm_load_segment_selector(vcpu, tss->gs, VCPU_SREG_GS);
-
- /*
-	 * Now load segment descriptors. If a fault happens at this stage
-	 * it is handled in the context of the new task
- */
- if (kvm_load_segment_descriptor(vcpu, tss->ldt_selector, VCPU_SREG_LDTR))
+ if (kvm_load_segment_descriptor(vcpu, tss->ldt_selector, 0, VCPU_SREG_LDTR))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->es, VCPU_SREG_ES))
+ if (kvm_load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->cs, VCPU_SREG_CS))
+ if (kvm_load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->ss, VCPU_SREG_SS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->ds, VCPU_SREG_DS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->fs, VCPU_SREG_FS))
+ if (kvm_load_segment_descriptor(vcpu, tss->fs, 1, VCPU_SREG_FS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->gs, VCPU_SREG_GS))
+ if (kvm_load_segment_descriptor(vcpu, tss->gs, 1, VCPU_SREG_GS))
return 1;
return 0;
}
kvm_register_write(vcpu, VCPU_REGS_RSI, tss->si);
kvm_register_write(vcpu, VCPU_REGS_RDI, tss->di);
- /*
- * SDM says that segment selectors are loaded before segment
- * descriptors
- */
- kvm_load_segment_selector(vcpu, tss->ldt, VCPU_SREG_LDTR);
- kvm_load_segment_selector(vcpu, tss->es, VCPU_SREG_ES);
- kvm_load_segment_selector(vcpu, tss->cs, VCPU_SREG_CS);
- kvm_load_segment_selector(vcpu, tss->ss, VCPU_SREG_SS);
- kvm_load_segment_selector(vcpu, tss->ds, VCPU_SREG_DS);
-
- /*
-	 * Now load segment descriptors. If a fault happens at this stage
-	 * it is handled in the context of the new task
- */
- if (kvm_load_segment_descriptor(vcpu, tss->ldt, VCPU_SREG_LDTR))
+ if (kvm_load_segment_descriptor(vcpu, tss->ldt, 0, VCPU_SREG_LDTR))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->es, VCPU_SREG_ES))
+ if (kvm_load_segment_descriptor(vcpu, tss->es, 1, VCPU_SREG_ES))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->cs, VCPU_SREG_CS))
+ if (kvm_load_segment_descriptor(vcpu, tss->cs, 9, VCPU_SREG_CS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->ss, VCPU_SREG_SS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ss, 1, VCPU_SREG_SS))
return 1;
- if (kvm_load_segment_descriptor(vcpu, tss->ds, VCPU_SREG_DS))
+ if (kvm_load_segment_descriptor(vcpu, tss->ds, 1, VCPU_SREG_DS))
return 1;
return 0;
}
sizeof tss_segment_16))
goto out;
- if (kvm_read_guest(vcpu->kvm, get_tss_base_addr_read(vcpu, nseg_desc),
+ if (kvm_read_guest(vcpu->kvm, get_tss_base_addr(vcpu, nseg_desc),
&tss_segment_16, sizeof tss_segment_16))
goto out;
tss_segment_16.prev_task_link = old_tss_sel;
if (kvm_write_guest(vcpu->kvm,
- get_tss_base_addr_write(vcpu, nseg_desc),
+ get_tss_base_addr(vcpu, nseg_desc),
&tss_segment_16.prev_task_link,
sizeof tss_segment_16.prev_task_link))
goto out;
sizeof tss_segment_32))
goto out;
- if (kvm_read_guest(vcpu->kvm, get_tss_base_addr_read(vcpu, nseg_desc),
+ if (kvm_read_guest(vcpu->kvm, get_tss_base_addr(vcpu, nseg_desc),
&tss_segment_32, sizeof tss_segment_32))
goto out;
tss_segment_32.prev_task_link = old_tss_sel;
if (kvm_write_guest(vcpu->kvm,
- get_tss_base_addr_write(vcpu, nseg_desc),
+ get_tss_base_addr(vcpu, nseg_desc),
&tss_segment_32.prev_task_link,
sizeof tss_segment_32.prev_task_link))
goto out;
int ret = 0;
u32 old_tss_base = get_segment_base(vcpu, VCPU_SREG_TR);
u16 old_tss_sel = get_segment_selector(vcpu, VCPU_SREG_TR);
- u32 desc_limit;
- old_tss_base = kvm_mmu_gva_to_gpa_write(vcpu, old_tss_base, NULL);
+ old_tss_base = vcpu->arch.mmu.gva_to_gpa(vcpu, old_tss_base);
/* FIXME: Handle errors. Failure to read either TSS or their
* descriptors should generate a pagefault.
}
}
- desc_limit = get_desc_limit(&nseg_desc);
- if (!nseg_desc.p ||
- ((desc_limit < 0x67 && (nseg_desc.type & 8)) ||
- desc_limit < 0x2b)) {
+ if (!nseg_desc.p || get_desc_limit(&nseg_desc) < 0x67) {
kvm_queue_exception_e(vcpu, TS_VECTOR, tss_selector & 0xfffc);
return 1;
}
vcpu_load(vcpu);
down_read(&vcpu->kvm->slots_lock);
- gpa = kvm_mmu_gva_to_gpa_system(vcpu, vaddr, NULL);
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, vaddr);
up_read(&vcpu->kvm->slots_lock);
tr->physical_address = gpa;
tr->valid = gpa != UNMAPPED_GVA;
# Makefile for x86 specific library files.
#
-obj-$(CONFIG_SMP) += msr-smp.o cache-smp.o
+obj-$(CONFIG_SMP) += msr-smp.o
lib-y := delay.o
lib-y += thunk_$(BITS).o
lib-y += thunk_64.o clear_page_64.o copy_page_64.o
lib-y += memmove_64.o memset_64.o
lib-y += copy_user_64.o rwlock_64.o copy_user_nocache_64.o
- lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem_64.o
endif
+++ /dev/null
-#include <linux/smp.h>
-#include <linux/module.h>
-
-static void __wbinvd(void *dummy)
-{
- wbinvd();
-}
-
-void wbinvd_on_cpu(int cpu)
-{
- smp_call_function_single(cpu, __wbinvd, NULL, 1);
-}
-EXPORT_SYMBOL(wbinvd_on_cpu);
-
-int wbinvd_on_all_cpus(void)
-{
- return on_each_cpu(__wbinvd, NULL, 1);
-}
-EXPORT_SYMBOL(wbinvd_on_all_cpus);
+++ /dev/null
-/*
- * x86-64 rwsem wrappers
- *
- * This interfaces the inline asm code to the slow-path
- * C routines. We need to save the call-clobbered regs
- * that the asm does not mark as clobbered, and move the
- * argument from %rax to %rdi.
- *
- * NOTE! We don't need to save %rax, because the functions
- * will always return the semaphore pointer in %rax (which
- * is also the input argument to these helpers)
- *
- * The following can clobber %rdx because the asm clobbers it:
- * call_rwsem_down_write_failed
- * call_rwsem_wake
- * but %rdi, %rsi, %rcx, %r8-r11 always need saving.
- */
-
-#include <linux/linkage.h>
-#include <asm/rwlock.h>
-#include <asm/alternative-asm.h>
-#include <asm/frame.h>
-#include <asm/dwarf2.h>
-
-#define save_common_regs \
- pushq %rdi; \
- pushq %rsi; \
- pushq %rcx; \
- pushq %r8; \
- pushq %r9; \
- pushq %r10; \
- pushq %r11
-
-#define restore_common_regs \
- popq %r11; \
- popq %r10; \
- popq %r9; \
- popq %r8; \
- popq %rcx; \
- popq %rsi; \
- popq %rdi
-
-/* Fix up special calling conventions */
-ENTRY(call_rwsem_down_read_failed)
- save_common_regs
- pushq %rdx
- movq %rax,%rdi
- call rwsem_down_read_failed
- popq %rdx
- restore_common_regs
- ret
- ENDPROC(call_rwsem_down_read_failed)
-
-ENTRY(call_rwsem_down_write_failed)
- save_common_regs
- movq %rax,%rdi
- call rwsem_down_write_failed
- restore_common_regs
- ret
- ENDPROC(call_rwsem_down_write_failed)
-
-ENTRY(call_rwsem_wake)
- decw %dx /* do nothing if still outstanding active readers */
- jnz 1f
- save_common_regs
- movq %rax,%rdi
- call rwsem_wake
- restore_common_regs
-1: ret
- ENDPROC(call_rwsem_wake)
-
-/* Fix up special calling conventions */
-ENTRY(call_rwsem_downgrade_wake)
- save_common_regs
- pushq %rdx
- movq %rax,%rdi
- call rwsem_downgrade_wake
- popq %rdx
- restore_common_regs
- ret
- ENDPROC(call_rwsem_downgrade_wake)
up_read(&mm->mmap_sem);
/* Kernel mode? Handle exceptions or die: */
- if (!(error_code & PF_USER)) {
+ if (!(error_code & PF_USER))
no_context(regs, error_code, address);
- return;
- }
/* User-space => ok to do another page fault: */
if (is_prefetch(regs, error_code, address))
#include <asm/numa.h>
#include <asm/cacheflush.h>
#include <asm/init.h>
-#include <linux/bootmem.h>
static unsigned long dma_reserve __initdata;
* Memory hotplug specific functions
*/
#ifdef CONFIG_MEMORY_HOTPLUG
-/*
- * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
- * updating.
- */
-static void update_end_of_memory_vars(u64 start, u64 size)
-{
- unsigned long end_pfn = PFN_UP(start + size);
-
- if (end_pfn > max_pfn) {
- max_pfn = end_pfn;
- max_low_pfn = end_pfn;
- high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
- }
-}
-
/*
 * Memory is always added to the NORMAL zone. This means you will never get
* additional DMA/DMA32 memory.
ret = __add_pages(nid, zone, start_pfn, nr_pages);
WARN_ON_ONCE(ret);
- /* update max_pfn, max_low_pfn and high_memory */
- update_end_of_memory_vars(start, size);
-
return ret;
}
EXPORT_SYMBOL_GPL(arch_add_memory);
#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO
-#ifdef CONFIG_HIGHPTE
-#define PGALLOC_USER_GFP __GFP_HIGHMEM
-#else
-#define PGALLOC_USER_GFP 0
-#endif
-
-gfp_t __userpte_alloc_gfp = PGALLOC_GFP | PGALLOC_USER_GFP;
-
pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
{
return (pte_t *)__get_free_page(PGALLOC_GFP);
{
struct page *pte;
- pte = alloc_pages(__userpte_alloc_gfp, 0);
+#ifdef CONFIG_HIGHPTE
+ pte = alloc_pages(PGALLOC_GFP | __GFP_HIGHMEM, 0);
+#else
+ pte = alloc_pages(PGALLOC_GFP, 0);
+#endif
if (pte)
pgtable_page_ctor(pte);
return pte;
}
-static int __init setup_userpte(char *arg)
-{
- if (!arg)
- return -EINVAL;
-
- /*
- * "userpte=nohigh" disables allocation of user pagetables in
- * high memory.
- */
- if (strcmp(arg, "nohigh") == 0)
- __userpte_alloc_gfp &= ~__GFP_HIGHMEM;
- else
- return -EINVAL;
- return 0;
-}
-early_param("userpte", setup_userpte);
-
void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
{
pgtable_page_dtor(pte);
static void nmi_cpu_start(void *dummy)
{
struct op_msrs const *msrs = &__get_cpu_var(cpu_msrs);
- if (!msrs->controls)
- WARN_ON_ONCE(1);
- else
- model->start(msrs);
+ model->start(msrs);
}
static int nmi_start(void)
static void nmi_cpu_stop(void *dummy)
{
struct op_msrs const *msrs = &__get_cpu_var(cpu_msrs);
- if (!msrs->controls)
- WARN_ON_ONCE(1);
- else
- model->stop(msrs);
+ model->stop(msrs);
}
static void nmi_stop(void)
for_each_possible_cpu(i) {
per_cpu(cpu_msrs, i).multiplex =
- kzalloc(multiplex_size, GFP_KERNEL);
+ kmalloc(multiplex_size, GFP_KERNEL);
if (!per_cpu(cpu_msrs, i).multiplex)
return 0;
}
if (counter_config[i].enabled) {
multiplex[i].saved = -(u64)counter_config[i].count;
} else {
+ multiplex[i].addr = 0;
multiplex[i].saved = 0;
}
}
static void nmi_cpu_save_mpx_registers(struct op_msrs *msrs)
{
- struct op_msr *counters = msrs->counters;
struct op_msr *multiplex = msrs->multiplex;
int i;
for (i = 0; i < model->num_counters; ++i) {
int virt = op_x86_phys_to_virt(i);
- if (counters[i].addr)
- rdmsrl(counters[i].addr, multiplex[virt].saved);
+ if (multiplex[virt].addr)
+ rdmsrl(multiplex[virt].addr, multiplex[virt].saved);
}
}
static void nmi_cpu_restore_mpx_registers(struct op_msrs *msrs)
{
- struct op_msr *counters = msrs->counters;
struct op_msr *multiplex = msrs->multiplex;
int i;
for (i = 0; i < model->num_counters; ++i) {
int virt = op_x86_phys_to_virt(i);
- if (counters[i].addr)
- wrmsrl(counters[i].addr, multiplex[virt].saved);
+ if (multiplex[virt].addr)
+ wrmsrl(multiplex[virt].addr, multiplex[virt].saved);
}
}
int i;
for_each_possible_cpu(i) {
- per_cpu(cpu_msrs, i).counters = kzalloc(counters_size,
+ per_cpu(cpu_msrs, i).counters = kmalloc(counters_size,
GFP_KERNEL);
if (!per_cpu(cpu_msrs, i).counters)
return 0;
- per_cpu(cpu_msrs, i).controls = kzalloc(controls_size,
+ per_cpu(cpu_msrs, i).controls = kmalloc(controls_size,
GFP_KERNEL);
if (!per_cpu(cpu_msrs, i).controls)
return 0;
int error;
error = sysdev_class_register(&oprofile_sysclass);
- if (error)
- return error;
-
- error = sysdev_register(&device_oprofile);
- if (error)
- sysdev_class_unregister(&oprofile_sysclass);
-
+ if (!error)
+ error = sysdev_register(&device_oprofile);
return error;
}
}
#else
-
-static inline int init_sysfs(void) { return 0; }
-static inline void exit_sysfs(void) { }
-
+#define init_sysfs() do { } while (0)
+#define exit_sysfs() do { } while (0)
#endif /* CONFIG_PM */
static int __init p4_init(char **cpu_type)
if (force_arch_perfmon && cpu_has_arch_perfmon)
return 0;
- /*
- * Documentation on identifying Intel processors by CPU family
- * and model can be found in the Intel Software Developer's
- * Manuals (SDM):
- *
- * http://www.intel.com/products/processor/manuals/
- *
- * As of May 2010 the documentation for this was in the:
- * "Intel 64 and IA-32 Architectures Software Developer's
- * Manual Volume 3B: System Programming Guide", "Table B-1
- * CPUID Signature Values of DisplayFamily_DisplayModel".
- */
switch (cpu_model) {
case 0 ... 2:
*cpu_type = "i386/ppro";
case 14:
*cpu_type = "i386/core";
break;
- case 0x0f:
- case 0x16:
- case 0x17:
- case 0x1d:
+ case 15: case 23:
*cpu_type = "i386/core_2";
break;
- case 0x1a:
- case 0x1e:
case 0x2e:
+ case 26:
spec = &op_arch_perfmon_spec;
*cpu_type = "i386/core_i7";
break;
- case 0x1c:
+ case 28:
*cpu_type = "i386/atom";
break;
default:
char *cpu_type = NULL;
int ret = 0;
- using_nmi = 0;
-
if (!cpu_has_apic)
return -ENODEV;
mux_init(ops);
- ret = init_sysfs();
- if (ret)
- return ret;
-
+ init_sysfs();
using_nmi = 1;
printk(KERN_INFO "oprofile: using NMI interrupt.\n");
return 0;
#ifdef CONFIG_OPROFILE_EVENT_MULTIPLEX
+static void op_mux_fill_in_addresses(struct op_msrs * const msrs)
+{
+ int i;
+
+ for (i = 0; i < NUM_VIRT_COUNTERS; i++) {
+ int hw_counter = op_x86_virt_to_phys(i);
+ if (reserve_perfctr_nmi(MSR_K7_PERFCTR0 + i))
+ msrs->multiplex[i].addr = MSR_K7_PERFCTR0 + hw_counter;
+ else
+ msrs->multiplex[i].addr = 0;
+ }
+}
+
static void op_mux_switch_ctrl(struct op_x86_model_spec const *model,
struct op_msrs const * const msrs)
{
/* enable active counters */
for (i = 0; i < NUM_COUNTERS; ++i) {
int virt = op_x86_phys_to_virt(i);
- if (!reset_value[virt])
+ if (!counter_config[virt].enabled)
continue;
rdmsrl(msrs->controls[i].addr, val);
val &= model->reserved;
}
}
+#else
+
+static inline void op_mux_fill_in_addresses(struct op_msrs * const msrs) { }
+
#endif
/* functions for op_amd_spec */
for (i = 0; i < NUM_COUNTERS; i++) {
if (reserve_perfctr_nmi(MSR_K7_PERFCTR0 + i))
msrs->counters[i].addr = MSR_K7_PERFCTR0 + i;
+ else
+ msrs->counters[i].addr = 0;
}
for (i = 0; i < NUM_CONTROLS; i++) {
if (reserve_evntsel_nmi(MSR_K7_EVNTSEL0 + i))
msrs->controls[i].addr = MSR_K7_EVNTSEL0 + i;
+ else
+ msrs->controls[i].addr = 0;
}
+
+ op_mux_fill_in_addresses(msrs);
}
static void op_amd_setup_ctrs(struct op_x86_model_spec const *model,
/* setup reset_value */
for (i = 0; i < NUM_VIRT_COUNTERS; ++i) {
- if (counter_config[i].enabled
- && msrs->counters[op_x86_virt_to_phys(i)].addr)
+ if (counter_config[i].enabled)
reset_value[i] = counter_config[i].count;
else
reset_value[i] = 0;
/* enable active counters */
for (i = 0; i < NUM_COUNTERS; ++i) {
int virt = op_x86_phys_to_virt(i);
- if (!reset_value[virt])
+ if (!counter_config[virt].enabled)
+ continue;
+ if (!msrs->counters[i].addr)
continue;
/* setup counter registers */
return 1;
}
+#ifdef CONFIG_NUMA
+ /* Sanity check */
+	/* Works only on 64-bit with a proper NUMA implementation. */
+ if (nodes != num_possible_nodes()) {
+ printk(KERN_DEBUG "Failed to setup CPU node(s) for IBS, "
+ "found: %d, expected %d",
+ nodes, num_possible_nodes());
+ return 1;
+ }
+#endif
return 0;
}
setup_num_counters();
stag = get_stagger();
+ /* initialize some registers */
+ for (i = 0; i < num_counters; ++i)
+ msrs->counters[i].addr = 0;
+ for (i = 0; i < num_controls; ++i)
+ msrs->controls[i].addr = 0;
+
/* the counter & cccr registers we pay attention to */
for (i = 0; i < num_counters; ++i) {
addr = p4_counters[VIRT_CTR(stag, i)].counter_address;
for (i = 0; i < num_counters; i++) {
if (reserve_perfctr_nmi(MSR_P6_PERFCTR0 + i))
msrs->counters[i].addr = MSR_P6_PERFCTR0 + i;
+ else
+ msrs->counters[i].addr = 0;
}
for (i = 0; i < num_counters; i++) {
if (reserve_evntsel_nmi(MSR_P6_EVNTSEL0 + i))
msrs->controls[i].addr = MSR_P6_EVNTSEL0 + i;
+ else
+ msrs->controls[i].addr = 0;
}
}
int i;
if (!reset_value) {
- reset_value = kzalloc(sizeof(reset_value[0]) * num_counters,
+ reset_value = kmalloc(sizeof(reset_value[0]) * num_counters,
GFP_ATOMIC);
if (!reset_value)
return;
case PCI_DEVICE_ID_INTEL_ICH10_1:
case PCI_DEVICE_ID_INTEL_ICH10_2:
case PCI_DEVICE_ID_INTEL_ICH10_3:
- case PCI_DEVICE_ID_INTEL_CPT_LPC1:
- case PCI_DEVICE_ID_INTEL_CPT_LPC2:
r->name = "PIIX/ICH";
r->get = pirq_piix_get;
r->set = pirq_piix_set;
ctxt->cr4 = read_cr4();
ctxt->cr8 = read_cr8();
#endif
- ctxt->misc_enable_saved = !rdmsrl_safe(MSR_IA32_MISC_ENABLE,
- &ctxt->misc_enable);
}
/* Needed by apm.c */
void save_processor_state(void)
{
__save_processor_state(&saved_context);
- save_sched_clock_state();
}
#ifdef CONFIG_X86_32
EXPORT_SYMBOL(save_processor_state);
*/
static void __restore_processor_state(struct saved_context *ctxt)
{
- if (ctxt->misc_enable_saved)
- wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
/*
* control registers
*/
void restore_processor_state(void)
{
__restore_processor_state(&saved_context);
- restore_sched_clock_state();
}
#ifdef CONFIG_X86_32
EXPORT_SYMBOL(restore_processor_state);
ret
ENTRY(restore_image)
- movl mmu_cr4_features, %ecx
movl resume_pg_dir, %eax
subl $__PAGE_OFFSET, %eax
movl %eax, %cr3
- jecxz 1f # cr4 Pentium and higher, skip if zero
- andl $~(X86_CR4_PGE), %ecx
- movl %ecx, %cr4; # turn off PGE
- movl %cr3, %eax; # flush TLB
- movl %eax, %cr3
-1:
movl restore_pblist, %edx
.p2align 4,,7
movl $swapper_pg_dir, %eax
subl $__PAGE_OFFSET, %eax
movl %eax, %cr3
+ /* Flush TLB, including "global" things (vmalloc) */
movl mmu_cr4_features, %ecx
jecxz 1f # cr4 Pentium and higher, skip if zero
+ movl %ecx, %edx
+ andl $~(X86_CR4_PGE), %edx
+ movl %edx, %cr4; # turn off PGE
+1:
+ movl %cr3, %eax; # flush TLB
+ movl %eax, %cr3
+ jecxz 1f # cr4 Pentium and higher, skip if zero
movl %ecx, %cr4; # turn PGE back on
1:
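The restored assembly above performs a full TLB flush, including global (PGE) entries, by turning off CR4.PGE, reloading CR3 and then re-enabling PGE. A rough C sketch of the same sequence, assuming the usual read_cr3/read_cr4 accessors and a CPU that has CR4; it illustrates the idea rather than replacing the assembly:

static inline void flush_tlb_global_sketch(void)
{
	unsigned long cr4 = read_cr4();

	if (cr4 & X86_CR4_PGE)
		write_cr4(cr4 & ~X86_CR4_PGE);	/* turn PGE off so global entries become flushable */
	write_cr3(read_cr3());			/* CR3 reload flushes the (now non-global) TLB entries */
	if (cr4 & X86_CR4_PGE)
		write_cr4(cr4);			/* turn PGE back on */
}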
#include <asm/traps.h>
#include <asm/setup.h>
#include <asm/desc.h>
-#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
#include <asm/reboot.h>
};
static const struct pv_time_ops xen_time_ops __initdata = {
- .sched_clock = xen_clocksource_read,
+ .sched_clock = xen_sched_clock,
};
static const struct pv_cpu_ops xen_cpu_ops __initdata = {
{
struct sched_shutdown r = { .reason = reason };
+#ifdef CONFIG_SMP
+ smp_send_stop();
+#endif
+
if (HYPERVISOR_sched_op(SCHEDOP_shutdown, &r))
BUG();
}
__supported_pte_mask |= _PAGE_IOMAP;
- /*
- * Prevent page tables from being allocated in highmem, even
- * if CONFIG_HIGHPTE is enabled.
- */
- __userpte_alloc_gfp &= ~__GFP_HIGHMEM;
-
#ifdef CONFIG_X86_64
/* Work out if we support NX */
check_efer();
{
pgprot_t prot = PAGE_KERNEL;
- /*
- * We disable highmem allocations for page tables so we should never
- * see any calls to kmap_atomic_pte on a highmem page.
- */
- BUG_ON(PageHighMem(page));
-
if (PagePinned(page))
prot = PAGE_KERNEL_RO;
+ if (0 && PageHighMem(page))
+ printk("mapping highpte %lx type %d prot %s\n",
+ page_to_pfn(page), type,
+ (unsigned long)pgprot_val(prot) & _PAGE_RW ? "WRITE" : "READ");
+
return kmap_atomic_prot(page, type, prot);
}
#endif
BUG();
}
-static void xen_stop_other_cpus(int wait)
+static void xen_smp_send_stop(void)
{
- smp_call_function(stop_self, NULL, wait);
+ smp_call_function(stop_self, NULL, 0);
}
static void xen_smp_send_reschedule(int cpu)
.cpu_disable = xen_cpu_disable,
.play_dead = xen_play_dead,
- .stop_other_cpus = xen_stop_other_cpus,
+ .smp_send_stop = xen_smp_send_stop,
.smp_send_reschedule = xen_smp_send_reschedule,
.send_call_func_ipi = xen_smp_send_call_function_ipi,
void xen_arch_resume(void)
{
- on_each_cpu(xen_vcpu_notify_restore,
- (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
+ smp_call_function(xen_vcpu_notify_restore,
+ (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
}
account_idle_ticks(ticks);
}
+/*
+ * Xen sched_clock implementation. Returns the number of unstolen
+ * nanoseconds, which is nanoseconds the VCPU spent in RUNNING+BLOCKED
+ * states.
+ */
+unsigned long long xen_sched_clock(void)
+{
+ struct vcpu_runstate_info state;
+ cycle_t now;
+ u64 ret;
+ s64 offset;
+
+ /*
+ * Ideally sched_clock should be called on a per-cpu basis
+ * anyway, so preempt should already be disabled, but that's
+ * not current practice at the moment.
+ */
+ preempt_disable();
+
+ now = xen_clocksource_read();
+
+ get_runstate_snapshot(&state);
+
+ WARN_ON(state.state != RUNSTATE_running);
+
+ offset = now - state.state_entry_time;
+ if (offset < 0)
+ offset = 0;
+
+ ret = state.time[RUNSTATE_blocked] +
+ state.time[RUNSTATE_running] +
+ offset;
+
+ preempt_enable();
+
+ return ret;
+}
+
+
/* Get the TSC speed from Xen */
unsigned long xen_tsc_khz(void)
{
# define CACHE_WAY_SIZE ICACHE_WAY_SIZE
#endif
-#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
#endif /* _XTENSA_CACHE_H */
unaligned = 1;
break;
}
- if (!iov[i].iov_len)
- return -EINVAL;
}
if (unaligned || (q->dma_pad_mask & len) || map_data)
#include <linux/blkdev.h>
#include <linux/bootmem.h> /* for max_pfn/max_low_pfn */
#include <linux/gcd.h>
-#include <linux/lcm.h>
#include "blk.h"
* hardware can operate on without reverting to read-modify-write
* operations.
*/
-void blk_queue_physical_block_size(struct request_queue *q, unsigned int size)
+void blk_queue_physical_block_size(struct request_queue *q, unsigned short size)
{
q->limits.physical_block_size = size;
/**
* blk_stack_limits - adjust queue_limits for stacked devices
- * @t: the stacking driver limits (top device)
- * @b: the underlying queue limits (bottom, component device)
+ * @t: the stacking driver limits (top)
+ * @b: the underlying queue limits (bottom)
* @offset: offset to beginning of data within component device
*
* Description:
- * This function is used by stacking drivers like MD and DM to ensure
- * that all component devices have compatible block sizes and
- * alignments. The stacking driver must provide a queue_limits
- * struct (top) and then iteratively call the stacking function for
- * all component (bottom) devices. The stacking function will
- * attempt to combine the values and ensure proper alignment.
- *
- * Returns 0 if the top and bottom queue_limits are compatible. The
- * top device's block sizes and alignment offsets may be adjusted to
- * ensure alignment with the bottom device. If no compatible sizes
- * and alignments exist, -1 is returned and the resulting top
- * queue_limits will have the misaligned flag set to indicate that
- * the alignment_offset is undefined.
+ * Merges two queue_limit structs. Returns 0 if alignment didn't
+ * change. Returns -1 if adding the bottom device caused
+ * misalignment.
*/
int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
sector_t offset)
{
- sector_t alignment;
- unsigned int top, bottom, ret = 0;
-
t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
- t->misaligned |= b->misaligned;
-
- alignment = queue_limit_alignment_offset(b, offset);
-
- /* Bottom device has different alignment. Check that it is
- * compatible with the current top alignment.
- */
- if (t->alignment_offset != alignment) {
-
- top = max(t->physical_block_size, t->io_min)
- + t->alignment_offset;
- bottom = max(b->physical_block_size, b->io_min) + alignment;
-
- /* Verify that top and bottom intervals line up */
- if (max(top, bottom) & (min(top, bottom) - 1)) {
- t->misaligned = 1;
- ret = -1;
- }
- }
-
t->logical_block_size = max(t->logical_block_size,
b->logical_block_size);
b->physical_block_size);
t->io_min = max(t->io_min, b->io_min);
- t->io_opt = lcm(t->io_opt, b->io_opt);
-
t->no_cluster |= b->no_cluster;
- /* Physical block size a multiple of the logical block size? */
- if (t->physical_block_size & (t->logical_block_size - 1)) {
- t->physical_block_size = t->logical_block_size;
- t->misaligned = 1;
- ret = -1;
- }
-
- /* Minimum I/O a multiple of the physical block size? */
- if (t->io_min & (t->physical_block_size - 1)) {
- t->io_min = t->physical_block_size;
+ /* Bottom device offset aligned? */
+ if (offset &&
+ (offset & (b->physical_block_size - 1)) != b->alignment_offset) {
t->misaligned = 1;
- ret = -1;
+ return -1;
}
- /* Optimal I/O a multiple of the physical block size? */
- if (t->io_opt & (t->physical_block_size - 1)) {
- t->io_opt = 0;
- t->misaligned = 1;
- ret = -1;
- }
-
- /* Find lowest common alignment_offset */
- t->alignment_offset = lcm(t->alignment_offset, alignment)
- & (max(t->physical_block_size, t->io_min) - 1);
+ /* If top has no alignment offset, inherit from bottom */
+ if (!t->alignment_offset)
+ t->alignment_offset =
+ b->alignment_offset & (b->physical_block_size - 1);
- /* Verify that new alignment_offset is on a logical block boundary */
+ /* Top device aligned on logical block boundary? */
if (t->alignment_offset & (t->logical_block_size - 1)) {
t->misaligned = 1;
- ret = -1;
+ return -1;
}
- /* Discard */
- t->max_discard_sectors = min_not_zero(t->max_discard_sectors,
- b->max_discard_sectors);
+ /* Find lcm() of optimal I/O size */
+ if (t->io_opt && b->io_opt)
+ t->io_opt = (t->io_opt * b->io_opt) / gcd(t->io_opt, b->io_opt);
+ else if (b->io_opt)
+ t->io_opt = b->io_opt;
- return ret;
+ /* Verify that optimal I/O size is a multiple of io_min */
+ if (t->io_min && t->io_opt % t->io_min)
+ return -1;
+
+ return 0;
}
EXPORT_SYMBOL(blk_stack_limits);
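The optimal I/O size merge above now computes an lcm() by hand from gcd() instead of using the dropped <linux/lcm.h> helper. A small hedged sketch of the identity it relies on, lcm(a, b) = a * b / gcd(a, b), with a made-up helper name:

#include <linux/gcd.h>

static unsigned int io_opt_lcm(unsigned int a, unsigned int b)
{
	if (!a || !b)
		return a ? a : b;	/* a zero io_opt just inherits the other value */
	return (a / gcd(a, b)) * b;	/* divide first to limit overflow */
}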
struct request_queue *q = (struct request_queue *) data;
unsigned long flags, next = 0;
struct request *rq, *tmp;
- int next_set = 0;
spin_lock_irqsave(q->queue_lock, flags);
if (blk_mark_rq_complete(rq))
continue;
blk_rq_timed_out(rq);
- } else if (!next_set || time_after(next, rq->deadline)) {
+ } else if (!next || time_after(next, rq->deadline))
next = rq->deadline;
- next_set = 1;
- }
}
- if (next_set)
+ /*
+ * next can never be 0 here with the list non-empty, since we always
+ * bump ->deadline to 1 so we can detect if the timer was ever added
+ * or not. See comment in blk_add_timer()
+ */
+ if (next)
mod_timer(&q->timeout, round_jiffies_up(next));
spin_unlock_irqrestore(q->queue_lock, flags);
/*
* fill in all the output members
*/
- hdr->device_status = rq->errors & 0xff;
+ hdr->device_status = status_byte(rq->errors);
hdr->transport_status = host_byte(rq->errors);
hdr->driver_status = driver_byte(rq->errors);
hdr->info = 0;
if (hdr->iovec_count) {
const int size = sizeof(struct sg_iovec) * hdr->iovec_count;
size_t iov_data_len;
- struct sg_iovec *sg_iov;
- struct iovec *iov;
- int i;
+ struct sg_iovec *iov;
- sg_iov = kmalloc(size, GFP_KERNEL);
- if (!sg_iov) {
+ iov = kmalloc(size, GFP_KERNEL);
+ if (!iov) {
ret = -ENOMEM;
goto out;
}
- if (copy_from_user(sg_iov, hdr->dxferp, size)) {
- kfree(sg_iov);
+ if (copy_from_user(iov, hdr->dxferp, size)) {
+ kfree(iov);
ret = -EFAULT;
goto out;
}
- /*
- * Sum up the vecs, making sure they don't overflow
- */
- iov = (struct iovec *) sg_iov;
- iov_data_len = 0;
- for (i = 0; i < hdr->iovec_count; i++) {
- if (iov_data_len + iov[i].iov_len < iov_data_len) {
- kfree(sg_iov);
- ret = -EINVAL;
- goto out;
- }
- iov_data_len += iov[i].iov_len;
- }
-
/* SG_IO howto says that the shorter of the two wins */
+ iov_data_len = iov_length((struct iovec *)iov,
+ hdr->iovec_count);
if (hdr->dxfer_len < iov_data_len) {
- hdr->iovec_count = iov_shorten(iov,
+ hdr->iovec_count = iov_shorten((struct iovec *)iov,
hdr->iovec_count,
hdr->dxfer_len);
iov_data_len = hdr->dxfer_len;
}
- ret = blk_rq_map_user_iov(q, rq, NULL, sg_iov, hdr->iovec_count,
+ ret = blk_rq_map_user_iov(q, rq, NULL, iov, hdr->iovec_count,
iov_data_len, GFP_KERNEL);
- kfree(sg_iov);
+ kfree(iov);
} else if (hdr->dxfer_len)
ret = blk_rq_map_user(q, rq, NULL, hdr->dxferp, hdr->dxfer_len,
GFP_KERNEL);
async_raid6_2data_recov(int disks, size_t bytes, int faila, int failb,
struct page **blocks, struct async_submit_ctl *submit)
{
- void *scribble = submit->scribble;
int non_zero_srcs, i;
BUG_ON(faila == failb);
pr_debug("%s: disks: %d len: %zu\n", __func__, disks, bytes);
- /* if a dma resource is not available or a scribble buffer is not
- * available punt to the synchronous path. In the 'dma not
- * available' case be sure to use the scribble buffer to
- * preserve the content of 'blocks' as the caller intended.
+ /* we need to preserve the contents of 'blocks' for the async
+ * case, so punt to synchronous if a scribble buffer is not available
*/
- if (!async_dma_find_channel(DMA_PQ) || !scribble) {
- void **ptrs = scribble ? scribble : (void **) blocks;
+ if (!submit->scribble) {
+ void **ptrs = (void **) blocks;
async_tx_quiesce(&submit->depend_tx);
for (i = 0; i < disks; i++)
pr_debug("%s: disks: %d len: %zu\n", __func__, disks, bytes);
- /* if a dma resource is not available or a scribble buffer is not
- * available punt to the synchronous path. In the 'dma not
- * available' case be sure to use the scribble buffer to
- * preserve the content of 'blocks' as the caller intended.
+ /* we need to preserve the contents of 'blocks' for the async
+ * case, so punt to synchronous if a scribble buffer is not available
*/
- if (!async_dma_find_channel(DMA_PQ) || !scribble) {
- void **ptrs = scribble ? scribble : (void **) blocks;
+ if (!scribble) {
+ void **ptrs = (void **) blocks;
async_tx_quiesce(&submit->depend_tx);
for (i = 0; i < disks; i++)
char tail[];
};
-static void authenc_request_complete(struct aead_request *req, int err)
-{
- if (err != -EINPROGRESS)
- aead_request_complete(req, err);
-}
-
static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
unsigned int keylen)
{
crypto_aead_authsize(authenc), 1);
out:
- authenc_request_complete(req, err);
+ aead_request_complete(req, err);
}
static void authenc_geniv_ahash_done(struct crypto_async_request *areq, int err)
err = crypto_ablkcipher_decrypt(abreq);
out:
- authenc_request_complete(req, err);
+ aead_request_complete(req, err);
}
static void authenc_verify_ahash_done(struct crypto_async_request *areq,
err = crypto_ablkcipher_decrypt(abreq);
out:
- authenc_request_complete(req, err);
+ aead_request_complete(req, err);
}
static u8 *crypto_authenc_ahash_fb(struct aead_request *req, unsigned int flags)
err = crypto_authenc_genicv(areq, iv, 0);
}
- authenc_request_complete(areq, err);
+ aead_request_complete(areq, err);
}
static int crypto_authenc_encrypt(struct aead_request *req)
err = crypto_authenc_genicv(areq, greq->giv, 0);
}
- authenc_request_complete(areq, err);
+ aead_request_complete(areq, err);
}
static int crypto_authenc_givencrypt(struct aead_givcrypt_request *req)
return err;
}
-static int alg_test_null(const struct alg_test_desc *desc,
- const char *driver, u32 type, u32 mask)
-{
- return 0;
-}
-
/* Please keep this list sorted by algorithm name. */
static const struct alg_test_desc alg_test_descs[] = {
{
- .alg = "__driver-cbc-aes-aesni",
- .test = alg_test_null,
- .suite = {
- .cipher = {
- .enc = {
- .vecs = NULL,
- .count = 0
- },
- .dec = {
- .vecs = NULL,
- .count = 0
- }
- }
- }
- }, {
- .alg = "__driver-ecb-aes-aesni",
- .test = alg_test_null,
- .suite = {
- .cipher = {
- .enc = {
- .vecs = NULL,
- .count = 0
- },
- .dec = {
- .vecs = NULL,
- .count = 0
- }
- }
- }
- }, {
- .alg = "__ghash-pclmulqdqni",
- .test = alg_test_null,
- .suite = {
- .hash = {
- .vecs = NULL,
- .count = 0
- }
- }
- }, {
.alg = "ansi_cprng",
.test = alg_test_cprng,
.fips_allowed = 1,
.count = CRC32C_TEST_VECTORS
}
}
- }, {
- .alg = "cryptd(__driver-ecb-aes-aesni)",
- .test = alg_test_null,
- .suite = {
- .cipher = {
- .enc = {
- .vecs = NULL,
- .count = 0
- },
- .dec = {
- .vecs = NULL,
- .count = 0
- }
- }
- }
- }, {
- .alg = "cryptd(__ghash-pclmulqdqni)",
- .test = alg_test_null,
- .suite = {
- .hash = {
- .vecs = NULL,
- .count = 0
- }
- }
}, {
.alg = "ctr(aes)",
.test = alg_test_skcipher,
}
}
}
- }, {
- .alg = "ecb(__aes-aesni)",
- .test = alg_test_null,
- .suite = {
- .cipher = {
- .enc = {
- .vecs = NULL,
- .count = 0
- },
- .dec = {
- .vecs = NULL,
- .count = 0
- }
- }
- }
}, {
.alg = "ecb(aes)",
.test = alg_test_skcipher,
obj-$(CONFIG_PNP) += pnp/
obj-$(CONFIG_ARM_AMBA) += amba/
-obj-$(CONFIG_VIRTIO) += virtio/
obj-$(CONFIG_XEN) += xen/
# regulators early, since some subsystems rely on them to initialize
obj-$(CONFIG_PPC_PS3) += ps3/
obj-$(CONFIG_OF) += of/
obj-$(CONFIG_SSB) += ssb/
+obj-$(CONFIG_VIRTIO) += virtio/
obj-$(CONFIG_VLYNQ) += vlynq/
obj-$(CONFIG_STAGING) += staging/
obj-y += platform/
ACPI_BITMASK_POWER_BUTTON_STATUS | \
ACPI_BITMASK_SLEEP_BUTTON_STATUS | \
ACPI_BITMASK_RT_CLOCK_STATUS | \
- ACPI_BITMASK_PCIEXP_WAKE_DISABLE | \
ACPI_BITMASK_WAKE_STATUS)
#define ACPI_BITMASK_TIMER_ENABLE 0x0001
acpi_ut_add_reference(obj_desc->field.region_obj);
- /* allow full data read from EC address space */
- if (obj_desc->field.region_obj->region.space_id ==
- ACPI_ADR_SPACE_EC) {
- if (obj_desc->common_field.bit_length > 8) {
- unsigned width =
- ACPI_ROUND_BITS_UP_TO_BYTES(
- obj_desc->common_field.bit_length);
- // access_bit_width is u8, don't overflow it
- if (width > 8)
- width = 8;
- obj_desc->common_field.access_byte_width =
- width;
- obj_desc->common_field.access_bit_width =
- 8 * width;
- }
- }
-
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"RegionField: BitOff %X, Off %X, Gran %X, Region %p\n",
obj_desc->field.start_field_bit_offset,
acpi_osi_setup("!Windows 2006");
return 0;
}
-static int __init dmi_disable_osi_win7(const struct dmi_system_id *d)
-{
- printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
- acpi_osi_setup("!Windows 2009");
- return 0;
-}
static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
{
DMI_MATCH(DMI_PRODUCT_NAME, "Sony VGN-SR290J"),
},
},
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba Satellite L355",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_VERSION, "Satellite L355"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba Satellite L355",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_VERSION, "Satellite L355"),
- },
- },
- {
- .callback = dmi_disable_osi_win7,
- .ident = "ASUS K50IJ",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "K50IJ"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba P305D",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P305D"),
- },
- },
/*
* BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
static acpi_status
acpi_ec_space_handler(u32 function, acpi_physical_address address,
- u32 bits, acpi_integer *value64,
+ u32 bits, acpi_integer *value,
void *handler_context, void *region_context)
{
struct acpi_ec *ec = handler_context;
- int result = 0, i, bytes = bits / 8;
- u8 *value = (u8 *)value64;
+ int result = 0, i;
+ u8 temp = 0;
if ((address > 0xFF) || !value || !handler_context)
return AE_BAD_PARAMETER;
if (function != ACPI_READ && function != ACPI_WRITE)
return AE_BAD_PARAMETER;
- if (EC_FLAGS_MSI || bits > 8)
+ if (bits != 8 && acpi_strict)
+ return AE_BAD_PARAMETER;
+
+ if (EC_FLAGS_MSI)
acpi_ec_burst_enable(ec);
- for (i = 0; i < bytes; ++i, ++address, ++value)
- result = (function == ACPI_READ) ?
- acpi_ec_read(ec, address, value) :
- acpi_ec_write(ec, address, *value);
+ if (function == ACPI_READ) {
+ result = acpi_ec_read(ec, address, &temp);
+ *value = temp;
+ } else {
+ temp = 0xff & (*value);
+ result = acpi_ec_write(ec, address, temp);
+ }
+
+ for (i = 8; unlikely(bits - i > 0); i += 8) {
+ ++address;
+ if (function == ACPI_READ) {
+ result = acpi_ec_read(ec, address, &temp);
+ (*value) |= ((acpi_integer)temp) << i;
+ } else {
+ temp = 0xff & ((*value) >> i);
+ result = acpi_ec_write(ec, address, temp);
+ }
+ }
- if (EC_FLAGS_MSI || bits > 8)
+ if (EC_FLAGS_MSI)
acpi_ec_burst_disable(ec);
switch (result) {
#define ACPI_POWER_METER_NAME "power_meter"
ACPI_MODULE_NAME(ACPI_POWER_METER_NAME);
#define ACPI_POWER_METER_DEVICE_NAME "Power Meter"
-#define ACPI_POWER_METER_CLASS "pwr_meter_resource"
+#define ACPI_POWER_METER_CLASS "power_meter_resource"
#define NUM_SENSORS 17
}
static struct dmi_system_id __cpuinitdata processor_idle_dmi_table[] = {
+ {
+ set_no_mwait, "IFL91 board", {
+ DMI_MATCH(DMI_BIOS_VENDOR, "COMPAL"),
+ DMI_MATCH(DMI_SYS_VENDOR, "ZEPTO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "3215W"),
+ DMI_MATCH(DMI_BOARD_NAME, "IFL91") }, NULL},
{
set_no_mwait, "Extensa 5220", {
DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
return(acpi_idle_enter_c1(dev, state));
local_irq_disable();
- if (cx->entry_method != ACPI_CSTATE_FFH) {
- current_thread_info()->status &= ~TS_POLLING;
- /*
- * TS_POLLING-cleared state must be visible before we test
- * NEED_RESCHED:
- */
- smp_mb();
- }
+ current_thread_info()->status &= ~TS_POLLING;
+ /*
+ * TS_POLLING-cleared state must be visible before we test
+ * NEED_RESCHED:
+ */
+ smp_mb();
if (unlikely(need_resched())) {
current_thread_info()->status |= TS_POLLING;
if (acpi_idle_suspend)
return(acpi_idle_enter_c1(dev, state));
- if (!cx->bm_sts_skip && acpi_idle_bm_check()) {
+ if (acpi_idle_bm_check()) {
if (dev->safe_state) {
dev->last_state = dev->safe_state;
return dev->safe_state->enter(dev, dev->safe_state);
}
local_irq_disable();
- if (cx->entry_method != ACPI_CSTATE_FFH) {
- current_thread_info()->status &= ~TS_POLLING;
- /*
- * TS_POLLING-cleared state must be visible before we test
- * NEED_RESCHED:
- */
- smp_mb();
- }
+ current_thread_info()->status &= ~TS_POLLING;
+ /*
+ * TS_POLLING-cleared state must be visible before we test
+ * NEED_RESCHED:
+ */
+ smp_mb();
if (unlikely(need_resched())) {
current_thread_info()->status |= TS_POLLING;
if (result)
goto update_bios;
- /* We need to call _PPC once when cpufreq starts */
- if (ignore_ppc != 1)
- result = acpi_processor_get_platform_limit(pr);
-
- return result;
+ return 0;
/*
* Having _PPC but missing frequencies (_PSS, _PCT) is a very good hint that
#ifdef CONFIG_ACPI_SLEEP
static u32 acpi_target_sleep_state = ACPI_STATE_S0;
-
/*
* ACPI 1.0 wants us to execute _PTS before suspending devices, so we allow the
* user to request that behavior by using the 'acpi_old_suspend_ordering'
#endif /* CONFIG_ACPI_SLEEP */
#ifdef CONFIG_SUSPEND
+/*
+ * According to the ACPI specification the BIOS should make sure that ACPI is
+ * enabled and SCI_EN bit is set on wake-up from S1 - S3 sleep states. Still,
+ * some BIOSes don't do that and therefore we use acpi_enable() to enable ACPI
+ * on such systems during resume. Unfortunately that doesn't help in
+ * particularly pathological cases in which SCI_EN has to be set directly on
+ * resume, although the specification states very clearly that this flag is
+ * owned by the hardware. The set_sci_en_on_resume variable will be set in such
+ * cases.
+ */
+static bool set_sci_en_on_resume;
+
extern void do_suspend_lowlevel(void);
static u32 acpi_suspend_states[] = {
break;
}
- /* This violates the spec but is required for bug compatibility. */
- acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1);
+ /* If ACPI is not enabled by the BIOS, we need to enable it here. */
+ if (set_sci_en_on_resume)
+ acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1);
+ else
+ acpi_enable();
/* Reprogram control registers and execute _BFS */
acpi_leave_sleep_state_prep(acpi_state);
return 0;
}
+static int __init init_set_sci_en_on_resume(const struct dmi_system_id *d)
+{
+ set_sci_en_on_resume = true;
+ return 0;
+}
+
static struct dmi_system_id __initdata acpisleep_dmi_table[] = {
{
.callback = init_old_suspend_ordering,
},
},
{
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Apple MacBook 1,1",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Apple Computer, Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBook1,1"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Apple MacMini 1,1",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Apple Computer, Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Macmini1,1"),
+ },
+ },
+ {
.callback = init_old_suspend_ordering,
.ident = "Asus Pundit P1-AH2 (M2N8L motherboard)",
.matches = {
},
},
{
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Toshiba Satellite L300",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Satellite L300"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard HP G7000 Notebook PC",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP G7000 Notebook PC"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard HP Pavilion dv3 Notebook PC",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv3 Notebook PC"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard Pavilion dv4",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard Pavilion dv7",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv7"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard Compaq Presario C700 Notebook PC",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Compaq Presario C700 Notebook PC"),
+ },
+ },
+ {
+ .callback = init_set_sci_en_on_resume,
+ .ident = "Hewlett-Packard Compaq Presario CQ40 Notebook PC",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Compaq Presario CQ40 Notebook PC"),
+ },
+ },
+ {
.callback = init_old_suspend_ordering,
.ident = "Panasonic CF51-2L",
.matches = {
unsigned long table_end;
acpi_size tbl_size;
- if (acpi_disabled && !acpi_ht)
+ if (acpi_disabled)
return -ENODEV;
if (!handler)
struct acpi_table_header *table = NULL;
acpi_size tbl_size;
- if (acpi_disabled && !acpi_ht)
+ if (acpi_disabled)
return -ENODEV;
if (!handler)
ACPI_VIDEO_BACKLIGHT_FORCE_VENDOR;
if (!strcmp("video", str))
acpi_video_support |=
- ACPI_VIDEO_BACKLIGHT_FORCE_VIDEO;
+ ACPI_VIDEO_OUTPUT_SWITCHING_FORCE_VIDEO;
}
return 1;
}
{ PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
{ PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
{ PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
- { PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
- { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
- { PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
- { PCI_VDEVICE(INTEL, 0x1c05), board_ahci }, /* CPT RAID */
- { PCI_VDEVICE(INTEL, 0x1c06), board_ahci }, /* CPT RAID */
- { PCI_VDEVICE(INTEL, 0x1c07), board_ahci }, /* CPT RAID */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
* On HP dv[4-6] and HDX18 with earlier BIOSen, link
* to the harddisk doesn't become online after
* resuming from STR. Warn and fail suspend.
- *
- * http://bugzilla.kernel.org/show_bug.cgi?id=12276
- *
- * Use dates instead of versions to match as HP is
- * apparently recycling both product and version
- * strings.
- *
- * http://bugzilla.kernel.org/show_bug.cgi?id=15462
*/
{
.ident = "dv4",
DMI_MATCH(DMI_PRODUCT_NAME,
"HP Pavilion dv4 Notebook PC"),
},
- .driver_data = "20090105", /* F.30 */
+ .driver_data = "F.30", /* cutoff BIOS version */
},
{
.ident = "dv5",
DMI_MATCH(DMI_PRODUCT_NAME,
"HP Pavilion dv5 Notebook PC"),
},
- .driver_data = "20090506", /* F.16 */
+ .driver_data = "F.16", /* cutoff BIOS version */
},
{
.ident = "dv6",
DMI_MATCH(DMI_PRODUCT_NAME,
"HP Pavilion dv6 Notebook PC"),
},
- .driver_data = "20090423", /* F.21 */
+ .driver_data = "F.21", /* cutoff BIOS version */
},
{
.ident = "HDX18",
DMI_MATCH(DMI_PRODUCT_NAME,
"HP HDX18 Notebook PC"),
},
- .driver_data = "20090430", /* F.23 */
+ .driver_data = "F.23", /* cutoff BIOS version */
},
/*
* Acer eMachines G725 has the same problem. BIOS
 * work. In between, there are V1.06, V2.06 and V3.03
* that we don't have much idea about. For now,
* blacklist anything older than V3.04.
- *
- * http://bugzilla.kernel.org/show_bug.cgi?id=15104
*/
{
.ident = "G725",
DMI_MATCH(DMI_SYS_VENDOR, "eMachines"),
DMI_MATCH(DMI_PRODUCT_NAME, "eMachines G725"),
},
- .driver_data = "20091216", /* V3.04 */
+ .driver_data = "V3.04", /* cutoff BIOS version */
},
{ } /* terminate list */
};
const struct dmi_system_id *dmi = dmi_first_match(sysids);
- int year, month, date;
- char buf[9];
+ const char *ver;
if (!dmi || pdev->bus->number || pdev->devfn != PCI_DEVFN(0x1f, 2))
return false;
- dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);
- snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
+ ver = dmi_get_system_info(DMI_BIOS_VERSION);
- return strcmp(buf, dmi->driver_data) < 0;
+ return !ver || strcmp(ver, dmi->driver_data) < 0;
}
static bool ahci_broken_online(struct pci_dev *pdev)
if (pdev->vendor == PCI_VENDOR_ID_MARVELL && !marvell_enable)
return -ENODEV;
- /*
- * For some reason, MCP89 on MacBook 7,1 doesn't work with
- * ahci, use ata_generic instead.
- */
- if (pdev->vendor == PCI_VENDOR_ID_NVIDIA &&
- pdev->device == PCI_DEVICE_ID_NVIDIA_NFORCE_MCP89_SATA &&
- pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE &&
- pdev->subsystem_device == 0xcb89)
- return -ENODEV;
-
/* acquire resources */
rc = pcim_enable_device(pdev);
if (rc)
ahci_save_initial_config(pdev, hpriv);
/* prepare host */
- if (hpriv->cap & HOST_CAP_NCQ) {
- pi.flags |= ATA_FLAG_NCQ;
- /* Auto-activate optimization is supposed to be supported on
- all AHCI controllers indicating NCQ support, but it seems
- to be broken at least on some NVIDIA MCP79 chipsets.
- Until we get info on which NVIDIA chipsets don't have this
- issue, if any, disable AA on all NVIDIA AHCIs. */
- if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
- pi.flags |= ATA_FLAG_FPDMA_AA;
- }
+ if (hpriv->cap & HOST_CAP_NCQ)
+ pi.flags |= ATA_FLAG_NCQ | ATA_FLAG_FPDMA_AA;
if (hpriv->cap & HOST_CAP_PMP)
pi.flags |= ATA_FLAG_PMP;
* A generic parallel ATA driver using libata
*/
-enum {
- ATA_GEN_CLASS_MATCH = (1 << 0),
- ATA_GEN_FORCE_DMA = (1 << 1),
-};
-
/**
* generic_set_mode - mode setting
* @link: link to set up
static int generic_set_mode(struct ata_link *link, struct ata_device **unused)
{
struct ata_port *ap = link->ap;
- const struct pci_device_id *id = ap->host->private_data;
int dma_enabled = 0;
struct ata_device *dev;
struct pci_dev *pdev = to_pci_dev(ap->host->dev);
- if (id->driver_data & ATA_GEN_FORCE_DMA) {
- dma_enabled = 0xff;
- } else if (ap->ioaddr.bmdma_addr) {
- /* Bits 5 and 6 indicate if DMA is active on master/slave */
+ /* Bits 5 and 6 indicate if DMA is active on master/slave */
+ if (ap->ioaddr.bmdma_addr)
dma_enabled = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS);
- }
if (pdev->vendor == PCI_VENDOR_ID_CENATEK)
dma_enabled = 0xFF;
const struct ata_port_info *ppi[] = { &info, NULL };
/* Don't use the generic entry unless instructed to do so */
- if ((id->driver_data & ATA_GEN_CLASS_MATCH) && all_generic_ide == 0)
+ if (id->driver_data == 1 && all_generic_ide == 0)
return -ENODEV;
/* Devices that need care */
return rc;
pcim_pin_device(dev);
}
- return ata_pci_sff_init_one(dev, ppi, &generic_sht, (void *)id);
+ return ata_pci_sff_init_one(dev, ppi, &generic_sht, NULL);
}
static struct pci_device_id ata_generic[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE), },
{ PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C561), },
{ PCI_DEVICE(PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558), },
- { PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE),
- .driver_data = ATA_GEN_FORCE_DMA },
- /*
- * For some reason, MCP89 on MacBook 7,1 doesn't work with
- * ahci, use ata_generic instead.
- */
- { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP89_SATA,
- PCI_VENDOR_ID_APPLE, 0xcb89,
- .driver_data = ATA_GEN_FORCE_DMA },
+ { PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE), },
{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO), },
{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1), },
{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2), },
/* Must come last. If you add entries adjust this table appropriately */
- { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL),
- .driver_data = ATA_GEN_CLASS_MATCH },
+ { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL, 1},
{ 0, },
};
struct piix_host_priv {
const int *map;
u32 saved_iocfg;
- spinlock_t sidpr_lock; /* FIXME: remove once locking in EH is fixed */
void __iomem *sidpr;
};
{ 0x8086, 0x3b2d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
/* SATA Controller IDE (PCH) */
{ 0x8086, 0x3b2e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata },
- /* SATA Controller IDE (CPT) */
- { 0x8086, 0x1c00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata },
- /* SATA Controller IDE (CPT) */
- { 0x8086, 0x1c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata },
- /* SATA Controller IDE (CPT) */
- { 0x8086, 0x1c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
- /* SATA Controller IDE (CPT) */
- { 0x8086, 0x1c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
{ } /* terminate list */
};
unsigned int reg, u32 *val)
{
struct piix_host_priv *hpriv = link->ap->host->private_data;
- unsigned long flags;
if (reg >= ARRAY_SIZE(piix_sidx_map))
return -EINVAL;
- spin_lock_irqsave(&hpriv->sidpr_lock, flags);
piix_sidpr_sel(link, reg);
*val = ioread32(hpriv->sidpr + PIIX_SIDPR_DATA);
- spin_unlock_irqrestore(&hpriv->sidpr_lock, flags);
return 0;
}
unsigned int reg, u32 val)
{
struct piix_host_priv *hpriv = link->ap->host->private_data;
- unsigned long flags;
if (reg >= ARRAY_SIZE(piix_sidx_map))
return -EINVAL;
- spin_lock_irqsave(&hpriv->sidpr_lock, flags);
piix_sidpr_sel(link, reg);
iowrite32(val, hpriv->sidpr + PIIX_SIDPR_DATA);
- spin_unlock_irqrestore(&hpriv->sidpr_lock, flags);
return 0;
}
hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
if (!hpriv)
return -ENOMEM;
- spin_lock_init(&hpriv->sidpr_lock);
/* Save IOCFG, this will be used for cable detection, quirk
* detection and restoration on detach. This is necessary
module_param_named(allow_tpm, libata_allow_tpm, int, 0444);
MODULE_PARM_DESC(allow_tpm, "Permit the use of TPM commands (0=off [default], 1=on)");
-static int atapi_an;
-module_param(atapi_an, int, 0444);
-MODULE_PARM_DESC(atapi_an, "Enable ATAPI AN media presence notification (0=off [default], 1=on)");
-
MODULE_AUTHOR("Jeff Garzik");
MODULE_DESCRIPTION("Library module for ATA devices");
MODULE_LICENSE("GPL");
* to enable ATAPI AN to discern between PHY status
* changed notifications and ATAPI ANs.
*/
- if (atapi_an &&
- (ap->flags & ATA_FLAG_AN) && ata_id_has_atapi_AN(id) &&
+ if ((ap->flags & ATA_FLAG_AN) && ata_id_has_atapi_AN(id) &&
(!sata_pmp_attached(ap) ||
sata_scr_read(&ap->link, SCR_NOTIFICATION, &sntf) == 0)) {
unsigned int err_mask;
{ "HTS541080G9SA00", "MB4OC60D", ATA_HORKAGE_NONCQ, },
{ "HTS541010G9SA00", "MBZOC60D", ATA_HORKAGE_NONCQ, },
- /* https://bugzilla.kernel.org/show_bug.cgi?id=15573 */
- { "C300-CTFDDAC128MAG", "0001", ATA_HORKAGE_NONCQ, },
-
/* devices which puke on READ_NATIVE_MAX */
{ "HDS724040KLSA80", "KFAOA20N", ATA_HORKAGE_BROKEN_HPA, },
{ "WDC WD3200JD-00KLB0", "WD-WCAMR1130137", ATA_HORKAGE_BROKEN_HPA },
*/
int ata_host_suspend(struct ata_host *host, pm_message_t mesg)
{
- unsigned int ehi_flags = ATA_EHI_QUIET;
int rc;
/*
*/
ata_lpm_enable(host);
- /*
-	 * On some hardware, the device fails to respond after being spun down
- * for suspend. As the device won't be used before being
- * resumed, we don't need to touch the device. Ask EH to skip
- * the usual stuff and proceed directly to suspend.
- *
- * http://thread.gmane.org/gmane.linux.ide/46764
- */
- if (mesg.event == PM_EVENT_SUSPEND)
- ehi_flags |= ATA_EHI_NO_AUTOPSY | ATA_EHI_NO_RECOVERY;
-
- rc = ata_host_request_pm(host, mesg, 0, ehi_flags, 1);
+ rc = ata_host_request_pm(host, mesg, 0, ATA_EHI_QUIET, 1);
if (rc == 0)
host->dev->power.power_state = mesg;
return rc;
void ata_qc_schedule_eh(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
- struct request_queue *q = qc->scsicmd->device->request_queue;
- unsigned long flags;
WARN_ON(!ap->ops->error_handler);
* Note that ATA_QCFLAG_FAILED is unconditionally set after
* this function completes.
*/
- spin_lock_irqsave(q->queue_lock, flags);
blk_abort_request(qc->scsicmd->request);
- spin_unlock_irqrestore(q->queue_lock, flags);
}
/**
}
/* okay, this error is ours */
- memset(&tf, 0, sizeof(tf));
rc = ata_eh_read_log_10h(dev, &tag, &tf);
if (rc) {
ata_link_printk(link, KERN_ERR, "failed to read log page 10h "
if (link->flags & ATA_LFLAG_DISABLED)
return 1;
- /* skip if explicitly requested */
- if (ehc->i.flags & ATA_EHI_NO_RECOVERY)
- return 1;
-
/* thaw frozen port and recover failed devices */
if ((ap->pflags & ATA_PFLAG_FROZEN) || ata_link_nr_enabled(link))
return 0;
*
* If door lock fails, always clear sdev->locked to
* avoid this infinite loop.
- *
- * This may happen before SCSI scan is complete. Make
- * sure qc->dev->sdev isn't NULL before dereferencing.
*/
- if (qc->cdb[0] == ALLOW_MEDIUM_REMOVAL && qc->dev->sdev)
+ if (qc->cdb[0] == ALLOW_MEDIUM_REMOVAL)
qc->dev->sdev->locked = 0;
qc->scsicmd->result = SAM_STAT_CHECK_CONDITION;
* write indication (used for PIO/DMA setup), result TF is
* copied back and we don't whine too much about its failure.
*/
- tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
+ tf->flags = ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
if (scmd->sc_data_direction == DMA_TO_DEVICE)
tf->flags |= ATA_TFLAG_WRITE;
do_write);
}
- if (!do_write && !PageSlab(page))
+ if (!do_write)
flush_dcache_page(page);
qc->curbytes += qc->sect_size;
/* Clear CD-ROM DMA write bit */
tmp &= 0x7F;
/* Cable and UDMA */
- if (pdev->revision >= 0xc2)
- tmp |= 0x01;
- pci_write_config_byte(pdev, 0x4B, tmp | 0x08);
+ pci_write_config_byte(pdev, 0x4B, tmp | 0x09);
/*
* CD_ROM DMA on (0x53 bit 0). Enable this even if we want
* to use PIO. 0x53 bit 1 (rev 20 only) - enable FIFO control
#include <linux/libata.h>
#define DRV_NAME "pata_hpt3x2n"
-#define DRV_VERSION "0.3.9"
+#define DRV_VERSION "0.3.8"
enum {
HPT_PCI_FAST = (1 << 31),
pci_mhz);
/* Set our private data up. We only need a few flags so we use
it directly */
- if (pci_mhz > 60)
+ if (pci_mhz > 60) {
hpriv = (void *)(PCI66 | USE_DPLL);
-
- /*
- * On HPT371N, if ATA clock is 66 MHz we must set bit 2 in
- * the MISC. register to stretch the UltraDMA Tss timing.
- * NOTE: This register is only writeable via I/O space.
- */
- if (dev->device == PCI_DEVICE_ID_TTI_HPT371)
- outb(inb(iobase + 0x9c) | 0x04, iobase + 0x9c);
+ /*
+ * On HPT371N, if ATA clock is 66 MHz we must set bit 2 in
+ * the MISC. register to stretch the UltraDMA Tss timing.
+ * NOTE: This register is only writeable via I/O space.
+ */
+ if (dev->device == PCI_DEVICE_ID_TTI_HPT371)
+ outb(inb(iobase + 0x9c) | 0x04, iobase + 0x9c);
+ }
/* Now kick off ATA set up */
return ata_pci_sff_init_one(dev, ppi, &hpt3x2n_sht, hpriv);
* pata_pdc202xx_old.c - Promise PDC202xx PATA for new ATA layer
* (C) 2005 Red Hat Inc
* Alan Cox <alan@lxorguk.ukuu.org.uk>
- * (C) 2007,2009,2010 Bartlomiej Zolnierkiewicz
+ * (C) 2007,2009 Bartlomiej Zolnierkiewicz
*
* Based in part on linux/drivers/ide/pci/pdc202xx_old.c
*
return ATA_CBL_PATA80;
}
-static void pdc202xx_exec_command(struct ata_port *ap,
- const struct ata_taskfile *tf)
-{
- DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command);
-
- iowrite8(tf->command, ap->ioaddr.command_addr);
- ndelay(400);
-}
-
/**
* pdc202xx_configure_piomode - set chip PIO timing
* @ap: ATA interface
.cable_detect = ata_cable_40wire,
.set_piomode = pdc202xx_set_piomode,
.set_dmamode = pdc202xx_set_dmamode,
-
- .sff_exec_command = pdc202xx_exec_command,
};
static struct ata_port_operations pdc2026x_port_ops = {
.dev_config = pdc2026x_dev_config,
.port_start = pdc2026x_port_start,
-
- .sff_exec_command = pdc202xx_exec_command,
};
static int pdc202xx_init_one(struct pci_dev *dev, const struct pci_device_id *id)
{ PCI_VDEVICE(VIA, 0x3164), },
{ PCI_VDEVICE(VIA, 0x5324), },
{ PCI_VDEVICE(VIA, 0xC409), VIA_IDFLAG_SINGLE },
- { PCI_VDEVICE(VIA, 0x9001), VIA_IDFLAG_SINGLE },
{ },
};
* LOCKING:
* Inherited from caller.
*/
-static void mv_bmdma_stop_ap(struct ata_port *ap)
+static void mv_bmdma_stop(struct ata_queued_cmd *qc)
{
+ struct ata_port *ap = qc->ap;
void __iomem *port_mmio = mv_ap_base(ap);
u32 cmd;
/* clear start/stop bit */
cmd = readl(port_mmio + BMDMA_CMD);
- if (cmd & ATA_DMA_START) {
- cmd &= ~ATA_DMA_START;
- writelfl(cmd, port_mmio + BMDMA_CMD);
-
- /* one-PIO-cycle guaranteed wait, per spec, for HDMA1:0 transition */
- ata_sff_dma_pause(ap);
- }
-}
+ cmd &= ~ATA_DMA_START;
+ writelfl(cmd, port_mmio + BMDMA_CMD);
-static void mv_bmdma_stop(struct ata_queued_cmd *qc)
-{
- mv_bmdma_stop_ap(qc->ap);
+ /* one-PIO-cycle guaranteed wait, per spec, for HDMA1:0 transition */
+ ata_sff_dma_pause(ap);
}
/**
reg = readl(port_mmio + BMDMA_STATUS);
if (reg & ATA_DMA_ACTIVE)
status = ATA_DMA_ACTIVE;
- else if (reg & ATA_DMA_ERR)
+ else
status = (reg & ATA_DMA_ERR) | ATA_DMA_INTR;
- else {
- /*
- * Just because DMA_ACTIVE is 0 (DMA completed),
- * this does _not_ mean the device is "done".
- * So we should not yet be signalling ATA_DMA_INTR
- * in some cases. Eg. DSM/TRIM, and perhaps others.
- */
- mv_bmdma_stop_ap(ap);
- if (ioread8(ap->ioaddr.altstatus_addr) & ATA_BUSY)
- status = 0;
- else
- status = ATA_DMA_INTR;
- }
return status;
}
switch (tf->protocol) {
case ATA_PROT_DMA:
- if (tf->command == ATA_CMD_DSM)
- return;
- /* fall-thru */
case ATA_PROT_NCQ:
break; /* continue below */
case ATA_PROT_PIO:
if ((tf->protocol != ATA_PROT_DMA) &&
(tf->protocol != ATA_PROT_NCQ))
return;
- if (tf->command == ATA_CMD_DSM)
- return; /* use bmdma for this */
/* Fill in Gen IIE command request block */
if (!(tf->flags & ATA_TFLAG_WRITE))
switch (qc->tf.protocol) {
case ATA_PROT_DMA:
- if (qc->tf.command == ATA_CMD_DSM) {
- if (!ap->ops->bmdma_setup) /* no bmdma on GEN_I */
- return AC_ERR_OTHER;
- break; /* use bmdma for this */
- }
- /* fall thru */
case ATA_PROT_NCQ:
mv_start_edma(ap, port_mmio, pp, qc->tf.protocol);
pp->req_idx = (pp->req_idx + 1) & MV_MAX_Q_DEPTH_MASK;
mask = readl(mmio_base + NV_INT_ENABLE_MCP55);
mask &= ~(NV_INT_ALL_MCP55 << shift);
writel(mask, mmio_base + NV_INT_ENABLE_MCP55);
+ ata_sff_freeze(ap);
}
static void nv_mcp55_thaw(struct ata_port *ap)
mask = readl(mmio_base + NV_INT_ENABLE_MCP55);
mask |= (NV_INT_MASK_MCP55 << shift);
writel(mask, mmio_base + NV_INT_ENABLE_MCP55);
+ ata_sff_thaw(ap);
}
static void nv_adma_error_handler(struct ata_port *ap)
}
pci_set_master(pdev);
- return ata_pci_sff_activate_host(host, ipriv->irq_handler, ipriv->sht);
+ return ata_host_activate(host, pdev->irq, ipriv->irq_handler,
+ IRQF_SHARED, ipriv->sht);
}
#ifdef CONFIG_PM
tmp8 |= NATIVE_MODE_ALL;
pci_write_config_byte(pdev, SATA_NATIVE_MODE, tmp8);
}
-
- /*
- * vt6421 has problems talking to some drives. The following
- * is the magic fix from Joseph Chan <JosephChan@via.com.tw>.
- * Please add proper documentation if possible.
- *
- * https://bugzilla.kernel.org/show_bug.cgi?id=15173
- */
- if (pdev->device == 0x3249) {
- pci_read_config_byte(pdev, 0x52, &tmp8);
- tmp8 |= 1 << 2;
- pci_write_config_byte(pdev, 0x52, tmp8);
- }
}
static int svia_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
sk_for_each(s, node, head) {
vcc = atm_sk(s);
if (vcc->dev == dev && vcc->vci == vci &&
- vcc->vpi == vpi && vcc->qos.rxtp.traffic_class != ATM_NONE &&
- test_bit(ATM_VF_READY, &vcc->flags))
+ vcc->vpi == vpi && vcc->qos.rxtp.traffic_class != ATM_NONE)
goto out;
}
vcc = NULL;
clear_bit(ATM_VF_ADDR, &vcc->flags);
clear_bit(ATM_VF_READY, &vcc->flags);
- /* Hold up vcc_destroy_socket() (our caller) until solos_bh() in the
- tasklet has finished processing any incoming packets (and, more to
- the point, using the vcc pointer). */
- tasklet_unlock_wait(&card->tlet);
return;
}
int retval;
if (dev->class) {
- static DEFINE_MUTEX(gdp_mutex);
struct kobject *kobj = NULL;
struct kobject *parent_kobj;
struct kobject *k;
else
parent_kobj = &parent->kobj;
- mutex_lock(&gdp_mutex);
-
/* find our class-directory at the parent and reference it */
spin_lock(&dev->class->p->class_dirs.list_lock);
list_for_each_entry(k, &dev->class->p->class_dirs.list, entry)
break;
}
spin_unlock(&dev->class->p->class_dirs.list_lock);
- if (kobj) {
- mutex_unlock(&gdp_mutex);
+ if (kobj)
return kobj;
- }
/* or create a new class-directory at the parent device */
k = kobject_create();
- if (!k) {
- mutex_unlock(&gdp_mutex);
+ if (!k)
return NULL;
- }
k->kset = &dev->class->p->class_dirs;
retval = kobject_add(k, parent_kobj, "%s", dev->class->name);
if (retval < 0) {
- mutex_unlock(&gdp_mutex);
kobject_put(k);
return NULL;
}
/* do not emit an uevent for this simple "glue" directory */
- mutex_unlock(&gdp_mutex);
return k;
}
/* display offline cpus < nr_cpu_ids */
if (!alloc_cpumask_var(&offline, GFP_KERNEL))
return -ENOMEM;
- cpumask_andnot(offline, cpu_possible_mask, cpu_online_mask);
+ cpumask_complement(offline, cpu_online_mask);
n = cpulist_scnprintf(buf, len, offline);
free_cpumask_var(offline);
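The hunk above swaps cpumask_andnot() against cpu_possible_mask for a plain complement of the online mask. A minimal userspace sketch, using ordinary bitmasks rather than the kernel cpumask API (all values here are made up), of how the two forms differ: the complement also sets bits for CPU numbers that do not exist, while the and-not form only reports CPUs that exist but are offline.

    #include <stdio.h>

    int main(void)
    {
        unsigned long possible = 0x0fUL;   /* CPUs 0-3 exist */
        unsigned long online   = 0x05UL;   /* CPUs 0 and 2 are online */

        unsigned long offline_andnot = possible & ~online;   /* 0x0a: CPUs 1 and 3 */
        unsigned long offline_compl  = ~online;              /* also sets bits >= 4 */

        printf("andnot=%#lx complement=%#lx\n",
               offline_andnot, offline_compl & 0xffUL);
        return 0;
    }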
if (dentry->d_inode) {
err = vfs_getattr(nd.path.mnt, dentry, &stat);
if (!err && dev_mynode(dev, dentry->d_inode, &stat)) {
- struct iattr newattrs;
- /*
- * before unlinking this node, reset permissions
- * of possible references like hardlinks
- */
- newattrs.ia_uid = 0;
- newattrs.ia_gid = 0;
- newattrs.ia_mode = stat.mode & ~0777;
- newattrs.ia_valid =
- ATTR_UID|ATTR_GID|ATTR_MODE;
- mutex_lock(&dentry->d_inode->i_mutex);
- notify_change(dentry, &newattrs);
- mutex_unlock(&dentry->d_inode->i_mutex);
err = vfs_unlink(nd.path.dentry->d_inode,
dentry);
if (!err || err == -ENOENT)
return sprintf(buf, "%d\n", loading);
}
-static void firmware_free_data(const struct firmware *fw)
-{
- int i;
- vunmap(fw->data);
- if (fw->pages) {
- for (i = 0; i < PFN_UP(fw->size); i++)
- __free_page(fw->pages[i]);
- kfree(fw->pages);
- }
-}
-
/* Some architectures don't have PAGE_KERNEL_RO */
#ifndef PAGE_KERNEL_RO
#define PAGE_KERNEL_RO PAGE_KERNEL
mutex_unlock(&fw_lock);
break;
}
- firmware_free_data(fw_priv->fw);
- memset(fw_priv->fw, 0, sizeof(struct firmware));
- /* If the pages are not owned by 'struct firmware' */
+ vfree(fw_priv->fw->data);
+ fw_priv->fw->data = NULL;
for (i = 0; i < fw_priv->nr_pages; i++)
__free_page(fw_priv->pages[i]);
kfree(fw_priv->pages);
fw_priv->pages = NULL;
fw_priv->page_array_size = 0;
fw_priv->nr_pages = 0;
+ fw_priv->fw->size = 0;
set_bit(FW_STATUS_LOADING, &fw_priv->status);
mutex_unlock(&fw_lock);
break;
case 0:
if (test_bit(FW_STATUS_LOADING, &fw_priv->status)) {
- vunmap(fw_priv->fw->data);
+ vfree(fw_priv->fw->data);
fw_priv->fw->data = vmap(fw_priv->pages,
fw_priv->nr_pages,
0, PAGE_KERNEL_RO);
dev_err(dev, "%s: vmap() failed\n", __func__);
goto err;
}
- /* Pages are now owned by 'struct firmware' */
- fw_priv->fw->pages = fw_priv->pages;
- fw_priv->pages = NULL;
-
+ /* Pages will be freed by vfree() */
fw_priv->page_array_size = 0;
fw_priv->nr_pages = 0;
complete(&fw_priv->completion);
if (fw->data == builtin->data)
goto free_fw;
}
- firmware_free_data(fw);
+ vfree(fw->data);
free_fw:
kfree(fw);
}
if (ret)
goto fail;
- file_update_time(file);
-
transfer_result = lo_do_transfer(lo, WRITE, page, offset,
bvec->bv_page, bv_offs, size, IV);
copied = size;
/* Generic Bluetooth USB device */
{ USB_DEVICE_INFO(0xe0, 0x01, 0x01) },
- /* Apple iMac11,1 */
- { USB_DEVICE(0x05ac, 0x8215) },
-
/* AVM BlueFRITZ! USB v2.0 */
{ USB_DEVICE(0x057c, 0x3800) },
BT_DBG("tty %p", tty);
- /* FIXME: This btw is bogus, nothing requires the old ldisc to clear
- the pointer */
if (hu)
return -EEXIST;
- /* Error if the tty has no write op instead of leaving an exploitable
- hole */
- if (tty->ops->write == NULL)
- return -EOPNOTSUPP;
-
if (!(hu = kzalloc(sizeof(struct hci_uart), GFP_KERNEL))) {
BT_ERR("Can't allocate control structure");
return -ENFILE;
config AGP_AMD64
tristate "AMD Opteron/Athlon64 on-CPU GART support" if !GART_IOMMU
- depends on AGP && X86 && K8_NB
+ depends on AGP && X86
default y if GART_IOMMU
help
This option gives you AGP support for the GLX component of
u8 cap_ptr;
int err;
- /* The Highlander principle */
- if (agp_bridges_found)
- return -ENODEV;
-
cap_ptr = pci_find_capability(pdev, PCI_CAP_ID_AGP);
if (!cap_ptr)
return -ENODEV;
amd64_aperture_sizes[bridge->aperture_size_idx].size);
agp_remove_bridge(bridge);
agp_put_bridge(bridge);
-
- agp_bridges_found--;
}
#ifdef CONFIG_PM
MODULE_DEVICE_TABLE(pci, agp_amd64_pci_table);
-static DEFINE_PCI_DEVICE_TABLE(agp_amd64_pci_promisc_table) = {
- { PCI_DEVICE_CLASS(0, 0) },
- { }
-};
-
static struct pci_driver agp_amd64_pci_driver = {
.name = "agpgart-amd64",
.id_table = agp_amd64_pci_table,
return err;
if (agp_bridges_found == 0) {
+ struct pci_dev *dev;
if (!agp_try_unsupported && !agp_try_unsupported_boot) {
printk(KERN_INFO PFX "No supported AGP bridge found.\n");
#ifdef MODULE
return -ENODEV;
/* Look for any AGP bridge */
- agp_amd64_pci_driver.id_table = agp_amd64_pci_promisc_table;
- err = driver_attach(&agp_amd64_pci_driver.driver);
- if (err == 0 && agp_bridges_found == 0)
- err = -ENODEV;
+ dev = NULL;
+ err = -ENODEV;
+ for_each_pci_dev(dev) {
+ if (!pci_find_capability(dev, PCI_CAP_ID_AGP))
+ continue;
+ /* Only one bridge supported right now */
+ if (agp_amd64_probe(dev, NULL) == 0) {
+ err = 0;
+ break;
+ }
+ }
}
return err;
}
handle = obj;
do {
status = acpi_get_object_info(handle, &info);
- if (ACPI_SUCCESS(status) && (info->valid & ACPI_VALID_HID)) {
+ if (ACPI_SUCCESS(status)) {
/* TBD check _CID also */
+ info->hardware_id.string[sizeof(info->hardware_id.length)-1] = '\0';
match = (strcmp(info->hardware_id.string, "HWP0001") == 0);
kfree(info);
if (match) {
#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/agp_backend.h>
-#include <asm/smp.h>
#include "agp.h"
/*
intel_i830_fini_flush();
}
+static void
+do_wbinvd(void *null)
+{
+ wbinvd();
+}
+
/* The chipset_flush interface needs to get data that has already been
* flushed out of the CPU all the way out to main memory, because the GPU
* doesn't snoop those buffers.
memset(pg, 0, 1024);
- if (cpu_has_clflush)
+ if (cpu_has_clflush) {
clflush_cache_range(pg, 1024);
- else if (wbinvd_on_all_cpus() != 0)
- printk(KERN_ERR "Timed out waiting for cache flush.\n");
+ } else {
+ if (on_each_cpu(do_wbinvd, NULL, 1) != 0)
+ printk(KERN_ERR "Timed out waiting for cache flush.\n");
+ }
}
/* The intel i830 automatically initializes the agp aperture during POST.
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
},
+ {
+ .class = (PCI_CLASS_BRIDGE_HOST << 8),
+ .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_SI,
+ .device = PCI_DEVICE_ID_SI_760,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ },
{ }
};
if (irq) {
unsigned long irq_flags;
- if (devp->hd_flags & HPET_SHARED_IRQ) {
- /*
- * To prevent the interrupt handler from seeing an
- * unwanted interrupt status bit, program the timer
- * so that it will not fire in the near future ...
- */
- writel(readl(&timer->hpet_config) & ~Tn_TYPE_CNF_MASK,
- &timer->hpet_config);
- write_counter(read_counter(&hpet->hpet_mc),
- &timer->hpet_compare);
- /* ... and clear any left-over status. */
- isr = 1 << (devp - devp->hd_hpets->hp_dev);
- writel(isr, &hpet->hpet_isr);
- }
-
sprintf(devp->hd_name, "hpet%d", (int)(devp - hpetp->hp_dev));
irq_flags = devp->hd_flags & HPET_SHARED_IRQ
? IRQF_SHARED : IRQF_DISABLED;
return -ENODEV;
if (!data.hd_address || !data.hd_nirqs) {
- if (data.hd_address)
- iounmap(data.hd_address);
printk("%s: no address or irqs in _CRS\n", __func__);
return -ENODEV;
}
{
/* Deliver the message to the upper layer with the lock
released. */
-
- if (smi_info->run_to_completion) {
- ipmi_smi_msg_received(smi_info->intf, msg);
- } else {
- spin_unlock(&(smi_info->si_lock));
- ipmi_smi_msg_received(smi_info->intf, msg);
- spin_lock(&(smi_info->si_lock));
- }
+ spin_unlock(&(smi_info->si_lock));
+ ipmi_smi_msg_received(smi_info->intf, msg);
+ spin_lock(&(smi_info->si_lock));
}
static void return_hosed_msg(struct smi_info *smi_info, int cCode)
/*
* capabilities for /dev/zero
* - permits private mappings, "copies" are taken of the source of zeros
- * - no writeback happens
*/
static struct backing_dev_info zero_bdi = {
.name = "char/mem",
- .capabilities = BDI_CAP_MAP_COPY | BDI_CAP_NO_ACCT_AND_WRITEBACK,
+ .capabilities = BDI_CAP_MAP_COPY,
};
static const struct file_operations full_fops = {
unsigned char contents[NVRAM_BYTES];
unsigned i = *ppos;
unsigned char *tmp;
+ int len;
- if (i >= NVRAM_BYTES)
- return 0; /* Past EOF */
-
- if (count > NVRAM_BYTES - i)
- count = NVRAM_BYTES - i;
- if (count > NVRAM_BYTES)
- return -EFAULT; /* Can't happen, but prove it to gcc */
-
- if (copy_from_user(contents, buf, count))
+ len = (NVRAM_BYTES - i) < count ? (NVRAM_BYTES - i) : count;
+ if (copy_from_user(contents, buf, len))
return -EFAULT;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum())
goto checksum_err;
- for (tmp = contents; count--; ++i, ++tmp)
+ for (tmp = contents; count-- > 0 && i < NVRAM_BYTES; ++i, ++tmp)
__nvram_write_byte(*tmp, i);
__nvram_set_checksum();
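The removed lines above bound an NVRAM write by the device size before copying from user space. A minimal userspace sketch of the same clamping idea; the size constant and helper name are invented and this is not the driver code.

    #include <stdio.h>

    #define NVRAM_BYTES 114   /* illustrative size */

    /* Clamp a write of 'count' bytes at offset 'pos' into a fixed-size area:
     * return 0 at or past EOF, otherwise the number of bytes that fit. */
    static size_t clamp_write(size_t pos, size_t count)
    {
        if (pos >= NVRAM_BYTES)
            return 0;
        if (count > NVRAM_BYTES - pos)
            count = NVRAM_BYTES - pos;
        return count;
    }

    int main(void)
    {
        printf("%zu\n", clamp_write(100, 50));   /* 14 */
        printf("%zu\n", clamp_write(200, 10));   /* 0: past EOF */
        return 0;
    }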
if (cmd != SIOCWANDEV)
return hdlc_ioctl(dev, ifr, cmd);
- memset(&new_line, 0, size);
-
switch(ifr->ifr_settings.type) {
case IF_GET_IFACE: /* return current sync_serial_settings */
.aio_read = generic_file_aio_read,
.write = do_sync_write,
.aio_write = blkdev_aio_write,
- .fsync = block_fsync,
.open = raw_open,
.release= raw_release,
.ioctl = raw_ioctl,
u8 algorithm[4];
u8 encscheme[2];
u8 sigscheme[2];
- __be32 paramsize;
u8 parameters[12]; /*assuming RSA*/
__be32 keysize;
u8 modulus[256];
return size;
}
-static int itpm;
-module_param(itpm, bool, 0444);
-MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)");
-
/*
* If interrupts are used (signaled by an irq set in the vendor structure)
* tpm.c can skip polling for the data to be available as the interrupt is
wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c,
&chip->vendor.int_queue);
status = tpm_tis_status(chip);
- if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) {
+ if ((status & TPM_STS_DATA_EXPECT) == 0) {
rc = -EIO;
goto out_err;
}
"1.2 TPM (device-id 0x%X, rev-id %d)\n",
vendor >> 16, ioread8(chip->vendor.iobase + TPM_RID(0)));
- if (itpm)
- dev_info(dev, "Intel iTPM workaround enabled\n");
-
-
/* Figure out the capabilities */
intfcaps =
ioread32(chip->vendor.iobase +
static int tpm_tis_pnp_resume(struct pnp_dev *dev)
{
- struct tpm_chip *chip = pnp_get_drvdata(dev);
- int ret;
-
- ret = tpm_pm_resume(&dev->dev);
- if (!ret)
- tpm_continue_selftest(chip);
-
- return ret;
+ return tpm_pm_resume(&dev->dev);
}
static struct pnp_device_id tpm_pnp_tbl[] __devinitdata = {
{"", 0}, /* User Specified */
{"", 0} /* Terminator */
};
-MODULE_DEVICE_TABLE(pnp, tpm_pnp_tbl);
static __devexit void tpm_tis_pnp_remove(struct pnp_dev *dev)
{
{
int copied = 0;
do {
- int goal = min(size - copied, TTY_BUFFER_PAGE);
- int space = tty_buffer_request_room(tty, goal);
+ int space = tty_buffer_request_room(tty, size - copied);
struct tty_buffer *tb = tty->buf.tail;
/* If there is no space then tb may be NULL */
if (unlikely(space == 0))
{
int copied = 0;
do {
- int goal = min(size - copied, TTY_BUFFER_PAGE);
- int space = tty_buffer_request_room(tty, goal);
+ int space = tty_buffer_request_room(tty, size - copied);
struct tty_buffer *tb = tty->buf.tail;
/* If there is no space then tb may be NULL */
if (unlikely(space == 0))
spin_lock_irqsave(&tty->buf.lock, flags);
if (!test_and_set_bit(TTY_FLUSHING, &tty->flags)) {
- struct tty_buffer *head, *tail = tty->buf.tail;
- int seen_tail = 0;
+ struct tty_buffer *head;
while ((head = tty->buf.head) != NULL) {
int count;
char *char_buf;
if (!count) {
if (head->next == NULL)
break;
- /*
- There's a possibility tty might get new buffer
- added during the unlock window below. We could
- end up spinning in here forever hogging the CPU
- completely. To avoid this let's have a rest each
- time we processed the tail buffer.
- */
- if (tail == head)
- seen_tail = 1;
tty->buf.head = head->next;
tty_buffer_free(tty, head);
continue;
line discipline as we want to empty the queue */
if (test_bit(TTY_FLUSHPENDING, &tty->flags))
break;
- if (!tty->receive_room || seen_tail) {
+ if (!tty->receive_room) {
schedule_delayed_work(&tty->buf.work, 1);
break;
}
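The two tty_insert_flip_string hunks above drop the per-iteration cap (goal = min(size - copied, TTY_BUFFER_PAGE)) on how much buffer space is requested at once. A minimal userspace sketch of that capped-chunk pattern, with an illustrative chunk size and no real tty structures:

    #include <stdio.h>
    #include <stddef.h>

    #define CHUNK_CAP 4096   /* stand-in for TTY_BUFFER_PAGE */

    static size_t min_size(size_t a, size_t b) { return a < b ? a : b; }

    /* Consume 'size' bytes in capped chunks so a single large request never
     * asks the allocator for one huge contiguous buffer. */
    static size_t copy_in_chunks(size_t size)
    {
        size_t copied = 0;

        while (copied < size) {
            size_t goal = min_size(size - copied, CHUNK_CAP);
            copied += goal;   /* pretend the requested space was granted */
        }
        return copied;
    }

    int main(void)
    {
        printf("%zu\n", copy_in_chunks(10000));   /* 10000, in 4096+4096+1808 */
        return 0;
    }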
list_del_init(&tty->tty_files);
file_list_unlock();
- put_pid(tty->pgrp);
- put_pid(tty->session);
free_tty_struct(tty);
}
static DEFINE_SPINLOCK(tty_ldisc_lock);
static DECLARE_WAIT_QUEUE_HEAD(tty_ldisc_wait);
-static DECLARE_WAIT_QUEUE_HEAD(tty_ldisc_idle);
/* Line disc dispatch table */
static struct tty_ldisc_ops *tty_ldiscs[NR_LDISCS];
return;
}
local_irq_restore(flags);
- wake_up(&tty_ldisc_idle);
}
/**
static int tty_ldisc_open(struct tty_struct *tty, struct tty_ldisc *ld)
{
- int ret;
-
WARN_ON(test_and_set_bit(TTY_LDISC_OPEN, &tty->flags));
- if (ld->ops->open) {
- ret = ld->ops->open(tty);
- if (ret)
- clear_bit(TTY_LDISC_OPEN, &tty->flags);
- }
+ if (ld->ops->open)
+ return ld->ops->open(tty);
return 0;
}
return cancel_delayed_work_sync(&tty->buf.work);
}
-/**
- * tty_ldisc_wait_idle - wait for the ldisc to become idle
- * @tty: tty to wait for
- *
- * Wait for the line discipline to become idle. The discipline must
- * have been halted for this to guarantee it remains idle.
- */
-static int tty_ldisc_wait_idle(struct tty_struct *tty)
-{
- int ret;
- ret = wait_event_interruptible_timeout(tty_ldisc_idle,
- atomic_read(&tty->ldisc->users) == 1, 5 * HZ);
- if (ret < 0)
- return ret;
- return ret > 0 ? 0 : -EBUSY;
-}
-
/**
* tty_set_ldisc - set line discipline
* @tty: the terminal to set
flush_scheduled_work();
- retval = tty_ldisc_wait_idle(tty);
-
mutex_lock(&tty->ldisc_mutex);
-
- /* handle wait idle failure locked */
- if (retval) {
- tty_ldisc_put(new_ldisc);
- goto enable;
- }
-
if (test_bit(TTY_HUPPED, &tty->flags)) {
/* We were raced by the hangup method. It will have stomped
the ldisc data and closed the ldisc down */
tty_ldisc_put(o_ldisc);
-enable:
/*
* Allow ldisc referencing to occur again
*/
/**
* tty_ldisc_reinit - reinitialise the tty ldisc
* @tty: tty to reinit
- * @ldisc: line discipline to reinitialize
*
- * Switch the tty to a line discipline and leave the ldisc
- * state closed
+ * Switch the tty back to N_TTY line discipline and leave the
+ * ldisc state closed
*/
-static int tty_ldisc_reinit(struct tty_struct *tty, int ldisc)
+static void tty_ldisc_reinit(struct tty_struct *tty)
{
- struct tty_ldisc *ld = tty_ldisc_get(ldisc);
-
- if (IS_ERR(ld))
- return -1;
+ struct tty_ldisc *ld;
tty_ldisc_close(tty, tty->ldisc);
tty_ldisc_put(tty->ldisc);
/*
* Switch the line discipline back
*/
+ ld = tty_ldisc_get(N_TTY);
+ BUG_ON(IS_ERR(ld));
tty_ldisc_assign(tty, ld);
- tty_set_termios_ldisc(tty, ldisc);
-
- return 0;
+ tty_set_termios_ldisc(tty, N_TTY);
}
/**
void tty_ldisc_hangup(struct tty_struct *tty)
{
struct tty_ldisc *ld;
- int reset = tty->driver->flags & TTY_DRIVER_RESET_TERMIOS;
- int err = 0;
/*
 * FIXME! What are the locking issues here? This may be overdoing

wake_up_interruptible_poll(&tty->read_wait, POLLIN);
/*
* Shutdown the current line discipline, and reset it to
- * N_TTY if need be.
- *
- * Avoid racing set_ldisc or tty_ldisc_release
+ * N_TTY.
*/
- mutex_lock(&tty->ldisc_mutex);
- tty_ldisc_halt(tty);
- /* At this point we have a closed ldisc and we want to
- reopen it. We could defer this to the next open but
- it means auditing a lot of other paths so this is
- a FIXME */
- if (tty->ldisc) { /* Not yet closed */
- if (reset == 0) {
-
- if (!tty_ldisc_reinit(tty, tty->termios->c_line))
- err = tty_ldisc_open(tty, tty->ldisc);
- else
- err = 1;
- }
- /* If the re-open fails or we reset then go to N_TTY. The
- N_TTY open cannot fail */
- if (reset || err) {
- BUG_ON(tty_ldisc_reinit(tty, N_TTY));
+ if (tty->driver->flags & TTY_DRIVER_RESET_TERMIOS) {
+ /* Avoid racing set_ldisc or tty_ldisc_release */
+ mutex_lock(&tty->ldisc_mutex);
+ tty_ldisc_halt(tty);
+ if (tty->ldisc) { /* Not yet closed */
+ /* Switch back to N_TTY */
+ tty_ldisc_reinit(tty);
+ /* At this point we have a closed ldisc and we want to
+ reopen it. We could defer this to the next open but
+ it means auditing a lot of other paths so this is
+ a FIXME */
WARN_ON(tty_ldisc_open(tty, tty->ldisc));
+ tty_ldisc_enable(tty);
}
- tty_ldisc_enable(tty);
- }
- mutex_unlock(&tty->ldisc_mutex);
- if (reset)
+ mutex_unlock(&tty->ldisc_mutex);
tty_reset_termios(tty);
+ }
}
/**
struct kbd_struct * kbd;
unsigned int console;
unsigned char ucval;
- unsigned int uival;
void __user *up = (void __user *)arg;
int i, perm;
int ret = 0;
break;
case KDGETMODE:
- uival = vc->vc_mode;
+ ucval = vc->vc_mode;
goto setint;
case KDMAPDISP:
break;
case KDGKBMODE:
- uival = ((kbd->kbdmode == VC_RAW) ? K_RAW :
+ ucval = ((kbd->kbdmode == VC_RAW) ? K_RAW :
(kbd->kbdmode == VC_MEDIUMRAW) ? K_MEDIUMRAW :
(kbd->kbdmode == VC_UNICODE) ? K_UNICODE :
K_XLATE);
break;
case KDGKBMETA:
- uival = (vc_kbd_mode(kbd, VC_META) ? K_ESCPREFIX : K_METABIT);
+ ucval = (vc_kbd_mode(kbd, VC_META) ? K_ESCPREFIX : K_METABIT);
setint:
- ret = put_user(uival, (int __user *)arg);
+ ret = put_user(ucval, (int __user *)arg);
break;
case KDGETKEYCODE:
for (i = 0; i < MAX_NR_CONSOLES; ++i)
if (! VT_IS_IN_USE(i))
break;
- uival = i < MAX_NR_CONSOLES ? (i+1) : -1;
+ ucval = i < MAX_NR_CONSOLES ? (i+1) : -1;
goto setint;
/*
static int sh_cmt_clocksource_enable(struct clocksource *cs)
{
struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
+ int ret;
p->total_cycles = 0;
- return sh_cmt_start(p, FLAG_CLOCKSOURCE);
+ ret = sh_cmt_start(p, FLAG_CLOCKSOURCE);
+ if (ret)
+ return ret;
+
+ /* TODO: calculate good shift from rate and counter bit width */
+ cs->shift = 0;
+ cs->mult = clocksource_hz2mult(p->rate, cs->shift);
+ return 0;
}
static void sh_cmt_clocksource_disable(struct clocksource *cs)
cs->disable = sh_cmt_clocksource_disable;
cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8);
cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
-
- /* clk_get_rate() needs an enabled clock */
- clk_enable(p->clk);
- p->rate = clk_get_rate(p->clk) / (p->width == 16) ? 512 : 8;
- clk_disable(p->clk);
-
- /* TODO: calculate good shift from rate and counter bit width */
- cs->shift = 10;
- cs->mult = clocksource_hz2mult(p->rate, cs->shift);
-
pr_info("sh_cmt: %s used as clock source\n", cs->name);
-
clocksource_register(cs);
return 0;
}
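The clocksource hunks above move the mult/shift setup between the registration and enable paths; the underlying arithmetic is that nanoseconds are recovered from counter cycles as (cycles * mult) >> shift. A rough userspace model of that relationship under the same assumption (the rate and rounding are illustrative, not the exact kernel clocksource_hz2mult helper):

    #include <stdio.h>
    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000ULL

    /* Pick mult so that ns = (cycles * mult) >> shift for a counter at 'hz'. */
    static uint32_t hz2mult(uint32_t hz, uint32_t shift)
    {
        uint64_t tmp = NSEC_PER_SEC << shift;

        tmp += hz / 2;                /* round to nearest */
        return (uint32_t)(tmp / hz);
    }

    int main(void)
    {
        uint32_t shift = 10;
        uint32_t mult = hz2mult(32768, shift);   /* a 32.768 kHz counter */
        uint64_t cycles = 32768;                 /* one second of ticks */

        printf("mult=%u ns=%llu\n", mult,
               (unsigned long long)((cycles * mult) >> shift));
        return 0;
    }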
p->irqaction.handler = sh_cmt_interrupt;
p->irqaction.dev_id = p;
p->irqaction.flags = IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL;
+ ret = setup_irq(irq, &p->irqaction);
+ if (ret) {
+ pr_err("sh_cmt: failed to request irq %d\n", irq);
+ goto err1;
+ }
/* get hold of clock */
p->clk = clk_get(&p->pdev->dev, cfg->clk);
if (IS_ERR(p->clk)) {
pr_err("sh_cmt: cannot get clock \"%s\"\n", cfg->clk);
ret = PTR_ERR(p->clk);
- goto err1;
+ goto err2;
}
if (resource_size(res) == 6) {
p->clear_bits = ~0xc000;
}
- ret = sh_cmt_register(p, cfg->name,
- cfg->clockevent_rating,
- cfg->clocksource_rating);
- if (ret) {
- pr_err("sh_cmt: registration failed\n");
- goto err1;
- }
-
- ret = setup_irq(irq, &p->irqaction);
- if (ret) {
- pr_err("sh_cmt: failed to request irq %d\n", irq);
- goto err1;
- }
-
- return 0;
-
-err1:
+ return sh_cmt_register(p, cfg->name,
+ cfg->clockevent_rating,
+ cfg->clocksource_rating);
+ err2:
+ remove_irq(irq, &p->irqaction);
+ err1:
iounmap(p->mapbase);
-err0:
+ err0:
return ret;
}
ced->cpumask = cpumask_of(0);
ced->set_mode = sh_mtu2_clock_event_mode;
- pr_info("sh_mtu2: %s used for clock events\n", ced->name);
- clockevents_register_device(ced);
-
ret = setup_irq(p->irqaction.irq, &p->irqaction);
if (ret) {
pr_err("sh_mtu2: failed to request irq %d\n",
p->irqaction.irq);
return;
}
+
+ pr_info("sh_mtu2: %s used for clock events\n", ced->name);
+ clockevents_register_device(ced);
}
static int sh_mtu2_register(struct sh_mtu2_priv *p, char *name,
static int sh_tmu_clocksource_enable(struct clocksource *cs)
{
struct sh_tmu_priv *p = cs_to_sh_tmu(cs);
+ int ret;
+
+ ret = sh_tmu_enable(p);
+ if (ret)
+ return ret;
- return sh_tmu_enable(p);
+ /* TODO: calculate good shift from rate and counter bit width */
+ cs->shift = 10;
+ cs->mult = clocksource_hz2mult(p->rate, cs->shift);
+ return 0;
}
static void sh_tmu_clocksource_disable(struct clocksource *cs)
cs->disable = sh_tmu_clocksource_disable;
cs->mask = CLOCKSOURCE_MASK(32);
cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
-
- /* clk_get_rate() needs an enabled clock */
- clk_enable(p->clk);
- /* channel will be configured at parent clock / 4 */
- p->rate = clk_get_rate(p->clk) / 4;
- clk_disable(p->clk);
- /* TODO: calculate good shift from rate and counter bit width */
- cs->shift = 10;
- cs->mult = clocksource_hz2mult(p->rate, cs->shift);
-
pr_info("sh_tmu: %s used as clock source\n", cs->name);
clocksource_register(cs);
return 0;
ced->set_next_event = sh_tmu_clock_event_next;
ced->set_mode = sh_tmu_clock_event_mode;
- pr_info("sh_tmu: %s used for clock events\n", ced->name);
- clockevents_register_device(ced);
-
ret = setup_irq(p->irqaction.irq, &p->irqaction);
if (ret) {
pr_err("sh_tmu: failed to request irq %d\n",
p->irqaction.irq);
return;
}
+
+ pr_info("sh_tmu: %s used for clock events\n", ced->name);
+ clockevents_register_device(ced);
}
static int sh_tmu_register(struct sh_tmu_priv *p, char *name,
dprintk("governor switch\n");
/* end old governor */
- if (data->governor)
+ if (data->governor) {
+ /*
+ * Need to release the rwsem around governor
+ * stop due to lock dependency between
+ * cancel_delayed_work_sync and the read lock
+ * taken in the delayed work handler.
+ */
+ unlock_policy_rwsem_write(data->cpu);
__cpufreq_governor(data, CPUFREQ_GOV_STOP);
+ lock_policy_rwsem_write(data->cpu);
+ }
/* start new governor */
data->governor = policy->governor;
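The comment added above explains dropping the policy rwsem around the governor stop because the stop synchronously cancels delayed work that itself takes the read side. A minimal pthreads sketch of that drop-wait-retake pattern; the lock name and worker are made up and this is not the cpufreq code (build with -lpthread).

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t policy_lock = PTHREAD_RWLOCK_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_rwlock_rdlock(&policy_lock);   /* like the work handler taking the read lock */
        puts("worker ran under the read lock");
        pthread_rwlock_unlock(&policy_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_rwlock_wrlock(&policy_lock);
        /* Waiting for the worker while still holding the write lock would
         * deadlock, so drop it around the synchronous wait, as the hunk
         * above does around the governor stop. */
        pthread_rwlock_unlock(&policy_lock);
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(&t, NULL);                /* stands in for cancel_delayed_work_sync() */
        pthread_rwlock_wrlock(&policy_lock);
        puts("write lock re-taken after the worker finished");
        pthread_rwlock_unlock(&policy_lock);
        return 0;
    }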
unsigned int expected_us;
u64 predicted_us;
+ unsigned int measured_us;
unsigned int exit_us;
unsigned int bucket;
u64 correction_factor[BUCKETS];
int i;
int multiplier;
+ data->last_state_idx = 0;
+ data->exit_us = 0;
+
if (data->needs_update) {
menu_update(dev);
data->needs_update = 0;
}
- data->last_state_idx = 0;
- data->exit_us = 0;
-
/* Special case when user has set very strict latency requirement */
if (unlikely(latency_req == 0))
return 0;
new_factor = data->correction_factor[data->bucket]
* (DECAY - 1) / DECAY;
- if (data->expected_us > 0 && measured_us < MAX_INTERESTING)
+ if (data->expected_us > 0 && data->measured_us < MAX_INTERESTING)
new_factor += RESOLUTION * measured_us / data->expected_us;
else
/*
if (initial)
asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */
: "+S" (input), "+D" (output), "+a" (iv)
- : "d" (control_word), "b" (key), "c" (initial));
+ : "d" (control_word), "b" (key), "c" (count));
asm volatile (".byte 0xf3,0x0f,0xa7,0xd0" /* rep xcryptcbc */
: "+S" (input), "+D" (output), "+a" (iv)
static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
{
- u32 val = ~(1 << (chan->idx * 16));
+ u32 val = (1 << (1 + (chan->idx * 16)));
dev_dbg(chan->device->common.dev, "%s, val 0x%08x\n", __func__, val);
__raw_writel(val, XOR_INTR_CAUSE(chan));
}
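The two versions of the cause-register value above produce very different bit patterns: one is the complement of the channel's own bit, the other is a single bit one position above the channel base. A tiny sketch just to show the two masks side by side (the index is arbitrary; no hardware access is involved):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        unsigned idx = 1;   /* arbitrary channel index */

        uint32_t all_but_one = ~(1u << (idx * 16));     /* 0xfffeffff */
        uint32_t single_bit  = 1u << (1 + idx * 16);    /* 0x00020000 */

        printf("%#010x %#010x\n", all_but_one, single_bit);
        return 0;
    }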
default:
amd64_printk(KERN_ERR, "Unsupported family!\n");
- return -EINVAL;
+ break;
}
return amd64_search_set_scrub_rate(pvt->misc_f3_ctl, *bandwidth,
min_scrubrate);
u64 chan_off;
if (hi_range_sel) {
- if (!(dct_sel_base_addr & 0xFFFF0000) &&
+ if (!(dct_sel_base_addr & 0xFFFFF800) &&
hole_valid && (sys_addr >= 0x100000000ULL))
chan_off = hole_off << 16;
else
void amd_decode_nb_mce(int node_id, struct err_regs *regs, int handle_errors)
{
u32 ec = ERROR_CODE(regs->nbsl);
+ u32 xec = EXT_ERROR_CODE(regs->nbsl);
if (!handle_errors)
return;
if (regs->nbsh & K8_NBSH_ERR_CPU_VAL)
pr_cont(", core: %u\n", (u8)(regs->nbsh & 0xf));
} else {
- u8 assoc_cpus = regs->nbsh & 0xf;
-
- if (assoc_cpus > 0)
- pr_cont(", core: %d", fls(assoc_cpus) - 1);
-
- pr_cont("\n");
+ pr_cont(", core: %d\n", ilog2((regs->nbsh & 0xf)));
}
- pr_emerg("%s.\n", EXT_ERR_MSG(regs->nbsl));
+
+ pr_emerg("%s.\n", EXT_ERR_MSG(xec));
if (BUS_ERROR(ec) && nb_bus_decoder)
nb_bus_decoder(node_id, regs);
((m->status & MCI_STATUS_PCC) ? "yes" : "no"));
/* do the two bits[14:13] together */
- ecc = (m->status >> 45) & 0x3;
+ ecc = m->status & (3ULL << 45);
if (ecc)
pr_cont(", %sECC Error", ((ecc == 2) ? "C" : "U"));
static void fw_card_bm_work(struct work_struct *work)
{
struct fw_card *card = container_of(work, struct fw_card, work.work);
- struct fw_device *root_device, *irm_device;
+ struct fw_device *root_device;
struct fw_node *root_node;
unsigned long flags;
int root_id, new_root_id, irm_id, local_id;
bool do_reset = false;
bool root_device_is_running;
bool root_device_is_cmc;
- bool irm_is_1394_1995_only;
spin_lock_irqsave(&card->lock, flags);
}
generation = card->generation;
-
root_node = card->root_node;
fw_node_get(root_node);
root_device = root_node->data;
root_device_is_running = root_device &&
atomic_read(&root_device->state) == FW_DEVICE_RUNNING;
root_device_is_cmc = root_device && root_device->cmc;
-
- irm_device = card->irm_node->data;
- irm_is_1394_1995_only = irm_device && irm_device->config_rom &&
- (irm_device->config_rom[2] & 0x000000f0) == 0;
-
root_id = root_node->node_id;
irm_id = card->irm_node->node_id;
local_id = card->local_node->node_id;
if (!card->irm_node->link_on) {
new_root_id = local_id;
- fw_notify("%s, making local node (%02x) root.\n",
- "IRM has link off", new_root_id);
- goto pick_me;
- }
-
- if (irm_is_1394_1995_only) {
- new_root_id = local_id;
- fw_notify("%s, making local node (%02x) root.\n",
- "IRM is not 1394a compliant", new_root_id);
+ fw_notify("IRM has link off, making local node (%02x) root.\n",
+ new_root_id);
goto pick_me;
}
* root, and thus, IRM.
*/
new_root_id = local_id;
- fw_notify("%s, making local node (%02x) root.\n",
- "BM lock failed", new_root_id);
+ fw_notify("BM lock failed, making local node (%02x) root.\n",
+ new_root_id);
goto pick_me;
}
} else if (card->bm_generation != generation) {
int ret;
if (_IOC_TYPE(cmd) != '#' ||
- _IOC_NR(cmd) >= ARRAY_SIZE(ioctl_handlers) ||
- _IOC_SIZE(cmd) > sizeof(buffer))
+ _IOC_NR(cmd) >= ARRAY_SIZE(ioctl_handlers))
return -EINVAL;
- if (_IOC_DIR(cmd) == _IOC_READ)
- memset(&buffer, 0, _IOC_SIZE(cmd));
-
- if (_IOC_DIR(cmd) & _IOC_WRITE)
- if (copy_from_user(buffer, arg, _IOC_SIZE(cmd)))
+ if (_IOC_DIR(cmd) & _IOC_WRITE) {
+ if (_IOC_SIZE(cmd) > sizeof(buffer) ||
+ copy_from_user(buffer, arg, _IOC_SIZE(cmd)))
return -EFAULT;
+ }
ret = ioctl_handlers[_IOC_NR(cmd)](client, buffer);
if (ret < 0)
return ret;
- if (_IOC_DIR(cmd) & _IOC_READ)
- if (copy_to_user(arg, buffer, _IOC_SIZE(cmd)))
+ if (_IOC_DIR(cmd) & _IOC_READ) {
+ if (_IOC_SIZE(cmd) > sizeof(buffer) ||
+ copy_to_user(arg, buffer, _IOC_SIZE(cmd)))
return -EFAULT;
+ }
return ret;
}
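The ioctl hunk above is about validating _IOC_SIZE(cmd) against an on-stack buffer before copying data in or out. A simplified userspace sketch of that bounds-check-before-copy pattern; the size constant and helper name are invented, and plain memcpy stands in for copy_from_user.

    #include <stdio.h>
    #include <string.h>

    #define MAX_IOCTL_ARG 256   /* invented stand-in for the on-stack buffer size */

    /* Validate the argument size against the local buffer before copying
     * anything; a too-large size is rejected up front. */
    static int dispatch(unsigned int arg_size, const void *user_arg)
    {
        char buffer[MAX_IOCTL_ARG];

        if (arg_size > sizeof(buffer))
            return -1;

        memset(buffer, 0, sizeof(buffer));   /* nothing stale if only part is written */
        memcpy(buffer, user_arg, arg_size);
        /* ...command handler would run on 'buffer' here... */
        return 0;
    }

    int main(void)
    {
        char arg[16] = "hello";

        printf("%d\n", dispatch(sizeof(arg), arg));   /* 0 */
        printf("%d\n", dispatch(4096, arg));          /* -1: larger than the buffer */
        return 0;
    }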
return -ENOMEM;
stack = &rom[READ_BIB_ROM_SIZE];
- memset(rom, 0, sizeof(*rom) * READ_BIB_ROM_SIZE);
device->max_speed = SCODE_100;
d = &ab->descriptor;
if (d->res_count == 0) {
- size_t size, size2, rest, pktsize, size3, offset;
+ size_t size, rest, offset;
dma_addr_t start_bus;
void *start;
*/
offset = offsetof(struct ar_buffer, data);
- start = ab;
+ start = buffer = ab;
start_bus = le32_to_cpu(ab->descriptor.data_address) - offset;
- buffer = ab->data;
ab = ab->next;
d = &ab->descriptor;
- size = start + PAGE_SIZE - ctx->pointer;
- /* valid buffer data in the next page */
+ size = buffer + PAGE_SIZE - ctx->pointer;
rest = le16_to_cpu(d->req_count) - le16_to_cpu(d->res_count);
- /* what actually fits in this page */
- size2 = min(rest, (size_t)PAGE_SIZE - offset - size);
memmove(buffer, ctx->pointer, size);
- memcpy(buffer + size, ab->data, size2);
-
- while (size > 0) {
- void *next = handle_ar_packet(ctx, buffer);
- pktsize = next - buffer;
- if (pktsize >= size) {
- /*
- * We have handled all the data that was
- * originally in this page, so we can now
- * continue in the next page.
- */
- buffer = next;
- break;
- }
- /* move the next packet to the start of the buffer */
- memmove(buffer, next, size + size2 - pktsize);
- size -= pktsize;
- /* fill up this page again */
- size3 = min(rest - size2,
- (size_t)PAGE_SIZE - offset - size - size2);
- memcpy(buffer + size + size2,
- (void *) ab->data + size2, size3);
- size2 += size3;
- }
-
- if (rest > 0) {
- /* handle the packets that are fully in the next page */
- buffer = (void *) ab->data +
- (buffer - (start + offset + size));
- end = (void *) ab->data + rest;
+ memcpy(buffer + size, ab->data, rest);
+ ctx->current_buffer = ab;
+ ctx->pointer = (void *) ab->data + rest;
+ end = buffer + size + rest;
- while (buffer < end)
- buffer = handle_ar_packet(ctx, buffer);
-
- ctx->current_buffer = ab;
- ctx->pointer = end;
+ while (buffer < end)
+ buffer = handle_ar_packet(ctx, buffer);
- dma_free_coherent(ohci->card.device, PAGE_SIZE,
- start, start_bus);
- ar_context_add_page(ctx);
- } else {
- ctx->pointer = start + PAGE_SIZE;
- }
+ dma_free_coherent(ohci->card.device, PAGE_SIZE,
+ start, start_bus);
+ ar_context_add_page(ctx);
} else {
buffer = ctx->pointer;
ctx->pointer = end =
return !((ret>>offset)^gpn_pol);
}
-static void wm831x_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+static int wm831x_gpio_direction_out(struct gpio_chip *chip,
+ unsigned offset, int value)
{
struct wm831x_gpio *wm831x_gpio = to_wm831x_gpio(chip);
struct wm831x *wm831x = wm831x_gpio->wm831x;
- wm831x_set_bits(wm831x, WM831X_GPIO_LEVEL, 1 << offset,
- value << offset);
+ return wm831x_set_bits(wm831x, WM831X_GPIO1_CONTROL + offset,
+ WM831X_GPN_DIR | WM831X_GPN_TRI, 0);
}
-static int wm831x_gpio_direction_out(struct gpio_chip *chip,
- unsigned offset, int value)
+static void wm831x_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
struct wm831x_gpio *wm831x_gpio = to_wm831x_gpio(chip);
struct wm831x *wm831x = wm831x_gpio->wm831x;
/* Can only set GPIO state once it's in output mode */
wm831x_gpio_set(chip, offset, value);
- return 0;
+ wm831x_set_bits(wm831x, WM831X_GPIO_LEVEL, 1 << offset,
+ value << offset);
}
static int wm831x_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
if (connector->status == connector_status_disconnected) {
DRM_DEBUG_KMS("%s is disconnected\n",
drm_get_connector_name(connector));
- drm_mode_connector_update_edid_property(connector, NULL);
goto prune;
}
mode_changed = true;
if (mode_changed) {
+ old_fb = set->crtc->fb;
+ set->crtc->fb = set->fb;
set->crtc->enabled = (set->mode != NULL);
if (set->mode != NULL) {
DRM_DEBUG_KMS("attempting to set mode from"
" userspace\n");
drm_mode_debug_printmodeline(set->mode);
- old_fb = set->crtc->fb;
- set->crtc->fb = set->fb;
if (!drm_crtc_helper_set_mode(set->crtc, set->mode,
set->x, set->y,
old_fb)) {
retcode = -EFAULT;
goto err_i1;
}
- } else
- memset(kdata, 0, _IOC_SIZE(cmd));
-
+ }
retcode = func(dev, kdata, file_priv);
if (cmd & IOC_OUT) {
/* Envision Peripherals, Inc. EN-7100e */
{ "EPI", 59264, EDID_QUIRK_135_CLOCK_TOO_HIGH },
- /* Envision EN2028 */
- { "EPI", 8232, EDID_QUIRK_PREFER_LARGE_60 },
/* Funai Electronics PM36B */
{ "FCM", 13600, EDID_QUIRK_PREFER_LARGE_75 |
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
/* 1024x768@85Hz */
{ DRM_MODE("1024x768", DRM_MODE_TYPE_DRIVER, 94500, 1024, 1072,
- 1168, 1376, 0, 768, 769, 772, 808, 0,
+ 1072, 1376, 0, 768, 769, 772, 808, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
/* 1152x864@75Hz */
{ DRM_MODE("1152x864", DRM_MODE_TYPE_DRIVER, 108000, 1152, 1216,
mode = drm_cvt_mode(dev, hsize, vsize, vrefresh_rate, 0, 0,
false);
mode->hdisplay = 1366;
- mode->hsync_start = mode->hsync_start - 1;
- mode->hsync_end = mode->hsync_end - 1;
+ mode->vsync_start = mode->vsync_start - 1;
+ mode->vsync_end = mode->vsync_end - 1;
return mode;
}
mode = NULL;
mode->vsync_end = mode->vsync_start + vsync_pulse_width;
mode->vtotal = mode->vdisplay + vblank;
+ /* perform the basic check for the detailed timing */
+ if (mode->hsync_end > mode->htotal ||
+ mode->vsync_end > mode->vtotal) {
+ drm_mode_destroy(dev, mode);
+ DRM_DEBUG_KMS("Incorrect detailed timing. "
+ "Sync is beyond the blank.\n");
+ return NULL;
+ }
+
/* Some EDIDs have bogus h/vtotal values */
if (mode->hsync_end > mode->htotal)
mode->htotal = mode->hsync_end + 1;
return modes;
}
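The added check above rejects a detailed timing whose sync pulse ends beyond the blanking total. A small standalone sketch of that sanity test plus the refresh-rate arithmetic it protects; the numbers are an example, not taken from a real EDID.

    #include <stdio.h>

    int main(void)
    {
        int clock_khz = 94500;
        int hdisplay = 1024, hsync_end = 1168, htotal = 1376;
        int vdisplay = 768,  vsync_end = 772,  vtotal = 808;

        /* Sync must end inside the blanking interval */
        if (hsync_end > htotal || vsync_end > vtotal) {
            puts("rejecting mode: sync is beyond the blank");
            return 1;
        }

        /* refresh rate follows from pixel clock and the h/v totals (~85 Hz here) */
        printf("%dx%d @ %.2f Hz\n", hdisplay, vdisplay,
               clock_khz * 1000.0 / (htotal * vtotal));
        return 0;
    }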
-static int add_detailed_modes(struct drm_connector *connector,
- struct detailed_timing *timing,
- struct edid *edid, u32 quirks, int preferred)
-{
- int i, modes = 0;
- struct detailed_non_pixel *data = &timing->data.other_data;
- int timing_level = standard_timing_level(edid);
- struct drm_display_mode *newmode;
- struct drm_device *dev = connector->dev;
-
- if (timing->pixel_clock) {
- newmode = drm_mode_detailed(dev, edid, timing, quirks);
- if (!newmode)
- return 0;
-
- if (preferred)
- newmode->type |= DRM_MODE_TYPE_PREFERRED;
-
- drm_mode_probed_add(connector, newmode);
- return 1;
- }
-
- /* other timing types */
- switch (data->type) {
- case EDID_DETAIL_MONITOR_RANGE:
- /* Get monitor range data */
- break;
- case EDID_DETAIL_STD_MODES:
- /* Six modes per detailed section */
- for (i = 0; i < 6; i++) {
- struct std_timing *std;
- struct drm_display_mode *newmode;
-
- std = &data->data.timings[i];
- newmode = drm_mode_std(dev, std, edid->revision,
- timing_level);
- if (newmode) {
- drm_mode_probed_add(connector, newmode);
- modes++;
- }
- }
- break;
- default:
- break;
- }
-
- return modes;
-}
-
/**
- * add_detailed_info - get detailed mode info from EDID data
+ * add_detailed_modes - get detailed mode info from EDID data
* @connector: attached connector
* @edid: EDID block to scan
* @quirks: quirks to apply
static int add_detailed_info(struct drm_connector *connector,
struct edid *edid, u32 quirks)
{
- int i, modes = 0;
+ struct drm_device *dev = connector->dev;
+ int i, j, modes = 0;
+ int timing_level;
+
+ timing_level = standard_timing_level(edid);
for (i = 0; i < EDID_DETAILED_TIMINGS; i++) {
struct detailed_timing *timing = &edid->detailed_timings[i];
- int preferred = (i == 0) && (edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING);
+ struct detailed_non_pixel *data = &timing->data.other_data;
+ struct drm_display_mode *newmode;
- /* In 1.0, only timings are allowed */
- if (!timing->pixel_clock && edid->version == 1 &&
- edid->revision == 0)
- continue;
+ /* X server check is version 1.1 or higher */
+ if (edid->version == 1 && edid->revision >= 1 &&
+ !timing->pixel_clock) {
+ /* Other timing or info */
+ switch (data->type) {
+ case EDID_DETAIL_MONITOR_SERIAL:
+ break;
+ case EDID_DETAIL_MONITOR_STRING:
+ break;
+ case EDID_DETAIL_MONITOR_RANGE:
+ /* Get monitor range data */
+ break;
+ case EDID_DETAIL_MONITOR_NAME:
+ break;
+ case EDID_DETAIL_MONITOR_CPDATA:
+ break;
+ case EDID_DETAIL_STD_MODES:
+ for (j = 0; j < 6; i++) {
+ struct std_timing *std;
+ struct drm_display_mode *newmode;
+
+ std = &data->data.timings[j];
+ newmode = drm_mode_std(dev, std,
+ edid->revision,
+ timing_level);
+ if (newmode) {
+ drm_mode_probed_add(connector, newmode);
+ modes++;
+ }
+ }
+ break;
+ default:
+ break;
+ }
+ } else {
+ newmode = drm_mode_detailed(dev, edid, timing, quirks);
+ if (!newmode)
+ continue;
+
+ /* First detailed mode is preferred */
+ if (i == 0 && (edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING))
+ newmode->type |= DRM_MODE_TYPE_PREFERRED;
+ drm_mode_probed_add(connector, newmode);
- modes += add_detailed_modes(connector, timing, edid, quirks,
- preferred);
+ modes++;
+ }
}
return modes;
}
-
/**
 * add_detailed_mode_eedid - get detailed mode info from additional timing
* EDID block
static int add_detailed_info_eedid(struct drm_connector *connector,
struct edid *edid, u32 quirks)
{
- int i, modes = 0;
+ struct drm_device *dev = connector->dev;
+ int i, j, modes = 0;
char *edid_ext = NULL;
struct detailed_timing *timing;
+ struct detailed_non_pixel *data;
+ struct drm_display_mode *newmode;
int edid_ext_num;
int start_offset, end_offset;
int timing_level;
for (i = start_offset; i < end_offset;
i += sizeof(struct detailed_timing)) {
timing = (struct detailed_timing *)(edid_ext + i);
- modes += add_detailed_modes(connector, timing, edid, quirks, 0);
+ data = &timing->data.other_data;
+ /* Detailed mode timing */
+ if (timing->pixel_clock) {
+ newmode = drm_mode_detailed(dev, edid, timing, quirks);
+ if (!newmode)
+ continue;
+
+ drm_mode_probed_add(connector, newmode);
+
+ modes++;
+ continue;
+ }
+
+ /* Other timing or info */
+ switch (data->type) {
+ case EDID_DETAIL_MONITOR_SERIAL:
+ break;
+ case EDID_DETAIL_MONITOR_STRING:
+ break;
+ case EDID_DETAIL_MONITOR_RANGE:
+ /* Get monitor range data */
+ break;
+ case EDID_DETAIL_MONITOR_NAME:
+ break;
+ case EDID_DETAIL_MONITOR_CPDATA:
+ break;
+ case EDID_DETAIL_STD_MODES:
+ /* Five modes per detailed section */
+ for (j = 0; j < 5; i++) {
+ struct std_timing *std;
+ struct drm_display_mode *newmode;
+
+ std = &data->data.timings[j];
+ newmode = drm_mode_std(dev, std,
+ edid->revision,
+ timing_level);
+ if (newmode) {
+ drm_mode_probed_add(connector, newmode);
+ modes++;
+ }
+ }
+ break;
+ default:
+ break;
+ }
}
return modes;
spin_unlock(&dev->count_lock);
}
out:
- if (!retcode) {
- mutex_lock(&dev->struct_mutex);
- if (minor->type == DRM_MINOR_LEGACY) {
- if (dev->dev_mapping == NULL)
- dev->dev_mapping = inode->i_mapping;
- else if (dev->dev_mapping != inode->i_mapping)
- retcode = -ENODEV;
- }
- mutex_unlock(&dev->struct_mutex);
+ mutex_lock(&dev->struct_mutex);
+ if (minor->type == DRM_MINOR_LEGACY) {
+ BUG_ON((dev->dev_mapping != NULL) &&
+ (dev->dev_mapping != inode->i_mapping));
+ if (dev->dev_mapping == NULL)
+ dev->dev_mapping = inode->i_mapping;
}
+ mutex_unlock(&dev->struct_mutex);
return retcode;
}
uint8_t ctl2;
if (tfp410_readb(dvo, TFP410_CTL_2, &ctl2)) {
- if (ctl2 & TFP410_CTL_2_RSEN)
+ if (ctl2 & TFP410_CTL_2_HTPLG)
ret = connector_status_connected;
else
ret = connector_status_disconnected;
ret = copy_from_user(cliprects, batch->cliprects,
batch->num_cliprects *
sizeof(struct drm_clip_rect));
- if (ret != 0) {
- ret = -EFAULT;
+ if (ret != 0)
goto fail_free;
- }
}
mutex_lock(&dev->struct_mutex);
return -ENOMEM;
ret = copy_from_user(batch_data, cmdbuf->buf, cmdbuf->sz);
- if (ret != 0) {
- ret = -EFAULT;
+ if (ret != 0)
goto fail_batch_free;
- }
if (cmdbuf->num_cliprects) {
cliprects = kcalloc(cmdbuf->num_cliprects,
ret = copy_from_user(cliprects, cmdbuf->cliprects,
cmdbuf->num_cliprects *
sizeof(struct drm_clip_rect));
- if (ret != 0) {
- ret = -EFAULT;
+ if (ret != 0)
goto fail_clip_free;
- }
}
mutex_lock(&dev->struct_mutex);
}
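The three hunks above all touch the same pattern: copy_from_user() returns the number of bytes left uncopied, not an errno, so a non-zero result has to be translated into -EFAULT rather than returned as-is. A userspace sketch with a fake copy helper (the helper and its 'faulting' parameter are invented for illustration):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    /* Stand-in for copy_from_user(): copies what it can and returns the
     * number of bytes that could NOT be copied. */
    static unsigned long fake_copy_from_user(void *dst, const void *src,
                                             unsigned long n, unsigned long faulting)
    {
        unsigned long ok = n > faulting ? n - faulting : 0;

        memcpy(dst, src, ok);
        return n - ok;            /* bytes left uncopied */
    }

    int main(void)
    {
        char dst[8], src[8] = "abcdefg";
        unsigned long left = fake_copy_from_user(dst, src, sizeof(src), 3);
        int ret = left ? -EFAULT : 0;   /* convert "short copy" into an errno */

        printf("uncopied=%lu ret=%d\n", left, ret);   /* uncopied=3 ret=-14 */
        return 0;
    }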
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- /*
- * free the memory space allocated for the child device
- * config parsed from VBT
- */
- if (dev_priv->child_dev && dev_priv->child_dev_num) {
- kfree(dev_priv->child_dev);
- dev_priv->child_dev = NULL;
- dev_priv->child_dev_num = 0;
- }
drm_irq_uninstall(dev);
vga_client_register(dev->pdev, NULL, NULL, NULL);
}
}
} else {
DRM_ERROR("Error occurred. Don't know how to reset this chip.\n");
- mutex_unlock(&dev->struct_mutex);
return -ENODEV;
}
struct notifier_block lid_notifier;
- int crt_ddc_bus; /* 0 = unknown, else GPIO to use for CRT DDC */
+ int crt_ddc_bus; /* -1 = unknown, else GPIO to use for CRT DDC */
struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */
int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
int num_fence_regs; /* 8 on pre-965, 16 otherwise */
struct timer_list idle_timer;
bool busy;
u16 orig_clock;
- int child_dev_num;
- struct child_device_config *child_dev;
struct drm_connector *int_lvds_connector;
} drm_i915_private_t;
obj_priv->dirty = 0;
for (i = 0; i < page_count; i++) {
+ if (obj_priv->pages[i] == NULL)
+ break;
+
if (obj_priv->dirty)
set_page_dirty(obj_priv->pages[i]);
struct address_space *mapping;
struct inode *inode;
struct page *page;
+ int ret;
if (obj_priv->pages_refcount++ != 0)
return 0;
mapping = inode->i_mapping;
for (i = 0; i < page_count; i++) {
page = read_cache_page_gfp(mapping, i,
- GFP_HIGHUSER |
+ mapping_gfp_mask (mapping) |
__GFP_COLD |
- __GFP_RECLAIMABLE |
gfpmask);
- if (IS_ERR(page))
- goto err_pages;
-
+ if (IS_ERR(page)) {
+ ret = PTR_ERR(page);
+ i915_gem_object_put_pages(obj);
+ return ret;
+ }
obj_priv->pages[i] = page;
}
i915_gem_object_do_bit_17_swizzle(obj);
return 0;
-
-err_pages:
- while (i--)
- page_cache_release(obj_priv->pages[i]);
-
- drm_free_large(obj_priv->pages);
- obj_priv->pages = NULL;
- obj_priv->pages_refcount--;
- return PTR_ERR(page);
}
static void i965_write_fence_reg(struct drm_i915_fence_reg *reg)
pitch_val = obj_priv->stride / tile_width;
pitch_val = ffs(pitch_val) - 1;
- if (obj_priv->tiling_mode == I915_TILING_Y &&
- HAS_128_BYTE_Y_TILING(dev))
- WARN_ON(pitch_val > I830_FENCE_MAX_PITCH_VAL);
- else
- WARN_ON(pitch_val > I915_FENCE_MAX_PITCH_VAL);
-
val = obj_priv->gtt_offset;
if (obj_priv->tiling_mode == I915_TILING_Y)
val |= 1 << I830_FENCE_TILING_Y_SHIFT;
return -EINVAL;
}
- /* If the object is bigger than the entire aperture, reject it early
- * before evicting everything in a vain attempt to find space.
- */
- if (obj->size > dev->gtt_total) {
- DRM_ERROR("Attempting to bind an object larger than the aperture\n");
- return -E2BIG;
- }
-
search_free:
free_space = drm_mm_search_free(&dev_priv->mm.gtt_space,
obj->size, alignment, 0);
if (ret != 0) {
DRM_ERROR("copy %d cliprects failed: %d\n",
args->num_cliprects, ret);
- ret = -EFAULT;
goto pre_mutex_err;
}
}
int ret;
i915_verify_inactive(dev, __FILE__, __LINE__);
-
- if (obj_priv->gtt_space != NULL) {
- if (alignment == 0)
- alignment = i915_gem_get_gtt_alignment(obj);
- if (obj_priv->gtt_offset & (alignment - 1)) {
- ret = i915_gem_object_unbind(obj);
- if (ret)
- return ret;
- }
- }
-
if (obj_priv->gtt_space == NULL) {
ret = i915_gem_object_bind_to_gtt(obj, alignment);
if (ret)
list_add(&dev_priv->mm.shrink_list, &shrink_list);
spin_unlock(&shrink_list_lock);
- /* On GEN3 we really need to make sure the ARB C3 LP bit is set */
- if (IS_I915G(dev) || IS_I915GM(dev) || IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev)) {
- u32 tmp = I915_READ(MI_ARB_STATE);
- if (!(tmp & MI_ARB_C3_LP_WRITE_ENABLE)) {
- /* arb state is a masked write, so set bit + bit in mask */
- tmp = MI_ARB_C3_LP_WRITE_ENABLE | (MI_ARB_C3_LP_WRITE_ENABLE << MI_ARB_MASK_SHIFT);
- I915_WRITE(MI_ARB_STATE, tmp);
- }
- }
-
/* Old X drivers will take 0-2 for front, back, depth buffers */
dev_priv->fence_reg_start = 3;
* reg, so dont bother to check the size */
if (stride / 128 > I965_FENCE_MAX_PITCH_VAL)
return false;
- } else if (IS_I9XX(dev) || IS_I8XX(dev)) {
- if (stride > 8192)
+ } else if (IS_I9XX(dev)) {
+ uint32_t pitch_val = ffs(stride / tile_width) - 1;
+
+ /* XXX: For Y tiling, FENCE_MAX_PITCH_VAL is actually 6 (8KB)
+ * instead of 4 (2KB) on 945s.
+ */
+ if (pitch_val > I915_FENCE_MAX_PITCH_VAL ||
+ size > (I830_FENCE_MAX_SIZE_VAL << 20))
return false;
+ } else {
+ uint32_t pitch_val = ffs(stride / tile_width) - 1;
- if (IS_I9XX(dev)) {
- if (size > I830_FENCE_MAX_SIZE_VAL << 20)
- return false;
- } else {
- if (size > I830_FENCE_MAX_SIZE_VAL << 19)
- return false;
- }
+ if (pitch_val > I830_FENCE_MAX_PITCH_VAL ||
+ size > (I830_FENCE_MAX_SIZE_VAL << 19))
+ return false;
}
/* 965+ just needs multiples of tile width */
#define I830_FENCE_SIZE_BITS(size) ((ffs((size) >> 19) - 1) << 8)
#define I830_FENCE_PITCH_SHIFT 4
#define I830_FENCE_REG_VALID (1<<0)
-#define I915_FENCE_MAX_PITCH_VAL 4
+#define I915_FENCE_MAX_PITCH_VAL 0x10
#define I830_FENCE_MAX_PITCH_VAL 6
#define I830_FENCE_MAX_SIZE_VAL (1<<8)
#define LM_BURST_LENGTH 0x00000700
#define LM_FIFO_WATERMARK 0x0000001F
#define MI_ARB_STATE 0x020e4 /* 915+ only */
-#define MI_ARB_MASK_SHIFT 16 /* shift for enable bits */
-
-/* Make render/texture TLB fetches lower priorty than associated data
- * fetches. This is not turned on by default
- */
-#define MI_ARB_RENDER_TLB_LOW_PRIORITY (1 << 15)
-
-/* Isoch request wait on GTT enable (Display A/B/C streams).
- * Make isoch requests stall on the TLB update. May cause
- * display underruns (test mode only)
- */
-#define MI_ARB_ISOCH_WAIT_GTT (1 << 14)
-
-/* Block grant count for isoch requests when block count is
- * set to a finite value.
- */
-#define MI_ARB_BLOCK_GRANT_MASK (3 << 12)
-#define MI_ARB_BLOCK_GRANT_8 (0 << 12) /* for 3 display planes */
-#define MI_ARB_BLOCK_GRANT_4 (1 << 12) /* for 2 display planes */
-#define MI_ARB_BLOCK_GRANT_2 (2 << 12) /* for 1 display plane */
-#define MI_ARB_BLOCK_GRANT_0 (3 << 12) /* don't use */
-
-/* Enable render writes to complete in C2/C3/C4 power states.
- * If this isn't enabled, render writes are prevented in low
- * power states. That seems bad to me.
- */
-#define MI_ARB_C3_LP_WRITE_ENABLE (1 << 11)
-
-/* This acknowledges an async flip immediately instead
- * of waiting for 2TLB fetches.
- */
-#define MI_ARB_ASYNC_FLIP_ACK_IMMEDIATE (1 << 10)
-
-/* Enables non-sequential data reads through arbiter
- */
-#define MI_ARB_DUAL_DATA_PHASE_DISABLE (1 << 9)
-
-/* Disable FSB snooping of cacheable write cycles from binner/render
- * command stream
- */
-#define MI_ARB_CACHE_SNOOP_DISABLE (1 << 8)
-
-/* Arbiter time slice for non-isoch streams */
-#define MI_ARB_TIME_SLICE_MASK (7 << 5)
-#define MI_ARB_TIME_SLICE_1 (0 << 5)
-#define MI_ARB_TIME_SLICE_2 (1 << 5)
-#define MI_ARB_TIME_SLICE_4 (2 << 5)
-#define MI_ARB_TIME_SLICE_6 (3 << 5)
-#define MI_ARB_TIME_SLICE_8 (4 << 5)
-#define MI_ARB_TIME_SLICE_10 (5 << 5)
-#define MI_ARB_TIME_SLICE_14 (6 << 5)
-#define MI_ARB_TIME_SLICE_16 (7 << 5)
-
-/* Low priority grace period page size */
-#define MI_ARB_LOW_PRIORITY_GRACE_4KB (0 << 4) /* default */
-#define MI_ARB_LOW_PRIORITY_GRACE_8KB (1 << 4)
-
-/* Disable display A/B trickle feed */
-#define MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE (1 << 2)
-
-/* Set display plane priority */
-#define MI_ARB_DISPLAY_PRIORITY_A_B (0 << 0) /* display A > display B */
-#define MI_ARB_DISPLAY_PRIORITY_B_A (1 << 0) /* display B > display A */
-
#define CACHE_MODE_0 0x02120 /* 915+ only */
#define CM0_MASK_SHIFT 16
#define CM0_IZ_OPT_DISABLE (1<<6)
GPIOF,
};
+ /* Set sensible defaults in case we can't find the general block
+ or it is the wrong chipset */
+ dev_priv->crt_ddc_bus = -1;
+
general = find_section(bdb, BDB_GENERAL_DEFINITIONS);
if (general) {
u16 block_size = get_blocksize(general);
dev_priv->render_reclock_avail = true;
}
-static void
-parse_device_mapping(struct drm_i915_private *dev_priv,
- struct bdb_header *bdb)
-{
- struct bdb_general_definitions *p_defs;
- struct child_device_config *p_child, *child_dev_ptr;
- int i, child_device_num, count;
- u16 block_size;
-
- p_defs = find_section(bdb, BDB_GENERAL_DEFINITIONS);
- if (!p_defs) {
- DRM_DEBUG_KMS("No general definition block is found\n");
- return;
- }
- /* judge whether the size of child device meets the requirements.
- * If the child device size obtained from general definition block
- * is different with sizeof(struct child_device_config), skip the
- * parsing of sdvo device info
- */
- if (p_defs->child_dev_size != sizeof(*p_child)) {
- /* different child dev size . Ignore it */
- DRM_DEBUG_KMS("different child size is found. Invalid.\n");
- return;
- }
- /* get the block size of general definitions */
- block_size = get_blocksize(p_defs);
- /* get the number of child device */
- child_device_num = (block_size - sizeof(*p_defs)) /
- sizeof(*p_child);
- count = 0;
- /* get the number of child device that is present */
- for (i = 0; i < child_device_num; i++) {
- p_child = &(p_defs->devices[i]);
- if (!p_child->device_type) {
- /* skip the device block if device type is invalid */
- continue;
- }
- count++;
- }
- if (!count) {
- DRM_DEBUG_KMS("no child dev is parsed from VBT \n");
- return;
- }
- dev_priv->child_dev = kzalloc(sizeof(*p_child) * count, GFP_KERNEL);
- if (!dev_priv->child_dev) {
- DRM_DEBUG_KMS("No memory space for child device\n");
- return;
- }
-
- dev_priv->child_dev_num = count;
- count = 0;
- for (i = 0; i < child_device_num; i++) {
- p_child = &(p_defs->devices[i]);
- if (!p_child->device_type) {
- /* skip the device block if device type is invalid */
- continue;
- }
- child_dev_ptr = dev_priv->child_dev + count;
- count++;
- memcpy((void *)child_dev_ptr, (void *)p_child,
- sizeof(*p_child));
- }
- return;
-}
/**
* intel_init_bios - initialize VBIOS settings & find VBT
* @dev: DRM device
parse_lfp_panel_data(dev_priv, bdb);
parse_sdvo_panel_data(dev_priv, bdb);
parse_sdvo_device_mapping(dev_priv, bdb);
- parse_device_mapping(dev_priv, bdb);
parse_driver_features(dev_priv, bdb);
pci_unmap_rom(pdev, bios);
#define SWF14_APM_STANDBY 0x1
#define SWF14_APM_RESTORE 0x0
-/* Add the device class for LFP, TV, HDMI */
-#define DEVICE_TYPE_INT_LFP 0x1022
-#define DEVICE_TYPE_INT_TV 0x1009
-#define DEVICE_TYPE_HDMI 0x60D2
-#define DEVICE_TYPE_DP 0x68C6
-#define DEVICE_TYPE_eDP 0x78C6
-
-/* define the DVO port for HDMI output type */
-#define DVO_B 1
-#define DVO_C 2
-#define DVO_D 3
-
-/* define the PORT for DP output type */
-#define PORT_IDPB 7
-#define PORT_IDPC 8
-#define PORT_IDPD 9
-
#endif /* _I830_BIOS_H_ */
else {
i2c_reg = GPIOA;
/* Use VBT information for CRT DDC if available */
- if (dev_priv->crt_ddc_bus != 0)
+ if (dev_priv->crt_ddc_bus != -1)
i2c_reg = dev_priv->crt_ddc_bus;
}
intel_output->ddc_bus = intel_i2c_create(dev, i2c_reg, "CRTDDC_A");
intel_clock_t clock;
int max_n;
bool found;
- /* approximately equals target * 0.00585 */
- int err_most = (target >> 8) + (target >> 9);
+ /* approximately equals target * 0.00488 */
+ int err_most = (target >> 8) + (target >> 10);
found = false;
if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
dpa_ctl = I915_READ(DP_A);
dpa_ctl |= DP_PLL_ENABLE;
I915_WRITE(DP_A, dpa_ctl);
- POSTING_READ(DP_A);
udelay(200);
}
int pipe = intel_crtc->pipe;
bool enabled;
- if (intel_crtc->dpms_mode == mode)
- return;
-
dev_priv->display.dpms(crtc, mode);
intel_crtc->dpms_mode = mode;
}
intel_crtc->cursor_addr = 0;
- intel_crtc->dpms_mode = -1;
+ intel_crtc->dpms_mode = DRM_MODE_DPMS_OFF;
drm_crtc_helper_add(&intel_crtc->base, &intel_helper_funcs);
intel_crtc->busy = false;
}
/* Returns the core display clock speed */
- if (IS_I945G(dev) || (IS_G33(dev) && ! IS_IGDGM(dev)))
+ if (IS_I945G(dev))
dev_priv->display.get_display_clock_speed =
i945_get_display_clock_speed;
else if (IS_I915G(dev))
DMI_MATCH(DMI_PRODUCT_NAME, "PC-81005"),
},
},
- {
- .ident = "Clevo M5x0N",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "CLEVO Co."),
- DMI_MATCH(DMI_BOARD_NAME, "M5x0N"),
- },
- },
{ }
};
*/
static enum drm_connector_status intel_lvds_detect(struct drm_connector *connector)
{
- struct drm_device *dev = connector->dev;
enum drm_connector_status status = connector_status_connected;
- if (IS_I8XX(dev))
- return connector_status_connected;
-
if (!acpi_lid_open() && !dmi_check_system(bad_lid_status))
status = connector_status_disconnected;
DMI_MATCH(DMI_PRODUCT_VERSION, "AO00001JW"),
},
},
- {
- .callback = intel_no_lvds_dmi_callback,
- .ident = "Clientron U800",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Clientron"),
- DMI_MATCH(DMI_PRODUCT_NAME, "U800"),
- },
- },
{ } /* terminating entry */
};
+#ifdef CONFIG_ACPI
/*
- * Enumerate the child dev array parsed from VBT to check whether
- * the LVDS is present.
- * If it is present, return 1.
- * If it is not present, return false.
- * If no child dev is parsed from VBT, it assumes that the LVDS is present.
- * Note: The addin_offset should also be checked for LVDS panel.
- * Only when it is non-zero, it is assumed that it is present.
+ * check_lid_device -- check whether @handle is an ACPI LID device.
+ * @handle: ACPI device handle
+ * @level : depth in the ACPI namespace tree
+ * @context: the number of LID device when we find the device
+ * @rv: a return value to fill if desired (Not use)
*/
-static int lvds_is_present_in_vbt(struct drm_device *dev)
+static acpi_status
+check_lid_device(acpi_handle handle, u32 level, void *context,
+ void **return_value)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct child_device_config *p_child;
- int i, ret;
+ struct acpi_device *acpi_dev;
+ int *lid_present = context;
+
+ acpi_dev = NULL;
+ /* Get the acpi device for device handle */
+ if (acpi_bus_get_device(handle, &acpi_dev) || !acpi_dev) {
+ /* If there is no ACPI device for handle, return */
+ return AE_OK;
+ }
- if (!dev_priv->child_dev_num)
- return 1;
+ if (!strncmp(acpi_device_hid(acpi_dev), "PNP0C0D", 7))
+ *lid_present = 1;
- ret = 0;
- for (i = 0; i < dev_priv->child_dev_num; i++) {
- p_child = dev_priv->child_dev + i;
- /*
- * If the device type is not LFP, continue.
- * If the device type is 0x22, it is also regarded as LFP.
- */
- if (p_child->device_type != DEVICE_TYPE_INT_LFP &&
- p_child->device_type != DEVICE_TYPE_LFP)
- continue;
+ return AE_OK;
+}
+
+/**
+ * check whether there exists the ACPI LID device by enumerating the ACPI
+ * device tree.
+ */
+static int intel_lid_present(void)
+{
+ int lid_present = 0;
- /* The addin_offset should be checked. Only when it is
- * non-zero, it is regarded as present.
+ if (acpi_disabled) {
+ /* If ACPI is disabled, there is no ACPI device tree to
+ * check, so assume the LID device would have been present.
*/
- if (p_child->addin_offset) {
- ret = 1;
- break;
- }
+ return 1;
}
- return ret;
+
+ acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX,
+ check_lid_device, &lid_present, NULL);
+
+ return lid_present;
}
+#else
+static int intel_lid_present(void)
+{
+ /* In the absence of ACPI built in, assume that the LID device would
+ * have been present.
+ */
+ return 1;
+}
+#endif
/**
* intel_lvds_init - setup LVDS connectors on this device
if (dmi_check_system(intel_no_lvds))
return;
- if (!lvds_is_present_in_vbt(dev)) {
- DRM_DEBUG_KMS("LVDS is not present in VBT\n");
+ /* Assume that any device without an ACPI LID device also doesn't
+ * have an integrated LVDS. We would be better off parsing the BIOS
+ * to get a reliable indicator, but that code isn't written yet.
+ *
+ * In the case of all-in-one desktops using LVDS that we've seen,
+ * they're using SDVO LVDS.
+ */
+ if (!intel_lid_present())
return;
- }
if (IS_IGDNG(dev)) {
if ((I915_READ(PCH_LVDS) & LVDS_DETECTED) == 0)
#include "i915_drm.h"
#include "i915_drv.h"
#include "intel_sdvo_regs.h"
-#include <linux/dmi.h>
#undef SDVO_DEBUG
return 0x72;
}
-static int intel_sdvo_bad_tv_callback(const struct dmi_system_id *id)
-{
- DRM_DEBUG_KMS("Ignoring bad SDVO TV connector for %s\n", id->ident);
- return 1;
-}
-
-static struct dmi_system_id intel_sdvo_bad_tv[] = {
- {
- .callback = intel_sdvo_bad_tv_callback,
- .ident = "IntelG45/ICH10R/DME1737",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "IBM CORPORATION"),
- DMI_MATCH(DMI_PRODUCT_NAME, "4800784"),
- },
- },
-
- { } /* terminating entry */
-};
-
static bool
intel_sdvo_output_setup(struct intel_output *intel_output, uint16_t flags)
{
(1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
(1 << INTEL_ANALOG_CLONE_BIT);
}
- } else if ((flags & SDVO_OUTPUT_SVID0) &&
- !dmi_check_system(intel_sdvo_bad_tv)) {
+ } else if (flags & SDVO_OUTPUT_SVID0) {
sdvo_priv->controlled_output = SDVO_OUTPUT_SVID0;
encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
drm_connector_attach_property(connector,
dev->mode_config.tv_bottom_margin_property,
tv_priv->margin[TV_MARGIN_BOTTOM]);
+
+ dev_priv->hotplug_supported_mask |= TV_HOTPLUG_INT_STATUS;
out:
drm_sysfs_connector_add(connector);
}
/* 2D, 3D, CUBE */
switch (tmp) {
case 0:
- case 3:
- case 4:
case 5:
case 6:
case 7:
r100_hdp_reset(rdev);
/* FIXME: rv380 one pipes ? */
- if ((rdev->family == CHIP_R300 && rdev->pdev->device != 0x4144) ||
- (rdev->family == CHIP_R350)) {
+ if ((rdev->family == CHIP_R300) || (rdev->family == CHIP_R350)) {
/* r300,r350 */
rdev->num_gb_pipes = 2;
} else {
- /* rv350,rv370,rv380,r300 AD */
+ /* rv350,rv370,rv380 */
rdev->num_gb_pipes = 1;
}
rdev->num_z_pipes = 1;
if (rdev->accel_working) {
r = radeon_ib_pool_init(rdev);
if (r) {
- dev_err(rdev->dev, "IB initialization failed (%d).\n", r);
+ DRM_ERROR("radeon: failled initializing IB pool (%d).\n", r);
+ rdev->accel_working = false;
+ }
+ r = r600_ib_test(rdev);
+ if (r) {
+ DRM_ERROR("radeon: failled testing IB (%d).\n", r);
rdev->accel_working = false;
- } else {
- r = r600_ib_test(rdev);
- if (r) {
- dev_err(rdev->dev, "IB test failed (%d).\n", r);
- rdev->accel_working = false;
- }
}
}
return 0;
typedef int (*next_reloc_t)(struct radeon_cs_parser*, struct radeon_cs_reloc**);
static next_reloc_t r600_cs_packet_next_reloc = &r600_cs_packet_next_reloc_mm;
-struct r600_cs_track {
- u32 cb_color0_base_last;
-};
-
/**
* r600_cs_packet_parse() - parse cp packet and point ib index to next packet
* @parser: parser structure holding parsing context.
return 0;
}
-/**
- * r600_cs_packet_next_is_pkt3_nop() - test if next packet is packet3 nop for reloc
- * @parser: parser structure holding parsing context.
- *
- * Check next packet is relocation packet3, do bo validation and compute
- * GPU offset using the provided start.
- **/
-static inline int r600_cs_packet_next_is_pkt3_nop(struct radeon_cs_parser *p)
-{
- struct radeon_cs_packet p3reloc;
- int r;
-
- r = r600_cs_packet_parse(p, &p3reloc, p->idx);
- if (r) {
- return 0;
- }
- if (p3reloc.type != PACKET_TYPE3 || p3reloc.opcode != PACKET3_NOP) {
- return 0;
- }
- return 1;
-}
-
/**
* r600_cs_packet_next_vline() - parse userspace VLINE packet
* @parser: parser structure holding parsing context.
struct radeon_cs_packet *pkt)
{
struct radeon_cs_reloc *reloc;
- struct r600_cs_track *track;
volatile u32 *ib;
unsigned idx;
unsigned i;
int r;
u32 idx_value;
- track = (struct r600_cs_track *)p->track;
ib = p->ib->ptr;
idx = pkt->idx + 1;
idx_value = radeon_get_ib_value(p, idx);
for (i = 0; i < pkt->count; i++) {
reg = start_reg + (4 * i);
switch (reg) {
- /* This register were added late, there is userspace
- * which does provide relocation for those but set
- * 0 offset. In order to avoid breaking old userspace
- * we detect this and set address to point to last
- * CB_COLOR0_BASE, note that if userspace doesn't set
- * CB_COLOR0_BASE before this register we will report
- * error. Old userspace always set CB_COLOR0_BASE
- * before any of this.
- */
- case R_0280E0_CB_COLOR0_FRAG:
- case R_0280E4_CB_COLOR1_FRAG:
- case R_0280E8_CB_COLOR2_FRAG:
- case R_0280EC_CB_COLOR3_FRAG:
- case R_0280F0_CB_COLOR4_FRAG:
- case R_0280F4_CB_COLOR5_FRAG:
- case R_0280F8_CB_COLOR6_FRAG:
- case R_0280FC_CB_COLOR7_FRAG:
- case R_0280C0_CB_COLOR0_TILE:
- case R_0280C4_CB_COLOR1_TILE:
- case R_0280C8_CB_COLOR2_TILE:
- case R_0280CC_CB_COLOR3_TILE:
- case R_0280D0_CB_COLOR4_TILE:
- case R_0280D4_CB_COLOR5_TILE:
- case R_0280D8_CB_COLOR6_TILE:
- case R_0280DC_CB_COLOR7_TILE:
- if (!r600_cs_packet_next_is_pkt3_nop(p)) {
- if (!track->cb_color0_base_last) {
- dev_err(p->dev, "Broken old userspace ? no cb_color0_base supplied before trying to write 0x%08X\n", reg);
- return -EINVAL;
- }
- ib[idx+1+i] = track->cb_color0_base_last;
- printk_once(KERN_WARNING "You have old & broken userspace "
- "please consider updating mesa & xf86-video-ati\n");
- } else {
- r = r600_cs_packet_next_reloc(p, &reloc);
- if (r) {
- dev_err(p->dev, "bad SET_CONTEXT_REG 0x%04X\n", reg);
- return -EINVAL;
- }
- ib[idx+1+i] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
- }
- break;
case DB_DEPTH_BASE:
case DB_HTILE_DATA_BASE:
case CB_COLOR0_BASE:
- r = r600_cs_packet_next_reloc(p, &reloc);
- if (r) {
- DRM_ERROR("bad SET_CONTEXT_REG "
- "0x%04X\n", reg);
- return -EINVAL;
- }
- ib[idx+1+i] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
- track->cb_color0_base_last = ib[idx+1+i];
- break;
case CB_COLOR1_BASE:
case CB_COLOR2_BASE:
case CB_COLOR3_BASE:
int r600_cs_parse(struct radeon_cs_parser *p)
{
struct radeon_cs_packet pkt;
- struct r600_cs_track *track;
int r;
- track = kzalloc(sizeof(*track), GFP_KERNEL);
- p->track = track;
do {
r = r600_cs_packet_parse(p, &pkt, p->idx);
if (r) {
/* initialize parser */
memset(&parser, 0, sizeof(struct radeon_cs_parser));
parser.filp = filp;
- parser.dev = &dev->pdev->dev;
parser.rdev = NULL;
parser.family = family;
parser.ib = &fake_ib;
#define S_000E60_SOFT_RESET_TSC(x) (((x) & 1) << 16)
#define S_000E60_SOFT_RESET_VMC(x) (((x) & 1) << 17)
-#define R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL 0x5480
-
-#define R_0280E0_CB_COLOR0_FRAG 0x0280E0
-#define S_0280E0_BASE_256B(x) (((x) & 0xFFFFFFFF) << 0)
-#define G_0280E0_BASE_256B(x) (((x) >> 0) & 0xFFFFFFFF)
-#define C_0280E0_BASE_256B 0x00000000
-#define R_0280E4_CB_COLOR1_FRAG 0x0280E4
-#define R_0280E8_CB_COLOR2_FRAG 0x0280E8
-#define R_0280EC_CB_COLOR3_FRAG 0x0280EC
-#define R_0280F0_CB_COLOR4_FRAG 0x0280F0
-#define R_0280F4_CB_COLOR5_FRAG 0x0280F4
-#define R_0280F8_CB_COLOR6_FRAG 0x0280F8
-#define R_0280FC_CB_COLOR7_FRAG 0x0280FC
-#define R_0280C0_CB_COLOR0_TILE 0x0280C0
-#define S_0280C0_BASE_256B(x) (((x) & 0xFFFFFFFF) << 0)
-#define G_0280C0_BASE_256B(x) (((x) >> 0) & 0xFFFFFFFF)
-#define C_0280C0_BASE_256B 0x00000000
-#define R_0280C4_CB_COLOR1_TILE 0x0280C4
-#define R_0280C8_CB_COLOR2_TILE 0x0280C8
-#define R_0280CC_CB_COLOR3_TILE 0x0280CC
-#define R_0280D0_CB_COLOR4_TILE 0x0280D0
-#define R_0280D4_CB_COLOR5_TILE 0x0280D4
-#define R_0280D8_CB_COLOR6_TILE 0x0280D8
-#define R_0280DC_CB_COLOR7_TILE 0x0280DC
-
-
#endif
};
struct radeon_cs_parser {
- struct device *dev;
struct radeon_device *rdev;
struct drm_file *filp;
/* chunks */
}
}
- /* ASUS HD 3600 board lists the DVI port as HDMI */
- if ((dev->pdev->device == 0x9598) &&
- (dev->pdev->subsystem_vendor == 0x1043) &&
- (dev->pdev->subsystem_device == 0x01e4)) {
- if (*connector_type == DRM_MODE_CONNECTOR_HDMIA) {
- *connector_type = DRM_MODE_CONNECTOR_DVII;
- }
- }
-
/* ASUS HD 3450 board lists the DVI port as HDMI */
if ((dev->pdev->device == 0x95C5) &&
(dev->pdev->subsystem_vendor == 0x1043) &&
lvds->native_mode.vtotal = lvds->native_mode.vdisplay +
le16_to_cpu(lvds_info->info.sLCDTiming.usVBlanking_Time);
lvds->native_mode.vsync_start = lvds->native_mode.vdisplay +
- le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncOffset);
+ le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncWidth);
lvds->native_mode.vsync_end = lvds->native_mode.vsync_start +
le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncWidth);
lvds->panel_pwr_delay =
{
struct drm_device *dev = connector->dev;
struct drm_connector *conflict;
- struct radeon_connector *radeon_conflict;
int i;
list_for_each_entry(conflict, &dev->mode_config.connector_list, head) {
if (conflict == connector)
continue;
- radeon_conflict = to_radeon_connector(conflict);
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
if (conflict->encoder_ids[i] == 0)
break;
if (conflict->status != connector_status_connected)
continue;
- if (radeon_conflict->use_digital)
- continue;
-
if (priority == true) {
DRM_INFO("1: conflicting encoders switching off %s\n", drm_get_connector_name(conflict));
DRM_INFO("in favor of %s\n", drm_get_connector_name(connector));
radeon_encoder = to_radeon_encoder(encoder);
if (!radeon_encoder->enc_priv)
return 0;
- if (ASIC_IS_AVIVO(rdev) || radeon_r4xx_atom) {
+ if (rdev->is_atom_bios) {
struct radeon_encoder_atom_dac *dac_int;
dac_int = radeon_encoder->enc_priv;
dac_int->tv_std = val;
return -EBUSY;
}
-static void radeon_init_pipes(struct drm_device *dev)
+static void radeon_init_pipes(drm_radeon_private_t *dev_priv)
{
- drm_radeon_private_t *dev_priv = dev->dev_private;
uint32_t gb_tile_config, gb_pipe_sel = 0;
if ((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_RV530) {
dev_priv->num_gb_pipes = ((gb_pipe_sel >> 12) & 0x3) + 1;
} else {
/* R3xx */
- if (((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_R300 &&
- dev->pdev->device != 0x4144) ||
+ if (((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_R300) ||
((dev_priv->flags & RADEON_FAMILY_MASK) == CHIP_R350)) {
dev_priv->num_gb_pipes = 2;
} else {
- /* RV3xx/R300 AD */
+ /* R3Vxx */
dev_priv->num_gb_pipes = 1;
}
}
/* setup the raster pipes */
if ((dev_priv->flags & RADEON_FAMILY_MASK) >= CHIP_R300)
- radeon_init_pipes(dev);
+ radeon_init_pipes(dev_priv);
/* Reset the CP ring */
radeon_do_cp_reset(dev_priv);
radeon_cp_load_microcode(dev_priv);
radeon_cp_init_ring_buffer(dev, dev_priv, file_priv);
- dev_priv->have_z_offset = 0;
radeon_do_engine_reset(dev);
radeon_irq_set_state(dev, RADEON_SW_INT_ENABLE, 1);
memset(&parser, 0, sizeof(struct radeon_cs_parser));
parser.filp = filp;
parser.rdev = rdev;
- parser.dev = rdev->dev;
r = radeon_cs_parser_init(&parser, data);
if (r) {
DRM_ERROR("Failed to initialize parser !\n");
}
r = radeon_cs_parser_relocs(&parser);
if (r) {
- if (r != -ERESTARTSYS)
- DRM_ERROR("Failed to parse relocation %d!\n", r);
+ DRM_ERROR("Failed to parse relocation !\n");
radeon_cs_parser_fini(&parser, r);
mutex_unlock(&rdev->cs_mutex);
return r;
struct drm_gem_object *obj;
obj = drm_gem_object_lookup(dev, file_priv, mode_cmd->handle);
- if (obj == NULL) {
- dev_err(&dev->pdev->dev, "No GEM object associated to handle 0x%08X, "
- "can't create framebuffer\n", mode_cmd->handle);
- return NULL;
- }
+
return radeon_framebuffer_create(dev, mode_cmd, obj);
}
u32 scratch_ages[5];
- int have_z_offset;
-
/* starting from here on, data is preserved accross an open */
uint32_t flags; /* see radeon_chip_flags */
resource_size_t fb_aper_offset;
case ENCODER_OBJECT_ID_INTERNAL_DAC2:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
atombios_dac_setup(encoder, ATOM_ENABLE);
- if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) {
- if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT))
- atombios_tv_setup(encoder, ATOM_ENABLE);
- else
- atombios_tv_setup(encoder, ATOM_DISABLE);
- }
+ if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT))
+ atombios_tv_setup(encoder, ATOM_ENABLE);
break;
}
atombios_apply_encoder_quirks(encoder, adjusted_mode);
if (!ref_div)
return 1;
- vcoFreq = ((unsigned)ref_freq * fb_div) / ref_div;
+ vcoFreq = ((unsigned)ref_freq & fb_div) / ref_div;
/*
* This is horribly crude: the VCO frequency range is divided into
udelay(panel_pwr_delay * 1000);
WREG32(RADEON_LVDS_GEN_CNTL, lvds_gen_cntl);
WREG32_PLL(RADEON_PIXCLKS_CNTL, pixclks_cntl);
- udelay(panel_pwr_delay * 1000);
break;
}
#define NTSC_TV_PLL_N_14 693
#define NTSC_TV_PLL_P_14 7
-#define PAL_TV_PLL_M_14 19
-#define PAL_TV_PLL_N_14 353
-#define PAL_TV_PLL_P_14 5
-
#define VERT_LEAD_IN_LINES 2
#define FRAC_BITS 0xe
#define FRAC_MASK 0x3fff
630627, /* defRestart */
347, /* crtcPLL_N */
14, /* crtcPLL_M */
- 8, /* crtcPLL_postDiv */
+ 8, /* crtcPLL_postDiv */
1022, /* pixToTV */
},
- { /* PAL timing for 14 Mhz ref clk */
- 800, /* horResolution */
- 600, /* verResolution */
- TV_STD_PAL, /* standard */
- 1131, /* horTotal */
- 742, /* verTotal */
- 813, /* horStart */
- 840, /* horSyncStart */
- 633, /* verSyncStart */
- 708369, /* defRestart */
- 211, /* crtcPLL_N */
- 9, /* crtcPLL_M */
- 8, /* crtcPLL_postDiv */
- 759, /* pixToTV */
- },
};
#define N_AVAILABLE_MODES ARRAY_SIZE(available_tv_modes)
if (pll->reference_freq == 2700)
const_ptr = &available_tv_modes[1];
else
- const_ptr = &available_tv_modes[3];
+ const_ptr = &available_tv_modes[1]; /* FIX ME */
}
return const_ptr;
}
n = PAL_TV_PLL_N_27;
p = PAL_TV_PLL_P_27;
} else {
- m = PAL_TV_PLL_M_14;
- n = PAL_TV_PLL_N_14;
- p = PAL_TV_PLL_P_14;
+ m = PAL_TV_PLL_M_27;
+ n = PAL_TV_PLL_N_27;
+ p = PAL_TV_PLL_P_27;
}
}
DRM_ERROR("Invalid depth buffer offset\n");
return -EINVAL;
}
- dev_priv->have_z_offset = 1;
break;
case RADEON_EMIT_PP_CNTL:
if (tmp & RADEON_BACK)
flags |= RADEON_FRONT;
}
- if (flags & (RADEON_DEPTH|RADEON_STENCIL)) {
- if (!dev_priv->have_z_offset) {
- printk_once(KERN_ERR "radeon: illegal depth clear request. Buggy mesa detected - please update.\n");
- flags &= ~(RADEON_DEPTH | RADEON_STENCIL);
- }
- }
if (flags & (RADEON_FRONT | RADEON_BACK)) {
WREG32_MC(R_000100_MC_PT0_CNTL, tmp);
tmp = RREG32_MC(R_000100_MC_PT0_CNTL);
- tmp |= S_000100_INVALIDATE_ALL_L1_TLBS(1) | S_000100_INVALIDATE_L2_CACHE(1);
+ tmp |= S_000100_INVALIDATE_ALL_L1_TLBS(1) & S_000100_INVALIDATE_L2_CACHE(1);
WREG32_MC(R_000100_MC_PT0_CNTL, tmp);
tmp = RREG32_MC(R_000100_MC_PT0_CNTL);
if (rdev->accel_working) {
r = radeon_ib_pool_init(rdev);
if (r) {
- dev_err(rdev->dev, "IB initialization failed (%d).\n", r);
+ DRM_ERROR("radeon: failled initializing IB pool (%d).\n", r);
+ rdev->accel_working = false;
+ }
+ r = r600_ib_test(rdev);
+ if (r) {
+ DRM_ERROR("radeon: failled testing IB (%d).\n", r);
rdev->accel_working = false;
- } else {
- r = r600_ib_test(rdev);
- if (r) {
- dev_err(rdev->dev, "IB test failed (%d).\n", r);
- rdev->accel_working = false;
- }
}
}
return 0;
INIT_LIST_HEAD(&fbo->lru);
INIT_LIST_HEAD(&fbo->swap);
fbo->vm_node = NULL;
- atomic_set(&fbo->cpu_writers, 0);
fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
if (fbo->mem.mm_node)
void *from_virtual;
void *to_virtual;
int i;
- int ret = -ENOMEM;
+ int ret;
if (ttm->page_flags & TTM_PAGE_FLAG_USER) {
ret = ttm_tt_set_user(ttm, ttm->tsk, ttm->start,
for (i = 0; i < ttm->num_pages; ++i) {
from_page = read_mapping_page(swap_space, i, NULL);
- if (IS_ERR(from_page)) {
- ret = PTR_ERR(from_page);
+ if (IS_ERR(from_page))
goto out_err;
- }
to_page = __ttm_tt_get_page(ttm, i);
if (unlikely(to_page == NULL))
goto out_err;
return 0;
out_err:
ttm_tt_free_alloced_pages(ttm);
- return ret;
+ return -ENOMEM;
}
int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistant_swap_storage)
void *from_virtual;
void *to_virtual;
int i;
- int ret = -ENOMEM;
BUG_ON(ttm->state != tt_unbound && ttm->state != tt_unpopulated);
BUG_ON(ttm->caching_state != tt_cached);
0);
if (unlikely(IS_ERR(swap_storage))) {
printk(KERN_ERR "Failed allocating swap storage.\n");
- return PTR_ERR(swap_storage);
+ return -ENOMEM;
}
} else
swap_storage = persistant_swap_storage;
if (unlikely(from_page == NULL))
continue;
to_page = read_mapping_page(swap_space, i, NULL);
- if (unlikely(IS_ERR(to_page))) {
- ret = PTR_ERR(to_page);
+ if (unlikely(to_page == NULL))
goto out_err;
- }
+
preempt_disable();
from_virtual = kmap_atomic(from_page, KM_USER0);
to_virtual = kmap_atomic(to_page, KM_USER1);
if (!persistant_swap_storage)
fput(swap_storage);
- return ret;
+ return -ENOMEM;
}
}
} else if (strncmp(curr_pos, "target ", 7) == 0) {
- struct pci_bus *pbus;
unsigned int domain, bus, devfn;
struct vga_device *vgadev;
remaining -= 7;
pr_devel("client 0x%p called 'target'\n", priv);
/* if target is default */
- if (!strncmp(curr_pos, "default", 7))
+ if (!strncmp(buf, "default", 7))
pdev = pci_dev_get(vga_default_device());
else {
if (!vga_pci_str_to_vars(curr_pos, remaining,
ret_val = -EPROTO;
goto done;
}
- pr_devel("vgaarb: %s ==> %x:%x:%x.%x\n", curr_pos,
- domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-
- pbus = pci_find_bus(domain, bus);
- pr_devel("vgaarb: pbus %p\n", pbus);
- if (pbus == NULL) {
- pr_err("vgaarb: invalid PCI domain and/or bus address %x:%x\n",
- domain, bus);
- ret_val = -ENODEV;
- goto done;
- }
- pdev = pci_get_slot(pbus, devfn);
- pr_devel("vgaarb: pdev %p\n", pdev);
+
+ pdev = pci_get_bus_and_slot(bus, devfn);
if (!pdev) {
- pr_err("vgaarb: invalid PCI address %x:%x\n",
- bus, devfn);
+ pr_info("vgaarb: invalid PCI address!\n");
ret_val = -ENODEV;
goto done;
}
}
vgadev = vgadev_find(pdev);
- pr_devel("vgaarb: vgadev %p\n", vgadev);
if (vgadev == NULL) {
- pr_err("vgaarb: this pci device is not a vga device\n");
+ pr_info("vgaarb: this pci device is not a vga device\n");
pci_dev_put(pdev);
ret_val = -ENODEV;
goto done;
}
}
if (i == MAX_USER_CARDS) {
- pr_err("vgaarb: maximum user cards (%d) number reached!\n",
- MAX_USER_CARDS);
+ pr_err("vgaarb: maximum user cards number reached!\n");
pci_dev_put(pdev);
/* XXX: which value to return? */
ret_val = -ENOMEM;
{ HID_USB_DEVICE(USB_VENDOR_ID_GREENASIA, 0x0012) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE_2) },
- { HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE_3) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KENSINGTON, USB_DEVICE_ID_KS_SLIMBLADE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) },
{ HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY1) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) },
{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) },
static int gyration_event(struct hid_device *hdev, struct hid_field *field,
struct hid_usage *usage, __s32 value)
{
-
- if (!(hdev->claimed & HID_CLAIMED_INPUT) || !field->hidinput)
- return 0;
+ struct input_dev *input = field->hidinput->input;
if ((usage->hid & HID_USAGE_PAGE) == HID_UP_GENDESK &&
(usage->hid & 0xff) == 0x82) {
- struct input_dev *input = field->hidinput->input;
input_event(input, usage->type, usage->code, 1);
input_sync(input);
input_event(input, usage->type, usage->code, 0);
static const struct hid_device_id gyration_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE_2) },
- { HID_USB_DEVICE(USB_VENDOR_ID_GYRATION, USB_DEVICE_ID_GYRATION_REMOTE_3) },
{ }
};
MODULE_DEVICE_TABLE(hid, gyration_devices);
#define USB_VENDOR_ID_GYRATION 0x0c16
#define USB_DEVICE_ID_GYRATION_REMOTE 0x0002
#define USB_DEVICE_ID_GYRATION_REMOTE_2 0x0003
-#define USB_DEVICE_ID_GYRATION_REMOTE_3 0x0008
#define USB_VENDOR_ID_HAPP 0x078b
#define USB_DEVICE_ID_UGCI_DRIVING 0x0010
#define USB_VENDOR_ID_NEC 0x073e
#define USB_DEVICE_ID_NEC_USB_GAME_PAD 0x0301
-#define USB_VENDOR_ID_NEXTWINDOW 0x1926
-#define USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN 0x0003
-
#define USB_VENDOR_ID_NTRIG 0x1b96
#define USB_DEVICE_ID_NTRIG_TOUCH_SCREEN 0x0001
#define USB_VENDOR_ID_SUNPLUS 0x04fc
#define USB_DEVICE_ID_SUNPLUS_WDESKTOP 0x05d8
+#define USB_VENDOR_ID_TENX 0x1130
+#define USB_DEVICE_ID_TENX_IBUDDY1 0x0001
+#define USB_DEVICE_ID_TENX_IBUDDY2 0x0002
+
#define USB_VENDOR_ID_THRUSTMASTER 0x044f
#define USB_VENDOR_ID_TOPMAX 0x0663
static ssize_t hidraw_write(struct file *file, const char __user *buffer, size_t count, loff_t *ppos)
{
unsigned int minor = iminor(file->f_path.dentry->d_inode);
- struct hid_device *dev;
+ /* FIXME: What stops hidraw_table going NULL */
+ struct hid_device *dev = hidraw_table[minor]->hid;
__u8 *buf;
int ret = 0;
- if (!hidraw_table[minor])
- return -ENODEV;
-
- dev = hidraw_table[minor]->hid;
-
if (!dev->hid_output_raw_report)
return -ENODEV;
struct inode *inode = file->f_path.dentry->d_inode;
unsigned int minor = iminor(inode);
long ret = 0;
- struct hidraw *dev;
+ /* FIXME: What stops hidraw_table going NULL */
+ struct hidraw *dev = hidraw_table[minor];
void __user *user_arg = (void __user*) arg;
lock_kernel();
- dev = hidraw_table[minor];
- if (!dev) {
- ret = -ENODEV;
- goto out;
- }
-
switch (cmd) {
case HIDIOCGRDESCSIZE:
if (put_user(dev->hid->rsize, (int __user *)arg))
ret = -ENOTTY;
}
-out:
unlock_kernel();
return ret;
}
err_hid("usb_submit_urb(out) failed");
return -1;
}
- usbhid->last_out = jiffies;
} else {
/*
* queue work to wake up the device.
err_hid("usb_submit_urb(ctrl) failed");
return -1;
}
- usbhid->last_ctrl = jiffies;
} else {
/*
* queue work to wake up the device.
usbhid->out[usbhid->outhead].report = report;
usbhid->outhead = head;
- if (!test_and_set_bit(HID_OUT_RUNNING, &usbhid->iofl)) {
+ if (!test_and_set_bit(HID_OUT_RUNNING, &usbhid->iofl))
if (hid_submit_out(hid))
clear_bit(HID_OUT_RUNNING, &usbhid->iofl);
- } else {
- /*
- * the queue is known to run
- * but an earlier request may be stuck
- * we may need to time out
- * no race because this is called under
- * spinlock
- */
- if (time_after(jiffies, usbhid->last_out + HZ * 5))
- usb_unlink_urb(usbhid->urbout);
- }
return;
}
usbhid->ctrl[usbhid->ctrlhead].dir = dir;
usbhid->ctrlhead = head;
- if (!test_and_set_bit(HID_CTRL_RUNNING, &usbhid->iofl)) {
+ if (!test_and_set_bit(HID_CTRL_RUNNING, &usbhid->iofl))
if (hid_submit_ctrl(hid))
clear_bit(HID_CTRL_RUNNING, &usbhid->iofl);
- } else {
- /*
- * the queue is known to run
- * but an earlier request may be stuck
- * we may need to time out
- * no race because this is called under
- * spinlock
- */
- if (time_after(jiffies, usbhid->last_ctrl + HZ * 5))
- usb_unlink_urb(usbhid->urbctrl);
- }
}
void usbhid_submit_report(struct hid_device *hid, struct hid_report *report, unsigned char dir)
}
}
+ init_waitqueue_head(&usbhid->wait);
+ INIT_WORK(&usbhid->reset_work, hid_reset);
+ INIT_WORK(&usbhid->restart_work, __usbhid_restart_queues);
+ setup_timer(&usbhid->io_retry, hid_retry_timeout, (unsigned long) hid);
+
+ spin_lock_init(&usbhid->lock);
+
+ usbhid->intf = intf;
+ usbhid->ifnum = interface->desc.bInterfaceNumber;
+
usbhid->urbctrl = usb_alloc_urb(0, GFP_KERNEL);
if (!usbhid->urbctrl) {
ret = -ENOMEM;
hid->driver_data = usbhid;
usbhid->hid = hid;
- usbhid->intf = intf;
- usbhid->ifnum = interface->desc.bInterfaceNumber;
-
- init_waitqueue_head(&usbhid->wait);
- INIT_WORK(&usbhid->reset_work, hid_reset);
- INIT_WORK(&usbhid->restart_work, __usbhid_restart_queues);
- setup_timer(&usbhid->io_retry, hid_retry_timeout, (unsigned long) hid);
- spin_lock_init(&usbhid->lock);
ret = hid_add_device(hid);
if (ret) {
{ USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING, HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_NATSU, USB_DEVICE_ID_NATSU_GAMEPAD, HID_QUIRK_BADPAD },
{ USB_VENDOR_ID_NEC, USB_DEVICE_ID_NEC_USB_GAME_PAD, HID_QUIRK_BADPAD },
- { USB_VENDOR_ID_NEXTWINDOW, USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN, HID_QUIRK_MULTI_INPUT},
{ USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RUMBLEPAD, HID_QUIRK_BADPAD },
{ USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD, HID_QUIRK_BADPAD },
unsigned char ctrlhead, ctrltail; /* Control fifo head & tail */
char *ctrlbuf; /* Control buffer */
dma_addr_t ctrlbuf_dma; /* Control buffer dma */
- unsigned long last_ctrl; /* record of last output for timeouts */
struct urb *urbout; /* Output URB */
struct hid_output_fifo out[HID_CONTROL_FIFO_SIZE]; /* Output pipe fifo */
unsigned char outhead, outtail; /* Output pipe fifo head & tail */
char *outbuf; /* Output buffer */
dma_addr_t outbuf_dma; /* Output buffer dma */
- unsigned long last_out; /* record of last output for timeouts */
spinlock_t lock; /* fifo spinlock */
unsigned long iofl; /* I/O flags (CTRL_RUNNING, OUT_RUNNING) */
return -ENODEV;
}
-void ams_sensor_detach(void)
+void ams_exit(void)
{
/* Remove input device */
ams_input_exit();
/* Remove attributes */
device_remove_file(&ams_info.of_dev->dev, &dev_attr_current);
+ /* Shut down implementation */
+ ams_info.exit();
+
/* Flush interrupt worker
*
* We do this after ams_info.exit(), because an interrupt might
pmf_unregister_irq_client(&ams_freefall_client);
}
-static void __exit ams_exit(void)
-{
- /* Shut down implementation */
- ams_info.exit();
-}
-
MODULE_AUTHOR("Stelian Pop, Michael Hanselmann");
MODULE_DESCRIPTION("Apple Motion Sensor driver");
MODULE_LICENSE("GPL");
static int ams_i2c_remove(struct i2c_client *client)
{
if (ams_info.has_device) {
- ams_sensor_detach();
-
/* Disable interrupts */
ams_i2c_set_irq(AMS_IRQ_ALL, 0);
static void ams_pmu_exit(void)
{
- ams_sensor_detach();
-
/* Disable interrupts */
ams_pmu_set_irq(AMS_IRQ_ALL, 0);
extern void ams_sensors(s8 *x, s8 *y, s8 *z);
extern int ams_sensor_attach(void);
-extern void ams_sensor_detach(void);
extern int ams_pmu_init(struct device_node *np);
extern int ams_i2c_init(struct device_node *np);
struct mutex update_lock;
const char *name;
u32 id;
- u16 core_id;
char valid; /* zero until following fields are valid */
unsigned long last_updated; /* in jiffies */
int temp;
if (attr->index == SHOW_NAME)
ret = sprintf(buf, "%s\n", data->name);
else /* show label */
- ret = sprintf(buf, "Core %d\n", data->core_id);
+ ret = sprintf(buf, "Core %d\n", data->id);
return ret;
}
if (err) {
dev_warn(dev,
"Unable to access MSR 0xEE, for Tjmax, left"
- " at default\n");
+ " at default");
} else if (eax & 0x40000000) {
tjmax = tjmax_ee;
}
}
data->id = pdev->id;
-#ifdef CONFIG_SMP
- data->core_id = c->cpu_core_id;
-#endif
data->name = "coretemp";
mutex_init(&data->update_lock);
struct list_head list;
struct platform_device *pdev;
unsigned int cpu;
-#ifdef CONFIG_SMP
- u16 phys_proc_id;
- u16 cpu_core_id;
-#endif
};
static LIST_HEAD(pdev_list);
int err;
struct platform_device *pdev;
struct pdev_entry *pdev_entry;
-#ifdef CONFIG_SMP
- struct cpuinfo_x86 *c = &cpu_data(cpu);
-#endif
-
- mutex_lock(&pdev_list_mutex);
-
-#ifdef CONFIG_SMP
- /* Skip second HT entry of each core */
- list_for_each_entry(pdev_entry, &pdev_list, list) {
- if (c->phys_proc_id == pdev_entry->phys_proc_id &&
- c->cpu_core_id == pdev_entry->cpu_core_id) {
- err = 0; /* Not an error */
- goto exit;
- }
- }
-#endif
pdev = platform_device_alloc(DRVNAME, cpu);
if (!pdev) {
pdev_entry->pdev = pdev;
pdev_entry->cpu = cpu;
-#ifdef CONFIG_SMP
- pdev_entry->phys_proc_id = c->phys_proc_id;
- pdev_entry->cpu_core_id = c->cpu_core_id;
-#endif
+ mutex_lock(&pdev_list_mutex);
list_add_tail(&pdev_entry->list, &pdev_list);
mutex_unlock(&pdev_list_mutex);
exit_device_put:
platform_device_put(pdev);
exit:
- mutex_unlock(&pdev_list_mutex);
return err;
}
#define F75375_REG_PWM2_DROP_DUTY 0x6C
#define FAN_CTRL_LINEAR(nr) (4 + nr)
-#define FAN_CTRL_MODE(nr) (4 + ((nr) * 2))
+#define FAN_CTRL_MODE(nr) (5 + ((nr) * 2))
/*
* Data structures and manipulation thereof
return -EINVAL;
fanmode = f75375_read8(client, F75375_REG_FAN_TIMER);
- fanmode &= ~(3 << FAN_CTRL_MODE(nr));
+ fanmode = ~(3 << FAN_CTRL_MODE(nr));
switch (val) {
case 0: /* Full speed */
mutex_lock(&data->update_lock);
conf = f75375_read8(client, F75375_REG_CONFIG1);
- conf &= ~(1 << FAN_CTRL_LINEAR(nr));
+ conf = ~(1 << FAN_CTRL_LINEAR(nr));
if (val == 0)
conf |= (1 << FAN_CTRL_LINEAR(nr)) ;
lis3lv02d_joystick_disable();
lis3lv02d_poweroff(&lis3_dev);
- led_classdev_unregister(&hpled_led.led_classdev);
flush_work(&hpled_led.work);
+ led_classdev_unregister(&hpled_led.led_classdev);
return lis3lv02d_remove_fs(&lis3_dev);
}
return inb(VAL);
}
-static inline void
-superio_outb(int reg, int val)
-{
- outb(reg, REG);
- outb(val, VAL);
-}
-
static int superio_inw(int reg)
{
int val;
sio_data->vid_value = superio_inb(IT87_SIO_VID_REG);
reg = superio_inb(IT87_SIO_PINX2_REG);
- /*
- * The IT8720F has no VIN7 pin, so VCCH should always be
- * routed internally to VIN7 with an internal divider.
- * Curiously, there still is a configuration bit to control
- * this, which means it can be set incorrectly. And even
- * more curiously, many boards out there are improperly
- * configured, even though the IT8720F datasheet claims
- * that the internal routing of VCCH to VIN7 is the default
- * setting. So we force the internal routing in this case.
- */
- if (sio_data->type == it8720 && !(reg & (1 << 1))) {
- reg |= (1 << 1);
- superio_outb(IT87_SIO_PINX2_REG, reg);
- pr_notice("it87: Routing internal VCCH to in7\n");
- }
if (reg & (1 << 0))
pr_info("it87: in3 is VCC (+5V)\n");
if (reg & (1 << 1))
int temp;
struct k8temp_data *data = k8temp_update_device(dev);
- if (data->swap_core_select && (data->sensorsp & SEL_CORE))
+ if (data->swap_core_select)
core = core ? 0 : 1;
temp = TEMP_FROM_REG(data->temp[core][place]) + data->temp_offset;
MODULE_DEVICE_TABLE(pci, k8temp_ids);
-static int __devinit is_rev_g_desktop(u8 model)
-{
- u32 brandidx;
-
- if (model < 0x69)
- return 0;
-
- if (model == 0xc1 || model == 0x6c || model == 0x7c)
- return 0;
-
- /*
- * Differentiate between AM2 and ASB1.
- * See "Constructing the processor Name String" in "Revision
- * Guide for AMD NPT Family 0Fh Processors" (33610).
- */
- brandidx = cpuid_ebx(0x80000001);
- brandidx = (brandidx >> 9) & 0x1f;
-
- /* Single core */
- if ((model == 0x6f || model == 0x7f) &&
- (brandidx == 0x7 || brandidx == 0x9 || brandidx == 0xc))
- return 0;
-
- /* Dual core */
- if (model == 0x6b &&
- (brandidx == 0xb || brandidx == 0xc))
- return 0;
-
- return 1;
-}
-
static int __devinit k8temp_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
"wrong - check erratum #141\n");
}
- if (is_rev_g_desktop(model)) {
+ if ((model >= 0x69) &&
+ !(model == 0xc1 || model == 0x6c || model == 0x7c)) {
/*
- * RevG desktop CPUs (i.e. no socket S1G1 or
- * ASB1 parts) need additional offset,
- * otherwise reported temperature is below
- * ambient temperature
+ * RevG desktop CPUs (i.e. no socket S1G1 parts)
+ * need additional offset, otherwise reported
+ * temperature is below ambient temperature
*/
data->temp_offset = 21000;
}
/*
* Common configuration
- * BDU: (12 bits sensors only) LSB and MSB values are not updated until
- * both have been read. So the value read will always be correct.
+ * BDU: LSB and MSB values are not updated until both have been read.
+ * So the value read will always be correct.
*/
- if (lis3->whoami == LIS_DOUBLE_ID) {
-	lis3->read(lis3, CTRL_REG2, &reg);
- reg |= CTRL2_BDU;
- lis3->write(lis3, CTRL_REG2, reg);
- }
+	lis3->read(lis3, CTRL_REG2, &reg);
+ reg |= CTRL2_BDU;
+ lis3->write(lis3, CTRL_REG2, reg);
}
EXPORT_SYMBOL_GPL(lis3lv02d_poweron);
}
/* conversion btw sampling rate and the register values */
-static int lis3_12_rates[4] = {40, 160, 640, 2560};
-static int lis3_8_rates[2] = {100, 400};
+static int lis3lv02dl_df_val[4] = {40, 160, 640, 2560};
static ssize_t lis3lv02d_rate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int val;
lis3_dev.read(&lis3_dev, CTRL_REG1, &ctrl);
-
- if (lis3_dev.whoami == LIS_DOUBLE_ID)
- val = lis3_12_rates[(ctrl & (CTRL1_DF0 | CTRL1_DF1)) >> 4];
- else
- val = lis3_8_rates[(ctrl & CTRL1_DR) >> 7];
-
- return sprintf(buf, "%d\n", val);
+ val = (ctrl & (CTRL1_DF0 | CTRL1_DF1)) >> 4;
+ return sprintf(buf, "%d\n", lis3lv02dl_df_val[val]);
}
static DEVICE_ATTR(position, S_IRUGO, lis3lv02d_position_show, NULL);
CTRL1_DF1 = 0x20,
CTRL1_PD0 = 0x40,
CTRL1_PD1 = 0x80,
- CTRL1_DR = 0x80, /* Data rate on 8 bits */
};
enum lis3lv02d_ctrl2 {
CTRL2_DAS = 0x01,
switch (data->type) {
case adm1027:
case adt7463:
- case adt7468:
case emc6d100:
case emc6d102:
data->freq_map = adm1027_freq_map;
LTC4245_VEEIN = 0x19,
LTC4245_VEESENSE = 0x1a,
LTC4245_VEEOUT = 0x1b,
- LTC4245_GPIOADC = 0x1c,
+ LTC4245_GPIOADC1 = 0x1c,
+ LTC4245_GPIOADC2 = 0x1d,
+ LTC4245_GPIOADC3 = 0x1e,
};
struct ltc4245_data {
u8 cregs[0x08];
/* Voltage registers */
- u8 vregs[0x0d];
+ u8 vregs[0x0f];
};
static struct ltc4245_data *ltc4245_update_device(struct device *dev)
data->cregs[i] = val;
}
- /* Read voltage registers -- 0x10 to 0x1c */
+ /* Read voltage registers -- 0x10 to 0x1f */
for (i = 0; i < ARRAY_SIZE(data->vregs); i++) {
val = i2c_smbus_read_byte_data(client, i+0x10);
if (unlikely(val < 0))
case LTC4245_VEEOUT:
voltage = regval * -55;
break;
- case LTC4245_GPIOADC:
+ case LTC4245_GPIOADC1:
+ case LTC4245_GPIOADC2:
+ case LTC4245_GPIOADC3:
voltage = regval * 10;
break;
default:
LTC4245_ALARM(in8_min_alarm, (1 << 3), LTC4245_FAULT2);
/* GPIO voltages */
-LTC4245_VOLTAGE(in9_input, LTC4245_GPIOADC);
+LTC4245_VOLTAGE(in9_input, LTC4245_GPIOADC1);
+LTC4245_VOLTAGE(in10_input, LTC4245_GPIOADC2);
+LTC4245_VOLTAGE(in11_input, LTC4245_GPIOADC3);
/* Power Consumption (virtual) */
LTC4245_POWER(power1_input, LTC4245_12VSENSE);
&sensor_dev_attr_in8_min_alarm.dev_attr.attr,
&sensor_dev_attr_in9_input.dev_attr.attr,
+ &sensor_dev_attr_in10_input.dev_attr.attr,
+ &sensor_dev_attr_in11_input.dev_attr.attr,
&sensor_dev_attr_power1_input.dev_attr.attr,
&sensor_dev_attr_power2_input.dev_attr.attr,
static int __init pc87360_device_add(unsigned short address)
{
- struct resource res[3];
- int err, i, res_count;
+ struct resource res = {
+ .name = "pc87360",
+ .flags = IORESOURCE_IO,
+ };
+ int err, i;
pdev = platform_device_alloc("pc87360", address);
if (!pdev) {
goto exit;
}
- memset(res, 0, 3 * sizeof(struct resource));
- res_count = 0;
for (i = 0; i < 3; i++) {
if (!extra_isa[i])
continue;
- res[res_count].start = extra_isa[i];
- res[res_count].end = extra_isa[i] + PC87360_EXTENT - 1;
- res[res_count].name = "pc87360",
- res[res_count].flags = IORESOURCE_IO,
+ res.start = extra_isa[i];
+ res.end = extra_isa[i] + PC87360_EXTENT - 1;
- err = acpi_check_resource_conflict(&res[res_count]);
+ err = acpi_check_resource_conflict(&res);
if (err)
goto exit_device_put;
- res_count++;
- }
-
- err = platform_device_add_resources(pdev, res, res_count);
- if (err) {
- printk(KERN_ERR "pc87360: Device resources addition failed "
- "(%d)\n", err);
- goto exit_device_put;
+ err = platform_device_add_resources(pdev, &res, 1);
+ if (err) {
+ printk(KERN_ERR "pc87360: Device resource[%d] "
+ "addition failed (%d)\n", i, err);
+ goto exit_device_put;
+ }
}
err = platform_device_add(pdev);
**/
static inline int sht15_calc_temp(struct sht15_data *data)
{
- int d1 = temppoints[0].d1;
+ int d1 = 0;
int i;
- for (i = ARRAY_SIZE(temppoints) - 1; i > 0; i--)
+ for (i = 1; i < ARRAY_SIZE(temppoints); i++)
/* Find pointer to interpolate */
if (data->supply_uV > temppoints[i - 1].vdd) {
- d1 = (data->supply_uV - temppoints[i - 1].vdd)
+ d1 = (data->supply_uV/1000 - temppoints[i - 1].vdd)
* (temppoints[i].d1 - temppoints[i - 1].d1)
/ (temppoints[i].vdd - temppoints[i - 1].vdd)
+ temppoints[i - 1].d1;
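(The assignment above is ordinary linear interpolation between adjacent calibration entries: d1 = d1[i-1] + (V - vdd[i-1]) * (d1[i] - d1[i-1]) / (vdd[i] - vdd[i-1]), where V is the supply voltage expressed in the same units as temppoints[].vdd.)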
/* If a regulator is available, query what the supply voltage actually is!*/
data->reg = regulator_get(data->dev, "vcc");
if (!IS_ERR(data->reg)) {
- int voltage;
-
- voltage = regulator_get_voltage(data->reg);
- if (voltage)
- data->supply_uV = voltage;
-
+ data->supply_uV = regulator_get_voltage(data->reg);
regulator_enable(data->reg);
/* setup a notifier block to update this if another device
* causes the voltage to change */
#define TMP423_DEVICE_ID 0x23
static const struct i2c_device_id tmp421_id[] = {
- { "tmp421", 2 },
- { "tmp422", 3 },
- { "tmp423", 4 },
+ { "tmp421", tmp421 },
+ { "tmp422", tmp422 },
+ { "tmp423", tmp423 },
{ }
};
MODULE_DEVICE_TABLE(i2c, tmp421_id);
struct mutex update_lock;
char valid;
unsigned long last_updated;
- int channels;
+ int kind;
u8 config;
s16 temp[4];
};
static int temp_from_s16(s16 reg)
{
- /* Mask out status bits */
- int temp = reg & ~0xf;
+ int temp = reg;
return (temp * 1000 + 128) / 256;
}
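(Worked example, assuming the usual 8.8 fixed-point value assembled from the MSB and LSB registers: 25 degrees C arrives as 25 * 256 = 6400, and (6400 * 1000 + 128) / 256 = 25000 millidegrees; the +128 rounds to the nearest millidegree instead of truncating.)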
static int temp_from_u16(u16 reg)
{
- /* Mask out status bits */
- int temp = reg & ~0xf;
+ int temp = reg;
/* Add offset for extended temperature range. */
temp -= 64 * 256;
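(In the TMP421 extended-range format readings are biased by +64 degrees C, so 25 degrees C is transmitted as (25 + 64) * 256 = 22784; subtracting 64 * 256 = 16384 here recovers the unbiased value before the same millidegree conversion.)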
data->config = i2c_smbus_read_byte_data(client,
TMP421_CONFIG_REG_1);
- for (i = 0; i < data->channels; i++) {
+ for (i = 0; i <= data->kind; i++) {
data->temp[i] = i2c_smbus_read_byte_data(client,
TMP421_TEMP_MSB[i]) << 8;
data->temp[i] |= i2c_smbus_read_byte_data(client,
devattr = container_of(a, struct device_attribute, attr);
index = to_sensor_dev_attr(devattr)->index;
- if (index < data->channels)
+ if (data->kind > index)
return a->mode;
return 0;
i2c_set_clientdata(client, data);
mutex_init(&data->update_lock);
- data->channels = id->driver_data;
+ data->kind = id->driver_data;
err = tmp421_init_client(client);
if (err)
Tolapai 0x5032 32 hard yes yes yes
ICH10 0x3a30 32 hard yes yes yes
ICH10 0x3a60 32 hard yes yes yes
- 3400/5 Series (PCH) 0x3b30 32 hard yes yes yes
- Cougar Point (PCH) 0x1c22 32 hard yes yes yes
+ PCH 0x3b30 32 hard yes yes yes
Features supported by this driver:
Software PEC no
data->block[0] = 32; /* max for SMBus block reads */
}
- /* Experience has shown that the block buffer can only be used for
- SMBus (not I2C) block transactions, even though the datasheet
- doesn't mention this limitation. */
if ((i801_features & FEATURE_BLOCK_BUFFER)
- && command != I2C_SMBUS_I2C_BLOCK_DATA
+ && !(command == I2C_SMBUS_I2C_BLOCK_DATA
+ && read_write == I2C_SMBUS_READ)
&& i801_set_block_buffer_mode() == 0)
result = i801_block_transaction_by_block(data, read_write,
hwpec);
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH10_4) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH10_5) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PCH_SMBUS) },
- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CPT_SMBUS) },
{ 0, }
};
case PCI_DEVICE_ID_INTEL_ICH10_4:
case PCI_DEVICE_ID_INTEL_ICH10_5:
case PCI_DEVICE_ID_INTEL_PCH_SMBUS:
- case PCI_DEVICE_ID_INTEL_CPT_SMBUS:
i801_features |= FEATURE_I2C_BLOCK_READ;
/* fall through */
case PCI_DEVICE_ID_INTEL_82801DB_3:
static int pca_isa_waitforcompletion(void *pd)
{
+ long ret = ~0;
unsigned long timeout;
- long ret;
if (irq > -1) {
ret = wait_event_timeout(pca_wait,
} else {
/* Do polling */
timeout = jiffies + pca_isa_ops.timeout;
- do {
- ret = time_before(jiffies, timeout);
- if (pca_isa_readbyte(pd, I2C_PCA_CON)
- & I2C_PCA_CON_SI)
- break;
+ while (((pca_isa_readbyte(pd, I2C_PCA_CON)
+ & I2C_PCA_CON_SI) == 0)
+ && (ret = time_before(jiffies, timeout)))
udelay(100);
- } while (ret);
}
-
return ret > 0;
}
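Both the interrupt branch and the polling branch above reduce to the same wait-until-flag-or-deadline pattern. As a rough, self-contained illustration of that pattern outside the kernel (poll_until(), ready() and the 100 microsecond step are invented for the example, not part of the driver):

#include <stdbool.h>
#include <time.h>
#include <unistd.h>

/* Poll ready() every 100 microseconds until it returns true or timeout_ms
 * elapses.  Returns true if the condition was met, false on timeout. */
static bool poll_until(bool (*ready)(void), long timeout_ms)
{
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
                if (ready())
                        return true;
                clock_gettime(CLOCK_MONOTONIC, &now);
                if ((now.tv_sec - start.tv_sec) * 1000 +
                    (now.tv_nsec - start.tv_nsec) / 1000000 >= timeout_ms)
                        return false;
                usleep(100);
        }
}

The driver's polling loops differ from this only in using jiffies with time_before() for the deadline and in testing the controller's I2C_PCA_CON SI bit as the condition.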
static int i2c_pca_pf_waitforcompletion(void *pd)
{
struct i2c_pca_pf_data *i2c = pd;
+ long ret = ~0;
unsigned long timeout;
- long ret;
if (i2c->irq) {
ret = wait_event_timeout(i2c->wait,
} else {
/* Do polling */
timeout = jiffies + i2c->adap.timeout;
- do {
- ret = time_before(jiffies, timeout);
- if (i2c->algo_data.read_byte(i2c, I2C_PCA_CON)
- & I2C_PCA_CON_SI)
- break;
+ while (((i2c->algo_data.read_byte(i2c, I2C_PCA_CON)
+ & I2C_PCA_CON_SI) == 0)
+ && (ret = time_before(jiffies, timeout)))
udelay(100);
- } while (ret);
}
return ret > 0;
if (irq) {
ret = request_irq(irq, i2c_pca_pf_handler,
- IRQF_TRIGGER_FALLING, pdev->name, i2c);
+ IRQF_TRIGGER_FALLING, i2c->adap.name, i2c);
if (ret)
goto e_reqirq;
}
/* Make sure there is something at this address, unless forced */
if (kind < 0) {
- if (addr == 0x73 && (adapter->class & I2C_CLASS_HWMON)) {
- /* Special probe for FSC hwmon chips */
- union i2c_smbus_data dummy;
-
- if (i2c_smbus_xfer(adapter, addr, 0, I2C_SMBUS_READ, 0,
- I2C_SMBUS_BYTE_DATA, &dummy) < 0)
- return 0;
- } else {
- if (i2c_smbus_xfer(adapter, addr, 0, I2C_SMBUS_WRITE, 0,
- I2C_SMBUS_QUICK, NULL) < 0)
- return 0;
-
- /* Prevent 24RF08 corruption */
- if ((addr & ~0x0f) == 0x50)
- i2c_smbus_xfer(adapter, addr, 0,
- I2C_SMBUS_WRITE, 0,
- I2C_SMBUS_QUICK, NULL);
- }
+ if (i2c_smbus_xfer(adapter, addr, 0, 0, 0,
+ I2C_SMBUS_QUICK, NULL) < 0)
+ return 0;
+
+ /* prevent 24RF08 corruption */
+ if ((addr & ~0x0f) == 0x50)
+ i2c_smbus_xfer(adapter, addr, 0, 0, 0,
+ I2C_SMBUS_QUICK, NULL);
}
/* Finally call the custom detection function */
static int cmd640_test_irq(ide_hwif_t *hwif)
{
+ struct pci_dev *dev = to_pci_dev(hwif->dev);
int irq_reg = hwif->channel ? ARTTIM23 : CFR;
- u8 irq_mask = hwif->channel ? ARTTIM23_IDE23INTR :
+ u8 irq_stat, irq_mask = hwif->channel ? ARTTIM23_IDE23INTR :
CFR_IDE01INTR;
- u8 irq_stat = get_cmd640_reg(irq_reg);
+
+ pci_read_config_byte(dev, irq_reg, &irq_stat);
return (irq_stat & irq_mask) ? 1 : 0;
}
return (flags & REQ_FAILED) ? -EIO : 0;
}
-/*
- * returns true if rq has been completed
- */
-static bool ide_cd_error_cmd(ide_drive_t *drive, struct ide_cmd *cmd)
+static void ide_cd_error_cmd(ide_drive_t *drive, struct ide_cmd *cmd)
{
unsigned int nr_bytes = cmd->nbytes - cmd->nleft;
if (cmd->tf_flags & IDE_TFLAG_WRITE)
nr_bytes -= cmd->last_xfer_len;
- if (nr_bytes > 0) {
+ if (nr_bytes > 0)
ide_complete_rq(drive, 0, nr_bytes);
- return true;
- }
-
- return false;
}
static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
}
if (uptodate == 0 && rq->bio)
- if (ide_cd_error_cmd(drive, cmd))
- return ide_stopped;
+ ide_cd_error_cmd(drive, cmd);
/* make sure it's fully ended */
if (blk_fs_request(rq) == 0) {
{
struct request *rq;
int error;
- int rw = !(cmd->tf_flags & IDE_TFLAG_WRITE) ? READ : WRITE;
- rq = blk_get_request(drive->queue, rw, __GFP_WAIT);
+ rq = blk_get_request(drive->queue, READ, __GFP_WAIT);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
+ if (cmd->tf_flags & IDE_TFLAG_WRITE)
+ rq->cmd_flags |= REQ_RW;
+
/*
* (ks) We transfer currently only whole sectors.
* This is suffient for now. But, it would be great,
V_MSS_IDX(mtu_idx) |
V_L2T_IDX(ep->l2t->idx) | V_TX_CHANNEL(ep->l2t->smt_idx);
opt0l = V_TOS((ep->tos >> 2) & M_TOS) | V_RCV_BUFSIZ(rcv_win>>10);
- opt2 = F_RX_COALESCE_VALID | V_RX_COALESCE(0) | V_FLAVORS_VALID(1) |
- V_CONG_CONTROL_FLAVOR(cong_flavor);
+ opt2 = V_FLAVORS_VALID(1) | V_CONG_CONTROL_FLAVOR(cong_flavor);
skb->priority = CPL_PRIORITY_SETUP;
set_arp_failure_handler(skb, act_open_req_arp_failure);
V_MSS_IDX(mtu_idx) |
V_L2T_IDX(ep->l2t->idx) | V_TX_CHANNEL(ep->l2t->smt_idx);
opt0l = V_TOS((ep->tos >> 2) & M_TOS) | V_RCV_BUFSIZ(rcv_win>>10);
- opt2 = F_RX_COALESCE_VALID | V_RX_COALESCE(0) | V_FLAVORS_VALID(1) |
- V_CONG_CONTROL_FLAVOR(cong_flavor);
+ opt2 = V_FLAVORS_VALID(1) | V_CONG_CONTROL_FLAVOR(cong_flavor);
rpl = cplhdr(skb);
rpl->wr.wr_hi = htonl(V_WR_OP(FW_WROPCODE_FORWARD));
if (++priv->tx_outstanding == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
tx->qp->qp_num);
- if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP))
- ipoib_warn(priv, "request notify on send CQ failed\n");
netif_stop_queue(dev);
}
}
return ret ? ret : count;
}
-static DEVICE_ATTR(create_child, S_IWUSR, NULL, create_child);
+static DEVICE_ATTR(create_child, S_IWUGO, NULL, create_child);
static ssize_t delete_child(struct device *dev,
struct device_attribute *attr,
return ret ? ret : count;
}
-static DEVICE_ATTR(delete_child, S_IWUSR, NULL, delete_child);
+static DEVICE_ATTR(delete_child, S_IWUGO, NULL, delete_child);
int ipoib_add_pkey_attr(struct net_device *dev)
{
mem_copy->copy_buf = NULL;
}
-#define IS_4K_ALIGNED(addr) ((((unsigned long)addr) & ~MASK_4K) == 0)
-
/**
* iser_sg_to_page_vec - Translates scatterlist entries to physical addresses
* and returns the length of resulting physical address array (may be less than
* where --few fragments of the same page-- are present in the SG as
* consecutive elements. Also, it handles one entry SG.
*/
-
static int iser_sg_to_page_vec(struct iser_data_buf *data,
struct iser_page_vec *page_vec,
struct ib_device *ibdev)
{
- struct scatterlist *sg, *sgl = (struct scatterlist *)data->buf;
- u64 start_addr, end_addr, page, chunk_start = 0;
+ struct scatterlist *sgl = (struct scatterlist *)data->buf;
+ struct scatterlist *sg;
+ u64 first_addr, last_addr, page;
+ int end_aligned;
+ unsigned int cur_page = 0;
unsigned long total_sz = 0;
- unsigned int dma_len;
- int i, new_chunk, cur_page, last_ent = data->dma_nents - 1;
+ int i;
/* compute the offset of first element */
page_vec->offset = (u64) sgl[0].offset & ~MASK_4K;
- new_chunk = 1;
- cur_page = 0;
for_each_sg(sgl, sg, data->dma_nents, i) {
- start_addr = ib_sg_dma_address(ibdev, sg);
- if (new_chunk)
- chunk_start = start_addr;
- dma_len = ib_sg_dma_len(ibdev, sg);
- end_addr = start_addr + dma_len;
+ unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
+
total_sz += dma_len;
- /* collect page fragments until aligned or end of SG list */
- if (!IS_4K_ALIGNED(end_addr) && i < last_ent) {
- new_chunk = 0;
- continue;
+ first_addr = ib_sg_dma_address(ibdev, sg);
+ last_addr = first_addr + dma_len;
+
+ end_aligned = !(last_addr & ~MASK_4K);
+
+ /* continue to collect page fragments till aligned or SG ends */
+ while (!end_aligned && (i + 1 < data->dma_nents)) {
+ sg = sg_next(sg);
+ i++;
+ dma_len = ib_sg_dma_len(ibdev, sg);
+ total_sz += dma_len;
+ last_addr = ib_sg_dma_address(ibdev, sg) + dma_len;
+ end_aligned = !(last_addr & ~MASK_4K);
}
- new_chunk = 1;
-
- /* address of the first page in the contiguous chunk;
- masking relevant for the very first SG entry,
- which might be unaligned */
- page = chunk_start & MASK_4K;
- do {
- page_vec->pages[cur_page++] = page;
+
+ /* handle the 1st page in the 1st DMA element */
+ if (cur_page == 0) {
+ page = first_addr & MASK_4K;
+ page_vec->pages[cur_page] = page;
+ cur_page++;
page += SIZE_4K;
- } while (page < end_addr);
- }
+ } else
+ page = first_addr;
+ for (; page < last_addr; page += SIZE_4K) {
+ page_vec->pages[cur_page] = page;
+ cur_page++;
+ }
+
+ }
page_vec->data_size = total_sz;
iser_dbg("page_vec->data_size:%d cur_page %d\n", page_vec->data_size,cur_page);
return cur_page;
}
+#define IS_4K_ALIGNED(addr) ((((unsigned long)addr) & ~MASK_4K) == 0)
/**
* iser_data_buf_aligned_len - Tries to determine the maximal correctly aligned
* the number of entries which are aligned correctly. Supports the case where
* consecutive SG elements are actually fragments of the same physcial page.
*/
-static int iser_data_buf_aligned_len(struct iser_data_buf *data,
- struct ib_device *ibdev)
+static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data,
+ struct ib_device *ibdev)
{
- struct scatterlist *sgl, *sg, *next_sg = NULL;
- u64 start_addr, end_addr;
- int i, ret_len, start_check = 0;
-
- if (data->dma_nents == 1)
- return 1;
+ struct scatterlist *sgl, *sg;
+ u64 end_addr, next_addr;
+ int i, cnt;
+ unsigned int ret_len = 0;
sgl = (struct scatterlist *)data->buf;
- start_addr = ib_sg_dma_address(ibdev, sgl);
+ cnt = 0;
for_each_sg(sgl, sg, data->dma_nents, i) {
- if (start_check && !IS_4K_ALIGNED(start_addr))
- break;
-
- next_sg = sg_next(sg);
- if (!next_sg)
- break;
-
- end_addr = start_addr + ib_sg_dma_len(ibdev, sg);
- start_addr = ib_sg_dma_address(ibdev, next_sg);
-
- if (end_addr == start_addr) {
- start_check = 0;
- continue;
- } else
- start_check = 1;
-
- if (!IS_4K_ALIGNED(end_addr))
- break;
+ /* iser_dbg("Checking sg iobuf [%d]: phys=0x%08lX "
+ "offset: %ld sz: %ld\n", i,
+ (unsigned long)sg_phys(sg),
+ (unsigned long)sg->offset,
+ (unsigned long)sg->length); */
+ end_addr = ib_sg_dma_address(ibdev, sg) +
+ ib_sg_dma_len(ibdev, sg);
+ /* iser_dbg("Checking sg iobuf end address "
+ "0x%08lX\n", end_addr); */
+ if (i + 1 < data->dma_nents) {
+ next_addr = ib_sg_dma_address(ibdev, sg_next(sg));
+ /* are i, i+1 fragments of the same page? */
+ if (end_addr == next_addr) {
+ cnt++;
+ continue;
+ } else if (!IS_4K_ALIGNED(end_addr)) {
+ ret_len = cnt + 1;
+ break;
+ }
+ }
+ cnt++;
}
- ret_len = (next_sg) ? i : i+1;
+ if (i == data->dma_nents)
+ ret_len = cnt; /* loop ended */
iser_dbg("Found %d aligned entries out of %d in sg:0x%p\n",
ret_len, data->dma_nents, data);
return ret_len;
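(For reference, IS_4K_ALIGNED() asks whether the low 12 bits of an address are zero, MASK_4K in this driver covering everything above them: 0x12345000 passes, 0x12345678 does not. An SG element whose end falls mid-page therefore terminates the run of entries that both versions of these helpers will treat as one contiguous, registrable region.)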
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/smp_lock.h>
-#include "input-compat.h"
MODULE_AUTHOR("Vojtech Pavlik <vojtech@suse.cz>");
MODULE_DESCRIPTION("Input core");
return error;
}
-#ifdef CONFIG_COMPAT
-
-static int input_bits_to_string(char *buf, int buf_size,
- unsigned long bits, bool skip_empty)
-{
- int len = 0;
-
- if (INPUT_COMPAT_TEST) {
- u32 dword = bits >> 32;
- if (dword || !skip_empty)
- len += snprintf(buf, buf_size, "%x ", dword);
-
- dword = bits & 0xffffffffUL;
- if (dword || !skip_empty || len)
- len += snprintf(buf + len, max(buf_size - len, 0),
- "%x", dword);
- } else {
- if (bits || !skip_empty)
- len += snprintf(buf, buf_size, "%lx", bits);
- }
-
- return len;
-}
-
-#else /* !CONFIG_COMPAT */
-
-static int input_bits_to_string(char *buf, int buf_size,
- unsigned long bits, bool skip_empty)
-{
- return bits || !skip_empty ?
- snprintf(buf, buf_size, "%lx", bits) : 0;
-}
-
-#endif
#ifdef CONFIG_PROC_FS
unsigned long *bitmap, int max)
{
int i;
- bool skip_empty = true;
- char buf[18];
- seq_printf(seq, "B: %s=", name);
-
- for (i = BITS_TO_LONGS(max) - 1; i >= 0; i--) {
- if (input_bits_to_string(buf, sizeof(buf),
- bitmap[i], skip_empty)) {
- skip_empty = false;
- seq_printf(seq, "%s%s", buf, i > 0 ? " " : "");
- }
- }
-
- /*
- * If no output was produced print a single 0.
- */
- if (skip_empty)
- seq_puts(seq, "0");
+ for (i = BITS_TO_LONGS(max) - 1; i > 0; i--)
+ if (bitmap[i])
+ break;
+ seq_printf(seq, "B: %s=", name);
+ for (; i >= 0; i--)
+ seq_printf(seq, "%lx%s", bitmap[i], i > 0 ? " " : "");
seq_putc(seq, '\n');
}
{
int i;
int len = 0;
- bool skip_empty = true;
-
- for (i = BITS_TO_LONGS(max) - 1; i >= 0; i--) {
- len += input_bits_to_string(buf + len, max(buf_size - len, 0),
- bitmap[i], skip_empty);
- if (len) {
- skip_empty = false;
- if (i > 0)
- len += snprintf(buf + len, max(buf_size - len, 0), " ");
- }
- }
- /*
- * If no output was produced print a single 0.
- */
- if (len == 0)
- len = snprintf(buf, buf_size, "%d", 0);
+ for (i = BITS_TO_LONGS(max) - 1; i > 0; i--)
+ if (bitmap[i])
+ break;
+
+ for (; i >= 0; i--)
+ len += snprintf(buf + len, max(buf_size - len, 0),
+ "%lx%s", bitmap[i], i > 0 ? " " : "");
if (add_cr)
len += snprintf(buf + len, max(buf_size - len, 0), "\n");
{ \
struct input_dev *input_dev = to_input_dev(dev); \
int len = input_print_bitmap(buf, PAGE_SIZE, \
- input_dev->bm##bit, ev##_MAX, \
- true); \
+ input_dev->bm##bit, ev##_MAX, 1); \
return min_t(int, len, PAGE_SIZE); \
} \
static DEVICE_ATTR(bm, S_IRUGO, input_dev_show_cap_##bm, NULL)
len = input_print_bitmap(&env->buf[env->buflen - 1],
sizeof(env->buf) - env->buflen,
- bitmap, max, false);
+ bitmap, max, 0);
if (len >= (sizeof(env->buf) - env->buflen))
return -ENOMEM;
memcpy(joydev->abspam, abspam, len);
- for (i = 0; i < joydev->nabs; i++)
- joydev->absmap[joydev->abspam[i]] = i;
-
out:
kfree(abspam);
return retval;
*/
#define TWL4030_MAX_ROWS 8 /* TWL4030 hard limit */
#define TWL4030_MAX_COLS 8
-/*
- * Note that we add space for an extra column so that we can handle
- * row lines connected to the gnd (see twl4030_col_xlate()).
- */
-#define TWL4030_ROW_SHIFT 4
-#define TWL4030_KEYMAP_SIZE (TWL4030_MAX_ROWS << TWL4030_ROW_SHIFT)
+#define TWL4030_ROW_SHIFT 3
+#define TWL4030_KEYMAP_SIZE (TWL4030_MAX_ROWS * TWL4030_MAX_COLS)
struct twl4030_keypad {
unsigned short keymap[TWL4030_KEYMAP_SIZE];
return ret;
}
-static bool twl4030_is_in_ghost_state(struct twl4030_keypad *kp, u16 *key_state)
+static int twl4030_is_in_ghost_state(struct twl4030_keypad *kp, u16 *key_state)
{
int i;
u16 check = 0;
u16 col = key_state[i];
if ((col & check) && hweight16(col) > 1)
- return true;
+ return 1;
check |= col;
}
- return false;
+ return 0;
}
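The helper above reports a possible ghost whenever a row shares a pressed column with an earlier row while more than one key in that row is down. A minimal userspace sketch of the same check, assuming key_state[i] holds the pressed-column bitmap for row i and using __builtin_popcount() in place of hweight16():

#include <stdbool.h>
#include <stdint.h>

static bool is_in_ghost_state(const uint16_t *key_state, int rows)
{
        uint16_t check = 0;
        int i;

        for (i = 0; i < rows; i++) {
                uint16_t col = key_state[i];

                /* shares a column with an earlier row and has >1 key down */
                if ((col & check) && __builtin_popcount(col) > 1)
                        return true;
                check |= col;
        }
        return false;
}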
static void twl4030_kp_scan(struct twl4030_keypad *kp, bool release_all)
if (!changed)
continue;
- /* Extra column handles "all gnd" rows */
- for (col = 0; col < kp->n_cols + 1; col++) {
+ for (col = 0; col < kp->n_cols; col++) {
int code;
if (!(changed & (1 << col)))
{ { 0x62, 0x02, 0x14 }, 0xcf, 0xcf,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED },
{ { 0x73, 0x02, 0x50 }, 0xcf, 0xcf, ALPS_FW_BK_1 }, /* Dell Vostro 1400 */
- { { 0x52, 0x01, 0x14 }, 0xff, 0xff,
- ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */
};
/*
struct psmouse *psmouse = serio_get_drvdata(serio);
struct psmouse *parent = NULL;
struct serio_driver *drv = serio->drv;
- unsigned char type;
int rc = -1;
if (!drv || !psmouse) {
if (psmouse->reconnect) {
if (psmouse->reconnect(psmouse))
goto out;
- } else {
- psmouse_reset(psmouse);
-
- if (psmouse_probe(psmouse) < 0)
- goto out;
-
- type = psmouse_extensions(psmouse, psmouse_max_proto, false);
- if (psmouse->type != type)
- goto out;
+ } else if (psmouse_probe(psmouse) < 0 ||
+ psmouse->type != psmouse_extensions(psmouse,
+ psmouse_max_proto, false)) {
+ goto out;
}
/* ok, the device type (and capabilities) match the old one,
DMI_MATCH(DMI_BOARD_VERSION, "1.02"),
},
},
- {
- /* Gigabyte Spring Peak - defines wrong chassis type */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Spring Peak"),
- },
- },
{
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_PRODUCT_NAME, "PC-MM20 Series"),
},
},
- {
- /* Sony Vaio VPCZ122GX */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "VPCZ122GX"),
- },
- },
{
/* Sony Vaio FS-115b */
.matches = {
DMI_MATCH(DMI_PRODUCT_NAME, "E1210"),
},
},
- {
- /* Medion Akoya E1222 */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
- DMI_MATCH(DMI_PRODUCT_NAME, "E122X"),
- },
- },
{
/* Mivvy M310 */
.matches = {
static void __exit i8042_exit(void)
{
- platform_device_unregister(i8042_platform_device);
platform_driver_unregister(&i8042_driver);
+ platform_device_unregister(i8042_platform_device);
i8042_platform_exit();
panic_blink = NULL;
/*
* drivers/input/tablet/wacom.h
*
- * USB Wacom tablet support
+ * USB Wacom Graphire and Wacom Intuos tablet support
*
* Copyright (c) 2000-2004 Vojtech Pavlik <vojtech@ucw.cz>
* Copyright (c) 2000 Andreas Bach Aaen <abach@stofanet.dk>
* v1.49 (pc) - Added support for USB Tablet PC (0x90, 0x93, and 0x9A)
* v1.50 (pc) - Fixed a TabletPC touch bug in 2.6.28
* v1.51 (pc) - Added support for Intuos4
- * v1.52 (pc) - Query Wacom data upon system resume
*/
/*
/*
* Version Information
*/
-#define DRIVER_VERSION "v1.52"
+#define DRIVER_VERSION "v1.51"
#define DRIVER_AUTHOR "Vojtech Pavlik <vojtech@ucw.cz>"
-#define DRIVER_DESC "USB Wacom tablet driver"
+#define DRIVER_DESC "USB Wacom Graphire and Wacom Intuos tablet driver"
#define DRIVER_LICENSE "GPL"
MODULE_AUTHOR(DRIVER_AUTHOR);
/*
* drivers/input/tablet/wacom_sys.c
*
- * USB Wacom tablet support - system specific code
+ * USB Wacom Graphire and Wacom Intuos tablet support - system specific code
*/
/*
int rv;
mutex_lock(&wacom->lock);
-
- /* switch to wacom mode first */
- wacom_query_tablet_data(intf);
-
if (wacom->open)
rv = usb_submit_urb(wacom->irq, GFP_NOIO);
else
rv = 0;
-
mutex_unlock(&wacom->lock);
return rv;
* note that bcs may be NULL if no B channel is free
*/
at_state2->ConState = 700;
- for (i = 0; i < STR_NUM; ++i) {
- kfree(at_state2->str_var[i]);
- at_state2->str_var[i] = NULL;
- }
+ kfree(at_state2->str_var[STR_NMBR]);
+ at_state2->str_var[STR_NMBR] = NULL;
+ kfree(at_state2->str_var[STR_ZCPN]);
+ at_state2->str_var[STR_ZCPN] = NULL;
+ kfree(at_state2->str_var[STR_ZBC]);
+ at_state2->str_var[STR_ZBC] = NULL;
+ kfree(at_state2->str_var[STR_ZHLC]);
+ at_state2->str_var[STR_ZHLC] = NULL;
at_state2->int_var[VAR_ZCTP] = -1;
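The hunk above trades a loop over all STR_NUM string slots for explicit kfree()/NULL pairs on four named slots. The underlying idiom is the same either way: free each heap string and reset the slot so a later pass cannot double-free it. A minimal userspace sketch of that idiom (the slot count and sample numbers are invented, and libc free() stands in for kfree()):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STR_NUM 4	/* illustrative slot count, not the driver's real table */

/* Free every slot and reset it to NULL; free(NULL) is a no-op, like kfree(NULL). */
static void reset_str_vars(char *str_var[STR_NUM])
{
	int i;

	for (i = 0; i < STR_NUM; i++) {
		free(str_var[i]);
		str_var[i] = NULL;	/* a later reset or free stays harmless */
	}
}

int main(void)
{
	char *str_var[STR_NUM] = { strdup("0711123456"), NULL, strdup("8890"), NULL };

	reset_str_vars(str_var);
	reset_str_vars(str_var);	/* safe to repeat: every slot is already NULL */
	printf("all %d slots cleared\n", STR_NUM);
	return 0;
}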
spin_lock_irqsave(&cs->lock, flags);
if ((tty = cs->tty) == NULL)
gig_dbg(DEBUG_ANY, "receive on closed device");
else {
+ tty_buffer_request_room(tty, len);
tty_insert_flip_string(tty, buffer, len);
tty_flip_buffer_push(tty);
}
pr_debug("%s: SCIOGETSPID: ioctl received\n",
sc_adapter[card]->devicename);
- spid = kzalloc(SCIOC_SPIDSIZE, GFP_KERNEL);
+ spid = kmalloc(SCIOC_SPIDSIZE, GFP_KERNEL);
if (!spid) {
kfree(rcvmsg);
return -ENOMEM;
kfree(rcvmsg);
return status;
}
- strlcpy(spid, rcvmsg->msg_data.byte_array, SCIOC_SPIDSIZE);
+ strcpy(spid, rcvmsg->msg_data.byte_array);
/*
* Package the switch type and send to user space
return status;
}
- dn = kzalloc(SCIOC_DNSIZE, GFP_KERNEL);
+ dn = kmalloc(SCIOC_DNSIZE, GFP_KERNEL);
if (!dn) {
kfree(rcvmsg);
return -ENOMEM;
}
- strlcpy(dn, rcvmsg->msg_data.byte_array, SCIOC_DNSIZE);
+ strcpy(dn, rcvmsg->msg_data.byte_array);
kfree(rcvmsg);
/*
pr_debug("%s: SCIOSTAT: ioctl received\n",
sc_adapter[card]->devicename);
- bi = kzalloc(sizeof(boardInfo), GFP_KERNEL);
+ bi = kmalloc (sizeof(boardInfo), GFP_KERNEL);
if (!bi) {
kfree(rcvmsg);
return -ENOMEM;
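The copies above move between strlcpy() into a kzalloc()ed buffer and strcpy() into a kmalloc()ed one. The practical difference is whether the destination is guaranteed to stay within SCIOC_SPIDSIZE/SCIOC_DNSIZE bytes and end up NUL-terminated when the board's byte_array is not. A userspace sketch of the bounded, zero-filled variant (the buffer size and message contents are invented; calloc() and snprintf() stand in for kzalloc() and strlcpy()):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SPID_SIZE 32	/* hypothetical stand-in for SCIOC_SPIDSIZE */

int main(void)
{
	/* Message data as it might arrive from the board: possibly not
	 * NUL-terminated within SPID_SIZE bytes. */
	char msg_data[64];
	char *spid;

	memset(msg_data, 'A', sizeof(msg_data));	/* worst case: no terminator */

	/* calloc() plays the role of kzalloc(): untouched bytes are already zero. */
	spid = calloc(1, SPID_SIZE);
	if (!spid)
		return 1;

	/* Bounded copy that leaves room for the terminator, which is what the
	 * kernel's strlcpy()/strscpy() provide; a plain strcpy() here could run
	 * past SPID_SIZE. */
	snprintf(spid, SPID_SIZE, "%.*s", SPID_SIZE - 1, msg_data);

	printf("copied %zu bytes, always terminated\n", strlen(spid));
	free(spid);
	return 0;
}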
const struct of_device_id *match)
{
struct device_node *np = ofdev->node, *child;
+ struct gpio_led led;
struct gpio_led_of_platform_data *pdata;
int count = 0, ret;
if (!pdata)
return -ENOMEM;
+ memset(&led, 0, sizeof(led));
for_each_child_of_node(np, child) {
- struct gpio_led led = {};
enum of_gpio_flags flags;
const char *state;
static void write_both_fan_speed(struct thermostat *th, int speed);
static void write_fan_speed(struct thermostat *th, int speed, int fan);
-static void thermostat_create_files(void);
-static void thermostat_remove_files(void);
static int
write_reg(struct thermostat* th, int reg, u8 data)
struct thermostat *th = i2c_get_clientdata(client);
int i;
- thermostat_remove_files();
-
if (thread_therm != NULL) {
kthread_stop(thread_therm);
}
return -ENOMEM;
}
- thermostat_create_files();
-
return 0;
}
struct device_node* np;
const u32 *prop;
int i = 0, offset = 0;
+ int err;
np = of_find_node_by_name(NULL, "fan");
if (!np)
return -ENODEV;
}
-#ifndef CONFIG_I2C_POWERMAC
- request_module("i2c-powermac");
-#endif
-
- return i2c_add_driver(&thermostat_driver);
-}
-
-static void thermostat_create_files(void)
-{
- int err;
-
err = device_create_file(&of_dev->dev, &dev_attr_sensor1_temperature);
err |= device_create_file(&of_dev->dev, &dev_attr_sensor2_temperature);
err |= device_create_file(&of_dev->dev, &dev_attr_sensor1_limit);
if (err)
printk(KERN_WARNING
"Failed to create tempertaure attribute file(s).\n");
+
+#ifndef CONFIG_I2C_POWERMAC
+ request_module("i2c-powermac");
+#endif
+
+ return i2c_add_driver(&thermostat_driver);
}
-static void thermostat_remove_files(void)
+static void __exit
+thermostat_exit(void)
{
if (of_dev) {
device_remove_file(&of_dev->dev, &dev_attr_sensor1_temperature);
device_remove_file(&of_dev->dev,
&dev_attr_sensor2_fan_speed);
+ of_device_unregister(of_dev);
}
-}
-
-static void __exit
-thermostat_exit(void)
-{
i2c_del_driver(&thermostat_driver);
- of_device_unregister(of_dev);
}
module_init(thermostat_init);
{
if (!bitmap) return;
if (behind) {
- if (atomic_dec_and_test(&bitmap->behind_writes))
- wake_up(&bitmap->behind_wait);
+ atomic_dec(&bitmap->behind_writes);
PRINTK(KERN_DEBUG "dec write-behind count %d/%d\n",
atomic_read(&bitmap->behind_writes), bitmap->max_write_behind);
}
atomic_set(&bitmap->pending_writes, 0);
init_waitqueue_head(&bitmap->write_wait);
init_waitqueue_head(&bitmap->overflow_wait);
- init_waitqueue_head(&bitmap->behind_wait);
bitmap->mddev = mddev;
wait_queue_head_t write_wait;
wait_queue_head_t overflow_wait;
-#ifndef __GENKSYMS__
- wait_queue_head_t behind_wait;
-#endif
};
/* the bitmap API */
static inline chunk_t sector_to_chunk(struct dm_exception_store *store,
sector_t sector)
{
- return sector >> store->chunk_shift;
+ return (sector & ~store->chunk_mask) >> store->chunk_shift;
}
int dm_exception_store_type_register(struct dm_exception_store_type *type);
static void dm_hash_remove_all(int keep_open_devices)
{
- int i, dev_skipped;
+ int i, dev_skipped, dev_removed;
struct hash_cell *hc;
- struct mapped_device *md;
-
-retry:
- dev_skipped = 0;
+ struct list_head *tmp, *n;
down_write(&_hash_lock);
+retry:
+ dev_skipped = dev_removed = 0;
for (i = 0; i < NUM_BUCKETS; i++) {
- list_for_each_entry(hc, _name_buckets + i, name_list) {
- md = hc->md;
- dm_get(md);
+ list_for_each_safe (tmp, n, _name_buckets + i) {
+ hc = list_entry(tmp, struct hash_cell, name_list);
- if (keep_open_devices && dm_lock_for_deletion(md)) {
- dm_put(md);
+ if (keep_open_devices &&
+ dm_lock_for_deletion(hc->md)) {
dev_skipped++;
continue;
}
-
__hash_remove(hc);
-
- up_write(&_hash_lock);
-
- dm_put(md);
-
- /*
- * Some mapped devices may be using other mapped
- * devices, so repeat until we make no further
- * progress. If a new mapped device is created
- * here it will also get removed.
- */
- goto retry;
+ dev_removed = 1;
}
}
- up_write(&_hash_lock);
+ /*
+ * Some mapped devices may be using other mapped devices, so if any
+ * still exist, repeat until we make no further progress.
+ */
+ if (dev_skipped) {
+ if (dev_removed)
+ goto retry;
- if (dev_skipped)
DMWARN("remove_all left %d open device(s)", dev_skipped);
+ }
+
+ up_write(&_hash_lock);
}
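Either version of dm_hash_remove_all() has to cope with devices that cannot be removed yet because another still-present device holds them open, so it keeps making passes until a pass removes nothing more. A toy userspace model of that retry-until-no-progress loop (the dependency chain, device count and messages are invented):

#include <stdio.h>
#include <stdbool.h>

#define NDEV 4

int main(void)
{
	int  deps[NDEV]    = { -1, 0, 1, 2 };	/* 3 -> 2 -> 1 -> 0 dependency chain */
	bool removed[NDEV] = { false };

	for (;;) {
		int dev_skipped = 0, dev_removed = 0;
		int i, j;

		for (i = 0; i < NDEV; i++) {
			bool busy = false;

			if (removed[i])
				continue;
			/* a device still referenced by a live dependent is "open" */
			for (j = 0; j < NDEV; j++)
				if (!removed[j] && deps[j] == i)
					busy = true;
			if (busy) {
				dev_skipped++;
				continue;
			}
			removed[i] = true;
			dev_removed++;
			printf("removed device %d\n", i);
		}

		if (!dev_skipped)
			break;		/* everything gone */
		if (!dev_removed) {
			printf("remove_all left %d open device(s)\n", dev_skipped);
			break;		/* no further progress possible */
		}
	}
	return 0;
}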
static int dm_hash_rename(uint32_t cookie, const char *old, const char *new)
if (as->argc < nr_params) {
ti->error = "not enough path parameters";
- r = -EINVAL;
goto bad;
}
if (!md->barrier_error && io_error != -EOPNOTSUPP)
md->barrier_error = io_error;
end_io_acct(io);
- free_io(md, io);
} else {
end_io_acct(io);
- free_io(md, io);
if (io_error != DM_ENDIO_REQUEUE) {
trace_block_bio_complete(md->queue, bio);
bio_endio(bio, io_error);
}
}
+
+ free_io(md, io);
}
}
return BLKPREP_OK;
}
-/*
- * Returns:
- * 0 : the request has been processed (not requeued)
- * !0 : the request has been requeued
- */
-static int map_request(struct dm_target *ti, struct request *rq,
- struct mapped_device *md)
+static void map_request(struct dm_target *ti, struct request *rq,
+ struct mapped_device *md)
{
- int r, requeued = 0;
+ int r;
struct request *clone = rq->special;
struct dm_rq_target_io *tio = clone->end_io_data;
case DM_MAPIO_REQUEUE:
/* The target wants to requeue the I/O */
dm_requeue_unmapped_request(clone);
- requeued = 1;
break;
default:
if (r > 0) {
dm_kill_unmapped_request(clone, r);
break;
}
-
- return requeued;
}
/*
blk_start_request(rq);
spin_unlock(q->queue_lock);
- if (map_request(ti, rq, md))
- goto requeued;
-
+ map_request(ti, rq, md);
spin_lock_irq(q->queue_lock);
}
goto out;
-requeued:
- spin_lock_irq(q->queue_lock);
-
plug_and_out:
if (!elv_queue_empty(q))
/* Some requests still remain, retry later */
disk_stack_limits(mddev->gendisk, rdev->bdev,
rdev->data_offset << 9);
/* as we don't honour merge_bvec_fn, we must never risk
- * violating it, so limit max_phys_segments to 1 lying within
- * a single page.
+ * violating it, so limit ->max_sector to one PAGE, as
+ * a one page request is never in violation.
*/
- if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (rdev->bdev->bd_disk->queue->merge_bvec_fn &&
+ queue_max_sectors(mddev->queue) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
conf->array_sectors += rdev->sectors;
cnt++;
md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
rdev->sb_page);
md_super_wait(rdev->mddev);
- return num_sectors;
+ return num_sectors / 2; /* kB for sysfs */
}
md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
rdev->sb_page);
md_super_wait(rdev->mddev);
- return num_sectors;
+ return num_sectors / 2; /* kB for sysfs */
}
static struct super_type super_types[] = {
if (!mddev->in_sync || mddev->recovery_cp != MaxSector) { /* not clean */
/* .. if the array isn't clean, an 'even' event must also go
* to spares. */
- if ((mddev->events&1)==0) {
+ if ((mddev->events&1)==0)
nospares = 0;
- sync_req = 2; /* force a second update to get the
- * even/odd in sync */
- }
} else {
/* otherwise an 'odd' event must go to spares */
- if ((mddev->events&1)) {
+ if ((mddev->events&1))
nospares = 0;
- sync_req = 2; /* force a second update to get the
- * even/odd in sync */
- }
}
}
int err = 0;
void __user *argp = (void __user *)arg;
mddev_t *mddev = NULL;
- int ro;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
err = do_md_stop(mddev, 1, 1);
goto done_unlock;
- case BLKROSET:
- if (get_user(ro, (int __user *)(arg))) {
- err = -EFAULT;
- goto done_unlock;
- }
- err = -EINVAL;
-
- /* if the bdev is going readonly the value of mddev->ro
- * does not matter, no writes are coming
- */
- if (ro)
- goto done_unlock;
-
- /* are we are already prepared for writes? */
- if (mddev->ro != 1)
- goto done_unlock;
-
- /* transitioning to readauto need only happen for
- * arrays that call md_write_start
- */
- if (mddev->pers) {
- err = restart_array(mddev);
- if (err == 0) {
- mddev->ro = 2;
- set_disk_ro(mddev->gendisk, 0);
- }
- }
- goto done_unlock;
}
/*
rdev->data_offset << 9);
/* as we don't honour merge_bvec_fn, we must never risk
- * violating it, so limit ->max_phys_segments to one, lying
- * within a single page.
+ * violating it, so limit ->max_sector to one PAGE, as
+ * a one page request is never in violation.
* (Note: it is very unlikely that a device with
* merge_bvec_fn will be involved in multipath.)
*/
- if (q->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (q->merge_bvec_fn &&
+ queue_max_sectors(q) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
conf->working_disks++;
mddev->degraded--;
/* as we don't honour merge_bvec_fn, we must never risk
* violating it, not that we ever expect a device with
* a merge_bvec_fn to be involved in multipath */
- if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (rdev->bdev->bd_disk->queue->merge_bvec_fn &&
+ queue_max_sectors(mddev->queue) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
if (!test_bit(Faulty, &rdev->flags))
conf->working_disks++;
disk_stack_limits(mddev->gendisk, rdev1->bdev,
rdev1->data_offset << 9);
/* as we don't honour merge_bvec_fn, we must never risk
- * violating it, so limit ->max_phys_segments to 1, lying within
- * a single page.
+ * violating it, so limit ->max_sector to one PAGE, as
+ * a one page request is never in violation.
*/
- if (rdev1->bdev->bd_disk->queue->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (rdev1->bdev->bd_disk->queue->merge_bvec_fn &&
+ queue_max_sectors(mddev->queue) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
+
if (!smallest || (rdev1->sectors < smallest->sectors))
smallest = rdev1;
cnt++;
*/
static int read_balance(conf_t *conf, r1bio_t *r1_bio)
{
- const sector_t this_sector = r1_bio->sector;
+ const unsigned long this_sector = r1_bio->sector;
int new_disk = conf->last_used, disk = new_disk;
int wonly_disk = -1;
const int sectors = r1_bio->sectors;
retry:
if (conf->mddev->recovery_cp < MaxSector &&
(this_sector + sectors >= conf->next_resync)) {
- /* Choose the first operational device, for consistency */
+ /* Choose the first operation device, for consistency */
new_disk = 0;
for (rdev = rcu_dereference(conf->mirrors[new_disk].rdev);
}
mirror = conf->mirrors + rdisk;
- if (test_bit(WriteMostly, &mirror->rdev->flags) &&
- bitmap) {
- /* Reading from a write-mostly device must
- * take care not to over-take any writes
- * that are 'behind'
- */
- wait_event(bitmap->behind_wait,
- atomic_read(&bitmap->behind_writes) == 0);
- }
r1_bio->read_disk = rdisk;
read_bio = bio_clone(bio, GFP_NOIO);
if (test_bit(Faulty, &rdev->flags)) {
rdev_dec_pending(rdev, mddev);
r1_bio->bios[i] = NULL;
- } else {
+ } else
r1_bio->bios[i] = bio;
- targets++;
- }
+ targets++;
} else
r1_bio->bios[i] = NULL;
}
set_bit(R1BIO_Degraded, &r1_bio->state);
}
- /* do behind I/O ?
- * Not if there are too many, or cannot allocate memory,
- * or a reader on WriteMostly is waiting for behind writes
- * to flush */
+ /* do behind I/O ? */
if (bitmap &&
atomic_read(&bitmap->behind_writes) < bitmap->max_write_behind &&
- !waitqueue_active(&bitmap->behind_wait) &&
(behind_pages = alloc_behind_pages(bio)) != NULL)
set_bit(R1BIO_BehindIO, &r1_bio->state);
* is not possible.
*/
if (!test_bit(Faulty, &rdev->flags) &&
- !mddev->recovery_disabled &&
mddev->degraded < conf->raid_disks) {
err = -EBUSY;
goto abort;
{
conf_t *conf = mddev->private;
struct bitmap *bitmap = mddev->bitmap;
+ int behind_wait = 0;
/* wait for behind writes to complete */
- if (bitmap && atomic_read(&bitmap->behind_writes) > 0) {
- printk(KERN_INFO "raid1: behind writes in progress on device %s, waiting to stop.\n", mdname(mddev));
+ while (bitmap && atomic_read(&bitmap->behind_writes) > 0) {
+ behind_wait++;
+ printk(KERN_INFO "raid1: behind writes in progress on device %s, waiting to stop (%d)\n", mdname(mddev), behind_wait);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ); /* wait a second */
/* need to kick something here to make sure I/O goes? */
- wait_event(bitmap->behind_wait,
- atomic_read(&bitmap->behind_writes) == 0);
}
raise_barrier(conf);
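The shutdown path above swaps an event-driven wait_event() on behind_wait for a loop that re-checks behind_writes once a second. A small pthread sketch of the event-driven form, purely as an illustration of the pattern (the counter, worker thread and timings are made up and are not the md code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int behind_writes = 3;		/* stand-in for bitmap->behind_writes */

static void *writer(void *arg)
{
	int i;

	(void)arg;
	for (i = 0; i < 3; i++) {
		usleep(100 * 1000);	/* simulate a write-behind completing */
		pthread_mutex_lock(&lock);
		if (--behind_writes == 0)
			pthread_cond_broadcast(&done);	/* the wake_up() analogue */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);

	/* wait_event() analogue: sleep until the condition is true instead of
	 * re-checking it on a fixed one-second timer. */
	pthread_mutex_lock(&lock);
	while (behind_writes > 0)
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);

	printf("all behind writes drained\n");
	pthread_join(t, NULL);
	return 0;
}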
*/
static int read_balance(conf_t *conf, r10bio_t *r10_bio)
{
- const sector_t this_sector = r10_bio->sector;
+ const unsigned long this_sector = r10_bio->sector;
int disk, slot, nslot;
const int sectors = r10_bio->sectors;
sector_t new_distance, current_distance;
*/
bp = bio_split(bio,
chunk_sects - (bio->bi_sector & (chunk_sects - 1)) );
-
- /* Each of these 'make_request' calls will call 'wait_barrier'.
- * If the first succeeds but the second blocks due to the resync
- * thread raising the barrier, we will deadlock because the
- * IO to the underlying device will be queued in generic_make_request
- * and will never complete, so will never reduce nr_pending.
- * So increment nr_waiting here so no new raise_barriers will
- * succeed, and so the second wait_barrier cannot block.
- */
- spin_lock_irq(&conf->resync_lock);
- conf->nr_waiting++;
- spin_unlock_irq(&conf->resync_lock);
-
if (make_request(q, &bp->bio1))
generic_make_request(&bp->bio1);
if (make_request(q, &bp->bio2))
generic_make_request(&bp->bio2);
- spin_lock_irq(&conf->resync_lock);
- conf->nr_waiting--;
- wake_up(&conf->wait_barrier);
- spin_unlock_irq(&conf->resync_lock);
-
bio_pair_release(bp);
return 0;
bad_map:
disk_stack_limits(mddev->gendisk, rdev->bdev,
rdev->data_offset << 9);
- /* as we don't honour merge_bvec_fn, we must
- * never risk violating it, so limit
- * ->max_phys_segments to one lying with a single
- * page, as a one page request is never in
- * violation.
+ /* as we don't honour merge_bvec_fn, we must never risk
+ * violating it, so limit ->max_sector to one PAGE, as
+ * a one page request is never in violation.
*/
- if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (rdev->bdev->bd_disk->queue->merge_bvec_fn &&
+ queue_max_sectors(mddev->queue) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
p->head_position = 0;
rdev->raid_disk = mirror;
disk_stack_limits(mddev->gendisk, rdev->bdev,
rdev->data_offset << 9);
/* as we don't honour merge_bvec_fn, we must never risk
- * violating it, so limit max_phys_segments to 1 lying
- * within a single page.
+ * violating it, so limit ->max_sector to one PAGE, as
+ * a one page request is never in violation.
*/
- if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
- blk_queue_max_phys_segments(mddev->queue, 1);
- blk_queue_segment_boundary(mddev->queue,
- PAGE_CACHE_SIZE - 1);
- }
+ if (rdev->bdev->bd_disk->queue->merge_bvec_fn &&
+ queue_max_sectors(mddev->queue) > (PAGE_SIZE>>9))
+ blk_queue_max_sectors(mddev->queue, PAGE_SIZE>>9);
disk->head_position = 0;
}
clear_bit(R5_UPTODATE, &sh->dev[i].flags);
atomic_inc(&rdev->read_errors);
- if (conf->mddev->degraded >= conf->max_degraded)
+ if (conf->mddev->degraded)
printk_rl(KERN_WARNING
"raid5:%s: read error not correctable "
"(sector %llu on %s).\n",
int previous, int *dd_idx,
struct stripe_head *sh)
{
- sector_t stripe, stripe2;
- sector_t chunk_number;
+ long stripe;
+ unsigned long chunk_number;
unsigned int chunk_offset;
int pd_idx, qd_idx;
int ddf_layout = 0;
*/
chunk_offset = sector_div(r_sector, sectors_per_chunk);
chunk_number = r_sector;
+ BUG_ON(r_sector != chunk_number);
/*
* Compute the stripe number
*/
- stripe = chunk_number;
- *dd_idx = sector_div(stripe, data_disks);
- stripe2 = stripe;
+ stripe = chunk_number / data_disks;
+
+ /*
+ * Compute the data disk and parity disk indexes inside the stripe
+ */
+ *dd_idx = chunk_number % data_disks;
+
/*
* Select the parity disk based on the user selected algorithm.
*/
case 5:
switch (algorithm) {
case ALGORITHM_LEFT_ASYMMETRIC:
- pd_idx = data_disks - sector_div(stripe2, raid_disks);
+ pd_idx = data_disks - stripe % raid_disks;
if (*dd_idx >= pd_idx)
(*dd_idx)++;
break;
case ALGORITHM_RIGHT_ASYMMETRIC:
- pd_idx = sector_div(stripe2, raid_disks);
+ pd_idx = stripe % raid_disks;
if (*dd_idx >= pd_idx)
(*dd_idx)++;
break;
case ALGORITHM_LEFT_SYMMETRIC:
- pd_idx = data_disks - sector_div(stripe2, raid_disks);
+ pd_idx = data_disks - stripe % raid_disks;
*dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
break;
case ALGORITHM_RIGHT_SYMMETRIC:
- pd_idx = sector_div(stripe2, raid_disks);
+ pd_idx = stripe % raid_disks;
*dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
break;
case ALGORITHM_PARITY_0:
switch (algorithm) {
case ALGORITHM_LEFT_ASYMMETRIC:
- pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
+ pd_idx = raid_disks - 1 - (stripe % raid_disks);
qd_idx = pd_idx + 1;
if (pd_idx == raid_disks-1) {
(*dd_idx)++; /* Q D D D P */
(*dd_idx) += 2; /* D D P Q D */
break;
case ALGORITHM_RIGHT_ASYMMETRIC:
- pd_idx = sector_div(stripe2, raid_disks);
+ pd_idx = stripe % raid_disks;
qd_idx = pd_idx + 1;
if (pd_idx == raid_disks-1) {
(*dd_idx)++; /* Q D D D P */
(*dd_idx) += 2; /* D D P Q D */
break;
case ALGORITHM_LEFT_SYMMETRIC:
- pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
+ pd_idx = raid_disks - 1 - (stripe % raid_disks);
qd_idx = (pd_idx + 1) % raid_disks;
*dd_idx = (pd_idx + 2 + *dd_idx) % raid_disks;
break;
case ALGORITHM_RIGHT_SYMMETRIC:
- pd_idx = sector_div(stripe2, raid_disks);
+ pd_idx = stripe % raid_disks;
qd_idx = (pd_idx + 1) % raid_disks;
*dd_idx = (pd_idx + 2 + *dd_idx) % raid_disks;
break;
/* Exactly the same as RIGHT_ASYMMETRIC, but or
* of blocks for computing Q is different.
*/
- pd_idx = sector_div(stripe2, raid_disks);
+ pd_idx = stripe % raid_disks;
qd_idx = pd_idx + 1;
if (pd_idx == raid_disks-1) {
(*dd_idx)++; /* Q D D D P */
* D D D P Q rather than
* Q D D D P
*/
- stripe2 += 1;
- pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
+ pd_idx = raid_disks - 1 - ((stripe + 1) % raid_disks);
qd_idx = pd_idx + 1;
if (pd_idx == raid_disks-1) {
(*dd_idx)++; /* Q D D D P */
case ALGORITHM_ROTATING_N_CONTINUE:
/* Same as left_symmetric but Q is before P */
- pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
+ pd_idx = raid_disks - 1 - (stripe % raid_disks);
qd_idx = (pd_idx + raid_disks - 1) % raid_disks;
*dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
ddf_layout = 1;
case ALGORITHM_LEFT_ASYMMETRIC_6:
/* RAID5 left_asymmetric, with Q on last device */
- pd_idx = data_disks - sector_div(stripe2, raid_disks-1);
+ pd_idx = data_disks - stripe % (raid_disks-1);
if (*dd_idx >= pd_idx)
(*dd_idx)++;
qd_idx = raid_disks - 1;
break;
case ALGORITHM_RIGHT_ASYMMETRIC_6:
- pd_idx = sector_div(stripe2, raid_disks-1);
+ pd_idx = stripe % (raid_disks-1);
if (*dd_idx >= pd_idx)
(*dd_idx)++;
qd_idx = raid_disks - 1;
break;
case ALGORITHM_LEFT_SYMMETRIC_6:
- pd_idx = data_disks - sector_div(stripe2, raid_disks-1);
+ pd_idx = data_disks - stripe % (raid_disks-1);
*dd_idx = (pd_idx + 1 + *dd_idx) % (raid_disks-1);
qd_idx = raid_disks - 1;
break;
case ALGORITHM_RIGHT_SYMMETRIC_6:
- pd_idx = sector_div(stripe2, raid_disks-1);
+ pd_idx = stripe % (raid_disks-1);
*dd_idx = (pd_idx + 1 + *dd_idx) % (raid_disks-1);
qd_idx = raid_disks - 1;
break;
: conf->algorithm;
sector_t stripe;
int chunk_offset;
- sector_t chunk_number;
- int dummy1, dd_idx = i;
+ int chunk_number, dummy1, dd_idx = i;
sector_t r_sector;
struct stripe_head sh2;
chunk_offset = sector_div(new_sector, sectors_per_chunk);
stripe = new_sector;
+ BUG_ON(new_sector != stripe);
if (i == sh->pd_idx)
return 0;
}
chunk_number = stripe * data_disks + i;
- r_sector = chunk_number * sectors_per_chunk + chunk_offset;
+ r_sector = (sector_t)chunk_number * sectors_per_chunk + chunk_offset;
check = raid5_compute_sector(conf, r_sector,
previous, &dummy1, &sh2);
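The arithmetic changes above move the chunk number between sector_t with sector_div() and plain long/int with % and *; the distinction only matters once the values no longer fit in 32 bits, which is what the added BUG_ON()s are there to catch. A standalone illustration of what truncating a large sector number to a 32-bit type does to the computed data-disk index (the sector address, chunk size and disk count are arbitrary):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	uint64_t r_sector = (1ULL << 33) + 7;	/* a sector well past the 32-bit limit */
	unsigned sectors_per_chunk = 128;
	unsigned data_disks = 3;

	/* 64-bit path: what keeping sector_t and using sector_div() preserves. */
	uint64_t chunk64 = r_sector / sectors_per_chunk;

	/* 32-bit path: what assigning r_sector to a 32-bit "unsigned long"
	 * (as on i386) silently does before the same computation. */
	uint32_t chunk32 = (uint32_t)r_sector / sectors_per_chunk;

	printf("64-bit chunk %" PRIu64 " -> dd_idx %u\n",
	       chunk64, (unsigned)(chunk64 % data_disks));
	printf("32-bit chunk %" PRIu32 " -> dd_idx %u\n",
	       chunk32, (unsigned)(chunk32 % data_disks));
	return 0;
}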
const u8 *ts, *ts_end, *from_where = NULL;
u8 ts_remain = 0, how_much = 0, new_ts = 1;
struct ethhdr *ethh = NULL;
- bool error = false;
#ifdef ULE_DEBUG
/* The code inside ULE_DEBUG keeps a history of the last 100 TS cells processed. */
/* Drop partly decoded SNDU, reset state, resync on PUSI. */
if (priv->ule_skb) {
- error = true;
- dev_kfree_skb(priv->ule_skb);
- }
-
- if (error || priv->ule_sndu_remain) {
+ dev_kfree_skb( priv->ule_skb );
dev->stats.rx_errors++;
dev->stats.rx_frame_errors++;
- error = false;
}
-
reset_ule(priv);
priv->need_pusi = 1;
continue;
"bytes left in TS. Resyncing.\n", ts_remain);
priv->ule_sndu_len = 0;
priv->need_pusi = 1;
- ts += TS_SZ;
continue;
}
from_where += 2;
}
- priv->ule_sndu_remain = priv->ule_sndu_len + 2;
/*
* State of current TS:
* ts_remain (remaining bytes in the current TS cell)
*/
switch (ts_remain) {
case 1:
- priv->ule_sndu_remain--;
priv->ule_sndu_type = from_where[0] << 8;
priv->ule_sndu_type_1 = 1; /* first byte of ule_type is set. */
ts_remain -= 1; from_where += 1;
default: /* complete ULE header is present in current TS. */
/* Extract ULE type field. */
if (priv->ule_sndu_type_1) {
- priv->ule_sndu_type_1 = 0;
priv->ule_sndu_type |= from_where[0];
from_where += 1; /* points to payload start. */
ts_remain -= 1;
select DVB_MT352 if !DVB_FE_CUSTOMISE
select DVB_ZL10353 if !DVB_FE_CUSTOMISE
select DVB_DIB7000P if !DVB_FE_CUSTOMISE
+ select DVB_LGS8GL5 if !DVB_FE_CUSTOMISE
select DVB_TUNER_DIB0070 if !DVB_FE_CUSTOMISE
- select DVB_LGS8GXX if !DVB_FE_CUSTOMISE
select MEDIA_TUNER_SIMPLE if !MEDIA_TUNER_CUSTOMISE
select MEDIA_TUNER_XC2028 if !MEDIA_TUNER_CUSTOMISE
select MEDIA_TUNER_MXL5005S if !MEDIA_TUNER_CUSTOMISE
spi_bias *= qam_tab[p->constellation];
spi_bias /= p->code_rate_HP + 1;
spi_bias /= (guard_tab[p->guard_interval] + 32);
- spi_bias *= 1000;
- spi_bias /= 1000 + ppm/1000;
+ spi_bias *= 1000ULL;
+ spi_bias /= 1000ULL + ppm/1000;
spi_bias *= p->code_rate_HP;
val0x04 = (p->transmission_mode << 2) | p->guard_interval;
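The ULL suffixes toggled above decide whether the spi_bias scaling is carried out in 64-bit arithmetic. As a toy illustration (with invented values and an explicitly 32-bit accumulator, not the driver's actual types), multiplying an already-large intermediate by 1000 in 32 bits wraps, while promoting one operand to unsigned long long keeps the product exact:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	uint32_t spi_bias = 6000000;	/* made-up intermediate value */
	long ppm = 25000;

	uint32_t narrow = spi_bias * 1000;		/* wraps modulo 2^32 */
	uint64_t wide = (uint64_t)spi_bias * 1000ULL;	/* exact 64-bit product */

	printf("32-bit: %" PRIu32 " -> after /(1000 + ppm/1000): %" PRIu32 "\n",
	       narrow, narrow / (uint32_t)(1000 + ppm / 1000));
	printf("64-bit: %" PRIu64 " -> after /(1000 + ppm/1000): %" PRIu64 "\n",
	       wide, wide / (uint64_t)(1000ULL + ppm / 1000));
	return 0;
}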
select DVB_VES1820 if !DVB_FE_CUSTOMISE
select DVB_L64781 if !DVB_FE_CUSTOMISE
select DVB_TDA8083 if !DVB_FE_CUSTOMISE
+ select DVB_TDA10021 if !DVB_FE_CUSTOMISE
+ select DVB_TDA10023 if !DVB_FE_CUSTOMISE
select DVB_S5H1420 if !DVB_FE_CUSTOMISE
select DVB_TDA10086 if !DVB_FE_CUSTOMISE
select DVB_TDA826X if !DVB_FE_CUSTOMISE
select DVB_LNBP21 if !DVB_FE_CUSTOMISE
select DVB_TDA1004X if !DVB_FE_CUSTOMISE
- select DVB_ISL6423 if !DVB_FE_CUSTOMISE
- select DVB_STV090x if !DVB_FE_CUSTOMISE
- select DVB_STV6110x if !DVB_FE_CUSTOMISE
help
Support for simple SAA7146 based DVB cards (so called Budget-
or Nova-PCI cards) without onboard MPEG2 decoder, and without
&budget->i2c_adap,
&tt1600_isl6423_config);
+ } else {
+ dvb_frontend_detach(budget->dvb_frontend);
+ budget->dvb_frontend = NULL;
}
}
break;
request_modules(btv);
}
- init_bttv_i2c_ir(btv);
bttv_input_init(btv);
/* everything is fine */
if (0 == btv->i2c_rc && i2c_scan)
do_i2c_scan(btv->c.v4l2_dev.name, &btv->i2c_client);
- return btv->i2c_rc;
-}
-
-/* Instantiate the I2C IR receiver device, if present */
-void __devinit init_bttv_i2c_ir(struct bttv *btv)
-{
+ /* Instantiate the IR receiver device, if present */
if (0 == btv->i2c_rc) {
struct i2c_board_info info;
/* The external IR receiver is at i2c address 0x34 (0x35 for
strlcpy(info.type, "ir_video", I2C_NAME_SIZE);
i2c_new_probed_device(&btv->c.i2c_adap, &info, addr_list);
}
+ return btv->i2c_rc;
}
int __devexit fini_bttv_i2c(struct bttv *btv)
extern unsigned int bttv_gpio;
extern void bttv_gpio_tracking(struct bttv *btv, char *comment);
extern int init_bttv_i2c(struct bttv *btv);
-extern void init_bttv_i2c_ir(struct bttv *btv);
extern int fini_bttv_i2c(struct bttv *btv);
#define bttv_printk if (bttv_verbose) printk
dev->board.name, dev->model);
/* set the direction for GPIO pins */
- if (dev->board.tuner_gpio) {
- cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1);
- cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1);
- cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1);
+ cx231xx_set_gpio_direction(dev, dev->board.tuner_gpio->bit, 1);
+ cx231xx_set_gpio_value(dev, dev->board.tuner_gpio->bit, 1);
+ cx231xx_set_gpio_direction(dev, dev->board.tuner_sif_gpio, 1);
- /* request some modules if any required */
+ /* request some modules if any required */
- /* reset the Tuner */
- cx231xx_gpio_set(dev, dev->board.tuner_gpio);
- }
+ /* reset the Tuner */
+ cx231xx_gpio_set(dev, dev->board.tuner_gpio);
/* set the mode to Analog mode initially */
cx231xx_set_mode(dev, CX231XX_ANALOG_MODE);
memset(&info, 0, sizeof(struct i2c_board_info));
strlcpy(info.type, "ir_video", I2C_NAME_SIZE);
- /*
- * We can't call i2c_new_probed_device() because it uses
- * quick writes for probing and the IR receiver device only
- * replies to reads.
- */
- if (i2c_smbus_xfer(&bus->i2c_adap, addr_list[0], 0,
- I2C_SMBUS_READ, 0, I2C_SMBUS_QUICK,
- NULL) >= 0) {
- info.addr = addr_list[0];
- i2c_new_device(&bus->i2c_adap, &info);
- }
+ i2c_new_probed_device(&bus->i2c_adap, &info, addr_list);
}
return bus->i2c_rc;
0x18, 0x6b, 0x71,
I2C_CLIENT_END
};
- const unsigned short *addrp;
memset(&info, 0, sizeof(struct i2c_board_info));
strlcpy(info.type, "ir_video", I2C_NAME_SIZE);
- /*
- * We can't call i2c_new_probed_device() because it uses
- * quick writes for probing and at least some R receiver
- * devices only reply to reads.
- */
- for (addrp = addr_list; *addrp != I2C_CLIENT_END; addrp++) {
- if (i2c_smbus_xfer(&core->i2c_adap, *addrp, 0,
- I2C_SMBUS_READ, 0,
- I2C_SMBUS_QUICK, NULL) >= 0) {
- info.addr = *addrp;
- i2c_new_device(&core->i2c_adap, &info);
- break;
- }
- }
+ i2c_new_probed_device(&core->i2c_adap, &info, addr_list);
}
return core->i2c_rc;
}
if (dev->dvb) {
unregister_dvb(dev->dvb);
- kfree(dev->dvb);
dev->dvb = NULL;
}
{0x13, 0x00, {0x01}, 1},
{0, 0, {0}, 0}
};
- /* Without this command the cam won't work with USB-UHCI */
- gspca_dev->usb_buf[0] = 0x0a;
- gspca_dev->usb_buf[1] = 0x00;
- err_code = mr_write(gspca_dev, 2);
- if (err_code < 0)
- return err_code;
err_code = sensor_write_regs(gspca_dev, cif_sensor1_init_data,
ARRAY_SIZE(cif_sensor1_init_data));
}
{USB_DEVICE(0x046D, 0x08F5), .driver_info = BRIDGE_ST6422 },
/* QuickCam Messenger (new) */
{USB_DEVICE(0x046D, 0x08F6), .driver_info = BRIDGE_ST6422 },
+ /* QuickCam Messenger (new) */
+ {USB_DEVICE(0x046D, 0x08DA), .driver_info = BRIDGE_ST6422 },
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
struct fb_vblank vblank;
u32 trace;
- memset(&vblank, 0, sizeof(struct fb_vblank));
-
vblank.flags = FB_VBLANK_HAVE_COUNT |FB_VBLANK_HAVE_VCOUNT |
FB_VBLANK_HAVE_VSYNC;
trace = read_reg(0x028c0) >> 16;
buf[0] = 0xff; /* fixed */
ret = send_control_msg(pdev,
- SET_LUM_CTL, SHUTTER_MODE_FORMATTER, &buf, 1);
+ SET_LUM_CTL, SHUTTER_MODE_FORMATTER, &buf, sizeof(buf));
if (!mode && ret >= 0) {
if (value < 0)
ctrl |= SAA7134_MAIN_CTRL_TE5;
irq |= SAA7134_IRQ1_INTE_RA2_1 |
SAA7134_IRQ1_INTE_RA2_0;
+
+ /* dma: setup channel 5 (= TS) */
+
+ saa_writeb(SAA7134_TS_DMA0, (dev->ts.nr_packets - 1) & 0xff);
+ saa_writeb(SAA7134_TS_DMA1,
+ ((dev->ts.nr_packets - 1) >> 8) & 0xff);
+ /* TSNOPIT=0, TSCOLAP=0 */
+ saa_writeb(SAA7134_TS_DMA2,
+ (((dev->ts.nr_packets - 1) >> 16) & 0x3f) | 0x00);
+ saa_writel(SAA7134_RS_PITCH(5), TS_PACKET_SIZE);
+ saa_writel(SAA7134_RS_CONTROL(5), SAA7134_RS_CONTROL_BURST_16 |
+ SAA7134_RS_CONTROL_ME |
+ (dev->ts.pt_ts.dma >> 12));
}
/* set task conditions + field handling */
BUG_ON(dev->ts_started);
- /* dma: setup channel 5 (= TS) */
- saa_writeb(SAA7134_TS_DMA0, (dev->ts.nr_packets - 1) & 0xff);
- saa_writeb(SAA7134_TS_DMA1,
- ((dev->ts.nr_packets - 1) >> 8) & 0xff);
- /* TSNOPIT=0, TSCOLAP=0 */
- saa_writeb(SAA7134_TS_DMA2,
- (((dev->ts.nr_packets - 1) >> 16) & 0x3f) | 0x00);
- saa_writel(SAA7134_RS_PITCH(5), TS_PACKET_SIZE);
- saa_writel(SAA7134_RS_CONTROL(5), SAA7134_RS_CONTROL_BURST_16 |
- SAA7134_RS_CONTROL_ME |
- (dev->ts.pt_ts.dma >> 12));
-
- /* reset hardware TS buffers */
saa_writeb(SAA7134_TS_SERIAL1, 0x00);
saa_writeb(SAA7134_TS_SERIAL1, 0x03);
saa_writeb(SAA7134_TS_SERIAL1, 0x00);
ret = 0;
goto out;
- case V4L2_CTRL_TYPE_BUTTON:
- v4l2_ctrl->minimum = 0;
- v4l2_ctrl->maximum = 0;
- v4l2_ctrl->step = 0;
- ret = 0;
- goto out;
-
default:
break;
}
.guid = UVC_GUID_FORMAT_YUY2,
.fcc = V4L2_PIX_FMT_YUYV,
},
- {
- .name = "YUV 4:2:2 (YUYV)",
- .guid = UVC_GUID_FORMAT_YUY2_ISIGHT,
- .fcc = V4L2_PIX_FMT_YUYV,
- },
{
.name = "YUV 4:2:0 (NV12)",
.guid = UVC_GUID_FORMAT_NV12,
.fcc = V4L2_PIX_FMT_UYVY,
},
{
- .name = "Greyscale (8-bit)",
+ .name = "Greyscale",
.guid = UVC_GUID_FORMAT_Y800,
.fcc = V4L2_PIX_FMT_GREY,
},
- {
- .name = "Greyscale (16-bit)",
- .guid = UVC_GUID_FORMAT_Y16,
- .fcc = V4L2_PIX_FMT_Y16,
- },
{
.name = "RGB Bayer",
.guid = UVC_GUID_FORMAT_BY8,
/* Parse the frame descriptors. Only uncompressed, MJPEG and frame
* based formats have frame descriptors.
*/
- while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
- buffer[2] == ftype) {
+ while (buflen > 2 && buffer[2] == ftype) {
frame = &format->frame[format->nframes];
if (ftype != UVC_VS_FRAME_FRAME_BASED)
n = buflen > 25 ? buffer[25] : 0;
buffer += buffer[0];
}
- if (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
- buffer[2] == UVC_VS_STILL_IMAGE_FRAME) {
+ if (buflen > 2 && buffer[2] == UVC_VS_STILL_IMAGE_FRAME) {
buflen -= buffer[0];
buffer += buffer[0];
}
- if (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
- buffer[2] == UVC_VS_COLORFORMAT) {
+ if (buflen > 2 && buffer[2] == UVC_VS_COLORFORMAT) {
if (buflen < 6) {
uvc_trace(UVC_TRACE_DESCR, "device %d videostreaming "
"interface %d COLORFORMAT error\n",
buffer += buffer[0];
}
- if (buflen)
- uvc_trace(UVC_TRACE_DESCR, "device %d videostreaming interface "
- "%d has %u bytes of trailing descriptor garbage.\n",
- dev->udev->devnum, alts->desc.bInterfaceNumber, buflen);
-
/* Parse the alternate settings to find the maximum bandwidth. */
for (i = 0; i < intf->num_altsetting; ++i) {
struct usb_host_endpoint *ep;
.bInterfaceSubClass = 1,
.bInterfaceProtocol = 0,
.driver_info = UVC_QUIRK_STREAM_NO_FID },
- /* Syntek (Packard Bell EasyNote MX52 */
- { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = 0x174f,
- .idProduct = 0x8a12,
- .bInterfaceClass = USB_CLASS_VIDEO,
- .bInterfaceSubClass = 1,
- .bInterfaceProtocol = 0,
- .driver_info = UVC_QUIRK_STREAM_NO_FID },
/* Syntek (Asus F9SG) */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_INT_INFO,
.bInterfaceSubClass = 1,
.bInterfaceProtocol = 0,
.driver_info = UVC_QUIRK_PROBE_MINMAX },
- /* Arkmicro unbranded */
- { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = 0x18ec,
- .idProduct = 0x3290,
- .bInterfaceClass = USB_CLASS_VIDEO,
- .bInterfaceSubClass = 1,
- .bInterfaceProtocol = 0,
- .driver_info = UVC_QUIRK_PROBE_DEF },
/* Bodelin ProScopeHR */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_DEV_HI
#define UVC_GUID_FORMAT_YUY2 \
{ 'Y', 'U', 'Y', '2', 0x00, 0x00, 0x10, 0x00, \
0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
-#define UVC_GUID_FORMAT_YUY2_ISIGHT \
- { 'Y', 'U', 'Y', '2', 0x00, 0x00, 0x10, 0x00, \
- 0x80, 0x00, 0x00, 0x00, 0x00, 0x38, 0x9b, 0x71}
#define UVC_GUID_FORMAT_NV12 \
{ 'N', 'V', '1', '2', 0x00, 0x00, 0x10, 0x00, \
0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
#define UVC_GUID_FORMAT_Y800 \
{ 'Y', '8', '0', '0', 0x00, 0x00, 0x10, 0x00, \
0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
-#define UVC_GUID_FORMAT_Y16 \
- { 'Y', '1', '6', ' ', 0x00, 0x00, 0x10, 0x00, \
- 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
#define UVC_GUID_FORMAT_BY8 \
{ 'B', 'Y', '8', ' ', 0x00, 0x00, 0x10, 0x00, \
0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
+
/* ------------------------------------------------------------------------
* Driver specific constants.
*/
struct video_code32 {
char loadwhat[16]; /* name or tag of file being passed */
compat_int_t datasize;
- compat_uptr_t data;
+ unsigned char *data;
};
-static struct video_code __user *get_microcode32(struct video_code32 *kp)
+static int get_microcode32(struct video_code *kp, struct video_code32 __user *up)
{
- struct video_code __user *up;
-
- up = compat_alloc_user_space(sizeof(*up));
-
- /*
- * NOTE! We don't actually care if these fail. If the
- * user address is invalid, the native ioctl will do
- * the error handling for us
- */
- (void) copy_to_user(up->loadwhat, kp->loadwhat, sizeof(up->loadwhat));
- (void) put_user(kp->datasize, &up->datasize);
- (void) put_user(compat_ptr(kp->data), &up->data);
- return up;
+ if (!access_ok(VERIFY_READ, up, sizeof(struct video_code32)) ||
+ copy_from_user(kp->loadwhat, up->loadwhat, sizeof(up->loadwhat)) ||
+ get_user(kp->datasize, &up->datasize) ||
+ copy_from_user(kp->data, up->data, up->datasize))
+ return -EFAULT;
+ return 0;
}
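Both versions of the helper above exist because a 32-bit process and a 64-bit kernel disagree about the layout of struct video_code: the 32-bit side stores the data pointer in 4 bytes, so the kernel has to describe that member as a fixed-width integer (compat_uptr_t) and widen it before use. A standalone sketch of the layout difference and of the compat_ptr()-style widening (member names mirror the ones above, values are invented):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct video_code_native {
	char loadwhat[16];
	int  datasize;
	unsigned char *data;		/* 8 bytes on a 64-bit kernel */
};

struct video_code_compat {
	char     loadwhat[16];
	int32_t  datasize;
	uint32_t data;			/* always 4 bytes: compat_uptr_t */
};

int main(void)
{
	struct video_code_compat c = { "firmware", 4, 0x1000u };
	struct video_code_native n;

	printf("native layout: %zu bytes, compat layout: %zu bytes\n",
	       sizeof(struct video_code_native), sizeof(struct video_code_compat));

	/* Widening the 32-bit user pointer, as compat_ptr() does in the kernel. */
	memcpy(n.loadwhat, c.loadwhat, sizeof(n.loadwhat));
	n.datasize = c.datasize;
	n.data = (unsigned char *)(uintptr_t)c.data;

	printf("compat data field 0x%x becomes pointer %p\n", c.data, (void *)n.data);
	return 0;
}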
#define VIDIOCGTUNER32 _IOWR('v', 4, struct video_tuner32)
struct video_tuner vt;
struct video_buffer vb;
struct video_window vw;
- struct video_code32 vc;
+ struct video_code vc;
struct video_audio va;
#endif
struct v4l2_format v2f;
break;
case VIDIOCSMICROCODE:
- /* Copy the 32-bit "video_code32" to kernel space */
- if (copy_from_user(&karg.vc, up, sizeof(karg.vc)))
- return -EFAULT;
- /* Convert the 32-bit version to a 64-bit version in user space */
- up = get_microcode32(&karg.vc);
+ err = get_microcode32(&karg.vc, up);
+ compatible_arg = 0;
break;
case VIDIOCSFREQ:
struct mspro_block_data *msb = memstick_get_drvdata(card);
unsigned long flags;
+ del_gendisk(msb->disk);
+ dev_dbg(&card->dev, "mspro block remove\n");
spin_lock_irqsave(&msb->q_lock, flags);
msb->eject = 1;
blk_start_queue(msb->queue);
spin_unlock_irqrestore(&msb->q_lock, flags);
- del_gendisk(msb->disk);
- dev_dbg(&card->dev, "mspro block remove\n");
-
blk_cleanup_queue(msb->queue);
msb->queue = NULL;
*/
iocnumX = khdr.iocnum & 0xFF;
if (((iocnum = mpt_verify_adapter(iocnumX, &iocp)) < 0) ||
- (iocp == NULL))
+ (iocp == NULL)) {
+ printk(KERN_DEBUG MYNAM "%s::mptctl_ioctl() @%d - ioc%d not found!\n",
+ __FILE__, __LINE__, iocnumX);
return -ENODEV;
+ }
if (!iocp->active) {
printk(KERN_DEBUG MYNAM "%s::mptctl_ioctl() @%d - Controller disabled.\n",
* precedence!
*/
sc->result = (DID_OK << 16) | scsi_status;
- if (!(scsi_state & MPI_SCSI_STATE_AUTOSENSE_VALID)) {
-
- /*
- * For an Errata on LSI53C1030
- * When the length of request data
- * and transfer data are different
- * with result of command (READ or VERIFY),
- * DID_SOFT_ERROR is set.
+ if (scsi_state & MPI_SCSI_STATE_AUTOSENSE_VALID) {
+ /* Have already saved the status and sense data
*/
- if (ioc->bus_type == SPI) {
- if (pScsiReq->CDB[0] == READ_6 ||
- pScsiReq->CDB[0] == READ_10 ||
- pScsiReq->CDB[0] == READ_12 ||
- pScsiReq->CDB[0] == READ_16 ||
- pScsiReq->CDB[0] == VERIFY ||
- pScsiReq->CDB[0] == VERIFY_16) {
- if (scsi_bufflen(sc) !=
- xfer_cnt) {
- sc->result =
- DID_SOFT_ERROR << 16;
- printk(KERN_WARNING "Errata"
- "on LSI53C1030 occurred."
- "sc->req_bufflen=0x%02x,"
- "xfer_cnt=0x%02x\n",
- scsi_bufflen(sc),
- xfer_cnt);
- }
- }
- }
-
+ ;
+ } else {
if (xfer_cnt < sc->underflow) {
if (scsi_status == SAM_STAT_BUSY)
sc->result = SAM_STAT_BUSY;
sc->result = (DID_OK << 16) | scsi_status;
if (scsi_state == 0) {
;
- } else if (scsi_state &
- MPI_SCSI_STATE_AUTOSENSE_VALID) {
-
- /*
- * For potential trouble on LSI53C1030.
- * (date:2007.xx.)
- * It is checked whether the length of
- * request data is equal to
- * the length of transfer and residual.
- * MEDIUM_ERROR is set by incorrect data.
- */
- if ((ioc->bus_type == SPI) &&
- (sc->sense_buffer[2] & 0x20)) {
- u32 difftransfer;
- difftransfer =
- sc->sense_buffer[3] << 24 |
- sc->sense_buffer[4] << 16 |
- sc->sense_buffer[5] << 8 |
- sc->sense_buffer[6];
- if (((sc->sense_buffer[3] & 0x80) ==
- 0x80) && (scsi_bufflen(sc)
- != xfer_cnt)) {
- sc->sense_buffer[2] =
- MEDIUM_ERROR;
- sc->sense_buffer[12] = 0xff;
- sc->sense_buffer[13] = 0xff;
- printk(KERN_WARNING"Errata"
- "on LSI53C1030 occurred."
- "sc->req_bufflen=0x%02x,"
- "xfer_cnt=0x%02x\n" ,
- scsi_bufflen(sc),
- xfer_cnt);
- }
- if (((sc->sense_buffer[3] & 0x80)
- != 0x80) &&
- (scsi_bufflen(sc) !=
- xfer_cnt + difftransfer)) {
- sc->sense_buffer[2] =
- MEDIUM_ERROR;
- sc->sense_buffer[12] = 0xff;
- sc->sense_buffer[13] = 0xff;
- printk(KERN_WARNING
- "Errata on LSI53C1030 occurred"
- "sc->req_bufflen=0x%02x,"
- " xfer_cnt=0x%02x,"
- "difftransfer=0x%02x\n",
- scsi_bufflen(sc),
- xfer_cnt,
- difftransfer);
- }
- }
-
+ } else if (scsi_state & MPI_SCSI_STATE_AUTOSENSE_VALID) {
/*
* If running against circa 200003dd 909 MPT f/w,
* may get this (AUTOSENSE_VALID) for actual TASK_SET_FULL
ioc->name,sdev->tagged_supported, sdev->simple_tags,
sdev->ordered_tags));
- blk_queue_dma_alignment (sdev->request_queue, 512 - 1);
-
return 0;
}
cdev->groups = enclosure_groups;
err = device_register(cdev);
- if (err) {
- ecomp->number = -1;
- put_device(cdev);
- return ERR_PTR(err);
- }
+ if (err)
+ ERR_PTR(err);
return ecomp;
}
* nodes that can comprise an access protection grouping. The access
* protection is in regards to memory, IOI and IPI.
*/
+ max_regions = 64;
region_size = xp_region_size;
- if (is_uv())
- max_regions = 256;
- else {
- max_regions = 64;
-
- switch (region_size) {
- case 128:
- max_regions *= 2;
- case 64:
- max_regions *= 2;
- case 32:
- max_regions *= 2;
- region_size = 16;
- DBUG_ON(!is_shub2());
- }
+ switch (region_size) {
+ case 128:
+ max_regions *= 2;
+ case 64:
+ max_regions *= 2;
+ case 32:
+ max_regions *= 2;
+ region_size = 16;
+ DBUG_ON(!is_shub2());
}
for (region = 0; region < max_regions; region++) {
enum xp_retval xp_ret;
int ret;
int nid;
- int nasid;
int pg_order;
struct page *page;
struct xpc_gru_mq_uv *mq;
goto out_5;
}
- nasid = UV_PNODE_TO_NASID(uv_cpu_to_pnode(cpu));
-
mmr_value = (struct uv_IO_APIC_route_entry *)&mq->mmr_value;
ret = gru_create_message_queue(mq->gru_mq_desc, mq->address, mq_size,
- nasid, mmr_value->vector, mmr_value->dest);
+ nid, mmr_value->vector, mmr_value->dest);
if (ret != 0) {
dev_err(xpc_part, "gru_create_message_queue() returned "
"error=%d\n", ret);
static void
xpc_handle_activate_mq_msg_uv(struct xpc_partition *part,
struct xpc_activate_mq_msghdr_uv *msg_hdr,
- int part_setup,
int *wakeup_hb_checker)
{
unsigned long irq_flags;
case XPC_ACTIVATE_MQ_MSG_CHCTL_CLOSEREQUEST_UV: {
struct xpc_activate_mq_msg_chctl_closerequest_uv *msg;
- if (!part_setup)
- break;
-
msg = container_of(msg_hdr, struct
xpc_activate_mq_msg_chctl_closerequest_uv,
hdr);
case XPC_ACTIVATE_MQ_MSG_CHCTL_CLOSEREPLY_UV: {
struct xpc_activate_mq_msg_chctl_closereply_uv *msg;
- if (!part_setup)
- break;
-
msg = container_of(msg_hdr, struct
xpc_activate_mq_msg_chctl_closereply_uv,
hdr);
case XPC_ACTIVATE_MQ_MSG_CHCTL_OPENREQUEST_UV: {
struct xpc_activate_mq_msg_chctl_openrequest_uv *msg;
- if (!part_setup)
- break;
-
msg = container_of(msg_hdr, struct
xpc_activate_mq_msg_chctl_openrequest_uv,
hdr);
case XPC_ACTIVATE_MQ_MSG_CHCTL_OPENREPLY_UV: {
struct xpc_activate_mq_msg_chctl_openreply_uv *msg;
- if (!part_setup)
- break;
-
msg = container_of(msg_hdr, struct
xpc_activate_mq_msg_chctl_openreply_uv, hdr);
args = &part->remote_openclose_args[msg->ch_number];
case XPC_ACTIVATE_MQ_MSG_CHCTL_OPENCOMPLETE_UV: {
struct xpc_activate_mq_msg_chctl_opencomplete_uv *msg;
- if (!part_setup)
- break;
-
msg = container_of(msg_hdr, struct
xpc_activate_mq_msg_chctl_opencomplete_uv, hdr);
spin_lock_irqsave(&part->chctl_lock, irq_flags);
part_referenced = xpc_part_ref(part);
xpc_handle_activate_mq_msg_uv(part, msg_hdr,
- part_referenced,
&wakeup_hb_checker);
if (part_referenced)
xpc_part_deref(part);
head->first = first->next;
if (head->first == NULL)
head->last = NULL;
-
- head->n_entries--;
- BUG_ON(head->n_entries < 0);
-
- first->next = NULL;
}
+ head->n_entries--;
+ BUG_ON(head->n_entries < 0);
spin_unlock_irqrestore(&head->lock, irq_flags);
+ first->next = NULL;
return first;
}
xpc_send_activate_IRQ_part_uv(part, &msg, sizeof(msg),
XPC_ACTIVATE_MQ_MSG_SYNC_ACT_STATE_UV);
- while (!((part->sn.uv.remote_act_state == XPC_P_AS_ACTIVATING) ||
- (part->sn.uv.remote_act_state == XPC_P_AS_ACTIVE))) {
+ while (part->sn.uv.remote_act_state != XPC_P_AS_ACTIVATING) {
dev_dbg(xpc_part, "waiting to make first contact with "
"partition %d\n", XPC_PARTID(part));
msg_slot = ch_uv->recv_msg_slots +
(msg->hdr.msg_slot_number % ch->remote_nentries) * ch->entry_size;
+ BUG_ON(msg->hdr.msg_slot_number != msg_slot->hdr.msg_slot_number);
BUG_ON(msg_slot->hdr.size != 0);
memcpy(msg_slot, msg, msg->hdr.size);
sizeof(struct xpc_notify_mq_msghdr_uv));
if (ret != xpSuccess)
XPC_DEACTIVATE_PARTITION(&xpc_partitions[ch->partid], ret);
+
+ msg->hdr.msg_slot_number += ch->remote_nentries;
}
static struct xpc_arch_operations xpc_arch_ops_uv = {
{
struct mmc_data *data = host->data;
- if (data)
- dma_unmap_sg(&host->pdev->dev, data->sg, data->sg_len,
- ((data->flags & MMC_DATA_WRITE)
- ? DMA_TO_DEVICE : DMA_FROM_DEVICE));
+ dma_unmap_sg(&host->pdev->dev, data->sg, data->sg_len,
+ ((data->flags & MMC_DATA_WRITE)
+ ? DMA_TO_DEVICE : DMA_FROM_DEVICE));
}
static void atmci_stop_dma(struct atmel_mci *host)
"command error: status=0x%08x\n", status);
if (cmd->data) {
- atmci_stop_dma(host);
host->data = NULL;
+ atmci_stop_dma(host);
mci_writel(host, IDR, MCI_NOTBUSY
| MCI_TXRDY | MCI_RXRDY
| ATMCI_DATA_ERROR_FLAGS);
} else {
data->bytes_xfered = data->blocks * data->blksz;
data->error = 0;
- mci_writel(host, IDR, ATMCI_DATA_ERROR_FLAGS);
}
if (!data->stop) {
ret = -ENODEV;
if (pdata->slot[0].bus_width) {
ret = atmci_init_slot(host, &pdata->slot[0],
- 0, MCI_SDCSEL_SLOT_A);
+ MCI_SDCSEL_SLOT_A, 0);
if (!ret)
nr_slots++;
}
if (pdata->slot[1].bus_width) {
ret = atmci_init_slot(host, &pdata->slot[1],
- 1, MCI_SDCSEL_SLOT_B);
+ MCI_SDCSEL_SLOT_B, 1);
if (!ret)
nr_slots++;
}
struct s3c24xx_mci_pdata *pdata = host->pdata;
int ret;
- if (pdata->no_detect)
+ if (pdata->gpio_detect == 0)
return -ENOSYS;
ret = gpio_get_value(pdata->gpio_detect) ? 0 : 1;
static struct s3c24xx_mci_pdata s3cmci_def_pdata = {
/* This is currently here to avoid a number of if (host->pdata)
* checks. Any zero fields to ensure reasonable defaults are picked. */
- .no_wprotect = 1,
- .no_detect = 1,
};
#ifdef CONFIG_CPU_FREQ
static int __devexit sdhci_s3c_remove(struct platform_device *pdev)
{
- struct sdhci_host *host = platform_get_drvdata(pdev);
- struct sdhci_s3c *sc = sdhci_priv(host);
- int ptr;
-
- sdhci_remove_host(host, 1);
-
- for (ptr = 0; ptr < 3; ptr++) {
- if (sc->clk_bus[ptr]) {
- clk_disable(sc->clk_bus[ptr]);
- clk_put(sc->clk_bus[ptr]);
- }
- }
- clk_disable(sc->clk_io);
- clk_put(sc->clk_io);
-
- iounmap(host->ioaddr);
- release_resource(sc->ioarea);
- kfree(sc->ioarea);
-
- sdhci_free_host(host);
- platform_set_drvdata(pdev, NULL);
-
return 0;
}
static inline void tmio_mmc_pio_irq(struct tmio_mmc_host *host)
{
struct mmc_data *data = host->data;
- void *sg_virt;
unsigned short *buf;
unsigned int count;
unsigned long flags;
return;
}
- sg_virt = tmio_mmc_kmap_atomic(host->sg_ptr, &flags);
- buf = (unsigned short *)(sg_virt + host->sg_off);
+ buf = (unsigned short *)(tmio_mmc_kmap_atomic(host, &flags) +
+ host->sg_off);
count = host->sg_ptr->length - host->sg_off;
if (count > data->blksz)
host->sg_off += count;
- tmio_mmc_kunmap_atomic(sg_virt, &flags);
+ tmio_mmc_kunmap_atomic(host, &flags);
if (host->sg_off == host->sg_ptr->length)
tmio_mmc_next_sg(host);
#define ack_mmc_irqs(host, i) \
do { \
- sd_ctrl_write32((host), CTL_STATUS, ~(i)); \
+ u32 mask;\
+ mask = sd_ctrl_read32((host), CTL_STATUS); \
+ mask &= ~((i) & TMIO_MASK_IRQ); \
+ sd_ctrl_write32((host), CTL_STATUS, mask); \
} while (0)
return --host->sg_len;
}
-static inline char *tmio_mmc_kmap_atomic(struct scatterlist *sg,
+static inline char *tmio_mmc_kmap_atomic(struct tmio_mmc_host *host,
unsigned long *flags)
{
+ struct scatterlist *sg = host->sg_ptr;
+
local_irq_save(*flags);
return kmap_atomic(sg_page(sg), KM_BIO_SRC_IRQ) + sg->offset;
}
-static inline void tmio_mmc_kunmap_atomic(void *virt,
+static inline void tmio_mmc_kunmap_atomic(struct tmio_mmc_host *host,
unsigned long *flags)
{
- kunmap_atomic(virt, KM_BIO_SRC_IRQ);
+ kunmap_atomic(sg_page(host->sg_ptr), KM_BIO_SRC_IRQ);
local_irq_restore(*flags);
}
#define tAR_NDTR1(r) (((r) >> 0) & 0xf)
/* convert nano-seconds to nand flash controller clock cycles */
-#define ns2cycle(ns, clk) (int)((ns) * (clk / 1000000) / 1000)
+#define ns2cycle(ns, clk) (int)(((ns) * (clk / 1000000) / 1000) - 1)
/* convert nand flash controller clock cycles to nano-seconds */
#define cycle2ns(c, clk) ((((c) + 1) * 1000000 + clk / 500) / (clk / 1000))
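A quick userspace check of the conversion macros above (re-declared locally so it compiles on its own), using a made-up 156 MHz controller clock; it shows how the "- 1" variant shifts every timing down by one cycle, which matters for flash timings that must not be undershot, and how cycle2ns() rounds back up:

#include <stdio.h>

#define ns2cycle(ns, clk)        (int)((ns) * ((clk) / 1000000) / 1000)
#define ns2cycle_minus1(ns, clk) (int)(((ns) * ((clk) / 1000000) / 1000) - 1)
#define cycle2ns(c, clk)  ((((c) + 1) * 1000000 + (clk) / 500) / ((clk) / 1000))

int main(void)
{
	unsigned long clk = 156000000UL;	/* hypothetical 156 MHz NFC clock */
	int ns;

	for (ns = 10; ns <= 70; ns += 30)
		printf("t = %2d ns -> %d cycles (or %d with -1), back to ~%lu ns\n",
		       ns, ns2cycle(ns, clk), ns2cycle_minus1(ns, clk),
		       cycle2ns(ns2cycle(ns, clk), clk));
	return 0;
}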
return retval;
}
-static irqreturn_t el2_probe_interrupt(int irq, void *seen)
-{
- *(bool *)seen = true;
- return IRQ_HANDLED;
-}
-
static int
el2_open(struct net_device *dev)
{
outb(EGACFR_NORM, E33G_GACFR); /* Enable RAM and interrupts. */
do {
- bool seen;
-
- retval = request_irq(*irqp, el2_probe_interrupt, 0,
- dev->name, &seen);
- if (retval == -EBUSY)
- continue;
- if (retval < 0)
- goto err_disable;
-
+ retval = request_irq(*irqp, NULL, 0, "bogus", dev);
+ if (retval >= 0) {
/* Twinkle the interrupt, and check if it's seen. */
- seen = false;
- smp_wmb();
+ unsigned long cookie = probe_irq_on();
outb_p(0x04 << ((*irqp == 9) ? 2 : *irqp), E33G_IDCFR);
outb_p(0x00, E33G_IDCFR);
- msleep(1);
- free_irq(*irqp, el2_probe_interrupt);
- if (!seen)
- continue;
-
- retval = request_irq(dev->irq = *irqp, eip_interrupt, 0,
- dev->name, dev);
- if (retval == -EBUSY)
- continue;
- if (retval < 0)
- goto err_disable;
+ if (*irqp == probe_irq_off(cookie) /* It's a good IRQ line! */
+ && ((retval = request_irq(dev->irq = *irqp,
+ eip_interrupt, 0, dev->name, dev)) == 0))
+ break;
+ } else {
+ if (retval != -EBUSY)
+ return retval;
+ }
} while (*++irqp);
-
if (*irqp == 0) {
- err_disable:
outb(EGACFR_IRQOFF, E33G_GACFR); /* disable interrupts. */
return -EAGAIN;
}
{ 0x1571, 0xa204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa206, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
- { 0x10B5, 0x9030, 0x10B5, 0x2978, 0, 0, ARC_CAN_10MBIT },
- { 0x10B5, 0x9050, 0x10B5, 0x2273, 0, 0, ARC_CAN_10MBIT },
+ { 0x10B5, 0x9030, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
+ { 0x10B5, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x14BA, 0x6000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x2200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{0,}
.get_eeprom = atl1e_get_eeprom,
.set_eeprom = atl1e_set_eeprom,
.get_tx_csum = atl1e_get_tx_csum,
- .set_tx_csum = ethtool_op_set_tx_hw_csum,
.get_sg = ethtool_op_get_sg,
.set_sg = ethtool_op_set_sg,
#ifdef NETIF_F_TSO
.get_tso = ethtool_op_get_tso,
#endif
- .set_tso = ethtool_op_set_tso,
};
void atl1e_set_ethtool_ops(struct net_device *netdev)
pci_enable_wake(pdev, PCI_D3cold, 0);
atl1_reset_hw(&adapter->hw);
+ adapter->cmb.cmb->int_stats = 0;
- if (netif_running(netdev)) {
- adapter->cmb.cmb->int_stats = 0;
+ if (netif_running(netdev))
atl1_up(adapter);
- }
netif_device_attach(netdev);
return 0;
dev->irq = sdev->irq;
SET_ETHTOOL_OPS(dev, &b44_ethtool_ops);
+ netif_carrier_off(dev);
+
err = ssb_bus_powerup(sdev->bus, 0);
if (err) {
dev_err(sdev->dev,
goto err_out_powerdown;
}
- netif_carrier_off(dev);
-
ssb_set_drvdata(sdev, dev);
/* Chip reset provides power to the b44 MAC & PCI cores, which
MODULE_DEVICE_TABLE(pci, bnx2_pci_tbl);
-static void bnx2_init_napi(struct bnx2 *bp);
-static void bnx2_del_napi(struct bnx2 *bp);
-
static inline u32 bnx2_tx_avail(struct bnx2 *bp, struct bnx2_tx_ring_info *txr)
{
u32 diff;
rc = bnx2_alloc_bad_rbuf(bp);
}
- if (bp->flags & BNX2_FLAG_USING_MSIX) {
+ if (bp->flags & BNX2_FLAG_USING_MSIX)
bnx2_setup_msix_tbl(bp);
- /* Prevent MSIX table reads and write from timing out */
- REG_WR(bp, BNX2_MISC_ECO_HW_CTL,
- BNX2_MISC_ECO_HW_CTL_LARGE_GRC_TMOUT_EN);
- }
return rc;
}
bnx2_disable_int(bp);
bnx2_setup_int_mode(bp, disable_msi);
- bnx2_init_napi(bp);
bnx2_napi_enable(bp);
rc = bnx2_alloc_mem(bp);
if (rc)
bnx2_free_skbs(bp);
bnx2_free_irq(bp);
bnx2_free_mem(bp);
- bnx2_del_napi(bp);
return rc;
}
bnx2_free_irq(bp);
bnx2_free_skbs(bp);
bnx2_free_mem(bp);
- bnx2_del_napi(bp);
bp->link_up = 0;
netif_carrier_off(bp->dev);
bnx2_set_power_state(bp, PCI_D3hot);
return str;
}
-static void
-bnx2_del_napi(struct bnx2 *bp)
-{
- int i;
-
- for (i = 0; i < bp->irq_nvecs; i++)
- netif_napi_del(&bp->bnx2_napi[i].napi);
-}
-
-static void
+static void __devinit
bnx2_init_napi(struct bnx2 *bp)
{
int i;
- for (i = 0; i < bp->irq_nvecs; i++) {
+ for (i = 0; i < BNX2_MAX_MSIX_VEC; i++) {
struct bnx2_napi *bnapi = &bp->bnx2_napi[i];
int (*poll)(struct napi_struct *, int);
dev->ethtool_ops = &bnx2_ethtool_ops;
bp = netdev_priv(dev);
+ bnx2_init_napi(bp);
pci_set_drvdata(pdev, dev);
if (!(dev->flags & IFF_MASTER))
goto out;
- if (!pskb_may_pull(skb, sizeof(struct lacpdu)))
- goto out;
-
read_lock(&bond->lock);
slave = bond_get_slave_by_dev((struct bonding *)netdev_priv(dev),
orig_dev);
goto out;
}
- if (!pskb_may_pull(skb, arp_hdr_len(bond_dev)))
- goto out;
-
if (skb->len < sizeof(struct arp_pkt)) {
pr_debug("Packet is too small to be an ARP\n");
goto out;
.brp_inc = 1,
};
-static void sja1000_write_cmdreg(struct sja1000_priv *priv, u8 val)
-{
- unsigned long flags;
-
- /*
- * The command register needs some locking and time to settle
- * the write_reg() operation - especially on SMP systems.
- */
- spin_lock_irqsave(&priv->cmdreg_lock, flags);
- priv->write_reg(priv, REG_CMR, val);
- priv->read_reg(priv, REG_SR);
- spin_unlock_irqrestore(&priv->cmdreg_lock, flags);
-}
-
static int sja1000_probe_chip(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
can_put_echo_skb(skb, dev, 0);
- sja1000_write_cmdreg(priv, CMD_TR);
+ priv->write_reg(priv, REG_CMR, CMD_TR);
return NETDEV_TX_OK;
}
cf->data[i++] = 0;
/* release receive buffer */
- sja1000_write_cmdreg(priv, CMD_RRB);
+ priv->write_reg(priv, REG_CMR, CMD_RRB);
netif_rx(skb);
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
stats->rx_over_errors++;
stats->rx_errors++;
- sja1000_write_cmdreg(priv, CMD_CDO); /* clear bit */
+ priv->write_reg(priv, REG_CMR, CMD_CDO); /* clear bit */
}
if (isrc & IRQ_EI) {
void __iomem *reg_base; /* ioremap'ed address to registers */
unsigned long irq_flags; /* for request_irq() */
- spinlock_t cmdreg_lock; /* lock for concurrent cmd register writes */
u16 flags; /* custom mode flags */
u8 ocr; /* output control register */
if (netif_msg_drv(priv))
printk(KERN_ERR "%s: Could not attach to PHY\n",
dev->name);
- rc = PTR_ERR(priv->phy);
- goto fail;
+ return PTR_ERR(priv->phy);
}
if ((rc = register_netdev(dev))) {
int t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops)
{
- cphy_init(phy, adapter, phy_addr, &xaui_direct_ops, mdio_ops,
+ cphy_init(phy, adapter, MDIO_PRTAD_NONE, &xaui_direct_ops, mdio_ops,
SUPPORTED_10000baseT_Full | SUPPORTED_AUI | SUPPORTED_TP,
"10GBASE-CX4");
return 0;
free_irq_resources(adapter);
quiesce_rx(adapter);
- t3_sge_stop(adapter);
flush_workqueue(cxgb3_wq); /* wait for external IRQ handler */
}
case CHELSIO_GET_QSET_NUM:{
struct ch_reg edata;
- memset(&edata, 0, sizeof(struct ch_reg));
-
edata.cmd = CHELSIO_GET_QSET_NUM;
edata.val = pi->nqsets;
if (copy_to_user(useraddr, &edata, sizeof(edata)))
return dm->rx_csum;
}
-static int dm9000_set_rx_csum_unlocked(struct net_device *dev, uint32_t data)
+static int dm9000_set_rx_csum(struct net_device *dev, uint32_t data)
{
board_info_t *dm = to_dm9000_board(dev);
+ unsigned long flags;
if (dm->can_csum) {
dm->rx_csum = data;
return -EOPNOTSUPP;
}
-static int dm9000_set_rx_csum(struct net_device *dev, uint32_t data)
-{
- board_info_t *dm = to_dm9000_board(dev);
- unsigned long flags;
- int ret;
-
- spin_lock_irqsave(&dm->lock, flags);
- ret = dm9000_set_rx_csum_unlocked(dev, data);
- spin_unlock_irqrestore(&dm->lock, flags);
-
- return ret;
-}
-
static int dm9000_set_tx_csum(struct net_device *dev, uint32_t data)
{
board_info_t *dm = to_dm9000_board(dev);
* Set DM9000 multicast address
*/
static void
-dm9000_hash_table_unlocked(struct net_device *dev)
+dm9000_hash_table(struct net_device *dev)
{
board_info_t *db = netdev_priv(dev);
struct dev_mc_list *mcptr = dev->mc_list;
u32 hash_val;
u16 hash_table[4];
u8 rcr = RCR_DIS_LONG | RCR_DIS_CRC | RCR_RXEN;
+ unsigned long flags;
dm9000_dbg(db, 1, "entering %s\n", __func__);
+ spin_lock_irqsave(&db->lock, flags);
for (i = 0, oft = DM9000_PAR; i < 6; i++, oft++)
iow(db, oft, dev->dev_addr[i]);
}
iow(db, DM9000_RCR, rcr);
-}
-
-static void
-dm9000_hash_table(struct net_device *dev)
-{
- board_info_t *db = netdev_priv(dev);
- unsigned long flags;
-
- spin_lock_irqsave(&db->lock, flags);
- dm9000_hash_table_unlocked(dev);
spin_unlock_irqrestore(&db->lock, flags);
}
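The hunk above folds dm9000_hash_table_unlocked() back into a single locked function. A sketch of the split-out pattern it removes: one helper that assumes the lock is already held (so init paths running under the lock can call it), plus a thin wrapper that takes the lock for everyone else. The names, the mutex and the "registers" array here are illustrative only:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned char regs[8];

static void hash_table_unlocked(const unsigned char *addr)
{
	int i;

	/* caller must hold dev_lock */
	for (i = 0; i < 6; i++)
		regs[i] = addr[i];
}

static void hash_table(const unsigned char *addr)
{
	pthread_mutex_lock(&dev_lock);
	hash_table_unlocked(addr);
	pthread_mutex_unlock(&dev_lock);
}

int main(void)
{
	const unsigned char mac[6] = { 0x00, 0x60, 0x6e, 0x12, 0x34, 0x56 };

	pthread_mutex_lock(&dev_lock);	/* e.g. inside a larger init section */
	hash_table_unlocked(mac);
	pthread_mutex_unlock(&dev_lock);

	hash_table(mac);		/* normal path takes the lock itself */
	printf("filter programmed: %02x:%02x:...\n", regs[0], regs[1]);
	return 0;
}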
db->io_mode = ior(db, DM9000_ISR) >> 6; /* ISR bit7:6 keeps I/O mode */
/* Checksum mode */
- dm9000_set_rx_csum_unlocked(dev, db->rx_csum);
+ dm9000_set_rx_csum(dev, db->rx_csum);
/* GPIO0 on pre-activate PHY */
iow(db, DM9000_GPR, 0); /* REG_1F bit0 activate phyxcer */
iow(db, DM9000_ISR, ISR_CLR_STATUS); /* Clear interrupt status */
/* Set address filter table */
- dm9000_hash_table_unlocked(dev);
+ dm9000_hash_table(dev);
imr = IMR_PAR | IMR_PTM | IMR_PRM;
if (db->type != TYPE_DM9000E)
#define E1000_KMRNCTRLSTA_DIAG_OFFSET 0x3 /* Kumeran Diagnostic */
#define E1000_KMRNCTRLSTA_DIAG_NELPBK 0x1000 /* Nearend Loopback mode */
#define E1000_KMRNCTRLSTA_K1_CONFIG 0x7
-#define E1000_KMRNCTRLSTA_K1_ENABLE 0x0002
+#define E1000_KMRNCTRLSTA_K1_ENABLE 0x140E
#define E1000_KMRNCTRLSTA_K1_DISABLE 0x1400
#define IFE_PHY_EXTENDED_STATUS_CONTROL 0x10
#define E1000_DEV_ID_80003ES2LAN_COPPER_SPT 0x10BA
#define E1000_DEV_ID_80003ES2LAN_SERDES_SPT 0x10BB
-#define E1000_DEV_ID_ICH8_82567V_3 0x1501
#define E1000_DEV_ID_ICH8_IGP_M_AMT 0x1049
#define E1000_DEV_ID_ICH8_IGP_AMT 0x104A
#define E1000_DEV_ID_ICH8_IGP_C 0x104B
u32 phy_ctrl;
switch (hw->mac.type) {
- case e1000_ich8lan:
case e1000_ich9lan:
case e1000_ich10lan:
case e1000_pchlan:
i = 0;
}
- if (i == tx_ring->next_to_use)
- break;
eop = tx_ring->buffer_info[i].next_to_watch;
eop_desc = E1000_TX_DESC(*tx_ring, eop);
}
/* disable SERR in case the MSI write causes a master abort */
pci_read_config_word(adapter->pdev, PCI_COMMAND, &pci_cmd);
- if (pci_cmd & PCI_COMMAND_SERR)
- pci_write_config_word(adapter->pdev, PCI_COMMAND,
- pci_cmd & ~PCI_COMMAND_SERR);
+ pci_write_config_word(adapter->pdev, PCI_COMMAND,
+ pci_cmd & ~PCI_COMMAND_SERR);
err = e1000_test_msi_interrupt(adapter);
- /* re-enable SERR */
- if (pci_cmd & PCI_COMMAND_SERR) {
- pci_read_config_word(adapter->pdev, PCI_COMMAND, &pci_cmd);
- pci_cmd |= PCI_COMMAND_SERR;
- pci_write_config_word(adapter->pdev, PCI_COMMAND, pci_cmd);
- }
+ /* restore previous setting of command word */
+ pci_write_config_word(adapter->pdev, PCI_COMMAND, pci_cmd);
/* success ! */
if (!err)
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH8_IGP_C), board_ich8lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH8_IGP_M), board_ich8lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH8_IGP_M_AMT), board_ich8lan },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH8_82567V_3), board_ich8lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IFE), board_ich9lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IFE_G), board_ich9lan },
equalizer_t *eql;
master_config_t mc;
- memset(&mc, 0, sizeof(master_config_t));
-
if (eql_is_master(dev)) {
eql = netdev_priv(dev);
mc.max_slaves = eql->max_slaves;
/* Limit the number of tx's outstanding for hw bug */
if (id->driver_data & DEV_NEED_TX_LIMIT) {
np->tx_limit = 1;
- if (((id->driver_data & DEV_NEED_TX_LIMIT2) == DEV_NEED_TX_LIMIT2) &&
+ if ((id->driver_data & DEV_NEED_TX_LIMIT2) &&
pci_dev->revision >= 0xA2)
np->tx_limit = 0;
}
if (skb_queue_len(&priv->rx_recycle) < priv->rx_ring_size &&
skb_recycle_check(skb, priv->rx_buffer_size +
RXBUF_ALIGNMENT))
- skb_queue_head(&priv->rx_recycle, skb);
+ __skb_queue_head(&priv->rx_recycle, skb);
else
dev_kfree_skb_any(skb);
struct gfar_private *priv = netdev_priv(dev);
struct sk_buff *skb = NULL;
- skb = skb_dequeue(&priv->rx_recycle);
+ skb = __skb_dequeue(&priv->rx_recycle);
if (!skb)
skb = netdev_alloc_skb(dev,
priv->rx_buffer_size + RXBUF_ALIGNMENT);
* recycle list.
*/
skb->data = skb->head + NET_SKB_PAD;
- skb_queue_head(&priv->rx_recycle, skb);
+ __skb_queue_head(&priv->rx_recycle, skb);
}
} else {
/* Increment the number of packets */
break;
case E1000_DEV_ID_82576:
case E1000_DEV_ID_82576_NS:
- case E1000_DEV_ID_82576_NS_SERDES:
case E1000_DEV_ID_82576_FIBER:
case E1000_DEV_ID_82576_SERDES:
case E1000_DEV_ID_82576_QUAD_COPPER:
{
s32 ret_val = 0;
- /*
- * If there's an alternate MAC address place it in RAR0
- * so that it will override the Si installed default perm
- * address.
- */
- ret_val = igb_check_alt_mac_addr(hw);
- if (ret_val)
- goto out;
-
- ret_val = igb_read_mac_addr(hw);
+ if (igb_check_alt_mac_addr(hw))
+ ret_val = igb_read_mac_addr(hw);
-out:
return ret_val;
}
#define E1000_DEV_ID_82576_SERDES 0x10E7
#define E1000_DEV_ID_82576_QUAD_COPPER 0x10E8
#define E1000_DEV_ID_82576_NS 0x150A
-#define E1000_DEV_ID_82576_NS_SERDES 0x1518
#define E1000_DEV_ID_82576_SERDES_QUAD 0x150D
#define E1000_DEV_ID_82575EB_COPPER 0x10A7
#define E1000_DEV_ID_82575EB_FIBER_SERDES 0x10A9
#define E1000_FUNC_1 1
-#define E1000_ALT_MAC_ADDRESS_OFFSET_LAN1 3
-
enum e1000_mac_type {
e1000_undefined = 0,
e1000_82575,
}
if (nvm_alt_mac_addr_offset == 0xFFFF) {
- /* There is no Alternate MAC Address */
+ ret_val = -(E1000_NOT_IMPLEMENTED);
goto out;
}
if (hw->bus.func == E1000_FUNC_1)
- nvm_alt_mac_addr_offset += E1000_ALT_MAC_ADDRESS_OFFSET_LAN1;
+ nvm_alt_mac_addr_offset += ETH_ALEN/sizeof(u16);
+
for (i = 0; i < ETH_ALEN; i += 2) {
offset = nvm_alt_mac_addr_offset + (i >> 1);
ret_val = hw->nvm.ops.read(hw, offset, 1, &nvm_data);
/* if multicast bit is set, the alternate address will not be used */
if (alt_mac_addr[0] & 0x01) {
- hw_dbg("Ignoring Alternate Mac Address with MC bit set\n");
+ ret_val = -(E1000_NOT_IMPLEMENTED);
goto out;
}
- /*
- * We have a valid alternate MAC address, and we want to treat it the
- * same as the normal permanent MAC address stored by the HW into the
- * RAR. Do this by mapping this address into RAR0.
- */
- hw->mac.ops.rar_set(hw, alt_mac_addr, 0);
+ for (i = 0; i < ETH_ALEN; i++)
+ hw->mac.addr[i] = hw->mac.perm_addr[i] = alt_mac_addr[i];
+
+ hw->mac.ops.rar_set(hw, hw->mac.perm_addr, 0);
out:
return ret_val;
static struct pci_device_id igb_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_82576), board_82575 },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_82576_NS), board_82575 },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_82576_NS_SERDES), board_82575 },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_82576_FIBER), board_82575 },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_82576_SERDES), board_82575 },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_82576_SERDES_QUAD), board_82575 },
case IXGBE_DEV_ID_82599_KX4:
case IXGBE_DEV_ID_82599_KX4_MEZZ:
case IXGBE_DEV_ID_82599_COMBO_BACKPLANE:
- case IXGBE_DEV_ID_82599_KR:
case IXGBE_DEV_ID_82599_XAUI_LOM:
/* Default device ID is mezzanine card KX/KX4 */
media_type = ixgbe_media_type_backplane;
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_XAUI_LOM),
board_82599 },
- {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_KR),
- board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_SFP),
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_KX4_MEZZ),
static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb)
{
struct ixgbe_adapter *adapter = netdev_priv(dev);
- int txq = smp_processor_id();
- if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
- while (unlikely(txq >= dev->real_num_tx_queues))
- txq -= dev->real_num_tx_queues;
- return txq;
- }
+ if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)
+ return smp_processor_id();
if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
return (skb->vlan_tci & IXGBE_TX_FLAGS_VLAN_PRIO_MASK) >> 13;
#define IXGBE_DEV_ID_82598EB_XF_LR 0x10F4
#define IXGBE_DEV_ID_82599_KX4 0x10F7
#define IXGBE_DEV_ID_82599_KX4_MEZZ 0x1514
-#define IXGBE_DEV_ID_82599_KR 0x1517
#define IXGBE_DEV_ID_82599_CX4 0x10F9
#define IXGBE_DEV_ID_82599_SFP 0x10FB
#define IXGBE_DEV_ID_82599_XAUI_LOM 0x10FC
jme->jme_vlan_rx(skb, jme->vlgrp,
le16_to_cpu(rxdesc->descwb.vlan));
NET_STAT(jme).rx_bytes += 4;
- } else {
- dev_kfree_skb(skb);
}
} else {
jme->jme_rx(skb);
}
}
-static inline void
-jme_phy_on(struct jme_adapter *jme)
-{
- u32 bmcr;
-
- bmcr = jme_mdio_read(jme->dev, jme->mii_if.phy_id, MII_BMCR);
- bmcr &= ~BMCR_PDOWN;
- jme_mdio_write(jme->dev, jme->mii_if.phy_id, MII_BMCR, bmcr);
-}
-
static int
jme_open(struct net_device *netdev)
{
jme_start_irq(jme);
- if (test_bit(JME_FLAG_SSET, &jme->flags)) {
- jme_phy_on(jme);
+ if (test_bit(JME_FLAG_SSET, &jme->flags))
jme_set_settings(netdev, &jme->old_ecmd);
- } else {
+ else
jme_reset_phy_processor(jme);
- }
jme_reset_link(jme);
jme_reset_link(jme);
}
-static inline void jme_pause_rx(struct jme_adapter *jme)
-{
- atomic_dec(&jme->link_changing);
-
- jme_set_rx_pcc(jme, PCC_OFF);
- if (test_bit(JME_FLAG_POLL, &jme->flags)) {
- JME_NAPI_DISABLE(jme);
- } else {
- tasklet_disable(&jme->rxclean_task);
- tasklet_disable(&jme->rxempty_task);
- }
-}
-
-static inline void jme_resume_rx(struct jme_adapter *jme)
-{
- struct dynpcc_info *dpi = &(jme->dpi);
-
- if (test_bit(JME_FLAG_POLL, &jme->flags)) {
- JME_NAPI_ENABLE(jme);
- } else {
- tasklet_hi_enable(&jme->rxclean_task);
- tasklet_hi_enable(&jme->rxempty_task);
- }
- dpi->cur = PCC_P1;
- dpi->attempt = PCC_P1;
- dpi->cnt = 0;
- jme_set_rx_pcc(jme, PCC_P1);
-
- atomic_inc(&jme->link_changing);
-}
-
static void
jme_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp)
{
struct jme_adapter *jme = netdev_priv(netdev);
- jme_pause_rx(jme);
jme->vlgrp = grp;
- jme_resume_rx(jme);
}
static void
jme_clear_pm(jme);
pci_restore_state(pdev);
- if (test_bit(JME_FLAG_SSET, &jme->flags)) {
- jme_phy_on(jme);
+ if (test_bit(JME_FLAG_SSET, &jme->flags))
jme_set_settings(netdev, &jme->old_ecmd);
- } else {
+ else
jme_reset_phy_processor(jme);
- }
jme_start_irq(jme);
netif_device_attach(netdev);
static irqreturn_t ks_irq(int irq, void *pw)
{
- struct net_device *netdev = pw;
- struct ks_net *ks = netdev_priv(netdev);
+ struct ks_net *ks = pw;
+ struct net_device *netdev = ks->netdev;
u16 status;
/*this should be the first in IRQ handler */
if (chunk->nsg <= 0)
goto fail;
- }
- if (chunk->npages == MLX4_ICM_CHUNK_LEN)
chunk = NULL;
+ }
npages -= 1 << cur_order;
} else {
if (pkt_offset)
skb_pull(skb, pkt_offset);
+ skb->truesize = skb->len + sizeof(struct sk_buff);
skb->protocol = eth_type_trans(skb, netdev);
napi_gro_receive(&sds_ring->napi, skb);
skb_put(skb, lro_length + data_offset);
+ skb->truesize = skb->len + sizeof(struct sk_buff) + skb_headroom(skb);
+
skb_pull(skb, l2_hdr_offset);
skb->protocol = eth_type_trans(skb, netdev);
/* Calculate UDP checksum if configured to do so */
if (sk_tun->sk_no_check == UDP_CSUM_NOXMIT)
skb->ip_summed = CHECKSUM_NONE;
- else if ((skb_dst(skb) && skb_dst(skb)->dev) &&
- (!(skb_dst(skb)->dev->features & NETIF_F_V4_CSUM))) {
+ else if (!(skb_dst(skb)->dev->features & NETIF_F_V4_CSUM)) {
skb->ip_summed = CHECKSUM_COMPLETE;
csum = skb_checksum(skb, 0, udp_len, 0);
uh->check = csum_tcpudp_magic(inet->saddr, inet->daddr,
#define RX_DESC_SIZE (RX_DCNT * sizeof(struct r6040_descriptor))
#define TX_DESC_SIZE (TX_DCNT * sizeof(struct r6040_descriptor))
#define MBCR_DEFAULT 0x012A /* MAC Bus Control Register */
-#define MCAST_MAX 3 /* Max number multicast addresses to filter */
+#define MCAST_MAX 4 /* Max number multicast addresses to filter */
/* Descriptor status */
#define DSC_OWNER_MAC 0x8000 /* MAC is the owner of this descriptor */
crc >>= 26;
hash_table[crc >> 4] |= 1 << (15 - (crc & 0xf));
}
+ /* Write the index of the hash table */
+ for (i = 0; i < 4; i++)
+ iowrite16(hash_table[i] << 14, ioaddr + MCR1);
/* Fill the MAC hash tables with their values */
iowrite16(hash_table[0], ioaddr + MAR0);
iowrite16(hash_table[1], ioaddr + MAR1);
iowrite16(hash_table[3], ioaddr + MAR3);
}
/* Multicast Address 1~4 case */
- dmi = dev->mc_list;
for (i = 0, dmi; (i < dev->mc_count) && (i < MCAST_MAX); i++) {
adrp = (u16 *)dmi->dmi_addr;
iowrite16(adrp[0], ioaddr + MID_1L + 8*i);
dmi = dmi->next;
}
for (i = dev->mc_count; i < MCAST_MAX; i++) {
- iowrite16(0xffff, ioaddr + MID_1L + 8*i);
- iowrite16(0xffff, ioaddr + MID_1M + 8*i);
- iowrite16(0xffff, ioaddr + MID_1H + 8*i);
+ iowrite16(0xffff, ioaddr + MID_0L + 8*i);
+ iowrite16(0xffff, ioaddr + MID_0M + 8*i);
+ iowrite16(0xffff, ioaddr + MID_0H + 8*i);
}
}
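The hash arithmetic above packs a 64-entry multicast filter into the four 16-bit MAR registers: the top six bits of the Ethernet CRC pick the register (upper two bits) and the bit within it (lower four bits, counted down from 15). A minimal sketch of that selection, assuming the usual ether_crc() helper from <linux/crc32.h> (the CRC computation itself is outside this hunk):

	u32 crc = ether_crc(ETH_ALEN, dmi->dmi_addr) >> 26;	/* keep CRC bits 31..26 */
	hash_table[crc >> 4] |= 1 << (15 - (crc & 0xf));	/* MAR0..MAR3, bit 15..0 */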
MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
-/*
- * we set our copybreak very high so that we don't have
- * to allocate 16k frames all the time (see note in
- * rtl8169_open()
- */
-static int rx_copybreak = 16383;
+static int rx_copybreak = 200;
static int use_dac;
static struct {
u32 msg_enable;
break;
udelay(25);
}
- /*
- * According to hardware specs a 20us delay is required after write
- * complete indication, but before sending next command.
- */
- udelay(20);
}
static int mdio_read(void __iomem *ioaddr, int reg_addr)
}
udelay(25);
}
- /*
- * According to hardware specs a 20us delay is required after read
- * complete indication, but before sending next command.
- */
- udelay(20);
-
return value;
}
spin_lock_irq(&tp->lock);
RTL_W8(Cfg9346, Cfg9346_Unlock);
-
- RTL_W32(MAC4, high);
- RTL_R32(MAC4);
-
RTL_W32(MAC0, low);
- RTL_R32(MAC0);
-
+ RTL_W32(MAC4, high);
RTL_W8(Cfg9346, Cfg9346_Lock);
spin_unlock_irq(&tp->lock);
}
static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
- unsigned int mtu)
+ struct net_device *dev)
{
- unsigned int max_frame = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
-
- if (max_frame != 16383)
- printk(KERN_WARNING PFX "WARNING! Changing of MTU on this "
- "NIC may lead to frame reception errors!\n");
+ unsigned int max_frame = dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
tp->rx_buf_sz = (max_frame > RX_BUF_SIZE) ? max_frame : RX_BUF_SIZE;
}
int retval = -ENOMEM;
- /*
- * Note that we use a magic value here, its wierd I know
- * its done because, some subset of rtl8169 hardware suffers from
- * a problem in which frames received that are longer than
- * the size set in RxMaxSize register return garbage sizes
- * when received. To avoid this we need to turn off filtering,
- * which is done by setting a value of 16383 in the RxMaxSize register
- * and allocating 16k frames to handle the largest possible rx value
- * thats what the magic math below does.
- */
- rtl8169_set_rxbufsize(tp, 16383 - VLAN_ETH_HLEN - ETH_FCS_LEN);
+ rtl8169_set_rxbufsize(tp, dev);
/*
	 * Rx and Tx descriptors need 256-byte alignment.
rtl8169_down(dev);
- rtl8169_set_rxbufsize(tp, dev->mtu);
+ rtl8169_set_rxbufsize(tp, dev);
ret = rtl8169_init_ring(dev);
if (ret < 0)
static struct sk_buff *rtl8169_alloc_rx_skb(struct pci_dev *pdev,
struct net_device *dev,
struct RxDesc *desc, int rx_buf_sz,
- unsigned int align, gfp_t gfp)
+ unsigned int align)
{
struct sk_buff *skb;
dma_addr_t mapping;
pad = align ? align : NET_IP_ALIGN;
- skb = __netdev_alloc_skb(dev, rx_buf_sz + pad, gfp);
+ skb = netdev_alloc_skb(dev, rx_buf_sz + pad);
if (!skb)
goto err_out;
}
static u32 rtl8169_rx_fill(struct rtl8169_private *tp, struct net_device *dev,
- u32 start, u32 end, gfp_t gfp)
+ u32 start, u32 end)
{
u32 cur;
skb = rtl8169_alloc_rx_skb(tp->pci_dev, dev,
tp->RxDescArray + i,
- tp->rx_buf_sz, tp->align, gfp);
+ tp->rx_buf_sz, tp->align);
if (!skb)
break;
memset(tp->tx_skb, 0x0, NUM_TX_DESC * sizeof(struct ring_info));
memset(tp->Rx_skbuff, 0x0, NUM_RX_DESC * sizeof(struct sk_buff *));
- if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC, GFP_KERNEL) != NUM_RX_DESC)
+ if (rtl8169_rx_fill(tp, dev, 0, NUM_RX_DESC) != NUM_RX_DESC)
goto err_out;
rtl8169_mark_as_last_descriptor(tp->RxDescArray + NUM_RX_DESC - 1);
tp->cur_tx += frags + 1;
- wmb();
+ smp_wmb();
RTL_W8(TxPoll, NPQ); /* set polling bit */
count = cur_rx - tp->cur_rx;
tp->cur_rx = cur_rx;
- delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx, GFP_ATOMIC);
+ delta = rtl8169_rx_fill(tp, dev, tp->dirty_rx, tp->cur_rx);
if (!delta && count && netif_msg_intr(tp))
printk(KERN_INFO "%s: no Rx buffer allocated\n", dev->name);
tp->dirty_rx += delta;
* until it does.
*/
tp->intr_mask = 0xffff;
- wmb();
+ smp_wmb();
RTL_W16(IntrMask, tp->intr_event);
}
mc_filter[1] = swab32(data);
}
- RTL_W32(MAR0 + 4, mc_filter[1]);
RTL_W32(MAR0 + 0, mc_filter[0]);
+ RTL_W32(MAR0 + 4, mc_filter[1]);
RTL_W32(RxConfig, tmp);
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/mii.h>
-#include <linux/dmi.h>
#include <asm/irq.h>
#include "skge.h"
dev->name, dev->dev_addr);
}
-static int only_32bit_dma;
-
static int __devinit skge_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
pci_set_master(pdev);
- if (!only_32bit_dma && !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
+ if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
using_dac = 1;
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
} else if (!(err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))) {
.shutdown = skge_shutdown,
};
-static struct dmi_system_id skge_32bit_dma_boards[] = {
- {
- .ident = "Gigabyte nForce boards",
- .matches = {
- DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co"),
- DMI_MATCH(DMI_BOARD_NAME, "nForce"),
- },
- },
- {}
-};
-
static int __init skge_init_module(void)
{
- if (dmi_check_system(skge_32bit_dma_boards))
- only_32bit_dma = 1;
skge_debug_init();
return pci_register_driver(&skge_driver);
}
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
}
-/* Enable Rx/Tx */
-static void sky2_enable_rx_tx(struct sky2_port *sky2)
-{
- struct sky2_hw *hw = sky2->hw;
- unsigned port = sky2->port;
- u16 reg;
-
- reg = gma_read16(hw, port, GM_GP_CTRL);
- reg |= GM_GPCR_RX_ENA | GM_GPCR_TX_ENA;
- gma_write16(hw, port, GM_GP_CTRL, reg);
-}
-
/* Force a renegotiation */
static void sky2_phy_reinit(struct sky2_port *sky2)
{
spin_lock_bh(&sky2->phy_lock);
sky2_phy_init(sky2->hw, sky2->port);
- sky2_enable_rx_tx(sky2);
spin_unlock_bh(&sky2->phy_lock);
}
static inline struct sky2_tx_le *get_tx_le(struct sky2_port *sky2, u16 *slot)
{
struct sky2_tx_le *le = sky2->tx_le + *slot;
+ struct tx_ring_info *re = sky2->tx_ring + *slot;
*slot = RING_NEXT(*slot, sky2->tx_ring_size);
+ re->flags = 0;
+ re->skb = NULL;
le->ctrl = 0;
return le;
}
return count;
}
-static void sky2_tx_unmap(struct pci_dev *pdev, struct tx_ring_info *re)
+static void sky2_tx_unmap(struct pci_dev *pdev,
+ const struct tx_ring_info *re)
{
if (re->flags & TX_MAP_SINGLE)
pci_unmap_single(pdev, pci_unmap_addr(re, mapaddr),
pci_unmap_page(pdev, pci_unmap_addr(re, mapaddr),
pci_unmap_len(re, maplen),
PCI_DMA_TODEVICE);
- re->flags = 0;
}
/*
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
- re->skb = NULL;
dev_kfree_skb_any(skb);
sky2->tx_next = RING_NEXT(idx, sky2->tx_ring_size);
{
struct sky2_hw *hw = sky2->hw;
unsigned port = sky2->port;
+ u16 reg;
static const char *fc_name[] = {
[FC_NONE] = "none",
[FC_TX] = "tx",
[FC_BOTH] = "both",
};
- sky2_enable_rx_tx(sky2);
+ /* enable Rx/Tx */
+ reg = gma_read16(hw, port, GM_GP_CTRL);
+ reg |= GM_GPCR_RX_ENA | GM_GPCR_TX_ENA;
+ gma_write16(hw, port, GM_GP_CTRL, reg);
gm_phy_write(hw, port, PHY_MARV_INT_MASK, PHY_M_DEF_MSK);
*/
spinlock_t mac_lock;
- /* spinlock to ensure register accesses are serialised */
+ /* spinlock to ensure 16-bit accesses are serialised.
+ * unused with a 32-bit bus */
spinlock_t dev_lock;
struct phy_device *phy_dev;
unsigned int hashlo;
};
-static inline u32 __smsc911x_reg_read(struct smsc911x_data *pdata, u32 reg)
+/* The 16-bit access functions are significantly slower, due to the locking
+ * necessary. If your bus hardware can be configured to do this for you
+ * (in response to a single 32-bit operation from software), you should use
+ * the 32-bit access functions instead. */
+
+static inline u32 smsc911x_reg_read(struct smsc911x_data *pdata, u32 reg)
{
if (pdata->config.flags & SMSC911X_USE_32BIT)
return readl(pdata->ioaddr + reg);
- if (pdata->config.flags & SMSC911X_USE_16BIT)
- return ((readw(pdata->ioaddr + reg) & 0xFFFF) |
+ if (pdata->config.flags & SMSC911X_USE_16BIT) {
+ u32 data;
+ unsigned long flags;
+
+ /* these two 16-bit reads must be performed consecutively, so
+ * must not be interrupted by our own ISR (which would start
+ * another read operation) */
+ spin_lock_irqsave(&pdata->dev_lock, flags);
+ data = ((readw(pdata->ioaddr + reg) & 0xFFFF) |
((readw(pdata->ioaddr + reg + 2) & 0xFFFF) << 16));
+ spin_unlock_irqrestore(&pdata->dev_lock, flags);
+
+ return data;
+ }
BUG();
return 0;
}
-static inline u32 smsc911x_reg_read(struct smsc911x_data *pdata, u32 reg)
-{
- u32 data;
- unsigned long flags;
-
- spin_lock_irqsave(&pdata->dev_lock, flags);
- data = __smsc911x_reg_read(pdata, reg);
- spin_unlock_irqrestore(&pdata->dev_lock, flags);
-
- return data;
-}
-
-static inline void __smsc911x_reg_write(struct smsc911x_data *pdata, u32 reg,
- u32 val)
+static inline void smsc911x_reg_write(struct smsc911x_data *pdata, u32 reg,
+ u32 val)
{
if (pdata->config.flags & SMSC911X_USE_32BIT) {
writel(val, pdata->ioaddr + reg);
}
if (pdata->config.flags & SMSC911X_USE_16BIT) {
+ unsigned long flags;
+
+ /* these two 16-bit writes must be performed consecutively, so
+ * must not be interrupted by our own ISR (which would start
+ * another read operation) */
+ spin_lock_irqsave(&pdata->dev_lock, flags);
writew(val & 0xFFFF, pdata->ioaddr + reg);
writew((val >> 16) & 0xFFFF, pdata->ioaddr + reg + 2);
+ spin_unlock_irqrestore(&pdata->dev_lock, flags);
return;
}
BUG();
}
-static inline void smsc911x_reg_write(struct smsc911x_data *pdata, u32 reg,
- u32 val)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&pdata->dev_lock, flags);
- __smsc911x_reg_write(pdata, reg, val);
- spin_unlock_irqrestore(&pdata->dev_lock, flags);
-}
-
/* Writes a packet to the TX_DATA_FIFO */
static inline void
smsc911x_tx_writefifo(struct smsc911x_data *pdata, unsigned int *buf,
unsigned int wordcount)
{
- unsigned long flags;
-
- spin_lock_irqsave(&pdata->dev_lock, flags);
-
if (pdata->config.flags & SMSC911X_SWAP_FIFO) {
while (wordcount--)
- __smsc911x_reg_write(pdata, TX_DATA_FIFO,
- swab32(*buf++));
- goto out;
+ smsc911x_reg_write(pdata, TX_DATA_FIFO, swab32(*buf++));
+ return;
}
if (pdata->config.flags & SMSC911X_USE_32BIT) {
writesl(pdata->ioaddr + TX_DATA_FIFO, buf, wordcount);
- goto out;
+ return;
}
if (pdata->config.flags & SMSC911X_USE_16BIT) {
while (wordcount--)
- __smsc911x_reg_write(pdata, TX_DATA_FIFO, *buf++);
- goto out;
+ smsc911x_reg_write(pdata, TX_DATA_FIFO, *buf++);
+ return;
}
BUG();
-out:
- spin_unlock_irqrestore(&pdata->dev_lock, flags);
}
/* Reads a packet out of the RX_DATA_FIFO */
smsc911x_rx_readfifo(struct smsc911x_data *pdata, unsigned int *buf,
unsigned int wordcount)
{
- unsigned long flags;
-
- spin_lock_irqsave(&pdata->dev_lock, flags);
-
if (pdata->config.flags & SMSC911X_SWAP_FIFO) {
while (wordcount--)
- *buf++ = swab32(__smsc911x_reg_read(pdata,
- RX_DATA_FIFO));
- goto out;
+ *buf++ = swab32(smsc911x_reg_read(pdata, RX_DATA_FIFO));
+ return;
}
if (pdata->config.flags & SMSC911X_USE_32BIT) {
readsl(pdata->ioaddr + RX_DATA_FIFO, buf, wordcount);
- goto out;
+ return;
}
if (pdata->config.flags & SMSC911X_USE_16BIT) {
while (wordcount--)
- *buf++ = __smsc911x_reg_read(pdata, RX_DATA_FIFO);
- goto out;
+ *buf++ = smsc911x_reg_read(pdata, RX_DATA_FIFO);
+ return;
}
BUG();
-out:
- spin_unlock_irqrestore(&pdata->dev_lock, flags);
}
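The accessor split above is a deliberate trade-off: a 16-bit bus pays for a spinlock around every pair of readw()/writew() calls so the two halves cannot be interleaved with the driver's own ISR, while a 32-bit bus uses plain lock-free readl()/writel(). A minimal sketch, assuming the platform-data interface from <linux/smsc911x.h>, of how a board would normally opt into the faster 32-bit path (the values are illustrative, not taken from this patch):

	static struct smsc911x_platform_config board_smsc911x_config = {
		.flags		= SMSC911X_USE_32BIT,	/* wide bus, no dev_lock needed */
		.irq_polarity	= SMSC911X_IRQ_POLARITY_ACTIVE_LOW,
		.irq_type	= SMSC911X_IRQ_TYPE_PUSH_PULL,
		.phy_interface	= PHY_INTERFACE_MODE_MII,
	};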
/* waits for MAC not busy, with timeout. Only called by smsc911x_mac_read
struct tg3 *tp = netdev_priv(dev);
for (i = 0; i < tp->irq_cnt; i++)
- tg3_interrupt(tp->napi[i].irq_vec, &tp->napi[i]);
+ tg3_interrupt(tp->napi[i].irq_vec, dev);
}
#endif
mss = 0;
if ((mss = skb_shinfo(skb)->gso_size) != 0) {
struct iphdr *iph;
- u32 tcp_opt_len, ip_tcp_len, hdr_len;
+ int tcp_opt_len, ip_tcp_len, hdr_len;
if (skb_header_cloned(skb) &&
pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
IPPROTO_TCP,
0);
- if (tp->tg3_flags2 & TG3_FLG2_HW_TSO_2)
- mss |= hdr_len << 9;
- else if ((tp->tg3_flags2 & TG3_FLG2_HW_TSO_1) ||
- GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5705) {
+ if ((tp->tg3_flags2 & TG3_FLG2_HW_TSO) ||
+ (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5705)) {
if (tcp_opt_len || iph->ihl > 5) {
int tsflags;
would_hit_hwbug = 0;
- if ((tp->tg3_flags3 & TG3_FLG3_SHORT_DMA_BUG) && len <= 8)
- would_hit_hwbug = 1;
-
if (tp->tg3_flags3 & TG3_FLG3_5701_DMA_BUG)
would_hit_hwbug = 1;
else if (tg3_4g_overflow_test(mapping, len))
tnapi->tx_buffers[entry].skb = NULL;
- if ((tp->tg3_flags3 & TG3_FLG3_SHORT_DMA_BUG) &&
- len <= 8)
- would_hit_hwbug = 1;
-
if (tg3_4g_overflow_test(mapping, len))
would_hit_hwbug = 1;
pci_disable_msi(tp->pdev);
tp->tg3_flags2 &= ~TG3_FLG2_USING_MSI;
- tp->napi[0].irq_vec = tp->pdev->irq;
err = tg3_request_irq(tp, 0);
if (err)
}
}
- if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
- tp->tg3_flags3 |= TG3_FLG3_SHORT_DMA_BUG;
-
tp->irq_max = 1;
#ifdef TG3_NAPI
goto err_out_iounmap;
}
- if (tp->tg3_flags3 & TG3_FLG3_5755_PLUS)
+ if ((tp->tg3_flags3 & TG3_FLG3_5755_PLUS) ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
dev->netdev_ops = &tg3_netdev_ops;
else
dev->netdev_ops = &tg3_netdev_ops_dma_bug;
#define TG3_FLG3_TOGGLE_10_100_L1PLLPD 0x00008000
#define TG3_FLG3_PHY_IS_FET 0x00010000
#define TG3_FLG3_ENABLE_RSS 0x00020000
-#define TG3_FLG3_4G_DMA_BNDRY_BUG 0x00080000
-#define TG3_FLG3_40BIT_DMA_LIMIT_BUG 0x00100000
-#define TG3_FLG3_SHORT_DMA_BUG 0x00200000
struct timer_list timer;
u16 timer_counter;
If in doubt, say Y.
-config TULIP_DM910X
- def_bool y
- depends on TULIP && SPARC
-
config DE4X5
tristate "Generic DECchip & DIGITAL EtherWORKS PCI/EISA"
depends on PCI || EISA
#include <asm/uaccess.h>
#include <asm/irq.h>
-#ifdef CONFIG_TULIP_DM910X
-#include <linux/of.h>
-#endif
-
/* Board/System/Debug information/definition ---------------- */
#define PCI_DM9132_ID 0x91321282 /* Davicom DM9132 ID */
if (!printed_version++)
printk(version);
- /*
- * SPARC on-board DM910x chips should be handled by the main
- * tulip driver, except for early DM9100s.
- */
-#ifdef CONFIG_TULIP_DM910X
- if ((ent->driver_data == PCI_DM9100_ID && pdev->revision >= 0x30) ||
- ent->driver_data == PCI_DM9102_ID) {
- struct device_node *dp = pci_device_to_OF_node(pdev);
-
- if (dp && of_get_property(dp, "local-mac-address", NULL)) {
- printk(KERN_INFO DRV_NAME
- ": skipping on-board DM910x (use tulip)\n");
- return -ENODEV;
- }
- }
-#endif
-
/* Init network device */
dev = alloc_etherdev(sizeof(*db));
if (dev == NULL)
| HAS_NWAY | HAS_PCI_MWI, tulip_timer, tulip_media_task },
/* DM910X */
-#ifdef CONFIG_TULIP_DM910X
{ "Davicom DM9102/DM9102A", 128, 0x0001ebef,
HAS_MII | HAS_MEDIA_TABLE | CSR12_IN_SROM | HAS_ACPI,
tulip_timer, tulip_media_task },
-#else
- { NULL },
-#endif
/* RS7112 */
{ "Conexant LANfinity", 256, 0x0001ebef,
{ 0x1259, 0xa120, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
{ 0x11F6, 0x9881, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMPEX9881 },
{ 0x8086, 0x0039, PCI_ANY_ID, PCI_ANY_ID, 0, 0, I21145 },
-#ifdef CONFIG_TULIP_DM910X
{ 0x1282, 0x9100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DM910X },
{ 0x1282, 0x9102, PCI_ANY_ID, PCI_ANY_ID, 0, 0, DM910X },
-#endif
{ 0x1113, 0x1216, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
{ 0x1113, 0x1217, PCI_ANY_ID, PCI_ANY_ID, 0, 0, MX98715 },
{ 0x1113, 0x9511, PCI_ANY_ID, PCI_ANY_ID, 0, 0, COMET },
}
/*
- * DM910x chips should be handled by the dmfe driver, except
- * on-board chips on SPARC systems. Also, early DM9100s need
- * software CRC which only the dmfe driver supports.
+ * Early DM9100s need software CRC and the dmfe driver
*/
-#ifdef CONFIG_TULIP_DM910X
- if (chip_idx == DM910X) {
- struct device_node *dp;
-
- if (pdev->vendor == 0x1282 && pdev->device == 0x9100 &&
- pdev->revision < 0x30) {
- printk(KERN_INFO PFX
- "skipping early DM9100 with Crc bug (use dmfe)\n");
- return -ENODEV;
- }
-
- dp = pci_device_to_OF_node(pdev);
- if (!(dp && of_get_property(dp, "local-mac-address", NULL))) {
- printk(KERN_INFO PFX
- "skipping DM910x expansion card (use dmfe)\n");
+ if (pdev->vendor == 0x1282 && pdev->device == 0x9100)
+ {
+ /* Read Chip revision */
+ if (pdev->revision < 0x30)
+ {
+ printk(KERN_ERR PFX "skipping early DM9100 with Crc bug (use dmfe)\n");
return -ENODEV;
}
}
-#endif
/*
* Looks for early PCI chipsets where people report hangs
if (err < 0)
goto err_free_sk;
- if (!net_eq(dev_net(tun->dev), &init_net) ||
- device_create_file(&tun->dev->dev, &dev_attr_tun_flags) ||
+ if (device_create_file(&tun->dev->dev, &dev_attr_tun_flags) ||
device_create_file(&tun->dev->dev, &dev_attr_owner) ||
device_create_file(&tun->dev->dev, &dev_attr_group))
printk(KERN_ERR "Failed to create tun sysfs files\n");
static void ugeth_quiesce(struct ucc_geth_private *ugeth)
{
- /* Prevent any further xmits, plus detach the device. */
- netif_device_detach(ugeth->ndev);
-
- /* Wait for any current xmits to finish. */
+ /* Wait for and prevent any further xmits. */
netif_tx_disable(ugeth->ndev);
/* Disable the interrupt to avoid NAPI rescheduling. */
{
napi_enable(&ugeth->napi);
enable_irq(ugeth->ug_info->uf_info.irq);
- netif_device_attach(ugeth->ndev);
+ netif_tx_wake_all_queues(ugeth->ndev);
}
/* Called every time the controller might need to be made
/* Handle the transmitted buffer and release */
/* the BD to be used with the current frame */
- skb = ugeth->tx_skbuff[txQ][ugeth->skb_dirtytx[txQ]];
- if (!skb)
+ if ((bd == ugeth->txBd[txQ]) && (netif_queue_stopped(dev) == 0))
break;
dev->stats.tx_packets++;
+ skb = ugeth->tx_skbuff[txQ][ugeth->skb_dirtytx[txQ]];
+
if (skb_queue_len(&ugeth->rx_recycle) < RX_BD_RING_LEN &&
skb_recycle_check(skb,
ugeth->ug_info->uf_info.max_rx_buf_length +
#define AX_CMD_WRITE_IPG0 0x12
#define AX_CMD_WRITE_IPG1 0x13
#define AX_CMD_READ_NODE_ID 0x13
-#define AX_CMD_WRITE_NODE_ID 0x14
#define AX_CMD_WRITE_IPG2 0x14
#define AX_CMD_WRITE_MULTI_FILTER 0x16
#define AX88172_CMD_READ_NODE_ID 0x17
/* This structure cannot exceed sizeof(unsigned long [5]) AKA 20 bytes */
struct asix_data {
u8 multi_filter[AX_MCAST_FILTER_SIZE];
- u8 mac_addr[ETH_ALEN];
u8 phymode;
u8 ledmode;
u8 eeprom_len;
return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL);
}
-static int asix_set_mac_address(struct net_device *net, void *p)
-{
- struct usbnet *dev = netdev_priv(net);
- struct asix_data *data = (struct asix_data *)&dev->data;
- struct sockaddr *addr = p;
-
- if (netif_running(net))
- return -EBUSY;
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- memcpy(net->dev_addr, addr->sa_data, ETH_ALEN);
-
- /* We use the 20 byte dev->data
- * for our 6 byte mac buffer
- * to avoid allocating memory that
- * is tricky to free later */
- memcpy(data->mac_addr, addr->sa_data, ETH_ALEN);
- asix_write_cmd_async(dev, AX_CMD_WRITE_NODE_ID, 0, 0, ETH_ALEN,
- data->mac_addr);
-
- return 0;
-}
-
/* We need to override some ethtool_ops so we require our
own structure so we don't interfere with other usbnet
devices that may be connected at the same time. */
.ndo_start_xmit = usbnet_start_xmit,
.ndo_tx_timeout = usbnet_tx_timeout,
.ndo_change_mtu = usbnet_change_mtu,
- .ndo_set_mac_address = asix_set_mac_address,
+ .ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
.ndo_do_ioctl = asix_ioctl,
.ndo_set_multicast_list = asix_set_multicast,
.ndo_stop = usbnet_stop,
.ndo_start_xmit = usbnet_start_xmit,
.ndo_tx_timeout = usbnet_tx_timeout,
- .ndo_set_mac_address = asix_set_mac_address,
+ .ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_multicast_list = asix_set_multicast,
.ndo_do_ioctl = asix_ioctl,
goto out;
dm_write_reg(dev, DM_SHARED_ADDR, phy ? (reg | 0x40) : reg);
- dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0x1a : 0x12);
+ dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0x1c : 0x14);
for (i = 0; i < DM_TIMEOUT; i++) {
u8 tmp;
struct uart_icount cnow;
struct hso_tiocmget *tiocmget = serial->tiocmget;
- memset(&icount, 0, sizeof(struct serial_icounter_struct));
-
if (!tiocmget)
return -ENOENT;
spin_lock_irq(&serial->serial_lock);
#include <linux/ethtool.h>
#include <linux/crc32.h>
#include <linux/bitops.h>
-#include <linux/workqueue.h>
#include <asm/processor.h> /* Processor type for cache alignment. */
#include <asm/io.h>
#include <asm/irq.h>
struct net_device *dev;
struct napi_struct napi;
spinlock_t lock;
- struct work_struct reset_task;
/* Frequently used values: keep some adjacent for cache effect. */
u32 quirks;
static int mdio_read(struct net_device *dev, int phy_id, int location);
static void mdio_write(struct net_device *dev, int phy_id, int location, int value);
static int rhine_open(struct net_device *dev);
-static void rhine_reset_task(struct work_struct *work);
static void rhine_tx_timeout(struct net_device *dev);
static netdev_tx_t rhine_start_tx(struct sk_buff *skb,
struct net_device *dev);
dev->irq = pdev->irq;
spin_lock_init(&rp->lock);
- INIT_WORK(&rp->reset_task, rhine_reset_task);
-
rp->mii_if.dev = dev;
rp->mii_if.mdio_read = mdio_read;
rp->mii_if.mdio_write = mdio_write;
return 0;
}
-static void rhine_reset_task(struct work_struct *work)
+static void rhine_tx_timeout(struct net_device *dev)
{
- struct rhine_private *rp = container_of(work, struct rhine_private,
- reset_task);
- struct net_device *dev = rp->dev;
+ struct rhine_private *rp = netdev_priv(dev);
+ void __iomem *ioaddr = rp->base;
+
+ printk(KERN_WARNING "%s: Transmit timed out, status %4.4x, PHY status "
+ "%4.4x, resetting...\n",
+ dev->name, ioread16(ioaddr + IntrStatus),
+ mdio_read(dev, rp->mii_if.phy_id, MII_BMSR));
/* protect against concurrent rx interrupts */
disable_irq(rp->pdev->irq);
napi_disable(&rp->napi);
- spin_lock_bh(&rp->lock);
+ spin_lock(&rp->lock);
/* clear all descriptors */
free_tbufs(dev);
rhine_chip_reset(dev);
init_registers(dev);
- spin_unlock_bh(&rp->lock);
+ spin_unlock(&rp->lock);
enable_irq(rp->pdev->irq);
dev->trans_start = jiffies;
netif_wake_queue(dev);
}
-static void rhine_tx_timeout(struct net_device *dev)
-{
- struct rhine_private *rp = netdev_priv(dev);
- void __iomem *ioaddr = rp->base;
-
- printk(KERN_WARNING "%s: Transmit timed out, status %4.4x, PHY status "
- "%4.4x, resetting...\n",
- dev->name, ioread16(ioaddr + IntrStatus),
- mdio_read(dev, rp->mii_if.phy_id, MII_BMSR));
-
- schedule_work(&rp->reset_task);
-}
-
static netdev_tx_t rhine_start_tx(struct sk_buff *skb,
struct net_device *dev)
{
struct rhine_private *rp = netdev_priv(dev);
void __iomem *ioaddr = rp->base;
- napi_disable(&rp->napi);
- cancel_work_sync(&rp->reset_task);
- netif_stop_queue(dev);
-
spin_lock_irq(&rp->lock);
+ netif_stop_queue(dev);
+ napi_disable(&rp->napi);
+
if (debug > 1)
printk(KERN_DEBUG "%s: Shutting down ethercard, "
"status was %4.4x.\n",
/* Ensure chip is running */
pci_set_power_state(vptr->pdev, PCI_D0);
+ velocity_give_many_rx_descs(vptr);
+
velocity_init_registers(vptr, VELOCITY_INIT_COLD);
ret = request_irq(vptr->pdev->irq, &velocity_intr, IRQF_SHARED,
goto out;
}
- velocity_give_many_rx_descs(vptr);
-
mac_enable_int(vptr->mac_regs);
netif_start_queue(dev);
vptr->flags |= VELOCITY_FLAGS_OPENED;
dev->mtu = new_mtu;
- velocity_init_registers(vptr, VELOCITY_INIT_COLD);
-
velocity_give_many_rx_descs(vptr);
+ velocity_init_registers(vptr, VELOCITY_INIT_COLD);
+
mac_enable_int(vptr->mac_regs);
netif_start_queue(dev);
vi = container_of(work, struct virtnet_info, refill.work);
napi_disable(&vi->napi);
- still_empty = !try_fill_recv(vi, GFP_KERNEL);
+ try_fill_recv(vi, GFP_KERNEL);
+ still_empty = (vi->num == 0);
napi_enable(&vi->napi);
/* In theory, this can happen: if we don't get any buffers in
if (xennet_connect(netdev) != 0)
break;
xenbus_switch_state(dev, XenbusStateConnected);
- netif_notify_peers(netdev);
break;
case XenbusStateClosing:
.notifier_call = module_load_notify,
};
+
+static void end_sync(void)
+{
+ end_cpu_work();
+ /* make sure we don't leak task structs */
+ process_task_mortuary();
+ process_task_mortuary();
+}
+
+
int sync_start(void)
{
int err;
if (!zalloc_cpumask_var(&marked_cpus, GFP_KERNEL))
return -ENOMEM;
- mutex_lock(&buffer_mutex);
+ start_cpu_work();
err = task_handoff_register(&task_free_nb);
if (err)
if (err)
goto out4;
- start_cpu_work();
-
out:
- mutex_unlock(&buffer_mutex);
return err;
out4:
profile_event_unregister(PROFILE_MUNMAP, &munmap_nb);
out2:
task_handoff_unregister(&task_free_nb);
out1:
+ end_sync();
free_cpumask_var(marked_cpus);
goto out;
}
void sync_stop(void)
{
- /* flush buffers */
- mutex_lock(&buffer_mutex);
- end_cpu_work();
unregister_module_notifier(&module_load_nb);
profile_event_unregister(PROFILE_MUNMAP, &munmap_nb);
profile_event_unregister(PROFILE_TASK_EXIT, &task_exit_nb);
task_handoff_unregister(&task_free_nb);
- mutex_unlock(&buffer_mutex);
- flush_scheduled_work();
-
- /* make sure we don't leak task structs */
- process_task_mortuary();
- process_task_mortuary();
-
+ end_sync();
free_cpumask_var(marked_cpus);
}
#define OP_BUFFER_FLAGS 0
-static struct ring_buffer *op_ring_buffer;
+/*
+ * Read and write access is using spin locking. Thus, writing to the
+ * buffer by NMI handler (x86) could occur also during critical
+ * sections when reading the buffer. To avoid this, there are 2
+ * buffers for independent read and write access. Read access is in
+ * process context only, write access only in the NMI handler. If the
+ * read buffer runs empty, both buffers are swapped atomically. There
+ * is potentially a small window during swapping where the buffers are
+ * disabled and samples could be lost.
+ *
+ * Using 2 buffers adds a little overhead, but the solution is clear
+ * and does not require changes in the ring buffer implementation. It
+ * can be changed to a single buffer solution when the ring buffer
+ * access is implemented as non-locking atomic code.
+ */
+static struct ring_buffer *op_ring_buffer_read;
+static struct ring_buffer *op_ring_buffer_write;
DEFINE_PER_CPU(struct oprofile_cpu_buffer, cpu_buffer);
static void wq_sync_buffer(struct work_struct *work);
void free_cpu_buffers(void)
{
- if (op_ring_buffer)
- ring_buffer_free(op_ring_buffer);
- op_ring_buffer = NULL;
+ if (op_ring_buffer_read)
+ ring_buffer_free(op_ring_buffer_read);
+ op_ring_buffer_read = NULL;
+ if (op_ring_buffer_write)
+ ring_buffer_free(op_ring_buffer_write);
+ op_ring_buffer_write = NULL;
}
#define RB_EVENT_HDR_SIZE 4
unsigned long byte_size = buffer_size * (sizeof(struct op_sample) +
RB_EVENT_HDR_SIZE);
- op_ring_buffer = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS);
- if (!op_ring_buffer)
+ op_ring_buffer_read = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS);
+ if (!op_ring_buffer_read)
+ goto fail;
+ op_ring_buffer_write = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS);
+ if (!op_ring_buffer_write)
goto fail;
for_each_possible_cpu(i) {
cancel_delayed_work(&b->work);
}
+
+ flush_scheduled_work();
}
/*
*op_cpu_buffer_write_reserve(struct op_entry *entry, unsigned long size)
{
entry->event = ring_buffer_lock_reserve
- (op_ring_buffer, sizeof(struct op_sample) +
+ (op_ring_buffer_write, sizeof(struct op_sample) +
size * sizeof(entry->sample->data[0]));
- if (!entry->event)
+ if (entry->event)
+ entry->sample = ring_buffer_event_data(entry->event);
+ else
+ entry->sample = NULL;
+
+ if (!entry->sample)
return NULL;
- entry->sample = ring_buffer_event_data(entry->event);
+
entry->size = size;
entry->data = entry->sample->data;
int op_cpu_buffer_write_commit(struct op_entry *entry)
{
- return ring_buffer_unlock_commit(op_ring_buffer, entry->event);
+ return ring_buffer_unlock_commit(op_ring_buffer_write, entry->event);
}
struct op_sample *op_cpu_buffer_read_entry(struct op_entry *entry, int cpu)
{
struct ring_buffer_event *e;
- e = ring_buffer_consume(op_ring_buffer, cpu, NULL);
- if (!e)
+ e = ring_buffer_consume(op_ring_buffer_read, cpu, NULL);
+ if (e)
+ goto event;
+ if (ring_buffer_swap_cpu(op_ring_buffer_read,
+ op_ring_buffer_write,
+ cpu))
return NULL;
+ e = ring_buffer_consume(op_ring_buffer_read, cpu, NULL);
+ if (e)
+ goto event;
+ return NULL;
+event:
entry->event = e;
entry->sample = ring_buffer_event_data(e);
entry->size = (ring_buffer_event_length(e) - sizeof(struct op_sample))
unsigned long op_cpu_buffer_entries(int cpu)
{
- return ring_buffer_entries_cpu(op_ring_buffer, cpu);
+ return ring_buffer_entries_cpu(op_ring_buffer_read, cpu)
+ + ring_buffer_entries_cpu(op_ring_buffer_write, cpu);
}
static int
static int led_proc_write(struct file *file, const char *buf,
unsigned long count, void *data)
{
- char *cur, lbuf[32];
+ char *cur, lbuf[count + 1];
int d;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
- if (count >= sizeof(lbuf))
- count = sizeof(lbuf)-1;
+ memset(lbuf, 0, count + 1);
if (copy_from_user(lbuf, buf, count))
return -EFAULT;
- lbuf[count] = 0;
cur = lbuf;
int __init ibmphp_access_ebda (void)
{
- u8 format, num_ctlrs, rio_complete, hs_complete, ebda_sz;
+ u8 format, num_ctlrs, rio_complete, hs_complete;
u16 ebda_seg, num_entries, next_offset, offset, blk_id, sub_addr, re, rc_id, re_id, base;
int rc = 0;
iounmap (io_mem);
debug ("returned ebda segment: %x\n", ebda_seg);
- io_mem = ioremap(ebda_seg<<4, 1);
- if (!io_mem)
- return -ENOMEM;
- ebda_sz = readb(io_mem);
- iounmap(io_mem);
- debug("ebda size: %d(KiB)\n", ebda_sz);
- if (ebda_sz == 0)
- return -ENOMEM;
-
- io_mem = ioremap(ebda_seg<<4, (ebda_sz * 1024));
+ io_mem = ioremap(ebda_seg<<4, 1024);
if (!io_mem )
return -ENOMEM;
next_offset = 0x180;
#define DMA_32BIT_PFN IOVA_PFN(DMA_BIT_MASK(32))
#define DMA_64BIT_PFN IOVA_PFN(DMA_BIT_MASK(64))
-/* page table handling */
-#define LEVEL_STRIDE (9)
-#define LEVEL_MASK (((u64)1 << LEVEL_STRIDE) - 1)
-
-static inline int agaw_to_level(int agaw)
-{
- return agaw + 2;
-}
-
-static inline int agaw_to_width(int agaw)
-{
- return 30 + agaw * LEVEL_STRIDE;
-}
-
-static inline int width_to_agaw(int width)
-{
- return (width - 30) / LEVEL_STRIDE;
-}
-
-static inline unsigned int level_to_offset_bits(int level)
-{
- return (level - 1) * LEVEL_STRIDE;
-}
-
-static inline int pfn_level_offset(unsigned long pfn, int level)
-{
- return (pfn >> level_to_offset_bits(level)) & LEVEL_MASK;
-}
-
-static inline unsigned long level_mask(int level)
-{
- return -1UL << level_to_offset_bits(level);
-}
-
-static inline unsigned long level_size(int level)
-{
- return 1UL << level_to_offset_bits(level);
-}
-
-static inline unsigned long align_to_level(unsigned long pfn, int level)
-{
- return (pfn + level_size(level) - 1) & level_mask(level);
-}
/* VT-d pages must always be _smaller_ than MM pages. Otherwise things
are never going to work. */
}
+static inline int width_to_agaw(int width);
+
static int __iommu_calculate_agaw(struct intel_iommu *iommu, int max_gaw)
{
unsigned long sagaw;
spin_unlock_irqrestore(&iommu->lock, flags);
}
+/* page table handling */
+#define LEVEL_STRIDE (9)
+#define LEVEL_MASK (((u64)1 << LEVEL_STRIDE) - 1)
+
+static inline int agaw_to_level(int agaw)
+{
+ return agaw + 2;
+}
+
+static inline int agaw_to_width(int agaw)
+{
+ return 30 + agaw * LEVEL_STRIDE;
+}
+
+static inline int width_to_agaw(int width)
+{
+ return (width - 30) / LEVEL_STRIDE;
+}
+
+static inline unsigned int level_to_offset_bits(int level)
+{
+ return (level - 1) * LEVEL_STRIDE;
+}
+
+static inline int pfn_level_offset(unsigned long pfn, int level)
+{
+ return (pfn >> level_to_offset_bits(level)) & LEVEL_MASK;
+}
+
+static inline unsigned long level_mask(int level)
+{
+ return -1UL << level_to_offset_bits(level);
+}
+
+static inline unsigned long level_size(int level)
+{
+ return 1UL << level_to_offset_bits(level);
+}
+
+static inline unsigned long align_to_level(unsigned long pfn, int level)
+{
+ return (pfn + level_size(level) - 1) & level_mask(level);
+}
+
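A quick worked example of the helpers above (numbers are illustrative, derived only from the definitions): with LEVEL_STRIDE = 9, agaw 2 gives agaw_to_level(2) = 4 and agaw_to_width(2) = 48, i.e. a four-level page table covering a 48-bit address space, with each level consuming nine pfn bits:

	pfn_level_offset(pfn, 1);	/* (pfn >> 0) & 0x1ff - pfn bits 0..8  */
	pfn_level_offset(pfn, 2);	/* (pfn >> 9) & 0x1ff - pfn bits 9..17 */
	level_size(2);			/* 1UL << 9: a level-2 entry spans 512 pages */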
static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
unsigned long pfn)
{
void read_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg)
{
struct msi_desc *entry = get_irq_desc_msi(desc);
-
- BUG_ON(entry->dev->current_state != PCI_D0);
-
if (entry->msi_attrib.is_msix) {
void __iomem *base = entry->mask_base +
entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
read_msi_msg_desc(desc, msg);
}
-void get_cached_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg)
-{
- struct msi_desc *entry = get_irq_desc_msi(desc);
-
- /* Assert that the cache is valid, assuming that
- * valid messages are not all-zeroes. */
- BUG_ON(!(entry->msg.address_hi | entry->msg.address_lo |
- entry->msg.data));
-
- *msg = entry->msg;
-}
-
-void get_cached_msi_msg(unsigned int irq, struct msi_msg *msg)
-{
- struct irq_desc *desc = irq_to_desc(irq);
-
- get_cached_msi_msg_desc(desc, msg);
-}
-
void write_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg)
{
struct msi_desc *entry = get_irq_desc_msi(desc);
-
- if (entry->dev->current_state != PCI_D0) {
- /* Don't touch the hardware now */
- } else if (entry->msi_attrib.is_msix) {
+ if (entry->msi_attrib.is_msix) {
void __iomem *base;
base = entry->mask_base +
entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
#ifdef HAVE_PCI_MMAP
-int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vma,
- enum pci_mmap_api mmap_api)
+int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vma)
{
- unsigned long nr, start, size, pci_start;
+ unsigned long nr, start, size;
- if (pci_resource_len(pdev, resno) == 0)
- return 0;
nr = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
start = vma->vm_pgoff;
size = ((pci_resource_len(pdev, resno) - 1) >> PAGE_SHIFT) + 1;
- pci_start = (mmap_api == PCI_MMAP_PROCFS) ?
- pci_resource_start(pdev, resno) >> PAGE_SHIFT : 0;
- if (start >= pci_start && start < pci_start + size &&
- start + nr <= pci_start + size)
+ if (start < size && size - start >= nr)
return 1;
+ WARN(1, "process \"%s\" tried to map 0x%08lx-0x%08lx on %s BAR %d (size 0x%08lx)\n",
+ current->comm, start, start+nr, pci_name(pdev), resno, size);
return 0;
}
if (i >= PCI_ROM_RESOURCE)
return -ENODEV;
- if (!pci_mmap_fits(pdev, i, vma, PCI_MMAP_SYSFS)) {
- WARN(1, "process \"%s\" tried to map 0x%08lx bytes "
- "at page 0x%08lx on %s BAR %d (start 0x%16Lx, size 0x%16Lx)\n",
- current->comm, vma->vm_end-vma->vm_start, vma->vm_pgoff,
- pci_name(pdev), i,
- pci_resource_start(pdev, i), pci_resource_len(pdev, i));
+ if (!pci_mmap_fits(pdev, i, vma))
return -EINVAL;
- }
/* pci_mmap_page_range() expects the same kind of entry as coming
* from /proc/bus/pci/ which is a "user visible" value. If this is
*/
int __pci_complete_power_transition(struct pci_dev *dev, pci_power_t state)
{
- return state >= PCI_D0 ?
+ return state > PCI_D0 ?
pci_platform_power_transition(dev, state) : -EINVAL;
}
EXPORT_SYMBOL_GPL(__pci_complete_power_transition);
*/
return 0;
+ /* Check if we're already there */
+ if (dev->current_state == state)
+ return 0;
+
__pci_start_power_transition(dev, state);
/* This device is quirked not to be put into D3, so
pci_write_config_word(dev, pos + PCI_MSIX_FLAGS, control);
}
}
-EXPORT_SYMBOL_GPL(pci_msi_off);
#ifndef HAVE_ARCH_PCI_SET_DMA_MASK
/*
*/
int pcix_get_max_mmrbc(struct pci_dev *dev)
{
- int cap;
+ int err, cap;
u32 stat;
cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
if (!cap)
return -EINVAL;
- if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
+ err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat);
+ if (err)
return -EINVAL;
- return 512 << ((stat & PCI_X_STATUS_MAX_READ) >> 21);
+ return (stat & PCI_X_STATUS_MAX_READ) >> 12;
}
EXPORT_SYMBOL(pcix_get_max_mmrbc);
*/
int pcix_get_mmrbc(struct pci_dev *dev)
{
- int cap;
- u16 cmd;
+ int ret, cap;
+ u32 cmd;
cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
if (!cap)
return -EINVAL;
- if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
- return -EINVAL;
+ ret = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd);
+ if (!ret)
+ ret = 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2);
- return 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2);
+ return ret;
}
EXPORT_SYMBOL(pcix_get_mmrbc);
*/
int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc)
{
- int cap;
- u32 stat, v, o;
- u16 cmd;
+ int cap, err = -EINVAL;
+ u32 stat, cmd, v, o;
if (mmrbc < 512 || mmrbc > 4096 || !is_power_of_2(mmrbc))
- return -EINVAL;
+ goto out;
v = ffs(mmrbc) - 10;
cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
if (!cap)
- return -EINVAL;
+ goto out;
- if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
- return -EINVAL;
+ err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat);
+ if (err)
+ goto out;
if (v > (stat & PCI_X_STATUS_MAX_READ) >> 21)
return -E2BIG;
- if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
- return -EINVAL;
+ err = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd);
+ if (err)
+ goto out;
o = (cmd & PCI_X_CMD_MAX_READ) >> 2;
if (o != v) {
cmd &= ~PCI_X_CMD_MAX_READ;
cmd |= v << 2;
- if (pci_write_config_word(dev, cap + PCI_X_CMD, cmd))
- return -EIO;
+ err = pci_write_config_dword(dev, cap + PCI_X_CMD, cmd);
}
- return 0;
+out:
+ return err;
}
EXPORT_SYMBOL(pcix_set_mmrbc);
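For a worked example of the MAX_READ encoding shared by pcix_get_mmrbc() and pcix_set_mmrbc() above (illustrative arithmetic only): requesting mmrbc = 2048 gives ffs(2048) = 12, so v = 12 - 10 = 2 is written into the two-bit field, and decoding it as 512 << 2 recovers the 2048-byte maximum memory read byte count.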
return 0;
}
-/* Some architectures require additional programming to enable VGA */
-static arch_set_vga_state_t arch_set_vga_state;
-
-void __init pci_register_set_vga_state(arch_set_vga_state_t func)
-{
- arch_set_vga_state = func; /* NULL disables */
-}
-
-static int pci_set_vga_state_arch(struct pci_dev *dev, bool decode,
- unsigned int command_bits, bool change_bridge)
-{
- if (arch_set_vga_state)
- return arch_set_vga_state(dev, decode, command_bits,
- change_bridge);
- return 0;
-}
-
/**
* pci_set_vga_state - set VGA decode state on device and parents if requested
* @dev: the PCI device
struct pci_bus *bus;
struct pci_dev *bridge;
u16 cmd;
- int rc;
WARN_ON(command_bits & ~(PCI_COMMAND_IO|PCI_COMMAND_MEMORY));
- /* ARCH specific VGA enables */
- rc = pci_set_vga_state_arch(dev, decode, command_bits, change_bridge);
- if (rc)
- return rc;
-
pci_read_config_word(dev, PCI_COMMAND, &cmd);
if (decode == true)
cmd |= command_bits;
EXPORT_SYMBOL(pci_prepare_to_sleep);
EXPORT_SYMBOL(pci_back_from_sleep);
EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state);
+
extern void pci_remove_sysfs_dev_files(struct pci_dev *pdev);
extern void pci_cleanup_rom(struct pci_dev *dev);
#ifdef HAVE_PCI_MMAP
-enum pci_mmap_api {
- PCI_MMAP_SYSFS, /* mmap on /sys/bus/pci/devices/<BDF>/resource<N> */
- PCI_MMAP_PROCFS /* mmap on /proc/bus/pci/<BDF> */
-};
extern int pci_mmap_fits(struct pci_dev *pdev, int resno,
- struct vm_area_struct *vmai,
- enum pci_mmap_api mmap_api);
+ struct vm_area_struct *vma);
#endif
int pci_probe_reset_function(struct pci_dev *dev);
unsigned long flags;
unsigned int devfn = PCI_DEVFN(einj->dev, einj->fn);
int pos_cap_err, rp_pos_cap_err;
- u32 sever, cor_mask, uncor_mask;
+ u32 sever;
int ret = 0;
dev = pci_get_bus_and_slot(einj->bus, devfn);
goto out_put;
}
pci_read_config_dword(dev, pos_cap_err + PCI_ERR_UNCOR_SEVER, &sever);
- pci_read_config_dword(dev, pos_cap_err + PCI_ERR_COR_MASK, &cor_mask);
- pci_read_config_dword(dev, pos_cap_err + PCI_ERR_UNCOR_MASK,
- &uncor_mask);
rp_pos_cap_err = pci_find_ext_capability(rpdev, PCI_EXT_CAP_ID_ERR);
if (!rp_pos_cap_err) {
err->header_log2 = einj->header_log2;
err->header_log3 = einj->header_log3;
- if (einj->cor_status && !(einj->cor_status & ~cor_mask)) {
- ret = -EINVAL;
- printk(KERN_WARNING "The correctable error(s) is masked "
- "by device\n");
- spin_unlock_irqrestore(&inject_lock, flags);
- goto out_put;
- }
- if (einj->uncor_status && !(einj->uncor_status & ~uncor_mask)) {
- ret = -EINVAL;
- printk(KERN_WARNING "The uncorrectable error(s) is masked "
- "by device\n");
- spin_unlock_irqrestore(&inject_lock, flags);
- goto out_put;
- }
-
rperr = __find_aer_error_by_dev(rpdev);
if (!rperr) {
rperr = rperr_alloc;
int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
{
int pos;
- u32 status;
+ u32 status, mask;
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
if (!pos)
return -EIO;
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
- if (status)
- pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
+ pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &mask);
+ if (dev->error_state == pci_channel_io_normal)
+ status &= ~mask; /* Clear corresponding nonfatal bits */
+ else
+ status &= mask; /* Clear corresponding fatal bits */
+ pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
return 0;
}
/* Make sure the caller is mapping a real resource for this device */
for (i = 0; i < PCI_ROM_RESOURCE; i++) {
- if (pci_mmap_fits(dev, i, vma, PCI_MMAP_PROCFS))
+ if (pci_mmap_fits(dev, i, vma))
break;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs);
-/*
- * Intel NM10 "TigerPoint" LPC PM1a_STS.BM_STS must be clear
- * for some HT machines to use C4 w/o hanging.
- */
-static void __devinit quirk_tigerpoint_bm_sts(struct pci_dev *dev)
-{
- u32 pmbase;
- u16 pm1a;
-
- pci_read_config_dword(dev, 0x40, &pmbase);
- pmbase = pmbase & 0xff80;
- pm1a = inw(pmbase);
-
- if (pm1a & 0x10) {
- dev_info(&dev->dev, FW_BUG "TigerPoint LPC.BM_STS cleared\n");
- outw(0x10, pmbase);
- }
-}
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TGP_LPC, quirk_tigerpoint_bm_sts);
-
/*
* Chipsets where PCI->PCI transfers vanish or hang
*/
conf5 &= ~(1 << 24); /* Clear bit 24 */
switch (pdev->device) {
- case PCI_DEVICE_ID_JMICRON_JMB360: /* SATA single port */
- case PCI_DEVICE_ID_JMICRON_JMB362: /* SATA dual ports */
+ case PCI_DEVICE_ID_JMICRON_JMB360:
/* The controller should be in single function ahci mode */
conf1 |= 0x0002A100; /* Set 8, 13, 15, 17 */
break;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB360, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, quirk_jmicron_ata);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB362, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB360, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, quirk_jmicron_ata);
-DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB362, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, quirk_jmicron_ata);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3336, quirk_disable_all_msi);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3351, quirk_disable_all_msi);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3364, quirk_disable_all_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8380_0, quirk_disable_all_msi);
/* Disable MSI on chipsets that are known to not support it */
static void __devinit quirk_disable_msi(struct pci_dev *dev)
}
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x5a3f, quirk_disable_msi);
/* Go through the list of Hypertransport capabilities and
* return 1 if a HT MSI capability is found and enabled */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8132_BRIDGE,
ht_enable_msi_mapping);
-/* The P5N32-SLI motherboards from Asus have a problem with msi
+/* The P5N32-SLI Premium motherboard from Asus has a problem with msi
* for the MCP55 NIC. It is not yet determined whether the msi problem
* also affects other devices. As for now, turn off msi for this device.
*/
static void __devinit nvenet_msi_disable(struct pci_dev *dev)
{
- if (dmi_name_in_vendors("P5N32-SLI PREMIUM") ||
- dmi_name_in_vendors("P5N32-E SLI")) {
+ if (dmi_name_in_vendors("P5N32-SLI PREMIUM")) {
dev_info(&dev->dev,
- "Disabling msi for MCP55 NIC on P5N32-SLI\n");
+ "Disabling msi for MCP55 NIC on P5N32-SLI Premium\n");
dev->no_msi = 1;
}
}
int pos;
int found;
- if (!pci_msi_enabled())
- return;
-
/* check if there is HT MSI cap or enabled on this device */
found = ht_check_msi_mapping(dev);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10e8, quirk_i82576_sriov);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x150a, quirk_i82576_sriov);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x150d, quirk_i82576_sriov);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1518, quirk_i82576_sriov);
#endif /* CONFIG_PCI_IOV */
#ifdef CONFIG_PCMCIA_PROBE
#include <asm/irq.h>
/* mask of IRQs already reserved by other cards, we should avoid using them */
-static u8 pcmcia_used_irq[32];
+static u8 pcmcia_used_irq[NR_IRQS];
#endif
for (try = 0; try < 64; try++) {
irq = try % 32;
- if (irq > NR_IRQS)
- continue;
-
/* marked as available by driver, and not blocked by userspace? */
if (!((mask >> irq) & 1))
continue;
server running, phase of the moon, and the current mood of
Schroedinger's cat. If you can use X.org's RandR to control
your ThinkPad's video output ports instead of this feature,
- don't think twice: do it and say N here to save memory and avoid
- bad interactions with X.org.
+ don't think twice: do it and say N here to save some memory.
- NOTE: access to this feature is limited to processes with the
- CAP_SYS_ADMIN capability, to avoid local DoS issues in platforms
- where it interacts badly with X.org.
-
- If you are not sure, say Y here but do try to check if you could
- be using X.org RandR instead.
+ If you are not sure, say Y here.
config THINKPAD_ACPI_HOTKEY_POLL
bool "Support NVRAM polling for hot keys"
#include <linux/rfkill.h>
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
-#include <linux/dmi.h>
#define EEEPC_LAPTOP_VERSION "0.1"
acpi_handle handle; /* the handle of the hotk device */
u32 cm_supported; /* the control methods supported
by this BIOS */
- bool cpufv_disabled;
- bool hotplug_disabled;
uint init_flag; /* Init flags */
u16 event_count[128]; /* count for each event */
struct input_dev *inputdev;
MODULE_DESCRIPTION(EEEPC_HOTK_NAME);
MODULE_LICENSE("GPL");
-static bool hotplug_disabled;
-
-module_param(hotplug_disabled, bool, 0644);
-MODULE_PARM_DESC(hotplug_disabled,
- "Disable hotplug for wireless device. "
- "If your laptop need that, please report to "
- "acpi4asus-user@lists.sourceforge.net.");
-
/*
* ACPI Helpers
*/
struct eeepc_cpufv c;
int rv, value;
- if (ehotk->cpufv_disabled)
- return -EPERM;
if (get_cpufv(&c))
return -ENODEV;
rv = parse_arg(buf, count, &value);
return rv;
}
-static ssize_t show_cpufv_disabled(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- return sprintf(buf, "%d\n", ehotk->cpufv_disabled);
-}
-
-static ssize_t store_cpufv_disabled(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t count)
-{
- int rv, value;
-
- rv = parse_arg(buf, count, &value);
- if (rv < 0)
- return rv;
-
- switch (value) {
- case 0:
- if (ehotk->cpufv_disabled)
- pr_warning("cpufv enabled (not officially supported "
- "on this model)\n");
- ehotk->cpufv_disabled = false;
- return rv;
- case 1:
- return -EPERM;
- default:
- return -EINVAL;
- }
-}
-
-
static struct device_attribute dev_attr_cpufv = {
.attr = {
.name = "cpufv",
.show = show_available_cpufv
};
-static struct device_attribute dev_attr_cpufv_disabled = {
- .attr = {
- .name = "cpufv_disabled",
- .mode = 0644 },
- .show = show_cpufv_disabled,
- .store = store_cpufv_disabled
-};
-
-
static struct attribute *platform_attributes[] = {
&dev_attr_camera.attr,
&dev_attr_cardr.attr,
&dev_attr_disp.attr,
&dev_attr_cpufv.attr,
&dev_attr_available_cpufv.attr,
- &dev_attr_cpufv_disabled.attr,
NULL
};
return -EINVAL;
}
-static void eeepc_dmi_check(void)
-{
- const char *model;
-
- model = dmi_get_system_info(DMI_PRODUCT_NAME);
- if (!model)
- return;
-
- /*
- * Blacklist for setting cpufv (cpu speed).
- *
- * EeePC 4G ("701") implements CFVS, but it is not supported
- * by the pre-installed OS, and the original option to change it
- * in the BIOS setup screen was removed in later versions.
- *
- * Judging by the lack of "Super Hybrid Engine" on Asus product pages,
- * this applies to all "701" models (4G/4G Surf/2G Surf).
- *
- * So Asus made a deliberate decision not to support it on this model.
- * We have several reports that using it can cause the system to hang
- *
- * The hang has also been reported on a "702" (Model name "8G"?).
- *
- * We avoid dmi_check_system() / dmi_match(), because they use
- * substring matching. We don't want to affect the "701SD"
- * and "701SDX" models, because they do support S.H.E.
- */
- if (strcmp(model, "701") == 0 || strcmp(model, "702") == 0) {
- ehotk->cpufv_disabled = true;
- pr_info("model %s does not officially support setting cpu "
- "speed\n", model);
- pr_info("cpufv disabled to avoid instability\n");
- }
-
- /*
- * Blacklist for wlan hotplug
- *
- * Eeepc 1005HA doesn't work like others models and don't need the
- * hotplug code. In fact, current hotplug code seems to unplug another
- * device...
- */
- if (strcmp(model, "1005HA") == 0 || strcmp(model, "1201N") == 0 ||
- strcmp(model, "1005PE") == 0) {
- ehotk->hotplug_disabled = true;
- pr_info("wlan hotplug disabled\n");
- }
-}
-
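The removed blacklist above matches the DMI product name exactly with strcmp() rather than via dmi_check_system()/dmi_match(), so that "701" cannot also catch the "701SD"/"701SDX" models. A minimal sketch of that exact-match pattern (the helper name is hypothetical and not part of this code; needs <linux/dmi.h> and <linux/string.h>):

	/* Hypothetical helper: exact DMI product-name match. */
	static bool eeepc_model_is(const char *name)
	{
		const char *model = dmi_get_system_info(DMI_PRODUCT_NAME);

		return model && strcmp(model, name) == 0;
	}
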
static void cmsg_quirk(int cm, const char *name)
{
int dummy;
struct pci_dev *dev;
struct pci_bus *bus;
bool blocked = eeepc_wlan_rfkill_blocked();
- bool absent;
- u32 l;
if (ehotk->wlan_rfkill)
rfkill_set_sw_state(ehotk->wlan_rfkill, blocked);
goto out_unlock;
}
- if (pci_bus_read_config_dword(bus, 0, PCI_VENDOR_ID, &l)) {
- pr_err("Unable to read PCI config space?\n");
- goto out_unlock;
- }
- absent = (l == 0xffffffff);
-
- if (blocked != absent) {
- pr_warning("BIOS says wireless lan is %s, "
- "but the pci device is %s\n",
- blocked ? "blocked" : "unblocked",
- absent ? "absent" : "present");
- pr_warning("skipped wireless hotplug as probably "
- "inappropriate for this model\n");
- goto out_unlock;
- }
-
if (!blocked) {
dev = pci_get_slot(bus, 0);
if (dev) {
if (result && result != -ENODEV)
goto exit;
- if (ehotk->hotplug_disabled)
- return 0;
-
result = eeepc_setup_pci_hotplug();
/*
* If we get -EBUSY then something else is handling the PCI hotplug -
device->driver_data = ehotk;
ehotk->device = device;
- ehotk->hotplug_disabled = hotplug_disabled;
-
- eeepc_dmi_check();
-
result = eeepc_hotk_check();
if (result)
goto fail_platform_driver;
*/
#define TPACPI_VERSION "0.23"
-#define TPACPI_SYSFS_VERSION 0x020600
+#define TPACPI_SYSFS_VERSION 0x020500
/*
* Changelog:
#include <linux/nvram.h>
#include <linux/proc_fs.h>
-#include <linux/seq_file.h>
#include <linux/sysfs.h>
#include <linux/backlight.h>
#include <linux/fb.h>
struct ibm_struct {
char *name;
- int (*read) (struct seq_file *);
+ int (*read) (char *);
int (*write) (char *);
void (*exit) (void);
void (*resume) (void);
char param[32];
int (*init) (struct ibm_init_struct *);
- mode_t base_procfs_mode;
struct ibm_struct *data;
};
****************************************************************************
****************************************************************************/
-static int dispatch_proc_show(struct seq_file *m, void *v)
+static int dispatch_procfs_read(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
{
- struct ibm_struct *ibm = m->private;
+ struct ibm_struct *ibm = data;
+ int len;
if (!ibm || !ibm->read)
return -EINVAL;
- return ibm->read(m);
-}
-static int dispatch_proc_open(struct inode *inode, struct file *file)
-{
- return single_open(file, dispatch_proc_show, PDE(inode)->data);
+ len = ibm->read(page);
+ if (len < 0)
+ return len;
+
+ if (len <= off + count)
+ *eof = 1;
+ *start = page + off;
+ len -= off;
+ if (len > count)
+ len = count;
+ if (len < 0)
+ len = 0;
+
+ return len;
}
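The off/count/*eof bookkeeping above is the standard contract for the legacy read_proc interface: the handler fills "page" from the start, procfs passes in the reader's window, and the handler returns only the bytes inside that window, setting *eof once everything has been produced. A stripped-down handler following the same contract (hypothetical name, for illustration only):

	static int example_read_proc(char *page, char **start, off_t off,
				     int count, int *eof, void *data)
	{
		int len = sprintf(page, "status:\t\tsupported\n");

		if (len <= off + count)
			*eof = 1;
		*start = page + off;
		len -= off;
		if (len > count)
			len = count;
		return (len < 0) ? 0 : len;
	}
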
-static ssize_t dispatch_proc_write(struct file *file,
+static int dispatch_procfs_write(struct file *file,
const char __user *userbuf,
- size_t count, loff_t *pos)
+ unsigned long count, void *data)
{
- struct ibm_struct *ibm = PDE(file->f_path.dentry->d_inode)->data;
+ struct ibm_struct *ibm = data;
char *kernbuf;
int ret;
return ret;
}
-static const struct file_operations dispatch_proc_fops = {
- .owner = THIS_MODULE,
- .open = dispatch_proc_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
- .write = dispatch_proc_write,
-};
-
static char *next_cmd(char **cmds)
{
char *start = *cmds;
struct tpacpi_rfk *atp_rfk;
int res;
bool sw_state = false;
- bool hw_state;
int sw_status;
BUG_ON(id >= TPACPI_RFK_SW_MAX || tpacpi_rfkill_switches[id]);
rfkill_init_sw_state(atp_rfk->rfkill, sw_state);
}
}
- hw_state = tpacpi_rfk_check_hwblock_state();
- rfkill_set_hw_state(atp_rfk->rfkill, hw_state);
+ rfkill_set_hw_state(atp_rfk->rfkill, tpacpi_rfk_check_hwblock_state());
res = rfkill_register(atp_rfk->rfkill);
if (res < 0) {
}
tpacpi_rfkill_switches[id] = atp_rfk;
-
- printk(TPACPI_INFO "rfkill switch %s: radio is %sblocked\n",
- name, (sw_state || hw_state) ? "" : "un");
return 0;
}
}
/* procfs -------------------------------------------------------------- */
-static int tpacpi_rfk_procfs_read(const enum tpacpi_rfk_id id,
- struct seq_file *m)
+static int tpacpi_rfk_procfs_read(const enum tpacpi_rfk_id id, char *p)
{
+ int len = 0;
+
if (id >= TPACPI_RFK_SW_MAX)
- seq_printf(m, "status:\t\tnot supported\n");
+ len += sprintf(p + len, "status:\t\tnot supported\n");
else {
int status;
return status;
}
- seq_printf(m, "status:\t\t%s\n",
+ len += sprintf(p + len, "status:\t\t%s\n",
(status == TPACPI_RFK_RADIO_ON) ?
"enabled" : "disabled");
- seq_printf(m, "commands:\tenable, disable\n");
+ len += sprintf(p + len, "commands:\tenable, disable\n");
}
- return 0;
+ return len;
}
static int tpacpi_rfk_procfs_write(const enum tpacpi_rfk_id id, char *buf)
TPV_QL1('7', '9', 'E', '3', '5', '0'), /* T60/p */
TPV_QL1('7', 'C', 'D', '2', '2', '2'), /* R60, R60i */
- TPV_QL1('7', 'E', 'D', '0', '1', '5'), /* R60e, R60i */
+ TPV_QL0('7', 'E', 'D', '0'), /* R60e, R60i */
/* BIOS FW BIOS VERS EC FW EC VERS */
TPV_QI2('1', 'W', '9', '0', '1', 'V', '2', '8'), /* R50e (1) */
TPV_QI1('7', '4', '6', '4', '2', '7'), /* X41 (0) */
TPV_QI1('7', '5', '6', '0', '2', '0'), /* X41t (0) */
- TPV_QL1('7', 'B', 'D', '7', '4', '0'), /* X60/s */
- TPV_QL1('7', 'J', '3', '0', '1', '3'), /* X60t */
+ TPV_QL0('7', 'B', 'D', '7'), /* X60/s */
+ TPV_QL0('7', 'J', '3', '0'), /* X60t */
/* (0) - older versions lack DMI EC fw string and functionality */
/* (1) - older versions known to lack functionality */
return 0;
}
-static int thinkpad_acpi_driver_read(struct seq_file *m)
+static int thinkpad_acpi_driver_read(char *p)
{
- seq_printf(m, "driver:\t\t%s\n", TPACPI_DESC);
- seq_printf(m, "version:\t%s\n", TPACPI_VERSION);
- return 0;
+ int len = 0;
+
+ len += sprintf(p + len, "driver:\t\t%s\n", TPACPI_DESC);
+ len += sprintf(p + len, "version:\t%s\n", TPACPI_VERSION);
+
+ return len;
}
static struct ibm_struct thinkpad_acpi_driver_data = {
static void tpacpi_driver_event(const unsigned int hkey_event);
static void hotkey_driver_event(const unsigned int scancode);
-static void hotkey_poll_setup(const bool may_warn);
/* HKEY.MHKG() return bits */
#define TP_HOTKEY_TABLET_MASK (1 << 3)
fwmask, hotkey_acpi_mask);
}
- if (tpacpi_lifecycle != TPACPI_LIFE_EXITING)
- hotkey_mask_warn_incomplete_mask();
+ hotkey_mask_warn_incomplete_mask();
return rc;
}
rc = hotkey_mask_set((hotkey_acpi_mask | hotkey_driver_mask) &
~hotkey_source_mask);
- hotkey_poll_setup(true);
-
mutex_unlock(&hotkey_mutex);
return rc;
}
/* call with hotkey_mutex held */
-static void hotkey_poll_setup(const bool may_warn)
+static void hotkey_poll_setup(bool may_warn)
{
const u32 poll_driver_mask = hotkey_driver_mask & hotkey_source_mask;
const u32 poll_user_mask = hotkey_user_mask & hotkey_source_mask;
}
}
-static void hotkey_poll_setup_safe(const bool may_warn)
+static void hotkey_poll_setup_safe(bool may_warn)
{
mutex_lock(&hotkey_mutex);
hotkey_poll_setup(may_warn);
#else /* CONFIG_THINKPAD_ACPI_HOTKEY_POLL */
-static void hotkey_poll_setup(const bool __unused)
-{
-}
-
-static void hotkey_poll_setup_safe(const bool __unused)
+static void hotkey_poll_setup_safe(bool __unused)
{
}
{
switch (tpacpi_lifecycle) {
case TPACPI_LIFE_INIT:
- case TPACPI_LIFE_RUNNING:
- hotkey_poll_setup_safe(false);
+ /*
+ * hotkey_init will call hotkey_poll_setup_safe
+ * at the appropriate moment
+ */
return 0;
case TPACPI_LIFE_EXITING:
return -EBUSY;
+ case TPACPI_LIFE_RUNNING:
+ hotkey_poll_setup_safe(false);
+ return 0;
}
/* Should only happen if tpacpi_lifecycle is corrupt */
static void hotkey_inputdev_close(struct input_dev *dev)
{
/* disable hotkey polling when possible */
- if (tpacpi_lifecycle != TPACPI_LIFE_EXITING &&
+ if (tpacpi_lifecycle == TPACPI_LIFE_RUNNING &&
!(hotkey_source_mask & hotkey_driver_mask))
hotkey_poll_setup_safe(false);
}
int res, i;
int status;
int hkeyv;
- bool radiosw_state = false;
- bool tabletsw_state = false;
unsigned long quirks;
#ifdef CONFIG_THINKPAD_ACPI_DEBUGFACILITIES
if (dbg_wlswemul) {
tp_features.hotkey_wlsw = 1;
- radiosw_state = !!tpacpi_wlsw_emulstate;
printk(TPACPI_INFO
"radio switch emulation enabled\n");
} else
/* Not all thinkpads have a hardware radio switch */
if (acpi_evalf(hkey_handle, &status, "WLSW", "qd")) {
tp_features.hotkey_wlsw = 1;
- radiosw_state = !!status;
printk(TPACPI_INFO
"radio switch found; radios are %s\n",
enabled(status, 0));
/* For X41t, X60t, X61t Tablets... */
if (!res && acpi_evalf(hkey_handle, &status, "MHKG", "qd")) {
tp_features.hotkey_tablet = 1;
- tabletsw_state = !!(status & TP_HOTKEY_TABLET_MASK);
printk(TPACPI_INFO
"possible tablet mode switch found; "
"ThinkPad in %s mode\n",
- (tabletsw_state) ? "tablet" : "laptop");
+ (status & TP_HOTKEY_TABLET_MASK)?
+ "tablet" : "laptop");
res = add_to_attr_set(hotkey_dev_attributes,
&dev_attr_hotkey_tablet_mode.attr);
}
TPACPI_HOTKEY_MAP_SIZE);
}
- input_set_capability(tpacpi_inputdev, EV_MSC, MSC_SCAN);
+ set_bit(EV_KEY, tpacpi_inputdev->evbit);
+ set_bit(EV_MSC, tpacpi_inputdev->evbit);
+ set_bit(MSC_SCAN, tpacpi_inputdev->mscbit);
tpacpi_inputdev->keycodesize = TPACPI_HOTKEY_MAP_TYPESIZE;
tpacpi_inputdev->keycodemax = TPACPI_HOTKEY_MAP_LEN;
tpacpi_inputdev->keycode = hotkey_keycode_map;
for (i = 0; i < TPACPI_HOTKEY_MAP_LEN; i++) {
if (hotkey_keycode_map[i] != KEY_RESERVED) {
- input_set_capability(tpacpi_inputdev, EV_KEY,
- hotkey_keycode_map[i]);
+ set_bit(hotkey_keycode_map[i],
+ tpacpi_inputdev->keybit);
} else {
if (i < sizeof(hotkey_reserved_mask)*8)
hotkey_reserved_mask |= 1 << i;
}
if (tp_features.hotkey_wlsw) {
- input_set_capability(tpacpi_inputdev, EV_SW, SW_RFKILL_ALL);
- input_report_switch(tpacpi_inputdev,
- SW_RFKILL_ALL, radiosw_state);
+ set_bit(EV_SW, tpacpi_inputdev->evbit);
+ set_bit(SW_RFKILL_ALL, tpacpi_inputdev->swbit);
}
if (tp_features.hotkey_tablet) {
- input_set_capability(tpacpi_inputdev, EV_SW, SW_TABLET_MODE);
- input_report_switch(tpacpi_inputdev,
- SW_TABLET_MODE, tabletsw_state);
+ set_bit(EV_SW, tpacpi_inputdev->evbit);
+ set_bit(SW_TABLET_MODE, tpacpi_inputdev->swbit);
}
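The hunks above switch between input_set_capability() and open-coded set_bit() calls. input_set_capability(dev, EV_KEY, code) sets both the EV_KEY event-type bit and the per-key bit, so the two forms below are equivalent; a sketch with an arbitrary key, where idev stands for any struct input_dev *:

	/* Equivalent ways to declare that the device can emit KEY_MUTE. */
	input_set_capability(idev, EV_KEY, KEY_MUTE);

	set_bit(EV_KEY, idev->evbit);
	set_bit(KEY_MUTE, idev->keybit);
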
/* Do not issue duplicate brightness change events to
tpacpi_inputdev->close = &hotkey_inputdev_close;
hotkey_poll_setup_safe(true);
+ tpacpi_send_radiosw_update();
+ tpacpi_input_send_tabletsw();
return 0;
}
}
-static void thermal_dump_all_sensors(void);
-
static bool hotkey_notify_thermal(const u32 hkey,
bool *send_acpi_ev,
bool *ignore_acpi_ev)
{
- bool known = true;
-
/* 0x6000-0x6FFF: thermal alarms */
*send_acpi_ev = true;
*ignore_acpi_ev = false;
switch (hkey) {
- case TP_HKEY_EV_THM_TABLE_CHANGED:
- printk(TPACPI_INFO
- "EC reports that Thermal Table has changed\n");
- /* recommended action: do nothing, we don't have
- * Lenovo ATM information */
- return true;
case TP_HKEY_EV_ALARM_BAT_HOT:
printk(TPACPI_CRIT
"THERMAL ALARM: battery is too hot!\n");
/* recommended action: warn user through gui */
- break;
+ return true;
case TP_HKEY_EV_ALARM_BAT_XHOT:
printk(TPACPI_ALERT
"THERMAL EMERGENCY: battery is extremely hot!\n");
/* recommended action: immediate sleep/hibernate */
- break;
+ return true;
case TP_HKEY_EV_ALARM_SENSOR_HOT:
printk(TPACPI_CRIT
"THERMAL ALARM: "
"a sensor reports something is too hot!\n");
/* recommended action: warn user through gui, that */
/* some internal component is too hot */
- break;
+ return true;
case TP_HKEY_EV_ALARM_SENSOR_XHOT:
printk(TPACPI_ALERT
"THERMAL EMERGENCY: "
"a sensor reports something is extremely hot!\n");
/* recommended action: immediate sleep/hibernate */
- break;
+ return true;
+ case TP_HKEY_EV_THM_TABLE_CHANGED:
+ printk(TPACPI_INFO
+ "EC reports that Thermal Table has changed\n");
+ /* recommended action: do nothing, we don't have
+ * Lenovo ATM information */
+ return true;
default:
printk(TPACPI_ALERT
"THERMAL ALERT: unknown thermal alarm received\n");
- known = false;
+ return false;
}
-
- thermal_dump_all_sensors();
-
- return known;
}
static void hotkey_notify(struct ibm_struct *ibm, u32 event)
break;
case 3:
/* 0x3000-0x3FFF: bay-related wakeups */
- switch (hkey) {
- case TP_HKEY_EV_BAYEJ_ACK:
+ if (hkey == TP_HKEY_EV_BAYEJ_ACK) {
hotkey_autosleep_ack = 1;
printk(TPACPI_INFO
"bay ejected\n");
hotkey_wakeup_hotunplug_complete_notify_change();
known_ev = true;
- break;
- case TP_HKEY_EV_OPTDRV_EJ:
- /* FIXME: kick libata if SATA link offline */
- known_ev = true;
- break;
- default:
+ } else {
known_ev = false;
}
break;
}
/* procfs -------------------------------------------------------------- */
-static int hotkey_read(struct seq_file *m)
+static int hotkey_read(char *p)
{
int res, status;
+ int len = 0;
if (!tp_features.hotkey) {
- seq_printf(m, "status:\t\tnot supported\n");
- return 0;
+ len += sprintf(p + len, "status:\t\tnot supported\n");
+ return len;
}
if (mutex_lock_killable(&hotkey_mutex))
if (res)
return res;
- seq_printf(m, "status:\t\t%s\n", enabled(status, 0));
+ len += sprintf(p + len, "status:\t\t%s\n", enabled(status, 0));
if (hotkey_all_mask) {
- seq_printf(m, "mask:\t\t0x%08x\n", hotkey_user_mask);
- seq_printf(m, "commands:\tenable, disable, reset, <mask>\n");
+ len += sprintf(p + len, "mask:\t\t0x%08x\n", hotkey_user_mask);
+ len += sprintf(p + len,
+ "commands:\tenable, disable, reset, <mask>\n");
} else {
- seq_printf(m, "mask:\t\tnot supported\n");
- seq_printf(m, "commands:\tenable, disable, reset\n");
+ len += sprintf(p + len, "mask:\t\tnot supported\n");
+ len += sprintf(p + len, "commands:\tenable, disable, reset\n");
}
- return 0;
+ return len;
}
static void hotkey_enabledisable_warn(bool enable)
TP_ACPI_BLUETOOTH_HWPRESENT = 0x01, /* Bluetooth hw available */
TP_ACPI_BLUETOOTH_RADIOSSW = 0x02, /* Bluetooth radio enabled */
TP_ACPI_BLUETOOTH_RESUMECTRL = 0x04, /* Bluetooth state at resume:
- 0 = disable, 1 = enable */
+ off / last state */
};
enum {
}
#endif
+ /* We make sure to keep TP_ACPI_BLUETOOTH_RESUMECTRL off */
+ status = TP_ACPI_BLUETOOTH_RESUMECTRL;
if (state == TPACPI_RFK_RADIO_ON)
- status = TP_ACPI_BLUETOOTH_RADIOSSW
- | TP_ACPI_BLUETOOTH_RESUMECTRL;
- else
- status = 0;
+ status |= TP_ACPI_BLUETOOTH_RADIOSSW;
if (!acpi_evalf(hkey_handle, NULL, "SBDC", "vd", status))
return -EIO;
}
/* procfs -------------------------------------------------------------- */
-static int bluetooth_read(struct seq_file *m)
+static int bluetooth_read(char *p)
{
- return tpacpi_rfk_procfs_read(TPACPI_RFK_BLUETOOTH_SW_ID, m);
+ return tpacpi_rfk_procfs_read(TPACPI_RFK_BLUETOOTH_SW_ID, p);
}
static int bluetooth_write(char *buf)
TP_ACPI_WANCARD_HWPRESENT = 0x01, /* Wan hw available */
TP_ACPI_WANCARD_RADIOSSW = 0x02, /* Wan radio enabled */
TP_ACPI_WANCARD_RESUMECTRL = 0x04, /* Wan state at resume:
- 0 = disable, 1 = enable */
+ off / last state */
};
#define TPACPI_RFK_WWAN_SW_NAME "tpacpi_wwan_sw"
}
#endif
+ /* We make sure to set TP_ACPI_WANCARD_RESUMECTRL */
+ status = TP_ACPI_WANCARD_RESUMECTRL;
if (state == TPACPI_RFK_RADIO_ON)
- status = TP_ACPI_WANCARD_RADIOSSW
- | TP_ACPI_WANCARD_RESUMECTRL;
- else
- status = 0;
+ status |= TP_ACPI_WANCARD_RADIOSSW;
if (!acpi_evalf(hkey_handle, NULL, "SWAN", "vd", status))
return -EIO;
}
/* procfs -------------------------------------------------------------- */
-static int wan_read(struct seq_file *m)
+static int wan_read(char *p)
{
- return tpacpi_rfk_procfs_read(TPACPI_RFK_WWAN_SW_ID, m);
+ return tpacpi_rfk_procfs_read(TPACPI_RFK_WWAN_SW_ID, p);
}
static int wan_write(char *buf)
/* not reached */
}
-static int video_read(struct seq_file *m)
+static int video_read(char *p)
{
int status, autosw;
+ int len = 0;
if (video_supported == TPACPI_VIDEO_NONE) {
- seq_printf(m, "status:\t\tnot supported\n");
- return 0;
+ len += sprintf(p + len, "status:\t\tnot supported\n");
+ return len;
}
- /* Even reads can crash X.org, so... */
- if (!capable(CAP_SYS_ADMIN))
- return -EPERM;
-
status = video_outputsw_get();
if (status < 0)
return status;
if (autosw < 0)
return autosw;
- seq_printf(m, "status:\t\tsupported\n");
- seq_printf(m, "lcd:\t\t%s\n", enabled(status, 0));
- seq_printf(m, "crt:\t\t%s\n", enabled(status, 1));
+ len += sprintf(p + len, "status:\t\tsupported\n");
+ len += sprintf(p + len, "lcd:\t\t%s\n", enabled(status, 0));
+ len += sprintf(p + len, "crt:\t\t%s\n", enabled(status, 1));
if (video_supported == TPACPI_VIDEO_NEW)
- seq_printf(m, "dvi:\t\t%s\n", enabled(status, 3));
- seq_printf(m, "auto:\t\t%s\n", enabled(autosw, 0));
- seq_printf(m, "commands:\tlcd_enable, lcd_disable\n");
- seq_printf(m, "commands:\tcrt_enable, crt_disable\n");
+ len += sprintf(p + len, "dvi:\t\t%s\n", enabled(status, 3));
+ len += sprintf(p + len, "auto:\t\t%s\n", enabled(autosw, 0));
+ len += sprintf(p + len, "commands:\tlcd_enable, lcd_disable\n");
+ len += sprintf(p + len, "commands:\tcrt_enable, crt_disable\n");
if (video_supported == TPACPI_VIDEO_NEW)
- seq_printf(m, "commands:\tdvi_enable, dvi_disable\n");
- seq_printf(m, "commands:\tauto_enable, auto_disable\n");
- seq_printf(m, "commands:\tvideo_switch, expand_toggle\n");
+ len += sprintf(p + len, "commands:\tdvi_enable, dvi_disable\n");
+ len += sprintf(p + len, "commands:\tauto_enable, auto_disable\n");
+ len += sprintf(p + len, "commands:\tvideo_switch, expand_toggle\n");
- return 0;
+ return len;
}
static int video_write(char *buf)
if (video_supported == TPACPI_VIDEO_NONE)
return -ENODEV;
- /* Even reads can crash X.org, let alone writes... */
- if (!capable(CAP_SYS_ADMIN))
- return -EPERM;
-
enable = 0;
disable = 0;
flush_workqueue(tpacpi_wq);
}
-static int light_read(struct seq_file *m)
+static int light_read(char *p)
{
+ int len = 0;
int status;
if (!tp_features.light) {
- seq_printf(m, "status:\t\tnot supported\n");
+ len += sprintf(p + len, "status:\t\tnot supported\n");
} else if (!tp_features.light_status) {
- seq_printf(m, "status:\t\tunknown\n");
- seq_printf(m, "commands:\ton, off\n");
+ len += sprintf(p + len, "status:\t\tunknown\n");
+ len += sprintf(p + len, "commands:\ton, off\n");
} else {
status = light_get_status();
if (status < 0)
return status;
- seq_printf(m, "status:\t\t%s\n", onoff(status, 0));
- seq_printf(m, "commands:\ton, off\n");
+ len += sprintf(p + len, "status:\t\t%s\n", onoff(status, 0));
+ len += sprintf(p + len, "commands:\ton, off\n");
}
- return 0;
+ return len;
}
static int light_write(char *buf)
device_remove_file(&tpacpi_pdev->dev, &dev_attr_cmos_command);
}
-static int cmos_read(struct seq_file *m)
+static int cmos_read(char *p)
{
+ int len = 0;
+
/* cmos not supported on 570, 600e/x, 770e, 770x, A21e, A2xm/p,
R30, R31, T20-22, X20-21 */
if (!cmos_handle)
- seq_printf(m, "status:\t\tnot supported\n");
+ len += sprintf(p + len, "status:\t\tnot supported\n");
else {
- seq_printf(m, "status:\t\tsupported\n");
- seq_printf(m, "commands:\t<cmd> (<cmd> is 0-21)\n");
+ len += sprintf(p + len, "status:\t\tsupported\n");
+ len += sprintf(p + len, "commands:\t<cmd> (<cmd> is 0-21)\n");
}
- return 0;
+ return len;
}
static int cmos_write(char *buf)
((s) == TPACPI_LED_OFF ? "off" : \
((s) == TPACPI_LED_ON ? "on" : "blinking"))
-static int led_read(struct seq_file *m)
+static int led_read(char *p)
{
+ int len = 0;
+
if (!led_supported) {
- seq_printf(m, "status:\t\tnot supported\n");
- return 0;
+ len += sprintf(p + len, "status:\t\tnot supported\n");
+ return len;
}
- seq_printf(m, "status:\t\tsupported\n");
+ len += sprintf(p + len, "status:\t\tsupported\n");
if (led_supported == TPACPI_LED_570) {
/* 570 */
status = led_get_status(i);
if (status < 0)
return -EIO;
- seq_printf(m, "%d:\t\t%s\n",
+ len += sprintf(p + len, "%d:\t\t%s\n",
i, str_led_status(status));
}
}
- seq_printf(m, "commands:\t"
+ len += sprintf(p + len, "commands:\t"
"<led> on, <led> off, <led> blink (<led> is 0-15)\n");
- return 0;
+ return len;
}
static int led_write(char *buf)
return (beep_handle)? 0 : 1;
}
-static int beep_read(struct seq_file *m)
+static int beep_read(char *p)
{
+ int len = 0;
+
if (!beep_handle)
- seq_printf(m, "status:\t\tnot supported\n");
+ len += sprintf(p + len, "status:\t\tnot supported\n");
else {
- seq_printf(m, "status:\t\tsupported\n");
- seq_printf(m, "commands:\t<cmd> (<cmd> is 0-17)\n");
+ len += sprintf(p + len, "status:\t\tsupported\n");
+ len += sprintf(p + len, "commands:\t<cmd> (<cmd> is 0-17)\n");
}
- return 0;
+ return len;
}
static int beep_write(char *buf)
TP_EC_THERMAL_TMP0 = 0x78, /* ACPI EC regs TMP 0..7 */
TP_EC_THERMAL_TMP8 = 0xC0, /* ACPI EC regs TMP 8..15 */
TP_EC_THERMAL_TMP_NA = -128, /* ACPI EC sensor not available */
-
- TPACPI_THERMAL_SENSOR_NA = -128000, /* Sensor not available */
};
-
#define TPACPI_MAX_THERMAL_SENSORS 16 /* Max thermal sensors supported */
struct ibm_thermal_sensors_struct {
s32 temp[TPACPI_MAX_THERMAL_SENSORS];
return n;
}
-static void thermal_dump_all_sensors(void)
-{
- int n, i;
- struct ibm_thermal_sensors_struct t;
-
- n = thermal_get_sensors(&t);
- if (n <= 0)
- return;
-
- printk(TPACPI_NOTICE
- "temperatures (Celsius):");
-
- for (i = 0; i < n; i++) {
- if (t.temp[i] != TPACPI_THERMAL_SENSOR_NA)
- printk(KERN_CONT " %d", (int)(t.temp[i] / 1000));
- else
- printk(KERN_CONT " N/A");
- }
-
- printk(KERN_CONT "\n");
-}
-
/* sysfs temp##_input -------------------------------------------------- */
static ssize_t thermal_temp_input_show(struct device *dev,
res = thermal_get_sensor(idx, &value);
if (res)
return res;
- if (value == TPACPI_THERMAL_SENSOR_NA)
+ if (value == TP_EC_THERMAL_TMP_NA * 1000)
return -ENXIO;
return snprintf(buf, PAGE_SIZE, "%d\n", value);
case TPACPI_THERMAL_ACPI_TMP07:
case TPACPI_THERMAL_ACPI_UPDT:
sysfs_remove_group(&tpacpi_sensors_pdev->dev.kobj,
- &thermal_temp_input8_group);
+ &thermal_temp_input16_group);
break;
case TPACPI_THERMAL_NONE:
default:
}
}
-static int thermal_read(struct seq_file *m)
+static int thermal_read(char *p)
{
+ int len = 0;
int n, i;
struct ibm_thermal_sensors_struct t;
if (unlikely(n < 0))
return n;
- seq_printf(m, "temperatures:\t");
+ len += sprintf(p + len, "temperatures:\t");
if (n > 0) {
for (i = 0; i < (n - 1); i++)
- seq_printf(m, "%d ", t.temp[i] / 1000);
- seq_printf(m, "%d\n", t.temp[i] / 1000);
+ len += sprintf(p + len, "%d ", t.temp[i] / 1000);
+ len += sprintf(p + len, "%d\n", t.temp[i] / 1000);
} else
- seq_printf(m, "not supported\n");
+ len += sprintf(p + len, "not supported\n");
- return 0;
+ return len;
}
static struct ibm_struct thermal_driver_data = {
static u8 ecdump_regs[256];
-static int ecdump_read(struct seq_file *m)
+static int ecdump_read(char *p)
{
+ int len = 0;
int i, j;
u8 v;
- seq_printf(m, "EC "
+ len += sprintf(p + len, "EC "
" +00 +01 +02 +03 +04 +05 +06 +07"
" +08 +09 +0a +0b +0c +0d +0e +0f\n");
for (i = 0; i < 256; i += 16) {
- seq_printf(m, "EC 0x%02x:", i);
+ len += sprintf(p + len, "EC 0x%02x:", i);
for (j = 0; j < 16; j++) {
if (!acpi_ec_read(i + j, &v))
break;
if (v != ecdump_regs[i + j])
- seq_printf(m, " *%02x", v);
+ len += sprintf(p + len, " *%02x", v);
else
- seq_printf(m, " %02x", v);
+ len += sprintf(p + len, " %02x", v);
ecdump_regs[i + j] = v;
}
- seq_putc(m, '\n');
+ len += sprintf(p + len, "\n");
if (j != 16)
break;
}
/* These are way too dangerous to advertise openly... */
#if 0
- seq_printf(m, "commands:\t0x<offset> 0x<value>"
+ len += sprintf(p + len, "commands:\t0x<offset> 0x<value>"
" (<offset> is 00-ff, <value> is 00-ff)\n");
- seq_printf(m, "commands:\t0x<offset> <value> "
+ len += sprintf(p + len, "commands:\t0x<offset> <value> "
" (<offset> is 00-ff, <value> is 0-255)\n");
#endif
- return 0;
+ return len;
}
static int ecdump_write(char *buf)
return status & TP_EC_BACKLIGHT_LVLMSK;
}
-static void tpacpi_brightness_notify_change(void)
-{
- backlight_force_update(ibm_backlight_device,
- BACKLIGHT_UPDATE_HOTKEY);
-}
-
static struct backlight_ops ibm_backlight_data = {
.get_brightness = brightness_get,
.update_status = brightness_update_status,
TPACPI_Q_IBM('1', 'Y', TPACPI_BRGHT_Q_EC), /* T43/p ATI */
/* Models with ATI GPUs that can use ECNVRAM */
- TPACPI_Q_IBM('1', 'R', TPACPI_BRGHT_Q_EC), /* R50,51 T40-42 */
+ TPACPI_Q_IBM('1', 'R', TPACPI_BRGHT_Q_EC),
TPACPI_Q_IBM('1', 'Q', TPACPI_BRGHT_Q_ASK|TPACPI_BRGHT_Q_EC),
- TPACPI_Q_IBM('7', '6', TPACPI_BRGHT_Q_EC), /* R52 */
+ TPACPI_Q_IBM('7', '6', TPACPI_BRGHT_Q_ASK|TPACPI_BRGHT_Q_EC),
TPACPI_Q_IBM('7', '8', TPACPI_BRGHT_Q_ASK|TPACPI_BRGHT_Q_EC),
/* Models with Intel Extreme Graphics 2 */
- TPACPI_Q_IBM('1', 'U', TPACPI_BRGHT_Q_NOEC), /* X40 */
+ TPACPI_Q_IBM('1', 'U', TPACPI_BRGHT_Q_NOEC),
TPACPI_Q_IBM('1', 'V', TPACPI_BRGHT_Q_ASK|TPACPI_BRGHT_Q_EC),
TPACPI_Q_IBM('1', 'W', TPACPI_BRGHT_Q_ASK|TPACPI_BRGHT_Q_EC),
ibm_backlight_device->props.brightness = b & TP_EC_BACKLIGHT_LVLMSK;
backlight_update_status(ibm_backlight_device);
- vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_BRGHT,
- "brightness: registering brightness hotkeys "
- "as change notification\n");
- tpacpi_hotkey_driver_mask_set(hotkey_driver_mask
- | TP_ACPI_HKEY_BRGHTUP_MASK
- | TP_ACPI_HKEY_BRGHTDWN_MASK);;
return 0;
}
tpacpi_brightness_checkpoint_nvram();
}
-static int brightness_read(struct seq_file *m)
+static int brightness_read(char *p)
{
+ int len = 0;
int level;
level = brightness_get(NULL);
if (level < 0) {
- seq_printf(m, "level:\t\tunreadable\n");
+ len += sprintf(p + len, "level:\t\tunreadable\n");
} else {
- seq_printf(m, "level:\t\t%d\n", level);
- seq_printf(m, "commands:\tup, down\n");
- seq_printf(m, "commands:\tlevel <level>"
+ len += sprintf(p + len, "level:\t\t%d\n", level);
+ len += sprintf(p + len, "commands:\tup, down\n");
+ len += sprintf(p + len, "commands:\tlevel <level>"
" (<level> is 0-%d)\n",
(tp_features.bright_16levels) ? 15 : 7);
}
- return 0;
+ return len;
}
static int brightness_write(char *buf)
* Doing it this way makes the syscall restartable in case of EINTR
*/
rc = brightness_set(level);
- if (!rc && ibm_backlight_device)
- backlight_force_update(ibm_backlight_device,
- BACKLIGHT_UPDATE_SYSFS);
return (rc == -EINTR)? -ERESTARTSYS : rc;
}
static int volume_offset = 0x30;
-static int volume_read(struct seq_file *m)
+static int volume_read(char *p)
{
+ int len = 0;
u8 level;
if (!acpi_ec_read(volume_offset, &level)) {
- seq_printf(m, "level:\t\tunreadable\n");
+ len += sprintf(p + len, "level:\t\tunreadable\n");
} else {
- seq_printf(m, "level:\t\t%d\n", level & 0xf);
- seq_printf(m, "mute:\t\t%s\n", onoff(level, 6));
- seq_printf(m, "commands:\tup, down, mute\n");
- seq_printf(m, "commands:\tlevel <level>"
+ len += sprintf(p + len, "level:\t\t%d\n", level & 0xf);
+ len += sprintf(p + len, "mute:\t\t%s\n", onoff(level, 6));
+ len += sprintf(p + len, "commands:\tup, down, mute\n");
+ len += sprintf(p + len, "commands:\tlevel <level>"
" (<level> is 0-15)\n");
}
- return 0;
+ return len;
}
static int volume_write(char *buf)
}
}
-static int fan_read(struct seq_file *m)
+static int fan_read(char *p)
{
+ int len = 0;
int rc;
u8 status;
unsigned int speed = 0;
if (rc < 0)
return rc;
- seq_printf(m, "status:\t\t%s\n"
+ len += sprintf(p + len, "status:\t\t%s\n"
"level:\t\t%d\n",
(status != 0) ? "enabled" : "disabled", status);
break;
if (rc < 0)
return rc;
- seq_printf(m, "status:\t\t%s\n",
+ len += sprintf(p + len, "status:\t\t%s\n",
(status != 0) ? "enabled" : "disabled");
rc = fan_get_speed(&speed);
if (rc < 0)
return rc;
- seq_printf(m, "speed:\t\t%d\n", speed);
+ len += sprintf(p + len, "speed:\t\t%d\n", speed);
if (status & TP_EC_FAN_FULLSPEED)
/* Disengaged mode takes precedence */
- seq_printf(m, "level:\t\tdisengaged\n");
+ len += sprintf(p + len, "level:\t\tdisengaged\n");
else if (status & TP_EC_FAN_AUTO)
- seq_printf(m, "level:\t\tauto\n");
+ len += sprintf(p + len, "level:\t\tauto\n");
else
- seq_printf(m, "level:\t\t%d\n", status);
+ len += sprintf(p + len, "level:\t\t%d\n", status);
break;
case TPACPI_FAN_NONE:
default:
- seq_printf(m, "status:\t\tnot supported\n");
+ len += sprintf(p + len, "status:\t\tnot supported\n");
}
if (fan_control_commands & TPACPI_FAN_CMD_LEVEL) {
- seq_printf(m, "commands:\tlevel <level>");
+ len += sprintf(p + len, "commands:\tlevel <level>");
switch (fan_control_access_mode) {
case TPACPI_FAN_WR_ACPI_SFAN:
- seq_printf(m, " (<level> is 0-7)\n");
+ len += sprintf(p + len, " (<level> is 0-7)\n");
break;
default:
- seq_printf(m, " (<level> is 0-7, "
+ len += sprintf(p + len, " (<level> is 0-7, "
"auto, disengaged, full-speed)\n");
break;
}
}
if (fan_control_commands & TPACPI_FAN_CMD_ENABLE)
- seq_printf(m, "commands:\tenable, disable\n"
+ len += sprintf(p + len, "commands:\tenable, disable\n"
"commands:\twatchdog <timeout> (<timeout> "
"is 0 (off), 1-120 (seconds))\n");
if (fan_control_commands & TPACPI_FAN_CMD_SPEED)
- seq_printf(m, "commands:\tspeed <speed>"
+ len += sprintf(p + len, "commands:\tspeed <speed>"
" (<speed> is 0-65535)\n");
- return 0;
+ return len;
}
static int fan_write_cmd_level(const char *cmd, int *rc)
*/
static void tpacpi_driver_event(const unsigned int hkey_event)
{
- if (ibm_backlight_device) {
- switch (hkey_event) {
- case TP_HKEY_EV_BRGHT_UP:
- case TP_HKEY_EV_BRGHT_DOWN:
- tpacpi_brightness_notify_change();
- }
- }
}
"%s installed\n", ibm->name);
if (ibm->read) {
- mode_t mode = iibm->base_procfs_mode;
-
- if (!mode)
- mode = S_IRUGO;
- if (ibm->write)
- mode |= S_IWUSR;
- entry = proc_create_data(ibm->name, mode, proc_dir,
- &dispatch_proc_fops, ibm);
+ entry = create_proc_entry(ibm->name,
+ S_IFREG | S_IRUGO | S_IWUSR,
+ proc_dir);
if (!entry) {
printk(TPACPI_ERR "unable to create proc entry %s\n",
ibm->name);
ret = -ENODEV;
goto err_out;
}
+ entry->data = ibm;
+ entry->read_proc = &dispatch_procfs_read;
+ if (ibm->write)
+ entry->write_proc = &dispatch_procfs_write;
ibm->flags.proc_created = 1;
}
#ifdef CONFIG_THINKPAD_ACPI_VIDEO
{
.init = video_init,
- .base_procfs_mode = S_IRUSR,
.data = &video_driver_data,
},
#endif
return -EINVAL;
}
-module_param(experimental, int, 0444);
+module_param(experimental, int, 0);
MODULE_PARM_DESC(experimental,
"Enables experimental features when non-zero");
module_param_named(debug, dbg_level, uint, 0);
MODULE_PARM_DESC(debug, "Sets debug level bit-mask");
-module_param(force_load, bool, 0444);
+module_param(force_load, bool, 0);
MODULE_PARM_DESC(force_load,
"Attempts to load the driver even on a "
"mis-identified ThinkPad when true");
-module_param_named(fan_control, fan_control_allowed, bool, 0444);
+module_param_named(fan_control, fan_control_allowed, bool, 0);
MODULE_PARM_DESC(fan_control,
"Enables setting fan parameters features when true");
-module_param_named(brightness_mode, brightness_mode, uint, 0444);
+module_param_named(brightness_mode, brightness_mode, uint, 0);
MODULE_PARM_DESC(brightness_mode,
"Selects brightness control strategy: "
"0=auto, 1=EC, 2=UCMS, 3=EC+NVRAM");
-module_param(brightness_enable, uint, 0444);
+module_param(brightness_enable, uint, 0);
MODULE_PARM_DESC(brightness_enable,
"Enables backlight control when 1, disables when 0");
-module_param(hotkey_report_mode, uint, 0444);
+module_param(hotkey_report_mode, uint, 0);
MODULE_PARM_DESC(hotkey_report_mode,
"used for backwards compatibility with userspace, "
"see documentation");
TPACPI_PARAM(fan);
#ifdef CONFIG_THINKPAD_ACPI_DEBUGFACILITIES
-module_param(dbg_wlswemul, uint, 0444);
+module_param(dbg_wlswemul, uint, 0);
MODULE_PARM_DESC(dbg_wlswemul, "Enables WLSW emulation");
module_param_named(wlsw_state, tpacpi_wlsw_emulstate, bool, 0);
MODULE_PARM_DESC(wlsw_state,
"Initial state of the emulated WLSW switch");
-module_param(dbg_bluetoothemul, uint, 0444);
+module_param(dbg_bluetoothemul, uint, 0);
MODULE_PARM_DESC(dbg_bluetoothemul, "Enables bluetooth switch emulation");
module_param_named(bluetooth_state, tpacpi_bluetooth_emulstate, bool, 0);
MODULE_PARM_DESC(bluetooth_state,
"Initial state of the emulated bluetooth switch");
-module_param(dbg_wwanemul, uint, 0444);
+module_param(dbg_wwanemul, uint, 0);
MODULE_PARM_DESC(dbg_wwanemul, "Enables WWAN switch emulation");
module_param_named(wwan_state, tpacpi_wwan_emulstate, bool, 0);
MODULE_PARM_DESC(wwan_state,
"Initial state of the emulated WWAN switch");
-module_param(dbg_uwbemul, uint, 0444);
+module_param(dbg_uwbemul, uint, 0);
MODULE_PARM_DESC(dbg_uwbemul, "Enables UWB switch emulation");
module_param_named(uwb_state, tpacpi_uwb_emulstate, bool, 0);
MODULE_PARM_DESC(uwb_state,
PCI_VENDOR_ID_IBM;
tpacpi_inputdev->id.product = TPACPI_HKEY_INPUT_PRODUCT;
tpacpi_inputdev->id.version = TPACPI_HKEY_INPUT_VERSION;
- tpacpi_inputdev->dev.parent = &tpacpi_pdev->dev;
}
for (i = 0; i < ARRAY_SIZE(ibms_init); i++) {
ret = ibm_init(&ibms_init[i]);
return ret;
}
}
-
- tpacpi_lifecycle = TPACPI_LIFE_RUNNING;
-
ret = input_register_device(tpacpi_inputdev);
if (ret < 0) {
printk(TPACPI_ERR "unable to register input device\n");
tp_features.input_device_registered = 1;
}
+ tpacpi_lifecycle = TPACPI_LIFE_RUNNING;
return 0;
}
empty_design_prop = POWER_SUPPLY_PROP_ENERGY_EMPTY_DESIGN;
now_prop = POWER_SUPPLY_PROP_ENERGY_NOW;
avg_prop = POWER_SUPPLY_PROP_ENERGY_AVG;
- break;
case SOURCE_VOLTAGE:
full_prop = POWER_SUPPLY_PROP_VOLTAGE_MAX;
empty_prop = POWER_SUPPLY_PROP_VOLTAGE_MIN;
if (ret)
return ret;
- val->intval = (s16)be16_to_cpu(ec_word) * 9760L / 32;
+ val->intval = (int)be16_to_cpu(ec_word) * 9760L / 32;
break;
case POWER_SUPPLY_PROP_CURRENT_AVG:
ret = olpc_ec_cmd(EC_BAT_CURRENT, NULL, 0, (void *)&ec_word, 2);
if (ret)
return ret;
- val->intval = (s16)be16_to_cpu(ec_word) * 15625L / 120;
+ val->intval = (int)be16_to_cpu(ec_word) * 15625L / 120;
break;
case POWER_SUPPLY_PROP_CAPACITY:
ret = olpc_ec_cmd(EC_BAT_SOC, NULL, 0, &ec_byte, 1);
if (ret)
return ret;
- val->intval = (s16)be16_to_cpu(ec_word) * 100 / 256;
+ val->intval = (int)be16_to_cpu(ec_word) * 100 / 256;
break;
case POWER_SUPPLY_PROP_TEMP_AMBIENT:
ret = olpc_ec_cmd(EC_AMB_TEMP, NULL, 0, (void *)&ec_word, 2);
if (ret)
return ret;
- val->intval = (s16)be16_to_cpu(ec_word) * 6250 / 15;
+ val->intval = (int)be16_to_cpu(ec_word) * 6250 / 15;
break;
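The (s16) and (int) casts above differ only for readings with the top bit set: be16_to_cpu() yields an unsigned 16-bit value, so a raw word of 0xFF38 becomes -200 after the (s16) cast and subsequent sign extension, but 65336 when widened directly to int. The signed form matters whenever the EC can report a negative quantity, for example a current measured while discharging.
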
case POWER_SUPPLY_PROP_SERIAL_NUMBER:
ret = olpc_ec_cmd(EC_BAT_SERIAL, NULL, 0, (void *)&ser_buf, 8);
{
rtc_dev_exit();
class_destroy(rtc_class);
- idr_destroy(&rtc_idr);
}
subsys_initcall(rtc_init);
}
}
- cmos_rtc.dev = dev;
- dev_set_drvdata(dev, &cmos_rtc);
-
cmos_rtc.rtc = rtc_device_register(driver_name, dev,
&cmos_rtc_ops, THIS_MODULE);
if (IS_ERR(cmos_rtc.rtc)) {
goto cleanup0;
}
+ cmos_rtc.dev = dev;
+ dev_set_drvdata(dev, &cmos_rtc);
rename_region(ports, dev_name(&cmos_rtc.rtc->dev));
spin_lock_irq(&rtc_lock);
{
struct coh901331_port *rtap = dev_get_drvdata(&pdev->dev);
- if (device_may_wakeup(&pdev->dev)) {
+ if (device_may_wakeup(&pdev->dev))
disable_irq_wake(rtap->irq);
- } else {
+ else
clk_enable(rtap->clk);
writel(rtap->irqmaskstore, rtap->virtbase + COH901331_IRQ_MASK);
clk_disable(rtap->clk);
- }
return 0;
}
#else
read_rtc:
/* read RTC registers */
- tmp = ds1307->read_block_data(ds1307->client, ds1307->offset, 8, buf);
+ tmp = ds1307->read_block_data(ds1307->client, 0, 8, buf);
if (tmp != 8) {
pr_debug("read error %d\n", tmp);
err = -EIO;
if (ds1307->regs[DS1307_REG_HOUR] & DS1307_BIT_PM)
tmp += 12;
i2c_smbus_write_byte_data(client,
- ds1307->offset + DS1307_REG_HOUR,
+ DS1307_REG_HOUR,
bin2bcd(tmp));
}
pr_debug("s3c2410_rtc: RTCCON=%02x\n",
readb(s3c_rtc_base + S3C2410_RTCCON));
+ s3c_rtc_setfreq(&pdev->dev, 1);
+
device_init_wakeup(&pdev->dev, 1);
/* register RTC and exit */
rtc->max_user_freq = 128;
platform_set_drvdata(pdev, rtc);
-
- s3c_rtc_setfreq(&pdev->dev, 1);
-
return 0;
err_nortc:
old_regs = set_irq_regs(regs);
s390_idle_check();
irq_enter();
- __get_cpu_var(s390_idle).nohz_delay = 1;
if (S390_lowcore.int_clock >= S390_lowcore.clock_comparator)
/* Serve timer interrupts first. */
clock_comparator_work();
/* Does this really need to be GFP_DMA? */
p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
if(!p) {
- dprintk((KERN_DEBUG "aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
+ kfree (usg);
+ dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
usg->sg[i].count,i,usg->count));
- kfree(usg);
rcode = -ENOMEM;
goto cleanup;
}
tinfo->curr.transport_version = 2;
tinfo->goal.transport_version = 2;
tinfo->goal.ppr_options = 0;
- if (scb != NULL) {
- /*
- * Remove any SCBs in the waiting
- * for selection queue that may
- * also be for this target so that
- * command ordering is preserved.
- */
- ahd_freeze_devq(ahd, scb);
- ahd_qinfifo_requeue_tail(ahd, scb);
- }
+ /*
+ * Remove any SCBs in the waiting for selection
+ * queue that may also be for this target so
+ * that command ordering is preserved.
+ */
+ ahd_freeze_devq(ahd, scb);
+ ahd_qinfifo_requeue_tail(ahd, scb);
printerror = 0;
}
} else if (ahd_sent_msg(ahd, AHDMSG_EXT, MSG_EXT_WDTR, FALSE)
MSG_EXT_WDTR_BUS_8_BIT,
AHD_TRANS_CUR|AHD_TRANS_GOAL,
/*paused*/TRUE);
- if (scb != NULL) {
- /*
- * Remove any SCBs in the waiting for
- * selection queue that may also be for
- * this target so that command ordering
- * is preserved.
- */
- ahd_freeze_devq(ahd, scb);
- ahd_qinfifo_requeue_tail(ahd, scb);
- }
+ /*
+ * Remove any SCBs in the waiting for selection
+ * queue that may also be for this target so that
+ * command ordering is preserved.
+ */
+ ahd_freeze_devq(ahd, scb);
+ ahd_qinfifo_requeue_tail(ahd, scb);
printerror = 0;
} else if (ahd_sent_msg(ahd, AHDMSG_EXT, MSG_EXT_SDTR, FALSE)
&& ppr_busfree == 0) {
/*ppr_options*/0,
AHD_TRANS_CUR|AHD_TRANS_GOAL,
/*paused*/TRUE);
- if (scb != NULL) {
- /*
- * Remove any SCBs in the waiting for
- * selection queue that may also be for
- * this target so that command ordering
- * is preserved.
- */
- ahd_freeze_devq(ahd, scb);
- ahd_qinfifo_requeue_tail(ahd, scb);
- }
+ /*
+ * Remove any SCBs in the waiting for selection
+ * queue that may also be for this target so that
+ * command ordering is preserved.
+ */
+ ahd_freeze_devq(ahd, scb);
+ ahd_qinfifo_requeue_tail(ahd, scb);
printerror = 0;
} else if ((ahd->msg_flags & MSG_FLAG_EXPECT_IDE_BUSFREE) != 0
&& ahd_sent_msg(ahd, AHDMSG_1B,
* the message phases. We check it last in case we
* had to send some other message that caused a busfree.
*/
- if (scb != NULL && printerror != 0
+ if (printerror != 0
&& (lastphase == P_MESGIN || lastphase == P_MESGOUT)
&& ((ahd->msg_flags & MSG_FLAG_EXPECT_PPR_BUSFREE) != 0)) {
if (info->scsi.phase == PHASE_IDLE)
fas216_kick(info);
- mod_timer(&info->eh_timer, jiffies + 30 * HZ);
+ mod_timer(&info->eh_timer, 30 * HZ);
spin_unlock_irqrestore(&info->host_lock, flags);
/*
ha = gdth_find_ha(gen.ionode);
if (!ha)
return -EFAULT;
-
- if (gen.data_len > INT_MAX)
- return -EINVAL;
- if (gen.sense_len > INT_MAX)
- return -EINVAL;
- if (gen.data_len + gen.sense_len > INT_MAX)
- return -EINVAL;
-
if (gen.data_len + gen.sense_len != 0) {
if (!(buf = gdth_ioctl_alloc(ha, gen.data_len + gen.sense_len,
FALSE, &paddr)))
DECLARE_COMPLETION_ONSTACK(comp);
int wait;
unsigned long flags;
- signed long timeout = IBMVFC_ABORT_WAIT_TIMEOUT * HZ;
+ signed long timeout = init_timeout * HZ;
ENTER;
do {
if (crq->valid & 0x80) {
if (++async_crq->cur == async_crq->size)
async_crq->cur = 0;
- rmb();
} else
crq = NULL;
if (crq->valid & 0x80) {
if (++queue->cur == queue->size)
queue->cur = 0;
- rmb();
} else
crq = NULL;
while ((async = ibmvfc_next_async_crq(vhost)) != NULL) {
ibmvfc_handle_async(async, vhost);
async->valid = 0;
- wmb();
}
/* Pull all the valid messages off the CRQ */
while ((crq = ibmvfc_next_crq(vhost)) != NULL) {
ibmvfc_handle_crq(crq, vhost);
crq->valid = 0;
- wmb();
}
vio_enable_interrupts(vdev);
vio_disable_interrupts(vdev);
ibmvfc_handle_async(async, vhost);
async->valid = 0;
- wmb();
} else if ((crq = ibmvfc_next_crq(vhost)) != NULL) {
vio_disable_interrupts(vdev);
ibmvfc_handle_crq(crq, vhost);
crq->valid = 0;
- wmb();
} else
done = 1;
}
#define IBMVFC_ADISC_PLUS_CANCEL_TIMEOUT \
(IBMVFC_ADISC_TIMEOUT + IBMVFC_ADISC_CANCEL_TIMEOUT)
#define IBMVFC_INIT_TIMEOUT 120
-#define IBMVFC_ABORT_WAIT_TIMEOUT 40
#define IBMVFC_MAX_REQUESTS_DEFAULT 100
#define IBMVFC_DEBUG 0
WARN_ON(hdrlength >= 256);
hdr->hlength = hdrlength & 0xFF;
- hdr->cmdsn = task->cmdsn = cpu_to_be32(session->cmdsn);
if (session->tt->init_task && session->tt->init_task(task))
return -EIO;
task->state = ISCSI_TASK_RUNNING;
+ hdr->cmdsn = task->cmdsn = cpu_to_be32(session->cmdsn);
session->cmdsn++;
conn->scsicmd_pdus_cnt++;
session->state = ISCSI_STATE_TERMINATE;
else if (conn->stop_stage != STOP_CONN_RECOVER)
session->state = ISCSI_STATE_IN_RECOVERY;
-
- old_stop_stage = conn->stop_stage;
- conn->stop_stage = flag;
spin_unlock_bh(&session->lock);
del_timer_sync(&conn->transport_timer);
iscsi_suspend_tx(conn);
spin_lock_bh(&session->lock);
+ old_stop_stage = conn->stop_stage;
+ conn->stop_stage = flag;
conn->c_stage = ISCSI_CONN_STOPPED;
spin_unlock_bh(&session->lock);
static struct ata_port_operations sas_sata_ops = {
.phy_reset = sas_ata_phy_reset,
.post_internal_cmd = sas_ata_post_internal,
- .qc_defer = ata_std_qc_defer,
.qc_prep = ata_noop_qc_prep,
.qc_issue = sas_ata_qc_issue,
.qc_fill_rtf = sas_ata_qc_fill_rtf,
void sas_ata_task_abort(struct sas_task *task)
{
struct ata_queued_cmd *qc = task->uldd_task;
- struct request_queue *q = qc->scsicmd->device->request_queue;
struct completion *waiting;
- unsigned long flags;
/* Bounce SCSI-initiated commands to the SCSI EH */
if (qc->scsicmd) {
- spin_lock_irqsave(q->queue_lock, flags);
blk_abort_request(qc->scsicmd->request);
- spin_unlock_irqrestore(q->queue_lock, flags);
scsi_schedule_eh(qc->scsicmd->device->host);
return;
}
void sas_task_abort(struct sas_task *task)
{
struct scsi_cmnd *sc = task->uldd_task;
- struct request_queue *q = sc->device->request_queue;
- unsigned long flags;
/* Escape for libsas internal commands */
if (!sc) {
return;
}
- spin_lock_irqsave(q->queue_lock, flags);
blk_abort_request(sc->request);
- spin_unlock_irqrestore(q->queue_lock, flags);
scsi_schedule_eh(sc->device->host);
}
compat_alloc_user_space(sizeof(struct megasas_iocpacket));
int i;
int error = 0;
- compat_uptr_t ptr;
if (clear_user(ioc, sizeof(*ioc)))
return -EFAULT;
copy_in_user(&ioc->sge_count, &cioc->sge_count, sizeof(u32)))
return -EFAULT;
- /*
- * The sense_ptr is used in megasas_mgmt_fw_ioctl only when
- * sense_len is not null, so prepare the 64bit value under
- * the same condition.
- */
- if (ioc->sense_len) {
- void __user **sense_ioc_ptr =
- (void __user **)(ioc->frame.raw + ioc->sense_off);
- compat_uptr_t *sense_cioc_ptr =
- (compat_uptr_t *)(cioc->frame.raw + cioc->sense_off);
- if (get_user(ptr, sense_cioc_ptr) ||
- put_user(compat_ptr(ptr), sense_ioc_ptr))
- return -EFAULT;
- }
-
for (i = 0; i < MAX_IOCTL_SGE; i++) {
+ compat_uptr_t ptr;
+
if (get_user(ptr, &cioc->sgl[i].iov_base) ||
put_user(compat_ptr(ptr), &ioc->sgl[i].iov_base) ||
copy_in_user(&ioc->sgl[i].iov_len,
struct _sas_port *mpt2sas_port;
struct _sas_device *sas_device;
struct _sas_node *expander_sibling;
- struct _raid_device *raid_device, *next;
- struct MPT2SAS_TARGET *sas_target_priv_data;
struct workqueue_struct *wq;
unsigned long flags;
if (wq)
destroy_workqueue(wq);
- /* release all the volumes */
- list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list,
- list) {
- if (raid_device->starget) {
- sas_target_priv_data =
- raid_device->starget->hostdata;
- sas_target_priv_data->deleted = 1;
- scsi_remove_target(&raid_device->starget->dev);
- }
- printk(MPT2SAS_INFO_FMT "removing handle(0x%04x), wwid"
- "(0x%016llx)\n", ioc->name, raid_device->handle,
- (unsigned long long) raid_device->wwid);
- _scsih_raid_device_remove(ioc, raid_device);
- }
-
/* free ports attached to the sas_host */
retry_again:
list_for_each_entry(mpt2sas_port,
{ PCI_VDEVICE(MARVELL, 0x9180), chip_9180 },
{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },
- { PCI_VDEVICE(ADAPTEC2, 0x0450), chip_6440 },
{ } /* terminate list */
};
uint16_t mb[MAILBOX_REGISTER_COUNT], i;
int err;
- spin_unlock_irq(ha->host->host_lock);
err = request_firmware(&fw, ql1280_board_tbl[ha->devnum].fwname,
&ha->pdev->dev);
- spin_lock_irq(ha->host->host_lock);
if (err) {
printk(KERN_ERR "Failed to load image \"%s\" err %d\n",
ql1280_board_tbl[ha->devnum].fwname, err);
return -ENOMEM;
#endif
- spin_unlock_irq(ha->host->host_lock);
err = request_firmware(&fw, ql1280_board_tbl[ha->devnum].fwname,
&ha->pdev->dev);
- spin_lock_irq(ha->host->host_lock);
if (err) {
printk(KERN_ERR "Failed to load image \"%s\" err %d\n",
ql1280_board_tbl[ha->devnum].fwname, err);
extern void qla25xx_wrt_req_reg(struct qla_hw_data *, uint16_t, uint16_t);
extern void qla25xx_wrt_rsp_reg(struct qla_hw_data *, uint16_t, uint16_t);
extern void qla24xx_wrt_rsp_reg(struct qla_hw_data *, uint16_t, uint16_t);
+extern struct scsi_qla_host * qla25xx_get_host(struct rsp_que *);
#endif /* _QLA_GBL_H */
sense_len = rsp_info_len = resid_len = fw_resid_len = 0;
if (IS_FWI2_CAPABLE(ha)) {
- if (scsi_status & SS_SENSE_LEN_VALID)
- sense_len = le32_to_cpu(sts24->sense_len);
- if (scsi_status & SS_RESPONSE_INFO_LEN_VALID)
- rsp_info_len = le32_to_cpu(sts24->rsp_data_len);
- if (scsi_status & (SS_RESIDUAL_UNDER | SS_RESIDUAL_OVER))
- resid_len = le32_to_cpu(sts24->rsp_residual_count);
- if (comp_status == CS_DATA_UNDERRUN)
- fw_resid_len = le32_to_cpu(sts24->residual_len);
+ sense_len = le32_to_cpu(sts24->sense_len);
+ rsp_info_len = le32_to_cpu(sts24->rsp_data_len);
+ resid_len = le32_to_cpu(sts24->rsp_residual_count);
+ fw_resid_len = le32_to_cpu(sts24->residual_len);
rsp_info = sts24->data;
sense_data = sts24->data;
host_to_fcp_swap(sts24->data, sizeof(sts24->data));
} else {
- if (scsi_status & SS_SENSE_LEN_VALID)
- sense_len = le16_to_cpu(sts->req_sense_length);
- if (scsi_status & SS_RESPONSE_INFO_LEN_VALID)
- rsp_info_len = le16_to_cpu(sts->rsp_info_len);
+ sense_len = le16_to_cpu(sts->req_sense_length);
+ rsp_info_len = le16_to_cpu(sts->rsp_info_len);
resid_len = le32_to_cpu(sts->residual_length);
rsp_info = sts->rsp_info;
sense_data = sts->req_sense_data;
break;
case CS_DATA_UNDERRUN:
- DEBUG2(printk(KERN_INFO
- "scsi(%ld:%d:%d) UNDERRUN status detected 0x%x-0x%x. "
- "resid=0x%x fw_resid=0x%x cdb=0x%x os_underflow=0x%x\n",
- vha->host_no, cp->device->id, cp->device->lun, comp_status,
- scsi_status, resid_len, fw_resid_len, cp->cmnd[0],
- cp->underflow));
-
+ resid = resid_len;
/* Use F/W calculated residual length. */
- resid = IS_FWI2_CAPABLE(ha) ? fw_resid_len : resid_len;
- scsi_set_resid(cp, resid);
- if (scsi_status & SS_RESIDUAL_UNDER) {
- if (IS_FWI2_CAPABLE(ha) && fw_resid_len != resid_len) {
- DEBUG2(printk(
- "scsi(%ld:%d:%d:%d) Dropped frame(s) "
- "detected (%x of %x bytes)...residual "
- "length mismatch...retrying command.\n",
- vha->host_no, cp->device->channel,
- cp->device->id, cp->device->lun, resid,
- scsi_bufflen(cp)));
-
- cp->result = DID_ERROR << 16 | lscsi_status;
- break;
+ if (IS_FWI2_CAPABLE(ha)) {
+ if (!(scsi_status & SS_RESIDUAL_UNDER)) {
+ lscsi_status = 0;
+ } else if (resid != fw_resid_len) {
+ scsi_status &= ~SS_RESIDUAL_UNDER;
+ lscsi_status = 0;
}
+ resid = fw_resid_len;
+ }
- if (!lscsi_status &&
- ((unsigned)(scsi_bufflen(cp) - resid) <
- cp->underflow)) {
- qla_printk(KERN_INFO, ha,
- "scsi(%ld:%d:%d:%d): Mid-layer underflow "
- "detected (%x of %x bytes)...returning "
- "error status.\n", vha->host_no,
- cp->device->channel, cp->device->id,
- cp->device->lun, resid, scsi_bufflen(cp));
-
- cp->result = DID_ERROR << 16;
- break;
- }
- } else if (!lscsi_status) {
- DEBUG2(printk(
- "scsi(%ld:%d:%d:%d) Dropped frame(s) detected "
- "(%x of %x bytes)...firmware reported underrun..."
- "retrying command.\n", vha->host_no,
- cp->device->channel, cp->device->id,
- cp->device->lun, resid, scsi_bufflen(cp)));
+ if (scsi_status & SS_RESIDUAL_UNDER) {
+ scsi_set_resid(cp, resid);
+ } else {
+ DEBUG2(printk(KERN_INFO
+ "scsi(%ld:%d:%d) UNDERRUN status detected "
+ "0x%x-0x%x. resid=0x%x fw_resid=0x%x cdb=0x%x "
+ "os_underflow=0x%x\n", vha->host_no,
+ cp->device->id, cp->device->lun, comp_status,
+ scsi_status, resid_len, resid, cp->cmnd[0],
+ cp->underflow));
- cp->result = DID_ERROR << 16;
- break;
}
- cp->result = DID_OK << 16 | lscsi_status;
-
/*
* Check to see if SCSI Status is non zero. If so report SCSI
* Status.
*/
if (lscsi_status != 0) {
+ cp->result = DID_OK << 16 | lscsi_status;
+
if (lscsi_status == SAM_STAT_TASK_SET_FULL) {
DEBUG2(printk(KERN_INFO
"scsi(%ld): QUEUE FULL status detected "
break;
qla2x00_handle_sense(sp, sense_data, sense_len, rsp);
+ } else {
+ /*
+ * If RISC reports underrun and target does not report
+ * it then we must have a lost frame, so tell upper
+ * layer to retry it by reporting an error.
+ */
+ if (!(scsi_status & SS_RESIDUAL_UNDER)) {
+ DEBUG2(printk("scsi(%ld:%d:%d:%d) Dropped "
+ "frame(s) detected (%x of %x bytes)..."
+ "retrying command.\n",
+ vha->host_no, cp->device->channel,
+ cp->device->id, cp->device->lun, resid,
+ scsi_bufflen(cp)));
+
+ scsi_set_resid(cp, resid);
+ cp->result = DID_ERROR << 16;
+ break;
+ }
+
+ /* Handle mid-layer underflow */
+ if ((unsigned)(scsi_bufflen(cp) - resid) <
+ cp->underflow) {
+ qla_printk(KERN_INFO, ha,
+ "scsi(%ld:%d:%d:%d): Mid-layer underflow "
+ "detected (%x of %x bytes)...returning "
+ "error status.\n", vha->host_no,
+ cp->device->channel, cp->device->id,
+ cp->device->lun, resid,
+ scsi_bufflen(cp));
+
+ cp->result = DID_ERROR << 16;
+ break;
+ }
+
+ /* Everybody online, looking good... */
+ cp->result = DID_OK << 16;
}
break;
spin_lock_irq(&ha->hardware_lock);
- vha = pci_get_drvdata(ha->pdev);
+ vha = qla25xx_get_host(rsp);
qla24xx_process_response_queue(vha, rsp);
if (!ha->mqenable) {
WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
/* If possible, enable MSI-X. */
if (!IS_QLA2432(ha) && !IS_QLA2532(ha) &&
- !IS_QLA8432(ha) && !IS_QLA8001(ha))
- goto skip_msi;
-
- if (ha->pdev->subsystem_vendor == PCI_VENDOR_ID_HP &&
- (ha->pdev->subsystem_device == 0x7040 ||
- ha->pdev->subsystem_device == 0x7041 ||
- ha->pdev->subsystem_device == 0x1705)) {
- DEBUG2(qla_printk(KERN_WARNING, ha,
- "MSI-X: Unsupported ISP2432 SSVID/SSDID (0x%X,0x%X).\n",
- ha->pdev->subsystem_vendor,
- ha->pdev->subsystem_device));
- goto skip_msi;
- }
+ !IS_QLA8432(ha) && !IS_QLA8001(ha))
+ goto skip_msix;
if (IS_QLA2432(ha) && (ha->pdev->revision < QLA_MSIX_CHIP_REV_24XX ||
!QLA_MSIX_FW_MODE_1(ha->fw_attributes))) {
DEBUG2(qla_printk(KERN_WARNING, ha,
"MSI-X: Unsupported ISP2432 (0x%X, 0x%X).\n",
ha->pdev->revision, ha->fw_attributes));
+
goto skip_msix;
}
+ if (ha->pdev->subsystem_vendor == PCI_VENDOR_ID_HP &&
+ (ha->pdev->subsystem_device == 0x7040 ||
+ ha->pdev->subsystem_device == 0x7041 ||
+ ha->pdev->subsystem_device == 0x1705)) {
+ DEBUG2(qla_printk(KERN_WARNING, ha,
+ "MSI-X: Unsupported ISP2432 SSVID/SSDID (0x%X, 0x%X).\n",
+ ha->pdev->subsystem_vendor,
+ ha->pdev->subsystem_device));
+
+ goto skip_msi;
+ }
+
ret = qla24xx_enable_msix(ha, rsp);
if (!ret) {
DEBUG2(qla_printk(KERN_INFO, ha,
msix->rsp = rsp;
return ret;
}
+
+struct scsi_qla_host *
+qla25xx_get_host(struct rsp_que *rsp)
+{
+ srb_t *sp;
+ struct qla_hw_data *ha = rsp->hw;
+ struct scsi_qla_host *vha = NULL;
+ struct sts_entry_24xx *pkt;
+ struct req_que *req;
+ uint16_t que;
+ uint32_t handle;
+
+ pkt = (struct sts_entry_24xx *) rsp->ring_ptr;
+ que = MSW(pkt->handle);
+ handle = (uint32_t) LSW(pkt->handle);
+ req = ha->req_q_map[que];
+ if (handle < MAX_OUTSTANDING_COMMANDS) {
+ sp = req->outstanding_cmds[handle];
+ if (sp)
+ return sp->fcport->vha;
+ else
+ goto base_que;
+ }
+base_que:
+ vha = pci_get_drvdata(ha->pdev);
+ return vha;
+}
static void qla_do_work(struct work_struct *work)
{
- unsigned long flags;
struct rsp_que *rsp = container_of(work, struct rsp_que, q_work);
struct scsi_qla_host *vha;
- struct qla_hw_data *ha = rsp->hw;
- spin_lock_irqsave(&rsp->hw->hardware_lock, flags);
- vha = pci_get_drvdata(ha->pdev);
+ vha = qla25xx_get_host(rsp);
qla24xx_process_response_queue(vha, rsp);
- spin_unlock_irqrestore(&rsp->hw->hardware_lock, flags);
}
/* create response queue */
static sector_t get_sdebug_capacity(void)
{
if (scsi_debug_virtual_gb > 0)
- return (sector_t)scsi_debug_virtual_gb *
- (1073741824 / scsi_debug_sector_size);
+ return 2048 * 1024 * (sector_t)scsi_debug_virtual_gb;
else
return sdebug_store_sectors;
}
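The two variants agree only when scsi_debug_sector_size is 512: one gibibyte is 1073741824 bytes, and 1073741824 / 512 = 2097152 = 2048 * 1024, which is the constant the hard-coded form uses. Dividing by scsi_debug_sector_size instead scales the count for other emulated sector sizes, e.g. 262144 sectors per GiB with 4096-byte sectors.
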
if (scmd->device->allow_restart &&
(sshdr.asc == 0x04) && (sshdr.ascq == 0x02))
return FAILED;
-
- if (blk_barrier_rq(scmd->request))
- /*
- * barrier requests should always retry on UA
- * otherwise block will get a spurious error
- */
- return NEEDS_RETRY;
- else
- /*
- * for normal (non barrier) commands, pass the
- * UA upwards for a determination in the
- * completion functions
- */
- return SUCCESS;
+ return SUCCESS;
/* these three are not supported */
case COPY_ABORTED:
case SG_SCSI_RESET_DEVICE:
val = SCSI_TRY_RESET_DEVICE;
break;
- case SG_SCSI_RESET_TARGET:
- val = SCSI_TRY_RESET_TARGET;
- break;
case SG_SCSI_RESET_BUS:
val = SCSI_TRY_RESET_BUS;
break;
* we already took a copy of the original into rq->errors which
* is what gets returned to the user
*/
- if (sense_valid && (sshdr.sense_key == RECOVERED_ERROR)) {
- /* if ATA PASS-THROUGH INFORMATION AVAILABLE skip
- * print since caller wants ATA registers. Only occurs on
- * SCSI ATA PASS_THROUGH commands when CK_COND=1
- */
- if ((sshdr.asc == 0x0) && (sshdr.ascq == 0x1d))
- ;
- else if (!(req->cmd_flags & REQ_QUIET))
+ if (sense_valid && sshdr.sense_key == RECOVERED_ERROR) {
+ if (!(req->cmd_flags & REQ_QUIET))
scsi_print_sense("", cmd);
result = 0;
/* BLOCK_PC may have set error */
sdev->sdev_state = SDEV_RUNNING;
else if (sdev->sdev_state == SDEV_CREATED_BLOCK)
sdev->sdev_state = SDEV_CREATED;
- else if (sdev->sdev_state != SDEV_CANCEL &&
- sdev->sdev_state != SDEV_OFFLINE)
+ else
return -EINVAL;
spin_lock_irqsave(q->queue_lock, flags);
list_for_each_entry(sdev, &shost->__devices, siblings) {
if (sdev->channel != starget->channel ||
sdev->id != starget->id ||
- scsi_device_get(sdev))
+ sdev->sdev_state == SDEV_DEL)
continue;
spin_unlock_irqrestore(shost->host_lock, flags);
scsi_remove_device(sdev);
- scsi_device_put(sdev);
spin_lock_irqsave(shost->host_lock, flags);
goto restart;
}
{
struct fc_vport *vport = transport_class_to_vport(dev);
struct Scsi_Host *shost = vport_to_shost(vport);
- unsigned long flags;
-
- spin_lock_irqsave(shost->host_lock, flags);
- if (vport->flags & (FC_VPORT_DEL | FC_VPORT_CREATING)) {
- spin_unlock_irqrestore(shost->host_lock, flags);
- return -EBUSY;
- }
- vport->flags |= FC_VPORT_DELETING;
- spin_unlock_irqrestore(shost->host_lock, flags);
fc_queue_work(shost, &vport->vport_delete_work);
return count;
list_for_each_entry(vport, &fc_host->vports, peers) {
if ((vport->channel == 0) &&
(vport->port_name == wwpn) && (vport->node_name == wwnn)) {
- if (vport->flags & (FC_VPORT_DEL | FC_VPORT_CREATING))
- break;
- vport->flags |= FC_VPORT_DELETING;
match = 1;
break;
}
unsigned long flags;
int stat;
+ spin_lock_irqsave(shost->host_lock, flags);
+ if (vport->flags & FC_VPORT_CREATING) {
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ return -EBUSY;
+ }
+ if (vport->flags & (FC_VPORT_DEL)) {
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ return -EALREADY;
+ }
+ vport->flags |= FC_VPORT_DELETING;
+ spin_unlock_irqrestore(shost->host_lock, flags);
+
if (i->f->vport_delete)
stat = i->f->vport_delete(vport);
else
return;
while (!blk_queue_plugged(q)) {
- if (rport && (rport->port_state == FC_PORTSTATE_BLOCKED) &&
- !(rport->flags & FC_RPORT_FAST_FAIL_TIMEDOUT))
- break;
+ if (rport && (rport->port_state == FC_PORTSTATE_BLOCKED))
+ break;
req = blk_fetch_request(q);
if (!req)
{
rq->cmd_type = REQ_TYPE_BLOCK_PC;
rq->timeout = SD_TIMEOUT;
- rq->retries = SD_MAX_RETRIES;
rq->cmd[0] = SYNCHRONIZE_CACHE;
rq->cmd_len = 10;
}
index = sdkp->index;
dev = &sdp->sdev_gendev;
- gd->major = sd_major((index & 0xf0) >> 4);
- gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00);
- gd->minors = SD_MINORS;
-
+ if (index < SD_MAX_DISKS) {
+ gd->major = sd_major((index & 0xf0) >> 4);
+ gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00);
+ gd->minors = SD_MINORS;
+ }
gd->fops = &sd_fops;
gd->private_data = &sdkp->driver;
gd->queue = sdkp->device->request_queue;
if (error)
goto out_put;
- if (index >= SD_MAX_DISKS) {
- error = -ENODEV;
- sdev_printk(KERN_WARNING, sdp, "SCSI disk (sd) name space exhausted.\n");
- goto out_free_index;
- }
-
error = sd_format_disk_name("sd", index, gd->disk_name, DISK_NAME_LEN);
if (error)
goto out_free_index;
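sd_format_disk_name() turns the index into the familiar sda, sdb, ..., sdz, sdaa naming. The sketch below is one way to produce that bijective base-26 suffix in plain C; it illustrates the naming scheme and is not the kernel helper itself.

    #include <stdio.h>
    #include <string.h>

    /* Map 0 -> "sda", 25 -> "sdz", 26 -> "sdaa", 701 -> "sdzz", ... */
    static int format_disk_name(const char *prefix, int index, char *buf, int len)
    {
        char suffix[8];
        int p = sizeof(suffix);

        suffix[--p] = '\0';
        do {
            suffix[--p] = 'a' + (index % 26);   /* emit least significant letter */
            index = index / 26 - 1;             /* bijective base-26 carry */
        } while (index >= 0 && p > 0);

        if (strlen(prefix) + strlen(&suffix[p]) >= (size_t)len)
            return -1;                          /* name would not fit */
        snprintf(buf, len, "%s%s", prefix, &suffix[p]);
        return 0;
    }

    int main(void)
    {
        int idx[] = { 0, 25, 26, 27, 701, 702 };
        char name[16];
        unsigned int i;

        for (i = 0; i < sizeof(idx) / sizeof(idx[0]); i++)
            if (format_disk_name("sd", idx[i], name, sizeof(name)) == 0)
                printf("%3d -> %s\n", idx[i], name);
        return 0;
    }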
ses_dev->page10_len = len;
buf = NULL;
}
+ kfree(hdr_buf);
+
scomp = kzalloc(sizeof(struct ses_component) * components, GFP_KERNEL);
if (!scomp)
goto err_free;
goto err_free;
}
- kfree(hdr_buf);
-
edev->scratch = ses_dev;
for (i = 0; i < components; i++)
edev->component[i].scratch = scomp + i;
{ "FUJ02E6", 0 },
/* Fujitsu Wacom 2FGT Tablet PC device */
{ "FUJ02E7", 0 },
- /* Fujitsu Wacom 1FGT Tablet PC device */
- { "FUJ02E9", 0 },
/*
* LG C1 EXPRESS DUAL (C1-PB11A3) touch screen (actually a FUJ02E6 in
* disguise)
}
}
-#if defined(CONFIG_CONSOLE_POLL) || defined(CONFIG_SERIAL_CPM_CONSOLE)
-/*
- * Write a string to the serial port
- * Note that this is called with interrupts already disabled
- */
-static void cpm_uart_early_write(struct uart_cpm_port *pinfo,
- const char *string, u_int count)
-{
- unsigned int i;
- cbd_t __iomem *bdp, *bdbase;
- unsigned char *cpm_outp_addr;
-
- /* Get the address of the host memory buffer.
- */
- bdp = pinfo->tx_cur;
- bdbase = pinfo->tx_bd_base;
-
- /*
- * Now, do each character. This is not as bad as it looks
- * since this is a holding FIFO and not a transmitting FIFO.
- * We could add the complexity of filling the entire transmit
- * buffer, but we would just wait longer between accesses......
- */
- for (i = 0; i < count; i++, string++) {
- /* Wait for transmitter fifo to empty.
- * Ready indicates output is ready, and xmt is doing
- * that, not that it is ready for us to send.
- */
- while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
- ;
-
- /* Send the character out.
- * If the buffer address is in the CPM DPRAM, don't
- * convert it.
- */
- cpm_outp_addr = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr),
- pinfo);
- *cpm_outp_addr = *string;
-
- out_be16(&bdp->cbd_datlen, 1);
- setbits16(&bdp->cbd_sc, BD_SC_READY);
-
- if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
- bdp = bdbase;
- else
- bdp++;
-
- /* if a LF, also do CR... */
- if (*string == 10) {
- while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
- ;
-
- cpm_outp_addr = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr),
- pinfo);
- *cpm_outp_addr = 13;
-
- out_be16(&bdp->cbd_datlen, 1);
- setbits16(&bdp->cbd_sc, BD_SC_READY);
-
- if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
- bdp = bdbase;
- else
- bdp++;
- }
- }
-
- /*
- * Finally, Wait for transmitter & holding register to empty
- * and restore the IER
- */
- while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
- ;
-
- pinfo->tx_cur = bdp;
-}
-#endif
-
#ifdef CONFIG_CONSOLE_POLL
/* Serial polling routines for writing and reading from the uart while
* in an interrupt or debug context.
static char ch[2];
ch[0] = (char)c;
- cpm_uart_early_write(pinfo, ch, 1);
+ cpm_uart_early_write(pinfo->port.line, ch, 1);
}
#endif /* CONFIG_CONSOLE_POLL */
u_int count)
{
struct uart_cpm_port *pinfo = &cpm_uart_ports[co->index];
+ unsigned int i;
+ cbd_t __iomem *bdp, *bdbase;
+ unsigned char *cp;
unsigned long flags;
int nolock = oops_in_progress;
spin_lock_irqsave(&pinfo->port.lock, flags);
}
- cpm_uart_early_write(pinfo, s, count);
+ /* Get the address of the host memory buffer.
+ */
+ bdp = pinfo->tx_cur;
+ bdbase = pinfo->tx_bd_base;
+
+ /*
+ * Now, do each character. This is not as bad as it looks
+ * since this is a holding FIFO and not a transmitting FIFO.
+ * We could add the complexity of filling the entire transmit
+ * buffer, but we would just wait longer between accesses......
+ */
+ for (i = 0; i < count; i++, s++) {
+ /* Wait for transmitter fifo to empty.
+ * Ready indicates output is ready, and xmt is doing
+ * that, not that it is ready for us to send.
+ */
+ while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+ ;
+
+ /* Send the character out.
+ * If the buffer address is in the CPM DPRAM, don't
+ * convert it.
+ */
+ cp = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo);
+ *cp = *s;
+
+ out_be16(&bdp->cbd_datlen, 1);
+ setbits16(&bdp->cbd_sc, BD_SC_READY);
+
+ if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+
+ /* if a LF, also do CR... */
+ if (*s == 10) {
+ while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+ ;
+
+ cp = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo);
+ *cp = 13;
+
+ out_be16(&bdp->cbd_datlen, 1);
+ setbits16(&bdp->cbd_sc, BD_SC_READY);
+
+ if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+ }
+ }
+
+ /*
+ * Finally, Wait for transmitter & holding register to empty
+ * and restore the IER
+ */
+ while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+ ;
+
+ pinfo->tx_cur = bdp;
if (unlikely(nolock)) {
local_irq_restore(flags);
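The console write loop moved above busy-waits on the descriptor READY bit, writes one byte at a time, and emits a CR after every LF. A minimal userspace sketch of that newline expansion, with a plain output buffer standing in for the transmit descriptors:

    #include <stdio.h>

    /* Copy src into dst, appending '\r' after each '\n', as the console
     * write path does ("if a LF, also do CR"). */
    static size_t expand_lf(const char *src, size_t count, char *dst, size_t dstlen)
    {
        size_t i, o = 0;

        for (i = 0; i < count && o < dstlen; i++) {
            dst[o++] = src[i];
            if (src[i] == '\n' && o < dstlen)
                dst[o++] = '\r';
        }
        return o;   /* number of bytes "transmitted" */
    }

    int main(void)
    {
        char out[64];
        size_t n = expand_lf("one\ntwo\n", 8, out, sizeof(out));

        fwrite(out, 1, n, stdout);          /* every '\n' is now followed by '\r' */
        fprintf(stderr, "%zu bytes\n", n);  /* 10 */
        return 0;
    }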
#define MX2_UCR3_RXDMUXSEL (1<<2) /* RXD Muxed Input Select, on mx2/mx3 */
#define UCR3_INVT (1<<1) /* Inverted Infrared transmission */
#define UCR3_BPEN (1<<0) /* Preset registers enable */
-#define UCR4_CTSTL_SHF 10 /* CTS trigger level shift */
-#define UCR4_CTSTL_MASK 0x3F /* CTS trigger is 6 bits wide */
+#define UCR4_CTSTL_32 (32<<10) /* CTS trigger level (32 chars) */
#define UCR4_INVR (1<<9) /* Inverted infrared reception */
#define UCR4_ENIRI (1<<8) /* Serial infrared interrupt enable */
#define UCR4_WKEN (1<<7) /* Wake interrupt enable */
return 0;
}
-/* half the RX buffer size */
-#define CTSTL 16
-
static int imx_startup(struct uart_port *port)
{
struct imx_port *sport = (struct imx_port *)port;
if (USE_IRDA(sport))
temp |= UCR4_IRSC;
- /* set the trigger level for CTS */
- temp &= ~(UCR4_CTSTL_MASK<< UCR4_CTSTL_SHF);
- temp |= CTSTL<< UCR4_CTSTL_SHF;
-
writel(temp & ~UCR4_DREN, sport->port.membase + UCR4);
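The removed CTS trigger-level lines use the usual clear-then-set update of a multi-bit register field: mask out the old value, then OR in the new one at the same shift. A generic sketch of that pattern, with illustrative FIELD_SHIFT/FIELD_MASK names rather than the i.MX register definitions:

    #include <stdint.h>
    #include <stdio.h>

    #define FIELD_SHIFT 10
    #define FIELD_MASK  0x3F        /* a 6-bit field starting at bit 10 */

    static uint32_t set_field(uint32_t reg, uint32_t val)
    {
        reg &= ~(FIELD_MASK << FIELD_SHIFT);        /* clear the old value */
        reg |= (val & FIELD_MASK) << FIELD_SHIFT;   /* insert the new one */
        return reg;
    }

    int main(void)
    {
        printf("0x%08x\n", set_field(0xffffffffu, 16));  /* prints 0xffff43ff */
        return 0;
    }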
if (USE_IRDA(sport)) {
sport->use_irda = 1;
#endif
- if (pdata && pdata->init) {
+ if (pdata->init) {
ret = pdata->init(pdev);
if (ret)
goto clkput;
return 0;
deinit:
- if (pdata && pdata->exit)
+ if (pdata->exit)
pdata->exit(pdev);
clkput:
clk_put(sport->clk);
clk_disable(sport->clk);
- if (pdata && pdata->exit)
+ if (pdata->exit)
pdata->exit(pdev);
iounmap(sport->port.membase);
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4312) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4315) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4318) },
- { PCI_DEVICE(PCI_VENDOR_ID_BCM_GVC, 0x4318) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4319) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4320) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4321) },
{
if (!cc->dev)
return; /* We don't have a ChipCommon */
- if (cc->dev->id.revision >= 11)
- cc->status = chipco_read32(cc, SSB_CHIPCO_CHIPSTAT);
- ssb_dprintk(KERN_INFO PFX "chipcommon status is 0x%x\n", cc->status);
ssb_pmu_init(cc);
chipco_powercontrol_init(cc);
ssb_chipco_set_clockmode(cc, SSB_CLKMODE_FAST);
}
/* Get the word-offset for a SSB_SPROM_XXX define. */
-#define SPOFF(offset) (((offset) - SSB_SPROM_BASE1) / sizeof(u16))
+#define SPOFF(offset) (((offset) - SSB_SPROM_BASE) / sizeof(u16))
/* Helper to extract some _offset, which is one of the SSB_SPROM_XXX defines. */
#define SPEX16(_outvar, _offset, _mask, _shift) \
out->_outvar = ((in[SPOFF(_offset)] & (_mask)) >> (_shift))
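SPOFF() converts a byte offset into an index into the array of 16-bit SPROM words, and SPEX16() then extracts a field with a mask and shift. A standalone sketch of the same two steps, using a hypothetical SPROM_BASE rather than the SSB_SPROM_* constants:

    #include <stdint.h>
    #include <stdio.h>

    #define SPROM_BASE 0x1000                       /* hypothetical base byte offset */
    #define WORD_INDEX(off) (((off) - SPROM_BASE) / sizeof(uint16_t))

    /* Mask first, then shift the field down to bit 0. */
    static uint16_t extract(const uint16_t *in, unsigned int off,
                            uint16_t mask, unsigned int shift)
    {
        return (in[WORD_INDEX(off)] & mask) >> shift;
    }

    int main(void)
    {
        uint16_t image[4] = { 0, 0xabcd, 0, 0 };

        /* An 8-bit field in bits 11:4 of the word at byte offset SPROM_BASE + 2. */
        printf("0x%x\n", extract(image, SPROM_BASE + 2, 0x0ff0, 4));  /* 0xbc */
        return 0;
    }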
int i;
for (i = 0; i < bus->sprom_size; i++)
- sprom[i] = ioread16(bus->mmio + bus->sprom_offset + (i * 2));
+ sprom[i] = ioread16(bus->mmio + SSB_SPROM_BASE + (i * 2));
return 0;
}
ssb_printk("75%%");
else if (i % 2)
ssb_printk(".");
- writew(sprom[i], bus->mmio + bus->sprom_offset + (i * 2));
+ writew(sprom[i], bus->mmio + SSB_SPROM_BASE + (i * 2));
mmiowb();
msleep(20);
}
int err = -ENOMEM;
u16 *buf;
- if (!ssb_is_sprom_available(bus)) {
- ssb_printk(KERN_ERR PFX "No SPROM available!\n");
- return -ENODEV;
- }
- if (bus->chipco.dev) { /* can be unavailible! */
- /*
- * get SPROM offset: SSB_SPROM_BASE1 except for
- * chipcommon rev >= 31 or chip ID is 0x4312 and
- * chipcommon status & 3 == 2
- */
- if (bus->chipco.dev->id.revision >= 31)
- bus->sprom_offset = SSB_SPROM_BASE31;
- else if (bus->chip_id == 0x4312 &&
- (bus->chipco.status & 0x03) == 2)
- bus->sprom_offset = SSB_SPROM_BASE31;
- else
- bus->sprom_offset = SSB_SPROM_BASE1;
- } else {
- bus->sprom_offset = SSB_SPROM_BASE1;
- }
- ssb_dprintk(KERN_INFO PFX "SPROM offset is 0x%x\n", bus->sprom_offset);
-
buf = kcalloc(SSB_SPROMSIZE_WORDS_R123, sizeof(u16), GFP_KERNEL);
if (!buf)
goto out;
{
return fallback_sprom;
}
-
-/* http://bcm-v4.sipsolutions.net/802.11/IsSpromAvailable */
-bool ssb_is_sprom_available(struct ssb_bus *bus)
-{
- /* status register only exists on chipcomon rev >= 11 and we need check
- for >= 31 only */
- /* this routine differs from specs as we do not access SPROM directly
- on PCMCIA */
- if (bus->bustype == SSB_BUSTYPE_PCI &&
- bus->chipco.dev && /* can be unavailible! */
- bus->chipco.dev->id.revision >= 31)
- return bus->chipco.capabilities & SSB_CHIPCO_CAP_SPROM;
-
- return true;
-}
source "drivers/staging/rtl8192e/Kconfig"
+source "drivers/staging/mimio/Kconfig"
+
source "drivers/staging/frontier/Kconfig"
source "drivers/staging/android/Kconfig"
obj-$(CONFIG_RTL8187SE) += rtl8187se/
obj-$(CONFIG_RTL8192SU) += rtl8192su/
obj-$(CONFIG_RTL8192E) += rtl8192e/
+obj-$(CONFIG_INPUT_MIMIO) += mimio/
obj-$(CONFIG_TRANZPORT) += frontier/
obj-$(CONFIG_ANDROID) += android/
obj-$(CONFIG_STAGING_DREAM) += dream/
#define ASUS_OLED_DEVICE_ATTR(_file) dev_attr_asus_oled_##_file
-static DEVICE_ATTR(asus_oled_enabled, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(asus_oled_enabled, S_IWUGO | S_IRUGO,
get_enabled, set_enabled);
-static DEVICE_ATTR(asus_oled_picture, S_IWUSR , NULL, set_picture);
+static DEVICE_ATTR(asus_oled_picture, S_IWUGO , NULL, set_picture);
-static DEVICE_ATTR(enabled, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(enabled, S_IWUGO | S_IRUGO,
class_get_enabled, class_set_enabled);
-static DEVICE_ATTR(picture, S_IWUSR, NULL, class_set_picture);
+static DEVICE_ATTR(picture, S_IWUGO, NULL, class_set_picture);
static int asus_oled_probe(struct usb_interface *interface,
const struct usb_device_id *id)
config COMEDI_PCI_DRIVERS
tristate "Comedi PCI drivers"
depends on COMEDI && PCI
- select COMEDI_8255
default N
---help---
Enable lots of comedi PCI drivers to be built
config COMEDI_PCMCIA_DRIVERS
tristate "Comedi PCMCIA drivers"
depends on COMEDI && PCMCIA && PCCARD
- select COMEDI_8255
default N
---help---
Enable lots of comedi PCMCIA and PCCARD drivers to be built
default N
---help---
Enable lots of comedi USB drivers to be built
-
-config COMEDI_8255
- tristate
obj-$(CONFIG_COMEDI) += comedi_parport.o
obj-$(CONFIG_COMEDI) += pcm_common.o
-# Comedi 8255 module
-obj-$(CONFIG_COMEDI_8255) += 8255.o
-
# Comedi PCI drivers
+obj-$(CONFIG_COMEDI_PCI_DRIVERS) += 8255.o
obj-$(CONFIG_COMEDI_PCI_DRIVERS) += acl7225b.o
obj-$(CONFIG_COMEDI_PCI_DRIVERS) += addi_apci_035.o
obj-$(CONFIG_COMEDI_PCI_DRIVERS) += addi_apci_1032.o
.adbits = 12,
.ai_fifo_depth = 1024,
.alwaysdither = 0,
- .gainlkup = ai_gain_4,
+ .gainlkup = ai_gain_16,
.ai_speed = 5000,
.n_aochan = 2,
.aobits = 12,
-#define DRIVER_VERSION "v2.4"
+#define DRIVER_VERSION "v2.2"
#define DRIVER_AUTHOR "Bernd Porr, BerndPorr@f2s.com"
#define DRIVER_DESC "Stirling/ITL USB-DUX -- Bernd.Porr@f2s.com"
/*
* 2.0: PWM seems to be stable and is not interfering with the other functions
* 2.1: changed PWM API
* 2.2: added firmware kernel request to fix an udev problem
- * 2.3: corrected a bug in bulk timeouts which were far too short
- * 2.4: fixed a bug which causes the driver to hang when it ran out of data.
- * Thanks to Jan-Matthias Braun and Ian to spot the bug and fix it.
*
*/
#define BOARDNAME "usbdux"
-/* timeout for the USB-transfer in ms*/
-#define BULK_TIMEOUT 1000
+/* timeout for the USB-transfer */
+#define EZTIMEOUT 30
/* constants for "firmware" upload and download */
#define USBDUXSUB_FIRMWARE 0xA0
}
}
/* tell comedi that data is there */
- s->async->events |= COMEDI_CB_BLOCK | COMEDI_CB_EOS;
comedi_event(this_usbduxsub->comedidev, s);
}
/* Length */
1,
/* Timeout */
- BULK_TIMEOUT);
+ EZTIMEOUT);
if (errcode < 0) {
dev_err(&usbduxsub->interface->dev,
"comedi_: control msg failed (start)\n");
/* Length */
1,
/* Timeout */
- BULK_TIMEOUT);
+ EZTIMEOUT);
if (errcode < 0) {
dev_err(&usbduxsub->interface->dev,
"comedi_: control msg failed (stop)\n");
/* length */
len,
/* timeout */
- BULK_TIMEOUT);
+ EZTIMEOUT);
dev_dbg(&usbduxsub->interface->dev, "comedi_: result=%d\n", errcode);
if (errcode < 0) {
dev_err(&usbduxsub->interface->dev, "comedi_: upload failed\n");
usb_sndbulkpipe(this_usbduxsub->usbdev,
COMMAND_OUT_EP),
this_usbduxsub->dux_commands, SIZEOFDUXBUFFER,
- &nsent, BULK_TIMEOUT);
+ &nsent, 10);
if (result < 0)
dev_err(&this_usbduxsub->interface->dev, "comedi%d: "
"could not transmit dux_command to the usb-device, "
usb_rcvbulkpipe(this_usbduxsub->usbdev,
COMMAND_IN_EP),
this_usbduxsub->insnBuffer, SIZEINSNBUF,
- &nrec, BULK_TIMEOUT);
+ &nrec, 1);
if (result < 0) {
dev_err(&this_usbduxsub->interface->dev, "comedi%d: "
"insn: USB error %d while receiving DUX command"
t->value = temp; \
return count; \
} \
- static DEVICE_ATTR(value, S_IWUSR | S_IRUGO, show_##value, set_##value);
+ static DEVICE_ATTR(value, S_IWUGO | S_IRUGO, show_##value, set_##value);
show_int(enable);
show_int(offline);
DPRINT_ENTER(VMBUS);
if (gHvContext.SignalEventBuffer) {
- kfree(gHvContext.SignalEventBuffer);
gHvContext.SignalEventBuffer = NULL;
gHvContext.SignalEventParam = NULL;
+ kfree(gHvContext.SignalEventBuffer);
}
if (gHvContext.GuestId == HV_LINUX_GUEST_ID) {
static inline u64
GetRingBufferIndices(RING_BUFFER_INFO* RingInfo)
{
- return (u64)RingInfo->RingBuffer->WriteIndex << 32;
+ return ((u64)RingInfo->RingBuffer->WriteIndex << 32) | RingInfo->RingBuffer->ReadIndex;
}
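GetRingBufferIndices() packs the 32-bit write index into the high half of a u64 and the read index into the low half; the bitwise OR is what keeps both values, since a logical OR would collapse the result to 0 or 1. A small sketch of the pack and unpack arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t pack_indices(uint32_t write_index, uint32_t read_index)
    {
        return ((uint64_t)write_index << 32) | read_index;  /* bitwise, not logical, OR */
    }

    int main(void)
    {
        uint64_t v = pack_indices(0x10, 0x20);

        printf("write=%u read=%u\n", (unsigned int)(v >> 32), (unsigned int)v);
        /* prints: write=16 read=32 */
        return 0;
    }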
ret = RndisFilterSetPacketFilter(Device,
NDIS_PACKET_TYPE_BROADCAST |
- NDIS_PACKET_TYPE_ALL_MULTICAST |
NDIS_PACKET_TYPE_DIRECTED);
if (ret == 0)
Device->State = RNDIS_DEV_DATAINITIALIZED;
#include "VmbusApi.h"
/* Defines */
-#define STORVSC_RING_BUFFER_SIZE (20*PAGE_SIZE)
+#define STORVSC_RING_BUFFER_SIZE (10*PAGE_SIZE)
#define BLKVSC_RING_BUFFER_SIZE (20*PAGE_SIZE)
-#define STORVSC_MAX_IO_REQUESTS 128
+#define STORVSC_MAX_IO_REQUESTS 64
/*
* In Hyper-V, each port/path/target maps to 1 scsi host adapter. In
.ndo_start_xmit = netvsc_start_xmit,
.ndo_get_stats = netvsc_get_stats,
.ndo_set_multicast_list = netvsc_set_multicast_list,
- .ndo_change_mtu = eth_change_mtu,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = eth_mac_addr,
};
static int netvsc_probe(struct device *device)
if (!net_drv_obj->Base.OnDeviceAdd)
return -1;
- net = alloc_etherdev(sizeof(struct net_device_context));
+ net = alloc_netdev(sizeof(struct net_device_context), "seth%d",
+ ether_setup);
if (!net)
return -1;
ASSERT(orig_sgl[i].offset + orig_sgl[i].length <= PAGE_SIZE);
- if (bounce_addr == 0)
+ if (j == 0)
bounce_addr = (unsigned long)kmap_atomic(sg_page((&bounce_sgl[j])), KM_IRQ0);
while (srclen) {
destlen = orig_sgl[i].length;
ASSERT(orig_sgl[i].offset + orig_sgl[i].length <= PAGE_SIZE);
- if (bounce_addr == 0)
+ if (j == 0)
bounce_addr = (unsigned long)kmap_atomic(sg_page((&bounce_sgl[j])), KM_IRQ0);
while (destlen) {
unsigned int request_size = 0;
int i;
struct scatterlist *sgl;
- unsigned int sg_count = 0;
DPRINT_ENTER(STORVSC_DRV);
request->DataBuffer.Length = scsi_bufflen(scmnd);
if (scsi_sg_count(scmnd)) {
sgl = (struct scatterlist *)scsi_sglist(scmnd);
- sg_count = scsi_sg_count(scmnd);
/* check if we need to bounce the sgl */
if (do_bounce_buffer(sgl, scsi_sg_count(scmnd)) != -1) {
scsi_sg_count(scmnd));
sgl = cmd_request->bounce_sgl;
- sg_count = cmd_request->bounce_sgl_count;
}
request->DataBuffer.Offset = sgl[0].offset;
- for (i = 0; i < sg_count; i++) {
+ for (i = 0; i < scsi_sg_count(scmnd); i++) {
DPRINT_DBG(STORVSC_DRV, "sgl[%d] len %d offset %d \n",
i, sgl[i].length, sgl[i].offset);
request->DataBuffer.PfnArray[i] =
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/sysctl.h>
-#include <linux/pci.h>
-#include <linux/dmi.h>
#include "osd.h"
#include "logging.h"
#include "vmbus.h"
}
}
-static struct dmi_system_id __initdata microsoft_hv_dmi_table[] = {
- {
- .ident = "Hyper-V",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Virtual Machine"),
- DMI_MATCH(DMI_BOARD_NAME, "Virtual Machine"),
- },
- },
- { },
-};
-MODULE_DEVICE_TABLE(dmi, microsoft_hv_dmi_table);
-
static int __init vmbus_init(void)
{
int ret = 0;
vmbus_loglevel, HIWORD(vmbus_loglevel), LOWORD(vmbus_loglevel));
/* Todo: it is used for loglevel, to be ported to new kernel. */
- if (!dmi_check_system(microsoft_hv_dmi_table))
- return -ENODEV;
-
ret = vmbus_bus_init(VmbusInitialize);
DPRINT_EXIT(VMBUS_DRV);
return;
}
-/*
- * We use a PCI table to determine if we should autoload this driver This is
- * needed by distro tools to determine if the hyperv drivers should be
- * installed and/or configured. We don't do anything else with the table, but
- * it needs to be present.
- */
-const static struct pci_device_id microsoft_hv_pci_table[] = {
- { PCI_DEVICE(0x1414, 0x5353) }, /* VGA compatible controller */
- { 0 }
-};
-MODULE_DEVICE_TABLE(pci, microsoft_hv_pci_table);
-
MODULE_LICENSE("GPL");
module_param(vmbus_irq, int, S_IRUGO);
module_param(vmbus_loglevel, int, S_IRUGO);
tristate "Line6 USB support"
depends on USB && SND
select SND_RAWMIDI
- select SND_PCM
help
This is a driver for the guitar amp, cab, and effects modeller
PODxt Pro by Line6 (and similar devices), supporting the
VARIAX_PARAM_R(float, mix1);
VARIAX_PARAM_R(int, pickup_wiring);
-static DEVICE_ATTR(tweak, S_IWUSR | S_IRUGO, pod_get_tweak, pod_set_tweak);
-static DEVICE_ATTR(wah_position, S_IWUSR | S_IRUGO, pod_get_wah_position, pod_set_wah_position);
-static DEVICE_ATTR(compression_gain, S_IWUSR | S_IRUGO, pod_get_compression_gain, pod_set_compression_gain);
-static DEVICE_ATTR(vol_pedal_position, S_IWUSR | S_IRUGO, pod_get_vol_pedal_position, pod_set_vol_pedal_position);
-static DEVICE_ATTR(compression_threshold, S_IWUSR | S_IRUGO, pod_get_compression_threshold, pod_set_compression_threshold);
-static DEVICE_ATTR(pan, S_IWUSR | S_IRUGO, pod_get_pan, pod_set_pan);
-static DEVICE_ATTR(amp_model_setup, S_IWUSR | S_IRUGO, pod_get_amp_model_setup, pod_set_amp_model_setup);
-static DEVICE_ATTR(amp_model, S_IWUSR | S_IRUGO, pod_get_amp_model, pod_set_amp_model);
-static DEVICE_ATTR(drive, S_IWUSR | S_IRUGO, pod_get_drive, pod_set_drive);
-static DEVICE_ATTR(bass, S_IWUSR | S_IRUGO, pod_get_bass, pod_set_bass);
-static DEVICE_ATTR(mid, S_IWUSR | S_IRUGO, pod_get_mid, pod_set_mid);
-static DEVICE_ATTR(lowmid, S_IWUSR | S_IRUGO, pod_get_lowmid, pod_set_lowmid);
-static DEVICE_ATTR(treble, S_IWUSR | S_IRUGO, pod_get_treble, pod_set_treble);
-static DEVICE_ATTR(highmid, S_IWUSR | S_IRUGO, pod_get_highmid, pod_set_highmid);
-static DEVICE_ATTR(chan_vol, S_IWUSR | S_IRUGO, pod_get_chan_vol, pod_set_chan_vol);
-static DEVICE_ATTR(reverb_mix, S_IWUSR | S_IRUGO, pod_get_reverb_mix, pod_set_reverb_mix);
-static DEVICE_ATTR(effect_setup, S_IWUSR | S_IRUGO, pod_get_effect_setup, pod_set_effect_setup);
-static DEVICE_ATTR(band_1_frequency, S_IWUSR | S_IRUGO, pod_get_band_1_frequency, pod_set_band_1_frequency);
-static DEVICE_ATTR(presence, S_IWUSR | S_IRUGO, pod_get_presence, pod_set_presence);
-static DEVICE_ATTR2(treble__bass, treble, S_IWUSR | S_IRUGO, pod_get_treble__bass, pod_set_treble__bass);
-static DEVICE_ATTR(noise_gate_enable, S_IWUSR | S_IRUGO, pod_get_noise_gate_enable, pod_set_noise_gate_enable);
-static DEVICE_ATTR(gate_threshold, S_IWUSR | S_IRUGO, pod_get_gate_threshold, pod_set_gate_threshold);
-static DEVICE_ATTR(gate_decay_time, S_IWUSR | S_IRUGO, pod_get_gate_decay_time, pod_set_gate_decay_time);
-static DEVICE_ATTR(stomp_enable, S_IWUSR | S_IRUGO, pod_get_stomp_enable, pod_set_stomp_enable);
-static DEVICE_ATTR(comp_enable, S_IWUSR | S_IRUGO, pod_get_comp_enable, pod_set_comp_enable);
-static DEVICE_ATTR(stomp_time, S_IWUSR | S_IRUGO, pod_get_stomp_time, pod_set_stomp_time);
-static DEVICE_ATTR(delay_enable, S_IWUSR | S_IRUGO, pod_get_delay_enable, pod_set_delay_enable);
-static DEVICE_ATTR(mod_param_1, S_IWUSR | S_IRUGO, pod_get_mod_param_1, pod_set_mod_param_1);
-static DEVICE_ATTR(delay_param_1, S_IWUSR | S_IRUGO, pod_get_delay_param_1, pod_set_delay_param_1);
-static DEVICE_ATTR(delay_param_1_note_value, S_IWUSR | S_IRUGO, pod_get_delay_param_1_note_value, pod_set_delay_param_1_note_value);
-static DEVICE_ATTR2(band_2_frequency__bass, band_2_frequency, S_IWUSR | S_IRUGO, pod_get_band_2_frequency__bass, pod_set_band_2_frequency__bass);
-static DEVICE_ATTR(delay_param_2, S_IWUSR | S_IRUGO, pod_get_delay_param_2, pod_set_delay_param_2);
-static DEVICE_ATTR(delay_volume_mix, S_IWUSR | S_IRUGO, pod_get_delay_volume_mix, pod_set_delay_volume_mix);
-static DEVICE_ATTR(delay_param_3, S_IWUSR | S_IRUGO, pod_get_delay_param_3, pod_set_delay_param_3);
-static DEVICE_ATTR(reverb_enable, S_IWUSR | S_IRUGO, pod_get_reverb_enable, pod_set_reverb_enable);
-static DEVICE_ATTR(reverb_type, S_IWUSR | S_IRUGO, pod_get_reverb_type, pod_set_reverb_type);
-static DEVICE_ATTR(reverb_decay, S_IWUSR | S_IRUGO, pod_get_reverb_decay, pod_set_reverb_decay);
-static DEVICE_ATTR(reverb_tone, S_IWUSR | S_IRUGO, pod_get_reverb_tone, pod_set_reverb_tone);
-static DEVICE_ATTR(reverb_pre_delay, S_IWUSR | S_IRUGO, pod_get_reverb_pre_delay, pod_set_reverb_pre_delay);
-static DEVICE_ATTR(reverb_pre_post, S_IWUSR | S_IRUGO, pod_get_reverb_pre_post, pod_set_reverb_pre_post);
-static DEVICE_ATTR(band_2_frequency, S_IWUSR | S_IRUGO, pod_get_band_2_frequency, pod_set_band_2_frequency);
-static DEVICE_ATTR2(band_3_frequency__bass, band_3_frequency, S_IWUSR | S_IRUGO, pod_get_band_3_frequency__bass, pod_set_band_3_frequency__bass);
-static DEVICE_ATTR(wah_enable, S_IWUSR | S_IRUGO, pod_get_wah_enable, pod_set_wah_enable);
-static DEVICE_ATTR(modulation_lo_cut, S_IWUSR | S_IRUGO, pod_get_modulation_lo_cut, pod_set_modulation_lo_cut);
-static DEVICE_ATTR(delay_reverb_lo_cut, S_IWUSR | S_IRUGO, pod_get_delay_reverb_lo_cut, pod_set_delay_reverb_lo_cut);
-static DEVICE_ATTR(volume_pedal_minimum, S_IWUSR | S_IRUGO, pod_get_volume_pedal_minimum, pod_set_volume_pedal_minimum);
-static DEVICE_ATTR(eq_pre_post, S_IWUSR | S_IRUGO, pod_get_eq_pre_post, pod_set_eq_pre_post);
-static DEVICE_ATTR(volume_pre_post, S_IWUSR | S_IRUGO, pod_get_volume_pre_post, pod_set_volume_pre_post);
-static DEVICE_ATTR(di_model, S_IWUSR | S_IRUGO, pod_get_di_model, pod_set_di_model);
-static DEVICE_ATTR(di_delay, S_IWUSR | S_IRUGO, pod_get_di_delay, pod_set_di_delay);
-static DEVICE_ATTR(mod_enable, S_IWUSR | S_IRUGO, pod_get_mod_enable, pod_set_mod_enable);
-static DEVICE_ATTR(mod_param_1_note_value, S_IWUSR | S_IRUGO, pod_get_mod_param_1_note_value, pod_set_mod_param_1_note_value);
-static DEVICE_ATTR(mod_param_2, S_IWUSR | S_IRUGO, pod_get_mod_param_2, pod_set_mod_param_2);
-static DEVICE_ATTR(mod_param_3, S_IWUSR | S_IRUGO, pod_get_mod_param_3, pod_set_mod_param_3);
-static DEVICE_ATTR(mod_param_4, S_IWUSR | S_IRUGO, pod_get_mod_param_4, pod_set_mod_param_4);
-static DEVICE_ATTR(mod_param_5, S_IWUSR | S_IRUGO, pod_get_mod_param_5, pod_set_mod_param_5);
-static DEVICE_ATTR(mod_volume_mix, S_IWUSR | S_IRUGO, pod_get_mod_volume_mix, pod_set_mod_volume_mix);
-static DEVICE_ATTR(mod_pre_post, S_IWUSR | S_IRUGO, pod_get_mod_pre_post, pod_set_mod_pre_post);
-static DEVICE_ATTR(modulation_model, S_IWUSR | S_IRUGO, pod_get_modulation_model, pod_set_modulation_model);
-static DEVICE_ATTR(band_3_frequency, S_IWUSR | S_IRUGO, pod_get_band_3_frequency, pod_set_band_3_frequency);
-static DEVICE_ATTR2(band_4_frequency__bass, band_4_frequency, S_IWUSR | S_IRUGO, pod_get_band_4_frequency__bass, pod_set_band_4_frequency__bass);
-static DEVICE_ATTR(mod_param_1_double_precision, S_IWUSR | S_IRUGO, pod_get_mod_param_1_double_precision, pod_set_mod_param_1_double_precision);
-static DEVICE_ATTR(delay_param_1_double_precision, S_IWUSR | S_IRUGO, pod_get_delay_param_1_double_precision, pod_set_delay_param_1_double_precision);
-static DEVICE_ATTR(eq_enable, S_IWUSR | S_IRUGO, pod_get_eq_enable, pod_set_eq_enable);
-static DEVICE_ATTR(tap, S_IWUSR | S_IRUGO, pod_get_tap, pod_set_tap);
-static DEVICE_ATTR(volume_tweak_pedal_assign, S_IWUSR | S_IRUGO, pod_get_volume_tweak_pedal_assign, pod_set_volume_tweak_pedal_assign);
-static DEVICE_ATTR(band_5_frequency, S_IWUSR | S_IRUGO, pod_get_band_5_frequency, pod_set_band_5_frequency);
-static DEVICE_ATTR(tuner, S_IWUSR | S_IRUGO, pod_get_tuner, pod_set_tuner);
-static DEVICE_ATTR(mic_selection, S_IWUSR | S_IRUGO, pod_get_mic_selection, pod_set_mic_selection);
-static DEVICE_ATTR(cabinet_model, S_IWUSR | S_IRUGO, pod_get_cabinet_model, pod_set_cabinet_model);
-static DEVICE_ATTR(stomp_model, S_IWUSR | S_IRUGO, pod_get_stomp_model, pod_set_stomp_model);
-static DEVICE_ATTR(roomlevel, S_IWUSR | S_IRUGO, pod_get_roomlevel, pod_set_roomlevel);
-static DEVICE_ATTR(band_4_frequency, S_IWUSR | S_IRUGO, pod_get_band_4_frequency, pod_set_band_4_frequency);
-static DEVICE_ATTR(band_6_frequency, S_IWUSR | S_IRUGO, pod_get_band_6_frequency, pod_set_band_6_frequency);
-static DEVICE_ATTR(stomp_param_1_note_value, S_IWUSR | S_IRUGO, pod_get_stomp_param_1_note_value, pod_set_stomp_param_1_note_value);
-static DEVICE_ATTR(stomp_param_2, S_IWUSR | S_IRUGO, pod_get_stomp_param_2, pod_set_stomp_param_2);
-static DEVICE_ATTR(stomp_param_3, S_IWUSR | S_IRUGO, pod_get_stomp_param_3, pod_set_stomp_param_3);
-static DEVICE_ATTR(stomp_param_4, S_IWUSR | S_IRUGO, pod_get_stomp_param_4, pod_set_stomp_param_4);
-static DEVICE_ATTR(stomp_param_5, S_IWUSR | S_IRUGO, pod_get_stomp_param_5, pod_set_stomp_param_5);
-static DEVICE_ATTR(stomp_param_6, S_IWUSR | S_IRUGO, pod_get_stomp_param_6, pod_set_stomp_param_6);
-static DEVICE_ATTR(amp_switch_select, S_IWUSR | S_IRUGO, pod_get_amp_switch_select, pod_set_amp_switch_select);
-static DEVICE_ATTR(delay_param_4, S_IWUSR | S_IRUGO, pod_get_delay_param_4, pod_set_delay_param_4);
-static DEVICE_ATTR(delay_param_5, S_IWUSR | S_IRUGO, pod_get_delay_param_5, pod_set_delay_param_5);
-static DEVICE_ATTR(delay_pre_post, S_IWUSR | S_IRUGO, pod_get_delay_pre_post, pod_set_delay_pre_post);
-static DEVICE_ATTR(delay_model, S_IWUSR | S_IRUGO, pod_get_delay_model, pod_set_delay_model);
-static DEVICE_ATTR(delay_verb_model, S_IWUSR | S_IRUGO, pod_get_delay_verb_model, pod_set_delay_verb_model);
-static DEVICE_ATTR(tempo_msb, S_IWUSR | S_IRUGO, pod_get_tempo_msb, pod_set_tempo_msb);
-static DEVICE_ATTR(tempo_lsb, S_IWUSR | S_IRUGO, pod_get_tempo_lsb, pod_set_tempo_lsb);
-static DEVICE_ATTR(wah_model, S_IWUSR | S_IRUGO, pod_get_wah_model, pod_set_wah_model);
-static DEVICE_ATTR(bypass_volume, S_IWUSR | S_IRUGO, pod_get_bypass_volume, pod_set_bypass_volume);
-static DEVICE_ATTR(fx_loop_on_off, S_IWUSR | S_IRUGO, pod_get_fx_loop_on_off, pod_set_fx_loop_on_off);
-static DEVICE_ATTR(tweak_param_select, S_IWUSR | S_IRUGO, pod_get_tweak_param_select, pod_set_tweak_param_select);
-static DEVICE_ATTR(amp1_engage, S_IWUSR | S_IRUGO, pod_get_amp1_engage, pod_set_amp1_engage);
-static DEVICE_ATTR(band_1_gain, S_IWUSR | S_IRUGO, pod_get_band_1_gain, pod_set_band_1_gain);
-static DEVICE_ATTR2(band_2_gain__bass, band_2_gain, S_IWUSR | S_IRUGO, pod_get_band_2_gain__bass, pod_set_band_2_gain__bass);
-static DEVICE_ATTR(band_2_gain, S_IWUSR | S_IRUGO, pod_get_band_2_gain, pod_set_band_2_gain);
-static DEVICE_ATTR2(band_3_gain__bass, band_3_gain, S_IWUSR | S_IRUGO, pod_get_band_3_gain__bass, pod_set_band_3_gain__bass);
-static DEVICE_ATTR(band_3_gain, S_IWUSR | S_IRUGO, pod_get_band_3_gain, pod_set_band_3_gain);
-static DEVICE_ATTR2(band_4_gain__bass, band_4_gain, S_IWUSR | S_IRUGO, pod_get_band_4_gain__bass, pod_set_band_4_gain__bass);
-static DEVICE_ATTR2(band_5_gain__bass, band_5_gain, S_IWUSR | S_IRUGO, pod_get_band_5_gain__bass, pod_set_band_5_gain__bass);
-static DEVICE_ATTR(band_4_gain, S_IWUSR | S_IRUGO, pod_get_band_4_gain, pod_set_band_4_gain);
-static DEVICE_ATTR2(band_6_gain__bass, band_6_gain, S_IWUSR | S_IRUGO, pod_get_band_6_gain__bass, pod_set_band_6_gain__bass);
+static DEVICE_ATTR(tweak, S_IWUGO | S_IRUGO, pod_get_tweak, pod_set_tweak);
+static DEVICE_ATTR(wah_position, S_IWUGO | S_IRUGO, pod_get_wah_position, pod_set_wah_position);
+static DEVICE_ATTR(compression_gain, S_IWUGO | S_IRUGO, pod_get_compression_gain, pod_set_compression_gain);
+static DEVICE_ATTR(vol_pedal_position, S_IWUGO | S_IRUGO, pod_get_vol_pedal_position, pod_set_vol_pedal_position);
+static DEVICE_ATTR(compression_threshold, S_IWUGO | S_IRUGO, pod_get_compression_threshold, pod_set_compression_threshold);
+static DEVICE_ATTR(pan, S_IWUGO | S_IRUGO, pod_get_pan, pod_set_pan);
+static DEVICE_ATTR(amp_model_setup, S_IWUGO | S_IRUGO, pod_get_amp_model_setup, pod_set_amp_model_setup);
+static DEVICE_ATTR(amp_model, S_IWUGO | S_IRUGO, pod_get_amp_model, pod_set_amp_model);
+static DEVICE_ATTR(drive, S_IWUGO | S_IRUGO, pod_get_drive, pod_set_drive);
+static DEVICE_ATTR(bass, S_IWUGO | S_IRUGO, pod_get_bass, pod_set_bass);
+static DEVICE_ATTR(mid, S_IWUGO | S_IRUGO, pod_get_mid, pod_set_mid);
+static DEVICE_ATTR(lowmid, S_IWUGO | S_IRUGO, pod_get_lowmid, pod_set_lowmid);
+static DEVICE_ATTR(treble, S_IWUGO | S_IRUGO, pod_get_treble, pod_set_treble);
+static DEVICE_ATTR(highmid, S_IWUGO | S_IRUGO, pod_get_highmid, pod_set_highmid);
+static DEVICE_ATTR(chan_vol, S_IWUGO | S_IRUGO, pod_get_chan_vol, pod_set_chan_vol);
+static DEVICE_ATTR(reverb_mix, S_IWUGO | S_IRUGO, pod_get_reverb_mix, pod_set_reverb_mix);
+static DEVICE_ATTR(effect_setup, S_IWUGO | S_IRUGO, pod_get_effect_setup, pod_set_effect_setup);
+static DEVICE_ATTR(band_1_frequency, S_IWUGO | S_IRUGO, pod_get_band_1_frequency, pod_set_band_1_frequency);
+static DEVICE_ATTR(presence, S_IWUGO | S_IRUGO, pod_get_presence, pod_set_presence);
+static DEVICE_ATTR2(treble__bass, treble, S_IWUGO | S_IRUGO, pod_get_treble__bass, pod_set_treble__bass);
+static DEVICE_ATTR(noise_gate_enable, S_IWUGO | S_IRUGO, pod_get_noise_gate_enable, pod_set_noise_gate_enable);
+static DEVICE_ATTR(gate_threshold, S_IWUGO | S_IRUGO, pod_get_gate_threshold, pod_set_gate_threshold);
+static DEVICE_ATTR(gate_decay_time, S_IWUGO | S_IRUGO, pod_get_gate_decay_time, pod_set_gate_decay_time);
+static DEVICE_ATTR(stomp_enable, S_IWUGO | S_IRUGO, pod_get_stomp_enable, pod_set_stomp_enable);
+static DEVICE_ATTR(comp_enable, S_IWUGO | S_IRUGO, pod_get_comp_enable, pod_set_comp_enable);
+static DEVICE_ATTR(stomp_time, S_IWUGO | S_IRUGO, pod_get_stomp_time, pod_set_stomp_time);
+static DEVICE_ATTR(delay_enable, S_IWUGO | S_IRUGO, pod_get_delay_enable, pod_set_delay_enable);
+static DEVICE_ATTR(mod_param_1, S_IWUGO | S_IRUGO, pod_get_mod_param_1, pod_set_mod_param_1);
+static DEVICE_ATTR(delay_param_1, S_IWUGO | S_IRUGO, pod_get_delay_param_1, pod_set_delay_param_1);
+static DEVICE_ATTR(delay_param_1_note_value, S_IWUGO | S_IRUGO, pod_get_delay_param_1_note_value, pod_set_delay_param_1_note_value);
+static DEVICE_ATTR2(band_2_frequency__bass, band_2_frequency, S_IWUGO | S_IRUGO, pod_get_band_2_frequency__bass, pod_set_band_2_frequency__bass);
+static DEVICE_ATTR(delay_param_2, S_IWUGO | S_IRUGO, pod_get_delay_param_2, pod_set_delay_param_2);
+static DEVICE_ATTR(delay_volume_mix, S_IWUGO | S_IRUGO, pod_get_delay_volume_mix, pod_set_delay_volume_mix);
+static DEVICE_ATTR(delay_param_3, S_IWUGO | S_IRUGO, pod_get_delay_param_3, pod_set_delay_param_3);
+static DEVICE_ATTR(reverb_enable, S_IWUGO | S_IRUGO, pod_get_reverb_enable, pod_set_reverb_enable);
+static DEVICE_ATTR(reverb_type, S_IWUGO | S_IRUGO, pod_get_reverb_type, pod_set_reverb_type);
+static DEVICE_ATTR(reverb_decay, S_IWUGO | S_IRUGO, pod_get_reverb_decay, pod_set_reverb_decay);
+static DEVICE_ATTR(reverb_tone, S_IWUGO | S_IRUGO, pod_get_reverb_tone, pod_set_reverb_tone);
+static DEVICE_ATTR(reverb_pre_delay, S_IWUGO | S_IRUGO, pod_get_reverb_pre_delay, pod_set_reverb_pre_delay);
+static DEVICE_ATTR(reverb_pre_post, S_IWUGO | S_IRUGO, pod_get_reverb_pre_post, pod_set_reverb_pre_post);
+static DEVICE_ATTR(band_2_frequency, S_IWUGO | S_IRUGO, pod_get_band_2_frequency, pod_set_band_2_frequency);
+static DEVICE_ATTR2(band_3_frequency__bass, band_3_frequency, S_IWUGO | S_IRUGO, pod_get_band_3_frequency__bass, pod_set_band_3_frequency__bass);
+static DEVICE_ATTR(wah_enable, S_IWUGO | S_IRUGO, pod_get_wah_enable, pod_set_wah_enable);
+static DEVICE_ATTR(modulation_lo_cut, S_IWUGO | S_IRUGO, pod_get_modulation_lo_cut, pod_set_modulation_lo_cut);
+static DEVICE_ATTR(delay_reverb_lo_cut, S_IWUGO | S_IRUGO, pod_get_delay_reverb_lo_cut, pod_set_delay_reverb_lo_cut);
+static DEVICE_ATTR(volume_pedal_minimum, S_IWUGO | S_IRUGO, pod_get_volume_pedal_minimum, pod_set_volume_pedal_minimum);
+static DEVICE_ATTR(eq_pre_post, S_IWUGO | S_IRUGO, pod_get_eq_pre_post, pod_set_eq_pre_post);
+static DEVICE_ATTR(volume_pre_post, S_IWUGO | S_IRUGO, pod_get_volume_pre_post, pod_set_volume_pre_post);
+static DEVICE_ATTR(di_model, S_IWUGO | S_IRUGO, pod_get_di_model, pod_set_di_model);
+static DEVICE_ATTR(di_delay, S_IWUGO | S_IRUGO, pod_get_di_delay, pod_set_di_delay);
+static DEVICE_ATTR(mod_enable, S_IWUGO | S_IRUGO, pod_get_mod_enable, pod_set_mod_enable);
+static DEVICE_ATTR(mod_param_1_note_value, S_IWUGO | S_IRUGO, pod_get_mod_param_1_note_value, pod_set_mod_param_1_note_value);
+static DEVICE_ATTR(mod_param_2, S_IWUGO | S_IRUGO, pod_get_mod_param_2, pod_set_mod_param_2);
+static DEVICE_ATTR(mod_param_3, S_IWUGO | S_IRUGO, pod_get_mod_param_3, pod_set_mod_param_3);
+static DEVICE_ATTR(mod_param_4, S_IWUGO | S_IRUGO, pod_get_mod_param_4, pod_set_mod_param_4);
+static DEVICE_ATTR(mod_param_5, S_IWUGO | S_IRUGO, pod_get_mod_param_5, pod_set_mod_param_5);
+static DEVICE_ATTR(mod_volume_mix, S_IWUGO | S_IRUGO, pod_get_mod_volume_mix, pod_set_mod_volume_mix);
+static DEVICE_ATTR(mod_pre_post, S_IWUGO | S_IRUGO, pod_get_mod_pre_post, pod_set_mod_pre_post);
+static DEVICE_ATTR(modulation_model, S_IWUGO | S_IRUGO, pod_get_modulation_model, pod_set_modulation_model);
+static DEVICE_ATTR(band_3_frequency, S_IWUGO | S_IRUGO, pod_get_band_3_frequency, pod_set_band_3_frequency);
+static DEVICE_ATTR2(band_4_frequency__bass, band_4_frequency, S_IWUGO | S_IRUGO, pod_get_band_4_frequency__bass, pod_set_band_4_frequency__bass);
+static DEVICE_ATTR(mod_param_1_double_precision, S_IWUGO | S_IRUGO, pod_get_mod_param_1_double_precision, pod_set_mod_param_1_double_precision);
+static DEVICE_ATTR(delay_param_1_double_precision, S_IWUGO | S_IRUGO, pod_get_delay_param_1_double_precision, pod_set_delay_param_1_double_precision);
+static DEVICE_ATTR(eq_enable, S_IWUGO | S_IRUGO, pod_get_eq_enable, pod_set_eq_enable);
+static DEVICE_ATTR(tap, S_IWUGO | S_IRUGO, pod_get_tap, pod_set_tap);
+static DEVICE_ATTR(volume_tweak_pedal_assign, S_IWUGO | S_IRUGO, pod_get_volume_tweak_pedal_assign, pod_set_volume_tweak_pedal_assign);
+static DEVICE_ATTR(band_5_frequency, S_IWUGO | S_IRUGO, pod_get_band_5_frequency, pod_set_band_5_frequency);
+static DEVICE_ATTR(tuner, S_IWUGO | S_IRUGO, pod_get_tuner, pod_set_tuner);
+static DEVICE_ATTR(mic_selection, S_IWUGO | S_IRUGO, pod_get_mic_selection, pod_set_mic_selection);
+static DEVICE_ATTR(cabinet_model, S_IWUGO | S_IRUGO, pod_get_cabinet_model, pod_set_cabinet_model);
+static DEVICE_ATTR(stomp_model, S_IWUGO | S_IRUGO, pod_get_stomp_model, pod_set_stomp_model);
+static DEVICE_ATTR(roomlevel, S_IWUGO | S_IRUGO, pod_get_roomlevel, pod_set_roomlevel);
+static DEVICE_ATTR(band_4_frequency, S_IWUGO | S_IRUGO, pod_get_band_4_frequency, pod_set_band_4_frequency);
+static DEVICE_ATTR(band_6_frequency, S_IWUGO | S_IRUGO, pod_get_band_6_frequency, pod_set_band_6_frequency);
+static DEVICE_ATTR(stomp_param_1_note_value, S_IWUGO | S_IRUGO, pod_get_stomp_param_1_note_value, pod_set_stomp_param_1_note_value);
+static DEVICE_ATTR(stomp_param_2, S_IWUGO | S_IRUGO, pod_get_stomp_param_2, pod_set_stomp_param_2);
+static DEVICE_ATTR(stomp_param_3, S_IWUGO | S_IRUGO, pod_get_stomp_param_3, pod_set_stomp_param_3);
+static DEVICE_ATTR(stomp_param_4, S_IWUGO | S_IRUGO, pod_get_stomp_param_4, pod_set_stomp_param_4);
+static DEVICE_ATTR(stomp_param_5, S_IWUGO | S_IRUGO, pod_get_stomp_param_5, pod_set_stomp_param_5);
+static DEVICE_ATTR(stomp_param_6, S_IWUGO | S_IRUGO, pod_get_stomp_param_6, pod_set_stomp_param_6);
+static DEVICE_ATTR(amp_switch_select, S_IWUGO | S_IRUGO, pod_get_amp_switch_select, pod_set_amp_switch_select);
+static DEVICE_ATTR(delay_param_4, S_IWUGO | S_IRUGO, pod_get_delay_param_4, pod_set_delay_param_4);
+static DEVICE_ATTR(delay_param_5, S_IWUGO | S_IRUGO, pod_get_delay_param_5, pod_set_delay_param_5);
+static DEVICE_ATTR(delay_pre_post, S_IWUGO | S_IRUGO, pod_get_delay_pre_post, pod_set_delay_pre_post);
+static DEVICE_ATTR(delay_model, S_IWUGO | S_IRUGO, pod_get_delay_model, pod_set_delay_model);
+static DEVICE_ATTR(delay_verb_model, S_IWUGO | S_IRUGO, pod_get_delay_verb_model, pod_set_delay_verb_model);
+static DEVICE_ATTR(tempo_msb, S_IWUGO | S_IRUGO, pod_get_tempo_msb, pod_set_tempo_msb);
+static DEVICE_ATTR(tempo_lsb, S_IWUGO | S_IRUGO, pod_get_tempo_lsb, pod_set_tempo_lsb);
+static DEVICE_ATTR(wah_model, S_IWUGO | S_IRUGO, pod_get_wah_model, pod_set_wah_model);
+static DEVICE_ATTR(bypass_volume, S_IWUGO | S_IRUGO, pod_get_bypass_volume, pod_set_bypass_volume);
+static DEVICE_ATTR(fx_loop_on_off, S_IWUGO | S_IRUGO, pod_get_fx_loop_on_off, pod_set_fx_loop_on_off);
+static DEVICE_ATTR(tweak_param_select, S_IWUGO | S_IRUGO, pod_get_tweak_param_select, pod_set_tweak_param_select);
+static DEVICE_ATTR(amp1_engage, S_IWUGO | S_IRUGO, pod_get_amp1_engage, pod_set_amp1_engage);
+static DEVICE_ATTR(band_1_gain, S_IWUGO | S_IRUGO, pod_get_band_1_gain, pod_set_band_1_gain);
+static DEVICE_ATTR2(band_2_gain__bass, band_2_gain, S_IWUGO | S_IRUGO, pod_get_band_2_gain__bass, pod_set_band_2_gain__bass);
+static DEVICE_ATTR(band_2_gain, S_IWUGO | S_IRUGO, pod_get_band_2_gain, pod_set_band_2_gain);
+static DEVICE_ATTR2(band_3_gain__bass, band_3_gain, S_IWUGO | S_IRUGO, pod_get_band_3_gain__bass, pod_set_band_3_gain__bass);
+static DEVICE_ATTR(band_3_gain, S_IWUGO | S_IRUGO, pod_get_band_3_gain, pod_set_band_3_gain);
+static DEVICE_ATTR2(band_4_gain__bass, band_4_gain, S_IWUGO | S_IRUGO, pod_get_band_4_gain__bass, pod_set_band_4_gain__bass);
+static DEVICE_ATTR2(band_5_gain__bass, band_5_gain, S_IWUGO | S_IRUGO, pod_get_band_5_gain__bass, pod_set_band_5_gain__bass);
+static DEVICE_ATTR(band_4_gain, S_IWUGO | S_IRUGO, pod_get_band_4_gain, pod_set_band_4_gain);
+static DEVICE_ATTR2(band_6_gain__bass, band_6_gain, S_IWUGO | S_IRUGO, pod_get_band_6_gain__bass, pod_set_band_6_gain__bass);
static DEVICE_ATTR(body, S_IRUGO, variax_get_body, line6_nop_write);
static DEVICE_ATTR(pickup1_enable, S_IRUGO, variax_get_pickup1_enable, line6_nop_write);
static DEVICE_ATTR(pickup1_type, S_IRUGO, variax_get_pickup1_type, line6_nop_write);
return count;
}
-static DEVICE_ATTR(midi_mask_transmit, S_IWUSR | S_IRUGO, midi_get_midi_mask_transmit, midi_set_midi_mask_transmit);
-static DEVICE_ATTR(midi_mask_receive, S_IWUSR | S_IRUGO, midi_get_midi_mask_receive, midi_set_midi_mask_receive);
+static DEVICE_ATTR(midi_mask_transmit, S_IWUGO | S_IRUGO, midi_get_midi_mask_transmit, midi_set_midi_mask_transmit);
+static DEVICE_ATTR(midi_mask_receive, S_IWUGO | S_IRUGO, midi_get_midi_mask_receive, midi_set_midi_mask_receive);
/* MIDI device destructor */
static int snd_line6_midi_free(struct snd_device *device)
#undef GET_SYSTEM_PARAM
/* POD special files: */
-static DEVICE_ATTR(channel, S_IWUSR | S_IRUGO, pod_get_channel, pod_set_channel);
+static DEVICE_ATTR(channel, S_IWUGO | S_IRUGO, pod_get_channel, pod_set_channel);
static DEVICE_ATTR(clip, S_IRUGO, pod_wait_for_clip, line6_nop_write);
static DEVICE_ATTR(device_id, S_IRUGO, pod_get_device_id, line6_nop_write);
static DEVICE_ATTR(dirty, S_IRUGO, pod_get_dirty, line6_nop_write);
-static DEVICE_ATTR(dump, S_IWUSR | S_IRUGO, pod_get_dump, pod_set_dump);
-static DEVICE_ATTR(dump_buf, S_IWUSR | S_IRUGO, pod_get_dump_buf, pod_set_dump_buf);
-static DEVICE_ATTR(finish, S_IWUSR, line6_nop_read, pod_set_finish);
+static DEVICE_ATTR(dump, S_IWUGO | S_IRUGO, pod_get_dump, pod_set_dump);
+static DEVICE_ATTR(dump_buf, S_IWUGO | S_IRUGO, pod_get_dump_buf, pod_set_dump_buf);
+static DEVICE_ATTR(finish, S_IWUGO, line6_nop_read, pod_set_finish);
static DEVICE_ATTR(firmware_version, S_IRUGO, pod_get_firmware_version, line6_nop_write);
-static DEVICE_ATTR(midi_postprocess, S_IWUSR | S_IRUGO, pod_get_midi_postprocess, pod_set_midi_postprocess);
-static DEVICE_ATTR(monitor_level, S_IWUSR | S_IRUGO, pod_get_monitor_level, pod_set_monitor_level);
+static DEVICE_ATTR(midi_postprocess, S_IWUGO | S_IRUGO, pod_get_midi_postprocess, pod_set_midi_postprocess);
+static DEVICE_ATTR(monitor_level, S_IWUGO | S_IRUGO, pod_get_monitor_level, pod_set_monitor_level);
static DEVICE_ATTR(name, S_IRUGO, pod_get_name, line6_nop_write);
static DEVICE_ATTR(name_buf, S_IRUGO, pod_get_name_buf, line6_nop_write);
-static DEVICE_ATTR(retrieve_amp_setup, S_IWUSR, line6_nop_read, pod_set_retrieve_amp_setup);
-static DEVICE_ATTR(retrieve_channel, S_IWUSR, line6_nop_read, pod_set_retrieve_channel);
-static DEVICE_ATTR(retrieve_effects_setup, S_IWUSR, line6_nop_read, pod_set_retrieve_effects_setup);
-static DEVICE_ATTR(routing, S_IWUSR | S_IRUGO, pod_get_routing, pod_set_routing);
+static DEVICE_ATTR(retrieve_amp_setup, S_IWUGO, line6_nop_read, pod_set_retrieve_amp_setup);
+static DEVICE_ATTR(retrieve_channel, S_IWUGO, line6_nop_read, pod_set_retrieve_channel);
+static DEVICE_ATTR(retrieve_effects_setup, S_IWUGO, line6_nop_read, pod_set_retrieve_effects_setup);
+static DEVICE_ATTR(routing, S_IWUGO | S_IRUGO, pod_get_routing, pod_set_routing);
static DEVICE_ATTR(serial_number, S_IRUGO, pod_get_serial_number, line6_nop_write);
-static DEVICE_ATTR(store_amp_setup, S_IWUSR, line6_nop_read, pod_set_store_amp_setup);
-static DEVICE_ATTR(store_channel, S_IWUSR, line6_nop_read, pod_set_store_channel);
-static DEVICE_ATTR(store_effects_setup, S_IWUSR, line6_nop_read, pod_set_store_effects_setup);
-static DEVICE_ATTR(tuner_freq, S_IWUSR | S_IRUGO, pod_get_tuner_freq, pod_set_tuner_freq);
-static DEVICE_ATTR(tuner_mute, S_IWUSR | S_IRUGO, pod_get_tuner_mute, pod_set_tuner_mute);
+static DEVICE_ATTR(store_amp_setup, S_IWUGO, line6_nop_read, pod_set_store_amp_setup);
+static DEVICE_ATTR(store_channel, S_IWUGO, line6_nop_read, pod_set_store_channel);
+static DEVICE_ATTR(store_effects_setup, S_IWUGO, line6_nop_read, pod_set_store_effects_setup);
+static DEVICE_ATTR(tuner_freq, S_IWUGO | S_IRUGO, pod_get_tuner_freq, pod_set_tuner_freq);
+static DEVICE_ATTR(tuner_mute, S_IWUGO | S_IRUGO, pod_get_tuner_mute, pod_set_tuner_mute);
static DEVICE_ATTR(tuner_note, S_IRUGO, pod_get_tuner_note, line6_nop_write);
static DEVICE_ATTR(tuner_pitch, S_IRUGO, pod_get_tuner_pitch, line6_nop_write);
#if CREATE_RAW_FILE
-static DEVICE_ATTR(raw, S_IWUSR, line6_nop_read, line6_set_raw);
+static DEVICE_ATTR(raw, S_IWUGO, line6_nop_read, line6_set_raw);
#endif
/*
return count;
}
-static DEVICE_ATTR(led_red, S_IWUSR | S_IRUGO, line6_nop_read, toneport_set_led_red);
-static DEVICE_ATTR(led_green, S_IWUSR | S_IRUGO, line6_nop_read, toneport_set_led_green);
+static DEVICE_ATTR(led_red, S_IWUGO | S_IRUGO, line6_nop_read, toneport_set_led_red);
+static DEVICE_ATTR(led_green, S_IWUGO | S_IRUGO, line6_nop_read, toneport_set_led_green);
static int toneport_send_cmd(struct usb_device *usbdev, int cmd1, int cmd2)
#endif
/* Variax workbench special files: */
-static DEVICE_ATTR(model, S_IWUSR | S_IRUGO, variax_get_model, variax_set_model);
-static DEVICE_ATTR(volume, S_IWUSR | S_IRUGO, variax_get_volume, variax_set_volume);
-static DEVICE_ATTR(tone, S_IWUSR | S_IRUGO, variax_get_tone, variax_set_tone);
+static DEVICE_ATTR(model, S_IWUGO | S_IRUGO, variax_get_model, variax_set_model);
+static DEVICE_ATTR(volume, S_IWUGO | S_IRUGO, variax_get_volume, variax_set_volume);
+static DEVICE_ATTR(tone, S_IWUGO | S_IRUGO, variax_get_tone, variax_set_tone);
static DEVICE_ATTR(name, S_IRUGO, variax_get_name, line6_nop_write);
static DEVICE_ATTR(bank, S_IRUGO, variax_get_bank, line6_nop_write);
static DEVICE_ATTR(dump, S_IRUGO, variax_get_dump, line6_nop_write);
-static DEVICE_ATTR(active, S_IWUSR | S_IRUGO, variax_get_active, variax_set_active);
+static DEVICE_ATTR(active, S_IWUGO | S_IRUGO, variax_get_active, variax_set_active);
#if CREATE_RAW_FILE
-static DEVICE_ATTR(raw, S_IWUSR, line6_nop_read, line6_set_raw);
-static DEVICE_ATTR(raw2, S_IWUSR, line6_nop_read, variax_set_raw2);
+static DEVICE_ATTR(raw, S_IWUGO, line6_nop_read, line6_set_raw);
+static DEVICE_ATTR(raw2, S_IWUGO, line6_nop_read, variax_set_raw2);
#endif
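The hunks above replace S_IWUSR with S_IWUGO in the sysfs attribute modes, which widens write permission from the owner alone to group and other as well. A userspace sketch of what those mode words evaluate to; S_IWUGO and S_IRUGO are kernel-internal helpers, so hypothetical MY_* equivalents are built here from the standard per-class bits:

    #include <stdio.h>
    #include <sys/stat.h>

    #define MY_S_IRUGO (S_IRUSR | S_IRGRP | S_IROTH)   /* read for user, group, other */
    #define MY_S_IWUGO (S_IWUSR | S_IWGRP | S_IWOTH)   /* write for user, group, other */

    int main(void)
    {
        printf("S_IWUSR | S_IRUGO = %04o\n", S_IWUSR | MY_S_IRUGO);     /* 0644 */
        printf("S_IWUGO | S_IRUGO = %04o\n", MY_S_IWUGO | MY_S_IRUGO);  /* 0666 */
        return 0;
    }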
--- /dev/null
+config INPUT_MIMIO
+ tristate "Mimio Xi interactive whiteboard support"
+ depends on USB && INPUT
+ default N
+ help
+ Say Y here if you want to use a Mimio Xi interactive
+ whiteboard device.
+
+ To compile this driver as a module, choose M here: the
+ module will be called mimio.
--- /dev/null
+obj-$(CONFIG_INPUT_MIMIO) += mimio.o
--- /dev/null
+/*
+ * Hardware event => input event mapping:
+ *
+ *
+ *
+ input.h:#define BTN_TOOL_PEN 0x140 black
+ input.h:#define BTN_TOOL_RUBBER 0x141 blue
+ input.h:#define BTN_TOOL_BRUSH 0x142 green
+ input.h:#define BTN_TOOL_PENCIL 0x143 red
+ input.h:#define BTN_TOOL_AIRBRUSH 0x144 eraser
+ input.h:#define BTN_TOOL_FINGER 0x145 small eraser
+ input.h:#define BTN_TOOL_MOUSE 0x146 mimio interactive
+ input.h:#define BTN_TOOL_LENS 0x147 mimio interactive but1
+ input.h:#define LOCALBTN_TOOL_EXTRA1 0x14a mimio interactive but2 == BTN_TOUCH
+ input.h:#define LOCALBTN_TOOL_EXTRA2 0x14b mimio extra pens (orange, brown, yellow, purple) == BTN_STYLUS
+ input.h:#define LOCALBTN_TOOL_EXTRA3 0x14c unused == BTN_STYLUS2
+ input.h:#define BTN_TOOL_DOUBLETAP 0x14d unused
+ input.h:#define BTN_TOOL_TRIPLETAP 0x14e unused
+ *
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_K) => EV_KEY BIT(BTN_TOOL_PEN)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_B) => EV_KEY BIT(BTN_TOOL_RUBBER)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_G) => EV_KEY BIT(BTN_TOOL_BRUSH)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_R) => EV_KEY BIT(BTN_TOOL_PENCIL)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_E) => EV_KEY BIT(BTN_TOOL_AIRBRUSH)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_ES) => EV_KEY BIT(BTN_TOOL_FINGER)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_I) => EV_KEY BIT(BTN_TOOL_MOUSE)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_IL) => EV_KEY BIT(BTN_TOOL_LENS)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_IR) => EV_KEY BIT(BTN_TOOL_DOUBLETAP)
+ * MIMIO_EV_PENDOWN(MIMIO_PEN_EX) => EV_KEY BIT(BTN_TOOL_TRIPLETAP)
+ * MIMIO_EV_PENDATA => EV_ABS BIT(ABS_X), BIT(ABS_Y)
+ * MIMIO_EV_MEMRESET => EV_KEY BIT(BTN_0)
+ * MIMIO_EV_ACC(ACC_NEWPAGE) => EV_KEY BIT(BTN_1)
+ * MIMIO_EV_ACC(ACC_TAGPAGE) => EV_KEY BIT(BTN_2)
+ * MIMIO_EV_ACC(ACC_PRINTPAGE) => EV_KEY BIT(BTN_3)
+ * MIMIO_EV_ACC(ACC_MAXIMIZE) => EV_KEY BIT(BTN_4)
+ * MIMIO_EV_ACC(ACC_FINDCTLPNL) => EV_KEY BIT(BTN_5)
+ *
+ *
+ * open issues:
+ * - cold-load of data captured while the mimio was in standalone mode is
+ * not yet supported; need to snoop a Win32 box to see the datastream for
+ * this.
+ * - the mimio mouse is not yet supported; need to snoop a Win32 box to see
+ * the datastream for this.
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/input.h>
+#include <linux/usb.h>
+
+#define DRIVER_VERSION "v0.031"
+#define DRIVER_AUTHOR "mwilder@cs.nmsu.edu"
+#define DRIVER_DESC "USB mimio-xi driver"
+
+enum {UPVALUE, DOWNVALUE, MOVEVALUE};
+
+#define MIMIO_XRANGE_MAX 9600
+#define MIMIO_YRANGE_MAX 4800
+
+#define LOCALBTN_TOOL_EXTRA1 BTN_TOUCH
+#define LOCALBTN_TOOL_EXTRA2 BTN_STYLUS
+#define LOCALBTN_TOOL_EXTRA3 BTN_STYLUS2
+
+#define MIMIO_VENDOR_ID 0x08d3
+#define MIMIO_PRODUCT_ID 0x0001
+#define MIMIO_MAXPAYLOAD (8)
+#define MIMIO_MAXNAMELEN (64)
+#define MIMIO_TXWAIT (1)
+#define MIMIO_TXDONE (2)
+
+#define MIMIO_EV_PENDOWN (0x22)
+#define MIMIO_EV_PENDATA (0x24)
+#define MIMIO_EV_PENUP (0x51)
+#define MIMIO_EV_MEMRESET (0x45)
+#define MIMIO_EV_ACC (0xb2)
+
+#define MIMIO_PEN_K (1) /* black pen */
+#define MIMIO_PEN_B (2) /* blue pen */
+#define MIMIO_PEN_G (3) /* green pen */
+#define MIMIO_PEN_R (4) /* red pen */
+/* 5, 6, 7, 8 are extra pens */
+#define MIMIO_PEN_E (9) /* big eraser */
+#define MIMIO_PEN_ES (10) /* lil eraser */
+#define MIMIO_PENJUMP_START (10)
+#define MIMIO_PENJUMP (6)
+#define MIMIO_PEN_I (17) /* mimio interactive */
+#define MIMIO_PEN_IL (18) /* mimio interactive button 1 */
+#define MIMIO_PEN_IR (19) /* mimio interactive button 2 */
+
+#define MIMIO_PEN_MAX (MIMIO_PEN_IR)
+
+#define ACC_DONE (0)
+#define ACC_NEWPAGE (1)
+#define ACC_TAGPAGE (2)
+#define ACC_PRINTPAGE (4)
+#define ACC_MAXIMIZE (8)
+#define ACC_FINDCTLPNL (16)
+
+#define isvalidtxsize(n) ((n) > 0 && (n) <= MIMIO_MAXPAYLOAD)
+
+
+struct pktbuf {
+ unsigned char instr;
+ unsigned char buf[16];
+ unsigned char *p;
+ unsigned char *q;
+};
+
+struct usbintendpt {
+ dma_addr_t dma;
+ struct urb *urb;
+ unsigned char *buf;
+ struct usb_endpoint_descriptor *desc;
+};
+
+struct mimio {
+ struct input_dev *idev;
+ struct usb_device *udev;
+ struct usb_interface *uifc;
+ int open;
+ int present;
+ int greeted;
+ int txflags;
+ char phys[MIMIO_MAXNAMELEN];
+ struct usbintendpt in;
+ struct usbintendpt out;
+ struct pktbuf pktbuf;
+ unsigned char minor;
+ wait_queue_head_t waitq;
+ spinlock_t txlock;
+ void (*rxhandler)(struct mimio *, unsigned char *, unsigned int);
+ int last_pen_down;
+};
+
+static void mimio_close(struct input_dev *);
+static void mimio_dealloc(struct mimio *);
+static void mimio_disconnect(struct usb_interface *);
+static int mimio_greet(struct mimio *);
+static void mimio_irq_in(struct urb *);
+static void mimio_irq_out(struct urb *);
+static int mimio_open(struct input_dev *);
+static int mimio_probe(struct usb_interface *, const struct usb_device_id *);
+static void mimio_rx_handler(struct mimio *, unsigned char *, unsigned int);
+static int mimio_tx(struct mimio *, const char *, int);
+
+static char mimio_name[] = "VirtualInk mimio-Xi";
+static struct usb_device_id mimio_table [] = {
+ { USB_DEVICE(MIMIO_VENDOR_ID, MIMIO_PRODUCT_ID) },
+ { USB_DEVICE(0x0525, 0xa4a0) }, /* gadget zero firmware */
+ { }
+};
+
+MODULE_DEVICE_TABLE(usb, mimio_table);
+
+static struct usb_driver mimio_driver = {
+ .name = "mimio",
+ .probe = mimio_probe,
+ .disconnect = mimio_disconnect,
+ .id_table = mimio_table,
+};
+
+static DECLARE_MUTEX(disconnect_sem);
+
+static void mimio_close(struct input_dev *idev)
+{
+ struct mimio *mimio;
+
+ mimio = input_get_drvdata(idev);
+ if (!mimio) {
+ dev_err(&idev->dev, "null mimio attached to input device\n");
+ return;
+ }
+
+ if (mimio->open <= 0)
+ dev_err(&idev->dev, "mimio not open.\n");
+ else
+ mimio->open--;
+
+ if (mimio->present == 0 && mimio->open == 0)
+ mimio_dealloc(mimio);
+}
+
+static void mimio_dealloc(struct mimio *mimio)
+{
+ if (mimio == NULL)
+ return;
+
+ usb_kill_urb(mimio->in.urb);
+
+ usb_kill_urb(mimio->out.urb);
+
+ if (mimio->idev) {
+ input_unregister_device(mimio->idev);
+ if (mimio->idev->grab)
+ input_close_device(mimio->idev->grab);
+ else
+ dev_dbg(&mimio->idev->dev, "mimio->idev->grab == NULL"
+ " -- didn't call input_close_device\n");
+ }
+
+ usb_free_urb(mimio->in.urb);
+
+ usb_free_urb(mimio->out.urb);
+
+ if (mimio->in.buf) {
+ usb_buffer_free(mimio->udev, MIMIO_MAXPAYLOAD, mimio->in.buf,
+ mimio->in.dma);
+ }
+
+ if (mimio->out.buf)
+ usb_buffer_free(mimio->udev, MIMIO_MAXPAYLOAD, mimio->out.buf,
+ mimio->out.dma);
+
+ if (mimio->idev)
+ input_free_device(mimio->idev);
+
+ kfree(mimio);
+}
+
+static void mimio_disconnect(struct usb_interface *ifc)
+{
+ struct mimio *mimio;
+
+ down(&disconnect_sem);
+
+ mimio = usb_get_intfdata(ifc);
+ usb_set_intfdata(ifc, NULL);
+
+ if (mimio) {
+ dev_dbg(&mimio->idev->dev, "disconnect\n");
+ mimio->present = 0;
+
+ if (mimio->open <= 0)
+ mimio_dealloc(mimio);
+ }
+
+ up(&disconnect_sem);
+}
+
+static int mimio_greet(struct mimio *mimio)
+{
+ const struct grtpkt {
+ int nbytes;
+ unsigned delay;
+ char data[8];
+ } grtpkts[] = {
+ { 3, 0, { 0x11, 0x55, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x53, 0x55, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x43, 0x55, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x33, 0x55, 0x00, 0x00, 0x66, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x13, 0x00, 0x5e, 0x02, 0x4f, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x13, 0x00, 0x04, 0x03, 0x14, 0x00, 0x00, 0x00 } },
+ { 5, 2, { 0x13, 0x00, 0x00, 0x04, 0x17, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x13, 0x00, 0x0d, 0x08, 0x16, 0x00, 0x00, 0x00 } },
+ { 5, 0, { 0x13, 0x00, 0x4d, 0x01, 0x5f, 0x00, 0x00, 0x00 } },
+ { 3, 0, { 0xf1, 0x55, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x00 } },
+ { 7, 2, { 0x52, 0x55, 0x00, 0x07, 0x31, 0x55, 0x64, 0x00 } },
+ { 0, 0, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } },
+ };
+ int rslt;
+ const struct grtpkt *pkt;
+
+ for (pkt = grtpkts; pkt->nbytes; pkt++) {
+ rslt = mimio_tx(mimio, pkt->data, pkt->nbytes);
+ if (rslt)
+ return rslt;
+ if (pkt->delay)
+ msleep(pkt->delay);
+ }
+
+ return 0;
+}
+
+static void mimio_irq_in(struct urb *urb)
+{
+ int rslt;
+ char *data;
+ const char *reason = "going down";
+ struct mimio *mimio;
+
+ mimio = urb->context;
+
+ if (mimio == NULL)
+ /* paranoia */
+ return;
+
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ETIMEDOUT:
+ reason = "timeout -- unplugged?";
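+ /* fall through: treat a timeout like the disconnect cases below */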
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ dev_dbg(&mimio->idev->dev, "%s.\n", reason);
+ return;
+ default:
+ dev_dbg(&mimio->idev->dev, "unknown urb-status: %d.\n",
+ urb->status);
+ goto exit;
+ }
+ data = mimio->in.buf;
+
+ if (mimio->rxhandler)
+ mimio->rxhandler(mimio, data, urb->actual_length);
+exit:
+ /*
+ * Keep listening to device on same urb.
+ */
+ rslt = usb_submit_urb(urb, GFP_ATOMIC);
+ if (rslt)
+ dev_err(&mimio->idev->dev, "usb_submit_urb failure: %d.\n",
+ rslt);
+}
+
+static void mimio_irq_out(struct urb *urb)
+{
+ unsigned long flags;
+ struct mimio *mimio;
+
+ mimio = urb->context;
+
+ if (urb->status)
+ dev_dbg(&mimio->idev->dev, "urb-status: %d.\n", urb->status);
+
+ spin_lock_irqsave(&mimio->txlock, flags);
+ mimio->txflags |= MIMIO_TXDONE;
+ spin_unlock_irqrestore(&mimio->txlock, flags);
+ wmb();
+ wake_up(&mimio->waitq);
+}
+
+static int mimio_open(struct input_dev *idev)
+{
+ int rslt;
+ struct mimio *mimio;
+
+ rslt = 0;
+ down(&disconnect_sem);
+ mimio = input_get_drvdata(idev);
+ dev_dbg(&idev->dev, "mimio_open\n");
+
+ if (mimio == NULL) {
+ dev_err(&idev->dev, "null mimio.\n");
+ rslt = -ENODEV;
+ goto exit;
+ }
+
+ if (mimio->open++)
+ goto exit;
+
+ if (mimio->present && !mimio->greeted) {
+ struct urb *urb = mimio->in.urb;
+ mimio->in.urb->dev = mimio->udev;
+ rslt = usb_submit_urb(mimio->in.urb, GFP_KERNEL);
+ if (rslt) {
+ dev_err(&idev->dev, "usb_submit_urb failure "
+ "(res = %d: %s). Not greeting.\n",
+ rslt,
+ (!urb ? "urb is NULL" :
+ (urb->hcpriv ? "urb->hcpriv is non-NULL" :
+ (!urb->complete ? "urb is not complete" :
+ (urb->number_of_packets <= 0 ? "urb has no packets" :
+ (urb->interval <= 0 ? "urb interval too small" :
+ "urb interval too large or some other error"))))));
+ rslt = -EIO;
+ goto exit;
+ }
+ rslt = mimio_greet(mimio);
+ if (rslt == 0) {
+ dev_dbg(&idev->dev, "Mimio greeted OK.\n");
+ mimio->greeted = 1;
+ } else {
+ dev_dbg(&idev->dev, "Mimio greet Failure (%d)\n",
+ rslt);
+ }
+ }
+
+exit:
+ up(&disconnect_sem);
+ return rslt;
+}
+
+static int mimio_probe(struct usb_interface *ifc,
+ const struct usb_device_id *id)
+{
+ char path[64];
+ int pipe, maxp;
+ struct mimio *mimio;
+ struct usb_device *udev;
+ struct usb_host_interface *hostifc;
+ struct input_dev *input_dev;
+ int res = 0;
+ int i;
+
+ udev = interface_to_usbdev(ifc);
+
+ mimio = kzalloc(sizeof(struct mimio), GFP_KERNEL);
+ if (!mimio)
+ return -ENOMEM;
+
+ input_dev = input_allocate_device();
+ if (!input_dev) {
+ mimio_dealloc(mimio);
+ return -ENOMEM;
+ }
+
+ mimio->uifc = ifc;
+ mimio->udev = udev;
+ mimio->pktbuf.p = mimio->pktbuf.buf;
+ mimio->pktbuf.q = mimio->pktbuf.buf;
+ /* init_input_dev(mimio->idev); */
+ mimio->idev = input_dev;
+ init_waitqueue_head(&mimio->waitq);
+ spin_lock_init(&mimio->txlock);
+ hostifc = ifc->cur_altsetting;
+
+ if (hostifc->desc.bNumEndpoints != 2) {
+ dev_err(&udev->dev, "Unexpected endpoint count: %d.\n",
+ hostifc->desc.bNumEndpoints);
+ mimio_dealloc(mimio);
+ return -ENODEV;
+ }
+
+ mimio->in.desc = &(hostifc->endpoint[0].desc);
+ mimio->out.desc = &(hostifc->endpoint[1].desc);
+
+ mimio->in.buf = usb_buffer_alloc(udev, MIMIO_MAXPAYLOAD, GFP_KERNEL,
+ &mimio->in.dma);
+ mimio->out.buf = usb_buffer_alloc(udev, MIMIO_MAXPAYLOAD, GFP_KERNEL,
+ &mimio->out.dma);
+
+ if (mimio->in.buf == NULL || mimio->out.buf == NULL) {
+ dev_err(&udev->dev, "usb_buffer_alloc failure.\n");
+ mimio_dealloc(mimio);
+ return -ENOMEM;
+ }
+
+ mimio->in.urb = usb_alloc_urb(0, GFP_KERNEL);
+ mimio->out.urb = usb_alloc_urb(0, GFP_KERNEL);
+
+ if (mimio->in.urb == NULL || mimio->out.urb == NULL) {
+ dev_err(&udev->dev, "usb_alloc_urb failure.\n");
+ mimio_dealloc(mimio);
+ return -ENOMEM;
+ }
+
+ /*
+ * Build the input urb.
+ */
+ pipe = usb_rcvintpipe(udev, mimio->in.desc->bEndpointAddress);
+ maxp = usb_maxpacket(udev, pipe, usb_pipeout(pipe));
+ if (maxp > MIMIO_MAXPAYLOAD)
+ maxp = MIMIO_MAXPAYLOAD;
+ usb_fill_int_urb(mimio->in.urb, udev, pipe, mimio->in.buf, maxp,
+ mimio_irq_in, mimio, mimio->in.desc->bInterval);
+ mimio->in.urb->transfer_dma = mimio->in.dma;
+ mimio->in.urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+
+ /*
+ * Build the output urb.
+ */
+ pipe = usb_sndintpipe(udev, mimio->out.desc->bEndpointAddress);
+ maxp = usb_maxpacket(udev, pipe, usb_pipeout(pipe));
+ if (maxp > MIMIO_MAXPAYLOAD)
+ maxp = MIMIO_MAXPAYLOAD;
+ usb_fill_int_urb(mimio->out.urb, udev, pipe, mimio->out.buf, maxp,
+ mimio_irq_out, mimio, mimio->out.desc->bInterval);
+ mimio->out.urb->transfer_dma = mimio->out.dma;
+ mimio->out.urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+
+ /*
+ * Build input device info
+ */
+ usb_make_path(udev, path, 64);
+ snprintf(mimio->phys, MIMIO_MAXNAMELEN, "%s/input0", path);
+ input_set_drvdata(input_dev, mimio);
+ /* input_dev->dev = &ifc->dev; */
+ input_dev->open = mimio_open;
+ input_dev->close = mimio_close;
+ input_dev->name = mimio_name;
+ input_dev->phys = mimio->phys;
+ input_dev->dev.parent = &ifc->dev;
+
+ input_dev->id.bustype = BUS_USB;
+ input_dev->id.vendor = le16_to_cpu(udev->descriptor.idVendor);
+ input_dev->id.product = le16_to_cpu(udev->descriptor.idProduct);
+ input_dev->id.version = le16_to_cpu(udev->descriptor.bcdDevice);
+
+ input_dev->evbit[0] |= BIT(EV_KEY) | BIT(EV_ABS);
+ for (i = BTN_TOOL_PEN; i <= LOCALBTN_TOOL_EXTRA2; ++i)
+ set_bit(i, input_dev->keybit);
+
+ input_dev->keybit[BIT_WORD(BTN_MISC)] |= BIT_MASK(BTN_0) |
+ BIT_MASK(BTN_1) |
+ BIT_MASK(BTN_2) |
+ BIT_MASK(BTN_3) |
+ BIT_MASK(BTN_4) |
+ BIT_MASK(BTN_5);
+ /* input_dev->keybit[BTN_MOUSE] |= BIT(BTN_LEFT); */
+ input_dev->absbit[0] |= BIT_MASK(ABS_X) | BIT_MASK(ABS_Y);
+ input_set_abs_params(input_dev, ABS_X, 0, MIMIO_XRANGE_MAX, 0, 0);
+ input_set_abs_params(input_dev, ABS_Y, 0, MIMIO_YRANGE_MAX, 0, 0);
+ input_dev->absbit[BIT_WORD(ABS_MISC)] |= BIT_MASK(ABS_MISC);
+
+#if 0
+ input_dev->absmin[ABS_X] = 0;
+ input_dev->absmin[ABS_Y] = 0;
+ input_dev->absmax[ABS_X] = 9600;
+ input_dev->absmax[ABS_Y] = 4800;
+ input_dev->absfuzz[ABS_X] = 0;
+ input_dev->absfuzz[ABS_Y] = 0;
+ input_dev->absflat[ABS_X] = 0;
+ input_dev->absflat[ABS_Y] = 0;
+#endif
+
+#if 0
+ /* this will just reduce the precision */
+ input_dev->absfuzz[ABS_X] = 8; /* experimental; may need to change */
+ input_dev->absfuzz[ABS_Y] = 8; /* experimental; may need to change */
+#endif
+
+ /*
+ * Register the input device.
+ */
+ res = input_register_device(mimio->idev);
+ if (res) {
+ dev_err(&udev->dev, "input_register_device failure (%d)\n",
+ res);
+ mimio_dealloc(mimio);
+ return -EIO;
+ }
+ dev_dbg(&mimio->idev->dev, "input: %s on %s (res = %d).\n",
+ input_dev->name, input_dev->phys, res);
+
+ usb_set_intfdata(ifc, mimio);
+ mimio->present = 1;
+
+ /*
+ * Submit the input urb to the usb subsystem.
+ */
+ mimio->in.urb->dev = mimio->udev;
+ res = usb_submit_urb(mimio->in.urb, GFP_KERNEL);
+ if (res) {
+ dev_err(&mimio->idev->dev, "usb_submit_urb failure (%d)\n",
+ res);
+ mimio_dealloc(mimio);
+ return -EIO;
+ }
+
+ /*
+ * Attempt to greet the mimio after giving
+ * it some post-init settling time.
+ *
+ * note: sometimes this sleep interval isn't
+ * long enough to permit the device to re-init
+ * after a hot-swap; maybe need to bump it up.
+ *
+ * As it is, this probably breaks module unloading support!
+ */
+ msleep(1024);
+
+ res = mimio_greet(mimio);
+ if (res == 0) {
+ dev_dbg(&mimio->idev->dev, "Mimio greeted OK.\n");
+ mimio->greeted = 1;
+ mimio->rxhandler = mimio_rx_handler;
+ } else {
+ dev_dbg(&mimio->idev->dev, "Mimio greet Failure (%d)\n", res);
+ }
+
+ return 0;
+}
+
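+/*
+ * Decode one pen-up (3 byte) or pen-down (4 byte) event from the packet
+ * buffer.  Pen-up events are reported immediately; pen-down reporting is
+ * deferred until coordinate data arrives.  Returns nonzero if only a
+ * partial event has been buffered so far.
+ */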
+static int handle_mimio_rx_penupdown(struct mimio *mimio,
+ int down,
+ const char *const instr[],
+ const int instr_ofst[])
+{
+ int penid, x;
+ if (mimio->pktbuf.q - mimio->pktbuf.p < (down ? 4 : 3))
+ return 1; /* partial pkt */
+
+ if (down) {
+ x = *mimio->pktbuf.p ^ *(mimio->pktbuf.p + 1) ^
+ *(mimio->pktbuf.p + 2);
+ if (x != *(mimio->pktbuf.p + 3)) {
+ dev_dbg(&mimio->idev->dev, "EV_PEN%s: bad xsum.\n",
+ down ? "DOWN":"UP");
+ /* skip this event data */
+ mimio->pktbuf.p += 4;
+ /* decode any remaining events */
+ return 0;
+ }
+ penid = mimio->pktbuf.instr = *(mimio->pktbuf.p + 2);
+ if (penid > MIMIO_PEN_MAX) {
+ dev_dbg(&mimio->idev->dev,
+ "Unmapped penID (not in [0, %d]): %d\n",
+ MIMIO_PEN_MAX, (int)mimio->pktbuf.instr);
+ penid = mimio->pktbuf.instr = 0;
+ }
+ mimio->last_pen_down = penid;
+ } else {
+ penid = mimio->last_pen_down;
+ }
+ dev_dbg(&mimio->idev->dev, "%s (id %d, code %d) %s.\n", instr[penid],
+ instr_ofst[penid], penid, down ? "down" : "up");
+
+ if (instr_ofst[penid] >= 0) {
+ int code = BTN_TOOL_PEN + instr_ofst[penid];
+ int value = down ? DOWNVALUE : UPVALUE;
+ if (code > KEY_MAX)
+ dev_dbg(&mimio->idev->dev, "input_event will ignore "
+ "-- code (%d) > KEY_MAX\n", code);
+ if (!test_bit(code, mimio->idev->keybit))
+ dev_dbg(&mimio->idev->dev, "input_event will ignore "
+ "-- bit for code (%d) not enabled\n", code);
+ if (!!test_bit(code, mimio->idev->key) == value)
+ dev_dbg(&mimio->idev->dev, "input_event will ignore "
+ "-- bit for code (%d) already set to %d\n",
+ code, value);
+ if (value != DOWNVALUE) {
+ /* input_regs(mimio->idev, regs); */
+ input_report_key(mimio->idev, code, value);
+ input_sync(mimio->idev);
+ } else {
+ /* wait until we get some coordinates */
+ }
+ } else {
+ dev_dbg(&mimio->idev->dev, "penID offset[%d] == %d is < 0 "
+ "- not sending\n", penid, instr_ofst[penid]);
+ }
+ mimio->pktbuf.p += down ? 4 : 3; /* 3 for up, 4 for down */
+ return 0;
+}
+
+/*
+ * Stay tuned for partial-packet excitement.
+ *
+ * This routine buffers data packets received from the mimio device
+ * in the mimio's data space. This buffering is necessary because
+ * the mimio's in endpoint can serve us partial packets of data, and
+ * we want the driver to support the servicing of multiple mimios.
+ * Empirical evidence gathered so far suggests that the method of
+ * buffering packet data in the mimio's data space works. Previous
+ * versions of this driver did not buffer packet data in each mimio's
+ * data-space, and were therefore not able to service multiple mimios.
+ * Note that since the caller of this routine is running in interrupt
+ * context, care needs to be taken to ensure that this routine does not
+ * become bloated, and it may be that another spinlock is needed in each
+ * mimio to guard the buffered packet data properly.
+ */
+static void mimio_rx_handler(struct mimio *mimio,
+ unsigned char *data,
+ unsigned int nbytes)
+{
+ struct device *dev = &mimio->idev->dev;
+ unsigned int x;
+ unsigned int y;
+ static const char * const instr[] = {
+ "?0",
+ "black pen", "blue pen", "green pen", "red pen",
+ "brown pen", "orange pen", "purple pen", "yellow pen",
+ "big eraser", "lil eraser",
+ "?11", "?12", "?13", "?14", "?15", "?16",
+ "mimio interactive", "interactive button1",
+ "interactive button2"
+ };
+
+ /* Mimio Interactive gives:
+ * down: [0x22 0x01 0x11 0x32 0x24]
+ * b1 : [0x22 0x01 0x12 0x31 0x24]
+ * b2 : [0x22 0x01 0x13 0x30 0x24]
+ */
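+	/*
+	 * Offset from BTN_TOOL_PEN reported for each instrument ID above;
+	 * -1 means no input event is generated for that ID.
+	 */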
+ static const int instr_ofst[] = {
+ -1,
+ 0, 1, 2, 3,
+ 9, 9, 9, 9,
+ 4, 5,
+ -1, -1, -1, -1, -1, -1,
+ 6, 7, 8,
+ };
+
+ memcpy(mimio->pktbuf.q, data, nbytes);
+ mimio->pktbuf.q += nbytes;
+
+ while (mimio->pktbuf.p < mimio->pktbuf.q) {
+ int t = *mimio->pktbuf.p;
+ switch (t) {
+ case MIMIO_EV_PENUP:
+ case MIMIO_EV_PENDOWN:
+ if (handle_mimio_rx_penupdown(mimio,
+ t == MIMIO_EV_PENDOWN,
+ instr, instr_ofst))
+ return; /* partial packet */
+ break;
+
+ case MIMIO_EV_PENDATA:
+ if (mimio->pktbuf.q - mimio->pktbuf.p < 6)
+ /* partial pkt */
+ return;
+ x = *mimio->pktbuf.p ^ *(mimio->pktbuf.p + 1) ^
+ *(mimio->pktbuf.p + 2) ^
+ *(mimio->pktbuf.p + 3) ^
+ *(mimio->pktbuf.p + 4);
+ if (x != *(mimio->pktbuf.p + 5)) {
+ dev_dbg(dev, "EV_PENDATA: bad xsum.\n");
+ mimio->pktbuf.p += 6; /* skip this event data */
+ break; /* decode any remaining events */
+ }
+ x = *(mimio->pktbuf.p + 1);
+ x <<= 8;
+ x |= *(mimio->pktbuf.p + 2);
+ y = *(mimio->pktbuf.p + 3);
+ y <<= 8;
+ y |= *(mimio->pktbuf.p + 4);
+ dev_dbg(dev, "coord: (%d, %d)\n", x, y);
+ if (instr_ofst[mimio->pktbuf.instr] >= 0) {
+ int code = BTN_TOOL_PEN +
+ instr_ofst[mimio->last_pen_down];
+#if 0
+ /* Utter hack to ensure we get forwarded _AND_
+ * so we can identify when a complete signal is
+ * received */
+ mimio->idev->abs[ABS_Y] = -1;
+ mimio->idev->abs[ABS_X] = -1;
+#endif
+ /* input_regs(mimio->idev, regs); */
+ input_report_abs(mimio->idev, ABS_X, x);
+ input_report_abs(mimio->idev, ABS_Y, y);
+ /* fake a penup */
+ change_bit(code, mimio->idev->key);
+ input_report_key(mimio->idev,
+ code,
+ DOWNVALUE);
+ /* always sync here */
+ mimio->idev->sync = 0;
+ input_sync(mimio->idev);
+ }
+ mimio->pktbuf.p += 6;
+ break;
+ case MIMIO_EV_MEMRESET:
+ if (mimio->pktbuf.q - mimio->pktbuf.p < 7)
+ /* partial pkt */
+ return;
+ dev_dbg(dev, "mem-reset.\n");
+ /* input_regs(mimio->idev, regs); */
+ input_event(mimio->idev, EV_KEY, BTN_0, 1);
+ input_event(mimio->idev, EV_KEY, BTN_0, 0);
+ input_sync(mimio->idev);
+ mimio->pktbuf.p += 7;
+ break;
+ case MIMIO_EV_ACC:
+ if (mimio->pktbuf.q - mimio->pktbuf.p < 4)
+ /* partial pkt */
+ return;
+ x = *mimio->pktbuf.p ^ *(mimio->pktbuf.p + 1) ^
+ *(mimio->pktbuf.p + 2);
+ if (x != *(mimio->pktbuf.p + 3)) {
+ dev_dbg(dev, "EV_ACC: bad xsum.\n");
+ mimio->pktbuf.p += 4; /* skip this event data */
+ break; /* decode any remaining events */
+ }
+ switch (*(mimio->pktbuf.p + 2)) {
+ case ACC_NEWPAGE:
+ dev_dbg(&mimio->idev->dev, "new-page.\n");
+ /* input_regs(mimio->idev, regs); */
+ input_event(mimio->idev, EV_KEY, BTN_1, 1);
+ input_event(mimio->idev, EV_KEY, BTN_1, 0);
+ input_sync(mimio->idev);
+ break;
+ case ACC_TAGPAGE:
+ dev_dbg(&mimio->idev->dev, "tag-page.\n");
+ /* input_regs(mimio->idev, regs); */
+ input_event(mimio->idev, EV_KEY, BTN_2, 1);
+ input_event(mimio->idev, EV_KEY, BTN_2, 0);
+ input_sync(mimio->idev);
+ break;
+ case ACC_PRINTPAGE:
+ dev_dbg(&mimio->idev->dev, "print-page.\n");
+ /* input_regs(mimio->idev, regs);*/
+ input_event(mimio->idev, EV_KEY, BTN_3, 1);
+ input_event(mimio->idev, EV_KEY, BTN_3, 0);
+ input_sync(mimio->idev);
+ break;
+ case ACC_MAXIMIZE:
+ dev_dbg(&mimio->idev->dev,
+ "maximize-window.\n");
+ /* input_regs(mimio->idev, regs); */
+ input_event(mimio->idev, EV_KEY, BTN_4, 1);
+ input_event(mimio->idev, EV_KEY, BTN_4, 0);
+ input_sync(mimio->idev);
+ break;
+ case ACC_FINDCTLPNL:
+ dev_dbg(&mimio->idev->dev, "find-ctl-panel.\n");
+ /* input_regs(mimio->idev, regs); */
+ input_event(mimio->idev, EV_KEY, BTN_5, 1);
+ input_event(mimio->idev, EV_KEY, BTN_5, 0);
+ input_sync(mimio->idev);
+ break;
+ case ACC_DONE:
+ dev_dbg(&mimio->idev->dev, "acc-done.\n");
+ /* no event is dispatched to the input
+ * subsystem for this device event.
+ */
+ break;
+ default:
+ dev_dbg(dev, "unknown acc event.\n");
+ break;
+ }
+ mimio->pktbuf.p += 4;
+ break;
+ default:
+ mimio->pktbuf.p++;
+ break;
+ }
+ }
+
+ /*
+ * No partial event was received, so reset mimio's pktbuf ptrs.
+ */
+ mimio->pktbuf.p = mimio->pktbuf.q = mimio->pktbuf.buf;
+}
+
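+/*
+ * Send one block of data on the interrupt-out endpoint and wait up to
+ * one second for mimio_irq_out() to signal completion.
+ */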
+static int mimio_tx(struct mimio *mimio, const char *buf, int nbytes)
+{
+ int rslt;
+ int timeout;
+ unsigned long flags;
+ DECLARE_WAITQUEUE(wait, current);
+
+ if (!(isvalidtxsize(nbytes))) {
+ dev_err(&mimio->idev->dev, "invalid arg: nbytes: %d.\n",
+ nbytes);
+ return -EINVAL;
+ }
+
+ /*
+ * Init the out urb and copy the data to send.
+ */
+ mimio->out.urb->dev = mimio->udev;
+ mimio->out.urb->transfer_buffer_length = nbytes;
+ memcpy(mimio->out.urb->transfer_buffer, buf, nbytes);
+
+ /*
+ * Send the data.
+ */
+ spin_lock_irqsave(&mimio->txlock, flags);
+ mimio->txflags = MIMIO_TXWAIT;
+ rslt = usb_submit_urb(mimio->out.urb, GFP_ATOMIC);
+ spin_unlock_irqrestore(&mimio->txlock, flags);
+ dev_dbg(&mimio->idev->dev, "rslt: %d.\n", rslt);
+
+ if (rslt) {
+ dev_err(&mimio->idev->dev, "usb_submit_urb failure: %d.\n",
+ rslt);
+ return rslt;
+ }
+
+ /*
+ * Wait for completion to be signalled (the mimio_irq_out
+	 * completion routine ORs MIMIO_TXDONE into txflags).
+ */
+ timeout = HZ;
+ set_current_state(TASK_INTERRUPTIBLE);
+ add_wait_queue(&mimio->waitq, &wait);
+
+ while (timeout && ((mimio->txflags & MIMIO_TXDONE) == 0)) {
+ timeout = schedule_timeout(timeout);
+ rmb();
+ }
+
+ if ((mimio->txflags & MIMIO_TXDONE) == 0)
+ dev_dbg(&mimio->idev->dev, "tx timed out.\n");
+
+ /*
+ * Now that completion has been signalled,
+ * unlink the urb so that it can be recycled.
+ */
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&mimio->waitq, &wait);
+ usb_unlink_urb(mimio->out.urb);
+
+ return rslt;
+}
+
+static int __init mimio_init(void)
+{
+ int rslt;
+
+ rslt = usb_register(&mimio_driver);
+ if (rslt != 0) {
+ err("%s: usb_register failure: %d", __func__, rslt);
+ return rslt;
+ }
+
+ printk(KERN_INFO KBUILD_MODNAME ":"
+ DRIVER_DESC " " DRIVER_VERSION "\n");
+ return rslt;
+}
+
+static void __exit mimio_exit(void)
+{
+ usb_deregister(&mimio_driver);
+}
+
+module_init(mimio_init);
+module_exit(mimio_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
if (pprt) {
parport_release(pprt);
parport_unregister_device(pprt);
- pprt = NULL;
}
parport_unregister_driver(&panel_driver);
printk(KERN_ERR "Panel driver version " PANEL_VERSION
/* TODO: free all input signals */
parport_release(pprt);
parport_unregister_device(pprt);
- pprt = NULL;
}
parport_unregister_driver(&panel_driver);
}
{
if ((pAd->RxRing.Cell[index].DmaBuf.AllocVa) && (pAd->RxRing.Cell[index].pNdisPacket))
{
- PCI_UNMAP_SINGLE(pAd, pAd->RxRing.Cell[index].DmaBuf.AllocPa, pAd->RxRing.Cell[index].DmaBuf.AllocSize, PCI_DMA_FROMDEVICE);
+ PCI_UNMAP_SINGLE(pObj->pci_dev, pAd->RxRing.Cell[index].DmaBuf.AllocPa, pAd->RxRing.Cell[index].DmaBuf.AllocSize, PCI_DMA_FROMDEVICE);
RELEASE_NDIS_PACKET(pAd, pAd->RxRing.Cell[index].pNdisPacket, NDIS_STATUS_SUCCESS);
}
}
}
udelay(10);
}
- if (TryCnt == TC_3W_POLL_MAX_TRY_CNT) {
- printk(KERN_ERR "rtl8187se: HwThreeWire(): CmdReg:"
- " %#X RE|WE bits are not clear!!\n", u1bTmp);
- dump_stack();
- return 0;
- }
+ if (TryCnt == TC_3W_POLL_MAX_TRY_CNT)
+ panic("HwThreeWire(): CmdReg: %#X RE|WE bits are not clear!!\n", u1bTmp);
// RTL8187S HSSI Read/Write Function
u1bTmp = read_nic_byte(dev, RF_SW_CONFIG);
int idx;
int ByteCnt = nDataBufBitCnt / 8;
//printk("%d\n",nDataBufBitCnt);
- if ((nDataBufBitCnt % 8) != 0) {
- printk(KERN_ERR "rtl8187se: "
- "HwThreeWire(): nDataBufBitCnt(%d)"
- " should be multiple of 8!!!\n",
- nDataBufBitCnt);
- dump_stack();
- nDataBufBitCnt += 8;
- nDataBufBitCnt &= ~7;
- }
+ if ((nDataBufBitCnt % 8) != 0)
+ panic("HwThreeWire(): nDataBufBitCnt(%d) should be multiple of 8!!!\n",
+ nDataBufBitCnt);
- if (nDataBufBitCnt > 64) {
- printk(KERN_ERR "rtl8187se: HwThreeWire():"
- " nDataBufBitCnt(%d) should <= 64!!!\n",
- nDataBufBitCnt);
- dump_stack();
- nDataBufBitCnt = 64;
- }
+ if (nDataBufBitCnt > 64)
+ panic("HwThreeWire(): nDataBufBitCnt(%d) should <= 64!!!\n",
+ nDataBufBitCnt);
for(idx = 0; idx < ByteCnt; idx++)
{
static struct usb_device_id rtl8192_usb_id_tbl[] = {
/* Realtek */
- {USB_DEVICE(0x0bda, 0x8171)},
{USB_DEVICE(0x0bda, 0x8192)},
{USB_DEVICE(0x0bda, 0x8709)},
/* Corega */
{USB_DEVICE(0x07aa, 0x0043)},
/* Belkin */
{USB_DEVICE(0x050d, 0x805E)},
- {USB_DEVICE(0x050d, 0x815F)}, /* Belkin F5D8053 v6 */
/* Sitecom */
{USB_DEVICE(0x0df6, 0x0031)},
- {USB_DEVICE(0x0df6, 0x004b)}, /* WL-349 */
/* EnGenius */
{USB_DEVICE(0x1740, 0x9201)},
/* Dlink */
{USB_DEVICE(0x2001, 0x3301)},
/* Zinwell */
{USB_DEVICE(0x5a57, 0x0290)},
- /* Guillemot */
- {USB_DEVICE(0x06f8, 0xe031)},
//92SU
{USB_DEVICE(0x0bda, 0x8172)},
{}
ud->eh_ops.shutdown(ud);
ud->event &= ~USBIP_EH_SHUTDOWN;
+
+ break;
}
+ /* Stop the error handler. */
+ if (ud->event & USBIP_EH_BYE)
+ return -1;
+
/* Reset the device. */
if (ud->event & USBIP_EH_RESET) {
ud->eh_ops.reset(ud);
ud->event &= ~USBIP_EH_RESET;
+
+ break;
}
/* Mark the device as unusable. */
ud->eh_ops.unusable(ud);
ud->event &= ~USBIP_EH_UNUSABLE;
+
+ break;
}
- /* Stop the error handler. */
- if (ud->event & USBIP_EH_BYE)
- return -1;
+ /* NOTREACHED */
+ printk(KERN_ERR "%s: unknown event\n", __func__);
+ return -1;
}
return 0;
{
struct usbip_task *eh = &ud->eh;
- if (eh->thread == current)
- return; /* do not wait for myself */
-
wait_for_completion(&eh->thread_done);
usbip_dbg_eh("usbip_eh has finished\n");
}
* spin_unlock(&vdev->ud.lock); */
spin_unlock_irqrestore(&the_controller->lock, flags);
-
- usb_hcd_poll_rh_status(vhci_to_hcd(the_controller));
}
}
//2008-07-21-01<Add>by MikeLiu
//register wpadev
-#if 0
if(wpa_set_wpadev(pDevice, 1)!=0) {
printk("Fail to Register WPADEV?\n");
unregister_netdev(pDevice->dev);
free_netdev(dev);
}
-#endif
device_print_info(pDevice);
pci_set_drvdata(pcid, pDevice);
return 0;
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "wpa_ie_len = %d\n", param->u.wpa_associate.wpa_ie_len);
- if (param->u.wpa_associate.wpa_ie_len) {
- if (!param->u.wpa_associate.wpa_ie)
- return -EINVAL;
- if (param->u.wpa_associate.wpa_ie_len > sizeof(abyWPAIE))
- return -EINVAL;
- if (copy_from_user(&abyWPAIE[0], param->u.wpa_associate.wpa_ie, param->u.wpa_associate.wpa_ie_len))
- return -EFAULT;
- }
+ if (param->u.wpa_associate.wpa_ie &&
+ copy_from_user(&abyWPAIE[0], param->u.wpa_associate.wpa_ie, param->u.wpa_associate.wpa_ie_len))
+ return -EINVAL;
if (param->u.wpa_associate.mode == 1)
pMgmt->eConfigMode = WMAC_CONFIG_IBSS_STA;
return ret;
}
-static DEVICE_ATTR(stat_status, S_IWUSR | S_IRUGO, read_status, reboot);
+static DEVICE_ATTR(stat_status, S_IWUGO | S_IRUGO, read_status, reboot);
static ssize_t read_human_status(struct device *dev, struct device_attribute *attr,
char *buf)
return ret;
}
-static DEVICE_ATTR(stat_human_status, S_IRUGO, read_human_status, NULL);
+static DEVICE_ATTR(stat_human_status, S_IWUGO | S_IRUGO, read_human_status, NULL);
static ssize_t read_delin(struct device *dev, struct device_attribute *attr,
char *buf)
return ret;
}
-static DEVICE_ATTR(stat_delin, S_IRUGO, read_delin, NULL);
+static DEVICE_ATTR(stat_delin, S_IWUGO | S_IRUGO, read_delin, NULL);
#define UEA_ATTR(name, reset) \
\
{
wb->use = 0;
acm->transmitting--;
- usb_autopm_put_interface_async(acm->control);
}
/*
}
dbg("%s susp_count: %d", __func__, acm->susp_count);
- usb_autopm_get_interface_async(acm->control);
if (acm->susp_count) {
- if (!acm->delayed_wb)
- acm->delayed_wb = wb;
- else
- usb_autopm_put_interface_async(acm->control);
+ acm->delayed_wb = wb;
+ schedule_work(&acm->waker);
spin_unlock_irqrestore(&acm->write_lock, flags);
return 0; /* A white lie */
}
tty_kref_put(tty);
}
+static void acm_waker(struct work_struct *waker)
+{
+ struct acm *acm = container_of(waker, struct acm, waker);
+ int rv;
+
+ rv = usb_autopm_get_interface(acm->control);
+ if (rv < 0) {
+ dev_err(&acm->dev->dev, "Autopm failure in %s\n", __func__);
+ return;
+ }
+ if (acm->delayed_wb) {
+ acm_start_wb(acm, acm->delayed_wb);
+ acm->delayed_wb = NULL;
+ }
+ usb_autopm_put_interface(acm->control);
+}
+
/*
* TTY handlers
*/
}
if (!buflen) {
- if (intf->cur_altsetting->endpoint &&
- intf->cur_altsetting->endpoint->extralen &&
+ if (intf->cur_altsetting->endpoint->extralen &&
intf->cur_altsetting->endpoint->extra) {
dev_dbg(&intf->dev,
"Seeking extra descriptors on endpoint\n");
acm->urb_task.func = acm_rx_tasklet;
acm->urb_task.data = (unsigned long) acm;
INIT_WORK(&acm->work, acm_softint);
+ INIT_WORK(&acm->waker, acm_waker);
init_waitqueue_head(&acm->drain_wait);
spin_lock_init(&acm->throttle_lock);
spin_lock_init(&acm->write_lock);
if (rcv->urb == NULL) {
dev_dbg(&intf->dev,
"out of memory (read urbs usb_alloc_urb)\n");
- goto alloc_fail6;
+ goto alloc_fail7;
}
rcv->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
if (snd->urb == NULL) {
dev_dbg(&intf->dev,
"out of memory (write urbs usb_alloc_urb)");
- goto alloc_fail8;
+ goto alloc_fail7;
}
if (usb_endpoint_xfer_int(epwrite))
i = device_create_file(&intf->dev,
&dev_attr_iCountryCodeRelDate);
if (i < 0) {
- device_remove_file(&intf->dev, &dev_attr_wCountryCodes);
kfree(acm->country_codes);
goto skip_countries;
}
usb_free_urb(acm->wb[i].urb);
alloc_fail7:
acm_read_buffers_free(acm);
-alloc_fail6:
for (i = 0; i < num_rx_buf; i++)
usb_free_urb(acm->ru[i].urb);
usb_free_urb(acm->ctrlurb);
tasklet_enable(&acm->urb_task);
cancel_work_sync(&acm->work);
+ cancel_work_sync(&acm->waker);
}
static void acm_disconnect(struct usb_interface *intf)
static int acm_resume(struct usb_interface *intf)
{
struct acm *acm = usb_get_intfdata(intf);
- struct acm_wb *wb;
int rv = 0;
int cnt;
mutex_lock(&acm->mutex);
if (acm->port.count) {
rv = usb_submit_urb(acm->ctrlurb, GFP_NOIO);
-
- spin_lock_irq(&acm->write_lock);
- if (acm->delayed_wb) {
- wb = acm->delayed_wb;
- acm->delayed_wb = NULL;
- spin_unlock_irq(&acm->write_lock);
- acm_start_wb(acm, wb);
- } else {
- spin_unlock_irq(&acm->write_lock);
- }
-
- /*
- * delayed error checking because we must
- * do the write path at all cost
- */
if (rv < 0)
goto err_out;
}
#endif /* CONFIG_PM */
-
-#define NOKIA_PCSUITE_ACM_INFO(x) \
- USB_DEVICE_AND_INTERFACE_INFO(0x0421, x, \
- USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM, \
- USB_CDC_ACM_PROTO_VENDOR)
-
-#define SAMSUNG_PCSUITE_ACM_INFO(x) \
- USB_DEVICE_AND_INTERFACE_INFO(0x04e7, x, \
- USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM, \
- USB_CDC_ACM_PROTO_VENDOR)
-
/*
* USB driver structure.
*/
{ USB_DEVICE(0x1bbb, 0x0003), /* Alcatel OT-I650 */
.driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
},
- { USB_DEVICE(0x1576, 0x03b1), /* Maretron USB100 */
- .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */
- },
-
- /* Nokia S60 phones expose two ACM channels. The first is
- * a modem and is picked up by the standard AT-command
- * information below. The second is 'vendor-specific' but
- * is treated as a serial device at the S60 end, so we want
- * to expose it on Linux too. */
- { NOKIA_PCSUITE_ACM_INFO(0x042D), }, /* Nokia 3250 */
- { NOKIA_PCSUITE_ACM_INFO(0x04D8), }, /* Nokia 5500 Sport */
- { NOKIA_PCSUITE_ACM_INFO(0x04C9), }, /* Nokia E50 */
- { NOKIA_PCSUITE_ACM_INFO(0x0419), }, /* Nokia E60 */
- { NOKIA_PCSUITE_ACM_INFO(0x044D), }, /* Nokia E61 */
- { NOKIA_PCSUITE_ACM_INFO(0x0001), }, /* Nokia E61i */
- { NOKIA_PCSUITE_ACM_INFO(0x0475), }, /* Nokia E62 */
- { NOKIA_PCSUITE_ACM_INFO(0x0508), }, /* Nokia E65 */
- { NOKIA_PCSUITE_ACM_INFO(0x0418), }, /* Nokia E70 */
- { NOKIA_PCSUITE_ACM_INFO(0x0425), }, /* Nokia N71 */
- { NOKIA_PCSUITE_ACM_INFO(0x0486), }, /* Nokia N73 */
- { NOKIA_PCSUITE_ACM_INFO(0x04DF), }, /* Nokia N75 */
- { NOKIA_PCSUITE_ACM_INFO(0x000e), }, /* Nokia N77 */
- { NOKIA_PCSUITE_ACM_INFO(0x0445), }, /* Nokia N80 */
- { NOKIA_PCSUITE_ACM_INFO(0x042F), }, /* Nokia N91 & N91 8GB */
- { NOKIA_PCSUITE_ACM_INFO(0x048E), }, /* Nokia N92 */
- { NOKIA_PCSUITE_ACM_INFO(0x0420), }, /* Nokia N93 */
- { NOKIA_PCSUITE_ACM_INFO(0x04E6), }, /* Nokia N93i */
- { NOKIA_PCSUITE_ACM_INFO(0x04B2), }, /* Nokia 5700 XpressMusic */
- { NOKIA_PCSUITE_ACM_INFO(0x0134), }, /* Nokia 6110 Navigator (China) */
- { NOKIA_PCSUITE_ACM_INFO(0x046E), }, /* Nokia 6110 Navigator */
- { NOKIA_PCSUITE_ACM_INFO(0x002f), }, /* Nokia 6120 classic & */
- { NOKIA_PCSUITE_ACM_INFO(0x0088), }, /* Nokia 6121 classic */
- { NOKIA_PCSUITE_ACM_INFO(0x00fc), }, /* Nokia 6124 classic */
- { NOKIA_PCSUITE_ACM_INFO(0x0042), }, /* Nokia E51 */
- { NOKIA_PCSUITE_ACM_INFO(0x00b0), }, /* Nokia E66 */
- { NOKIA_PCSUITE_ACM_INFO(0x00ab), }, /* Nokia E71 */
- { NOKIA_PCSUITE_ACM_INFO(0x0481), }, /* Nokia N76 */
- { NOKIA_PCSUITE_ACM_INFO(0x0007), }, /* Nokia N81 & N81 8GB */
- { NOKIA_PCSUITE_ACM_INFO(0x0071), }, /* Nokia N82 */
- { NOKIA_PCSUITE_ACM_INFO(0x04F0), }, /* Nokia N95 & N95-3 NAM */
- { NOKIA_PCSUITE_ACM_INFO(0x0070), }, /* Nokia N95 8GB */
- { NOKIA_PCSUITE_ACM_INFO(0x00e9), }, /* Nokia 5320 XpressMusic */
- { NOKIA_PCSUITE_ACM_INFO(0x0099), }, /* Nokia 6210 Navigator, RM-367 */
- { NOKIA_PCSUITE_ACM_INFO(0x0128), }, /* Nokia 6210 Navigator, RM-419 */
- { NOKIA_PCSUITE_ACM_INFO(0x008f), }, /* Nokia 6220 Classic */
- { NOKIA_PCSUITE_ACM_INFO(0x00a0), }, /* Nokia 6650 */
- { NOKIA_PCSUITE_ACM_INFO(0x007b), }, /* Nokia N78 */
- { NOKIA_PCSUITE_ACM_INFO(0x0094), }, /* Nokia N85 */
- { NOKIA_PCSUITE_ACM_INFO(0x003a), }, /* Nokia N96 & N96-3 */
- { NOKIA_PCSUITE_ACM_INFO(0x00e9), }, /* Nokia 5320 XpressMusic */
- { NOKIA_PCSUITE_ACM_INFO(0x0108), }, /* Nokia 5320 XpressMusic 2G */
- { NOKIA_PCSUITE_ACM_INFO(0x01f5), }, /* Nokia N97, RM-505 */
- { NOKIA_PCSUITE_ACM_INFO(0x02e3), }, /* Nokia 5230, RM-588 */
- { NOKIA_PCSUITE_ACM_INFO(0x0178), }, /* Nokia E63 */
- { NOKIA_PCSUITE_ACM_INFO(0x010e), }, /* Nokia E75 */
- { NOKIA_PCSUITE_ACM_INFO(0x02d9), }, /* Nokia 6760 Slide */
- { NOKIA_PCSUITE_ACM_INFO(0x01d0), }, /* Nokia E52 */
- { NOKIA_PCSUITE_ACM_INFO(0x0223), }, /* Nokia E72 */
- { NOKIA_PCSUITE_ACM_INFO(0x0275), }, /* Nokia X6 */
- { NOKIA_PCSUITE_ACM_INFO(0x026c), }, /* Nokia N97 Mini */
- { NOKIA_PCSUITE_ACM_INFO(0x0154), }, /* Nokia 5800 XpressMusic */
- { NOKIA_PCSUITE_ACM_INFO(0x04ce), }, /* Nokia E90 */
- { NOKIA_PCSUITE_ACM_INFO(0x01d4), }, /* Nokia E55 */
- { SAMSUNG_PCSUITE_ACM_INFO(0x6651), }, /* Samsung GTi8510 (INNOV8) */
-
- /* NOTE: non-Nokia COMM/ACM/0xff is likely MSFT RNDIS... NOT a modem! */
-
- /* control interfaces without any protocol set */
- { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
- USB_CDC_PROTO_NONE) },
/* control interfaces with various AT-command sets */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
USB_CDC_ACM_PROTO_AT_CDMA) },
+ /* NOTE: COMM/ACM/0xff is likely MSFT RNDIS ... NOT a modem!! */
{ }
};
struct mutex mutex;
struct usb_cdc_line_coding line; /* bits, stop, parity */
struct work_struct work; /* work queue entry for line discipline waking up */
+ struct work_struct waker;
wait_queue_head_t drain_wait; /* close processing */
struct tasklet_struct urb_task; /* rx processing */
spinlock_t throttle_lock; /* synchronize throtteling and read callback */
static int proc_connectinfo(struct dev_state *ps, void __user *arg)
{
- struct usbdevfs_connectinfo ci = {
- .devnum = ps->dev->devnum,
- .slow = ps->dev->speed == USB_SPEED_LOW
- };
+ struct usbdevfs_connectinfo ci;
+ ci.devnum = ps->dev->devnum;
+ ci.slow = ps->dev->speed == USB_SPEED_LOW;
if (copy_to_user(arg, &ci, sizeof(ci)))
return -EFAULT;
return 0;
free_async(as);
return -ENOMEM;
}
- /* Isochronous input data may end up being discontiguous
- * if some of the packets are short. Clear the buffer so
- * that the gaps don't leak kernel data to userspace.
- */
- if (is_in && uurb->type == USBDEVFS_URB_TYPE_ISO)
- memset(as->urb->transfer_buffer, 0,
- uurb->buffer_length);
}
as->urb->dev = ps->dev;
as->urb->pipe = (uurb->type << 30) |
void __user *addr = as->userurb;
unsigned int i;
- if (as->userbuffer && urb->actual_length) {
- if (urb->number_of_packets > 0) /* Isochronous */
- i = urb->transfer_buffer_length;
- else /* Non-Isoc */
- i = urb->actual_length;
- if (copy_to_user(as->userbuffer, urb->transfer_buffer, i))
+ if (as->userbuffer && urb->actual_length)
+ if (copy_to_user(as->userbuffer, urb->transfer_buffer,
+ urb->actual_length))
goto err_out;
- }
if (put_user(as->status, &userurb->status))
goto err_out;
if (put_user(urb->actual_length, &userurb->actual_length))
{
struct usb_device *usb_dev;
+ /* driver is often null here; dev_dbg() would oops */
+ pr_debug("usb %s: uevent\n", dev_name(dev));
+
if (is_usb_device(dev)) {
usb_dev = to_usb_device(dev);
} else if (is_usb_interface(dev)) {
}
if (usb_dev->devnum < 0) {
- /* driver is often null here; dev_dbg() would oops */
pr_debug("usb %s: already deleted?\n", dev_name(dev));
return -ENODEV;
}
udev->state == USB_STATE_SUSPENDED)
goto done;
+ udev->do_remote_wakeup = device_may_wakeup(&udev->dev);
+
if (msg.event & PM_EVENT_AUTO) {
- udev->do_remote_wakeup = device_may_wakeup(&udev->dev);
status = autosuspend_check(udev, 0);
if (status < 0)
goto done;
return status;
}
-static void choose_wakeup(struct usb_device *udev, pm_message_t msg)
-{
- /* Remote wakeup is needed only when we actually go to sleep.
- * For things like FREEZE and QUIESCE, if the device is already
- * autosuspended then its current wakeup setting is okay.
- */
- if (msg.event == PM_EVENT_FREEZE || msg.event == PM_EVENT_QUIESCE) {
- udev->do_remote_wakeup = 0;
- return;
- }
-
- /* Allow remote wakeup if it is enabled, even if no interface drivers
- * actually want it.
- */
- udev->do_remote_wakeup = device_may_wakeup(&udev->dev);
-}
-
int usb_suspend(struct device *dev, pm_message_t msg)
{
struct usb_device *udev;
}
udev->skip_sys_resume = 0;
- choose_wakeup(udev, msg);
return usb_external_suspend_device(udev, msg);
}
int usb_register_dev(struct usb_interface *intf,
struct usb_class_driver *class_driver)
{
- int retval;
+ int retval = -EINVAL;
int minor_base = class_driver->minor_base;
- int minor;
+ int minor = 0;
char name[20];
char *temp;
*/
minor_base = 0;
#endif
+ intf->minor = -1;
- if (class_driver->fops == NULL)
- return -EINVAL;
- if (intf->minor >= 0)
- return -EADDRINUSE;
-
- retval = init_usb_class();
- if (retval)
- return retval;
+ dbg ("looking for a minor, starting at %d", minor_base);
- dev_dbg(&intf->dev, "looking for a minor, starting at %d", minor_base);
+ if (class_driver->fops == NULL)
+ goto exit;
down_write(&minor_rwsem);
for (minor = minor_base; minor < MAX_USB_MINORS; ++minor) {
continue;
usb_minors[minor] = class_driver->fops;
- intf->minor = minor;
+
+ retval = 0;
break;
}
up_write(&minor_rwsem);
- if (intf->minor < 0)
- return -EXFULL;
+
+ if (retval)
+ goto exit;
+
+ retval = init_usb_class();
+ if (retval)
+ goto exit;
+
+ intf->minor = minor;
/* create a usb class device for this usb interface */
snprintf(name, sizeof(name), class_driver->name, minor - minor_base);
"%s", temp);
if (IS_ERR(intf->usb_dev)) {
down_write(&minor_rwsem);
- usb_minors[minor] = NULL;
- intf->minor = -1;
+ usb_minors[intf->minor] = NULL;
up_write(&minor_rwsem);
retval = PTR_ERR(intf->usb_dev);
}
+exit:
return retval;
}
EXPORT_SYMBOL_GPL(usb_register_dev);
* than a vendor-specific driver. */
else if (udev->descriptor.bDeviceClass !=
USB_CLASS_VENDOR_SPEC &&
- (desc && desc->bInterfaceClass !=
+ (!desc || desc->bInterfaceClass !=
USB_CLASS_VENDOR_SPEC)) {
best = c;
break;
0x09, /* __u8 bMaxPacketSize0; 2^9 = 512 Bytes */
0x6b, 0x1d, /* __le16 idVendor; Linux Foundation */
- 0x03, 0x00, /* __le16 idProduct; device 0x0003 */
+ 0x02, 0x00, /* __le16 idProduct; device 0x0002 */
KERNEL_VER, KERNEL_REL, /* __le16 bcdDevice */
0x03, /* __u8 iManufacturer; */
/* xHCI specific functions */
/* Called by usb_alloc_dev to alloc HC device structures */
int (*alloc_dev)(struct usb_hcd *, struct usb_device *);
- /* Called by usb_disconnect to free HC device structures */
+ /* Called by usb_release_dev to free HC device structures */
void (*free_dev)(struct usb_hcd *, struct usb_device *);
/* Bandwidth computation functions */
#include <linux/kthread.h>
#include <linux/mutex.h>
#include <linux/freezer.h>
-#include <linux/usb/quirks.h>
#include <asm/uaccess.h>
#include <asm/byteorder.h>
#endif
-static void hub_free_dev(struct usb_device *udev)
-{
- struct usb_hcd *hcd = bus_to_hcd(udev->bus);
-
- /* Root hubs aren't real devices, so don't free HCD resources */
- if (hcd->driver->free_dev && udev->parent)
- hcd->driver->free_dev(hcd, udev);
-}
-
/**
* usb_disconnect - disconnect a device (usbcore-internal)
* @pdev: pointer to device being disconnected
usb_stop_pm(udev);
- hub_free_dev(udev);
-
put_device(&udev->dev);
}
if (udev->parent)
usb_autoresume_device(udev->parent);
+ usb_detect_quirks(udev);
err = usb_enumerate_device(udev); /* Read descriptors */
if (err < 0)
goto fail;
else
i = udev->descriptor.bMaxPacketSize0;
if (le16_to_cpu(udev->ep0.desc.wMaxPacketSize) != i) {
- if (udev->speed == USB_SPEED_LOW ||
+ if (udev->speed != USB_SPEED_FULL ||
!(i == 8 || i == 16 || i == 32 || i == 64)) {
- dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", i);
+ dev_err(&udev->dev, "ep0 maxpacket = %d\n", i);
retval = -EMSGSIZE;
goto fail;
}
- if (udev->speed == USB_SPEED_FULL)
- dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i);
- else
- dev_warn(&udev->dev, "Using ep0 maxpacket: %d\n", i);
+ dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i);
udev->ep0.desc.wMaxPacketSize = cpu_to_le16(i);
usb_ep0_reinit(udev);
}
if (status < 0)
goto loop;
- usb_detect_quirks(udev);
- if (udev->quirks & USB_QUIRK_DELAY_INIT)
- msleep(1000);
-
/* consecutive bus-powered hubs aren't reliable; they can
* violate the voltage drop budget. if the new child has
* a "powered" LED, users should notice we didn't enable it
loop:
usb_ep0_reinit(udev);
release_address(udev);
- hub_free_dev(udev);
usb_put_dev(udev);
if ((status == -ENOTCONN) || (status == -ENOTSUPP))
break;
*dentry = NULL;
mutex_lock(&parent->d_inode->i_mutex);
*dentry = lookup_one_len(name, parent, strlen(name));
- if (!IS_ERR(*dentry)) {
+ if (!IS_ERR(dentry)) {
if ((mode & S_IFMT) == S_IFDIR)
error = usbfs_mkdir (parent->d_inode, *dentry, mode);
else
error = usbfs_create (parent->d_inode, *dentry, mode);
} else
- error = PTR_ERR(*dentry);
+ error = PTR_ERR(dentry);
mutex_unlock(&parent->d_inode->i_mutex);
return error;
{
int i;
+ dev_dbg(&dev->dev, "%s nuking %s URBs\n", __func__,
+ skip_ep0 ? "non-ep0" : "all");
+ for (i = skip_ep0; i < 16; ++i) {
+ usb_disable_endpoint(dev, i, true);
+ usb_disable_endpoint(dev, i + USB_DIR_IN, true);
+ }
+
/* getting rid of interfaces will disconnect
* any drivers bound to them (a key side effect)
*/
if (dev->state == USB_STATE_CONFIGURED)
usb_set_device_state(dev, USB_STATE_ADDRESS);
}
-
- dev_dbg(&dev->dev, "%s nuking %s URBs\n", __func__,
- skip_ep0 ? "non-ep0" : "all");
- for (i = skip_ep0; i < 16; ++i) {
- usb_disable_endpoint(dev, i, true);
- usb_disable_endpoint(dev, i + USB_DIR_IN, true);
- }
}
/**
intf->dev.groups = usb_interface_groups;
intf->dev.dma_mask = dev->dev.dma_mask;
INIT_WORK(&intf->reset_ws, __usb_queue_reset_device);
- intf->minor = -1;
device_initialize(&intf->dev);
mark_quiesced(intf);
dev_set_name(&intf->dev, "%d-%s:%d.%d",
/* Creative SB Audigy 2 NX */
{ USB_DEVICE(0x041e, 0x3020), .driver_info = USB_QUIRK_RESET_RESUME },
- /* Logitech Harmony 700-series */
- { USB_DEVICE(0x046d, 0xc122), .driver_info = USB_QUIRK_DELAY_INIT },
-
/* Philips PSC805 audio device */
{ USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME },
- /* Artisman Watchdog Dongle */
- { USB_DEVICE(0x04b4, 0x0526), .driver_info =
- USB_QUIRK_CONFIG_INTF_STRINGS },
-
/* Roland SC-8820 */
{ USB_DEVICE(0x0582, 0x0007), .driver_info = USB_QUIRK_RESET_RESUME },
/* X-Rite/Gretag-Macbeth Eye-One Pro display colorimeter */
{ USB_DEVICE(0x0971, 0x2000), .driver_info = USB_QUIRK_NO_SET_INTF },
- /* Broadcom BCM92035DGROM BT dongle */
- { USB_DEVICE(0x0a5c, 0x2021), .driver_info = USB_QUIRK_RESET_RESUME },
-
/* Action Semiconductor flash disk */
{ USB_DEVICE(0x10d6, 0x2200), .driver_info =
USB_QUIRK_STRING_FETCH_255 },
}
EXPORT_SYMBOL_GPL(usb_anchor_urb);
-/* Callers must hold anchor->lock */
-static void __usb_unanchor_urb(struct urb *urb, struct usb_anchor *anchor)
-{
- urb->anchor = NULL;
- list_del(&urb->anchor_list);
- usb_put_urb(urb);
- if (list_empty(&anchor->urb_list))
- wake_up(&anchor->wait);
-}
-
/**
* usb_unanchor_urb - unanchors an URB
* @urb: pointer to the urb to anchor
return;
spin_lock_irqsave(&anchor->lock, flags);
- /*
- * At this point, we could be competing with another thread which
- * has the same intention. To protect the urb from being unanchored
- * twice, only the winner of the race gets the job.
- */
- if (likely(anchor == urb->anchor))
- __usb_unanchor_urb(urb, anchor);
+ if (unlikely(anchor != urb->anchor)) {
+ /* we've lost the race to another thread */
+ spin_unlock_irqrestore(&anchor->lock, flags);
+ return;
+ }
+ urb->anchor = NULL;
+ list_del(&urb->anchor_list);
spin_unlock_irqrestore(&anchor->lock, flags);
+ usb_put_urb(urb);
+ if (list_empty(&anchor->urb_list))
+ wake_up(&anchor->wait);
}
EXPORT_SYMBOL_GPL(usb_unanchor_urb);
void usb_unlink_anchored_urbs(struct usb_anchor *anchor)
{
struct urb *victim;
+ unsigned long flags;
- while ((victim = usb_get_from_anchor(anchor)) != NULL) {
+ spin_lock_irqsave(&anchor->lock, flags);
+ while (!list_empty(&anchor->urb_list)) {
+ victim = list_entry(anchor->urb_list.prev, struct urb,
+ anchor_list);
+ usb_get_urb(victim);
+ spin_unlock_irqrestore(&anchor->lock, flags);
+ /* this will unanchor the URB */
usb_unlink_urb(victim);
usb_put_urb(victim);
+ spin_lock_irqsave(&anchor->lock, flags);
}
+ spin_unlock_irqrestore(&anchor->lock, flags);
}
EXPORT_SYMBOL_GPL(usb_unlink_anchored_urbs);
victim = list_entry(anchor->urb_list.next, struct urb,
anchor_list);
usb_get_urb(victim);
- __usb_unanchor_urb(victim, anchor);
+ spin_unlock_irqrestore(&anchor->lock, flags);
+ usb_unanchor_urb(victim);
} else {
+ spin_unlock_irqrestore(&anchor->lock, flags);
victim = NULL;
}
- spin_unlock_irqrestore(&anchor->lock, flags);
return victim;
}
while (!list_empty(&anchor->urb_list)) {
victim = list_entry(anchor->urb_list.prev, struct urb,
anchor_list);
- __usb_unanchor_urb(victim, anchor);
+ usb_get_urb(victim);
+ spin_unlock_irqrestore(&anchor->lock, flags);
+ /* this may free the URB */
+ usb_unanchor_urb(victim);
+ usb_put_urb(victim);
+ spin_lock_irqsave(&anchor->lock, flags);
}
spin_unlock_irqrestore(&anchor->lock, flags);
}
hcd = bus_to_hcd(udev->bus);
usb_destroy_configuration(udev);
+ /* Root hubs aren't real devices, so don't free HCD resources */
+ if (hcd->driver->free_dev && udev->parent)
+ hcd->driver->free_dev(hcd, udev);
usb_put_hcd(hcd);
kfree(udev->product);
kfree(udev->manufacturer);
} else {
disable_irq(gpio_to_irq(udc->vbus_pin));
}
- } else {
- /* gpio_request fail so use -EINVAL for gpio_is_valid */
- udc->vbus_pin = -EINVAL;
}
}
case USB_ENDPOINT_XFER_ISOC:
/* Calculate transactions needed for high bandwidth iso */
mult = (unsigned char)(1 + ((max >> 11) & 0x03));
- max = max & 0x7ff; /* bit 0~10 */
+ max = max & 0x8ff; /* bit 0~10 */
/* 3 transactions at most */
if (mult > 3)
goto en_done;
/* mandatory */
case OID_GEN_VENDOR_DESCRIPTION:
pr_debug("%s: OID_GEN_VENDOR_DESCRIPTION\n", __func__);
- if ( rndis_per_dev_params [configNr].vendorDescr ) {
- length = strlen (rndis_per_dev_params [configNr].vendorDescr);
- memcpy (outbuf,
- rndis_per_dev_params [configNr].vendorDescr, length);
- } else {
- outbuf[0] = 0;
- }
+ length = strlen (rndis_per_dev_params [configNr].vendorDescr);
+ memcpy (outbuf,
+ rndis_per_dev_params [configNr].vendorDescr, length);
retval = 0;
break;
list_move(&req->list, &port->read_pool);
}
- /* Push from tty to ldisc; without low_latency set this is handled by
- * a workqueue, so we won't get callbacks and can hold port_lock
+ /* Push from tty to ldisc; this is immediate with low_latency, and
+ * may trigger callbacks to this driver ... so drop the spinlock.
*/
if (tty && do_push) {
+ spin_unlock_irq(&port->port_lock);
tty_flip_buffer_push(tty);
+ wake_up_interruptible(&tty->read_wait);
+ spin_lock_irq(&port->port_lock);
+
+ /* tty may have been closed */
+ tty = port->port_tty;
}
port->open_count = 1;
port->openclose = false;
+ /* low_latency means ldiscs work in tasklet context, without
+ * needing a workqueue schedule ... easier to keep up.
+ */
+ tty->low_latency = 1;
+
/* if connected, start the I/O stream */
if (port->port_usb) {
struct gserial *gser = port->port_usb;
n_ports = 0;
tty_unregister_driver(gs_tty_driver);
- put_tty_driver(gs_tty_driver);
gs_tty_driver = NULL;
pr_debug("%s: cleaned up ttyGS* support\n", __func__);
*/
ehci->periodic_size = DEFAULT_I_TDPS;
INIT_LIST_HEAD(&ehci->cached_itd_list);
- INIT_LIST_HEAD(&ehci->cached_sitd_list);
if ((retval = ehci_mem_init(ehci, GFP_KERNEL)) < 0)
return retval;
/* endpoints can be iso streams. for now, we don't
* accelerate iso completions ... so spin a while.
*/
- if (qh->hw == NULL) {
+ if (qh->hw->hw_info1 == 0) {
ehci_vdbg (ehci, "iso delay\n");
goto idle_timeout;
}
tmp && tmp != qh;
tmp = tmp->qh_next.qh)
continue;
- /* periodic qh self-unlinks on empty, and a COMPLETING qh
- * may already be unlinked.
- */
- if (tmp)
- unlink_async(ehci, qh);
+ /* periodic qh self-unlinks on empty */
+ if (!tmp)
+ goto nogood;
+ unlink_async (ehci, qh);
/* FALL THROUGH */
case QH_STATE_UNLINK: /* wait for hw to finish? */
case QH_STATE_UNLINK_WAIT:
}
/* else FALL THROUGH */
default:
+nogood:
/* caller was supposed to have unlinked any requests;
* that's not our job. just leak this memory.
*/
/* manually resume the ports we suspended during bus_suspend() */
i = HCS_N_PORTS (ehci->hcs_params);
while (i--) {
- /* clear phy low power mode before resume */
- if (ehci->has_hostpc) {
- u32 __iomem *hostpc_reg =
- (u32 __iomem *)((u8 *)ehci->regs
- + HOSTPC0 + 4 * (i & 0xff));
- temp = ehci_readl(ehci, hostpc_reg);
- ehci_writel(ehci, temp & ~HOSTPC_PHCD,
- hostpc_reg);
- mdelay(5);
- }
temp = ehci_readl(ehci, &ehci->regs->port_status [i]);
temp &= ~(PORT_RWC_BITS | PORT_WAKE_BITS);
if (test_bit(i, &ehci->bus_suspended) &&
if (temp & PORT_SUSPEND) {
if ((temp & PORT_PE) == 0)
goto error;
- /* clear phy low power mode before resume */
- if (hostpc_reg) {
- temp1 = ehci_readl(ehci, hostpc_reg);
- ehci_writel(ehci, temp1 & ~HOSTPC_PHCD,
- hostpc_reg);
- mdelay(5);
- }
/* resume signaling for 20 msec */
temp &= ~(PORT_RWC_BITS | PORT_WAKE_BITS);
ehci_writel(ehci, temp | PORT_RESUME,
static void ehci_mem_cleanup (struct ehci_hcd *ehci)
{
- free_cached_lists(ehci);
+ free_cached_itd_list(ehci);
if (ehci->async)
qh_put (ehci->async);
ehci->async = NULL;
}
rv = usb_add_hcd(hcd, irq, 0);
- if (rv)
- goto err_ehci;
-
- return 0;
+ if (rv == 0)
+ return 0;
-err_ehci:
- if (ehci->has_amcc_usb23)
- iounmap(ehci->ohci_hcctrl_reg);
iounmap(hcd->regs);
err_ioremap:
irq_dispose_mapping(irq);
err_irq:
release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
+
+ if (ehci->has_amcc_usb23)
+ iounmap(ehci->ohci_hcctrl_reg);
err_rmr:
usb_put_hcd(hcd);
urb->interval);
}
- /* if dev->ep [epnum] is a QH, hw is set */
- } else if (unlikely (stream->hw != NULL)) {
+ /* if dev->ep [epnum] is a QH, info1.maxpacket is nonzero */
+ } else if (unlikely (stream->hw_info1 != 0)) {
ehci_dbg (ehci, "dev %s ep%d%s, not iso??\n",
urb->dev->devpath, epnum,
usb_pipein(urb->pipe) ? "in" : "out");
static inline void
itd_link (struct ehci_hcd *ehci, unsigned frame, struct ehci_itd *itd)
{
- union ehci_shadow *prev = &ehci->pshadow[frame];
- __hc32 *hw_p = &ehci->periodic[frame];
- union ehci_shadow here = *prev;
- __hc32 type = 0;
-
- /* skip any iso nodes which might belong to previous microframes */
- while (here.ptr) {
- type = Q_NEXT_TYPE(ehci, *hw_p);
- if (type == cpu_to_hc32(ehci, Q_TYPE_QH))
- break;
- prev = periodic_next_shadow(ehci, prev, type);
- hw_p = shadow_next_periodic(ehci, &here, type);
- here = *prev;
- }
-
- itd->itd_next = here;
- itd->hw_next = *hw_p;
- prev->itd = itd;
+ /* always prepend ITD/SITD ... only QH tree is order-sensitive */
+ itd->itd_next = ehci->pshadow [frame];
+ itd->hw_next = ehci->periodic [frame];
+ ehci->pshadow [frame].itd = itd;
itd->frame = frame;
wmb ();
- *hw_p = cpu_to_hc32(ehci, itd->itd_dma | Q_TYPE_ITD);
+ ehci->periodic[frame] = cpu_to_hc32(ehci, itd->itd_dma | Q_TYPE_ITD);
}
/* fit urb's itds into the selected schedule slot; activate as needed */
(stream->bEndpointAddress & USB_DIR_IN) ? "in" : "out");
}
iso_stream_put (ehci, stream);
-
+ /* OK to recycle this SITD now that its completion callback ran. */
done:
sitd->urb = NULL;
- if (ehci->clock_frame != sitd->frame) {
- /* OK to recycle this SITD now. */
- sitd->stream = NULL;
- list_move(&sitd->sitd_list, &stream->free_list);
- iso_stream_put(ehci, stream);
- } else {
- /* HW might remember this SITD, so we can't recycle it yet.
- * Move it to a safe place until a new frame starts.
- */
- list_move(&sitd->sitd_list, &ehci->cached_sitd_list);
- if (stream->refcount == 2) {
- /* If iso_stream_put() were called here, stream
- * would be freed. Instead, just prevent reuse.
- */
- stream->ep->hcpriv = NULL;
- stream->ep = NULL;
- }
- }
+ sitd->stream = NULL;
+ list_move(&sitd->sitd_list, &stream->free_list);
+ iso_stream_put(ehci, stream);
+
return retval;
}
/*-------------------------------------------------------------------------*/
-static void free_cached_lists(struct ehci_hcd *ehci)
+static void free_cached_itd_list(struct ehci_hcd *ehci)
{
struct ehci_itd *itd, *n;
- struct ehci_sitd *sitd, *sn;
list_for_each_entry_safe(itd, n, &ehci->cached_itd_list, itd_list) {
struct ehci_iso_stream *stream = itd->stream;
list_move(&itd->itd_list, &stream->free_list);
iso_stream_put(ehci, stream);
}
-
- list_for_each_entry_safe(sitd, sn, &ehci->cached_sitd_list, sitd_list) {
- struct ehci_iso_stream *stream = sitd->stream;
- sitd->stream = NULL;
- list_move(&sitd->sitd_list, &stream->free_list);
- iso_stream_put(ehci, stream);
- }
}
/*-------------------------------------------------------------------------*/
clock_frame = -1;
}
if (ehci->clock_frame != clock_frame) {
- free_cached_lists(ehci);
+ free_cached_itd_list(ehci);
ehci->clock_frame = clock_frame;
}
clock %= mod;
clock = now;
clock_frame = clock >> 3;
if (ehci->clock_frame != clock_frame) {
- free_cached_lists(ehci);
+ free_cached_itd_list(ehci);
ehci->clock_frame = clock_frame;
}
} else {
int next_uframe; /* scan periodic, start here */
unsigned periodic_sched; /* periodic activity count */
- /* list of itds & sitds completed while clock_frame was still active */
+ /* list of itds completed while clock_frame was still active */
struct list_head cached_itd_list;
- struct list_head cached_sitd_list;
unsigned clock_frame;
/* per root hub port */
clear_bit (action, &ehci->actions);
}
-static void free_cached_lists(struct ehci_hcd *ehci);
+static void free_cached_itd_list(struct ehci_hcd *ehci);
/*-------------------------------------------------------------------------*/
* acts like a qh would, if EHCI had them for ISO.
*/
struct ehci_iso_stream {
- /* first field matches ehci_hq, but is NULL */
- struct ehci_qh_hw *hw;
+ /* first two fields match QH, but info1 == 0 */
+ __hc32 hw_next;
+ __hc32 hw_info1;
u32 refcount;
u8 bEndpointAddress;
u16 wLength
) {
struct ohci_hcd *ohci = hcd_to_ohci (hcd);
- int ports = ohci->num_ports;
+ int ports = hcd_to_bus (hcd)->root_hub->maxchild;
u32 temp;
int retval = 0;
}
i2c_adap = i2c_get_adapter(2);
memset(&i2c_info, 0, sizeof(struct i2c_board_info));
- strlcpy(i2c_info.type, "isp1301_pnx", I2C_NAME_SIZE);
+ strlcpy(i2c_info.name, "isp1301_pnx", I2C_NAME_SIZE);
isp1301_i2c_client = i2c_new_probed_device(i2c_adap, &i2c_info,
normal_i2c);
i2c_put_adapter(i2c_adap);
out2:
clk_put(usb_clk);
out1:
- i2c_unregister_device(isp1301_i2c_client);
+ i2c_unregister_client(isp1301_i2c_client);
isp1301_i2c_client = NULL;
out_i2c_driver:
i2c_del_driver(&isp1301_driver);
pnx4008_unset_usb_bits();
clk_disable(usb_clk);
clk_put(usb_clk);
- i2c_unregister_device(isp1301_i2c_client);
+ i2c_unregister_client(isp1301_i2c_client);
isp1301_i2c_client = NULL;
i2c_del_driver(&isp1301_driver);
/* this function must be called with interrupt disabled */
static void free_usb_address(struct r8a66597 *r8a66597,
- struct r8a66597_device *dev, int reset)
+ struct r8a66597_device *dev)
{
int port;
dev->state = USB_STATE_DEFAULT;
r8a66597->address_map &= ~(1 << dev->address);
dev->address = 0;
- /*
- * Only when resetting USB, it is necessary to erase drvdata. When
-	 * a usb device with usb hub is disconnected, "dev->udev" is already
-	 * freed on usb_disconnect(). So we cannot access the data.
- */
- if (reset)
- dev_set_drvdata(&dev->udev->dev, NULL);
+ dev_set_drvdata(&dev->udev->dev, NULL);
list_del(&dev->device_list);
kfree(dev);
struct r8a66597_device *dev = r8a66597->root_hub[port].dev;
disable_r8a66597_pipe_all(r8a66597, dev);
- free_usb_address(r8a66597, dev, 0);
+ free_usb_address(r8a66597, dev);
start_root_hub_sampling(r8a66597, port, 0);
}
spin_lock_irqsave(&r8a66597->lock, flags);
dev = get_r8a66597_device(r8a66597, addr);
disable_r8a66597_pipe_all(r8a66597, dev);
- free_usb_address(r8a66597, dev, 0);
+ free_usb_address(r8a66597, dev);
put_child_connect_map(r8a66597, addr);
spin_unlock_irqrestore(&r8a66597->lock, flags);
}
rh->port |= (1 << USB_PORT_FEAT_RESET);
disable_r8a66597_pipe_all(r8a66597, dev);
- free_usb_address(r8a66597, dev, 1);
+ free_usb_address(r8a66597, dev);
r8a66597_mdfy(r8a66597, USBRST, USBRST | UACT,
get_dvstctr_reg(port));
uhci_hc_died(uhci);
uhci_scan_schedule(uhci);
spin_unlock_irq(&uhci->lock);
- synchronize_irq(hcd->irq);
del_timer_sync(&uhci->fsbr_timer);
release_uhci(uhci);
next = readl(base + ext_offset);
- if (ext_offset == XHCI_HCC_PARAMS_OFFSET) {
+ if (ext_offset == XHCI_HCC_PARAMS_OFFSET)
/* Find the first extended capability */
next = XHCI_HCC_EXT_CAPS(next);
- ext_offset = 0;
- } else {
+ else
/* Find the next extended capability */
next = XHCI_EXT_CAPS_NEXT(next);
- }
-
if (!next)
return 0;
/*
STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
}
-/*
- * Set the run bit and wait for the host to be running.
- */
-int xhci_start(struct xhci_hcd *xhci)
-{
- u32 temp;
- int ret;
-
- temp = xhci_readl(xhci, &xhci->op_regs->command);
- temp |= (CMD_RUN);
- xhci_dbg(xhci, "// Turn on HC, cmd = 0x%x.\n",
- temp);
- xhci_writel(xhci, temp, &xhci->op_regs->command);
-
- /*
- * Wait for the HCHalted Status bit to be 0 to indicate the host is
- * running.
- */
- ret = handshake(xhci, &xhci->op_regs->status,
- STS_HALT, 0, XHCI_MAX_HALT_USEC);
- if (ret == -ETIMEDOUT)
- xhci_err(xhci, "Host took too long to start, "
- "waited %u microseconds.\n",
- XHCI_MAX_HALT_USEC);
- return ret;
-}
-
/*
* Reset a halted HC, and set the internal HC state to HC_STATE_HALT.
*
{
u32 command;
u32 state;
- int ret;
state = xhci_readl(xhci, &xhci->op_regs->status);
if ((state & STS_HALT) == 0) {
/* XXX: Why does EHCI set this here? Shouldn't other code do this? */
xhci_to_hcd(xhci)->state = HC_STATE_HALT;
- ret = handshake(xhci, &xhci->op_regs->command,
- CMD_RESET, 0, 250 * 1000);
- if (ret)
- return ret;
-
- xhci_dbg(xhci, "Wait for controller to be ready for doorbell rings\n");
- /*
- * xHCI cannot write to any doorbells or operational registers other
- * than status until the "Controller Not Ready" flag is cleared.
- */
- return handshake(xhci, &xhci->op_regs->status, STS_CNR, 0, 250 * 1000);
+ return handshake(xhci, &xhci->op_regs->command, CMD_RESET, 0, 250 * 1000);
}
/*
if (NUM_TEST_NOOPS > 0)
doorbell = xhci_setup_one_noop(xhci);
- if (xhci_start(xhci)) {
- xhci_halt(xhci);
- return -ENODEV;
- }
-
+ temp = xhci_readl(xhci, &xhci->op_regs->command);
+ temp |= (CMD_RUN);
+ xhci_dbg(xhci, "// Turn on HC, cmd = 0x%x.\n",
+ temp);
+ xhci_writel(xhci, temp, &xhci->op_regs->command);
+ /* Flush PCI posted writes */
+ temp = xhci_readl(xhci, &xhci->op_regs->command);
xhci_dbg(xhci, "// @%p = 0x%x\n", &xhci->op_regs->command, temp);
if (doorbell)
(*doorbell)(xhci);
cmd_completion = &virt_dev->cmd_completion;
cmd_status = &virt_dev->cmd_status;
}
- init_completion(cmd_completion);
if (!ctx_change)
ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
kfree(virt_ep->stopped_td);
xhci_ring_cmd_db(xhci);
}
- virt_ep->stopped_td = NULL;
- virt_ep->stopped_trb = NULL;
spin_unlock_irqrestore(&xhci->lock, flags);
if (ret)
return EP_INTERVAL(interval);
}
-/* The "Mult" field in the endpoint context is only set for SuperSpeed devices.
- * High speed endpoint descriptors can define "the number of additional
- * transaction opportunities per microframe", but that goes in the Max Burst
- * endpoint context field.
- */
-static inline u32 xhci_get_endpoint_mult(struct usb_device *udev,
- struct usb_host_endpoint *ep)
-{
- if (udev->speed != USB_SPEED_SUPER || !ep->ss_ep_comp)
- return 0;
- return ep->ss_ep_comp->desc.bmAttributes;
-}
-
static inline u32 xhci_get_endpoint_type(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
return type;
}
-/* Return the maximum endpoint service interval time (ESIT) payload.
- * Basically, this is the maxpacket size, multiplied by the burst size
- * and mult size.
- */
-static inline u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
- struct usb_device *udev,
- struct usb_host_endpoint *ep)
-{
- int max_burst;
- int max_packet;
-
- /* Only applies for interrupt or isochronous endpoints */
- if (usb_endpoint_xfer_control(&ep->desc) ||
- usb_endpoint_xfer_bulk(&ep->desc))
- return 0;
-
- if (udev->speed == USB_SPEED_SUPER) {
- if (ep->ss_ep_comp)
- return ep->ss_ep_comp->desc.wBytesPerInterval;
- xhci_warn(xhci, "WARN no SS endpoint companion descriptor.\n");
- /* Assume no bursts, no multiple opportunities to send. */
- return ep->desc.wMaxPacketSize;
- }
-
- max_packet = ep->desc.wMaxPacketSize & 0x3ff;
- max_burst = (ep->desc.wMaxPacketSize & 0x1800) >> 11;
- /* A 0 in max burst means 1 transfer per ESIT */
- return max_packet * (max_burst + 1);
-}
-
int xhci_endpoint_init(struct xhci_hcd *xhci,
struct xhci_virt_device *virt_dev,
struct usb_device *udev,
struct xhci_ring *ep_ring;
unsigned int max_packet;
unsigned int max_burst;
- u32 max_esit_payload;
ep_index = xhci_get_endpoint_index(&ep->desc);
ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
ep_ctx->deq = ep_ring->first_seg->dma | ep_ring->cycle_state;
ep_ctx->ep_info = xhci_get_endpoint_interval(udev, ep);
- ep_ctx->ep_info |= EP_MULT(xhci_get_endpoint_mult(udev, ep));
/* FIXME dig Mult and streams info out of ep companion desc */
default:
BUG();
}
- max_esit_payload = xhci_get_max_esit_payload(xhci, udev, ep);
- ep_ctx->tx_info = MAX_ESIT_PAYLOAD_FOR_EP(max_esit_payload);
-
- /*
- * XXX no idea how to calculate the average TRB buffer length for bulk
- * endpoints, as the driver gives us no clue how big each scatter gather
- * list entry (or buffer) is going to be.
- *
- * For isochronous and interrupt endpoints, we set it to the max
- * available, until we have new API in the USB core to allow drivers to
- * declare how much bandwidth they actually need.
- *
- * Normally, it would be calculated by taking the total of the buffer
- * lengths in the TD and then dividing by the number of TRBs in a TD,
- * including link TRBs, No-op TRBs, and Event data TRBs. Since we don't
- * use Event Data TRBs, and we don't chain in a link TRB on short
- * transfers, we're basically dividing by 1.
- */
- ep_ctx->tx_info |= AVG_TRB_LENGTH_FOR_EP(max_esit_payload);
-
/* FIXME Debug endpoint context */
return 0;
}
*seg = (*seg)->next;
*trb = ((*seg)->trbs);
} else {
- (*trb)++;
+ *trb = (*trb)++;
}
}
int i;
union xhci_trb *enq = ring->enqueue;
struct xhci_segment *enq_seg = ring->enq_seg;
- struct xhci_segment *cur_seg;
- unsigned int left_on_ring;
/* Check if ring is empty */
- if (enq == ring->dequeue) {
- /* Can't use link trbs */
- left_on_ring = TRBS_PER_SEGMENT - 1;
- for (cur_seg = enq_seg->next; cur_seg != enq_seg;
- cur_seg = cur_seg->next)
- left_on_ring += TRBS_PER_SEGMENT - 1;
-
- /* Always need one TRB free in the ring. */
- left_on_ring -= 1;
- if (num_trbs > left_on_ring) {
- xhci_warn(xhci, "Not enough room on ring; "
- "need %u TRBs, %u TRBs left\n",
- num_trbs, left_on_ring);
- return 0;
- }
+ if (enq == ring->dequeue)
return 1;
- }
/* Make sure there's an extra empty TRB available */
for (i = 0; i <= num_trbs; ++i) {
if (enq == ring->dequeue)
while (cur_seg->trbs > trb ||
&cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) {
generic_trb = &cur_seg->trbs[TRBS_PER_SEGMENT - 1].generic;
- if ((generic_trb->field[3] & TRB_TYPE_BITMASK) ==
- TRB_TYPE(TRB_LINK) &&
+ if (TRB_TYPE(generic_trb->field[3]) == TRB_LINK &&
(generic_trb->field[3] & LINK_TOGGLE))
*cycle_state = ~(*cycle_state) & 0x1;
cur_seg = cur_seg->next;
BUG();
trb = &state->new_deq_ptr->generic;
- if ((trb->field[3] & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK) &&
+ if (TRB_TYPE(trb->field[3]) == TRB_LINK &&
(trb->field[3] & LINK_TOGGLE))
state->new_cycle_state = ~(state->new_cycle_state) & 0x1;
next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);
/* Otherwise just ring the doorbell to restart the ring */
ring_ep_doorbell(xhci, slot_id, ep_index);
}
- ep->stopped_td = NULL;
- ep->stopped_trb = NULL;
/*
* Drop the lock and complete the URBs in the cancelled TD list.
ep->stopped_td = td;
ep->stopped_trb = event_trb;
-
xhci_queue_reset_ep(xhci, slot_id, ep_index);
xhci_cleanup_stalled_ring(xhci, td->urb->dev, ep_index);
-
- ep->stopped_td = NULL;
- ep->stopped_trb = NULL;
-
xhci_ring_cmd_db(xhci);
goto td_cleanup;
default:
for (cur_trb = ep_ring->dequeue, cur_seg = ep_ring->deq_seg;
cur_trb != event_trb;
next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) {
- if ((cur_trb->generic.field[3] &
- TRB_TYPE_BITMASK) != TRB_TYPE(TRB_TR_NOOP) &&
- (cur_trb->generic.field[3] &
- TRB_TYPE_BITMASK) != TRB_TYPE(TRB_LINK))
+ if (TRB_TYPE(cur_trb->generic.field[3]) != TRB_TR_NOOP &&
+ TRB_TYPE(cur_trb->generic.field[3]) != TRB_LINK)
td->urb->actual_length +=
TRB_LEN(cur_trb->generic.field[2]);
}
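
The three hunks above change only how the TRB type in field[3] is tested. Assuming the rewritten TRB_TYPE() extracts the type code from bits 15:10 (the left-hand form instead masks the field and compares it against a shifted code), the two checks below are equivalent; the constants are written out here rather than taken from the driver's headers:

#include <stdint.h>

#define TRB_TYPE_BITMASK	0xfc00u	/* bits 15:10 of the control dword */
#define TRB_LINK		6u	/* Link TRB type code */

/* Old form: shift the type code up and compare against the masked field. */
static int is_link_trb_shifted(uint32_t field3)
{
	return (field3 & TRB_TYPE_BITMASK) == (TRB_LINK << 10);
}

/* New form in these hunks: pull the type code down and compare it directly. */
static int is_link_trb_extracted(uint32_t field3)
{
	return ((field3 & TRB_TYPE_BITMASK) >> 10) == TRB_LINK;
}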
#define MAX_PACKET_MASK (0xffff << 16)
#define MAX_PACKET_DECODED(p) (((p) >> 16) & 0xffff)
-/* tx_info bitmasks */
-#define AVG_TRB_LENGTH_FOR_EP(p) ((p) & 0xffff)
-#define MAX_ESIT_PAYLOAD_FOR_EP(p) (((p) & 0xffff) << 16)
-
/**
* struct xhci_input_control_context
return read_port(dev, attr, buf, 1, CYPRESS_READ_PORT_ID1);
}
-static DEVICE_ATTR(port0, S_IRUGO | S_IWUSR, get_port0_handler, set_port0_handler);
+static DEVICE_ATTR(port0, S_IWUGO | S_IRUGO,
+ get_port0_handler, set_port0_handler);
-static DEVICE_ATTR(port1, S_IRUGO | S_IWUSR, get_port1_handler, set_port1_handler);
+static DEVICE_ATTR(port1, S_IWUGO | S_IRUGO,
+ get_port1_handler, set_port1_handler);
static int cypress_probe(struct usb_interface *interface,
/* needed for power consumption */
struct usb_config_descriptor *cfg_descriptor = &dev->udev->actconfig->desc;
- memset(&info, 0, sizeof(info));
/* directly from the descriptor */
info.vendor = le16_to_cpu(dev->udev->descriptor.idVendor);
info.product = dev->product_id;
}
if (!sisusb->devinit) {
- if (sisusb->sisusb_dev->speed == USB_SPEED_HIGH ||
- sisusb->sisusb_dev->speed == USB_SPEED_SUPER) {
+ if (sisusb->sisusb_dev->speed == USB_SPEED_HIGH) {
if (sisusb_init_gfxdevice(sisusb, 0)) {
mutex_unlock(&sisusb->lock);
dev_err(&sisusb->sisusb_dev->dev, "Failed to initialize device\n");
#else
x.sisusb_conactive = 0;
#endif
- memset(x.sisusb_reserved, 0, sizeof(x.sisusb_reserved));
if (copy_to_user((void __user *)arg, &x, sizeof(x)))
retval = -EFAULT;
sisusb->present = 1;
- if (dev->speed == USB_SPEED_HIGH || dev->speed == USB_SPEED_SUPER) {
+ if (dev->speed == USB_SPEED_HIGH) {
int initscreen = 1;
#ifdef INCL_SISUSB_CON
if (sisusb_first_vc > 0 &&
{ USB_DEVICE(0x0711, 0x0902) },
{ USB_DEVICE(0x0711, 0x0903) },
{ USB_DEVICE(0x0711, 0x0918) },
- { USB_DEVICE(0x0711, 0x0920) },
{ USB_DEVICE(0x182d, 0x021c) },
{ USB_DEVICE(0x182d, 0x0269) },
{ }
return count;
}
-static DEVICE_ATTR(speed, S_IRUGO | S_IWUSR, show_speed, set_speed);
+static DEVICE_ATTR(speed, S_IWUGO | S_IRUGO, show_speed, set_speed);
static int tv_probe(struct usb_interface *interface,
const struct usb_device_id *id)
change_color(led); \
return count; \
} \
-static DEVICE_ATTR(value, S_IRUGO | S_IWUSR, show_##value, set_##value);
+static DEVICE_ATTR(value, S_IWUGO | S_IRUGO, show_##value, set_##value);
show_set(blue);
show_set(red);
show_set(green);
\
return count; \
} \
-static DEVICE_ATTR(name, S_IRUGO | S_IWUSR, show_attr_##name, set_attr_##name);
+static DEVICE_ATTR(name, S_IWUGO | S_IRUGO, show_attr_##name, set_attr_##name);
static ssize_t show_attr_text(struct device *dev,
struct device_attribute *attr, char *buf)
return count;
}
-static DEVICE_ATTR(text, S_IRUGO | S_IWUSR, show_attr_text, set_attr_text);
+static DEVICE_ATTR(text, S_IWUGO | S_IRUGO, show_attr_text, set_attr_text);
static ssize_t show_attr_decimals(struct device *dev,
struct device_attribute *attr, char *buf)
return count;
}
-static DEVICE_ATTR(decimals, S_IRUGO | S_IWUSR, show_attr_decimals, set_attr_decimals);
+static DEVICE_ATTR(decimals, S_IWUGO | S_IRUGO,
+ show_attr_decimals, set_attr_decimals);
static ssize_t show_attr_textmode(struct device *dev,
struct device_attribute *attr, char *buf)
return -EINVAL;
}
-static DEVICE_ATTR(textmode, S_IRUGO | S_IWUSR, show_attr_textmode, set_attr_textmode);
+static DEVICE_ATTR(textmode, S_IWUGO | S_IRUGO,
+ show_attr_textmode, set_attr_textmode);
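
The DEVICE_ATTR() hunks above all trade S_IRUGO | S_IWUSR for S_IWUGO | S_IRUGO; the practical difference is the resulting sysfs file mode. A small standalone sketch with the kernel shorthands expanded (values as defined in <linux/stat.h>):

#include <stdio.h>

#define S_IRUGO 0444	/* read for user, group and other  */
#define S_IWUSR 0200	/* write for the owner only        */
#define S_IWUGO 0222	/* write for user, group and other */

int main(void)
{
	printf("S_IRUGO | S_IWUSR = %04o\n", (unsigned)(S_IRUGO | S_IWUSR)); /* 0644 */
	printf("S_IWUGO | S_IRUGO = %04o\n", (unsigned)(S_IWUGO | S_IRUGO)); /* 0666, world-writable */
	return 0;
}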
MYDEV_ATTR_SIMPLE_UNSIGNED(powered, update_display_powered);
break;
}
}
+ simple_free_urb (urb);
ctx->pending--;
if (ctx->pending == 0) {
}
simple_free_urb (urbs [i]);
- urbs[i] = NULL;
context.pending--;
context.submit_error = 1;
break;
wait_for_completion (&context.done);
- for (i = 0; i < param->sglen; i++) {
- if (urbs[i])
- simple_free_urb(urbs[i]);
- }
/*
* Isochronous transfers are expected to fail sometimes. As an
* arbitrary limit, we will report an error if any submissions
mutex_lock(&rp->fetch_lock);
spin_lock_irqsave(&rp->b_lock, flags);
- mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+ mon_free_buff(rp->b_vec, size/CHUNK_SIZE);
kfree(rp->b_vec);
rp->b_vec = vec;
rp->b_size = size;
usb_nop_xceiv_register();
musb->xceiv = otg_get_transceiver();
- if (!musb->xceiv) {
- gpio_free(musb->config->gpio_vrsel);
+ if (!musb->xceiv)
return -ENODEV;
- }
if (ANOMALY_05000346) {
bfin_write_USB_APHY_CALIB(ANOMALY_05000346_value);
{
const u8 epnum = req->epnum;
struct usb_request *request = &req->request;
- struct musb_ep *musb_ep;
+ struct musb_ep *musb_ep = &musb->endpoints[epnum].ep_out;
void __iomem *epio = musb->endpoints[epnum].regs;
unsigned fifo_count = 0;
- u16 len;
+ u16 len = musb_ep->packet_sz;
u16 csr = musb_readw(epio, MUSB_RXCSR);
- struct musb_hw_ep *hw_ep = &musb->endpoints[epnum];
-
- if (hw_ep->is_shared_fifo)
- musb_ep = &hw_ep->ep_in;
- else
- musb_ep = &hw_ep->ep_out;
-
- len = musb_ep->packet_sz;
/* We shouldn't get here while DMA is active, but we do... */
if (dma_channel_status(musb_ep->dma) == MUSB_DMA_STATUS_BUSY) {
u16 csr;
struct usb_request *request;
void __iomem *mbase = musb->mregs;
- struct musb_ep *musb_ep;
+ struct musb_ep *musb_ep = &musb->endpoints[epnum].ep_out;
void __iomem *epio = musb->endpoints[epnum].regs;
struct dma_channel *dma;
- struct musb_hw_ep *hw_ep = &musb->endpoints[epnum];
-
- if (hw_ep->is_shared_fifo)
- musb_ep = &hw_ep->ep_in;
- else
- musb_ep = &hw_ep->ep_out;
musb_ep_select(mbase, epnum);
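
The two hunks above drop the shared-FIFO check when picking the logical endpoint for the RX path. A minimal sketch of the selection the removed lines performed, using an assumed struct shape rather than the driver's own types:

/* Sketch only: on a shared-FIFO hardware endpoint the IN side carries the
 * endpoint state, otherwise the OUT side is used, as the removed lines did. */
struct ep_sketch {
	unsigned int packet_sz;
};

struct hw_ep_sketch {
	int is_shared_fifo;
	struct ep_sketch ep_in;
	struct ep_sketch ep_out;
};

static struct ep_sketch *rx_ep_for(struct hw_ep_sketch *hw_ep)
{
	return hw_ep->is_shared_fifo ? &hw_ep->ep_in : &hw_ep->ep_out;
}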
/*
* Context: controller locked, IRQs blocked.
*/
-void musb_ep_restart(struct musb *musb, struct musb_request *req)
+static void musb_ep_restart(struct musb *musb, struct musb_request *req)
{
DBG(3, "<== %s request %p len %u on hw_ep%d\n",
req->tx ? "TX/IN" : "RX/OUT",
extern int musb_gadget_set_halt(struct usb_ep *ep, int value);
-extern void musb_ep_restart(struct musb *, struct musb_request *);
-
#endif /* __MUSB_GADGET_H */
ctrlrequest->wIndex & 0x0f;
struct musb_ep *musb_ep;
struct musb_hw_ep *ep;
- struct musb_request *request;
void __iomem *regs;
int is_in;
u16 csr;
csr);
}
- /* Maybe start the first request in the queue */
- request = to_musb_request(
- next_request(musb_ep));
- if (!musb_ep->busy && request) {
- DBG(3, "restarting the request\n");
- musb_ep_restart(musb, request);
- }
-
/* select ep0 again */
musb_ep_select(mbase, 0);
handled = 1;
static int debug;
static struct usb_device_id id_table [] = {
- { USB_DEVICE(0x045B, 0x0053) }, /* Renesas RX610 RX-Stick */
{ USB_DEVICE(0x0471, 0x066A) }, /* AKTAKOM ACE-1001 cable */
{ USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
{ USB_DEVICE(0x0745, 0x1000) }, /* CipherLab USB CCD Barcode Scanner 1000 */
{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
- { USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
- { USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
{ USB_DEVICE(0x0FCF, 0x1004) }, /* Dynastream ANT2USB */
{ USB_DEVICE(0x0FCF, 0x1006) }, /* Dynastream ANT development board */
{ USB_DEVICE(0x10C4, 0x1601) }, /* Arkham Technology DS101 Adapter */
{ USB_DEVICE(0x10C4, 0x800A) }, /* SPORTident BSM7-D-USB main station */
{ USB_DEVICE(0x10C4, 0x803B) }, /* Pololu USB-serial converter */
- { USB_DEVICE(0x10C4, 0x8044) }, /* Cygnal Debug Adapter */
- { USB_DEVICE(0x10C4, 0x804E) }, /* Software Bisque Paramount ME build-in converter */
{ USB_DEVICE(0x10C4, 0x8053) }, /* Enfora EDG1228 */
{ USB_DEVICE(0x10C4, 0x8054) }, /* Enfora GSM2228 */
{ USB_DEVICE(0x10C4, 0x8066) }, /* Argussoft In-System Programmer */
- { USB_DEVICE(0x10C4, 0x806F) }, /* IMS USB to RS422 Converter Cable */
{ USB_DEVICE(0x10C4, 0x807A) }, /* Crumb128 board */
{ USB_DEVICE(0x10C4, 0x80CA) }, /* Degree Controls Inc */
{ USB_DEVICE(0x10C4, 0x80DD) }, /* Tracient RFID */
{ USB_DEVICE(0x10C4, 0x8115) }, /* Arygon NFC/Mifare Reader */
{ USB_DEVICE(0x10C4, 0x813D) }, /* Burnside Telecom Deskmobile */
{ USB_DEVICE(0x10C4, 0x813F) }, /* Tams Master Easy Control */
- { USB_DEVICE(0x10C4, 0x8149) }, /* West Mountain Radio Computerized Battery Analyzer */
{ USB_DEVICE(0x10C4, 0x814A) }, /* West Mountain Radio RIGblaster P&P */
{ USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
- { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
{ USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
- { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */
{ USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */
{ USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
{ USB_DEVICE(0x10C4, 0x81AC) }, /* MSD Dash Hawk */
- { USB_DEVICE(0x10C4, 0x81AD) }, /* INSYS USB Modem */
{ USB_DEVICE(0x10C4, 0x81C8) }, /* Lipowsky Industrie Elektronik GmbH, Baby-JTAG */
{ USB_DEVICE(0x10C4, 0x81E2) }, /* Lipowsky Industrie Elektronik GmbH, Baby-LIN */
{ USB_DEVICE(0x10C4, 0x81E7) }, /* Aerocomm Radio */
- { USB_DEVICE(0x10C4, 0x81E8) }, /* Zephyr Bioharness */
{ USB_DEVICE(0x10C4, 0x81F2) }, /* C1007 HF band RFID controller */
{ USB_DEVICE(0x10C4, 0x8218) }, /* Lipowsky Industrie Elektronik GmbH, HARP-1 */
{ USB_DEVICE(0x10C4, 0x822B) }, /* Modem EDGE(GSM) Comander 2 */
	{ USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */
- { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesys ETRX2USB */
+ { USB_DEVICE(0x10c4, 0x8293) }, /* Telegesys ETRX2USB */
{ USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */
{ USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */
{ USB_DEVICE(0x10C4, 0x8382) }, /* Cygnal Integrated Products, Inc. */
{ USB_DEVICE(0x10C4, 0x83A8) }, /* Amber Wireless AMB2560 */
{ USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
{ USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */
- { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */
{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
- { USB_DEVICE(0x10C4, 0xEA71) }, /* Infinity GPS-MIC-1 Radio Monophone */
{ USB_DEVICE(0x10C4, 0xF001) }, /* Elan Digital Systems USBscope50 */
{ USB_DEVICE(0x10C4, 0xF002) }, /* Elan Digital Systems USBwave12 */
{ USB_DEVICE(0x10C4, 0xF003) }, /* Elan Digital Systems USBpulse100 */
{ USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */
{ USB_DEVICE(0x166A, 0x0303) }, /* Clipsal 5500PCU C-Bus USB interface */
{ USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */
- { USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */
- { USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
- { USB_DEVICE(0x16DC, 0x0012) }, /* W-IE-NE-R Plein & Baus GmbH MPOD Multi Channel Power Supply */
- { USB_DEVICE(0x16DC, 0x0015) }, /* W-IE-NE-R Plein & Baus GmbH CML Control, Monitoring and Data Logger */
- { USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
- { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
- { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
{ USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */
{ } /* Terminating Entry */
};
#define BITS_STOP_2 0x0002
/* CP210X_SET_BREAK */
-#define BREAK_ON 0x0001
-#define BREAK_OFF 0x0000
+#define BREAK_ON 0x0000
+#define BREAK_OFF 0x0001
/* CP210X_(SET_MHS|GET_MDMSTS) */
#define CONTROL_DTR 0x0001
#include <linux/serial.h>
#include <linux/usb/serial.h>
#include "ftdi_sio.h"
-#include "ftdi_sio_ids.h"
/*
* Version Information
*/
#define DRIVER_VERSION "v1.5.0"
-#define DRIVER_AUTHOR "Greg Kroah-Hartman <greg@kroah.com>, Bill Ryder <bryder@sgi.com>, Kuba Ober <kuba@mareimbrium.org>, Andreas Mohr"
+#define DRIVER_AUTHOR "Greg Kroah-Hartman <greg@kroah.com>, Bill Ryder <bryder@sgi.com>, Kuba Ober <kuba@mareimbrium.org>"
#define DRIVER_DESC "USB FTDI Serial Converters Driver"
static int debug;
-/*
- * Device ID not listed? Test via module params product/vendor or
- * /sys/bus/usb/ftdi_sio/new_id, then send patch/report!
- */
static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_AMC232_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANUSB_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANDAPTER_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_NXTCAM_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_0_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_1_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_2_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_5_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_6_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SCS_DEVICE_7_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_USINT_CAT_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_USINT_WKEY_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_USINT_RS232_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ACTZWAVE_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IRTRANS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IPLUS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_OPENDCC_SNIFFER_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_OPENDCC_THROTTLE_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_OPENDCC_GATEWAY_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_OPENDCC_GBM_PID) },
{ USB_DEVICE(INTERBIOMETRICS_VID, INTERBIOMETRICS_IOBOARD_PID) },
{ USB_DEVICE(INTERBIOMETRICS_VID, INTERBIOMETRICS_MINI_IOBOARD_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_SPROG_II) },
- { USB_DEVICE(FTDI_VID, FTDI_LENZ_LIUSB_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_XF_632_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_XF_634_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_XF_547_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_MTXORB_5_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_MTXORB_6_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_R2000KU_TRUE_RNG) },
- { USB_DEVICE(FTDI_VID, FTDI_VARDAAN_PID) },
{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0100_PID) },
{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0101_PID) },
{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0102_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PEDO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PROD_PID) },
/*
- * ELV devices:
+ * Due to many user requests for multiple ELV devices we enable
+ * them by default.
*/
- { USB_DEVICE(FTDI_VID, FTDI_ELV_USR_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_MSM1_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_KL100_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_WS550_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_EC3000_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_WS888_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_TWS550_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_FEM_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_CLI7000_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_PPS7330_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_TFM100_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_PCK100_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_RFP500_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_FS20SIG_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_UTP8_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_WS300PC_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_WS444PC_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_FHZ1300PC_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_EM1010PC_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_WS500_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_HS485_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_UMS100_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_TFD128_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_FM3RX_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_ELV_WS777_PID) },
{ USB_DEVICE(FTDI_VID, LINX_SDMUSBQSS_PID) },
{ USB_DEVICE(FTDI_VID, LINX_MASTERDEVEL2_PID) },
{ USB_DEVICE(FTDI_VID, LINX_FUTURE_0_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_OCEANIC_PID) },
{ USB_DEVICE(TTI_VID, TTI_QL355P_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_RM_CANVIEW_PID) },
- { USB_DEVICE(CONTEC_VID, CONTEC_COM1USBH_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USOTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USO9ML2_PID) },
{ USB_DEVICE(EVOLUTION_VID, EVOLUTION_ER1_PID) },
{ USB_DEVICE(EVOLUTION_VID, EVO_HYBRID_PID) },
{ USB_DEVICE(EVOLUTION_VID, EVO_RCM4_PID) },
- { USB_DEVICE(CONTEC_VID, CONTEC_COM1USBH_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ARTEMIS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ATIK_ATK16_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ATIK_ATK16C_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_RRCIRKITS_LOCOBUFFER_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ASK_RDR400_PID) },
{ USB_DEVICE(ICOM_ID1_VID, ICOM_ID1_PID) },
+ { USB_DEVICE(PAPOUCH_VID, PAPOUCH_TMU_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
{ USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
- { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_SERIAL_VX7_PID) },
- { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_CT29B_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_MAXSTREAM_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_PHI_FISCO_PID) },
{ USB_DEVICE(TML_VID, TML_USB_SERIAL_PID) },
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ USB_DEVICE(RATOC_VENDOR_ID, RATOC_PRODUCT_ID_USB60F) },
{ USB_DEVICE(FTDI_VID, FTDI_REU_TINY_PID) },
-
- /* Papouch devices based on FTDI chip */
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB485_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_AP485_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB422_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB485_2_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_AP485_2_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB422_2_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB485S_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB485C_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_LEC_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SB232_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_TMU_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_IRAMP_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_DRAK5_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO8x8_PID) },
{ USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO4x4_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO2x2_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO10x1_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO30x3_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO60x3_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO2x16_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_QUIDO3x32_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_DRAK6_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_UPSUSB_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_MU_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_SIMUKEY_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_AD4USB_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_GMUX_PID) },
- { USB_DEVICE(PAPOUCH_VID, PAPOUCH_GMSR_PID) },
-
{ USB_DEVICE(FTDI_VID, FTDI_DOMINTELL_DGQG_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_DOMINTELL_DUSB_PID) },
{ USB_DEVICE(ALTI2_VID, ALTI2_N3_PID) },
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ USB_DEVICE(FTDI_VID, HAMEG_HO820_PID) },
{ USB_DEVICE(FTDI_VID, HAMEG_HO870_PID) },
- { USB_DEVICE(FTDI_VID, MJSG_GENERIC_PID) },
- { USB_DEVICE(FTDI_VID, MJSG_SR_RADIO_PID) },
- { USB_DEVICE(FTDI_VID, MJSG_HD_RADIO_PID) },
- { USB_DEVICE(FTDI_VID, MJSG_XM_RADIO_PID) },
- { USB_DEVICE(FTDI_VID, XVERVE_SIGNALYZER_ST_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- { USB_DEVICE(FTDI_VID, XVERVE_SIGNALYZER_SLITE_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- { USB_DEVICE(FTDI_VID, XVERVE_SIGNALYZER_SH2_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- { USB_DEVICE(FTDI_VID, XVERVE_SIGNALYZER_SH4_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- { USB_DEVICE(FTDI_VID, SEGWAY_RMP200_PID) },
- { USB_DEVICE(FTDI_VID, ACCESIO_COM4SM_PID) },
- { USB_DEVICE(IONICS_VID, IONICS_PLUGCOMPUTER_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_24_MASTER_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_PC_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_USB_DMX_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_MIDI_TIMECODE_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_MINI_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_MAXI_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_MEDIA_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_CHAMSYS_WING_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_SCIENCESCOPE_LOGBOOKML_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_SCIENCESCOPE_LS_LOGBOOK_PID) },
- { USB_DEVICE(FTDI_VID, FTDI_SCIENCESCOPE_HS_LOGBOOK_PID) },
- { USB_DEVICE(QIHARDWARE_VID, MILKYMISTONE_JTAGSERIAL_PID),
- .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ }, /* Optional parameter entry */
{ } /* Terminating entry */
};
}
/* set max packet size based on descriptor */
- priv->max_packet_size = le16_to_cpu(ep_desc->wMaxPacketSize);
+ priv->max_packet_size = ep_desc->wMaxPacketSize;
dev_info(&udev->dev, "Setting MaxPacketSize %d\n", priv->max_packet_size);
}
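
The left-hand side of this hunk converts wMaxPacketSize with le16_to_cpu() because USB descriptor fields are little-endian on the wire; dropping the conversion only happens to work on little-endian hosts. A standalone sketch of the same conversion done by hand:

#include <stdint.h>

/* Sketch only: decode a little-endian 16-bit descriptor field (such as
 * wMaxPacketSize) into host order, which is what le16_to_cpu() does for
 * the driver. */
static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}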
/*
- * Driver definitions for the FTDI USB Single Port Serial Converter -
+ * Definitions for the FTDI USB Single Port Serial Converter -
* known as FTDI_SIO (Serial Input/Output application of the chipset)
*
- * For USB vendor/product IDs (VID/PID), please see ftdi_sio_ids.h
- *
- *
 * The example I have is known as the USC-1000, which is available from
 * http://www.dse.co.nz - cat no XH4214. It looks similar to this:
 * http://www.dansdata.com/usbser.htm but I can't be sure. There are other
* Bill Ryder - bryder@sgi.com formerly of Silicon Graphics, Inc.- wrote the
* FTDI_SIO implementation.
*
+ * Philipp Gühring - pg@futureware.at - added the Device ID of the USB relais
+ * from Rudolf Gugler
+ *
+ */
+
+#define FTDI_VID 0x0403 /* Vendor Id */
+#define FTDI_SIO_PID 0x8372 /* Product Id SIO application of 8U100AX */
+#define FTDI_8U232AM_PID 0x6001 /* Similar device to SIO above */
+#define FTDI_8U232AM_ALT_PID 0x6006 /* FTDI's alternate PID for above */
+#define FTDI_8U2232C_PID 0x6010 /* Dual channel device */
+#define FTDI_232RL_PID 0xFBFA /* Product ID for FT232RL */
+#define FTDI_4232H_PID 0x6011 /* Quad channel hi-speed device */
+#define FTDI_RELAIS_PID 0xFA10 /* Relais device from Rudolf Gugler */
+#define FTDI_NF_RIC_VID 0x0DCD /* Vendor Id */
+#define FTDI_NF_RIC_PID 0x0001 /* Product Id */
+#define FTDI_USBX_707_PID 0xF857 /* ADSTech IR Blaster USBX-707 */
+
+/* Larsen and Brusgaard AltiTrack/USBtrack */
+#define LARSENBRUSGAARD_VID 0x0FD8
+#define LB_ALTITRACK_PID 0x0001
+
+/* www.canusb.com Lawicel CANUSB device */
+#define FTDI_CANUSB_PID 0xFFA8 /* Product Id */
+
+/* AlphaMicro Components AMC-232USB01 device */
+#define FTDI_AMC232_PID 0xFF00 /* Product Id */
+
+/* www.candapter.com Ewert Energy Systems CANdapter device */
+#define FTDI_CANDAPTER_PID 0x9F80 /* Product Id */
+
+/* SCS HF Radio Modems PID's (http://www.scs-ptc.com) */
+/* the VID is the standard ftdi vid (FTDI_VID) */
+#define FTDI_SCS_DEVICE_0_PID 0xD010 /* SCS PTC-IIusb */
+#define FTDI_SCS_DEVICE_1_PID 0xD011 /* SCS Tracker / DSP TNC */
+#define FTDI_SCS_DEVICE_2_PID 0xD012
+#define FTDI_SCS_DEVICE_3_PID 0xD013
+#define FTDI_SCS_DEVICE_4_PID 0xD014
+#define FTDI_SCS_DEVICE_5_PID 0xD015
+#define FTDI_SCS_DEVICE_6_PID 0xD016
+#define FTDI_SCS_DEVICE_7_PID 0xD017
+
+/* ACT Solutions HomePro ZWave interface (http://www.act-solutions.com/HomePro.htm) */
+#define FTDI_ACTZWAVE_PID 0xF2D0
+
+
+/* www.starting-point-systems.com µChameleon device */
+#define FTDI_MICRO_CHAMELEON_PID 0xCAA0 /* Product Id */
+
+/* www.irtrans.de device */
+#define FTDI_IRTRANS_PID 0xFC60 /* Product Id */
+
+
+/* www.thoughttechnology.com/ TT-USB provide with procomp use ftdi_sio */
+#define FTDI_TTUSB_PID 0xFF20 /* Product Id */
+
+/* iPlus device */
+#define FTDI_IPLUS_PID 0xD070 /* Product Id */
+#define FTDI_IPLUS2_PID 0xD071 /* Product Id */
+
+/* DMX4ALL DMX Interfaces */
+#define FTDI_DMX4ALL 0xC850
+
+/* OpenDCC (www.opendcc.de) product id */
+#define FTDI_OPENDCC_PID 0xBFD8
+#define FTDI_OPENDCC_SNIFFER_PID 0xBFD9
+#define FTDI_OPENDCC_THROTTLE_PID 0xBFDA
+#define FTDI_OPENDCC_GATEWAY_PID 0xBFDB
+
+/* Sprog II (Andrew Crosland's SprogII DCC interface) */
+#define FTDI_SPROG_II 0xF0C8
+
+/* www.crystalfontz.com devices - thanks for providing free devices for evaluation! */
+/* they use the ftdi chipset for the USB interface and the vendor id is the same */
+#define FTDI_XF_632_PID 0xFC08 /* 632: 16x2 Character Display */
+#define FTDI_XF_634_PID 0xFC09 /* 634: 20x4 Character Display */
+#define FTDI_XF_547_PID 0xFC0A /* 547: Two line Display */
+#define FTDI_XF_633_PID 0xFC0B /* 633: 16x2 Character Display with Keys */
+#define FTDI_XF_631_PID 0xFC0C /* 631: 20x2 Character Display */
+#define FTDI_XF_635_PID 0xFC0D /* 635: 20x4 Character Display */
+#define FTDI_XF_640_PID 0xFC0E /* 640: Two line Display */
+#define FTDI_XF_642_PID 0xFC0F /* 642: Two line Display */
+
+/* Video Networks Limited / Homechoice in the UK use an ftdi-based device for their 1Mb */
+/* broadband internet service. The following PID is exhibited by the usb device supplied */
+/* (the VID is the standard ftdi vid (FTDI_VID) */
+#define FTDI_VNHCPCUSB_D_PID 0xfe38 /* Product Id */
+
+/*
+ * PCDJ use ftdi based dj-controllers. The following PID is for their DAC-2 device
+ * http://www.pcdjhardware.com/DAC2.asp (PID sent by Wouter Paesen)
+ * (the VID is the standard ftdi vid (FTDI_VID) */
+#define FTDI_PCDJ_DAC2_PID 0xFA88
+
+/*
+ * The following are the values for the Matrix Orbital LCD displays,
+ * which are the FT232BM ( similar to the 8U232AM )
+ */
+#define FTDI_MTXORB_0_PID 0xFA00 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_1_PID 0xFA01 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_2_PID 0xFA02 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_3_PID 0xFA03 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_4_PID 0xFA04 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_5_PID 0xFA05 /* Matrix Orbital Product Id */
+#define FTDI_MTXORB_6_PID 0xFA06 /* Matrix Orbital Product Id */
+
+/* OOCDlink by Joern Kaipf <joernk@web.de>
+ * (http://www.joernonline.de/dw/doku.php?id=start&idx=projects:oocdlink) */
+#define FTDI_OOCDLINK_PID 0xbaf8 /* Amontec JTAGkey */
+
+/*
+ * The following are the values for the Matrix Orbital FTDI Range
+ * Anything in this range will use an FT232RL.
+ */
+#define MTXORB_VID 0x1B3D
+#define MTXORB_FTDI_RANGE_0100_PID 0x0100
+#define MTXORB_FTDI_RANGE_0101_PID 0x0101
+#define MTXORB_FTDI_RANGE_0102_PID 0x0102
+#define MTXORB_FTDI_RANGE_0103_PID 0x0103
+#define MTXORB_FTDI_RANGE_0104_PID 0x0104
+#define MTXORB_FTDI_RANGE_0105_PID 0x0105
+#define MTXORB_FTDI_RANGE_0106_PID 0x0106
+#define MTXORB_FTDI_RANGE_0107_PID 0x0107
+#define MTXORB_FTDI_RANGE_0108_PID 0x0108
+#define MTXORB_FTDI_RANGE_0109_PID 0x0109
+#define MTXORB_FTDI_RANGE_010A_PID 0x010A
+#define MTXORB_FTDI_RANGE_010B_PID 0x010B
+#define MTXORB_FTDI_RANGE_010C_PID 0x010C
+#define MTXORB_FTDI_RANGE_010D_PID 0x010D
+#define MTXORB_FTDI_RANGE_010E_PID 0x010E
+#define MTXORB_FTDI_RANGE_010F_PID 0x010F
+#define MTXORB_FTDI_RANGE_0110_PID 0x0110
+#define MTXORB_FTDI_RANGE_0111_PID 0x0111
+#define MTXORB_FTDI_RANGE_0112_PID 0x0112
+#define MTXORB_FTDI_RANGE_0113_PID 0x0113
+#define MTXORB_FTDI_RANGE_0114_PID 0x0114
+#define MTXORB_FTDI_RANGE_0115_PID 0x0115
+#define MTXORB_FTDI_RANGE_0116_PID 0x0116
+#define MTXORB_FTDI_RANGE_0117_PID 0x0117
+#define MTXORB_FTDI_RANGE_0118_PID 0x0118
+#define MTXORB_FTDI_RANGE_0119_PID 0x0119
+#define MTXORB_FTDI_RANGE_011A_PID 0x011A
+#define MTXORB_FTDI_RANGE_011B_PID 0x011B
+#define MTXORB_FTDI_RANGE_011C_PID 0x011C
+#define MTXORB_FTDI_RANGE_011D_PID 0x011D
+#define MTXORB_FTDI_RANGE_011E_PID 0x011E
+#define MTXORB_FTDI_RANGE_011F_PID 0x011F
+#define MTXORB_FTDI_RANGE_0120_PID 0x0120
+#define MTXORB_FTDI_RANGE_0121_PID 0x0121
+#define MTXORB_FTDI_RANGE_0122_PID 0x0122
+#define MTXORB_FTDI_RANGE_0123_PID 0x0123
+#define MTXORB_FTDI_RANGE_0124_PID 0x0124
+#define MTXORB_FTDI_RANGE_0125_PID 0x0125
+#define MTXORB_FTDI_RANGE_0126_PID 0x0126
+#define MTXORB_FTDI_RANGE_0127_PID 0x0127
+#define MTXORB_FTDI_RANGE_0128_PID 0x0128
+#define MTXORB_FTDI_RANGE_0129_PID 0x0129
+#define MTXORB_FTDI_RANGE_012A_PID 0x012A
+#define MTXORB_FTDI_RANGE_012B_PID 0x012B
+#define MTXORB_FTDI_RANGE_012C_PID 0x012C
+#define MTXORB_FTDI_RANGE_012D_PID 0x012D
+#define MTXORB_FTDI_RANGE_012E_PID 0x012E
+#define MTXORB_FTDI_RANGE_012F_PID 0x012F
+#define MTXORB_FTDI_RANGE_0130_PID 0x0130
+#define MTXORB_FTDI_RANGE_0131_PID 0x0131
+#define MTXORB_FTDI_RANGE_0132_PID 0x0132
+#define MTXORB_FTDI_RANGE_0133_PID 0x0133
+#define MTXORB_FTDI_RANGE_0134_PID 0x0134
+#define MTXORB_FTDI_RANGE_0135_PID 0x0135
+#define MTXORB_FTDI_RANGE_0136_PID 0x0136
+#define MTXORB_FTDI_RANGE_0137_PID 0x0137
+#define MTXORB_FTDI_RANGE_0138_PID 0x0138
+#define MTXORB_FTDI_RANGE_0139_PID 0x0139
+#define MTXORB_FTDI_RANGE_013A_PID 0x013A
+#define MTXORB_FTDI_RANGE_013B_PID 0x013B
+#define MTXORB_FTDI_RANGE_013C_PID 0x013C
+#define MTXORB_FTDI_RANGE_013D_PID 0x013D
+#define MTXORB_FTDI_RANGE_013E_PID 0x013E
+#define MTXORB_FTDI_RANGE_013F_PID 0x013F
+#define MTXORB_FTDI_RANGE_0140_PID 0x0140
+#define MTXORB_FTDI_RANGE_0141_PID 0x0141
+#define MTXORB_FTDI_RANGE_0142_PID 0x0142
+#define MTXORB_FTDI_RANGE_0143_PID 0x0143
+#define MTXORB_FTDI_RANGE_0144_PID 0x0144
+#define MTXORB_FTDI_RANGE_0145_PID 0x0145
+#define MTXORB_FTDI_RANGE_0146_PID 0x0146
+#define MTXORB_FTDI_RANGE_0147_PID 0x0147
+#define MTXORB_FTDI_RANGE_0148_PID 0x0148
+#define MTXORB_FTDI_RANGE_0149_PID 0x0149
+#define MTXORB_FTDI_RANGE_014A_PID 0x014A
+#define MTXORB_FTDI_RANGE_014B_PID 0x014B
+#define MTXORB_FTDI_RANGE_014C_PID 0x014C
+#define MTXORB_FTDI_RANGE_014D_PID 0x014D
+#define MTXORB_FTDI_RANGE_014E_PID 0x014E
+#define MTXORB_FTDI_RANGE_014F_PID 0x014F
+#define MTXORB_FTDI_RANGE_0150_PID 0x0150
+#define MTXORB_FTDI_RANGE_0151_PID 0x0151
+#define MTXORB_FTDI_RANGE_0152_PID 0x0152
+#define MTXORB_FTDI_RANGE_0153_PID 0x0153
+#define MTXORB_FTDI_RANGE_0154_PID 0x0154
+#define MTXORB_FTDI_RANGE_0155_PID 0x0155
+#define MTXORB_FTDI_RANGE_0156_PID 0x0156
+#define MTXORB_FTDI_RANGE_0157_PID 0x0157
+#define MTXORB_FTDI_RANGE_0158_PID 0x0158
+#define MTXORB_FTDI_RANGE_0159_PID 0x0159
+#define MTXORB_FTDI_RANGE_015A_PID 0x015A
+#define MTXORB_FTDI_RANGE_015B_PID 0x015B
+#define MTXORB_FTDI_RANGE_015C_PID 0x015C
+#define MTXORB_FTDI_RANGE_015D_PID 0x015D
+#define MTXORB_FTDI_RANGE_015E_PID 0x015E
+#define MTXORB_FTDI_RANGE_015F_PID 0x015F
+#define MTXORB_FTDI_RANGE_0160_PID 0x0160
+#define MTXORB_FTDI_RANGE_0161_PID 0x0161
+#define MTXORB_FTDI_RANGE_0162_PID 0x0162
+#define MTXORB_FTDI_RANGE_0163_PID 0x0163
+#define MTXORB_FTDI_RANGE_0164_PID 0x0164
+#define MTXORB_FTDI_RANGE_0165_PID 0x0165
+#define MTXORB_FTDI_RANGE_0166_PID 0x0166
+#define MTXORB_FTDI_RANGE_0167_PID 0x0167
+#define MTXORB_FTDI_RANGE_0168_PID 0x0168
+#define MTXORB_FTDI_RANGE_0169_PID 0x0169
+#define MTXORB_FTDI_RANGE_016A_PID 0x016A
+#define MTXORB_FTDI_RANGE_016B_PID 0x016B
+#define MTXORB_FTDI_RANGE_016C_PID 0x016C
+#define MTXORB_FTDI_RANGE_016D_PID 0x016D
+#define MTXORB_FTDI_RANGE_016E_PID 0x016E
+#define MTXORB_FTDI_RANGE_016F_PID 0x016F
+#define MTXORB_FTDI_RANGE_0170_PID 0x0170
+#define MTXORB_FTDI_RANGE_0171_PID 0x0171
+#define MTXORB_FTDI_RANGE_0172_PID 0x0172
+#define MTXORB_FTDI_RANGE_0173_PID 0x0173
+#define MTXORB_FTDI_RANGE_0174_PID 0x0174
+#define MTXORB_FTDI_RANGE_0175_PID 0x0175
+#define MTXORB_FTDI_RANGE_0176_PID 0x0176
+#define MTXORB_FTDI_RANGE_0177_PID 0x0177
+#define MTXORB_FTDI_RANGE_0178_PID 0x0178
+#define MTXORB_FTDI_RANGE_0179_PID 0x0179
+#define MTXORB_FTDI_RANGE_017A_PID 0x017A
+#define MTXORB_FTDI_RANGE_017B_PID 0x017B
+#define MTXORB_FTDI_RANGE_017C_PID 0x017C
+#define MTXORB_FTDI_RANGE_017D_PID 0x017D
+#define MTXORB_FTDI_RANGE_017E_PID 0x017E
+#define MTXORB_FTDI_RANGE_017F_PID 0x017F
+#define MTXORB_FTDI_RANGE_0180_PID 0x0180
+#define MTXORB_FTDI_RANGE_0181_PID 0x0181
+#define MTXORB_FTDI_RANGE_0182_PID 0x0182
+#define MTXORB_FTDI_RANGE_0183_PID 0x0183
+#define MTXORB_FTDI_RANGE_0184_PID 0x0184
+#define MTXORB_FTDI_RANGE_0185_PID 0x0185
+#define MTXORB_FTDI_RANGE_0186_PID 0x0186
+#define MTXORB_FTDI_RANGE_0187_PID 0x0187
+#define MTXORB_FTDI_RANGE_0188_PID 0x0188
+#define MTXORB_FTDI_RANGE_0189_PID 0x0189
+#define MTXORB_FTDI_RANGE_018A_PID 0x018A
+#define MTXORB_FTDI_RANGE_018B_PID 0x018B
+#define MTXORB_FTDI_RANGE_018C_PID 0x018C
+#define MTXORB_FTDI_RANGE_018D_PID 0x018D
+#define MTXORB_FTDI_RANGE_018E_PID 0x018E
+#define MTXORB_FTDI_RANGE_018F_PID 0x018F
+#define MTXORB_FTDI_RANGE_0190_PID 0x0190
+#define MTXORB_FTDI_RANGE_0191_PID 0x0191
+#define MTXORB_FTDI_RANGE_0192_PID 0x0192
+#define MTXORB_FTDI_RANGE_0193_PID 0x0193
+#define MTXORB_FTDI_RANGE_0194_PID 0x0194
+#define MTXORB_FTDI_RANGE_0195_PID 0x0195
+#define MTXORB_FTDI_RANGE_0196_PID 0x0196
+#define MTXORB_FTDI_RANGE_0197_PID 0x0197
+#define MTXORB_FTDI_RANGE_0198_PID 0x0198
+#define MTXORB_FTDI_RANGE_0199_PID 0x0199
+#define MTXORB_FTDI_RANGE_019A_PID 0x019A
+#define MTXORB_FTDI_RANGE_019B_PID 0x019B
+#define MTXORB_FTDI_RANGE_019C_PID 0x019C
+#define MTXORB_FTDI_RANGE_019D_PID 0x019D
+#define MTXORB_FTDI_RANGE_019E_PID 0x019E
+#define MTXORB_FTDI_RANGE_019F_PID 0x019F
+#define MTXORB_FTDI_RANGE_01A0_PID 0x01A0
+#define MTXORB_FTDI_RANGE_01A1_PID 0x01A1
+#define MTXORB_FTDI_RANGE_01A2_PID 0x01A2
+#define MTXORB_FTDI_RANGE_01A3_PID 0x01A3
+#define MTXORB_FTDI_RANGE_01A4_PID 0x01A4
+#define MTXORB_FTDI_RANGE_01A5_PID 0x01A5
+#define MTXORB_FTDI_RANGE_01A6_PID 0x01A6
+#define MTXORB_FTDI_RANGE_01A7_PID 0x01A7
+#define MTXORB_FTDI_RANGE_01A8_PID 0x01A8
+#define MTXORB_FTDI_RANGE_01A9_PID 0x01A9
+#define MTXORB_FTDI_RANGE_01AA_PID 0x01AA
+#define MTXORB_FTDI_RANGE_01AB_PID 0x01AB
+#define MTXORB_FTDI_RANGE_01AC_PID 0x01AC
+#define MTXORB_FTDI_RANGE_01AD_PID 0x01AD
+#define MTXORB_FTDI_RANGE_01AE_PID 0x01AE
+#define MTXORB_FTDI_RANGE_01AF_PID 0x01AF
+#define MTXORB_FTDI_RANGE_01B0_PID 0x01B0
+#define MTXORB_FTDI_RANGE_01B1_PID 0x01B1
+#define MTXORB_FTDI_RANGE_01B2_PID 0x01B2
+#define MTXORB_FTDI_RANGE_01B3_PID 0x01B3
+#define MTXORB_FTDI_RANGE_01B4_PID 0x01B4
+#define MTXORB_FTDI_RANGE_01B5_PID 0x01B5
+#define MTXORB_FTDI_RANGE_01B6_PID 0x01B6
+#define MTXORB_FTDI_RANGE_01B7_PID 0x01B7
+#define MTXORB_FTDI_RANGE_01B8_PID 0x01B8
+#define MTXORB_FTDI_RANGE_01B9_PID 0x01B9
+#define MTXORB_FTDI_RANGE_01BA_PID 0x01BA
+#define MTXORB_FTDI_RANGE_01BB_PID 0x01BB
+#define MTXORB_FTDI_RANGE_01BC_PID 0x01BC
+#define MTXORB_FTDI_RANGE_01BD_PID 0x01BD
+#define MTXORB_FTDI_RANGE_01BE_PID 0x01BE
+#define MTXORB_FTDI_RANGE_01BF_PID 0x01BF
+#define MTXORB_FTDI_RANGE_01C0_PID 0x01C0
+#define MTXORB_FTDI_RANGE_01C1_PID 0x01C1
+#define MTXORB_FTDI_RANGE_01C2_PID 0x01C2
+#define MTXORB_FTDI_RANGE_01C3_PID 0x01C3
+#define MTXORB_FTDI_RANGE_01C4_PID 0x01C4
+#define MTXORB_FTDI_RANGE_01C5_PID 0x01C5
+#define MTXORB_FTDI_RANGE_01C6_PID 0x01C6
+#define MTXORB_FTDI_RANGE_01C7_PID 0x01C7
+#define MTXORB_FTDI_RANGE_01C8_PID 0x01C8
+#define MTXORB_FTDI_RANGE_01C9_PID 0x01C9
+#define MTXORB_FTDI_RANGE_01CA_PID 0x01CA
+#define MTXORB_FTDI_RANGE_01CB_PID 0x01CB
+#define MTXORB_FTDI_RANGE_01CC_PID 0x01CC
+#define MTXORB_FTDI_RANGE_01CD_PID 0x01CD
+#define MTXORB_FTDI_RANGE_01CE_PID 0x01CE
+#define MTXORB_FTDI_RANGE_01CF_PID 0x01CF
+#define MTXORB_FTDI_RANGE_01D0_PID 0x01D0
+#define MTXORB_FTDI_RANGE_01D1_PID 0x01D1
+#define MTXORB_FTDI_RANGE_01D2_PID 0x01D2
+#define MTXORB_FTDI_RANGE_01D3_PID 0x01D3
+#define MTXORB_FTDI_RANGE_01D4_PID 0x01D4
+#define MTXORB_FTDI_RANGE_01D5_PID 0x01D5
+#define MTXORB_FTDI_RANGE_01D6_PID 0x01D6
+#define MTXORB_FTDI_RANGE_01D7_PID 0x01D7
+#define MTXORB_FTDI_RANGE_01D8_PID 0x01D8
+#define MTXORB_FTDI_RANGE_01D9_PID 0x01D9
+#define MTXORB_FTDI_RANGE_01DA_PID 0x01DA
+#define MTXORB_FTDI_RANGE_01DB_PID 0x01DB
+#define MTXORB_FTDI_RANGE_01DC_PID 0x01DC
+#define MTXORB_FTDI_RANGE_01DD_PID 0x01DD
+#define MTXORB_FTDI_RANGE_01DE_PID 0x01DE
+#define MTXORB_FTDI_RANGE_01DF_PID 0x01DF
+#define MTXORB_FTDI_RANGE_01E0_PID 0x01E0
+#define MTXORB_FTDI_RANGE_01E1_PID 0x01E1
+#define MTXORB_FTDI_RANGE_01E2_PID 0x01E2
+#define MTXORB_FTDI_RANGE_01E3_PID 0x01E3
+#define MTXORB_FTDI_RANGE_01E4_PID 0x01E4
+#define MTXORB_FTDI_RANGE_01E5_PID 0x01E5
+#define MTXORB_FTDI_RANGE_01E6_PID 0x01E6
+#define MTXORB_FTDI_RANGE_01E7_PID 0x01E7
+#define MTXORB_FTDI_RANGE_01E8_PID 0x01E8
+#define MTXORB_FTDI_RANGE_01E9_PID 0x01E9
+#define MTXORB_FTDI_RANGE_01EA_PID 0x01EA
+#define MTXORB_FTDI_RANGE_01EB_PID 0x01EB
+#define MTXORB_FTDI_RANGE_01EC_PID 0x01EC
+#define MTXORB_FTDI_RANGE_01ED_PID 0x01ED
+#define MTXORB_FTDI_RANGE_01EE_PID 0x01EE
+#define MTXORB_FTDI_RANGE_01EF_PID 0x01EF
+#define MTXORB_FTDI_RANGE_01F0_PID 0x01F0
+#define MTXORB_FTDI_RANGE_01F1_PID 0x01F1
+#define MTXORB_FTDI_RANGE_01F2_PID 0x01F2
+#define MTXORB_FTDI_RANGE_01F3_PID 0x01F3
+#define MTXORB_FTDI_RANGE_01F4_PID 0x01F4
+#define MTXORB_FTDI_RANGE_01F5_PID 0x01F5
+#define MTXORB_FTDI_RANGE_01F6_PID 0x01F6
+#define MTXORB_FTDI_RANGE_01F7_PID 0x01F7
+#define MTXORB_FTDI_RANGE_01F8_PID 0x01F8
+#define MTXORB_FTDI_RANGE_01F9_PID 0x01F9
+#define MTXORB_FTDI_RANGE_01FA_PID 0x01FA
+#define MTXORB_FTDI_RANGE_01FB_PID 0x01FB
+#define MTXORB_FTDI_RANGE_01FC_PID 0x01FC
+#define MTXORB_FTDI_RANGE_01FD_PID 0x01FD
+#define MTXORB_FTDI_RANGE_01FE_PID 0x01FE
+#define MTXORB_FTDI_RANGE_01FF_PID 0x01FF
+
+
+
+/* Interbiometrics USB I/O Board */
+/* Developed for Interbiometrics by Rudolf Gugler */
+#define INTERBIOMETRICS_VID 0x1209
+#define INTERBIOMETRICS_IOBOARD_PID 0x1002
+#define INTERBIOMETRICS_MINI_IOBOARD_PID 0x1006
+
+/*
+ * The following are the values for the Perle Systems
+ * UltraPort USB serial converters
+ */
+#define FTDI_PERLE_ULTRAPORT_PID 0xF0C0 /* Perle UltraPort Product Id */
+
+/*
+ * The following are the values for the Sealevel SeaLINK+ adapters.
+ * (Original list sent by Tuan Hoang. Ian Abbott renamed the macros and
+ * removed some PIDs that don't seem to match any existing products.)
+ */
+#define SEALEVEL_VID 0x0c52 /* Sealevel Vendor ID */
+#define SEALEVEL_2101_PID 0x2101 /* SeaLINK+232 (2101/2105) */
+#define SEALEVEL_2102_PID 0x2102 /* SeaLINK+485 (2102) */
+#define SEALEVEL_2103_PID 0x2103 /* SeaLINK+232I (2103) */
+#define SEALEVEL_2104_PID 0x2104 /* SeaLINK+485I (2104) */
+#define SEALEVEL_2106_PID 0x9020 /* SeaLINK+422 (2106) */
+#define SEALEVEL_2201_1_PID 0x2211 /* SeaPORT+2/232 (2201) Port 1 */
+#define SEALEVEL_2201_2_PID 0x2221 /* SeaPORT+2/232 (2201) Port 2 */
+#define SEALEVEL_2202_1_PID 0x2212 /* SeaPORT+2/485 (2202) Port 1 */
+#define SEALEVEL_2202_2_PID 0x2222 /* SeaPORT+2/485 (2202) Port 2 */
+#define SEALEVEL_2203_1_PID 0x2213 /* SeaPORT+2 (2203) Port 1 */
+#define SEALEVEL_2203_2_PID 0x2223 /* SeaPORT+2 (2203) Port 2 */
+#define SEALEVEL_2401_1_PID 0x2411 /* SeaPORT+4/232 (2401) Port 1 */
+#define SEALEVEL_2401_2_PID 0x2421 /* SeaPORT+4/232 (2401) Port 2 */
+#define SEALEVEL_2401_3_PID 0x2431 /* SeaPORT+4/232 (2401) Port 3 */
+#define SEALEVEL_2401_4_PID 0x2441 /* SeaPORT+4/232 (2401) Port 4 */
+#define SEALEVEL_2402_1_PID 0x2412 /* SeaPORT+4/485 (2402) Port 1 */
+#define SEALEVEL_2402_2_PID 0x2422 /* SeaPORT+4/485 (2402) Port 2 */
+#define SEALEVEL_2402_3_PID 0x2432 /* SeaPORT+4/485 (2402) Port 3 */
+#define SEALEVEL_2402_4_PID 0x2442 /* SeaPORT+4/485 (2402) Port 4 */
+#define SEALEVEL_2403_1_PID 0x2413 /* SeaPORT+4 (2403) Port 1 */
+#define SEALEVEL_2403_2_PID 0x2423 /* SeaPORT+4 (2403) Port 2 */
+#define SEALEVEL_2403_3_PID 0x2433 /* SeaPORT+4 (2403) Port 3 */
+#define SEALEVEL_2403_4_PID 0x2443 /* SeaPORT+4 (2403) Port 4 */
+#define SEALEVEL_2801_1_PID 0X2811 /* SeaLINK+8/232 (2801) Port 1 */
+#define SEALEVEL_2801_2_PID 0X2821 /* SeaLINK+8/232 (2801) Port 2 */
+#define SEALEVEL_2801_3_PID 0X2831 /* SeaLINK+8/232 (2801) Port 3 */
+#define SEALEVEL_2801_4_PID 0X2841 /* SeaLINK+8/232 (2801) Port 4 */
+#define SEALEVEL_2801_5_PID 0X2851 /* SeaLINK+8/232 (2801) Port 5 */
+#define SEALEVEL_2801_6_PID 0X2861 /* SeaLINK+8/232 (2801) Port 6 */
+#define SEALEVEL_2801_7_PID 0X2871 /* SeaLINK+8/232 (2801) Port 7 */
+#define SEALEVEL_2801_8_PID 0X2881 /* SeaLINK+8/232 (2801) Port 8 */
+#define SEALEVEL_2802_1_PID 0X2812 /* SeaLINK+8/485 (2802) Port 1 */
+#define SEALEVEL_2802_2_PID 0X2822 /* SeaLINK+8/485 (2802) Port 2 */
+#define SEALEVEL_2802_3_PID 0X2832 /* SeaLINK+8/485 (2802) Port 3 */
+#define SEALEVEL_2802_4_PID 0X2842 /* SeaLINK+8/485 (2802) Port 4 */
+#define SEALEVEL_2802_5_PID 0X2852 /* SeaLINK+8/485 (2802) Port 5 */
+#define SEALEVEL_2802_6_PID 0X2862 /* SeaLINK+8/485 (2802) Port 6 */
+#define SEALEVEL_2802_7_PID 0X2872 /* SeaLINK+8/485 (2802) Port 7 */
+#define SEALEVEL_2802_8_PID 0X2882 /* SeaLINK+8/485 (2802) Port 8 */
+#define SEALEVEL_2803_1_PID 0X2813 /* SeaLINK+8 (2803) Port 1 */
+#define SEALEVEL_2803_2_PID 0X2823 /* SeaLINK+8 (2803) Port 2 */
+#define SEALEVEL_2803_3_PID 0X2833 /* SeaLINK+8 (2803) Port 3 */
+#define SEALEVEL_2803_4_PID 0X2843 /* SeaLINK+8 (2803) Port 4 */
+#define SEALEVEL_2803_5_PID 0X2853 /* SeaLINK+8 (2803) Port 5 */
+#define SEALEVEL_2803_6_PID 0X2863 /* SeaLINK+8 (2803) Port 6 */
+#define SEALEVEL_2803_7_PID 0X2873 /* SeaLINK+8 (2803) Port 7 */
+#define SEALEVEL_2803_8_PID 0X2883 /* SeaLINK+8 (2803) Port 8 */
+
+/*
+ * The following are the values for two KOBIL chipcard terminals.
+ */
+#define KOBIL_VID 0x0d46 /* KOBIL Vendor ID */
+#define KOBIL_CONV_B1_PID 0x2020 /* KOBIL Konverter for B1 */
+#define KOBIL_CONV_KAAN_PID 0x2021 /* KOBIL_Konverter for KAAN */
+
+/*
+ * Icom ID-1 digital transceiver
+ */
+
+#define ICOM_ID1_VID 0x0C26
+#define ICOM_ID1_PID 0x0004
+
+/*
+ * ASK.fr devices
+ */
+#define FTDI_ASK_RDR400_PID 0xC991 /* ASK RDR 400 series card reader */
+
+/*
+ * FTDI USB UART chips used in construction projects from the
+ * Elektor Electronics magazine (http://elektor-electronics.co.uk)
+ */
+#define ELEKTOR_VID 0x0C7D
+#define ELEKTOR_FT323R_PID 0x0005 /* RFID-Reader, issue 09-2006 */
+
+/*
+ * DSS-20 Sync Station for Sony Ericsson P800
+ */
+#define FTDI_DSS20_PID 0xFC82
+
+/*
+ * Home Electronics (www.home-electro.com) USB gadgets
+ */
+#define FTDI_HE_TIRA1_PID 0xFA78 /* Tira-1 IR transceiver */
+
+/* USB-UIRT - An infrared receiver and transmitter using the 8U232AM chip */
+/* http://home.earthlink.net/~jrhees/USBUIRT/index.htm */
+#define FTDI_USB_UIRT_PID 0xF850 /* Product Id */
+
+/* TNC-X USB-to-packet-radio adapter, versions prior to 3.0 (DLP module) */
+
+#define FTDI_TNC_X_PID 0xEBE0
+
+/*
+ * ELV USB devices submitted by Christian Abt of ELV (www.elv.de).
+ * All of these devices use FTDI's vendor ID (0x0403).
+ *
+ * The previously included PID for the UO 100 module was incorrect.
+ * In fact, that PID was for ELV's UR 100 USB-RS232 converter (0xFB58).
+ *
+ * Armin Laeuger originally sent the PID for the UM 100 module.
+ */
+#define FTDI_R2000KU_TRUE_RNG 0xFB80 /* R2000KU TRUE RNG */
+#define FTDI_ELV_UR100_PID 0xFB58 /* USB-RS232-Umsetzer (UR 100) */
+#define FTDI_ELV_UM100_PID 0xFB5A /* USB-Modul UM 100 */
+#define FTDI_ELV_UO100_PID 0xFB5B /* USB-Modul UO 100 */
+#define FTDI_ELV_ALC8500_PID 0xF06E /* ALC 8500 Expert */
+/* Additional ELV PIDs that default to using the FTDI D2XX drivers on
+ * MS Windows, rather than the FTDI Virtual Com Port drivers.
+ * Maybe these will be easier to use with the libftdi/libusb user-space
+ * drivers, or possibly the Comedi drivers in some cases. */
+#define FTDI_ELV_CLI7000_PID 0xFB59 /* Computer-Light-Interface (CLI 7000) */
+#define FTDI_ELV_PPS7330_PID 0xFB5C /* Processor-Power-Supply (PPS 7330) */
+#define FTDI_ELV_TFM100_PID	0xFB5D	/* Temperatur-Feuchte Messgeraet (TFM 100) */
+#define FTDI_ELV_UDF77_PID 0xFB5E /* USB DCF Funkurh (UDF 77) */
+#define FTDI_ELV_UIO88_PID 0xFB5F /* USB-I/O Interface (UIO 88) */
+#define FTDI_ELV_UAD8_PID 0xF068 /* USB-AD-Wandler (UAD 8) */
+#define FTDI_ELV_UDA7_PID 0xF069 /* USB-DA-Wandler (UDA 7) */
+#define FTDI_ELV_USI2_PID 0xF06A /* USB-Schrittmotoren-Interface (USI 2) */
+#define FTDI_ELV_T1100_PID 0xF06B /* Thermometer (T 1100) */
+#define FTDI_ELV_PCD200_PID 0xF06C /* PC-Datenlogger (PCD 200) */
+#define FTDI_ELV_ULA200_PID 0xF06D /* USB-LCD-Ansteuerung (ULA 200) */
+#define FTDI_ELV_FHZ1000PC_PID 0xF06F /* FHZ 1000 PC */
+#define FTDI_ELV_CSI8_PID 0xE0F0 /* Computer-Schalt-Interface (CSI 8) */
+#define FTDI_ELV_EM1000DL_PID 0xE0F1 /* PC-Datenlogger fuer Energiemonitor (EM 1000 DL) */
+#define FTDI_ELV_PCK100_PID 0xE0F2 /* PC-Kabeltester (PCK 100) */
+#define FTDI_ELV_RFP500_PID 0xE0F3 /* HF-Leistungsmesser (RFP 500) */
+#define FTDI_ELV_FS20SIG_PID 0xE0F4 /* Signalgeber (FS 20 SIG) */
+#define FTDI_ELV_WS300PC_PID 0xE0F6 /* PC-Wetterstation (WS 300 PC) */
+#define FTDI_ELV_FHZ1300PC_PID 0xE0E8 /* FHZ 1300 PC */
+#define FTDI_ELV_WS500_PID 0xE0E9 /* PC-Wetterstation (WS 500) */
+#define FTDI_ELV_HS485_PID 0xE0EA /* USB to RS-485 adapter */
+#define FTDI_ELV_EM1010PC_PID	0xE0EF	/* Energy monitor EM 1010 PC */
+#define FTDI_PHI_FISCO_PID 0xE40B /* PHI Fisco USB to Serial cable */
+
+/*
+ * Definitions for ID TECH (www.idt-net.com) devices
+ */
+#define IDTECH_VID 0x0ACD /* ID TECH Vendor ID */
+#define IDTECH_IDT1221U_PID 0x0300 /* IDT1221U USB to RS-232 adapter */
+
+/*
+ * Definitions for Omnidirectional Control Technology, Inc. devices
+ */
+#define OCT_VID 0x0B39 /* OCT vendor ID */
+/* Note: OCT US101 is also rebadged as Dick Smith Electronics (NZ) XH6381 */
+/* Also rebadged as Dick Smith Electronics (Aus) XH6451 */
+/* Also rebadged as SIIG Inc. model US2308 hardware version 1 */
+#define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */
+
+/* an infrared receiver for user access control with IR tags */
+#define FTDI_PIEGROUP_PID 0xF208 /* Product Id */
+
+/*
+ * Definitions for Artemis astronomical USB based cameras
+ * Check it at http://www.artemisccd.co.uk/
+ */
+#define FTDI_ARTEMIS_PID 0xDF28 /* All Artemis Cameras */
+
+/*
+ * Definitions for ATIK Instruments astronomical USB based cameras
+ * Check it at http://www.atik-instruments.com/
+ */
+#define FTDI_ATIK_ATK16_PID 0xDF30 /* ATIK ATK-16 Grayscale Camera */
+#define FTDI_ATIK_ATK16C_PID 0xDF32 /* ATIK ATK-16C Colour Camera */
+#define FTDI_ATIK_ATK16HR_PID 0xDF31 /* ATIK ATK-16HR Grayscale Camera */
+#define FTDI_ATIK_ATK16HRC_PID 0xDF33 /* ATIK ATK-16HRC Colour Camera */
+#define FTDI_ATIK_ATK16IC_PID 0xDF35 /* ATIK ATK-16IC Grayscale Camera */
+
+/*
+ * Protego product ids
+ */
+#define PROTEGO_SPECIAL_1 0xFC70 /* special/unknown device */
+#define PROTEGO_R2X0 0xFC71 /* R200-USB TRNG unit (R210, R220, and R230) */
+#define PROTEGO_SPECIAL_3 0xFC72 /* special/unknown device */
+#define PROTEGO_SPECIAL_4 0xFC73 /* special/unknown device */
+
+/*
+ * Gude Analog- und Digitalsysteme GmbH
+ */
+#define FTDI_GUDEADS_E808_PID 0xE808
+#define FTDI_GUDEADS_E809_PID 0xE809
+#define FTDI_GUDEADS_E80A_PID 0xE80A
+#define FTDI_GUDEADS_E80B_PID 0xE80B
+#define FTDI_GUDEADS_E80C_PID 0xE80C
+#define FTDI_GUDEADS_E80D_PID 0xE80D
+#define FTDI_GUDEADS_E80E_PID 0xE80E
+#define FTDI_GUDEADS_E80F_PID 0xE80F
+#define FTDI_GUDEADS_E888_PID 0xE888 /* Expert ISDN Control USB */
+#define FTDI_GUDEADS_E889_PID 0xE889 /* USB RS-232 OptoBridge */
+#define FTDI_GUDEADS_E88A_PID 0xE88A
+#define FTDI_GUDEADS_E88B_PID 0xE88B
+#define FTDI_GUDEADS_E88C_PID 0xE88C
+#define FTDI_GUDEADS_E88D_PID 0xE88D
+#define FTDI_GUDEADS_E88E_PID 0xE88E
+#define FTDI_GUDEADS_E88F_PID 0xE88F
+
+/*
+ * Linx Technologies product ids
+ */
+#define LINX_SDMUSBQSS_PID 0xF448 /* Linx SDM-USB-QS-S */
+#define LINX_MASTERDEVEL2_PID 0xF449 /* Linx Master Development 2.0 */
+#define LINX_FUTURE_0_PID 0xF44A /* Linx future device */
+#define LINX_FUTURE_1_PID 0xF44B /* Linx future device */
+#define LINX_FUTURE_2_PID 0xF44C /* Linx future device */
+
+/* CCS Inc. ICDU/ICDU40 product ID - the FT232BM is used in an in-circuit-debugger */
+/* unit for PIC16's/PIC18's */
+#define FTDI_CCSICDU20_0_PID 0xF9D0
+#define FTDI_CCSICDU40_1_PID 0xF9D1
+#define FTDI_CCSMACHX_2_PID 0xF9D2
+#define FTDI_CCSLOAD_N_GO_3_PID 0xF9D3
+#define FTDI_CCSICDU64_4_PID 0xF9D4
+#define FTDI_CCSPRIME8_5_PID 0xF9D5
+
+/* Inside Accesso contactless reader (http://www.insidefr.com) */
+#define INSIDE_ACCESSO 0xFAD0
+
+/*
+ * Intrepid Control Systems (http://www.intrepidcs.com/) ValueCAN and NeoVI
+ */
+#define INTREPID_VID 0x093C
+#define INTREPID_VALUECAN_PID 0x0601
+#define INTREPID_NEOVI_PID 0x0701
+
+/*
+ * Falcom Wireless Communications GmbH
+ */
+#define FALCOM_VID 0x0F94 /* Vendor Id */
+#define FALCOM_TWIST_PID 0x0001 /* Falcom Twist USB GPRS modem */
+#define FALCOM_SAMBA_PID 0x0005 /* Falcom Samba USB GPRS modem */
+
+/*
+ * SUUNTO product ids
+ */
+#define FTDI_SUUNTO_SPORTS_PID 0xF680 /* Suunto Sports instrument */
+
+/*
+ * Oceanic product ids
+ */
+#define FTDI_OCEANIC_PID 0xF460 /* Oceanic dive instrument */
+
+/*
+ * TTi (Thurlby Thandar Instruments)
+ */
+#define TTI_VID 0x103E /* Vendor Id */
+#define TTI_QL355P_PID 0x03E8 /* TTi QL355P power supply */
+
+/*
+ * Definitions for B&B Electronics products.
+ */
+#define BANDB_VID 0x0856 /* B&B Electronics Vendor ID */
+#define BANDB_USOTL4_PID 0xAC01 /* USOTL4 Isolated RS-485 Converter */
+#define BANDB_USTL4_PID 0xAC02 /* USTL4 RS-485 Converter */
+#define BANDB_USO9ML2_PID 0xAC03 /* USO9ML2 Isolated RS-232 Converter */
+#define BANDB_USOPTL4_PID 0xAC11
+#define BANDB_USPTL4_PID 0xAC12
+#define BANDB_USO9ML2DR_2_PID 0xAC16
+#define BANDB_USO9ML2DR_PID 0xAC17
+#define BANDB_USOPTL4DR2_PID 0xAC18 /* USOPTL4R-2 2-port Isolated RS-232 Converter */
+#define BANDB_USOPTL4DR_PID 0xAC19
+#define BANDB_485USB9F_2W_PID 0xAC25
+#define BANDB_485USB9F_4W_PID 0xAC26
+#define BANDB_232USB9M_PID 0xAC27
+#define BANDB_485USBTB_2W_PID 0xAC33
+#define BANDB_485USBTB_4W_PID 0xAC34
+#define BANDB_TTL5USB9M_PID 0xAC49
+#define BANDB_TTL3USB9M_PID 0xAC50
+#define BANDB_ZZ_PROG1_USB_PID 0xBA02
+
+/*
+ * RM Michaelides CANview USB (http://www.rmcan.com)
+ * CAN fieldbus interface adapter, added by port GmbH (www.port.de)
+ * Ian Abbott changed the macro names for consistency.
+ */
+#define FTDI_RM_CANVIEW_PID 0xfd60 /* Product Id */
+
+/*
+ * EVER Eco Pro UPS (http://www.ever.com.pl/)
+ */
+
+#define EVER_ECO_PRO_CDS 0xe520 /* RS-232 converter */
+
+/*
+ * 4N-GALAXY.DE PIDs for CAN-USB, USB-RS232, USB-RS422, USB-RS485,
+ * USB-TTY activ, USB-TTY passiv. Some PIDs are used by several devices
+ * and I'm not entirely sure which are used by which.
+ */
+#define FTDI_4N_GALAXY_DE_1_PID 0xF3C0
+#define FTDI_4N_GALAXY_DE_2_PID 0xF3C1
+
+/*
+ * Mobility Electronics products.
+ */
+#define MOBILITY_VID 0x1342
+#define MOBILITY_USB_SERIAL_PID 0x0202 /* EasiDock USB 200 serial */
+
+/*
+ * microHAM product IDs (http://www.microham.com).
+ * Submitted by Justin Burket (KL1RL) <zorton@jtan.com>
+ * and Mike Studer (K6EEP) <k6eep@hamsoftware.org>.
+ * Ian Abbott <abbotti@mev.co.uk> added a few more from the driver INF file.
+ */
+#define FTDI_MHAM_KW_PID 0xEEE8 /* USB-KW interface */
+#define FTDI_MHAM_YS_PID 0xEEE9 /* USB-YS interface */
+#define FTDI_MHAM_Y6_PID 0xEEEA /* USB-Y6 interface */
+#define FTDI_MHAM_Y8_PID 0xEEEB /* USB-Y8 interface */
+#define FTDI_MHAM_IC_PID 0xEEEC /* USB-IC interface */
+#define FTDI_MHAM_DB9_PID 0xEEED /* USB-DB9 interface */
+#define FTDI_MHAM_RS232_PID 0xEEEE /* USB-RS232 interface */
+#define FTDI_MHAM_Y9_PID 0xEEEF /* USB-Y9 interface */
+
+/*
+ * Active Robots product ids.
+ */
+#define FTDI_ACTIVE_ROBOTS_PID 0xE548 /* USB comms board */
+
+/*
+ * Xsens Technologies BV products (http://www.xsens.com).
+ */
+#define XSENS_CONVERTER_0_PID 0xD388
+#define XSENS_CONVERTER_1_PID 0xD389
+#define XSENS_CONVERTER_2_PID 0xD38A
+#define XSENS_CONVERTER_3_PID 0xD38B
+#define XSENS_CONVERTER_4_PID 0xD38C
+#define XSENS_CONVERTER_5_PID 0xD38D
+#define XSENS_CONVERTER_6_PID 0xD38E
+#define XSENS_CONVERTER_7_PID 0xD38F
+
+/*
+ * Teratronik product ids.
+ * Submitted by O. Wölfelschneider.
+ */
+#define FTDI_TERATRONIK_VCP_PID 0xEC88 /* Teratronik device (preferring VCP driver on windows) */
+#define FTDI_TERATRONIK_D2XX_PID 0xEC89 /* Teratronik device (preferring D2XX driver on windows) */
+
+/*
+ * Evolution Robotics products (http://www.evolution.com/).
+ * Submitted by Shawn M. Lavelle.
+ */
+#define EVOLUTION_VID 0xDEEE /* Vendor ID */
+#define EVOLUTION_ER1_PID 0x0300 /* ER1 Control Module */
+#define EVO_8U232AM_PID 0x02FF /* Evolution robotics RCM2 (FT232AM)*/
+#define EVO_HYBRID_PID 0x0302 /* Evolution robotics RCM4 PID (FT232BM)*/
+#define EVO_RCM4_PID 0x0303 /* Evolution robotics RCM4 PID */
+
+/* Pyramid Computer GmbH */
+#define FTDI_PYRAMID_PID 0xE6C8 /* Pyramid Appliance Display */
+
+/*
+ * NDI (www.ndigital.com) product ids
+ */
+#define FTDI_NDI_HUC_PID 0xDA70 /* NDI Host USB Converter */
+#define FTDI_NDI_SPECTRA_SCU_PID 0xDA71 /* NDI Spectra SCU */
+#define FTDI_NDI_FUTURE_2_PID 0xDA72 /* NDI future device #2 */
+#define FTDI_NDI_FUTURE_3_PID 0xDA73 /* NDI future device #3 */
+#define FTDI_NDI_AURORA_SCU_PID 0xDA74 /* NDI Aurora SCU */
+
+/*
+ * Posiflex inc retail equipment (http://www.posiflex.com.tw)
+ */
+#define POSIFLEX_VID 0x0d3a /* Vendor ID */
+#define POSIFLEX_PP7000_PID 0x0300 /* PP-7000II thermal printer */
+
+/*
+ * Westrex International devices submitted by Cory Lee
+ */
+#define FTDI_WESTREX_MODEL_777_PID 0xDC00 /* Model 777 */
+#define FTDI_WESTREX_MODEL_8900F_PID 0xDC01 /* Model 8900F */
+
+/*
+ * RR-CirKits LocoBuffer USB (http://www.rr-cirkits.com)
+ */
+#define FTDI_RRCIRKITS_LOCOBUFFER_PID 0xc7d0 /* LocoBuffer USB */
+
+/*
+ * Eclo (http://www.eclo.pt/) product IDs.
+ * PID 0xEA90 submitted by Martin Grill.
+ */
+#define FTDI_ECLO_COM_1WIRE_PID 0xEA90 /* COM to 1-Wire USB adaptor */
+
+/*
+ * Papouch products (http://www.papouch.com/)
+ * Submitted by Folkert van Heusden
+ */
+
+#define PAPOUCH_VID 0x5050 /* Vendor ID */
+#define PAPOUCH_TMU_PID 0x0400 /* TMU USB Thermometer */
+#define PAPOUCH_QUIDO4x4_PID 0x0900 /* Quido 4/4 Module */
+
+/*
+ * ACG Identification Technologies GmbH products (http://www.acg.de/).
+ * Submitted by anton -at- goto10 -dot- org.
*/
+#define FTDI_ACG_HFDUAL_PID 0xDD20 /* HF Dual ISO Reader (RFID) */
+
+/*
+ * Yost Engineering, Inc. products (www.yostengineering.com).
+ * PID 0xE050 submitted by Aaron Prose.
+ */
+#define FTDI_YEI_SERVOCENTER31_PID 0xE050 /* YEI ServoCenter3.1 USB */
+
+/*
+ * ThorLabs USB motor drivers
+ */
+#define FTDI_THORLABS_PID 0xfaf0 /* ThorLabs USB motor drivers */
+
+/*
+ * Testo products (http://www.testo.com/)
+ * Submitted by Colin Leroy
+ */
+#define TESTO_VID 0x128D
+#define TESTO_USB_INTERFACE_PID 0x0001
+
+/*
+ * Gamma Scout (http://gamma-scout.com/). Submitted by rsc@runtux.com.
+ */
+#define FTDI_GAMMA_SCOUT_PID 0xD678 /* Gamma Scout online */
+
+/*
+ * Tactrix OpenPort (ECU) devices.
+ * OpenPort 1.3M submitted by Donour Sizemore.
+ * OpenPort 1.3S and 1.3U submitted by Ian Abbott.
+ */
+#define FTDI_TACTRIX_OPENPORT_13M_PID 0xCC48 /* OpenPort 1.3 Mitsubishi */
+#define FTDI_TACTRIX_OPENPORT_13S_PID 0xCC49 /* OpenPort 1.3 Subaru */
+#define FTDI_TACTRIX_OPENPORT_13U_PID 0xCC4A /* OpenPort 1.3 Universal */
+
+/*
+ * Telldus Technologies
+ */
+#define TELLDUS_VID 0x1781 /* Vendor ID */
+#define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */
+
+/*
+ * IBS elektronik product ids
+ * Submitted by Thomas Schleusener
+ */
+#define FTDI_IBS_US485_PID 0xff38 /* IBS US485 (USB<-->RS422/485 interface) */
+#define FTDI_IBS_PICPRO_PID 0xff39 /* IBS PIC-Programmer */
+#define FTDI_IBS_PCMCIA_PID 0xff3a /* IBS Card reader for PCMCIA SRAM-cards */
+#define FTDI_IBS_PK1_PID	0xff3b	/* IBS PK1 - Particle counter */
+#define FTDI_IBS_RS232MON_PID 0xff3c /* IBS RS232 - Monitor */
+#define FTDI_IBS_APP70_PID 0xff3d /* APP 70 (dust monitoring system) */
+#define FTDI_IBS_PEDO_PID 0xff3e /* IBS PEDO-Modem (RF modem 868.35 MHz) */
+#define FTDI_IBS_PROD_PID 0xff3f /* future device */
+
+/*
+ * MaxStream devices www.maxstream.net
+ */
+#define FTDI_MAXSTREAM_PID 0xEE18 /* Xbee PKG-U Module */
+
+/* Olimex */
+#define OLIMEX_VID 0x15BA
+#define OLIMEX_ARM_USB_OCD_PID 0x0003
+
+/* Luminary Micro Stellaris Boards, VID = FTDI_VID */
+/* FTDI 2332C Dual channel device, side A=245 FIFO (JTAG), Side B=RS232 UART */
+#define LMI_LM3S_DEVEL_BOARD_PID 0xbcd8
+#define LMI_LM3S_EVAL_BOARD_PID 0xbcd9
+
+/* www.elsterelectricity.com Elster Unicom III Optical Probe */
+#define FTDI_ELSTER_UNICOM_PID 0xE700 /* Product Id */
+
+/*
+ * The Mobility Lab (TML)
+ * Submitted by Pierre Castella
+ */
+#define TML_VID 0x1B91 /* Vendor ID */
+#define TML_USB_SERIAL_PID 0x0064 /* USB - Serial Converter */
+
+/* Propox devices */
+#define FTDI_PROPOX_JTAGCABLEII_PID 0xD738
+
+/* Rig Expert Ukraine devices */
+#define FTDI_REU_TINY_PID 0xED22 /* RigExpert Tiny */
+
+/* Domintell products http://www.domintell.com */
+#define FTDI_DOMINTELL_DGQG_PID 0xEF50 /* Master */
+#define FTDI_DOMINTELL_DUSB_PID 0xEF51 /* DUSB01 module */
+
+/* Alti-2 products http://www.alti-2.com */
+#define ALTI2_VID 0x1BC9
+#define ALTI2_N3_PID 0x6001 /* Neptune 3 */
/* Commands */
#define FTDI_SIO_RESET 0 /* Reset the port */
#define INTERFACE_C 3
#define INTERFACE_D 4
+/*
+ * FIC / OpenMoko, Inc. http://wiki.openmoko.org/wiki/Neo1973_Debug_Board_v3
+ * Submitted by Harald Welte <laforge@openmoko.org>
+ */
+#define FIC_VID 0x1457
+#define FIC_NEO1973_DEBUG_PID 0x5118
+
+/*
+ * RATOC REX-USB60F
+ */
+#define RATOC_VENDOR_ID 0x0584
+#define RATOC_PRODUCT_ID_USB60F 0xb020
+
+/*
+ * DIEBOLD BCS SE923
+ */
+#define DIEBOLD_BCS_SE923_PID 0xfb99
+
+/*
+ * Atmel STK541
+ */
+#define ATMEL_VID 0x03eb /* Vendor ID */
+#define STK541_PID 0x2109 /* Zigbee Controller */
+
+/*
+ * Dresden Elektronic Sensor Terminal Board
+ */
+#define DE_VID 0x1cf1 /* Vendor ID */
+#define STB_PID 0x0001 /* Sensor Terminal Board */
+#define WHT_PID 0x0004 /* Wireless Handheld Terminal */
+
+/*
+ * Blackfin gnICE JTAG
+ * http://docs.blackfin.uclinux.org/doku.php?id=hw:jtag:gnice
+ */
+#define ADI_VID 0x0456
+#define ADI_GNICE_PID 0xF000
+#define ADI_GNICEPLUS_PID 0xF001
+
+/*
+ * JETI SPECTROMETER SPECBOS 1201
+ * http://www.jeti.com/products/sys/scb/scb1201.php
+ */
+#define JETI_VID 0x0c6c
+#define JETI_SPC1201_PID 0x04b2
+
+/*
+ * Marvell SheevaPlug
+ */
+#define MARVELL_VID 0x9e88
+#define MARVELL_SHEEVAPLUG_PID 0x9e8f
+
+#define FTDI_TURTELIZER_PID 0xBDC8 /* JTAG/RS-232 adapter by egnite GmBH */
+
+/*
+ * GN Otometrics (http://www.otometrics.com)
+ * Submitted by Ville Sundberg.
+ */
+#define GN_OTOMETRICS_VID 0x0c33 /* Vendor ID */
+#define AURICAL_USB_PID 0x0010 /* Aurical USB Audiometer */
+
+/*
+ * Bayer Ascensia Contour blood glucose meter USB-converter cable.
+ * http://winglucofacts.com/cables/
+ */
+#define BAYER_VID 0x1A79
+#define BAYER_CONTOUR_CABLE_PID 0x6001
+
+/*
+ * Marvell OpenRD Base, Client
+ * http://www.open-rd.org
+ * OpenRD Base, Client use VID 0x0403
+ */
+#define MARVELL_OPENRD_PID 0x9e90
+
+/*
+ * Hameg HO820 and HO870 interface (using VID 0x0403)
+ */
+#define HAMEG_HO820_PID 0xed74
+#define HAMEG_HO870_PID 0xed71
/*
* BmRequestType: 1100 0000b
* B2..7 Length of message - (not including Byte 0)
*
*/
+
+++ /dev/null
-/*
- * vendor/product IDs (VID/PID) of devices using FTDI USB serial converters.
- * Please keep numerically sorted within individual areas, thanks!
- *
- * Philipp Gühring - pg@futureware.at - added the Device ID of the USB relais
- * from Rudolf Gugler
- *
- */
-
-
-/**********************************/
-/***** devices using FTDI VID *****/
-/**********************************/
-
-
-#define FTDI_VID 0x0403 /* Vendor Id */
-
-
-/*** "original" FTDI device PIDs ***/
-
-#define FTDI_8U232AM_PID 0x6001 /* Similar device to SIO above */
-#define FTDI_8U232AM_ALT_PID 0x6006 /* FTDI's alternate PID for above */
-#define FTDI_8U2232C_PID 0x6010 /* Dual channel device */
-#define FTDI_4232H_PID 0x6011 /* Quad channel hi-speed device */
-#define FTDI_SIO_PID 0x8372 /* Product Id SIO application of 8U100AX */
-#define FTDI_232RL_PID 0xFBFA /* Product ID for FT232RL */
-
-
-/*** third-party PIDs (using FTDI_VID) ***/
-
-/*
- * Marvell OpenRD Base, Client
- * http://www.open-rd.org
- * OpenRD Base, Client use VID 0x0403
- */
-#define MARVELL_OPENRD_PID 0x9e90
-
-/* www.candapter.com Ewert Energy Systems CANdapter device */
-#define FTDI_CANDAPTER_PID 0x9F80 /* Product Id */
-
-#define FTDI_NXTCAM_PID 0xABB8 /* NXTCam for Mindstorms NXT */
-
-/* US Interface Navigator (http://www.usinterface.com/) */
-#define FTDI_USINT_CAT_PID 0xb810 /* Navigator CAT and 2nd PTT lines */
-#define FTDI_USINT_WKEY_PID 0xb811 /* Navigator WKEY and FSK lines */
-#define FTDI_USINT_RS232_PID 0xb812 /* Navigator RS232 and CONFIG lines */
-
-/* OOCDlink by Joern Kaipf <joernk@web.de>
- * (http://www.joernonline.de/dw/doku.php?id=start&idx=projects:oocdlink) */
-#define FTDI_OOCDLINK_PID 0xbaf8 /* Amontec JTAGkey */
-
-/* Luminary Micro Stellaris Boards, VID = FTDI_VID */
-/* FTDI 2332C Dual channel device, side A=245 FIFO (JTAG), Side B=RS232 UART */
-#define LMI_LM3S_DEVEL_BOARD_PID 0xbcd8
-#define LMI_LM3S_EVAL_BOARD_PID 0xbcd9
-
-#define FTDI_TURTELIZER_PID 0xBDC8 /* JTAG/RS-232 adapter by egnite GmBH */
-
-/* OpenDCC (www.opendcc.de) product id */
-#define FTDI_OPENDCC_PID 0xBFD8
-#define FTDI_OPENDCC_SNIFFER_PID 0xBFD9
-#define FTDI_OPENDCC_THROTTLE_PID 0xBFDA
-#define FTDI_OPENDCC_GATEWAY_PID 0xBFDB
-#define FTDI_OPENDCC_GBM_PID 0xBFDC
-
-/*
- * RR-CirKits LocoBuffer USB (http://www.rr-cirkits.com)
- */
-#define FTDI_RRCIRKITS_LOCOBUFFER_PID 0xc7d0 /* LocoBuffer USB */
-
-/* DMX4ALL DMX Interfaces */
-#define FTDI_DMX4ALL 0xC850
-
-/*
- * ASK.fr devices
- */
-#define FTDI_ASK_RDR400_PID 0xC991 /* ASK RDR 400 series card reader */
-
-/* www.starting-point-systems.com µChameleon device */
-#define FTDI_MICRO_CHAMELEON_PID 0xCAA0 /* Product Id */
-
-/*
- * Tactrix OpenPort (ECU) devices.
- * OpenPort 1.3M submitted by Donour Sizemore.
- * OpenPort 1.3S and 1.3U submitted by Ian Abbott.
- */
-#define FTDI_TACTRIX_OPENPORT_13M_PID 0xCC48 /* OpenPort 1.3 Mitsubishi */
-#define FTDI_TACTRIX_OPENPORT_13S_PID 0xCC49 /* OpenPort 1.3 Subaru */
-#define FTDI_TACTRIX_OPENPORT_13U_PID 0xCC4A /* OpenPort 1.3 Universal */
-
-/* SCS HF Radio Modems PID's (http://www.scs-ptc.com) */
-/* the VID is the standard ftdi vid (FTDI_VID) */
-#define FTDI_SCS_DEVICE_0_PID 0xD010 /* SCS PTC-IIusb */
-#define FTDI_SCS_DEVICE_1_PID 0xD011 /* SCS Tracker / DSP TNC */
-#define FTDI_SCS_DEVICE_2_PID 0xD012
-#define FTDI_SCS_DEVICE_3_PID 0xD013
-#define FTDI_SCS_DEVICE_4_PID 0xD014
-#define FTDI_SCS_DEVICE_5_PID 0xD015
-#define FTDI_SCS_DEVICE_6_PID 0xD016
-#define FTDI_SCS_DEVICE_7_PID 0xD017
-
-/* iPlus device */
-#define FTDI_IPLUS_PID 0xD070 /* Product Id */
-#define FTDI_IPLUS2_PID 0xD071 /* Product Id */
-
-/*
- * Gamma Scout (http://gamma-scout.com/). Submitted by rsc@runtux.com.
- */
-#define FTDI_GAMMA_SCOUT_PID 0xD678 /* Gamma Scout online */
-
-/* Propox devices */
-#define FTDI_PROPOX_JTAGCABLEII_PID 0xD738
-
-/* Lenz LI-USB Computer Interface. */
-#define FTDI_LENZ_LIUSB_PID 0xD780
-
-/* Vardaan Enterprises Serial Interface VEUSB422R3 */
-#define FTDI_VARDAAN_PID 0xF070
-
-/*
- * Xsens Technologies BV products (http://www.xsens.com).
- */
-#define XSENS_CONVERTER_0_PID 0xD388
-#define XSENS_CONVERTER_1_PID 0xD389
-#define XSENS_CONVERTER_2_PID 0xD38A
-#define XSENS_CONVERTER_3_PID 0xD38B
-#define XSENS_CONVERTER_4_PID 0xD38C
-#define XSENS_CONVERTER_5_PID 0xD38D
-#define XSENS_CONVERTER_6_PID 0xD38E
-#define XSENS_CONVERTER_7_PID 0xD38F
-
-/*
- * NDI (www.ndigital.com) product ids
- */
-#define FTDI_NDI_HUC_PID 0xDA70 /* NDI Host USB Converter */
-#define FTDI_NDI_SPECTRA_SCU_PID 0xDA71 /* NDI Spectra SCU */
-#define FTDI_NDI_FUTURE_2_PID 0xDA72 /* NDI future device #2 */
-#define FTDI_NDI_FUTURE_3_PID 0xDA73 /* NDI future device #3 */
-#define FTDI_NDI_AURORA_SCU_PID 0xDA74 /* NDI Aurora SCU */
-
-/*
- * ChamSys Limited (www.chamsys.co.uk) USB wing/interface product IDs
- */
-#define FTDI_CHAMSYS_24_MASTER_WING_PID 0xDAF8
-#define FTDI_CHAMSYS_PC_WING_PID 0xDAF9
-#define FTDI_CHAMSYS_USB_DMX_PID 0xDAFA
-#define FTDI_CHAMSYS_MIDI_TIMECODE_PID 0xDAFB
-#define FTDI_CHAMSYS_MINI_WING_PID 0xDAFC
-#define FTDI_CHAMSYS_MAXI_WING_PID 0xDAFD
-#define FTDI_CHAMSYS_MEDIA_WING_PID 0xDAFE
-#define FTDI_CHAMSYS_WING_PID 0xDAFF
-
-/*
- * Westrex International devices submitted by Cory Lee
- */
-#define FTDI_WESTREX_MODEL_777_PID 0xDC00 /* Model 777 */
-#define FTDI_WESTREX_MODEL_8900F_PID 0xDC01 /* Model 8900F */
-
-/*
- * ACG Identification Technologies GmbH products (http://www.acg.de/).
- * Submitted by anton -at- goto10 -dot- org.
- */
-#define FTDI_ACG_HFDUAL_PID 0xDD20 /* HF Dual ISO Reader (RFID) */
-
-/*
- * Definitions for Artemis astronomical USB based cameras
- * Check it at http://www.artemisccd.co.uk/
- */
-#define FTDI_ARTEMIS_PID 0xDF28 /* All Artemis Cameras */
-
-/*
- * Definitions for ATIK Instruments astronomical USB based cameras
- * Check it at http://www.atik-instruments.com/
- */
-#define FTDI_ATIK_ATK16_PID 0xDF30 /* ATIK ATK-16 Grayscale Camera */
-#define FTDI_ATIK_ATK16C_PID 0xDF32 /* ATIK ATK-16C Colour Camera */
-#define FTDI_ATIK_ATK16HR_PID 0xDF31 /* ATIK ATK-16HR Grayscale Camera */
-#define FTDI_ATIK_ATK16HRC_PID 0xDF33 /* ATIK ATK-16HRC Colour Camera */
-#define FTDI_ATIK_ATK16IC_PID 0xDF35 /* ATIK ATK-16IC Grayscale Camera */
-
-/*
- * Yost Engineering, Inc. products (www.yostengineering.com).
- * PID 0xE050 submitted by Aaron Prose.
- */
-#define FTDI_YEI_SERVOCENTER31_PID 0xE050 /* YEI ServoCenter3.1 USB */
-
-/*
- * ELV USB devices submitted by Christian Abt of ELV (www.elv.de).
- * All of these devices use FTDI's vendor ID (0x0403).
- * Further IDs taken from ELV Windows .inf file.
- *
- * The previously included PID for the UO 100 module was incorrect.
- * In fact, that PID was for ELV's UR 100 USB-RS232 converter (0xFB58).
- *
- * Armin Laeuger originally sent the PID for the UM 100 module.
- */
-#define FTDI_ELV_USR_PID 0xE000 /* ELV Universal-Sound-Recorder */
-#define FTDI_ELV_MSM1_PID 0xE001 /* ELV Mini-Sound-Modul */
-#define FTDI_ELV_KL100_PID 0xE002 /* ELV Kfz-Leistungsmesser KL 100 */
-#define FTDI_ELV_WS550_PID 0xE004 /* WS 550 */
-#define FTDI_ELV_EC3000_PID 0xE006 /* ENERGY CONTROL 3000 USB */
-#define FTDI_ELV_WS888_PID 0xE008 /* WS 888 */
-#define FTDI_ELV_TWS550_PID 0xE009 /* Technoline WS 550 */
-#define FTDI_ELV_FEM_PID 0xE00A /* Funk Energie Monitor */
-#define FTDI_ELV_FHZ1300PC_PID 0xE0E8 /* FHZ 1300 PC */
-#define FTDI_ELV_WS500_PID 0xE0E9 /* PC-Wetterstation (WS 500) */
-#define FTDI_ELV_HS485_PID 0xE0EA /* USB to RS-485 adapter */
-#define FTDI_ELV_UMS100_PID 0xE0EB /* ELV USB Master-Slave Schaltsteckdose UMS 100 */
-#define FTDI_ELV_TFD128_PID 0xE0EC /* ELV Temperatur-Feuchte-Datenlogger TFD 128 */
-#define FTDI_ELV_FM3RX_PID 0xE0ED /* ELV Messwertuebertragung FM3 RX */
-#define FTDI_ELV_WS777_PID 0xE0EE /* Conrad WS 777 */
-#define FTDI_ELV_EM1010PC_PID 0xE0EF /* Engery monitor EM 1010 PC */
-#define FTDI_ELV_CSI8_PID 0xE0F0 /* Computer-Schalt-Interface (CSI 8) */
-#define FTDI_ELV_EM1000DL_PID 0xE0F1 /* PC-Datenlogger fuer Energiemonitor (EM 1000 DL) */
-#define FTDI_ELV_PCK100_PID 0xE0F2 /* PC-Kabeltester (PCK 100) */
-#define FTDI_ELV_RFP500_PID 0xE0F3 /* HF-Leistungsmesser (RFP 500) */
-#define FTDI_ELV_FS20SIG_PID 0xE0F4 /* Signalgeber (FS 20 SIG) */
-#define FTDI_ELV_UTP8_PID 0xE0F5 /* ELV UTP 8 */
-#define FTDI_ELV_WS300PC_PID 0xE0F6 /* PC-Wetterstation (WS 300 PC) */
-#define FTDI_ELV_WS444PC_PID 0xE0F7 /* Conrad WS 444 PC */
-#define FTDI_PHI_FISCO_PID 0xE40B /* PHI Fisco USB to Serial cable */
-#define FTDI_ELV_UAD8_PID 0xF068 /* USB-AD-Wandler (UAD 8) */
-#define FTDI_ELV_UDA7_PID 0xF069 /* USB-DA-Wandler (UDA 7) */
-#define FTDI_ELV_USI2_PID 0xF06A /* USB-Schrittmotoren-Interface (USI 2) */
-#define FTDI_ELV_T1100_PID 0xF06B /* Thermometer (T 1100) */
-#define FTDI_ELV_PCD200_PID 0xF06C /* PC-Datenlogger (PCD 200) */
-#define FTDI_ELV_ULA200_PID 0xF06D /* USB-LCD-Ansteuerung (ULA 200) */
-#define FTDI_ELV_ALC8500_PID 0xF06E /* ALC 8500 Expert */
-#define FTDI_ELV_FHZ1000PC_PID 0xF06F /* FHZ 1000 PC */
-#define FTDI_ELV_UR100_PID 0xFB58 /* USB-RS232-Umsetzer (UR 100) */
-#define FTDI_ELV_UM100_PID 0xFB5A /* USB-Modul UM 100 */
-#define FTDI_ELV_UO100_PID 0xFB5B /* USB-Modul UO 100 */
-/* Additional ELV PIDs that default to using the FTDI D2XX drivers on
- * MS Windows, rather than the FTDI Virtual Com Port drivers.
- * Maybe these will be easier to use with the libftdi/libusb user-space
- * drivers, or possibly the Comedi drivers in some cases. */
-#define FTDI_ELV_CLI7000_PID 0xFB59 /* Computer-Light-Interface (CLI 7000) */
-#define FTDI_ELV_PPS7330_PID 0xFB5C /* Processor-Power-Supply (PPS 7330) */
-#define FTDI_ELV_TFM100_PID 0xFB5D /* Temperartur-Feuchte Messgeraet (TFM 100) */
-#define FTDI_ELV_UDF77_PID 0xFB5E /* USB DCF Funkurh (UDF 77) */
-#define FTDI_ELV_UIO88_PID 0xFB5F /* USB-I/O Interface (UIO 88) */
-
-/*
- * EVER Eco Pro UPS (http://www.ever.com.pl/)
- */
-
-#define EVER_ECO_PRO_CDS 0xe520 /* RS-232 converter */
-
-/*
- * Active Robots product ids.
- */
-#define FTDI_ACTIVE_ROBOTS_PID 0xE548 /* USB comms board */
-
-/* Pyramid Computer GmbH */
-#define FTDI_PYRAMID_PID 0xE6C8 /* Pyramid Appliance Display */
-
-/* www.elsterelectricity.com Elster Unicom III Optical Probe */
-#define FTDI_ELSTER_UNICOM_PID 0xE700 /* Product Id */
-
-/*
- * Gude Analog- und Digitalsysteme GmbH
- */
-#define FTDI_GUDEADS_E808_PID 0xE808
-#define FTDI_GUDEADS_E809_PID 0xE809
-#define FTDI_GUDEADS_E80A_PID 0xE80A
-#define FTDI_GUDEADS_E80B_PID 0xE80B
-#define FTDI_GUDEADS_E80C_PID 0xE80C
-#define FTDI_GUDEADS_E80D_PID 0xE80D
-#define FTDI_GUDEADS_E80E_PID 0xE80E
-#define FTDI_GUDEADS_E80F_PID 0xE80F
-#define FTDI_GUDEADS_E888_PID 0xE888 /* Expert ISDN Control USB */
-#define FTDI_GUDEADS_E889_PID 0xE889 /* USB RS-232 OptoBridge */
-#define FTDI_GUDEADS_E88A_PID 0xE88A
-#define FTDI_GUDEADS_E88B_PID 0xE88B
-#define FTDI_GUDEADS_E88C_PID 0xE88C
-#define FTDI_GUDEADS_E88D_PID 0xE88D
-#define FTDI_GUDEADS_E88E_PID 0xE88E
-#define FTDI_GUDEADS_E88F_PID 0xE88F
-
-/*
- * Eclo (http://www.eclo.pt/) product IDs.
- * PID 0xEA90 submitted by Martin Grill.
- */
-#define FTDI_ECLO_COM_1WIRE_PID 0xEA90 /* COM to 1-Wire USB adaptor */
-
-/* TNC-X USB-to-packet-radio adapter, versions prior to 3.0 (DLP module) */
-#define FTDI_TNC_X_PID 0xEBE0
-
-/*
- * Teratronik product ids.
- * Submitted by O. Wölfelschneider.
- */
-#define FTDI_TERATRONIK_VCP_PID 0xEC88 /* Teratronik device (preferring VCP driver on windows) */
-#define FTDI_TERATRONIK_D2XX_PID 0xEC89 /* Teratronik device (preferring D2XX driver on windows) */
-
-/* Rig Expert Ukraine devices */
-#define FTDI_REU_TINY_PID 0xED22 /* RigExpert Tiny */
-
-/*
- * Hameg HO820 and HO870 interface (using VID 0x0403)
- */
-#define HAMEG_HO820_PID 0xed74
-#define HAMEG_HO870_PID 0xed71
-
-/*
- * MaxStream devices www.maxstream.net
- */
-#define FTDI_MAXSTREAM_PID 0xEE18 /* Xbee PKG-U Module */
-
-/*
- * microHAM product IDs (http://www.microham.com).
- * Submitted by Justin Burket (KL1RL) <zorton@jtan.com>
- * and Mike Studer (K6EEP) <k6eep@hamsoftware.org>.
- * Ian Abbott <abbotti@mev.co.uk> added a few more from the driver INF file.
- */
-#define FTDI_MHAM_KW_PID 0xEEE8 /* USB-KW interface */
-#define FTDI_MHAM_YS_PID 0xEEE9 /* USB-YS interface */
-#define FTDI_MHAM_Y6_PID 0xEEEA /* USB-Y6 interface */
-#define FTDI_MHAM_Y8_PID 0xEEEB /* USB-Y8 interface */
-#define FTDI_MHAM_IC_PID 0xEEEC /* USB-IC interface */
-#define FTDI_MHAM_DB9_PID 0xEEED /* USB-DB9 interface */
-#define FTDI_MHAM_RS232_PID 0xEEEE /* USB-RS232 interface */
-#define FTDI_MHAM_Y9_PID 0xEEEF /* USB-Y9 interface */
-
-/* Domintell products http://www.domintell.com */
-#define FTDI_DOMINTELL_DGQG_PID 0xEF50 /* Master */
-#define FTDI_DOMINTELL_DUSB_PID 0xEF51 /* DUSB01 module */
-
-/*
- * The following are the values for the Perle Systems
- * UltraPort USB serial converters
- */
-#define FTDI_PERLE_ULTRAPORT_PID 0xF0C0 /* Perle UltraPort Product Id */
-
-/* Sprog II (Andrew Crosland's SprogII DCC interface) */
-#define FTDI_SPROG_II 0xF0C8
-
-/* an infrared receiver for user access control with IR tags */
-#define FTDI_PIEGROUP_PID 0xF208 /* Product Id */
-
-/* ACT Solutions HomePro ZWave interface
- (http://www.act-solutions.com/HomePro.htm) */
-#define FTDI_ACTZWAVE_PID 0xF2D0
-
-/*
- * 4N-GALAXY.DE PIDs for CAN-USB, USB-RS232, USB-RS422, USB-RS485,
- * USB-TTY activ, USB-TTY passiv. Some PIDs are used by several devices
- * and I'm not entirely sure which are used by which.
- */
-#define FTDI_4N_GALAXY_DE_1_PID 0xF3C0
-#define FTDI_4N_GALAXY_DE_2_PID 0xF3C1
-
-/*
- * Linx Technologies product ids
- */
-#define LINX_SDMUSBQSS_PID 0xF448 /* Linx SDM-USB-QS-S */
-#define LINX_MASTERDEVEL2_PID 0xF449 /* Linx Master Development 2.0 */
-#define LINX_FUTURE_0_PID 0xF44A /* Linx future device */
-#define LINX_FUTURE_1_PID 0xF44B /* Linx future device */
-#define LINX_FUTURE_2_PID 0xF44C /* Linx future device */
-
-/*
- * Oceanic product ids
- */
-#define FTDI_OCEANIC_PID 0xF460 /* Oceanic dive instrument */
-
-/*
- * SUUNTO product ids
- */
-#define FTDI_SUUNTO_SPORTS_PID 0xF680 /* Suunto Sports instrument */
-
-/* USB-UIRT - An infrared receiver and transmitter using the 8U232AM chip */
-/* http://home.earthlink.net/~jrhees/USBUIRT/index.htm */
-#define FTDI_USB_UIRT_PID 0xF850 /* Product Id */
-
-/* CCS Inc. ICDU/ICDU40 product ID -
- * the FT232BM is used in an in-circuit-debugger unit for PIC16's/PIC18's */
-#define FTDI_CCSICDU20_0_PID 0xF9D0
-#define FTDI_CCSICDU40_1_PID 0xF9D1
-#define FTDI_CCSMACHX_2_PID 0xF9D2
-#define FTDI_CCSLOAD_N_GO_3_PID 0xF9D3
-#define FTDI_CCSICDU64_4_PID 0xF9D4
-#define FTDI_CCSPRIME8_5_PID 0xF9D5
-
-/*
- * The following are the values for the Matrix Orbital LCD displays,
- * which are the FT232BM ( similar to the 8U232AM )
- */
-#define FTDI_MTXORB_0_PID 0xFA00 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_1_PID 0xFA01 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_2_PID 0xFA02 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_3_PID 0xFA03 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_4_PID 0xFA04 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_5_PID 0xFA05 /* Matrix Orbital Product Id */
-#define FTDI_MTXORB_6_PID 0xFA06 /* Matrix Orbital Product Id */
-
-/*
- * Home Electronics (www.home-electro.com) USB gadgets
- */
-#define FTDI_HE_TIRA1_PID 0xFA78 /* Tira-1 IR transceiver */
-
-/* Inside Accesso contactless reader (http://www.insidefr.com) */
-#define INSIDE_ACCESSO 0xFAD0
-
-/*
- * ThorLabs USB motor drivers
- */
-#define FTDI_THORLABS_PID 0xfaf0 /* ThorLabs USB motor drivers */
-
-/*
- * Protego product ids
- */
-#define PROTEGO_SPECIAL_1 0xFC70 /* special/unknown device */
-#define PROTEGO_R2X0 0xFC71 /* R200-USB TRNG unit (R210, R220, and R230) */
-#define PROTEGO_SPECIAL_3 0xFC72 /* special/unknown device */
-#define PROTEGO_SPECIAL_4 0xFC73 /* special/unknown device */
-
-/*
- * DSS-20 Sync Station for Sony Ericsson P800
- */
-#define FTDI_DSS20_PID 0xFC82
-
-/* www.irtrans.de device */
-#define FTDI_IRTRANS_PID 0xFC60 /* Product Id */
-
-/*
- * RM Michaelides CANview USB (http://www.rmcan.com) (FTDI_VID)
- * CAN fieldbus interface adapter, added by port GmbH www.port.de)
- * Ian Abbott changed the macro names for consistency.
- */
-#define FTDI_RM_CANVIEW_PID 0xfd60 /* Product Id */
-/* www.thoughttechnology.com/ TT-USB provide with procomp use ftdi_sio */
-#define FTDI_TTUSB_PID 0xFF20 /* Product Id */
-
-#define FTDI_USBX_707_PID 0xF857 /* ADSTech IR Blaster USBX-707 (FTDI_VID) */
-
-#define FTDI_RELAIS_PID 0xFA10 /* Relais device from Rudolf Gugler */
-
-/*
- * PCDJ use ftdi based dj-controllers. The following PID is
- * for their DAC-2 device http://www.pcdjhardware.com/DAC2.asp
- * (the VID is the standard ftdi vid (FTDI_VID), PID sent by Wouter Paesen)
- */
-#define FTDI_PCDJ_DAC2_PID 0xFA88
-
-#define FTDI_R2000KU_TRUE_RNG 0xFB80 /* R2000KU TRUE RNG (FTDI_VID) */
-
-/*
- * DIEBOLD BCS SE923 (FTDI_VID)
- */
-#define DIEBOLD_BCS_SE923_PID 0xfb99
-
-/* www.crystalfontz.com devices
- * - thanx for providing free devices for evaluation !
- * they use the ftdi chipset for the USB interface
- * and the vendor id is the same
- */
-#define FTDI_XF_632_PID 0xFC08 /* 632: 16x2 Character Display */
-#define FTDI_XF_634_PID 0xFC09 /* 634: 20x4 Character Display */
-#define FTDI_XF_547_PID 0xFC0A /* 547: Two line Display */
-#define FTDI_XF_633_PID 0xFC0B /* 633: 16x2 Character Display with Keys */
-#define FTDI_XF_631_PID 0xFC0C /* 631: 20x2 Character Display */
-#define FTDI_XF_635_PID 0xFC0D /* 635: 20x4 Character Display */
-#define FTDI_XF_640_PID 0xFC0E /* 640: Two line Display */
-#define FTDI_XF_642_PID 0xFC0F /* 642: Two line Display */
-
-/*
- * Video Networks Limited / Homechoice in the UK use an ftdi-based device
- * for their 1Mb broadband internet service. The following PID is exhibited
- * by the usb device supplied (the VID is the standard ftdi vid (FTDI_VID)
- */
-#define FTDI_VNHCPCUSB_D_PID 0xfe38 /* Product Id */
-
-/* AlphaMicro Components AMC-232USB01 device (FTDI_VID) */
-#define FTDI_AMC232_PID 0xFF00 /* Product Id */
-
-/*
- * IBS elektronik product ids (FTDI_VID)
- * Submitted by Thomas Schleusener
- */
-#define FTDI_IBS_US485_PID 0xff38 /* IBS US485 (USB<-->RS422/485 interface) */
-#define FTDI_IBS_PICPRO_PID 0xff39 /* IBS PIC-Programmer */
-#define FTDI_IBS_PCMCIA_PID 0xff3a /* IBS Card reader for PCMCIA SRAM-cards */
-#define FTDI_IBS_PK1_PID 0xff3b /* IBS PK1 - Particel counter */
-#define FTDI_IBS_RS232MON_PID 0xff3c /* IBS RS232 - Monitor */
-#define FTDI_IBS_APP70_PID 0xff3d /* APP 70 (dust monitoring system) */
-#define FTDI_IBS_PEDO_PID 0xff3e /* IBS PEDO-Modem (RF modem 868.35 MHz) */
-#define FTDI_IBS_PROD_PID 0xff3f /* future device */
-/* www.canusb.com Lawicel CANUSB device (FTDI_VID) */
-#define FTDI_CANUSB_PID 0xFFA8 /* Product Id */
-
-
-
-/********************************/
-/** third-party VID/PID combos **/
-/********************************/
-
-
-
-/*
- * Atmel STK541
- */
-#define ATMEL_VID 0x03eb /* Vendor ID */
-#define STK541_PID 0x2109 /* Zigbee Controller */
-
-/*
- * Blackfin gnICE JTAG
- * http://docs.blackfin.uclinux.org/doku.php?id=hw:jtag:gnice
- */
-#define ADI_VID 0x0456
-#define ADI_GNICE_PID 0xF000
-#define ADI_GNICEPLUS_PID 0xF001
-
-/*
- * RATOC REX-USB60F
- */
-#define RATOC_VENDOR_ID 0x0584
-#define RATOC_PRODUCT_ID_USB60F 0xb020
-
-/*
- * Contec products (http://www.contec.com)
- * Submitted by Daniel Sangorrin
- */
-#define CONTEC_VID 0x06CE /* Vendor ID */
-#define CONTEC_COM1USBH_PID 0x8311 /* COM-1(USB)H */
-
-/*
- * Definitions for B&B Electronics products.
- */
-#define BANDB_VID 0x0856 /* B&B Electronics Vendor ID */
-#define BANDB_USOTL4_PID 0xAC01 /* USOTL4 Isolated RS-485 Converter */
-#define BANDB_USTL4_PID 0xAC02 /* USTL4 RS-485 Converter */
-#define BANDB_USO9ML2_PID 0xAC03 /* USO9ML2 Isolated RS-232 Converter */
-#define BANDB_USOPTL4_PID 0xAC11
-#define BANDB_USPTL4_PID 0xAC12
-#define BANDB_USO9ML2DR_2_PID 0xAC16
-#define BANDB_USO9ML2DR_PID 0xAC17
-#define BANDB_USOPTL4DR2_PID 0xAC18 /* USOPTL4R-2 2-port Isolated RS-232 Converter */
-#define BANDB_USOPTL4DR_PID 0xAC19
-#define BANDB_485USB9F_2W_PID 0xAC25
-#define BANDB_485USB9F_4W_PID 0xAC26
-#define BANDB_232USB9M_PID 0xAC27
-#define BANDB_485USBTB_2W_PID 0xAC33
-#define BANDB_485USBTB_4W_PID 0xAC34
-#define BANDB_TTL5USB9M_PID 0xAC49
-#define BANDB_TTL3USB9M_PID 0xAC50
-#define BANDB_ZZ_PROG1_USB_PID 0xBA02
-
-/*
- * Intrepid Control Systems (http://www.intrepidcs.com/) ValueCAN and NeoVI
- */
-#define INTREPID_VID 0x093C
-#define INTREPID_VALUECAN_PID 0x0601
-#define INTREPID_NEOVI_PID 0x0701
-
-/*
- * Definitions for ID TECH (www.idt-net.com) devices
- */
-#define IDTECH_VID 0x0ACD /* ID TECH Vendor ID */
-#define IDTECH_IDT1221U_PID 0x0300 /* IDT1221U USB to RS-232 adapter */
-
-/*
- * Definitions for Omnidirectional Control Technology, Inc. devices
- */
-#define OCT_VID 0x0B39 /* OCT vendor ID */
-/* Note: OCT US101 is also rebadged as Dick Smith Electronics (NZ) XH6381 */
-/* Also rebadged as Dick Smith Electronics (Aus) XH6451 */
-/* Also rebadged as SIIG Inc. model US2308 hardware version 1 */
-#define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */
-
-/*
- * Icom ID-1 digital transceiver
- */
-
-#define ICOM_ID1_VID 0x0C26
-#define ICOM_ID1_PID 0x0004
-
-/*
- * GN Otometrics (http://www.otometrics.com)
- * Submitted by Ville Sundberg.
- */
-#define GN_OTOMETRICS_VID 0x0c33 /* Vendor ID */
-#define AURICAL_USB_PID 0x0010 /* Aurical USB Audiometer */
-
-/*
- * The following are the values for the Sealevel SeaLINK+ adapters.
- * (Original list sent by Tuan Hoang. Ian Abbott renamed the macros and
- * removed some PIDs that don't seem to match any existing products.)
- */
-#define SEALEVEL_VID 0x0c52 /* Sealevel Vendor ID */
-#define SEALEVEL_2101_PID 0x2101 /* SeaLINK+232 (2101/2105) */
-#define SEALEVEL_2102_PID 0x2102 /* SeaLINK+485 (2102) */
-#define SEALEVEL_2103_PID 0x2103 /* SeaLINK+232I (2103) */
-#define SEALEVEL_2104_PID 0x2104 /* SeaLINK+485I (2104) */
-#define SEALEVEL_2106_PID 0x9020 /* SeaLINK+422 (2106) */
-#define SEALEVEL_2201_1_PID 0x2211 /* SeaPORT+2/232 (2201) Port 1 */
-#define SEALEVEL_2201_2_PID 0x2221 /* SeaPORT+2/232 (2201) Port 2 */
-#define SEALEVEL_2202_1_PID 0x2212 /* SeaPORT+2/485 (2202) Port 1 */
-#define SEALEVEL_2202_2_PID 0x2222 /* SeaPORT+2/485 (2202) Port 2 */
-#define SEALEVEL_2203_1_PID 0x2213 /* SeaPORT+2 (2203) Port 1 */
-#define SEALEVEL_2203_2_PID 0x2223 /* SeaPORT+2 (2203) Port 2 */
-#define SEALEVEL_2401_1_PID 0x2411 /* SeaPORT+4/232 (2401) Port 1 */
-#define SEALEVEL_2401_2_PID 0x2421 /* SeaPORT+4/232 (2401) Port 2 */
-#define SEALEVEL_2401_3_PID 0x2431 /* SeaPORT+4/232 (2401) Port 3 */
-#define SEALEVEL_2401_4_PID 0x2441 /* SeaPORT+4/232 (2401) Port 4 */
-#define SEALEVEL_2402_1_PID 0x2412 /* SeaPORT+4/485 (2402) Port 1 */
-#define SEALEVEL_2402_2_PID 0x2422 /* SeaPORT+4/485 (2402) Port 2 */
-#define SEALEVEL_2402_3_PID 0x2432 /* SeaPORT+4/485 (2402) Port 3 */
-#define SEALEVEL_2402_4_PID 0x2442 /* SeaPORT+4/485 (2402) Port 4 */
-#define SEALEVEL_2403_1_PID 0x2413 /* SeaPORT+4 (2403) Port 1 */
-#define SEALEVEL_2403_2_PID 0x2423 /* SeaPORT+4 (2403) Port 2 */
-#define SEALEVEL_2403_3_PID 0x2433 /* SeaPORT+4 (2403) Port 3 */
-#define SEALEVEL_2403_4_PID 0x2443 /* SeaPORT+4 (2403) Port 4 */
-#define SEALEVEL_2801_1_PID 0X2811 /* SeaLINK+8/232 (2801) Port 1 */
-#define SEALEVEL_2801_2_PID 0X2821 /* SeaLINK+8/232 (2801) Port 2 */
-#define SEALEVEL_2801_3_PID 0X2831 /* SeaLINK+8/232 (2801) Port 3 */
-#define SEALEVEL_2801_4_PID 0X2841 /* SeaLINK+8/232 (2801) Port 4 */
-#define SEALEVEL_2801_5_PID 0X2851 /* SeaLINK+8/232 (2801) Port 5 */
-#define SEALEVEL_2801_6_PID 0X2861 /* SeaLINK+8/232 (2801) Port 6 */
-#define SEALEVEL_2801_7_PID 0X2871 /* SeaLINK+8/232 (2801) Port 7 */
-#define SEALEVEL_2801_8_PID 0X2881 /* SeaLINK+8/232 (2801) Port 8 */
-#define SEALEVEL_2802_1_PID 0X2812 /* SeaLINK+8/485 (2802) Port 1 */
-#define SEALEVEL_2802_2_PID 0X2822 /* SeaLINK+8/485 (2802) Port 2 */
-#define SEALEVEL_2802_3_PID 0X2832 /* SeaLINK+8/485 (2802) Port 3 */
-#define SEALEVEL_2802_4_PID 0X2842 /* SeaLINK+8/485 (2802) Port 4 */
-#define SEALEVEL_2802_5_PID 0X2852 /* SeaLINK+8/485 (2802) Port 5 */
-#define SEALEVEL_2802_6_PID 0X2862 /* SeaLINK+8/485 (2802) Port 6 */
-#define SEALEVEL_2802_7_PID 0X2872 /* SeaLINK+8/485 (2802) Port 7 */
-#define SEALEVEL_2802_8_PID 0X2882 /* SeaLINK+8/485 (2802) Port 8 */
-#define SEALEVEL_2803_1_PID 0X2813 /* SeaLINK+8 (2803) Port 1 */
-#define SEALEVEL_2803_2_PID 0X2823 /* SeaLINK+8 (2803) Port 2 */
-#define SEALEVEL_2803_3_PID 0X2833 /* SeaLINK+8 (2803) Port 3 */
-#define SEALEVEL_2803_4_PID 0X2843 /* SeaLINK+8 (2803) Port 4 */
-#define SEALEVEL_2803_5_PID 0X2853 /* SeaLINK+8 (2803) Port 5 */
-#define SEALEVEL_2803_6_PID 0X2863 /* SeaLINK+8 (2803) Port 6 */
-#define SEALEVEL_2803_7_PID 0X2873 /* SeaLINK+8 (2803) Port 7 */
-#define SEALEVEL_2803_8_PID 0X2883 /* SeaLINK+8 (2803) Port 8 */
-
-/*
- * JETI SPECTROMETER SPECBOS 1201
- * http://www.jeti.com/products/sys/scb/scb1201.php
- */
-#define JETI_VID 0x0c6c
-#define JETI_SPC1201_PID 0x04b2
-
-/*
- * FTDI USB UART chips used in construction projects from the
- * Elektor Electronics magazine (http://elektor-electronics.co.uk)
- */
-#define ELEKTOR_VID 0x0C7D
-#define ELEKTOR_FT323R_PID 0x0005 /* RFID-Reader, issue 09-2006 */
-
-/*
- * Posiflex inc retail equipment (http://www.posiflex.com.tw)
- */
-#define POSIFLEX_VID 0x0d3a /* Vendor ID */
-#define POSIFLEX_PP7000_PID 0x0300 /* PP-7000II thermal printer */
-
-/*
- * The following are the values for two KOBIL chipcard terminals.
- */
-#define KOBIL_VID 0x0d46 /* KOBIL Vendor ID */
-#define KOBIL_CONV_B1_PID 0x2020 /* KOBIL Konverter for B1 */
-#define KOBIL_CONV_KAAN_PID 0x2021 /* KOBIL_Konverter for KAAN */
-
-#define FTDI_NF_RIC_VID 0x0DCD /* Vendor Id */
-#define FTDI_NF_RIC_PID 0x0001 /* Product Id */
-
-/*
- * Falcom Wireless Communications GmbH
- */
-#define FALCOM_VID 0x0F94 /* Vendor Id */
-#define FALCOM_TWIST_PID 0x0001 /* Falcom Twist USB GPRS modem */
-#define FALCOM_SAMBA_PID 0x0005 /* Falcom Samba USB GPRS modem */
-
-/* Larsen and Brusgaard AltiTrack/USBtrack */
-#define LARSENBRUSGAARD_VID 0x0FD8
-#define LB_ALTITRACK_PID 0x0001
-
-/*
- * TTi (Thurlby Thandar Instruments)
- */
-#define TTI_VID 0x103E /* Vendor Id */
-#define TTI_QL355P_PID 0x03E8 /* TTi QL355P power supply */
-
-/* Interbiometrics USB I/O Board */
-/* Developed for Interbiometrics by Rudolf Gugler */
-#define INTERBIOMETRICS_VID 0x1209
-#define INTERBIOMETRICS_IOBOARD_PID 0x1002
-#define INTERBIOMETRICS_MINI_IOBOARD_PID 0x1006
-
-/*
- * Testo products (http://www.testo.com/)
- * Submitted by Colin Leroy
- */
-#define TESTO_VID 0x128D
-#define TESTO_USB_INTERFACE_PID 0x0001
-
-/*
- * Mobility Electronics products.
- */
-#define MOBILITY_VID 0x1342
-#define MOBILITY_USB_SERIAL_PID 0x0202 /* EasiDock USB 200 serial */
-
-/*
- * FIC / OpenMoko, Inc. http://wiki.openmoko.org/wiki/Neo1973_Debug_Board_v3
- * Submitted by Harald Welte <laforge@openmoko.org>
- */
-#define FIC_VID 0x1457
-#define FIC_NEO1973_DEBUG_PID 0x5118
-
-/* Olimex */
-#define OLIMEX_VID 0x15BA
-#define OLIMEX_ARM_USB_OCD_PID 0x0003
-
-/*
- * Telldus Technologies
- */
-#define TELLDUS_VID 0x1781 /* Vendor ID */
-#define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */
-
-/*
- * RT Systems programming cables for various ham radios
- */
-#define RTSYSTEMS_VID 0x2100 /* Vendor ID */
-#define RTSYSTEMS_SERIAL_VX7_PID 0x9e52 /* Serial converter for VX-7 Radios using FT232RL */
-#define RTSYSTEMS_CT29B_PID 0x9e54 /* CT29B Radio Cable */
-
-/*
- * Bayer Ascensia Contour blood glucose meter USB-converter cable.
- * http://winglucofacts.com/cables/
- */
-#define BAYER_VID 0x1A79
-#define BAYER_CONTOUR_CABLE_PID 0x6001
-
-/*
- * The following are the values for the Matrix Orbital FTDI Range
- * Anything in this range will use an FT232RL.
- */
-#define MTXORB_VID 0x1B3D
-#define MTXORB_FTDI_RANGE_0100_PID 0x0100
-#define MTXORB_FTDI_RANGE_0101_PID 0x0101
-#define MTXORB_FTDI_RANGE_0102_PID 0x0102
-#define MTXORB_FTDI_RANGE_0103_PID 0x0103
-#define MTXORB_FTDI_RANGE_0104_PID 0x0104
-#define MTXORB_FTDI_RANGE_0105_PID 0x0105
-#define MTXORB_FTDI_RANGE_0106_PID 0x0106
-#define MTXORB_FTDI_RANGE_0107_PID 0x0107
-#define MTXORB_FTDI_RANGE_0108_PID 0x0108
-#define MTXORB_FTDI_RANGE_0109_PID 0x0109
-#define MTXORB_FTDI_RANGE_010A_PID 0x010A
-#define MTXORB_FTDI_RANGE_010B_PID 0x010B
-#define MTXORB_FTDI_RANGE_010C_PID 0x010C
-#define MTXORB_FTDI_RANGE_010D_PID 0x010D
-#define MTXORB_FTDI_RANGE_010E_PID 0x010E
-#define MTXORB_FTDI_RANGE_010F_PID 0x010F
-#define MTXORB_FTDI_RANGE_0110_PID 0x0110
-#define MTXORB_FTDI_RANGE_0111_PID 0x0111
-#define MTXORB_FTDI_RANGE_0112_PID 0x0112
-#define MTXORB_FTDI_RANGE_0113_PID 0x0113
-#define MTXORB_FTDI_RANGE_0114_PID 0x0114
-#define MTXORB_FTDI_RANGE_0115_PID 0x0115
-#define MTXORB_FTDI_RANGE_0116_PID 0x0116
-#define MTXORB_FTDI_RANGE_0117_PID 0x0117
-#define MTXORB_FTDI_RANGE_0118_PID 0x0118
-#define MTXORB_FTDI_RANGE_0119_PID 0x0119
-#define MTXORB_FTDI_RANGE_011A_PID 0x011A
-#define MTXORB_FTDI_RANGE_011B_PID 0x011B
-#define MTXORB_FTDI_RANGE_011C_PID 0x011C
-#define MTXORB_FTDI_RANGE_011D_PID 0x011D
-#define MTXORB_FTDI_RANGE_011E_PID 0x011E
-#define MTXORB_FTDI_RANGE_011F_PID 0x011F
-#define MTXORB_FTDI_RANGE_0120_PID 0x0120
-#define MTXORB_FTDI_RANGE_0121_PID 0x0121
-#define MTXORB_FTDI_RANGE_0122_PID 0x0122
-#define MTXORB_FTDI_RANGE_0123_PID 0x0123
-#define MTXORB_FTDI_RANGE_0124_PID 0x0124
-#define MTXORB_FTDI_RANGE_0125_PID 0x0125
-#define MTXORB_FTDI_RANGE_0126_PID 0x0126
-#define MTXORB_FTDI_RANGE_0127_PID 0x0127
-#define MTXORB_FTDI_RANGE_0128_PID 0x0128
-#define MTXORB_FTDI_RANGE_0129_PID 0x0129
-#define MTXORB_FTDI_RANGE_012A_PID 0x012A
-#define MTXORB_FTDI_RANGE_012B_PID 0x012B
-#define MTXORB_FTDI_RANGE_012C_PID 0x012C
-#define MTXORB_FTDI_RANGE_012D_PID 0x012D
-#define MTXORB_FTDI_RANGE_012E_PID 0x012E
-#define MTXORB_FTDI_RANGE_012F_PID 0x012F
-#define MTXORB_FTDI_RANGE_0130_PID 0x0130
-#define MTXORB_FTDI_RANGE_0131_PID 0x0131
-#define MTXORB_FTDI_RANGE_0132_PID 0x0132
-#define MTXORB_FTDI_RANGE_0133_PID 0x0133
-#define MTXORB_FTDI_RANGE_0134_PID 0x0134
-#define MTXORB_FTDI_RANGE_0135_PID 0x0135
-#define MTXORB_FTDI_RANGE_0136_PID 0x0136
-#define MTXORB_FTDI_RANGE_0137_PID 0x0137
-#define MTXORB_FTDI_RANGE_0138_PID 0x0138
-#define MTXORB_FTDI_RANGE_0139_PID 0x0139
-#define MTXORB_FTDI_RANGE_013A_PID 0x013A
-#define MTXORB_FTDI_RANGE_013B_PID 0x013B
-#define MTXORB_FTDI_RANGE_013C_PID 0x013C
-#define MTXORB_FTDI_RANGE_013D_PID 0x013D
-#define MTXORB_FTDI_RANGE_013E_PID 0x013E
-#define MTXORB_FTDI_RANGE_013F_PID 0x013F
-#define MTXORB_FTDI_RANGE_0140_PID 0x0140
-#define MTXORB_FTDI_RANGE_0141_PID 0x0141
-#define MTXORB_FTDI_RANGE_0142_PID 0x0142
-#define MTXORB_FTDI_RANGE_0143_PID 0x0143
-#define MTXORB_FTDI_RANGE_0144_PID 0x0144
-#define MTXORB_FTDI_RANGE_0145_PID 0x0145
-#define MTXORB_FTDI_RANGE_0146_PID 0x0146
-#define MTXORB_FTDI_RANGE_0147_PID 0x0147
-#define MTXORB_FTDI_RANGE_0148_PID 0x0148
-#define MTXORB_FTDI_RANGE_0149_PID 0x0149
-#define MTXORB_FTDI_RANGE_014A_PID 0x014A
-#define MTXORB_FTDI_RANGE_014B_PID 0x014B
-#define MTXORB_FTDI_RANGE_014C_PID 0x014C
-#define MTXORB_FTDI_RANGE_014D_PID 0x014D
-#define MTXORB_FTDI_RANGE_014E_PID 0x014E
-#define MTXORB_FTDI_RANGE_014F_PID 0x014F
-#define MTXORB_FTDI_RANGE_0150_PID 0x0150
-#define MTXORB_FTDI_RANGE_0151_PID 0x0151
-#define MTXORB_FTDI_RANGE_0152_PID 0x0152
-#define MTXORB_FTDI_RANGE_0153_PID 0x0153
-#define MTXORB_FTDI_RANGE_0154_PID 0x0154
-#define MTXORB_FTDI_RANGE_0155_PID 0x0155
-#define MTXORB_FTDI_RANGE_0156_PID 0x0156
-#define MTXORB_FTDI_RANGE_0157_PID 0x0157
-#define MTXORB_FTDI_RANGE_0158_PID 0x0158
-#define MTXORB_FTDI_RANGE_0159_PID 0x0159
-#define MTXORB_FTDI_RANGE_015A_PID 0x015A
-#define MTXORB_FTDI_RANGE_015B_PID 0x015B
-#define MTXORB_FTDI_RANGE_015C_PID 0x015C
-#define MTXORB_FTDI_RANGE_015D_PID 0x015D
-#define MTXORB_FTDI_RANGE_015E_PID 0x015E
-#define MTXORB_FTDI_RANGE_015F_PID 0x015F
-#define MTXORB_FTDI_RANGE_0160_PID 0x0160
-#define MTXORB_FTDI_RANGE_0161_PID 0x0161
-#define MTXORB_FTDI_RANGE_0162_PID 0x0162
-#define MTXORB_FTDI_RANGE_0163_PID 0x0163
-#define MTXORB_FTDI_RANGE_0164_PID 0x0164
-#define MTXORB_FTDI_RANGE_0165_PID 0x0165
-#define MTXORB_FTDI_RANGE_0166_PID 0x0166
-#define MTXORB_FTDI_RANGE_0167_PID 0x0167
-#define MTXORB_FTDI_RANGE_0168_PID 0x0168
-#define MTXORB_FTDI_RANGE_0169_PID 0x0169
-#define MTXORB_FTDI_RANGE_016A_PID 0x016A
-#define MTXORB_FTDI_RANGE_016B_PID 0x016B
-#define MTXORB_FTDI_RANGE_016C_PID 0x016C
-#define MTXORB_FTDI_RANGE_016D_PID 0x016D
-#define MTXORB_FTDI_RANGE_016E_PID 0x016E
-#define MTXORB_FTDI_RANGE_016F_PID 0x016F
-#define MTXORB_FTDI_RANGE_0170_PID 0x0170
-#define MTXORB_FTDI_RANGE_0171_PID 0x0171
-#define MTXORB_FTDI_RANGE_0172_PID 0x0172
-#define MTXORB_FTDI_RANGE_0173_PID 0x0173
-#define MTXORB_FTDI_RANGE_0174_PID 0x0174
-#define MTXORB_FTDI_RANGE_0175_PID 0x0175
-#define MTXORB_FTDI_RANGE_0176_PID 0x0176
-#define MTXORB_FTDI_RANGE_0177_PID 0x0177
-#define MTXORB_FTDI_RANGE_0178_PID 0x0178
-#define MTXORB_FTDI_RANGE_0179_PID 0x0179
-#define MTXORB_FTDI_RANGE_017A_PID 0x017A
-#define MTXORB_FTDI_RANGE_017B_PID 0x017B
-#define MTXORB_FTDI_RANGE_017C_PID 0x017C
-#define MTXORB_FTDI_RANGE_017D_PID 0x017D
-#define MTXORB_FTDI_RANGE_017E_PID 0x017E
-#define MTXORB_FTDI_RANGE_017F_PID 0x017F
-#define MTXORB_FTDI_RANGE_0180_PID 0x0180
-#define MTXORB_FTDI_RANGE_0181_PID 0x0181
-#define MTXORB_FTDI_RANGE_0182_PID 0x0182
-#define MTXORB_FTDI_RANGE_0183_PID 0x0183
-#define MTXORB_FTDI_RANGE_0184_PID 0x0184
-#define MTXORB_FTDI_RANGE_0185_PID 0x0185
-#define MTXORB_FTDI_RANGE_0186_PID 0x0186
-#define MTXORB_FTDI_RANGE_0187_PID 0x0187
-#define MTXORB_FTDI_RANGE_0188_PID 0x0188
-#define MTXORB_FTDI_RANGE_0189_PID 0x0189
-#define MTXORB_FTDI_RANGE_018A_PID 0x018A
-#define MTXORB_FTDI_RANGE_018B_PID 0x018B
-#define MTXORB_FTDI_RANGE_018C_PID 0x018C
-#define MTXORB_FTDI_RANGE_018D_PID 0x018D
-#define MTXORB_FTDI_RANGE_018E_PID 0x018E
-#define MTXORB_FTDI_RANGE_018F_PID 0x018F
-#define MTXORB_FTDI_RANGE_0190_PID 0x0190
-#define MTXORB_FTDI_RANGE_0191_PID 0x0191
-#define MTXORB_FTDI_RANGE_0192_PID 0x0192
-#define MTXORB_FTDI_RANGE_0193_PID 0x0193
-#define MTXORB_FTDI_RANGE_0194_PID 0x0194
-#define MTXORB_FTDI_RANGE_0195_PID 0x0195
-#define MTXORB_FTDI_RANGE_0196_PID 0x0196
-#define MTXORB_FTDI_RANGE_0197_PID 0x0197
-#define MTXORB_FTDI_RANGE_0198_PID 0x0198
-#define MTXORB_FTDI_RANGE_0199_PID 0x0199
-#define MTXORB_FTDI_RANGE_019A_PID 0x019A
-#define MTXORB_FTDI_RANGE_019B_PID 0x019B
-#define MTXORB_FTDI_RANGE_019C_PID 0x019C
-#define MTXORB_FTDI_RANGE_019D_PID 0x019D
-#define MTXORB_FTDI_RANGE_019E_PID 0x019E
-#define MTXORB_FTDI_RANGE_019F_PID 0x019F
-#define MTXORB_FTDI_RANGE_01A0_PID 0x01A0
-#define MTXORB_FTDI_RANGE_01A1_PID 0x01A1
-#define MTXORB_FTDI_RANGE_01A2_PID 0x01A2
-#define MTXORB_FTDI_RANGE_01A3_PID 0x01A3
-#define MTXORB_FTDI_RANGE_01A4_PID 0x01A4
-#define MTXORB_FTDI_RANGE_01A5_PID 0x01A5
-#define MTXORB_FTDI_RANGE_01A6_PID 0x01A6
-#define MTXORB_FTDI_RANGE_01A7_PID 0x01A7
-#define MTXORB_FTDI_RANGE_01A8_PID 0x01A8
-#define MTXORB_FTDI_RANGE_01A9_PID 0x01A9
-#define MTXORB_FTDI_RANGE_01AA_PID 0x01AA
-#define MTXORB_FTDI_RANGE_01AB_PID 0x01AB
-#define MTXORB_FTDI_RANGE_01AC_PID 0x01AC
-#define MTXORB_FTDI_RANGE_01AD_PID 0x01AD
-#define MTXORB_FTDI_RANGE_01AE_PID 0x01AE
-#define MTXORB_FTDI_RANGE_01AF_PID 0x01AF
-#define MTXORB_FTDI_RANGE_01B0_PID 0x01B0
-#define MTXORB_FTDI_RANGE_01B1_PID 0x01B1
-#define MTXORB_FTDI_RANGE_01B2_PID 0x01B2
-#define MTXORB_FTDI_RANGE_01B3_PID 0x01B3
-#define MTXORB_FTDI_RANGE_01B4_PID 0x01B4
-#define MTXORB_FTDI_RANGE_01B5_PID 0x01B5
-#define MTXORB_FTDI_RANGE_01B6_PID 0x01B6
-#define MTXORB_FTDI_RANGE_01B7_PID 0x01B7
-#define MTXORB_FTDI_RANGE_01B8_PID 0x01B8
-#define MTXORB_FTDI_RANGE_01B9_PID 0x01B9
-#define MTXORB_FTDI_RANGE_01BA_PID 0x01BA
-#define MTXORB_FTDI_RANGE_01BB_PID 0x01BB
-#define MTXORB_FTDI_RANGE_01BC_PID 0x01BC
-#define MTXORB_FTDI_RANGE_01BD_PID 0x01BD
-#define MTXORB_FTDI_RANGE_01BE_PID 0x01BE
-#define MTXORB_FTDI_RANGE_01BF_PID 0x01BF
-#define MTXORB_FTDI_RANGE_01C0_PID 0x01C0
-#define MTXORB_FTDI_RANGE_01C1_PID 0x01C1
-#define MTXORB_FTDI_RANGE_01C2_PID 0x01C2
-#define MTXORB_FTDI_RANGE_01C3_PID 0x01C3
-#define MTXORB_FTDI_RANGE_01C4_PID 0x01C4
-#define MTXORB_FTDI_RANGE_01C5_PID 0x01C5
-#define MTXORB_FTDI_RANGE_01C6_PID 0x01C6
-#define MTXORB_FTDI_RANGE_01C7_PID 0x01C7
-#define MTXORB_FTDI_RANGE_01C8_PID 0x01C8
-#define MTXORB_FTDI_RANGE_01C9_PID 0x01C9
-#define MTXORB_FTDI_RANGE_01CA_PID 0x01CA
-#define MTXORB_FTDI_RANGE_01CB_PID 0x01CB
-#define MTXORB_FTDI_RANGE_01CC_PID 0x01CC
-#define MTXORB_FTDI_RANGE_01CD_PID 0x01CD
-#define MTXORB_FTDI_RANGE_01CE_PID 0x01CE
-#define MTXORB_FTDI_RANGE_01CF_PID 0x01CF
-#define MTXORB_FTDI_RANGE_01D0_PID 0x01D0
-#define MTXORB_FTDI_RANGE_01D1_PID 0x01D1
-#define MTXORB_FTDI_RANGE_01D2_PID 0x01D2
-#define MTXORB_FTDI_RANGE_01D3_PID 0x01D3
-#define MTXORB_FTDI_RANGE_01D4_PID 0x01D4
-#define MTXORB_FTDI_RANGE_01D5_PID 0x01D5
-#define MTXORB_FTDI_RANGE_01D6_PID 0x01D6
-#define MTXORB_FTDI_RANGE_01D7_PID 0x01D7
-#define MTXORB_FTDI_RANGE_01D8_PID 0x01D8
-#define MTXORB_FTDI_RANGE_01D9_PID 0x01D9
-#define MTXORB_FTDI_RANGE_01DA_PID 0x01DA
-#define MTXORB_FTDI_RANGE_01DB_PID 0x01DB
-#define MTXORB_FTDI_RANGE_01DC_PID 0x01DC
-#define MTXORB_FTDI_RANGE_01DD_PID 0x01DD
-#define MTXORB_FTDI_RANGE_01DE_PID 0x01DE
-#define MTXORB_FTDI_RANGE_01DF_PID 0x01DF
-#define MTXORB_FTDI_RANGE_01E0_PID 0x01E0
-#define MTXORB_FTDI_RANGE_01E1_PID 0x01E1
-#define MTXORB_FTDI_RANGE_01E2_PID 0x01E2
-#define MTXORB_FTDI_RANGE_01E3_PID 0x01E3
-#define MTXORB_FTDI_RANGE_01E4_PID 0x01E4
-#define MTXORB_FTDI_RANGE_01E5_PID 0x01E5
-#define MTXORB_FTDI_RANGE_01E6_PID 0x01E6
-#define MTXORB_FTDI_RANGE_01E7_PID 0x01E7
-#define MTXORB_FTDI_RANGE_01E8_PID 0x01E8
-#define MTXORB_FTDI_RANGE_01E9_PID 0x01E9
-#define MTXORB_FTDI_RANGE_01EA_PID 0x01EA
-#define MTXORB_FTDI_RANGE_01EB_PID 0x01EB
-#define MTXORB_FTDI_RANGE_01EC_PID 0x01EC
-#define MTXORB_FTDI_RANGE_01ED_PID 0x01ED
-#define MTXORB_FTDI_RANGE_01EE_PID 0x01EE
-#define MTXORB_FTDI_RANGE_01EF_PID 0x01EF
-#define MTXORB_FTDI_RANGE_01F0_PID 0x01F0
-#define MTXORB_FTDI_RANGE_01F1_PID 0x01F1
-#define MTXORB_FTDI_RANGE_01F2_PID 0x01F2
-#define MTXORB_FTDI_RANGE_01F3_PID 0x01F3
-#define MTXORB_FTDI_RANGE_01F4_PID 0x01F4
-#define MTXORB_FTDI_RANGE_01F5_PID 0x01F5
-#define MTXORB_FTDI_RANGE_01F6_PID 0x01F6
-#define MTXORB_FTDI_RANGE_01F7_PID 0x01F7
-#define MTXORB_FTDI_RANGE_01F8_PID 0x01F8
-#define MTXORB_FTDI_RANGE_01F9_PID 0x01F9
-#define MTXORB_FTDI_RANGE_01FA_PID 0x01FA
-#define MTXORB_FTDI_RANGE_01FB_PID 0x01FB
-#define MTXORB_FTDI_RANGE_01FC_PID 0x01FC
-#define MTXORB_FTDI_RANGE_01FD_PID 0x01FD
-#define MTXORB_FTDI_RANGE_01FE_PID 0x01FE
-#define MTXORB_FTDI_RANGE_01FF_PID 0x01FF
-
-
-
-/*
- * The Mobility Lab (TML)
- * Submitted by Pierre Castella
- */
-#define TML_VID 0x1B91 /* Vendor ID */
-#define TML_USB_SERIAL_PID 0x0064 /* USB - Serial Converter */
-
-/* Alti-2 products http://www.alti-2.com */
-#define ALTI2_VID 0x1BC9
-#define ALTI2_N3_PID 0x6001 /* Neptune 3 */
-
-/*
- * Ionics PlugComputer
- */
-#define IONICS_VID 0x1c0c
-#define IONICS_PLUGCOMPUTER_PID 0x0102
-
-/*
- * Dresden Elektronic Sensor Terminal Board
- */
-#define DE_VID 0x1cf1 /* Vendor ID */
-#define STB_PID 0x0001 /* Sensor Terminal Board */
-#define WHT_PID 0x0004 /* Wireless Handheld Terminal */
-
-/*
- * Papouch products (http://www.papouch.com/)
- * Submitted by Folkert van Heusden
- */
-
-#define PAPOUCH_VID 0x5050 /* Vendor ID */
-#define PAPOUCH_SB485_PID 0x0100 /* Papouch SB485 USB-485/422 Converter */
-#define PAPOUCH_AP485_PID 0x0101 /* AP485 USB-RS485 Converter */
-#define PAPOUCH_SB422_PID 0x0102 /* Papouch SB422 USB-RS422 Converter */
-#define PAPOUCH_SB485_2_PID 0x0103 /* Papouch SB485 USB-485/422 Converter */
-#define PAPOUCH_AP485_2_PID 0x0104 /* AP485 USB-RS485 Converter */
-#define PAPOUCH_SB422_2_PID 0x0105 /* Papouch SB422 USB-RS422 Converter */
-#define PAPOUCH_SB485S_PID 0x0106 /* Papouch SB485S USB-485/422 Converter */
-#define PAPOUCH_SB485C_PID 0x0107 /* Papouch SB485C USB-485/422 Converter */
-#define PAPOUCH_LEC_PID 0x0300 /* LEC USB Converter */
-#define PAPOUCH_SB232_PID 0x0301 /* Papouch SB232 USB-RS232 Converter */
-#define PAPOUCH_TMU_PID 0x0400 /* TMU USB Thermometer */
-#define PAPOUCH_IRAMP_PID 0x0500 /* Papouch IRAmp Duplex */
-#define PAPOUCH_DRAK5_PID 0x0700 /* Papouch DRAK5 */
-#define PAPOUCH_QUIDO8x8_PID 0x0800 /* Papouch Quido 8/8 Module */
-#define PAPOUCH_QUIDO4x4_PID 0x0900 /* Papouch Quido 4/4 Module */
-#define PAPOUCH_QUIDO2x2_PID 0x0a00 /* Papouch Quido 2/2 Module */
-#define PAPOUCH_QUIDO10x1_PID 0x0b00 /* Papouch Quido 10/1 Module */
-#define PAPOUCH_QUIDO30x3_PID 0x0c00 /* Papouch Quido 30/3 Module */
-#define PAPOUCH_QUIDO60x3_PID 0x0d00 /* Papouch Quido 60(100)/3 Module */
-#define PAPOUCH_QUIDO2x16_PID 0x0e00 /* Papouch Quido 2/16 Module */
-#define PAPOUCH_QUIDO3x32_PID 0x0f00 /* Papouch Quido 3/32 Module */
-#define PAPOUCH_DRAK6_PID 0x1000 /* Papouch DRAK6 */
-#define PAPOUCH_UPSUSB_PID 0x8000 /* Papouch UPS-USB adapter */
-#define PAPOUCH_MU_PID 0x8001 /* MU controller */
-#define PAPOUCH_SIMUKEY_PID 0x8002 /* Papouch SimuKey */
-#define PAPOUCH_AD4USB_PID 0x8003 /* AD4USB Measurement Module */
-#define PAPOUCH_GMUX_PID 0x8004 /* Papouch GOLIATH MUX */
-#define PAPOUCH_GMSR_PID 0x8005 /* Papouch GOLIATH MSR */
-
-/*
- * Marvell SheevaPlug
- */
-#define MARVELL_VID 0x9e88
-#define MARVELL_SHEEVAPLUG_PID 0x9e8f
-
-/*
- * Evolution Robotics products (http://www.evolution.com/).
- * Submitted by Shawn M. Lavelle.
- */
-#define EVOLUTION_VID 0xDEEE /* Vendor ID */
-#define EVOLUTION_ER1_PID 0x0300 /* ER1 Control Module */
-#define EVO_8U232AM_PID 0x02FF /* Evolution robotics RCM2 (FT232AM)*/
-#define EVO_HYBRID_PID 0x0302 /* Evolution robotics RCM4 PID (FT232BM)*/
-#define EVO_RCM4_PID 0x0303 /* Evolution robotics RCM4 PID */
-
-/*
- * MJS Gadgets HD Radio / XM Radio / Sirius Radio interfaces (using VID 0x0403)
- */
-#define MJSG_GENERIC_PID 0x9378
-#define MJSG_SR_RADIO_PID 0x9379
-#define MJSG_XM_RADIO_PID 0x937A
-#define MJSG_HD_RADIO_PID 0x937C
-
-/*
- * Xverve Signalyzer tools (http://www.signalyzer.com/)
- */
-#define XVERVE_SIGNALYZER_ST_PID 0xBCA0
-#define XVERVE_SIGNALYZER_SLITE_PID 0xBCA1
-#define XVERVE_SIGNALYZER_SH2_PID 0xBCA2
-#define XVERVE_SIGNALYZER_SH4_PID 0xBCA4
-
-/*
- * Segway Robotic Mobility Platform USB interface (using VID 0x0403)
- * Submitted by John G. Rogers
- */
-#define SEGWAY_RMP200_PID 0xe729
-
-
-/*
- * Accesio USB Data Acquisition products (http://www.accesio.com/)
- */
-#define ACCESIO_COM4SM_PID 0xD578
-
-/* www.sciencescope.co.uk educational dataloggers */
-#define FTDI_SCIENCESCOPE_LOGBOOKML_PID 0xFF18
-#define FTDI_SCIENCESCOPE_LS_LOGBOOK_PID 0xFF1C
-#define FTDI_SCIENCESCOPE_HS_LOGBOOK_PID 0xFF1D
-
-/*
- * Milkymist One JTAG/Serial
- */
-#define QIHARDWARE_VID 0x20B7
-#define MILKYMISTONE_JTAGSERIAL_PID 0x0713
-
/* Check if we have an old version in the I2C and
update if necessary */
- if (download_cur_ver < download_new_ver) {
+ if (download_cur_ver != download_new_ver) {
dbg("%s - Update I2C dld from %d.%d to %d.%d",
__func__,
firmware_version->Ver_Major,
kfree(port->read_urb->transfer_buffer);
port->read_urb->transfer_buffer = buffer;
port->read_urb->transfer_buffer_length = buffer_size;
- port->bulk_in_buffer = buffer;
buffer = kmalloc(buffer_size, GFP_KERNEL);
if (!buffer) {
kfree(port->write_urb->transfer_buffer);
port->write_urb->transfer_buffer = buffer;
port->write_urb->transfer_buffer_length = buffer_size;
- port->bulk_out_buffer = buffer;
port->bulk_out_size = buffer_size;
}
usb_free_urb(priv->write_urb_pool[j]);
}
}
- kfree(priv);
usb_set_serial_port_data(serial->port[i], NULL);
}
return -ENOMEM;
/* FIXME: Add rts/dtr methods */
if (port->write_urb) {
- usb_poison_urb(port->write_urb);
- kfree(port->write_urb->transfer_buffer);
+ usb_kill_urb(port->write_urb);
usb_free_urb(port->write_urb);
port->write_urb = NULL;
}
case TIOCGICOUNT:
cnow = mos7720_port->icount;
-
- memset(&icount, 0, sizeof(struct serial_icounter_struct));
-
icount.cts = cnow.cts;
icount.dsr = cnow.dsr;
icount.rng = cnow.rng;
* by making a change here, in moschip_port_id_table, and in
* moschip_id_table_combined
*/
-#define USB_VENDOR_ID_BANDB 0x0856
-#define BANDB_DEVICE_ID_USO9ML2_2 0xAC22
-#define BANDB_DEVICE_ID_USO9ML2_2P 0xBC00
-#define BANDB_DEVICE_ID_USO9ML2_4 0xAC24
-#define BANDB_DEVICE_ID_USO9ML2_4P 0xBC01
-#define BANDB_DEVICE_ID_US9ML2_2 0xAC29
-#define BANDB_DEVICE_ID_US9ML2_4 0xAC30
-#define BANDB_DEVICE_ID_USPTL4_2 0xAC31
-#define BANDB_DEVICE_ID_USPTL4_4 0xAC32
-#define BANDB_DEVICE_ID_USOPTL4_2 0xAC42
-#define BANDB_DEVICE_ID_USOPTL4_2P 0xBC02
-#define BANDB_DEVICE_ID_USOPTL4_4 0xAC44
-#define BANDB_DEVICE_ID_USOPTL4_4P 0xBC03
-#define BANDB_DEVICE_ID_USOPTL2_4 0xAC24
+#define USB_VENDOR_ID_BANDB 0x0856
+#define BANDB_DEVICE_ID_USO9ML2_2 0xAC22
+#define BANDB_DEVICE_ID_USO9ML2_4 0xAC24
+#define BANDB_DEVICE_ID_US9ML2_2 0xAC29
+#define BANDB_DEVICE_ID_US9ML2_4 0xAC30
+#define BANDB_DEVICE_ID_USPTL4_2 0xAC31
+#define BANDB_DEVICE_ID_USPTL4_4 0xAC32
+#define BANDB_DEVICE_ID_USOPTL4_2 0xAC42
+#define BANDB_DEVICE_ID_USOPTL4_4 0xAC44
/* This driver also supports
* ATEN UC2324 device using Moschip MCS7840
{USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7840)},
{USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7820)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_2)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_2P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_4)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_4P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_US9ML2_2)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_US9ML2_4)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USPTL4_2)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USPTL4_4)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4P)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4)},
{USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2324)},
{USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2322)},
{} /* terminating entry */
{USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7840)},
{USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7820)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_2)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_2P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_4)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USO9ML2_4P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_US9ML2_2)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_US9ML2_4)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USPTL4_2)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USPTL4_4)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2P)},
{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4P)},
- {USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4)},
{USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2324)},
{USB_DEVICE(USB_VENDOR_ID_ATENINTL, ATENINTL_DEVICE_ID_UC2322)},
{} /* terminating entry */
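/*
 * A minimal sketch of the device-ID table pattern referred to in the comment
 * above (supporting new hardware means touching both moschip_port_id_table
 * and moschip_id_table_combined).  BANDB_DEVICE_ID_EXAMPLE, its value, and
 * example_id_table are hypothetical names used purely for illustration.
 */
#include <linux/module.h>
#include <linux/usb.h>

#define BANDB_DEVICE_ID_EXAMPLE	0xAC99	/* hypothetical PID, illustration only */

static struct usb_device_id example_id_table[] = {
	{USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_EXAMPLE)},
	{}	/* terminating entry, required by the USB core */
};
MODULE_DEVICE_TABLE(usb, example_id_table);	/* exported for module autoloading */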
{
struct usb_device *dev = port->serial->dev;
int ret = 0;
- u8 *buf;
-
- buf = kmalloc(VENDOR_READ_LENGTH, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ,
- MCS_RD_RTYPE, 0, reg, buf, VENDOR_READ_LENGTH,
+ MCS_RD_RTYPE, 0, reg, val, VENDOR_READ_LENGTH,
MOS_WDR_TIMEOUT);
- *val = buf[0];
dbg("mos7840_get_reg_sync offset is %x, return val %x", reg, *val);
-
- kfree(buf);
+ *val = (*val) & 0x00ff;
return ret;
}
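/*
 * For reference, the bounce-buffer variant visible in the removed lines above
 * follows the usual pattern for vendor control reads: the data buffer passed
 * to usb_control_msg() is DMA-mapped by the USB core, so a kmalloc'd buffer
 * is used rather than the caller's pointer.  A minimal sketch, assuming the
 * driver's MCS_RDREQ, MCS_RD_RTYPE, VENDOR_READ_LENGTH and MOS_WDR_TIMEOUT
 * constants; example_read_reg is a hypothetical helper name.
 */
#include <linux/slab.h>
#include <linux/usb.h>

static int example_read_reg(struct usb_device *dev, __u16 reg, __u8 *val)
{
	u8 *buf = kmalloc(VENDOR_READ_LENGTH, GFP_KERNEL);
	int ret;

	if (!buf)
		return -ENOMEM;

	/* Vendor-specific read request on the default control pipe */
	ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ,
			      MCS_RD_RTYPE, 0, reg, buf, VENDOR_READ_LENGTH,
			      MOS_WDR_TIMEOUT);
	if (ret >= 0)
		*val = buf[0];

	kfree(buf);
	return ret;
}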
struct usb_device *dev = port->serial->dev;
int ret = 0;
__u16 Wval;
- u8 *buf;
-
- buf = kmalloc(VENDOR_READ_LENGTH, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
/* dbg("application number is %4x",
(((__u16)port->number - (__u16)(port->serial->minor))+1)<<8); */
}
}
ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ,
- MCS_RD_RTYPE, Wval, reg, buf, VENDOR_READ_LENGTH,
+ MCS_RD_RTYPE, Wval, reg, val, VENDOR_READ_LENGTH,
MOS_WDR_TIMEOUT);
- *val = buf[0];
-
- kfree(buf);
+ *val = (*val) & 0x00ff;
return ret;
}
mos7840_port = urb->context;
if (!mos7840_port) {
dbg("%s", "NULL mos7840_port pointer");
+ mos7840_port->read_urb_busy = false;
return;
}
case TIOCGICOUNT:
cnow = mos7840_port->icount;
smp_rmb();
-
- memset(&icount, 0, sizeof(struct serial_icounter_struct));
-
icount.cts = cnow.cts;
icount.dsr = cnow.dsr;
icount.rng = cnow.rng;
static struct usb_device_id id_table [] = {
{ USB_DEVICE(0x0a99, 0x0001) }, /* Talon Technology device */
- { USB_DEVICE(0x0df7, 0x0900) }, /* Mobile Action i-gotU */
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);
available_room = tty_buffer_request_room(tty,
data_length);
if (available_room) {
- tty_insert_flip_string(tty, data + 2,
- data_length);
+ tty_insert_flip_string(tty, data,
+ available_room);
tty_flip_buffer_push(tty);
}
tty_kref_put(tty);
priv->bulk_address),
priv->bulk_in_buffer, priv->buffer_size,
opticon_bulk_callback, priv);
- result = usb_submit_urb(priv->bulk_read_urb, GFP_ATOMIC);
+ result = usb_submit_urb(port->read_urb, GFP_ATOMIC);
if (result)
dev_err(&port->dev,
"%s - failed resubmitting read urb, error %d\n",
#define HUAWEI_PRODUCT_E143D 0x143D
#define HUAWEI_PRODUCT_E143E 0x143E
#define HUAWEI_PRODUCT_E143F 0x143F
-#define HUAWEI_PRODUCT_K4505 0x1464
-#define HUAWEI_PRODUCT_K3765 0x1465
#define HUAWEI_PRODUCT_E14AC 0x14AC
-#define HUAWEI_PRODUCT_ETS1220 0x1803
#define QUANTA_VENDOR_ID 0x0408
#define QUANTA_PRODUCT_Q101 0xEA02
#define AMOI_PRODUCT_H01 0x0800
#define AMOI_PRODUCT_H01A 0x7002
#define AMOI_PRODUCT_H02 0x0802
-#define AMOI_PRODUCT_SKYPEPHONE_S2 0x0407
#define DELL_VENDOR_ID 0x413C
#define QUALCOMM_VENDOR_ID 0x05C6
-#define CMOTECH_VENDOR_ID 0x16d8
-#define CMOTECH_PRODUCT_6008 0x6008
-#define CMOTECH_PRODUCT_6280 0x6280
+#define MAXON_VENDOR_ID 0x16d8
#define TELIT_VENDOR_ID 0x1bc7
#define TELIT_PRODUCT_UC864E 0x1003
#define QISDA_PRODUCT_H21_4512 0x4512
#define QISDA_PRODUCT_H21_4523 0x4523
#define QISDA_PRODUCT_H20_4515 0x4515
-#define QISDA_PRODUCT_H20_4518 0x4518
#define QISDA_PRODUCT_H20_4519 0x4519
/* TLAYTECH PRODUCTS */
#define ALCATEL_VENDOR_ID 0x1bbb
#define ALCATEL_PRODUCT_X060S 0x0000
-#define PIRELLI_VENDOR_ID 0x1266
-#define PIRELLI_PRODUCT_C100_1 0x1002
-#define PIRELLI_PRODUCT_C100_2 0x1003
-#define PIRELLI_PRODUCT_1004 0x1004
-#define PIRELLI_PRODUCT_1005 0x1005
-#define PIRELLI_PRODUCT_1006 0x1006
-#define PIRELLI_PRODUCT_1007 0x1007
-#define PIRELLI_PRODUCT_1008 0x1008
-#define PIRELLI_PRODUCT_1009 0x1009
-#define PIRELLI_PRODUCT_100A 0x100a
-#define PIRELLI_PRODUCT_100B 0x100b
-#define PIRELLI_PRODUCT_100C 0x100c
-#define PIRELLI_PRODUCT_100D 0x100d
-#define PIRELLI_PRODUCT_100E 0x100e
-#define PIRELLI_PRODUCT_100F 0x100f
-#define PIRELLI_PRODUCT_1011 0x1011
-#define PIRELLI_PRODUCT_1012 0x1012
-
/* Airplus products */
#define AIRPLUS_VENDOR_ID 0x1011
#define AIRPLUS_PRODUCT_MCD650 0x3198
#define THINKWILL_VENDOR_ID 0x19f5
#define THINKWILL_PRODUCT_ID 0x9909
-#define CINTERION_VENDOR_ID 0x0681
-
-/* Olivetti products */
-#define OLIVETTI_VENDOR_ID 0x0b3c
-#define OLIVETTI_PRODUCT_OLICARD100 0xc000
-
-/* Celot products */
-#define CELOT_VENDOR_ID 0x211f
-#define CELOT_PRODUCT_CT680M 0x6801
-
static struct usb_device_id option_ids[] = {
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143D, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143E, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143F, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4505, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3765, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_ETS1220, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E14AC, 0xff, 0xff, 0xff) },
+ { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E14AC) },
{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_9508) },
{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) }, /* Novatel Merlin V640/XV620 */
{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V620) }, /* Novatel Merlin V620/S620 */
{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) },
{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) },
{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H02) },
- { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_SKYPEPHONE_S2) },
{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5700_MINICARD) }, /* Dell Wireless 5700 Mobile Broadband CDMA/EVDO Mini-Card == Novatel Expedite EV620 CDMA/EV-DO */
{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5500_MINICARD) }, /* Dell Wireless 5500 Mobile Broadband HSDPA Mini-Card == Novatel Expedite EU740 HSDPA/3G */
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
- { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6280) }, /* BP3-USB & BP3-EXT HSDPA */
- { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6008) },
+ { USB_DEVICE(MAXON_VENDOR_ID, 0x6280) }, /* BP3-USB & BP3-EXT HSDPA */
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864E) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864G) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0011, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0012, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0013, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0014, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF628, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0016, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0017, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0023, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0024, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0025, 0xff, 0xff, 0xff) },
- /* { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0026, 0xff, 0xff, 0xff) }, */
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0026, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0028, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0029, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0030, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF626, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0032, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0033, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0034, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0037, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0038, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0039, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0040, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0042, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0043, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0044, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0048, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0049, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0050, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0051, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0052, 0xff, 0xff, 0xff) },
- /* { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0053, 0xff, 0xff, 0xff) }, */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0054, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0055, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0056, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0057, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0058, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0059, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0061, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0062, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0063, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0064, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0065, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0066, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0067, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0069, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0070, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0076, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0077, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0078, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0079, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0082, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0083, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0086, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0087, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0104, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0105, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0106, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0108, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0113, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0160, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0161, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0162, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1010, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1012, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1057, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1058, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1059, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1060, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1061, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1062, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1063, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1064, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1065, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1066, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1067, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1068, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1069, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1070, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1071, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1072, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1073, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1074, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1075, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1076, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1077, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1078, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1079, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1080, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1081, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1082, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1083, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1084, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1085, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1086, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1087, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1088, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1089, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1090, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1091, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1092, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1093, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1094, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1095, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1096, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1097, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1098, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1099, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1100, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1101, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1102, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1103, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1104, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1105, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1106, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1107, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1108, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1109, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1110, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1111, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1112, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1113, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1114, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1115, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1116, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1117, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1118, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1119, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1120, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1121, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1122, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1123, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1124, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1125, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1126, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1127, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1128, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1129, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1130, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1131, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1132, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1133, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1134, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1135, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1136, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1137, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1138, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1139, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1140, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1141, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1142, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1143, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1144, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1145, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1146, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1147, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1148, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1149, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1150, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1151, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1152, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1153, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1154, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1155, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1156, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1157, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1158, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1159, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1160, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1161, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1162, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1163, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1164, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1165, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1166, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1167, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1168, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1169, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1170, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1244, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1245, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1246, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1247, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1248, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1249, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1250, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1251, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1252, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1253, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1254, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1255, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1256, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1257, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1258, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1259, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1260, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1261, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1262, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1263, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1264, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1265, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1266, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1267, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1268, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1269, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1270, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1271, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1272, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1273, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1274, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1275, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1276, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1277, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1278, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1279, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1280, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1281, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1282, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1283, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1284, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1285, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1286, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1287, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1288, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1289, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1290, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1291, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1292, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1293, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1294, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1295, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1296, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1297, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1298, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1299, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1300, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0014, 0xff, 0xff, 0xff) }, /* ZTE CDMA products */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0027, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0059, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0141, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
{ USB_DEVICE(QISDA_VENDOR_ID, QISDA_PRODUCT_H21_4512) },
{ USB_DEVICE(QISDA_VENDOR_ID, QISDA_PRODUCT_H21_4523) },
{ USB_DEVICE(QISDA_VENDOR_ID, QISDA_PRODUCT_H20_4515) },
- { USB_DEVICE(QISDA_VENDOR_ID, QISDA_PRODUCT_H20_4518) },
{ USB_DEVICE(QISDA_VENDOR_ID, QISDA_PRODUCT_H20_4519) },
{ USB_DEVICE(TOSHIBA_VENDOR_ID, TOSHIBA_PRODUCT_G450) },
{ USB_DEVICE(TOSHIBA_VENDOR_ID, TOSHIBA_PRODUCT_HSDPA_MINICARD ) }, /* Toshiba 3G HSDPA == Novatel Expedite EU870D MiniCard */
{ USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
{ USB_DEVICE(FOUR_G_SYSTEMS_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14) },
{ USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
- /* Pirelli */
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_1)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_2)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1004)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1005)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1006)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1007)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1008)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1009)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100A)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100B) },
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100C) },
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100D) },
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100E) },
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100F) },
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1011)},
- { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1012)},
- { USB_DEVICE(CINTERION_VENDOR_ID, 0x0047) },
- { USB_DEVICE(LEADCORE_VENDOR_ID, LEADCORE_PRODUCT_LC1808) }, //zzc
- { USB_DEVICE(SC8800G_VENDOR_ID,SC8800G_PRODUCT_ID)},
- { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
- { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
-
-// cmy:
- { USB_DEVICE(0x0685, 0x6000) },
- { USB_DEVICE(0x1E89, 0x1E16) },
- { USB_DEVICE(0x7693, 0x0001) },
- { USB_DEVICE(0x1D09, 0x4308) },
- { USB_DEVICE(0x1234, 0x0033) },
- { USB_DEVICE(0xFEED, 0x0001) },
- { USB_DEVICE(ALCATEL_VENDOR_ID, 0x0017) },
-
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
const struct usb_device_id *id)
{
struct option_intf_private *data;
-
/* D-Link DWM 652 still exposes CD-Rom emulation interface in modem mode */
if (serial->dev->descriptor.idVendor == DLINK_VENDOR_ID &&
serial->dev->descriptor.idProduct == DLINK_PRODUCT_DWM_652 &&
serial->interface->cur_altsetting->desc.bInterfaceClass == 0x8)
return -ENODEV;
- /* Bandrich modem and AT command interface is 0xff */
- if ((serial->dev->descriptor.idVendor == BANDRICH_VENDOR_ID ||
- serial->dev->descriptor.idVendor == PIRELLI_VENDOR_ID) &&
- serial->interface->cur_altsetting->desc.bInterfaceClass != 0xff)
- return -ENODEV;
-
- /* Don't bind network interfaces on Huawei K3765 & K4505 */
- if (serial->dev->descriptor.idVendor == HUAWEI_VENDOR_ID &&
- (serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K3765 ||
- serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K4505) &&
- serial->interface->cur_altsetting->desc.bInterfaceNumber == 1)
- return -ENODEV;
-
data = serial->private = kzalloc(sizeof(struct option_intf_private), GFP_KERNEL);
if (!data)
return -ENOMEM;
{ USB_DEVICE(SUPERIAL_VENDOR_ID, SUPERIAL_PRODUCT_ID) },
{ USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) },
{ USB_DEVICE(CRESSI_VENDOR_ID, CRESSI_EDY_PRODUCT_ID) },
- { USB_DEVICE(ZEAGLE_VENDOR_ID, ZEAGLE_N2ITION3_PRODUCT_ID) },
{ USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) },
{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
{ } /* Terminating entry */
#define CRESSI_VENDOR_ID 0x04b8
#define CRESSI_EDY_PRODUCT_ID 0x0521
-/* Zeagle dive computer interface */
-#define ZEAGLE_VENDOR_ID 0x04b8
-#define ZEAGLE_N2ITION3_PRODUCT_ID 0x0522
-
/* Sony, USB data cable for CMD-Jxx mobile phones */
#define SONY_VENDOR_ID 0x054c
#define SONY_QN3USB_PRODUCT_ID 0x0437
{USB_DEVICE(0x05c6, 0x9221)}, /* Generic Gobi QDL device */
{USB_DEVICE(0x05c6, 0x9231)}, /* Generic Gobi QDL device */
{USB_DEVICE(0x1f45, 0x0001)}, /* Unknown Gobi QDL device */
- {USB_DEVICE(0x413c, 0x8185)}, /* Dell Gobi 2000 QDL device (N0218, VU936) */
- {USB_DEVICE(0x413c, 0x8186)}, /* Dell Gobi 2000 Modem device (N0218, VU936) */
- {USB_DEVICE(0x05c6, 0x9224)}, /* Sony Gobi 2000 QDL device (N0279, VU730) */
- {USB_DEVICE(0x05c6, 0x9225)}, /* Sony Gobi 2000 Modem device (N0279, VU730) */
- {USB_DEVICE(0x05c6, 0x9244)}, /* Samsung Gobi 2000 QDL device (VL176) */
- {USB_DEVICE(0x05c6, 0x9245)}, /* Samsung Gobi 2000 Modem device (VL176) */
- {USB_DEVICE(0x03f0, 0x241d)}, /* HP Gobi 2000 QDL device (VP412) */
- {USB_DEVICE(0x03f0, 0x251d)}, /* HP Gobi 2000 Modem device (VP412) */
- {USB_DEVICE(0x05c6, 0x9214)}, /* Acer Gobi 2000 QDL device (VP413) */
- {USB_DEVICE(0x05c6, 0x9215)}, /* Acer Gobi 2000 Modem device (VP413) */
- {USB_DEVICE(0x05c6, 0x9264)}, /* Asus Gobi 2000 QDL device (VR305) */
- {USB_DEVICE(0x05c6, 0x9265)}, /* Asus Gobi 2000 Modem device (VR305) */
- {USB_DEVICE(0x05c6, 0x9234)}, /* Top Global Gobi 2000 QDL device (VR306) */
- {USB_DEVICE(0x05c6, 0x9235)}, /* Top Global Gobi 2000 Modem device (VR306) */
- {USB_DEVICE(0x05c6, 0x9274)}, /* iRex Technologies Gobi 2000 QDL device (VR307) */
- {USB_DEVICE(0x05c6, 0x9275)}, /* iRex Technologies Gobi 2000 Modem device (VR307) */
- {USB_DEVICE(0x1199, 0x9000)}, /* Sierra Wireless Gobi 2000 QDL device (VT773) */
- {USB_DEVICE(0x1199, 0x9001)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9002)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9003)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9004)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9005)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9006)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9007)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9008)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x9009)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x1199, 0x900a)}, /* Sierra Wireless Gobi 2000 Modem device (VT773) */
- {USB_DEVICE(0x16d8, 0x8001)}, /* CMDTech Gobi 2000 QDL device (VU922) */
- {USB_DEVICE(0x16d8, 0x8002)}, /* CMDTech Gobi 2000 Modem device (VU922) */
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, id_table);
static struct usb_device_id id_table [] = {
{ USB_DEVICE(0x0F3D, 0x0112) }, /* Airprime/Sierra PC 5220 */
{ USB_DEVICE(0x03F0, 0x1B1D) }, /* HP ev2200 a.k.a MC5720 */
- { USB_DEVICE(0x03F0, 0x211D) }, /* HP ev2210 a.k.a MC5725 */
{ USB_DEVICE(0x03F0, 0x1E1D) }, /* HP hs2300 a.k.a MC8775 */
{ USB_DEVICE(0x1199, 0x0017) }, /* Sierra Wireless EM5625 */
{ USB_DEVICE(0x1199, 0x0021) }, /* Sierra Wireless AirCard 597E */
{ USB_DEVICE(0x1199, 0x0112) }, /* Sierra Wireless AirCard 580 */
{ USB_DEVICE(0x1199, 0x0120) }, /* Sierra Wireless USB Dongle 595U */
- { USB_DEVICE(0x1199, 0x0301) }, /* Sierra Wireless USB Dongle 250U */
/* Sierra Wireless C597 */
{ USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x0023, 0xFF, 0xFF, 0xFF) },
/* Sierra Wireless T598 */
} else {
if (urb->actual_length) {
tty = tty_port_tty_get(&port->port);
- if (tty) {
- tty_buffer_request_room(tty,
- urb->actual_length);
- tty_insert_flip_string(tty, data,
- urb->actual_length);
- tty_flip_buffer_push(tty);
-
- tty_kref_put(tty);
- usb_serial_debug_data(debug, &port->dev,
- __func__, urb->actual_length, data);
- }
+
+ tty_buffer_request_room(tty, urb->actual_length);
+ tty_insert_flip_string(tty, data, urb->actual_length);
+ tty_flip_buffer_push(tty);
+
+ tty_kref_put(tty);
+ usb_serial_debug_data(debug, &port->dev, __func__,
+ urb->actual_length, data);
} else {
dev_dbg(&port->dev, "%s: empty read urb"
" received\n", __func__);
.throttle = visor_throttle,
.unthrottle = visor_unthrottle,
.attach = clie_3_5_startup,
- .release = visor_release,
.write = visor_write,
.write_room = visor_write_room,
.write_bulk_callback = visor_write_bulk_callback,
}
return result;
}
-static DEVICE_ATTR(truinst, S_IRUGO, show_truinst, NULL);
+static DEVICE_ATTR(truinst, S_IWUGO | S_IRUGO, show_truinst, NULL);
int sierra_ms_init(struct us_data *us)
{
0 ),
/* Reported by Jan Dumon <j.dumon@option.com>
- * These devices (wrongly) have a vendor-specific device descriptor.
- * These entries are needed so usb-storage can bind to their mass-storage
+ * This device (wrongly) has a vendor-specific device descriptor.
+ * The entry is needed so usb-storage can bind to its mass-storage
* interface as an interface driver */
UNUSUAL_DEV( 0x0af0, 0x7501, 0x0000, 0x0000,
"Option",
US_SC_DEVICE, US_PR_DEVICE, NULL,
0 ),
-UNUSUAL_DEV( 0x0af0, 0x7701, 0x0000, 0x0000,
- "Option",
- "GI 0451 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x7706, 0x0000, 0x0000,
- "Option",
- "GI 0451 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x7901, 0x0000, 0x0000,
- "Option",
- "GI 0452 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x7A01, 0x0000, 0x0000,
- "Option",
- "GI 0461 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x7A05, 0x0000, 0x0000,
- "Option",
- "GI 0461 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x8300, 0x0000, 0x0000,
- "Option",
- "GI 033x SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x8302, 0x0000, 0x0000,
- "Option",
- "GI 033x SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0x8304, 0x0000, 0x0000,
- "Option",
- "GI 033x SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xc100, 0x0000, 0x0000,
- "Option",
- "GI 070x SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xd057, 0x0000, 0x0000,
- "Option",
- "GI 1505 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xd058, 0x0000, 0x0000,
- "Option",
- "GI 1509 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xd157, 0x0000, 0x0000,
- "Option",
- "GI 1515 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xd257, 0x0000, 0x0000,
- "Option",
- "GI 1215 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
-UNUSUAL_DEV( 0x0af0, 0xd357, 0x0000, 0x0000,
- "Option",
- "GI 1505 SD-Card",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- 0 ),
-
/* Reported by Ben Efros <ben@pc-doctor.com> */
UNUSUAL_DEV( 0x0bc2, 0x3010, 0x0000, 0x0000,
"Seagate",
US_SC_DEVICE, US_PR_DEVICE, NULL,
US_FL_IGNORE_RESIDUE ),
-/* Reported by Hans de Goede <hdegoede@redhat.com>
- * These Appotech controllers are found in Picture Frames, they provide a
- * (buggy) emulation of a cdrom drive which contains the windows software
- * Uploading of pictures happens over the corresponding /dev/sg device. */
-UNUSUAL_DEV( 0x1908, 0x1315, 0x0000, 0x0000,
- "BUILDWIN",
- "Photo Frame",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- US_FL_BAD_SENSE ),
-UNUSUAL_DEV( 0x1908, 0x1320, 0x0000, 0x0000,
- "BUILDWIN",
- "Photo Frame",
- US_SC_DEVICE, US_PR_DEVICE, NULL,
- US_FL_BAD_SENSE ),
-
UNUSUAL_DEV( 0x2116, 0x0320, 0x0001, 0x0001,
"ST",
"2A",
{
struct backlight_device *bd = to_backlight_device(dev);
- mutex_lock(&bd->ops_lock);
- if (bd->ops && bd->ops->options & BL_CORE_SUSPENDRESUME) {
+ if (bd->ops->options & BL_CORE_SUSPENDRESUME) {
+ mutex_lock(&bd->ops_lock);
bd->props.state |= BL_CORE_SUSPENDED;
backlight_update_status(bd);
+ mutex_unlock(&bd->ops_lock);
}
- mutex_unlock(&bd->ops_lock);
return 0;
}
{
struct backlight_device *bd = to_backlight_device(dev);
- mutex_lock(&bd->ops_lock);
- if (bd->ops && bd->ops->options & BL_CORE_SUSPENDRESUME) {
+ if (bd->ops->options & BL_CORE_SUSPENDRESUME) {
+ mutex_lock(&bd->ops_lock);
bd->props.state &= ~BL_CORE_SUSPENDED;
backlight_update_status(bd);
+ mutex_unlock(&bd->ops_lock);
}
- mutex_unlock(&bd->ops_lock);
return 0;
}
}
static const struct dmi_system_id __initdata mbp_device_table[] = {
- {
- .callback = mbp_dmi_match,
- .ident = "MacBook 1,1",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "MacBook1,1"),
- },
- .driver_data = (void *)&intel_chipset_data,
- },
- {
- .callback = mbp_dmi_match,
- .ident = "MacBook 2,1",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "MacBook2,1"),
- },
- .driver_data = (void *)&intel_chipset_data,
- },
- {
- .callback = mbp_dmi_match,
- .ident = "MacBook 3,1",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "MacBook3,1"),
- },
- .driver_data = (void *)&intel_chipset_data,
- },
- {
- .callback = mbp_dmi_match,
- .ident = "MacBook 4,1",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "MacBook4,1"),
- },
- .driver_data = (void *)&intel_chipset_data,
- },
- {
- .callback = mbp_dmi_match,
- .ident = "MacBook 4,2",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "MacBook4,2"),
- },
- .driver_data = (void *)&intel_chipset_data,
- },
{
.callback = mbp_dmi_match,
.ident = "MacBookPro 3,1",
fbinfo->fbops = &bfin_t350mcqb_fb_ops;
fbinfo->flags = FBINFO_FLAG_DEFAULT;
- info->fb_buffer = dma_alloc_coherent(NULL, fbinfo->fix.smem_len +
- ACTIVE_VIDEO_MEM_OFFSET,
- &info->dma_handle, GFP_KERNEL);
+ info->fb_buffer =
+ dma_alloc_coherent(NULL, fbinfo->fix.smem_len, &info->dma_handle,
+ GFP_KERNEL);
if (NULL == info->fb_buffer) {
printk(KERN_ERR DRIVER_NAME
out6:
fb_dealloc_cmap(&fbinfo->cmap);
out4:
- dma_free_coherent(NULL, fbinfo->fix.smem_len + ACTIVE_VIDEO_MEM_OFFSET,
- info->fb_buffer, info->dma_handle);
+ dma_free_coherent(NULL, fbinfo->fix.smem_len, info->fb_buffer,
+ info->dma_handle);
out3:
framebuffer_release(fbinfo);
out2:
free_irq(info->irq, info);
if (info->fb_buffer != NULL)
- dma_free_coherent(NULL, fbinfo->fix.smem_len +
- ACTIVE_VIDEO_MEM_OFFSET, info->fb_buffer,
- info->dma_handle);
+ dma_free_coherent(NULL, fbinfo->fix.smem_len, info->fb_buffer,
+ info->dma_handle);
fb_dealloc_cmap(&fbinfo->cmap);
#include <linux/platform_device.h>
#include <linux/screen_info.h>
#include <linux/dmi.h>
-#include <linux/pci.h>
+
#include <video/vga.h>
static struct fb_var_screeninfo efifb_defined __initdata = {
M_I20, /* 20-Inch iMac */
M_I20_SR, /* 20-Inch iMac (Santa Rosa) */
M_I24, /* 24-Inch iMac */
- M_I24_8_1, /* 24-Inch iMac, 8,1th gen */
- M_I24_10_1, /* 24-Inch iMac, 10,1th gen */
- M_I27_11_1, /* 27-Inch iMac, 11,1th gen */
M_MINI, /* Mac Mini */
- M_MINI_3_1, /* Mac Mini, 3,1th gen */
- M_MINI_4_1, /* Mac Mini, 4,1th gen */
M_MB, /* MacBook */
M_MB_2, /* MacBook, 2nd rev. */
M_MB_3, /* MacBook, 3rd rev. */
- M_MB_5_1, /* MacBook, 5th rev. */
- M_MB_6_1, /* MacBook, 6th rev. */
- M_MB_7_1, /* MacBook, 7th rev. */
M_MB_SR, /* MacBook, 2nd gen, (Santa Rosa) */
M_MBA, /* MacBook Air */
M_MBP, /* MacBook Pro */
M_MBP_2, /* MacBook Pro 2nd gen */
- M_MBP_2_2, /* MacBook Pro 2,2nd gen */
M_MBP_SR, /* MacBook Pro (Santa Rosa) */
M_MBP_4, /* MacBook Pro, 4th gen */
- M_MBP_5_1, /* MacBook Pro, 5,1th gen */
- M_MBP_5_2, /* MacBook Pro, 5,2th gen */
- M_MBP_5_3, /* MacBook Pro, 5,3rd gen */
- M_MBP_6_1, /* MacBook Pro, 6,1th gen */
- M_MBP_6_2, /* MacBook Pro, 6,2th gen */
- M_MBP_7_1, /* MacBook Pro, 7,1th gen */
M_UNKNOWN /* placeholder */
};
[M_I20] = { "i20", 0x80010000, 1728 * 4, 1680, 1050 }, /* guess */
[M_I20_SR] = { "imac7", 0x40010000, 1728 * 4, 1680, 1050 },
[M_I24] = { "i24", 0x80010000, 2048 * 4, 1920, 1200 }, /* guess */
- [M_I24_8_1] = { "imac8", 0xc0060000, 2048 * 4, 1920, 1200 },
- [M_I24_10_1] = { "imac10", 0xc0010000, 2048 * 4, 1920, 1080 },
- [M_I27_11_1] = { "imac11", 0xc0010000, 2560 * 4, 2560, 1440 },
[M_MINI]= { "mini", 0x80000000, 2048 * 4, 1024, 768 },
- [M_MINI_3_1] = { "mini31", 0x40010000, 1024 * 4, 1024, 768 },
- [M_MINI_4_1] = { "mini41", 0xc0010000, 2048 * 4, 1920, 1200 },
[M_MB] = { "macbook", 0x80000000, 2048 * 4, 1280, 800 },
- [M_MB_5_1] = { "macbook51", 0x80010000, 2048 * 4, 1280, 800 },
- [M_MB_6_1] = { "macbook61", 0x80010000, 2048 * 4, 1280, 800 },
- [M_MB_7_1] = { "macbook71", 0x80010000, 2048 * 4, 1280, 800 },
[M_MBA] = { "mba", 0x80000000, 2048 * 4, 1280, 800 },
[M_MBP] = { "mbp", 0x80010000, 1472 * 4, 1440, 900 },
[M_MBP_2] = { "mbp2", 0, 0, 0, 0 }, /* placeholder */
- [M_MBP_2_2] = { "mbp22", 0x80010000, 1472 * 4, 1440, 900 },
[M_MBP_SR] = { "mbp3", 0x80030000, 2048 * 4, 1440, 900 },
[M_MBP_4] = { "mbp4", 0xc0060000, 2048 * 4, 1920, 1200 },
- [M_MBP_5_1] = { "mbp51", 0xc0010000, 2048 * 4, 1440, 900 },
- [M_MBP_5_2] = { "mbp52", 0xc0010000, 2048 * 4, 1920, 1200 },
- [M_MBP_5_3] = { "mbp53", 0xd0010000, 2048 * 4, 1440, 900 },
- [M_MBP_6_1] = { "mbp61", 0x90030000, 2048 * 4, 1920, 1200 },
- [M_MBP_6_2] = { "mbp62", 0x90030000, 2048 * 4, 1680, 1050 },
- [M_MBP_7_1] = { "mbp71", 0xc0010000, 2048 * 4, 1280, 800 },
[M_UNKNOWN] = { NULL, 0, 0, 0, 0 }
};
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "iMac6,1", M_I24),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "iMac6,1", M_I24),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "iMac7,1", M_I20_SR),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "iMac8,1", M_I24_8_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "iMac10,1", M_I24_10_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "iMac11,1", M_I27_11_1),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "Macmini1,1", M_MINI),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "Macmini3,1", M_MINI_3_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "Macmini4,1", M_MINI_4_1),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBook1,1", M_MB),
/* At least one of these two will be right; maybe both? */
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBook2,1", M_MB),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBook3,1", M_MB),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBook3,1", M_MB),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBook4,1", M_MB),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBook5,1", M_MB_5_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBook6,1", M_MB_6_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBook7,1", M_MB_7_1),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookAir1,1", M_MBA),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBookPro1,1", M_MBP),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBookPro2,1", M_MBP_2),
- EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBookPro2,2", M_MBP_2_2),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro2,1", M_MBP_2),
EFIFB_DMI_SYSTEM_ID("Apple Computer, Inc.", "MacBookPro3,1", M_MBP_SR),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro3,1", M_MBP_SR),
EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro4,1", M_MBP_4),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro5,1", M_MBP_5_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro5,2", M_MBP_5_2),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro5,3", M_MBP_5_3),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro6,1", M_MBP_6_1),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro6,2", M_MBP_6_2),
- EFIFB_DMI_SYSTEM_ID("Apple Inc.", "MacBookPro7,1", M_MBP_7_1),
{},
};
{
struct efifb_dmi_info *info = id->driver_data;
if (info->base == 0)
- return 0;
+ return -ENODEV;
printk(KERN_INFO "efifb: dmi detected %s - framebuffer at %p "
"(%dx%d, stride %d)\n", id->ident,
info->stride);
/* Trust the bootloader over the DMI tables */
- if (screen_info.lfb_base == 0) {
-#if defined(CONFIG_PCI)
- struct pci_dev *dev = NULL;
- int found_bar = 0;
-#endif
+ if (screen_info.lfb_base == 0)
screen_info.lfb_base = info->base;
+ if (screen_info.lfb_linelength == 0)
+ screen_info.lfb_linelength = info->stride;
+ if (screen_info.lfb_width == 0)
+ screen_info.lfb_width = info->width;
+ if (screen_info.lfb_height == 0)
+ screen_info.lfb_height = info->height;
+ if (screen_info.orig_video_isVGA == 0)
+ screen_info.orig_video_isVGA = VIDEO_TYPE_EFI;
-#if defined(CONFIG_PCI)
- /* make sure that the address in the table is actually on a
- * VGA device's PCI BAR */
-
- for_each_pci_dev(dev) {
- int i;
- if ((dev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
- continue;
- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- resource_size_t start, end;
-
- start = pci_resource_start(dev, i);
- if (start == 0)
- break;
- end = pci_resource_end(dev, i);
- if (screen_info.lfb_base >= start &&
- screen_info.lfb_base < end) {
- found_bar = 1;
- }
- }
- }
- if (!found_bar)
- screen_info.lfb_base = 0;
-#endif
- }
- if (screen_info.lfb_base) {
- if (screen_info.lfb_linelength == 0)
- screen_info.lfb_linelength = info->stride;
- if (screen_info.lfb_width == 0)
- screen_info.lfb_width = info->width;
- if (screen_info.lfb_height == 0)
- screen_info.lfb_height = info->height;
- if (screen_info.orig_video_isVGA == 0)
- screen_info.orig_video_isVGA = VIDEO_TYPE_EFI;
- } else {
- screen_info.lfb_linelength = 0;
- screen_info.lfb_width = 0;
- screen_info.lfb_height = 0;
- screen_info.orig_video_isVGA = 0;
- return 0;
- }
- return 1;
+ return 0;
}
static int efifb_setcolreg(unsigned regno, unsigned red, unsigned green,
return 0;
}
-static void efifb_destroy(struct fb_info *info)
-{
- if (info->screen_base)
- iounmap(info->screen_base);
- release_mem_region(info->aperture_base, info->aperture_size);
- framebuffer_release(info);
-}
-
static struct fb_ops efifb_ops = {
.owner = THIS_MODULE,
- .fb_destroy = efifb_destroy,
.fb_setcolreg = efifb_setcolreg,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
info->par = NULL;
info->aperture_base = efifb_fix.smem_start;
- info->aperture_size = size_remap;
+ info->aperture_size = size_total;
info->screen_base = ioremap(efifb_fix.smem_start, efifb_fix.smem_len);
if (!info->screen_base) {
return 0;
}
-static void offb_destroy(struct fb_info *info)
-{
- if (info->screen_base)
- iounmap(info->screen_base);
- release_mem_region(info->aperture_base, info->aperture_size);
- framebuffer_release(info);
-}
-
static struct fb_ops offb_ops = {
.owner = THIS_MODULE,
- .fb_destroy = offb_destroy,
.fb_setcolreg = offb_setcolreg,
.fb_set_par = offb_set_par,
.fb_blank = offb_blank,
var->sync = 0;
var->vmode = FB_VMODE_NONINTERLACED;
- /* set offb aperture size for generic probing */
- info->aperture_base = address;
- info->aperture_size = fix->smem_len;
-
info->fbops = &offb_ops;
info->screen_base = ioremap(address, fix->smem_len);
info->pseudo_palette = (void *) (info + 1);
- info->flags = FBINFO_DEFAULT | FBINFO_MISC_FIRMWARE | foreign_endian;
+ info->flags = FBINFO_DEFAULT | foreign_endian;
fb_alloc_cmap(&info->cmap, 256, 0);
break;
case FBIOGET_VBLANK:
-
- memset(&sisvbblank, 0, sizeof(struct fb_vblank));
-
sisvbblank.count = 0;
sisvbblank.flags = sisfb_setupvbblankflags(ivideo, &sisvbblank.vcount, &sisvbblank.hcount);
static int __devinit e3d_pci_register(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
- struct device_node *of_node;
- const char *device_type;
struct fb_info *info;
struct e3d_info *ep;
unsigned int line_length;
int err;
- of_node = pci_device_to_OF_node(pdev);
- if (!of_node) {
- printk(KERN_ERR "e3d: Cannot find OF node of %s\n",
- pci_name(pdev));
- return -ENODEV;
- }
-
- device_type = of_get_property(of_node, "device_type", NULL);
- if (!device_type) {
- printk(KERN_INFO "e3d: Ignoring secondary output device "
- "at %s\n", pci_name(pdev));
- return -ENODEV;
- }
-
err = pci_enable_device(pdev);
if (err < 0) {
printk(KERN_ERR "e3d: Cannot enable PCI device %s\n",
ep->info = info;
ep->pdev = pdev;
spin_lock_init(&ep->lock);
- ep->of_node = of_node;
+ ep->of_node = pci_device_to_OF_node(pdev);
+ if (!ep->of_node) {
+ printk(KERN_ERR "e3d: Cannot find OF node of %s\n",
+ pci_name(pdev));
+ err = -ENODEV;
+ goto err_release_fb;
+ }
/* Read the PCI base register of the frame buffer, which we
* need in order to interpret the RAMDAC_VID_*FB* values in
static struct pci_device_id e3d_pci_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_3DLABS, 0x7a0), },
- { PCI_DEVICE(0x1091, 0x7a0), },
{ PCI_DEVICE(PCI_VENDOR_ID_3DLABS, 0x7a2), },
{ .vendor = PCI_VENDOR_ID_3DLABS,
.device = PCI_ANY_ID,
writel(tmp, engine + 0x1C);
}
- if (op == VIA_BITBLT_FILL) {
- writel(fg_color, engine + 0x58);
- } else if (op == VIA_BITBLT_MONO) {
+ if (op != VIA_BITBLT_COLOR)
writel(fg_color, engine + 0x4C);
+
+ if (op == VIA_BITBLT_MONO)
writel(bg_color, engine + 0x50);
- }
if (op == VIA_BITBLT_FILL)
ge_cmd |= fill_rop << 24 | 0x00002000 | 0x00000001;
{
struct viafb_ioctl_info viainfo;
- memset(&viainfo, 0, sizeof(struct viafb_ioctl_info));
-
viainfo.viafb_id = VIAID;
viainfo.vendor_id = PCI_VIA_VENDOR_ID;
void w100fb_gpio_write(int port, unsigned long value)
{
if (port==W100_GPIO_PORT_A)
- writel(value, remapped_regs + mmGPIO_DATA);
+ value = writel(value, remapped_regs + mmGPIO_DATA);
else
- writel(value, remapped_regs + mmGPIO_DATA2);
+ value = writel(value, remapped_regs + mmGPIO_DATA2);
}
EXPORT_SYMBOL(w100fb_gpio_read);
EXPORT_SYMBOL(w100fb_gpio_write);
list_for_each_entry_safe(vq, n, &vdev->vqs, list) {
info = vq->priv;
- if (vp_dev->per_vq_vectors &&
- info->msix_vector != VIRTIO_MSI_NO_VECTOR)
+ if (vp_dev->per_vq_vectors)
free_irq(vp_dev->msix_entries[info->msix_vector].vector,
vq);
vp_del_vq(vq);
INIT_LIST_HEAD(&vp_dev->virtqueues);
spin_lock_init(&vp_dev->lock);
- /* Disable MSI/MSIX to bring device to a known good state. */
- pci_msi_off(pci_dev);
-
/* enable the device */
err = pci_enable_device(pci_dev);
if (err)
static inline int w1_DS18B20_convert_temp(u8 rom[9])
{
- s16 t = le16_to_cpup((__le16 *)rom);
- return t*1000/16;
+ int t = ((s16)rom[1] << 8) | rom[0];
+ t = t*1000/16;
+ return t;
}
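
The conversion above assembles the two scratchpad bytes into a signed 16-bit value and scales it to millidegrees (one LSB is 1/16 degree C at the sensor's default 12-bit resolution). A minimal userspace sketch of the same arithmetic, using a made-up scratchpad reading purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Same math as w1_DS18B20_convert_temp(): rom[0] is the LSB and
     * rom[1] the MSB of a signed reading in 1/16 degree C steps. */
    static int ds18b20_to_millidegrees(const uint8_t rom[2])
    {
        int16_t raw = (int16_t)((rom[1] << 8) | rom[0]);
        return raw * 1000 / 16;
    }

    int main(void)
    {
        uint8_t sample[2] = { 0x91, 0x01 };  /* 0x0191 = 401 -> 25.0625 C */
        printf("%d millidegrees\n", ds18b20_to_millidegrees(sample));  /* 25062 */
        return 0;
    }
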
static inline int w1_DS18S20_convert_temp(u8 rom[9])
/*
* Blackfin On-Chip Watchdog Driver
+ * Supports BF53[123]/BF53[467]/BF54[2489]/BF561
*
* Originally based on softdog.c
- * Copyright 2006-2010 Analog Devices Inc.
+ * Copyright 2006-2007 Analog Devices Inc.
* Copyright 2006-2007 Michele d'Amico
* Copyright 1996 Alan Cox <alan@lxorguk.ukuu.org.uk>
*
*/
static int bfin_wdt_set_timeout(unsigned long t)
{
- u32 cnt, max_t, sclk;
+ u32 cnt;
unsigned long flags;
- sclk = get_sclk();
- max_t = -1 / sclk;
- cnt = t * sclk;
- stamp("maxtimeout=%us newtimeout=%lus (cnt=%#x)", max_t, t, cnt);
+ stampit();
- if (t > max_t) {
+ cnt = t * get_sclk();
+ if (cnt < get_sclk()) {
printk(KERN_WARNING PFX "timeout value is too large\n");
return -EINVAL;
}
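
The restored check relies on the 32-bit product t * SCLK wrapping around and ending up smaller than one second's worth of clocks; that catches many overflows but not all of them, whereas the removed lines derived the exact maximum timeout from SCLK before multiplying. A rough userspace sketch of the difference, assuming a hypothetical 100 MHz SCLK (not taken from the driver):

    #include <stdio.h>
    #include <stdint.h>

    #define SCLK 100000000u  /* hypothetical 100 MHz system clock */

    /* Wrap-around test as in the restored code: only notices an overflow
     * when the wrapped product happens to fall below one second of clocks. */
    static int overflow_weak(uint32_t t)
    {
        uint32_t cnt = t * SCLK;
        return cnt < SCLK;
    }

    /* Exact test in the spirit of the removed code: compare against the
     * largest timeout that still fits in the 32-bit counter. */
    static int overflow_exact(uint32_t t)
    {
        return t > UINT32_MAX / SCLK;
    }

    int main(void)
    {
        for (uint32_t t = 40; t <= 45; t++)
            printf("t=%2u  weak=%d  exact=%d\n",
                   t, overflow_weak(t), overflow_exact(t));
        return 0;  /* t=44 overflows, yet the weak test misses it */
    }
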
static int hpwdt_change_timer(int new_margin)
{
/* Arbitrary, can't find the card's limits */
- if (new_margin < 5 || new_margin > 600) {
+ if (new_margin < 30 || new_margin > 600) {
printk(KERN_WARNING
"hpwdt: New value passed in is invalid: %d seconds.\n",
new_margin);
TCO_3420, /* 3420 */
TCO_3450, /* 3450 */
TCO_EP80579, /* EP80579 */
- TCO_CPT1, /* Cougar Point */
- TCO_CPT2, /* Cougar Point Desktop */
- TCO_CPT3, /* Cougar Point Mobile */
- TCO_CPT4, /* Cougar Point */
- TCO_CPT5, /* Cougar Point */
- TCO_CPT6, /* Cougar Point */
- TCO_CPT7, /* Cougar Point */
- TCO_CPT8, /* Cougar Point */
- TCO_CPT9, /* Cougar Point */
- TCO_CPT10, /* Cougar Point */
- TCO_CPT11, /* Cougar Point */
- TCO_CPT12, /* Cougar Point */
- TCO_CPT13, /* Cougar Point */
- TCO_CPT14, /* Cougar Point */
- TCO_CPT15, /* Cougar Point */
- TCO_CPT16, /* Cougar Point */
- TCO_CPT17, /* Cougar Point */
- TCO_CPT18, /* Cougar Point */
- TCO_CPT19, /* Cougar Point */
- TCO_CPT20, /* Cougar Point */
- TCO_CPT21, /* Cougar Point */
- TCO_CPT22, /* Cougar Point */
- TCO_CPT23, /* Cougar Point */
- TCO_CPT24, /* Cougar Point */
- TCO_CPT25, /* Cougar Point */
- TCO_CPT26, /* Cougar Point */
- TCO_CPT27, /* Cougar Point */
- TCO_CPT28, /* Cougar Point */
- TCO_CPT29, /* Cougar Point */
- TCO_CPT30, /* Cougar Point */
- TCO_CPT31, /* Cougar Point */
+ TCO_CPTD, /* CPT Desktop */
+ TCO_CPTM, /* CPT Mobile */
};
static struct {
{"3420", 2},
{"3450", 2},
{"EP80579", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
- {"Cougar Point", 2},
+ {"CPT Desktop", 2},
+ {"CPT Mobile", 2},
{NULL, 0}
};
{ ITCO_PCI_DEVICE(0x3b14, TCO_3420)},
{ ITCO_PCI_DEVICE(0x3b16, TCO_3450)},
{ ITCO_PCI_DEVICE(0x5031, TCO_EP80579)},
- { ITCO_PCI_DEVICE(0x1c41, TCO_CPT1)},
- { ITCO_PCI_DEVICE(0x1c42, TCO_CPT2)},
- { ITCO_PCI_DEVICE(0x1c43, TCO_CPT3)},
- { ITCO_PCI_DEVICE(0x1c44, TCO_CPT4)},
- { ITCO_PCI_DEVICE(0x1c45, TCO_CPT5)},
- { ITCO_PCI_DEVICE(0x1c46, TCO_CPT6)},
- { ITCO_PCI_DEVICE(0x1c47, TCO_CPT7)},
- { ITCO_PCI_DEVICE(0x1c48, TCO_CPT8)},
- { ITCO_PCI_DEVICE(0x1c49, TCO_CPT9)},
- { ITCO_PCI_DEVICE(0x1c4a, TCO_CPT10)},
- { ITCO_PCI_DEVICE(0x1c4b, TCO_CPT11)},
- { ITCO_PCI_DEVICE(0x1c4c, TCO_CPT12)},
- { ITCO_PCI_DEVICE(0x1c4d, TCO_CPT13)},
- { ITCO_PCI_DEVICE(0x1c4e, TCO_CPT14)},
- { ITCO_PCI_DEVICE(0x1c4f, TCO_CPT15)},
- { ITCO_PCI_DEVICE(0x1c50, TCO_CPT16)},
- { ITCO_PCI_DEVICE(0x1c51, TCO_CPT17)},
- { ITCO_PCI_DEVICE(0x1c52, TCO_CPT18)},
- { ITCO_PCI_DEVICE(0x1c53, TCO_CPT19)},
- { ITCO_PCI_DEVICE(0x1c54, TCO_CPT20)},
- { ITCO_PCI_DEVICE(0x1c55, TCO_CPT21)},
- { ITCO_PCI_DEVICE(0x1c56, TCO_CPT22)},
- { ITCO_PCI_DEVICE(0x1c57, TCO_CPT23)},
- { ITCO_PCI_DEVICE(0x1c58, TCO_CPT24)},
- { ITCO_PCI_DEVICE(0x1c59, TCO_CPT25)},
- { ITCO_PCI_DEVICE(0x1c5a, TCO_CPT26)},
- { ITCO_PCI_DEVICE(0x1c5b, TCO_CPT27)},
- { ITCO_PCI_DEVICE(0x1c5c, TCO_CPT28)},
- { ITCO_PCI_DEVICE(0x1c5d, TCO_CPT29)},
- { ITCO_PCI_DEVICE(0x1c5e, TCO_CPT30)},
- { ITCO_PCI_DEVICE(0x1c5f, TCO_CPT31)},
+ { ITCO_PCI_DEVICE(0x1c42, TCO_CPTD)},
+ { ITCO_PCI_DEVICE(0x1c43, TCO_CPTM)},
{ 0, }, /* End of list */
};
MODULE_DEVICE_TABLE(pci, iTCO_wdt_pci_tbl);
#define VALID_EVTCHN(chn) ((chn) != 0)
static struct irq_chip xen_dynamic_chip;
-static struct irq_chip xen_percpu_chip;
/* Constructor for packed IRQ information. */
static struct irq_info mk_unbound_info(void)
}
#endif
- memset(cpu_evtchn_mask(0), ~0, sizeof(struct cpu_evtchn_s));
+ memset(cpu_evtchn_mask(0), ~0, sizeof(cpu_evtchn_mask(0)));
}
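
If cpu_evtchn_mask(0) evaluates to a pointer (which is what the removed line's explicit use of the struct size suggests), then taking sizeof of it measures the pointer itself, so the restored memset clears only the first few bytes of the bitmap. A self-contained sketch of that pitfall, with a stand-in struct invented purely for illustration:

    #include <stdio.h>
    #include <string.h>

    struct cpu_mask { unsigned long bits[128]; };  /* stand-in, not the real type */

    int main(void)
    {
        static struct cpu_mask mask;
        struct cpu_mask *p = &mask;

        /* sizeof(p) is only the pointer size (4 or 8 bytes)... */
        memset(p, ~0, sizeof(p));
        /* ...while sizeof(*p), or sizeof(struct cpu_mask), covers the object. */
        memset(p, ~0, sizeof(*p));

        printf("sizeof(p)=%zu  sizeof(*p)=%zu\n", sizeof(p), sizeof(*p));
        return 0;
    }
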
static inline void clear_evtchn(int port)
irq = find_unbound_irq();
set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
- handle_edge_irq, "event");
+ handle_level_irq, "event");
evtchn_to_irq[evtchn] = irq;
irq_info[irq] = mk_evtchn_info(evtchn);
if (irq < 0)
goto out;
- set_irq_chip_and_handler_name(irq, &xen_percpu_chip,
- handle_percpu_irq, "ipi");
+ set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
+ handle_level_irq, "ipi");
bind_ipi.vcpu = cpu;
if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi,
irq = find_unbound_irq();
- set_irq_chip_and_handler_name(irq, &xen_percpu_chip,
- handle_percpu_irq, "virq");
+ set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
+ handle_level_irq, "virq");
evtchn_to_irq[evtchn] = irq;
irq_info[irq] = mk_virq_info(evtchn, virq);
if (irq < 0)
return irq;
- irqflags |= IRQF_NO_SUSPEND;
retval = request_irq(irq, handler, irqflags, devname, dev_id);
if (retval != 0) {
unbind_from_irq(irq);
.retrigger = retrigger_dynirq,
};
-static struct irq_chip xen_percpu_chip __read_mostly = {
- .name = "xen-percpu",
-
- .disable = disable_dynirq,
- .mask = disable_dynirq,
- .unmask = enable_dynirq,
-
- .ack = ack_dynirq,
-};
-
void __init xen_init_IRQ(void)
{
int i;
#define PRINTF_BUFFER_SIZE 4096
char *printf_buffer;
- printf_buffer = kmalloc(PRINTF_BUFFER_SIZE, GFP_NOIO | __GFP_HIGH);
+ printf_buffer = kmalloc(PRINTF_BUFFER_SIZE, GFP_KERNEL);
if (printf_buffer == NULL)
return -ENOMEM;
fw-shipped-all := $(fw-shipped-y) $(fw-shipped-m) $(fw-shipped-)
# Directories which we _might_ need to create, so we have a rule for them.
-firmware-dirs := $(sort $(addprefix $(objtree)/$(obj)/,$(dir $(fw-external-y) $(fw-shipped-all))))
+firmware-dirs := $(sort $(patsubst %,$(objtree)/$(obj)/%/,$(dir $(fw-external-y) $(fw-shipped-all))))
quiet_cmd_mkdir = MKDIR $(patsubst $(objtree)/%,%,$@)
cmd_mkdir = mkdir -p $@
P9_DPRINTK(P9_DEBUG_VFS, "filp: %p lock: %p\n", filp, fl);
/* No mandatory locks */
- if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK)
+ if (__mandatory_lock(inode))
return -ENOLCK;
if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
if (unlikely(nr < 0))
return -EINVAL;
- if (unlikely(nr > LONG_MAX/sizeof(*iocbpp)))
- nr = LONG_MAX/sizeof(*iocbpp);
-
if (unlikely(!access_ok(VERIFY_READ, iocbpp, (nr*sizeof(*iocbpp)))))
return -EFAULT;
{
int err = register_filesystem(&bm_fs_type);
if (!err) {
- err = insert_binfmt(&misc_format);
+ err = register_binfmt(&misc_format);
if (err)
unregister_filesystem(&bm_fs_type);
}
{
struct bio *bio;
- if (nr_iovecs > UIO_MAXIOV)
- return NULL;
-
bio = kmalloc(sizeof(struct bio) + nr_iovecs * sizeof(struct bio_vec),
gfp_mask);
if (unlikely(!bio))
static struct bio_map_data *bio_alloc_map_data(int nr_segs, int iov_count,
gfp_t gfp_mask)
{
- struct bio_map_data *bmd;
+ struct bio_map_data *bmd = kmalloc(sizeof(*bmd), gfp_mask);
- if (iov_count > UIO_MAXIOV)
- return NULL;
-
- bmd = kmalloc(sizeof(*bmd), gfp_mask);
if (!bmd)
return NULL;
end = (uaddr + iov[i].iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
start = uaddr >> PAGE_SHIFT;
- /*
- * Overflow, abort
- */
- if (end < start)
- return ERR_PTR(-EINVAL);
-
nr_pages += end - start;
len += iov[i].iov_len;
}
unsigned long end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
unsigned long start = uaddr >> PAGE_SHIFT;
- /*
- * Overflow, abort
- */
- if (end < start)
- return ERR_PTR(-EINVAL);
-
nr_pages += end - start;
/*
* buffer must be aligned to at least hardsector size for now
unsigned long start = uaddr >> PAGE_SHIFT;
const int local_nr_pages = end - start;
const int page_limit = cur_page + local_nr_pages;
-
+
ret = get_user_pages_fast(uaddr, local_nr_pages,
write_to_vm, &pages[cur_page]);
if (ret < local_nr_pages) {
* NULL first argument is nfsd_sync_dir() and that's not a directory.
*/
-int block_fsync(struct file *filp, struct dentry *dentry, int datasync)
+static int block_fsync(struct file *filp, struct dentry *dentry, int datasync)
{
return sync_blockdev(I_BDEV(filp->f_mapping->host));
}
return NULL;
return &ei->vfs_inode;
}
-EXPORT_SYMBOL(block_fsync);
static void bdev_destroy_inode(struct inode *inode)
{
/*
* hooks: /n/, see "layering violations".
*/
- if (!for_part) {
- ret = devcgroup_inode_permission(bdev->bd_inode, perm);
- if (ret != 0) {
- bdput(bdev);
- return ret;
- }
+ ret = devcgroup_inode_permission(bdev->bd_inode, perm);
+ if (ret != 0) {
+ bdput(bdev);
+ return ret;
}
lock_kernel();
/*
* Needs to be called with fs_mutex held
*/
-static int btrfs_set_acl(struct btrfs_trans_handle *trans,
- struct inode *inode, struct posix_acl *acl, int type)
+static int btrfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
{
int ret, size = 0;
const char *name;
switch (type) {
case ACL_TYPE_ACCESS:
mode = inode->i_mode;
- name = POSIX_ACL_XATTR_ACCESS;
- if (acl) {
- ret = posix_acl_equiv_mode(acl, &mode);
- if (ret < 0)
- return ret;
- inode->i_mode = mode;
- }
+ ret = posix_acl_equiv_mode(acl, &mode);
+ if (ret < 0)
+ return ret;
ret = 0;
+ inode->i_mode = mode;
+ name = POSIX_ACL_XATTR_ACCESS;
break;
case ACL_TYPE_DEFAULT:
if (!S_ISDIR(inode->i_mode))
goto out;
}
- ret = __btrfs_setxattr(trans, inode, name, value, size, 0);
+ ret = __btrfs_setxattr(inode, name, value, size, 0);
+
out:
kfree(value);
static int btrfs_xattr_set_acl(struct inode *inode, int type,
const void *value, size_t size)
{
- int ret;
+ int ret = 0;
struct posix_acl *acl = NULL;
- if (!is_owner_or_cap(inode))
- return -EPERM;
-
if (value) {
acl = posix_acl_from_xattr(value, size);
if (acl == NULL) {
}
}
- ret = btrfs_set_acl(NULL, inode, acl, type);
+ ret = btrfs_set_acl(inode, acl, type);
posix_acl_release(acl);
* stuff has been fixed to work with that. If the locking stuff changes, we
* need to re-evaluate the acl locking stuff.
*/
-int btrfs_init_acl(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir)
+int btrfs_init_acl(struct inode *inode, struct inode *dir)
{
struct posix_acl *acl = NULL;
int ret = 0;
mode_t mode;
if (S_ISDIR(inode->i_mode)) {
- ret = btrfs_set_acl(trans, inode, acl,
- ACL_TYPE_DEFAULT);
+ ret = btrfs_set_acl(inode, acl, ACL_TYPE_DEFAULT);
if (ret)
goto failed;
}
inode->i_mode = mode;
if (ret > 0) {
/* we need an acl */
- ret = btrfs_set_acl(trans, inode, clone,
+ ret = btrfs_set_acl(inode, clone,
ACL_TYPE_ACCESS);
}
}
- posix_acl_release(clone);
}
failed:
posix_acl_release(acl);
ret = posix_acl_chmod_masq(clone, inode->i_mode);
if (!ret)
- ret = btrfs_set_acl(NULL, inode, clone, ACL_TYPE_ACCESS);
+ ret = btrfs_set_acl(inode, clone, ACL_TYPE_ACCESS);
posix_acl_release(clone);
return 0;
}
-int btrfs_init_acl(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir)
+int btrfs_init_acl(struct inode *inode, struct inode *dir)
{
return 0;
}
*/
struct extent_io_tree io_failure_tree;
+	/* held while inserting or deleting extents from files */
+ struct mutex extent_mutex;
+
/* held while logging the inode in tree-log.c */
struct mutex log_mutex;
static inline void btrfs_i_size_write(struct inode *inode, u64 size)
{
- i_size_write(inode, size);
+ inode->i_size = size;
BTRFS_I(inode)->disk_i_size = size;
}
struct extent_buffer *src_buf);
static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root,
struct btrfs_path *path, int level, int slot);
-static int setup_items_for_insert(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct btrfs_path *path,
- struct btrfs_key *cpu_key, u32 *data_size,
- u32 total_data, u32 total_size, int nr);
-
struct btrfs_path *btrfs_alloc_path(void)
{
extent_buffer_get(cow);
spin_unlock(&root->node_lock);
- btrfs_free_tree_block(trans, root, buf->start, buf->len,
- parent_start, root->root_key.objectid, level);
+ btrfs_free_extent(trans, root, buf->start, buf->len,
+ parent_start, root->root_key.objectid,
+ level, 0);
free_extent_buffer(buf);
add_root_to_dirty_list(root);
} else {
btrfs_set_node_ptr_generation(parent, parent_slot,
trans->transid);
btrfs_mark_buffer_dirty(parent);
- btrfs_free_tree_block(trans, root, buf->start, buf->len,
- parent_start, root->root_key.objectid, level);
+ btrfs_free_extent(trans, root, buf->start, buf->len,
+ parent_start, root->root_key.objectid,
+ level, 0);
}
if (unlock_orig)
btrfs_tree_unlock(buf);
btrfs_tree_unlock(mid);
/* once for the path */
free_extent_buffer(mid);
- ret = btrfs_free_tree_block(trans, root, mid->start, mid->len,
- 0, root->root_key.objectid, level);
+ ret = btrfs_free_extent(trans, root, mid->start, mid->len,
+ 0, root->root_key.objectid, level, 1);
/* once for the root ptr */
free_extent_buffer(mid);
return ret;
1);
if (wret)
ret = wret;
- wret = btrfs_free_tree_block(trans, root,
- bytenr, blocksize, 0,
- root->root_key.objectid,
- level);
+ wret = btrfs_free_extent(trans, root, bytenr,
+ blocksize, 0,
+ root->root_key.objectid,
+ level, 0);
if (wret)
ret = wret;
} else {
wret = del_ptr(trans, root, path, level + 1, pslot);
if (wret)
ret = wret;
- wret = btrfs_free_tree_block(trans, root, bytenr, blocksize,
- 0, root->root_key.objectid, level);
+ wret = btrfs_free_extent(trans, root, bytenr, blocksize,
+ 0, root->root_key.objectid,
+ level, 0);
if (wret)
ret = wret;
} else {
return ret;
}
-static noinline int setup_leaf_for_split(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path, int ins_len)
+/*
+ * This function splits a single item into two items,
+ * giving 'new_key' to the new item and splitting the
+ * old one at split_offset (from the start of the item).
+ *
+ * The path may be released by this operation. After
+ * the split, the path is pointing to the old item. The
+ * new item is going to be in the same node as the old one.
+ *
+ * Note, the item being split must be small enough to live alone on
+ * a tree block with room for one extra struct btrfs_item
+ *
+ * This allows us to split the item in place, keeping a lock on the
+ * leaf the entire time.
+ */
+int btrfs_split_item(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ struct btrfs_path *path,
+ struct btrfs_key *new_key,
+ unsigned long split_offset)
{
- struct btrfs_key key;
- struct extent_buffer *leaf;
- struct btrfs_file_extent_item *fi;
- u64 extent_len = 0;
u32 item_size;
- int ret;
+ struct extent_buffer *leaf;
+ struct btrfs_key orig_key;
+ struct btrfs_item *item;
+ struct btrfs_item *new_item;
+ int ret = 0;
+ int slot;
+ u32 nritems;
+ u32 orig_offset;
+ struct btrfs_disk_key disk_key;
+ char *buf;
leaf = path->nodes[0];
- btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
-
- BUG_ON(key.type != BTRFS_EXTENT_DATA_KEY &&
- key.type != BTRFS_EXTENT_CSUM_KEY);
-
- if (btrfs_leaf_free_space(root, leaf) >= ins_len)
- return 0;
+ btrfs_item_key_to_cpu(leaf, &orig_key, path->slots[0]);
+ if (btrfs_leaf_free_space(root, leaf) >= sizeof(struct btrfs_item))
+ goto split;
item_size = btrfs_item_size_nr(leaf, path->slots[0]);
- if (key.type == BTRFS_EXTENT_DATA_KEY) {
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- extent_len = btrfs_file_extent_num_bytes(leaf, fi);
- }
btrfs_release_path(root, path);
- path->keep_locks = 1;
path->search_for_split = 1;
- ret = btrfs_search_slot(trans, root, &key, path, 0, 1);
+ path->keep_locks = 1;
+
+ ret = btrfs_search_slot(trans, root, &orig_key, path, 0, 1);
path->search_for_split = 0;
- if (ret < 0)
- goto err;
- ret = -EAGAIN;
- leaf = path->nodes[0];
/* if our item isn't there or got smaller, return now */
- if (ret > 0 || item_size != btrfs_item_size_nr(leaf, path->slots[0]))
- goto err;
-
- if (key.type == BTRFS_EXTENT_DATA_KEY) {
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- if (extent_len != btrfs_file_extent_num_bytes(leaf, fi))
- goto err;
+ if (ret != 0 || item_size != btrfs_item_size_nr(path->nodes[0],
+ path->slots[0])) {
+ path->keep_locks = 0;
+ return -EAGAIN;
}
btrfs_set_path_blocking(path);
- ret = split_leaf(trans, root, &key, path, ins_len, 1);
+ ret = split_leaf(trans, root, &orig_key, path,
+ sizeof(struct btrfs_item), 1);
+ path->keep_locks = 0;
BUG_ON(ret);
- path->keep_locks = 0;
btrfs_unlock_up_safe(path, 1);
- return 0;
-err:
- path->keep_locks = 0;
- return ret;
-}
-
-static noinline int split_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path,
- struct btrfs_key *new_key,
- unsigned long split_offset)
-{
- struct extent_buffer *leaf;
- struct btrfs_item *item;
- struct btrfs_item *new_item;
- int slot;
- char *buf;
- u32 nritems;
- u32 item_size;
- u32 orig_offset;
- struct btrfs_disk_key disk_key;
-
leaf = path->nodes[0];
BUG_ON(btrfs_leaf_free_space(root, leaf) < sizeof(struct btrfs_item));
+split:
+ /*
+ * make sure any changes to the path from split_leaf leave it
+ * in a blocking state
+ */
btrfs_set_path_blocking(path);
item = btrfs_item_nr(leaf, path->slots[0]);
item_size = btrfs_item_size(leaf, item);
buf = kmalloc(item_size, GFP_NOFS);
- if (!buf)
- return -ENOMEM;
-
read_extent_buffer(leaf, buf, btrfs_item_ptr_offset(leaf,
path->slots[0]), item_size);
-
slot = path->slots[0] + 1;
+ leaf = path->nodes[0];
+
nritems = btrfs_header_nritems(leaf);
+
if (slot != nritems) {
/* shift the items */
memmove_extent_buffer(leaf, btrfs_item_nr_offset(slot + 1),
- btrfs_item_nr_offset(slot),
- (nritems - slot) * sizeof(struct btrfs_item));
+ btrfs_item_nr_offset(slot),
+ (nritems - slot) * sizeof(struct btrfs_item));
+
}
btrfs_cpu_key_to_disk(&disk_key, new_key);
item_size - split_offset);
btrfs_mark_buffer_dirty(leaf);
- BUG_ON(btrfs_leaf_free_space(root, leaf) < 0);
+ ret = 0;
+ if (btrfs_leaf_free_space(root, leaf) < 0) {
+ btrfs_print_leaf(root, leaf);
+ BUG();
+ }
kfree(buf);
- return 0;
-}
-
-/*
- * This function splits a single item into two items,
- * giving 'new_key' to the new item and splitting the
- * old one at split_offset (from the start of the item).
- *
- * The path may be released by this operation. After
- * the split, the path is pointing to the old item. The
- * new item is going to be in the same node as the old one.
- *
- * Note, the item being split must be smaller enough to live alone on
- * a tree block with room for one extra struct btrfs_item
- *
- * This allows us to split the item in place, keeping a lock on the
- * leaf the entire time.
- */
-int btrfs_split_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path,
- struct btrfs_key *new_key,
- unsigned long split_offset)
-{
- int ret;
- ret = setup_leaf_for_split(trans, root, path,
- sizeof(struct btrfs_item));
- if (ret)
- return ret;
-
- ret = split_item(trans, root, path, new_key, split_offset);
return ret;
}
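For readers tracing the arithmetic above: after the split, the original payload of item_size bytes ends up as two adjacent leaf items, the first keeping the initial split_offset bytes and the second (carrying new_key) holding the remaining item_size - split_offset bytes. Below is a minimal user-space sketch of just that byte bookkeeping; struct toy_item and toy_split_item() are hypothetical stand-ins and do not touch any real btrfs structures.

	#include <assert.h>
	#include <stdlib.h>
	#include <string.h>

	/* hypothetical stand-in for a leaf item: a payload and its size */
	struct toy_item {
		unsigned char *data;
		size_t size;
	};

	/*
	 * Split 'orig' at 'split_offset': 'orig' keeps the first part and a
	 * second item holding the tail is written to '*second'.  This only
	 * mirrors the size bookkeeping of the in-place split described above.
	 */
	static int toy_split_item(struct toy_item *orig, size_t split_offset,
				  struct toy_item *second)
	{
		assert(split_offset <= orig->size);

		second->size = orig->size - split_offset;
		second->data = malloc(second->size);
		if (!second->data)
			return -1;
		memcpy(second->data, orig->data + split_offset, second->size);

		orig->size = split_offset;	/* first item simply shrinks */
		return 0;
	}

	int main(void)
	{
		unsigned char buf[8] = "abcdefg";
		struct toy_item orig = { buf, 7 }, tail;

		if (toy_split_item(&orig, 3, &tail) == 0) {
			/* orig now covers "abc", tail covers "defg" */
			free(tail.data);
		}
		return 0;
	}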
-/*
- * This function duplicates an item, giving 'new_key' to the new item.
- * It guarantees both items live in the same tree leaf and the new item
- * is contiguous with the original item.
- *
- * This allows us to split file extent in place, keeping a lock on the
- * leaf the entire time.
- */
-int btrfs_duplicate_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path,
- struct btrfs_key *new_key)
-{
- struct extent_buffer *leaf;
- int ret;
- u32 item_size;
-
- leaf = path->nodes[0];
- item_size = btrfs_item_size_nr(leaf, path->slots[0]);
- ret = setup_leaf_for_split(trans, root, path,
- item_size + sizeof(struct btrfs_item));
- if (ret)
- return ret;
-
- path->slots[0]++;
- ret = setup_items_for_insert(trans, root, path, new_key, &item_size,
- item_size, item_size +
- sizeof(struct btrfs_item), 1);
- BUG_ON(ret);
-
- leaf = path->nodes[0];
- memcpy_extent_buffer(leaf,
- btrfs_item_ptr_offset(leaf, path->slots[0]),
- btrfs_item_ptr_offset(leaf, path->slots[0] - 1),
- item_size);
- return 0;
-}
-
/*
* make the item pointed to by the path smaller. new_size indicates
* how small to make it, and from_end tells us if we just chop bytes
*/
btrfs_unlock_up_safe(path, 0);
- ret = btrfs_free_tree_block(trans, root, leaf->start, leaf->len,
- 0, root->root_key.objectid, 0);
+ ret = btrfs_free_extent(trans, root, leaf->start, leaf->len,
+ 0, root->root_key.objectid, 0, 0);
return ret;
}
/*
#define BTRFS_MAX_INLINE_DATA_SIZE(r) (BTRFS_LEAF_DATA_SIZE(r) - \
sizeof(struct btrfs_item) - \
sizeof(struct btrfs_file_extent_item))
-#define BTRFS_MAX_XATTR_SIZE(r) (BTRFS_LEAF_DATA_SIZE(r) - \
- sizeof(struct btrfs_item) -\
- sizeof(struct btrfs_dir_item))
/*
struct mutex ordered_operations_mutex;
struct rw_semaphore extent_commit_sem;
- struct rw_semaphore cleanup_work_sem;
-
struct rw_semaphore subvol_sem;
+
struct srcu_struct subvol_srcu;
struct list_head trans_list;
struct list_head dead_roots;
struct list_head caching_block_groups;
- spinlock_t delayed_iput_lock;
- struct list_head delayed_iputs;
-
atomic_t nr_async_submits;
atomic_t async_submit_draining;
atomic_t nr_async_bios;
int ref_cows;
int track_dirty;
int in_radix;
- int clean_orphans;
u64 defrag_trans_start;
struct btrfs_key defrag_progress;
struct btrfs_key defrag_max;
int defrag_running;
+ int defrag_level;
char *name;
int in_sysfs;
u64 parent, u64 root_objectid,
struct btrfs_disk_key *key, int level,
u64 hint, u64 empty_size);
-int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- u64 bytenr, u32 blocksize,
- u64 parent, u64 root_objectid, int level);
struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
u64 bytenr, u32 blocksize,
struct btrfs_path *path,
struct btrfs_key *new_key,
unsigned long split_offset);
-int btrfs_duplicate_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path,
- struct btrfs_key *new_key);
int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
*root, struct btrfs_key *key, struct btrfs_path *p, int
ins_len, int cow);
struct btrfs_path *path,
struct btrfs_dir_item *di);
int btrfs_insert_xattr_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path, u64 objectid,
- const char *name, u16 name_len,
- const void *data, u16 data_len);
+ struct btrfs_root *root, const char *name,
+ u16 name_len, const void *data, u16 data_len,
+ u64 dir);
struct btrfs_dir_item *btrfs_lookup_xattr(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct btrfs_path *path, u64 dir,
struct inode *inode, u64 new_size,
u32 min_type);
-int btrfs_start_delalloc_inodes(struct btrfs_root *root, int delay_iput);
+int btrfs_start_delalloc_inodes(struct btrfs_root *root);
int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end);
int btrfs_writepages(struct address_space *mapping,
struct writeback_control *wbc);
void btrfs_orphan_cleanup(struct btrfs_root *root);
int btrfs_cont_expand(struct inode *inode, loff_t size);
int btrfs_invalidate_inodes(struct btrfs_root *root);
-void btrfs_add_delayed_iput(struct inode *inode);
-void btrfs_run_delayed_iputs(struct btrfs_root *root);
extern const struct dentry_operations btrfs_dentry_operations;
/* ioctl.c */
int skip_pinned);
int btrfs_check_file(struct btrfs_root *root, struct inode *inode);
extern const struct file_operations btrfs_file_operations;
-int btrfs_drop_extents(struct btrfs_trans_handle *trans, struct inode *inode,
- u64 start, u64 end, u64 *hint_byte, int drop_cache);
+int btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root, struct inode *inode,
+ u64 start, u64 end, u64 locked_end,
+ u64 inline_limit, u64 *hint_block, int drop_cache);
int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
struct inode *inode, u64 start, u64 end);
int btrfs_release_file(struct inode *inode, struct file *file);
#else
#define btrfs_check_acl NULL
#endif
-int btrfs_init_acl(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir);
+int btrfs_init_acl(struct inode *inode, struct inode *dir);
int btrfs_acl_chmod(struct inode *inode);
/* relocation.c */
* into the tree
*/
int btrfs_insert_xattr_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct btrfs_path *path, u64 objectid,
- const char *name, u16 name_len,
- const void *data, u16 data_len)
+ struct btrfs_root *root, const char *name,
+ u16 name_len, const void *data, u16 data_len,
+ u64 dir)
{
int ret = 0;
+ struct btrfs_path *path;
struct btrfs_dir_item *dir_item;
unsigned long name_ptr, data_ptr;
struct btrfs_key key, location;
struct extent_buffer *leaf;
u32 data_size;
- BUG_ON(name_len + data_len > BTRFS_MAX_XATTR_SIZE(root));
-
- key.objectid = objectid;
+ key.objectid = dir;
btrfs_set_key_type(&key, BTRFS_XATTR_ITEM_KEY);
key.offset = btrfs_name_hash(name, name_len);
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+ if (name_len + data_len + sizeof(struct btrfs_dir_item) >
+ BTRFS_LEAF_DATA_SIZE(root) - sizeof(struct btrfs_item))
+ return -ENOSPC;
data_size = sizeof(*dir_item) + name_len + data_len;
dir_item = insert_with_overflow(trans, root, path, &key, data_size,
write_extent_buffer(leaf, data, data_ptr, data_len);
btrfs_mark_buffer_dirty(path->nodes[0]);
+ btrfs_free_path(path);
return ret;
}
root->stripesize = stripesize;
root->ref_cows = 0;
root->track_dirty = 0;
- root->in_radix = 0;
- root->clean_orphans = 0;
root->fs_info = fs_info;
root->objectid = objectid;
root->defrag_trans_start = fs_info->generation;
init_completion(&root->kobj_unregister);
root->defrag_running = 0;
+ root->defrag_level = 0;
root->root_key.objectid = objectid;
root->anon_super.s_root = NULL;
root->anon_super.s_dev = 0;
while (1) {
ret = find_first_extent_bit(&log_root_tree->dirty_log_pages,
- 0, &start, &end, EXTENT_DIRTY | EXTENT_NEW);
+ 0, &start, &end, EXTENT_DIRTY);
if (ret)
break;
- clear_extent_bits(&log_root_tree->dirty_log_pages, start, end,
- EXTENT_DIRTY | EXTENT_NEW, GFP_NOFS);
+ clear_extent_dirty(&log_root_tree->dirty_log_pages,
+ start, end, GFP_NOFS);
}
eb = fs_info->log_root_tree->node;
ret = radix_tree_insert(&fs_info->fs_roots_radix,
(unsigned long)root->root_key.objectid,
root);
- if (ret == 0) {
+ if (ret == 0)
root->in_radix = 1;
- root->clean_orphans = 1;
- }
spin_unlock(&fs_info->fs_roots_radix_lock);
radix_tree_preload_end();
if (ret) {
ret = btrfs_find_dead_roots(fs_info->tree_root,
root->root_key.objectid);
WARN_ON(ret);
+
+ if (!(fs_info->sb->s_flags & MS_RDONLY))
+ btrfs_orphan_cleanup(root);
+
return root;
fail:
free_fs_root(root);
if (!(root->fs_info->sb->s_flags & MS_RDONLY) &&
mutex_trylock(&root->fs_info->cleaner_mutex)) {
- btrfs_run_delayed_iputs(root);
btrfs_clean_old_snapshots(root);
mutex_unlock(&root->fs_info->cleaner_mutex);
}
INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
INIT_LIST_HEAD(&fs_info->trans_list);
INIT_LIST_HEAD(&fs_info->dead_roots);
- INIT_LIST_HEAD(&fs_info->delayed_iputs);
INIT_LIST_HEAD(&fs_info->hashers);
INIT_LIST_HEAD(&fs_info->delalloc_inodes);
INIT_LIST_HEAD(&fs_info->ordered_operations);
spin_lock_init(&fs_info->new_trans_lock);
spin_lock_init(&fs_info->ref_cache_lock);
spin_lock_init(&fs_info->fs_roots_radix_lock);
- spin_lock_init(&fs_info->delayed_iput_lock);
init_completion(&fs_info->kobj_unregister);
fs_info->tree_root = tree_root;
mutex_init(&fs_info->cleaner_mutex);
mutex_init(&fs_info->volume_mutex);
init_rwsem(&fs_info->extent_commit_sem);
- init_rwsem(&fs_info->cleanup_work_sem);
init_rwsem(&fs_info->subvol_sem);
btrfs_init_free_cluster(&fs_info->meta_alloc_cluster);
if (!(sb->s_flags & MS_RDONLY)) {
ret = btrfs_recover_relocation(tree_root);
- if (ret < 0) {
- printk(KERN_WARNING
- "btrfs: failed to recover relocation\n");
- err = -EINVAL;
- goto fail_trans_kthread;
- }
+ BUG_ON(ret);
}
location.objectid = BTRFS_FS_TREE_OBJECTID;
if (!fs_info->fs_root)
goto fail_trans_kthread;
- if (!(sb->s_flags & MS_RDONLY)) {
- down_read(&fs_info->cleanup_work_sem);
- btrfs_orphan_cleanup(fs_info->fs_root);
- up_read(&fs_info->cleanup_work_sem);
- }
-
return tree_root;
fail_trans_kthread:
int ret;
mutex_lock(&root->fs_info->cleaner_mutex);
- btrfs_run_delayed_iputs(root);
btrfs_clean_old_snapshots(root);
mutex_unlock(&root->fs_info->cleaner_mutex);
-
- /* wait until ongoing cleanup work done */
- down_write(&root->fs_info->cleanup_work_sem);
- up_write(&root->fs_info->cleanup_work_sem);
-
trans = btrfs_start_transaction(root, 1);
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
return (cache->flags & bits) == bits;
}
-void btrfs_get_block_group(struct btrfs_block_group_cache *cache)
-{
- atomic_inc(&cache->count);
-}
-
-void btrfs_put_block_group(struct btrfs_block_group_cache *cache)
-{
- if (atomic_dec_and_test(&cache->count))
- kfree(cache);
-}
-
/*
* this adds the block group to the fs_info rb tree for the block group
* cache
}
}
if (ret)
- btrfs_get_block_group(ret);
+ atomic_inc(&ret->count);
spin_unlock(&info->block_group_cache_lock);
return ret;
int stripe_len;
int i, nr, ret;
- if (cache->key.objectid < BTRFS_SUPER_INFO_OFFSET) {
- stripe_len = BTRFS_SUPER_INFO_OFFSET - cache->key.objectid;
- cache->bytes_super += stripe_len;
- ret = add_excluded_extent(root, cache->key.objectid,
- stripe_len);
- BUG_ON(ret);
- }
-
for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
bytenr = btrfs_sb_offset(i);
ret = btrfs_rmap_block(&root->fs_info->mapping_tree,
if (ret)
break;
- if (extent_start <= start) {
+ if (extent_start == start) {
start = extent_end + 1;
} else if (extent_start > start && extent_start < end) {
size = extent_start - start;
put_caching_control(caching_ctl);
atomic_dec(&block_group->space_info->caching_threads);
- btrfs_put_block_group(block_group);
-
return 0;
}
up_write(&fs_info->extent_commit_sem);
atomic_inc(&cache->space_info->caching_threads);
- btrfs_get_block_group(cache);
tsk = kthread_run(caching_kthread, cache, "btrfs-cache-%llu\n",
cache->key.objectid);
return cache;
}
+void btrfs_put_block_group(struct btrfs_block_group_cache *cache)
+{
+ if (atomic_dec_and_test(&cache->count))
+ kfree(cache);
+}
+
static struct btrfs_space_info *__find_space_info(struct btrfs_fs_info *info,
u64 flags)
{
if (node) {
cache = rb_entry(node, struct btrfs_block_group_cache,
cache_node);
- btrfs_get_block_group(cache);
+ atomic_inc(&cache->count);
} else
cache = NULL;
spin_unlock(&root->fs_info->block_group_cache_lock);
root = async->root;
info = async->info;
- btrfs_start_delalloc_inodes(root, 0);
+ btrfs_start_delalloc_inodes(root);
wake_up(&info->flush_wait);
- btrfs_wait_ordered_extents(root, 0, 0);
+ btrfs_wait_ordered_extents(root, 0);
spin_lock(&info->lock);
info->flushing = 0;
return;
flush:
- btrfs_start_delalloc_inodes(root, 0);
- btrfs_wait_ordered_extents(root, 0, 0);
+ btrfs_start_delalloc_inodes(root);
+ btrfs_wait_ordered_extents(root, 0);
spin_lock(&info->lock);
info->flushing = 0;
else
old_val -= num_bytes;
btrfs_set_super_bytes_used(&info->super_copy, old_val);
+
+ /* block accounting for root item */
+ old_val = btrfs_root_used(&root->root_item);
+ if (alloc)
+ old_val += num_bytes;
+ else
+ old_val -= num_bytes;
+ btrfs_set_root_used(&root->root_item, old_val);
spin_unlock(&info->delalloc_lock);
while (total) {
return ret;
}
-int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- u64 bytenr, u32 blocksize,
- u64 parent, u64 root_objectid, int level)
-{
- u64 used;
- spin_lock(&root->node_lock);
- used = btrfs_root_used(&root->root_item) - blocksize;
- btrfs_set_root_used(&root->root_item, used);
- spin_unlock(&root->node_lock);
-
- return btrfs_free_extent(trans, root, bytenr, blocksize,
- parent, root_objectid, level, 0);
-}
-
static u64 stripe_align(struct btrfs_root *root, u64 val)
{
u64 mask = ((u64)root->stripesize - 1);
u64 offset;
int cached;
- btrfs_get_block_group(block_group);
+ atomic_inc(&block_group->count);
search_start = block_group->key.objectid;
have_block_group:
btrfs_put_block_group(block_group);
block_group = last_ptr->block_group;
- btrfs_get_block_group(block_group);
+ atomic_inc(&block_group->count);
spin_unlock(&last_ptr->lock);
spin_unlock(&last_ptr->refill_lock);
{
int ret;
u64 search_start = 0;
+ struct btrfs_fs_info *info = root->fs_info;
data = btrfs_get_alloc_profile(root, data);
again:
* the only place that sets empty_size is btrfs_realloc_node, which
* is not called recursively on allocations
*/
- if (empty_size || root->ref_cows)
+ if (empty_size || root->ref_cows) {
+ if (!(data & BTRFS_BLOCK_GROUP_METADATA)) {
+ ret = do_chunk_alloc(trans, root->fs_info->extent_root,
+ 2 * 1024 * 1024,
+ BTRFS_BLOCK_GROUP_METADATA |
+ (info->metadata_alloc_profile &
+ info->avail_metadata_alloc_bits), 0);
+ }
ret = do_chunk_alloc(trans, root->fs_info->extent_root,
num_bytes + 2 * 1024 * 1024, data, 0);
+ }
WARN_ON(num_bytes < root->sectorsize);
ret = find_free_extent(trans, root, num_bytes, empty_size,
extent_op);
BUG_ON(ret);
}
-
- if (root_objectid == root->root_key.objectid) {
- u64 used;
- spin_lock(&root->node_lock);
- used = btrfs_root_used(&root->root_item) + num_bytes;
- btrfs_set_root_used(&root->root_item, used);
- spin_unlock(&root->node_lock);
- }
return ret;
}
btrfs_set_buffer_uptodate(buf);
if (root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID) {
- /*
- * we allow two log transactions at a time, use different
- * EXTENT bit to differentiate dirty pages.
- */
- if (root->log_transid % 2 == 0)
- set_extent_dirty(&root->dirty_log_pages, buf->start,
- buf->start + buf->len - 1, GFP_NOFS);
- else
- set_extent_new(&root->dirty_log_pages, buf->start,
- buf->start + buf->len - 1, GFP_NOFS);
+ set_extent_dirty(&root->dirty_log_pages, buf->start,
+ buf->start + buf->len - 1, GFP_NOFS);
} else {
set_extent_dirty(&trans->transaction->dirty_pages, buf->start,
buf->start + buf->len - 1, GFP_NOFS);
int ret;
while (level >= 0) {
+ if (path->slots[level] >=
+ btrfs_header_nritems(path->nodes[level]))
+ break;
+
ret = walk_down_proc(trans, root, path, wc, lookup_info);
if (ret > 0)
break;
if (level == 0)
break;
- if (path->slots[level] >=
- btrfs_header_nritems(path->nodes[level]))
- break;
-
ret = do_walk_down(trans, root, path, wc, &lookup_info);
if (ret > 0) {
path->slots[level]++;
wait_block_group_cache_done(block_group);
btrfs_remove_free_space_cache(block_group);
- btrfs_put_block_group(block_group);
+
+ WARN_ON(atomic_read(&block_group->count) != 1);
+ kfree(block_group);
spin_lock(&info->block_group_cache_lock);
}
spin_unlock(&tree->buffer_lock);
goto free_eb;
}
+ spin_unlock(&tree->buffer_lock);
+
/* add one reference for the tree */
atomic_inc(&eb->refs);
- spin_unlock(&tree->buffer_lock);
return eb;
free_eb:
}
flags = em->flags;
if (skip_pinned && test_bit(EXTENT_FLAG_PINNED, &em->flags)) {
- if (testend && em->start + em->len >= start + len) {
+ if (em->start <= start &&
+ (!testend || em->start + em->len >= start + len)) {
free_extent_map(em);
write_unlock(&em_tree->lock);
break;
}
- start = em->start + em->len;
- if (testend)
+ if (start < em->start) {
+ len = em->start - start;
+ } else {
len = start + len - (em->start + em->len);
+ start = em->start + em->len;
+ }
free_extent_map(em);
write_unlock(&em_tree->lock);
continue;
* If an extent intersects the range but is not entirely inside the range
* it is either truncated or split. Anything entirely inside the range
* is deleted from the tree.
+ *
+ * inline_limit is used to tell this code which offsets in the file to keep
+ * if they contain inline extents.
*/
-int btrfs_drop_extents(struct btrfs_trans_handle *trans, struct inode *inode,
- u64 start, u64 end, u64 *hint_byte, int drop_cache)
+noinline int btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root, struct inode *inode,
+ u64 start, u64 end, u64 locked_end,
+ u64 inline_limit, u64 *hint_byte, int drop_cache)
{
- struct btrfs_root *root = BTRFS_I(inode)->root;
+ u64 extent_end = 0;
+ u64 search_start = start;
+ u64 ram_bytes = 0;
+ u64 disk_bytenr = 0;
+ u64 orig_locked_end = locked_end;
+ u8 compression;
+ u8 encryption;
+ u16 other_encoding = 0;
struct extent_buffer *leaf;
- struct btrfs_file_extent_item *fi;
+ struct btrfs_file_extent_item *extent;
struct btrfs_path *path;
struct btrfs_key key;
- struct btrfs_key new_key;
- u64 search_start = start;
- u64 disk_bytenr = 0;
- u64 num_bytes = 0;
- u64 extent_offset = 0;
- u64 extent_end = 0;
- int del_nr = 0;
- int del_slot = 0;
- int extent_type;
+ struct btrfs_file_extent_item old;
+ int keep;
+ int slot;
+ int bookend;
+ int found_type = 0;
+ int found_extent;
+ int found_inline;
int recow;
int ret;
+ inline_limit = 0;
if (drop_cache)
btrfs_drop_extent_cache(inode, start, end - 1, 0);
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
-
while (1) {
recow = 0;
+ btrfs_release_path(root, path);
ret = btrfs_lookup_file_extent(trans, root, path, inode->i_ino,
search_start, -1);
if (ret < 0)
- break;
- if (ret > 0 && path->slots[0] > 0 && search_start == start) {
- leaf = path->nodes[0];
- btrfs_item_key_to_cpu(leaf, &key, path->slots[0] - 1);
- if (key.objectid == inode->i_ino &&
- key.type == BTRFS_EXTENT_DATA_KEY)
- path->slots[0]--;
- }
- ret = 0;
-next_slot:
- leaf = path->nodes[0];
- if (path->slots[0] >= btrfs_header_nritems(leaf)) {
- BUG_ON(del_nr > 0);
- ret = btrfs_next_leaf(root, path);
- if (ret < 0)
- break;
- if (ret > 0) {
+ goto out;
+ if (ret > 0) {
+ if (path->slots[0] == 0) {
ret = 0;
- break;
+ goto out;
}
- leaf = path->nodes[0];
- recow = 1;
+ path->slots[0]--;
}
-
- btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
- if (key.objectid > inode->i_ino ||
- key.type > BTRFS_EXTENT_DATA_KEY || key.offset >= end)
- break;
-
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- extent_type = btrfs_file_extent_type(leaf, fi);
-
- if (extent_type == BTRFS_FILE_EXTENT_REG ||
- extent_type == BTRFS_FILE_EXTENT_PREALLOC) {
- disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, fi);
- num_bytes = btrfs_file_extent_disk_num_bytes(leaf, fi);
- extent_offset = btrfs_file_extent_offset(leaf, fi);
- extent_end = key.offset +
- btrfs_file_extent_num_bytes(leaf, fi);
- } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
- extent_end = key.offset +
- btrfs_file_extent_inline_len(leaf, fi);
- } else {
- WARN_ON(1);
- extent_end = search_start;
+next_slot:
+ keep = 0;
+ bookend = 0;
+ found_extent = 0;
+ found_inline = 0;
+ compression = 0;
+ encryption = 0;
+ extent = NULL;
+ leaf = path->nodes[0];
+ slot = path->slots[0];
+ ret = 0;
+ btrfs_item_key_to_cpu(leaf, &key, slot);
+ if (btrfs_key_type(&key) == BTRFS_EXTENT_DATA_KEY &&
+ key.offset >= end) {
+ goto out;
}
-
- if (extent_end <= search_start) {
- path->slots[0]++;
- goto next_slot;
+ if (btrfs_key_type(&key) > BTRFS_EXTENT_DATA_KEY ||
+ key.objectid != inode->i_ino) {
+ goto out;
}
-
- search_start = max(key.offset, start);
if (recow) {
- btrfs_release_path(root, path);
+ search_start = max(key.offset, start);
continue;
}
+ if (btrfs_key_type(&key) == BTRFS_EXTENT_DATA_KEY) {
+ extent = btrfs_item_ptr(leaf, slot,
+ struct btrfs_file_extent_item);
+ found_type = btrfs_file_extent_type(leaf, extent);
+ compression = btrfs_file_extent_compression(leaf,
+ extent);
+ encryption = btrfs_file_extent_encryption(leaf,
+ extent);
+ other_encoding = btrfs_file_extent_other_encoding(leaf,
+ extent);
+ if (found_type == BTRFS_FILE_EXTENT_REG ||
+ found_type == BTRFS_FILE_EXTENT_PREALLOC) {
+ extent_end =
+ btrfs_file_extent_disk_bytenr(leaf,
+ extent);
+ if (extent_end)
+ *hint_byte = extent_end;
+
+ extent_end = key.offset +
+ btrfs_file_extent_num_bytes(leaf, extent);
+ ram_bytes = btrfs_file_extent_ram_bytes(leaf,
+ extent);
+ found_extent = 1;
+ } else if (found_type == BTRFS_FILE_EXTENT_INLINE) {
+ found_inline = 1;
+ extent_end = key.offset +
+ btrfs_file_extent_inline_len(leaf, extent);
+ }
+ } else {
+ extent_end = search_start;
+ }
- /*
- * | - range to drop - |
- * | -------- extent -------- |
- */
- if (start > key.offset && end < extent_end) {
- BUG_ON(del_nr > 0);
- BUG_ON(extent_type == BTRFS_FILE_EXTENT_INLINE);
-
- memcpy(&new_key, &key, sizeof(new_key));
- new_key.offset = start;
- ret = btrfs_duplicate_item(trans, root, path,
- &new_key);
- if (ret == -EAGAIN) {
- btrfs_release_path(root, path);
- continue;
+ /* we found nothing we can drop */
+ if ((!found_extent && !found_inline) ||
+ search_start >= extent_end) {
+ int nextret;
+ u32 nritems;
+ nritems = btrfs_header_nritems(leaf);
+ if (slot >= nritems - 1) {
+ nextret = btrfs_next_leaf(root, path);
+ if (nextret)
+ goto out;
+ recow = 1;
+ } else {
+ path->slots[0]++;
}
- if (ret < 0)
- break;
+ goto next_slot;
+ }
- leaf = path->nodes[0];
- fi = btrfs_item_ptr(leaf, path->slots[0] - 1,
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- start - key.offset);
+ if (end <= extent_end && start >= key.offset && found_inline)
+ *hint_byte = EXTENT_MAP_INLINE;
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
+ if (found_extent) {
+ read_extent_buffer(leaf, &old, (unsigned long)extent,
+ sizeof(old));
+ }
- extent_offset += start - key.offset;
- btrfs_set_file_extent_offset(leaf, fi, extent_offset);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- extent_end - start);
- btrfs_mark_buffer_dirty(leaf);
+ if (end < extent_end && end >= key.offset) {
+ bookend = 1;
+ if (found_inline && start <= key.offset)
+ keep = 1;
+ }
- if (disk_bytenr > 0) {
+ if (bookend && found_extent) {
+ if (locked_end < extent_end) {
+ ret = try_lock_extent(&BTRFS_I(inode)->io_tree,
+ locked_end, extent_end - 1,
+ GFP_NOFS);
+ if (!ret) {
+ btrfs_release_path(root, path);
+ lock_extent(&BTRFS_I(inode)->io_tree,
+ locked_end, extent_end - 1,
+ GFP_NOFS);
+ locked_end = extent_end;
+ continue;
+ }
+ locked_end = extent_end;
+ }
+ disk_bytenr = le64_to_cpu(old.disk_bytenr);
+ if (disk_bytenr != 0) {
ret = btrfs_inc_extent_ref(trans, root,
- disk_bytenr, num_bytes, 0,
- root->root_key.objectid,
- new_key.objectid,
- start - extent_offset);
+ disk_bytenr,
+ le64_to_cpu(old.disk_num_bytes), 0,
+ root->root_key.objectid,
+ key.objectid, key.offset -
+ le64_to_cpu(old.offset));
BUG_ON(ret);
- *hint_byte = disk_bytenr;
}
- key.offset = start;
}
- /*
- * | ---- range to drop ----- |
- * | -------- extent -------- |
- */
- if (start <= key.offset && end < extent_end) {
- BUG_ON(extent_type == BTRFS_FILE_EXTENT_INLINE);
- memcpy(&new_key, &key, sizeof(new_key));
- new_key.offset = end;
- btrfs_set_item_key_safe(trans, root, path, &new_key);
-
- extent_offset += end - key.offset;
- btrfs_set_file_extent_offset(leaf, fi, extent_offset);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- extent_end - end);
- btrfs_mark_buffer_dirty(leaf);
- if (disk_bytenr > 0) {
- inode_sub_bytes(inode, end - key.offset);
- *hint_byte = disk_bytenr;
+ if (found_inline) {
+ u64 mask = root->sectorsize - 1;
+ search_start = (extent_end + mask) & ~mask;
+ } else
+ search_start = extent_end;
+
+ /* truncate existing extent */
+ if (start > key.offset) {
+ u64 new_num;
+ u64 old_num;
+ keep = 1;
+ WARN_ON(start & (root->sectorsize - 1));
+ if (found_extent) {
+ new_num = start - key.offset;
+ old_num = btrfs_file_extent_num_bytes(leaf,
+ extent);
+ *hint_byte =
+ btrfs_file_extent_disk_bytenr(leaf,
+ extent);
+ if (btrfs_file_extent_disk_bytenr(leaf,
+ extent)) {
+ inode_sub_bytes(inode, old_num -
+ new_num);
+ }
+ btrfs_set_file_extent_num_bytes(leaf,
+ extent, new_num);
+ btrfs_mark_buffer_dirty(leaf);
+ } else if (key.offset < inline_limit &&
+ (end > extent_end) &&
+ (inline_limit < extent_end)) {
+ u32 new_size;
+ new_size = btrfs_file_extent_calc_inline_size(
+ inline_limit - key.offset);
+ inode_sub_bytes(inode, extent_end -
+ inline_limit);
+ btrfs_set_file_extent_ram_bytes(leaf, extent,
+ new_size);
+ if (!compression && !encryption) {
+ btrfs_truncate_item(trans, root, path,
+ new_size, 1);
+ }
}
- break;
}
+ /* delete the entire extent */
+ if (!keep) {
+ if (found_inline)
+ inode_sub_bytes(inode, extent_end -
+ key.offset);
+ ret = btrfs_del_item(trans, root, path);
+ /* TODO update progress marker and return */
+ BUG_ON(ret);
+ extent = NULL;
+ btrfs_release_path(root, path);
+ /* the extent will be freed later */
+ }
+ if (bookend && found_inline && start <= key.offset) {
+ u32 new_size;
+ new_size = btrfs_file_extent_calc_inline_size(
+ extent_end - end);
+ inode_sub_bytes(inode, end - key.offset);
+ btrfs_set_file_extent_ram_bytes(leaf, extent,
+ new_size);
+ if (!compression && !encryption)
+ ret = btrfs_truncate_item(trans, root, path,
+ new_size, 0);
+ BUG_ON(ret);
+ }
+ /* create bookend, splitting the extent in two */
+ if (bookend && found_extent) {
+ struct btrfs_key ins;
+ ins.objectid = inode->i_ino;
+ ins.offset = end;
+ btrfs_set_key_type(&ins, BTRFS_EXTENT_DATA_KEY);
- search_start = extent_end;
- /*
- * | ---- range to drop ----- |
- * | -------- extent -------- |
- */
- if (start > key.offset && end >= extent_end) {
- BUG_ON(del_nr > 0);
- BUG_ON(extent_type == BTRFS_FILE_EXTENT_INLINE);
+ btrfs_release_path(root, path);
+ path->leave_spinning = 1;
+ ret = btrfs_insert_empty_item(trans, root, path, &ins,
+ sizeof(*extent));
+ BUG_ON(ret);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- start - key.offset);
- btrfs_mark_buffer_dirty(leaf);
- if (disk_bytenr > 0) {
- inode_sub_bytes(inode, extent_end - start);
- *hint_byte = disk_bytenr;
- }
- if (end == extent_end)
- break;
+ leaf = path->nodes[0];
+ extent = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_file_extent_item);
+ write_extent_buffer(leaf, &old,
+ (unsigned long)extent, sizeof(old));
+
+ btrfs_set_file_extent_compression(leaf, extent,
+ compression);
+ btrfs_set_file_extent_encryption(leaf, extent,
+ encryption);
+ btrfs_set_file_extent_other_encoding(leaf, extent,
+ other_encoding);
+ btrfs_set_file_extent_offset(leaf, extent,
+ le64_to_cpu(old.offset) + end - key.offset);
+ WARN_ON(le64_to_cpu(old.num_bytes) <
+ (extent_end - end));
+ btrfs_set_file_extent_num_bytes(leaf, extent,
+ extent_end - end);
- path->slots[0]++;
- goto next_slot;
+ /*
+ * set the ram bytes to the size of the full extent
+ * before splitting. This is a worst case flag,
+ * but it's the best we can do because we don't know
+ * how splitting affects compression
+ */
+ btrfs_set_file_extent_ram_bytes(leaf, extent,
+ ram_bytes);
+ btrfs_set_file_extent_type(leaf, extent, found_type);
+
+ btrfs_unlock_up_safe(path, 1);
+ btrfs_mark_buffer_dirty(path->nodes[0]);
+ btrfs_set_lock_blocking(path->nodes[0]);
+
+ path->leave_spinning = 0;
+ btrfs_release_path(root, path);
+ if (disk_bytenr != 0)
+ inode_add_bytes(inode, extent_end - end);
}
- /*
- * | ---- range to drop ----- |
- * | ------ extent ------ |
- */
- if (start <= key.offset && end >= extent_end) {
- if (del_nr == 0) {
- del_slot = path->slots[0];
- del_nr = 1;
- } else {
- BUG_ON(del_slot + del_nr != path->slots[0]);
- del_nr++;
- }
+ if (found_extent && !keep) {
+ u64 old_disk_bytenr = le64_to_cpu(old.disk_bytenr);
- if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
+ if (old_disk_bytenr != 0) {
inode_sub_bytes(inode,
- extent_end - key.offset);
- extent_end = ALIGN(extent_end,
- root->sectorsize);
- } else if (disk_bytenr > 0) {
+ le64_to_cpu(old.num_bytes));
ret = btrfs_free_extent(trans, root,
- disk_bytenr, num_bytes, 0,
- root->root_key.objectid,
+ old_disk_bytenr,
+ le64_to_cpu(old.disk_num_bytes),
+ 0, root->root_key.objectid,
key.objectid, key.offset -
- extent_offset);
+ le64_to_cpu(old.offset));
BUG_ON(ret);
- inode_sub_bytes(inode,
- extent_end - key.offset);
- *hint_byte = disk_bytenr;
- }
-
- if (end == extent_end)
- break;
-
- if (path->slots[0] + 1 < btrfs_header_nritems(leaf)) {
- path->slots[0]++;
- goto next_slot;
+ *hint_byte = old_disk_bytenr;
}
-
- ret = btrfs_del_items(trans, root, path, del_slot,
- del_nr);
- BUG_ON(ret);
-
- del_nr = 0;
- del_slot = 0;
-
- btrfs_release_path(root, path);
- continue;
}
- BUG_ON(1);
- }
-
- if (del_nr > 0) {
- ret = btrfs_del_items(trans, root, path, del_slot, del_nr);
- BUG_ON(ret);
+ if (search_start >= end) {
+ ret = 0;
+ goto out;
+ }
}
-
+out:
btrfs_free_path(path);
+ if (locked_end > orig_locked_end) {
+ unlock_extent(&BTRFS_I(inode)->io_tree, orig_locked_end,
+ locked_end - 1, GFP_NOFS);
+ }
return ret;
}
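The comment above btrfs_drop_extents() explains that each file extent intersecting the drop range is either truncated, split in two (a "bookend"), or deleted outright. The sketch below is only a simplified, user-space classification of those overlap cases; the enum names and the classify() helper are hypothetical, and the real code additionally updates byte accounting, extent references and inline items.

	#include <stdint.h>
	#include <stdio.h>

	enum drop_action { KEEP, DELETE_WHOLE, TRUNC_TAIL, TRUNC_FRONT, SPLIT };

	/*
	 * Classify how one extent [e_start, e_end) overlaps the range being
	 * dropped [d_start, d_end).
	 */
	static enum drop_action classify(uint64_t e_start, uint64_t e_end,
					 uint64_t d_start, uint64_t d_end)
	{
		if (e_end <= d_start || e_start >= d_end)
			return KEEP;		/* no overlap at all */
		if (d_start <= e_start && d_end >= e_end)
			return DELETE_WHOLE;	/* fully inside the drop range */
		if (d_start > e_start && d_end < e_end)
			return SPLIT;		/* hole in the middle, bookend both ends */
		if (d_start > e_start)
			return TRUNC_TAIL;	/* keep only the front of the extent */
		return TRUNC_FRONT;		/* keep only the tail (bookend) */
	}

	int main(void)
	{
		/* extent [0, 100) vs drop range [40, 60) prints 4 (SPLIT) */
		printf("%d\n", classify(0, 100, 40, 60));
		return 0;
	}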
static int extent_mergeable(struct extent_buffer *leaf, int slot,
- u64 objectid, u64 bytenr, u64 orig_offset,
- u64 *start, u64 *end)
+ u64 objectid, u64 bytenr, u64 *start, u64 *end)
{
struct btrfs_file_extent_item *fi;
struct btrfs_key key;
fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
if (btrfs_file_extent_type(leaf, fi) != BTRFS_FILE_EXTENT_REG ||
btrfs_file_extent_disk_bytenr(leaf, fi) != bytenr ||
- btrfs_file_extent_offset(leaf, fi) != key.offset - orig_offset ||
btrfs_file_extent_compression(leaf, fi) ||
btrfs_file_extent_encryption(leaf, fi) ||
btrfs_file_extent_other_encoding(leaf, fi))
* two or three.
*/
int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
struct inode *inode, u64 start, u64 end)
{
- struct btrfs_root *root = BTRFS_I(inode)->root;
struct extent_buffer *leaf;
struct btrfs_path *path;
struct btrfs_file_extent_item *fi;
struct btrfs_key key;
- struct btrfs_key new_key;
u64 bytenr;
u64 num_bytes;
u64 extent_end;
u64 orig_offset;
u64 other_start;
u64 other_end;
- u64 split;
- int del_nr = 0;
- int del_slot = 0;
- int recow;
+ u64 split = start;
+ u64 locked_end = end;
+ int extent_type;
+ int split_end = 1;
int ret;
btrfs_drop_extent_cache(inode, start, end - 1, 0);
path = btrfs_alloc_path();
BUG_ON(!path);
again:
- recow = 0;
- split = start;
key.objectid = inode->i_ino;
key.type = BTRFS_EXTENT_DATA_KEY;
- key.offset = split;
+ if (split == start)
+ key.offset = split;
+ else
+ key.offset = split - 1;
ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
if (ret > 0 && path->slots[0] > 0)
key.type != BTRFS_EXTENT_DATA_KEY);
fi = btrfs_item_ptr(leaf, path->slots[0],
struct btrfs_file_extent_item);
- BUG_ON(btrfs_file_extent_type(leaf, fi) !=
- BTRFS_FILE_EXTENT_PREALLOC);
+ extent_type = btrfs_file_extent_type(leaf, fi);
+ BUG_ON(extent_type != BTRFS_FILE_EXTENT_PREALLOC);
extent_end = key.offset + btrfs_file_extent_num_bytes(leaf, fi);
BUG_ON(key.offset > start || extent_end < end);
bytenr = btrfs_file_extent_disk_bytenr(leaf, fi);
num_bytes = btrfs_file_extent_disk_num_bytes(leaf, fi);
orig_offset = key.offset - btrfs_file_extent_offset(leaf, fi);
- memcpy(&new_key, &key, sizeof(new_key));
- if (start == key.offset && end < extent_end) {
- other_start = 0;
- other_end = start;
- if (extent_mergeable(leaf, path->slots[0] - 1,
- inode->i_ino, bytenr, orig_offset,
- &other_start, &other_end)) {
- new_key.offset = end;
- btrfs_set_item_key_safe(trans, root, path, &new_key);
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- extent_end - end);
- btrfs_set_file_extent_offset(leaf, fi,
- end - orig_offset);
- fi = btrfs_item_ptr(leaf, path->slots[0] - 1,
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- end - other_start);
- btrfs_mark_buffer_dirty(leaf);
- goto out;
- }
- }
+ if (key.offset == start)
+ split = end;
- if (start > key.offset && end == extent_end) {
+ if (key.offset == start && extent_end == end) {
+ int del_nr = 0;
+ int del_slot = 0;
other_start = end;
other_end = 0;
- if (extent_mergeable(leaf, path->slots[0] + 1,
- inode->i_ino, bytenr, orig_offset,
- &other_start, &other_end)) {
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- start - key.offset);
- path->slots[0]++;
- new_key.offset = start;
- btrfs_set_item_key_safe(trans, root, path, &new_key);
-
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- other_end - start);
- btrfs_set_file_extent_offset(leaf, fi,
- start - orig_offset);
- btrfs_mark_buffer_dirty(leaf);
- goto out;
+ if (extent_mergeable(leaf, path->slots[0] + 1, inode->i_ino,
+ bytenr, &other_start, &other_end)) {
+ extent_end = other_end;
+ del_slot = path->slots[0] + 1;
+ del_nr++;
+ ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
+ 0, root->root_key.objectid,
+ inode->i_ino, orig_offset);
+ BUG_ON(ret);
}
- }
-
- while (start > key.offset || end < extent_end) {
- if (key.offset == start)
- split = end;
-
- new_key.offset = split;
- ret = btrfs_duplicate_item(trans, root, path, &new_key);
- if (ret == -EAGAIN) {
- btrfs_release_path(root, path);
- goto again;
+ other_start = 0;
+ other_end = start;
+ if (extent_mergeable(leaf, path->slots[0] - 1, inode->i_ino,
+ bytenr, &other_start, &other_end)) {
+ key.offset = other_start;
+ del_slot = path->slots[0];
+ del_nr++;
+ ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
+ 0, root->root_key.objectid,
+ inode->i_ino, orig_offset);
+ BUG_ON(ret);
+ }
+ split_end = 0;
+ if (del_nr == 0) {
+ btrfs_set_file_extent_type(leaf, fi,
+ BTRFS_FILE_EXTENT_REG);
+ goto done;
}
- BUG_ON(ret < 0);
-
- leaf = path->nodes[0];
- fi = btrfs_item_ptr(leaf, path->slots[0] - 1,
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- split - key.offset);
- fi = btrfs_item_ptr(leaf, path->slots[0],
+ fi = btrfs_item_ptr(leaf, del_slot - 1,
struct btrfs_file_extent_item);
-
- btrfs_set_file_extent_offset(leaf, fi, split - orig_offset);
+ btrfs_set_file_extent_type(leaf, fi, BTRFS_FILE_EXTENT_REG);
btrfs_set_file_extent_num_bytes(leaf, fi,
- extent_end - split);
+ extent_end - key.offset);
btrfs_mark_buffer_dirty(leaf);
- ret = btrfs_inc_extent_ref(trans, root, bytenr, num_bytes, 0,
- root->root_key.objectid,
- inode->i_ino, orig_offset);
+ ret = btrfs_del_items(trans, root, path, del_slot, del_nr);
BUG_ON(ret);
-
- if (split == start) {
- key.offset = start;
- } else {
- BUG_ON(start != key.offset);
- path->slots[0]--;
- extent_end = end;
+ goto release;
+ } else if (split == start) {
+ if (locked_end < extent_end) {
+ ret = try_lock_extent(&BTRFS_I(inode)->io_tree,
+ locked_end, extent_end - 1, GFP_NOFS);
+ if (!ret) {
+ btrfs_release_path(root, path);
+ lock_extent(&BTRFS_I(inode)->io_tree,
+ locked_end, extent_end - 1, GFP_NOFS);
+ locked_end = extent_end;
+ goto again;
+ }
+ locked_end = extent_end;
}
- recow = 1;
+ btrfs_set_file_extent_num_bytes(leaf, fi, split - key.offset);
+ } else {
+ BUG_ON(key.offset != start);
+ key.offset = split;
+ btrfs_set_file_extent_offset(leaf, fi, key.offset -
+ orig_offset);
+ btrfs_set_file_extent_num_bytes(leaf, fi, extent_end - split);
+ btrfs_set_item_key_safe(trans, root, path, &key);
+ extent_end = split;
}
- other_start = end;
- other_end = 0;
- if (extent_mergeable(leaf, path->slots[0] + 1,
- inode->i_ino, bytenr, orig_offset,
- &other_start, &other_end)) {
- if (recow) {
- btrfs_release_path(root, path);
- goto again;
+ if (extent_end == end) {
+ split_end = 0;
+ extent_type = BTRFS_FILE_EXTENT_REG;
+ }
+ if (extent_end == end && split == start) {
+ other_start = end;
+ other_end = 0;
+ if (extent_mergeable(leaf, path->slots[0] + 1, inode->i_ino,
+ bytenr, &other_start, &other_end)) {
+ path->slots[0]++;
+ fi = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_file_extent_item);
+ key.offset = split;
+ btrfs_set_item_key_safe(trans, root, path, &key);
+ btrfs_set_file_extent_offset(leaf, fi, key.offset -
+ orig_offset);
+ btrfs_set_file_extent_num_bytes(leaf, fi,
+ other_end - split);
+ goto done;
}
- extent_end = other_end;
- del_slot = path->slots[0] + 1;
- del_nr++;
- ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
- 0, root->root_key.objectid,
- inode->i_ino, orig_offset);
- BUG_ON(ret);
}
- other_start = 0;
- other_end = start;
- if (extent_mergeable(leaf, path->slots[0] - 1,
- inode->i_ino, bytenr, orig_offset,
- &other_start, &other_end)) {
- if (recow) {
- btrfs_release_path(root, path);
- goto again;
+ if (extent_end == end && split == end) {
+ other_start = 0;
+ other_end = start;
+ if (extent_mergeable(leaf, path->slots[0] - 1, inode->i_ino,
+ bytenr, &other_start, &other_end)) {
+ path->slots[0]--;
+ fi = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_file_extent_item);
+ btrfs_set_file_extent_num_bytes(leaf, fi, extent_end -
+ other_start);
+ goto done;
}
- key.offset = other_start;
- del_slot = path->slots[0];
- del_nr++;
- ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
- 0, root->root_key.objectid,
- inode->i_ino, orig_offset);
- BUG_ON(ret);
}
- if (del_nr == 0) {
- fi = btrfs_item_ptr(leaf, path->slots[0],
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_type(leaf, fi,
- BTRFS_FILE_EXTENT_REG);
- btrfs_mark_buffer_dirty(leaf);
- } else {
- fi = btrfs_item_ptr(leaf, del_slot - 1,
- struct btrfs_file_extent_item);
- btrfs_set_file_extent_type(leaf, fi,
- BTRFS_FILE_EXTENT_REG);
- btrfs_set_file_extent_num_bytes(leaf, fi,
- extent_end - key.offset);
- btrfs_mark_buffer_dirty(leaf);
- ret = btrfs_del_items(trans, root, path, del_slot, del_nr);
- BUG_ON(ret);
+ btrfs_mark_buffer_dirty(leaf);
+
+ ret = btrfs_inc_extent_ref(trans, root, bytenr, num_bytes, 0,
+ root->root_key.objectid,
+ inode->i_ino, orig_offset);
+ BUG_ON(ret);
+ btrfs_release_path(root, path);
+
+ key.offset = start;
+ ret = btrfs_insert_empty_item(trans, root, path, &key, sizeof(*fi));
+ BUG_ON(ret);
+
+ leaf = path->nodes[0];
+ fi = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_file_extent_item);
+ btrfs_set_file_extent_generation(leaf, fi, trans->transid);
+ btrfs_set_file_extent_type(leaf, fi, extent_type);
+ btrfs_set_file_extent_disk_bytenr(leaf, fi, bytenr);
+ btrfs_set_file_extent_disk_num_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_offset(leaf, fi, key.offset - orig_offset);
+ btrfs_set_file_extent_num_bytes(leaf, fi, extent_end - key.offset);
+ btrfs_set_file_extent_ram_bytes(leaf, fi, num_bytes);
+ btrfs_set_file_extent_compression(leaf, fi, 0);
+ btrfs_set_file_extent_encryption(leaf, fi, 0);
+ btrfs_set_file_extent_other_encoding(leaf, fi, 0);
+done:
+ btrfs_mark_buffer_dirty(leaf);
+
+release:
+ btrfs_release_path(root, path);
+ if (split_end && split == start) {
+ split = end;
+ goto again;
+ }
+ if (locked_end > end) {
+ unlock_extent(&BTRFS_I(inode)->io_tree, end, locked_end - 1,
+ GFP_NOFS);
}
-out:
btrfs_free_path(path);
return 0;
}
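btrfs_mark_extent_written() converts the written part of a preallocated extent into a regular extent, which can split the original extent into up to three pieces (or simply retype it when the whole extent was written). The sketch below is a purely illustrative model of the resulting boundaries; mark_written() is a hypothetical helper and ignores the merging with neighbouring extents that the real function also performs.

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Toy model: marking [w_start, w_end) written inside a preallocated
	 * extent [p_start, p_end) yields up to three pieces, the middle one
	 * becoming a regular extent.
	 */
	static int mark_written(uint64_t p_start, uint64_t p_end,
				uint64_t w_start, uint64_t w_end)
	{
		int pieces = 0;

		if (w_start > p_start) {
			printf("prealloc [%llu, %llu)\n",
			       (unsigned long long)p_start,
			       (unsigned long long)w_start);
			pieces++;
		}
		printf("regular  [%llu, %llu)\n",
		       (unsigned long long)w_start,
		       (unsigned long long)w_end);
		pieces++;
		if (w_end < p_end) {
			printf("prealloc [%llu, %llu)\n",
			       (unsigned long long)w_end,
			       (unsigned long long)p_end);
			pieces++;
		}
		return pieces;
	}

	int main(void)
	{
		mark_written(0, 16384, 4096, 8192);	/* three pieces */
		return 0;
	}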
}
mutex_lock(&dentry->d_inode->i_mutex);
out:
- return ret > 0 ? -EIO : ret;
+ return ret > 0 ? EIO : ret;
}
static const struct vm_operations_struct btrfs_file_vm_ops = {
u64 start, u64 end, int *page_started,
unsigned long *nr_written, int unlock);
-static int btrfs_init_inode_security(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir)
+static int btrfs_init_inode_security(struct inode *inode, struct inode *dir)
{
int err;
- err = btrfs_init_acl(trans, inode, dir);
+ err = btrfs_init_acl(inode, dir);
if (!err)
- err = btrfs_xattr_security_init(trans, inode, dir);
+ err = btrfs_xattr_security_init(inode, dir);
return err;
}
btrfs_mark_buffer_dirty(leaf);
btrfs_free_path(path);
- /*
- * we're an inline extent, so nobody can
- * extend the file past i_size without locking
- * a page we already have locked.
- *
- * We must do any isize and inode updates
- * before we unlock the pages. Otherwise we
- * could end up racing with unlink.
- */
BTRFS_I(inode)->disk_i_size = inode->i_size;
btrfs_update_inode(trans, root, inode);
-
return 0;
fail:
btrfs_free_path(path);
return 1;
}
- ret = btrfs_drop_extents(trans, inode, start, aligned_end,
+ ret = btrfs_drop_extents(trans, root, inode, start,
+ aligned_end, aligned_end, start,
&hint_byte, 1);
BUG_ON(ret);
start, end,
total_compressed, pages);
}
+ btrfs_end_transaction(trans, root);
if (ret == 0) {
/*
* inline extent creation worked, we don't need
EXTENT_CLEAR_DELALLOC |
EXTENT_CLEAR_ACCOUNTING |
EXTENT_SET_WRITEBACK | EXTENT_END_WRITEBACK);
-
- btrfs_end_transaction(trans, root);
+ ret = 0;
goto free_pages_out;
}
- btrfs_end_transaction(trans, root);
}
if (will_compress) {
if (list_empty(&async_cow->extents))
return 0;
+ trans = btrfs_join_transaction(root, 1);
while (!list_empty(&async_cow->extents)) {
async_extent = list_entry(async_cow->extents.next,
lock_extent(io_tree, async_extent->start,
async_extent->start + async_extent->ram_size - 1,
GFP_NOFS);
+ /*
+ * here we're doing allocation and writeback of the
+ * compressed pages
+ */
+ btrfs_drop_extent_cache(inode, async_extent->start,
+ async_extent->start +
+ async_extent->ram_size - 1, 0);
- trans = btrfs_join_transaction(root, 1);
ret = btrfs_reserve_extent(trans, root,
async_extent->compressed_size,
async_extent->compressed_size,
0, alloc_hint,
(u64)-1, &ins, 1);
- btrfs_end_transaction(trans, root);
-
if (ret) {
int i;
for (i = 0; i < async_extent->nr_pages; i++) {
goto retry;
}
- /*
- * here we're doing allocation and writeback of the
- * compressed pages
- */
- btrfs_drop_extent_cache(inode, async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1, 0);
-
em = alloc_extent_map(GFP_NOFS);
em->start = async_extent->start;
em->len = async_extent->ram_size;
BTRFS_ORDERED_COMPRESSED);
BUG_ON(ret);
+ btrfs_end_transaction(trans, root);
+
/*
* clear dirty, set writeback and unlock the pages.
*/
async_extent->nr_pages);
BUG_ON(ret);
+ trans = btrfs_join_transaction(root, 1);
alloc_hint = ins.objectid + ins.offset;
kfree(async_extent);
cond_resched();
}
+ btrfs_end_transaction(trans, root);
return 0;
}
EXTENT_CLEAR_DIRTY |
EXTENT_SET_WRITEBACK |
EXTENT_END_WRITEBACK);
-
*nr_written = *nr_written +
(end - start + PAGE_CACHE_SIZE) / PAGE_CACHE_SIZE;
*page_started = 1;
struct inode *inode, u64 file_pos,
u64 disk_bytenr, u64 disk_num_bytes,
u64 num_bytes, u64 ram_bytes,
+ u64 locked_end,
u8 compression, u8 encryption,
u16 other_encoding, int extent_type)
{
* the caller is expected to unpin it and allow it to be merged
* with the others.
*/
- ret = btrfs_drop_extents(trans, inode, file_pos, file_pos + num_bytes,
- &hint, 0);
+ ret = btrfs_drop_extents(trans, root, inode, file_pos,
+ file_pos + num_bytes, locked_end,
+ file_pos, &hint, 0);
BUG_ON(ret);
ins.objectid = inode->i_ino;
* before we start the transaction. It limits the amount of btree
* reads required while inside the transaction.
*/
+static noinline void reada_csum(struct btrfs_root *root,
+ struct btrfs_path *path,
+ struct btrfs_ordered_extent *ordered_extent)
+{
+ struct btrfs_ordered_sum *sum;
+ u64 bytenr;
+
+ sum = list_entry(ordered_extent->list.next, struct btrfs_ordered_sum,
+ list);
+ bytenr = sum->sums[0].bytenr;
+
+ /*
+ * we don't care about the results, the point of this search is
+ * just to get the btree leaves into ram
+ */
+ btrfs_lookup_csum(NULL, root->fs_info->csum_root, path, bytenr, 0);
+}
+
/* as ordered data IO finishes, this gets called so we can finish
* an ordered extent if the range of bytes in the file it covers are
* fully written.
struct btrfs_trans_handle *trans;
struct btrfs_ordered_extent *ordered_extent = NULL;
struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ struct btrfs_path *path;
int compressed = 0;
int ret;
if (!ret)
return 0;
- ordered_extent = btrfs_lookup_ordered_extent(inode, start);
- BUG_ON(!ordered_extent);
-
- if (test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags)) {
- BUG_ON(!list_empty(&ordered_extent->list));
- ret = btrfs_ordered_update_i_size(inode, 0, ordered_extent);
- if (!ret) {
- trans = btrfs_join_transaction(root, 1);
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
- btrfs_end_transaction(trans, root);
+ /*
+ * before we join the transaction, try to do some of our IO.
+ * This will limit the amount of IO that we have to do with
+ * the transaction running. We're unlikely to need to do any
+ * IO if the file extents are new; the disk_i_size check
+ * covers the most common case.
+ */
+ if (start < BTRFS_I(inode)->disk_i_size) {
+ path = btrfs_alloc_path();
+ if (path) {
+ ret = btrfs_lookup_file_extent(NULL, root, path,
+ inode->i_ino,
+ start, 0);
+ ordered_extent = btrfs_lookup_ordered_extent(inode,
+ start);
+ if (!list_empty(&ordered_extent->list)) {
+ btrfs_release_path(root, path);
+ reada_csum(root, path, ordered_extent);
+ }
+ btrfs_free_path(path);
}
- goto out;
}
+ trans = btrfs_join_transaction(root, 1);
+
+ if (!ordered_extent)
+ ordered_extent = btrfs_lookup_ordered_extent(inode, start);
+ BUG_ON(!ordered_extent);
+ if (test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags))
+ goto nocow;
+
lock_extent(io_tree, ordered_extent->file_offset,
ordered_extent->file_offset + ordered_extent->len - 1,
GFP_NOFS);
- trans = btrfs_join_transaction(root, 1);
-
if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered_extent->flags))
compressed = 1;
if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) {
BUG_ON(compressed);
- ret = btrfs_mark_extent_written(trans, inode,
+ ret = btrfs_mark_extent_written(trans, root, inode,
ordered_extent->file_offset,
ordered_extent->file_offset +
ordered_extent->len);
ordered_extent->disk_len,
ordered_extent->len,
ordered_extent->len,
+ ordered_extent->file_offset +
+ ordered_extent->len,
compressed, 0, 0,
BTRFS_FILE_EXTENT_REG);
unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
unlock_extent(io_tree, ordered_extent->file_offset,
ordered_extent->file_offset + ordered_extent->len - 1,
GFP_NOFS);
+nocow:
add_pending_csums(trans, inode, ordered_extent->file_offset,
&ordered_extent->list);
- /* this also removes the ordered extent from the tree */
- btrfs_ordered_update_i_size(inode, 0, ordered_extent);
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
- btrfs_end_transaction(trans, root);
-out:
+ mutex_lock(&BTRFS_I(inode)->extent_mutex);
+ btrfs_ordered_update_i_size(inode, ordered_extent);
+ btrfs_update_inode(trans, root, inode);
+ btrfs_remove_ordered_extent(inode, ordered_extent);
+ mutex_unlock(&BTRFS_I(inode)->extent_mutex);
+
/* once for us */
btrfs_put_ordered_extent(ordered_extent);
/* once for the tree */
btrfs_put_ordered_extent(ordered_extent);
+ btrfs_end_transaction(trans, root);
return 0;
}
return -EIO;
}
-struct delayed_iput {
- struct list_head list;
- struct inode *inode;
-};
-
-void btrfs_add_delayed_iput(struct inode *inode)
-{
- struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
- struct delayed_iput *delayed;
-
- if (atomic_add_unless(&inode->i_count, -1, 1))
- return;
-
- delayed = kmalloc(sizeof(*delayed), GFP_NOFS | __GFP_NOFAIL);
- delayed->inode = inode;
-
- spin_lock(&fs_info->delayed_iput_lock);
- list_add_tail(&delayed->list, &fs_info->delayed_iputs);
- spin_unlock(&fs_info->delayed_iput_lock);
-}
-
-void btrfs_run_delayed_iputs(struct btrfs_root *root)
-{
- LIST_HEAD(list);
- struct btrfs_fs_info *fs_info = root->fs_info;
- struct delayed_iput *delayed;
- int empty;
-
- spin_lock(&fs_info->delayed_iput_lock);
- empty = list_empty(&fs_info->delayed_iputs);
- spin_unlock(&fs_info->delayed_iput_lock);
- if (empty)
- return;
-
- down_read(&root->fs_info->cleanup_work_sem);
- spin_lock(&fs_info->delayed_iput_lock);
- list_splice_init(&fs_info->delayed_iputs, &list);
- spin_unlock(&fs_info->delayed_iput_lock);
-
- while (!list_empty(&list)) {
- delayed = list_entry(list.next, struct delayed_iput, list);
- list_del(&delayed->list);
- iput(delayed->inode);
- kfree(delayed);
- }
- up_read(&root->fs_info->cleanup_work_sem);
-}
-
/*
* This creates an orphan entry for the given inode in case something goes
* wrong in the middle of an unlink/truncate.
struct inode *inode;
int ret = 0, nr_unlink = 0, nr_truncate = 0;
- if (!xchg(&root->clean_orphans, 0))
- return;
-
path = btrfs_alloc_path();
- BUG_ON(!path);
+ if (!path)
+ return;
path->reada = -1;
key.objectid = BTRFS_ORPHAN_OBJECTID;
btrfs_set_key_type(&key, BTRFS_ORPHAN_ITEM_KEY);
key.offset = (u64)-1;
+
while (1) {
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
if (ret < 0) {
* min_type is the minimum key type to truncate down to. If set to 0, this
* will kill all the items on this inode, including the INODE_ITEM_KEY.
*/
-int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
- struct btrfs_root *root,
- struct inode *inode,
- u64 new_size, u32 min_type)
+noinline int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root,
+ struct inode *inode,
+ u64 new_size, u32 min_type)
{
+ int ret;
struct btrfs_path *path;
- struct extent_buffer *leaf;
- struct btrfs_file_extent_item *fi;
struct btrfs_key key;
struct btrfs_key found_key;
+ u32 found_type = (u8)-1;
+ struct extent_buffer *leaf;
+ struct btrfs_file_extent_item *fi;
u64 extent_start = 0;
u64 extent_num_bytes = 0;
u64 extent_offset = 0;
u64 item_end = 0;
- u64 mask = root->sectorsize - 1;
- u32 found_type = (u8)-1;
int found_extent;
int del_item;
int pending_del_nr = 0;
int pending_del_slot = 0;
int extent_type = -1;
int encoding;
- int ret;
- int err = 0;
-
- BUG_ON(new_size > 0 && min_type != BTRFS_EXTENT_DATA_KEY);
+ u64 mask = root->sectorsize - 1;
if (root->ref_cows)
btrfs_drop_extent_cache(inode, new_size & (~mask), (u64)-1, 0);
-
path = btrfs_alloc_path();
BUG_ON(!path);
path->reada = -1;
+ /* FIXME, add redo link to tree so we don't leak on crash */
key.objectid = inode->i_ino;
key.offset = (u64)-1;
key.type = (u8)-1;
search_again:
path->leave_spinning = 1;
ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
- if (ret < 0) {
- err = ret;
- goto out;
- }
+ if (ret < 0)
+ goto error;
if (ret > 0) {
/* there are no items in the tree for us to truncate, we're
* done
*/
- if (path->slots[0] == 0)
- goto out;
+ if (path->slots[0] == 0) {
+ ret = 0;
+ goto error;
+ }
path->slots[0]--;
}
}
item_end--;
}
- if (found_type > min_type) {
- del_item = 1;
- } else {
- if (item_end < new_size)
- break;
- if (found_key.offset >= new_size)
- del_item = 1;
+ if (item_end < new_size) {
+ if (found_type == BTRFS_DIR_ITEM_KEY)
+ found_type = BTRFS_INODE_ITEM_KEY;
+ else if (found_type == BTRFS_EXTENT_ITEM_KEY)
+ found_type = BTRFS_EXTENT_DATA_KEY;
+ else if (found_type == BTRFS_EXTENT_DATA_KEY)
+ found_type = BTRFS_XATTR_ITEM_KEY;
+ else if (found_type == BTRFS_XATTR_ITEM_KEY)
+ found_type = BTRFS_INODE_REF_KEY;
+ else if (found_type)
+ found_type--;
else
- del_item = 0;
+ break;
+ btrfs_set_key_type(&key, found_type);
+ goto next;
}
+ if (found_key.offset >= new_size)
+ del_item = 1;
+ else
+ del_item = 0;
found_extent = 0;
+
/* FIXME, shrink the extent if the ref count is only 1 */
if (found_type != BTRFS_EXTENT_DATA_KEY)
goto delete;
inode->i_ino, extent_offset);
BUG_ON(ret);
}
+next:
+ if (path->slots[0] == 0) {
+ if (pending_del_nr)
+ goto del_pending;
+ btrfs_release_path(root, path);
+ if (found_type == BTRFS_INODE_ITEM_KEY)
+ break;
+ goto search_again;
+ }
- if (found_type == BTRFS_INODE_ITEM_KEY)
- break;
-
- if (path->slots[0] == 0 ||
- path->slots[0] != pending_del_slot) {
- if (root->ref_cows) {
- err = -EAGAIN;
- goto out;
- }
- if (pending_del_nr) {
- ret = btrfs_del_items(trans, root, path,
- pending_del_slot,
- pending_del_nr);
- BUG_ON(ret);
- pending_del_nr = 0;
- }
+ path->slots[0]--;
+ if (pending_del_nr &&
+ path->slots[0] + 1 != pending_del_slot) {
+ struct btrfs_key debug;
+del_pending:
+ btrfs_item_key_to_cpu(path->nodes[0], &debug,
+ pending_del_slot);
+ ret = btrfs_del_items(trans, root, path,
+ pending_del_slot,
+ pending_del_nr);
+ BUG_ON(ret);
+ pending_del_nr = 0;
btrfs_release_path(root, path);
+ if (found_type == BTRFS_INODE_ITEM_KEY)
+ break;
goto search_again;
- } else {
- path->slots[0]--;
}
}
-out:
+ ret = 0;
+error:
if (pending_del_nr) {
ret = btrfs_del_items(trans, root, path, pending_del_slot,
pending_del_nr);
}
btrfs_free_path(path);
- return err;
+ return ret;
}
/*
if (size <= hole_start)
return 0;
+ err = btrfs_truncate_page(inode->i_mapping, inode->i_size);
+ if (err)
+ return err;
+
while (1) {
struct btrfs_ordered_extent *ordered;
btrfs_wait_ordered_range(inode, hole_start,
btrfs_put_ordered_extent(ordered);
}
+ trans = btrfs_start_transaction(root, 1);
+ btrfs_set_trans_block_group(trans, inode);
+
cur_offset = hole_start;
while (1) {
em = btrfs_get_extent(inode, NULL, 0, cur_offset,
BUG_ON(IS_ERR(em) || !em);
last_byte = min(extent_map_end(em), block_end);
last_byte = (last_byte + mask) & ~mask;
- if (!test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) {
+ if (test_bit(EXTENT_FLAG_VACANCY, &em->flags)) {
u64 hint_byte = 0;
hole_size = last_byte - cur_offset;
-
- err = btrfs_reserve_metadata_space(root, 2);
+ err = btrfs_drop_extents(trans, root, inode,
+ cur_offset,
+ cur_offset + hole_size,
+ block_end,
+ cur_offset, &hint_byte, 1);
if (err)
break;
- trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
-
- err = btrfs_drop_extents(trans, inode, cur_offset,
- cur_offset + hole_size,
- &hint_byte, 1);
- BUG_ON(err);
+ err = btrfs_reserve_metadata_space(root, 1);
+ if (err)
+ break;
err = btrfs_insert_file_extent(trans, root,
inode->i_ino, cur_offset, 0,
0, hole_size, 0, hole_size,
0, 0, 0);
- BUG_ON(err);
-
btrfs_drop_extent_cache(inode, hole_start,
last_byte - 1, 0);
-
- btrfs_end_transaction(trans, root);
- btrfs_unreserve_metadata_space(root, 2);
+ btrfs_unreserve_metadata_space(root, 1);
}
free_extent_map(em);
cur_offset = last_byte;
- if (cur_offset >= block_end)
+ if (err || cur_offset >= block_end)
break;
}
+ btrfs_end_transaction(trans, root);
unlock_extent(io_tree, hole_start, block_end - 1, GFP_NOFS);
return err;
}
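A note on the hunk above: each hole found between hole_start and block_end is
dropped and re-inserted as an explicit file extent whose disk bytenr and disk
length are zero, inside the single transaction started before the loop.
Reduced to its two essential calls (an editorial sketch reusing the loop's own
variables, error handling elided):

	hole_size = last_byte - cur_offset;

	/* clear whatever currently overlaps the hole ... */
	err = btrfs_drop_extents(trans, root, inode, cur_offset,
				 cur_offset + hole_size, block_end,
				 cur_offset, &hint_byte, 1);

	/* ... then describe the hole with a zero disk extent */
	if (!err)
		err = btrfs_insert_file_extent(trans, root, inode->i_ino,
					       cur_offset, 0, 0, hole_size,
					       0, hole_size, 0, 0, 0);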
-static int btrfs_setattr_size(struct inode *inode, struct iattr *attr)
-{
- struct btrfs_root *root = BTRFS_I(inode)->root;
- struct btrfs_trans_handle *trans;
- unsigned long nr;
- int ret;
-
- if (attr->ia_size == inode->i_size)
- return 0;
-
- if (attr->ia_size > inode->i_size) {
- unsigned long limit;
- limit = current->signal->rlim[RLIMIT_FSIZE].rlim_cur;
- if (attr->ia_size > inode->i_sb->s_maxbytes)
- return -EFBIG;
- if (limit != RLIM_INFINITY && attr->ia_size > limit) {
- send_sig(SIGXFSZ, current, 0);
- return -EFBIG;
- }
- }
-
- ret = btrfs_reserve_metadata_space(root, 1);
- if (ret)
- return ret;
-
- trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
-
- ret = btrfs_orphan_add(trans, inode);
- BUG_ON(ret);
-
- nr = trans->blocks_used;
- btrfs_end_transaction(trans, root);
- btrfs_unreserve_metadata_space(root, 1);
- btrfs_btree_balance_dirty(root, nr);
-
- if (attr->ia_size > inode->i_size) {
- ret = btrfs_cont_expand(inode, attr->ia_size);
- if (ret) {
- btrfs_truncate(inode);
- return ret;
- }
-
- i_size_write(inode, attr->ia_size);
- btrfs_ordered_update_i_size(inode, inode->i_size, NULL);
-
- trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
-
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
- if (inode->i_nlink > 0) {
- ret = btrfs_orphan_del(trans, inode);
- BUG_ON(ret);
- }
- nr = trans->blocks_used;
- btrfs_end_transaction(trans, root);
- btrfs_btree_balance_dirty(root, nr);
- return 0;
- }
-
- /*
- * We're truncating a file that used to have good data down to
- * zero. Make sure it gets into the ordered flush list so that
- * any new writes get down to disk quickly.
- */
- if (attr->ia_size == 0)
- BTRFS_I(inode)->ordered_data_close = 1;
-
- /* we don't support swapfiles, so vmtruncate shouldn't fail */
- ret = vmtruncate(inode, attr->ia_size);
- BUG_ON(ret);
-
- return 0;
-}
-
static int btrfs_setattr(struct dentry *dentry, struct iattr *attr)
{
struct inode *inode = dentry->d_inode;
return err;
if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) {
- err = btrfs_setattr_size(inode, attr);
- if (err)
- return err;
+ if (attr->ia_size > inode->i_size) {
+ err = btrfs_cont_expand(inode, attr->ia_size);
+ if (err)
+ return err;
+ } else if (inode->i_size > 0 &&
+ attr->ia_size == 0) {
+
+ /* we're truncating a file that used to have good
+ * data down to zero. Make sure it gets into
+ * the ordered flush list so that any new writes
+ * get down to disk quickly.
+ */
+ BTRFS_I(inode)->ordered_data_close = 1;
+ }
}
- attr->ia_valid &= ~ATTR_SIZE;
- if (attr->ia_valid)
- err = inode_setattr(inode, attr);
+ err = inode_setattr(inode, attr);
if (!err && ((attr->ia_valid & ATTR_MODE)))
err = btrfs_acl_chmod(inode);
}
btrfs_wait_ordered_range(inode, 0, (u64)-1);
- if (root->fs_info->log_root_recovering) {
- BUG_ON(!list_empty(&BTRFS_I(inode)->i_orphan));
- goto no_delete;
- }
-
if (inode->i_nlink > 0) {
BUG_ON(btrfs_root_refs(&root->root_item) != 0);
goto no_delete;
}
btrfs_i_size_write(inode, 0);
+ trans = btrfs_join_transaction(root, 1);
- while (1) {
- trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
- ret = btrfs_truncate_inode_items(trans, root, inode, 0, 0);
+ btrfs_set_trans_block_group(trans, inode);
+ ret = btrfs_truncate_inode_items(trans, root, inode, inode->i_size, 0);
+ if (ret) {
+ btrfs_orphan_del(NULL, inode);
+ goto no_delete_lock;
+ }
- if (ret != -EAGAIN)
- break;
+ btrfs_orphan_del(trans, inode);
- nr = trans->blocks_used;
- btrfs_end_transaction(trans, root);
- trans = NULL;
- btrfs_btree_balance_dirty(root, nr);
- }
+ nr = trans->blocks_used;
+ clear_inode(inode);
- if (ret == 0) {
- ret = btrfs_orphan_del(trans, inode);
- BUG_ON(ret);
- }
+ btrfs_end_transaction(trans, root);
+ btrfs_btree_balance_dirty(root, nr);
+ return;
+no_delete_lock:
nr = trans->blocks_used;
btrfs_end_transaction(trans, root);
btrfs_btree_balance_dirty(root, nr);
no_delete:
clear_inode(inode);
- return;
}
/*
INIT_LIST_HEAD(&BTRFS_I(inode)->ordered_operations);
RB_CLEAR_NODE(&BTRFS_I(inode)->rb_node);
btrfs_ordered_inode_tree_init(&BTRFS_I(inode)->ordered_tree);
+ mutex_init(&BTRFS_I(inode)->extent_mutex);
mutex_init(&BTRFS_I(inode)->log_mutex);
}
}
srcu_read_unlock(&root->fs_info->subvol_srcu, index);
- if (root != sub_root) {
- down_read(&root->fs_info->cleanup_work_sem);
- if (!(inode->i_sb->s_flags & MS_RDONLY))
- btrfs_orphan_cleanup(sub_root);
- up_read(&root->fs_info->cleanup_work_sem);
- }
-
return inode;
}
/* Reached end of directory/root. Bump pos past the last item. */
if (key_type == BTRFS_DIR_INDEX_KEY)
- /*
- * 32-bit glibc will use getdents64, but then strtol -
- * so the last number we can serve is this.
- */
- filp->f_pos = 0x7fffffff;
+ filp->f_pos = INT_LIMIT(off_t);
else
filp->f_pos++;
nopos:
if (IS_ERR(inode))
goto out_unlock;
- err = btrfs_init_inode_security(trans, inode, dir);
+ err = btrfs_init_inode_security(inode, dir);
if (err) {
drop_inode = 1;
goto out_unlock;
if (IS_ERR(inode))
goto out_unlock;
- err = btrfs_init_inode_security(trans, inode, dir);
+ err = btrfs_init_inode_security(inode, dir);
if (err) {
drop_inode = 1;
goto out_unlock;
if (inode->i_nlink == 0)
return -ENOENT;
- /* do not allow sys_link's with other subvols of the same device */
- if (root->objectid != BTRFS_I(inode)->root->objectid)
- return -EPERM;
-
/*
* 1 item for inode ref
* 2 items for dir items
drop_on_err = 1;
- err = btrfs_init_inode_security(trans, inode, dir);
+ err = btrfs_init_inode_security(inode, dir);
if (err)
goto out_fail;
unsigned long nr;
u64 mask = root->sectorsize - 1;
- if (!S_ISREG(inode->i_mode)) {
- WARN_ON(1);
+ if (!S_ISREG(inode->i_mode))
+ return;
+ if (IS_APPEND(inode) || IS_IMMUTABLE(inode))
return;
- }
ret = btrfs_truncate_page(inode->i_mapping, inode->i_size);
if (ret)
return;
-
btrfs_wait_ordered_range(inode, inode->i_size & (~mask), (u64)-1);
- btrfs_ordered_update_i_size(inode, inode->i_size, NULL);
trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
/*
* setattr is responsible for setting the ordered_data_close flag,
if (inode->i_size == 0 && BTRFS_I(inode)->ordered_data_close)
btrfs_add_ordered_operation(trans, root, inode);
- while (1) {
- ret = btrfs_truncate_inode_items(trans, root, inode,
- inode->i_size,
- BTRFS_EXTENT_DATA_KEY);
- if (ret != -EAGAIN)
- break;
-
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
-
- nr = trans->blocks_used;
- btrfs_end_transaction(trans, root);
- btrfs_btree_balance_dirty(root, nr);
-
- trans = btrfs_start_transaction(root, 1);
- btrfs_set_trans_block_group(trans, inode);
- }
+ btrfs_set_trans_block_group(trans, inode);
+ btrfs_i_size_write(inode, inode->i_size);
- if (ret == 0 && inode->i_nlink > 0) {
- ret = btrfs_orphan_del(trans, inode);
- BUG_ON(ret);
- }
+ ret = btrfs_orphan_add(trans, inode);
+ if (ret)
+ goto out;
+ /* FIXME, add redo link to tree so we don't leak on crash */
+ ret = btrfs_truncate_inode_items(trans, root, inode, inode->i_size,
+ BTRFS_EXTENT_DATA_KEY);
+ btrfs_update_inode(trans, root, inode);
- ret = btrfs_update_inode(trans, root, inode);
+ ret = btrfs_orphan_del(trans, inode);
BUG_ON(ret);
+out:
nr = trans->blocks_used;
ret = btrfs_end_transaction_throttle(trans, root);
BUG_ON(ret);
spin_lock(&root->list_lock);
if (!list_empty(&BTRFS_I(inode)->i_orphan)) {
- printk(KERN_INFO "BTRFS: inode %lu still on the orphan list\n",
- inode->i_ino);
- list_del_init(&BTRFS_I(inode)->i_orphan);
+ printk(KERN_ERR "BTRFS: inode %lu: inode still on the orphan"
+ " list\n", inode->i_ino);
+ dump_stack();
}
spin_unlock(&root->list_lock);
* some fairly slow code that needs optimization. This walks the list
* of all the inodes with pending delalloc and forces them to disk.
*/
-int btrfs_start_delalloc_inodes(struct btrfs_root *root, int delay_iput)
+int btrfs_start_delalloc_inodes(struct btrfs_root *root)
{
struct list_head *head = &root->fs_info->delalloc_inodes;
struct btrfs_inode *binode;
spin_unlock(&root->fs_info->delalloc_lock);
if (inode) {
filemap_flush(inode->i_mapping);
- if (delay_iput)
- btrfs_add_delayed_iput(inode);
- else
- iput(inode);
+ iput(inode);
}
cond_resched();
spin_lock(&root->fs_info->delalloc_lock);
if (IS_ERR(inode))
goto out_unlock;
- err = btrfs_init_inode_security(trans, inode, dir);
+ err = btrfs_init_inode_security(inode, dir);
if (err) {
drop_inode = 1;
goto out_unlock;
return err;
}
-static int prealloc_file_range(struct inode *inode, u64 start, u64 end,
- u64 alloc_hint, int mode, loff_t actual_len)
+static int prealloc_file_range(struct btrfs_trans_handle *trans,
+ struct inode *inode, u64 start, u64 end,
+ u64 locked_end, u64 alloc_hint, int mode)
{
- struct btrfs_trans_handle *trans;
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_key ins;
u64 alloc_size;
u64 cur_offset = start;
u64 num_bytes = end - start;
int ret = 0;
- u64 i_size;
while (num_bytes > 0) {
alloc_size = min(num_bytes, root->fs_info->max_extent);
- trans = btrfs_start_transaction(root, 1);
+ ret = btrfs_reserve_metadata_space(root, 1);
+ if (ret)
+ goto out;
ret = btrfs_reserve_extent(trans, root, alloc_size,
root->sectorsize, 0, alloc_hint,
(u64)-1, &ins, 1);
if (ret) {
WARN_ON(1);
- goto stop_trans;
- }
-
- ret = btrfs_reserve_metadata_space(root, 3);
- if (ret) {
- btrfs_free_reserved_extent(root, ins.objectid,
- ins.offset);
- goto stop_trans;
+ goto out;
}
-
ret = insert_reserved_file_extent(trans, inode,
cur_offset, ins.objectid,
ins.offset, ins.offset,
- ins.offset, 0, 0, 0,
+ ins.offset, locked_end,
+ 0, 0, 0,
BTRFS_FILE_EXTENT_PREALLOC);
BUG_ON(ret);
btrfs_drop_extent_cache(inode, cur_offset,
cur_offset + ins.offset -1, 0);
-
num_bytes -= ins.offset;
cur_offset += ins.offset;
alloc_hint = ins.objectid + ins.offset;
-
+ btrfs_unreserve_metadata_space(root, 1);
+ }
+out:
+ if (cur_offset > start) {
inode->i_ctime = CURRENT_TIME;
BTRFS_I(inode)->flags |= BTRFS_INODE_PREALLOC;
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
- (actual_len > inode->i_size) &&
- (cur_offset > inode->i_size)) {
-
- if (cur_offset > actual_len)
- i_size = actual_len;
- else
- i_size = cur_offset;
- i_size_write(inode, i_size);
- btrfs_ordered_update_i_size(inode, i_size, NULL);
- }
-
+ cur_offset > i_size_read(inode))
+ btrfs_i_size_write(inode, cur_offset);
ret = btrfs_update_inode(trans, root, inode);
BUG_ON(ret);
-
- btrfs_end_transaction(trans, root);
- btrfs_unreserve_metadata_space(root, 3);
}
- return ret;
-stop_trans:
- btrfs_end_transaction(trans, root);
return ret;
-
}
static long btrfs_fallocate(struct inode *inode, int mode,
u64 locked_end;
u64 mask = BTRFS_I(inode)->root->sectorsize - 1;
struct extent_map *em;
+ struct btrfs_trans_handle *trans;
+ struct btrfs_root *root;
int ret;
alloc_start = offset & ~mask;
goto out;
}
- ret = btrfs_check_data_free_space(BTRFS_I(inode)->root, inode,
+ root = BTRFS_I(inode)->root;
+
+ ret = btrfs_check_data_free_space(root, inode,
alloc_end - alloc_start);
if (ret)
goto out;
while (1) {
struct btrfs_ordered_extent *ordered;
+ trans = btrfs_start_transaction(BTRFS_I(inode)->root, 1);
+ if (!trans) {
+ ret = -EIO;
+ goto out_free;
+ }
+
/* the extent lock is ordered inside the running
* transaction
*/
btrfs_put_ordered_extent(ordered);
unlock_extent(&BTRFS_I(inode)->io_tree,
alloc_start, locked_end, GFP_NOFS);
+ btrfs_end_transaction(trans, BTRFS_I(inode)->root);
+
/*
* we can't wait on the range with the transaction
* running or with the extent lock held
BUG_ON(IS_ERR(em) || !em);
last_byte = min(extent_map_end(em), alloc_end);
last_byte = (last_byte + mask) & ~mask;
- if (em->block_start == EXTENT_MAP_HOLE ||
- (cur_offset >= inode->i_size &&
- !test_bit(EXTENT_FLAG_PREALLOC, &em->flags))) {
- ret = prealloc_file_range(inode,
- cur_offset, last_byte,
- alloc_hint, mode, offset+len);
+ if (em->block_start == EXTENT_MAP_HOLE) {
+ ret = prealloc_file_range(trans, inode, cur_offset,
+ last_byte, locked_end + 1,
+ alloc_hint, mode);
if (ret < 0) {
free_extent_map(em);
break;
unlock_extent(&BTRFS_I(inode)->io_tree, alloc_start, locked_end,
GFP_NOFS);
- btrfs_free_reserved_data_space(BTRFS_I(inode)->root, inode,
- alloc_end - alloc_start);
+ btrfs_end_transaction(trans, BTRFS_I(inode)->root);
+out_free:
+ btrfs_free_reserved_data_space(root, inode, alloc_end - alloc_start);
out:
mutex_unlock(&inode->i_mutex);
return ret;
u64 objectid;
u64 new_dirid = BTRFS_FIRST_FREE_OBJECTID;
u64 index = 0;
+ unsigned long nr = 1;
/*
* 1 - inode item
btrfs_set_root_generation(&root_item, trans->transid);
btrfs_set_root_level(&root_item, 0);
btrfs_set_root_refs(&root_item, 1);
- btrfs_set_root_used(&root_item, leaf->len);
+ btrfs_set_root_used(&root_item, 0);
btrfs_set_root_last_snapshot(&root_item, 0);
memset(&root_item.drop_progress, 0, sizeof(root_item.drop_progress));
d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));
fail:
+ nr = trans->blocks_used;
err = btrfs_commit_transaction(trans, root);
if (err && !ret)
ret = err;
btrfs_unreserve_metadata_space(root, 6);
+ btrfs_btree_balance_dirty(root, nr);
return ret;
}
static int create_snapshot(struct btrfs_root *root, struct dentry *dentry,
char *name, int namelen)
{
- struct inode *inode;
struct btrfs_pending_snapshot *pending_snapshot;
struct btrfs_trans_handle *trans;
- int ret;
+ int ret = 0;
+ int err;
+ unsigned long nr = 0;
if (!root->ref_cows)
return -EINVAL;
*/
ret = btrfs_reserve_metadata_space(root, 6);
if (ret)
- goto fail;
+ goto fail_unlock;
pending_snapshot = kzalloc(sizeof(*pending_snapshot), GFP_NOFS);
if (!pending_snapshot) {
ret = -ENOMEM;
btrfs_unreserve_metadata_space(root, 6);
- goto fail;
+ goto fail_unlock;
}
pending_snapshot->name = kmalloc(namelen + 1, GFP_NOFS);
if (!pending_snapshot->name) {
ret = -ENOMEM;
kfree(pending_snapshot);
btrfs_unreserve_metadata_space(root, 6);
- goto fail;
+ goto fail_unlock;
}
memcpy(pending_snapshot->name, name, namelen);
pending_snapshot->name[namelen] = '\0';
pending_snapshot->root = root;
list_add(&pending_snapshot->list,
&trans->transaction->pending_snapshots);
- ret = btrfs_commit_transaction(trans, root);
- BUG_ON(ret);
- btrfs_unreserve_metadata_space(root, 6);
+ err = btrfs_commit_transaction(trans, root);
- inode = btrfs_lookup_dentry(dentry->d_parent->d_inode, dentry);
- if (IS_ERR(inode)) {
- ret = PTR_ERR(inode);
- goto fail;
- }
- BUG_ON(!inode);
- d_instantiate(dentry, inode);
- ret = 0;
-fail:
+fail_unlock:
+ btrfs_btree_balance_dirty(root, nr);
return ret;
}
*/
/* the destination must be opened for writing */
- if (!(file->f_mode & FMODE_WRITE) || (file->f_flags & O_APPEND))
+ if (!(file->f_mode & FMODE_WRITE))
return -EINVAL;
ret = mnt_want_write(file->f_path.mnt);
ret = -EBADF;
goto out_drop_write;
}
-
src = src_file->f_dentry->d_inode;
ret = -EINVAL;
if (src == inode)
goto out_fput;
- /* the src must be open for reading */
- if (!(src_file->f_mode & FMODE_READ))
- goto out_fput;
-
ret = -EISDIR;
if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode))
goto out_fput;
/* determine range to clone */
ret = -EINVAL;
- if (off + len > src->i_size || off + len < off)
+ if (off >= src->i_size || off + len > src->i_size)
goto out_unlock;
if (len == 0)
olen = len = src->i_size - off;
BUG_ON(!trans);
/* punch hole in destination first */
- btrfs_drop_extents(trans, inode, off, off + len, &hint_byte, 1);
+ btrfs_drop_extents(trans, root, inode, off, off + len,
+ off + len, 0, &hint_byte, 1);
/* clone data */
key.objectid = src->i_ino;
/*
* remove an ordered extent from the tree. No references are dropped
- * and you must wake_up entry->wait. You must hold the tree mutex
- * while you call this function.
+ * but, anyone waiting on this extent is woken up.
*/
-static int __btrfs_remove_ordered_extent(struct inode *inode,
+int btrfs_remove_ordered_extent(struct inode *inode,
struct btrfs_ordered_extent *entry)
{
struct btrfs_ordered_inode_tree *tree;
struct rb_node *node;
tree = &BTRFS_I(inode)->ordered_tree;
+ mutex_lock(&tree->mutex);
node = &entry->rb_node;
rb_erase(node, &tree->tree);
tree->last = NULL;
}
spin_unlock(&BTRFS_I(inode)->root->fs_info->ordered_extent_lock);
- return 0;
-}
-
-/*
- * remove an ordered extent from the tree. No references are dropped
- * but any waiters are woken.
- */
-int btrfs_remove_ordered_extent(struct inode *inode,
- struct btrfs_ordered_extent *entry)
-{
- struct btrfs_ordered_inode_tree *tree;
- int ret;
-
- tree = &BTRFS_I(inode)->ordered_tree;
- mutex_lock(&tree->mutex);
- ret = __btrfs_remove_ordered_extent(inode, entry);
mutex_unlock(&tree->mutex);
wake_up(&entry->wait);
-
- return ret;
+ return 0;
}
/*
* wait for all the ordered extents in a root. This is done when balancing
* space between drives.
*/
-int btrfs_wait_ordered_extents(struct btrfs_root *root,
- int nocow_only, int delay_iput)
+int btrfs_wait_ordered_extents(struct btrfs_root *root, int nocow_only)
{
struct list_head splice;
struct list_head *cur;
if (inode) {
btrfs_start_ordered_extent(inode, ordered, 1);
btrfs_put_ordered_extent(ordered);
- if (delay_iput)
- btrfs_add_delayed_iput(inode);
- else
- iput(inode);
+ iput(inode);
} else {
btrfs_put_ordered_extent(ordered);
}
btrfs_wait_ordered_range(inode, 0, (u64)-1);
else
filemap_flush(inode->i_mapping);
- btrfs_add_delayed_iput(inode);
+ iput(inode);
}
cond_resched();
* After an extent is done, call this to conditionally update the on disk
* i_size. i_size is updated to cover any fully written part of the file.
*/
-int btrfs_ordered_update_i_size(struct inode *inode, u64 offset,
+int btrfs_ordered_update_i_size(struct inode *inode,
struct btrfs_ordered_extent *ordered)
{
struct btrfs_ordered_inode_tree *tree = &BTRFS_I(inode)->ordered_tree;
u64 disk_i_size;
u64 new_i_size;
u64 i_size_test;
- u64 i_size = i_size_read(inode);
struct rb_node *node;
- struct rb_node *prev = NULL;
struct btrfs_ordered_extent *test;
- int ret = 1;
-
- if (ordered)
- offset = entry_end(ordered);
- else
- offset = ALIGN(offset, BTRFS_I(inode)->root->sectorsize);
mutex_lock(&tree->mutex);
disk_i_size = BTRFS_I(inode)->disk_i_size;
- /* truncate file */
- if (disk_i_size > i_size) {
- BTRFS_I(inode)->disk_i_size = i_size;
- ret = 0;
- goto out;
- }
-
/*
* if the disk i_size is already at the inode->i_size, or
* this ordered extent is inside the disk i_size, we're done
*/
- if (disk_i_size == i_size || offset <= disk_i_size) {
+ if (disk_i_size >= inode->i_size ||
+ ordered->file_offset + ordered->len <= disk_i_size) {
goto out;
}
* we can't update the disk_isize if there are delalloc bytes
* between disk_i_size and this ordered extent
*/
- if (test_range_bit(io_tree, disk_i_size, offset - 1,
+ if (test_range_bit(io_tree, disk_i_size,
+ ordered->file_offset + ordered->len - 1,
EXTENT_DELALLOC, 0, NULL)) {
goto out;
}
* if we find an ordered extent then we can't update disk i_size
* yet
*/
- if (ordered) {
- node = rb_prev(&ordered->rb_node);
- } else {
- prev = tree_search(tree, offset);
- /*
- * we insert file extents without involving ordered struct,
-		 * so there should be no ordered struct covering this offset
- */
- if (prev) {
- test = rb_entry(prev, struct btrfs_ordered_extent,
- rb_node);
- BUG_ON(offset_in_entry(test, offset));
- }
- node = prev;
- }
- while (node) {
+ node = &ordered->rb_node;
+ while (1) {
+ node = rb_prev(node);
+ if (!node)
+ break;
test = rb_entry(node, struct btrfs_ordered_extent, rb_node);
if (test->file_offset + test->len <= disk_i_size)
break;
- if (test->file_offset >= i_size)
+ if (test->file_offset >= inode->i_size)
break;
if (test->file_offset >= disk_i_size)
goto out;
- node = rb_prev(node);
}
- new_i_size = min_t(u64, offset, i_size);
+ new_i_size = min_t(u64, entry_end(ordered), i_size_read(inode));
/*
* at this point, we know we can safely update i_size to at least
* walk forward and see if ios from higher up in the file have
* finished.
*/
- if (ordered) {
- node = rb_next(&ordered->rb_node);
- } else {
- if (prev)
- node = rb_next(prev);
- else
- node = rb_first(&tree->tree);
- }
+ node = rb_next(&ordered->rb_node);
i_size_test = 0;
if (node) {
/*
* between our ordered extent and the next one.
*/
test = rb_entry(node, struct btrfs_ordered_extent, rb_node);
- if (test->file_offset > offset)
+ if (test->file_offset > entry_end(ordered))
i_size_test = test->file_offset;
} else {
- i_size_test = i_size;
+ i_size_test = i_size_read(inode);
}
/*
* are no delalloc bytes in this area, it is safe to update
* disk_i_size to the end of the region.
*/
- if (i_size_test > offset &&
- !test_range_bit(io_tree, offset, i_size_test - 1,
- EXTENT_DELALLOC, 0, NULL)) {
- new_i_size = min_t(u64, i_size_test, i_size);
+ if (i_size_test > entry_end(ordered) &&
+ !test_range_bit(io_tree, entry_end(ordered), i_size_test - 1,
+ EXTENT_DELALLOC, 0, NULL)) {
+ new_i_size = min_t(u64, i_size_test, i_size_read(inode));
}
BTRFS_I(inode)->disk_i_size = new_i_size;
- ret = 0;
out:
- /*
- * we need to remove the ordered extent with the tree lock held
- * so that other people calling this function don't find our fully
- * processed ordered entry and skip updating the i_size
- */
- if (ordered)
- __btrfs_remove_ordered_extent(inode, ordered);
mutex_unlock(&tree->mutex);
- if (ordered)
- wake_up(&ordered->wait);
- return ret;
+ return 0;
}
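A compressed restatement of the rule enforced above, as an editorial worked
example with made-up offsets (not part of the patch):

	/*
	 * disk_i_size = 0, in-memory i_size = 16384
	 *   ordered extent [0, 8192) completes       -> disk_i_size may move to 8192
	 *   ordered extent [12288, 16384) completes while [8192, 12288) still
	 *   has delalloc or a pending ordered extent  -> disk_i_size stays at 8192
	 * The on-disk size only ever covers a fully written prefix of the file,
	 * and it is always clamped to the in-memory i_size.
	 */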
/*
int btrfs_wait_ordered_range(struct inode *inode, u64 start, u64 len);
struct btrfs_ordered_extent *
btrfs_lookup_first_ordered_extent(struct inode * inode, u64 file_offset);
-int btrfs_ordered_update_i_size(struct inode *inode, u64 offset,
+int btrfs_ordered_update_i_size(struct inode *inode,
struct btrfs_ordered_extent *ordered);
int btrfs_find_ordered_sum(struct inode *inode, u64 offset, u64 disk_bytenr, u32 *sum);
+int btrfs_wait_ordered_extents(struct btrfs_root *root, int nocow_only);
int btrfs_run_ordered_operations(struct btrfs_root *root, int wait);
int btrfs_add_ordered_operation(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct inode *inode);
-int btrfs_wait_ordered_extents(struct btrfs_root *root,
- int nocow_only, int delay_iput);
#endif
return 0;
}
-static void put_inodes(struct list_head *list)
-{
- struct inodevec *ivec;
- while (!list_empty(list)) {
- ivec = list_entry(list->next, struct inodevec, list);
- list_del(&ivec->list);
- while (ivec->nr > 0) {
- ivec->nr--;
- iput(ivec->inode[ivec->nr]);
- }
- kfree(ivec);
- }
-}
-
static int find_next_key(struct btrfs_path *path, int level,
struct btrfs_key *key)
btrfs_btree_balance_dirty(root, nr);
- /*
- * put inodes outside transaction, otherwise we may deadlock.
- */
- put_inodes(&inode_list);
-
if (replaced && rc->stage == UPDATE_DATA_PTRS)
invalidate_extent_cache(root, &key, &next_key);
}
btrfs_btree_balance_dirty(root, nr);
- put_inodes(&inode_list);
+ /*
+ * put inodes while we aren't holding the tree locks
+ */
+ while (!list_empty(&inode_list)) {
+ struct inodevec *ivec;
+ ivec = list_entry(inode_list.next, struct inodevec, list);
+ list_del(&ivec->list);
+ while (ivec->nr > 0) {
+ ivec->nr--;
+ iput(ivec->inode[ivec->nr]);
+ }
+ kfree(ivec);
+ }
if (replaced && rc->stage == UPDATE_DATA_PTRS)
invalidate_extent_cache(root, &key, &next_key);
return -ENOMEM;
path = btrfs_alloc_path();
- if (!path) {
- kfree(cluster);
+ if (!path)
return -ENOMEM;
- }
rc->extents_found = 0;
rc->extents_skipped = 0;
(unsigned long long)rc->block_group->key.objectid,
(unsigned long long)rc->block_group->flags);
- btrfs_start_delalloc_inodes(fs_info->tree_root, 0);
- btrfs_wait_ordered_extents(fs_info->tree_root, 0, 0);
+ btrfs_start_delalloc_inodes(fs_info->tree_root);
+ btrfs_wait_ordered_extents(fs_info->tree_root, 0);
while (1) {
rc->extents_found = 0;
BTRFS_DATA_RELOC_TREE_OBJECTID);
if (IS_ERR(fs_root))
err = PTR_ERR(fs_root);
- else
- btrfs_orphan_cleanup(fs_root);
}
return err;
}
{
struct btrfs_fs_info *info = root->fs_info;
substring_t args[MAX_OPT_ARGS];
- char *p, *num, *orig;
+ char *p, *num;
int intarg;
- int ret = 0;
if (!options)
return 0;
if (!options)
return -ENOMEM;
- orig = options;
while ((p = strsep(&options, ",")) != NULL) {
int token;
case Opt_discard:
btrfs_set_opt(info->mount_opt, DISCARD);
break;
- case Opt_err:
- printk(KERN_INFO "btrfs: unrecognized mount option "
- "'%s'\n", p);
- ret = -EINVAL;
- goto out;
default:
break;
}
}
-out:
- kfree(orig);
- return ret;
+ kfree(options);
+ return 0;
}
/*
return 0;
}
- btrfs_start_delalloc_inodes(root, 0);
- btrfs_wait_ordered_extents(root, 0, 0);
+ btrfs_start_delalloc_inodes(root);
+ btrfs_wait_ordered_extents(root, 0);
trans = btrfs_start_transaction(root, 1);
ret = btrfs_commit_transaction(trans, root);
seq_puts(seq, ",notreelog");
if (btrfs_test_opt(root, FLUSHONCOMMIT))
seq_puts(seq, ",flushoncommit");
- if (btrfs_test_opt(root, DISCARD))
- seq_puts(seq, ",discard");
if (!(root->fs_info->sb->s_flags & MS_POSIXACL))
seq_puts(seq, ",noacl");
return 0;
memset(trans, 0, sizeof(*trans));
kmem_cache_free(btrfs_trans_handle_cachep, trans);
- if (throttle)
- btrfs_run_delayed_iputs(root);
-
return 0;
}
* those extents are sent to disk but does not wait on them
*/
int btrfs_write_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark)
+ struct extent_io_tree *dirty_pages)
{
int ret;
int err = 0;
while (1) {
ret = find_first_extent_bit(dirty_pages, start, &start, &end,
- mark);
+ EXTENT_DIRTY);
if (ret)
break;
while (start <= end) {
* on all the pages and clear them from the dirty pages state tree
*/
int btrfs_wait_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark)
+ struct extent_io_tree *dirty_pages)
{
int ret;
int err = 0;
unsigned long index;
while (1) {
- ret = find_first_extent_bit(dirty_pages, start, &start, &end,
- mark);
+ ret = find_first_extent_bit(dirty_pages, 0, &start, &end,
+ EXTENT_DIRTY);
if (ret)
break;
- clear_extent_bits(dirty_pages, start, end, mark, GFP_NOFS);
+ clear_extent_dirty(dirty_pages, start, end, GFP_NOFS);
while (start <= end) {
index = start >> PAGE_CACHE_SHIFT;
start = (u64)(index + 1) << PAGE_CACHE_SHIFT;
* those extents are on disk for transaction or log commit
*/
int btrfs_write_and_wait_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark)
+ struct extent_io_tree *dirty_pages)
{
int ret;
int ret2;
- ret = btrfs_write_marked_extents(root, dirty_pages, mark);
- ret2 = btrfs_wait_marked_extents(root, dirty_pages, mark);
+ ret = btrfs_write_marked_extents(root, dirty_pages);
+ ret2 = btrfs_wait_marked_extents(root, dirty_pages);
return ret || ret2;
}
return filemap_write_and_wait(btree_inode->i_mapping);
}
return btrfs_write_and_wait_marked_extents(root,
- &trans->transaction->dirty_pages,
- EXTENT_DIRTY);
+ &trans->transaction->dirty_pages);
}
/*
{
int ret;
u64 old_root_bytenr;
- u64 old_root_used;
struct btrfs_root *tree_root = root->fs_info->tree_root;
- old_root_used = btrfs_root_used(&root->root_item);
btrfs_write_dirty_block_groups(trans, root);
while (1) {
old_root_bytenr = btrfs_root_bytenr(&root->root_item);
- if (old_root_bytenr == root->node->start &&
- old_root_used == btrfs_root_used(&root->root_item))
+ if (old_root_bytenr == root->node->start)
break;
btrfs_set_root_node(&root->root_item, root->node);
&root->root_item);
BUG_ON(ret);
- old_root_used = btrfs_root_used(&root->root_item);
ret = btrfs_write_dirty_block_groups(trans, root);
BUG_ON(ret);
}
memcpy(&pending->root_key, &key, sizeof(key));
fail:
kfree(new_root_item);
+ btrfs_unreserve_metadata_space(root, 6);
return ret;
}
u64 index = 0;
struct btrfs_trans_handle *trans;
struct inode *parent_inode;
+ struct inode *inode;
struct btrfs_root *parent_root;
parent_inode = pending->dentry->d_parent->d_inode;
BUG_ON(ret);
+ inode = btrfs_lookup_dentry(parent_inode, pending->dentry);
+ d_instantiate(pending->dentry, inode);
fail:
btrfs_end_transaction(trans, fs_info->fs_root);
return ret;
mutex_unlock(&root->fs_info->trans_mutex);
if (flush_on_commit) {
- btrfs_start_delalloc_inodes(root, 1);
- ret = btrfs_wait_ordered_extents(root, 0, 1);
+ btrfs_start_delalloc_inodes(root);
+ ret = btrfs_wait_ordered_extents(root, 0);
BUG_ON(ret);
} else if (snap_pending) {
- ret = btrfs_wait_ordered_extents(root, 0, 1);
+ ret = btrfs_wait_ordered_extents(root, 1);
BUG_ON(ret);
}
current->journal_info = NULL;
kmem_cache_free(btrfs_trans_handle_cachep, trans);
-
- if (current != root->fs_info->transaction_kthread)
- btrfs_run_delayed_iputs(root);
-
return ret;
}
int btrfs_record_root_in_trans(struct btrfs_trans_handle *trans,
struct btrfs_root *root);
int btrfs_write_and_wait_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark);
+ struct extent_io_tree *dirty_pages);
int btrfs_write_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark);
+ struct extent_io_tree *dirty_pages);
int btrfs_wait_marked_extents(struct btrfs_root *root,
- struct extent_io_tree *dirty_pages, int mark);
+ struct extent_io_tree *dirty_pages);
int btrfs_transaction_in_commit(struct btrfs_fs_info *info);
#endif
saved_nbytes = inode_get_bytes(inode);
/* drop any overlapping extents */
- ret = btrfs_drop_extents(trans, inode, start, extent_end,
- &alloc_hint, 1);
+ ret = btrfs_drop_extents(trans, root, inode,
+ start, extent_end, extent_end, start, &alloc_hint, 1);
BUG_ON(ret);
if (found_type == BTRFS_FILE_EXTENT_REG ||
return 0;
}
-static int insert_orphan_item(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, u64 offset)
-{
- int ret;
- ret = btrfs_find_orphan_item(root, offset);
- if (ret > 0)
- ret = btrfs_insert_orphan_item(trans, root, offset);
- return ret;
-}
-
-
/*
* There are a few corners where the link count of the file can't
* be properly maintained during replay. So, instead of adding
}
BTRFS_I(inode)->index_cnt = (u64)-1;
- if (inode->i_nlink == 0) {
- if (S_ISDIR(inode->i_mode)) {
- ret = replay_dir_deletes(trans, root, NULL, path,
- inode->i_ino, 1);
- BUG_ON(ret);
- }
- ret = insert_orphan_item(trans, root, inode->i_ino);
+ if (inode->i_nlink == 0 && S_ISDIR(inode->i_mode)) {
+ ret = replay_dir_deletes(trans, root, NULL, path,
+ inode->i_ino, 1);
BUG_ON(ret);
}
btrfs_free_path(path);
/* inode keys are done during the first stage */
if (key.type == BTRFS_INODE_ITEM_KEY &&
wc->stage == LOG_WALK_REPLAY_INODES) {
+ struct inode *inode;
struct btrfs_inode_item *inode_item;
u32 mode;
eb, i, &key);
BUG_ON(ret);
- /* for regular files, make sure corresponding
-			 * orphan item exists. extents past the new EOF
- * will be truncated later by orphan cleanup.
+ /* for regular files, truncate away
+ * extents past the new EOF
*/
if (S_ISREG(mode)) {
- ret = insert_orphan_item(wc->trans, root,
- key.objectid);
+ inode = read_one_inode(root,
+ key.objectid);
+ BUG_ON(!inode);
+
+ ret = btrfs_truncate_inode_items(wc->trans,
+ root, inode, inode->i_size,
+ BTRFS_EXTENT_DATA_KEY);
BUG_ON(ret);
- }
+ /* if the nlink count is zero here, the iput
+ * will free the inode. We bump it to make
+ * sure it doesn't get freed until the link
+ * count fixup is done
+ */
+ if (inode->i_nlink == 0) {
+ btrfs_inc_nlink(inode);
+ btrfs_update_inode(wc->trans,
+ root, inode);
+ }
+ iput(inode);
+ }
ret = link_to_fixup_dir(wc->trans, root,
path, key.objectid);
BUG_ON(ret);
{
int index1;
int index2;
- int mark;
int ret;
struct btrfs_root *log = root->log_root;
struct btrfs_root *log_root_tree = root->fs_info->log_root_tree;
- unsigned long log_transid = 0;
+ u64 log_transid = 0;
mutex_lock(&root->log_mutex);
index1 = root->log_transid % 2;
goto out;
}
- log_transid = root->log_transid;
- if (log_transid % 2 == 0)
- mark = EXTENT_DIRTY;
- else
- mark = EXTENT_NEW;
-
/* we start IO on all the marked extents here, but we don't actually
* wait for them until later.
*/
- ret = btrfs_write_marked_extents(log, &log->dirty_log_pages, mark);
+ ret = btrfs_write_marked_extents(log, &log->dirty_log_pages);
BUG_ON(ret);
btrfs_set_root_node(&log->root_item, log->node);
root->log_batch = 0;
+ log_transid = root->log_transid;
root->log_transid++;
log->log_transid = root->log_transid;
root->log_start_pid = 0;
smp_mb();
/*
- * IO has been started, blocks of the log tree have WRITTEN flag set
- * in their headers. new modifications of the log will be written to
- * new positions. so it's safe to allow log writers to go in.
+ * log tree has been flushed to disk, new modifications of
+ * the log will be written to new positions. so it's safe to
+ * allow log writers to go in.
*/
mutex_unlock(&root->log_mutex);
index2 = log_root_tree->log_transid % 2;
if (atomic_read(&log_root_tree->log_commit[index2])) {
- btrfs_wait_marked_extents(log, &log->dirty_log_pages, mark);
+ btrfs_wait_marked_extents(log, &log->dirty_log_pages);
wait_log_commit(trans, log_root_tree,
log_root_tree->log_transid);
mutex_unlock(&log_root_tree->log_mutex);
* check the full commit flag again
*/
if (root->fs_info->last_trans_log_full_commit == trans->transid) {
- btrfs_wait_marked_extents(log, &log->dirty_log_pages, mark);
+ btrfs_wait_marked_extents(log, &log->dirty_log_pages);
mutex_unlock(&log_root_tree->log_mutex);
ret = -EAGAIN;
goto out_wake_log_root;
}
ret = btrfs_write_and_wait_marked_extents(log_root_tree,
- &log_root_tree->dirty_log_pages,
- EXTENT_DIRTY | EXTENT_NEW);
+ &log_root_tree->dirty_log_pages);
BUG_ON(ret);
- btrfs_wait_marked_extents(log, &log->dirty_log_pages, mark);
+ btrfs_wait_marked_extents(log, &log->dirty_log_pages);
btrfs_set_super_log_root(&root->fs_info->super_for_commit,
log_root_tree->node->start);
while (1) {
ret = find_first_extent_bit(&log->dirty_log_pages,
- 0, &start, &end, EXTENT_DIRTY | EXTENT_NEW);
+ 0, &start, &end, EXTENT_DIRTY);
if (ret)
break;
- clear_extent_bits(&log->dirty_log_pages, start, end,
- EXTENT_DIRTY | EXTENT_NEW, GFP_NOFS);
+ clear_extent_dirty(&log->dirty_log_pages,
+ start, end, GFP_NOFS);
}
if (log->log_transid > 0) {
root->fs_info->avail_metadata_alloc_bits;
if ((all_avail & BTRFS_BLOCK_GROUP_RAID10) &&
- root->fs_info->fs_devices->num_devices <= 4) {
+ root->fs_info->fs_devices->rw_devices <= 4) {
printk(KERN_ERR "btrfs: unable to go below four devices "
"on raid10\n");
ret = -EINVAL;
}
if ((all_avail & BTRFS_BLOCK_GROUP_RAID1) &&
- root->fs_info->fs_devices->num_devices <= 2) {
+ root->fs_info->fs_devices->rw_devices <= 2) {
printk(KERN_ERR "btrfs: unable to go below two "
"devices on raid1\n");
ret = -EINVAL;
return -EINVAL;
bdev = open_bdev_exclusive(device_path, 0, root->fs_info->bdev_holder);
- if (IS_ERR(bdev))
- return PTR_ERR(bdev);
+ if (!bdev)
+ return -EIO;
if (root->fs_info->fs_devices->seeding) {
seeding_dev = 1;
max_chunk_size = 10 * calc_size;
min_stripe_size = 64 * 1024 * 1024;
} else if (type & BTRFS_BLOCK_GROUP_METADATA) {
- max_chunk_size = 256 * 1024 * 1024;
+ max_chunk_size = 4 * calc_size;
min_stripe_size = 32 * 1024 * 1024;
} else if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
calc_size = 8 * 1024 * 1024;
if (!em)
return 1;
- if (btrfs_test_opt(root, DEGRADED)) {
- free_extent_map(em);
- return 0;
- }
-
map = (struct map_lookup *)em->bdev;
for (i = 0; i < map->num_stripes; i++) {
if (!map->stripes[i].dev->writeable) {
em = lookup_extent_mapping(em_tree, logical, *length);
read_unlock(&em_tree->lock);
- if (!em && unplug_page) {
- kfree(multi);
+ if (!em && unplug_page)
return 0;
- }
if (!em) {
printk(KERN_CRIT "unable to find logical %llu len %llu\n",
return ret;
}
-static int do_setxattr(struct btrfs_trans_handle *trans,
- struct inode *inode, const char *name,
- const void *value, size_t size, int flags)
+int __btrfs_setxattr(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
{
struct btrfs_dir_item *di;
struct btrfs_root *root = BTRFS_I(inode)->root;
+ struct btrfs_trans_handle *trans;
struct btrfs_path *path;
- size_t name_len = strlen(name);
- int ret = 0;
-
- if (name_len + size > BTRFS_MAX_XATTR_SIZE(root))
- return -ENOSPC;
+ int ret = 0, mod = 0;
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
+ trans = btrfs_join_transaction(root, 1);
+ btrfs_set_trans_block_group(trans, inode);
+
/* first lets see if we already have this xattr */
di = btrfs_lookup_xattr(trans, root, path, inode->i_ino, name,
strlen(name), -1);
}
ret = btrfs_delete_one_dir_name(trans, root, path, di);
- BUG_ON(ret);
+ if (ret)
+ goto out;
btrfs_release_path(root, path);
/* if we don't have a value then we are removing the xattr */
- if (!value)
+ if (!value) {
+ mod = 1;
goto out;
+ }
} else {
btrfs_release_path(root, path);
}
/* ok we have to create a completely new xattr */
- ret = btrfs_insert_xattr_item(trans, root, path, inode->i_ino,
- name, name_len, value, size);
- BUG_ON(ret);
-out:
- btrfs_free_path(path);
- return ret;
-}
-
-int __btrfs_setxattr(struct btrfs_trans_handle *trans,
- struct inode *inode, const char *name,
- const void *value, size_t size, int flags)
-{
- struct btrfs_root *root = BTRFS_I(inode)->root;
- int ret;
-
- if (trans)
- return do_setxattr(trans, inode, name, value, size, flags);
-
- ret = btrfs_reserve_metadata_space(root, 2);
+ ret = btrfs_insert_xattr_item(trans, root, name, strlen(name),
+ value, size, inode->i_ino);
if (ret)
- return ret;
-
- trans = btrfs_start_transaction(root, 1);
- if (!trans) {
- ret = -ENOMEM;
goto out;
- }
- btrfs_set_trans_block_group(trans, inode);
+ mod = 1;
- ret = do_setxattr(trans, inode, name, value, size, flags);
- if (ret)
- goto out;
-
- inode->i_ctime = CURRENT_TIME;
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
out:
- btrfs_end_transaction_throttle(trans, root);
- btrfs_unreserve_metadata_space(root, 2);
+ if (mod) {
+ inode->i_ctime = CURRENT_TIME;
+ ret = btrfs_update_inode(trans, root, inode);
+ }
+
+ btrfs_end_transaction(trans, root);
+ btrfs_free_path(path);
return ret;
}
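The hunk above folds transaction startup back into __btrfs_setxattr(), so
callers no longer pass a transaction handle. For comparison, the two calling
conventions visible in this patch (sketch only, arguments as in the xattr
hunks further below):

	/* removed convention: caller may supply an existing transaction */
	err = __btrfs_setxattr(trans, inode, name, value, len, 0);

	/* restored convention: the helper joins a transaction itself */
	err = __btrfs_setxattr(inode, name, value, len, 0);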
if (size == 0)
value = ""; /* empty EA, do not remove */
-
- return __btrfs_setxattr(NULL, dentry->d_inode, name, value, size,
- flags);
+ return __btrfs_setxattr(dentry->d_inode, name, value, size, flags);
}
int btrfs_removexattr(struct dentry *dentry, const char *name)
if (!btrfs_is_valid_xattr(name))
return -EOPNOTSUPP;
-
- return __btrfs_setxattr(NULL, dentry->d_inode, name, NULL, 0,
- XATTR_REPLACE);
+ return __btrfs_setxattr(dentry->d_inode, name, NULL, 0, XATTR_REPLACE);
}
-int btrfs_xattr_security_init(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir)
+int btrfs_xattr_security_init(struct inode *inode, struct inode *dir)
{
int err;
size_t len;
} else {
strcpy(name, XATTR_SECURITY_PREFIX);
strcpy(name + XATTR_SECURITY_PREFIX_LEN, suffix);
- err = __btrfs_setxattr(trans, inode, name, value, len, 0);
+ err = __btrfs_setxattr(inode, name, value, len, 0);
kfree(name);
}
extern ssize_t __btrfs_getxattr(struct inode *inode, const char *name,
void *buffer, size_t size);
-extern int __btrfs_setxattr(struct btrfs_trans_handle *trans,
- struct inode *inode, const char *name,
- const void *value, size_t size, int flags);
+extern int __btrfs_setxattr(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags);
+
extern ssize_t btrfs_getxattr(struct dentry *dentry, const char *name,
void *buffer, size_t size);
extern int btrfs_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags);
extern int btrfs_removexattr(struct dentry *dentry, const char *name);
-extern int btrfs_xattr_security_init(struct btrfs_trans_handle *trans,
- struct inode *inode, struct inode *dir);
+extern int btrfs_xattr_security_init(struct inode *inode, struct inode *dir);
#endif /* __XATTR__ */
/*
* check the security details of the on-disk cache
* - must be called with security override in force
- * - must return with a security override in force - even in the case of an
- * error
*/
int cachefiles_determine_cache_security(struct cachefiles_cache *cache,
struct dentry *root,
* which create files */
ret = set_create_files_as(new, root->d_inode);
if (ret < 0) {
- abort_creds(new);
- cachefiles_begin_secure(cache, _saved_cred);
_leave(" = %d [cfa]", ret);
return ret;
}
#endif
/* permit direct mmap, for read, write or exec */
BDI_CAP_MAP_DIRECT |
- BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP |
- /* no writeback happens */
- BDI_CAP_NO_ACCT_AND_WRITEBACK),
+ BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP),
};
static struct kobj_map *cdev_map;
goto out_unregister_filesystem;
#endif
#ifdef CONFIG_CIFS_DFS_UPCALL
- rc = cifs_init_dns_resolver();
+ rc = register_key_type(&key_type_dns_resolver);
if (rc)
goto out_unregister_key_type;
#endif
out_unregister_resolver_key:
#ifdef CONFIG_CIFS_DFS_UPCALL
- cifs_exit_dns_resolver();
+ unregister_key_type(&key_type_dns_resolver);
out_unregister_key_type:
#endif
#ifdef CONFIG_CIFS_UPCALL
cifs_proc_clean();
#ifdef CONFIG_CIFS_DFS_UPCALL
cifs_dfs_release_automount_timer();
- cifs_exit_dns_resolver();
+ unregister_key_type(&key_type_dns_resolver);
#endif
#ifdef CONFIG_CIFS_UPCALL
unregister_key_type(&cifs_spnego_key_type);
#define CIFS_FATTR_DFS_REFERRAL 0x1
#define CIFS_FATTR_DELETE_PENDING 0x2
#define CIFS_FATTR_NEED_REVAL 0x4
-#define CIFS_FATTR_INO_COLLISION 0x8
struct cifs_fattr {
u32 cf_flags;
__u16 fileHandle, struct file *file,
struct vfsmount *mnt, unsigned int oflags);
extern int cifs_posix_open(char *full_path, struct inode **pinode,
- struct vfsmount *mnt,
- struct super_block *sb,
- int mode, int oflags,
- __u32 *poplock, __u16 *pnetfid, int xid);
+ struct vfsmount *mnt, int mode, int oflags,
+ __u32 *poplock, __u16 *pnetfid, int xid);
extern void cifs_unix_basic_to_fattr(struct cifs_fattr *fattr,
FILE_UNIX_BASIC_INFO *info,
struct cifs_sb_info *cifs_sb);
__u32 bytes_sent;
__u16 byte_count;
- *nbytes = 0;
-
/* cFYI(1, ("write at %lld %d bytes", offset, count));*/
if (tcon->ses == NULL)
return -ECONNABORTED;
cifs_stats_inc(&tcon->num_writes);
if (rc) {
cFYI(1, ("Send error in write = %d", rc));
+ *nbytes = 0;
} else {
*nbytes = le16_to_cpu(pSMBr->CountHigh);
*nbytes = (*nbytes) << 16;
*nbytes += le16_to_cpu(pSMBr->Count);
-
- /*
- * Mask off high 16 bits when bytes written as returned by the
- * server is greater than bytes requested by the client. Some
- * OS/2 servers are known to set incorrect CountHigh values.
- */
- if (*nbytes > count)
- *nbytes &= 0xFFFF;
}
cifs_buf_release(pSMB);
*nbytes = le16_to_cpu(pSMBr->CountHigh);
*nbytes = (*nbytes) << 16;
*nbytes += le16_to_cpu(pSMBr->Count);
-
- /*
- * Mask off high 16 bits when bytes written as returned by the
- * server is greater than bytes requested by the client. OS/2
- * servers are known to set incorrect CountHigh values.
- */
- if (*nbytes > count)
- *nbytes &= 0xFFFF;
}
/* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
}
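Both write paths above rebuild the byte count from the 16-bit Count/CountHigh
pair in the SMB response; the removed lines additionally clamp it when a buggy
(OS/2) server reports more than was requested. A standalone sketch of that
arithmetic (the helper name is made up; the field handling mirrors the hunks
above):

static u32 smb_bytes_written(u16 count_high, u16 count_low, u32 requested)
{
	u32 nbytes = ((u32)count_high << 16) + count_low;

	/* some OS/2 servers set a bogus CountHigh; if the result overshoots
	 * the request, only the low 16 bits are trustworthy */
	if (nbytes > requested)
		nbytes &= 0xFFFF;
	return nbytes;
}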
int cifs_posix_open(char *full_path, struct inode **pinode,
- struct vfsmount *mnt, struct super_block *sb,
- int mode, int oflags,
- __u32 *poplock, __u16 *pnetfid, int xid)
+ struct vfsmount *mnt, int mode, int oflags,
+ __u32 *poplock, __u16 *pnetfid, int xid)
{
int rc;
FILE_UNIX_BASIC_INFO *presp_data;
__u32 posix_flags = 0;
- struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifs_sb_info *cifs_sb = CIFS_SB(mnt->mnt_sb);
struct cifs_fattr fattr;
cFYI(1, ("posix open %s", full_path));
/* get new inode and set it up */
if (*pinode == NULL) {
- *pinode = cifs_iget(sb, &fattr);
+ *pinode = cifs_iget(mnt->mnt_sb, &fattr);
if (!*pinode) {
rc = -ENOMEM;
goto posix_open_ret;
cifs_fattr_to_inode(*pinode, &fattr);
}
- if (mnt)
- cifs_new_fileinfo(*pinode, *pnetfid, NULL, mnt, oflags);
+ cifs_new_fileinfo(*pinode, *pnetfid, NULL, mnt, oflags);
posix_open_ret:
kfree(presp_data);
if (nd && (nd->flags & LOOKUP_OPEN))
oflags = nd->intent.open.flags;
else
- oflags = FMODE_READ | SMB_O_CREAT;
+ oflags = FMODE_READ;
if (tcon->unix_ext && (tcon->ses->capabilities & CAP_UNIX) &&
(CIFS_UNIX_POSIX_PATH_OPS_CAP &
le64_to_cpu(tcon->fsUnixInfo.Capability))) {
- rc = cifs_posix_open(full_path, &newinode,
- nd ? nd->path.mnt : NULL,
- inode->i_sb, mode, oflags, &oplock, &fileHandle, xid);
+ rc = cifs_posix_open(full_path, &newinode, nd->path.mnt,
+ mode, oflags, &oplock, &fileHandle, xid);
/* EIO could indicate that (posix open) operation is not
supported, despite what server claimed in capability
		   negotiation. EREMOTE indicates DFS junction, which is not
(nd->flags & LOOKUP_OPEN) && !pTcon->broken_posix_open &&
(nd->intent.open.flags & O_CREAT)) {
rc = cifs_posix_open(full_path, &newInode, nd->path.mnt,
- parent_dir_inode->i_sb,
nd->intent.open.create_mode,
nd->intent.open.flags, &oplock,
&fileHandle, xid);
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#include <linux/keyctl.h>
-#include <linux/key-type.h>
#include <keys/user-type.h>
#include "dns_resolve.h"
#include "cifsglob.h"
#include "cifsproto.h"
#include "cifs_debug.h"
-static const struct cred *dns_resolver_cache;
-
/* Checks if supplied name is IP address
* returns:
* 1 - name is IP
int
dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
{
- const struct cred *saved_cred;
int rc = -EAGAIN;
struct key *rkey = ERR_PTR(-EAGAIN);
char *name;
goto skip_upcall;
}
- saved_cred = override_creds(dns_resolver_cache);
rkey = request_key(&key_type_dns_resolver, name, "");
- revert_creds(saved_cred);
if (!IS_ERR(rkey)) {
- if (!(rkey->perm & KEY_USR_VIEW)) {
- down_read(&rkey->sem);
- rkey->perm |= KEY_USR_VIEW;
- up_read(&rkey->sem);
- }
len = rkey->type_data.x[0];
data = rkey->payload.data;
} else {
return rc;
}
-int __init cifs_init_dns_resolver(void)
-{
- struct cred *cred;
- struct key *keyring;
- int ret;
-
- printk(KERN_NOTICE "Registering the %s key type\n",
- key_type_dns_resolver.name);
-
- /* create an override credential set with a special thread keyring in
- * which DNS requests are cached
- *
- * this is used to prevent malicious redirections from being installed
- * with add_key().
- */
- cred = prepare_kernel_cred(NULL);
- if (!cred)
- return -ENOMEM;
-
- keyring = key_alloc(&key_type_keyring, ".dns_resolver", 0, 0, cred,
- (KEY_POS_ALL & ~KEY_POS_SETATTR) |
- KEY_USR_VIEW | KEY_USR_READ,
- KEY_ALLOC_NOT_IN_QUOTA);
- if (IS_ERR(keyring)) {
- ret = PTR_ERR(keyring);
- goto failed_put_cred;
- }
-
- ret = key_instantiate_and_link(keyring, NULL, 0, NULL, NULL);
- if (ret < 0)
- goto failed_put_key;
-
- ret = register_key_type(&key_type_dns_resolver);
- if (ret < 0)
- goto failed_put_key;
-
- /* instruct request_key() to use this special keyring as a cache for
- * the results it looks up */
- cred->thread_keyring = keyring;
- cred->jit_keyring = KEY_REQKEY_DEFL_THREAD_KEYRING;
- dns_resolver_cache = cred;
- return 0;
-
-failed_put_key:
- key_put(keyring);
-failed_put_cred:
- put_cred(cred);
- return ret;
-}
-void cifs_exit_dns_resolver(void)
-{
- key_revoke(dns_resolver_cache->thread_keyring);
- unregister_key_type(&key_type_dns_resolver);
- put_cred(dns_resolver_cache);
- printk(KERN_NOTICE "Unregistered %s key type\n",
- key_type_dns_resolver.name);
-}
#define _DNS_RESOLVE_H
#ifdef __KERNEL__
-#include <linux/module.h>
-
-extern int __init cifs_init_dns_resolver(void);
-extern void cifs_exit_dns_resolver(void);
+#include <linux/key-type.h>
+extern struct key_type key_type_dns_resolver;
extern int dns_resolve_server_name_to_ip(const char *unc, char **ip_addr);
#endif /* KERNEL */
(CIFS_UNIX_POSIX_PATH_OPS_CAP &
le64_to_cpu(tcon->fsUnixInfo.Capability))) {
int oflags = (int) cifs_posix_convert_flags(file->f_flags);
- oflags |= SMB_O_CREAT;
/* can not refresh inode info since size could be stale */
rc = cifs_posix_open(full_path, &inode, file->f_path.mnt,
- inode->i_sb,
- cifs_sb->mnt_file_mode /* ignored */,
- oflags, &oplock, &netfid, xid);
+ cifs_sb->mnt_file_mode /* ignored */,
+ oflags, &oplock, &netfid, xid);
if (rc == 0) {
cFYI(1, ("posix open succeeded"));
/* no need for special case handling of setting mode
int oflags = (int) cifs_posix_convert_flags(file->f_flags);
/* can not refresh inode info since size could be stale */
rc = cifs_posix_open(full_path, NULL, file->f_path.mnt,
- inode->i_sb,
- cifs_sb->mnt_file_mode /* ignored */,
- oflags, &oplock, &netfid, xid);
+ cifs_sb->mnt_file_mode /* ignored */,
+ oflags, &oplock, &netfid, xid);
if (rc == 0) {
cFYI(1, ("posix reopen succeeded"));
goto reopen_success;
if (CIFS_I(inode)->uniqueid != fattr->cf_uniqueid)
return 0;
- /*
- * uh oh -- it's a directory. We can't use it since hardlinked dirs are
- * verboten. Disable serverino and return it as if it were found, the
- * caller can discard it, generate a uniqueid and retry the find
- */
- if (S_ISDIR(inode->i_mode) && !list_empty(&inode->i_dentry)) {
- fattr->cf_flags |= CIFS_FATTR_INO_COLLISION;
- cifs_autodisable_serverino(CIFS_SB(inode->i_sb));
- }
-
return 1;
}
unsigned long hash;
struct inode *inode;
-retry_iget5_locked:
cFYI(1, ("looking for uniqueid=%llu", fattr->cf_uniqueid));
/* hash down to 32-bits on 32-bit arch */
hash = cifs_uniqueid_to_ino_t(fattr->cf_uniqueid);
inode = iget5_locked(sb, hash, cifs_find_inode, cifs_init_inode, fattr);
- if (inode) {
- /* was there a problematic inode number collision? */
- if (fattr->cf_flags & CIFS_FATTR_INO_COLLISION) {
- iput(inode);
- fattr->cf_uniqueid = iunique(sb, ROOT_I);
- fattr->cf_flags &= ~CIFS_FATTR_INO_COLLISION;
- goto retry_iget5_locked;
- }
+ /* we have fattrs in hand, update the inode */
+ if (inode) {
cifs_fattr_to_inode(inode, fattr);
if (sb->s_flags & MS_NOATIME)
inode->i_flags |= S_NOATIME | S_NOCMTIME;
if (rc == 0 || rc != -ETXTBSY)
return rc;
- /* open-file renames don't work across directories */
- if (to_dentry->d_parent != from_dentry->d_parent)
- return rc;
-
/* open the file to be renamed -- we need DELETE perms */
rc = CIFSSMBOpen(xid, pTcon, fromPath, FILE_OPEN, DELETE,
CREATE_NOT_DIR, &srcfid, &oplock, NULL,
/* calculate session key */
setup_ntlmv2_rsp(ses, v2_sess_key, nls_cp);
- /* FIXME: calculate MAC key */
+ if (first_time) /* should this be moved into common code
+ with similar ntlmv2 path? */
+ /* cifs_calculate_ntlmv2_mac_key(ses->server->mac_signing_key,
+ response BB FIXME, v2_sess_key); */
+
+ /* copy session key */
+
+ /* memcpy(bcc_ptr, (char *)ntlm_session_key,LM2_SESS_KEY_SIZE);
+ bcc_ptr += LM2_SESS_KEY_SIZE; */
memcpy(bcc_ptr, (char *)v2_sess_key,
sizeof(struct ntlmv2_resp));
bcc_ptr += sizeof(struct ntlmv2_resp);
if (retval < 0)
goto out;
+ current->stack_start = current->mm->start_stack;
+
/* execve succeeded */
current->fs->in_exec = 0;
current->in_execve = 0;
*******************************************************************************
**
** Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-** Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
+** Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
**
** This copyrighted material is made available to anyone wishing to use,
** modify, copy, or redistribute it subject to the terms and conditions
spin_unlock(&ast_queue_lock);
}
-void dlm_add_ast(struct dlm_lkb *lkb, int type, int mode)
+void dlm_add_ast(struct dlm_lkb *lkb, int type, int bastmode)
{
if (lkb->lkb_flags & DLM_IFL_USER) {
- dlm_user_add_ast(lkb, type, mode);
+ dlm_user_add_ast(lkb, type, bastmode);
return;
}
if (!(lkb->lkb_ast_type & (AST_COMP | AST_BAST))) {
kref_get(&lkb->lkb_ref);
list_add_tail(&lkb->lkb_astqueue, &ast_queue);
- lkb->lkb_ast_first = type;
}
-
- /* sanity check, this should not happen */
-
- if ((type == AST_COMP) && (lkb->lkb_ast_type & AST_COMP))
- log_print("repeat cast %d castmode %d lock %x %s",
- mode, lkb->lkb_castmode,
- lkb->lkb_id, lkb->lkb_resource->res_name);
-
lkb->lkb_ast_type |= type;
- if (type == AST_BAST)
- lkb->lkb_bastmode = mode;
- else
- lkb->lkb_castmode = mode;
+ if (bastmode)
+ lkb->lkb_bastmode = bastmode;
spin_unlock(&ast_queue_lock);
set_bit(WAKE_ASTS, &astd_wakeflags);
struct dlm_ls *ls = NULL;
struct dlm_rsb *r = NULL;
struct dlm_lkb *lkb;
- void (*castfn) (void *astparam);
- void (*bastfn) (void *astparam, int mode);
- int type, first, bastmode, castmode, do_bast, do_cast, last_castmode;
+ void (*cast) (void *astparam);
+ void (*bast) (void *astparam, int mode);
+ int type = 0, bastmode;
repeat:
spin_lock(&ast_queue_lock);
list_del(&lkb->lkb_astqueue);
type = lkb->lkb_ast_type;
lkb->lkb_ast_type = 0;
- first = lkb->lkb_ast_first;
- lkb->lkb_ast_first = 0;
bastmode = lkb->lkb_bastmode;
- castmode = lkb->lkb_castmode;
- castfn = lkb->lkb_astfn;
- bastfn = lkb->lkb_bastfn;
+
spin_unlock(&ast_queue_lock);
+ cast = lkb->lkb_astfn;
+ bast = lkb->lkb_bastfn;
+
+ if ((type & AST_COMP) && cast)
+ cast(lkb->lkb_astparam);
- do_cast = (type & AST_COMP) && castfn;
- do_bast = (type & AST_BAST) && bastfn;
-
- /* Skip a bast if its blocking mode is compatible with the
- granted mode of the preceding cast. */
-
- if (do_bast) {
- if (first == AST_COMP)
- last_castmode = castmode;
- else
- last_castmode = lkb->lkb_castmode_done;
- if (dlm_modes_compat(bastmode, last_castmode))
- do_bast = 0;
- }
-
- if (first == AST_COMP) {
- if (do_cast)
- castfn(lkb->lkb_astparam);
- if (do_bast)
- bastfn(lkb->lkb_astparam, bastmode);
- } else if (first == AST_BAST) {
- if (do_bast)
- bastfn(lkb->lkb_astparam, bastmode);
- if (do_cast)
- castfn(lkb->lkb_astparam);
- } else {
- log_error(ls, "bad ast_first %d ast_type %d",
- first, type);
- }
-
- if (do_cast)
- lkb->lkb_castmode_done = castmode;
- if (do_bast)
- lkb->lkb_bastmode_done = bastmode;
+ if ((type & AST_BAST) && bast)
+ bast(lkb->lkb_astparam, bastmode);
/* this removes the reference added by dlm_add_ast
and may result in the lkb being freed */
/******************************************************************************
*******************************************************************************
**
-** Copyright (C) 2005-2010 Red Hat, Inc. All rights reserved.
+** Copyright (C) 2005-2008 Red Hat, Inc. All rights reserved.
**
** This copyrighted material is made available to anyone wishing to use,
** modify, copy, or redistribute it subject to the terms and conditions
#ifndef __ASTD_DOT_H__
#define __ASTD_DOT_H__
-void dlm_add_ast(struct dlm_lkb *lkb, int type, int mode);
+void dlm_add_ast(struct dlm_lkb *lkb, int type, int bastmode);
void dlm_del_ast(struct dlm_lkb *lkb);
void dlm_astd_wake(void);
struct dlm_comms *cms = NULL;
void *gps = NULL;
- cl = kzalloc(sizeof(struct dlm_cluster), GFP_NOFS);
- gps = kcalloc(3, sizeof(struct config_group *), GFP_NOFS);
- sps = kzalloc(sizeof(struct dlm_spaces), GFP_NOFS);
- cms = kzalloc(sizeof(struct dlm_comms), GFP_NOFS);
+ cl = kzalloc(sizeof(struct dlm_cluster), GFP_KERNEL);
+ gps = kcalloc(3, sizeof(struct config_group *), GFP_KERNEL);
+ sps = kzalloc(sizeof(struct dlm_spaces), GFP_KERNEL);
+ cms = kzalloc(sizeof(struct dlm_comms), GFP_KERNEL);
if (!cl || !gps || !sps || !cms)
goto fail;
struct dlm_nodes *nds = NULL;
void *gps = NULL;
- sp = kzalloc(sizeof(struct dlm_space), GFP_NOFS);
- gps = kcalloc(2, sizeof(struct config_group *), GFP_NOFS);
- nds = kzalloc(sizeof(struct dlm_nodes), GFP_NOFS);
+ sp = kzalloc(sizeof(struct dlm_space), GFP_KERNEL);
+ gps = kcalloc(2, sizeof(struct config_group *), GFP_KERNEL);
+ nds = kzalloc(sizeof(struct dlm_nodes), GFP_KERNEL);
if (!sp || !gps || !nds)
goto fail;
{
struct dlm_comm *cm;
- cm = kzalloc(sizeof(struct dlm_comm), GFP_NOFS);
+ cm = kzalloc(sizeof(struct dlm_comm), GFP_KERNEL);
if (!cm)
return ERR_PTR(-ENOMEM);
struct dlm_space *sp = config_item_to_space(g->cg_item.ci_parent);
struct dlm_node *nd;
- nd = kzalloc(sizeof(struct dlm_node), GFP_NOFS);
+ nd = kzalloc(sizeof(struct dlm_node), GFP_KERNEL);
if (!nd)
return ERR_PTR(-ENOMEM);
if (cm->addr_count >= DLM_MAX_ADDR_COUNT)
return -ENOSPC;
- addr = kzalloc(sizeof(*addr), GFP_NOFS);
+ addr = kzalloc(sizeof(*addr), GFP_KERNEL);
if (!addr)
return -ENOMEM;
ids_count = sp->members_count;
- ids = kcalloc(ids_count, sizeof(int), GFP_NOFS);
+ ids = kcalloc(ids_count, sizeof(int), GFP_KERNEL);
if (!ids) {
rv = -ENOMEM;
goto out;
if (!new_count)
goto out_ids;
- new = kcalloc(new_count, sizeof(int), GFP_NOFS);
+ new = kcalloc(new_count, sizeof(int), GFP_KERNEL);
if (!new) {
kfree(ids);
rv = -ENOMEM;
if (bucket >= ls->ls_rsbtbl_size)
return NULL;
- ri = kzalloc(sizeof(struct rsbtbl_iter), GFP_NOFS);
+ ri = kzalloc(sizeof(struct rsbtbl_iter), GFP_KERNEL);
if (!ri)
return NULL;
if (n == 0)
spin_unlock(&ls->ls_recover_list_lock);
if (!found)
- de = kzalloc(sizeof(struct dlm_direntry) + len, GFP_NOFS);
+ de = kzalloc(sizeof(struct dlm_direntry) + len,
+ ls->ls_allocation);
return de;
}
dlm_dir_clear(ls);
- last_name = kmalloc(DLM_RESNAME_MAXLEN, GFP_NOFS);
+ last_name = kmalloc(DLM_RESNAME_MAXLEN, ls->ls_allocation);
if (!last_name)
goto out;
if (namelen > DLM_RESNAME_MAXLEN)
return -EINVAL;
- de = kzalloc(sizeof(struct dlm_direntry) + namelen, GFP_NOFS);
+ de = kzalloc(sizeof(struct dlm_direntry) + namelen, ls->ls_allocation);
if (!de)
return -ENOMEM;
*******************************************************************************
**
** Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
-** Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
+** Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
**
** This copyrighted material is made available to anyone wishing to use,
** modify, copy, or redistribute it subject to the terms and conditions
int8_t lkb_status; /* granted, waiting, convert */
int8_t lkb_rqmode; /* requested lock mode */
int8_t lkb_grmode; /* granted lock mode */
+ int8_t lkb_bastmode; /* requested mode */
int8_t lkb_highbast; /* highest mode bast sent for */
-
int8_t lkb_wait_type; /* type of reply waiting for */
int8_t lkb_wait_count;
int8_t lkb_ast_type; /* type of ast queued for */
- int8_t lkb_ast_first; /* type of first ast queued */
-
- int8_t lkb_bastmode; /* req mode of queued bast */
- int8_t lkb_castmode; /* gr mode of queued cast */
- int8_t lkb_bastmode_done; /* last delivered bastmode */
- int8_t lkb_castmode_done; /* last delivered castmode */
struct list_head lkb_idtbl_list; /* lockspace lkbtbl */
struct list_head lkb_statequeue; /* rsb g/c/w list */
int ls_low_nodeid;
int ls_total_weight;
int *ls_node_array;
+ gfp_t ls_allocation;
struct dlm_rsb ls_stub_rsb; /* for returning errors */
struct dlm_lkb ls_stub_lkb; /* for returning errors */
/******************************************************************************
*******************************************************************************
**
-** Copyright (C) 2005-2010 Red Hat, Inc. All rights reserved.
+** Copyright (C) 2005-2008 Red Hat, Inc. All rights reserved.
**
** This copyrighted material is made available to anyone wishing to use,
** modify, copy, or redistribute it subject to the terms and conditions
lkb->lkb_lksb->sb_status = rv;
lkb->lkb_lksb->sb_flags = lkb->lkb_sbflags;
- dlm_add_ast(lkb, AST_COMP, lkb->lkb_grmode);
+ dlm_add_ast(lkb, AST_COMP, 0);
}
static inline void queue_cast_overlap(struct dlm_rsb *r, struct dlm_lkb *lkb)
if (can_be_queued(lkb)) {
error = -EINPROGRESS;
add_lkb(r, lkb, DLM_LKSTS_WAITING);
+ send_blocking_asts(r, lkb);
add_timeout(lkb);
goto out;
}
error = -EAGAIN;
+ if (force_blocking_asts(lkb))
+ send_blocking_asts_all(r, lkb);
queue_cast(r, lkb, -EAGAIN);
+
out:
return error;
}
-static void do_request_effects(struct dlm_rsb *r, struct dlm_lkb *lkb,
- int error)
-{
- switch (error) {
- case -EAGAIN:
- if (force_blocking_asts(lkb))
- send_blocking_asts_all(r, lkb);
- break;
- case -EINPROGRESS:
- send_blocking_asts(r, lkb);
- break;
- }
-}
-
static int do_convert(struct dlm_rsb *r, struct dlm_lkb *lkb)
{
int error = 0;
if (can_be_granted(r, lkb, 1, &deadlk)) {
grant_lock(r, lkb);
queue_cast(r, lkb, 0);
+ grant_pending_locks(r);
goto out;
}
if (_can_be_granted(r, lkb, 1)) {
grant_lock(r, lkb);
queue_cast(r, lkb, 0);
+ grant_pending_locks(r);
goto out;
}
/* else fall through and move to convert queue */
error = -EINPROGRESS;
del_lkb(r, lkb);
add_lkb(r, lkb, DLM_LKSTS_CONVERT);
+ send_blocking_asts(r, lkb);
add_timeout(lkb);
goto out;
}
error = -EAGAIN;
+ if (force_blocking_asts(lkb))
+ send_blocking_asts_all(r, lkb);
queue_cast(r, lkb, -EAGAIN);
+
out:
return error;
}
-static void do_convert_effects(struct dlm_rsb *r, struct dlm_lkb *lkb,
- int error)
-{
- switch (error) {
- case 0:
- grant_pending_locks(r);
- /* grant_pending_locks also sends basts */
- break;
- case -EAGAIN:
- if (force_blocking_asts(lkb))
- send_blocking_asts_all(r, lkb);
- break;
- case -EINPROGRESS:
- send_blocking_asts(r, lkb);
- break;
- }
-}
-
static int do_unlock(struct dlm_rsb *r, struct dlm_lkb *lkb)
{
remove_lock(r, lkb);
queue_cast(r, lkb, -DLM_EUNLOCK);
- return -DLM_EUNLOCK;
-}
-
-static void do_unlock_effects(struct dlm_rsb *r, struct dlm_lkb *lkb,
- int error)
-{
grant_pending_locks(r);
+ return -DLM_EUNLOCK;
}
/* returns: 0 did nothing, -DLM_ECANCEL canceled lock */
error = revert_lock(r, lkb);
if (error) {
queue_cast(r, lkb, -DLM_ECANCEL);
+ grant_pending_locks(r);
return -DLM_ECANCEL;
}
return 0;
}
-static void do_cancel_effects(struct dlm_rsb *r, struct dlm_lkb *lkb,
- int error)
-{
- if (error)
- grant_pending_locks(r);
-}
-
/*
* Four stage 3 varieties:
* _request_lock(), _convert_lock(), _unlock_lock(), _cancel_lock()
goto out;
}
- if (is_remote(r)) {
+ if (is_remote(r))
/* receive_request() calls do_request() on remote node */
error = send_request(r, lkb);
- } else {
+ else
error = do_request(r, lkb);
- /* for remote locks the request_reply is sent
- between do_request and do_request_effects */
- do_request_effects(r, lkb, error);
- }
out:
return error;
}
{
int error;
- if (is_remote(r)) {
+ if (is_remote(r))
/* receive_convert() calls do_convert() on remote node */
error = send_convert(r, lkb);
- } else {
+ else
error = do_convert(r, lkb);
- /* for remote locks the convert_reply is sent
- between do_convert and do_convert_effects */
- do_convert_effects(r, lkb, error);
- }
return error;
}
{
int error;
- if (is_remote(r)) {
+ if (is_remote(r))
/* receive_unlock() calls do_unlock() on remote node */
error = send_unlock(r, lkb);
- } else {
+ else
error = do_unlock(r, lkb);
- /* for remote locks the unlock_reply is sent
- between do_unlock and do_unlock_effects */
- do_unlock_effects(r, lkb, error);
- }
return error;
}
{
int error;
- if (is_remote(r)) {
+ if (is_remote(r))
/* receive_cancel() calls do_cancel() on remote node */
error = send_cancel(r, lkb);
- } else {
+ else
error = do_cancel(r, lkb);
- /* for remote locks the cancel_reply is sent
- between do_cancel and do_cancel_effects */
- do_cancel_effects(r, lkb, error);
- }
return error;
}
pass into lowcomms_commit and a message buffer (mb) that we
write our data into */
- mh = dlm_lowcomms_get_buffer(to_nodeid, mb_len, GFP_NOFS, &mb);
+ mh = dlm_lowcomms_get_buffer(to_nodeid, mb_len, ls->ls_allocation, &mb);
if (!mh)
return -ENOBUFS;
attach_lkb(r, lkb);
error = do_request(r, lkb);
send_request_reply(r, lkb, error);
- do_request_effects(r, lkb, error);
unlock_rsb(r);
put_rsb(r);
goto out;
receive_flags(lkb, ms);
-
error = receive_convert_args(ls, lkb, ms);
- if (error) {
- send_convert_reply(r, lkb, error);
- goto out;
- }
-
+ if (error)
+ goto out_reply;
reply = !down_conversion(lkb);
error = do_convert(r, lkb);
+ out_reply:
if (reply)
send_convert_reply(r, lkb, error);
- do_convert_effects(r, lkb, error);
out:
unlock_rsb(r);
put_rsb(r);
goto out;
receive_flags(lkb, ms);
-
error = receive_unlock_args(ls, lkb, ms);
- if (error) {
- send_unlock_reply(r, lkb, error);
- goto out;
- }
+ if (error)
+ goto out_reply;
error = do_unlock(r, lkb);
+ out_reply:
send_unlock_reply(r, lkb, error);
- do_unlock_effects(r, lkb, error);
out:
unlock_rsb(r);
put_rsb(r);
error = do_cancel(r, lkb);
send_cancel_reply(r, lkb, error);
- do_cancel_effects(r, lkb, error);
out:
unlock_rsb(r);
put_rsb(r);
}
if (flags & DLM_LKF_VALBLK) {
- ua->lksb.sb_lvbptr = kzalloc(DLM_USER_LVB_LEN, GFP_NOFS);
+ ua->lksb.sb_lvbptr = kzalloc(DLM_USER_LVB_LEN, GFP_KERNEL);
if (!ua->lksb.sb_lvbptr) {
kfree(ua);
__put_lkb(ls, lkb);
ua = lkb->lkb_ua;
if (flags & DLM_LKF_VALBLK && !ua->lksb.sb_lvbptr) {
- ua->lksb.sb_lvbptr = kzalloc(DLM_USER_LVB_LEN, GFP_NOFS);
+ ua->lksb.sb_lvbptr = kzalloc(DLM_USER_LVB_LEN, GFP_KERNEL);
if (!ua->lksb.sb_lvbptr) {
error = -ENOMEM;
goto out_put;
error = -ENOMEM;
- ls = kzalloc(sizeof(struct dlm_ls) + namelen, GFP_NOFS);
+ ls = kzalloc(sizeof(struct dlm_ls) + namelen, GFP_KERNEL);
if (!ls)
goto out;
memcpy(ls->ls_name, name, namelen);
if (flags & DLM_LSFL_TIMEWARN)
set_bit(LSFL_TIMEWARN, &ls->ls_flags);
+ if (flags & DLM_LSFL_FS)
+ ls->ls_allocation = GFP_NOFS;
+ else
+ ls->ls_allocation = GFP_KERNEL;
+
/* ls_exflags are forced to match among nodes, and we don't
need to require all nodes to have some flags set */
ls->ls_exflags = (flags & ~(DLM_LSFL_TIMEWARN | DLM_LSFL_FS |
size = dlm_config.ci_rsbtbl_size;
ls->ls_rsbtbl_size = size;
- ls->ls_rsbtbl = kmalloc(sizeof(struct dlm_rsbtable) * size, GFP_NOFS);
+ ls->ls_rsbtbl = kmalloc(sizeof(struct dlm_rsbtable) * size, GFP_KERNEL);
if (!ls->ls_rsbtbl)
goto out_lsfree;
for (i = 0; i < size; i++) {
size = dlm_config.ci_lkbtbl_size;
ls->ls_lkbtbl_size = size;
- ls->ls_lkbtbl = kmalloc(sizeof(struct dlm_lkbtable) * size, GFP_NOFS);
+ ls->ls_lkbtbl = kmalloc(sizeof(struct dlm_lkbtable) * size, GFP_KERNEL);
if (!ls->ls_lkbtbl)
goto out_rsbfree;
for (i = 0; i < size; i++) {
size = dlm_config.ci_dirtbl_size;
ls->ls_dirtbl_size = size;
- ls->ls_dirtbl = kmalloc(sizeof(struct dlm_dirtable) * size, GFP_NOFS);
+ ls->ls_dirtbl = kmalloc(sizeof(struct dlm_dirtable) * size, GFP_KERNEL);
if (!ls->ls_dirtbl)
goto out_lkbfree;
for (i = 0; i < size; i++) {
mutex_init(&ls->ls_requestqueue_mutex);
mutex_init(&ls->ls_clear_proc_locks);
- ls->ls_recover_buf = kmalloc(dlm_config.ci_buffer_size, GFP_NOFS);
+ ls->ls_recover_buf = kmalloc(dlm_config.ci_buffer_size, GFP_KERNEL);
if (!ls->ls_recover_buf)
goto out_dirfree;
if (dlm_our_addr(&sas, i))
break;
- addr = kmalloc(sizeof(*addr), GFP_NOFS);
+ addr = kmalloc(sizeof(*addr), GFP_KERNEL);
if (!addr)
break;
memcpy(addr, &sas, sizeof(*addr));
struct sockaddr_storage localaddr;
struct sctp_event_subscribe subscribe;
int result = -EINVAL, num = 1, i, addr_len;
- struct connection *con = nodeid2con(0, GFP_NOFS);
+ struct connection *con = nodeid2con(0, GFP_KERNEL);
int bufsize = NEEDED_RMEM;
if (!con)
static int tcp_listen_for_all(void)
{
struct socket *sock = NULL;
- struct connection *con = nodeid2con(0, GFP_NOFS);
+ struct connection *con = nodeid2con(0, GFP_KERNEL);
int result = -EINVAL;
if (!con)
struct dlm_member *memb;
int w, error;
- memb = kzalloc(sizeof(struct dlm_member), GFP_NOFS);
+ memb = kzalloc(sizeof(struct dlm_member), ls->ls_allocation);
if (!memb)
return -ENOMEM;
ls->ls_total_weight = total;
- array = kmalloc(sizeof(int) * total, GFP_NOFS);
+ array = kmalloc(sizeof(int) * total, ls->ls_allocation);
if (!array)
return;
continue;
log_debug(ls, "new nodeid %d is a re-added member", rv->new[i]);
- memb = kzalloc(sizeof(struct dlm_member), GFP_NOFS);
+ memb = kzalloc(sizeof(struct dlm_member), ls->ls_allocation);
if (!memb)
return -ENOMEM;
memb->nodeid = rv->new[i];
int *ids = NULL, *new = NULL;
int error, ids_count = 0, new_count = 0;
- rv = kzalloc(sizeof(struct dlm_recover), GFP_NOFS);
+ rv = kzalloc(sizeof(struct dlm_recover), ls->ls_allocation);
if (!rv)
return -ENOMEM;
{
char *p;
- p = kzalloc(ls->ls_lvblen, GFP_NOFS);
+ p = kzalloc(ls->ls_lvblen, ls->ls_allocation);
return p;
}
DLM_ASSERT(namelen <= DLM_RESNAME_MAXLEN,);
- r = kzalloc(sizeof(*r) + namelen, GFP_NOFS);
+ r = kzalloc(sizeof(*r) + namelen, ls->ls_allocation);
return r;
}
{
struct dlm_lkb *lkb;
- lkb = kmem_cache_zalloc(lkb_cache, GFP_NOFS);
+ lkb = kmem_cache_zalloc(lkb_cache, ls->ls_allocation);
return lkb;
}
struct sk_buff *skb;
void *data;
- skb = genlmsg_new(size, GFP_NOFS);
+ skb = genlmsg_new(size, GFP_KERNEL);
if (!skb)
return -ENOMEM;
if (!ls)
return -EINVAL;
- xop = kzalloc(sizeof(*xop), GFP_NOFS);
+ xop = kzalloc(sizeof(*xop), GFP_KERNEL);
if (!xop) {
rv = -ENOMEM;
goto out;
if (!ls)
return -EINVAL;
- op = kzalloc(sizeof(*op), GFP_NOFS);
+ op = kzalloc(sizeof(*op), GFP_KERNEL);
if (!op) {
rv = -ENOMEM;
goto out;
if (!ls)
return -EINVAL;
- op = kzalloc(sizeof(*op), GFP_NOFS);
+ op = kzalloc(sizeof(*op), GFP_KERNEL);
if (!op) {
rv = -ENOMEM;
goto out;
char *mb;
int mb_len = sizeof(struct dlm_rcom) + len;
- mh = dlm_lowcomms_get_buffer(to_nodeid, mb_len, GFP_NOFS, &mb);
+ mh = dlm_lowcomms_get_buffer(to_nodeid, mb_len, ls->ls_allocation, &mb);
if (!mh) {
log_print("create_rcom to %d type %d len %d ENOBUFS",
to_nodeid, type, len);
struct rq_entry *e;
int length = ms->m_header.h_length - sizeof(struct dlm_message);
- e = kmalloc(sizeof(struct rq_entry) + length, GFP_NOFS);
+ e = kmalloc(sizeof(struct rq_entry) + length, ls->ls_allocation);
if (!e) {
log_print("dlm_add_requestqueue: out of memory len %d", length);
return;
/*
- * Copyright (C) 2006-2010 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
/* we could possibly check if the cancel of an orphan has resulted in the lkb
being removed and then remove that lkb from the orphans list and free it */
-void dlm_user_add_ast(struct dlm_lkb *lkb, int type, int mode)
+void dlm_user_add_ast(struct dlm_lkb *lkb, int type, int bastmode)
{
struct dlm_ls *ls;
struct dlm_user_args *ua;
ast_type = lkb->lkb_ast_type;
lkb->lkb_ast_type |= type;
- if (type == AST_BAST)
- lkb->lkb_bastmode = mode;
- else
- lkb->lkb_castmode = mode;
+ if (bastmode)
+ lkb->lkb_bastmode = bastmode;
if (!ast_type) {
kref_get(&lkb->lkb_ref);
goto out;
}
- ua = kzalloc(sizeof(struct dlm_user_args), GFP_NOFS);
+ ua = kzalloc(sizeof(struct dlm_user_args), GFP_KERNEL);
if (!ua)
goto out;
ua->proc = proc;
if (!ls)
return -ENOENT;
- ua = kzalloc(sizeof(struct dlm_user_args), GFP_NOFS);
+ ua = kzalloc(sizeof(struct dlm_user_args), GFP_KERNEL);
if (!ua)
goto out;
ua->proc = proc;
error = -ENOMEM;
len = strlen(name) + strlen(name_prefix) + 2;
- ls->ls_device.name = kzalloc(len, GFP_NOFS);
+ ls->ls_device.name = kzalloc(len, GFP_KERNEL);
if (!ls->ls_device.name)
goto fail;
#endif
return -EINVAL;
- kbuf = kzalloc(count + 1, GFP_NOFS);
+ kbuf = kzalloc(count + 1, GFP_KERNEL);
if (!kbuf)
return -ENOMEM;
/* add 1 after namelen so that the name string is terminated */
kbuf = kzalloc(sizeof(struct dlm_write_request) + namelen + 1,
- GFP_NOFS);
+ GFP_KERNEL);
if (!kbuf) {
kfree(k32buf);
return -ENOMEM;
if (!ls)
return -ENOENT;
- proc = kzalloc(sizeof(struct dlm_user_proc), GFP_NOFS);
+ proc = kzalloc(sizeof(struct dlm_user_proc), GFP_KERNEL);
if (!proc) {
dlm_put_lockspace(ls);
return -ENOMEM;
/*
- * Copyright (C) 2006-2010 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
#ifndef __USER_DOT_H__
#define __USER_DOT_H__
-void dlm_user_add_ast(struct dlm_lkb *lkb, int type, int mode);
+void dlm_user_add_ast(struct dlm_lkb *lkb, int type, int bastmode);
int dlm_user_init(void);
void dlm_user_exit(void);
int dlm_device_deregister(struct dlm_ls *ls);
"the persistent file for the dentry with name "
"[%s]; rc = [%d]\n", __func__,
ecryptfs_dentry->d_name.name, rc);
- goto out_free;
+ goto out;
}
}
if ((ecryptfs_inode_to_private(inode)->lower_file->f_flags & O_RDONLY)
rc = -EPERM;
printk(KERN_WARNING "%s: Lower persistent file is RO; eCryptfs "
"file must hence be opened RO\n", __func__);
- goto out_free;
+ goto out;
}
ecryptfs_set_file_lower(
file, ecryptfs_inode_to_private(inode)->lower_file);
return rc;
}
-static long
-ecryptfs_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
-{
- struct file *lower_file = NULL;
- long rc = -ENOTTY;
-
- if (ecryptfs_file_to_private(file))
- lower_file = ecryptfs_file_to_lower(file);
- if (lower_file && lower_file->f_op && lower_file->f_op->unlocked_ioctl)
- rc = lower_file->f_op->unlocked_ioctl(lower_file, cmd, arg);
- return rc;
-}
-
-#ifdef CONFIG_COMPAT
-static long
-ecryptfs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
-{
- struct file *lower_file = NULL;
- long rc = -ENOIOCTLCMD;
-
- if (ecryptfs_file_to_private(file))
- lower_file = ecryptfs_file_to_lower(file);
- if (lower_file && lower_file->f_op && lower_file->f_op->compat_ioctl)
- rc = lower_file->f_op->compat_ioctl(lower_file, cmd, arg);
- return rc;
-}
-#endif
+static int ecryptfs_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg);
const struct file_operations ecryptfs_dir_fops = {
.readdir = ecryptfs_readdir,
- .unlocked_ioctl = ecryptfs_unlocked_ioctl,
-#ifdef CONFIG_COMPAT
- .compat_ioctl = ecryptfs_compat_ioctl,
-#endif
+ .ioctl = ecryptfs_ioctl,
.mmap = generic_file_mmap,
.open = ecryptfs_open,
.flush = ecryptfs_flush,
.write = do_sync_write,
.aio_write = generic_file_aio_write,
.readdir = ecryptfs_readdir,
- .unlocked_ioctl = ecryptfs_unlocked_ioctl,
-#ifdef CONFIG_COMPAT
- .compat_ioctl = ecryptfs_compat_ioctl,
-#endif
+ .ioctl = ecryptfs_ioctl,
.mmap = generic_file_mmap,
.open = ecryptfs_open,
.flush = ecryptfs_flush,
.fasync = ecryptfs_fasync,
.splice_read = generic_file_splice_read,
};
+
+static int
+ecryptfs_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int rc = 0;
+ struct file *lower_file = NULL;
+
+ if (ecryptfs_file_to_private(file))
+ lower_file = ecryptfs_file_to_lower(file);
+ if (lower_file && lower_file->f_op && lower_file->f_op->ioctl)
+ rc = lower_file->f_op->ioctl(ecryptfs_inode_to_lower(inode),
+ lower_file, cmd, arg);
+ else
+ rc = -ENOTTY;
+ return rc;
+}
struct vfsmount *lower_mnt = ecryptfs_dentry_to_lower_mnt(dentry);
struct dentry *dentry_save;
struct vfsmount *vfsmount_save;
- unsigned int flags_save;
int rc;
dentry_save = nd->path.dentry;
vfsmount_save = nd->path.mnt;
- flags_save = nd->flags;
nd->path.dentry = lower_dentry;
nd->path.mnt = lower_mnt;
- nd->flags &= ~LOOKUP_OPEN;
rc = vfs_create(lower_dir_inode, lower_dentry, mode, nd);
nd->path.dentry = dentry_save;
nd->path.mnt = vfsmount_save;
- nd->flags = flags_save;
return rc;
}
printk(KERN_ERR "%s: Out of memory whilst attempting "
"to allocate ecryptfs_dentry_info struct\n",
__func__);
- goto out_put;
+ goto out_dput;
}
ecryptfs_set_dentry_lower(ecryptfs_dentry, lower_dentry);
ecryptfs_set_dentry_lower_mnt(ecryptfs_dentry, lower_mnt);
out_free_kmem:
kmem_cache_free(ecryptfs_header_cache_2, page_virt);
goto out;
-out_put:
+out_dput:
dput(lower_dentry);
- mntput(lower_mnt);
d_drop(ecryptfs_dentry);
out:
return rc;
return rc;
}
-static int ecryptfs_readlink_lower(struct dentry *dentry, char **buf,
- size_t *bufsiz)
+static int
+ecryptfs_readlink(struct dentry *dentry, char __user *buf, int bufsiz)
{
- struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
char *lower_buf;
- size_t lower_bufsiz = PATH_MAX;
+ size_t lower_bufsiz;
+ struct dentry *lower_dentry;
+ struct ecryptfs_mount_crypt_stat *mount_crypt_stat;
+ char *plaintext_name;
+ size_t plaintext_name_size;
mm_segment_t old_fs;
int rc;
+ lower_dentry = ecryptfs_dentry_to_lower(dentry);
+ if (!lower_dentry->d_inode->i_op->readlink) {
+ rc = -EINVAL;
+ goto out;
+ }
+ mount_crypt_stat = &ecryptfs_superblock_to_private(
+ dentry->d_sb)->mount_crypt_stat;
+ /*
+ * If the lower filename is encrypted, it will result in a significantly
+ * longer name. If needed, truncate the name after decode and decrypt.
+ */
+ if (mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES)
+ lower_bufsiz = PATH_MAX;
+ else
+ lower_bufsiz = bufsiz;
+ /* Released in this function */
lower_buf = kmalloc(lower_bufsiz, GFP_KERNEL);
- if (!lower_buf) {
+ if (lower_buf == NULL) {
+ printk(KERN_ERR "%s: Out of memory whilst attempting to "
+ "kmalloc [%zd] bytes\n", __func__, lower_bufsiz);
rc = -ENOMEM;
goto out;
}
(char __user *)lower_buf,
lower_bufsiz);
set_fs(old_fs);
- if (rc < 0)
- goto out;
- lower_bufsiz = rc;
- rc = ecryptfs_decode_and_decrypt_filename(buf, bufsiz, dentry,
- lower_buf, lower_bufsiz);
-out:
+ if (rc >= 0) {
+ rc = ecryptfs_decode_and_decrypt_filename(&plaintext_name,
+ &plaintext_name_size,
+ dentry, lower_buf,
+ rc);
+ if (rc) {
+ printk(KERN_ERR "%s: Error attempting to decode and "
+ "decrypt filename; rc = [%d]\n", __func__,
+ rc);
+ goto out_free_lower_buf;
+ }
+ /* Check for bufsiz <= 0 done in sys_readlinkat() */
+ rc = copy_to_user(buf, plaintext_name,
+ min((size_t) bufsiz, plaintext_name_size));
+ if (rc)
+ rc = -EFAULT;
+ else
+ rc = plaintext_name_size;
+ kfree(plaintext_name);
+ fsstack_copy_attr_atime(dentry->d_inode, lower_dentry->d_inode);
+ }
+out_free_lower_buf:
kfree(lower_buf);
- return rc;
-}
-
-static int
-ecryptfs_readlink(struct dentry *dentry, char __user *buf, int bufsiz)
-{
- char *kbuf;
- size_t kbufsiz, copied;
- int rc;
-
- rc = ecryptfs_readlink_lower(dentry, &kbuf, &kbufsiz);
- if (rc)
- goto out;
- copied = min_t(size_t, bufsiz, kbufsiz);
- rc = copy_to_user(buf, kbuf, copied) ? -EFAULT : copied;
- kfree(kbuf);
- fsstack_copy_attr_atime(dentry->d_inode,
- ecryptfs_dentry_to_lower(dentry)->d_inode);
out:
return rc;
}
return rc;
}
-int ecryptfs_getattr_link(struct vfsmount *mnt, struct dentry *dentry,
- struct kstat *stat)
-{
- struct ecryptfs_mount_crypt_stat *mount_crypt_stat;
- int rc = 0;
-
- mount_crypt_stat = &ecryptfs_superblock_to_private(
- dentry->d_sb)->mount_crypt_stat;
- generic_fillattr(dentry->d_inode, stat);
- if (mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES) {
- char *target;
- size_t targetsiz;
-
- rc = ecryptfs_readlink_lower(dentry, &target, &targetsiz);
- if (!rc) {
- kfree(target);
- stat->size = targetsiz;
- }
- }
- return rc;
-}
-
int ecryptfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
lower_dentry = ecryptfs_dentry_to_lower(dentry);
if (!lower_dentry->d_inode->i_op->setxattr) {
- rc = -EOPNOTSUPP;
+ rc = -ENOSYS;
goto out;
}
mutex_lock(&lower_dentry->d_inode->i_mutex);
int rc = 0;
if (!lower_dentry->d_inode->i_op->getxattr) {
- rc = -EOPNOTSUPP;
+ rc = -ENOSYS;
goto out;
}
mutex_lock(&lower_dentry->d_inode->i_mutex);
lower_dentry = ecryptfs_dentry_to_lower(dentry);
if (!lower_dentry->d_inode->i_op->listxattr) {
- rc = -EOPNOTSUPP;
+ rc = -ENOSYS;
goto out;
}
mutex_lock(&lower_dentry->d_inode->i_mutex);
lower_dentry = ecryptfs_dentry_to_lower(dentry);
if (!lower_dentry->d_inode->i_op->removexattr) {
- rc = -EOPNOTSUPP;
+ rc = -ENOSYS;
goto out;
}
mutex_lock(&lower_dentry->d_inode->i_mutex);
.put_link = ecryptfs_put_link,
.permission = ecryptfs_permission,
.setattr = ecryptfs_setattr,
- .getattr = ecryptfs_getattr_link,
.setxattr = ecryptfs_setxattr,
.getxattr = ecryptfs_getxattr,
.listxattr = ecryptfs_listxattr,
static struct hlist_head *ecryptfs_daemon_hash;
struct mutex ecryptfs_daemon_hash_mux;
-static int ecryptfs_hash_bits;
+static int ecryptfs_hash_buckets;
#define ecryptfs_uid_hash(uid) \
- hash_long((unsigned long)uid, ecryptfs_hash_bits)
+ hash_long((unsigned long)uid, ecryptfs_hash_buckets)
static u32 ecryptfs_msg_counter;
static struct ecryptfs_msg_ctx *ecryptfs_msg_ctx_arr;
}
mutex_init(&ecryptfs_daemon_hash_mux);
mutex_lock(&ecryptfs_daemon_hash_mux);
- ecryptfs_hash_bits = 1;
- while (ecryptfs_number_of_users >> ecryptfs_hash_bits)
- ecryptfs_hash_bits++;
+ ecryptfs_hash_buckets = 1;
+ while (ecryptfs_number_of_users >> ecryptfs_hash_buckets)
+ ecryptfs_hash_buckets++;
ecryptfs_daemon_hash = kmalloc((sizeof(struct hlist_head)
- * (1 << ecryptfs_hash_bits)),
- GFP_KERNEL);
+ * ecryptfs_hash_buckets), GFP_KERNEL);
if (!ecryptfs_daemon_hash) {
rc = -ENOMEM;
printk(KERN_ERR "%s: Failed to allocate memory\n", __func__);
mutex_unlock(&ecryptfs_daemon_hash_mux);
goto out;
}
- for (i = 0; i < (1 << ecryptfs_hash_bits); i++)
+ for (i = 0; i < ecryptfs_hash_buckets; i++)
INIT_HLIST_HEAD(&ecryptfs_daemon_hash[i]);
mutex_unlock(&ecryptfs_daemon_hash_mux);
ecryptfs_msg_ctx_arr = kmalloc((sizeof(struct ecryptfs_msg_ctx)
int i;
mutex_lock(&ecryptfs_daemon_hash_mux);
- for (i = 0; i < (1 << ecryptfs_hash_bits); i++) {
+ for (i = 0; i < ecryptfs_hash_buckets; i++) {
int rc;
hlist_for_each_entry(daemon, elem,
if (lower_dentry->d_inode) {
fput(inode_info->lower_file);
inode_info->lower_file = NULL;
+ d_drop(lower_dentry);
}
}
ecryptfs_destroy_crypt_stat(&inode_info->crypt_stat);
argv++;
if (i++ >= max)
return -E2BIG;
-
- if (fatal_signal_pending(current))
- return -ERESTARTNOHAND;
cond_resched();
}
}
while (len > 0) {
int offset, bytes_to_copy;
- if (fatal_signal_pending(current)) {
- ret = -ERESTARTNOHAND;
- goto out;
- }
- cond_resched();
-
offset = pos % PAGE_SIZE;
if (offset == 0)
offset = PAGE_SIZE;
#else
stack_top = arch_align_stack(stack_top);
stack_top = PAGE_ALIGN(stack_top);
-
- if (unlikely(stack_top < mmap_min_addr) ||
- unlikely(vma->vm_end - vma->vm_start >= stack_top - mmap_min_addr))
- return -ENOMEM;
-
stack_shift = vma->vm_end - stack_top;
bprm->p -= stack_shift;
* will align it up.
*/
rlim_stack = rlimit(RLIMIT_STACK) & PAGE_MASK;
+ rlim_stack = min(rlim_stack, stack_size);
#ifdef CONFIG_STACK_GROWSUP
if (stack_size + stack_expand > rlim_stack)
stack_base = vma->vm_start + rlim_stack;
if (retval < 0)
goto out;
+ current->stack_start = current->mm->start_stack;
+
/* execve succeeded */
current->fs->in_exec = 0;
current->in_execve = 0;
/*
* Dont allow local users get cute and trick others to coredump
* into their pre-created files:
- * Note, this is not relevant for pipes
*/
- if (!ispipe && (inode->i_uid != current_fsuid()))
+ if (inode->i_uid != current_fsuid())
goto close_fail;
if (!file->f_op)
goto close_fail;
de->inode_no = cpu_to_le64(parent->i_ino);
memcpy(de->name, PARENT_DIR, sizeof(PARENT_DIR));
exofs_set_de_type(de, inode);
- kunmap_atomic(kaddr, KM_USER0);
+ kunmap_atomic(page, KM_USER0);
err = exofs_commit_chunk(page, 0, chunk_size);
fail:
page_cache_release(page);
buf->f_bsize = sb->s_blocksize;
buf->f_blocks = le32_to_cpu(es->s_blocks_count) - sbi->s_overhead_last;
buf->f_bfree = percpu_counter_sum_positive(&sbi->s_freeblocks_counter);
+ es->s_free_blocks_count = cpu_to_le32(buf->f_bfree);
buf->f_bavail = buf->f_bfree - le32_to_cpu(es->s_r_blocks_count);
if (buf->f_bfree < le32_to_cpu(es->s_r_blocks_count))
buf->f_bavail = 0;
buf->f_files = le32_to_cpu(es->s_inodes_count);
buf->f_ffree = percpu_counter_sum_positive(&sbi->s_freeinodes_counter);
+ es->s_free_inodes_count = cpu_to_le32(buf->f_ffree);
buf->f_namelen = EXT3_NAME_LEN;
fsid = le64_to_cpup((void *)es->s_uuid) ^
le64_to_cpup((void *)es->s_uuid + sizeof(u64));
if (error)
goto cleanup;
- error = ext3_journal_get_write_access(handle, is.iloc.bh);
- if (error)
- goto cleanup;
-
if (EXT3_I(inode)->i_state & EXT3_STATE_NEW) {
struct ext3_inode *raw_inode = ext3_raw_inode(&is.iloc);
memset(raw_inode, 0, EXT3_SB(inode->i_sb)->s_inode_size);
if (flags & XATTR_CREATE)
goto cleanup;
}
+ error = ext3_journal_get_write_access(handle, is.iloc.bh);
+ if (error)
+ goto cleanup;
if (!value) {
if (!is.s.not_found)
error = ext3_xattr_ibody_set(handle, inode, &i, &is);
* when a file system is mounted (see ext4_fill_super).
*/
+
+#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
+
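/* A worked example of the in_range() check above, with assumed values
 * first = 100 and len = 8: the range is inclusive on both ends, so
 * in_range(b, 100, 8) accepts b = 100..107 (100 + 8 - 1 = 107) and
 * rejects b = 108. */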
/**
* ext4_get_group_desc() -- load group descriptor from disk
* @sb: super block
if (error_msg != NULL)
ext4_error(dir->i_sb, function,
- "bad entry in directory #%lu: %s - block=%llu"
- "offset=%u(%u), inode=%u, rec_len=%d, name_len=%d",
- dir->i_ino, error_msg,
- (unsigned long long) bh->b_blocknr,
- (unsigned) (offset%bh->b_size), offset,
+ "bad entry in directory #%lu: %s - "
+ "offset=%u, inode=%u, rec_len=%d, name_len=%d",
+ dir->i_ino, error_msg, offset,
le32_to_cpu(de->inode),
rlen, de->name_len);
return error_msg == NULL ? 1 : 0;
if (EXT4_HAS_COMPAT_FEATURE(inode->i_sb,
EXT4_FEATURE_COMPAT_DIR_INDEX) &&
- ((ext4_test_inode_flag(inode, EXT4_INODE_INDEX)) ||
+ ((EXT4_I(inode)->i_flags & EXT4_INDEX_FL) ||
((inode->i_size >> sb->s_blocksize_bits) == 1))) {
err = ext4_dx_readdir(filp, dirent, filldir);
if (err != ERR_BAD_DX_DIR) {
* We don't set the inode dirty flag since it's not
* critical that it get flushed back to the disk.
*/
- ext4_clear_inode_flag(filp->f_path.dentry->d_inode, EXT4_INODE_INDEX);
+ EXT4_I(filp->f_path.dentry->d_inode)->i_flags &= ~EXT4_INDEX_FL;
}
stored = 0;
offset = filp->f_pos & (sb->s_blocksize - 1);
#include <linux/wait.h>
#include <linux/blockgroup_lock.h>
#include <linux/percpu_counter.h>
-#ifdef __KERNEL__
-#include <linux/compat.h>
-#endif
/*
* The fourth extended filesystem constants/structures
struct inode *inode; /* file being written to */
unsigned int flag; /* unwritten or not */
int error; /* I/O error code */
- loff_t offset; /* offset in the file */
- ssize_t size; /* size of the extent */
+ ext4_lblk_t offset; /* offset in the file */
+ size_t size; /* size of the extent */
struct work_struct work; /* data work queue */
} ext4_io_end_t;
#define EXT4_TOPDIR_FL 0x00020000 /* Top of directory hierarchies*/
#define EXT4_HUGE_FILE_FL 0x00040000 /* Set to each huge file */
#define EXT4_EXTENTS_FL 0x00080000 /* Inode uses extents */
-#define EXT4_EA_INODE_FL 0x00200000 /* Inode used for large EA */
-#define EXT4_EOFBLOCKS_FL 0x00400000 /* Blocks allocated beyond EOF */
#define EXT4_RESERVED_FL 0x80000000 /* reserved for ext4 lib */
-#define EXT4_FL_USER_VISIBLE 0x004BDFFF /* User visible flags */
-#define EXT4_FL_USER_MODIFIABLE 0x004B80FF /* User modifiable flags */
+#define EXT4_FL_USER_VISIBLE 0x000BDFFF /* User visible flags */
+#define EXT4_FL_USER_MODIFIABLE 0x000B80FF /* User modifiable flags */
/* Flags that should be inherited by new inodes from their parent. */
#define EXT4_FL_INHERITED (EXT4_SECRM_FL | EXT4_UNRM_FL | EXT4_COMPR_FL |\
}
/*
- * Inode flags used for atomic set/get
- */
-enum {
- EXT4_INODE_SECRM = 0, /* Secure deletion */
- EXT4_INODE_UNRM = 1, /* Undelete */
- EXT4_INODE_COMPR = 2, /* Compress file */
- EXT4_INODE_SYNC = 3, /* Synchronous updates */
- EXT4_INODE_IMMUTABLE = 4, /* Immutable file */
- EXT4_INODE_APPEND = 5, /* writes to file may only append */
- EXT4_INODE_NODUMP = 6, /* do not dump file */
- EXT4_INODE_NOATIME = 7, /* do not update atime */
-/* Reserved for compression usage... */
- EXT4_INODE_DIRTY = 8,
- EXT4_INODE_COMPRBLK = 9, /* One or more compressed clusters */
- EXT4_INODE_NOCOMPR = 10, /* Don't compress */
- EXT4_INODE_ECOMPR = 11, /* Compression error */
-/* End compression flags --- maybe not all used */
- EXT4_INODE_INDEX = 12, /* hash-indexed directory */
- EXT4_INODE_IMAGIC = 13, /* AFS directory */
- EXT4_INODE_JOURNAL_DATA = 14, /* file data should be journaled */
- EXT4_INODE_NOTAIL = 15, /* file tail should not be merged */
- EXT4_INODE_DIRSYNC = 16, /* dirsync behaviour (directories only) */
- EXT4_INODE_TOPDIR = 17, /* Top of directory hierarchies*/
- EXT4_INODE_HUGE_FILE = 18, /* Set to each huge file */
- EXT4_INODE_EXTENTS = 19, /* Inode uses extents */
- EXT4_INODE_EA_INODE = 21, /* Inode used for large EA */
- EXT4_INODE_EOFBLOCKS = 22, /* Blocks allocated beyond EOF */
- EXT4_INODE_RESERVED = 31, /* reserved for ext4 lib */
-};
-
-#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1 << EXT4_INODE_##FLAG))
-#define CHECK_FLAG_VALUE(FLAG) if (!TEST_FLAG_VALUE(FLAG)) { \
- printk(KERN_EMERG "EXT4 flag fail: " #FLAG ": %d %d\n", \
- EXT4_##FLAG##_FL, EXT4_INODE_##FLAG); BUG_ON(1); }
-
-/*
- * Since it's pretty easy to mix up bit numbers and hex values, and we
- * can't do a compile-time test for ENUM values, we use a run-time
- * test to make sure that EXT4_XXX_FL is consistent with respect to
- * EXT4_INODE_XXX. If all is well the printk and BUG_ON will all drop
- * out so it won't cost any extra space in the compiled kernel image.
- * But it's important that these values are the same, since we are
- * using EXT4_INODE_XXX to test for the flag values, but EXT4_XX_FL
- * must be consistent with the values of FS_XXX_FL defined in
- * include/linux/fs.h and the on-disk values found in ext2, ext3, and
- * ext4 filesystems, and of course the values defined in e2fsprogs.
- *
- * It's not paranoia if the Murphy's Law really *is* out to get you. :-)
+ * Inode dynamic state flags
*/
-static inline void ext4_check_flag_values(void)
-{
- CHECK_FLAG_VALUE(SECRM);
- CHECK_FLAG_VALUE(UNRM);
- CHECK_FLAG_VALUE(COMPR);
- CHECK_FLAG_VALUE(SYNC);
- CHECK_FLAG_VALUE(IMMUTABLE);
- CHECK_FLAG_VALUE(APPEND);
- CHECK_FLAG_VALUE(NODUMP);
- CHECK_FLAG_VALUE(NOATIME);
- CHECK_FLAG_VALUE(DIRTY);
- CHECK_FLAG_VALUE(COMPRBLK);
- CHECK_FLAG_VALUE(NOCOMPR);
- CHECK_FLAG_VALUE(ECOMPR);
- CHECK_FLAG_VALUE(INDEX);
- CHECK_FLAG_VALUE(IMAGIC);
- CHECK_FLAG_VALUE(JOURNAL_DATA);
- CHECK_FLAG_VALUE(NOTAIL);
- CHECK_FLAG_VALUE(DIRSYNC);
- CHECK_FLAG_VALUE(TOPDIR);
- CHECK_FLAG_VALUE(HUGE_FILE);
- CHECK_FLAG_VALUE(EXTENTS);
- CHECK_FLAG_VALUE(EA_INODE);
- CHECK_FLAG_VALUE(EOFBLOCKS);
- CHECK_FLAG_VALUE(RESERVED);
-}
+#define EXT4_STATE_JDATA 0x00000001 /* journaled data exists */
+#define EXT4_STATE_NEW 0x00000002 /* inode is newly created */
+#define EXT4_STATE_XATTR 0x00000004 /* has in-inode xattrs */
+#define EXT4_STATE_NO_EXPAND 0x00000008 /* No space for expansion */
+#define EXT4_STATE_DA_ALLOC_CLOSE 0x00000010 /* Alloc DA blks on close */
+#define EXT4_STATE_EXT_MIGRATE 0x00000020 /* Inode is migrating */
+#define EXT4_STATE_DIO_UNWRITTEN 0x00000040 /* need convert on dio done*/
/* Used to pass group descriptor data when online resize is done */
struct ext4_new_group_input {
__u16 unused;
};
-#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
-struct compat_ext4_new_group_input {
- u32 group;
- compat_u64 block_bitmap;
- compat_u64 inode_bitmap;
- compat_u64 inode_table;
- u32 blocks_count;
- u16 reserved_blocks;
- u16 unused;
-};
-#endif
-
/* The struct ext4_new_group_input in kernel space, with free_blocks_count */
struct ext4_new_group_data {
__u32 group;
so set the magic i_delalloc_reserve_flag after taking the
inode allocation semaphore for */
#define EXT4_GET_BLOCKS_DELALLOC_RESERVE 0x0004
+ /* Call ext4_da_update_reserve_space() after successfully
+ allocating the blocks */
+#define EXT4_GET_BLOCKS_UPDATE_RESERVE_SPACE 0x0008
 /* caller is from the direct IO path; request creation of an
    uninitialized extent if not allocated, and split the uninitialized
    extent if blocks have been preallocated already */
-#define EXT4_GET_BLOCKS_DIO 0x0008
-#define EXT4_GET_BLOCKS_CONVERT 0x0010
+#define EXT4_GET_BLOCKS_DIO 0x0010
+#define EXT4_GET_BLOCKS_CONVERT 0x0020
#define EXT4_GET_BLOCKS_DIO_CREATE_EXT (EXT4_GET_BLOCKS_DIO|\
EXT4_GET_BLOCKS_CREATE_UNINIT_EXT)
/* Convert extent to initialized after direct IO complete */
#define EXT4_IOC_ALLOC_DA_BLKS _IO('f', 12)
#define EXT4_IOC_MOVE_EXT _IOWR('f', 15, struct move_extent)
-#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
/*
* ioctl commands in 32 bit emulation
*/
#define EXT4_IOC32_GETRSVSZ _IOR('f', 5, int)
#define EXT4_IOC32_SETRSVSZ _IOW('f', 6, int)
#define EXT4_IOC32_GROUP_EXTEND _IOW('f', 7, unsigned int)
-#define EXT4_IOC32_GROUP_ADD _IOW('f', 8, struct compat_ext4_new_group_input)
#ifdef CONFIG_JBD2_DEBUG
#define EXT4_IOC32_WAIT_FOR_READONLY _IOR('f', 99, int)
#endif
#define EXT4_IOC32_GETVERSION_OLD FS_IOC32_GETVERSION
#define EXT4_IOC32_SETVERSION_OLD FS_IOC32_SETVERSION
-#endif
/*
*/
struct ext4_inode_info {
__le32 i_data[15]; /* unconverted */
- __u32 i_dtime;
+ __u32 i_flags;
ext4_fsblk_t i_file_acl;
+ __u32 i_dtime;
/*
* i_block_group is the number of the block group which contains
* near to their parent directory's inode.
*/
ext4_group_t i_block_group;
- unsigned long i_state_flags; /* Dynamic state flags */
- unsigned long i_flags;
+ __u32 i_state; /* Dynamic state flags for ext4 */
ext4_lblk_t i_dir_start_lookup;
#ifdef CONFIG_EXT4_FS_XATTR
unsigned int i_reserved_meta_blocks;
unsigned int i_allocated_meta_blocks;
unsigned short i_delalloc_reserved_flag;
- sector_t i_da_metadata_calc_last_lblock;
- int i_da_metadata_calc_len;
/* on-disk additional length */
__u16 i_extra_isize;
(ino >= EXT4_FIRST_INO(sb) &&
ino <= le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count));
}
-
-/*
- * Inode dynamic state flags
- */
-enum {
- EXT4_STATE_JDATA, /* journaled data exists */
- EXT4_STATE_NEW, /* inode is newly created */
- EXT4_STATE_XATTR, /* has in-inode xattrs */
- EXT4_STATE_NO_EXPAND, /* No space for expansion */
- EXT4_STATE_DA_ALLOC_CLOSE, /* Alloc DA blks on close */
- EXT4_STATE_EXT_MIGRATE, /* Inode is migrating */
- EXT4_STATE_DIO_UNWRITTEN, /* need convert on dio done*/
- EXT4_STATE_NEWENTRY, /* File just added to dir */
-};
-
-#define EXT4_INODE_BIT_FNS(name, field) \
-static inline int ext4_test_inode_##name(struct inode *inode, int bit) \
-{ \
- return test_bit(bit, &EXT4_I(inode)->i_##field); \
-} \
-static inline void ext4_set_inode_##name(struct inode *inode, int bit) \
-{ \
- set_bit(bit, &EXT4_I(inode)->i_##field); \
-} \
-static inline void ext4_clear_inode_##name(struct inode *inode, int bit) \
-{ \
- clear_bit(bit, &EXT4_I(inode)->i_##field); \
-}
-
-EXT4_INODE_BIT_FNS(flag, flags)
-EXT4_INODE_BIT_FNS(state, state_flags)
#else
/* Assume that user mode programs are passing in an ext4fs superblock, not
* a kernel struct super_block. This will allow us to call the feature-test
#define is_dx(dir) (EXT4_HAS_COMPAT_FEATURE(dir->i_sb, \
EXT4_FEATURE_COMPAT_DIR_INDEX) && \
- ext4_test_inode_flag((dir), EXT4_INODE_INDEX))
+ (EXT4_I(dir)->i_flags & EXT4_INDEX_FL))
#define EXT4_DIR_LINK_MAX(dir) (!is_dx(dir) && (dir)->i_nlink >= EXT4_LINK_MAX)
#define EXT4_DIR_LINK_EMPTY(dir) ((dir)->i_nlink == 2 || (dir)->i_nlink == 1)
extern int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf);
extern qsize_t *ext4_get_reserved_space(struct inode *inode);
extern int flush_aio_dio_completed_IO(struct inode *inode);
-extern void ext4_da_update_reserve_space(struct inode *inode,
- int used, int quota_claim);
/* ioctl.c */
extern long ext4_ioctl(struct file *, unsigned int, unsigned long);
extern long ext4_compat_ioctl(struct file *, unsigned int, unsigned long);
ext4_grpblk_t bb_first_free; /* first free block */
ext4_grpblk_t bb_free; /* total free blocks */
ext4_grpblk_t bb_fragments; /* nr of freespace fragments */
- ext4_grpblk_t bb_largest_free_order;/* order of largest frag in BG */
struct list_head bb_prealloc_list;
#ifdef DOUBLE_CHECK
void *bb_bitmap;
extern long ext4_fallocate(struct inode *inode, int mode, loff_t offset,
loff_t len);
extern int ext4_convert_unwritten_extents(struct inode *inode, loff_t offset,
- ssize_t len);
+ loff_t len);
extern int ext4_get_blocks(handle_t *handle, struct inode *inode,
sector_t block, unsigned int max_blocks,
struct buffer_head *bh, int flags);
set_bit(BH_BITMAP_UPTODATE, &(bh)->b_state);
}
-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-
#endif /* __KERNEL__ */
#endif /* _EXT4_H */
ext->ee_len = cpu_to_le16(ext4_ext_get_actual_len(ext));
}
-extern int ext4_ext_calc_metadata_amount(struct inode *inode,
- sector_t lblocks);
+extern int ext4_ext_calc_metadata_amount(struct inode *inode, int blocks);
extern ext4_fsblk_t ext_pblock(struct ext4_extent *ex);
extern ext4_fsblk_t idx_pblock(struct ext4_extent_idx *);
extern void ext4_ext_store_pblock(struct ext4_extent *, ext4_fsblk_t);
ext4_journal_abort_handle(where, __func__, bh,
handle, err);
} else {
- if (inode)
+ if (inode && bh)
mark_buffer_dirty_inode(bh, inode);
else
mark_buffer_dirty(bh);
return 1;
if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
return 1;
- if (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA))
+ if (EXT4_I(inode)->i_flags & EXT4_JOURNAL_DATA_FL)
return 1;
return 0;
}
return 0;
if (!S_ISREG(inode->i_mode))
return 0;
- if (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA))
+ if (EXT4_I(inode)->i_flags & EXT4_JOURNAL_DATA_FL)
return 0;
if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_ORDERED_DATA)
return 1;
return 0;
if (EXT4_JOURNAL(inode) == NULL)
return 1;
- if (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA))
+ if (EXT4_I(inode)->i_flags & EXT4_JOURNAL_DATA_FL)
return 0;
if (test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_WRITEBACK_DATA)
return 1;
if (err <= 0)
return err;
err = ext4_truncate_restart_trans(handle, inode, needed);
- if (err == 0)
- err = -EAGAIN;
+ /*
+ * We have dropped i_data_sem so someone might have cached again
+ * an extent we are going to truncate.
+ */
+ ext4_ext_invalidate_cache(inode);
return err;
}
* to allocate @blocks
 * Worst case is one block per extent
*/
-int ext4_ext_calc_metadata_amount(struct inode *inode, sector_t lblock)
+int ext4_ext_calc_metadata_amount(struct inode *inode, int blocks)
{
- struct ext4_inode_info *ei = EXT4_I(inode);
- int idxs, num = 0;
+ int lcap, icap, rcap, leafs, idxs, num;
+ int newextents = blocks;
- idxs = ((inode->i_sb->s_blocksize - sizeof(struct ext4_extent_header))
- / sizeof(struct ext4_extent_idx));
+ rcap = ext4_ext_space_root_idx(inode, 0);
+ lcap = ext4_ext_space_block(inode, 0);
+ icap = ext4_ext_space_block_idx(inode, 0);
- /*
- * If the new delayed allocation block is contiguous with the
- * previous da block, it can share index blocks with the
- * previous block, so we only need to allocate a new index
- * block every idxs leaf blocks. At ldxs**2 blocks, we need
- * an additional index block, and at ldxs**3 blocks, yet
- * another index blocks.
- */
- if (ei->i_da_metadata_calc_len &&
- ei->i_da_metadata_calc_last_lblock+1 == lblock) {
- if ((ei->i_da_metadata_calc_len % idxs) == 0)
- num++;
- if ((ei->i_da_metadata_calc_len % (idxs*idxs)) == 0)
- num++;
- if ((ei->i_da_metadata_calc_len % (idxs*idxs*idxs)) == 0) {
- num++;
- ei->i_da_metadata_calc_len = 0;
- } else
- ei->i_da_metadata_calc_len++;
- ei->i_da_metadata_calc_last_lblock++;
- return num;
- }
+ /* number of new leaf blocks needed */
+ num = leafs = (newextents + lcap - 1) / lcap;
/*
- * In the worst case we need a new set of index blocks at
- * every level of the inode's extent tree.
+ * Worst case, we need separate index block(s)
+ * to link all new leaf blocks
*/
- ei->i_da_metadata_calc_len = 1;
- ei->i_da_metadata_calc_last_lblock = lblock;
- return ext_depth(inode) + 1;
+ idxs = (leafs + icap - 1) / icap;
+ do {
+ num += idxs;
+ idxs = (idxs + icap - 1) / icap;
+ } while (idxs > rcap);
+
+ return num;
}
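/* A worked example of the worst-case estimate above, using assumed
 * capacities lcap = icap = 340 extents per leaf/index block and
 * rcap = 4 root index slots: for blocks = 1000 new extents,
 * leafs = (1000 + 339) / 340 = 3, so num starts at 3; then
 * idxs = (3 + 339) / 340 = 1, the do/while adds that one index block
 * (num = 4) and stops because the next level, (1 + 339) / 340 = 1,
 * does not exceed rcap, giving a reservation of 4 metadata blocks. */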
static int
BUG_ON(cex->ec_type != EXT4_EXT_CACHE_GAP &&
cex->ec_type != EXT4_EXT_CACHE_EXTENT);
- if (in_range(block, cex->ec_block, cex->ec_len)) {
+ if (block >= cex->ec_block && block < cex->ec_block + cex->ec_len) {
ex->ee_block = cpu_to_le32(cex->ec_block);
ext4_ext_store_pblock(ex, cex->ec_start);
ex->ee_len = cpu_to_le16(cex->ec_len);
int depth = ext_depth(inode);
struct ext4_ext_path *path;
handle_t *handle;
- int i, err;
+ int i = 0, err = 0;
ext_debug("truncate since %u\n", start);
if (IS_ERR(handle))
return PTR_ERR(handle);
-again:
ext4_ext_invalidate_cache(inode);
/*
* We start scanning from right side, freeing all the blocks
* after i_size and walking into the tree depth-wise.
*/
- depth = ext_depth(inode);
path = kzalloc(sizeof(struct ext4_ext_path) * (depth + 1), GFP_NOFS);
if (path == NULL) {
ext4_journal_stop(handle);
return -ENOMEM;
}
- path[0].p_depth = depth;
path[0].p_hdr = ext_inode_hdr(inode);
if (ext4_ext_check(inode, path[0].p_hdr, depth)) {
err = -EIO;
goto out;
}
- i = err = 0;
+ path[0].p_depth = depth;
while (i >= 0 && err == 0) {
if (i == depth) {
out:
ext4_ext_drop_refs(path);
kfree(path);
- if (err == -EAGAIN)
- goto again;
ext4_journal_stop(handle);
return err;
/* FIXME!! we need to try to merge to left or right after zero-out */
static int ext4_ext_zeroout(struct inode *inode, struct ext4_extent *ex)
{
- int ret;
+ int ret = -EIO;
struct bio *bio;
int blkbits, blocksize;
sector_t ee_pblock;
len = ee_len;
bio = bio_alloc(GFP_NOIO, len);
- if (!bio)
- return -ENOMEM;
-
bio->bi_sector = ee_pblock;
bio->bi_bdev = inode->i_sb->s_bdev;
submit_bio(WRITE, bio);
wait_for_completion(&event);
- if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
- bio_put(bio);
- return -EIO;
+ if (test_bit(BIO_UPTODATE, &bio->bi_flags))
+ ret = 0;
+ else {
+ ret = -EIO;
+ break;
}
bio_put(bio);
ee_len -= done;
ee_pblock += done << (blkbits - 9);
}
- return 0;
+ return ret;
}
#define EXT4_EXT_ZERO_LEN 7
struct ext4_extent *ex2 = NULL;
struct ext4_extent *ex3 = NULL;
struct ext4_extent_header *eh;
- ext4_lblk_t ee_block, eof_block;
+ ext4_lblk_t ee_block;
unsigned int allocated, ee_len, depth;
ext4_fsblk_t newblock;
int err = 0;
int ret = 0;
- int may_zeroout;
-
- ext_debug("ext4_ext_convert_to_initialized: inode %lu, logical"
- "block %llu, max_blocks %u\n", inode->i_ino,
- (unsigned long long)iblock, max_blocks);
-
- eof_block = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
- inode->i_sb->s_blocksize_bits;
- if (eof_block < iblock + max_blocks)
- eof_block = iblock + max_blocks;
depth = ext_depth(inode);
eh = path[depth].p_hdr;
ee_len = ext4_ext_get_actual_len(ex);
allocated = ee_len - (iblock - ee_block);
newblock = iblock - ee_block + ext_pblock(ex);
-
ex2 = ex;
orig_ex.ee_block = ex->ee_block;
orig_ex.ee_len = cpu_to_le16(ee_len);
ext4_ext_store_pblock(&orig_ex, ext_pblock(ex));
- /*
- * It is safe to convert extent to initialized via explicit
- * zeroout only if extent is fully insde i_size or new_size.
- */
- may_zeroout = ee_block + ee_len <= eof_block;
-
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
goto out;
	/* If extent has less than 2*EXT4_EXT_ZERO_LEN zeroout directly */
- if (ee_len <= 2*EXT4_EXT_ZERO_LEN && may_zeroout) {
+ if (ee_len <= 2*EXT4_EXT_ZERO_LEN) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
if (allocated > max_blocks) {
unsigned int newdepth;
	/* If extent has less than EXT4_EXT_ZERO_LEN zeroout directly */
- if (allocated <= EXT4_EXT_ZERO_LEN && may_zeroout) {
+ if (allocated <= EXT4_EXT_ZERO_LEN) {
/*
	 * iblock == ee_block is handled by the zeroout
* at the beginning.
ex3->ee_len = cpu_to_le16(allocated - max_blocks);
ext4_ext_mark_uninitialized(ex3);
err = ext4_ext_insert_extent(handle, inode, path, ex3, 0);
- if (err == -ENOSPC && may_zeroout) {
+ if (err == -ENOSPC) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
* update the extent length after successful insert of the
* split extent
*/
- ee_len -= ext4_ext_get_actual_len(ex3);
- orig_ex.ee_len = cpu_to_le16(ee_len);
- may_zeroout = ee_block + ee_len <= eof_block;
-
+ orig_ex.ee_len = cpu_to_le16(ee_len -
+ ext4_ext_get_actual_len(ex3));
depth = newdepth;
ext4_ext_drop_refs(path);
path = ext4_ext_find_extent(inode, iblock, path);
* otherwise give the extent a chance to merge to left
*/
if (le16_to_cpu(orig_ex.ee_len) <= EXT4_EXT_ZERO_LEN &&
- iblock != ee_block && may_zeroout) {
+ iblock != ee_block) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
goto out;
insert:
err = ext4_ext_insert_extent(handle, inode, path, &newex, 0);
- if (err == -ENOSPC && may_zeroout) {
+ if (err == -ENOSPC) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
struct ext4_extent *ex2 = NULL;
struct ext4_extent *ex3 = NULL;
struct ext4_extent_header *eh;
- ext4_lblk_t ee_block, eof_block;
+ ext4_lblk_t ee_block;
unsigned int allocated, ee_len, depth;
ext4_fsblk_t newblock;
int err = 0;
- int may_zeroout;
-
- ext_debug("ext4_split_unwritten_extents: inode %lu, logical"
- "block %llu, max_blocks %u\n", inode->i_ino,
- (unsigned long long)iblock, max_blocks);
-
- eof_block = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
- inode->i_sb->s_blocksize_bits;
- if (eof_block < iblock + max_blocks)
- eof_block = iblock + max_blocks;
+ ext_debug("ext4_split_unwritten_extents: inode %lu,"
+ "iblock %llu, max_blocks %u\n", inode->i_ino,
+ (unsigned long long)iblock, max_blocks);
depth = ext_depth(inode);
eh = path[depth].p_hdr;
ex = path[depth].p_ext;
ee_len = ext4_ext_get_actual_len(ex);
allocated = ee_len - (iblock - ee_block);
newblock = iblock - ee_block + ext_pblock(ex);
-
ex2 = ex;
orig_ex.ee_block = ex->ee_block;
orig_ex.ee_len = cpu_to_le16(ee_len);
ext4_ext_store_pblock(&orig_ex, ext_pblock(ex));
- /*
- * It is safe to convert extent to initialized via explicit
- * zeroout only if extent is fully insde i_size or new_size.
- */
- may_zeroout = ee_block + ee_len <= eof_block;
-
/*
* If the uninitialized extent begins at the same logical
* block where the write begins, and the write completely
ex3->ee_len = cpu_to_le16(allocated - max_blocks);
ext4_ext_mark_uninitialized(ex3);
err = ext4_ext_insert_extent(handle, inode, path, ex3, flags);
- if (err == -ENOSPC && may_zeroout) {
+ if (err == -ENOSPC) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
* update the extent length after successful insert of the
* split extent
*/
- ee_len -= ext4_ext_get_actual_len(ex3);
- orig_ex.ee_len = cpu_to_le16(ee_len);
- may_zeroout = ee_block + ee_len <= eof_block;
-
+ orig_ex.ee_len = cpu_to_le16(ee_len -
+ ext4_ext_get_actual_len(ex3));
depth = newdepth;
ext4_ext_drop_refs(path);
path = ext4_ext_find_extent(inode, iblock, path);
goto out;
insert:
err = ext4_ext_insert_extent(handle, inode, path, &newex, flags);
- if (err == -ENOSPC && may_zeroout) {
+ if (err == -ENOSPC) {
err = ext4_ext_zeroout(inode, &orig_ex);
if (err)
goto fix_extent_len;
return err;
}
-static void unmap_underlying_metadata_blocks(struct block_device *bdev,
- sector_t block, int count)
-{
- int i;
- for (i = 0; i < count; i++)
- unmap_underlying_metadata(bdev, block + i);
-}
-
static int
ext4_ext_handle_uninitialized_extents(handle_t *handle, struct inode *inode,
ext4_lblk_t iblock, unsigned int max_blocks,
if (io)
io->flag = DIO_AIO_UNWRITTEN;
else
- ext4_set_inode_state(inode, EXT4_STATE_DIO_UNWRITTEN);
+ EXT4_I(inode)->i_state |= EXT4_STATE_DIO_UNWRITTEN;
goto out;
}
/* async DIO end_io complete, convert the filled extent to written */
} else
allocated = ret;
set_buffer_new(bh_result);
- /*
- * if we allocated more blocks than requested
- * we need to make sure we unmap the extra block
- * allocated. The actual needed block will get
- * unmapped later when we find the buffer_head marked
- * new.
- */
- if (allocated > max_blocks) {
- unmap_underlying_metadata_blocks(inode->i_sb->s_bdev,
- newblock + max_blocks,
- allocated - max_blocks);
- allocated = max_blocks;
- }
-
- /*
- * If we have done fallocate with the offset that is already
- * delayed allocated, we would have block reservation
- * and quota reservation done in the delayed write path.
- * But fallocate would have already updated quota and block
- * count for this offset. So cancel these reservation
- */
- if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
- ext4_da_update_reserve_space(inode, allocated, 0);
-
map_out:
set_buffer_mapped(bh_result);
out1:
{
struct ext4_ext_path *path = NULL;
struct ext4_extent_header *eh;
- struct ext4_extent newex, *ex, *last_ex;
+ struct ext4_extent newex, *ex;
ext4_fsblk_t newblock;
- int i, err = 0, depth, ret, cache_type;
+ int err = 0, depth, ret, cache_type;
unsigned int allocated = 0;
struct ext4_allocation_request ar;
ext4_io_end_t *io = EXT4_I(inode)->cur_aio_dio;
* this situation is possible, though, _during_ tree modification;
* this is why assert can't be put in ext4_ext_find_extent()
*/
- if (path[depth].p_ext == NULL && depth != 0) {
- ext4_error(inode->i_sb, __func__, "bad extent address "
- "inode: %lu, iblock: %lu, depth: %d",
- inode->i_ino, (unsigned long) iblock, depth);
- err = -EIO;
- goto out2;
- }
+ BUG_ON(path[depth].p_ext == NULL && depth != 0);
eh = path[depth].p_hdr;
ex = path[depth].p_ext;
*/
ee_len = ext4_ext_get_actual_len(ex);
/* if found extent covers block, simply return it */
- if (in_range(iblock, ee_block, ee_len)) {
+ if (iblock >= ee_block && iblock < ee_block + ee_len) {
newblock = iblock - ee_block + ee_start;
/* number of remaining blocks in the extent */
allocated = ee_len - (iblock - ee_block);
if (io)
io->flag = DIO_AIO_UNWRITTEN;
else
- ext4_set_inode_state(inode,
- EXT4_STATE_DIO_UNWRITTEN);
+ EXT4_I(inode)->i_state |=
+					EXT4_STATE_DIO_UNWRITTEN;
}
}
-
- if (unlikely(ext4_test_inode_flag(inode, EXT4_INODE_EOFBLOCKS))) {
- if (unlikely(!eh->eh_entries)) {
- ext4_error(inode->i_sb, __func__,
- "inode#%lu, eh->eh_entries = 0 and "
- "EOFBLOCKS_FL set", inode->i_ino);
- err = -EIO;
- goto out2;
- }
- last_ex = EXT_LAST_EXTENT(eh);
- /*
- * If the current leaf block was reached by looking at
- * the last index block all the way down the tree, and
- * we are extending the inode beyond the last extent
- * in the current leaf block, then clear the
- * EOFBLOCKS_FL flag.
- */
- for (i = depth-1; i >= 0; i--) {
- if (path[i].p_idx != EXT_LAST_INDEX(path[i].p_hdr))
- break;
- }
- if ((i < 0) &&
- (iblock + ar.len > le32_to_cpu(last_ex->ee_block) +
- ext4_ext_get_actual_len(last_ex)))
- ext4_clear_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
- }
err = ext4_ext_insert_extent(handle, inode, path, &newex, flags);
if (err) {
/* free data blocks we just allocated */
/* previous routine could use block we allocated */
newblock = ext_pblock(&newex);
allocated = ext4_ext_get_actual_len(&newex);
- if (allocated > max_blocks)
- allocated = max_blocks;
set_buffer_new(bh_result);
- /*
- * Update reserved blocks/metadata blocks after successful
- * block allocation which had been deferred till now.
- */
- if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
- ext4_da_update_reserve_space(inode, allocated, 1);
-
/*
* Cache the extent and update transaction to commit on fdatasync only
* when it is _not_ an uninitialized extent.
i_size_write(inode, new_size);
if (new_size > EXT4_I(inode)->i_disksize)
ext4_update_i_disksize(inode, new_size);
- } else {
- /*
- * Mark that we allocate beyond EOF so the subsequent truncate
- * can proceed even if the new size is the same as i_size.
- */
- if (new_size > i_size_read(inode))
- ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
}
}
* currently supporting (pre)allocate mode for extent-based
* files _only_
*/
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
return -EOPNOTSUPP;
/* preallocation to directories is currently not supported */
*/
credits = ext4_chunk_trans_blocks(inode, max_blocks);
mutex_lock(&inode->i_mutex);
- ret = inode_newsize_ok(inode, (len + offset));
- if (ret) {
- mutex_unlock(&inode->i_mutex);
- return ret;
- }
retry:
while (ret >= 0 && ret < max_blocks) {
block = block + ret;
* Returns 0 on success.
*/
int ext4_convert_unwritten_extents(struct inode *inode, loff_t offset,
- ssize_t len)
+ loff_t len)
{
handle_t *handle;
ext4_lblk_t block;
int error = 0;
/* in-inode? */
- if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) {
+ if (EXT4_I(inode)->i_state & EXT4_STATE_XATTR) {
struct ext4_iloc iloc;
int offset; /* offset of xattr in inode */
physical += offset;
length = EXT4_SB(inode->i_sb)->s_inode_size - offset;
flags |= FIEMAP_EXTENT_DATA_INLINE;
- brelse(iloc.bh);
} else { /* external block */
physical = EXT4_I(inode)->i_file_acl << blockbits;
length = inode->i_sb->s_blocksize;
__u64 start, __u64 len)
{
ext4_lblk_t start_blk;
+ ext4_lblk_t len_blks;
int error = 0;
/* fallback to generic here if not in extents fmt */
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
return generic_block_fiemap(inode, fieinfo, start, len,
ext4_get_block);
if (fieinfo->fi_flags & FIEMAP_FLAG_XATTR) {
error = ext4_xattr_fiemap(inode, fieinfo);
} else {
- ext4_lblk_t len_blks;
- __u64 last_blk;
-
start_blk = start >> inode->i_sb->s_blocksize_bits;
- last_blk = (start + len - 1) >> inode->i_sb->s_blocksize_bits;
- if (last_blk >= EXT_MAX_BLOCK)
- last_blk = EXT_MAX_BLOCK-1;
- len_blks = ((ext4_lblk_t) last_blk) - start_blk + 1;
+ len_blks = len >> inode->i_sb->s_blocksize_bits;
/*
* Walk the extent tree gathering extent information.
*/
static int ext4_release_file(struct inode *inode, struct file *filp)
{
- if (ext4_test_inode_state(inode, EXT4_STATE_DA_ALLOC_CLOSE)) {
+ if (EXT4_I(inode)->i_state & EXT4_STATE_DA_ALLOC_CLOSE) {
ext4_alloc_da_blocks(inode);
- ext4_clear_inode_state(inode, EXT4_STATE_DA_ALLOC_CLOSE);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_DA_ALLOC_CLOSE;
}
/* if we are the last writer on the inode, drop the block reservation */
if ((filp->f_mode & FMODE_WRITE) &&
* is smaller than s_maxbytes, which is for extent-mapped files.
*/
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
size_t length = iov_length(iov, nr_segs);
#include <trace/events/ext4.h>
-/*
- * If we're not journaling and this is a just-created file, we have to
- * sync our parent directory (if it was freshly created) since
- * otherwise it will only be written by writeback, leaving a huge
- * window during which a crash may lose the file. This may apply for
- * the parent directory's parent as well, and so on recursively, if
- * they are also freshly created.
- */
-static void ext4_sync_parent(struct inode *inode)
-{
- struct dentry *dentry = NULL;
-
- while (inode && ext4_test_inode_state(inode, EXT4_STATE_NEWENTRY)) {
- ext4_clear_inode_state(inode, EXT4_STATE_NEWENTRY);
- dentry = list_entry(inode->i_dentry.next,
- struct dentry, d_alias);
- if (!dentry || !dentry->d_parent || !dentry->d_parent->d_inode)
- break;
- inode = dentry->d_parent->d_inode;
- sync_mapping_buffers(inode->i_mapping);
- }
-}
-
/*
* akpm: A new design for ext4_sync_file().
*
if (ret < 0)
return ret;
- if (!journal) {
- ret = simple_fsync(file, dentry, datasync);
- if (!ret && !list_empty(&inode->i_dentry))
- ext4_sync_parent(inode);
- return ret;
- }
+ if (!journal)
+ return simple_fsync(file, dentry, datasync);
/*
* data=writeback,ordered:
return ext4_force_commit(inode->i_sb);
commit_tid = datasync ? ei->i_datasync_tid : ei->i_sync_tid;
- if (jbd2_log_start_commit(journal, commit_tid)) {
- /*
- * When the journal is on a different device than the
- * fs data disk, we need to issue the barrier in
- * writeback mode. (In ordered mode, the jbd2 layer
- * will take care of issuing the barrier. In
- * data=journal, all of the data blocks are written to
- * the journal device.)
- */
- if (ext4_should_writeback_data(inode) &&
- (journal->j_fs_dev != journal->j_dev) &&
- (journal->j_flags & JBD2_BARRIER))
- blkdev_issue_flush(inode->i_sb->s_bdev, NULL);
- ret = jbd2_log_wait_commit(journal, commit_tid);
- } else if (journal->j_flags & JBD2_BARRIER)
+ if (jbd2_log_start_commit(journal, commit_tid))
+ jbd2_log_wait_commit(journal, commit_tid);
+ else if (journal->j_flags & JBD2_BARRIER)
blkdev_issue_flush(inode->i_sb->s_bdev, NULL);
return ret;
}
if (fatal)
goto error_return;
- fatal = -ESRCH;
- gdp = ext4_get_group_desc(sb, block_group, &bh2);
- if (gdp) {
+ /* Ok, now we can actually update the inode bitmaps.. */
+ cleared = ext4_clear_bit_atomic(ext4_group_lock_ptr(sb, block_group),
+ bit, bitmap_bh->b_data);
+ if (!cleared)
+ ext4_error(sb, "ext4_free_inode",
+ "bit already cleared for inode %lu", ino);
+ else {
+ gdp = ext4_get_group_desc(sb, block_group, &bh2);
+
BUFFER_TRACE(bh2, "get_write_access");
fatal = ext4_journal_get_write_access(handle, bh2);
- }
- ext4_lock_group(sb, block_group);
- cleared = ext4_clear_bit(bit, bitmap_bh->b_data);
- if (fatal || !cleared) {
- ext4_unlock_group(sb, block_group);
- goto out;
- }
-
- count = ext4_free_inodes_count(sb, gdp) + 1;
- ext4_free_inodes_set(sb, gdp, count);
- if (is_directory) {
- count = ext4_used_dirs_count(sb, gdp) - 1;
- ext4_used_dirs_set(sb, gdp, count);
- percpu_counter_dec(&sbi->s_dirs_counter);
- }
- gdp->bg_checksum = ext4_group_desc_csum(sbi, block_group, gdp);
- ext4_unlock_group(sb, block_group);
-
- percpu_counter_inc(&sbi->s_freeinodes_counter);
- if (sbi->s_log_groups_per_flex) {
- ext4_group_t f = ext4_flex_group(sbi, block_group);
+ if (fatal) goto error_return;
+
+ if (gdp) {
+ ext4_lock_group(sb, block_group);
+ count = ext4_free_inodes_count(sb, gdp) + 1;
+ ext4_free_inodes_set(sb, gdp, count);
+ if (is_directory) {
+ count = ext4_used_dirs_count(sb, gdp) - 1;
+ ext4_used_dirs_set(sb, gdp, count);
+ if (sbi->s_log_groups_per_flex) {
+ ext4_group_t f;
+
+ f = ext4_flex_group(sbi, block_group);
+ atomic_dec(&sbi->s_flex_groups[f].free_inodes);
+ }
- atomic_inc(&sbi->s_flex_groups[f].free_inodes);
- if (is_directory)
- atomic_dec(&sbi->s_flex_groups[f].used_dirs);
+ }
+ gdp->bg_checksum = ext4_group_desc_csum(sbi,
+ block_group, gdp);
+ ext4_unlock_group(sb, block_group);
+ percpu_counter_inc(&sbi->s_freeinodes_counter);
+ if (is_directory)
+ percpu_counter_dec(&sbi->s_dirs_counter);
+
+ if (sbi->s_log_groups_per_flex) {
+ ext4_group_t f;
+
+ f = ext4_flex_group(sbi, block_group);
+ atomic_inc(&sbi->s_flex_groups[f].free_inodes);
+ }
+ }
+ BUFFER_TRACE(bh2, "call ext4_handle_dirty_metadata");
+ err = ext4_handle_dirty_metadata(handle, NULL, bh2);
+ if (!fatal) fatal = err;
}
- BUFFER_TRACE(bh2, "call ext4_handle_dirty_metadata");
- fatal = ext4_handle_dirty_metadata(handle, NULL, bh2);
-out:
- if (cleared) {
- BUFFER_TRACE(bitmap_bh, "call ext4_handle_dirty_metadata");
- err = ext4_handle_dirty_metadata(handle, NULL, bitmap_bh);
- if (!fatal)
- fatal = err;
- sb->s_dirt = 1;
- } else
- ext4_error(sb, "ext4_free_inode",
- "bit already cleared for inode %lu", ino);
-
+ BUFFER_TRACE(bitmap_bh, "call ext4_handle_dirty_metadata");
+ err = ext4_handle_dirty_metadata(handle, NULL, bitmap_bh);
+ if (!fatal)
+ fatal = err;
+ sb->s_dirt = 1;
error_return:
brelse(bitmap_bh);
ext4_std_error(sb, fatal);
if (S_ISDIR(mode) &&
((parent == sb->s_root->d_inode) ||
- (ext4_test_inode_flag(parent, EXT4_INODE_TOPDIR)))) {
+ (EXT4_I(parent)->i_flags & EXT4_TOPDIR_FL))) {
int best_ndir = inodes_per_group;
int ret = -1;
if (sbi->s_log_groups_per_flex) {
ext4_group_t f = ext4_flex_group(sbi, group);
- atomic_inc(&sbi->s_flex_groups[f].used_dirs);
+ atomic_inc(&sbi->s_flex_groups[f].free_inodes);
}
}
gdp->bg_checksum = ext4_group_desc_csum(sbi, group, gdp);
BUFFER_TRACE(inode_bitmap_bh,
"call ext4_handle_dirty_metadata");
err = ext4_handle_dirty_metadata(handle,
- NULL,
+ inode,
inode_bitmap_bh);
if (err)
goto fail;
inode->i_generation = sbi->s_next_generation++;
spin_unlock(&sbi->s_next_gen_lock);
- ei->i_state_flags = 0;
- ext4_set_inode_state(inode, EXT4_STATE_NEW);
+ ei->i_state = EXT4_STATE_NEW;
ei->i_extra_isize = EXT4_SB(sb)->s_want_extra_isize;
if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_EXTENTS)) {
/* set extent flag only for directory, file and normal symlink*/
if (S_ISDIR(mode) || S_ISREG(mode) || S_ISLNK(mode)) {
- ext4_set_inode_flag(inode, EXT4_INODE_EXTENTS);
+ EXT4_I(inode)->i_flags |= EXT4_EXTENTS_FL;
ext4_ext_tree_init(handle, inode);
}
}
int count = 0;
ext4_fsblk_t first_block = 0;
- J_ASSERT(!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)));
+ J_ASSERT(!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL));
J_ASSERT(handle != NULL || (flags & EXT4_GET_BLOCKS_CREATE) == 0);
depth = ext4_block_to_path(inode, iblock, offsets,
&blocks_to_boundary);
return &EXT4_I(inode)->i_reserved_quota;
}
#endif
-
/*
* Calculate the number of metadata blocks needed to reserve
- * to allocate a new block at @lblocks for non extent file based file
+ * to allocate @blocks for non extent file based file
*/
-static int ext4_indirect_calc_metadata_amount(struct inode *inode,
- sector_t lblock)
+static int ext4_indirect_calc_metadata_amount(struct inode *inode, int blocks)
{
- struct ext4_inode_info *ei = EXT4_I(inode);
- sector_t dind_mask = ~((sector_t)EXT4_ADDR_PER_BLOCK(inode->i_sb) - 1);
- int blk_bits;
+ int icap = EXT4_ADDR_PER_BLOCK(inode->i_sb);
+ int ind_blks, dind_blks, tind_blks;
- if (lblock < EXT4_NDIR_BLOCKS)
- return 0;
+ /* number of new indirect blocks needed */
+ ind_blks = (blocks + icap - 1) / icap;
- lblock -= EXT4_NDIR_BLOCKS;
+ dind_blks = (ind_blks + icap - 1) / icap;
- if (ei->i_da_metadata_calc_len &&
- (lblock & dind_mask) == ei->i_da_metadata_calc_last_lblock) {
- ei->i_da_metadata_calc_len++;
- return 0;
- }
- ei->i_da_metadata_calc_last_lblock = lblock & dind_mask;
- ei->i_da_metadata_calc_len = 1;
- blk_bits = order_base_2(lblock);
- return (blk_bits / EXT4_ADDR_PER_BLOCK_BITS(inode->i_sb)) + 1;
+ tind_blks = 1;
+
+ return ind_blks + dind_blks + tind_blks;
}
/*
* Calculate the number of metadata blocks needed to reserve
- * to allocate a block located at @lblock
+ * to allocate a given number of blocks
*/
-static int ext4_calc_metadata_amount(struct inode *inode, sector_t lblock)
+static int ext4_calc_metadata_amount(struct inode *inode, int blocks)
{
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
- return ext4_ext_calc_metadata_amount(inode, lblock);
+ if (!blocks)
+ return 0;
- return ext4_indirect_calc_metadata_amount(inode, lblock);
+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)
+ return ext4_ext_calc_metadata_amount(inode, blocks);
+
+ return ext4_indirect_calc_metadata_amount(inode, blocks);
}
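
The per-request estimate restored above is just three ceiling divisions. A minimal standalone sketch of the same arithmetic (illustrative only; the helper name is made up, and the 1024 addresses per indirect block correspond to an assumed 4 KiB block size, not a value taken from this patch):

#include <stdio.h>

/*
 * Illustrative sketch: mirrors the worst case used by the reverted
 * ext4_indirect_calc_metadata_amount() -- one indirect block per 'icap'
 * data blocks, one double-indirect block per 'icap' indirect blocks,
 * plus a single triple-indirect block.
 */
static int indirect_metadata_estimate(int blocks, int icap)
{
	int ind  = (blocks + icap - 1) / icap;	/* ceil(blocks / icap) */
	int dind = (ind + icap - 1) / icap;	/* ceil(ind / icap) */
	int tind = 1;

	return ind + dind + tind;
}

int main(void)
{
	int icap = 1024;	/* assumed: 4096-byte blocks, 4-byte addresses */

	printf("%d\n", indirect_metadata_estimate(1, icap));	/* 3 */
	printf("%d\n", indirect_metadata_estimate(1025, icap));	/* 4 */
	return 0;
}

Because this deliberately over-estimates small requests, the reservation and release paths below recompute the figure and hand the surplus back.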
-/*
- * Called with i_data_sem down, which is important since we can call
- * ext4_discard_preallocations() from here.
- */
-void ext4_da_update_reserve_space(struct inode *inode,
- int used, int quota_claim)
+static void ext4_da_update_reserve_space(struct inode *inode, int used)
{
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
- struct ext4_inode_info *ei = EXT4_I(inode);
- int mdb_free = 0, allocated_meta_blocks = 0;
-
- spin_lock(&ei->i_block_reservation_lock);
- if (unlikely(used > ei->i_reserved_data_blocks)) {
- ext4_msg(inode->i_sb, KERN_NOTICE, "%s: ino %lu, used %d "
- "with only %d reserved data blocks\n",
- __func__, inode->i_ino, used,
- ei->i_reserved_data_blocks);
- WARN_ON(1);
- used = ei->i_reserved_data_blocks;
- }
-
- /* Update per-inode reservations */
- ei->i_reserved_data_blocks -= used;
- used += ei->i_allocated_meta_blocks;
- ei->i_reserved_meta_blocks -= ei->i_allocated_meta_blocks;
- allocated_meta_blocks = ei->i_allocated_meta_blocks;
- ei->i_allocated_meta_blocks = 0;
- percpu_counter_sub(&sbi->s_dirtyblocks_counter, used);
-
- if (ei->i_reserved_data_blocks == 0) {
- /*
- * We can release all of the reserved metadata blocks
- * only when we have written all of the delayed
- * allocation blocks.
- */
- mdb_free = ei->i_reserved_meta_blocks;
- ei->i_reserved_meta_blocks = 0;
- ei->i_da_metadata_calc_len = 0;
+ int total, mdb, mdb_free;
+
+ spin_lock(&EXT4_I(inode)->i_block_reservation_lock);
+ /* recalculate the number of metablocks that still need to be reserved */
+ total = EXT4_I(inode)->i_reserved_data_blocks - used;
+ mdb = ext4_calc_metadata_amount(inode, total);
+
+ /* figure out how many metablocks to release */
+ BUG_ON(mdb > EXT4_I(inode)->i_reserved_meta_blocks);
+ mdb_free = EXT4_I(inode)->i_reserved_meta_blocks - mdb;
+
+ if (mdb_free) {
+ /* Account for allocated meta_blocks */
+ mdb_free -= EXT4_I(inode)->i_allocated_meta_blocks;
+
+ /* update fs dirty blocks counter */
percpu_counter_sub(&sbi->s_dirtyblocks_counter, mdb_free);
+ EXT4_I(inode)->i_allocated_meta_blocks = 0;
+ EXT4_I(inode)->i_reserved_meta_blocks = mdb;
}
+
+ /* update per-inode reservations */
+ BUG_ON(used > EXT4_I(inode)->i_reserved_data_blocks);
+ EXT4_I(inode)->i_reserved_data_blocks -= used;
spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
- /* Update quota subsystem */
- if (quota_claim) {
- vfs_dq_claim_block(inode, used);
- if (mdb_free)
- vfs_dq_release_reservation_block(inode, mdb_free);
- } else {
- /*
- * We did fallocate with an offset that is already delayed
- * allocated. So on delayed allocated writeback we should
- * not update the quota for allocated blocks. But then
- * converting an fallocate region to initialized region would
- * have caused a metadata allocation. So claim quota for
- * that
- */
- if (allocated_meta_blocks)
- vfs_dq_claim_block(inode, allocated_meta_blocks);
- vfs_dq_release_reservation_block(inode, mdb_free + used -
- allocated_meta_blocks);
- }
+ /*
+ * free the over-booked quota reserved for metadata blocks
+ */
+ if (mdb_free)
+ vfs_dq_release_reservation_block(inode, mdb_free);
/*
* If we have done all the pending block allocations and if
* there aren't any writers on the inode, we can discard the
* inode's preallocations.
*/
- if ((ei->i_reserved_data_blocks == 0) &&
- (atomic_read(&inode->i_writecount) == 0))
+ if (!total && (atomic_read(&inode->i_writecount) == 0))
ext4_discard_preallocations(inode);
}
* file system block.
*/
down_read((&EXT4_I(inode)->i_data_sem));
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
bh, 0);
} else {
* We need to check for EXT4 here because migrate
* could have changed the inode type in between
*/
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
retval = ext4_ext_get_blocks(handle, inode, block, max_blocks,
bh, flags);
} else {
* i_data's format changing. Force the migrate
* to fail by clearing migrate flags
*/
- ext4_clear_inode_state(inode, EXT4_STATE_EXT_MIGRATE);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_EXT_MIGRATE;
}
-
- /*
- * Update reserved blocks/metadata blocks after successful
- * block allocation which had been deferred till now. We don't
- * support fallocate for non extent files. So we can update
- * reserve space here.
- */
- if ((retval > 0) &&
- (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE))
- ext4_da_update_reserve_space(inode, retval, 1);
}
+
if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
EXT4_I(inode)->i_delalloc_reserved_flag = 0;
+ /*
+ * Update reserved blocks/metadata blocks after successful
+ * block allocation which had been deferred till now.
+ */
+ if ((retval > 0) && (flags & EXT4_GET_BLOCKS_UPDATE_RESERVE_SPACE))
+ ext4_da_update_reserve_space(inode, retval);
+
up_write((&EXT4_I(inode)->i_data_sem));
if (retval > 0 && buffer_mapped(bh)) {
int ret = check_block_validity(inode, "file system "
new_i_size = pos + copied;
if (new_i_size > inode->i_size)
i_size_write(inode, pos+copied);
- ext4_set_inode_state(inode, EXT4_STATE_JDATA);
+ EXT4_I(inode)->i_state |= EXT4_STATE_JDATA;
if (new_i_size > EXT4_I(inode)->i_disksize) {
ext4_update_i_disksize(inode, new_i_size);
ret2 = ext4_mark_inode_dirty(handle, inode);
return ret ? ret : copied;
}
-/*
- * Reserve a single block located at lblock
- */
-static int ext4_da_reserve_space(struct inode *inode, sector_t lblock)
+static int ext4_da_reserve_space(struct inode *inode, int nrblocks)
{
int retries = 0;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
- struct ext4_inode_info *ei = EXT4_I(inode);
- unsigned long md_needed, md_reserved;
+ unsigned long md_needed, mdblocks, total = 0;
/*
* recalculate the amount of metadata blocks to reserve
* worst case is one extent per block
*/
repeat:
- spin_lock(&ei->i_block_reservation_lock);
- md_reserved = ei->i_reserved_meta_blocks;
- md_needed = ext4_calc_metadata_amount(inode, lblock);
- spin_unlock(&ei->i_block_reservation_lock);
+ spin_lock(&EXT4_I(inode)->i_block_reservation_lock);
+ total = EXT4_I(inode)->i_reserved_data_blocks + nrblocks;
+ mdblocks = ext4_calc_metadata_amount(inode, total);
+ BUG_ON(mdblocks < EXT4_I(inode)->i_reserved_meta_blocks);
+
+ md_needed = mdblocks - EXT4_I(inode)->i_reserved_meta_blocks;
+ total = md_needed + nrblocks;
+ spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
/*
* Make quota reservation here to prevent quota overflow
* later. Real quota accounting is done at pages writeout
* time.
*/
- if (vfs_dq_reserve_block(inode, md_needed + 1))
+ if (vfs_dq_reserve_block(inode, total))
return -EDQUOT;
- if (ext4_claim_free_blocks(sbi, md_needed + 1)) {
- vfs_dq_release_reservation_block(inode, md_needed + 1);
+ if (ext4_claim_free_blocks(sbi, total)) {
+ vfs_dq_release_reservation_block(inode, total);
if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
yield();
goto repeat;
}
return -ENOSPC;
}
- spin_lock(&ei->i_block_reservation_lock);
- ei->i_reserved_data_blocks++;
- ei->i_reserved_meta_blocks += md_needed;
- spin_unlock(&ei->i_block_reservation_lock);
+ spin_lock(&EXT4_I(inode)->i_block_reservation_lock);
+ EXT4_I(inode)->i_reserved_data_blocks += nrblocks;
+ EXT4_I(inode)->i_reserved_meta_blocks += md_needed;
+ spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
return 0; /* success */
}
static void ext4_da_release_space(struct inode *inode, int to_free)
{
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
- struct ext4_inode_info *ei = EXT4_I(inode);
+ int total, mdb, mdb_free, release;
if (!to_free)
return; /* Nothing to release, exit */
spin_lock(&EXT4_I(inode)->i_block_reservation_lock);
- if (unlikely(to_free > ei->i_reserved_data_blocks)) {
+ if (!EXT4_I(inode)->i_reserved_data_blocks) {
/*
- * if there aren't enough reserved blocks, then the
- * counter is messed up somewhere. Since this
- * function is called from invalidate page, it's
- * harmless to return without any action.
+ * if there are no reserved blocks, but we try to free some,
+ * then the counter is messed up somewhere.
+ * But since this function is called from invalidate
+ * page, it's harmless to return without any action.
*/
- ext4_msg(inode->i_sb, KERN_NOTICE, "ext4_da_release_space: "
- "ino %lu, to_free %d with only %d reserved "
- "data blocks\n", inode->i_ino, to_free,
- ei->i_reserved_data_blocks);
- WARN_ON(1);
- to_free = ei->i_reserved_data_blocks;
+ printk(KERN_INFO "ext4 delalloc try to release %d reserved "
+ "blocks for inode %lu, but there is no reserved "
+ "data blocks\n", to_free, inode->i_ino);
+ spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
+ return;
}
- ei->i_reserved_data_blocks -= to_free;
- if (ei->i_reserved_data_blocks == 0) {
- /*
- * We can release all of the reserved metadata blocks
- * only when we have written all of the delayed
- * allocation blocks.
- */
- to_free += ei->i_reserved_meta_blocks;
- ei->i_reserved_meta_blocks = 0;
- ei->i_da_metadata_calc_len = 0;
- }
+ /* recalculate the number of metablocks that still need to be reserved */
+ total = EXT4_I(inode)->i_reserved_data_blocks - to_free;
+ mdb = ext4_calc_metadata_amount(inode, total);
+
+ /* figure out how many metablocks to release */
+ BUG_ON(mdb > EXT4_I(inode)->i_reserved_meta_blocks);
+ mdb_free = EXT4_I(inode)->i_reserved_meta_blocks - mdb;
+
+ release = to_free + mdb_free;
- /* update fs dirty blocks counter */
- percpu_counter_sub(&sbi->s_dirtyblocks_counter, to_free);
+ /* update fs dirty blocks counter for truncate case */
+ percpu_counter_sub(&sbi->s_dirtyblocks_counter, release);
+ /* update per-inode reservations */
+ BUG_ON(to_free > EXT4_I(inode)->i_reserved_data_blocks);
+ EXT4_I(inode)->i_reserved_data_blocks -= to_free;
+
+ BUG_ON(mdb > EXT4_I(inode)->i_reserved_meta_blocks);
+ EXT4_I(inode)->i_reserved_meta_blocks = mdb;
spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
- vfs_dq_release_reservation_block(inode, to_free);
+ vfs_dq_release_reservation_block(inode, release);
}
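
The release path restored above recomputes the metadata estimate for whatever remains reserved and returns the surplus together with the freed data blocks. A rough worked example; calc_meta() is a made-up stand-in for ext4_calc_metadata_amount() and the block counts are arbitrary:

#include <stdio.h>

/* Made-up stand-in for ext4_calc_metadata_amount(); the real figure
 * depends on the file's mapping format. */
static int calc_meta(int data_blocks)
{
	return data_blocks ? (data_blocks + 1023) / 1024 + 2 : 0;
}

int main(void)
{
	int reserved_data = 2000;
	int reserved_meta = calc_meta(reserved_data);	/* 4 */
	int to_free = 1500;

	/* same shape as the restored ext4_da_release_space() */
	int total    = reserved_data - to_free;		/* 500 */
	int mdb      = calc_meta(total);		/* 3 */
	int mdb_free = reserved_meta - mdb;		/* 1 */
	int release  = to_free + mdb_free;		/* 1501 */

	printf("release %d blocks (%d data + %d metadata)\n",
	       release, to_free, mdb_free);
	return 0;
}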
static void ext4_da_page_release_reservation(struct page *page,
* variables are updated after the blocks have been allocated.
*/
new.b_state = 0;
- get_blocks_flags = EXT4_GET_BLOCKS_CREATE;
+ get_blocks_flags = (EXT4_GET_BLOCKS_CREATE |
+ EXT4_GET_BLOCKS_DELALLOC_RESERVE);
if (mpd->b_state & (1 << BH_Delay))
- get_blocks_flags |= EXT4_GET_BLOCKS_DELALLOC_RESERVE;
-
+ get_blocks_flags |= EXT4_GET_BLOCKS_UPDATE_RESERVE_SPACE;
blks = ext4_get_blocks(handle, mpd->inode, next, max_blocks,
&new, get_blocks_flags);
if (blks < 0) {
ext4_msg(mpd->inode->i_sb, KERN_CRIT,
"delayed block allocation failed for inode %lu at "
"logical offset %llu with max blocks %zd with "
- "error %d", mpd->inode->i_ino,
+ "error %d\n", mpd->inode->i_ino,
(unsigned long long) next,
mpd->b_size >> mpd->inode->i_blkbits, err);
printk(KERN_CRIT "This should not happen!! "
sector_t next;
int nrblocks = mpd->b_size >> mpd->inode->i_blkbits;
- /*
- * XXX Don't go larger than mballoc is willing to allocate
- * This is a stopgap solution. We eventually need to fold
- * mpage_da_submit_io() into this function and then call
- * ext4_get_blocks() multiple times in a loop
- */
- if (nrblocks >= 8*1024*1024/mpd->inode->i_sb->s_blocksize)
- goto flush_it;
-
/* check if the reserved journal credits might overflow */
- if (!(ext4_test_inode_flag(mpd->inode, EXT4_INODE_EXTENTS))) {
+ if (!(EXT4_I(mpd->inode)->i_flags & EXT4_EXTENTS_FL)) {
if (nrblocks >= EXT4_MAX_TRANS_DATA) {
/*
* With non-extent format we are limited by the journal
* XXX: __block_prepare_write() unmaps passed block,
* is it OK?
*/
- ret = ext4_da_reserve_space(inode, iblock);
+ ret = ext4_da_reserve_space(inode, 1);
if (ret)
/* not enough space to reserve */
return ret;
ret = err;
walk_page_buffers(handle, page_bufs, 0, len, NULL, bput_one);
- ext4_set_inode_state(inode, EXT4_STATE_JDATA);
+ EXT4_I(inode)->i_state |= EXT4_STATE_JDATA;
out:
return ret;
}
* number of contiguous block. So we will limit
* number of contiguous block to a sane value
*/
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) &&
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) &&
(max_blocks > EXT4_MAX_TRANS_DATA))
max_blocks = EXT4_MAX_TRANS_DATA;
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: "
- "%ld pages, ino %lu; err %d", __func__,
+ "%ld pages, ino %lu; err %d\n", __func__,
wbc->nr_to_write, inode->i_ino, ret);
goto out_writepages;
}
if (pages_skipped != wbc->pages_skipped)
ext4_msg(inode->i_sb, KERN_CRIT,
"This should not happen leaving %s "
- "with nr_to_write = %ld ret = %d",
+ "with nr_to_write = %ld ret = %d\n",
__func__, wbc->nr_to_write, ret);
/* Update index */
out_writepages:
if (!no_nrwrite_index_update)
wbc->no_nrwrite_index_update = 0;
- wbc->nr_to_write -= nr_to_writebump;
+ if (wbc->nr_to_write > nr_to_writebump)
+ wbc->nr_to_write -= nr_to_writebump;
wbc->range_start = range_start;
trace_ext4_da_writepages_result(inode, wbc, ret, pages_written);
return ret;
if (2 * free_blocks < 3 * dirty_blocks ||
free_blocks < (dirty_blocks + EXT4_FREEBLOCKS_WATERMARK)) {
/*
- * free block count is less than 150% of dirty blocks
- * or free blocks is less than watermark
+ * free block count is less that 150% of dirty blocks
+ * or free blocks is less that watermark
*/
return 1;
}
- /*
- * Even if we don't switch but are nearing capacity,
- * start pushing delalloc when 1/2 of free blocks are dirty.
- */
- if (free_blocks < 2 * dirty_blocks)
- writeback_inodes_sb_if_idle(sb);
-
return 0;
}
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata)
{
- int ret, retries = 0, quota_retries = 0;
+ int ret, retries = 0;
struct page *page;
pgoff_t index;
unsigned from, to;
if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry;
-
- if ((ret == -EDQUOT) &&
- EXT4_I(inode)->i_reserved_meta_blocks &&
- (quota_retries++ < 3)) {
- /*
- * Since we often over-estimate the number of meta
- * data blocks required, we may sometimes get a
- * spurios out of quota error even though there would
- * be enough space once we write the data blocks and
- * find out how many meta data blocks were _really_
- * required. So try forcing the inode write to see if
- * that helps.
- */
- write_inode_now(inode, (quota_retries == 3));
- goto retry;
- }
out:
return ret;
}
filemap_write_and_wait(mapping);
}
- if (EXT4_JOURNAL(inode) &&
- ext4_test_inode_state(inode, EXT4_STATE_JDATA)) {
+ if (EXT4_JOURNAL(inode) && EXT4_I(inode)->i_state & EXT4_STATE_JDATA) {
/*
* This is a REALLY heavyweight approach, but the use of
* bmap on dirty files is expected to be extremely rare:
* everything they get.
*/
- ext4_clear_inode_state(inode, EXT4_STATE_JDATA);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_JDATA;
journal = EXT4_JOURNAL(inode);
jbd2_journal_lock_updates(journal);
err = jbd2_journal_flush(journal);
* but cannot extend i_size. Bail out and pretend
* the write failed... */
ret = PTR_ERR(handle);
- if (inode->i_nlink)
- ext4_orphan_del(NULL, inode);
-
goto out;
}
if (inode->i_nlink)
{
struct inode *inode = io->inode;
loff_t offset = io->offset;
- ssize_t size = io->size;
+ size_t size = io->size;
int ret = 0;
ext4_debug("end_aio_dio_onlock: io 0x%p from inode %lu,list->next 0x%p,"
if (ret != -EIOCBQUEUED && ret <= 0 && iocb->private) {
ext4_free_io_end(iocb->private);
iocb->private = NULL;
- } else if (ret > 0 && ext4_test_inode_state(inode,
- EXT4_STATE_DIO_UNWRITTEN)) {
+ } else if (ret > 0 && (EXT4_I(inode)->i_state &
+ EXT4_STATE_DIO_UNWRITTEN)) {
int err;
/*
* for non AIO case, since the IO is already
offset, ret);
if (err < 0)
ret = err;
- ext4_clear_inode_state(inode, EXT4_STATE_DIO_UNWRITTEN);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_DIO_UNWRITTEN;
}
return ret;
}
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)
return ext4_ext_direct_IO(rw, iocb, iov, offset, nr_segs);
return ext4_ind_direct_IO(rw, iocb, iov, offset, nr_segs);
if (!ext4_can_truncate(inode))
return;
- ext4_clear_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
-
if (inode->i_size == 0 && !test_opt(inode->i_sb, NO_AUTO_DA_ALLOC))
- ext4_set_inode_state(inode, EXT4_STATE_DA_ALLOC_CLOSE);
+ ei->i_state |= EXT4_STATE_DA_ALLOC_CLOSE;
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+ if (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL) {
ext4_ext_truncate(inode);
return;
}
{
/* We have all inode data except xattrs in memory here. */
return __ext4_get_inode_loc(inode, iloc,
- !ext4_test_inode_state(inode, EXT4_STATE_XATTR));
+ !(EXT4_I(inode)->i_state & EXT4_STATE_XATTR));
}
void ext4_set_inode_flags(struct inode *inode)
/* Propagate flags from i_flags to EXT4_I(inode)->i_flags */
void ext4_get_inode_flags(struct ext4_inode_info *ei)
{
- unsigned int vfs_fl;
- unsigned long old_fl, new_fl;
-
- do {
- vfs_fl = ei->vfs_inode.i_flags;
- old_fl = ei->i_flags;
- new_fl = old_fl & ~(EXT4_SYNC_FL|EXT4_APPEND_FL|
- EXT4_IMMUTABLE_FL|EXT4_NOATIME_FL|
- EXT4_DIRSYNC_FL);
- if (vfs_fl & S_SYNC)
- new_fl |= EXT4_SYNC_FL;
- if (vfs_fl & S_APPEND)
- new_fl |= EXT4_APPEND_FL;
- if (vfs_fl & S_IMMUTABLE)
- new_fl |= EXT4_IMMUTABLE_FL;
- if (vfs_fl & S_NOATIME)
- new_fl |= EXT4_NOATIME_FL;
- if (vfs_fl & S_DIRSYNC)
- new_fl |= EXT4_DIRSYNC_FL;
- } while (cmpxchg(&ei->i_flags, old_fl, new_fl) != old_fl);
+ unsigned int flags = ei->vfs_inode.i_flags;
+
+ ei->i_flags &= ~(EXT4_SYNC_FL|EXT4_APPEND_FL|
+ EXT4_IMMUTABLE_FL|EXT4_NOATIME_FL|EXT4_DIRSYNC_FL);
+ if (flags & S_SYNC)
+ ei->i_flags |= EXT4_SYNC_FL;
+ if (flags & S_APPEND)
+ ei->i_flags |= EXT4_APPEND_FL;
+ if (flags & S_IMMUTABLE)
+ ei->i_flags |= EXT4_IMMUTABLE_FL;
+ if (flags & S_NOATIME)
+ ei->i_flags |= EXT4_NOATIME_FL;
+ if (flags & S_DIRSYNC)
+ ei->i_flags |= EXT4_DIRSYNC_FL;
}
static blkcnt_t ext4_inode_blocks(struct ext4_inode *raw_inode,
}
inode->i_nlink = le16_to_cpu(raw_inode->i_links_count);
- ei->i_state_flags = 0;
+ ei->i_state = 0;
ei->i_dir_start_lookup = 0;
ei->i_dtime = le32_to_cpu(raw_inode->i_dtime);
/* We now have enough fields to check if the inode was active or not.
EXT4_GOOD_OLD_INODE_SIZE +
ei->i_extra_isize;
if (*magic == cpu_to_le32(EXT4_XATTR_MAGIC))
- ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+ ei->i_state |= EXT4_STATE_XATTR;
}
} else
ei->i_extra_isize = 0;
*/
raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
raw_inode->i_blocks_high = 0;
- ext4_clear_inode_flag(inode, EXT4_INODE_HUGE_FILE);
+ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
return 0;
}
if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_HUGE_FILE))
*/
raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
raw_inode->i_blocks_high = cpu_to_le16(i_blocks >> 32);
- ext4_clear_inode_flag(inode, EXT4_INODE_HUGE_FILE);
+ ei->i_flags &= ~EXT4_HUGE_FILE_FL;
} else {
- ext4_set_inode_flag(inode, EXT4_INODE_HUGE_FILE);
+ ei->i_flags |= EXT4_HUGE_FILE_FL;
/* i_block is stored in file system block size */
i_blocks = i_blocks >> (inode->i_blkbits - 9);
raw_inode->i_blocks_lo = cpu_to_le32(i_blocks);
/* For fields not tracked in the in-memory inode,
* initialise them to zero for new inodes. */
- if (ext4_test_inode_state(inode, EXT4_STATE_NEW))
+ if (ei->i_state & EXT4_STATE_NEW)
memset(raw_inode, 0, EXT4_SB(inode->i_sb)->s_inode_size);
ext4_get_inode_flags(ei);
EXT4_FEATURE_RO_COMPAT_LARGE_FILE);
sb->s_dirt = 1;
ext4_handle_sync(handle);
- err = ext4_handle_dirty_metadata(handle, NULL,
+ err = ext4_handle_dirty_metadata(handle, inode,
EXT4_SB(sb)->s_sbh);
}
}
}
BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
- rc = ext4_handle_dirty_metadata(handle, NULL, bh);
+ rc = ext4_handle_dirty_metadata(handle, inode, bh);
if (!err)
err = rc;
- ext4_clear_inode_state(inode, EXT4_STATE_NEW);
+ ei->i_state &= ~EXT4_STATE_NEW;
ext4_update_inode_fsync_trans(handle, inode, 0);
out_brelse:
} else {
struct ext4_iloc iloc;
- err = __ext4_get_inode_loc(inode, &iloc, 0);
+ err = ext4_get_inode_loc(inode, &iloc);
if (err)
return err;
if (wait)
(unsigned long long)iloc.bh->b_blocknr);
err = -EIO;
}
- brelse(iloc.bh);
}
return err;
}
}
if (attr->ia_valid & ATTR_SIZE) {
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL)) {
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
if (attr->ia_size > sbi->s_bitmap_maxbytes) {
}
if (S_ISREG(inode->i_mode) &&
- attr->ia_valid & ATTR_SIZE &&
- (attr->ia_size < inode->i_size ||
- (ext4_test_inode_flag(inode, EXT4_INODE_EOFBLOCKS)))) {
+ attr->ia_valid & ATTR_SIZE && attr->ia_size < inode->i_size) {
handle_t *handle;
handle = ext4_journal_start(inode, 3);
goto err_out;
}
}
- /* ext4_truncate will clear the flag */
- if ((ext4_test_inode_flag(inode, EXT4_INODE_EOFBLOCKS)))
- ext4_truncate(inode);
}
rc = inode_setattr(inode, attr);
static int ext4_index_trans_blocks(struct inode *inode, int nrblocks, int chunk)
{
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
return ext4_indirect_trans_blocks(inode, nrblocks, chunk);
return ext4_ext_index_trans_blocks(inode, nrblocks, chunk);
}
entry = IFIRST(header);
/* No extended attributes present */
- if (!ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
- header->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ if (!(EXT4_I(inode)->i_state & EXT4_STATE_XATTR) ||
+ header->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC)) {
memset((void *)raw_inode + EXT4_GOOD_OLD_INODE_SIZE, 0,
new_extra_isize);
EXT4_I(inode)->i_extra_isize = new_extra_isize;
err = ext4_reserve_inode_write(handle, inode, &iloc);
if (ext4_handle_valid(handle) &&
EXT4_I(inode)->i_extra_isize < sbi->s_want_extra_isize &&
- !ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND)) {
+ !(EXT4_I(inode)->i_state & EXT4_STATE_NO_EXPAND)) {
/*
* We need extra buffer credits since we may write into EA block
* with this same handle. If journal_extend fails, then it will
sbi->s_want_extra_isize,
iloc, handle);
if (ret) {
- ext4_set_inode_state(inode,
- EXT4_STATE_NO_EXPAND);
+ EXT4_I(inode)->i_state |= EXT4_STATE_NO_EXPAND;
if (mnt_count !=
le16_to_cpu(sbi->s_es->s_mnt_count)) {
ext4_warning(inode->i_sb, __func__,
err = jbd2_journal_get_write_access(handle, iloc.bh);
if (!err)
err = ext4_handle_dirty_metadata(handle,
- NULL,
+ inode,
iloc.bh);
brelse(iloc.bh);
}
*/
if (val)
- ext4_set_inode_flag(inode, EXT4_INODE_JOURNAL_DATA);
+ EXT4_I(inode)->i_flags |= EXT4_JOURNAL_DATA_FL;
else
- ext4_clear_inode_flag(inode, EXT4_INODE_JOURNAL_DATA);
+ EXT4_I(inode)->i_flags &= ~EXT4_JOURNAL_DATA_FL;
ext4_set_aops(inode);
jbd2_journal_unlock_updates(journal);
flags &= ~EXT4_EXTENTS_FL;
}
- if (flags & EXT4_EOFBLOCKS_FL) {
- /* we don't support adding EOFBLOCKS flag */
- if (!(oldflags & EXT4_EOFBLOCKS_FL)) {
- err = -EOPNOTSUPP;
- goto flags_out;
- }
- } else if (oldflags & EXT4_EOFBLOCKS_FL)
- ext4_truncate(inode);
-
handle = ext4_journal_start(inode, 1);
if (IS_ERR(handle)) {
err = PTR_ERR(handle);
if (me.moved_len > 0)
file_remove_suid(donor_filp);
- if (copy_to_user((struct move_extent __user *)arg,
- &me, sizeof(me)))
+ if (copy_to_user((struct move_extent *)arg, &me, sizeof(me)))
err = -EFAULT;
mext_out:
fput(donor_filp);
case EXT4_IOC32_SETRSVSZ:
cmd = EXT4_IOC_SETRSVSZ;
break;
- case EXT4_IOC32_GROUP_ADD: {
- struct compat_ext4_new_group_input __user *uinput;
- struct ext4_new_group_input input;
- mm_segment_t old_fs;
- int err;
-
- uinput = compat_ptr(arg);
- err = get_user(input.group, &uinput->group);
- err |= get_user(input.block_bitmap, &uinput->block_bitmap);
- err |= get_user(input.inode_bitmap, &uinput->inode_bitmap);
- err |= get_user(input.inode_table, &uinput->inode_table);
- err |= get_user(input.blocks_count, &uinput->blocks_count);
- err |= get_user(input.reserved_blocks,
- &uinput->reserved_blocks);
- if (err)
- return -EFAULT;
- old_fs = get_fs();
- set_fs(KERNEL_DS);
- err = ext4_ioctl(file, EXT4_IOC_GROUP_ADD,
- (unsigned long) &input);
- set_fs(old_fs);
- return err;
- }
- case EXT4_IOC_MOVE_EXT:
+ case EXT4_IOC_GROUP_ADD:
break;
default:
return -ENOIOCTLCMD;
}
}
-/*
- * Cache the order of the largest free extent we have available in this block
- * group.
- */
-static void
-mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
-{
- int i;
- int bits;
-
- grp->bb_largest_free_order = -1; /* uninit */
-
- bits = sb->s_blocksize_bits + 1;
- for (i = bits; i >= 0; i--) {
- if (grp->bb_counters[i] > 0) {
- grp->bb_largest_free_order = i;
- break;
- }
- }
-}
-
static noinline_for_stack
void ext4_mb_generate_buddy(struct super_block *sb,
void *buddy, void *bitmap, ext4_group_t group)
*/
grp->bb_free = free;
}
- mb_set_largest_free_order(sb, grp);
clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
* contain blocks_per_page (PAGE_CACHE_SIZE / blocksize) blocks.
* So it can have information regarding groups_per_page which
* is blocks_per_page/2
- *
- * Locking note: This routine takes the block group lock of all groups
- * for this page; do not hold this lock when calling this routine!
*/
static int ext4_mb_init_cache(struct page *page, char *incore)
return err;
}
-/*
- * Locking note: This routine calls ext4_mb_init_cache(), which takes the
- * block group lock of all groups for this page; do not hold the BG lock when
- * calling this routine!
- */
static noinline_for_stack
int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
{
return ret;
}
-/*
- * Locking note: This routine calls ext4_mb_init_cache(), which takes the
- * block group lock of all groups for this page; do not hold the BG lock when
- * calling this routine!
- */
static noinline_for_stack int
ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
struct ext4_buddy *e4b)
return ret;
}
-static void ext4_mb_unload_buddy(struct ext4_buddy *e4b)
+static void ext4_mb_release_desc(struct ext4_buddy *e4b)
{
if (e4b->bd_bitmap_page)
page_cache_release(e4b->bd_bitmap_page);
buddy = buddy2;
} while (1);
}
- mb_set_largest_free_order(sb, e4b->bd_info);
mb_check_buddy(e4b);
}
e4b->bd_info->bb_counters[ord]++;
e4b->bd_info->bb_counters[ord]++;
}
- mb_set_largest_free_order(e4b->bd_sb, e4b->bd_info);
mb_set_bits(EXT4_MB_BITMAP(e4b), ex->fe_start, len0);
mb_check_buddy(e4b);
}
ext4_unlock_group(ac->ac_sb, group);
- ext4_mb_unload_buddy(e4b);
+ ext4_mb_release_desc(e4b);
return 0;
}
ext4_mb_use_best_found(ac, e4b);
}
ext4_unlock_group(ac->ac_sb, group);
- ext4_mb_unload_buddy(e4b);
+ ext4_mb_release_desc(e4b);
return 0;
}
}
}
-/* This is now called BEFORE we load the buddy bitmap. */
static int ext4_mb_good_group(struct ext4_allocation_context *ac,
ext4_group_t group, int cr)
{
unsigned free, fragments;
+ unsigned i, bits;
int flex_size = ext4_flex_bg_size(EXT4_SB(ac->ac_sb));
struct ext4_group_info *grp = ext4_get_group_info(ac->ac_sb, group);
BUG_ON(cr < 0 || cr >= 4);
-
- /* We only do this if the grp has never been initialized */
- if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
- int ret = ext4_mb_init_group(ac->ac_sb, group);
- if (ret)
- return 0;
- }
+ BUG_ON(EXT4_MB_GRP_NEED_INIT(grp));
free = grp->bb_free;
fragments = grp->bb_fragments;
case 0:
BUG_ON(ac->ac_2order == 0);
- if (grp->bb_largest_free_order < ac->ac_2order)
- return 0;
-
/* Avoid using the first bg of a flexgroup for data files */
if ((ac->ac_flags & EXT4_MB_HINT_DATA) &&
(flex_size >= EXT4_FLEX_SIZE_DIR_ALLOC_SCHEME) &&
((group % flex_size) == 0))
return 0;
- return 1;
+ bits = ac->ac_sb->s_blocksize_bits + 1;
+ for (i = ac->ac_2order; i <= bits; i++)
+ if (grp->bb_counters[i] > 0)
+ return 1;
+ break;
case 1:
if ((free / fragments) >= ac->ac_g_ex.fe_len)
return 1;
sbi = EXT4_SB(sb);
ngroups = ext4_get_groups_count(sb);
/* non-extent files are limited to low blocks/groups */
- if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(ac->ac_inode)->i_flags & EXT4_EXTENTS_FL))
ngroups = sbi->s_blockfile_groups;
BUG_ON(ac->ac_status == AC_STATUS_FOUND);
group = ac->ac_g_ex.fe_group;
for (i = 0; i < ngroups; group++, i++) {
+ struct ext4_group_info *grp;
+ struct ext4_group_desc *desc;
+
if (group == ngroups)
group = 0;
- /* This now checks without needing the buddy page */
- if (!ext4_mb_good_group(ac, group, cr))
+ /* quick check to skip empty groups */
+ grp = ext4_get_group_info(sb, group);
+ if (grp->bb_free == 0)
continue;
err = ext4_mb_load_buddy(sb, group, &e4b);
goto out;
ext4_lock_group(sb, group);
-
- /*
- * We need to check again after locking the
- * block group
- */
if (!ext4_mb_good_group(ac, group, cr)) {
+ /* someone did allocation from this group */
ext4_unlock_group(sb, group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
continue;
}
ac->ac_groups_scanned++;
+ desc = ext4_get_group_desc(sb, group, NULL);
if (cr == 0)
ext4_mb_simple_scan_group(ac, &e4b);
else if (cr == 1 &&
ext4_mb_complex_scan_group(ac, &e4b);
ext4_unlock_group(sb, group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
if (ac->ac_status != AC_STATUS_CONTINUE)
break;
ext4_lock_group(sb, group);
memcpy(&sg, ext4_get_group_info(sb, group), i);
ext4_unlock_group(sb, group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
seq_printf(seq, "#%-5u: %-5u %-5u %-5u [", group, sg.info.bb_free,
sg.info.bb_fragments, sg.info.bb_first_free);
INIT_LIST_HEAD(&meta_group_info[i]->bb_prealloc_list);
init_rwsem(&meta_group_info[i]->alloc_sem);
meta_group_info[i]->bb_free_root.rb_node = NULL;
- meta_group_info[i]->bb_largest_free_order = -1; /* uninit */
#ifdef DOUBLE_CHECK
{
mb_debug(1, "gonna free %u blocks in group %u (0x%p):",
entry->count, entry->group, entry);
- if (test_opt(sb, DISCARD)) {
- int ret;
- ext4_fsblk_t discard_block;
-
- discard_block = entry->start_blk +
- ext4_group_first_block_no(sb, entry->group);
- trace_ext4_discard_blocks(sb,
- (unsigned long long)discard_block,
- entry->count);
- ret = sb_issue_discard(sb, discard_block, entry->count);
- if (ret == EOPNOTSUPP) {
- ext4_warning(sb, __func__,
- "discard not supported, disabling");
- clear_opt(EXT4_SB(sb)->s_mount_opt, DISCARD);
- }
- }
-
err = ext4_mb_load_buddy(sb, entry->group, &e4b);
/* we expect to find existing buddy because it's pinned */
BUG_ON(err != 0);
page_cache_release(e4b.bd_bitmap_page);
}
ext4_unlock_group(sb, entry->group);
+ if (test_opt(sb, DISCARD)) {
+ ext4_fsblk_t discard_block;
+ struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+
+ discard_block = (ext4_fsblk_t)entry->group *
+ EXT4_BLOCKS_PER_GROUP(sb)
+ + entry->start_blk
+ + le32_to_cpu(es->s_first_data_block);
+ trace_ext4_discard_blocks(sb,
+ (unsigned long long)discard_block,
+ entry->count);
+ sb_issue_discard(sb, discard_block, entry->count);
+ }
kmem_cache_free(ext4_free_ext_cachep, entry);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
}
mb_debug(1, "freed %u blocks in %u structures\n", count, count2);
if (!(ac->ac_flags & EXT4_MB_DELALLOC_RESERVED))
/* release all the reserved blocks if non delalloc */
percpu_counter_sub(&sbi->s_dirtyblocks_counter, reserv_blks);
+ else {
+ percpu_counter_sub(&sbi->s_dirtyblocks_counter,
+ ac->ac_b_ex.fe_len);
+ /* convert reserved quota blocks to real quota blocks */
+ vfs_dq_claim_block(ac->ac_inode, ac->ac_b_ex.fe_len);
+ }
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi,
continue;
/* non-extent files can't have physical blocks past 2^32 */
- if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) &&
+ if (!(EXT4_I(ac->ac_inode)->i_flags & EXT4_EXTENTS_FL) &&
pa->pa_pstart + pa->pa_len > EXT4_MAX_BLOCK_FILE_PHYS)
continue;
ext4_unlock_group(sb, group);
if (ac)
kmem_cache_free(ext4_ac_cachep, ac);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
put_bh(bitmap_bh);
return free;
}
if (bitmap_bh == NULL) {
ext4_error(sb, __func__, "Error in reading block "
"bitmap for %u", group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
continue;
}
ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa, ac);
ext4_unlock_group(sb, group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
put_bh(bitmap_bh);
list_del(&pa->u.pa_tmp_list);
/* don't use group allocation for large files */
size = max(size, isize);
- if (size > sbi->s_mb_stream_request) {
+ if (size >= sbi->s_mb_stream_request) {
ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
return;
}
ext4_mb_release_group_pa(&e4b, pa, ac);
ext4_unlock_group(sb, group);
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
list_del(&pa->u.pa_tmp_list);
call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
}
atomic_add(count, &sbi->s_flex_groups[flex_group].free_blocks);
}
- ext4_mb_unload_buddy(&e4b);
+ ext4_mb_release_desc(&e4b);
*freed += count;
#define EXT4_MB_BITMAP(e4b) ((e4b)->bd_bitmap)
#define EXT4_MB_BUDDY(e4b) ((e4b)->bd_buddy)
+#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
+
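
The closed-interval macro re-added above is equivalent, for any non-zero len, to the open-ended comparison that the extents hunk earlier switches to. A tiny standalone check (illustrative only, not part of the patch):

#include <assert.h>

#define in_range(b, first, len)	((b) >= (first) && (b) <= (first) + (len) - 1)

int main(void)
{
	unsigned int first = 100, len = 8, b;

	for (b = 90; b < 120; b++)
		/* both spellings agree for any non-zero len */
		assert(in_range(b, first, len) ==
		       (b >= first && b < first + len));
	return 0;
}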
static inline ext4_fsblk_t ext4_grp_offs_to_block(struct super_block *sb,
struct ext4_free_extent *fex)
{
* happened after we started the migrate. We need to
* fail the migrate
*/
- if (!ext4_test_inode_state(inode, EXT4_STATE_EXT_MIGRATE)) {
+ if (!(EXT4_I(inode)->i_state & EXT4_STATE_EXT_MIGRATE)) {
retval = -EAGAIN;
up_write(&EXT4_I(inode)->i_data_sem);
goto err_out;
} else
- ext4_clear_inode_state(inode, EXT4_STATE_EXT_MIGRATE);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_EXT_MIGRATE;
/*
* We have the extent map build with the tmp inode.
* Now copy the i_data across
*/
if (!EXT4_HAS_INCOMPAT_FEATURE(inode->i_sb,
EXT4_FEATURE_INCOMPAT_EXTENTS) ||
- (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ (EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
return -EINVAL;
if (S_ISLNK(inode->i_mode) && inode->i_blocks == 0)
}
i_size_write(tmp_inode, i_size_read(inode));
/*
- * Set the i_nlink to zero so it will be deleted later
- * when we drop inode reference.
+ * We don't want the inode to be reclaimed
+ * if we got interrupted in between. We have
+ * this tmp inode carrying a reference to the
+ * data blocks of the original file. We set
+ * the i_nlink to zero at the last stage after
+ * switching the original file to extent format.
*/
- tmp_inode->i_nlink = 0;
+ tmp_inode->i_nlink = 1;
ext4_ext_tree_init(handle, tmp_inode);
ext4_orphan_add(handle, tmp_inode);
* allocation.
*/
down_read((&EXT4_I(inode)->i_data_sem));
- ext4_set_inode_state(inode, EXT4_STATE_EXT_MIGRATE);
+ EXT4_I(inode)->i_state |= EXT4_STATE_EXT_MIGRATE;
up_read((&EXT4_I(inode)->i_data_sem));
handle = ext4_journal_start(inode, 1);
- if (IS_ERR(handle)) {
- /*
- * It is impossible to update on-disk structures without
- * a handle, so just rollback in-core changes and live other
- * work to orphan_list_cleanup()
- */
- ext4_orphan_del(NULL, tmp_inode);
- retval = PTR_ERR(handle);
- goto out;
- }
ei = EXT4_I(inode);
i_data = ei->i_data;
/* Reset the extent details */
ext4_ext_tree_init(handle, tmp_inode);
+
+ /*
+ * Set the i_nlink to zero so that
+ * generic_drop_inode really deletes the
+ * inode
+ */
+ tmp_inode->i_nlink = 0;
+
ext4_journal_stop(handle);
-out:
unlock_new_inode(tmp_inode);
iput(tmp_inode);
}
o_start->ee_len = start_ext->ee_len;
- eblock = le32_to_cpu(start_ext->ee_block);
new_flag = 1;
} else if (start_ext->ee_len && new_ext->ee_len &&
* orig |------------------------------|
*/
o_start->ee_len = start_ext->ee_len;
- eblock = le32_to_cpu(start_ext->ee_block);
new_flag = 1;
} else if (!start_ext->ee_len && new_ext->ee_len &&
struct ext4_extent *oext, *o_start, *o_end, *prev_ext;
struct ext4_extent new_ext, start_ext, end_ext;
ext4_lblk_t new_ext_end;
+ ext4_fsblk_t new_phys_end;
int oext_alen, new_ext_alen, end_ext_alen;
int depth = ext_depth(orig_inode);
int ret;
new_ext.ee_len = dext->ee_len;
new_ext_alen = ext4_ext_get_actual_len(&new_ext);
new_ext_end = le32_to_cpu(new_ext.ee_block) + new_ext_alen - 1;
+ new_phys_end = ext_pblock(&new_ext) + new_ext_alen - 1;
/*
* Case: original extent is first
le32_to_cpu(oext->ee_block) + oext_alen) {
start_ext.ee_len = cpu_to_le16(le32_to_cpu(new_ext.ee_block) -
le32_to_cpu(oext->ee_block));
- start_ext.ee_block = oext->ee_block;
copy_extent_status(oext, &start_ext);
} else if (oext > EXT_FIRST_EXTENT(orig_path[depth].p_hdr)) {
prev_ext = oext - 1;
start_ext.ee_len = cpu_to_le16(
ext4_ext_get_actual_len(prev_ext) +
new_ext_alen);
- start_ext.ee_block = oext->ee_block;
copy_extent_status(prev_ext, &start_ext);
new_ext.ee_len = 0;
}
}
/**
- * mext_check_arguments - Check whether move extent can be done
+ * mext_check_argumants - Check whether move extent can be done
*
* @orig_inode: original inode
* @donor_inode: donor inode
unsigned int blkbits = orig_inode->i_blkbits;
unsigned int blocksize = 1 << blkbits;
+ /* Regular file check */
+ if (!S_ISREG(orig_inode->i_mode) || !S_ISREG(donor_inode->i_mode)) {
+ ext4_debug("ext4 move extent: The argument files should be "
+ "regular file [ino:orig %lu, donor %lu]\n",
+ orig_inode->i_ino, donor_inode->i_ino);
+ return -EINVAL;
+ }
+
if (donor_inode->i_mode & (S_ISUID|S_ISGID)) {
ext4_debug("ext4 move extent: suid or sgid is set"
" to donor file [ino:orig %lu, donor %lu]\n",
return -EINVAL;
}
- if (IS_IMMUTABLE(donor_inode) || IS_APPEND(donor_inode))
- return -EPERM;
-
/* Ext4 move extent does not support swapfile */
if (IS_SWAPFILE(orig_inode) || IS_SWAPFILE(donor_inode)) {
ext4_debug("ext4 move extent: The argument files should "
}
/* Ext4 move extent supports only extent based file */
- if (!(ext4_test_inode_flag(orig_inode, EXT4_INODE_EXTENTS))) {
+ if (!(EXT4_I(orig_inode)->i_flags & EXT4_EXTENTS_FL)) {
ext4_debug("ext4 move extent: orig file is not extents "
"based file [ino:orig %lu]\n", orig_inode->i_ino);
return -EOPNOTSUPP;
- } else if (!(ext4_test_inode_flag(donor_inode, EXT4_INODE_EXTENTS))) {
+ } else if (!(EXT4_I(donor_inode)->i_flags & EXT4_EXTENTS_FL)) {
ext4_debug("ext4 move extent: donor file is not extents "
"based file [ino:donor %lu]\n", donor_inode->i_ino);
return -EOPNOTSUPP;
return -EINVAL;
}
- /* Regular file check */
- if (!S_ISREG(orig_inode->i_mode) || !S_ISREG(donor_inode->i_mode)) {
- ext4_debug("ext4 move extent: The argument files should be "
- "regular file [ino:orig %lu, donor %lu]\n",
- orig_inode->i_ino, donor_inode->i_ino);
- return -EINVAL;
- }
-
/* Protect orig and donor inodes against a truncate */
ret1 = mext_inode_double_lock(orig_inode, donor_inode);
if (ret1 < 0)
dxtrace(printk(KERN_DEBUG "In htree_fill_tree, start hash: %x:%x\n",
start_hash, start_minor_hash));
dir = dir_file->f_path.dentry->d_inode;
- if (!(ext4_test_inode_flag(dir, EXT4_INODE_INDEX))) {
+ if (!(EXT4_I(dir)->i_flags & EXT4_INDEX_FL)) {
hinfo.hash_version = EXT4_SB(dir->i_sb)->s_def_hash_version;
if (hinfo.hash_version <= DX_HASH_TEA)
hinfo.hash_version +=
{
if (!EXT4_HAS_COMPAT_FEATURE(inode->i_sb,
EXT4_FEATURE_COMPAT_DIR_INDEX))
- ext4_clear_inode_flag(inode, EXT4_INODE_INDEX);
+ EXT4_I(inode)->i_flags &= ~EXT4_INDEX_FL;
}
/*
brelse(bh);
return retval;
}
- ext4_set_inode_flag(dir, EXT4_INODE_INDEX);
+ EXT4_I(dir)->i_flags |= EXT4_INDEX_FL;
data1 = bh2->b_data;
memcpy (data1, de, len);
retval = ext4_dx_add_entry(handle, dentry, inode);
if (!retval || (retval != ERR_BAD_DX_DIR))
return retval;
- ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ EXT4_I(dir)->i_flags &= ~EXT4_INDEX_FL;
dx_fallback++;
ext4_mark_inode_dirty(handle, dir);
}
de->rec_len = ext4_rec_len_to_disk(blocksize, blocksize);
retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
brelse(bh);
- if (retval == 0)
- ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
return retval;
}
err = ext4_reserve_inode_write(handle, inode, &iloc);
if (err)
goto out_unlock;
- /*
- * Due to previous errors inode may be already a part of on-disk
- * orphan list. If so skip on-disk list modification.
- */
- if (NEXT_ORPHAN(inode) && NEXT_ORPHAN(inode) <=
- (le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count)))
- goto mem_insert;
/* Insert this inode at the head of the on-disk orphan list... */
NEXT_ORPHAN(inode) = le32_to_cpu(EXT4_SB(sb)->s_es->s_last_orphan);
EXT4_SB(sb)->s_es->s_last_orphan = cpu_to_le32(inode->i_ino);
- err = ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh);
+ err = ext4_handle_dirty_metadata(handle, inode, EXT4_SB(sb)->s_sbh);
rc = ext4_mark_iloc_dirty(handle, inode, &iloc);
if (!err)
err = rc;
*
* This is safe: on error we're going to ignore the orphan list
* anyway on the next recovery. */
-mem_insert:
if (!err)
list_add(&EXT4_I(inode)->i_orphan, &EXT4_SB(sb)->s_orphan);
if (err)
goto out_brelse;
sbi->s_es->s_last_orphan = cpu_to_le32(ino_next);
- err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh);
+ err = ext4_handle_dirty_metadata(handle, inode, sbi->s_sbh);
} else {
struct ext4_iloc iloc2;
struct inode *i_prev =
}
} else {
/* clear the extent format for fast symlink */
- ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS);
+ EXT4_I(inode)->i_flags &= ~EXT4_EXTENTS_FL;
inode->i_op = &ext4_fast_symlink_inode_operations;
memcpy((char *)&EXT4_I(inode)->i_data, symname, l);
inode->i_size = l-1;
percpu_counter_add(&sbi->s_freeinodes_counter,
EXT4_INODES_PER_GROUP(sb));
- if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG) &&
- sbi->s_log_groups_per_flex) {
+ if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) {
ext4_group_t flex_group;
flex_group = ext4_flex_group(sbi, input->group);
atomic_add(input->free_blocks_count,
if (sb->s_flags & MS_RDONLY)
return ERR_PTR(-EROFS);
- vfs_check_frozen(sb, SB_FREEZE_TRANS);
/* Special case here: if the journal has aborted behind our
* backs (eg. EIO in the commit thread), then we still need to
* take the FS itself readonly cleanly. */
ei->i_reserved_data_blocks = 0;
ei->i_reserved_meta_blocks = 0;
ei->i_allocated_meta_blocks = 0;
- ei->i_da_metadata_calc_len = 0;
ei->i_delalloc_reserved_flag = 0;
spin_lock_init(&(ei->i_block_reservation_lock));
#ifdef CONFIG_QUOTA
seq_puts(seq, test_opt(sb, BARRIER) ? "1" : "0");
if (test_opt(sb, JOURNAL_ASYNC_COMMIT))
seq_puts(seq, ",journal_async_commit");
- else if (test_opt(sb, JOURNAL_CHECKSUM))
- seq_puts(seq, ",journal_checksum");
if (test_opt(sb, NOBH))
seq_puts(seq, ",nobh");
if (test_opt(sb, I_VERSION))
if (!*p)
continue;
- /*
- * Initialize args struct so we know whether arg was
- * found; some options take optional arguments.
- */
- args[0].to = args[0].from = 0;
token = match_token(p, tokens, args);
switch (token) {
case Opt_bsd_df:
clear_opt(sbi->s_mount_opt, BARRIER);
break;
case Opt_barrier:
- if (args[0].from) {
- if (match_int(&args[0], &option))
- return 0;
- } else
- option = 1; /* No argument, default to 1 */
+ if (match_int(&args[0], &option)) {
+ set_opt(sbi->s_mount_opt, BARRIER);
+ break;
+ }
if (option)
set_opt(sbi->s_mount_opt, BARRIER);
else
set_opt(sbi->s_mount_opt,NO_AUTO_DA_ALLOC);
break;
case Opt_auto_da_alloc:
- if (args[0].from) {
- if (match_int(&args[0], &option))
- return 0;
- } else
- option = 1; /* No argument, default to 1 */
+ if (match_int(&args[0], &option)) {
+ clear_opt(sbi->s_mount_opt, NO_AUTO_DA_ALLOC);
+ break;
+ }
if (option)
clear_opt(sbi->s_mount_opt, NO_AUTO_DA_ALLOC);
else
get_random_bytes(&sbi->s_next_generation, sizeof(u32));
spin_lock_init(&sbi->s_next_gen_lock);
+ err = percpu_counter_init(&sbi->s_freeblocks_counter,
+ ext4_count_free_blocks(sb));
+ if (!err) {
+ err = percpu_counter_init(&sbi->s_freeinodes_counter,
+ ext4_count_free_inodes(sb));
+ }
+ if (!err) {
+ err = percpu_counter_init(&sbi->s_dirs_counter,
+ ext4_count_dirs(sb));
+ }
+ if (!err) {
+ err = percpu_counter_init(&sbi->s_dirtyblocks_counter, 0);
+ }
+ if (err) {
+ ext4_msg(sb, KERN_ERR, "insufficient memory");
+ goto failed_mount3;
+ }
+
sbi->s_stripe = ext4_get_stripe_size(sbi);
sbi->s_max_writeback_mb_bump = 128;
set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
no_journal:
- err = percpu_counter_init(&sbi->s_freeblocks_counter,
- ext4_count_free_blocks(sb));
- if (!err)
- err = percpu_counter_init(&sbi->s_freeinodes_counter,
- ext4_count_free_inodes(sb));
- if (!err)
- err = percpu_counter_init(&sbi->s_dirs_counter,
- ext4_count_dirs(sb));
- if (!err)
- err = percpu_counter_init(&sbi->s_dirtyblocks_counter, 0);
- if (err) {
- ext4_msg(sb, KERN_ERR, "insufficient memory");
- goto failed_mount_wq;
- }
+
if (test_opt(sb, NOBH)) {
if (!(test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_WRITEBACK_DATA)) {
ext4_msg(sb, KERN_WARNING, "Ignoring nobh option - "
err = ext4_setup_system_zone(sb);
if (err) {
ext4_msg(sb, KERN_ERR, "failed to initialize system "
- "zone (%d)", err);
+ "zone (%d)\n", err);
goto failed_mount4;
}
jbd2_journal_destroy(sbi->s_journal);
sbi->s_journal = NULL;
}
- percpu_counter_destroy(&sbi->s_freeblocks_counter);
- percpu_counter_destroy(&sbi->s_freeinodes_counter);
- percpu_counter_destroy(&sbi->s_dirs_counter);
- percpu_counter_destroy(&sbi->s_dirtyblocks_counter);
failed_mount3:
if (sbi->s_flex_groups) {
if (is_vmalloc_addr(sbi->s_flex_groups))
else
kfree(sbi->s_flex_groups);
}
+ percpu_counter_destroy(&sbi->s_freeblocks_counter);
+ percpu_counter_destroy(&sbi->s_freeinodes_counter);
+ percpu_counter_destroy(&sbi->s_dirs_counter);
+ percpu_counter_destroy(&sbi->s_dirtyblocks_counter);
failed_mount2:
for (i = 0; i < db_count; i++)
brelse(sbi->s_group_desc[i]);
return 0;
journal = EXT4_SB(sb)->s_journal;
- if (journal) {
- vfs_check_frozen(sb, SB_FREEZE_TRANS);
+ if (journal)
ret = ext4_journal_force_commit(journal);
- }
return ret;
}
* the journal.
*/
error = jbd2_journal_flush(journal);
- if (error < 0)
- goto out;
+ if (error < 0) {
+ out:
+ jbd2_journal_unlock_updates(journal);
+ return error;
+ }
/* Journal blocked and flushed, clear needs_recovery flag. */
EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
error = ext4_commit_super(sb, 1);
-out:
- /* we rely on s_frozen to stop further updates */
- jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
- return error;
+ if (error)
+ goto out;
+ return 0;
}
/*
EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
ext4_commit_super(sb, 1);
unlock_super(sb);
+ jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
return 0;
}
{
int err;
- ext4_check_flag_values();
err = init_ext4_system_zone();
if (err)
return err;
void *end;
int error;
- if (!ext4_test_inode_state(inode, EXT4_STATE_XATTR))
+ if (!(EXT4_I(inode)->i_state & EXT4_STATE_XATTR))
return -ENODATA;
error = ext4_get_inode_loc(inode, &iloc);
if (error)
void *end;
int error;
- if (!ext4_test_inode_state(inode, EXT4_STATE_XATTR))
+ if (!(EXT4_I(inode)->i_state & EXT4_STATE_XATTR))
return 0;
error = ext4_get_inode_loc(inode, &iloc);
if (error)
EXT4_I(inode)->i_block_group);
/* non-extent files can't have physical blocks past 2^32 */
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
goal = goal & EXT4_MAX_BLOCK_FILE_PHYS;
block = ext4_new_meta_blocks(handle, inode,
if (error)
goto cleanup;
- if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+ if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
BUG_ON(block > EXT4_MAX_BLOCK_FILE_PHYS);
ea_idebug(inode, "creating block %d", block);
is->s.base = is->s.first = IFIRST(header);
is->s.here = is->s.first;
is->s.end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
- if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) {
+ if (EXT4_I(inode)->i_state & EXT4_STATE_XATTR) {
error = ext4_xattr_check_names(IFIRST(header), is->s.end);
if (error)
return error;
header = IHDR(inode, ext4_raw_inode(&is->iloc));
if (!IS_LAST_ENTRY(s->first)) {
header->h_magic = cpu_to_le32(EXT4_XATTR_MAGIC);
- ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+ EXT4_I(inode)->i_state |= EXT4_STATE_XATTR;
} else {
header->h_magic = cpu_to_le32(0);
- ext4_clear_inode_state(inode, EXT4_STATE_XATTR);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_XATTR;
}
return 0;
}
if (strlen(name) > 255)
return -ERANGE;
down_write(&EXT4_I(inode)->xattr_sem);
- no_expand = ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND);
- ext4_set_inode_state(inode, EXT4_STATE_NO_EXPAND);
+ no_expand = EXT4_I(inode)->i_state & EXT4_STATE_NO_EXPAND;
+ EXT4_I(inode)->i_state |= EXT4_STATE_NO_EXPAND;
error = ext4_get_inode_loc(inode, &is.iloc);
if (error)
if (error)
goto cleanup;
- if (ext4_test_inode_state(inode, EXT4_STATE_NEW)) {
+ if (EXT4_I(inode)->i_state & EXT4_STATE_NEW) {
struct ext4_inode *raw_inode = ext4_raw_inode(&is.iloc);
memset(raw_inode, 0, EXT4_SB(inode->i_sb)->s_inode_size);
- ext4_clear_inode_state(inode, EXT4_STATE_NEW);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_NEW;
}
error = ext4_xattr_ibody_find(inode, &i, &is);
ext4_xattr_update_super_block(handle, inode->i_sb);
inode->i_ctime = ext4_current_time(inode);
if (!value)
- ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_NO_EXPAND;
error = ext4_mark_iloc_dirty(handle, inode, &is.iloc);
/*
* The bh is consumed by ext4_mark_iloc_dirty, even with
brelse(is.iloc.bh);
brelse(bs.bh);
if (no_expand == 0)
- ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
+ EXT4_I(inode)->i_state &= ~EXT4_STATE_NO_EXPAND;
up_write(&EXT4_I(inode)->xattr_sem);
return error;
}
goto cleanup;
kfree(b_entry_name);
kfree(buffer);
- b_entry_name = NULL;
- buffer = NULL;
brelse(is->iloc.bh);
kfree(is);
kfree(bs);
{
struct fat_mount_options *opts = &MSDOS_SB(dir->i_sb)->options;
wchar_t *ip, *ext_start, *end, *name_start;
- unsigned char base[9], ext[4], buf[5], *p;
+ unsigned char base[9], ext[4], buf[8], *p;
unsigned char charbuf[NLS_MAX_CHARSET_SIZE];
int chl, chi;
int sz = 0, extlen, baselen, i, numtail_baselen, numtail2_baselen;
return 0;
}
- i = jiffies;
+ i = jiffies & 0xffff;
sz = (jiffies >> 16) & 0x7;
if (baselen > 2) {
baselen = numtail2_baselen;
name_res[baselen + 4] = '~';
name_res[baselen + 5] = '1' + sz;
while (1) {
- snprintf(buf, sizeof(buf), "%04X", i & 0xffff);
+ sprintf(buf, "%04X", i);
memcpy(&name_res[baselen], buf, 4);
if (vfat_find_form(dir, name_res) < 0)
break;
continue;
if (!(f->f_mode & FMODE_WRITE))
continue;
- spin_lock(&f->f_lock);
f->f_mode &= ~FMODE_WRITE;
- spin_unlock(&f->f_lock);
if (file_check_writeable(f) != 0)
continue;
file_release_write(f);
unsigned long expired;
long nr_pages;
- /*
- * When set to zero, disable periodic writeback
- */
- if (!dirty_writeback_interval)
- return 0;
-
expired = wb->last_old_flush +
msecs_to_jiffies(dirty_writeback_interval * 10);
if (time_before(jiffies, expired))
break;
}
- if (dirty_writeback_interval) {
- wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
- schedule_timeout_interruptible(wait_jiffies);
- } else
- schedule();
-
+ wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+ schedule_timeout_interruptible(wait_jiffies);
try_to_freeze();
}
}
EXPORT_SYMBOL(writeback_inodes_sb);
-/**
- * writeback_inodes_sb_if_idle - start writeback if none underway
- * @sb: the superblock
- *
- * Invoke writeback_inodes_sb if no writeback is currently underway.
- * Returns 1 if writeback was started, 0 if not.
- */
-int writeback_inodes_sb_if_idle(struct super_block *sb)
-{
- if (!writeback_in_progress(sb->s_bdi)) {
- writeback_inodes_sb(sb);
- return 1;
- } else
- return 0;
-}
-EXPORT_SYMBOL(writeback_inodes_sb_if_idle);
-
/**
* sync_inodes_sb - sync sb inode pages
* @sb: the superblock
}
}
-static void end_queued_requests(struct fuse_conn *fc)
-{
- fc->max_background = UINT_MAX;
- flush_bg_queue(fc);
- end_requests(fc, &fc->pending);
- end_requests(fc, &fc->processing);
-}
-
/*
* Abort all requests.
*
fc->connected = 0;
fc->blocked = 0;
end_io_requests(fc);
- end_queued_requests(fc);
+ end_requests(fc, &fc->pending);
+ end_requests(fc, &fc->processing);
wake_up_all(&fc->waitq);
wake_up_all(&fc->blocked_waitq);
kill_fasync(&fc->fasync, SIGIO, POLL_IN);
if (fc) {
spin_lock(&fc->lock);
fc->connected = 0;
- fc->blocked = 0;
- end_queued_requests(fc);
- wake_up_all(&fc->blocked_waitq);
+ end_requests(fc, &fc->pending);
+ end_requests(fc, &fc->processing);
spin_unlock(&fc->lock);
fuse_conn_put(fc);
}
void fuse_finish_open(struct inode *inode, struct file *file)
{
struct fuse_file *ff = file->private_data;
- struct fuse_conn *fc = get_fuse_conn(inode);
if (ff->open_flags & FOPEN_DIRECT_IO)
file->f_op = &fuse_direct_io_file_operations;
invalidate_inode_pages2(inode->i_mapping);
if (ff->open_flags & FOPEN_NONSEEKABLE)
nonseekable_open(inode, file);
- if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
- struct fuse_inode *fi = get_fuse_inode(inode);
-
- spin_lock(&fc->lock);
- fi->attr_version = ++fc->attr_version;
- i_size_write(inode, 0);
- spin_unlock(&fc->lock);
- fuse_invalidate_attr(inode);
- }
}
int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
#include <linux/spinlock.h>
#include <linux/completion.h>
#include <linux/buffer_head.h>
-#include <linux/xattr.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include <linux/gfs2_ondisk.h>
#include "trans.h"
#include "util.h"
+#define ACL_ACCESS 1
+#define ACL_DEFAULT 0
+
+int gfs2_acl_validate_set(struct gfs2_inode *ip, int access,
+ struct gfs2_ea_request *er, int *remove, mode_t *mode)
+{
+ struct posix_acl *acl;
+ int error;
+
+ error = gfs2_acl_validate_remove(ip, access);
+ if (error)
+ return error;
+
+ if (!er->er_data)
+ return -EINVAL;
+
+ acl = posix_acl_from_xattr(er->er_data, er->er_data_len);
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+ if (!acl) {
+ *remove = 1;
+ return 0;
+ }
+
+ error = posix_acl_valid(acl);
+ if (error)
+ goto out;
+
+ if (access) {
+ error = posix_acl_equiv_mode(acl, mode);
+ if (!error)
+ *remove = 1;
+ else if (error > 0)
+ error = 0;
+ }
+
+out:
+ posix_acl_release(acl);
+ return error;
+}
+
+int gfs2_acl_validate_remove(struct gfs2_inode *ip, int access)
+{
+ if (!GFS2_SB(&ip->i_inode)->sd_args.ar_posix_acl)
+ return -EOPNOTSUPP;
+ if (!is_owner_or_cap(&ip->i_inode))
+ return -EPERM;
+ if (S_ISLNK(ip->i_inode.i_mode))
+ return -EOPNOTSUPP;
+ if (!access && !S_ISDIR(ip->i_inode.i_mode))
+ return -EACCES;
+
+ return 0;
+}
+
static int acl_get(struct gfs2_inode *ip, const char *name,
struct posix_acl **acl, struct gfs2_ea_location *el,
char **datap, unsigned int *lenp)
return error;
}
-static int gfs2_acl_type(const char *name)
-{
- if (strcmp(name, GFS2_POSIX_ACL_ACCESS) == 0)
- return ACL_TYPE_ACCESS;
- if (strcmp(name, GFS2_POSIX_ACL_DEFAULT) == 0)
- return ACL_TYPE_DEFAULT;
- return -EINVAL;
-}
-
-static int gfs2_xattr_system_get(struct inode *inode, const char *name,
- void *buffer, size_t size)
-{
- int type;
-
- type = gfs2_acl_type(name);
- if (type < 0)
- return type;
-
- return gfs2_xattr_get(inode, GFS2_EATYPE_SYS, name, buffer, size);
-}
-
-static int gfs2_set_mode(struct inode *inode, mode_t mode)
-{
- int error = 0;
-
- if (mode != inode->i_mode) {
- struct iattr iattr;
-
- iattr.ia_valid = ATTR_MODE;
- iattr.ia_mode = mode;
-
- error = gfs2_setattr_simple(GFS2_I(inode), &iattr);
- }
-
- return error;
-}
-
-static int gfs2_xattr_system_set(struct inode *inode, const char *name,
- const void *value, size_t size, int flags)
-{
- struct gfs2_sbd *sdp = GFS2_SB(inode);
- struct posix_acl *acl = NULL;
- int error = 0, type;
-
- if (!sdp->sd_args.ar_posix_acl)
- return -EOPNOTSUPP;
-
- type = gfs2_acl_type(name);
- if (type < 0)
- return type;
- if (flags & XATTR_CREATE)
- return -EINVAL;
- if (type == ACL_TYPE_DEFAULT && !S_ISDIR(inode->i_mode))
- return value ? -EACCES : 0;
- if ((current_fsuid() != inode->i_uid) && !capable(CAP_FOWNER))
- return -EPERM;
- if (S_ISLNK(inode->i_mode))
- return -EOPNOTSUPP;
-
- if (!value)
- goto set_acl;
-
- acl = posix_acl_from_xattr(value, size);
- if (!acl) {
- /*
- * acl_set_file(3) may request that we set default ACLs with
- * zero length -- defend (gracefully) against that here.
- */
- goto out;
- }
- if (IS_ERR(acl)) {
- error = PTR_ERR(acl);
- goto out;
- }
-
- error = posix_acl_valid(acl);
- if (error)
- goto out_release;
-
- error = -EINVAL;
- if (acl->a_count > GFS2_ACL_MAX_ENTRIES)
- goto out_release;
-
- if (type == ACL_TYPE_ACCESS) {
- mode_t mode = inode->i_mode;
- error = posix_acl_equiv_mode(acl, &mode);
-
- if (error <= 0) {
- posix_acl_release(acl);
- acl = NULL;
-
- if (error < 0)
- return error;
- }
-
- error = gfs2_set_mode(inode, mode);
- if (error)
- goto out_release;
- }
-
-set_acl:
- error = gfs2_xattr_set(inode, GFS2_EATYPE_SYS, name, value, size, 0);
-out_release:
- posix_acl_release(acl);
-out:
- return error;
-}
-
-struct xattr_handler gfs2_xattr_system_handler = {
- .prefix = XATTR_SYSTEM_PREFIX,
- .get = gfs2_xattr_system_get,
- .set = gfs2_xattr_system_set,
-};
-
#include "incore.h"
#define GFS2_POSIX_ACL_ACCESS "posix_acl_access"
+#define GFS2_POSIX_ACL_ACCESS_LEN 16
#define GFS2_POSIX_ACL_DEFAULT "posix_acl_default"
-#define GFS2_ACL_MAX_ENTRIES 25
+#define GFS2_POSIX_ACL_DEFAULT_LEN 17
-extern int gfs2_check_acl(struct inode *inode, int mask);
-extern int gfs2_acl_create(struct gfs2_inode *dip, struct gfs2_inode *ip);
-extern int gfs2_acl_chmod(struct gfs2_inode *ip, struct iattr *attr);
-extern struct xattr_handler gfs2_xattr_system_handler;
+#define GFS2_ACL_IS_ACCESS(name, len) \
+ ((len) == GFS2_POSIX_ACL_ACCESS_LEN && \
+ !memcmp(GFS2_POSIX_ACL_ACCESS, (name), (len)))
+
+#define GFS2_ACL_IS_DEFAULT(name, len) \
+ ((len) == GFS2_POSIX_ACL_DEFAULT_LEN && \
+ !memcmp(GFS2_POSIX_ACL_DEFAULT, (name), (len)))
+
+struct gfs2_ea_request;
+
+int gfs2_acl_validate_set(struct gfs2_inode *ip, int access,
+ struct gfs2_ea_request *er,
+ int *remove, mode_t *mode);
+int gfs2_acl_validate_remove(struct gfs2_inode *ip, int access);
+int gfs2_check_acl(struct inode *inode, int mask);
+int gfs2_acl_create(struct gfs2_inode *dip, struct gfs2_inode *ip);
+int gfs2_acl_chmod(struct gfs2_inode *ip, struct iattr *attr);
#endif /* __ACL_DOT_H__ */
unsigned totlen = be16_to_cpu(dent->de_rec_len);
if (gfs2_dirent_sentinel(dent))
- actual = 0;
+ actual = GFS2_DIRENT_SIZE(0);
if (totlen - actual >= required)
return 1;
return 0;
if (error)
goto out_drop_write;
- error = -EACCES;
- if (!is_owner_or_cap(inode))
- goto out;
-
- error = 0;
flags = ip->i_diskflags;
new_flags = (flags & ~mask) | (reqflags & mask);
if ((new_flags ^ flags) == 0)
{
struct inode *inode = filp->f_path.dentry->d_inode;
u32 fsflags, gfsflags;
-
if (get_user(fsflags, ptr))
return -EFAULT;
-
gfsflags = fsflags_cvt(fsflags_to_gfs2, fsflags);
if (!S_ISDIR(inode->i_mode)) {
if (gfsflags & GFS2_DIF_INHERIT_JDATA)
if (!(fl->fl_flags & FL_POSIX))
return -ENOLCK;
- if (__mandatory_lock(&ip->i_inode) && fl->fl_type != F_UNLCK)
+ if (__mandatory_lock(&ip->i_inode))
return -ENOLCK;
if (cmd == F_CANCELLK) {
return gfs2_xattr_set(inode, GFS2_EATYPE_USR, name, value, size, flags);
}
+static int gfs2_xattr_system_get(struct inode *inode, const char *name,
+ void *buffer, size_t size)
+{
+ return gfs2_xattr_get(inode, GFS2_EATYPE_SYS, name, buffer, size);
+}
+
+static int gfs2_xattr_system_set(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+{
+ return gfs2_xattr_set(inode, GFS2_EATYPE_SYS, name, value, size, flags);
+}
+
static int gfs2_xattr_security_get(struct inode *inode, const char *name,
void *buffer, size_t size)
{
.set = gfs2_xattr_security_set,
};
+static struct xattr_handler gfs2_xattr_system_handler = {
+ .prefix = XATTR_SYSTEM_PREFIX,
+ .get = gfs2_xattr_system_get,
+ .set = gfs2_xattr_system_set,
+};
+
struct xattr_handler *gfs2_xattr_handlers[] = {
&gfs2_xattr_user_handler,
&gfs2_xattr_security_handler,
{
jbd_debugfs_dir = debugfs_create_dir("jbd", NULL);
if (jbd_debugfs_dir)
- jbd_debug = debugfs_create_u8("jbd-debug", S_IRUGO | S_IWUSR,
+ jbd_debug = debugfs_create_u8("jbd-debug", S_IRUGO,
jbd_debugfs_dir,
&journal_enable_debug);
}
#include <linux/jbd2.h>
#include <linux/errno.h>
#include <linux/slab.h>
-#include <linux/blkdev.h>
#include <trace/events/jbd2.h>
/*
journal->j_tail_sequence = first_tid;
journal->j_tail = blocknr;
spin_unlock(&journal->j_state_lock);
-
- /*
- * If there is an external journal, we need to make sure that
- * any data blocks that were recently written out --- perhaps
- * by jbd2_log_do_checkpoint() --- are flushed out before we
- * drop the transactions from the external journal. It's
- * unlikely this will be necessary, especially with a
- * appropriately sized journal, but we need this to guarantee
- * correctness. Fortunately jbd2_cleanup_journal_tail()
- * doesn't get called all that often.
- */
- if ((journal->j_fs_dev != journal->j_dev) &&
- (journal->j_flags & JBD2_BARRIER))
- blkdev_issue_flush(journal->j_fs_dev, NULL);
if (!(journal->j_flags & JBD2_ABORT))
jbd2_journal_update_superblock(journal, 1);
return 0;
ret = err;
spin_lock(&journal->j_list_lock);
J_ASSERT(jinode->i_transaction == commit_transaction);
- commit_transaction->t_flushed_data_blocks = 1;
jinode->i_flags &= ~JI_COMMIT_RUNNING;
wake_up_bit(&jinode->i_flags, __JI_COMMIT_RUNNING);
}
}
}
- /*
- * If the journal is not located on the file system device,
- * then we must flush the file system device before we issue
- * the commit record
- */
- if (commit_transaction->t_flushed_data_blocks &&
- (journal->j_fs_dev != journal->j_dev) &&
- (journal->j_flags & JBD2_BARRIER))
- blkdev_issue_flush(journal->j_fs_dev, NULL);
-
/* Done it all: now write the commit record asynchronously. */
+
if (JBD2_HAS_INCOMPAT_FEATURE(journal,
JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT)) {
err = journal_submit_commit_record(journal, commit_transaction,
blkdev_issue_flush(journal->j_dev, NULL);
}
+ /*
+ * This is the right place to wait for data buffers both for ASYNC
+ * and !ASYNC commit. If commit is ASYNC, we need to wait only after
+ * the commit block went to disk (which happens above). If commit is
+ * SYNC, we need to wait for data buffers before we start writing
+ * commit block, which happens below in such setting.
+ */
err = journal_finish_inode_data_buffers(journal, commit_transaction);
if (err) {
printk(KERN_WARNING
{
jbd2_debugfs_dir = debugfs_create_dir("jbd2", NULL);
if (jbd2_debugfs_dir)
- jbd2_debug = debugfs_create_u8(JBD2_DEBUG_NAME,
- S_IRUGO | S_IWUSR,
+ jbd2_debug = debugfs_create_u8(JBD2_DEBUG_NAME, S_IRUGO,
jbd2_debugfs_dir,
&jbd2_journal_enable_debug);
}
struct inode *iplist[1];
struct jfs_superblock *j_sb, *j_sb2;
uint old_agsize;
- int agsizechanged = 0;
struct buffer_head *bh, *bh2;
/* If the volume hasn't grown, get out now */
*/
if ((rc = dbExtendFS(ipbmap, XAddress, nblocks)))
goto error_out;
-
- agsizechanged |= (bmp->db_agsize != old_agsize);
-
/*
* the map now has extended to cover additional nblocks:
* dn_mapsize = oldMapsize + nblocks;
* will correctly identify the new ag);
*/
/* if new AG size the same as old AG size, done! */
- if (agsizechanged) {
+ if (bmp->db_agsize != old_agsize) {
if ((rc = diExtendFS(ipimap, ipbmap)))
goto error_out;
#define EA_MALLOC 0x0008
-static int is_known_namespace(const char *name)
-{
- if (strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) &&
- strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
- strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN) &&
- strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
- return false;
-
- return true;
-}
-
/*
* These three routines are used to recognize on-disk extended attributes
* that are in a recognized namespace. If the attribute is not recognized,
* "os2." is prepended to the name
*/
-static int is_os2_xattr(struct jfs_ea *ea)
+static inline int is_os2_xattr(struct jfs_ea *ea)
{
- return !is_known_namespace(ea->name);
+ /*
+ * Check for "system."
+ */
+ if ((ea->namelen >= XATTR_SYSTEM_PREFIX_LEN) &&
+ !strncmp(ea->name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
+ return false;
+ /*
+ * Check for "user."
+ */
+ if ((ea->namelen >= XATTR_USER_PREFIX_LEN) &&
+ !strncmp(ea->name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
+ return false;
+ /*
+ * Check for "security."
+ */
+ if ((ea->namelen >= XATTR_SECURITY_PREFIX_LEN) &&
+ !strncmp(ea->name, XATTR_SECURITY_PREFIX,
+ XATTR_SECURITY_PREFIX_LEN))
+ return false;
+ /*
+ * Check for "trusted."
+ */
+ if ((ea->namelen >= XATTR_TRUSTED_PREFIX_LEN) &&
+ !strncmp(ea->name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
+ return false;
+ /*
+ * Add any other valid namespace prefixes here
+ */
+
+ /*
+ * We assume it's OS/2's flat namespace
+ */
+ return true;
}
static inline int name_size(struct jfs_ea *ea)
if (!strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN))
return can_set_system_xattr(inode, name, value, value_len);
- if (!strncmp(name, XATTR_OS2_PREFIX, XATTR_OS2_PREFIX_LEN)) {
- /*
- * This makes sure that we aren't trying to set an
- * attribute in a different namespace by prefixing it
- * with "os2."
- */
- if (is_known_namespace(name + XATTR_OS2_PREFIX_LEN))
- return -EOPNOTSUPP;
- return 0;
- }
-
/*
* Don't allow setting an attribute in an unknown namespace.
*/
if (strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) &&
strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN) &&
- strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN))
+ strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN) &&
+ strncmp(name, XATTR_OS2_PREFIX, XATTR_OS2_PREFIX_LEN))
return -EOPNOTSUPP;
return 0;
int xattr_size;
ssize_t size;
int namelen = strlen(name);
+ char *os2name = NULL;
char *value;
+ if (strncmp(name, XATTR_OS2_PREFIX, XATTR_OS2_PREFIX_LEN) == 0) {
+ os2name = kmalloc(namelen - XATTR_OS2_PREFIX_LEN + 1,
+ GFP_KERNEL);
+ if (!os2name)
+ return -ENOMEM;
+ strcpy(os2name, name + XATTR_OS2_PREFIX_LEN);
+ name = os2name;
+ namelen -= XATTR_OS2_PREFIX_LEN;
+ }
+
down_read(&JFS_IP(inode)->xattr_sem);
xattr_size = ea_get(inode, &ea_buf, 0);
out:
up_read(&JFS_IP(inode)->xattr_sem);
+ kfree(os2name);
+
return size;
}
{
int err;
- if (strncmp(name, XATTR_OS2_PREFIX, XATTR_OS2_PREFIX_LEN) == 0) {
- /*
- * skip past "os2." prefix
- */
- name += XATTR_OS2_PREFIX_LEN;
- /*
- * Don't allow retrieving properly prefixed attributes
- * by prepending them with "os2."
- */
- if (is_known_namespace(name))
- return -EOPNOTSUPP;
- }
-
err = __jfs_getxattr(dentry->d_inode, name, data, buf_size);
return err;
* unique inode values later for this filesystem, then you must take care
* to pass it an appropriate max_reserved value to avoid collisions.
*/
-int simple_fill_super(struct super_block *s, unsigned long magic,
- struct tree_descr *files)
+int simple_fill_super(struct super_block *s, int magic, struct tree_descr *files)
{
struct inode *inode;
struct dentry *root;
return PTR_ERR(dentry);
}
-/*
- * This is a temporary kludge to deal with "automount" symlinks; proper
- * solution is to trigger them on follow_mount(), so that do_lookup()
- * would DTRT. To be killed before 2.6.34-final.
- */
-static inline int follow_on_final(struct inode *inode, unsigned lookup_flags)
-{
- return inode && unlikely(inode->i_op->follow_link) &&
- ((lookup_flags & LOOKUP_FOLLOW) || S_ISDIR(inode->i_mode));
-}
-
/*
* Name resolution.
* This is the basic name resolution function, turning a pathname into
if (err)
break;
inode = next.dentry->d_inode;
- if (follow_on_final(inode, lookup_flags)) {
+ if ((lookup_flags & LOOKUP_FOLLOW)
+ && inode && inode->i_op->follow_link) {
err = do_follow_link(&next, nd);
if (err)
goto return_err;
{
struct path path;
int retval;
- int lookup_flags = 0;
- if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
- return -EINVAL;
-
- if (!(flags & UMOUNT_NOFOLLOW))
- lookup_flags |= LOOKUP_FOLLOW;
-
- retval = user_path_at(AT_FDCWD, name, lookup_flags, &path);
+ retval = user_path(name, &path);
if (retval)
goto out;
retval = -EINVAL;
sin1->sin6_scope_id != sin2->sin6_scope_id)
return 0;
- return ipv6_addr_equal(&sin1->sin6_addr, &sin2->sin6_addr);
+ return ipv6_addr_equal(&sin1->sin6_addr, &sin1->sin6_addr);
}
#else /* !defined(CONFIG_IPV6) && !defined(CONFIG_IPV6_MODULE) */
static int nfs_sockaddr_match_ipaddr6(const struct sockaddr *sa1,
static void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *source)
{
target->flags = source->flags;
- target->rsize = source->rsize;
- target->wsize = source->wsize;
target->acregmin = source->acregmin;
target->acregmax = source->acregmax;
target->acdirmin = source->acdirmin;
/* Initialise the client representation from the mount data */
server->flags = data->flags;
- server->caps |= NFS_CAP_ATOMIC_OPEN|NFS_CAP_CHANGE_ATTR|
- NFS_CAP_POSIX_LOCK;
+ server->caps |= NFS_CAP_ATOMIC_OPEN|NFS_CAP_CHANGE_ATTR;
server->options = data->options;
/* Get a client record */
}
#endif
-static inline int nfs_have_delegated_attributes(struct inode *inode)
-{
- return nfs_have_delegation(inode, FMODE_READ) &&
- !(NFS_I(inode)->cache_validity & NFS_INO_REVAL_FORCED);
-}
-
#endif
/* If we have submounts, don't unhash ! */
if (have_submounts(dentry))
goto out_valid;
- if (dentry->d_flags & DCACHE_DISCONNECTED)
- goto out_valid;
shrink_dcache_parent(dentry);
}
d_drop(dentry);
res = NULL;
goto out;
/* This turned out not to be a regular file */
- case -EISDIR:
case -ENOTDIR:
goto no_open;
case -ELOOP:
if (!(nd->intent.open.flags & O_NOFOLLOW))
goto no_open;
+ /* case -EISDIR: */
/* case -EINVAL: */
default:
goto out;
cache = nfs_access_search_rbtree(inode, cred);
if (cache == NULL)
goto out;
- if (!nfs_have_delegated_attributes(inode) &&
+ if (!nfs_have_delegation(inode, FMODE_READ) &&
!time_in_range_open(jiffies, cache->jiffies, cache->jiffies + nfsi->attrtimeo))
goto out_stale;
res->jiffies = cache->jiffies;
};
-static void nfs_dns_ent_update(struct cache_head *cnew,
- struct cache_head *ckey)
-{
- struct nfs_dns_ent *new;
- struct nfs_dns_ent *key;
-
- new = container_of(cnew, struct nfs_dns_ent, h);
- key = container_of(ckey, struct nfs_dns_ent, h);
-
- memcpy(&new->addr, &key->addr, key->addrlen);
- new->addrlen = key->addrlen;
-}
-
static void nfs_dns_ent_init(struct cache_head *cnew,
struct cache_head *ckey)
{
new->hostname = kstrndup(key->hostname, key->namelen, GFP_KERNEL);
if (new->hostname) {
new->namelen = key->namelen;
- nfs_dns_ent_update(cnew, ckey);
+ memcpy(&new->addr, &key->addr, key->addrlen);
+ new->addrlen = key->addrlen;
} else {
new->namelen = 0;
new->addrlen = 0;
.cache_show = nfs_dns_show,
.match = nfs_dns_match,
.init = nfs_dns_ent_init,
- .update = nfs_dns_ent_update,
+ .update = nfs_dns_ent_init,
.alloc = nfs_dns_ent_alloc,
};
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/aio.h>
-#include <linux/gfp.h>
-#include <linux/swap.h>
#include <asm/uaccess.h>
#include <asm/system.h>
*/
static int nfs_release_page(struct page *page, gfp_t gfp)
{
- struct address_space *mapping = page->mapping;
-
dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page);
- /* Only do I/O if gfp is a superset of GFP_KERNEL */
- if (mapping && (gfp & GFP_KERNEL) == GFP_KERNEL) {
- int how = FLUSH_SYNC;
-
- /* Don't let kswapd deadlock waiting for OOM RPC calls */
- if (current_is_kswapd())
- how = 0;
- nfs_commit_inode(mapping->host, how);
- }
+ if (gfp & __GFP_WAIT)
+ nfs_wb_page(page->mapping->host, page);
/* If PagePrivate() is set, then the page is not freeable */
if (PagePrivate(page))
return 0;
{
struct nfs_inode *nfsi = NFS_I(inode);
- if (nfs_have_delegated_attributes(inode))
+ if (nfs_have_delegation(inode, FMODE_READ))
return 0;
return !time_in_range_open(jiffies, nfsi->read_cache_jiffies, nfsi->read_cache_jiffies + nfsi->attrtimeo);
}
nfs_post_op_update_inode(dir, o_res->dir_attr);
} else
nfs_refresh_inode(dir, o_res->dir_attr);
- if ((o_res->rflags & NFS4_OPEN_RESULT_LOCKTYPE_POSIX) == 0)
- server->caps &= ~NFS_CAP_POSIX_LOCK;
if(o_res->rflags & NFS4_OPEN_RESULT_CONFIRM) {
status = _nfs4_proc_open_confirm(data);
if (status != 0)
status = PTR_ERR(state);
if (IS_ERR(state))
goto err_opendata_put;
- if (server->caps & NFS_CAP_POSIX_LOCK)
+ if ((opendata->o_res.rflags & NFS4_OPEN_RESULT_LOCKTYPE_POSIX) != 0)
set_bit(NFS_STATE_POSIX_LOCKS, &state->flags);
nfs4_opendata_put(opendata);
nfs4_put_state_owner(sp);
bmval1 |= FATTR4_WORD1_TIME_ACCESS_SET;
*p++ = cpu_to_be32(NFS4_SET_TO_CLIENT_TIME);
*p++ = cpu_to_be32(0);
- *p++ = cpu_to_be32(iap->ia_atime.tv_sec);
- *p++ = cpu_to_be32(iap->ia_atime.tv_nsec);
+ *p++ = cpu_to_be32(iap->ia_mtime.tv_sec);
+ *p++ = cpu_to_be32(iap->ia_mtime.tv_nsec);
}
else if (iap->ia_valid & ATTR_ATIME) {
bmval1 |= FATTR4_WORD1_TIME_ACCESS_SET;
encode_compound_hdr(&xdr, req, &hdr);
encode_sequence(&xdr, &args->seq_args, &hdr);
encode_putfh(&xdr, args->fh, &hdr);
- replen = hdr.replen + op_decode_hdr_maxsz + nfs4_fattr_bitmap_maxsz + 1;
+ replen = hdr.replen + nfs4_fattr_bitmap_maxsz + 1;
encode_getattr_two(&xdr, FATTR4_WORD0_ACL, 0, &hdr);
xdr_inline_pages(&req->rq_rcv_buf, replen << 2,
*/
int nfs_set_page_tag_locked(struct nfs_page *req)
{
+ struct nfs_inode *nfsi = NFS_I(req->wb_context->path.dentry->d_inode);
+
if (!nfs_lock_request_dontget(req))
return 0;
if (req->wb_page != NULL)
- radix_tree_tag_set(&NFS_I(req->wb_context->path.dentry->d_inode)->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
+ radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
return 1;
}
*/
void nfs_clear_page_tag_locked(struct nfs_page *req)
{
- if (req->wb_page != NULL) {
- struct inode *inode = req->wb_context->path.dentry->d_inode;
- struct nfs_inode *nfsi = NFS_I(inode);
+ struct inode *inode = req->wb_context->path.dentry->d_inode;
+ struct nfs_inode *nfsi = NFS_I(inode);
+ if (req->wb_page != NULL) {
spin_lock(&inode->i_lock);
radix_tree_tag_clear(&nfsi->nfs_page_tree, req->wb_index, NFS_PAGE_TAG_LOCKED);
nfs_unlock_request(req);
* nfs_clear_request - Free up all resources allocated to the request
* @req:
*
- * Release page and open context resources associated with a read/write
- * request after it has completed.
+ * Release page resources associated with a write request after it
+ * has completed.
*/
void nfs_clear_request(struct nfs_page *req)
{
struct page *page = req->wb_page;
- struct nfs_open_context *ctx = req->wb_context;
-
if (page != NULL) {
page_cache_release(page);
req->wb_page = NULL;
}
- if (ctx != NULL) {
- put_nfs_open_context(ctx);
- req->wb_context = NULL;
- }
}
{
struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref);
- /* Release struct file and open context */
+ /* Release struct file or cached credential */
nfs_clear_request(req);
+ put_nfs_open_context(req->wb_context);
nfs_page_free(req);
}
}
}
-#ifdef CONFIG_NFS_V4
-static void nfs_show_nfsv4_options(struct seq_file *m, struct nfs_server *nfss,
- int showdefaults)
-{
- struct nfs_client *clp = nfss->nfs_client;
-
- seq_printf(m, ",clientaddr=%s", clp->cl_ipaddr);
- seq_printf(m, ",minorversion=%u", clp->cl_minorversion);
-}
-#else
-static void nfs_show_nfsv4_options(struct seq_file *m, struct nfs_server *nfss,
- int showdefaults)
-{
-}
-#endif
-
/*
* Describe the mount options in force on this server representation
*/
if (version != 4)
nfs_show_mountd_options(m, nfss, showdefaults);
- else
- nfs_show_nfsv4_options(m, nfss, showdefaults);
+#ifdef CONFIG_NFS_V4
+ if (clp->rpc_ops->version == 4)
+ seq_printf(m, ",clientaddr=%s", clp->cl_ipaddr);
+#endif
if (nfss->options & NFS_OPTION_FSCACHE)
seq_printf(m, ",fsc");
-
- if (nfss->flags & NFS_MOUNT_LOOKUP_CACHE_NONEG) {
- if (nfss->flags & NFS_MOUNT_LOOKUP_CACHE_NONE)
- seq_printf(m, ",lookupcache=none");
- else
- seq_printf(m, ",lookupcache=pos");
- }
}
/*
{
if (share_access & NFS4_SHARE_ACCESS_WRITE) {
drop_file_write_access(filp);
- spin_lock(&filp->f_lock);
filp->f_mode = (filp->f_mode | FMODE_READ) & ~FMODE_WRITE;
- spin_unlock(&filp->f_lock);
}
}
argp->p = page_address(argp->pagelist[0]);
argp->pagelist++;
if (argp->pagelen < PAGE_SIZE) {
- argp->end = argp->p + (argp->pagelen>>2);
+ argp->end = p + (argp->pagelen>>2);
argp->pagelen = 0;
} else {
- argp->end = argp->p + (PAGE_SIZE>>2);
+ argp->end = p + (PAGE_SIZE>>2);
argp->pagelen -= PAGE_SIZE;
}
memcpy(((char*)p)+avail, argp->p, (nbytes - avail));
argp->p = page_address(argp->pagelist[0]);
argp->pagelist++;
if (argp->pagelen < PAGE_SIZE) {
- argp->end = argp->p + (argp->pagelen>>2);
+ argp->end = p + (argp->pagelen>>2);
argp->pagelen = 0;
} else {
- argp->end = argp->p + (PAGE_SIZE>>2);
+ argp->end = p + (PAGE_SIZE>>2);
argp->pagelen -= PAGE_SIZE;
}
}
* and this is the root of a cross-mounted filesystem.
*/
if (ignore_crossmnt == 0 &&
- dentry == exp->ex_path.mnt->mnt_root) {
- struct path path = exp->ex_path;
- path_get(&path);
- while (follow_up(&path)) {
- if (path.dentry != path.mnt->mnt_root)
- break;
- }
- err = vfs_getattr(path.mnt, path.dentry, &stat);
- path_put(&path);
+ exp->ex_path.mnt->mnt_root->d_inode == dentry->d_inode) {
+ err = vfs_getattr(exp->ex_path.mnt->mnt_parent,
+ exp->ex_path.mnt->mnt_mountpoint, &stat);
if (err)
goto out_nfserr;
}
int nfsd_vers(int vers, enum vers_op change)
{
if (vers < NFSD_MINVERS || vers >= NFSD_NRVERS)
- return 0;
+ return -1;
switch(change) {
case NFSD_SET:
nfsd_versions[vers] = nfsd_version[vers];
sb->s_export_op = &nilfs_export_ops;
sb->s_root = NULL;
sb->s_time_gran = 1;
- sb->s_bdi = nilfs->ns_bdi;
if (!nilfs_loaded(nilfs)) {
err = load_nilfs(nilfs, sbi);
#include <linux/path.h> /* struct path */
#include <linux/slab.h> /* kmem_* */
#include <linux/types.h>
-#include <linux/sched.h>
#include "inotify.h"
ret = 0;
}
- if (entry->mask & IN_ONESHOT)
- fsnotify_destroy_mark_by_entry(entry);
-
/*
* If we hold the entry until after the event is on the queue
* IN_IGNORED won't be able to pass this event in the queue
idr_for_each(&group->inotify_data.idr, idr_callback, group);
idr_remove_all(&group->inotify_data.idr);
idr_destroy(&group->inotify_data.idr);
- free_uid(group->inotify_data.user);
}
void inotify_free_event_priv(struct fsnotify_event_private_data *fsn_event_priv)
{
__u32 mask;
- /*
- * everything should accept their own ignored, cares about children,
- * and should receive events when the inode is unmounted
- */
- mask = (FS_IN_IGNORED | FS_EVENT_ON_CHILD | FS_UNMOUNT);
+ /* everything should accept their own ignored and cares about children */
+ mask = (FS_IN_IGNORED | FS_EVENT_ON_CHILD);
/* mask off the flags used to open the fd */
mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT));
if (unlikely(!idr_pre_get(&group->inotify_data.idr, GFP_KERNEL)))
goto out_err;
- /* we are putting the mark on the idr, take a reference */
- fsnotify_get_mark(&tmp_ientry->fsn_entry);
-
spin_lock(&group->inotify_data.idr_lock);
ret = idr_get_new_above(&group->inotify_data.idr, &tmp_ientry->fsn_entry,
group->inotify_data.last_wd+1,
&tmp_ientry->wd);
spin_unlock(&group->inotify_data.idr_lock);
if (ret) {
- /* we didn't get on the idr, drop the idr reference */
- fsnotify_put_mark(&tmp_ientry->fsn_entry);
-
/* idr was out of memory allocate and try again */
if (ret == -EAGAIN)
goto retry;
goto out_err;
}
+ /* we put the mark on the idr, take a reference */
+ fsnotify_get_mark(&tmp_ientry->fsn_entry);
+
/* we are on the idr, now get on the inode */
ret = fsnotify_add_mark(&tmp_ientry->fsn_entry, group, inode);
if (ret) {
#include "alloc.h"
#include "dlmglue.h"
#include "file.h"
-#include "inode.h"
-#include "journal.h"
#include "ocfs2_fs.h"
#include "xattr.h"
return acl;
}
-/*
- * Helper function to set i_mode in memory and disk. Some call paths
- * will not have di_bh or a journal handle to pass, in which case it
- * will create it's own.
- */
-static int ocfs2_acl_set_mode(struct inode *inode, struct buffer_head *di_bh,
- handle_t *handle, umode_t new_mode)
-{
- int ret, commit_handle = 0;
- struct ocfs2_dinode *di;
-
- if (di_bh == NULL) {
- ret = ocfs2_read_inode_block(inode, &di_bh);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
- } else
- get_bh(di_bh);
-
- if (handle == NULL) {
- handle = ocfs2_start_trans(OCFS2_SB(inode->i_sb),
- OCFS2_INODE_UPDATE_CREDITS);
- if (IS_ERR(handle)) {
- ret = PTR_ERR(handle);
- mlog_errno(ret);
- goto out_brelse;
- }
-
- commit_handle = 1;
- }
-
- di = (struct ocfs2_dinode *)di_bh->b_data;
- ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh,
- OCFS2_JOURNAL_ACCESS_WRITE);
- if (ret) {
- mlog_errno(ret);
- goto out_commit;
- }
-
- inode->i_mode = new_mode;
- di->i_mode = cpu_to_le16(inode->i_mode);
-
- ocfs2_journal_dirty(handle, di_bh);
-
-out_commit:
- if (commit_handle)
- ocfs2_commit_trans(OCFS2_SB(inode->i_sb), handle);
-out_brelse:
- brelse(di_bh);
-out:
- return ret;
-}
-
/*
* Set the access or default ACL of an inode.
*/
if (ret < 0)
return ret;
else {
+ inode->i_mode = mode;
if (ret == 0)
acl = NULL;
-
- ret = ocfs2_acl_set_mode(inode, di_bh,
- handle, mode);
- if (ret)
- return ret;
-
}
}
break;
int ocfs2_check_acl(struct inode *inode, int mask)
{
- struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
- struct buffer_head *di_bh = NULL;
- struct posix_acl *acl;
- int ret = -EAGAIN;
-
- if (!(osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL))
- return ret;
-
- ret = ocfs2_read_inode_block(inode, &di_bh);
- if (ret < 0) {
- mlog_errno(ret);
- return ret;
- }
-
- acl = ocfs2_get_acl_nolock(inode, ACL_TYPE_ACCESS, di_bh);
+ struct posix_acl *acl = ocfs2_get_acl(inode, ACL_TYPE_ACCESS);
- brelse(di_bh);
-
- if (IS_ERR(acl)) {
- mlog_errno(PTR_ERR(acl));
+ if (IS_ERR(acl))
return PTR_ERR(acl);
- }
if (acl) {
- ret = posix_acl_permission(inode, acl, mask);
+ int ret = posix_acl_permission(inode, acl, mask);
posix_acl_release(acl);
return ret;
}
{
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
struct posix_acl *acl = NULL;
- int ret = 0, ret2;
- mode_t mode;
+ int ret = 0;
if (!S_ISLNK(inode->i_mode)) {
if (osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) {
if (IS_ERR(acl))
return PTR_ERR(acl);
}
- if (!acl) {
- mode = inode->i_mode & ~current_umask();
- ret = ocfs2_acl_set_mode(inode, di_bh, handle, mode);
- if (ret) {
- mlog_errno(ret);
- goto cleanup;
- }
- }
+ if (!acl)
+ inode->i_mode &= ~current_umask();
}
if ((osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) && acl) {
struct posix_acl *clone;
+ mode_t mode;
if (S_ISDIR(inode->i_mode)) {
ret = ocfs2_set_acl(handle, inode, di_bh,
mode = inode->i_mode;
ret = posix_acl_create_masq(clone, &mode);
if (ret >= 0) {
- ret2 = ocfs2_acl_set_mode(inode, di_bh, handle, mode);
- if (ret2) {
- mlog_errno(ret2);
- ret = ret2;
- goto cleanup;
- }
+ inode->i_mode = mode;
if (ret > 0) {
ret = ocfs2_set_acl(handle, inode,
di_bh, ACL_TYPE_ACCESS,
*
* The array index of the subtree root is passed back.
*/
-int ocfs2_find_subtree_root(struct ocfs2_extent_tree *et,
- struct ocfs2_path *left,
- struct ocfs2_path *right)
+static int ocfs2_find_subtree_root(struct ocfs2_extent_tree *et,
+ struct ocfs2_path *left,
+ struct ocfs2_path *right)
{
int i = 0;
* This looks similar, but is subtly different to
* ocfs2_find_cpos_for_left_leaf().
*/
-int ocfs2_find_cpos_for_right_leaf(struct super_block *sb,
- struct ocfs2_path *path, u32 *cpos)
+static int ocfs2_find_cpos_for_right_leaf(struct super_block *sb,
+ struct ocfs2_path *path, u32 *cpos)
{
int i, j, ret = 0;
u64 blkno;
int ocfs2_journal_access_path(struct ocfs2_caching_info *ci,
handle_t *handle,
struct ocfs2_path *path);
-int ocfs2_find_cpos_for_right_leaf(struct super_block *sb,
- struct ocfs2_path *path, u32 *cpos);
-int ocfs2_find_subtree_root(struct ocfs2_extent_tree *et,
- struct ocfs2_path *left,
- struct ocfs2_path *right);
#endif /* OCFS2_ALLOC_H */
goto bail;
}
- /* We should already CoW the refcounted extent in case of create. */
- BUG_ON(create && (ext_flags & OCFS2_EXT_REFCOUNTED));
-
+ /* We should already CoW the refcounted extent. */
+ BUG_ON(ext_flags & OCFS2_EXT_REFCOUNTED);
/*
* get_more_blocks() expects us to describe a hole by clearing
* the mapped bit on bh_result().
struct buffer_head *bh)
{
int ret = 0;
- struct ocfs2_dinode *di = (struct ocfs2_dinode *)bh->b_data;
mlog_entry_void();
get_bh(bh); /* for end_buffer_write_sync() */
bh->b_end_io = end_buffer_write_sync;
- ocfs2_compute_meta_ecc(osb->sb, bh->b_data, &di->i_check);
submit_bh(WRITE, bh);
wait_on_buffer(bh);
if ((count + *ppos) > i_size_read(inode))
readlen = i_size_read(inode) - *ppos;
else
- readlen = count;
+ readlen = count - *ppos;
lvb_buf = kmalloc(readlen, GFP_NOFS);
if (!lvb_buf)
atomic_dec(&dlm->res_cur_count);
+ dlm_put(dlm);
+
if (!hlist_unhashed(&res->hash_node) ||
!list_empty(&res->granted) ||
!list_empty(&res->converting) ||
res->migration_pending = 0;
res->inflight_locks = 0;
+ /* put in dlm_lockres_release */
+ dlm_grab(dlm);
res->dlm = dlm;
kref_init(&res->refs);
/* check for pre-existing lock */
spin_lock(&dlm->spinlock);
res = __dlm_lookup_lockres(dlm, name, namelen, hash);
+ spin_lock(&dlm->master_lock);
+
if (res) {
spin_lock(&res->spinlock);
if (res->state & DLM_LOCK_RES_RECOVERING) {
spin_unlock(&res->spinlock);
}
- spin_lock(&dlm->master_lock);
/* ignore status. only nonzero status would BUG. */
ret = dlm_add_migration_mle(dlm, res, mle, &oldmle,
name, namelen,
migrate->new_master,
migrate->master);
- spin_unlock(&dlm->master_lock);
unlock:
+ spin_unlock(&dlm->master_lock);
spin_unlock(&dlm->spinlock);
if (oldmle) {
struct list_head *queue;
struct dlm_lock *lock, *next;
- assert_spin_locked(&dlm->spinlock);
- assert_spin_locked(&res->spinlock);
res->state |= DLM_LOCK_RES_RECOVERING;
if (!list_empty(&res->recovering)) {
mlog(0,
/* zero the lvb if necessary */
dlm_revalidate_lvb(dlm, res, dead_node);
if (res->owner == dead_node) {
- if (res->state & DLM_LOCK_RES_DROPPING_REF) {
- mlog(ML_NOTICE, "Ignore %.*s for "
- "recovery as it is being freed\n",
- res->lockname.len,
- res->lockname.name);
- } else
- dlm_move_lockres_to_recovery_list(dlm,
- res);
+ if (res->state & DLM_LOCK_RES_DROPPING_REF)
+ mlog(0, "%s:%.*s: owned by "
+ "dead node %u, this node was "
+ "dropping its ref when it died. "
+ "continue, dropping the flag.\n",
+ dlm->name, res->lockname.len,
+ res->lockname.name, dead_node);
+
+ /* the wake_up for this will happen when the
+ * RECOVERING flag is dropped later */
+ res->state &= ~DLM_LOCK_RES_DROPPING_REF;
+ dlm_move_lockres_to_recovery_list(dlm, res);
} else if (res->owner == dlm->node_num) {
dlm_free_dead_locks(dlm, res, dead_node);
__dlm_lockres_calc_usage(dlm, res);
* truly ready to be freed. */
int __dlm_lockres_unused(struct dlm_lock_resource *res)
{
- int bit;
-
- if (__dlm_lockres_has_locks(res))
- return 0;
-
- if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY)
- return 0;
-
- if (res->state & DLM_LOCK_RES_RECOVERING)
- return 0;
-
- bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0);
- if (bit < O2NM_MAX_NODES)
- return 0;
-
- /*
- * since the bit for dlm->node_num is not set, inflight_locks better
- * be zero
- */
- BUG_ON(res->inflight_locks != 0);
- return 1;
+ if (!__dlm_lockres_has_locks(res) &&
+ (list_empty(&res->dirty) && !(res->state & DLM_LOCK_RES_DIRTY))) {
+ /* try not to scan the bitmap unless the first two
+ * conditions are already true */
+ int bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0);
+ if (bit >= O2NM_MAX_NODES) {
+ /* since the bit for dlm->node_num is not
+ * set, inflight_locks better be zero */
+ BUG_ON(res->inflight_locks != 0);
+ return 1;
+ }
+ }
+ return 0;
}
spin_unlock(&dlm->spinlock);
}
-static void dlm_purge_lockres(struct dlm_ctxt *dlm,
+static int dlm_purge_lockres(struct dlm_ctxt *dlm,
struct dlm_lock_resource *res)
{
int master;
int ret = 0;
- assert_spin_locked(&dlm->spinlock);
- assert_spin_locked(&res->spinlock);
+ spin_lock(&res->spinlock);
+ if (!__dlm_lockres_unused(res)) {
+ mlog(0, "%s:%.*s: tried to purge but not unused\n",
+ dlm->name, res->lockname.len, res->lockname.name);
+ __dlm_print_one_lock_resource(res);
+ spin_unlock(&res->spinlock);
+ BUG();
+ }
+
+ if (res->state & DLM_LOCK_RES_MIGRATING) {
+ mlog(0, "%s:%.*s: Delay dropref as this lockres is "
+ "being remastered\n", dlm->name, res->lockname.len,
+ res->lockname.name);
+ /* Re-add the lockres to the end of the purge list */
+ if (!list_empty(&res->purge)) {
+ list_del_init(&res->purge);
+ list_add_tail(&res->purge, &dlm->purge_list);
+ }
+ spin_unlock(&res->spinlock);
+ return 0;
+ }
master = (res->owner == dlm->node_num);
+ if (!master)
+ res->state |= DLM_LOCK_RES_DROPPING_REF;
+ spin_unlock(&res->spinlock);
mlog(0, "purging lockres %.*s, master = %d\n", res->lockname.len,
res->lockname.name, master);
if (!master) {
- res->state |= DLM_LOCK_RES_DROPPING_REF;
/* drop spinlock... retake below */
- spin_unlock(&res->spinlock);
spin_unlock(&dlm->spinlock);
spin_lock(&res->spinlock);
mlog(0, "%s:%.*s: dlm_deref_lockres returned %d\n",
dlm->name, res->lockname.len, res->lockname.name, ret);
spin_lock(&dlm->spinlock);
- spin_lock(&res->spinlock);
}
+ spin_lock(&res->spinlock);
if (!list_empty(&res->purge)) {
mlog(0, "removing lockres %.*s:%p from purgelist, "
"master = %d\n", res->lockname.len, res->lockname.name,
res, master);
list_del_init(&res->purge);
+ spin_unlock(&res->spinlock);
dlm_lockres_put(res);
dlm->purge_count--;
- }
-
- if (!__dlm_lockres_unused(res)) {
- mlog(ML_ERROR, "found lockres %s:%.*s: in use after deref\n",
- dlm->name, res->lockname.len, res->lockname.name);
- __dlm_print_one_lock_resource(res);
- BUG();
- }
+ } else
+ spin_unlock(&res->spinlock);
__dlm_unhash_lockres(res);
/* lockres is not in the hash now. drop the flag and wake up
* any processes waiting in dlm_get_lock_resource. */
if (!master) {
+ spin_lock(&res->spinlock);
res->state &= ~DLM_LOCK_RES_DROPPING_REF;
spin_unlock(&res->spinlock);
wake_up(&res->wq);
- } else
- spin_unlock(&res->spinlock);
+ }
+ return 0;
}
static void dlm_run_purge_list(struct dlm_ctxt *dlm,
lockres = list_entry(dlm->purge_list.next,
struct dlm_lock_resource, purge);
+ /* Status of the lockres *might* change so double
+ * check. If the lockres is unused, holding the dlm
+ * spinlock will prevent people from getting any more
+ * refs on it -- there's no need to keep the lockres
+ * spinlock. */
spin_lock(&lockres->spinlock);
+ unused = __dlm_lockres_unused(lockres);
+ spin_unlock(&lockres->spinlock);
+
+ if (!unused)
+ continue;
purge_jiffies = lockres->last_used +
msecs_to_jiffies(DLM_PURGE_INTERVAL_MS);
* in tail order, we can stop at the first
* unpurgable resource -- anyone added after
* him will have a greater last_used value */
- spin_unlock(&lockres->spinlock);
break;
}
- /* Status of the lockres *might* change so double
- * check. If the lockres is unused, holding the dlm
- * spinlock will prevent people from getting and more
- * refs on it. */
- unused = __dlm_lockres_unused(lockres);
- if (!unused ||
- (lockres->state & DLM_LOCK_RES_MIGRATING)) {
- mlog(0, "lockres %s:%.*s: is in use or "
- "being remastered, used %d, state %d\n",
- dlm->name, lockres->lockname.len,
- lockres->lockname.name, !unused, lockres->state);
- list_move_tail(&dlm->purge_list, &lockres->purge);
- spin_unlock(&lockres->spinlock);
- continue;
- }
-
dlm_lockres_get(lockres);
- dlm_purge_lockres(dlm, lockres);
+ /* This may drop and reacquire the dlm spinlock if it
+ * has to do migration. */
+ if (dlm_purge_lockres(dlm, lockres))
+ BUG();
dlm_lockres_put(lockres);
OCFS2_BH_IGNORE_CACHE);
} else {
status = ocfs2_read_blocks_sync(osb, args->fi_blkno, 1, &bh);
- /*
- * If buffer is in jbd, then its checksum may not have been
- * computed as yet.
- */
- if (!status && !buffer_jbd(bh))
+ if (!status)
status = ocfs2_validate_inode_block(osb->sb, bh);
}
if (status < 0) {
handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
if (IS_ERR(handle)) {
status = PTR_ERR(handle);
- handle = NULL;
mlog_errno(status);
goto out;
}
if (!(fl->fl_flags & FL_POSIX))
return -ENOLCK;
- if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK)
+ if (__mandatory_lock(inode))
return -ENOLCK;
return ocfs2_plock(osb->cconn, OCFS2_I(inode)->ip_blkno, file, cmd, fl);
return 0;
}
-/*
- * Find the end range for a leaf refcount block indicated by
- * el->l_recs[index].e_blkno.
- */
-static int ocfs2_get_refcount_cpos_end(struct ocfs2_caching_info *ci,
- struct buffer_head *ref_root_bh,
- struct ocfs2_extent_block *eb,
- struct ocfs2_extent_list *el,
- int index, u32 *cpos_end)
-{
- int ret, i, subtree_root;
- u32 cpos;
- u64 blkno;
- struct super_block *sb = ocfs2_metadata_cache_get_super(ci);
- struct ocfs2_path *left_path = NULL, *right_path = NULL;
- struct ocfs2_extent_tree et;
- struct ocfs2_extent_list *tmp_el;
-
- if (index < le16_to_cpu(el->l_next_free_rec) - 1) {
- /*
- * We have a extent rec after index, so just use the e_cpos
- * of the next extent rec.
- */
- *cpos_end = le32_to_cpu(el->l_recs[index+1].e_cpos);
- return 0;
- }
-
- if (!eb || (eb && !eb->h_next_leaf_blk)) {
- /*
- * We are the last extent rec, so any high cpos should
- * be stored in this leaf refcount block.
- */
- *cpos_end = UINT_MAX;
- return 0;
- }
-
- /*
- * If the extent block isn't the last one, we have to find
- * the subtree root between this extent block and the next
- * leaf extent block and get the corresponding e_cpos from
- * the subroot. Otherwise we may corrupt the b-tree.
- */
- ocfs2_init_refcount_extent_tree(&et, ci, ref_root_bh);
-
- left_path = ocfs2_new_path_from_et(&et);
- if (!left_path) {
- ret = -ENOMEM;
- mlog_errno(ret);
- goto out;
- }
-
- cpos = le32_to_cpu(eb->h_list.l_recs[index].e_cpos);
- ret = ocfs2_find_path(ci, left_path, cpos);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
-
- right_path = ocfs2_new_path_from_path(left_path);
- if (!right_path) {
- ret = -ENOMEM;
- mlog_errno(ret);
- goto out;
- }
-
- ret = ocfs2_find_cpos_for_right_leaf(sb, left_path, &cpos);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
-
- ret = ocfs2_find_path(ci, right_path, cpos);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
-
- subtree_root = ocfs2_find_subtree_root(&et, left_path,
- right_path);
-
- tmp_el = left_path->p_node[subtree_root].el;
- blkno = left_path->p_node[subtree_root+1].bh->b_blocknr;
- for (i = 0; i < le32_to_cpu(tmp_el->l_next_free_rec); i++) {
- if (le64_to_cpu(tmp_el->l_recs[i].e_blkno) == blkno) {
- *cpos_end = le32_to_cpu(tmp_el->l_recs[i+1].e_cpos);
- break;
- }
- }
-
- BUG_ON(i == le32_to_cpu(tmp_el->l_next_free_rec));
-
-out:
- ocfs2_free_path(left_path);
- ocfs2_free_path(right_path);
- return ret;
-}
-
/*
* Given a cpos and len, try to find the refcount record which contains cpos.
* 1. If cpos can be found in one refcount record, return the record.
struct buffer_head **ret_bh)
{
int ret = 0, i, found;
- u32 low_cpos, uninitialized_var(cpos_end);
+ u32 low_cpos;
struct ocfs2_extent_list *el;
- struct ocfs2_extent_rec *rec = NULL;
- struct ocfs2_extent_block *eb = NULL;
+ struct ocfs2_extent_rec *tmp, *rec = NULL;
+ struct ocfs2_extent_block *eb;
struct buffer_head *eb_bh = NULL, *ref_leaf_bh = NULL;
struct super_block *sb = ocfs2_metadata_cache_get_super(ci);
struct ocfs2_refcount_block *rb =
}
}
- if (found) {
- ret = ocfs2_get_refcount_cpos_end(ci, ref_root_bh,
- eb, el, i, &cpos_end);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
+ /* adjust len when we have ocfs2_extent_rec after it. */
+ if (found && i < le16_to_cpu(el->l_next_free_rec) - 1) {
+ tmp = &el->l_recs[i+1];
- if (cpos_end < low_cpos + len)
- len = cpos_end - low_cpos;
+ if (le32_to_cpu(tmp->e_cpos) < cpos + len)
+ len = le32_to_cpu(tmp->e_cpos) - cpos;
}
ret = ocfs2_read_refcount_block(ci, le64_to_cpu(rec->e_blkno),
len = min((u64)cpos + clusters, le64_to_cpu(rec.r_cpos) +
le32_to_cpu(rec.r_clusters)) - cpos;
/*
+ * If the refcount rec already exist, cool. We just need
+ * to check whether there is a split. Otherwise we just need
+ * to increase the refcount.
+ * If we will insert one, increases recs_add.
+ *
* We record all the records which will be inserted to the
* same refcount block, so that we can tell exactly whether
* we need a new refcount block or not.
- *
- * If we will insert a new one, this is easy and only happens
- * during adding refcounted flag to the extent, so we don't
- * have a chance of spliting. We just need one record.
- *
- * If the refcount rec already exists, that would be a little
- * complicated. we may have to:
- * 1) split at the beginning if the start pos isn't aligned.
- * we need 1 more record in this case.
- * 2) split int the end if the end pos isn't aligned.
- * we need 1 more record in this case.
- * 3) split in the middle because of file system fragmentation.
- * we need 2 more records in this case(we can't detect this
- * beforehand, so always think of the worst case).
*/
if (rec.r_refcount) {
- recs_add += 2;
/* Check whether we need a split at the beginning. */
if (cpos == start_cpos &&
cpos != le64_to_cpu(rec.r_cpos))
di->i_attr = s_di->i_attr;
if (preserve) {
- t_inode->i_uid = s_inode->i_uid;
- t_inode->i_gid = s_inode->i_gid;
- t_inode->i_mode = s_inode->i_mode;
di->i_uid = s_di->i_uid;
di->i_gid = s_di->i_gid;
di->i_mode = s_di->i_mode;
#define do_error(fmt, ...) \
do{ \
- if (resize) \
+ if (clean_error) \
mlog(ML_ERROR, fmt "\n", ##__VA_ARGS__); \
else \
ocfs2_error(sb, fmt, ##__VA_ARGS__); \
static int ocfs2_validate_gd_self(struct super_block *sb,
struct buffer_head *bh,
- int resize)
+ int clean_error)
{
struct ocfs2_group_desc *gd = (struct ocfs2_group_desc *)bh->b_data;
static int ocfs2_validate_gd_parent(struct super_block *sb,
struct ocfs2_dinode *di,
struct buffer_head *bh,
- int resize)
+ int clean_error)
{
unsigned int max_bits;
struct ocfs2_group_desc *gd = (struct ocfs2_group_desc *)bh->b_data;
return -EINVAL;
}
- /* In resize, we may meet the case bg_chain == cl_next_free_rec. */
- if ((le16_to_cpu(gd->bg_chain) >
- le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) ||
- ((le16_to_cpu(gd->bg_chain) ==
- le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) && !resize)) {
+ if (le16_to_cpu(gd->bg_chain) >=
+ le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) {
do_error("Group descriptor #%llu has bad chain %u",
(unsigned long long)bh->b_blocknr,
le16_to_cpu(gd->bg_chain));
if (!ocfs2_is_hard_readonly(osb))
ocfs2_set_journal_params(osb);
-
- sb->s_flags = (sb->s_flags & ~MS_POSIXACL) |
- ((osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) ?
- MS_POSIXACL : 0);
}
out:
unlock_kernel();
}
/* Fast symlinks can't be large */
- len = strnlen(target, ocfs2_fast_symlink_chars(inode->i_sb));
+ len = strlen(target);
link = kzalloc(len + 1, GFP_NOFS);
if (!link) {
status = -ENOMEM;
} *label;
unsigned char *data;
Sector sect;
- sector_t labelsect;
res = 0;
blocksize = bdev_logical_block_size(bdev);
ioctl_by_bdev(bdev, HDIO_GETGEO, (unsigned long)geo) != 0)
goto out_freeall;
- /*
- * Special case for FBA disks: label sector does not depend on
- * blocksize.
- */
- if ((info->cu_type == 0x6310 && info->dev_type == 0x9336) ||
- (info->cu_type == 0x3880 && info->dev_type == 0x3370))
- labelsect = info->label_block;
- else
- labelsect = info->label_block * (blocksize >> 9);
-
/*
* Get volume label, extract name and type.
*/
- data = read_dev_sector(bdev, labelsect, &sect);
+ data = read_dev_sector(bdev, info->label_block*(blocksize/512), &sect);
if (data == NULL)
goto out_readerr;
*/
#include <asm/unaligned.h>
-#define SYS_IND(p) get_unaligned(&p->sys_ind)
+#define SYS_IND(p) (get_unaligned(&p->sys_ind))
+#define NR_SECTS(p) ({ __le32 __a = get_unaligned(&p->nr_sects); \
+ le32_to_cpu(__a); \
+ })
-static inline sector_t nr_sects(struct partition *p)
-{
- return (sector_t)get_unaligned_le32(&p->nr_sects);
-}
-
-static inline sector_t start_sect(struct partition *p)
-{
- return (sector_t)get_unaligned_le32(&p->start_sect);
-}
+#define START_SECT(p) ({ __le32 __a = get_unaligned(&p->start_sect); \
+ le32_to_cpu(__a); \
+ })
static inline int is_extended_partition(struct partition *p)
{
static void
parse_extended(struct parsed_partitions *state, struct block_device *bdev,
- sector_t first_sector, sector_t first_size)
+ u32 first_sector, u32 first_size)
{
struct partition *p;
Sector sect;
unsigned char *data;
- sector_t this_sector, this_size;
- sector_t sector_size = bdev_logical_block_size(bdev) / 512;
+ u32 this_sector, this_size;
+ int sector_size = bdev_logical_block_size(bdev) / 512;
int loopct = 0; /* number of links followed
without finding a data partition */
int i;
* First process the data partition(s)
*/
for (i=0; i<4; i++, p++) {
- sector_t offs, size, next;
- if (!nr_sects(p) || is_extended_partition(p))
+ u32 offs, size, next;
+ if (!NR_SECTS(p) || is_extended_partition(p))
continue;
/* Check the 3rd and 4th entries -
these sometimes contain random garbage */
- offs = start_sect(p)*sector_size;
- size = nr_sects(p)*sector_size;
+ offs = START_SECT(p)*sector_size;
+ size = NR_SECTS(p)*sector_size;
next = this_sector + offs;
if (i >= 2) {
if (offs + size > this_size)
*/
p -= 4;
for (i=0; i<4; i++, p++)
- if (nr_sects(p) && is_extended_partition(p))
+ if (NR_SECTS(p) && is_extended_partition(p))
break;
if (i == 4)
goto done; /* nothing left to do */
- this_sector = first_sector + start_sect(p) * sector_size;
- this_size = nr_sects(p) * sector_size;
+ this_sector = first_sector + START_SECT(p) * sector_size;
+ this_size = NR_SECTS(p) * sector_size;
put_dev_sector(sect);
}
done:
static void
parse_solaris_x86(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_SOLARIS_X86_PARTITION
Sector sect;
*/
static void
parse_bsd(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin, char *flavour,
+ u32 offset, u32 size, int origin, char *flavour,
int max_partitions)
{
Sector sect;
if (le16_to_cpu(l->d_npartitions) < max_partitions)
max_partitions = le16_to_cpu(l->d_npartitions);
for (p = l->d_partitions; p - l->d_partitions < max_partitions; p++) {
- sector_t bsd_start, bsd_size;
+ u32 bsd_start, bsd_size;
if (state->next == state->limit)
break;
static void
parse_freebsd(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_BSD_DISKLABEL
parse_bsd(state, bdev, offset, size, origin,
static void
parse_netbsd(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_BSD_DISKLABEL
parse_bsd(state, bdev, offset, size, origin,
static void
parse_openbsd(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_BSD_DISKLABEL
parse_bsd(state, bdev, offset, size, origin,
*/
static void
parse_unixware(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_UNIXWARE_DISKLABEL
Sector sect;
if (p->s_label != UNIXWARE_FS_UNUSED)
put_partition(state, state->next++,
- le32_to_cpu(p->start_sect),
- le32_to_cpu(p->nr_sects));
+ START_SECT(p), NR_SECTS(p));
p++;
}
put_dev_sector(sect);
*/
static void
parse_minix(struct parsed_partitions *state, struct block_device *bdev,
- sector_t offset, sector_t size, int origin)
+ u32 offset, u32 size, int origin)
{
#ifdef CONFIG_MINIX_SUBPARTITION
Sector sect;
/* add each partition in use */
if (SYS_IND(p) == MINIX_PARTITION)
put_partition(state, state->next++,
- start_sect(p), nr_sects(p));
+ START_SECT(p), NR_SECTS(p));
}
printk(" >\n");
}
static struct {
unsigned char id;
void (*parse)(struct parsed_partitions *, struct block_device *,
- sector_t, sector_t, int);
+ u32, u32, int);
} subtypes[] = {
{FREEBSD_PARTITION, parse_freebsd},
{NETBSD_PARTITION, parse_netbsd},
int msdos_partition(struct parsed_partitions *state, struct block_device *bdev)
{
- sector_t sector_size = bdev_logical_block_size(bdev) / 512;
+ int sector_size = bdev_logical_block_size(bdev) / 512;
Sector sect;
unsigned char *data;
struct partition *p;
state->next = 5;
for (slot = 1 ; slot <= 4 ; slot++, p++) {
- sector_t start = start_sect(p)*sector_size;
- sector_t size = nr_sects(p)*sector_size;
+ u32 start = START_SECT(p)*sector_size;
+ u32 size = NR_SECTS(p)*sector_size;
if (!size)
continue;
if (is_extended_partition(p)) {
- /*
- * prevent someone doing mkfs or mkswap on an
- * extended partition, but leave room for LILO
- * FIXME: this uses one logical sector for > 512b
- * sector, although it may not be enough/proper.
- */
- sector_t n = 2;
- n = min(size, max(sector_size, n));
- put_partition(state, slot, start, n);
-
+ /* prevent someone doing mkfs or mkswap on an
+ extended partition, but leave room for LILO */
+ put_partition(state, slot, start, size == 1 ? 1 : 2);
printk(" <");
parse_extended(state, bdev, start, size);
printk(" >");
unsigned char id = SYS_IND(p);
int n;
- if (!nr_sects(p))
+ if (!NR_SECTS(p))
continue;
for (n = 0; subtypes[n].parse && id != subtypes[n].id; n++)
if (!subtypes[n].parse)
continue;
- subtypes[n].parse(state, bdev, start_sect(p)*sector_size,
- nr_sects(p)*sector_size, slot);
+ subtypes[n].parse(state, bdev, START_SECT(p)*sector_size,
+ NR_SECTS(p)*sector_size, slot);
}
put_dev_sector(sect);
return 1;
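For illustration only (the device geometry and MBR values below are assumed, not taken from the change above): bdev_logical_block_size() returns the logical block size in bytes, so the conversion into the 512-byte units handed to put_partition() works out as in this minimal sketch.

	/* Hypothetical example of the unit conversion performed above:
	 * a 4096-byte logical-block device gives sector_size = 8, so an
	 * MBR entry is scaled from device sectors to 512-byte sectors. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int sector_size = 4096 / 512;		/* = 8 */
		unsigned int start_sect = 256, nr_sects = 1048576;

		printf("start=%u size=%u (512-byte units)\n",
		       start_sect * sector_size,	/* 2048 */
		       nr_sects * sector_size);		/* 8388608, i.e. 4 GiB */
		return 0;
	}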
error = ops->confirm(pipe, buf);
if (error) {
if (!ret)
- ret = error;
+ error = ret;
break;
}
#include <linux/pid_namespace.h>
#include <linux/ptrace.h>
#include <linux/tracehook.h>
+#include <linux/swapops.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
p->nivcsw);
}
+#ifdef CONFIG_MMU
+
+struct stack_stats {
+ struct vm_area_struct *vma;
+ unsigned long startpage;
+ unsigned long usage;
+};
+
+static int stack_usage_pte_range(pmd_t *pmd, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+{
+ struct stack_stats *ss = walk->private;
+ struct vm_area_struct *vma = ss->vma;
+ pte_t *pte, ptent;
+ spinlock_t *ptl;
+ int ret = 0;
+
+ pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+ for (; addr != end; pte++, addr += PAGE_SIZE) {
+ ptent = *pte;
+
+#ifdef CONFIG_STACK_GROWSUP
+ if (pte_present(ptent) || is_swap_pte(ptent))
+ ss->usage = addr - ss->startpage + PAGE_SIZE;
+#else
+ if (pte_present(ptent) || is_swap_pte(ptent)) {
+ ss->usage = ss->startpage - addr + PAGE_SIZE;
+ pte++;
+ ret = 1;
+ break;
+ }
+#endif
+ }
+ pte_unmap_unlock(pte - 1, ptl);
+ cond_resched();
+ return ret;
+}
+
+static inline unsigned long get_stack_usage_in_bytes(struct vm_area_struct *vma,
+ struct task_struct *task)
+{
+ struct stack_stats ss;
+ struct mm_walk stack_walk = {
+ .pmd_entry = stack_usage_pte_range,
+ .mm = vma->vm_mm,
+ .private = &ss,
+ };
+
+ if (!vma->vm_mm || is_vm_hugetlb_page(vma))
+ return 0;
+
+ ss.vma = vma;
+ ss.startpage = task->stack_start & PAGE_MASK;
+ ss.usage = 0;
+
+#ifdef CONFIG_STACK_GROWSUP
+ walk_page_range(KSTK_ESP(task) & PAGE_MASK, vma->vm_end,
+ &stack_walk);
+#else
+ walk_page_range(vma->vm_start, (KSTK_ESP(task) & PAGE_MASK) + PAGE_SIZE,
+ &stack_walk);
+#endif
+ return ss.usage;
+}
+
+static inline void task_show_stack_usage(struct seq_file *m,
+ struct task_struct *task)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = get_task_mm(task);
+
+ if (mm) {
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, task->stack_start);
+ if (vma)
+ seq_printf(m, "Stack usage:\t%lu kB\n",
+ get_stack_usage_in_bytes(vma, task) >> 10);
+
+ up_read(&mm->mmap_sem);
+ mmput(mm);
+ }
+}
+#else
+static void task_show_stack_usage(struct seq_file *m, struct task_struct *task)
+{
+}
+#endif /* CONFIG_MMU */
+
int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task)
{
task_show_regs(m, task);
#endif
task_context_switch_counts(m, task);
+ task_show_stack_usage(m, task);
return 0;
}
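As a hedged illustration (not part of the diff itself): the hunks above add a "Stack usage:" line to /proc/<pid>/status, and a minimal userspace sketch that reads the field back for the current process could look like the following; only the field name is taken from the code above, everything else is assumed.

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/self/status", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f)) {
			/* matches the "Stack usage:\t%lu kB" line emitted above */
			if (!strncmp(line, "Stack usage:", 12))
				fputs(line, stdout);
		}
		fclose(f);
		return 0;
	}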
/* add up live thread stats at the group level */
if (whole) {
+ struct task_cputime cputime;
struct task_struct *t = task;
do {
min_flt += t->min_flt;
min_flt += sig->min_flt;
maj_flt += sig->maj_flt;
- thread_group_times(task, &utime, &stime);
+ thread_group_cputime(task, &cputime);
+ utime = cputime.utime;
+ stime = cputime.stime;
gtime = cputime_add(gtime, sig->gtime);
}
rsslim,
mm ? mm->start_code : 0,
mm ? mm->end_code : 0,
- (permitted && mm) ? mm->start_stack : 0,
+ (permitted && mm) ? task->stack_start : 0,
esp,
eip,
/* The signal information here is obsolete.
unsigned long badness(struct task_struct *p, unsigned long uptime);
static int proc_oom_score(struct task_struct *task, char *buffer)
{
- unsigned long points = 0;
+ unsigned long points;
struct timespec uptime;
do_posix_clock_monotonic_gettime(&uptime);
read_lock(&tasklist_lock);
- if (pid_alive(task))
- points = badness(task, uptime.tv_sec);
+ points = badness(task->group_leader, uptime.tv_sec);
read_unlock(&tasklist_lock);
return sprintf(buffer, "%lu\n", points);
}
{
struct pid_namespace *ns = dentry->d_sb->s_fs_info;
pid_t tgid = task_tgid_nr_ns(current, ns);
- char *name = ERR_PTR(-ENOENT);
- if (tgid) {
- name = __getname();
- if (!name)
- name = ERR_PTR(-ENOMEM);
- else
- sprintf(name, "%d", tgid);
- }
- nd_set_link(nd, name);
- return NULL;
-}
-
-static void proc_self_put_link(struct dentry *dentry, struct nameidata *nd,
- void *cookie)
-{
- char *s = nd_get_link(nd);
- if (!IS_ERR(s))
- __putname(s);
+ char tmp[PROC_NUMBUF];
+ if (!tgid)
+ return ERR_PTR(-ENOENT);
+ sprintf(tmp, "%d", task_tgid_nr_ns(current, ns));
+ return ERR_PTR(vfs_follow_link(nd,tmp));
}
static const struct inode_operations proc_self_inode_operations = {
.readlink = proc_self_readlink,
.follow_link = proc_self_follow_link,
- .put_link = proc_self_put_link,
};
/*
*/
static const struct pid_entry tid_base_stuff[] = {
DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations),
- DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations),
+ DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fd_operations),
REG("environ", S_IRUSR, proc_environ_operations),
INF("auxv", S_IRUSR, proc_pid_auxv),
ONE("status", S_IRUGO, proc_pid_status),
int flags = vma->vm_flags;
unsigned long ino = 0;
unsigned long long pgoff = 0;
- unsigned long start;
dev_t dev = 0;
int len;
pgoff = ((loff_t)vma->vm_pgoff) << PAGE_SHIFT;
}
- /* We don't show the stack guard page in /proc/maps */
- start = vma->vm_start;
- if (vma->vm_flags & VM_GROWSDOWN)
- if (!vma_stack_continue(vma->vm_prev, vma->vm_start))
- start += PAGE_SIZE;
-
seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu %n",
- start,
+ vma->vm_start,
vma->vm_end,
flags & VM_READ ? 'r' : '-',
flags & VM_WRITE ? 'w' : '-',
} else if (vma->vm_start <= mm->start_stack &&
vma->vm_end >= mm->start_stack) {
name = "[stack]";
+ } else {
+ unsigned long stack_start;
+ struct proc_maps_private *pmp;
+
+ pmp = m->private;
+ stack_start = pmp->task->stack_start;
+
+ if (vma->vm_start <= stack_start &&
+ vma->vm_end >= stack_start) {
+ pad_len_spaces(m, len);
+ seq_printf(m,
+ "[threadstack:%08lx]",
+#ifdef CONFIG_STACK_GROWSUP
+ vma->vm_end - stack_start
+#else
+ stack_start - vma->vm_start
+#endif
+ );
+ }
}
} else {
name = "[vdso]";
struct dqstats dqstats;
EXPORT_SYMBOL(dqstats);
-static qsize_t inode_get_rsv_space(struct inode *inode);
-
static inline unsigned int
hashfn(const struct super_block *sb, unsigned int id, int type)
{
static void add_dquot_ref(struct super_block *sb, int type)
{
struct inode *inode, *old_inode = NULL;
- int reserved = 0;
spin_lock(&inode_lock);
list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
if (inode->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE|I_NEW))
continue;
- if (unlikely(inode_get_rsv_space(inode) > 0))
- reserved = 1;
if (!atomic_read(&inode->i_writecount))
continue;
if (!dqinit_needed(inode, type))
}
spin_unlock(&inode_lock);
iput(old_inode);
-
- if (reserved) {
- printk(KERN_WARNING "VFS (%s): Writes happened before quota"
- " was turned on thus quota information is probably "
- "inconsistent. Please run quotacheck(8).\n", sb->s_id);
- }
}
/*
/*
* Claim reserved quota space
*/
-static void dquot_claim_reserved_space(struct dquot *dquot, qsize_t number)
+static void dquot_claim_reserved_space(struct dquot *dquot,
+ qsize_t number)
{
- if (dquot->dq_dqb.dqb_rsvspace < number) {
- WARN_ON_ONCE(1);
- number = dquot->dq_dqb.dqb_rsvspace;
- }
+ WARN_ON(dquot->dq_dqb.dqb_rsvspace < number);
dquot->dq_dqb.dqb_curspace += number;
dquot->dq_dqb.dqb_rsvspace -= number;
}
static inline
void dquot_free_reserved_space(struct dquot *dquot, qsize_t number)
{
- if (dquot->dq_dqb.dqb_rsvspace >= number)
- dquot->dq_dqb.dqb_rsvspace -= number;
- else {
- WARN_ON_ONCE(1);
- dquot->dq_dqb.dqb_rsvspace = 0;
- }
+ dquot->dq_dqb.dqb_rsvspace -= number;
}
static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
return QUOTA_NL_BHARDBELOW;
return QUOTA_NL_NOWARN;
}
-
/*
* Initialize quota pointers in inode
* We do things in a bit complicated way but by that we avoid calling
int cnt, ret = 0;
struct dquot *got[MAXQUOTAS] = { NULL, NULL };
struct super_block *sb = inode->i_sb;
- qsize_t rsv;
/* First test before acquiring mutex - solves deadlocks when we
* re-enter the quota code and are already holding the mutex */
if (!inode->i_dquot[cnt]) {
inode->i_dquot[cnt] = got[cnt];
got[cnt] = NULL;
- /*
- * Make quota reservation system happy if someone
- * did a write before quota was turned on
- */
- rsv = inode_get_rsv_space(inode);
- if (unlikely(rsv))
- dquot_resv_space(inode->i_dquot[cnt], rsv);
}
}
out_err:
return inode->i_sb->dq_op->get_reserved_space(inode);
}
-void inode_add_rsv_space(struct inode *inode, qsize_t number)
+static void inode_add_rsv_space(struct inode *inode, qsize_t number)
{
spin_lock(&inode->i_lock);
*inode_reserved_space(inode) += number;
spin_unlock(&inode->i_lock);
}
-EXPORT_SYMBOL(inode_add_rsv_space);
-void inode_claim_rsv_space(struct inode *inode, qsize_t number)
+
+static void inode_claim_rsv_space(struct inode *inode, qsize_t number)
{
spin_lock(&inode->i_lock);
*inode_reserved_space(inode) -= number;
__inode_add_bytes(inode, number);
spin_unlock(&inode->i_lock);
}
-EXPORT_SYMBOL(inode_claim_rsv_space);
-void inode_sub_rsv_space(struct inode *inode, qsize_t number)
+static void inode_sub_rsv_space(struct inode *inode, qsize_t number)
{
spin_lock(&inode->i_lock);
*inode_reserved_space(inode) -= number;
spin_unlock(&inode->i_lock);
}
-EXPORT_SYMBOL(inode_sub_rsv_space);
static qsize_t inode_get_rsv_space(struct inode *inode)
{
if (di->dqb_valid & QIF_SPACE) {
dm->dqb_curspace = di->dqb_curspace - dm->dqb_rsvspace;
check_blim = 1;
- set_bit(DQ_LASTSET_B + QIF_SPACE_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_SPACE_B, &dquot->dq_flags);
}
if (di->dqb_valid & QIF_BLIMITS) {
dm->dqb_bsoftlimit = qbtos(di->dqb_bsoftlimit);
dm->dqb_bhardlimit = qbtos(di->dqb_bhardlimit);
check_blim = 1;
- set_bit(DQ_LASTSET_B + QIF_BLIMITS_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_BLIMITS_B, &dquot->dq_flags);
}
if (di->dqb_valid & QIF_INODES) {
dm->dqb_curinodes = di->dqb_curinodes;
check_ilim = 1;
- set_bit(DQ_LASTSET_B + QIF_INODES_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_INODES_B, &dquot->dq_flags);
}
if (di->dqb_valid & QIF_ILIMITS) {
dm->dqb_isoftlimit = di->dqb_isoftlimit;
dm->dqb_ihardlimit = di->dqb_ihardlimit;
check_ilim = 1;
- set_bit(DQ_LASTSET_B + QIF_ILIMITS_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_ILIMITS_B, &dquot->dq_flags);
}
if (di->dqb_valid & QIF_BTIME) {
dm->dqb_btime = di->dqb_btime;
check_blim = 1;
- set_bit(DQ_LASTSET_B + QIF_BTIME_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_BTIME_B, &dquot->dq_flags);
}
if (di->dqb_valid & QIF_ITIME) {
dm->dqb_itime = di->dqb_itime;
check_ilim = 1;
- set_bit(DQ_LASTSET_B + QIF_ITIME_B, &dquot->dq_flags);
+ __set_bit(DQ_LASTSET_B + QIF_ITIME_B, &dquot->dq_flags);
}
if (check_blim) {
struct reiserfs_de_head *deh)
{
struct dentry *privroot = REISERFS_SB(dir->d_sb)->priv_root;
+ if (reiserfs_expose_privroot(dir->d_sb))
+ return 0;
return (dir == dir->d_parent && privroot->d_inode &&
deh->deh_objectid == INODE_PKEY(privroot->d_inode)->k_objectid);
}
brelse(d_bh);
return 1;
}
-
- if (bdev_read_only(sb->s_bdev)) {
- reiserfs_warning(sb, "clm-2076",
- "device is readonly, unable to replay log");
- brelse(c_bh);
- brelse(d_bh);
- return -EROFS;
- }
-
trans_id = get_desc_trans_id(desc);
/* now we know we've got a good transaction, and it was inside the valid time ranges */
log_blocks = kmalloc(get_desc_trans_len(desc) *
goto start_log_replay;
}
+ if (continue_replay && bdev_read_only(sb->s_bdev)) {
+ reiserfs_warning(sb, "clm-2076",
+ "device is readonly, unable to replay log");
+ return -1;
+ }
+
/* ok, there are transactions that need to be replayed. start with the first log block, find
** all the valid transactions, and pick out the oldest.
*/
if (!err && new_size < i_size_read(dentry->d_inode)) {
struct iattr newattrs = {
.ia_ctime = current_fs_time(inode->i_sb),
- .ia_size = new_size,
+ .ia_size = buffer_size,
.ia_valid = ATTR_SIZE | ATTR_CTIME,
};
mutex_lock_nested(&dentry->d_inode->i_mutex, I_MUTEX_XATTR);
return generic_permission(inode, mask, NULL);
}
-static int xattr_hide_revalidate(struct dentry *dentry, struct nameidata *nd)
+/* This will catch lookups from the fs root to .reiserfs_priv */
+static int
+xattr_lookup_poison(struct dentry *dentry, struct qstr *q1, struct qstr *name)
{
- return -EPERM;
+ struct dentry *priv_root = REISERFS_SB(dentry->d_sb)->priv_root;
+ if (container_of(q1, struct dentry, d_name) == priv_root)
+ return -ENOENT;
+ if (q1->len == name->len &&
+ !memcmp(q1->name, name->name, name->len))
+ return 0;
+ return 1;
}
static const struct dentry_operations xattr_lookup_poison_ops = {
- .d_revalidate = xattr_hide_revalidate,
+ .d_compare = xattr_lookup_poison,
};
int reiserfs_lookup_privroot(struct super_block *s)
strlen(PRIVROOT_NAME));
if (!IS_ERR(dentry)) {
REISERFS_SB(s)->priv_root = dentry;
- dentry->d_op = &xattr_lookup_poison_ops;
+ if (!reiserfs_expose_privroot(s))
+ s->s_root->d_op = &xattr_lookup_poison_ops;
if (dentry->d_inode)
dentry->d_inode->i_flags |= S_PRIVATE;
} else
return error;
}
- if (sec->length && reiserfs_xattrs_initialized(inode->i_sb)) {
+ if (sec->length) {
blocks = reiserfs_xattr_jcreate_nblocks(inode) +
reiserfs_xattr_nblocks(inode, sec->length);
/* We don't want to count the directories twice if we have
err |= __put_user(kinfo->si_tid, &uinfo->ssi_tid);
err |= __put_user(kinfo->si_overrun, &uinfo->ssi_overrun);
err |= __put_user((long) kinfo->si_ptr, &uinfo->ssi_ptr);
- err |= __put_user(kinfo->si_int, &uinfo->ssi_int);
break;
case __SI_POLL:
err |= __put_user(kinfo->si_band, &uinfo->ssi_band);
err |= __put_user(kinfo->si_pid, &uinfo->ssi_pid);
err |= __put_user(kinfo->si_uid, &uinfo->ssi_uid);
err |= __put_user((long) kinfo->si_ptr, &uinfo->ssi_ptr);
- err |= __put_user(kinfo->si_int, &uinfo->ssi_int);
break;
default:
/*
* If the page isn't uptodate, we may need to start io on it
*/
if (!PageUptodate(page)) {
- lock_page(page);
+ /*
+			 * If in nonblock mode then don't block on waiting
+ * for an in-flight io page
+ */
+ if (flags & SPLICE_F_NONBLOCK) {
+ if (!trylock_page(page)) {
+ error = -EAGAIN;
+ break;
+ }
+ } else
+ lock_page(page);
/*
* Page was truncated, or invalidated by the
char *p;
p = d_path(&file->f_path, last_sysfs_file, sizeof(last_sysfs_file));
- if (!IS_ERR(p))
+ if (p)
memmove(last_sysfs_file, p, strlen(p) + 1);
/* need attr_sd for attr and ops, its parent for kobj */
if (mode != inode->i_mode) {
struct iattr iattr;
- iattr.ia_valid = ATTR_MODE | ATTR_CTIME;
+ iattr.ia_valid = ATTR_MODE;
iattr.ia_mode = mode;
- iattr.ia_ctime = current_fs_time(inode->i_sb);
error = -xfs_setattr(XFS_I(inode), &iattr, XFS_ATTR_NOACL);
}
}
/*
- * Update on-disk file size now that data has been written to disk. The
- * current in-memory file size is i_size. If a write is beyond eof i_new_size
- * will be the intended file size until i_size is updated. If this write does
- * not extend all the way to the valid file size then restrict this update to
- * the end of the write.
- *
- * This function does not block as blocking on the inode lock in IO completion
- * can lead to IO completion order dependency deadlocks.. If it can't get the
- * inode ilock it will return EAGAIN. Callers must handle this.
+ * Update on-disk file size now that data has been written to disk.
+ * The current in-memory file size is i_size. If a write is beyond
+ * eof i_new_size will be the intended file size until i_size is
+ * updated. If this write does not extend all the way to the valid
+ * file size then restrict this update to the end of the write.
*/
-STATIC int
+
+STATIC void
xfs_setfilesize(
xfs_ioend_t *ioend)
{
ASSERT(ioend->io_type != IOMAP_READ);
if (unlikely(ioend->io_error))
- return 0;
-
- if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
- return EAGAIN;
+ return;
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
isize = xfs_ioend_new_eof(ioend);
if (isize) {
ip->i_d.di_size = isize;
}
xfs_iunlock(ip, XFS_ILOCK_EXCL);
- return 0;
-}
-
-/*
- * Schedule IO completion handling on a xfsdatad if this was
- * the final hold on this ioend. If we are asked to wait,
- * flush the workqueue.
- */
-STATIC void
-xfs_finish_ioend(
- xfs_ioend_t *ioend,
- int wait)
-{
- if (atomic_dec_and_test(&ioend->io_remaining)) {
- struct workqueue_struct *wq;
-
- wq = (ioend->io_type == IOMAP_UNWRITTEN) ?
- xfsconvertd_workqueue : xfsdatad_workqueue;
- queue_work(wq, &ioend->io_work);
- if (wait)
- flush_workqueue(wq);
- }
}
/*
{
xfs_ioend_t *ioend =
container_of(work, xfs_ioend_t, io_work);
- int error;
- /*
- * If we didn't complete processing of the ioend, requeue it to the
- * tail of the workqueue for another attempt later. Otherwise destroy
- * it.
- */
- error = xfs_setfilesize(ioend);
- if (error == EAGAIN) {
- atomic_inc(&ioend->io_remaining);
- xfs_finish_ioend(ioend, 0);
- /* ensure we don't spin on blocked ioends */
- delay(1);
- } else {
- ASSERT(!error);
- xfs_destroy_ioend(ioend);
- }
+ xfs_setfilesize(ioend);
+ xfs_destroy_ioend(ioend);
}
/*
{
xfs_ioend_t *ioend =
container_of(work, xfs_ioend_t, io_work);
- int error;
- /*
- * If we didn't complete processing of the ioend, requeue it to the
- * tail of the workqueue for another attempt later. Otherwise destroy
- * it.
- */
- error = xfs_setfilesize(ioend);
- if (error == EAGAIN) {
- atomic_inc(&ioend->io_remaining);
- xfs_finish_ioend(ioend, 0);
- /* ensure we don't spin on blocked ioends */
- delay(1);
- } else {
- ASSERT(!error);
- xfs_destroy_ioend(ioend);
- }
+ xfs_setfilesize(ioend);
+ xfs_destroy_ioend(ioend);
}
/*
size_t size = ioend->io_size;
if (likely(!ioend->io_error)) {
- int error;
if (!XFS_FORCED_SHUTDOWN(ip->i_mount)) {
+ int error;
error = xfs_iomap_write_unwritten(ip, offset, size);
if (error)
ioend->io_error = error;
}
- /*
- * If we didn't complete processing of the ioend, requeue it to the
- * tail of the workqueue for another attempt later. Otherwise destroy
- * it.
- */
- error = xfs_setfilesize(ioend);
- if (error == EAGAIN) {
- atomic_inc(&ioend->io_remaining);
- xfs_finish_ioend(ioend, 0);
- /* ensure we don't spin on blocked ioends */
- delay(1);
- return;
- }
+ xfs_setfilesize(ioend);
}
xfs_destroy_ioend(ioend);
}
xfs_destroy_ioend(ioend);
}
+/*
+ * Schedule IO completion handling on a xfsdatad if this was
+ * the final hold on this ioend. If we are asked to wait,
+ * flush the workqueue.
+ */
+STATIC void
+xfs_finish_ioend(
+ xfs_ioend_t *ioend,
+ int wait)
+{
+ if (atomic_dec_and_test(&ioend->io_remaining)) {
+ struct workqueue_struct *wq = xfsdatad_workqueue;
+ if (ioend->io_work.func == xfs_end_bio_unwritten)
+ wq = xfsconvertd_workqueue;
+
+ queue_work(wq, &ioend->io_work);
+ if (wait)
+ flush_workqueue(wq);
+ }
+}
+
/*
* Allocate and initialise an IO completion structure.
* We need to track unwritten extent write completion here initially.
{
struct fsxattr fa;
- memset(&fa, 0, sizeof(struct fsxattr));
-
xfs_ilock(ip, XFS_ILOCK_SHARED);
fa.fsx_xflags = xfs_ip2xflags(ip);
fa.fsx_extsize = ip->i_d.di_extsize << ip->i_mount->m_sb.sb_blocklog;
bf.l_len = len;
xfs_ilock(ip, XFS_IOLOCK_EXCL);
- error = -xfs_change_file_space(ip, XFS_IOC_RESVSP, &bf,
- 0, XFS_ATTR_NOLOCK);
+ error = xfs_change_file_space(ip, XFS_IOC_RESVSP, &bf,
+ 0, XFS_ATTR_NOLOCK);
if (!error && !(mode & FALLOC_FL_KEEP_SIZE) &&
offset + len > i_size_read(inode))
new_size = offset + len;
iattr.ia_valid = ATTR_SIZE;
iattr.ia_size = new_size;
- error = -xfs_setattr(ip, &iattr, XFS_ATTR_NOLOCK);
+ error = xfs_setattr(ip, &iattr, XFS_ATTR_NOLOCK);
}
xfs_iunlock(ip, XFS_IOLOCK_EXCL);
*/
STATIC void
xfs_fs_destroy_inode(
- struct inode *inode)
+ struct inode *inode)
{
- struct xfs_inode *ip = XFS_I(inode);
-
- xfs_itrace_entry(ip);
+ xfs_inode_t *ip = XFS_I(inode);
XFS_STATS_INC(vn_reclaim);
-
- /* bad inode, get out here ASAP */
- if (is_bad_inode(inode))
- goto out_reclaim;
-
- xfs_ioend_wait(ip);
-
- ASSERT(XFS_FORCED_SHUTDOWN(ip->i_mount) || ip->i_delayed_blks == 0);
-
- /*
- * We should never get here with one of the reclaim flags already set.
- */
- ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_IRECLAIMABLE));
- ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_IRECLAIM));
-
- /*
- * We always use background reclaim here because even if the
- * inode is clean, it still may be under IO and hence we have
- * to take the flush lock. The background reclaim path handles
- * this more efficiently than we can here, so simply let background
- * reclaim tear down all inodes.
- */
-out_reclaim:
- xfs_inode_set_reclaim_tag(ip);
+ if (xfs_reclaim(ip))
+ panic("%s: cannot reclaim 0x%p\n", __func__, inode);
}
/*
xfs_unmountfs(mp);
xfs_freesb(mp);
- xfs_inode_shrinker_unregister(mp);
xfs_icsb_destroy_counters(mp);
xfs_close_devices(mp);
xfs_dmops_put(mp);
/* ro -> rw */
if ((mp->m_flags & XFS_MOUNT_RDONLY) && !(*flags & MS_RDONLY)) {
- __uint64_t resblks;
-
mp->m_flags &= ~XFS_MOUNT_RDONLY;
if (mp->m_flags & XFS_MOUNT_BARRIER)
xfs_mountfs_check_barriers(mp);
}
mp->m_update_flags = 0;
}
-
- /*
- * Fill out the reserve pool if it is empty. Use the stashed
- * value if it is non-zero, otherwise go with the default.
- */
- if (mp->m_resblks_save) {
- resblks = mp->m_resblks_save;
- mp->m_resblks_save = 0;
- } else {
- resblks = mp->m_sb.sb_dblocks;
- do_div(resblks, 20);
- resblks = min_t(__uint64_t, resblks, 1024);
- }
- xfs_reserve_blocks(mp, &resblks, NULL);
}
/* rw -> ro */
if (!(mp->m_flags & XFS_MOUNT_RDONLY) && (*flags & MS_RDONLY)) {
- /*
- * After we have synced the data but before we sync the
- * metadata, we need to free up the reserve block pool so that
- * the used block count in the superblock on disk is correct at
- * the end of the remount. Stash the current reserve pool size
- * so that if we get remounted rw, we can return it to the same
- * size.
- */
- __uint64_t resblks = 0;
-
xfs_quiesce_data(mp);
- mp->m_resblks_save = mp->m_resblks;
- xfs_reserve_blocks(mp, &resblks, NULL);
xfs_quiesce_attr(mp);
mp->m_flags |= XFS_MOUNT_RDONLY;
}
if (error)
goto fail_vnrele;
- xfs_inode_shrinker_register(mp);
-
kfree(mtpt);
xfs_itrace_exit(XFS_I(sb->s_root->d_inode));
goto out_cleanup_procfs;
vfs_initquota();
- xfs_inode_shrinker_init();
error = register_filesystem(&xfs_fs_type);
if (error)
{
vfs_exitquota();
unregister_filesystem(&xfs_fs_type);
- xfs_inode_shrinker_destroy();
xfs_sysctl_unregister();
xfs_cleanup_procfs();
xfs_buf_terminate();
* as the tree is sparse and a gang lookup walks to find
* the number of objects requested.
*/
+ read_lock(&pag->pag_ici_lock);
if (tag == XFS_ICI_NO_TAG) {
nr_found = radix_tree_gang_lookup(&pag->pag_ici_root,
(void **)&ip, *first_index, 1);
(void **)&ip, *first_index, 1, tag);
}
if (!nr_found)
- return NULL;
+ goto unlock;
/*
* Update the index for the next lookup. Catch overflows
*/
*first_index = XFS_INO_TO_AGINO(mp, ip->i_ino + 1);
if (*first_index < XFS_INO_TO_AGINO(mp, ip->i_ino))
- return NULL;
+ goto unlock;
+
return ip;
+
+unlock:
+ read_unlock(&pag->pag_ici_lock);
+ return NULL;
}
STATIC int
int (*execute)(struct xfs_inode *ip,
struct xfs_perag *pag, int flags),
int flags,
- int tag,
- int exclusive,
- int *nr_to_scan)
+ int tag)
{
struct xfs_perag *pag = &mp->m_perag[ag];
uint32_t first_index;
int error = 0;
xfs_inode_t *ip;
- if (exclusive)
- write_lock(&pag->pag_ici_lock);
- else
- read_lock(&pag->pag_ici_lock);
ip = xfs_inode_ag_lookup(mp, pag, &first_index, tag);
- if (!ip) {
- if (exclusive)
- write_unlock(&pag->pag_ici_lock);
- else
- read_unlock(&pag->pag_ici_lock);
+ if (!ip)
break;
- }
- /* execute releases pag->pag_ici_lock */
error = execute(ip, pag, flags);
if (error == EAGAIN) {
skipped++;
}
if (error)
last_error = error;
-
- /* bail out if the filesystem is corrupted. */
+ /*
+ * bail out if the filesystem is corrupted.
+ */
if (error == EFSCORRUPTED)
break;
- } while ((*nr_to_scan)--);
+ } while (1);
if (skipped) {
delay(1);
int (*execute)(struct xfs_inode *ip,
struct xfs_perag *pag, int flags),
int flags,
- int tag,
- int exclusive,
- int *nr_to_scan)
+ int tag)
{
int error = 0;
int last_error = 0;
xfs_agnumber_t ag;
- int nr;
- nr = nr_to_scan ? *nr_to_scan : INT_MAX;
for (ag = 0; ag < mp->m_sb.sb_agcount; ag++) {
if (!mp->m_perag[ag].pag_ici_init)
continue;
- error = xfs_inode_ag_walk(mp, ag, execute, flags, tag,
- exclusive, &nr);
+ error = xfs_inode_ag_walk(mp, ag, execute, flags, tag);
if (error) {
last_error = error;
if (error == EFSCORRUPTED)
break;
}
- if (nr <= 0)
- break;
}
- if (nr_to_scan)
- *nr_to_scan = nr;
return XFS_ERROR(last_error);
}
struct xfs_perag *pag)
{
struct inode *inode = VFS_I(ip);
- int error = EFSCORRUPTED;
/* nothing to sync during shutdown */
- if (XFS_FORCED_SHUTDOWN(ip->i_mount))
- goto out_unlock;
-
- /* avoid new or reclaimable inodes. Leave for reclaim code to flush */
- error = ENOENT;
- if (xfs_iflags_test(ip, XFS_INEW | XFS_IRECLAIMABLE | XFS_IRECLAIM))
- goto out_unlock;
+ if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
+ read_unlock(&pag->pag_ici_lock);
+ return EFSCORRUPTED;
+ }
- /* If we can't grab the inode, it must on it's way to reclaim. */
- if (!igrab(inode))
- goto out_unlock;
+ /*
+ * If we can't get a reference on the inode, it must be in reclaim.
+ * Leave it for the reclaim code to flush. Also avoid inodes that
+ * haven't been fully initialised.
+ */
+ if (!igrab(inode)) {
+ read_unlock(&pag->pag_ici_lock);
+ return ENOENT;
+ }
+ read_unlock(&pag->pag_ici_lock);
- if (is_bad_inode(inode)) {
+ if (is_bad_inode(inode) || xfs_iflags_test(ip, XFS_INEW)) {
IRELE(ip);
- goto out_unlock;
+ return ENOENT;
}
- /* inode is valid */
- error = 0;
-out_unlock:
- read_unlock(&pag->pag_ici_lock);
- return error;
+ return 0;
}
STATIC int
ASSERT((flags & ~(SYNC_TRYLOCK|SYNC_WAIT)) == 0);
error = xfs_inode_ag_iterator(mp, xfs_sync_inode_data, flags,
- XFS_ICI_NO_TAG, 0, NULL);
+ XFS_ICI_NO_TAG);
if (error)
return XFS_ERROR(error);
ASSERT((flags & ~SYNC_WAIT) == 0);
return xfs_inode_ag_iterator(mp, xfs_sync_inode_attr, flags,
- XFS_ICI_NO_TAG, 0, NULL);
+ XFS_ICI_NO_TAG);
}
STATIC int
kthread_stop(mp->m_sync_task);
}
-void
-__xfs_inode_set_reclaim_tag(
- struct xfs_perag *pag,
- struct xfs_inode *ip)
-{
- radix_tree_tag_set(&pag->pag_ici_root,
- XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino),
- XFS_ICI_RECLAIM_TAG);
- pag->pag_ici_reclaimable++;
-}
-
-/*
- * We set the inode flag atomically with the radix tree tag.
- * Once we get tag lookups on the radix tree, this inode flag
- * can go away.
- */
-void
-xfs_inode_set_reclaim_tag(
- xfs_inode_t *ip)
-{
- xfs_mount_t *mp = ip->i_mount;
- xfs_perag_t *pag = xfs_get_perag(mp, ip->i_ino);
-
- write_lock(&pag->pag_ici_lock);
- spin_lock(&ip->i_flags_lock);
- __xfs_inode_set_reclaim_tag(pag, ip);
- __xfs_iflags_set(ip, XFS_IRECLAIMABLE);
- spin_unlock(&ip->i_flags_lock);
- write_unlock(&pag->pag_ici_lock);
- xfs_put_perag(mp, pag);
-}
-
-void
-__xfs_inode_clear_reclaim_tag(
- xfs_mount_t *mp,
- xfs_perag_t *pag,
- xfs_inode_t *ip)
-{
- radix_tree_tag_clear(&pag->pag_ici_root,
- XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
- pag->pag_ici_reclaimable--;
-}
-
-STATIC int
+int
xfs_reclaim_inode(
- struct xfs_inode *ip,
- struct xfs_perag *pag,
- int sync_mode)
+ xfs_inode_t *ip,
+ int locked,
+ int sync_mode)
{
- /*
- * The radix tree lock here protects a thread in xfs_iget from racing
- * with us starting reclaim on the inode. Once we have the
- * XFS_IRECLAIM flag set it will not touch us.
+ xfs_perag_t *pag = xfs_get_perag(ip->i_mount, ip->i_ino);
+
+ /* The hash lock here protects a thread in xfs_iget_core from
+ * racing with us on linking the inode back with a vnode.
+ * Once we have the XFS_IRECLAIM flag set it will not touch
+ * us.
*/
+ write_lock(&pag->pag_ici_lock);
spin_lock(&ip->i_flags_lock);
- ASSERT_ALWAYS(__xfs_iflags_test(ip, XFS_IRECLAIMABLE));
- if (__xfs_iflags_test(ip, XFS_IRECLAIM)) {
- /* ignore as it is already under reclaim */
+ if (__xfs_iflags_test(ip, XFS_IRECLAIM) ||
+ !__xfs_iflags_test(ip, XFS_IRECLAIMABLE)) {
spin_unlock(&ip->i_flags_lock);
write_unlock(&pag->pag_ici_lock);
- return 0;
+ if (locked) {
+ xfs_ifunlock(ip);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ }
+ return -EAGAIN;
}
__xfs_iflags_set(ip, XFS_IRECLAIM);
spin_unlock(&ip->i_flags_lock);
write_unlock(&pag->pag_ici_lock);
+ xfs_put_perag(ip->i_mount, pag);
/*
* If the inode is still dirty, then flush it out. If the inode
* We get the flush lock regardless, though, just to make sure
* we don't free it while it is being flushed.
*/
- xfs_ilock(ip, XFS_ILOCK_EXCL);
- xfs_iflock(ip);
+ if (!locked) {
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ xfs_iflock(ip);
+ }
/*
* In the case of a forced shutdown we rely on xfs_iflush() to
return 0;
}
-int
-xfs_reclaim_inodes(
- xfs_mount_t *mp,
- int mode)
+void
+__xfs_inode_set_reclaim_tag(
+ struct xfs_perag *pag,
+ struct xfs_inode *ip)
{
- return xfs_inode_ag_iterator(mp, xfs_reclaim_inode, mode,
- XFS_ICI_RECLAIM_TAG, 1, NULL);
+ radix_tree_tag_set(&pag->pag_ici_root,
+ XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino),
+ XFS_ICI_RECLAIM_TAG);
}
/*
- * Shrinker infrastructure.
- *
- * This is all far more complex than it needs to be. It adds a global list of
- * mounts because the shrinkers can only call a global context. We need to make
- * the shrinkers pass a context to avoid the need for global state.
+ * We set the inode flag atomically with the radix tree tag.
+ * Once we get tag lookups on the radix tree, this inode flag
+ * can go away.
*/
-static LIST_HEAD(xfs_mount_list);
-static struct rw_semaphore xfs_mount_list_lock;
-
-static int
-xfs_reclaim_inode_shrink(
- int nr_to_scan,
- gfp_t gfp_mask)
+void
+xfs_inode_set_reclaim_tag(
+ xfs_inode_t *ip)
{
- struct xfs_mount *mp;
- xfs_agnumber_t ag;
- int reclaimable = 0;
-
- if (nr_to_scan) {
- if (!(gfp_mask & __GFP_FS))
- return -1;
-
- down_read(&xfs_mount_list_lock);
- list_for_each_entry(mp, &xfs_mount_list, m_mplist) {
- xfs_inode_ag_iterator(mp, xfs_reclaim_inode, 0,
- XFS_ICI_RECLAIM_TAG, 1, &nr_to_scan);
- if (nr_to_scan <= 0)
- break;
- }
- up_read(&xfs_mount_list_lock);
- }
-
- down_read(&xfs_mount_list_lock);
- list_for_each_entry(mp, &xfs_mount_list, m_mplist) {
- for (ag = 0; ag < mp->m_sb.sb_agcount; ag++) {
-
- if (!mp->m_perag[ag].pag_ici_init)
- continue;
- reclaimable += mp->m_perag[ag].pag_ici_reclaimable;
- }
- }
- up_read(&xfs_mount_list_lock);
- return reclaimable;
-}
-
-static struct shrinker xfs_inode_shrinker = {
- .shrink = xfs_reclaim_inode_shrink,
- .seeks = DEFAULT_SEEKS,
-};
+ xfs_mount_t *mp = ip->i_mount;
+ xfs_perag_t *pag = xfs_get_perag(mp, ip->i_ino);
-void __init
-xfs_inode_shrinker_init(void)
-{
- init_rwsem(&xfs_mount_list_lock);
- register_shrinker(&xfs_inode_shrinker);
+ read_lock(&pag->pag_ici_lock);
+ spin_lock(&ip->i_flags_lock);
+ __xfs_inode_set_reclaim_tag(pag, ip);
+ __xfs_iflags_set(ip, XFS_IRECLAIMABLE);
+ spin_unlock(&ip->i_flags_lock);
+ read_unlock(&pag->pag_ici_lock);
+ xfs_put_perag(mp, pag);
}
void
-xfs_inode_shrinker_destroy(void)
+__xfs_inode_clear_reclaim_tag(
+ xfs_mount_t *mp,
+ xfs_perag_t *pag,
+ xfs_inode_t *ip)
{
- ASSERT(list_empty(&xfs_mount_list));
- unregister_shrinker(&xfs_inode_shrinker);
+ radix_tree_tag_clear(&pag->pag_ici_root,
+ XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
}
-void
-xfs_inode_shrinker_register(
- struct xfs_mount *mp)
+STATIC int
+xfs_reclaim_inode_now(
+ struct xfs_inode *ip,
+ struct xfs_perag *pag,
+ int flags)
{
- down_write(&xfs_mount_list_lock);
- list_add_tail(&mp->m_mplist, &xfs_mount_list);
- up_write(&xfs_mount_list_lock);
+ /* ignore if already under reclaim */
+ if (xfs_iflags_test(ip, XFS_IRECLAIM)) {
+ read_unlock(&pag->pag_ici_lock);
+ return 0;
+ }
+ read_unlock(&pag->pag_ici_lock);
+
+ return xfs_reclaim_inode(ip, 0, flags);
}
-void
-xfs_inode_shrinker_unregister(
- struct xfs_mount *mp)
+int
+xfs_reclaim_inodes(
+ xfs_mount_t *mp,
+ int mode)
{
- down_write(&xfs_mount_list_lock);
- list_del(&mp->m_mplist);
- up_write(&xfs_mount_list_lock);
+ return xfs_inode_ag_iterator(mp, xfs_reclaim_inode_now, mode,
+ XFS_ICI_RECLAIM_TAG);
}
void xfs_flush_inodes(struct xfs_inode *ip);
+int xfs_reclaim_inode(struct xfs_inode *ip, int locked, int sync_mode);
int xfs_reclaim_inodes(struct xfs_mount *mp, int mode);
void xfs_inode_set_reclaim_tag(struct xfs_inode *ip);
int xfs_sync_inode_valid(struct xfs_inode *ip, struct xfs_perag *pag);
int xfs_inode_ag_iterator(struct xfs_mount *mp,
int (*execute)(struct xfs_inode *ip, struct xfs_perag *pag, int flags),
- int flags, int tag, int write_lock, int *nr_to_scan);
-
-void xfs_inode_shrinker_init(void);
-void xfs_inode_shrinker_destroy(void);
-void xfs_inode_shrinker_register(struct xfs_mount *mp);
-void xfs_inode_shrinker_unregister(struct xfs_mount *mp);
+ int flags, int tag);
#endif
be64_to_cpu(dp->d_blk_hardlimit);
if (limit && statp->f_blocks > limit) {
statp->f_blocks = limit;
- statp->f_bfree = statp->f_bavail =
+ statp->f_bfree =
(statp->f_blocks > be64_to_cpu(dp->d_bcount)) ?
(statp->f_blocks - be64_to_cpu(dp->d_bcount)) : 0;
}
uint flags)
{
ASSERT(mp->m_quotainfo);
- xfs_inode_ag_iterator(mp, xfs_dqrele_inode, flags,
- XFS_ICI_NO_TAG, 0, NULL);
+ xfs_inode_ag_iterator(mp, xfs_dqrele_inode, flags, XFS_ICI_NO_TAG);
}
/*------------------------------------------------------------------------*/
int pag_ici_init; /* incore inode cache initialised */
rwlock_t pag_ici_lock; /* incore inode lock */
struct radix_tree_root pag_ici_root; /* incore inode cache root */
- int pag_ici_reclaimable; /* reclaimable inodes */
#endif
} xfs_perag_t;
xfs_mount_t *mp;
xfs_perag_busy_t *bsy;
xfs_agblock_t uend, bend;
- xfs_lsn_t lsn = 0;
+ xfs_lsn_t lsn;
int cnt;
mp = tp->t_mountp;
spin_lock(&mp->m_perag[agno].pagb_lock);
+ cnt = mp->m_perag[agno].pagb_count;
+
uend = bno + len - 1;
- /*
- * search pagb_list for this slot, skipping open slots. We have to
- * search the entire array as there may be multiple overlaps and
- * we have to get the most recent LSN for the log force to push out
- * all the transactions that span the range.
- */
- for (cnt = 0; cnt < mp->m_perag[agno].pagb_count; cnt++) {
- bsy = &mp->m_perag[agno].pagb_list[cnt];
- if (!bsy->busy_tp)
- continue;
- bend = bsy->busy_start + bsy->busy_length - 1;
- if (bno > bend || uend < bsy->busy_start)
- continue;
+ /* search pagb_list for this slot, skipping open slots */
+ for (bsy = mp->m_perag[agno].pagb_list; cnt; bsy++) {
- /* (start1,length1) within (start2, length2) */
- if (XFS_LSN_CMP(bsy->busy_tp->t_commit_lsn, lsn) > 0)
- lsn = bsy->busy_tp->t_commit_lsn;
+ /*
+ * (start1,length1) within (start2, length2)
+ */
+ if (bsy->busy_tp != NULL) {
+ bend = bsy->busy_start + bsy->busy_length - 1;
+ if ((bno > bend) || (uend < bsy->busy_start)) {
+ cnt--;
+ } else {
+ TRACE_BUSYSEARCH("xfs_alloc_search_busy",
+ "found1", agno, bno, len, tp);
+ break;
+ }
+ }
}
- spin_unlock(&mp->m_perag[agno].pagb_lock);
- TRACE_BUSYSEARCH("xfs_alloc_search_busy", lsn ? "found" : "not-found",
- agno, bno, len, tp);
- if (lsn)
+
+ /*
+ * If a block was found, force the log through the LSN of the
+ * transaction that freed the block
+ */
+ if (cnt) {
+ TRACE_BUSYSEARCH("xfs_alloc_search_busy", "found", agno, bno, len, tp);
+ lsn = bsy->busy_tp->t_commit_lsn;
+ spin_unlock(&mp->m_perag[agno].pagb_lock);
xfs_log_force(mp, lsn, XFS_LOG_FORCE|XFS_LOG_SYNC);
+ } else {
+ TRACE_BUSYSEARCH("xfs_alloc_search_busy", "not-found", agno, bno, len, tp);
+ spin_unlock(&mp->m_perag[agno].pagb_lock);
+ }
}
goto out;
}
- if (!(file->f_mode & FMODE_WRITE) ||
- !(file->f_mode & FMODE_READ) ||
- (file->f_flags & O_APPEND)) {
+ if (!(file->f_mode & FMODE_WRITE) || (file->f_flags & O_APPEND)) {
error = XFS_ERROR(EBADF);
goto out_put_file;
}
}
if (!(target_file->f_mode & FMODE_WRITE) ||
- !(target_file->f_mode & FMODE_READ) ||
(target_file->f_flags & O_APPEND)) {
error = XFS_ERROR(EBADF);
goto out_put_target_file;
return error;
}
-/*
- * We need to check that the format of the data fork in the temporary inode is
- * valid for the target inode before doing the swap. This is not a problem with
- * attr1 because of the fixed fork offset, but attr2 has a dynamically sized
- * data fork depending on the space the attribute fork is taking so we can get
- * invalid formats on the target inode.
- *
- * E.g. target has space for 7 extents in extent format, temp inode only has
- * space for 6. If we defragment down to 7 extents, then the tmp format is a
- * btree, but when swapped it needs to be in extent format. Hence we can't just
- * blindly swap data forks on attr2 filesystems.
- *
- * Note that we check the swap in both directions so that we don't end up with
- * a corrupt temporary inode, either.
- *
- * Note that fixing the way xfs_fsr sets up the attribute fork in the source
- * inode will prevent this situation from occurring, so all we do here is
- * reject and log the attempt. basically we are putting the responsibility on
- * userspace to get this right.
- */
-static int
-xfs_swap_extents_check_format(
- xfs_inode_t *ip, /* target inode */
- xfs_inode_t *tip) /* tmp inode */
-{
-
- /* Should never get a local format */
- if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL ||
- tip->i_d.di_format == XFS_DINODE_FMT_LOCAL)
- return EINVAL;
-
- /*
- * if the target inode has less extents that then temporary inode then
- * why did userspace call us?
- */
- if (ip->i_d.di_nextents < tip->i_d.di_nextents)
- return EINVAL;
-
- /*
- * if the target inode is in extent form and the temp inode is in btree
- * form then we will end up with the target inode in the wrong format
- * as we already know there are less extents in the temp inode.
- */
- if (ip->i_d.di_format == XFS_DINODE_FMT_EXTENTS &&
- tip->i_d.di_format == XFS_DINODE_FMT_BTREE)
- return EINVAL;
-
- /* Check temp in extent form to max in target */
- if (tip->i_d.di_format == XFS_DINODE_FMT_EXTENTS &&
- XFS_IFORK_NEXTENTS(tip, XFS_DATA_FORK) > ip->i_df.if_ext_max)
- return EINVAL;
-
- /* Check target in extent form to max in temp */
- if (ip->i_d.di_format == XFS_DINODE_FMT_EXTENTS &&
- XFS_IFORK_NEXTENTS(ip, XFS_DATA_FORK) > tip->i_df.if_ext_max)
- return EINVAL;
-
- /* Check root block of temp in btree form to max in target */
- if (tip->i_d.di_format == XFS_DINODE_FMT_BTREE &&
- XFS_IFORK_BOFF(ip) &&
- tip->i_df.if_broot_bytes > XFS_IFORK_BOFF(ip))
- return EINVAL;
-
- /* Check root block of target in btree form to max in temp */
- if (ip->i_d.di_format == XFS_DINODE_FMT_BTREE &&
- XFS_IFORK_BOFF(tip) &&
- ip->i_df.if_broot_bytes > XFS_IFORK_BOFF(tip))
- return EINVAL;
-
- return 0;
-}
-
int
xfs_swap_extents(
- xfs_inode_t *ip, /* target inode */
- xfs_inode_t *tip, /* tmp inode */
+ xfs_inode_t *ip,
+ xfs_inode_t *tip,
xfs_swapext_t *sxp)
{
xfs_mount_t *mp;
goto out_unlock;
}
+ /* Should never get a local format */
+ if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL ||
+ tip->i_d.di_format == XFS_DINODE_FMT_LOCAL) {
+ error = XFS_ERROR(EINVAL);
+ goto out_unlock;
+ }
+
if (VN_CACHED(VFS_I(tip)) != 0) {
xfs_inval_cached_trace(tip, 0, -1, 0, -1);
error = xfs_flushinval_pages(tip, 0, -1,
goto out_unlock;
}
- /* check inode formats now that data is flushed */
- error = xfs_swap_extents_check_format(ip, tip);
- if (error) {
- xfs_fs_cmn_err(CE_NOTE, mp,
- "%s: inode 0x%llx format is incompatible for exchanging.",
- __FILE__, ip->i_ino);
+ /*
+ * If the target has extended attributes, the tmp file
+ * must also in order to ensure the correct data fork
+ * format.
+ */
+ if ( XFS_IFORK_Q(ip) != XFS_IFORK_Q(tip) ) {
+ error = XFS_ERROR(EINVAL);
goto out_unlock;
}
*ifp = *tifp; /* struct copy */
*tifp = *tempifp; /* struct copy */
- /*
- * Fix the in-memory data fork values that are dependent on the fork
- * offset in the inode. We can't assume they remain the same as attr2
- * has dynamic fork offsets.
- */
- ifp->if_ext_max = XFS_IFORK_SIZE(ip, XFS_DATA_FORK) /
- (uint)sizeof(xfs_bmbt_rec_t);
- tifp->if_ext_max = XFS_IFORK_SIZE(tip, XFS_DATA_FORK) /
- (uint)sizeof(xfs_bmbt_rec_t);
-
/*
* Fix the on-disk inode values
*/
xfs_inode_t *ip;
int error;
- tp = _xfs_trans_alloc(mp, XFS_TRANS_DUMMY1, KM_SLEEP);
+ tp = _xfs_trans_alloc(mp, XFS_TRANS_DUMMY1);
error = xfs_trans_reserve(tp, 0, XFS_ICHANGE_LOG_RES(mp), 0, 0, 0);
if (error) {
xfs_trans_cancel(tp, 0);
xfs_itrace_exit_tag(ip, "xfs_iget.alloc");
/*
- * We need to set XFS_IRECLAIM to prevent xfs_reclaim_inode
- * from stomping over us while we recycle the inode. We can't
- * clear the radix tree reclaimable tag yet as it requires
- * pag_ici_lock to be held exclusive.
+ * We need to set XFS_INEW atomically with clearing the
+ * reclaimable tag so that we do have an indicator of the
+ * inode still being initialized.
*/
- ip->i_flags |= XFS_IRECLAIM;
+ ip->i_flags |= XFS_INEW;
+ ip->i_flags &= ~XFS_IRECLAIMABLE;
+ __xfs_inode_clear_reclaim_tag(mp, pag, ip);
spin_unlock(&ip->i_flags_lock);
read_unlock(&pag->pag_ici_lock);
__xfs_inode_set_reclaim_tag(pag, ip);
goto out_error;
}
-
- write_lock(&pag->pag_ici_lock);
- spin_lock(&ip->i_flags_lock);
- ip->i_flags &= ~(XFS_IRECLAIMABLE | XFS_IRECLAIM);
- ip->i_flags |= XFS_INEW;
- __xfs_inode_clear_reclaim_tag(mp, pag, ip);
inode->i_state = I_LOCK|I_NEW;
- spin_unlock(&ip->i_flags_lock);
- write_unlock(&pag->pag_ici_lock);
} else {
/* If the VFS inode is being torn down, pause and try again. */
if (!igrab(inode)) {
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_perag *pag;
- xfs_agino_t agino = XFS_INO_TO_AGINO(mp, ip->i_ino);
XFS_STATS_INC(xs_ig_reclaims);
/*
- * Remove the inode from the per-AG radix tree.
- *
- * Because radix_tree_delete won't complain even if the item was never
- * added to the tree assert that it's been there before to catch
- * problems with the inode life time early on.
+ * Remove the inode from the per-AG radix tree. It doesn't matter
+ * if it was never added to it because radix_tree_delete can deal
+ * with that case just fine.
*/
pag = xfs_get_perag(mp, ip->i_ino);
write_lock(&pag->pag_ici_lock);
- if (!radix_tree_delete(&pag->pag_ici_root, agino))
- ASSERT(0);
+ radix_tree_delete(&pag->pag_ici_root, XFS_INO_TO_AGINO(mp, ip->i_ino));
write_unlock(&pag->pag_ici_lock);
xfs_put_perag(mp, pag);
mp = ip->i_mount;
/*
- * If the inode isn't dirty, then just release the inode flush lock and
- * do nothing.
+ * If the inode isn't dirty, then just release the inode
+ * flush lock and do nothing.
*/
if (xfs_inode_clean(ip)) {
xfs_ifunlock(ip);
}
xfs_iunpin_wait(ip);
- /*
- * For stale inodes we cannot rely on the backing buffer remaining
- * stale in cache for the remaining life of the stale inode and so
- * xfs_itobp() below may give us a buffer that no longer contains
- * inodes below. We have to check this after ensuring the inode is
- * unpinned so that it is safe to reclaim the stale inode after the
- * flush call.
- */
- if (xfs_iflags_test(ip, XFS_ISTALE)) {
- xfs_ifunlock(ip);
- return 0;
- }
-
/*
* This may have been unpinned because the filesystem is shutting
* down forcibly. If that's the case we must not write this inode
* set up a transaction to convert the range of extents
* from unwritten to real. Do allocations in a loop until
* we have covered the range passed in.
- *
- * Note that we open code the transaction allocation here
- * to pass KM_NOFS--we can't risk to recursing back into
- * the filesystem here as we might be asked to write out
- * the same inode that we complete here and might deadlock
- * on the iolock.
*/
- xfs_wait_for_freeze(mp, SB_FREEZE_TRANS);
- tp = _xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE, KM_NOFS);
+ tp = xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE);
tp->t_flags |= XFS_TRANS_RESERVE;
error = xfs_trans_reserve(tp, resblks,
XFS_WRITE_LOG_RES(mp), 0,
{
xlog_rec_header_t *rhead;
xfs_daddr_t blk_no;
- xfs_caddr_t offset;
+ xfs_caddr_t bufaddr, offset;
xfs_buf_t *hbp, *dbp;
int error = 0, h_size;
int bblks, split_bblks;
/*
* Check for header wrapping around physical end-of-log
*/
- offset = XFS_BUF_PTR(hbp);
+ offset = NULL;
split_hblks = 0;
wrapped_hblks = 0;
if (blk_no + hblks <= log->l_logBBsize) {
* - order is important.
*/
wrapped_hblks = hblks - split_hblks;
+ bufaddr = XFS_BUF_PTR(hbp);
error = XFS_BUF_SET_PTR(hbp,
- offset + BBTOB(split_hblks),
+ bufaddr + BBTOB(split_hblks),
BBTOB(hblks - split_hblks));
if (error)
goto bread_err2;
if (error)
goto bread_err2;
- error = XFS_BUF_SET_PTR(hbp, offset,
+ error = XFS_BUF_SET_PTR(hbp, bufaddr,
BBTOB(hblks));
if (error)
goto bread_err2;
+
+ if (!offset)
+ offset = xlog_align(log, 0,
+ wrapped_hblks, hbp);
}
rhead = (xlog_rec_header_t *)offset;
error = xlog_valid_rec_header(log, rhead,
} else {
/* This log record is split across the
* physical end of log */
- offset = XFS_BUF_PTR(dbp);
+ offset = NULL;
split_bblks = 0;
if (blk_no != log->l_logBBsize) {
/* some data is before the physical
* _first_, then the log start (LR header end)
* - order is important.
*/
+ bufaddr = XFS_BUF_PTR(dbp);
error = XFS_BUF_SET_PTR(dbp,
- offset + BBTOB(split_bblks),
+ bufaddr + BBTOB(split_bblks),
BBTOB(bblks - split_bblks));
if (error)
goto bread_err2;
if (error)
goto bread_err2;
- error = XFS_BUF_SET_PTR(dbp, offset, h_size);
+ error = XFS_BUF_SET_PTR(dbp, bufaddr, h_size);
if (error)
goto bread_err2;
+
+ if (!offset)
+ offset = xlog_align(log, wrapped_hblks,
+ bblks - split_bblks, dbp);
}
xlog_unpack_data(rhead, offset, log);
if ((error = xlog_recover_process_data(log, rhash,
if (!xfs_sb_version_haslazysbcount(&mp->m_sb))
return 0;
- tp = _xfs_trans_alloc(mp, XFS_TRANS_SB_COUNT, KM_SLEEP);
+ tp = _xfs_trans_alloc(mp, XFS_TRANS_SB_COUNT);
error = xfs_trans_reserve(tp, 0, mp->m_sb.sb_sectsize + 128, 0, 0,
XFS_DEFAULT_LOG_COUNT);
if (error) {
__uint64_t m_maxioffset; /* maximum inode offset */
__uint64_t m_resblks; /* total reserved blocks */
__uint64_t m_resblks_avail;/* available reserved blocks */
- __uint64_t m_resblks_save; /* reserved blks @ remount,ro */
int m_dalign; /* stripe unit */
int m_swidth; /* stripe width */
int m_sinoalign; /* stripe unit inode alignment */
wait_queue_head_t m_wait_single_sync_task;
__int64_t m_update_flags; /* sb flags we need to update
on the next remount,rw */
- struct list_head m_mplist; /* inode shrinker mount list */
} xfs_mount_t;
/*
XFS_FSB_TO_DADDR((ip)->i_mount, (fsb)));
}
+/*
+ * Flags for xfs_free_eofblocks
+ */
+#define XFS_FREE_EOF_LOCK (1<<0)
+#define XFS_FREE_EOF_NOLOCK (1<<1)
+
+
/*
* helper function to extract extent size hint from inode
*/
uint type)
{
xfs_wait_for_freeze(mp, SB_FREEZE_TRANS);
- return _xfs_trans_alloc(mp, type, KM_SLEEP);
+ return _xfs_trans_alloc(mp, type);
}
xfs_trans_t *
_xfs_trans_alloc(
xfs_mount_t *mp,
- uint type,
- uint memflags)
+ uint type)
{
xfs_trans_t *tp;
atomic_inc(&mp->m_active_trans);
- tp = kmem_zone_zalloc(xfs_trans_zone, memflags);
+ tp = kmem_zone_zalloc(xfs_trans_zone, KM_SLEEP);
tp->t_magic = XFS_TRANS_MAGIC;
tp->t_type = type;
tp->t_mountp = mp;
* XFS transaction mechanism exported interfaces.
*/
xfs_trans_t *xfs_trans_alloc(struct xfs_mount *, uint);
-xfs_trans_t *_xfs_trans_alloc(struct xfs_mount *, uint, uint);
+xfs_trans_t *_xfs_trans_alloc(struct xfs_mount *, uint);
xfs_trans_t *xfs_trans_dup(xfs_trans_t *);
int xfs_trans_reserve(xfs_trans_t *, uint, uint, uint,
uint, uint);
uint commit_flags=0;
uid_t uid=0, iuid=0;
gid_t gid=0, igid=0;
+ int timeflags = 0;
struct xfs_dquot *udqp, *gdqp, *olddquot1, *olddquot2;
int need_iolock = 1;
if (flags & XFS_ATTR_NOLOCK)
need_iolock = 0;
if (!(mask & ATTR_SIZE)) {
- tp = xfs_trans_alloc(mp, XFS_TRANS_SETATTR_NOT_SIZE);
- commit_flags = 0;
- code = xfs_trans_reserve(tp, 0, XFS_ICHANGE_LOG_RES(mp),
- 0, 0, 0);
- if (code) {
- lock_flags = 0;
- goto error_return;
+ if ((mask != (ATTR_CTIME|ATTR_ATIME|ATTR_MTIME)) ||
+ (mp->m_flags & XFS_MOUNT_WSYNC)) {
+ tp = xfs_trans_alloc(mp, XFS_TRANS_SETATTR_NOT_SIZE);
+ commit_flags = 0;
+ if ((code = xfs_trans_reserve(tp, 0,
+ XFS_ICHANGE_LOG_RES(mp), 0,
+ 0, 0))) {
+ lock_flags = 0;
+ goto error_return;
+ }
}
} else {
if (DM_EVENT_ENABLED(ip, DM_EVENT_TRUNCATE) &&
* or we are explicitly asked to change it. This handles
* the semantic difference between truncate() and ftruncate()
* as implemented in the VFS.
- *
- * The regular truncate() case without ATTR_CTIME and ATTR_MTIME
- * is a special case where we need to update the times despite
- * not having these flags set. For all other operations the
- * VFS set these flags explicitly if it wants a timestamp
- * update.
*/
- if (iattr->ia_size != ip->i_size &&
- (!(mask & (ATTR_CTIME | ATTR_MTIME)))) {
- iattr->ia_ctime = iattr->ia_mtime =
- current_fs_time(inode->i_sb);
- mask |= ATTR_CTIME | ATTR_MTIME;
- }
+ if (iattr->ia_size != ip->i_size || (mask & ATTR_CTIME))
+ timeflags |= XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG;
if (iattr->ia_size > ip->i_size) {
ip->i_d.di_size = iattr->ia_size;
ip->i_size = iattr->ia_size;
+ if (!(flags & XFS_ATTR_DMI))
+ xfs_ichgtime(ip, XFS_ICHGTIME_CHG);
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
} else if (iattr->ia_size <= ip->i_size ||
(iattr->ia_size == 0 && ip->i_d.di_nextents)) {
ip->i_d.di_gid = gid;
inode->i_gid = gid;
}
+
+ xfs_trans_log_inode (tp, ip, XFS_ILOG_CORE);
+ timeflags |= XFS_ICHGTIME_CHG;
}
/*
inode->i_mode &= S_IFMT;
inode->i_mode |= mode & ~S_IFMT;
+
+ xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+ timeflags |= XFS_ICHGTIME_CHG;
}
/*
* Change file access or modified times.
*/
- if (mask & ATTR_ATIME) {
- inode->i_atime = iattr->ia_atime;
- ip->i_d.di_atime.t_sec = iattr->ia_atime.tv_sec;
- ip->i_d.di_atime.t_nsec = iattr->ia_atime.tv_nsec;
- ip->i_update_core = 1;
+ if (mask & (ATTR_ATIME|ATTR_MTIME)) {
+ if (mask & ATTR_ATIME) {
+ inode->i_atime = iattr->ia_atime;
+ ip->i_d.di_atime.t_sec = iattr->ia_atime.tv_sec;
+ ip->i_d.di_atime.t_nsec = iattr->ia_atime.tv_nsec;
+ ip->i_update_core = 1;
+ }
+ if (mask & ATTR_MTIME) {
+ inode->i_mtime = iattr->ia_mtime;
+ ip->i_d.di_mtime.t_sec = iattr->ia_mtime.tv_sec;
+ ip->i_d.di_mtime.t_nsec = iattr->ia_mtime.tv_nsec;
+ timeflags &= ~XFS_ICHGTIME_MOD;
+ timeflags |= XFS_ICHGTIME_CHG;
+ }
+ if (tp && (mask & (ATTR_MTIME_SET|ATTR_ATIME_SET)))
+ xfs_trans_log_inode (tp, ip, XFS_ILOG_CORE);
}
- if (mask & ATTR_CTIME) {
+
+ /*
+ * Change file inode change time only if ATTR_CTIME set
+ * AND we have been called by a DMI function.
+ */
+
+ if ((flags & XFS_ATTR_DMI) && (mask & ATTR_CTIME)) {
inode->i_ctime = iattr->ia_ctime;
ip->i_d.di_ctime.t_sec = iattr->ia_ctime.tv_sec;
ip->i_d.di_ctime.t_nsec = iattr->ia_ctime.tv_nsec;
ip->i_update_core = 1;
- }
- if (mask & ATTR_MTIME) {
- inode->i_mtime = iattr->ia_mtime;
- ip->i_d.di_mtime.t_sec = iattr->ia_mtime.tv_sec;
- ip->i_d.di_mtime.t_nsec = iattr->ia_mtime.tv_nsec;
- ip->i_update_core = 1;
+ timeflags &= ~XFS_ICHGTIME_CHG;
}
/*
- * And finally, log the inode core if any attribute in it
- * has been changed.
+ * Send out timestamp changes that need to be set to the
+ * current time. Not done when called by a DMI function.
*/
- if (mask & (ATTR_UID|ATTR_GID|ATTR_MODE|
- ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
- xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+ if (timeflags && !(flags & XFS_ATTR_DMI))
+ xfs_ichgtime(ip, timeflags);
XFS_STATS_INC(xs_ig_attrchg);
* mix so this probably isn't worth the trouble to optimize.
*/
code = 0;
- if (mp->m_flags & XFS_MOUNT_WSYNC)
- xfs_trans_set_sync(tp);
+ if (tp) {
+ if (mp->m_flags & XFS_MOUNT_WSYNC)
+ xfs_trans_set_sync(tp);
- code = xfs_trans_commit(tp, commit_flags);
+ code = xfs_trans_commit(tp, commit_flags);
+ }
xfs_iunlock(ip, lock_flags);
{
xfs_trans_t *tp;
int error = 0;
- int log_flushed = 0;
+ int log_flushed = 0, changed = 1;
xfs_itrace_entry(ip);
* disk yet, the inode will be still be pinned. If it is,
* force the log.
*/
+
xfs_iunlock(ip, XFS_ILOCK_SHARED);
+
if (xfs_ipincount(ip)) {
error = _xfs_log_force(ip->i_mount, (xfs_lsn_t)0,
XFS_LOG_FORCE | XFS_LOG_SYNC,
&log_flushed);
+ } else {
+ /*
+ * If the inode is not pinned and nothing has changed
+ * we don't need to flush the cache.
+ */
+ changed = 0;
}
} else {
/*
xfs_iunlock(ip, XFS_ILOCK_EXCL);
}
- if (ip->i_mount->m_flags & XFS_MOUNT_BARRIER) {
+ if ((ip->i_mount->m_flags & XFS_MOUNT_BARRIER) && changed) {
/*
* If the log write didn't issue an ordered tag we need
* to flush the disk cache for the data device now.
return error;
}
-/*
- * Flags for xfs_free_eofblocks
- */
-#define XFS_FREE_EOF_TRYLOCK (1<<0)
-
/*
* This is called by xfs_inactive to free any blocks beyond eof
* when the link count isn't zero and by xfs_dm_punch_hole() when
xfs_filblks_t map_len;
int nimaps;
xfs_bmbt_irec_t imap;
+ int use_iolock = (flags & XFS_FREE_EOF_LOCK);
/*
* Figure out if there are any blocks beyond the end
* cache and we can't
* do that within a transaction.
*/
- if (flags & XFS_FREE_EOF_TRYLOCK) {
- if (!xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL)) {
- xfs_trans_cancel(tp, 0);
- return 0;
- }
- } else {
+ if (use_iolock)
xfs_ilock(ip, XFS_IOLOCK_EXCL);
- }
error = xfs_itruncate_start(ip, XFS_ITRUNC_DEFINITE,
ip->i_size);
if (error) {
xfs_trans_cancel(tp, 0);
- xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+ if (use_iolock)
+ xfs_iunlock(ip, XFS_IOLOCK_EXCL);
return error;
}
error = xfs_trans_commit(tp,
XFS_TRANS_RELEASE_LOG_RES);
}
- xfs_iunlock(ip, XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL);
+ xfs_iunlock(ip, (use_iolock ? (XFS_IOLOCK_EXCL|XFS_ILOCK_EXCL)
+ : XFS_ILOCK_EXCL));
}
return error;
}
(ip->i_df.if_flags & XFS_IFEXTENTS)) &&
(!(ip->i_d.di_flags &
(XFS_DIFLAG_PREALLOC | XFS_DIFLAG_APPEND)))) {
-
- /*
- * If we can't get the iolock just skip truncating
- * the blocks past EOF because we could deadlock
- * with the mmap_sem otherwise. We'll get another
- * chance to drop them once the last reference to
- * the inode is dropped, so we'll never leak blocks
- * permanently.
- */
- error = xfs_free_eofblocks(mp, ip,
- XFS_FREE_EOF_TRYLOCK);
+ error = xfs_free_eofblocks(mp, ip, XFS_FREE_EOF_LOCK);
if (error)
return error;
}
(!(ip->i_d.di_flags &
(XFS_DIFLAG_PREALLOC | XFS_DIFLAG_APPEND)) ||
(ip->i_delayed_blks != 0)))) {
- error = xfs_free_eofblocks(mp, ip, 0);
+ error = xfs_free_eofblocks(mp, ip, XFS_FREE_EOF_LOCK);
if (error)
return VN_INACTIVE_CACHE;
}
return error;
}
+int
+xfs_reclaim(
+ xfs_inode_t *ip)
+{
+
+ xfs_itrace_entry(ip);
+
+ ASSERT(!VN_MAPPED(VFS_I(ip)));
+
+ /* bad inode, get out here ASAP */
+ if (is_bad_inode(VFS_I(ip))) {
+ xfs_ireclaim(ip);
+ return 0;
+ }
+
+ xfs_ioend_wait(ip);
+
+ ASSERT(XFS_FORCED_SHUTDOWN(ip->i_mount) || ip->i_delayed_blks == 0);
+
+ /*
+ * If we have nothing to flush with this inode then complete the
+ * teardown now, otherwise break the link between the xfs inode and the
+ * linux inode and clean up the xfs inode later. This avoids flushing
+ * the inode to disk during the delete operation itself.
+ *
+ * When breaking the link, we need to set the XFS_IRECLAIMABLE flag
+ * first to ensure that xfs_iunpin() will never see an xfs inode
+ * that has a linux inode being reclaimed. Synchronisation is provided
+ * by the i_flags_lock.
+ */
+ if (!ip->i_update_core && (ip->i_itemp == NULL)) {
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ xfs_iflock(ip);
+ xfs_iflags_set(ip, XFS_IRECLAIMABLE);
+ return xfs_reclaim_inode(ip, 1, XFS_IFLUSH_DELWRI_ELSE_SYNC);
+ }
+ xfs_inode_set_reclaim_tag(ip);
+ return 0;
+}
+
/*
* xfs_alloc_file_space()
* This routine allocates disk space for the given file.
const char *target_path, mode_t mode, struct xfs_inode **ipp,
cred_t *credp);
int xfs_set_dmattrs(struct xfs_inode *ip, u_int evmask, u_int16_t state);
+int xfs_reclaim(struct xfs_inode *ip);
int xfs_change_file_space(struct xfs_inode *ip, int cmd,
xfs_flock64_t *bf, xfs_off_t offset, int attr_flags);
int xfs_rename(struct xfs_inode *src_dp, struct xfs_name *src_name,
u8 space_id;
u8 bit_width;
u8 bit_offset;
- u8 access_size;
+ u8 reserved;
u64 address;
} __attribute__ ((packed));
u32 power;
u32 usage;
u64 time;
- u8 bm_sts_skip;
struct acpi_processor_cx_policy promotion;
struct acpi_processor_cx_policy demotion;
char desc[ACPI_CX_DESC_LEN];
debug_dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
} else
- dma_sync_single_for_cpu(dev, addr + offset, size, dir);
+ dma_sync_single_for_cpu(dev, addr, size, dir);
}
static inline void dma_sync_single_range_for_device(struct device *dev,
debug_dma_sync_single_range_for_device(dev, addr, offset, size, dir);
} else
- dma_sync_single_for_device(dev, addr + offset, size, dir);
+ dma_sync_single_for_device(dev, addr, size, dir);
}
static inline void
{0x1002, 0x3150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \
{0x1002, 0x3152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x3154, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
- {0x1002, 0x3155, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x3E50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_NEW_MEMMAP}, \
{0x1002, 0x3E54, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_NEW_MEMMAP}, \
{0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|RADEON_IS_IGP}, \
{0x1002, 0x5460, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \
{0x1002, 0x5462, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \
{0x1002, 0x5464, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \
+ {0x1002, 0x5657, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5548, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R423|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5549, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R423|RADEON_NEW_MEMMAP}, \
{0x1002, 0x554A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R423|RADEON_NEW_MEMMAP}, \
{0x1002, 0x564F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5652, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5653, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
- {0x1002, 0x5657, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|RADEON_IS_IGP}, \
{0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
{0x1002, 0x5954, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS480|RADEON_IS_IGP|RADEON_IS_MOBILITY|RADEON_IS_IGPGART}, \
{0x1002, 0x9712, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9713, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9714, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
- {0x1002, 0x9715, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0, 0, 0}
#define r128_PCI_IDS \
static inline int lba_28_ok(u64 block, u32 n_block)
{
- /* check the ending block number: must be LESS THAN 0x0fffffff */
- return ((block + n_block) < ((1 << 28) - 1)) && (n_block <= 256);
+ /* check the ending block number */
+ return ((block + n_block) < ((u64)1 << 28)) && (n_block <= 256);
}
static inline int lba_48_ok(u64 block, u32 n_block)
extern void blk_queue_max_discard_sectors(struct request_queue *q,
unsigned int max_discard_sectors);
extern void blk_queue_logical_block_size(struct request_queue *, unsigned short);
-extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);
+extern void blk_queue_physical_block_size(struct request_queue *, unsigned short);
extern void blk_queue_alignment_offset(struct request_queue *q,
unsigned int alignment);
extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
return q->limits.physical_block_size;
}
-static inline unsigned int bdev_physical_block_size(struct block_device *bdev)
+static inline int bdev_physical_block_size(struct block_device *bdev)
{
return queue_physical_block_size(bdev_get_queue(bdev));
}
extern void clocksource_mark_unstable(struct clocksource *cs);
#ifdef CONFIG_GENERIC_TIME_VSYSCALL
-extern void
-update_vsyscall(struct timespec *ts, struct clocksource *c, u32 mult);
+extern void update_vsyscall(struct timespec *ts, struct clocksource *c);
extern void update_vsyscall_tz(void);
#else
-static inline void
-update_vsyscall(struct timespec *ts, struct clocksource *c, u32 mult)
+static inline void update_vsyscall(struct timespec *ts, struct clocksource *c)
{
}
asmlinkage long compat_sys_openat(unsigned int dfd, const char __user *filename,
int flags, int mode);
-extern void __user *compat_alloc_user_space(unsigned long len);
-
#endif /* CONFIG_COMPAT */
#endif /* _LINUX_COMPAT_H */
extern int cpuset_init(void);
extern void cpuset_init_smp(void);
extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
-extern int cpuset_cpus_allowed_fallback(struct task_struct *p);
+extern void cpuset_cpus_allowed_locked(struct task_struct *p,
+ struct cpumask *mask);
extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
#define cpuset_current_mems_allowed (current->mems_allowed)
void cpuset_init_current_mems_allowed(void);
extern void cpuset_task_status_allowed(struct seq_file *m,
struct task_struct *task);
+extern void cpuset_lock(void);
+extern void cpuset_unlock(void);
+
extern int cpuset_mem_spread_node(void);
static inline int cpuset_do_page_mem_spread(void)
{
cpumask_copy(mask, cpu_possible_mask);
}
-
-static inline int cpuset_cpus_allowed_fallback(struct task_struct *p)
+static inline void cpuset_cpus_allowed_locked(struct task_struct *p,
+ struct cpumask *mask)
{
- cpumask_copy(&p->cpus_allowed, cpu_possible_mask);
- return cpumask_any(cpu_active_mask);
+ cpumask_copy(mask, cpu_possible_mask);
}
static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
{
}
+static inline void cpuset_lock(void) {}
+static inline void cpuset_unlock(void) {}
+
static inline int cpuset_mem_spread_node(void)
{
return 0;
/* Code active when included from pre-boot environment: */
-/*
- * Some architectures want to ensure there is no local data in their
- * pre-boot environment, so that data can arbitarily relocated (via
- * GOT references). This is achieved by defining STATIC_RW_DATA to
- * be null.
- */
-#ifndef STATIC_RW_DATA
-#define STATIC_RW_DATA static
-#endif
-
/* A trivial malloc implementation, adapted from
* malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
*/
-STATIC_RW_DATA unsigned long malloc_ptr;
-STATIC_RW_DATA int malloc_count;
+static unsigned long malloc_ptr;
+static int malloc_count;
static void *malloc(int size)
{
__u32 flow_type;
/* The rx flow hash value or the rule DB size */
__u64 data;
- /* The following fields are not valid and must not be used for
- * the ETHTOOL_{G,X}RXFH commands. */
struct ethtool_rx_flow_spec fs;
__u32 rule_cnt;
__u32 rule_locs[0];
#define FBINFO_MISC_USEREVENT 0x10000 /* event request
from userspace */
#define FBINFO_MISC_TILEBLITTING 0x20000 /* use tile blitting */
+#define FBINFO_MISC_FIRMWARE 0x40000 /* a replaceable firmware
+ inited framebuffer */
/* A driver may set this flag to indicate that it does want a set_par to be
* called every time when fbcon_switch is executed. The advantage is that with
*/
#define FBINFO_MISC_ALWAYS_SETPAR 0x40000
-/* where the fb is a firmware driver, and can be replaced with a proper one */
-#define FBINFO_MISC_FIRMWARE 0x80000
/*
* Host and GPU endianness differ.
*/
struct firmware {
size_t size;
const u8 *data;
- struct page **pages;
};
struct device;
extern void cancel_freezing(struct task_struct *p);
#ifdef CONFIG_CGROUP_FREEZER
-extern int cgroup_freezing_or_frozen(struct task_struct *task);
+extern int cgroup_frozen(struct task_struct *task);
#else /* !CONFIG_CGROUP_FREEZER */
-static inline int cgroup_freezing_or_frozen(struct task_struct *task)
-{
- return 0;
-}
+static inline int cgroup_frozen(struct task_struct *task) { return 0; }
#endif /* !CONFIG_CGROUP_FREEZER */
/*
*/
#define FMODE_NOCMTIME ((__force fmode_t)2048)
-/* Expect random access pattern */
-#define FMODE_RANDOM ((__force fmode_t)4096)
-
/*
* The below are the various read and write types that we support. Some of
* them include behavioral modifiers that send information down to the
*
*/
#define RW_MASK 1
-#define RWA_MASK 16
+#define RWA_MASK 2
#define READ 0
#define WRITE 1
-#define READA 16 /* readahead - don't block if no resources */
-#define SWRITE 17 /* for ll_rw_block(), wait for buffer lock */
+#define READA 2 /* read-ahead - don't block if no resources */
+#define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */
#define READ_SYNC (READ | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))
#define READ_META (READ | (1 << BIO_RW_META))
#define WRITE_SYNC_PLUG (WRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_NOIDLE))
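A quick worked check of the restored request-type values (illustrative only, not part of the patch): with RW_MASK = 1 the low bit still classifies every request as a read or a write, so the usual "& RW_MASK" tests keep working:

	READA  & RW_MASK  =  2 & 1  =  0  =  READ	(read-ahead is still a read)
	SWRITE & RW_MASK  =  3 & 1  =  1  =  WRITE	(SWRITE is still a write)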
#define MNT_FORCE	0x00000001	/* Attempt to forcibly umount */
#define MNT_DETACH 0x00000002 /* Just detach from the tree */
#define MNT_EXPIRE 0x00000004 /* Mark for expiry */
-#define UMOUNT_NOFOLLOW 0x00000008 /* Don't follow symlink on umount */
-#define UMOUNT_UNUSED 0x80000000 /* Flag guaranteed to be unused */
extern struct list_head super_blocks;
extern spinlock_t sb_lock;
/* fs/block_dev.c */
extern ssize_t blkdev_aio_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos);
-extern int block_fsync(struct file *filp, struct dentry *dentry, int datasync);
/* fs/splice.c */
extern ssize_t generic_file_splice_read(struct file *, loff_t *,
extern const struct inode_operations simple_dir_inode_operations;
struct tree_descr { char *name; const struct file_operations *ops; int mode; };
struct dentry *d_alloc_name(struct dentry *, const char *);
-extern int simple_fill_super(struct super_block *, unsigned long, struct tree_descr *);
+extern int simple_fill_super(struct super_block *, int, struct tree_descr *);
extern int simple_pin_fs(struct file_system_type *, struct vfsmount **mount, int *count);
extern void simple_release_fs(struct vfsmount **mount, int *count);
* @expires_next: absolute time of the next event which was scheduled
* via clock_set_next_event()
* @hres_active: State of high resolution mode
- * @hang_detected: The last hrtimer interrupt detected a hang
- * @nr_events: Total number of hrtimer interrupt events
- * @nr_retries: Total number of hrtimer interrupt retries
- * @nr_hangs: Total number of hrtimer interrupt hangs
- * @max_hang_time: Maximum time spent in hrtimer_interrupt
+ * @check_clocks:	Indicator: when set, evaluate whether the time source and
+ *			clock event devices allow high resolution mode to be
+ *			activated.
+ * @nr_events: Total number of timer interrupt events
*/
struct hrtimer_cpu_base {
spinlock_t lock;
#ifdef CONFIG_HIGH_RES_TIMERS
ktime_t expires_next;
int hres_active;
- int hang_detected;
unsigned long nr_events;
- unsigned long nr_retries;
- unsigned long nr_hangs;
- ktime_t max_hang_time;
#endif
};
WLAN_CATEGORY_SA_QUERY = 8,
WLAN_CATEGORY_PROTECTED_DUAL_OF_ACTION = 9,
WLAN_CATEGORY_WMM = 17,
- WLAN_CATEGORY_MESH_PLINK = 30, /* Pending ANA approval */
- WLAN_CATEGORY_MESH_PATH_SEL = 32, /* Pending ANA approval */
WLAN_CATEGORY_VENDOR_SPECIFIC_PROTECTED = 126,
WLAN_CATEGORY_VENDOR_SPECIFIC = 127,
};
#define _IF_TUNNEL_H_
#include <linux/types.h>
-#include <asm/byteorder.h>
#ifdef __KERNEL__
#include <linux/ip.h>
* IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
* Used by threaded interrupts which need to keep the
* irq line disabled until the threaded handler has been run.
- * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
- *
*/
#define IRQF_DISABLED 0x00000020
#define IRQF_SAMPLE_RANDOM 0x00000040
#define IRQF_SHARED 0x00000080
#define IRQF_PROBE_SHARED 0x00000100
-#define __IRQF_TIMER 0x00000200
+#define IRQF_TIMER 0x00000200
#define IRQF_PERCPU 0x00000400
#define IRQF_NOBALANCING 0x00000800
#define IRQF_IRQPOLL 0x00001000
#define IRQF_ONESHOT 0x00002000
-#define IRQF_NO_SUSPEND 0x00004000
-
-#define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND)
/*
* Bits used by threaded handlers:
/* Dynamic irq helper functions */
extern void dynamic_irq_init(unsigned int irq);
-void dynamic_irq_init_keep_chip_data(unsigned int irq);
extern void dynamic_irq_cleanup(unsigned int irq);
-void dynamic_irq_cleanup_keep_chip_data(unsigned int irq);
/* Set/get chip/data for an IRQ: */
extern int set_irq_chip(unsigned int irq, struct irq_chip *chip);
* waiting for it to finish.
*/
unsigned int t_synchronous_commit:1;
- unsigned int t_flushed_data_blocks:1;
/*
* For use by the filesystem to store fs-specific data
*/
struct kvm_io_bus {
int dev_count;
-#define NR_IOBUS_DEVS 200
+#define NR_IOBUS_DEVS 6
struct kvm_io_device *devs[NR_IOBUS_DEVS];
};
int user_alloc;
};
-static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
-{
- return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
-}
-
struct kvm_kernel_irq_routing_entry {
u32 gsi;
u32 type;
+++ /dev/null
-#ifndef _LCM_H
-#define _LCM_H
-
-#include <linux/compiler.h>
-
-unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__;
-
-#endif /* _LCM_H */
ATA_EHI_HOTPLUGGED = (1 << 0), /* could have been hotplugged */
ATA_EHI_NO_AUTOPSY = (1 << 2), /* no autopsy */
ATA_EHI_QUIET = (1 << 3), /* be quiet */
- ATA_EHI_NO_RECOVERY = (1 << 4), /* no recovery */
ATA_EHI_DID_SOFTRESET = (1 << 16), /* already soft-reset this port */
	ATA_EHI_DID_HARDRESET	= (1 << 17), /* already hard-reset this port */
#define VM_MAYSHARE 0x00000080
#define VM_GROWSDOWN 0x00000100 /* general info on the segment */
-#if defined(CONFIG_STACK_GROWSUP) || defined(CONFIG_IA64)
#define VM_GROWSUP 0x00000200
-#else
-#define VM_GROWSUP 0x00000000
-#endif
#define VM_PFNMAP 0x00000400 /* Page-ranges managed without "struct page", just pure PFN */
#define VM_DENYWRITE 0x00000800 /* ETXTBSY on write attempts.. */
int set_page_dirty_lock(struct page *page);
int clear_page_dirty_for_io(struct page *page);
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_stack_continue(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
extern unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len);
/* Do stack extension */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
-#if VM_GROWSUP
+#ifdef CONFIG_IA64
extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
-#else
- #define expand_upwards(vma, address) do { } while (0)
#endif
extern int expand_stack_downwards(struct vm_area_struct *vma,
unsigned long address);
within vm_mm. */
/* linked list of VM areas per task, sorted by address */
- struct vm_area_struct *vm_next, *vm_prev;
+ struct vm_area_struct *vm_next;
pgprot_t vm_page_prot; /* Access permissions of this VMA. */
unsigned long vm_flags; /* Flags, see mm.h. */
#define SDIO_BUS_WIDTH_1BIT 0x00
#define SDIO_BUS_WIDTH_4BIT 0x02
-#define SDIO_BUS_ECSI 0x20 /* Enable continuous SPI interrupt */
-#define SDIO_BUS_SCSI 0x40 /* Support continuous SPI interrupt */
#define SDIO_BUS_CD_DISABLE 0x80 /* disable pull-up on DAT3 (pin 1) */
/* zone watermarks, access with *_wmark_pages(zone) macros */
unsigned long watermark[NR_WMARK];
- /*
- * When free pages are below this point, additional steps are taken
- * when reading the number of free pages to avoid per-cpu counter
- * drift allowing watermarks to be breached
- */
- unsigned long percpu_drift_mark;
-
/*
* We don't know if the memory that we're going to allocate will be freeable
* or/and it will be released eventually, so to avoid totally wasting several
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
-#ifdef CONFIG_SMP
-unsigned long zone_nr_free_pages(struct zone *zone);
-#else
-#define zone_nr_free_pages(zone) zone_page_state(zone, NR_FREE_PAGES)
-#endif /* CONFIG_SMP */
-
/*
* The "priority" of VM scanning is how much of the queues we will scan in one
* go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
extern void mask_msi_irq(unsigned int irq);
extern void unmask_msi_irq(unsigned int irq);
extern void read_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg);
-extern void get_cached_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg);
extern void write_msi_msg_desc(struct irq_desc *desc, struct msi_msg *msg);
extern void read_msi_msg(unsigned int irq, struct msi_msg *msg);
-extern void get_cached_msi_msg(unsigned int irq, struct msi_msg *msg);
extern void write_msi_msg(unsigned int irq, struct msi_msg *msg);
struct msi_desc {
extern void netif_carrier_off(struct net_device *dev);
-extern void netif_notify_peers(struct net_device *dev);
-
/**
* netif_dormant_on - mark device as dormant.
* @dev: network device
#define NFS_CAP_ATIME (1U << 11)
#define NFS_CAP_CTIME (1U << 12)
#define NFS_CAP_MTIME (1U << 13)
-#define NFS_CAP_POSIX_LOCK (1U << 14)
/* maximum number of slots to use */
#define NETDEV_PRE_UP 0x000D
#define NETDEV_BONDING_OLDTYPE 0x000E
#define NETDEV_BONDING_NEWTYPE 0x000F
-#define NETDEV_NOTIFY_PEERS 0x0013
#define SYS_DOWN 0x0001 /* Notify of system down */
#define SYS_RESTART SYS_DOWN
}
#endif /* CONFIG_PCI_DOMAINS */
-/* some architectures require additional setup to direct VGA traffic */
-typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode,
- unsigned int command_bits, bool change_bridge);
-extern void pci_register_set_vga_state(arch_set_vga_state_t func);
-
#else /* CONFIG_PCI is not enabled */
/*
#define PCI_DEVICE_ID_VLSI_82C147 0x0105
#define PCI_DEVICE_ID_VLSI_VAS96011 0x0702
-/* AMD RD890 Chipset */
-#define PCI_DEVICE_ID_RD890_IOMMU 0x5a23
-
#define PCI_VENDOR_ID_ADL 0x1005
#define PCI_DEVICE_ID_ADL_2301 0x2301
#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP77_IDE 0x0759
#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP73_SMBUS 0x07D8
#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP79_SMBUS 0x0AA2
-#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP89_SATA 0x0D85
#define PCI_VENDOR_ID_IMS 0x10e0
#define PCI_DEVICE_ID_IMS_TT128 0x9128
#define PCI_DEVICE_ID_AFAVLAB_P030 0x2182
#define PCI_SUBDEVICE_ID_AFAVLAB_P061 0x2150
-#define PCI_VENDOR_ID_BCM_GVC 0x14a4
#define PCI_VENDOR_ID_BROADCOM 0x14e4
#define PCI_DEVICE_ID_TIGON3_5752 0x1600
#define PCI_DEVICE_ID_TIGON3_5752M 0x1601
#define PCI_VENDOR_ID_JMICRON 0x197B
#define PCI_DEVICE_ID_JMICRON_JMB360 0x2360
#define PCI_DEVICE_ID_JMICRON_JMB361 0x2361
-#define PCI_DEVICE_ID_JMICRON_JMB362 0x2362
#define PCI_DEVICE_ID_JMICRON_JMB363 0x2363
#define PCI_DEVICE_ID_JMICRON_JMB365 0x2365
#define PCI_DEVICE_ID_JMICRON_JMB366 0x2366
#define PCI_DEVICE_ID_INTEL_82840_HB 0x1a21
#define PCI_DEVICE_ID_INTEL_82845_HB 0x1a30
#define PCI_DEVICE_ID_INTEL_IOAT 0x1a38
-#define PCI_DEVICE_ID_INTEL_CPT_SMBUS 0x1c22
-#define PCI_DEVICE_ID_INTEL_CPT_LPC1 0x1c42
-#define PCI_DEVICE_ID_INTEL_CPT_LPC2 0x1c43
#define PCI_DEVICE_ID_INTEL_82801AA_0 0x2410
#define PCI_DEVICE_ID_INTEL_82801AA_1 0x2411
#define PCI_DEVICE_ID_INTEL_82801AA_3 0x2413
#define _LINUX_POISON_H
/********** include/linux/list.h **********/
-
-/*
- * Architectures might want to move the poison pointer offset
- * into some well-recognized area such as 0xdead000000000000,
- * that is also not mappable by user-space exploits:
- */
-#ifdef CONFIG_ILLEGAL_POINTER_VALUE
-# define POISON_POINTER_DELTA _AC(CONFIG_ILLEGAL_POINTER_VALUE, UL)
-#else
-# define POISON_POINTER_DELTA 0
-#endif
-
/*
* These are non-NULL pointers that will result in page faults
* under normal circumstances, used to verify that nobody uses
* non-initialized list entries.
*/
-#define LIST_POISON1 ((void *) 0x00100100 + POISON_POINTER_DELTA)
-#define LIST_POISON2 ((void *) 0x00200200 + POISON_POINTER_DELTA)
+#define LIST_POISON1 ((void *) 0x00100100)
+#define LIST_POISON2 ((void *) 0x00200200)
/********** include/linux/timer.h **********/
/*
#define POISON_FREE 0x6b /* for use-after-free poisoning */
#define POISON_END 0xa5 /* end-byte of poisoning */
-/********** mm/hugetlb.c **********/
-/*
- * Private mappings of hugetlb pages use this poisoned value for
- * page->mapping. The core VM should not be doing anything with this mapping
- * but futex requires the existence of some page->mapping value even though it
- * is unused if PAGE_MAPPING_ANON is set.
- */
-#define HUGETLB_POISON ((void *)(0x00300300 + POISON_POINTER_DELTA + PAGE_MAPPING_ANON))
-
/********** arch/$ARCH/mm/init.c **********/
#define POISON_FREE_INITMEM 0xcc
sb->s_qcop->quota_sync(sb, type);
}
-void inode_add_rsv_space(struct inode *inode, qsize_t number);
-void inode_claim_rsv_space(struct inode *inode, qsize_t number);
-void inode_sub_rsv_space(struct inode *inode, qsize_t number);
-
int dquot_initialize(struct inode *inode, int type);
int dquot_drop(struct inode *inode);
struct dquot *dqget(struct super_block *sb, unsigned int id, int type);
int dquot_reserve_space(struct inode *inode, qsize_t number, int prealloc);
int dquot_claim_space(struct inode *inode, qsize_t number);
void dquot_release_reserved_space(struct inode *inode, qsize_t number);
+qsize_t dquot_get_reserved_space(struct inode *inode);
int dquot_free_space(struct inode *inode, qsize_t number);
int dquot_free_inode(const struct inode *inode, qsize_t number);
if (inode->i_sb->dq_op->reserve_space(inode, nr, 0) == NO_QUOTA)
return 1;
}
- else
- inode_add_rsv_space(inode, nr);
return 0;
}
if (inode->i_sb->dq_op->claim_space(inode, nr) == NO_QUOTA)
return 1;
} else
- inode_claim_rsv_space(inode, nr);
+ inode_add_bytes(inode, nr);
mark_inode_dirty(inode);
return 0;
{
if (sb_any_quota_active(inode->i_sb))
inode->i_sb->dq_op->release_rsv(inode, nr);
- else
- inode_sub_rsv_space(inode, nr);
}
static inline void vfs_dq_free_space_nodirty(struct inode *inode, qsize_t nr)
void reiserfs_security_free(struct reiserfs_security_handle *sec);
#endif
-static inline int reiserfs_xattrs_initialized(struct super_block *sb)
-{
- return REISERFS_SB(sb)->priv_root != NULL;
-}
-
#define xattr_size(size) ((size) + sizeof(struct reiserfs_xattr_header))
static inline loff_t reiserfs_xattr_nblocks(struct inode *inode, loff_t size)
{
#include <linux/time.h>
+struct task_struct;
+
/*
* Resource control/accounting header file for linux
*/
*/
#include <asm/resource.h>
-#ifdef __KERNEL__
-
-struct task_struct;
-
int getrusage(struct task_struct *p, int who, struct rusage __user *ru);
-#endif /* __KERNEL__ */
-
#endif
extern void calc_global_load(void);
+extern u64 cpu_nr_migrations(int cpu);
extern unsigned long get_parent_ip(unsigned long addr);
cputime_t utime, stime, cutime, cstime;
cputime_t gtime;
cputime_t cgtime;
-#ifndef CONFIG_VIRT_CPU_ACCOUNTING
- cputime_t prev_utime, prev_stime;
-#endif
unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
unsigned long min_flt, maj_flt, cmin_flt, cmaj_flt;
unsigned long inblock, oublock, cinblock, coublock;
if (sched_smt_power_savings)
return SD_POWERSAVINGS_BALANCE;
- if (!sched_mc_power_savings)
- return SD_PREFER_SIBLING;
-
- return 0;
+ return SD_PREFER_SIBLING;
}
static inline int sd_balance_for_package_power(void)
char *name;
#endif
- unsigned int span_weight;
/*
* Span of all CPUs in this domain.
*
struct sched_class {
const struct sched_class *next;
- void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup,
- bool head);
+ void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup);
void (*dequeue_task) (struct rq *rq, struct task_struct *p, int sleep);
void (*yield_task) (struct rq *rq);
void (*put_prev_task) (struct rq *rq, struct task_struct *p);
#ifdef CONFIG_SMP
- int (*select_task_rq)(struct rq *rq, struct task_struct *p,
- int sd_flag, int flags);
+ int (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
unsigned long (*load_balance) (struct rq *this_rq, int this_cpu,
struct rq *busiest, unsigned long max_load_move,
enum cpu_idle_type idle);
void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
void (*post_schedule) (struct rq *this_rq);
- void (*task_waking) (struct rq *this_rq, struct task_struct *task);
- void (*task_woken) (struct rq *this_rq, struct task_struct *task);
+ void (*task_wake_up) (struct rq *this_rq, struct task_struct *task);
void (*set_cpus_allowed)(struct task_struct *p,
const struct cpumask *newmask);
void (*set_curr_task) (struct rq *rq);
void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
- void (*task_fork) (struct task_struct *p);
+ void (*task_new) (struct rq *rq, struct task_struct *p);
void (*switched_from) (struct rq *this_rq, struct task_struct *task,
int running);
void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
int oldprio, int running);
- unsigned int (*get_rr_interval) (struct rq *rq,
- struct task_struct *task);
+ unsigned int (*get_rr_interval) (struct task_struct *task);
#ifdef CONFIG_FAIR_GROUP_SCHED
- void (*moved_group) (struct task_struct *p, int on_rq);
+ void (*moved_group) (struct task_struct *p);
#endif
};
u64 nr_failed_migrations_running;
u64 nr_failed_migrations_hot;
u64 nr_forced_migrations;
+ u64 nr_forced2_migrations;
u64 nr_wakeups;
u64 nr_wakeups_sync;
/* bitmask of trace recursion */
unsigned long trace_recursion;
#endif /* CONFIG_TRACING */
+ unsigned long stack_start;
};
/* Future-safe accessor for struct task_struct's cpus_allowed. */
extern cputime_t task_utime(struct task_struct *p);
extern cputime_t task_stime(struct task_struct *p);
extern cputime_t task_gtime(struct task_struct *p);
-extern void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st);
extern int task_free_register(struct notifier_block *n);
extern int task_free_unregister(struct notifier_block *n);
extern void sched_clock_idle_wakeup_event(u64 delta_ns);
#ifdef CONFIG_HOTPLUG_CPU
-extern void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p);
extern void idle_task_exit(void);
#else
static inline void idle_task_exit(void) {}
/* ID information about the Chip. */
u16 chip_id;
u16 chip_rev;
- u16 sprom_offset;
u16 sprom_size; /* number of words in sprom */
u8 chip_package;
extern void ssb_bus_unregister(struct ssb_bus *bus);
-/* Does the device have an SPROM? */
-extern bool ssb_is_sprom_available(struct ssb_bus *bus);
-
/* Set a fallback SPROM.
* See kdoc at the function definition for complete documentation. */
extern int ssb_arch_set_fallback_sprom(const struct ssb_sprom *sprom);
#define SSB_CHIPCO_CAP_64BIT 0x08000000 /* 64-bit Backplane */
#define SSB_CHIPCO_CAP_PMU 0x10000000 /* PMU available (rev >= 20) */
#define SSB_CHIPCO_CAP_ECI 0x20000000 /* ECI available (rev >= 20) */
-#define SSB_CHIPCO_CAP_SPROM 0x40000000 /* SPROM present */
#define SSB_CHIPCO_CORECTL 0x0008
#define SSB_CHIPCO_CORECTL_UARTCLK0 0x00000001 /* Drive UART with internal clock */
#define SSB_CHIPCO_CORECTL_SE 0x00000002 /* sync clk out enable (corerev >= 3) */
/** Chip specific Chip-Status register contents. */
-#define SSB_CHIPCO_CHST_4322_SPROM_EXISTS 0x00000040 /* SPROM present */
#define SSB_CHIPCO_CHST_4325_SPROM_OTP_SEL 0x00000003
#define SSB_CHIPCO_CHST_4325_DEFCIS_SEL 0 /* OTP is powered up, use def. CIS, no SPROM */
#define SSB_CHIPCO_CHST_4325_SPROM_SEL 1 /* OTP is powered up, SPROM is present */
#define SSB_CHIPCO_CHST_4325_RCAL_VALUE_SHIFT 4
#define SSB_CHIPCO_CHST_4325_PMUTOP_2B 0x00000200 /* 1 for 2b, 0 for to 2a */
-/** Macros to determine SPROM presence based on Chip-Status register. */
-#define SSB_CHIPCO_CHST_4312_SPROM_PRESENT(status) \
- ((status & SSB_CHIPCO_CHST_4325_SPROM_OTP_SEL) != \
- SSB_CHIPCO_CHST_4325_OTP_SEL)
-#define SSB_CHIPCO_CHST_4322_SPROM_PRESENT(status) \
- (status & SSB_CHIPCO_CHST_4322_SPROM_EXISTS)
-#define SSB_CHIPCO_CHST_4325_SPROM_PRESENT(status) \
- (((status & SSB_CHIPCO_CHST_4325_SPROM_OTP_SEL) != \
- SSB_CHIPCO_CHST_4325_DEFCIS_SEL) && \
- ((status & SSB_CHIPCO_CHST_4325_SPROM_OTP_SEL) != \
- SSB_CHIPCO_CHST_4325_OTP_SEL))
-
/** Clockcontrol masks and values **/
struct ssb_chipcommon {
struct ssb_device *dev;
u32 capabilities;
- u32 status;
/* Fast Powerup Delay constant */
u16 fast_pwrup_delay;
struct ssb_chipcommon_pmu pmu;
#define SSB_SPROMSIZE_WORDS_R4 220
#define SSB_SPROMSIZE_BYTES_R123 (SSB_SPROMSIZE_WORDS_R123 * sizeof(u16))
#define SSB_SPROMSIZE_BYTES_R4 (SSB_SPROMSIZE_WORDS_R4 * sizeof(u16))
-#define SSB_SPROM_BASE1 0x1000
-#define SSB_SPROM_BASE31 0x0800
+#define SSB_SPROM_BASE 0x1000
#define SSB_SPROM_REVISION 0x107E
#define SSB_SPROM_REVISION_REV 0x00FF /* SPROM Revision number */
#define SSB_SPROM_REVISION_CRC 0xFF00 /* SPROM CRC8 value */
__lru_cache_add(page, LRU_INACTIVE_ANON);
}
+static inline void lru_cache_add_active_anon(struct page *page)
+{
+ __lru_cache_add(page, LRU_ACTIVE_ANON);
+}
+
static inline void lru_cache_add_file(struct page *page)
{
__lru_cache_add(page, LRU_INACTIVE_FILE);
}
+static inline void lru_cache_add_active_file(struct page *page)
+{
+ __lru_cache_add(page, LRU_ACTIVE_FILE);
+}
+
/* linux/mm/vmscan.c */
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask, nodemask_t *mask);
#define __SC_STR_TDECL6(t, a, ...) #t, __SC_STR_TDECL5(__VA_ARGS__)
#define SYSCALL_TRACE_ENTER_EVENT(sname) \
- static struct ftrace_event_call \
- __attribute__((__aligned__(4))) event_enter_##sname; \
+ static struct ftrace_event_call event_enter_##sname; \
struct trace_event enter_syscall_print_##sname = { \
.trace = print_syscall_enter, \
}; \
}
#define SYSCALL_TRACE_EXIT_EVENT(sname) \
- static struct ftrace_event_call \
- __attribute__((__aligned__(4))) event_exit_##sname; \
+ static struct ftrace_event_call event_exit_##sname; \
struct trace_event exit_syscall_print_##sname = { \
.trace = print_syscall_exit, \
}; \
#else
-#define tboot_enabled() 0
#define tboot_probe() do { } while (0)
#define tboot_shutdown(shutdown_type) do { } while (0)
#define tboot_sleep(sleep_state, pm1a_control, pm1b_control) \
extern struct tick_sched *tick_get_tick_sched(int cpu);
extern void tick_check_idle(int cpu);
extern int tick_oneshot_mode_active(void);
-# ifndef arch_needs_cpu
-# define arch_needs_cpu(cpu) (0)
-# endif
# else
static inline void tick_clock_notify(void) { }
static inline int tick_check_oneshot_change(int allow_nohz) { return 0; }
| 1*SD_WAKE_AFFINE \
| 1*SD_SHARE_CPUPOWER \
| 0*SD_POWERSAVINGS_BALANCE \
- | 1*SD_SHARE_PKG_RESOURCES \
+ | 0*SD_SHARE_PKG_RESOURCES \
| 0*SD_SERIALIZE \
| 0*SD_PREFER_SIBLING \
, \
unsigned long data[0];
};
-/*
- * We default to dicing tty buffer allocations to this many characters
- * in order to avoid multiple page allocations. We know the size of
- * tty_buffer itself but it must also be taken into account that the
- * the buffer is 256 byte aligned. See tty_buffer_find for the allocation
- * logic this must match
- */
-
-#define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
-
-
struct tty_bufhead {
struct delayed_work work;
spinlock_t lock;
/* device can't handle its Configuration or Interface strings */
#define USB_QUIRK_CONFIG_INTF_STRINGS 0x00000008
-/* device needs a pause during initialization, after we read the device
- descriptor */
-#define USB_QUIRK_DELAY_INIT 0x00000040
-
#endif /* __LINUX_USB_QUIRKS_H */
return x;
}
-/*
- * More accurate version that also considers the currently pending
- * deltas. For that we need to loop over all cpus to find the current
- * deltas. There is no synchronization so the result cannot be
- * exactly accurate either.
- */
-static inline unsigned long zone_page_state_snapshot(struct zone *zone,
- enum zone_stat_item item)
-{
- long x = atomic_long_read(&zone->vm_stat[item]);
-
-#ifdef CONFIG_SMP
- int cpu;
- for_each_online_cpu(cpu)
- x += zone_pcp(zone, cpu)->vm_stat_diff[item];
-
- if (x < 0)
- x = 0;
-#endif
- return x;
-}
-
extern unsigned long global_reclaimable_pages(void);
extern unsigned long zone_reclaimable_pages(struct zone *zone);
struct bdi_writeback;
int inode_wait(void *);
void writeback_inodes_sb(struct super_block *);
-int writeback_inodes_sb_if_idle(struct super_block *);
void sync_inodes_sb(struct super_block *);
void writeback_inodes_wbc(struct writeback_control *wbc);
long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
X##_e -= (_FP_W_TYPE_SIZE - rsize); \
X##_e = rsize - X##_e - 1; \
\
- if (_FP_FRACBITS_##fs < rsize && _FP_WFRACBITS_##fs <= X##_e) \
+ if (_FP_FRACBITS_##fs < rsize && _FP_WFRACBITS_##fs < X##_e) \
__FP_FRAC_SRS_1(ur_, (X##_e - _FP_WFRACBITS_##fs + 1), rsize);\
_FP_FRAC_DISASSEMBLE_##wc(X, ur_, rsize); \
if ((_FP_WFRACBITS_##fs - X##_e - 1) > 0) \
* @IEEE80211_HW_BEACON_FILTER:
* Hardware supports dropping of irrelevant beacon frames to
* avoid waking up cpu.
- * @IEEE80211_HW_REPORTS_TX_ACK_STATUS:
- * Hardware can provide ack status reports of Tx frames to
- * the stack.
*/
enum ieee80211_hw_flags {
IEEE80211_HW_RX_INCLUDES_FCS = 1<<1,
IEEE80211_HW_SUPPORTS_DYNAMIC_PS = 1<<12,
IEEE80211_HW_MFP_CAPABLE = 1<<13,
IEEE80211_HW_BEACON_FILTER = 1<<14,
- IEEE80211_HW_REPORTS_TX_ACK_STATUS = 1<<15,
};
/**
struct iovec *data);
void sctp_chunk_free(struct sctp_chunk *);
void *sctp_addto_chunk(struct sctp_chunk *, int len, const void *data);
-void *sctp_addto_chunk_fixed(struct sctp_chunk *, int len, const void *data);
struct sctp_chunk *sctp_chunkify(struct sk_buff *,
const struct sctp_association *,
struct sock *);
return seq3 - seq2 >= seq1 - seq2;
}
-static inline bool tcp_too_many_orphans(struct sock *sk, int shift)
+static inline int tcp_too_many_orphans(struct sock *sk, int num)
{
- struct percpu_counter *ocp = sk->sk_prot->orphan_count;
- int orphans = percpu_counter_read_positive(ocp);
-
- if (orphans << shift > sysctl_tcp_max_orphans) {
- orphans = percpu_counter_sum_positive(ocp);
- if (orphans << shift > sysctl_tcp_max_orphans)
- return true;
- }
-
- if (sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
- atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2])
- return true;
- return false;
+ return (num > sysctl_tcp_max_orphans) ||
+ (sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
+ atomic_read(&tcp_memory_allocated) > sysctl_tcp_mem[2]);
}
/* syncookies: remember time of last synqueue overflow */
/* Bound MSS / TSO packet size with the half of the window */
static inline int tcp_bound_to_half_wnd(struct tcp_sock *tp, int pktsize)
{
- int cutoff;
-
- /* When peer uses tiny windows, there is no use in packetizing
- * to sub-MSS pieces for the sake of SWS or making sure there
- * are enough packets in the pipe for fast recovery.
- *
- * On the other hand, for extremely large MSS devices, handling
- * smaller than MSS windows in this way does make sense.
- */
- if (tp->max_window >= 512)
- cutoff = (tp->max_window >> 1);
- else
- cutoff = tp->max_window;
-
- if (cutoff && pktsize > cutoff)
- return max_t(int, cutoff, 68U - tp->tcp_header_len);
+ if (tp->max_window && pktsize > (tp->max_window >> 1))
+ return max(tp->max_window >> 1, 68U - tp->tcp_header_len);
else
return pktsize;
}
extern int sysctl_x25_ack_holdback_timeout;
extern int sysctl_x25_forward;
-extern int x25_parse_address_block(struct sk_buff *skb,
- struct x25_address *called_addr,
- struct x25_address *calling_addr);
-
extern int x25_addr_ntoa(unsigned char *, struct x25_address *,
struct x25_address *);
extern int x25_addr_aton(unsigned char *, struct x25_address *,
struct fc_bsg_rport_els r_els;
struct fc_bsg_rport_ct r_ct;
} rqst_data;
-} __attribute__((packed));
+};
/* response (request sense data) structure of the sg_io_v4 */
unsigned int card_type; /* EMU10K1_CARD_* */
unsigned int ecard_ctrl; /* ecard control bits */
unsigned long dma_mask; /* PCI DMA mask */
- unsigned int delay_pcm_irq; /* in samples */
int max_cache_pages; /* max memory size / PAGE_SIZE */
struct snd_dma_buffer silent_page; /* silent page */
struct snd_dma_buffer ptb_pages; /* page table pages */
tstruct \
char __data[0]; \
}; \
- static struct ftrace_event_call \
- __attribute__((__aligned__(4))) event_##name
+ static struct ftrace_event_call event_##name
#undef __cpparg
#define __cpparg(arg...) arg
compress_name);
message = msg_buf;
}
- } else
- error("junk in compressed archive");
+ }
if (state != Reset)
error("junk in compressed archive");
else
{
unsigned int cpu;
+ /*
+	 * Mark the current CPU as a possible migration target.
+	 * The other CPUs will be handled by cpu_up()/cpu_down().
+ */
+ set_cpu_active(smp_processor_id(), true);
+
/* FIXME: This should be done in userspace --RR */
for_each_present_cpu(cpu) {
if (num_online_cpus() >= setup_max_cpus)
int cpu = smp_processor_id();
/* Mark the boot cpu "present", "online" etc for SMP and UP case */
set_cpu_online(cpu, true);
- set_cpu_active(cpu, true);
set_cpu_present(cpu, true);
set_cpu_possible(cpu, true);
}
/*
* init can allocate pages on any node
*/
- set_mems_allowed(node_states[N_HIGH_MEMORY]);
+ set_mems_allowed(node_possible_map);
/*
* init can run on any cpu.
*/
struct semid64_ds __user *up64;
int version = compat_ipc_parse_version(&third);
- memset(&s64, 0, sizeof(s64));
-
if (!uptr)
return -EINVAL;
if (get_user(pad, (u32 __user *) uptr))
int version = compat_ipc_parse_version(&second);
void __user *p;
- memset(&m64, 0, sizeof(m64));
-
switch (second & (~IPC_64)) {
case IPC_INFO:
case IPC_RMID:
int err, err2;
int version = compat_ipc_parse_version(&second);
- memset(&s64, 0, sizeof(s64));
-
switch (second & (~IPC_64)) {
case IPC_RMID:
case SHM_LOCK:
void __user *p = NULL;
if (u_attr && oflag & O_CREAT) {
struct mq_attr attr;
-
- memset(&attr, 0, sizeof(attr));
-
p = compat_alloc_user_space(sizeof(attr));
if (get_compat_mq_attr(&attr, u_attr) ||
copy_to_user(p, &attr, sizeof(attr)))
struct mq_attr __user *p = compat_alloc_user_space(2 * sizeof(*p));
long ret;
- memset(&mqstat, 0, sizeof(mqstat));
-
if (u_mqstat) {
if (get_compat_mq_attr(&mqstat, u_mqstat) ||
copy_to_user(p, &mqstat, sizeof(mqstat)))
dentry = lookup_one_len(name, ipc_ns->mq_mnt->mnt_root, strlen(name));
if (IS_ERR(dentry)) {
error = PTR_ERR(dentry);
- goto out_putfd;
+ goto out_err;
}
mntget(ipc_ns->mq_mnt);
mntput(ipc_ns->mq_mnt);
out_putfd:
put_unused_fd(fd);
+out_err:
fd = error;
out_upsem:
mutex_unlock(&ipc_ns->mq_mnt->mnt_root->d_inode->i_mutex);
{
struct semid_ds out;
- memset(&out, 0, sizeof(out));
-
ipc64_perm_to_ipc_perm(&in->sem_perm, &out.sem_perm);
out.sem_otime = in->sem_otime;
{
struct shmid_ds out;
- memset(&out, 0, sizeof(out));
ipc64_perm_to_ipc_perm(&in->shm_perm, &out.shm_perm);
out.shm_segsz = in->shm_segsz;
out.shm_atime = in->shm_atime;
struct freezer, css);
}
-int cgroup_freezing_or_frozen(struct task_struct *task)
+int cgroup_frozen(struct task_struct *task)
{
struct freezer *freezer;
enum freezer_state state;
task_lock(task);
freezer = task_freezer(task);
- if (!freezer->css.cgroup->parent)
- state = CGROUP_THAWED; /* root cgroup can't be frozen */
- else
- state = freezer->state;
+ state = freezer->state;
task_unlock(task);
- return (state == CGROUP_FREEZING) || (state == CGROUP_FROZEN);
+ return state == CGROUP_FROZEN;
}
/*
#include <linux/posix-timers.h>
#include <linux/times.h>
#include <linux/ptrace.h>
-#include <linux/module.h>
#include <asm/uaccess.h>
{
int ret;
cpumask_var_t mask;
+ unsigned long *k;
+ unsigned int min_length = cpumask_size();
- if ((len * BITS_PER_BYTE) < nr_cpu_ids)
- return -EINVAL;
- if (len & (sizeof(compat_ulong_t)-1))
+ if (nr_cpu_ids <= BITS_PER_COMPAT_LONG)
+ min_length = sizeof(compat_ulong_t);
+
+ if (len < min_length)
return -EINVAL;
if (!alloc_cpumask_var(&mask, GFP_KERNEL))
return -ENOMEM;
ret = sched_getaffinity(pid, mask);
- if (ret == 0) {
- size_t retlen = min_t(size_t, len, cpumask_size());
+ if (ret < 0)
+ goto out;
- if (compat_put_bitmap(user_mask_ptr, cpumask_bits(mask), retlen * 8))
- ret = -EFAULT;
- else
- ret = retlen;
- }
- free_cpumask_var(mask);
+ k = cpumask_bits(mask);
+ ret = compat_put_bitmap(user_mask_ptr, k, min_length * 8);
+ if (ret == 0)
+ ret = min_length;
+out:
+ free_cpumask_var(mask);
return ret;
}
return 0;
}
-
-/*
- * Allocate user-space memory for the duration of a single system call,
- * in order to marshall parameters inside a compat thunk.
- */
-void __user *compat_alloc_user_space(unsigned long len)
-{
- void __user *ptr;
-
- /* If len would occupy more than half of the entire compat space... */
- if (unlikely(len > (((compat_uptr_t)~0) >> 1)))
- return NULL;
-
- ptr = arch_compat_alloc_user_space(len);
-
- if (unlikely(!access_ok(VERIFY_WRITE, ptr, len)))
- return NULL;
-
- return ptr;
-}
-EXPORT_SYMBOL_GPL(compat_alloc_user_space);
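For reference, the marshalling pattern described in the removed comment above is the same one used by the compat mqueue path earlier in this series (compat_sys_mq_open). A minimal sketch, assuming the ipc/compat_mq.c helpers get_compat_mq_attr() and struct compat_mq_attr shown in those hunks; illustrative only, not part of the patch:

/* Hypothetical compat thunk: stage a native mq_attr in scratch user space. */
static long example_compat_thunk(struct compat_mq_attr __user *u_attr)
{
	struct mq_attr attr;
	struct mq_attr __user *p;

	/* Convert the 32-bit layout into the native structure. */
	if (get_compat_mq_attr(&attr, u_attr))
		return -EFAULT;

	/* Scratch user-space area valid for the duration of this syscall. */
	p = compat_alloc_user_space(sizeof(attr));
	if (!p || copy_to_user(p, &attr, sizeof(attr)))
		return -EFAULT;

	/* 'p' can now be handed to the native implementation. */
	return 0;
}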
write_lock_irq(&tasklist_lock);
for_each_process(p) {
- if (task_cpu(p) == cpu && p->state == TASK_RUNNING &&
+ if (task_cpu(p) == cpu &&
(!cputime_eq(p->utime, cputime_zero) ||
!cputime_eq(p->stime, cputime_zero)))
printk(KERN_WARNING "Task %s (pid = %d) is on cpu %d\
}
struct take_cpu_down_param {
- struct task_struct *caller;
unsigned long mod;
void *hcpu;
};
static int __ref take_cpu_down(void *_param)
{
struct take_cpu_down_param *param = _param;
- unsigned int cpu = (unsigned long)param->hcpu;
int err;
/* Ensure this CPU doesn't handle any more interrupts. */
raw_notifier_call_chain(&cpu_chain, CPU_DYING | param->mod,
param->hcpu);
- if (task_cpu(param->caller) == cpu)
- move_task_off_dead_cpu(cpu, param->caller);
/* Force idle task to run as soon as we yield: it should
immediately notice cpu is offline and die quickly. */
sched_idle_next();
static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
{
int err, nr_calls = 0;
+ cpumask_var_t old_allowed;
void *hcpu = (void *)(long)cpu;
unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
struct take_cpu_down_param tcd_param = {
- .caller = current,
.mod = mod,
.hcpu = hcpu,
};
if (!cpu_online(cpu))
return -EINVAL;
+ if (!alloc_cpumask_var(&old_allowed, GFP_KERNEL))
+ return -ENOMEM;
+
cpu_hotplug_begin();
- set_cpu_active(cpu, false);
err = __raw_notifier_call_chain(&cpu_chain, CPU_DOWN_PREPARE | mod,
hcpu, -1, &nr_calls);
if (err == NOTIFY_BAD) {
goto out_release;
}
+ /* Ensure that we are not runnable on dying cpu */
+	cpumask_copy(old_allowed, &current->cpus_allowed);
+ set_cpus_allowed_ptr(current, cpu_active_mask);
+
err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
if (err) {
set_cpu_active(cpu, true);
hcpu) == NOTIFY_BAD)
BUG();
- goto out_release;
+ goto out_allowed;
}
BUG_ON(cpu_online(cpu));
check_for_tasks(cpu);
+out_allowed:
+ set_cpus_allowed_ptr(current, old_allowed);
out_release:
cpu_hotplug_done();
if (!err) {
hcpu) == NOTIFY_BAD)
BUG();
}
+ free_cpumask_var(old_allowed);
return err;
}
goto out;
}
+ set_cpu_active(cpu, false);
+
+ /*
+	 * Make sure all cpus did the reschedule and are not
+	 * using a stale version of the cpu_active_mask.
+	 * This is not strictly necessary because stop_machine()
+ * that we run down the line already provides the required
+ * synchronization. But it's really a side effect and we do not
+ * want to depend on the innards of the stop_machine here.
+ */
+ synchronize_sched();
+
err = _cpu_down(cpu, 0);
out:
return error;
cpu_maps_update_begin();
first_cpu = cpumask_first(cpu_online_mask);
- /*
- * We take down all of the non-boot CPUs in one shot to avoid races
+ /* We take down all of the non-boot CPUs in one shot to avoid races
* with the userspace trying to use the CPU hotplug at the same time
*/
cpumask_clear(frozen_cpus);
+ for_each_online_cpu(cpu) {
+ if (cpu == first_cpu)
+ continue;
+ set_cpu_active(cpu, false);
+ }
+
+ synchronize_sched();
+
printk("Disabling non-boot CPUs ...\n");
for_each_online_cpu(cpu) {
if (cpu == first_cpu)
* call to guarantee_online_mems(), as we know no one is changing
* our task's cpuset.
*
+ * Hold callback_mutex around the two modifications of our tasks
+ * mems_allowed to synchronize with cpuset_mems_allowed().
+ *
* While the mm_struct we are migrating is typically from some
* other task, the task_struct mems_allowed that we are hacking
* is for our current task, which must allocate new pages for that
if (cs == &top_cpuset) {
cpumask_copy(cpus_attach, cpu_possible_mask);
+ to = node_possible_map;
} else {
guarantee_online_cpus(cs, cpus_attach);
+ guarantee_online_mems(cs, &to);
}
- guarantee_online_mems(cs, &to);
/* do per-task migration stuff possibly for each in the threadgroup */
cpuset_attach_task(tsk, &to, cs);
static int cpuset_track_online_nodes(struct notifier_block *self,
unsigned long action, void *arg)
{
- nodemask_t oldmems;
-
cgroup_lock();
switch (action) {
case MEM_ONLINE:
- oldmems = top_cpuset.mems_allowed;
+ case MEM_OFFLINE:
mutex_lock(&callback_mutex);
top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
mutex_unlock(&callback_mutex);
- update_tasks_nodemask(&top_cpuset, &oldmems, NULL);
- break;
- case MEM_OFFLINE:
- /*
- * needn't update top_cpuset.mems_allowed explicitly because
- * scan_for_empty_cpusets() will update it.
- */
- scan_for_empty_cpusets(&top_cpuset);
+ if (action == MEM_OFFLINE)
+ scan_for_empty_cpusets(&top_cpuset);
break;
default:
break;
void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
{
mutex_lock(&callback_mutex);
- task_lock(tsk);
- guarantee_online_cpus(task_cs(tsk), pmask);
- task_unlock(tsk);
+ cpuset_cpus_allowed_locked(tsk, pmask);
mutex_unlock(&callback_mutex);
}
-int cpuset_cpus_allowed_fallback(struct task_struct *tsk)
+/**
+ * cpuset_cpus_allowed_locked - return cpus_allowed mask from a task's cpuset.
+ * Must be called with callback_mutex held.
+ **/
+void cpuset_cpus_allowed_locked(struct task_struct *tsk, struct cpumask *pmask)
{
- const struct cpuset *cs;
- int cpu;
-
- rcu_read_lock();
- cs = task_cs(tsk);
- if (cs)
- cpumask_copy(&tsk->cpus_allowed, cs->cpus_allowed);
- rcu_read_unlock();
-
- /*
- * We own tsk->cpus_allowed, nobody can change it under us.
- *
- * But we used cs && cs->cpus_allowed lockless and thus can
- * race with cgroup_attach_task() or update_cpumask() and get
- * the wrong tsk->cpus_allowed. However, both cases imply the
- * subsequent cpuset_change_cpumask()->set_cpus_allowed_ptr()
- * which takes task_rq_lock().
- *
- * If we are called after it dropped the lock we must see all
- * changes in tsk_cs()->cpus_allowed. Otherwise we can temporary
- * set any mask even if it is not right from task_cs() pov,
- * the pending set_cpus_allowed_ptr() will fix things.
- */
-
- cpu = cpumask_any_and(&tsk->cpus_allowed, cpu_active_mask);
- if (cpu >= nr_cpu_ids) {
- /*
- * Either tsk->cpus_allowed is wrong (see above) or it
- * is actually empty. The latter case is only possible
- * if we are racing with remove_tasks_in_empty_cpuset().
- * Like above we can temporary set any mask and rely on
- * set_cpus_allowed_ptr() as synchronization point.
- */
- cpumask_copy(&tsk->cpus_allowed, cpu_possible_mask);
- cpu = cpumask_any(cpu_active_mask);
- }
-
- return cpu;
+ task_lock(tsk);
+ guarantee_online_cpus(task_cs(tsk), pmask);
+ task_unlock(tsk);
}
void cpuset_init_current_mems_allowed(void)
return 0;
}
+/**
+ * cpuset_lock - lock out any changes to cpuset structures
+ *
+ * The out of memory (oom) code needs to lock cpusets against
+ * changes while it scans the tasklist looking for a
+ * task in an overlapping cpuset. Expose callback_mutex via this
+ * cpuset_lock() routine, so the oom code can lock it, before
+ * locking the task list. The tasklist_lock is a spinlock, so
+ * must be taken inside callback_mutex.
+ */
+
+void cpuset_lock(void)
+{
+ mutex_lock(&callback_mutex);
+}
+
/**
* cpuset_unlock - release lock on cpuset changes
*
{
if (cred->magic != CRED_MAGIC)
return true;
+ if (atomic_read(&cred->usage) < atomic_read(&cred->subscribers))
+ return true;
#ifdef CONFIG_SECURITY_SELINUX
if (selinux_is_enabled()) {
if ((unsigned long) cred->security < PAGE_SIZE)
* We won't ever get here for the group leader, since it
* will have been the last reference on the signal_struct.
*/
- sig->utime = cputime_add(sig->utime, tsk->utime);
- sig->stime = cputime_add(sig->stime, tsk->stime);
+ sig->utime = cputime_add(sig->utime, task_utime(tsk));
+ sig->stime = cputime_add(sig->stime, task_stime(tsk));
sig->gtime = cputime_add(sig->gtime, task_gtime(tsk));
sig->min_flt += tsk->min_flt;
sig->maj_flt += tsk->maj_flt;
if (unlikely(!tsk->pid))
panic("Attempted to kill the idle task!");
- /*
- * If do_exit is called because this processes oopsed, it's possible
- * that get_fs() was left as KERNEL_DS, so reset it to USER_DS before
- * continuing. Amongst other possible reasons, this is to prevent
- * mm_release()->clear_child_tid() from writing to a user-controlled
- * kernel address.
- */
- set_fs(USER_DS);
-
tracehook_report_exit(&code);
validate_creds_for_do_exit(tsk);
struct signal_struct *psig;
struct signal_struct *sig;
unsigned long maxrss;
- cputime_t tgutime, tgstime;
/*
* The resource counters for the group leader are in its
* need to protect the access to parent->signal fields,
* as other threads in the parent group can be right
* here reaping other children at the same time.
- *
- * We use thread_group_times() to get times for the thread
- * group, which consolidates times for all threads in the
- * group including the group leader.
*/
- thread_group_times(p, &tgutime, &tgstime);
spin_lock_irq(&p->real_parent->sighand->siglock);
psig = p->real_parent->signal;
sig = p->signal;
psig->cutime =
cputime_add(psig->cutime,
- cputime_add(tgutime,
- sig->cutime));
+ cputime_add(p->utime,
+ cputime_add(sig->utime,
+ sig->cutime)));
psig->cstime =
cputime_add(psig->cstime,
- cputime_add(tgstime,
- sig->cstime));
+ cputime_add(p->stime,
+ cputime_add(sig->stime,
+ sig->cstime)));
psig->cgtime =
cputime_add(psig->cgtime,
cputime_add(p->gtime,
if (!unlikely(wo->wo_flags & WNOWAIT))
*p_code = 0;
- uid = task_uid(p);
+ /* don't need the RCU readlock here as we're holding a spinlock */
+ uid = __task_cred(p)->uid;
unlock_sig:
spin_unlock_irq(&p->sighand->siglock);
if (!exit_code)
}
if (!unlikely(wo->wo_flags & WNOWAIT))
p->signal->flags &= ~SIGNAL_STOP_CONTINUED;
- uid = task_uid(p);
+ uid = __task_cred(p)->uid;
spin_unlock_irq(&p->sighand->siglock);
pid = task_pid_vnr(p);
#ifdef CONFIG_MMU
static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
{
- struct vm_area_struct *mpnt, *tmp, *prev, **pprev;
+ struct vm_area_struct *mpnt, *tmp, **pprev;
struct rb_node **rb_link, *rb_parent;
int retval;
unsigned long charge;
if (retval)
goto out;
- prev = NULL;
for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
struct file *file;
vma_set_policy(tmp, pol);
tmp->vm_flags &= ~VM_LOCKED;
tmp->vm_mm = mm;
- tmp->vm_next = tmp->vm_prev = NULL;
+ tmp->vm_next = NULL;
anon_vma_link(tmp);
file = tmp->vm_file;
if (file) {
*/
*pprev = tmp;
pprev = &tmp->vm_next;
- tmp->vm_prev = prev;
- prev = tmp;
__vma_link_rb(mm, tmp, rb_link, rb_parent);
rb_link = &tmp->vm_rb.rb_right;
sig->utime = sig->stime = sig->cutime = sig->cstime = cputime_zero;
sig->gtime = cputime_zero;
sig->cgtime = cputime_zero;
-#ifndef CONFIG_VIRT_CPU_ACCOUNTING
- sig->prev_utime = sig->prev_stime = cputime_zero;
-#endif
sig->nvcsw = sig->nivcsw = sig->cnvcsw = sig->cnivcsw = 0;
sig->min_flt = sig->maj_flt = sig->cmin_flt = sig->cmaj_flt = 0;
sig->inblock = sig->oublock = sig->cinblock = sig->coublock = 0;
p->bts = NULL;
+ p->stack_start = stack_start;
+
/* Perform scheduler related setup. Assign this task to a CPU. */
sched_fork(p, clone_flags);
/* Need tasklist lock for parent etc handling! */
write_lock_irq(&tasklist_lock);
+ /*
+ * The task hasn't been attached yet, so its cpus_allowed mask will
+ * not be changed, nor will its assigned CPU.
+ *
+ * The cpus_allowed mask of the parent may have changed after it was
+	 * copied the first time - so re-copy it here, then check the child's CPU
+	 * to ensure it is on a valid CPU (and if not, just force it back to the
+	 * parent's CPU). This avoids a lot of nasty races.
+ */
+ p->cpus_allowed = current->cpus_allowed;
+ p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
+ if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) ||
+ !cpu_online(task_cpu(p))))
+ set_task_cpu(p, smp_processor_id());
+
/* CLONE_PARENT re-uses the old parent */
if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {
p->real_parent = current->real_parent;
static struct task_struct * futex_find_get_task(pid_t pid)
{
struct task_struct *p;
+ const struct cred *cred = current_cred(), *pcred;
rcu_read_lock();
p = find_task_by_vpid(pid);
- if (p)
- get_task_struct(p);
+ if (!p) {
+ p = ERR_PTR(-ESRCH);
+ } else {
+ pcred = __task_cred(p);
+ if (cred->euid != pcred->euid &&
+ cred->euid != pcred->uid)
+ p = ERR_PTR(-ESRCH);
+ else
+ get_task_struct(p);
+ }
rcu_read_unlock();
if (!pid)
return -ESRCH;
p = futex_find_get_task(pid);
- if (!p)
- return -ESRCH;
+ if (IS_ERR(p))
+ return PTR_ERR(p);
/*
* We need to look at the task state flags to figure out,
{
struct futex_hash_bucket *hb;
+ get_futex_key_refs(&q->key);
hb = hash_futex(&q->key);
q->lock_ptr = &hb->lock;
queue_unlock(struct futex_q *q, struct futex_hash_bucket *hb)
{
spin_unlock(&hb->lock);
+ drop_futex_key_refs(&q->key);
}
/**
q->pi_state = NULL;
spin_unlock(q->lock_ptr);
+
+ drop_futex_key_refs(&q->key);
}
/*
}
retry:
- /*
- * Prepare to wait on uaddr. On success, holds hb lock and increments
- * q.key refs.
- */
+ /* Prepare to wait on uaddr. */
ret = futex_wait_setup(uaddr, val, fshared, &q, &hb);
if (ret)
goto out;
/* If we were woken (and unqueued), we succeeded, whatever. */
ret = 0;
- /* unqueue_me() drops q.key ref */
if (!unqueue_me(&q))
- goto out;
+ goto out_put_key;
ret = -ETIMEDOUT;
if (to && !to->task)
- goto out;
+ goto out_put_key;
/*
* We expect signal_pending(current), but we might be the
* victim of a spurious wakeup as well.
*/
- if (!signal_pending(current))
+ if (!signal_pending(current)) {
+ put_futex_key(fshared, &q.key);
goto retry;
+ }
ret = -ERESTARTSYS;
if (!abs_time)
- goto out;
+ goto out_put_key;
	restart = &current_thread_info()->restart_block;
restart->fn = futex_wait_restart;
ret = -ERESTART_RESTARTBLOCK;
+out_put_key:
+ put_futex_key(fshared, &q.key);
out:
if (to) {
hrtimer_cancel(&to->timer);
q.rt_waiter = &rt_waiter;
q.requeue_pi_key = &key2;
- /*
- * Prepare to wait on uaddr. On success, increments q.key (key1) ref
- * count.
- */
+ /* Prepare to wait on uaddr. */
ret = futex_wait_setup(uaddr, val, fshared, &q, &hb);
if (ret)
goto out_key2;
* In order for us to be here, we know our q.key == key2, and since
* we took the hb->lock above, we also know that futex_requeue() has
* completed and we no longer have to concern ourselves with a wakeup
- * race with the atomic proxy lock acquisition by the requeue code. The
- * futex_requeue dropped our key1 reference and incremented our key2
- * reference count.
+	 * race with the atomic proxy lock acquisition by the requeue code.
*/
/* Check if the requeue code acquired the second futex for us. */
* @children: child nodes
* @all: list head for list of all nodes
* @parent: parent node
- * @loaded_info: array of pointers to profiling data sets for loaded object
- * files.
- * @num_loaded: number of profiling data sets for loaded object files.
- * @unloaded_info: accumulated copy of profiling data sets for unloaded
- * object files. Used only when gcov_persist=1.
+ * @info: associated profiling data structure if not a directory
+ * @ghost: when an object file containing profiling data is unloaded we keep a
+ * copy of the profiling data here to allow collecting coverage data
+ * for cleanup code. Such a node is called a "ghost".
* @dentry: main debugfs entry, either a directory or data file
* @links: associated symbolic links
* @name: data file basename
struct list_head children;
struct list_head all;
struct gcov_node *parent;
- struct gcov_info **loaded_info;
- struct gcov_info *unloaded_info;
+ struct gcov_info *info;
+ struct gcov_info *ghost;
struct dentry *dentry;
struct dentry **links;
- int num_loaded;
char name[0];
};
};
/*
- * Return a profiling data set associated with the given node. This is
- * either a data set for a loaded object file or a data set copy in case
- * all associated object files have been unloaded.
+ * Return the profiling data set for a given node. This can either be the
+ * original profiling data structure or a duplicate (also called "ghost")
+ * in case the associated object file has been unloaded.
*/
static struct gcov_info *get_node_info(struct gcov_node *node)
{
- if (node->num_loaded > 0)
- return node->loaded_info[0];
+ if (node->info)
+ return node->info;
- return node->unloaded_info;
-}
-
-/*
- * Return a newly allocated profiling data set which contains the sum of
- * all profiling data associated with the given node.
- */
-static struct gcov_info *get_accumulated_info(struct gcov_node *node)
-{
- struct gcov_info *info;
- int i = 0;
-
- if (node->unloaded_info)
- info = gcov_info_dup(node->unloaded_info);
- else
- info = gcov_info_dup(node->loaded_info[i++]);
- if (!info)
- return NULL;
- for (; i < node->num_loaded; i++)
- gcov_info_add(info, node->loaded_info[i]);
-
- return info;
+ return node->ghost;
}
/*
mutex_lock(&node_lock);
/*
* Read from a profiling data copy to minimize reference tracking
- * complexity and concurrent access and to keep accumulating multiple
- * profiling data sets associated with one node simple.
+ * complexity and concurrent access.
*/
- info = get_accumulated_info(node);
+ info = gcov_info_dup(get_node_info(node));
if (!info)
goto out_unlock;
iter = gcov_iter_new(info);
return NULL;
}
-/*
- * Reset all profiling data associated with the specified node.
- */
-static void reset_node(struct gcov_node *node)
-{
- int i;
-
- if (node->unloaded_info)
- gcov_info_reset(node->unloaded_info);
- for (i = 0; i < node->num_loaded; i++)
- gcov_info_reset(node->loaded_info[i]);
-}
-
static void remove_node(struct gcov_node *node);
/*
* write() implementation for gcov data files. Reset profiling data for the
- * corresponding file. If all associated object files have been unloaded,
- * remove the debug fs node as well.
+ * associated file. If the object file has been unloaded (i.e. this is
+ * a "ghost" node), remove the debug fs node as well.
*/
static ssize_t gcov_seq_write(struct file *file, const char __user *addr,
size_t len, loff_t *pos)
node = get_node_by_name(info->filename);
if (node) {
/* Reset counts or remove node for unloaded modules. */
- if (node->num_loaded == 0)
+ if (node->ghost)
remove_node(node);
else
- reset_node(node);
+ gcov_info_reset(node->info);
}
/* Reset counts for open file. */
gcov_info_reset(info);
INIT_LIST_HEAD(&node->list);
INIT_LIST_HEAD(&node->children);
INIT_LIST_HEAD(&node->all);
- if (node->loaded_info) {
- node->loaded_info[0] = info;
- node->num_loaded = 1;
- }
+ node->info = info;
node->parent = parent;
if (name)
strcpy(node->name, name);
struct gcov_node *node;
node = kzalloc(sizeof(struct gcov_node) + strlen(name) + 1, GFP_KERNEL);
- if (!node)
- goto err_nomem;
- if (info) {
- node->loaded_info = kcalloc(1, sizeof(struct gcov_info *),
- GFP_KERNEL);
- if (!node->loaded_info)
- goto err_nomem;
+ if (!node) {
+ pr_warning("out of memory\n");
+ return NULL;
}
init_node(node, info, name, parent);
/* Differentiate between gcov data file nodes and directory nodes. */
list_add(&node->all, &all_head);
return node;
-
-err_nomem:
- kfree(node);
- pr_warning("out of memory\n");
- return NULL;
}
/* Remove symbolic links associated with node. */
list_del(&node->all);
debugfs_remove(node->dentry);
remove_links(node);
- kfree(node->loaded_info);
- if (node->unloaded_info)
- gcov_info_free(node->unloaded_info);
+ if (node->ghost)
+ gcov_info_free(node->ghost);
kfree(node);
}
/*
* write() implementation for reset file. Reset all profiling data to zero
- * and remove nodes for which all associated object files are unloaded.
+ * and remove ghost nodes.
*/
static ssize_t reset_write(struct file *file, const char __user *addr,
size_t len, loff_t *pos)
mutex_lock(&node_lock);
restart:
list_for_each_entry(node, &all_head, all) {
- if (node->num_loaded > 0)
- reset_node(node);
+ if (node->info)
+ gcov_info_reset(node->info);
else if (list_empty(&node->children)) {
remove_node(node);
/* Several nodes may have gone - restart loop. */
}
/*
- * Associate a profiling data set with an existing node. Needs to be called
- * with node_lock held.
+ * The profiling data set associated with this node is being unloaded. Store a
+ * copy of the profiling data and turn this node into a "ghost".
*/
-static void add_info(struct gcov_node *node, struct gcov_info *info)
+static int ghost_node(struct gcov_node *node)
{
- struct gcov_info **loaded_info;
- int num = node->num_loaded;
-
- /*
- * Prepare new array. This is done first to simplify cleanup in
- * case the new data set is incompatible, the node only contains
- * unloaded data sets and there's not enough memory for the array.
- */
- loaded_info = kcalloc(num + 1, sizeof(struct gcov_info *), GFP_KERNEL);
- if (!loaded_info) {
- pr_warning("could not add '%s' (out of memory)\n",
- info->filename);
- return;
- }
- memcpy(loaded_info, node->loaded_info,
- num * sizeof(struct gcov_info *));
- loaded_info[num] = info;
- /* Check if the new data set is compatible. */
- if (num == 0) {
- /*
- * A module was unloaded, modified and reloaded. The new
- * data set replaces the copy of the last one.
- */
- if (!gcov_info_is_compatible(node->unloaded_info, info)) {
- pr_warning("discarding saved data for %s "
- "(incompatible version)\n", info->filename);
- gcov_info_free(node->unloaded_info);
- node->unloaded_info = NULL;
- }
- } else {
- /*
- * Two different versions of the same object file are loaded.
- * The initial one takes precedence.
- */
- if (!gcov_info_is_compatible(node->loaded_info[0], info)) {
- pr_warning("could not add '%s' (incompatible "
- "version)\n", info->filename);
- kfree(loaded_info);
- return;
- }
+ node->ghost = gcov_info_dup(node->info);
+ if (!node->ghost) {
+ pr_warning("could not save data for '%s' (out of memory)\n",
+ node->info->filename);
+ return -ENOMEM;
}
- /* Overwrite previous array. */
- kfree(node->loaded_info);
- node->loaded_info = loaded_info;
- node->num_loaded = num + 1;
-}
+ node->info = NULL;
-/*
- * Return the index of a profiling data set associated with a node.
- */
-static int get_info_index(struct gcov_node *node, struct gcov_info *info)
-{
- int i;
-
- for (i = 0; i < node->num_loaded; i++) {
- if (node->loaded_info[i] == info)
- return i;
- }
- return -ENOENT;
+ return 0;
}
/*
- * Save the data of a profiling data set which is being unloaded.
+ * Profiling data for this node has been loaded again. Add profiling data
+ * from previous instantiation and turn this node into a regular node.
*/
-static void save_info(struct gcov_node *node, struct gcov_info *info)
+static void revive_node(struct gcov_node *node, struct gcov_info *info)
{
- if (node->unloaded_info)
- gcov_info_add(node->unloaded_info, info);
+ if (gcov_info_is_compatible(node->ghost, info))
+ gcov_info_add(info, node->ghost);
else {
- node->unloaded_info = gcov_info_dup(info);
- if (!node->unloaded_info) {
- pr_warning("could not save data for '%s' "
- "(out of memory)\n", info->filename);
- }
- }
-}
-
-/*
- * Disassociate a profiling data set from a node. Needs to be called with
- * node_lock held.
- */
-static void remove_info(struct gcov_node *node, struct gcov_info *info)
-{
- int i;
-
- i = get_info_index(node, info);
- if (i < 0) {
- pr_warning("could not remove '%s' (not found)\n",
+ pr_warning("discarding saved data for '%s' (version changed)\n",
info->filename);
- return;
}
- if (gcov_persist)
- save_info(node, info);
- /* Shrink array. */
- node->loaded_info[i] = node->loaded_info[node->num_loaded - 1];
- node->num_loaded--;
- if (node->num_loaded > 0)
- return;
- /* Last loaded data set was removed. */
- kfree(node->loaded_info);
- node->loaded_info = NULL;
- node->num_loaded = 0;
- if (!node->unloaded_info)
- remove_node(node);
+ gcov_info_free(node->ghost);
+ node->ghost = NULL;
+ node->info = info;
}
/*
node = get_node_by_name(info->filename);
switch (action) {
case GCOV_ADD:
- if (node)
- add_info(node, info);
- else
+ /* Add new node or revive ghost. */
+ if (!node) {
add_node(info);
+ break;
+ }
+ if (gcov_persist)
+ revive_node(node, info);
+ else {
+ pr_warning("could not add '%s' (already exists)\n",
+ info->filename);
+ }
break;
case GCOV_REMOVE:
- if (node)
- remove_info(node, info);
- else {
+ /* Remove node or turn into ghost. */
+ if (!node) {
pr_warning("could not remove '%s' (not found)\n",
info->filename);
+ break;
}
+ if (gcov_persist) {
+ if (!ghost_node(node))
+ break;
+ }
+ remove_node(node);
break;
}
mutex_unlock(&node_lock);
right = group_info->ngroups;
while (left < right) {
unsigned int mid = (left+right)/2;
- if (grp > GROUP_AT(group_info, mid))
+ int cmp = grp - GROUP_AT(group_info, mid);
+ if (cmp > 0)
left = mid + 1;
- else if (grp < GROUP_AT(group_info, mid))
+ else if (cmp < 0)
right = mid;
else
return 1;
static int hrtimer_reprogram(struct hrtimer *timer,
struct hrtimer_clock_base *base)
{
- struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
+ ktime_t *expires_next = &__get_cpu_var(hrtimer_bases).expires_next;
ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
int res;
if (expires.tv64 < 0)
return -ETIME;
- if (expires.tv64 >= cpu_base->expires_next.tv64)
- return 0;
-
- /*
- * If a hang was detected in the last timer interrupt then we
- * do not schedule a timer which is earlier than the expiry
- * which we enforced in the hang detection. We want the system
- * to make progress.
- */
- if (cpu_base->hang_detected)
+ if (expires.tv64 >= expires_next->tv64)
return 0;
/*
*/
res = tick_program_event(expires, 0);
if (!IS_ERR_VALUE(res))
- cpu_base->expires_next = expires;
+ *expires_next = expires;
return res;
}
remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base)
{
if (hrtimer_is_queued(timer)) {
- unsigned long state;
int reprogram;
/*
debug_deactivate(timer);
timer_stats_hrtimer_clear_start_info(timer);
reprogram = base->cpu_base == &__get_cpu_var(hrtimer_bases);
- /*
- * We must preserve the CALLBACK state flag here,
- * otherwise we could move the timer base in
- * switch_hrtimer_base.
- */
- state = timer->state & HRTIMER_STATE_CALLBACK;
- __remove_hrtimer(timer, base, state, reprogram);
+ __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE,
+ reprogram);
return 1;
}
return 0;
BUG_ON(timer->state != HRTIMER_STATE_CALLBACK);
enqueue_hrtimer(timer, base);
}
-
- WARN_ON_ONCE(!(timer->state & HRTIMER_STATE_CALLBACK));
-
timer->state &= ~HRTIMER_STATE_CALLBACK;
}
#ifdef CONFIG_HIGH_RES_TIMERS
+static int force_clock_reprogram;
+
+/*
+ * After 5 retried iterations we consider that hrtimer_interrupt() is
+ * hanging, which can happen when something slows down the interrupt
+ * handler, such as tracing. We then force the clock reprogramming for
+ * every future hrtimer interrupt to avoid infinite loops, and overwrite
+ * the min_delta_ns threshold accordingly.
+ * The next tick event will be scheduled at 3 times the time we currently
+ * spend in hrtimer_interrupt(). This is a good compromise: the cpus will
+ * spend about 1/4 of their time processing hrtimer interrupts, which is
+ * enough to let the system run without serious starvation.
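+ *
+ * (Illustrative example: if one hrtimer_interrupt() pass took 100us, the
+ * next tick event is programmed at least 300us later, so the cpu spends
+ * roughly 100 / (100 + 300) = 1/4 of its time in the handler.)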
+ */
+
+static inline void
+hrtimer_interrupt_hanging(struct clock_event_device *dev,
+ ktime_t try_time)
+{
+ force_clock_reprogram = 1;
+ dev->min_delta_ns = (unsigned long)try_time.tv64 * 3;
+ printk(KERN_WARNING "hrtimer: interrupt too slow, "
+ "forcing clock min delta to %lu ns\n", dev->min_delta_ns);
+}
/*
* High resolution timer interrupt
* Called with interrupts disabled
{
struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
struct hrtimer_clock_base *base;
- ktime_t expires_next, now, entry_time, delta;
- int i, retries = 0;
+ ktime_t expires_next, now;
+ int nr_retries = 0;
+ int i;
BUG_ON(!cpu_base->hres_active);
cpu_base->nr_events++;
dev->next_event.tv64 = KTIME_MAX;
- entry_time = now = ktime_get();
-retry:
+ retry:
+ /* 5 retries is enough to notice a hang */
+ if (!(++nr_retries % 5))
+ hrtimer_interrupt_hanging(dev, ktime_sub(ktime_get(), now));
+
+ now = ktime_get();
+
expires_next.tv64 = KTIME_MAX;
spin_lock(&cpu_base->lock);
spin_unlock(&cpu_base->lock);
/* Reprogramming necessary ? */
- if (expires_next.tv64 == KTIME_MAX ||
- !tick_program_event(expires_next, 0)) {
- cpu_base->hang_detected = 0;
- return;
+ if (expires_next.tv64 != KTIME_MAX) {
+ if (tick_program_event(expires_next, force_clock_reprogram))
+ goto retry;
}
-
- /*
- * The next timer was already expired due to:
- * - tracing
- * - long lasting callbacks
- * - being scheduled away when running in a VM
- *
- * We need to prevent that we loop forever in the hrtimer
- * interrupt routine. We give it 3 attempts to avoid
- * overreacting on some spurious event.
- */
- now = ktime_get();
- cpu_base->nr_retries++;
- if (++retries < 3)
- goto retry;
- /*
- * Give the system a chance to do something else than looping
- * here. We stored the entry time, so we know exactly how long
- * we spent here. We schedule the next event this amount of
- * time away.
- */
- cpu_base->nr_hangs++;
- cpu_base->hang_detected = 1;
- delta = ktime_sub(now, entry_time);
- if (delta.tv64 > cpu_base->max_hang_time.tv64)
- cpu_base->max_hang_time = delta;
- /*
- * Limit it to a sensible value as we enforce a longer
- * delay. Give the CPU at least 100ms to catch up.
- */
- if (delta.tv64 > 100 * NSEC_PER_MSEC)
- expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
- else
- expires_next = ktime_add(now, delta);
- tick_program_event(expires_next, 1);
- printk_once(KERN_WARNING "hrtimer: interrupt took %llu ns\n",
- ktime_to_ns(delta));
}
/*
#include "internals.h"
-static void dynamic_irq_init_x(unsigned int irq, bool keep_chip_data)
+/**
+ * dynamic_irq_init - initialize a dynamically allocated irq
+ * @irq: irq number to initialize
+ */
+void dynamic_irq_init(unsigned int irq)
{
struct irq_desc *desc;
unsigned long flags;
desc->depth = 1;
desc->msi_desc = NULL;
desc->handler_data = NULL;
- if (!keep_chip_data)
- desc->chip_data = NULL;
+ desc->chip_data = NULL;
desc->action = NULL;
desc->irq_count = 0;
desc->irqs_unhandled = 0;
}
/**
- * dynamic_irq_init - initialize a dynamically allocated irq
- * @irq: irq number to initialize
- */
-void dynamic_irq_init(unsigned int irq)
-{
- dynamic_irq_init_x(irq, false);
-}
-
-/**
- * dynamic_irq_init_keep_chip_data - initialize a dynamically allocated irq
+ * dynamic_irq_cleanup - cleanup a dynamically allocated irq
* @irq: irq number to initialize
- *
- * does not set irq_to_desc(irq)->chip_data to NULL
*/
-void dynamic_irq_init_keep_chip_data(unsigned int irq)
-{
- dynamic_irq_init_x(irq, true);
-}
-
-static void dynamic_irq_cleanup_x(unsigned int irq, bool keep_chip_data)
+void dynamic_irq_cleanup(unsigned int irq)
{
struct irq_desc *desc = irq_to_desc(irq);
unsigned long flags;
}
desc->msi_desc = NULL;
desc->handler_data = NULL;
- if (!keep_chip_data)
- desc->chip_data = NULL;
+ desc->chip_data = NULL;
desc->handle_irq = handle_bad_irq;
desc->chip = &no_irq_chip;
desc->name = NULL;
spin_unlock_irqrestore(&desc->lock, flags);
}
-/**
- * dynamic_irq_cleanup - cleanup a dynamically allocated irq
- * @irq: irq number to initialize
- */
-void dynamic_irq_cleanup(unsigned int irq)
-{
- dynamic_irq_cleanup_x(irq, false);
-}
-
-/**
- * dynamic_irq_cleanup_keep_chip_data - cleanup a dynamically allocated irq
- * @irq: irq number to initialize
- *
- * does not set irq_to_desc(irq)->chip_data to NULL
- */
-void dynamic_irq_cleanup_keep_chip_data(unsigned int irq)
-{
- dynamic_irq_cleanup_x(irq, true);
-}
-
/**
* set_irq_chip - set the irq chip for an irq
void __disable_irq(struct irq_desc *desc, unsigned int irq, bool suspend)
{
if (suspend) {
- if (!desc->action || (desc->action->flags & IRQF_NO_SUSPEND))
+ if (!desc->action || (desc->action->flags & IRQF_TIMER))
return;
desc->status |= IRQ_SUSPENDED;
}
/* note that IRQF_TRIGGER_MASK == IRQ_TYPE_SENSE_MASK */
desc->status &= ~(IRQ_LEVEL | IRQ_TYPE_SENSE_MASK);
desc->status |= flags;
-
- if (chip != desc->chip)
- irq_chip_set_defaults(desc->chip);
}
return ret;
if (new->flags & IRQF_ONESHOT)
desc->status |= IRQ_ONESHOT;
- /*
- * Force MSI interrupts to run with interrupts
- * disabled. The multi vector cards can cause stack
- * overflows due to nested interrupts when enough of
- * them are directed to a core and fire at the same
- * time.
- */
- if (desc->msi_desc)
- new->flags |= IRQF_DISABLED;
-
if (!(desc->status & IRQ_NOAUTOEN)) {
desc->depth = 0;
desc->status &= ~IRQ_DISABLED;
set_task_comm(tsk, "kthreadd");
ignore_signals(tsk);
set_cpus_allowed_ptr(tsk, cpu_all_mask);
- set_mems_allowed(node_states[N_HIGH_MEMORY]);
+ set_mems_allowed(node_possible_map);
current->flags |= PF_NOFREEZE | PF_FREEZER_NOSIG;
account_global_scheduler_latency(tsk, &lat);
- for (i = 0; i < tsk->latency_record_count; i++) {
+ /*
+	 * short term hack: once we have more than 32 records we stop; in the future we could recycle them:
+ */
+ tsk->latency_record_count++;
+ if (tsk->latency_record_count >= LT_SAVECOUNT)
+ goto out_unlock;
+
+ for (i = 0; i < LT_SAVECOUNT; i++) {
struct latency_record *mylat;
int same = 1;
}
}
- /*
- * short term hack; if we're > 32 we stop; future we recycle:
- */
- if (tsk->latency_record_count >= LT_SAVECOUNT)
- goto out_unlock;
-
/* Allocated a new one: */
- i = tsk->latency_record_count++;
+ i = tsk->latency_record_count;
memcpy(&tsk->latency_record[i], &lat, sizeof(struct latency_record));
out_unlock:
mutex_lock(&module_mutex);
/* Store the name of the last unloaded module for diagnostic purposes */
strlcpy(last_unloaded_module, mod->name, sizeof(last_unloaded_module));
+ ddebug_remove_module(mod->name);
free_module(mod);
out:
remove_sect_attrs(mod);
mod_kobject_remove(mod);
- /* Remove dynamic debug info */
- ddebug_remove_module(mod->name);
-
/* Arch-specific cleanup. */
module_arch_cleanup(mod);
for (;;) {
struct thread_info *owner;
- /*
- * If we own the BKL, then don't spin. The owner of
- * the mutex might be waiting on us to release the BKL.
- */
- if (unlikely(current->lock_depth >= 0))
- break;
-
/*
* If there's an owner, wait for it to either
* release the lock or go to sleep.
struct perf_event_context *ctx;
struct file *event_file = NULL;
struct file *group_file = NULL;
- int event_fd;
int fput_needed = 0;
+ int fput_needed2 = 0;
int err;
/* for future expandability... */
return -EINVAL;
}
- event_fd = get_unused_fd_flags(O_RDWR);
- if (event_fd < 0)
- return event_fd;
-
/*
* Get the target context (task or percpu):
*/
ctx = find_get_context(pid, cpu);
- if (IS_ERR(ctx)) {
- err = PTR_ERR(ctx);
- goto err_fd;
- }
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
/*
* Look up the group leader (we will attach this event to it):
if (IS_ERR(event))
goto err_put_context;
- event_file = anon_inode_getfile("[perf_event]", &perf_fops, event, O_RDWR);
- if (IS_ERR(event_file)) {
- err = PTR_ERR(event_file);
+ err = anon_inode_getfd("[perf_event]", &perf_fops, event, 0);
+ if (err < 0)
+ goto err_free_put_context;
+
+ event_file = fget_light(err, &fput_needed2);
+ if (!event_file)
goto err_free_put_context;
- }
if (flags & PERF_FLAG_FD_OUTPUT) {
err = perf_event_set_output(event, group_fd);
list_add_tail(&event->owner_entry, ¤t->perf_event_list);
mutex_unlock(¤t->perf_event_mutex);
- fput_light(group_file, fput_needed);
- fd_install(event_fd, event_file);
- return event_fd;
-
err_fput_free_put_context:
- fput(event_file);
+ fput_light(event_file, fput_needed2);
+
err_free_put_context:
- free_event(event);
+ if (err < 0)
+ kfree(event);
+
err_put_context:
+ if (err < 0)
+ put_ctx(ctx);
+
fput_light(group_file, fput_needed);
- put_ctx(ctx);
-err_fd:
- put_unused_fd(event_fd);
+
return err;
}
return ret;
}
-static void __init perf_event_init_all_cpus(void)
-{
- int cpu;
- struct perf_cpu_context *cpuctx;
-
- for_each_possible_cpu(cpu) {
- cpuctx = &per_cpu(perf_cpu_context, cpu);
- __perf_event_init_context(&cpuctx->ctx, NULL);
- }
-}
-
static void __cpuinit perf_event_init_cpu(int cpu)
{
struct perf_cpu_context *cpuctx;
cpuctx = &per_cpu(perf_cpu_context, cpu);
+ __perf_event_init_context(&cpuctx->ctx, NULL);
spin_lock(&perf_resource_lock);
cpuctx->max_pertask = perf_max_events - perf_reserved_percpu;
void __init perf_event_init(void)
{
- perf_event_init_all_cpus();
perf_cpu_notify(&perf_cpu_nb, (unsigned long)CPU_UP_PREPARE,
(void *)(long)smp_processor_id());
perf_cpu_notify(&perf_cpu_nb, (unsigned long)CPU_ONLINE,
new_timer->it_id = (timer_t) new_timer_id;
new_timer->it_clock = which_clock;
new_timer->it_overrun = -1;
+ error = CLOCK_DISPATCH(which_clock, timer_create, (new_timer));
+ if (error)
+ goto out;
+ /*
+ * return the timer_id now. The next step is hard to
+ * back out if there is an error.
+ */
if (copy_to_user(created_timer_id,
&new_timer_id, sizeof (new_timer_id))) {
error = -EFAULT;
new_timer->sigq->info.si_tid = new_timer->it_id;
new_timer->sigq->info.si_code = SI_TIMER;
- error = CLOCK_DISPATCH(which_clock, timer_create, (new_timer));
- if (error)
- goto out;
-
spin_lock_irq(¤t->sighand->siglock);
new_timer->it_signal = current->signal;
list_add(&new_timer->list, ¤t->signal->posix_timers);
if (nosig_only && should_send_signal(p))
continue;
- if (cgroup_freezing_or_frozen(p))
+ if (cgroup_frozen(p))
continue;
thaw_process(p);
memory_bm_position_reset(©_bm);
- while (to_free_normal > 0 || to_free_highmem > 0) {
+ while (to_free_normal > 0 && to_free_highmem > 0) {
unsigned long pfn = memory_bm_next_pfn(©_bm);
struct page *page = pfn_to_page(pfn);
return 0;
prof_buffer = vmalloc(buffer_bytes);
- if (prof_buffer) {
- memset(prof_buffer, 0, buffer_bytes);
+ if (prof_buffer)
return 0;
- }
free_cpumask_var(prof_cpu_mask);
return -ENOMEM;
struct load_weight load;
unsigned long nr_load_updates;
u64 nr_switches;
+ u64 nr_migrations_in;
struct cfs_rq cfs;
struct rt_rq rt;
size_t cnt, loff_t *ppos)
{
char buf[64];
- char *cmp;
+ char *cmp = buf;
int neg = 0;
int i;
return -EFAULT;
buf[cnt] = 0;
- cmp = strstrip(buf);
if (strncmp(buf, "NO_", 3) == 0) {
neg = 1;
}
for (i = 0; sched_feat_names[i]; i++) {
- if (strcmp(cmp, sched_feat_names[i]) == 0) {
+ int len = strlen(sched_feat_names[i]);
+
+ if (strncmp(cmp, sched_feat_names[i], len) == 0) {
if (neg)
sysctl_sched_features &= ~(1UL << i);
else
}
#endif /* __ARCH_WANT_UNLOCKED_CTXSW */
-/*
- * Check whether the task is waking, we use this to synchronize ->cpus_allowed
- * against ttwu().
- */
-static inline int task_is_waking(struct task_struct *p)
-{
- return unlikely(p->state == TASK_WAKING);
-}
-
/*
* __task_rq_lock - lock the runqueue a given task resides on.
* Must be called interrupts disabled.
static inline struct rq *__task_rq_lock(struct task_struct *p)
__acquires(rq->lock)
{
- struct rq *rq;
-
for (;;) {
- rq = task_rq(p);
+ struct rq *rq = task_rq(p);
spin_lock(&rq->lock);
if (likely(rq == task_rq(p)))
return rq;
s64 period = sched_avg_period();
while ((s64)(rq->clock - rq->age_stamp) > period) {
- /*
- * Inline assembly required to prevent the compiler
- * optimising this loop into a divmod call.
- * See __iter_div_u64_rem() for another example of this.
- */
- asm("" : "+rm" (rq->age_stamp));
rq->age_stamp += period;
rq->rt_avg /= 2;
}
*/
static int tg_shares_up(struct task_group *tg, void *data)
{
- unsigned long weight, rq_weight = 0, sum_weight = 0, shares = 0;
+ unsigned long weight, rq_weight = 0, shares = 0;
unsigned long *usd_rq_weight;
struct sched_domain *sd = data;
unsigned long flags;
weight = tg->cfs_rq[i]->load.weight;
usd_rq_weight[i] = weight;
- rq_weight += weight;
/*
* If there are currently no tasks on the cpu pretend there
* is one of average load so that when a new task gets to
if (!weight)
weight = NICE_0_LOAD;
- sum_weight += weight;
+ rq_weight += weight;
shares += tg->cfs_rq[i]->shares;
}
- if (!rq_weight)
- rq_weight = sum_weight;
-
if ((!shares && rq_weight) || shares > tg->shares)
shares = tg->shares;
static void update_h_load(long cpu)
{
+ if (root_task_group_empty())
+ return;
+
walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
}
static void calc_load_account_active(struct rq *this_rq);
static void update_sysctl(void);
-static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-{
- set_task_rq(p, cpu);
-#ifdef CONFIG_SMP
- /*
- * After ->cpu is set up to a new value, task_rq_lock(p, ...) can be
- * successfuly executed on another CPU. We must ensure that updates of
- * per-task data have been completed by this moment.
- */
- smp_wmb();
- task_thread_info(p)->cpu = cpu;
-#endif
-}
-
#include "sched_stats.h"
#include "sched_idletask.c"
#include "sched_fair.c"
*avg += diff >> 3;
}
-static void
-enqueue_task(struct rq *rq, struct task_struct *p, int wakeup, bool head)
+static void enqueue_task(struct rq *rq, struct task_struct *p, int wakeup)
{
if (wakeup)
p->se.start_runtime = p->se.sum_exec_runtime;
sched_info_queued(p);
- p->sched_class->enqueue_task(rq, p, wakeup, head);
+ p->sched_class->enqueue_task(rq, p, wakeup);
p->se.on_rq = 1;
}
if (task_contributes_to_load(p))
rq->nr_uninterruptible--;
- enqueue_task(rq, p, wakeup, false);
+ enqueue_task(rq, p, wakeup);
inc_nr_running(rq);
}
return cpu_curr(task_cpu(p)) == p;
}
+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
+{
+ set_task_rq(p, cpu);
+#ifdef CONFIG_SMP
+ /*
+ * After ->cpu is set up to a new value, task_rq_lock(p, ...) can be
+	 * successfully executed on another CPU. We must ensure that updates of
+ * per-task data have been completed by this moment.
+ */
+ smp_wmb();
+ task_thread_info(p)->cpu = cpu;
+#endif
+}
+
static inline void check_class_changed(struct rq *rq, struct task_struct *p,
const struct sched_class *prev_class,
int oldprio, int running)
*/
void kthread_bind(struct task_struct *p, unsigned int cpu)
{
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
/* Must have done schedule() in kthread() before we set_task_cpu */
if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE)) {
WARN_ON(1);
return;
}
+ spin_lock_irqsave(&rq->lock, flags);
+ set_task_cpu(p, cpu);
p->cpus_allowed = cpumask_of_cpu(cpu);
p->rt.nr_cpus_allowed = 1;
p->flags |= PF_THREAD_BOUND;
+ spin_unlock_irqrestore(&rq->lock, flags);
}
EXPORT_SYMBOL(kthread_bind);
void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
{
int old_cpu = task_cpu(p);
+ struct rq *old_rq = cpu_rq(old_cpu), *new_rq = cpu_rq(new_cpu);
+ struct cfs_rq *old_cfsrq = task_cfs_rq(p),
+ *new_cfsrq = cpu_cfs_rq(old_cfsrq, new_cpu);
+ u64 clock_offset;
-#ifdef CONFIG_SCHED_DEBUG
- /*
- * We should never call set_task_cpu() on a blocked task,
- * ttwu() will sort out the placement.
- */
- WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
- !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));
-#endif
+ clock_offset = old_rq->clock - new_rq->clock;
trace_sched_migrate_task(p, new_cpu);
+#ifdef CONFIG_SCHEDSTATS
+ if (p->se.wait_start)
+ p->se.wait_start -= clock_offset;
+ if (p->se.sleep_start)
+ p->se.sleep_start -= clock_offset;
+ if (p->se.block_start)
+ p->se.block_start -= clock_offset;
+#endif
if (old_cpu != new_cpu) {
p->se.nr_migrations++;
+ new_rq->nr_migrations_in++;
+#ifdef CONFIG_SCHEDSTATS
+ if (task_hot(p, old_rq->clock, NULL))
+ schedstat_inc(p, se.nr_forced2_migrations);
+#endif
perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS,
1, 1, NULL, 0);
}
+ p->se.vruntime -= old_cfsrq->min_vruntime -
+ new_cfsrq->min_vruntime;
__set_task_cpu(p, new_cpu);
}
/*
* If the task is not on a runqueue (and not running), then
- * the next wake-up will properly place the task.
+ * it is sufficient to simply update the task's cpu field.
*/
- if (!p->se.on_rq && !task_running(rq, p))
+ if (!p->se.on_rq && !task_running(rq, p)) {
+ set_task_cpu(p, dest_cpu);
return 0;
+ }
init_completion(&req->done);
req->task = p;
preempt_enable();
}
-#ifdef CONFIG_SMP
-/*
- * ->cpus_allowed is protected by either TASK_WAKING or rq->lock held.
- */
-static int select_fallback_rq(int cpu, struct task_struct *p)
-{
- int dest_cpu;
- const struct cpumask *nodemask = cpumask_of_node(cpu_to_node(cpu));
-
- /* Look for allowed, online CPU in same node. */
- for_each_cpu_and(dest_cpu, nodemask, cpu_active_mask)
- if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
- return dest_cpu;
-
- /* Any allowed, online CPU? */
- dest_cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
- if (dest_cpu < nr_cpu_ids)
- return dest_cpu;
-
- /* No more Mr. Nice Guy. */
- if (unlikely(dest_cpu >= nr_cpu_ids)) {
- dest_cpu = cpuset_cpus_allowed_fallback(p);
- /*
- * Don't tell them about moving exiting tasks or
- * kernel threads (both mm NULL), since they never
- * leave kernel.
- */
- if (p->mm && printk_ratelimit()) {
- printk(KERN_INFO "process %d (%s) no "
- "longer affine to cpu%d\n",
- task_pid_nr(p), p->comm, cpu);
- }
- }
-
- return dest_cpu;
-}
-
-/*
- * The caller (fork, wakeup) owns TASK_WAKING, ->cpus_allowed is stable.
- */
-static inline
-int select_task_rq(struct rq *rq, struct task_struct *p, int sd_flags, int wake_flags)
-{
- int cpu = p->sched_class->select_task_rq(rq, p, sd_flags, wake_flags);
-
- /*
- * In order not to call set_task_cpu() on a blocking task we need
- * to rely on ttwu() to place the task on a valid ->cpus_allowed
- * cpu.
- *
- * Since this is common to all placement strategies, this lives here.
- *
- * [ this allows ->select_task() to simply return task_cpu(p) and
- * not worry about this generic constraint ]
- */
- if (unlikely(!cpumask_test_cpu(cpu, &p->cpus_allowed) ||
- !cpu_online(cpu)))
- cpu = select_fallback_rq(task_cpu(p), p);
-
- return cpu;
-}
-#endif
-
/***
* try_to_wake_up - wake up a thread
* @p: the to-be-woken-up thread
*
* First fix up the nr_uninterruptible count:
*/
- if (task_contributes_to_load(p)) {
- if (likely(cpu_online(orig_cpu)))
- rq->nr_uninterruptible--;
- else
- this_rq()->nr_uninterruptible--;
- }
+ if (task_contributes_to_load(p))
+ rq->nr_uninterruptible--;
p->state = TASK_WAKING;
+ task_rq_unlock(rq, &flags);
- if (p->sched_class->task_waking)
- p->sched_class->task_waking(rq, p);
-
- cpu = select_task_rq(rq, p, SD_BALANCE_WAKE, wake_flags);
+ cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
if (cpu != orig_cpu)
set_task_cpu(p, cpu);
- __task_rq_unlock(rq);
- rq = cpu_rq(cpu);
- spin_lock(&rq->lock);
- update_rq_clock(rq);
+ rq = task_rq_lock(p, &flags);
+
+ if (rq != orig_rq)
+ update_rq_clock(rq);
- /*
- * We migrated the task without holding either rq->lock, however
- * since the task is not on the task list itself, nobody else
- * will try and migrate the task, hence the rq should match the
- * cpu we just moved it to.
- */
- WARN_ON(task_cpu(p) != cpu);
WARN_ON(p->state != TASK_WAKING);
+ cpu = task_cpu(p);
#ifdef CONFIG_SCHEDSTATS
schedstat_inc(rq, ttwu_count);
p->state = TASK_RUNNING;
#ifdef CONFIG_SMP
- if (p->sched_class->task_woken)
- p->sched_class->task_woken(rq, p);
+ if (p->sched_class->task_wake_up)
+ p->sched_class->task_wake_up(rq, p);
if (unlikely(rq->idle_stamp)) {
u64 delta = rq->clock - rq->idle_stamp;
p->se.nr_failed_migrations_running = 0;
p->se.nr_failed_migrations_hot = 0;
p->se.nr_forced_migrations = 0;
+ p->se.nr_forced2_migrations = 0;
p->se.nr_wakeups = 0;
p->se.nr_wakeups_sync = 0;
#ifdef CONFIG_PREEMPT_NOTIFIERS
INIT_HLIST_HEAD(&p->preempt_notifiers);
#endif
+
+ /*
+ * We mark the process as running here, but have not actually
+ * inserted it onto the runqueue yet. This guarantees that
+ * nobody will actually run it, and a signal or other external
+ * event cannot wake it up and insert it on the runqueue either.
+ */
+ p->state = TASK_RUNNING;
}
/*
int cpu = get_cpu();
__sched_fork(p);
- /*
- * We mark the process as running here. This guarantees that
- * nobody will actually run it, and a signal or other external
- * event cannot wake it up and insert it on the runqueue either.
- */
- p->state = TASK_RUNNING;
/*
* Revert to default priority/policy on fork if requested.
if (!rt_prio(p->prio))
p->sched_class = &fair_sched_class;
- if (p->sched_class->task_fork)
- p->sched_class->task_fork(p);
-
+#ifdef CONFIG_SMP
+ cpu = p->sched_class->select_task_rq(p, SD_BALANCE_FORK, 0);
+#endif
set_task_cpu(p, cpu);
#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
{
unsigned long flags;
struct rq *rq;
- int cpu = get_cpu();
-
-#ifdef CONFIG_SMP
- rq = task_rq_lock(p, &flags);
- p->state = TASK_WAKING;
-
- /*
- * Fork balancing, do it here and not earlier because:
- * - cpus_allowed can change in the fork path
- * - any previously selected cpu might disappear through hotplug
- *
- * We set TASK_WAKING so that select_task_rq() can drop rq->lock
- * without people poking at ->cpus_allowed.
- */
- cpu = select_task_rq(rq, p, SD_BALANCE_FORK, 0);
- set_task_cpu(p, cpu);
-
- p->state = TASK_RUNNING;
- task_rq_unlock(rq, &flags);
-#endif
rq = task_rq_lock(p, &flags);
+ BUG_ON(p->state != TASK_RUNNING);
update_rq_clock(rq);
- activate_task(rq, p, 0);
+
+ if (!p->sched_class->task_new || !current->se.on_rq) {
+ activate_task(rq, p, 0);
+ } else {
+ /*
+ * Let the scheduling class do new task startup
+ * management (if any):
+ */
+ p->sched_class->task_new(rq, p);
+ inc_nr_running(rq);
+ }
trace_sched_wakeup_new(rq, p, 1);
check_preempt_curr(rq, p, WF_FORK);
#ifdef CONFIG_SMP
- if (p->sched_class->task_woken)
- p->sched_class->task_woken(rq, p);
+ if (p->sched_class->task_wake_up)
+ p->sched_class->task_wake_up(rq, p);
#endif
task_rq_unlock(rq, &flags);
- put_cpu();
}
#ifdef CONFIG_PREEMPT_NOTIFIERS
}
}
+/*
+ * Externally visible per-cpu scheduler statistics:
+ * cpu_nr_migrations(cpu) - number of migrations into that cpu
+ */
+u64 cpu_nr_migrations(int cpu)
+{
+ return cpu_rq(cpu)->nr_migrations_in;
+}
+
/*
* Update rq->cpu_load[] statistics. This function is usually called every
* scheduler tick (TICK_NSEC).
}
/*
- * sched_exec - execve() is a valuable balancing opportunity, because at
- * this point the task has the smallest effective memory and cache footprint.
+ * If dest_cpu is allowed for this process, migrate the task to it.
+ * This is accomplished by forcing the cpu_allowed mask to only
+ * allow dest_cpu, which will force the cpu onto dest_cpu. Then
+ * the cpu_allowed mask is restored.
*/
-void sched_exec(void)
+static void sched_migrate_task(struct task_struct *p, int dest_cpu)
{
- struct task_struct *p = current;
struct migration_req req;
unsigned long flags;
struct rq *rq;
- int dest_cpu;
rq = task_rq_lock(p, &flags);
- dest_cpu = p->sched_class->select_task_rq(rq, p, SD_BALANCE_EXEC, 0);
- if (dest_cpu == smp_processor_id())
- goto unlock;
+ if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed)
+ || unlikely(!cpu_active(dest_cpu)))
+ goto out;
- /*
- * select_task_rq() can race against ->cpus_allowed
- */
- if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed) &&
- likely(cpu_active(dest_cpu)) &&
- migrate_task(p, dest_cpu, &req)) {
+ /* force the process onto the specified CPU */
+ if (migrate_task(p, dest_cpu, &req)) {
/* Need to wait for migration thread (might exit: take ref). */
struct task_struct *mt = rq->migration_thread;
return;
}
-unlock:
+out:
task_rq_unlock(rq, &flags);
}
+/*
+ * sched_exec - execve() is a valuable balancing opportunity, because at
+ * this point the task has the smallest effective memory and cache footprint.
+ */
+void sched_exec(void)
+{
+ int new_cpu, this_cpu = get_cpu();
+ new_cpu = current->sched_class->select_task_rq(current, SD_BALANCE_EXEC, 0);
+ put_cpu();
+ if (new_cpu != this_cpu)
+ sched_migrate_task(current, new_cpu);
+}
+
/*
* pull_task - move a task from a remote runqueue to the local runqueue.
* Both runqueues must be locked.
unsigned long max_load;
unsigned long busiest_load_per_task;
unsigned long busiest_nr_running;
- unsigned long busiest_group_capacity;
int group_imb; /* Is there imbalance in this sd */
#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
{
- unsigned long weight = sd->span_weight;
+ unsigned long weight = cpumask_weight(sched_domain_span(sd));
unsigned long smt_gain = sd->smt_gain;
smt_gain /= weight;
static void update_cpu_power(struct sched_domain *sd, int cpu)
{
- unsigned long weight = sd->span_weight;
+ unsigned long weight = cpumask_weight(sched_domain_span(sd));
unsigned long power = SCHED_LOAD_SCALE;
struct sched_group *sdg = sd->groups;
unsigned long load, max_cpu_load, min_cpu_load;
int i;
unsigned int balance_cpu = -1, first_idle_cpu = 0;
- unsigned long avg_load_per_task = 0;
+ unsigned long sum_avg_load_per_task;
+ unsigned long avg_load_per_task;
if (local_group) {
balance_cpu = group_first_cpu(group);
}
/* Tally up the load of all CPUs in the group */
+ sum_avg_load_per_task = avg_load_per_task = 0;
max_cpu_load = 0;
min_cpu_load = ~0UL;
sgs->sum_nr_running += rq->nr_running;
sgs->sum_weighted_load += weighted_cpuload(i);
+ sum_avg_load_per_task += cpu_avg_load_per_task(i);
}
/*
/* Adjust by relative CPU power of the group */
sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
+
/*
* Consider the group unbalanced when the imbalance is larger
* than the average weight of two tasks.
* normalized nr_running number somewhere that negates
* the hierarchy?
*/
- if (sgs->sum_nr_running)
- avg_load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
+ avg_load_per_task = (sum_avg_load_per_task * SCHED_LOAD_SCALE) /
+ group->cpu_power;
if ((max_cpu_load - min_cpu_load) > 2*avg_load_per_task)
sgs->group_imb = 1;
sds->max_load = sgs.avg_load;
sds->busiest = group;
sds->busiest_nr_running = sgs.sum_nr_running;
- sds->busiest_group_capacity = sgs.group_capacity;
sds->busiest_load_per_task = sgs.sum_weighted_load;
sds->group_imb = sgs.group_imb;
}
{
unsigned long tmp, pwr_now = 0, pwr_move = 0;
unsigned int imbn = 2;
- unsigned long scaled_busy_load_per_task;
if (sds->this_nr_running) {
sds->this_load_per_task /= sds->this_nr_running;
sds->this_load_per_task =
cpu_avg_load_per_task(this_cpu);
- scaled_busy_load_per_task = sds->busiest_load_per_task
- * SCHED_LOAD_SCALE;
- scaled_busy_load_per_task /= sds->busiest->cpu_power;
-
- if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
- (scaled_busy_load_per_task * imbn)) {
+ if (sds->max_load - sds->this_load + sds->busiest_load_per_task >=
+ sds->busiest_load_per_task * imbn) {
*imbalance = sds->busiest_load_per_task;
return;
}
static inline void calculate_imbalance(struct sd_lb_stats *sds, int this_cpu,
unsigned long *imbalance)
{
- unsigned long max_pull, load_above_capacity = ~0UL;
-
- sds->busiest_load_per_task /= sds->busiest_nr_running;
- if (sds->group_imb) {
- sds->busiest_load_per_task =
- min(sds->busiest_load_per_task, sds->avg_load);
- }
-
+ unsigned long max_pull;
/*
* In the presence of smp nice balancing, certain scenarios can have
* max load less than avg load(as we skip the groups at or below
return fix_small_imbalance(sds, this_cpu, imbalance);
}
- if (!sds->group_imb) {
- /*
- * Don't want to pull so many tasks that a group would go idle.
- */
- load_above_capacity = (sds->busiest_nr_running -
- sds->busiest_group_capacity);
-
- load_above_capacity *= (SCHED_LOAD_SCALE * SCHED_LOAD_SCALE);
-
- load_above_capacity /= sds->busiest->cpu_power;
- }
-
- /*
- * We're trying to get all the cpus to the average_load, so we don't
- * want to push ourselves above the average load, nor do we wish to
- * reduce the max loaded cpu below the average load. At the same time,
- * we also don't want to reduce the group load below the group capacity
- * (so that we can implement power-savings policies etc). Thus we look
- * for the minimum possible imbalance.
- * Be careful of negative numbers as they'll appear as very large values
- * with unsigned longs.
- */
- max_pull = min(sds->max_load - sds->avg_load, load_above_capacity);
+ /* Don't want to pull so many tasks that a group would go idle */
+ max_pull = min(sds->max_load - sds->avg_load,
+ sds->max_load - sds->busiest_load_per_task);
/* How much load to actually move to equalise the imbalance */
*imbalance = min(max_pull * sds->busiest->cpu_power,
* 4) This group is more busy than the avg busieness at this
* sched_domain.
* 5) The imbalance is within the specified limit.
+ * 6) Any rebalance would lead to ping-pong
*/
if (balance && !(*balance))
goto ret;
if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)
goto out_balanced;
+ sds.busiest_load_per_task /= sds.busiest_nr_running;
+ if (sds.group_imb)
+ sds.busiest_load_per_task =
+ min(sds.busiest_load_per_task, sds.avg_load);
+
+ /*
+ * We're trying to get all the cpus to the average_load, so we don't
+ * want to push ourselves above the average load, nor do we wish to
+ * reduce the max loaded cpu below the average load, as either of these
+ * actions would just result in more rebalancing later, and ping-pong
+ * tasks around. Thus we look for the minimum possible imbalance.
+ * Negative imbalances (*we* are more loaded than anyone else) will
+ * be counted as no imbalance for these purposes -- we can't fix that
+ * by pulling tasks to us. Be careful of negative numbers as they'll
+ * appear as very large values with unsigned longs.
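+	 * (Illustrative example: with 64-bit unsigned longs, 100 - 200 wraps
+	 * around to 18446744073709551516, so a genuinely negative imbalance
+	 * would show up as an enormous value.)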
+ */
+ if (sds.max_load <= sds.busiest_load_per_task)
+ goto out_balanced;
+
/* Looks like there is an imbalance. Compute it */
calculate_imbalance(&sds, this_cpu, imbalance);
return sds.busiest;
continue;
rq = cpu_rq(i);
- wl = weighted_cpuload(i);
+ wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
+ wl /= power;
- /*
- * When comparing with imbalance, use weighted_cpuload()
- * which is not scaled with the cpu power.
- */
if (capacity && rq->nr_running == 1 && wl > imbalance)
continue;
- /*
- * For the load comparisons with the other cpu's, consider
- * the weighted_cpuload() scaled with the cpu power, so that
- * the load can be moved away from the cpu that is potentially
- * running at a lower capacity.
- */
- wl = (wl * SCHED_LOAD_SCALE) / power;
-
if (wl > max_load) {
max_load = wl;
busiest = rq;
{
return p->stime;
}
-
-void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
-{
- struct task_cputime cputime;
-
- thread_group_cputime(p, &cputime);
-
- *ut = cputime.utime;
- *st = cputime.stime;
-}
#else
-
-#ifndef nsecs_to_cputime
-# define nsecs_to_cputime(__nsecs) \
- msecs_to_cputime(div_u64((__nsecs), NSEC_PER_MSEC))
-#endif
-
cputime_t task_utime(struct task_struct *p)
{
- cputime_t utime = p->utime, total = utime + p->stime;
+ clock_t utime = cputime_to_clock_t(p->utime),
+ total = utime + cputime_to_clock_t(p->stime);
u64 temp;
/*
* Use CFS's precise accounting:
*/
- temp = (u64)nsecs_to_cputime(p->se.sum_exec_runtime);
+ temp = (u64)nsec_to_clock_t(p->se.sum_exec_runtime);
if (total) {
temp *= utime;
do_div(temp, total);
}
- utime = (cputime_t)temp;
+ utime = (clock_t)temp;
- p->prev_utime = max(p->prev_utime, utime);
+ p->prev_utime = max(p->prev_utime, clock_t_to_cputime(utime));
return p->prev_utime;
}
cputime_t task_stime(struct task_struct *p)
{
- cputime_t stime;
+ clock_t stime;
/*
* Use CFS's precise accounting. (we subtract utime from
* the total, to make sure the total observed by userspace
* grows monotonically - apps rely on that):
*/
- stime = nsecs_to_cputime(p->se.sum_exec_runtime) - task_utime(p);
+ stime = nsec_to_clock_t(p->se.sum_exec_runtime) -
+ cputime_to_clock_t(task_utime(p));
if (stime >= 0)
- p->prev_stime = max(p->prev_stime, stime);
+ p->prev_stime = max(p->prev_stime, clock_t_to_cputime(stime));
return p->prev_stime;
}
-
-/*
- * Must be called with siglock held.
- */
-void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
-{
- struct signal_struct *sig = p->signal;
- struct task_cputime cputime;
- cputime_t rtime, utime, total;
-
- thread_group_cputime(p, &cputime);
-
- total = cputime_add(cputime.utime, cputime.stime);
- rtime = nsecs_to_cputime(cputime.sum_exec_runtime);
-
- if (total) {
- u64 temp = rtime;
-
- temp *= cputime.utime;
- do_div(temp, total);
- utime = (cputime_t)temp;
- } else
- utime = rtime;
-
- sig->prev_utime = max(sig->prev_utime, utime);
- sig->prev_stime = max(sig->prev_stime,
- cputime_sub(rtime, sig->prev_utime));
-
- *ut = sig->prev_utime;
- *st = sig->prev_stime;
-}
#endif
inline cputime_t task_gtime(struct task_struct *p)
* the mutex owner just released it and exited.
*/
if (probe_kernel_address(&owner->cpu, cpu))
- return 0;
+ goto out;
#else
cpu = owner->cpu;
#endif
* the cpu field may no longer be valid.
*/
if (cpu >= nr_cpumask_bits)
- return 0;
+ goto out;
/*
* We need to validate that we can do a
* get_cpu() and that we have the percpu area.
*/
if (!cpu_online(cpu))
- return 0;
+ goto out;
rq = cpu_rq(cpu);
cpu_relax();
}
-
+out:
return 1;
}
#endif
*/
bool try_wait_for_completion(struct completion *x)
{
- unsigned long flags;
int ret = 1;
- spin_lock_irqsave(&x->wait.lock, flags);
+ spin_lock_irq(&x->wait.lock);
if (!x->done)
ret = 0;
else
x->done--;
- spin_unlock_irqrestore(&x->wait.lock, flags);
+ spin_unlock_irq(&x->wait.lock);
return ret;
}
EXPORT_SYMBOL(try_wait_for_completion);
*/
bool completion_done(struct completion *x)
{
- unsigned long flags;
int ret = 1;
- spin_lock_irqsave(&x->wait.lock, flags);
+ spin_lock_irq(&x->wait.lock);
if (!x->done)
ret = 0;
- spin_unlock_irqrestore(&x->wait.lock, flags);
+ spin_unlock_irq(&x->wait.lock);
return ret;
}
EXPORT_SYMBOL(completion_done);
unsigned long flags;
int oldprio, on_rq, running;
struct rq *rq;
- const struct sched_class *prev_class;
+ const struct sched_class *prev_class = p->sched_class;
BUG_ON(prio < 0 || prio > MAX_PRIO);
update_rq_clock(rq);
oldprio = p->prio;
- prev_class = p->sched_class;
on_rq = p->se.on_rq;
running = task_current(rq, p);
if (on_rq)
if (running)
p->sched_class->set_curr_task(rq);
if (on_rq) {
- enqueue_task(rq, p, 0, oldprio < prio);
+ enqueue_task(rq, p, 0);
check_class_changed(rq, p, prev_class, oldprio, running);
}
delta = p->prio - old_prio;
if (on_rq) {
- enqueue_task(rq, p, 0, false);
+ enqueue_task(rq, p, 0);
/*
* If the task increased its priority or is running and
* lowered its priority, then reschedule its CPU:
{
int retval, oldprio, oldpolicy = -1, on_rq, running;
unsigned long flags;
- const struct sched_class *prev_class;
+ const struct sched_class *prev_class = p->sched_class;
struct rq *rq;
int reset_on_fork;
p->sched_reset_on_fork = reset_on_fork;
oldprio = p->prio;
- prev_class = p->sched_class;
__setscheduler(rq, p, policy, param->sched_priority);
if (running)
return -EINVAL;
retval = -ESRCH;
- rcu_read_lock();
+ read_lock(&tasklist_lock);
p = find_process_by_pid(pid);
if (p) {
retval = security_task_getscheduler(p);
retval = p->policy
| (p->sched_reset_on_fork ? SCHED_RESET_ON_FORK : 0);
}
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
return retval;
}
if (!param || pid < 0)
return -EINVAL;
- rcu_read_lock();
+ read_lock(&tasklist_lock);
p = find_process_by_pid(pid);
retval = -ESRCH;
if (!p)
goto out_unlock;
lp.sched_priority = p->rt_priority;
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
/*
* This one might sleep, we cannot do it with a spinlock held ...
return retval;
out_unlock:
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
return retval;
}
int retval;
get_online_cpus();
- rcu_read_lock();
+ read_lock(&tasklist_lock);
p = find_process_by_pid(pid);
if (!p) {
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
put_online_cpus();
return -ESRCH;
}
- /* Prevent p going away */
+ /*
+ * It is not safe to call set_cpus_allowed with the
+ * tasklist_lock held. We will bump the task_struct's
+ * usage count and then drop tasklist_lock.
+ */
get_task_struct(p);
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
retval = -ENOMEM;
long sched_getaffinity(pid_t pid, struct cpumask *mask)
{
struct task_struct *p;
- unsigned long flags;
- struct rq *rq;
int retval;
get_online_cpus();
- rcu_read_lock();
+ read_lock(&tasklist_lock);
retval = -ESRCH;
p = find_process_by_pid(pid);
if (retval)
goto out_unlock;
- rq = task_rq_lock(p, &flags);
cpumask_and(mask, &p->cpus_allowed, cpu_online_mask);
- task_rq_unlock(rq, &flags);
out_unlock:
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
put_online_cpus();
return retval;
int ret;
cpumask_var_t mask;
- if ((len * BITS_PER_BYTE) < nr_cpu_ids)
- return -EINVAL;
- if (len & (sizeof(unsigned long)-1))
+ if (len < cpumask_size())
return -EINVAL;
if (!alloc_cpumask_var(&mask, GFP_KERNEL))
ret = sched_getaffinity(pid, mask);
if (ret == 0) {
- size_t retlen = min_t(size_t, len, cpumask_size());
-
- if (copy_to_user(user_mask_ptr, mask, retlen))
+ if (copy_to_user(user_mask_ptr, mask, cpumask_size()))
ret = -EFAULT;
else
- ret = retlen;
+ ret = cpumask_size();
}
free_cpumask_var(mask);
{
struct task_struct *p;
unsigned int time_slice;
- unsigned long flags;
- struct rq *rq;
int retval;
struct timespec t;
return -EINVAL;
retval = -ESRCH;
- rcu_read_lock();
+ read_lock(&tasklist_lock);
p = find_process_by_pid(pid);
if (!p)
goto out_unlock;
if (retval)
goto out_unlock;
- rq = task_rq_lock(p, &flags);
- time_slice = p->sched_class->get_rr_interval(rq, p);
- task_rq_unlock(rq, &flags);
+ time_slice = p->sched_class->get_rr_interval(p);
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
jiffies_to_timespec(time_slice, &t);
retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0;
return retval;
out_unlock:
- rcu_read_unlock();
+ read_unlock(&tasklist_lock);
return retval;
}
spin_lock_irqsave(&rq->lock, flags);
__sched_fork(idle);
- idle->state = TASK_RUNNING;
idle->se.exec_start = sched_clock();
cpumask_copy(&idle->cpus_allowed, cpumask_of(cpu));
struct rq *rq;
int ret = 0;
- /*
- * Serialize against TASK_WAKING so that ttwu() and wunt() can
- * drop the rq->lock and still rely on ->cpus_allowed.
- */
-again:
- while (task_is_waking(p))
- cpu_relax();
rq = task_rq_lock(p, &flags);
- if (task_is_waking(p)) {
- task_rq_unlock(rq, &flags);
- goto again;
- }
-
if (!cpumask_intersects(new_mask, cpu_active_mask)) {
ret = -EINVAL;
goto out;
get_task_struct(mt);
task_rq_unlock(rq, &flags);
- wake_up_process(mt);
+ wake_up_process(rq->migration_thread);
put_task_struct(mt);
wait_for_completion(&req.done);
tlb_migrate_finish(p->mm);
static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
{
struct rq *rq_dest, *rq_src;
- int ret = 0;
+ int ret = 0, on_rq;
if (unlikely(!cpu_active(dest_cpu)))
return ret;
if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
goto fail;
- /*
- * If we're not on a rq, the next wake-up will ensure we're
- * placed properly.
- */
- if (p->se.on_rq) {
+ on_rq = p->se.on_rq;
+ if (on_rq)
deactivate_task(rq_src, p, 0);
- set_task_cpu(p, dest_cpu);
+
+ set_task_cpu(p, dest_cpu);
+ if (on_rq) {
activate_task(rq_dest, p, 0);
check_preempt_curr(rq_dest, p, 0);
}
}
#ifdef CONFIG_HOTPLUG_CPU
+
+static int __migrate_task_irq(struct task_struct *p, int src_cpu, int dest_cpu)
+{
+ int ret;
+
+ local_irq_disable();
+ ret = __migrate_task(p, src_cpu, dest_cpu);
+ local_irq_enable();
+ return ret;
+}
+
/*
* Figure out where task on dead CPU should go, use force if necessary.
*/
-void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
+static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
{
- struct rq *rq = cpu_rq(dead_cpu);
- int needs_cpu, uninitialized_var(dest_cpu);
- unsigned long flags;
+ int dest_cpu;
+ const struct cpumask *nodemask = cpumask_of_node(cpu_to_node(dead_cpu));
- local_irq_save(flags);
+again:
+ /* Look for allowed, online CPU in same node. */
+ for_each_cpu_and(dest_cpu, nodemask, cpu_active_mask)
+ if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
+ goto move;
- spin_lock(&rq->lock);
- needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING);
- if (needs_cpu)
- dest_cpu = select_fallback_rq(dead_cpu, p);
- spin_unlock(&rq->lock);
- /*
- * It can only fail if we race with set_cpus_allowed(),
- * in the racer should migrate the task anyway.
- */
- if (needs_cpu)
- __migrate_task(p, dead_cpu, dest_cpu);
- local_irq_restore(flags);
+ /* Any allowed, online CPU? */
+ dest_cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
+ if (dest_cpu < nr_cpu_ids)
+ goto move;
+
+ /* No more Mr. Nice Guy. */
+ if (dest_cpu >= nr_cpu_ids) {
+ cpuset_cpus_allowed_locked(p, &p->cpus_allowed);
+ dest_cpu = cpumask_any_and(cpu_active_mask, &p->cpus_allowed);
+
+ /*
+ * Don't tell them about moving exiting tasks or
+ * kernel threads (both mm NULL), since they never
+ * leave kernel.
+ */
+ if (p->mm && printk_ratelimit()) {
+ printk(KERN_INFO "process %d (%s) no "
+ "longer affine to cpu%d\n",
+ task_pid_nr(p), p->comm, dead_cpu);
+ }
+ }
+
+move:
+	/* The task's affinity may have changed while we were choosing. */
+ if (unlikely(!__migrate_task_irq(p, dead_cpu, dest_cpu)))
+ goto again;
}
/*
unsigned long flags;
struct rq *rq;
- switch (action & ~CPU_TASKS_FROZEN) {
+ switch (action) {
case CPU_UP_PREPARE:
+ case CPU_UP_PREPARE_FROZEN:
p = kthread_create(migration_thread, hcpu, "migration/%d", cpu);
if (IS_ERR(p))
return NOTIFY_BAD;
break;
case CPU_ONLINE:
+ case CPU_ONLINE_FROZEN:
/* Strictly unnecessary, as first user will wake it. */
wake_up_process(cpu_rq(cpu)->migration_thread);
#ifdef CONFIG_HOTPLUG_CPU
case CPU_UP_CANCELED:
+ case CPU_UP_CANCELED_FROZEN:
if (!cpu_rq(cpu)->migration_thread)
break;
/* Unbind it from offline cpu so it can run. Fall thru. */
cpu_rq(cpu)->migration_thread = NULL;
break;
- case CPU_POST_DEAD:
- /*
- * Bring the migration thread down in CPU_POST_DEAD event,
- * since the timers should have got migrated by now and thus
- * we should not see a deadlock between trying to kill the
- * migration thread and the sched_rt_period_timer.
- */
+ case CPU_DEAD:
+ case CPU_DEAD_FROZEN:
+ cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */
+ migrate_live_tasks(cpu);
rq = cpu_rq(cpu);
kthread_stop(rq->migration_thread);
put_task_struct(rq->migration_thread);
rq->migration_thread = NULL;
- break;
-
- case CPU_DEAD:
- migrate_live_tasks(cpu);
- rq = cpu_rq(cpu);
/* Idle task back to normal (off runqueue, low prio) */
spin_lock_irq(&rq->lock);
update_rq_clock(rq);
rq->idle->sched_class = &idle_sched_class;
migrate_dead_tasks(cpu);
spin_unlock_irq(&rq->lock);
+ cpuset_unlock();
migrate_nr_uninterruptible(rq);
BUG_ON(rq->nr_running != 0);
calc_global_load_remove(rq);
break;
case CPU_DYING:
+ case CPU_DYING_FROZEN:
/* Update our root-domain */
rq = cpu_rq(cpu);
spin_lock_irqsave(&rq->lock, flags);
struct rq *rq = cpu_rq(cpu);
struct sched_domain *tmp;
- for (tmp = sd; tmp; tmp = tmp->parent)
- tmp->span_weight = cpumask_weight(sched_domain_span(tmp));
-
/* Remove the sched domains which do not contribute to scheduling. */
for (tmp = sd; tmp; ) {
struct sched_domain *parent = tmp->parent;
#ifdef CONFIG_FAIR_GROUP_SCHED
if (tsk->sched_class->moved_group)
- tsk->sched_class->moved_group(tsk, on_rq);
+ tsk->sched_class->moved_group(tsk);
#endif
if (unlikely(running))
tsk->sched_class->set_curr_task(rq);
if (on_rq)
- enqueue_task(rq, tsk, 0, false);
+ enqueue_task(rq, tsk, 0);
task_rq_unlock(rq, &flags);
}
rcu_read_unlock();
}
-/*
- * When CONFIG_VIRT_CPU_ACCOUNTING is enabled one jiffy can be very large
- * in cputime_t units. As a result, cpuacct_update_stats calls
- * percpu_counter_add with values large enough to always overflow the
- * per cpu batch limit causing bad SMP scalability.
- *
- * To fix this we scale percpu_counter_batch by cputime_one_jiffy so we
- * batch the same amount of time with CONFIG_VIRT_CPU_ACCOUNTING disabled
- * and enabled. We cap it at INT_MAX which is the largest allowed batch value.
- */
-#ifdef CONFIG_SMP
-#define CPUACCT_BATCH \
- min_t(long, percpu_counter_batch * cputime_one_jiffy, INT_MAX)
-#else
-#define CPUACCT_BATCH 0
-#endif
-
/*
* Charge the system/user time to the task's accounting group.
*/
enum cpuacct_stat_index idx, cputime_t val)
{
struct cpuacct *ca;
- int batch = CPUACCT_BATCH;
if (unlikely(!cpuacct_subsys.active))
return;
ca = task_ca(tsk);
do {
- __percpu_counter_add(&ca->cpustat[idx], val, batch);
+ percpu_counter_add(&ca->cpustat[idx], val);
ca = ca->parent;
} while (ca);
rcu_read_unlock();
P(se.nr_failed_migrations_running);
P(se.nr_failed_migrations_hot);
P(se.nr_forced_migrations);
+ P(se.nr_forced2_migrations);
P(se.nr_wakeups);
P(se.nr_wakeups_sync);
P(se.nr_wakeups_migrate);
p->se.nr_failed_migrations_running = 0;
p->se.nr_failed_migrations_hot = 0;
p->se.nr_forced_migrations = 0;
+ p->se.nr_forced2_migrations = 0;
p->se.nr_wakeups = 0;
p->se.nr_wakeups_sync = 0;
p->se.nr_wakeups_migrate = 0;
curr->sum_exec_runtime += delta_exec;
schedstat_add(cfs_rq, exec_clock, delta_exec);
delta_exec_weighted = calc_delta_fair(delta_exec, curr);
-
curr->vruntime += delta_exec_weighted;
update_min_vruntime(cfs_rq);
}
se->vruntime = vruntime;
}
-#define ENQUEUE_WAKEUP 1
-#define ENQUEUE_MIGRATE 2
-
static void
-enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int wakeup)
{
- /*
- * Update the normalized vruntime before updating min_vruntime
- * through callig update_curr().
- */
- if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_MIGRATE))
- se->vruntime += cfs_rq->min_vruntime;
-
/*
* Update run-time statistics of the 'current'.
*/
update_curr(cfs_rq);
account_entity_enqueue(cfs_rq, se);
- if (flags & ENQUEUE_WAKEUP) {
+ if (wakeup) {
place_entity(cfs_rq, se, 0);
enqueue_sleeper(cfs_rq, se);
}
__dequeue_entity(cfs_rq, se);
account_entity_dequeue(cfs_rq, se);
update_min_vruntime(cfs_rq);
-
- /*
- * Normalize the entity after updating the min_vruntime because the
- * update can refer to the ->curr item and we need to reflect this
- * movement in our normalized position.
- */
- if (!sleep)
- se->vruntime -= cfs_rq->min_vruntime;
}
/*
* increased. Here we update the fair scheduling stats and
* then put the task into the rbtree:
*/
-static void
-enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup, bool head)
+static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup)
{
struct cfs_rq *cfs_rq;
struct sched_entity *se = &p->se;
- int flags = 0;
-
- if (wakeup)
- flags |= ENQUEUE_WAKEUP;
- if (p->state == TASK_WAKING)
- flags |= ENQUEUE_MIGRATE;
for_each_sched_entity(se) {
if (se->on_rq)
break;
cfs_rq = cfs_rq_of(se);
- enqueue_entity(cfs_rq, se, flags);
- flags = ENQUEUE_WAKEUP;
+ enqueue_entity(cfs_rq, se, wakeup);
+ wakeup = 1;
}
hrtick_update(rq);
#ifdef CONFIG_SMP
-static void task_waking_fair(struct rq *rq, struct task_struct *p)
-{
- struct sched_entity *se = &p->se;
- struct cfs_rq *cfs_rq = cfs_rq_of(se);
-
- se->vruntime -= cfs_rq->min_vruntime;
-}
-
#ifdef CONFIG_FAIR_GROUP_SCHED
/*
* effective_load() calculates the load change as seen from the root_task_group
* effect of the currently running task from the load
* of the current CPU:
*/
- rcu_read_lock();
if (sync) {
tg = task_group(current);
weight = current->se.load.weight;
balanced = !this_load ||
100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
- rcu_read_unlock();
/*
* If the currently running task will sleep within
return idlest;
}
-/*
- * Try and locate an idle CPU in the sched_domain.
- */
-static int select_idle_sibling(struct task_struct *p, int target)
-{
- int cpu = smp_processor_id();
- int prev_cpu = task_cpu(p);
- struct sched_domain *sd;
- int i;
-
- /*
- * If the task is going to be woken-up on this cpu and if it is
- * already idle, then it is the right target.
- */
- if (target == cpu && idle_cpu(cpu))
- return cpu;
-
- /*
- * If the task is going to be woken-up on the cpu where it previously
- * ran and if it is currently idle, then it is the right target.
- */
- if (target == prev_cpu && idle_cpu(prev_cpu))
- return prev_cpu;
-
- /*
- * Otherwise, iterate the domains and find an eligible idle cpu.
- */
- for_each_domain(target, sd) {
- if (!(sd->flags & SD_SHARE_PKG_RESOURCES))
- break;
-
- for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
- if (idle_cpu(i)) {
- target = i;
- break;
- }
- }
-
- /*
- * Let's stop looking for an idle sibling once we have reached
- * the domain that spans the current cpu and prev_cpu.
- */
- if (cpumask_test_cpu(cpu, sched_domain_span(sd)) &&
- cpumask_test_cpu(prev_cpu, sched_domain_span(sd)))
- break;
- }
-
- return target;
-}
-
/*
* sched_balance_self: balance the current task (running on cpu) in domains
* that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
*
* preempt must be disabled.
*/
-static int
-select_task_rq_fair(struct rq *rq, struct task_struct *p, int sd_flag, int wake_flags)
+static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
{
struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
int cpu = smp_processor_id();
new_cpu = prev_cpu;
}
+ rcu_read_lock();
for_each_domain(cpu, tmp) {
if (!(tmp->flags & SD_LOAD_BALANCE))
continue;
want_sd = 0;
}
- /*
- * If both cpu and prev_cpu are part of this domain,
- * cpu is a valid SD_WAKE_AFFINE target.
- */
- if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
- cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
- affine_sd = tmp;
- want_affine = 0;
+ if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
+ int candidate = -1, i;
+
+ if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
+ candidate = cpu;
+
+ /*
+ * Check for an idle shared cache.
+ */
+ if (tmp->flags & SD_PREFER_SIBLING) {
+ if (candidate == cpu) {
+ if (!cpu_rq(prev_cpu)->cfs.nr_running)
+ candidate = prev_cpu;
+ }
+
+ if (candidate == -1 || candidate == cpu) {
+ for_each_cpu(i, sched_domain_span(tmp)) {
+ if (!cpumask_test_cpu(i, &p->cpus_allowed))
+ continue;
+ if (!cpu_rq(i)->cfs.nr_running) {
+ candidate = i;
+ break;
+ }
+ }
+ }
+ }
+
+ if (candidate >= 0) {
+ affine_sd = tmp;
+ want_affine = 0;
+ cpu = candidate;
+ }
}
if (!want_sd && !want_affine)
sd = tmp;
}
-#ifdef CONFIG_FAIR_GROUP_SCHED
if (sched_feat(LB_SHARES_UPDATE)) {
/*
* Pick the largest domain to update shares over
*/
tmp = sd;
- if (affine_sd && (!tmp || affine_sd->span_weight > sd->span_weight))
+ if (affine_sd && (!tmp ||
+ cpumask_weight(sched_domain_span(affine_sd)) >
+ cpumask_weight(sched_domain_span(sd))))
tmp = affine_sd;
- if (tmp) {
- spin_unlock(&rq->lock);
+ if (tmp)
update_shares(tmp);
- spin_lock(&rq->lock);
- }
}
-#endif
- if (affine_sd) {
- if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
- return select_idle_sibling(p, cpu);
- else
- return select_idle_sibling(p, prev_cpu);
+ if (affine_sd && wake_affine(affine_sd, p, sync)) {
+ new_cpu = cpu;
+ goto out;
}
while (sd) {
/* Now try balancing at a lower domain level of new_cpu */
cpu = new_cpu;
- weight = sd->span_weight;
+ weight = cpumask_weight(sched_domain_span(sd));
sd = NULL;
for_each_domain(cpu, tmp) {
- if (weight <= tmp->span_weight)
+ if (weight <= cpumask_weight(sched_domain_span(tmp)))
break;
if (tmp->flags & sd_flag)
sd = tmp;
/* while loop will break here if sd == NULL */
}
+out:
+ rcu_read_unlock();
return new_cpu;
}
#endif /* CONFIG_SMP */
}
/*
- * called on fork with the child task as argument from the parent's context
- * - child not yet on the tasklist
- * - preemption disabled
+ * Share the fairness runtime between parent and child, thus the
+ * total amount of pressure for CPU stays equal - new tasks
+ * get a chance to run but frequent forkers are not allowed to
+ * monopolize the CPU. Note: the parent runqueue is locked,
+ * the child is not running yet.
*/
-static void task_fork_fair(struct task_struct *p)
+static void task_new_fair(struct rq *rq, struct task_struct *p)
{
- struct cfs_rq *cfs_rq = task_cfs_rq(current);
+ struct cfs_rq *cfs_rq = task_cfs_rq(p);
struct sched_entity *se = &p->se, *curr = cfs_rq->curr;
int this_cpu = smp_processor_id();
- struct rq *rq = this_rq();
- unsigned long flags;
-
- spin_lock_irqsave(&rq->lock, flags);
-
- update_rq_clock(rq);
- if (unlikely(task_cpu(p) != this_cpu))
- __set_task_cpu(p, this_cpu);
+ sched_info_queued(p);
update_curr(cfs_rq);
-
if (curr)
se->vruntime = curr->vruntime;
place_entity(cfs_rq, se, 1);
- if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
+ /* 'curr' will be NULL if the child belongs to a different group */
+ if (sysctl_sched_child_runs_first && this_cpu == task_cpu(p) &&
+ curr && entity_before(curr, se)) {
/*
* Upon rescheduling, sched_class::put_prev_task() will place
* 'current' within the tree based on its new key value.
resched_task(rq->curr);
}
- se->vruntime -= cfs_rq->min_vruntime;
-
- spin_unlock_irqrestore(&rq->lock, flags);
+ enqueue_task_fair(rq, p, 0);
}
/*
}
#ifdef CONFIG_FAIR_GROUP_SCHED
-static void moved_group_fair(struct task_struct *p, int on_rq)
+static void moved_group_fair(struct task_struct *p)
{
struct cfs_rq *cfs_rq = task_cfs_rq(p);
update_curr(cfs_rq);
- if (!on_rq)
- place_entity(cfs_rq, &p->se, 1);
+ place_entity(cfs_rq, &p->se, 1);
}
#endif
-unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task)
+unsigned int get_rr_interval_fair(struct task_struct *task)
{
struct sched_entity *se = &task->se;
+ unsigned long flags;
+ struct rq *rq;
unsigned int rr_interval = 0;
/*
* Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
* idle runqueue:
*/
+ rq = task_rq_lock(task, &flags);
if (rq->cfs.load.weight)
rr_interval = NS_TO_JIFFIES(sched_slice(&rq->cfs, se));
+ task_rq_unlock(rq, &flags);
return rr_interval;
}
.move_one_task = move_one_task_fair,
.rq_online = rq_online_fair,
.rq_offline = rq_offline_fair,
-
- .task_waking = task_waking_fair,
#endif
.set_curr_task = set_curr_task_fair,
.task_tick = task_tick_fair,
- .task_fork = task_fork_fair,
+ .task_new = task_new_fair,
.prio_changed = prio_changed_fair,
.switched_to = switched_to_fair,
*/
#ifdef CONFIG_SMP
-static int
-select_task_rq_idle(struct rq *rq, struct task_struct *p, int sd_flag, int flags)
+static int select_task_rq_idle(struct task_struct *p, int sd_flag, int flags)
{
return task_cpu(p); /* IDLE tasks are never migrated */
}
check_preempt_curr(rq, p, 0);
}
-unsigned int get_rr_interval_idle(struct rq *rq, struct task_struct *task)
+unsigned int get_rr_interval_idle(struct task_struct *task)
{
return 0;
}
return rt_se->my_q;
}
-static void enqueue_rt_entity(struct sched_rt_entity *rt_se, bool head);
+static void enqueue_rt_entity(struct sched_rt_entity *rt_se);
static void dequeue_rt_entity(struct sched_rt_entity *rt_se);
static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
if (rt_rq->rt_nr_running) {
if (rt_se && !on_rt_rq(rt_se))
- enqueue_rt_entity(rt_se, false);
+ enqueue_rt_entity(rt_se);
if (rt_rq->highest_prio.curr < curr->prio)
resched_task(curr);
}
dec_rt_group(rt_se, rt_rq);
}
-static void __enqueue_rt_entity(struct sched_rt_entity *rt_se, bool head)
+static void __enqueue_rt_entity(struct sched_rt_entity *rt_se)
{
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
struct rt_prio_array *array = &rt_rq->active;
if (group_rq && (rt_rq_throttled(group_rq) || !group_rq->rt_nr_running))
return;
- if (head)
- list_add(&rt_se->run_list, queue);
- else
- list_add_tail(&rt_se->run_list, queue);
+ list_add_tail(&rt_se->run_list, queue);
__set_bit(rt_se_prio(rt_se), array->bitmap);
inc_rt_tasks(rt_se, rt_rq);
}
}
-static void enqueue_rt_entity(struct sched_rt_entity *rt_se, bool head)
+static void enqueue_rt_entity(struct sched_rt_entity *rt_se)
{
dequeue_rt_stack(rt_se);
for_each_sched_rt_entity(rt_se)
- __enqueue_rt_entity(rt_se, head);
+ __enqueue_rt_entity(rt_se);
}
static void dequeue_rt_entity(struct sched_rt_entity *rt_se)
struct rt_rq *rt_rq = group_rt_rq(rt_se);
if (rt_rq && rt_rq->rt_nr_running)
- __enqueue_rt_entity(rt_se, false);
+ __enqueue_rt_entity(rt_se);
}
}
/*
* Adding/removing a task to/from a priority array:
*/
-static void
-enqueue_task_rt(struct rq *rq, struct task_struct *p, int wakeup, bool head)
+static void enqueue_task_rt(struct rq *rq, struct task_struct *p, int wakeup)
{
struct sched_rt_entity *rt_se = &p->rt;
if (wakeup)
rt_se->timeout = 0;
- enqueue_rt_entity(rt_se, head);
+ enqueue_rt_entity(rt_se);
if (!task_current(rq, p) && p->rt.nr_cpus_allowed > 1)
enqueue_pushable_task(rq, p);
#ifdef CONFIG_SMP
static int find_lowest_rq(struct task_struct *task);
-static int
-select_task_rq_rt(struct rq *rq, struct task_struct *p, int sd_flag, int flags)
+static int select_task_rq_rt(struct task_struct *p, int sd_flag, int flags)
{
+ struct rq *rq = task_rq(p);
+
if (sd_flag != SD_BALANCE_WAKE)
return smp_processor_id();
* If we are not running and we are not going to reschedule soon, we should
* try to push tasks away now
*/
-static void task_woken_rt(struct rq *rq, struct task_struct *p)
+static void task_wake_up_rt(struct rq *rq, struct task_struct *p)
{
if (!task_running(rq, p) &&
!test_tsk_need_resched(rq->curr) &&
dequeue_pushable_task(rq, p);
}
-unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
+unsigned int get_rr_interval_rt(struct task_struct *task)
{
/*
* Time slice is 0 for SCHED_FIFO tasks
.rq_offline = rq_offline_rt,
.pre_schedule = pre_schedule_rt,
.post_schedule = post_schedule_rt,
- .task_woken = task_woken_rt,
+ .task_wake_up = task_wake_up_rt,
.switched_from = switched_from_rt,
#endif
static int check_kill_permission(int sig, struct siginfo *info,
struct task_struct *t)
{
- const struct cred *cred, *tcred;
+ const struct cred *cred = current_cred(), *tcred;
struct pid *sid;
int error;
if (error)
return error;
- cred = current_cred();
tcred = __task_cred(t);
- if (!same_thread_group(current, t) &&
- (cred->euid ^ tcred->suid) &&
+ if ((cred->euid ^ tcred->suid) &&
(cred->euid ^ tcred->uid) &&
(cred->uid ^ tcred->suid) &&
(cred->uid ^ tcred->uid) &&
goto cancelled;
/* the timer holds a reference whilst it is pending */
- ret = slow_work_get_ref(work);
+ ret = work->ops->get_ref(work);
if (ret < 0)
goto cant_get_ref;
* Wake up the high-prio watchdog task twice per
* threshold timespan.
*/
- if (time_after(now - softlockup_thresh/2, touch_timestamp))
+ if (now > touch_timestamp + softlockup_thresh/2)
wake_up_process(per_cpu(watchdog_task, this_cpu));
/* Warn about unreasonable delays: */
- if (time_before_eq(now - softlockup_thresh, touch_timestamp))
+ if (now <= (touch_timestamp + softlockup_thresh))
return;
per_cpu(print_timestamp, this_cpu) = touch_timestamp;
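The two hunks above trade the wrap-safe time_after()/time_before_eq() comparisons for plain arithmetic on touch_timestamp. A standalone sketch, not from the patch, of why the subtraction-based form keeps working across a counter wrap (my_time_after is a simplified stand-in for the kernel macro):

#include <stdio.h>

/* Simplified stand-in for the kernel's time_after(): true if a is after b. */
#define my_time_after(a, b)	((long)((b) - (a)) < 0)

int main(void)
{
	unsigned long touch = (unsigned long)-5;	/* counter just before it wraps */
	unsigned long now = 10;				/* 15 ticks later, after the wrap */

	printf("plain compare: %d\n", now > touch + 3);			/* 0: misses the delay */
	printf("wrap-safe:     %d\n", my_time_after(now, touch + 3));	/* 1: detects it */
	return 0;
}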
void do_sys_times(struct tms *tms)
{
- cputime_t tgutime, tgstime, cutime, cstime;
+ struct task_cputime cputime;
+ cputime_t cutime, cstime;
+ thread_group_cputime(current, &cputime);
spin_lock_irq(¤t->sighand->siglock);
- thread_group_times(current, &tgutime, &tgstime);
cutime = current->signal->cutime;
cstime = current->signal->cstime;
spin_unlock_irq(¤t->sighand->siglock);
- tms->tms_utime = cputime_to_clock_t(tgutime);
- tms->tms_stime = cputime_to_clock_t(tgstime);
+ tms->tms_utime = cputime_to_clock_t(cputime.utime);
+ tms->tms_stime = cputime_to_clock_t(cputime.stime);
tms->tms_cutime = cputime_to_clock_t(cutime);
tms->tms_cstime = cputime_to_clock_t(cstime);
}
pgid = pid;
if (pgid < 0)
return -EINVAL;
- rcu_read_lock();
/* From this point forward we keep holding onto the tasklist lock
* so that our parent does not change from under us. -DaveM
out:
/* All paths lead to here, thus we are safe. -DaveM */
write_unlock_irq(&tasklist_lock);
- rcu_read_unlock();
return err;
}
{
struct task_struct *t;
unsigned long flags;
- cputime_t tgutime, tgstime, utime, stime;
+ cputime_t utime, stime;
+ struct task_cputime cputime;
unsigned long maxrss = 0;
memset((char *) r, 0, sizeof *r);
break;
case RUSAGE_SELF:
- thread_group_times(p, &tgutime, &tgstime);
- utime = cputime_add(utime, tgutime);
- stime = cputime_add(stime, tgstime);
+ thread_group_cputime(p, &cputime);
+ utime = cputime_add(utime, cputime.utime);
+ stime = cputime_add(stime, cputime.stime);
r->ru_nvcsw += p->signal->nvcsw;
r->ru_nivcsw += p->signal->nivcsw;
r->ru_minflt += p->signal->min_flt;
*/
static int __init clocksource_done_booting(void)
{
- mutex_lock(&clocksource_mutex);
- curr_clocksource = clocksource_default_clock();
- mutex_unlock(&clocksource_mutex);
-
finished_booting = 1;
/*
* value. We do this unconditionally on any cpu, as we don't know whether the
* cpu, which has the update task assigned is in a long sleep.
*/
-static void tick_nohz_update_jiffies(ktime_t now)
+static void tick_nohz_update_jiffies(void)
{
int cpu = smp_processor_id();
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
unsigned long flags;
+ ktime_t now;
+
+ if (!ts->tick_stopped)
+ return;
cpumask_clear_cpu(cpu, nohz_cpu_mask);
+ now = ktime_get();
ts->idle_waketime = now;
local_irq_save(flags);
touch_softlockup_watchdog();
}
-static void tick_nohz_stop_idle(int cpu, ktime_t now)
+static void tick_nohz_stop_idle(int cpu)
{
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
- ktime_t delta;
- delta = ktime_sub(now, ts->idle_entrytime);
- ts->idle_lastupdate = now;
- ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
- ts->idle_active = 0;
+ if (ts->idle_active) {
+ ktime_t now, delta;
+ now = ktime_get();
+ delta = ktime_sub(now, ts->idle_entrytime);
+ ts->idle_lastupdate = now;
+ ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
+ ts->idle_active = 0;
- sched_clock_idle_wakeup_event(0);
+ sched_clock_idle_wakeup_event(0);
+ }
}
static ktime_t tick_nohz_start_idle(struct tick_sched *ts)
time_delta = KTIME_MAX;
} while (read_seqretry(&xtime_lock, seq));
- if (rcu_needs_cpu(cpu) || printk_needs_cpu(cpu) ||
- arch_needs_cpu(cpu)) {
- next_jiffies = last_jiffies + 1;
+ /* Get the next timer wheel timer */
+ next_jiffies = get_next_timer_interrupt(last_jiffies);
+ delta_jiffies = next_jiffies - last_jiffies;
+
+ if (rcu_needs_cpu(cpu) || printk_needs_cpu(cpu))
delta_jiffies = 1;
- } else {
- /* Get the next timer wheel timer */
- next_jiffies = get_next_timer_interrupt(last_jiffies);
- delta_jiffies = next_jiffies - last_jiffies;
- }
/*
* Do not stop the tick, if we are only one off
* or if the cpu is required for rcu
ktime_t now;
local_irq_disable();
- if (ts->idle_active || (ts->inidle && ts->tick_stopped))
- now = ktime_get();
-
- if (ts->idle_active)
- tick_nohz_stop_idle(cpu, now);
+ tick_nohz_stop_idle(cpu);
if (!ts->inidle || !ts->tick_stopped) {
ts->inidle = 0;
/* Update jiffies first */
select_nohz_load_balancer(0);
+ now = ktime_get();
tick_do_update_jiffies64(now);
cpumask_clear_cpu(cpu, nohz_cpu_mask);
* timer and do not touch the other magic bits which need to be done
* when idle is left.
*/
-static void tick_nohz_kick_tick(int cpu, ktime_t now)
+static void tick_nohz_kick_tick(int cpu)
{
#if 0
/* Switch back to 2.6.27 behaviour */
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
- ktime_t delta;
+ ktime_t delta, now;
+
+ if (!ts->tick_stopped)
+ return;
/*
* Do not touch the tick device, when the next expiry is either
* already reached or less/equal than the tick period.
*/
+ now = ktime_get();
delta = ktime_sub(hrtimer_get_expires(&ts->sched_timer), now);
if (delta.tv64 <= tick_period.tv64)
return;
#endif
}
-static inline void tick_check_nohz(int cpu)
-{
- struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
- ktime_t now;
-
- if (!ts->idle_active && !ts->tick_stopped)
- return;
- now = ktime_get();
- if (ts->idle_active)
- tick_nohz_stop_idle(cpu, now);
- if (ts->tick_stopped) {
- tick_nohz_update_jiffies(now);
- tick_nohz_kick_tick(cpu, now);
- }
-}
-
#else
static inline void tick_nohz_switch_to_nohz(void) { }
-static inline void tick_check_nohz(int cpu) { }
#endif /* NO_HZ */
void tick_check_idle(int cpu)
{
tick_check_oneshot_broadcast(cpu);
- tick_check_nohz(cpu);
+#ifdef CONFIG_NO_HZ
+ tick_nohz_stop_idle(cpu);
+ tick_nohz_update_jiffies();
+ tick_nohz_kick_tick(cpu);
+#endif
}
/*
{
xtime.tv_sec += leapsecond;
wall_to_monotonic.tv_sec -= leapsecond;
- update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);
+ update_vsyscall(&xtime, timekeeper.clock);
}
#ifdef CONFIG_GENERIC_TIME
timekeeper.ntp_error = 0;
ntp_clear();
- update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);
+ update_vsyscall(&xtime, timekeeper.clock);
write_sequnlock_irqrestore(&xtime_lock, flags);
update_xtime_cache(nsecs);
/* check to see if there is a new clocksource to use */
- update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);
+ update_vsyscall(&xtime, timekeeper.clock);
}
/**
P_ns(expires_next);
P(hres_active);
P(nr_events);
- P(nr_retries);
- P(nr_hangs);
- P_ns(max_hang_time);
#endif
#undef P
#undef P_ns
u64 now = ktime_to_ns(ktime_get());
int cpu;
- SEQ_printf(m, "Timer List Version: v0.5\n");
+ SEQ_printf(m, "Timer List Version: v0.4\n");
SEQ_printf(m, "HRTIMER_MAX_CLOCK_BASES: %d\n", HRTIMER_MAX_CLOCK_BASES);
SEQ_printf(m, "now at %Ld nsecs\n", (unsigned long long)now);
{
struct ftrace_profile *rec = v;
char str[KSYM_SYMBOL_LEN];
- int ret = 0;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ static DEFINE_MUTEX(mutex);
static struct trace_seq s;
unsigned long long avg;
#endif
- mutex_lock(&ftrace_profile_lock);
-
- /* we raced with function_profile_reset() */
- if (unlikely(rec->counter == 0)) {
- ret = -EBUSY;
- goto out;
- }
kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
seq_printf(m, " %-30.30s %10lu", str, rec->counter);
avg = rec->time;
do_div(avg, rec->counter);
+ mutex_lock(&mutex);
trace_seq_init(&s);
trace_print_graph_duration(rec->time, &s);
trace_seq_puts(&s, " ");
trace_print_graph_duration(avg, &s);
trace_print_seq(m, &s);
+ mutex_unlock(&mutex);
#endif
seq_putc(m, '\n');
-out:
- mutex_unlock(&ftrace_profile_lock);
- return ret;
+ return 0;
}
static void ftrace_profile_reset(struct ftrace_profile_stat *stat)
if (*pos > 0)
return t_hash_start(m, pos);
iter->flags |= FTRACE_ITER_PRINTALL;
- /* reset in case of seek/pread */
- iter->flags &= ~FTRACE_ITER_HASH;
return iter;
}
.open = ftrace_filter_open,
.read = seq_read,
.write = ftrace_filter_write,
- .llseek = no_llseek,
+ .llseek = ftrace_regex_lseek,
.release = ftrace_filter_release,
};
{
/* Make sure we do not use the parent ret_stack */
t->ret_stack = NULL;
- t->curr_ret_stack = -1;
if (ftrace_graph_active) {
struct ftrace_ret_stack *ret_stack;
GFP_KERNEL);
if (!ret_stack)
return;
+ t->curr_ret_stack = -1;
atomic_set(&t->tracing_graph_pause, 0);
atomic_set(&t->trace_overrun, 0);
t->ftrace_timestamp = 0;
#define BUF_MAX_DATA_SIZE (BUF_PAGE_SIZE - (sizeof(u32) * 2))
/* Max number of timestamps that can fit on a page */
-#define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_EXTEND)
+#define RB_TIMESTAMPS_PER_PAGE (BUF_PAGE_SIZE / RB_LEN_TIME_STAMP)
int ring_buffer_print_page_header(struct trace_seq *s)
{
if (ring_buffer_flags != RB_BUFFERS_ON)
return NULL;
+ if (atomic_read(&buffer->record_disabled))
+ return NULL;
+
/* If we are tracing schedule, we don't want to recurse */
resched = ftrace_preempt_disable();
- if (atomic_read(&buffer->record_disabled))
- goto out_nocheck;
-
if (trace_recursive_lock())
goto out_nocheck;
if (ring_buffer_flags != RB_BUFFERS_ON)
return -EBUSY;
- resched = ftrace_preempt_disable();
-
if (atomic_read(&buffer->record_disabled))
- goto out;
+ return -EBUSY;
+
+ resched = ftrace_preempt_disable();
cpu = raw_smp_processor_id();
mutex_unlock(&trace_types_lock);
}
-static void __tracing_reset(struct ring_buffer *buffer, int cpu)
+static void __tracing_reset(struct trace_array *tr, int cpu)
{
ftrace_disable_cpu();
- ring_buffer_reset_cpu(buffer, cpu);
+ ring_buffer_reset_cpu(tr->buffer, cpu);
ftrace_enable_cpu();
}
/* Make sure all commits have finished */
synchronize_sched();
- __tracing_reset(buffer, cpu);
+ __tracing_reset(tr, cpu);
ring_buffer_record_enable(buffer);
}
tr->time_start = ftrace_now(tr->cpu);
for_each_online_cpu(cpu)
- __tracing_reset(buffer, cpu);
+ __tracing_reset(tr, cpu);
ring_buffer_record_enable(buffer);
}
goto out;
}
- /* Prevent the buffers from switching */
- __raw_spin_lock(&ftrace_max_lock);
buffer = global_trace.buffer;
if (buffer)
if (buffer)
ring_buffer_record_enable(buffer);
- __raw_spin_unlock(&ftrace_max_lock);
-
ftrace_start();
out:
spin_unlock_irqrestore(&tracing_start_lock, flags);
if (trace_stop_count++)
goto out;
- /* Prevent the buffers from switching */
- __raw_spin_lock(&ftrace_max_lock);
-
buffer = global_trace.buffer;
if (buffer)
ring_buffer_record_disable(buffer);
if (buffer)
ring_buffer_record_disable(buffer);
- __raw_spin_unlock(&ftrace_max_lock);
-
out:
spin_unlock_irqrestore(&tracing_start_lock, flags);
}
if (!(trace_flags & TRACE_ITER_USERSTACKTRACE))
return;
- /*
- * NMIs cannot handle page faults, even with fixups.
- * Saving the user stack can (and often does) fault.
- */
- if (unlikely(in_nmi()))
- return;
-
event = trace_buffer_lock_reserve(buffer, TRACE_USER_STACK,
sizeof(*entry), flags, pc);
if (!event)
#undef FTRACE_ENTRY
#define FTRACE_ENTRY(call, struct_name, id, tstruct, print) \
- extern struct ftrace_event_call \
- __attribute__((__aligned__(4))) event_##call;
+ extern struct ftrace_event_call event_##call;
#undef FTRACE_ENTRY_DUP
#define FTRACE_ENTRY_DUP(call, struct_name, id, tstruct, print) \
FTRACE_ENTRY(call, struct_name, id, PARAMS(tstruct), PARAMS(print))
obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
- string_helpers.o gcd.o lcm.o
+ string_helpers.o gcd.o
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
ret->element_size = element_size;
ret->total_nr_elements = total;
if (elements_fit_in_base(ret) && !(flags & __GFP_ZERO))
- memset(&ret->parts[0], FLEX_ARRAY_FREE,
+ memset(ret->parts[0], FLEX_ARRAY_FREE,
FLEX_ARRAY_BASE_BYTES_LEFT);
return ret;
}
id = (id | ((1 << (IDR_BITS * l)) - 1)) + 1;
/* if already at the top layer, we need to grow */
- if (id >= 1 << (idp->layers * IDR_BITS)) {
+ if (!(p = pa[l])) {
*starting_id = id;
return IDR_NEED_TO_GROW;
}
- p = pa[l];
- BUG_ON(!p);
/* If we need to go up one layer, continue the
* loop; otherwise, restart from the top.
+++ /dev/null
-#include <linux/kernel.h>
-#include <linux/gcd.h>
-#include <linux/module.h>
-
-/* Lowest common multiple */
-unsigned long lcm(unsigned long a, unsigned long b)
-{
- if (a && b)
- return (a * b) / gcd(a, b);
- else if (b)
- return b;
-
- return a;
-}
-EXPORT_SYMBOL_GPL(lcm);
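For reference, a self-contained userspace equivalent of the helper deleted above; my_gcd() is a plain Euclidean stand-in for the kernel's gcd() from lib/gcd.c, and dividing before multiplying is a deliberate difference to keep the intermediate value smaller than the a * b form removed above:

#include <stdio.h>

/* Plain Euclidean algorithm, standing in for the kernel's gcd(). */
static unsigned long my_gcd(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;

		a = b;
		b = t;
	}
	return a;
}

/* Same contract as the deleted lcm(): lcm of a and b, or the non-zero
 * argument when one of them is zero. */
static unsigned long my_lcm(unsigned long a, unsigned long b)
{
	if (a && b)
		return (a / my_gcd(a, b)) * b;
	return a ? a : b;
}

int main(void)
{
	printf("%lu\n", my_lcm(4, 6));	/* prints 12 */
	return 0;
}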
if (!fbc->counters)
return -ENOMEM;
#ifdef CONFIG_HOTPLUG_CPU
- INIT_LIST_HEAD(&fbc->list);
mutex_lock(&percpu_counters_lock);
list_add(&fbc->list, &percpu_counters);
mutex_unlock(&percpu_counters_lock);
*/
vfrom = page_address(fromvec->bv_page) + tovec->bv_offset;
- bounce_copy_vec(tovec, vfrom);
flush_dcache_page(tovec->bv_page);
+ bounce_copy_vec(tovec, vfrom);
}
}
switch (advice) {
case POSIX_FADV_NORMAL:
file->f_ra.ra_pages = bdi->ra_pages;
- spin_lock(&file->f_lock);
- file->f_mode &= ~FMODE_RANDOM;
- spin_unlock(&file->f_lock);
break;
case POSIX_FADV_RANDOM:
- spin_lock(&file->f_lock);
- file->f_mode |= FMODE_RANDOM;
- spin_unlock(&file->f_lock);
+ file->f_ra.ra_pages = 0;
break;
case POSIX_FADV_SEQUENTIAL:
file->f_ra.ra_pages = bdi->ra_pages * 2;
- spin_lock(&file->f_lock);
- file->f_mode &= ~FMODE_RANDOM;
- spin_unlock(&file->f_lock);
break;
case POSIX_FADV_WILLNEED:
if (!mapping->a_ops->readpage) {
/*
* Splice_read and readahead add shmem/tmpfs pages into the page cache
* before shmem_readpage has a chance to mark them as SwapBacked: they
- * need to go on the anon lru below, and mem_cgroup_cache_charge
+ * need to go on the active_anon lru below, and mem_cgroup_cache_charge
* (called in add_to_page_cache) needs to know where they're going too.
*/
if (mapping_cap_swap_backed(mapping))
if (page_is_file_cache(page))
lru_cache_add_file(page);
else
- lru_cache_add_anon(page);
+ lru_cache_add_active_anon(page);
}
return ret;
}
goto page_not_up_to_date;
if (!trylock_page(page))
goto page_not_up_to_date;
- /* Did it get truncated before we got the lock? */
- if (!page->mapping)
- goto page_not_up_to_date_locked;
if (!mapping->a_ops->is_partially_uptodate(page,
desc, offset))
goto page_not_up_to_date_locked;
}
readpage:
- /*
- * A previous I/O error may have been due to temporary
- * failures, eg. multipath errors.
- * PG_error will be set again if readpage fails.
- */
- ClearPageError(page);
/* Start the actual read. The read will unlock the page. */
error = mapping->a_ops->readpage(filp, page);
{
int i;
- if (unlikely(sz/PAGE_SIZE > MAX_ORDER_NR_PAGES)) {
+ if (unlikely(sz > MAX_ORDER_NR_PAGES)) {
clear_gigantic_page(page, addr, sz);
return;
}
mapping = (struct address_space *) page_private(page);
set_page_private(page, 0);
- page->mapping = NULL;
BUG_ON(page_count(page));
INIT_LIST_HEAD(&page->lru);
page = alloc_buddy_huge_page(h, vma, addr);
if (!page) {
hugetlb_put_quota(inode->i_mapping, chg);
- return ERR_PTR(-VM_FAULT_SIGBUS);
+ return ERR_PTR(-VM_FAULT_OOM);
}
}
spin_lock(&inode->i_lock);
inode->i_blocks += blocks_per_huge_page(h);
spin_unlock(&inode->i_lock);
- } else {
+ } else
lock_page(page);
- page->mapping = HUGETLB_POISON;
- }
}
/*
*/
static inline unsigned long page_order(struct page *page)
{
- /* PageBuddy() must be checked by the caller */
+ VM_BUG_ON(!PageBuddy(page));
return page_private(page);
}
}
unlock_page_cgroup(pc);
- *ptr = mem;
if (mem) {
- ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, ptr, false,
+ ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, false,
page);
css_put(&mem->css);
}
+ *ptr = mem;
return ret;
}
{ lru|dirty, lru|dirty, "LRU", me_pagecache_dirty },
{ lru|dirty, lru, "clean LRU", me_pagecache_clean },
+ { swapbacked, swapbacked, "anonymous", me_pagecache_clean },
/*
* Catchall entry: must be at end.
* Do all that is necessary to remove user space mappings. Unmap
* the pages and send SIGBUS to the processes if the data was dirty.
*/
-static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
+static void hwpoison_user_mappings(struct page *p, unsigned long pfn,
int trapno)
{
enum ttu_flags ttu = TTU_UNMAP | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
int i;
int kill = 1;
- if (PageReserved(p) || PageSlab(p))
- return SWAP_SUCCESS;
+ if (PageReserved(p) || PageCompound(p) || PageSlab(p) || PageKsm(p))
+ return;
/*
* This check implies we don't kill processes if their pages
* are in the swap cache early. Those are always late kills.
*/
if (!page_mapped(p))
- return SWAP_SUCCESS;
-
- if (PageCompound(p) || PageKsm(p))
- return SWAP_FAIL;
+ return;
if (PageSwapCache(p)) {
printk(KERN_ERR
*/
kill_procs_ao(&tokill, !!PageDirty(p), trapno,
ret != SWAP_SUCCESS, pfn);
-
- return ret;
}
int __memory_failure(unsigned long pfn, int trapno, int ref)
/*
* Now take care of user space mappings.
- * Abort on fail: __remove_from_page_cache() assumes unmapped page.
*/
- if (hwpoison_user_mappings(p, pfn, trapno) != SWAP_SUCCESS) {
- printk(KERN_ERR "MCE %#lx: cannot unmap page, give up\n", pfn);
- res = -EBUSY;
- goto out;
- }
+ hwpoison_user_mappings(p, pfn, trapno);
/*
* Torn down by someone else?
return i ? : -EFAULT;
}
if (pages) {
- struct page *page;
-
- page = vm_normal_page(gate_vma, start, *pte);
- if (!page) {
- if (!(gup_flags & FOLL_DUMP) &&
- is_zero_pfn(pte_pfn(*pte)))
- page = pte_page(*pte);
- else {
- pte_unmap(pte);
- return i ? : -EFAULT;
- }
- }
+ struct page *page = vm_normal_page(gate_vma, start, *pte);
pages[i] = page;
- get_page(page);
+ if (page)
+ get_page(page);
}
pte_unmap(pte);
if (vmas)
return ret;
}
-/*
- * This is like a special single-page "expand_{down|up}wards()",
- * except we must first make sure that 'address{-|+}PAGE_SIZE'
- * doesn't hit another vma.
- */
-static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
-{
- address &= PAGE_MASK;
- if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
- struct vm_area_struct *prev = vma->vm_prev;
-
- /*
- * Is there a mapping abutting this one below?
- *
- * That's only ok if it's the same stack mapping
- * that has gotten split..
- */
- if (prev && prev->vm_end == address)
- return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
-
- expand_stack(vma, address - PAGE_SIZE);
- }
- if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
- struct vm_area_struct *next = vma->vm_next;
-
- /* As VM_GROWSDOWN but s/below/above/ */
- if (next && next->vm_start == address + PAGE_SIZE)
- return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
-
- expand_upwards(vma, address + PAGE_SIZE);
- }
- return 0;
-}
-
/*
* We enter with non-exclusive mmap_sem (to exclude vma changes,
* but allow concurrent faults), and pte mapped but not yet locked.
spinlock_t *ptl;
pte_t entry;
- pte_unmap(page_table);
-
- /* Check if we need to add a guard page to the stack */
- if (check_stack_guard_page(vma, address) < 0)
- return VM_FAULT_SIGBUS;
-
- /* Use the zero-page for reads */
if (!(flags & FAULT_FLAG_WRITE)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
vma->vm_page_prot));
- page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
+ ptl = pte_lockptr(mm, pmd);
+ spin_lock(ptl);
if (!pte_none(*page_table))
goto unlock;
goto setpte;
}
/* Allocate our own private page. */
+ pte_unmap(page_table);
+
if (unlikely(anon_vma_prepare(vma)))
goto oom;
page = alloc_zeroed_user_highpage_movable(vma, address);
/* Return the start of the next active pageblock after a given page */
static struct page *next_active_pageblock(struct page *page)
{
+ int pageblocks_stride;
+
/* Ensure the starting page is pageblock-aligned */
BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1));
+ /* Move forward by at least 1 * pageblock_nr_pages */
+ pageblocks_stride = 1;
+
/* If the entire pageblock is free, move to the end of free page */
- if (pageblock_free(page)) {
- int order;
- /* be careful. we don't have locks, page_order can be changed.*/
- order = page_order(page);
- if ((order < MAX_ORDER) && (order >= pageblock_order))
- return page + (1 << order);
- }
+ if (pageblock_free(page))
+ pageblocks_stride += page_order(page) - pageblock_order;
- return page + pageblock_nr_pages;
+ return page + (pageblocks_stride * pageblock_nr_pages);
}
/* Checks if this range of memory is likely to be hot-removable. */
* Scanning pfn is much easier than scanning lru list.
* Scan pfn from start to end and Find LRU page.
*/
-unsigned long scan_lru_pages(unsigned long start, unsigned long end)
+int scan_lru_pages(unsigned long start, unsigned long end)
{
unsigned long pfn;
struct page *page;
(void)first_zones_zonelist(zonelist, highest_zoneidx,
&policy->v.nodes,
&zone);
- return zone ? zone->node : numa_node_id();
+ return zone->node;
}
default:
char *rest = nodelist;
while (isdigit(*rest))
rest++;
- if (*rest)
- goto out;
+ if (!*rest)
+ err = 0;
}
break;
case MPOL_INTERLEAVE:
*/
if (!nodelist)
nodes = node_states[N_HIGH_MEMORY];
+ err = 0;
break;
case MPOL_LOCAL:
/*
goto out;
mode = MPOL_PREFERRED;
break;
- case MPOL_DEFAULT:
- /*
- * Insist on an empty nodelist
- */
- if (!nodelist)
- err = 0;
- goto out;
- case MPOL_BIND:
- /*
- * Insist on a nodelist
- */
- if (!nodelist)
- goto out;
+
+ /*
+ * case MPOL_BIND: mpol_new() enforces non-empty nodemask.
+ * case MPOL_DEFAULT: mpol_new() enforces empty nodemask, ignores flags.
+ */
}
mode_flags = 0;
else if (!strcmp(flags, "relative"))
mode_flags |= MPOL_F_RELATIVE_NODES;
else
- goto out;
+ err = 1;
}
new = mpol_new(mode, mode_flags, &nodes);
if (IS_ERR(new))
- goto out;
-
- {
+ err = 1;
+ else {
int ret;
NODEMASK_SCRATCH(scratch);
if (scratch) {
ret = -ENOMEM;
NODEMASK_SCRATCH_FREE(scratch);
if (ret) {
+ err = 1;
mpol_put(new);
- goto out;
+ } else if (no_context) {
+ /* save for contextualization */
+ new->w.user_nodemask = nodes;
}
}
- err = 0;
- if (no_context) {
- /* save for contextualization */
- new->w.user_nodemask = nodes;
- }
out:
/* Restore string for error message */
}
}
-static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSDOWN) &&
- (vma->vm_start == addr) &&
- !vma_stack_continue(vma->vm_prev, addr);
-}
-
/**
* __mlock_vma_pages_range() - mlock a range of pages in the vma.
* @vma: target vma
if (vma->vm_flags & VM_WRITE)
gup_flags |= FOLL_WRITE;
- /* We don't try to access the guard page of a stack vma */
- if (stack_guard_page(vma, start)) {
- addr += PAGE_SIZE;
- nr_pages--;
- }
-
while (nr_pages > 0) {
int i;
__vma_link_list(struct mm_struct *mm, struct vm_area_struct *vma,
struct vm_area_struct *prev, struct rb_node *rb_parent)
{
- struct vm_area_struct *next;
-
- vma->vm_prev = prev;
if (prev) {
- next = prev->vm_next;
+ vma->vm_next = prev->vm_next;
prev->vm_next = vma;
} else {
mm->mmap = vma;
if (rb_parent)
- next = rb_entry(rb_parent,
+ vma->vm_next = rb_entry(rb_parent,
struct vm_area_struct, vm_rb);
else
- next = NULL;
+ vma->vm_next = NULL;
}
- vma->vm_next = next;
- if (next)
- next->vm_prev = vma;
}
void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
__vma_unlink(struct mm_struct *mm, struct vm_area_struct *vma,
struct vm_area_struct *prev)
{
- struct vm_area_struct *next = vma->vm_next;
-
- prev->vm_next = next;
- if (next)
- next->vm_prev = prev;
+ prev->vm_next = vma->vm_next;
rb_erase(&vma->vm_rb, &mm->mm_rb);
if (mm->mmap_cache == vma)
mm->mmap_cache = prev;
* PA-RISC uses this for its stack; IA64 for its Register Backing Store.
* vma is the last one with address > vma->vm_end. Have to extend vma.
*/
+#ifndef CONFIG_IA64
+static
+#endif
int expand_upwards(struct vm_area_struct *vma, unsigned long address)
{
int error;
unsigned long addr;
insertion_point = (prev ? &prev->vm_next : &mm->mmap);
- vma->vm_prev = NULL;
do {
rb_erase(&vma->vm_rb, &mm->mm_rb);
mm->map_count--;
vma = vma->vm_next;
} while (vma && vma->vm_start < end);
*insertion_point = vma;
- if (vma)
- vma->vm_prev = prev;
tail_vma->vm_next = NULL;
if (mm->unmap_area == arch_unmap_area)
addr = prev ? prev->vm_end : mm->mmap_base;
return 1;
}
#endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */
-
-#ifdef CONFIG_SMP
-/* Called when a more accurate view of NR_FREE_PAGES is needed */
-unsigned long zone_nr_free_pages(struct zone *zone)
-{
- unsigned long nr_free_pages = zone_page_state(zone, NR_FREE_PAGES);
-
- /*
- * While kswapd is awake, it is considered the zone is under some
- * memory pressure. Under pressure, there is a risk that
- * per-cpu-counter-drift will allow the min watermark to be breached
- * potentially causing a live-lock. While kswapd is awake and
- * free pages are low, get a better estimate for free pages
- */
- if (nr_free_pages < zone->percpu_drift_mark &&
- !waitqueue_active(&zone->zone_pgdat->kswapd_wait))
- return zone_page_state_snapshot(zone, NR_FREE_PAGES);
-
- return nr_free_pages;
-}
-#endif /* CONFIG_SMP */
mmu_notifier_invalidate_range_end(mm, start, end);
vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
vm_stat_account(mm, newflags, vma->vm_file, nrpages);
- perf_event_mmap(vma);
return 0;
fail:
error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
if (error)
goto out;
+ perf_event_mmap(vma);
nstart = tmp;
if (nstart < prev->vm_end)
*/
static void add_vma_to_mm(struct mm_struct *mm, struct vm_area_struct *vma)
{
- struct vm_area_struct *pvma, **pp, *next;
+ struct vm_area_struct *pvma, **pp;
struct address_space *mapping;
struct rb_node **p, *parent;
break;
}
- next = *pp;
+ vma->vm_next = *pp;
*pp = vma;
- vma->vm_next = next;
- if (next)
- next->vm_prev = vma;
}
/*
mm->mmap = vma->vm_next;
delete_vma_from_mm(vma);
delete_vma(mm, vma);
- cond_resched();
}
kleave("");
list_for_each_entry(c, &p->children, sibling) {
if (c->mm == p->mm)
continue;
- if (mem && !task_in_mem_cgroup(c, mem))
- continue;
if (!oom_kill_task(c))
return 0;
}
{
int migratetype = 0;
int batch_free = 0;
- int to_free = count;
spin_lock(&zone->lock);
zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
zone->pages_scanned = 0;
- while (to_free) {
+ __mod_zone_page_state(zone, NR_FREE_PAGES, count);
+ while (count) {
struct page *page;
struct list_head *list;
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
__free_one_page(page, zone, 0, page_private(page));
trace_mm_page_pcpu_drain(page, 0, page_private(page));
- } while (--to_free && --batch_free && !list_empty(list));
+ } while (--count && --batch_free && !list_empty(list));
}
- __mod_zone_page_state(zone, NR_FREE_PAGES, count);
spin_unlock(&zone->lock);
}
zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
zone->pages_scanned = 0;
- __free_one_page(page, zone, order, migratetype);
__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+ __free_one_page(page, zone, order, migratetype);
spin_unlock(&zone->lock);
}
{
/* free_pages may go negative - that's OK */
long min = mark;
- long free_pages = zone_nr_free_pages(z) - (1 << order) + 1;
+ long free_pages = zone_page_state(z, NR_FREE_PAGES) - (1 << order) + 1;
int o;
if (alloc_flags & ALLOC_HIGH)
struct page *page = NULL;
struct reclaim_state reclaim_state;
struct task_struct *p = current;
- bool drained = false;
cond_resched();
cond_resched();
- if (unlikely(!(*did_some_progress)))
- return NULL;
+ if (order != 0)
+ drain_all_pages();
-retry:
- page = get_page_from_freelist(gfp_mask, nodemask, order,
+ if (likely(*did_some_progress))
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
-
- /*
- * If an allocation failed after direct reclaim, it could be because
- * pages are pinned on the per-cpu lists. Drain them and try again
- */
- if (!page && !drained) {
- drain_all_pages();
- drained = true;
- goto retry;
- }
-
return page;
}
" all_unreclaimable? %s"
"\n",
zone->name,
- K(zone_nr_free_pages(zone)),
+ K(zone_page_state(zone, NR_FREE_PAGES)),
K(min_wmark_pages(zone)),
K(low_wmark_pages(zone)),
K(high_wmark_pages(zone)),
if (pcpu_first_unit_cpu == NR_CPUS)
pcpu_first_unit_cpu = cpu;
- pcpu_last_unit_cpu = cpu;
}
}
+ pcpu_last_unit_cpu = cpu;
pcpu_nr_units = unit;
for_each_possible_cpu(cpu)
if (!ra->ra_pages)
return;
- /* be dumb */
- if (filp && (filp->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(mapping, filp, offset, req_size);
- return;
- }
-
/* do read-ahead */
ondemand_readahead(mapping, ra, filp, false, offset, req_size);
}
/* do read-ahead */
ondemand_readahead(mapping, ra, filp, true, offset, req_size);
-
-#ifdef CONFIG_BLOCK
- /*
- * Normally the current page is !uptodate and lock_page() will be
- * immediately called to implicitly unplug the device. However this
- * is not always true for RAID configurations, where data arrives
- * not strictly in their submission order. In this case we need to
- * explicitly kick off the IO.
- */
- if (PageUptodate(page))
- blk_run_backing_dev(mapping->backing_dev_info, NULL);
-#endif
}
EXPORT_SYMBOL_GPL(page_cache_async_readahead);
if (limit > 1)
limit = 12;
- ac_ptr = kzalloc_node(memsize, gfp, node);
+ ac_ptr = kmalloc_node(memsize, gfp, node);
if (ac_ptr) {
for_each_node(i) {
- if (i == node || !node_online(i))
+ if (i == node || !node_online(i)) {
+ ac_ptr[i] = NULL;
continue;
+ }
ac_ptr[i] = alloc_arraycache(node, limit, 0xbaadf00d, gfp);
if (!ac_ptr[i]) {
for (i--; i >= 0; i--)
}
#if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
if (size >= malloc_sizes[INDEX_L3 + 1].cs_size
- && cachep->obj_size > cache_line_size() && ALIGN(size, align) < PAGE_SIZE) {
- cachep->obj_offset += PAGE_SIZE - ALIGN(size, align);
+ && cachep->obj_size > cache_line_size() && size < PAGE_SIZE) {
+ cachep->obj_offset += PAGE_SIZE - size;
size = PAGE_SIZE;
}
#endif
if (offset > si->highest_bit)
scan_base = offset = si->lowest_bit;
- /* reuse swap entry of cache-only swap if not hibernation. */
- if (vm_swap_full()
- && cache == SWAP_CACHE
- && si->swap_map[offset] == SWAP_HAS_CACHE) {
+ /* reuse swap entry of cache-only swap if not busy. */
+ if (vm_swap_full() && si->swap_map[offset] == SWAP_HAS_CACHE) {
int swap_was_freed;
spin_unlock(&swap_lock);
swap_was_freed = __try_to_reclaim_swap(si, offset);
/* for per-CPU blocks */
static void purge_fragmented_blocks_allcpus(void);
-/*
- * called before a call to iounmap() if the caller wants vm_area_struct's
- * immediately freed.
- */
-void set_iounmap_nonlazy(void)
-{
- atomic_set(&vmap_lazy_nr, lazy_max_pages()+1);
-}
-
/*
* Purges all lazily-freed vmap areas.
*
return isolated > inactive;
}
-/*
- * Returns true if the caller should wait to clean dirty/writeback pages.
- *
- * If we are direct reclaiming for contiguous pages and we do not reclaim
- * everything in the list, try again and wait for writeback IO to complete.
- * This will stall high-order allocations noticeably. Only do that when really
- * need to free the pages under high memory pressure.
- */
-static inline bool should_reclaim_stall(unsigned long nr_taken,
- unsigned long nr_freed,
- int priority,
- int lumpy_reclaim,
- struct scan_control *sc)
-{
- int lumpy_stall_priority;
-
- /* kswapd should not stall on sync IO */
- if (current_is_kswapd())
- return false;
-
- /* Only stall on lumpy reclaim */
- if (!lumpy_reclaim)
- return false;
-
- /* If we have reclaimed everything on the isolated list, no stall */
- if (nr_freed == nr_taken)
- return false;
-
- /*
- * For high-order allocations, there are two stall thresholds.
- * High-cost allocations stall immediately whereas lower
- * order allocations such as stacks require the scanning
- * priority to be much higher before stalling.
- */
- if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
- lumpy_stall_priority = DEF_PRIORITY;
- else
- lumpy_stall_priority = DEF_PRIORITY / 3;
-
- return priority <= lumpy_stall_priority;
-}
-
/*
* shrink_inactive_list() is a helper for shrink_zone(). It returns the number
* of reclaimed pages
nr_scanned += nr_scan;
nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
- /* Check if we should synchronously wait for writeback */
- if (should_reclaim_stall(nr_taken, nr_freed, priority,
- lumpy_reclaim, sc)) {
+ /*
+ * If we are direct reclaiming for contiguous pages and we do
+ * not reclaim everything in the list, try again and wait
+ * for IO to complete. This will stall high-order allocations
+ * but that should be acceptable to the caller
+ */
+ if (nr_freed < nr_taken && !current_is_kswapd() &&
+ lumpy_reclaim) {
congestion_wait(BLK_RW_ASYNC, HZ/10);
/*
int threshold;
for_each_populated_zone(zone) {
- unsigned long max_drift, tolerate_drift;
-
threshold = calculate_threshold(zone);
for_each_online_cpu(cpu)
zone_pcp(zone, cpu)->stat_threshold = threshold;
-
- /*
- * Only set percpu_drift_mark if there is a danger that
- * NR_FREE_PAGES reports the low watermark is ok when in fact
- * the min watermark could be breached by an allocation
- */
- tolerate_drift = low_wmark_pages(zone) - min_wmark_pages(zone);
- max_drift = num_online_cpus() * threshold;
- if (max_drift > tolerate_drift)
- zone->percpu_drift_mark = high_wmark_pages(zone) +
- max_drift;
}
}
"\n scanned %lu"
"\n spanned %lu"
"\n present %lu",
- zone_nr_free_pages(zone),
+ zone_page_state(zone, NR_FREE_PAGES),
min_wmark_pages(zone),
low_wmark_pages(zone),
high_wmark_pages(zone),
csocket = NULL;
- if (strlen(addr) >= UNIX_PATH_MAX) {
+ if (strlen(addr) > UNIX_PATH_MAX) {
P9_EPRINTK(KERN_ERR, "p9_trans_unix: address too long: %s\n",
addr);
err = -ENAMETOOLONG;
int len = cmd->len - sizeof(*rsp);
char req[64];
- if (len > sizeof(req) - sizeof(struct l2cap_conf_req)) {
- l2cap_send_disconn_req(conn, sk);
- goto done;
- }
-
/* throw out any old stored conf requests */
result = L2CAP_CONF_SUCCESS;
len = l2cap_parse_conf_rsp(sk, rsp->data,
struct sock *sk;
struct hlist_node *node;
char *str = buf;
- int size = PAGE_SIZE;
read_lock_bh(&l2cap_sk_list.lock);
sk_for_each(sk, node, &l2cap_sk_list.head) {
struct l2cap_pinfo *pi = l2cap_pi(sk);
- int len;
- len = snprintf(str, size, "%s %s %d %d 0x%4.4x 0x%4.4x %d %d %d\n",
+ str += sprintf(str, "%s %s %d %d 0x%4.4x 0x%4.4x %d %d %d\n",
batostr(&bt_sk(sk)->src), batostr(&bt_sk(sk)->dst),
sk->sk_state, __le16_to_cpu(pi->psm), pi->scid,
pi->dcid, pi->imtu, pi->omtu, pi->sec_level);
-
- size -= len;
- if (size <= 0)
- break;
-
- str += len;
}
read_unlock_bh(&l2cap_sk_list.lock);
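The lines removed in this hunk follow a common bounded-append pattern: track how much of the page-sized buffer remains and stop once snprintf() can no longer fit a line. A minimal userspace sketch of that pattern, with names that are illustrative rather than taken from the Bluetooth code:

#include <stdio.h>

/* Append one formatted line into buf, never writing past size bytes.
 * Returns the new used count; callers stop appending once used >= size. */
static size_t append_line(char *buf, size_t size, size_t used,
			  const char *name, int state)
{
	int len;

	if (used >= size)
		return used;
	len = snprintf(buf + used, size - used, "%s %d\n", name, state);
	if (len < 0)
		return used;
	return used + (size_t)len;
}

int main(void)
{
	char buf[64];
	size_t used = 0;

	used = append_line(buf, sizeof(buf), used, "conn0", 1);
	used = append_line(buf, sizeof(buf), used, "conn1", 2);
	fputs(buf, stdout);	/* snprintf() keeps buf NUL-terminated */
	return 0;
}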
return hci_conn_security(l2cap_pi(sk)->conn->hcon, d->sec_level,
auth_type);
}
-#if 0 //cz@rock-chips.com
-static void rfcomm_session_timeout(unsigned long arg)
-{
- struct rfcomm_session *s = (void *) arg;
-
- BT_DBG("session %p state %ld", s, s->state);
- set_bit(RFCOMM_TIMED_OUT, &s->flags);
- rfcomm_schedule(RFCOMM_SCHED_TIMEO);
-}
-
-static void rfcomm_session_set_timer(struct rfcomm_session *s, long timeout)
-{
- BT_DBG("session %p state %ld timeout %ld", s, s->state, timeout);
-
- if (!mod_timer(&s->timer, jiffies + timeout))
- rfcomm_session_hold(s);
-}
-
-static void rfcomm_session_clear_timer(struct rfcomm_session *s)
-{
- BT_DBG("session %p state %ld", s, s->state);
-
- if (timer_pending(&s->timer) && del_timer(&s->timer))
- rfcomm_session_put(s);
-}
-#endif
/* ---- RFCOMM DLCs ---- */
static void rfcomm_dlc_timeout(unsigned long arg)
{
list_for_each_safe(p, n, &session_list) {
struct rfcomm_session *s;
s = list_entry(p, struct rfcomm_session, list);
-#if 0 //cz@rock-chips.com
- if (test_and_clear_bit(RFCOMM_TIMED_OUT, &s->flags)) {
- s->state = BT_DISCONN;
- rfcomm_send_disc(s, 0);
- rfcomm_session_put(s);
- continue;
- }
-#endif
+
if (s->state == BT_LISTEN) {
rfcomm_accept_connection(s);
continue;
struct rfcomm_session *s;
struct list_head *pp, *p;
char *str = buf;
- int size = PAGE_SIZE;
rfcomm_lock();
list_for_each(pp, &s->dlcs) {
struct sock *sk = s->sock->sk;
struct rfcomm_dlc *d = list_entry(pp, struct rfcomm_dlc, list);
- int len;
- len = snprintf(str, size, "%s %s %ld %d %d %d %d\n",
+ str += sprintf(str, "%s %s %ld %d %d %d %d\n",
batostr(&bt_sk(sk)->src), batostr(&bt_sk(sk)->dst),
d->state, d->dlci, d->mtu, d->rx_credits, d->tx_credits);
-
- size -= len;
- if (size <= 0)
- break;
-
- str += len;
}
-
- if (size <= 0)
- break;
}
rfcomm_unlock();
struct sock *sk;
struct hlist_node *node;
char *str = buf;
- int size = PAGE_SIZE;
read_lock_bh(&rfcomm_sk_list.lock);
sk_for_each(sk, node, &rfcomm_sk_list.head) {
- int len;
-
- len = snprintf(str, size, "%s %s %d %d\n",
+ str += sprintf(str, "%s %s %d %d\n",
batostr(&bt_sk(sk)->src), batostr(&bt_sk(sk)->dst),
sk->sk_state, rfcomm_pi(sk)->channel);
-
- size -= len;
- if (size <= 0)
- break;
-
- str += len;
}
read_unlock_bh(&rfcomm_sk_list.lock);
struct sock *sk;
struct hlist_node *node;
char *str = buf;
- int size = PAGE_SIZE;
read_lock_bh(&sco_sk_list.lock);
sk_for_each(sk, node, &sco_sk_list.head) {
- int len;
-
- len = snprintf(str, size, "%s %s %d\n",
+ str += sprintf(str, "%s %s %d\n",
batostr(&bt_sk(sk)->src), batostr(&bt_sk(sk)->dst),
sk->sk_state);
-
- size -= len;
- if (size <= 0)
- break;
-
- str += len;
}
read_unlock_bh(&sco_sk_list.lock);
pskb_trim_rcsum(skb, len);
- /* BUG: Should really parse the IP options here. */
- memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
-
nf_bridge_put(skb->nf_bridge);
if (!nf_bridge_alloc(skb))
return NF_DROP;
if (skb->nfct != NULL &&
(skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb)) &&
skb->len > skb->dev->mtu &&
- !skb_is_gso(skb)) {
- /* BUG: Should really parse the IP options here. */
- memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ !skb_is_gso(skb))
return ip_fragment(skb, br_dev_queue_push_xmit);
- } else
+ else
return br_dev_queue_push_xmit(skb);
}
#else
#include <net/sock.h>
#include <net/net_namespace.h>
-/*
- * To send multiple CAN frame content within TX_SETUP or to filter
- * CAN messages with multiplex index within RX_SETUP, the number of
- * different filters is limited to 256 due to the one byte index value.
- */
-#define MAX_NFRAMES 256
-
/* use of last_frames[index].can_dlc */
#define RX_RECV 0x40 /* received data for this element */
#define RX_THR 0x80 /* element not been sent due to throttle feature */
struct list_head list;
int ifindex;
canid_t can_id;
- u32 flags;
+ int flags;
unsigned long frames_abs, frames_filtered;
struct timeval ival1, ival2;
struct hrtimer timer, thrtimer;
struct tasklet_struct tsklet, thrtsklet;
ktime_t rx_stamp, kt_ival1, kt_ival2, kt_lastmsg;
int rx_ifindex;
- u32 count;
- u32 nframes;
- u32 currframe;
+ int count;
+ int nframes;
+ int currframe;
struct can_frame *frames;
struct can_frame *last_frames;
struct can_frame sframe;
struct list_head tx_ops;
unsigned long dropped_usr_msgs;
struct proc_dir_entry *bcm_proc_read;
- char procname [20]; /* pointer printed in ASCII with \0 */
+ char procname [9]; /* pointer printed in ASCII with \0 */
};
static inline struct bcm_sock *bcm_sk(const struct sock *sk)
seq_printf(m, "rx_op: %03X %-5s ",
op->can_id, bcm_proc_getifname(ifname, op->ifindex));
- seq_printf(m, "[%u]%c ", op->nframes,
+ seq_printf(m, "[%d]%c ", op->nframes,
(op->flags & RX_CHECK_DLC)?'d':' ');
if (op->kt_ival1.tv64)
seq_printf(m, "timeo=%lld ",
list_for_each_entry(op, &bo->tx_ops, list) {
- seq_printf(m, "tx_op: %03X %s [%u] ",
+ seq_printf(m, "tx_op: %03X %s [%d] ",
op->can_id,
bcm_proc_getifname(ifname, op->ifindex),
op->nframes);
struct can_frame *firstframe;
struct sockaddr_can *addr;
struct sock *sk = op->sk;
- unsigned int datalen = head->nframes * CFSIZ;
+ int datalen = head->nframes * CFSIZ;
int err;
skb = alloc_skb(sizeof(*head) + datalen, gfp_any());
* bcm_rx_cmp_to_index - (bit)compares the currently received data to formerly
* received data stored in op->last_frames[]
*/
-static void bcm_rx_cmp_to_index(struct bcm_op *op, unsigned int index,
+static void bcm_rx_cmp_to_index(struct bcm_op *op, int index,
const struct can_frame *rxdata)
{
/*
/*
* bcm_rx_do_flush - helper for bcm_rx_thr_flush
*/
-static inline int bcm_rx_do_flush(struct bcm_op *op, int update,
- unsigned int index)
+static inline int bcm_rx_do_flush(struct bcm_op *op, int update, int index)
{
if ((op->last_frames) && (op->last_frames[index].can_dlc & RX_THR)) {
if (update)
int updated = 0;
if (op->nframes > 1) {
- unsigned int i;
+ int i;
/* for MUX filter we start at index 1 */
for (i = 1; i < op->nframes; i++)
{
struct bcm_op *op = (struct bcm_op *)data;
const struct can_frame *rxframe = (struct can_frame *)skb->data;
- unsigned int i;
+ int i;
/* disable timeout */
hrtimer_cancel(&op->timer);
{
struct bcm_sock *bo = bcm_sk(sk);
struct bcm_op *op;
- unsigned int i;
- int err;
+ int i, err;
/* we need a real device to send frames */
if (!ifindex)
return -ENODEV;
- /* check nframes boundaries - we need at least one can_frame */
- if (msg_head->nframes < 1 || msg_head->nframes > MAX_NFRAMES)
+ /* we need at least one can_frame */
+ if (msg_head->nframes < 1)
return -EINVAL;
/* check the given can_id */
msg_head->nframes = 0;
}
- /* the first element contains the mux-mask => MAX_NFRAMES + 1 */
- if (msg_head->nframes > MAX_NFRAMES + 1)
- return -EINVAL;
-
if ((msg_head->flags & RX_RTR_FRAME) &&
((msg_head->nframes != 1) ||
(!(msg_head->can_id & CAN_RTR_FLAG))))
compat_size_t len;
if (get_user(len, &uiov32->iov_len) ||
- get_user(buf, &uiov32->iov_base))
- return -EFAULT;
-
- if (len > INT_MAX - tot_len)
- len = INT_MAX - tot_len;
-
+ get_user(buf, &uiov32->iov_base)) {
+ tot_len = -EFAULT;
+ break;
+ }
tot_len += len;
kiov->iov_base = compat_ptr(buf);
kiov->iov_len = (__kernel_size_t) len;
static bool can_checksum_protocol(unsigned long features, __be16 protocol)
{
- return ((features & NETIF_F_NO_CSUM) ||
- ((features & NETIF_F_V4_CSUM) &&
+ return ((features & NETIF_F_GEN_CSUM) ||
+ ((features & NETIF_F_IP_CSUM) &&
protocol == htons(ETH_P_IP)) ||
- ((features & NETIF_F_V6_CSUM) &&
+ ((features & NETIF_F_IPV6_CSUM) &&
protocol == htons(ETH_P_IPV6)) ||
((features & NETIF_F_FCOE_CRC) &&
protocol == htons(ETH_P_FCOE)));
put_page(skb_shinfo(skb)->frags[0].page);
memmove(skb_shinfo(skb)->frags,
skb_shinfo(skb)->frags + 1,
- --skb_shinfo(skb)->nr_frags * sizeof(skb_frag_t));
+ --skb_shinfo(skb)->nr_frags);
}
}
switch (ret) {
case GRO_NORMAL:
case GRO_HELD:
- skb->protocol = eth_type_trans(skb, skb->dev);
+ skb->protocol = eth_type_trans(skb, napi->dev);
if (ret == GRO_NORMAL)
return netif_receive_skb(skb);
return 0;
}
-static int ethtool_set_rxnfc(struct net_device *dev,
- u32 cmd, void __user *useraddr)
+static int ethtool_set_rxnfc(struct net_device *dev, void __user *useraddr)
{
- struct ethtool_rxnfc info;
- size_t info_size = sizeof(info);
+ struct ethtool_rxnfc cmd;
if (!dev->ethtool_ops->set_rxnfc)
return -EOPNOTSUPP;
- /* struct ethtool_rxnfc was originally defined for
- * ETHTOOL_{G,S}RXFH with only the cmd, flow_type and data
- * members. User-space might still be using that
- * definition. */
- if (cmd == ETHTOOL_SRXFH)
- info_size = (offsetof(struct ethtool_rxnfc, data) +
- sizeof(info.data));
-
- if (copy_from_user(&info, useraddr, info_size))
+ if (copy_from_user(&cmd, useraddr, sizeof(cmd)))
return -EFAULT;
- return dev->ethtool_ops->set_rxnfc(dev, &info);
+ return dev->ethtool_ops->set_rxnfc(dev, &cmd);
}
-static int ethtool_get_rxnfc(struct net_device *dev,
- u32 cmd, void __user *useraddr)
+static int ethtool_get_rxnfc(struct net_device *dev, void __user *useraddr)
{
struct ethtool_rxnfc info;
- size_t info_size = sizeof(info);
const struct ethtool_ops *ops = dev->ethtool_ops;
int ret;
void *rule_buf = NULL;
if (!ops->get_rxnfc)
return -EOPNOTSUPP;
- /* struct ethtool_rxnfc was originally defined for
- * ETHTOOL_{G,S}RXFH with only the cmd, flow_type and data
- * members. User-space might still be using that
- * definition. */
- if (cmd == ETHTOOL_GRXFH)
- info_size = (offsetof(struct ethtool_rxnfc, data) +
- sizeof(info.data));
-
- if (copy_from_user(&info, useraddr, info_size))
+ if (copy_from_user(&info, useraddr, sizeof(info)))
return -EFAULT;
if (info.cmd == ETHTOOL_GRXCLSRLALL) {
if (info.rule_cnt > 0) {
- if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32))
- rule_buf = kzalloc(info.rule_cnt * sizeof(u32),
- GFP_USER);
+ rule_buf = kmalloc(info.rule_cnt * sizeof(u32),
+ GFP_USER);
if (!rule_buf)
return -ENOMEM;
}
goto err_out;
ret = -EFAULT;
- if (copy_to_user(useraddr, &info, info_size))
+ if (copy_to_user(useraddr, &info, sizeof(info)))
goto err_out;
if (rule_buf) {
if (regs.len > reglen)
regs.len = reglen;
- regbuf = kzalloc(reglen, GFP_USER);
+ regbuf = kmalloc(reglen, GFP_USER);
if (!regbuf)
return -ENOMEM;
case ETHTOOL_GRXCLSRLCNT:
case ETHTOOL_GRXCLSRULE:
case ETHTOOL_GRXCLSRLALL:
- rc = ethtool_get_rxnfc(dev, ethcmd, useraddr);
+ rc = ethtool_get_rxnfc(dev, useraddr);
break;
case ETHTOOL_SRXFH:
case ETHTOOL_SRXCLSRLDEL:
case ETHTOOL_SRXCLSRLINS:
- rc = ethtool_set_rxnfc(dev, ethcmd, useraddr);
+ rc = ethtool_set_rxnfc(dev, useraddr);
break;
case ETHTOOL_GGRO:
rc = ethtool_get_gro(dev, useraddr);
int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr *address, int mode)
{
- int size, ct, err;
+ int size, err, ct;
if (m->msg_namelen) {
if (mode == VERIFY_READ) {
err = 0;
for (ct = 0; ct < m->msg_iovlen; ct++) {
- size_t len = iov[ct].iov_len;
-
- if (len > INT_MAX - err) {
- len = INT_MAX - err;
- iov[ct].iov_len = len;
- }
- err += len;
+ err += iov[ct].iov_len;
+ /*
+ * Goal is not to verify user data, but to prevent returning
+ * a negative value, which is interpreted as errno.
+ * Overflow is still possible, but it is harmless.
+ */
+ if (err < 0)
+ return -EMSGSIZE;
}
return err;
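Both versions of verify_iovec() above guard against the per-segment lengths summing past what an int return value can represent. A hedged userspace sketch of the clamping approach; the helper name and the use of struct iovec from <sys/uio.h> are illustrative assumptions, not the kernel interface:

#include <limits.h>
#include <stdio.h>
#include <sys/uio.h>

/* Sketch only: sum iovec lengths while clamping so the total always fits
 * in an int, similar in spirit to the INT_MAX checks shown above. */
static int total_iov_len(const struct iovec *iov, int n)
{
	int tot = 0;

	for (int i = 0; i < n; i++) {
		size_t len = iov[i].iov_len;

		if (len > (size_t)(INT_MAX - tot))
			len = (size_t)(INT_MAX - tot);
		tot += (int)len;
	}
	return tot;
}

int main(void)
{
	char a[8], b[16];
	struct iovec iov[2] = {
		{ .iov_base = a, .iov_len = sizeof(a) },
		{ .iov_base = b, .iov_len = sizeof(b) },
	};

	printf("%d\n", total_iov_len(iov, 2));	/* prints 24 */
	return 0;
}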
{
struct hh_cache *hh;
void (*update)(struct hh_cache*, const struct net_device*, const unsigned char *)
- = NULL;
-
- if (neigh->dev->header_ops)
- update = neigh->dev->header_ops->cache_update;
+ = neigh->dev->header_ops->cache_update;
if (update) {
for (hh = neigh->hh; hh; hh = hh->hh_next) {
const struct iw_statistics *iw;
ssize_t ret = -EINVAL;
- if (!rtnl_trylock())
- return restart_syscall();
+ rtnl_lock();
if (dev_isalive(dev)) {
iw = get_wireless_stats(dev);
if (iw)
switch (cmsg->cmsg_type)
{
case SCM_RIGHTS:
- if (!sock->ops || sock->ops->family != PF_UNIX)
- goto error;
err=scm_fp_copy(cmsg, &p->fp);
if (err<0)
goto error;
__copy_skb_header(nskb, skb);
nskb->mac_len = skb->mac_len;
- /* nskb and skb might have different headroom */
- if (nskb->ip_summed == CHECKSUM_PARTIAL)
- nskb->csum_start += skb_headroom(nskb) - headroom;
-
skb_reset_mac_header(nskb);
skb_set_network_header(nskb, skb->mac_len);
nskb->transport_header = (nskb->network_header +
return -E2BIG;
headroom = skb_headroom(p);
- nskb = alloc_skb(headroom + skb_gro_offset(p), GFP_ATOMIC);
+ nskb = netdev_alloc_skb(p->dev, headroom + skb_gro_offset(p));
if (unlikely(!nskb))
return -ENOMEM;
*NAPI_GRO_CB(nskb) = *NAPI_GRO_CB(p);
skb_shinfo(nskb)->frag_list = p;
skb_shinfo(nskb)->gso_size = pinfo->gso_size;
- pinfo->gso_size = 0;
skb_header_release(p);
nskb->prev = p;
set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
sk->sk_write_pending++;
- sk_wait_event(sk, &current_timeo, sk->sk_err ||
- (sk->sk_shutdown & SEND_SHUTDOWN) ||
- (sk_stream_memory_free(sk) &&
- !vm_wait));
+ sk_wait_event(sk, &current_timeo, !sk->sk_err &&
+ !(sk->sk_shutdown & SEND_SHUTDOWN) &&
+ sk_stream_memory_free(sk) &&
+ vm_wait);
sk->sk_write_pending--;
if (vm_wait) {
if (!proc_net_fops_create(&init_net, procname, S_IRUSR, &dccpprobe_fops))
goto err0;
- ret = try_then_request_module((register_jprobe(&dccp_send_probe) == 0),
- "dccp");
+ ret = register_jprobe(&dccp_send_probe);
if (ret)
goto err1;
if (r_len > sizeof(struct linkinfo_dn))
r_len = sizeof(struct linkinfo_dn);
- memset(&link, 0, sizeof(link));
-
switch(sock->state) {
case SS_CONNECTING:
link.idn_linkstate = LL_CONNECTING;
mutex_lock(&econet_mutex);
- if (saddr == NULL || msg->msg_namelen < sizeof(struct sockaddr_ec)) {
- mutex_unlock(&econet_mutex);
- return -EINVAL;
- }
- addr.station = saddr->addr.station;
- addr.net = saddr->addr.net;
- port = saddr->port;
- cb = saddr->cb;
+ if (saddr == NULL) {
+ struct econet_sock *eo = ec_sk(sk);
+
+ addr.station = eo->station;
+ addr.net = eo->net;
+ port = eo->port;
+ cb = eo->cb;
+ } else {
+ if (msg->msg_namelen < sizeof(struct sockaddr_ec)) {
+ mutex_unlock(&econet_mutex);
+ return -EINVAL;
+ }
+ addr.station = saddr->addr.station;
+ addr.net = saddr->addr.net;
+ port = saddr->port;
+ cb = saddr->cb;
+ }
/* Look for a device with the right network number. */
dev = net2dev_map[addr.net];
eb = (struct ec_cb *)&skb->cb;
+ /* BUG: saddr may be NULL */
eb->cookie = saddr->cookie;
eb->sec = *saddr;
eb->sent = ec_tx_done;
err = 0;
switch (cmd) {
case SIOCSIFADDR:
- if (!capable(CAP_NET_ADMIN))
- return -EPERM;
-
edev = dev->ec_ptr;
if (edev == NULL) {
/* Magic up a new one. */
}
ip_mc_up(in_dev);
/* fall through */
- case NETDEV_NOTIFY_PEERS:
case NETDEV_CHANGEADDR:
/* Send gratuitous ARP to notify of link change */
if (IN_DEV_ARP_NOTIFY(in_dev)) {
{
int *valp = ctl->data;
int val = *valp;
- loff_t pos = *ppos;
int ret = proc_dointvec(ctl, write, buffer, lenp, ppos);
if (write && *valp != val) {
struct net *net = ctl->extra2;
if (valp != &IPV4_DEVCONF_DFLT(net, FORWARDING)) {
- if (!rtnl_trylock()) {
- /* Restore the original values before restarting */
- *valp = val;
- *ppos = pos;
+ if (!rtnl_trylock())
return restart_syscall();
- }
if (valp == &IPV4_DEVCONF_ALL(net, FORWARDING)) {
inet_forward_change(net);
} else if (*valp) {
break;
case IGMP_HOST_MEMBERSHIP_REPORT:
case IGMPV2_HOST_MEMBERSHIP_REPORT:
+ case IGMPV3_HOST_MEMBERSHIP_REPORT:
/* Is it our report looped back? */
if (skb_rtable(skb)->fl.iif == 0)
break;
in_dev_put(in_dev);
return pim_rcv_v1(skb);
#endif
- case IGMPV3_HOST_MEMBERSHIP_REPORT:
case IGMP_DVMRP:
case IGMP_TRACE:
case IGMP_HOST_LEAVE_MESSAGE:
* we can switch to copy when see the first bad fragment.
*/
if (skb_has_frags(skb)) {
- struct sk_buff *frag, *frag2;
+ struct sk_buff *frag;
int first_len = skb_pagelen(skb);
+ int truesizes = 0;
if (first_len - hlen > mtu ||
((first_len - hlen) & 7) ||
if (frag->len > mtu ||
((frag->len & 7) && frag->next) ||
skb_headroom(frag) < hlen)
- goto slow_path_clean;
+ goto slow_path;
/* Partially cloned skb? */
if (skb_shared(frag))
- goto slow_path_clean;
+ goto slow_path;
BUG_ON(frag->sk);
if (skb->sk) {
frag->sk = skb->sk;
frag->destructor = sock_wfree;
}
- skb->truesize -= frag->truesize;
+ truesizes += frag->truesize;
}
/* Everything is OK. Generate! */
frag = skb_shinfo(skb)->frag_list;
skb_frag_list_init(skb);
skb->data_len = first_len - skb_headlen(skb);
+ skb->truesize -= truesizes;
skb->len = first_len;
iph->tot_len = htons(first_len);
iph->frag_off = htons(IP_MF);
}
IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
return err;
-
-slow_path_clean:
- skb_walk_frags(skb, frag2) {
- if (frag2 == frag)
- break;
- frag2->sk = NULL;
- frag2->destructor = NULL;
- skb->truesize += frag2->truesize;
- }
}
slow_path:
EXPORT_SYMBOL_GPL(__ip_route_output_key);
-static struct dst_entry *ipv4_blackhole_dst_check(struct dst_entry *dst, u32 cookie)
-{
- return NULL;
-}
-
static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu)
{
}
.family = AF_INET,
.protocol = cpu_to_be16(ETH_P_IP),
.destroy = ipv4_dst_destroy,
- .check = ipv4_blackhole_dst_check,
+ .check = ipv4_dst_check,
.update_pmtu = ipv4_rt_blackhole_update_pmtu,
.entries = ATOMIC_INIT(0),
};
*/
mask = 0;
+ if (sk->sk_err)
+ mask = POLLERR;
/*
* POLLHUP is certainly not done right. But poll() doesn't
if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk))
mask |= POLLOUT | POLLWRNORM;
}
- } else
- mask |= POLLOUT | POLLWRNORM;
+ }
if (tp->urg_data & TCP_URG_VALID)
mask |= POLLPRI;
}
- /* This barrier is coupled with smp_wmb() in tcp_reset() */
- smp_rmb();
- if (sk->sk_err)
- mask |= POLLERR;
-
return mask;
}
goto out_err;
while (--iovlen >= 0) {
- size_t seglen = iov->iov_len;
+ int seglen = iov->iov_len;
unsigned char __user *from = iov->iov_base;
iov++;
sk_eat_skb(sk, skb, 0);
if (!desc->count)
break;
- tp->copied_seq = seq;
}
tp->copied_seq = seq;
}
}
if (sk->sk_state != TCP_CLOSE) {
+ int orphan_count = percpu_counter_read_positive(
+ sk->sk_prot->orphan_count);
+
sk_mem_reclaim(sk);
- if (tcp_too_many_orphans(sk, 0)) {
+ if (tcp_too_many_orphans(sk, orphan_count)) {
if (net_ratelimit())
printk(KERN_INFO "TCP: too many of orphaned "
"sockets\n");
{
struct sk_buff *skb = NULL;
unsigned long nr_pages, limit;
- int i, max_share, cnt;
+ int order, i, max_share;
BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));
INIT_HLIST_HEAD(&tcp_hashinfo.bhash[i].chain);
}
-
- cnt = tcp_hashinfo.ehash_size;
-
- tcp_death_row.sysctl_max_tw_buckets = cnt / 2;
- sysctl_tcp_max_orphans = cnt / 2;
- sysctl_max_syn_backlog = max(128, cnt / 256);
+ /* Try to be a bit smarter and adjust defaults depending
+ * on available memory.
+ */
+ for (order = 0; ((1 << order) << PAGE_SHIFT) <
+ (tcp_hashinfo.bhash_size * sizeof(struct inet_bind_hashbucket));
+ order++)
+ ;
+ if (order >= 4) {
+ tcp_death_row.sysctl_max_tw_buckets = 180000;
+ sysctl_tcp_max_orphans = 4096 << (order - 4);
+ sysctl_max_syn_backlog = 1024;
+ } else if (order < 3) {
+ tcp_death_row.sysctl_max_tw_buckets >>= (3 - order);
+ sysctl_tcp_max_orphans >>= (3 - order);
+ sysctl_max_syn_backlog = 128;
+ }
/* Set the pressure threshold to be a fraction of global memory that
* is up to 1/2 at 256 MB, decreasing toward zero with the amount of
- * memory, with a floor of 128 pages, and a ceiling that prevents an
- * integer overflow.
+ * memory, with a floor of 128 pages.
*/
nr_pages = totalram_pages - totalhigh_pages;
limit = min(nr_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
limit = (limit * (nr_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
limit = max(limit, 128UL);
- limit = min(limit, INT_MAX * 4UL / 3 / 2);
sysctl_tcp_mem[0] = limit / 4 * 3;
sysctl_tcp_mem[1] = limit;
sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;
default:
sk->sk_err = ECONNRESET;
}
- /* This barrier is coupled with smp_rmb() in tcp_poll() */
- smp_wmb();
if (!sock_flag(sk, SOCK_DEAD))
sk->sk_error_report(sk);
/* tcp_ack considers this ACK as duplicate
* and does not calculate rtt.
- * Force it here.
+ * Fix it at least with timestamps.
*/
- tcp_ack_update_rtt(sk, 0, 0);
+ if (tp->rx_opt.saw_tstamp &&
+ tp->rx_opt.rcv_tsecr && !tp->srtt)
+ tcp_ack_saw_tstamp(sk, 0);
if (tp->rx_opt.tstamp_ok)
tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
int mib_idx;
int fwd_rexmitting = 0;
- if (!tp->packets_out)
- return;
-
if (!tp->lost_out)
tp->retransmit_high = tp->snd_una;
static int tcp_out_of_resources(struct sock *sk, int do_reset)
{
struct tcp_sock *tp = tcp_sk(sk);
- int shift = 0;
+ int orphans = percpu_counter_read_positive(&tcp_orphan_count);
/* If peer does not open window for long time, or did not transmit
* anything for long time, penalize it. */
if ((s32)(tcp_time_stamp - tp->lsndtime) > 2*TCP_RTO_MAX || !do_reset)
- shift++;
+ orphans <<= 1;
/* If some dubious ICMP arrived, penalize even more. */
if (sk->sk_err_soft)
- shift++;
+ orphans <<= 1;
- if (tcp_too_many_orphans(sk, shift)) {
+ if (tcp_too_many_orphans(sk, orphans)) {
if (net_ratelimit())
printk(KERN_INFO "Out of socket memory\n");
uh = udp_hdr(skb);
ulen = ntohs(uh->len);
- saddr = ip_hdr(skb)->saddr;
- daddr = ip_hdr(skb)->daddr;
-
if (ulen > skb->len)
goto short_packet;
if (udp4_csum_init(skb, uh, proto))
goto csum_error;
+ saddr = ip_hdr(skb)->saddr;
+ daddr = ip_hdr(skb)->daddr;
+
if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))
return __udp4_lib_mcast_deliver(net, skb, uh,
saddr, daddr, udptable);
udp_table_init(&udp_table);
/* Set the pressure threshold up by the same strategy of TCP. It is a
* fraction of global memory that is up to 1/2 at 256 MB, decreasing
- * toward zero with the amount of memory, with a floor of 128 pages,
- * and a ceiling that prevents an integer overflow.
+ * toward zero with the amount of memory, with a floor of 128 pages.
*/
nr_pages = totalram_pages - totalhigh_pages;
limit = min(nr_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
limit = (limit * (nr_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
limit = max(limit, 128UL);
- limit = min(limit, INT_MAX * 4UL / 3 / 2);
sysctl_udp_mem[0] = limit / 4 * 3;
sysctl_udp_mem[1] = limit;
sysctl_udp_mem[2] = sysctl_udp_mem[0] * 2;
if (xdst->u.rt.fl.oif == fl->oif && /*XXX*/
xdst->u.rt.fl.fl4_dst == fl->fl4_dst &&
xdst->u.rt.fl.fl4_src == fl->fl4_src &&
- !((xdst->u.rt.fl.fl4_tos ^ fl->fl4_tos) & IPTOS_RT_MASK) &&
+ xdst->u.rt.fl.fl4_tos == fl->fl4_tos &&
xfrm_bundle_ok(policy, xdst, fl, AF_INET, 0)) {
dst_clone(dst);
break;
static int xfrm4_get_tos(struct flowi *fl)
{
- return IPTOS_RT_MASK & fl->fl4_tos; /* Strip ECN bits */
+ return fl->fl4_tos;
}
static int xfrm4_init_path(struct xfrm_dst *path, struct dst_entry *dst,
if (p == &net->ipv6.devconf_dflt->forwarding)
return 0;
- if (!rtnl_trylock()) {
- /* Restore the original values before restarting */
- *p = old;
+ if (!rtnl_trylock())
return restart_syscall();
- }
if (p == &net->ipv6.devconf_all->forwarding) {
__s32 newf = net->ipv6.devconf_all->forwarding;
{
int *valp = ctl->data;
int val = *valp;
- loff_t pos = *ppos;
int ret;
ret = proc_dointvec(ctl, write, buffer, lenp, ppos);
if (write)
ret = addrconf_fixup_forwarding(ctl, valp, val);
- if (ret)
- *ppos = pos;
return ret;
}
if (p == &net->ipv6.devconf_dflt->disable_ipv6)
return 0;
- if (!rtnl_trylock()) {
- /* Restore the original values before restarting */
- *p = old;
+ if (!rtnl_trylock())
return restart_syscall();
- }
if (p == &net->ipv6.devconf_all->disable_ipv6) {
__s32 newf = net->ipv6.devconf_all->disable_ipv6;
{
int *valp = ctl->data;
int val = *valp;
- loff_t pos = *ppos;
int ret;
ret = proc_dointvec(ctl, write, buffer, lenp, ppos);
if (write)
ret = addrconf_disable_ipv6(ctl, valp, val);
- if (ret)
- *ppos = pos;
return ret;
}
if (skb_has_frags(skb)) {
int first_len = skb_pagelen(skb);
- struct sk_buff *frag2;
+ int truesizes = 0;
if (first_len - hlen > mtu ||
((first_len - hlen) & 7) ||
if (frag->len > mtu ||
((frag->len & 7) && frag->next) ||
skb_headroom(frag) < hlen)
- goto slow_path_clean;
+ goto slow_path;
/* Partially cloned skb? */
if (skb_shared(frag))
- goto slow_path_clean;
+ goto slow_path;
BUG_ON(frag->sk);
if (skb->sk) {
frag->sk = skb->sk;
frag->destructor = sock_wfree;
+ truesizes += frag->truesize;
}
- skb->truesize -= frag->truesize;
}
err = 0;
first_len = skb_pagelen(skb);
skb->data_len = first_len - skb_headlen(skb);
+ skb->truesize -= truesizes;
skb->len = first_len;
ipv6_hdr(skb)->payload_len = htons(first_len -
sizeof(struct ipv6hdr));
IPSTATS_MIB_FRAGFAILS);
dst_release(&rt->u.dst);
return err;
-
-slow_path_clean:
- skb_walk_frags(skb, frag2) {
- if (frag2 == frag)
- break;
- frag2->sk = NULL;
- frag2->destructor = NULL;
- skb->truesize += frag2->truesize;
- }
}
slow_path:
fl.fl_ip_dport = otcph.source;
security_skb_classify_flow(oldskb, &fl);
dst = ip6_route_output(net, NULL, &fl);
- if (dst == NULL || dst->error) {
- dst_release(dst);
+ if (dst == NULL)
return;
- }
- if (xfrm_lookup(net, &dst, &fl, NULL, 0))
+ if (dst->error || xfrm_lookup(net, &dst, &fl, NULL, 0))
return;
hh_len = (dst->dev->hard_header_len + 15)&~15;
struct inet_frag_queue q;
__be32 id; /* fragment id */
- u32 user;
struct in6_addr saddr;
struct in6_addr daddr;
* i.e. Path MTU discovery
*/
-static void rt6_do_pmtu_disc(struct in6_addr *daddr, struct in6_addr *saddr,
- struct net *net, u32 pmtu, int ifindex)
+void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr,
+ struct net_device *dev, u32 pmtu)
{
struct rt6_info *rt, *nrt;
+ struct net *net = dev_net(dev);
int allfrag = 0;
- rt = rt6_lookup(net, daddr, saddr, ifindex, 0);
+ rt = rt6_lookup(net, daddr, saddr, dev->ifindex, 0);
if (rt == NULL)
return;
dst_release(&rt->u.dst);
}
-void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr,
- struct net_device *dev, u32 pmtu)
-{
- struct net *net = dev_net(dev);
-
- /*
- * RFC 1981 states that a node "MUST reduce the size of the packets it
- * is sending along the path" that caused the Packet Too Big message.
- * Since it's not possible in the general case to determine which
- * interface was used to send the original packet, we update the MTU
- * on the interface that will be used to send future packets. We also
- * update the MTU on the interface that received the Packet Too Big in
- * case the original packet was forced out that interface with
- * SO_BINDTODEVICE or similar. This is the next best thing to the
- * correct behaviour, which would be to update the MTU on all
- * interfaces.
- */
- rt6_do_pmtu_disc(daddr, saddr, net, pmtu, 0);
- rt6_do_pmtu_disc(daddr, saddr, net, pmtu, dev->ifindex);
-}
-
/*
* Misc support functions
*/
err = irda_open_tsap(self, addr->sir_lsap_sel, addr->sir_name);
if (err < 0) {
- irias_delete_object(self->ias_obj);
- self->ias_obj = NULL;
+ kfree(self->ias_obj->name);
+ kfree(self->ias_obj);
return err;
}
IRDA_DEBUG(4, "%s(), strlen=%d\n", __func__, value_len);
/* Make sure the string is null-terminated */
- if (n + value_len < skb->len)
- fp[n + value_len] = 0x00;
+ fp[n+value_len] = 0x00;
IRDA_DEBUG(4, "Got string %s\n", fp+n);
/* Will truncate to IAS_MAX_STRING bytes */
memcpy(&val_len, buf+n, 2); /* To avoid alignment problems */
le16_to_cpus(&val_len); n+=2;
- if (val_len >= 1016) {
+ if (val_len > 1016) {
IRDA_DEBUG(2, "%s(), parameter length to long\n", __func__ );
return -RSP_INVALID_COMMAND_FORMAT;
}
p.pi = pi; /* In case handler needs to know */
p.pl = buf[1]; /* Extract length of value */
- if (p.pl > 32)
- p.pl = 32;
IRDA_DEBUG(2, "%s(), pi=%#x, pl=%d\n", __func__,
p.pi, p.pl);
(__u8) str[0], (__u8) str[1]);
/* Null terminate string */
- str[p.pl] = '\0';
+ str[p.pl+1] = '\0';
p.pv.c = str; /* Handler will need to take a copy */
{
struct sock *sk = sock->sk;
struct llc_sock *llc = llc_sk(sk);
- unsigned int opt;
- int rc = -EINVAL;
+ int rc = -EINVAL, opt;
lock_sock(sk);
if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
if MAC80211 != n
-config MAC80211_HAS_RC
- def_bool n
-
config MAC80211_RC_PID
bool "PID controller based rate control algorithm" if EMBEDDED
- select MAC80211_HAS_RC
---help---
This option enables a TX rate control algorithm for
mac80211 that uses a PID controller to select the TX
config MAC80211_RC_MINSTREL
bool "Minstrel" if EMBEDDED
- select MAC80211_HAS_RC
default y
---help---
This option enables the 'minstrel' TX rate control algorithm
choice
prompt "Default rate control algorithm"
- depends on MAC80211_HAS_RC
default MAC80211_RC_DEFAULT_MINSTREL
---help---
This option selects the default rate control algorithm
endif
-comment "Some wireless drivers require a rate control algorithm"
- depends on MAC80211_HAS_RC=n
-
config MAC80211_MESH
bool "Enable mac80211 mesh networking (pre-802.11s) support"
depends on MAC80211 && EXPERIMENTAL
/* check if the TID waits for addBA response */
spin_lock_bh(&sta->lock);
- if ((*state & (HT_ADDBA_REQUESTED_MSK | HT_ADDBA_RECEIVED_MSK |
- HT_AGG_STATE_REQ_STOP_BA_MSK)) !=
+ if ((*state & (HT_ADDBA_REQUESTED_MSK | HT_ADDBA_RECEIVED_MSK)) !=
HT_ADDBA_REQUESTED_MSK) {
spin_unlock_bh(&sta->lock);
+ *state = HT_AGG_STATE_IDLE;
#ifdef CONFIG_MAC80211_HT_DEBUG
printk(KERN_DEBUG "timer expired on tid %d but we are not "
"(or no longer) expecting addBA response there",
IEEE80211_STA_DISABLE_11N = BIT(4),
IEEE80211_STA_CSA_RECEIVED = BIT(5),
IEEE80211_STA_MFP_ENABLED = BIT(6),
- IEEE80211_STA_NULLFUNC_ACKED = BIT(7),
};
/* flags for MLME request */
rcu_read_lock();
sband = local->hw.wiphy->bands[info->band];
- fc = hdr->frame_control;
sta = sta_info_get(local, hdr->addr1);
local->dot11FailedCount++;
}
- if (ieee80211_is_nullfunc(fc) && ieee80211_has_pm(fc) &&
- (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS) &&
- !(info->flags & IEEE80211_TX_CTL_INJECTED) &&
- local->ps_sdata && !(local->scanning)) {
- if (info->flags & IEEE80211_TX_STAT_ACK) {
- local->ps_sdata->u.mgd.flags |=
- IEEE80211_STA_NULLFUNC_ACKED;
- ieee80211_queue_work(&local->hw,
- &local->dynamic_ps_enable_work);
- } else
- mod_timer(&local->dynamic_ps_timer, jiffies +
- msecs_to_jiffies(10));
- }
-
/* this was a transmitted frame, but now we want to reuse it */
skb_orphan(skb);
if (wk->bss->wmm_used)
wmm = 1;
+ /* get all rates supported by the device and the AP as
+ * some APs don't like getting a superset of their rates
+ * in the association request (e.g. D-Link DAP 1353 in
+ * b-only mode) */
+ rates_len = ieee80211_compatible_rates(wk->bss, sband, &rates);
+
if ((wk->bss->cbss.capability & WLAN_CAPABILITY_SPECTRUM_MGMT) &&
(local->hw.flags & IEEE80211_HW_SPECTRUM_MGMT))
capab |= WLAN_CAPABILITY_SPECTRUM_MGMT;
*pos++ = wk->ssid_len;
memcpy(pos, wk->ssid, wk->ssid_len);
- if (wk->bss->supp_rates_len) {
- /* get all rates supported by the device and the AP as
- * some APs don't like getting a superset of their rates
- * in the association request (e.g. D-Link DAP 1353 in
- * b-only mode) */
- rates_len = ieee80211_compatible_rates(wk->bss, sband, &rates);
- } else {
- rates = ~0;
- rates_len = sband->n_bitrates;
- }
-
/* add all rates which were marked to be used above */
supp_rates_len = rates_len;
if (supp_rates_len > 8)
} else {
if (local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK)
ieee80211_send_nullfunc(local, sdata, 1);
-
- if (!(local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)) {
- conf->flags |= IEEE80211_CONF_PS;
- ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
- }
+ conf->flags |= IEEE80211_CONF_PS;
+ ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
}
}
container_of(work, struct ieee80211_local,
dynamic_ps_enable_work);
struct ieee80211_sub_if_data *sdata = local->ps_sdata;
- struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
/* can only happen when PS was just disabled anyway */
if (!sdata)
if (local->hw.conf.flags & IEEE80211_CONF_PS)
return;
- if ((local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK) &&
- (!(ifmgd->flags & IEEE80211_STA_NULLFUNC_ACKED)))
+ if (local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK)
ieee80211_send_nullfunc(local, sdata, 1);
- if (!(local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS) ||
- (ifmgd->flags & IEEE80211_STA_NULLFUNC_ACKED)) {
- ifmgd->flags &= ~IEEE80211_STA_NULLFUNC_ACKED;
- local->hw.conf.flags |= IEEE80211_CONF_PS;
- ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
- }
+ local->hw.conf.flags |= IEEE80211_CONF_PS;
+ ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
}
void ieee80211_dynamic_ps_timer(unsigned long data)
list_add(&wk->list, &ifmgd->work_list);
ifmgd->flags &= ~IEEE80211_STA_DISABLE_11N;
- ifmgd->flags &= ~IEEE80211_STA_NULLFUNC_ACKED;
for (i = 0; i < req->crypto.n_ciphers_pairwise; i++)
if (req->crypto.ciphers_pairwise[i] == WLAN_CIPHER_SUITE_WEP40 ||
(rx->key || rx->sdata->drop_unencrypted)))
return -EACCES;
if (rx->sta && test_sta_flags(rx->sta, WLAN_STA_MFP)) {
- if (unlikely(!ieee80211_has_protected(fc) &&
- ieee80211_is_unicast_robust_mgmt_frame(rx->skb) &&
+ if (unlikely(ieee80211_is_unicast_robust_mgmt_frame(rx->skb) &&
rx->key))
return -EACCES;
/* BIP does not use Protected field, so need to check MMIE */
ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
{
struct net_device *dev = rx->dev;
- struct ieee80211_local *local = rx->local;
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
__le16 fc = hdr->frame_control;
int err;
dev->stats.rx_packets++;
dev->stats.rx_bytes += rx->skb->len;
- if (ieee80211_is_data(hdr->frame_control) &&
- !is_multicast_ether_addr(hdr->addr1) &&
- local->hw.conf.dynamic_ps_timeout > 0 && local->ps_sdata) {
- mod_timer(&local->dynamic_ps_timer, jiffies +
- msecs_to_jiffies(local->hw.conf.dynamic_ps_timeout));
- }
-
ieee80211_deliver_skb(rx);
return RX_QUEUED;
return RX_CONTINUE;
}
break;
- case WLAN_CATEGORY_MESH_PLINK:
- case WLAN_CATEGORY_MESH_PATH_SEL:
- if (ieee80211_vif_is_mesh(&sdata->vif))
- return ieee80211_mesh_rx_mgmt(sdata, rx->skb);
- break;
default:
/* do not process rejected action frames */
if (mgmt->u.action.category & 0x80)
bool beacon)
{
struct ieee80211_bss *bss;
- int clen, srlen;
+ int clen;
s32 signal = 0;
if (local->hw.flags & IEEE80211_HW_SIGNAL_DBM)
if (bss->dtim_period == 0)
bss->dtim_period = 1;
- /* replace old supported rates if we get new values */
- srlen = 0;
+ bss->supp_rates_len = 0;
if (elems->supp_rates) {
- clen = IEEE80211_MAX_SUPP_RATES;
+ clen = IEEE80211_MAX_SUPP_RATES - bss->supp_rates_len;
if (clen > elems->supp_rates_len)
clen = elems->supp_rates_len;
- memcpy(bss->supp_rates, elems->supp_rates, clen);
- srlen += clen;
+ memcpy(&bss->supp_rates[bss->supp_rates_len], elems->supp_rates,
+ clen);
+ bss->supp_rates_len += clen;
}
if (elems->ext_supp_rates) {
- clen = IEEE80211_MAX_SUPP_RATES - srlen;
+ clen = IEEE80211_MAX_SUPP_RATES - bss->supp_rates_len;
if (clen > elems->ext_supp_rates_len)
clen = elems->ext_supp_rates_len;
- memcpy(bss->supp_rates + srlen, elems->ext_supp_rates, clen);
- srlen += clen;
+ memcpy(&bss->supp_rates[bss->supp_rates_len],
+ elems->ext_supp_rates, clen);
+ bss->supp_rates_len += clen;
}
- if (srlen)
- bss->supp_rates_len = srlen;
bss->wmm_used = elems->wmm_param || elems->wmm_info;
if (local->scan_req)
return -EBUSY;
- if (req != local->int_scan_req &&
- sdata->vif.type == NL80211_IFTYPE_STATION &&
- !list_empty(&ifmgd->work_list)) {
- /* actually wait for the work it's doing to finish/time out */
- set_bit(IEEE80211_STA_REQ_SCAN, &ifmgd->request);
- local->scan_req = req;
- local->scan_sdata = sdata;
- return 0;
- }
-
if (local->ops->hw_scan) {
u8 *ies;
int ielen;
local->scan_req = req;
local->scan_sdata = sdata;
+ if (req != local->int_scan_req &&
+ sdata->vif.type == NL80211_IFTYPE_STATION &&
+ !list_empty(&ifmgd->work_list)) {
+ /* actually wait for the work it's doing to finish/time out */
+ set_bit(IEEE80211_STA_REQ_SCAN, &ifmgd->request);
+ return 0;
+ }
+
if (local->ops->hw_scan)
__set_bit(SCAN_HW_SCANNING, &local->scanning);
else
struct ieee80211_hdr *hdr = (void *)tx->skb->data;
struct ieee80211_supported_band *sband;
struct ieee80211_rate *rate;
- int i;
- u32 len;
+ int i, len;
bool inval = false, rts = false, short_preamble = false;
struct ieee80211_tx_rate_control txrc;
u32 sta_flags;
sband = tx->local->hw.wiphy->bands[tx->channel->band];
- len = min_t(u32, tx->skb->len + FCS_LEN,
+ len = min_t(int, tx->skb->len + FCS_LEN,
tx->local->hw.wiphy->frag_threshold);
/* set up the tx rate control struct we give the RC algo */
void ieee80211_tx_pending(unsigned long data)
{
struct ieee80211_local *local = (struct ieee80211_local *)data;
- struct ieee80211_sub_if_data *sdata;
unsigned long flags;
int i;
bool txok;
if (!txok)
break;
}
-
- if (skb_queue_empty(&local->pending[i]))
- list_for_each_entry_rcu(sdata, &local->interfaces, list)
- netif_tx_wake_queue(
- netdev_get_tx_queue(sdata->dev, i));
}
spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
/* someone still has this queue stopped */
return;
- if (skb_queue_empty(&local->pending[queue])) {
- rcu_read_lock();
- list_for_each_entry_rcu(sdata, &local->interfaces, list)
- netif_tx_wake_queue(netdev_get_tx_queue(sdata->dev, queue));
- rcu_read_unlock();
- } else
+ if (!skb_queue_empty(&local->pending[queue]))
tasklet_schedule(&local->tx_pending_tasklet);
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(sdata, &local->interfaces, list)
+ netif_tx_wake_queue(netdev_get_tx_queue(sdata->dev, queue));
+ rcu_read_unlock();
}
void ieee80211_wake_queue_by_reason(struct ieee80211_hw *hw, int queue,
}
}
- rcu_read_lock();
- if (hw->flags & IEEE80211_HW_AMPDU_AGGREGATION) {
- list_for_each_entry_rcu(sta, &local->sta_list, list) {
- ieee80211_sta_tear_down_BA_sessions(sta);
- }
- }
- rcu_read_unlock();
-
/* add back keys */
list_for_each_entry(sdata, &local->interfaces, list)
if (netif_running(sdata->dev))
hash = ip_vs_conn_hashkey(cp->af, cp->protocol, &cp->caddr, cp->cport);
ct_write_lock(hash);
- spin_lock(&cp->lock);
if (!(cp->flags & IP_VS_CONN_F_HASHED)) {
list_add(&cp->c_list, &ip_vs_conn_tab[hash]);
ret = 0;
}
- spin_unlock(&cp->lock);
ct_write_unlock(hash);
return ret;
hash = ip_vs_conn_hashkey(cp->af, cp->protocol, &cp->caddr, cp->cport);
ct_write_lock(hash);
- spin_lock(&cp->lock);
if (cp->flags & IP_VS_CONN_F_HASHED) {
list_del(&cp->c_list);
} else
ret = 0;
- spin_unlock(&cp->lock);
ct_write_unlock(hash);
return ret;
if (!hash) {
*vmalloced = 1;
printk(KERN_WARNING "nf_conntrack: falling back to vmalloc.\n");
- hash = __vmalloc(sz, GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
- PAGE_KERNEL);
+ hash = __vmalloc(sz, GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);
}
if (hash && nulls)
static void recent_entry_update(struct recent_table *t, struct recent_entry *e)
{
- e->index %= ip_pkt_list_tot;
e->stamps[e->index++] = jiffies;
if (e->index > e->nstamps)
e->nstamps = e->index;
+ e->index %= ip_pkt_list_tot;
list_move_tail(&e->lru_list, &t->lru_list);
}
for (i = 0; i < e->nstamps; i++) {
if (info->seconds && time_after(time, e->stamps[i]))
continue;
- if (!info->hit_count || ++hits >= info->hit_count) {
+ if (++hits >= info->hit_count) {
ret = !ret;
break;
}
struct netlink_sock *nlk = nlk_sk(sk);
int noblock = flags&MSG_DONTWAIT;
size_t copied;
- struct sk_buff *skb, *data_skb;
+ struct sk_buff *skb, *frag __maybe_unused = NULL;
int err;
if (flags&MSG_OOB)
if (skb == NULL)
goto out;
- data_skb = skb;
-
#ifdef CONFIG_COMPAT_NETLINK_MESSAGES
if (unlikely(skb_shinfo(skb)->frag_list)) {
+ bool need_compat = !!(flags & MSG_CMSG_COMPAT);
+
/*
- * If this skb has a frag_list, then here that means that we
- * will have to use the frag_list skb's data for compat tasks
- * and the regular skb's data for normal (non-compat) tasks.
+ * If this skb has a frag_list, then here that means that
+ * we will have to use the frag_list skb for compat tasks
+ * and the regular skb for non-compat tasks.
*
- * If we need to send the compat skb, assign it to the
- * 'data_skb' variable so that it will be used below for data
- * copying. We keep 'skb' for everything else, including
- * freeing both later.
+ * The skb might (and likely will) be cloned, so we can't
+ * just reset frag_list and go on with things -- we need to
+ * keep that. For the compat case that's easy -- simply get
+ * a reference to the compat skb and free the regular one
+ * including the frag. For the non-compat case, we need to
+ * avoid sending the frag to the user -- so assign NULL but
+ * restore it below before freeing the skb.
*/
- if (flags & MSG_CMSG_COMPAT)
- data_skb = skb_shinfo(skb)->frag_list;
+ if (need_compat) {
+ struct sk_buff *compskb = skb_shinfo(skb)->frag_list;
+ skb_get(compskb);
+ kfree_skb(skb);
+ skb = compskb;
+ } else {
+ frag = skb_shinfo(skb)->frag_list;
+ skb_shinfo(skb)->frag_list = NULL;
+ }
}
#endif
msg->msg_namelen = 0;
- copied = data_skb->len;
+ copied = skb->len;
if (len < copied) {
msg->msg_flags |= MSG_TRUNC;
copied = len;
}
- skb_reset_transport_header(data_skb);
- err = skb_copy_datagram_iovec(data_skb, 0, msg->msg_iov, copied);
+ skb_reset_transport_header(skb);
+ err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (msg->msg_name) {
struct sockaddr_nl *addr = (struct sockaddr_nl *)msg->msg_name;
}
siocb->scm->creds = *NETLINK_CREDS(skb);
if (flags & MSG_TRUNC)
- copied = data_skb->len;
+ copied = skb->len;
+
+#ifdef CONFIG_COMPAT_NETLINK_MESSAGES
+ skb_shinfo(skb)->frag_list = frag;
+#endif
skb_free_datagram(sk, skb);
struct phonet_protocol *pnp;
int err;
- if (!net_eq(net, &init_net))
- return -EAFNOSUPPORT;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
struct sockaddr_pn sa;
u16 len;
- if (!net_eq(net, &init_net))
- goto out;
/* check we have at least a full Phonet header */
if (!pskb_pull(skb, sizeof(struct phonethdr)))
goto out;
static int pipe_rcv_status(struct sock *sk, struct sk_buff *skb)
{
struct pep_sock *pn = pep_sk(sk);
- struct pnpipehdr *hdr;
+ struct pnpipehdr *hdr = pnp_hdr(skb);
int wake = 0;
if (!pskb_may_pull(skb, sizeof(*hdr) + 4))
return -EINVAL;
- hdr = pnp_hdr(skb);
if (hdr->data[0] != PN_PEP_TYPE_COMMON) {
LIMIT_NETDEBUG(KERN_DEBUG"Phonet unknown PEP type: %u\n",
(unsigned)hdr->data[0]);
/* Per-namespace Phonet devices handling */
static int phonet_init_net(struct net *net)
{
- struct phonet_net *pnn;
-
- if (!net_eq(net, &init_net))
- return 0;
- pnn = kmalloc(sizeof(*pnn), GFP_KERNEL);
+ struct phonet_net *pnn = kmalloc(sizeof(*pnn), GFP_KERNEL);
if (!pnn)
return -ENOMEM;
static void phonet_exit_net(struct net *net)
{
- struct phonet_net *pnn;
+ struct phonet_net *pnn = net_generic(net, phonet_net_id);
struct net_device *dev;
- if (!net_eq(net, &init_net))
- return;
- pnn = net_generic(net, phonet_net_id);
-
rtnl_lock();
for_each_netdev(net, dev)
phonet_device_destroy(dev);
int err;
u8 pnaddr;
- if (!net_eq(net, &init_net))
- return -EOPNOTSUPP;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
static int getaddr_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
{
- struct net *net = sock_net(skb->sk);
struct phonet_device_list *pndevs;
struct phonet_device *pnd;
int dev_idx = 0, dev_start_idx = cb->args[0];
int addr_idx = 0, addr_start_idx = cb->args[1];
- if (!net_eq(net, &init_net))
- goto skip;
-
- pndevs = phonet_device_list(net);
+ pndevs = phonet_device_list(sock_net(skb->sk));
spin_lock_bh(&pndevs->lock);
list_for_each_entry(pnd, &pndevs->list, list) {
u8 addr;
out:
spin_unlock_bh(&pndevs->lock);
-skip:
cb->args[0] = dev_idx;
cb->args[1] = addr_idx;
unsigned long ret;
void *addr;
- addr = kmap(page);
- if (to_user) {
+ if (to_user)
rds_stats_add(s_copy_to_user, bytes);
- ret = copy_to_user(ptr, addr + offset, bytes);
- } else {
+ else
rds_stats_add(s_copy_from_user, bytes);
- ret = copy_from_user(addr + offset, ptr, bytes);
+
+ addr = kmap_atomic(page, KM_USER0);
+ if (to_user)
+ ret = __copy_to_user_inatomic(ptr, addr + offset, bytes);
+ else
+ ret = __copy_from_user_inatomic(addr + offset, ptr, bytes);
+ kunmap_atomic(addr, KM_USER0);
+
+ if (ret) {
+ addr = kmap(page);
+ if (to_user)
+ ret = copy_to_user(ptr, addr + offset, bytes);
+ else
+ ret = copy_from_user(addr + offset, ptr, bytes);
+ kunmap(page);
+ if (ret)
+ return -EFAULT;
}
- kunmap(page);
- return ret ? -EFAULT : 0;
+ return 0;
}
EXPORT_SYMBOL_GPL(rds_page_copy_user);
goto out;
}
- if (args->nr_local > UIO_MAXIOV) {
+ if (args->nr_local > (u64)UINT_MAX) {
ret = -EMSGSIZE;
goto out;
}
int rds_notify_queue_get(struct rds_sock *rs, struct msghdr *msghdr)
{
struct rds_notifier *notifier;
- struct rds_rdma_notify cmsg = { 0 }; /* fill holes with zero */
+ struct rds_rdma_notify cmsg;
unsigned int count = 0, max_messages = ~0U;
unsigned long flags;
LIST_HEAD(copy);
if (addr_len == sizeof(struct sockaddr_rose) && addr->srose_ndigis > 1)
return -EINVAL;
- if ((unsigned int) addr->srose_ndigis > ROSE_MAX_DIGIS)
+ if (addr->srose_ndigis > ROSE_MAX_DIGIS)
return -EINVAL;
if ((dev = rose_dev_get(&addr->srose_addr)) == NULL) {
if (addr_len == sizeof(struct sockaddr_rose) && addr->srose_ndigis > 1)
return -EINVAL;
- if ((unsigned int) addr->srose_ndigis > ROSE_MAX_DIGIS)
+ if (addr->srose_ndigis > ROSE_MAX_DIGIS)
return -EINVAL;
/* Source + Destination digis should not exceed ROSE_MAX_DIGIS */
static int tcf_gact_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
{
unsigned char *b = skb_tail_pointer(skb);
+ struct tc_gact opt;
struct tcf_gact *gact = a->priv;
- struct tc_gact opt = {
- .index = gact->tcf_index,
- .refcnt = gact->tcf_refcnt - ref,
- .bindcnt = gact->tcf_bindcnt - bind,
- .action = gact->tcf_action,
- };
struct tcf_t t;
+ opt.index = gact->tcf_index;
+ opt.refcnt = gact->tcf_refcnt - ref;
+ opt.bindcnt = gact->tcf_bindcnt - bind;
+ opt.action = gact->tcf_action;
NLA_PUT(skb, TCA_GACT_PARMS, sizeof(opt), &opt);
#ifdef CONFIG_GACT_PROB
if (gact->tcfg_ptype) {
- struct tc_gact_p p_opt = {
- .paction = gact->tcfg_paction,
- .pval = gact->tcfg_pval,
- .ptype = gact->tcfg_ptype,
- };
-
+ struct tc_gact_p p_opt;
+ p_opt.paction = gact->tcfg_paction;
+ p_opt.pval = gact->tcfg_pval;
+ p_opt.ptype = gact->tcfg_ptype;
NLA_PUT(skb, TCA_GACT_PROB, sizeof(p_opt), &p_opt);
}
#endif
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_mirred *m = a->priv;
- struct tc_mirred opt = {
- .index = m->tcf_index,
- .action = m->tcf_action,
- .refcnt = m->tcf_refcnt - ref,
- .bindcnt = m->tcf_bindcnt - bind,
- .eaction = m->tcfm_eaction,
- .ifindex = m->tcfm_ifindex,
- };
+ struct tc_mirred opt;
struct tcf_t t;
+ opt.index = m->tcf_index;
+ opt.action = m->tcf_action;
+ opt.refcnt = m->tcf_refcnt - ref;
+ opt.bindcnt = m->tcf_bindcnt - bind;
+ opt.eaction = m->tcfm_eaction;
+ opt.ifindex = m->tcfm_ifindex;
NLA_PUT(skb, TCA_MIRRED_PARMS, sizeof(opt), &opt);
t.install = jiffies_to_clock_t(jiffies - m->tcf_tm.install);
t.lastuse = jiffies_to_clock_t(jiffies - m->tcf_tm.lastuse);
iph->saddr = new_addr;
inet_proto_csum_replace4(&icmph->checksum, skb, addr, new_addr,
- 0);
+ 1);
break;
}
default:
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_nat *p = a->priv;
- struct tc_nat opt = {
- .old_addr = p->old_addr,
- .new_addr = p->new_addr,
- .mask = p->mask,
- .flags = p->flags,
-
- .index = p->tcf_index,
- .action = p->tcf_action,
- .refcnt = p->tcf_refcnt - ref,
- .bindcnt = p->tcf_bindcnt - bind,
- };
+ struct tc_nat *opt;
struct tcf_t t;
+ int s;
- NLA_PUT(skb, TCA_NAT_PARMS, sizeof(opt), &opt);
+ s = sizeof(*opt);
+
+ /* netlink spinlocks held above us - must use ATOMIC */
+ opt = kzalloc(s, GFP_ATOMIC);
+ if (unlikely(!opt))
+ return -ENOBUFS;
+
+ opt->old_addr = p->old_addr;
+ opt->new_addr = p->new_addr;
+ opt->mask = p->mask;
+ opt->flags = p->flags;
+
+ opt->index = p->tcf_index;
+ opt->action = p->tcf_action;
+ opt->refcnt = p->tcf_refcnt - ref;
+ opt->bindcnt = p->tcf_bindcnt - bind;
+
+ NLA_PUT(skb, TCA_NAT_PARMS, s, opt);
t.install = jiffies_to_clock_t(jiffies - p->tcf_tm.install);
t.lastuse = jiffies_to_clock_t(jiffies - p->tcf_tm.lastuse);
t.expires = jiffies_to_clock_t(p->tcf_tm.expires);
NLA_PUT(skb, TCA_NAT_TM, sizeof(t), &t);
+ kfree(opt);
+
return skb->len;
nla_put_failure:
nlmsg_trim(skb, b);
+ kfree(opt);
return -1;
}
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_police *police = a->priv;
- struct tc_police opt = {
- .index = police->tcf_index,
- .action = police->tcf_action,
- .mtu = police->tcfp_mtu,
- .burst = police->tcfp_burst,
- .refcnt = police->tcf_refcnt - ref,
- .bindcnt = police->tcf_bindcnt - bind,
- };
-
+ struct tc_police opt;
+
+ opt.index = police->tcf_index;
+ opt.action = police->tcf_action;
+ opt.mtu = police->tcfp_mtu;
+ opt.burst = police->tcfp_burst;
+ opt.refcnt = police->tcf_refcnt - ref;
+ opt.bindcnt = police->tcf_bindcnt - bind;
if (police->tcfp_R_tab)
opt.rate = police->tcfp_R_tab->rate;
+ else
+ memset(&opt.rate, 0, sizeof(opt.rate));
if (police->tcfp_P_tab)
opt.peakrate = police->tcfp_P_tab->rate;
+ else
+ memset(&opt.peakrate, 0, sizeof(opt.peakrate));
NLA_PUT(skb, TCA_POLICE_TBF, sizeof(opt), &opt);
if (police->tcfp_result)
NLA_PUT_U32(skb, TCA_POLICE_RESULT, police->tcfp_result);
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_defact *d = a->priv;
- struct tc_defact opt = {
- .index = d->tcf_index,
- .refcnt = d->tcf_refcnt - ref,
- .bindcnt = d->tcf_bindcnt - bind,
- .action = d->tcf_action,
- };
+ struct tc_defact opt;
struct tcf_t t;
+ opt.index = d->tcf_index;
+ opt.refcnt = d->tcf_refcnt - ref;
+ opt.bindcnt = d->tcf_bindcnt - bind;
+ opt.action = d->tcf_action;
NLA_PUT(skb, TCA_DEF_PARMS, sizeof(opt), &opt);
NLA_PUT_STRING(skb, TCA_DEF_DATA, d->tcfd_defdata);
t.install = jiffies_to_clock_t(jiffies - d->tcf_tm.install);
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_skbedit *d = a->priv;
- struct tc_skbedit opt = {
- .index = d->tcf_index,
- .refcnt = d->tcf_refcnt - ref,
- .bindcnt = d->tcf_bindcnt - bind,
- .action = d->tcf_action,
- };
+ struct tc_skbedit opt;
struct tcf_t t;
+ opt.index = d->tcf_index;
+ opt.refcnt = d->tcf_refcnt - ref;
+ opt.bindcnt = d->tcf_bindcnt - bind;
+ opt.action = d->tcf_action;
NLA_PUT(skb, TCA_SKBEDIT_PARMS, sizeof(opt), &opt);
if (d->flags & SKBEDIT_F_PRIORITY)
NLA_PUT(skb, TCA_SKBEDIT_PRIORITY, sizeof(d->priority),
}
EXPORT_SYMBOL(netif_carrier_off);
-/**
- * netif_notify_peers - notify network peers about existence of @dev
- * @dev: network device
- *
- * Generate traffic such that interested network peers are aware of
- * @dev, such as by generating a gratuitous ARP. This may be used when
- * a device wants to inform the rest of the network about some sort of
- * reconfiguration such as a failover event or virtual machine
- * migration.
- */
-void netif_notify_peers(struct net_device *dev)
-{
- rtnl_lock();
- call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, dev);
- rtnl_unlock();
-}
-EXPORT_SYMBOL(netif_notify_peers);
-
/* "NOOP" scheduler: the best scheduler, recommended for all interfaces
under all circumstances. It is difficult to invent anything faster or
cheaper.
SCTP_DEBUG_PRINTK("%s: packet:%p vtag:0x%x\n", __func__,
packet, vtag);
+ sctp_packet_reset(packet);
packet->vtag = vtag;
if (ecn_capable && sctp_packet_empty(packet)) {
/* Set the pressure threshold to be a fraction of global memory that
* is up to 1/2 at 256 MB, decreasing toward zero with the amount of
- * memory, with a floor of 128 pages, and a ceiling that prevents an
- * integer overflow.
+ * memory, with a floor of 128 pages.
* Note this initalizes the data in sctpv6_prot too
* Unabashedly stolen from tcp_init
*/
limit = min(nr_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
limit = (limit * (nr_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
limit = max(limit, 128UL);
- limit = min(limit, INT_MAX * 4UL / 3 / 2);
sysctl_sctp_mem[0] = limit / 4 * 3;
sysctl_sctp_mem[1] = limit;
sysctl_sctp_mem[2] = sysctl_sctp_mem[0] * 2;
cpu_to_be16(sizeof(struct sctp_paramhdr)),
};
-/* A helper to initialize an op error inside a
+/* A helper to initialize to initialize an op error inside a
* provided chunk, as most cause codes will be embedded inside an
* abort chunk.
*/
chunk->subh.err_hdr = sctp_addto_chunk(chunk, sizeof(sctp_errhdr_t), &err);
}
-/* A helper to initialize an op error inside a
- * provided chunk, as most cause codes will be embedded inside an
- * abort chunk. Differs from sctp_init_cause in that it won't oops
- * if there isn't enough space in the op error chunk
- */
-int sctp_init_cause_fixed(struct sctp_chunk *chunk, __be16 cause_code,
- size_t paylen)
-{
- sctp_errhdr_t err;
- __u16 len;
-
- /* Cause code constants are now defined in network order. */
- err.cause = cause_code;
- len = sizeof(sctp_errhdr_t) + paylen;
- err.length = htons(len);
-
- if (skb_tailroom(chunk->skb) < len)
- return -ENOSPC;
- chunk->subh.err_hdr = sctp_addto_chunk_fixed(chunk,
- sizeof(sctp_errhdr_t),
- &err);
- return 0;
-}
/* 3.3.2 Initiation (INIT) (1)
*
* This chunk is used to initiate a SCTP association between two
return retval;
}
-/* Create an Operation Error chunk of a fixed size,
- * specifically, max(asoc->pathmtu, SCTP_DEFAULT_MAXSEGMENT)
- * This is a helper function to allocate an error chunk for
- * for those invalid parameter codes in which we may not want
- * to report all the errors, if the incomming chunk is large
- */
-static inline struct sctp_chunk *sctp_make_op_error_fixed(
- const struct sctp_association *asoc,
- const struct sctp_chunk *chunk)
-{
- size_t size = asoc ? asoc->pathmtu : 0;
-
- if (!size)
- size = SCTP_DEFAULT_MAXSEGMENT;
-
- return sctp_make_op_error_space(asoc, chunk, size);
-}
-
/* Create an Operation Error chunk. */
struct sctp_chunk *sctp_make_op_error(const struct sctp_association *asoc,
const struct sctp_chunk *chunk,
return target;
}
-/* Append bytes to the end of a chunk. Returns NULL if there isn't sufficient
- * space in the chunk
- */
-void *sctp_addto_chunk_fixed(struct sctp_chunk *chunk,
- int len, const void *data)
-{
- if (skb_tailroom(chunk->skb) >= len)
- return sctp_addto_chunk(chunk, len, data);
- else
- return NULL;
-}
-
/* Append bytes from user space to the end of a chunk. Will panic if
* chunk is not big enough.
* Returns a kernel err value.
* returning multiple unknown parameters.
*/
if (NULL == *errp)
- *errp = sctp_make_op_error_fixed(asoc, chunk);
+ *errp = sctp_make_op_error_space(asoc, chunk,
+ ntohs(chunk->chunk_hdr->length));
if (*errp) {
- sctp_init_cause_fixed(*errp, SCTP_ERROR_UNKNOWN_PARAM,
+ sctp_init_cause(*errp, SCTP_ERROR_UNKNOWN_PARAM,
WORD_ROUND(ntohs(param.p->length)));
- sctp_addto_chunk_fixed(*errp,
+ sctp_addto_chunk(*errp,
WORD_ROUND(ntohs(param.p->length)),
param.v);
} else {
struct iovec iov;
int fput_needed;
- if (len > INT_MAX)
- len = INT_MAX;
sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;
int err, err2;
int fput_needed;
- if (size > INT_MAX)
- size = INT_MAX;
sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;
struct rpc_inode *rpci = RPC_I(inode);
struct gss_upcall_msg *gss_msg;
-restart:
spin_lock(&inode->i_lock);
- list_for_each_entry(gss_msg, &rpci->in_downcall, list) {
- if (!list_empty(&gss_msg->msg.list))
- continue;
+ while (!list_empty(&rpci->in_downcall)) {
+ gss_msg = list_entry(rpci->in_downcall.next,
+ struct gss_upcall_msg, list);
gss_msg->msg.errno = -EPIPE;
atomic_inc(&gss_msg->count);
__gss_unhash_msg(gss_msg);
spin_unlock(&inode->i_lock);
gss_release_msg(gss_msg);
- goto restart;
+ spin_lock(&inode->i_lock);
}
spin_unlock(&inode->i_lock);
rqstp->rq_release_snd_buf = priv_release_snd_buf;
return 0;
out_free:
- rqstp->rq_enc_pages_num = i;
- priv_release_snd_buf(rqstp);
+ for (i--; i >= 0; i--) {
+ __free_page(rqstp->rq_enc_pages[i]);
+ }
out:
return -EAGAIN;
}
return;
do {
msg = list_entry(head->next, struct rpc_pipe_msg, list);
- list_del_init(&msg->list);
+ list_del(&msg->list);
msg->errno = err;
destroy_msg(msg);
} while (!list_empty(head));
if (msg != NULL) {
spin_lock(&inode->i_lock);
msg->errno = -EAGAIN;
- list_del_init(&msg->list);
+ list_del(&msg->list);
spin_unlock(&inode->i_lock);
rpci->ops->destroy_msg(msg);
}
if (res < 0 || msg->len == msg->copied) {
filp->private_data = NULL;
spin_lock(&inode->i_lock);
- list_del_init(&msg->list);
+ list_del(&msg->list);
spin_unlock(&inode->i_lock);
rpci->ops->destroy_msg(msg);
}
struct dentry *dentry;
dentry = __rpc_lookup_create(parent, name);
- if (IS_ERR(dentry))
- return dentry;
if (dentry->d_inode == NULL)
return dentry;
dput(dentry);
spin_unlock_bh(&pool->sp_lock);
len = 0;
- if (test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
- dprintk("svc_recv: found XPT_CLOSE\n");
- svc_delete_xprt(xprt);
- } else if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
+ if (test_bit(XPT_LISTENER, &xprt->xpt_flags) &&
+ !test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
struct svc_xprt *newxpt;
newxpt = xprt->xpt_ops->xpo_accept(xprt);
if (newxpt) {
svc_xprt_received(newxpt);
}
svc_xprt_received(xprt);
- } else {
+ } else if (!test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
dprintk("svc: server %p, pool %u, transport %p, inuse=%d\n",
rqstp, pool->sp_id, xprt,
atomic_read(&xprt->xpt_ref.refcount));
dprintk("svc: got len=%d\n", len);
}
+ if (test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
+ dprintk("svc_recv: found XPT_CLOSE\n");
+ svc_delete_xprt(xprt);
+ }
+
/* No data, incomplete (TCP) read, or accept() */
if (len == 0 || len == -EAGAIN) {
rqstp->rq_res.len = 0;
if (test_bit(XPT_TEMP, &xprt->xpt_flags))
serv->sv_tmpcnt--;
- while ((dr = svc_deferred_dequeue(xprt)) != NULL)
+ for (dr = svc_deferred_dequeue(xprt); dr;
+ dr = svc_deferred_dequeue(xprt)) {
+ svc_xprt_put(xprt);
kfree(dr);
+ }
svc_xprt_put(xprt);
spin_unlock_bh(&serv->sv_lock);
return NULL;
}
-static struct group_info *unix_gid_find(uid_t uid, struct svc_rqst *rqstp)
+static int unix_gid_find(uid_t uid, struct group_info **gip,
+ struct svc_rqst *rqstp)
{
- struct unix_gid *ug;
- struct group_info *gi;
- int ret;
-
- ug = unix_gid_lookup(uid);
+ struct unix_gid *ug = unix_gid_lookup(uid);
if (!ug)
- return ERR_PTR(-EAGAIN);
- ret = cache_check(&unix_gid_cache, &ug->h, &rqstp->rq_chandle);
- switch (ret) {
+ return -EAGAIN;
+ switch (cache_check(&unix_gid_cache, &ug->h, &rqstp->rq_chandle)) {
case -ENOENT:
- return ERR_PTR(-ENOENT);
+ *gip = NULL;
+ return 0;
case 0:
- gi = get_group_info(ug->gi);
+ *gip = ug->gi;
+ get_group_info(*gip);
cache_put(&ug->h, &unix_gid_cache);
- return gi;
+ return 0;
default:
- return ERR_PTR(-EAGAIN);
+ return -EAGAIN;
}
}
struct sockaddr_in *sin;
struct sockaddr_in6 *sin6, sin6_storage;
struct ip_map *ipm;
- struct group_info *gi;
- struct svc_cred *cred = &rqstp->rq_cred;
switch (rqstp->rq_addr.ss_family) {
case AF_INET:
ip_map_cached_put(rqstp, ipm);
break;
}
-
- gi = unix_gid_find(cred->cr_uid, rqstp);
- switch (PTR_ERR(gi)) {
- case -EAGAIN:
- return SVC_DROP;
- case -ENOENT:
- break;
- default:
- put_group_info(cred->cr_group_info);
- cred->cr_group_info = gi;
- }
return SVC_OK;
}
slen = svc_getnl(argv); /* gids length */
if (slen > 16 || (len -= (slen + 2)*4) < 0)
goto badcred;
- cred->cr_group_info = groups_alloc(slen);
- if (cred->cr_group_info == NULL)
+ if (unix_gid_find(cred->cr_uid, &cred->cr_group_info, rqstp)
+ == -EAGAIN)
return SVC_DROP;
- for (i = 0; i < slen; i++)
- GROUP_AT(cred->cr_group_info, i) = svc_getnl(argv);
+ if (cred->cr_group_info == NULL) {
+ cred->cr_group_info = groups_alloc(slen);
+ if (cred->cr_group_info == NULL)
+ return SVC_DROP;
+ for (i = 0; i < slen; i++)
+ GROUP_AT(cred->cr_group_info, i) = svc_getnl(argv);
+ } else {
+ for (i = 0; i < slen ; i++)
+ svc_getnl(argv);
+ }
if (svc_getu32(argv) != htonl(RPC_AUTH_NULL) || svc_getu32(argv) != 0) {
*authp = rpc_autherr_badverf;
return SVC_DENIED;
return len;
err_delete:
set_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags);
- svc_xprt_received(&svsk->sk_xprt);
err_again:
return -EAGAIN;
}
* State of TCP reply receive
*/
__be32 tcp_fraghdr,
- tcp_xid,
- tcp_calldir;
+ tcp_xid;
u32 tcp_offset,
tcp_reclen;
{
size_t len, used;
u32 offset;
- char *p;
+ __be32 calldir;
/*
* We want transport->tcp_offset to be 8 at the end of this routine
* transport->tcp_offset is 4 (after having already read the xid).
*/
offset = transport->tcp_offset - sizeof(transport->tcp_xid);
- len = sizeof(transport->tcp_calldir) - offset;
+ len = sizeof(calldir) - offset;
dprintk("RPC: reading CALL/REPLY flag (%Zu bytes)\n", len);
- p = ((char *) &transport->tcp_calldir) + offset;
- used = xdr_skb_read_bits(desc, p, len);
+ used = xdr_skb_read_bits(desc, &calldir, len);
transport->tcp_offset += used;
if (used != len)
return;
transport->tcp_flags &= ~TCP_RCV_READ_CALLDIR;
+ transport->tcp_flags |= TCP_RCV_COPY_CALLDIR;
+ transport->tcp_flags |= TCP_RCV_COPY_DATA;
/*
* We don't yet have the XDR buffer, so we will write the calldir
* out after we get the buffer from the 'struct rpc_rqst'
*/
- switch (ntohl(transport->tcp_calldir)) {
- case RPC_REPLY:
- transport->tcp_flags |= TCP_RCV_COPY_CALLDIR;
- transport->tcp_flags |= TCP_RCV_COPY_DATA;
+ if (ntohl(calldir) == RPC_REPLY)
transport->tcp_flags |= TCP_RPC_REPLY;
- break;
- case RPC_CALL:
- transport->tcp_flags |= TCP_RCV_COPY_CALLDIR;
- transport->tcp_flags |= TCP_RCV_COPY_DATA;
+ else
transport->tcp_flags &= ~TCP_RPC_REPLY;
- break;
- default:
- dprintk("RPC: invalid request message type\n");
- xprt_force_disconnect(&transport->xprt);
- }
+ dprintk("RPC: reading %s CALL/REPLY flag %08x\n",
+ (transport->tcp_flags & TCP_RPC_REPLY) ?
+ "reply for" : "request with", calldir);
xs_tcp_check_fraghdr(transport);
}
/*
* Save the RPC direction in the XDR buffer
*/
+ __be32 calldir = transport->tcp_flags & TCP_RPC_REPLY ?
+ htonl(RPC_REPLY) : 0;
+
memcpy(rcvbuf->head[0].iov_base + transport->tcp_copied,
- &transport->tcp_calldir,
- sizeof(transport->tcp_calldir));
- transport->tcp_copied += sizeof(transport->tcp_calldir);
+ &calldir, sizeof(calldir));
+ transport->tcp_copied += sizeof(calldir);
transport->tcp_flags &= ~TCP_RCV_COPY_CALLDIR;
}
case -EALREADY:
xprt_clear_connecting(xprt);
return;
- case -EINVAL:
- /* Happens, for instance, if the user specified a link
- * local IPv6 address without a scope-id.
- */
- goto out;
}
out_eagain:
status = -EAGAIN;
#define MAX_ADDR_STR 32
-static struct media media_list[MAX_MEDIA];
+static struct media *media_list = NULL;
static u32 media_count = 0;
-struct bearer tipc_bearers[MAX_BEARERS];
+struct bearer *tipc_bearers = NULL;
/**
* media_name_valid - validate media name
int res = -EINVAL;
write_lock_bh(&tipc_net_lock);
-
- if (tipc_mode != TIPC_NET_MODE) {
- warn("Media <%s> rejected, not in networked mode yet\n", name);
+ if (!media_list)
goto exit;
- }
+
if (!media_name_valid(name)) {
warn("Media <%s> rejected, illegal name\n", name);
goto exit;
+int tipc_bearer_init(void)
+{
+ int res;
+
+ write_lock_bh(&tipc_net_lock);
+ tipc_bearers = kcalloc(MAX_BEARERS, sizeof(struct bearer), GFP_ATOMIC);
+ media_list = kcalloc(MAX_MEDIA, sizeof(struct media), GFP_ATOMIC);
+ if (tipc_bearers && media_list) {
+ res = 0;
+ } else {
+ kfree(tipc_bearers);
+ kfree(media_list);
+ tipc_bearers = NULL;
+ media_list = NULL;
+ res = -ENOMEM;
+ }
+ write_unlock_bh(&tipc_net_lock);
+ return res;
+}
+
void tipc_bearer_stop(void)
{
u32 i;
+ if (!tipc_bearers)
+ return;
+
for (i = 0; i < MAX_BEARERS; i++) {
if (tipc_bearers[i].active)
tipc_bearers[i].publ.blocked = 1;
if (tipc_bearers[i].active)
bearer_disable(tipc_bearers[i].publ.name);
}
+ kfree(tipc_bearers);
+ kfree(media_list);
+ tipc_bearers = NULL;
+ media_list = NULL;
media_count = 0;
}
struct link;
-extern struct bearer tipc_bearers[];
+extern struct bearer *tipc_bearers;
void tipc_media_addr_printf(struct print_buf *pb, struct tipc_media_addr *a);
struct sk_buff *tipc_media_get_names(void);
*/
DEFINE_RWLOCK(tipc_net_lock);
-struct _zone *tipc_zones[256] = { NULL, };
-struct network tipc_net = { tipc_zones };
+struct network tipc_net = { NULL };
struct tipc_node *tipc_net_select_remote_node(u32 addr, u32 ref)
{
}
}
+static int net_init(void)
+{
+ memset(&tipc_net, 0, sizeof(tipc_net));
+ tipc_net.zones = kcalloc(tipc_max_zones + 1, sizeof(struct _zone *), GFP_ATOMIC);
+ if (!tipc_net.zones) {
+ return -ENOMEM;
+ }
+ return 0;
+}
+
static void net_stop(void)
{
u32 z_num;
- for (z_num = 1; z_num <= tipc_max_zones; z_num++)
+ if (!tipc_net.zones)
+ return;
+
+ for (z_num = 1; z_num <= tipc_max_zones; z_num++) {
tipc_zone_delete(tipc_net.zones[z_num]);
+ }
+ kfree(tipc_net.zones);
+ tipc_net.zones = NULL;
}
static void net_route_named_msg(struct sk_buff *buf)
tipc_named_reinit();
tipc_port_reinit();
- if ((res = tipc_cltr_init()) ||
+ if ((res = tipc_bearer_init()) ||
+ (res = net_init()) ||
+ (res = tipc_cltr_init()) ||
(res = tipc_bclink_init())) {
return res;
}
static u32 ordernum = 1;
struct unix_address *addr;
int err;
- unsigned int retries = 0;
mutex_lock(&u->readlock);
if (__unix_find_socket_byname(net, addr->name, addr->len, sock->type,
addr->hash)) {
spin_unlock(&unix_table_lock);
- /*
- * __unix_find_socket_byname() may take long time if many names
- * are already in use.
- */
- cond_resched();
- /* Give up if all names seems to be in use. */
- if (retries++ == 0xFFFFF) {
- err = -ENOSPC;
- kfree(addr);
- goto out;
- }
+ /* Sanity yield. It is unusual case, but yet... */
+ if (!(ordernum&0xFF))
+ yield();
goto retry;
}
addr->hash ^= sk->sk_type;
struct wireless_dev *for_wdev,
int freq, enum nl80211_channel_type channel_type);
-u16 cfg80211_calculate_bitrate(struct rate_info *rate);
-
#ifdef CONFIG_CFG80211_DEVELOPER_WARNINGS
#define CFG80211_DEV_WARN_ON(cond) WARN_ON(cond)
#else
}
}
- if (done) {
- nl80211_send_rx_auth(rdev, dev, buf, len, GFP_KERNEL);
- cfg80211_sme_rx_auth(dev, buf, len);
- }
+ WARN_ON(!done);
+
+ nl80211_send_rx_auth(rdev, dev, buf, len, GFP_KERNEL);
+ cfg80211_sme_rx_auth(dev, buf, len);
wdev_unlock(wdev);
}
return 0;
}
+static u16 nl80211_calculate_bitrate(struct rate_info *rate)
+{
+ int modulation, streams, bitrate;
+
+ if (!(rate->flags & RATE_INFO_FLAGS_MCS))
+ return rate->legacy;
+
+ /* the formula below does only work for MCS values smaller than 32 */
+ if (rate->mcs >= 32)
+ return 0;
+
+ modulation = rate->mcs & 7;
+ streams = (rate->mcs >> 3) + 1;
+
+ bitrate = (rate->flags & RATE_INFO_FLAGS_40_MHZ_WIDTH) ?
+ 13500000 : 6500000;
+
+ if (modulation < 4)
+ bitrate *= (modulation + 1);
+ else if (modulation == 4)
+ bitrate *= (modulation + 2);
+ else
+ bitrate *= (modulation + 3);
+
+ bitrate *= streams;
+
+ if (rate->flags & RATE_INFO_FLAGS_SHORT_GI)
+ bitrate = (bitrate / 9) * 10;
+
+ /* do NOT round down here */
+ return (bitrate + 50000) / 100000;
+}
+
static int nl80211_send_station(struct sk_buff *msg, u32 pid, u32 seq,
int flags, struct net_device *dev,
u8 *mac_addr, struct station_info *sinfo)
if (!txrate)
goto nla_put_failure;
- /* cfg80211_calculate_bitrate will return 0 for mcs >= 32 */
- bitrate = cfg80211_calculate_bitrate(&sinfo->txrate);
+ /* nl80211_calculate_bitrate will return 0 for mcs >= 32 */
+ bitrate = nl80211_calculate_bitrate(&sinfo->txrate);
if (bitrate > 0)
NLA_PUT_U16(msg, NL80211_RATE_INFO_BITRATE, bitrate);
{
struct cfg80211_registered_device *dev = wiphy_to_dev(wiphy);
struct cfg80211_internal_bss *bss, *res = NULL;
- unsigned long now = jiffies;
spin_lock_bh(&dev->bss_lock);
continue;
if (channel && bss->pub.channel != channel)
continue;
- /* Don't get expired BSS structs */
- if (time_after(now, bss->ts + IEEE80211_SCAN_RESULT_EXPIRE) &&
- !atomic_read(&bss->hold))
- continue;
if (is_bss(&bss->pub, bssid, ssid, ssid_len)) {
res = bss;
kref_get(&res->ref);
return err;
}
-
-u16 cfg80211_calculate_bitrate(struct rate_info *rate)
-{
- int modulation, streams, bitrate;
-
- if (!(rate->flags & RATE_INFO_FLAGS_MCS))
- return rate->legacy;
-
- /* the formula below does only work for MCS values smaller than 32 */
- if (rate->mcs >= 32)
- return 0;
-
- modulation = rate->mcs & 7;
- streams = (rate->mcs >> 3) + 1;
-
- bitrate = (rate->flags & RATE_INFO_FLAGS_40_MHZ_WIDTH) ?
- 13500000 : 6500000;
-
- if (modulation < 4)
- bitrate *= (modulation + 1);
- else if (modulation == 4)
- bitrate *= (modulation + 2);
- else
- bitrate *= (modulation + 3);
-
- bitrate *= streams;
-
- if (rate->flags & RATE_INFO_FLAGS_SHORT_GI)
- bitrate = (bitrate / 9) * 10;
-
- /* do NOT round down here */
- return (bitrate + 50000) / 100000;
-}
if (!(sinfo.filled & STATION_INFO_TX_BITRATE))
return -EOPNOTSUPP;
- rate->value = 100000 * cfg80211_calculate_bitrate(&sinfo.txrate);
+ rate->value = 0;
+
+ if (!(sinfo.txrate.flags & RATE_INFO_FLAGS_MCS))
+ rate->value = 100000 * sinfo.txrate.legacy;
return 0;
}
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
- data->flags = 0;
- data->length = 0;
-
switch (wdev->iftype) {
case NL80211_IFTYPE_ADHOC:
return cfg80211_ibss_wext_giwessid(dev, info, data, ssid);
}
}
- if (IW_IS_GET(cmd) && !(descr->flags & IW_DESCR_FLAG_NOMAX)) {
- /*
- * If this is a GET, but not NOMAX, it means that the extra
- * data is not bounded by userspace, but by max_tokens. Thus
- * set the length to max_tokens. This matches the extra data
- * allocation.
- * The driver should fill it with the number of tokens it
- * provided, and it may check iwp->length rather than having
- * knowledge of max_tokens. If the driver doesn't change the
- * iwp->length, this ioctl just copies back max_token tokens
- * filled with zeroes. Hopefully the driver isn't claiming
- * them to be valid data.
- */
- iwp->length = descr->max_tokens;
- }
-
err = handler(dev, info, (union iwreq_data *) iwp, extra);
iwp->length += essid_compat;
} else if (!iwp->pointer)
return -EFAULT;
- extra = kzalloc(extra_size, GFP_KERNEL);
+ extra = kmalloc(extra_size, GFP_KERNEL);
if (!extra)
return -ENOMEM;
};
#endif
-
-int x25_parse_address_block(struct sk_buff *skb,
- struct x25_address *called_addr,
- struct x25_address *calling_addr)
-{
- unsigned char len;
- int needed;
- int rc;
-
- if (skb->len < 1) {
- /* packet has no address block */
- rc = 0;
- goto empty;
- }
-
- len = *skb->data;
- needed = 1 + (len >> 4) + (len & 0x0f);
-
- if (skb->len < needed) {
- /* packet is too short to hold the addresses it claims
- to hold */
- rc = -1;
- goto empty;
- }
-
- return x25_addr_ntoa(skb->data, called_addr, calling_addr);
-
-empty:
- *called_addr->x25_addr = 0;
- *calling_addr->x25_addr = 0;
-
- return rc;
-}
-
-
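For reference, a minimal standalone sketch of the length check performed by the helper removed above (illustrative only, not part of the patch): the two nibbles of the leading length octet give the sizes of the two address fields, so the block claims 1 + high-nibble + low-nibble octets.

/* Illustrative sketch of the X.25 address-block length arithmetic. */
#include <stdio.h>

static int address_block_octets(unsigned char len_octet)
{
	return 1 + (len_octet >> 4) + (len_octet & 0x0f);
}

int main(void)
{
	/* e.g. a length octet of 0x35 claims 1 + 3 + 5 = 9 octets */
	printf("0x35 -> %d octets\n", address_block_octets(0x35));
	return 0;
}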
int x25_addr_ntoa(unsigned char *p, struct x25_address *called_addr,
struct x25_address *calling_addr)
{
/*
* Extract the X.25 addresses and convert them to ASCII strings,
* and remove them.
- *
- * Address block is mandatory in call request packets
*/
- addr_len = x25_parse_address_block(skb, &source_addr, &dest_addr);
- if (addr_len <= 0)
- goto out_clear_request;
+ addr_len = x25_addr_ntoa(skb->data, &source_addr, &dest_addr);
skb_pull(skb, addr_len);
/*
* Get the length of the facilities, skip past them for the moment
* get the call user data because this is needed to determine
* the correct listener
- *
- * Facilities length is mandatory in call request packets
*/
- if (skb->len < 1)
- goto out_clear_request;
len = skb->data[0] + 1;
- if (skb->len < len)
- goto out_clear_request;
skb_pull(skb,len);
/*
struct x25_dte_facilities *dte_facs, unsigned long *vc_fac_mask)
{
unsigned char *p = skb->data;
- unsigned int len;
+ unsigned int len = *p++;
*vc_fac_mask = 0;
memset(dte_facs->called_ae, '\0', sizeof(dte_facs->called_ae));
memset(dte_facs->calling_ae, '\0', sizeof(dte_facs->calling_ae));
- if (skb->len < 1)
- return 0;
-
- len = *p++;
-
- if (len >= skb->len)
- return -1;
-
while (len > 0) {
switch (*p & X25_FAC_CLASS_MASK) {
case X25_FAC_CLASS_A:
- if (len < 2)
- return 0;
switch (*p) {
case X25_FAC_REVERSE:
if((p[1] & 0x81) == 0x81) {
len -= 2;
break;
case X25_FAC_CLASS_B:
- if (len < 3)
- return 0;
switch (*p) {
case X25_FAC_PACKET_SIZE:
facilities->pacsize_in = p[1];
len -= 3;
break;
case X25_FAC_CLASS_C:
- if (len < 4)
- return 0;
printk(KERN_DEBUG "X.25: unknown facility %02X, "
"values %02X, %02X, %02X\n",
p[0], p[1], p[2], p[3]);
len -= 4;
break;
case X25_FAC_CLASS_D:
- if (len < p[1] + 2)
- return 0;
switch (*p) {
case X25_FAC_CALLING_AE:
- if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ if (p[1] > X25_MAX_DTE_FACIL_LEN)
+ break;
dte_facs->calling_len = p[2];
memcpy(dte_facs->calling_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLING_AE;
break;
case X25_FAC_CALLED_AE:
- if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ if (p[1] > X25_MAX_DTE_FACIL_LEN)
+ break;
dte_facs->called_len = p[2];
memcpy(dte_facs->called_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLED_AE;
break;
default:
printk(KERN_DEBUG "X.25: unknown facility %02X,"
- "length %d\n", p[0], p[1]);
+ "length %d, values %02X, %02X, "
+ "%02X, %02X\n",
+ p[0], p[1], p[2], p[3], p[4], p[5]);
break;
}
len -= p[1] + 2;
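As background for the facility loop above, a minimal sketch of how the class bits of a facility code determine how many octets the facility occupies; the mask and class encodings below are assumed values for illustration, not taken from the kernel headers.

/* Illustrative sketch: facility class bits -> octets consumed. */
#include <stdio.h>

#define FAC_CLASS_MASK 0xC0	/* assumed encoding for this example */
#define FAC_CLASS_A    0x00	/* code + 1 parameter octet  -> 2 octets */
#define FAC_CLASS_B    0x40	/* code + 2 parameter octets -> 3 octets */
#define FAC_CLASS_C    0x80	/* code + 3 parameter octets -> 4 octets */
#define FAC_CLASS_D    0xC0	/* code + length octet + that many octets */

static unsigned int facility_octets(const unsigned char *p)
{
	switch (p[0] & FAC_CLASS_MASK) {
	case FAC_CLASS_A: return 2;
	case FAC_CLASS_B: return 3;
	case FAC_CLASS_C: return 4;
	default:          return p[1] + 2;	/* class D: variable length */
	}
}

int main(void)
{
	unsigned char class_a[] = { 0x01, 0x81 };		/* code + one parameter */
	unsigned char class_d[] = { 0xC9, 0x03, 0x11, 0x22, 0x33 };

	printf("class A facility: %u octets\n", facility_octets(class_a));
	printf("class D facility: %u octets\n", facility_octets(class_d));
	return 0;
}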
memcpy(new, ours, sizeof(*new));
len = x25_parse_facilities(skb, &theirs, dte, &x25->vc_facil_mask);
- if (len < 0)
- return len;
/*
* They want reverse charging, we won't accept it.
static int x25_state1_machine(struct sock *sk, struct sk_buff *skb, int frametype)
{
struct x25_address source_addr, dest_addr;
- int len;
switch (frametype) {
case X25_CALL_ACCEPTED: {
* Parse the data in the frame.
*/
skb_pull(skb, X25_STD_MIN_LEN);
-
- len = x25_parse_address_block(skb, &source_addr,
- &dest_addr);
- if (len > 0)
- skb_pull(skb, len);
-
- len = x25_parse_facilities(skb, &x25->facilities,
+ skb_pull(skb, x25_addr_ntoa(skb->data, &source_addr, &dest_addr));
+ skb_pull(skb,
+ x25_parse_facilities(skb, &x25->facilities,
&x25->dte_facilities,
- &x25->vc_facil_mask);
- if (len > 0)
- skb_pull(skb, len);
- else
- return -1;
+ &x25->vc_facil_mask));
/*
* Copy any Call User Data.
*/
Makefile:;
-\$(all): all
+\$(all) %/: all
@:
-%/: all
- @:
EOF
int section = sechdr->sh_info;
return (void *)elf->hdr + sechdrs[section].sh_offset +
- r->r_offset - sechdrs[section].sh_addr;
+ (r->r_offset - sechdrs[section].sh_addr);
}
static int addend_386_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
mutex_lock(&parent->d_inode->i_mutex);
*dentry = lookup_one_len(name, parent, strlen(name));
- if (!IS_ERR(*dentry)) {
+ if (!IS_ERR(dentry)) {
if ((mode & S_IFMT) == S_IFDIR)
error = mkdir(parent->d_inode, *dentry, mode);
else
error = create(parent->d_inode, *dentry, mode);
} else
- error = PTR_ERR(*dentry);
+ error = PTR_ERR(dentry);
mutex_unlock(&parent->d_inode->i_mutex);
return error;
keyring_r = NULL;
me = current;
- rcu_read_lock();
write_lock_irq(&tasklist_lock);
parent = me->real_parent;
goto not_permitted;
/* the keyrings must have the same UID */
- if ((pcred->tgcred->session_keyring &&
- pcred->tgcred->session_keyring->uid != mycred->euid) ||
+ if (pcred ->tgcred->session_keyring->uid != mycred->euid ||
mycred->tgcred->session_keyring->uid != mycred->euid)
goto not_permitted;
set_ti_thread_flag(task_thread_info(parent), TIF_NOTIFY_RESUME);
write_unlock_irq(&tasklist_lock);
- rcu_read_unlock();
if (oldcred)
put_cred(oldcred);
return 0;
ret = 0;
not_permitted:
write_unlock_irq(&tasklist_lock);
- rcu_read_unlock();
put_cred(cred);
return ret;
struct key *keyring;
int bucket;
+ keyring = ERR_PTR(-EINVAL);
if (!name)
- return ERR_PTR(-EINVAL);
+ goto error;
bucket = keyring_hash(name);
KEY_SEARCH) < 0)
continue;
- /* we've got a match but we might end up racing with
- * key_cleanup() if the keyring is currently 'dead'
- * (ie. it has a zero usage count) */
- if (!atomic_inc_not_zero(&keyring->usage))
- continue;
- goto out;
+ /* we've got a match */
+ atomic_inc(&keyring->usage);
+ read_unlock(&keyring_name_lock);
+ goto error;
}
}
- keyring = ERR_PTR(-ENOKEY);
-out:
read_unlock(&keyring_name_lock);
+ keyring = ERR_PTR(-ENOKEY);
+
+ error:
return keyring;
} /* end find_keyring_by_name() */
ret = install_thread_keyring();
if (ret < 0) {
- key_ref = ERR_PTR(ret);
+ key = ERR_PTR(ret);
goto error;
}
goto reget_creds;
ret = install_process_keyring();
if (ret < 0) {
- key_ref = ERR_PTR(ret);
+ key = ERR_PTR(ret);
goto error;
}
goto reget_creds;
case KEY_SPEC_GROUP_KEYRING:
/* group keyrings are not yet supported */
- key_ref = ERR_PTR(-EINVAL);
+ key = ERR_PTR(-EINVAL);
goto error;
case KEY_SPEC_REQKEY_AUTH_KEY:
key_already_present:
mutex_unlock(&key_construction_mutex);
- if (dest_keyring) {
- __key_link(dest_keyring, key_ref_to_ptr(key_ref));
+ if (dest_keyring)
up_write(&dest_keyring->sem);
- }
mutex_unlock(&user->cons_lock);
key_put(key);
*_key = key = key_ref_to_ptr(key_ref);
if (!IS_ERR(key_ref)) {
key = key_ref_to_ptr(key_ref);
- if (dest_keyring) {
- construct_get_dest_keyring(&dest_keyring);
- key_link(dest_keyring, key);
- key_put(dest_keyring);
- }
} else if (PTR_ERR(key_ref) != -EAGAIN) {
key = ERR_CAST(key_ref);
} else {
{
int ret;
- if (write && !capable(CAP_SYS_RAWIO))
- return -EPERM;
-
ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
update_mmap_min_addr();
cmap_idx = delta / NETLBL_CATMAP_MAPSIZE;
cmap_sft = delta % NETLBL_CATMAP_MAPSIZE;
c_iter->bitmap[cmap_idx]
- |= e_iter->maps[i] << cmap_sft;
+ |= e_iter->maps[cmap_idx] << cmap_sft;
}
e_iter = e_iter->next;
}
/* max number of user-defined controls */
#define MAX_USER_CONTROLS 32
-#define MAX_CONTROL_COUNT 1028
struct snd_kctl_ioctl {
struct list_head list; /* list of all ioctls */
if (snd_BUG_ON(!control || !control->count))
return NULL;
-
- if (control->count > MAX_CONTROL_COUNT)
- return NULL;
-
kctl = kzalloc(sizeof(*kctl) + sizeof(struct snd_kcontrol_volatile) * control->count, GFP_KERNEL);
if (kctl == NULL) {
snd_printk(KERN_ERR "Cannot allocate control instance\n");
if (!params->info)
params->info = hw->info & ~SNDRV_PCM_INFO_FIFO_IN_FRAMES;
if (!params->fifo_size) {
- m = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
- i = hw_param_interval(params, SNDRV_PCM_HW_PARAM_CHANNELS);
- if (snd_mask_min(m) == snd_mask_max(m) &&
- snd_interval_min(i) == snd_interval_max(i)) {
+ if (snd_mask_min(&params->masks[SNDRV_PCM_HW_PARAM_FORMAT]) ==
+ snd_mask_max(&params->masks[SNDRV_PCM_HW_PARAM_FORMAT]) &&
+ snd_mask_min(&params->masks[SNDRV_PCM_HW_PARAM_CHANNELS]) ==
+ snd_mask_max(&params->masks[SNDRV_PCM_HW_PARAM_CHANNELS])) {
changed = substream->ops->ioctl(substream,
SNDRV_PCM_IOCTL1_FIFO_SIZE, params);
if (changed < 0)
{
if (substream->runtime->trigger_master != substream)
return 0;
- /* some drivers might use hw_ptr to recover from the pause -
- update the hw_ptr now */
- if (push)
- snd_pcm_update_hw_ptr(substream);
/* The jiffies check in snd_pcm_update_hw_ptr*() is done by
* a delta between the current jiffies, this gives a large enough
* delta, effectively to skip the check once.
{
struct snd_rawmidi_file *rfile;
struct snd_rawmidi *rmidi;
- struct module *module;
rfile = file->private_data;
rmidi = rfile->rmidi;
rawmidi_release_priv(rfile);
kfree(rfile);
- module = rmidi->card->module;
snd_card_file_remove(rmidi->card, file);
- module_put(module);
+ module_put(rmidi->card->module);
return 0;
}
return 0;
_error:
+ snd_seq_oss_writeq_delete(dp->writeq);
+ snd_seq_oss_readq_delete(dp->readq);
snd_seq_oss_synth_cleanup(dp);
snd_seq_oss_midi_cleanup(dp);
- delete_seq_queue(dp->queue);
delete_port(dp);
+ delete_seq_queue(dp->queue);
+ kfree(dp);
return rc;
}
static int
delete_port(struct seq_oss_devinfo *dp)
{
- if (dp->port < 0) {
- kfree(dp);
+ if (dp->port < 0)
return 0;
- }
debug_printk(("delete_port %i\n", dp->port));
return snd_seq_event_port_detach(dp->cseq, dp->port);
0x10140523, /* Thinkpad R40 */
0x10140534, /* Thinkpad X31 */
0x10140537, /* Thinkpad T41p */
- 0x1014053e, /* Thinkpad R40e */
0x10140554, /* Thinkpad T42p/R50p */
0x10140567, /* Thinkpad T43p 2668-G7U */
0x10140581, /* Thinkpad X41-2527 */
0x10280160, /* Dell Dimension 2400 */
0x104380b0, /* Asus A7V8X-MX */
0x11790241, /* Toshiba Satellite A-15 S127 */
- 0x1179ff10, /* Toshiba P500 */
0x144dc01a, /* Samsung NP-X20C004/SEG */
0 /* end */
};
struct snd_pcm_substream *substream)
{
size_t ptr;
- unsigned int reg, rem, tries;
-
+ unsigned int reg;
if (!rec->running)
return 0;
#if 1 // this seems better..
reg = rec->ch ? CM_REG_CH1_FRAME2 : CM_REG_CH0_FRAME2;
- for (tries = 0; tries < 3; tries++) {
- rem = snd_cmipci_read_w(cm, reg);
- if (rem < rec->dma_size)
- goto ok;
- }
- printk(KERN_ERR "cmipci: invalid PCM pointer: %#x\n", rem);
- return SNDRV_PCM_POS_XRUN;
-ok:
- ptr = (rec->dma_size - (rem + 1)) >> rec->shift;
+ ptr = rec->dma_size - (snd_cmipci_read_w(cm, reg) + 1);
+ ptr >>= rec->shift;
#else
reg = rec->ch ? CM_REG_CH1_FRAME1 : CM_REG_CH0_FRAME1;
ptr = snd_cmipci_read(cm, reg) - rec->offset;
/* The hardware doesn't tell us which substream caused the irq,
thus we have to check all running substreams. */
for (ss = 0; ss < DSP_MAXPIPES; ss++) {
- substream = chip->substream[ss];
- if (substream && ((struct audiopipe *)substream->runtime->
- private_data)->state == PIPE_STATE_STARTED) {
+ if ((substream = chip->substream[ss])) {
period = pcm_pointer(substream) /
substream->runtime->period_size;
if (period != chip->last_period[ss]) {
static int max_buffer_size[SNDRV_CARDS] = {[0 ... (SNDRV_CARDS - 1)] = 128};
static int enable_ir[SNDRV_CARDS];
static uint subsystem[SNDRV_CARDS]; /* Force card subsystem model */
-static uint delay_pcm_irq[SNDRV_CARDS] = {[0 ... (SNDRV_CARDS - 1)] = 2};
module_param_array(index, int, NULL, 0444);
MODULE_PARM_DESC(index, "Index value for the EMU10K1 soundcard.");
MODULE_PARM_DESC(enable_ir, "Enable IR.");
module_param_array(subsystem, uint, NULL, 0444);
MODULE_PARM_DESC(subsystem, "Force card subsystem model.");
-module_param_array(delay_pcm_irq, uint, NULL, 0444);
-MODULE_PARM_DESC(delay_pcm_irq, "Delay PCM interrupt by specified number of samples (default 0).");
/*
* Class 0401: 1102:0008 (rev 00) Subsystem: 1102:1001 -> Audigy2 Value Model:SB0400
*/
&emu)) < 0)
goto error;
card->private_data = emu;
- emu->delay_pcm_irq = delay_pcm_irq[dev] & 0x1f;
if ((err = snd_emu10k1_pcm(emu, 0, NULL)) < 0)
goto error;
if ((err = snd_emu10k1_pcm_mic(emu, 1, NULL)) < 0)
evoice->epcm->ccca_start_addr = start_addr + ccis;
if (extra) {
start_addr += ccis;
- end_addr += ccis + emu->delay_pcm_irq;
+ end_addr += ccis;
}
if (stereo && !extra) {
snd_emu10k1_ptr_write(emu, CPF, voice, CPF_STEREO_MASK);
/* Assumption that PT is already 0 so no harm overwriting */
snd_emu10k1_ptr_write(emu, PTRX, voice, (send_amount[0] << 8) | send_amount[1]);
snd_emu10k1_ptr_write(emu, DSL, voice, end_addr | (send_amount[3] << 24));
- snd_emu10k1_ptr_write(emu, PSST, voice,
- (start_addr + (extra ? emu->delay_pcm_irq : 0)) |
- (send_amount[2] << 24));
+ snd_emu10k1_ptr_write(emu, PSST, voice, start_addr | (send_amount[2] << 24));
if (emu->card_capabilities->emu_model)
pitch_target = PITCH_48000; /* Disable interpolators on emu1010 card */
else
snd_emu10k1_ptr_write(emu, IP, voice, 0);
}
-static inline void snd_emu10k1_playback_mangle_extra(struct snd_emu10k1 *emu,
- struct snd_emu10k1_pcm *epcm,
- struct snd_pcm_substream *substream,
- struct snd_pcm_runtime *runtime)
-{
- unsigned int ptr, period_pos;
-
- /* try to sychronize the current position for the interrupt
- source voice */
- period_pos = runtime->status->hw_ptr - runtime->hw_ptr_interrupt;
- period_pos %= runtime->period_size;
- ptr = snd_emu10k1_ptr_read(emu, CCCA, epcm->extra->number);
- ptr &= ~0x00ffffff;
- ptr |= epcm->ccca_start_addr + period_pos;
- snd_emu10k1_ptr_write(emu, CCCA, epcm->extra->number, ptr);
-}
-
static int snd_emu10k1_playback_trigger(struct snd_pcm_substream *substream,
int cmd)
{
/* follow thru */
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
case SNDRV_PCM_TRIGGER_RESUME:
- if (cmd == SNDRV_PCM_TRIGGER_PAUSE_RELEASE)
- snd_emu10k1_playback_mangle_extra(emu, epcm, substream, runtime);
mix = &emu->pcm_mixer[substream->number];
snd_emu10k1_playback_prepare_voice(emu, epcm->voices[0], 1, 0, mix);
snd_emu10k1_playback_prepare_voice(emu, epcm->voices[1], 0, 0, mix);
#endif
/*
printk(KERN_DEBUG
- "ptr = 0x%lx, buffer_size = 0x%lx, period_size = 0x%lx\n",
- (long)ptr, (long)runtime->buffer_size,
- (long)runtime->period_size);
+ "ptr = 0x%x, buffer_size = 0x%x, period_size = 0x%x\n",
+ ptr, runtime->buffer_size, runtime->period_size);
*/
return ptr;
}
if (snd_BUG_ON(!hdr))
return NULL;
- idx = runtime->period_size >= runtime->buffer_size ?
- (emu->delay_pcm_irq * 2) : 0;
mutex_lock(&hdr->block_mutex);
- blk = search_empty(emu, runtime->dma_bytes + idx);
+ blk = search_empty(emu, runtime->dma_bytes);
if (blk == NULL) {
mutex_unlock(&hdr->block_mutex);
return NULL;
"{Intel, ICH9},"
"{Intel, ICH10},"
"{Intel, PCH},"
- "{Intel, CPT},"
"{Intel, SCH},"
"{ATI, SB450},"
"{ATI, SB600},"
/* driver types */
enum {
AZX_DRIVER_ICH,
- AZX_DRIVER_PCH,
AZX_DRIVER_SCH,
AZX_DRIVER_ATI,
AZX_DRIVER_ATIHDMI,
static char *driver_short_names[] __devinitdata = {
[AZX_DRIVER_ICH] = "HDA Intel",
- [AZX_DRIVER_PCH] = "HDA Intel PCH",
[AZX_DRIVER_SCH] = "HDA Intel MID",
[AZX_DRIVER_ATI] = "HDA ATI SB",
[AZX_DRIVER_ATIHDMI] = "HDA ATI HDMI",
0x01, NVIDIA_HDA_ENABLE_COHBIT);
break;
case AZX_DRIVER_SCH:
- case AZX_DRIVER_PCH:
pci_read_config_word(chip->pci, INTEL_SCH_HDA_DEVC, &snoop);
if (snoop & INTEL_SCH_HDA_DEVC_NOSNOOP) {
pci_write_config_word(chip->pci, INTEL_SCH_HDA_DEVC,
* white/black-listing for position_fix
*/
static struct snd_pci_quirk position_fix_list[] __devinitdata = {
- SND_PCI_QUIRK(0x1025, 0x009f, "Acer Aspire 5110", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1028, 0x01cc, "Dell D820", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1028, 0x01de, "Dell Precision 390", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x103c, 0x306d, "HP dv3", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1028, 0x01f6, "Dell Latitude 131L", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1043, 0x813d, "ASUS P5AD2", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1043, 0x81b3, "ASUS", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1043, 0x81e7, "ASUS M2V", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x104d, 0x9069, "Sony VPCS11V9E", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1106, 0x3288, "ASUS M2V-MX SE", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1179, 0xff10, "Toshiba A100-259", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1297, 0x3166, "Shuttle", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1458, 0xa022, "ga-ma770-ud3", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1462, 0x1002, "MSI Wind U115", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1565, 0x820f, "Biostar Microtech", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1565, 0x8218, "Biostar Microtech", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x1849, 0x0888, "775Dual-VSTA", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x8086, 0x2503, "DG965OT AAD63733-203", POS_FIX_LPIB),
- SND_PCI_QUIRK(0x8086, 0xd601, "eMachines T5212", POS_FIX_LPIB),
{}
};
static struct snd_pci_quirk msi_white_list[] __devinitdata = {
SND_PCI_QUIRK(0x103c, 0x30f7, "HP Pavilion dv4t-1300", 1),
SND_PCI_QUIRK(0x103c, 0x3607, "HP Compa CQ40", 1),
- SND_PCI_QUIRK(0x107b, 0x0380, "Gateway M-6866", 1),
{}
};
"hda_intel: msi for device %04x:%04x set to %d\n",
q->subvendor, q->subdevice, q->value);
chip->msi = q->value;
- return;
- }
-
- /* NVidia chipsets seem to cause troubles with MSI */
- if (chip->driver_type == AZX_DRIVER_NVIDIA) {
- printk(KERN_INFO "hda_intel: Disable MSI for Nvidia chipset\n");
- chip->msi = 0;
}
}
if (bdl_pos_adj[dev] < 0) {
switch (chip->driver_type) {
case AZX_DRIVER_ICH:
- case AZX_DRIVER_PCH:
bdl_pos_adj[dev] = 1;
break;
default:
{ PCI_DEVICE(0x8086, 0x3a6e), .driver_data = AZX_DRIVER_ICH },
/* PCH */
{ PCI_DEVICE(0x8086, 0x3b56), .driver_data = AZX_DRIVER_ICH },
- { PCI_DEVICE(0x8086, 0x3b57), .driver_data = AZX_DRIVER_ICH },
- /* CPT */
- { PCI_DEVICE(0x8086, 0x1c20), .driver_data = AZX_DRIVER_PCH },
/* SCH */
{ PCI_DEVICE(0x8086, 0x811b), .driver_data = AZX_DRIVER_SCH },
/* ATI SB 450/600 */
SND_PCI_QUIRK(0x1043, 0x81cb, "ASUS M2N", AD1986A_3STACK),
SND_PCI_QUIRK(0x1043, 0x8234, "ASUS M2N", AD1986A_3STACK),
SND_PCI_QUIRK(0x10de, 0xcb84, "ASUS A8N-VM", AD1986A_3STACK),
- SND_PCI_QUIRK(0x1179, 0xff40, "Toshiba Satellite L40-10Q", AD1986A_3STACK),
+ SND_PCI_QUIRK(0x1179, 0xff40, "Toshiba", AD1986A_LAPTOP_EAPD),
SND_PCI_QUIRK(0x144d, 0xb03c, "Samsung R55", AD1986A_3STACK),
SND_PCI_QUIRK(0x144d, 0xc01e, "FSC V2060", AD1986A_LAPTOP),
SND_PCI_QUIRK(0x144d, 0xc024, "Samsung P50", AD1986A_SAMSUNG_P50),
case AD1981_THINKPAD:
spec->mixers[0] = ad1981_thinkpad_mixers;
spec->input_mux = &ad1981_thinkpad_capture_source;
- /* set the upper-limit for mixer amp to 0dB for avoiding the
- * possible damage by overloading
- */
- snd_hda_override_amp_caps(codec, 0x11, HDA_INPUT,
- (0x17 << AC_AMPCAP_OFFSET_SHIFT) |
- (0x17 << AC_AMPCAP_NUM_STEPS_SHIFT) |
- (0x05 << AC_AMPCAP_STEP_SIZE_SHIFT) |
- (1 << AC_AMPCAP_MUTE_SHIFT));
break;
case AD1981_TOSHIBA:
spec->mixers[0] = ad1981_hp_mixers;
/* Lenovo Thinkpad T61/X61 */
SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1984_THINKPAD),
SND_PCI_QUIRK(0x1028, 0x0214, "Dell T3400", AD1984_DELL_DESKTOP),
- SND_PCI_QUIRK(0x1028, 0x0233, "Dell Latitude E6400", AD1984_DELL_DESKTOP),
{}
};
switch (codec->subsystem_id >> 16) {
case 0x103c:
- case 0x1631:
- case 0x1734:
- case 0x17aa:
- /* HP, Packard Bell, Fujitsu-Siemens & Lenovo laptops have
- * really bad sound over 0dB on NID 0x17. Fix max PCM level to
- * 0 dB (originally it has 0x2b steps with 0dB offset 0x14)
+ /* HP laptop has a really bad sound over 0dB on NID 0x17.
+ * Fix max PCM level to 0 dB
+ * (originally it has 0x2b steps with 0dB offset 0x14)
*/
snd_hda_override_amp_caps(codec, 0x17, HDA_INPUT,
(0x14 << AC_AMPCAP_OFFSET_SHIFT) |
#endif
}
spec->vmaster_nid = 0x13;
-
- switch (codec->subsystem_id >> 16) {
- case 0x103c:
- /* HP laptops have really bad sound over 0 dB on NID 0x10.
- * Fix max PCM level to 0 dB (originally it has 0x1e steps
- * with 0 dB offset 0x17)
- */
- snd_hda_override_amp_caps(codec, 0x10, HDA_INPUT,
- (0x17 << AC_AMPCAP_OFFSET_SHIFT) |
- (0x17 << AC_AMPCAP_NUM_STEPS_SHIFT) |
- (0x05 << AC_AMPCAP_STEP_SIZE_SHIFT) |
- (1 << AC_AMPCAP_MUTE_SHIFT));
- break;
- }
-
return 0;
}
SND_PCI_QUIRK(0x1028, 0x02f5, "Dell",
CXT5066_DELL_LAPTOP),
SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT5066_OLPC_XO_1_5),
- SND_PCI_QUIRK(0x1179, 0xff50, "Toshiba Satellite P500-PSPGSC-01800T", CXT5066_OLPC_XO_1_5),
- SND_PCI_QUIRK(0x1179, 0xffe0, "Toshiba Satellite Pro T130-15F", CXT5066_OLPC_XO_1_5),
{}
};
unsigned int mux_idx = snd_ctl_get_ioffidx(kcontrol, &uinfo->id);
if (mux_idx >= spec->num_mux_defs)
mux_idx = 0;
- if (!spec->input_mux[mux_idx].num_items && mux_idx > 0)
- mux_idx = 0;
return snd_hda_input_mux_info(&spec->input_mux[mux_idx], uinfo);
}
mux_idx = adc_idx >= spec->num_mux_defs ? 0 : adc_idx;
imux = &spec->input_mux[mux_idx];
- if (!imux->num_items && mux_idx > 0)
- imux = &spec->input_mux[0];
type = get_wcaps_type(get_wcaps(codec, nid));
if (type == AC_WID_AUD_MIX) {
SND_PCI_QUIRK(0x1695, 0x4012, "EPox EP-5LDA", ALC880_5ST_DIG),
SND_PCI_QUIRK(0x1734, 0x107c, "FSC F1734", ALC880_F1734),
SND_PCI_QUIRK(0x1734, 0x1094, "FSC Amilo M1451G", ALC880_FUJITSU),
- SND_PCI_QUIRK(0x1734, 0x10ac, "FSC AMILO Xi 1526", ALC880_F1734),
+ SND_PCI_QUIRK(0x1734, 0x10ac, "FSC", ALC880_UNIWILL),
SND_PCI_QUIRK(0x1734, 0x10b0, "Fujitsu", ALC880_FUJITSU),
SND_PCI_QUIRK(0x1854, 0x0018, "LG LW20", ALC880_LG_LW),
SND_PCI_QUIRK(0x1854, 0x003b, "LG", ALC880_LG),
static struct snd_pci_quirk alc260_cfg_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x007b, "Acer C20x", ALC260_ACER),
- SND_PCI_QUIRK(0x1025, 0x007f, "Acer", ALC260_WILL),
SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_ACER),
SND_PCI_QUIRK(0x1509, 0x4540, "Favorit 100XS", ALC260_FAVORIT100),
SND_PCI_QUIRK(0x103c, 0x2808, "HP d5700", ALC260_HP_3013),
.num_dacs = ARRAY_SIZE(alc260_dac_nids),
.dac_nids = alc260_dac_nids,
.num_adc_nids = ARRAY_SIZE(alc260_dual_adc_nids),
- .adc_nids = alc260_dual_adc_nids,
+ .adc_nids = alc260_adc_nids,
.num_channel_mode = ARRAY_SIZE(alc260_modes),
.channel_mode = alc260_modes,
.input_mux = &alc260_capture_source,
spec->stream_analog_playback = &alc260_pcm_analog_playback;
spec->stream_analog_capture = &alc260_pcm_analog_capture;
- spec->stream_analog_alt_capture = &alc260_pcm_analog_capture;
spec->stream_digital_playback = &alc260_pcm_digital_playback;
spec->stream_digital_capture = &alc260_pcm_digital_capture;
.num_items = 4,
.items = {
{ "Mic", 0x0 },
- { "Int Mic", 0x1 },
+ { "iMic", 0x1 },
{ "Line", 0x2 },
{ "CD", 0x4 },
},
HDA_CODEC_MUTE("CD Playback Switch", 0x0b, 0x04, HDA_INPUT),
HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x0, HDA_INPUT),
HDA_CODEC_MUTE("Mic Playback Switch", 0x0b, 0x0, HDA_INPUT),
- HDA_CODEC_VOLUME("Int Mic Playback Volume", 0x0b, 0x1, HDA_INPUT),
- HDA_CODEC_MUTE("Int Mic Playback Switch", 0x0b, 0x1, HDA_INPUT),
+ HDA_CODEC_VOLUME("iMic Playback Volume", 0x0b, 0x1, HDA_INPUT),
+ HDA_CODEC_MUTE("iMic Playback Switch", 0x0b, 0x1, HDA_INPUT),
{ } /* end */
};
SND_PCI_QUIRK(0x1462, 0xaa08, "MSI", ALC883_TARGA_2ch_DIG),
SND_PCI_QUIRK(0x147b, 0x1083, "Abit IP35-PRO", ALC883_6ST_DIG),
- SND_PCI_QUIRK(0x1558, 0x0571, "Clevo laptop M570U", ALC883_3ST_6ch_DIG),
SND_PCI_QUIRK(0x1558, 0x0721, "Clevo laptop M720R", ALC883_CLEVO_M720),
SND_PCI_QUIRK(0x1558, 0x0722, "Clevo laptop M720SR", ALC883_CLEVO_M720),
SND_PCI_QUIRK(0x1558, 0x5409, "Clevo laptop M540R", ALC883_CLEVO_M540R),
SND_PCI_QUIRK(0x8086, 0x0022, "DX58SO", ALC889_INTEL),
SND_PCI_QUIRK(0x8086, 0x0021, "Intel IbexPeak", ALC889A_INTEL),
SND_PCI_QUIRK(0x8086, 0x3b56, "Intel IbexPeak", ALC889A_INTEL),
- SND_PCI_QUIRK(0x8086, 0xd601, "D102GGC", ALC882_6ST_DIG),
+ SND_PCI_QUIRK(0x8086, 0xd601, "D102GGC", ALC883_3ST_6ch),
{}
};
SND_PCI_QUIRK(0x106b, 0x1000, "iMac 24", ALC885_IMAC24),
SND_PCI_QUIRK(0x106b, 0x2800, "AppleTV", ALC885_IMAC24),
SND_PCI_QUIRK(0x106b, 0x2c00, "MacbookPro rev3", ALC885_MBP3),
- SND_PCI_QUIRK(0x106b, 0x3000, "iMac", ALC889A_MB31),
SND_PCI_QUIRK(0x106b, 0x3600, "Macbook 3,1", ALC889A_MB31),
SND_PCI_QUIRK(0x106b, 0x3800, "MacbookPro 4,1", ALC885_MBP3),
SND_PCI_QUIRK(0x106b, 0x3e00, "iMac 24 Aluminum", ALC885_IMAC24),
SND_PCI_QUIRK(0x106b, 0x3f00, "Macbook 5,1", ALC885_MB5),
- SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC885_MB5),
/* FIXME: HP jack sense seems not working for MBP 5,1 or 5,2,
* so apparently no perfect solution yet
*/
continue;
mux_idx = c >= spec->num_mux_defs ? 0 : c;
imux = &spec->input_mux[mux_idx];
- if (!imux->num_items && mux_idx > 0)
- imux = &spec->input_mux[0];
for (idx = 0; idx < conns; idx++) {
/* if the current connection is the selected one,
* unmute it as default - otherwise mute it
{}
};
-static struct hda_verb alc262_lenovo_3000_init_verbs[] = {
- /* Front Mic pin: input vref at 50% */
- {0x19, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_VREF50},
- {0x19, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE},
- {}
-};
-
static struct hda_input_mux alc262_fujitsu_capture_source = {
.num_items = 3,
.items = {
[ALC262_LENOVO_3000] = {
.mixers = { alc262_lenovo_3000_mixer },
.init_verbs = { alc262_init_verbs, alc262_EAPD_verbs,
- alc262_lenovo_3000_unsol_verbs,
- alc262_lenovo_3000_init_verbs },
+ alc262_lenovo_3000_unsol_verbs },
.num_dacs = ARRAY_SIZE(alc262_dac_nids),
.dac_nids = alc262_dac_nids,
.hp_nid = 0x03,
dac = 0x02;
break;
case 0x15:
- case 0x1a: /* ALC259/269 only */
- case 0x1b: /* ALC259/269 only */
- case 0x21: /* ALC269vb has this pin, too */
dac = 0x03;
break;
default:
return 0x02;
else if (nid >= 0x0c && nid <= 0x0e)
return nid - 0x0c + 0x02;
- else if (nid == 0x26) /* ALC887-VD has this DAC too */
- return 0x25;
else
return 0;
}
static hda_nid_t alc662_dac_to_mix(struct hda_codec *codec, hda_nid_t pin,
hda_nid_t dac)
{
- hda_nid_t mix[5];
+ hda_nid_t mix[4];
int i, num;
num = snd_hda_get_connections(codec, pin, mix, ARRAY_SIZE(mix));
"Dell Studio 1555", STAC_DELL_M6_DMIC),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x02bd,
"Dell Studio 1557", STAC_DELL_M6_DMIC),
- SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x02fe,
- "Dell Studio XPS 1645", STAC_DELL_M6_BOTH),
- SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0413,
- "Dell Studio 1558", STAC_DELL_M6_BOTH),
{} /* terminator */
};
static struct snd_pci_quirk stac92hd73xx_codec_id_cfg_tbl[] = {
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x02a1,
"Alienware M17x", STAC_ALIENWARE_M17X),
- SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x043a,
- "Alienware M17x", STAC_ALIENWARE_M17X),
{} /* terminator */
};
"HP HDX", STAC_HP_HDX), /* HDX16 */
SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x3620,
"HP dv6", STAC_HP_DV5),
- SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x3061,
- "HP dv6", STAC_HP_DV5), /* HP dv6-1110ax */
SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_HP, 0xfff0, 0x7010,
"HP", STAC_HP_DV5),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0233,
SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_INTEL, 0xff00, 0x2000,
"Intel D965", STAC_D965_3ST),
/* Dell 3 stack systems */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01f7, "Dell XPS M1730", STAC_DELL_3ST),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01dd, "Dell Dimension E520", STAC_DELL_3ST),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01ed, "Dell ", STAC_DELL_3ST),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01f4, "Dell ", STAC_DELL_3ST),
/* Dell 3 stack systems with verb table in BIOS */
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01f3, "Dell Inspiron 1420", STAC_DELL_BIOS),
- SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01f7, "Dell XPS M1730", STAC_DELL_BIOS),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0227, "Dell Vostro 1400 ", STAC_DELL_BIOS),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x022e, "Dell ", STAC_DELL_BIOS),
SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x022f, "Dell Inspiron 1525", STAC_DELL_BIOS),
/* known working input slots (0-4) */
#define MAYA_LINE_IN 1 /* in-2 */
-#define MAYA_MIC_IN 3 /* in-4 */
+#define MAYA_MIC_IN 4 /* in-5 */
static void wm8776_select_input(struct snd_maya44 *chip, int idx, int line)
{
int changed;
mutex_lock(&chip->mutex);
- changed = maya_set_gpio_bits(chip->ice, 1 << GPIO_MIC_RELAY,
- sel ? (1 << GPIO_MIC_RELAY) : 0);
+ changed = maya_set_gpio_bits(chip->ice, GPIO_MIC_RELAY,
+ sel ? GPIO_MIC_RELAY : 0);
wm8776_select_input(chip, 0, sel ? MAYA_MIC_IN : MAYA_LINE_IN);
mutex_unlock(&chip->mutex);
return changed;
.name = "HP/Compaq nx7010",
.type = AC97_TUNE_MUTE_LED
},
- {
- .subvendor = 0x1014,
- .subdevice = 0x0534,
- .name = "ThinkPad X31",
- .type = AC97_TUNE_INV_EAPD
- },
{
.subvendor = 0x1014,
.subdevice = 0x1f00,
.name = "Dell Inspiron 8600", /* STAC9750/51 */
.type = AC97_TUNE_HP_ONLY
},
- {
- .subvendor = 0x1028,
- .subdevice = 0x0182,
- .name = "Dell Latitude D610", /* STAC9750/51 */
- .type = AC97_TUNE_HP_ONLY
- },
{
.subvendor = 0x1028,
.subdevice = 0x0186,
struct snd_kcontrol *master_switch;
struct snd_kcontrol *master_volume;
struct tasklet_struct hwvol_tq;
- unsigned int in_suspend;
#ifdef CONFIG_PM
u16 *suspend_mem;
MODULE_DEVICE_TABLE(pci, snd_m3_ids);
static struct snd_pci_quirk m3_amp_quirk_list[] __devinitdata = {
- SND_PCI_QUIRK(0x0E11, 0x0094, "Compaq Evo N600c", 0x0c),
SND_PCI_QUIRK(0x10f7, 0x833e, "Panasonic CF-28", 0x0d),
SND_PCI_QUIRK(0x10f7, 0x833d, "Panasonic CF-72", 0x0d),
SND_PCI_QUIRK(0x1033, 0x80f1, "NEC LM800J/7", 0x03),
outb(0x88, chip->iobase + SHADOW_MIX_REG_MASTER);
outb(0x88, chip->iobase + HW_VOL_COUNTER_MASTER);
- /* Ignore spurious HV interrupts during suspend / resume, this avoids
- mistaking them for a mute button press. */
- if (chip->in_suspend)
- return;
-
if (!chip->master_switch || !chip->master_volume)
return;
if (chip->suspend_mem == NULL)
return 0;
- chip->in_suspend = 1;
snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
snd_pcm_suspend_all(chip->pcm);
snd_ac97_suspend(chip->ac97);
snd_m3_hv_init(chip);
snd_power_change_state(card, SNDRV_CTL_POWER_D0);
- chip->in_suspend = 0;
return 0;
}
#endif /* CONFIG_PM */
unsigned long count, unsigned long pos)
{
struct mixart_mgr *mgr = entry->private_data;
- unsigned long maxsize;
- if (pos >= MIXART_BA0_SIZE)
- return 0;
- maxsize = MIXART_BA0_SIZE - pos;
- if (count > maxsize)
- count = maxsize;
count = count & ~3; /* make sure the read size is a multiple of 4 bytes */
- if (copy_to_user_fromio(buf, MIXART_MEM(mgr, pos), count))
+ if(count <= 0)
+ return 0;
+ if(pos + count > MIXART_BA0_SIZE)
+ count = (long)(MIXART_BA0_SIZE - pos);
+ if(copy_to_user_fromio(buf, MIXART_MEM( mgr, pos ), count))
return -EFAULT;
return count;
}
unsigned long count, unsigned long pos)
{
struct mixart_mgr *mgr = entry->private_data;
- unsigned long maxsize;
- if (pos > MIXART_BA1_SIZE)
- return 0;
- maxsize = MIXART_BA1_SIZE - pos;
- if (count > maxsize)
- count = maxsize;
count = count & ~3; /* make sure the read size is a multiple of 4 bytes */
- if (copy_to_user_fromio(buf, MIXART_REG(mgr, pos), count))
+ if(count <= 0)
+ return 0;
+ if(pos + count > MIXART_BA1_SIZE)
+ count = (long)(MIXART_BA1_SIZE - pos);
+ if(copy_to_user_fromio(buf, MIXART_REG( mgr, pos ), count))
return -EFAULT;
return count;
}
chip->model.suspend = claro_suspend;
chip->model.resume = claro_resume;
chip->model.set_adc_params = set_ak5385_params;
- chip->model.device_config = PLAYBACK_0_TO_I2S |
- PLAYBACK_1_TO_SPDIF |
- CAPTURE_0_FROM_I2S_2 |
- CAPTURE_1_FROM_SPDIF;
break;
}
if (id->driver_data == MODEL_MERIDIAN ||
firmware.firmware.ASIC, firmware.firmware.CODEC,
firmware.firmware.AUXDSP, firmware.firmware.PROG);
- if (!chip)
- return 1;
-
for (i = 0; i < FIRMWARE_VERSIONS; i++) {
if (!memcmp(&firmware_versions[i], &firmware, sizeof(firmware)))
- return 1; /* OK */
-
+ break;
}
+ if (i >= FIRMWARE_VERSIONS)
+ return 0; /* no match */
+
+ if (!chip)
+ return 1; /* OK */
snd_printdd("Writing Firmware\n");
if (!chip->fw_entry) {
if (err < 0)
return err;
- memset(&info, 0, sizeof(info));
spin_lock_irqsave(&hdsp->lock, flags);
info.pref_sync_ref = (unsigned char)hdsp_pref_sync_ref(hdsp);
info.wordclock_sync_check = (unsigned char)hdsp_wc_sync_check(hdsp);
case SNDRV_HDSPM_IOCTL_GET_CONFIG_INFO:
- memset(&info, 0, sizeof(info));
spin_lock_irq(&hdspm->lock);
info.pref_sync_ref = hdspm_pref_sync_ref(hdspm);
info.wordclock_sync_check = hdspm_wc_sync_check(hdspm);
.name = "ASRock K7VT2",
.type = AC97_TUNE_HP_ONLY
},
- {
- .subvendor = 0x110a,
- .subdevice = 0x0079,
- .name = "Fujitsu Siemens D1289",
- .type = AC97_TUNE_HP_ONLY
- },
{
.subvendor = 0x1019,
.subdevice = 0x0a81,
if (reg >= codec->reg_cache_size)
return -EINVAL;
+ reg &= AK4104_REG_MASK;
+ reg |= AK4104_WRITE;
+
/* only write to the hardware if value has changed */
if (cache[reg] != value) {
- u8 tmp[2] = { (reg & AK4104_REG_MASK) | AK4104_WRITE, value };
-
+ u8 tmp[2] = { reg, value };
if (spi_write(spi, tmp, sizeof(tmp))) {
dev_err(&spi->dev, "SPI write failed\n");
return -EIO;
SOC_ENUM_SINGLE(WM8350_INPUT_MIXER_VOLUME, 15, 2, wm8350_lr),
};
-static DECLARE_TLV_DB_SCALE(pre_amp_tlv, -1200, 3525, 0);
-static DECLARE_TLV_DB_SCALE(out_pga_tlv, -5700, 600, 0);
+static DECLARE_TLV_DB_LINEAR(pre_amp_tlv, -1200, 3525);
+static DECLARE_TLV_DB_LINEAR(out_pga_tlv, -5700, 600);
static DECLARE_TLV_DB_SCALE(dac_pcm_tlv, -7163, 36, 1);
static DECLARE_TLV_DB_SCALE(adc_pcm_tlv, -12700, 50, 1);
static DECLARE_TLV_DB_SCALE(out_mix_tlv, -1500, 300, 1);
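For context on the DECLARE_TLV_DB_SCALE() entries above: their second and third arguments are, by convention, the minimum level and the per-step increment in 0.01 dB units. A small standalone sketch (illustrative arithmetic only, not an ALSA API) shows what the out_pga_tlv values map to:

/* Illustrative: decode a SCALE-style TLV (min, step in 0.01 dB units). */
#include <stdio.h>
#include <stdlib.h>

static int scale_centidb(int min, int step, unsigned int value)
{
	return min + (int)value * step;
}

int main(void)
{
	/* e.g. the (-5700, 600) pair used for out_pga_tlv above */
	int v0 = scale_centidb(-5700, 600, 0);
	int v5 = scale_centidb(-5700, 600, 5);

	printf("value 0 -> %d.%02d dB\n", v0 / 100, abs(v0) % 100);
	printf("value 5 -> %d.%02d dB\n", v5 / 100, abs(v5) % 100);
	return 0;
}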
wm8400_reset_codec_reg_cache(wm8400->wm8400);
}
-static const DECLARE_TLV_DB_SCALE(rec_mix_tlv, -1500, 600, 0);
+static const DECLARE_TLV_DB_LINEAR(rec_mix_tlv, -1500, 600);
-static const DECLARE_TLV_DB_SCALE(in_pga_tlv, -1650, 3000, 0);
+static const DECLARE_TLV_DB_LINEAR(in_pga_tlv, -1650, 3000);
-static const DECLARE_TLV_DB_SCALE(out_mix_tlv, -2100, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_mix_tlv, -2100, 0);
-static const DECLARE_TLV_DB_SCALE(out_pga_tlv, -7300, 600, 0);
+static const DECLARE_TLV_DB_LINEAR(out_pga_tlv, -7300, 600);
-static const DECLARE_TLV_DB_SCALE(out_omix_tlv, -600, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_omix_tlv, -600, 0);
-static const DECLARE_TLV_DB_SCALE(out_dac_tlv, -7163, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_dac_tlv, -7163, 0);
-static const DECLARE_TLV_DB_SCALE(in_adc_tlv, -7163, 1763, 0);
+static const DECLARE_TLV_DB_LINEAR(in_adc_tlv, -7163, 1763);
-static const DECLARE_TLV_DB_SCALE(out_sidetone_tlv, -3600, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_sidetone_tlv, -3600, 0);
static int wm8400_outpga_put_volsw_vu(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
/* INMIX dB values */
static const unsigned int in_mix_tlv[] = {
TLV_DB_RANGE_HEAD(1),
- 0,7, TLV_DB_SCALE_ITEM(-1200, 600, 0),
+ 0,7, TLV_DB_LINEAR_ITEM(-1200, 600),
};
/* Left In PGA Connections */
SOC_DOUBLE("DAC3 Invert Switch", WM8580_DAC_CONTROL4, 4, 5, 1, 0),
SOC_SINGLE("DAC ZC Switch", WM8580_DAC_CONTROL5, 5, 1, 0),
-SOC_SINGLE("DAC1 Switch", WM8580_DAC_CONTROL5, 0, 1, 1),
-SOC_SINGLE("DAC2 Switch", WM8580_DAC_CONTROL5, 1, 1, 1),
-SOC_SINGLE("DAC3 Switch", WM8580_DAC_CONTROL5, 2, 1, 1),
+SOC_SINGLE("DAC1 Switch", WM8580_DAC_CONTROL5, 0, 1, 0),
+SOC_SINGLE("DAC2 Switch", WM8580_DAC_CONTROL5, 1, 1, 0),
+SOC_SINGLE("DAC3 Switch", WM8580_DAC_CONTROL5, 2, 1, 0),
SOC_DOUBLE("ADC Mute Switch", WM8580_ADC_CONTROL1, 0, 1, 1, 0),
SOC_SINGLE("ADC High-Pass Filter Switch", WM8580_ADC_CONTROL1, 4, 1, 0),
static const struct snd_soc_dapm_widget wm8776_dapm_widgets[] = {
SND_SOC_DAPM_INPUT("AUX"),
+SND_SOC_DAPM_INPUT("AUX"),
SND_SOC_DAPM_INPUT("AIN1"),
SND_SOC_DAPM_INPUT("AIN2"),
case SND_SOC_DAIFMT_LEFT_J:
iface |= 0x0001;
break;
+ /* FIXME: CHECK A/B */
+ case SND_SOC_DAIFMT_DSP_A:
+ iface |= 0x0003;
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ iface |= 0x0007;
+ break;
default:
return -EINVAL;
}
#define wm8990_reset(c) snd_soc_write(c, WM8990_RESET, 0)
-static const DECLARE_TLV_DB_SCALE(rec_mix_tlv, -1500, 600, 0);
+static const DECLARE_TLV_DB_LINEAR(rec_mix_tlv, -1500, 600);
-static const DECLARE_TLV_DB_SCALE(in_pga_tlv, -1650, 3000, 0);
+static const DECLARE_TLV_DB_LINEAR(in_pga_tlv, -1650, 3000);
-static const DECLARE_TLV_DB_SCALE(out_mix_tlv, 0, -2100, 0);
+static const DECLARE_TLV_DB_LINEAR(out_mix_tlv, 0, -2100);
-static const DECLARE_TLV_DB_SCALE(out_pga_tlv, -7300, 600, 0);
+static const DECLARE_TLV_DB_LINEAR(out_pga_tlv, -7300, 600);
-static const DECLARE_TLV_DB_SCALE(out_omix_tlv, -600, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_omix_tlv, -600, 0);
-static const DECLARE_TLV_DB_SCALE(out_dac_tlv, -7163, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_dac_tlv, -7163, 0);
-static const DECLARE_TLV_DB_SCALE(in_adc_tlv, -7163, 1763, 0);
+static const DECLARE_TLV_DB_LINEAR(in_adc_tlv, -7163, 1763);
-static const DECLARE_TLV_DB_SCALE(out_sidetone_tlv, -3600, 0, 0);
+static const DECLARE_TLV_DB_LINEAR(out_sidetone_tlv, -3600, 0);
static int wm899x_outpga_put_volsw_vu(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
/* INMIX dB values */
static const unsigned int in_mix_tlv[] = {
TLV_DB_RANGE_HEAD(1),
- 0, 7, TLV_DB_SCALE_ITEM(-1200, 600, 0),
+ 0, 7, TLV_DB_LINEAR_ITEM(-1200, 600),
};
/* Left In PGA Connections */
return err;
}
-/*
- * This call will put the synth in "USB send" mode, i.e it will send MIDI
- * messages through USB (this is disabled at startup). The synth will
- * acknowledge by sending a sysex on endpoint 0x85 and by displaying a USB
- * sign on its LCD. Values here are chosen based on sniffing USB traffic
- * under Windows.
- */
-static int snd_usb_accessmusic_boot_quirk(struct usb_device *dev)
-{
- int err, actual_length;
-
- /* "midi send" enable */
- static const u8 seq[] = { 0x4e, 0x73, 0x52, 0x01 };
-
- void *buf = kmemdup(seq, ARRAY_SIZE(seq), GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
- err = usb_interrupt_msg(dev, usb_sndintpipe(dev, 0x05), buf,
- ARRAY_SIZE(seq), &actual_length, 1000);
- kfree(buf);
- if (err < 0)
- return err;
-
- return 0;
-}
-
/*
* Setup quirks
*/
goto __err_val;
}
- /* Access Music VirusTI Desktop */
- if (id == USB_ID(0x133e, 0x0815)) {
- if (snd_usb_accessmusic_boot_quirk(dev) < 0)
- goto __err_val;
- }
-
/*
* found a config. now register to ALSA
*/
DEFINE_WAIT(wait);
long timeout = msecs_to_jiffies(50);
- if (ep->umidi->disconnected)
- return;
/*
* The substream buffer is empty, but some data might still be in the
* currently active URBs, so we have to wait for those to complete.
* Frees an output endpoint.
* May be called when ep hasn't been initialized completely.
*/
-static void snd_usbmidi_out_endpoint_clear(struct snd_usb_midi_out_endpoint *ep)
+static void snd_usbmidi_out_endpoint_delete(struct snd_usb_midi_out_endpoint* ep)
{
unsigned int i;
for (i = 0; i < OUTPUT_URBS; ++i)
- if (ep->urbs[i].urb) {
+ if (ep->urbs[i].urb)
free_urb_and_buffer(ep->umidi, ep->urbs[i].urb,
ep->max_transfer);
- ep->urbs[i].urb = NULL;
- }
-}
-
-static void snd_usbmidi_out_endpoint_delete(struct snd_usb_midi_out_endpoint *ep)
-{
- snd_usbmidi_out_endpoint_clear(ep);
kfree(ep);
}
usb_kill_urb(ep->out->urbs[j].urb);
if (umidi->usb_protocol_ops->finish_out_endpoint)
umidi->usb_protocol_ops->finish_out_endpoint(ep->out);
- ep->out->active_urbs = 0;
- if (ep->out->drain_urbs) {
- ep->out->drain_urbs = 0;
- wake_up(&ep->out->drain_wait);
- }
}
if (ep->in)
for (j = 0; j < INPUT_URBS; ++j)
usb_kill_urb(ep->in->urbs[j]);
/* free endpoints here; later call can result in Oops */
- if (ep->out)
- snd_usbmidi_out_endpoint_clear(ep->out);
+ if (ep->out) {
+ snd_usbmidi_out_endpoint_delete(ep->out);
+ ep->out = NULL;
+ }
if (ep->in) {
snd_usbmidi_in_endpoint_delete(ep->in);
ep->in = NULL;
EXTERNAL_PORT(0x086a, 0x0001, 8, "%s Broadcast"),
EXTERNAL_PORT(0x086a, 0x0002, 8, "%s Broadcast"),
EXTERNAL_PORT(0x086a, 0x0003, 4, "%s Broadcast"),
- /* Access Music Virus TI */
- EXTERNAL_PORT(0x133e, 0x0815, 0, "%s MIDI"),
- PORT_INFO(0x133e, 0x0815, 1, "%s Synth", 0,
- SNDRV_SEQ_PORT_TYPE_MIDI_GENERIC |
- SNDRV_SEQ_PORT_TYPE_HARDWARE |
- SNDRV_SEQ_PORT_TYPE_SYNTHESIZER),
};
static struct port_info *find_port_info(struct snd_usb_midi* umidi, int number)
}
},
-/* Access Music devices */
-{
- /* VirusTI Desktop */
- USB_DEVICE_VENDOR_SPEC(0x133e, 0x0815),
- .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
- .ifnum = QUIRK_ANY_INTERFACE,
- .type = QUIRK_COMPOSITE,
- .data = &(const struct snd_usb_audio_quirk[]) {
- {
- .ifnum = 3,
- .type = QUIRK_MIDI_FIXED_ENDPOINT,
- .data = &(const struct snd_usb_midi_endpoint_info) {
- .out_cables = 0x0003,
- .in_cables = 0x0003
- }
- },
- {
- .ifnum = 4,
- .type = QUIRK_IGNORE_INTERFACE
- },
- {
- .ifnum = -1
- }
- }
- }
-},
-
/* */
{
/* aka. Serato Scratch Live DJ Box */
DOC_MAN5=$(patsubst %.txt,%.5,$(MAN5_TXT))
DOC_MAN7=$(patsubst %.txt,%.7,$(MAN7_TXT))
-# Make the path relative to DESTDIR, not prefix
-ifndef DESTDIR
prefix?=$(HOME)
-endif
bindir?=$(prefix)/bin
htmldir?=$(prefix)/share/doc/perf-doc
pdfdir?=$(prefix)/share/doc/perf-doc
man1dir=$(mandir)/man1
man5dir=$(mandir)/man5
man7dir=$(mandir)/man7
+# DESTDIR=
ASCIIDOC=asciidoc
ASCIIDOC_EXTRA = --unsafe
# runtime figures out where they are based on the path to the executable.
# This can help installing the suite in a relocatable way.
-# Make the path relative to DESTDIR, not to prefix
-ifndef DESTDIR
prefix = $(HOME)
-endif
bindir_relative = bin
bindir = $(prefix)/$(bindir_relative)
mandir = share/man
ETC_PERFCONFIG = etc/perfconfig
endif
lib = lib
+# DESTDIR=
export prefix bindir sharedir sysconfdir
INIT_LIST_HEAD(&node->brothers);
INIT_LIST_HEAD(&node->children);
INIT_LIST_HEAD(&node->val);
-
- node->children_hit = 0;
- node->parent = NULL;
- node->hit = 0;
}
static inline u64 cumul_hits(struct callchain_node *node)
/* Allocate page dirty bitmap if needed */
if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
- unsigned long dirty_bytes = kvm_dirty_bitmap_bytes(&new);
+ unsigned dirty_bytes = ALIGN(npages, BITS_PER_LONG) / 8;
new.dirty_bitmap = vmalloc(dirty_bytes);
if (!new.dirty_bitmap)
{
struct kvm_memory_slot *memslot;
int r, i;
- unsigned long n;
+ int n;
unsigned long any = 0;
r = -EINVAL;
if (!memslot->dirty_bitmap)
goto out;
- n = kvm_dirty_bitmap_bytes(memslot);
+ n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
for (i = 0; !any && i < n/sizeof(long); ++i)
any = memslot->dirty_bitmap[i];
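As a quick check of the bitmap sizing used above (one dirty bit per guest page, rounded up to a whole number of longs, expressed in bytes), a standalone sketch with a local stand-in for the kernel's ALIGN() macro:

/* Illustrative sketch of the dirty-bitmap size calculation. */
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define ALIGN(x, a)	(((x) + (a) - 1) / (a) * (a))

int main(void)
{
	unsigned long npages = 100;
	unsigned long dirty_bytes = ALIGN(npages, BITS_PER_LONG) / 8;

	/* with 64-bit longs: ALIGN(100, 64) = 128 bits = 16 bytes */
	printf("%lu pages -> %lu bitmap bytes\n", npages, dirty_bytes);
	return 0;
}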
memslot = gfn_to_memslot_unaliased(kvm, gfn);
if (memslot && memslot->dirty_bitmap) {
unsigned long rel_gfn = gfn - memslot->base_gfn;
- unsigned long *p = memslot->dirty_bitmap +
- rel_gfn / BITS_PER_LONG;
- int offset = rel_gfn % BITS_PER_LONG;
/* avoid RMW */
- if (!test_bit(offset, p))
- set_bit(offset, p);
+ if (!test_bit(rel_gfn, memslot->dirty_bitmap))
+ set_bit(rel_gfn, memslot->dirty_bitmap);
}
}