The ECC bytes must be placed immediately after the data
bytes in order to make the syndrome generator work. This
is contrary to the usual layout used by software ECC. The
- seperation of data and out of band area is not longer
+ separation of data and out of band area is not longer
possible. The nand driver code handles this layout and
the remaining free bytes in the oob area are managed by
the autoplacement code. Provide a matching oob-layout
bad blocks. They have factory marked good blocks. The marker pattern
is erased when the block is erased to be reused. So in case of
power loss before writing the pattern back to the chip, this block
- would be lost and added to the bad blocks. Therefor we scan the
+ would be lost and added to the bad blocks. Therefore we scan the
chip(s) for good blocks when we detect them for the first time and
store this information in a bad block table before erasing any
of the blocks.
manufacturer's specifications. This applies similarly to the spare area.
</para>
<para>
- Therefor NAND aware filesystems must either write in page size chunks
+ Therefore NAND aware filesystems must either write in page size chunks
or hold a writebuffer to collect smaller writes until they sum up to
pagesize. Available NAND aware filesystems: JFFS2, YAFFS.
</para>
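As a rough illustration of the write-buffer approach above (a user-space sketch only; PAGE_DATA_SIZE and the flush_page() callback are assumptions made for the example, not part of JFFS2 or YAFFS):

#include <stddef.h>
#include <string.h>

#define PAGE_DATA_SIZE 2048	/* assumed NAND page size */

struct write_buffer {
	unsigned char buf[PAGE_DATA_SIZE];
	size_t used;
	void (*flush_page)(const unsigned char *page);	/* program one full page */
};

/* Collect small writes until a whole page can be programmed at once. */
static void buffered_write(struct write_buffer *wb,
			   const unsigned char *data, size_t len)
{
	while (len) {
		size_t n = PAGE_DATA_SIZE - wb->used;

		if (n > len)
			n = len;
		memcpy(wb->buf + wb->used, data, n);
		wb->used += n;
		data += n;
		len -= n;

		if (wb->used == PAGE_DATA_SIZE) {
			wb->flush_page(wb->buf);	/* full page collected */
			wb->used = 0;
		}
	}
}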
captured or output, applications can request frame skipping or
duplicating on the driver side. This is especially useful when using
the &func-read; or &func-write;, which are not augmented by timestamps
-or sequence counters, and to avoid unneccessary data copying.</para>
+or sequence counters, and to avoid unnecessary data copying.</para>
<para>Finally these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For
duplicating on the driver side. This is especially useful when using
the <function>read()</function> or <function>write()</function>, which
are not augmented by timestamps or sequence counters, and to avoid
-unneccessary data copying.</para>
+unnecessary data copying.</para>
<para>Further these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For
how the clocks are arranged. The first implementation used a single
PLL to feed the ARM, memory and peripherals via a series of dividers
and muxes and this is the implementation that is documented here. A
- newer version where there is a seperate PLL and clock divider for the
- ARM core is available as a seperate driver.
+ newer version where there is a separate PLL and clock divider for the
+ ARM core is available as a separate driver.
Layout
bank1_types=1,1,0,0,0,0,0,2,0,0,0,0,2,0,0,1
You may also need to specify the fan_sensors option for these boards
fan_sensors=5
- 2) There is a seperate abituguru3 driver for these motherboards,
+ 2) There is a separate abituguru3 driver for these motherboards,
the abituguru (without the 3 !) driver will not work on these
motherboards (and vice versa)!
the configuration.
Because GPIO to IRQ mapping is platform specific, this information must
-be given in seperately to the driver. See the example below.
+be given in separately to the driver. See the example below.
---------<snip>---------
=======================
From v2.01 on, the driver is integrated in the linux kernel sources.
-Therefor, the installation is the same as for any other adapter
+Therefore, the installation is the same as for any other adapter
supported by the kernel.
Refer to the manual of your distribution about the installation
of network adapters.
see also: include/linux/kvm.h
This ioctl stores the state of the cpu at the guest real address given as
argument, unless one of the following values defined in include/linux/kvm.h
-is given as arguement:
+is given as argument:
KVM_S390_STORE_STATUS_NOADDR - the CPU stores its status to the save area in
absolute lowcore as defined by the principles of operation
KVM_S390_STORE_STATUS_PREFIXED - the CPU stores its status to the save area in
* Remove redundant port_cmp != 2 check in if
(!port_cmp) { .... if (port_cmp != 2).... }
* Clock changes: removed struct clk_data and timerList.
- * Clock changes: seperate nodev_tmo and els_retry_delay into 2
- seperate timers and convert to 1 argument changed
+ * Clock changes: separate nodev_tmo and els_retry_delay into 2
+ separate timers and convert to 1 argument changed
LPFC_NODE_FARP_PEND_t to struct lpfc_node_farp_pend convert
ipfarp_tmo to 1 argument convert target struct tmofunc and
rtplunfunc to 1 argument * cr_count, cr_delay and
* Remove unused elxclock declaration in elx_sli.h.
* Since everywhere IOCB_ENTRY is used, the return value is cast,
move the cast into the macro.
- * Split ioctls out into seperate files
+ * Split ioctls out into separate files
Changes from 20040326 to 20040402
* Unused variable cleanup
* Use Linux list macros for DMABUF_t
* Break up ioctls into 3 sections, dfc, util, hbaapi
- rearranged code so this could be easily seperated into a
+ rearranged code so this could be easily separated into a
different module later. All 3 are currently turned on by
defines in lpfc_ioctl.c LPFC_DFC_IOCTL, LPFC_UTIL_IOCTL,
LPFC_HBAAPI_IOCTL
started by lpfc_online(). lpfc_offline() only stopped
els_timeout routine. It now stops all timeout routines
associated with that hba.
- * Replace seperate next and prev pointers in struct
+ * Replace separate next and prev pointers in struct
lpfc_bindlist with list_head type. In elxHBA_t, replace
fc_nlpbind_start and _end with fc_nlpbind_list and use
list_head macros to access it.
When tracing is enabled, kstop_machine is called to prevent
races with the CPUS executing code being modified (which can
-cause the CPU to do undesireable things), and the nops are
+cause the CPU to do undesirable things), and the nops are
patched back to calls. But this time, they do not call mcount
(which is just a function stub). They now call into the ftrace
infrastructure.
*
* Micro9-High has up to 64MB of 32-bit flash on CS1
* Micro9-Mid has up to 64MB of either 32-bit or 16-bit flash on CS1
- * Micro9-Lite uses a seperate MTD map driver for flash support
+ * Micro9-Lite uses a separate MTD map driver for flash support
* Micro9-Slim has up to 64MB of either 32-bit or 16-bit flash on CS1
*************************************************************************/
static struct physmap_flash_data micro9_flash_data;
#define SRC_CR_INIT_MASK 0x00007fff
#define SRC_CR_INIT_VAL 0x2aaa8000
-/* These adresses span 16MB, so use three individual pages */
+/* These addresses span 16MB, so use three individual pages */
static struct resource nhk8815_nand_resources[] = {
{
.name = "nand_addr",
/*
* The AVE3e requires two regions of 256MB that it considers
* "invisible". The hardware will not be able to access these
- * adresses, so they should never point to system RAM.
+ * addresses, so they should never point to system RAM.
*/
{
.name = "AVE3e Reserved 0",
/*
* Some devices and their resources require reserved physical memory from
* the end of the available RAM. This function traverses the list of devices
- * and assigns actual adresses to these.
+ * and assigns actual addresses to these.
*/
static void __init u300_assign_physmem(void)
{
#include <mach/hardware.h>
.macro addruart,rx
- /* If we move the adress using MMU, use this. */
+ /* If we move the address using MMU, use this. */
mrc p15, 0, \rx, c1, c0
tst \rx, #1 @ MMU enabled?
ldreq \rx, = U300_SLOW_PER_PHYS_BASE @ MMU off, physical address
* others = Special functions (dependent on bank)
*
* Note: since the code deals with the case where there are two control
- * registers instead of one, we do not have a seperate set of functions for
+ * registers instead of one, we do not have a separate set of functions for
* each case.
*/
extern int s3c_gpio_setcfg_s3c64xx_4bit(struct s3c_gpio_chip *chip,
* published by the Free Software Foundation.
*/
-/* Note, this is a seperate header file as some of the clock framework
+/* Note, this is a separate header file as some of the clock framework
* needs to touch this if the clk_48m is used as the USB OHCI or other
* peripheral source.
*/
* @locktime_m: The lock-time in uS for the MPLL.
* @locktime_u: The lock-time in uS for the UPLL.
* @locktime_bits: The number of bits in each LOCKTIME field.
- * @need_pll: Set if this driver needs to change the PLL values to acheive
+ * @need_pll: Set if this driver needs to change the PLL values to achieve
* any frequency changes. This is really only needed by devices like the
* S3C2410 where there is no or limited divider between the PLL and the
* ARMCLK.
sum += *buff++;
if (endMarker > buff)
- sum += *(const u8 *)buff; /* add extra byte seperately */
+ sum += *(const u8 *)buff; /* add extra byte separately */
BITOFF;
return (__force __wsum)sum;
spin_unlock(&mmu_context_lock);
/*
- * Remember the pgd for the fault handlers. Keep a seperate
+ * Remember the pgd for the fault handlers. Keep a separate
* copy of it because current and active_mm might be invalid
* at points where there's still a need to dereference the pgd.
*/
* memory location directly.
*/
/* ++roman: The assignments to temp. vars avoid that gcc sometimes generates
- * two accesses to memory, which may be undesireable for some devices.
+ * two accesses to memory, which may be undesirable for some devices.
*/
/*
* Note: This stuff is duped here because Altix requires the PCDP to
* locate a usable VGA device due to lack of proper ACPI support. Structures
* could be used from drivers/firmware/pcdp.h, but it was decided that moving
- * this file to a more public location just for Altix use was undesireable.
+ * this file to a more public location just for Altix use was undesirable.
*/
struct hcdp_uart_desc {
* bytes have been lost and in which state of the packet structure we are now.
* This usually causes keyboard bytes to be interpreted as mouse movements
* and vice versa, which is very annoying. It seems better to throw away some
- * bytes (that are usually mouse bytes) than to misinterpret them. Therefor I
+ * bytes (that are usually mouse bytes) than to misinterpret them. Therefore I
* introduced the RESYNC state for IKBD data. In this state, the bytes up to
* one that really looks like a key event (0x04..0xf2) or the start of a mouse
* packet (0xf8..0xfb) are thrown away, but at most 2 bytes. This at least
* memory location directly.
*/
/* ++roman: The assignments to temp. vars avoid that gcc sometimes generates
- * two accesses to memory, which may be undesireable for some devices.
+ * two accesses to memory, which may be undesirable for some devices.
*/
/*
compatible = "cfi-flash";
/*
* The Intel P30 chip has 2 non-identical chips on
- * one die, so we need to define 2 seperate regions
+ * one die, so we need to define 2 separate regions
* that are scanned by physmap_of independently.
*/
reg = <0 0x00000000 0x02000000
/**
* struct ccw1 - channel command word
* @cmd_code: command code
- * @flags: flags, like IDA adressing, etc.
+ * @flags: flags, like IDA addressing, etc.
* @count: byte count
* @cda: data address
*
lh %r9,0(%r8) # update sccb length
ar %r9,%r6
sth %r9,0(%r8)
- ar %r7,%r6 # update current mto adress
+ ar %r7,%r6 # update current mto address
ltr %r0,%r0 # more characters?
jnz .LinitmtoS4
l %r2,.LwritedataS4-.LbaseS4(%r13)# write data
if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) &
(1<<LEON3_GPTIMER_SEPIRQ))) {
- prom_printf("irq timer not configured with seperate irqs \n");
+ prom_printf("irq timer not configured with separate irqs \n");
BUG();
}
}
/* Like powerpc we can't get PMU interrupts within the PMU handler,
- * so no need for seperate NMI and IRQ chains as on x86.
+ * so no need for separate NMI and IRQ chains as on x86.
*/
static DEFINE_PER_CPU(struct perf_callchain_entry, callchain);
#include <asm/asm-offsets.h>
-/* return adress at 0 */
+/* return address at 0 */
#define in_blk 12 /* input byte array address parameter*/
#define out_blk 8 /* output byte array address parameter*/
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
- add $crypto_tfm_ctx_offset, %ebp /* ctx adress */
- mov in_blk+16(%esp),%edi /* input adress in edi */
+ add $crypto_tfm_ctx_offset, %ebp /* ctx address */
+ mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
- add $crypto_tfm_ctx_offset, %ebp /* ctx adress */
- mov in_blk+16(%esp),%edi /* input adress in edi */
+ add $crypto_tfm_ctx_offset, %ebp /* ctx address */
+ mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
twofish_enc_blk:
pushq R1
- /* %rdi contains the crypto tfm adress */
- /* %rsi contains the output adress */
- /* %rdx contains the input adress */
- add $crypto_tfm_ctx_offset, %rdi /* set ctx adress */
- /* ctx adress is moved to free one non-rex register
+ /* %rdi contains the crypto tfm address */
+ /* %rsi contains the output address */
+ /* %rdx contains the input address */
+ add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
+ /* ctx address is moved to free one non-rex register
as target for the 8bit high operations */
mov %rdi, %r11
twofish_dec_blk:
pushq R1
- /* %rdi contains the crypto tfm adress */
- /* %rsi contains the output adress */
- /* %rdx contains the input adress */
- add $crypto_tfm_ctx_offset, %rdi /* set ctx adress */
- /* ctx adress is moved to free one non-rex register
+ /* %rdi contains the crypto tfm address */
+ /* %rsi contains the output address */
+ /* %rdx contains the input address */
+ add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
+ /* ctx address is moved to free one non-rex register
as target for the 8bit high operations */
mov %rdi, %r11
#define GET_CR2_INTO_RCX movq %cr2, %rcx
#endif
-/* we are not able to switch in one step to the final KERNEL ADRESS SPACE
+/* we are not able to switch in one step to the final KERNEL ADDRESS SPACE
* because we need identity-mapped pages.
*
*/
/*
* get_tce_space_from_tar():
* Function for kdump case. Get the tce tables from first kernel
- * by reading the contents of the base adress register of calgary iommu
+ * by reading the contents of the base address register of calgary iommu
*/
static void __init get_tce_space_from_tar(void)
{
* unstable. We do this because unlike Time Of Day,
* the scheduler clock tolerates small errors and it's
* very important for it to be as fast as the platform
- * can achive it. )
+ * can achieve it. )
*/
if (unlikely(tsc_disabled)) {
/* No locking but a rare wrong value is not a big deal: */
* excsave has been restored, and
* stack pointer (a1) has been set.
*
- * Note: _user_exception might be at an odd adress. Don't use call0..call12
+ * Note: _user_exception might be at an odd address. Don't use call0..call12
*/
ENTRY(user_exception)
* excsave has been restored, and
* stack pointer (a1) has been set.
*
- * Note: _kernel_exception might be at an odd adress. Don't use call0..call12
+ * Note: _kernel_exception might be at an odd address. Don't use call0..call12
*/
ENTRY(kernel_exception)
return ERR_PTR(ret);
/*
- * map scatter-gather elements seperately and string them to request
+ * map scatter-gather elements separately and string them to request
*/
rq = blk_get_request(q, rw, GFP_KERNEL);
if (!rq)
list_for_each_entry(dock_station, &dock_stations, sibling) {
/*
* An ATA bay can be in a dock and itself can be ejected
- * seperately, so there are two 'dock stations' which need the
+ * separately, so there are two 'dock stations' which need the
* ops
*/
dd = find_dock_dependent_device(dock_station, handle);
* @qc: command
*
* Drain the FIFO and device of any stuck data following a command
- * failing to complete. In some cases this is neccessary before a
+ * failing to complete. In some cases this is necessary before a
* reset will recover the device.
*
*/
*
* Called when the libata layer is about to issue a command. We wrap
* this interface so that we can load the correct ATA timings if
- * neccessary.
+ * necessary.
*/
static unsigned int pacpi_qc_issue(struct ata_queued_cmd *qc)
* @id: Entry in match table
*
* Perform basic initialisation. We set the device up so we access all
- * ports via BAR4. This is neccessary to work around errata.
+ * ports via BAR4. This is necessary to work around errata.
*/
static int hpt3x3_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
* @qc: command
*
* Drain the FIFO and device of any stuck data following a command
- * failing to complete. In some cases this is neccessary before a
+ * failing to complete. In some cases this is necessary before a
* reset will recover the device.
*
*/
/* All EEs on the free list should have ID_VACANT (== 0)
* freshly allocated EEs get !ID_VACANT (== 1)
- * so if it says "cannot dereference null pointer at adress 0x00000001",
+ * so if it says "cannot dereference null pointer at address 0x00000001",
* it is most likely one of these :( */
#define ID_IN_SYNC (4711ULL)
/* Meta data layout
We reserve a 128MB Block (4k aligned)
* either at the end of the backing device
- * or on a seperate meta data device. */
+ * or on a separate meta data device. */
#define MD_RESERVED_SECT (128LU << 11) /* 128 MB, unit sectors */
/* The following numbers are sectors */
*
* It may be handed over to the local disk subsystem.
* It may be completed by the local disk subsystem,
- * either sucessfully or with io-error.
+ * either successfully or with io-error.
* In case it is a READ request, and it failed locally,
* it may be retried remotely.
*
j++;
}
} else {
- /* sg may merge pages, but we have to seperate
+ /* sg may merge pages, but we have to separate
* per-page addr for GTT */
unsigned int len, m;
/* and is passed as an argument to acinit, but is probed on the bus to adapt */
/* to the number of boards present on the bus. IOCTL code 6 displayed V2.4.3 */
/* F.LAFORSE 28/11/95: creation of acXX.o files with the different base */
-/* adresses of the boards, more complete IOCTL 6 */
+/* addresses of the boards, more complete IOCTL 6 */
/* J.PAGET on 19/08/96: copy of version V2.6 to V2.8.0 without modification */
/* of code other than the text V2.6.1 to V2.8.0 */
/*****************************************************************************/
if (!hvlpevent_is_int(event)) {
printk(KERN_WARNING
- "hvc: got unexpected close acknowlegement\n");
+ "hvc: got unexpected close acknowledgement\n");
return;
}
* x22 + x21 + x17 + x15 + x13 + x12 + x11 + x7 + x5 + x + 1
*
* The RNG_CTL_VCO value of each noise cell must be programmed
- * seperately. This is why 4 control register values must be provided
+ * separately. This is why 4 control register values must be provided
* to the hypervisor. During a write, the hypervisor writes them all,
* one at a time, to the actual RNG_CTL register. The first three
* values are used to setup the desired RNG_CTL_VCO for each entropy
2) It may be hard-coded into your source by including a .h file (typically
supplied by Computone), which declares a data array and initializes every
- element. This acheives the same result as if an entire loadware file had
+ element. This achieves the same result as if an entire loadware file had
been read into the array.
This requires more data space in your program, but access to the file system
* @tty: tty being resized
* @ws: window size being set.
*
- * Update the termios variables and send the neccessary signals to
+ * Update the termios variables and send the necessary signals to
* perform a terminal resize correctly
*/
* @rows: rows (character)
* @cols: cols (character)
*
- * Update the termios variables and send the neccessary signals to
+ * Update the termios variables and send the necessary signals to
* perform a terminal resize correctly
*/
*
* Resize a virtual console, clipping according to the actual constraints.
* If the caller passes a tty structure then update the termios winsize
- * information and perform any neccessary signal handling.
+ * information and perform any necessary signal handling.
*
* Caller must hold the console semaphore. Takes the termios mutex and
* ctrl_lock of the tty IFF a tty is passed.
* @pool: pool handle
* @dev: dma device
* @lli_nbr: number of lli:s in the pool
- * @algin: adress alignemtn of lli:s
+ * @algin: address alignment of lli:s
* returns 0 on success, otherwise non-zero
*/
int coh901318_pool_create(struct coh901318_pool *pool,
* at which modes should be set up in the dual link style.
*
* Following the header, the BMP (ver 0xa) table has several records,
- * indexed by a seperate xlat table, indexed in turn by the fp strap in
+ * indexed by a separate xlat table, indexed in turn by the fp strap in
* EXTDEV_BOOT. Each record had a config byte, followed by 6 script
* numbers for use by INIT_SUB which controlled panel init and power,
* and finally a dword of ms to sleep between power off and on
uint32_t ramro_offset;
uint32_t ramro_size;
- /* base physical adresses */
+ /* base physical addresses */
uint64_t fb_phys;
uint64_t fb_available_size;
uint64_t fb_mappable_pages;
cur_irq++;
}
- /* Acknowlege interrupts */
+ /* Acknowledge interrupts */
VIA_WRITE(VIA_REG_INTERRUPT, status);
u32 status;
if (dev_priv) {
- /* Acknowlege interrupts */
+ /* Acknowledge interrupts */
status = VIA_READ(VIA_REG_INTERRUPT);
VIA_WRITE(VIA_REG_INTERRUPT, status |
dev_priv->irq_pending_mask);
*
* History:
* Apr 2002: Initial version [CS]
- * Jun 2002: Properly seperated algo/adap [FB]
+ * Jun 2002: Properly separated algo/adap [FB]
* Jan 2003: Fixed several bugs concerning interrupt handling [Kai-Uwe Bloem]
* Jan 2003: added limited signal handling [Kai-Uwe Bloem]
* Sep 2004: Major rework to ensure efficient bus handling [RMK]
#include "ehca_tools.h"
-/* virtual scatter gather entry to specify remote adresses with length */
+/* virtual scatter gather entry to specify remote addresses with length */
struct ehca_vsgentry {
u64 vaddr;
u32 lkey;
u32 immediate_data;
union {
struct {
- u64 remote_virtual_adress;
+ u64 remote_virtual_address;
u32 rkey;
u32 reserved;
u64 atomic_1st_op_dma_len;
/* no break is intentional here */
case IB_QPT_RC:
/* TODO: atomic not implemented */
- wqe_p->u.nud.remote_virtual_adress =
+ wqe_p->u.nud.remote_virtual_address =
send_wr->wr.rdma.remote_addr;
wqe_p->u.nud.rkey = send_wr->wr.rdma.rkey;
* yld_status struct.
*/
-/* LCD, each segment must be driven seperately.
+/* LCD, each segment must be driven separately.
*
* Layout:
*
/*
* isdn net devices manage lots of configuration variables as linked lists.
* Those lists must only be manipulated from user space. Some of the ioctl's
- * service routines access user space and are not atomic. Therefor, ioctl's
+ * service routines access user space and are not atomic. Therefore, ioctl's
* manipulating the lists and ioctl's sleeping while accessing the lists
* are serialized by means of a semaphore.
*/
int (*get_status)(struct dvb_frontend *fe, u32 *status);
int (*get_rf_strength)(struct dvb_frontend *fe, u16 *strength);
- /** These are provided seperately from set_params in order to facilitate silicon
- * tuners which require sophisticated tuning loops, controlling each parameter seperately. */
+ /** These are provided separately from set_params in order to facilitate silicon
+ * tuners which require sophisticated tuning loops, controlling each parameter separately. */
int (*set_frequency)(struct dvb_frontend *fe, u32 frequency);
int (*set_bandwidth)(struct dvb_frontend *fe, u32 bandwidth);
/*
- * These are provided seperately from set_params in order to facilitate silicon
- * tuners which require sophisticated tuning loops, controlling each parameter seperately.
+ * These are provided separately from set_params in order to facilitate silicon
+ * tuners which require sophisticated tuning loops, controlling each parameter separately.
*/
int (*set_state)(struct dvb_frontend *fe, enum tuner_param param, struct tuner_state *state);
int (*get_state)(struct dvb_frontend *fe, enum tuner_param param, struct tuner_state *state);
/* Tibet Systems 'Progress DVR' CS16 muxsel helper [Chris Fanning]
*
* The CS16 (available on eBay cheap) is a PCI board with four Fusion
- * 878A chips, a PCI bridge, an Atmel microcontroller, four sync seperator
+ * 878A chips, a PCI bridge, an Atmel microcontroller, four sync separator
* chips, ten eight input analog multiplexors, a not chip and a few
* other components.
*
*
* There is an ATMEL microcontroller with an 8031 core on board. I have not
* determined what function (if any) it provides. With the microcontroller
- * and sync seperator chips a guess is that it might have to do with video
+ * and sync separator chips a guess is that it might have to do with video
* switching and maybe some digital I/O.
*/
static void tibetCS16_muxsel(struct bttv *btv, unsigned int input)
/*
* The FX2 chip does not give us a zero length read at end of frame.
* It does, however, give a short read at the end of a frame, if
- * neccessary, rather than run two frames together.
+ * necessary, rather than run two frames together.
*
* By choosing the right bulk transfer size, we are guaranteed to always
* get a short read for the last read of each frame. Frame sizes are
contains decompression routines that allow you to use higher image sizes and
framerates; in addition the webcam uses less bandwidth on the USB bus (handy
if you want to run more than 1 camera simultaneously). These routines fall
-under a NDA, and may therefor not be distributed as source; however, its use
+under a NDA, and may therefore not be distributed as source; however, its use
is completely optional.
You can build this code either into your kernel, or as a module. I recommend
/*
Write multiple registers with constant values. For example:
sn9c102_write_const_regs(cam, {0x00, 0x14}, {0x60, 0x17}, {0x0f, 0x18});
- Register adresses must be < 256.
+ Register addresses must be < 256.
*/
#define sn9c102_write_const_regs(sn9c102_device, data...) \
({ static const u8 _valreg[][2] = {data}; \
The tea6420 is a bus controlled audio-matrix with 5 stereo inputs,
4 stereo outputs and gain control for each output.
- It is cascadable, i.e. it can be found at the adresses 0x98
+ It is cascadable, i.e. it can be found at the addresses 0x98
and 0x9a on the i2c-bus.
For detailed information, download the specifications directly
unsigned long clock = readl(sm->regs + SM501_CURRENT_CLOCK);
unsigned char reg;
unsigned int pll_reg = 0;
- unsigned long sm501_freq; /* the actual frequency acheived */
+ unsigned long sm501_freq; /* the actual frequency achieved */
struct sm501_clock to;
switch (clksrc) {
case SM501_CLOCK_P2XCLK:
- /* This clock is divided in half so to achive the
+ /* This clock is divided in half so to achieve the
* requested frequency the value must be multiplied by
* 2. This clock also has an additional pre divisor */
break;
case SM501_CLOCK_V2XCLK:
- /* This clock is divided in half so to achive the
+ /* This clock is divided in half so to achieve the
* requested frequency the value must be multiplied by 2. */
sm501_freq = (sm501_select_clock(2 * req_freq, &to, 3) / 2);
unsigned long req_freq)
{
struct sm501_devdata *sm = dev_get_drvdata(dev);
- unsigned long sm501_freq; /* the frequency achiveable by the 501 */
+ unsigned long sm501_freq; /* the frequency achievable by the 501 */
struct sm501_clock to;
switch (clksrc) {
* This is a driver for the SDHC controller found in Freescale MX2/MX3
* SoCs. It is basically the same hardware as found on MX1 (imxmmc.c).
* Unlike the hardware found on MX1, this hardware just works and does
- * not need all the quirks found in imxmmc.c, hence the seperate driver.
+ * not need all the quirks found in imxmmc.c, hence the separate driver.
*
* Copyright (C) 2008 Sascha Hauer, Pengutronix <s.hauer@pengutronix.de>
* Copyright (C) 2006 Pavel Pisa, PiKRON <ppisa@pikron.com>
* exists, but is for MTD_UADDR_NOT_SUPPORTED - and, therefore,
* should not be used. The problem is that structures with
* initializers have extra fields initialized to 0. It is _very_
- * desireable to have the unlock address entries for unsupported
+ * desirable to have the unlock address entries for unsupported
* data widths automatically initialized - that means that
* MTD_UADDR_NOT_SUPPORTED must be 0 and the first entry here
* must go unused.
if (!r)
return -ENXIO;
- /* map physical adress */
+ /* map physical address */
bcm_umi_io_base = ioremap(r->start, r->end - r->start + 1);
if (!bcm_umi_io_base) {
/* Release resources, unregister device */
nand_release(board_mtd);
- /* unmap physical adress */
+ /* unmap physical address */
iounmap(bcm_umi_io_base);
/* Free the MTD device structure */
* MXC NANDFC can only perform full page+spare or
* spare-only read/write. When the upper layers
* perform a read/write buf operation,
- * we will used the saved column adress to index into
+ * we will used the saved column address to index into
* the full page.
*/
send_addr(host, 0, page_addr == -1);
struct atl2_ring_header {
/* pointer to the descriptor ring memory */
void *desc;
- /* physical adress of the descriptor ring */
+ /* physical address of the descriptor ring */
dma_addr_t dma;
/* length of descriptor ring in bytes */
unsigned int size;
*
* Interrupts are handled by a single CPU and it is likely that on a MP system
* the application is migrated to another CPU. In that scenario, we try to
- * seperate the RX(in irq context) and TX state in order to decrease memory
+ * separate the RX(in irq context) and TX state in order to decrease memory
* contention.
*/
struct sge {
*
* 1) down
* 2) autoneg_progress
- * 3) autoneg_complete (the link sucessfully autonegotiated)
+ * 3) autoneg_complete (the link successfully autonegotiated)
* 4) forced_up (the link has been forced up, it did not autonegotiate)
*
**/
if (!(rxcw & E1000_RXCW_IV)) {
mac->serdes_has_link = true;
e_dbg("SERDES: Link up - autoneg "
- "completed sucessfully.\n");
+ "completed successfully.\n");
} else {
mac->serdes_has_link = false;
e_dbg("SERDES: Link down - invalid"
/* start with one vector for every rx queue */
numvecs = adapter->num_rx_queues;
- /* if tx handler is seperate add 1 for every tx queue */
+ /* if tx handler is separate add 1 for every tx queue */
if (!(adapter->flags & IGB_FLAG_QUEUE_PAIRS))
numvecs += adapter->num_tx_queues;
* If we missed a speed change, initialise at the new speed
* directly. It is debatable whether this is actually
* required, but in the interests of continuing from where
- * we left off it is desireable. The converse argument is
+ * we left off it is desirable. The converse argument is
* that we should re-negotiate at 9600 baud again.
*/
if (si->newspeed) {
u32 wol = 0;
status = ql_mb_wol_mode(qdev, wol);
QPRINTK(qdev, DRV, ERR, "WOL %s (wol code 0x%x) on %s\n",
- (status == 0) ? "cleared sucessfully" : "clear failed",
+ (status == 0) ? "cleared successfully" : "clear failed",
wol, qdev->ndev->name);
}
wol |= MB_WOL_MODE_ON;
status = ql_mb_wol_mode(qdev, wol);
QPRINTK(qdev, DRV, ERR, "WOL %s (wol code 0x%x) on %s\n",
- (status == 0) ? "Sucessfully set" : "Failed", wol,
+ (status == 0) ? "Successfully set" : "Failed", wol,
qdev->ndev->name);
}
#define FRF_AA_INT_ACK_KER_FIELD_LBN 0
#define FRF_AA_INT_ACK_KER_FIELD_WIDTH 32
-/* INT_ISR0_REG: Function 0 Interrupt Acknowlege Status register */
+/* INT_ISR0_REG: Function 0 Interrupt Acknowledge Status register */
#define FR_BZ_INT_ISR0 0x00000090
#define FRF_BZ_INT_ISR_REG_LBN 0
#define FRF_BZ_INT_ISR_REG_WIDTH 64
netif_carrier_off(dev);
- /* disable, mask and acknowlege all interrupts */
+ /* disable, mask and acknowledge all interrupts */
spin_lock_irqsave(&pd->int_lock, flags);
int_cfg = smsc9420_reg_read(pd, INT_CFG) & (~INT_CFG_IRQ_EN_);
smsc9420_reg_write(pd, INT_CFG, int_cfg);
* spider_net_enable_rxchtails - sets RX dmac chain tail addresses
* @card: card structure
*
- * spider_net_enable_rxchtails sets the RX DMAC chain tail adresses in the
+ * spider_net_enable_rxchtails sets the RX DMAC chain tail addresses in the
* chip by writing to the appropriate register. DMA is enabled in
* spider_net_enable_rxdmac.
*/
spider_net_write_reg(card, SPIDER_NET_ECMODE, SPIDER_NET_ECMODE_VALUE);
- /* set chain tail adress for RX chains and
+ /* set chain tail address for RX chains and
* enable DMA */
spider_net_enable_rxchtails(card);
spider_net_enable_rxdmac(card);
break;
/* When writing back RX descriptor, GEM writes status
- * then buffer address, possibly in seperate transactions.
+ * then buffer address, possibly in separate transactions.
* If we don't wait for the chip to write both, we could
* post a new buffer to this descriptor then have GEM spam
* on the buffer address. We sync on the RX completion
* @data - desc's data
* @size - desc's size
*
- * NOTE: this func does check for available space and, if neccessary, waits for
+ * NOTE: this func does check for available space and, if necessary, waits for
* NIC to read existing data before writing new one.
*/
static void bdx_tx_push_desc_safe(struct bdx_priv *priv, void *data, int size)
* NOTE: This function should be used whenever the status of any TPL must be
* modified by the driver, because the compiler may otherwise change the
* order of instructions such that writing the TPL status may be executed at
- * an undesireable time. When this function is used, the status is always
+ * an undesirable time. When this function is used, the status is always
* written when the function is called.
*/
static void tms380tr_write_tpl_status(TPL *tpl, unsigned int Status)
* This function should be used whenever the status of any RPL must be
* modified by the driver, because the compiler may otherwise change the
* order of instructions such that writing the RPL status may be executed
- * at an undesireable time. When this function is used, the status is
+ * at an undesirable time. When this function is used, the status is
* always written when the function is called.
*/
static void tms380tr_write_rpl_status(RPL *rpl, unsigned int Status)
__tun_detach(tun);
- /* If desireable, unregister the netdevice. */
+ /* If desirable, unregister the netdevice. */
if (!(tun->flags & TUN_PERSIST)) {
rtnl_lock();
if (dev->reg_state == NETREG_REGISTERED)
ucc_fast_get_qe_cr_subblock(ugeth->ug_info->uf_info.ucc_num);
/* Ethernet frames are defined in Little Endian mode,
- therefor to insert */
+ therefore to insert */
/* the address to the hash (Big Endian mode), we reverse the bytes.*/
set_mac_addr(&p_82xx_addr_filt->taddr.h, p_enet_addr);
goto error_wait_for_ack;
}
rx_bytes = result;
- /* verify the ack and read more if neccessary [result is the
+ /* verify the ack and read more if necessary [result is the
* final amount of bytes we get in the ack] */
result = __i2400m_bm_ack_verify(i2400m, opcode, ack, ack_size, flags);
if (result < 0)
* @I2400M_BRI_NO_REBOOT: Do not reboot the device and proceed
* directly to wait for a reboot barker from the device.
* @I2400M_BRI_MAC_REINIT: We need to reinitialize the boot
- * rom after reading the MAC adress. This is quite a dirty hack,
+ * rom after reading the MAC address. This is quite a dirty hack,
* if you ask me -- the device requires the bootrom to be
* initialized after reading the MAC address.
*/
*
* The device will be fully reset internally, but won't be
* disconnected from the bus (so no reenumeration will
- * happen). Firmware upload will be neccessary.
+ * happen). Firmware upload will be necessary.
*
* The device will send a reboot barker that will trigger the driver
* to reinitialize the state via __i2400m_dev_reset_handle.
*
* The device will be fully reset internally, disconnected from the
* bus and a reenumeration will happen. Firmware upload will be
- * neccessary. Thus, we don't do any locking or struct
+ * necessary. Thus, we don't do any locking or struct
* reinitialization, as we are going to be fully disconnected and
* reenumerated.
*
*
* The device will be fully reset internally, but won't be
* disconnected from the USB bus (so no reenumeration will
- * happen). Firmware upload will be neccessary.
+ * happen). Firmware upload will be necessary.
*
* The device will send a reboot barker in the notification endpoint
* that will trigger the driver to reinitialize the state
*
* The device will be fully reset internally, disconnected from the
* USB bus and a reenumeration will happen. Firmware upload will be
- * neccessary. Thus, we don't do any locking or struct
+ * necessary. Thus, we don't do any locking or struct
* reinitialization, as we are going to be fully disconnected and
* reenumerated.
*
/*
* this buffer is used for rx stream reconstruction.
* Under heavy load this device (or the transport layer?)
- * tends to split the streams into seperate rx descriptors.
+ * tends to split the streams into separate rx descriptors.
*/
skb = __dev_alloc_skb(AR9170_MAX_RX_BUFFER_SIZE, GFP_KERNEL);
/* Power Management */
#define POWER_TABLE_CMD 0x77
-#define SAVE_RESTORE_ADRESS_CMD 0x78
+#define SAVE_RESTORE_ADDRESS_CMD 0x78
#define REPLY_WATERMARK_CMD 0x79
#define PM_DEBUG_STATISTIC_NOTIFIC 0x7B
#define PD_FLUSH_N_NOTIFICATION 0x7C
/*
* The encryption key doesn't fit within the CSR cache,
- * this means we should allocate it seperately and use
+ * this means we should allocate it separately and use
* rt2x00usb_vendor_request() to send the key to the hardware.
*/
reg = KEY_ENTRY(key->hw_key_idx);
/*
* The driver does not support the IV/EIV generation
* in hardware. However it demands the data to be provided
- * both seperately as well as inside the frame.
+ * both separately as well as inside the frame.
* We already provided the CONFIG_CRYPTO_COPY_IV to rt2x00lib
* to ensure rt2x00lib will not strip the data from the
* frame after the copy, now we must tell mac80211
* There are 2 variations of the rt2870 firmware.
* a) size: 4kb
* b) size: 8kb
- * Note that (b) contains 2 seperate firmware blobs of 4k
+ * Note that (b) contains 2 separate firmware blobs of 4k
* within the file. The first blob is the same firmware as (a),
* but the second blob is for the additional chipsets.
*/
/*
* 8kb firmware files must be checked as if it were
- * 2 seperate firmware files.
+ * 2 separate firmware files.
*/
while (offset < len) {
if (!rt2800usb_check_crc(data + offset, 4096))
/*
* HW crypto statistics.
- * All statistics are stored seperately per cipher type.
+ * All statistics are stored separately per cipher type.
*/
struct rt2x00debug_crypto crypto_stats[CIPHER_MAX];
/*
* Hardware might have stripped the IV/EIV/ICV data,
* in that case it is possible that the data was
- * provided seperately (through hardware descriptor)
+ * provided separately (through hardware descriptor)
* in which case we should reinsert the data into the frame.
*/
if ((rxdesc.dev_flags & RXDONE_CRYPTO_IV) &&
/*
* When hardware encryption is supported, and this frame
* is to be encrypted, we should strip the IV/EIV data from
- * the frame so we can provide it to the driver seperately.
+ * the frame so we can provide it to the driver separately.
*/
if (test_bit(ENTRY_TXD_ENCRYPT, &txdesc.flags) &&
!test_bit(ENTRY_TXD_ENCRYPT_IV, &txdesc.flags)) {
* The driver does not support the IV/EIV generation
* in hardware. However it doesn't support the IV/EIV
* inside the ieee80211 frame either, but requires it
- * to be provided seperately for the descriptor.
+ * to be provided separately for the descriptor.
* rt2x00lib will cut the IV/EIV data out of all frames
* given to us by mac80211, but we must tell mac80211
* to generate the IV/EIV data.
* The driver does not support the IV/EIV generation
* in hardware. However it doesn't support the IV/EIV
* inside the ieee80211 frame either, but requires it
- * to be provided seperately for the descriptor.
+ * to be provided separately for the descriptor.
* rt2x00lib will cut the IV/EIV data out of all frames
* given to us by mac80211, but we must tell mac80211
* to generate the IV/EIV data.
* The driver does not support the IV/EIV generation
* in hardware. However it doesn't support the IV/EIV
* inside the ieee80211 frame either, but requires it
- * to be provided seperately for the descriptor.
+ * to be provided separately for the descriptor.
* rt2x00lib will cut the IV/EIV data out of all frames
* given to us by mac80211, but we must tell mac80211
* to generate the IV/EIV data.
/*
* Hardware has stripped IV/EIV data from 802.11 frame during
- * decryption. It has provided the data seperately but rt2x00lib
+ * decryption. It has provided the data separately but rt2x00lib
* should decide if it should be reinserted.
*/
rxdesc->flags |= RX_FLAG_IV_STRIPPED;
rq->rc = ccw_device_start(rp->cdev, &rq->ccw,
(unsigned long) rq, 0, 0);
if (rq->rc == 0)
- return; /* Sucessfully restarted. */
+ return; /* Successfully restarted. */
break;
case RAW3270_IO_STOP:
if (!rq)
req->start_count++;
if (rc == 0) {
- /* Sucessfully started request */
+ /* Successfully started request */
req->status = SCLP_REQ_RUNNING;
sclp_running_state = sclp_running_state_running;
__sclp_set_request_timer(SCLP_RETRY_INTERVAL * HZ,
* init_orchid - initialise the host adapter
* @host:host adapter to initialise
*
- * Initialise the controller and if neccessary load the firmware.
+ * Initialise the controller and if necessary load the firmware.
*
* Returns -1 if the initialisation fails.
*/
* initio_stop_bm - stop bus master
* @host: InitIO we are stopping
*
- * Stop any pending DMA operation, aborting the DMA if neccessary
+ * Stop any pending DMA operation, aborting the DMA if necessary
*/
static void initio_stop_bm(struct initio_host * host)
#define FC_SRB_CMD_SENT (1 << 0) /* cmd has been sent */
#define FC_SRB_RCV_STATUS (1 << 1) /* response has arrived */
#define FC_SRB_ABORT_PENDING (1 << 2) /* cmd abort sent to device */
-#define FC_SRB_ABORTED (1 << 3) /* abort acknowleged */
+#define FC_SRB_ABORTED (1 << 3) /* abort acknowledged */
#define FC_SRB_DISCONTIG (1 << 4) /* non-sequential data recvd */
#define FC_SRB_COMPL (1 << 5) /* fc_io_compl has been run */
#define FC_SRB_FCP_PROCESSING_TMO (1 << 6) /* timer function processing */
* function returns, it does not guarantee all the IOCBs are actually aborted.
*
* Return code
- * 0 - Sucessfully issued abort iocb on all outstanding flogis (Always 0)
+ * 0 - Successfully issued abort iocb on all outstanding flogis (Always 0)
**/
int
lpfc_els_abort_flogi(struct lpfc_hba *phba)
if (ndlp && NLP_CHK_NODE_ACT(ndlp) &&
(*((uint32_t *) (pcmd)) == ELS_CMD_LS_RJT)) {
/* A LS_RJT associated with Default RPI cleanup has its own
- * seperate code path.
+ * separate code path.
*/
if (!(ndlp->nlp_flag & NLP_RM_DFLT_RPI))
ls_rjt = 1;
#define S_IO BIT(1) /* Input/Output line from SCSI bus */
#define S_CD BIT(2) /* Command/Data line from SCSI bus */
#define S_BUSY BIT(3) /* Busy line from SCSI bus */
-#define S_ACK BIT(4) /* Acknowlege line from SCSI bus */
+#define S_ACK BIT(4) /* Acknowledge line from SCSI bus */
#define S_REQUEST BIT(5) /* Request line from SCSI bus */
#define S_SELECT BIT(6) /* */
#define S_ATN BIT(7) /* */
break;
default:
PM8001_MSG_DBG(pm8001_ha,
- pm8001_printk("unkown device type(%x)\n", deviceType));
+ pm8001_printk("unknown device type(%x)\n", deviceType));
break;
}
phy->phy_type |= PORT_TYPE_SAS;
* by the command "OPC_INB_REG_DEV", after that the HBA will assign a
* device ID(according to device's sas address) and returned it to LLDD. From
* now on, we communicate with HBA FW with the device ID which HBA assigned
- * rather than sas address. it is the neccessary step for our HBA but it is
+ * rather than sas address. it is the necessary step for our HBA but it is
* optional for other HBA drivers.
*/
static int pm8001_dev_found_notify(struct domain_device *dev)
/*
* pmcraid_ioctl_header - definition of header structure that precedes all the
- * buffers given as ioctl arguements.
+ * buffers given as ioctl arguments.
*
* .signature : always ASCII string, "PMCRAID"
* .reserved : not used
* which is followed by sdaaa.
*
* This is basically 26 base counting with one extra 'nil' entry
- * at the beggining from the second digit on and can be
+ * at the beginning from the second digit on and can be
* determined using similar method as 26 base conversion with the
* index shifted -1 after each digit is computed.
*
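A small hypothetical user-space sketch of that conversion (the helper name disk_suffix() is made up for illustration and is not the driver's own code):

#include <stdio.h>

/* Map a 0-based index to the a, b, ..., z, aa, ab, ... suffix described
 * above.  The "index / 26 - 1" step is the -1 shift that accounts for
 * the extra 'nil' entry on every digit but the last. */
static void disk_suffix(int index, char *buf, int buflen)
{
	char tmp[8];
	int n = 0;

	do {
		tmp[n++] = 'a' + (index % 26);
		index = index / 26 - 1;
	} while (index >= 0 && n < (int)sizeof(tmp));

	/* digits were produced least-significant first; reverse them */
	while (n > 0 && buflen > 1) {
		*buf++ = tmp[--n];
		buflen--;
	}
	*buf = '\0';
}

int main(void)
{
	char name[8];
	int i;

	for (i = 0; i < 28; i++) {
		disk_suffix(i, name, sizeof(name));
		printf("sd%s\n", name);		/* sda ... sdz, sdaa, sdab */
	}
	return 0;
}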
* Claim the FIQ handler (only one can be active at any one time) and
* then setup the correct transfer code for this transfer.
*
- * This call updates all the necessary state information if sucessful,
+ * This call updates all the necessary state information if successful,
* so the caller does not need to do anything more than start the transfer
* as normal, since the IRQ will have been re-routed to the FIQ handler.
*/
#define MUSB_FLAT_OFFSET(_epnum, _offset) \
(USB_OFFSET(USB_EP_NI0_TXMAXP) + (0x40 * (_epnum)) + (_offset))
-/* Not implemented - HW has seperate Tx/Rx FIFO */
+/* Not implemented - HW has separate Tx/Rx FIFO */
#define MUSB_TXCSR_MODE 0x0000
static inline void musb_write_txfifosz(void __iomem *mbase, u8 c_size)
int isthrottled; /* if throttled, discard reads */
wait_queue_head_t delta_msr_wait; /* used for TIOCMIWAIT */
char prev_status, diff_status; /* used for TIOCMIWAIT */
- /* we pass a pointer to this as the arguement sent to
+ /* we pass a pointer to this as the argument sent to
cypress_set_termios old_termios */
struct ktermios tmp_termios; /* stores the old termios settings */
};
/*
* Configure the LCD DMA for a palette load operation and do the palette
* downloading synchronously. We don't use the frame+palette load mode of
- * the controller, since the palette can always be downloaded seperately.
+ * the controller, since the palette can always be downloaded separately.
*/
static void load_palette(void)
{
src = (sy * stride) + (bpp * sx);
}
- /* set source adress */
+ /* set source address */
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_SRC_START0, (src & 0xff));
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_SRC_START1, (src >> 8) & 0x00ff);
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_SRC_START2, (src >> 16) & 0x00ff);
- /* set destination adress */
+ /* set destination address */
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_DST_START0, (dst & 0xff));
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_DST_START1, (dst >> 8) & 0x00ff);
s1d13xxxfb_writereg(info->par, S1DREG_BBLT_DST_START2, (dst >> 16) & 0x00ff);
struct sm501fb_par *par = info->par;
struct sm501fb_info *fbi = par->info;
unsigned long pixclock; /* pixelclock in Hz */
- unsigned long sm501pixclock; /* pixelclock the 501 can achive in Hz */
+ unsigned long sm501pixclock; /* pixelclock the 501 can achieve in Hz */
unsigned int mem_type;
unsigned int clock_type;
unsigned int head_addr;
/*
* Allocate a block in the given allocation zone.
* Since we have to byte-swap the bitmap on little-endian
- * machines, this is rather expensive. Therefor we will
+ * machines, this is rather expensive. Therefore we will
* preallocate up to 16 blocks from the same word, if
* possible. We are not doing preallocations in the
* header zone, though.
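A rough sketch of the per-word preallocation idea (illustrative only; it assumes a set bit marks a block in use, which may be the opposite of the on-disk convention):

#include <stdint.h>

/* Reserve up to "want" consecutive free blocks starting at bit "start"
 * of a single 32-bit bitmap word, so the (byte-swapped) word only has
 * to be examined once.  Returns the number of blocks reserved. */
static int prealloc_from_word(uint32_t word, int start, int want)
{
	int bit, got = 0;

	for (bit = start; bit < 32 && got < want; bit++) {
		if (word & (UINT32_C(1) << bit))	/* block already in use */
			break;
		got++;
	}
	return got;
}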
/*
* fill up all the fields in prstatus from the given task struct, except
- * registers which need to be filled up seperately.
+ * registers which need to be filled up separately.
*/
static void fill_prstatus(struct elf_prstatus *prstatus,
struct task_struct *p, long signr)
* Extracts sharename from full UNC.
* i.e. strips from UNC trailing path that is not part of share
* name and fixes up a missing '\' at the beginning of a DFS node referral
- * if neccessary.
+ * if necessary.
* Returns pointer to share name on success or ERR_PTR on error.
* Caller is responsible for freeing returned string.
*/
goto parse_DFS_referrals_exit;
}
- /* collect neccessary data from referrals */
+ /* collect necessary data from referrals */
for (i = 0; i < *num_of_nodes; i++) {
char *temp;
int max_len;
}
/**
- * mext_check_argumants - Check whether move extent can be done
+ * mext_check_arguments - Check whether move extent can be done
*
* @orig_inode: original inode
* @donor_inode: donor inode
req->in.args[0].size = sizeof(*arg);
req->in.args[0].value = arg;
req->out.numargs = 1;
- /* Variable length arguement used for backward compatibility
+ /* Variable length argument used for backward compatibility
with interface version < 7.5. Rest of init_out is zeroed
by do_get_request(), so a short reply is not a problem */
req->out.argvar = 1;
/**
* gfs2_lm_mount - mount a locking protocol
* @sdp: the filesystem
- * @args: mount arguements
+ * @args: mount arguments
* @silent: if 1, don't complain if the FS isn't a GFS2 fs
*
* Returns: errno
* the case where our storage is so fast that it is more optimal to go
* ahead and force a flush and wait for the transaction to be committed
* than it is to wait for an arbitrary amount of time for new writers to
- * join the transaction. We acheive this by measuring how long it takes
+ * join the transaction. We achieve this by measuring how long it takes
* to commit a transaction, and compare it with how long this
* transaction has been running, and if run time < commit time then we
* sleep for the delta and commit. This greatly helps super fast disks
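A condensed sketch of that heuristic (not the journal code itself; the structure fields and the usleep() granularity are assumptions made for the example):

#include <stdint.h>
#include <unistd.h>

struct journal_stats {
	uint64_t avg_commit_ns;		/* measured cost of a commit */
	uint64_t trans_start_ns;	/* when the running transaction began */
};

static void maybe_delay_commit(struct journal_stats *s, uint64_t now_ns)
{
	uint64_t run_ns = now_ns - s->trans_start_ns;

	if (run_ns < s->avg_commit_ns) {
		/* Storage is fast: waiting for more writers would cost more
		 * than committing, so only sleep out the small delta. */
		usleep((s->avg_commit_ns - run_ns) / 1000);
	}
	/* ...then force the flush and commit the transaction... */
}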
} } while (0);
/* Encode as an array of strings the string given with components
- * seperated @sep.
+ * separated @sep.
*/
static __be32 nfsd4_encode_components(char sep, char *components,
__be32 **pp, int *buflen)
* ocfs2_file_lock() and ocfs2_file_unlock() map to a single pair of
* flock() calls. The locking approach this requires is sufficiently
* different from all other cluster lock types that we implement a
- * seperate path to the "low-level" dlm calls. In particular:
+ * separate path to the "low-level" dlm calls. In particular:
*
* - No optimization of lock levels is done - we take at exactly
* what's been requested.
if (i == -1) {
/*
* Holes can be larger than the maximum size of an
- * extent, so we return their lengths in a seperate
+ * extent, so we return their lengths in a separate
* field.
*/
if (hole_len) {
return 0; // No free blocks in this bitmap
}
- /* search for a first zero bit -- beggining of a window */
+ /* search for a first zero bit -- beginning of a window */
*beg = reiserfs_find_next_zero_le_bit
((unsigned long *)(bh->b_data), boundary, *beg);
HIL_CMD_PR6 = 0x45, /* Prompt6 */
HIL_CMD_PR7 = 0x46, /* Prompt7 */
HIL_CMD_PRM = 0x47, /* Prompt (General Purpose) */
- HIL_CMD_AK1 = 0x48, /* Acknowlege1 */
- HIL_CMD_AK2 = 0x49, /* Acknowlege2 */
- HIL_CMD_AK3 = 0x4a, /* Acknowlege3 */
- HIL_CMD_AK4 = 0x4b, /* Acknowlege4 */
- HIL_CMD_AK5 = 0x4c, /* Acknowlege5 */
- HIL_CMD_AK6 = 0x4d, /* Acknowlege6 */
- HIL_CMD_AK7 = 0x4e, /* Acknowlege7 */
- HIL_CMD_ACK = 0x4f, /* Acknowlege (General Purpose) */
+ HIL_CMD_AK1 = 0x48, /* Acknowledge1 */
+ HIL_CMD_AK2 = 0x49, /* Acknowledge2 */
+ HIL_CMD_AK3 = 0x4a, /* Acknowledge3 */
+ HIL_CMD_AK4 = 0x4b, /* Acknowledge4 */
+ HIL_CMD_AK5 = 0x4c, /* Acknowledge5 */
+ HIL_CMD_AK6 = 0x4d, /* Acknowledge6 */
+ HIL_CMD_AK7 = 0x4e, /* Acknowledge7 */
+ HIL_CMD_ACK = 0x4f, /* Acknowledge (General Purpose) */
/* 0x50 to 0x78 reserved for future use */
/* 0x80 to 0xEF device-specific commands */
usually the condition is softened to regions that _may_ have been target of
in-flight WRITE IO, e.g. by only lazily clearing the on-disk write-intent
bitmap, trading frequency of meta data transactions against amount of
- (possibly unneccessary) resync traffic.
+ (possibly unnecessary) resync traffic.
If we set a hard limit on the area that may be "hot" at any given time, we
limit the amount of resync traffic needed for crash recovery.
struct list_head *scm_work_list;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- /* Index of current stored adress in ret_stack */
+ /* Index of current stored address in ret_stack */
int curr_ret_stack;
/* Stack of return addresses for return function tracing */
struct ftrace_ret_stack *ret_stack;
u8 started;
/*
* offset where second field starts from the starting of the
- * buffer for field seperated YCbCr formats
+ * buffer for field separated YCbCr formats
*/
u32 field_off;
};
}
/* This function calculates a "timeout" which is equivalent to the timeout of a
- * TCP connection after "boundary" unsucessful, exponentially backed-off
+ * TCP connection after "boundary" unsuccessful, exponentially backed-off
* retransmissions with an initial RTO of TCP_RTO_MIN.
*/
static bool retransmits_timed_out(struct sock *sk,
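As an illustration of the timeout being described, the cumulative back-off can be computed like this (a user-space sketch; RTO_MIN_MS and RTO_MAX_MS mirror the usual 200 ms and 120 s TCP values but are assumptions here):

#include <stdio.h>

#define RTO_MIN_MS	200
#define RTO_MAX_MS	120000

/* Total time spent after "boundary" retransmissions, each one doubling
 * the RTO until it is capped at RTO_MAX_MS. */
static unsigned int backoff_timeout_ms(unsigned int boundary)
{
	unsigned int rto = RTO_MIN_MS, total = 0, i;

	for (i = 0; i < boundary; i++) {
		total += rto;
		rto *= 2;
		if (rto > RTO_MAX_MS)
			rto = RTO_MAX_MS;
	}
	return total;
}

int main(void)
{
	printf("%u ms\n", backoff_timeout_ms(15));
	return 0;
}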
break;
default:
/* should not get here, PLINK_BLOCKED is dealt with at the
- * beggining of the function
+ * beginning of the function
*/
spin_unlock_bh(&sta->lock);
break;
* tabs, spaces and continuation lines, which are treated as a single whitespace
* character.
*
- * Some headers may appear multiple times. A comma seperated list of values is
+ * Some headers may appear multiple times. A comma separated list of values is
* equivalent to multiple headers.
*/
static const struct sip_header ct_sip_hdrs[] = {
}
EXPORT_SYMBOL_GPL(ct_sip_get_header);
-/* Get next header field in a list of comma seperated values */
+/* Get next header field in a list of comma separated values */
static int ct_sip_next_header(const struct nf_conn *ct, const char *dptr,
unsigned int dataoff, unsigned int datalen,
enum sip_header_types type,
/*
* xt_hashlimit - Netfilter module to limit the number of packets per time
- * seperately for each hashbucket (sourceip/sourceport/dstip/dstport)
+ * separately for each hashbucket (sourceip/sourceport/dstip/dstport)
*
* (C) 2003-2004 by Harald Welte <laforge@netfilter.org>
* Copyright © CC Computer Consultants GmbH, 2007 - 2008
* used to provide an upper bound to this doubling operation.
*
* Special Case: the first HB doesn't trigger exponential backoff.
- * The first unacknowleged HB triggers it. We do this with a flag
+ * The first unacknowledged HB triggers it. We do this with a flag
* that indicates that we have an outstanding HB.
*/
if (!is_hb || transport->hb_sent) {
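A condensed sketch of that rule (the field and function names here are illustrative, not the sctp structures):

/* Double the RTO only when this timeout is not for a heartbeat, or when
 * a previously sent heartbeat is still unacknowledged (hb_sent set). */
struct transport_state {
	unsigned int rto;	/* current retransmission timeout */
	unsigned int rto_max;	/* upper bound for the doubling */
	int hb_sent;		/* a heartbeat is outstanding and unacked */
};

static void timeout_backoff(struct transport_state *t, int is_hb)
{
	if (!is_hb || t->hb_sent) {
		t->rto *= 2;			/* exponential back-off */
		if (t->rto > t->rto_max)
			t->rto = t->rto_max;
	}
	if (is_hb)
		t->hb_sent = 1;		/* a heartbeat is now outstanding */
}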
exit 0
}
-# Parse command-line arguements
+# Parse command-line arguments
while [ $# -gt 0 ]; do
case $1 in
--source)
on MADICARD
- playback mixer matrix: [channelout+64] [output] [value]
- input(thru) mixer matrix: [channelin] [output] [value]
- (better do 2 kontrols for seperation ?)
+ (better do 2 kontrols for separation ?)
*/
#define HDSPM_MIXER(xname, xindex) \
reg = snd_soc_read(codec, WM8990_CLOCKING_2);
snd_soc_write(codec, WM8990_CLOCKING_2, reg | WM8990_SYSCLK_SRC);
- /* set up N , fractional mode and pre-divisor if neccessary */
+ /* set up N , fractional mode and pre-divisor if necessary */
snd_soc_write(codec, WM8990_PLL1, pll_div.n | WM8990_SDM |
(pll_div.div2?WM8990_PRESCALE:0));
snd_soc_write(codec, WM8990_PLL2, (u8)(pll_div.k>>8));
new_depth_mask &= ~(1 << (depth - 1));
/*
- * But we keep the older depth mask for the line seperator
+ * But we keep the older depth mask for the line separator
* to keep the level link until we reach the last child
*/
ret += ipchain__fprintf_graph_line(fp, depth, depth_mask,