1 - major number
2 - minor number
3 - device name
- 4 - reads completed succesfully
+ 4 - reads completed successfully
5 - reads merged
6 - sectors read
7 - time spent reading (ms)
Description:
The /sys/block/<disk>/stat files display the I/O
statistics of disk <disk>. They contain 11 fields:
- 1 - reads completed succesfully
+ 1 - reads completed successfully
2 - reads merged
3 - sectors read
4 - time spent reading (ms)
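A minimal user-space sketch that reads the first four of those fields (the device name "sda" is only an assumed example):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
		FILE *f = fopen("/sys/block/sda/stat", "r");	/* "sda" is an example */

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* fields 1-4: reads completed, reads merged, sectors read, ms spent reading */
		if (fscanf(f, "%llu %llu %llu %llu",
			   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks) == 4)
			printf("reads=%llu merged=%llu sectors=%llu read_ms=%llu\n",
			       rd_ios, rd_merges, rd_sectors, rd_ticks);
		fclose(f);
		return 0;
	}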
<sect1 id="Multiple_chip_control">
<title>Multiple chip control</title>
<para>
- The nand driver can control chip arrays. Therefor the
+ The nand driver can control chip arrays. Therefore the
board driver must provide its own select_chip function. This
function must (de)select the requested chip.
The function pointer in the nand_chip structure must
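A minimal sketch of such a board select_chip hook; board_nand_select() and board_nand_deselect_all() are hypothetical helpers standing in for whatever chip-select hardware the board actually has:

	/* assumes <linux/mtd/mtd.h> and <linux/mtd/nand.h> */
	static void board_select_chip(struct mtd_info *mtd, int chip)
	{
		/* chip == -1 means deselect, any other value selects that chip */
		if (chip == -1)
			board_nand_deselect_all();	/* hypothetical helper */
		else
			board_nand_select(chip);	/* hypothetical helper */
	}

	/* in the board init code, before nand_scan(): */
	this->select_chip = board_select_chip;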
* you do, leave them untouched.
* Including fewer markers will make the
* resulting code smaller, but there will
- * be fewer aplications which can read it.
+ * be fewer applications which can read it.
* The presence of the APP and COM marker
* is influenced by APP_len and COM_len
* ONLY, not by this property! */
pages of the given size and map them onto the virtually contiguous
memory. The virtual pointer is addressed in runtime->dma_area.
The physical address (runtime->dma_addr) is set to zero,
- because the buffer is physically non-contigous.
+ because the buffer is physically non-contiguous.
The physical address table is set up in sgbuf->table.
You can get the physical address at a certain offset via
<function>snd_pcm_sgbuf_get_addr()</function>.
- moved transfer control (pid filter, fifo control) from usb driver to frontend, it seems
better settled there (added xfer_ops-struct)
- created common files for frontends (mc/p/mb)
- 2004-09-28 - added support for a new device (Unkown, vendor ID is Hyper-Paltek)
+ 2004-09-28 - added support for a new device (Unknown, vendor ID is Hyper-Paltek)
2004-09-20 - added support for a new device (Compro DVB-U2000), thanks
to Amaury Demol for reporting
- changed usb TS transfer method (several urbs, stopping transfer
addr = mmap(NULL, getpagesize() * num,
PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, fd, 0);
if (addr == MAP_FAILED)
- err(1, "Mmaping %u pages of /dev/zero", num);
+ err(1, "Mmapping %u pages of /dev/zero", num);
/*
* One neat mmap feature is that you can close the fd, and it
Disks are exposed with WCE=1. User is advised to enable Write Back
mode only when the controller has battery backup. At this time
Synchronize cache is not supported by the FW. Driver will short-cycle
- the cmd and return sucess without sending down to FW.
+ the cmd and return success without sending down to FW.
1 Release Date : Sun Jan. 14 11:21:32 PDT 2007 -
Sumant Patro <Sumant.Patro@lsil.com>/Bo Yang
The bulk of the driver will be managing the I/O queue fed by transfer().
That queue could be purely conceptual. For example, a driver used only
-for low-frequency sensor acess might be fine using synchronous PIO.
+for low-frequency sensor access might be fine using synchronous PIO.
But the queue will probably be very real, using message->queue, PIO,
often DMA (especially if the root filesystem is in SPI flash), and
mmap_min_addr
This file indicates the amount of address space which a user process will
-be restricted from mmaping. Since kernel null dereference bugs could
+be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
default this value is set to 0 and no protections will be enforced by the
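A small illustrative user-space sketch (assumptions: 4 KiB pages, an unprivileged caller); with a non-zero mmap_min_addr the fixed mapping at address zero should fail with EPERM:

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* ask for one page at address 0; denied below mmap_min_addr
		 * unless the caller has the needed capability */
		void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

		if (p == MAP_FAILED)
			perror("mmap at address 0");
		else
			puts("mapped page zero (mmap_min_addr is 0 or caller is privileged)");
		return 0;
	}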
xxxx vend:prod
----
-spca501 0000:0000 MystFromOri Unknow Camera
+spca501 0000:0000 MystFromOri Unknown Camera
m5602 0402:5602 ALi Video Camera Controller
spca501 040a:0002 Kodak DVC-325
spca500 040a:0300 Kodak EZ200
present = (flags >> i) & 1;
if (!page_flag_names[i]) {
if (present)
- fatal("unkown flag bit %d\n", i);
+ fatal("unknown flag bit %d\n", i);
continue;
}
buf[j++] = present ? page_flag_names[i][0] : '_';
}
if (bootmap_start == -1)
- panic("couldn't find a contigous place for the bootmap");
+ panic("couldn't find a contiguous place for the bootmap");
/* Allocate the bootmap and mark the whole MM as reserved. */
bootmap_size = init_bootmem_node(NODE_DATA(nid), bootmap_start,
{
struct scoop_dev *sdev = container_of(chip, struct scoop_dev, gpio);
- /* XXX: I'm usure, but it seems so */
+ /* XXX: I'm unsure, but it seems so */
return ioread16(sdev->base + SCOOP_GPRR) & (1 << (offset + 1));
}
* @brief Get next available transaction width
*
*
-* @return On sucess : Next avail able transaction width
+* @return On success : Next available transaction width
* On failure : dmacHw_TRANSACTION_WIDTH_8
*
* @note
/**
* Creates a descriptor ring from a memory mapping.
*
-* @return 0 on sucess, error code otherwise.
+* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
/*
* This __REG() version gives the same results as the one above, except
* that we are fooling gcc somehow so it generates far better and smaller
- * assembly code for access to contigous registers. It's a shame that gcc
+ * assembly code for access to contiguous registers. It's a shame that gcc
* doesn't guess this by itself.
*/
#include <asm/types.h>
writel(win_enable, PCI_BAR_ENABLE);
/*
- * Disable automatic update of address remaping when writing to BARs.
+ * Disable automatic update of address remapping when writing to BARs.
*/
orion5x_setbits(PCI_ADDR_DECODE_CTRL, 1);
}
/* BATTERY */
#define PALMLD_BAT_MAX_VOLTAGE 4000 /* 4.00V maximum voltage */
#define PALMLD_BAT_MIN_VOLTAGE 3550 /* 3.55V critical voltage */
-#define PALMLD_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMLD_BAT_MAX_CURRENT 0 /* unknown */
#define PALMLD_BAT_MIN_CURRENT 0 /* unknown */
#define PALMLD_BAT_MAX_CHARGE 1 /* unknown */
#define PALMLD_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMT5_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMT5_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMT5_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMT5_BAT_MAX_CURRENT 0 /* unknown */
#define PALMT5_BAT_MIN_CURRENT 0 /* unknown */
#define PALMT5_BAT_MAX_CHARGE 1 /* unknown */
#define PALMT5_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTC_BAT_MAX_VOLTAGE 4000 /* 4.00V maximum voltage */
#define PALMTC_BAT_MIN_VOLTAGE 3550 /* 3.55V critical voltage */
-#define PALMTC_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTC_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTC_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTC_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTC_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTE2_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMTE2_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMTE2_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTE2_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTE2_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTE2_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTE2_BAT_MIN_CHARGE 1 /* unknown */
/* BATTERY */
#define PALMTX_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMTX_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMTX_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMTX_BAT_MAX_CURRENT 0 /* unknown */
#define PALMTX_BAT_MIN_CURRENT 0 /* unknown */
#define PALMTX_BAT_MAX_CHARGE 1 /* unknown */
#define PALMTX_BAT_MIN_CHARGE 1 /* unknown */
/* Battery */
#define PALMZ72_BAT_MAX_VOLTAGE 4000 /* 4.00v current voltage */
#define PALMZ72_BAT_MIN_VOLTAGE 3550 /* 3.55v critical voltage */
-#define PALMZ72_BAT_MAX_CURRENT 0 /* unknokn */
+#define PALMZ72_BAT_MAX_CURRENT 0 /* unknown */
#define PALMZ72_BAT_MIN_CURRENT 0 /* unknown */
#define PALMZ72_BAT_MAX_CHARGE 1 /* unknown */
#define PALMZ72_BAT_MIN_CHARGE 1 /* unknown */
[0] = "hsmmc",
[1] = "hsmmc",
[2] = "mmc_bus",
- /* [3] = "48m", - note not succesfully used yet */
+ /* [3] = "48m", - note not successfully used yet */
};
void s3c6400_setup_sdhci_cfg_card(struct platform_device *dev,
[0] = "hsmmc",
[1] = "hsmmc",
[2] = "mmc_bus",
- /* [3] = "48m", - note not succesfully used yet */
+ /* [3] = "48m", - note not successfully used yet */
};
/**
- * sa1100_request_dma - allocate one of the SA11x0's DMA chanels
+ * sa1100_request_dma - allocate one of the SA11x0's DMA channels
* @device: The SA11x0 peripheral targeted by this request
* @device_id: An ascii name for the claiming device
* @callback: Function to be called when the DMA completes
* sets up a single pin:
* - reserves the pin so that it is not claimed by another driver
* - sets up the iomux according to the configuration
- * - if the pin is configured as a GPIO, we claim it throug kernel gpiolib
+ * - if the pin is configured as a GPIO, we claim it through kernel gpiolib
*/
int mxc_iomux_alloc_pin(const unsigned int pin, const char *label);
/*
* sets up a single pin:
* - reserves the pin so that it is not claimed by another driver
* - sets up the iomux according to the configuration
- * - if the pin is configured as a GPIO, we claim it throug kernel gpiolib
+ * - if the pin is configured as a GPIO, we claim it through kernel gpiolib
*/
int mxc_iomux_alloc_pin(const unsigned int pin_mode, const char *label);
/*
* register to follow the ratio of duty_ns vs. period_ns
* accordingly.
*
- * This is good enought for programming the brightness of
+ * This is good enough for programming the brightness of
* the LCD backlight.
*
* The real implementation would divide PERCLK[0] first by
* OMAP_DMA_DYNAMIC_CHAIN
* @params - Channel parameters
*
- * @return - Succes : 0
+ * @return - Success : 0
* Failure: -EINVAL/-ENOMEM
*/
int omap_request_dma_chain(int dev_id, const char *dev_name,
#define TIPB_SWITCH_BASE (0xfffbc800)
#define OMAP16XX_MMCSD2_SSW_MPU_CONF (TIPB_SWITCH_BASE + 0x160)
-/* UART3 Registers Maping through MPU bus */
+/* UART3 Registers Mapping through MPU bus */
#define UART3_RHR (OMAP_UART3_BASE + 0)
#define UART3_THR (OMAP_UART3_BASE + 0)
#define UART3_DLL (OMAP_UART3_BASE + 0)
/* the calculation for the VA of this must ensure that
* it is the same distance apart from the UART in the
* physical address space, as the initial mapping for the IO
- * is done as a 1:1 maping. This puts it (currently) at
+ * is done as a 1:1 mapping. This puts it (currently) at
* 0xFA800000, which is not in the way of any current mapping
* by the base system.
*/
bool "Atmel AC97 Sound support"
help
This enables Sound support for the Hammerhead board. You may
- also go trough the ALSA settings to get it working.
+ also go through the ALSA settings to get it working.
Choose 'Y' here if you have ordered a Corona daughter board and
want to make your board funky.
/*
* Similar to get_user, do some address checking, then dereference
- * Return true on sucess, false on bad address
+ * Return true on success, false on bad address
*/
static bool get_instruction(unsigned short *val, unsigned short *address)
{
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xFFC03300 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xFFC03304 /* HMDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xFFC03308 /* HMDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xFFC0330C /* HMDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xFFC03310 /* HMDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xFFC03314 /* HMDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xFFC03318 /* HMDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xFFC03340 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xFFC03344 /* HMDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xFFC03348 /* HMDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xFFC0334C /* HMDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xFFC03350 /* HMDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xFFC03354 /* HMDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xFFC03358 /* HMDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
#define HMDMA0_CONTROL 0xffc04500 /* Handshake MDMA0 Control Register */
#define HMDMA0_ECINIT 0xffc04504 /* Handshake MDMA0 Initial Edge Count Register */
#define HMDMA0_BCINIT 0xffc04508 /* Handshake MDMA0 Initial Block Count Register */
-#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshhold Register */
+#define HMDMA0_ECURGENT 0xffc0450c /* Handshake MDMA0 Urgent Edge Count Threshold Register */
#define HMDMA0_ECOVERFLOW 0xffc04510 /* Handshake MDMA0 Edge Count Overflow Interrupt Register */
#define HMDMA0_ECOUNT 0xffc04514 /* Handshake MDMA0 Current Edge Count Register */
#define HMDMA0_BCOUNT 0xffc04518 /* Handshake MDMA0 Current Block Count Register */
#define HMDMA1_CONTROL 0xffc04540 /* Handshake MDMA1 Control Register */
#define HMDMA1_ECINIT 0xffc04544 /* Handshake MDMA1 Initial Edge Count Register */
#define HMDMA1_BCINIT 0xffc04548 /* Handshake MDMA1 Initial Block Count Register */
-#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshhold Register */
+#define HMDMA1_ECURGENT 0xffc0454c /* Handshake MDMA1 Urgent Edge Count Threshold Register */
#define HMDMA1_ECOVERFLOW 0xffc04550 /* Handshake MDMA1 Edge Count Overflow Interrupt Register */
#define HMDMA1_ECOUNT 0xffc04554 /* Handshake MDMA1 Current Edge Count Register */
#define HMDMA1_BCOUNT 0xffc04558 /* Handshake MDMA1 Current Block Count Register */
/* Are we prepared to handle this kernel fault?
*
* (The kernel has valid exception-points in the source
- * when it acesses user-memory. When it fails in one
+ * when it accesses user-memory. When it fails in one
* of those points, we find it in a table and do a jump
* to some fixup code that loads an appropriate error
* code)
#endif
/*
- ** Not virtually contigous.
+ ** Not virtually contiguous.
** Terminate prev chunk.
** Start a new chunk.
**
(p6) br.cond.spnt .ia32_strace_check_retval
;; // prevent RAW on r8
END(ia32_ret_from_clone)
- // fall thrugh
+ // fall through
GLOBAL_ENTRY(ia32_ret_from_syscall)
PT_REGS_UNWIND_INFO(0)
unsigned long ip; /* where the overflow interrupt happened */
unsigned long tstamp; /* ar.itc when entering perfmon intr. handler */
- unsigned short cpu; /* cpu on which the overfow occured */
+ unsigned short cpu; /* cpu on which the overflow occurred */
unsigned short set; /* event set active when overflow occurred */
int tgid; /* thread group id (for NPTL, this is getpid()) */
} pfm_default_smpl_entry_t;
#define IIO_IIDSR_LVL_SHIFT 0
#define IIO_IIDSR_LVL_MASK 0x000000ff
-/* Xtalk timeout threshhold register (IIO_IXTT) */
+/* Xtalk timeout threshold register (IIO_IXTT) */
#define IXTT_RRSP_TO_SHFT 55 /* read response timeout */
#define IXTT_RRSP_TO_MASK (0x1FULL << IXTT_RRSP_TO_SHFT)
#define IXTT_RRSP_PS_SHFT 32 /* read response TO prescaler */
case ESI_DESC_ENTRY_POINT:
break;
default:
- printk(KERN_WARNING "Unkown table type %d found in "
+ printk(KERN_WARNING "Unknown table type %d found in "
"ESI table, ignoring rest of table\n", *p);
return -ENODEV;
}
* IA64_THREAD_DBG_VALID set. This indicates a task which was
* able to use the debug registers for debugging purposes via
* ptrace(). Therefore we know it was not using them for
- * perfmormance monitoring, so we only decrement the number
+ * performance monitoring, so we only decrement the number
* of "ptraced" debug register users to keep the count up to date
*/
int
bra.l _real_ovfl
-# overflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# overflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
fovfl_inex_on:
bra.l _real_unfl
-# undeflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# underflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
funfl_inex_on:
tst.w %d0 # is instr fmovm?
bmi.b iea_dis_fmovm # yes
-# instruction is using an extended precision immediate operand. therefore,
+# instruction is using an extended precision immediate operand. Therefore,
# the total instruction length is 16 bytes.
iea_dis_immed:
mov.l &0x10,%d0 # 16 bytes of instruction
bge.b sok_norm2 # thank goodness no
# the multiply factor that we're trying to create should be a denorm
-# for the multiply to work. therefore, we're going to actually do a
+# for the multiply to work. Therefore, we're going to actually do a
# multiply with a denorm which will cause an unimplemented data type
# exception to be put into the machine which will be caught and corrected
# later. we don't do this with the DENORMs above because this method
#
# operand will underflow AND underflow or inexact is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fin_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# The destination was In Range and the source was a ZERO. The result,
-# therefore, is an INF w/ the proper sign.
+# therefore, is an INF w/ the proper sign.
# So, determine the sign and return a new INF (w/ the j-bit cleared).
#
global fdiv_inf_load # global for fsgldiv
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fneg_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fabs_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# the ZEROes have opposite signs:
-# - therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
+# - Therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
# - -ZERO is returned in the case of RM.
#
fadd_zero_2_chk_rm:
#
# the ZEROes have the same signs:
-# - therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
+# - Therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
# - -ZERO is returned in the case of RM.
#
fsub_zero_2_chk_rm:
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fsqrt_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
tst.l %d0
bne.b fout_pack_set
# "mantissa" is all zero which means that the answer is zero. but, the '040
-# algorithm allows the exponent to be non-zero. the 881/2 do not. therefore,
+# algorithm allows the exponent to be non-zero. the 881/2 do not. Therefore,
# if the mantissa is zero, I will zero the exponent, too.
# the question now is whether the exponent's sign bit is allowed to be non-zero
# for a zero, also...
rts
# #
-# dnrm_lp(): normalize exponent/mantissa to specified threshhold #
+# dnrm_lp(): normalize exponent/mantissa to specified threshold #
# #
# INPUT: #
# %a0 : points to the operand to be denormalized #
bgt.b unnorm_nrm_zero # yes; denorm only until exp = 0
#
-# exponent would not go < 0. therefore, number stays normalized
+# exponent would not go < 0. Therefore, number stays normalized
#
sub.w %d0, %d1 # shift exponent value
mov.w FTEMP_EX(%a0), %d0 # load old exponent
bra.l _real_ovfl
-# overflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# overflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
fovfl_inex_on:
bra.l _real_unfl
-# undeflow occurred but is disabled. meanwhile, inexact is enabled. therefore,
+# underflow occurred but is disabled. meanwhile, inexact is enabled. Therefore,
# we must jump to real_inex().
funfl_inex_on:
tst.w %d0 # is instr fmovm?
bmi.b iea_dis_fmovm # yes
-# instruction is using an extended precision immediate operand. therefore,
+# instruction is using an extended precision immediate operand. Therefore,
# the total instruction length is 16 bytes.
iea_dis_immed:
mov.l &0x10,%d0 # 16 bytes of instruction
rts
# #
-# dnrm_lp(): normalize exponent/mantissa to specified threshhold #
+# dnrm_lp(): normalize exponent/mantissa to specified threshold #
# #
# INPUT: #
# %a0 : points to the operand to be denormalized #
bgt.b unnorm_nrm_zero # yes; denorm only until exp = 0
#
-# exponent would not go < 0. therefore, number stays normalized
+# exponent would not go < 0. Therefore, number stays normalized
#
sub.w %d0, %d1 # shift exponent value
mov.w FTEMP_EX(%a0), %d0 # load old exponent
tst.l %d0
bne.b fout_pack_set
# "mantissa" is all zero which means that the answer is zero. but, the '040
-# algorithm allows the exponent to be non-zero. the 881/2 do not. therefore,
+# algorithm allows the exponent to be non-zero. the 881/2 do not. Therefore,
# if the mantissa is zero, I will zero the exponent, too.
# the question now is whether the exponent's sign bit is allowed to be non-zero
# for a zero, also...
#
# operand will underflow AND underflow or inexact is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fin_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# The destination was In Range and the source was a ZERO. The result,
-# therefore, is an INF w/ the proper sign.
+# therefore, is an INF w/ the proper sign.
# So, determine the sign and return a new INF (w/ the j-bit cleared).
#
global fdiv_inf_load # global for fsgldiv
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fneg_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fabs_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
#
# the ZEROes have opposite signs:
-# - therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
+# - Therefore, we return +ZERO if the rounding modes are RN,RZ, or RP.
# - -ZERO is returned in the case of RM.
#
fadd_zero_2_chk_rm:
#
# the ZEROes have the same signs:
-# - therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
+# - Therefore, we return +ZERO if the rounding mode is RN,RZ, or RP
# - -ZERO is returned in the case of RM.
#
fsub_zero_2_chk_rm:
#
# operand will underflow AND underflow is enabled.
-# therefore, we must return the result rounded to extended precision.
+# Therefore, we must return the result rounded to extended precision.
#
fsqrt_sd_unfl_ena:
mov.l FP_SCR0_HI(%a6),FP_SCR1_HI(%a6)
/*
* Macintosh hardware profile data - unused, see macintosh.h for
- * resonable type values
+ * reasonable type values
*/
#define BI_MAC_VIA1BASE 0x8010 /* Mac VIA1 base address (always present) */
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
* It is based on demo code originally Copyright 2001 by Intel Corp, taken from
* http://www.embedded.com/showArticle.jhtml?articleID=19205567
*
- * Attempts were made, unsuccesfully, to contact the original
+ * Attempts were made, unsuccessfully, to contact the original
* author of this code (Michael Morrow, Intel). Below is the original
* copyright notice.
*
/* BIG FAT WARNING: races danger!
No protections exist here. Current users are only early init code,
- when locking is not needed because no cuncurency yet exists there,
+ when locking is not needed because no concurrency yet exists there,
and GPIO IRQ dispatcher, which does locking.
However, if many uses will ever happen, proper locking will be needed
- including locking between different uses
u32 _unused5;
u8 _write[3];
volatile u8 write;
-#define SGIOC_WRITE_NTHRESH 0x01 /* use 4.5db threshhold */
+#define SGIOC_WRITE_NTHRESH 0x01 /* use 4.5db threshold */
#define SGIOC_WRITE_TPSPEED 0x02 /* use 100ohm TP speed */
#define SGIOC_WRITE_EPSEL 0x04 /* force cable mode: 1=AUI 0=TP */
#define SGIOC_WRITE_EASEL 0x08 /* 1=autoselect 0=manual cable selection */
#define G_MAC_TXD_WEIGHT1(x) _SB_GETVALUE(x, S_MAC_TXD_WEIGHT1, M_MAC_TXD_WEIGHT1)
/*
- * MAC Fifo Threshhold registers (Table 9-14)
+ * MAC Fifo Threshold registers (Table 9-14)
* Register: MAC_THRSH_CFG_0
* Register: MAC_THRSH_CFG_1
* Register: MAC_THRSH_CFG_2
struct {
u64 rsvd1: 15,
error: 1, /* Widget rcvd wr resp pkt w/ error */
- ovflow: 5, /* Over flow count. perf measurement */
+ ovflow: 5, /* Overflow count. perf measurement */
fire_and_forget: 1, /* Launch Write without response */
mode: 2, /* Widget operation Mode */
rsvd2: 2,
if (!((asid += ASID_INC) & ASID_MASK) ) {
if (cpu_has_vtag_icache)
flush_icache_all();
- /* Traverse all online CPUs (hack requires contigous range) */
+ /* Traverse all online CPUs (hack requires contiguous range) */
for_each_online_cpu(i) {
/*
* We don't need to worry about our own CPU, nor those of
case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
DPDNORMX;
- /* FAAL THOROUGH */
+ /* FALL THROUGH */
case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM):
/* normalize ym,ye */
if (chip_id == SMSC_FDC37M81X_CHIP_ID)
smsc_fdc37m81x_config_end();
else {
- printk(KERN_WARNING "%s: unknow chip id 0x%02x\n", __func__,
+ printk(KERN_WARNING "%s: unknown chip id 0x%02x\n", __func__,
chip_id);
g_smsc_fdc37m81x_base = 0;
}
#define PMRN_PMLCB2 0x112 /* PM Local Control B2 */
#define PMRN_PMLCB3 0x113 /* PM Local Control B3 */
-#define PMLCB_THRESHMUL_MASK 0x0700 /* Threshhold Multiple Field */
+#define PMLCB_THRESHMUL_MASK 0x0700 /* Threshold Multiple Field */
#define PMLCB_THRESHMUL_SHIFT 8
#define PMLCB_THRESHOLD_MASK 0x003f /* Threshold Field */
{ 0x2030, 0x08 /* SIGFPE */ }, /* spe fp data */
{ 0x2040, 0x08 /* SIGFPE */ }, /* spe fp data */
{ 0x2050, 0x08 /* SIGFPE */ }, /* spe fp round */
- { 0x2060, 0x0e /* SIGILL */ }, /* performace monitor */
+ { 0x2060, 0x0e /* SIGILL */ }, /* performance monitor */
{ 0x2900, 0x08 /* SIGFPE */ }, /* apu unavailable */
{ 0x3100, 0x0e /* SIGALRM */ }, /* fixed interval timer */
{ 0x3200, 0x02 /* SIGINT */ }, /* watchdog */
mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TIE | THRM1_TID);
/* setup THRM2,
- * threshold, valid bit, enable interrupts, interrupt when above threshhold
+ * threshold, valid bit, enable interrupts, interrupt when above threshold
*/
mtspr (SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | THRM1_TIE);
#else
* to a latch. The new values (interrupt setting bits, reset
* counter value etc.) are not copied to the actual registers
* until the performance monitor is enabled. In order to get
- * this to work as desired, the permormance monitor needs to
+ * this to work as desired, the performance monitor needs to
* be disabled while writing to the latches. This is a
* HW design issue.
*/
* to a latch. The new values (interrupt setting bits, reset
* counter value etc.) are not copied to the actual registers
* until the performance monitor is enabled. In order to get
- * this to work as desired, the permormance monitor needs to
+ * this to work as desired, the performance monitor needs to
* be disabled while writing to the latches. This is a
* HW design issue.
*/
};
/* ======================================================================== */
-/* PCI configuration acess */
+/* PCI configuration access */
/* ======================================================================== */
static int
* 1 -> Skip the device but act as if the access was successful
* (return 0xff's on reads, eventually, cache config space
* accesses in a later version)
- * -1 -> Hide the device (unsuccessful acess)
+ * -1 -> Hide the device (unsuccessful access)
*/
static int u3_ht_skip_device(struct pci_controller *hose,
struct pci_bus *bus, unsigned int devfn)
dp = ((unsigned int*)tbl->it_base) + index;
- /* On U3, all memory is contigous, so we can move this
+ /* On U3, all memory is contiguous, so we can move this
* out of the loop.
*/
l = npages;
__u16 opc = *((__u16 *) opcode);
if ((opc & 0x90) == 0) { /* test if rx in {0,2,4,6} */
- /* we got an exception therfore ry can't be in {0,2,4,6} */
+ /* we got an exception therefore ry can't be in {0,2,4,6} */
asm volatile( /* load rx from fp_regs.fprs[ry] */
" bras 1,0f\n"
" ld 0,0(%1)\n"
__u16 opc = *((__u16 *) opcode);
if ((opc & 0x90) == 0) { /* test if rx in {0,2,4,6} */
- /* we got an exception therfore ry can't be in {0,2,4,6} */
+ /* we got an exception therefore ry can't be in {0,2,4,6} */
asm volatile( /* load rx from fp_regs.fprs[ry] */
" bras 1,0f\n"
" le 0,0(%1)\n"
#include <linux/types.h>
/*
- * FIXME: Acessing the desc_struct through its fields is more elegant,
+ * FIXME: Accessing the desc_struct through its fields is more elegant,
* and should be the one valid thing to do. However, a lot of open code
- * still touches the a and b acessors, and doing this allow us to do it
+ * still touches the a and b accessors, and doing this allows us to do it
* incrementally. We keep the signature as a struct, rather than a union,
* so we can get rid of it transparently in the future -- glommer
*/
/*
* generic node memory support, the following assumptions apply:
*
- * 1) memory comes in 64Mb contigious chunks which are either present or not
+ * 1) memory comes in 64Mb contiguous chunks which are either present or not
* 2) we will not have more than 64Gb in total
*
* for now assume that 64Gb is max amount of RAM for whole system
#define DESC_STATUS_SOURCE_TIMEOUT 3
/*
- * source side threshholds at which message retries print a warning
+ * source side thresholds at which message retries print a warning
*/
#define SOURCE_TIMEOUT_LIMIT 20
#define DESTINATION_TIMEOUT_LIMIT 20
if (!acpi_sci_override_gsi)
acpi_sci_ioapic_setup(acpi_gbl_FADT.sci_interrupt, 0, 0);
- /* Fill in identity legacy mapings where no override */
+ /* Fill in identity legacy mappings where no override */
mp_config_acpi_legacy_irqs();
count =
goto out;
/*
- * aperture was sucessfully enlarged by 128 MB, try
+ * aperture was successfully enlarged by 128 MB, try
* allocation again
*/
goto retry;
struct pci_dev *dev = NULL;
u16 devid, devid2;
- /* allocate passthroug domain */
+ /* allocate passthrough domain */
pt_domain = protection_domain_alloc();
if (!pt_domain)
return -ENOMEM;
return 0;
/*
- * If we are way outside a reasoable range then just skip forward:
+ * If we are way outside a reasonable range then just skip forward:
*/
if (unlikely(left <= -period)) {
left = period;
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
- * remain disabled thorough out this function.
+ * remain disabled throughout this function.
*/
static int __kprobes kprobe_handler(struct pt_regs *regs)
{
/*
* Interrupts are disabled on entry as trap1 is an interrupt gate and they
- * remain disabled thoroughout this function.
+ * remain disabled throughout this function.
*/
static int __kprobes post_kprobe_handler(struct pt_regs *regs)
{
*/
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate
- * and they remain disabled thorough out this function.
+ * and they remain disabled throughout this function.
*/
int kmmio_handler(struct pt_regs *regs, unsigned long addr)
{
/*
* Interrupts are disabled on entry as trap1 is an interrupt gate
- * and they remain disabled thorough out this function.
+ * and they remain disabled throughout this function.
* This must always get called as the pair to kmmio_handler().
*/
static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
* Description:
* Add this blk_iopoll structure to the pending poll list and trigger the
* raise of the blk iopoll softirq. The driver must already have gotten a
- * succesful return from blk_iopoll_sched_prep() before calling this.
+ * successful return from blk_iopoll_sched_prep() before calling this.
**/
void blk_iopoll_sched(struct blk_iopoll *iop)
{
{ 0x27DF, 0x1028, 0x02b0 }, /* ICH7 on unknown Dell */
{ 0x27DF, 0x1043, 0x1267 }, /* ICH7 on Asus W5F */
{ 0x27DF, 0x103C, 0x30A1 }, /* ICH7 on HP Compaq nc2400 */
- { 0x27DF, 0x103C, 0x361a }, /* ICH7 on unkown HP */
+ { 0x27DF, 0x103C, 0x361a }, /* ICH7 on unknown HP */
{ 0x27DF, 0x1071, 0xD221 }, /* ICH7 on Hercules EC-900 */
{ 0x27DF, 0x152D, 0x0778 }, /* ICH7 on unknown Intel */
{ 0x24CA, 0x1025, 0x0061 }, /* ICH4 on ACER Aspire 2023WLMi */
/*
* SATA-FSL host controller supports a max. of (15+1) direct PRDEs, and
* chained indirect PRDEs up to a max count of 63.
- * We are allocating an array of 63 PRDEs contigiously, but PRDE#15 will
+ * We are allocating an array of 63 PRDEs contiguously, but PRDE#15 will
* be set up as an indirect descriptor, pointing to its next
- * (contigious) PRDE. Though chained indirect PRDE arrays are
+ * (contiguous) PRDE. Though chained indirect PRDE arrays are
* supported, it will be more efficient to use a direct PRDT and
* a single chain/link to indirect PRDE array/PRDT.
*/
u32 ttl_dwords = 0;
/*
- * NOTE : direct & indirect prdt's are contigiously allocated
+ * NOTE : direct & indirect prdt's are contiguously allocated
*/
struct prde *prd = (struct prde *)&((struct command_desc *)
cmd_desc)->prdt;
IF_ERR(printk(" cause: packet time out\n");)
}
else {
- IF_ERR(printk(" cause: buffer over flow\n");)
+ IF_ERR(printk(" cause: buffer overflow\n");)
}
goto out_free_desc;
}
* @dev: device to try to bind to the driver
*
* This function returns -ENODEV if the device is not registered,
- * 1 if the device is bound sucessfully and 0 otherwise.
+ * 1 if the device is bound successfully and 0 otherwise.
*
* This function must be called with @dev->sem held. When called for a
* USB interface, @dev->parent->sem must be held as well.
break;
default:
- BT_ERR("Unknow packet type:%d", type);
+ BT_ERR("Unknown packet type:%d", type);
print_hex_dump_bytes("", DUMP_PREFIX_OFFSET, payload,
blksz * buf_block_len);
struct hci_uart *hu;
if (!hdev) {
- BT_ERR("Frame for uknown device (hdev=NULL)");
+ BT_ERR("Frame for unknown device (hdev=NULL)");
return -ENODEV;
}
*
* Added devfs support.
* Jan-11-1998, C. Scott Ananian <cananian@alumni.princeton.edu>
- * Shared /dev/zero mmaping support, Feb 2000, Kanoj Sarcar <kanoj@sgi.com>
+ * Shared /dev/zero mmapping support, Feb 2000, Kanoj Sarcar <kanoj@sgi.com>
*/
#include <linux/mm.h>
/*
* mspec_mmap
*
- * Called when mmaping the device. Initializes the vma with a fault handler
+ * Called when mmapping the device. Initializes the vma with a fault handler
* and private data structure necessary to allocate, track, and free the
* underlying pages.
*/
}
break;
case R3964_WAIT_FOR_RX_REPEAT:
- /* FALLTROUGH */
+ /* FALLTHROUGH */
case R3964_IDLE:
if (c == STX) {
/* Prevent rx_queue from overflow: */
typedef struct COST_ROUTE COST_ROUTE;
struct COST_ROUTE {
unsigned char cost; /* Cost down this link */
- unsigned char route[NODE_BYTES]; /* Nodes thorough this route */
+ unsigned char route[NODE_BYTES]; /* Nodes through this route */
};
typedef struct ROUTE_STR ROUTE_STR;
dev->dmareg |= HIFN_DMAIER_PUBDONE;
hifn_write_1(dev, HIFN_1_DMA_IER, dev->dmareg);
- dprintk("Chip %s: Public key engine has been sucessfully "
+ dprintk("Chip %s: Public key engine has been successfully "
"initialised.\n", dev->name);
}
}
/**
- * atc_desc_get - get a unsused descriptor from free_list
+ * atc_desc_get - get an unused descriptor from free_list
* @atchan: channel we want a new descriptor for
*/
static struct at_desc *atc_desc_get(struct at_dma_chan *atchan)
* This function builds the tree representation of the topology given
* by the self IDs from the latest bus reset. During the construction
* of the tree, the function checks that the self IDs are valid and
- * internally consistent. On succcess this function returns the
+ * internally consistent. On success this function returns the
* fw_node corresponding to the local card otherwise NULL.
*/
static struct fw_node *build_tree(struct fw_card *card,
* functions & device file and adds it to the master fd list.
*
* RETURNS:
- * Zero on success, error code on falure.
+ * Zero on success, error code on failure.
*/
int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
const struct drm_framebuffer_funcs *funcs)
} else if (connector->funcs->set_property)
ret = connector->funcs->set_property(connector, property, out_resp->value);
- /* store the property value if succesful */
+ /* store the property value if successful */
if (!ret)
drm_connector_property_set_value(connector, property, out_resp->value);
out:
* i915_gem_release_mmap - remove physical page mappings
* @obj: obj in question
*
- * Preserve the reservation of the mmaping with the DRM core code, but
+ * Preserve the reservation of the mmapping with the DRM core code, but
* relinquish ownership of the pages back to the system.
*
* It is vital that we remove the page mapping if we have mapped a tiled
/**
- * Curretly it is assumed that the old framebuffer is reused.
+ * Currently it is assumed that the old framebuffer is reused.
*
* LOCKING
* caller should hold the mode config lock.
/* Wrap with our custom algo which switches to DDC mode */
intel_output->ddc_bus->algo = &intel_sdvo_i2c_bit_algo;
- /* In defaut case sdvo lvds is false */
+ /* In default case sdvo lvds is false */
intel_sdvo_get_capabilities(intel_output, &sdvo_priv->caps);
if (intel_sdvo_output_setup(intel_output,
* AGP so that GPU can catch out of VRAM/AGP access
*/
if (rdev->mc.gtt_location > rdev->mc.mc_vram_size) {
- /* Enought place before */
+ /* Enough place before */
rdev->mc.vram_location = rdev->mc.gtt_location -
rdev->mc.mc_vram_size;
} else if (tmp > rdev->mc.mc_vram_size) {
- /* Enought place after */
+ /* Enough place after */
rdev->mc.vram_location = rdev->mc.gtt_location +
rdev->mc.gtt_size;
} else {
};
/**
- * Curretly it is assumed that the old framebuffer is reused.
+ * Currently it is assumed that the old framebuffer is reused.
*
* LOCKING
* caller should hold the mode config lock.
* Note that refcount can be at most 2, since during a free refcount=3
* might mean we have to allocate a new surface which might not always
* be available.
- * For example : we allocate three contigous surfaces ABC. If B is
+ * For example : we allocate three contiguous surfaces ABC. If B is
* freed, we suddenly need two surfaces to store A and C, which might
* not always be available.
*/
new_mem->mem_type == TTM_PL_SYSTEM) ||
(old_mem->mem_type == TTM_PL_SYSTEM &&
new_mem->mem_type == TTM_PL_TT)) {
- /* bind is enought */
+ /* bind is enough */
radeon_move_null(bo, new_mem);
return 0;
}
* AGP so that GPU can catch out of VRAM/AGP access
*/
if (rdev->mc.gtt_location > rdev->mc.mc_vram_size) {
- /* Enought place before */
+ /* Enough place before */
rdev->mc.vram_location = rdev->mc.gtt_location -
rdev->mc.mc_vram_size;
} else if (tmp > rdev->mc.mc_vram_size) {
- /* Enought place after */
+ /* Enough place after */
rdev->mc.vram_location = rdev->mc.gtt_location +
rdev->mc.gtt_size;
} else {
/*
* We need to use vmap to get the desired page protection
- * or to make the buffer object look contigous.
+ * or to make the buffer object look contiguous.
*/
prot = (mem->placement & TTM_PL_FLAG_CACHED) ?
PAGE_KERNEL :
}
/*
-function that update the status of the chips (temperature for exemple)
+function that updates the status of the chips (temperature for example)
*/
static struct adm1029_data *adm1029_update_device(struct device *dev)
{
data->prochot_interval = lm93_read_byte(client,
LM93_REG_PROCHOT_INTERVAL);
- /* Fan Boost Termperature registers */
+ /* Fan Boost Temperature registers */
for (i = 0; i < 4; i++)
data->boost[i] = lm93_read_byte(client, LM93_REG_BOOST(i));
0 - no debugging messages
1 - some debugging messages, but none during DMA frame transmission
2 - lots of messages, including during DMA frame transmission
- (will cause undeflows if your machine is too slow!)
+ (will cause underflows if your machine is too slow!)
*/
#define DV1394_DEBUG_LEVEL 0
#define IPATH_GPIO_SCL \
(1ULL << (_IPATH_GPIO_SCL_NUM+INFINIPATH_EXTC_GPIOOE_SHIFT))
-/* keep the code below somewhat more readonable; not used elsewhere */
+/* keep the code below somewhat more readable; not used elsewhere */
#define _IPATH_HTLINK0_CRCBITS (infinipath_hwe_htclnkabyte0crcerr | \
infinipath_hwe_htclnkabyte1crcerr)
#define _IPATH_HTLINK1_CRCBITS (infinipath_hwe_htclnkbbyte0crcerr | \
* @wd: Write Data - value to set in register
* @mask: ones where data should be spliced into reg.
*
- * Basic register read/modify/write, with un-needed acesses elided. That is,
+ * Basic register read/modify/write, with un-needed accesses elided. That is,
* a mask of zero will prevent write, while a mask of 0xFF will prevent read.
* returns current (presumed, if a write was done) contents of selected
* register, or <0 if errors.
/* Set DFELTHFDR/HDR thresholds */
RXEQ_VAL(7, 8, 0, 0, 0, 0), /* FDR */
RXEQ_VAL(7, 0x21, 0, 0, 0, 0), /* HDR */
- /* Set TLTHFDR/HDR theshold */
+ /* Set TLTHFDR/HDR threshold */
RXEQ_VAL(7, 9, 2, 2, 2, 2), /* FDR */
RXEQ_VAL(7, 0x23, 2, 2, 2, 2), /* HDR */
/* Set Preamp setting 2 (ZFR/ZCNT) */
* anymore, so we do this only if selective signaling is off.
*
* Further, on 32-bit platforms, we can't use vmap() to make
- * the QP buffer virtually contigious. Thus we have to use
+ * the QP buffer virtually contiguous. Thus we have to use
* constant-sized WRs to make sure a WR is always fully within
* a single page-sized chunk.
*
INIT_DELAYED_WORK(&moduleloader_work, request_module_delayed);
ret = hp_sdc_init();
- /* after sucessfull initialization give SDC some time to settle
+ /* after successful initialization give SDC some time to settle
* and then load the hp_sdc_mlc upper layer driver */
if (!ret)
schedule_delayed_work(&moduleloader_work,
break;
default:
- printk(KERN_WARNING PREFIX "Unkown HIL Error status (%x)!\n", data);
+ printk(KERN_WARNING PREFIX "Unknown HIL Error status (%x)!\n", data);
break;
}
#define ATMEL_WM97XX_AC97C_IRQ (29)
#define ATMEL_WM97XX_GPIO_DEFAULT (32+16) /* Pin 16 on port B. */
#else
-#error Unkown CPU, this driver only supports AT32AP700X CPUs.
+#error Unknown CPU, this driver only supports AT32AP700X CPUs.
#endif
struct continuous {
*
* Notes:
* This is a wm97xx extended touch driver to capture touch
- * data in a continuous manner on the Intel XScale archictecture
+ * data in a continuous manner on the Intel XScale architecture
*
* Features:
* - codecs supported:- WM9705, WM9712, WM9713
/* When the AC97 queue has been drained we need to allow time
* to buffer up samples otherwise we end up spinning polling
* for samples. The controller can't have a suitably low
- * threashold set to use the notifications it gives.
+ * threshold set to use the notifications it gives.
*/
schedule_timeout_uninterruptible(1);
/* When the AC97 queue has been drained we need to allow time
* to buffer up samples otherwise we end up spinning polling
* for samples. The controller can't have a suitably low
- * threashold set to use the notifications it gives.
+ * threshold set to use the notifications it gives.
*/
msleep(1);
case 0: break;
case 1: s = "unknown class"; break;
case 2: s = "unknown function"; break;
- default: s = "unkown error"; break;
+ default: s = "unknown error"; break;
}
if (s)
printk(KERN_INFO "capidrv-%d: %s from controller 0x%x function %d: %s\n",
DELIVERY - indication entered isdn_rc function
RNR=... - application had returned RNR=... after the
look ahead callback
- RNum=0 - aplication had not returned any buffer to copy
+ RNum=0 - application had not returned any buffer to copy
this indication and will copy it itself
COMPLETE - XDI had copied the data to the buffers provided
by the application and is about to issue the
}
break;
default:
- diva_mnt_internal_dprintf (0, DLI_ERR, "Unknon IDI Ind (DMA mode): %02x", Ind);
+ diva_mnt_internal_dprintf (0, DLI_ERR, "Unknown IDI Ind (DMA mode): %02x", Ind);
}
p += (this_ind_length+1);
total_length -= (4 + this_ind_length);
}
break;
default:
- diva_mnt_internal_dprintf (0, DLI_ERR, "Unknon IDI Ind: %02x", Ind);
+ diva_mnt_internal_dprintf (0, DLI_ERR, "Unknown IDI Ind: %02x", Ind);
}
}
}
switch (protocol) {
case (-1): /* used for init */
bch->state = -1;
- /* fall trough */
+ /* fall through */
case (ISDN_P_NONE):
if (bch->state == ISDN_P_NONE)
return 0; /* already in idle state */
for (i = 0; list[i].name != NULL; i++)
if (list[i].num == num)
return list[i].name;
- return "<unkown USB Error>";
+ return "<unknown USB Error>";
}
/* USB descriptor needs to contain one of the following EndPoint combinations: */
pr_debug("%s: pump stev GSTN CLEAR\n", ch->is->name);
break;
default:
- pr_info("u%s: nknown pump stev %x\n", ch->is->name, devt);
+ pr_info("u%s: unknown pump stev %x\n", ch->is->name, devt);
break;
}
}
break;
default:
DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1: unkown state : %#x", pr);
+ "HFC_USB: hfc_usb_d_l2l1: unknown state : %#x", pr);
break;
}
}
unsigned short hl;
struct sk_buff *skb;
/*
- * we need to reserve enought space in front of
+ * we need to reserve enough space in front of
* sk_buff. old call to dev_alloc_skb only reserved
* 16 bytes, now we are looking at what the driver wants
*/
struct sk_buff *new_skb;
unsigned short hl;
/*
- * we need to reserve enought space in front of
+ * we need to reserve enough space in front of
* sk_buff. old call to dev_alloc_skb only reserved
* 16 bytes, now we are looking at what the driver wants.
*/
*
* try to accomplish several tasks:
* - reassemble any complete fragment sequence (non-null 'start'
- * indicates there is a continguous sequence present)
+ * indicates there is a contiguous sequence present)
* - discard any incomplete sequences that are below minseq -- due
* to the fact that the sender always increments the sequence number, if there
* is an incomplete sequence below minseq, no new fragments would
}
return 0;
}
- /* BADMUL=value - dummy 0=disable errorchk disabled (treshold multiplier) */
+ /* BADMUL=value - dummy 0=disable errorchk disabled (threshold multiplier) */
if (!strncmp(p[0], "BADMUL", 6)) {
p[0] += 6;
switch (*p[0]) {
* crossconnections and conferences via software if not possible through
* hardware. If hardware capability is available, hardware is used.
*
- * Echo: Is generated by CMX and is used to check performane of hard and
+ * Echo: Is generated by CMX and is used to check performance of hard and
* software CMX.
*
* The CMX has special functions for conferences with one, two and more
if (tm->rcnt == 1) {
if (*debug & DEBUG_L2_TEI)
tm->tei_m.printdebug(fi,
- "check req for tei %d sucessful\n", tm->l2->tei);
+ "check req for tei %d successful\n", tm->l2->tei);
mISDN_FsmChangeState(fi, ST_TEI_NOP);
} else if (tm->rcnt > 1) {
/* duplicate assignment; remove */
*
* WARNING: This driver has only been tested on Apple's
* 1.25 MHz Dual G4 (March 03). It is tuned for a CPU
- * temperatur around 57 C.
+ * temperature around 57 C.
*
* Copyright (C) 2003, 2004 Samuel Rydh (samuel@ibrium.se)
*
op_count++;
- /* loop throgh all bytes of message i */
+ /* loop through all bytes of message i */
for(j = 0; j < m[i].len; j++) {
/* write back all bytes that could have been read */
m[i].buf[j] = (le32_to_cpu(op[op_count/3]) >> ((3-(op_count%3))*8));
* search callback possible return status
*
* DVBFE_ALGO_SEARCH_SUCCESS
- * The frontend search algorithm completed and returned succesfully
+ * The frontend search algorithm completed and returned successfully
*
* DVBFE_ALGO_SEARCH_ASLEEP
* The frontend search algorithm is sleeping
if (ret)
return ret;
- err("Unkown Anysee version: %02x %02x %02x. "\
+ err("Unknown Anysee version: %02x %02x %02x. "\
"Please report the <linux-dvb@linuxtv.org>.",
hw_info[0], hw_info[1], hw_info[2]);
{ &dibusb_dib3000mb_table[9], &dibusb_dib3000mb_table[11], NULL },
{ &dibusb_dib3000mb_table[10], &dibusb_dib3000mb_table[12], NULL },
},
- { "Unkown USB1.1 DVB-T device ???? please report the name to the author",
+ { "Unknown USB1.1 DVB-T device ???? please report the name to the author",
{ &dibusb_dib3000mb_table[13], NULL },
{ &dibusb_dib3000mb_table[14], NULL },
},
*state = REMOTE_KEY_REPEAT;
break;
default:
- deb_err("unkown type of remote status: %d\n",keybuf[0]);
+ deb_err("unknown type of remote status: %d\n",keybuf[0]);
break;
}
return 0;
stream->complete(stream, b, urb->actual_length);
break;
default:
- err("unkown endpoint type in completition handler.");
+ err("unknown endpoint type in completition handler.");
return;
}
usb_submit_urb(urb,GFP_ATOMIC);
case USB_ISOC:
return usb_isoc_urb_init(stream);
default:
- err("unkown URB-type for data transfer.");
+ err("unknown URB-type for data transfer.");
return -EINVAL;
}
}
if (input_mode == AU8522_INPUT_CONTROL_REG081H_SVIDEO_CH13 ||
input_mode == AU8522_INPUT_CONTROL_REG081H_SVIDEO_CH24) {
/* Despite what the table says, for the HVR-950q we still need
- to be in CVBS mode for the S-Video input (reason uknown). */
+ to be in CVBS mode for the S-Video input (reason unknown). */
/* filter_coef_type = 3; */
filter_coef_type = 5;
} else {
- /*
+/*
cx24110 - Single Chip Satellite Channel Receiver driver module
Copyright (C) 2002 Peter Hettkamp <peter.hettkamp@htp-tel.de> based on
{0x42,0x00}, /* @ middle bytes " */
{0x43,0x00}, /* @ LSB " */
/* leave the carrier tracking loop parameters on default */
- /* leave the bit timing loop parameters at gefault */
+ /* leave the bit timing loop parameters at default */
{0x56,0x4d}, /* set the filtune voltage to 2.7V, as recommended by */
/* the cx24108 data sheet for symbol rates above 15MS/s */
{0x57,0x00}, /* @ Filter sigma delta enabled, positive */
info("detected CX24113 variant\n");
break;
case REV_CX24113:
- info("sucessfully detected\n");
+ info("successfully detected\n");
break;
default:
err("unsupported device id: %x\n", state->rev);
case BANDWIDTH_AUTO:
return -EOPNOTSUPP;
default:
- err("unkown bandwidth value.");
+ err("unknown bandwidth value.");
return -EINVAL;
}
}
switch (state->current_modulation) {
case QAM_256:
case QAM_64:
- /* Need to undestand why there are 3 lock levels here */
+ /* Need to understand why there are 3 lock levels here */
if ((buf[0] & 0x07) == 0x07)
*status |= FE_HAS_CARRIER;
break;
switch (state->current_modulation) {
case QAM_256:
case QAM_64:
- /* Need to undestand why there are 3 lock levels here */
+ /* Need to understand why there are 3 lock levels here */
if ((buf[0] & 0x07) == 0x07)
*status |= FE_HAS_CARRIER;
else
* stb0899_track
* periodically check the signal level against a specified
* threshold level and perform derotator centering.
- * called once we have a lock from a succesful search
+ * called once we have a lock from a successful search
* event.
*
* Will be called periodically called to maintain the
* use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
*/
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- /* set event counter 1 treshold to maximum allowed value (rEC p55) */
+ /* set event counter 1 threshold to maximum allowed value (rEC p55) */
saa7146_write(dev, ECT1R, 0x3fff );
#endif
/* Set RPS1 Address register to point to RPS code (r108 p42) */
* use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
*/
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- /* set event counter 1 treshold to maximum allowed value (rEC p55) */
+ /* set event counter 1 threshold to maximum allowed value (rEC p55) */
saa7146_write(dev, ECT1R, 0x3fff );
#endif
/* Setup BUDGETPATCH MAIN RPS1 "program" (p35) */
// use 0x03 to track RPS1 interrupts - increase by 1 every gpio3 is toggled
// use 0x15 to track VPE interrupts - increase by 1 every vpeirq() is called
saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
- // set event counter 1 treshold to maximum allowed value (rEC p55)
+ // set event counter 1 threshold to maximum allowed value (rEC p55)
saa7146_write(dev, ECT1R, 0x3fff );
#endif
// Fix VSYNC level
* http://av-usbradio.sourceforge.net/index.php
* http://sourceforge.net/projects/av-usbradio/
* Latest release of theirs project was in 2005.
- * Probably, this driver could be improved trough using their
+ * Probably, this driver could be improved through using their
* achievements (specifications given).
* Also, Faidon Liambotis <paravoid@debian.org> wrote nice driver for this radio
* in 2007. He allowed to use his driver to improve current mr800 radio driver.
cx231xx_info("No ACK after %d msec -GPIO I2C failed!",
nInit * 10);
- /* readAck
- throuth clock stretch ,slave has given a SCL signal,
- so the SDA data can be directly read. */
+ /*
+ * readAck
+	 * through clock stretching, the slave has given an SCL signal,
+	 * so the SDA data can be read directly.
+ */
status = cx231xx_get_gpio_bit(dev, dev->gpio_dir, (u8 *)&dev->gpio_val);
if ((dev->gpio_val & 1 << dev->board.tuner_sda_gpio) == 0) {
int err, i;
/* Here we need to allocate the correct number of frontends,
- * as reflected in the cards struct. The reality is that currrently
+ * as reflected in the cards struct. The reality is that currently
* no cx23885 boards support this - yet. But, if we don't modify this
* code then the second frontend would never be allocated (later)
* and fail with error before the attach in dvb_register().
/* setup image format */
cx_andor(MO_COLOR_CTRL, 0x4000, 0x4000);
- /* setup FIFO Threshholds */
+ /* setup FIFO Thresholds */
cx_write(MO_PDMA_STHRSH, 0x0807);
cx_write(MO_PDMA_DTHRSH, 0x0807);
if ((ccdcparam->med_filt_thres < 0) ||
(ccdcparam->med_filt_thres > CCDC_MED_FILT_THRESH)) {
- dev_dbg(dev, "Invalid value of median filter thresold\n");
+ dev_dbg(dev, "Invalid value of median filter threshold\n");
return -EINVAL;
}
int (*enable_clock)(enum vpss_clock_sel clock_sel, int en);
/* select input to ccdc */
void (*select_ccdc_source)(enum vpss_ccdc_source_sel src_sel);
- /* clear wbl overlflow bit */
+ /* clear wbl overflow bit */
int (*clear_wbl_overflow)(enum vpss_wbl_sel wbl_sel);
};
};
static const __u8 ov6650_sensor_init[][8] =
{
- /* Bright, contrast, etc are set througth SCBB interface.
+ /* Bright, contrast, etc are set through SCBB interface.
* AVCAP on win2 do not send any data on this controls. */
/* Anyway, some registers appears to alter bright and constrat */
int err;
__u8 Data;
- /* some unknow command from Aiptek pocket dv and family300 */
+ /* some unknown command from Aiptek pocket dv and family300 */
reg_w(gspca_dev, 0x00, 0x0d01, 0x01);
reg_w(gspca_dev, 0x00, 0x0d03, 0x00);
{}
};
-/* Unknow camera from Ori Usbid 0x0000:0x0000 */
+/* Unknown camera from Ori Usbid 0x0000:0x0000 */
/* Based on snoops from Ori Cohen */
static const __u16 spca501c_mysterious_open_data[][3] = {
{0x02, 0x000f, 0x0005},
goto error;
break;
case MystFromOriUnknownCamera:
- /* UnKnow Ori CMOS Camera data */
+ /* Unknown Ori CMOS Camera data */
if (write_vector(gspca_dev, spca501c_mysterious_open_data))
goto error;
break;
write_vector(gspca_dev, spca501c_arowana_open_data);
break;
case MystFromOriUnknownCamera:
- /* UnKnow CMOS Camera data */
+ /* Unknown CMOS Camera data */
write_vector(gspca_dev, spca501c_mysterious_init_data);
break;
default:
count = 200;
while (--count > 0) {
msleep(10);
- /* gsmart mini2 write a each wait setting 1 ms is enought */
+ /* gsmart mini2 write a each wait setting 1 ms is enough */
/* reg_w_riv(dev, req, idx, val); */
status = reg_r_12(gspca_dev, 0x01, 0x0001, 1);
if (status == endcode) {
break;
default:
PDEBUG(D_PROBE,
- "Sensor UNKNOW_0 force Tas5130");
+ "Sensor UNKNOWN_0 force Tas5130");
sd->sensor = SENSOR_TAS5130CXX;
}
break;
/* Bit mask of PVR2_CVAL_INPUT choices which are valid for the hardware */
unsigned int input_avail_mask;
- /* Bit mask of PVR2_CVAL_INPUT choices which are currenly allowed */
+ /* Bit mask of PVR2_CVAL_INPUT choices which are currently allowed */
unsigned int input_allowed_mask;
/* Location of eeprom or a negative number if none */
wake_up(&dev->fw_data->wait_fw);
break;
default:
- printk(KERN_INFO "s2255 unknwn resp\n");
+ printk(KERN_INFO "s2255 unknown resp\n");
}
default:
pdata++;
unsigned long jpeg_markers; /* Which markers should go into the JPEG output.
* Unless you exactly know what you do, leave them untouched.
* Inluding less markers will make the resulting code
- * smaller, but there will be fewer aplications
+ * smaller, but there will be fewer applications
* which can read it.
* The presence of the APP and COM marker is
* influenced by APP0_len and COM_len ONLY! */
* Allocate memory for the i2o_block_device struct, gendisk and request
* queue and initialize them as far as no additional information is needed.
*
- * Returns a pointer to the allocated I2O Block device on succes or a
+ * Returns a pointer to the allocated I2O Block device on success or a
* negative error code on failure.
*/
static struct i2o_block_device *i2o_block_device_alloc(void)
* Removes a previously added pointer from the context list and returns
* the matching context id.
*
- * Returns context id on succes or 0 on failure.
+ * Returns context id on success or 0 on failure.
*/
u32 i2o_cntxt_list_remove(struct i2o_controller * c, void *ptr)
{
* @c: controller to which the context list belong
* @ptr: pointer to which the context id should be fetched
*
- * Returns context id which matches to the pointer on succes or 0 on
+ * Returns context id which matches to the pointer on success or 0 on
* failure.
*/
u32 i2o_cntxt_list_get_ptr(struct i2o_controller * c, void *ptr)
/*
* gru_file_mmap
*
- * Called when mmaping the device. Initializes the vma with a fault handler
+ * Called when mmapping the device. Initializes the vma with a fault handler
* and private data structure necessary to allocate, track, and free the
* underlying pages.
*/
static struct s3c24xx_mci_pdata s3cmci_def_pdata = {
/* This is currently here to avoid a number of if (host->pdata)
- * checks. Any zero fields to ensure reaonable defaults are picked. */
+ * checks. Any zero fields to ensure reasonable defaults are picked. */
};
#ifdef CONFIG_CPU_FREQ
to specify the offset instead of the absolute address
NOTE:
- With slram it's only possible to map a contigous memory region. Therfore
+ With slram it's only possible to map a contiguous memory region. Therefore
if there's a device mapped somewhere in the region specified slram will
fail to load (see kernel log if modprobe fails).
};
/* Find the (I)NFTL Media Header, and optionally also the mirror media header.
- On sucessful return, buf will contain a copy of the media header for
+ On successful return, buf will contain a copy of the media header for
further processing. id is the string to scan for, and will presumably be
either "ANAND" or "BNAND". If findmirror=1, also look for the mirror media
header. The page #s of the found media headers are placed in mh0_page and
*
* The b2 shift is there to get rid of the lowest two bits.
* We could also do addressbits[b2] >> 1 but for the
- * performace it does not make any difference
+ * performance it does not make any difference
*/
if (eccsize_mult == 1)
byte_addr = (addressbits[b1] << 4) + addressbits[b0];
* @info: The controller instance.
* @nmtd: The driver version of the MTD instance.
*
- * This routine is called after the chip probe has succesfully completed
+ * This routine is called after the chip probe has successfully completed
* and the relevant per-chip information updated. This call ensure that
* we update the internal state accordingly.
*
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
readl(lp->mmio + CMD7);
return 0;
}
-/* This function is called when a packet transmission fails to complete within a resonable period, on the assumption that an interrupts have been failed or the interface is locked up. This function will reinitialize the hardware */
+/*
+ * This function is called when a packet transmission fails to complete
+ * within a reasonable period, on the assumption that interrupts have
+ * failed or the interface is locked up. This function will reinitialize
+ * the hardware.
+ */
static void amd8111e_tx_timeout(struct net_device *dev)
{
struct amd8111e_priv* lp = netdev_priv(dev);
* DAYNA driver mode:
* Dayna DL2000/DaynaTalk PC (Half Length), COPS LT-95,
* Farallon PhoneNET PC III, Farallon PhoneNET PC II
- * Other cards possibly supported mode unkown though:
+ *	Other cards possibly supported, mode unknown though:
* Dayna DL2000 (Full length), COPS LT/M (Micro-Channel)
*
* Cards NOT supported by this driver but supported by the ltpc.c
#define DLNKTST 0x0010 /* Disable Link Status */
#define DAPC 0x0008 /* Disable Automatic Polarity Correction */
#define MENDECL 0x0004 /* MENDEC Loopback Mode */
-#define LRTTSEL 0x0002 /* Low Receive Treshold/Transmit Mode Select */
+#define LRTTSEL 0x0002 /* Low Receive Threshold/Transmit Mode Select */
#define PORTSEL1 0x0001 /* Port Select Bits */
#define PORTSEL2 0x8000 /* Port Select Bits */
#define INTL 0x4000 /* Internal Loopback */
if (status & ISR_OVER)
if (netif_msg_intr(adapter))
dev_warn(&pdev->dev,
- "TX/RX over flow (status = 0x%x)\n",
+ "TX/RX overflow (status = 0x%x)\n",
status & ISR_OVER);
/* link event */
* filtering capabilities. */
struct be_cmd_req_if_create {
struct be_cmd_req_hdr hdr;
- u32 version; /* ignore currntly */
+ u32 version; /* ignore currently */
u32 capability_flags;
u32 enable_flags;
u8 mac_addr[ETH_ALEN];
goto fw_exit;
}
- dev_info(&adapter->pdev->dev, "Firmware flashed succesfully\n");
+ dev_info(&adapter->pdev->dev, "Firmware flashed successfully\n");
fw_exit:
release_firmware(fw);
/* [RC 1] A flag to indicate that overflow error occurred in one of the
queues. */
#define QM_REG_OVFERROR 0x16805c
-/* [RC 7] the Q were the qverflow occurs */
+/* [RC 7] the Q where the overflow occurs */
#define QM_REG_OVFQNUM 0x168058
/* [R 16] Pause state for physical queues 15-0 */
#define QM_REG_PAUSESTATE0 0x168410
/*
* We do not use Tx completion interrupts to free DMAd Tx packets.
- * This is good for performamce but means that we rely on new Tx
+ * This is good for performance but means that we rely on new Tx
* packets arriving to run the destructors of completed packets,
* which open up space in their sockets' send queues. Sometimes
* we do not get such new packets causing Tx to stall. A single
ret = ehea_set_portspeed(port, sp);
if (!ret)
- ehea_info("%s: Port speed succesfully set: %dMbps "
+ ehea_info("%s: Port speed successfully set: %dMbps "
"%s Duplex",
port->netdev->name, port->port_speed,
port->full_duplex == 1 ? "Full" : "Half");
ret = ehea_set_portspeed(port, EHEA_SPEED_AUTONEG);
if (!ret)
- ehea_info("%s: Port speed succesfully set: %dMbps "
+ ehea_info("%s: Port speed successfully set: %dMbps "
"%s Duplex",
port->netdev->name, port->port_speed,
port->full_duplex == 1 ? "Full" : "Half");
* driver only supports standard serial hardware (8250, 16450, 16550A)
*
* This modem usually draws its supply current out of the otherwise unused
- * TXD pin of the serial port. Thus a contignuous stream of 0x00-bytes
+ * TXD pin of the serial port. Thus a contiguous stream of 0x00-bytes
* is transmitted to achieve a positive supply voltage.
*
* hsk: This is a 4800 baud FSK modem, designed for TNC use. It works fine
unsigned long done;
int i = 1;
- /* FIXME: skbs are continguous in real addresses. Do we
+ /* FIXME: skbs are contiguous in real addresses. Do we
* really need to break it into PAGE_SIZE chunks, or can we do
* it just at the granularity of iSeries real->absolute
* mapping? Indeed, given the way the allocator works, can we
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
TBD:
* look at deferring rx frames rather than discarding (as per tulip)
* handle tx ring full as per tulip
- * performace test to tune rx_copybreak
+ * performance test to tune rx_copybreak
Most of my modifications relate to the braindead big-endian
implementation by Intel. When the i596 is operating in
en_dbg(DRV, priv, "Freeing fragment:%d\n", nr);
dma = be64_to_cpu(rx_desc->data[nr].addr);
- en_dbg(DRV, priv, "Unmaping buffer at dma:0x%llx\n", (u64) dma);
+ en_dbg(DRV, priv, "Unmapping buffer at dma:0x%llx\n", (u64) dma);
pci_unmap_single(mdev->pdev, dma, skb_frags[nr].size,
PCI_DMA_FROMDEVICE);
put_page(skb_frags[nr].page);
static int inline_thold __read_mostly = MAX_INLINE;
module_param_named(inline_thold, inline_thold, int, 0444);
-MODULE_PARM_DESC(inline_thold, "treshold for using inline data");
+MODULE_PARM_DESC(inline_thold, "threshold for using inline data");
int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring *ring, u32 size,
#define MLX4_EN_DEF_RX_PAUSE 1
#define MLX4_EN_DEF_TX_PAUSE 1
-/* Interval between sucessive polls in the Tx routine when polling is used
+/* Interval between successive polls in the Tx routine when polling is used
instead of interrupts (in per-core Tx rings) - should be power of 2 */
#define MLX4_EN_TX_POLL_MODER 16
#define MLX4_EN_TX_POLL_TIMEOUT (HZ / 4)
* @card: card structure
* @descr: descriptor to re-init
*
- * return 0 on succes, <0 on failure
+ * return 0 on success, <0 on failure
*
* allocates a new rx skb, iommu-maps it and attaches it to the descriptor.
* Activate the descriptor state-wise
sis_priv->rx_ring[entry].bufptr, RX_BUF_SIZE,
PCI_DMA_FROMDEVICE);
- /* refill the Rx buffer, what if there is not enought
+ /* refill the Rx buffer, what if there is not enough
* memory for new socket buffer ?? */
if ((skb = dev_alloc_skb(RX_BUF_SIZE)) == NULL) {
/*
}
/* This situation should never happen, but due to
- some unknow bugs, it is possible that
+ some unknown bugs, it is possible that
we are working on NULL sk_buff :-( */
if (sis_priv->rx_skbuff[entry] == NULL) {
if (netif_msg_rx_err(sis_priv))
*/
u_long mac_d_max ; /* MAC : D_Max timer value */
- u_long lct_short ; /* LCT : error threshhold */
- u_long lct_medium ; /* LCT : error threshhold */
- u_long lct_long ; /* LCT : error threshhold */
- u_long lct_extended ; /* LCT : error threshhold */
+ u_long lct_short ; /* LCT : error threshold */
+ u_long lct_medium ; /* LCT : error threshold */
+ u_long lct_long ; /* LCT : error threshold */
+ u_long lct_extended ; /* LCT : error threshold */
} ;
#ifdef DEBUG
}
break;
default:
- printk("ioctl for %s: unknow cmd: %04x\n", dev->name, ioc.cmd);
+ printk("ioctl for %s: unknown cmd: %04x\n", dev->name, ioc.cmd);
status = -EOPNOTSUPP;
} // switch
SMSC_TRACE(HW, "Passed Loop Back Test");
#endif /* USE_PHY_WORK_AROUND */
- SMSC_TRACE(HW, "phy initialised succesfully");
+ SMSC_TRACE(HW, "phy initialised successfully");
return 0;
}
#define SMSC_NAPI_WEIGHT 16
/* implements a PHY loopback test at initialisation time, to ensure a packet
- * can be succesfully looped back */
+ * can be successfully looped back */
#define USE_PHY_WORK_AROUND
#define DPRINTK(nlevel, klevel, fmt, args...) \
* @card: card structure
* @descr: descriptor to re-init
*
- * Return 0 on succes, <0 on failure.
+ * Return 0 on success, <0 on failure.
*
* Allocates a new rx skb, iommu-maps it and attaches it to the
* descriptor. Mark the descriptor as activated, ready-to-use.
" (threshold = %d)\n", txmode);
csr6 &= ~DMA_CONTROL_TSF;
csr6 &= DMA_CONTROL_TC_TX_MASK;
- /* Set the transmit threashold */
+ /* Set the transmit threshold */
if (txmode <= 32)
csr6 |= DMA_CONTROL_TTC_32;
else if (txmode <= 64)
#define DMA_CONTROL_DT 0x04000000 /* Disable Drop TCP/IP csum error */
#define DMA_CONTROL_RSF 0x02000000 /* Receive Store and Forward */
#define DMA_CONTROL_DFF 0x01000000 /* Disaable flushing */
-/* Theshold for Activating the FC */
+/* Threshold for Activating the FC */
enum rfa {
act_full_minus_1 = 0x00800000,
act_full_minus_2 = 0x00800200,
act_full_minus_3 = 0x00800400,
act_full_minus_4 = 0x00800600,
};
-/* Theshold for Deactivating the FC */
+/* Threshold for Deactivating the FC */
enum rfd {
deac_full_minus_1 = 0x00400000,
deac_full_minus_2 = 0x00400800,
smctr_malloc(dev, 1L);
/* Allocate Non-MAC receive data buffers.
- * To guarantee a minimum of 256 contigous memory to
+         * To guarantee a minimum of 256 bytes of contiguous memory to
* UM_Receive_Packet's lookahead pointer, before a page
* change or ring end is encountered, place each rx buffer on
* a 256 byte boundary.
prop = of_get_property(np, "tx-clock", NULL);
if (!prop) {
printk(KERN_ERR
- "ucc_geth: mising tx-clock-name property\n");
+ "ucc_geth: missing tx-clock-name property\n");
return -EINVAL;
}
if ((*prop < QE_CLK_NONE) || (*prop > QE_CLK24)) {
frames) received that were between 128
(Including FCS length==4) and 255 octets */
u32 txok; /* Total number of octets residing in frames
- that where involved in succesfull
+				   that were involved in successful
transmission */
u16 txcf; /* Total number of PAUSE control frames
transmitted by this MAC */
u8 res4[0x2];
u32 tmca; /* Total number of frames that were transmitted
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
u32 tbca; /* Total number of frames transmitted
- succesfully that had destination address
+ successfully that had destination address
field equal to the broadcast address */
u32 rxfok; /* Total number of frames received OK */
u32 rxbok; /* Total number of octets received OK */
HW because it includes octets in frames that
never even reach the UCC */
u32 rmca; /* Total number of frames that were received
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
- u32 rbca; /* Total number of frames received succesfully
+ u32 rbca; /* Total number of frames received successfully
that had destination address equal to the
broadcast address */
u32 scar; /* Statistics carry register */
frames) received that were between 128
(Including FCS length==4) and 255 octets */
u32 txok; /* Total number of octets residing in frames
- that where involved in succesfull
+ that where involved in successfull
transmission */
u16 txcf; /* Total number of PAUSE control frames
transmitted by this MAC */
u32 tmca; /* Total number of frames that were transmitted
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
u32 tbca; /* Total number of frames transmitted
- succesfully that had destination address
+ successfully that had destination address
field equal to the broadcast address */
u32 rxfok; /* Total number of frames received OK */
u32 rxbok; /* Total number of octets received OK */
HW because it includes octets in frames that
never even reach the UCC */
u32 rmca; /* Total number of frames that were received
- succesfully with the group address bit set
+ successfully with the group address bit set
that are not broadcast frames */
- u32 rbca; /* Total number of frames received succesfully
+ u32 rbca; /* Total number of frames received successfully
that had destination address equal to the
broadcast address */
} __attribute__ ((packed));
mii_nway_restart(&dev->mii);
if (netif_msg_ifup(dev))
- devdbg(dev, "phy initialised succesfully");
+ devdbg(dev, "phy initialised successfully");
return 0;
}
sc->lmc_media = &lmc_t1_media;
break;
default:
- printk(KERN_WARNING "%s: LMC UNKOWN CARD!\n", dev->name);
+ printk(KERN_WARNING "%s: LMC UNKNOWN CARD!\n", dev->name);
break;
}
* device. See the file header for the format. Run all checks on the
* buffer header, then run over each payload's descriptors, verify
* their consistency and act on each payload's contents. If
- * everything is succesful, update the device's statistics.
+ * everything is successful, update the device's statistics.
*
* Note: You need to set the skb to contain only the length of the
* received buffer; for that, use skb_trim(skb, RECEIVED_SIZE).
/*
* This code is used to optimize rf gain on different environments
- * (temprature mostly) based on feedback from a power detector.
+ * (temperature mostly) based on feedback from a power detector.
*
* It's only used on RF5111 and RF5112, later RF chips seem to have
* auto adjustment on hw -notice they have a much smaller BANK 7 and
/* Fill curves in reverse order
* from lower power (max gain)
* to higher power. Use curve -> idx
- * backmaping we did on eeprom init */
+ * backmapping we did on eeprom init */
u8 idx = pdg_curve_to_idx[pdg];
/* Grab the needed curves by index */
/* Now we have a set of curves for this
* channel on tmpL (x range is table_max - table_min
* and y values are tmpL[pdg][]) sorted in the same
- * order as EEPROM (because we've used the backmaping).
+ * order as EEPROM (because we've used the backmapping).
* So for RF5112 it's from higher power to lower power
* and for RF2413 it's from lower power to higher power.
* For RF5111 we only have one curve. */
* Since this probe succeeded, we allow the next
* probe twice as soon. This allows the maxRate
* to move up faster if the probes are
- * succesful.
+ * successful.
*/
ath_rc_priv->probe_time =
now_msec - rate_table->probe_interval / 2;
/* get number of entries */
field_count = *(((u16 *) & field_info) + 1);
- /* abort if no enought memory */
+	/* abort if not enough memory */
total_length = field_len * field_count;
if (total_length > *len) {
*len = total_length;
IPW_MAX_BDS)) {
/* TODO: Support merging buffers if more than
* IPW_MAX_BDS are used */
- IPW_DEBUG_INFO("%s: Maximum BD theshold exceeded. "
+ IPW_DEBUG_INFO("%s: Maximum BD threshold exceeded. "
"Increase fragmentation level.\n",
priv->net_dev->name);
}
range->max_qual.updated = 7; /* Updated all three */
range->avg_qual.qual = 70; /* > 8% missed beacons is 'bad' */
- /* TODO: Find real 'good' to 'bad' threshol value for RSSI */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
range->avg_qual.level = 20 + IPW2100_RSSI_TO_DBM;
range->avg_qual.noise = 0;
range->avg_qual.updated = 7; /* Updated all three */
/* get number of entries */
field_count = *(((u16 *) & field_info) + 1);
- /* abort if not enought memory */
+ /* abort if not enough memory */
total_len = field_len * field_count;
if (total_len > *len) {
*len = total_len;
case SEC_LEVEL_0:
break;
default:
- printk(KERN_ERR "Unknow security level %d\n",
+ printk(KERN_ERR "Unknown security level %d\n",
priv->ieee->sec.level);
break;
}
range->max_qual.updated = 7; /* Updated all three */
range->avg_qual.qual = 70;
- /* TODO: Find real 'good' to 'bad' threshol value for RSSI */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
range->avg_qual.level = 0; /* FIXME to real average level */
range->avg_qual.noise = 0;
range->avg_qual.updated = 7; /* Updated all three */
case SEC_LEVEL_0:
break;
default:
- printk(KERN_ERR "Unknow security level %d\n",
+ printk(KERN_ERR "Unknown security level %d\n",
priv->ieee->sec.level);
break;
}
ieee->host_decrypt = 1;
ieee->host_mc_decrypt = 1;
- /* Host fragementation in Open mode. Default is enabled.
+ /* Host fragmentation in Open mode. Default is enabled.
* Note: host fragmentation is always enabled if host encryption
* is enabled. For cards can do hardware encryption, they must do
* hardware fragmentation as well. So we don't need a variable
/*
* iwm_hal_send_host_cmd(): sends commands to the UMAC or the LMAC.
* Sending command to the LMAC is equivalent to sending a
- * regular UMAC command with the LMAC passtrough or the LMAC
+ * regular UMAC command with the LMAC passthrough or the LMAC
* wrapper UMAC command IDs.
*/
int iwm_hal_send_host_cmd(struct iwm_priv *iwm,
kfree_skb(packet->skb);
break;
default:
- IWM_ERR(iwm, "Unknow ticket action: %d\n",
+ IWM_ERR(iwm, "Unknown ticket action: %d\n",
le16_to_cpu(ticket_node->ticket->action));
kfree_skb(packet->skb);
}
}
if (i == ARRAY_SIZE(if_sdio_models)) {
- lbs_pr_err("unkown card model 0x%x\n", card->model);
+ lbs_pr_err("unknown card model 0x%x\n", card->model);
ret = -ENODEV;
goto free;
}
return 0;
}
-/* Setting policy also clears the MAC acl, even if we don't change the defaut
+/* Setting policy also clears the MAC acl, even if we don't change the default
* policy
*/
case DOT11_OID_BEACON:
send_formatted_event(priv,
- "Received a beacon from an unkown AP",
+ "Received a beacon from an unknown AP",
mlme, 0);
break;
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 100
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 121
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
/*
* Signal information.
- * Defaul offset is required for RSSI <-> dBm conversion.
+ * Default offset is required for RSSI <-> dBm conversion.
*/
#define DEFAULT_RSSI_OFFSET 120
#endif
/* Prevent reentrancy. We need to do that because we may have
- * multiple interrupt handler running concurently.
+	 * multiple interrupt handlers running concurrently.
* It is safe because interrupts are disabled before aquiring
* the spinlock. */
spin_lock(&lp->spinlock);
* zd_mac_tx_failed - callback for failed frames
* @dev: the mac80211 wireless device
*
- * This function is called if a frame couldn't be succesfully be
+ * This function is called if a frame couldn't be successfully
* transferred. The first frame from the tx queue, will be selected and
* reported as error to the upper layers.
*/
* Mark the I/O Pdir entries invalid and blow away the corresponding I/O
* TLB entries.
*
- * FIXME: at some threshhold it might be "cheaper" to just blow
+ * FIXME: at some threshold it might be "cheaper" to just blow
* away the entire I/O TLB instead of individual entries.
*
* FIXME: Uturn has 256 TLB entries. We don't need to purge every
* The speeds are stored on handles
* (FANA:FAN9), (FANC:FANB), (FANE:FAND).
*
- * There are three default speed sets, acessible as handles:
+ * There are three default speed sets, accessible as handles:
* FS1L,FS1M,FS1H; FS2L,FS2M,FS2H; FS3L,FS3M,FS3H
*
* ACPI DSDT switches which set is in use depending on various
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
case SMALL_TAG_END:
return p + 2;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
return (unsigned char *)p;
break;
- default: /* an unkown tag */
+ default: /* an unknown tag */
len_err:
dev_err(&dev->dev, "unknown tag %#x length %d\n",
tag, len);
/**
* struct ps3_sys_manager_header - System manager message header.
* @version: Header version, currently 1.
- * @size: Header size in bytes, curently 16.
+ * @size: Header size in bytes, currently 16.
* @payload_size: Message payload size in bytes.
* @service_id: Message type, one of enum ps3_sys_manager_service_id.
* @request_tag: Unique number to identify reply.
goto err_io;
}
- /* Make sure frequency measurment mode, test modes, and lock
+ /* Make sure frequency measurement mode, test modes, and lock
* are all disabled */
v3020_set_reg(chip, V3020_STATUS_0, 0x0);
}
rc = raw3270_start(view, rq);
if (rc == 0) {
- /* Started sucessfully. Now wait for completion. */
+ /* Started successfully. Now wait for completion. */
wait_event(fp->wait, raw3270_request_final(rq));
}
} while (rc == -EACCES);
chpid_to_chp(chpid)->state = onoff;
}
-/* On succes return 0 if channel-path is varied offline, 1 if it is varied
+/* On success return 0 if channel-path is varied offline, 1 if it is varied
* online. Return -ENODEV if channel-path is not registered. */
int chp_get_status(struct chp_id chpid)
{
* block of memory, which can not be moved as long as any channel
* is active. Therefore, a maximum number of subchannels needs to
* be defined somewhere. This is a module parameter, defaulting to
- * a resonable value of 1024, or 32 kb of memory.
+ * a reasonable value of 1024, or 32 kb of memory.
* Current kernels don't allow kmalloc with more than 128kb, so the
* maximum is 4096.
*/
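A rough check of the figures quoted above (an inference from the comment, not a statement about the exact structure size): 32 kb for 1024 subchannels works out to about 32 bytes per subchannel entry, so the 128 kb kmalloc ceiling caps the module parameter at roughly 128 kb / 32 bytes = 4096 subchannels.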
#define ENVCTRL_CPUTEMP_MON 1 /* cpu temperature monitor */
#define ENVCTRL_CPUVOLTAGE_MON 2 /* voltage monitor */
#define ENVCTRL_FANSTAT_MON 3 /* fan status monitor */
-#define ENVCTRL_ETHERTEMP_MON 4 /* ethernet temperarture */
+#define ENVCTRL_ETHERTEMP_MON 4 /* ethernet temperature */
/* monitor */
#define ENVCTRL_VOLTAGESTAT_MON 5 /* voltage status monitor */
#define ENVCTRL_MTHRBDTEMP_MON 6 /* motherboard temperature */
-#define ENVCTRL_SCSITEMP_MON 7 /* scsi temperarture */
+#define ENVCTRL_SCSITEMP_MON 7 /* scsi temperature */
#define ENVCTRL_GLOBALADDR_MON 8 /* global address */
/* Child device type.
unsigned long flags;
int handled = 0;
- /* Use the host lock to serialise acess to the 53c700
+ /* Use the host lock to serialise access to the 53c700
* hardware. Note: In future, we may need to take the queue
* lock to enter the done routines. When that happens, we
* need to ensure that for this driver, the host lock and the
/*
* The adapter interface specs all queues to be located in the same
- * physically contigous block. The host structure that defines the
+ * physically contiguous block. The host structure that defines the
* commuication queues will assume they are each a separate physically
- * contigous memory region that will support them all being one big
- * contigous block.
+ * contiguous memory region that will support them all being one big
+ * contiguous block.
* There is a command and response queue for each level and direction of
* commuication. These regions are accessed by both the host and adapter.
*/
spin_lock_init(&dev->fib_lock);
/*
- * Allocate the physically contigous space for the commuication
+	 *	Allocate the physically contiguous space for the communication
* queue headers.
*/
scbdma_tohost_done:
test CCSCBCTL, CCARREN jz fill_qoutfifo_dmadone;
/*
- * An SCB has been succesfully uploaded to the host.
+ * An SCB has been successfully uploaded to the host.
* If the SCB was uploaded for some reason other than
* bad SCSI status (currently only for underruns), we
* queue the SCB for normal completion. Otherwise, we
/*
* Although the driver does not care about the
* 'Selection in Progress' status bit, the busy
- * LED does. SELINGO is only cleared by a sucessfull
+	 * LED does.  SELINGO is only cleared by a successful
* selection, so we must manually clear it to insure
* the LED turns off just incase no future successful
* selections occur (e.g. no devices on the bus).
/*
* Although the driver does not care about the
* 'Selection in Progress' status bit, the busy
- * LED does. SELINGO is only cleared by a sucessfull
+	 * LED does.  SELINGO is only cleared by a successful
* selection, so we must manually clear it to insure
* the LED turns off just incase no future successful
* selections occur (e.g. no devices on the bus).
* Port operational type (in sync with SNIA port type).
*/
enum bfa_pport_type {
- BFA_PPORT_TYPE_UNKNOWN = 1, /* port type is unkown */
+ BFA_PPORT_TYPE_UNKNOWN = 1, /* port type is unknown */
BFA_PPORT_TYPE_TRUNKED = 2, /* Trunked mode */
BFA_PPORT_TYPE_NPORT = 5, /* P2P with switched fabric */
BFA_PPORT_TYPE_NLPORT = 6, /* public loop */
* Temperature sensor status values
*/
enum bfa_tsensor_status {
- BFA_TSENSOR_STATUS_UNKNOWN = 1, /* unkown status */
+ BFA_TSENSOR_STATUS_UNKNOWN = 1, /* unknown status */
BFA_TSENSOR_STATUS_FAULTY = 2, /* sensor is faulty */
BFA_TSENSOR_STATUS_BELOW_MIN = 3, /* temperature below mininum */
BFA_TSENSOR_STATUS_NOMINAL = 4, /* normal temperature */
atomic_read(&hba->resetting) == 0, 60 * HZ);
if (atomic_read(&hba->resetting)) {
- /* IOP is in unkown state, abort reset */
+ /* IOP is in unknown state, abort reset */
printk(KERN_ERR "scsi%d: reset failed\n", hba->host->host_no);
return -1;
}
* at the same time.
*
* When discovery succeeds or fails a callback is made to the lport as
- * notification. Currently, succesful discovery causes the lport to take no
+ * notification. Currently, successful discovery causes the lport to take no
* action. A failure will cause the lport to reset. There is likely a circular
* locking problem with this implementation.
*/
* iscsi_tcp_task_xmit - xmit normal PDU task
* @task: iscsi command task
*
- * We're expected to return 0 when everything was transmitted succesfully,
+ * We're expected to return 0 when everything was transmitted successfully,
* -EAGAIN if there's still data in the queue, or != 0 for any other kind
* of error.
*/
* Notes:
* Assumes any error from lpfc_selective_reset() will be negative.
* If lpfc_selective_reset() returns zero then the length of the buffer
- * is returned which indicates succcess
+ * is returned which indicates success
*
* Returns:
* -EINVAL if the buffer does not contain the string "selective"
* sysfs_ctlreg_read - Read method for reading from ctlreg
* @kobj: kernel kobject that contains the kernel class device.
* @bin_attr: kernel attributes passed to us.
- * @buf: if succesful contains the data from the adapter IOREG space.
+ * @buf: if successful contains the data from the adapter IOREG space.
* @off: offset into buffer to beginning of data.
* @count: bytes to transfer.
*
/* FLOGI completes successfully */
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
- "0101 FLOGI completes sucessfully "
+ "0101 FLOGI completes successfully "
"Data: x%x x%x x%x x%x\n",
irsp->un.ulpWord[4], sp->cmn.e_d_tov,
sp->cmn.w2.r_a_tov, sp->cmn.edtovResolution);
/* Indicate we are walking fc_rscn_id_list on this vport */
vport->fc_rscn_flush = 1;
spin_unlock_irq(shost->host_lock);
- /* Get the array count after sucessfully have the token */
+	/* Get the array count after successfully acquiring the token */
rscn_cnt = vport->fc_rscn_id_cnt;
/* If we are already processing an RSCN, save the received
* RSCN payload buffer, cmdiocb->context2 to process later.
* down the SLI Layer.
*
* Return codes
- * 0 - sucess.
+ * 0 - success.
* Any other value - error.
**/
static int
* down the SLI Layer.
*
* Return codes
- * 0 - sucess.
+ * 0 - success.
* Any other value - error.
**/
static int
* uninitialization after the HBA is reset when bring down the SLI Layer.
*
* Return codes
- * 0 - sucess.
+ * 0 - success.
* Any other value - error.
**/
int
* routine from the API jump table function pointer from the lpfc_hba struct.
*
* Return codes
- * 0 - sucess.
+ * 0 - success.
* Any other value - error.
**/
void
* PCI devices.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* support the SLI-3 HBA device it attached to.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* support the SLI-4 HBA device it attached to.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* device specific resource setup to support the HBA device it attached to.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* device specific resource setup to support the HBA device it attached to.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* list and set up the IOCB tag array accordingly.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* list and set up the sgl xritag tag array accordingly.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* enabled and the driver is reinitializing the device.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* PCI device data structure is set.
*
* Return codes
- * pointer to @phba - sucessful
+ * pointer to @phba - successful
* NULL - error
**/
static struct lpfc_hba *
* host with it.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* with SLI-3 interface spec.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* this routine.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - could not allocated memory.
**/
static int
* allocation for the port.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* HBA consistent with the SLI-4 interface spec.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* we just use some constant number as place holder.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* operation.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* operation.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* operation.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* Later, this can be used for all the slow-path events.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* -ENOMEM - No availble memory
**/
static int
* all resources assigned to the PCI function which originates this request.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No availble memory
* EIO - The mailbox failed to complete successfully.
**/
* with SLI-4 interface spec.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* will be left with MSI-X enabled and leaks its vectors.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* is done in this function.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
*/
static int
* MSI-X -> MSI -> IRQ.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static uint32_t
* enabled and leaks its vectors.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* which is done in this function.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static int
* MSI-X -> MSI -> IRQ.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* other values - error
**/
static uint32_t
*/
if (lpfc_sli_chk_mbx_command(pmbox->mbxCommand) ==
MBX_SHUTDOWN) {
- /* Unknow mailbox command compl */
+ /* Unknown mailbox command compl */
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"(%d):0323 Unknown Mailbox command "
"x%x (x%x) Cmpl\n",
* addition, this routine gets the port vpd data.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - could not allocated memory.
**/
static int
* sequential.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* EIO - The mailbox failed to complete successfully.
* When this error occurs, the driver is not guaranteed
* to have any rpi regions posted to the device and
* maps up to 64 rpi context regions.
*
* Return codes
- * 0 - sucessful
+ * 0 - successful
* ENOMEM - No available memory
* EIO - The mailbox failed to complete successfully.
**/
* PAGE_SIZE modulo 64 rpi context headers.
*
* Returns
- * A nonzero rpi defined as rpi_base <= rpi < max_rpi if sucessful
+ * A nonzero rpi defined as rpi_base <= rpi < max_rpi if successful
* LPFC_RPI_ALLOC_ERROR if no rpis are available.
**/
int
u8 battery_status; /*
* BIT 0: battery module missing
* BIT 1: VBAD
- * BIT 2: temprature high
+ * BIT 2: temperature high
* BIT 3: battery pack missing
* BIT 4,5:
* 00 - charge complete
* @inserted_drive : channel:Id of inserted drive
* @battery_status : bit 0: battery module missing
* bit 1: VBAD
- * bit 2: temprature high
+ * bit 2: temperature high
* bit 3: battery pack missing
* bit 4,5:
* 00 - charge complete
}
else {
con_log(CL_ANN, (KERN_NOTICE
- "megaraid mbox: reset sequence completed sucessfully\n"));
+ "megaraid mbox: reset sequence completed successfully\n"));
}
#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
/**
- * _scsih_scsi_ioc_info - translated non-succesfull SCSI_IO request
+ * _scsih_scsi_ioc_info - translated non-successful SCSI_IO request
* @ioc: per adapter object
* @scmd: pointer to scsi command object
* @mpi_reply: reply mf payload returned from firmware
** we force a SIR_NEGO_PROTO interrupt (it is a hack that avoids
** bloat for such a should_not_happen situation).
** In all other situation, we reset the BUS.
- ** Are these assumptions reasonnable ? (Wait and see ...)
+ ** Are these assumptions reasonable ? (Wait and see ...)
*/
unexpected_phase:
dsp -= 8;
* @direction : data transfer direction
*
* Return value
- * 0 on sucess, non-zero error code on failure
+ * 0 on success, non-zero error code on failure
*/
static int pmcraid_build_passthrough_ioadls(
struct pmcraid_cmd *cmd,
* @direction: data transfer direction
*
* Return value
- * 0 on sucess, non-zero error code on failure
+ * 0 on success, non-zero error code on failure
*/
static void pmcraid_release_passthrough_ioadls(
struct pmcraid_cmd *cmd,
* @arg: pointer to pmcraid_passthrough_buffer user buffer
*
* Return value
- * 0 on sucess, non-zero error code on failure
+ * 0 on success, non-zero error code on failure
*/
static long pmcraid_ioctl_passthrough(
struct pmcraid_instance *pinstance,
{0x01180600, IOASC_LOG_LEVEL_MUST,
"Recovered Error, soft media error, sector reassignment suggested"},
{0x015D0000, IOASC_LOG_LEVEL_MUST,
- "Recovered Error, failure prediction thresold exceeded"},
+ "Recovered Error, failure prediction threshold exceeded"},
{0x015D9200, IOASC_LOG_LEVEL_MUST,
- "Recovered Error, soft Cache Card Battery error thresold"},
+ "Recovered Error, soft Cache Card Battery error threshold"},
{0x015D9200, IOASC_LOG_LEVEL_MUST,
- "Recovered Error, soft Cache Card Battery error thresold"},
+ "Recovered Error, soft Cache Card Battery error threshold"},
{0x02048000, IOASC_LOG_LEVEL_MUST,
"Not Ready, IOA Reset Required"},
{0x02408500, IOASC_LOG_LEVEL_MUST,
* @data_buf: pointer to vendor unique data buffer
*
* Returns:
- * 0 on succesful return
+ * 0 on successful return
* otherwise, failing error code
*
* Notes:
*
* Note:
* This function must only be called on a PHY that has not
- * sucessfully been added using sas_phy_add().
+ * successfully been added using sas_phy_add().
*/
void sas_phy_free(struct sas_phy *phy)
{
*
* Note:
* This function must only be called on a PORT that has not
- * sucessfully been added using sas_port_add().
+ * successfully been added using sas_port_add().
*/
void sas_port_free(struct sas_port *port)
{
*
* Note:
* This function must only be called on a remote
- * PHY that has not sucessfully been added using
+ * PHY that has not successfully been added using
* sas_rphy_add() (or has been sas_rphy_remove()'d)
*/
void sas_rphy_free(struct sas_rphy *rphy)
*
* This routine is similar to sym_set_workarounds(), except
* that, at this point, we already know that the device was
- * succesfully intialized at least once before, and so most
+ * successfully initialized at least once before, and so most
* of the steps taken there are un-needed here.
*/
static void sym2_reset_workarounds(struct pci_dev *pdev)
* we force a SIR_NEGO_PROTO interrupt (it is a hack that avoids
* bloat for such a should_not_happen situation).
* In all other situation, we reset the BUS.
- * Are these assumptions reasonnable ? (Wait and see ...)
+ * Are these assumptions reasonable ? (Wait and see ...)
*/
unexpected_phase:
dsp -= 8;
*
* SYM_OPT_LIMIT_COMMAND_REORDERING
* When this option is set, the driver tries to limit tagged
- * command reordering to some reasonnable value.
+ * command reordering to some reasonable value.
* (set for Linux)
*/
#if 0
{ "LTS0001", 0 },
/* Rockwell's (PORALiNK) 33600 INT PNP */
{ "WCI0003", 0 },
- /* Unkown PnP modems */
+ /* Unknown PnP modems */
{ "PNPCXXX", UNKNOWN_DEV },
- /* More unkown PnP modems */
+ /* More unknown PnP modems */
{ "PNPDXXX", UNKNOWN_DEV },
{ "", 0 }
};
}
/*
- * Register acessors. Note that we don't need to enforce a recovery
+ * Register accessors. Note that we don't need to enforce a recovery
* delay on PCI PowerMac hardware, it's dealt in HW by the MacIO chip,
* though if we try to use this driver on older machines, we might have
* to add it back
* This function will attempt to stuff of all the characters from the
* kernel's transmit buffer into TX BDs.
*
- * A return value of non-zero indicates that it sucessfully stuffed all
+ * A return value of non-zero indicates that it successfully stuffed all
* characters from the kernel buffer.
*
* A return value of zero indicates that there are still characters in the
ixj_WriteDSPCommand(0x1224, j);
ixj_WriteDSPCommand(0xE014, j);
- ixj_WriteDSPCommand(0x0003, j); /* Lock threashold at 3dB */
+ ixj_WriteDSPCommand(0x0003, j); /* Lock threshold at 3dB */
ixj_WriteDSPCommand(0xE338, j); /* Set Echo Suppresser Attenuation to 0dB */
ixj_WriteDSPCommand(0x1224, j);
ixj_WriteDSPCommand(0xE014, j);
- ixj_WriteDSPCommand(0x0003, j); /* Lock threashold at 3dB */
+ ixj_WriteDSPCommand(0x0003, j); /* Lock threshold at 3dB */
ixj_WriteDSPCommand(0xE338, j); /* Set Echo Suppresser Attenuation to 0dB */
goto bad1;
/* FIXME : ADI930 reply wrong preambule (func = 2, sub = 2) to
- * the first MEMACESS cmv. Ignore it...
+ * the first MEMACCESS cmv. Ignore it...
*/
if (cmv->bFunction != dsc->function) {
if (UEA_CHIP_VERSION(sc) == ADI930
/**
- * drivers/usb/class/usbtmc.c - USB Test & Measurment class driver
+ * drivers/usb/class/usbtmc.c - USB Test & Measurement class driver
*
* Copyright (C) 2007 Stefan Kopp, Gechingen, Germany
* Copyright (C) 2008 Novell, Inc.
* routine gets around the normal restrictions by using a work thread to
* submit the change-config request.
*
- * Returns 0 if the request was succesfully queued, error code otherwise.
+ * Returns 0 if the request was successfully queued, error code otherwise.
* The caller has no way to know whether the queued request will eventually
* succeed.
*/
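The "work thread" approach described above is the usual deferred-work pattern: queue a work item and return at once, letting the blocking part run later in process context. A minimal sketch under stated assumptions -- config_request, config_request_fn and queue_config_request are hypothetical names, the GFP flags depend on the calling context, and this is not the driver's actual code:

#include <linux/slab.h>
#include <linux/workqueue.h>

struct config_request {
	struct work_struct work;
	int config;			/* hypothetical payload */
};

static void config_request_fn(struct work_struct *work)
{
	struct config_request *req =
		container_of(work, struct config_request, work);

	/* the part that may block or sleep runs here, in process context */
	kfree(req);
}

static int queue_config_request(int config)
{
	struct config_request *req = kzalloc(sizeof(*req), GFP_KERNEL);

	if (!req)
		return -ENOMEM;
	req->config = config;
	INIT_WORK(&req->work, config_request_fn);
	schedule_work(&req->work);
	return 0;	/* queued; the eventual outcome is not reported here */
}

Returning 0 only means the request was queued, which matches the caveat in the comment that the caller cannot learn whether the deferred request eventually succeeds.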
* @length: size of data
* Context: irqs blocked, acm->lock held, acm_notify_req non-null
*
- * Returns zero on sucess or a negative errno.
+ * Returns zero on success or a negative errno.
*
* See section 6.3.5 of the CDC 1.1 specification for information
* about the only notification we issue: SerialState change.
* pxa_udc_wakeup - Force udc device out of suspend
* @_gadget: usb gadget
*
- * Returns 0 if succesfull, error code otherwise
+ * Returns 0 if successful, error code otherwise
*/
static int pxa_udc_wakeup(struct usb_gadget *_gadget)
{
/*
- * Process normal completions(error or sucess) and clean the schedule.
+ * Process normal completions (error or success) and clean the schedule.
*
* This is the main path for handing urbs back to drivers. The only other patth
* is process_del_list(),which unlinks URBs by scanning EDs,instead of scanning
*
* CCM uses Ax blocks to generate a keystream with which the MIC and
* the message's payload are encoded. A0 always encrypts/decrypts the
- * MIC. Ax (x>0) are used for the sucesive payload blocks.
+ * MIC. Ax (x>0) are used for the successive payload blocks.
*
* The x is the counter, and is increased for each block.
*/
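As a rough illustration of the Ax blocks mentioned above, this is how a CCM counter block is laid out in generic RFC 3610 terms. The 13-byte nonce, the 2-byte counter field (L = 2) and the function name are assumptions made for the sketch, not taken from the driver:

#include <linux/string.h>
#include <linux/types.h>

static void ccm_make_a_block(u8 a[16], const u8 nonce[13], u16 counter)
{
	a[0] = 2 - 1;			/* flags byte: L - 1, with L = 2 */
	memcpy(&a[1], nonce, 13);	/* per-message nonce */
	a[14] = counter >> 8;		/* block counter, big-endian */
	a[15] = counter & 0xff;		/* A0 keys the MIC, A1.. the payload */
}

Encrypting A0 with the block cipher yields the keystream for the MIC; encrypting A1, A2, ... yields the keystream for the successive 16-byte payload blocks, with the counter incremented once per block.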
/*
* Callback for the segment request
*
- * If succesful transition state (unless already transitioned or
+ * If successful transition state (unless already transitioned or
* outbound transfer); otherwise, take a note of the error, mark this
* segment done and try completion.
*
/*
* Callback for the IN data phase
*
- * If succesful transition state; otherwise, take a note of the
+ * If successful transition state; otherwise, take a note of the
* error, mark this segment done and try completion.
*
* Note we don't access until we are sure that the transfer hasn't
* will verify it.
*
* Set i1480->evt_result with the result of getting the event or its
- * size (if succesful).
+ * size (if successful).
*
* Delivers the data directly to i1480->evt_buf
*/
* and transmission will be done by the calling function.
* @dst: On return this will contain the device address to which the
* frame is destined.
- * @returns: 0 on success no tx : WLP header sucessfully applied to skb buffer,
+ * @returns: 0 on success no tx : WLP header successfully applied to skb buffer,
* calling function can proceed with tx
* 1 on success with tx : WLP will take over transmission of this
* frame
txtformat = "24 bit interface";
break;
default:
- txtformat = "unkown format";
+ txtformat = "unknown format";
}
} else {
switch (format & 7) {
txtformat = "262144 colours (FDPI-2 mode)";
break;
default:
- txtformat = "unkown format";
+ txtformat = "unknown format";
}
}
PRINTKI("%s%s %s monitor detected: %s\n",
goto err_free_pwm;
}
- /* Turn display off by defatult. */
+ /* Turn display off by default. */
retval = gpio_direction_output(pwmbl->gpio_on,
0 ^ pdata->on_active_low);
if (retval)
if (!data)
return -ENOMEM;
- data->is_vga = true; /* defaut to VGA mode */
+ data->is_vga = true; /* default to VGA mode */
/*
* bits_per_word cannot be configured in platform data
(offs < PCI_BASE_ADDRESS_0 ||
offs > PCI_BASE_ADDRESS_5)) {
printk (KERN_WARNING
- "STI pci region maping for region %d (%02x) can't be mapped\n",
+ "STI pci region mapping for region %d (%02x) can't be mapped\n",
i,sti->rm_entry[i]);
continue;
}
blocks of 512x128, 256x128 or 128x128 pixels, respectively for 8bit,
16bit and 32 bit modes (64 kB). They cover the screen with partial
tiles on the right and/or bottom of the screen if needed.
- For exemple in 640x480 8 bit mode the mapping is:
+ For example in 640x480 8 bit mode the mapping is:
<-------- 640 ----->
<---- 512 ----><128|384 offscreen>
if (fb->info.var.bits_per_pixel == 32)
controlPlaneReg = 0x04000F00;
else
- controlPlaneReg = 0x00000F00; /* 0x00000800 should be enought, but lets clear all 4 bits */
+			controlPlaneReg = 0x00000F00; /* 0x00000800 should be enough, but let's clear all 4 bits */
else
- controlPlaneReg = 0x00000F00; /* 0x00000100 should be enought, but lets clear all 4 bits */
+		controlPlaneReg = 0x00000F00; /* 0x00000100 should be enough, but let's clear all 4 bits */
switch (enable) {
case ENABLE:
*
* 0.1.3 (released 1999-11-02) added Attila's panning support, code
* reorg, hwcursor address page size alignment
- * (for mmaping both frame buffer and regs),
+ * (for mmapping both frame buffer and regs),
* and my changes to get rid of hardcoded
* VGA i/o register locations (uses PCI
* configuration info now)
default:
viaparinfo->tmds_setting_info->dvi_panel_size =
VIA_RES_1024X768;
- DEBUG_MSG(KERN_INFO "Unknow panel size max resolution = %d !\
+ DEBUG_MSG(KERN_INFO "Unknown panel size max resolution = %d !\
set default panel size.\n", max_h);
break;
}
default:
viaparinfo->tmds_setting_info->dvi_panel_size =
VIA_RES_1024X768;
- DEBUG_MSG(KERN_INFO "Unknow panel size max resolution = %d!\
+ DEBUG_MSG(KERN_INFO "Unknown panel size max resolution = %d!\
set default panel size.\n", HSize);
break;
}
svga_wseq_mask(0x1E, 0xF0, 0xF0); // DI/DVP bus
svga_wseq_mask(0x2A, 0x0F, 0x0F); // DI/DVP bus
- svga_wseq_mask(0x16, 0x08, 0xBF); // FIFO read treshold
+ svga_wseq_mask(0x16, 0x08, 0xBF); // FIFO read threshold
vga_wseq(NULL, 0x17, 0x1F); // FIFO depth
vga_wseq(NULL, 0x18, 0x4E);
svga_wseq_mask(0x1A, 0x08, 0x08); // enable MMIO ?
* deactivating the watchdog before it is shut down by it.
*
* NOTE: on future versions of the watchdog, this restriction is
- * gone: the watchdog will be reloaded with a defaul value (1 min)
+ * gone: the watchdog will be reloaded with a default value (1 min)
* instead of last value, and you can conveniently set the watchdog
* timeout to 10ms (value = 1) without any problems.
*/
* wd#1 - 2 seconds;
* wd#2 - 7.2 ms;
* After the expiration of wd#1, it can generate a NMI, SCI, SMI, or
- * a system RESET and it starts wd#2 that unconditionaly will RESET
+ * a system RESET and it starts wd#2 that unconditionally will RESET
* the system when the counter reaches zero.
*
* 14-Dec-2001 Matt Domsch <Matt_Domsch@dell.com>
/**
* wdrtas_get_tokens - reads in RTAS tokens
*
- * returns 0 on succes, <0 on failure
+ * returns 0 on success, <0 on failure
*
* wdrtas_get_tokens reads in the tokens for the RTAS calls used in
* this watchdog driver. It tolerates, if "get-sensor-state" and
/**
* wdrtas_register_devs - registers the misc dev handlers
*
- * returns 0 on succes, <0 on failure
+ * returns 0 on success, <0 on failure
*
* wdrtas_register_devs registers the watchdog and temperature watchdog
* misc devs
/**
* wdrtas_init - init function of the watchdog driver
*
- * returns 0 on succes, <0 on failure
+ * returns 0 on success, <0 on failure
*
* registers the file handlers and the reboot notifier
*/
current->mm->start_stack = bprm->p;
- /* Now we do a little grungy work by mmaping the ELF image into
+ /* Now we do a little grungy work by mmapping the ELF image into
the correct location in memory. */
for(i = 0, elf_ppnt = elf_phdata;
i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
* for a &struct bio to become free. If a %NULL @bs is passed in, we will
* fall back to just using @kmalloc to allocate the required memory.
*
- * Note that the caller must set ->bi_destructor on succesful return
+ * Note that the caller must set ->bi_destructor on successful return
* of a bio, to do the appropriate freeing of the bio once the reference
* count drops to zero.
**/
* Insert @em into @tree or perform a simple forward/backward merge with
* existing mappings. The extent_map struct passed in will be inserted
* into the tree directly, with an additional reference taken, or a
- * reference dropped if the merge attempt was sucessfull.
+ * reference dropped if the merge attempt was successful.
*/
int add_extent_mapping(struct extent_map_tree *tree,
struct extent_map *em)
source name to use to represent the client netbios machine
name when doing the RFC1001 netbios session initialize.
direct Do not do inode data caching on files opened on this mount.
- This precludes mmaping files on this mount. In some cases
+ This precludes mmapping files on this mount. In some cases
with fast networks and little or no caching benefits on the
client (e.g. when the application is doing large sequential
reads bigger than page size without rereading the same data)
/*
* MAX_REQ is the maximum number of requests that WE will send
- * on one socket concurently. It also matches the most common
+ * on one socket concurrently. It also matches the most common
* value of max multiplex returned by servers. We may
* eventually want to use the negotiated value (in case
* future servers can handle more) when we are more confident that
/*
* If dentry->d_inode is null (usually meaning the cached dentry
* is a negative dentry) then we would attempt a standard SMB delete, but
- * if that fails we can not attempt the fall back mechanisms on EACESS
- * but will return the EACESS to the caller. Note that the VFS does not call
+ * if that fails we can not attempt the fall back mechanisms on EACCES
+ * but will return the EACCES to the caller. Note that the VFS does not call
* unlink on negative dentries currently.
*/
int cifs_unlink(struct inode *dir, struct dentry *dentry)
smbhash(p24 + 16, c8, p21 + 14, 1);
}
-#if 0 /* currently unsued */
+#if 0 /* currently unused */
static void
D_P16(unsigned char *p14, unsigned char *in, unsigned char *out)
{
}
EXPORT_SYMBOL_GPL(dlm_posix_lock);
-/* Returns failure iff a succesful lock operation should be canceled */
+/* Returns failure iff a successful lock operation should be canceled */
static int dlm_plock_callback(struct plock_op *op)
{
struct file *file;
ret = write_cache_pages(mapping, wbc, __mpage_da_writepage,
&mpd);
/*
- * If we have a contigous extent of pages and we
+ * If we have a contiguous extent of pages and we
* haven't done the I/O yet, map the blocks and submit
* them for I/O.
*/
- * worse case, the indexs blocks spread over different block groups
+ * worst case, the index blocks spread over different block groups
*
* If datablocks are discontiguous, they are possible to spread over
- * different block groups too. If they are contiugous, with flexbg,
- * they could still across block group boundary.
+ * different block groups too. If they are contiguous, with flexbg,
+ * they could still cross a block group boundary.
*
* Also account for superblock, inode, quota and xattr blocks
* Calculate the journal credits for a chunk of data modification.
*
* This is called from DIO, fallocate or whoever calling
- * ext4_get_blocks() to map/allocate a chunk of contigous disk blocks.
+ * ext4_get_blocks() to map/allocate a chunk of contiguous disk blocks.
*
* journal buffers for data blocks are not included here, as DIO
- * and fallocate do no need to journal data buffers.
+ * and fallocate do not need to journal data buffers.
* 2 blocks and the order of allocation is >= sbi->s_mb_order2_reqs. The
* value of s_mb_order2_reqs can be tuned via
* /sys/fs/ext4/<partition>/mb_order2_req. If the request len is equal to
- * stripe size (sbi->s_stripe), we try to search for contigous block in
+ * stripe size (sbi->s_stripe), we try to search for contiguous block in
* stripe size. This should result in better allocation on RAID setups. If
* not, we search in the specific group using bitmap for best extents. The
* tunable min_to_scan and max_to_scan control the behaviour here.
spin_unlock(&jffs2_compressor_list_lock);
break;
default:
- printk(KERN_ERR "JFFS2: unknow compression mode.\n");
+ printk(KERN_ERR "JFFS2: unknown compression mode.\n");
}
out:
if (ret == JFFS2_COMPR_NONE) {
* Helper function for jffs2_get_inode_nodes().
* The function detects whether more data should be read and reads it if yes.
*
- * Returns: 0 on succes;
+ * Returns: 0 on success;
* negative error code on failure.
*/
static int read_more(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *ref,
* is used to release xattr name/value pair and detach from c->xattrindex.
* reclaim_xattr_datum(c)
* is used to reclaim xattr name/value pairs on the xattr name/value pair cache when
- * memory usage by cache is over c->xdatum_mem_threshold. Currentry, this threshold
+ * memory usage by cache is over c->xdatum_mem_threshold. Currently, this threshold
* is hard coded as 32KiB.
* do_verify_xattr_datum(c, xd)
- * is used to load the xdatum informations without name/value pair from the medium.
+ * is used to load the xdatum information without the name/value pair from the medium.
* allocation group.
*/
if ((blkno & (bmp->db_agsize - 1)) == 0)
- /* check if the AG is currenly being written to.
+ /* check if the AG is currently being written to.
* if so, call dbNextAG() to find a non-busy
* AG with sufficient free space.
*/
for (i = 0, n = 0; i < agno; n++) {
bmp->db_agfree[n] = 0; /* init collection point */
- /* coalesce cotiguous k AGs; */
+ /* coalesce contiguous k AGs; */
for (j = 0; j < k && i < agno; j++, i++) {
/* merge AGi to AGn */
bmp->db_agfree[n] += bmp->db_agfree[i];
case NCP_IOC_SETROOT:
return 0;
default:
- /* unkown IOCTL command, assume write */
+ /* unknown IOCTL command, assume write */
return 1;
}
}
return 0;
ntfs_debug("Failed. Returning error code %s.", err == -EOVERFLOW ?
- "EOVERFLOW" : (!err ? "EIO" : "unkown error"));
+ "EOVERFLOW" : (!err ? "EIO" : "unknown error"));
return err < 0 ? err : -EIO;
read_err:
* @cached_page: allocated but as yet unused page
* @lru_pvec: lru-buffering pagevec of caller
*
- * Obtain @nr_pages locked page cache pages from the mapping @maping and
+ * Obtain @nr_pages locked page cache pages from the mapping @mapping and
* starting at index @index.
*
* If a page is newly created, increment its refcount and add it to the
/*
* Copy as much as we can into the pages and return the number of bytes which
- * were sucessfully copied. If a fault is encountered then clear the pages
+ * were successfully copied. If a fault is encountered then clear the pages
* out to (ofs + bytes) and return the number of bytes which were copied.
*/
static inline size_t ntfs_copy_from_user(struct page **pages,
* copy of the complete multi sector transfer deprotected page. On failure,
* *@wrp is undefined.
*
- * Simillarly, if @lsn is not NULL, on succes *@lsn will be set to the current
+ * Similarly, if @lsn is not NULL, on success *@lsn will be set to the current
* logfile lsn according to this restart page. On failure, *@lsn is undefined.
*
* The following error codes are defined:
*
* The array is assumed to be large enough to hold an entire path (tree depth).
*
- * Upon succesful return from this function:
+ * Upon successful return from this function:
*
* - The 'right_path' array will contain a path to the leaf block
* whose range contains e_cpos.
* is complete everywhere. if the target dies while this is
* going on, some nodes could potentially see the target as the
* master, so it is important that my recovery finds the migration
- * mle and sets the master to UNKNONWN. */
+ * mle and sets the master to UNKNOWN. */
/* wait for new node to assert master */
* outstanding lock request, so a cancel convert is
* required. We intentionally overwrite 'ret' - if the
* cancel fails and the lock was granted, it's easier
- * to just bubble sucess back up to the user.
+ * to just bubble success back up to the user.
*/
ret = ocfs2_flock_handle_signal(lockres, level);
} else if (!ret && (level > lockres->l_level)) {
default:
status = -EINVAL;
- mlog(ML_ERROR, "Uknown access type!\n");
+ mlog(ML_ERROR, "Unknown access type!\n");
}
if (!status && ocfs2_meta_ecc(osb) && triggers)
jbd2_journal_set_triggers(bh, &triggers->ot_triggers);
* we gonna touch and whether we need to create new blocks.
*
* Normally the refcount blocks store these refcount should be
- * continguous also, so that we can get the number easily.
+ * contiguous also, so that we can get the number easily.
* As for meta_ac, we will at most add split 2 refcount record and
* 2 more refcount block, so just check it in a rough way.
*
}
/*
- * Tries to allocate exactly one block. Returns true if sucessful.
+ * Tries to allocate exactly one block. Returns true if successful.
*/
int omfs_allocate_block(struct super_block *sb, u64 block)
{
/*
* This file implements functions needed to recover from unclean un-mounts.
* When UBIFS is mounted, it checks a flag on the master node to determine if
- * an un-mount was completed sucessfully. If not, the process of mounting
- * incorparates additional checking and fixing of on-flash data structures.
+ * an un-mount was completed successfully. If not, the process of mounting
+ * incorporates additional checking and fixing of on-flash data structures.
* UBIFS always cleans away all remnants of an unclean un-mount, so that
* errors do not accumulate. However UBIFS defers recovery if it is mounted
#define dq_flags q_lists.dqm_flags
/*
- * Lock hierachy for q_qlock:
+ * Lock hierarchy for q_qlock:
* XFS_QLOCK_NORMAL is the implicit default,
* XFS_QLOCK_NESTED is the dquot with the higher id in xfs_dqlock2
*/
#elif defined(CONFIG_SPARSEMEM_VMEMMAP)
-/* memmap is virtually contigious. */
+/* memmap is virtually contiguous. */
#define __pfn_to_page(pfn) (vmemmap + (pfn))
#define __page_to_pfn(page) (unsigned long)((page) - vmemmap)
* these are provided for both review and as a porting
* help for the C library version.
*
- * Last chance: are any of these important enought to
+ * Last chance: are any of these important enough to
* enable by default?
*/
#ifdef __ARCH_WANT_SYSCALL_NO_AT
* query vendor-specific element types
*
- * accessing elements works by specifing type and unit of the element.
- * for eample, storage elements are addressed with type = CHET_ST and
+ * accessing elements works by specifying the type and unit of the element.
+ * for example, storage elements are addressed with type = CHET_ST and
* unit = 0 .. cp_nslots-1
*
*/
#define PCAP_CLEAR_INTERRUPT_REGISTER 0x01ffffff
#define PCAP_MASK_ALL_INTERRUPT 0x01ffffff
-/* registers acessible by both pcap ports */
+/* registers accessible by both pcap ports */
#define PCAP_REG_ISR 0x0 /* Interrupt Status */
#define PCAP_REG_MSR 0x1 /* Interrupt Mask */
#define PCAP_REG_PSTAT 0x2 /* Processor Status */
#define PCAP_REG_VENDOR_TEST1 0x1e
#define PCAP_REG_VENDOR_TEST2 0x1f
-/* registers acessible by pcap port 1 only (a1200, e2 & e6) */
+/* registers accessible by pcap port 1 only (a1200, e2 & e6) */
#define PCAP_REG_INT_SEL 0x3 /* Interrupt Select */
#define PCAP_REG_SWCTRL 0x4 /* Switching Regulator Control */
#define PCAP_REG_VREG1 0x5 /* Regulator Bank 1 Control */
/*
* use drive write caching -- we need deferred error handling to be
- * able to sucessfully recover with this option (drive will return good
+ * able to successfully recover with this option (drive will return good
* status as soon as the cdb is validated).
*/
#if defined(CONFIG_CDROM_PKTCDVD_WCACHE)
#define UART_IIR_TOD 0x08 /* Character Timeout Indication Detected */
-#define UART_FCR_PXAR1 0x00 /* receive FIFO treshold = 1 */
-#define UART_FCR_PXAR8 0x40 /* receive FIFO treshold = 8 */
-#define UART_FCR_PXAR16 0x80 /* receive FIFO treshold = 16 */
-#define UART_FCR_PXAR32 0xc0 /* receive FIFO treshold = 32 */
+#define UART_FCR_PXAR1 0x00 /* receive FIFO threshold = 1 */
+#define UART_FCR_PXAR8 0x40 /* receive FIFO threshold = 8 */
+#define UART_FCR_PXAR16 0x80 /* receive FIFO threshold = 16 */
+#define UART_FCR_PXAR32 0xc0 /* receive FIFO threshold = 32 */
* you do, leave them untouched.
- * Inluding less markers will make the
+ * Including fewer markers will make the
* resulting code smaller, but there will
- * be fewer aplications which can read it.
+ * be fewer applications which can read it.
* The presence of the APP and COM marker
* is influenced by APP_len and COM_len
* ONLY, not by this property! */
int init_sent_count;
/* state : The current state of this destination,
- * : i.e. SCTP_ACTIVE, SCTP_INACTIVE, SCTP_UNKOWN.
+ * : i.e. SCTP_ACTIVE, SCTP_INACTIVE, SCTP_UNKNOWN.
*/
int state;
skb_queue_walk_from_safe(&(sk)->sk_write_queue, skb, tmp)
/* This function calculates a "timeout" which is equivalent to the timeout of a
- * TCP connection after "boundary" unsucessful, exponentially backed-off
+ * TCP connection after "boundary" unsuccessful, exponentially backed-off
* retransmissions with an initial RTO of TCP_RTO_MIN.
*/
static inline bool retransmits_timed_out(const struct sock *sk,
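The arithmetic behind that comment can be sketched as follows. This is only an
illustration under the assumption of pure exponential doubling of the RTO; it is
not the kernel's retransmits_timed_out(), which also clamps each step at
TCP_RTO_MAX.

/*
 * Hypothetical sketch: with an initial RTO of rto_ms and each retransmission
 * doubling the previous wait (and no TCP_RTO_MAX clamp), the total time spent
 * after "boundary" unsuccessful retransmissions is rto_ms * (2^boundary - 1).
 */
#include <stdio.h>

static unsigned long backoff_timeout_ms(unsigned long rto_ms, unsigned int boundary)
{
	return rto_ms * ((1UL << boundary) - 1);
}

int main(void)
{
	/* e.g. a 200 ms initial RTO and 5 retransmissions: 200 * 31 = 6200 ms */
	printf("%lu ms\n", backoff_timeout_ms(200, 5));
	return 0;
}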
* drivers have to only report state changes due to external
* conditions.
*
- * All API operations are 'atomic', serialized thorough a mutex in the
+ * All API operations are 'atomic', serialized through a mutex in the
* `struct wimax_dev`.
*
* EXPORTING TO USER SPACE THROUGH GENERIC NETLINK
unsigned int micbias1_lvl:1;
unsigned int micbias2_lvl:1;
- /* Jack detect threashold levels, see datasheet for values */
+ /* Jack detect threshold levels, see datasheet for values */
unsigned int jd_scthr:2;
unsigned int jd_thr:2;
};
if (!task) {
/*
* Per cpu events are removed via an smp call and
- * the removal is always sucessful.
+ * the removal is always successful.
*/
smp_call_function_single(event->cpu,
__perf_event_remove_from_context,
if (!task) {
/*
* Per cpu events are installed via an smp call and
- * the install is always sucessful.
+ * the install is always successful.
*/
smp_call_function_single(cpu, __perf_install_in_context,
event, 1);
bool "Enable full Section mismatch analysis"
depends on UNDEFINED
# This option is on purpose disabled for now.
- # It will be enabled when we are down to a resonable number
+ # It will be enabled when we are down to a reasonable number
# of section mismatch warnings (< 10 for an allyesconfig build)
help
The section mismatch analysis checks if there are illegal
again when using them (during symbol decoding).*/
base = hufGroup->base-1;
limit = hufGroup->limit-1;
- /* Calculate permute[]. Concurently, initialize
+ /* Calculate permute[]. Concurrently, initialize
* temp[] and limit[]. */
pp = 0;
for (i = minLen; i <= maxLen; i++) {
* times. Without a hardware IOMMU this results in the
* same device addresses being put into the dma-debug
* hash multiple times too. This can result in false
- * positives being reported. Therfore we implement a
+ * positives being reported. Therefore we implement a
* best-fit algorithm here which returns the entry from
* the hash which fits best to the reference value
* instead of the first-fit.
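As a rough idea of what best-fit means here, the sketch below is a generic
illustration over a flat array of candidate addresses; it is not the dma-debug
hash code itself.

/*
 * Generic best-fit illustration: among the candidate addresses, return the
 * index of the one closest to the reference address, instead of simply
 * taking the first match (first-fit).
 */
static int best_fit(const unsigned long *candidates, int n, unsigned long ref)
{
	unsigned long best_diff = ~0UL;
	int best = -1;
	int i;

	for (i = 0; i < n; i++) {
		unsigned long diff = candidates[i] > ref ?
				     candidates[i] - ref : ref - candidates[i];

		if (diff < best_diff) {
			best_diff = diff;
			best = i;
		}
	}
	return best;	/* -1 when there are no candidates */
}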
/*
* Return the buffer to the free list by setting the corresponding
- * entries to indicate the number of contigous entries available.
+ * entries to indicate the number of contiguous entries available.
* While returning the entries to the free list, we merge the entries
* with slots below and above the pool being returned.
*/
/*
* Copy as much as we can into the page and return the number of bytes which
- * were sucessfully copied. If a fault is encountered then return the number of
+ * were successfully copied. If a fault is encountered then return the number of
* bytes which were copied.
*/
size_t iov_iter_copy_from_user_atomic(struct page *page,
int prev_priority; /* for recording reclaim priority */
/*
- * While reclaiming in a hiearchy, we cache the last child we
+ * While reclaiming in a hierarchy, we cache the last child we
* reclaimed from.
*/
int last_scanned_child;
cgroup_lock();
/*
- * If parent's use_hiearchy is set, we can't make any modifications
+ * If parent's use_hierarchy is set, we can't make any modifications
* in the child subtrees. If it is unset, then the change can
* occur, provided the current cgroup has no children.
*
list_for_each_entry_safe (tk, next, to_kill, nd) {
if (doit) {
/*
- * In case something went wrong with munmaping
+ * In case something went wrong with munmapping
* make sure the process doesn't catch the
* signal and then access the memory. Just kill it.
* the signal handlers
struct tcphdr _tcph, *tcph;
__be16 oldval;
- /* Not enought header? */
+ /* Not enough header? */
tcph = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph);
if (!tcph)
return false;
/* Check if we are in the right state for disconnecting */
switch (self->state) {
- case LAP_XMIT_P: /* FALLTROUGH */
- case LAP_XMIT_S: /* FALLTROUGH */
- case LAP_CONN: /* FALLTROUGH */
- case LAP_RESET_WAIT: /* FALLTROUGH */
+ case LAP_XMIT_P: /* FALLTHROUGH */
+ case LAP_XMIT_S: /* FALLTHROUGH */
+ case LAP_CONN: /* FALLTHROUGH */
+ case LAP_RESET_WAIT: /* FALLTHROUGH */
case LAP_RESET_CHECK:
irlap_do_event(self, DISCONNECT_REQUEST, NULL, NULL);
break;
IRDA_DEBUG(1, "%s(), Sending reset request!\n", __func__);
irlap_do_event(self, RESET_REQUEST, NULL, NULL);
break;
- case LAP_NO_RESPONSE: /* FALLTROUGH */
- case LAP_DISC_INDICATION: /* FALLTROUGH */
- case LAP_FOUND_NONE: /* FALLTROUGH */
+ case LAP_NO_RESPONSE: /* FALLTHROUGH */
+ case LAP_DISC_INDICATION: /* FALLTHROUGH */
+ case LAP_FOUND_NONE: /* FALLTHROUGH */
case LAP_MEDIA_BUSY:
irlmp_link_disconnect_indication(self->notify.instance, self,
reason, NULL);
* Function irlap_state_xmit_s (event, skb, info)
*
* XMIT_S, The secondary station has been given the right to transmit,
- * and we therefor do not expect to receive any transmissions from other
+ * and we therefore do not expect to receive any transmissions from other
* stations.
*/
static int irlap_state_xmit_s(struct irlap_cb *self, IRLAP_EVENT event,
init_timer(&irlmp->discovery_timer);
- /* Do discovery every 3 seconds, conditionaly */
+ /* Do discovery every 3 seconds, conditionally */
if (sysctl_discovery)
irlmp_start_discovery_timer(irlmp,
sysctl_discovery_timeout*HZ);
reason = LM_CONNECT_FAILURE;
break;
default:
- IRDA_DEBUG(1, "%s(), Unknow IrLAP disconnect reason %d!\n",
+ IRDA_DEBUG(1, "%s(), Unknown IrLAP disconnect reason %d!\n",
__func__, lap_reason);
reason = LM_LAP_DISCONNECT;
break;
* @addr: destination address of the path (ETH_ALEN length)
* @sdata: local subif
*
- * Returns: 0 on sucess
+ * Returns: 0 on success
*
* State: the initial state of the new path is set to 0
*/
* @addr: dst address (ETH_ALEN length)
* @sdata: local subif
*
- * Returns: 0 if succesful
+ * Returns: 0 if successful
*/
int mesh_path_del(u8 *addr, struct ieee80211_sub_if_data *sdata)
{
* buckets and @skip_chain entries. For each entry in the table call
* @callback, if @callback returns a negative value stop 'walking' through the
* table and return. Updates the values in @skip_bkt and @skip_chain on
- * return. Returns zero on succcess, negative values on failure.
+ * return. Returns zero on success, negative values on failure.
*
*/
int netlbl_domhsh_walk(u32 *skip_bkt,
if (sctp_style(sk, TCP)) {
/* Change the sk->sk_state of a TCP-style socket that has
- * sucessfully completed a connect() call.
+ * successfully completed a connect() call.
*/
if (sctp_state(asoc, ESTABLISHED) && sctp_sstate(sk, CLOSED))
sk->sk_state = SCTP_SS_ESTABLISHED;
* Assumptions:
* - head[0] is physically contiguous.
* - tail[0] is physically contiguous.
- * - pages[] is not physically or virtually contigous and consists of
+ * - pages[] is not physically or virtually contiguous and consists of
* PAGE_SIZE elements.
*
* Output:
* Called when wanting to reset the device for any reason. Device is
* taken back to power on status.
*
- * This call blocks; on succesful return, the device has completed the
+ * This call blocks; on successful return, the device has completed the
* reset process and is ready to operate.
*/
int wimax_reset(struct wimax_dev *wimax_dev)
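A minimal caller sketch, assuming only the signature shown above: since
wimax_reset() blocks until the reset finishes, the return value can be checked
directly. This is an illustration, not code from the wimax stack.

/*
 * Hypothetical caller: on a negative return the reset failed; otherwise the
 * device has completed the reset and is ready to operate again.
 */
static int example_recover_device(struct wimax_dev *wimax_dev)
{
	int result;

	result = wimax_reset(wimax_dev);	/* blocks until the reset completes */
	if (result < 0)
		return result;			/* propagate the error */

	return 0;				/* device back at power-on state */
}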
"to modify that configuration.\n"
"\n"
"If you are uncertain, then you have probably never used alternate\n"
- "configuration files. You should therefor leave this blank to abort.\n"),
+ "configuration files. You should therefore leave this blank to abort.\n"),
save_config_text[] = N_(
"Enter a filename to which this configuration should be saved "
"as an alternate. Leave blank to abort."),
*
* Description
* Call the NetLabel mechanism to set the label of a packet using @sid.
- * Returns zero on auccess, negative values on failure.
+ * Returns zero on success, negative values on failure.
*
*/
int selinux_netlbl_skbuff_setsid(struct sk_buff *skb,
goto out;
}
- /* type/domain unchaned */
+ /* type/domain unchanged */
if (old_context->type == new_context->type) {
rc = 0;
goto out;
Please read Documentation/feature-removal-schedule.txt for
details.
- If unusre, say Y.
+ If unsure, say Y.
source "sound/oss/dmasound/Kconfig"
{ .id = "CSC0437", .devs = { { "CSC0000" }, { "CSC0010" }, { "CSC0003" } } },
/* Digital PC 5000 Onboard - CS4236B */
{ .id = "CSC0735", .devs = { { "CSC0000" }, { "CSC0010" } } },
- /* some uknown CS4236B */
+ /* some unknown CS4236B */
{ .id = "CSC0b35", .devs = { { "CSC0000" }, { "CSC0010" }, { "CSC0003" } } },
/* Intel PR440FX Onboard sound */
{ .id = "CSC0b36", .devs = { { "CSC0000" }, { "CSC0010" }, { "CSC0003" } } },
static void snd_miro_proc_init(struct snd_miro * miro);
static char * snd_opti9xx_names[] = {
- "unkown",
+ "unknown",
"82C928", "82C929",
"82C924", "82C925",
"82C930", "82C931", "82C933"
#endif
static char * snd_opti9xx_names[] = {
- "unkown",
+ "unknown",
"82C928", "82C929",
"82C924", "82C925",
"82C930", "82C931", "82C933"
len += sprintf(buffer+len, "\tsound.volume_right = %d [0...64]\n",
dmasound.volume_right);
if (len >= space) {
- printk(KERN_ERR "dmasound_paula: overlowed state buffer alloc.\n") ;
+ printk(KERN_ERR "dmasound_paula: overflowed state buffer alloc.\n") ;
len = space ;
}
return len;
snd_iprintf(buffer, "user-defined\n");
break;
default:
- snd_iprintf(buffer, "unkown\n");
+ snd_iprintf(buffer, "unknown\n");
break;
}
snd_iprintf(buffer, "Sample Bits: ");
//
//
-// The purpose of this code is very simple: make it possible to tranfser
-// the samples 'as they are' with no alteration from a PCMreader SCB (DMA from host)
-// to any other SCB. This is useful for AC3 throug SPDIF. SRC (source rate converters)
-// task always alters the samples in some how, however it's from 48khz -> 48khz. The
-// alterations are not audible, but AC3 wont work.
+// The purpose of this code is very simple: make it possible to transfer
+// the samples 'as they are' with no alteration from a PCMreader
+// SCB (DMA from host) to any other SCB. This is useful for AC3 through SPDIF.
+// The SRC (source rate converters) task always alters the samples somehow,
+// even though it's only going from 48khz -> 48khz.
+// The alterations are not audible, but AC3 won't work.
//
// ...
// |
* The hardware has 3 channels for playback and 1 for capture.
* - channel 0 is the front channel
* - channel 1 is the rear channel
- * - channel 2 is the center/lfe chanel
+ * - channel 2 is the center/lfe channel
* Volume is controlled by the AC97 for the front and rear channels by
* the PCM Playback Volume, Sigmatel Surround Playback Volume and
* Surround Playback Volume. The Sigmatel 4-Speaker Stereo switch affects
struct hda_pcm pcm_rec[2]; /* PCM information */
- /* pin deafault configuration */
+ /* pin default configuration */
hda_nid_t pin_nid[NUM_PINS];
unsigned int def_conf[NUM_PINS];
unsigned int pin_def_confs;
/* Front Mic (0x01) unused */
{ "Line", 0x2 },
/* Line 2 (0x03) unused */
- /* CD (0x04) unsused? */
+ /* CD (0x04) unused? */
},
};
insel = "Coaxial";
break;
default:
- insel = "Unkown";
+ insel = "Unknown";
}
switch (hdspm->control_register & HDSPM_SyncRefMask) {
syncref = "MADI";
break;
default:
- syncref = "Unkown";
+ syncref = "Unknown";
}
snd_iprintf(buffer, "Inputsel = %s, SyncRef = %s\n", insel,
syncref);
pr_debug("%s reg: %02X, value:%02X\n", __func__, reg, value);
if (reg >= UDA134X_REGS_NUM) {
- printk(KERN_ERR "%s unkown register: reg: %u",
+ printk(KERN_ERR "%s unknown register: reg: %u",
__func__, reg);
return -EINVAL;
}
ARRAY_SIZE(uda1341_snd_controls));
break;
default:
- printk(KERN_ERR "%s unkown codec type: %d",
+ printk(KERN_ERR "%s unknown codec type: %d",
__func__, pd->model);
return -EINVAL;
}
SOC_SINGLE("DRC Switch", WM8903_DRC_0, 15, 1, 0),
SOC_ENUM("DRC Compressor Slope R0", drc_slope_r0),
SOC_ENUM("DRC Compressor Slope R1", drc_slope_r1),
-SOC_SINGLE_TLV("DRC Compressor Threashold Volume", WM8903_DRC_3, 5, 124, 1,
+SOC_SINGLE_TLV("DRC Compressor Threshold Volume", WM8903_DRC_3, 5, 124, 1,
drc_tlv_thresh),
SOC_SINGLE_TLV("DRC Volume", WM8903_DRC_3, 0, 30, 1, drc_tlv_amp),
SOC_SINGLE_TLV("DRC Minimum Gain Volume", WM8903_DRC_1, 2, 3, 1, drc_tlv_min),
SOC_ENUM("DRC FF Delay", drc_ff_delay),
SOC_SINGLE("DRC Anticlip Switch", WM8903_DRC_0, 1, 1, 0),
SOC_SINGLE("DRC QR Switch", WM8903_DRC_0, 2, 1, 0),
-SOC_SINGLE_TLV("DRC QR Threashold Volume", WM8903_DRC_0, 6, 3, 0, drc_tlv_max),
+SOC_SINGLE_TLV("DRC QR Threshold Volume", WM8903_DRC_0, 6, 3, 0, drc_tlv_max),
SOC_ENUM("DRC QR Decay Rate", drc_qr_decay),
SOC_SINGLE("DRC Smoothing Switch", WM8903_DRC_0, 3, 1, 0),
SOC_SINGLE("DRC Smoothing Hysteresis Switch", WM8903_DRC_0, 0, 1, 0),
-SOC_ENUM("DRC Smoothing Threashold", drc_smoothing),
+SOC_ENUM("DRC Smoothing Threshold", drc_smoothing),
SOC_SINGLE_TLV("DRC Startup Volume", WM8903_DRC_0, 6, 18, 0, drc_tlv_startup),
SOC_DOUBLE_R_TLV("Digital Capture Volume", WM8903_ADC_DIGITAL_VOLUME_LEFT,
SOC_SINGLE("DRC Switch", WM8993_DRC_CONTROL_1, 15, 1, 0),
SOC_ENUM("DRC Path", drc_path),
-SOC_SINGLE_TLV("DRC Compressor Threashold Volume", WM8993_DRC_CONTROL_2,
+SOC_SINGLE_TLV("DRC Compressor Threshold Volume", WM8993_DRC_CONTROL_2,
2, 60, 1, drc_comp_threash),
SOC_SINGLE_TLV("DRC Compressor Amplitude Volume", WM8993_DRC_CONTROL_3,
11, 30, 1, drc_comp_amp),
SOC_ENUM("DRC Quick Release Rate", drc_qr_rate),
SOC_SINGLE("DRC Smoothing Switch", WM8993_DRC_CONTROL_1, 11, 1, 0),
SOC_SINGLE("DRC Smoothing Hysteresis Switch", WM8993_DRC_CONTROL_1, 8, 1, 0),
-SOC_ENUM("DRC Smoothing Hysteresis Threashold", drc_smooth),
+SOC_ENUM("DRC Smoothing Hysteresis Threshold", drc_smooth),
SOC_SINGLE_TLV("DRC Startup Volume", WM8993_DRC_CONTROL_4, 8, 18, 0,
drc_startup_tlv),
gpio_direction_output(pd->amp_gain[1], 0);
}
- /* note, curently we assume GPA0 isn't valid amp */
+ /* note, currently we assume GPA0 isn't a valid amp */
if (pdata->amp_gpio > 0) {
ret = gpio_request(pd->amp_gpio, "gpio-amp");
if (ret) {
0 /* destination skip after chunk (impossible) */,
4 /* 16 byte burst size */,
-1 /* don't conserve bandwidth */,
- 0 /* low watermark irq descriptor theshold */,
+ 0 /* low watermark irq descriptor threshold */,
0 /* disable hardware timestamps */,
1 /* enable channel */);
* @dev: device pointer
*
* Allocate a special sound device by minor number from the sound
- * subsystem. The allocated number is returned on succes. On failure
+ * subsystem. The allocated number is returned on success. On failure
* a negative error code is returned.
*/