"gadget", "through", "command", "maintain", "maintain", "controller", "address",
"between", "initiali[zs]e", "instead", "function", "select", "already",
"equal", "access", "management", "hierarchy", "registration", "interest",
"relative", "memory", "offset", "already",
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
<title>Device ready function</title>
<para>
If the hardware interface has the ready busy pin of the NAND chip connected to a
- GPIO or other accesible I/O pin, this function is used to read back the state of the
+ GPIO or other accessible I/O pin, this function is used to read back the state of the
pin. The function has no arguments and should return 0 if the device is busy (R/B pin
is low) and 1 if the device is ready (R/B pin is high).
If the hardware interface does not give access to the ready busy pin, then
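A minimal sketch of such a callback, assuming the R/B line is wired to a GPIO; the GPIO number BOARD_NAND_RB_GPIO is a placeholder for whatever the board actually provides:

static int board_nand_dev_ready(void)
{
	/* R/B low means busy (return 0), R/B high means ready (return 1). */
	return gpio_get_value(BOARD_NAND_RB_GPIO) ? 1 : 0;
}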
if (ret == -1) {
perror("cgroup.event_control "
- "is not accessable any more");
+ "is not accessible any more");
break;
}
written to move_charge_at_immigrate.
9.10 Memory thresholds
- Memory controler implements memory thresholds using cgroups notification
+ Memory controller implements memory thresholds using cgroups notification
API. You can use Documentation/cgroups/cgroup_event_listener.c to test
it.
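As a rough sketch of what such a listener does (the cgroup mount point /cgroups/memory/0 and the 32M threshold are made-up values, and error handling is omitted), it creates an eventfd, writes "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to cgroup.event_control, and then waits on the eventfd:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	int efd = eventfd(0, 0);
	int ufd = open("/cgroups/memory/0/memory.usage_in_bytes", O_RDONLY);
	int cfd = open("/cgroups/memory/0/cgroup.event_control", O_WRONLY);
	char buf[64];
	uint64_t count;

	/* "<event_fd> <usage_fd> <threshold in bytes>" arms the threshold. */
	snprintf(buf, sizeof(buf), "%d %d %d", efd, ufd, 32 * 1024 * 1024);
	write(cfd, buf, strlen(buf));

	read(efd, &count, sizeof(count));	/* blocks until usage crosses 32M */
	printf("memory threshold crossed\n");
	return 0;
}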
a) The instructions in DCR must be relocatable.
b) The instructions in DCR must not include a call instruction.
c) JTPR must not be targeted by any jump or call instruction.
-d) DCR must not straddle the border betweeen functions.
+d) DCR must not straddle the border between functions.
Anyway, these limitations are checked by the in-kernel instruction
decoder, so you don't need to worry about that.
- KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
is waiting for an interrupt
- KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
- accesible via KVM_GET_VCPU_EVENTS)
+ accessible via KVM_GET_VCPU_EVENTS)
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace.
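For illustration only, userspace reads that state with the KVM_GET_MP_STATE ioctl; vcpu_fd below is assumed to be a vCPU file descriptor obtained earlier from KVM_CREATE_VCPU:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void report_mp_state(int vcpu_fd)
{
	struct kvm_mp_state mp;

	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0) {
		perror("KVM_GET_MP_STATE");
		return;
	}
	if (mp.mp_state == KVM_MP_STATE_HALTED)
		printf("vcpu halted, waiting for an interrupt\n");
	else
		printf("vcpu mp_state = %u\n", mp.mp_state);
}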
This function is called by the CAIF SPI interface to give
you a chance to set up your hardware to be ready to receive
a stream of data from the master. The xfer structure contains
- both physical and logical adresses, as well as the total length
+ both physical and logical addresses, as well as the total length
of the transfer in both directions. The dev parameter can be used
to map to different CAIF SPI slave devices.
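A hedged sketch of such a slave-side hook; the xfer field names and the board_spi_* helpers below are assumptions made for illustration, not the actual CAIF SPI API:

static int board_cfspi_init_xfer(struct cfspi_xfer *xfer, struct cfspi_dev *dev)
{
	struct board_spi *priv = dev->priv;	/* assumed driver-private data */

	/* Hand the controller the DMA addresses and lengths supplied by the
	 * CAIF SPI layer before the master starts clocking data. */
	board_spi_setup_rx(priv, xfer->pa_rx, xfer->rx_dma_len);	/* assumed fields */
	board_spi_setup_tx(priv, xfer->pa_tx, xfer->tx_dma_len);	/* assumed fields */
	return 0;
}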
* an arbitrary array of bytes
*/
- childnode@addresss { /* define a child node named "childnode"
+ childnode@address { /* define a child node named "childnode"
* whose unit name is "childnode at
* address"
*/
	* Background nodev_timeout processing to DPC. This enables us to
	  unblock (stop dev_loss_tmo) when appropriate.
* Fix array discovery with multiple luns. The max_luns was 0 at
- the time the host structure was intialized. lpfc_cfg_params
+ the time the host structure was initialized. lpfc_cfg_params
then set the max_luns to the correct value afterwards.
* Remove unused define LPFC_MAX_LUN and set the default value of
lpfc_max_lun parameter to 512.
- the pid of the task(process) which initialized the timer
- the name of the process which initialized the timer
-- the function where the timer was intialized
+- the function where the timer was initialized
- the callback function which is associated to the timer
- the number of events (callbacks)
/*
* The following functions are needed for DMA bouncing.
- * ITE8152 chip can addrees up to 64MByte, so all the devices
+ * ITE8152 chip can address up to 64MByte, so all the devices
* connected to ITE8152 (PCI and USB) should have limited DMA window
*/
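The 64 MByte limit corresponds to a 26-bit mask; purely as an illustration of the constraint (not the actual dmabounce wiring), a device behind the bridge would be capped like this:

	/* 64 MByte == 2^26 bytes: refuse bus addresses wider than 26 bits. */
	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(26)))
		dev_warn(&pdev->dev, "26-bit DMA mask not usable\n");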
* vic_init2 - common initialisation code
* @base: Base of the VIC.
*
- * Common initialisation code for registeration
+ * Common initialisation code for registration
* and resume.
*/
static void vic_init2(void __iomem *base)
.platform_data = &my_flash0_platform,
#endif
},
- { /* User accessable spi - cs1 (250KHz) */
+ { /* User accessible spi - cs1 (250KHz) */
.modalias = "spi-cs1",
.chip_select = 1,
.max_speed_hz = 250 * 1000,
},
- { /* User accessable spi - cs2 (1MHz) */
+ { /* User accessible spi - cs2 (1MHz) */
.modalias = "spi-cs2",
.chip_select = 2,
.max_speed_hz = 1 * 1000 * 1000,
},
- { /* User accessable spi - cs3 (10MHz) */
+ { /* User accessible spi - cs3 (10MHz) */
.modalias = "spi-cs3",
.chip_select = 3,
.max_speed_hz = 10 * 1000 * 1000,
t = t << 1;
}
- /* Intialize the result */
+ /* Initialize the result */
r = 0;
do {
*/
/****************************************************************************/
uint32_t dmacHw_getDmaControllerAttribute(dmacHw_HANDLE_t handle, /* [ IN ] DMA Channel handle */
- dmacHw_CONTROLLER_ATTRIB_e attr /* [ IN ] DMA Controler attribute of type dmacHw_CONTROLLER_ATTRIB_e */
+ dmacHw_CONTROLLER_ATTRIB_e attr /* [ IN ] DMA Controller attribute of type dmacHw_CONTROLLER_ATTRIB_e */
) {
dmacHw_CBLK_t *pCblk = dmacHw_HANDLE_TO_CBLK(handle);
/**
* @brief Check if DMA channel is the flow controller
*
-* @return 1 : If DMA is a flow controler
+* @return 1 : If DMA is a flow controller
* 0 : Peripheral is the flow controller
*
* @note
t = t << 1;
}
- /* Intialize the result */
+ /* Initialize the result */
r = 0;
do {
/****************************************************************************/
/**
-* Intializes all of the data structures associated with the DMA.
+* Initializes all of the data structures associated with the DMA.
* @return
* >= 0 - Initialization was successful.
*
*/
/****************************************************************************/
uint32_t dmacHw_getDmaControllerAttribute(dmacHw_HANDLE_t handle, /* [ IN ] DMA Channel handle */
- dmacHw_CONTROLLER_ATTRIB_e attr /* [ IN ] DMA Controler attribute of type dmacHw_CONTROLLER_ATTRIB_e */
+ dmacHw_CONTROLLER_ATTRIB_e attr /* [ IN ] DMA Controller attribute of type dmacHw_CONTROLLER_ATTRIB_e */
);
#endif /* _DMACHW_H */
/* Data type for DMA Link List Item */
typedef struct {
- uint32_t sar; /* Source Adress Register.
+ uint32_t sar; /* Source Address Register.
Address must be aligned to CTLx.SRC_TR_WIDTH. */
uint32_t dar; /* Destination Address Register.
Address must be aligned to CTLx.DST_TR_WIDTH. */
/* Data type representing DMA channel registers */
typedef struct {
- dmacHw_REG64_t ChannelSar; /* Source Adress Register. 64 bits (upper 32 bits are reserved)
+ dmacHw_REG64_t ChannelSar; /* Source Address Register. 64 bits (upper 32 bits are reserved)
Address must be aligned to CTLx.SRC_TR_WIDTH.
*/
dmacHw_REG64_t ChannelDar; /* Destination Address Register.64 bits (upper 32 bits are reserved)
#define GEMINI_LPC_HOST_BASE 0x47000000
#define GEMINI_LPC_IO_BASE 0x47800000
#define GEMINI_INTERRUPT_BASE 0x48000000
-/* TODO: Different interrupt controlers when SMP
+/* TODO: Different interrupt controllers when SMP
* #define GEMINI_INTERRUPT0_BASE 0x48000000
* #define GEMINI_INTERRUPT1_BASE 0x49000000
*/
{
if (mtype == MT_DEVICE) {
/* The peripherals in the 88000000 - D0000000 range
- * are only accessable by type MT_DEVICE_NONSHARED.
+ * are only accessible by type MT_DEVICE_NONSHARED.
* Adjust mtype as necessary to make this "just work."
*/
if ((phys_addr >= 0x88000000) && (phys_addr < 0xD0000000))
* FIXME: we currently manage device-specific idle states
* for PER and CORE in combination with CPU-specific
* idle states. This is wrong, and device-specific
- * idle managment needs to be separated out into
+ * idle management needs to be separated out into
* its own code.
*/
}
/**
- * omap_serial_init() - intialize all supported serial ports
+ * omap_serial_init() - initialize all supported serial ports
*
* Initializes all available UARTs as serial ports. Platforms
* can call this function when they want to have default behaviour
}
#endif
-/* USB Open Host Controler Interface */
+/* USB Open Host Controller Interface */
static struct pxaohci_platform_data mxm_8x10_ohci_platform_data = {
.port_mode = PMM_NPS_MODE,
.flags = ENABLE_PORT_ALL
/* Set all DMA configuration to be DMA, not SDMA */
writel(0xffffff, S3C_SYSREG(0x110));
- /* Register standard DMA controlers */
+ /* Register standard DMA controllers */
s3c64xx_dma_init1(0, DMACH_UART0, IRQ_DMA0, 0x75000000);
s3c64xx_dma_init1(8, DMACH_PCM1_TX, IRQ_DMA1, 0x75100000);
};
/* Add spear300 specific devices here */
-/* arm gpio1 device registeration */
+/* arm gpio1 device registration */
static struct pl061_platform_data gpio1_plat_data = {
.gpio_base = 8,
.irq_base = SPEAR_GPIO1_INT_BASE,
/* call spear3xx family common init function */
spear3xx_init();
- /* shared irq registeration */
+ /* shared irq registration */
shirq_ras1.regs.base =
ioremap(SPEAR300_TELECOM_BASE, SPEAR300_TELECOM_REG_SIZE);
if (shirq_ras1.regs.base) {
/* call spear3xx family common init function */
spear3xx_init();
- /* shared irq registeration */
+ /* shared irq registration */
base = ioremap(SPEAR310_SOC_CONFIG_BASE, SPEAR310_SOC_CONFIG_SIZE);
if (base) {
/* shirq 1 */
/* call spear3xx family common init function */
spear3xx_init();
- /* shared irq registeration */
+ /* shared irq registration */
base = ioremap(SPEAR320_SOC_CONFIG_BASE, SPEAR320_SOC_CONFIG_SIZE);
if (base) {
/* shirq 1 */
#include <mach/spear.h>
/* Add spear3xx machines common devices here */
-/* gpio device registeration */
+/* gpio device registration */
static struct pl061_platform_data gpio_plat_data = {
.gpio_base = 0,
.irq_base = SPEAR_GPIO_INT_BASE,
.irq = {IRQ_BASIC_GPIO, NO_IRQ},
};
-/* uart device registeration */
+/* uart device registration */
struct amba_device uart_device = {
.dev = {
.init_name = "uart",
pmx_fail:
if (ret)
- printk(KERN_ERR "padmux: registeration failed. err no: %d\n",
+ printk(KERN_ERR "padmux: registration failed. err no: %d\n",
ret);
}
#include <mach/spear.h>
/* Add spear6xx machines common devices here */
-/* uart device registeration */
+/* uart device registration */
struct amba_device uart_device[] = {
{
.dev = {
}
};
-/* gpio device registeration */
+/* gpio device registration */
static struct pl061_platform_data gpio_plat_data[] = {
{
.gpio_base = 0,
bool "Dual RAM"
help
Select this if you want support for Dual RAM phones.
- This is two RAM memorys on different EMIFs.
+ This is two RAM memories on different EMIFs.
endchoice
config U300_DEBUG
* @src_addr: transfer source address
* @dst_addr: transfer destination address
* @link_addr: physical address to next lli
- * @virt_link_addr: virtual addres of next lli (only used by pool_free)
+ * @virt_link_addr: virtual address of next lli (only used by pool_free)
* @phy_this: physical address of current lli (only used by pool_free)
*/
struct coh901318_lli {
* struct coh901318_platform - platform arch structure
* @chans_slave: specifying dma slave channels
* @chans_memcpy: specifying dma memcpy channels
- * @access_memory_state: requesting DMA memeory access (on / off)
+ * @access_memory_state: requesting DMA memory access (on / off)
* @chan_conf: dma channel configurations
 * @max_channels: max number of dma channels
*/
/* all normal IRQs can be FIQs */
#define FIQ_START 0
-/* switch betwean IRQ and FIQ */
+/* switch between IRQ and FIQ */
extern int mxc_set_irq_fiq(unsigned int irq, unsigned int type);
#endif /* __ASM_ARCH_MXC_IRQS_H__ */
/**
* struct omap_hwmod_omap4_prcm - OMAP4-specific PRCM data
* @clkctrl_reg: PRCM address of the clock control register
- * @rstctrl_reg: adress of the XXX_RSTCTRL register located in the PRM
+ * @rstctrl_reg: address of the XXX_RSTCTRL register located in the PRM
* @submodule_wkdep_bit: bit shift of the WKDEP range
*/
struct omap_hwmod_omap4_prcm {
#define SADD_LEN 0x0002 /* Slave Address Length */
#define STDVAL 0x0004 /* Slave Transmit Data Valid */
#define NAK 0x0008 /* NAK/ACK* Generated At Conclusion Of Transfer */
-#define GEN 0x0010 /* General Call Adrress Matching Enabled */
+#define GEN 0x0010 /* General Call Address Matching Enabled */
/* TWI_SLAVE_STAT Masks */
#define SDIR 0x0001 /* Slave Transfer Direction (Transmit/Receive*) */
#define SADD_LEN 0x0002 /* Slave Address Length */
#define STDVAL 0x0004 /* Slave Transmit Data Valid */
#define NAK 0x0008 /* NAK/ACK* Generated At Conclusion Of Transfer */
-#define GEN 0x0010 /* General Call Adrress Matching Enabled */
+#define GEN 0x0010 /* General Call Address Matching Enabled */
/* TWI_SLAVE_STAT Masks */
#define SDIR 0x0001 /* Slave Transfer Direction (Transmit/Receive*) */
#define SADD_LEN 0x0002 /* Slave Address Length */
#define STDVAL 0x0004 /* Slave Transmit Data Valid */
#define NAK 0x0008 /* NAK/ACK* Generated At Conclusion Of Transfer */
-#define GEN 0x0010 /* General Call Adrress Matching Enabled */
+#define GEN 0x0010 /* General Call Address Matching Enabled */
/* TWI_SLAVE_STAT Masks */
#define SDIR 0x0001 /* Slave Transfer Direction (Transmit/Receive*) */
#define SADD_LEN 0x0002 /* Slave Address Length */
#define STDVAL 0x0004 /* Slave Transmit Data Valid */
#define NAK 0x0008 /* NAK/ACK* Generated At Conclusion Of Transfer */
-#define GEN 0x0010 /* General Call Adrress Matching Enabled */
+#define GEN 0x0010 /* General Call Address Matching Enabled */
/* TWIx_SLAVE_STAT Masks */
#define SDIR 0x0001 /* Slave Transfer Direction (Transmit/Receive*) */
lsrq 8, $r4
move.b $r4, [$r1] ; Row address
lsrq 8, $r4
- move.b $r4, [$r1] ; Row adddress
+ move.b $r4, [$r1] ; Row address
moveq 20, $r4
2: bne 2b
subq 1, $r4
/*
- * The following devices are accessable using this driver using
+ * The following devices are accessible using this driver using
* GPIO_MAJOR (120) and a couple of minor numbers.
*
* For ETRAX 100LX (CONFIG_ETRAX_ARCH_V10):
builtin kernel commandline enabled.
config KERNEL_COMMAND
- string "Buildin commmand string"
+ string "Buildin command string"
depends on DEFAULT_CMDLINE
help
builtin kernel commandline strings.
local_irq_save(psr);
- /*Intercept the acces for PIB range*/
+ /*Intercept the access for PIB range*/
if (iot == GPFN_PIB) {
if (!dir)
lsapic_write(vcpu, src_pa, s, *dest);
au_writel(sleep_usb[1], USBD_ENABLE);
au_sync();
#else
- /* enable accces to OTG memory */
+ /* enable access to OTG memory */
au_writel(au_readl(USB_MSR_BASE + 4) | (1 << 6), USB_MSR_BASE + 4);
au_sync();
}
/* These are not portable and should not be used in drivers. Drivers should
- * be using ioremap() and friends to map physical addreses to virtual
+ * be using ioremap() and friends to map physical addresses to virtual
* addresses and dma_map*() and friends to map virtual addresses into DMA
* addresses and back.
*/
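The portable pattern looks roughly like this (a sketch; res, dev, buf and len are placeholders for a driver's resource, device and buffer):

	/* Map the register window instead of dereferencing a physical address. */
	void __iomem *regs = ioremap(res->start, resource_size(res));

	/* Let the DMA API translate a kernel buffer into a bus address. */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;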
/* Early prototypes of the QI LB60 had only 1GB of NAND.
 * In order to support these devices as well, the partition and ecc layout is
- * initalized depending on the NAND size */
+ * initialized depending on the NAND size */
static struct mtd_partition qi_lb60_partitions_1gb[] = {
{
.name = "NAND BOOT partition",
board_gpio_setup();
if (qi_lb60_init_platform_devices())
- panic("Failed to initalize platform devices\n");
+ panic("Failed to initialize platform devices\n");
return 0;
}
for (i = 0; i < ARRAY_SIZE(jz4740_gpio_chips); ++i)
jz4740_gpio_chip_init(&jz4740_gpio_chips[i], i);
- printk(KERN_INFO "JZ4740 GPIO initalized\n");
+ printk(KERN_INFO "JZ4740 GPIO initialized\n");
return 0;
}
static char *mtypes[3] = {
"Dont use memory",
"YAMON PROM memory",
- "Free memmory",
+ "Free memory",
};
#endif
mem_access_subid.s.ror = 0;
/* Disable Relaxed Ordering for Writes. */
mem_access_subid.s.row = 0;
- /* PCIe Adddress Bits <63:34>. */
+ /* PCIe Address Bits <63:34>. */
mem_access_subid.s.ba = 0;
/*
unsigned long ptv_memsize;
/*
- * struct low_mem_reserved - Items in low memmory that are reserved
+ * struct low_mem_reserved - Items in low memory that are reserved
* @start: Physical address of item
* @size: Size, in bytes, of this item
* @is_aliased: True if this is RAM aliased from another location. If false,
/*
* allocate pci_controller and resources.
- * mem_base, io_base: physical addresss. 0 for auto assignment.
+ * mem_base, io_base: physical address. 0 for auto assignment.
* mem_size and io_size means max size on auto assignment.
* pcic must be &txx9_primary_pcic or NULL.
*/
} memctl8xx_t;
/*-----------------------------------------------------------------------
- * BR - Memory Controler: Base Register 16-9
+ * BR - Memory Controller: Base Register 16-9
*/
#define BR_BA_MSK 0xffff8000 /* Base Address Mask */
#define BR_AT_MSK 0x00007000 /* Address Type Mask */
#define BR_V 0x00000001 /* Bank Valid */
/*-----------------------------------------------------------------------
- * OR - Memory Controler: Option Register 16-11
+ * OR - Memory Controller: Option Register 16-11
*/
#define OR_AM_MSK 0xffff8000 /* Address Mask Mask */
#define OR_ATM_MSK 0x00007000 /* Address Type Mask Mask */
* The pm_interval register is setup to write the SPU PC value into the
* trace buffer at the maximum rate possible. The trace buffer is configured
* to store the PCs, wrapping when it is full. The performance counter is
- * intialized to the max hardware count minus the number of events, N, between
+ * initialized to the max hardware count minus the number of events, N, between
 * samples. Once the N events have occurred, a HW counter overflow occurs
* causing the generation of a HW counter interrupt which also stops the
* writing of the SPU PC values to the trace buffer. Hence the last PC
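In other words, the counter is armed so that exactly N further events make it wrap; as a hedged sketch (max_hw_count and N are the quantities named above, not actual variable names from the code):

	/* Arm the counter so the next N events cause a HW counter overflow. */
	u32 start_value = max_hw_count - N;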
ori r4, r4, 0x002a
mtspr SPRN_DBAT0L, r4
lis r8, TMP_VIRT_IMMR@h
- ori r4, r8, 0x001e /* 1 MByte accessable from Kernel Space only */
+ ori r4, r8, 0x001e /* 1 MByte accessible from Kernel Space only */
mtspr SPRN_DBAT0U, r4
isync
ori r4, r4, 0x002a
mtspr SPRN_DBAT1L, r4
lis r9, (TMP_VIRT_IMMR + 0x01000000)@h
- ori r4, r9, 0x001e /* 1 MByte accessable from Kernel Space only */
+ ori r4, r9, 0x001e /* 1 MByte accessible from Kernel Space only */
mtspr SPRN_DBAT1U, r4
isync
li r4, 0x0002
mtspr SPRN_DBAT2L, r4
lis r4, KERNELBASE@h
- ori r4, r4, 0x001e /* 1 MByte accessable from Kernel Space only */
+ ori r4, r4, 0x001e /* 1 MByte accessible from Kernel Space only */
mtspr SPRN_DBAT2U, r4
isync
case PS3_DEV_TYPE_STOR_DISK:
result = ps3_setup_storage_dev(repo, PS3_MATCH_ID_STOR_DISK);
- /* Some devices are not accessable from the Other OS lpar. */
+ /* Some devices are not accessible from the Other OS lpar. */
if (result == -ENODEV) {
result = 0;
- pr_debug("%s:%u: not accessable\n", __func__,
+ pr_debug("%s:%u: not accessible\n", __func__,
__LINE__);
}
* @lock:
* @ipi_debug_brk_mask:
*
- * The HV mantains per SMT thread mappings of HV outlet to HV plug on
+ * The HV maintains per SMT thread mappings of HV outlet to HV plug on
* behalf of the guest. These mappings are implemented as 256 bit guest
* supplied bitmaps indexed by plug number. The addresses of the bitmaps
* are registered with the HV through lv1_configure_irq_state_bitmap().
}
/*
- * Flush the range [start,end] of kernel virtual adddress space from
+ * Flush the range [start,end] of kernel virtual address space from
* the I-cache. The corresponding range must be purged from the
* D-cache also because the SH-5 doesn't have cache snooping between
* the caches. The addresses will be visible through the superpage
static const char CHAFSR_IERR_msg[] =
"Internal processor error";
static const char CHAFSR_ISAP_msg[] =
- "System request parity error on incoming addresss";
+ "System request parity error on incoming address";
static const char CHAFSR_UCU_msg[] =
"Uncorrectable E-cache ECC error for ifetch/data";
static const char CHAFSR_UCC_msg[] =
extern void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd);
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
- unsigned long adddress)
+ unsigned long address)
{
___pmd_free_tlb(tlb, pmd);
}
/*
* The below -8 is to reserve 8 bytes on top of the ring0 stack.
* This is necessary to guarantee that the entire "struct pt_regs"
- * is accessable even if the CPU haven't stored the SS/ESP registers
+ * is accessible even if the CPU hasn't stored the SS/ESP registers
* on the stack (interrupt gate does not save these registers
* when switching to the same priv ring).
* Therefore beware: accessing the ss/esp fields of the
dma_dom->aperture_size += APERTURE_RANGE_SIZE;
- /* Intialize the exclusion range if necessary */
+ /* Initialize the exclusion range if necessary */
for_each_iommu(iommu) {
if (iommu->exclusion_start &&
iommu->exclusion_start >= dma_dom->aperture[index]->offset
/*
* Allocates a new protection domain usable for the dma_ops functions.
- * It also intializes the page table and the address allocator data
+ * It also initializes the page table and the address allocator data
* structures required for the dma_ops interface
*/
static struct dma_ops_domain *dma_ops_domain_alloc(void)
static unsigned long mrst_spi_paddr = MRST_REGBASE_SPI0;
static u32 *pclk_spi0;
-/* Always contains an accessable address, start with 0 */
+/* Always contains an accessible address, start with 0 */
static struct dw_spi_reg *pspi;
static struct kmsg_dumper dw_dumper;
movsl
movl pa(boot_params) + NEW_CL_POINTER,%esi
andl %esi,%esi
- jz 1f # No comand line
+ jz 1f # No command line
movl $pa(boot_command_line),%edi
movl $(COMMAND_LINE_SIZE/4),%ecx
rep
/*
* Add group onto cgroup list. It might happen that bdi->dev is
- * not initiliazed yet. Initialize this new group without major
+ * not initialized yet. Initialize this new group without major
* and minor info and this info will be filled in once a new thread
* comes for IO. See code above.
*/
#define AOPOBJ_AML_CONSTANT 0x01 /* Integer is an AML constant */
#define AOPOBJ_STATIC_POINTER 0x02 /* Data is part of an ACPI table, don't delete */
-#define AOPOBJ_DATA_VALID 0x04 /* Object is intialized and data is valid */
+#define AOPOBJ_DATA_VALID 0x04 /* Object is initialized and data is valid */
#define AOPOBJ_OBJECT_INITIALIZED 0x08 /* Region is initialized, _REG was run */
#define AOPOBJ_SETUP_COMPLETE 0x10 /* Region setup is complete */
#define AOPOBJ_INVALID 0x20 /* Host OS won't allow a Region address */
if (id[ATA_ID_CFA_KEY_MGMT] & 1)
ata_dev_printk(dev, KERN_WARNING,
"supports DRM functions and may "
- "not be fully accessable.\n");
+ "not be fully accessible.\n");
snprintf(revbuf, 7, "CFA");
} else {
snprintf(revbuf, 7, "ATA-%d", ata_id_major_version(id));
if (ata_id_has_tpm(id))
ata_dev_printk(dev, KERN_WARNING,
"supports DRM functions and may "
- "not be fully accessable.\n");
+ "not be fully accessible.\n");
}
dev->n_sectors = ata_id_n_sectors(id);
if (pci_resource_len(pdev, 0) == 0)
return -ENODEV;
- /* map IO regions and intialize host accordingly */
+ /* map IO regions and initialize host accordingly */
rc = pcim_iomap_regions(pdev, 1 << VSC_MMIO_BAR, DRV_NAME);
if (rc == -EBUSY)
pcim_pin_device(pdev);
#define SAR_STAT_TSQF 0x00001000 /* Transmit Status Queue full */
#define SAR_STAT_TMROF 0x00000800 /* Timer overflow */
#define SAR_STAT_PHYI 0x00000400 /* PHY device Interrupt flag */
-#define SAR_STAT_CMDBZ 0x00000200 /* ABR SAR Comand Busy Flag */
+#define SAR_STAT_CMDBZ 0x00000200 /* ABR SAR Command Busy Flag */
#define SAR_STAT_FBQ3A 0x00000100 /* Free Buffer Queue 3 Attention */
#define SAR_STAT_FBQ2A 0x00000080 /* Free Buffer Queue 2 Attention */
#define SAR_STAT_RSQF 0x00000040 /* Receive Status Queue full */
- UBR Table size is 4K
- UBR wait queue is 4K
since the table and wait queues are contiguous, all the bytes
- can be initialized by one memeset.
+ can be initialized by one memset.
*/
vcsize_sel = 0;
- ABR Table size is 2K
- ABR wait queue is 2K
since the table and wait queues are contiguous, all the bytes
- can be intialized by one memeset.
+ can be initialized by one memset.
*/
i = ABR_SCHED_TABLE * iadev->memSize;
writew((i >> 11) & 0xffff, iadev->seg_reg+ABR_SBPTR_BASE);
*
*
* The driver model core calls device_pm_add() when a device is registered.
- * This will intialize the embedded device_pm_info object in the device
+ * This will initialize the embedded device_pm_info object in the device
* and add it to the list of power-controlled devices. sysfs entries for
* controlling device power management will also be added.
*
* mid_setup_dma - Setup the DMA controller
* @pdev: Controller PCI device structure
*
- * Initilize the DMA controller, channels, registers with DMA engine,
- * ISR. Initilize DMA controller channels.
+ * Initialize the DMA controller, channels, registers with DMA engine,
+ * ISR. Initialize DMA controller channels.
*/
static int mid_setup_dma(struct pci_dev *pdev)
{
* @pdev: Controller PCI device structure
* @id: pci device id structure
*
- * Initilize the PCI device, map BARs, query driver data.
+ * Initialize the PCI device, map BARs, query driver data.
 * Call setup_dma to complete controller and channel initialization
*/
static int __devinit intel_mid_dma_probe(struct pci_dev *pdev,
/*
* AMD8131 chipset has two pairs of PCIX Bridge and related IOAPIC
- * Controler, and ATCA-6101 has two AMD8131 chipsets, so there are
+ * Controller, and ATCA-6101 has two AMD8131 chipsets, so there are
* four PCIX Bridges on ATCA-6101 altogether.
*
* These PCIX Bridges share the same PCI Device ID and are all of
offset = address & ~PAGE_MASK;
syndrome = (ar & 0x000000001fe00000ul) >> 21;
- /* TODO: Decoding of the error addresss */
+ /* TODO: Decoding of the error address */
edac_mc_handle_ce(mci, csrow->first_page + pfn, offset,
syndrome, 0, chan, "");
}
pfn = address >> PAGE_SHIFT;
offset = address & ~PAGE_MASK;
- /* TODO: Decoding of the error addresss */
+ /* TODO: Decoding of the error address */
edac_mc_handle_ue(mci, csrow->first_page + pfn, offset, 0, "");
}
* for single channel are 64 bits, for dual channel 128
* bits.
*
- * Single-Ranked stick: A Single-ranked stick has 1 chip-select row of memmory.
+ * Single-Ranked stick: A Single-ranked stick has 1 chip-select row of memory.
* Motherboards commonly drive two chip-select pins to
* a memory stick. A single-ranked stick, will occupy
* only one of those rows. The other will be unused.
}
/**
- * ppc4xx_edac_init_csrows - intialize driver instance rows
+ * ppc4xx_edac_init_csrows - initialize driver instance rows
* @mci: A pointer to the EDAC memory controller instance
* associated with the ibm,sdram-4xx-ddr2 controller for which
* the csrows (i.e. banks/ranks) are being initialized.
* currently set for the controller, from which bank width
 * and memory type information is derived.
*
- * This routine intializes the virtual "chip select rows" associated
+ * This routine initializes the virtual "chip select rows" associated
* with the EDAC memory controller instance. An ibm,sdram-4xx-ddr2
* controller bank/rank is mapped to a row.
*
}
/**
- * ppc4xx_edac_mc_init - intialize driver instance
+ * ppc4xx_edac_mc_init - initialize driver instance
* @mci: A pointer to the EDAC memory controller instance being
* initialized.
* @op: A pointer to the OpenFirmware device tree node associated
typedef struct _READ_EDID_FROM_HW_I2C_DATA_PARAMETERS
{
USHORT usPrescale; //Ratio between Engine clock and I2C clock
- USHORT usVRAMAddress; //Adress in Frame Buffer where to pace raw EDID
+ USHORT usVRAMAddress; //Address in Frame Buffer where to place raw EDID
USHORT usStatus; //When use output: lower byte EDID checksum, high byte hardware status
 //When use input: lower byte as 'byte to read': currently limited to 128byte or 1byte
UCHAR ucSlaveAddr; //Read from which slave
}
if (timeout == 0) {
- /* controler has timedout, re-init the h/w */
+ /* controller has timed out, re-init the h/w */
dev_err(&dev->pdev->dev, "controller timed out, re-init h/w\n");
(void) init_hw(dev);
status = -ETIMEDOUT;
}
if (timeout == 0) {
- /* controler has timedout, re-init the h/w */
+ /* controller has timed out, re-init the h/w */
dev_err(&dev->pdev->dev, "controller timed out, re-init h/w\n");
(void) init_hw(dev);
status = -ETIMEDOUT;
* A T3 WQ implements both the SQ and RQ.
*/
struct t3_wq {
- union t3_wr *queue; /* DMA accessable memory */
+ union t3_wr *queue; /* DMA accessible memory */
dma_addr_t dma_addr; /* DMA address for HW */
DEFINE_DMA_UNMAP_ADDR(mapping); /* unmap kruft */
u32 error; /* 1 once we go to ERROR */
#define krp_serdesctrl KREG_IBPORT_IDX(IBSerdesCtrl)
/*
- * Per-context kernel registers. Acess only with qib_read_kreg_ctxt()
+ * Per-context kernel registers. Access only with qib_read_kreg_ctxt()
* or qib_write_kreg_ctxt()
*/
#define krc_rcvhdraddr KREG_IDX(RcvHdrAddr0)
config TOUCHSCREEN_USB_ETT_TC45USB
default y
- bool "ET&T USB series TC4UM/TC5UH touchscreen controler support" if EMBEDDED
+ bool "ET&T USB series TC4UM/TC5UH touchscreen controller support" if EMBEDDED
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_NEXIO
__func__, le16_to_cpu(udev->descriptor.idVendor),
le16_to_cpu(udev->descriptor.idProduct));
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
cs = gigaset_initcs(driver, BAS_CHANNELS, 0, 0, cidmode,
GIGASET_MODULENAME);
if (!cs)
{
int result;
- /* allocate memory for our driver state and intialize it */
+ /* allocate memory for our driver state and initialize it */
driver = gigaset_initdriver(GIGASET_MINOR, GIGASET_MINORS,
GIGASET_MODULENAME, GIGASET_DEVNAME,
&gigops, THIS_MODULE);
return -ENODEV;
}
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
cs = gigaset_initcs(driver, 1, 1, 0, cidmode, GIGASET_MODULENAME);
if (!cs)
goto error;
return rc;
}
- /* allocate memory for our driver state and intialize it */
+ /* allocate memory for our driver state and initialize it */
driver = gigaset_initdriver(GIGASET_MINOR, GIGASET_MINORS,
GIGASET_MODULENAME, GIGASET_DEVNAME,
&ops, THIS_MODULE);
dev_info(&udev->dev, "%s: Device matched ... !\n", __func__);
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
cs = gigaset_initcs(driver, 1, 1, 0, cidmode, GIGASET_MODULENAME);
if (!cs)
return -ENODEV;
{
int result;
- /* allocate memory for our driver state and intialize it */
+ /* allocate memory for our driver state and initialize it */
driver = gigaset_initdriver(GIGASET_MINOR, GIGASET_MINORS,
GIGASET_MODULENAME, GIGASET_DEVNAME,
&ops, THIS_MODULE);
u32 type;
u32 off; /* offset to isac regs */
char *name;
- spinlock_t *hwlock; /* lock HW acccess */
+ spinlock_t *hwlock; /* lock HW access */
read_reg_func *read_reg;
write_reg_func *write_reg;
fifo_func *read_fifo;
struct hscx_hw hscx[2];
char *name;
void *hw;
- spinlock_t *hwlock; /* lock HW acccess */
+ spinlock_t *hwlock; /* lock HW access */
struct module *owner;
u32 type;
read_reg_func *read_reg;
struct isar_hw {
struct isar_ch ch[2];
void *hw;
- spinlock_t *hwlock; /* lock HW acccess */
+ spinlock_t *hwlock; /* lock HW access */
char *name;
struct module *owner;
read_reg_func *read_reg;
&bcs->hw.isar.reg->Flags))
bcs->hw.isar.dpath = 1;
else {
- printk(KERN_WARNING"isar modeisar analog funktions only with DP1\n");
- debugl1(cs, "isar modeisar analog funktions only with DP1");
+ printk(KERN_WARNING"isar modeisar analog functions only with DP1\n");
+ debugl1(cs, "isar modeisar analog functions only with DP1");
return(1);
}
break;
u32 rem;
/*
- * The 2 lsb's of the pulse width timer count are not accessable, hence
+ * The 2 lsb's of the pulse width timer count are not accessible, hence
* the (1 << 2)
*/
n = ((u64) ns) * CX25840_IR_REFCLK_FREQ / 1000000; /* millicycles */
#define regr(reg) readl((reg) + vpif_base)
#define regw(value, reg) writel(value, (reg + vpif_base))
-/* Register Addresss Offsets */
+/* Register Address Offsets */
#define VPIF_PID (0x0000)
#define VPIF_CH0_CTRL (0x0004)
#define VPIF_CH1_CTRL (0x0008)
/*
* vpss operations. Depends on platform. Not all functions are available
 * on all platforms. With this API, first check if a function is available before
- * invoking it. In the probe, the function ptrs are intialized based on
+ * invoking it. In the probe, the function ptrs are initialized based on
* vpss name. vpss name can be "dm355_vpss", "dm644x_vpss" etc.
*/
struct vpss_hw_ops {
videobuf_mmap_free(q);
/* Even if apply changes fails we should continue
- freeing allocated memeory */
+ freeing allocated memory */
if (vout->streaming) {
u32 mask = 0;
goto out;
}
- /* Check that the hardware is accessable. If the status bytes are
- * 0xFF then the device is not accessable, the the IRQ belongs
+ /* Check that the hardware is accessible. If the status bytes are
+ * 0xFF then the device is not accessible and the IRQ belongs
* to another driver.
* 4 x u32 interrupt registers.
*/
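A sketch of that guard as it would appear in a shared interrupt handler; iobase, status and the IRQ_STATUS() register offsets are placeholders:

	/* If all four status words read back as ones the hardware is not
	 * responding, so the interrupt belongs to whoever shares the line. */
	for (i = 0; i < 4; i++)
		status[i] = readl(iobase + IRQ_STATUS(i));	/* assumed offsets */

	if (status[0] == 0xFFFFFFFF && status[1] == 0xFFFFFFFF &&
	    status[2] == 0xFFFFFFFF && status[3] == 0xFFFFFFFF)
		return IRQ_NONE;	/* not our device */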
struct sn9c102_sensor {
char name[32], /* sensor name */
- maintainer[64]; /* name of the mantainer <email> */
+ maintainer[64]; /* name of the maintainer <email> */
enum sn9c102_bridge supported_bridge; /* supported SN9C1xx bridges */
int quality; /* Measure for quality of compressed images.
* Scales linearly with the size of the compressed images.
- * Must be beetween 0 and 100, 100 is a compression
+ * Must be between 0 and 100, 100 is a compression
* ratio of 1:4 */
int odd_even; /* Which field should come first ??? */
/* Compatibility Error : IR Disabled */
#define IR_LOGINFO_COMPAT_ERROR_RAID_DISABLED (0x00010030)
-/* Compatibility Error : Inquiry Comand failed */
+/* Compatibility Error : Inquiry Command failed */
#define IR_LOGINFO_COMPAT_ERROR_INQUIRY_FAILED (0x00010031)
/* Compatibility Error : Device not direct access device */
#define IR_LOGINFO_COMPAT_ERROR_NOT_DIRECT_ACCESS (0x00010032)
NULL, /* 2Eh */
NULL, /* 2Fh */
"Compatibility Error: IR Disabled", /* 30h */
- "Compatibility Error: Inquiry Comand Failed", /* 31h */
+ "Compatibility Error: Inquiry Command Failed", /* 31h */
"Compatibility Error: Device not Direct Access "
"Device ", /* 32h */
"Compatibility Error: Removable Device Found", /* 33h */
*
* This function will delete scheduled target reset from the list and
* try to send next target reset. This will be called from completion
- * context of any Task managment command.
+ * context of any Task management command.
*/
void
* @ireq: I2O block request
* @mptr: message body pointer
*
- * Builds the SG list and map it to be accessable by the controller.
+ * Builds the SG list and map it to be accessible by the controller.
*
* Returns 0 on failure or 1 on success.
*/
INIT_DELAYED_WORK(&lcd->init_work, charlcd_init_work);
schedule_delayed_work(&lcd->init_work, 0);
- dev_info(&pdev->dev, "initalized ARM character LCD at %08x\n",
+ dev_info(&pdev->dev, "initialized ARM character LCD at %08x\n",
lcd->phybase);
return 0;
cmd.flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;
err = mmc_wait_for_cmd(card->host, &cmd, 0);
if (err)
- printk(KERN_ERR "%s: error %d sending status comand",
+ printk(KERN_ERR "%s: error %d sending status command",
req->rq_disk->disk_name, err);
return cmd.resp[0];
}
tristate "SuperH Internal MMCIF support"
depends on MMC_BLOCK && (SUPERH || ARCH_SHMOBILE)
help
- This selects the MMC Host Interface controler (MMCIF).
+ This selects the MMC Host Interface controller (MMCIF).
This driver supports MMCIF in sh7724/sh7757/sh7372.
au_writel(config2 | SD_CONFIG2_DF, HOST_CONFIG2(host));
au_sync();
- /* Send the stop commmand */
+ /* Send the stop command */
au_writel(STOP_CMD, HOST_CMD(host));
}
mmc->max_seg_size = 1024 * 512;
mmc->max_blk_size = 512;
- /* reset the controler */
+ /* reset the controller */
if (sdricoh_reset(host)) {
dev_dbg(dev, "could not reset\n");
result = -EIO;
dev_info(&pcmcia_dev->dev, "Searching MMC controller for pcmcia device"
" %s %s ...\n", pcmcia_dev->prod_id[0], pcmcia_dev->prod_id[1]);
- /* search pci cardbus bridge that contains the mmc controler */
+ /* search pci cardbus bridge that contains the mmc controller */
/* the io region is already claimed by yenta_socket... */
while ((pci_dev =
pci_get_device(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_RL5C476,
*
* Wait for command done. This is a helper function for nand_wait used when
* we are in interrupt context. May happen when in panic and trying to write
- * an oops trough mtdoops.
+ * an oops through mtdoops.
*/
static void panic_nand_wait(struct mtd_info *mtd, struct nand_chip *chip,
unsigned long timeo)
memset(&ilt_cli, 0, sizeof(struct ilt_client_info));
memset(&ilt, 0, sizeof(struct bnx2x_ilt));
- /* initalize dummy TM client */
+ /* initialize dummy TM client */
ilt_cli.start = 0;
ilt_cli.end = ILT_NUM_PAGE_ENTRIES - 1;
ilt_cli.client_num = ILT_CLIENT_TM;
(~misc_registers_sw_timer_cfg_4.sw_timer_cfg_4[1] ) is set */
#define MISC_REG_SW_TIMER_RELOAD_VAL_4 0xa2fc
/* [RW 32] the value of the counter for sw timers1-8. there are 8 addresses
- in this register. addres 0 - timer 1; address 1 - timer 2, ... address 7 -
+ in this register. address 0 - timer 1; address 1 - timer 2, ... address 7 -
timer 8 */
#define MISC_REG_SW_TIMER_VAL 0xa5c0
/* [RW 1] Set by the MCP to remember if one or more of the drivers is/are
lacpdu_header = (struct lacpdu_header *)skb_put(skb, length);
memcpy(lacpdu_header->hdr.h_dest, lacpdu_mcast_addr, ETH_ALEN);
- /* Note: source addres is set to be the member's PERMANENT address,
+ /* Note: source address is set to be the member's PERMANENT address,
because we use it to identify loopback lacpdus in receive. */
memcpy(lacpdu_header->hdr.h_source, slave->perm_hwaddr, ETH_ALEN);
lacpdu_header->hdr.h_proto = PKT_TYPE_LACPDU;
marker_header = (struct bond_marker_header *)skb_put(skb, length);
memcpy(marker_header->hdr.h_dest, lacpdu_mcast_addr, ETH_ALEN);
- /* Note: source addres is set to be the member's PERMANENT address,
+ /* Note: source address is set to be the member's PERMANENT address,
because we use it to identify loopback MARKERs in receive. */
memcpy(marker_header->hdr.h_source, slave->perm_hwaddr, ETH_ALEN);
marker_header->hdr.h_proto = PKT_TYPE_LACPDU;
return -1;
}
- //check that the slave has not been intialized yet.
+ //check that the slave has not been initialized yet.
if (SLAVE_AD_INFO(slave).port.slave != slave) {
// port initialization
#define EEPROM_MAX_POLL 4
/*
- * Read SEEPROM. A zero is written to the flag register when the addres is
+ * Read SEEPROM. A zero is written to the flag register when the address is
* written to the Control register. The hardware device will set the flag to a
* one when 4B have been transferred to the Data register.
*/
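The handshake described above boils down to a short poll loop; a sketch only, with hypothetical register helpers standing in for the real accessors:

	/* Writing the address clears the flag; the hardware raises it again
	 * once four bytes have landed in the Data register. */
	seeprom_write_control(adapter, addr);		/* assumed helper */
	for (i = 0; i < EEPROM_MAX_POLL; i++) {
		udelay(50);
		if (seeprom_read_flag(adapter))		/* assumed helper */
			break;
	}
	if (i == EEPROM_MAX_POLL)
		return -ETIMEDOUT;
	*data = seeprom_read_data(adapter);		/* assumed helper */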
/*
* Initialization that requires the OS and protocol layers to already
- * be intialized goes here.
+ * be initialized goes here.
*/
int t3_mc5_init(struct mc5 *mc5, unsigned int nservers, unsigned int nfilters,
unsigned int nroutes)
*
* Read a 32-bit word from a location in VPD EEPROM using the card's PCI
* VPD ROM capability. A zero is written to the flag bit when the
- * addres is written to the control register. The hardware device will
+ * address is written to the control register. The hardware device will
* set the flag to 1 when 4 bytes have been read into the data register.
*/
int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data)
struct e1000_hw_stats;
/* Enumerated types specific to the e1000 hardware */
-/* Media Access Controlers */
+/* Media Access Controllers */
typedef enum {
e1000_undefined = 0,
e1000_82542_rev2_0,
* addresses take precedence to avoid disabling unicast filtering
* when possible.
*
- * RAR 0 is used for the station MAC adddress
+ * RAR 0 is used for the station MAC address
* if there are not 14 addresses, go ahead and clear the filters
*/
i = 1;
/*
* Ensure that the inter-port SWSM.SMBI lock bit is clear before
- * first NVM or PHY acess. This should be done for single-port
+ * first NVM or PHY access. This should be done for single-port
* devices, and for one port only on dual-port devices so that
* for those devices we can still use the SMBI lock to synchronize
* inter-port accesses to the PHY & NVM.
}
/*
- * Reset the PHY before any acccess to it. Doing so, ensures that
+ * Reset the PHY before any access to it. Doing so, ensures that
* the PHY is in a known good state before we read/write PHY registers.
* The generic reset is sufficient here, because we haven't determined
* the PHY type yet.
}
/**
- * e1000_get_phy_addr_for_hv_page - Get PHY adrress based on page
+ * e1000_get_phy_addr_for_hv_page - Get PHY address based on page
* @page: page to be accessed
**/
static u32 e1000_get_phy_addr_for_hv_page(u32 page)
module_param_array(irq, int, NULL, 0);
module_param_array(mem, int, NULL, 0);
module_param(autodetect, int, 0);
-MODULE_PARM_DESC(io, "EtherExpress Pro/10 I/O base addres(es)");
+MODULE_PARM_DESC(io, "EtherExpress Pro/10 I/O base address(es)");
MODULE_PARM_DESC(irq, "EtherExpress Pro/10 IRQ number(s)");
MODULE_PARM_DESC(mem, "EtherExpress Pro/10 Rx buffer size(es) in kB (3-29)");
MODULE_PARM_DESC(autodetect, "EtherExpress Pro/10 force board(s) detection (0-1)");
* or the type-DO IR port.
*
* IrDA chip set list from Toshiba Computer Engineering Corp.
- * model method maker controler Version
+ * model method maker controller Version
* Portege 320CT FIR,SIR Toshiba Oboe(Triangle)
* Portege 3010CT FIR,SIR Toshiba Oboe(Sydney)
* Portege 3015CT FIR,SIR Toshiba Oboe(Sydney)
/*
* The defaults in the HW for RX PB 1-7 are not zero and so should be
- * intialized to zero for non DCB mode otherwise actual total RX PB
+ * initialized to zero for non DCB mode otherwise actual total RX PB
* would be bigger than programmed and filter space would run into
* the PB 0 region.
*/
/*
* The defaults in the HW for RX PB 1-7 are not zero and so should be
- * intialized to zero for non DCB mode otherwise actual total RX PB
+ * initialized to zero for non DCB mode otherwise actual total RX PB
* would be bigger than programmed and filter space would run into
* the PB 0 region.
*/
goto out;
}
/* allocate the tx and rx ring buffer descriptors. */
- /* returns a virtual addres and a physical address. */
+ /* returns a virtual address and a physical address. */
lp->tx_bd_v = dma_alloc_coherent(ndev->dev.parent,
sizeof(*lp->tx_bd_v) * TX_BD_NUM,
&lp->tx_bd_p, GFP_KERNEL);
Rev 1.07.06 Nov. 7 2000 Jeff Garzik <jgarzik@pobox.com> some bug fix and cleaning
Rev 1.07.05 Nov. 6 2000 metapirat<metapirat@gmx.de> contribute media type select by ifconfig
Rev 1.07.04 Sep. 6 2000 Lei-Chun Chang added ICS1893 PHY support
- Rev 1.07.03 Aug. 24 2000 Lei-Chun Chang (lcchang@sis.com.tw) modified 630E eqaulizer workaround rule
+ Rev 1.07.03 Aug. 24 2000 Lei-Chun Chang (lcchang@sis.com.tw) modified 630E equalizer workaround rule
Rev 1.07.01 Aug. 08 2000 Ollie Lho minor update for SiS 630E and SiS 630E A1
Rev 1.07 Mar. 07 2000 Ollie Lho bug fix in Rx buffer ring
Rev 1.06.04 Feb. 11 2000 Jeff Garzik <jgarzik@pobox.com> softnet and init for kernel 2.4
/*
* RX HW/SW interaction overview
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- * There are 2 types of RX communication channels betwean driver and NIC.
+ * There are 2 types of RX communication channels between driver and NIC.
* 1) RX Free Fifo - RXF - holds descriptors of empty buffers to accept incoming
 * traffic. This Fifo is filled by SW and is read by HW. Each descriptor holds
* info about buffer's location, size and ID. An ID field is used to identify a
}
/* use PMF to accept first MAC_MCST_NUM (15) addresses */
- /* TBD: sort addreses and write them in ascending order
+ /* TBD: sort addresses and write them in ascending order
* into RX_MAC_MCST regs. we skip this phase now and accept ALL
 * multicast frames through IMF */
 /* accept the rest of addresses through IMF */
/*
* TX HW/SW interaction overview
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- * There are 2 types of TX communication channels betwean driver and NIC.
+ * There are 2 types of TX communication channels between driver and NIC.
* 1) TX Free Fifo - TXF - holds ack descriptors for sent packets
* 2) TX Data Fifo - TXD - holds descriptors of full buffers.
*
break;
case SIOCGIFHWADDR:
- /* Get hw addres */
+ /* Get hw address */
memcpy(ifr.ifr_hwaddr.sa_data, tun->dev->dev_addr, ETH_ALEN);
ifr.ifr_hwaddr.sa_family = tun->dev->type;
if (copy_to_user(argp, &ifr, ifreq_len))
* struct vxge_hw_device_stats - Contains HW per-device statistics,
* including hw.
* @devh: HW device handle.
- * @dma_addr: DMA addres of the %hw_info. Given to device to fill-in the stats.
+ * @dma_addr: DMA address of the %hw_info. Given to device to fill-in the stats.
* @hw_info_dmah: DMA handle used to map hw statistics onto the device memory
* space.
* @hw_info_dma_acch: One more DMA handle used subsequently to free the
/* Module parameters */
MODULE_AUTHOR("Maintainer: Francois Romieu <romieu@cogenit.fr>");
-MODULE_DESCRIPTION("Siemens PEB20534 PCI Controler");
+MODULE_DESCRIPTION("Siemens PEB20534 PCI Controller");
MODULE_LICENSE("GPL");
module_param(debug, int, 0);
MODULE_PARM_DESC(debug,"Enable/disable extra messages");
result);
goto error;
}
- /* Extract MAC addresss */
+ /* Extract MAC address */
ddi = (void *) skb->data;
BUILD_BUG_ON(ETH_ALEN != sizeof(ddi->mac_address));
d_printf(2, dev, "GET DEVICE INFO: mac addr %pM\n",
* @I2400M_BRI_MAC_REINIT: We need to reinitialize the boot
* rom after reading the MAC address. This is quite a dirty hack,
* if you ask me -- the device requires the bootrom to be
- * intialized after reading the MAC address.
+ * initialized after reading the MAC address.
*/
enum i2400m_bri {
I2400M_BRI_SOFT = 1 << 1,
/*
* EEPROM command register
*/
-#define AR5K_EEPROM_CMD 0x6008 /* Register Addres */
+#define AR5K_EEPROM_CMD 0x6008 /* Register Address */
#define AR5K_EEPROM_CMD_READ 0x00000001 /* EEPROM read */
#define AR5K_EEPROM_CMD_WRITE 0x00000002 /* EEPROM write */
#define AR5K_EEPROM_CMD_RESET 0x00000004 /* EEPROM reset */
/*
* EEPROM config register
*/
-#define AR5K_EEPROM_CFG 0x6010 /* Register Addres */
+#define AR5K_EEPROM_CFG 0x6010 /* Register Address */
#define AR5K_EEPROM_CFG_SIZE 0x00000003 /* Size determination override */
#define AR5K_EEPROM_CFG_SIZE_AUTO 0
#define AR5K_EEPROM_CFG_SIZE_4KBIT 1
* Second station id register (Upper 16 bits of MAC address + PCU settings)
*/
#define AR5K_STA_ID1 0x8004 /* Register Address */
-#define AR5K_STA_ID1_ADDR_U16 0x0000ffff /* Upper 16 bits of MAC addres */
+#define AR5K_STA_ID1_ADDR_U16 0x0000ffff /* Upper 16 bits of MAC address */
#define AR5K_STA_ID1_AP 0x00010000 /* Set AP mode */
#define AR5K_STA_ID1_ADHOC 0x00020000 /* Set Ad-Hoc mode */
#define AR5K_STA_ID1_PWR_SV 0x00040000 /* Power save reporting */
b43_hf_write(dev, b43_hf_read(dev) | B43_HF_HWPCTL);
}
-/* Intialize B/G PHY power control */
+/* Initialize B/G PHY power control */
static void b43_phy_init_pctl(struct b43_wldev *dev)
{
struct ssb_bus *bus = dev->dev->bus;
phy->calibrated = 1;
}
-/* intialize B PHY power control
+/* initialize B PHY power control
* as described in http://bcm-specs.sipsolutions.net/InitPowerControl
*/
static void b43legacy_phy_init_pctl(struct b43legacy_wldev *dev)
/*
* XXX: The MAC address in the command buffer is often changed from
* the original sent to the device. That is, the MAC address
- * written to the command buffer often is not the same MAC adress
+ * written to the command buffer often is not the same MAC address
* read from the command buffer when the command returns. This
* issue has not yet been resolved and this debugging is left to
* observe the problem.
printk(KERN_DEBUG "islpci_alloc_memory\n");
#endif
- /* remap the PCI device base address to accessable */
+ /* remap the PCI device base address to accessible */
if (!(priv->device_base =
ioremap(pci_resource_start(priv->pdev, 0),
ISL38XX_PCI_MEM_SIZE))) {
PCI_DMA_FROMDEVICE);
if (!priv->pci_map_rx_address[counter]) {
/* error mapping the buffer to device
- accessable memory address */
+ accessible memory address */
printk(KERN_ERR "failed to map skb DMA'able\n");
goto out_free;
}
priv->data_low_rx[counter] = NULL;
}
- /* Free the acces control list and the WPA list */
+ /* Free the access control list and the WPA list */
prism54_acl_clean(&priv->acl);
prism54_wpa_bss_ie_clean(priv);
mgt_clean(priv);
MAX_FRAGMENT_SIZE_RX + 2,
PCI_DMA_FROMDEVICE);
if (unlikely(!priv->pci_map_rx_address[index])) {
- /* error mapping the buffer to device accessable memory address */
+ /* error mapping the buffer to device accessible memory address */
DEBUG(SHOW_ERROR_MESSAGES,
"Error mapping DMA address\n");
intf->beacon = entry;
/*
- * The MAC adddress must be configured after the device
+ * The MAC address must be configured after the device
* has been initialized. Otherwise the device can reset
* the MAC registers.
* The BSSID address must only be configured in AP mode,
}
/*
- * Get Ethernet MAC addresss.
+ * Get Ethernet MAC address.
*
 * WARNING: We switch to FPAGE0 and switch back again.
 * Making sure there is no other WL function being called by the ISR.
#endif
/*
- * M32R PC Card Controler
+ * M32R PC Card Controller
*/
#define M32R_PCC0_BASE 0x00ef7000
#define M32R_PCC1_BASE 0x00ef7020
#define M32R_MAX_PCC 2
/*
- * M32R PC Card Controler
+ * M32R PC Card Controller
*/
#define M32R_PCC0_BASE 0x00ef7000
#define M32R_PCC1_BASE 0x00ef7020
out_be32(M8XX_PGCRX(1),
M8XX_PGCRX_CXOE | (mk_int_int_mask(hwirq) << 16));
- /* intialize the fixed memory windows */
+ /* initialize the fixed memory windows */
for (i = 0; i < PCMCIA_SOCKETS_NO; i++) {
for (m = 0; m < PCMCIA_MEM_WIN_NO; m++) {
* TPACPI_FAN_WR_ACPI_FANS (X31/X40/X41)
*
* FIRMWARE BUG: on some models, EC 0x2f might not be initialized at
- * boot. Apparently the EC does not intialize it, so unless ACPI DSDT
+ * boot. Apparently the EC does not initialize it, so unless ACPI DSDT
* does so, its initial value is meaningless (0x07).
*
* For firmware bugs, refer to:
/*
- * iPAQ h1930/h1940/rx1950 battery controler driver
+ * iPAQ h1930/h1940/rx1950 battery controller driver
* Copyright (c) Vasily Khoruzhick
* Based on h1940_battery.c by Arnaud Patard
*
module_exit(s3c_adc_bat_exit);
MODULE_AUTHOR("Vasily Khoruzhick <anarsoul@gmail.com>");
-MODULE_DESCRIPTION("iPAQ H1930/H1940/RX1950 battery controler driver");
+MODULE_DESCRIPTION("iPAQ H1930/H1940/RX1950 battery controller driver");
MODULE_LICENSE("GPL");
}
/**
- * Emit buffer of a lan comand.
+ * Emit buffer of a lan command.
*/
static void
lcs_lancmd_timeout(unsigned long data)
/**
* zfcp_cfdc_port_denied - Process "access denied" for port
- * @port: The port where the acces has been denied
+ * @port: The port where the access has been denied
* @qual: The FSF status qualifier for the access denied FSF status
*/
void zfcp_cfdc_port_denied(struct zfcp_port *port,
/* Go back and check they match */
outb(PRGMRST | DOWNLOAD, host->base + ORC_RISCCTL); /* Reset program count 0 */
- bios_addr -= 0x1000; /* Reset the BIOS adddress */
+ bios_addr -= 0x1000; /* Reset the BIOS address */
for (i = 0, data32_ptr = (u8 *) & data32; /* Check the code */
i < 0x1000; /* Firmware code size = 4K */
i++, bios_addr++) {
* aac_fib_setup - setup the fibs
* @dev: Adapter to set up
*
- * Allocate the PCI space for the fibs, map it and then intialise the
+ * Allocate the PCI space for the fibs, map it and then initialise the
* fib area, the unmapped fib data and also the free list
*/
/*
* Retrieve an SCB by SCBID first searching the disconnected list falling
* back to DMA'ing the SCB down from the host. This routine assumes that
- * ARG_1 is the SCBID of interrest and that SINDEX is the position in the
+ * ARG_1 is the SCBID of interest and that SINDEX is the position in the
* disconnected list to start the search from. If SINDEX is SCB_LIST_NULL,
* we go directly to the host for the SCB.
*/
#define PHY_START_CAL 0x01
/*
- * HST_PCIX2 Registers, Addresss Range: (0x00-0xFC)
+ * HST_PCIX2 Registers, Address Range: (0x00-0xFC)
*/
#define PCIX_REG_BASE_ADR 0xB8040000
#define PCIC_TP_CTRL 0xFC
/*
- * EXSI Registers, Addresss Range: (0x00-0xFC)
+ * EXSI Registers, Address Range: (0x00-0xFC)
*/
#define EXSI_REG_BASE_ADR REG_BASE_ADDR_EXSI
int j;
/* Start from Page 1 of Mode 0 and 1. */
moffs = LSEQ_PAGE_SIZE + i*LSEQ_MODE_SCRATCH_SIZE;
- /* All the fields of page 1 can be intialized to 0. */
+ /* All the fields of page 1 can be initialized to 0. */
for (j = 0; j < LSEQ_PAGE_SIZE; j += 4)
asd_write_reg_dword(asd_ha, LmSCRATCH(lseq)+moffs+j,0);
}
asd_write_reg_dword(asd_ha, SCBPRO, 0);
asd_write_reg_dword(asd_ha, CSEQCON, 0);
- /* Intialize CSEQ Mode 11 Interrupt Vectors.
+ /* Initialize CSEQ Mode 11 Interrupt Vectors.
* The addresses are 16 bit wide and in dword units.
* The values of their macros are in byte units.
* Thus we have to divide by 4. */
asd_write_reg_word(asd_ha, CPRGMCNT, cseq_idle_loop);
for (i = 0; i < 8; i++) {
- /* Intialize Mode n Link m Interrupt Enable. */
+ /* Initialize Mode n Link m Interrupt Enable. */
asd_write_reg_dword(asd_ha, CMnINTEN(i), EN_CMnRSPMBXF);
/* Initialize Mode n Request Mailbox. */
asd_write_reg_dword(asd_ha, CMnREQMBX(i), 0);
case BFA_IOIM_SM_ABORT:
/**
- * IO is alraedy being cleaned up implicitly
+ * IO is already being cleaned up implicitly
*/
ioim->io_cbfn = __bfa_cb_ioim_abort;
break;
switch (status) {
case BFA_STATUS_OK:
/*
- * Initialiaze the V-Port fields
+ * Initialize the V-Port fields
*/
__vport_fcid(vport) = bfa_lps_get_pid(vport->lps);
vport->vport_stats.fdisc_accepts++;
* adapter_add_device - Adds the device instance to the adaptor instance.
*
* @acb: The adapter device to be updated
- * @dcb: A newly created and intialised device instance to add.
+ * @dcb: A newly created and initialised device instance to add.
**/
static void adapter_add_device(struct AdapterCtlBlk *acb,
struct DeviceCtlBlk *dcb)
* init_adapter - Grab the resource for the card, setup the adapter
* information, set the card into a known state, create the various
* tables etc etc. This basically gets all adapter information all up
- * to date, intialised and gets the chip in sync with it.
+ * to date, initialised and gets the chip in sync with it.
*
* @host: This hosts adapter structure
* @io_port: The base I/O port
 * that it finds in the system. The pci_dev structure indicates which
* instance we are being called from.
*
- * @dev: The PCI device to intialize.
+ * @dev: The PCI device to initialize.
* @id: Looks like a pointer to the entry in our pci device table
* that was actually matched by the PCI subsystem.
*
* dc395x_remove_one - Called to remove a single instance of the
* adapter.
*
- * @dev: The PCI device to intialize.
+ * @dev: The PCI device to initialize.
**/
static void __devexit dc395x_remove_one(struct pci_dev *dev)
{
/**
* fc_lun_reset() - Send a LUN RESET command to a device
* and wait for the reply
- * @lport: The local port to sent the comand on
+ * @lport: The local port to send the command on
* @fsp: The FCP packet that identifies the LUN to be reset
* @id: The SCSI command ID
* @lun: The LUN ID to be reset
}
/**
- * lpfc_param_init - Intializes a cfg attribute
+ * lpfc_param_init - Initializes a cfg attribute
*
* Description:
* Macro that given an attr e.g. hba_queue_depth expands
if (unlikely(!fcf_record)) {
lpfc_printf_log(phba, KERN_ERR,
LOG_MBOX | LOG_SLI,
- "2554 Could not allocate memmory for "
+ "2554 Could not allocate memory for "
"fcf record\n");
rc = -ENODEV;
goto out;
* lpfc_sli4_queue_free - free a queue structure and associated memory
* @queue: The queue structure to free.
*
- * This function frees a queue structure and the DMAable memeory used for
+ * This function frees a queue structure and the DMAable memory used for
* the host resident queue. This function must be called after destroying the
* queue on the HBA.
**/
*/
/*
- * Comand coalescing - This feature allows the driver to be able to combine
+ * Command coalescing - This feature allows the driver to be able to combine
* two or more commands and issue as one command in order to boost I/O
* performance. Useful if the nature of the I/O is sequential. It is not very
* useful for random natured I/Os.
#endif
intx:
- /* intialize the INT-X interrupt */
+ /* initialize the INT-X interrupt */
rc = request_irq(pdev->irq, irq_handler, IRQF_SHARED, DRV_NAME,
SHOST_TO_SAS_HA(pm8001_ha->shost));
return rc;
/**
- * scsi_netlink_init - Called by SCSI subsystem to intialize
+ * scsi_netlink_init - Called by SCSI subsystem to initialize
* the SCSI transport netlink interface
*
**/
*
* This routine is similar to sym_set_workarounds(), except
* that, at this point, we already know that the device was
- * successfully intialized at least once before, and so most
+ * successfully initialized at least once before, and so most
* of the steps taken there are un-needed here.
*/
static void sym2_reset_workarounds(struct pci_dev *pdev)
/*
* For DMA, tx_buf/tx_dma have the same relationship as rx_buf/rx_dma:
* - The buffer is either valid for CPU access, else NULL
- * - If the buffer is valid, so is its DMA addresss
+ * - If the buffer is valid, so is its DMA address
*
- * This driver manages the dma addresss unless message->is_dma_mapped.
+ * This driver manages the dma address unless message->is_dma_mapped.
*/
static int
atmel_spi_dma_map_xfer(struct atmel_spi *as, struct spi_transfer *xfer)
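The body of such a helper is essentially a pair of conditional dma_map_single() calls; the sketch below illustrates the idea (dev stands for the controller's struct device) rather than the driver's exact code:

	/* A NULL buffer means "nothing to map"; a valid buffer must end up
	 * with a matching DMA address. */
	if (xfer->tx_buf) {
		xfer->tx_dma = dma_map_single(dev, (void *)xfer->tx_buf,
					      xfer->len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, xfer->tx_dma))
			return -ENOMEM;
	}
	if (xfer->rx_buf) {
		xfer->rx_dma = dma_map_single(dev, xfer->rx_buf,
					      xfer->len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, xfer->rx_dma))
			return -ENOMEM;
	}
	return 0;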
/*
- * This supports acccess to SPI devices using normal userspace I/O calls.
+ * This supports access to SPI devices using normal userspace I/O calls.
* Note that while traditional UNIX/POSIX I/O semantics are half duplex,
* and often mask message boundaries, full SPI support requires full duplex
* transfers. There are several kinds of internal message boundaries to
}
}
-/* Intialize bitmangler to map from a byte value to the mangled word that
+/* Initialize bitmangler to map from a byte value to the mangled word that
* must be output to program the Xilinx part through the DEBI port.
* Xilinx Data Bit->DEBI Bit: 0->15 1->7 2->6 3->12 4->11 5->2 6->1 7->0
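The bit permutation quoted above is enough to rebuild the table; a small, self-contained sketch (table and function names are illustrative, not the driver's actual code):

/* data bit n of the input byte drives DEBI bit debi_bit_of_data_bit[n] */
static const int debi_bit_of_data_bit[8] = { 15, 7, 6, 12, 11, 2, 1, 0 };
static unsigned short bitmangler[256];

static void build_bitmangler_table(void)
{
	int byte, bit;

	for (byte = 0; byte < 256; byte++) {
		unsigned short word = 0;

		for (bit = 0; bit < 8; bit++)
			if (byte & (1 << bit))
				word |= 1 << debi_bit_of_data_bit[bit];
		bitmangler[byte] = word;	/* word to write on the DEBI port for this byte */
	}
}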
* transfer FPGA code, init IBM chip, transfer IBM microcode
};
/*******************************************************************************
- * USB gadged driver functions
+ * USB gadget driver functions
*******************************************************************************
*/
int usb_gadget_probe_driver(struct usb_gadget_driver *driver,
spin_lock_irqsave(&imx21->lock, flags);
- /* Reset the Host controler modules */
+ /* Reset the Host controller modules */
writel(USBOTG_RST_RSTCTRL | USBOTG_RST_RSTRH |
USBOTG_RST_RSTHSIE | USBOTG_RST_RSTHC,
imx21->regs + USBOTG_RST_CTRL);
/* Some boards (mostly VIA?) report bogus overcurrent indications,
* causing massive log spam unless we completely ignore them. It
- * may be relevant that VIA VT8235 controlers, where PORT_POWER is
+ * may be relevant that VIA VT8235 controllers, where PORT_POWER is
* always set, seem to clear PORT_OCC and PORT_CSC when writing to
* PORT_POWER; that's surprising, but maybe within-spec.
*/
goto exit;
}
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
dev = kzalloc(sizeof(struct adu_device), GFP_KERNEL);
if (dev == NULL) {
dev_err(&interface->dev, "Out of memory\n");
int i;
int retval = -ENOMEM;
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
dev = kzalloc(sizeof(struct iowarrior), GFP_KERNEL);
if (dev == NULL) {
dev_err(&interface->dev, "Out of memory\n");
int i;
int retval = -ENOMEM;
- /* allocate memory for our device state and intialize it */
+ /* allocate memory for our device state and initialize it */
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (dev == NULL) {
struct musb_hw_ep *hw_ep;
unsigned count = 0;
- /* intialize endpoint list just once */
+ /* initialize endpoint list just once */
INIT_LIST_HEAD(&(musb->g.ep_list));
for (epnum = 0, hw_ep = musb->endpoints;
*
* -EINVAL something went wrong (not driver)
* -EBUSY another gadget is already using the controller
- * -ENOMEM no memeory to perform the operation
+ * -ENOMEM no memory to perform the operation
*
* @param driver the gadget driver
* @param bind the driver's bind function
*
* USB Stack port number 4 (1 based)
* WUSB code port index 3 (0 based)
- * USB Addresss 5 (2 based -- 0 is for default, 1 for root hub)
+ * USB Address 5 (2 based -- 0 is for default, 1 for root hub)
*
 * Now, because we don't use the concept of a default address exactly
* like the (wired) USB code does, we need to kind of skip it. So we
fbiinit2 = sst_read(FBIINIT2);
fbiinit3 = sst_read(FBIINIT3);
- /* everything is reset. we enable fbiinit2/3 remap : dac acces ok */
+ /* everything is reset. we enable fbiinit2/3 remap : dac access ok */
pci_write_config_dword(sst_dev, PCI_INIT_ENABLE,
PCI_EN_INIT_WR | PCI_REMAP_DAC );
#endif
};
-/* Max physical block we can addres w/o extents */
+/* Max physical block we can address w/o extents */
#define EXT4_MAX_BLOCK_FILE_PHYS 0xFFFFFFFF
/*
* to an uninitialized extent.
*
 * Writing to an uninitialized extent may result in splitting the uninitialized
- * extent into multiple /intialized unintialized extents (up to three)
+ * extent into multiple initialized/uninitialized extents (up to three)
* There are three possibilities:
* a> There is no split required: Entire extent should be uninitialized
* b> Splits in two extents: Write is happening at either end of the extent
 * c> Splits in three extents: Someone is writing in the middle of the extent
*
 * One or more index blocks may be needed if the extent tree grows after
- * the unintialized extent split. To prevent ENOSPC occur at the IO
+ * the uninitialized extent split. To prevent ENOSPC occur at the IO
* complete, we need to split the uninitialized extent before DIO submit
* the IO. The uninitialized extent called at this time will be split
* into three uninitialized extent(at most). After IO complete, the part
 * preallocated extents, and those writes extend the file, no need to
* fall back to buffered IO.
*
- * For holes, we fallocate those blocks, mark them as unintialized
+ * For holes, we fallocate those blocks, mark them as uninitialized
 * If those blocks were preallocated, we make sure they are split, but
- * still keep the range to write as unintialized.
+ * still keep the range to write as uninitialized.
*
 * The unwritten extents will be converted to written when DIO is completed.
 * For async direct IO, since the IO may still be pending when we return, we
* #1 and #2 can be simply solved by never taking the lock
* here for system files (which are the only type we read
* during mount). It's a heavier approach, but our main
- * concern is user-accesible files anyway.
+ * concern is user-accessible files anyway.
*
* #3 works itself out because we'll eventually take the
* cluster lock before trusting anything anyway.
if (res->sr_bg_blkno) {
/* Attempt to short-circuit the usual search mechanism
* by jumping straight to the most recently used
- * allocation group. This helps us mantain some
+ * allocation group. This helps us maintain some
* contiguousness across allocations. */
status = ocfs2_search_one_group(ac, handle, bits_wanted,
min_bits, res, &bits_left);
* Slab object creation initialisation for the XFS inode.
* This covers only the idempotent fields in the XFS inode;
* all other fields need to be initialised on allocation
- * from the slab. This avoids the need to repeatedly intialise
+ * from the slab. This avoids the need to repeatedly initialise
 * fields in the xfs inode that are left in the initialised state
* when freeing the inode.
*/
struct acpi_table_bert {
struct acpi_table_header header; /* Common ACPI table header */
u32 region_length; /* Length of the boot error region */
- u64 address; /* Physical addresss of the error region */
+ u64 address; /* Physical address of the error region */
};
/* Boot Error Region (not a subtable, pointed to by Address field above) */
/*
* To iterate across the tasks in a cgroup:
*
- * 1) call cgroup_iter_start to intialize an iterator
+ * 1) call cgroup_iter_start to initialize an iterator
*
* 2) call cgroup_iter_next() to retrieve member tasks until it
* returns NULL or until you want to end the iteration
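Assuming the cgroup_iter_start()/cgroup_iter_next()/cgroup_iter_end() API of this kernel generation (the final call, which releases the iterator, is implied but cut off by the hunk above), the pattern is roughly:

#include <linux/cgroup.h>
#include <linux/sched.h>

/* Hedged sketch: count the tasks in a cgroup using the iterator described above. */
static int count_cgroup_tasks(struct cgroup *cgrp)
{
	struct cgroup_iter it;
	struct task_struct *task;
	int n = 0;

	cgroup_iter_start(cgrp, &it);				/* 1) initialize the iterator */
	while ((task = cgroup_iter_next(cgrp, &it)) != NULL)	/* 2) walk member tasks */
		n++;
	cgroup_iter_end(cgrp, &it);				/* release iterator state */

	return n;
}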
* @closure: See &fw_cdev_event_common;
* set by %FW_CDEV_CREATE_ISO_CONTEXT ioctl
* @type: %FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL
- * @completed: Offset into the receive buffer; data before this offest is valid
+ * @completed: Offset into the receive buffer; data before this offset is valid
*
* This event is sent in multichannel contexts (context type
* %FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL) for &fw_cdev_iso_packet buffer
size_t data_size;
/*
- * This resources can be specified relatievly to the parent device.
+ * These resources can be specified relative to the parent device.
* For accessing device you should use resources from device
*/
int num_resources;
#define SCTP_SOCKOPT_PEELOFF 102 /* peel off association. */
/* Options 104-106 are deprecated and removed. Do not use this space */
#define SCTP_SOCKOPT_CONNECTX_OLD 107 /* CONNECTX old requests. */
-#define SCTP_GET_PEER_ADDRS 108 /* Get all peer addresss. */
-#define SCTP_GET_LOCAL_ADDRS 109 /* Get all local addresss. */
+#define SCTP_GET_PEER_ADDRS 108 /* Get all peer addresses. */
+#define SCTP_GET_LOCAL_ADDRS 109 /* Get all local addresses. */
#define SCTP_SOCKOPT_CONNECTX 110 /* CONNECTX requests. */
#define SCTP_SOCKOPT_CONNECTX3 111 /* CONNECTX requests (updated) */
*/
struct fcp_cmnd {
__u8 fc_lun[8]; /* logical unit number */
- __u8 fc_cmdref; /* commmand reference number */
+ __u8 fc_cmdref; /* command reference number */
__u8 fc_pri_ta; /* priority and task attribute */
__u8 fc_tm_flags; /* task management flags */
__u8 fc_flags; /* additional len & flags */
struct fcp_cmnd32 {
__u8 fc_lun[8]; /* logical unit number */
- __u8 fc_cmdref; /* commmand reference number */
+ __u8 fc_cmdref; /* command reference number */
__u8 fc_pri_ta; /* priority and task attribute */
__u8 fc_tm_flags; /* task management flags */
__u8 fc_flags; /* additional len & flags */
}
}
-/* Intialize kdb_printf, breakpoint tables and kdb state */
+/* Initialize kdb_printf, breakpoint tables and kdb state */
void __init kdb_init(int lvl)
{
static int kdb_init_lvl = KDB_NOT_INITIALIZED;
* just verifies it is an address we can use.
*
* Since the kernel does everything in page size chunks ensure
- * the destination addreses are page aligned. Too many
+ * the destination addresses are page aligned. Too many
 * special cases crop up when we don't do this. The most
* insidious is getting overlapping destination addresses
* simply because addresses are changed to page size
/**
* swsusp_read - read the hibernation image.
* @flags_p: flags passed by the "frozen" kernel in the image header should
- * be written into this memeory location
+ * be written into this memory location
*/
int swsusp_read(unsigned int *flags_p)
* try_to_wake_up_local - try to wake up a local task with rq lock held
* @p: the thread to be awakened
*
- * Put @p on the run-queue if it's not alredy there. The caller must
+ * Put @p on the run-queue if it's not already there. The caller must
* ensure that this_rq() is locked, @p is bound to this_rq() and not
* the current task. this_rq() stays locked over invocation.
*/
buf[result] = '\0';
- /* Convert the decnet addresss to binary */
+ /* Convert the decnet address to binary */
result = -EIO;
nodep = strchr(buf, '.') + 1;
if (!nodep)
int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq)
{
- /* Intialize mult/shift and max_idle_ns */
+ /* Initialize mult/shift and max_idle_ns */
__clocksource_updatefreq_scale(cs, scale, freq);
 /* Add clocksource to the clocksource list */
*/
/*
- * Function trace entry - function address and parent function addres:
+ * Function trace entry - function address and parent function address:
*/
FTRACE_ENTRY(function, ftrace_entry,
* @policy: validation policy
*
* Parses a stream of attributes and stores a pointer to each attribute in
- * the tb array accessable via the attribute type. Attributes with a type
+ * the tb array accessible via the attribute type. Attributes with a type
* exceeding maxtype will be silently ignored for backwards compatibility
* reasons. policy may be set to NULL if no validation is required.
*
static char *io_tlb_start, *io_tlb_end;
/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
* io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
*/
static unsigned long io_tlb_nslabs;
/*
* (Un)populated page region iterators. Iterate over (un)populated
- * page regions betwen @start and @end in @chunk. @rs and @re should
+ * page regions between @start and @end in @chunk. @rs and @re should
* be integer variables and will be set to start and end page index of
* the current region.
*/
*
* However, virtual mappings need a page table and TLBs. Many Linux
* architectures already map their physical space using 1-1 mappings
- * via TLBs. For those arches the virtual memmory map is essentially
+ * via TLBs. For those arches the virtual memory map is essentially
* for free if we use the same page size as the 1-1 mappings. In that
* case the overhead consists of a few additional pages that are
* allocated to create a view of memory for vmemmap.
static void __net_exit default_device_exit_batch(struct list_head *net_list)
{
 /* At exit all network devices must be removed from a network
- * namespace. Do this in the reverse order of registeration.
+ * namespace. Do this in the reverse order of registration.
* Do this across as many network namespaces as possible to
* improve batching efficiency.
*/
/*
* This processes a device up event. We only start up
* the loopback device & ethernet devices with correct
- * MAC addreses automatically. Others must be started
+ * MAC addresses automatically. Others must be started
* specifically.
*
* FIXME: How should we configure the loopback address ? If we could dispense
return 0;
}
-/* Intialize TSO state of a skb.
+/* Initialize TSO state of a skb.
* This must be invoked the first time we consider transmitting
* SKB onto the wire.
*/
int cnt, def;
/*
- * If choice is mod then we may have more items slected
+ * If choice is mod then we may have more items selected
* and if no then no-one.
* In both cases stop.
*/
* A module includes a number of sections that are discarded
* either when loaded or when used as built-in.
* For loaded modules all functions marked __init and all data
- * marked __initdata will be discarded when the module has been intialized.
+ * marked __initdata will be discarded when the module has been initialized.
* Likewise for modules used built-in the sections marked __exit
* are discarded because __exit marked function are supposed to be called
* only when a module is unloaded which never happens for built-in modules.
* The format used for transition tables is based on the GNU flex table
* file format (--tables-file option; see Table File Format in the flex
* info pages and the flex sources for documentation). The magic number
- * used in the header is 0x1B5E783D insted of 0xF13C57B1 though, because
+ * used in the header is 0x1B5E783D instead of 0xF13C57B1 though, because
* the YY_ID_CHK (check) and YY_ID_DEF (default) tables are used
* slightly differently (see the apparmor-parser package).
*/
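Only the two magic values come from the comment above; the assumption that the 32-bit magic sits big-endian at the start of the blob is illustrative, not the documented header layout:

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define AA_TABLE_MAGIC   0x1B5E783D	/* AppArmor variant, per the comment above */
#define FLEX_TABLE_MAGIC 0xF13C57B1	/* plain flex --tables-file magic */

static bool looks_like_apparmor_table(const uint8_t *buf, size_t len)
{
	uint32_t magic;

	if (len < 4)
		return false;
	magic = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
		((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
	return magic == AA_TABLE_MAGIC;
}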
* external accesses. Thus, you should call this function at the end
* of the initialization of the card.
*
- * Returns zero otherwise a negative error code if the registrain failed.
+ * Returns zero, or a negative error code if the registration failed.
*/
int snd_card_register(struct snd_card *card)
{
if (push)
snd_pcm_update_hw_ptr(substream);
/* The jiffies check in snd_pcm_update_hw_ptr*() is done by
- * a delta betwen the current jiffies, this gives a large enough
+ * a delta between the current jiffies, this gives a large enough
* delta, effectively to skip the check once.
*/
substream->runtime->hw_ptr_jiffies = jiffies - HZ * 1000;
snd_printd("OPL3-SA [0x%lx] detect (1) = 0x%x (0x%x)\n", port, tmp, tmp1);
return -ENODEV;
}
- /* try if the MIC register is accesible */
+ /* try if the MIC register is accessible */
tmp = snd_opl3sa2_read(chip, OPL3SA2_MIC);
snd_opl3sa2_write(chip, OPL3SA2_MIC, 0x8a);
if (((tmp1 = snd_opl3sa2_read(chip, OPL3SA2_MIC)) & 0x9f) != 0x8a) {
#define PLAYBACK_LIST_PTR 0x02 /* Pointer to the current period being played */
/* PTR[5:0], Default: 0x0 */
#define PLAYBACK_UNKNOWN3 0x03 /* Not used ?? */
-#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA addresss */
+#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA address */
/* DMA[31:0], Default: 0x0 */
#define PLAYBACK_PERIOD_SIZE 0x05 /* Playback period size. win2000 uses 0x04000000 */
/* SIZE[31:16], Default: 0x0 */
*/
#define PLAYBACK_LIST_SIZE 0x01 /* Size of list in bytes << 16. E.g. 8 periods -> 0x00380000 */
#define PLAYBACK_LIST_PTR 0x02 /* Pointer to the current period being played */
-#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA addresss */
+#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA address */
#define PLAYBACK_PERIOD_SIZE 0x05 /* Playback period size */
#define PLAYBACK_POINTER 0x06 /* Playback period pointer. Sample currently in DAC */
#define PLAYBACK_UNKNOWN1 0x07
#define PLAYBACK_LIST_SIZE 0x01 /* Size of list in bytes << 16. E.g. 8 periods -> 0x00380000 */
#define PLAYBACK_LIST_PTR 0x02 /* Pointer to the current period being played */
#define PLAYBACK_UNKNOWN3 0x03 /* Not used */
-#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA addresss */
+#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA address */
#define PLAYBACK_PERIOD_SIZE 0x05 /* Playback period size. win2000 uses 0x04000000 */
#define PLAYBACK_POINTER 0x06 /* Playback period pointer. Used with PLAYBACK_LIST_PTR to determine buffer position currently in DAC */
#define PLAYBACK_FIFO_END_ADDRESS 0x07 /* Playback FIFO end address */
#define RINGB_EN_2CODEC 0x0020
#define RINGB_SING_BIT_DUAL 0x0040
-/* ****Port Adresses**** */
+/* ****Port Addresses**** */
/* Write & Read */
#define ESM_INDEX 0x02
struct snd_kcontrol *playback_mixer_ctls[HDSPM_MAX_CHANNELS];
 /* but input too much, so not used */
struct snd_kcontrol *input_mixer_ctls[HDSPM_MAX_CHANNELS];
- /* full mixer accessable over mixer ioctl or hwdep-device */
+ /* full mixer accessible over mixer ioctl or hwdep-device */
struct hdspm_mixer *mixer;
};
return bit2freq_tab[n];
}
-/* Write/read to/from HDSPM with Adresses in Bytes
+/* Write/read to/from HDSPM with Addresses in Bytes
not words but only 32Bit writes are allowed */
static inline void hdspm_write(struct hdspm * hdspm, unsigned int reg,
/* Channel playback mixer as default control
Note: the whole matrix would be 128*HDSPM_MIXER_CHANNELS Faders,
- thats too * big for any alsamixer they are accesible via special
+ that's too big for any alsamixer; they are accessible via special
 IOCTL on hwdep and the 2-dimensional mixer control
*/
return ret;
}
- /* initalize private data */
+ /* initialize private data */
max98088->sysclk = (unsigned)-1;
max98088->eq_textcnt = 0;
goto out3;
}
- /* Set audio clock heirachy for S/PDIF */
+ /* Set audio clock hierarchy for S/PDIF */
clk_set_parent(mout_epll, fout_epll);
clk_set_parent(sclk_audio0, mout_epll);
clk_set_parent(sclk_spdif, sclk_audio0);
 /* We have to set the clock directly on this part because the clock
 * scheme of Samsung SoCs does not support setting rates from an abstract
- * clock of it's heirachy.
+ * clock of its hierarchy.
*/
static int set_audio_clock_rate(unsigned long epll_rate,
unsigned long audio_rate)
if (ret)
goto err1;
- /* Set audio clock heirachy manually */
+ /* Set audio clock hierarchy manually */
ret = set_audio_clock_heirachy(smdk_snd_spdif_device);
if (ret)
goto err1;