* 'staging-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6:
staging: zram: fix data corruption issue
Staging: Comedi: Fix a few NI module dependencies
Staging: comedi: Add MODULE_LICENSE and similar to NI modules
staging: brcm80211: bugfix for softmac crash on multi cpu configurations
staging: sst: Fix for dmic capture on v2 pmic
staging: hv: Enable sending GARP packet after live migration
Who: Jean Delvare <khali@linux-fr.org>
----------------------------
+
+What: noswapaccount kernel command line parameter
+When: 2.6.40
+Why: The original implementation of the memsw feature enabled by
+ CONFIG_CGROUP_MEM_RES_CTLR_SWAP could be disabled by the noswapaccount
+ kernel parameter (introduced in 2.6.29-rc1). Later on, this decision
+ turned out not to be ideal because we cannot have the feature compiled
+ in but disabled by default and let only interested users enable it
+ (e.g. general distribution kernels might need it). Therefore we have
+ added the swapaccount[=0|1] parameter (introduced in 2.6.37) which
+ provides both possibilities. If we remove noswapaccount we will have
+ fewer command line parameters with the same functionality and we
+ can also clean up the parameter handling a bit.
+Who: Michal Hocko <mhocko@suse.cz>
+
+----------------------------
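For illustration only (not part of the patch): a hypothetical bootloader entry that disables swap accounting with the surviving parameter; swapaccount=0 has the same effect the legacy noswapaccount flag had.

    linux /vmlinuz-2.6.38 root=/dev/sda1 ro swapaccount=0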
tcp_dsack - BOOLEAN
Allows TCP to send "duplicate" SACKs.
-tcp_ecn - BOOLEAN
+tcp_ecn - INTEGER
Enable Explicit Congestion Notification (ECN) in TCP. ECN is only
used when both ends of the TCP flow support it. It is useful to
avoid losses due to congestion (when the bottleneck router supports
+Version 15 of schedstats dropped some sched_yield() counters:
+yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
+identical to version 14.
+
Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20 although it is identical to the stats from version
12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
CPU statistics
--------------
-cpu<N> 1 2 3 4 5 6 7 8 9 10 11 12
-
-NOTE: In the sched_yield() statistics, the active queue is considered empty
- if it has only one process in it, since obviously the process calling
- sched_yield() is that process.
+cpu<N> 1 2 3 4 5 6 7 8 9
-First four fields are sched_yield() statistics:
- 1) # of times both the active and the expired queue were empty
- 2) # of times just the active queue was empty
- 3) # of times just the expired queue was empty
- 4) # of times sched_yield() was called
+First field is a sched_yield() statistic:
+ 1) # of times sched_yield() was called
Next three are schedule() statistics:
- 5) # of times we switched to the expired queue and reused it
- 6) # of times schedule() was called
- 7) # of times schedule() left the processor idle
+ 2) # of times we switched to the expired queue and reused it
+ 3) # of times schedule() was called
+ 4) # of times schedule() left the processor idle
Next two are try_to_wake_up() statistics:
- 8) # of times try_to_wake_up() was called
- 9) # of times try_to_wake_up() was called to wake up the local cpu
+ 5) # of times try_to_wake_up() was called
+ 6) # of times try_to_wake_up() was called to wake up the local cpu
Next three are statistics describing scheduling latency:
- 10) sum of all time spent running by tasks on this processor (in jiffies)
- 11) sum of all time spent waiting to run by tasks on this processor (in
+ 7) sum of all time spent running by tasks on this processor (in jiffies)
+ 8) sum of all time spent waiting to run by tasks on this processor (in
jiffies)
- 12) # of timeslices run on this cpu
+ 9) # of timeslices run on this cpu
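As an aside (not from the kernel tree), a minimal user-space sketch that parses the nine version-15 per-CPU fields listed above; the names printed are illustrative shorthand for the descriptions given here.

    #include <stdio.h>

    int main(void)
    {
    	FILE *f = fopen("/proc/schedstat", "r");
    	char line[512];

    	if (!f)
    		return 1;
    	while (fgets(line, sizeof(line), f)) {
    		unsigned long long v[9];
    		int cpu;

    		/* version 15: "cpuN" followed by the nine fields above */
    		if (sscanf(line, "cpu%d %llu %llu %llu %llu %llu %llu %llu %llu %llu",
    			   &cpu, &v[0], &v[1], &v[2], &v[3], &v[4],
    			   &v[5], &v[6], &v[7], &v[8]) == 10)
    			printf("cpu%d yield=%llu idle=%llu run=%llu wait=%llu slices=%llu\n",
    			       cpu, v[0], v[3], v[6], v[7], v[8]);
    	}
    	fclose(f);
    	return 0;
    }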
Domain statistics
-----------------
laptop Basic Laptop config (default)
hp-laptop HP laptops, e.g. G60
+ asus Asus K52JU, Lenovo G560
dell-laptop Dell laptops
dell-vostro Dell Vostro
olpc-xo-1_5 OLPC XO 1.5
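These model strings are selected through the HD-audio driver's "model" module option; a hypothetical /etc/modprobe.d entry picking the newly added entry might look like:

    options snd-hda-intel model=asus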
F: arch/arm/plat-samsung/
F: arch/arm/plat-s3c24xx/
F: arch/arm/plat-s5p/
+F: drivers/*/*s3c2410*
+F: drivers/*/*/*s3c2410*
ARM/S3C2410 ARM ARCHITECTURE
M: Ben Dooks <ben-linux@fluff.org>
F: drivers/scsi/be2iscsi/
SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
-M: Sathya Perla <sathyap@serverengines.com>
-M: Subbu Seetharaman <subbus@serverengines.com>
-M: Sarveshwar Bandi <sarveshwarb@serverengines.com>
-M: Ajit Khaparde <ajitk@serverengines.com>
+M: Sathya Perla <sathya.perla@emulex.com>
+M: Subbu Seetharaman <subbu.seetharaman@emulex.com>
+M: Ajit Khaparde <ajit.khaparde@emulex.com>
L: netdev@vger.kernel.org
-W: http://www.serverengines.com
+W: http://www.emulex.com
S: Supported
F: drivers/net/benet/
SIMTEC EB110ATX (Chalice CATS)
P: Ben Dooks
-M: Vincent Sanders <support@simtec.co.uk>
+P: Vincent Sanders <vince@simtec.co.uk>
+M: Simtec Linux Team <linux@simtec.co.uk>
W: http://www.simtec.co.uk/products/EB110ATX/
S: Supported
SIMTEC EB2410ITX (BAST)
P: Ben Dooks
-M: Vincent Sanders <support@simtec.co.uk>
+P: Vincent Sanders <vince@simtec.co.uk>
+M: Simtec Linux Team <linux@simtec.co.uk>
W: http://www.simtec.co.uk/products/EB2410ITX/
S: Supported
-F: arch/arm/mach-s3c2410/
-F: drivers/*/*s3c2410*
-F: drivers/*/*/*s3c2410*
+F: arch/arm/mach-s3c2410/mach-bast.c
+F: arch/arm/mach-s3c2410/bast-ide.c
+F: arch/arm/mach-s3c2410/bast-irq.c
TI DAVINCI MACHINE SUPPORT
M: Kevin Hilman <khilman@deeprootsystems.com>
F: drivers/net/wireless/wl1251/*
WL1271 WIRELESS DRIVER
-M: Luciano Coelho <luciano.coelho@nokia.com>
+M: Luciano Coelho <coelho@ti.com>
L: linux-wireless@vger.kernel.org
-W: http://wireless.kernel.org
+W: http://wireless.kernel.org/en/users/Drivers/wl12xx
T: git git://git.kernel.org/pub/scm/linux/kernel/git/luca/wl12xx.git
S: Maintained
-F: drivers/net/wireless/wl12xx/wl1271*
+F: drivers/net/wireless/wl12xx/
F: include/linux/wl12xx.h
WL3501 WIRELESS PCMCIA CARD DRIVER
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 38
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*
static struct resource ep93xx_ac97_resources[] = {
{
.start = EP93XX_AAC_PHYS_BASE,
- .end = EP93XX_AAC_PHYS_BASE + 0xb0 - 1,
+ .end = EP93XX_AAC_PHYS_BASE + 0xac - 1,
.flags = IORESOURCE_MEM,
},
{
KEY(3, 3, KEY_POWER),
};
-static const struct matrix_keymap_data mx25pdk_keymap_data __initdata = {
+static const struct matrix_keymap_data mx25pdk_keymap_data __initconst = {
.keymap = mx25pdk_keymap,
.keymap_size = ARRAY_SIZE(mx25pdk_keymap),
};
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-unsigned long ixp4xx_timer_freq = FREQ;
+unsigned long ixp4xx_timer_freq = IXP4XX_TIMER_FREQ;
EXPORT_SYMBOL(ixp4xx_timer_freq);
static void __init ixp4xx_clocksource_init(void)
{
static void __init ixp4xx_clockevent_init(void)
{
- clockevent_ixp4xx.mult = div_sc(FREQ, NSEC_PER_SEC,
+ clockevent_ixp4xx.mult = div_sc(IXP4XX_TIMER_FREQ, NSEC_PER_SEC,
clockevent_ixp4xx.shift);
clockevent_ixp4xx.max_delta_ns =
clockevent_delta2ns(0xfffffffe, &clockevent_ixp4xx);
* 66.66... MHz. We do a convoluted calculation of CLOCK_TICK_RATE b/c the
* timer register ignores the bottom 2 bits of the LATCH value.
*/
-#define FREQ 66666000
-#define CLOCK_TICK_RATE (((FREQ / HZ & ~IXP4XX_OST_RELOAD_MASK) + 1) * HZ)
+#define IXP4XX_TIMER_FREQ 66666000
+#define CLOCK_TICK_RATE \
+ (((IXP4XX_TIMER_FREQ / HZ & ~IXP4XX_OST_RELOAD_MASK) + 1) * HZ)
qmgr_queue_descs[queue], queue);
qmgr_queue_descs[queue][0] = '\x0';
#endif
+
+ while ((addr = qmgr_get_entry(queue)))
+ printk(KERN_ERR "qmgr: released queue %i not empty: 0x%08X\n",
+ queue, addr);
+
__raw_writel(0, &qmgr_regs->sram[queue]);
used_sram_bitmap[0] &= ~mask[0];
spin_unlock_irq(&qmgr_lock);
module_put(THIS_MODULE);
-
- while ((addr = qmgr_get_entry(queue)))
- printk(KERN_ERR "qmgr: released queue %i not empty: 0x%08X\n",
- queue, addr);
}
static int qmgr_init(void)
reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \
reg &= ~BM_CLKCTRL_##dr##_DIV; \
reg |= div << BP_CLKCTRL_##dr##_DIV; \
- if (reg | (1 << clk->enable_shift)) { \
+ if (reg & (1 << clk->enable_shift)) { \
pr_err("%s: clock is gated\n", __func__); \
return -EINVAL; \
} \
{ \
if (parent != clk->parent) { \
__raw_writel(BM_CLKCTRL_CLKSEQ_BYPASS_##bit, \
- HW_CLKCTRL_CLKSEQ_TOG); \
+ CLKCTRL_BASE_ADDR + HW_CLKCTRL_CLKSEQ_TOG); \
clk->parent = parent; \
} \
\
} else { \
reg &= ~BM_CLKCTRL_##dr##_DIV; \
reg |= div << BP_CLKCTRL_##dr##_DIV; \
- if (reg | (1 << clk->enable_shift)) { \
+ if (reg & (1 << clk->enable_shift)) { \
pr_err("%s: clock is gated\n", __func__); \
return -EINVAL; \
} \
} \
- __raw_writel(reg, CLKCTRL_BASE_ADDR + HW_CLKCTRL_CPU); \
+ __raw_writel(reg, CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \
\
for (i = 10000; i; i--) \
if (!(__raw_readl(CLKCTRL_BASE_ADDR + \
{ \
if (parent != clk->parent) { \
__raw_writel(BM_CLKCTRL_CLKSEQ_BYPASS_##bit, \
- HW_CLKCTRL_CLKSEQ_TOG); \
+ CLKCTRL_BASE_ADDR + HW_CLKCTRL_CLKSEQ_TOG); \
clk->parent = parent; \
} \
\
_REGISTER_CLOCK("duart", NULL, uart_clk)
_REGISTER_CLOCK("imx28-fec.0", NULL, fec_clk)
_REGISTER_CLOCK("imx28-fec.1", NULL, fec_clk)
- _REGISTER_CLOCK("fec.0", NULL, fec_clk)
_REGISTER_CLOCK("rtc", NULL, rtc_clk)
_REGISTER_CLOCK("pll2", NULL, pll2_clk)
_REGISTER_CLOCK(NULL, "hclk", hbus_clk)
if (clk->disable)
clk->disable(clk);
__clk_disable(clk->parent);
- __clk_disable(clk->secondary);
}
}
if (clk->usecount++ == 0) {
__clk_enable(clk->parent);
- __clk_enable(clk->secondary);
if (clk->enable)
clk->enable(clk);
struct mxs_gpio_port *port = (struct mxs_gpio_port *)get_irq_data(irq);
u32 gpio_irq_no_base = port->virtual_irq_start;
+ desc->irq_data.chip->irq_ack(&desc->irq_data);
+
irq_stat = __raw_readl(port->base + PINCTRL_IRQSTAT(port->id)) &
__raw_readl(port->base + PINCTRL_IRQEN(port->id));
int id;
/* Source clock this clk depends on */
struct clk *parent;
- /* Secondary clock to enable/disable with this clock */
- struct clk *secondary;
/* Reference count of clock enable/disable */
__s8 usecount;
/* Register bit position for clock's enable/disable control. */
* On OMAP1510, internal LCD controller will start the transfer
* when it gets enabled, so assume DMA running if LCD enabled.
*/
- if (cpu_is_omap1510())
+ if (cpu_is_omap15xx())
if (omap_readw(OMAP_LCDC_CONTROL) & OMAP_LCDC_CTRL_LCD_EN)
return 1;
void omap_set_lcd_dma_b1_rotation(int rotate)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA rotation is not supported in 1510 mode\n");
BUG();
return;
void omap_set_lcd_dma_b1_mirror(int mirror)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA mirror is not supported in 1510 mode\n");
BUG();
}
void omap_set_lcd_dma_b1_vxres(unsigned long vxres)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA virtual resulotion is not supported "
"in 1510 mode\n");
BUG();
void omap_set_lcd_dma_b1_scale(unsigned int xscale, unsigned int yscale)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA scale is not supported in 1510 mode\n");
BUG();
}
bottom = PIXADDR(lcd_dma.xres - 1, lcd_dma.yres - 1);
/* 1510 DMA requires the bottom address to be 2 more
* than the actual last memory access location. */
- if (cpu_is_omap1510() &&
+ if (cpu_is_omap15xx() &&
lcd_dma.data_type == OMAP_DMA_DATA_TYPE_S32)
bottom += 2;
ei = PIXSTEP(0, 0, 1, 0);
return; /* Suppress warning about uninitialized vars */
}
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
omap_writew(top >> 16, OMAP1510_DMA_LCD_TOP_F1_U);
omap_writew(top, OMAP1510_DMA_LCD_TOP_F1_L);
omap_writew(bottom >> 16, OMAP1510_DMA_LCD_BOT_F1_U);
BUG();
return;
}
- if (!cpu_is_omap1510())
+ if (!cpu_is_omap15xx())
omap_writew(omap_readw(OMAP1610_DMA_LCD_CCR) & ~1,
OMAP1610_DMA_LCD_CCR);
lcd_dma.reserved = 0;
* connected. Otherwise the OMAP internal controller will
* start the transfer when it gets enabled.
*/
- if (cpu_is_omap1510() || !lcd_dma.ext_ctrl)
+ if (cpu_is_omap15xx() || !lcd_dma.ext_ctrl)
return;
w = omap_readw(OMAP1610_DMA_LCD_CTRL);
void omap_setup_lcd_dma(void)
{
BUG_ON(lcd_dma.active);
- if (!cpu_is_omap1510()) {
+ if (!cpu_is_omap15xx()) {
/* Set some reasonable defaults */
omap_writew(0x5440, OMAP1610_DMA_LCD_CCR);
omap_writew(0x9102, OMAP1610_DMA_LCD_CSDP);
omap_writew(0x0004, OMAP1610_DMA_LCD_LCH_CTRL);
}
set_b1_regs();
- if (!cpu_is_omap1510()) {
+ if (!cpu_is_omap15xx()) {
u16 w;
w = omap_readw(OMAP1610_DMA_LCD_CCR);
u16 w;
lcd_dma.active = 0;
- if (cpu_is_omap1510() || !lcd_dma.ext_ctrl)
+ if (cpu_is_omap15xx() || !lcd_dma.ext_ctrl)
return;
w = omap_readw(OMAP1610_DMA_LCD_CCR);
#include <linux/clocksource.h>
#include <linux/clockchips.h>
#include <linux/io.h>
-#include <linux/sched.h>
#include <asm/system.h>
#include <mach/hardware.h>
static int devkit8000_panel_enable_lcd(struct omap_dss_device *dssdev)
{
- twl_i2c_write_u8(TWL4030_MODULE_GPIO, 0x80, REG_GPIODATADIR1);
- twl_i2c_write_u8(TWL4030_MODULE_LED, 0x0, 0x0);
-
if (gpio_is_valid(dssdev->reset_gpio))
gpio_set_value_cansleep(dssdev->reset_gpio, 1);
return 0;
static int devkit8000_twl_gpio_setup(struct device *dev,
unsigned gpio, unsigned ngpio)
{
+ int ret;
+
omap_mux_init_gpio(29, OMAP_PIN_INPUT);
/* gpio + 0 is "mmc0_cd" (input/IRQ) */
mmc[0].gpio_cd = gpio + 0;
/* TWL4030_GPIO_MAX + 1 == ledB, PMU_STAT (out, active low LED) */
gpio_leds[2].gpio = gpio + TWL4030_GPIO_MAX + 1;
- /* gpio + 1 is "LCD_PWREN" (out, active high) */
- devkit8000_lcd_device.reset_gpio = gpio + 1;
- gpio_request(devkit8000_lcd_device.reset_gpio, "LCD_PWREN");
- /* Disable until needed */
- gpio_direction_output(devkit8000_lcd_device.reset_gpio, 0);
+ /* TWL4030_GPIO_MAX + 0 is "LCD_PWREN" (out, active high) */
+ devkit8000_lcd_device.reset_gpio = gpio + TWL4030_GPIO_MAX + 0;
+ ret = gpio_request_one(devkit8000_lcd_device.reset_gpio,
+ GPIOF_DIR_OUT | GPIOF_INIT_LOW, "LCD_PWREN");
+ if (ret < 0) {
+ devkit8000_lcd_device.reset_gpio = -EINVAL;
+ printk(KERN_ERR "Failed to request GPIO for LCD_PWRN\n");
+ }
/* gpio + 7 is "DVI_PD" (out, active low) */
devkit8000_dvi_device.reset_gpio = gpio + 7;
- gpio_request(devkit8000_dvi_device.reset_gpio, "DVI PowerDown");
- /* Disable until needed */
- gpio_direction_output(devkit8000_dvi_device.reset_gpio, 0);
+ ret = gpio_request_one(devkit8000_dvi_device.reset_gpio,
+ GPIOF_DIR_OUT | GPIOF_INIT_LOW, "DVI PowerDown");
+ if (ret < 0) {
+ devkit8000_dvi_device.reset_gpio = -EINVAL;
+ printk(KERN_ERR "Failed to request GPIO for DVI PowerDown\n");
+ }
return 0;
}
platform_add_devices(panda_devices, ARRAY_SIZE(panda_devices));
omap_serial_init();
omap4_twl6030_hsmmc_init(mmc);
- /* OMAP4 Panda uses internal transceiver so register nop transceiver */
- usb_nop_xceiv_register();
omap4_ehci_init();
usb_musb_init(&musb_board_data);
}
static struct regulator_init_data rm680_vemmc = {
.constraints = {
.name = "rm680_vemmc",
- .min_uV = 2900000,
- .max_uV = 2900000,
- .apply_uV = 1,
.valid_modes_mask = REGULATOR_MODE_NORMAL
| REGULATOR_MODE_STANDBY,
.valid_ops_mask = REGULATOR_CHANGE_STATUS
if (!partition->base) {
pr_err("%s: Could not ioremap mux partition at 0x%08x\n",
__func__, partition->phys);
+ kfree(partition);
return -ENODEV;
}
* once during boot sequence, but this works as we are not using secure
* services.
*/
-static void omap3_save_secure_ram_context(u32 target_mpu_state)
+static void omap3_save_secure_ram_context(void)
{
u32 ret;
+ int mpu_next_state = pwrdm_read_next_pwrst(mpu_pwrdm);
if (omap_type() != OMAP2_DEVICE_TYPE_GP) {
/*
pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
ret = _omap_save_secure_sram((u32 *)
__pa(omap3_secure_ram_storage));
- pwrdm_set_next_pwrst(mpu_pwrdm, target_mpu_state);
+ pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);
/* Following is for error tracking, it should not happen */
if (ret) {
printk(KERN_ERR "save_secure_sram() returns %08x\n",
local_fiq_disable();
omap_dma_global_context_save();
- omap3_save_secure_ram_context(PWRDM_POWER_ON);
+ omap3_save_secure_ram_context();
omap_dma_global_context_restore();
local_irq_enable();
struct omap_sr *sr_info = (struct omap_sr *) data;
if (!sr_info) {
- pr_warning("%s: omap_sr struct for sr_%s not found\n",
- __func__, sr_info->voltdm->name);
+ pr_warning("%s: omap_sr struct not found\n", __func__);
return -EINVAL;
}
struct omap_sr *sr_info = (struct omap_sr *) data;
if (!sr_info) {
- pr_warning("%s: omap_sr struct for sr_%s not found\n",
- __func__, sr_info->voltdm->name);
+ pr_warning("%s: omap_sr struct not found\n", __func__);
return -EINVAL;
}
if (!pdata) {
dev_err(&pdev->dev, "%s: platform data missing\n", __func__);
- return -EINVAL;
+ ret = -EINVAL;
+ goto err_free_devinfo;
}
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
}
sr_info = _sr_lookup(pdata->voltdm);
- if (!sr_info) {
+ if (IS_ERR(sr_info)) {
dev_warn(&pdev->dev, "%s: omap_sr struct not found\n",
__func__);
return -EINVAL;
strcat(name, vdd->voltdm.name);
vdd->debug_dir = debugfs_create_dir(name, voltage_dir);
+ kfree(name);
if (IS_ERR(vdd->debug_dir)) {
pr_warning("%s: Unable to create debugfs directory for"
" vdd_%s\n", __func__, vdd->voltdm.name);
case MACH_TYPE_MX35_3DS:
case MACH_TYPE_PCM043:
case MACH_TYPE_LILLY1131:
+ case MACH_TYPE_VPR200:
uart_base = MX3X_UART1_BASE_ADDR;
break;
case MACH_TYPE_MAGX_ZN5:
break;
case MACH_TYPE_MX51_BABBAGE:
case MACH_TYPE_EUKREA_CPUIMX51SD:
+ case MACH_TYPE_MX51_3DS:
uart_base = MX51_UART1_BASE_ADDR;
break;
case MACH_TYPE_MX50_RDP:
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
-# Last update: Sun Dec 12 23:24:27 2010
+# Last update: Mon Feb 7 08:59:27 2011
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
vs_v210 MACH_VS_V210 VS_V210 2252
vs_v212 MACH_VS_V212 VS_V212 2253
hmt MACH_HMT HMT 2254
-suen3 MACH_SUEN3 SUEN3 2255
+km_kirkwood MACH_KM_KIRKWOOD KM_KIRKWOOD 2255
vesper MACH_VESPER VESPER 2256
str9 MACH_STR9 STR9 2257
omap3_wl_ff MACH_OMAP3_WL_FF OMAP3_WL_FF 2258
ea20 MACH_EA20 EA20 3002
awm2 MACH_AWM2 AWM2 3003
ti8148evm MACH_TI8148EVM TI8148EVM 3004
-tegra_seaboard MACH_TEGRA_SEABOARD TEGRA_SEABOARD 3005
+seaboard MACH_SEABOARD SEABOARD 3005
linkstation_chlv2 MACH_LINKSTATION_CHLV2 LINKSTATION_CHLV2 3006
tera_pro2_rack MACH_TERA_PRO2_RACK TERA_PRO2_RACK 3007
rubys MACH_RUBYS RUBYS 3008
ics_if_voip MACH_ICS_IF_VOIP ICS_IF_VOIP 3206
wlf_cragg_6410 MACH_WLF_CRAGG_6410 WLF_CRAGG_6410 3207
punica MACH_PUNICA PUNICA 3208
-sbc_nt250 MACH_SBC_NT250 SBC_NT250 3209
+trimslice MACH_TRIMSLICE TRIMSLICE 3209
mx27_wmultra MACH_MX27_WMULTRA MX27_WMULTRA 3210
mackerel MACH_MACKEREL MACKEREL 3211
fa9x27 MACH_FA9X27 FA9X27 3213
pcm048 MACH_PCM048 PCM048 3236
dds MACH_DDS DDS 3237
chalten_xa1 MACH_CHALTEN_XA1 CHALTEN_XA1 3238
+ts48xx MACH_TS48XX TS48XX 3239
+tonga2_tfttimer MACH_TONGA2_TFTTIMER TONGA2_TFTTIMER 3240
+whistler MACH_WHISTLER WHISTLER 3241
+asl_phoenix MACH_ASL_PHOENIX ASL_PHOENIX 3242
+at91sam9263otlite MACH_AT91SAM9263OTLITE AT91SAM9263OTLITE 3243
+ddplug MACH_DDPLUG DDPLUG 3244
+d2plug MACH_D2PLUG D2PLUG 3245
+kzm9d MACH_KZM9D KZM9D 3246
+verdi_lte MACH_VERDI_LTE VERDI_LTE 3247
+nanozoom MACH_NANOZOOM NANOZOOM 3248
+dm3730_som_lv MACH_DM3730_SOM_LV DM3730_SOM_LV 3249
+dm3730_torpedo MACH_DM3730_TORPEDO DM3730_TORPEDO 3250
+anchovy MACH_ANCHOVY ANCHOVY 3251
+re2rev20 MACH_RE2REV20 RE2REV20 3253
+re2rev21 MACH_RE2REV21 RE2REV21 3254
+cns21xx MACH_CNS21XX CNS21XX 3255
+rider MACH_RIDER RIDER 3257
+nsk330 MACH_NSK330 NSK330 3258
+cns2133evb MACH_CNS2133EVB CNS2133EVB 3259
+z3_816x_mod MACH_Z3_816X_MOD Z3_816X_MOD 3260
+z3_814x_mod MACH_Z3_814X_MOD Z3_814X_MOD 3261
+beect MACH_BEECT BEECT 3262
+dma_thunderbug MACH_DMA_THUNDERBUG DMA_THUNDERBUG 3263
+omn_at91sam9g20 MACH_OMN_AT91SAM9G20 OMN_AT91SAM9G20 3264
+mx25_e2s_uc MACH_MX25_E2S_UC MX25_E2S_UC 3265
+mione MACH_MIONE MIONE 3266
+top9000_tcu MACH_TOP9000_TCU TOP9000_TCU 3267
+top9000_bsl MACH_TOP9000_BSL TOP9000_BSL 3268
+kingdom MACH_KINGDOM KINGDOM 3269
+armadillo460 MACH_ARMADILLO460 ARMADILLO460 3270
+lq2 MACH_LQ2 LQ2 3271
+sweda_tms2 MACH_SWEDA_TMS2 SWEDA_TMS2 3272
+mx53_loco MACH_MX53_LOCO MX53_LOCO 3273
+acer_a8 MACH_ACER_A8 ACER_A8 3275
+acer_gauguin MACH_ACER_GAUGUIN ACER_GAUGUIN 3276
+guppy MACH_GUPPY GUPPY 3277
+mx61_ard MACH_MX61_ARD MX61_ARD 3278
+tx53 MACH_TX53 TX53 3279
+omapl138_case_a3 MACH_OMAPL138_CASE_A3 OMAPL138_CASE_A3 3280
+uemd MACH_UEMD UEMD 3281
+ccwmx51mut MACH_CCWMX51MUT CCWMX51MUT 3282
+rockhopper MACH_ROCKHOPPER ROCKHOPPER 3283
+nookcolor MACH_NOOKCOLOR NOOKCOLOR 3284
+hkdkc100 MACH_HKDKC100 HKDKC100 3285
+ts42xx MACH_TS42XX TS42XX 3286
+aebl MACH_AEBL AEBL 3287
+wario MACH_WARIO WARIO 3288
+gfs_spm MACH_GFS_SPM GFS_SPM 3289
+cm_t3730 MACH_CM_T3730 CM_T3730 3290
+isc3 MACH_ISC3 ISC3 3291
+rascal MACH_RASCAL RASCAL 3292
+hrefv60 MACH_HREFV60 HREFV60 3293
+tpt_2_0 MACH_TPT_2_0 TPT_2_0 3294
+pyramid_td MACH_PYRAMID_TD PYRAMID_TD 3295
+splendor MACH_SPLENDOR SPLENDOR 3296
+guf_planet MACH_GUF_PLANET GUF_PLANET 3297
+msm8x60_qt MACH_MSM8X60_QT MSM8X60_QT 3298
+htc_hd_mini MACH_HTC_HD_MINI HTC_HD_MINI 3299
+athene MACH_ATHENE ATHENE 3300
+deep_r_ek_1 MACH_DEEP_R_EK_1 DEEP_R_EK_1 3301
+vivow_ct MACH_VIVOW_CT VIVOW_CT 3302
+nery_1000 MACH_NERY_1000 NERY_1000 3303
+rfl109145_ssrv MACH_RFL109145_SSRV RFL109145_SSRV 3304
+nmh MACH_NMH NMH 3305
+wn802t MACH_WN802T WN802T 3306
+dragonet MACH_DRAGONET DRAGONET 3307
+geneva_b MACH_GENEVA_B GENEVA_B 3308
+at91sam9263desk16l MACH_AT91SAM9263DESK16L AT91SAM9263DESK16L 3309
+bcmhana_sv MACH_BCMHANA_SV BCMHANA_SV 3310
+bcmhana_tablet MACH_BCMHANA_TABLET BCMHANA_TABLET 3311
+koi MACH_KOI KOI 3312
+ts4800 MACH_TS4800 TS4800 3313
+tqma9263 MACH_TQMA9263 TQMA9263 3314
+holiday MACH_HOLIDAY HOLIDAY 3315
+dma_6410 MACH_DMA6410 DMA6410 3316
+pcats_overlay MACH_PCATS_OVERLAY PCATS_OVERLAY 3317
+hwgw6410 MACH_HWGW6410 HWGW6410 3318
+shenzhou MACH_SHENZHOU SHENZHOU 3319
+cwme9210 MACH_CWME9210 CWME9210 3320
+cwme9210js MACH_CWME9210JS CWME9210JS 3321
+pgs_v1 MACH_PGS_SITARA PGS_SITARA 3322
+colibri_tegra2 MACH_COLIBRI_TEGRA2 COLIBRI_TEGRA2 3323
+w21 MACH_W21 W21 3324
+polysat1 MACH_POLYSAT1 POLYSAT1 3325
+dataway MACH_DATAWAY DATAWAY 3326
+cobral138 MACH_COBRAL138 COBRAL138 3327
+roverpcs8 MACH_ROVERPCS8 ROVERPCS8 3328
+marvelc MACH_MARVELC MARVELC 3329
+navefihid MACH_NAVEFIHID NAVEFIHID 3330
+dm365_cv100 MACH_DM365_CV100 DM365_CV100 3331
+able MACH_ABLE ABLE 3332
+legacy MACH_LEGACY LEGACY 3333
+icong MACH_ICONG ICONG 3334
+rover_g8 MACH_ROVER_G8 ROVER_G8 3335
+t5388p MACH_T5388P T5388P 3336
+dingo MACH_DINGO DINGO 3337
+goflexhome MACH_GOFLEXHOME GOFLEXHOME 3338
#ifdef CONFIG_DEBUG_STACKOVERFLOW
/* FIXME M32R */
#endif
- __do_IRQ(irq);
+ generic_handle_irq(irq);
irq_exit();
set_irq_regs(old_regs);
We ensure r7 points to a valid FDT, just in case the bootloader
is broken or non-existent */
beqi r7, no_fdt_arg /* NULL pointer? don't copy */
- lw r11, r0, r7 /* Does r7 point to a */
- rsubi r11, r11, OF_DT_HEADER /* valid FDT? */
+/* Does r7 point to a valid FDT? Load HEADER magic number */
+ /* Run time Big/Little endian platform */
+ /* Save 1 as word and load byte - 0 - BIG, 1 - LITTLE */
+ addik r11, r0, 0x1 /* BIG/LITTLE checking value */
+ /* __bss_start will be zeroed later - it is just temp location */
+ swi r11, r0, TOPHYS(__bss_start)
+ lbui r11, r0, TOPHYS(__bss_start)
+ beqid r11, big_endian /* DO NOT break delay stop dependency */
+ lw r11, r0, r7 /* Big endian load in delay slot */
+ lwr r11, r0, r7 /* Little endian load */
+big_endian:
+ rsubi r11, r11, OF_DT_HEADER /* Check FDT header */
beqi r11, _prepare_copy_fdt
or r7, r0, r0 /* clear R7 when not valid DTB */
bnei r11, no_fdt_arg /* No - get out of here */
#if CONFIG_XILINX_MICROBLAZE0_USE_BARREL > 0
#define BSRLI(rD, rA, imm) \
bsrli rD, rA, imm
- #elif CONFIG_XILINX_MICROBLAZE0_USE_DIV > 0
- #define BSRLI(rD, rA, imm) \
- ori rD, r0, (1 << imm); \
- idivu rD, rD, rA
#else
#define BSRLI(rD, rA, imm) BSRLI ## imm (rD, rA)
/* Only the used shift constants defined here - add more if needed */
* between mem locations with size of xfer spec'd in bytes
*/
+#ifdef __MICROBLAZEEL__
+#error Microblaze LE does not support ASM optimized lib functions. Disable OPT_LIB_ASM.
+#endif
+
#include <linux/linkage.h>
.text
.globl memcpy
/* MAS registers bit definitions */
-#define MAS0_TLBSEL(x) ((x << 28) & 0x30000000)
-#define MAS0_ESEL(x) ((x << 16) & 0x0FFF0000)
+#define MAS0_TLBSEL(x) (((x) << 28) & 0x30000000)
+#define MAS0_ESEL(x) (((x) << 16) & 0x0FFF0000)
#define MAS0_NV(x) ((x) & 0x00000FFF)
#define MAS0_HES 0x00004000
#define MAS0_WQ_ALLWAYS 0x00000000
#define MAS1_VALID 0x80000000
#define MAS1_IPROT 0x40000000
-#define MAS1_TID(x) ((x << 16) & 0x3FFF0000)
+#define MAS1_TID(x) (((x) << 16) & 0x3FFF0000)
#define MAS1_IND 0x00002000
#define MAS1_TS 0x00001000
#define MAS1_TSIZE_MASK 0x00000f80
#define MAS1_TSIZE_SHIFT 7
-#define MAS1_TSIZE(x) ((x << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK)
+#define MAS1_TSIZE(x) (((x) << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK)
#define MAS2_EPN 0xFFFFF000
#define MAS2_X0 0x00000040
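The extra parentheses around the macro arguments matter whenever the argument is itself an expression; a small illustration (not part of the patch, values made up):

    #define OLD_MAS1_TID(x)	((x << 16) & 0x3FFF0000)
    #define NEW_MAS1_TID(x)	(((x) << 16) & 0x3FFF0000)

    /*
     * With the argument "1 | 2":
     *   OLD_MAS1_TID(1 | 2) expands to ((1 | 2 << 16) & 0x3FFF0000) = 0x00020000,
     *   because << binds tighter than |, while
     *   NEW_MAS1_TID(1 | 2) expands to (((1 | 2) << 16) & 0x3FFF0000) = 0x00030000,
     *   which is the intended TID of 3.
     */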
#ifdef CONFIG_FLATMEM
#define ARCH_PFN_OFFSET (MEMORY_START >> PAGE_SHIFT)
-#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < (ARCH_PFN_OFFSET + max_mapnr))
+#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
#endif
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#include <asm/mmu.h>
_GLOBAL(__setup_cpu_603)
- mflr r4
+ mflr r5
BEGIN_MMU_FTR_SECTION
li r10,0
mtspr SPRN_SPRG_603_LRU,r10 /* init SW LRU tracking */
bl __init_fpu_registers
END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE)
bl setup_common_caches
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_604)
- mflr r4
+ mflr r5
bl setup_common_caches
bl setup_604_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750cx)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
bl setup_750cx
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750fx)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
bl setup_750fx
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_7400)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_7400_workarounds
bl setup_common_caches
bl setup_750_7400_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_7410)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_7410_workarounds
bl setup_common_caches
bl setup_750_7400_hid0
li r3,0
mtspr SPRN_L2CR2,r3
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_745x)
- mflr r4
+ mflr r5
bl setup_common_caches
bl setup_745x_specifics
- mtlr r4
+ mtlr r5
blr
/* Enable caches for 603's, 604, 750 & 7400 */
cror 4*cr0+eq,4*cr0+eq,4*cr1+eq
cror 4*cr0+eq,4*cr0+eq,4*cr2+eq
bnelr
- lwz r6,CPU_SPEC_FEATURES(r5)
+ lwz r6,CPU_SPEC_FEATURES(r4)
li r7,CPU_FTR_CAN_NAP
andc r6,r6,r7
- stw r6,CPU_SPEC_FEATURES(r5)
+ stw r6,CPU_SPEC_FEATURES(r4)
blr
/* 750fx specific
andis. r11,r11,L3CR_L3E@h
beq 1f
END_FTR_SECTION_IFSET(CPU_FTR_L3CR)
- lwz r6,CPU_SPEC_FEATURES(r5)
+ lwz r6,CPU_SPEC_FEATURES(r4)
andi. r0,r6,CPU_FTR_L3_DISABLE_NAP
beq 1f
li r7,CPU_FTR_CAN_NAP
andc r6,r6,r7
- stw r6,CPU_SPEC_FEATURES(r5)
+ stw r6,CPU_SPEC_FEATURES(r4)
1:
mfspr r11,SPRN_HID0
* pointer on ppc64 and booke as we are running at 0 in real mode
* on ppc64 and reloc_offset is always 0 on booke.
*/
- if (s->cpu_setup) {
- s->cpu_setup(offset, s);
+ if (t->cpu_setup) {
+ t->cpu_setup(offset, t);
}
#endif /* CONFIG_PPC64 || CONFIG_BOOKE */
}
dbg("removing cpu %lu from node %d\n", cpu, node);
if (cpumask_test_cpu(cpu, node_to_cpumask_map[node])) {
- cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
+ cpumask_clear_cpu(cpu, node_to_cpumask_map[node]);
} else {
printk(KERN_ERR "WARNING: cpu %lu not found in node %d\n",
cpu, node);
}
#endif /* CONFIG_MEMORY_HOTPLUG */
-/* Vrtual Processor Home Node (VPHN) support */
+/* Virtual Processor Home Node (VPHN) support */
#ifdef CONFIG_PPC_SPLPAR
-#define VPHN_NR_CHANGE_CTRS (8)
-static u8 vphn_cpu_change_counts[NR_CPUS][VPHN_NR_CHANGE_CTRS];
+static u8 vphn_cpu_change_counts[NR_CPUS][MAX_DISTANCE_REF_POINTS];
static cpumask_t cpu_associativity_changes_mask;
static int vphn_enabled;
static void set_topology_timer(void);
*/
static void setup_cpu_associativity_change_counters(void)
{
- int cpu = 0;
+ int cpu;
+
+ /* The VPHN feature supports a maximum of 8 reference points */
+ BUILD_BUG_ON(MAX_DISTANCE_REF_POINTS > 8);
for_each_possible_cpu(cpu) {
- int i = 0;
+ int i;
u8 *counts = vphn_cpu_change_counts[cpu];
volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts;
- for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) {
+ for (i = 0; i < distance_ref_points_depth; i++)
counts[i] = hypervisor_counts[i];
- }
}
}
*/
static int update_cpu_associativity_changes_mask(void)
{
- int cpu = 0, nr_cpus = 0;
+ int cpu, nr_cpus = 0;
cpumask_t *changes = &cpu_associativity_changes_mask;
cpumask_clear(changes);
u8 *counts = vphn_cpu_change_counts[cpu];
volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts;
- for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) {
- if (hypervisor_counts[i] > counts[i]) {
+ for (i = 0; i < distance_ref_points_depth; i++) {
+ if (hypervisor_counts[i] != counts[i]) {
counts[i] = hypervisor_counts[i];
changed = 1;
}
return nr_cpus;
}
-/* 6 64-bit registers unpacked into 12 32-bit associativity values */
-#define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32))
+/*
+ * 6 64-bit registers unpacked into 12 32-bit associativity values. To form
+ * the complete property we have to add the length in the first cell.
+ */
+#define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32) + 1)
/*
* Convert the associativity domain numbers returned from the hypervisor
*/
static int vphn_unpack_associativity(const long *packed, unsigned int *unpacked)
{
- int i = 0;
- int nr_assoc_doms = 0;
+ int i, nr_assoc_doms = 0;
const u16 *field = (const u16*) packed;
#define VPHN_FIELD_UNUSED (0xffff)
#define VPHN_FIELD_MSB (0x8000)
#define VPHN_FIELD_MASK (~VPHN_FIELD_MSB)
- for (i = 0; i < VPHN_ASSOC_BUFSIZE; i++) {
+ for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
if (*field == VPHN_FIELD_UNUSED) {
/* All significant fields processed, and remaining
* fields contain the reserved value of all 1's.
*/
unpacked[i] = *((u32*)field);
field += 2;
- }
- else if (*field & VPHN_FIELD_MSB) {
+ } else if (*field & VPHN_FIELD_MSB) {
/* Data is in the lower 15 bits of this field */
unpacked[i] = *field & VPHN_FIELD_MASK;
field++;
nr_assoc_doms++;
- }
- else {
+ } else {
/* Data is in the lower 15 bits of this field
* concatenated with the next 16 bit field
*/
}
}
+ /* The first cell contains the length of the property */
+ unpacked[0] = nr_assoc_doms;
+
return nr_assoc_doms;
}
*/
static long hcall_vphn(unsigned long cpu, unsigned int *associativity)
{
- long rc = 0;
+ long rc;
long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
u64 flags = 1;
int hwcpu = get_hard_smp_processor_id(cpu);
static long vphn_get_associativity(unsigned long cpu,
unsigned int *associativity)
{
- long rc = 0;
+ long rc;
rc = hcall_vphn(cpu, associativity);
*/
int arch_update_cpu_topology(void)
{
- int cpu = 0, nid = 0, old_nid = 0;
+ int cpu, nid, old_nid;
unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0};
- struct sys_device *sysdev = NULL;
+ struct sys_device *sysdev;
for_each_cpu_mask(cpu, cpu_associativity_changes_mask) {
vphn_get_associativity(cpu, associativity);
{
int rc = 0;
- if (firmware_has_feature(FW_FEATURE_VPHN)) {
+ if (firmware_has_feature(FW_FEATURE_VPHN) &&
+ get_lppaca()->shared_proc) {
vphn_enabled = 1;
setup_cpu_associativity_change_counters();
init_timer_deferrable(&topology_timer);
/* NB: reg/unreg are called while guarded with the tracepoints_mutex */
extern long hcall_tracepoint_refcount;
+/*
+ * Since the tracing code might execute hcalls we need to guard against
+ * recursion. One example of this is a spinlock calling H_YIELD on
+ * shared processor partitions.
+ */
+static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
+
void hcall_tracepoint_regfunc(void)
{
hcall_tracepoint_refcount++;
void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
{
+ unsigned long flags;
+ unsigned int *depth;
+
+ local_irq_save(flags);
+
+ depth = &__get_cpu_var(hcall_trace_depth);
+
+ if (*depth)
+ goto out;
+
+ (*depth)++;
trace_hcall_entry(opcode, args);
+ (*depth)--;
+
+out:
+ local_irq_restore(flags);
}
void __trace_hcall_exit(long opcode, unsigned long retval,
unsigned long *retbuf)
{
+ unsigned long flags;
+ unsigned int *depth;
+
+ local_irq_save(flags);
+
+ depth = &__get_cpu_var(hcall_trace_depth);
+
+ if (*depth)
+ goto out;
+
+ (*depth)++;
trace_hcall_exit(opcode, retval, retbuf);
+ (*depth)--;
+
+out:
+ local_irq_restore(flags);
}
#endif
If unsure, say Y.
config CHSC_SCH
- def_tristate y
+ def_tristate m
prompt "Support for CHSC subchannels"
help
This driver allows usage of CHSC subchannels. A CHSC subchannel
#ifndef _S390_CACHEFLUSH_H
#define _S390_CACHEFLUSH_H
-/* Keep includes the same across arches. */
-#include <linux/mm.h>
-
/* Caches aren't brain-dead on the s390. */
-#define flush_cache_all() do { } while (0)
-#define flush_cache_mm(mm) do { } while (0)
-#define flush_cache_dup_mm(mm) do { } while (0)
-#define flush_cache_range(vma, start, end) do { } while (0)
-#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
-#define flush_dcache_page(page) do { } while (0)
-#define flush_dcache_mmap_lock(mapping) do { } while (0)
-#define flush_dcache_mmap_unlock(mapping) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
-#define flush_icache_page(vma,pg) do { } while (0)
-#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
-#define flush_cache_vmap(start, end) do { } while (0)
-#define flush_cache_vunmap(start, end) do { } while (0)
-
-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
+#include <asm-generic/cacheflush.h>
#ifdef CONFIG_DEBUG_PAGEALLOC
void kernel_map_pages(struct page *page, int numpages, int enable);
*/
#include <linux/mm.h>
+#include <linux/pagemap.h>
#include <linux/swap.h>
#include <asm/processor.h>
#include <asm/pgalloc.h>
unsigned long tmp1;
asm volatile(
+ " sacf 256\n"
" "AHI" %0,-1\n"
" jo 5f\n"
- " sacf 256\n"
" bras %3,3f\n"
"0:"AHI" %0,257\n"
"1: mvc 0(1,%1),0(%2)\n"
"3:"AHI" %0,-256\n"
" jnm 2b\n"
"4: ex %0,1b-0b(%3)\n"
- " sacf 0\n"
"5: "SLR" %0,%0\n"
- "6:\n"
+ "6: sacf 0\n"
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "+a" (from), "=a" (tmp1)
: : "cc", "memory");
unsigned long tmp1, tmp2;
asm volatile(
+ " sacf 256\n"
" "AHI" %0,-1\n"
" jo 5f\n"
- " sacf 256\n"
" bras %3,3f\n"
" xc 0(1,%1),0(%1)\n"
"0:"AHI" %0,257\n"
"3:"AHI" %0,-256\n"
" jnm 2b\n"
"4: ex %0,0(%3)\n"
- " sacf 0\n"
"5: "SLR" %0,%0\n"
- "6:\n"
+ "6: sacf 0\n"
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "=a" (tmp1), "=a" (tmp2)
: : "cc", "memory");
page->flags ^= bits;
if (page->flags & FRAG_MASK) {
/* Page now has some free pgtable fragments. */
- list_move(&page->lru, &mm->context.pgtable_list);
+ if (!list_empty(&page->lru))
+ list_move(&page->lru, &mm->context.pgtable_list);
page = NULL;
} else
/* All fragments of the 4K page have been freed. */
unsigned cpu = smp_processor_id();
if (likely(prev != next)) {
- /* stop flush ipis for the previous mm */
- cpumask_clear_cpu(cpu, mm_cpumask(prev));
#ifdef CONFIG_SMP
percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
percpu_write(cpu_tlbstate.active_mm, next);
/* Re-load page tables */
load_cr3(next->pgd);
+ /* stop flush ipis for the previous mm */
+ cpumask_clear_cpu(cpu, mm_cpumask(prev));
+
/*
* load the LDT, if the LDT is different:
*/
DECLARE_EARLY_PER_CPU(u16, x86_bios_cpu_apicid);
/* Static state in head.S used to set up a CPU */
-extern struct {
- void *sp;
- unsigned short ss;
-} stack_start;
+extern unsigned long stack_start; /* Initial stack pointer address */
struct smp_ops {
void (*smp_prepare_boot_cpu)(void);
#include <linux/cpumask.h>
#include <asm/segment.h>
#include <asm/desc.h>
-
-#ifdef CONFIG_X86_32
#include <asm/pgtable.h>
-#endif
+#include <asm/cacheflush.h>
#include "realmode/wakeup.h"
#include "sleep.h"
#else /* CONFIG_64BIT */
header->trampoline_segment = setup_trampoline() >> 4;
#ifdef CONFIG_SMP
- stack_start.sp = temp_stack + sizeof(temp_stack);
+ stack_start = (unsigned long)temp_stack + sizeof(temp_stack);
early_gdt_descr.address =
(unsigned long)get_cpu_gdt_table(smp_processor_id());
initial_gs = per_cpu_offset(smp_processor_id());
memblock_x86_reserve_range(mem, mem + WAKEUP_SIZE, "ACPI WAKEUP");
}
+int __init acpi_configure_wakeup_memory(void)
+{
+ if (acpi_realmode)
+ set_memory_x(acpi_realmode, WAKEUP_SIZE >> PAGE_SHIFT);
+
+ return 0;
+}
+arch_initcall(acpi_configure_wakeup_memory);
+
static int __init acpi_sleep_setup(char *str)
{
}
/*
- * MTRR initialization for all AP's
+ * Delayed MTRR initialization for all AP's
*/
void mtrr_aps_init(void)
{
if (!use_intel())
return;
+ /*
+ * Check if someone has requested the delay of AP MTRR initialization,
+ * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
+ * then we are done.
+ */
+ if (!mtrr_aps_delayed_init)
+ return;
+
set_mtrr(~0U, 0, 0, 0);
mtrr_aps_delayed_init = false;
}
* if an event is shared across the logical threads
* the user needs special permissions to be able to use it
*/
- if (p4_event_bind_map[v].shared) {
+ if (p4_ht_active() && p4_event_bind_map[v].shared) {
if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
return -EACCES;
}
event->hw.config = p4_set_ht_bit(event->hw.config);
if (event->attr.type == PERF_TYPE_RAW) {
-
+ struct p4_event_bind *bind;
+ unsigned int esel;
/*
* Clear bits we reserve to be managed by kernel itself
* and never allowed from a user space
* bits since we keep additional info here (for cache events and etc)
*/
event->hw.config |= event->attr.config;
+ bind = p4_config_get_bind(event->attr.config);
+ if (!bind) {
+ rc = -EINVAL;
+ goto out;
+ }
+ esel = P4_OPCODE_ESEL(bind->opcode);
+ event->hw.config |= p4_config_pack_cccr(P4_CCCR_ESEL(esel));
}
rc = x86_setup_perfctr(event);
*/
__HEAD
ENTRY(startup_32)
+ movl pa(stack_start),%ecx
+
/* test KEEP_SEGMENTS flag to see if the bootloader is asking
us to not reload segments */
testb $(1<<6), BP_loadflags(%esi)
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
+ movl %eax,%ss
2:
+ leal -__PAGE_OFFSET(%ecx),%esp
/*
* Clear BSS first so that there are no surprises...
* _brk_end is set up to point to the first "safe" location.
* Mappings are created both at virtual address 0 (identity mapping)
* and PAGE_OFFSET for up to _end.
- *
- * Note that the stack is not yet set up!
*/
#ifdef CONFIG_X86_PAE
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
+ movl pa(stack_start),%ecx
+ movl %eax,%ss
+ leal -__PAGE_OFFSET(%ecx),%esp
#endif /* CONFIG_SMP */
default_entry:
movl %eax,%cr0 /* ..and set paging (PG) bit */
ljmp $__BOOT_CS,$1f /* Clear prefetch and normalize %eip */
1:
- /* Set up the stack pointer */
- lss stack_start,%esp
+ /* Shift the stack pointer to a virtual address */
+ addl $__PAGE_OFFSET, %esp
/*
* Initialize eflags. Some BIOS's leave bits like NT set. This would
#ifdef CONFIG_SMP
cmpb $0, ready
- jz 1f /* Initial CPU cleans BSS */
- jmp checkCPUtype
-1:
+ jnz checkCPUtype
#endif /* CONFIG_SMP */
/*
cld # gcc2 wants the direction flag cleared at all times
pushl $0 # fake return address for unwinder
-#ifdef CONFIG_SMP
- movb ready, %cl
movb $1, ready
- cmpb $0,%cl # the first CPU calls start_kernel
- je 1f
- movl (stack_start), %esp
-1:
-#endif /* CONFIG_SMP */
jmp *(initial_code)
/*
#endif
.data
+.balign 4
ENTRY(stack_start)
.long init_thread_union+THREAD_SIZE
- .long __BOOT_DS
-
-ready: .byte 0
early_recursion_flag:
.long 0
+ready: .byte 0
+
int_msg:
.asciz "Unknown interrupt or fault at: %p %p %p\n"
* target processor state.
*/
startup_ipi_hook(phys_apicid, (unsigned long) start_secondary,
- (unsigned long)stack_start.sp);
+ stack_start);
/*
* Run STARTUP IPI loop.
#endif
early_gdt_descr.address = (unsigned long)get_cpu_gdt_table(cpu);
initial_code = (unsigned long)start_secondary;
- stack_start.sp = (void *) c_idle.idle->thread.sp;
+ stack_start = c_idle.idle->thread.sp;
/* start_ip had better be page-aligned! */
start_ip = setup_trampoline();
unsigned long pfn)
{
pgprot_t forbidden = __pgprot(0);
- pgprot_t required = __pgprot(0);
/*
* The BIOS area between 640k and 1Mb needs to be executable for
if (within(pfn, __pa((unsigned long)__start_rodata) >> PAGE_SHIFT,
__pa((unsigned long)__end_rodata) >> PAGE_SHIFT))
pgprot_val(forbidden) |= _PAGE_RW;
- /*
- * .data and .bss should always be writable.
- */
- if (within(address, (unsigned long)_sdata, (unsigned long)_edata) ||
- within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))
- pgprot_val(required) |= _PAGE_RW;
#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
/*
#endif
prot = __pgprot(pgprot_val(prot) & ~pgprot_val(forbidden));
- prot = __pgprot(pgprot_val(prot) | pgprot_val(required));
return prot;
}
* tree of blkg (instead of traversing through hash list all
* the time.
*/
- tg = tg_of_blkg(blkiocg_lookup_group(blkcg, key));
+
+ /*
+ * This is the common case when there are no blkio cgroups.
+ * Avoid lookup in this case
+ */
+ if (blkcg == &blkio_root_cgroup)
+ tg = &td->root_tg;
+ else
+ tg = tg_of_blkg(blkiocg_lookup_group(blkcg, key));
/* Fill in device details for root group */
if (tg && !tg->blkg.dev && bdi->dev && dev_name(bdi->dev)) {
}
static inline unsigned
-cfq_scaled_group_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+cfq_scaled_cfqq_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
unsigned slice = cfq_prio_to_slice(cfqd, cfqq);
if (cfqd->cfq_latency) {
static inline void
cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
- unsigned slice = cfq_scaled_group_slice(cfqd, cfqq);
+ unsigned slice = cfq_scaled_cfqq_slice(cfqd, cfqq);
cfqq->slice_start = jiffies;
cfqq->slice_end = jiffies + slice;
*/
if (timed_out) {
if (cfq_cfqq_slice_new(cfqq))
- cfqq->slice_resid = cfq_scaled_group_slice(cfqd, cfqq);
+ cfqq->slice_resid = cfq_scaled_cfqq_slice(cfqd, cfqq);
else
cfqq->slice_resid = cfqq->slice_end - jiffies;
cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
{
struct cfq_io_context *cic = cfqd->active_cic;
+ /* If the queue already has requests, don't wait */
+ if (!RB_EMPTY_ROOT(&cfqq->sort_list))
+ return false;
+
/* If there are other queues in the group, don't wait */
if (cfqq->cfqg->nr_cfqq > 1)
return false;
obj-$(CONFIG_BLK_DEV_DRBD) += drbd/
obj-$(CONFIG_BLK_DEV_RBD) += rbd.o
-swim_mod-objs := swim.o swim_asm.o
+swim_mod-y := swim.o swim_asm.o
#
obj-$(CONFIG_ATA_OVER_ETH) += aoe.o
-aoe-objs := aoeblk.o aoechr.o aoecmd.o aoedev.o aoemain.o aoenet.o
+aoe-y := aoeblk.o aoechr.o aoecmd.o aoedev.o aoemain.o aoenet.o
sector_t total_size;
InquiryData_struct *inq_buff = NULL;
- for (logvol = 0; logvol < CISS_MAX_LUN; logvol++) {
+ for (logvol = 0; logvol <= h->highest_lun; logvol++) {
if (!h->drv[logvol])
continue;
if (memcmp(h->drv[logvol]->LunID, drv->LunID,
static void loop_free(struct loop_device *lo)
{
+ if (!lo->lo_queue->queue_lock)
+ lo->lo_queue->queue_lock = &lo->lo_queue->__queue_lock;
+
blk_cleanup_queue(lo->lo_queue);
put_disk(lo->lo_disk);
list_del(&lo->lo_list);
}
ENSURE(drive_status, CDC_DRIVE_STATUS );
- ENSURE(media_changed, CDC_MEDIA_CHANGED);
+ if (cdo->check_events == NULL && cdo->media_changed == NULL)
+ *change_capability = ~(CDC_MEDIA_CHANGED | CDC_SELECT_DISC);
ENSURE(tray_move, CDC_CLOSE_TRAY | CDC_OPEN_TRAY);
ENSURE(lock_door, CDC_LOCK);
ENSURE(select_speed, CDC_SELECT_SPEED);
config AGP_AMD
tristate "AMD Irongate, 761, and 762 chipset support"
- depends on AGP && (X86_32 || ALPHA)
+ depends on AGP && X86_32
help
This option gives you AGP support for the GLX component of
X on AMD Irongate, 761, and 762 chipsets.
if (page_map->real == NULL)
return -ENOMEM;
-#ifndef CONFIG_X86
- SetPageReserved(virt_to_page(page_map->real));
- global_cache_flush();
- page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real),
- PAGE_SIZE);
- if (page_map->remapped == NULL) {
- ClearPageReserved(virt_to_page(page_map->real));
- free_page((unsigned long) page_map->real);
- page_map->real = NULL;
- return -ENOMEM;
- }
- global_cache_flush();
-#else
set_memory_uc((unsigned long)page_map->real, 1);
page_map->remapped = page_map->real;
-#endif
for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
writel(agp_bridge->scratch_page, page_map->remapped+i);
static void amd_free_page_map(struct amd_page_map *page_map)
{
-#ifndef CONFIG_X86
- iounmap(page_map->remapped);
- ClearPageReserved(virt_to_page(page_map->real));
-#else
set_memory_wb((unsigned long)page_map->real, 1);
-#endif
free_page((unsigned long) page_map->real);
}
dev_info(&pdev->dev, "Intel %s Chipset\n", intel_agp_chipsets[i].name);
- /*
- * If the device has not been properly setup, the following will catch
- * the problem and should stop the system from crashing.
- * 20030610 - hamish@zot.org
- */
- if (pci_enable_device(pdev)) {
- dev_err(&pdev->dev, "can't enable PCI device\n");
- agp_put_bridge(bridge);
- return -ENODEV;
- }
-
/*
* The following fixes the case where the BIOS has "forgotten" to
* provide an address range for the GART.
* 20030610 - hamish@zot.org
+ * This happens before pci_enable_device() intentionally;
+ * calling pci_enable_device() before assigning the resource
+ * will result in the GART being disabled on machines with such
+ * BIOSs (the GART ends up with a BAR starting at 0, which
+ * conflicts a lot of other devices).
*/
r = &pdev->resource[0];
if (!r->start && r->end) {
}
}
+ /*
+ * If the device has not been properly setup, the following will catch
+ * the problem and should stop the system from crashing.
+ * 20030610 - hamish@zot.org
+ */
+ if (pci_enable_device(pdev)) {
+ dev_err(&pdev->dev, "can't enable PCI device\n");
+ agp_put_bridge(bridge);
+ return -ENODEV;
+ }
+
/* Fill in the mode register */
if (cap_ptr) {
pci_read_config_dword(pdev,
mutex_unlock(&dev->mode_config.mutex);
return ret;
}
+
+void drm_mode_config_reset(struct drm_device *dev)
+{
+ struct drm_crtc *crtc;
+ struct drm_encoder *encoder;
+ struct drm_connector *connector;
+
+ list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
+ if (crtc->funcs->reset)
+ crtc->funcs->reset(crtc);
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head)
+ if (encoder->funcs->reset)
+ encoder->funcs->reset(encoder);
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+ if (connector->funcs->reset)
+ connector->funcs->reset(connector);
+}
+EXPORT_SYMBOL(drm_mode_config_reset);
struct drm_encoder *encoder;
bool ret = true;
- adjusted_mode = drm_mode_duplicate(dev, mode);
-
crtc->enabled = drm_helper_crtc_in_use(crtc);
-
if (!crtc->enabled)
return true;
+ adjusted_mode = drm_mode_duplicate(dev, mode);
+
saved_hwmode = crtc->hwmode;
saved_mode = crtc->mode;
saved_x = crtc->x;
*/
drm_calc_timestamping_constants(crtc);
- /* XXX free adjustedmode */
- drm_mode_destroy(dev, adjusted_mode);
/* FIXME: add subpixel order */
done:
+ drm_mode_destroy(dev, adjusted_mode);
if (!ret) {
crtc->hwmode = saved_hwmode;
crtc->mode = saved_mode;
crtc_funcs = set->crtc->helper_private;
+ if (!set->mode)
+ set->fb = NULL;
+
if (set->fb) {
DRM_DEBUG_KMS("[CRTC:%d] [FB:%d] #connectors=%d (x y) (%i %i)\n",
set->crtc->base.id, set->fb->base.id,
(int)set->num_connectors, set->x, set->y);
} else {
- DRM_DEBUG_KMS("[CRTC:%d] [NOFB] #connectors=%d (x y) (%i %i)\n",
- set->crtc->base.id, (int)set->num_connectors,
- set->x, set->y);
+ DRM_DEBUG_KMS("[CRTC:%d] [NOFB]\n", set->crtc->base.id);
+ set->mode = NULL;
+ set->num_connectors = 0;
}
dev = set->crtc->dev;
mode_changed = true;
if (mode_changed) {
- set->crtc->enabled = (set->mode != NULL);
- if (set->mode != NULL) {
+ set->crtc->enabled = drm_helper_crtc_in_use(set->crtc);
+ if (set->crtc->enabled) {
DRM_DEBUG_KMS("attempting to set mode from"
" userspace\n");
drm_mode_debug_printmodeline(set->mode);
ret = -EINVAL;
goto fail;
}
+ DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
+ for (i = 0; i < set->num_connectors; i++) {
+ DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
+ drm_get_connector_name(set->connectors[i]));
+ set->connectors[i]->dpms = DRM_MODE_DPMS_ON;
+ }
}
drm_helper_disable_unused_functions(dev);
} else if (fb_changed) {
goto fail;
}
}
- DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
- for (i = 0; i < set->num_connectors; i++) {
- DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
- drm_get_connector_name(set->connectors[i]));
- set->connectors[i]->dpms = DRM_MODE_DPMS_ON;
- }
kfree(save_connectors);
kfree(save_encoders);
* Drivers should call this routine in their vblank interrupt handlers to
* update the vblank counter and send any signals that may be pending.
*/
-void drm_handle_vblank(struct drm_device *dev, int crtc)
+bool drm_handle_vblank(struct drm_device *dev, int crtc)
{
u32 vblcount;
s64 diff_ns;
unsigned long irqflags;
if (!dev->num_crtcs)
- return;
+ return false;
/* Need timestamp lock to prevent concurrent execution with
* vblank enable/disable, as this would cause inconsistent
/* Vblank irq handling disabled. Nothing to do. */
if (!dev->vblank_enabled[crtc]) {
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
- return;
+ return false;
}
/* Fetch corresponding timestamp for this vblank interval from
drm_handle_vblank_events(dev, crtc);
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
+ return true;
}
EXPORT_SYMBOL(drm_handle_vblank);
error = i915_gem_init_ringbuffer(dev);
mutex_unlock(&dev->struct_mutex);
+ drm_mode_config_reset(dev);
drm_irq_install(dev);
/* Resume the modeset for every activated CRTC */
mutex_unlock(&dev->struct_mutex);
drm_irq_uninstall(dev);
+ drm_mode_config_reset(dev);
drm_irq_install(dev);
mutex_lock(&dev->struct_mutex);
}
static int __devinit
i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
+ /* Only bind to function 0 of the device. Early generations
+ * used function 1 as a placeholder for multi-head. This causes
+ * us confusion instead, especially on the systems where both
+ * functions have the same PCI-ID!
+ */
+ if (PCI_FUNC(pdev->devfn))
+ return -ENODEV;
+
return drm_get_pci_dev(pdev, ent, &driver);
}
intel_finish_page_flip_plane(dev, 1);
}
- if (pipea_stats & vblank_status) {
+ if (pipea_stats & vblank_status &&
+ drm_handle_vblank(dev, 0)) {
vblank++;
- drm_handle_vblank(dev, 0);
if (!dev_priv->flip_pending_is_done) {
i915_pageflip_stall_check(dev, 0);
intel_finish_page_flip(dev, 0);
}
}
- if (pipeb_stats & vblank_status) {
+ if (pipeb_stats & vblank_status &&
+ drm_handle_vblank(dev, 1)) {
vblank++;
- drm_handle_vblank(dev, 1);
if (!dev_priv->flip_pending_is_done) {
i915_pageflip_stall_check(dev, 1);
intel_finish_page_flip(dev, 1);
return 0;
}
+static void intel_crt_reset(struct drm_connector *connector)
+{
+ struct drm_device *dev = connector->dev;
+ struct intel_crt *crt = intel_attached_crt(connector);
+
+ if (HAS_PCH_SPLIT(dev))
+ crt->force_hotplug_required = 1;
+}
+
/*
* Routines for controlling stuff on the analog port
*/
};
static const struct drm_connector_funcs intel_crt_connector_funcs = {
+ .reset = intel_crt_reset,
.dpms = drm_helper_connector_dpms,
.detect = intel_crt_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
return ret;
}
+static void intel_crtc_reset(struct drm_crtc *crtc)
+{
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ /* Reset flags back to the 'unknown' status so that they
+ * will be correctly set on the initial modeset.
+ */
+ intel_crtc->cursor_addr = 0;
+ intel_crtc->dpms_mode = -1;
+ intel_crtc->active = true; /* force the pipe off on setup_init_config */
+}
+
static struct drm_crtc_helper_funcs intel_helper_funcs = {
.dpms = intel_crtc_dpms,
.mode_fixup = intel_crtc_mode_fixup,
};
static const struct drm_crtc_funcs intel_crtc_funcs = {
+ .reset = intel_crtc_reset,
.cursor_set = intel_crtc_cursor_set,
.cursor_move = intel_crtc_cursor_move,
.gamma_set = intel_crtc_gamma_set,
dev_priv->plane_to_crtc_mapping[intel_crtc->plane] = &intel_crtc->base;
dev_priv->pipe_to_crtc_mapping[intel_crtc->pipe] = &intel_crtc->base;
- intel_crtc->cursor_addr = 0;
- intel_crtc->dpms_mode = -1;
- intel_crtc->active = true; /* force the pipe off on setup_init_config */
+ intel_crtc_reset(&intel_crtc->base);
if (HAS_PCH_SPLIT(dev)) {
intel_helper_funcs.prepare = ironlake_crtc_prepare;
return false;
}
- i = 3;
- while (status == SDVO_CMD_STATUS_PENDING && i--) {
- if (!intel_sdvo_read_byte(intel_sdvo,
- SDVO_I2C_CMD_STATUS,
- &status))
- return false;
- }
- if (status != SDVO_CMD_STATUS_SUCCESS) {
- DRM_DEBUG_KMS("command returns response %s [%d]\n",
- status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP ? cmd_status_names[status] : "???",
- status);
- return false;
- }
-
return true;
}
u8 status;
int i;
+ DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(intel_sdvo));
+
/*
* The documentation states that all commands will be
* processed within 15µs, and that we need only poll
*
* Check 5 times in case the hardware failed to read the docs.
*/
- do {
+ if (!intel_sdvo_read_byte(intel_sdvo,
+ SDVO_I2C_CMD_STATUS,
+ &status))
+ goto log_fail;
+
+ while (status == SDVO_CMD_STATUS_PENDING && retry--) {
+ udelay(15);
if (!intel_sdvo_read_byte(intel_sdvo,
SDVO_I2C_CMD_STATUS,
&status))
- return false;
- } while (status == SDVO_CMD_STATUS_PENDING && --retry);
+ goto log_fail;
+ }
- DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(intel_sdvo));
if (status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP)
DRM_LOG_KMS("(%s)", cmd_status_names[status]);
else
return true;
log_fail:
- DRM_LOG_KMS("\n");
+ DRM_LOG_KMS("... failed\n");
return false;
}
static bool intel_sdvo_set_control_bus_switch(struct intel_sdvo *intel_sdvo,
u8 ddc_bus)
{
+ /* This must be the immediately preceding write before the i2c xfer */
return intel_sdvo_write_cmd(intel_sdvo,
SDVO_CMD_SET_CONTROL_BUS_SWITCH,
&ddc_bus, 1);
static bool intel_sdvo_set_value(struct intel_sdvo *intel_sdvo, u8 cmd, const void *data, int len)
{
- return intel_sdvo_write_cmd(intel_sdvo, cmd, data, len);
+ if (!intel_sdvo_write_cmd(intel_sdvo, cmd, data, len))
+ return false;
+
+ return intel_sdvo_read_response(intel_sdvo, NULL, 0);
}
static bool
intel_dip_infoframe_csum(&avi_if);
- if (!intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_INDEX,
+ if (!intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_INDEX,
set_buf_index, 2))
return false;
for (i = 0; i < sizeof(avi_if); i += 8) {
- if (!intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_DATA,
+ if (!intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_DATA,
data, 8))
return false;
data++;
}
- return intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_TXRATE,
+ return intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_TXRATE,
&tx_rate, 1);
}
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
if (pm->hwmon) {
- sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
+ sysfs_remove_group(&dev->pdev->dev.kobj, &hwmon_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
#endif
nv50_evo_channel_del(&dev_priv->evo);
return ret;
}
- } else
- if (dev_priv->chipset != 0x50) {
+ } else {
ret = nv50_evo_dmaobj_new(evo, 0x3d, NvEvoFB16, 0x70, 0x19,
0, 0xffffffff, 0x00010000);
if (ret) {
dp_clock = dig_connector->dp_clock;
}
}
+/* this might work properly with the new pll algo */
#if 0 /* doesn't work properly on some laptops */
/* use recommended ref_div for ss */
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
adjusted_clock = mode->clock * 2;
if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT))
pll->flags |= RADEON_PLL_PREFER_CLOSEST_LOWER;
+ /* rv515 needs more testing with this option */
+ if (rdev->family != CHIP_RV515) {
+ if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
+ pll->flags |= RADEON_PLL_IS_LCD;
+ }
} else {
if (encoder->encoder_type != DRM_MODE_ENCODER_DAC)
pll->flags |= RADEON_PLL_NO_ODD_POST_DIV;
/* adjust pixel clock as needed */
adjusted_clock = atombios_adjust_pll(crtc, mode, pll, ss_enabled, &ss);
- radeon_compute_pll(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
- &ref_div, &post_div);
+ /* rv515 seems happier with the old algo */
+ if (rdev->family == CHIP_RV515)
+ radeon_compute_pll_legacy(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
+ &ref_div, &post_div);
+ else if (ASIC_IS_AVIVO(rdev))
+ radeon_compute_pll_avivo(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
+ &ref_div, &post_div);
+ else
+ radeon_compute_pll_legacy(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
+ &ref_div, &post_div);
atombios_crtc_program_ss(crtc, ATOM_DISABLE, radeon_crtc->pll_id, &ss);
}
/* get temperature in millidegrees */
-u32 evergreen_get_temp(struct radeon_device *rdev)
+int evergreen_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_MULT_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
u32 actual_temp = 0;
- if ((temp >> 10) & 1)
- actual_temp = 0;
- else if ((temp >> 9) & 1)
+ if (temp & 0x400)
+ actual_temp = -256;
+ else if (temp & 0x200)
actual_temp = 255;
- else
- actual_temp = (temp >> 1) & 0xff;
+ else if (temp & 0x100) {
+ actual_temp = temp & 0x1ff;
+ actual_temp |= ~0x1ff;
+ } else
+ actual_temp = temp & 0xff;
- return actual_temp * 1000;
+ return (actual_temp * 1000) / 2;
}
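
The evergreen decode above treats the thermal status field as a signed half-degree value with clamping at both ends. A minimal standalone sketch of that interpretation (the sample register value below is hypothetical, not read from hardware):

/*
 * Hedged sketch: decode an ASIC_T-style thermal field into millidegrees,
 * mirroring the clamping/sign handling assumed above.
 */
#include <stdio.h>
#include <stdint.h>

static int decode_asic_temp(uint32_t temp)
{
	int actual_temp;

	if (temp & 0x400)			/* deep negative: clamp */
		actual_temp = -256;
	else if (temp & 0x200)			/* overflow: clamp to max */
		actual_temp = 255;
	else if (temp & 0x100)			/* negative: sign-extend 9-bit value */
		actual_temp = (int)(temp & 0x1ff) | ~0x1ff;
	else					/* positive */
		actual_temp = temp & 0xff;

	return (actual_temp * 1000) / 2;	/* half-degree units -> millidegrees */
}

int main(void)
{
	printf("%d\n", decode_asic_temp(0x7e));	/* hypothetical reading -> 63000 mC */
	return 0;
}
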
-u32 sumo_get_temp(struct radeon_device *rdev)
+int sumo_get_temp(struct radeon_device *rdev)
{
u32 temp = RREG32(CG_THERMAL_STATUS) & 0xff;
- u32 actual_temp = (temp >> 1) & 0xff;
+ int actual_temp = temp - 49;
return actual_temp * 1000;
}
/*
* CP.
*/
+void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
+{
+ /* set to DX10/11 mode */
+ radeon_ring_write(rdev, PACKET3(PACKET3_MODE_CONTROL, 0));
+ radeon_ring_write(rdev, 1);
+ /* FIXME: implement */
+ radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+ radeon_ring_write(rdev, ib->gpu_addr & 0xFFFFFFFC);
+ radeon_ring_write(rdev, upper_32_bits(ib->gpu_addr) & 0xFF);
+ radeon_ring_write(rdev, ib->length_dw);
+}
+
static int evergreen_cp_load_microcode(struct radeon_device *rdev)
{
cp_me = 0xff;
WREG32(CP_ME_CNTL, cp_me);
- r = radeon_ring_lock(rdev, evergreen_default_size + 15);
+ r = radeon_ring_lock(rdev, evergreen_default_size + 19);
if (r) {
DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
return r;
radeon_ring_write(rdev, 0xffffffff);
radeon_ring_write(rdev, 0xffffffff);
+ radeon_ring_write(rdev, 0xc0026900);
+ radeon_ring_write(rdev, 0x00000316);
+ radeon_ring_write(rdev, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
+ radeon_ring_write(rdev, 0x00000010); /* */
+
radeon_ring_unlock_commit(rdev);
return 0;
WREG32(VGT_CACHE_INVALIDATION, vgt_cache_invalidation);
WREG32(VGT_GS_VERTEX_REUSE, 16);
+ WREG32(PA_SU_LINE_STIPPLE_VALUE, 0);
WREG32(PA_SC_LINE_STIPPLE_STATE, 0);
WREG32(VGT_VERTEX_REUSE_BLOCK_CNTL, 14);
}
-/* emits 34 */
+/* emits 36 */
static void
set_default_state(struct radeon_device *rdev)
{
radeon_ring_write(rdev, 0x00000000);
radeon_ring_write(rdev, 0x00000000);
+ /* set to DX10/11 mode */
+ radeon_ring_write(rdev, PACKET3(PACKET3_MODE_CONTROL, 0));
+ radeon_ring_write(rdev, 1);
+
/* emit an IB pointing at default state */
dwords = ALIGN(rdev->r600_blit.state_len, 0x10);
gpu_addr = rdev->r600_blit.shader_gpu_addr + rdev->r600_blit.state_offset;
/* calculate number of loops correctly */
ring_size = num_loops * dwords_per_loop;
/* set default + shaders */
- ring_size += 50; /* shaders + def state */
+ ring_size += 52; /* shaders + def state */
ring_size += 10; /* fence emit for VB IB */
ring_size += 5; /* done copy */
ring_size += 10; /* fence emit for done copy */
if (r)
return r;
- set_default_state(rdev); /* 34 */
+ set_default_state(rdev); /* 36 */
set_shaders(rdev); /* 16 */
return 0;
}
#define FORCE_EOV_MAX_CLK_CNT(x) ((x) << 0)
#define FORCE_EOV_MAX_REZ_CNT(x) ((x) << 16)
#define PA_SC_LINE_STIPPLE 0x28A0C
+#define PA_SU_LINE_STIPPLE_VALUE 0x8A60
#define PA_SC_LINE_STIPPLE_STATE 0x8B10
#define SCRATCH_REG0 0x8500
#define PACKET3_DISPATCH_DIRECT 0x15
#define PACKET3_DISPATCH_INDIRECT 0x16
#define PACKET3_INDIRECT_BUFFER_END 0x17
+#define PACKET3_MODE_CONTROL 0x18
#define PACKET3_SET_PREDICATION 0x20
#define PACKET3_REG_RMW 0x21
#define PACKET3_COND_EXEC 0x22
static void r600_pcie_gen2_enable(struct radeon_device *rdev);
/* get temperature in millidegrees */
-u32 rv6xx_get_temp(struct radeon_device *rdev)
+int rv6xx_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
+ int actual_temp = temp & 0xff;
- return temp * 1000;
+ if (temp & 0x100)
+ actual_temp -= 256;
+
+ return actual_temp * 1000;
}
void r600_pm_get_dynpm_state(struct radeon_device *rdev)
void radeon_atombios_get_power_modes(struct radeon_device *rdev);
void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level);
void rs690_pm_info(struct radeon_device *rdev);
-extern u32 rv6xx_get_temp(struct radeon_device *rdev);
-extern u32 rv770_get_temp(struct radeon_device *rdev);
-extern u32 evergreen_get_temp(struct radeon_device *rdev);
-extern u32 sumo_get_temp(struct radeon_device *rdev);
+extern int rv6xx_get_temp(struct radeon_device *rdev);
+extern int rv770_get_temp(struct radeon_device *rdev);
+extern int evergreen_get_temp(struct radeon_device *rdev);
+extern int sumo_get_temp(struct radeon_device *rdev);
/*
* Fences.
fixed20_12 sclk;
fixed20_12 mclk;
fixed20_12 needed_bandwidth;
- /* XXX: use a define for num power modes */
- struct radeon_power_state power_state[8];
+ struct radeon_power_state *power_state;
/* number of valid power states */
int num_power_states;
int current_power_state_index;
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
bool evergreen_gpu_is_lockup(struct radeon_device *rdev);
int evergreen_asic_reset(struct radeon_device *rdev);
void evergreen_bandwidth_update(struct radeon_device *rdev);
+void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
int evergreen_copy_blit(struct radeon_device *rdev,
uint64_t src_offset, uint64_t dst_offset,
unsigned num_pages, struct radeon_fence *fence);
p1pll->pll_out_min = 64800;
else
p1pll->pll_out_min = 20000;
- } else if (p1pll->pll_out_min > 64800) {
- /* Limiting the pll output range is a good thing generally as
- * it limits the number of possible pll combinations for a given
- * frequency presumably to the ones that work best on each card.
- * However, certain duallink DVI monitors seem to like
- * pll combinations that would be limited by this at least on
- * pre-DCE 3.0 r6xx hardware. This might need to be adjusted per
- * family.
- */
- p1pll->pll_out_min = 64800;
}
p1pll->pll_in_min =
num_modes = power_info->info.ucNumOfPowerModeEntries;
if (num_modes > ATOM_MAX_NUMBEROF_POWER_BLOCK)
num_modes = ATOM_MAX_NUMBEROF_POWER_BLOCK;
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) * num_modes, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
/* last mode is usually default, array is low to high */
for (i = 0; i < num_modes; i++) {
rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
radeon_atombios_add_pplib_thermal_controller(rdev, &power_info->pplib.sThermalController);
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) *
+ power_info->pplib.ucNumStates, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
/* first mode is usually default, followed by low to high */
for (i = 0; i < power_info->pplib.ucNumStates; i++) {
mode_index = 0;
non_clock_info_array = (struct NonClockInfoArray *)
(mode_info->atom_context->bios + data_offset +
power_info->pplib.usNonClockInfoArrayOffset);
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) *
+ state_array->ucNumEntries, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
for (i = 0; i < state_array->ucNumEntries; i++) {
mode_index = 0;
power_state = (union pplib_power_state *)&state_array->states[i];
break;
}
} else {
- /* add the default mode */
- rdev->pm.power_state[state_index].type =
- POWER_STATE_TYPE_DEFAULT;
- rdev->pm.power_state[state_index].num_clock_modes = 1;
- rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk;
- rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk;
- rdev->pm.power_state[state_index].default_clock_mode =
- &rdev->pm.power_state[state_index].clock_info[0];
- rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
- rdev->pm.power_state[state_index].pcie_lanes = 16;
- rdev->pm.default_power_state_index = state_index;
- rdev->pm.power_state[state_index].flags = 0;
- state_index++;
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state), GFP_KERNEL);
+ if (rdev->pm.power_state) {
+ /* add the default mode */
+ rdev->pm.power_state[state_index].type =
+ POWER_STATE_TYPE_DEFAULT;
+ rdev->pm.power_state[state_index].num_clock_modes = 1;
+ rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk;
+ rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk;
+ rdev->pm.power_state[state_index].default_clock_mode =
+ &rdev->pm.power_state[state_index].clock_info[0];
+ rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
+ rdev->pm.power_state[state_index].pcie_lanes = 16;
+ rdev->pm.default_power_state_index = state_index;
+ rdev->pm.power_state[state_index].flags = 0;
+ state_index++;
+ }
}
rdev->pm.num_power_states = state_index;
bios_2_scratch &= ~ATOM_S2_VRI_BRIGHT_ENABLE;
/* tell the bios not to handle mode switching */
- bios_6_scratch |= (ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH | ATOM_S6_ACC_MODE);
+ bios_6_scratch |= ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH;
if (rdev->family >= CHIP_R600) {
WREG32(R600_BIOS_2_SCRATCH, bios_2_scratch);
else
bios_6_scratch = RREG32(RADEON_BIOS_6_SCRATCH);
- if (lock)
+ if (lock) {
bios_6_scratch |= ATOM_S6_CRITICAL_STATE;
- else
+ bios_6_scratch &= ~ATOM_S6_ACC_MODE;
+ } else {
bios_6_scratch &= ~ATOM_S6_CRITICAL_STATE;
+ bios_6_scratch |= ATOM_S6_ACC_MODE;
+ }
if (rdev->family >= CHIP_R600)
WREG32(R600_BIOS_6_SCRATCH, bios_6_scratch);
rdev->pm.default_power_state_index = -1;
+ /* allocate 2 power states */
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) * 2, GFP_KERNEL);
+ if (!rdev->pm.power_state) {
+ rdev->pm.default_power_state_index = state_index;
+ rdev->pm.num_power_states = 0;
+
+ rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
+ rdev->pm.current_clock_mode_index = 0;
+ return;
+ }
+
if (rdev->flags & RADEON_IS_MOBILITY) {
offset = combios_get_table_offset(dev, COMBIOS_POWERPLAY_INFO_TABLE);
if (offset) {
return ret;
}
+/* avivo */
+static void avivo_get_fb_div(struct radeon_pll *pll,
+ u32 target_clock,
+ u32 post_div,
+ u32 ref_div,
+ u32 *fb_div,
+ u32 *frac_fb_div)
+{
+ u32 tmp = post_div * ref_div;
+
+ tmp *= target_clock;
+ *fb_div = tmp / pll->reference_freq;
+ *frac_fb_div = tmp % pll->reference_freq;
+}
+
+static u32 avivo_get_post_div(struct radeon_pll *pll,
+ u32 target_clock)
+{
+ u32 vco, post_div, tmp;
+
+ if (pll->flags & RADEON_PLL_USE_POST_DIV)
+ return pll->post_div;
+
+ if (pll->flags & RADEON_PLL_PREFER_MINM_OVER_MAXP) {
+ if (pll->flags & RADEON_PLL_IS_LCD)
+ vco = pll->lcd_pll_out_min;
+ else
+ vco = pll->pll_out_min;
+ } else {
+ if (pll->flags & RADEON_PLL_IS_LCD)
+ vco = pll->lcd_pll_out_max;
+ else
+ vco = pll->pll_out_max;
+ }
+
+ post_div = vco / target_clock;
+ tmp = vco % target_clock;
+
+ if (pll->flags & RADEON_PLL_PREFER_MINM_OVER_MAXP) {
+ if (tmp)
+ post_div++;
+ } else {
+ if (!tmp)
+ post_div--;
+ }
+
+ return post_div;
+}
+
+#define MAX_TOLERANCE 10
+
+void radeon_compute_pll_avivo(struct radeon_pll *pll,
+ u32 freq,
+ u32 *dot_clock_p,
+ u32 *fb_div_p,
+ u32 *frac_fb_div_p,
+ u32 *ref_div_p,
+ u32 *post_div_p)
+{
+ u32 target_clock = freq / 10;
+ u32 post_div = avivo_get_post_div(pll, target_clock);
+ u32 ref_div = pll->min_ref_div;
+ u32 fb_div = 0, frac_fb_div = 0, tmp;
+
+ if (pll->flags & RADEON_PLL_USE_REF_DIV)
+ ref_div = pll->reference_div;
+
+ if (pll->flags & RADEON_PLL_USE_FRAC_FB_DIV) {
+ avivo_get_fb_div(pll, target_clock, post_div, ref_div, &fb_div, &frac_fb_div);
+ frac_fb_div = (100 * frac_fb_div) / pll->reference_freq;
+ if (frac_fb_div >= 5) {
+ frac_fb_div -= 5;
+ frac_fb_div = frac_fb_div / 10;
+ frac_fb_div++;
+ }
+ if (frac_fb_div >= 10) {
+ fb_div++;
+ frac_fb_div = 0;
+ }
+ } else {
+ while (ref_div <= pll->max_ref_div) {
+ avivo_get_fb_div(pll, target_clock, post_div, ref_div,
+ &fb_div, &frac_fb_div);
+ if (frac_fb_div >= (pll->reference_freq / 2))
+ fb_div++;
+ frac_fb_div = 0;
+ tmp = (pll->reference_freq * fb_div) / (post_div * ref_div);
+ tmp = (tmp * 10000) / target_clock;
+
+ if (tmp > (10000 + MAX_TOLERANCE))
+ ref_div++;
+ else if (tmp >= (10000 - MAX_TOLERANCE))
+ break;
+ else
+ ref_div++;
+ }
+ }
+
+ *dot_clock_p = ((pll->reference_freq * fb_div * 10) + (pll->reference_freq * frac_fb_div)) /
+ (ref_div * post_div * 10);
+ *fb_div_p = fb_div;
+ *frac_fb_div_p = frac_fb_div;
+ *ref_div_p = ref_div;
+ *post_div_p = post_div;
+ DRM_DEBUG_KMS("%d, pll dividers - fb: %d.%d ref: %d, post %d\n",
+ *dot_clock_p, fb_div, frac_fb_div, ref_div, post_div);
+}
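
To make the new avivo divider arithmetic concrete, a small standalone sketch with hypothetical numbers (treating reference_freq and the clocks as sharing the same 10 kHz units is an assumption, not something stated in the hunk):

/*
 * Hedged sketch of the feedback-divider arithmetic used above, with
 * hypothetical values: reference_freq = 2700, target_clock = 16200.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t reference_freq = 2700;		/* assumed 10 kHz units */
	uint32_t target_clock = 16200;		/* 162.00 MHz */
	uint32_t post_div = 4, ref_div = 1;

	uint32_t tmp = post_div * ref_div * target_clock;
	uint32_t fb_div = tmp / reference_freq;		/* 24 */
	uint32_t frac_fb_div = tmp % reference_freq;	/* 0 */

	uint32_t dot_clock = (reference_freq * fb_div * 10 +
			      reference_freq * frac_fb_div) /
			     (ref_div * post_div * 10);	/* 16200 again */

	printf("fb %u.%u ref %u post %u -> dot clock %u\n",
	       fb_div, frac_fb_div, ref_div, post_div, dot_clock);
	return 0;
}

With these inputs the computed dot clock lands back exactly on the requested 16200, i.e. the divider choice reproduces the target frequency.
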
+
+/* pre-avivo */
static inline uint32_t radeon_div(uint64_t n, uint32_t d)
{
uint64_t mod;
return n;
}
-void radeon_compute_pll(struct radeon_pll *pll,
- uint64_t freq,
- uint32_t *dot_clock_p,
- uint32_t *fb_div_p,
- uint32_t *frac_fb_div_p,
- uint32_t *ref_div_p,
- uint32_t *post_div_p)
+void radeon_compute_pll_legacy(struct radeon_pll *pll,
+ uint64_t freq,
+ uint32_t *dot_clock_p,
+ uint32_t *fb_div_p,
+ uint32_t *frac_fb_div_p,
+ uint32_t *ref_div_p,
+ uint32_t *post_div_p)
{
uint32_t min_ref_div = pll->min_ref_div;
uint32_t max_ref_div = pll->max_ref_div;
pll_out_max = pll->pll_out_max;
}
+ if (pll_out_min > 64800)
+ pll_out_min = 64800;
+
if (pll->flags & RADEON_PLL_USE_REF_DIV)
min_ref_div = max_ref_div = pll->reference_div;
else {
max_fractional_feed_div = pll->max_frac_feedback_div;
}
- for (post_div = max_post_div; post_div >= min_post_div; --post_div) {
+ for (post_div = min_post_div; post_div <= max_post_div; ++post_div) {
uint32_t ref_div;
if ((pll->flags & RADEON_PLL_NO_ODD_POST_DIV) && (post_div & 1))
*frac_fb_div_p = best_frac_feedback_div;
*ref_div_p = best_ref_div;
*post_div_p = best_post_div;
+ DRM_DEBUG_KMS("%d %d, pll dividers - fb: %d.%d ref: %d, post %d\n",
+ freq, best_freq / 1000, best_feedback_div, best_frac_feedback_div,
+ best_ref_div, best_post_div);
+
}
static void radeon_user_framebuffer_destroy(struct drm_framebuffer *fb)
if (!ASIC_IS_DCE4(rdev))
return;
- if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) ||
+ if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) &&
(action != ATOM_TRANSMITTER_ACTION_POWER_OFF))
return;
DRM_DEBUG_KMS("\n");
if (!use_bios_divs) {
- radeon_compute_pll(pll, mode->clock,
- &freq, &feedback_div, &frac_fb_div,
- &reference_div, &post_divider);
+ radeon_compute_pll_legacy(pll, mode->clock,
+ &freq, &feedback_div, &frac_fb_div,
+ &reference_div, &post_divider);
for (post_div = &post_divs[0]; post_div->divider; ++post_div) {
if (post_div->divider == post_divider)
#define RADEON_PLL_PREFER_CLOSEST_LOWER (1 << 11)
#define RADEON_PLL_USE_POST_DIV (1 << 12)
#define RADEON_PLL_IS_LCD (1 << 13)
+#define RADEON_PLL_PREFER_MINM_OVER_MAXP (1 << 14)
struct radeon_pll {
/* reference frequency */
struct radeon_atom_ss *ss,
int id, u32 clock);
-extern void radeon_compute_pll(struct radeon_pll *pll,
- uint64_t freq,
- uint32_t *dot_clock_p,
- uint32_t *fb_div_p,
- uint32_t *frac_fb_div_p,
- uint32_t *ref_div_p,
- uint32_t *post_div_p);
+extern void radeon_compute_pll_legacy(struct radeon_pll *pll,
+ uint64_t freq,
+ uint32_t *dot_clock_p,
+ uint32_t *fb_div_p,
+ uint32_t *frac_fb_div_p,
+ uint32_t *ref_div_p,
+ uint32_t *post_div_p);
+
+extern void radeon_compute_pll_avivo(struct radeon_pll *pll,
+ u32 freq,
+ u32 *dot_clock_p,
+ u32 *fb_div_p,
+ u32 *frac_fb_div_p,
+ u32 *ref_div_p,
+ u32 *post_div_p);
extern void radeon_setup_encoder_clones(struct drm_device *dev);
{
struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
struct radeon_device *rdev = ddev->dev_private;
- u32 temp;
+ int temp;
switch (rdev->pm.int_thermal_type) {
case THERMAL_TYPE_RV6XX:
#endif
}
+ if (rdev->pm.power_state)
+ kfree(rdev->pm.power_state);
+
radeon_hwmon_fini(rdev);
}
}
/* get temperature in millidegrees */
-u32 rv770_get_temp(struct radeon_device *rdev)
+int rv770_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_MULT_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
- u32 actual_temp = 0;
-
- if ((temp >> 9) & 1)
- actual_temp = 0;
- else
- actual_temp = (temp >> 1) & 0xff;
-
- return actual_temp * 1000;
+ int actual_temp;
+
+ if (temp & 0x400)
+ actual_temp = -256;
+ else if (temp & 0x200)
+ actual_temp = 255;
+ else if (temp & 0x100) {
+ actual_temp = temp & 0x1ff;
+ actual_temp |= ~0x1ff;
+ } else
+ actual_temp = temp & 0xff;
+
+ return (actual_temp * 1000) / 2;
}
void rv770_pm_misc(struct radeon_device *rdev)
config STUB_POULSBO
tristate "Intel GMA500 Stub Driver"
depends on PCI
+ depends on NET # for THERMAL
# Poulsbo stub depends on ACPI_VIDEO when ACPI is enabled
# but for select to work, need to select ACPI_VIDEO's dependencies, ick
select BACKLIGHT_CLASS_DEVICE if ACPI
select INPUT if ACPI
select ACPI_VIDEO if ACPI
+ select THERMAL if ACPI
help
Choose this option if you have a system that has Intel GMA500
(Poulsbo) integrated graphics. If M is selected, the module will
ib_unregister_event_handler(&sa_dev->event_handler);
- flush_scheduled_work();
+ flush_workqueue(ib_wq);
for (i = 0; i <= sa_dev->end_port - sa_dev->start_port; ++i) {
if (rdma_port_get_link_layer(device, i + 1) == IB_LINK_LAYER_INFINIBAND) {
}
}
+static void ucma_copy_iw_route(struct rdma_ucm_query_route_resp *resp,
+ struct rdma_route *route)
+{
+ struct rdma_dev_addr *dev_addr;
+
+ dev_addr = &route->addr.dev_addr;
+ rdma_addr_get_dgid(dev_addr, (union ib_gid *) &resp->ib_route[0].dgid);
+ rdma_addr_get_sgid(dev_addr, (union ib_gid *) &resp->ib_route[0].sgid);
+}
+
static ssize_t ucma_query_route(struct ucma_file *file,
const char __user *inbuf,
int in_len, int out_len)
resp.node_guid = (__force __u64) ctx->cm_id->device->node_guid;
resp.port_num = ctx->cm_id->port_num;
- if (rdma_node_get_transport(ctx->cm_id->device->node_type) == RDMA_TRANSPORT_IB) {
- switch (rdma_port_get_link_layer(ctx->cm_id->device, ctx->cm_id->port_num)) {
+ switch (rdma_node_get_transport(ctx->cm_id->device->node_type)) {
+ case RDMA_TRANSPORT_IB:
+ switch (rdma_port_get_link_layer(ctx->cm_id->device,
+ ctx->cm_id->port_num)) {
case IB_LINK_LAYER_INFINIBAND:
ucma_copy_ib_route(&resp, &ctx->cm_id->route);
break;
default:
break;
}
+ break;
+ case RDMA_TRANSPORT_IWARP:
+ ucma_copy_iw_route(&resp, &ctx->cm_id->route);
+ break;
+ default:
+ break;
}
out:
r = kmalloc(sizeof(struct c2_vq_req), GFP_KERNEL);
if (r) {
init_waitqueue_head(&r->wait_object);
- r->reply_msg = (u64) NULL;
+ r->reply_msg = 0;
r->event = 0;
r->cm_id = NULL;
r->qp = NULL;
*/
void vq_req_free(struct c2_dev *c2dev, struct c2_vq_req *r)
{
- r->reply_msg = (u64) NULL;
+ r->reply_msg = 0;
if (atomic_dec_and_test(&r->refcnt)) {
kfree(r);
}
void vq_req_put(struct c2_dev *c2dev, struct c2_vq_req *r)
{
if (atomic_dec_and_test(&r->refcnt)) {
- if (r->reply_msg != (u64) NULL)
+ if (r->reply_msg != 0)
vq_repbuf_free(c2dev,
(void *) (unsigned long) r->reply_msg);
kfree(r);
16)) | FW_WR_FLOWID(ep->hwtid));
flowc->mnemval[0].mnemonic = FW_FLOWC_MNEM_PFNVFN;
- flowc->mnemval[0].val = cpu_to_be32(0);
+ flowc->mnemval[0].val = cpu_to_be32(PCI_FUNC(ep->com.dev->rdev.lldi.pdev->devfn) << 8);
flowc->mnemval[1].mnemonic = FW_FLOWC_MNEM_CH;
flowc->mnemval[1].val = cpu_to_be32(ep->tx_chan);
flowc->mnemval[2].mnemonic = FW_FLOWC_MNEM_PORT;
V_FW_RI_RES_WR_DCAEN(0) |
V_FW_RI_RES_WR_DCACPU(0) |
V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(3) |
+ V_FW_RI_RES_WR_FBMAX(2) |
V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
V_FW_RI_RES_WR_CIDXFTHRESH(0) |
V_FW_RI_RES_WR_EQSIZE(eqsize));
V_FW_RI_RES_WR_DCAEN(0) |
V_FW_RI_RES_WR_DCACPU(0) |
V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(3) |
+ V_FW_RI_RES_WR_FBMAX(2) |
V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
V_FW_RI_RES_WR_CIDXFTHRESH(0) |
V_FW_RI_RES_WR_EQSIZE(eqsize));
u8 ibmalfusesnap;
struct qib_qsfp_data qsfp_data;
char epmsgbuf[192]; /* for port error interrupt msg buffer */
- u8 bounced;
};
static struct {
IB_PHYSPORTSTATE_DISABLED)
qib_set_ib_7322_lstate(ppd, 0,
QLOGIC_IB_IBCC_LINKINITCMD_DISABLE);
- else {
- u32 lstate;
- /*
- * We need the current logical link state before
- * lflags are set in handle_e_ibstatuschanged.
- */
- lstate = qib_7322_iblink_state(ibcs);
-
- if (IS_QMH(dd) && !ppd->cpspec->bounced &&
- ltstate == IB_PHYSPORTSTATE_LINKUP &&
- (lstate >= IB_PORT_INIT &&
- lstate <= IB_PORT_ACTIVE)) {
- ppd->cpspec->bounced = 1;
- qib_7322_set_ib_cfg(ppd, QIB_IB_CFG_LSTATE,
- IB_LINKCMD_DOWN | IB_LINKINITCMD_POLL);
- }
-
+ else
/*
* Since going into a recovery state causes the link
* state to go down and since recovery is transitory,
ltstate != IB_PHYSPORTSTATE_RECOVERY_WAITRMT &&
ltstate != IB_PHYSPORTSTATE_RECOVERY_IDLE)
qib_handle_e_ibstatuschanged(ppd, ibcs);
- }
}
if (*msg && iserr)
qib_dev_porterr(dd, ppd->port, "%s error\n", msg);
qib_write_kreg_port(ppd, krp_rcvctrl, ppd->p_rcvctrl);
spin_unlock_irqrestore(&dd->cspec->rcvmod_lock, flags);
+ /* Hold the link state machine for mezz boards */
+ if (IS_QMH(dd) || IS_QME(dd))
+ qib_set_ib_7322_lstate(ppd, 0,
+ QLOGIC_IB_IBCC_LINKINITCMD_DISABLE);
+
/* Also enable IBSTATUSCHG interrupt. */
val = qib_read_kreg_port(ppd, krp_errmask);
qib_write_kreg_port(ppd, krp_errmask,
ppd->cpspec->h1_val = h1;
/* now change the IBC and serdes, overriding generic */
init_txdds_table(ppd, 1);
+ /* Re-enable the physical state machine on mezz boards
+ * now that the correct settings have been set. */
+ if (IS_QMH(dd) || IS_QME(dd))
+ qib_set_ib_7322_lstate(ppd, 0,
+ QLOGIC_IB_IBCC_LINKINITCMD_SLEEP);
any++;
}
if (*nxt == '\n')
}
if (value > 20 && value < 32767)
-#ifndef FREQ
- count = (ixp4xx_get_board_tick_rate() / (value * 4)) - 1;
-#else
- count = (FREQ / (value * 4)) - 1;
-#endif
+ count = (IXP4XX_TIMER_FREQ / (value * 4)) - 1;
ixp4xx_spkr_control(pin, count);
/*************************/
/* im/exported functions */
/*************************/
-extern char *hysdn_getrev(const char *);
/* hysdn_procconf.c */
extern int hysdn_procconf_init(void); /* init proc config filesys */
/* hysdn_net.c */
extern unsigned int hynet_enable;
-extern char *hysdn_net_revision;
extern int hysdn_net_create(hysdn_card *); /* create a new net device */
extern int hysdn_net_release(hysdn_card *); /* delete the device */
extern char *hysdn_net_getname(hysdn_card *); /* get name of net interface */
MODULE_AUTHOR("Werner Cornelius");
MODULE_LICENSE("GPL");
-static char *hysdn_init_revision = "$Revision: 1.6.6.6 $";
static int cardmax; /* number of found cards */
hysdn_card *card_root = NULL; /* pointer to first card */
static hysdn_card *card_last = NULL; /* pointer to first card */
/* Additionally newer versions may be activated without rebooting. */
/****************************************************************************/
-/******************************************************/
-/* extract revision number from string for log output */
-/******************************************************/
-char *
-hysdn_getrev(const char *revision)
-{
- char *rev;
- char *p;
-
- if ((p = strchr(revision, ':'))) {
- rev = p + 2;
- p = strchr(rev, '$');
- *--p = 0;
- } else
- rev = "???";
- return rev;
-}
-
-
/****************************************************************************/
/* init_module is called once when the module is loaded to do all necessary */
/* things like autodetect... */
static int __init
hysdn_init(void)
{
- char tmp[50];
int rc;
- strcpy(tmp, hysdn_init_revision);
- printk(KERN_NOTICE "HYSDN: module Rev: %s loaded\n", hysdn_getrev(tmp));
- strcpy(tmp, hysdn_net_revision);
- printk(KERN_NOTICE "HYSDN: network interface Rev: %s \n", hysdn_getrev(tmp));
+ printk(KERN_NOTICE "HYSDN: module loaded\n");
rc = pci_register_driver(&hysdn_pci_driver);
if (rc)
unsigned int hynet_enable = 0xffffffff;
module_param(hynet_enable, uint, 0);
-/* store the actual version for log reporting */
-char *hysdn_net_revision = "$Revision: 1.8.6.4 $";
-
#define MAX_SKB_BUFFERS 20 /* number of buffers for keeping TX-data */
/****************************************************************************/
#include "hysdn_defs.h"
static DEFINE_MUTEX(hysdn_conf_mutex);
-static char *hysdn_procconf_revision = "$Revision: 1.8.6.4 $";
#define INFO_OUT_LEN 80 /* length of info line including lf */
card = card->next; /* next entry */
}
- printk(KERN_NOTICE "HYSDN: procfs Rev. %s initialised\n", hysdn_getrev(hysdn_procconf_revision));
+ printk(KERN_NOTICE "HYSDN: procfs initialised\n");
return (0);
} /* hysdn_procconf_init */
static int __init icn_init(void)
{
char *p;
- char rev[20];
+ char rev[21];
memset(&dev, 0, sizeof(icn_dev));
dev.memaddr = (membase & 0x0ffc000);
if ((p = strchr(revision, ':'))) {
strncpy(rev, p + 1, 20);
+ rev[20] = '\0';
p = strchr(rev, '$');
if (p)
*p = 0;
mddev_t *mddev = q->queuedata;
int rv;
int cpu;
+ unsigned int sectors;
if (mddev == NULL || mddev->pers == NULL
|| !mddev->ready) {
atomic_inc(&mddev->active_io);
rcu_read_unlock();
+ /*
+ * save the sectors now since our bio can
+ * go away inside make_request
+ */
+ sectors = bio_sectors(bio);
rv = mddev->pers->make_request(mddev, bio);
cpu = part_stat_lock();
part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
- part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw],
- bio_sectors(bio));
+ part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
part_stat_unlock();
if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
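
The md hunk above captures bio_sectors(bio) before calling ->make_request() because the bio may be completed and freed inside that call. A hedged, generic sketch of the same read-before-hand-off pattern (names are illustrative, not from the md code):

/*
 * Hedged sketch: once ownership of an object passes to a callee that may
 * free it, any value derived from it must be captured first.
 */
#include <stdio.h>
#include <stdlib.h>

struct request { unsigned int sectors; };

static void consume(struct request *rq)
{
	/* the callee may complete and free the request */
	free(rq);
}

int main(void)
{
	struct request *rq = malloc(sizeof(*rq));
	if (!rq)
		return 1;
	rq->sectors = 8;

	unsigned int sectors = rq->sectors;	/* capture before hand-off */
	consume(rq);				/* rq may no longer be valid here */

	printf("accounted %u sectors\n", sectors);
	return 0;
}
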
__bdevname(dev, b));
return PTR_ERR(bdev);
}
- if (!shared)
- set_bit(AllReserved, &rdev->flags);
rdev->bdev = bdev;
return err;
}
if (rdev->raid_disk != -1)
return -EBUSY;
+ if (test_bit(MD_RECOVERY_RUNNING, &rdev->mddev->recovery))
+ return -EBUSY;
+
if (rdev->mddev->pers->hot_add_disk == NULL)
return -EINVAL;
mddev_lock(mddev);
list_for_each_entry(rdev2, &mddev->disks, same_set)
- if (test_bit(AllReserved, &rdev2->flags) ||
- (rdev->bdev == rdev2->bdev &&
- rdev != rdev2 &&
- overlaps(rdev->data_offset, rdev->sectors,
- rdev2->data_offset,
- rdev2->sectors))) {
+ if (rdev->bdev == rdev2->bdev &&
+ rdev != rdev2 &&
+ overlaps(rdev->data_offset, rdev->sectors,
+ rdev2->data_offset,
+ rdev2->sectors)) {
overlap = 1;
break;
}
mddev->delta_disks = raid_disks - mddev->raid_disks;
rv = mddev->pers->check_reshape(mddev);
+ if (rv < 0)
+ mddev->delta_disks = 0;
return rv;
}
} else if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
mddev->resync_min = mddev->curr_resync_completed;
mddev->curr_resync = 0;
- if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
- mddev->curr_resync_completed = 0;
- sysfs_notify(&mddev->kobj, NULL, "sync_completed");
wake_up(&resync_wait);
set_bit(MD_RECOVERY_DONE, &mddev->recovery);
md_wakeup_thread(mddev->thread);
}
}
- if (mddev->degraded && ! mddev->ro && !mddev->recovery_disabled) {
+ if (mddev->degraded && !mddev->recovery_disabled) {
list_for_each_entry(rdev, &mddev->disks, same_set) {
if (rdev->raid_disk >= 0 &&
!test_bit(In_sync, &rdev->flags) &&
/* Only thing we do on a ro array is remove
* failed devices.
*/
- remove_and_add_spares(mddev);
+ mdk_rdev_t *rdev;
+ list_for_each_entry(rdev, &mddev->disks, same_set)
+ if (rdev->raid_disk >= 0 &&
+ !test_bit(Blocked, &rdev->flags) &&
+ test_bit(Faulty, &rdev->flags) &&
+ atomic_read(&rdev->nr_pending)==0) {
+ if (mddev->pers->hot_remove_disk(
+ mddev, rdev->raid_disk)==0) {
+ char nm[20];
+ sprintf(nm,"rd%d", rdev->raid_disk);
+ sysfs_remove_link(&mddev->kobj, nm);
+ rdev->raid_disk = -1;
+ }
+ }
clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
goto unlock;
}
#define Faulty 1 /* device is known to have a fault */
#define In_sync 2 /* device is in_sync with rest of array */
#define WriteMostly 4 /* Avoid reading if at all possible */
-#define AllReserved 6 /* If whole device is reserved for
- * one array */
#define AutoDetected 7 /* added by auto-detect */
#define Blocked 8 /* An error occured on an externally
* managed array, don't allow writes
rdev1->new_raid_disk = j;
}
+ if (mddev->level == 1) {
+ /* taking over a raid1 array -
+ * we have only one active disk
+ */
+ j = 0;
+ rdev1->new_raid_disk = j;
+ }
+
if (j < 0 || j >= mddev->raid_disks) {
printk(KERN_ERR "md/raid0:%s: bad disk number %d - "
"aborting!\n", mdname(mddev), j);
return priv_conf;
}
+static void *raid0_takeover_raid1(mddev_t *mddev)
+{
+ raid0_conf_t *priv_conf;
+
+ /* Check layout:
+ * - (N - 1) mirror drives must already be faulty
+ */
+ if ((mddev->raid_disks - 1) != mddev->degraded) {
+ printk(KERN_ERR "md/raid0:%s: (N - 1) mirrors drives must be already faulty!\n",
+ mdname(mddev));
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* Set new parameters */
+ mddev->new_level = 0;
+ mddev->new_layout = 0;
+ mddev->new_chunk_sectors = 128; /* by default set chunk size to 64k */
+ mddev->delta_disks = 1 - mddev->raid_disks;
+ /* make sure it will not be marked as dirty */
+ mddev->recovery_cp = MaxSector;
+
+ create_strip_zones(mddev, &priv_conf);
+ return priv_conf;
+}
+
static void *raid0_takeover(mddev_t *mddev)
{
/* raid0 can take over:
* raid4 - if all data disks are active.
* raid5 - providing it is Raid4 layout and one disk is faulty
* raid10 - assuming we have all necessary active disks
+ * raid1 - with (N - 1) mirror drives faulty
*/
if (mddev->level == 4)
return raid0_takeover_raid45(mddev);
if (mddev->level == 10)
return raid0_takeover_raid10(mddev);
+ if (mddev->level == 1)
+ return raid0_takeover_raid1(mddev);
+
+ printk(KERN_ERR "Takeover from raid%i to raid0 not supported\n",
+ mddev->level);
+
return ERR_PTR(-EINVAL);
}
mddev->recovery_cp = MaxSector;
conf = setup_conf(mddev);
- if (!IS_ERR(conf))
+ if (!IS_ERR(conf)) {
list_for_each_entry(rdev, &mddev->disks, same_set)
if (rdev->raid_disk >= 0)
rdev->new_raid_disk = rdev->raid_disk * 2;
-
+ conf->barrier = 1;
+ }
+
return conf;
}
raid5_conf_t *conf = mddev->private;
mdk_rdev_t *rdev;
int spares = 0;
- int added_devices = 0;
unsigned long flags;
if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
return -ENOSPC;
list_for_each_entry(rdev, &mddev->disks, same_set)
- if ((rdev->raid_disk < 0 || rdev->raid_disk >= conf->raid_disks)
- && !test_bit(Faulty, &rdev->flags))
+ if (!test_bit(In_sync, &rdev->flags)
+ && !test_bit(Faulty, &rdev->flags))
spares++;
if (spares - mddev->degraded < mddev->delta_disks - conf->max_degraded)
* to correctly record the "partially reconstructed" state of
* such devices during the reshape and confusion could result.
*/
- if (mddev->delta_disks >= 0)
- list_for_each_entry(rdev, &mddev->disks, same_set)
- if (rdev->raid_disk < 0 &&
- !test_bit(Faulty, &rdev->flags)) {
- if (raid5_add_disk(mddev, rdev) == 0) {
- char nm[20];
- if (rdev->raid_disk >= conf->previous_raid_disks) {
- set_bit(In_sync, &rdev->flags);
- added_devices++;
- } else
- rdev->recovery_offset = 0;
- sprintf(nm, "rd%d", rdev->raid_disk);
- if (sysfs_create_link(&mddev->kobj,
- &rdev->kobj, nm))
- /* Failure here is OK */;
- } else
- break;
- } else if (rdev->raid_disk >= conf->previous_raid_disks
- && !test_bit(Faulty, &rdev->flags)) {
- /* This is a spare that was manually added */
- set_bit(In_sync, &rdev->flags);
- added_devices++;
- }
+ if (mddev->delta_disks >= 0) {
+ int added_devices = 0;
+ list_for_each_entry(rdev, &mddev->disks, same_set)
+ if (rdev->raid_disk < 0 &&
+ !test_bit(Faulty, &rdev->flags)) {
+ if (raid5_add_disk(mddev, rdev) == 0) {
+ char nm[20];
+ if (rdev->raid_disk
+ >= conf->previous_raid_disks) {
+ set_bit(In_sync, &rdev->flags);
+ added_devices++;
+ } else
+ rdev->recovery_offset = 0;
+ sprintf(nm, "rd%d", rdev->raid_disk);
+ if (sysfs_create_link(&mddev->kobj,
+ &rdev->kobj, nm))
+ /* Failure here is OK */;
+ }
+ } else if (rdev->raid_disk >= conf->previous_raid_disks
+ && !test_bit(Faulty, &rdev->flags)) {
+ /* This is a spare that was manually added */
+ set_bit(In_sync, &rdev->flags);
+ added_devices++;
+ }
- /* When a reshape changes the number of devices, ->degraded
- * is measured against the larger of the pre and post number of
- * devices.*/
- if (mddev->delta_disks > 0) {
+ /* When a reshape changes the number of devices,
+ * ->degraded is measured against the larger of the
+ * pre and post number of devices.
+ */
spin_lock_irqsave(&conf->device_lock, flags);
mddev->degraded += (conf->raid_disks - conf->previous_raid_disks)
- added_devices;
-/* ir-lirc-codec.c - ir-core to classic lirc interface bridge
+/* ir-lirc-codec.c - rc-core to classic lirc interface bridge
*
* Copyright (C) 2010 by Jarod Wilson <jarod@redhat.com>
*
/* Carrier reports */
if (ev.carrier_report) {
sample = LIRC_FREQUENCY(ev.carrier);
+ IR_dprintk(2, "carrier report (freq: %d)\n", sample);
/* Packet end */
} else if (ev.timeout) {
return 0;
sample = LIRC_TIMEOUT(ev.duration / 1000);
+ IR_dprintk(2, "timeout report (duration: %d)\n", sample);
/* Normal sample */
} else {
sample = ev.pulse ? LIRC_PULSE(ev.duration / 1000) :
LIRC_SPACE(ev.duration / 1000);
+ IR_dprintk(2, "delivering %uus %s to lirc_dev\n",
+ TO_US(ev.duration), TO_STR(ev.pulse));
}
lirc_buffer_write(dev->raw->lirc.drv->rbuf,
*
* Copyright (c) 2010 by Jarod Wilson <jarod@redhat.com>
*
+ * See http://mediacenterguides.com/book/export/html/31 for details on
+ * key mappings.
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
{ 0x800f0426, KEY_EPG }, /* Guide */
{ 0x800f0427, KEY_ZOOM }, /* Aspect */
+ { 0x800f0432, KEY_MODE }, /* Visualization */
+ { 0x800f0433, KEY_PRESENTATION }, /* Slide Show */
+ { 0x800f0434, KEY_EJECTCD },
{ 0x800f043a, KEY_BRIGHTNESSUP },
{ 0x800f0446, KEY_TV },
switch (ir->buf_in[index]) {
/* 2-byte return value commands */
case MCE_CMD_S_TIMEOUT:
- ir->rc->timeout = MS_TO_NS((hi << 8 | lo) / 2);
+ ir->rc->timeout = US_TO_NS((hi << 8 | lo) / 2);
break;
/* 1-byte return value commands */
break;
case PARSE_IRDATA:
ir->rem--;
+ init_ir_raw_event(&rawir);
rawir.pulse = ((ir->buf_in[i] & MCE_PULSE_BIT) != 0);
rawir.duration = (ir->buf_in[i] & MCE_PULSE_MASK)
- * MS_TO_US(MCE_TIME_UNIT);
+ * US_TO_NS(MCE_TIME_UNIT);
dev_dbg(ir->dev, "Storing %s with duration %d\n",
rawir.pulse ? "pulse" : "space",
i, ir->rem + 1, false);
if (ir->rem)
ir->parser_state = PARSE_IRDATA;
+ else
+ ir_raw_event_reset(ir->rc);
break;
}
rc->priv = ir;
rc->driver_type = RC_DRIVER_IR_RAW;
rc->allowed_protos = RC_TYPE_ALL;
- rc->timeout = MS_TO_NS(1000);
+ rc->timeout = US_TO_NS(1000);
if (!ir->flags.no_tx) {
rc->s_tx_mask = mceusb_set_tx_mask;
rc->s_tx_carrier = mceusb_set_tx_carrier;
return 0;
}
- carrier = (count * 1000000) / duration;
+ carrier = MS_TO_NS(count) / duration;
if ((carrier > MAX_CARRIER) || (carrier < MIN_CARRIER))
nvt_dbg("WTF? Carrier frequency out of range!");
sample = nvt->buf[i];
rawir.pulse = ((sample & BUF_PULSE_BIT) != 0);
- rawir.duration = (sample & BUF_LEN_MASK)
- * SAMPLE_PERIOD * 1000;
+ rawir.duration = US_TO_NS((sample & BUF_LEN_MASK)
+ * SAMPLE_PERIOD);
if ((sample & BUF_LEN_MASK) == BUF_LEN_MASK) {
if (nvt->rawir.pulse == rawir.pulse)
sz->signal_start.tv_usec -
sz->signal_last.tv_usec);
rawir.duration -= sz->sum;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
rawir.duration &= IR_MAX_DURATION;
}
sz_push(sz, rawir);
rawir.duration = ((int) value) * SZ_RESOLUTION;
rawir.duration += SZ_RESOLUTION / 2;
sz->sum += rawir.duration;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
rawir.duration &= IR_MAX_DURATION;
sz_push(sz, rawir);
}
rawir.duration = ((int) value) * SZ_RESOLUTION;
rawir.duration += SZ_RESOLUTION / 2;
sz->sum += rawir.duration;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
sz_push(sz, rawir);
}
if (sz->timeout_enabled)
sz_push(sz, rawir);
ir_raw_event_handle(sz->rdev);
+ ir_raw_event_reset(sz->rdev);
} else {
sz_push_full_space(sz, sz->buf_in[i]);
}
}
}
+ ir_raw_event_handle(sz->rdev);
usb_submit_urb(urb, GFP_ATOMIC);
return;
sz->decoder_state = PulseSpace;
/* FIXME: don't yet have a way to set this */
sz->timeout_enabled = true;
- sz->rdev->timeout = (((SZ_TIMEOUT * SZ_RESOLUTION * 1000) &
+ sz->rdev->timeout = ((US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION) &
IR_MAX_DURATION) | 0x03000000);
#if 0
/* not yet supported, depends on patches from maxim */
/* see also: LIRC_GET_REC_RESOLUTION and LIRC_SET_REC_TIMEOUT */
- sz->min_timeout = SZ_TIMEOUT * SZ_RESOLUTION * 1000;
- sz->max_timeout = SZ_TIMEOUT * SZ_RESOLUTION * 1000;
+ sz->min_timeout = US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION);
+ sz->max_timeout = US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION);
#endif
do_gettimeofday(&sz->signal_start);
break;
default:
/* case 0xdd: * delay */
- msleep(action->val / 64 + 10);
+ msleep(action->idx);
break;
}
action++;
[SENSOR_GC0305] = gc0305_matrix,
[SENSOR_HDCS2020b] = NULL,
[SENSOR_HV7131B] = NULL,
- [SENSOR_HV7131R] = NULL,
+ [SENSOR_HV7131R] = po2030_matrix,
[SENSOR_ICM105A] = po2030_matrix,
[SENSOR_MC501CB] = NULL,
[SENSOR_MT9V111_1] = gc0305_matrix,
case SENSOR_ADCM2700:
case SENSOR_GC0305:
case SENSOR_HV7131B:
+ case SENSOR_HV7131R:
case SENSOR_OV7620:
case SENSOR_PAS202B:
case SENSOR_PO2030:
reg_w(gspca_dev, 0x02, 0x003b);
reg_w(gspca_dev, 0x00, 0x0038);
break;
+ case SENSOR_HV7131R:
case SENSOR_PAS202B:
reg_w(gspca_dev, 0x03, 0x003b);
reg_w(gspca_dev, 0x0c, 0x003a);
reg_w(gspca_dev, 0x0b, 0x0039);
- reg_w(gspca_dev, 0x0b, 0x0038);
+ if (sensor == SENSOR_PAS202B)
+ reg_w(gspca_dev, 0x0b, 0x0038);
break;
}
}
reg_w(gspca_dev, 0x02, 0x003b);
reg_w(gspca_dev, 0x00, 0x0038);
break;
+ case SENSOR_HV7131R:
case SENSOR_PAS202B:
reg_w(gspca_dev, 0x03, 0x003b);
reg_w(gspca_dev, 0x0c, 0x003a);
reg_w(gspca_dev, 0x0b, 0x0039);
+ if (sd->sensor == SENSOR_HV7131R)
+ reg_w(gspca_dev, 0x50, ZC3XX_R11D_GLOBALGAIN);
break;
}
break;
case SENSOR_PAS202B:
case SENSOR_GC0305:
+ case SENSOR_HV7131R:
case SENSOR_TAS5130C:
reg_r(gspca_dev, 0x0008);
/* fall thru */
/* ms-win + */
reg_w(gspca_dev, 0x40, 0x0117);
break;
+ case SENSOR_HV7131R:
+ i2c_write(gspca_dev, 0x25, 0x04, 0x00); /* exposure */
+ i2c_write(gspca_dev, 0x26, 0x93, 0x00);
+ i2c_write(gspca_dev, 0x27, 0xe0, 0x00);
+ reg_w(gspca_dev, 0x00, ZC3XX_R1A7_CALCGLOBALMEAN);
+ break;
case SENSOR_GC0305:
case SENSOR_TAS5130C:
reg_w(gspca_dev, 0x09, 0x01ad); /* (from win traces) */
{
struct sd *sd = (struct sd *) gspca_dev;
- if (data[0] == 0xff && data[1] == 0xd8) { /* start of frame */
+ /* check the JPEG end of frame */
+ if (len >= 3
+ && data[len - 3] == 0xff && data[len - 2] == 0xd9) {
+/*fixme: what does the last byte mean?*/
gspca_frame_add(gspca_dev, LAST_PACKET,
- NULL, 0);
+ data, len - 1);
+ return;
+ }
+
+ /* check the JPEG start of a frame */
+ if (data[0] == 0xff && data[1] == 0xd8) {
/* put the JPEG header in the new frame */
gspca_frame_add(gspca_dev, FIRST_PACKET,
sd->jpeg_hdr, JPEG_HDR_SZ);
struct hdpvr_device *dev;
struct usb_host_interface *iface_desc;
struct usb_endpoint_descriptor *endpoint;
+ struct i2c_client *client;
size_t buffer_size;
int i;
int retval = -ENOMEM;
#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
retval = hdpvr_register_i2c_adapter(dev);
if (retval < 0) {
- v4l2_err(&dev->v4l2_dev, "registering i2c adapter failed\n");
+ v4l2_err(&dev->v4l2_dev, "i2c adapter register failed\n");
goto error;
}
- retval = hdpvr_register_i2c_ir(dev);
- if (retval < 0)
- v4l2_err(&dev->v4l2_dev, "registering i2c IR devices failed\n");
+ client = hdpvr_register_ir_rx_i2c(dev);
+ if (!client) {
+ v4l2_err(&dev->v4l2_dev, "i2c IR RX device register failed\n");
+ goto reg_fail;
+ }
+
+ client = hdpvr_register_ir_tx_i2c(dev);
+ if (!client) {
+ v4l2_err(&dev->v4l2_dev, "i2c IR TX device register failed\n");
+ goto reg_fail;
+ }
#endif
/* let the user know what node this device is now attached to */
video_device_node_name(dev->video_dev));
return 0;
+reg_fail:
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+ i2c_del_adapter(&dev->i2c_adapter);
+#endif
error:
if (dev) {
/* Destroy single thread */
mutex_lock(&dev->io_mutex);
hdpvr_cancel_queue(dev);
mutex_unlock(&dev->io_mutex);
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+ i2c_del_adapter(&dev->i2c_adapter);
+#endif
video_unregister_device(dev->video_dev);
atomic_dec(&dev_nr);
}
#define Z8F0811_IR_RX_I2C_ADDR 0x71
-static struct i2c_board_info hdpvr_i2c_board_info = {
- I2C_BOARD_INFO("ir_tx_z8f0811_hdpvr", Z8F0811_IR_TX_I2C_ADDR),
- I2C_BOARD_INFO("ir_rx_z8f0811_hdpvr", Z8F0811_IR_RX_I2C_ADDR),
-};
+struct i2c_client *hdpvr_register_ir_tx_i2c(struct hdpvr_device *dev)
+{
+ struct IR_i2c_init_data *init_data = &dev->ir_i2c_init_data;
+ struct i2c_board_info hdpvr_ir_tx_i2c_board_info = {
+ I2C_BOARD_INFO("ir_tx_z8f0811_hdpvr", Z8F0811_IR_TX_I2C_ADDR),
+ };
+
+ init_data->name = "HD-PVR";
+ hdpvr_ir_tx_i2c_board_info.platform_data = init_data;
-int hdpvr_register_i2c_ir(struct hdpvr_device *dev)
+ return i2c_new_device(&dev->i2c_adapter, &hdpvr_ir_tx_i2c_board_info);
+}
+
+struct i2c_client *hdpvr_register_ir_rx_i2c(struct hdpvr_device *dev)
{
- struct i2c_client *c;
struct IR_i2c_init_data *init_data = &dev->ir_i2c_init_data;
+ struct i2c_board_info hdpvr_ir_rx_i2c_board_info = {
+ I2C_BOARD_INFO("ir_rx_z8f0811_hdpvr", Z8F0811_IR_RX_I2C_ADDR),
+ };
/* Our default information for ir-kbd-i2c.c to use */
init_data->ir_codes = RC_MAP_HAUPPAUGE_NEW;
init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP_XVR;
init_data->type = RC_TYPE_RC5;
- init_data->name = "HD PVR";
- hdpvr_i2c_board_info.platform_data = init_data;
-
- c = i2c_new_device(&dev->i2c_adapter, &hdpvr_i2c_board_info);
+ init_data->name = "HD-PVR";
+ hdpvr_ir_rx_i2c_board_info.platform_data = init_data;
- return (c == NULL) ? -ENODEV : 0;
+ return i2c_new_device(&dev->i2c_adapter, &hdpvr_ir_rx_i2c_board_info);
}
static int hdpvr_i2c_read(struct hdpvr_device *dev, int bus,
/* i2c adapter registration */
int hdpvr_register_i2c_adapter(struct hdpvr_device *dev);
-int hdpvr_register_i2c_ir(struct hdpvr_device *dev);
+struct i2c_client *hdpvr_register_ir_rx_i2c(struct hdpvr_device *dev);
+struct i2c_client *hdpvr_register_ir_tx_i2c(struct hdpvr_device *dev);
/*========================================================================*/
/* buffer management */
static int get_key_haup_xvr(struct IR_i2c *ir, u32 *ir_key, u32 *ir_raw)
{
+ int ret;
+ unsigned char buf[1] = { 0 };
+
+ /*
+ * This is the same apparent "are you ready?" poll command observed
+ * watching Windows driver traffic and implemented in lirc_zilog. With
+ * this added, we get far saner remote behavior with z8 chips on usb
+ * connected devices, even with the default polling interval of 100ms.
+ */
+ ret = i2c_master_send(ir->c, buf, 1);
+ if (ret != 1)
+ return (ret < 0) ? ret : -EINVAL;
+
return get_key_haup_common (ir, ir_key, ir_raw, 6, 3);
}
init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP_XVR;
init_data->type = RC_TYPE_RC5;
init_data->name = hdw->hdw_desc->description;
- init_data->polling_interval = 260; /* ms From lirc_zilog */
/* IR Receiver */
info.addr = 0x71;
info.platform_data = init_data;
chip_id = name[5];
/* Check whether this chip is part of the saa711x series */
- if (memcmp(name, "1f711", 5)) {
+ if (memcmp(name + 1, "f711", 4)) {
v4l_dbg(1, debug, client, "chip found @ 0x%x (ID %s) does not match a known saa711x chip.\n",
client->addr << 1, name);
return -ENODEV;
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L2C_B)},
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L2C_B2)},
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L1D)},
+ {PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L1D_2_0)},
/* required last entry */
{ 0 }
};
spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
req = nonemb_cmd->va;
sge = nonembedded_sgl(wrb);
status = be_mcc_notify_wait(adapter);
+err:
spin_unlock_bh(&adapter->mcc_lock);
return status;
}
if (adapter->link_up != link_up) {
adapter->link_speed = -1;
if (link_up) {
- netif_start_queue(netdev);
netif_carrier_on(netdev);
printk(KERN_INFO "%s: Link up\n", netdev->name);
} else {
- netif_stop_queue(netdev);
netif_carrier_off(netdev);
printk(KERN_INFO "%s: Link down\n", netdev->name);
}
netif_napi_add(netdev, &adapter->tx_eq.napi, be_poll_tx_mcc,
BE_NAPI_WEIGHT);
-
- netif_stop_queue(netdev);
}
static void be_unmap_pci_bars(struct be_adapter *adapter)
* (you will need to reboot afterwards) */
/* #define BNX2X_STOP_ON_ERROR */
-#define DRV_MODULE_VERSION "1.62.00-4"
-#define DRV_MODULE_RELDATE "2011/01/18"
+#define DRV_MODULE_VERSION "1.62.00-5"
+#define DRV_MODULE_RELDATE "2011/01/30"
#define BNX2X_BC_VER 0x040200
#define BNX2X_MULTI_QUEUE
return rc;
}
-static void bnx2x_8073_set_xaui_low_power_mode(struct bnx2x *bp,
- struct bnx2x_phy *phy)
-{
- u16 val;
- bnx2x_cl45_read(bp, phy,
- MDIO_PMA_DEVAD, MDIO_PMA_REG_8073_CHIP_REV, &val);
-
- if (val == 0) {
- /* Mustn't set low power mode in 8073 A0 */
- return;
- }
-
- /* Disable PLL sequencer (use read-modify-write to clear bit 13) */
- bnx2x_cl45_read(bp, phy,
- MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, &val);
- val &= ~(1<<13);
- bnx2x_cl45_write(bp, phy,
- MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, val);
-
- /* PLL controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805E, 0x1077);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805D, 0x0000);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805C, 0x030B);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805B, 0x1240);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805A, 0x2490);
-
- /* Tx Controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A7, 0x0C74);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A6, 0x9041);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A5, 0x4640);
-
- /* Rx Controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FE, 0x01C4);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FD, 0x9249);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FC, 0x2015);
-
- /* Enable PLL sequencer (use read-modify-write to set bit 13) */
- bnx2x_cl45_read(bp, phy, MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, &val);
- val |= (1<<13);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, val);
-}
-
/******************************************************************/
/* BCM8073 PHY SECTION */
/******************************************************************/
bnx2x_8073_set_pause_cl37(params, phy, vars);
- bnx2x_8073_set_xaui_low_power_mode(bp, phy);
-
bnx2x_cl45_read(bp, phy,
MDIO_PMA_DEVAD, MDIO_PMA_REG_M8051_MSGOUT_REG, &tmp1);
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_LED1_MASK,
0x80);
+
+ /* Tell LED3 to blink on source */
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ &val);
+ val &= ~(7<<6);
+ val |= (1<<6); /* A83B[8:6]= 1 */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ val);
}
break;
}
struct bnx2x_phy phy[PORT_MAX];
struct bnx2x_phy *phy_blk[PORT_MAX];
u16 val;
- s8 port;
+ s8 port = 0;
s8 port_of_path = 0;
-
- bnx2x_ext_phy_hw_reset(bp, 0);
+ u32 swap_val, swap_override;
+ swap_val = REG_RD(bp, NIG_REG_PORT_SWAP);
+ swap_override = REG_RD(bp, NIG_REG_STRAP_OVERRIDE);
+ port ^= (swap_val && swap_override);
+ bnx2x_ext_phy_hw_reset(bp, port);
/* PART1 - Reset both phys */
for (port = PORT_MAX - 1; port >= PORT_0; port--) {
u32 shmem_base, shmem2_base;
/* accept matched ucast */
drop_all_ucast = 0;
}
- if (filters & BNX2X_ACCEPT_MULTICAST) {
+ if (filters & BNX2X_ACCEPT_MULTICAST)
/* accept matched mcast */
drop_all_mcast = 0;
- if (IS_MF_SI(bp))
- /* since mcast addresses won't arrive with ovlan,
- * fw needs to accept all of them in
- * switch-independent mode */
- accp_all_mcast = 1;
- }
+
if (filters & BNX2X_ACCEPT_ALL_UNICAST) {
/* accept all mcast */
drop_all_ucast = 0;
def_q_filters |= BNX2X_ACCEPT_UNICAST | BNX2X_ACCEPT_BROADCAST |
BNX2X_ACCEPT_MULTICAST;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id,
+ BNX2X_ACCEPT_UNICAST |
+ BNX2X_ACCEPT_MULTICAST);
+ }
#endif
break;
def_q_filters |= BNX2X_ACCEPT_UNICAST | BNX2X_ACCEPT_BROADCAST |
BNX2X_ACCEPT_ALL_MULTICAST;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ /*
+ * Prevent duplication of multicast packets by configuring FCoE
+ * L2 Client to receive only matched unicast frames.
+ */
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id,
+ BNX2X_ACCEPT_UNICAST);
+ }
#endif
break;
case BNX2X_RX_MODE_PROMISC:
def_q_filters |= BNX2X_PROMISCUOUS_MODE;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ /*
+ * Prevent packet duplication by configuring DROP_ALL for FCoE
+ * L2 Client.
+ */
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_NONE);
+ }
#endif
/* pass management unicast packets as well */
llh_mask |= NIG_LLH0_BRB1_DRV_MASK_REG_LLH0_BRB1_DRV_MASK_UNCST;
}
}
- bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
- bp->common.shmem_base,
- bp->common.shmem2_base);
-
bnx2x_setup_fan_failure_detection(bp);
/* clear PXP2 attentions */
bnx2x_init_block(bp, MCP_BLOCK, init_stage);
bnx2x_init_block(bp, DMAE_BLOCK, init_stage);
- bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
- bp->common.shmem_base,
- bp->common.shmem2_base);
if (bnx2x_fan_failure_det_req(bp, bp->common.shmem_base,
bp->common.shmem2_base, port)) {
u32 reg_addr = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
(ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN))
bp->mdio.prtad =
XGXS_EXT_PHY_ADDR(ext_phy_config);
+
+ /*
+ * Check if hw lock is required to access the MDC/MDIO bus to the PHY(s).
+ * In MF mode, it is set to cover self test cases
+ */
+ if (IS_MF(bp))
+ bp->port.need_hw_lock = 1;
+ else
+ bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
+ bp->common.shmem_base,
+ bp->common.shmem2_base);
}
static void __devinit bnx2x_get_mac_hwinfo(struct bnx2x *bp)
As only the sending and receiving of CAN frames is implemented, this
driver should work with the (serial/USB) CAN hardware from:
- www.canusb.com / www.can232.com / www.mictronic.com / www.canhack.de
+ www.canusb.com / www.can232.com / www.mictronics.de / www.canhack.de
Userspace tools to attach the SLCAN line discipline (slcan_attach,
slcand) can be found in the can-utils at the SocketCAN SVN, see
return ret;
}
-static DEVICE_ATTR(mb0_id, S_IWUGO | S_IRUGO,
+static DEVICE_ATTR(mb0_id, S_IWUSR | S_IRUGO,
at91_sysfs_show_mb0_id, at91_sysfs_set_mb0_id);
static struct attribute *at91_sysfs_attrs[] = {
return count;
}
-static DEVICE_ATTR(termination, S_IWUGO | S_IRUGO, ican3_sysfs_show_term,
+static DEVICE_ATTR(termination, S_IWUSR | S_IRUGO, ican3_sysfs_show_term,
ican3_sysfs_set_term);
static struct attribute *ican3_sysfs_attrs[] = {
static struct can_bittiming_const pch_can_bittiming_const = {
.name = KBUILD_MODNAME,
- .tseg1_min = 1,
+ .tseg1_min = 2,
.tseg1_max = 16,
.tseg2_min = 1,
.tseg2_max = 8,
struct pch_can_priv *priv = netdev_priv(ndev);
unregister_candev(priv->ndev);
- pci_iounmap(pdev, priv->regs);
if (priv->use_msi)
pci_disable_msi(priv->dev);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
pch_can_reset(priv);
+ pci_iounmap(pdev, priv->regs);
free_candev(priv->ndev);
}
priv->use_msi = 0;
} else {
netdev_err(ndev, "PCH CAN opened with MSI\n");
+ pci_set_master(pdev);
priv->use_msi = 1;
}
config CAN_SOFTING
tristate "Softing Gmbh CAN generic support"
- depends on CAN_DEV
+ depends on CAN_DEV && HAS_IOMEM
---help---
Support for CAN cards from Softing Gmbh & some cards
from Vector Gmbh.
#include <linux/module.h>
#include <linux/kernel.h>
+#include <linux/slab.h>
#include <pcmcia/cistpl.h>
#include <pcmcia/ds.h>
}
}
/* Change buffer ownership for this last frame, back to the adapter */
- for (; lp->rx_old != entry; lp->rx_old = (++lp->rx_old) & lp->rxRingMask) {
+ for (; lp->rx_old != entry; lp->rx_old = (lp->rx_old + 1) & lp->rxRingMask) {
writel(readl(&lp->rx_ring[lp->rx_old].base) | R_OWN, &lp->rx_ring[lp->rx_old].base);
}
writel(readl(&lp->rx_ring[entry].base) | R_OWN, &lp->rx_ring[entry].base);
/*
** Update entry information
*/
- lp->rx_new = (++lp->rx_new) & lp->rxRingMask;
+ lp->rx_new = (lp->rx_new + 1) & lp->rxRingMask;
}
return 0;
}
/* Update all the pointers */
- lp->tx_old = (++lp->tx_old) & lp->txRingMask;
+ lp->tx_old = (lp->tx_old + 1) & lp->txRingMask;
}
return 0;
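
The de4x5 changes above replace expressions of the form x = (++x) & mask, which modify x twice without an intervening sequence point and are undefined behaviour in C, with the well-defined (x + 1) & mask. A tiny sketch of the intended wrap-around (the mask value is hypothetical, not the driver's):

/*
 * Hedged sketch: ring-index advance with a power-of-two mask.
 * 'x = (++x) & mask' writes x twice in one expression (undefined
 * behaviour); '(x + 1) & mask' is the well-defined equivalent.
 */
#include <stdio.h>

int main(void)
{
	unsigned int mask = 0x7;	/* hypothetical 8-entry ring */
	unsigned int idx = 7;

	idx = (idx + 1) & mask;		/* wraps 7 -> 0 */
	printf("%u\n", idx);
	return 0;
}
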
/* Free all the skbuffs in the queue. */
for (i = 0; i < RX_RING_SIZE; i++) {
- np->rx_ring[i].status = 0;
- np->rx_ring[i].fraginfo = 0;
skb = np->rx_skbuff[i];
if (skb) {
pci_unmap_single(np->pdev,
dev_kfree_skb (skb);
np->rx_skbuff[i] = NULL;
}
+ np->rx_ring[i].status = 0;
+ np->rx_ring[i].fraginfo = 0;
}
for (i = 0; i < TX_RING_SIZE; i++) {
skb = np->tx_skbuff[i];
case M88E1000_I_PHY_ID:
case M88E1011_I_PHY_ID:
case M88E1111_I_PHY_ID:
+ case M88E1118_E_PHY_ID:
hw->phy_type = e1000_phy_m88;
break;
case IGP01E1000_I_PHY_ID:
break;
case e1000_ce4100:
if ((hw->phy_id == RTL8211B_PHY_ID) ||
- (hw->phy_id == RTL8201N_PHY_ID))
+ (hw->phy_id == RTL8201N_PHY_ID) ||
+ (hw->phy_id == M88E1118_E_PHY_ID))
match = true;
break;
case e1000_82541:
#define M88E1000_14_PHY_ID M88E1000_E_PHY_ID
#define M88E1011_I_REV_4 0x04
#define M88E1111_I_PHY_ID 0x01410CC0
+#define M88E1118_E_PHY_ID 0x01410E40
#define L1LXT971A_PHY_ID 0x001378E0
#define RTL8211B_PHY_ID 0x001CC910
* to get done, so reset controller to flush Tx.
* (Do the reset outside of interrupt context).
*/
- adapter->tx_timeout_count++;
schedule_work(&adapter->reset_task);
/* return immediately since reset is imminent */
return;
if (netif_msg_hw(priv))
printk(KERN_DEBUG DRV_NAME ": reading TSV at addr:0x%04x\n",
endptr + 1);
- enc28j60_mem_read(priv, endptr + 1, sizeof(tsv), tsv);
+ enc28j60_mem_read(priv, endptr + 1, TSV_SIZE, tsv);
}
static void enc28j60_dump_tsv(struct enc28j60_net *priv, const char *msg,
hw_dbg(hw, " New MAC Addr =%pM\n", hw->mac.addr);
hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0, IXGBE_RAH_AV);
+
+ /* clear VMDq pool/queue selection for RAR 0 */
+ hw->mac.ops.clear_vmdq(hw, 0, IXGBE_CLEAR_VMDQ_ALL);
}
hw->addr_ctrl.overflow_promisc = 0;
unsigned int thisoff = 0;
unsigned int thislen = 0;
u32 fcbuff, fcdmarw, fcfltrw;
- dma_addr_t addr;
+ dma_addr_t addr = 0;
if (!netdev || !sgl)
return 0;
static const char ixgbe_driver_string[] =
"Intel(R) 10 Gigabit PCI Express Network Driver";
-#define DRV_VERSION "3.0.12-k2"
+#define DRV_VERSION "3.2.9-k2"
const char ixgbe_driver_version[] = DRV_VERSION;
static char ixgbe_copyright[] = "Copyright (c) 1999-2010 Intel Corporation.";
u32 mhadd, hlreg0;
/* Decide whether to use packet split mode or not */
+ /* On by default */
+ adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED;
+
/* Do not use packet split if we're in SR-IOV Mode */
- if (!adapter->num_vfs)
- adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED;
+ if (adapter->num_vfs)
+ adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED;
+
+ /* Disable packet split due to 82599 erratum #45 */
+ if (hw->mac.type == ixgbe_mac_82599EB)
+ adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED;
/* Set the RX buffer length according to the mode */
if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) {
{
int q_idx, num_q_vectors;
struct ixgbe_q_vector *q_vector;
- int napi_vectors;
int (*poll)(struct napi_struct *, int);
if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) {
num_q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS;
- napi_vectors = adapter->num_rx_queues;
poll = &ixgbe_clean_rxtx_many;
} else {
num_q_vectors = 1;
- napi_vectors = 1;
poll = &ixgbe_poll;
}
return adapter->hw.mac.ops.set_vfta(&adapter->hw, vid, vf, (bool)add);
}
-
static void ixgbe_set_vmolr(struct ixgbe_hw *hw, u32 vf, bool aupe)
{
u32 vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
vmolr |= (IXGBE_VMOLR_ROMPE |
- IXGBE_VMOLR_ROPE |
IXGBE_VMOLR_BAM);
if (aupe)
vmolr |= IXGBE_VMOLR_AUPE;
}
ctrl = IXGBE_READ_REG(hw, IXGBE_CTRL);
- IXGBE_WRITE_REG(hw, IXGBE_CTRL, (ctrl | IXGBE_CTRL_RST));
+ IXGBE_WRITE_REG(hw, IXGBE_CTRL, (ctrl | reset_bit));
IXGBE_WRITE_FLUSH(hw);
/* Poll for reset bit to self-clear indicating reset is complete */
for (i = 0; i < 10; i++) {
udelay(1);
ctrl = IXGBE_READ_REG(hw, IXGBE_CTRL);
- if (!(ctrl & IXGBE_CTRL_RST))
+ if (!(ctrl & reset_bit))
break;
}
- if (ctrl & IXGBE_CTRL_RST) {
+ if (ctrl & reset_bit) {
status = IXGBE_ERR_RESET_FAILED;
hw_dbg(hw, "Reset polling failed to complete.\n");
}
{ PCI_VDEVICE(MELLANOX, 0x6764) }, /* MT26468 ConnectX EN 10GigE PCIe gen2*/
{ PCI_VDEVICE(MELLANOX, 0x6746) }, /* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
{ PCI_VDEVICE(MELLANOX, 0x676e) }, /* MT26478 ConnectX2 40GigE PCIe gen2 */
+ { PCI_VDEVICE(MELLANOX, 0x1002) }, /* MT25400 Family [ConnectX-2 Virtual Function] */
+ { PCI_VDEVICE(MELLANOX, 0x1003) }, /* MT27500 Family [ConnectX-3] */
+ { PCI_VDEVICE(MELLANOX, 0x1004) }, /* MT27500 Family [ConnectX-3 Virtual Function] */
+ { PCI_VDEVICE(MELLANOX, 0x1005) }, /* MT27510 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1006) }, /* MT27511 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1007) }, /* MT27520 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1008) }, /* MT27521 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1009) }, /* MT27530 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100a) }, /* MT27531 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100b) }, /* MT27540 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100c) }, /* MT27541 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100d) }, /* MT27550 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100e) }, /* MT27551 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100f) }, /* MT27560 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1010) }, /* MT27561 Family */
{ 0, }
};
{
struct niu_parent *parent = np->parent;
int first_rx_channel, first_tx_channel;
+ int num_rx_rings, num_tx_rings;
+ struct rx_ring_info *rx_rings;
+ struct tx_ring_info *tx_rings;
int i, port, err;
port = np->port;
first_tx_channel += parent->txchan_per_port[i];
}
- np->num_rx_rings = parent->rxchan_per_port[port];
- np->num_tx_rings = parent->txchan_per_port[port];
+ num_rx_rings = parent->rxchan_per_port[port];
+ num_tx_rings = parent->txchan_per_port[port];
- netif_set_real_num_rx_queues(np->dev, np->num_rx_rings);
- netif_set_real_num_tx_queues(np->dev, np->num_tx_rings);
-
- np->rx_rings = kcalloc(np->num_rx_rings, sizeof(struct rx_ring_info),
- GFP_KERNEL);
+ rx_rings = kcalloc(num_rx_rings, sizeof(struct rx_ring_info),
+ GFP_KERNEL);
err = -ENOMEM;
- if (!np->rx_rings)
+ if (!rx_rings)
goto out_err;
+ np->num_rx_rings = num_rx_rings;
+ smp_wmb();
+ np->rx_rings = rx_rings;
+
+ netif_set_real_num_rx_queues(np->dev, num_rx_rings);
+
for (i = 0; i < np->num_rx_rings; i++) {
struct rx_ring_info *rp = &np->rx_rings[i];
return err;
}
- np->tx_rings = kcalloc(np->num_tx_rings, sizeof(struct tx_ring_info),
- GFP_KERNEL);
+ tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),
+ GFP_KERNEL);
err = -ENOMEM;
- if (!np->tx_rings)
+ if (!tx_rings)
goto out_err;
+ np->num_tx_rings = num_tx_rings;
+ smp_wmb();
+ np->tx_rings = tx_rings;
+
+ netif_set_real_num_tx_queues(np->dev, num_tx_rings);
+
for (i = 0; i < np->num_tx_rings; i++) {
struct tx_ring_info *rp = &np->tx_rings[i];
static void niu_get_rx_stats(struct niu *np)
{
unsigned long pkts, dropped, errors, bytes;
+ struct rx_ring_info *rx_rings;
int i;
pkts = dropped = errors = bytes = 0;
+
+ rx_rings = ACCESS_ONCE(np->rx_rings);
+ if (!rx_rings)
+ goto no_rings;
+
for (i = 0; i < np->num_rx_rings; i++) {
- struct rx_ring_info *rp = &np->rx_rings[i];
+ struct rx_ring_info *rp = &rx_rings[i];
niu_sync_rx_discard_stats(np, rp, 0);
dropped += rp->rx_dropped;
errors += rp->rx_errors;
}
+
+no_rings:
np->dev->stats.rx_packets = pkts;
np->dev->stats.rx_bytes = bytes;
np->dev->stats.rx_dropped = dropped;
static void niu_get_tx_stats(struct niu *np)
{
unsigned long pkts, errors, bytes;
+ struct tx_ring_info *tx_rings;
int i;
pkts = errors = bytes = 0;
+
+ tx_rings = ACCESS_ONCE(np->tx_rings);
+ if (!tx_rings)
+ goto no_rings;
+
for (i = 0; i < np->num_tx_rings; i++) {
- struct tx_ring_info *rp = &np->tx_rings[i];
+ struct tx_ring_info *rp = &tx_rings[i];
pkts += rp->tx_packets;
bytes += rp->tx_bytes;
errors += rp->tx_errors;
}
+
+no_rings:
np->dev->stats.tx_packets = pkts;
np->dev->stats.tx_bytes = bytes;
np->dev->stats.tx_errors = errors;
{
struct niu *np = netdev_priv(dev);
- niu_get_rx_stats(np);
- niu_get_tx_stats(np);
-
+ if (netif_running(dev)) {
+ niu_get_rx_stats(np);
+ niu_get_tx_stats(np);
+ }
return &dev->stats;
}
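
The niu hunks above publish the resized ring arrays with smp_wmb() and read them back with ACCESS_ONCE() so the stats path never walks an array that has not been fully set up. A minimal userspace sketch of that publish/consume ordering, using C11 release/acquire atomics in place of the kernel primitives (the struct names and sizes here are illustrative assumptions, not driver code):

	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct ring { unsigned long packets; };

	struct dev_state {
		int num_rings;                    /* written before the pointer is published */
		_Atomic(struct ring *) rings;     /* published last, consumed with acquire */
	};

	static void publish_rings(struct dev_state *st, int n)
	{
		struct ring *r = calloc(n, sizeof(*r));

		if (!r)
			return;
		st->num_rings = n;                /* ordered before the release store below */
		atomic_store_explicit(&st->rings, r, memory_order_release);
	}

	static unsigned long sum_packets(struct dev_state *st)
	{
		struct ring *r = atomic_load_explicit(&st->rings, memory_order_acquire);
		unsigned long sum = 0;

		if (!r)                           /* rings not published yet: report zeroes */
			return 0;
		for (int i = 0; i < st->num_rings; i++)
			sum += r[i].packets;
		return sum;
	}

	int main(void)
	{
		struct dev_state st = { 0 };

		printf("%lu\n", sum_packets(&st));   /* 0: nothing published yet */
		publish_rings(&st, 4);
		printf("%lu\n", sum_packets(&st));   /* 0: four freshly zeroed rings */
		free(atomic_load(&st.rings));
		return 0;
	}
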
struct pch_gbe_adapter *adapter;
adapter = container_of(work, struct pch_gbe_adapter, reset_task);
+ rtnl_lock();
pch_gbe_reinit_locked(adapter);
+ rtnl_unlock();
}
/**
*/
void pch_gbe_reinit_locked(struct pch_gbe_adapter *adapter)
{
- struct net_device *netdev = adapter->netdev;
-
- rtnl_lock();
- if (netif_running(netdev)) {
- pch_gbe_down(adapter);
- pch_gbe_up(adapter);
- }
- rtnl_unlock();
+ pch_gbe_down(adapter);
+ pch_gbe_up(adapter);
}
/**
/*
* Wait a full Tx time (1.2ms) + some guard time, NS says 1.6ms total.
- * Early datasheets said to poll the reset bit, but now they say that
- * it "is not a reliable indicator and subsequently should be ignored."
- * We wait at least 10ms.
+ * We wait at least 2ms.
*/
- mdelay(10);
+ mdelay(2);
/*
* Reset RBCR[01] back to zero as per magic incantation.
if (pm)
pm_request_resume(&tp->pci_dev->dev);
netif_carrier_on(dev);
- netif_info(tp, ifup, dev, "link up\n");
+ if (net_ratelimit())
+ netif_info(tp, ifup, dev, "link up\n");
} else {
netif_carrier_off(dev);
netif_info(tp, ifdown, dev, "link down\n");
RTL_W16(IntrMitigate, 0x5151);
/* Work around for RxFIFO overflow. */
- if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_22) {
tp->intr_event |= RxFIFOOver | PCSTimeout;
tp->intr_event &= ~RxOverflow;
}
break;
}
- /* Work around for rx fifo overflow */
- if (unlikely(status & RxFIFOOver) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
- netif_stop_queue(dev);
- rtl8169_tx_timeout(dev);
- break;
+ if (unlikely(status & RxFIFOOver)) {
+ switch (tp->mac_version) {
+ /* Work around for rx fifo overflow */
+ case RTL_GIGA_MAC_VER_11:
+ case RTL_GIGA_MAC_VER_22:
+ case RTL_GIGA_MAC_VER_26:
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ goto done;
+ /* Testers needed. */
+ case RTL_GIGA_MAC_VER_17:
+ case RTL_GIGA_MAC_VER_19:
+ case RTL_GIGA_MAC_VER_20:
+ case RTL_GIGA_MAC_VER_21:
+ case RTL_GIGA_MAC_VER_23:
+ case RTL_GIGA_MAC_VER_24:
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ /* Experimental science. Pktgen proof. */
+ case RTL_GIGA_MAC_VER_12:
+ case RTL_GIGA_MAC_VER_25:
+ if (status == RxFIFOOver)
+ goto done;
+ break;
+ default:
+ break;
+ }
}
if (unlikely(status & SYSErr)) {
(status & RxFIFOOver) ? (status | RxOverflow) : status);
status = RTL_R16(IntrStatus);
}
-
+done:
return IRQ_RETVAL(handled);
}
"cur_rx:%4.4d, dirty_rx:%4.4d\n",
net_dev->name, sis_priv->cur_rx,
sis_priv->dirty_rx);
+ dev_kfree_skb(skb);
break;
}
/*
* cdc_ncm.c
*
- * Copyright (C) ST-Ericsson 2010
+ * Copyright (C) ST-Ericsson 2010-2011
* Contact: Alexey Orishko <alexey.orishko@stericsson.com>
* Original author: Hans Petter Selasky <hans.petter.selasky@stericsson.com>
*
#include <linux/usb/usbnet.h>
#include <linux/usb/cdc.h>
-#define DRIVER_VERSION "17-Jan-2011"
+#define DRIVER_VERSION "7-Feb-2011"
/* CDC NCM subclass 3.2.1 */
#define USB_CDC_NCM_NDP16_LENGTH_MIN 0x10
*/
#define CDC_NCM_DPT_DATAGRAMS_MAX 32
+/* Maximum amount of IN datagrams in NTB */
+#define CDC_NCM_DPT_DATAGRAMS_IN_MAX 0 /* unlimited */
+
/* Restart the timer, if amount of datagrams is less than given value */
#define CDC_NCM_RESTART_TIMER_DATAGRAM_CNT 3
(sizeof(struct usb_cdc_ncm_nth16) + sizeof(struct usb_cdc_ncm_ndp16) + \
(CDC_NCM_DPT_DATAGRAMS_MAX + 1) * sizeof(struct usb_cdc_ncm_dpe16))
-struct connection_speed_change {
- __le32 USBitRate; /* holds 3GPP downlink value, bits per second */
- __le32 DSBitRate; /* holds 3GPP uplink value, bits per second */
-} __attribute__ ((packed));
-
struct cdc_ncm_data {
struct usb_cdc_ncm_nth16 nth16;
struct usb_cdc_ncm_ndp16 ndp16;
{
struct usb_cdc_notification req;
u32 val;
- __le16 max_datagram_size;
u8 flags;
u8 iface_no;
int err;
+ u16 ntb_fmt_supported;
iface_no = ctx->control->cur_altsetting->desc.bInterfaceNumber;
ctx->tx_remainder = le16_to_cpu(ctx->ncm_parm.wNdpOutPayloadRemainder);
ctx->tx_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutDivisor);
ctx->tx_ndp_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutAlignment);
+ /* devices prior to NCM Errata shall set this field to zero */
+ ctx->tx_max_datagrams = le16_to_cpu(ctx->ncm_parm.wNtbOutMaxDatagrams);
+ ntb_fmt_supported = le16_to_cpu(ctx->ncm_parm.bmNtbFormatsSupported);
if (ctx->func_desc != NULL)
flags = ctx->func_desc->bmNetworkCapabilities;
pr_debug("dwNtbInMaxSize=%u dwNtbOutMaxSize=%u "
"wNdpOutPayloadRemainder=%u wNdpOutDivisor=%u "
- "wNdpOutAlignment=%u flags=0x%x\n",
+ "wNdpOutAlignment=%u wNtbOutMaxDatagrams=%u flags=0x%x\n",
ctx->rx_max, ctx->tx_max, ctx->tx_remainder, ctx->tx_modulus,
- ctx->tx_ndp_modulus, flags);
+ ctx->tx_ndp_modulus, ctx->tx_max_datagrams, flags);
- /* max count of tx datagrams without terminating NULL entry */
- ctx->tx_max_datagrams = CDC_NCM_DPT_DATAGRAMS_MAX;
+ /* max count of tx datagrams */
+ if ((ctx->tx_max_datagrams == 0) ||
+ (ctx->tx_max_datagrams > CDC_NCM_DPT_DATAGRAMS_MAX))
+ ctx->tx_max_datagrams = CDC_NCM_DPT_DATAGRAMS_MAX;
/* verify maximum size of received NTB in bytes */
- if ((ctx->rx_max <
- (CDC_NCM_MIN_HDR_SIZE + CDC_NCM_MIN_DATAGRAM_SIZE)) ||
- (ctx->rx_max > CDC_NCM_NTB_MAX_SIZE_RX)) {
+ if (ctx->rx_max < USB_CDC_NCM_NTB_MIN_IN_SIZE) {
+ pr_debug("Using min receive length=%d\n",
+ USB_CDC_NCM_NTB_MIN_IN_SIZE);
+ ctx->rx_max = USB_CDC_NCM_NTB_MIN_IN_SIZE;
+ }
+
+ if (ctx->rx_max > CDC_NCM_NTB_MAX_SIZE_RX) {
pr_debug("Using default maximum receive length=%d\n",
CDC_NCM_NTB_MAX_SIZE_RX);
ctx->rx_max = CDC_NCM_NTB_MAX_SIZE_RX;
}
+ /* inform device about NTB input size changes */
+ if (ctx->rx_max != le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize)) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_NTB_INPUT_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+
+ if (flags & USB_CDC_NCM_NCAP_NTB_INPUT_SIZE) {
+ struct usb_cdc_ncm_ndp_input_size ndp_in_sz;
+
+ req.wLength = 8;
+ ndp_in_sz.dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+ ndp_in_sz.wNtbInMaxDatagrams =
+ cpu_to_le16(CDC_NCM_DPT_DATAGRAMS_MAX);
+ ndp_in_sz.wReserved = 0;
+ err = cdc_ncm_do_request(ctx, &req, &ndp_in_sz, 0, NULL,
+ 1000);
+ } else {
+ __le32 dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+
+ req.wLength = 4;
+ err = cdc_ncm_do_request(ctx, &req, &dwNtbInMaxSize, 0,
+ NULL, 1000);
+ }
+
+ if (err)
+ pr_debug("Setting NTB Input Size failed\n");
+ }
+
/* verify maximum size of transmitted NTB in bytes */
if ((ctx->tx_max <
(CDC_NCM_MIN_HDR_SIZE + CDC_NCM_MIN_DATAGRAM_SIZE)) ||
/* additional configuration */
/* set CRC Mode */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_SET_CRC_MODE;
- req.wValue = cpu_to_le16(USB_CDC_NCM_CRC_NOT_APPENDED);
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = 0;
-
- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
- if (err)
- pr_debug("Setting CRC mode off failed\n");
+ if (flags & USB_CDC_NCM_NCAP_CRC_MODE) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_CRC_MODE;
+ req.wValue = cpu_to_le16(USB_CDC_NCM_CRC_NOT_APPENDED);
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 0;
+
+ err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+ if (err)
+ pr_debug("Setting CRC mode off failed\n");
+ }
- /* set NTB format */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_SET_NTB_FORMAT;
- req.wValue = cpu_to_le16(USB_CDC_NCM_NTB16_FORMAT);
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = 0;
+ /* set NTB format, if both formats are supported */
+ if (ntb_fmt_supported & USB_CDC_NCM_NTH32_SIGN) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_NTB_FORMAT;
+ req.wValue = cpu_to_le16(USB_CDC_NCM_NTB16_FORMAT);
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 0;
+
+ err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+ if (err)
+ pr_debug("Setting NTB format to 16-bit failed\n");
+ }
- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
- if (err)
- pr_debug("Setting NTB format to 16-bit failed\n");
+ ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
/* set Max Datagram Size (MTU) */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_GET_MAX_DATAGRAM_SIZE;
- req.wValue = 0;
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = cpu_to_le16(2);
+ if (flags & USB_CDC_NCM_NCAP_MAX_DATAGRAM_SIZE) {
+ __le16 max_datagram_size;
+ u16 eth_max_sz = le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
+
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_GET_MAX_DATAGRAM_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = cpu_to_le16(2);
+
+ err = cdc_ncm_do_request(ctx, &req, &max_datagram_size, 0, NULL,
+ 1000);
+ if (err) {
+ pr_debug("GET_MAX_DATAGRAM_SIZE failed, use size=%u\n",
+ CDC_NCM_MIN_DATAGRAM_SIZE);
+ } else {
+ ctx->max_datagram_size = le16_to_cpu(max_datagram_size);
+ /* Check Eth descriptor value */
+ if (eth_max_sz < CDC_NCM_MAX_DATAGRAM_SIZE) {
+ if (ctx->max_datagram_size > eth_max_sz)
+ ctx->max_datagram_size = eth_max_sz;
+ } else {
+ if (ctx->max_datagram_size >
+ CDC_NCM_MAX_DATAGRAM_SIZE)
+ ctx->max_datagram_size =
+ CDC_NCM_MAX_DATAGRAM_SIZE;
+ }
- err = cdc_ncm_do_request(ctx, &req, &max_datagram_size, 0, NULL, 1000);
- if (err) {
- pr_debug(" GET_MAX_DATAGRAM_SIZE failed, using size=%u\n",
- CDC_NCM_MIN_DATAGRAM_SIZE);
- /* use default */
- ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
- } else {
- ctx->max_datagram_size = le16_to_cpu(max_datagram_size);
+ if (ctx->max_datagram_size < CDC_NCM_MIN_DATAGRAM_SIZE)
+ ctx->max_datagram_size =
+ CDC_NCM_MIN_DATAGRAM_SIZE;
+
+ /* if value changed, update device */
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_MAX_DATAGRAM_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 2;
+ max_datagram_size = cpu_to_le16(ctx->max_datagram_size);
+
+ err = cdc_ncm_do_request(ctx, &req, &max_datagram_size,
+ 0, NULL, 1000);
+ if (err)
+ pr_debug("SET_MAX_DATAGRAM_SIZE failed\n");
+ }
- if (ctx->max_datagram_size < CDC_NCM_MIN_DATAGRAM_SIZE)
- ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
- else if (ctx->max_datagram_size > CDC_NCM_MAX_DATAGRAM_SIZE)
- ctx->max_datagram_size = CDC_NCM_MAX_DATAGRAM_SIZE;
}
if (ctx->netdev->mtu != (ctx->max_datagram_size - ETH_HLEN))
ctx->ether_desc =
(const struct usb_cdc_ether_desc *)buf;
-
dev->hard_mtu =
le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
- if (dev->hard_mtu <
- (CDC_NCM_MIN_DATAGRAM_SIZE - ETH_HLEN))
- dev->hard_mtu =
- CDC_NCM_MIN_DATAGRAM_SIZE - ETH_HLEN;
-
- else if (dev->hard_mtu >
- (CDC_NCM_MAX_DATAGRAM_SIZE - ETH_HLEN))
- dev->hard_mtu =
- CDC_NCM_MAX_DATAGRAM_SIZE - ETH_HLEN;
+ if (dev->hard_mtu < CDC_NCM_MIN_DATAGRAM_SIZE)
+ dev->hard_mtu = CDC_NCM_MIN_DATAGRAM_SIZE;
+ else if (dev->hard_mtu > CDC_NCM_MAX_DATAGRAM_SIZE)
+ dev->hard_mtu = CDC_NCM_MAX_DATAGRAM_SIZE;
break;
case USB_CDC_NCM_TYPE:
u32 offset;
u32 last_offset;
u16 n = 0;
- u8 timeout = 0;
+ u8 ready2send = 0;
/* if there is a remaining skb, it gets priority */
if (skb != NULL)
swap(skb, ctx->tx_rem_skb);
else
- timeout = 1;
+ ready2send = 1;
/*
* +----------------+
for (; n < ctx->tx_max_datagrams; n++) {
/* check if end of transmit buffer is reached */
- if (offset >= ctx->tx_max)
+ if (offset >= ctx->tx_max) {
+ ready2send = 1;
break;
-
+ }
/* compute maximum buffer size */
rem = ctx->tx_max - offset;
}
ctx->tx_rem_skb = skb;
skb = NULL;
-
- /* loop one more time */
- timeout = 1;
+ ready2send = 1;
}
break;
}
ctx->tx_curr_last_offset = last_offset;
goto exit_no_skb;
- } else if ((n < ctx->tx_max_datagrams) && (timeout == 0)) {
+ } else if ((n < ctx->tx_max_datagrams) && (ready2send == 0)) {
/* wait for more frames */
/* push variables */
ctx->tx_curr_skb = skb_out;
cpu_to_le16(sizeof(ctx->tx_ncm.nth16));
ctx->tx_ncm.nth16.wSequence = cpu_to_le16(ctx->tx_seq);
ctx->tx_ncm.nth16.wBlockLength = cpu_to_le16(last_offset);
- ctx->tx_ncm.nth16.wFpIndex = ALIGN(sizeof(struct usb_cdc_ncm_nth16),
+ ctx->tx_ncm.nth16.wNdpIndex = ALIGN(sizeof(struct usb_cdc_ncm_nth16),
ctx->tx_ndp_modulus);
memcpy(skb_out->data, &(ctx->tx_ncm.nth16), sizeof(ctx->tx_ncm.nth16));
rem = sizeof(ctx->tx_ncm.ndp16) + ((ctx->tx_curr_frame_num + 1) *
sizeof(struct usb_cdc_ncm_dpe16));
ctx->tx_ncm.ndp16.wLength = cpu_to_le16(rem);
- ctx->tx_ncm.ndp16.wNextFpIndex = 0; /* reserved */
+ ctx->tx_ncm.ndp16.wNextNdpIndex = 0; /* reserved */
- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wFpIndex,
+ memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex,
&(ctx->tx_ncm.ndp16),
sizeof(ctx->tx_ncm.ndp16));
- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wFpIndex +
+ memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex +
sizeof(ctx->tx_ncm.ndp16),
&(ctx->tx_ncm.dpe16),
(ctx->tx_curr_frame_num + 1) *
goto error;
}
- temp = le16_to_cpu(ctx->rx_ncm.nth16.wFpIndex);
+ temp = le16_to_cpu(ctx->rx_ncm.nth16.wNdpIndex);
if ((temp + sizeof(ctx->rx_ncm.ndp16)) > actlen) {
pr_debug("invalid DPT16 index\n");
goto error;
static void
cdc_ncm_speed_change(struct cdc_ncm_ctx *ctx,
- struct connection_speed_change *data)
+ struct usb_cdc_speed_change *data)
{
- uint32_t rx_speed = le32_to_cpu(data->USBitRate);
- uint32_t tx_speed = le32_to_cpu(data->DSBitRate);
+ uint32_t rx_speed = le32_to_cpu(data->DLBitRRate);
+ uint32_t tx_speed = le32_to_cpu(data->ULBitRate);
/*
* Currently the USB-NET API does not support reporting the actual
/* test for split data in 8-byte chunks */
if (test_and_clear_bit(EVENT_STS_SPLIT, &dev->flags)) {
cdc_ncm_speed_change(ctx,
- (struct connection_speed_change *)urb->transfer_buffer);
+ (struct usb_cdc_speed_change *)urb->transfer_buffer);
return;
}
break;
case USB_CDC_NOTIFY_SPEED_CHANGE:
- if (urb->actual_length <
- (sizeof(*event) + sizeof(struct connection_speed_change)))
+ if (urb->actual_length < (sizeof(*event) +
+ sizeof(struct usb_cdc_speed_change)))
set_bit(EVENT_STS_SPLIT, &dev->flags);
else
cdc_ncm_speed_change(ctx,
- (struct connection_speed_change *) &event[1]);
+ (struct usb_cdc_speed_change *) &event[1]);
break;
default:
}
}
+static void virtnet_napi_enable(struct virtnet_info *vi)
+{
+ napi_enable(&vi->napi);
+
+ /* If all buffers were filled by other side before we napi_enabled, we
+ * won't get another interrupt, so process any outstanding packets
+ * now. virtnet_poll wants to re-enable the queue, so we disable here.
+ * We synchronize against interrupts via NAPI_STATE_SCHED */
+ if (napi_schedule_prep(&vi->napi)) {
+ virtqueue_disable_cb(vi->rvq);
+ __napi_schedule(&vi->napi);
+ }
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
vi = container_of(work, struct virtnet_info, refill.work);
napi_disable(&vi->napi);
still_empty = !try_fill_recv(vi, GFP_KERNEL);
- napi_enable(&vi->napi);
+ virtnet_napi_enable(vi);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again. */
{
struct virtnet_info *vi = netdev_priv(dev);
- napi_enable(&vi->napi);
-
- /* If all buffers were filled by other side before we napi_enabled, we
- * won't get another interrupt, so process any outstanding packets
- * now. virtnet_poll wants re-enable the queue, so we disable here.
- * We synchronize against interrupts via NAPI_STATE_SCHED */
- if (napi_schedule_prep(&vi->napi)) {
- virtqueue_disable_cb(vi->rvq);
- __napi_schedule(&vi->napi);
- }
+ virtnet_napi_enable(vi);
return 0;
}
if (status != VXGE_HW_OK)
goto exit;
- if ((rts_table != VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) ||
+ if ((rts_table != VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) &&
(rts_table !=
VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT))
*data1 = 0;
for (i = 0; i < qmax; i++) {
err = ath5k_hw_stop_tx_dma(ah, i);
/* -EINVAL -> queue inactive */
- if (err != -EINVAL)
+ if (err && err != -EINVAL)
return err;
}
- return err;
+ return 0;
}
if (!ah->ah_bwmode) {
dur = ieee80211_generic_frame_duration(sc->hw,
NULL, len, rate);
- return dur;
+ return le16_to_cpu(dur);
}
bitrate = rate->bitrate;
* what rate we should choose to TX ACKs. */
tx_time = ath5k_hw_get_frame_duration(ah, 10, rate);
- tx_time = le16_to_cpu(tx_time);
-
ath5k_hw_reg_write(ah, tx_time, reg);
if (!(rate->flags & IEEE80211_RATE_SHORT_PREAMBLE))
}
/* WAR for ASPM system hang */
- if (AR_SREV_9280(ah) || AR_SREV_9285(ah) || AR_SREV_9287(ah)) {
+ if (AR_SREV_9285(ah) || AR_SREV_9287(ah))
val |= (AR_WA_BIT6 | AR_WA_BIT7);
- }
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
struct ath_buf_state {
u8 bf_type;
u8 bfs_paprd;
+ unsigned long bfs_paprd_timestamp;
enum ath9k_internal_frame_type bfs_ftype;
};
struct work_struct paprd_work;
struct work_struct hw_check_work;
struct completion paprd_complete;
- bool paprd_pending;
u32 intrstatus;
u32 sc_flags; /* SC_OP_* */
{
ath9k_htc_exit_debug(priv->ah);
ath9k_hw_deinit(priv->ah);
- tasklet_kill(&priv->swba_tasklet);
- tasklet_kill(&priv->rx_tasklet);
- tasklet_kill(&priv->tx_tasklet);
kfree(priv->ah);
priv->ah = NULL;
}
int ret = 0;
u8 cmd_rsp;
- /* Cancel all the running timers/work .. */
- cancel_work_sync(&priv->fatal_work);
- cancel_work_sync(&priv->ps_work);
- cancel_delayed_work_sync(&priv->ath9k_led_blink_work);
- ath9k_led_stop_brightness(priv);
-
mutex_lock(&priv->mutex);
if (priv->op_flags & OP_INVALID) {
WMI_CMD(WMI_DISABLE_INTR_CMDID);
WMI_CMD(WMI_DRAIN_TXQ_ALL_CMDID);
WMI_CMD(WMI_STOP_RECV_CMDID);
+
+ tasklet_kill(&priv->swba_tasklet);
+ tasklet_kill(&priv->rx_tasklet);
+ tasklet_kill(&priv->tx_tasklet);
+
skb_queue_purge(&priv->tx_queue);
+ mutex_unlock(&priv->mutex);
+
+ /* Cancel all the running timers/work .. */
+ cancel_work_sync(&priv->fatal_work);
+ cancel_work_sync(&priv->ps_work);
+ cancel_delayed_work_sync(&priv->ath9k_led_blink_work);
+ ath9k_led_stop_brightness(priv);
+
+ mutex_lock(&priv->mutex);
+
/* Remove monitor interface here */
if (ah->opmode == NL80211_IFTYPE_MONITOR) {
if (ath9k_htc_remove_monitor_interface(priv))
err_queues:
ath9k_hw_deinit(ah);
err_hw:
- tasklet_kill(&sc->intr_tq);
- tasklet_kill(&sc->bcon_tasklet);
kfree(ah);
sc->sc_ah = NULL;
ath9k_hw_deinit(sc->sc_ah);
- tasklet_kill(&sc->intr_tq);
- tasklet_kill(&sc->bcon_tasklet);
-
kfree(sc->sc_ah);
sc->sc_ah = NULL;
}
wiphy_rfkill_stop_polling(sc->hw->wiphy);
ath_deinit_leds(sc);
+ ath9k_ps_restore(sc);
+
for (i = 0; i < sc->num_sec_wiphy; i++) {
struct ath_wiphy *aphy = sc->sec_wiphy[i];
if (aphy == NULL)
{
struct ieee80211_hw *hw = sc->hw;
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ath_hw *ah = sc->sc_ah;
+ struct ath_common *common = ath9k_hw_common(ah);
struct ath_tx_control txctl;
int time_left;
tx_info->control.rates[1].idx = -1;
init_completion(&sc->paprd_complete);
- sc->paprd_pending = true;
txctl.paprd = BIT(chain);
- if (ath_tx_start(hw, skb, &txctl) != 0)
+
+ if (ath_tx_start(hw, skb, &txctl) != 0) {
+ ath_dbg(common, ATH_DBG_XMIT, "PAPRD TX failed\n");
+ dev_kfree_skb_any(skb);
return false;
+ }
time_left = wait_for_completion_timeout(&sc->paprd_complete,
msecs_to_jiffies(ATH_PAPRD_TIMEOUT));
- sc->paprd_pending = false;
if (!time_left)
ath_dbg(ath9k_hw_common(sc->sc_ah), ATH_DBG_CALIBRATE,
spin_unlock_bh(&sc->sc_pcu_lock);
ath9k_ps_restore(sc);
-
- ath9k_setpower(sc, ATH9K_PM_FULL_SLEEP);
}
int ath_reset(struct ath_softc *sc, bool retry_tx)
spin_lock_bh(&sc->sc_pcu_lock);
+ /* prevent tasklets from re-enabling interrupts once we disable them */
+ ah->imask &= ~ATH9K_INT_GLOBAL;
+
/* make sure h/w will not generate any interrupt
* before setting the invalid flag. */
ath9k_hw_disable_interrupts(ah);
spin_unlock_bh(&sc->sc_pcu_lock);
+ /* we can now sync irq and kill any running tasklets, since we already
+ * disabled interrupts and are not holding a spin lock */
+ synchronize_irq(sc->irq);
+ tasklet_kill(&sc->intr_tq);
+ tasklet_kill(&sc->bcon_tasklet);
+
ath9k_ps_restore(sc);
sc->ps_idle = true;
ar9003_hw_set_paprd_txdesc(sc->sc_ah, bf->bf_desc,
bf->bf_state.bfs_paprd);
+ if (txctl->paprd)
+ bf->bf_state.bfs_paprd_timestamp = jiffies;
+
ath_tx_send_normal(sc, txctl->txq, tid, &bf_head);
}
bf->bf_buf_addr = 0;
if (bf->bf_state.bfs_paprd) {
- if (!sc->paprd_pending)
+ if (time_after(jiffies,
+ bf->bf_state.bfs_paprd_timestamp +
+ msecs_to_jiffies(ATH_PAPRD_TIMEOUT)))
dev_kfree_skb_any(skb);
else
complete(&sc->paprd_complete);
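
The PAPRD change above replaces a completion flag with a jiffies timestamp and a time_after() check. time_after() stays correct across counter wraparound because it compares the signed difference of the two values; a small stand-alone sketch of that idea (tick_t and tick_after() are illustrative stand-ins, not kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	typedef unsigned long tick_t;

	/* Behaves like the kernel's time_after(a, b): true if a is later than b,
	 * even when the free-running counter has wrapped in between. */
	static bool tick_after(tick_t a, tick_t b)
	{
		return (long)(b - a) < 0;
	}

	int main(void)
	{
		tick_t stamp = (tick_t)-5;   /* timestamp taken just before wraparound */
		tick_t now = 10;             /* counter has since wrapped past zero */

		printf("%d\n", tick_after(now, stamp));   /* 1: "now" is later */
		printf("%d\n", tick_after(stamp, now));   /* 0 */
		return 0;
	}
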
cam = ieee80211_check_tim(tim_ie, tim_len, ar->common.curaid);
/* 2. Maybe the AP wants to send multicast/broadcast data? */
- cam = !!(tim_ie->bitmap_ctrl & 0x01);
+ cam |= !!(tim_ie->bitmap_ctrl & 0x01);
if (!cam) {
/* back to low-power land. */
.fw_name_pre = IWL6050_FW_PRE, \
.ucode_api_max = IWL6050_UCODE_API_MAX, \
.ucode_api_min = IWL6050_UCODE_API_MIN, \
+ .valid_tx_ant = ANT_AB, /* .cfg overwrite */ \
+ .valid_rx_ant = ANT_AB, /* .cfg overwrite */ \
.ops = &iwl6050_ops, \
.eeprom_ver = EEPROM_6050_EEPROM_VERSION, \
.eeprom_calib_ver = EEPROM_6050_TX_POWER_VERSION, \
/* only Re-enable if disabled by irq */
if (test_bit(STATUS_INT_ENABLED, &priv->status))
iwl_enable_interrupts(priv);
+ /* Re-enable RF_KILL if it occurred */
+ else if (handled & CSR_INT_BIT_RF_KILL)
+ iwl_enable_rfkill_int(priv);
#ifdef CONFIG_IWLWIFI_DEBUG
if (iwl_get_debug_level(priv) & (IWL_DL_ISR)) {
/* only Re-enable if disabled by irq */
if (test_bit(STATUS_INT_ENABLED, &priv->status))
iwl_enable_interrupts(priv);
+ /* Re-enable RF_KILL if it occurred */
+ else if (handled & CSR_INT_BIT_RF_KILL)
+ iwl_enable_rfkill_int(priv);
}
/* the threshold ratio of actual_ack_cnt to expected_ack_cnt in percent */
}
static void efuse_write_data_case1(struct ieee80211_hw *hw, u16 *efuse_addr,
- u8 efuse_data, u8 offset, int *bcontinual,
- u8 *write_state, struct pgpkt_struct target_pkt,
- int *repeat_times, int *bresult, u8 word_en)
+ u8 efuse_data, u8 offset, int *bcontinual,
+ u8 *write_state, struct pgpkt_struct *target_pkt,
+ int *repeat_times, int *bresult, u8 word_en)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct pgpkt_struct tmp_pkt;
tmp_pkt.word_en = tmp_header & 0x0F;
tmp_word_cnts = efuse_calculate_word_cnts(tmp_pkt.word_en);
- if (tmp_pkt.offset != target_pkt.offset) {
- efuse_addr = efuse_addr + (tmp_word_cnts * 2) + 1;
+ if (tmp_pkt.offset != target_pkt->offset) {
+ *efuse_addr = *efuse_addr + (tmp_word_cnts * 2) + 1;
*write_state = PG_STATE_HEADER;
} else {
for (tmpindex = 0; tmpindex < (tmp_word_cnts * 2); tmpindex++) {
}
if (bdataempty == false) {
- efuse_addr = efuse_addr + (tmp_word_cnts * 2) + 1;
+ *efuse_addr = *efuse_addr + (tmp_word_cnts * 2) + 1;
*write_state = PG_STATE_HEADER;
} else {
match_word_en = 0x0F;
- if (!((target_pkt.word_en & BIT(0)) |
+ if (!((target_pkt->word_en & BIT(0)) |
(tmp_pkt.word_en & BIT(0))))
match_word_en &= (~BIT(0));
- if (!((target_pkt.word_en & BIT(1)) |
+ if (!((target_pkt->word_en & BIT(1)) |
(tmp_pkt.word_en & BIT(1))))
match_word_en &= (~BIT(1));
- if (!((target_pkt.word_en & BIT(2)) |
+ if (!((target_pkt->word_en & BIT(2)) |
(tmp_pkt.word_en & BIT(2))))
match_word_en &= (~BIT(2));
- if (!((target_pkt.word_en & BIT(3)) |
+ if (!((target_pkt->word_en & BIT(3)) |
(tmp_pkt.word_en & BIT(3))))
match_word_en &= (~BIT(3));
badworden = efuse_word_enable_data_write(
hw, *efuse_addr + 1,
tmp_pkt.word_en,
- target_pkt.data);
+ target_pkt->data);
if (0x0F != (badworden & 0x0F)) {
u8 reorg_offset = offset;
}
tmp_word_en = 0x0F;
- if ((target_pkt.word_en & BIT(0)) ^
+ if ((target_pkt->word_en & BIT(0)) ^
(match_word_en & BIT(0)))
tmp_word_en &= (~BIT(0));
- if ((target_pkt.word_en & BIT(1)) ^
+ if ((target_pkt->word_en & BIT(1)) ^
(match_word_en & BIT(1)))
tmp_word_en &= (~BIT(1));
- if ((target_pkt.word_en & BIT(2)) ^
+ if ((target_pkt->word_en & BIT(2)) ^
(match_word_en & BIT(2)))
tmp_word_en &= (~BIT(2));
- if ((target_pkt.word_en & BIT(3)) ^
+ if ((target_pkt->word_en & BIT(3)) ^
(match_word_en & BIT(3)))
tmp_word_en &= (~BIT(3));
if ((tmp_word_en & 0x0F) != 0x0F) {
*efuse_addr = efuse_get_current_size(hw);
- target_pkt.offset = offset;
- target_pkt.word_en = tmp_word_en;
+ target_pkt->offset = offset;
+ target_pkt->word_en = tmp_word_en;
} else
*bcontinual = false;
*write_state = PG_STATE_HEADER;
}
} else {
*efuse_addr += (2 * tmp_word_cnts) + 1;
- target_pkt.offset = offset;
- target_pkt.word_en = word_en;
+ target_pkt->offset = offset;
+ target_pkt->word_en = word_en;
*write_state = PG_STATE_HEADER;
}
}
efuse_write_data_case1(hw, &efuse_addr,
efuse_data, offset,
&bcontinual,
- &write_state, target_pkt,
+ &write_state, &target_pkt,
&repeat_times, &bresult,
word_en);
else
if (changed & BSS_CHANGED_BEACON) {
beacon = ieee80211_beacon_get(hw, vif);
+ if (!beacon)
+ goto out_sleep;
+
ret = wl1251_cmd_template_set(wl, CMD_BEACON, beacon->data,
beacon->len);
spi_message_add_tail(&t, &m);
spi_sync(wl_to_spi(wl), &m);
- kfree(cmd);
-
wl1271_dump(DEBUG_SPI, "spi reset -> ", cmd, WSPI_INIT_CMD_LEN);
+ kfree(cmd);
}
static void wl1271_spi_init(struct wl1271 *wl)
unsigned long rx_pfn_array[NET_RX_RING_SIZE];
struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+
+ /* Statistics */
+ int rx_gso_checksum_fixup;
};
struct netfront_rx_info {
return cons;
}
-static int skb_checksum_setup(struct sk_buff *skb)
+static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
{
struct iphdr *iph;
unsigned char *th;
int err = -EPROTO;
+ int recalculate_partial_csum = 0;
+
+ /*
+ * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
+ * peers can fail to set NETRXF_csum_blank when sending a GSO
+ * frame. In this case force the SKB to CHECKSUM_PARTIAL and
+ * recalculate the partial checksum.
+ */
+ if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
+ struct netfront_info *np = netdev_priv(dev);
+ np->rx_gso_checksum_fixup++;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ recalculate_partial_csum = 1;
+ }
+
+ /* A non-CHECKSUM_PARTIAL SKB does not require setup. */
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
if (skb->protocol != htons(ETH_P_IP))
goto out;
switch (iph->protocol) {
case IPPROTO_TCP:
skb->csum_offset = offsetof(struct tcphdr, check);
+
+ if (recalculate_partial_csum) {
+ struct tcphdr *tcph = (struct tcphdr *)th;
+ tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
+ skb->len - iph->ihl*4,
+ IPPROTO_TCP, 0);
+ }
break;
case IPPROTO_UDP:
skb->csum_offset = offsetof(struct udphdr, check);
+
+ if (recalculate_partial_csum) {
+ struct udphdr *udph = (struct udphdr *)th;
+ udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
+ skb->len - iph->ihl*4,
+ IPPROTO_UDP, 0);
+ }
break;
default:
if (net_ratelimit())
/* Ethernet work: Delayed to here as it peeks the header. */
skb->protocol = eth_type_trans(skb, dev);
- if (skb->ip_summed == CHECKSUM_PARTIAL) {
- if (skb_checksum_setup(skb)) {
- kfree_skb(skb);
- packets_dropped++;
- dev->stats.rx_errors++;
- continue;
- }
+ if (checksum_setup(dev, skb)) {
+ kfree_skb(skb);
+ packets_dropped++;
+ dev->stats.rx_errors++;
+ continue;
}
dev->stats.rx_packets++;
}
}
+static const struct xennet_stat {
+ char name[ETH_GSTRING_LEN];
+ u16 offset;
+} xennet_stats[] = {
+ {
+ "rx_gso_checksum_fixup",
+ offsetof(struct netfront_info, rx_gso_checksum_fixup)
+ },
+};
+
+static int xennet_get_sset_count(struct net_device *dev, int string_set)
+{
+ switch (string_set) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(xennet_stats);
+ default:
+ return -EINVAL;
+ }
+}
+
+static void xennet_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 * data)
+{
+ void *np = netdev_priv(dev);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
+ data[i] = *(int *)(np + xennet_stats[i].offset);
+}
+
+static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
+{
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
+ memcpy(data + i * ETH_GSTRING_LEN,
+ xennet_stats[i].name, ETH_GSTRING_LEN);
+ break;
+ }
+}
+
static const struct ethtool_ops xennet_ethtool_ops =
{
.set_tx_csum = ethtool_op_set_tx_csum,
.set_sg = xennet_set_sg,
.set_tso = xennet_set_tso,
.get_link = ethtool_op_get_link,
+
+ .get_sset_count = xennet_get_sset_count,
+ .get_ethtool_stats = xennet_get_ethtool_stats,
+ .get_strings = xennet_get_strings,
};
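
The xen-netfront ethtool hunk above drives both get_strings and get_ethtool_stats from a single name/offset table, so adding a counter only needs one table entry. A simplified userspace analogue of that offsetof()-based table (the struct and field names here are assumptions for illustration, not the driver's own):

	#include <stddef.h>
	#include <stdio.h>
	#include <string.h>

	struct priv {
		int rx_gso_checksum_fixup;
		int rx_dropped;
	};

	static const struct stat_desc {
		const char *name;
		size_t offset;
	} stats[] = {
		{ "rx_gso_checksum_fixup", offsetof(struct priv, rx_gso_checksum_fixup) },
		{ "rx_dropped",            offsetof(struct priv, rx_dropped) },
	};

	int main(void)
	{
		struct priv p = { .rx_gso_checksum_fixup = 3, .rx_dropped = 1 };

		for (size_t i = 0; i < sizeof(stats) / sizeof(stats[0]); i++) {
			int val;

			memcpy(&val, (const char *)&p + stats[i].offset, sizeof(val));
			printf("%s: %d\n", stats[i].name, val);
		}
		return 0;
	}
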
#ifdef CONFIG_SYSFS
rtc->id = id;
rtc->ops = ops;
rtc->owner = owner;
+ rtc->irq_freq = 1;
rtc->max_user_freq = 64;
rtc->dev.parent = dev;
rtc->dev.class = rtc_class;
int err = 0;
unsigned long flags;
+ if (freq <= 0)
+ return -EINVAL;
+
spin_lock_irqsave(&rtc->irq_task_lock, flags);
if (rtc->irq_task != NULL && task == NULL)
err = -EBUSY;
private = (struct dasd_eckd_private *) device->private;
lcu = private->lcu;
+ /* nothing to do if already disconnected */
+ if (!lcu)
+ return;
device->discipline->get_uid(device, &uid);
spin_lock_irqsave(&lcu->lock, flags);
list_del_init(&device->alias_list);
private = (struct dasd_eckd_private *) device->private;
lcu = private->lcu;
+ /* nothing to do if already removed */
+ if (!lcu)
+ return 0;
spin_lock_irqsave(&lcu->lock, flags);
_remove_device_from_lcu(lcu, device);
spin_unlock_irqrestore(&lcu->lock, flags);
static int get_inbound_buffer_frontier(struct qdio_q *q)
{
int count, stop;
- unsigned char state;
+ unsigned char state = 0;
/*
* Don't check 128 buffers, as otherwise qdio_inbound_q_moved
static int get_outbound_buffer_frontier(struct qdio_q *q)
{
int count, stop;
- unsigned char state;
+ unsigned char state = 0;
if (need_siga_sync(q))
if (((queue_type(q) != QDIO_IQDIO_QFMT) &&
struct iucv_event ev;
int rc;
- if (memcmp(iucvMagic, ipuser, sizeof(ipuser)))
+ if (memcmp(iucvMagic, ipuser, 16))
/* ipuser must match iucvMagic. */
return -EINVAL;
rc = -EINVAL;
chp_dsc = (struct channelPath_dsc *)ccw_device_get_chp_desc(ccwdev, 0);
if (chp_dsc != NULL) {
/* CHPP field bit 6 == 1 -> single queue */
- if ((chp_dsc->chpp & 0x02) == 0x02)
+ if ((chp_dsc->chpp & 0x02) == 0x02) {
+ if ((atomic_read(&card->qdio.state) !=
+ QETH_QDIO_UNINITIALIZED) &&
+ (card->qdio.no_out_queues == 4))
+ /* change from 4 to 1 outbound queues */
+ qeth_free_qdio_buffers(card);
card->qdio.no_out_queues = 1;
+ if (card->qdio.default_out_queue != 0)
+ dev_info(&card->gdev->dev,
+ "Priority Queueing not supported\n");
+ card->qdio.default_out_queue = 0;
+ } else {
+ if ((atomic_read(&card->qdio.state) !=
+ QETH_QDIO_UNINITIALIZED) &&
+ (card->qdio.no_out_queues == 1)) {
+ /* change from 1 to 4 outbound queues */
+ qeth_free_qdio_buffers(card);
+ card->qdio.default_out_queue = 2;
+ }
+ card->qdio.no_out_queues = 4;
+ }
card->info.func_level = 0x4100 + chp_dsc->desc;
kfree(chp_dsc);
}
- if (card->qdio.no_out_queues == 1) {
- card->qdio.default_out_queue = 0;
- dev_info(&card->gdev->dev,
- "Priority Queueing not supported\n");
- }
QETH_DBF_TEXT_(SETUP, 2, "nr:%x", card->qdio.no_out_queues);
QETH_DBF_TEXT_(SETUP, 2, "lvl:%02x", card->info.func_level);
return;
}
}
-static inline int qeth_get_max_mtu_for_card(int cardtype)
-{
- switch (cardtype) {
-
- case QETH_CARD_TYPE_UNKNOWN:
- case QETH_CARD_TYPE_OSD:
- case QETH_CARD_TYPE_OSN:
- case QETH_CARD_TYPE_OSM:
- case QETH_CARD_TYPE_OSX:
- return 61440;
- case QETH_CARD_TYPE_IQD:
- return 57344;
- default:
- return 1500;
- }
-}
-
-static inline int qeth_get_mtu_out_of_mpc(int cardtype)
-{
- switch (cardtype) {
- case QETH_CARD_TYPE_IQD:
- return 1;
- default:
- return 0;
- }
-}
-
static inline int qeth_get_mtu_outof_framesize(int framesize)
{
switch (framesize) {
case QETH_CARD_TYPE_OSD:
case QETH_CARD_TYPE_OSM:
case QETH_CARD_TYPE_OSX:
- return ((mtu >= 576) && (mtu <= 61440));
case QETH_CARD_TYPE_IQD:
return ((mtu >= 576) &&
- (mtu <= card->info.max_mtu + 4096 - 32));
+ (mtu <= card->info.max_mtu));
case QETH_CARD_TYPE_OSN:
case QETH_CARD_TYPE_UNKNOWN:
default:
memcpy(&card->token.ulp_filter_r,
QETH_ULP_ENABLE_RESP_FILTER_TOKEN(iob->data),
QETH_MPC_TOKEN_LENGTH);
- if (qeth_get_mtu_out_of_mpc(card->info.type)) {
+ if (card->info.type == QETH_CARD_TYPE_IQD) {
memcpy(&framesize, QETH_ULP_ENABLE_RESP_MAX_MTU(iob->data), 2);
mtu = qeth_get_mtu_outof_framesize(framesize);
if (!mtu) {
QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc);
return 0;
}
- card->info.max_mtu = mtu;
+ if (card->info.initial_mtu && (card->info.initial_mtu != mtu)) {
+ /* frame size has changed */
+ if (card->dev &&
+ ((card->dev->mtu == card->info.initial_mtu) ||
+ (card->dev->mtu > mtu)))
+ card->dev->mtu = mtu;
+ qeth_free_qdio_buffers(card);
+ }
card->info.initial_mtu = mtu;
+ card->info.max_mtu = mtu;
card->qdio.in_buf_size = mtu + 2 * PAGE_SIZE;
} else {
card->info.initial_mtu = qeth_get_initial_mtu_for_card(card);
- card->info.max_mtu = qeth_get_max_mtu_for_card(card->info.type);
+ card->info.max_mtu = *(__u16 *)QETH_ULP_ENABLE_RESP_MAX_MTU(
+ iob->data);
card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT;
}
}
}
+static void qeth_determine_capabilities(struct qeth_card *card)
+{
+ int rc;
+ int length;
+ char *prcd;
+ struct ccw_device *ddev;
+ int ddev_offline = 0;
+
+ QETH_DBF_TEXT(SETUP, 2, "detcapab");
+ ddev = CARD_DDEV(card);
+ if (!ddev->online) {
+ ddev_offline = 1;
+ rc = ccw_device_set_online(ddev);
+ if (rc) {
+ QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ goto out;
+ }
+ }
+
+ rc = qeth_read_conf_data(card, (void **) &prcd, &length);
+ if (rc) {
+ QETH_DBF_MESSAGE(2, "%s qeth_read_conf_data returned %i\n",
+ dev_name(&card->gdev->dev), rc);
+ QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ goto out_offline;
+ }
+ qeth_configure_unitaddr(card, prcd);
+ qeth_configure_blkt_default(card, prcd);
+ kfree(prcd);
+
+ rc = qdio_get_ssqd_desc(ddev, &card->ssqd);
+ if (rc)
+ QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+
+out_offline:
+ if (ddev_offline == 1)
+ ccw_device_set_offline(ddev);
+out:
+ return;
+}
+
static int qeth_qdio_establish(struct qeth_card *card)
{
struct qdio_initialize init_data;
QETH_DBF_TEXT(SETUP, 2, "hrdsetup");
atomic_set(&card->force_alloc_skb, 0);
+ qeth_get_channel_path_desc(card);
retry:
if (retries)
QETH_DBF_MESSAGE(2, "%s Retrying to do IDX activates.\n",
else
goto retry;
}
+ qeth_determine_capabilities(card);
qeth_init_tokens(card);
qeth_init_func_level(card);
rc = qeth_idx_activate_channel(&card->read, qeth_idx_read_cb);
card->discipline.ccwgdriver = NULL;
}
-static void qeth_determine_capabilities(struct qeth_card *card)
-{
- int rc;
- int length;
- char *prcd;
-
- QETH_DBF_TEXT(SETUP, 2, "detcapab");
- rc = ccw_device_set_online(CARD_DDEV(card));
- if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
- goto out;
- }
-
-
- rc = qeth_read_conf_data(card, (void **) &prcd, &length);
- if (rc) {
- QETH_DBF_MESSAGE(2, "%s qeth_read_conf_data returned %i\n",
- dev_name(&card->gdev->dev), rc);
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
- goto out_offline;
- }
- qeth_configure_unitaddr(card, prcd);
- qeth_configure_blkt_default(card, prcd);
- kfree(prcd);
-
- rc = qdio_get_ssqd_desc(CARD_DDEV(card), &card->ssqd);
- if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
-
-out_offline:
- ccw_device_set_offline(CARD_DDEV(card));
-out:
- return;
-}
-
static int qeth_core_probe_device(struct ccwgroup_device *gdev)
{
struct qeth_card *card;
case IPA_RC_L2_DUP_LAYER3_MAC:
dev_warn(&card->gdev->dev,
"MAC address %pM already exists\n",
- card->dev->dev_addr);
+ cmd->data.setdelmac.mac);
break;
case IPA_RC_L2_MAC_NOT_AUTH_BY_HYP:
case IPA_RC_L2_MAC_NOT_AUTH_BY_ADP:
dev_warn(&card->gdev->dev,
"MAC address %pM is not authorized\n",
- card->dev->dev_addr);
+ cmd->data.setdelmac.mac);
break;
default:
break;
static int smsg_path_pending(struct iucv_path *path, u8 ipvmid[8],
u8 ipuser[16])
{
- if (strncmp(ipvmid, "*MSG ", sizeof(ipvmid)) != 0)
+ if (strncmp(ipvmid, "*MSG ", 8) != 0)
return -EINVAL;
/* Path pending from *MSG. */
return iucv_path_accept(path, &smsg_handler, "SMSGIUCV ", NULL);
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr.h
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: SCSI RAID Device Driver for
** ARECA RAID Host adapter
*******************************************************************************
struct device_attribute;
/*The limit of outstanding scsi command that firmware can handle*/
#define ARCMSR_MAX_OUTSTANDING_CMD 256
-#define ARCMSR_MAX_FREECCB_NUM 320
-#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.15 2010/02/02"
+#ifdef CONFIG_XEN
+ #define ARCMSR_MAX_FREECCB_NUM 160
+#else
+ #define ARCMSR_MAX_FREECCB_NUM 320
+#endif
+#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.15 2010/08/05"
#define ARCMSR_SCSI_INITIATOR_ID 255
#define ARCMSR_MAX_XFER_SECTORS 512
#define ARCMSR_MAX_XFER_SECTORS_B 4096
#define ARCMSR_MAX_HBB_POSTQUEUE 264
#define ARCMSR_MAX_XFER_LEN 0x26000 /* 152K */
#define ARCMSR_CDB_SG_PAGE_LENGTH 256
-#define SCSI_CMD_ARECA_SPECIFIC 0xE1
#ifndef PCI_DEVICE_ID_ARECA_1880
#define PCI_DEVICE_ID_ARECA_1880 0x1880
#endif
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr_attr.c
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: attributes exported to sysfs and device host
*******************************************************************************
** Copyright (C) 2002 - 2005, Areca Technology Corporation All rights reserved
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr_hba.c
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: SCSI RAID Device Driver for
** ARECA RAID Host adapter
*******************************************************************************
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(ARCMSR_DRIVER_VERSION);
static int sleeptime = 10;
-static int retrycount = 30;
+static int retrycount = 12;
wait_queue_head_t wait_q;
static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
struct scsi_cmnd *cmd);
if (isleep > 0) {
msleep(isleep*1000);
}
- printk(KERN_NOTICE "wake-up\n");
return 0;
}
}
static void arcmsr_drain_donequeue(struct AdapterControlBlock *acb, struct CommandControlBlock *pCCB, bool error)
-
{
int id, lun;
if ((pCCB->acb != acb) || (pCCB->startdone != ARCMSR_CCB_START)) {
, pCCB->startdone
, atomic_read(&acb->ccboutstandingcount));
return;
- }
+ }
arcmsr_report_ccb_state(acb, pCCB, error);
}
case ACB_ADAPTER_TYPE_B: {
struct MessageUnit_B *reg = acb->pmuB;
/*clear all outbound posted Q*/
- writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, &reg->iop2drv_doorbell); /* clear doorbell interrupt */
+ writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, reg->iop2drv_doorbell); /* clear doorbell interrupt */
for (i = 0; i < ARCMSR_MAX_HBB_POSTQUEUE; i++) {
if ((flag_ccb = readl(&reg->done_qbuffer[i])) != 0) {
writel(0, &reg->done_qbuffer[i]);
arcmsr_drain_donequeue(acb, pCCB, error);
}
}
-
static void arcmsr_hbb_postqueue_isr(struct AdapterControlBlock *acb)
{
uint32_t index;
if (atomic_read(&acb->ccboutstandingcount) >=
ARCMSR_MAX_OUTSTANDING_CMD)
return SCSI_MLQUEUE_HOST_BUSY;
- if ((scsicmd == SCSI_CMD_ARECA_SPECIFIC)) {
- printk(KERN_NOTICE "Receiveing SCSI_CMD_ARECA_SPECIFIC command..\n");
- return 0;
- }
ccb = arcmsr_get_freeccb(acb);
if (!ccb)
return SCSI_MLQUEUE_HOST_BUSY;
int index, rtn;
bool error;
polling_hbb_ccb_retry:
+
poll_count++;
/* clear doorbell interrupt */
writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, reg->iop2drv_doorbell);
{
struct MessageUnit_A __iomem *reg = acb->pmuA;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0 ) || ((acb->acb_flags & ACB_F_ABORT) != 0 )){
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if (atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_INBOUND_MESG0_GET_CONFIG, &reg->inbound_msgaddr0);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
}
{
struct MessageUnit_B __iomem *reg = acb->pmuB;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0 ) || ((acb->acb_flags & ACB_F_ABORT) != 0 )){
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
if (atomic_read(&acb->ante_token_value) == atomic_read(&acb->rq_map_token)) {
- atomic_set(&acb->rq_map_token,16);
+ atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if(atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_MESSAGE_GET_CONFIG, reg->drv2iop_doorbell);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
}
{
struct MessageUnit_C __iomem *reg = acb->pmuC;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0) || ((acb->acb_flags & ACB_F_ABORT) != 0)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if (atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_INBOUND_MESG0_GET_CONFIG, &reg->inbound_msgaddr0);
writel(ARCMSR_HBCMU_DRV2IOP_MESSAGE_CMD_DONE, &reg->inbound_doorbell);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
uint32_t intmask_org;
uint8_t rtnval = 0x00;
int i = 0;
+ unsigned long flags;
+
if (atomic_read(&acb->ccboutstandingcount) != 0) {
/* disable all outbound interrupt */
intmask_org = arcmsr_disable_outbound_ints(acb);
for (i = 0; i < ARCMSR_MAX_FREECCB_NUM; i++) {
ccb = acb->pccb_pool[i];
if (ccb->startdone == ARCMSR_CCB_START) {
- arcmsr_ccb_complete(ccb);
+ scsi_dma_unmap(ccb->pcmd);
+ ccb->startdone = ARCMSR_CCB_DONE;
+ ccb->ccb_flags = 0;
+ spin_lock_irqsave(&acb->ccblist_lock, flags);
+ list_add_tail(&ccb->list, &acb->ccb_free_list);
+ spin_unlock_irqrestore(&acb->ccblist_lock, flags);
}
}
atomic_set(&acb->ccboutstandingcount, 0);
static int arcmsr_bus_reset(struct scsi_cmnd *cmd)
{
- struct AdapterControlBlock *acb =
- (struct AdapterControlBlock *)cmd->device->host->hostdata;
+ struct AdapterControlBlock *acb;
uint32_t intmask_org, outbound_doorbell;
int retry_count = 0;
int rtn = FAILED;
atomic_set(&acb->rq_map_token, 16);
atomic_set(&acb->ante_token_value, 16);
acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
acb->acb_flags &= ~ACB_F_BUS_RESET;
rtn = SUCCESS;
printk(KERN_ERR "arcmsr: scsi bus reset eh returns with success\n");
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
rtn = SUCCESS;
}
break;
rtn = FAILED;
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
rtn = SUCCESS;
}
break;
atomic_set(&acb->rq_map_token, 16);
atomic_set(&acb->ante_token_value, 16);
acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6 * HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
acb->acb_flags &= ~ACB_F_BUS_RESET;
rtn = SUCCESS;
printk(KERN_ERR "arcmsr: scsi bus reset eh returns with success\n");
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
rtn = SUCCESS;
}
break;
spin_lock_irqsave(shost->host_lock, flags);
list_splice_init(&shost->eh_cmd_q, &eh_work_q);
+ shost->host_eh_scheduled = 0;
spin_unlock_irqrestore(shost->host_lock, flags);
SAS_DPRINTK("Enter %s\n", __func__);
/* adjust hba_queue_depth, reply_free_queue_depth,
* and queue_size
*/
- ioc->hba_queue_depth -= queue_diff;
- ioc->reply_free_queue_depth -= queue_diff;
- queue_size -= queue_diff;
+ ioc->hba_queue_depth -= (queue_diff / 2);
+ ioc->reply_free_queue_depth -= (queue_diff / 2);
+ queue_size = facts->MaxReplyDescriptorPostQueueDepth;
}
ioc->reply_post_queue_depth = queue_size;
static void
_base_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase)
{
+ mpt2sas_scsih_reset_handler(ioc, reset_phase);
+ mpt2sas_ctl_reset_handler(ioc, reset_phase);
switch (reset_phase) {
case MPT2_IOC_PRE_RESET:
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: "
"MPT2_IOC_DONE_RESET\n", ioc->name, __func__));
break;
}
- mpt2sas_scsih_reset_handler(ioc, reset_phase);
- mpt2sas_ctl_reset_handler(ioc, reset_phase);
}
/**
{
int r;
unsigned long flags;
+ u8 pe_complete = ioc->wait_for_port_enable_to_complete;
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter\n", ioc->name,
__func__));
if (r)
goto out;
_base_reset_handler(ioc, MPT2_IOC_AFTER_RESET);
+
+ /* If this hard reset is called while port enable is active, then
+ * there is no reason to call make_ioc_operational
+ */
+ if (pe_complete) {
+ r = -EFAULT;
+ goto out;
+ }
r = _base_make_ioc_operational(ioc, sleep_flag);
if (!r)
_base_reset_handler(ioc, MPT2_IOC_DONE_RESET);
}
/**
- * mptscsih_get_scsi_lookup - returns scmd entry
+ * _scsih_scsi_lookup_get - returns scmd entry
* @ioc: per adapter object
* @smid: system request message index
*
return ioc->scsi_lookup[smid - 1].scmd;
}
+/**
+ * _scsih_scsi_lookup_get_clear - returns scmd entry
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns the smid stored scmd pointer.
+ * Then will dereference the stored scmd pointer.
+ */
+static inline struct scsi_cmnd *
+_scsih_scsi_lookup_get_clear(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+ unsigned long flags;
+ struct scsi_cmnd *scmd;
+
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+ scmd = ioc->scsi_lookup[smid - 1].scmd;
+ ioc->scsi_lookup[smid - 1].scmd = NULL;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+ return scmd;
+}
+
/**
* _scsih_scsi_lookup_find_by_scmd - scmd lookup
* @ioc: per adapter object
u16 handle;
for (i = 0 ; i < event_data->NumEntries; i++) {
- if (event_data->PHY[i].PhyStatus &
- MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT)
- continue;
handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
if (!handle)
continue;
u16 count = 0;
for (smid = 1; smid <= ioc->scsiio_depth; smid++) {
- scmd = _scsih_scsi_lookup_get(ioc, smid);
+ scmd = _scsih_scsi_lookup_get_clear(ioc, smid);
if (!scmd)
continue;
count++;
u32 response_code = 0;
mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
- scmd = _scsih_scsi_lookup_get(ioc, smid);
+ scmd = _scsih_scsi_lookup_get_clear(ioc, smid);
if (scmd == NULL)
return 1;
event_data);
#endif
+ /* In MPI Revision K (0xC), the internal device reset complete was
+ * implemented, so avoid setting tm_busy flag for older firmware.
+ */
+ if ((ioc->facts.HeaderVersion >> 8) < 0xC)
+ return;
+
if (event_data->ReasonCode !=
MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET &&
event_data->ReasonCode !=
struct fw_event_work *fw_event)
{
struct scsi_cmnd *scmd;
+ struct scsi_device *sdev;
u16 smid, handle;
u32 lun;
struct MPT2SAS_DEVICE *sas_device_priv_data;
Mpi2EventDataSasBroadcastPrimitive_t *event_data = fw_event->event_data;
#endif
u16 ioc_status;
+ unsigned long flags;
+ int r;
+
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "broadcast primitive: "
"phy number(%d), width(%d)\n", ioc->name, event_data->PhyNum,
event_data->PortWidth));
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter\n", ioc->name,
__func__));
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+ ioc->broadcast_aen_busy = 0;
termination_count = 0;
query_count = 0;
mpi_reply = ioc->tm_cmds.reply;
scmd = _scsih_scsi_lookup_get(ioc, smid);
if (!scmd)
continue;
- sas_device_priv_data = scmd->device->hostdata;
+ sdev = scmd->device;
+ sas_device_priv_data = sdev->hostdata;
if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
continue;
/* skip hidden raid components */
lun = sas_device_priv_data->lun;
query_count++;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
mpt2sas_scsih_issue_tm(ioc, handle, 0, 0, lun,
MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK, smid, 30, NULL);
ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
(mpi_reply->ResponseCode ==
MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED ||
mpi_reply->ResponseCode ==
- MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC))
+ MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC)) {
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
continue;
-
- mpt2sas_scsih_issue_tm(ioc, handle, 0, 0, lun,
- MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET, 0, 30, NULL);
+ }
+ r = mpt2sas_scsih_issue_tm(ioc, handle, sdev->channel, sdev->id,
+ sdev->lun, MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30,
+ scmd);
+ if (r == FAILED)
+ sdev_printk(KERN_WARNING, sdev, "task abort: FAILED "
+ "scmd(%p)\n", scmd);
termination_count += le32_to_cpu(mpi_reply->TerminationCount);
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
}
- ioc->broadcast_aen_busy = 0;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT
"%s - exit, query_count = %d termination_count = %d\n",
destroy_workqueue(wq);
/* release all the volumes */
+ _scsih_ir_shutdown(ioc);
list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list,
list) {
if (raid_device->starget) {
/* Fetch the vendor specific tuples. */
res = pcmcia_loop_tuple(bus->host_pcmcia, SSB_PCMCIA_CIS,
- ssb_pcmcia_do_get_invariants, sprom);
+ ssb_pcmcia_do_get_invariants, iv);
if ((res == 0) || (res == -ENOSPC))
return 0;
/* send boot data to the IR TX device */
static int send_boot_data(struct IR_tx *tx)
{
- int ret;
+ int ret, i;
unsigned char buf[4];
/* send the boot block */
if (ret != 0)
return ret;
- /* kick it off? */
+ /* Hit the go button to activate the new boot data */
buf[0] = 0x00;
buf[1] = 0x20;
ret = i2c_master_send(tx->c, buf, 2);
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
- ret = i2c_master_send(tx->c, buf, 1);
+
+ /*
+ * Wait for zilog to settle after hitting go post boot block upload.
+ * Without this delay, the HD-PVR and HVR-1950 both return an -EIO
+ * upon attempting to get firmware revision, and tx probe thus fails.
+ */
+ for (i = 0; i < 10; i++) {
+ ret = i2c_master_send(tx->c, buf, 1);
+ if (ret == 1)
+ break;
+ udelay(100);
+ }
+
if (ret != 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
zilog_error("i2c_master_recv failed with %d\n", ret);
return 0;
}
- if (buf[0] != 0x80) {
- zilog_error("unexpected IR TX response: %02x\n", buf[0]);
+ if ((buf[0] != 0x80) && (buf[0] != 0xa0)) {
+ zilog_error("unexpected IR TX init response: %02x\n", buf[0]);
return 0;
}
zilog_notify("Zilog/Hauppauge IR blaster firmware version "
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
- ret = i2c_master_send(tx->c, buf, 1);
+
+ /* Give the z8 a moment to process data block */
+ for (i = 0; i < 10; i++) {
+ ret = i2c_master_send(tx->c, buf, 1);
+ if (ret == 1)
+ break;
+ udelay(100);
+ }
+
if (ret != 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
/*
* Copyright (C) 2006, 2007, 2009 Rusty Russell, IBM Corporation
- * Copyright (C) 2009, 2010 Red Hat, Inc.
+ * Copyright (C) 2009, 2010, 2011 Red Hat, Inc.
+ * Copyright (C) 2009, 2010, 2011 Amit Shah <amit.shah@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
spin_unlock(&portdev->cvq_lock);
}
+static void out_intr(struct virtqueue *vq)
+{
+ struct port *port;
+
+ port = find_port_by_vq(vq->vdev->priv, vq);
+ if (!port)
+ return;
+
+ wake_up_interruptible(&port->waitqueue);
+}
+
static void in_intr(struct virtqueue *vq)
{
struct port *port;
*/
j = 0;
io_callbacks[j] = in_intr;
- io_callbacks[j + 1] = NULL;
+ io_callbacks[j + 1] = out_intr;
io_names[j] = "input";
io_names[j + 1] = "output";
j += 2;
for (i = 1; i < nr_ports; i++) {
j += 2;
io_callbacks[j] = in_intr;
- io_callbacks[j + 1] = NULL;
+ io_callbacks[j + 1] = out_intr;
io_names[j] = "input";
io_names[j + 1] = "output";
}
size_t hdr_size;
struct socket *sock;
- /* TODO: check that we are running from vhost_worker?
- * Not sure it's worth it, it's straight-forward enough. */
+ /* TODO: check that we are running from vhost_worker? */
sock = rcu_dereference_check(vq->private_data, 1);
if (!sock)
return;
size_t len, total_len = 0;
int err;
size_t hdr_size;
- struct socket *sock = rcu_dereference(vq->private_data);
+ /* TODO: check that we are running from vhost_worker? */
+ struct socket *sock = rcu_dereference_check(vq->private_data, 1);
if (!sock || skb_queue_empty(&sock->sk->sk_receive_queue))
return;
int err, headcount;
size_t vhost_hlen, sock_hlen;
size_t vhost_len, sock_len;
- struct socket *sock = rcu_dereference(vq->private_data);
+ /* TODO: check that we are running from vhost_worker? */
+ struct socket *sock = rcu_dereference_check(vq->private_data, 1);
if (!sock || skb_queue_empty(&sock->sk->sk_receive_queue))
return;
{
unsigned acked_features;
- acked_features =
- rcu_dereference_index_check(dev->acked_features,
- lockdep_is_held(&dev->mutex));
+ /* TODO: check that we are running from vhost_worker or dev mutex is
+ * held? */
+ acked_features = rcu_dereference_index_check(dev->acked_features, 1);
return acked_features & (1 << bit);
}
char *value = NULL;
struct posix_acl *acl;
+ if (!IS_POSIXACL(inode))
+ return NULL;
+
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
struct posix_acl *acl;
int ret = 0;
+ if (!IS_POSIXACL(dentry->d_inode))
+ return -EOPNOTSUPP;
+
acl = btrfs_get_acl(dentry->d_inode, type);
if (IS_ERR(acl))
u64 em_len;
u64 em_start;
struct extent_map *em;
- int ret;
+ int ret = -ENOMEM;
u32 *sums;
tree = &BTRFS_I(inode)->io_tree;
compressed_len = em->block_len;
cb = kmalloc(compressed_bio_size(root, compressed_len), GFP_NOFS);
+ if (!cb)
+ goto out;
+
atomic_set(&cb->pending_bios, 0);
cb->errors = 0;
cb->inode = inode;
nr_pages = (compressed_len + PAGE_CACHE_SIZE - 1) /
PAGE_CACHE_SIZE;
- cb->compressed_pages = kmalloc(sizeof(struct page *) * nr_pages,
+ cb->compressed_pages = kzalloc(sizeof(struct page *) * nr_pages,
GFP_NOFS);
+ if (!cb->compressed_pages)
+ goto fail1;
+
bdev = BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev;
for (page_index = 0; page_index < nr_pages; page_index++) {
cb->compressed_pages[page_index] = alloc_page(GFP_NOFS |
__GFP_HIGHMEM);
+ if (!cb->compressed_pages[page_index])
+ goto fail2;
}
cb->nr_pages = nr_pages;
cb->len = uncompressed_len;
comp_bio = compressed_bio_alloc(bdev, cur_disk_byte, GFP_NOFS);
+ if (!comp_bio)
+ goto fail2;
comp_bio->bi_private = cb;
comp_bio->bi_end_io = end_compressed_bio_read;
atomic_inc(&cb->pending_bios);
bio_put(comp_bio);
return 0;
+
+fail2:
+ for (page_index = 0; page_index < nr_pages; page_index++)
+ free_page((unsigned long)cb->compressed_pages[page_index]);
+
+ kfree(cb->compressed_pages);
+fail1:
+ kfree(cb);
+out:
+ free_extent_map(em);
+ return ret;
}
static struct list_head comp_idle_workspace[BTRFS_COMPRESS_TYPES];
return ret;
}
-void __exit btrfs_exit_compress(void)
+void btrfs_exit_compress(void)
{
free_workspaces();
}
spin_unlock(&root->fs_info->new_trans_lock);
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
if (transid == trans->transid) {
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
up_write(&root->fs_info->cleanup_work_sem);
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
/* run commit again to drop the original snapshot */
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
btrfs_commit_transaction(trans, root);
ret = btrfs_write_and_wait_transaction(NULL, root);
BUG_ON(ret);
kfree(fs_info->chunk_root);
kfree(fs_info->dev_root);
kfree(fs_info->csum_root);
+ kfree(fs_info);
+
return 0;
}
int ret;
path = btrfs_alloc_path();
+ if (!path)
+ return ERR_PTR(-ENOMEM);
if (dir->i_ino == BTRFS_FIRST_FREE_OBJECTID) {
key.objectid = root->root_key.objectid;
if (!path)
return -ENOMEM;
- exclude_super_stripes(extent_root, block_group);
- spin_lock(&block_group->space_info->lock);
- block_group->space_info->bytes_readonly += block_group->bytes_super;
- spin_unlock(&block_group->space_info->lock);
-
last = max_t(u64, block_group->key.objectid, BTRFS_SUPER_INFO_OFFSET);
/*
cache->cached = BTRFS_CACHE_NO;
}
spin_unlock(&cache->lock);
- if (ret == 1)
+ if (ret == 1) {
+ free_excluded_extents(fs_info->extent_root, cache);
return 0;
+ }
}
if (load_cache_only)
u64 reserved;
u64 max_reclaim;
u64 reclaimed = 0;
+ long time_left;
int pause = 1;
int nr_pages = (2 * 1024 * 1024) >> PAGE_CACHE_SHIFT;
+ int loops = 0;
block_rsv = &root->fs_info->delalloc_block_rsv;
space_info = block_rsv->space_info;
max_reclaim = min(reserved, to_reclaim);
- while (1) {
+ while (loops < 1024) {
/* have the flusher threads jump in and do some IO */
smp_mb();
nr_pages = min_t(unsigned long, nr_pages,
writeback_inodes_sb_nr_if_idle(root->fs_info->sb, nr_pages);
spin_lock(&space_info->lock);
- if (reserved > space_info->bytes_reserved)
+ if (reserved > space_info->bytes_reserved) {
+ loops = 0;
reclaimed += reserved - space_info->bytes_reserved;
+ } else {
+ loops++;
+ }
reserved = space_info->bytes_reserved;
spin_unlock(&space_info->lock);
return -EAGAIN;
__set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(pause);
+ time_left = schedule_timeout(pause);
+
+ /* We were interrupted, exit */
+ if (time_left)
+ break;
+
pause <<= 1;
if (pause > HZ / 10)
pause = HZ / 10;
if (num_bytes > 0) {
if (dest) {
- block_rsv_add_bytes(dest, num_bytes, 0);
- } else {
+ spin_lock(&dest->lock);
+ if (!dest->full) {
+ u64 bytes_to_add;
+
+ bytes_to_add = dest->size - dest->reserved;
+ bytes_to_add = min(num_bytes, bytes_to_add);
+ dest->reserved += bytes_to_add;
+ if (dest->reserved >= dest->size)
+ dest->full = 1;
+ num_bytes -= bytes_to_add;
+ }
+ spin_unlock(&dest->lock);
+ }
+ if (num_bytes) {
spin_lock(&space_info->lock);
space_info->bytes_reserved -= num_bytes;
spin_unlock(&space_info->lock);
num_bytes = ALIGN(num_bytes, root->sectorsize);
atomic_dec(&BTRFS_I(inode)->outstanding_extents);
+ WARN_ON(atomic_read(&BTRFS_I(inode)->outstanding_extents) < 0);
spin_lock(&BTRFS_I(inode)->accounting_lock);
nr_extents = atomic_read(&BTRFS_I(inode)->outstanding_extents);
struct btrfs_root *root, u32 blocksize)
{
struct btrfs_block_rsv *block_rsv;
+ struct btrfs_block_rsv *global_rsv = &root->fs_info->global_block_rsv;
int ret;
block_rsv = get_block_rsv(trans, root);
if (block_rsv->size == 0) {
ret = reserve_metadata_bytes(trans, root, block_rsv,
blocksize, 0);
- if (ret)
+ /*
+ * If we couldn't reserve metadata bytes try and use some from
+ * the global reserve.
+ */
+ if (ret && block_rsv != global_rsv) {
+ ret = block_rsv_use_bytes(global_rsv, blocksize);
+ if (!ret)
+ return global_rsv;
+ return ERR_PTR(ret);
+ } else if (ret) {
return ERR_PTR(ret);
+ }
return block_rsv;
}
ret = block_rsv_use_bytes(block_rsv, blocksize);
if (!ret)
return block_rsv;
+ if (ret) {
+ WARN_ON(1);
+ ret = reserve_metadata_bytes(trans, root, block_rsv, blocksize,
+ 0);
+ if (!ret) {
+ spin_lock(&block_rsv->lock);
+ block_rsv->size += blocksize;
+ spin_unlock(&block_rsv->lock);
+ return block_rsv;
+ } else if (ret && block_rsv != global_rsv) {
+ ret = block_rsv_use_bytes(global_rsv, blocksize);
+ if (!ret)
+ return global_rsv;
+ }
+ }
return ERR_PTR(-ENOSPC);
}
BUG_ON(!wc);
trans = btrfs_start_transaction(tree_root, 0);
+ BUG_ON(IS_ERR(trans));
+
if (block_rsv)
trans->block_rsv = block_rsv;
btrfs_end_transaction_throttle(trans, tree_root);
trans = btrfs_start_transaction(tree_root, 0);
+ BUG_ON(IS_ERR(trans));
if (block_rsv)
trans->block_rsv = block_rsv;
}
int ret = 0;
ra = kzalloc(sizeof(*ra), GFP_NOFS);
+ if (!ra)
+ return -ENOMEM;
mutex_lock(&inode->i_mutex);
first_index = start >> PAGE_CACHE_SHIFT;
BUG_ON(reloc_root->commit_root != NULL);
while (1) {
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
mutex_lock(&root->fs_info->drop_mutex);
ret = btrfs_drop_snapshot(trans, reloc_root);
if (found) {
trans = btrfs_start_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
}
trans = btrfs_start_transaction(extent_root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
if (extent_key->objectid == 0) {
ret = del_extent_zero(trans, extent_root, path, extent_key);
if (block_group->cached == BTRFS_CACHE_STARTED)
wait_block_group_cache_done(block_group);
+ /*
+ * We haven't cached this block group, which means we could
+ * possibly have excluded extents on this block group.
+ */
+ if (block_group->cached == BTRFS_CACHE_NO)
+ free_excluded_extents(info->extent_root, block_group);
+
btrfs_remove_free_space_cache(block_group);
btrfs_put_block_group(block_group);
cache->flags = btrfs_block_group_flags(&cache->item);
cache->sectorsize = root->sectorsize;
+ /*
+ * We need to exclude the super stripes now so that the space
+ * info has super bytes accounted for, otherwise we'll think
+ * we have more space than we actually do.
+ */
+ exclude_super_stripes(root, cache);
+
/*
* check for two cases, either we are full, and therefore
* don't need to bother with the caching work since we won't
* time, particularly in the full case.
*/
if (found_key.offset == btrfs_block_group_used(&cache->item)) {
- exclude_super_stripes(root, cache);
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
free_excluded_extents(root, cache);
} else if (btrfs_block_group_used(&cache->item) == 0) {
- exclude_super_stripes(root, cache);
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
add_new_free_space(cache, root->fs_info,
bio_get(bio);
if (tree->ops && tree->ops->submit_bio_hook)
- tree->ops->submit_bio_hook(page->mapping->host, rw, bio,
+ ret = tree->ops->submit_bio_hook(page->mapping->host, rw, bio,
mirror_num, bio_flags, start);
else
submit_bio(rw, bio);
nr = bio_get_nr_vecs(bdev);
bio = btrfs_bio_alloc(bdev, sector, nr, GFP_NOFS | __GFP_HIGH);
+ if (!bio)
+ return -ENOMEM;
bio_add_page(bio, page, page_size, offset);
bio->bi_end_io = end_io_func;
ret = __extent_read_full_page(tree, page, get_extent, &bio, 0,
&bio_flags);
if (bio)
- submit_one_bio(READ, bio, 0, bio_flags);
+ ret = submit_one_bio(READ, bio, 0, bio_flags);
return ret;
}
root = root->fs_info->csum_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
while (1) {
key.objectid = BTRFS_EXTENT_CSUM_OBJECTID;
if (path->slots[0] == 0)
goto out;
path->slots[0]--;
+ } else if (ret < 0) {
+ goto out;
}
+
leaf = path->nodes[0];
btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
for (i = 0; i < num_pages; i++) {
pages[i] = grab_cache_page(inode->i_mapping, index + i);
if (!pages[i]) {
- err = -ENOMEM;
- BUG_ON(1);
+ int c;
+ for (c = i - 1; c >= 0; c--) {
+ unlock_page(pages[c]);
+ page_cache_release(pages[c]);
+ }
+ return -ENOMEM;
}
wait_on_page_writeback(pages[i]);
}
PAGE_CACHE_SIZE, PAGE_CACHE_SIZE /
(sizeof(struct page *)));
pages = kmalloc(nrptrs * sizeof(struct page *), GFP_KERNEL);
+ if (!pages) {
+ ret = -ENOMEM;
+ goto out;
+ }
/* generic_write_checks can change our pos */
start_pos = pos;
size_t write_bytes = min(iov_iter_count(&i),
nrptrs * (size_t)PAGE_CACHE_SIZE -
offset);
- size_t num_pages = (write_bytes + PAGE_CACHE_SIZE - 1) >>
- PAGE_CACHE_SHIFT;
+ size_t num_pages = (write_bytes + offset +
+ PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
WARN_ON(num_pages > nrptrs);
memset(pages, 0, sizeof(struct page *) * nrptrs);
copied = btrfs_copy_from_user(pos, num_pages,
write_bytes, pages, &i);
- dirty_pages = (copied + PAGE_CACHE_SIZE - 1) >>
- PAGE_CACHE_SHIFT;
+ dirty_pages = (copied + offset + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT;
if (num_pages > dirty_pages) {
if (copied > 0)
return entry;
}
-static void unlink_free_space(struct btrfs_block_group_cache *block_group,
- struct btrfs_free_space *info)
+static inline void
+__unlink_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info)
{
rb_erase(&info->offset_index, &block_group->free_space_offset);
block_group->free_extents--;
+}
+
+static void unlink_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info)
+{
+ __unlink_free_space(block_group, info);
block_group->free_space -= info->bytes;
}
u64 max_bytes;
u64 bitmap_bytes;
u64 extent_bytes;
+ u64 size = block_group->key.offset;
/*
* The goal is to keep the total amount of memory used per 1gb of space
* at or below 32k, so we need to adjust how much memory we allow to be
* used by extent based free space tracking
*/
- max_bytes = MAX_CACHE_BYTES_PER_GIG *
- (div64_u64(block_group->key.offset, 1024 * 1024 * 1024));
+ if (size < 1024 * 1024 * 1024)
+ max_bytes = MAX_CACHE_BYTES_PER_GIG;
+ else
+ max_bytes = MAX_CACHE_BYTES_PER_GIG *
+ div64_u64(size, 1024 * 1024 * 1024);
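A quick worked illustration of why the small-block-group branch above matters (an editor's sketch, not part of the patch): with 64-bit integer division, any block group smaller than 1 GiB truncates to zero whole gigabytes, so the old formula computed max_bytes = 0 and effectively removed the cache budget; the new branch clamps such groups to one full per-GiB budget. MAX_CACHE_BYTES_PER_GIG is assumed here to be the 32 KiB-per-GiB figure described in the comment above.

#include <stdint.h>
#include <stdio.h>

#define GIB (1024ULL * 1024 * 1024)
#define MAX_CACHE_BYTES_PER_GIG (32ULL * 1024)	/* assumed 32 KiB per GiB */

static uint64_t max_cache_bytes(uint64_t size)
{
	if (size < GIB)		/* new behaviour: keep a full budget for small groups */
		return MAX_CACHE_BYTES_PER_GIG;
	return MAX_CACHE_BYTES_PER_GIG * (size / GIB);
}

int main(void)
{
	uint64_t small = 256ULL * 1024 * 1024;	/* a 256 MiB block group */

	/* old formula: 32K * (256 MiB / 1 GiB) == 32K * 0 == 0 */
	printf("old budget: %llu\n",
	       (unsigned long long)(MAX_CACHE_BYTES_PER_GIG * (small / GIB)));
	/* new formula keeps the full 32 KiB budget for small block groups */
	printf("new budget: %llu\n", (unsigned long long)max_cache_bytes(small));
	return 0;
}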
/*
* we want to account for 1 more bitmap than what we have so we can make
recalculate_thresholds(block_group);
}
+static void free_bitmap(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *bitmap_info)
+{
+ unlink_free_space(block_group, bitmap_info);
+ kfree(bitmap_info->bitmap);
+ kfree(bitmap_info);
+ block_group->total_bitmaps--;
+ recalculate_thresholds(block_group);
+}
+
static noinline int remove_from_bitmap(struct btrfs_block_group_cache *block_group,
struct btrfs_free_space *bitmap_info,
u64 *offset, u64 *bytes)
*/
search_start = *offset;
search_bytes = *bytes;
+ search_bytes = min(search_bytes, end - search_start + 1);
ret = search_bitmap(block_group, bitmap_info, &search_start,
&search_bytes);
BUG_ON(ret < 0 || search_start != *offset);
if (*bytes) {
struct rb_node *next = rb_next(&bitmap_info->offset_index);
- if (!bitmap_info->bytes) {
- unlink_free_space(block_group, bitmap_info);
- kfree(bitmap_info->bitmap);
- kfree(bitmap_info);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ if (!bitmap_info->bytes)
+ free_bitmap(block_group, bitmap_info);
/*
* no entry after this bitmap, but we still have bytes to
return -EAGAIN;
goto again;
- } else if (!bitmap_info->bytes) {
- unlink_free_space(block_group, bitmap_info);
- kfree(bitmap_info->bitmap);
- kfree(bitmap_info);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ } else if (!bitmap_info->bytes)
+ free_bitmap(block_group, bitmap_info);
return 0;
}
return ret;
}
-int btrfs_add_free_space(struct btrfs_block_group_cache *block_group,
- u64 offset, u64 bytes)
+bool try_merge_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info, bool update_stat)
{
- struct btrfs_free_space *right_info = NULL;
- struct btrfs_free_space *left_info = NULL;
- struct btrfs_free_space *info = NULL;
- int ret = 0;
-
- info = kzalloc(sizeof(struct btrfs_free_space), GFP_NOFS);
- if (!info)
- return -ENOMEM;
-
- info->offset = offset;
- info->bytes = bytes;
-
- spin_lock(&block_group->tree_lock);
+ struct btrfs_free_space *left_info;
+ struct btrfs_free_space *right_info;
+ bool merged = false;
+ u64 offset = info->offset;
+ u64 bytes = info->bytes;
/*
* first we want to see if there is free space adjacent to the range we
else
left_info = tree_search_offset(block_group, offset - 1, 0, 0);
- /*
- * If there was no extent directly to the left or right of this new
- * extent then we know we're going to have to allocate a new extent, so
- * before we do that see if we need to drop this into a bitmap
- */
- if ((!left_info || left_info->bitmap) &&
- (!right_info || right_info->bitmap)) {
- ret = insert_into_bitmap(block_group, info);
-
- if (ret < 0) {
- goto out;
- } else if (ret) {
- ret = 0;
- goto out;
- }
- }
-
if (right_info && !right_info->bitmap) {
- unlink_free_space(block_group, right_info);
+ if (update_stat)
+ unlink_free_space(block_group, right_info);
+ else
+ __unlink_free_space(block_group, right_info);
info->bytes += right_info->bytes;
kfree(right_info);
+ merged = true;
}
if (left_info && !left_info->bitmap &&
left_info->offset + left_info->bytes == offset) {
- unlink_free_space(block_group, left_info);
+ if (update_stat)
+ unlink_free_space(block_group, left_info);
+ else
+ __unlink_free_space(block_group, left_info);
info->offset = left_info->offset;
info->bytes += left_info->bytes;
kfree(left_info);
+ merged = true;
}
+ return merged;
+}
+
+int btrfs_add_free_space(struct btrfs_block_group_cache *block_group,
+ u64 offset, u64 bytes)
+{
+ struct btrfs_free_space *info;
+ int ret = 0;
+
+ info = kzalloc(sizeof(struct btrfs_free_space), GFP_NOFS);
+ if (!info)
+ return -ENOMEM;
+
+ info->offset = offset;
+ info->bytes = bytes;
+
+ spin_lock(&block_group->tree_lock);
+
+ if (try_merge_free_space(block_group, info, true))
+ goto link;
+
+ /*
+	 * There was no extent directly to the left or right of this new
+	 * extent, so we know we're going to have to allocate a new extent;
+	 * before we do that, see if we need to drop this into a bitmap
+ */
+ ret = insert_into_bitmap(block_group, info);
+ if (ret < 0) {
+ goto out;
+ } else if (ret) {
+ ret = 0;
+ goto out;
+ }
+link:
ret = link_free_space(block_group, info);
if (ret)
kfree(info);
node = rb_next(&entry->offset_index);
rb_erase(&entry->offset_index, &cluster->root);
BUG_ON(entry->bitmap);
+ try_merge_free_space(block_group, entry, false);
tree_insert_offset(&block_group->free_space_offset,
entry->offset, &entry->offset_index, 0);
}
ret = offset;
if (entry->bitmap) {
bitmap_clear_bits(block_group, entry, offset, bytes);
- if (!entry->bytes) {
- unlink_free_space(block_group, entry);
- kfree(entry->bitmap);
- kfree(entry);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ if (!entry->bytes)
+ free_bitmap(block_group, entry);
} else {
unlink_free_space(block_group, entry);
entry->offset += bytes;
ret = search_start;
bitmap_clear_bits(block_group, entry, ret, bytes);
+ if (entry->bytes == 0)
+ free_bitmap(block_group, entry);
out:
spin_unlock(&cluster->lock);
spin_unlock(&block_group->tree_lock);
entry->offset += bytes;
entry->bytes -= bytes;
- if (entry->bytes == 0) {
+ if (entry->bytes == 0)
rb_erase(&entry->offset_index, &cluster->root);
- kfree(entry);
- }
break;
}
out:
spin_unlock(&cluster->lock);
+ if (!ret)
+ return 0;
+
+ spin_lock(&block_group->tree_lock);
+
+ block_group->free_space -= bytes;
+ if (entry->bytes == 0) {
+ block_group->free_extents--;
+ kfree(entry);
+ }
+
+ spin_unlock(&block_group->tree_lock);
+
return ret;
}
}
if (start == 0) {
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
GFP_NOFS);
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_reserve_extent(trans, root,
async_extent->compressed_size,
async_extent->compressed_size,
BUG_ON(root == root->fs_info->tree_root);
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
} else {
trans = btrfs_join_transaction(root, 1);
}
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
cow_start = (u64)-1;
cur_offset = start;
out_page:
unlock_page(page);
page_cache_release(page);
+ kfree(fixup);
}
/*
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
ret = btrfs_update_inode(trans, root, inode);
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
*/
if (is_bad_inode(inode)) {
trans = btrfs_start_transaction(root, 0);
+ BUG_ON(IS_ERR(trans));
btrfs_orphan_del(trans, inode);
btrfs_end_transaction(trans, root);
iput(inode);
if (root->orphan_block_rsv || root->orphan_item_inserted) {
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_end_transaction(trans, root);
}
path = btrfs_alloc_path();
if (!path) {
ret = -ENOMEM;
- goto err;
+ goto out;
}
path->leave_spinning = 1;
struct extent_buffer *eb;
int level;
u64 refs = 1;
- int uninitialized_var(ret);
for (level = 0; level < BTRFS_MAX_LEVEL; level++) {
+ int ret;
+
if (!path->nodes[level])
break;
eb = path->nodes[level];
if (refs > 1)
return 1;
}
- return ret; /* XXX callers? */
+ return 0;
}
/*
}
srcu_read_unlock(&root->fs_info->subvol_srcu, index);
- if (root != sub_root) {
+ if (!IS_ERR(inode) && root != sub_root) {
down_read(&root->fs_info->cleanup_work_sem);
if (!(inode->i_sb->s_flags & MS_RDONLY))
btrfs_orphan_cleanup(sub_root);
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
btrfs_set_trans_block_group(trans, inode);
if (nolock)
ret = btrfs_end_transaction_nolock(trans, root);
return;
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
ret = btrfs_update_inode(trans, root, inode);
em = NULL;
btrfs_release_path(root, path);
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return ERR_CAST(trans);
goto again;
}
map = kmap(page);
btrfs_drop_extent_cache(inode, start, start + len - 1, 0);
trans = btrfs_join_transaction(root, 0);
- if (!trans)
- return ERR_PTR(-ENOMEM);
+ if (IS_ERR(trans))
+ return ERR_CAST(trans);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
* while we look for nocow cross refs
*/
trans = btrfs_join_transaction(root, 0);
- if (!trans)
+ if (IS_ERR(trans))
goto must_cow;
if (can_nocow_odirect(trans, inode, start, len) == 1) {
BUG_ON(!ordered);
trans = btrfs_join_transaction(root, 1);
- if (!trans) {
+ if (IS_ERR(trans)) {
err = -ENOMEM;
goto out;
}
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_update_inode(trans, root, inode);
BUG_ON(ret);
if (new_size > old_size) {
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ goto out_unlock;
+ }
ret = btrfs_grow_device(trans, device, new_size);
btrfs_commit_transaction(trans, root);
} else {
memcpy(&new_key, &key, sizeof(new_key));
new_key.objectid = inode->i_ino;
- new_key.offset = key.offset + destoff - off;
+ if (off <= key.offset)
+ new_key.offset = key.offset + destoff - off;
+ else
+ new_key.offset = destoff;
trans = btrfs_start_transaction(root, 1);
if (IS_ERR(trans)) {
ret = -ENOMEM;
trans = btrfs_start_ioctl_transaction(root, 0);
- if (!trans)
+ if (IS_ERR(trans))
goto out_drop;
file->private_data = trans;
path->leave_spinning = 1;
trans = btrfs_start_transaction(root, 1);
- if (!trans) {
+ if (IS_ERR(trans)) {
btrfs_free_path(path);
- return -ENOMEM;
+ return PTR_ERR(trans);
}
dir_id = btrfs_super_root_dir(&root->fs_info->super_copy);
u64 transid;
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
transid = trans->transid;
btrfs_commit_transaction_async(trans, root, 0);
u64 file_offset)
{
struct rb_root *root = &tree->tree;
- struct rb_node *prev;
+ struct rb_node *prev = NULL;
struct rb_node *ret;
struct btrfs_ordered_extent *entry;
#else
BUG();
#endif
+ break;
case BTRFS_BLOCK_GROUP_ITEM_KEY:
bi = btrfs_item_ptr(l, i,
struct btrfs_block_group_item);
while (1) {
trans = btrfs_start_transaction(root, 0);
+ BUG_ON(IS_ERR(trans));
trans->block_rsv = rc->block_rsv;
ret = btrfs_block_rsv_check(trans, root, rc->block_rsv,
}
trans = btrfs_join_transaction(rc->extent_root, 1);
+ if (IS_ERR(trans)) {
+ if (!err)
+ btrfs_block_rsv_release(rc->extent_root,
+ rc->block_rsv, num_bytes);
+ return PTR_ERR(trans);
+ }
if (!err) {
if (num_bytes != rc->merging_rsv_size) {
trans = btrfs_join_transaction(root, 0);
if (IS_ERR(trans)) {
btrfs_free_path(path);
+ ret = PTR_ERR(trans);
goto out;
}
set_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_commit_transaction(trans, rc->extent_root);
return 0;
}
while (1) {
trans = btrfs_start_transaction(rc->extent_root, 0);
+ BUG_ON(IS_ERR(trans));
if (update_backref_cache(trans, &rc->backref_cache)) {
btrfs_end_transaction(trans, rc->extent_root);
/* get rid of pinned extents */
trans = btrfs_join_transaction(rc->extent_root, 1);
- btrfs_commit_transaction(trans, rc->extent_root);
+ if (IS_ERR(trans))
+ err = PTR_ERR(trans);
+ else
+ btrfs_commit_transaction(trans, rc->extent_root);
out_free:
btrfs_free_block_rsv(rc->extent_root, rc->block_rsv);
btrfs_free_path(path);
int ret;
trans = btrfs_start_transaction(root->fs_info->tree_root, 0);
+ BUG_ON(IS_ERR(trans));
memset(&root->root_item.drop_progress, 0,
sizeof(root->root_item.drop_progress));
set_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
+ if (IS_ERR(trans)) {
+ unset_reloc_control(rc);
+ err = PTR_ERR(trans);
+ goto out_free;
+ }
rc->merge_reloc_tree = 1;
unset_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
- btrfs_commit_transaction(trans, rc->extent_root);
-out:
+ if (IS_ERR(trans))
+ err = PTR_ERR(trans);
+ else
+ btrfs_commit_transaction(trans, rc->extent_root);
+out_free:
kfree(rc);
+out:
while (!list_empty(&reloc_roots)) {
reloc_root = list_entry(reloc_roots.next,
struct btrfs_root, root_list);
struct btrfs_fs_devices **fs_devices)
{
substring_t args[MAX_OPT_ARGS];
- char *opts, *p;
+ char *opts, *orig, *p;
int error = 0;
int intarg;
opts = kstrdup(options, GFP_KERNEL);
if (!opts)
return -ENOMEM;
+ orig = opts;
while ((p = strsep(&opts, ",")) != NULL) {
int token;
}
out_free_opts:
- kfree(opts);
+ kfree(orig);
out:
/*
* If no subvolume name is specified we use the default one. Allocate
btrfs_wait_ordered_extents(root, 0, 0);
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
ret = btrfs_commit_transaction(trans, root);
return ret;
}
}
btrfs_close_devices(fs_devices);
+ kfree(fs_info);
+ kfree(tree_root);
} else {
char b[BDEVNAME_SIZE];
INIT_DELAYED_WORK(&ac->work, do_async_commit);
ac->root = root;
ac->newtrans = btrfs_join_transaction(root, 0);
+ if (IS_ERR(ac->newtrans)) {
+ int err = PTR_ERR(ac->newtrans);
+ kfree(ac);
+ return err;
+ }
/* take transaction reference */
mutex_lock(&root->fs_info->trans_mutex);
}
dst_copy = kmalloc(item_size, GFP_NOFS);
src_copy = kmalloc(item_size, GFP_NOFS);
+ if (!dst_copy || !src_copy) {
+ btrfs_release_path(root, path);
+ kfree(dst_copy);
+ kfree(src_copy);
+ return -ENOMEM;
+ }
read_extent_buffer(eb, src_copy, src_ptr, item_size);
btrfs_dir_item_key_to_cpu(leaf, di, &location);
name_len = btrfs_dir_name_len(leaf, di);
name = kmalloc(name_len, GFP_NOFS);
+ if (!name)
+ return -ENOMEM;
+
read_extent_buffer(leaf, name, (unsigned long)(di + 1), name_len);
btrfs_release_path(root, path);
int match = 0;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
ret = btrfs_search_slot(NULL, log, key, path, 0, 0);
if (ret != 0)
goto out;
key.offset = (u64)-1;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
while (1) {
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
name_len = btrfs_dir_name_len(eb, di);
name = kmalloc(name_len, GFP_NOFS);
+ if (!name)
+ return -ENOMEM;
+
log_type = btrfs_dir_type(eb, di);
read_extent_buffer(eb, name, (unsigned long)(di + 1),
name_len);
root_owner = btrfs_header_owner(parent);
next = btrfs_find_create_tree_block(root, bytenr, blocksize);
+ if (!next)
+ return -ENOMEM;
if (*level == 1) {
wc->process_func(root, next, wc, ptr_gen);
wait_log_commit(trans, log_root_tree,
log_root_tree->log_transid);
mutex_unlock(&log_root_tree->log_mutex);
+ ret = 0;
goto out;
}
atomic_set(&log_root_tree->log_commit[index2], 1);
smp_mb();
if (waitqueue_active(&root->log_commit_wait[index1]))
wake_up(&root->log_commit_wait[index1]);
- return 0;
+ return ret;
}
static void free_log_tree(struct btrfs_trans_handle *trans,
log = root->log_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
di = btrfs_lookup_dir_item(trans, log, path, dir->i_ino,
name, name_len, -1);
if (IS_ERR(di)) {
ins_data = kmalloc(nr * sizeof(struct btrfs_key) +
nr * sizeof(u32), GFP_NOFS);
+ if (!ins_data)
+ return -ENOMEM;
+
ins_sizes = (u32 *)ins_data;
ins_keys = (struct btrfs_key *)(ins_data + nr * sizeof(u32));
log = root->log_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
dst_path = btrfs_alloc_path();
+ if (!dst_path) {
+ btrfs_free_path(path);
+ return -ENOMEM;
+ }
min_key.objectid = inode->i_ino;
min_key.type = BTRFS_INODE_ITEM_KEY;
BUG_ON(!path);
trans = btrfs_start_transaction(fs_info->tree_root, 0);
+ BUG_ON(IS_ERR(trans));
wc.trans = trans;
wc.pin = 1;
return -ENOMEM;
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ btrfs_free_path(path);
+ return PTR_ERR(trans);
+ }
key.objectid = BTRFS_DEV_ITEMS_OBJECTID;
key.type = BTRFS_DEV_ITEM_KEY;
key.offset = device->devid;
}
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ kfree(device);
+ ret = PTR_ERR(trans);
+ goto error;
+ }
+
lock_chunks(root);
device->writeable = 1;
return ret;
trans = btrfs_start_transaction(root, 0);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
lock_chunks(root);
BUG_ON(ret);
trans = btrfs_start_transaction(dev_root, 0);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_grow_device(trans, device, old_size);
BUG_ON(ret);
/* Shrinking succeeded, else we would be at "done". */
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ goto done;
+ }
+
lock_chunks(root);
device->disk_total_bytes = new_size;
depends on INET
select NLS
select CRYPTO
+ select CRYPTO_MD4
select CRYPTO_MD5
select CRYPTO_HMAC
select CRYPTO_ARC4
cFYI(1, "in %s", __func__);
BUG_ON(IS_ROOT(mntpt));
- xid = GetXid();
-
/*
* The MSDFS spec states that paths in DFS referral requests and
* responses must be prefixed by a single '\' character instead of
mnt = ERR_PTR(-ENOMEM);
full_path = build_path_from_dentry(mntpt);
if (full_path == NULL)
- goto free_xid;
+ goto cdda_exit;
cifs_sb = CIFS_SB(mntpt->d_inode->i_sb);
tlink = cifs_sb_tlink(cifs_sb);
}
ses = tlink_tcon(tlink)->ses;
+ xid = GetXid();
rc = get_dfs_path(xid, ses, full_path + 1, cifs_sb->local_nls,
&num_referrals, &referrals,
cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);
+ FreeXid(xid);
cifs_put_tlink(tlink);
free_dfs_info_array(referrals, num_referrals);
free_full_path:
kfree(full_path);
-free_xid:
- FreeXid(xid);
+cdda_exit:
cFYI(1, "leaving %s" , __func__);
return mnt;
}
ppace = kmalloc(num_aces * sizeof(struct cifs_ace *),
GFP_KERNEL);
+ if (!ppace) {
+ cERROR(1, "DACL memory allocation error");
+ return;
+ }
for (i = 0; i < num_aces; ++i) {
ppace[i] = (struct cifs_ace *) (acl_base + acl_size);
get_random_bytes(sec_key, CIFS_SESS_KEY_SIZE);
tfm_arc4 = crypto_alloc_blkcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
- if (!tfm_arc4 || IS_ERR(tfm_arc4)) {
+ if (IS_ERR(tfm_arc4)) {
+ rc = PTR_ERR(tfm_arc4);
cERROR(1, "could not allocate crypto API arc4\n");
- return PTR_ERR(tfm_arc4);
+ return rc;
}
desc.tfm = tfm_arc4;
extern const struct export_operations cifs_export_ops;
#endif /* EXPERIMENTAL */
-#define CIFS_VERSION "1.69"
+#define CIFS_VERSION "1.70"
#endif /* _CIFSFS_H */
}
}
- if (ses->status == CifsExiting)
- return -EIO;
-
/*
* Give demultiplex thread up to 10 seconds to reconnect, should be
* greater than cifs socket timeout which is 7 seconds
* retrying until process is killed or server comes
* back on-line
*/
- if (!tcon->retry || ses->status == CifsExiting) {
+ if (!tcon->retry) {
cFYI(1, "gave up waiting on reconnect in smb_init");
return -EHOSTDOWN;
}
__u16 fid, __u32 pid_of_opener, bool SetAllocation)
{
struct smb_com_transaction2_sfi_req *pSMB = NULL;
- char *data_offset;
struct file_end_of_file_info *parm_data;
int rc = 0;
__u16 params, param_offset, offset, byte_count, count;
param_offset = offsetof(struct smb_com_transaction2_sfi_req, Fid) - 4;
offset = param_offset + params;
- data_offset = (char *) (&pSMB->hdr.Protocol) + offset;
-
count = sizeof(struct file_end_of_file_info);
pSMB->MaxParameterCount = cpu_to_le16(2);
/* BB find exact max SMB PDU from sess structure BB */
struct TCP_Server_Info *server = container_of(work,
struct TCP_Server_Info, echo.work);
- /* no need to ping if we got a response recently */
- if (time_before(jiffies, server->lstrp + SMB_ECHO_INTERVAL - HZ))
+ /*
+ * We cannot send an echo until the NEGOTIATE_PROTOCOL request is done.
+ * Also, no need to ping if we got a response recently
+ */
+ if (server->tcpStatus != CifsGood ||
+ time_before(jiffies, server->lstrp + SMB_ECHO_INTERVAL - HZ))
goto requeue_echo;
rc = CIFSSMBEcho(server);
else if (reconnect == 1)
continue;
- length += 4; /* account for rfc1002 hdr */
+ total_read += 4; /* account for rfc1002 hdr */
-
- dump_smb(smb_buffer, length);
- if (checkSMB(smb_buffer, smb_buffer->Mid, total_read+4)) {
- cifs_dump_mem("Bad SMB: ", smb_buffer, 48);
+ dump_smb(smb_buffer, total_read);
+ if (checkSMB(smb_buffer, smb_buffer->Mid, total_read)) {
+ cifs_dump_mem("Bad SMB: ", smb_buffer,
+ total_read < 48 ? total_read : 48);
continue;
}
mid_entry->largeBuf = isLargeBuf;
multi_t2_fnd:
mid_entry->midState = MID_RESPONSE_RECEIVED;
- list_del_init(&mid_entry->qhead);
- mid_entry->callback(mid_entry);
#ifdef CONFIG_CIFS_STATS2
mid_entry->when_received = jiffies;
#endif
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
break;
}
mid_entry = NULL;
struct cifsTconInfo *tcon;
struct tcon_link *tlink;
struct cifsFileInfo *pCifsFile = NULL;
- struct cifsInodeInfo *pCifsInode;
char *full_path = NULL;
bool posix_open_ok = false;
__u16 netfid;
}
tcon = tlink_tcon(tlink);
- pCifsInode = CIFS_I(file->f_path.dentry->d_inode);
-
full_path = build_path_from_dentry(file->f_path.dentry);
if (full_path == NULL) {
rc = -ENOMEM;
char *write_data;
int rc = -EFAULT;
int bytes_written = 0;
- struct cifs_sb_info *cifs_sb;
struct inode *inode;
struct cifsFileInfo *open_file;
return -EFAULT;
inode = page->mapping->host;
- cifs_sb = CIFS_SB(inode->i_sb);
offset += (loff_t)from;
write_data = kmap(page);
cifs_iovec_write(struct file *file, const struct iovec *iov,
unsigned long nr_segs, loff_t *poffset)
{
- size_t total_written = 0, written = 0;
- unsigned long num_pages, npages;
- size_t copied, len, cur_len, i;
+ unsigned int written;
+ unsigned long num_pages, npages, i;
+ size_t copied, len, cur_len;
+ ssize_t total_written = 0;
struct kvec *to_send;
struct page **pages;
struct iov_iter it;
{
int rc;
int xid;
- unsigned int total_read, bytes_read = 0;
+ ssize_t total_read;
+ unsigned int bytes_read = 0;
size_t len, cur_len;
int iov_offset = 0;
struct cifs_sb_info *cifs_sb;
md5 = crypto_alloc_shash("md5", 0, 0);
if (IS_ERR(md5)) {
+ rc = PTR_ERR(md5);
cERROR(1, "%s: Crypto md5 allocation error %d\n", __func__, rc);
- return PTR_ERR(md5);
+ return rc;
}
size = sizeof(struct shash_desc) + crypto_shash_descsize(md5);
sdescmd5 = kmalloc(size, GFP_KERNEL);
{
__u16 mid = 0;
__u16 last_mid;
- int collision;
-
- if (server == NULL)
- return mid;
+ bool collision;
spin_lock(&GlobalMid_Lock);
last_mid = server->CurrentMid; /* we do not want to loop forever */
(and it would also have to have been a request that
did not time out) */
while (server->CurrentMid != last_mid) {
- struct list_head *tmp;
struct mid_q_entry *mid_entry;
+ unsigned int num_mids;
- collision = 0;
+ collision = false;
if (server->CurrentMid == 0)
server->CurrentMid++;
- list_for_each(tmp, &server->pending_mid_q) {
- mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
-
- if ((mid_entry->mid == server->CurrentMid) &&
- (mid_entry->midState == MID_REQUEST_SUBMITTED)) {
+ num_mids = 0;
+ list_for_each_entry(mid_entry, &server->pending_mid_q, qhead) {
+ ++num_mids;
+ if (mid_entry->mid == server->CurrentMid &&
+ mid_entry->midState == MID_REQUEST_SUBMITTED) {
/* This mid is in use, try a different one */
- collision = 1;
+ collision = true;
break;
}
}
- if (collision == 0) {
+
+ /*
+ * if we have more than 32k mids in the list, then something
+ * is very wrong. Possibly a local user is trying to DoS the
+ * box by issuing long-running calls and SIGKILL'ing them. If
+ * we get to 2^16 mids then we're in big trouble as this
+ * function could loop forever.
+ *
+ * Go ahead and assign out the mid in this situation, but force
+ * an eventual reconnect to clean out the pending_mid_q.
+ */
+ if (num_mids > 32768)
+ server->tcpStatus = CifsNeedReconnect;
+
+ if (!collision) {
mid = server->CurrentMid;
break;
}
}
static int
-checkSMBhdr(struct smb_hdr *smb, __u16 mid)
+check_smb_hdr(struct smb_hdr *smb, __u16 mid)
{
- /* Make sure that this really is an SMB, that it is a response,
- and that the message ids match */
- if ((*(__le32 *) smb->Protocol == cpu_to_le32(0x424d53ff)) &&
- (mid == smb->Mid)) {
- if (smb->Flags & SMBFLG_RESPONSE)
- return 0;
- else {
- /* only one valid case where server sends us request */
- if (smb->Command == SMB_COM_LOCKING_ANDX)
- return 0;
- else
- cERROR(1, "Received Request not response");
- }
- } else { /* bad signature or mid */
- if (*(__le32 *) smb->Protocol != cpu_to_le32(0x424d53ff))
- cERROR(1, "Bad protocol string signature header %x",
- *(unsigned int *) smb->Protocol);
- if (mid != smb->Mid)
- cERROR(1, "Mids do not match");
+ /* does it have the right SMB "signature" ? */
+ if (*(__le32 *) smb->Protocol != cpu_to_le32(0x424d53ff)) {
+ cERROR(1, "Bad protocol string signature header 0x%x",
+ *(unsigned int *)smb->Protocol);
+ return 1;
+ }
+
+ /* Make sure that message ids match */
+ if (mid != smb->Mid) {
+ cERROR(1, "Mids do not match. received=%u expected=%u",
+ smb->Mid, mid);
+ return 1;
}
- cERROR(1, "bad smb detected. The Mid=%d", smb->Mid);
+
+ /* if it's a response then accept */
+ if (smb->Flags & SMBFLG_RESPONSE)
+ return 0;
+
+ /* only one valid case where server sends us request */
+ if (smb->Command == SMB_COM_LOCKING_ANDX)
+ return 0;
+
+ cERROR(1, "Server sent request, not response. mid=%u", smb->Mid);
return 1;
}
return 1;
}
- if (checkSMBhdr(smb, mid))
+ if (check_smb_hdr(smb, mid))
return 1;
clc_len = smbCalcSize_LE(smb);
if (((4 + len) & 0xFFFF) == (clc_len & 0xFFFF))
return 0; /* bcc wrapped */
}
- cFYI(1, "Calculated size %d vs length %d mismatch for mid %d",
+ cFYI(1, "Calculated size %u vs length %u mismatch for mid=%u",
clc_len, 4 + len, smb->Mid);
- /* Windows XP can return a few bytes too much, presumably
- an illegal pad, at the end of byte range lock responses
- so we allow for that three byte pad, as long as actual
- received length is as long or longer than calculated length */
- /* We have now had to extend this more, since there is a
- case in which it needs to be bigger still to handle a
- malformed response to transact2 findfirst from WinXP when
- access denied is returned and thus bcc and wct are zero
- but server says length is 0x21 bytes too long as if the server
- forget to reset the smb rfc1001 length when it reset the
- wct and bcc to minimum size and drop the t2 parms and data */
- if ((4+len > clc_len) && (len <= clc_len + 512))
- return 0;
- else {
- cERROR(1, "RFC1001 size %d bigger than SMB for Mid=%d",
+
+ if (4 + len < clc_len) {
+ cERROR(1, "RFC1001 size %u smaller than SMB for mid=%u",
len, smb->Mid);
return 1;
+ } else if (len > clc_len + 512) {
+ /*
+ * Some servers (Windows XP in particular) send more
+ * data than the lengths in the SMB packet would
+ * indicate on certain calls (byte range locks and
+ * trans2 find first calls in particular). While the
+ * client can handle such a frame by ignoring the
+			 * trailing data, we choose to limit the amount of extra
+ * data to 512 bytes.
+ */
+ cERROR(1, "RFC1001 size %u more than 512 bytes larger "
+ "than SMB for mid=%u", len, smb->Mid);
+ return 1;
}
}
return 0;
{
int rc = 0;
int xid, i;
- struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
struct cifsFileInfo *cifsFile = NULL;
char *current_entry;
xid = GetXid();
- cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
-
/*
* Ensure FindFirst doesn't fail before doing filldir() for '.' and
* '..'. Otherwise we won't be able to notify VFS in case of failure.
md4 = crypto_alloc_shash("md4", 0, 0);
if (IS_ERR(md4)) {
+ rc = PTR_ERR(md4);
cERROR(1, "%s: Crypto md4 allocation error %d\n", __func__, rc);
- return PTR_ERR(md4);
+ return rc;
}
size = sizeof(struct shash_desc) + crypto_shash_descsize(md4);
sdescmd4 = kmalloc(size, GFP_KERNEL);
server->tcpStatus = CifsNeedReconnect;
}
- if (rc < 0) {
+ if (rc < 0 && rc != -EINTR)
cERROR(1, "Error %d sending data on socket to server", rc);
- } else
+ else
rc = 0;
/* Don't want to modify the buffer as a
if (rc)
return rc;
+ /* enable signing if server requires it */
+ if (server->secMode & (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED))
+ in_buf->Flags2 |= SMBFLG2_SECURITY_SIGNATURE;
+
mutex_lock(&server->srv_mutex);
mid = AllocMidQEntry(in_buf, server);
if (mid == NULL) {
#endif
mutex_unlock(&ses->server->srv_mutex);
- cifs_small_buf_release(in_buf);
- if (rc < 0)
+ if (rc < 0) {
+ cifs_small_buf_release(in_buf);
goto out;
+ }
- if (long_op == CIFS_ASYNC_OP)
+ if (long_op == CIFS_ASYNC_OP) {
+ cifs_small_buf_release(in_buf);
goto out;
+ }
rc = wait_for_response(ses->server, midQ);
- if (rc != 0)
- goto out;
+ if (rc != 0) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
+ if (midQ->midState == MID_REQUEST_SUBMITTED) {
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ cifs_small_buf_release(in_buf);
+ atomic_dec(&ses->server->inFlight);
+ wake_up(&ses->server->request_q);
+ return rc;
+ }
+ spin_unlock(&GlobalMid_Lock);
+ }
+
+ cifs_small_buf_release(in_buf);
rc = sync_mid_result(midQ, ses->server);
if (rc != 0) {
goto out;
rc = wait_for_response(ses->server, midQ);
- if (rc != 0)
- goto out;
+ if (rc != 0) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
+ if (midQ->midState == MID_REQUEST_SUBMITTED) {
+ /* no longer considered to be "in-flight" */
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ atomic_dec(&ses->server->inFlight);
+ wake_up(&ses->server->request_q);
+ return rc;
+ }
+ spin_unlock(&GlobalMid_Lock);
+ }
rc = sync_mid_result(midQ, ses->server);
if (rc != 0) {
}
}
- if (wait_for_response(ses->server, midQ) == 0) {
- /* We got the response - restart system call. */
- rstart = 1;
+ rc = wait_for_response(ses->server, midQ);
+ if (rc) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
+ if (midQ->midState == MID_REQUEST_SUBMITTED) {
+ /* no longer considered to be "in-flight" */
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ return rc;
+ }
+ spin_unlock(&GlobalMid_Lock);
}
+
+ /* We got the response - restart system call. */
+ rstart = 1;
}
rc = sync_mid_result(midQ, ses->server);
return ep_scan_ready_list(ep, ep_send_events_proc, &esed);
}
+static inline struct timespec ep_set_mstimeout(long ms)
+{
+ struct timespec now, ts = {
+ .tv_sec = ms / MSEC_PER_SEC,
+ .tv_nsec = NSEC_PER_MSEC * (ms % MSEC_PER_SEC),
+ };
+
+ ktime_get_ts(&now);
+ return timespec_add_safe(now, ts);
+}
+
static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
int maxevents, long timeout)
{
unsigned long flags;
long slack;
wait_queue_t wait;
- struct timespec end_time;
ktime_t expires, *to = NULL;
if (timeout > 0) {
- ktime_get_ts(&end_time);
- timespec_add_ns(&end_time, (u64)timeout * NSEC_PER_MSEC);
+ struct timespec end_time = ep_set_mstimeout(timeout);
+
slack = select_estimate_accuracy(&end_time);
to = &expires;
*to = timespec_to_ktime(end_time);
goto out;
file = do_filp_open(AT_FDCWD, tmp,
- O_LARGEFILE | O_RDONLY | FMODE_EXEC, 0,
+ O_LARGEFILE | O_RDONLY | __FMODE_EXEC, 0,
MAY_READ | MAY_EXEC | MAY_OPEN);
putname(tmp);
error = PTR_ERR(file);
int err;
file = do_filp_open(AT_FDCWD, name,
- O_LARGEFILE | O_RDONLY | FMODE_EXEC, 0,
+ O_LARGEFILE | O_RDONLY | __FMODE_EXEC, 0,
MAY_EXEC | MAY_OPEN);
if (IS_ERR(file))
goto out;
memcpy(oi->i_data, fcb.i_data, sizeof(fcb.i_data));
}
- inode->i_mapping->backing_dev_info = sb->s_bdi;
if (S_ISREG(inode->i_mode)) {
inode->i_op = &exofs_file_inode_operations;
inode->i_fop = &exofs_file_operations;
sbi = sb->s_fs_info;
- inode->i_mapping->backing_dev_info = sb->s_bdi;
sb->s_dirt = 1;
inode_init_owner(inode, dir, mode);
inode->i_ino = sbi->s_nextid++;
__O_SYNC | O_DSYNC | FASYNC |
O_DIRECT | O_LARGEFILE | O_DIRECTORY |
O_NOFOLLOW | O_NOATIME | O_CLOEXEC |
- FMODE_EXEC
+ __FMODE_EXEC
));
fasync_cache = kmem_cache_create("fasync_cache",
goto fail;
percpu_counter_inc(&nr_files);
+ f->f_cred = get_cred(cred);
if (security_file_alloc(f))
goto fail_sec;
INIT_LIST_HEAD(&f->f_u.fu_list);
atomic_long_set(&f->f_count, 1);
rwlock_init(&f->f_owner.lock);
- f->f_cred = get_cred(cred);
spin_lock_init(&f->f_lock);
eventpoll_init_file(f);
/* f->f_version: 0 */
u32 start, len, goal;
int res;
- if (sbi->total_blocks - sbi->free_blocks + 8 >
- sbi->alloc_file->i_size * 8) {
+ if (sbi->alloc_file->i_size * 8 <
+ sbi->total_blocks - sbi->free_blocks + 8) {
/* extend alloc file */
printk(KERN_ERR "hfs: extend alloc file! "
"(%llu,%u,%u)\n",
res = hfsplus_submit_bio(sb->s_bdev, *part_start + HFS_PMAP_BLK,
data, READ);
if (res)
- return res;
+ goto out;
switch (be16_to_cpu(*((__be16 *)data))) {
case HFS_OLD_PMAP_MAGIC:
res = -ENOENT;
break;
}
-
+out:
kfree(data);
return res;
}
struct inode *root, *inode;
struct qstr str;
struct nls_table *nls = NULL;
- int err = -EINVAL;
+ int err;
+ err = -EINVAL;
sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
if (!sbi)
- return -ENOMEM;
+ goto out;
sb->s_fs_info = sbi;
mutex_init(&sbi->alloc_mutex);
mutex_init(&sbi->vh_mutex);
hfsplus_fill_defaults(sbi);
+
+ err = -EINVAL;
if (!hfsplus_parse_options(data, sbi)) {
printk(KERN_ERR "hfs: unable to parse mount options\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
/* temporarily use utf8 to correctly find the hidden dir below */
sbi->nls = load_nls("utf8");
if (!sbi->nls) {
printk(KERN_ERR "hfs: unable to load nls for utf8\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
/* Grab the volume header */
if (hfsplus_read_wrapper(sb)) {
if (!silent)
printk(KERN_WARNING "hfs: unable to find HFS+ superblock\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
vhdr = sbi->s_vhdr;
if (be16_to_cpu(vhdr->version) < HFSPLUS_MIN_VERSION ||
be16_to_cpu(vhdr->version) > HFSPLUS_CURRENT_VERSION) {
printk(KERN_ERR "hfs: wrong filesystem version\n");
- goto cleanup;
+ goto out_free_vhdr;
}
sbi->total_blocks = be32_to_cpu(vhdr->total_blocks);
sbi->free_blocks = be32_to_cpu(vhdr->free_blocks);
sbi->ext_tree = hfs_btree_open(sb, HFSPLUS_EXT_CNID);
if (!sbi->ext_tree) {
printk(KERN_ERR "hfs: failed to load extents file\n");
- goto cleanup;
+ goto out_free_vhdr;
}
sbi->cat_tree = hfs_btree_open(sb, HFSPLUS_CAT_CNID);
if (!sbi->cat_tree) {
printk(KERN_ERR "hfs: failed to load catalog file\n");
- goto cleanup;
+ goto out_close_ext_tree;
}
inode = hfsplus_iget(sb, HFSPLUS_ALLOC_CNID);
if (IS_ERR(inode)) {
printk(KERN_ERR "hfs: failed to load allocation file\n");
err = PTR_ERR(inode);
- goto cleanup;
+ goto out_close_cat_tree;
}
sbi->alloc_file = inode;
if (IS_ERR(root)) {
printk(KERN_ERR "hfs: failed to load root directory\n");
err = PTR_ERR(root);
- goto cleanup;
- }
- sb->s_d_op = &hfsplus_dentry_operations;
- sb->s_root = d_alloc_root(root);
- if (!sb->s_root) {
- iput(root);
- err = -ENOMEM;
- goto cleanup;
+ goto out_put_alloc_file;
}
str.len = sizeof(HFSP_HIDDENDIR_NAME) - 1;
if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
hfs_find_exit(&fd);
if (entry.type != cpu_to_be16(HFSPLUS_FOLDER))
- goto cleanup;
+ goto out_put_root;
inode = hfsplus_iget(sb, be32_to_cpu(entry.folder.id));
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
- goto cleanup;
+ goto out_put_root;
}
sbi->hidden_dir = inode;
} else
hfs_find_exit(&fd);
- if (sb->s_flags & MS_RDONLY)
- goto out;
+ if (!(sb->s_flags & MS_RDONLY)) {
+ /*
+ * H+LX == hfsplusutils, H+Lx == this driver, H+lx is unused
+ * all three are registered with Apple for our use
+ */
+ vhdr->last_mount_vers = cpu_to_be32(HFSP_MOUNT_VERSION);
+ vhdr->modify_date = hfsp_now2mt();
+ be32_add_cpu(&vhdr->write_count, 1);
+ vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_UNMNT);
+ vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_INCNSTNT);
+ hfsplus_sync_fs(sb, 1);
- /* H+LX == hfsplusutils, H+Lx == this driver, H+lx is unused
- * all three are registered with Apple for our use
- */
- vhdr->last_mount_vers = cpu_to_be32(HFSP_MOUNT_VERSION);
- vhdr->modify_date = hfsp_now2mt();
- be32_add_cpu(&vhdr->write_count, 1);
- vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_UNMNT);
- vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_INCNSTNT);
- hfsplus_sync_fs(sb, 1);
-
- if (!sbi->hidden_dir) {
- mutex_lock(&sbi->vh_mutex);
- sbi->hidden_dir = hfsplus_new_inode(sb, S_IFDIR);
- hfsplus_create_cat(sbi->hidden_dir->i_ino, sb->s_root->d_inode,
- &str, sbi->hidden_dir);
- mutex_unlock(&sbi->vh_mutex);
-
- hfsplus_mark_inode_dirty(sbi->hidden_dir, HFSPLUS_I_CAT_DIRTY);
+ if (!sbi->hidden_dir) {
+ mutex_lock(&sbi->vh_mutex);
+ sbi->hidden_dir = hfsplus_new_inode(sb, S_IFDIR);
+ hfsplus_create_cat(sbi->hidden_dir->i_ino, root, &str,
+ sbi->hidden_dir);
+ mutex_unlock(&sbi->vh_mutex);
+
+ hfsplus_mark_inode_dirty(sbi->hidden_dir,
+ HFSPLUS_I_CAT_DIRTY);
+ }
}
-out:
+
+ sb->s_d_op = &hfsplus_dentry_operations;
+ sb->s_root = d_alloc_root(root);
+ if (!sb->s_root) {
+ err = -ENOMEM;
+ goto out_put_hidden_dir;
+ }
+
unload_nls(sbi->nls);
sbi->nls = nls;
return 0;
-cleanup:
- hfsplus_put_super(sb);
+out_put_hidden_dir:
+ iput(sbi->hidden_dir);
+out_put_root:
+	iput(root);
+out_put_alloc_file:
+ iput(sbi->alloc_file);
+out_close_cat_tree:
+ hfs_btree_close(sbi->cat_tree);
+out_close_ext_tree:
+ hfs_btree_close(sbi->ext_tree);
+out_free_vhdr:
+ kfree(sbi->s_vhdr);
+ kfree(sbi->s_backup_vhdr);
+out_unload_nls:
+ unload_nls(sbi->nls);
unload_nls(nls);
+ kfree(sbi);
+out:
return err;
}
break;
case cpu_to_be16(HFSP_WRAP_MAGIC):
if (!hfsplus_read_mdb(sbi->s_vhdr, &wd))
- goto out;
+ goto out_free_backup_vhdr;
wd.ablk_size >>= HFSPLUS_SECTOR_SHIFT;
part_start += wd.ablk_start + wd.embed_start * wd.ablk_size;
part_size = wd.embed_count * wd.ablk_size;
* (should do this only for cdrom/loop though)
*/
if (hfs_part_find(sb, &part_start, &part_size))
- goto out;
+ goto out_free_backup_vhdr;
goto reread;
}
len = isize;
}
+ /*
+ * Some filesystems can't deal with being asked to map less than
+ * blocksize, so make sure our len is at least block length.
+ */
+ if (logical_to_blk(inode, len) == 0)
+ len = blk_to_logical(inode, 1);
+
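A minimal sketch of the clamp above (assuming logical_to_blk()/blk_to_logical() are simple shifts by the inode's block-size bits, which is how the surrounding fiemap helper treats them): a sub-block request such as 512 bytes on a 4096-byte-block filesystem maps to zero blocks and is bumped to one full block.

#include <stdio.h>

#define BLKBITS 12	/* assume 4096-byte blocks for the example */
#define logical_to_blk(len)  ((len) >> BLKBITS)
#define blk_to_logical(blk)  ((unsigned long long)(blk) << BLKBITS)

int main(void)
{
	unsigned long long len = 512;	/* caller asked for less than one block */

	if (logical_to_blk(len) == 0)	/* 512 >> 12 == 0, would map nothing */
		len = blk_to_logical(1);	/* round up to one full block: 4096 */

	printf("len = %llu\n", len);
	return 0;
}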
start_blk = logical_to_blk(inode, start);
last_blk = logical_to_blk(inode, start + len - 1);
#endif
#ifdef CONFIG_EVENT_TRACING
-#define FTRACE_EVENTS() VMLINUX_SYMBOL(__start_ftrace_events) = .; \
+#define FTRACE_EVENTS() . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start_ftrace_events) = .; \
*(_ftrace_events) \
VMLINUX_SYMBOL(__stop_ftrace_events) = .;
#else
#endif
#ifdef CONFIG_FTRACE_SYSCALLS
-#define TRACE_SYSCALLS() VMLINUX_SYMBOL(__start_syscalls_metadata) = .; \
+#define TRACE_SYSCALLS() . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start_syscalls_metadata) = .; \
*(__syscalls_metadata) \
VMLINUX_SYMBOL(__stop_syscalls_metadata) = .;
#else
CPU_KEEP(exit.data) \
MEM_KEEP(init.data) \
MEM_KEEP(exit.data) \
- . = ALIGN(32); \
- VMLINUX_SYMBOL(__start___tracepoints) = .; \
+ STRUCT_ALIGN(); \
*(__tracepoints) \
- VMLINUX_SYMBOL(__stop___tracepoints) = .; \
/* implement dynamic printk debug */ \
. = ALIGN(8); \
VMLINUX_SYMBOL(__start___verbose) = .; \
VMLINUX_SYMBOL(__stop___verbose) = .; \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
- TRACE_PRINTKS() \
- \
- STRUCT_ALIGN(); \
- FTRACE_EVENTS() \
- \
- STRUCT_ALIGN(); \
- TRACE_SYSCALLS()
+ TRACE_PRINTKS()
/*
* Data section helpers
VMLINUX_SYMBOL(__start_rodata) = .; \
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
+ . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start___tracepoints_ptrs) = .; \
+ *(__tracepoints_ptrs) /* Tracepoints: pointer array */\
+ VMLINUX_SYMBOL(__stop___tracepoints_ptrs) = .; \
*(__markers_strings) /* Markers: strings */ \
*(__tracepoints_strings)/* Tracepoints: strings */ \
} \
KERNEL_CTORS() \
*(.init.rodata) \
MCOUNT_REC() \
+ FTRACE_EVENTS() \
+ TRACE_SYSCALLS() \
DEV_DISCARD(init.rodata) \
CPU_DISCARD(init.rodata) \
MEM_DISCARD(init.rodata) \
extern u32 drm_vblank_count(struct drm_device *dev, int crtc);
extern u32 drm_vblank_count_and_time(struct drm_device *dev, int crtc,
struct timeval *vblanktime);
-extern void drm_handle_vblank(struct drm_device *dev, int crtc);
+extern bool drm_handle_vblank(struct drm_device *dev, int crtc);
extern int drm_vblank_get(struct drm_device *dev, int crtc);
extern void drm_vblank_put(struct drm_device *dev, int crtc);
extern void drm_vblank_off(struct drm_device *dev, int crtc);
/**
* drm_crtc_funcs - control CRTCs for a given device
+ * @reset: reset CRTC after state has been invalidated (e.g. resume)
* @dpms: control display power levels
* @save: save CRTC state
 * @restore: restore CRTC state
void (*save)(struct drm_crtc *crtc); /* suspend? */
/* Restore CRTC state */
void (*restore)(struct drm_crtc *crtc); /* resume? */
+ /* Reset CRTC state */
+ void (*reset)(struct drm_crtc *crtc);
/* cursor controls */
int (*cursor_set)(struct drm_crtc *crtc, struct drm_file *file_priv,
* @dpms: set power state (see drm_crtc_funcs above)
* @save: save connector state
* @restore: restore connector state
+ * @reset: reset connector after state has been invalidated (e.g. resume)
* @mode_valid: is this mode valid on the given connector?
* @mode_fixup: try to fixup proposed mode for this connector
* @mode_set: set this mode
void (*dpms)(struct drm_connector *connector, int mode);
void (*save)(struct drm_connector *connector);
void (*restore)(struct drm_connector *connector);
+ void (*reset)(struct drm_connector *connector);
/* Check to see if anything is attached to the connector.
* @force is set to false whilst polling, true when checking the
};
struct drm_encoder_funcs {
+ void (*reset)(struct drm_encoder *encoder);
void (*destroy)(struct drm_encoder *encoder);
};
struct drm_display_mode *mode);
extern void drm_mode_debug_printmodeline(struct drm_display_mode *mode);
extern void drm_mode_config_init(struct drm_device *dev);
+extern void drm_mode_config_reset(struct drm_device *dev);
extern void drm_mode_config_cleanup(struct drm_device *dev);
extern void drm_mode_set_name(struct drm_display_mode *mode);
extern bool drm_mode_equal(struct drm_display_mode *mode1, struct drm_display_mode *mode2);
{0x1002, 0x4156, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
{0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP}, \
{0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
- {0x1002, 0x4243, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
{0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
{0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
{0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
header-y += byteorder/
header-y += can/
+header-y += caif/
header-y += dvb/
header-y += hdlc/
header-y += isdn/
--- /dev/null
+header-y += caif_socket.h
+header-y += if_caif.h
void __user *buffer, size_t *lenp, loff_t *ppos);
int __init get_filesystem_list(char *buf);
+#define __FMODE_EXEC ((__force int) FMODE_EXEC)
+#define __FMODE_NONOTIFY ((__force int) FMODE_NONOTIFY)
+
#define ACC_MODE(x) ("\004\002\006\006"[(x)&O_ACCMODE])
#define OPEN_FMODE(flag) ((__force fmode_t)(((flag + 1) & O_ACCMODE) | \
- (flag & FMODE_NONOTIFY)))
+ (flag & __FMODE_NONOTIFY)))
#endif /* __KERNEL__ */
#endif /* _LINUX_FS_H */
#define IRQF_MODIFY_MASK \
(IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
- IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL)
+ IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
+ IRQ_PER_CPU)
#ifdef CONFIG_IRQ_PER_CPU
# define CHECK_IRQ_PER_CPU(var) ((var) & IRQ_PER_CPU)
keeping pointers to this stuff */
char *args;
#ifdef CONFIG_TRACEPOINTS
- struct tracepoint *tracepoints;
+ struct tracepoint * const *tracepoints_ptrs;
unsigned int num_tracepoints;
#endif
#ifdef HAVE_JUMP_LABEL
unsigned int num_trace_bprintk_fmt;
#endif
#ifdef CONFIG_EVENT_TRACING
- struct ftrace_event_call *trace_events;
+ struct ftrace_event_call **trace_events;
unsigned int num_trace_events;
#endif
#ifdef CONFIG_FTRACE_MCOUNT_RECORD
extern int ip_mroute_setsockopt(struct sock *, int, char __user *, unsigned int);
extern int ip_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
extern int ipmr_ioctl(struct sock *sk, int cmd, void __user *arg);
+extern int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
extern int ip_mr_init(void);
#else
static inline
extern int ip6_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
extern int ip6_mr_input(struct sk_buff *skb);
extern int ip6mr_ioctl(struct sock *sk, int cmd, void __user *arg);
+extern int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
extern int ip6_mr_init(void);
extern void ip6_mr_cleanup(void);
#else
return ret;
}
+/**
+ * res_counter_check_margin - check if the counter allows charging
+ * @cnt: the resource counter to check
+ * @bytes: the number of bytes to check the remaining space against
+ *
+ * Returns %true if the counter has enough room left to be charged
+ * @bytes, %false if the charge would exceed the limit.
+ */
+static inline bool res_counter_check_margin(struct res_counter *cnt,
+ unsigned long bytes)
+{
+ bool ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cnt->lock, flags);
+ ret = cnt->limit - cnt->usage >= bytes;
+ spin_unlock_irqrestore(&cnt->lock, flags);
+ return ret;
+}
+
static inline bool res_counter_check_under_soft_limit(struct res_counter *cnt)
{
bool ret;
extern struct trace_event_functions exit_syscall_print_funcs;
#define SYSCALL_TRACE_ENTER_EVENT(sname) \
- static struct syscall_metadata \
- __attribute__((__aligned__(4))) __syscall_meta_##sname; \
+ static struct syscall_metadata __syscall_meta_##sname; \
static struct ftrace_event_call __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("_ftrace_events"))) \
event_enter_##sname = { \
.name = "sys_enter"#sname, \
.class = &event_class_syscall_enter, \
.event.funcs = &enter_syscall_print_funcs, \
.data = (void *)&__syscall_meta_##sname,\
}; \
+ static struct ftrace_event_call __used \
+ __attribute__((section("_ftrace_events"))) \
+ *__event_enter_##sname = &event_enter_##sname; \
__TRACE_EVENT_FLAGS(enter_##sname, TRACE_EVENT_FL_CAP_ANY)
#define SYSCALL_TRACE_EXIT_EVENT(sname) \
- static struct syscall_metadata \
- __attribute__((__aligned__(4))) __syscall_meta_##sname; \
+ static struct syscall_metadata __syscall_meta_##sname; \
static struct ftrace_event_call __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("_ftrace_events"))) \
event_exit_##sname = { \
.name = "sys_exit"#sname, \
.class = &event_class_syscall_exit, \
.event.funcs = &exit_syscall_print_funcs, \
.data = (void *)&__syscall_meta_##sname,\
}; \
+ static struct ftrace_event_call __used \
+ __attribute__((section("_ftrace_events"))) \
+ *__event_exit_##sname = &event_exit_##sname; \
__TRACE_EVENT_FLAGS(exit_##sname, TRACE_EVENT_FL_CAP_ANY)
#define SYSCALL_METADATA(sname, nb) \
SYSCALL_TRACE_ENTER_EVENT(sname); \
SYSCALL_TRACE_EXIT_EVENT(sname); \
static struct syscall_metadata __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("__syscalls_metadata"))) \
__syscall_meta_##sname = { \
.name = "sys"#sname, \
.nb_args = nb, \
.enter_event = &event_enter_##sname, \
.exit_event = &event_exit_##sname, \
.enter_fields = LIST_HEAD_INIT(__syscall_meta_##sname.enter_fields), \
- };
+ }; \
+ static struct syscall_metadata __used \
+ __attribute__((section("__syscalls_metadata"))) \
+ *__p_syscall_meta_##sname = &__syscall_meta_##sname;
#define SYSCALL_DEFINE0(sname) \
SYSCALL_TRACE_ENTER_EVENT(_##sname); \
SYSCALL_TRACE_EXIT_EVENT(_##sname); \
static struct syscall_metadata __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("__syscalls_metadata"))) \
__syscall_meta__##sname = { \
.name = "sys_"#sname, \
.nb_args = 0, \
.exit_event = &event_exit__##sname, \
.enter_fields = LIST_HEAD_INIT(__syscall_meta__##sname.enter_fields), \
}; \
+ static struct syscall_metadata __used \
+ __attribute__((section("__syscalls_metadata"))) \
+ *__p_syscall_meta_##sname = &__syscall_meta__##sname; \
asmlinkage long sys_##sname(void)
#else
#define SYSCALL_DEFINE0(name) asmlinkage long sys_##name(void)
void (*regfunc)(void);
void (*unregfunc)(void);
struct tracepoint_func __rcu *funcs;
-} __attribute__((aligned(32))); /*
- * Aligned on 32 bytes because it is
- * globally visible and gcc happily
- * align these on the structure size.
- * Keep in sync with vmlinux.lds.h.
- */
+};
/*
* Connect a probe to a tracepoint.
struct tracepoint_iter {
struct module *module;
- struct tracepoint *tracepoint;
+ struct tracepoint * const *tracepoint;
};
extern void tracepoint_iter_start(struct tracepoint_iter *iter);
extern void tracepoint_iter_next(struct tracepoint_iter *iter);
extern void tracepoint_iter_stop(struct tracepoint_iter *iter);
extern void tracepoint_iter_reset(struct tracepoint_iter *iter);
-extern int tracepoint_get_iter_range(struct tracepoint **tracepoint,
- struct tracepoint *begin, struct tracepoint *end);
+extern int tracepoint_get_iter_range(struct tracepoint * const **tracepoint,
+ struct tracepoint * const *begin, struct tracepoint * const *end);
/*
* tracepoint_synchronize_unregister must be called between the last tracepoint
#define PARAMS(args...) args
#ifdef CONFIG_TRACEPOINTS
-extern void tracepoint_update_probe_range(struct tracepoint *begin,
- struct tracepoint *end);
+extern
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end);
#else
-static inline void tracepoint_update_probe_range(struct tracepoint *begin,
- struct tracepoint *end)
+static inline
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end)
{ }
#endif /* CONFIG_TRACEPOINTS */
{ \
}
+/*
+ * We have no guarantee that gcc and the linker won't up-align the tracepoint
+ * structures, so we create an array of pointers that will be used
+ * to iterate over the tracepoints.
+ */
#define DEFINE_TRACE_FN(name, reg, unreg) \
static const char __tpstrtab_##name[] \
__attribute__((section("__tracepoints_strings"))) = #name; \
struct tracepoint __tracepoint_##name \
- __attribute__((section("__tracepoints"), aligned(32))) = \
- { __tpstrtab_##name, 0, reg, unreg, NULL }
+ __attribute__((section("__tracepoints"))) = \
+ { __tpstrtab_##name, 0, reg, unreg, NULL }; \
+ static struct tracepoint * const __tracepoint_ptr_##name __used \
+ __attribute__((section("__tracepoints_ptrs"))) = \
+ &__tracepoint_##name;
#define DEFINE_TRACE(name) \
DEFINE_TRACE_FN(name, NULL, NULL);
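For illustration only, a minimal standalone sketch (userspace C on Linux/ELF with GCC, not kernel code; all names are made up) of the section-of-pointers pattern DEFINE_TRACE_FN now uses. It relies on the __start_<section>/__stop_<section> symbols the GNU linker provides for sections whose names are valid C identifiers: the structures themselves may be padded or over-aligned by the toolchain, but an array of plain pointers is always densely packed, so iterating between the start and stop symbols is safe.

/* sketch.c - hypothetical illustration only, not kernel code */
#include <stdio.h>

struct widget { const char *name; };

/* put the object anywhere; put a *pointer* to it in a dedicated section */
#define DEFINE_WIDGET(n) \
	static struct widget __widget_##n = { #n }; \
	static struct widget * const __widget_ptr_##n \
	__attribute__((used, section("widget_ptrs"))) = &__widget_##n

DEFINE_WIDGET(foo);
DEFINE_WIDGET(bar);

/* provided automatically by GNU ld for the "widget_ptrs" section */
extern struct widget * const __start_widget_ptrs[];
extern struct widget * const __stop_widget_ptrs[];

int main(void)
{
	struct widget * const *iter;

	/* iterate over the pointer array, not over the structs themselves */
	for (iter = __start_widget_ptrs; iter < __stop_widget_ptrs; iter++)
		printf("%s\n", (*iter)->name);
	return 0;
}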
#define USB_CDC_COMM_FEATURE 0x01
#define USB_CDC_CAP_LINE 0x02
-#define USB_CDC_CAP_BRK 0x04
+#define USB_CDC_CAP_BRK 0x04
#define USB_CDC_CAP_NOTIFY 0x08
/* "Union Functional Descriptor" from CDC spec 5.2.3.8 */
__le16 wLength;
} __attribute__ ((packed));
+struct usb_cdc_speed_change {
+ __le32 DLBitRRate; /* contains the downlink bit rate (IN pipe) */
+ __le32 ULBitRate; /* contains the uplink bit rate (OUT pipe) */
+} __attribute__ ((packed));
+
/*-------------------------------------------------------------------------*/
/*
__le16 wNdpOutDivisor;
__le16 wNdpOutPayloadRemainder;
__le16 wNdpOutAlignment;
- __le16 wPadding2;
+ __le16 wNtbOutMaxDatagrams;
} __attribute__ ((packed));
/*
__le16 wHeaderLength;
__le16 wSequence;
__le16 wBlockLength;
- __le16 wFpIndex;
+ __le16 wNdpIndex;
} __attribute__ ((packed));
struct usb_cdc_ncm_nth32 {
__le16 wHeaderLength;
__le16 wSequence;
__le32 dwBlockLength;
- __le32 dwFpIndex;
+ __le32 dwNdpIndex;
} __attribute__ ((packed));
/*
struct usb_cdc_ncm_ndp16 {
__le32 dwSignature;
__le16 wLength;
- __le16 wNextFpIndex;
+ __le16 wNextNdpIndex;
struct usb_cdc_ncm_dpe16 dpe16[0];
} __attribute__ ((packed));
#define USB_CDC_NCM_NCAP_ENCAP_COMMAND (1 << 2)
#define USB_CDC_NCM_NCAP_MAX_DATAGRAM_SIZE (1 << 3)
#define USB_CDC_NCM_NCAP_CRC_MODE (1 << 4)
+#define USB_CDC_NCM_NCAP_NTB_INPUT_SIZE (1 << 5)
/* CDC NCM subclass Table 6-3: NTB Parameter Structure */
#define USB_CDC_NCM_NTB16_SUPPORTED (1 << 0)
#define USB_CDC_NCM_NTB_MIN_IN_SIZE 2048
#define USB_CDC_NCM_NTB_MIN_OUT_SIZE 2048
+/* NTB Input Size Structure */
+struct usb_cdc_ncm_ndp_input_size {
+ __le32 dwNtbInMaxSize;
+ __le16 wNtbInMaxDatagrams;
+ __le16 wReserved;
+} __attribute__ ((packed));
+
/* CDC NCM subclass 6.2.11 SetCrcMode */
#define USB_CDC_NCM_CRC_NOT_APPENDED 0x00
#define USB_CDC_NCM_CRC_APPENDED 0x01
* This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
* anyone can use the definitions to implement compatible drivers/servers.
*
- * Copyright (C) Red Hat, Inc., 2009, 2010
+ * Copyright (C) Red Hat, Inc., 2009, 2010, 2011
+ * Copyright (C) Amit Shah <amit.shah@redhat.com>, 2009, 2010, 2011
*/
/* Feature bits */
*/
static inline void genlmsg_cancel(struct sk_buff *skb, void *hdr)
{
- nlmsg_cancel(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
+ if (hdr)
+ nlmsg_cancel(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
}
/**
if (e == NULL)
return;
- if (!(e->ctmask & (1 << event)))
- return;
-
set_bit(event, &e->cache);
}
int level,
int optname, char __user *optval,
int __user *option);
+ int (*compat_ioctl)(struct sock *sk,
+ unsigned int cmd, unsigned long arg);
#endif
int (*sendmsg)(struct kiocb *iocb, struct sock *sk,
struct msghdr *msg, size_t len);
#define _SCSI_SCSI_H
#include <linux/types.h>
+#include <linux/scatterlist.h>
struct scsi_cmnd;
* .reg = ftrace_event_reg,
* };
*
- * static struct ftrace_event_call __used
- * __attribute__((__aligned__(4)))
- * __attribute__((section("_ftrace_events"))) event_<call> = {
+ * static struct ftrace_event_call event_<call> = {
* .name = "<call>",
* .class = event_class_<template>,
* .event = &ftrace_event_type_<call>,
* .print_fmt = print_fmt_<call>,
* };
+ * // it's only safe to use pointers when doing linker tricks to
+ * // create an array.
+ * static struct ftrace_event_call __used
+ * __attribute__((section("_ftrace_events"))) *__event_<call> = &event_<call>;
*
*/
#undef DEFINE_EVENT
#define DEFINE_EVENT(template, call, proto, args) \
\
-static struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+static struct ftrace_event_call __used event_##call = { \
.name = #call, \
.class = &event_class_##template, \
.event.funcs = &ftrace_event_type_funcs_##template, \
.print_fmt = print_fmt_##template, \
-};
+}; \
+static struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
#undef DEFINE_EVENT_PRINT
#define DEFINE_EVENT_PRINT(template, call, proto, args, print) \
\
static const char print_fmt_##call[] = print; \
\
-static struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+static struct ftrace_event_call __used event_##call = { \
.name = #call, \
.class = &event_class_##template, \
.event.funcs = &ftrace_event_type_funcs_##call, \
.print_fmt = print_fmt_##call, \
-}
+}; \
+static struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
pre_start = 0;
read_current_timer(&start);
start_jiffies = jiffies;
- while (jiffies <= (start_jiffies + 1)) {
+ while (time_before_eq(jiffies, start_jiffies + 1)) {
pre_start = start;
read_current_timer(&start);
}
pre_end = 0;
end = post_start;
- while (jiffies <=
- (start_jiffies + 1 + DELAY_CALIBRATION_TICKS)) {
+ while (time_before_eq(jiffies, start_jiffies + 1 +
+ DELAY_CALIBRATION_TICKS)) {
pre_end = end;
read_current_timer(&end);
}
#endif
atomic_set(&new->usage, 1);
+#ifdef CONFIG_DEBUG_CREDENTIALS
+ new->magic = CRED_MAGIC;
+#endif
if (security_cred_alloc_blank(new, GFP_KERNEL) < 0)
goto error;
-#ifdef CONFIG_DEBUG_CREDENTIALS
- new->magic = CRED_MAGIC;
-#endif
return new;
error:
validate_creds(old);
*new = *old;
+ atomic_set(&new->usage, 1);
+ set_cred_subscribers(new, 0);
get_uid(new->user);
get_group_info(new->group_info);
if (security_prepare_creds(new, old, GFP_KERNEL) < 0)
goto error;
- atomic_set(&new->usage, 1);
- set_cred_subscribers(new, 0);
put_cred(old);
validate_creds(new);
return new;
if (cred->magic != CRED_MAGIC)
return true;
#ifdef CONFIG_SECURITY_SELINUX
- if (selinux_is_enabled()) {
+ /*
+ * cred->security == NULL if security_cred_alloc_blank() or
+ * security_prepare_creds() returned an error.
+ */
+ if (selinux_is_enabled() && cred->security) {
if ((unsigned long) cred->security < PAGE_SIZE)
return true;
if ((*(u32 *)cred->security & 0xffffff00) ==
void move_native_irq(int irq)
{
struct irq_desc *desc = irq_to_desc(irq);
+ bool masked;
if (likely(!(desc->status & IRQ_MOVE_PENDING)))
return;
if (unlikely(desc->status & IRQ_DISABLED))
return;
- desc->irq_data.chip->irq_mask(&desc->irq_data);
+ /*
+ * Be careful vs. already masked interrupts. If this is a
+ * threaded interrupt with ONESHOT set, we can end up with an
+ * interrupt storm.
+ */
+ masked = desc->status & IRQ_MASKED;
+ if (!masked)
+ desc->irq_data.chip->irq_mask(&desc->irq_data);
move_masked_irq(irq);
- desc->irq_data.chip->irq_unmask(&desc->irq_data);
+ if (!masked)
+ desc->irq_data.chip->irq_unmask(&desc->irq_data);
}
-
#endif
#ifdef CONFIG_TRACEPOINTS
- mod->tracepoints = section_objs(info, "__tracepoints",
- sizeof(*mod->tracepoints),
- &mod->num_tracepoints);
+ mod->tracepoints_ptrs = section_objs(info, "__tracepoints_ptrs",
+ sizeof(*mod->tracepoints_ptrs),
+ &mod->num_tracepoints);
#endif
#ifdef HAVE_JUMP_LABEL
mod->jump_entries = section_objs(info, "__jump_table",
struct modversion_info *ver,
struct kernel_param *kp,
struct kernel_symbol *ks,
- struct tracepoint *tp)
+ struct tracepoint * const *tp)
{
}
EXPORT_SYMBOL(module_layout);
mutex_lock(&module_mutex);
list_for_each_entry(mod, &modules, list)
if (!mod->taints)
- tracepoint_update_probe_range(mod->tracepoints,
- mod->tracepoints + mod->num_tracepoints);
+ tracepoint_update_probe_range(mod->tracepoints_ptrs,
+ mod->tracepoints_ptrs + mod->num_tracepoints);
mutex_unlock(&module_mutex);
}
else if (iter_mod > iter->module)
iter->tracepoint = NULL;
found = tracepoint_get_iter_range(&iter->tracepoint,
- iter_mod->tracepoints,
- iter_mod->tracepoints
+ iter_mod->tracepoints_ptrs,
+ iter_mod->tracepoints_ptrs
+ iter_mod->num_tracepoints);
if (found) {
iter->module = iter_mod;
return;
raw_spin_lock(&ctx->lock);
- update_context_time(ctx);
+ if (ctx->is_active)
+ update_context_time(ctx);
update_event_times(event);
+ if (event->state == PERF_EVENT_STATE_ACTIVE)
+ event->pmu->read(event);
raw_spin_unlock(&ctx->lock);
-
- event->pmu->read(event);
}
static inline u64 perf_event_count(struct perf_event *event)
* accessed from NMI. Use a temporary manual per cpu allocation
* until that gets sorted out.
*/
- size = sizeof(*entries) + sizeof(struct perf_callchain_entry *) *
- num_possible_cpus();
+ size = offsetof(struct callchain_cpus_entries, cpu_entries[nr_cpu_ids]);
entries = kzalloc(size, GFP_KERNEL);
if (!entries)
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
u64 delta_exec;
- if (!task_has_rt_policy(curr))
+ if (curr->sched_class != &rt_sched_class)
return;
delta_exec = rq->clock_task - curr->se.exec_start;
int del_timer_sync(struct timer_list *timer)
{
#ifdef CONFIG_LOCKDEP
+ unsigned long flags;
+
+ raw_local_irq_save(flags);
local_bh_disable();
lock_map_acquire(&timer->lockdep_map);
lock_map_release(&timer->lockdep_map);
- local_bh_enable();
+ _local_bh_enable();
+ raw_local_irq_restore(flags);
#endif
/*
* don't use it in hardirq context, because it
!blk_tracer_enabled))
return;
+ /*
+ * If the BLK_TC_NOTIFY action mask isn't set, don't send any note
+ * message to the trace.
+ */
+ if (!(bt->act_mask & BLK_TC_NOTIFY))
+ return;
+
local_irq_save(flags);
buf = per_cpu_ptr(bt->msg_data, smp_processor_id());
va_start(args, fmt);
static void trace_module_add_events(struct module *mod)
{
struct ftrace_module_file_ops *file_ops = NULL;
- struct ftrace_event_call *call, *start, *end;
+ struct ftrace_event_call **call, **start, **end;
start = mod->trace_events;
end = mod->trace_events + mod->num_trace_events;
return;
for_each_event(call, start, end) {
- __trace_add_event_call(call, mod,
+ __trace_add_event_call(*call, mod,
&file_ops->id, &file_ops->enable,
&file_ops->filter, &file_ops->format);
}
.priority = 0,
};
-extern struct ftrace_event_call __start_ftrace_events[];
-extern struct ftrace_event_call __stop_ftrace_events[];
+extern struct ftrace_event_call *__start_ftrace_events[];
+extern struct ftrace_event_call *__stop_ftrace_events[];
static char bootup_event_buf[COMMAND_LINE_SIZE] __initdata;
static __init int event_trace_init(void)
{
- struct ftrace_event_call *call;
+ struct ftrace_event_call **call;
struct dentry *d_tracer;
struct dentry *entry;
struct dentry *d_events;
pr_warning("tracing: Failed to allocate common fields");
for_each_event(call, __start_ftrace_events, __stop_ftrace_events) {
- __trace_add_event_call(call, NULL, &ftrace_event_id_fops,
+ __trace_add_event_call(*call, NULL, &ftrace_event_id_fops,
&ftrace_enable_fops,
&ftrace_event_filter_fops,
&ftrace_event_format_fops);
.fields = LIST_HEAD_INIT(event_class_ftrace_##call.fields),\
}; \
\
-struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+struct ftrace_event_call __used event_##call = { \
.name = #call, \
.event.type = etype, \
.class = &event_class_ftrace_##call, \
.print_fmt = print, \
}; \
+struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call;
#include "trace_entries.h"
.raw_init = init_syscall_trace,
};
-extern unsigned long __start_syscalls_metadata[];
-extern unsigned long __stop_syscalls_metadata[];
+extern struct syscall_metadata *__start_syscalls_metadata[];
+extern struct syscall_metadata *__stop_syscalls_metadata[];
static struct syscall_metadata **syscalls_metadata;
-static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
+static __init struct syscall_metadata *
+find_syscall_meta(unsigned long syscall)
{
- struct syscall_metadata *start;
- struct syscall_metadata *stop;
+ struct syscall_metadata **start;
+ struct syscall_metadata **stop;
char str[KSYM_SYMBOL_LEN];
- start = (struct syscall_metadata *)__start_syscalls_metadata;
- stop = (struct syscall_metadata *)__stop_syscalls_metadata;
+ start = __start_syscalls_metadata;
+ stop = __stop_syscalls_metadata;
kallsyms_lookup(syscall, NULL, NULL, NULL, str);
for ( ; start < stop; start++) {
* with "SyS" instead of "sys", leading to an unwanted
* mismatch.
*/
- if (start->name && !strcmp(start->name + 3, str + 3))
- return start;
+ if ((*start)->name && !strcmp((*start)->name + 3, str + 3))
+ return *start;
}
return NULL;
}
#include <linux/sched.h>
#include <linux/jump_label.h>
-extern struct tracepoint __start___tracepoints[];
-extern struct tracepoint __stop___tracepoints[];
+extern struct tracepoint * const __start___tracepoints_ptrs[];
+extern struct tracepoint * const __stop___tracepoints_ptrs[];
/* Set to 1 to enable tracepoint debug output */
static const int tracepoint_debug;
*
* Updates the probe callback corresponding to a range of tracepoints.
*/
-void
-tracepoint_update_probe_range(struct tracepoint *begin, struct tracepoint *end)
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end)
{
- struct tracepoint *iter;
+ struct tracepoint * const *iter;
struct tracepoint_entry *mark_entry;
if (!begin)
mutex_lock(&tracepoints_mutex);
for (iter = begin; iter < end; iter++) {
- mark_entry = get_tracepoint(iter->name);
+ mark_entry = get_tracepoint((*iter)->name);
if (mark_entry) {
- set_tracepoint(&mark_entry, iter,
+ set_tracepoint(&mark_entry, *iter,
!!mark_entry->refcount);
} else {
- disable_tracepoint(iter);
+ disable_tracepoint(*iter);
}
}
mutex_unlock(&tracepoints_mutex);
static void tracepoint_update_probes(void)
{
/* Core kernel tracepoints */
- tracepoint_update_probe_range(__start___tracepoints,
- __stop___tracepoints);
+ tracepoint_update_probe_range(__start___tracepoints_ptrs,
+ __stop___tracepoints_ptrs);
/* tracepoints in modules. */
module_update_tracepoints();
}
* Will return the first tracepoint in the range if the input tracepoint is
* NULL.
*/
-int tracepoint_get_iter_range(struct tracepoint **tracepoint,
- struct tracepoint *begin, struct tracepoint *end)
+int tracepoint_get_iter_range(struct tracepoint * const **tracepoint,
+ struct tracepoint * const *begin, struct tracepoint * const *end)
{
if (!*tracepoint && begin != end) {
*tracepoint = begin;
/* Core kernel tracepoints */
if (!iter->module) {
found = tracepoint_get_iter_range(&iter->tracepoint,
- __start___tracepoints, __stop___tracepoints);
+ __start___tracepoints_ptrs,
+ __stop___tracepoints_ptrs);
if (found)
goto end;
}
switch (val) {
case MODULE_STATE_COMING:
case MODULE_STATE_GOING:
- tracepoint_update_probe_range(mod->tracepoints,
- mod->tracepoints + mod->num_tracepoints);
+ tracepoint_update_probe_range(mod->tracepoints_ptrs,
+ mod->tracepoints_ptrs + mod->num_tracepoints);
break;
}
return 0;
#include <asm/irq_regs.h>
#include <linux/perf_event.h>
-int watchdog_enabled;
+int watchdog_enabled = 1;
int __read_mostly softlockup_thresh = 60;
static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
#endif
-static int no_watchdog;
-
-
/* boot commands */
/*
* Should we panic when a soft-lockup or hard-lockup occurs:
if (!strncmp(str, "panic", 5))
hardlockup_panic = 1;
else if (!strncmp(str, "0", 1))
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nmi_watchdog=", hardlockup_panic_setup);
static int __init nowatchdog_setup(char *str)
{
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nowatchdog", nowatchdog_setup);
/* deprecated */
static int __init nosoftlockup_setup(char *str)
{
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nosoftlockup", nosoftlockup_setup);
wake_up_process(p);
}
- /* if any cpu succeeds, watchdog is considered enabled for the system */
- watchdog_enabled = 1;
-
return 0;
}
static void watchdog_enable_all_cpus(void)
{
int cpu;
- int result = 0;
+
+ watchdog_enabled = 0;
for_each_online_cpu(cpu)
- result += watchdog_enable(cpu);
+ if (!watchdog_enable(cpu))
+ /* if any cpu succeeds, watchdog is considered
+ enabled for the system */
+ watchdog_enabled = 1;
- if (result)
+ if (!watchdog_enabled)
printk(KERN_ERR "watchdog: failed to be enabled on some cpus\n");
}
{
int cpu;
- if (no_watchdog)
- return;
-
for_each_online_cpu(cpu)
watchdog_disable(cpu);
{
proc_dointvec(table, write, buffer, length, ppos);
- if (watchdog_enabled)
- watchdog_enable_all_cpus();
- else
- watchdog_disable_all_cpus();
+ if (write) {
+ if (watchdog_enabled)
+ watchdog_enable_all_cpus();
+ else
+ watchdog_disable_all_cpus();
+ }
return 0;
}
break;
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
- err = watchdog_enable(hotcpu);
+ if (watchdog_enabled)
+ err = watchdog_enable(hotcpu);
break;
#ifdef CONFIG_HOTPLUG_CPU
case CPU_UP_CANCELED:
void *cpu = (void *)(long)smp_processor_id();
int err;
- if (no_watchdog)
- return;
-
err = cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
WARN_ON(notifier_to_errno(err));
/* after clearing PageTail the gup refcount can be released */
smp_mb();
- page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+ /*
+ * retain the hwpoison flag of the poisoned tail page:
+ * without it, memory-failure may kill the wrong process
+ * on a KVM guest.
+ */
+ page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON;
page_tail->flags |= (page->flags &
((1L << PG_referenced) |
(1L << PG_swapbacked) |
/* pagein of a big page is an event. So, ignore page size */
if (nr_pages > 0)
__this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGIN_COUNT]);
- else
+ else {
__this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGOUT_COUNT]);
+ nr_pages = -nr_pages; /* for event */
+ }
__this_cpu_add(mem->stat->count[MEM_CGROUP_EVENTS], nr_pages);
return false;
}
+/**
+ * mem_cgroup_check_margin - check if the memory cgroup allows charging
+ * @mem: memory cgroup to check
+ * @bytes: the number of bytes the caller intends to charge
+ *
+ * Returns %true if @mem can be charged @bytes, %false if the charge
+ * would exceed the limit.
+ */
+static bool mem_cgroup_check_margin(struct mem_cgroup *mem, unsigned long bytes)
+{
+ if (!res_counter_check_margin(&mem->res, bytes))
+ return false;
+ if (do_swap_account && !res_counter_check_margin(&mem->memsw, bytes))
+ return false;
+ return true;
+}
+
static unsigned int get_swappiness(struct mem_cgroup *memcg)
{
struct cgroup *cgrp = memcg->css.cgroup;
flags |= MEM_CGROUP_RECLAIM_NOSWAP;
} else
mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
-
- if (csize > PAGE_SIZE) /* change csize and retry */
+ /*
+ * csize can be either a huge page (HPAGE_SIZE), a batch of
+ * regular pages (CHARGE_SIZE), or a single regular page
+ * (PAGE_SIZE).
+ *
+ * Never reclaim on behalf of optional batching, retry with a
+ * single page instead.
+ */
+ if (csize == CHARGE_SIZE)
return CHARGE_RETRY;
if (!(gfp_mask & __GFP_WAIT))
return CHARGE_WOULDBLOCK;
ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
- gfp_mask, flags);
+ gfp_mask, flags);
+ if (mem_cgroup_check_margin(mem_over_limit, csize))
+ return CHARGE_RETRY;
/*
- * try_to_free_mem_cgroup_pages() might not give us a full
- * picture of reclaim. Some pages are reclaimed and might be
- * moved to swap cache or just unmapped from the cgroup.
- * Check the limit again to see if the reclaim reduced the
- * current usage of the cgroup before giving up
+ * Even though the limit is exceeded at this point, reclaim
+ * may have been able to free some pages. Retry the charge
+ * before killing the task.
+ *
+ * Only for regular pages, though: huge pages are rather
+ * unlikely to succeed so close to the limit, and we fall back
+ * to regular pages anyway in case of failure.
*/
- if (ret || mem_cgroup_check_under_limit(mem_over_limit))
+ if (csize == PAGE_SIZE && ret)
return CHARGE_RETRY;
/*
gfp_t gfp_mask, enum charge_type ctype)
{
struct mem_cgroup *mem = NULL;
+ int page_size = PAGE_SIZE;
struct page_cgroup *pc;
+ bool oom = true;
int ret;
- int page_size = PAGE_SIZE;
if (PageTransHuge(page)) {
page_size <<= compound_order(page);
VM_BUG_ON(!PageTransHuge(page));
+ /*
+ * Never OOM-kill a process for a huge page. The
+ * fault handler will fall back to regular pages.
+ */
+ oom = false;
}
pc = lookup_page_cgroup(page);
return 0;
prefetchw(pc);
- ret = __mem_cgroup_try_charge(mm, gfp_mask, &mem, true, page_size);
+ ret = __mem_cgroup_try_charge(mm, gfp_mask, &mem, oom, page_size);
if (ret || !mem)
return ret;
static int __init enable_swap_account(char *s)
{
/* consider enabled if no parameter or 1 is given */
- if (!s || !strcmp(s, "1"))
+ if (!(*s) || !strcmp(s, "=1"))
really_do_swap_account = 1;
- else if (!strcmp(s, "0"))
+ else if (!strcmp(s, "=0"))
really_do_swap_account = 0;
return 1;
}
static int __init disable_swap_account(char *s)
{
- enable_swap_account("0");
+ printk_once("noswapaccount is deprecated and will be removed in 2.6.40. Use swapaccount=0 instead\n");
+ enable_swap_account("=0");
return 1;
}
__setup("noswapaccount", disable_swap_account);
}
/*
- * Only all shrink_slab here (which would also
- * shrink other caches) if access is not potentially fatal.
+ * Only call shrink_slab here (which would also shrink other caches) if
+ * access is not potentially fatal.
*/
if (access) {
int nr;
struct task_struct *tsk;
struct anon_vma *av;
- if (!PageHuge(page) && unlikely(split_huge_page(page)))
- return;
read_lock(&tasklist_lock);
av = page_lock_anon_vma(page);
if (av == NULL) /* Not actually mapped anymore */
int ret;
int kill = 1;
struct page *hpage = compound_head(p);
+ struct page *ppage;
if (PageReserved(p) || PageSlab(p))
return SWAP_SUCCESS;
}
}
+ /*
+ * ppage: the poisoned page.
+ * If p is a regular (4k) page,
+ * ppage == the real poisoned page;
+ * if p is hugetlb or THP, ppage == the head page.
+ */
+ ppage = hpage;
+
+ if (PageTransHuge(hpage)) {
+ /*
+ * Verify that this isn't a hugetlbfs head page; the check for
+ * PageAnon is just to avoid tripping a split_huge_page
+ * internal debug check, as split_huge_page refuses to deal with
+ * anything that isn't an anon page. PageAnon can't go away from
+ * under us because we hold a refcount on the hpage; without a
+ * refcount on the hpage, split_huge_page can't be safely called
+ * in the first place, and having a refcount on the tail isn't
+ * enough to be safe.
+ */
+ if (!PageHuge(hpage) && PageAnon(hpage)) {
+ if (unlikely(split_huge_page(hpage))) {
+ /*
+ * FIXME: if splitting the THP fails, it is
+ * better to stop the following operations rather
+ * than cause a panic by unmapping. The system might
+ * survive if the page is freed later.
+ */
+ printk(KERN_INFO
+ "MCE %#lx: failed to split THP\n", pfn);
+
+ BUG_ON(!PageHWPoison(p));
+ return SWAP_FAIL;
+ }
+ /* THP is split, so ppage should be the real poisoned page. */
+ ppage = p;
+ }
+ }
+
/*
* First collect all the processes that have the page
* mapped in dirty form. This has to be done before try_to_unmap,
* there's nothing that can be done.
*/
if (kill)
- collect_procs(hpage, &tokill);
+ collect_procs(ppage, &tokill);
+
+ if (hpage != ppage)
+ lock_page_nosync(ppage);
- ret = try_to_unmap(hpage, ttu);
+ ret = try_to_unmap(ppage, ttu);
if (ret != SWAP_SUCCESS)
printk(KERN_ERR "MCE %#lx: failed to unmap page (mapcount=%d)\n",
- pfn, page_mapcount(hpage));
+ pfn, page_mapcount(ppage));
+
+ if (hpage != ppage)
+ unlock_page(ppage);
/*
* Now that the dirty bit has been propagated to the
* use a more force-full uncatchable kill to prevent
* any accesses to the poisoned memory.
*/
- kill_procs_ao(&tokill, !!PageDirty(hpage), trapno,
+ kill_procs_ao(&tokill, !!PageDirty(ppage), trapno,
ret != SWAP_SUCCESS, p, pfn);
return ret;
* The check (unnecessarily) ignores LRU pages being isolated and
* walked by the page reclaim code, however that's not a big loss.
*/
- if (!PageLRU(p) && !PageHuge(p))
- shake_page(p, 0);
- if (!PageLRU(p) && !PageHuge(p)) {
- /*
- * shake_page could have turned it free.
- */
- if (is_free_buddy_page(p)) {
- action_result(pfn, "free buddy, 2nd try", DELAYED);
- return 0;
+ if (!PageHuge(p) && !PageTransCompound(p)) {
+ if (!PageLRU(p))
+ shake_page(p, 0);
+ if (!PageLRU(p)) {
+ /*
+ * shake_page could have turned it free.
+ */
+ if (is_free_buddy_page(p)) {
+ action_result(pfn, "free buddy, 2nd try",
+ DELAYED);
+ return 0;
+ }
+ action_result(pfn, "non LRU", IGNORED);
+ put_page(p);
+ return -EBUSY;
}
- action_result(pfn, "non LRU", IGNORED);
- put_page(p);
- return -EBUSY;
}
/*
* For error on the tail page, we should set PG_hwpoison
* on the head page to show that the hugepage is hwpoisoned
*/
- if (PageTail(p) && TestSetPageHWPoison(hpage)) {
+ if (PageHuge(p) && PageTail(p) && TestSetPageHWPoison(hpage)) {
action_result(pfn, "hugepage already hardware poisoned",
IGNORED);
unlock_page(hpage);
ret = migrate_huge_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL, 0,
true);
if (ret) {
- putback_lru_pages(&pagelist);
+ struct page *page1, *page2;
+ list_for_each_entry_safe(page1, page2, &pagelist, lru)
+ put_page(page1);
+
pr_debug("soft offline: %#lx: migration failed %d, type %lx\n",
pfn, ret, page->flags);
if (ret > 0)
ret = migrate_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL,
0, true);
if (ret) {
+ putback_lru_pages(&pagelist);
pr_info("soft offline: %#lx: migration failed %d, type %lx\n",
pfn, ret, page->flags);
if (ret > 0)
unlock:
unlock_page(page);
+move_newpage:
if (rc != -EAGAIN) {
/*
* A page that has been migrated has all references
putback_lru_page(page);
}
-move_newpage:
-
/*
* Move the new page to the LRU. If migration was not successful
* then this will free the page.
}
rc = 0;
out:
-
- list_for_each_entry_safe(page, page2, from, lru)
- put_page(page);
-
if (rc)
return rc;
if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
gup_flags |= FOLL_WRITE;
+ /*
+ * We want mlock to succeed for regions that have any permissions
+ * other than PROT_NONE.
+ */
+ if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
+ gup_flags |= FOLL_FORCE;
+
if (vma->vm_flags & VM_LOCKED)
gup_flags |= FOLL_MLOCK;
skb = tfp->skb;
}
+ if (skb_linearize(skb) < 0 || skb_linearize(tmp_skb) < 0)
+ goto err;
+
skb_pull(tmp_skb, sizeof(struct unicast_frag_packet));
- if (pskb_expand_head(skb, 0, tmp_skb->len, GFP_ATOMIC) < 0) {
- /* free buffered skb, skb will be freed later */
- kfree_skb(tfp->skb);
- return NULL;
- }
+ if (pskb_expand_head(skb, 0, tmp_skb->len, GFP_ATOMIC) < 0)
+ goto err;
/* move free entry to end */
tfp->skb = NULL;
unicast_packet->packet_type = BAT_UNICAST;
return skb;
+
+err:
+ /* free buffered skb, skb will be freed later */
+ kfree_skb(tfp->skb);
+ return NULL;
}
static void frag_create_entry(struct list_head *head, struct sk_buff *skb)
spin_unlock_bh(&bat_priv->vis_list_lock);
kfree_skb(info->skb_packet);
+ kfree(info);
}
/* Compare two vis packets, used by the hashing algorithm */
buff_pos += sprintf(buff + buff_pos, "%pM,",
entry->addr);
- for (i = 0; i < packet->entries; i++)
+ for (j = 0; j < packet->entries; j++)
buff_pos += vis_data_read_entry(
buff + buff_pos,
- &entries[i],
+ &entries[j],
entry->addr,
entry->primary);
info);
if (hash_added < 0) {
/* did not work (for some reason) */
- kref_put(&old_info->refcount, free_info);
+ kref_put(&info->refcount, free_info);
info = NULL;
}
container_of(work, struct delayed_work, work);
struct bat_priv *bat_priv =
container_of(delayed_work, struct bat_priv, vis_work);
- struct vis_info *info, *temp;
+ struct vis_info *info;
spin_lock_bh(&bat_priv->vis_hash_lock);
purge_vis_packets(bat_priv);
send_list_add(bat_priv, bat_priv->my_vis_info);
}
- list_for_each_entry_safe(info, temp, &bat_priv->vis_send_list,
- send_list) {
+ while (!list_empty(&bat_priv->vis_send_list)) {
+ info = list_first_entry(&bat_priv->vis_send_list,
+ typeof(*info), send_list);
kref_get(&info->refcount);
spin_unlock_bh(&bat_priv->vis_hash_lock);
fdb = kmem_cache_alloc(br_fdb_cache, GFP_ATOMIC);
if (fdb) {
memcpy(fdb->addr.addr, addr, ETH_ALEN);
- hlist_add_head_rcu(&fdb->hlist, head);
-
fdb->dst = source;
fdb->is_local = is_local;
fdb->is_static = is_local;
fdb->ageing_timer = jiffies;
+
+ hlist_add_head_rcu(&fdb->hlist, head);
}
return fdb;
}
priv->conn_req.sockaddr.u.dgm.connection_id = -1;
priv->flowenabled = false;
- ASSERT_RTNL();
init_waitqueue_head(&priv->netmgmt_wq);
- list_add(&priv->list_field, &chnl_net_list);
}
ret = register_netdevice(dev);
if (ret)
pr_warn("device rtml registration failed\n");
+ else
+ list_add(&caifdev->list_field, &chnl_net_list);
return ret;
}
map = rcu_dereference(rxqueue->rps_map);
if (map) {
- if (map->len == 1) {
+ if (map->len == 1 &&
+ !rcu_dereference_raw(rxqueue->rps_flow_table)) {
tcpu = map->cpus[0];
if (cpu_online(tcpu))
cpu = tcpu;
__skb_pull(skb, skb_headlen(skb));
skb_reserve(skb, NET_IP_ALIGN - skb_headroom(skb));
skb->vlan_tci = 0;
+ skb->dev = napi->dev;
+ skb->skb_iif = 0;
napi->skb = skb;
}
dev_net_set(dev, &init_net);
+ dev->gso_max_size = GSO_MAX_SIZE;
+
+ INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
+ dev->ethtool_ntuple_list.count = 0;
+ INIT_LIST_HEAD(&dev->napi_list);
+ INIT_LIST_HEAD(&dev->unreg_list);
+ INIT_LIST_HEAD(&dev->link_watch_list);
+ dev->priv_flags = IFF_XMIT_DST_RELEASE;
+ setup(dev);
+
dev->num_tx_queues = txqs;
dev->real_num_tx_queues = txqs;
if (netif_alloc_netdev_queues(dev))
- goto free_pcpu;
+ goto free_all;
#ifdef CONFIG_RPS
dev->num_rx_queues = rxqs;
dev->real_num_rx_queues = rxqs;
if (netif_alloc_rx_queues(dev))
- goto free_pcpu;
+ goto free_all;
#endif
- dev->gso_max_size = GSO_MAX_SIZE;
-
- INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
- dev->ethtool_ntuple_list.count = 0;
- INIT_LIST_HEAD(&dev->napi_list);
- INIT_LIST_HEAD(&dev->unreg_list);
- INIT_LIST_HEAD(&dev->link_watch_list);
- dev->priv_flags = IFF_XMIT_DST_RELEASE;
- setup(dev);
strcpy(dev->name, name);
return dev;
+free_all:
+ free_netdev(dev);
+ return NULL;
+
free_pcpu:
free_percpu(dev->pcpu_refcnt);
kfree(dev->_tx);
return -EOPNOTSUPP;
if (af_ops->validate_link_af) {
- err = af_ops->validate_link_af(dev,
- tb[IFLA_AF_SPEC]);
+ err = af_ops->validate_link_af(dev, af);
if (err < 0)
return err;
}
snprintf(ifname, IFNAMSIZ, "%s%%d", ops->kind);
dest_net = rtnl_link_get_net(net, tb);
+ if (IS_ERR(dest_net))
+ return PTR_ERR(dest_net);
+
dev = rtnl_create_link(net, dest_net, ifname, ops, tb);
if (IS_ERR(dev))
shinfo = skb_shinfo(skb);
memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
atomic_set(&shinfo->dataref, 1);
+ kmemcheck_annotate_variable(shinfo->destructor_arg);
if (fclone) {
struct sk_buff *child = skb + 1;
static int econet_sendmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len)
{
- struct sock *sk = sock->sk;
struct sockaddr_ec *saddr=(struct sockaddr_ec *)msg->msg_name;
struct net_device *dev;
struct ec_addr addr;
int err;
unsigned char port, cb;
#if defined(CONFIG_ECONET_AUNUDP) || defined(CONFIG_ECONET_NATIVE)
+ struct sock *sk = sock->sk;
struct sk_buff *skb;
struct ec_cb *eb;
#endif
error_free_buf:
vfree(userbuf);
+error:
#else
err = -EPROTOTYPE;
#endif
- error:
mutex_unlock(&econet_mutex);
return err;
}
EXPORT_SYMBOL(inet_ioctl);
+#ifdef CONFIG_COMPAT
+int inet_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+{
+ struct sock *sk = sock->sk;
+ int err = -ENOIOCTLCMD;
+
+ if (sk->sk_prot->compat_ioctl)
+ err = sk->sk_prot->compat_ioctl(sk, cmd, arg);
+
+ return err;
+}
+#endif
+
const struct proto_ops inet_stream_ops = {
.family = PF_INET,
.owner = THIS_MODULE,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
EXPORT_SYMBOL(inet_stream_ops);
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
EXPORT_SYMBOL(inet_dgram_ops);
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
#include <linux/notifier.h>
#include <linux/if_arp.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/compat.h>
#include <net/ipip.h>
#include <net/checksum.h>
#include <net/netlink.h>
}
}
+#ifdef CONFIG_COMPAT
+struct compat_sioc_sg_req {
+ struct in_addr src;
+ struct in_addr grp;
+ compat_ulong_t pktcnt;
+ compat_ulong_t bytecnt;
+ compat_ulong_t wrong_if;
+};
+
+struct compat_sioc_vif_req {
+ vifi_t vifi; /* Which iface */
+ compat_ulong_t icount;
+ compat_ulong_t ocount;
+ compat_ulong_t ibytes;
+ compat_ulong_t obytes;
+};
+
+int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+{
+ struct compat_sioc_sg_req sr;
+ struct compat_sioc_vif_req vr;
+ struct vif_device *vif;
+ struct mfc_cache *c;
+ struct net *net = sock_net(sk);
+ struct mr_table *mrt;
+
+ mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ if (mrt == NULL)
+ return -ENOENT;
+
+ switch (cmd) {
+ case SIOCGETVIFCNT:
+ if (copy_from_user(&vr, arg, sizeof(vr)))
+ return -EFAULT;
+ if (vr.vifi >= mrt->maxvif)
+ return -EINVAL;
+ read_lock(&mrt_lock);
+ vif = &mrt->vif_table[vr.vifi];
+ if (VIF_EXISTS(mrt, vr.vifi)) {
+ vr.icount = vif->pkt_in;
+ vr.ocount = vif->pkt_out;
+ vr.ibytes = vif->bytes_in;
+ vr.obytes = vif->bytes_out;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &vr, sizeof(vr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ case SIOCGETSGCNT:
+ if (copy_from_user(&sr, arg, sizeof(sr)))
+ return -EFAULT;
+
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr.src.s_addr, sr.grp.s_addr);
+ if (c) {
+ sr.pktcnt = c->mfc_un.res.pkt;
+ sr.bytecnt = c->mfc_un.res.bytes;
+ sr.wrong_if = c->mfc_un.res.wrong_if;
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+ return -EFAULT;
+ return 0;
+ }
+ rcu_read_unlock();
+ return -EADDRNOTAVAIL;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+#endif
+
static int ipmr_device_event(struct notifier_block *this, unsigned long event, void *ptr)
{
if (mangle->flags & ~ARPT_MANGLE_MASK ||
!(mangle->flags & ARPT_MANGLE_MASK))
- return false;
+ return -EINVAL;
if (mangle->target != NF_DROP && mangle->target != NF_ACCEPT &&
mangle->target != XT_CONTINUE)
- return false;
- return true;
+ return -EINVAL;
+ return 0;
}
static struct xt_target arpt_mangle_reg __read_mostly = {
#include <linux/seq_file.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/compat.h>
static struct raw_hashinfo raw_v4_hashinfo = {
.lock = __RW_LOCK_UNLOCKED(raw_v4_hashinfo.lock),
}
}
+#ifdef CONFIG_COMPAT
+static int compat_raw_ioctl(struct sock *sk, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case SIOCOUTQ:
+ case SIOCINQ:
+ return -ENOIOCTLCMD;
+ default:
+#ifdef CONFIG_IP_MROUTE
+ return ipmr_compat_ioctl(sk, cmd, compat_ptr(arg));
+#else
+ return -ENOIOCTLCMD;
+#endif
+ }
+}
+#endif
+
struct proto raw_prot = {
.name = "RAW",
.owner = THIS_MODULE,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_raw_setsockopt,
.compat_getsockopt = compat_raw_getsockopt,
+ .compat_ioctl = compat_raw_ioctl,
#endif
};
return NULL;
}
+static unsigned int ipv4_blackhole_default_mtu(const struct dst_entry *dst)
+{
+ return 0;
+}
+
static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu)
{
}
.protocol = cpu_to_be16(ETH_P_IP),
.destroy = ipv4_dst_destroy,
.check = ipv4_blackhole_dst_check,
+ .default_mtu = ipv4_blackhole_default_mtu,
.update_pmtu = ipv4_rt_blackhole_update_pmtu,
};
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/slab.h>
+#include <linux/compat.h>
#include <net/protocol.h>
#include <linux/skbuff.h>
#include <net/sock.h>
}
}
+#ifdef CONFIG_COMPAT
+struct compat_sioc_sg_req6 {
+ struct sockaddr_in6 src;
+ struct sockaddr_in6 grp;
+ compat_ulong_t pktcnt;
+ compat_ulong_t bytecnt;
+ compat_ulong_t wrong_if;
+};
+
+struct compat_sioc_mif_req6 {
+ mifi_t mifi;
+ compat_ulong_t icount;
+ compat_ulong_t ocount;
+ compat_ulong_t ibytes;
+ compat_ulong_t obytes;
+};
+
+int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+{
+ struct compat_sioc_sg_req6 sr;
+ struct compat_sioc_mif_req6 vr;
+ struct mif_device *vif;
+ struct mfc6_cache *c;
+ struct net *net = sock_net(sk);
+ struct mr6_table *mrt;
+
+ mrt = ip6mr_get_table(net, raw6_sk(sk)->ip6mr_table ? : RT6_TABLE_DFLT);
+ if (mrt == NULL)
+ return -ENOENT;
+
+ switch (cmd) {
+ case SIOCGETMIFCNT_IN6:
+ if (copy_from_user(&vr, arg, sizeof(vr)))
+ return -EFAULT;
+ if (vr.mifi >= mrt->maxvif)
+ return -EINVAL;
+ read_lock(&mrt_lock);
+ vif = &mrt->vif6_table[vr.mifi];
+ if (MIF_EXISTS(mrt, vr.mifi)) {
+ vr.icount = vif->pkt_in;
+ vr.ocount = vif->pkt_out;
+ vr.ibytes = vif->bytes_in;
+ vr.obytes = vif->bytes_out;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &vr, sizeof(vr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ case SIOCGETSGCNT_IN6:
+ if (copy_from_user(&sr, arg, sizeof(sr)))
+ return -EFAULT;
+
+ read_lock(&mrt_lock);
+ c = ip6mr_cache_find(mrt, &sr.src.sin6_addr, &sr.grp.sin6_addr);
+ if (c) {
+ sr.pktcnt = c->mfc_un.res.pkt;
+ sr.bytecnt = c->mfc_un.res.bytes;
+ sr.wrong_if = c->mfc_un.res.wrong_if;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+#endif
static inline int ip6mr_forward2_finish(struct sk_buff *skb)
{
#include <linux/netfilter.h>
#include <linux/netfilter_ipv6.h>
#include <linux/skbuff.h>
+#include <linux/compat.h>
#include <asm/uaccess.h>
#include <asm/ioctls.h>
}
}
+#ifdef CONFIG_COMPAT
+static int compat_rawv6_ioctl(struct sock *sk, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case SIOCOUTQ:
+ case SIOCINQ:
+ return -ENOIOCTLCMD;
+ default:
+#ifdef CONFIG_IPV6_MROUTE
+ return ip6mr_compat_ioctl(sk, cmd, compat_ptr(arg));
+#else
+ return -ENOIOCTLCMD;
+#endif
+ }
+}
+#endif
+
static void rawv6_close(struct sock *sk, long timeout)
{
if (inet_sk(sk)->inet_num == IPPROTO_RAW)
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_rawv6_setsockopt,
.compat_getsockopt = compat_rawv6_getsockopt,
+ .compat_ioctl = compat_rawv6_ioctl,
#endif
};
.local_out = __ip6_local_out,
};
+static unsigned int ip6_blackhole_default_mtu(const struct dst_entry *dst)
+{
+ return 0;
+}
+
static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu)
{
}
.protocol = cpu_to_be16(ETH_P_IPV6),
.destroy = ip6_dst_destroy,
.check = ip6_dst_check,
+ .default_mtu = ip6_blackhole_default_mtu,
.update_pmtu = ip6_rt_blackhole_update_pmtu,
};
in6_dev_put(idev);
}
if (peer) {
- BUG_ON(!(rt->rt6i_flags & RTF_CACHE));
rt->rt6i_peer = NULL;
inet_putpeer(peer);
}
{
struct inet_peer *peer;
- if (WARN_ON(!(rt->rt6i_flags & RTF_CACHE)))
- return;
-
peer = inet_getpeer_v6(&rt->rt6i_dst.addr, create);
if (peer && cmpxchg(&rt->rt6i_peer, NULL, peer) != NULL)
inet_putpeer(peer);
#include <net/addrconf.h>
#include <net/inet_frag.h>
+static struct ctl_table empty[1];
+
static ctl_table ipv6_table_template[] = {
{
.procname = "route",
.mode = 0644,
.proc_handler = proc_dointvec
},
+ {
+ .procname = "neigh",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = empty,
+ },
{ }
};
int ipv6_static_sysctl_register(void)
{
- static struct ctl_table empty[1];
ip6_base = register_sysctl_paths(net_ipv6_ctl_path, empty);
if (ip6_base == NULL)
return -ENOMEM;
*cookie ^= 2;
IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_TX_OFFCHAN;
local->hw_roc_skb = skb;
+ local->hw_roc_skb_for_status = skb;
mutex_unlock(&local->mtx);
return 0;
if (ret == 0) {
kfree_skb(local->hw_roc_skb);
local->hw_roc_skb = NULL;
+ local->hw_roc_skb_for_status = NULL;
}
mutex_unlock(&local->mtx);
struct ieee80211_channel *hw_roc_channel;
struct net_device *hw_roc_dev;
- struct sk_buff *hw_roc_skb;
+ struct sk_buff *hw_roc_skb, *hw_roc_skb_for_status;
struct work_struct hw_roc_start, hw_roc_done;
enum nl80211_channel_type hw_roc_channel_type;
unsigned int hw_roc_duration;
if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) {
struct ieee80211_work *wk;
+ u64 cookie = (unsigned long)skb;
rcu_read_lock();
list_for_each_entry_rcu(wk, &local->work_list, list) {
break;
}
rcu_read_unlock();
+ if (local->hw_roc_skb_for_status == skb) {
+ cookie = local->hw_roc_cookie ^ 2;
+ local->hw_roc_skb_for_status = NULL;
+ }
cfg80211_mgmt_tx_status(
- skb->dev, (unsigned long) skb, skb->data, skb->len,
+ skb->dev, cookie, skb->data, skb->len,
!!(info->flags & IEEE80211_TX_STAT_ACK), GFP_ATOMIC);
}
skb_orphan(skb);
}
- if (skb_header_cloned(skb))
+ if (skb_cloned(skb))
I802_DEBUG_INC(local->tx_expand_skb_head_cloned);
else if (head_need || tail_need)
I802_DEBUG_INC(local->tx_expand_skb_head);
if (set_reply && !test_and_set_bit(IPS_SEEN_REPLY_BIT, &ct->status))
nf_conntrack_event_cache(IPCT_REPLY, ct);
out:
- if (tmpl)
- nf_ct_put(tmpl);
+ if (tmpl) {
+ /* Special case: we have to repeat this hook, assign the
+ * template again to this packet. We assume that this packet
+ * has no conntrack assigned. This is used by nf_ct_tcp. */
+ if (ret == NF_REPEAT)
+ skb->nfct = (struct nf_conntrack *)tmpl;
+ else
+ nf_ct_put(tmpl);
+ }
return ret;
}
* this does not harm and it happens very rarely. */
unsigned long missed = e->missed;
+ if (!((events | missed) & e->ctmask))
+ goto out_unlock;
+
ret = notify->fcn(events | missed, &item);
if (unlikely(ret < 0 || missed)) {
spin_lock_bh(&ct->lock);
if (ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).pid,
cb->nlh->nlmsg_seq,
IPCTNL_MSG_CT_NEW, ct) < 0) {
+ nf_conntrack_get(&ct->ct_general);
cb->args[1] = (unsigned long)ct;
goto out;
}
}
static inline int
-iprange_ipv6_sub(const struct in6_addr *a, const struct in6_addr *b)
+iprange_ipv6_lt(const struct in6_addr *a, const struct in6_addr *b)
{
unsigned int i;
- int r;
for (i = 0; i < 4; ++i) {
- r = ntohl(a->s6_addr32[i]) - ntohl(b->s6_addr32[i]);
- if (r != 0)
- return r;
+ if (a->s6_addr32[i] != b->s6_addr32[i])
+ return ntohl(a->s6_addr32[i]) < ntohl(b->s6_addr32[i]);
}
return 0;
bool m;
if (info->flags & IPRANGE_SRC) {
- m = iprange_ipv6_sub(&iph->saddr, &info->src_min.in6) < 0;
- m |= iprange_ipv6_sub(&iph->saddr, &info->src_max.in6) > 0;
+ m = iprange_ipv6_lt(&iph->saddr, &info->src_min.in6);
+ m |= iprange_ipv6_lt(&info->src_max.in6, &iph->saddr);
m ^= !!(info->flags & IPRANGE_SRC_INV);
if (m)
return false;
}
if (info->flags & IPRANGE_DST) {
- m = iprange_ipv6_sub(&iph->daddr, &info->dst_min.in6) < 0;
- m |= iprange_ipv6_sub(&iph->daddr, &info->dst_max.in6) > 0;
+ m = iprange_ipv6_lt(&iph->daddr, &info->dst_min.in6);
+ m |= iprange_ipv6_lt(&info->dst_max.in6, &iph->daddr);
m ^= !!(info->flags & IPRANGE_DST_INV);
if (m)
return false;
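As an aside, a hedged standalone sketch (plain userspace C, not part of the patch) of why the subtraction-based compare that iprange_ipv6_lt() replaces was unsafe: the 32-bit per-word difference can wrap and come back with the wrong sign, whereas an unsigned less-than per word cannot.

/* overflow_demo.c - hypothetical illustration only */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

static int old_cmp(uint32_t a_be, uint32_t b_be)
{
	/* old style: signed "a - b" three-way compare, may wrap */
	return ntohl(a_be) - ntohl(b_be);
}

static int new_lt(uint32_t a_be, uint32_t b_be)
{
	/* new style: plain unsigned less-than, cannot wrap */
	return ntohl(a_be) < ntohl(b_be);
}

int main(void)
{
	uint32_t a = htonl(0x00000000UL);	/* numerically smaller */
	uint32_t b = htonl(0x80000001UL);	/* numerically larger  */

	/* 0 - 0x80000001 wraps to 0x7fffffff, so old_cmp() claims a > b,
	 * while new_lt() correctly reports a < b. */
	printf("old_cmp says a > b: %d\n", old_cmp(a, b) > 0);
	printf("new_lt  says a < b: %d\n", new_lt(a, b));
	return 0;
}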
#include <net/sock.h>
#include <net/x25.h>
-/*
- * Parse a set of facilities into the facilities structures. Unrecognised
- * facilities are written to the debug log file.
+/**
+ * x25_parse_facilities - Parse facilities from skb into the facilities structs
+ *
+ * @skb: sk_buff to parse
+ * @facilities: Regular facilities, updated as facilities are found
+ * @dte_facs: ITU DTE facilities, updated as DTE facilities are found
+ * @vc_fac_mask: mask is updated with all facilities found
+ *
+ * Return codes:
+ * -1 - Parsing error, caller should drop call and clean up
+ * 0 - Parse OK, this skb has no facilities
+ * >0 - Parse OK, returns the length of the facilities header
+ *
*/
int x25_parse_facilities(struct sk_buff *skb, struct x25_facilities *facilities,
struct x25_dte_facilities *dte_facs, unsigned long *vc_fac_mask)
switch (*p & X25_FAC_CLASS_MASK) {
case X25_FAC_CLASS_A:
if (len < 2)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_REVERSE:
if((p[1] & 0x81) == 0x81) {
break;
case X25_FAC_CLASS_B:
if (len < 3)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_PACKET_SIZE:
facilities->pacsize_in = p[1];
break;
case X25_FAC_CLASS_C:
if (len < 4)
- return 0;
+ return -1;
printk(KERN_DEBUG "X.25: unknown facility %02X, "
"values %02X, %02X, %02X\n",
p[0], p[1], p[2], p[3]);
break;
case X25_FAC_CLASS_D:
if (len < p[1] + 2)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_CALLING_AE:
if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ return -1;
dte_facs->calling_len = p[2];
memcpy(dte_facs->calling_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLING_AE;
break;
case X25_FAC_CALLED_AE:
if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ return -1;
dte_facs->called_len = p[2];
memcpy(dte_facs->called_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLED_AE;
{
struct x25_address source_addr, dest_addr;
int len;
+ struct x25_sock *x25 = x25_sk(sk);
switch (frametype) {
case X25_CALL_ACCEPTED: {
- struct x25_sock *x25 = x25_sk(sk);
x25_stop_timer(sk);
x25->condition = 0x00;
&dest_addr);
if (len > 0)
skb_pull(skb, len);
+ else if (len < 0)
+ goto out_clear;
len = x25_parse_facilities(skb, &x25->facilities,
&x25->dte_facilities,
&x25->vc_facil_mask);
if (len > 0)
skb_pull(skb, len);
- else
- return -1;
+ else if (len < 0)
+ goto out_clear;
/*
* Copy any Call User Data.
*/
}
return 0;
+
+out_clear:
+ x25_write_internal(sk, X25_CLEAR_REQUEST);
+ x25->state = X25_STATE_2;
+ x25_start_t23timer(sk);
+ return 0;
}
/*
write_lock_bh(&x25_neigh_list_lock);
list_for_each_safe(entry, tmp, &x25_neigh_list) {
+ struct net_device *dev;
+
nb = list_entry(entry, struct x25_neigh, node);
+ dev = nb->dev;
__x25_remove_neigh(nb);
- dev_put(nb->dev);
+ dev_put(dev);
}
write_unlock_bh(&x25_neigh_list_lock);
}
fi
# Build header package
-find . -name Makefile -o -name Kconfig\* -o -name \*.pl > /tmp/files$$
-find arch/x86/include include scripts -type f >> /tmp/files$$
+(cd $srctree; find . -name Makefile -o -name Kconfig\* -o -name \*.pl > /tmp/files$$)
+(cd $srctree; find arch/$SRCARCH/include include scripts -type f >> /tmp/files$$)
(cd $objtree; find .config Module.symvers include scripts -type f >> /tmp/objfiles$$)
destdir=$kernel_headers_dir/usr/src/linux-headers-$version
mkdir -p "$destdir"
-tar -c -f - -T /tmp/files$$ | (cd $destdir; tar -xf -)
+(cd $srctree; tar -c -f - -T /tmp/files$$) | (cd $destdir; tar -xf -)
(cd $objtree; tar -c -f - -T /tmp/objfiles$$) | (cd $destdir; tar -xf -)
rm -f /tmp/files$$ /tmp/objfiles$$
arch=$(dpkg --print-architecture)
{
struct task_security_struct *tsec = cred->security;
- BUG_ON((unsigned long) cred->security < PAGE_SIZE);
+ /*
+ * cred->security == NULL if security_cred_alloc_blank() or
+ * security_prepare_creds() returned an error.
+ */
+ BUG_ON(cred->security && (unsigned long) cred->security < PAGE_SIZE);
cred->security = (void *) 0x7UL;
kfree(tsec);
}
if (v & SLFR_1RXV)
readl(aaci->base + AACI_SL1RX);
- writel(maincr, aaci->base + AACI_MAINCR);
+ if (maincr != readl(aaci->base + AACI_MAINCR)) {
+ writel(maincr, aaci->base + AACI_MAINCR);
+ readl(aaci->base + AACI_MAINCR);
+ udelay(1);
+ }
}
/*
* disabling the channel doesn't clear the FIFO.
*/
writel(aaci->maincr & ~MAINCR_IE, aaci->base + AACI_MAINCR);
+ readl(aaci->base + AACI_MAINCR);
+ udelay(1);
writel(aaci->maincr, aaci->base + AACI_MAINCR);
/*
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/ioport.h>
+#include <linux/io.h>
#include <linux/moduleparam.h>
#include <sound/core.h>
#include <sound/initval.h>
#include <sound/rawmidi.h>
#include <linux/delay.h>
-#include <asm/io.h>
-
/*
* globals
*/
$(obj)/bin2hex pss_synth < $< > $@
else
$(obj)/pss_boot.h:
- ( \
+ $(Q)( \
echo 'static unsigned char * pss_synth = NULL;'; \
echo 'static int pss_synthLen = 0;'; \
) > $@
$(obj)/hex2hex -i trix_boot < $< > $@
else
$(obj)/trix_boot.h:
- ( \
+ $(Q)( \
echo 'static unsigned char * trix_boot = NULL;'; \
echo 'static int trix_boot_len = 0;'; \
) > $@
unsigned int auto_mic;
int auto_mic_ext; /* autocfg.inputs[] index for ext mic */
unsigned int need_dac_fix;
+ hda_nid_t slave_dig_outs[2];
/* capture */
unsigned int num_adc_nids;
unsigned int ideapad:1;
unsigned int thinkpad:1;
unsigned int hp_laptop:1;
+ unsigned int asus:1;
unsigned int ext_mic_present;
unsigned int recording;
info->stream[SNDRV_PCM_STREAM_CAPTURE].nid =
spec->dig_in_nid;
}
+ if (spec->slave_dig_outs[0])
+ codec->slave_dig_outs = spec->slave_dig_outs;
}
return 0;
struct conexant_spec *spec;
struct conexant_jack *jack;
const char *name;
- int err;
+ int i, err;
spec = codec->spec;
snd_array_init(&spec->jacks, sizeof(*jack), 32);
+
+ jack = spec->jacks.list;
+ for (i = 0; i < spec->jacks.used; i++, jack++)
+ if (jack->nid == nid)
+ return 0 ; /* already present */
+
jack = snd_array_new(&spec->jacks);
name = (type == SND_JACK_HEADPHONE) ? "Headphone" : "Mic" ;
static hda_nid_t cxt5066_dac_nids[1] = { 0x10 };
static hda_nid_t cxt5066_adc_nids[3] = { 0x14, 0x15, 0x16 };
static hda_nid_t cxt5066_capsrc_nids[1] = { 0x17 };
-#define CXT5066_SPDIF_OUT 0x21
+static hda_nid_t cxt5066_digout_pin_nids[2] = { 0x20, 0x22 };
/* OLPC's microphone port is DC coupled for use with external sensors,
* therefore we use a 50% mic bias in order to center the input signal with
}
}
+
+/* toggle input of built-in digital mic and mic jack appropriately */
+static void cxt5066_asus_automic(struct hda_codec *codec)
+{
+ unsigned int present;
+
+ present = snd_hda_jack_detect(codec, 0x1b);
+ snd_printdd("CXT5066: external microphone present=%d\n", present);
+ snd_hda_codec_write(codec, 0x17, 0, AC_VERB_SET_CONNECT_SEL,
+ present ? 1 : 0);
+}
+
+
/* toggle input of built-in digital mic and mic jack appropriately */
static void cxt5066_hp_laptop_automic(struct hda_codec *codec)
{
cxt5066_update_speaker(codec);
}
-/* unsolicited event for jack sensing */
-static void cxt5066_olpc_unsol_event(struct hda_codec *codec, unsigned int res)
+/* Dispatch the right mic autoswitch function */
+static void cxt5066_automic(struct hda_codec *codec)
{
struct conexant_spec *spec = codec->spec;
- snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
- /* ignore mic events in DC mode; we're always using the jack */
- if (!spec->dc_enable)
- cxt5066_olpc_automic(codec);
- break;
- }
-}
-/* unsolicited event for jack sensing */
-static void cxt5066_vostro_event(struct hda_codec *codec, unsigned int res)
-{
- snd_printdd("CXT5066_vostro: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
+ if (spec->dell_vostro)
cxt5066_vostro_automic(codec);
- break;
- }
-}
-
-/* unsolicited event for jack sensing */
-static void cxt5066_ideapad_event(struct hda_codec *codec, unsigned int res)
-{
- snd_printdd("CXT5066_ideapad: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
+ else if (spec->ideapad)
cxt5066_ideapad_automic(codec);
- break;
- }
+ else if (spec->thinkpad)
+ cxt5066_thinkpad_automic(codec);
+ else if (spec->hp_laptop)
+ cxt5066_hp_laptop_automic(codec);
+ else if (spec->asus)
+ cxt5066_asus_automic(codec);
}
/* unsolicited event for jack sensing */
-static void cxt5066_hp_laptop_event(struct hda_codec *codec, unsigned int res)
+static void cxt5066_olpc_unsol_event(struct hda_codec *codec, unsigned int res)
{
- snd_printdd("CXT5066_hp_laptop: unsol event %x (%x)\n", res, res >> 26);
+ struct conexant_spec *spec = codec->spec;
+ snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
switch (res >> 26) {
case CONEXANT_HP_EVENT:
cxt5066_hp_automute(codec);
break;
case CONEXANT_MIC_EVENT:
- cxt5066_hp_laptop_automic(codec);
+ /* ignore mic events in DC mode; we're always using the jack */
+ if (!spec->dc_enable)
+ cxt5066_olpc_automic(codec);
break;
}
}
/* unsolicited event for jack sensing */
-static void cxt5066_thinkpad_event(struct hda_codec *codec, unsigned int res)
+static void cxt5066_unsol_event(struct hda_codec *codec, unsigned int res)
{
- snd_printdd("CXT5066_thinkpad: unsol event %x (%x)\n", res, res >> 26);
+ snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
switch (res >> 26) {
case CONEXANT_HP_EVENT:
cxt5066_hp_automute(codec);
break;
case CONEXANT_MIC_EVENT:
- cxt5066_thinkpad_automic(codec);
+ cxt5066_automic(codec);
break;
}
}
+
static const struct hda_input_mux cxt5066_analog_mic_boost = {
.num_items = 5,
.items = {
spec->recording = 0;
}
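+/* fill dig_out_nid (and slave_dig_outs[]) with the converters that are
+ * actually wired to the given digital-out pins
+ */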
+static void conexant_check_dig_outs(struct hda_codec *codec,
+ hda_nid_t *dig_pins,
+ int num_pins)
+{
+ struct conexant_spec *spec = codec->spec;
+ hda_nid_t *nid_loc = &spec->multiout.dig_out_nid;
+ int i;
+
+ for (i = 0; i < num_pins; i++, dig_pins++) {
+ unsigned int cfg = snd_hda_codec_get_pincfg(codec, *dig_pins);
+ if (get_defcfg_connect(cfg) == AC_JACK_PORT_NONE)
+ continue;
+ if (snd_hda_get_connections(codec, *dig_pins, nid_loc, 1) != 1)
+ continue;
+ if (spec->slave_dig_outs[0])
+ nid_loc++;
+ else
+ nid_loc = spec->slave_dig_outs;
+ }
+}
+
static struct hda_input_mux cxt5066_capture_source = {
.num_items = 4,
.items = {
/* initialize jack-sensing, too */
static int cxt5066_init(struct hda_codec *codec)
{
- struct conexant_spec *spec = codec->spec;
-
snd_printdd("CXT5066: init\n");
conexant_init(codec);
if (codec->patch_ops.unsol_event) {
cxt5066_hp_automute(codec);
- if (spec->dell_vostro)
- cxt5066_vostro_automic(codec);
- else if (spec->ideapad)
- cxt5066_ideapad_automic(codec);
- else if (spec->thinkpad)
- cxt5066_thinkpad_automic(codec);
- else if (spec->hp_laptop)
- cxt5066_hp_laptop_automic(codec);
+ cxt5066_automic(codec);
}
cxt5066_set_mic_boost(codec);
return 0;
CXT5066_DELL_VOSTRO, /* Dell Vostro 1015i */
CXT5066_IDEAPAD, /* Lenovo IdeaPad U150 */
CXT5066_THINKPAD, /* Lenovo ThinkPad T410s, others? */
+ CXT5066_ASUS, /* Asus K52JU, Lenovo G560 - Int mic at 0x1a and Ext mic at 0x1b */
CXT5066_HP_LAPTOP, /* HP Laptop */
CXT5066_MODELS
};
[CXT5066_DELL_VOSTRO] = "dell-vostro",
[CXT5066_IDEAPAD] = "ideapad",
[CXT5066_THINKPAD] = "thinkpad",
+ [CXT5066_ASUS] = "asus",
[CXT5066_HP_LAPTOP] = "hp-laptop",
};
SND_PCI_QUIRK(0x1028, 0x0402, "Dell Vostro", CXT5066_DELL_VOSTRO),
SND_PCI_QUIRK(0x1028, 0x0408, "Dell Inspiron One 19T", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x103c, 0x360b, "HP G60", CXT5066_HP_LAPTOP),
- SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_HP_LAPTOP),
+ SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_ASUS),
+ SND_PCI_QUIRK(0x1043, 0x1643, "Asus K52JU", CXT5066_ASUS),
+ SND_PCI_QUIRK(0x1043, 0x1993, "Asus U50F", CXT5066_ASUS),
SND_PCI_QUIRK(0x1179, 0xff1e, "Toshiba Satellite C650D", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x1179, 0xff50, "Toshiba Satellite P500-PSPGSC-01800T", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x1179, 0xffe0, "Toshiba Satellite Pro T130-15F", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400s", CXT5066_THINKPAD),
SND_PCI_QUIRK(0x17aa, 0x21c5, "Thinkpad Edge 13", CXT5066_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x21c6, "Thinkpad Edge 13", CXT5066_ASUS),
SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo Thinkpad", CXT5066_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo G560", CXT5066_ASUS),
SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo", CXT5066_IDEAPAD), /* Fallback for Lenovos without dock mic */
{}
};
spec->multiout.max_channels = 2;
spec->multiout.num_dacs = ARRAY_SIZE(cxt5066_dac_nids);
spec->multiout.dac_nids = cxt5066_dac_nids;
- spec->multiout.dig_out_nid = CXT5066_SPDIF_OUT;
+ conexant_check_dig_outs(codec, cxt5066_digout_pin_nids,
+ ARRAY_SIZE(cxt5066_digout_pin_nids));
spec->num_adc_nids = 1;
spec->adc_nids = cxt5066_adc_nids;
spec->capsrc_nids = cxt5066_capsrc_nids;
spec->num_init_verbs++;
spec->dell_automute = 1;
break;
+ case CXT5066_ASUS:
case CXT5066_HP_LAPTOP:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_hp_laptop_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->init_verbs[spec->num_init_verbs] =
cxt5066_init_verbs_hp_laptop;
spec->num_init_verbs++;
- spec->hp_laptop = 1;
+ spec->hp_laptop = board_config == CXT5066_HP_LAPTOP;
+ spec->asus = board_config == CXT5066_ASUS;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
/* no S/PDIF out */
- spec->multiout.dig_out_nid = 0;
+ if (board_config == CXT5066_HP_LAPTOP)
+ spec->multiout.dig_out_nid = 0;
/* input source automatically selected */
spec->input_mux = NULL;
spec->port_d_mode = 0;
break;
case CXT5066_DELL_VOSTRO:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_vostro_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->init_verbs[0] = cxt5066_init_verbs_vostro;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master_olpc;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
break;
case CXT5066_IDEAPAD:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_ideapad_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
spec->init_verbs[0] = cxt5066_init_verbs_ideapad;
break;
case CXT5066_THINKPAD:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_thinkpad_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
spec->init_verbs[0] = cxt5066_init_verbs_thinkpad;
void (*update_dac_volume)(struct oxygen *chip);
void (*update_dac_mute)(struct oxygen *chip);
void (*update_center_lfe_mix)(struct oxygen *chip, bool mixed);
+ unsigned int (*adjust_dac_routing)(struct oxygen *chip,
+ unsigned int play_routing);
void (*gpio_changed)(struct oxygen *chip);
void (*uart_input)(struct oxygen *chip);
void (*ac97_switch)(struct oxygen *chip,
(1 << OXYGEN_PLAY_DAC1_SOURCE_SHIFT) |
(2 << OXYGEN_PLAY_DAC2_SOURCE_SHIFT) |
(3 << OXYGEN_PLAY_DAC3_SOURCE_SHIFT);
+ if (chip->model.adjust_dac_routing)
+ reg_value = chip->model.adjust_dac_routing(chip, reg_value);
oxygen_write16_masked(chip, OXYGEN_PLAY_ROUTING, reg_value,
OXYGEN_PLAY_DAC0_SOURCE_MASK |
OXYGEN_PLAY_DAC1_SOURCE_MASK |
*
* SPI 0 -> CS4245
*
+ * I²S 1 -> CS4245
+ * I²S 2 -> CS4361 (center/LFE)
+ * I²S 3 -> CS4361 (surround)
+ * I²S 4 -> CS4361 (front)
+ *
* GPIO 3 <- ?
* GPIO 4 <- headphone detect
* GPIO 5 -> route input jack to line-in (0) or mic-in (1)
* input 1 <- aux
* input 2 <- front mic
* input 4 <- line/mic
+ * DAC out -> headphones
* aux out -> front panel headphones
*/
cs4245_write_cached(chip, CS4245_ADC_CTRL, value);
}
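+/* move a bit field from position shift_from to shift_to and apply the destination mask */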
+static inline unsigned int shift_bits(unsigned int value,
+ unsigned int shift_from,
+ unsigned int shift_to,
+ unsigned int mask)
+{
+ if (shift_from < shift_to)
+ return (value << (shift_to - shift_from)) & mask;
+ else
+ return (value >> (shift_from - shift_to)) & mask;
+}
+
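+/* Rearrange the default playback routing to match this card's DAC wiring:
+ * swap the DAC1/DAC2 sources and copy the DAC0 (front) source to DAC3.
+ */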
+static unsigned int adjust_dg_dac_routing(struct oxygen *chip,
+ unsigned int play_routing)
+{
+ return (play_routing & OXYGEN_PLAY_DAC0_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC2_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC1_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC1_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC1_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC2_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC2_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC0_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC3_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC3_SOURCE_MASK);
+}
+
static int output_switch_info(struct snd_kcontrol *ctl,
struct snd_ctl_elem_info *info)
{
.resume = dg_resume,
.set_dac_params = set_cs4245_dac_params,
.set_adc_params = set_cs4245_adc_params,
+ .adjust_dac_routing = adjust_dg_dac_routing,
.dump_registers = dump_cs4245_registers,
.model_data_size = sizeof(struct dg),
.device_config = PLAYBACK_0_TO_I2S |
#define __PDAUDIOCF_H
#include <sound/pcm.h>
-#include <asm/io.h>
+#include <linux/io.h>
#include <linux/interrupt.h>
#include <pcmcia/cistpl.h>
#include <pcmcia/ds.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/firmware.h>
+#include <linux/io.h>
#include <sound/core.h>
-#include <asm/io.h>
#include "vxpocket.h"
static int cq93vc_probe(struct snd_soc_codec *codec)
{
- struct davinci_vc *davinci_vc = codec->dev->platform_data;
+ struct davinci_vc *davinci_vc = snd_soc_codec_get_drvdata(codec);
davinci_vc->cq93vc.codec = codec;
codec->control_data = davinci_vc;
return 0;
}
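+/* default contents of the one-byte register cache */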
+static const u8 cx20442_reg = CX20442_TELOUT | CX20442_MIC;
+
static struct snd_soc_codec_driver cx20442_codec_dev = {
.probe = cx20442_codec_probe,
.remove = cx20442_codec_remove,
+ .reg_cache_default = &cx20442_reg,
.reg_cache_size = 1,
.reg_word_size = sizeof(u8),
.read = cx20442_read_reg_cache,
/* Set up digital mute if not provided by the codec */
if (!codec_dai->driver->ops) {
codec_dai->driver->ops = &ams_delta_dai_ops;
- } else if (!codec_dai->driver->ops->digital_mute) {
- codec_dai->driver->ops->digital_mute = ams_delta_digital_mute;
} else {
ams_delta_ops.startup = ams_delta_startup;
ams_delta_ops.shutdown = ams_delta_shutdown;
goto out;
found:
- if (!try_module_get(codec->dev->driver->owner))
- return -ENODEV;
-
ret = soc_probe_codec(card, codec);
if (ret < 0)
return ret;
int max = mc->max;
unsigned int mask = (1 << fls(max)) - 1;
unsigned int invert = mc->invert;
- unsigned int val, val_mask;
+ unsigned int val;
int connect, change;
struct snd_soc_dapm_update update;
if (invert)
val = max - val;
- val_mask = mask << shift;
+ mask = mask << shift;
val = val << shift;
mutex_lock(&widget->codec->mutex);
widget->value = val;
- change = snd_soc_test_bits(widget->codec, reg, val_mask, val);
+ change = snd_soc_test_bits(widget->codec, reg, mask, val);
if (change) {
if (val)
/* new connection */
int cpu, thread;
struct perf_counts_values *aggr = &evsel->counts->aggr, count;
- aggr->val = 0;
+ aggr->val = aggr->ena = aggr->run = 0;
for (cpu = 0; cpu < ncpus; cpu++) {
for (thread = 0; thread < nthreads; thread++) {