fix one regression and one endian issue.
* 'drm-fixes-3.18' of git://people.freedesktop.org/~agd5f/linux:
drm/radeon: fix endian swapping in vbios fetch for tdp table
drm/radeon: disable native backlight control on pre-r6xx asics (v2)
Required properties:
- compatible : should contain one of the following:
- "renesas,sata-r8a7779" for R-Car H1
- - "renesas,sata-r8a7790" for R-Car H2
- - "renesas,sata-r8a7791" for R-Car M2
+ - "renesas,sata-r8a7790-es1" for R-Car H2 ES1
+ - "renesas,sata-r8a7790" for R-Car H2 other than ES1
+ - "renesas,sata-r8a7791" for R-Car M2-W
+ - "renesas,sata-r8a7793" for R-Car M2-N
- reg : address and length of the SATA registers;
- interrupts : must consist of one interrupt specifier.
7.2.1 Status packet
7.2.2 Head packet
7.2.3 Motion packet
+ 8. Trackpoint (for Hardware version 3 and 4)
+ 8.1 Registers
+ 8.2 Native relative mode 6 byte packet format
+ 8.2.1 Status Packet
1. Introduction
~~~~~~~~~~~~
-Currently the Linux Elantech touchpad driver is aware of two different
-hardware versions unimaginatively called version 1 and version 2. Version 1
-is found in "older" laptops and uses 4 bytes per packet. Version 2 seems to
-be introduced with the EeePC and uses 6 bytes per packet, and provides
-additional features such as position of two fingers, and width of the touch.
+Currently the Linux Elantech touchpad driver is aware of four different
+hardware versions unimaginatively called version 1, version 2, version 3
+and version 4. Version 1 is found in "older" laptops and uses 4 bytes per
+packet. Version 2 seems to be introduced with the EeePC and uses 6 bytes
+per packet, and provides additional features such as position of two fingers,
+and width of the touch. Hardware version 3 uses 6 bytes per packet (and
+for 2 fingers the concatenation of two 6 bytes packets) and allows tracking
+of up to 3 fingers. Hardware version 4 uses 6 bytes per packet, and can
+combine a status packet with multiple head or motion packets. Hardware version
+4 allows tracking up to 5 fingers.
+
+Some Hardware version 3 and version 4 devices also have a trackpoint, which
+uses a separate packet format. It is also 6 bytes per packet.
The driver tries to support both hardware versions and should be compatible
with the Xorg Synaptics touchpad driver and its graphical configuration
utilities.
+Note that a mouse button is also associated with either the touchpad or the
+trackpoint when a trackpoint is available. Disabling the Touchpad in xorg
+(TouchPadOff=0) will also disable the buttons associated with the touchpad.
+
Additionally the operation of the touchpad can be altered by adjusting the
contents of some of its internal registers. These registers are represented
by the driver as sysfs entries under /sys/bus/serio/drivers/psmouse/serio?
2. Extra knobs
~~~~~~~~~~~
-Currently the Linux Elantech touchpad driver provides two extra knobs under
+Currently the Linux Elantech touchpad driver provides three extra knobs under
/sys/bus/serio/drivers/psmouse/serio? for the user.
* debug
data consistency checking can be done. For now checking is disabled by
default. Currently even turning it on will do nothing.
+* crc_enabled
+
+ Sets crc_enabled to 0/1. The name "crc_enabled" is the official name of
+ this integrity check, even though it is not an actual cyclic redundancy
+ check.
+
+   Depending on the state of crc_enabled, certain basic data integrity
+   verification is done by the driver on hardware version 3 and 4. The
+   driver will reject any packet that appears corrupted. The state of
+   crc_enabled can be altered with this knob.
+
+ Reading the crc_enabled value will show the active value. Echoing
+ "0" or "1" to this file will set the state to "0" or "1".
+
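As a usage illustration, a minimal user-space C sketch that enables the check
by writing to the sysfs file described above (the "serio2" path is only an
assumed example; the actual serio number depends on the machine):

    #include <stdio.h>

    /* Example path only; the serio number varies from system to system. */
    #define CRC_PATH "/sys/bus/serio/drivers/psmouse/serio2/crc_enabled"

    int main(void)
    {
            FILE *f = fopen(CRC_PATH, "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("1", f);          /* echo "1" enables the integrity check */
            fclose(f);
            return 0;
    }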
/////////////////////////////////////////////////////////////////////////////
3. Differentiating hardware versions
byte 0 ~ 2 for one finger
byte 3 ~ 5 for another
+
+
+8. Trackpoint (for Hardware version 3 and 4)
+ =========================================
+8.1 Registers
+ ~~~~~~~~~
+No special registers have been identified.
+
+8.2 Native relative mode 6 byte packet format
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+8.2.1 Status Packet
+ ~~~~~~~~~~~~~
+
+byte 0:
+ bit 7 6 5 4 3 2 1 0
+ 0 0 sx sy 0 M R L
+byte 1:
+ bit 7 6 5 4 3 2 1 0
+ ~sx 0 0 0 0 0 0 0
+byte 2:
+ bit 7 6 5 4 3 2 1 0
+ ~sy 0 0 0 0 0 0 0
+byte 3:
+ bit 7 6 5 4 3 2 1 0
+ 0 0 ~sy ~sx 0 1 1 0
+byte 4:
+ bit 7 6 5 4 3 2 1 0
+ x7 x6 x5 x4 x3 x2 x1 x0
+byte 5:
+ bit 7 6 5 4 3 2 1 0
+ y7 y6 y5 y4 y3 y2 y1 y0
+
+
+         x and y are written in two's complement spread
+         over 9 bits with sx/sy the respective top bit and
+         x7..x0 and y7..y0 the lower bits.
+         ~sx is the inverse of sx, ~sy is the inverse of sy.
+         The sign of y is opposite to what the input driver
+         expects for a relative movement.
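To make the bit layout above concrete, here is a small C sketch of how the
signed 9-bit deltas could be reconstructed from such a packet; the helper name
and the explicit y negation are assumptions for illustration, not the driver's
actual code:

    #include <stdint.h>

    /*
     * Hypothetical decoder for the 6-byte trackpoint packet described above:
     * sx/sy sit in bits 5/4 of byte 0 and form the top bit of a 9-bit
     * two's complement value whose low 8 bits come from bytes 4 and 5.
     * Bits 2..0 of byte 0 carry the middle/right/left button states.
     */
    static void trackpoint_decode(const uint8_t pkt[6], int *dx, int *dy)
    {
            int x = pkt[4] - ((pkt[0] & 0x20) ? 256 : 0);   /* sx = bit 5 */
            int y = pkt[5] - ((pkt[0] & 0x10) ? 256 : 0);   /* sy = bit 4 */

            *dx = x;
            *dy = -y;       /* hardware y sign is inverted for the input layer */
    }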
usb-storage.delay_use=
[UMS] The delay in seconds before a new device is
- scanned for Logical Units (default 5).
+ scanned for Logical Units (default 1).
usb-storage.quirks=
[UMS] A list of quirks entries to supplement or
0 - disabled
1 - enabled
+fwmark_reflect - BOOLEAN
+ Controls the fwmark of kernel-generated IPv4 reply packets that are not
+	associated with a socket (for example, TCP RSTs or ICMP echo replies).
+ If unset, these packets have a fwmark of zero. If set, they have the
+ fwmark of the packet they are replying to.
+ Default: 0
+
route/max_size - INTEGER
Maximum number of routes allowed in the kernel. Increase
this when using large numbers of interfaces and/or routes.
proxy_ndp - BOOLEAN
Do proxy ndp.
+fwmark_reflect - BOOLEAN
+ Controls the fwmark of kernel-generated IPv6 reply packets that are not
+	associated with a socket (for example, TCP RSTs or ICMPv6 echo replies).
+ If unset, these packets have a fwmark of zero. If set, they have the
+ fwmark of the packet they are replying to.
+ Default: 0
+
conf/interface/*:
Change special settings per interface.
key, not quality.
multiplanar: select whether each device instance supports multi-planar formats,
- and thus the V4L2 multi-planar API. By default the first device instance
- is single-planar, the second multi-planar, and it keeps alternating.
+ and thus the V4L2 multi-planar API. By default device instances are
+ single-planar.
This module option can override that for each instance. Values are:
- 0: use alternating single and multi-planar devices.
1: this is a single-planar instance.
2: this is a multi-planar instance.
0 otherwise.
The driver has to be configured to support the multiplanar formats. By default
-the first driver instance is single-planar, the second is multi-planar, and it
-keeps alternating. This can be changed by setting the multiplanar module option,
-see section 1 for more details on that option.
+the driver instances are single-planar. This can be changed by setting the
+multiplanar module option; see section 1 for more details on that option.
If the driver instance is using the multiplanar formats/API, then the first
single planar format (YUYV) and the multiplanar NV16M and NV61M formats the
to see the blended framebuffer overlay that's being written to by the second
instance. This setup would require the following commands:
- $ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1 multiplanar=1,1
+ $ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1
$ v4l2-ctl -d1 --find-fb
/dev/fb1 is the framebuffer associated with base address 0x12800000
$ sudo v4l2-ctl -d2 --set-fbuf fb=1
BROADCOM BCM2835 ARM ARCHITECTURE
M: Stephen Warren <swarren@wwwdotorg.org>
+M: Lee Jones <lee@kernel.org>
L: linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-rpi.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rpi/linux-rpi.git
S: Maintained
N: bcm2835
S: Supported
F: drivers/net/ethernet/chelsio/cxgb3/
+CXGB3 ISCSI DRIVER (CXGB3I)
+M: Karen Xie <kxie@chelsio.com>
+L: linux-scsi@vger.kernel.org
+W: http://www.chelsio.com
+S: Supported
+F: drivers/scsi/cxgbi/cxgb3i
+
CXGB3 IWARP RNIC DRIVER (IW_CXGB3)
M: Steve Wise <swise@chelsio.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/net/ethernet/chelsio/cxgb4/
+CXGB4 ISCSI DRIVER (CXGB4I)
+M: Karen Xie <kxie@chelsio.com>
+L: linux-scsi@vger.kernel.org
+W: http://www.chelsio.com
+S: Supported
+F: drivers/scsi/cxgbi/cxgb4i
+
CXGB4 IWARP RNIC DRIVER (IW_CXGB4)
M: Steve Wise <swise@chelsio.com>
L: linux-rdma@vger.kernel.org
S: Maintained
F: drivers/iio/
F: drivers/staging/iio/
+F: include/linux/iio/
IKANOS/ADI EAGLE ADSL USB DRIVER
M: Matthieu Castet <castet.matthieu@free.fr>
S: Maintained
F: arch/arm/*omap*/
F: drivers/i2c/busses/i2c-omap.c
+F: drivers/irqchip/irq-omap-intc.c
+F: drivers/mfd/*omap*.c
+F: drivers/mfd/menelaus.c
+F: drivers/mfd/palmas.c
+F: drivers/mfd/tps65217.c
+F: drivers/mfd/tps65218.c
+F: drivers/mfd/tps65910.c
+F: drivers/mfd/twl-core.[ch]
+F: drivers/mfd/twl4030*.c
+F: drivers/mfd/twl6030*.c
+F: drivers/mfd/twl6040*.c
+F: drivers/regulator/palmas-regulator*.c
+F: drivers/regulator/pbias-regulator.c
+F: drivers/regulator/tps65217-regulator.c
+F: drivers/regulator/tps65218-regulator.c
+F: drivers/regulator/tps65910-regulator.c
+F: drivers/regulator/twl-regulator.c
F: include/linux/i2c-omap.h
OMAP DEVICE TREE SUPPORT
S: Maintained
F: arch/arm/boot/dts/*omap*
F: arch/arm/boot/dts/*am3*
+F: arch/arm/boot/dts/*am4*
+F: arch/arm/boot/dts/*am5*
+F: arch/arm/boot/dts/*dra7*
OMAP CLOCK FRAMEWORK SUPPORT
M: Paul Walmsley <paul@pwsan.com>
F: Documentation/hid/hiddev.txt
F: drivers/hid/usbhid/
-USB/IP DRIVERS
-L: linux-usb@vger.kernel.org
-S: Orphan
-F: drivers/staging/usbip/
-
USB ISP116X DRIVER
M: Olav Kongas <ok@artecdesign.ee>
L: linux-usb@vger.kernel.org
VERSION = 3
PATCHLEVEL = 18
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
NAME = Diseased Newt
# *DOCUMENTATION*
HOSTCC = gcc
HOSTCXX = g++
-HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer
+HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
HOSTCXXFLAGS = -O2
ifeq ($(shell $(HOSTCC) -v 2>&1 | grep -c "clang version"), 1)
KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -fno-common \
-Werror-implicit-function-declaration \
- -Wno-format-security
+ -Wno-format-security \
+ -std=gnu89
KBUILD_AFLAGS_KERNEL :=
KBUILD_CFLAGS_KERNEL :=
add sp, sp, r6
#endif
- tst r4, #1
- bleq cache_clean_flush
+ bl cache_clean_flush
adr r0, BSYM(restart)
add r0, r0, r6
b call_cache_fn
__armv4_mpu_cache_flush:
+ tst r4, #1
+ movne pc, lr
mov r2, #1
mov r3, #0
mcr p15, 0, ip, c7, c6, 0 @ invalidate D cache
mov pc, lr
__fa526_cache_flush:
+ tst r4, #1
+ movne pc, lr
mov r1, #0
mcr p15, 0, r1, c7, c14, 0 @ clean and invalidate D cache
mcr p15, 0, r1, c7, c5, 0 @ flush I cache
__armv6_mmu_cache_flush:
mov r1, #0
- mcr p15, 0, r1, c7, c14, 0 @ clean+invalidate D
+ tst r4, #1
+ mcreq p15, 0, r1, c7, c14, 0 @ clean+invalidate D
mcr p15, 0, r1, c7, c5, 0 @ invalidate I+BTB
- mcr p15, 0, r1, c7, c15, 0 @ clean+invalidate unified
+ mcreq p15, 0, r1, c7, c15, 0 @ clean+invalidate unified
mcr p15, 0, r1, c7, c10, 4 @ drain WB
mov pc, lr
__armv7_mmu_cache_flush:
+ tst r4, #1
+ bne iflush
mrc p15, 0, r10, c0, c1, 5 @ read ID_MMFR1
tst r10, #0xf << 16 @ hierarchical cache (ARMv7)
mov r10, #0
mov pc, lr
__armv5tej_mmu_cache_flush:
+ tst r4, #1
+ movne pc, lr
1: mrc p15, 0, r15, c7, c14, 3 @ test,clean,invalidate D cache
bne 1b
mcr p15, 0, r0, c7, c5, 0 @ flush I cache
mov pc, lr
__armv4_mmu_cache_flush:
+ tst r4, #1
+ movne pc, lr
mov r2, #64*1024 @ default: 32K dcache size (*2)
mov r11, #32 @ default: 32 byte line size
mrc p15, 0, r3, c0, c0, 1 @ read cache type
__armv3_mmu_cache_flush:
__armv3_mpu_cache_flush:
+ tst r4, #1
+ movne pc, lr
mov r1, #0
mcr p15, 0, r1, c7, c0, 0 @ invalidate whole cache v3
mov pc, lr
reg = <0x00060000 0x00020000>;
};
partition@4 {
- label = "NAND.u-boot-spl";
+ label = "NAND.u-boot-spl-os";
reg = <0x00080000 0x00040000>;
};
partition@5 {
dcdc3: regulator-dcdc3 {
compatible = "ti,tps65218-dcdc3";
regulator-name = "vdcdc3";
- regulator-min-microvolt = <1350000>;
- regulator-max-microvolt = <1350000>;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
regulator-boot-on;
regulator-always-on;
};
dcdc3: regulator-dcdc3 {
compatible = "ti,tps65218-dcdc3";
regulator-name = "vdds_ddr";
- regulator-min-microvolt = <1350000>;
- regulator-max-microvolt = <1350000>;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
regulator-boot-on;
regulator-always-on;
};
dcdc3: regulator-dcdc3 {
compatible = "ti,tps65218-dcdc3";
regulator-name = "vdcdc3";
- regulator-min-microvolt = <1350000>;
- regulator-max-microvolt = <1350000>;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
regulator-boot-on;
regulator-always-on;
};
#include "sama5d3_uart.dtsi"
/ {
- compatible = "atmel,samad31", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d31", "atmel,sama5d3", "atmel,sama5";
};
#include "sama5d3_gmac.dtsi"
/ {
- compatible = "atmel,samad33", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d33", "atmel,sama5d3", "atmel,sama5";
};
#include "sama5d3_mci2.dtsi"
/ {
- compatible = "atmel,samad34", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d34", "atmel,sama5d3", "atmel,sama5";
};
#include "sama5d3_tcb1.dtsi"
/ {
- compatible = "atmel,samad35", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d35", "atmel,sama5d3", "atmel,sama5";
};
#include "sama5d3_uart.dtsi"
/ {
- compatible = "atmel,samad36", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d36", "atmel,sama5d3", "atmel,sama5";
};
*/
/ {
- compatible = "atmel,samad3xcm", "atmel,sama5d3", "atmel,sama5";
+ compatible = "atmel,sama5d3xcm", "atmel,sama5d3", "atmel,sama5";
chosen {
bootargs = "console=ttyS0,115200 rootfstype=ubifs ubi.mtd=5 root=ubi0:rootfs";
};
+&esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+ bus-width = <4>;
+ status = "okay";
+};
+
&fec1 {
phy-mode = "rmii";
pinctrl-names = "default";
&iomuxc {
vf610-cosmic {
+ pinctrl_esdhc1: esdhc1grp {
+ fsl,pins = <
+ VF610_PAD_PTA24__ESDHC1_CLK 0x31ef
+ VF610_PAD_PTA25__ESDHC1_CMD 0x31ef
+ VF610_PAD_PTA26__ESDHC1_DAT0 0x31ef
+ VF610_PAD_PTA27__ESDHC1_DAT1 0x31ef
+				VF610_PAD_PTA28__ESDHC1_DAT2		0x31ef
+ VF610_PAD_PTA29__ESDHC1_DAT3 0x31ef
+ VF610_PAD_PTB28__GPIO_98 0x219d
+ >;
+ };
+
pinctrl_fec1: fec1grp {
fsl,pins = <
VF610_PAD_PTC9__ENET_RMII1_MDC 0x30d2
};
};
+&clkc {
+ fclk-enable = <0xf>;
+};
+
&gem0 {
status = "okay";
phy-mode = "rgmii-id";
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/edma.h>
+#include <linux/dma-mapping.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
struct device_node *node = pdev->dev.of_node;
struct device *dev = &pdev->dev;
int ret;
+ struct platform_device_info edma_dev_info = {
+ .name = "edma-dma-engine",
+ .dma_mask = DMA_BIT_MASK(32),
+ .parent = &pdev->dev,
+ };
if (node) {
/* Check if this is a second instance registered */
edma_write_array(j, EDMA_QRAE, i, 0x0);
}
arch_num_cc++;
+
+ edma_dev_info.id = j;
+ platform_device_register_full(&edma_dev_info);
}
return 0;
# CONFIG_HW_RANDOM is not set
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_IMX=y
+CONFIG_SPI=y
CONFIG_SPI_IMX=y
CONFIG_SPI_SPIDEV=y
CONFIG_GPIO_SYSFS=y
CONFIG_I2C_ALGOPCF=m
CONFIG_I2C_ALGOPCA=m
CONFIG_I2C_IMX=y
+CONFIG_SPI=y
CONFIG_SPI_IMX=y
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_MC9S08DZ60=y
#define PFD_PLL1_BASE (anatop_base + 0x2b0)
#define PFD_PLL2_BASE (anatop_base + 0x100)
#define PFD_PLL3_BASE (anatop_base + 0xf0)
+#define PLL1_CTRL (anatop_base + 0x270)
+#define PLL2_CTRL (anatop_base + 0x30)
#define PLL3_CTRL (anatop_base + 0x10)
+#define PLL4_CTRL (anatop_base + 0x70)
+#define PLL5_CTRL (anatop_base + 0xe0)
+#define PLL6_CTRL (anatop_base + 0xa0)
#define PLL7_CTRL (anatop_base + 0x20)
+#define ANA_MISC1 (anatop_base + 0x160)
static void __iomem *anatop_base;
static void __iomem *ccm_base;
/* sources for multiplexer clocks, this is used multiple times */
static const char *fast_sels[] = { "firc", "fxosc", };
static const char *slow_sels[] = { "sirc_32k", "sxosc", };
-static const char *pll1_sels[] = { "pll1_main", "pll1_pfd1", "pll1_pfd2", "pll1_pfd3", "pll1_pfd4", };
-static const char *pll2_sels[] = { "pll2_main", "pll2_pfd1", "pll2_pfd2", "pll2_pfd3", "pll2_pfd4", };
-static const char *sys_sels[] = { "fast_clk_sel", "slow_clk_sel", "pll2_pfd_sel", "pll2_main", "pll1_pfd_sel", "pll3_main", };
+static const char *pll1_sels[] = { "pll1_sys", "pll1_pfd1", "pll1_pfd2", "pll1_pfd3", "pll1_pfd4", };
+static const char *pll2_sels[] = { "pll2_bus", "pll2_pfd1", "pll2_pfd2", "pll2_pfd3", "pll2_pfd4", };
+static const char *pll_bypass_src_sels[] = { "fast_clk_sel", "lvds1_in", };
+static const char *pll1_bypass_sels[] = { "pll1", "pll1_bypass_src", };
+static const char *pll2_bypass_sels[] = { "pll2", "pll2_bypass_src", };
+static const char *pll3_bypass_sels[] = { "pll3", "pll3_bypass_src", };
+static const char *pll4_bypass_sels[] = { "pll4", "pll4_bypass_src", };
+static const char *pll5_bypass_sels[] = { "pll5", "pll5_bypass_src", };
+static const char *pll6_bypass_sels[] = { "pll6", "pll6_bypass_src", };
+static const char *pll7_bypass_sels[] = { "pll7", "pll7_bypass_src", };
+static const char *sys_sels[] = { "fast_clk_sel", "slow_clk_sel", "pll2_pfd_sel", "pll2_bus", "pll1_pfd_sel", "pll3_usb_otg", };
static const char *ddr_sels[] = { "pll2_pfd2", "sys_sel", };
static const char *rmii_sels[] = { "enet_ext", "audio_ext", "enet_50m", "enet_25m", };
static const char *enet_ts_sels[] = { "enet_ext", "fxosc", "audio_ext", "usb", "enet_ts", "enet_25m", "enet_50m", };
-static const char *esai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", };
-static const char *sai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", };
+static const char *esai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_audio_div", };
+static const char *sai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_audio_div", };
static const char *nfc_sels[] = { "platform_bus", "pll1_pfd1", "pll3_pfd1", "pll3_pfd3", };
-static const char *qspi_sels[] = { "pll3_main", "pll3_pfd4", "pll2_pfd4", "pll1_pfd4", };
-static const char *esdhc_sels[] = { "pll3_main", "pll3_pfd3", "pll1_pfd3", "platform_bus", };
-static const char *dcu_sels[] = { "pll1_pfd2", "pll3_main", };
+static const char *qspi_sels[] = { "pll3_usb_otg", "pll3_pfd4", "pll2_pfd4", "pll1_pfd4", };
+static const char *esdhc_sels[] = { "pll3_usb_otg", "pll3_pfd3", "pll1_pfd3", "platform_bus", };
+static const char *dcu_sels[] = { "pll1_pfd2", "pll3_usb_otg", };
static const char *gpu_sels[] = { "pll2_pfd2", "pll3_pfd2", };
-static const char *vadc_sels[] = { "pll6_main_div", "pll3_main_div", "pll3_main", };
+static const char *vadc_sels[] = { "pll6_video_div", "pll3_usb_otg_div", "pll3_usb_otg", };
/* FTM counter clock source, not module clock */
static const char *ftm_ext_sels[] = {"sirc_128k", "sxosc", "fxosc_half", "audio_ext", };
static const char *ftm_fix_sels[] = { "sxosc", "ipg_bus", };
-static struct clk_div_table pll4_main_div_table[] = {
+
+static struct clk_div_table pll4_audio_div_table[] = {
{ .val = 0, .div = 1 },
{ .val = 1, .div = 2 },
{ .val = 2, .div = 6 },
clk[VF610_CLK_AUDIO_EXT] = imx_obtain_fixed_clock("audio_ext", 0);
clk[VF610_CLK_ENET_EXT] = imx_obtain_fixed_clock("enet_ext", 0);
+ /* Clock source from external clock via LVDs PAD */
+ clk[VF610_CLK_ANACLK1] = imx_obtain_fixed_clock("anaclk1", 0);
+
clk[VF610_CLK_FXOSC_HALF] = imx_clk_fixed_factor("fxosc_half", "fxosc", 1, 2);
np = of_find_compatible_node(NULL, NULL, "fsl,vf610-anatop");
clk[VF610_CLK_SLOW_CLK_SEL] = imx_clk_mux("slow_clk_sel", CCM_CCSR, 4, 1, slow_sels, ARRAY_SIZE(slow_sels));
clk[VF610_CLK_FASK_CLK_SEL] = imx_clk_mux("fast_clk_sel", CCM_CCSR, 5, 1, fast_sels, ARRAY_SIZE(fast_sels));
- clk[VF610_CLK_PLL1_MAIN] = imx_clk_fixed_factor("pll1_main", "fast_clk_sel", 22, 1);
- clk[VF610_CLK_PLL1_PFD1] = imx_clk_pfd("pll1_pfd1", "pll1_main", PFD_PLL1_BASE, 0);
- clk[VF610_CLK_PLL1_PFD2] = imx_clk_pfd("pll1_pfd2", "pll1_main", PFD_PLL1_BASE, 1);
- clk[VF610_CLK_PLL1_PFD3] = imx_clk_pfd("pll1_pfd3", "pll1_main", PFD_PLL1_BASE, 2);
- clk[VF610_CLK_PLL1_PFD4] = imx_clk_pfd("pll1_pfd4", "pll1_main", PFD_PLL1_BASE, 3);
-
- clk[VF610_CLK_PLL2_MAIN] = imx_clk_fixed_factor("pll2_main", "fast_clk_sel", 22, 1);
- clk[VF610_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_main", PFD_PLL2_BASE, 0);
- clk[VF610_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_main", PFD_PLL2_BASE, 1);
- clk[VF610_CLK_PLL2_PFD3] = imx_clk_pfd("pll2_pfd3", "pll2_main", PFD_PLL2_BASE, 2);
- clk[VF610_CLK_PLL2_PFD4] = imx_clk_pfd("pll2_pfd4", "pll2_main", PFD_PLL2_BASE, 3);
-
- clk[VF610_CLK_PLL3_MAIN] = imx_clk_fixed_factor("pll3_main", "fast_clk_sel", 20, 1);
- clk[VF610_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_main", PFD_PLL3_BASE, 0);
- clk[VF610_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_main", PFD_PLL3_BASE, 1);
- clk[VF610_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_main", PFD_PLL3_BASE, 2);
- clk[VF610_CLK_PLL3_PFD4] = imx_clk_pfd("pll3_pfd4", "pll3_main", PFD_PLL3_BASE, 3);
-
- clk[VF610_CLK_PLL4_MAIN] = imx_clk_fixed_factor("pll4_main", "fast_clk_sel", 25, 1);
- /* Enet pll: fixed 50Mhz */
- clk[VF610_CLK_PLL5_MAIN] = imx_clk_fixed_factor("pll5_main", "fast_clk_sel", 125, 6);
- /* pll6: default 960Mhz */
- clk[VF610_CLK_PLL6_MAIN] = imx_clk_fixed_factor("pll6_main", "fast_clk_sel", 40, 1);
- /* pll7: USB1 PLL at 480MHz */
- clk[VF610_CLK_PLL7_MAIN] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7_main", "fast_clk_sel", PLL7_CTRL, 0x2);
+ clk[VF610_CLK_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", PLL1_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL2_BYPASS_SRC] = imx_clk_mux("pll2_bypass_src", PLL2_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL3_BYPASS_SRC] = imx_clk_mux("pll3_bypass_src", PLL3_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL4_BYPASS_SRC] = imx_clk_mux("pll4_bypass_src", PLL4_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL5_BYPASS_SRC] = imx_clk_mux("pll5_bypass_src", PLL5_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL6_BYPASS_SRC] = imx_clk_mux("pll6_bypass_src", PLL6_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+ clk[VF610_CLK_PLL7_BYPASS_SRC] = imx_clk_mux("pll7_bypass_src", PLL7_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+
+ clk[VF610_CLK_PLL1] = imx_clk_pllv3(IMX_PLLV3_GENERIC, "pll1", "pll1_bypass_src", PLL1_CTRL, 0x1);
+ clk[VF610_CLK_PLL2] = imx_clk_pllv3(IMX_PLLV3_GENERIC, "pll2", "pll2_bypass_src", PLL2_CTRL, 0x1);
+ clk[VF610_CLK_PLL3] = imx_clk_pllv3(IMX_PLLV3_USB, "pll3", "pll3_bypass_src", PLL3_CTRL, 0x1);
+ clk[VF610_CLK_PLL4] = imx_clk_pllv3(IMX_PLLV3_AV, "pll4", "pll4_bypass_src", PLL4_CTRL, 0x7f);
+ clk[VF610_CLK_PLL5] = imx_clk_pllv3(IMX_PLLV3_ENET, "pll5", "pll5_bypass_src", PLL5_CTRL, 0x3);
+ clk[VF610_CLK_PLL6] = imx_clk_pllv3(IMX_PLLV3_AV, "pll6", "pll6_bypass_src", PLL6_CTRL, 0x7f);
+ clk[VF610_CLK_PLL7] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7", "pll7_bypass_src", PLL7_CTRL, 0x1);
+
+ clk[VF610_PLL1_BYPASS] = imx_clk_mux_flags("pll1_bypass", PLL1_CTRL, 16, 1, pll1_bypass_sels, ARRAY_SIZE(pll1_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL2_BYPASS] = imx_clk_mux_flags("pll2_bypass", PLL2_CTRL, 16, 1, pll2_bypass_sels, ARRAY_SIZE(pll2_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL3_BYPASS] = imx_clk_mux_flags("pll3_bypass", PLL3_CTRL, 16, 1, pll3_bypass_sels, ARRAY_SIZE(pll3_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL4_BYPASS] = imx_clk_mux_flags("pll4_bypass", PLL4_CTRL, 16, 1, pll4_bypass_sels, ARRAY_SIZE(pll4_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL5_BYPASS] = imx_clk_mux_flags("pll5_bypass", PLL5_CTRL, 16, 1, pll5_bypass_sels, ARRAY_SIZE(pll5_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL6_BYPASS] = imx_clk_mux_flags("pll6_bypass", PLL6_CTRL, 16, 1, pll6_bypass_sels, ARRAY_SIZE(pll6_bypass_sels), CLK_SET_RATE_PARENT);
+ clk[VF610_PLL7_BYPASS] = imx_clk_mux_flags("pll7_bypass", PLL7_CTRL, 16, 1, pll7_bypass_sels, ARRAY_SIZE(pll7_bypass_sels), CLK_SET_RATE_PARENT);
+
+ /* Do not bypass PLLs initially */
+ clk_set_parent(clk[VF610_PLL1_BYPASS], clk[VF610_CLK_PLL1]);
+ clk_set_parent(clk[VF610_PLL2_BYPASS], clk[VF610_CLK_PLL2]);
+ clk_set_parent(clk[VF610_PLL3_BYPASS], clk[VF610_CLK_PLL3]);
+ clk_set_parent(clk[VF610_PLL4_BYPASS], clk[VF610_CLK_PLL4]);
+ clk_set_parent(clk[VF610_PLL5_BYPASS], clk[VF610_CLK_PLL5]);
+ clk_set_parent(clk[VF610_PLL6_BYPASS], clk[VF610_CLK_PLL6]);
+ clk_set_parent(clk[VF610_PLL7_BYPASS], clk[VF610_CLK_PLL7]);
+
+ clk[VF610_CLK_PLL1_SYS] = imx_clk_gate("pll1_sys", "pll1_bypass", PLL1_CTRL, 13);
+ clk[VF610_CLK_PLL2_BUS] = imx_clk_gate("pll2_bus", "pll2_bypass", PLL2_CTRL, 13);
+ clk[VF610_CLK_PLL3_USB_OTG] = imx_clk_gate("pll3_usb_otg", "pll3_bypass", PLL3_CTRL, 13);
+ clk[VF610_CLK_PLL4_AUDIO] = imx_clk_gate("pll4_audio", "pll4_bypass", PLL4_CTRL, 13);
+ clk[VF610_CLK_PLL5_ENET] = imx_clk_gate("pll5_enet", "pll5_bypass", PLL5_CTRL, 13);
+ clk[VF610_CLK_PLL6_VIDEO] = imx_clk_gate("pll6_video", "pll6_bypass", PLL6_CTRL, 13);
+ clk[VF610_CLK_PLL7_USB_HOST] = imx_clk_gate("pll7_usb_host", "pll7_bypass", PLL7_CTRL, 13);
+
+ clk[VF610_CLK_LVDS1_IN] = imx_clk_gate_exclusive("lvds1_in", "anaclk1", ANA_MISC1, 12, BIT(10));
+
+ clk[VF610_CLK_PLL1_PFD1] = imx_clk_pfd("pll1_pfd1", "pll1_sys", PFD_PLL1_BASE, 0);
+ clk[VF610_CLK_PLL1_PFD2] = imx_clk_pfd("pll1_pfd2", "pll1_sys", PFD_PLL1_BASE, 1);
+ clk[VF610_CLK_PLL1_PFD3] = imx_clk_pfd("pll1_pfd3", "pll1_sys", PFD_PLL1_BASE, 2);
+ clk[VF610_CLK_PLL1_PFD4] = imx_clk_pfd("pll1_pfd4", "pll1_sys", PFD_PLL1_BASE, 3);
+
+ clk[VF610_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_bus", PFD_PLL2_BASE, 0);
+ clk[VF610_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_bus", PFD_PLL2_BASE, 1);
+ clk[VF610_CLK_PLL2_PFD3] = imx_clk_pfd("pll2_pfd3", "pll2_bus", PFD_PLL2_BASE, 2);
+ clk[VF610_CLK_PLL2_PFD4] = imx_clk_pfd("pll2_pfd4", "pll2_bus", PFD_PLL2_BASE, 3);
+
+ clk[VF610_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_usb_otg", PFD_PLL3_BASE, 0);
+ clk[VF610_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_usb_otg", PFD_PLL3_BASE, 1);
+ clk[VF610_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_usb_otg", PFD_PLL3_BASE, 2);
+ clk[VF610_CLK_PLL3_PFD4] = imx_clk_pfd("pll3_pfd4", "pll3_usb_otg", PFD_PLL3_BASE, 3);
clk[VF610_CLK_PLL1_PFD_SEL] = imx_clk_mux("pll1_pfd_sel", CCM_CCSR, 16, 3, pll1_sels, 5);
clk[VF610_CLK_PLL2_PFD_SEL] = imx_clk_mux("pll2_pfd_sel", CCM_CCSR, 19, 3, pll2_sels, 5);
clk[VF610_CLK_PLATFORM_BUS] = imx_clk_divider("platform_bus", "sys_bus", CCM_CACRR, 3, 3);
clk[VF610_CLK_IPG_BUS] = imx_clk_divider("ipg_bus", "platform_bus", CCM_CACRR, 11, 2);
- clk[VF610_CLK_PLL3_MAIN_DIV] = imx_clk_divider("pll3_main_div", "pll3_main", CCM_CACRR, 20, 1);
- clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_main_div", "pll4_main", 0, CCM_CACRR, 6, 3, 0, pll4_main_div_table, &imx_ccm_lock);
- clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_main_div", "pll6_main", CCM_CACRR, 21, 1);
+ clk[VF610_CLK_PLL3_MAIN_DIV] = imx_clk_divider("pll3_usb_otg_div", "pll3_usb_otg", CCM_CACRR, 20, 1);
+ clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_audio_div", "pll4_audio", 0, CCM_CACRR, 6, 3, 0, pll4_audio_div_table, &imx_ccm_lock);
+ clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_video_div", "pll6_video", CCM_CACRR, 21, 1);
- clk[VF610_CLK_USBPHY0] = imx_clk_gate("usbphy0", "pll3_main", PLL3_CTRL, 6);
- clk[VF610_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll7_main", PLL7_CTRL, 6);
+ clk[VF610_CLK_USBPHY0] = imx_clk_gate("usbphy0", "pll3_usb_otg", PLL3_CTRL, 6);
+ clk[VF610_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll7_usb_host", PLL7_CTRL, 6);
clk[VF610_CLK_USBC0] = imx_clk_gate2("usbc0", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(4));
clk[VF610_CLK_USBC1] = imx_clk_gate2("usbc1", "ipg_bus", CCM_CCGR7, CCM_CCGRx_CGn(4));
clk[VF610_CLK_QSPI1_X1_DIV] = imx_clk_divider("qspi1_x1", "qspi1_x2", CCM_CSCDR3, 11, 1);
clk[VF610_CLK_QSPI1] = imx_clk_gate2("qspi1", "qspi1_x1", CCM_CCGR8, CCM_CCGRx_CGn(4));
- clk[VF610_CLK_ENET_50M] = imx_clk_fixed_factor("enet_50m", "pll5_main", 1, 10);
- clk[VF610_CLK_ENET_25M] = imx_clk_fixed_factor("enet_25m", "pll5_main", 1, 20);
+ clk[VF610_CLK_ENET_50M] = imx_clk_fixed_factor("enet_50m", "pll5_enet", 1, 10);
+ clk[VF610_CLK_ENET_25M] = imx_clk_fixed_factor("enet_25m", "pll5_enet", 1, 20);
clk[VF610_CLK_ENET_SEL] = imx_clk_mux("enet_sel", CCM_CSCMR2, 4, 2, rmii_sels, 4);
clk[VF610_CLK_ENET_TS_SEL] = imx_clk_mux("enet_ts_sel", CCM_CSCMR2, 0, 3, enet_ts_sels, 7);
clk[VF610_CLK_ENET] = imx_clk_gate("enet", "enet_sel", CCM_CSCDR1, 24);
static void __init mvebu_dt_init(void)
{
- if (of_machine_is_compatible("plathome,openblocks-ax3-4"))
+ if (of_machine_is_compatible("marvell,armadaxp"))
i2c_quirk();
if (of_machine_is_compatible("marvell,a375-db")) {
external_abort_quirk();
config KUSER_HELPERS
bool "Enable kuser helpers in vector page" if !NEED_KUSER_HELPERS
+ depends on MMU
default y
help
Warning: disabling this option may break user programs.
#define orion_gpio_dbg_show NULL
#endif
+static void orion_gpio_unmask_irq(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ u32 reg_val;
+ u32 mask = d->mask;
+
+ irq_gc_lock(gc);
+ reg_val = irq_reg_readl(gc->reg_base + ct->regs.mask);
+ reg_val |= mask;
+ irq_reg_writel(reg_val, gc->reg_base + ct->regs.mask);
+ irq_gc_unlock(gc);
+}
+
+static void orion_gpio_mask_irq(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ u32 mask = d->mask;
+ u32 reg_val;
+
+ irq_gc_lock(gc);
+ reg_val = irq_reg_readl(gc->reg_base + ct->regs.mask);
+ reg_val &= ~mask;
+ irq_reg_writel(reg_val, gc->reg_base + ct->regs.mask);
+ irq_gc_unlock(gc);
+}
+
void __init orion_gpio_init(struct device_node *np,
int gpio_base, int ngpio,
void __iomem *base, int mask_offset,
ct = gc->chip_types;
ct->regs.mask = ochip->mask_offset + GPIO_LEVEL_MASK_OFF;
ct->type = IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW;
- ct->chip.irq_mask = irq_gc_mask_clr_bit;
- ct->chip.irq_unmask = irq_gc_mask_set_bit;
+ ct->chip.irq_mask = orion_gpio_mask_irq;
+ ct->chip.irq_unmask = orion_gpio_unmask_irq;
ct->chip.irq_set_type = gpio_irq_set_type;
ct->chip.name = ochip->chip.label;
ct->regs.ack = GPIO_EDGE_CAUSE_OFF;
ct->type = IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING;
ct->chip.irq_ack = irq_gc_ack_clr_bit;
- ct->chip.irq_mask = irq_gc_mask_clr_bit;
- ct->chip.irq_unmask = irq_gc_mask_set_bit;
+ ct->chip.irq_mask = orion_gpio_mask_irq;
+ ct->chip.irq_unmask = orion_gpio_unmask_irq;
ct->chip.irq_set_type = gpio_irq_set_type;
ct->handler = handle_edge_irq;
ct->chip.name = ochip->chip.label;
compatible = "apm,xgene-enet";
status = "disabled";
reg = <0x0 0x17020000 0x0 0xd100>,
- <0x0 0X17030000 0x0 0X400>,
+ <0x0 0X17030000 0x0 0Xc300>,
<0x0 0X10000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0x3c 0x4>;
sgenet0: ethernet@1f210000 {
compatible = "apm,xgene-enet";
status = "disabled";
- reg = <0x0 0x1f210000 0x0 0x10000>,
- <0x0 0x1f200000 0x0 0X10000>,
- <0x0 0x1B000000 0x0 0X20000>;
+ reg = <0x0 0x1f210000 0x0 0xd100>,
+ <0x0 0x1f200000 0x0 0Xc300>,
+ <0x0 0x1B000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0xA0 0x4>;
dma-coherent;
compatible = "apm,xgene-enet";
status = "disabled";
reg = <0x0 0x1f610000 0x0 0xd100>,
- <0x0 0x1f600000 0x0 0X400>,
+ <0x0 0x1f600000 0x0 0Xc300>,
<0x0 0x18000000 0x0 0X200>;
reg-names = "enet_csr", "ring_csr", "ring_cmd";
interrupts = <0x0 0x60 0x4>;
CONFIG_ARCH_THUNDER=y
CONFIG_ARCH_VEXPRESS=y
CONFIG_ARCH_XGENE=y
+CONFIG_PCI=y
+CONFIG_PCI_MSI=y
+CONFIG_PCI_XGENE=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_KSM=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_LRO is not set
# CONFIG_IPV6 is not set
+CONFIG_BPF_JIT=y
# CONFIG_WIRELESS is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_ATA=y
+CONFIG_SATA_AHCI=y
+CONFIG_SATA_AHCI_PLATFORM=y
CONFIG_AHCI_XGENE=y
-CONFIG_PHY_XGENE=y
CONFIG_PATA_PLATFORM=y
CONFIG_PATA_OF_PLATFORM=y
CONFIG_NETDEVICES=y
CONFIG_TUN=y
CONFIG_VIRTIO_NET=y
+CONFIG_NET_XGENE=y
CONFIG_SMC91X=y
CONFIG_SMSC911X=y
-CONFIG_NET_XGENE=y
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_HW_RANDOM is not set
+# CONFIG_HMC_DRV is not set
+CONFIG_SPI=y
+CONFIG_SPI_PL022=y
+CONFIG_GPIO_PL061=y
+CONFIG_GPIO_XGENE=y
# CONFIG_HWMON is not set
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_ISP1760_HCD=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_STORAGE=y
+CONFIG_USB_ULPI=y
CONFIG_MMC=y
CONFIG_MMC_ARMMMCI=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SPI=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_EFI=y
+CONFIG_RTC_DRV_XGENE=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_PHY_XGENE=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
* virt_to_page(k) convert a _valid_ virtual address to struct page *
* virt_addr_valid(k) indicates whether a virtual address is valid
*/
-#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
+#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
__SYSCALL(__NR_getrandom, sys_getrandom)
#define __NR_memfd_create 385
__SYSCALL(__NR_memfd_create, sys_memfd_create)
+#define __NR_bpf 386
+__SYSCALL(__NR_bpf, sys_bpf)
b.eq efi_load_fail
/*
- * efi_entry() will have relocated the kernel image if necessary
- * and we return here with device tree address in x0 and the kernel
- * entry point stored at *image_addr. Save those values in registers
- * which are callee preserved.
+ * efi_entry() will have copied the kernel image if necessary and we
+ * return here with device tree address in x0 and the kernel entry
+ * point stored at *image_addr. Save those values in registers which
+ * are callee preserved.
*/
mov x20, x0 // DTB address
ldr x0, [sp, #16] // relocated _text address
mov x21, x0
/*
- * Flush dcache covering current runtime addresses
- * of kernel text/data. Then flush all of icache.
+ * Calculate size of the kernel Image (same for original and copy).
*/
adrp x1, _text
add x1, x1, #:lo12:_text
add x2, x2, #:lo12:_edata
sub x1, x2, x1
+ /*
+ * Flush the copied Image to the PoC, and ensure it is not shadowed by
+ * stale icache entries from before relocation.
+ */
bl __flush_dcache_area
ic ialluis
+ /*
+ * Ensure that the rest of this function (in the original Image) is
+ * visible when the caches are disabled. The I-cache can't have stale
+ * entries for the VA range of the current image, so no maintenance is
+ * necessary.
+ */
+ adr x0, efi_stub_entry
+ adr x1, efi_stub_entry_end
+ sub x1, x1, x0
+ bl __flush_dcache_area
+
/* Turn off Dcache and MMU */
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
ldp x29, x30, [sp], #32
ret
+efi_stub_entry_end:
ENDPROC(efi_stub_entry)
* which ends with "dsb; isb" pair guaranteeing global
* visibility.
*/
- atomic_set(&pp->cpu_count, -1);
+ /* Notify other processors with an additional increment. */
+ atomic_inc(&pp->cpu_count);
} else {
- while (atomic_read(&pp->cpu_count) != -1)
+ while (atomic_read(&pp->cpu_count) <= num_online_cpus())
cpu_relax();
isb();
}
if (WARN_ON_ONCE(!index))
return -EINVAL;
- if (state->type == PSCI_POWER_STATE_TYPE_STANDBY)
+ if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
ret = psci_ops.cpu_suspend(state[index - 1], 0);
else
ret = __cpu_suspend(index, psci_suspend_finisher);
sub x1, x1, #2
4: adds x1, x1, #1
b.mi 5f
- strb wzr, [x0]
+USER(9f, strb wzr, [x0] )
5: mov x0, #0
ret
ENDPROC(__clear_user)
}
static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
- unsigned long end, unsigned long phys,
+ unsigned long end, phys_addr_t phys,
int map_io)
{
pud_t *pud;
KBUILD_AFLAGS_MODULE += -mlong-calls
KBUILD_CFLAGS_MODULE += -mlong-calls
+#
+# pass -msoft-float to GAS if it supports it. However on newer binutils
+# (specifically newer than 2.24.51.20140728) we then also need to explicitly
+# set ".set hardfloat" in all files which manipulate floating point registers.
+#
+ifneq ($(call as-option,-Wa$(comma)-msoft-float,),)
+ cflags-y += -DGAS_HAS_SET_HARDFLOAT -Wa,-msoft-float
+endif
+
cflags-y += -ffreestanding
#
.irq_set_type = octeon_irq_ciu_gpio_set_type,
#ifdef CONFIG_SMP
.irq_set_affinity = octeon_irq_ciu_set_affinity_v2,
+ .irq_cpu_offline = octeon_irq_cpu_offline_ciu,
#endif
.flags = IRQCHIP_SET_TYPE_MASKED,
};
.irq_set_type = octeon_irq_ciu_gpio_set_type,
#ifdef CONFIG_SMP
.irq_set_affinity = octeon_irq_ciu_set_affinity,
+ .irq_cpu_offline = octeon_irq_cpu_offline_ciu,
#endif
.flags = IRQCHIP_SET_TYPE_MASKED,
};
#include <asm/mipsregs.h>
.macro fpu_save_single thread tmp=t0
+ .set push
+ SET_HARDFLOAT
cfc1 \tmp, fcr31
swc1 $f0, THREAD_FPR0_LS64(\thread)
swc1 $f1, THREAD_FPR1_LS64(\thread)
swc1 $f30, THREAD_FPR30_LS64(\thread)
swc1 $f31, THREAD_FPR31_LS64(\thread)
sw \tmp, THREAD_FCR31(\thread)
+ .set pop
.endm
.macro fpu_restore_single thread tmp=t0
+ .set push
+ SET_HARDFLOAT
lw \tmp, THREAD_FCR31(\thread)
lwc1 $f0, THREAD_FPR0_LS64(\thread)
lwc1 $f1, THREAD_FPR1_LS64(\thread)
lwc1 $f30, THREAD_FPR30_LS64(\thread)
lwc1 $f31, THREAD_FPR31_LS64(\thread)
ctc1 \tmp, fcr31
+ .set pop
.endm
.macro cpu_save_nonscratch thread
#endif /* CONFIG_CPU_MIPSR2 */
.macro fpu_save_16even thread tmp=t0
+ .set push
+ SET_HARDFLOAT
cfc1 \tmp, fcr31
sdc1 $f0, THREAD_FPR0_LS64(\thread)
sdc1 $f2, THREAD_FPR2_LS64(\thread)
sdc1 $f28, THREAD_FPR28_LS64(\thread)
sdc1 $f30, THREAD_FPR30_LS64(\thread)
sw \tmp, THREAD_FCR31(\thread)
+ .set pop
.endm
.macro fpu_save_16odd thread
.set push
.set mips64r2
+ SET_HARDFLOAT
sdc1 $f1, THREAD_FPR1_LS64(\thread)
sdc1 $f3, THREAD_FPR3_LS64(\thread)
sdc1 $f5, THREAD_FPR5_LS64(\thread)
.endm
.macro fpu_restore_16even thread tmp=t0
+ .set push
+ SET_HARDFLOAT
lw \tmp, THREAD_FCR31(\thread)
ldc1 $f0, THREAD_FPR0_LS64(\thread)
ldc1 $f2, THREAD_FPR2_LS64(\thread)
.macro fpu_restore_16odd thread
.set push
.set mips64r2
+ SET_HARDFLOAT
ldc1 $f1, THREAD_FPR1_LS64(\thread)
ldc1 $f3, THREAD_FPR3_LS64(\thread)
ldc1 $f5, THREAD_FPR5_LS64(\thread)
.macro cfcmsa rd, cs
.set push
.set noat
+ SET_HARDFLOAT
.insn
.word CFC_MSA_INSN | (\cs << 11)
move \rd, $1
.macro ctcmsa cd, rs
.set push
.set noat
+ SET_HARDFLOAT
move $1, \rs
.word CTC_MSA_INSN | (\cd << 6)
.set pop
.macro ld_d wd, off, base
.set push
.set noat
+ SET_HARDFLOAT
add $1, \base, \off
.word LDD_MSA_INSN | (\wd << 6)
.set pop
.macro st_d wd, off, base
.set push
.set noat
+ SET_HARDFLOAT
add $1, \base, \off
.word STD_MSA_INSN | (\wd << 6)
.set pop
.macro copy_u_w rd, ws, n
.set push
.set noat
+ SET_HARDFLOAT
.insn
.word COPY_UW_MSA_INSN | (\n << 16) | (\ws << 11)
/* move triggers an assembler bug... */
.macro copy_u_d rd, ws, n
.set push
.set noat
+ SET_HARDFLOAT
.insn
.word COPY_UD_MSA_INSN | (\n << 16) | (\ws << 11)
/* move triggers an assembler bug... */
.macro insert_w wd, n, rs
.set push
.set noat
+ SET_HARDFLOAT
/* move triggers an assembler bug... */
or $1, \rs, zero
.word INSERT_W_MSA_INSN | (\n << 16) | (\wd << 6)
.macro insert_d wd, n, rs
.set push
.set noat
+ SET_HARDFLOAT
/* move triggers an assembler bug... */
or $1, \rs, zero
.word INSERT_D_MSA_INSN | (\n << 16) | (\wd << 6)
st_d 31, THREAD_FPR31, \thread
.set push
.set noat
+ SET_HARDFLOAT
cfcmsa $1, MSA_CSR
sw $1, THREAD_MSA_CSR(\thread)
.set pop
.macro msa_restore_all thread
.set push
.set noat
+ SET_HARDFLOAT
lw $1, THREAD_MSA_CSR(\thread)
ctcmsa MSA_CSR, $1
.set pop
.macro msa_init_all_upper
.set push
.set noat
+ SET_HARDFLOAT
not $1, zero
msa_init_upper 0
.set pop
#include <asm/sgidefs.h>
+/*
+ * starting with binutils 2.24.51.20140729, MIPS binutils warn about mixing
+ * hardfloat and softfloat object files. The kernel build uses soft-float by
+ * default, so we also need to pass -msoft-float along to GAS if it supports it.
+ * But this in turn causes assembler errors in files which access hardfloat
+ * registers. We detect if GAS supports "-msoft-float" in the Makefile and
+ * explicitly put ".set hardfloat" where floating point registers are touched.
+ */
+#ifdef GAS_HAS_SET_HARDFLOAT
+#define SET_HARDFLOAT .set hardfloat
+#else
+#define SET_HARDFLOAT
+#endif
+
#if _MIPS_SIM == _MIPS_SIM_ABI32
/*
if (is_msa_enabled()) {
if (save) {
save_msa(current);
- asm volatile("cfc1 %0, $31"
- : "=r"(current->thread.fpu.fcr31));
+ current->thread.fpu.fcr31 =
+ read_32bit_cp1_register(CP1_STATUS);
}
disable_msa();
clear_thread_flag(TIF_USEDMSA);
/*
* Macros to access the floating point coprocessor control registers
*/
-#define read_32bit_cp1_register(source) \
+#define _read_32bit_cp1_register(source, gas_hardfloat) \
({ \
int __res; \
\
" # gas fails to assemble cfc1 for some archs, \n" \
" # like Octeon. \n" \
" .set mips1 \n" \
+ " "STR(gas_hardfloat)" \n" \
" cfc1 %0,"STR(source)" \n" \
" .set pop \n" \
: "=r" (__res)); \
__res; \
})
+#ifdef GAS_HAS_SET_HARDFLOAT
+#define read_32bit_cp1_register(source) \
+ _read_32bit_cp1_register(source, .set hardfloat)
+#else
+#define read_32bit_cp1_register(source) \
+ _read_32bit_cp1_register(source, )
+#endif
+
#ifdef HAVE_AS_DSP
#define rddsp(mask) \
({ \
#define __NR_seccomp (__NR_Linux + 352)
#define __NR_getrandom (__NR_Linux + 353)
#define __NR_memfd_create (__NR_Linux + 354)
+#define __NR_bpf (__NR_Linux + 355)
/*
* Offset of the last Linux o32 flavoured syscall
*/
-#define __NR_Linux_syscalls 354
+#define __NR_Linux_syscalls 355
#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */
#define __NR_O32_Linux 4000
-#define __NR_O32_Linux_syscalls 354
+#define __NR_O32_Linux_syscalls 355
#if _MIPS_SIM == _MIPS_SIM_ABI64
#define __NR_seccomp (__NR_Linux + 312)
#define __NR_getrandom (__NR_Linux + 313)
#define __NR_memfd_create (__NR_Linux + 314)
+#define __NR_bpf (__NR_Linux + 315)
/*
* Offset of the last Linux 64-bit flavoured syscall
*/
-#define __NR_Linux_syscalls 314
+#define __NR_Linux_syscalls 315
#endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */
#define __NR_64_Linux 5000
-#define __NR_64_Linux_syscalls 314
+#define __NR_64_Linux_syscalls 315
#if _MIPS_SIM == _MIPS_SIM_NABI32
#define __NR_seccomp (__NR_Linux + 316)
#define __NR_getrandom (__NR_Linux + 317)
#define __NR_memfd_create (__NR_Linux + 318)
+#define __NR_bpf			(__NR_Linux + 319)
/*
* Offset of the last N32 flavoured syscall
*/
-#define __NR_Linux_syscalls 318
+#define __NR_Linux_syscalls 319
#endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */
#define __NR_N32_Linux 6000
-#define __NR_N32_Linux_syscalls 318
+#define __NR_N32_Linux_syscalls 319
#endif /* _UAPI_ASM_UNISTD_H */
case mm_bc1t_op:
preempt_disable();
if (is_fpu_owner())
- asm volatile("cfc1\t%0,$31" : "=r" (fcr31));
+ fcr31 = read_32bit_cp1_register(CP1_STATUS);
else
fcr31 = current->thread.fpu.fcr31;
preempt_enable();
case cop1_op:
preempt_disable();
if (is_fpu_owner())
- asm volatile(
- ".set push\n"
- "\t.set mips1\n"
- "\tcfc1\t%0,$31\n"
- "\t.set pop" : "=r" (fcr31));
+ fcr31 = read_32bit_cp1_register(CP1_STATUS);
else
fcr31 = current->thread.fpu.fcr31;
preempt_enable();
.set push
/* gas fails to assemble cfc1 for some archs (octeon).*/ \
.set mips1
+ SET_HARDFLOAT
cfc1 a1, fcr31
li a2, ~(0x3f << 12)
and a2, a1
.set mips1
/* Save floating point context */
LEAF(_save_fp_context)
+ .set push
+ SET_HARDFLOAT
li v0, 0 # assume success
cfc1 t1,fcr31
EX(swc1 $f0,(SC_FPREGS+0)(a0))
EX(sw t1,(SC_FPC_CSR)(a0))
cfc1 t0,$0 # implementation/version
jr ra
+ .set pop
.set nomacro
EX(sw t0,(SC_FPC_EIR)(a0))
.set macro
* stack frame which might have been changed by the user.
*/
LEAF(_restore_fp_context)
+ .set push
+ SET_HARDFLOAT
li v0, 0 # assume success
EX(lw t0,(SC_FPC_CSR)(a0))
EX(lwc1 $f0,(SC_FPREGS+0)(a0))
EX(lwc1 $f31,(SC_FPREGS+248)(a0))
jr ra
ctc1 t0,fcr31
+ .set pop
END(_restore_fp_context)
.set reorder
#define FPU_DEFAULT 0x00000000
+ .set push
+ SET_HARDFLOAT
+
LEAF(_init_fpu)
mfc0 t0, CP0_STATUS
li t1, ST0_CU1
mtc1 t0, $f31
jr ra
END(_init_fpu)
+
+ .set pop
#include <asm/asm-offsets.h>
#include <asm/regdef.h>
+/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */
+#undef fp
+
.macro EX insn, reg, src
.set push
+ SET_HARDFLOAT
.set nomacro
.ex\@: \insn \reg, \src
.set pop
.set arch=r4000
LEAF(_save_fp_context)
+ .set push
+ SET_HARDFLOAT
cfc1 t1, fcr31
+ .set pop
#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2)
.set push
+ SET_HARDFLOAT
#ifdef CONFIG_CPU_MIPS32_R2
- .set mips64r2
+ .set mips32r2
+ .set fp=64
mfc0 t0, CP0_STATUS
sll t0, t0, 5
bgez t0, 1f # skip storing odd if FR=0
1: .set pop
#endif
+ .set push
+ SET_HARDFLOAT
/* Store the 16 even double precision registers */
EX sdc1 $f0, SC_FPREGS+0(a0)
EX sdc1 $f2, SC_FPREGS+16(a0)
EX sw t1, SC_FPC_CSR(a0)
jr ra
li v0, 0 # success
+ .set pop
END(_save_fp_context)
#ifdef CONFIG_MIPS32_COMPAT
/* Save 32-bit process floating point context */
LEAF(_save_fp_context32)
+ .set push
+ SET_HARDFLOAT
cfc1 t1, fcr31
mfc0 t0, CP0_STATUS
EX sw t1, SC32_FPC_CSR(a0)
cfc1 t0, $0 # implementation/version
EX sw t0, SC32_FPC_EIR(a0)
+ .set pop
jr ra
li v0, 0 # success
#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2)
.set push
+ SET_HARDFLOAT
#ifdef CONFIG_CPU_MIPS32_R2
- .set mips64r2
+ .set mips32r2
+ .set fp=64
mfc0 t0, CP0_STATUS
sll t0, t0, 5
bgez t0, 1f # skip loading odd if FR=0
EX ldc1 $f31, SC_FPREGS+248(a0)
1: .set pop
#endif
+ .set push
+ SET_HARDFLOAT
EX ldc1 $f0, SC_FPREGS+0(a0)
EX ldc1 $f2, SC_FPREGS+16(a0)
EX ldc1 $f4, SC_FPREGS+32(a0)
EX ldc1 $f28, SC_FPREGS+224(a0)
EX ldc1 $f30, SC_FPREGS+240(a0)
ctc1 t1, fcr31
+ .set pop
jr ra
li v0, 0 # success
END(_restore_fp_context)
#ifdef CONFIG_MIPS32_COMPAT
LEAF(_restore_fp_context32)
/* Restore an o32 sigcontext. */
+ .set push
+ SET_HARDFLOAT
EX lw t1, SC32_FPC_CSR(a0)
mfc0 t0, CP0_STATUS
ctc1 t1, fcr31
jr ra
li v0, 0 # success
+ .set pop
END(_restore_fp_context32)
#endif
#include <asm/asmmacro.h>
+/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */
+#undef fp
+
/*
* Offset to the current process status flags, the first 32 bytes of the
* stack are not used.
bgtz a3, 1f
/* Save 128b MSA vector context + scalar FP control & status. */
+ .set push
+ SET_HARDFLOAT
cfc1 t1, fcr31
msa_save_all a0
+ .set pop /* SET_HARDFLOAT */
+
sw t1, THREAD_FCR31(a0)
b 2f
#define FPU_DEFAULT 0x00000000
+ .set push
+ SET_HARDFLOAT
+
LEAF(_init_fpu)
mfc0 t0, CP0_STATUS
li t1, ST0_CU1
#ifdef CONFIG_CPU_MIPS32_R2
.set push
- .set mips64r2
+ .set mips32r2
+ .set fp=64
sll t0, t0, 5 # is Status.FR set?
bgez t0, 1f # no: skip setting upper 32b
#endif
jr ra
END(_init_fpu)
+
+ .set pop /* SET_HARDFLOAT */
.set noreorder
.set mips2
+ .set push
+ SET_HARDFLOAT
+
/* Save floating point context */
LEAF(_save_fp_context)
mfc0 t0,CP0_STATUS
1: jr ra
nop
END(_restore_fp_context)
+
+ .set pop /* SET_HARDFLOAT */
PTR sys_seccomp
PTR sys_getrandom
PTR sys_memfd_create
+ PTR sys_bpf /* 4355 */
PTR sys_seccomp
PTR sys_getrandom
PTR sys_memfd_create
+ PTR sys_bpf /* 5315 */
.size sys_call_table,.-sys_call_table
PTR sys_seccomp
PTR sys_getrandom
PTR sys_memfd_create
+ PTR sys_bpf
.size sysn32_call_table,.-sysn32_call_table
PTR sys_seccomp
PTR sys_getrandom
PTR sys_memfd_create
+ PTR sys_bpf /* 4355 */
.size sys32_call_table,.-sys32_call_table
dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
/* Tell bootmem about cma reserved memblock section */
for_each_memblock(reserved, reg)
- reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
+ if (reg->size != 0)
+ reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
}
static void __init resource_init(void)
entrylo0 = read_c0_entrylo0();
/* Unused entries have a virtual address of KSEG0. */
- if ((entryhi & 0xffffe000) != 0x80000000
+ if ((entryhi & 0xfffff000) != 0x80000000
&& (entryhi & 0xfc0) == asid) {
/*
* Only print entries in use
printk("va=%08lx asid=%08lx"
" [pa=%06lx n=%d d=%d v=%d g=%d]",
- (entryhi & 0xffffe000),
+ (entryhi & 0xfffff000),
entryhi & 0xfc0,
entrylo0 & PAGE_MASK,
(entrylo0 & (1 << 11)) ? 1 : 0,
.else
EX(lbe, t0, (v0), .Lfault\@)
.endif
- PTR_ADDIU v0, 1
+ .set noreorder
bnez t0, 1b
-1: PTR_SUBU v0, a0
+1: PTR_ADDIU v0, 1
+ .set reorder
+ PTR_SUBU v0, a0
jr ra
END(__strnlen_\func\()_asm)
if (insn.i_format.rs == bc_op) {
preempt_disable();
if (is_fpu_owner())
- asm volatile(
- ".set push\n"
- "\t.set mips1\n"
- "\tcfc1\t%0,$31\n"
- "\t.set pop" : "=r" (fcr31));
+ fcr31 = read_32bit_cp1_register(CP1_STATUS);
else
fcr31 = current->thread.fpu.fcr31;
preempt_enable();
msg.data = 0xc00 | msixvec;
ret = irq_set_msi_desc(xirq, desc);
- if (ret < 0) {
- destroy_irq(xirq);
+ if (ret < 0)
return ret;
- }
write_msi_msg(xirq, &msg);
return 0;
#include <asm/errno.h>
#include <asm-generic/uaccess-unaligned.h>
+#include <linux/bug.h>
+
#define VERIFY_READ 0
#define VERIFY_WRITE 1
* that put_user is the same as __put_user, etc.
*/
-extern int __get_kernel_bad(void);
-extern int __get_user_bad(void);
-extern int __put_kernel_bad(void);
-extern int __put_user_bad(void);
-
static inline long access_ok(int type, const void __user * addr,
unsigned long size)
{
#define get_user __get_user
#if !defined(CONFIG_64BIT)
-#define LDD_KERNEL(ptr) __get_kernel_bad();
-#define LDD_USER(ptr) __get_user_bad();
+#define LDD_KERNEL(ptr) BUILD_BUG()
+#define LDD_USER(ptr) BUILD_BUG()
#define STD_KERNEL(x, ptr) __put_kernel_asm64(x,ptr)
#define STD_USER(x, ptr) __put_user_asm64(x,ptr)
#define ASM_WORD_INSN ".word\t"
case 2: __get_kernel_asm("ldh",ptr); break; \
case 4: __get_kernel_asm("ldw",ptr); break; \
case 8: LDD_KERNEL(ptr); break; \
- default: __get_kernel_bad(); break; \
+ default: BUILD_BUG(); break; \
} \
} \
else { \
case 2: __get_user_asm("ldh",ptr); break; \
case 4: __get_user_asm("ldw",ptr); break; \
case 8: LDD_USER(ptr); break; \
- default: __get_user_bad(); break; \
+ default: BUILD_BUG(); break; \
} \
} \
\
case 2: __put_kernel_asm("sth",__x,ptr); break; \
case 4: __put_kernel_asm("stw",__x,ptr); break; \
case 8: STD_KERNEL(__x,ptr); break; \
- default: __put_kernel_bad(); break; \
+ default: BUILD_BUG(); break; \
} \
} \
else { \
case 2: __put_user_asm("sth",__x,ptr); break; \
case 4: __put_user_asm("stw",__x,ptr); break; \
case 8: STD_USER(__x,ptr); break; \
- default: __put_user_bad(); break; \
+ default: BUILD_BUG(); break; \
} \
} \
\
#ifndef __ASM_PARISC_BITSPERLONG_H
#define __ASM_PARISC_BITSPERLONG_H
-/*
- * using CONFIG_* outside of __KERNEL__ is wrong,
- * __LP64__ was also removed from headers, so what
- * is the right approach on parisc?
- * -arnd
- */
-#if (defined(__KERNEL__) && defined(CONFIG_64BIT)) || defined (__LP64__)
+#if defined(__LP64__)
#define __BITS_PER_LONG 64
#define SHIFT_PER_LONG 6
#else
#ifndef _PARISC_MSGBUF_H
#define _PARISC_MSGBUF_H
+#include <asm/bitsperlong.h>
+
/*
* The msqid64_ds structure for parisc architecture, copied from sparc.
* Note extra padding because this structure is passed back and forth
struct msqid64_ds {
struct ipc64_perm msg_perm;
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad1;
#endif
__kernel_time_t msg_stime; /* last msgsnd time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad2;
#endif
__kernel_time_t msg_rtime; /* last msgrcv time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad3;
#endif
__kernel_time_t msg_ctime; /* last change time */
#ifndef _PARISC_SEMBUF_H
#define _PARISC_SEMBUF_H
+#include <asm/bitsperlong.h>
+
/*
* The semid64_ds structure for parisc architecture.
* Note extra padding because this structure is passed back and forth
struct semid64_ds {
struct ipc64_perm sem_perm; /* permissions .. see ipc.h */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad1;
#endif
__kernel_time_t sem_otime; /* last semop time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad2;
#endif
__kernel_time_t sem_ctime; /* last change time */
#ifndef _PARISC_SHMBUF_H
#define _PARISC_SHMBUF_H
+#include <asm/bitsperlong.h>
+
/*
* The shmid64_ds structure for parisc architecture.
* Note extra padding because this structure is passed back and forth
struct shmid64_ds {
struct ipc64_perm shm_perm; /* operation perms */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad1;
#endif
__kernel_time_t shm_atime; /* last attach time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad2;
#endif
__kernel_time_t shm_dtime; /* last detach time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad3;
#endif
__kernel_time_t shm_ctime; /* last change time */
-#ifndef CONFIG_64BIT
+#if __BITS_PER_LONG != 64
unsigned int __pad4;
#endif
size_t shm_segsz; /* size of segment (bytes) */
unsigned int __unused2;
};
-#ifdef CONFIG_64BIT
-/* The 'unsigned int' (formerly 'unsigned long') data types below will
- * ensure that a 32-bit app calling shmctl(*,IPC_INFO,*) will work on
- * a wide kernel, but if some of these values are meant to contain pointers
- * they may need to be 'long long' instead. -PB XXX FIXME
- */
-#endif
struct shminfo64 {
- unsigned int shmmax;
- unsigned int shmmin;
- unsigned int shmmni;
- unsigned int shmseg;
- unsigned int shmall;
- unsigned int __unused1;
- unsigned int __unused2;
- unsigned int __unused3;
- unsigned int __unused4;
+ unsigned long shmmax;
+ unsigned long shmmin;
+ unsigned long shmmni;
+ unsigned long shmseg;
+ unsigned long shmall;
+ unsigned long __unused1;
+ unsigned long __unused2;
+ unsigned long __unused3;
+ unsigned long __unused4;
};
#endif /* _PARISC_SHMBUF_H */
struct siginfo;
/* Type of a signal handler. */
-#ifdef CONFIG_64BIT
+#if defined(__LP64__)
/* function pointers on 64-bit parisc are pointers to little structs and the
* compiler doesn't support code which changes or tests the address of
* the function in the little struct. This is really ugly -PB
#define __NR_seccomp (__NR_Linux + 338)
#define __NR_getrandom (__NR_Linux + 339)
#define __NR_memfd_create (__NR_Linux + 340)
+#define __NR_bpf (__NR_Linux + 341)
-#define __NR_Linux_syscalls (__NR_memfd_create + 1)
+#define __NR_Linux_syscalls (__NR_bpf + 1)
#define __IGNORE_select /* newselect */
ENTRY_COMP(msgsnd)
ENTRY_COMP(msgrcv)
ENTRY_SAME(msgget) /* 190 */
- ENTRY_SAME(msgctl)
- ENTRY_SAME(shmat)
+ ENTRY_COMP(msgctl)
+ ENTRY_COMP(shmat)
ENTRY_SAME(shmdt)
ENTRY_SAME(shmget)
- ENTRY_SAME(shmctl) /* 195 */
+ ENTRY_COMP(shmctl) /* 195 */
ENTRY_SAME(ni_syscall) /* streams1 */
ENTRY_SAME(ni_syscall) /* streams2 */
ENTRY_SAME(lstat64)
ENTRY_SAME(epoll_ctl) /* 225 */
ENTRY_SAME(epoll_wait)
ENTRY_SAME(remap_file_pages)
- ENTRY_SAME(semtimedop)
+ ENTRY_COMP(semtimedop)
ENTRY_COMP(mq_open)
ENTRY_SAME(mq_unlink) /* 230 */
ENTRY_COMP(mq_timedsend)
ENTRY_SAME(seccomp)
ENTRY_SAME(getrandom)
ENTRY_SAME(memfd_create) /* 340 */
+ ENTRY_SAME(bpf)
/* Nothing yet */
int atomic_add_return(int, atomic_t *);
int atomic_cmpxchg(atomic_t *, int, int);
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+int atomic_xchg(atomic_t *, int);
int __atomic_add_unless(atomic_t *, int, int);
void atomic_set(atomic_t *, int);
#ifndef __ARCH_SPARC_CMPXCHG__
#define __ARCH_SPARC_CMPXCHG__
-static inline unsigned long xchg_u32(__volatile__ unsigned long *m, unsigned long val)
-{
- __asm__ __volatile__("swap [%2], %0"
- : "=&r" (val)
- : "0" (val), "r" (m)
- : "memory");
- return val;
-}
-
+unsigned long __xchg_u32(volatile u32 *m, u32 new);
void __xchg_called_with_bad_pointer(void);
static inline unsigned long __xchg(unsigned long x, __volatile__ void * ptr, int size)
{
switch (size) {
case 4:
- return xchg_u32(ptr, x);
+ return __xchg_u32(ptr, x);
}
__xchg_called_with_bad_pointer();
return x;
{
__u16 ret;
- __asm__ __volatile__ ("lduha [%1] %2, %0"
+ __asm__ __volatile__ ("lduha [%2] %3, %0"
: "=r" (ret)
- : "r" (addr), "i" (ASI_PL));
+ : "m" (*addr), "r" (addr), "i" (ASI_PL));
return ret;
}
#define __arch_swab16p __arch_swab16p
{
__u32 ret;
- __asm__ __volatile__ ("lduwa [%1] %2, %0"
+ __asm__ __volatile__ ("lduwa [%2] %3, %0"
: "=r" (ret)
- : "r" (addr), "i" (ASI_PL));
+ : "m" (*addr), "r" (addr), "i" (ASI_PL));
return ret;
}
#define __arch_swab32p __arch_swab32p
{
__u64 ret;
- __asm__ __volatile__ ("ldxa [%1] %2, %0"
+ __asm__ __volatile__ ("ldxa [%2] %3, %0"
: "=r" (ret)
- : "r" (addr), "i" (ASI_PL));
+ : "m" (*addr), "r" (addr), "i" (ASI_PL));
return ret;
}
#define __arch_swab64p __arch_swab64p
{
unsigned long csr_reg, csr, csr_error_bits;
irqreturn_t ret = IRQ_NONE;
- u16 stat;
+ u32 stat;
csr_reg = pbm->pbm_regs + SCHIZO_PCI_CTRL;
csr = upa_readq(csr_reg);
pbm->name);
ret = IRQ_HANDLED;
}
- pci_read_config_word(pbm->pci_bus->self, PCI_STATUS, &stat);
+ pbm->pci_ops->read(pbm->pci_bus, 0, PCI_STATUS, 2, &stat);
if (stat & (PCI_STATUS_PARITY |
PCI_STATUS_SIG_TARGET_ABORT |
PCI_STATUS_REC_TARGET_ABORT |
PCI_STATUS_SIG_SYSTEM_ERROR)) {
printk("%s: PCI bus error, PCI_STATUS[%04x]\n",
pbm->name, stat);
- pci_write_config_word(pbm->pci_bus->self, PCI_STATUS, 0xffff);
+ pbm->pci_ops->write(pbm->pci_bus, 0, PCI_STATUS, 2, 0xffff);
ret = IRQ_HANDLED;
}
return ret;
void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs)
{
clear_softint(1 << irq);
+ irq_enter();
generic_smp_call_function_interrupt();
+ irq_exit();
}
void __irq_entry smp_call_function_single_client(int irq, struct pt_regs *regs)
{
clear_softint(1 << irq);
+ irq_enter();
generic_smp_call_function_single_interrupt();
+ irq_exit();
}
static void tsb_sync(void *info)
#undef ATOMIC_OP
+int atomic_xchg(atomic_t *v, int new)
+{
+ int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(ATOMIC_HASH(v), flags);
+ ret = v->counter;
+ v->counter = new;
+ spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
+ return ret;
+}
+EXPORT_SYMBOL(atomic_xchg);
+
int atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
return (unsigned long)prev;
}
EXPORT_SYMBOL(__cmpxchg_u32);
+
+unsigned long __xchg_u32(volatile u32 *ptr, u32 new)
+{
+ unsigned long flags;
+ u32 prev;
+
+ spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
+ prev = *ptr;
+ *ptr = new;
+ spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
+
+ return (unsigned long)prev;
+}
+EXPORT_SYMBOL(__xchg_u32);
suffix-$(CONFIG_KERNEL_LZO) := lzo
suffix-$(CONFIG_KERNEL_LZ4) := lz4
+RUN_SIZE = $(shell objdump -h vmlinux | \
+ perl $(srctree)/arch/x86/tools/calc_run_size.pl)
quiet_cmd_mkpiggy = MKPIGGY $@
- cmd_mkpiggy = $(obj)/mkpiggy $< > $@ || ( rm -f $@ ; false )
+ cmd_mkpiggy = $(obj)/mkpiggy $< $(RUN_SIZE) > $@ || ( rm -f $@ ; false )
targets += piggy.S
$(obj)/piggy.S: $(obj)/vmlinux.bin.$(suffix-y) $(obj)/mkpiggy FORCE
* Do the decompression, and jump to the new kernel..
*/
/* push arguments for decompress_kernel: */
- pushl $z_output_len /* decompressed length */
+ pushl $z_run_size /* size of kernel with .bss and .brk */
+ pushl $z_output_len /* decompressed length, end of relocs */
leal z_extract_offset_negative(%ebx), %ebp
pushl %ebp /* output address */
pushl $z_input_len /* input_len */
pushl %eax /* heap area */
pushl %esi /* real mode pointer */
call decompress_kernel /* returns kernel location in %eax */
- addl $24, %esp
+ addl $28, %esp
/*
* Jump to the decompressed kernel.
* Do the decompression, and jump to the new kernel..
*/
pushq %rsi /* Save the real mode argument */
+ movq $z_run_size, %r9 /* size of kernel with .bss and .brk */
+ pushq %r9
movq %rsi, %rdi /* real mode address */
leaq boot_heap(%rip), %rsi /* malloc area for uncompression */
leaq input_data(%rip), %rdx /* input_data */
movl $z_input_len, %ecx /* input_len */
movq %rbp, %r8 /* output target address */
- movq $z_output_len, %r9 /* decompressed length */
+ movq $z_output_len, %r9 /* decompressed length, end of relocs */
call decompress_kernel /* returns kernel location in %rax */
+ popq %r9
popq %rsi
/*
unsigned char *input_data,
unsigned long input_len,
unsigned char *output,
- unsigned long output_len)
+ unsigned long output_len,
+ unsigned long run_size)
{
real_mode = rmode;
free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
- output = choose_kernel_location(input_data, input_len,
- output, output_len);
+ /*
+ * The memory hole needed for the kernel is the larger of either
+ * the entire decompressed kernel plus relocation table, or the
+ * entire decompressed kernel plus .bss and .brk sections.
+ */
+ output = choose_kernel_location(input_data, input_len, output,
+ output_len > run_size ? output_len
+ : run_size);
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
uint32_t olen;
long ilen;
unsigned long offs;
+ unsigned long run_size;
FILE *f = NULL;
int retval = 1;
- if (argc < 2) {
- fprintf(stderr, "Usage: %s compressed_file\n", argv[0]);
+ if (argc < 3) {
+ fprintf(stderr, "Usage: %s compressed_file run_size\n",
+ argv[0]);
goto bail;
}
offs += olen >> 12; /* Add 8 bytes for each 32K block */
offs += 64*1024 + 128; /* Add 64K + 128 bytes slack */
offs = (offs+4095) & ~4095; /* Round to a 4K boundary */
+ run_size = atoi(argv[2]);
printf(".section \".rodata..compressed\",\"a\",@progbits\n");
printf(".globl z_input_len\n");
/* z_extract_offset_negative allows simplification of head_32.S */
printf(".globl z_extract_offset_negative\n");
printf("z_extract_offset_negative = -0x%lx\n", offs);
+ printf(".globl z_run_size\n");
+ printf("z_run_size = %lu\n", run_size);
printf(".globl input_data, input_data_end\n");
printf("input_data:\n");
}
void cpu_disable_common(void);
+void cpu_die_common(unsigned int cpu);
void native_smp_prepare_boot_cpu(void);
void native_smp_prepare_cpus(unsigned int max_cpus);
void native_smp_cpus_done(unsigned int max_cpus);
* load_microcode_amd() to save equivalent cpu table and microcode patches in
* kernel heap memory.
*/
-static void apply_ucode_in_initrd(void *ucode, size_t size)
+static void apply_ucode_in_initrd(void *ucode, size_t size, bool save_patch)
{
struct equiv_cpu_entry *eq;
size_t *cont_sz;
u32 *header;
u8 *data, **cont;
+ u8 (*patch)[PATCH_MAX_SIZE];
u16 eq_id = 0;
int offset, left;
u32 rev, eax, ebx, ecx, edx;
new_rev = (u32 *)__pa_nodebug(&ucode_new_rev);
cont_sz = (size_t *)__pa_nodebug(&container_size);
cont = (u8 **)__pa_nodebug(&container);
+ patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
#else
new_rev = &ucode_new_rev;
cont_sz = &container_size;
cont = &container;
+ patch = &amd_ucode_patch;
#endif
data = ucode;
rev = mc->hdr.patch_id;
*new_rev = rev;
- /* save ucode patch */
- memcpy(amd_ucode_patch, mc,
- min_t(u32, header[1], PATCH_MAX_SIZE));
+ if (save_patch)
+ memcpy(patch, mc,
+ min_t(u32, header[1], PATCH_MAX_SIZE));
}
}
*data = cp.data;
*size = cp.size;
- apply_ucode_in_initrd(cp.data, cp.size);
+ apply_ucode_in_initrd(cp.data, cp.size, true);
}
#ifdef CONFIG_X86_32
size_t *usize;
void **ucode;
- mc = (struct microcode_amd *)__pa(amd_ucode_patch);
+ mc = (struct microcode_amd *)__pa_nodebug(amd_ucode_patch);
if (mc->hdr.patch_id && mc->hdr.processor_rev_id) {
__apply_microcode_amd(mc);
return;
if (!*ucode || !*usize)
return;
- apply_ucode_in_initrd(*ucode, *usize);
+ apply_ucode_in_initrd(*ucode, *usize, false);
}
static void __init collect_cpu_sig_on_bsp(void *arg)
* AP has a different equivalence ID than BSP, looks like
* mixed-steppings silicon so go through the ucode blob anew.
*/
- apply_ucode_in_initrd(ucode_cpio.data, ucode_cpio.size);
+ apply_ucode_in_initrd(ucode_cpio.data, ucode_cpio.size, false);
}
}
#endif
int __init save_microcode_in_initrd_amd(void)
{
unsigned long cont;
+ int retval = 0;
enum ucode_state ret;
+ u8 *cont_va;
u32 eax;
if (!container)
#ifdef CONFIG_X86_32
get_bsp_sig();
- cont = (unsigned long)container;
+ cont = (unsigned long)container;
+ cont_va = __va(container);
#else
/*
* We need the physical address of the container for both bitness since
* boot_params.hdr.ramdisk_image is a physical address.
*/
- cont = __pa(container);
+ cont = __pa(container);
+ cont_va = container;
#endif
/*
if (relocated_ramdisk)
container = (u8 *)(__va(relocated_ramdisk) +
(cont - boot_params.hdr.ramdisk_image));
+ else
+ container = cont_va;
if (ucode_new_rev)
pr_info("microcode: updated early to new patch_level=0x%08x\n",
ret = load_microcode_amd(eax, container, container_size);
if (ret != UCODE_OK)
- return -EINVAL;
+ retval = -EINVAL;
/*
* This will be freed any msec now, stash patches for the current
container = NULL;
container_size = 0;
- return 0;
+ return retval;
}
static bool check_loader_disabled_ap(void)
{
#ifdef CONFIG_X86_32
- return __pa_nodebug(dis_ucode_ldr);
+ return *((bool *)__pa_nodebug(&dis_ucode_ldr));
#else
return dis_ucode_ldr;
#endif
numa_remove_cpu(cpu);
}
+static DEFINE_PER_CPU(struct completion, die_complete);
+
void cpu_disable_common(void)
{
int cpu = smp_processor_id();
+ init_completion(&per_cpu(die_complete, smp_processor_id()));
+
remove_siblinginfo(cpu);
/* It's now safe to remove this processor from the online map */
fixup_irqs();
}
-static DEFINE_PER_CPU(struct completion, die_complete);
-
int native_cpu_disable(void)
{
int ret;
return ret;
clear_local_APIC();
- init_completion(&per_cpu(die_complete, smp_processor_id()));
cpu_disable_common();
return 0;
}
+void cpu_die_common(unsigned int cpu)
+{
+ wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);
+}
+
void native_cpu_die(unsigned int cpu)
{
/* We don't do anything here: idle task is faking death itself. */
- wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);
+
+ cpu_die_common(cpu);
/* They ack this in play_dead() by setting CPU_DEAD */
if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
fetch_register_operand(op);
break;
case OpCL:
+ op->type = OP_IMM;
op->bytes = 1;
op->val = reg_read(ctxt, VCPU_REGS_RCX) & 0xff;
break;
rc = decode_imm(ctxt, op, 1, true);
break;
case OpOne:
+ op->type = OP_IMM;
op->bytes = 1;
op->val = 1;
break;
ctxt->memop.bytes = ctxt->op_bytes + 2;
goto mem_common;
case OpES:
+ op->type = OP_IMM;
op->val = VCPU_SREG_ES;
break;
case OpCS:
+ op->type = OP_IMM;
op->val = VCPU_SREG_CS;
break;
case OpSS:
+ op->type = OP_IMM;
op->val = VCPU_SREG_SS;
break;
case OpDS:
+ op->type = OP_IMM;
op->val = VCPU_SREG_DS;
break;
case OpFS:
+ op->type = OP_IMM;
op->val = VCPU_SREG_FS;
break;
case OpGS:
+ op->type = OP_IMM;
op->val = VCPU_SREG_GS;
break;
case OpImplicit:
while (((unsigned long)src & 6) && len >= 2) {
__u16 val16;
- *errp = __get_user(val16, (const __u16 __user *)src);
- if (*errp)
- return isum;
+ if (__get_user(val16, (const __u16 __user *)src))
+ goto out_err;
*(__u16 *)dst = val16;
isum = (__force __wsum)add32_with_carry(
--- /dev/null
+#!/usr/bin/perl
+#
+# Calculate the amount of space needed to run the kernel, including room for
+# the .bss and .brk sections.
+#
+# Usage:
+# objdump -h a.out | perl calc_run_size.pl
+use strict;
+
+my $mem_size = 0;
+my $file_offset = 0;
+
+my $sections=" *[0-9]+ \.(?:bss|brk) +";
+while (<>) {
+ if (/^$sections([0-9a-f]+) +(?:[0-9a-f]+ +){2}([0-9a-f]+)/) {
+ my $size = hex($1);
+ my $offset = hex($2);
+ $mem_size += $size;
+ if ($file_offset == 0) {
+ $file_offset = $offset;
+ } elsif ($file_offset != $offset) {
+ die ".bss and .brk lack common file offset\n";
+ }
+ }
+}
+
+if ($file_offset == 0) {
+ die "Never found .bss or .brk file offset\n";
+}
+printf("%d\n", $mem_size + $file_offset);
current->state = TASK_UNINTERRUPTIBLE;
schedule_timeout(HZ/10);
}
+
+ cpu_die_common(cpu);
+
xen_smp_intr_free(cpu);
xen_uninit_lock_cpu(cpu);
xen_teardown_timer(cpu);
config XTENSA_PLATFORM_XTFPGA
bool "XTFPGA"
+ select ETHOC if ETHERNET
select SERIAL_CONSOLE
- select ETHOC
select XTENSA_CALIBRATE_CCOUNT
help
XTFPGA is the name of Tensilica board family (LX60, LX110, LX200, ML605).
config BLK_DEV_SIMDISK
tristate "Host file-based simulated block device support"
default n
- depends on XTENSA_PLATFORM_ISS
+ depends on XTENSA_PLATFORM_ISS && BLOCK
help
Create block devices that map to files in the host file system.
Device binding to host file may be changed at runtime via proc
--- /dev/null
+/dts-v1/;
+/include/ "xtfpga.dtsi"
+/include/ "xtfpga-flash-16m.dtsi"
+
+/ {
+ compatible = "cdns,xtensa-lx200";
+ memory@0 {
+ device_type = "memory";
+ reg = <0x00000000 0x06000000>;
+ };
+ pic: pic {
+ compatible = "cdns,xtensa-mx";
+ #interrupt-cells = <2>;
+ interrupt-controller;
+ };
+};
--- /dev/null
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_MEMCG=y
+CONFIG_NAMESPACES=y
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_EXPERT=y
+CONFIG_SYSCTL_SYSCALL=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
+CONFIG_XTENSA_VARIANT_DC233C=y
+CONFIG_XTENSA_UNALIGNED_USER=y
+CONFIG_PREEMPT=y
+CONFIG_HIGHMEM=y
+# CONFIG_PCI is not set
+CONFIG_XTENSA_PLATFORM_XTFPGA=y
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_USE_OF=y
+CONFIG_BUILTIN_DTB="kc705"
+# CONFIG_COMPACTION is not set
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_STANDALONE is not set
+CONFIG_MTD=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_STAA=y
+CONFIG_MTD_PHYSMAP_OF=y
+CONFIG_MTD_UBI=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_NETDEVICES=y
+# CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_BROADCOM is not set
+# CONFIG_NET_VENDOR_INTEL is not set
+# CONFIG_NET_VENDOR_MARVELL is not set
+# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_SAMSUNG is not set
+# CONFIG_NET_VENDOR_SEEQ is not set
+# CONFIG_NET_VENDOR_SMSC is not set
+# CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_VIA is not set
+# CONFIG_NET_VENDOR_WIZNET is not set
+CONFIG_MARVELL_PHY=y
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_HW_RANDOM=y
+# CONFIG_HWMON is not set
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_SOFT_WATCHDOG=y
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
+CONFIG_FANOTIFY=y
+CONFIG_VFAT_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_UBIFS_FS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V4=y
+CONFIG_NFS_SWAP=y
+CONFIG_ROOT_NFS=y
+CONFIG_SUNRPC_DEBUG=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_LOCKUP_DETECTOR=y
+# CONFIG_SCHED_DEBUG is not set
+CONFIG_SCHEDSTATS=y
+CONFIG_TIMER_STATS=y
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_STACKTRACE=y
+CONFIG_RCU_TRACE=y
+# CONFIG_FTRACE is not set
+CONFIG_LD_NO_RELAX=y
+# CONFIG_S32C1I_SELFTEST is not set
+CONFIG_CRYPTO_ANSI_CPRNG=y
--- /dev/null
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_MEMCG=y
+CONFIG_NAMESPACES=y
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_EXPERT=y
+CONFIG_SYSCTL_SYSCALL=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
+CONFIG_XTENSA_VARIANT_CUSTOM=y
+CONFIG_XTENSA_VARIANT_CUSTOM_NAME="test_mmuhifi_c3"
+CONFIG_XTENSA_UNALIGNED_USER=y
+CONFIG_PREEMPT=y
+CONFIG_HAVE_SMP=y
+CONFIG_SMP=y
+CONFIG_HOTPLUG_CPU=y
+# CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is not set
+# CONFIG_PCI is not set
+CONFIG_XTENSA_PLATFORM_XTFPGA=y
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
+CONFIG_USE_OF=y
+CONFIG_BUILTIN_DTB="lx200mx"
+# CONFIG_COMPACTION is not set
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_STANDALONE is not set
+CONFIG_MTD=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_STAA=y
+CONFIG_MTD_PHYSMAP_OF=y
+CONFIG_MTD_UBI=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_NETDEVICES=y
+# CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_BROADCOM is not set
+# CONFIG_NET_VENDOR_INTEL is not set
+# CONFIG_NET_VENDOR_MARVELL is not set
+# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_SAMSUNG is not set
+# CONFIG_NET_VENDOR_SEEQ is not set
+# CONFIG_NET_VENDOR_SMSC is not set
+# CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_VIA is not set
+# CONFIG_NET_VENDOR_WIZNET is not set
+CONFIG_MARVELL_PHY=y
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_HW_RANDOM=y
+# CONFIG_HWMON is not set
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_SOFT_WATCHDOG=y
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
+CONFIG_FANOTIFY=y
+CONFIG_VFAT_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_UBIFS_FS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V4=y
+CONFIG_NFS_SWAP=y
+CONFIG_ROOT_NFS=y
+CONFIG_SUNRPC_DEBUG=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_VM=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SCHEDSTATS=y
+CONFIG_TIMER_STATS=y
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_STACKTRACE=y
+CONFIG_RCU_TRACE=y
+# CONFIG_FTRACE is not set
+CONFIG_LD_NO_RELAX=y
+# CONFIG_S32C1I_SELFTEST is not set
+CONFIG_CRYPTO_ANSI_CPRNG=y
static inline pte_t pte_mkspecial(pte_t pte)
{ return pte; }
+#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CA_MASK))
+
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
#define __NR_pivot_root 175
__SYSCALL(175, sys_pivot_root, 2)
#define __NR_umount 176
-__SYSCALL(176, sys_umount, 2)
+__SYSCALL(176, sys_oldumount, 1)
+#define __ARCH_WANT_SYS_OLDUMOUNT
#define __NR_swapoff 177
__SYSCALL(177, sys_swapoff, 1)
#define __NR_sync 178
#define __NR_renameat2 336
__SYSCALL(336, sys_renameat2, 5)
-#define __NR_syscall_count 337
+#define __NR_seccomp 337
+__SYSCALL(337, sys_seccomp, 3)
+#define __NR_getrandom 338
+__SYSCALL(338, sys_getrandom, 3)
+#define __NR_memfd_create 339
+__SYSCALL(339, sys_memfd_create, 2)
+
+#define __NR_syscall_count 340
/*
* sysxtensa syscall handler
void blk_recount_segments(struct request_queue *q, struct bio *bio)
{
- bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
- &q->queue_flags);
- bool merge_not_need = bio->bi_vcnt < queue_max_segments(q);
+ unsigned short seg_cnt;
+
+ /* estimate segment number by bi_vcnt for non-cloned bio */
+ if (bio_flagged(bio, BIO_CLONED))
+ seg_cnt = bio_segments(bio);
+ else
+ seg_cnt = bio->bi_vcnt;
- if (no_sg_merge && !bio_flagged(bio, BIO_CLONED) &&
- merge_not_need)
- bio->bi_phys_segments = bio->bi_vcnt;
+ if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
+ (seg_cnt < queue_max_segments(q)))
+ bio->bi_phys_segments = seg_cnt;
else {
struct bio *nxt = bio->bi_next;
bio->bi_next = NULL;
- bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio,
- no_sg_merge && merge_not_need);
+ bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
bio->bi_next = nxt;
}
wake_up_all(&q->mq_freeze_wq);
}
-/*
- * Guarantee no request is in use, so we can change any data structure of
- * the queue afterward.
- */
-void blk_mq_freeze_queue(struct request_queue *q)
+static void blk_mq_freeze_queue_start(struct request_queue *q)
{
bool freeze;
percpu_ref_kill(&q->mq_usage_counter);
blk_mq_run_queues(q, false);
}
+}
+
+static void blk_mq_freeze_queue_wait(struct request_queue *q)
+{
wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->mq_usage_counter));
}
+/*
+ * Guarantee no request is in use, so we can change any data structure of
+ * the queue afterward.
+ */
+void blk_mq_freeze_queue(struct request_queue *q)
+{
+ blk_mq_freeze_queue_start(q);
+ blk_mq_freeze_queue_wait(q);
+}
+
static void blk_mq_unfreeze_queue(struct request_queue *q)
{
bool wake;
/* Basically redo blk_mq_init_queue with queue frozen */
static void blk_mq_queue_reinit(struct request_queue *q)
{
- blk_mq_freeze_queue(q);
+ WARN_ON_ONCE(!q->mq_freeze_depth);
blk_mq_sysfs_unregister(q);
blk_mq_map_swqueue(q);
blk_mq_sysfs_register(q);
-
- blk_mq_unfreeze_queue(q);
}
static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
return NOTIFY_OK;
mutex_lock(&all_q_mutex);
+
+ /*
+	 * We need to freeze and reinit all existing queues.  Freezing
+	 * involves a synchronous wait for an RCU grace period, and doing
+	 * it one by one may take a long time.  Start freezing all queues
+	 * in one swoop and then wait for the completions so that freezing
+	 * can take place in parallel.
+ */
+ list_for_each_entry(q, &all_q_list, all_q_node)
+ blk_mq_freeze_queue_start(q);
+ list_for_each_entry(q, &all_q_list, all_q_node)
+ blk_mq_freeze_queue_wait(q);
+
list_for_each_entry(q, &all_q_list, all_q_node)
blk_mq_queue_reinit(q);
+
+ list_for_each_entry(q, &all_q_list, all_q_node)
+ blk_mq_unfreeze_queue(q);
+
mutex_unlock(&all_q_mutex);
return NOTIFY_OK;
}
int ioprio_best(unsigned short aprio, unsigned short bprio)
{
- unsigned short aclass = IOPRIO_PRIO_CLASS(aprio);
- unsigned short bclass = IOPRIO_PRIO_CLASS(bprio);
+ unsigned short aclass;
+ unsigned short bclass;
- if (aclass == IOPRIO_CLASS_NONE)
- aclass = IOPRIO_CLASS_BE;
- if (bclass == IOPRIO_CLASS_NONE)
- bclass = IOPRIO_CLASS_BE;
+ if (!ioprio_valid(aprio))
+ aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
+ if (!ioprio_valid(bprio))
+ bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
+ aclass = IOPRIO_PRIO_CLASS(aprio);
+ bclass = IOPRIO_PRIO_CLASS(bprio);
if (aclass == bclass)
return min(aprio, bprio);
if (aclass > bclass)
rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
- goto error;
+ goto error_free_buffer;
}
blk_rq_set_block_pc(rq);
}
error:
+ blk_put_request(rq);
+
+error_free_buffer:
kfree(buffer);
- if (rq)
- blk_put_request(rq);
+
return err;
}
EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"),
},
},
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Vostro 3546",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3546"),
+ },
+ },
/*
* BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
/* board IDs by feature in alphabetical order */
board_ahci,
board_ahci_ign_iferr,
+ board_ahci_nomsi,
board_ahci_noncq,
board_ahci_nosntf,
board_ahci_yes_fbs,
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_ops,
},
+ [board_ahci_nomsi] = {
+ AHCI_HFLAGS (AHCI_HFLAG_NO_MSI),
+ .flags = AHCI_FLAG_COMMON,
+ .pio_mask = ATA_PIO4,
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &ahci_ops,
+ },
[board_ahci_noncq] = {
AHCI_HFLAGS (AHCI_HFLAG_NO_NCQ),
.flags = AHCI_FLAG_COMMON,
{ PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
+	{ PCI_VDEVICE(INTEL, 0xa102), board_ahci }, /* Sunrise Point-H AHCI */
+ { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */
/*
- * Samsung SSDs found on some macbooks. NCQ times out.
- * https://bugzilla.kernel.org/show_bug.cgi?id=60731
+ * Samsung SSDs found on some macbooks. NCQ times out if MSI is
+ * enabled. https://bugzilla.kernel.org/show_bug.cgi?id=60731
*/
- { PCI_VDEVICE(SAMSUNG, 0x1600), board_ahci_noncq },
+ { PCI_VDEVICE(SAMSUNG, 0x1600), board_ahci_nomsi },
/* Enmotus */
{ PCI_DEVICE(0x1c44, 0x8000), board_ahci },
static void ahci_pci_save_initial_config(struct pci_dev *pdev,
struct ahci_host_priv *hpriv)
{
- unsigned int force_port_map = 0;
- unsigned int mask_port_map = 0;
-
if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
dev_info(&pdev->dev, "JMB361 has only one port\n");
- force_port_map = 1;
+ hpriv->force_port_map = 1;
}
/*
*/
if (hpriv->flags & AHCI_HFLAG_MV_PATA) {
if (pdev->device == 0x6121)
- mask_port_map = 0x3;
+ hpriv->mask_port_map = 0x3;
else
- mask_port_map = 0xf;
+ hpriv->mask_port_map = 0xf;
dev_info(&pdev->dev,
"Disabling your PATA port. Use the boot option 'ahci.marvell_enable=0' to avoid this.\n");
}
}
}
-static void ahci_update_intr_status(struct ata_port *ap)
+static void ahci_port_intr(struct ata_port *ap)
{
void __iomem *port_mmio = ahci_port_base(ap);
- struct ahci_port_priv *pp = ap->private_data;
u32 status;
status = readl(port_mmio + PORT_IRQ_STAT);
writel(status, port_mmio + PORT_IRQ_STAT);
- atomic_or(status, &pp->intr_status);
+ ahci_handle_port_interrupt(ap, port_mmio, status);
}
static irqreturn_t ahci_port_thread_fn(int irq, void *dev_instance)
return IRQ_HANDLED;
}
-irqreturn_t ahci_thread_fn(int irq, void *dev_instance)
-{
- struct ata_host *host = dev_instance;
- struct ahci_host_priv *hpriv = host->private_data;
- u32 irq_masked = hpriv->port_map;
- unsigned int i;
-
- for (i = 0; i < host->n_ports; i++) {
- struct ata_port *ap;
-
- if (!(irq_masked & (1 << i)))
- continue;
-
- ap = host->ports[i];
- if (ap) {
- ahci_port_thread_fn(irq, ap);
- VPRINTK("port %u\n", i);
- } else {
- VPRINTK("port %u (no irq)\n", i);
- if (ata_ratelimit())
- dev_warn(host->dev,
- "interrupt on disabled port %u\n", i);
- }
- }
-
- return IRQ_HANDLED;
-}
-
static irqreturn_t ahci_multi_irqs_intr(int irq, void *dev_instance)
{
struct ata_port *ap = dev_instance;
irq_masked = irq_stat & hpriv->port_map;
+ spin_lock(&host->lock);
+
for (i = 0; i < host->n_ports; i++) {
struct ata_port *ap;
ap = host->ports[i];
if (ap) {
- ahci_update_intr_status(ap);
+ ahci_port_intr(ap);
VPRINTK("port %u\n", i);
} else {
VPRINTK("port %u (no irq)\n", i);
*/
writel(irq_stat, mmio + HOST_IRQ_STAT);
+ spin_unlock(&host->lock);
+
VPRINTK("EXIT\n");
- return handled ? IRQ_WAKE_THREAD : IRQ_NONE;
+ return IRQ_RETVAL(handled);
}
unsigned int ahci_qc_issue(struct ata_queued_cmd *qc)
*/
pp->intr_mask = DEF_PORT_IRQ;
- spin_lock_init(&pp->lock);
- ap->lock = &pp->lock;
+ /*
+ * Switch to per-port locking in case each port has its own MSI vector.
+ */
+ if ((hpriv->flags & AHCI_HFLAG_MULTI_MSI)) {
+ spin_lock_init(&pp->lock);
+ ap->lock = &pp->lock;
+ }
ap->private_data = pp;
return rc;
}
-static int ahci_host_activate_single_irq(struct ata_host *host, int irq,
- struct scsi_host_template *sht)
-{
- int i, rc;
-
- rc = ata_host_start(host);
- if (rc)
- return rc;
-
- rc = devm_request_threaded_irq(host->dev, irq, ahci_single_irq_intr,
- ahci_thread_fn, IRQF_SHARED,
- dev_driver_string(host->dev), host);
- if (rc)
- return rc;
-
- for (i = 0; i < host->n_ports; i++)
- ata_port_desc(host->ports[i], "irq %d", irq);
-
- rc = ata_host_register(host, sht);
- if (rc)
- devm_free_irq(host->dev, irq, host);
-
- return rc;
-}
-
/**
* ahci_host_activate - start AHCI host, request IRQs and register it
* @host: target ATA host
if (hpriv->flags & AHCI_HFLAG_MULTI_MSI)
rc = ahci_host_activate_multi_irqs(host, irq, sht);
else
- rc = ahci_host_activate_single_irq(host, irq, sht);
+ rc = ata_host_activate(host, irq, ahci_single_irq_intr,
+ IRQF_SHARED, sht);
return rc;
}
EXPORT_SYMBOL_GPL(ahci_host_activate);
enum sata_rcar_type {
RCAR_GEN1_SATA,
RCAR_GEN2_SATA,
+ RCAR_R8A7790_ES1_SATA,
};
struct sata_rcar_priv {
ap->udma_mask = ATA_UDMA6;
ap->flags |= ATA_FLAG_SATA;
+ if (priv->type == RCAR_R8A7790_ES1_SATA)
+ ap->flags |= ATA_FLAG_NO_DIPM;
+
ioaddr->cmd_addr = base + SDATA_REG;
ioaddr->ctl_addr = base + SSDEVCON_REG;
ioaddr->scr_addr = base + SCRSSTS_REG;
sata_rcar_gen1_phy_init(priv);
break;
case RCAR_GEN2_SATA:
+ case RCAR_R8A7790_ES1_SATA:
sata_rcar_gen2_phy_init(priv);
break;
default:
.compatible = "renesas,sata-r8a7790",
.data = (void *)RCAR_GEN2_SATA
},
+ {
+ .compatible = "renesas,sata-r8a7790-es1",
+ .data = (void *)RCAR_R8A7790_ES1_SATA
+ },
{
.compatible = "renesas,sata-r8a7791",
.data = (void *)RCAR_GEN2_SATA
},
+ {
+ .compatible = "renesas,sata-r8a7793",
+ .data = (void *)RCAR_GEN2_SATA
+ },
{ },
};
MODULE_DEVICE_TABLE(of, sata_rcar_match);
{ "sata_rcar", RCAR_GEN1_SATA }, /* Deprecated by "sata-r8a7779" */
{ "sata-r8a7779", RCAR_GEN1_SATA },
{ "sata-r8a7790", RCAR_GEN2_SATA },
+ { "sata-r8a7790-es1", RCAR_R8A7790_ES1_SATA },
{ "sata-r8a7791", RCAR_GEN2_SATA },
+ { "sata-r8a7793", RCAR_GEN2_SATA },
{ },
};
MODULE_DEVICE_TABLE(platform, sata_rcar_id_table);
Drivers should "select" this option if they desire to use the
device coredump mechanism.
-config DISABLE_DEV_COREDUMP
- bool "Disable device coredump" if EXPERT
+config ALLOW_DEV_COREDUMP
+ bool "Allow device coredump" if EXPERT
+ default y
help
- Disable the device coredump mechanism despite drivers wanting to
- use it; this allows for more sensitive systems or systems that
- don't want to ever access the information to not have the code,
- nor keep any data.
+	  This option controls whether the device coredump mechanism is
+	  available; if disabled, the mechanism is omitted even if drivers
+	  that could use it are enabled.
+	  Say 'N' on more sensitive systems, or on systems that should never
+	  access this information, so that neither the code nor any data is
+	  kept around.
- If unsure, say N.
+ If unsure, say Y.
config DEV_COREDUMP
bool
default y if WANT_DEV_COREDUMP
- depends on !DISABLE_DEV_COREDUMP
+ depends on ALLOW_DEV_COREDUMP
config DEBUG_DRIVER
bool "Driver Core verbose debug messages"
return &dir->kobj;
}
+static DEFINE_MUTEX(gdp_mutex);
static struct kobject *get_device_parent(struct device *dev,
struct device *parent)
{
if (dev->class) {
- static DEFINE_MUTEX(gdp_mutex);
struct kobject *kobj = NULL;
struct kobject *parent_kobj;
struct kobject *k;
glue_dir->kset != &dev->class->p->glue_dirs)
return;
+ mutex_lock(&gdp_mutex);
kobject_put(glue_dir);
+ mutex_unlock(&gdp_mutex);
}
static void cleanup_device_parent(struct device *dev)
struct device *dev = pdd->dev;
int ret = 0;
- if (gpd_data->need_restore)
+ if (gpd_data->need_restore > 0)
return 0;
+ /*
+ * If the value of the need_restore flag is still unknown at this point,
+ * we trust that pm_genpd_poweroff() has verified that the device is
+ * already runtime PM suspended.
+ */
+ if (gpd_data->need_restore < 0) {
+ gpd_data->need_restore = 1;
+ return 0;
+ }
+
mutex_unlock(&genpd->lock);
genpd_start_dev(genpd, dev);
mutex_lock(&genpd->lock);
if (!ret)
- gpd_data->need_restore = true;
+ gpd_data->need_restore = 1;
return ret;
}
{
struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd);
struct device *dev = pdd->dev;
- bool need_restore = gpd_data->need_restore;
+ int need_restore = gpd_data->need_restore;
- gpd_data->need_restore = false;
+ gpd_data->need_restore = 0;
mutex_unlock(&genpd->lock);
genpd_start_dev(genpd, dev);
+
+ /*
+ * Call genpd_restore_dev() for recently added devices too (need_restore
+ * is negative then).
+ */
if (need_restore)
genpd_restore_dev(genpd, dev);
static int pm_genpd_runtime_suspend(struct device *dev)
{
struct generic_pm_domain *genpd;
+ struct generic_pm_domain_data *gpd_data;
bool (*stop_ok)(struct device *__dev);
int ret;
return 0;
mutex_lock(&genpd->lock);
+
+ /*
+ * If we have an unknown state of the need_restore flag, it means none
+ * of the runtime PM callbacks has been invoked yet. Let's update the
+ * flag to reflect that the current state is active.
+ */
+ gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
+ if (gpd_data->need_restore < 0)
+ gpd_data->need_restore = 0;
+
genpd->in_progress++;
pm_genpd_poweroff(genpd);
genpd->in_progress--;
spin_unlock_irq(&dev->power.lock);
if (genpd->attach_dev)
- genpd->attach_dev(dev);
+ genpd->attach_dev(genpd, dev);
mutex_lock(&gpd_data->lock);
gpd_data->base.dev = dev;
list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
- gpd_data->need_restore = genpd->status == GPD_STATE_POWER_OFF;
+ gpd_data->need_restore = -1;
gpd_data->td.constraint_changed = true;
gpd_data->td.effective_constraint_ns = -1;
mutex_unlock(&gpd_data->lock);
genpd->max_off_time_changed = true;
if (genpd->detach_dev)
- genpd->detach_dev(dev);
+ genpd->detach_dev(genpd, dev);
spin_lock_irq(&dev->power.lock);
psd = dev_to_psd(dev);
if (psd && psd->domain_data)
- to_gpd_data(psd->domain_data)->need_restore = val;
+ to_gpd_data(psd->domain_data)->need_restore = val ? 1 : 0;
spin_unlock_irqrestore(&dev->power.lock, flags);
}
}
if (page_zero_filled(uncmem)) {
- kunmap_atomic(user_mem);
+ if (user_mem)
+ kunmap_atomic(user_mem);
/* Free memory associated with this sector now. */
bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
zram_free_page(zram, index);
#include <asm/vio.h>
-static int pseries_rng_data_read(struct hwrng *rng, u32 *data)
+static int pseries_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
+ u64 buffer[PLPAR_HCALL_BUFSIZE];
+ size_t size = max < 8 ? max : 8;
int rc;
- rc = plpar_hcall(H_RANDOM, (unsigned long *)data);
+ rc = plpar_hcall(H_RANDOM, (unsigned long *)buffer);
if (rc != H_SUCCESS) {
pr_err_ratelimited("H_RANDOM call failed %d\n", rc);
return -EIO;
}
+ memcpy(data, buffer, size);
/* The hypervisor interface returns 64 bits */
- return 8;
+ return size;
}
/**
static struct hwrng pseries_rng = {
.name = KBUILD_MODNAME,
- .data_read = pseries_rng_data_read,
+ .read = pseries_rng_read,
};
static int __init pseries_rng_probe(struct vio_dev *dev,
spin_lock_init(&port->outvq_lock);
init_waitqueue_head(&port->waitqueue);
- virtio_device_ready(portdev->vdev);
-
/* Fill the in_vq with buffers so the host can send us data. */
nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock);
if (!nr_added_bufs) {
spin_lock_init(&portdev->ports_lock);
INIT_LIST_HEAD(&portdev->ports);
+ virtio_device_ready(portdev->vdev);
+
if (multiport) {
unsigned int nr_added_bufs;
if (ret == -EPROBE_DEFER)
dev_dbg(cpu_dev, "cpu%d clock not ready, retry\n", cpu);
else
- dev_err(cpu_dev, "failed to get cpu%d clock: %d\n", ret,
- cpu);
+ dev_err(cpu_dev, "failed to get cpu%d clock: %d\n", cpu,
+ ret);
} else {
*cdev = cpu_dev;
*creg = cpu_reg;
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
- policy->governor = NULL;
+ if (policy)
+ policy->governor = NULL;
return policy;
}
u32 *desc;
struct split_key_result result;
dma_addr_t dma_addr_in, dma_addr_out;
- int ret = 0;
+ int ret = -ENOMEM;
desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
if (!desc) {
dev_err(jrdev, "unable to allocate key input memory\n");
- return -ENOMEM;
+ return ret;
}
- init_job_desc(desc, 0);
-
dma_addr_in = dma_map_single(jrdev, (void *)key_in, keylen,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, dma_addr_in)) {
dev_err(jrdev, "unable to map key input memory\n");
- kfree(desc);
- return -ENOMEM;
+ goto out_free;
}
+
+ dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, dma_addr_out)) {
+ dev_err(jrdev, "unable to map key output memory\n");
+ goto out_unmap_in;
+ }
+
+ init_job_desc(desc, 0);
append_key(desc, dma_addr_in, keylen, CLASS_2 | KEY_DEST_CLASS_REG);
/* Sets MDHA up into an HMAC-INIT */
* FIFO_STORE with the explicit split-key content store
* (0x26 output type)
*/
- dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(jrdev, dma_addr_out)) {
- dev_err(jrdev, "unable to map key output memory\n");
- kfree(desc);
- return -ENOMEM;
- }
append_fifo_store(desc, dma_addr_out, split_key_len,
LDST_CLASS_2_CCB | FIFOST_TYPE_SPLIT_KEK);
dma_unmap_single(jrdev, dma_addr_out, split_key_pad_len,
DMA_FROM_DEVICE);
+out_unmap_in:
dma_unmap_single(jrdev, dma_addr_in, keylen, DMA_TO_DEVICE);
-
+out_free:
kfree(desc);
-
return ret;
}
EXPORT_SYMBOL(gen_split_key);
struct dentry *debugfs_dir;
struct list_head list;
struct module *owner;
- uint8_t accel_id;
- uint8_t numa_node;
struct adf_accel_pci accel_pci_dev;
+ uint8_t accel_id;
} __packed;
#endif
WRITE_CSR_RING_BASE(csr_addr, bank_num, i, 0);
ring = &bank->rings[i];
if (hw_data->tx_rings_mask & (1 << i)) {
- ring->inflights = kzalloc_node(sizeof(atomic_t),
- GFP_KERNEL,
- accel_dev->numa_node);
+ ring->inflights =
+ kzalloc_node(sizeof(atomic_t),
+ GFP_KERNEL,
+ dev_to_node(&GET_DEV(accel_dev)));
if (!ring->inflights)
goto err;
} else {
int i, ret;
etr_data = kzalloc_node(sizeof(*etr_data), GFP_KERNEL,
- accel_dev->numa_node);
+ dev_to_node(&GET_DEV(accel_dev)));
if (!etr_data)
return -ENOMEM;
num_banks = GET_MAX_BANKS(accel_dev);
size = num_banks * sizeof(struct adf_etr_bank_data);
- etr_data->banks = kzalloc_node(size, GFP_KERNEL, accel_dev->numa_node);
+ etr_data->banks = kzalloc_node(size, GFP_KERNEL,
+ dev_to_node(&GET_DEV(accel_dev)));
if (!etr_data->banks) {
ret = -ENOMEM;
goto err_bank;
if (unlikely(!n))
return -EINVAL;
- bufl = kmalloc_node(sz, GFP_ATOMIC, inst->accel_dev->numa_node);
+ bufl = kmalloc_node(sz, GFP_ATOMIC,
+ dev_to_node(&GET_DEV(inst->accel_dev)));
if (unlikely(!bufl))
return -ENOMEM;
goto err;
for_each_sg(assoc, sg, assoc_n, i) {
+ if (!sg->length)
+ continue;
bufl->bufers[bufs].addr = dma_map_single(dev,
sg_virt(sg),
sg->length,
struct qat_alg_buf *bufers;
buflout = kmalloc_node(sz, GFP_ATOMIC,
- inst->accel_dev->numa_node);
+ dev_to_node(&GET_DEV(inst->accel_dev)));
if (unlikely(!buflout))
goto err;
bloutp = dma_map_single(dev, buflout, sz, DMA_TO_DEVICE);
list_for_each(itr, adf_devmgr_get_head()) {
accel_dev = list_entry(itr, struct adf_accel_dev, list);
- if (accel_dev->numa_node == node && adf_dev_started(accel_dev))
+ if ((node == dev_to_node(&GET_DEV(accel_dev)) ||
+ dev_to_node(&GET_DEV(accel_dev)) < 0)
+ && adf_dev_started(accel_dev))
break;
accel_dev = NULL;
}
if (!accel_dev) {
- pr_err("QAT: Could not find device on give node\n");
+ pr_err("QAT: Could not find device on node %d\n", node);
accel_dev = adf_devmgr_get_first();
}
if (!accel_dev || !adf_dev_started(accel_dev))
for (i = 0; i < num_inst; i++) {
inst = kzalloc_node(sizeof(*inst), GFP_KERNEL,
- accel_dev->numa_node);
+ dev_to_node(&GET_DEV(accel_dev)));
if (!inst)
goto err;
uint64_t reg_val;
admin = kzalloc_node(sizeof(*accel_dev->admin), GFP_KERNEL,
- accel_dev->numa_node);
+ dev_to_node(&GET_DEV(accel_dev)));
if (!admin)
return -ENOMEM;
admin->virt_addr = dma_zalloc_coherent(&GET_DEV(accel_dev), PAGE_SIZE,
kfree(accel_dev);
}
-static uint8_t adf_get_dev_node_id(struct pci_dev *pdev)
-{
- unsigned int bus_per_cpu = 0;
- struct cpuinfo_x86 *c = &cpu_data(num_online_cpus() - 1);
-
- if (!c->phys_proc_id)
- return 0;
-
- bus_per_cpu = 256 / (c->phys_proc_id + 1);
-
- if (bus_per_cpu != 0)
- return pdev->bus->number / bus_per_cpu;
- return 0;
-}
-
static int qat_dev_start(struct adf_accel_dev *accel_dev)
{
int cpus = num_online_cpus();
void __iomem *pmisc_bar_addr = NULL;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
- uint8_t node;
int ret;
switch (ent->device) {
return -ENODEV;
}
- node = adf_get_dev_node_id(pdev);
- accel_dev = kzalloc_node(sizeof(*accel_dev), GFP_KERNEL, node);
+ if (num_possible_nodes() > 1 && dev_to_node(&pdev->dev) < 0) {
+ /* If the accelerator is connected to a node with no memory
+ * there is no point in using the accelerator since the remote
+ * memory transaction will be very slow. */
+ dev_err(&pdev->dev, "Invalid NUMA configuration.\n");
+ return -EINVAL;
+ }
+
+ accel_dev = kzalloc_node(sizeof(*accel_dev), GFP_KERNEL,
+ dev_to_node(&pdev->dev));
if (!accel_dev)
return -ENOMEM;
- accel_dev->numa_node = node;
INIT_LIST_HEAD(&accel_dev->crypto_list);
/* Add accel device to accel table.
accel_dev->owner = THIS_MODULE;
/* Allocate and configure device configuration structure */
- hw_data = kzalloc_node(sizeof(*hw_data), GFP_KERNEL, node);
+ hw_data = kzalloc_node(sizeof(*hw_data), GFP_KERNEL,
+ dev_to_node(&pdev->dev));
if (!hw_data) {
ret = -ENOMEM;
goto out_err;
uint32_t msix_num_entries = hw_data->num_banks + 1;
entries = kzalloc_node(msix_num_entries * sizeof(*entries),
- GFP_KERNEL, accel_dev->numa_node);
+ GFP_KERNEL, dev_to_node(&GET_DEV(accel_dev)));
if (!entries)
return -ENOMEM;
}
EXPORT_SYMBOL(edma_filter_fn);
-static struct platform_device *pdev0, *pdev1;
-
-static const struct platform_device_info edma_dev_info0 = {
- .name = "edma-dma-engine",
- .id = 0,
- .dma_mask = DMA_BIT_MASK(32),
-};
-
-static const struct platform_device_info edma_dev_info1 = {
- .name = "edma-dma-engine",
- .id = 1,
- .dma_mask = DMA_BIT_MASK(32),
-};
-
static int edma_init(void)
{
- int ret = platform_driver_register(&edma_driver);
-
- if (ret == 0) {
- pdev0 = platform_device_register_full(&edma_dev_info0);
- if (IS_ERR(pdev0)) {
- platform_driver_unregister(&edma_driver);
- ret = PTR_ERR(pdev0);
- goto out;
- }
- }
-
- if (!of_have_populated_dt() && EDMA_CTLRS == 2) {
- pdev1 = platform_device_register_full(&edma_dev_info1);
- if (IS_ERR(pdev1)) {
- platform_driver_unregister(&edma_driver);
- platform_device_unregister(pdev0);
- ret = PTR_ERR(pdev1);
- }
- }
-
-out:
- return ret;
+ return platform_driver_register(&edma_driver);
}
subsys_initcall(edma_init);
static void __exit edma_exit(void)
{
- platform_device_unregister(pdev0);
- if (pdev1)
- platform_device_unregister(pdev1);
platform_driver_unregister(&edma_driver);
}
module_exit(edma_exit);
_IOC_SIZE(cmd) > sizeof(buffer))
return -ENOTTY;
- if (_IOC_DIR(cmd) == _IOC_READ)
- memset(&buffer, 0, _IOC_SIZE(cmd));
+ memset(&buffer, 0, sizeof(buffer));
if (_IOC_DIR(cmd) & _IOC_WRITE)
if (copy_from_user(&buffer, arg, _IOC_SIZE(cmd)))
goto out_regs;
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- ret = i915_kick_out_vgacon(dev_priv);
+ /* WARNING: Apparently we must kick fbdev drivers before vgacon,
+ * otherwise the vga fbdev driver falls over. */
+ ret = i915_kick_out_firmware_fb(dev_priv);
if (ret) {
- DRM_ERROR("failed to remove conflicting VGA console\n");
+ DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
goto out_gtt;
}
- ret = i915_kick_out_firmware_fb(dev_priv);
+ ret = i915_kick_out_vgacon(dev_priv);
if (ret) {
- DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
+ DRM_ERROR("failed to remove conflicting VGA console\n");
goto out_gtt;
}
}
I915_WRITE(_3D_CHICKEN,
_MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB));
- /* WaSetupGtModeTdRowDispatch:snb */
- if (IS_SNB_GT1(dev))
- I915_WRITE(GEN6_GT_MODE,
- _MASKED_BIT_ENABLE(GEN6_TD_FOUR_ROW_DISPATCH_DISABLE));
-
/* WaDisable_RenderCache_OperationalFlush:snb */
I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE));
hdev->hiddev_disconnect(hdev);
if (hdev->claimed & HID_CLAIMED_HIDRAW)
hidraw_disconnect(hdev);
+ hdev->claimed = 0;
}
EXPORT_SYMBOL_GPL(hid_disconnect);
#define USB_VENDOR_ID_ELAN 0x04f3
#define USB_DEVICE_ID_ELAN_TOUCHSCREEN 0x0089
#define USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B 0x009b
+#define USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103 0x0103
#define USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F 0x016f
#define USB_VENDOR_ID_ELECOM 0x056e
{ USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) },
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F4) },
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) },
{}
};
MODULE_DEVICE_TABLE(pci, fam15h_power_id_table);
opal = of_find_node_by_path("/ibm,opal/sensors");
if (!opal) {
- dev_err(&pdev->dev, "Opal node 'sensors' not found\n");
+ dev_dbg(&pdev->dev, "Opal node 'sensors' not found\n");
return -ENODEV;
}
err = platform_driver_probe(&ibmpowernv_driver, ibmpowernv_probe);
if (err) {
- pr_err("Platfrom driver probe failed\n");
+ if (err != -ENODEV)
+ pr_err("Platform driver probe failed (%d)\n", err);
+
goto exit_device_del;
}
static int pwm_fan_resume(struct device *dev)
{
struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
+ unsigned long duty;
+ int ret;
- if (ctx->pwm_value)
- return pwm_enable(ctx->pwm);
- return 0;
+ if (ctx->pwm_value == 0)
+ return 0;
+
+ duty = DIV_ROUND_UP(ctx->pwm_value * (ctx->pwm->period - 1), MAX_PWM);
+ ret = pwm_config(ctx->pwm, duty, ctx->pwm->period);
+ if (ret)
+ return ret;
+ return pwm_enable(ctx->pwm);
}
#endif
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- MA 02110-1301 USA.
* ------------------------------------------------------------------------- */
/* With some changes from Frodo Looijaard <frodol@dds.nl>, Kyösti Mälkki
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- * MA 02110-1301 USA.
*/
#include <linux/kernel.h>
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- * MA 02110-1301 USA.
- *
* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and
* Frodo Looijaard <frodol@dds.nl>, and also from Martin Bailey
* <mbailey@littlefeet-inc.com>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- MA 02110-1301 USA. */
+ GNU General Public License for more details. */
/* -------------------------------------------------------------------- */
/* With some changes from Frodo Looijaard <frodol@dds.nl> */
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
}
}
- ret = wait_for_completion_io_timeout(&dev->cmd_complete,
+ ret = wait_for_completion_timeout(&dev->cmd_complete,
dev->adapter.timeout);
if (ret == 0) {
dev_err(dev->dev, "controller timed out\n");
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/delay.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ----------------------------------------------------------------------------
*
*/
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ----------------------------------------------------------------------------
*
*/
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ----------------------------------------------------------------------------
*
*/
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ----------------------------------------------------------------------------
*
*/
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ----------------------------------------------------------------------------
*
*/
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/module.h>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+ GNU General Public License for more details. */
/* ------------------------------------------------------------------------- */
/* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and even
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
- * USA.
- *
* Author:
* Darius Augulis, Teltonika Inc.
*
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+ GNU General Public License for more details. */
/* ------------------------------------------------------------------------- */
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
* The full GNU General Public License is included in this distribution
* in the file called LICENSE.GPL.
*
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/module.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ------------------------------------------------------------------------ */
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ------------------------------------------------------------------------ */
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* ------------------------------------------------------------------------ */
#define PORT_DATA 0
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/module.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-
*/
#include <linux/module.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/kernel.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/kernel.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/kernel.h>
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/* Note: we assume there can only be one SIS5595 with one SMBus interface */
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/delay.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
*
* This code was implemented by Mocean Laboratories AB when porting linux
* to the automotive development board Russellville. The copyright holder
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- * MA 02110-1301 USA.
*/
#include <linux/kernel.h>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- MA 02110-1301 USA. */
+ GNU General Public License for more details. */
/* ------------------------------------------------------------------------- */
/* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi>.
status = driver->remove(client);
}
+ if (dev->of_node)
+ irq_dispose_mapping(client->irq);
+
dev_pm_domain_detach(&client->dev, true);
return status;
}
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- * MA 02110-1301 USA.
*/
#include <linux/rwsem.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- MA 02110-1301 USA.
*/
/* Note that this is a complete rewrite of Simon Vogl's i2c-dev module.
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- * MA 02110-1301 USA.
*/
#include <linux/kernel.h>
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#define DEBUG 1
static const struct iio_event_spec kxcjk1013_event = {
.type = IIO_EV_TYPE_THRESH,
- .dir = IIO_EV_DIR_RISING | IIO_EV_DIR_FALLING,
+ .dir = IIO_EV_DIR_EITHER,
.mask_separate = BIT(IIO_EV_INFO_VALUE) |
BIT(IIO_EV_INFO_ENABLE) |
BIT(IIO_EV_INFO_PERIOD)
return i2c_smbus_write_byte_data(to_i2c_client(dev), TSL4531_CONTROL,
TSL4531_MODE_NORMAL);
}
-#endif
static SIMPLE_DEV_PM_OPS(tsl4531_pm_ops, tsl4531_suspend, tsl4531_resume);
+#define TSL4531_PM_OPS (&tsl4531_pm_ops)
+#else
+#define TSL4531_PM_OPS NULL
+#endif
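
The hunk above wraps the suspend/resume pair and the dev_pm_ops in CONFIG_PM_SLEEP and exposes a macro that is either a pointer to the ops or NULL, so the driver struct compiles either way. A minimal sketch of the same idiom for a hypothetical "foo" i2c driver (names are illustrative, not from this patch):

#include <linux/i2c.h>
#include <linux/pm.h>

#ifdef CONFIG_PM_SLEEP
static int foo_suspend(struct device *dev) { return 0; }
static int foo_resume(struct device *dev)  { return 0; }
static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);
#define FOO_PM_OPS (&foo_pm_ops)
#else
#define FOO_PM_OPS NULL	/* no PM callbacks built in */
#endif

static struct i2c_driver foo_driver = {
	.driver = {
		.name = "foo",
		.pm   = FOO_PM_OPS,	/* NULL is fine when CONFIG_PM_SLEEP=n */
	},
};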
static const struct i2c_device_id tsl4531_id[] = {
{ "tsl4531", 0 },
static struct i2c_driver tsl4531_driver = {
.driver = {
.name = TSL4531_DRV_NAME,
- .pm = &tsl4531_pm_ops,
+ .pm = TSL4531_PM_OPS,
.owner = THIS_MODULE,
},
.probe = tsl4531_probe,
return -EINVAL;
}
- indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(st));
+ indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
if (!indio_dev)
return -ENOMEM;
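
The one-character fix above matters because sizeof(st) is the size of the pointer, not of the state structure, so the IIO device would have been allocated with far too little private space. A stand-alone illustration of the difference (hypothetical struct, not from this driver):

#include <stdio.h>

struct state { char buf[64]; };

int main(void)
{
	struct state *st = NULL;

	/* sizeof(st) is the pointer size; sizeof(*st) is the object size */
	printf("sizeof(st)  = %zu\n", sizeof(st));	/* 8 on a 64-bit build */
	printf("sizeof(*st) = %zu\n", sizeof(*st));	/* 64 */
	return 0;
}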
}
platform_set_drvdata(pdev, pwr);
+ device_init_wakeup(&pdev->dev, true);
return 0;
}
{
struct alps_data *priv = psmouse->private;
- if ((psmouse->packet[0] & 0xc8) == 0x08) { /* PS/2 packet */
+ /*
+ * Check if we are dealing with a bare PS/2 packet, presumably from
+ * a device connected to the external PS/2 port. Because the bare PS/2
+ * protocol does not have enough constant bits to self-synchronize
+ * properly, we only do this if the device is fully synchronized.
+ */
+ if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) {
if (psmouse->pktcnt == 3) {
alps_report_bare_ps2_packet(psmouse, psmouse->packet,
true);
}
/* Bytes 2 - pktsize should have 0 in the highest bit */
- if ((priv->proto_version < ALPS_PROTO_V5) &&
+ if (priv->proto_version < ALPS_PROTO_V5 &&
psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize &&
(psmouse->packet[psmouse->pktcnt - 1] & 0x80)) {
psmouse_dbg(psmouse, "refusing packet[%i] = %x\n",
psmouse->pktcnt - 1,
psmouse->packet[psmouse->pktcnt - 1]);
+
+ if (priv->proto_version == ALPS_PROTO_V3 &&
+ psmouse->pktcnt == psmouse->pktsize) {
+ /*
+ * Some Dell boxes, such as Latitude E6440 or E7440
+ * with a closed lid, quite often smash the last byte of
+ * an otherwise valid packet with 0xff. Given that the
+ * next packet is very likely to be valid, let's
+ * report PSMOUSE_FULL_PACKET but not process the data,
+ * rather than reporting PSMOUSE_BAD_DATA and
+ * filling the logs.
+ */
+ return PSMOUSE_FULL_PACKET;
+ }
+
return PSMOUSE_BAD_DATA;
}
/* We are having trouble resyncing ALPS touchpads so disable it for now */
psmouse->resync_time = 0;
+ /* Allow 2 invalid packets without resetting device */
+ psmouse->resetafter = psmouse->pktsize * 2;
+
return 0;
init_fail:
} else {
input_report_key(dev, BTN_LEFT, packet[0] & 0x01);
input_report_key(dev, BTN_RIGHT, packet[0] & 0x02);
+ input_report_key(dev, BTN_MIDDLE, packet[0] & 0x04);
}
input_mt_report_pointer_emulation(dev, true);
unsigned char packet_type = packet[3] & 0x03;
bool sanity_check;
+ if ((packet[3] & 0x0f) == 0x06)
+ return PACKET_TRACKPOINT;
+
/*
* Sanity check based on the constant bits of a packet.
* The constant bits change depending on the value of
case 4:
packet_type = elantech_packet_check_v4(psmouse);
- if (packet_type == PACKET_UNKNOWN)
+ switch (packet_type) {
+ case PACKET_UNKNOWN:
return PSMOUSE_BAD_DATA;
- elantech_report_absolute_v4(psmouse, packet_type);
+ case PACKET_TRACKPOINT:
+ elantech_report_trackpoint(psmouse, packet_type);
+ break;
+
+ default:
+ elantech_report_absolute_v4(psmouse, packet_type);
+ break;
+ }
+
break;
}
}
}
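
As a sketch of the dispatch added above: on hardware version 4 a 6-byte packet whose byte 3 carries 0x06 in the low nibble is routed to the trackpoint report path, and everything else stays on the touchpad path. The constants below mirror the hunk; the real driver performs further per-packet-type sanity checks before reporting.

#include <stdio.h>

enum { PACKET_TRACKPOINT = 1, PACKET_TOUCHPAD = 2 };

/* Minimal classification sketch for v4 hardware packets. */
static int classify(const unsigned char *packet)
{
	if ((packet[3] & 0x0f) == 0x06)
		return PACKET_TRACKPOINT;
	return PACKET_TOUCHPAD;
}

int main(void)
{
	unsigned char trackpoint_pkt[6] = { 0x00, 0x00, 0x00, 0x36, 0x00, 0x00 };
	unsigned char touchpad_pkt[6]   = { 0x00, 0x00, 0x00, 0x11, 0x00, 0x00 };

	printf("%d %d\n", classify(trackpoint_pkt), classify(touchpad_pkt));
	return 0;
}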
+/*
+ * Some hw_version 4 models do have a middle button
+ */
+static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+#if defined(CONFIG_DMI) && defined(CONFIG_X86)
+ {
+ /* Fujitsu H730 has a middle button */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H730"),
+ },
+ },
+#endif
+ { }
+};
+
/*
* Set the appropriate event bits for the input subsystem
*/
__clear_bit(EV_REL, dev->evbit);
__set_bit(BTN_LEFT, dev->keybit);
+ if (dmi_check_system(elantech_dmi_has_middle_button))
+ __set_bit(BTN_MIDDLE, dev->keybit);
__set_bit(BTN_RIGHT, dev->keybit);
__set_bit(BTN_TOUCH, dev->keybit);
ELANTECH_INT_ATTR(reg_26, 0x26);
ELANTECH_INT_ATTR(debug, 0);
ELANTECH_INT_ATTR(paritycheck, 0);
+ELANTECH_INT_ATTR(crc_enabled, 0);
static struct attribute *elantech_attrs[] = {
&psmouse_attr_reg_07.dattr.attr,
&psmouse_attr_reg_26.dattr.attr,
&psmouse_attr_debug.dattr.attr,
&psmouse_attr_paritycheck.dattr.attr,
+ &psmouse_attr_crc_enabled.dattr.attr,
NULL
};
return 0;
}
+/*
+ * Some hw_version 4 models do not work with crc_enabled == 0

+ */
+static const struct dmi_system_id elantech_dmi_force_crc_enabled[] = {
+#if defined(CONFIG_DMI) && defined(CONFIG_X86)
+ {
+ /* Fujitsu H730 does not work with crc_enabled == 0 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H730"),
+ },
+ },
+#endif
+ { }
+};
+
/*
* Some hw_version 3 models go into error state when we try to set
* bit 3 and/or bit 1 of r10.
* The signatures of v3 and v4 packets change depending on the
* value of this hardware flag.
*/
- etd->crc_enabled = ((etd->fw_version & 0x4000) == 0x4000);
+ etd->crc_enabled = (etd->fw_version & 0x4000) == 0x4000 ||
+ dmi_check_system(elantech_dmi_force_crc_enabled);
/* Enable real hardware resolution on hw_version 3 ? */
etd->set_hw_resolution = !dmi_check_system(no_hw_res_dmi_table);
1232, 5710, 1156, 4696
},
{
- (const char * const []){"LEN0034", "LEN0036", "LEN2002",
- "LEN2004", NULL},
+ (const char * const []){"LEN0034", "LEN0036", "LEN0039",
+ "LEN2002", "LEN2004", NULL},
1024, 5112, 2024, 4832
},
{
"LEN0036", /* T440 */
"LEN0037",
"LEN0038",
+ "LEN0039", /* T440s */
"LEN0041",
"LEN0042", /* Yoga */
"LEN0045",
#define ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS (0x34)
#define ARMADA_370_XP_INT_SOURCE_CTL(irq) (0x100 + irq*4)
#define ARMADA_370_XP_INT_SOURCE_CPU_MASK 0xF
+#define ARMADA_370_XP_INT_IRQ_FIQ_MASK(cpuid) ((BIT(0) | BIT(8)) << cpuid)
#define ARMADA_370_XP_CPU_INTACK_OFFS (0x44)
#define ARMADA_375_PPI_CAUSE (0x10)
struct irq_desc *desc)
{
struct irq_chip *chip = irq_get_chip(irq);
- unsigned long irqmap, irqn;
+ unsigned long irqmap, irqn, irqsrc, cpuid;
unsigned int cascade_irq;
chained_irq_enter(chip, desc);
irqmap = readl_relaxed(per_cpu_int_base + ARMADA_375_PPI_CAUSE);
-
- if (irqmap & BIT(0)) {
- armada_370_xp_handle_msi_irq(NULL, true);
- irqmap &= ~BIT(0);
- }
+ cpuid = cpu_logical_map(smp_processor_id());
for_each_set_bit(irqn, &irqmap, BITS_PER_LONG) {
+ irqsrc = readl_relaxed(main_int_base +
+ ARMADA_370_XP_INT_SOURCE_CTL(irqn));
+
+ /* Check if the interrupt is not masked on the current CPU.
+ * Test IRQ (0-1) and FIQ (8-9) mask bits.
+ */
+ if (!(irqsrc & ARMADA_370_XP_INT_IRQ_FIQ_MASK(cpuid)))
+ continue;
+
+ if (irqn == 1) {
+ armada_370_xp_handle_msi_irq(NULL, true);
+ continue;
+ }
+
cascade_irq = irq_find_mapping(armada_370_xp_mpic_domain, irqn);
generic_handle_irq(cascade_irq);
}
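
A worked expansion of the new ARMADA_370_XP_INT_IRQ_FIQ_MASK() check: the per-interrupt source-control register carries IRQ enable bits 0-1 and FIQ enable bits 8-9, one bit per CPU, so for a given CPU the mask is (BIT(0) | BIT(8)) << cpuid. The register value below is illustrative.

#include <stdio.h>

#define BIT(n) (1u << (n))
#define INT_IRQ_FIQ_MASK(cpuid) ((BIT(0) | BIT(8)) << (cpuid))

int main(void)
{
	/* CPU0 -> 0x101 (bits 0 and 8), CPU1 -> 0x202 (bits 1 and 9) */
	printf("cpu0 mask = 0x%03x\n", INT_IRQ_FIQ_MASK(0));
	printf("cpu1 mask = 0x%03x\n", INT_IRQ_FIQ_MASK(1));

	unsigned int irqsrc = 0x003;	/* IRQ enabled for CPU0 and CPU1 only */
	printf("masked on cpu0: %s\n",
	       (irqsrc & INT_IRQ_FIQ_MASK(0)) ? "no" : "yes");
	return 0;
}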
/*
* Test if the buffer is unused and too old, and commit it.
- * At if noio is set, we must not do any I/O because we hold
- * dm_bufio_clients_lock and we would risk deadlock if the I/O gets rerouted to
- * different bufio client.
+ * And if GFP_NOFS is used, we must not do any I/O because we hold
+ * dm_bufio_clients_lock and we would risk deadlock if the I/O gets
+ * rerouted to a different bufio client.
*/
static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp,
unsigned long max_jiffies)
if (jiffies - b->last_accessed < max_jiffies)
return 0;
- if (!(gfp & __GFP_IO)) {
+ if (!(gfp & __GFP_FS)) {
if (test_bit(B_READING, &b->state) ||
test_bit(B_WRITING, &b->state) ||
test_bit(B_DIRTY, &b->state))
unsigned long freed;
c = container_of(shrink, struct dm_bufio_client, shrinker);
- if (sc->gfp_mask & __GFP_IO)
+ if (sc->gfp_mask & __GFP_FS)
dm_bufio_lock(c);
else if (!dm_bufio_trylock(c))
return SHRINK_STOP;
unsigned long count;
c = container_of(shrink, struct dm_bufio_client, shrinker);
- if (sc->gfp_mask & __GFP_IO)
+ if (sc->gfp_mask & __GFP_FS)
dm_bufio_lock(c);
else if (!dm_bufio_trylock(c))
return 0;
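
The switch from __GFP_IO to __GFP_FS matters because GFP_NOFS allocations, used on filesystem writeback paths, still have __GFP_IO set, so the old test would take the blocking path and could deadlock; only a test on the FS bit keeps those callers on the trylock path. A user-space sketch with illustrative flag values (the real constants live in the kernel's gfp headers):

#include <stdio.h>

/* Illustrative bit positions only; not the kernel's real values. */
#define GFP_IO_BIT 0x1
#define GFP_FS_BIT 0x2

#define GFP_KERNEL (GFP_IO_BIT | GFP_FS_BIT)	/* may start I/O and FS recursion */
#define GFP_NOFS   (GFP_IO_BIT)			/* may start I/O, no FS recursion */
#define GFP_NOIO   (0)				/* neither */

static const char *path(unsigned gfp)
{
	/* Mirrors the fixed check: only block when FS recursion is allowed. */
	return (gfp & GFP_FS_BIT) ? "dm_bufio_lock" : "dm_bufio_trylock";
}

int main(void)
{
	printf("GFP_KERNEL -> %s\n", path(GFP_KERNEL));	/* lock */
	printf("GFP_NOFS   -> %s\n", path(GFP_NOFS));	/* trylock: the old __GFP_IO test got this wrong */
	printf("GFP_NOIO   -> %s\n", path(GFP_NOIO));	/* trylock */
	return 0;
}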
__le32 layout;
__le32 stripe_sectors;
- __u8 pad[452]; /* Round struct to 512 bytes. */
- /* Always set to 0 when writing. */
+ /* Remainder of a logical block is zero-filled when writing (see super_sync()). */
} __packed;
static int read_disk_sb(struct md_rdev *rdev, int size)
test_bit(Faulty, &(rs->dev[i].rdev.flags)))
failed_devices |= (1ULL << i);
- memset(sb, 0, sizeof(*sb));
+ memset(sb + 1, 0, rdev->sb_size - sizeof(*sb));
sb->magic = cpu_to_le32(DM_RAID_MAGIC);
sb->features = cpu_to_le32(0); /* No features yet */
uint64_t events_sb, events_refsb;
rdev->sb_start = 0;
- rdev->sb_size = sizeof(*sb);
+ rdev->sb_size = bdev_logical_block_size(rdev->meta_bdev);
+ if (rdev->sb_size < sizeof(*sb) || rdev->sb_size > PAGE_SIZE) {
+ DMERR("superblock size of a logical block is no longer valid");
+ return -EINVAL;
+ }
ret = read_disk_sb(rdev, rdev->sb_size);
if (ret)
raid456 = (rs->md.level == 4 || rs->md.level == 5 || rs->md.level == 6);
for (i = 0; i < rs->md.raid_disks; i++) {
- struct request_queue *q = bdev_get_queue(rs->dev[i].rdev.bdev);
+ struct request_queue *q;
+
+ if (!rs->dev[i].rdev.bdev)
+ continue;
+ q = bdev_get_queue(rs->dev[i].rdev.bdev);
if (!q || !blk_queue_discard(q))
return;
sc->stripes_shift = __ffs(stripes);
r = dm_set_target_max_io_len(ti, chunk_size);
- if (r)
+ if (r) {
+ kfree(sc);
return r;
+ }
ti->num_flush_bios = stripes;
ti->num_discard_bios = stripes;
return DM_MAPIO_SUBMITTED;
}
+ /*
+ * We must hold the virtual cell before doing the lookup, otherwise
+ * there's a race with discard.
+ */
+ build_virtual_key(tc->td, block, &key);
+ if (dm_bio_detain(tc->pool->prison, &key, bio, &cell1, &cell_result))
+ return DM_MAPIO_SUBMITTED;
+
r = dm_thin_find_block(td, block, 0, &result);
/*
* shared flag will be set in their case.
*/
thin_defer_bio(tc, bio);
+ cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED;
}
- build_virtual_key(tc->td, block, &key);
- if (dm_bio_detain(tc->pool->prison, &key, bio, &cell1, &cell_result))
- return DM_MAPIO_SUBMITTED;
-
build_data_key(tc->td, result.block, &key);
if (dm_bio_detain(tc->pool->prison, &key, bio, &cell2, &cell_result)) {
cell_defer_no_holder_no_free(tc, &cell1);
* of doing so.
*/
handle_unserviceable_bio(tc->pool, bio);
+ cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED;
}
/* fall through */
* provide the hint to load the metadata into cache.
*/
thin_defer_bio(tc, bio);
+ cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED;
default:
* pool is switched to fail-io mode.
*/
bio_io_error(bio);
+ cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED;
}
}
printk("md: %s still in use.\n",mdname(mddev));
if (did_freeze) {
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
md_wakeup_thread(mddev->thread);
}
err = -EBUSY;
mddev->ro = 1;
set_disk_ro(mddev->gendisk, 1);
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
sysfs_notify_dirent_safe(mddev->sysfs_state);
err = 0;
}
mutex_unlock(&mddev->open_mutex);
if (did_freeze) {
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
md_wakeup_thread(mddev->thread);
}
return -EBUSY;
} __packed;
+/*
+ * Locks a block using the btree node validator.
+ */
+int bn_read_lock(struct dm_btree_info *info, dm_block_t b,
+ struct dm_block **result);
+
void inc_children(struct dm_transaction_manager *tm, struct btree_node *n,
struct dm_btree_value_type *vt);
/*----------------------------------------------------------------*/
-static int bn_read_lock(struct dm_btree_info *info, dm_block_t b,
+int bn_read_lock(struct dm_btree_info *info, dm_block_t b,
struct dm_block **result)
{
return dm_tm_read_lock(info->tm, b, &btree_node_validator, result);
* FIXME: We shouldn't use a recursive algorithm when we have limited stack
* space. Also this only works for single level trees.
*/
-static int walk_node(struct ro_spine *s, dm_block_t block,
+static int walk_node(struct dm_btree_info *info, dm_block_t block,
int (*fn)(void *context, uint64_t *keys, void *leaf),
void *context)
{
int r;
unsigned i, nr;
+ struct dm_block *node;
struct btree_node *n;
uint64_t keys;
- r = ro_step(s, block);
- n = ro_node(s);
+ r = bn_read_lock(info, block, &node);
+ if (r)
+ return r;
+
+ n = dm_block_data(node);
nr = le32_to_cpu(n->header.nr_entries);
for (i = 0; i < nr; i++) {
if (le32_to_cpu(n->header.flags) & INTERNAL_NODE) {
- r = walk_node(s, value64(n, i), fn, context);
+ r = walk_node(info, value64(n, i), fn, context);
if (r)
goto out;
} else {
}
out:
- ro_pop(s);
+ dm_tm_unlock(info->tm, node);
return r;
}
int (*fn)(void *context, uint64_t *keys, void *leaf),
void *context)
{
- int r;
- struct ro_spine spine;
-
BUG_ON(info->levels > 1);
-
- init_ro_spine(&spine, info);
- r = walk_node(&spine, root, fn, context);
- exit_ro_spine(&spine);
-
- return r;
+ return walk_node(info, root, fn, context);
}
EXPORT_SYMBOL_GPL(dm_btree_walk);
case SYS_ATSC:
c->modulation = VSB_8;
break;
+ case SYS_ISDBS:
+ c->symbol_rate = 28860000;
+ c->rolloff = ROLLOFF_35;
+ c->bandwidth_hz = c->symbol_rate / 100 * 135;
+ break;
default:
c->modulation = QAM_AUTO;
break;
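
The ISDB-S defaults above follow from the fixed 28.860 Msym/s symbol rate and a 35% roll-off, i.e. bandwidth = symbol_rate * 1.35, computed with the same overflow-avoiding integer form used in the hunk:

#include <stdio.h>

int main(void)
{
	unsigned int symbol_rate = 28860000;			/* ISDB-S, fixed */
	unsigned int bandwidth_hz = symbol_rate / 100 * 135;	/* 35% roll-off */

	printf("%u Hz\n", bandwidth_hz);			/* 38961000 */
	return 0;
}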
break;
case SYS_DVBS:
case SYS_TURBO:
+ case SYS_ISDBS:
rolloff = 135;
break;
case SYS_DVBS2:
memcpy(&state->frontend.ops, &ds3000_ops,
sizeof(struct dvb_frontend_ops));
state->frontend.demodulator_priv = state;
+
+ /*
+ * Some devices, such as the T480, start with voltage on. Be sure
+ * to turn voltage off during init, as this can otherwise
+ * interfere with Unicable SCR systems.
+ */
+ ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF);
return &state->frontend;
error3:
return s->status;
}
-int sp2_init(struct sp2 *s)
+static int sp2_init(struct sp2 *s)
{
int ret = 0;
u8 buf;
return ret;
}
-int sp2_exit(struct i2c_client *client)
+static int sp2_exit(struct i2c_client *client)
{
struct sp2 *s;
c->delivery_system = SYS_ISDBS;
layers = 0;
- ret = reg_read(state, 0xe8, val, 3);
+ ret = reg_read(state, 0xe6, val, 5);
if (ret == 0) {
- int slots;
u8 v;
+ c->stream_id = val[0] << 8 | val[1];
+
/* high/single layer */
- v = (val[0] & 0x70) >> 4;
+ v = (val[2] & 0x70) >> 4;
c->modulation = (v == 7) ? PSK_8 : QPSK;
c->fec_inner = fec_conv_sat[v];
c->layer[0].fec = c->fec_inner;
c->layer[0].modulation = c->modulation;
- c->layer[0].segment_count = val[1] & 0x3f; /* slots */
+ c->layer[0].segment_count = val[3] & 0x3f; /* slots */
/* low layer */
- v = (val[0] & 0x07);
+ v = (val[2] & 0x07);
c->layer[1].fec = fec_conv_sat[v];
if (v == 0) /* no low layer */
c->layer[1].segment_count = 0;
else
- c->layer[1].segment_count = val[2] & 0x3f; /* slots */
+ c->layer[1].segment_count = val[4] & 0x3f; /* slots */
/* actually, BPSK if v==1, but not defined in fe_modulation_t */
c->layer[1].modulation = QPSK;
layers = (v > 0) ? 2 : 1;
-
- slots = c->layer[0].segment_count + c->layer[1].segment_count;
- c->symbol_rate = 28860000 * slots / 48;
}
/* statistics */
u8 v;
c->isdbt_partial_reception = val[0] & 0x01;
- c->isdbt_sb_mode = (val[0] & 0xc0) == 0x01;
+ c->isdbt_sb_mode = (val[0] & 0xc0) == 0x40;
/* layer A */
v = (val[2] & 0x78) >> 3;
"\t\t bit 0=crop, 1=compose, 2=scale,\n"
"\t\t -1=user-controlled (default)");
-static unsigned multiplanar[VIVID_MAX_DEVS];
+static unsigned multiplanar[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 1 };
module_param_array(multiplanar, uint, NULL, 0444);
-MODULE_PARM_DESC(multiplanar, " 0 (default) is alternating single and multiplanar devices,\n"
- "\t\t 1 is single planar devices,\n"
- "\t\t 2 is multiplanar devices");
+MODULE_PARM_DESC(multiplanar, " 1 (default) creates a single planar device, 2 creates a multiplanar device.");
/* Default: video + vbi-cap (raw and sliced) + radio rx + radio tx + sdr + vbi-out + vid-out */
static unsigned node_types[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 0x1d3d };
/* start detecting feature set */
/* do we use single- or multi-planar? */
- if (multiplanar[inst] == 0)
- dev->multiplanar = inst & 1;
- else
- dev->multiplanar = multiplanar[inst] > 1;
+ dev->multiplanar = multiplanar[inst] > 1;
v4l2_info(&dev->v4l2_dev, "using %splanar format API\n",
dev->multiplanar ? "multi" : "single ");
if (press_type == 0)
rc_keyup(ictx->rdev);
else {
- if (ictx->rc_type == RC_BIT_RC6_MCE)
+ if (ictx->rc_type == RC_BIT_RC6_MCE ||
+ ictx->rc_type == RC_BIT_OTHER)
rc_keydown(ictx->rdev,
ictx->rc_type == RC_BIT_RC6_MCE ? RC_TYPE_RC6_MCE : RC_TYPE_OTHER,
ictx->rc_scancode, ictx->rc_toggle);
return 0;
}
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
static int hix5hd2_ir_suspend(struct device *dev)
{
struct hix5hd2_ir_priv *priv = dev_get_drvdata(dev);
u32 scancode;
enum rc_type protocol;
- if (!(dev->enabled_protocols & (RC_BIT_RC5 | RC_BIT_RC5X)))
+ if (!(dev->enabled_protocols & (RC_BIT_RC5 | RC_BIT_RC5X | RC_BIT_RC5_SZ)))
return 0;
if (!is_timing_event(ev)) {
return -ENOMEM;
dev->raw->dev = dev;
- dev->enabled_protocols = ~0;
dev->change_protocol = change_protocol;
rc = kfifo_alloc(&dev->raw->kfifo,
sizeof(struct ir_raw_event) * MAX_IR_EVENT_SIZE,
if (dev->change_protocol) {
u64 rc_type = (1 << rc_map->rc_type);
+ if (dev->driver_type == RC_DRIVER_IR_RAW)
+ rc_type |= RC_BIT_LIRC;
rc = dev->change_protocol(dev, &rc_type);
if (rc < 0)
goto out_raw;
goto err_irq_charger;
}
- ret = regmap_add_irq_chip(max77693->regmap, max77693->irq,
+ ret = regmap_add_irq_chip(max77693->regmap_muic, max77693->irq,
IRQF_ONESHOT | IRQF_SHARED |
IRQF_TRIGGER_FALLING, 0,
&max77693_muic_irq_chip,
goto err_irq_muic;
}
+ /* Unmask interrupts from all blocks in interrupt source register */
+ ret = regmap_update_bits(max77693->regmap,
+ MAX77693_PMIC_REG_INTSRC_MASK,
+ SRC_IRQ_ALL, (unsigned int)~SRC_IRQ_ALL);
+ if (ret < 0) {
+ dev_err(max77693->dev,
+ "Could not unmask interrupts in INTSRC: %d\n",
+ ret);
+ goto err_intsrc;
+ }
+
pm_runtime_set_active(max77693->dev);
ret = mfd_add_devices(max77693->dev, -1, max77693_devs,
err_mfd:
mfd_remove_devices(max77693->dev);
+err_intsrc:
regmap_del_irq_chip(max77693->irq, max77693->irq_data_muic);
err_irq_muic:
regmap_del_irq_chip(max77693->irq, max77693->irq_data_charger);
mutex_unlock(&pcr->pcr_mutex);
}
+#ifdef CONFIG_PM
static void rtsx_pci_power_off(struct rtsx_pcr *pcr, u8 pm_state)
{
if (pcr->ops->turn_off_led)
if (pcr->ops->force_power_down)
pcr->ops->force_power_down(pcr, pm_state);
}
+#endif
static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
{
#define STMPE24XX_REG_CHIP_ID 0x80
#define STMPE24XX_REG_IEGPIOR_LSB 0x18
#define STMPE24XX_REG_ISGPIOR_MSB 0x19
-#define STMPE24XX_REG_GPMR_LSB 0xA5
+#define STMPE24XX_REG_GPMR_LSB 0xA4
#define STMPE24XX_REG_GPSR_LSB 0x85
#define STMPE24XX_REG_GPCR_LSB 0x88
#define STMPE24XX_REG_GPDR_LSB 0x8B
#define PWR_DEVSLP BIT(1)
#define PWR_DEVOFF BIT(0)
+/* Register bits for CFG_P1_TRANSITION (also for P2 and P3) */
+#define STARTON_SWBUG BIT(7) /* Start on watchdog */
+#define STARTON_VBUS BIT(5) /* Start on VBUS */
+#define STARTON_VBAT BIT(4) /* Start on battery insert */
+#define STARTON_RTC BIT(3) /* Start on RTC */
+#define STARTON_USB BIT(2) /* Start on USB host */
+#define STARTON_CHG BIT(1) /* Start on charger */
+#define STARTON_PWON BIT(0) /* Start on PWRON button */
+
#define SEQ_OFFSYNC (1 << 0)
#define PHY_TO_OFF_PM_MASTER(p) (p - 0x36)
return 0;
}
+static int twl4030_starton_mask_and_set(u8 bitmask, u8 bitvalues)
+{
+ u8 regs[3] = { TWL4030_PM_MASTER_CFG_P1_TRANSITION,
+ TWL4030_PM_MASTER_CFG_P2_TRANSITION,
+ TWL4030_PM_MASTER_CFG_P3_TRANSITION, };
+ u8 val;
+ int i, err;
+
+ err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, TWL4030_PM_MASTER_KEY_CFG1,
+ TWL4030_PM_MASTER_PROTECT_KEY);
+ if (err)
+ goto relock;
+ err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER,
+ TWL4030_PM_MASTER_KEY_CFG2,
+ TWL4030_PM_MASTER_PROTECT_KEY);
+ if (err)
+ goto relock;
+
+ for (i = 0; i < sizeof(regs); i++) {
+ err = twl_i2c_read_u8(TWL_MODULE_PM_MASTER,
+ &val, regs[i]);
+ if (err)
+ break;
+ val = (~bitmask & val) | (bitmask & bitvalues);
+ err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER,
+ val, regs[i]);
+ if (err)
+ break;
+ }
+
+ if (err)
+ pr_err("TWL4030 Register access failed: %i\n", err);
+
+relock:
+ return twl_i2c_write_u8(TWL_MODULE_PM_MASTER, 0,
+ TWL4030_PM_MASTER_PROTECT_KEY);
+}
+
/*
* In master mode, start the power off sequence.
* After a successful execution, TWL shuts down the power to the SoC
{
int err;
+ /* Disable start on charger or VBUS as it can break poweroff */
+ err = twl4030_starton_mask_and_set(STARTON_VBUS | STARTON_CHG, 0);
+ if (err)
+ pr_err("TWL4030 Unable to configure start-up\n");
+
err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, PWR_DEVOFF,
TWL4030_PM_MASTER_P1_SW_EVENTS);
if (err)
version >> 8, version & 0xff,
vb->usb_dev->bus->busnum, vb->usb_dev->devnum);
- ret = mfd_add_devices(&interface->dev, -1, vprbrd_devs,
- ARRAY_SIZE(vprbrd_devs), NULL, 0, NULL);
+ ret = mfd_add_devices(&interface->dev, PLATFORM_DEVID_AUTO,
+ vprbrd_devs, ARRAY_SIZE(vprbrd_devs), NULL, 0,
+ NULL);
if (ret != 0) {
dev_err(&interface->dev, "Failed to add mfd devices to core.");
goto error;
xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, data & ~TX_EN);
}
-static void xgene_enet_reset(struct xgene_enet_pdata *pdata)
+bool xgene_ring_mgr_init(struct xgene_enet_pdata *p)
+{
+ if (!ioread32(p->ring_csr_addr + CLKEN_ADDR))
+ return false;
+
+ if (ioread32(p->ring_csr_addr + SRST_ADDR))
+ return false;
+
+ return true;
+}
+
+static int xgene_enet_reset(struct xgene_enet_pdata *pdata)
{
u32 val;
+ if (!xgene_ring_mgr_init(pdata))
+ return -ENODEV;
+
clk_prepare_enable(pdata->clk);
clk_disable_unprepare(pdata->clk);
clk_prepare_enable(pdata->clk);
val |= SCAN_AUTO_INCR;
MGMT_CLOCK_SEL_SET(&val, 1);
xgene_enet_wr_mcx_mac(pdata, MII_MGMT_CONFIG_ADDR, val);
+
+ return 0;
}
static void xgene_gport_shutdown(struct xgene_enet_pdata *pdata)
#define BLOCK_ETH_MAC_OFFSET 0x0000
#define BLOCK_ETH_MAC_CSR_OFFSET 0x2800
+#define CLKEN_ADDR 0xc208
+#define SRST_ADDR 0xc200
+
#define MAC_ADDR_REG_OFFSET 0x00
#define MAC_COMMAND_REG_OFFSET 0x04
#define MAC_WRITE_REG_OFFSET 0x08
int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata);
void xgene_enet_mdio_remove(struct xgene_enet_pdata *pdata);
+bool xgene_ring_mgr_init(struct xgene_enet_pdata *p);
extern struct xgene_mac_ops xgene_gmac_ops;
extern struct xgene_port_ops xgene_gport_ops;
struct device *dev = ndev_to_dev(ndev);
struct xgene_enet_desc_ring *rx_ring, *tx_ring, *cp_ring;
struct xgene_enet_desc_ring *buf_pool = NULL;
- u8 cpu_bufnum = 0, eth_bufnum = 0;
- u8 bp_bufnum = 0x20;
- u16 ring_id, ring_num = 0;
+ u8 cpu_bufnum = 0, eth_bufnum = START_ETH_BUFNUM;
+ u8 bp_bufnum = START_BP_BUFNUM;
+ u16 ring_id, ring_num = START_RING_NUM;
int ret;
/* allocate rx descriptor ring */
u16 dst_ring_num;
int ret;
- pdata->port_ops->reset(pdata);
+ ret = pdata->port_ops->reset(pdata);
+ if (ret)
+ return ret;
ret = xgene_enet_create_desc_rings(ndev);
if (ret) {
return ret;
err:
+ unregister_netdev(ndev);
free_netdev(ndev);
return ret;
}
#define SKB_BUFFER_SIZE (XGENE_ENET_MAX_MTU - NET_IP_ALIGN)
#define NUM_PKT_BUF 64
#define NUM_BUFPOOL 32
+#define START_ETH_BUFNUM 2
+#define START_BP_BUFNUM 0x22
+#define START_RING_NUM 8
#define PHY_POLL_LINK_ON (10 * HZ)
#define PHY_POLL_LINK_OFF (PHY_POLL_LINK_ON / 5)
};
struct xgene_port_ops {
- void (*reset)(struct xgene_enet_pdata *pdata);
+ int (*reset)(struct xgene_enet_pdata *pdata);
void (*cle_bypass)(struct xgene_enet_pdata *pdata,
u32 dst_ring_num, u16 bufpool_id);
void (*shutdown)(struct xgene_enet_pdata *pdata);
xgene_sgmac_rxtx(p, TX_EN, false);
}
-static void xgene_enet_reset(struct xgene_enet_pdata *p)
+static int xgene_enet_reset(struct xgene_enet_pdata *p)
{
+ if (!xgene_ring_mgr_init(p))
+ return -ENODEV;
+
clk_prepare_enable(p->clk);
clk_disable_unprepare(p->clk);
clk_prepare_enable(p->clk);
xgene_enet_ecc_init(p);
xgene_enet_config_ring_if_assoc(p);
+
+ return 0;
}
static void xgene_enet_cle_bypass(struct xgene_enet_pdata *p,
xgene_enet_wr_mac(pdata, AXGMAC_CONFIG_1, data & ~HSTTFEN);
}
-static void xgene_enet_reset(struct xgene_enet_pdata *pdata)
+static int xgene_enet_reset(struct xgene_enet_pdata *pdata)
{
+ if (!xgene_ring_mgr_init(pdata))
+ return -ENODEV;
+
clk_prepare_enable(pdata->clk);
clk_disable_unprepare(pdata->clk);
clk_prepare_enable(pdata->clk);
xgene_enet_ecc_init(pdata);
xgene_enet_config_ring_if_assoc(pdata);
+
+ return 0;
}
static void xgene_enet_xgcle_bypass(struct xgene_enet_pdata *pdata,
/* We just need one DMA descriptor which is DMA-able, since writing to
* the port will allocate a new descriptor in its internal linked-list
*/
- p = dma_zalloc_coherent(kdev, 1, &ring->desc_dma, GFP_KERNEL);
+ p = dma_zalloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma,
+ GFP_KERNEL);
if (!p) {
netif_err(priv, hw, priv->netdev, "DMA alloc failed\n");
return -ENOMEM;
if (!(reg & TDMA_DISABLED))
netdev_warn(priv->netdev, "TDMA not stopped!\n");
+ /* ring->cbs is the last part in bcm_sysport_init_tx_ring which could
+ * fail, so by checking this pointer we know whether the TX ring was
+ * fully initialized or not.
+ */
+ if (!ring->cbs)
+ return;
+
napi_disable(&ring->napi);
netif_napi_del(&ring->napi);
ring->cbs = NULL;
if (ring->desc_dma) {
- dma_free_coherent(kdev, 1, ring->desc_cpu, ring->desc_dma);
+ dma_free_coherent(kdev, sizeof(struct dma_desc),
+ ring->desc_cpu, ring->desc_dma);
ring->desc_dma = 0;
}
ring->size = 0;
goto err_irq0;
}
+ /* Re-configure the port multiplexer towards the PHY device */
+ bcmgenet_mii_config(priv->dev, false);
+
+ phy_connect_direct(dev, priv->phydev, bcmgenet_mii_setup,
+ priv->phy_interface);
+
bcmgenet_netif_start(dev);
return 0;
bcmgenet_netif_stop(dev);
+ /* Really kill the PHY state machine and disconnect from it */
+ phy_disconnect(priv->phydev);
+
/* Disable MAC receive */
umac_enable_set(priv, CMD_RX_EN, false);
phy_init_hw(priv->phydev);
/* Speed settings must be restored */
- bcmgenet_mii_config(priv->dev);
+ bcmgenet_mii_config(priv->dev, false);
/* disable ethernet MAC while updating its registers */
umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, false);
/* MDIO routines */
int bcmgenet_mii_init(struct net_device *dev);
-int bcmgenet_mii_config(struct net_device *dev);
+int bcmgenet_mii_config(struct net_device *dev, bool init);
void bcmgenet_mii_exit(struct net_device *dev);
void bcmgenet_mii_reset(struct net_device *dev);
+void bcmgenet_mii_setup(struct net_device *dev);
/* Wake-on-LAN routines */
void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol);
/* setup netdev link state when PHY link status change and
* update UMAC and RGMII block when link up
*/
-static void bcmgenet_mii_setup(struct net_device *dev)
+void bcmgenet_mii_setup(struct net_device *dev)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev = priv->phydev;
bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
}
-int bcmgenet_mii_config(struct net_device *dev)
+int bcmgenet_mii_config(struct net_device *dev, bool init)
{
struct bcmgenet_priv *priv = netdev_priv(dev);
struct phy_device *phydev = priv->phydev;
return -EINVAL;
}
- dev_info(kdev, "configuring instance for %s\n", phy_name);
+ if (init)
+ dev_info(kdev, "configuring instance for %s\n", phy_name);
return 0;
}
* PHY speed which is needed for bcmgenet_mii_config() to configure
* things appropriately.
*/
- ret = bcmgenet_mii_config(dev);
+ ret = bcmgenet_mii_config(dev, true);
if (ret) {
phy_disconnect(priv->phydev);
return ret;
app.protocol = dcb->app_priority[i].protocolid;
if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE) {
+ app.priority = dcb->app_priority[i].user_prio_map;
app.selector = dcb->app_priority[i].sel_field + 1;
- err = dcb_ieee_setapp(dev, &app);
+ err = dcb_ieee_delapp(dev, &app);
} else {
app.selector = !!(dcb->app_priority[i].sel_field);
err = dcb_setapp(dev, &app);
case CXGB4_DCB_INPUT_FW_ENABLED: {
/* we're going to use Firmware DCB */
dcb->state = CXGB4_DCB_STATE_FW_INCOMPLETE;
- dcb->supported = CXGB4_DCBX_FW_SUPPORT;
+ dcb->supported = DCB_CAP_DCBX_LLD_MANAGED;
+ if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE)
+ dcb->supported |= DCB_CAP_DCBX_VER_IEEE;
+ else
+ dcb->supported |= DCB_CAP_DCBX_VER_CEE;
break;
}
*up_tc_map = (1 << tc);
/* prio_type is link strict */
- *prio_type = 0x2;
+ if (*pgid != 0xF)
+ *prio_type = 0x2;
}
static void cxgb4_getpgtccfg_tx(struct net_device *dev, int tc,
u8 *prio_type, u8 *pgid, u8 *bw_per,
u8 *up_tc_map)
{
- return cxgb4_getpgtccfg(dev, tc, prio_type, pgid, bw_per, up_tc_map, 1);
+ /* tc 0 is written at MSB position */
+ return cxgb4_getpgtccfg(dev, (7 - tc), prio_type, pgid, bw_per,
+ up_tc_map, 1);
}
u8 *prio_type, u8 *pgid, u8 *bw_per,
u8 *up_tc_map)
{
- return cxgb4_getpgtccfg(dev, tc, prio_type, pgid, bw_per, up_tc_map, 0);
+ /* tc 0 is written at MSB position */
+ return cxgb4_getpgtccfg(dev, (7 - tc), prio_type, pgid, bw_per,
+ up_tc_map, 0);
}
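
The (7 - tc) translation appears throughout these hunks because the firmware packs traffic class 0 into the most-significant nibble of the 32-bit PGID word (and into bit 7 of the PFC bitmap), while the host DCB API counts from 0 at the least-significant end. A small sketch of the nibble extraction, with an illustrative pgid value:

#include <stdio.h>
#include <stdint.h>

/* Firmware stores TC 0 in the most-significant nibble of the PGID word. */
static unsigned int pgid_for_tc(uint32_t pgid, unsigned int tc)
{
	unsigned int fw_tc = 7 - tc;		/* host TC index -> firmware nibble */

	return (pgid >> (fw_tc * 4)) & 0xF;
}

int main(void)
{
	uint32_t pgid = 0x01234567;		/* illustrative: TC0 -> 0, TC7 -> 7 */

	printf("tc0 -> %u\n", pgid_for_tc(pgid, 0));	/* 0 */
	printf("tc7 -> %u\n", pgid_for_tc(pgid, 7));	/* 7 */
	return 0;
}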
static void cxgb4_setpgtccfg_tx(struct net_device *dev, int tc,
struct fw_port_cmd pcmd;
struct port_info *pi = netdev2pinfo(dev);
struct adapter *adap = pi->adapter;
+ int fw_tc = 7 - tc;
u32 _pgid;
int err;
}
_pgid = be32_to_cpu(pcmd.u.dcb.pgid.pgid);
- _pgid &= ~(0xF << (tc * 4));
- _pgid |= pgid << (tc * 4);
+ _pgid &= ~(0xF << (fw_tc * 4));
+ _pgid |= pgid << (fw_tc * 4);
pcmd.u.dcb.pgid.pgid = cpu_to_be32(_pgid);
INIT_PORT_DCB_WRITE_CMD(pcmd, pi->port_id);
priority >= CXGB4_MAX_PRIORITY)
*pfccfg = 0;
else
- *pfccfg = (pi->dcb.pfcen >> priority) & 1;
+ *pfccfg = (pi->dcb.pfcen >> (7 - priority)) & 1;
}
/* Enable/disable Priority Pause Frames for the specified Traffic Class
pcmd.u.dcb.pfc.pfcen = pi->dcb.pfcen;
if (pfccfg)
- pcmd.u.dcb.pfc.pfcen |= (1 << priority);
+ pcmd.u.dcb.pfc.pfcen |= (1 << (7 - priority));
else
- pcmd.u.dcb.pfc.pfcen &= (~(1 << priority));
+ pcmd.u.dcb.pfc.pfcen &= (~(1 << (7 - priority)));
err = t4_wr_mbox(adap, adap->mbox, &pcmd, sizeof(pcmd), &pcmd);
if (err != FW_PORT_DCB_CFG_SUCCESS) {
int t4_sge_init(struct adapter *adap)
{
struct sge *s = &adap->sge;
- u32 sge_control, sge_conm_ctrl;
+ u32 sge_control, sge_control2, sge_conm_ctrl;
+ unsigned int ingpadboundary, ingpackboundary;
int ret, egress_threshold;
/*
sge_control = t4_read_reg(adap, SGE_CONTROL);
s->pktshift = PKTSHIFT_GET(sge_control);
s->stat_len = (sge_control & EGRSTATUSPAGESIZE_MASK) ? 128 : 64;
- s->fl_align = 1 << (INGPADBOUNDARY_GET(sge_control) +
- X_INGPADBOUNDARY_SHIFT);
+
+ /* T4 uses a single control field to specify both the PCIe Padding and
+ * Packing Boundary. T5 introduced the ability to specify these
+ * separately. The actual Ingress Packet Data alignment boundary
+ * within Packed Buffer Mode is the maximum of these two
+ * specifications.
+ */
+ ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_control) +
+ X_INGPADBOUNDARY_SHIFT);
+ if (is_t4(adap->params.chip)) {
+ s->fl_align = ingpadboundary;
+ } else {
+ /* T5 has a different interpretation of one of the PCIe Packing
+ * Boundary values.
+ */
+ sge_control2 = t4_read_reg(adap, SGE_CONTROL2_A);
+ ingpackboundary = INGPACKBOUNDARY_G(sge_control2);
+ if (ingpackboundary == INGPACKBOUNDARY_16B_X)
+ ingpackboundary = 16;
+ else
+ ingpackboundary = 1 << (ingpackboundary +
+ INGPACKBOUNDARY_SHIFT_X);
+
+ s->fl_align = max(ingpadboundary, ingpackboundary);
+ }
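
A worked version of the alignment math above, with the decoded register fields as inputs. The decode here is simplified; the real driver goes through the INGPADBOUNDARY_GET/INGPACKBOUNDARY_G accessors, but the arithmetic is the same.

#include <stdio.h>

#define X_INGPADBOUNDARY_SHIFT   5
#define INGPACKBOUNDARY_SHIFT_X  5
#define INGPACKBOUNDARY_16B_X    0

static unsigned int fl_align(int is_t4, unsigned int padfield, unsigned int packfield)
{
	unsigned int ingpadboundary = 1u << (padfield + X_INGPADBOUNDARY_SHIFT);
	unsigned int ingpackboundary;

	if (is_t4)
		return ingpadboundary;			/* T4: one field rules both */

	if (packfield == INGPACKBOUNDARY_16B_X)		/* T5: "0" means 16 bytes, not 32 */
		ingpackboundary = 16;
	else
		ingpackboundary = 1u << (packfield + INGPACKBOUNDARY_SHIFT_X);

	return ingpadboundary > ingpackboundary ? ingpadboundary : ingpackboundary;
}

int main(void)
{
	printf("T4, pad=1        -> %u\n", fl_align(1, 1, 0));	/* 64 */
	printf("T5, pad=0 pack=0 -> %u\n", fl_align(0, 0, 0));	/* max(32, 16) = 32 */
	printf("T5, pad=0 pack=1 -> %u\n", fl_align(0, 0, 1));	/* max(32, 64) = 64 */
	return 0;
}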
if (adap->flags & USING_SOFT_PARAMS)
ret = t4_sge_init_soft(adap);
HOSTPAGESIZEPF6(sge_hps) |
HOSTPAGESIZEPF7(sge_hps));
- t4_set_reg_field(adap, SGE_CONTROL,
- INGPADBOUNDARY_MASK |
- EGRSTATUSPAGESIZE_MASK,
- INGPADBOUNDARY(fl_align_log - 5) |
- EGRSTATUSPAGESIZE(stat_len != 64));
-
+ if (is_t4(adap->params.chip)) {
+ t4_set_reg_field(adap, SGE_CONTROL,
+ INGPADBOUNDARY_MASK |
+ EGRSTATUSPAGESIZE_MASK,
+ INGPADBOUNDARY(fl_align_log - 5) |
+ EGRSTATUSPAGESIZE(stat_len != 64));
+ } else {
+ /* T5 introduced the separation of the Free List Padding and
+ * Packing Boundaries. Thus, we can select a smaller Padding
+ * Boundary to avoid uselessly chewing up PCIe Link and Memory
+ * Bandwidth, and use a Packing Boundary which is large enough
+ * to avoid false sharing between CPUs, etc.
+ *
+ * For the PCI Link, the smaller the Padding Boundary the
+ * better. For the Memory Controller, a smaller Padding
+ * Boundary is better until we cross under the Memory Line
+ * Size (the minimum unit of transfer to/from Memory). If we
+ * have a Padding Boundary which is smaller than the Memory
+ * Line Size, that'll involve a Read-Modify-Write cycle on the
+ * Memory Controller which is never good. For T5 the smallest
+ * Padding Boundary which we can select is 32 bytes which is
+ * larger than any known Memory Controller Line Size so we'll
+ * use that.
+ *
+ * T5 has a different interpretation of the "0" value for the
+ * Packing Boundary. This corresponds to 16 bytes instead of
+ * the expected 32 bytes. We never have a Packing Boundary
+ * less than 32 bytes, so we can't use that special value but on the
+ * on the other hand, if we wanted 32 bytes, the best we can
+ * really do is 64 bytes.
+ */
+ if (fl_align <= 32) {
+ fl_align = 64;
+ fl_align_log = 6;
+ }
+ t4_set_reg_field(adap, SGE_CONTROL,
+ INGPADBOUNDARY_MASK |
+ EGRSTATUSPAGESIZE_MASK,
+ INGPADBOUNDARY(INGPCIEBOUNDARY_32B_X) |
+ EGRSTATUSPAGESIZE(stat_len != 64));
+ t4_set_reg_field(adap, SGE_CONTROL2_A,
+ INGPACKBOUNDARY_V(INGPACKBOUNDARY_M),
+ INGPACKBOUNDARY_V(fl_align_log -
+ INGPACKBOUNDARY_SHIFT_X));
+ }
/*
* Adjust various SGE Free List Host Buffer Sizes.
*
#define X_INGPADBOUNDARY_SHIFT 5
#define SGE_CONTROL 0x1008
+#define SGE_CONTROL2_A 0x1124
#define DCASYSTYPE 0x00080000U
#define RXPKTCPLMODE_MASK 0x00040000U
#define RXPKTCPLMODE_SHIFT 18
#define PKTSHIFT_SHIFT 10
#define PKTSHIFT(x) ((x) << PKTSHIFT_SHIFT)
#define PKTSHIFT_GET(x) (((x) & PKTSHIFT_MASK) >> PKTSHIFT_SHIFT)
+#define INGPCIEBOUNDARY_32B_X 0
#define INGPCIEBOUNDARY_MASK 0x00000380U
#define INGPCIEBOUNDARY_SHIFT 7
#define INGPCIEBOUNDARY(x) ((x) << INGPCIEBOUNDARY_SHIFT)
#define INGPADBOUNDARY(x) ((x) << INGPADBOUNDARY_SHIFT)
#define INGPADBOUNDARY_GET(x) (((x) & INGPADBOUNDARY_MASK) \
>> INGPADBOUNDARY_SHIFT)
+#define INGPACKBOUNDARY_16B_X 0
+#define INGPACKBOUNDARY_SHIFT_X 5
+
+#define INGPACKBOUNDARY_S 16
+#define INGPACKBOUNDARY_M 0x7U
+#define INGPACKBOUNDARY_V(x) ((x) << INGPACKBOUNDARY_S)
+#define INGPACKBOUNDARY_G(x) (((x) >> INGPACKBOUNDARY_S) \
+ & INGPACKBOUNDARY_M)
#define EGRPCIEBOUNDARY_MASK 0x0000000eU
#define EGRPCIEBOUNDARY_SHIFT 1
#define EGRPCIEBOUNDARY(x) ((x) << EGRPCIEBOUNDARY_SHIFT)
u16 timer_val[SGE_NTIMERS]; /* interrupt holdoff timer array */
u8 counter_val[SGE_NCOUNTERS]; /* interrupt RX threshold array */
+ /* Decoded Adapter Parameters.
+ */
+ u32 fl_pg_order; /* large page allocation size */
+ u32 stat_len; /* length of status page at ring end */
+ u32 pktshift; /* padding between CPL & packet data */
+ u32 fl_align; /* response queue message alignment */
+ u32 fl_starve_thres; /* Free List starvation threshold */
+
/*
* Reverse maps from Absolute Queue IDs to associated queue pointers.
* The absolute Queue IDs are in a compact range which start at a
#include "../cxgb4/t4fw_api.h"
#include "../cxgb4/t4_msg.h"
-/*
- * Decoded Adapter Parameters.
- */
-static u32 FL_PG_ORDER; /* large page allocation size */
-static u32 STAT_LEN; /* length of status page at ring end */
-static u32 PKTSHIFT; /* padding between CPL and packet data */
-static u32 FL_ALIGN; /* response queue message alignment */
-
/*
* Constants ...
*/
TX_QCHECK_PERIOD = (HZ / 2),
MAX_TIMER_TX_RECLAIM = 100,
- /*
- * An FL with <= FL_STARVE_THRES buffers is starving and a periodic
- * timer will attempt to refill it.
- */
- FL_STARVE_THRES = 4,
-
/*
* Suspend an Ethernet TX queue with fewer available descriptors than
* this. We always want to have room for a maximum sized packet:
/**
* fl_starving - return whether a Free List is starving.
+ * @adapter: pointer to the adapter
* @fl: the Free List
*
* Tests specified Free List to see whether the number of buffers
* available to the hardware has fallen below our "starvation"
* threshold.
*/
-static inline bool fl_starving(const struct sge_fl *fl)
+static inline bool fl_starving(const struct adapter *adapter,
+ const struct sge_fl *fl)
{
- return fl->avail - fl->pend_cred <= FL_STARVE_THRES;
+ const struct sge *s = &adapter->sge;
+
+ return fl->avail - fl->pend_cred <= s->fl_starve_thres;
}
/**
/**
* get_buf_size - return the size of an RX Free List buffer.
+ * @adapter: pointer to the associated adapter
* @sdesc: pointer to the software buffer descriptor
*/
-static inline int get_buf_size(const struct rx_sw_desc *sdesc)
+static inline int get_buf_size(const struct adapter *adapter,
+ const struct rx_sw_desc *sdesc)
{
- return FL_PG_ORDER > 0 && (sdesc->dma_addr & RX_LARGE_BUF)
- ? (PAGE_SIZE << FL_PG_ORDER)
- : PAGE_SIZE;
+ const struct sge *s = &adapter->sge;
+
+ return (s->fl_pg_order > 0 && (sdesc->dma_addr & RX_LARGE_BUF)
+ ? (PAGE_SIZE << s->fl_pg_order) : PAGE_SIZE);
}
/**
if (is_buf_mapped(sdesc))
dma_unmap_page(adapter->pdev_dev, get_buf_addr(sdesc),
- get_buf_size(sdesc), PCI_DMA_FROMDEVICE);
+ get_buf_size(adapter, sdesc),
+ PCI_DMA_FROMDEVICE);
put_page(sdesc->page);
sdesc->page = NULL;
if (++fl->cidx == fl->size)
if (is_buf_mapped(sdesc))
dma_unmap_page(adapter->pdev_dev, get_buf_addr(sdesc),
- get_buf_size(sdesc), PCI_DMA_FROMDEVICE);
+ get_buf_size(adapter, sdesc),
+ PCI_DMA_FROMDEVICE);
sdesc->page = NULL;
if (++fl->cidx == fl->size)
fl->cidx = 0;
static unsigned int refill_fl(struct adapter *adapter, struct sge_fl *fl,
int n, gfp_t gfp)
{
+ struct sge *s = &adapter->sge;
struct page *page;
dma_addr_t dma_addr;
unsigned int cred = fl->avail;
* If we don't support large pages, drop directly into the small page
* allocation code.
*/
- if (FL_PG_ORDER == 0)
+ if (s->fl_pg_order == 0)
goto alloc_small_pages;
while (n) {
page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
- FL_PG_ORDER);
+ s->fl_pg_order);
if (unlikely(!page)) {
/*
* We've failed in our attempt to allocate a "large
fl->large_alloc_failed++;
break;
}
- poison_buf(page, PAGE_SIZE << FL_PG_ORDER);
+ poison_buf(page, PAGE_SIZE << s->fl_pg_order);
dma_addr = dma_map_page(adapter->pdev_dev, page, 0,
- PAGE_SIZE << FL_PG_ORDER,
+ PAGE_SIZE << s->fl_pg_order,
PCI_DMA_FROMDEVICE);
if (unlikely(dma_mapping_error(adapter->pdev_dev, dma_addr))) {
/*
* because DMA mapping resources are typically
* critical resources once they become scarce.
*/
- __free_pages(page, FL_PG_ORDER);
+ __free_pages(page, s->fl_pg_order);
goto out;
}
dma_addr |= RX_LARGE_BUF;
fl->pend_cred += cred;
ring_fl_db(adapter, fl);
- if (unlikely(fl_starving(fl))) {
+ if (unlikely(fl_starving(adapter, fl))) {
smp_wmb();
set_bit(fl->cntxt_id, adapter->sge.starving_fl);
}
static void do_gro(struct sge_eth_rxq *rxq, const struct pkt_gl *gl,
const struct cpl_rx_pkt *pkt)
{
+ struct adapter *adapter = rxq->rspq.adapter;
+ struct sge *s = &adapter->sge;
int ret;
struct sk_buff *skb;
return;
}
- copy_frags(skb, gl, PKTSHIFT);
- skb->len = gl->tot_len - PKTSHIFT;
+ copy_frags(skb, gl, s->pktshift);
+ skb->len = gl->tot_len - s->pktshift;
skb->data_len = skb->len;
skb->truesize += skb->data_len;
skb->ip_summed = CHECKSUM_UNNECESSARY;
bool csum_ok = pkt->csum_calc && !pkt->err_vec &&
(rspq->netdev->features & NETIF_F_RXCSUM);
struct sge_eth_rxq *rxq = container_of(rspq, struct sge_eth_rxq, rspq);
+ struct adapter *adapter = rspq->adapter;
+ struct sge *s = &adapter->sge;
/*
* If this is a good TCP packet and we have Generic Receive Offload
rxq->stats.rx_drops++;
return 0;
}
- __skb_pull(skb, PKTSHIFT);
+ __skb_pull(skb, s->pktshift);
skb->protocol = eth_type_trans(skb, rspq->netdev);
skb_record_rx_queue(skb, rspq->idx);
rxq->stats.pkts++;
static int process_responses(struct sge_rspq *rspq, int budget)
{
struct sge_eth_rxq *rxq = container_of(rspq, struct sge_eth_rxq, rspq);
+ struct adapter *adapter = rspq->adapter;
+ struct sge *s = &adapter->sge;
int budget_left = budget;
while (likely(budget_left)) {
BUG_ON(frag >= MAX_SKB_FRAGS);
BUG_ON(rxq->fl.avail == 0);
sdesc = &rxq->fl.sdesc[rxq->fl.cidx];
- bufsz = get_buf_size(sdesc);
+ bufsz = get_buf_size(adapter, sdesc);
fp->page = sdesc->page;
fp->offset = rspq->offset;
fp->size = min(bufsz, len);
*/
ret = rspq->handler(rspq, rspq->cur_desc, &gl);
if (likely(ret == 0))
- rspq->offset += ALIGN(fp->size, FL_ALIGN);
+ rspq->offset += ALIGN(fp->size, s->fl_align);
else
restore_rx_bufs(&gl, &rxq->fl, frag);
} else if (likely(rsp_type == RSP_TYPE_CPL)) {
* schedule napi but the FL is no longer starving.
* No biggie.
*/
- if (fl_starving(fl)) {
+ if (fl_starving(adapter, fl)) {
struct sge_eth_rxq *rxq;
rxq = container_of(fl, struct sge_eth_rxq, fl);
int intr_dest,
struct sge_fl *fl, rspq_handler_t hnd)
{
+ struct sge *s = &adapter->sge;
struct port_info *pi = netdev_priv(dev);
struct fw_iq_cmd cmd, rpl;
int ret, iqandst, flsz = 0;
fl->size = roundup(fl->size, FL_PER_EQ_UNIT);
fl->desc = alloc_ring(adapter->pdev_dev, fl->size,
sizeof(__be64), sizeof(struct rx_sw_desc),
- &fl->addr, &fl->sdesc, STAT_LEN);
+ &fl->addr, &fl->sdesc, s->stat_len);
if (!fl->desc) {
ret = -ENOMEM;
goto err;
* free list ring) in Egress Queue Units.
*/
flsz = (fl->size / FL_PER_EQ_UNIT +
- STAT_LEN / EQ_UNIT);
+ s->stat_len / EQ_UNIT);
/*
* Fill in all the relevant firmware Ingress Queue Command
struct net_device *dev, struct netdev_queue *devq,
unsigned int iqid)
{
+ struct sge *s = &adapter->sge;
int ret, nentries;
struct fw_eq_eth_cmd cmd, rpl;
struct port_info *pi = netdev_priv(dev);
* Calculate the size of the hardware TX Queue (including the Status
* Page on the end of the TX Queue) in units of TX Descriptors.
*/
- nentries = txq->q.size + STAT_LEN / sizeof(struct tx_desc);
+ nentries = txq->q.size + s->stat_len / sizeof(struct tx_desc);
/*
* Allocate the hardware ring for the TX ring (with space for its
txq->q.desc = alloc_ring(adapter->pdev_dev, txq->q.size,
sizeof(struct tx_desc),
sizeof(struct tx_sw_desc),
- &txq->q.phys_addr, &txq->q.sdesc, STAT_LEN);
+ &txq->q.phys_addr, &txq->q.sdesc, s->stat_len);
if (!txq->q.desc)
return -ENOMEM;
*/
static void free_txq(struct adapter *adapter, struct sge_txq *tq)
{
+ struct sge *s = &adapter->sge;
+
dma_free_coherent(adapter->pdev_dev,
- tq->size * sizeof(*tq->desc) + STAT_LEN,
+ tq->size * sizeof(*tq->desc) + s->stat_len,
tq->desc, tq->phys_addr);
tq->cntxt_id = 0;
tq->sdesc = NULL;
static void free_rspq_fl(struct adapter *adapter, struct sge_rspq *rspq,
struct sge_fl *fl)
{
+ struct sge *s = &adapter->sge;
unsigned int flid = fl ? fl->cntxt_id : 0xffff;
t4vf_iq_free(adapter, FW_IQ_TYPE_FL_INT_CAP,
if (fl) {
free_rx_bufs(adapter, fl, fl->avail);
dma_free_coherent(adapter->pdev_dev,
- fl->size * sizeof(*fl->desc) + STAT_LEN,
+ fl->size * sizeof(*fl->desc) + s->stat_len,
fl->desc, fl->addr);
kfree(fl->sdesc);
fl->sdesc = NULL;
u32 fl0 = sge_params->sge_fl_buffer_size[0];
u32 fl1 = sge_params->sge_fl_buffer_size[1];
struct sge *s = &adapter->sge;
+ unsigned int ingpadboundary, ingpackboundary;
/*
* Start by vetting the basic SGE parameters which have been set up by
* Now translate the adapter parameters into our internal forms.
*/
if (fl1)
- FL_PG_ORDER = ilog2(fl1) - PAGE_SHIFT;
- STAT_LEN = ((sge_params->sge_control & EGRSTATUSPAGESIZE_MASK)
- ? 128 : 64);
- PKTSHIFT = PKTSHIFT_GET(sge_params->sge_control);
- FL_ALIGN = 1 << (INGPADBOUNDARY_GET(sge_params->sge_control) +
- SGE_INGPADBOUNDARY_SHIFT);
+ s->fl_pg_order = ilog2(fl1) - PAGE_SHIFT;
+ s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_MASK)
+ ? 128 : 64);
+ s->pktshift = PKTSHIFT_GET(sge_params->sge_control);
+
+ /* T4 uses a single control field to specify both the PCIe Padding and
+ * Packing Boundary. T5 introduced the ability to specify these
+ * separately. The actual Ingress Packet Data alignment boundary
+ * within Packed Buffer Mode is the maximum of these two
+ * specifications. (Note that it makes no real practical sense to
+ * have the Padding Boundary be larger than the Packing Boundary, but you
+ * could set the chip up that way and, in fact, legacy T4 code would
+ * end up doing this because it would initialize the Padding Boundary and
+ * leave the Packing Boundary initialized to 0 (16 bytes).)
+ */
+ ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_params->sge_control) +
+ X_INGPADBOUNDARY_SHIFT);
+ if (is_t4(adapter->params.chip)) {
+ s->fl_align = ingpadboundary;
+ } else {
+ /* T5 has a different interpretation of one of the PCIe Packing
+ * Boundary values.
+ */
+ ingpackboundary = INGPACKBOUNDARY_G(sge_params->sge_control2);
+ if (ingpackboundary == INGPACKBOUNDARY_16B_X)
+ ingpackboundary = 16;
+ else
+ ingpackboundary = 1 << (ingpackboundary +
+ INGPACKBOUNDARY_SHIFT_X);
+
+ s->fl_align = max(ingpadboundary, ingpackboundary);
+ }
+
+ /* A FL with <= fl_starve_thres buffers is starving and a periodic
+ * timer will attempt to refill it. This needs to be larger than the
+ * SGE's Egress Congestion Threshold. If it isn't, then we can get
+ * stuck waiting for new packets while the SGE is waiting for us to
+ * give it more Free List entries. (Note that the SGE's Egress
+ * Congestion Threshold is in units of 2 Free List pointers.)
+ */
+ s->fl_starve_thres
+ = EGRTHRESHOLD_GET(sge_params->sge_congestion_control)*2 + 1;
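Because EGRTHRESHOLD is reported in units of 2 Free List pointers, the *2 + 1 keeps the starvation threshold strictly above the SGE's congestion threshold. A one-line worked example with an assumed register value:

	/* Illustrative only: an Egress Congestion Threshold of 16 (i.e. 32 Free
	 * List pointers) makes the freelist count as starving at 2*16 + 1 = 33
	 * buffers or fewer, so the refill timer always fires first.
	 */
	unsigned int example_fl_starve_thres = 16 * 2 + 1;	/* == 33 */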
/*
* Set up tasklet timers.
*/
struct sge_params {
u32 sge_control; /* padding, boundaries, lengths, etc. */
+ u32 sge_control2; /* T5: more of the same */
u32 sge_host_page_size; /* RDMA page sizes */
u32 sge_queues_per_page; /* RDMA queues/page */
u32 sge_user_mode_limits; /* limits for BAR2 user mode accesses */
u32 sge_fl_buffer_size[16]; /* free list buffer sizes */
u32 sge_ingress_rx_threshold; /* RX counter interrupt threshold[4] */
+ u32 sge_congestion_control; /* congestion thresholds, etc. */
u32 sge_timer_value_0_and_1; /* interrupt coalescing timer values */
u32 sge_timer_value_2_and_3;
u32 sge_timer_value_4_and_5;
sge_params->sge_timer_value_2_and_3 = vals[5];
sge_params->sge_timer_value_4_and_5 = vals[6];
+ /* T4 uses a single control field to specify both the PCIe Padding and
+ * Packing Boundary. T5 introduced the ability to specify these
+ * separately with the Padding Boundary in SGE_CONTROL and the Packing
+ * Boundary in SGE_CONTROL2. So for T5 and later we need to grab
+ * SGE_CONTROL2 in order to determine how ingress packet data will be
+ * laid out in Packed Buffer Mode. Unfortunately, older versions of
+ * the firmware won't let us retrieve SGE_CONTROL2 so if we get a
+ * failure grabbing it we throw an error since we can't figure out the
+ * right value.
+ */
+ if (!is_t4(adapter->params.chip)) {
+ params[0] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) |
+ FW_PARAMS_PARAM_XYZ(SGE_CONTROL2_A));
+ v = t4vf_query_params(adapter, 1, params, vals);
+ if (v != FW_SUCCESS) {
+ dev_err(adapter->pdev_dev,
+ "Unable to get SGE Control2; "
+ "probably old firmware.\n");
+ return v;
+ }
+ sge_params->sge_control2 = vals[0];
+ }
+
params[0] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) |
FW_PARAMS_PARAM_XYZ(SGE_INGRESS_RX_THRESHOLD));
- v = t4vf_query_params(adapter, 1, params, vals);
+ params[1] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) |
+ FW_PARAMS_PARAM_XYZ(SGE_CONM_CTRL));
+ v = t4vf_query_params(adapter, 2, params, vals);
if (v)
return v;
sge_params->sge_ingress_rx_threshold = vals[0];
+ sge_params->sge_congestion_control = vals[1];
return 0;
}
struct vnic_rq_buf *buf = rq->to_use;
if (buf->os_buf) {
- buf = buf->next;
- rq->to_use = buf;
- rq->ring.desc_avail--;
- if ((buf->index & VNIC_RQ_RETURN_RATE) == 0) {
- /* Adding write memory barrier prevents compiler and/or
- * CPU reordering, thus avoiding descriptor posting
- * before descriptor is initialized. Otherwise, hardware
- * can read stale descriptor fields.
- */
- wmb();
- iowrite32(buf->index, &rq->ctrl->posted_index);
- }
+ enic_queue_rq_desc(rq, buf->os_buf, os_buf_index, buf->dma_addr,
+ buf->len);
return 0;
}
enic->rq_truncated_pkts++;
}
+ pci_unmap_single(enic->pdev, buf->dma_addr, buf->len,
+ PCI_DMA_FROMDEVICE);
dev_kfree_skb_any(skb);
+ buf->os_buf = NULL;
return;
}
/* Buffer overflow
*/
+ pci_unmap_single(enic->pdev, buf->dma_addr, buf->len,
+ PCI_DMA_FROMDEVICE);
dev_kfree_skb_any(skb);
+ buf->os_buf = NULL;
}
}
return bufaddr;
}
+static void swap_buffer2(void *dst_buf, void *src_buf, int len)
+{
+ int i;
+ unsigned int *src = src_buf;
+ unsigned int *dst = dst_buf;
+
+ for (i = 0; i < len; i += 4, src++, dst++)
+ *dst = swab32p(src);
+}
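swap_buffer2() is the copying counterpart of the in-place swap_buffer(): it byte-reverses each 32-bit word while moving data from src_buf to dst_buf, which is what the copybreak path below needs. A hedged usage sketch with made-up frame data:

	/* Illustrative only: swab32p() reverses the byte order of each word,
	 * so {0x11223344, 0xaabbccdd} comes out as {0x44332211, 0xddccbbaa}.
	 */
	unsigned int src[2] = { 0x11223344, 0xaabbccdd };
	unsigned int dst[2];

	swap_buffer2(dst, src, sizeof(src));	/* len is in bytes */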
+
static void fec_dump(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
}
static bool fec_enet_copybreak(struct net_device *ndev, struct sk_buff **skb,
- struct bufdesc *bdp, u32 length)
+ struct bufdesc *bdp, u32 length, bool swap)
{
struct fec_enet_private *fep = netdev_priv(ndev);
struct sk_buff *new_skb;
dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
FEC_ENET_RX_FRSIZE - fep->rx_align,
DMA_FROM_DEVICE);
- memcpy(new_skb->data, (*skb)->data, length);
+ if (!swap)
+ memcpy(new_skb->data, (*skb)->data, length);
+ else
+ swap_buffer2(new_skb->data, (*skb)->data, length);
*skb = new_skb;
return true;
u16 vlan_tag;
int index = 0;
bool is_copybreak;
+ bool need_swap = id_entry->driver_data & FEC_QUIRK_SWAP_FRAME;
#ifdef CONFIG_M532x
flush_cache_all();
* include that when passing upstream as it messes up
* bridging applications.
*/
- is_copybreak = fec_enet_copybreak(ndev, &skb, bdp, pkt_len - 4);
+ is_copybreak = fec_enet_copybreak(ndev, &skb, bdp, pkt_len - 4,
+ need_swap);
if (!is_copybreak) {
skb_new = netdev_alloc_skb(ndev, FEC_ENET_RX_FRSIZE);
if (unlikely(!skb_new)) {
prefetch(skb->data - NET_IP_ALIGN);
skb_put(skb, pkt_len - 4);
data = skb->data;
- if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
+ if (!is_copybreak && need_swap)
swap_buffer(data, pkt_len);
/* Extract the enhanced buffer descriptor */
netif_device_detach(ndev);
netif_tx_unlock_bh(ndev);
fec_stop(ndev);
+ fec_enet_clk_enable(ndev, false);
+ pinctrl_pm_select_sleep_state(&fep->pdev->dev);
}
rtnl_unlock();
- fec_enet_clk_enable(ndev, false);
- pinctrl_pm_select_sleep_state(&fep->pdev->dev);
-
if (fep->reg_phy)
regulator_disable(fep->reg_phy);
return ret;
}
- pinctrl_pm_select_default_state(&fep->pdev->dev);
- ret = fec_enet_clk_enable(ndev, true);
- if (ret)
- goto failed_clk;
-
rtnl_lock();
if (netif_running(ndev)) {
+ pinctrl_pm_select_default_state(&fep->pdev->dev);
+ ret = fec_enet_clk_enable(ndev, true);
+ if (ret) {
+ rtnl_unlock();
+ goto failed_clk;
+ }
fec_restart(ndev);
netif_tx_lock_bh(ndev);
netif_device_attach(ndev);
**/
s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw)
{
- s32 status;
u16 autoneg_reg = IXGBE_MII_AUTONEG_REG;
bool autoneg = false;
ixgbe_link_speed speed;
hw->phy.ops.write_reg(hw, MDIO_CTRL1,
MDIO_MMD_AN, autoneg_reg);
-
- return status;
+ return 0;
}
/**
int tx_index;
struct tx_desc *desc;
u32 cmd_sts;
- struct sk_buff *skb;
tx_index = txq->tx_used_desc;
desc = &txq->tx_desc_area[tx_index];
reclaimed++;
txq->tx_desc_count--;
- skb = NULL;
- if (cmd_sts & TX_LAST_DESC)
- skb = __skb_dequeue(&txq->tx_skb);
+ if (!IS_TSO_HEADER(txq, desc->buf_ptr))
+ dma_unmap_single(mp->dev->dev.parent, desc->buf_ptr,
+ desc->byte_cnt, DMA_TO_DEVICE);
+
+ if (cmd_sts & TX_ENABLE_INTERRUPT) {
+ struct sk_buff *skb = __skb_dequeue(&txq->tx_skb);
+
+ if (!WARN_ON(!skb))
+ dev_kfree_skb(skb);
+ }
if (cmd_sts & ERROR_SUMMARY) {
netdev_info(mp->dev, "tx error\n");
mp->dev->stats.tx_errors++;
}
- if (!IS_TSO_HEADER(txq, desc->buf_ptr))
- dma_unmap_single(mp->dev->dev.parent, desc->buf_ptr,
- desc->byte_cnt, DMA_TO_DEVICE);
- dev_kfree_skb(skb);
}
__netif_tx_unlock_bh(nq);
{
struct mvpp2_prs_entry *pe;
int tid_aux, tid;
+ int ret = 0;
pe = mvpp2_prs_vlan_find(priv, tpid, ai);
break;
}
- if (tid <= tid_aux)
- return -EINVAL;
+ if (tid <= tid_aux) {
+ ret = -EINVAL;
+ goto error;
+ }
memset(pe, 0 , sizeof(struct mvpp2_prs_entry));
mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_VLAN);
mvpp2_prs_hw_write(priv, pe);
+error:
kfree(pe);
- return 0;
+ return ret;
}
/* Get first free double vlan ai number */
unsigned int port_map)
{
struct mvpp2_prs_entry *pe;
- int tid_aux, tid, ai;
+ int tid_aux, tid, ai, ret = 0;
pe = mvpp2_prs_double_vlan_find(priv, tpid1, tpid2);
/* Set ai value for new double vlan entry */
ai = mvpp2_prs_double_vlan_ai_free_get(priv);
- if (ai < 0)
- return ai;
+ if (ai < 0) {
+ ret = ai;
+ goto error;
+ }
/* Get first single/triple vlan tid */
for (tid_aux = MVPP2_PE_FIRST_FREE_TID;
break;
}
- if (tid >= tid_aux)
- return -ERANGE;
+ if (tid >= tid_aux) {
+ ret = -ERANGE;
+ goto error;
+ }
memset(pe, 0, sizeof(struct mvpp2_prs_entry));
mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_VLAN);
mvpp2_prs_tcam_port_map_set(pe, port_map);
mvpp2_prs_hw_write(priv, pe);
+error:
kfree(pe);
- return 0;
+ return ret;
}
/* IPv4 header parsing for fragmentation and L4 offset */
ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port,
VXLAN_STEER_BY_OUTER_MAC, 1);
out:
- if (ret)
+ if (ret) {
en_err(priv, "failed setting L2 tunnel configuration ret %d\n", ret);
+ return;
+ }
+
+ /* set offloads */
+ priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
+ NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL;
+ priv->dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+ priv->dev->features |= NETIF_F_GSO_UDP_TUNNEL;
}
static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
int ret;
struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv,
vxlan_del_task);
+ /* unset offloads */
+ priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
+ NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL);
+ priv->dev->hw_features &= ~NETIF_F_GSO_UDP_TUNNEL;
+ priv->dev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port,
VXLAN_STEER_BY_OUTER_MAC, 0);
if (mdev->dev->caps.steering_mode != MLX4_STEERING_MODE_A0)
dev->priv_flags |= IFF_UNICAST_FLT;
- if (mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
- dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
- NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL;
- dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
- dev->features |= NETIF_F_GSO_UDP_TUNNEL;
- }
-
mdev->pndev[port] = dev;
netif_carrier_off(dev);
snprintf(eq->name, MLX5_MAX_EQ_NAME, "%s@pci:%s",
name, pci_name(dev->pdev));
eq->eqn = out.eq_number;
+ eq->irqn = vecidx;
+ eq->dev = dev;
+ eq->doorbell = uar->map + MLX5_EQ_DOORBEL_OFFSET;
err = request_irq(table->msix_arr[vecidx].vector, mlx5_msix_handler, 0,
eq->name, eq);
if (err)
goto err_eq;
- eq->irqn = vecidx;
- eq->dev = dev;
- eq->doorbell = uar->map + MLX5_EQ_DOORBEL_OFFSET;
-
err = mlx5_debug_eq_add(dev, eq);
if (err)
goto err_irq;
dev->profile = &profile[prof_sel];
dev->event = mlx5_core_event;
+ INIT_LIST_HEAD(&priv->ctx_list);
+ spin_lock_init(&priv->ctx_lock);
err = mlx5_dev_init(dev, pdev);
if (err) {
dev_err(&pdev->dev, "mlx5_dev_init failed %d\n", err);
goto out;
}
- INIT_LIST_HEAD(&priv->ctx_list);
- spin_lock_init(&priv->ctx_lock);
err = mlx5_register_device(dev);
if (err) {
dev_err(&pdev->dev, "mlx5_register_device failed %d\n", err);
if (test_bit(__NX_RESETTING, &adapter->state))
goto reschedule;
- if (test_bit(__NX_DEV_UP, &adapter->state)) {
+ if (test_bit(__NX_DEV_UP, &adapter->state) &&
+ !(adapter->capabilities & NX_FW_CAPABILITY_LINK_NOTIFICATION)) {
if (!adapter->has_link_events) {
netxen_nic_handle_phy_intr(adapter);
config NET_VENDOR_QUALCOMM
bool "Qualcomm devices"
default y
- depends on SPI_MASTER && OF_GPIO
---help---
If you have a network (Ethernet) card belonging to this class, say Y
and read the Ethernet-HOWTO, available from
config QCA7000
tristate "Qualcomm Atheros QCA7000 support"
- depends on SPI_MASTER && OF_GPIO
+ depends on SPI_MASTER && OF
---help---
This SPI protocol driver supports the Qualcomm Atheros QCA7000.
EFX_MAX_CHANNELS,
resource_size(&efx->pci_dev->resource[EFX_MEM_BAR]) /
(EFX_VI_PAGE_SIZE * EFX_TXQ_TYPES));
- BUG_ON(efx->max_channels == 0);
+ if (WARN_ON(efx->max_channels == 0))
+ return -EIO;
nic_data = kzalloc(sizeof(*nic_data), GFP_KERNEL);
if (!nic_data)
const struct of_device_id *match = NULL;
struct smc_local *lp;
struct net_device *ndev;
- struct resource *res, *ires;
+ struct resource *res;
unsigned int __iomem *addr;
unsigned long irq_flags = SMC_IRQ_FLAGS;
+ unsigned long irq_resflags;
int ret;
ndev = alloc_etherdev(sizeof(struct smc_local));
goto out_free_netdev;
}
- ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (!ires) {
+ ndev->irq = platform_get_irq(pdev, 0);
+ if (ndev->irq <= 0) {
ret = -ENODEV;
goto out_release_io;
}
-
- ndev->irq = ires->start;
-
- if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK)
- irq_flags = ires->flags & IRQF_TRIGGER_MASK;
+ /*
+ * If this platform does not specify any special irqflags, or if
+ * the resource supplies a trigger, override the irqflags with
+ * the trigger flags from the resource.
+ */
+ irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq));
+ if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK)
+ irq_flags = irq_resflags & IRQF_TRIGGER_MASK;
ret = smc_request_attrib(pdev, ndev);
if (ret)
spin_unlock(&pdata->mac_lock);
}
+static int smsc911x_phy_general_power_up(struct smsc911x_data *pdata)
+{
+ int rc = 0;
+
+ if (!pdata->phy_dev)
+ return rc;
+
+ /* If the internal PHY is in General Power-Down mode, all, except the
+ * management interface, is powered-down and stays in that condition as
+ * long as Phy register bit 0.11 is HIGH.
+ *
+ * In that case, clear bit 0.11 so the PHY powers up and we can
+ * access the PHY registers.
+ */
+ rc = phy_read(pdata->phy_dev, MII_BMCR);
+ if (rc < 0) {
+ SMSC_WARN(pdata, drv, "Failed reading PHY control reg");
+ return rc;
+ }
+
+ /* If the PHY general power-down bit is not set, it is not necessary to
+ * disable the general power-down mode.
+ */
+ if (rc & BMCR_PDOWN) {
+ rc = phy_write(pdata->phy_dev, MII_BMCR, rc & ~BMCR_PDOWN);
+ if (rc < 0) {
+ SMSC_WARN(pdata, drv, "Failed writing PHY control reg");
+ return rc;
+ }
+
+ usleep_range(1000, 1500);
+ }
+
+ return 0;
+}
+
static int smsc911x_phy_disable_energy_detect(struct smsc911x_data *pdata)
{
int rc = 0;
return rc;
}
- /*
- * If energy is detected the PHY is already awake so is not necessary
- * to disable the energy detect power-down mode.
- */
- if ((rc & MII_LAN83C185_EDPWRDOWN) &&
- !(rc & MII_LAN83C185_ENERGYON)) {
+ /* Only disable if energy detect mode is already enabled */
+ if (rc & MII_LAN83C185_EDPWRDOWN) {
/* Disable energy detect mode for this SMSC Transceivers */
rc = phy_write(pdata->phy_dev, MII_LAN83C185_CTRL_STATUS,
rc & (~MII_LAN83C185_EDPWRDOWN));
SMSC_WARN(pdata, drv, "Failed writing PHY control reg");
return rc;
}
-
- mdelay(1);
+ /* Allow PHY to wake up */
+ mdelay(2);
}
return 0;
/* Only enable if energy detect mode is already disabled */
if (!(rc & MII_LAN83C185_EDPWRDOWN)) {
- mdelay(100);
/* Enable energy detect mode for this SMSC Transceivers */
rc = phy_write(pdata->phy_dev, MII_LAN83C185_CTRL_STATUS,
rc | MII_LAN83C185_EDPWRDOWN);
SMSC_WARN(pdata, drv, "Failed writing PHY control reg");
return rc;
}
-
- mdelay(1);
}
return 0;
}
unsigned int temp;
int ret;
+ /*
+ * Make sure to power-up the PHY chip before doing a reset, otherwise
+ * the reset fails.
+ */
+ ret = smsc911x_phy_general_power_up(pdata);
+ if (ret) {
+ SMSC_WARN(pdata, drv, "Failed to power-up the PHY chip");
+ return ret;
+ }
+
/*
* LAN9210/LAN9211/LAN9220/LAN9221 chips have an internal PHY that
* are initialized in a Energy Detect Power-Down mode that prevents
bool stmmac_eee_init(struct stmmac_priv *priv)
{
char *phy_bus_name = priv->plat->phy_bus_name;
+ unsigned long flags;
bool ret = false;
/* Using PCS we cannot dial with the phy registers at this stage
* changed).
* In that case the driver disable own timers.
*/
+ spin_lock_irqsave(&priv->lock, flags);
if (priv->eee_active) {
pr_debug("stmmac: disable EEE\n");
del_timer_sync(&priv->eee_ctrl_timer);
tx_lpi_timer);
}
priv->eee_active = 0;
+ spin_unlock_irqrestore(&priv->lock, flags);
goto out;
}
/* Activate the EEE and start timers */
+ spin_lock_irqsave(&priv->lock, flags);
if (!priv->eee_active) {
priv->eee_active = 1;
init_timer(&priv->eee_ctrl_timer);
/* Set HW EEE according to the speed */
priv->hw->mac->set_eee_pls(priv->hw, priv->phydev->link);
- pr_debug("stmmac: Energy-Efficient Ethernet initialized\n");
-
ret = true;
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ pr_debug("stmmac: Energy-Efficient Ethernet initialized\n");
}
out:
return ret;
if (new_state && netif_msg_link(priv))
phy_print_status(phydev);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
/* At this stage, it could be needed to setup the EEE or adjust some
* MAC related HW registers.
*/
priv->eee_enabled = stmmac_eee_init(priv);
-
- spin_unlock_irqrestore(&priv->lock, flags);
}
/**
}
static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
- int i)
+ int i, gfp_t flags)
{
struct sk_buff *skb;
skb = __netdev_alloc_skb(priv->dev, priv->dma_buf_sz + NET_IP_ALIGN,
- GFP_KERNEL);
+ flags);
if (!skb) {
pr_err("%s: Rx init fails; skb is NULL\n", __func__);
return -ENOMEM;
* and allocates the socket buffers. It supports the chained and ring
* modes.
*/
-static int init_dma_desc_rings(struct net_device *dev)
+static int init_dma_desc_rings(struct net_device *dev, gfp_t flags)
{
int i;
struct stmmac_priv *priv = netdev_priv(dev);
else
p = priv->dma_rx + i;
- ret = stmmac_init_rx_buffers(priv, p, i);
+ ret = stmmac_init_rx_buffers(priv, p, i, flags);
if (ret)
goto err_init_rx_buffers;
struct stmmac_priv *priv = netdev_priv(dev);
int ret;
- ret = init_dma_desc_rings(dev);
- if (ret < 0) {
- pr_err("%s: DMA descriptors initialization failed\n", __func__);
- return ret;
- }
/* DMA initialization and SW reset */
ret = stmmac_init_dma_engine(priv);
if (ret < 0) {
}
priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS;
- priv->eee_enabled = stmmac_eee_init(priv);
-
- stmmac_init_tx_coalesce(priv);
-
if ((priv->use_riwt) && (priv->hw->dma->rx_watchdog)) {
priv->rx_riwt = MAX_DMA_RIWT;
priv->hw->dma->rx_watchdog(priv->ioaddr, MAX_DMA_RIWT);
goto dma_desc_error;
}
+ ret = init_dma_desc_rings(dev, GFP_KERNEL);
+ if (ret < 0) {
+ pr_err("%s: DMA descriptors initialization failed\n", __func__);
+ goto init_error;
+ }
+
ret = stmmac_hw_setup(dev);
if (ret < 0) {
pr_err("%s: Hw setup failed\n", __func__);
goto init_error;
}
+ stmmac_init_tx_coalesce(priv);
+
if (priv->phydev)
phy_start(priv->phydev);
unsigned int nopaged_len = skb_headlen(skb);
unsigned int enh_desc = priv->plat->enh_desc;
+ spin_lock(&priv->tx_lock);
+
if (unlikely(stmmac_tx_avail(priv) < nfrags + 1)) {
+ spin_unlock(&priv->tx_lock);
if (!netif_queue_stopped(dev)) {
netif_stop_queue(dev);
/* This is a hard error, log it. */
return NETDEV_TX_BUSY;
}
- spin_lock(&priv->tx_lock);
-
if (priv->tx_path_in_lpi_mode)
stmmac_disable_eee_mode(priv);
return NETDEV_TX_OK;
dma_map_err:
+ spin_unlock(&priv->tx_lock);
dev_err(priv->device, "Tx dma map failed\n");
dev_kfree_skb(skb);
priv->dev->stats.tx_dropped++;
{
struct stmmac_priv *priv = netdev_priv(dev);
- spin_lock(&priv->lock);
priv->hw->mac->set_filter(priv->hw, dev);
- spin_unlock(&priv->lock);
}
/**
stmmac_set_mac(priv->ioaddr, false);
pinctrl_pm_select_sleep_state(priv->device);
/* Disable clock in case of PWM is off */
- clk_disable_unprepare(priv->stmmac_clk);
+ clk_disable(priv->stmmac_clk);
}
spin_unlock_irqrestore(&priv->lock, flags);
} else {
pinctrl_pm_select_default_state(priv->device);
/* enable the clk prevously disabled */
- clk_prepare_enable(priv->stmmac_clk);
+ clk_enable(priv->stmmac_clk);
/* reset the phy so that it's ready */
if (priv->mii)
stmmac_mdio_reset(priv->mii);
netif_device_attach(ndev);
+ init_dma_desc_rings(ndev, GFP_ATOMIC);
stmmac_hw_setup(ndev);
+ stmmac_init_tx_coalesce(priv);
napi_enable(&priv->napi);
HMD(("init rxring, "));
for (i = 0; i < RX_RING_SIZE; i++) {
struct sk_buff *skb;
+ u32 mapping;
skb = happy_meal_alloc_skb(RX_BUF_ALLOC_SIZE, GFP_ATOMIC);
if (!skb) {
/* Because we reserve afterwards. */
skb_put(skb, (ETH_FRAME_LEN + RX_OFFSET + 4));
+ mapping = dma_map_single(hp->dma_dev, skb->data, RX_BUF_ALLOC_SIZE,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(hp->dma_dev, mapping)) {
+ dev_kfree_skb_any(skb);
+ hme_write_rxd(hp, &hb->happy_meal_rxd[i], 0, 0);
+ continue;
+ }
hme_write_rxd(hp, &hb->happy_meal_rxd[i],
(RXFLAG_OWN | ((RX_BUF_ALLOC_SIZE - RX_OFFSET) << 16)),
- dma_map_single(hp->dma_dev, skb->data, RX_BUF_ALLOC_SIZE,
- DMA_FROM_DEVICE));
+ mapping);
skb_reserve(skb, RX_OFFSET);
}
skb = hp->rx_skbs[elem];
if (len > RX_COPY_THRESHOLD) {
struct sk_buff *new_skb;
+ u32 mapping;
/* Now refill the entry, if we can. */
new_skb = happy_meal_alloc_skb(RX_BUF_ALLOC_SIZE, GFP_ATOMIC);
drops++;
goto drop_it;
}
+ skb_put(new_skb, (ETH_FRAME_LEN + RX_OFFSET + 4));
+ mapping = dma_map_single(hp->dma_dev, new_skb->data,
+ RX_BUF_ALLOC_SIZE,
+ DMA_FROM_DEVICE);
+ if (unlikely(dma_mapping_error(hp->dma_dev, mapping))) {
+ dev_kfree_skb_any(new_skb);
+ drops++;
+ goto drop_it;
+ }
+
dma_unmap_single(hp->dma_dev, dma_addr, RX_BUF_ALLOC_SIZE, DMA_FROM_DEVICE);
hp->rx_skbs[elem] = new_skb;
- skb_put(new_skb, (ETH_FRAME_LEN + RX_OFFSET + 4));
hme_write_rxd(hp, this,
(RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),
- dma_map_single(hp->dma_dev, new_skb->data, RX_BUF_ALLOC_SIZE,
- DMA_FROM_DEVICE));
+ mapping);
skb_reserve(new_skb, RX_OFFSET);
/* Trim the original skb for the netif. */
netif_wake_queue(dev);
}
+static void unmap_partial_tx_skb(struct happy_meal *hp, u32 first_mapping,
+ u32 first_len, u32 first_entry, u32 entry)
+{
+ struct happy_meal_txd *txbase = &hp->happy_block->happy_meal_txd[0];
+
+ dma_unmap_single(hp->dma_dev, first_mapping, first_len, DMA_TO_DEVICE);
+
+ first_entry = NEXT_TX(first_entry);
+ while (first_entry != entry) {
+ struct happy_meal_txd *this = &txbase[first_entry];
+ u32 addr, len;
+
+ addr = hme_read_desc32(hp, &this->tx_addr);
+ len = hme_read_desc32(hp, &this->tx_flags);
+ len &= TXFLAG_SIZE;
+ dma_unmap_page(hp->dma_dev, addr, len, DMA_TO_DEVICE);
+ first_entry = NEXT_TX(first_entry);
+ }
+}
+
static netdev_tx_t happy_meal_start_xmit(struct sk_buff *skb,
struct net_device *dev)
{
len = skb->len;
mapping = dma_map_single(hp->dma_dev, skb->data, len, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(hp->dma_dev, mapping)))
+ goto out_dma_error;
tx_flags |= (TXFLAG_SOP | TXFLAG_EOP);
hme_write_txd(hp, &hp->happy_block->happy_meal_txd[entry],
(tx_flags | (len & TXFLAG_SIZE)),
first_len = skb_headlen(skb);
first_mapping = dma_map_single(hp->dma_dev, skb->data, first_len,
DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(hp->dma_dev, first_mapping)))
+ goto out_dma_error;
entry = NEXT_TX(entry);
for (frag = 0; frag < skb_shinfo(skb)->nr_frags; frag++) {
len = skb_frag_size(this_frag);
mapping = skb_frag_dma_map(hp->dma_dev, this_frag,
0, len, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(hp->dma_dev, mapping))) {
+ unmap_partial_tx_skb(hp, first_mapping, first_len,
+ first_entry, entry);
+ goto out_dma_error;
+ }
this_txflags = tx_flags;
if (frag == skb_shinfo(skb)->nr_frags - 1)
this_txflags |= TXFLAG_EOP;
tx_add_log(hp, TXLOG_ACTION_TXMIT, 0);
return NETDEV_TX_OK;
+
+out_dma_error:
+ hp->tx_skbs[hp->tx_new] = NULL;
+ spin_unlock_irq(&hp->happy_lock);
+
+ dev_kfree_skb_any(skb);
+ dev->stats.tx_dropped++;
+ return NETDEV_TX_OK;
}
static struct net_device_stats *happy_meal_get_stats(struct net_device *dev)
{
if (!ale)
return -EINVAL;
- cpsw_ale_stop(ale);
cpsw_ale_control_set(ale, 0, ALE_ENABLE, 0);
kfree(ale);
return 0;
switch (ptp_class & PTP_CLASS_PMASK) {
case PTP_CLASS_IPV4:
- offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN;
+ offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN;
break;
case PTP_CLASS_IPV6:
offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
if (skb->ip_summed == CHECKSUM_PARTIAL) {
vnet_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
vnet_hdr->csum_start = skb_checksum_start_offset(skb);
+ if (vlan_tx_tag_present(skb))
+ vnet_hdr->csum_start += VLAN_HLEN;
vnet_hdr->csum_offset = skb->csum_offset;
} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
vnet_hdr->flags = VIRTIO_NET_HDR_F_DATA_VALID;
switch (type & PTP_CLASS_PMASK) {
case PTP_CLASS_IPV4:
- offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN;
+ offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN;
break;
case PTP_CLASS_IPV6:
offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
switch (type & PTP_CLASS_PMASK) {
case PTP_CLASS_IPV4:
- offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN;
+ offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN;
break;
case PTP_CLASS_IPV6:
offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
{
struct mii_ioctl_data *mii_data = if_mii(ifr);
u16 val = mii_data->val_in;
+ bool change_autoneg = false;
switch (cmd) {
case SIOCGMIIPHY:
if (mii_data->phy_id == phydev->addr) {
switch (mii_data->reg_num) {
case MII_BMCR:
- if ((val & (BMCR_RESET | BMCR_ANENABLE)) == 0)
+ if ((val & (BMCR_RESET | BMCR_ANENABLE)) == 0) {
+ if (phydev->autoneg == AUTONEG_ENABLE)
+ change_autoneg = true;
phydev->autoneg = AUTONEG_DISABLE;
- else
+ if (val & BMCR_FULLDPLX)
+ phydev->duplex = DUPLEX_FULL;
+ else
+ phydev->duplex = DUPLEX_HALF;
+ if (val & BMCR_SPEED1000)
+ phydev->speed = SPEED_1000;
+ else if (val & BMCR_SPEED100)
+ phydev->speed = SPEED_100;
+ else
+ phydev->speed = SPEED_10;
+ } else {
+ if (phydev->autoneg == AUTONEG_DISABLE)
+ change_autoneg = true;
phydev->autoneg = AUTONEG_ENABLE;
- if (!phydev->autoneg && (val & BMCR_FULLDPLX))
- phydev->duplex = DUPLEX_FULL;
- else
- phydev->duplex = DUPLEX_HALF;
- if (!phydev->autoneg && (val & BMCR_SPEED1000))
- phydev->speed = SPEED_1000;
- else if (!phydev->autoneg &&
- (val & BMCR_SPEED100))
- phydev->speed = SPEED_100;
+ }
break;
case MII_ADVERTISE:
- phydev->advertising = val;
+ phydev->advertising = mii_adv_to_ethtool_adv_t(val);
+ change_autoneg = true;
break;
default:
/* do nothing */
if (mii_data->reg_num == MII_BMCR &&
val & BMCR_RESET)
return phy_init_hw(phydev);
+
+ if (change_autoneg)
+ return phy_start_aneg(phydev);
+
return 0;
case SIOCSHWTSTAMP:
err = get_filter(argp, &code);
if (err >= 0) {
+ struct bpf_prog *pass_filter = NULL;
struct sock_fprog_kern fprog = {
.len = err,
.filter = code,
};
- ppp_lock(ppp);
- if (ppp->pass_filter) {
- bpf_prog_destroy(ppp->pass_filter);
- ppp->pass_filter = NULL;
+ err = 0;
+ if (fprog.filter)
+ err = bpf_prog_create(&pass_filter, &fprog);
+ if (!err) {
+ ppp_lock(ppp);
+ if (ppp->pass_filter)
+ bpf_prog_destroy(ppp->pass_filter);
+ ppp->pass_filter = pass_filter;
+ ppp_unlock(ppp);
}
- if (fprog.filter != NULL)
- err = bpf_prog_create(&ppp->pass_filter,
- &fprog);
- else
- err = 0;
kfree(code);
- ppp_unlock(ppp);
}
break;
}
err = get_filter(argp, &code);
if (err >= 0) {
+ struct bpf_prog *active_filter = NULL;
struct sock_fprog_kern fprog = {
.len = err,
.filter = code,
};
- ppp_lock(ppp);
- if (ppp->active_filter) {
- bpf_prog_destroy(ppp->active_filter);
- ppp->active_filter = NULL;
+ err = 0;
+ if (fprog.filter)
+ err = bpf_prog_create(&active_filter, &fprog);
+ if (!err) {
+ ppp_lock(ppp);
+ if (ppp->active_filter)
+ bpf_prog_destroy(ppp->active_filter);
+ ppp->active_filter = active_filter;
+ ppp_unlock(ppp);
}
- if (fprog.filter != NULL)
- err = bpf_prog_create(&ppp->active_filter,
- &fprog);
- else
- err = 0;
kfree(code);
- ppp_unlock(ppp);
}
break;
}
struct tun_pi pi = { 0, skb->protocol };
ssize_t total = 0;
int vlan_offset = 0, copied;
+ int vlan_hlen = 0;
+ int vnet_hdr_sz = 0;
+
+ if (vlan_tx_tag_present(skb))
+ vlan_hlen = VLAN_HLEN;
+
+ if (tun->flags & TUN_VNET_HDR)
+ vnet_hdr_sz = tun->vnet_hdr_sz;
if (!(tun->flags & TUN_NO_PI)) {
if ((len -= sizeof(pi)) < 0)
return -EINVAL;
- if (len < skb->len) {
+ if (len < skb->len + vlan_hlen + vnet_hdr_sz) {
/* Packet will be stripped */
pi.flags |= TUN_PKT_STRIP;
}
total += sizeof(pi);
}
- if (tun->flags & TUN_VNET_HDR) {
+ if (vnet_hdr_sz) {
struct virtio_net_hdr gso = { 0 }; /* no info leak */
- if ((len -= tun->vnet_hdr_sz) < 0)
+ if ((len -= vnet_hdr_sz) < 0)
return -EINVAL;
if (skb_is_gso(skb)) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
gso.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
- gso.csum_start = skb_checksum_start_offset(skb);
+ gso.csum_start = skb_checksum_start_offset(skb) +
+ vlan_hlen;
gso.csum_offset = skb->csum_offset;
} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
gso.flags = VIRTIO_NET_HDR_F_DATA_VALID;
if (unlikely(memcpy_toiovecend(iv, (void *)&gso, total,
sizeof(gso))))
return -EFAULT;
- total += tun->vnet_hdr_sz;
+ total += vnet_hdr_sz;
}
copied = total;
- total += skb->len;
- if (!vlan_tx_tag_present(skb)) {
- len = min_t(int, skb->len, len);
- } else {
+ len = min_t(int, skb->len + vlan_hlen, len);
+ total += skb->len + vlan_hlen;
+ if (vlan_hlen) {
int copy, ret;
struct {
__be16 h_vlan_proto;
veth.h_vlan_TCI = htons(vlan_tx_tag_get(skb));
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
- len = min_t(int, skb->len + VLAN_HLEN, len);
- total += VLAN_HLEN;
copy = min_t(int, vlan_offset, len);
ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy);
return ret;
}
- ret = asix_sw_reset(dev, AX_SWRESET_IPPD | AX_SWRESET_PRL);
- if (ret < 0)
- return ret;
-
- msleep(150);
-
- ret = asix_sw_reset(dev, AX_SWRESET_CLEAR);
- if (ret < 0)
- return ret;
-
- msleep(150);
-
- ret = asix_sw_reset(dev, embd_phy ? AX_SWRESET_IPRL : AX_SWRESET_PRTE);
+ ax88772_reset(dev);
/* Read PHYID register *AFTER* the PHY was reset properly */
phyid = asix_get_phyid(dev);
return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
}
-/* Find VXLAN socket based on network namespace and UDP port */
-static struct vxlan_sock *vxlan_find_sock(struct net *net, __be16 port)
+/* Find VXLAN socket based on network namespace, address family and UDP port */
+static struct vxlan_sock *vxlan_find_sock(struct net *net,
+ sa_family_t family, __be16 port)
{
struct vxlan_sock *vs;
hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
- if (inet_sk(vs->sock->sk)->inet_sport == port)
+ if (inet_sk(vs->sock->sk)->inet_sport == port &&
+ inet_sk(vs->sock->sk)->sk.sk_family == family)
return vs;
}
return NULL;
}
/* Look up VNI in a per net namespace table */
-static struct vxlan_dev *vxlan_find_vni(struct net *net, u32 id, __be16 port)
+static struct vxlan_dev *vxlan_find_vni(struct net *net, u32 id,
+ sa_family_t family, __be16 port)
{
struct vxlan_sock *vs;
- vs = vxlan_find_sock(net, port);
+ vs = vxlan_find_sock(net, family, port);
if (!vs)
return NULL;
int vxlan_len = sizeof(struct vxlanhdr) + sizeof(struct ethhdr);
int err = -ENOSYS;
+ udp_tunnel_gro_complete(skb, nhoff);
+
eh = (struct ethhdr *)(skb->data + nhoff + sizeof(struct vxlanhdr));
type = eh->h_proto;
struct vxlan_dev *dst_vxlan;
ip_rt_put(rt);
- dst_vxlan = vxlan_find_vni(vxlan->net, vni, dst_port);
+ dst_vxlan = vxlan_find_vni(vxlan->net, vni,
+ dst->sa.sa_family, dst_port);
if (!dst_vxlan)
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
struct vxlan_dev *dst_vxlan;
dst_release(ndst);
- dst_vxlan = vxlan_find_vni(vxlan->net, vni, dst_port);
+ dst_vxlan = vxlan_find_vni(vxlan->net, vni,
+ dst->sa.sa_family, dst_port);
if (!dst_vxlan)
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
struct vxlan_sock *vs;
+ bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
if (!dev->tstats)
return -ENOMEM;
spin_lock(&vn->sock_lock);
- vs = vxlan_find_sock(vxlan->net, vxlan->dst_port);
+ vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
+ vxlan->dst_port);
if (vs) {
/* If we have a socket with same port already, reuse it */
atomic_inc(&vs->refcnt);
{
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
struct vxlan_sock *vs;
+ bool ipv6 = flags & VXLAN_F_IPV6;
vs = vxlan_socket_create(net, port, rcv, data, flags);
if (!IS_ERR(vs))
return vs;
spin_lock(&vn->sock_lock);
- vs = vxlan_find_sock(net, port);
+ vs = vxlan_find_sock(net, ipv6 ? AF_INET6 : AF_INET, port);
if (vs) {
if (vs->rcv == rcv)
atomic_inc(&vs->refcnt);
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]))
vxlan->flags |= VXLAN_F_UDP_ZERO_CSUM6_RX;
- if (vxlan_find_vni(net, vni, vxlan->dst_port)) {
+ if (vxlan_find_vni(net, vni, use_ipv6 ? AF_INET6 : AF_INET,
+ vxlan->dst_port)) {
pr_info("duplicate VNI %u\n", vni);
return -EEXIST;
}
lockdep_assert_held(&mvm->mutex);
- if (WARN_ON_ONCE(mvm->init_ucode_complete))
+ if (WARN_ON_ONCE(mvm->init_ucode_complete || mvm->calibrating))
return 0;
iwl_init_notification_wait(&mvm->notif_wait,
goto out;
}
+ mvm->calibrating = true;
+
/* Send TX valid antennas before triggering calibrations */
ret = iwl_send_tx_ant_cfg(mvm, mvm->fw->valid_tx_ant);
if (ret)
MVM_UCODE_CALIB_TIMEOUT);
if (!ret)
mvm->init_ucode_complete = true;
+
+ if (ret && iwl_mvm_is_radio_killed(mvm)) {
+ IWL_DEBUG_RF_KILL(mvm, "RFKILL while calibrating.\n");
+ ret = 1;
+ }
goto out;
error:
iwl_remove_notification(&mvm->notif_wait, &calib_wait);
out:
+ mvm->calibrating = false;
if (iwlmvm_mod_params.init_dbg && !mvm->nvm_data) {
/* we want to debug INIT and we have no NVM - fake */
mvm->nvm_data = kzalloc(sizeof(struct iwl_nvm_data) +
mvm->scan_status = IWL_MVM_SCAN_NONE;
mvm->ps_disabled = false;
+ mvm->calibrating = false;
/* just in case one was running */
ieee80211_remain_on_channel_expired(mvm->hw);
enum iwl_ucode_type cur_ucode;
bool ucode_loaded;
bool init_ucode_complete;
+ bool calibrating;
u32 error_event_table;
u32 log_event_table;
u32 umac_error_event_table;
}
mvm->sf_state = SF_UNINIT;
mvm->low_latency_agg_frame_limit = 6;
+ mvm->cur_ucode = IWL_UCODE_INIT;
mutex_init(&mvm->mutex);
mutex_init(&mvm->d0i3_suspend_mutex);
static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state)
{
struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
+ bool calibrating = ACCESS_ONCE(mvm->calibrating);
if (state)
set_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status);
wiphy_rfkill_set_hw_state(mvm->hw->wiphy, iwl_mvm_is_radio_killed(mvm));
- return state && mvm->cur_ucode != IWL_UCODE_INIT;
+ /* iwl_run_init_mvm_ucode is waiting for results, abort it */
+ if (calibrating)
+ iwl_abort_notification_waits(&mvm->notif_wait);
+
+ /*
+ * Stop the device if we run OPERATIONAL firmware or if we are in the
+ * middle of the calibrations.
+ */
+ return state && (mvm->cur_ucode != IWL_UCODE_INIT || calibrating);
}
static void iwl_mvm_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb)
* restart. So don't process again if the device is
* already dead.
*/
- if (test_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
+ if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
+ IWL_DEBUG_INFO(trans, "DEVICE_ENABLED bit was set and is now cleared\n");
iwl_pcie_tx_stop(trans);
iwl_pcie_rx_stop(trans);
/* clear all status bits */
clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
clear_bit(STATUS_INT_ENABLED, &trans->status);
- clear_bit(STATUS_DEVICE_ENABLED, &trans->status);
clear_bit(STATUS_TPOWER_PMI, &trans->status);
clear_bit(STATUS_RFKILL, &trans->status);
if (err != 0) {
printk(KERN_DEBUG "mac80211_hwsim: device_bind_driver failed (%d)\n",
err);
- goto failed_hw;
+ goto failed_bind;
}
skb_queue_head_init(&data->pending);
return idx;
failed_hw:
+ device_release_driver(data->dev);
+failed_bind:
device_unregister(data->dev);
failed_drvdata:
ieee80211_free_hw(hw);
}
EXPORT_SYMBOL_GPL(of_property_read_string);
-/**
- * of_property_read_string_index - Find and read a string from a multiple
- * strings property.
- * @np: device node from which the property value is to be read.
- * @propname: name of the property to be searched.
- * @index: index of the string in the list of strings
- * @out_string: pointer to null terminated return string, modified only if
- * return value is 0.
- *
- * Search for a property in a device tree node and retrieve a null
- * terminated string value (pointer to data, not a copy) in the list of strings
- * contained in that property.
- * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if
- * property does not have a value, and -EILSEQ if the string is not
- * null-terminated within the length of the property data.
- *
- * The out_string pointer is modified only if a valid string can be decoded.
- */
-int of_property_read_string_index(struct device_node *np, const char *propname,
- int index, const char **output)
-{
- struct property *prop = of_find_property(np, propname, NULL);
- int i = 0;
- size_t l = 0, total = 0;
- const char *p;
-
- if (!prop)
- return -EINVAL;
- if (!prop->value)
- return -ENODATA;
- if (strnlen(prop->value, prop->length) >= prop->length)
- return -EILSEQ;
-
- p = prop->value;
-
- for (i = 0; total < prop->length; total += l, p += l) {
- l = strlen(p) + 1;
- if (i++ == index) {
- *output = p;
- return 0;
- }
- }
- return -ENODATA;
-}
-EXPORT_SYMBOL_GPL(of_property_read_string_index);
-
/**
* of_property_match_string() - Find string in a list and return index
* @np: pointer to node containing string list property
end = p + prop->length;
for (i = 0; p < end; i++, p += l) {
- l = strlen(p) + 1;
+ l = strnlen(p, end - p) + 1;
if (p + l > end)
return -EILSEQ;
pr_debug("comparing %s with %s\n", string, p);
EXPORT_SYMBOL_GPL(of_property_match_string);
/**
- * of_property_count_strings - Find and return the number of strings from a
- * multiple strings property.
+ * of_property_read_string_helper() - Utility helper for parsing string properties
* @np: device node from which the property value is to be read.
* @propname: name of the property to be searched.
+ * @out_strs: output array of string pointers.
+ * @sz: number of array elements to read.
+ * @skip: Number of strings to skip over at beginning of list.
*
- * Search for a property in a device tree node and retrieve the number of null
- * terminated string contain in it. Returns the number of strings on
- * success, -EINVAL if the property does not exist, -ENODATA if property
- * does not have a value, and -EILSEQ if the string is not null-terminated
- * within the length of the property data.
+ * Don't call this function directly. It is a utility helper for the
+ * of_property_read_string*() family of functions.
*/
-int of_property_count_strings(struct device_node *np, const char *propname)
+int of_property_read_string_helper(struct device_node *np, const char *propname,
+ const char **out_strs, size_t sz, int skip)
{
struct property *prop = of_find_property(np, propname, NULL);
- int i = 0;
- size_t l = 0, total = 0;
- const char *p;
+ int l = 0, i = 0;
+ const char *p, *end;
if (!prop)
return -EINVAL;
if (!prop->value)
return -ENODATA;
- if (strnlen(prop->value, prop->length) >= prop->length)
- return -EILSEQ;
-
p = prop->value;
+ end = p + prop->length;
- for (i = 0; total < prop->length; total += l, p += l, i++)
- l = strlen(p) + 1;
-
- return i;
+ for (i = 0; p < end && (!out_strs || i < skip + sz); i++, p += l) {
+ l = strnlen(p, end - p) + 1;
+ if (p + l > end)
+ return -EILSEQ;
+ if (out_strs && i >= skip)
+ *out_strs++ = p;
+ }
+ i -= skip;
+ return i <= 0 ? -ENODATA : i;
}
-EXPORT_SYMBOL_GPL(of_property_count_strings);
+EXPORT_SYMBOL_GPL(of_property_read_string_helper);
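For context, this single helper is meant to back the whole of_property_read_string*() family through thin static-inline wrappers in include/linux/of.h, where skip selects the first string to return and a NULL out_strs turns the call into a pure count. A sketch of those wrappers, reconstructed here for illustration (the exact in-tree definitions may differ slightly):

	static inline int of_property_read_string_array(struct device_node *np,
							const char *propname,
							const char **out_strs,
							size_t sz)
	{
		return of_property_read_string_helper(np, propname, out_strs, sz, 0);
	}

	static inline int of_property_count_strings(struct device_node *np,
						    const char *propname)
	{
		return of_property_read_string_helper(np, propname, NULL, 0, 0);
	}

	static inline int of_property_read_string_index(struct device_node *np,
							const char *propname,
							int index,
							const char **output)
	{
		int rc = of_property_read_string_helper(np, propname, output, 1, index);
		return rc < 0 ? rc : 0;
	}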
void of_print_phandle_args(const char *msg, const struct of_phandle_args *args)
{
selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
}
-static void __init of_selftest_property_match_string(void)
+static void __init of_selftest_property_string(void)
{
+ const char *strings[4];
struct device_node *np;
int rc;
rc = of_property_match_string(np, "phandle-list-names", "third");
selftest(rc == 2, "third expected:0 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "fourth");
- selftest(rc == -ENODATA, "unmatched string; rc=%i", rc);
+ selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
rc = of_property_match_string(np, "missing-property", "blah");
- selftest(rc == -EINVAL, "missing property; rc=%i", rc);
+ selftest(rc == -EINVAL, "missing property; rc=%i\n", rc);
rc = of_property_match_string(np, "empty-property", "blah");
- selftest(rc == -ENODATA, "empty property; rc=%i", rc);
+ selftest(rc == -ENODATA, "empty property; rc=%i\n", rc);
rc = of_property_match_string(np, "unterminated-string", "blah");
- selftest(rc == -EILSEQ, "unterminated string; rc=%i", rc);
+ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+
+ /* of_property_count_strings() tests */
+ rc = of_property_count_strings(np, "string-property");
+ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
+ rc = of_property_count_strings(np, "phandle-list-names");
+ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
+ rc = of_property_count_strings(np, "unterminated-string");
+ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+ rc = of_property_count_strings(np, "unterminated-string-list");
+ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+
+ /* of_property_read_string_index() tests */
+ rc = of_property_read_string_index(np, "string-property", 0, strings);
+ selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ strings[0] = NULL;
+ rc = of_property_read_string_index(np, "string-property", 1, strings);
+ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
+ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
+ selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
+ selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ strings[0] = NULL;
+ rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
+ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ strings[0] = NULL;
+ rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
+ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
+ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ strings[0] = NULL;
+ rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
+ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ strings[1] = NULL;
+
+ /* of_property_read_string_array() tests */
+ rc = of_property_read_string_array(np, "string-property", strings, 4);
+ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
+ rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
+ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
+ rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
+ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+ /* -- An incorrectly formed string should cause a failure */
+ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
+ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+ /* -- parsing the correctly formed strings should still work: */
+ strings[2] = NULL;
+ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
+ selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+ strings[1] = NULL;
+ rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
+ selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
}
#define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
of_selftest_find_node_by_name();
of_selftest_dynamic();
of_selftest_parse_phandle_with_args();
- of_selftest_property_match_string();
+ of_selftest_property_string();
of_selftest_property_copy();
of_selftest_changeset();
of_selftest_parse_interrupts();
phandle-list-bad-args = <&provider2 1 0>,
<&provider3 0>;
empty-property;
+ string-property = "foobar";
unterminated-string = [40 41 42 43];
+ unterminated-string-list = "first", "second", [40 41 42 43];
};
};
};
otg->phy = &phy->phy;
platform_set_drvdata(pdev, phy);
+ pm_runtime_enable(phy->dev);
generic_phy = devm_phy_create(phy->dev, NULL, &ops, NULL);
- if (IS_ERR(generic_phy))
+ if (IS_ERR(generic_phy)) {
+ pm_runtime_disable(phy->dev);
return PTR_ERR(generic_phy);
+ }
phy_set_drvdata(generic_phy, phy);
- pm_runtime_enable(phy->dev);
phy_provider = devm_of_phy_provider_register(phy->dev,
of_phy_simple_xlate);
if (IS_ERR(phy_provider)) {
config HP_ACCEL
tristate "HP laptop accelerometer"
depends on INPUT && ACPI
+ depends on SERIO_I8042
select SENSORS_LIS3LV02D
select NEW_LEDS
select LEDS_CLASS
#include <linux/leds.h>
#include <linux/atomic.h>
#include <linux/acpi.h>
+#include <linux/i8042.h>
+#include <linux/serio.h>
#include "../../misc/lis3lv02d/lis3lv02d.h"
#define DRIVER_NAME "hp_accel"
/* HP-specific accelerometer driver ------------------------------------ */
+/* e0 25, e0 26, e0 27, e0 28 are scan codes that the accelerometer with acpi id
+ * HPQ6000 sends through the keyboard bus */
+#define ACCEL_1 0x25
+#define ACCEL_2 0x26
+#define ACCEL_3 0x27
+#define ACCEL_4 0x28
+
/* For automatic insertion of the module */
static const struct acpi_device_id lis3lv02d_device_ids[] = {
{"HPQ0004", 0}, /* HP Mobile Data Protection System PNP */
printk(KERN_DEBUG DRIVER_NAME ": Error getting resources\n");
}
+static bool hp_accel_i8042_filter(unsigned char data, unsigned char str,
+ struct serio *port)
+{
+ static bool extended;
+
+ if (str & I8042_STR_AUXDATA)
+ return false;
+
+ if (data == 0xe0) {
+ extended = true;
+ return true;
+ } else if (unlikely(extended)) {
+ extended = false;
+
+ switch (data) {
+ case ACCEL_1:
+ case ACCEL_2:
+ case ACCEL_3:
+ case ACCEL_4:
+ return true;
+ default:
+ serio_interrupt(port, 0xe0, 0);
+ return false;
+ }
+ }
+
+ return false;
+}
+
static int lis3lv02d_add(struct acpi_device *device)
{
int ret;
if (ret)
return ret;
+ /* filter to remove HPQ6000 accelerometer data
+ * from keyboard bus stream */
+ if (strstr(dev_name(&device->dev), "HPQ6000"))
+ i8042_install_filter(hp_accel_i8042_filter);
+
INIT_WORK(&hpled_led.work, delayed_set_status_worker);
ret = led_classdev_register(NULL, &hpled_led.led_classdev);
if (ret) {
if (!device)
return -EINVAL;
+ i8042_remove_filter(hp_accel_i8042_filter);
lis3lv02d_joystick_disable(&lis3_dev);
lis3lv02d_poweroff(&lis3_dev);
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/time.h>
+#include <linux/time64.h>
#include <linux/of.h>
#include <linux/completion.h>
#include <linux/mfd/core.h>
struct ab8500_fg_avg_cap {
int avg;
int samples[NBR_AVG_SAMPLES];
- __kernel_time_t time_stamps[NBR_AVG_SAMPLES];
+ time64_t time_stamps[NBR_AVG_SAMPLES];
int pos;
int nbr_samples;
int sum;
*/
static int ab8500_fg_add_cap_sample(struct ab8500_fg *di, int sample)
{
- struct timespec ts;
+ struct timespec64 ts64;
struct ab8500_fg_avg_cap *avg = &di->avg_cap;
- getnstimeofday(&ts);
+ getnstimeofday64(&ts64);
do {
avg->sum += sample - avg->samples[avg->pos];
avg->samples[avg->pos] = sample;
- avg->time_stamps[avg->pos] = ts.tv_sec;
+ avg->time_stamps[avg->pos] = ts64.tv_sec;
avg->pos++;
if (avg->pos == NBR_AVG_SAMPLES)
* Check the time stamp for each sample. If too old,
* replace with latest sample
*/
- } while (ts.tv_sec - VALID_CAPACITY_SEC > avg->time_stamps[avg->pos]);
+ } while (ts64.tv_sec - VALID_CAPACITY_SEC > avg->time_stamps[avg->pos]);
avg->avg = avg->sum / avg->nbr_samples;
static void ab8500_fg_fill_cap_sample(struct ab8500_fg *di, int sample)
{
int i;
- struct timespec ts;
+ struct timespec64 ts64;
struct ab8500_fg_avg_cap *avg = &di->avg_cap;
- getnstimeofday(&ts);
+ getnstimeofday64(&ts64);
for (i = 0; i < NBR_AVG_SAMPLES; i++) {
avg->samples[i] = sample;
- avg->time_stamps[i] = ts.tv_sec;
+ avg->time_stamps[i] = ts64.tv_sec;
}
avg->pos = 0;
if (np) {
bq->notify_psy = power_supply_get_by_phandle(np, "ti,usb-charger-detection");
- if (!bq->notify_psy)
- return -EPROBE_DEFER;
+ if (IS_ERR(bq->notify_psy)) {
+ dev_info(&client->dev,
+ "no 'ti,usb-charger-detection' property (err=%ld)\n",
+ PTR_ERR(bq->notify_psy));
+ bq->notify_psy = NULL;
+ } else if (!bq->notify_psy) {
+ ret = -EPROBE_DEFER;
+ goto error_2;
+ }
}
else if (pdata->notify_device)
bq->notify_psy = power_supply_get_by_name(pdata->notify_device);
ret = of_property_read_u32(np, "ti,current-limit",
&bq->init_data.current_limit);
if (ret)
- return ret;
+ goto error_2;
ret = of_property_read_u32(np, "ti,weak-battery-voltage",
&bq->init_data.weak_battery_voltage);
if (ret)
- return ret;
+ goto error_2;
ret = of_property_read_u32(np, "ti,battery-regulation-voltage",
&bq->init_data.battery_regulation_voltage);
if (ret)
- return ret;
+ goto error_2;
ret = of_property_read_u32(np, "ti,charge-current",
&bq->init_data.charge_current);
if (ret)
- return ret;
+ goto error_2;
ret = of_property_read_u32(np, "ti,termination-current",
&bq->init_data.termination_current);
if (ret)
- return ret;
+ goto error_2;
ret = of_property_read_u32(np, "ti,resistor-sense",
&bq->init_data.resistor_sense);
if (ret)
- return ret;
+ goto error_2;
} else {
memcpy(&bq->init_data, pdata, sizeof(bq->init_data));
}
static bool is_batt_present(struct charger_manager *cm)
{
union power_supply_propval val;
+ struct power_supply *psy;
bool present = false;
int i, ret;
case CM_NO_BATTERY:
break;
case CM_FUEL_GAUGE:
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ psy = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!psy)
+ break;
+
+ ret = psy->get_property(psy,
POWER_SUPPLY_PROP_PRESENT, &val);
if (ret == 0 && val.intval)
present = true;
break;
case CM_CHARGER_STAT:
- for (i = 0; cm->charger_stat[i]; i++) {
- ret = cm->charger_stat[i]->get_property(
- cm->charger_stat[i],
- POWER_SUPPLY_PROP_PRESENT, &val);
+ for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
+ psy = power_supply_get_by_name(
+ cm->desc->psy_charger_stat[i]);
+ if (!psy) {
+ dev_err(cm->dev, "Cannot find power supply \"%s\"\n",
+ cm->desc->psy_charger_stat[i]);
+ continue;
+ }
+
+ ret = psy->get_property(psy, POWER_SUPPLY_PROP_PRESENT,
+ &val);
if (ret == 0 && val.intval) {
present = true;
break;
static bool is_ext_pwr_online(struct charger_manager *cm)
{
union power_supply_propval val;
+ struct power_supply *psy;
bool online = false;
int i, ret;
/* If at least one of them has one, it's yes. */
- for (i = 0; cm->charger_stat[i]; i++) {
- ret = cm->charger_stat[i]->get_property(
- cm->charger_stat[i],
- POWER_SUPPLY_PROP_ONLINE, &val);
+ for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
+ psy = power_supply_get_by_name(cm->desc->psy_charger_stat[i]);
+ if (!psy) {
+ dev_err(cm->dev, "Cannot find power supply \"%s\"\n",
+ cm->desc->psy_charger_stat[i]);
+ continue;
+ }
+
+ ret = psy->get_property(psy, POWER_SUPPLY_PROP_ONLINE, &val);
if (ret == 0 && val.intval) {
online = true;
break;
static int get_batt_uV(struct charger_manager *cm, int *uV)
{
union power_supply_propval val;
+ struct power_supply *fuel_gauge;
int ret;
- if (!cm->fuel_gauge)
+ fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge)
return -ENODEV;
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_VOLTAGE_NOW, &val);
if (ret)
return ret;
{
int i, ret;
bool charging = false;
+ struct power_supply *psy;
union power_supply_propval val;
/* If there is no battery, it cannot be charged */
return false;
/* If at least one of the charger is charging, return yes */
- for (i = 0; cm->charger_stat[i]; i++) {
+ for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
/* 1. The charger sholuld not be DISABLED */
if (cm->emergency_stop)
continue;
if (!cm->charger_enabled)
continue;
+ psy = power_supply_get_by_name(cm->desc->psy_charger_stat[i]);
+ if (!psy) {
+ dev_err(cm->dev, "Cannot find power supply \"%s\"\n",
+ cm->desc->psy_charger_stat[i]);
+ continue;
+ }
+
/* 2. The charger should be online (ext-power) */
- ret = cm->charger_stat[i]->get_property(
- cm->charger_stat[i],
- POWER_SUPPLY_PROP_ONLINE, &val);
+ ret = psy->get_property(psy, POWER_SUPPLY_PROP_ONLINE, &val);
if (ret) {
dev_warn(cm->dev, "Cannot read ONLINE value from %s\n",
cm->desc->psy_charger_stat[i]);
* 3. The charger should not be FULL, DISCHARGING,
* or NOT_CHARGING.
*/
- ret = cm->charger_stat[i]->get_property(
- cm->charger_stat[i],
- POWER_SUPPLY_PROP_STATUS, &val);
+ ret = psy->get_property(psy, POWER_SUPPLY_PROP_STATUS, &val);
if (ret) {
dev_warn(cm->dev, "Cannot read STATUS value from %s\n",
cm->desc->psy_charger_stat[i]);
{
struct charger_desc *desc = cm->desc;
union power_supply_propval val;
+ struct power_supply *fuel_gauge;
int ret = 0;
int uV;
if (!is_batt_present(cm))
return false;
- if (cm->fuel_gauge && desc->fullbatt_full_capacity > 0) {
+ fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge)
+ return false;
+
+ if (desc->fullbatt_full_capacity > 0) {
val.intval = 0;
/* Not full if capacity of fuel gauge isn't full */
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CHARGE_FULL, &val);
if (!ret && val.intval > desc->fullbatt_full_capacity)
return true;
}
/* Full, if the capacity is more than fullbatt_soc */
- if (cm->fuel_gauge && desc->fullbatt_soc > 0) {
+ if (desc->fullbatt_soc > 0) {
val.intval = 0;
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CAPACITY, &val);
if (!ret && val.intval >= desc->fullbatt_soc)
return true;
return ret;
}
+static int cm_get_battery_temperature_by_psy(struct charger_manager *cm,
+ int *temp)
+{
+ struct power_supply *fuel_gauge;
+
+ fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge)
+ return -ENODEV;
+
+ return fuel_gauge->get_property(fuel_gauge,
+ POWER_SUPPLY_PROP_TEMP,
+ (union power_supply_propval *)temp);
+}
+
static int cm_get_battery_temperature(struct charger_manager *cm,
int *temp)
{
return -ENODEV;
#ifdef CONFIG_THERMAL
- ret = thermal_zone_get_temp(cm->tzd_batt, (unsigned long *)temp);
- if (!ret)
- /* Calibrate temperature unit */
- *temp /= 100;
-#else
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
- POWER_SUPPLY_PROP_TEMP,
- (union power_supply_propval *)temp);
+ if (cm->tzd_batt) {
+ ret = thermal_zone_get_temp(cm->tzd_batt, (unsigned long *)temp);
+ if (!ret)
+ /* Calibrate temperature unit */
+ *temp /= 100;
+ } else
#endif
+ {
+ /* if-else continued from CONFIG_THERMAL */
+ ret = cm_get_battery_temperature_by_psy(cm, temp);
+ }
+
return ret;
}
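The /= 100 above converts the thermal core's millidegree-Celsius reading into the tenths of a degree Celsius used by POWER_SUPPLY_PROP_TEMP; the fuel-gauge fallback needs no conversion because it already reports tenths. A hedged example of the unit conversion with a made-up reading:

	/* Illustrative only: 36500 m°C from the thermal zone divided by 100
	 * gives 365, i.e. 36.5 °C in 0.1 °C units.
	 */
	long tz_millicelsius = 36500;
	int temp_tenths = tz_millicelsius / 100;	/* == 365 */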
struct charger_manager *cm = container_of(psy,
struct charger_manager, charger_psy);
struct charger_desc *desc = cm->desc;
+ struct power_supply *fuel_gauge;
int ret = 0;
int uV;
ret = get_batt_uV(cm, &val->intval);
break;
case POWER_SUPPLY_PROP_CURRENT_NOW:
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge) {
+ ret = -ENODEV;
+ break;
+ }
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CURRENT_NOW, val);
break;
case POWER_SUPPLY_PROP_TEMP:
case POWER_SUPPLY_PROP_TEMP_AMBIENT:
return cm_get_battery_temperature(cm, &val->intval);
case POWER_SUPPLY_PROP_CAPACITY:
- if (!cm->fuel_gauge) {
+ fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge) {
ret = -ENODEV;
break;
}
break;
}
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CAPACITY, val);
if (ret)
break;
break;
case POWER_SUPPLY_PROP_CHARGE_NOW:
if (is_charging(cm)) {
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ fuel_gauge = power_supply_get_by_name(
+ cm->desc->psy_fuel_gauge);
+ if (!fuel_gauge) {
+ ret = -ENODEV;
+ break;
+ }
+
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CHARGE_NOW,
val);
if (ret) {
.properties = default_charger_props,
.num_properties = ARRAY_SIZE(default_charger_props),
.get_property = charger_get_property,
+ .no_thermal = true,
};
/**
return ret;
}
-static int cm_init_thermal_data(struct charger_manager *cm)
+static int cm_init_thermal_data(struct charger_manager *cm,
+ struct power_supply *fuel_gauge)
{
struct charger_desc *desc = cm->desc;
union power_supply_propval val;
int ret;
/* Verify whether fuel gauge provides battery temperature */
- ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
+ ret = fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_TEMP, &val);
if (!ret) {
cm->desc->measure_battery_temp = true;
}
#ifdef CONFIG_THERMAL
- cm->tzd_batt = cm->fuel_gauge->tzd;
-
if (ret && desc->thermal_zone) {
cm->tzd_batt =
thermal_zone_get_zone_by_name(desc->thermal_zone);
int ret = 0, i = 0;
int j = 0;
union power_supply_propval val;
+ struct power_supply *fuel_gauge;
if (g_desc && !rtc_dev && g_desc->rtc_name) {
rtc_dev = rtc_class_open(g_desc->rtc_name);
while (desc->psy_charger_stat[i])
i++;
- cm->charger_stat = devm_kzalloc(&pdev->dev,
- sizeof(struct power_supply *) * i, GFP_KERNEL);
- if (!cm->charger_stat)
- return -ENOMEM;
-
+ /* Check if charger's supplies are present at probe */
for (i = 0; desc->psy_charger_stat[i]; i++) {
- cm->charger_stat[i] = power_supply_get_by_name(
- desc->psy_charger_stat[i]);
- if (!cm->charger_stat[i]) {
+ struct power_supply *psy;
+
+ psy = power_supply_get_by_name(desc->psy_charger_stat[i]);
+ if (!psy) {
dev_err(&pdev->dev, "Cannot find power supply \"%s\"\n",
desc->psy_charger_stat[i]);
return -ENODEV;
}
}
- cm->fuel_gauge = power_supply_get_by_name(desc->psy_fuel_gauge);
- if (!cm->fuel_gauge) {
+ fuel_gauge = power_supply_get_by_name(desc->psy_fuel_gauge);
+ if (!fuel_gauge) {
dev_err(&pdev->dev, "Cannot find power supply \"%s\"\n",
desc->psy_fuel_gauge);
return -ENODEV;
cm->charger_psy.num_properties = psy_default.num_properties;
/* Find which optional psy-properties are available */
- if (!cm->fuel_gauge->get_property(cm->fuel_gauge,
+ if (!fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CHARGE_NOW, &val)) {
cm->charger_psy.properties[cm->charger_psy.num_properties] =
POWER_SUPPLY_PROP_CHARGE_NOW;
cm->charger_psy.num_properties++;
}
- if (!cm->fuel_gauge->get_property(cm->fuel_gauge,
+ if (!fuel_gauge->get_property(fuel_gauge,
POWER_SUPPLY_PROP_CURRENT_NOW,
&val)) {
cm->charger_psy.properties[cm->charger_psy.num_properties] =
cm->charger_psy.num_properties++;
}
- ret = cm_init_thermal_data(cm);
+ ret = cm_init_thermal_data(cm, fuel_gauge);
if (ret) {
dev_err(&pdev->dev, "Failed to initialize thermal data\n");
cm->desc->measure_battery_temp = false;
int i;
bool found = false;
- for (i = 0; cm->charger_stat[i]; i++) {
- if (psy == cm->charger_stat[i]) {
+ for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
+ if (!strcmp(psy->name, cm->desc->psy_charger_stat[i])) {
found = true;
break;
}
{
int i;
+ if (psy->no_thermal)
+ return 0;
+
/* Register battery zone device psy reports temperature */
for (i = 0; i < psy->num_properties; i++) {
if (psy->properties[i] == POWER_SUPPLY_PROP_TEMP) {
struct max1586_platform_data *pdata)
{
struct max1586_subdev_data *sub;
- struct of_regulator_match rmatch[ARRAY_SIZE(max1586_reg)];
+ struct of_regulator_match rmatch[ARRAY_SIZE(max1586_reg)] = { };
struct device_node *np = dev->of_node;
int i, matched;
struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct device_node *pmic_np, *regulators_np;
struct max77686_regulator_data *rdata;
- struct of_regulator_match rmatch;
+ struct of_regulator_match rmatch = { };
unsigned int i;
pmic_np = iodev->dev->of_node;
struct max77693_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct max77693_regulator_data *rdata = NULL;
int num_rdata, i;
- struct regulator_config config;
+ struct regulator_config config = { };
num_rdata = max77693_pmic_init_rdata(&pdev->dev, &rdata);
if (!rdata || num_rdata <= 0) {
struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct device_node *pmic_np, *regulators_np;
struct max77686_regulator_data *rdata;
- struct of_regulator_match rmatch;
+ struct of_regulator_match rmatch = { };
unsigned int i;
pmic_np = iodev->dev->of_node;
int matched, i;
struct device_node *np;
struct max8660_subdev_data *sub;
- struct of_regulator_match rmatch[ARRAY_SIZE(max8660_reg)];
+ struct of_regulator_match rmatch[ARRAY_SIZE(max8660_reg)] = { };
np = of_get_child_by_name(dev->of_node, "regulators");
if (!np) {
search = dev->of_node;
if (!search) {
- dev_err(dev, "Failed to find regulator container node\n");
+ dev_dbg(dev, "Failed to find regulator container node '%s'\n",
+ desc->regulators_node);
return NULL;
}
{
struct sec_pmic_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct sec_platform_data *pdata = dev_get_platdata(iodev->dev);
- struct of_regulator_match rdata[S2MPA01_REGULATOR_MAX];
+ struct of_regulator_match rdata[S2MPA01_REGULATOR_MAX] = { };
struct device_node *reg_np = NULL;
struct regulator_config config = { };
struct s2mpa01_info *s2mpa01;
struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev);
int i;
struct virtqueue *vq;
- struct virtio_driver *drv;
if (!vcdev)
return;
bnx2fc_initiate_cleanup(orig_io_req);
/* Post a new IO req with the same sc_cmd */
BNX2FC_IO_DBG(rec_req, "Post IO request again\n");
- spin_unlock_bh(&tgt->tgt_lock);
rc = bnx2fc_post_io_req(tgt, new_io_req);
- spin_lock_bh(&tgt->tgt_lock);
if (!rc)
goto free_frame;
BNX2FC_IO_DBG(rec_req, "REC: io post err\n");
goto exit_qcmd;
}
}
+
+ spin_lock_bh(&tgt->tgt_lock);
+
io_req = bnx2fc_cmd_alloc(tgt);
if (!io_req) {
rc = SCSI_MLQUEUE_HOST_BUSY;
- goto exit_qcmd;
+ goto exit_qcmd_tgtlock;
}
io_req->sc_cmd = sc_cmd;
if (bnx2fc_post_io_req(tgt, io_req)) {
printk(KERN_ERR PFX "Unable to post io_req\n");
rc = SCSI_MLQUEUE_HOST_BUSY;
- goto exit_qcmd;
+ goto exit_qcmd_tgtlock;
}
+
+exit_qcmd_tgtlock:
+ spin_unlock_bh(&tgt->tgt_lock);
exit_qcmd:
return rc;
}
int task_idx, index;
u16 xid;
+ /* bnx2fc_post_io_req() is called with the tgt_lock held */
+
/* Initialize rest of io_req fields */
io_req->cmd_type = BNX2FC_SCSI_CMD;
io_req->port = port;
/* Build buffer descriptor list for firmware from sg list */
if (bnx2fc_build_bd_list_from_sg(io_req)) {
printk(KERN_ERR PFX "BD list creation failed\n");
- spin_lock_bh(&tgt->tgt_lock);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
- spin_unlock_bh(&tgt->tgt_lock);
return -EAGAIN;
}
task = &(task_page[index]);
bnx2fc_init_task(io_req, task);
- spin_lock_bh(&tgt->tgt_lock);
-
if (tgt->flush_in_prog) {
printk(KERN_ERR PFX "Flush in progress..Host Busy\n");
kref_put(&io_req->refcount, bnx2fc_cmd_release);
- spin_unlock_bh(&tgt->tgt_lock);
return -EAGAIN;
}
if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
printk(KERN_ERR PFX "Session not ready...post_io\n");
kref_put(&io_req->refcount, bnx2fc_cmd_release);
- spin_unlock_bh(&tgt->tgt_lock);
return -EAGAIN;
}
/* Ring doorbell */
bnx2fc_ring_doorbell(tgt);
- spin_unlock_bh(&tgt->tgt_lock);
return 0;
}
cxgbi_sock_get(csk);
spin_lock_bh(&csk->lock);
- if (!cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD)) {
- cxgbi_sock_set_flag(csk, CTPF_ABORT_REQ_RCVD);
- cxgbi_sock_set_state(csk, CTP_ABORTING);
- goto done;
+ cxgbi_sock_clear_flag(csk, CTPF_ABORT_REQ_RCVD);
+
+ if (!cxgbi_sock_flag(csk, CTPF_TX_DATA_SENT)) {
+ send_tx_flowc_wr(csk);
+ cxgbi_sock_set_flag(csk, CTPF_TX_DATA_SENT);
}
- cxgbi_sock_clear_flag(csk, CTPF_ABORT_REQ_RCVD);
+ cxgbi_sock_set_flag(csk, CTPF_ABORT_REQ_RCVD);
+ cxgbi_sock_set_state(csk, CTP_ABORTING);
+
send_abort_rpl(csk, rst_status);
if (!cxgbi_sock_flag(csk, CTPF_ABORT_RPL_PENDING)) {
csk->err = abort_status_to_errno(csk, req->status, &rst_status);
cxgbi_sock_closed(csk);
}
-done:
+
spin_unlock_bh(&csk->lock);
cxgbi_sock_put(csk);
rel_skb:
{
cxgbi_sock_get(csk);
spin_lock_bh(&csk->lock);
+
+ cxgbi_sock_set_flag(csk, CTPF_ABORT_RPL_RCVD);
if (cxgbi_sock_flag(csk, CTPF_ABORT_RPL_PENDING)) {
- if (!cxgbi_sock_flag(csk, CTPF_ABORT_RPL_RCVD))
- cxgbi_sock_set_flag(csk, CTPF_ABORT_RPL_RCVD);
- else {
- cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_RCVD);
- cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_PENDING);
- if (cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD))
- pr_err("csk 0x%p,%u,0x%lx,%u,ABT_RPL_RSS.\n",
- csk, csk->state, csk->flags, csk->tid);
- cxgbi_sock_closed(csk);
- }
+ cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_PENDING);
+ if (cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD))
+ pr_err("csk 0x%p,%u,0x%lx,%u,ABT_RPL_RSS.\n",
+ csk, csk->state, csk->flags, csk->tid);
+ cxgbi_sock_closed(csk);
}
+
spin_unlock_bh(&csk->lock);
cxgbi_sock_put(csk);
}
* LUN Not Ready -- Offline
*/
return SUCCESS;
+ if (sdev->allow_restart &&
+ sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x02)
+ /*
+ * if the device is not started, we need to wake
+ * the error handler to start the motor
+ */
+ return FAILED;
break;
case UNIT_ATTENTION:
if (sense_hdr->asc == 0x29 && sense_hdr->ascq == 0x00)
instance->msixentry[i].entry = i;
i = pci_enable_msix_range(instance->pdev, instance->msixentry,
1, instance->msix_vectors);
- if (i)
+ if (i > 0)
instance->msix_vectors = i;
else
instance->msix_vectors = 0;
if (! scsi_command_normalize_sense(scmd, &sshdr))
return FAILED; /* no valid sense data */
- if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
- /*
- * nasty: for mid-layer issued TURs, we need to return the
- * actual sense data without any recovery attempt. For eh
- * issued ones, we need to try to recover and interpret
- */
- return SUCCESS;
-
scsi_report_sense(sdev, &sshdr);
if (scsi_sense_is_deferred(&sshdr))
/* handler does not care. Drop down to default handling */
}
+ if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
+ /*
+ * nasty: for mid-layer issued TURs, we need to return the
+ * actual sense data without any recovery attempt. For eh
+ * issued ones, we need to try to recover and interpret
+ */
+ return SUCCESS;
+
/*
* Previous logic looked for FILEMARK, EOM or ILI which are
* mainly associated with tapes and returned SUCCESS.
* is no point trying to lock the door of an off-line device.
*/
shost_for_each_device(sdev, shost) {
- if (scsi_device_online(sdev) && sdev->locked)
+ if (scsi_device_online(sdev) && sdev->was_reset && sdev->locked) {
scsi_eh_lock_door(sdev);
+ sdev->was_reset = 0;
+ }
}
/*
#define SPI_TCR 0x08
-#define SPI_CTAR(x) (0x0c + (x * 4))
+#define SPI_CTAR(x) (0x0c + (((x) & 0x3) * 4))
#define SPI_CTAR_FMSZ(x) (((x) & 0x0000000f) << 27)
#define SPI_CTAR_CPOL(x) ((x) << 26)
#define SPI_CTAR_CPHA(x) ((x) << 25)
#define SPI_PUSHR 0x34
#define SPI_PUSHR_CONT (1 << 31)
-#define SPI_PUSHR_CTAS(x) (((x) & 0x00000007) << 28)
+#define SPI_PUSHR_CTAS(x) (((x) & 0x00000003) << 28)
#define SPI_PUSHR_EOQ (1 << 27)
#define SPI_PUSHR_CTCNT (1 << 26)
#define SPI_PUSHR_PCS(x) (((1 << x) & 0x0000003f) << 16)
if (status != 0)
return status;
write_SSCR0(0, drv_data->ioaddr);
- clk_disable_unprepare(ssp->clk);
+
+ if (!pm_runtime_suspended(dev))
+ clk_disable_unprepare(ssp->clk);
return 0;
}
pxa2xx_spi_dma_resume(drv_data);
/* Enable the SSP clock */
- clk_prepare_enable(ssp->clk);
+ if (!pm_runtime_suspended(dev))
+ clk_prepare_enable(ssp->clk);
/* Restore LPSS private register bits */
lpss_ssp_setup(drv_data);
u8 *tx;
u8 *rx;
struct mutex buf_lock;
- const struct iio_chan_spec *ade7758_ring_channels;
struct spi_transfer ring_xfer[4];
struct spi_message ring_msg;
/*
.type = IIO_VOLTAGE,
.indexed = 1,
.channel = 0,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_A, AD7758_VOLTAGE),
.scan_index = 0,
.scan_type = {
.type = IIO_CURRENT,
.indexed = 1,
.channel = 0,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_A, AD7758_CURRENT),
.scan_index = 1,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 0,
- .extend_name = "apparent_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "apparent",
.address = AD7758_WT(AD7758_PHASE_A, AD7758_APP_PWR),
.scan_index = 2,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 0,
- .extend_name = "active_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "active",
.address = AD7758_WT(AD7758_PHASE_A, AD7758_ACT_PWR),
.scan_index = 3,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 0,
- .extend_name = "reactive_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "reactive",
.address = AD7758_WT(AD7758_PHASE_A, AD7758_REACT_PWR),
.scan_index = 4,
.scan_type = {
.type = IIO_VOLTAGE,
.indexed = 1,
.channel = 1,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_B, AD7758_VOLTAGE),
.scan_index = 5,
.scan_type = {
.type = IIO_CURRENT,
.indexed = 1,
.channel = 1,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_B, AD7758_CURRENT),
.scan_index = 6,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 1,
- .extend_name = "apparent_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "apparent",
.address = AD7758_WT(AD7758_PHASE_B, AD7758_APP_PWR),
.scan_index = 7,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 1,
- .extend_name = "active_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "active",
.address = AD7758_WT(AD7758_PHASE_B, AD7758_ACT_PWR),
.scan_index = 8,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 1,
- .extend_name = "reactive_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "reactive",
.address = AD7758_WT(AD7758_PHASE_B, AD7758_REACT_PWR),
.scan_index = 9,
.scan_type = {
.type = IIO_VOLTAGE,
.indexed = 1,
.channel = 2,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_C, AD7758_VOLTAGE),
.scan_index = 10,
.scan_type = {
.type = IIO_CURRENT,
.indexed = 1,
.channel = 2,
- .extend_name = "raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.address = AD7758_WT(AD7758_PHASE_C, AD7758_CURRENT),
.scan_index = 11,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 2,
- .extend_name = "apparent_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "apparent",
.address = AD7758_WT(AD7758_PHASE_C, AD7758_APP_PWR),
.scan_index = 12,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 2,
- .extend_name = "active_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "active",
.address = AD7758_WT(AD7758_PHASE_C, AD7758_ACT_PWR),
.scan_index = 13,
.scan_type = {
.type = IIO_POWER,
.indexed = 1,
.channel = 2,
- .extend_name = "reactive_raw",
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "reactive",
.address = AD7758_WT(AD7758_PHASE_C, AD7758_REACT_PWR),
.scan_index = 14,
.scan_type = {
goto error_free_rx;
}
st->us = spi;
- st->ade7758_ring_channels = &ade7758_channels[0];
mutex_init(&st->buf_lock);
indio_dev->name = spi->dev.driver->name;
indio_dev->dev.parent = &spi->dev;
indio_dev->info = &ade7758_info;
indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = ade7758_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ade7758_channels);
ret = ade7758_configure_ring(indio_dev);
if (ret)
**/
static int ade7758_ring_preenable(struct iio_dev *indio_dev)
{
- struct ade7758_state *st = iio_priv(indio_dev);
unsigned channel;
- if (!bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength))
+ if (bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength))
return -EINVAL;
channel = find_first_bit(indio_dev->active_scan_mask,
indio_dev->masklength);
ade7758_write_waveform_type(&indio_dev->dev,
- st->ade7758_ring_channels[channel].address);
+ indio_dev->channels[channel].address);
return 0;
}
int measure_freq;
int ret;
+ if (!cpufreq_get_current_driver()) {
+ dev_dbg(&pdev->dev, "no cpufreq driver!");
+ return -EPROBE_DEFER;
+ }
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
return ret;
}
+ data->thermal_clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(data->thermal_clk)) {
+ ret = PTR_ERR(data->thermal_clk);
+ if (ret != -EPROBE_DEFER)
+ dev_err(&pdev->dev,
+ "failed to get thermal clk: %d\n", ret);
+ cpufreq_cooling_unregister(data->cdev);
+ return ret;
+ }
+
+ /*
+ * Thermal sensor needs clk on to get correct value, normally
+ * we should enable its clk before taking measurement and disable
+ * clk after measurement is done, but if alarm function is enabled,
+ * hardware will auto measure the temperature periodically, so we
+ * need to keep the clk always on for alarm function.
+ */
+ ret = clk_prepare_enable(data->thermal_clk);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to enable thermal clk: %d\n", ret);
+ cpufreq_cooling_unregister(data->cdev);
+ return ret;
+ }
+
data->tz = thermal_zone_device_register("imx_thermal_zone",
IMX_TRIP_NUM,
BIT(IMX_TRIP_PASSIVE), data,
ret = PTR_ERR(data->tz);
dev_err(&pdev->dev,
"failed to register thermal zone device %d\n", ret);
+ clk_disable_unprepare(data->thermal_clk);
cpufreq_cooling_unregister(data->cdev);
return ret;
}
- data->thermal_clk = devm_clk_get(&pdev->dev, NULL);
- if (IS_ERR(data->thermal_clk)) {
- dev_warn(&pdev->dev, "failed to get thermal clk!\n");
- } else {
- /*
- * Thermal sensor needs clk on to get correct value, normally
- * we should enable its clk before taking measurement and disable
- * clk after measurement is done, but if alarm function is enabled,
- * hardware will auto measure the temperature periodically, so we
- * need to keep the clk always on for alarm function.
- */
- ret = clk_prepare_enable(data->thermal_clk);
- if (ret)
- dev_warn(&pdev->dev, "failed to enable thermal clk: %d\n", ret);
- }
-
/* Enable measurements at ~ 10 Hz */
regmap_write(map, TEMPSENSE1 + REG_CLR, TEMPSENSE1_MEASURE_FREQ);
measure_freq = DIV_ROUND_UP(32768, 10); /* 10 Hz */
if (ACPI_FAILURE(status))
return -EIO;
- *temp = DECI_KELVIN_TO_MILLI_CELSIUS(hyst, KELVIN_OFFSET);
+ /*
+ * Thermal hysteresis represents a temperature difference.
+ * Kelvin and Celsius degrees have the same size, so the
+ * conversion here from tenths of a degree Kelvin to
+ * milli-Celsius is simply a multiplication by 100.
+ */
+ *temp = hyst * 100;
return 0;
}
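
For reference, a minimal userspace sketch of the unit conversion the comment above describes; the reading of 20 (i.e. 2.0 K) is an assumed example for illustration, not a value taken from the patch:

    #include <stdio.h>

    int main(void)
    {
            /*
             * A hysteresis value is a temperature difference, so the
             * 273.15 K offset never enters the picture; only the scale
             * changes: 0.1 K == 100 milli-degrees Celsius.
             */
            unsigned long hyst_deci_kelvin = 20;    /* assumed example: 2.0 K */
            long hyst_milli_celsius = hyst_deci_kelvin * 100;

            printf("%lu deci-Kelvin -> %ld milli-Celsius\n",
                   hyst_deci_kelvin, hyst_milli_celsius);
            return 0;
    }
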
static const struct exynos_tmu_registers exynos5260_tmu_registers = {
.triminfo_data = EXYNOS_TMU_REG_TRIMINFO,
.tmu_ctrl = EXYNOS_TMU_REG_CONTROL,
- .tmu_ctrl = EXYNOS_TMU_REG_CONTROL1,
.therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
.therm_trip_mode_mask = EXYNOS_TMU_TRIP_MODE_MASK,
.therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT,
#define EXYNOS_MAX_TRIGGER_PER_REG 4
/* Exynos5260 specific */
-#define EXYNOS_TMU_REG_CONTROL1 0x24
#define EXYNOS5260_TMU_REG_INTEN 0xC0
#define EXYNOS5260_TMU_REG_INTSTAT 0xC4
#define EXYNOS5260_TMU_REG_INTCLEAR 0xC8
poll_wait(file, &tty->read_wait, wait);
poll_wait(file, &tty->write_wait, wait);
+ if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+ mask |= POLLHUP;
if (input_available_p(tty, 1))
mask |= POLLIN | POLLRDNORM;
+ else if (mask & POLLHUP) {
+ tty_flush_to_ldisc(tty);
+ if (input_available_p(tty, 1))
+ mask |= POLLIN | POLLRDNORM;
+ }
if (tty->packet && tty->link->ctrl_status)
mask |= POLLPRI | POLLIN | POLLRDNORM;
- if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
- mask |= POLLHUP;
if (tty_hung_up_p(file))
mask |= POLLHUP;
if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
/* Set to highest baudrate supported */
if (baud >= 1152000)
baud = 921600;
- quot = DIV_ROUND_CLOSEST(port->uartclk, 256 * baud);
+ quot = (port->uartclk / (256 * baud)) + 1;
}
/*
if (of_find_property(ofdev->dev.of_node, "used-by-rtas", NULL))
return -EBUSY;
- info = kmalloc(sizeof(*info), GFP_KERNEL);
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
if (info == NULL)
return -ENOMEM;
* The spd_hi, spd_vhi, spd_shi, spd_warp kludge...
* Die! Die! Die!
*/
- if (baud == 38400)
+ if (try == 0 && baud == 38400)
baud = altbaud;
/*
int pty_master, tty_closing, o_tty_closing, do_sleep;
int idx;
char buf[64];
+ long timeout = 0;
+ int once = 1;
if (tty_paranoia_check(tty, inode, __func__))
return 0;
if (!do_sleep)
break;
- printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
- __func__, tty_name(tty, buf));
+ if (once) {
+ once = 0;
+ printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
+ __func__, tty_name(tty, buf));
+ }
tty_unlock_pair(tty, o_tty);
mutex_unlock(&tty_mutex);
- schedule();
+ schedule_timeout_killable(timeout);
+ if (timeout < 120 * HZ)
+ timeout = 2 * timeout + 1;
+ else
+ timeout = MAX_SCHEDULE_TIMEOUT;
}
/*
/* Save original vc_unipagdir_loc in case we allocate a new one */
p = *vc->vc_uni_pagedir_loc;
+
+ if (!p) {
+ err = -EINVAL;
+
+ goto out_unlock;
+ }
if (p->refcount > 1) {
int j, k;
set_inverse_transl(vc, p, i); /* Update inverse translations */
set_inverse_trans_unicode(vc, p);
+out_unlock:
console_unlock();
return err;
}
static DEFINE_MUTEX(acm_table_lock);
+static void acm_tty_set_termios(struct tty_struct *tty,
+ struct ktermios *termios_old);
+
/*
* acm_table accessors
*/
/* devices aren't required to support these requests.
* the cdc acm descriptor tells whether they do...
*/
-#define acm_set_control(acm, control) \
- acm_ctrl_msg(acm, USB_CDC_REQ_SET_CONTROL_LINE_STATE, control, NULL, 0)
+static inline int acm_set_control(struct acm *acm, int control)
+{
+ if (acm->quirks & QUIRK_CONTROL_LINE_STATE)
+ return -EOPNOTSUPP;
+
+ return acm_ctrl_msg(acm, USB_CDC_REQ_SET_CONTROL_LINE_STATE,
+ control, NULL, 0);
+}
+
#define acm_set_line(acm, line) \
acm_ctrl_msg(acm, USB_CDC_REQ_SET_LINE_CODING, 0, line, sizeof *(line))
#define acm_send_break(acm, ms) \
goto error_submit_urb;
}
+ acm_tty_set_termios(tty, NULL);
+
/*
* Unthrottle device in case the TTY was closed while throttled.
*/
/* FIXME: Needs to clear unsupported bits in the termios */
acm->clocal = ((termios->c_cflag & CLOCAL) != 0);
- if (!newline.dwDTERate) {
+ if (C_BAUD(tty) == B0) {
newline.dwDTERate = acm->line.dwDTERate;
newctrl &= ~ACM_CTRL_DTR;
- } else
+ } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) {
newctrl |= ACM_CTRL_DTR;
+ }
if (newctrl != acm->ctrlout)
acm_set_control(acm, acm->ctrlout = newctrl);
tty_port_init(&acm->port);
acm->port.ops = &acm_port_ops;
init_usb_anchor(&acm->delayed);
+ acm->quirks = quirks;
buf = usb_alloc_coherent(usb_dev, ctrlsize, GFP_KERNEL, &acm->ctrl_dma);
if (!buf) {
{ USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */
.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
},
+ { USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */
+ .driver_info = QUIRK_CONTROL_LINE_STATE, },
+ { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
{ USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
},
/* Motorola H24 HSPA module: */
unsigned int throttle_req:1; /* throttle requested */
u8 bInterval;
struct usb_anchor delayed; /* writes queued for a device about to be woken */
+ unsigned long quirks;
};
#define CDC_DATA_INTERFACE_TYPE 0x0a
#define NOT_A_MODEM BIT(3)
#define NO_DATA_INTERFACE BIT(4)
#define IGNORE_DEVICE BIT(5)
+#define QUIRK_CONTROL_LINE_STATE BIT(6)
return -EINVAL;
if (dev->speed != USB_SPEED_SUPER)
return -EINVAL;
+ if (dev->state < USB_STATE_CONFIGURED)
+ return -ENODEV;
for (i = 0; i < num_eps; i++) {
/* Streams only apply to bulk endpoints. */
if (retval)
goto fail;
- if (hcd->usb_phy && !hdev->parent)
- usb_phy_notify_connect(hcd->usb_phy, udev->speed);
-
/*
* Some superspeed devices have finished the link training process
* and attached to a superspeed hub port, but the device descriptor
/* Disconnect any existing devices under this port */
if (udev) {
- if (hcd->usb_phy && !hdev->parent &&
- !(portstatus & USB_PORT_STAT_CONNECTION))
+ if (hcd->usb_phy && !hdev->parent)
usb_phy_notify_disconnect(hcd->usb_phy, udev->speed);
usb_disconnect(&port_dev->child);
}
port_dev->child = NULL;
spin_unlock_irq(&device_state_lock);
mutex_unlock(&usb_port_peer_mutex);
+ } else {
+ if (hcd->usb_phy && !hdev->parent)
+ usb_phy_notify_connect(hcd->usb_phy,
+ udev->speed);
}
}
{ USB_DEVICE(0x04f3, 0x0089), .driver_info =
USB_QUIRK_DEVICE_QUALIFIER },
+ { USB_DEVICE(0x04f3, 0x009b), .driver_info =
+ USB_QUIRK_DEVICE_QUALIFIER },
+
+ { USB_DEVICE(0x04f3, 0x016f), .driver_info =
+ USB_QUIRK_DEVICE_QUALIFIER },
+
/* Roland SC-8820 */
{ USB_DEVICE(0x0582, 0x0007), .driver_info = USB_QUIRK_RESET_RESUME },
u32 usb_status = readl(hsotg->regs + GOTGCTL);
- dev_info(hsotg->dev, "%s: USBRst\n", __func__);
+ dev_dbg(hsotg->dev, "%s: USBRst\n", __func__);
dev_dbg(hsotg->dev, "GNPTXSTS=%08x\n",
readl(hsotg->regs + GNPTXSTS));
config USB_EHCI_EXYNOS
tristate "EHCI support for Samsung S5P/EXYNOS SoC Series"
- depends on PLAT_S5P || ARCH_EXYNOS
+ depends on ARCH_S5PV210 || ARCH_EXYNOS
help
Enable support for the Samsung Exynos SOC's on-chip EHCI controller.
config USB_OHCI_EXYNOS
tristate "OHCI support for Samsung S5P/EXYNOS SoC Series"
- depends on PLAT_S5P || ARCH_EXYNOS
+ depends on ARCH_S5PV210 || ARCH_EXYNOS
help
Enable support for the Samsung Exynos SOC's on-chip OHCI controller.
wa->wa_descr = wa_descr = (struct usb_wa_descriptor *) hdr;
if (le16_to_cpu(wa_descr->bcdWAVersion) > 0x0100)
dev_warn(dev, "Wire Adapter v%d.%d newer than groked v1.0\n",
- le16_to_cpu(wa_descr->bcdWAVersion) & 0xff00 >> 8,
+ (le16_to_cpu(wa_descr->bcdWAVersion) & 0xff00) >> 8,
le16_to_cpu(wa_descr->bcdWAVersion) & 0x00ff);
result = 0;
error:
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
xhci->quirks |= XHCI_AVOID_BEI;
}
- if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
- (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI ||
- pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) {
- /* Workaround for occasional spurious wakeups from S5 (or
- * any other sleep) on Haswell machines with LPT and LPT-LP
- * with the new Intel BIOS
- */
- /* Limit the quirk to only known vendors, as this triggers
- * yet another BIOS bug on some other machines
- * https://bugzilla.kernel.org/show_bug.cgi?id=66171
- */
- if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP)
- xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
- }
if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) {
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
pdev->device == 0x3432)
xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ pdev->device == 0x1042)
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
+
if (xhci->quirks & XHCI_RESET_ON_RESUME)
xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
"QUIRK: Resetting on resume");
port->interrupt_out_urb->transfer_buffer_length = length;
priv->cur_pos = priv->cur_pos + length;
- result = usb_submit_urb(port->interrupt_out_urb, GFP_NOIO);
+ result = usb_submit_urb(port->interrupt_out_urb,
+ GFP_ATOMIC);
dev_dbg(&port->dev, "%s - Send write URB returns: %i\n", __func__, result);
todo = priv->filled - priv->cur_pos;
if (priv->device_type == KOBIL_ADAPTER_B_PRODUCT_ID ||
priv->device_type == KOBIL_ADAPTER_K_PRODUCT_ID) {
result = usb_submit_urb(port->interrupt_in_urb,
- GFP_NOIO);
+ GFP_ATOMIC);
dev_dbg(&port->dev, "%s - Send read URB returns: %i\n", __func__, result);
}
}
/* The connected devices do not have a bulk write endpoint,
 * to transmit data to the barcode device the control endpoint is used */
- dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_NOIO);
+ dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC);
if (!dr) {
count = -ENOMEM;
goto error_no_dr;
us->iobuf[0] = 0x1;
result = usb_stor_control_msg(us, us->send_ctrl_pipe,
0x0C, USB_RECIP_INTERFACE | USB_TYPE_VENDOR,
- 0x01, 0x0, us->iobuf, 0x1, USB_CTRL_SET_TIMEOUT);
+ 0x01, 0x0, us->iobuf, 0x1, 5 * HZ);
usb_stor_dbg(us, "-- result is %d\n", result);
return 0;
result = usb_stor_control_msg(us, us->send_ctrl_pipe,
USB_REQ_SET_FEATURE,
USB_TYPE_STANDARD | USB_RECIP_DEVICE,
- 0x01, 0x0, NULL, 0x0, 1000);
+ 0x01, 0x0, NULL, 0x0, 1 * HZ);
usb_stor_dbg(us, "Huawei mode set result is %d\n", result);
return 0;
}
return 0;
}
+#ifdef CONFIG_PM
static int config_autodelink_before_power_down(struct us_data *us)
{
struct rts51x_chip *chip = (struct rts51x_chip *)(us->extra);
}
}
}
+#endif
#ifdef CONFIG_REALTEK_AUTOPM
static void fw5895_set_mmc_wp(struct us_data *us)
*/
if (result == USB_STOR_XFER_LONG)
fake_sense = 1;
+
+ /*
+ * Sometimes a device will mistakenly skip the data phase
+ * and go directly to the status phase without sending a
+ * zero-length packet. If we get a 13-byte response here,
+ * check whether it really is a CSW.
+ */
+ if (result == USB_STOR_XFER_SHORT &&
+ srb->sc_data_direction == DMA_FROM_DEVICE &&
+ transfer_length - scsi_get_resid(srb) ==
+ US_BULK_CS_WRAP_LEN) {
+ struct scatterlist *sg = NULL;
+ unsigned int offset = 0;
+
+ if (usb_stor_access_xfer_buf((unsigned char *) bcs,
+ US_BULK_CS_WRAP_LEN, srb, &sg,
+ &offset, FROM_XFER_BUF) ==
+ US_BULK_CS_WRAP_LEN &&
+ bcs->Signature ==
+ cpu_to_le32(US_BULK_CS_SIGN)) {
+ usb_stor_dbg(us, "Device skipped data phase\n");
+ scsi_set_resid(srb, transfer_length);
+ goto skipped_data_phase;
+ }
+ }
}
/* See flow chart on pg 15 of the Bulk Only Transport spec for
if (result != USB_STOR_XFER_GOOD)
return USB_STOR_TRANSPORT_ERROR;
+ skipped_data_phase:
/* check bulk status */
residue = le32_to_cpu(bcs->Residue);
usb_stor_dbg(us, "Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NO_ATA_1X),
+/* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+UNUSUAL_DEV(0x0bc2, 0x3320, 0x0000, 0x9999,
+ "Seagate",
+ "Expansion Desk",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
+/* Reported-by: Bogdan Mihalcea <bogdan.mihalcea@infim.ro> */
+UNUSUAL_DEV(0x0bc2, 0xa003, 0x0000, 0x9999,
+ "Seagate",
+ "Backup Plus",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
/* https://bbs.archlinux.org/viewtopic.php?id=183190 */
UNUSUAL_DEV(0x0bc2, 0xab20, 0x0000, 0x9999,
"Seagate",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NO_ATA_1X),
+/* https://bbs.archlinux.org/viewtopic.php?id=183190 */
+UNUSUAL_DEV(0x0bc2, 0xab21, 0x0000, 0x9999,
+ "Seagate",
+ "Backup+ BK",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
/* Reported-by: Claudio Bizzarri <claudio.bizzarri@gmail.com> */
UNUSUAL_DEV(0x152d, 0x0567, 0x0000, 0x9999,
"JMicron",
"ASM1051",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_IGNORE_UAS),
+
+/* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ "VIA",
+ "VL711",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
ret = 0;
fail:
while (ret < 0 && !list_empty(&tmplist)) {
- sums = list_entry(&tmplist, struct btrfs_ordered_sum, list);
+ sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list);
list_del(&sums->list);
kfree(sums);
}
for (i = 0; i < CEPH_CAP_BITS; i++)
if ((dirty & (1 << i)) &&
- flush_tid == ci->i_cap_flush_tid[i])
+ (u16)flush_tid == ci->i_cap_flush_tid[i])
cleaned |= 1 << i;
dout("handle_cap_flush_ack inode %p mds%d seq %d on %s cleaned %s,"
loff_t offset = header->args.offset;
size_t count = header->args.count;
struct page **pages = header->args.pages;
- int pg_index = pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
+ int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
unsigned int pg_len;
struct blk_plug plug;
int i;
dprintk("%s CREATING PIPEFS MESSAGE\n", __func__);
+ mutex_lock(&nn->bl_mutex);
bl_pipe_msg.bl_wq = &nn->bl_wq;
b->simple.len += 4; /* single volume */
if (b->simple.len > PAGE_SIZE)
- return -EIO;
+ goto out_unlock;
memset(msg, 0, sizeof(*msg));
msg->len = sizeof(*bl_msg) + b->simple.len;
msg->data = kzalloc(msg->len, gfp_mask);
if (!msg->data)
- goto out;
+ goto out_free_data;
bl_msg = msg->data;
bl_msg->type = BL_DEVICE_MOUNT,
rc = rpc_queue_upcall(nn->bl_device_pipe, msg);
if (rc < 0) {
remove_wait_queue(&nn->bl_wq, &wq);
- goto out;
+ goto out_free_data;
}
set_current_state(TASK_UNINTERRUPTIBLE);
if (reply->status != BL_DEVICE_REQUEST_PROC) {
printk(KERN_WARNING "%s failed to decode device: %d\n",
__func__, reply->status);
- goto out;
+ goto out_free_data;
}
dev = MKDEV(reply->major, reply->minor);
-out:
+out_free_data:
kfree(msg->data);
+out_unlock:
+ mutex_unlock(&nn->bl_mutex);
return dev;
}
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct dentry *dentry;
+ mutex_init(&nn->bl_mutex);
init_waitqueue_head(&nn->bl_wq);
nn->bl_device_pipe = rpc_mkpipe_data(&bl_upcall_ops, 0);
if (IS_ERR(nn->bl_device_pipe))
continue;
if (!test_bit(NFS_DELEGATED_STATE, &state->flags))
continue;
+ if (!nfs4_valid_open_stateid(state))
+ continue;
if (!nfs4_stateid_match(&state->stateid, stateid))
continue;
get_nfs_open_context(ctx);
{
int res = 0;
- res = nfs4_proc_delegreturn(inode, delegation->cred, &delegation->stateid, issync);
+ if (!test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+ res = nfs4_proc_delegreturn(inode,
+ delegation->cred,
+ &delegation->stateid,
+ issync);
nfs_free_delegation(delegation);
return res;
}
{
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
- int err;
+ int err = 0;
if (delegation == NULL)
return 0;
do {
+ if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+ break;
err = nfs_delegation_claim_opens(inode, &delegation->stateid);
if (!issync || err != -EAGAIN)
break;
rcu_read_unlock();
}
+static void nfs_revoke_delegation(struct inode *inode)
+{
+ struct nfs_delegation *delegation;
+ rcu_read_lock();
+ delegation = rcu_dereference(NFS_I(inode)->delegation);
+ if (delegation != NULL) {
+ set_bit(NFS_DELEGATION_REVOKED, &delegation->flags);
+ nfs_mark_return_delegation(NFS_SERVER(inode), delegation);
+ }
+ rcu_read_unlock();
+}
+
void nfs_remove_bad_delegation(struct inode *inode)
{
struct nfs_delegation *delegation;
+ nfs_revoke_delegation(inode);
delegation = nfs_inode_detach_delegation(inode);
if (delegation) {
nfs_inode_find_state_and_recover(inode, &delegation->stateid);
NFS_DELEGATION_RETURN_IF_CLOSED,
NFS_DELEGATION_REFERENCED,
NFS_DELEGATION_RETURNING,
+ NFS_DELEGATION_REVOKED,
};
int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct nfs_openres *res);
case -ENOENT:
d_drop(dentry);
d_add(dentry, NULL);
+ nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
break;
case -EISDIR:
case -ENOTDIR:
{
struct nfs_direct_req *dreq = container_of(kref, struct nfs_direct_req, kref);
+ nfs_free_pnfs_ds_cinfo(&dreq->ds_cinfo);
if (dreq->l_ctx != NULL)
nfs_put_lock_context(dreq->l_ctx);
if (dreq->ctx != NULL)
case -NFS4ERR_DELEG_REVOKED:
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_BAD_STATEID:
- if (state == NULL)
- break;
- nfs_remove_bad_delegation(state->inode);
case -NFS4ERR_OPENMODE:
if (state == NULL)
break;
{
struct inode *inode = dentry->d_inode;
int need_atime = NFS_I(inode)->cache_validity & NFS_INO_INVALID_ATIME;
- int err;
+ int err = 0;
trace_nfs_getattr_enter(inode);
/* Flush out writes to the server in order to update c/mtime. */
struct rpc_pipe *bl_device_pipe;
struct bl_dev_msg bl_mount_reply;
wait_queue_head_t bl_wq;
+ struct mutex bl_mutex;
struct list_head nfs_client_list;
struct list_head nfs_volume_list;
#if IS_ENABLED(CONFIG_NFS_V4)
case -NFS4ERR_DELEG_REVOKED:
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_BAD_STATEID:
- if (inode != NULL && nfs4_have_delegation(inode, FMODE_READ)) {
- nfs_remove_bad_delegation(inode);
- exception->retry = 1;
- break;
- }
if (state == NULL)
break;
ret = nfs4_schedule_stateid_recovery(server, state);
nfs_inode_find_state_and_recover(state->inode,
stateid);
nfs4_schedule_stateid_recovery(server, state);
- return 0;
+ return -EAGAIN;
case -NFS4ERR_DELAY:
case -NFS4ERR_GRACE:
set_bit(NFS_DELEGATED_STATE, &state->flags);
return ret;
}
+static void nfs_finish_clear_delegation_stateid(struct nfs4_state *state)
+{
+ nfs_remove_bad_delegation(state->inode);
+ write_seqlock(&state->seqlock);
+ nfs4_stateid_copy(&state->stateid, &state->open_stateid);
+ write_sequnlock(&state->seqlock);
+ clear_bit(NFS_DELEGATED_STATE, &state->flags);
+}
+
+static void nfs40_clear_delegation_stateid(struct nfs4_state *state)
+{
+ if (rcu_access_pointer(NFS_I(state->inode)->delegation) != NULL)
+ nfs_finish_clear_delegation_stateid(state);
+}
+
+static int nfs40_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *state)
+{
+ /* NFSv4.0 doesn't allow for delegation recovery on open expire */
+ nfs40_clear_delegation_stateid(state);
+ return nfs4_open_expired(sp, state);
+}
+
#if defined(CONFIG_NFS_V4_1)
-static void nfs41_clear_delegation_stateid(struct nfs4_state *state)
+static void nfs41_check_delegation_stateid(struct nfs4_state *state)
{
struct nfs_server *server = NFS_SERVER(state->inode);
- nfs4_stateid *stateid = &state->stateid;
+ nfs4_stateid stateid;
struct nfs_delegation *delegation;
- struct rpc_cred *cred = NULL;
- int status = -NFS4ERR_BAD_STATEID;
-
- /* If a state reset has been done, test_stateid is unneeded */
- if (test_bit(NFS_DELEGATED_STATE, &state->flags) == 0)
- return;
+ struct rpc_cred *cred;
+ int status;
/* Get the delegation credential for use by test/free_stateid */
rcu_read_lock();
delegation = rcu_dereference(NFS_I(state->inode)->delegation);
- if (delegation != NULL &&
- nfs4_stateid_match(&delegation->stateid, stateid)) {
- cred = get_rpccred(delegation->cred);
- rcu_read_unlock();
- status = nfs41_test_stateid(server, stateid, cred);
- trace_nfs4_test_delegation_stateid(state, NULL, status);
- } else
+ if (delegation == NULL) {
rcu_read_unlock();
+ return;
+ }
+
+ nfs4_stateid_copy(&stateid, &delegation->stateid);
+ cred = get_rpccred(delegation->cred);
+ rcu_read_unlock();
+ status = nfs41_test_stateid(server, &stateid, cred);
+ trace_nfs4_test_delegation_stateid(state, NULL, status);
if (status != NFS_OK) {
/* Free the stateid unless the server explicitly
* informs us the stateid is unrecognized. */
if (status != -NFS4ERR_BAD_STATEID)
- nfs41_free_stateid(server, stateid, cred);
- nfs_remove_bad_delegation(state->inode);
-
- write_seqlock(&state->seqlock);
- nfs4_stateid_copy(&state->stateid, &state->open_stateid);
- write_sequnlock(&state->seqlock);
- clear_bit(NFS_DELEGATED_STATE, &state->flags);
+ nfs41_free_stateid(server, &stateid, cred);
+ nfs_finish_clear_delegation_stateid(state);
}
- if (cred != NULL)
- put_rpccred(cred);
+ put_rpccred(cred);
}
/**
{
int status;
- nfs41_clear_delegation_stateid(state);
+ nfs41_check_delegation_stateid(state);
status = nfs41_check_open_stateid(state);
if (status != NFS_OK)
status = nfs4_open_expired(sp, state);
seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
ret = _nfs4_proc_open(opendata);
- if (ret != 0) {
- if (ret == -ENOENT) {
- dentry = opendata->dentry;
- if (dentry->d_inode)
- d_delete(dentry);
- else if (d_unhashed(dentry))
- d_add(dentry, NULL);
-
- nfs_set_verifier(dentry,
- nfs_save_change_attribute(opendata->dir->d_inode));
- }
+ if (ret != 0)
goto out;
- }
state = nfs4_opendata_to_nfs4_state(opendata);
ret = PTR_ERR(state);
case -NFS4ERR_DELEG_REVOKED:
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_BAD_STATEID:
- if (state == NULL)
- break;
- nfs_remove_bad_delegation(state->inode);
case -NFS4ERR_OPENMODE:
if (state == NULL)
break;
static const struct nfs4_state_recovery_ops nfs40_nograce_recovery_ops = {
.owner_flag_bit = NFS_OWNER_RECLAIM_NOGRACE,
.state_flag_bit = NFS_STATE_RECLAIM_NOGRACE,
- .recover_open = nfs4_open_expired,
+ .recover_open = nfs40_open_expired,
.recover_lock = nfs4_lock_expired,
.establish_clid = nfs4_init_clientid,
};
| NFS_CAP_CHANGE_ATTR
| NFS_CAP_POSIX_LOCK
| NFS_CAP_STATEID_NFSV41
- | NFS_CAP_ATOMIC_OPEN_V1
- | NFS_CAP_SEEK,
+ | NFS_CAP_ATOMIC_OPEN_V1,
.init_client = nfs41_init_client,
.shutdown_client = nfs41_shutdown_client,
.match_stateid = nfs41_match_stateid,
| NFS_CAP_CHANGE_ATTR
| NFS_CAP_POSIX_LOCK
| NFS_CAP_STATEID_NFSV41
- | NFS_CAP_ATOMIC_OPEN_V1,
+ | NFS_CAP_ATOMIC_OPEN_V1
+ | NFS_CAP_SEEK,
.init_client = nfs41_init_client,
.shutdown_client = nfs41_shutdown_client,
.match_stateid = nfs41_match_stateid,
if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags))
nfs_release_request(req);
- else
- WARN_ON_ONCE(1);
}
static void
&fsnotify_mark_srcu);
}
+ /*
+ * We need to merge inode & vfsmount mark lists so that inode mark
+ * ignore masks are properly reflected for mount mark notifications.
+ * That's why this traversal is so complicated...
+ */
while (inode_node || vfsmount_node) {
- inode_group = vfsmount_group = NULL;
+ inode_group = NULL;
+ inode_mark = NULL;
+ vfsmount_group = NULL;
+ vfsmount_mark = NULL;
if (inode_node) {
inode_mark = hlist_entry(srcu_dereference(inode_node, &fsnotify_mark_srcu),
vfsmount_group = vfsmount_mark->group;
}
- if (inode_group > vfsmount_group) {
- /* handle inode */
- ret = send_to_group(to_tell, inode_mark, NULL, mask,
- data, data_is, cookie, file_name);
- /* we didn't use the vfsmount_mark */
- vfsmount_group = NULL;
- } else if (vfsmount_group > inode_group) {
- ret = send_to_group(to_tell, NULL, vfsmount_mark, mask,
- data, data_is, cookie, file_name);
- inode_group = NULL;
- } else {
- ret = send_to_group(to_tell, inode_mark, vfsmount_mark,
- mask, data, data_is, cookie,
- file_name);
+ if (inode_group && vfsmount_group) {
+ int cmp = fsnotify_compare_groups(inode_group,
+ vfsmount_group);
+ if (cmp > 0) {
+ inode_group = NULL;
+ inode_mark = NULL;
+ } else if (cmp < 0) {
+ vfsmount_group = NULL;
+ vfsmount_mark = NULL;
+ }
}
+ ret = send_to_group(to_tell, inode_mark, vfsmount_mark, mask,
+ data, data_is, cookie, file_name);
if (ret && (mask & ALL_FSNOTIFY_PERM_EVENTS))
goto out;
/* protects reads of inode and vfsmount marks list */
extern struct srcu_struct fsnotify_mark_srcu;
+/* compare two groups for sorting of marks lists */
+extern int fsnotify_compare_groups(struct fsnotify_group *a,
+ struct fsnotify_group *b);
+
extern void fsnotify_set_inode_mark_mask_locked(struct fsnotify_mark *fsn_mark,
__u32 mask);
/* add a mark to an inode */
{
struct fsnotify_mark *lmark, *last = NULL;
int ret = 0;
+ int cmp;
mark->flags |= FSNOTIFY_MARK_FLAG_INODE;
goto out;
}
- if (mark->group->priority < lmark->group->priority)
- continue;
-
- if ((mark->group->priority == lmark->group->priority) &&
- (mark->group < lmark->group))
+ cmp = fsnotify_compare_groups(lmark->group, mark->group);
+ if (cmp < 0)
continue;
hlist_add_before_rcu(&mark->i.i_list, &lmark->i.i_list);
mark->ignored_mask = mask;
}
+/*
+ * Sorting function for lists of fsnotify marks.
+ *
+ * Fanotify supports different notification classes (reflected as priority of
+ * notification group). Events shall be passed to notification groups in
+ * decreasing priority order. To achieve this marks in notification lists for
+ * inodes and vfsmounts are sorted so that priorities of corresponding groups
+ * are descending.
+ *
+ * Furthermore, correct handling of the ignore mask requires processing inode
+ * and vfsmount marks of each group together. Using the group address as
+ * further sort criterion provides a unique sorting order and thus we can
+ * merge inode and vfsmount lists of marks in linear time and find groups
+ * present in both lists.
+ *
+ * A return value of 1 signifies that b has priority over a.
+ * A return value of 0 signifies that the two marks have to be handled together.
+ * A return value of -1 signifies that a has priority over b.
+ */
+int fsnotify_compare_groups(struct fsnotify_group *a, struct fsnotify_group *b)
+{
+ if (a == b)
+ return 0;
+ if (!a)
+ return 1;
+ if (!b)
+ return -1;
+ if (a->priority < b->priority)
+ return 1;
+ if (a->priority > b->priority)
+ return -1;
+ if (a < b)
+ return 1;
+ return -1;
+}
+
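
As a side note, here is a self-contained sketch of the linear two-list merge that such a three-way comparator enables, mirroring what the mark-list traversal above does for inode and vfsmount marks; the integer keys and array contents are assumptions for illustration only, not kernel types:

    #include <stdio.h>

    /* Three-way comparison over toy integer keys, standing in for the
     * group comparator: 0 means "handle the two entries together". */
    static int cmp(int a, int b)
    {
            if (a == b)
                    return 0;
            return (a < b) ? -1 : 1;
    }

    int main(void)
    {
            int inode_keys[]    = { 1, 2, 4 };      /* assumed, pre-sorted */
            int vfsmount_keys[] = { 2, 3 };         /* assumed, pre-sorted */
            size_t i = 0, j = 0, ni = 3, nj = 2;

            while (i < ni || j < nj) {
                    int c;

                    if (i < ni && j < nj)
                            c = cmp(inode_keys[i], vfsmount_keys[j]);
                    else
                            c = (i < ni) ? -1 : 1;

                    if (c == 0) {           /* key present in both lists */
                            printf("both:  %d\n", inode_keys[i]);
                            i++;
                            j++;
                    } else if (c < 0) {
                            printf("inode: %d\n", inode_keys[i++]);
                    } else {
                            printf("mount: %d\n", vfsmount_keys[j++]);
                    }
            }
            return 0;
    }
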
/*
* Attach an initialized mark to a given group and fs object.
* These marks may be used for the fsnotify backend to determine which
struct mount *m = real_mount(mnt);
struct fsnotify_mark *lmark, *last = NULL;
int ret = 0;
+ int cmp;
mark->flags |= FSNOTIFY_MARK_FLAG_VFSMOUNT;
goto out;
}
- if (mark->group->priority < lmark->group->priority)
- continue;
-
- if ((mark->group->priority == lmark->group->priority) &&
- (mark->group < lmark->group))
+ cmp = fsnotify_compare_groups(lmark->group, mark->group);
+ if (cmp < 0)
continue;
hlist_add_before_rcu(&mark->m.m_list, &lmark->m.m_list);
goto out;
}
-
+/*
+ * Preallocate and zero a range of a file. This mechanism has the allocation
+ * semantics of fallocate and in addition converts data in the range to zeroes.
+ */
int
xfs_zero_file_space(
struct xfs_inode *ip,
xfs_off_t len)
{
struct xfs_mount *mp = ip->i_mount;
- uint granularity;
- xfs_off_t start_boundary;
- xfs_off_t end_boundary;
+ uint blksize;
int error;
trace_xfs_zero_file_space(ip);
- granularity = max_t(uint, 1 << mp->m_sb.sb_blocklog, PAGE_CACHE_SIZE);
+ blksize = 1 << mp->m_sb.sb_blocklog;
/*
- * Round the range of extents we are going to convert inwards. If the
- * offset is aligned, then it doesn't get changed so we zero from the
- * start of the block offset points to.
+ * Punch a hole and prealloc the range. We use hole punch rather than
+ * unwritten extent conversion for two reasons:
+ *
+ * 1.) Hole punch handles partial block zeroing for us.
+ *
+ * 2.) If prealloc returns ENOSPC, the file range is still zero-valued
+ * by virtue of the hole punch.
*/
- start_boundary = round_up(offset, granularity);
- end_boundary = round_down(offset + len, granularity);
-
- ASSERT(start_boundary >= offset);
- ASSERT(end_boundary <= offset + len);
-
- if (start_boundary < end_boundary - 1) {
- /*
- * Writeback the range to ensure any inode size updates due to
- * appending writes make it to disk (otherwise we could just
- * punch out the delalloc blocks).
- */
- error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping,
- start_boundary, end_boundary - 1);
- if (error)
- goto out;
- truncate_pagecache_range(VFS_I(ip), start_boundary,
- end_boundary - 1);
-
- /* convert the blocks */
- error = xfs_alloc_file_space(ip, start_boundary,
- end_boundary - start_boundary - 1,
- XFS_BMAPI_PREALLOC | XFS_BMAPI_CONVERT);
- if (error)
- goto out;
-
- /* We've handled the interior of the range, now for the edges */
- if (start_boundary != offset) {
- error = xfs_iozero(ip, offset, start_boundary - offset);
- if (error)
- goto out;
- }
-
- if (end_boundary != offset + len)
- error = xfs_iozero(ip, end_boundary,
- offset + len - end_boundary);
-
- } else {
- /*
- * It's either a sub-granularity range or the range spanned lies
- * partially across two adjacent blocks.
- */
- error = xfs_iozero(ip, offset, len);
- }
+ error = xfs_free_file_space(ip, offset, len);
+ if (error)
+ goto out;
+ error = xfs_alloc_file_space(ip, round_down(offset, blksize),
+ round_up(offset + len, blksize) -
+ round_down(offset, blksize),
+ XFS_BMAPI_PREALLOC);
out:
return error;
XFS_WANT_CORRUPTED_RETURN(stat == 1);
/* Check if the record contains the inode in request */
- if (irec->ir_startino + XFS_INODES_PER_CHUNK <= agino)
- return -EINVAL;
+ if (irec->ir_startino + XFS_INODES_PER_CHUNK <= agino) {
+ *icount = 0;
+ return 0;
+ }
idx = agino - irec->ir_startino + 1;
if (idx < XFS_INODES_PER_CHUNK &&
#define XFS_BULKSTAT_UBLEFT(ubleft) ((ubleft) >= statstruct_size)
+struct xfs_bulkstat_agichunk {
+ char __user **ac_ubuffer;/* pointer into user's buffer */
+ int ac_ubleft; /* bytes left in user's buffer */
+ int ac_ubelem; /* spaces used in user's buffer */
+};
+
/*
* Process inodes in chunk with a pointer to a formatter function
* that will iget the inode and fill in the appropriate structure.
*/
-int
+static int
xfs_bulkstat_ag_ichunk(
struct xfs_mount *mp,
xfs_agnumber_t agno,
struct xfs_inobt_rec_incore *irbp,
bulkstat_one_pf formatter,
size_t statstruct_size,
- struct xfs_bulkstat_agichunk *acp)
+ struct xfs_bulkstat_agichunk *acp,
+ xfs_agino_t *last_agino)
{
- xfs_ino_t lastino = acp->ac_lastino;
char __user **ubufp = acp->ac_ubuffer;
- int ubleft = acp->ac_ubleft;
- int ubelem = acp->ac_ubelem;
- int chunkidx, clustidx;
+ int chunkidx;
int error = 0;
- xfs_agino_t agino;
+ xfs_agino_t agino = irbp->ir_startino;
- for (agino = irbp->ir_startino, chunkidx = clustidx = 0;
- XFS_BULKSTAT_UBLEFT(ubleft) &&
- irbp->ir_freecount < XFS_INODES_PER_CHUNK;
- chunkidx++, clustidx++, agino++) {
- int fmterror; /* bulkstat formatter result */
+ for (chunkidx = 0; chunkidx < XFS_INODES_PER_CHUNK;
+ chunkidx++, agino++) {
+ int fmterror;
int ubused;
- xfs_ino_t ino = XFS_AGINO_TO_INO(mp, agno, agino);
- ASSERT(chunkidx < XFS_INODES_PER_CHUNK);
+ /* inode won't fit in buffer, we are done */
+ if (acp->ac_ubleft < statstruct_size)
+ break;
/* Skip if this inode is free */
- if (XFS_INOBT_MASK(chunkidx) & irbp->ir_free) {
- lastino = ino;
+ if (XFS_INOBT_MASK(chunkidx) & irbp->ir_free)
continue;
- }
-
- /*
- * Count used inodes as free so we can tell when the
- * chunk is used up.
- */
- irbp->ir_freecount++;
/* Get the inode and fill in a single buffer */
ubused = statstruct_size;
- error = formatter(mp, ino, *ubufp, ubleft, &ubused, &fmterror);
- if (fmterror == BULKSTAT_RV_NOTHING) {
- if (error && error != -ENOENT && error != -EINVAL) {
- ubleft = 0;
- break;
- }
- lastino = ino;
- continue;
- }
- if (fmterror == BULKSTAT_RV_GIVEUP) {
- ubleft = 0;
+ error = formatter(mp, XFS_AGINO_TO_INO(mp, agno, agino),
+ *ubufp, acp->ac_ubleft, &ubused, &fmterror);
+
+ if (fmterror == BULKSTAT_RV_GIVEUP ||
+ (error && error != -ENOENT && error != -EINVAL)) {
+ acp->ac_ubleft = 0;
ASSERT(error);
break;
}
- if (*ubufp)
- *ubufp += ubused;
- ubleft -= ubused;
- ubelem++;
- lastino = ino;
+
+ /* be careful not to leak error if at end of chunk */
+ if (fmterror == BULKSTAT_RV_NOTHING || error) {
+ error = 0;
+ continue;
+ }
+
+ *ubufp += ubused;
+ acp->ac_ubleft -= ubused;
+ acp->ac_ubelem++;
}
- acp->ac_lastino = lastino;
- acp->ac_ubleft = ubleft;
- acp->ac_ubelem = ubelem;
+ /*
+ * Post-update *last_agino. At this point, agino will always point one
+ * inode past the last inode we processed successfully. Hence we
+ * subtract that inode when setting the *last_agino cursor so that we
+ * return the correct cookie to userspace. On the next bulkstat call,
+ * the inode under the lastino cookie will be skipped as we have already
+ * processed it here.
+ */
+ *last_agino = agino - 1;
return error;
}
xfs_agino_t agino; /* inode # in allocation group */
xfs_agnumber_t agno; /* allocation group number */
xfs_btree_cur_t *cur; /* btree cursor for ialloc btree */
- int end_of_ag; /* set if we've seen the ag end */
- int error; /* error code */
- int fmterror;/* bulkstat formatter result */
- int i; /* loop index */
- int icount; /* count of inodes good in irbuf */
size_t irbsize; /* size of irec buffer in bytes */
- xfs_ino_t ino; /* inode number (filesystem) */
- xfs_inobt_rec_incore_t *irbp; /* current irec buffer pointer */
xfs_inobt_rec_incore_t *irbuf; /* start of irec buffer */
- xfs_inobt_rec_incore_t *irbufend; /* end of good irec buffer entries */
- xfs_ino_t lastino; /* last inode number returned */
int nirbuf; /* size of irbuf */
- int rval; /* return value error code */
- int tmp; /* result value from btree calls */
int ubcount; /* size of user's buffer */
- int ubleft; /* bytes left in user's buffer */
- char __user *ubufp; /* pointer into user's buffer */
- int ubelem; /* spaces used in user's buffer */
+ struct xfs_bulkstat_agichunk ac;
+ int error = 0;
/*
* Get the last inode value, see if there's nothing to do.
*/
- ino = (xfs_ino_t)*lastinop;
- lastino = ino;
- agno = XFS_INO_TO_AGNO(mp, ino);
- agino = XFS_INO_TO_AGINO(mp, ino);
+ agno = XFS_INO_TO_AGNO(mp, *lastinop);
+ agino = XFS_INO_TO_AGINO(mp, *lastinop);
if (agno >= mp->m_sb.sb_agcount ||
- ino != XFS_AGINO_TO_INO(mp, agno, agino)) {
+ *lastinop != XFS_AGINO_TO_INO(mp, agno, agino)) {
*done = 1;
*ubcountp = 0;
return 0;
}
ubcount = *ubcountp; /* statstruct's */
- ubleft = ubcount * statstruct_size; /* bytes */
- *ubcountp = ubelem = 0;
+ ac.ac_ubuffer = &ubuffer;
+ ac.ac_ubleft = ubcount * statstruct_size; /* bytes */
+ ac.ac_ubelem = 0;
+
+ *ubcountp = 0;
*done = 0;
- fmterror = 0;
- ubufp = ubuffer;
+
irbuf = kmem_zalloc_greedy(&irbsize, PAGE_SIZE, PAGE_SIZE * 4);
if (!irbuf)
return -ENOMEM;
* Loop over the allocation groups, starting from the last
* inode returned; 0 means start of the allocation group.
*/
- rval = 0;
- while (XFS_BULKSTAT_UBLEFT(ubleft) && agno < mp->m_sb.sb_agcount) {
- cond_resched();
+ while (agno < mp->m_sb.sb_agcount) {
+ struct xfs_inobt_rec_incore *irbp = irbuf;
+ struct xfs_inobt_rec_incore *irbufend = irbuf + nirbuf;
+ bool end_of_ag = false;
+ int icount = 0;
+ int stat;
+
error = xfs_ialloc_read_agi(mp, NULL, agno, &agbp);
if (error)
break;
*/
cur = xfs_inobt_init_cursor(mp, NULL, agbp, agno,
XFS_BTNUM_INO);
- irbp = irbuf;
- irbufend = irbuf + nirbuf;
- end_of_ag = 0;
- icount = 0;
if (agino > 0) {
/*
* In the middle of an allocation group, we need to get
error = xfs_bulkstat_grab_ichunk(cur, agino, &icount, &r);
if (error)
- break;
+ goto del_cursor;
if (icount) {
irbp->ir_startino = r.ir_startino;
irbp->ir_freecount = r.ir_freecount;
irbp->ir_free = r.ir_free;
irbp++;
- agino = r.ir_startino + XFS_INODES_PER_CHUNK;
}
/* Increment to the next record */
- error = xfs_btree_increment(cur, 0, &tmp);
+ error = xfs_btree_increment(cur, 0, &stat);
} else {
/* Start of ag. Lookup the first inode chunk */
- error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &tmp);
+ error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &stat);
+ }
+ if (error || stat == 0) {
+ end_of_ag = true;
+ goto del_cursor;
}
- if (error)
- break;
/*
* Loop through inode btree records in this ag,
while (irbp < irbufend && icount < ubcount) {
struct xfs_inobt_rec_incore r;
- error = xfs_inobt_get_rec(cur, &r, &i);
- if (error || i == 0) {
- end_of_ag = 1;
- break;
+ error = xfs_inobt_get_rec(cur, &r, &stat);
+ if (error || stat == 0) {
+ end_of_ag = true;
+ goto del_cursor;
}
/*
irbp++;
icount += XFS_INODES_PER_CHUNK - r.ir_freecount;
}
- /*
- * Set agino to after this chunk and bump the cursor.
- */
- agino = r.ir_startino + XFS_INODES_PER_CHUNK;
- error = xfs_btree_increment(cur, 0, &tmp);
+ error = xfs_btree_increment(cur, 0, &stat);
+ if (error || stat == 0) {
+ end_of_ag = true;
+ goto del_cursor;
+ }
cond_resched();
}
+
/*
- * Drop the btree buffers and the agi buffer.
- * We can't hold any of the locks these represent
- * when calling iget.
+ * Drop the btree buffers and the agi buffer as we can't hold any
+ * of the locks these represent when calling iget. If there is a
+ * pending error, then we are done.
*/
+del_cursor:
xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
xfs_buf_relse(agbp);
+ if (error)
+ break;
/*
- * Now format all the good inodes into the user's buffer.
+ * Now format all the good inodes into the user's buffer. The
+ * call to xfs_bulkstat_ag_ichunk() sets up the agino pointer
+ * for the next loop iteration.
*/
irbufend = irbp;
for (irbp = irbuf;
- irbp < irbufend && XFS_BULKSTAT_UBLEFT(ubleft); irbp++) {
- struct xfs_bulkstat_agichunk ac;
-
- ac.ac_lastino = lastino;
- ac.ac_ubuffer = &ubuffer;
- ac.ac_ubleft = ubleft;
- ac.ac_ubelem = ubelem;
+ irbp < irbufend && ac.ac_ubleft >= statstruct_size;
+ irbp++) {
error = xfs_bulkstat_ag_ichunk(mp, agno, irbp,
- formatter, statstruct_size, &ac);
+ formatter, statstruct_size, &ac,
+ &agino);
if (error)
- rval = error;
-
- lastino = ac.ac_lastino;
- ubleft = ac.ac_ubleft;
- ubelem = ac.ac_ubelem;
+ break;
cond_resched();
}
+
/*
- * Set up for the next loop iteration.
+ * If we've run out of space or had a formatting error, we
+ * are now done
*/
- if (XFS_BULKSTAT_UBLEFT(ubleft)) {
- if (end_of_ag) {
- agno++;
- agino = 0;
- } else
- agino = XFS_INO_TO_AGINO(mp, lastino);
- } else
+ if (ac.ac_ubleft < statstruct_size || error)
break;
+
+ if (end_of_ag) {
+ agno++;
+ agino = 0;
+ }
}
/*
* Done, we're either out of filesystem or space to put the data.
*/
kmem_free(irbuf);
- *ubcountp = ubelem;
+ *ubcountp = ac.ac_ubelem;
+
/*
- * Found some inodes, return them now and return the error next time.
+ * We found some inodes, so clear the error status and return them.
+ * The lastino pointer will point directly at the inode that triggered
+ * any error that occurred, so on the next call the error will be
+ * triggered again and propagated to userspace as there will be no
+ * formatted inodes in the buffer.
*/
- if (ubelem)
- rval = 0;
- if (agno >= mp->m_sb.sb_agcount) {
- /*
- * If we ran out of filesystem, mark lastino as off
- * the end of the filesystem, so the next call
- * will return immediately.
- */
- *lastinop = (xfs_ino_t)XFS_AGINO_TO_INO(mp, agno, 0);
+ if (ac.ac_ubelem)
+ error = 0;
+
+ /*
+ * If we ran out of filesystem, lastino will point off the end of
+ * the filesystem so the next call will return immediately.
+ */
+ *lastinop = XFS_AGINO_TO_INO(mp, agno, agino);
+ if (agno >= mp->m_sb.sb_agcount)
*done = 1;
- } else
- *lastinop = (xfs_ino_t)lastino;
- return rval;
+ return error;
}
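
To illustrate the cursor semantics described in the comments above from the other side of the interface, here is a minimal userspace sketch of repeated bulkstat calls that resume from the lastino cookie. It assumes the xfsprogs uapi header layout (<xfs/xfs.h>); the ioctl and struct names are given as I recall them from the XFS uapi and should be checked against your installed headers:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <xfs/xfs.h>    /* assumed: XFS_IOC_FSBULKSTAT, struct xfs_fsop_bulkreq */

    int main(int argc, char **argv)
    {
            struct xfs_bstat buf[64];
            struct xfs_fsop_bulkreq req;
            __u64 lastino = 0;              /* 0 == start of filesystem */
            __s32 ocount = 0;
            int fd = open(argc > 1 ? argv[1] : "/mnt", O_RDONLY);

            if (fd < 0)
                    return 1;

            req.lastip  = &lastino;         /* kernel writes the resume cookie back here */
            req.icount  = 64;
            req.ubuffer = buf;
            req.ocount  = &ocount;

            while (ioctl(fd, XFS_IOC_FSBULKSTAT, &req) == 0 && ocount > 0) {
                    for (int i = 0; i < ocount; i++)
                            printf("ino %llu\n",
                                   (unsigned long long)buf[i].bs_ino);
                    /*
                     * lastino now refers to the last inode returned; the next
                     * call resumes just past it, exactly as the kernel-side
                     * comments above describe.
                     */
            }
            close(fd);
            return 0;
    }
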
int
int *ubused,
int *stat);
-struct xfs_bulkstat_agichunk {
- xfs_ino_t ac_lastino; /* last inode returned */
- char __user **ac_ubuffer;/* pointer into user's buffer */
- int ac_ubleft; /* bytes left in user's buffer */
- int ac_ubelem; /* spaces used in user's buffer */
-};
-
-int
-xfs_bulkstat_ag_ichunk(
- struct xfs_mount *mp,
- xfs_agnumber_t agno,
- struct xfs_inobt_rec_incore *irbp,
- bulkstat_one_pf formatter,
- size_t statstruct_size,
- struct xfs_bulkstat_agichunk *acp);
-
/*
* Values for stat return value.
*/
#define VF610_CLK_FASK_CLK_SEL 8
#define VF610_CLK_AUDIO_EXT 9
#define VF610_CLK_ENET_EXT 10
-#define VF610_CLK_PLL1_MAIN 11
+#define VF610_CLK_PLL1_SYS 11
#define VF610_CLK_PLL1_PFD1 12
#define VF610_CLK_PLL1_PFD2 13
#define VF610_CLK_PLL1_PFD3 14
#define VF610_CLK_PLL1_PFD4 15
-#define VF610_CLK_PLL2_MAIN 16
+#define VF610_CLK_PLL2_BUS 16
#define VF610_CLK_PLL2_PFD1 17
#define VF610_CLK_PLL2_PFD2 18
#define VF610_CLK_PLL2_PFD3 19
#define VF610_CLK_PLL2_PFD4 20
-#define VF610_CLK_PLL3_MAIN 21
+#define VF610_CLK_PLL3_USB_OTG 21
#define VF610_CLK_PLL3_PFD1 22
#define VF610_CLK_PLL3_PFD2 23
#define VF610_CLK_PLL3_PFD3 24
#define VF610_CLK_PLL3_PFD4 25
-#define VF610_CLK_PLL4_MAIN 26
-#define VF610_CLK_PLL5_MAIN 27
-#define VF610_CLK_PLL6_MAIN 28
+#define VF610_CLK_PLL4_AUDIO 26
+#define VF610_CLK_PLL5_ENET 27
+#define VF610_CLK_PLL6_VIDEO 28
#define VF610_CLK_PLL3_MAIN_DIV 29
#define VF610_CLK_PLL4_MAIN_DIV 30
#define VF610_CLK_PLL6_MAIN_DIV 31
#define VF610_CLK_DMAMUX3 153
#define VF610_CLK_FLEXCAN0_EN 154
#define VF610_CLK_FLEXCAN1_EN 155
-#define VF610_CLK_PLL7_MAIN 156
+#define VF610_CLK_PLL7_USB_HOST 156
#define VF610_CLK_USBPHY0 157
#define VF610_CLK_USBPHY1 158
-#define VF610_CLK_END 159
+#define VF610_CLK_LVDS1_IN 159
+#define VF610_CLK_ANACLK1 160
+#define VF610_CLK_PLL1_BYPASS_SRC 161
+#define VF610_CLK_PLL2_BYPASS_SRC 162
+#define VF610_CLK_PLL3_BYPASS_SRC 163
+#define VF610_CLK_PLL4_BYPASS_SRC 164
+#define VF610_CLK_PLL5_BYPASS_SRC 165
+#define VF610_CLK_PLL6_BYPASS_SRC 166
+#define VF610_CLK_PLL7_BYPASS_SRC 167
+#define VF610_CLK_PLL1 168
+#define VF610_CLK_PLL2 169
+#define VF610_CLK_PLL3 170
+#define VF610_CLK_PLL4 171
+#define VF610_CLK_PLL5 172
+#define VF610_CLK_PLL6 173
+#define VF610_CLK_PLL7 174
+#define VF610_PLL1_BYPASS 175
+#define VF610_PLL2_BYPASS 176
+#define VF610_PLL3_BYPASS 177
+#define VF610_PLL4_BYPASS 178
+#define VF610_PLL5_BYPASS 179
+#define VF610_PLL6_BYPASS 180
+#define VF610_PLL7_BYPASS 181
+#define VF610_CLK_END 182
#endif /* __DT_BINDINGS_CLOCK_VF610_H */
/* Active pin states */
#define PIN_OUTPUT (0 | PULL_DIS)
-#define PIN_OUTPUT_PULLUP (PIN_OUTPUT | PULL_ENA | PULL_UP)
-#define PIN_OUTPUT_PULLDOWN (PIN_OUTPUT | PULL_ENA)
+#define PIN_OUTPUT_PULLUP (PULL_UP)
+#define PIN_OUTPUT_PULLDOWN (0)
#define PIN_INPUT (INPUT_EN | PULL_DIS)
#define PIN_INPUT_SLEW (INPUT_EN | SLEWCONTROL)
#define PIN_INPUT_PULLUP (PULL_ENA | INPUT_EN | PULL_UP)
extern unsigned long init_bootmem(unsigned long addr, unsigned long memend);
extern unsigned long free_all_bootmem(void);
+extern void reset_node_managed_pages(pg_data_t *pgdat);
extern void reset_all_zones_managed_pages(void);
extern void free_bootmem_node(pg_data_t *pgdat,
MAX77693_IRQ_GROUP_NR,
};
+#define SRC_IRQ_CHARGER BIT(0)
+#define SRC_IRQ_TOP BIT(1)
+#define SRC_IRQ_FLASH BIT(2)
+#define SRC_IRQ_MUIC BIT(3)
+#define SRC_IRQ_ALL (SRC_IRQ_CHARGER | SRC_IRQ_TOP \
+ | SRC_IRQ_FLASH | SRC_IRQ_MUIC)
+
#define LED_IRQ_FLED2_OPEN BIT(0)
#define LED_IRQ_FLED2_SHORT BIT(1)
#define LED_IRQ_FLED1_OPEN BIT(2)
*/
int nr_migrate_reserve_block;
+#ifdef CONFIG_MEMORY_ISOLATION
+ /*
+	 * Number of isolated pageblocks. It is used to solve an incorrect
+	 * freepage counting problem caused by racy retrieval of a
+	 * pageblock's migratetype. Protected by zone->lock.
+ */
+ unsigned long nr_isolate_pageblock;
+#endif
+
#ifdef CONFIG_MEMORY_HOTPLUG
/* see spanned/present_pages for more description */
seqlock_t span_seqlock;
unsigned int status;
};
+static inline void
+nfs_free_pnfs_ds_cinfo(struct pnfs_ds_commit_info *cinfo)
+{
+ kfree(cinfo->buckets);
+}
+
#else
struct pnfs_ds_commit_info {
};
+static inline void
+nfs_free_pnfs_ds_cinfo(struct pnfs_ds_commit_info *cinfo)
+{
+}
+
#endif /* CONFIG_NFS_V4_1 */
#ifdef CONFIG_NFS_V4_2
extern int of_property_read_string(struct device_node *np,
const char *propname,
const char **out_string);
-extern int of_property_read_string_index(struct device_node *np,
- const char *propname,
- int index, const char **output);
extern int of_property_match_string(struct device_node *np,
const char *propname,
const char *string);
-extern int of_property_count_strings(struct device_node *np,
- const char *propname);
+extern int of_property_read_string_helper(struct device_node *np,
+ const char *propname,
+ const char **out_strs, size_t sz, int index);
extern int of_device_is_compatible(const struct device_node *device,
const char *);
extern int of_device_is_available(const struct device_node *device);
return -ENOSYS;
}
-static inline int of_property_read_string_index(struct device_node *np,
- const char *propname, int index,
- const char **out_string)
-{
- return -ENOSYS;
-}
-
-static inline int of_property_count_strings(struct device_node *np,
- const char *propname)
+static inline int of_property_read_string_helper(struct device_node *np,
+ const char *propname,
+ const char **out_strs, size_t sz, int index)
{
return -ENOSYS;
}
return of_property_count_elems_of_size(np, propname, sizeof(u64));
}
+/**
+ * of_property_read_string_array() - Read an array of strings from a multiple
+ * strings property.
+ * @np: device node from which the property value is to be read.
+ * @propname: name of the property to be searched.
+ * @out_strs: output array of string pointers.
+ * @sz: number of array elements to read.
+ *
+ * Search for a property in a device tree node and retrieve a list of
+ * null-terminated string values (pointer to data, not a copy) in that property.
+ *
+ * If @out_strs is NULL, the number of strings in the property is returned.
+ */
+static inline int of_property_read_string_array(struct device_node *np,
+ const char *propname, const char **out_strs,
+ size_t sz)
+{
+ return of_property_read_string_helper(np, propname, out_strs, sz, 0);
+}
+
+/**
+ * of_property_count_strings() - Find and return the number of strings from a
+ * multiple strings property.
+ * @np: device node from which the property value is to be read.
+ * @propname: name of the property to be searched.
+ *
+ * Search for a property in a device tree node and retrieve the number of
+ * null-terminated strings contained in it. Returns the number of strings on
+ * success, -EINVAL if the property does not exist, -ENODATA if property
+ * does not have a value, and -EILSEQ if the string is not null-terminated
+ * within the length of the property data.
+ */
+static inline int of_property_count_strings(struct device_node *np,
+ const char *propname)
+{
+ return of_property_read_string_helper(np, propname, NULL, 0, 0);
+}
+
+/**
+ * of_property_read_string_index() - Find and read a string from a multiple
+ * strings property.
+ * @np: device node from which the property value is to be read.
+ * @propname: name of the property to be searched.
+ * @index: index of the string in the list of strings
+ * @out_string: pointer to null terminated return string, modified only if
+ * return value is 0.
+ *
+ * Search for a property in a device tree node and retrieve a null
+ * terminated string value (pointer to data, not a copy) in the list of strings
+ * contained in that property.
+ * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if
+ * property does not have a value, and -EILSEQ if the string is not
+ * null-terminated within the length of the property data.
+ *
+ * The out_string pointer is modified only if a valid string can be decoded.
+ */
+static inline int of_property_read_string_index(struct device_node *np,
+ const char *propname,
+ int index, const char **output)
+{
+ int rc = of_property_read_string_helper(np, propname, output, 1, index);
+ return rc < 0 ? rc : 0;
+}
+
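For illustration only: a minimal sketch of how a driver would use the wrappers above, counting the strings in a property and then reading each one by index. The "modes" property and the function name are hypothetical, and <linux/of.h> is assumed to be included.

static int example_read_modes(struct device_node *np)
{
	const char *mode;
	int i, count;

	/* hypothetical DT property: modes = "normal", "turbo", "eco"; */
	count = of_property_count_strings(np, "modes");
	if (count < 0)
		return count;	/* -EINVAL, -ENODATA or -EILSEQ */

	for (i = 0; i < count; i++) {
		/* returns 0 and sets @mode only if the string decodes */
		if (of_property_read_string_index(np, "modes", i, &mode))
			break;
		pr_info("mode[%d] = %s\n", i, mode);
	}
	return 0;
}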
/**
 * of_property_read_bool - Find a property
* @np: device node from which the property value is to be read.
#define __LINUX_PAGEISOLATION_H
#ifdef CONFIG_MEMORY_ISOLATION
+static inline bool has_isolate_pageblock(struct zone *zone)
+{
+ return zone->nr_isolate_pageblock;
+}
static inline bool is_migrate_isolate_page(struct page *page)
{
return get_pageblock_migratetype(page) == MIGRATE_ISOLATE;
return migratetype == MIGRATE_ISOLATE;
}
#else
+static inline bool has_isolate_pageblock(struct zone *zone)
+{
+ return false;
+}
static inline bool is_migrate_isolate_page(struct page *page)
{
return false;
bool max_off_time_changed;
bool cached_power_down_ok;
struct gpd_cpuidle_data *cpuidle_data;
- void (*attach_dev)(struct device *dev);
- void (*detach_dev)(struct device *dev);
+ int (*attach_dev)(struct generic_pm_domain *domain,
+ struct device *dev);
+ void (*detach_dev)(struct generic_pm_domain *domain,
+ struct device *dev);
};
static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
struct notifier_block nb;
struct mutex lock;
unsigned int refcount;
- bool need_restore;
+ int need_restore;
};
#ifdef CONFIG_PM_GENERIC_DOMAINS
struct device *dev;
struct charger_desc *desc;
- struct power_supply *fuel_gauge;
- struct power_supply **charger_stat;
-
#ifdef CONFIG_THERMAL
struct thermal_zone_device *tzd_batt;
#endif
void (*external_power_changed)(struct power_supply *psy);
void (*set_charged)(struct power_supply *psy);
+ /*
+ * Set if thermal zone should not be created for this power supply.
+	 * For example, for virtual supplies that forward calls to actual
+	 * sensors or other supplies.
+ */
+ bool no_thermal;
/* For APM emulation, think legacy userspace. */
int use_for_apm;
__ring_buffer_alloc((size), (flags), &__key); \
})
-int ring_buffer_wait(struct ring_buffer *buffer, int cpu);
+int ring_buffer_wait(struct ring_buffer *buffer, int cpu, bool full);
int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu,
struct file *filp, poll_table *poll_table);
#define MSG_EOF MSG_FIN
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */
-#define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exit for file
+#define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exec for file
descriptor received through
SCM_RIGHTS */
#if defined(CONFIG_COMPAT)
* @list: used to maintain a list of currently available transports
* @name: the human-readable name of the transport
* @maxsize: transport provided maximum packet size
- * @pref: Preferences of this transport
* @def: set if this transport should be considered the default
* @create: member function to create a new connection on this transport
* @close: member function to discard a connection on this transport
return iptunnel_handle_offloads(skb, udp_csum, type);
}
+static inline void udp_tunnel_gro_complete(struct sk_buff *skb, int nhoff)
+{
+ struct udphdr *uh;
+
+ uh = (struct udphdr *)(skb->data + nhoff - sizeof(struct udphdr));
+ skb_shinfo(skb)->gso_type |= uh->check ?
+ SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+}
+
static inline void udp_tunnel_encap_enable(struct socket *sock)
{
#if IS_ENABLED(CONFIG_IPV6)
header-y += firewire-cdev.h
header-y += firewire-constants.h
header-y += flat.h
+header-y += fou.h
header-y += fs.h
header-y += fsl_hypervisor.h
header-y += fuse.h
header-y += hiddev.h
header-y += hidraw.h
header-y += hpet.h
+header-y += hsr_netlink.h
header-y += hyperv.h
header-y += hysdn_if.h
header-y += i2c-dev.h
header-y += minix_fs.h
header-y += mman.h
header-y += mmtimer.h
+header-y += mpls.h
header-y += mqueue.h
header-y += mroute.h
header-y += mroute6.h
header-y += virtio_pci.h
header-y += virtio_ring.h
header-y += virtio_rng.h
+header-y += vm_sockets.h
header-y += vt.h
header-y += wait.h
header-y += wanrouter.h
#include <linux/types.h>
#include <linux/if_ether.h>
+#include <linux/in6.h>
#define SYSFS_BRIDGE_ATTR "bridge"
#define SYSFS_BRIDGE_FDB "brforward"
static_command_line, __start___param,
__stop___param - __start___param,
-1, -1, &unknown_bootoption);
- if (after_dashes)
+ if (!IS_ERR_OR_NULL(after_dashes))
parse_args("Setting init args", after_dashes, NULL, 0, -1, -1,
set_init_arg);
ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_FEATURE_CHANGE);
audit_log_task_info(ab, current);
- audit_log_format(ab, "feature=%s old=%u new=%u old_lock=%u new_lock=%u res=%d",
+ audit_log_format(ab, " feature=%s old=%u new=%u old_lock=%u new_lock=%u res=%d",
audit_feature_names[which], !!old_feature, !!new_feature,
!!old_lock, !!new_lock, res);
audit_log_end(ab);
chunk->owners[i].index = i;
}
fsnotify_init_mark(&chunk->mark, audit_tree_destroy_watch);
+ chunk->mark.mask = FS_IN_IGNORED;
return chunk;
}
* 'I' - Working around severe firmware bug.
* 'O' - Out-of-tree module has been loaded.
* 'E' - Unsigned module has been loaded.
+ * 'L' - A soft lockup has previously occurred.
*
* The string is overwritten by the next call to print_tainted().
*/
static int platform_suspend_prepare_late(suspend_state_t state)
{
- return state == PM_SUSPEND_FREEZE && freeze_ops->prepare ?
+ return state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->prepare ?
freeze_ops->prepare() : 0;
}
static void platform_resume_early(suspend_state_t state)
{
- if (state == PM_SUSPEND_FREEZE && freeze_ops->restore)
+ if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->restore)
freeze_ops->restore();
}
* ring_buffer_wait - wait for input to the ring buffer
* @buffer: buffer to wait on
* @cpu: the cpu buffer to wait on
+ * @full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS
*
* If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon
* as data is added to any of the @buffer's cpu buffers. Otherwise
* it will wait for data to be added to a specific cpu buffer.
*/
-int ring_buffer_wait(struct ring_buffer *buffer, int cpu)
+int ring_buffer_wait(struct ring_buffer *buffer, int cpu, bool full)
{
- struct ring_buffer_per_cpu *cpu_buffer;
+ struct ring_buffer_per_cpu *uninitialized_var(cpu_buffer);
DEFINE_WAIT(wait);
struct rb_irq_work *work;
+ int ret = 0;
/*
* Depending on what the caller is waiting for, either any
}
- prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);
+ while (true) {
+ prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);
- /*
- * The events can happen in critical sections where
- * checking a work queue can cause deadlocks.
- * After adding a task to the queue, this flag is set
- * only to notify events to try to wake up the queue
- * using irq_work.
- *
- * We don't clear it even if the buffer is no longer
- * empty. The flag only causes the next event to run
- * irq_work to do the work queue wake up. The worse
- * that can happen if we race with !trace_empty() is that
- * an event will cause an irq_work to try to wake up
- * an empty queue.
- *
- * There's no reason to protect this flag either, as
- * the work queue and irq_work logic will do the necessary
- * synchronization for the wake ups. The only thing
- * that is necessary is that the wake up happens after
- * a task has been queued. It's OK for spurious wake ups.
- */
- work->waiters_pending = true;
+ /*
+ * The events can happen in critical sections where
+ * checking a work queue can cause deadlocks.
+ * After adding a task to the queue, this flag is set
+ * only to notify events to try to wake up the queue
+ * using irq_work.
+ *
+ * We don't clear it even if the buffer is no longer
+ * empty. The flag only causes the next event to run
+ * irq_work to do the work queue wake up. The worse
+ * that can happen if we race with !trace_empty() is that
+ * an event will cause an irq_work to try to wake up
+ * an empty queue.
+ *
+ * There's no reason to protect this flag either, as
+ * the work queue and irq_work logic will do the necessary
+ * synchronization for the wake ups. The only thing
+ * that is necessary is that the wake up happens after
+ * a task has been queued. It's OK for spurious wake ups.
+ */
+ work->waiters_pending = true;
+
+ if (signal_pending(current)) {
+ ret = -EINTR;
+ break;
+ }
+
+ if (cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer))
+ break;
+
+ if (cpu != RING_BUFFER_ALL_CPUS &&
+ !ring_buffer_empty_cpu(buffer, cpu)) {
+ unsigned long flags;
+ bool pagebusy;
+
+ if (!full)
+ break;
+
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+ if (!pagebusy)
+ break;
+ }
- if ((cpu == RING_BUFFER_ALL_CPUS && ring_buffer_empty(buffer)) ||
- (cpu != RING_BUFFER_ALL_CPUS && ring_buffer_empty_cpu(buffer, cpu)))
schedule();
+ }
finish_wait(&work->waiters, &wait);
- return 0;
+
+ return ret;
}
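With the signal check folded into the wait loop above, callers no longer need their own signal_pending() test after waking up. A minimal sketch of a hypothetical caller (not part of this change):

static int example_wait_for_page(struct ring_buffer *buffer, int cpu)
{
	int err;

	/* block until a full page of data is available on this cpu buffer */
	err = ring_buffer_wait(buffer, cpu, true);
	if (err)
		return err;	/* -EINTR if a signal arrived while waiting */

	return 0;
}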
/**
}
#endif /* CONFIG_TRACER_MAX_TRACE */
-static int wait_on_pipe(struct trace_iterator *iter)
+static int wait_on_pipe(struct trace_iterator *iter, bool full)
{
/* Iterators are static, they should be filled or empty */
if (trace_buffer_iter(iter, iter->cpu_file))
return 0;
- return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file);
+ return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file,
+ full);
}
#ifdef CONFIG_FTRACE_STARTUP_TEST
mutex_unlock(&iter->mutex);
- ret = wait_on_pipe(iter);
+ ret = wait_on_pipe(iter, false);
mutex_lock(&iter->mutex);
if (ret)
return ret;
-
- if (signal_pending(current))
- return -EINTR;
}
return 1;
goto out_unlock;
}
mutex_unlock(&trace_types_lock);
- ret = wait_on_pipe(iter);
+ ret = wait_on_pipe(iter, false);
mutex_lock(&trace_types_lock);
if (ret) {
size = ret;
goto out_unlock;
}
- if (signal_pending(current)) {
- size = -EINTR;
- goto out_unlock;
- }
goto again;
}
size = 0;
};
struct buffer_ref *ref;
int entries, size, i;
- ssize_t ret;
+ ssize_t ret = 0;
mutex_lock(&trace_types_lock);
int r;
ref = kzalloc(sizeof(*ref), GFP_KERNEL);
- if (!ref)
+ if (!ref) {
+ ret = -ENOMEM;
break;
+ }
ref->ref = 1;
ref->buffer = iter->trace_buffer->buffer;
ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file);
if (!ref->page) {
+ ret = -ENOMEM;
kfree(ref);
break;
}
/* did we read anything? */
if (!spd.nr_pages) {
+ if (ret)
+ goto out;
+
if ((file->f_flags & O_NONBLOCK) || (flags & SPLICE_F_NONBLOCK)) {
ret = -EAGAIN;
goto out;
}
mutex_unlock(&trace_types_lock);
- ret = wait_on_pipe(iter);
+ ret = wait_on_pipe(iter, true);
mutex_lock(&trace_types_lock);
if (ret)
goto out;
- if (signal_pending(current)) {
- ret = -EINTR;
- goto out;
- }
+
goto again;
}
ht->shift++;
/* For each new bucket, search the corresponding old bucket
- * for the first entry that hashes to the new bucket, and
+ * for the first entry that hashes to the new bucket, and
* link the new bucket to that entry. Since all the entries
* which will end up in the new bucket appear in the same
* old bucket, this constructs an entirely valid new hash
}
/* Publish the new table pointer. Lookups may now traverse
- * the new table, but they will not benefit from any
- * additional efficiency until later steps unzip the buckets.
+ * the new table, but they will not benefit from any
+ * additional efficiency until later steps unzip the buckets.
*/
rcu_assign_pointer(ht->tbl, new_tbl);
ht->shift--;
- /* Link each bucket in the new table to the first bucket
+ /* Link each bucket in the new table to the first bucket
* in the old table that contains entries which will hash
* to the new bucket.
*/
for (i = 0; i < ntbl->size; i++) {
ntbl->buckets[i] = tbl->buckets[i];
- /* Link each bucket in the new table to the first bucket
+ /* Link each bucket in the new table to the first bucket
* in the old table that contains entries which will hash
* to the new bucket.
*/
static int reset_managed_pages_done __initdata;
-static inline void __init reset_node_managed_pages(pg_data_t *pgdat)
+void reset_node_managed_pages(pg_data_t *pgdat)
{
struct zone *z;
- if (reset_managed_pages_done)
- return;
-
for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
z->managed_pages = 0;
}
{
struct pglist_data *pgdat;
+ if (reset_managed_pages_done)
+ return;
+
for_each_online_pgdat(pgdat)
reset_node_managed_pages(pgdat);
+
reset_managed_pages_done = 1;
}
block_end_pfn = min(block_end_pfn, end_pfn);
+ /*
+	 * pfn could pass block_end_pfn if an isolated freepage spans
+	 * more than a pageblock. In this case, adjust the scanning
+	 * range to the correct one.
+ */
+ if (pfn >= block_end_pfn) {
+ block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+ block_end_pfn = min(block_end_pfn, end_pfn);
+ }
+
if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
break;
}
acct_isolated(zone, cc);
- /* Record where migration scanner will be restarted */
- cc->migrate_pfn = low_pfn;
+ /*
+ * Record where migration scanner will be restarted. If we end up in
+ * the same pageblock as the free scanner, make the scanners fully
+ * meet so that compact_finished() terminates compaction.
+ */
+ cc->migrate_pfn = (end_pfn <= cc->free_pfn) ? low_pfn : cc->free_pfn;
return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
}
/*
* in mm/page_alloc.c
*/
+
+/*
+ * Locate the struct page for both the matching buddy in our
+ * pair (buddy1) and the combined O(n+1) page they form (page).
+ *
+ * 1) Any buddy B1 will have an order O twin B2 which satisfies
+ * the following equation:
+ * B2 = B1 ^ (1 << O)
+ * For example, if the starting buddy (buddy2) is #8 its order
+ * 1 buddy is #10:
+ * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10
+ *
+ * 2) Any buddy B will have an order O+1 parent P which
+ * satisfies the following equation:
+ * P = B & ~(1 << O)
+ *
+ * Assumption: *_mem_map is contiguous at least up to MAX_ORDER
+ */
+static inline unsigned long
+__find_buddy_index(unsigned long page_idx, unsigned int order)
+{
+ return page_idx ^ (1 << order);
+}
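Point 2 of the comment above has no corresponding helper in this change; purely as a sketch (the function below is hypothetical and not part of the kernel), the order-(O+1) parent index would be computed as:

static inline unsigned long
__find_parent_index(unsigned long page_idx, unsigned int order)
{
	/* hypothetical helper: P = B & ~(1 << O), per point 2 above */
	return page_idx & ~(1UL << order);
}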
+
+extern int __isolate_free_page(struct page *page, unsigned int order);
extern void __free_pages_bootmem(struct page *page, unsigned int order);
extern void prep_compound_page(struct page *page, unsigned long order);
#ifdef CONFIG_MEMORY_FAILURE
if (i->nr_segs == 1)
return i->count;
else if (i->type & ITER_BVEC)
- return min(i->count, i->iov->iov_len - i->iov_offset);
- else
return min(i->count, i->bvec->bv_len - i->iov_offset);
+ else
+ return min(i->count, i->iov->iov_len - i->iov_offset);
}
EXPORT_SYMBOL(iov_iter_single_seg_count);
#include <linux/stop_machine.h>
#include <linux/hugetlb.h>
#include <linux/memblock.h>
+#include <linux/bootmem.h>
#include <asm/tlbflush.h>
}
#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
+static void reset_node_present_pages(pg_data_t *pgdat)
+{
+ struct zone *z;
+
+ for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
+ z->present_pages = 0;
+
+ pgdat->node_present_pages = 0;
+}
+
/* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */
static pg_data_t __ref *hotadd_new_pgdat(int nid, u64 start)
{
build_all_zonelists(pgdat, NULL);
mutex_unlock(&zonelists_mutex);
+ /*
+ * zone->managed_pages is set to an approximate value in
+ * free_area_init_core(), which will cause
+	 * /sys/device/system/node/nodeX/meminfo to report wrong data.
+ * So reset it to 0 before any memory is onlined.
+ */
+ reset_node_managed_pages(pgdat);
+
+ /*
+ * When memory is hot-added, all the memory is in offline state. So
+ * clear all zones' present_pages because they will be updated in
+ * online_pages() and offline_pages().
+ */
+ reset_node_present_pages(pgdat);
+
return pgdat;
}
static int reset_managed_pages_done __initdata;
-static inline void __init reset_node_managed_pages(pg_data_t *pgdat)
+void reset_node_managed_pages(pg_data_t *pgdat)
{
struct zone *z;
- if (reset_managed_pages_done)
- return;
for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
z->managed_pages = 0;
}
{
struct pglist_data *pgdat;
+ if (reset_managed_pages_done)
+ return;
+
for_each_online_pgdat(pgdat)
reset_node_managed_pages(pgdat);
+
reset_managed_pages_done = 1;
}
set_page_private(page, 0);
}
-/*
- * Locate the struct page for both the matching buddy in our
- * pair (buddy1) and the combined O(n+1) page they form (page).
- *
- * 1) Any buddy B1 will have an order O twin B2 which satisfies
- * the following equation:
- * B2 = B1 ^ (1 << O)
- * For example, if the starting buddy (buddy2) is #8 its order
- * 1 buddy is #10:
- * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10
- *
- * 2) Any buddy B will have an order O+1 parent P which
- * satisfies the following equation:
- * P = B & ~(1 << O)
- *
- * Assumption: *_mem_map is contiguous at least up to MAX_ORDER
- */
-static inline unsigned long
-__find_buddy_index(unsigned long page_idx, unsigned int order)
-{
- return page_idx ^ (1 << order);
-}
-
/*
* This function checks whether a page is free && is the buddy
* we can do coalesce a page and its buddy if
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
+ int max_order = MAX_ORDER;
VM_BUG_ON(!zone_is_initialized(zone));
return;
VM_BUG_ON(migratetype == -1);
+ if (is_migrate_isolate(migratetype)) {
+ /*
+ * We restrict max order of merging to prevent merge
+ * between freepages on isolate pageblock and normal
+ * pageblock. Without this, pageblock isolation
+ * could cause incorrect freepage accounting.
+ */
+ max_order = min(MAX_ORDER, pageblock_order + 1);
+ } else {
+ __mod_zone_freepage_state(zone, 1 << order, migratetype);
+ }
- page_idx = pfn & ((1 << MAX_ORDER) - 1);
+ page_idx = pfn & ((1 << max_order) - 1);
VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);
- while (order < MAX_ORDER-1) {
+ while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
if (!page_is_buddy(page, buddy, order))
*/
if (page_is_guard(buddy)) {
clear_page_guard_flag(buddy);
- set_page_private(page, 0);
- __mod_zone_freepage_state(zone, 1 << order,
- migratetype);
+ set_page_private(buddy, 0);
+ if (!is_migrate_isolate(migratetype)) {
+ __mod_zone_freepage_state(zone, 1 << order,
+ migratetype);
+ }
} else {
list_del(&buddy->lru);
zone->free_area[order].nr_free--;
/* must delete as __free_one_page list manipulates */
list_del(&page->lru);
mt = get_freepage_migratetype(page);
+ if (unlikely(has_isolate_pageblock(zone)))
+ mt = get_pageblock_migratetype(page);
+
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
__free_one_page(page, page_to_pfn(page), zone, 0, mt);
trace_mm_page_pcpu_drain(page, 0, mt);
- if (likely(!is_migrate_isolate_page(page))) {
- __mod_zone_page_state(zone, NR_FREE_PAGES, 1);
- if (is_migrate_cma(mt))
- __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
- }
} while (--to_free && --batch_free && !list_empty(list));
}
spin_unlock(&zone->lock);
if (nr_scanned)
__mod_zone_page_state(zone, NR_PAGES_SCANNED, -nr_scanned);
+ if (unlikely(has_isolate_pageblock(zone) ||
+ is_migrate_isolate(migratetype))) {
+ migratetype = get_pfnblock_migratetype(page, pfn);
+ }
__free_one_page(page, pfn, zone, order, migratetype);
- if (unlikely(!is_migrate_isolate(migratetype)))
- __mod_zone_freepage_state(zone, 1 << order, migratetype);
spin_unlock(&zone->lock);
}
}
EXPORT_SYMBOL_GPL(split_page);
-static int __isolate_free_page(struct page *page, unsigned int order)
+int __isolate_free_page(struct page *page, unsigned int order)
{
unsigned long watermark;
struct zone *zone;
/* Make sure the range is really isolated. */
if (test_pages_isolated(outer_start, end, false)) {
- pr_warn("alloc_contig_range test_pages_isolated(%lx, %lx) failed\n",
- outer_start, end);
+ pr_info("%s: [%lx, %lx) PFNs busy\n",
+ __func__, outer_start, end);
ret = -EBUSY;
goto done;
}
-
/* Grab isolated pages from freelists. */
outer_end = isolate_freepages_range(&cc, outer_start, end);
if (!outer_end) {
int migratetype = get_pageblock_migratetype(page);
set_pageblock_migratetype(page, MIGRATE_ISOLATE);
+ zone->nr_isolate_pageblock++;
nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
__mod_zone_freepage_state(zone, -nr_pages, migratetype);
{
struct zone *zone;
unsigned long flags, nr_pages;
+ struct page *isolated_page = NULL;
+ unsigned int order;
+ unsigned long page_idx, buddy_idx;
+ struct page *buddy;
zone = page_zone(page);
spin_lock_irqsave(&zone->lock, flags);
if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
goto out;
- nr_pages = move_freepages_block(zone, page, migratetype);
- __mod_zone_freepage_state(zone, nr_pages, migratetype);
+
+ /*
+	 * Because a freepage with an order above pageblock_order on an
+	 * isolated pageblock is restricted from merging due to the freepage
+	 * counting problem, it is possible that a free buddy page exists.
+	 * move_freepages_block() doesn't handle merging, so we need another
+	 * approach to merge them: isolating and then freeing the page will
+	 * cause them to be merged.
+ */
+ if (PageBuddy(page)) {
+ order = page_order(page);
+ if (order >= pageblock_order) {
+ page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
+ buddy_idx = __find_buddy_index(page_idx, order);
+ buddy = page + (buddy_idx - page_idx);
+
+ if (!is_migrate_isolate_page(buddy)) {
+ __isolate_free_page(page, order);
+ set_page_refcounted(page);
+ isolated_page = page;
+ }
+ }
+ }
+
+ /*
+	 * If we isolated a freepage with an order above pageblock_order,
+	 * there should be no other freepages in the range, so we can avoid
+	 * the costly pageblock scan needed to move freepages.
+ */
+ if (!isolated_page) {
+ nr_pages = move_freepages_block(zone, page, migratetype);
+ __mod_zone_freepage_state(zone, nr_pages, migratetype);
+ }
set_pageblock_migratetype(page, migratetype);
+ zone->nr_isolate_pageblock--;
out:
spin_unlock_irqrestore(&zone->lock, flags);
+ if (isolated_page)
+ __free_pages(isolated_page, order);
}
static inline struct page *
if (s->size - size >= sizeof(void *))
continue;
+ if (IS_ENABLED(CONFIG_SLAB) && align &&
+ (align > s->align || s->align % align))
+ continue;
+
return s;
}
return NULL;
* necessary) to @newsize. It will be typically be called from the filesystem's
* setattr function when ATTR_SIZE is passed in.
*
- * Must be called with inode_mutex held and before all filesystem specific
- * block truncation has been performed.
+ * Must be called with a lock serializing truncates and writes (generally
+ * i_mutex but e.g. xfs uses a different lock) and before all filesystem
+ * specific block truncation has been performed.
*/
void truncate_setsize(struct inode *inode, loff_t newsize)
{
struct page *page;
pgoff_t index;
- WARN_ON(!mutex_is_locked(&inode->i_mutex));
WARN_ON(to > inode->i_size);
if (from >= to || bsize == PAGE_CACHE_SIZE)
#include <net/netfilter/ipv6/nf_reject.h>
#include <linux/ip.h>
#include <net/ip.h>
+#include <net/ip6_checksum.h>
#include <linux/netfilter_bridge.h>
#include "../br_private.h"
static const u8 *aes_iv = (u8 *)CEPH_AES_IV;
+/*
+ * Should be used for buffers allocated with ceph_kvmalloc().
+ * Currently these are encrypt out-buffer (ceph_buffer) and decrypt
+ * in-buffer (msg front).
+ *
+ * Dispose of @sgt with teardown_sgtable().
+ *
+ * @prealloc_sg is to avoid memory allocation inside sg_alloc_table()
+ * in cases where a single sg is sufficient. No attempt to reduce the
+ * number of sgs by squeezing physically contiguous pages together is
+ * made though, for simplicity.
+ */
+static int setup_sgtable(struct sg_table *sgt, struct scatterlist *prealloc_sg,
+ const void *buf, unsigned int buf_len)
+{
+ struct scatterlist *sg;
+ const bool is_vmalloc = is_vmalloc_addr(buf);
+ unsigned int off = offset_in_page(buf);
+ unsigned int chunk_cnt = 1;
+ unsigned int chunk_len = PAGE_ALIGN(off + buf_len);
+ int i;
+ int ret;
+
+ if (buf_len == 0) {
+ memset(sgt, 0, sizeof(*sgt));
+ return -EINVAL;
+ }
+
+ if (is_vmalloc) {
+ chunk_cnt = chunk_len >> PAGE_SHIFT;
+ chunk_len = PAGE_SIZE;
+ }
+
+ if (chunk_cnt > 1) {
+ ret = sg_alloc_table(sgt, chunk_cnt, GFP_NOFS);
+ if (ret)
+ return ret;
+ } else {
+ WARN_ON(chunk_cnt != 1);
+ sg_init_table(prealloc_sg, 1);
+ sgt->sgl = prealloc_sg;
+ sgt->nents = sgt->orig_nents = 1;
+ }
+
+ for_each_sg(sgt->sgl, sg, sgt->orig_nents, i) {
+ struct page *page;
+ unsigned int len = min(chunk_len - off, buf_len);
+
+ if (is_vmalloc)
+ page = vmalloc_to_page(buf);
+ else
+ page = virt_to_page(buf);
+
+ sg_set_page(sg, page, len, off);
+
+ off = 0;
+ buf += len;
+ buf_len -= len;
+ }
+ WARN_ON(buf_len != 0);
+
+ return 0;
+}
+
+static void teardown_sgtable(struct sg_table *sgt)
+{
+ if (sgt->orig_nents > 1)
+ sg_free_table(sgt);
+}
+
static int ceph_aes_encrypt(const void *key, int key_len,
void *dst, size_t *dst_len,
const void *src, size_t src_len)
{
- struct scatterlist sg_in[2], sg_out[1];
+ struct scatterlist sg_in[2], prealloc_sg;
+ struct sg_table sg_out;
struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 };
int ret;
*dst_len = src_len + zero_padding;
- crypto_blkcipher_setkey((void *)tfm, key, key_len);
sg_init_table(sg_in, 2);
sg_set_buf(&sg_in[0], src, src_len);
sg_set_buf(&sg_in[1], pad, zero_padding);
- sg_init_table(sg_out, 1);
- sg_set_buf(sg_out, dst, *dst_len);
+ ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len);
+ if (ret)
+ goto out_tfm;
+
+ crypto_blkcipher_setkey((void *)tfm, key, key_len);
iv = crypto_blkcipher_crt(tfm)->iv;
ivsize = crypto_blkcipher_ivsize(tfm);
-
memcpy(iv, aes_iv, ivsize);
+
/*
print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1,
key, key_len, 1);
print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1,
pad, zero_padding, 1);
*/
- ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in,
+ ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in,
src_len + zero_padding);
- crypto_free_blkcipher(tfm);
- if (ret < 0)
+ if (ret < 0) {
pr_err("ceph_aes_crypt failed %d\n", ret);
+ goto out_sg;
+ }
/*
print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1,
dst, *dst_len, 1);
*/
- return 0;
+
+out_sg:
+ teardown_sgtable(&sg_out);
+out_tfm:
+ crypto_free_blkcipher(tfm);
+ return ret;
}
static int ceph_aes_encrypt2(const void *key, int key_len, void *dst,
const void *src1, size_t src1_len,
const void *src2, size_t src2_len)
{
- struct scatterlist sg_in[3], sg_out[1];
+ struct scatterlist sg_in[3], prealloc_sg;
+ struct sg_table sg_out;
struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 };
int ret;
*dst_len = src1_len + src2_len + zero_padding;
- crypto_blkcipher_setkey((void *)tfm, key, key_len);
sg_init_table(sg_in, 3);
sg_set_buf(&sg_in[0], src1, src1_len);
sg_set_buf(&sg_in[1], src2, src2_len);
sg_set_buf(&sg_in[2], pad, zero_padding);
- sg_init_table(sg_out, 1);
- sg_set_buf(sg_out, dst, *dst_len);
+ ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len);
+ if (ret)
+ goto out_tfm;
+
+ crypto_blkcipher_setkey((void *)tfm, key, key_len);
iv = crypto_blkcipher_crt(tfm)->iv;
ivsize = crypto_blkcipher_ivsize(tfm);
-
memcpy(iv, aes_iv, ivsize);
+
/*
print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1,
key, key_len, 1);
print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1,
pad, zero_padding, 1);
*/
- ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in,
+ ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in,
src1_len + src2_len + zero_padding);
- crypto_free_blkcipher(tfm);
- if (ret < 0)
+ if (ret < 0) {
pr_err("ceph_aes_crypt2 failed %d\n", ret);
+ goto out_sg;
+ }
/*
print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1,
dst, *dst_len, 1);
*/
- return 0;
+
+out_sg:
+ teardown_sgtable(&sg_out);
+out_tfm:
+ crypto_free_blkcipher(tfm);
+ return ret;
}
static int ceph_aes_decrypt(const void *key, int key_len,
void *dst, size_t *dst_len,
const void *src, size_t src_len)
{
- struct scatterlist sg_in[1], sg_out[2];
+ struct sg_table sg_in;
+ struct scatterlist sg_out[2], prealloc_sg;
struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
struct blkcipher_desc desc = { .tfm = tfm };
char pad[16];
if (IS_ERR(tfm))
return PTR_ERR(tfm);
- crypto_blkcipher_setkey((void *)tfm, key, key_len);
- sg_init_table(sg_in, 1);
sg_init_table(sg_out, 2);
- sg_set_buf(sg_in, src, src_len);
sg_set_buf(&sg_out[0], dst, *dst_len);
sg_set_buf(&sg_out[1], pad, sizeof(pad));
+ ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len);
+ if (ret)
+ goto out_tfm;
+ crypto_blkcipher_setkey((void *)tfm, key, key_len);
iv = crypto_blkcipher_crt(tfm)->iv;
ivsize = crypto_blkcipher_ivsize(tfm);
-
memcpy(iv, aes_iv, ivsize);
/*
print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1,
src, src_len, 1);
*/
-
- ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len);
- crypto_free_blkcipher(tfm);
+ ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len);
if (ret < 0) {
pr_err("ceph_aes_decrypt failed %d\n", ret);
- return ret;
+ goto out_sg;
}
if (src_len <= *dst_len)
print_hex_dump(KERN_ERR, "dec out: ", DUMP_PREFIX_NONE, 16, 1,
dst, *dst_len, 1);
*/
- return 0;
+
+out_sg:
+ teardown_sgtable(&sg_in);
+out_tfm:
+ crypto_free_blkcipher(tfm);
+ return ret;
}
static int ceph_aes_decrypt2(const void *key, int key_len,
void *dst2, size_t *dst2_len,
const void *src, size_t src_len)
{
- struct scatterlist sg_in[1], sg_out[3];
+ struct sg_table sg_in;
+ struct scatterlist sg_out[3], prealloc_sg;
struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher();
struct blkcipher_desc desc = { .tfm = tfm };
char pad[16];
if (IS_ERR(tfm))
return PTR_ERR(tfm);
- sg_init_table(sg_in, 1);
- sg_set_buf(sg_in, src, src_len);
sg_init_table(sg_out, 3);
sg_set_buf(&sg_out[0], dst1, *dst1_len);
sg_set_buf(&sg_out[1], dst2, *dst2_len);
sg_set_buf(&sg_out[2], pad, sizeof(pad));
+ ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len);
+ if (ret)
+ goto out_tfm;
crypto_blkcipher_setkey((void *)tfm, key, key_len);
iv = crypto_blkcipher_crt(tfm)->iv;
ivsize = crypto_blkcipher_ivsize(tfm);
-
memcpy(iv, aes_iv, ivsize);
/*
print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1,
src, src_len, 1);
*/
-
- ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len);
- crypto_free_blkcipher(tfm);
+ ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len);
if (ret < 0) {
pr_err("ceph_aes_decrypt failed %d\n", ret);
- return ret;
+ goto out_sg;
}
if (src_len <= *dst1_len)
dst2, *dst2_len, 1);
*/
- return 0;
+out_sg:
+ teardown_sgtable(&sg_in);
+out_tfm:
+ crypto_free_blkcipher(tfm);
+ return ret;
}
static void __remove_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd)
{
dout("__remove_osd %p\n", osd);
- BUG_ON(!list_empty(&osd->o_requests));
- BUG_ON(!list_empty(&osd->o_linger_requests));
+ WARN_ON(!list_empty(&osd->o_requests));
+ WARN_ON(!list_empty(&osd->o_linger_requests));
rb_erase(&osd->o_node, &osdc->osds);
list_del_init(&osd->o_osd_lru);
if (list_empty(&req->r_osd_item))
req->r_osd = NULL;
}
+
+ list_del_init(&req->r_req_lru_item); /* can be on notarget */
ceph_osdc_put_request(req);
}
if (req->r_osd) {
__cancel_request(req);
list_del_init(&req->r_osd_item);
+ list_del_init(&req->r_linger_osd_item);
req->r_osd = NULL;
}
/* We could not connect to a designated PHY, so use the switch internal
* MDIO bus instead
*/
- if (!p->phy)
+ if (!p->phy) {
p->phy = ds->slave_mii_bus->phy_map[p->port];
- else
+ phy_connect_direct(slave_dev, p->phy, dsa_slave_adjust_link,
+ p->phy_interface);
+ } else {
pr_info("attached PHY at address %d [%s]\n",
p->phy->addr, p->phy->drv->name);
+ }
}
int dsa_slave_suspend(struct net_device *slave_dev)
int err = -ENOSYS;
const struct net_offload **offloads;
+ udp_tunnel_gro_complete(skb, nhoff);
+
rcu_read_lock();
offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
ops = rcu_dereference(offloads[proto]);
gnvh = (struct genevehdr *)__skb_push(skb, sizeof(*gnvh) + opt_len);
geneve_build_header(gnvh, tun_flags, vni, opt_len, opt);
+ skb_set_inner_protocol(skb, htons(ETH_P_TEB));
+
return udp_tunnel_xmit_skb(gs->sock, rt, skb, src, dst,
tos, ttl, df, src_port, dst_port, xnet);
}
static void __exit geneve_cleanup_module(void)
{
destroy_workqueue(geneve_wq);
+ unregister_pernet_subsys(&geneve_net_ops);
}
module_exit(geneve_cleanup_module);
for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) {
if (!CMSG_OK(msg, cmsg))
return -EINVAL;
-#if defined(CONFIG_IPV6)
+#if IS_ENABLED(CONFIG_IPV6)
if (allow_ipv6 &&
cmsg->cmsg_level == SOL_IPV6 &&
cmsg->cmsg_type == IPV6_PKTINFO) {
/* Undo procedures. */
+/* We can clear retrans_stamp when there are no retransmissions in the
+ * window. It would seem that it is trivially available for us in
+ * tp->retrans_out, however, that kind of assumptions doesn't consider
+ * what will happen if errors occur when sending retransmission for the
+ * second time. ...It could the that such segment has only
+ * TCPCB_EVER_RETRANS set at the present time. It seems that checking
+ * the head skb is enough except for some reneging corner cases that
+ * are not worth the effort.
+ *
+ * Main reason for all this complexity is the fact that connection dying
+ * time now depends on the validity of the retrans_stamp, in particular,
+ * that successive retransmissions of a segment must not advance
+ * retrans_stamp under any conditions.
+ */
+static bool tcp_any_retrans_done(const struct sock *sk)
+{
+ const struct tcp_sock *tp = tcp_sk(sk);
+ struct sk_buff *skb;
+
+ if (tp->retrans_out)
+ return true;
+
+ skb = tcp_write_queue_head(sk);
+ if (unlikely(skb && TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS))
+ return true;
+
+ return false;
+}
+
#if FASTRETRANS_DEBUG > 1
static void DBGUNDO(struct sock *sk, const char *msg)
{
* is ACKed. For Reno it is MUST to prevent false
* fast retransmits (RFC2582). SACK TCP is safe. */
tcp_moderate_cwnd(tp);
+ if (!tcp_any_retrans_done(sk))
+ tp->retrans_stamp = 0;
return true;
}
tcp_set_ca_state(sk, TCP_CA_Open);
return false;
}
-/* We can clear retrans_stamp when there are no retransmissions in the
- * window. It would seem that it is trivially available for us in
- * tp->retrans_out, however, that kind of assumptions doesn't consider
- * what will happen if errors occur when sending retransmission for the
- * second time. ...It could the that such segment has only
- * TCPCB_EVER_RETRANS set at the present time. It seems that checking
- * the head skb is enough except for some reneging corner cases that
- * are not worth the effort.
- *
- * Main reason for all this complexity is the fact that connection dying
- * time now depends on the validity of the retrans_stamp, in particular,
- * that successive retransmissions of a segment must not advance
- * retrans_stamp under any conditions.
- */
-static bool tcp_any_retrans_done(const struct sock *sk)
-{
- const struct tcp_sock *tp = tcp_sk(sk);
- struct sk_buff *skb;
-
- if (tp->retrans_out)
- return true;
-
- skb = tcp_write_queue_head(sk);
- if (unlikely(skb && TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS))
- return true;
-
- return false;
-}
-
/* Undo during loss recovery after partial ACK or using F-RTO. */
static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
{
else
dev->flags &= ~IFF_POINTOPOINT;
- dev->iflink = p->link;
-
/* Precalculate GRE options length */
if (t->parms.o_flags&(GRE_CSUM|GRE_KEY|GRE_SEQ)) {
if (t->parms.o_flags&GRE_CSUM)
u64_stats_init(&ip6gre_tunnel_stats->syncp);
}
+ dev->iflink = tunnel->parms.link;
return 0;
}
if (!dev->tstats)
return -ENOMEM;
+ dev->iflink = tunnel->parms.link;
+
return 0;
}
int err;
t = netdev_priv(dev);
- err = ip6_tnl_dev_init(dev);
- if (err < 0)
- goto out;
err = register_netdevice(dev);
if (err < 0)
static const struct net_device_ops ip6_tnl_netdev_ops = {
+ .ndo_init = ip6_tnl_dev_init,
.ndo_uninit = ip6_tnl_dev_uninit,
.ndo_start_xmit = ip6_tnl_xmit,
.ndo_do_ioctl = ip6_tnl_ioctl,
struct ip6_tnl *t = netdev_priv(dev);
struct net *net = dev_net(dev);
struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
- int err = ip6_tnl_dev_init_gen(dev);
-
- if (err)
- return err;
t->parms.proto = IPPROTO_IPV6;
dev_hold(dev);
- ip6_tnl_link_config(t);
-
rcu_assign_pointer(ip6n->tnls_wc[0], t);
return 0;
}
struct vti6_net *ip6n = net_generic(net, vti6_net_id);
int err;
- err = vti6_dev_init(dev);
- if (err < 0)
- goto out;
-
err = register_netdevice(dev);
if (err < 0)
goto out;
}
static const struct net_device_ops vti6_netdev_ops = {
+ .ndo_init = vti6_dev_init,
.ndo_uninit = vti6_dev_uninit,
.ndo_start_xmit = vti6_tnl_xmit,
.ndo_do_ioctl = vti6_ioctl,
struct ip6_tnl *t = netdev_priv(dev);
struct net *net = dev_net(dev);
struct vti6_net *ip6n = net_generic(net, vti6_net_id);
- int err = vti6_dev_init_gen(dev);
-
- if (err)
- return err;
t->parms.proto = IPPROTO_IPV6;
dev_hold(dev);
- vti6_link_config(t);
-
rcu_assign_pointer(ip6n->tnls_wc[0], t);
return 0;
}
struct sit_net *sitn = net_generic(net, sit_net_id);
int err;
- err = ipip6_tunnel_init(dev);
- if (err < 0)
- goto out;
- ipip6_tunnel_clone_6rd(dev, sitn);
+ memcpy(dev->dev_addr, &t->parms.iph.saddr, 4);
+ memcpy(dev->broadcast, &t->parms.iph.daddr, 4);
if ((__force u16)t->parms.i_flags & SIT_ISATAP)
dev->priv_flags |= IFF_ISATAP;
if (err < 0)
goto out;
- strcpy(t->parms.name, dev->name);
+ ipip6_tunnel_clone_6rd(dev, sitn);
+
dev->rtnl_link_ops = &sit_link_ops;
dev_hold(dev);
}
static const struct net_device_ops ipip6_netdev_ops = {
+ .ndo_init = ipip6_tunnel_init,
.ndo_uninit = ipip6_tunnel_uninit,
.ndo_start_xmit = sit_tunnel_xmit,
.ndo_do_ioctl = ipip6_tunnel_ioctl,
tunnel->dev = dev;
tunnel->net = dev_net(dev);
-
- memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4);
- memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4);
+ strcpy(tunnel->parms.name, dev->name);
ipip6_tunnel_bind_dev(dev);
dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
tunnel->dev = dev;
tunnel->net = dev_net(dev);
- strcpy(tunnel->parms.name, dev->name);
iph->version = 4;
iph->protocol = IPPROTO_IPV6;
memset(¶ms, 0, sizeof(params));
memset(&csa_ie, 0, sizeof(csa_ie));
- err = ieee80211_parse_ch_switch_ie(sdata, elems, beacon,
+ err = ieee80211_parse_ch_switch_ie(sdata, elems,
ifibss->chandef.chan->band,
sta_flags, ifibss->bssid, &csa_ie);
/* can't switch to destination channel, fail */
* ieee80211_parse_ch_switch_ie - parses channel switch IEs
* @sdata: the sdata of the interface which has received the frame
* @elems: parsed 802.11 elements received with the frame
- * @beacon: indicates if the frame was a beacon or probe response
* @current_band: indicates the current band
* @sta_flags: contains information about own capabilities and restrictions
* to decide which channel switch announcements can be accepted. Only the
* Return: 0 on success, <0 on error and >0 if there is nothing to parse.
*/
int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
- struct ieee802_11_elems *elems, bool beacon,
+ struct ieee802_11_elems *elems,
enum ieee80211_band current_band,
u32 sta_flags, u8 *bssid,
struct ieee80211_csa_ie *csa_ie);
int i, flushed;
struct ps_data *ps;
struct cfg80211_chan_def chandef;
+ bool cancel_scan;
clear_bit(SDATA_STATE_RUNNING, &sdata->state);
- if (rcu_access_pointer(local->scan_sdata) == sdata)
+ cancel_scan = rcu_access_pointer(local->scan_sdata) == sdata;
+ if (cancel_scan)
ieee80211_scan_cancel(local);
/*
list_del(&sdata->u.vlan.list);
mutex_unlock(&local->mtx);
RCU_INIT_POINTER(sdata->vif.chanctx_conf, NULL);
+ /* see comment in the default case below */
+ ieee80211_free_keys(sdata, true);
/* no need to tell driver */
break;
case NL80211_IFTYPE_MONITOR:
/*
* When we get here, the interface is marked down.
* Free the remaining keys, if there are any
- * (shouldn't be, except maybe in WDS mode?)
+ * (which can happen in AP mode if userspace sets
+ * keys before the interface is operating, and maybe
+ * also in WDS mode)
*
* Force the key freeing to always synchronize_net()
* to wait for the RX path in case it is using this
- * interface enqueuing frames * at this very time on
+ * interface enqueuing frames at this very time on
* another CPU.
*/
ieee80211_free_keys(sdata, true);
-
- /* fall through */
- case NL80211_IFTYPE_AP:
skb_queue_purge(&sdata->skb_queue);
}
ieee80211_recalc_ps(local, -1);
+ if (cancel_scan)
+ flush_delayed_work(&local->scan_work);
+
if (local->open_count == 0) {
ieee80211_stop_device(local);
memset(¶ms, 0, sizeof(params));
memset(&csa_ie, 0, sizeof(csa_ie));
- err = ieee80211_parse_ch_switch_ie(sdata, elems, beacon, band,
+ err = ieee80211_parse_ch_switch_ie(sdata, elems, band,
sta_flags, sdata->vif.addr,
&csa_ie);
if (err < 0)
current_band = cbss->channel->band;
memset(&csa_ie, 0, sizeof(csa_ie));
- res = ieee80211_parse_ch_switch_ie(sdata, elems, beacon, current_band,
+ res = ieee80211_parse_ch_switch_ie(sdata, elems, current_band,
ifmgd->flags,
ifmgd->associated->bssid, &csa_ie);
if (res < 0)
ieee80211_queue_work(&local->hw, &ifmgd->chswitch_work);
else
mod_timer(&ifmgd->chswitch_timer,
- TU_TO_EXP_TIME(csa_ie.count * cbss->beacon_interval));
+ TU_TO_EXP_TIME((csa_ie.count - 1) *
+ cbss->beacon_interval));
}
static bool
sc = le16_to_cpu(hdr->seq_ctrl);
frag = sc & IEEE80211_SCTL_FRAG;
- if (likely((!ieee80211_has_morefrags(fc) && frag == 0) ||
- is_multicast_ether_addr(hdr->addr1))) {
- /* not fragmented */
+ if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
+ goto out;
+
+ if (is_multicast_ether_addr(hdr->addr1)) {
+ rx->local->dot11MulticastReceivedFrameCount++;
goto out;
}
+
I802_DEBUG_INC(rx->local->rx_handlers_fragments);
if (skb_linearize(rx->skb))
out:
if (rx->sta)
rx->sta->rx_packets++;
- if (is_multicast_ether_addr(hdr->addr1))
- rx->local->dot11MulticastReceivedFrameCount++;
- else
- ieee80211_led_rx(rx->local);
+ ieee80211_led_rx(rx->local);
return RX_CONTINUE;
}
#include "wme.h"
int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
- struct ieee802_11_elems *elems, bool beacon,
+ struct ieee802_11_elems *elems,
enum ieee80211_band current_band,
u32 sta_flags, u8 *bssid,
struct ieee80211_csa_ie *csa_ie)
return -EINVAL;
}
- if (!beacon && sec_chan_offs) {
+ if (sec_chan_offs) {
secondary_channel_offset = sec_chan_offs->sec_chan_offs;
- } else if (beacon && ht_oper) {
- secondary_channel_offset =
- ht_oper->ht_param & IEEE80211_HT_PARAM_CHA_SEC_OFFSET;
} else if (!(sta_flags & IEEE80211_STA_DISABLE_HT)) {
- /* If it's not a beacon, HT is enabled and the IE not present,
- * it's 20 MHz, 802.11-2012 8.5.2.6:
- * This element [the Secondary Channel Offset Element] is
- * present when switching to a 40 MHz channel. It may be
- * present when switching to a 20 MHz channel (in which
- * case the secondary channel offset is set to SCN).
- */
+ /* If the secondary channel offset IE is not present,
+		 * we can't know what the post-CSA offset is, so the
+ * best we can do is use 20MHz.
+ */
secondary_channel_offset = IEEE80211_HT_PARAM_CHA_SEC_NONE;
}
return;
for (undo = 0; undo < group; undo++)
- if (test_bit(group, &groups))
+ if (test_bit(undo, &groups))
nlk->netlink_unbind(undo);
}
netlink_insert(sk, net, nladdr->nl_pid) :
netlink_autobind(sock);
if (err) {
- netlink_unbind(nlk->ngroups - 1, groups, nlk);
+ netlink_unbind(nlk->ngroups, groups, nlk);
return err;
}
}
nl_table[unit].module = module;
if (cfg) {
nl_table[unit].bind = cfg->bind;
+ nl_table[unit].unbind = cfg->unbind;
nl_table[unit].flags = cfg->flags;
if (cfg->compare)
nl_table[unit].compare = cfg->compare;
list_add(&cur_key->key_list, sh_keys);
cur_key->key = key;
- sctp_auth_key_hold(key);
-
return 0;
nomem:
if (!replace)
addr_param = param.v + sizeof(sctp_addip_param_t);
af = sctp_get_af_specific(param_type2af(param.p->type));
+ if (af == NULL)
+ break;
+
af->from_addr_param(&addr, addr_param,
htons(asoc->peer.port), 0);
char *string = NULL;
struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base);
struct gss_cl_ctx *ctx;
+ unsigned int len;
struct xdr_netobj *acceptor;
rcu_read_lock();
if (!ctx)
goto out;
- acceptor = &ctx->gc_acceptor;
+ len = ctx->gc_acceptor.len;
+ rcu_read_unlock();
/* no point if there's no string */
- if (!acceptor->len)
- goto out;
-
- string = kmalloc(acceptor->len + 1, GFP_KERNEL);
+ if (!len)
+ return NULL;
+realloc:
+ string = kmalloc(len + 1, GFP_KERNEL);
if (!string)
+ return NULL;
+
+ rcu_read_lock();
+ ctx = rcu_dereference(gss_cred->gc_ctx);
+
+ /* did the ctx disappear or was it replaced by one with no acceptor? */
+ if (!ctx || !ctx->gc_acceptor.len) {
+ kfree(string);
+ string = NULL;
goto out;
+ }
+
+ acceptor = &ctx->gc_acceptor;
+
+ /*
+ * Did we find a new acceptor that's longer than the original? Allocate
+ * a longer buffer and try again.
+ */
+ if (len < acceptor->len) {
+ len = acceptor->len;
+ rcu_read_unlock();
+ kfree(string);
+ goto realloc;
+ }
memcpy(string, acceptor->data, acceptor->len);
string[acceptor->len] = '\0';
err = selinux_nlmsg_lookup(sksec->sclass, nlh->nlmsg_type, &perm);
if (err) {
if (err == -EINVAL) {
- WARN_ONCE(1, "selinux_nlmsg_perm: unrecognized netlink message:"
- " protocol=%hu nlmsg_type=%hu sclass=%hu\n",
- sk->sk_protocol, nlh->nlmsg_type, sksec->sclass);
+ printk(KERN_WARNING
+ "SELinux: unrecognized netlink message:"
+ " protocol=%hu nlmsg_type=%hu sclass=%hu\n",
+ sk->sk_protocol, nlh->nlmsg_type, sksec->sclass);
if (!selinux_enforcing || security_get_allow_unknown())
err = 0;
}
"{Intel, LPT_LP},"
"{Intel, WPT_LP},"
"{Intel, SPT},"
+ "{Intel, SPT_LP},"
"{Intel, HPT},"
"{Intel, PBG},"
"{Intel, SCH},"
/* Sunrise Point */
{ PCI_DEVICE(0x8086, 0xa170),
.driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+ /* Sunrise Point-LP */
+ { PCI_DEVICE(0x8086, 0x9d70),
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
/* Haswell */
{ PCI_DEVICE(0x8086, 0x0a0c),
.driver_data = AZX_DRIVER_HDMI | AZX_DCAPS_INTEL_HASWELL },
unsigned int num_eapds;
hda_nid_t eapds[4];
bool dynamic_eapd;
+ hda_nid_t mute_led_eapd;
unsigned int parse_flags; /* flag for snd_hda_parse_pin_defcfg() */
cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, enabled);
}
+/* turn on/off EAPD according to Master switch (inversely!) for mute LED */
+static void cx_auto_vmaster_hook_mute_led(void *private_data, int enabled)
+{
+ struct hda_codec *codec = private_data;
+ struct conexant_spec *spec = codec->spec;
+
+ snd_hda_codec_write(codec, spec->mute_led_eapd, 0,
+ AC_VERB_SET_EAPD_BTLENABLE,
+ enabled ? 0x00 : 0x02);
+}
+
static int cx_auto_build_controls(struct hda_codec *codec)
{
int err;
CXT_FIXUP_TOSHIBA_P105,
CXT_FIXUP_HP_530,
CXT_FIXUP_CAP_MIX_AMP_5047,
+ CXT_FIXUP_MUTE_LED_EAPD,
};
/* for hda_fixup_thinkpad_acpi() */
}
}
+static void cxt_fixup_mute_led_eapd(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct conexant_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->mute_led_eapd = 0x1b;
+ spec->dynamic_eapd = 1;
+ spec->gen.vmaster_mute.hook = cx_auto_vmaster_hook_mute_led;
+ }
+}
+
/*
* Fix max input level on mixer widget to 0dB
* (originally it has 0x2b steps with 0dB offset 0x14)
.type = HDA_FIXUP_FUNC,
.v.func = cxt_fixup_cap_mix_amp_5047,
},
+ [CXT_FIXUP_MUTE_LED_EAPD] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cxt_fixup_mute_led_eapd,
+ },
};
static const struct snd_pci_quirk cxt5045_fixups[] = {
SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
{ .id = CXT_PINCFG_LEMOTE_A1004, .name = "lemote-a1004" },
{ .id = CXT_PINCFG_LEMOTE_A1205, .name = "lemote-a1205" },
{ .id = CXT_FIXUP_OLPC_XO, .name = "olpc-xo" },
+ { .id = CXT_FIXUP_MUTE_LED_EAPD, .name = "mute-led-eapd" },
{}
};
snd_hda_jack_unsol_event(codec, res >> 2);
}
+/* Change EAPD to verb control */
+static void alc_fill_eapd_coef(struct hda_codec *codec)
+{
+ int coef;
+
+ coef = alc_get_coef0(codec);
+
+ switch (codec->vendor_id) {
+ case 0x10ec0262:
+ alc_update_coef_idx(codec, 0x7, 0, 1<<5);
+ break;
+ case 0x10ec0267:
+ case 0x10ec0268:
+ alc_update_coef_idx(codec, 0x7, 0, 1<<13);
+ break;
+ case 0x10ec0269:
+ if ((coef & 0x00f0) == 0x0010)
+ alc_update_coef_idx(codec, 0xd, 0, 1<<14);
+ if ((coef & 0x00f0) == 0x0020)
+ alc_update_coef_idx(codec, 0x4, 1<<15, 0);
+ if ((coef & 0x00f0) == 0x0030)
+ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ break;
+ case 0x10ec0280:
+ case 0x10ec0284:
+ case 0x10ec0290:
+ case 0x10ec0292:
+ alc_update_coef_idx(codec, 0x4, 1<<15, 0);
+ break;
+ case 0x10ec0233:
+ case 0x10ec0255:
+ case 0x10ec0282:
+ case 0x10ec0283:
+ case 0x10ec0286:
+ case 0x10ec0288:
+ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ break;
+ case 0x10ec0285:
+ case 0x10ec0293:
+ alc_update_coef_idx(codec, 0xa, 1<<13, 0);
+ break;
+ case 0x10ec0662:
+ if ((coef & 0x00f0) == 0x0030)
+ alc_update_coef_idx(codec, 0x4, 1<<10, 0); /* EAPD Ctrl */
+ break;
+ case 0x10ec0272:
+ case 0x10ec0273:
+ case 0x10ec0663:
+ case 0x10ec0665:
+ case 0x10ec0670:
+ case 0x10ec0671:
+ case 0x10ec0672:
+ alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */
+ break;
+ case 0x10ec0668:
+ alc_update_coef_idx(codec, 0x7, 3<<13, 0);
+ break;
+ case 0x10ec0867:
+ alc_update_coef_idx(codec, 0x4, 1<<10, 0);
+ break;
+ case 0x10ec0888:
+ if ((coef & 0x00f0) == 0x0020 || (coef & 0x00f0) == 0x0030)
+ alc_update_coef_idx(codec, 0x7, 1<<5, 0);
+ break;
+ case 0x10ec0892:
+ alc_update_coef_idx(codec, 0x7, 1<<5, 0);
+ break;
+ case 0x10ec0899:
+ case 0x10ec0900:
+ alc_update_coef_idx(codec, 0x7, 1<<1, 0);
+ break;
+ }
+}
+
/* additional initialization for ALC888 variants */
static void alc888_coef_init(struct hda_codec *codec)
{
/* generic EAPD initialization */
static void alc_auto_init_amp(struct hda_codec *codec, int type)
{
+ alc_fill_eapd_coef(codec);
alc_auto_setup_eapd(codec, true);
switch (type) {
case ALC_INIT_GPIO1:
}
}
- /* Class D */
- alc_update_coef_idx(codec, 0xd, 0, 1<<14);
-
/* HP */
alc_update_coef_idx(codec, 0x4, 0, 1<<11);
}
{}
};
-static void alc662_fill_coef(struct hda_codec *codec)
-{
- int coef;
-
- coef = alc_get_coef0(codec);
-
- switch (codec->vendor_id) {
- case 0x10ec0662:
- if ((coef & 0x00f0) == 0x0030)
- alc_update_coef_idx(codec, 0x4, 1<<10, 0); /* EAPD Ctrl */
- break;
- case 0x10ec0272:
- case 0x10ec0273:
- case 0x10ec0663:
- case 0x10ec0665:
- case 0x10ec0670:
- case 0x10ec0671:
- case 0x10ec0672:
- alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */
- break;
- }
-}
-
/*
*/
static int patch_alc662(struct hda_codec *codec)
case 0x10ec0668:
spec->init_hook = alc668_restore_default_value;
break;
- default:
- spec->init_hook = alc662_fill_coef;
- alc662_fill_coef(codec);
- break;
}
snd_hda_pick_fixup(codec, alc662_fixup_models,
return changed;
}
+static void kctl_private_value_free(struct snd_kcontrol *kctl)
+{
+ kfree((void *)kctl->private_value);
+}
+
static int snd_ftu_create_effect_switch(struct usb_mixer_interface *mixer,
int validx, int bUnitID)
{
return -ENOMEM;
}
+ kctl->private_free = kctl_private_value_free;
err = snd_ctl_add(mixer->chip->card, kctl);
if (err < 0)
return err;
struct tpacket2_hdr *header = ring;
int count = 0;
- while (header->tp_status & TP_STATUS_USER && count < RING_NUM_FRAMES) {
+ while (count < RING_NUM_FRAMES && header->tp_status & TP_STATUS_USER) {
count++;
header = ring + (count * getpagesize());
}