*.gz
*.bz2
*.lzma
+*.xz
*.lzo
*.patch
*.gcno
Arnaud Patard <arnaud.patard@rtp-net.org>
Arnd Bergmann <arnd@arndb.de>
Axel Dyks <xl@xlsigned.net>
+Axel Lin <axel.lin@gmail.com>
Ben Gardner <bgardner@wabtec.com>
Ben M Cahill <ben.m.cahill@intel.com>
Björn Steinbrink <B.Steinbrink@gmx.de>
--- /dev/null
+What: /sys/devices/platform/at91_can/net/<iface>/mb0_id
+Date: January 2011
+KernelVersion: 2.6.38
+Contact: Marc Kleine-Budde <kernel@pengutronix.de>
+Description:
+ Value representing the can_id of mailbox 0.
+
+ Default: 0x7ff (standard frame)
+
+ Due to a chip bug (errata 50.2.6.3 & 50.3.5.3 in
+ "AT91SAM9263 Preliminary 6249H-ATARM-27-Jul-09") the
+		contents of mailbox 0 may be sent under certain
+ conditions (even if disabled or in rx mode).
+
+		The workaround in the errata suggests not using the
+		mailbox and loading it with an unused identifier.
+
+ In order to use an extended can_id add the
+ CAN_EFF_FLAG (0x80000000U) to the can_id. Example:
+
+ - standard id 0x7ff:
+ echo 0x7ff > /sys/class/net/can0/mb0_id
+
+ - extended id 0x1fffffff:
+ echo 0x9fffffff > /sys/class/net/can0/mb0_id
<chapter id="uart16x50">
<title>16x50 UART Driver</title>
!Iinclude/linux/serial_core.h
-!Edrivers/serial/serial_core.c
-!Edrivers/serial/8250.c
+!Edrivers/tty/serial/serial_core.c
+!Edrivers/tty/serial/8250.c
</chapter>
<chapter id="fbdev">
services.
</para>
<para>
- The core of every DRM driver is struct drm_device. Drivers
- will typically statically initialize a drm_device structure,
+ The core of every DRM driver is struct drm_driver. Drivers
+ will typically statically initialize a drm_driver structure,
then pass it to drm_init() at load time.
</para>
<title>Driver initialization</title>
<para>
Before calling the DRM initialization routines, the driver must
- first create and fill out a struct drm_device structure.
+ first create and fill out a struct drm_driver structure.
</para>
<programlisting>
static struct drm_driver driver = {
<holder>Convergence GmbH</holder>
</copyright>
<copyright>
- <year>2009-2010</year>
+ <year>2009-2011</year>
<holder>Mauro Carvalho Chehab</holder>
</copyright>
</sect1>
</chapter>
+ <chapter id="fs_events">
+ <title>Events based on file descriptors</title>
+!Efs/eventfd.c
+ </chapter>
+
<chapter id="sysfs">
<title>The Filesystem for Exporting Kernel Objects</title>
!Efs/sysfs/file.c
<title>LINUX MEDIA INFRASTRUCTURE API</title>
<copyright>
- <year>2009-2010</year>
+ <year>2009-2011</year>
<holder>LinuxTV Developers</holder>
</copyright>
</author>
</authorgroup>
<copyright>
- <year>2009-2010</year>
+ <year>2009-2011</year>
<holder>Mauro Carvalho Chehab</holder>
</copyright>
</section>
<section>
+ <title>RDS datastructures</title>
<table frame="none" pgwide="1" id="v4l2-rds-data">
<title>struct
<structname>v4l2_rds_data</structname></title>
<table frame="none" pgwide="1" id="v4l2-rds-block-codes">
<title>Block defines</title>
- <tgroup cols="3">
+ <tgroup cols="4">
<colspec colname="c1" colwidth="1*" />
<colspec colname="c2" colwidth="1*" />
- <colspec colname="c3" colwidth="5*" />
+ <colspec colname="c3" colwidth="1*" />
+ <colspec colname="c4" colwidth="5*" />
<tbody valign="top">
<row>
<entry>V4L2_RDS_BLOCK_MSK</entry>
<year>2008</year>
<year>2009</year>
<year>2010</year>
+ <year>2011</year>
<holder>Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin
Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab</holder>
</copyright>
</partinfo>
<title>Video for Linux Two API Specification</title>
- <subtitle>Revision 2.6.33</subtitle>
+ <subtitle>Revision 2.6.38</subtitle>
<chapter id="common">
&sub-common;
I - Introduction
1) Entry point for arch/powerpc
- 2) Board support
II - The DT block format
1) Header
VI - System-on-a-chip devices and nodes
1) Defining child nodes of an SOC
2) Representing devices without a current OF specification
- a) PHY nodes
- b) Interrupt controllers
- c) 4xx/Axon EMAC ethernet nodes
- d) Xilinx IP cores
- e) USB EHCI controllers
- f) MDIO on GPIOs
- g) SPI busses
VII - Specifying interrupt information for devices
1) interrupts property
I - Introduction
================
-During the recent development of the Linux/ppc64 kernel, and more
+During the development of the Linux/ppc64 kernel, and more
specifically, the addition of new platform types outside of the old
IBM pSeries/iSeries pair, it was decided to enforce some strict rules
regarding the kernel entry and bootloader <-> kernel interfaces, in
create a node for every PCI device in the system. It is a requirement
to have a node for PCI host bridges in order to provide interrupt
routing information and memory/IO ranges, among others. It is also
-recommended to define nodes for on chip devices and other busses that
+recommended to define nodes for on chip devices and other buses that
don't specifically fit in an existing OF specification. This creates a
great flexibility in the way the kernel can then probe those and match
drivers to devices, without having to hard code all sorts of tables. It
1) Entry point for arch/powerpc
-------------------------------
- There is one and one single entry point to the kernel, at the start
+ There is one single entry point to the kernel, at the start
of the kernel image. That entry point supports two calling
conventions:
with all CPUs. The way to do that with method b) will be
described in a later revision of this document.
-
-2) Board support
-----------------
-
-64-bit kernels:
-
Board supports (platforms) are not exclusive config options. An
arbitrary set of board supports can be built in a single kernel
image. The kernel will "know" what set of functions to use for a
containing the various callbacks that the generic code will
use to get to your platform specific code
- c) Add a reference to your "ppc_md" structure in the
- "machines" table in arch/powerpc/kernel/setup_64.c if you are
- a 64-bit platform.
-
- d) request and get assigned a platform number (see PLATFORM_*
- constants in arch/powerpc/include/asm/processor.h
-
-32-bit embedded kernels:
-
- Currently, board support is essentially an exclusive config option.
- The kernel is configured for a single platform. Part of the reason
- for this is to keep kernels on embedded systems small and efficient;
- part of this is due to the fact the code is already that way. In the
- future, a kernel may support multiple platforms, but only if the
+ A kernel image may support multiple platforms, but only if the
platforms feature the same core architecture. A single kernel build
cannot support both configurations with Book E and configurations
with classic Powerpc architectures.
- 32-bit embedded platforms that are moved into arch/powerpc using a
- flattened device tree should adopt the merged tree practice of
- setting ppc_md up dynamically, even though the kernel is currently
- built with support for only a single platform at a time. This allows
- unification of the setup code, and will make it easier to go to a
- multiple-platform-support model in the future.
-
-NOTE: I believe the above will be true once Ben's done with the merge
-of the boot sequences.... someone speak up if this is wrong!
-
- To add a 32-bit embedded platform support, follow the instructions
- for 64-bit platforms above, with the exception that the Kconfig
- option should be set up such that the kernel builds exclusively for
- the platform selected. The processor type for the platform should
- enable another config option to select the specific board
- supported.
-
-NOTE: If Ben doesn't merge the setup files, may need to change this to
-point to setup_32.c
-
-
- I will describe later the boot process and various callbacks that
- your platform should implement.
-
II - The DT block format
========================
1) Header
---------
- The kernel is entered with r3 pointing to an area of memory that is
- roughly described in arch/powerpc/include/asm/prom.h by the structure
+ The kernel is passed the physical address pointing to an area of memory
+ that is roughly described in include/linux/of_fdt.h by the structure
boot_param_header:
struct boot_param_header {
All values in this header are in big endian format, the various
fields in this header are defined more precisely below. All
"offset" values are in bytes from the start of the header; that is
- from the value of r3.
+ from the physical base address of the device tree block.
- magic
------------------------------
- r3 -> | struct boot_param_header |
+ base -> | struct boot_param_header |
------------------------------
| (alignment gap) (*) |
------------------------------
-----> ------------------------------
|
|
- --- (r3 + totalsize)
+ --- (base + totalsize)
(*) The alignment gaps are not necessarily present; their presence
and size are dependent on the various alignment requirements of
the device-tree. More details about the actual format of these will be
below.
-The kernel powerpc generic code does not make any formal use of the
+The kernel generic code does not make any formal use of the
unit address (though some board support code may do) so the only real
requirement here for the unit address is to ensure uniqueness of
the node unit name at a given level of the tree. Nodes with no notion
Every node which actually represents an actual device (that is, a node
which isn't only a virtual "container" for more nodes, like "/cpus"
-is) is also required to have a "device_type" property indicating the
-type of node .
+is) is also required to have a "compatible" property indicating the
+specific hardware and an optional list of devices it is fully
+backwards compatible with.
Finally, every node that can be referenced from a property in another
-node is required to have a "linux,phandle" property. Real open
-firmware implementations provide a unique "phandle" value for every
-node that the "prom_init()" trampoline code turns into
-"linux,phandle" properties. However, this is made optional if the
-flattened device tree is used directly. An example of a node
+node is required to have either a "phandle" or a "linux,phandle"
+property. Real Open Firmware implementations provide a unique
+"phandle" value for every node that the "prom_init()" trampoline code
+turns into "linux,phandle" properties. However, this is made optional
+if the flattened device tree is used directly. An example of a node
referencing another node via "phandle" is when laying out the
interrupt tree which will be described in a further version of this
document.
-This "linux, phandle" property is a 32-bit value that uniquely
+The "phandle" property is a 32-bit value that uniquely
identifies a node. You are free to use whatever values or system of
values, internal pointers, or whatever to generate these, the only
requirement is that every node for which you provide that property has
while the top cell contains address space indication, flags, and pci
bus & device numbers.
-For busses that support dynamic allocation, it's the accepted practice
+For buses that support dynamic allocation, it's the accepted practice
to then not provide the address in "reg" (keep it 0) though while
providing a flag indicating the address is dynamically allocated, and
then, to provide a separate "assigned-addresses" property that
The "reg" property only defines addresses and sizes (if #size-cells is
non-0) within a given bus. In order to translate addresses upward
(that is into parent bus addresses, and possibly into CPU physical
-addresses), all busses must contain a "ranges" property. If the
+addresses), all buses must contain a "ranges" property. If the
"ranges" property is missing at a given level, it's assumed that
translation isn't possible, i.e., the registers are not visible on the
parent bus. The format of the "ranges" property for a bus is a list
PCI<->ISA bridge, that would be a PCI address. It defines the base
address in the parent bus where the beginning of that range is mapped.
-For a new 64-bit powerpc board, I recommend either the 2/2 format or
+For new 64-bit board support, I recommend either the 2/2 format or
Apple's 2/1 format which is slightly more compact since sizes usually
-fit in a single 32-bit word. New 32-bit powerpc boards should use a
+fit in a single 32-bit word. New 32-bit board support should use a
1/1 format, unless the processor supports physical addresses greater
than 32-bits, in which case a 2/1 format is recommended.
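
As a purely illustrative sketch (the addresses and size below are made up),
a single "ranges" entry in the 1/1 format, mapping a 1 MB child region at
offset 0 to parent address 0xe0000000, would look like:

	ranges = <0x00000000 0xe0000000 0x00100000>;

The first cell is the child address, the second the parent address, and the
third the size, following the 1/1 #address-cells/#size-cells layout described
above.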
While earlier users of Open Firmware like OldWorld macintoshes tended
to use the actual device name for the "name" property, it's nowadays
considered a good practice to use a name that is closer to the device
-class (often equal to device_type). For example, nowadays, ethernet
+class (often equal to device_type). For example, nowadays, Ethernet
controllers are named "ethernet", an additional "model" property
defining precisely the chip type/model, and "compatible" property
defining the family in case a single driver can drive more than one
4) Note about node and property names and character set
-------------------------------------------------------
-While open firmware provides more flexible usage of 8859-1, this
+While Open Firmware provides more flexible usage of 8859-1, this
specification enforces more strict rules. Nodes and properties should
be comprised only of ASCII characters 'a' to 'z', '0' to
'9', ',', '.', '_', '+', '#', '?', and '-'. Node names additionally
--------------------------------
These are all that are currently required. However, it is strongly
recommended that you expose PCI host bridges as documented in the
- PCI binding to open firmware, and your interrupt tree as documented
+ PCI binding to Open Firmware, and your interrupt tree as documented
in OF interrupt tree specification.
a) The root node
- model : this is your board name/model
- #address-cells : address representation for "root" devices
- #size-cells: the size representation for "root" devices
- - device_type : This property shouldn't be necessary. However, if
- you decide to create a device_type for your root node, make sure it
- is _not_ "chrp" unless your platform is a pSeries or PAPR compliant
- one for 64-bit, or a CHRP-type machine for 32-bit as this will
- matched by the kernel this way.
-
- Additionally, some recommended properties are:
-
- compatible : the board "family" generally finds its way here,
for example, if you have 2 board models with a similar layout,
that typically get driven by the same platform code in the
- kernel, you would use a different "model" property but put a
- value in "compatible". The kernel doesn't directly use that
- value but it is generally useful.
+ kernel, you would specify the exact board model in the
+ compatible property followed by an entry that represents the SoC
+ model.
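+
+	As an illustration (the vendor, board and SoC names below are
+	placeholders, not taken from this document), such a root node
+	compatible property might look like:
+
+		compatible = "vendor,boardname", "vendor,socname";
+
+	The first entry identifies the exact board, the second is the more
+	generic entry representing the SoC model.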
The root node is also generally where you add additional properties
specific to your board like the serial number if any, that sort of
So under /cpus, you are supposed to create a node for every CPU on
the machine. There is no specific restriction on the name of the
- CPU, though It's common practice to call it PowerPC,<name>. For
+ CPU, though it's common to call it <architecture>,<core>. For
example, Apple uses PowerPC,G5 while IBM uses PowerPC,970FX.
+ However, the Generic Names convention suggests that it would be
+ better to simply use 'cpu' for each cpu node and use the compatible
+ property to identify the specific cpu core.
Required properties:
e) The /chosen node
- This node is a bit "special". Normally, that's where open firmware
+ This node is a bit "special". Normally, that's where Open Firmware
puts some variable environment information, like the arguments, or
the default input/output devices.
console device if any. Typically, if you have serial devices on
your board, you may want to put the full path to the one set as
the default console in the firmware here, for the kernel to pick
- it up as its own default console. If you look at the function
- set_preferred_console() in arch/ppc64/kernel/setup.c, you'll see
- that the kernel tries to find out the default console and has
- knowledge of various types like 8250 serial ports. You may want
- to extend this function to add your own.
+ it up as its own default console.
Note that u-boot creates and fills in the chosen node for platforms
that use it.
f) the /soc<SOCname> node
- This node is used to represent a system-on-a-chip (SOC) and must be
- present if the processor is a SOC. The top-level soc node contains
- information that is global to all devices on the SOC. The node name
- should contain a unit address for the SOC, which is the base address
- of the memory-mapped register set for the SOC. The name of an soc
+ This node is used to represent a system-on-a-chip (SoC) and must be
+ present if the processor is a SoC. The top-level soc node contains
+ information that is global to all devices on the SoC. The node name
+ should contain a unit address for the SoC, which is the base address
+ of the memory-mapped register set for the SoC. The name of an SoC
node should start with "soc", and the remainder of the name should
represent the part number for the soc. For example, the MPC8540's
soc node would be called "soc8540".
Required properties:
- - device_type : Should be "soc"
- ranges : Should be defined as specified in 1) to describe the
- translation of SOC addresses for memory mapped SOC registers.
- - bus-frequency: Contains the bus frequency for the SOC node.
+ translation of SoC addresses for memory mapped SoC registers.
+ - bus-frequency: Contains the bus frequency for the SoC node.
Typically, the value of this field is filled in by the boot
loader.
+ - compatible : Exact model of the SoC
Recommended properties:
- An example of code for iterating nodes & retrieving properties
directly from the flattened tree format can be found in the kernel
- file arch/ppc64/kernel/prom.c, look at scan_flat_dt() function,
+ file drivers/of/fdt.c. Look at the of_scan_flat_dt() function,
its usage in early_init_devtree(), and the corresponding various
early_init_dt_scan_*() callbacks. That code can be re-used in a
GPL bootloader, and as the author of that code, I would be happy
to discuss possible free licensing to any vendor who wishes to
integrate all or part of this code into a non-GPL bootloader.
+ (reference needed; who is 'I' here? ---gcl Jan 31, 2011)
2) Representing devices without a current OF specification
----------------------------------------------------------
-Currently, there are many devices on SOCs that do not have a standard
-representation pre-defined as part of the open firmware
-specifications, mainly because the boards that contain these SOCs are
-not currently booted using open firmware. This section contains
-descriptions for the SOC devices for which new nodes have been
-defined; this list will expand as more and more SOC-containing
-platforms are moved over to use the flattened-device-tree model.
+Currently, there are many devices on SoCs that do not have a standard
+representation defined as part of the Open Firmware specifications,
+mainly because the boards that contain these SoCs are not currently
+booted using Open Firmware. Binding documentation for new devices
+should be added to the Documentation/devicetree/bindings directory.
+That directory will expand as device tree support is added to more and
+more SoCs.
+
VII - Specifying interrupt information for devices
===================================================
-The device tree represents the busses and devices of a hardware
+The device tree represents the buses and devices of a hardware
system in a form similar to the physical bus topology of the
hardware.
-----------------------------
-What: __do_IRQ all in one fits nothing interrupt handler
-When: 2.6.32
-Why: __do_IRQ was kept for easy migration to the type flow handlers.
- More than two years of migration time is enough.
-Who: Thomas Gleixner <tglx@linutronix.de>
-
------------------------------
-
What: fakephp and associated sysfs files in /sys/bus/pci/slots/
When: 2011
Why: In 2.6.27, the semantics of /sys/bus/pci/slots was redefined to
Who: Jean Delvare <khali@linux-fr.org>
----------------------------
+
+What: noswapaccount kernel command line parameter
+When: 2.6.40
+Why: The original implementation of memsw feature enabled by
+ CONFIG_CGROUP_MEM_RES_CTLR_SWAP could be disabled by the noswapaccount
+ kernel parameter (introduced in 2.6.29-rc1). Later on, this decision
+	turned out not to be ideal because it did not allow the feature to be
+	compiled in but disabled by default, with only interested users
+	enabling it (something general distribution kernels might need).
+	Therefore the swapaccount[=0|1] parameter (introduced in 2.6.37) was
+	added, which provides both possibilities. If we remove noswapaccount
+	we will have fewer command line parameters with the same functionality
+	and we can also clean up the parameter handling a bit.
+Who: Michal Hocko <mhocko@suse.cz>
+
+----------------------------
2.1.30:
- Fix writev() (it kept writing the first segment over and over again
instead of moving onto subsequent segments).
+ - Fix crash in ntfs_mft_record_alloc() when mapping the new extent mft
+ record failed.
2.1.29:
- Fix a deadlock when mounting read-write.
2.1.28:
* JEDEC JC 42.4 compliant temperature sensor chips
Prefix: 'jc42'
Addresses scanned: I2C 0x18 - 0x1f
- Datasheet: -
+ Datasheet:
+ http://www.jedec.org/sites/default/files/docs/4_01_04R19.pdf
Author:
Guenter Roeck <guenter.roeck@ericsson.com>
Description
-----------
-This driver implements support for JEDEC JC 42.4 compliant temperature sensors.
+This driver implements support for JEDEC JC 42.4 compliant temperature sensors,
+which are used on many DDR3 memory modules for mobile devices and servers. Some
+systems use the sensor to prevent memory overheating by automatically throttling
+the memory controller.
+
The driver auto-detects the chips listed above, but can be manually instantiated
to support other JC 42.4 compliant chips.
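
As a hedged example of such manual instantiation (the bus number and slave
address below are placeholders that must match your system; see
Documentation/i2c/instantiating-devices for the generic mechanism):

	# echo jc42 0x18 > /sys/bus/i2c/devices/i2c-0/new_device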
which applies to all limits. This register can be written by writing into
temp1_crit_hyst. Other hysteresis attributes are read-only.
+If the BIOS has configured the sensor for automatic temperature management, it
+is likely that it has locked the registers, i.e., that the temperature limits
+cannot be changed.
+
Sysfs entries
-------------
temp1_input Temperature (RO)
-temp1_min Minimum temperature (RW)
-temp1_max Maximum temperature (RW)
-temp1_crit Critical high temperature (RW)
+temp1_min Minimum temperature (RO or RW)
+temp1_max Maximum temperature (RO or RW)
+temp1_crit Critical high temperature (RO or RW)
-temp1_crit_hyst Critical hysteresis temperature (RW)
+temp1_crit_hyst Critical hysteresis temperature (RO or RW)
temp1_max_hyst Maximum hysteresis temperature (RO)
temp1_min_alarm Temperature low alarm
Socket S1G3: Athlon II, Sempron, Turion II
* AMD Family 11h processors:
Socket S1G2: Athlon (X2), Sempron (X2), Turion X2 (Ultra)
+* AMD Family 12h processors: "Llano"
+* AMD Family 14h processors: "Brazos" (C/E/G-Series)
Prefix: 'k10temp'
Addresses scanned: PCI space
http://support.amd.com/us/Processor_TechDocs/31116.pdf
BIOS and Kernel Developer's Guide (BKDG) for AMD Family 11h Processors:
http://support.amd.com/us/Processor_TechDocs/41256.pdf
+ BIOS and Kernel Developer's Guide (BKDG) for AMD Family 14h Models 00h-0Fh Processors:
+ http://support.amd.com/us/Processor_TechDocs/43170.pdf
Revision Guide for AMD Family 10h Processors:
http://support.amd.com/us/Processor_TechDocs/41322.pdf
Revision Guide for AMD Family 11h Processors:
http://support.amd.com/us/Processor_TechDocs/41788.pdf
+ Revision Guide for AMD Family 14h Models 00h-0Fh Processors:
+ http://support.amd.com/us/Processor_TechDocs/47534.pdf
AMD Family 11h Processor Power and Thermal Data Sheet for Notebooks:
http://support.amd.com/us/Processor_TechDocs/43373.pdf
AMD Family 10h Server and Workstation Processor Power and Thermal Data Sheet:
-----------
This driver permits reading of the internal temperature sensor of AMD
-Family 10h and 11h processors.
+Family 10h/11h/12h/14h processors.
All these processors have a sensor, but on those for Socket F or AM2+,
the sensor may return inconsistent values (erratum 319). The driver
AVR32 AVR32 architecture is enabled.
AX25 Appropriate AX.25 support is enabled.
BLACKFIN Blackfin architecture is enabled.
+ DRM Direct Rendering Management support is enabled.
+ DYNAMIC_DEBUG Build in debug messages and enable them at runtime
EDD BIOS Enhanced Disk Drive Services (EDD) is enabled
EFI EFI Partitioning (GPT) is enabled
EIDE EIDE/ATAPI support is enabled.
- DRM Direct Rendering Management support is enabled.
- DYNAMIC_DEBUG Build in debug messages and enable them at runtime
FB The frame buffer device is enabled.
GCOV GCOV profiling is enabled.
HW Appropriate hardware is enabled.
and is between 256 and 4096 characters. It is defined in the file
./include/asm/setup.h as COMMAND_LINE_SIZE.
+Finally, the [KMG] suffix is commonly described after a number of kernel
+parameter values. These 'K', 'M', and 'G' letters represent the _binary_
+multipliers 'Kilo', 'Mega', and 'Giga', equalling 2^10, 2^20, and 2^30
+bytes respectively. Such letter suffixes can also be entirely omitted.
+
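+For instance, a value written as 64K is interpreted as 65536 bytes, and the
+same parameter could equally be given the plain value 65536 (illustrative
+only).
+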
acpi= [HW,ACPI,X86]
Advanced Configuration and Power Interface
Format:
<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
- crashkernel=nn[KMG]@ss[KMG]
- [KNL] Reserve a chunk of physical memory to
- hold a kernel to switch to with kexec on panic.
+ crashkernel=size[KMG][@offset[KMG]]
+ [KNL] Using kexec, Linux can switch to a 'crash kernel'
+ upon panic. This parameter reserves the physical
+ memory region [offset, offset + size] for that kernel
+ image. If '@offset' is omitted, then a suitable offset
+ is selected automatically. Check
+ Documentation/kdump/kdump.txt for further details.
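+			A minimal illustration (the numbers are example values
+			only): 'crashkernel=128M@16M' reserves 128 MB for the
+			crash kernel starting at physical address 16 MB, while
+			'crashkernel=128M' lets the kernel choose the offset.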
crashkernel=range1:size1[,range2:size2,...][@offset]
[KNL] Same as above, but depends on the memory
in the running system. The syntax of range is
start-[end] where start and end are both
a memory unit (amount[KMG]). See also
- Documentation/kdump/kdump.txt for a example.
+ Documentation/kdump/kdump.txt for an example.
cs89x0_dma= [HW,NET]
Format: <dma>
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
- log_buf_len=n Sets the size of the printk ring buffer, in bytes.
- Format: { n | nk | nM }
- n must be a power of two. The default size
- is set in the kernel config file.
+ log_buf_len=n[KMG] Sets the size of the printk ring buffer,
+ in bytes. n must be a power of two. The default
+ size is set in the kernel config file.
logo.nologo [FB] Disables display of the built-in Linux logo.
This may be used to provide more screen space for
#include <limits.h>
#include <stddef.h>
#include <signal.h>
+#include <pwd.h>
+#include <grp.h>
+
#include <linux/virtio_config.h>
#include <linux/virtio_net.h>
#include <linux/virtio_blk.h>
/*
* We use a private mapping (ie. if we write to the page, it will be
- * copied).
+ * copied). We allocate an extra two pages PROT_NONE to act as guard
+ * pages against read/write attempts that exceed allocated space.
*/
- addr = mmap(NULL, getpagesize() * num,
- PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, fd, 0);
+ addr = mmap(NULL, getpagesize() * (num+2),
+ PROT_NONE, MAP_PRIVATE, fd, 0);
+
if (addr == MAP_FAILED)
err(1, "Mmapping %u pages of /dev/zero", num);
+ if (mprotect(addr + getpagesize(), getpagesize() * num,
+ PROT_READ|PROT_WRITE) == -1)
+ err(1, "mprotect rw %u pages failed", num);
+
/*
* One neat mmap feature is that you can close the fd, and it
* stays mapped.
*/
close(fd);
- return addr;
+ /* Return address after PROT_NONE page */
+ return addr + getpagesize();
}
/* Get some more pages for a device. */
* done to it. This allows us to share untouched memory between
* Guests.
*/
- if (mmap(addr, len, PROT_READ|PROT_WRITE|PROT_EXEC,
+ if (mmap(addr, len, PROT_READ|PROT_WRITE,
MAP_FIXED|MAP_PRIVATE, fd, offset) != MAP_FAILED)
return;
unsigned int line)
{
/*
- * We have to separately check addr and addr+size, because size could
- * be huge and addr + size might wrap around.
+	 * Check if the requested address and size exceed the allocated memory,
+ * or addr + size wraps around.
*/
- if (addr >= guest_limit || addr + size >= guest_limit)
+ if ((addr + size) > guest_limit || (addr + size) < addr)
errx(1, "%s:%i: Invalid address %#lx", __FILE__, line, addr);
/*
* We return a pointer for the caller's convenience, now we know it's
{ "block", 1, NULL, 'b' },
{ "rng", 0, NULL, 'r' },
{ "initrd", 1, NULL, 'i' },
+ { "username", 1, NULL, 'u' },
+ { "chroot", 1, NULL, 'c' },
{ NULL },
};
static void usage(void)
/* If they specify an initrd file to load. */
const char *initrd_name = NULL;
+ /* Password structure for initgroups/setres[gu]id */
+ struct passwd *user_details = NULL;
+
+ /* Directory to chroot to */
+ char *chroot_path = NULL;
+
/* Save the args: we "reboot" by execing ourselves again. */
main_args = argv;
case 'i':
initrd_name = optarg;
break;
+ case 'u':
+ user_details = getpwnam(optarg);
+ if (!user_details)
+ err(1, "getpwnam failed, incorrect username?");
+ break;
+ case 'c':
+ chroot_path = optarg;
+ break;
default:
warnx("Unknown argument %s", argv[optind]);
usage();
/* If we exit via err(), this kills all the threads, restores tty. */
atexit(cleanup_devices);
+ /* If requested, chroot to a directory */
+ if (chroot_path) {
+ if (chroot(chroot_path) != 0)
+ err(1, "chroot(\"%s\") failed", chroot_path);
+
+ if (chdir("/") != 0)
+ err(1, "chdir(\"/\") failed");
+
+ verbose("chroot done\n");
+ }
+
+ /* If requested, drop privileges */
+ if (user_details) {
+ uid_t u;
+ gid_t g;
+
+ u = user_details->pw_uid;
+ g = user_details->pw_gid;
+
+ if (initgroups(user_details->pw_name, g) != 0)
+ err(1, "initgroups failed");
+
+ if (setresgid(g, g, g) != 0)
+ err(1, "setresgid failed");
+
+ if (setresuid(u, u, u) != 0)
+ err(1, "setresuid failed");
+
+ verbose("Dropping privileges completed\n");
+ }
+
/* Finally, run the Guest. This doesn't return. */
run_guest();
}
for general information on how to get bridging to work.
+- Random number generation. Using the --rng option will provide a
+ /dev/hwrng in the guest that will read from the host's /dev/random.
+ Use this option in conjunction with rng-tools (see ../hw_random.txt)
+ to provide entropy to the guest kernel's /dev/random.
+
There is a helpful mailing list at http://ozlabs.org/mailman/listinfo/lguest
Good luck!
# List of programs to build
hostprogs-y := ifenslave
+HOSTCFLAGS_ifenslave.o += -I$(objtree)/usr/include
+
# Tell kbuild to always build the programs
always := $(hostprogs-y)
3.3 Configuring Bonding Manually with Ifenslave
3.3.1 Configuring Multiple Bonds Manually
3.4 Configuring Bonding Manually via Sysfs
-3.5 Overriding Configuration for Special Cases
+3.5 Configuration with Interfaces Support
+3.6 Overriding Configuration for Special Cases
4. Querying Bonding Configuration
4.1 Bonding Configuration
default kernel source include directory.
SECOND IMPORTANT NOTE:
- If you plan to configure bonding using sysfs, you do not need
-to use ifenslave.
+ If you plan to configure bonding using sysfs or using the
+/etc/network/interfaces file, you do not need to use ifenslave.
2. Bonding Driver Options
=========================
You can configure bonding using either your distro's network
initialization scripts, or manually using either ifenslave or the
-sysfs interface. Distros generally use one of two packages for the
-network initialization scripts: initscripts or sysconfig. Recent
-versions of these packages have support for bonding, while older
+sysfs interface. Distros generally use one of three packages for the
+network initialization scripts: initscripts, sysconfig or interfaces.
+Recent versions of these packages have support for bonding, while older
versions do not.
We will first describe the options for configuring bonding for
-distros using versions of initscripts and sysconfig with full or
-partial support for bonding, then provide information on enabling
+distros using versions of initscripts, sysconfig and interfaces with full
+or partial support for bonding, then provide information on enabling
bonding without support from the network initialization scripts (i.e.,
older versions of initscripts or sysconfig).
- If you're unsure whether your distro uses sysconfig or
-initscripts, or don't know if it's new enough, have no fear.
+ If you're unsure whether your distro uses sysconfig,
+initscripts or interfaces, or don't know if it's new enough, have no fear.
Determining this is fairly straightforward.
- First, issue the command:
+	First, look for a file called interfaces in the /etc/network directory.
+If this file is present on your system, then your system uses interfaces. See
+Configuration with Interfaces Support.
+
+ Else, issue the command:
$ rpm -qf /sbin/ifup
echo +eth2 > /sys/class/net/bond1/bonding/slaves
echo +eth3 > /sys/class/net/bond1/bonding/slaves
-3.5 Overriding Configuration for Special Cases
+3.5 Configuration with Interfaces Support
+-----------------------------------------
+
+	This section applies to distros which use the /etc/network/interfaces
+file to describe network interface configuration, most notably Debian and
+its derivatives.
+
+ The ifup and ifdown commands on Debian don't support bonding out of
+the box. The ifenslave-2.6 package should be installed to provide bonding
+support. Once installed, this package will provide bond-* options to be used
+in /etc/network/interfaces.
+
+	Note that the ifenslave-2.6 package will load the bonding module and use
+the ifenslave command when appropriate.
+
+Example Configurations
+----------------------
+
+In /etc/network/interfaces, the following stanza will configure bond0, in
+active-backup mode, with eth0 and eth1 as slaves.
+
+auto bond0
+iface bond0 inet dhcp
+ bond-slaves eth0 eth1
+ bond-mode active-backup
+ bond-miimon 100
+ bond-primary eth0 eth1
+
+If the above configuration doesn't work, you might have a system using
+upstart for system startup. This is most notably true for recent
+Ubuntu versions. The following stanza in /etc/network/interfaces will
+produce the same result on those systems.
+
+auto bond0
+iface bond0 inet dhcp
+ bond-slaves none
+ bond-mode active-backup
+ bond-miimon 100
+
+auto eth0
+iface eth0 inet manual
+ bond-master bond0
+ bond-primary eth0 eth1
+
+auto eth1
+iface eth1 inet manual
+ bond-master bond0
+ bond-primary eth0 eth1
+
+For a full list of bond-* supported options in /etc/network/interfaces and some
+more advanced examples tailored to your particular distro, see the files in
+/usr/share/doc/ifenslave-2.6.
+
+3.6 Overriding Configuration for Special Cases
----------------------------------------------
+
When using the bonding driver, the physical port which transmits a frame is
typically selected by the bonding driver, and is not relevant to the user or
system administrator. The output port is simply selected using the policies of
tcp_dsack - BOOLEAN
Allows TCP to send "duplicate" SACKs.
-tcp_ecn - BOOLEAN
+tcp_ecn - INTEGER
Enable Explicit Congestion Notification (ECN) in TCP. ECN is only
used when both ends of the TCP flow support it. It is useful to
avoid losses due to congestion (when the bottleneck router supports
+Version 15 of schedstats dropped some sched_yield() counters:
+yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
+identical to version 14.
+
Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20 although it is identical to the stats from version
12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
CPU statistics
--------------
-cpu<N> 1 2 3 4 5 6 7 8 9 10 11 12
-
-NOTE: In the sched_yield() statistics, the active queue is considered empty
- if it has only one process in it, since obviously the process calling
- sched_yield() is that process.
+cpu<N> 1 2 3 4 5 6 7 8 9
-First four fields are sched_yield() statistics:
- 1) # of times both the active and the expired queue were empty
- 2) # of times just the active queue was empty
- 3) # of times just the expired queue was empty
- 4) # of times sched_yield() was called
+First field is a sched_yield() statistic:
+ 1) # of times sched_yield() was called
Next three are schedule() statistics:
- 5) # of times we switched to the expired queue and reused it
- 6) # of times schedule() was called
- 7) # of times schedule() left the processor idle
+ 2) # of times we switched to the expired queue and reused it
+ 3) # of times schedule() was called
+ 4) # of times schedule() left the processor idle
Next two are try_to_wake_up() statistics:
- 8) # of times try_to_wake_up() was called
- 9) # of times try_to_wake_up() was called to wake up the local cpu
+ 5) # of times try_to_wake_up() was called
+ 6) # of times try_to_wake_up() was called to wake up the local cpu
Next three are statistics describing scheduling latency:
- 10) sum of all time spent running by tasks on this processor (in jiffies)
- 11) sum of all time spent waiting to run by tasks on this processor (in
+ 7) sum of all time spent running by tasks on this processor (in jiffies)
+ 8) sum of all time spent waiting to run by tasks on this processor (in
jiffies)
- 12) # of timeslices run on this cpu
+ 9) # of timeslices run on this cpu
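
As a quick, illustrative way to pull one of these per-cpu counters out of
/proc/schedstat (field 3 above, the number of times schedule() was called;
the awk invocation is just an example):

    awk '/^cpu/ { print $1, $4 }' /proc/schedstat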
Domain statistics
=============
laptop Basic Laptop config (default)
hp-laptop HP laptops, e g G60
+ asus Asus K52JU, Lenovo G560
dell-laptop Dell laptops
dell-vostro Dell Vostro
olpc-xo-1_5 OLPC XO 1.5
The 'new value' union is not used in g_volatile_ctrl. In general controls
that need to implement g_volatile_ctrl are read-only controls.
+Note that if one or more controls in a control cluster are marked as volatile,
+then all the controls in the cluster are seen as volatile.
+
To mark a control as volatile you have to set the is_volatile flag:
ctrl = v4l2_ctrl_new_std(&sd->ctrl_handler, ...);
Obviously, all controls in the cluster array must be initialized to either
a valid control or to NULL.
+In rare cases you might want to know which controls of a cluster actually
+were set explicitly by the user. For this you can check the 'is_new' flag of
+each control. For example, in the case of a volume/mute cluster the 'is_new'
+flag of the mute control would be set if the user called VIDIOC_S_CTRL for
+mute only. If the user called VIDIOC_S_EXT_CTRLS for both mute and volume
+controls, then the 'is_new' flag would be 1 for both controls.
+
+The 'is_new' flag is always 1 when called from v4l2_ctrl_handler_setup().
+
VIDIOC_LOG_STATUS Support
=========================
* Long running CPU intensive workloads which can be better
managed by the system scheduler.
- WQ_FREEZEABLE
+ WQ_FREEZABLE
- A freezeable wq participates in the freeze phase of the system
+ A freezable wq participates in the freeze phase of the system
suspend operations. Work items on the wq are drained and no
new work item starts execution until thawed.
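
	A minimal sketch of requesting such a queue (the queue name and the
	surrounding init function are illustrative, not part of the text
	above):

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;

	static int __init example_init(void)
	{
		/* Work queued here is drained before the system freezes */
		example_wq = alloc_workqueue("example_wq", WQ_FREEZABLE, 0);
		if (!example_wq)
			return -ENOMEM;
		return 0;
	}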
W: http://serial.sourceforge.net
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6.git
-F: drivers/serial/8250*
+F: drivers/tty/serial/8250*
F: include/linux/serial_8250.h
8390 NETWORK DRIVERS [WD80x3/SMC-ELITE, SMC-ULTRA, NE2000, 3C503, etc.]
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-ARM/ATMEL AT91RM9200 ARM ARCHITECTURE
+ARM/ATMEL AT91RM9200 AND AT91SAM ARM ARCHITECTURES
M: Andrew Victor <linux@maxim.org.za>
+M: Nicolas Ferre <nicolas.ferre@atmel.com>
+M: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://maxim.org.za/at91_26.html
-S: Maintained
+W: http://www.linux4sam.org
+S: Supported
+F: arch/arm/mach-at91/
ARM/BCMRING ARM ARCHITECTURE
M: Jiandong Zheng <jdzheng@broadcom.com>
ARM/QUALCOMM MSM MACHINE SUPPORT
M: David Brown <davidb@codeaurora.org>
-M: Daniel Walker <dwalker@codeaurora.org>
+M: Daniel Walker <dwalker@fifo99.com>
M: Bryan Huntsman <bryanh@codeaurora.org>
L: linux-arm-msm@vger.kernel.org
F: arch/arm/mach-msm/
F: drivers/video/msm/
F: drivers/mmc/host/msm_sdcc.c
F: drivers/mmc/host/msm_sdcc.h
-F: drivers/serial/msm_serial.h
-F: drivers/serial/msm_serial.c
+F: drivers/tty/serial/msm_serial.h
+F: drivers/tty/serial/msm_serial.c
T: git git://codeaurora.org/quic/kernel/davidb/linux-msm.git
S: Maintained
F: arch/arm/plat-samsung/
F: arch/arm/plat-s3c24xx/
F: arch/arm/plat-s5p/
+F: drivers/*/*s3c2410*
+F: drivers/*/*/*s3c2410*
ARM/S3C2410 ARM ARCHITECTURE
M: Ben Dooks <ben-linux@fluff.org>
ATMEL AT91 / AT32 SERIAL DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
-F: drivers/serial/atmel_serial.c
+F: drivers/tty/serial/atmel_serial.c
ATMEL LCDFB DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
L: uclinux-dist-devel@blackfin.uclinux.org
W: http://blackfin.uclinux.org
S: Supported
-F: drivers/serial/bfin_5xx.c
+F: drivers/tty/serial/bfin_5xx.c
BLACKFIN WATCHDOG DRIVER
M: Mike Frysinger <vapier.adi@gmail.com>
W: http://developer.axis.com
S: Maintained
F: arch/cris/
-F: drivers/serial/crisv10.*
+F: drivers/tty/serial/crisv10.*
CRYPTO API
M: Herbert Xu <herbert@gondor.apana.org.au>
F: fs/dlm/
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
+M: Vinod Koul <vinod.koul@intel.com>
M: Dan Williams <dan.j.williams@intel.com>
S: Supported
F: drivers/dma/
DZ DECSTATION DZ11 SERIAL DRIVER
M: "Maciej W. Rozycki" <macro@linux-mips.org>
S: Maintained
-F: drivers/serial/dz.*
+F: drivers/tty/serial/dz.*
EATA-DMA SCSI DRIVER
M: Michael Neuffer <mike@i-Connect.Net>
M: Timur Tabi <timur@freescale.com>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
-F: drivers/serial/ucc_uart.c
+F: drivers/tty/serial/ucc_uart.c
FREESCALE SOC SOUND DRIVERS
M: Timur Tabi <timur@freescale.com>
F: drivers/isdn/gigaset/
F: include/linux/gigaset_dev.h
+GPIO SUBSYSTEM
+M: Grant Likely <grant.likely@secretlab.ca>
+L: linux-kernel@vger.kernel.org
+S: Maintained
+T: git git://git.secretlab.ca/git/linux-2.6.git
+F: Documentation/gpio/gpio.txt
+F: drivers/gpio/
+F: include/linux/gpio*
+
GRETH 10/100/1G Ethernet MAC device driver
M: Kristoffer Glembo <kristoffer@gaisler.com>
L: netdev@vger.kernel.org
L: lm-sensors@lm-sensors.org
W: http://www.lm-sensors.org/
T: quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-hwmon/
-T: quilt kernel.org/pub/linux/kernel/people/groeck/linux-staging/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git
S: Maintained
F: Documentation/hwmon/
F: net/ieee802154/
F: drivers/ieee802154/
+IKANOS/ADI EAGLE ADSL USB DRIVER
+M: Matthieu Castet <castet.matthieu@free.fr>
+M: Stanislaw Gruszka <stf_xl@wp.pl>
+S: Maintained
+F: drivers/usb/atm/ueagle-atm.c
+
INTEGRITY MEASUREMENT ARCHITECTURE (IMA)
M: Mimi Zohar <zohar@us.ibm.com>
S: Supported
F: drivers/video/imsttfb.c
INFINIBAND SUBSYSTEM
-M: Roland Dreier <rolandd@cisco.com>
+M: Roland Dreier <roland@kernel.org>
M: Sean Hefty <sean.hefty@intel.com>
M: Hal Rosenstock <hal.rosenstock@gmail.com>
L: linux-rdma@vger.kernel.org
F: include/linux/wimax/i2400m.h
INTEL WIRELESS WIFI LINK (iwlwifi)
-M: Reinette Chatre <reinette.chatre@intel.com>
M: Wey-Yi Guy <wey-yi.w.guy@intel.com>
M: Intel Linux Wireless <ilw@linux.intel.com>
L: linux-wireless@vger.kernel.org
M: Pat Gefre <pfg@sgi.com>
L: linux-serial@vger.kernel.org
S: Maintained
-F: drivers/serial/ioc3_serial.c
+F: drivers/tty/serial/ioc3_serial.c
IP MASQUERADING
M: Juanjo Ciarlante <jjciarla@raiz.uncu.edu.ar>
M: Breno Leitao <leitao@linux.vnet.ibm.com>
L: linux-serial@vger.kernel.org
S: Maintained
-F: drivers/serial/jsm/
+F: drivers/tty/serial/jsm/
K10TEMP HARDWARE MONITORING DRIVER
M: Clemens Ladisch <clemens@ladisch.de>
F: include/keys/
F: security/keys/
+KEYS-TRUSTED
+M: David Safford <safford@watson.ibm.com>
+M: Mimi Zohar <zohar@us.ibm.com>
+L: linux-security-module@vger.kernel.org
+L: keyrings@linux-nfs.org
+S: Supported
+F: Documentation/keys-trusted-encrypted.txt
+F: include/keys/trusted-type.h
+F: security/keys/trusted.c
+F: security/keys/trusted.h
+
+KEYS-ENCRYPTED
+M: Mimi Zohar <zohar@us.ibm.com>
+M: David Safford <safford@watson.ibm.com>
+L: linux-security-module@vger.kernel.org
+L: keyrings@linux-nfs.org
+S: Supported
+F: Documentation/keys-trusted-encrypted.txt
+F: include/keys/encrypted-type.h
+F: security/keys/encrypted.c
+F: security/keys/encrypted.h
+
KGDB / KDB /debug_core
M: Jason Wessel <jason.wessel@windriver.com>
W: http://kgdb.wiki.kernel.org/
S: Maintained
F: Documentation/DocBook/kgdb.tmpl
F: drivers/misc/kgdbts.c
-F: drivers/serial/kgdboc.c
+F: drivers/tty/serial/kgdboc.c
F: include/linux/kdb.h
F: include/linux/kgdb.h
F: kernel/debug/
OPEN FIRMWARE AND FLATTENED DEVICE TREE
M: Grant Likely <grant.likely@secretlab.ca>
-L: devicetree-discuss@lists.ozlabs.org
+L: devicetree-discuss@lists.ozlabs.org (moderated for non-subscribers)
W: http://fdt.secretlab.ca
T: git git://git.secretlab.ca/git/linux-2.6.git
S: Maintained
F: drivers/scsi/be2iscsi/
SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
-M: Sathya Perla <sathyap@serverengines.com>
-M: Subbu Seetharaman <subbus@serverengines.com>
-M: Sarveshwar Bandi <sarveshwarb@serverengines.com>
-M: Ajit Khaparde <ajitk@serverengines.com>
+M: Sathya Perla <sathya.perla@emulex.com>
+M: Subbu Seetharaman <subbu.seetharaman@emulex.com>
+M: Ajit Khaparde <ajit.khaparde@emulex.com>
L: netdev@vger.kernel.org
-W: http://www.serverengines.com
+W: http://www.emulex.com
S: Supported
F: drivers/net/benet/
L: linux-ia64@vger.kernel.org
S: Supported
F: Documentation/ia64/serial.txt
-F: drivers/serial/ioc?_serial.c
+F: drivers/tty/serial/ioc?_serial.c
F: include/linux/ioc?.h
SGI VISUAL WORKSTATION 320 AND 540
S: Maintained
F: Documentation/arm/Sharp-LH/ADC-LH7-Touchscreen
F: arch/arm/mach-lh7a40x/
-F: drivers/serial/serial_lh7a40x.c
+F: drivers/tty/serial/serial_lh7a40x.c
F: drivers/usb/gadget/lh7a40*
F: drivers/usb/host/ohci-lh7a40*
SIMTEC EB110ATX (Chalice CATS)
P: Ben Dooks
-M: Vincent Sanders <support@simtec.co.uk>
+P: Vincent Sanders <vince@simtec.co.uk>
+M: Simtec Linux Team <linux@simtec.co.uk>
W: http://www.simtec.co.uk/products/EB110ATX/
S: Supported
SIMTEC EB2410ITX (BAST)
P: Ben Dooks
-M: Vincent Sanders <support@simtec.co.uk>
+P: Vincent Sanders <vince@simtec.co.uk>
+M: Simtec Linux Team <linux@simtec.co.uk>
W: http://www.simtec.co.uk/products/EB2410ITX/
S: Supported
-F: arch/arm/mach-s3c2410/
-F: drivers/*/*s3c2410*
-F: drivers/*/*/*s3c2410*
+F: arch/arm/mach-s3c2410/mach-bast.c
+F: arch/arm/mach-s3c2410/bast-ide.c
+F: arch/arm/mach-s3c2410/bast-irq.c
TI DAVINCI MACHINE SUPPORT
M: Kevin Hilman <khilman@deeprootsystems.com>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next-2.6.git
S: Maintained
-F: drivers/serial/suncore.c
-F: drivers/serial/suncore.h
-F: drivers/serial/sunhv.c
-F: drivers/serial/sunsab.c
-F: drivers/serial/sunsab.h
-F: drivers/serial/sunsu.c
-F: drivers/serial/sunzilog.c
-F: drivers/serial/sunzilog.h
+F: drivers/tty/serial/suncore.c
+F: drivers/tty/serial/suncore.h
+F: drivers/tty/serial/sunhv.c
+F: drivers/tty/serial/sunsab.c
+F: drivers/tty/serial/sunsab.h
+F: drivers/tty/serial/sunsu.c
+F: drivers/tty/serial/sunzilog.c
+F: drivers/tty/serial/sunzilog.h
SPEAR PLATFORM SUPPORT
M: Viresh Kumar <viresh.kumar@st.com>
M: Greg Kroah-Hartman <gregkh@suse.de>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6.git
-F: drivers/char/tty_*
-F: drivers/serial/serial_core.c
+F: drivers/tty/*
+F: drivers/tty/serial/serial_core.c
F: include/linux/serial_core.h
F: include/linux/serial.h
F: include/linux/tty.h
F: drivers/char/virtio_console.c
F: include/linux/virtio_console.h
+VIRTIO CORE, NET AND BLOCK DRIVERS
+M: Rusty Russell <rusty@rustcorp.com.au>
+M: "Michael S. Tsirkin" <mst@redhat.com>
+L: virtualization@lists.linux-foundation.org
+S: Maintained
+F: drivers/virtio/
+F: drivers/net/virtio_net.c
+F: drivers/block/virtio_blk.c
+F: include/linux/virtio_*.h
+
VIRTIO HOST (VHOST)
M: "Michael S. Tsirkin" <mst@redhat.com>
L: kvm@vger.kernel.org
F: drivers/net/wireless/wl1251/*
WL1271 WIRELESS DRIVER
-M: Luciano Coelho <luciano.coelho@nokia.com>
+M: Luciano Coelho <coelho@ti.com>
L: linux-wireless@vger.kernel.org
-W: http://wireless.kernel.org
+W: http://wireless.kernel.org/en/users/Drivers/wl12xx
T: git git://git.kernel.org/pub/scm/linux/kernel/git/luca/wl12xx.git
S: Maintained
-F: drivers/net/wireless/wl12xx/wl1271*
+F: drivers/net/wireless/wl12xx/
F: include/linux/wl12xx.h
WL3501 WIRELESS PCMCIA CARD DRIVER
M: Peter Korsgaard <jacmet@sunsite.dk>
L: linux-serial@vger.kernel.org
S: Maintained
-F: drivers/serial/uartlite.c
+F: drivers/tty/serial/uartlite.c
YAM DRIVER FOR AX.25
M: Jean-Paul Roubelat <jpr@f6fbb.org>
ZS DECSTATION Z85C30 SERIAL DRIVER
M: "Maciej W. Rozycki" <macro@linux-mips.org>
S: Maintained
-F: drivers/serial/zs.*
+F: drivers/tty/serial/zs.*
GRE DEMULTIPLEXER DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 38
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc6
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*
select HAVE_IRQ_WORK
select HAVE_PERF_EVENTS
select HAVE_DMA_ATTRS
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
+ select AUTO_IRQ_AFFINITY if SMP
help
The Alpha is a 64-bit general-purpose processor designed and
marketed by the Digital Equipment Corporation of blessed memory,
bool
default n
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
-config GENERIC_HARDIRQS
- bool
- default y
-
-config GENERIC_IRQ_PROBE
- bool
- default y
-
-config AUTO_IRQ_AFFINITY
- bool
- depends on SMP
- default y
-
source "init/Kconfig"
source "kernel/Kconfig.freezer"
visible impact on the overall performance or power consumption of the
processor.
+config ARM_ERRATA_751472
+ bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
+ depends on CPU_V7 && SMP
+ help
+ This option enables the workaround for the 751472 Cortex-A9 (prior
+ to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the
+ completion of a following broadcasted operation if the second
+ operation is received by a CPU before the ICIALLUIS has completed,
+ potentially leading to corrupted entries in the cache or TLB.
+
+config ARM_ERRATA_753970
+ bool "ARM errata: cache sync operation may be faulty"
+ depends on CACHE_PL310
+ help
+ This option enables the workaround for the 753970 PL310 (r3p0) erratum.
+
+	  Under some conditions the effect of the cache sync operation on
+ the store buffer still remains when the operation completes.
+ This means that the store buffer is always asked to drain and
+ this prevents it from merging any further writes. The workaround
+ is to replace the normal offset of cache sync operation (0x730)
+ by another offset targeting an unmapped PL310 register 0x740.
+ This has the same effect as the cache sync operation: store buffer
+ drain and waiting for all buffers empty.
+
endmenu
source "arch/arm/common/Kconfig"
config OABI_COMPAT
bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
- depends on AEABI && EXPERIMENTAL
+ depends on AEABI && EXPERIMENTAL && !THUMB2_KERNEL
default y
help
This option preserves the old syscall interface along with the
LDFLAGS_vmlinux += --be8
endif
-OBJCOPYFLAGS :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
+OBJCOPYFLAGS :=-O binary -R .comment -S
GZFLAGS :=-9
#KBUILD_CFLAGS +=-pipe
# Explicitly specifiy 32-bit ARM ISA since toolchain default can be -mthumb:
font.c
-piggy.gz
+lib1funcs.S
+piggy.gzip
+piggy.lzo
+piggy.lzma
+vmlinux
vmlinux.lds
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_EPOLL is not set
# CONFIG_SHMEM is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
CONFIG_ARCH_SA1100=y
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_HOTPLUG is not set
# CONFIG_ELF_CORE is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_SLUB_DEBUG is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BASE_FULL is not set
# CONFIG_EPOLL is not set
CONFIG_SLOB=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_ARCH_EBSA110=y
CONFIG_PCCARD=m
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_ARCH_CLPS711X=y
CONFIG_ARCH_EDB7211=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_SLUB_DEBUG is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_MODULES=y
CONFIG_ARCH_FOOTBRIDGE=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_ARCH_CLPS711X=y
CONFIG_ARCH_FORTUNET=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_SYSVIPC=y
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=16
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_SLAB=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_RELAY=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SLUB_DEBUG is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_ELF_CORE is not set
# CONFIG_BASE_FULL is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_SLAB=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_NAMESPACES=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_SHMEM is not set
CONFIG_MODULES=y
CONFIG_AUDIT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOCALVERSION="oe1"
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_BUG is not set
# CONFIG_ELF_CORE is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_SLAB=y
CONFIG_MODULES=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_AIO is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_MODULES=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=13
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_ELF_CORE is not set
# CONFIG_SHMEM is not set
CONFIG_SLAB=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SHMEM is not set
# CONFIG_VM_EVENT_COUNTERS is not set
#define L2X0_RAW_INTR_STAT 0x21C
#define L2X0_INTR_CLEAR 0x220
#define L2X0_CACHE_SYNC 0x730
+#define L2X0_DUMMY_REG 0x740
#define L2X0_INV_LINE_PA 0x770
#define L2X0_INV_WAY 0x77C
#define L2X0_CLEAN_LINE_PA 0x7B0
#define SCPCELLID2 0xFF8
#define SCPCELLID3 0xFFC
+#define SCCTRL_TIMEREN0SEL_REFCLK (0 << 15)
+#define SCCTRL_TIMEREN0SEL_TIMCLK (1 << 15)
+
+#define SCCTRL_TIMEREN1SEL_REFCLK (0 << 17)
+#define SCCTRL_TIMEREN1SEL_TIMCLK (1 << 17)
+
static inline void sysctl_soft_reset(void __iomem *base)
{
+ /* switch to slow mode */
+ writel(0x2, base + SCCTRL);
+
/* writing any value to SCSYSSTAT reg will reset system */
writel(0, base + SCSYSSTAT);
}
return (void __iomem *)addr;
}
+/* IO barriers */
+#ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
+#define __iormb() rmb()
+#define __iowmb() wmb()
+#else
+#define __iormb() do { } while (0)
+#define __iowmb() do { } while (0)
+#endif
+
/*
* Now, pick up the machine-defined IO definitions
*/
* The {in,out}[bwl] macros are for emulating x86-style PCI/ISA IO space.
*/
#ifdef __io
-#define outb(v,p) __raw_writeb(v,__io(p))
-#define outw(v,p) __raw_writew((__force __u16) \
- cpu_to_le16(v),__io(p))
-#define outl(v,p) __raw_writel((__force __u32) \
- cpu_to_le32(v),__io(p))
+#define outb(v,p) ({ __iowmb(); __raw_writeb(v,__io(p)); })
+#define outw(v,p) ({ __iowmb(); __raw_writew((__force __u16) \
+ cpu_to_le16(v),__io(p)); })
+#define outl(v,p) ({ __iowmb(); __raw_writel((__force __u32) \
+ cpu_to_le32(v),__io(p)); })
-#define inb(p) ({ __u8 __v = __raw_readb(__io(p)); __v; })
+#define inb(p) ({ __u8 __v = __raw_readb(__io(p)); __iormb(); __v; })
#define inw(p) ({ __u16 __v = le16_to_cpu((__force __le16) \
- __raw_readw(__io(p))); __v; })
+ __raw_readw(__io(p))); __iormb(); __v; })
#define inl(p) ({ __u32 __v = le32_to_cpu((__force __le32) \
- __raw_readl(__io(p))); __v; })
+ __raw_readl(__io(p))); __iormb(); __v; })
#define outsb(p,d,l) __raw_writesb(__io(p),d,l)
#define outsw(p,d,l) __raw_writesw(__io(p),d,l)
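For illustration only (not part of the patch), the effect of folding __iowmb()/__iormb() into the PIO accessors above on a typical driver sequence; DOORBELL_PORT, STATUS_PORT and buf are made-up names:

/*
 *	buf->cmd = CMD_START;		ordinary memory write
 *	outb(1, DOORBELL_PORT);		__iowmb() makes buf->cmd visible
 *					before the I/O write that kicks the device
 *	status = inb(STATUS_PORT);	__iormb() orders the I/O read before any
 *					later reads of memory the device may have
 *					written
 */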
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32) \
cpu_to_le32(v),__mem_pci(c)))
-#ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
-#define __iormb() rmb()
-#define __iowmb() wmb()
-#else
-#define __iormb() do { } while (0)
-#define __iowmb() do { } while (0)
-#endif
-
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; })
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
* translation for translating DMA addresses. Use the driver
* DMA support - see dma-mapping.h.
*/
-static inline unsigned long virt_to_phys(void *x)
+static inline unsigned long virt_to_phys(const volatile void *x)
{
return __virt_to_phys((unsigned long)(x));
}
#define __ASMARM_TLB_H
#include <asm/cacheflush.h>
-#include <asm/tlbflush.h>
#ifndef CONFIG_MMU
#include <linux/pagemap.h>
+
+#define tlb_flush(tlb) ((void) tlb)
+
#include <asm-generic/tlb.h>
#else /* !CONFIG_MMU */
+#include <linux/swap.h>
#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * We need to delay page freeing for SMP as other CPUs can access pages
+ * which have been removed but not yet had their TLB entries invalidated.
+ * Also, as ARMv7 speculative prefetch can drag new entries into the TLB,
+ * we need to apply this same delaying tactic to ensure correct operation.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_CPU_32v7)
+#define tlb_fast_mode(tlb) 0
+#define FREE_PTE_NR 500
+#else
+#define tlb_fast_mode(tlb) 1
+#define FREE_PTE_NR 0
+#endif
/*
* TLB handling. This allows us to remove pages from the page
struct mmu_gather {
struct mm_struct *mm;
unsigned int fullmm;
+ struct vm_area_struct *vma;
unsigned long range_start;
unsigned long range_end;
+ unsigned int nr;
+ struct page *pages[FREE_PTE_NR];
};
DECLARE_PER_CPU(struct mmu_gather, mmu_gathers);
+/*
+ * This is unnecessarily complex. There's three ways the TLB shootdown
+ * code is used:
+ * 1. Unmapping a range of vmas. See zap_page_range(), unmap_region().
+ * tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
+ * tlb->vma will be non-NULL.
+ * 2. Unmapping all vmas. See exit_mmap().
+ * tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
+ * tlb->vma will be non-NULL. Additionally, page tables will be freed.
+ * 3. Unmapping argument pages. See shift_arg_pages().
+ * tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
+ * tlb->vma will be NULL.
+ */
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+ if (tlb->fullmm || !tlb->vma)
+ flush_tlb_mm(tlb->mm);
+ else if (tlb->range_end > 0) {
+ flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
+ tlb->range_start = TASK_SIZE;
+ tlb->range_end = 0;
+ }
+}
+
+static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
+{
+ if (!tlb->fullmm) {
+ if (addr < tlb->range_start)
+ tlb->range_start = addr;
+ if (addr + PAGE_SIZE > tlb->range_end)
+ tlb->range_end = addr + PAGE_SIZE;
+ }
+}
+
+static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+ tlb_flush(tlb);
+ if (!tlb_fast_mode(tlb)) {
+ free_pages_and_swap_cache(tlb->pages, tlb->nr);
+ tlb->nr = 0;
+ }
+}
+
static inline struct mmu_gather *
tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
{
tlb->mm = mm;
tlb->fullmm = full_mm_flush;
+ tlb->vma = NULL;
+ tlb->nr = 0;
return tlb;
}
static inline void
tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
- if (tlb->fullmm)
- flush_tlb_mm(tlb->mm);
+ tlb_flush_mmu(tlb);
/* keep the page table cache within bounds */
check_pgt_cache();
static inline void
tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
{
- if (!tlb->fullmm) {
- if (addr < tlb->range_start)
- tlb->range_start = addr;
- if (addr + PAGE_SIZE > tlb->range_end)
- tlb->range_end = addr + PAGE_SIZE;
- }
+ tlb_add_flush(tlb, addr);
}
/*
{
if (!tlb->fullmm) {
flush_cache_range(vma, vma->vm_start, vma->vm_end);
+ tlb->vma = vma;
tlb->range_start = TASK_SIZE;
tlb->range_end = 0;
}
static inline void
tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
- if (!tlb->fullmm && tlb->range_end > 0)
- flush_tlb_range(vma, tlb->range_start, tlb->range_end);
+ if (!tlb->fullmm)
+ tlb_flush(tlb);
+}
+
+static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
+{
+ if (tlb_fast_mode(tlb)) {
+ free_page_and_swap_cache(page);
+ } else {
+ tlb->pages[tlb->nr++] = page;
+ if (tlb->nr >= FREE_PTE_NR)
+ tlb_flush_mmu(tlb);
+ }
+}
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
+ unsigned long addr)
+{
+ pgtable_page_dtor(pte);
+ tlb_add_flush(tlb, addr);
+ tlb_remove_page(tlb, pte);
}
-#define tlb_remove_page(tlb,page) free_page_and_swap_cache(page)
-#define pte_free_tlb(tlb, ptep, addr) pte_free((tlb)->mm, ptep)
+#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
#define pmd_free_tlb(tlb, pmdp, addr) pmd_free((tlb)->mm, pmdp)
#define tlb_migrate_finish(mm) do { } while (0)
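A minimal sketch (assumption, not part of the patch) of how generic mm code drives the mmu_gather interface defined above when tearing down a single VMA; the helper name and call sites are simplified, the real callers live in mm/memory.c:

static void example_teardown_vma(struct mm_struct *mm, struct vm_area_struct *vma,
				 unsigned long addr, pte_t *ptep, struct page *page)
{
	/* not a full-mm teardown, so ranged flushes are used */
	struct mmu_gather *tlb = tlb_gather_mmu(mm, 0);

	tlb_start_vma(tlb, vma);		/* flush caches, remember tlb->vma */
	tlb_remove_tlb_entry(tlb, ptep, addr);	/* grow range_start/range_end */
	tlb_remove_page(tlb, page);		/* freed now, or batched in tlb->pages */
	tlb_end_vma(tlb, vma);			/* flush_tlb_range() over the range */

	tlb_finish_mmu(tlb, vma->vm_start, vma->vm_end); /* drain any batched pages */
}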
#ifndef _ASMARM_TLBFLUSH_H
#define _ASMARM_TLBFLUSH_H
-
-#ifndef CONFIG_MMU
-
-#define tlb_flush(tlb) ((void) tlb)
-
-#else /* CONFIG_MMU */
+#ifdef CONFIG_MMU
#include <asm/glue.h>
#ifdef CONFIG_SMP_ON_UP
+ __INIT
__fixup_smp:
- mov r4, #0x00070000
- orr r3, r4, #0xff000000 @ mask 0xff070000
- orr r4, r4, #0x41000000 @ val 0x41070000
- and r0, r9, r3
- teq r0, r4 @ ARM CPU and ARMv6/v7?
+ and r3, r9, #0x000f0000 @ architecture version
+ teq r3, #0x000f0000 @ CPU ID supported?
bne __fixup_smp_on_up @ no, assume UP
- orr r3, r3, #0x0000ff00
- orr r3, r3, #0x000000f0 @ mask 0xff07fff0
+ bic r3, r9, #0x00ff0000
+ bic r3, r3, #0x0000000f @ mask 0xff00fff0
+ mov r4, #0x41000000
orr r4, r4, #0x0000b000
- orr r4, r4, #0x00000020 @ val 0x4107b020
- and r0, r9, r3
- teq r0, r4 @ ARM 11MPCore?
+ orr r4, r4, #0x00000020 @ val 0x4100b020
+ teq r3, r4 @ ARM 11MPCore?
moveq pc, lr @ yes, assume SMP
mrc p15, 0, r0, c0, c0, 5 @ read MPIDR
- tst r0, #1 << 31
- movne pc, lr @ bit 31 => SMP
+ and r0, r0, #0xc0000000 @ multiprocessing extensions and
+ teq r0, #0x80000000 @ not part of a uniprocessor system?
+ moveq pc, lr @ yes, assume SMP
__fixup_smp_on_up:
adr r0, 1f
sub r3, r0, r3
add r4, r4, r3
add r5, r5, r3
-2: cmp r4, r5
- movhs pc, lr
- ldmia r4!, {r0, r6}
- ARM( str r6, [r0, r3] )
- THUMB( add r0, r0, r3 )
-#ifdef __ARMEB__
- THUMB( mov r6, r6, ror #16 ) @ Convert word order for big-endian.
-#endif
- THUMB( strh r6, [r0], #2 ) @ For Thumb-2, store as two halfwords
- THUMB( mov r6, r6, lsr #16 ) @ to be robust against misaligned r3.
- THUMB( strh r6, [r0] )
- b 2b
+ b __do_fixup_smp_on_up
ENDPROC(__fixup_smp)
.align
ALT_SMP(.long 1)
ALT_UP(.long 0)
.popsection
+#endif
+ .text
+__do_fixup_smp_on_up:
+ cmp r4, r5
+ movhs pc, lr
+ ldmia r4!, {r0, r6}
+ ARM( str r6, [r0, r3] )
+ THUMB( add r0, r0, r3 )
+#ifdef __ARMEB__
+ THUMB( mov r6, r6, ror #16 ) @ Convert word order for big-endian.
#endif
+ THUMB( strh r6, [r0], #2 ) @ For Thumb-2, store as two halfwords
+ THUMB( mov r6, r6, lsr #16 ) @ to be robust against misaligned r3.
+ THUMB( strh r6, [r0] )
+ b __do_fixup_smp_on_up
+ENDPROC(__do_fixup_smp_on_up)
+
+ENTRY(fixup_smp)
+ stmfd sp!, {r4 - r6, lr}
+ mov r4, r0
+ add r5, r0, r1
+ mov r3, #0
+ bl __do_fixup_smp_on_up
+ ldmfd sp!, {r4 - r6, pc}
+ENDPROC(fixup_smp)
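A C-style restatement of the reworked __fixup_smp decision above (illustration only; the pseudocode names are assumptions, and r9 is assumed to hold the MIDR as elsewhere in head.S):

/*
 *	if ((midr & 0x000f0000) != 0x000f0000)	  // no CPUID feature scheme
 *		goto fixup_to_up;
 *	if ((midr & ~0x00ff000f) == 0x4100b020)	  // ARM 11MPCore, any revision
 *		return;				  // keep the SMP code
 *	mpidr = read MPIDR;
 *	if ((mpidr & 0xc0000000) == 0x80000000)	  // MP extensions present and
 *		return;				  // not flagged uniprocessor
 *	fixup_to_up:
 *		patch each ALT_SMP() instruction to its ALT_UP() alternative;
 */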
#include "head-common.S"
u32 didr;
/* Do we implement the extended CPUID interface? */
- if (((read_cpuid_id() >> 16) & 0xf) != 0xf) {
- pr_warning("CPUID feature registers not supported. "
- "Assuming v6 debug is present.\n");
+ if (WARN_ONCE((((read_cpuid_id() >> 16) & 0xf) != 0xf),
+ "CPUID feature registers not supported. "
+ "Assuming v6 debug is present.\n"))
return ARM_DEBUG_ARCH_V6;
- }
ARM_DBG_READ(c0, 0, didr);
return (didr >> 16) & 0xf;
return debug_arch;
}
+static int debug_arch_supported(void)
+{
+ u8 arch = get_debug_arch();
+ return arch >= ARM_DEBUG_ARCH_V6 && arch <= ARM_DEBUG_ARCH_V7_ECP14;
+}
+
/* Determine number of BRP register available. */
static int get_num_brp_resources(void)
{
int hw_breakpoint_slots(int type)
{
+ if (!debug_arch_supported())
+ return 0;
+
/*
* We can be called early, so don't rely on
* our static variables being initialised.
/*
* v7 debug contains save and restore registers so that debug state
- * can be maintained across low-power modes without leaving
- * the debug logic powered up. It is IMPLEMENTATION DEFINED whether
- * we can write to the debug registers out of reset, so we must
- * unlock the OS Lock Access Register to avoid taking undefined
- * instruction exceptions later on.
+ * can be maintained across low-power modes without leaving the debug
+ * logic powered up. It is IMPLEMENTATION DEFINED whether we can access
+ * the debug registers out of reset, so we must unlock the OS Lock
+ * Access Register to avoid taking undefined instruction exceptions
+ * later on.
*/
if (debug_arch >= ARM_DEBUG_ARCH_V7_ECP14) {
/*
debug_arch = get_debug_arch();
- if (debug_arch > ARM_DEBUG_ARCH_V7_ECP14) {
+ if (!debug_arch_supported()) {
pr_info("debug architecture 0x%x unsupported.\n", debug_arch);
return 0;
}
pr_info("%d breakpoint(s) reserved for watchpoint "
"single-step.\n", core_num_reserved_brps);
+ /*
+ * Reset the breakpoint resources. We assume that a halting
+ * debugger will leave the world in a nice state for us.
+ */
+ on_each_cpu(reset_ctrl_regs, NULL, 1);
+
ARM_DBG_READ(c1, 0, dscr);
if (dscr & ARM_DSCR_HDBGEN) {
+ max_watchpoint_len = 4;
pr_warning("halting debug mode enabled. Assuming maximum "
- "watchpoint size of 4 bytes.");
+ "watchpoint size of %u bytes.", max_watchpoint_len);
} else {
- /*
- * Reset the breakpoint resources. We assume that a halting
- * debugger will leave the world in a nice state for us.
- */
- smp_call_function(reset_ctrl_regs, NULL, 1);
- reset_ctrl_regs(NULL);
-
/* Work out the maximum supported watchpoint length. */
max_watchpoint_len = get_max_wp_len();
pr_info("maximum watchpoint size is %u bytes.\n",
return space_cccc_1100_010x(insn, asi);
- } else if ((insn & 0x0e000000) == 0x0c400000) {
+ } else if ((insn & 0x0e000000) == 0x0c000000) {
return space_cccc_110x(insn, asi);
#include <asm/pgtable.h>
#include <asm/sections.h>
+#include <asm/smp_plat.h>
#include <asm/unwind.h>
#ifdef CONFIG_XIP_KERNEL
const Elf_Shdr *txt_sec;
};
+static const Elf_Shdr *find_mod_section(const Elf32_Ehdr *hdr,
+ const Elf_Shdr *sechdrs, const char *name)
+{
+ const Elf_Shdr *s, *se;
+ const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++)
+ if (strcmp(name, secstrs + s->sh_name) == 0)
+ return s;
+
+ return NULL;
+}
+
+extern void fixup_smp(const void *, unsigned long);
+
int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *mod)
{
+ const Elf_Shdr * __maybe_unused s = NULL;
#ifdef CONFIG_ARM_UNWIND
const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
- const Elf_Shdr *s, *sechdrs_end = sechdrs + hdr->e_shnum;
+ const Elf_Shdr *sechdrs_end = sechdrs + hdr->e_shnum;
struct mod_unwind_map maps[ARM_SEC_MAX];
int i;
maps[i].txt_sec->sh_addr,
maps[i].txt_sec->sh_size);
#endif
+ s = find_mod_section(hdr, sechdrs, ".alt.smp.init");
+ if (s && !is_smp())
+ fixup_smp((void *)s->sh_addr, s->sh_size);
return 0;
}
* Frame pointers should strictly progress back up the stack
* (towards higher addresses).
*/
- if (tail >= buftail.fp)
+ if (tail + 1 >= buftail.fp)
return NULL;
return buftail.fp - 1;
irq, cpu);
return err;
#else
- return 0;
+ return -EINVAL;
#endif
}
static int
init_cpu_pmu(void)
{
- int i, err = 0;
+ int i, irqs, err = 0;
struct platform_device *pdev = pmu_devices[ARM_PMU_DEVICE_CPU];
- if (!pdev) {
- err = -ENODEV;
- goto out;
- }
+ if (!pdev)
+ return -ENODEV;
+
+ irqs = pdev->num_resources;
+
+ /*
+ * If we have a single PMU interrupt that we can't shift, assume that
+ * we're running on a uniprocessor machine and continue.
+ */
+ if (irqs == 1 && !irq_can_set_affinity(platform_get_irq(pdev, 0)))
+ return 0;
- for (i = 0; i < pdev->num_resources; ++i) {
+ for (i = 0; i < irqs; ++i) {
err = set_irq_affinity(platform_get_irq(pdev, i), i);
if (err)
break;
}
-out:
return err;
}
* Register 0 and check for VMSAv7 or PMSAv7 */
asm("mrc p15, 0, %0, c0, c1, 4"
: "=r" (mmfr0));
- if ((mmfr0 & 0x0000000f) == 0x00000003 ||
- (mmfr0 & 0x000000f0) == 0x00000030)
+ if ((mmfr0 & 0x0000000f) >= 0x00000003 ||
+ (mmfr0 & 0x000000f0) >= 0x00000030)
cpu_arch = CPU_ARCH_ARMv7;
else if ((mmfr0 & 0x0000000f) == 0x00000002 ||
(mmfr0 & 0x000000f0) == 0x00000020)
unsigned long handler = (unsigned long)ka->sa.sa_handler;
unsigned long retcode;
int thumb = 0;
- unsigned long cpsr = regs->ARM_cpsr & ~PSR_f;
+ unsigned long cpsr = regs->ARM_cpsr & ~(PSR_f | PSR_E_BIT);
+
+ cpsr |= PSR_ENDSTATE;
/*
* Maybe we need to deliver a 32-bit signal to a 26-bit task.
/* timer load already set up */
ctrl = TWD_TIMER_CONTROL_ENABLE | TWD_TIMER_CONTROL_IT_ENABLE
| TWD_TIMER_CONTROL_PERIODIC;
+ __raw_writel(twd_timer_rate / HZ, twd_base + TWD_TIMER_LOAD);
break;
case CLOCK_EVT_MODE_ONESHOT:
/* period set, and timer enabled in 'next_event' hook */
static void __cpuinit twd_calibrate_rate(void)
{
- unsigned long load, count;
+ unsigned long count;
u64 waitjiffies;
/*
printk("%lu.%02luMHz.\n", twd_timer_rate / 1000000,
(twd_timer_rate / 1000000) % 100);
}
-
- load = twd_timer_rate / HZ;
-
- __raw_writel(load, twd_base + TWD_TIMER_LOAD);
}
/*
#define ARM_CPU_KEEP(x)
#endif
+#if defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)
+#define ARM_EXIT_KEEP(x) x
+#else
+#define ARM_EXIT_KEEP(x)
+#endif
+
OUTPUT_ARCH(arm)
ENTRY(stext)
_sinittext = .;
HEAD_TEXT
INIT_TEXT
+ ARM_EXIT_KEEP(EXIT_TEXT)
_einittext = .;
ARM_CPU_DISCARD(PROC_INFO)
__arch_info_begin = .;
#ifndef CONFIG_XIP_KERNEL
__init_begin = _stext;
INIT_DATA
+ ARM_EXIT_KEEP(EXIT_DATA)
#endif
}
. = ALIGN(PAGE_SIZE);
__init_begin = .;
INIT_DATA
+ ARM_EXIT_KEEP(EXIT_DATA)
. = ALIGN(PAGE_SIZE);
__init_end = .;
#endif
}
#endif
+ NOTES
+
BSS_SECTION(0, 0, 0)
_end = .;
static struct resource ep93xx_ac97_resources[] = {
{
.start = EP93XX_AAC_PHYS_BASE,
- .end = EP93XX_AAC_PHYS_BASE + 0xb0 - 1,
+ .end = EP93XX_AAC_PHYS_BASE + 0xac - 1,
.flags = IORESOURCE_MEM,
},
{
{
int i;
+ /* Set Ports C, D, E, G, and H for GPIO use */
+ ep93xx_devcfg_set_bits(EP93XX_SYSCON_DEVCFG_KEYS |
+ EP93XX_SYSCON_DEVCFG_GONK |
+ EP93XX_SYSCON_DEVCFG_EONIDE |
+ EP93XX_SYSCON_DEVCFG_GONIDE |
+ EP93XX_SYSCON_DEVCFG_HONIDE);
+
for (i = 0; i < ARRAY_SIZE(ep93xx_gpio_banks); i++)
gpiochip_add(&ep93xx_gpio_banks[i].chip);
}
/* For NetWinder debugging */
.macro addruart, rp, rv
mov \rp, #0x000003f8
- orr \rv, \rp, #0x7c000000 @ physical
- orr \rp, \rp, #0xff000000 @ virtual
+ orr \rv, \rp, #0xff000000 @ virtual
+ orr \rp, \rp, #0x7c000000 @ physical
.endm
#define UART_SHIFT 0
KEY(3, 3, KEY_POWER),
};
-static const struct matrix_keymap_data mx25pdk_keymap_data __initdata = {
+static const struct matrix_keymap_data mx25pdk_keymap_data __initconst = {
.keymap = mx25pdk_keymap,
.keymap_size = ARRAY_SIZE(mx25pdk_keymap),
};
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-unsigned long ixp4xx_timer_freq = FREQ;
+unsigned long ixp4xx_timer_freq = IXP4XX_TIMER_FREQ;
EXPORT_SYMBOL(ixp4xx_timer_freq);
static void __init ixp4xx_clocksource_init(void)
{
static void __init ixp4xx_clockevent_init(void)
{
- clockevent_ixp4xx.mult = div_sc(FREQ, NSEC_PER_SEC,
+ clockevent_ixp4xx.mult = div_sc(IXP4XX_TIMER_FREQ, NSEC_PER_SEC,
clockevent_ixp4xx.shift);
clockevent_ixp4xx.max_delta_ns =
clockevent_delta2ns(0xfffffffe, &clockevent_ixp4xx);
 * 66.66... MHz. We do a convoluted calculation of CLOCK_TICK_RATE b/c the
* timer register ignores the bottom 2 bits of the LATCH value.
*/
-#define FREQ 66666000
-#define CLOCK_TICK_RATE (((FREQ / HZ & ~IXP4XX_OST_RELOAD_MASK) + 1) * HZ)
+#define IXP4XX_TIMER_FREQ 66666000
+#define CLOCK_TICK_RATE \
+ (((IXP4XX_TIMER_FREQ / HZ & ~IXP4XX_OST_RELOAD_MASK) + 1) * HZ)
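A worked example of that calculation (illustrative; assumes HZ == 100 and IXP4XX_OST_RELOAD_MASK == 0x3, i.e. the timer ignores the low two bits of the latch value):

/*
 *	IXP4XX_TIMER_FREQ / HZ			= 66666000 / 100 = 666660
 *	  ... & ~IXP4XX_OST_RELOAD_MASK		= 666660 & ~0x3  = 666660
 *	  (... + 1) * HZ			= 666661 * 100   = 66666100
 *
 * so CLOCK_TICK_RATE comes out as 66666100 rather than the raw 66666000,
 * compensating for the bits the timer reload register discards.
 */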
qmgr_queue_descs[queue], queue);
qmgr_queue_descs[queue][0] = '\x0';
#endif
+
+ while ((addr = qmgr_get_entry(queue)))
+ printk(KERN_ERR "qmgr: released queue %i not empty: 0x%08X\n",
+ queue, addr);
+
__raw_writel(0, &qmgr_regs->sram[queue]);
used_sram_bitmap[0] &= ~mask[0];
spin_unlock_irq(&qmgr_lock);
module_put(THIS_MODULE);
-
- while ((addr = qmgr_get_entry(queue)))
- printk(KERN_ERR "qmgr: released queue %i not empty: 0x%08X\n",
- queue, addr);
}
static int qmgr_init(void)
* at run-time: they vary from board to board, and the true
* configuration won't be known until boot.
*/
-static struct resource smc91x_resources[] __initdata = {
+static struct resource smc91x_resources[] = {
[0] = {
.flags = IORESOURCE_MEM,
},
},
};
-static struct platform_device smc91x_device __initdata = {
+static struct platform_device smc91x_device = {
.name = "smc91x",
.id = 0,
.num_resources = ARRAY_SIZE(smc91x_resources),
reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \
reg &= ~BM_CLKCTRL_##dr##_DIV; \
reg |= div << BP_CLKCTRL_##dr##_DIV; \
- if (reg | (1 << clk->enable_shift)) { \
+ if (reg & (1 << clk->enable_shift)) { \
pr_err("%s: clock is gated\n", __func__); \
return -EINVAL; \
} \
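The one-character change above is easy to miss; a sketch of why the old test was wrong (register values are made up for illustration):

/*
 *	reg = 0x00000100, clk->enable_shift = 31:
 *	  reg | (1 << 31) = 0x80000100	-> always non-zero, so every set_rate
 *					   call reported "clock is gated"
 *	  reg & (1 << 31) = 0x00000000	-> correctly reports the gate bit clear
 */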
{ \
if (parent != clk->parent) { \
__raw_writel(BM_CLKCTRL_CLKSEQ_BYPASS_##bit, \
- HW_CLKCTRL_CLKSEQ_TOG); \
+ CLKCTRL_BASE_ADDR + HW_CLKCTRL_CLKSEQ_TOG); \
clk->parent = parent; \
} \
\
} else { \
reg &= ~BM_CLKCTRL_##dr##_DIV; \
reg |= div << BP_CLKCTRL_##dr##_DIV; \
- if (reg | (1 << clk->enable_shift)) { \
+ if (reg & (1 << clk->enable_shift)) { \
pr_err("%s: clock is gated\n", __func__); \
return -EINVAL; \
} \
} \
- __raw_writel(reg, CLKCTRL_BASE_ADDR + HW_CLKCTRL_CPU); \
+ __raw_writel(reg, CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \
\
for (i = 10000; i; i--) \
if (!(__raw_readl(CLKCTRL_BASE_ADDR + \
{ \
if (parent != clk->parent) { \
__raw_writel(BM_CLKCTRL_CLKSEQ_BYPASS_##bit, \
- HW_CLKCTRL_CLKSEQ_TOG); \
+ CLKCTRL_BASE_ADDR + HW_CLKCTRL_CLKSEQ_TOG); \
clk->parent = parent; \
} \
\
_REGISTER_CLOCK("duart", NULL, uart_clk)
_REGISTER_CLOCK("imx28-fec.0", NULL, fec_clk)
_REGISTER_CLOCK("imx28-fec.1", NULL, fec_clk)
- _REGISTER_CLOCK("fec.0", NULL, fec_clk)
_REGISTER_CLOCK("rtc", NULL, rtc_clk)
_REGISTER_CLOCK("pll2", NULL, pll2_clk)
_REGISTER_CLOCK(NULL, "hclk", hbus_clk)
if (clk->disable)
clk->disable(clk);
__clk_disable(clk->parent);
- __clk_disable(clk->secondary);
}
}
if (clk->usecount++ == 0) {
__clk_enable(clk->parent);
- __clk_enable(clk->secondary);
if (clk->enable)
clk->enable(clk);
struct mxs_gpio_port *port = (struct mxs_gpio_port *)get_irq_data(irq);
u32 gpio_irq_no_base = port->virtual_irq_start;
+ desc->irq_data.chip->irq_ack(&desc->irq_data);
+
irq_stat = __raw_readl(port->base + PINCTRL_IRQSTAT(port->id)) &
__raw_readl(port->base + PINCTRL_IRQEN(port->id));
int id;
/* Source clock this clk depends on */
struct clk *parent;
- /* Secondary clock to enable/disable with this clock */
- struct clk *secondary;
/* Reference count of clock enable/disable */
__s8 usecount;
/* Register bit position for clock's enable/disable control. */
depends on ARCH_OMAP1
bool "OMAP730 Based System"
select CPU_ARM926T
+ select OMAP_MPU_TIMER
select ARCH_OMAP_OTG
config ARCH_OMAP850
default y
bool "OMAP15xx Based System"
select CPU_ARM925T
+ select OMAP_MPU_TIMER
config ARCH_OMAP16XX
depends on ARCH_OMAP1
#
# Common support
-obj-y := io.o id.o sram.o irq.o mux.o flash.o serial.o devices.o dma.o
+obj-y := io.o id.o sram.o time.o irq.o mux.o flash.o serial.o devices.o dma.o
obj-y += clock.o clock_data.o opp_data.o
obj-$(CONFIG_OMAP_MCBSP) += mcbsp.o
-obj-$(CONFIG_OMAP_MPU_TIMER) += time.o
obj-$(CONFIG_OMAP_32K_TIMER) += timer32k.o
# Power Management
#include <mach/irqs.h>
#include <asm/hardware/gic.h>
-/*
- * We use __glue to avoid errors with multiple definitions of
- * .globl omap_irq_flags as it's included from entry-armv.S but not
- * from entry-common.S.
- */
-#ifdef __glue
- .pushsection .data
- .globl omap_irq_flags
-omap_irq_flags:
- .word 0
- .popsection
-#endif
-
.macro disable_fiq
.endm
unsigned long wake_enable;
};
+u32 omap_irq_flags;
static unsigned int irq_bank_count;
static struct omap_irq_bank *irq_banks;
void __init omap_init_irq(void)
{
- extern unsigned int omap_irq_flags;
int i, j;
#if defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP850)
* On OMAP1510, internal LCD controller will start the transfer
* when it gets enabled, so assume DMA running if LCD enabled.
*/
- if (cpu_is_omap1510())
+ if (cpu_is_omap15xx())
if (omap_readw(OMAP_LCDC_CONTROL) & OMAP_LCDC_CTRL_LCD_EN)
return 1;
void omap_set_lcd_dma_b1_rotation(int rotate)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA rotation is not supported in 1510 mode\n");
BUG();
return;
void omap_set_lcd_dma_b1_mirror(int mirror)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA mirror is not supported in 1510 mode\n");
BUG();
}
void omap_set_lcd_dma_b1_vxres(unsigned long vxres)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA virtual resulotion is not supported "
"in 1510 mode\n");
BUG();
void omap_set_lcd_dma_b1_scale(unsigned int xscale, unsigned int yscale)
{
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
printk(KERN_ERR "DMA scale is not supported in 1510 mode\n");
BUG();
}
bottom = PIXADDR(lcd_dma.xres - 1, lcd_dma.yres - 1);
/* 1510 DMA requires the bottom address to be 2 more
* than the actual last memory access location. */
- if (cpu_is_omap1510() &&
+ if (cpu_is_omap15xx() &&
lcd_dma.data_type == OMAP_DMA_DATA_TYPE_S32)
bottom += 2;
ei = PIXSTEP(0, 0, 1, 0);
return; /* Suppress warning about uninitialized vars */
}
- if (cpu_is_omap1510()) {
+ if (cpu_is_omap15xx()) {
omap_writew(top >> 16, OMAP1510_DMA_LCD_TOP_F1_U);
omap_writew(top, OMAP1510_DMA_LCD_TOP_F1_L);
omap_writew(bottom >> 16, OMAP1510_DMA_LCD_BOT_F1_U);
BUG();
return;
}
- if (!cpu_is_omap1510())
+ if (!cpu_is_omap15xx())
omap_writew(omap_readw(OMAP1610_DMA_LCD_CCR) & ~1,
OMAP1610_DMA_LCD_CCR);
lcd_dma.reserved = 0;
* connected. Otherwise the OMAP internal controller will
* start the transfer when it gets enabled.
*/
- if (cpu_is_omap1510() || !lcd_dma.ext_ctrl)
+ if (cpu_is_omap15xx() || !lcd_dma.ext_ctrl)
return;
w = omap_readw(OMAP1610_DMA_LCD_CTRL);
void omap_setup_lcd_dma(void)
{
BUG_ON(lcd_dma.active);
- if (!cpu_is_omap1510()) {
+ if (!cpu_is_omap15xx()) {
/* Set some reasonable defaults */
omap_writew(0x5440, OMAP1610_DMA_LCD_CCR);
omap_writew(0x9102, OMAP1610_DMA_LCD_CSDP);
omap_writew(0x0004, OMAP1610_DMA_LCD_LCH_CTRL);
}
set_b1_regs();
- if (!cpu_is_omap1510()) {
+ if (!cpu_is_omap15xx()) {
u16 w;
w = omap_readw(OMAP1610_DMA_LCD_CCR);
u16 w;
lcd_dma.active = 0;
- if (cpu_is_omap1510() || !lcd_dma.ext_ctrl)
+ if (cpu_is_omap15xx() || !lcd_dma.ext_ctrl)
return;
w = omap_readw(OMAP1610_DMA_LCD_CCR);
#include <mach/hardware.h>
#include <asm/leds.h>
#include <asm/irq.h>
+#include <asm/sched_clock.h>
+
#include <asm/mach/irq.h>
#include <asm/mach/time.h>
#include <plat/common.h>
+#ifdef CONFIG_OMAP_MPU_TIMER
+
#define OMAP_MPU_TIMER_BASE OMAP_MPU_TIMER1_BASE
#define OMAP_MPU_TIMER_OFFSET 0x100
((volatile omap_mpu_timer_regs_t*)OMAP1_IO_ADDRESS(OMAP_MPU_TIMER_BASE + \
(n)*OMAP_MPU_TIMER_OFFSET))
-static inline unsigned long omap_mpu_timer_read(int nr)
+static inline unsigned long notrace omap_mpu_timer_read(int nr)
{
volatile omap_mpu_timer_regs_t* timer = omap_mpu_timer_base(nr);
return timer->read_tim;
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
+static DEFINE_CLOCK_DATA(cd);
+
+static inline unsigned long long notrace _omap_mpu_sched_clock(void)
+{
+ u32 cyc = mpu_read(&clocksource_mpu);
+ return cyc_to_sched_clock(&cd, cyc, (u32)~0);
+}
+
+#ifndef CONFIG_OMAP_32K_TIMER
+unsigned long long notrace sched_clock(void)
+{
+ return _omap_mpu_sched_clock();
+}
+#else
+static unsigned long long notrace omap_mpu_sched_clock(void)
+{
+ return _omap_mpu_sched_clock();
+}
+#endif
+
+static void notrace mpu_update_sched_clock(void)
+{
+ u32 cyc = mpu_read(&clocksource_mpu);
+ update_sched_clock(&cd, cyc, (u32)~0);
+}
+
static void __init omap_init_clocksource(unsigned long rate)
{
static char err[] __initdata = KERN_ERR
setup_irq(INT_TIMER2, &omap_mpu_timer2_irq);
omap_mpu_timer_start(1, ~0, 1);
+ init_sched_clock(&cd, mpu_update_sched_clock, 32, rate);
if (clocksource_register_hz(&clocksource_mpu, rate))
printk(err, clocksource_mpu.name);
}
-/*
- * ---------------------------------------------------------------------------
- * Timer initialization
- * ---------------------------------------------------------------------------
- */
-static void __init omap_timer_init(void)
+static void __init omap_mpu_timer_init(void)
{
struct clk *ck_ref = clk_get(NULL, "ck_ref");
unsigned long rate;
omap_init_clocksource(rate);
}
+#else
+static inline void omap_mpu_timer_init(void)
+{
+ pr_err("Bogus timer, should not happen\n");
+}
+#endif /* CONFIG_OMAP_MPU_TIMER */
+
+#if defined(CONFIG_OMAP_MPU_TIMER) && defined(CONFIG_OMAP_32K_TIMER)
+static unsigned long long (*preferred_sched_clock)(void);
+
+unsigned long long notrace sched_clock(void)
+{
+ if (!preferred_sched_clock)
+ return 0;
+
+ return preferred_sched_clock();
+}
+
+static inline void preferred_sched_clock_init(bool use_32k_sched_clock)
+{
+ if (use_32k_sched_clock)
+ preferred_sched_clock = omap_32k_sched_clock;
+ else
+ preferred_sched_clock = omap_mpu_sched_clock;
+}
+#else
+static inline void preferred_sched_clock_init(bool use_32k_sched_clock)
+{
+}
+#endif
+
+static inline int omap_32k_timer_usable(void)
+{
+ int res = false;
+
+ if (cpu_is_omap730() || cpu_is_omap15xx())
+ return res;
+
+#ifdef CONFIG_OMAP_32K_TIMER
+ res = omap_32k_timer_init();
+#endif
+
+ return res;
+}
+
+/*
+ * ---------------------------------------------------------------------------
+ * Timer initialization
+ * ---------------------------------------------------------------------------
+ */
+static void __init omap_timer_init(void)
+{
+ if (omap_32k_timer_usable()) {
+ preferred_sched_clock_init(1);
+ } else {
+ omap_mpu_timer_init();
+ preferred_sched_clock_init(0);
+ }
+}
+
struct sys_timer omap_timer = {
.init = omap_timer_init,
};
#include <asm/irq.h>
#include <asm/mach/irq.h>
#include <asm/mach/time.h>
+#include <plat/common.h>
#include <plat/dmtimer.h>
-struct sys_timer omap_timer;
-
/*
* ---------------------------------------------------------------------------
* 32KHz OS timer
* Timer initialization
* ---------------------------------------------------------------------------
*/
-static void __init omap_timer_init(void)
+bool __init omap_32k_timer_init(void)
{
+ omap_init_clocksource_32k();
+
#ifdef CONFIG_OMAP_DM_TIMER
omap_dm_timer_init();
#endif
omap_init_32k_timer();
-}
-struct sys_timer omap_timer = {
- .init = omap_timer_init,
-};
+ return true;
+}
#if defined(CONFIG_RTC_DRV_V3020) || defined(CONFIG_RTC_DRV_V3020_MODULE)
#define RTC_IO_GPIO (153)
#define RTC_WR_GPIO (154)
-#define RTC_RD_GPIO (160)
+#define RTC_RD_GPIO (53)
#define RTC_CS_GPIO (163)
+#define RTC_CS_EN_GPIO (160)
struct v3020_platform_data cm_t3517_v3020_pdata = {
.use_gpio = 1,
static void __init cm_t3517_init_rtc(void)
{
+ int err;
+
+ err = gpio_request(RTC_CS_EN_GPIO, "rtc cs en");
+ if (err) {
+ pr_err("CM-T3517: rtc cs en gpio request failed: %d\n", err);
+ return;
+ }
+
+ gpio_direction_output(RTC_CS_EN_GPIO, 1);
+
platform_device_register(&cm_t3517_rtc_device);
}
#else
},
{
.name = "linux",
- .offset = MTDPART_OFS_APPEND, /* Offset = 0x280000 */
+ .offset = MTDPART_OFS_APPEND, /* Offset = 0x2A0000 */
.size = 32 * NAND_BLOCK_SIZE,
},
{
.name = "rootfs",
- .offset = MTDPART_OFS_APPEND, /* Offset = 0x680000 */
+ .offset = MTDPART_OFS_APPEND, /* Offset = 0x6A0000 */
.size = MTDPART_SIZ_FULL,
},
};
static struct omap_board_mux board_mux[] __initdata = {
/* GPIO186 - Green LED */
OMAP3_MUX(SYS_CLKOUT2, OMAP_MUX_MODE4 | OMAP_PIN_OUTPUT),
- /* RTC GPIOs: IO, WR#, RD#, CS# */
+
+ /* RTC GPIOs: */
+ /* IO - GPIO153 */
OMAP3_MUX(MCBSP4_DR, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
+ /* WR# - GPIO154 */
OMAP3_MUX(MCBSP4_DX, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
- OMAP3_MUX(MCBSP_CLKS, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
+ /* RD# - GPIO53 */
+ OMAP3_MUX(GPMC_NCS2, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
+ /* CS# - GPIO163 */
OMAP3_MUX(UART3_CTS_RCTX, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
+ /* CS EN - GPIO160 */
+ OMAP3_MUX(MCBSP_CLKS, OMAP_MUX_MODE4 | OMAP_PIN_INPUT),
+
/* HSUSB1 RESET */
OMAP3_MUX(UART2_TX, OMAP_MUX_MODE4 | OMAP_PIN_OUTPUT),
/* HSUSB2 RESET */
static int devkit8000_panel_enable_lcd(struct omap_dss_device *dssdev)
{
- twl_i2c_write_u8(TWL4030_MODULE_GPIO, 0x80, REG_GPIODATADIR1);
- twl_i2c_write_u8(TWL4030_MODULE_LED, 0x0, 0x0);
-
if (gpio_is_valid(dssdev->reset_gpio))
gpio_set_value_cansleep(dssdev->reset_gpio, 1);
return 0;
static int devkit8000_twl_gpio_setup(struct device *dev,
unsigned gpio, unsigned ngpio)
{
+ int ret;
+
omap_mux_init_gpio(29, OMAP_PIN_INPUT);
/* gpio + 0 is "mmc0_cd" (input/IRQ) */
mmc[0].gpio_cd = gpio + 0;
/* TWL4030_GPIO_MAX + 1 == ledB, PMU_STAT (out, active low LED) */
gpio_leds[2].gpio = gpio + TWL4030_GPIO_MAX + 1;
- /* gpio + 1 is "LCD_PWREN" (out, active high) */
- devkit8000_lcd_device.reset_gpio = gpio + 1;
- gpio_request(devkit8000_lcd_device.reset_gpio, "LCD_PWREN");
- /* Disable until needed */
- gpio_direction_output(devkit8000_lcd_device.reset_gpio, 0);
+ /* TWL4030_GPIO_MAX + 0 is "LCD_PWREN" (out, active high) */
+ devkit8000_lcd_device.reset_gpio = gpio + TWL4030_GPIO_MAX + 0;
+ ret = gpio_request_one(devkit8000_lcd_device.reset_gpio,
+ GPIOF_DIR_OUT | GPIOF_INIT_LOW, "LCD_PWREN");
+ if (ret < 0) {
+ devkit8000_lcd_device.reset_gpio = -EINVAL;
+ printk(KERN_ERR "Failed to request GPIO for LCD_PWRN\n");
+ }
/* gpio + 7 is "DVI_PD" (out, active low) */
devkit8000_dvi_device.reset_gpio = gpio + 7;
- gpio_request(devkit8000_dvi_device.reset_gpio, "DVI PowerDown");
- /* Disable until needed */
- gpio_direction_output(devkit8000_dvi_device.reset_gpio, 0);
+ ret = gpio_request_one(devkit8000_dvi_device.reset_gpio,
+ GPIOF_DIR_OUT | GPIOF_INIT_LOW, "DVI PowerDown");
+ if (ret < 0) {
+ devkit8000_dvi_device.reset_gpio = -EINVAL;
+ printk(KERN_ERR "Failed to request GPIO for DVI PowerDown\n");
+ }
return 0;
}
.irq_base = TWL4030_GPIO_IRQ_BASE,
.irq_end = TWL4030_GPIO_IRQ_END,
.use_leds = true,
- .pullups = BIT(1),
- .pulldowns = BIT(2) | BIT(6) | BIT(7) | BIT(8) | BIT(13)
+ .pulldowns = BIT(1) | BIT(2) | BIT(6) | BIT(8) | BIT(13)
| BIT(15) | BIT(16) | BIT(17),
.setup = devkit8000_twl_gpio_setup,
};
platform_add_devices(panda_devices, ARRAY_SIZE(panda_devices));
omap_serial_init();
omap4_twl6030_hsmmc_init(mmc);
- /* OMAP4 Panda uses internal transceiver so register nop transceiver */
- usb_nop_xceiv_register();
omap4_ehci_init();
usb_musb_init(&musb_board_data);
}
static struct regulator_init_data rm680_vemmc = {
.constraints = {
.name = "rm680_vemmc",
- .min_uV = 2900000,
- .max_uV = 2900000,
- .apply_uV = 1,
.valid_modes_mask = REGULATOR_MODE_NORMAL
| REGULATOR_MODE_STANDBY,
.valid_ops_mask = REGULATOR_CHANGE_STATUS
#include "cm2_44xx.h"
#include "cm-regbits-44xx.h"
#include "prm44xx.h"
-#include "prm44xx.h"
#include "prm-regbits-44xx.h"
#include "control.h"
#include "scrm44xx.h"
{
struct clkdm_dep *cd;
+ if (!cpu_is_omap24xx() && !cpu_is_omap34xx()) {
+ pr_err("clockdomain: %s/%s: %s: not yet implemented\n",
+ clkdm1->name, clkdm2->name, __func__);
+ return -EINVAL;
+ }
+
if (!clkdm1 || !clkdm2)
return -EINVAL;
{
struct clkdm_dep *cd;
+ if (!cpu_is_omap24xx() && !cpu_is_omap34xx()) {
+ pr_err("clockdomain: %s/%s: %s: not yet implemented\n",
+ clkdm1->name, clkdm2->name, __func__);
+ return -EINVAL;
+ }
+
if (!clkdm1 || !clkdm2)
return -EINVAL;
if (!clkdm1 || !clkdm2)
return -EINVAL;
+ if (!cpu_is_omap24xx() && !cpu_is_omap34xx()) {
+ pr_err("clockdomain: %s/%s: %s: not yet implemented\n",
+ clkdm1->name, clkdm2->name, __func__);
+ return -EINVAL;
+ }
+
cd = _clkdm_deps_lookup(clkdm2, clkdm1->wkdep_srcs);
if (IS_ERR(cd)) {
pr_debug("clockdomain: hardware cannot set/clear wake up of "
struct clkdm_dep *cd;
u32 mask = 0;
+ if (!cpu_is_omap24xx() && !cpu_is_omap34xx()) {
+ pr_err("clockdomain: %s: %s: not yet implemented\n",
+ clkdm->name, __func__);
+ return -EINVAL;
+ }
+
if (!clkdm)
return -EINVAL;
* dependency code and data for OMAP4.
*/
if (cpu_is_omap44xx()) {
- WARN_ONCE(1, "clockdomain: OMAP4 wakeup/sleep dependency "
- "support is not yet implemented\n");
+ pr_err("clockdomain: %s: OMAP4 wakeup/sleep dependency support: not yet implemented\n", clkdm->name);
} else {
if (atomic_read(&clkdm->usecount) > 0)
_clkdm_add_autodeps(clkdm);
* dependency code and data for OMAP4.
*/
if (cpu_is_omap44xx()) {
- WARN_ONCE(1, "clockdomain: OMAP4 wakeup/sleep dependency "
- "support is not yet implemented\n");
+ pr_err("clockdomain: %s: OMAP4 wakeup/sleep dependency support: not yet implemented\n", clkdm->name);
} else {
if (atomic_read(&clkdm->usecount) > 0)
_clkdm_del_autodeps(clkdm);
#include "cm1_44xx.h"
#include "cm2_44xx.h"
-#include "cm1_44xx.h"
-#include "cm2_44xx.h"
#include "cm-regbits-44xx.h"
#include "prm44xx.h"
#include "prcm44xx.h"
if (IS_ERR(od)) {
pr_err("%s: Cant build omap_device for %s:%s.\n",
__func__, name, oh->name);
- return IS_ERR(od);
+ return PTR_ERR(od);
}
mem = platform_get_resource(&od->pdev, IORESOURCE_MEM, 0);
*/
#ifdef MULTI_OMAP2
-
-/*
- * We use __glue to avoid errors with multiple definitions of
- * .globl omap_irq_base as it's included from entry-armv.S but not
- * from entry-common.S.
- */
-#ifdef __glue
- .pushsection .data
- .globl omap_irq_base
-omap_irq_base:
- .word 0
- .popsection
-#endif
-
/*
* Configure the interrupt base on the first interrupt.
* See also omap_irq_base_init for setting omap_irq_base.
return omap_hwmod_set_postsetup_state(oh, *(u8 *)data);
}
+void __iomem *omap_irq_base;
+
/*
* Initialize asm_irq_base for entry-macro.S
*/
static inline void omap_irq_base_init(void)
{
- extern void __iomem *omap_irq_base;
-
-#ifdef MULTI_OMAP2
if (cpu_is_omap24xx())
omap_irq_base = OMAP2_L4_IO_ADDRESS(OMAP24XX_IC_BASE);
else if (cpu_is_omap34xx())
omap_irq_base = OMAP2_L4_IO_ADDRESS(OMAP44XX_GIC_CPU_BASE);
else
pr_err("Could not initialize omap_irq_base\n");
-#endif
}
void __init omap2_init_common_infrastructure(void)
struct omap_mux *mux = NULL;
struct omap_mux_entry *e;
const char *mode_name;
- int found = 0, found_mode, mode0_len = 0;
+ int found = 0, found_mode = 0, mode0_len = 0;
struct list_head *muxmodes = &partition->muxmodes;
mode_name = strchr(muxname, '.');
if (!partition->base) {
pr_err("%s: Could not ioremap mux partition at 0x%08x\n",
__func__, partition->phys);
+ kfree(partition);
return -ENODEV;
}
/* Block console output in case it is on one of the OMAP UARTs */
if (!is_suspending())
- if (try_acquire_console_sem())
+ if (!console_trylock())
goto no_sleep;
omap_uart_prepare_idle(0);
omap_uart_resume_idle(0);
if (!is_suspending())
- release_console_sem();
+ console_unlock();
no_sleep:
if (omap2_pm_debug) {
* once during boot sequence, but this works as we are not using secure
* services.
*/
-static void omap3_save_secure_ram_context(u32 target_mpu_state)
+static void omap3_save_secure_ram_context(void)
{
u32 ret;
+ int mpu_next_state = pwrdm_read_next_pwrst(mpu_pwrdm);
if (omap_type() != OMAP2_DEVICE_TYPE_GP) {
/*
pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
ret = _omap_save_secure_sram((u32 *)
__pa(omap3_secure_ram_storage));
- pwrdm_set_next_pwrst(mpu_pwrdm, target_mpu_state);
+ pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);
/* Following is for error tracking, it should not happen */
if (ret) {
printk(KERN_ERR "save_secure_sram() returns %08x\n",
if (!is_suspending())
if (per_next_state < PWRDM_POWER_ON ||
core_next_state < PWRDM_POWER_ON)
- if (try_acquire_console_sem())
+ if (!console_trylock())
goto console_still_active;
/* PER */
}
if (!is_suspending())
- release_console_sem();
+ console_unlock();
console_still_active:
/* Disable IO-PAD and IO-CHAIN wakeup */
local_fiq_disable();
omap_dma_global_context_save();
- omap3_save_secure_ram_context(PWRDM_POWER_ON);
+ omap3_save_secure_ram_context();
omap_dma_global_context_restore();
local_irq_enable();
#include <plat/prcm.h>
#include "powerdomain.h"
-#include "prm-regbits-34xx.h"
#include "prm.h"
#include "prm-regbits-24xx.h"
#include "prm-regbits-34xx.h"
oh->dev_attr = uart;
- acquire_console_sem(); /* in case the earlycon is on the UART */
+ console_lock(); /* in case the earlycon is on the UART */
/*
* Because of early UART probing, UART did not get idled
omap_uart_block_sleep(uart);
uart->timeout = DEFAULT_TIMEOUT;
- release_console_sem();
+ console_unlock();
if ((cpu_is_omap34xx() && uart->padconf) ||
(uart->wk_en && uart->wk_mask)) {
struct omap_sr *sr_info = (struct omap_sr *) data;
if (!sr_info) {
- pr_warning("%s: omap_sr struct for sr_%s not found\n",
- __func__, sr_info->voltdm->name);
+ pr_warning("%s: omap_sr struct not found\n", __func__);
return -EINVAL;
}
struct omap_sr *sr_info = (struct omap_sr *) data;
if (!sr_info) {
- pr_warning("%s: omap_sr struct for sr_%s not found\n",
- __func__, sr_info->voltdm->name);
+ pr_warning("%s: omap_sr struct not found\n", __func__);
return -EINVAL;
}
if (!pdata) {
dev_err(&pdev->dev, "%s: platform data missing\n", __func__);
- return -EINVAL;
+ ret = -EINVAL;
+ goto err_free_devinfo;
}
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
}
sr_info = _sr_lookup(pdata->voltdm);
- if (!sr_info) {
+ if (IS_ERR(sr_info)) {
dev_warn(&pdev->dev, "%s: omap_sr struct not found\n",
__func__);
return -EINVAL;
#include "timer-gp.h"
+#include <plat/common.h>
+
/* MAX_GPTIMER_ID: number of GPTIMERs on the chip */
#define MAX_GPTIMER_ID 12
/*
* When 32k-timer is enabled, don't use GPTimer for clocksource
* instead, just leave default clocksource which uses the 32k
- * sync counter. See clocksource setup in see plat-omap/common.c.
+ * sync counter. See clocksource setup in plat-omap/counter_32k.c
*/
-static inline void __init omap2_gp_clocksource_init(void) {}
+static void __init omap2_gp_clocksource_init(void)
+{
+ omap_init_clocksource_32k();
+}
+
#else
/*
* clocksource
strcat(name, vdd->voltdm.name);
vdd->debug_dir = debugfs_create_dir(name, voltage_dir);
+ kfree(name);
if (IS_ERR(vdd->debug_dir)) {
pr_warning("%s: Unable to create debugfs directory for"
" vdd_%s\n", __func__, vdd->voltdm.name);
GPIO0_COLIBRI_PXA270_SD_DETECT;
if (machine_is_colibri300()) /* PXA300 Colibri */
colibri_mci_platform_data.gpio_card_detect =
- GPIO39_COLIBRI_PXA300_SD_DETECT;
+ GPIO13_COLIBRI_PXA300_SD_DETECT;
else /* PXA320 Colibri */
colibri_mci_platform_data.gpio_card_detect =
GPIO28_COLIBRI_PXA320_SD_DETECT;
GPIO4_MMC1_DAT1,
GPIO5_MMC1_DAT2,
GPIO6_MMC1_DAT3,
- GPIO39_GPIO, /* SD detect */
+ GPIO13_GPIO, /* GPIO13_COLIBRI_PXA300_SD_DETECT */
/* UHC */
GPIO0_2_USBH_PEN,
#define GPIO113_COLIBRI_PXA270_TS_IRQ 113
/* GPIO definitions for Colibri PXA300/310 */
-#define GPIO39_COLIBRI_PXA300_SD_DETECT 39
+#define GPIO13_COLIBRI_PXA300_SD_DETECT 13
/* GPIO definitions for Colibri PXA320 */
#define GPIO28_COLIBRI_PXA320_SD_DETECT 28
.pwm_id = 0,
.max_brightness = 0xfe,
.dft_brightness = 0x7e,
- .pwm_period_ns = 3500,
+ .pwm_period_ns = 3500 * 1024,
.init = palm27x_backlight_init,
.notify = palm27x_backlight_notify,
.exit = palm27x_backlight_exit,
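For context (illustrative arithmetic, not part of the patch), the old and new backlight periods differ by three orders of magnitude:

/*
 *	old: pwm_period_ns = 3500		-> ~286 kHz PWM frequency
 *	new: pwm_period_ns = 3500 * 1024
 *	                   = 3584000 ns		-> ~279 Hz, a plausible backlight rate
 */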
#endif
/* skip registers saving for standby */
- if (state != PM_SUSPEND_STANDBY) {
+ if (state != PM_SUSPEND_STANDBY && pxa_cpu_pm_fns->save) {
pxa_cpu_pm_fns->save(sleep_save);
/* before sleeping, calculate and save a checksum */
for (i = 0; i < pxa_cpu_pm_fns->save_count - 1; i++)
pxa_cpu_pm_fns->enter(state);
cpu_init();
- if (state != PM_SUSPEND_STANDBY) {
+ if (state != PM_SUSPEND_STANDBY && pxa_cpu_pm_fns->restore) {
/* after sleeping, validate the checksum */
for (i = 0; i < pxa_cpu_pm_fns->save_count - 1; i++)
checksum += sleep_save[i];
depends on ARCH_REALVIEW
config MACH_REALVIEW_EB
- bool "Support RealView/EB platform"
+ bool "Support RealView(R) Emulation Baseboard"
select ARM_GIC
help
- Include support for the ARM(R) RealView Emulation Baseboard platform.
+ Include support for the ARM(R) RealView(R) Emulation Baseboard
+ platform.
config REALVIEW_EB_A9MP
- bool "Support Multicore Cortex-A9"
+ bool "Support Multicore Cortex-A9 Tile"
depends on MACH_REALVIEW_EB
select CPU_V7
help
- Enable support for the Cortex-A9MPCore tile on the Realview platform.
+ Enable support for the Cortex-A9MPCore tile fitted to the
+ Realview(R) Emulation Baseboard platform.
config REALVIEW_EB_ARM11MP
- bool "Support ARM11MPCore tile"
+ bool "Support ARM11MPCore Tile"
depends on MACH_REALVIEW_EB
select CPU_V6
select ARCH_HAS_BARRIERS if SMP
help
- Enable support for the ARM11MPCore tile on the Realview platform.
+ Enable support for the ARM11MPCore tile fitted to the Realview(R)
+ Emulation Baseboard platform.
config REALVIEW_EB_ARM11MP_REVB
- bool "Support ARM11MPCore RevB tile"
+ bool "Support ARM11MPCore RevB Tile"
depends on REALVIEW_EB_ARM11MP
help
- Enable support for the ARM11MPCore RevB tile on the Realview
- platform. Since there are device address differences, a
- kernel built with this option enabled is not compatible with
- other revisions of the ARM11MPCore tile.
+ Enable support for the ARM11MPCore Revision B tile on the
+ Realview(R) Emulation Baseboard platform. Since there are device
+ address differences, a kernel built with this option enabled is
+ not compatible with other revisions of the ARM11MPCore tile.
config MACH_REALVIEW_PB11MP
- bool "Support RealView/PB11MPCore platform"
+ bool "Support RealView(R) Platform Baseboard for ARM11MPCore"
select CPU_V6
select ARM_GIC
select HAVE_PATA_PLATFORM
select ARCH_HAS_BARRIERS if SMP
help
- Include support for the ARM(R) RealView MPCore Platform Baseboard.
- PB11MPCore is a platform with an on-board ARM11MPCore and has
+ Include support for the ARM(R) RealView(R) Platform Baseboard for
+ the ARM11MPCore. This platform has an on-board ARM11MPCore and has
support for PCI-E and Compact Flash.
config MACH_REALVIEW_PB1176
- bool "Support RealView/PB1176 platform"
+ bool "Support RealView(R) Platform Baseboard for ARM1176JZF-S"
select CPU_V6
select ARM_GIC
help
- Include support for the ARM(R) RealView ARM1176 Platform Baseboard.
+ Include support for the ARM(R) RealView(R) Platform Baseboard for
+ ARM1176JZF-S.
config REALVIEW_PB1176_SECURE_FLASH
bool "Allow access to the secure flash memory block"
block (64MB @ 0x3c000000) is required.
config MACH_REALVIEW_PBA8
- bool "Support RealView/PB-A8 platform"
+ bool "Support RealView(R) Platform Baseboard for Cortex(tm)-A8 platform"
select CPU_V7
select ARM_GIC
select HAVE_PATA_PLATFORM
help
- Include support for the ARM(R) RealView Cortex-A8 Platform Baseboard.
- PB-A8 is a platform with an on-board Cortex-A8 and has support for
- PCI-E and Compact Flash.
+ Include support for the ARM(R) RealView Platform Baseboard for
+ Cortex(tm)-A8. This platform has an on-board Cortex-A8 and has
+ support for PCI-E and Compact Flash.
config MACH_REALVIEW_PBX
- bool "Support RealView/PBX platform"
+ bool "Support RealView(R) Platform Baseboard Explore"
select ARM_GIC
select HAVE_PATA_PLATFORM
select ARCH_SPARSEMEM_ENABLE if CPU_V7 && !REALVIEW_HIGH_PHYS_OFFSET
select ZONE_DMA if SPARSEMEM
help
- Include support for the ARM(R) RealView PBX platform.
+ Include support for the ARM(R) RealView(R) Platform Baseboard
+ Explore.
config REALVIEW_HIGH_PHYS_OFFSET
bool "High physical base address for the RealView platform"
* observers, irrespective of whether they're taking part in coherency
* or not. This is necessary for the hotplug code to work reliably.
*/
-static void write_pen_release(int val)
+static void __cpuinit write_pen_release(int val)
{
pen_release = val;
smp_wmb();
/* linux/arch/arm/mach-s5p6442/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5P6442 - Memory map definitions
#include <plat/map-base.h>
#include <plat/map-s5p.h>
-#define S5P6442_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5P6442_PA_CHIPID
+#define S5P6442_PA_SDRAM 0x20000000
-#define S5P6442_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5P6442_PA_SYSCON
+#define S5P6442_PA_I2S0 0xC0B00000
+#define S5P6442_PA_I2S1 0xF2200000
-#define S5P6442_PA_GPIO (0xE0200000)
+#define S5P6442_PA_CHIPID 0xE0000000
-#define S5P6442_PA_VIC0 (0xE4000000)
-#define S5P6442_PA_VIC1 (0xE4100000)
-#define S5P6442_PA_VIC2 (0xE4200000)
+#define S5P6442_PA_SYSCON 0xE0100000
-#define S5P6442_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5P6442_PA_SROMC
+#define S5P6442_PA_GPIO 0xE0200000
-#define S5P6442_PA_MDMA 0xE8000000
-#define S5P6442_PA_PDMA 0xE9000000
+#define S5P6442_PA_VIC0 0xE4000000
+#define S5P6442_PA_VIC1 0xE4100000
+#define S5P6442_PA_VIC2 0xE4200000
-#define S5P6442_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5P6442_PA_TIMER
+#define S5P6442_PA_SROMC 0xE7000000
-#define S5P6442_PA_SYSTIMER (0xEA100000)
+#define S5P6442_PA_MDMA 0xE8000000
+#define S5P6442_PA_PDMA 0xE9000000
-#define S5P6442_PA_WATCHDOG (0xEA200000)
+#define S5P6442_PA_TIMER 0xEA000000
-#define S5P6442_PA_UART (0xEC000000)
+#define S5P6442_PA_SYSTIMER 0xEA100000
-#define S5P_PA_UART0 (S5P6442_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5P6442_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5P6442_PA_UART + 0x800)
-#define S5P_SZ_UART SZ_256
+#define S5P6442_PA_WATCHDOG 0xEA200000
-#define S5P6442_PA_IIC0 (0xEC100000)
+#define S5P6442_PA_UART 0xEC000000
-#define S5P6442_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5P6442_PA_SDRAM
+#define S5P6442_PA_IIC0 0xEC100000
#define S5P6442_PA_SPI 0xEC300000
-/* I2S */
-#define S5P6442_PA_I2S0 0xC0B00000
-#define S5P6442_PA_I2S1 0xF2200000
-
-/* PCM */
#define S5P6442_PA_PCM0 0xF2400000
#define S5P6442_PA_PCM1 0xF2500000
-/* compatibiltiy defines. */
+/* Compatibility Defines */
+
+#define S3C_PA_IIC S5P6442_PA_IIC0
#define S3C_PA_WDT S5P6442_PA_WATCHDOG
+
+#define S5P_PA_CHIPID S5P6442_PA_CHIPID
+#define S5P_PA_SDRAM S5P6442_PA_SDRAM
+#define S5P_PA_SROMC S5P6442_PA_SROMC
+#define S5P_PA_SYSCON S5P6442_PA_SYSCON
+#define S5P_PA_TIMER S5P6442_PA_TIMER
+
+/* UART */
+
#define S3C_PA_UART S5P6442_PA_UART
-#define S3C_PA_IIC S5P6442_PA_IIC0
+
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
/* linux/arch/arm/mach-s5p64x0/include/mach/map.h
*
- * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2009-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* S5P64X0 - Memory map definitions
#include <plat/map-base.h>
#include <plat/map-s5p.h>
-#define S5P64X0_PA_SDRAM (0x20000000)
+#define S5P64X0_PA_SDRAM 0x20000000
-#define S5P64X0_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5P64X0_PA_CHIPID
-
-#define S5P64X0_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5P64X0_PA_SYSCON
-
-#define S5P64X0_PA_GPIO (0xE0308000)
-
-#define S5P64X0_PA_VIC0 (0xE4000000)
-#define S5P64X0_PA_VIC1 (0xE4100000)
+#define S5P64X0_PA_CHIPID 0xE0000000
-#define S5P64X0_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5P64X0_PA_SROMC
-
-#define S5P64X0_PA_PDMA (0xE9000000)
-
-#define S5P64X0_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5P64X0_PA_TIMER
+#define S5P64X0_PA_SYSCON 0xE0100000
-#define S5P64X0_PA_RTC (0xEA100000)
+#define S5P64X0_PA_GPIO 0xE0308000
-#define S5P64X0_PA_WDT (0xEA200000)
+#define S5P64X0_PA_VIC0 0xE4000000
+#define S5P64X0_PA_VIC1 0xE4100000
-#define S5P6440_PA_UART(x) (0xEC000000 + ((x) * S3C_UART_OFFSET))
-#define S5P6450_PA_UART(x) ((x < 5) ? (0xEC800000 + ((x) * S3C_UART_OFFSET)) : (0xEC000000))
+#define S5P64X0_PA_SROMC 0xE7000000
-#define S5P_PA_UART0 S5P6450_PA_UART(0)
-#define S5P_PA_UART1 S5P6450_PA_UART(1)
-#define S5P_PA_UART2 S5P6450_PA_UART(2)
-#define S5P_PA_UART3 S5P6450_PA_UART(3)
-#define S5P_PA_UART4 S5P6450_PA_UART(4)
-#define S5P_PA_UART5 S5P6450_PA_UART(5)
+#define S5P64X0_PA_PDMA 0xE9000000
-#define S5P_SZ_UART SZ_256
+#define S5P64X0_PA_TIMER 0xEA000000
+#define S5P64X0_PA_RTC 0xEA100000
+#define S5P64X0_PA_WDT 0xEA200000
-#define S5P6440_PA_IIC0 (0xEC104000)
-#define S5P6440_PA_IIC1 (0xEC20F000)
-#define S5P6450_PA_IIC0 (0xEC100000)
-#define S5P6450_PA_IIC1 (0xEC200000)
+#define S5P6440_PA_IIC0 0xEC104000
+#define S5P6440_PA_IIC1 0xEC20F000
+#define S5P6450_PA_IIC0 0xEC100000
+#define S5P6450_PA_IIC1 0xEC200000
-#define S5P64X0_PA_SPI0 (0xEC400000)
-#define S5P64X0_PA_SPI1 (0xEC500000)
+#define S5P64X0_PA_SPI0 0xEC400000
+#define S5P64X0_PA_SPI1 0xEC500000
-#define S5P64X0_PA_HSOTG (0xED100000)
+#define S5P64X0_PA_HSOTG 0xED100000
#define S5P64X0_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
-#define S5P64X0_PA_I2S (0xF2000000)
+#define S5P64X0_PA_I2S 0xF2000000
#define S5P6450_PA_I2S1 0xF2800000
#define S5P6450_PA_I2S2 0xF2900000
-#define S5P64X0_PA_PCM (0xF2100000)
+#define S5P64X0_PA_PCM 0xF2100000
-#define S5P64X0_PA_ADC (0xF3000000)
+#define S5P64X0_PA_ADC 0xF3000000
-/* compatibiltiy defines. */
+/* Compatibility Defines */
#define S3C_PA_HSMMC0 S5P64X0_PA_HSMMC(0)
#define S3C_PA_HSMMC1 S5P64X0_PA_HSMMC(1)
#define S3C_PA_RTC S5P64X0_PA_RTC
#define S3C_PA_WDT S5P64X0_PA_WDT
+#define S5P_PA_CHIPID S5P64X0_PA_CHIPID
+#define S5P_PA_SROMC S5P64X0_PA_SROMC
+#define S5P_PA_SYSCON S5P64X0_PA_SYSCON
+#define S5P_PA_TIMER S5P64X0_PA_TIMER
+
#define SAMSUNG_PA_ADC S5P64X0_PA_ADC
+/* UART */
+
+#define S5P6440_PA_UART(x) (0xEC000000 + ((x) * S3C_UART_OFFSET))
+#define S5P6450_PA_UART(x) ((x < 5) ? (0xEC800000 + ((x) * S3C_UART_OFFSET)) : (0xEC000000))
+
+#define S5P_PA_UART0 S5P6450_PA_UART(0)
+#define S5P_PA_UART1 S5P6450_PA_UART(1)
+#define S5P_PA_UART2 S5P6450_PA_UART(2)
+#define S5P_PA_UART3 S5P6450_PA_UART(3)
+#define S5P_PA_UART4 S5P6450_PA_UART(4)
+#define S5P_PA_UART5 S5P6450_PA_UART(5)
+
+#define S5P_SZ_UART SZ_256
+
#endif /* __ASM_ARCH_MAP_H */
/* linux/arch/arm/mach-s5pc100/include/mach/map.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
*
* Copyright 2009 Samsung Electronics Co.
* Byungho Min <bhmin@samsung.com>
#include <plat/map-base.h>
#include <plat/map-s5p.h>
-/*
- * map-base.h has already defined virtual memory address
- * S3C_VA_IRQ S3C_ADDR(0x00000000) irq controller(s)
- * S3C_VA_SYS S3C_ADDR(0x00100000) system control
- * S3C_VA_MEM S3C_ADDR(0x00200000) system control (not used)
- * S3C_VA_TIMER S3C_ADDR(0x00300000) timer block
- * S3C_VA_WATCHDOG S3C_ADDR(0x00400000) watchdog
- * S3C_VA_UART S3C_ADDR(0x01000000) UART
- *
- * S5PC100 specific virtual memory address can be defined here
- * S5PC1XX_VA_GPIO S3C_ADDR(0x00500000) GPIO
- *
- */
+#define S5PC100_PA_SDRAM 0x20000000
+
+#define S5PC100_PA_ONENAND 0xE7100000
+#define S5PC100_PA_ONENAND_BUF 0xB0000000
+
+#define S5PC100_PA_CHIPID 0xE0000000
-#define S5PC100_PA_ONENAND_BUF (0xB0000000)
-#define S5PC100_SZ_ONENAND_BUF (SZ_256M - SZ_32M)
+#define S5PC100_PA_SYSCON 0xE0100000
-/* Chip ID */
+#define S5PC100_PA_OTHERS 0xE0200000
-#define S5PC100_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5PC100_PA_CHIPID
+#define S5PC100_PA_GPIO 0xE0300000
-#define S5PC100_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5PC100_PA_SYSCON
+#define S5PC100_PA_VIC0 0xE4000000
+#define S5PC100_PA_VIC1 0xE4100000
+#define S5PC100_PA_VIC2 0xE4200000
-#define S5PC100_PA_OTHERS (0xE0200000)
-#define S5PC100_VA_OTHERS (S3C_VA_SYS + 0x10000)
+#define S5PC100_PA_SROMC 0xE7000000
-#define S5PC100_PA_GPIO (0xE0300000)
-#define S5PC1XX_VA_GPIO S3C_ADDR(0x00500000)
+#define S5PC100_PA_CFCON 0xE7800000
-/* Interrupt */
-#define S5PC100_PA_VIC0 (0xE4000000)
-#define S5PC100_PA_VIC1 (0xE4100000)
-#define S5PC100_PA_VIC2 (0xE4200000)
-#define S5PC100_VA_VIC S3C_VA_IRQ
-#define S5PC100_VA_VIC_OFFSET 0x10000
-#define S5PC1XX_VA_VIC(x) (S5PC100_VA_VIC + ((x) * S5PC100_VA_VIC_OFFSET))
+#define S5PC100_PA_MDMA 0xE8100000
+#define S5PC100_PA_PDMA0 0xE9000000
+#define S5PC100_PA_PDMA1 0xE9200000
-#define S5PC100_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5PC100_PA_SROMC
+#define S5PC100_PA_TIMER 0xEA000000
+#define S5PC100_PA_SYSTIMER 0xEA100000
+#define S5PC100_PA_WATCHDOG 0xEA200000
+#define S5PC100_PA_RTC 0xEA300000
-#define S5PC100_PA_ONENAND (0xE7100000)
+#define S5PC100_PA_UART 0xEC000000
-#define S5PC100_PA_CFCON (0xE7800000)
+#define S5PC100_PA_IIC0 0xEC100000
+#define S5PC100_PA_IIC1 0xEC200000
-/* DMA */
-#define S5PC100_PA_MDMA (0xE8100000)
-#define S5PC100_PA_PDMA0 (0xE9000000)
-#define S5PC100_PA_PDMA1 (0xE9200000)
+#define S5PC100_PA_SPI0 0xEC300000
+#define S5PC100_PA_SPI1 0xEC400000
+#define S5PC100_PA_SPI2 0xEC500000
-/* Timer */
-#define S5PC100_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5PC100_PA_TIMER
+#define S5PC100_PA_USB_HSOTG 0xED200000
+#define S5PC100_PA_USB_HSPHY 0xED300000
-#define S5PC100_PA_SYSTIMER (0xEA100000)
+#define S5PC100_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
-#define S5PC100_PA_WATCHDOG (0xEA200000)
-#define S5PC100_PA_RTC (0xEA300000)
+#define S5PC100_PA_FB 0xEE000000
-#define S5PC100_PA_UART (0xEC000000)
+#define S5PC100_PA_FIMC0 0xEE200000
+#define S5PC100_PA_FIMC1 0xEE300000
+#define S5PC100_PA_FIMC2 0xEE400000
-#define S5P_PA_UART0 (S5PC100_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5PC100_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5PC100_PA_UART + 0x800)
-#define S5P_PA_UART3 (S5PC100_PA_UART + 0xC00)
-#define S5P_SZ_UART SZ_256
+#define S5PC100_PA_I2S0 0xF2000000
+#define S5PC100_PA_I2S1 0xF2100000
+#define S5PC100_PA_I2S2 0xF2200000
-#define S5PC100_PA_IIC0 (0xEC100000)
-#define S5PC100_PA_IIC1 (0xEC200000)
+#define S5PC100_PA_AC97 0xF2300000
-/* SPI */
-#define S5PC100_PA_SPI0 0xEC300000
-#define S5PC100_PA_SPI1 0xEC400000
-#define S5PC100_PA_SPI2 0xEC500000
+#define S5PC100_PA_PCM0 0xF2400000
+#define S5PC100_PA_PCM1 0xF2500000
-/* USB HS OTG */
-#define S5PC100_PA_USB_HSOTG (0xED200000)
-#define S5PC100_PA_USB_HSPHY (0xED300000)
+#define S5PC100_PA_SPDIF 0xF2600000
-#define S5PC100_PA_FB (0xEE000000)
+#define S5PC100_PA_TSADC 0xF3000000
-#define S5PC100_PA_FIMC0 (0xEE200000)
-#define S5PC100_PA_FIMC1 (0xEE300000)
-#define S5PC100_PA_FIMC2 (0xEE400000)
+#define S5PC100_PA_KEYPAD 0xF3100000
-#define S5PC100_PA_I2S0 (0xF2000000)
-#define S5PC100_PA_I2S1 (0xF2100000)
-#define S5PC100_PA_I2S2 (0xF2200000)
+/* Compatibility Defines */
-#define S5PC100_PA_AC97 0xF2300000
+#define S3C_PA_FB S5PC100_PA_FB
+#define S3C_PA_HSMMC0 S5PC100_PA_HSMMC(0)
+#define S3C_PA_HSMMC1 S5PC100_PA_HSMMC(1)
+#define S3C_PA_HSMMC2 S5PC100_PA_HSMMC(2)
+#define S3C_PA_IIC S5PC100_PA_IIC0
+#define S3C_PA_IIC1 S5PC100_PA_IIC1
+#define S3C_PA_KEYPAD S5PC100_PA_KEYPAD
+#define S3C_PA_ONENAND S5PC100_PA_ONENAND
+#define S3C_PA_ONENAND_BUF S5PC100_PA_ONENAND_BUF
+#define S3C_PA_RTC S5PC100_PA_RTC
+#define S3C_PA_TSADC S5PC100_PA_TSADC
+#define S3C_PA_USB_HSOTG S5PC100_PA_USB_HSOTG
+#define S3C_PA_USB_HSPHY S5PC100_PA_USB_HSPHY
+#define S3C_PA_WDT S5PC100_PA_WATCHDOG
-/* PCM */
-#define S5PC100_PA_PCM0 0xF2400000
-#define S5PC100_PA_PCM1 0xF2500000
+#define S5P_PA_CHIPID S5PC100_PA_CHIPID
+#define S5P_PA_FIMC0 S5PC100_PA_FIMC0
+#define S5P_PA_FIMC1 S5PC100_PA_FIMC1
+#define S5P_PA_FIMC2 S5PC100_PA_FIMC2
+#define S5P_PA_SDRAM S5PC100_PA_SDRAM
+#define S5P_PA_SROMC S5PC100_PA_SROMC
+#define S5P_PA_SYSCON S5PC100_PA_SYSCON
+#define S5P_PA_TIMER S5PC100_PA_TIMER
-#define S5PC100_PA_SPDIF 0xF2600000
+#define SAMSUNG_PA_ADC S5PC100_PA_TSADC
+#define SAMSUNG_PA_CFCON S5PC100_PA_CFCON
+#define SAMSUNG_PA_KEYPAD S5PC100_PA_KEYPAD
-#define S5PC100_PA_TSADC (0xF3000000)
+#define S5PC100_VA_OTHERS (S3C_VA_SYS + 0x10000)
-/* KEYPAD */
-#define S5PC100_PA_KEYPAD (0xF3100000)
+#define S3C_SZ_ONENAND_BUF (SZ_256M - SZ_32M)
-#define S5PC100_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
+/* UART */
-#define S5PC100_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5PC100_PA_SDRAM
+#define S3C_PA_UART S5PC100_PA_UART
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PC100_PA_UART
-#define S3C_PA_IIC S5PC100_PA_IIC0
-#define S3C_PA_IIC1 S5PC100_PA_IIC1
-#define S3C_PA_FB S5PC100_PA_FB
-#define S3C_PA_G2D S5PC100_PA_G2D
-#define S3C_PA_G3D S5PC100_PA_G3D
-#define S3C_PA_JPEG S5PC100_PA_JPEG
-#define S3C_PA_ROTATOR S5PC100_PA_ROTATOR
-#define S5P_VA_VIC0 S5PC1XX_VA_VIC(0)
-#define S5P_VA_VIC1 S5PC1XX_VA_VIC(1)
-#define S5P_VA_VIC2 S5PC1XX_VA_VIC(2)
-#define S3C_PA_USB_HSOTG S5PC100_PA_USB_HSOTG
-#define S3C_PA_USB_HSPHY S5PC100_PA_USB_HSPHY
-#define S3C_PA_HSMMC0 S5PC100_PA_HSMMC(0)
-#define S3C_PA_HSMMC1 S5PC100_PA_HSMMC(1)
-#define S3C_PA_HSMMC2 S5PC100_PA_HSMMC(2)
-#define S3C_PA_KEYPAD S5PC100_PA_KEYPAD
-#define S3C_PA_WDT S5PC100_PA_WATCHDOG
-#define S3C_PA_TSADC S5PC100_PA_TSADC
-#define S3C_PA_ONENAND S5PC100_PA_ONENAND
-#define S3C_PA_ONENAND_BUF S5PC100_PA_ONENAND_BUF
-#define S3C_SZ_ONENAND_BUF S5PC100_SZ_ONENAND_BUF
-#define S3C_PA_RTC S5PC100_PA_RTC
-
-#define SAMSUNG_PA_ADC S5PC100_PA_TSADC
-#define SAMSUNG_PA_CFCON S5PC100_PA_CFCON
-#define SAMSUNG_PA_KEYPAD S5PC100_PA_KEYPAD
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
-#define S5P_PA_FIMC0 S5PC100_PA_FIMC0
-#define S5P_PA_FIMC1 S5PC100_PA_FIMC1
-#define S5P_PA_FIMC2 S5PC100_PA_FIMC2
+#define S5P_SZ_UART SZ_256
-#endif /* __ASM_ARCH_C100_MAP_H */
+#endif /* __ASM_ARCH_MAP_H */
/* linux/arch/arm/mach-s5pv210/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5PV210 - Memory map definitions
#include <plat/map-base.h>
#include <plat/map-s5p.h>
-#define S5PV210_PA_SROM_BANK5 (0xA8000000)
+#define S5PV210_PA_SDRAM 0x20000000
-#define S5PC110_PA_ONENAND (0xB0000000)
-#define S5P_PA_ONENAND S5PC110_PA_ONENAND
+#define S5PV210_PA_SROM_BANK5 0xA8000000
-#define S5PC110_PA_ONENAND_DMA (0xB0600000)
-#define S5P_PA_ONENAND_DMA S5PC110_PA_ONENAND_DMA
+#define S5PC110_PA_ONENAND 0xB0000000
+#define S5PC110_PA_ONENAND_DMA 0xB0600000
-#define S5PV210_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5PV210_PA_CHIPID
+#define S5PV210_PA_CHIPID 0xE0000000
-#define S5PV210_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5PV210_PA_SYSCON
+#define S5PV210_PA_SYSCON 0xE0100000
-#define S5PV210_PA_GPIO (0xE0200000)
+#define S5PV210_PA_GPIO 0xE0200000
-/* SPI */
-#define S5PV210_PA_SPI0 0xE1300000
-#define S5PV210_PA_SPI1 0xE1400000
+#define S5PV210_PA_SPDIF 0xE1100000
-#define S5PV210_PA_KEYPAD (0xE1600000)
+#define S5PV210_PA_SPI0 0xE1300000
+#define S5PV210_PA_SPI1 0xE1400000
-#define S5PV210_PA_IIC0 (0xE1800000)
-#define S5PV210_PA_IIC1 (0xFAB00000)
-#define S5PV210_PA_IIC2 (0xE1A00000)
+#define S5PV210_PA_KEYPAD 0xE1600000
-#define S5PV210_PA_TIMER (0xE2500000)
-#define S5P_PA_TIMER S5PV210_PA_TIMER
+#define S5PV210_PA_ADC 0xE1700000
-#define S5PV210_PA_SYSTIMER (0xE2600000)
+#define S5PV210_PA_IIC0 0xE1800000
+#define S5PV210_PA_IIC1 0xFAB00000
+#define S5PV210_PA_IIC2 0xE1A00000
-#define S5PV210_PA_WATCHDOG (0xE2700000)
+#define S5PV210_PA_AC97 0xE2200000
-#define S5PV210_PA_RTC (0xE2800000)
-#define S5PV210_PA_UART (0xE2900000)
+#define S5PV210_PA_PCM0 0xE2300000
+#define S5PV210_PA_PCM1 0xE1200000
+#define S5PV210_PA_PCM2 0xE2B00000
-#define S5P_PA_UART0 (S5PV210_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5PV210_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5PV210_PA_UART + 0x800)
-#define S5P_PA_UART3 (S5PV210_PA_UART + 0xC00)
+#define S5PV210_PA_TIMER 0xE2500000
+#define S5PV210_PA_SYSTIMER 0xE2600000
+#define S5PV210_PA_WATCHDOG 0xE2700000
+#define S5PV210_PA_RTC 0xE2800000
-#define S5P_SZ_UART SZ_256
+#define S5PV210_PA_UART 0xE2900000
-#define S3C_VA_UARTx(x) (S3C_VA_UART + ((x) * S3C_UART_OFFSET))
+#define S5PV210_PA_SROMC 0xE8000000
-#define S5PV210_PA_SROMC (0xE8000000)
-#define S5P_PA_SROMC S5PV210_PA_SROMC
+#define S5PV210_PA_CFCON 0xE8200000
-#define S5PV210_PA_CFCON (0xE8200000)
+#define S5PV210_PA_HSMMC(x) (0xEB000000 + ((x) * 0x100000))
-#define S5PV210_PA_MDMA 0xFA200000
-#define S5PV210_PA_PDMA0 0xE0900000
-#define S5PV210_PA_PDMA1 0xE0A00000
+#define S5PV210_PA_HSOTG 0xEC000000
+#define S5PV210_PA_HSPHY 0xEC100000
-#define S5PV210_PA_FB (0xF8000000)
+#define S5PV210_PA_IIS0 0xEEE30000
+#define S5PV210_PA_IIS1 0xE2100000
+#define S5PV210_PA_IIS2 0xE2A00000
-#define S5PV210_PA_FIMC0 (0xFB200000)
-#define S5PV210_PA_FIMC1 (0xFB300000)
-#define S5PV210_PA_FIMC2 (0xFB400000)
+#define S5PV210_PA_DMC0 0xF0000000
+#define S5PV210_PA_DMC1 0xF1400000
-#define S5PV210_PA_HSMMC(x) (0xEB000000 + ((x) * 0x100000))
+#define S5PV210_PA_VIC0 0xF2000000
+#define S5PV210_PA_VIC1 0xF2100000
+#define S5PV210_PA_VIC2 0xF2200000
+#define S5PV210_PA_VIC3 0xF2300000
-#define S5PV210_PA_HSOTG (0xEC000000)
-#define S5PV210_PA_HSPHY (0xEC100000)
+#define S5PV210_PA_FB 0xF8000000
-#define S5PV210_PA_VIC0 (0xF2000000)
-#define S5PV210_PA_VIC1 (0xF2100000)
-#define S5PV210_PA_VIC2 (0xF2200000)
-#define S5PV210_PA_VIC3 (0xF2300000)
+#define S5PV210_PA_MDMA 0xFA200000
+#define S5PV210_PA_PDMA0 0xE0900000
+#define S5PV210_PA_PDMA1 0xE0A00000
-#define S5PV210_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5PV210_PA_SDRAM
+#define S5PV210_PA_MIPI_CSIS 0xFA600000
-/* S/PDIF */
-#define S5PV210_PA_SPDIF 0xE1100000
+#define S5PV210_PA_FIMC0 0xFB200000
+#define S5PV210_PA_FIMC1 0xFB300000
+#define S5PV210_PA_FIMC2 0xFB400000
-/* I2S */
-#define S5PV210_PA_IIS0 0xEEE30000
-#define S5PV210_PA_IIS1 0xE2100000
-#define S5PV210_PA_IIS2 0xE2A00000
+/* Compatibility Defines */
-/* PCM */
-#define S5PV210_PA_PCM0 0xE2300000
-#define S5PV210_PA_PCM1 0xE1200000
-#define S5PV210_PA_PCM2 0xE2B00000
+#define S3C_PA_FB S5PV210_PA_FB
+#define S3C_PA_HSMMC0 S5PV210_PA_HSMMC(0)
+#define S3C_PA_HSMMC1 S5PV210_PA_HSMMC(1)
+#define S3C_PA_HSMMC2 S5PV210_PA_HSMMC(2)
+#define S3C_PA_HSMMC3 S5PV210_PA_HSMMC(3)
+#define S3C_PA_IIC S5PV210_PA_IIC0
+#define S3C_PA_IIC1 S5PV210_PA_IIC1
+#define S3C_PA_IIC2 S5PV210_PA_IIC2
+#define S3C_PA_RTC S5PV210_PA_RTC
+#define S3C_PA_USB_HSOTG S5PV210_PA_HSOTG
+#define S3C_PA_WDT S5PV210_PA_WATCHDOG
-/* AC97 */
-#define S5PV210_PA_AC97 0xE2200000
+#define S5P_PA_CHIPID S5PV210_PA_CHIPID
+#define S5P_PA_FIMC0 S5PV210_PA_FIMC0
+#define S5P_PA_FIMC1 S5PV210_PA_FIMC1
+#define S5P_PA_FIMC2 S5PV210_PA_FIMC2
+#define S5P_PA_MIPI_CSIS0 S5PV210_PA_MIPI_CSIS
+#define S5P_PA_ONENAND S5PC110_PA_ONENAND
+#define S5P_PA_ONENAND_DMA S5PC110_PA_ONENAND_DMA
+#define S5P_PA_SDRAM S5PV210_PA_SDRAM
+#define S5P_PA_SROMC S5PV210_PA_SROMC
+#define S5P_PA_SYSCON S5PV210_PA_SYSCON
+#define S5P_PA_TIMER S5PV210_PA_TIMER
-#define S5PV210_PA_ADC (0xE1700000)
+#define SAMSUNG_PA_ADC S5PV210_PA_ADC
+#define SAMSUNG_PA_CFCON S5PV210_PA_CFCON
+#define SAMSUNG_PA_KEYPAD S5PV210_PA_KEYPAD
-#define S5PV210_PA_DMC0 (0xF0000000)
-#define S5PV210_PA_DMC1 (0xF1400000)
+/* UART */
-#define S5PV210_PA_MIPI_CSIS 0xFA600000
+#define S3C_VA_UARTx(x) (S3C_VA_UART + ((x) * S3C_UART_OFFSET))
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PV210_PA_UART
-#define S3C_PA_HSMMC0 S5PV210_PA_HSMMC(0)
-#define S3C_PA_HSMMC1 S5PV210_PA_HSMMC(1)
-#define S3C_PA_HSMMC2 S5PV210_PA_HSMMC(2)
-#define S3C_PA_HSMMC3 S5PV210_PA_HSMMC(3)
-#define S3C_PA_IIC S5PV210_PA_IIC0
-#define S3C_PA_IIC1 S5PV210_PA_IIC1
-#define S3C_PA_IIC2 S5PV210_PA_IIC2
-#define S3C_PA_FB S5PV210_PA_FB
-#define S3C_PA_RTC S5PV210_PA_RTC
-#define S3C_PA_WDT S5PV210_PA_WATCHDOG
-#define S3C_PA_USB_HSOTG S5PV210_PA_HSOTG
-#define S5P_PA_FIMC0 S5PV210_PA_FIMC0
-#define S5P_PA_FIMC1 S5PV210_PA_FIMC1
-#define S5P_PA_FIMC2 S5PV210_PA_FIMC2
-#define S5P_PA_MIPI_CSIS0 S5PV210_PA_MIPI_CSIS
+#define S3C_PA_UART S5PV210_PA_UART
-#define SAMSUNG_PA_ADC S5PV210_PA_ADC
-#define SAMSUNG_PA_CFCON S5PV210_PA_CFCON
-#define SAMSUNG_PA_KEYPAD S5PV210_PA_KEYPAD
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
static struct regulator_init_data aquila_ldo3_data = {
.constraints = {
- .name = "VUSB/MIPI_1.1V",
+ .name = "VUSB+MIPI_1.1V",
.min_uV = 1100000,
.max_uV = 1100000,
.apply_uV = 1,
static struct regulator_init_data aquila_ldo8_data = {
.constraints = {
- .name = "VUSB/VADC_3.3V",
+ .name = "VUSB+VADC_3.3V",
.min_uV = 3300000,
.max_uV = 3300000,
.apply_uV = 1,
static struct regulator_init_data aquila_ldo9_data = {
.constraints = {
- .name = "VCC/VCAM_2.8V",
+ .name = "VCC+VCAM_2.8V",
.min_uV = 2800000,
.max_uV = 2800000,
.apply_uV = 1,
.buck1_set1 = S5PV210_GPH0(3),
.buck1_set2 = S5PV210_GPH0(4),
.buck2_set3 = S5PV210_GPH0(5),
- .buck1_max_voltage1 = 1200000,
- .buck1_max_voltage2 = 1200000,
- .buck2_max_voltage = 1200000,
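+ /* one preset voltage per buck output selection: four for buck1, two for buck2 */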
+ .buck1_voltage1 = 1200000,
+ .buck1_voltage2 = 1200000,
+ .buck1_voltage3 = 1200000,
+ .buck1_voltage4 = 1200000,
+ .buck2_voltage1 = 1200000,
+ .buck2_voltage2 = 1200000,
};
#endif
static struct regulator_init_data goni_ldo3_data = {
.constraints = {
- .name = "VUSB/MIPI_1.1V",
+ .name = "VUSB+MIPI_1.1V",
.min_uV = 1100000,
.max_uV = 1100000,
.apply_uV = 1,
static struct regulator_init_data goni_ldo8_data = {
.constraints = {
- .name = "VUSB/VADC_3.3V",
+ .name = "VUSB+VADC_3.3V",
.min_uV = 3300000,
.max_uV = 3300000,
.apply_uV = 1,
static struct regulator_init_data goni_ldo9_data = {
.constraints = {
- .name = "VCC/VCAM_2.8V",
+ .name = "VCC+VCAM_2.8V",
.min_uV = 2800000,
.max_uV = 2800000,
.apply_uV = 1,
.buck1_set1 = S5PV210_GPH0(3),
.buck1_set2 = S5PV210_GPH0(4),
.buck2_set3 = S5PV210_GPH0(5),
- .buck1_max_voltage1 = 1200000,
- .buck1_max_voltage2 = 1200000,
- .buck2_max_voltage = 1200000,
+ .buck1_voltage1 = 1200000,
+ .buck1_voltage2 = 1200000,
+ .buck1_voltage3 = 1200000,
+ .buck1_voltage4 = 1200000,
+ .buck2_voltage1 = 1200000,
+ .buck2_voltage2 = 1200000,
};
#endif
select S3C_DEV_HSMMC2
select S3C_DEV_HSMMC3
select S5PV310_DEV_PD
+ select S5PV310_DEV_SYSMMU
select S5PV310_SETUP_I2C1
select S5PV310_SETUP_SDHCI
help
/* linux/arch/arm/mach-s5pv310/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5PV310 - Memory map definitions
#include <plat/map-s5p.h>
-#define S5PV310_PA_SYSRAM (0x02025000)
+#define S5PV310_PA_SYSRAM 0x02025000
-#define S5PV310_PA_SROM_BANK(x) (0x04000000 + ((x) * 0x01000000))
-
-#define S5PC210_PA_ONENAND (0x0C000000)
-#define S5P_PA_ONENAND S5PC210_PA_ONENAND
-
-#define S5PC210_PA_ONENAND_DMA (0x0C600000)
-#define S5P_PA_ONENAND_DMA S5PC210_PA_ONENAND_DMA
-
-#define S5PV310_PA_CHIPID (0x10000000)
-#define S5P_PA_CHIPID S5PV310_PA_CHIPID
-
-#define S5PV310_PA_SYSCON (0x10010000)
-#define S5P_PA_SYSCON S5PV310_PA_SYSCON
+#define S5PV310_PA_I2S0 0x03830000
+#define S5PV310_PA_I2S1 0xE3100000
+#define S5PV310_PA_I2S2 0xE2A00000
-#define S5PV310_PA_PMU (0x10020000)
+#define S5PV310_PA_PCM0 0x03840000
+#define S5PV310_PA_PCM1 0x13980000
+#define S5PV310_PA_PCM2 0x13990000
-#define S5PV310_PA_CMU (0x10030000)
-
-#define S5PV310_PA_WATCHDOG (0x10060000)
-#define S5PV310_PA_RTC (0x10070000)
-
-#define S5PV310_PA_DMC0 (0x10400000)
-
-#define S5PV310_PA_COMBINER (0x10448000)
-
-#define S5PV310_PA_COREPERI (0x10500000)
-#define S5PV310_PA_GIC_CPU (0x10500100)
-#define S5PV310_PA_TWD (0x10500600)
-#define S5PV310_PA_GIC_DIST (0x10501000)
-#define S5PV310_PA_L2CC (0x10502000)
-
-/* DMA */
-#define S5PV310_PA_MDMA 0x10810000
-#define S5PV310_PA_PDMA0 0x12680000
-#define S5PV310_PA_PDMA1 0x12690000
-
-#define S5PV310_PA_GPIO1 (0x11400000)
-#define S5PV310_PA_GPIO2 (0x11000000)
-#define S5PV310_PA_GPIO3 (0x03860000)
-
-#define S5PV310_PA_MIPI_CSIS0 0x11880000
-#define S5PV310_PA_MIPI_CSIS1 0x11890000
+#define S5PV310_PA_SROM_BANK(x) (0x04000000 + ((x) * 0x01000000))
-#define S5PV310_PA_HSMMC(x) (0x12510000 + ((x) * 0x10000))
+#define S5PC210_PA_ONENAND 0x0C000000
+#define S5PC210_PA_ONENAND_DMA 0x0C600000
-#define S5PV310_PA_SROMC (0x12570000)
-#define S5P_PA_SROMC S5PV310_PA_SROMC
+#define S5PV310_PA_CHIPID 0x10000000
-/* S/PDIF */
-#define S5PV310_PA_SPDIF 0xE1100000
+#define S5PV310_PA_SYSCON 0x10010000
+#define S5PV310_PA_PMU 0x10020000
+#define S5PV310_PA_CMU 0x10030000
-/* I2S */
-#define S5PV310_PA_I2S0 0x03830000
-#define S5PV310_PA_I2S1 0xE3100000
-#define S5PV310_PA_I2S2 0xE2A00000
+#define S5PV310_PA_WATCHDOG 0x10060000
+#define S5PV310_PA_RTC 0x10070000
-/* PCM */
-#define S5PV310_PA_PCM0 0x03840000
-#define S5PV310_PA_PCM1 0x13980000
-#define S5PV310_PA_PCM2 0x13990000
+#define S5PV310_PA_DMC0 0x10400000
-/* AC97 */
-#define S5PV310_PA_AC97 0x139A0000
+#define S5PV310_PA_COMBINER 0x10448000
-#define S5PV310_PA_UART (0x13800000)
+#define S5PV310_PA_COREPERI 0x10500000
+#define S5PV310_PA_GIC_CPU 0x10500100
+#define S5PV310_PA_TWD 0x10500600
+#define S5PV310_PA_GIC_DIST 0x10501000
+#define S5PV310_PA_L2CC 0x10502000
-#define S5P_PA_UART(x) (S5PV310_PA_UART + ((x) * S3C_UART_OFFSET))
-#define S5P_PA_UART0 S5P_PA_UART(0)
-#define S5P_PA_UART1 S5P_PA_UART(1)
-#define S5P_PA_UART2 S5P_PA_UART(2)
-#define S5P_PA_UART3 S5P_PA_UART(3)
-#define S5P_PA_UART4 S5P_PA_UART(4)
-
-#define S5P_SZ_UART SZ_256
-
-#define S5PV310_PA_IIC(x) (0x13860000 + ((x) * 0x10000))
-
-#define S5PV310_PA_TIMER (0x139D0000)
-#define S5P_PA_TIMER S5PV310_PA_TIMER
-
-#define S5PV310_PA_SDRAM (0x40000000)
-#define S5P_PA_SDRAM S5PV310_PA_SDRAM
+#define S5PV310_PA_MDMA 0x10810000
+#define S5PV310_PA_PDMA0 0x12680000
+#define S5PV310_PA_PDMA1 0x12690000
#define S5PV310_PA_SYSMMU_MDMA 0x10A40000
#define S5PV310_PA_SYSMMU_SSS 0x10A50000
#define S5PV310_PA_SYSMMU_TV 0x12E20000
#define S5PV310_PA_SYSMMU_MFC_L 0x13620000
#define S5PV310_PA_SYSMMU_MFC_R 0x13630000
-#define S5PV310_SYSMMU_TOTAL_IPNUM 16
-#define S5P_SYSMMU_TOTAL_IPNUM S5PV310_SYSMMU_TOTAL_IPNUM
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PV310_PA_UART
+#define S5PV310_PA_GPIO1 0x11400000
+#define S5PV310_PA_GPIO2 0x11000000
+#define S5PV310_PA_GPIO3 0x03860000
+
+#define S5PV310_PA_MIPI_CSIS0 0x11880000
+#define S5PV310_PA_MIPI_CSIS1 0x11890000
+
+#define S5PV310_PA_HSMMC(x) (0x12510000 + ((x) * 0x10000))
+
+#define S5PV310_PA_SROMC 0x12570000
+
+#define S5PV310_PA_UART 0x13800000
+
+#define S5PV310_PA_IIC(x) (0x13860000 + ((x) * 0x10000))
+
+#define S5PV310_PA_AC97 0x139A0000
+
+#define S5PV310_PA_TIMER 0x139D0000
+
+#define S5PV310_PA_SDRAM 0x40000000
+
+#define S5PV310_PA_SPDIF 0xE1100000
+
+/* Compatibility Defines */
+
#define S3C_PA_HSMMC0 S5PV310_PA_HSMMC(0)
#define S3C_PA_HSMMC1 S5PV310_PA_HSMMC(1)
#define S3C_PA_HSMMC2 S5PV310_PA_HSMMC(2)
#define S3C_PA_IIC7 S5PV310_PA_IIC(7)
#define S3C_PA_RTC S5PV310_PA_RTC
#define S3C_PA_WDT S5PV310_PA_WATCHDOG
+
+#define S5P_PA_CHIPID S5PV310_PA_CHIPID
#define S5P_PA_MIPI_CSIS0 S5PV310_PA_MIPI_CSIS0
#define S5P_PA_MIPI_CSIS1 S5PV310_PA_MIPI_CSIS1
+#define S5P_PA_ONENAND S5PC210_PA_ONENAND
+#define S5P_PA_ONENAND_DMA S5PC210_PA_ONENAND_DMA
+#define S5P_PA_SDRAM S5PV310_PA_SDRAM
+#define S5P_PA_SROMC S5PV310_PA_SROMC
+#define S5P_PA_SYSCON S5PV310_PA_SYSCON
+#define S5P_PA_TIMER S5PV310_PA_TIMER
+
+/* UART */
+
+#define S3C_PA_UART S5PV310_PA_UART
+
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
+#define S5P_PA_UART4 S5P_PA_UART(4)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
#ifndef __ASM_ARM_ARCH_SYSMMU_H
#define __ASM_ARM_ARCH_SYSMMU_H __FILE__
+#define S5PV310_SYSMMU_TOTAL_IPNUM 16
+#define S5P_SYSMMU_TOTAL_IPNUM S5PV310_SYSMMU_TOTAL_IPNUM
+
enum s5pv310_sysmmu_ips {
SYSMMU_MDMA,
SYSMMU_SSS,
SYSMMU_MFC_R,
};
-static char *sysmmu_ips_name[S5P_SYSMMU_TOTAL_IPNUM] = {
+static char *sysmmu_ips_name[S5PV310_SYSMMU_TOTAL_IPNUM] = {
"SYSMMU_MDMA" ,
"SYSMMU_SSS" ,
"SYSMMU_FIMC0" ,
struct platform_device collie_locomo_device = {
.name = "locomo",
.id = 0,
+ .dev = {
+ .platform_data = &locomo_info,
+ },
.num_resources = ARRAY_SIZE(locomo_resources),
.resource = locomo_resources,
};
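
A minimal sketch (assuming the usual struct locomo_platform_data type from the locomo header; the function name is illustrative) of how a driver probe would pick up the platform data now attached above:

	static int example_locomo_probe(struct platform_device *pdev)
	{
		struct locomo_platform_data *pdata = dev_get_platdata(&pdev->dev);

		if (!pdata)
			return -EINVAL;
		/* board-specific data (e.g. the IRQ base) now comes from the device, not a hardcoded default */
		return 0;
	}
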
config MACH_AG5EVM
bool "AG5EVM board"
+ select ARCH_REQUIRE_GPIOLIB
+ select SH_LCD_MIPI_DSI
depends on ARCH_SH73A0
config MACH_MACKEREL
#include <linux/input/sh_keysc.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sh_mmcif.h>
-
+#include <linux/sh_clk.h>
+#include <video/sh_mobile_lcdc.h>
+#include <video/sh_mipi_dsi.h>
#include <sound/sh_fsi.h>
-
#include <mach/hardware.h>
#include <mach/sh73a0.h>
#include <mach/common.h>
.resource = sh_mmcif_resources,
};
+/* IrDA */
+static struct resource irda_resources[] = {
+ [0] = {
+ .start = 0xE6D00000,
+ .end = 0xE6D01FD4 - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = gic_spi(95),
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device irda_device = {
+ .name = "sh_irda",
+ .id = 0,
+ .resource = irda_resources,
+ .num_resources = ARRAY_SIZE(irda_resources),
+};
+
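+/* register/value pairs written over I2C (bus 1, chip address 0x6d) to switch the LCD backlight controller on */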
+static unsigned char lcd_backlight_seq[3][2] = {
+ { 0x04, 0x07 },
+ { 0x23, 0x80 },
+ { 0x03, 0x01 },
+};
+
+static void lcd_backlight_on(void)
+{
+ struct i2c_adapter *a;
+ struct i2c_msg msg;
+ int k;
+
+ a = i2c_get_adapter(1);
+ for (k = 0; a && k < 3; k++) {
+ msg.addr = 0x6d;
+ msg.buf = &lcd_backlight_seq[k][0];
+ msg.len = 2;
+ msg.flags = 0;
+ if (i2c_transfer(a, &msg, 1) != 1)
+ break;
+ }
+}
+
+static void lcd_backlight_reset(void)
+{
+ gpio_set_value(GPIO_PORT235, 0);
+ mdelay(24);
+ gpio_set_value(GPIO_PORT235, 1);
+}
+
+static void lcd_on(void *board_data, struct fb_info *info)
+{
+ lcd_backlight_on();
+}
+
+static void lcd_off(void *board_data)
+{
+ lcd_backlight_reset();
+}
+
+/* LCDC0 */
+static const struct fb_videomode lcdc0_modes[] = {
+ {
+ .name = "R63302(QHD)",
+ .xres = 544,
+ .yres = 961,
+ .left_margin = 72,
+ .right_margin = 600,
+ .hsync_len = 16,
+ .upper_margin = 8,
+ .lower_margin = 8,
+ .vsync_len = 2,
+ .sync = FB_SYNC_VERT_HIGH_ACT | FB_SYNC_HOR_HIGH_ACT,
+ },
+};
+
+static struct sh_mobile_lcdc_info lcdc0_info = {
+ .clock_source = LCDC_CLK_PERIPHERAL,
+ .ch[0] = {
+ .chan = LCDC_CHAN_MAINLCD,
+ .interface_type = RGB24,
+ .clock_divider = 1,
+ .flags = LCDC_FLAGS_DWPOL,
+ .lcd_size_cfg.width = 44,
+ .lcd_size_cfg.height = 79,
+ .bpp = 16,
+ .lcd_cfg = lcdc0_modes,
+ .num_cfg = ARRAY_SIZE(lcdc0_modes),
+ .board_cfg = {
+ .display_on = lcd_on,
+ .display_off = lcd_off,
+ },
+ }
+};
+
+static struct resource lcdc0_resources[] = {
+ [0] = {
+ .name = "LCDC0",
+ .start = 0xfe940000, /* P4-only space */
+ .end = 0xfe943fff,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = intcs_evt2irq(0x580),
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device lcdc0_device = {
+ .name = "sh_mobile_lcdc_fb",
+ .num_resources = ARRAY_SIZE(lcdc0_resources),
+ .resource = lcdc0_resources,
+ .id = 0,
+ .dev = {
+ .platform_data = &lcdc0_info,
+ .coherent_dma_mask = ~0,
+ },
+};
+
+/* MIPI-DSI */
+static struct resource mipidsi0_resources[] = {
+ [0] = {
+ .start = 0xfeab0000,
+ .end = 0xfeab3fff,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = 0xfeab4000,
+ .end = 0xfeab7fff,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct sh_mipi_dsi_info mipidsi0_info = {
+ .data_format = MIPI_RGB888,
+ .lcd_chan = &lcdc0_info.ch[0],
+ .vsynw_offset = 20,
+ .clksrc = 1,
+ .flags = SH_MIPI_DSI_HSABM,
+};
+
+static struct platform_device mipidsi0_device = {
+ .name = "sh-mipi-dsi",
+ .num_resources = ARRAY_SIZE(mipidsi0_resources),
+ .resource = mipidsi0_resources,
+ .id = 0,
+ .dev = {
+ .platform_data = &mipidsi0_info,
+ },
+};
+
static struct platform_device *ag5evm_devices[] __initdata = {
ð_device,
&keysc_device,
&fsi_device,
&mmc_device,
+ &irda_device,
+ &lcdc0_device,
+ &mipidsi0_device,
};
static struct map_desc ag5evm_io_desc[] __initdata = {
__raw_writew(__raw_readw(PINTCR0A) | (2<<10), PINTCR0A);
}
+#define DSI0PHYCR 0xe615006c
+
static void __init ag5evm_init(void)
{
sh73a0_pinmux_init();
gpio_request(GPIO_FN_FSIAISLD, NULL);
gpio_request(GPIO_FN_FSIAOSLD, NULL);
+ /* IrDA */
+ gpio_request(GPIO_FN_PORT241_IRDA_OUT, NULL);
+ gpio_request(GPIO_FN_PORT242_IRDA_IN, NULL);
+ gpio_request(GPIO_FN_PORT243_IRDA_FIRSEL, NULL);
+
+ /* LCD panel */
+ gpio_request(GPIO_PORT217, NULL); /* RESET */
+ gpio_direction_output(GPIO_PORT217, 0);
+ mdelay(1);
+ gpio_set_value(GPIO_PORT217, 1);
+
+ /* LCD backlight controller */
+ gpio_request(GPIO_PORT235, NULL); /* RESET */
+ gpio_direction_output(GPIO_PORT235, 0);
+ lcd_backlight_reset();
+
+ /* MIPI-DSI clock setup */
+ __raw_writel(0x2a809010, DSI0PHYCR);
+
#ifdef CONFIG_CACHE_L2X0
/* Shared attribute override enable, 64K*8way */
l2x0_init(__io(0xf0100000), 0x00460000, 0xc2000fff);
gpio_request(GPIO_FN_IRDA_OUT, NULL);
gpio_request(GPIO_FN_IRDA_IN, NULL);
gpio_request(GPIO_FN_IRDA_FIRSEL, NULL);
- set_irq_type(evt2irq(0x480), IRQ_TYPE_LEVEL_LOW);
sh7367_add_standard_devices();
*        SW1  |           SW33
*             | bit1 | bit2 | bit3 | bit4
* ------------+------+------+------+------
- * MMC0  OFF  |  OFF |  ON  |  ON  |  X
- * MMC1  ON   |  OFF |  ON  |  X   |  ON
- * SDHI1 OFF  |  ON  |  X   |  OFF |  ON
+ * MMC0  OFF  |  OFF |  X   |  ON  |  X     (Use MMCIF)
+ * SDHI1 OFF  |  ON  |  X   |  OFF |  X     (Use MFD_SH_MOBILE_SDHI)
*
*/
value = __raw_readl(PLLC2CR) & ~(0x3f << 24);
- __raw_writel((value & ~0x80000000) | ((idx + 19) << 24), PLLC2CR);
+ __raw_writel(value | ((idx + 19) << 24), PLLC2CR);
+
+ clk->rate = clk->freq_table[idx].frequency;
return 0;
}
{
unsigned long mult = 1;
- if (__raw_readl(PLLECR) & (1 << clk->enable_bit))
+ if (__raw_readl(PLLECR) & (1 << clk->enable_bit)) {
mult = (((__raw_readl(clk->enable_reg) >> 24) & 0x3f) + 1);
+ /* handle CFG bit for PLL1 and PLL2 */
+ switch (clk->enable_bit) {
+ case 1:
+ case 2:
+ if (__raw_readl(clk->enable_reg) & (1 << 20))
+ mult *= 2;
+ }
+ }
return clk->parent->rate * mult;
}
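
To make the multiplier handling above concrete (illustrative numbers only): if the 6-bit field in bits 24-29 of the PLL control register reads 0x17, then mult = 0x17 + 1 = 24; for PLL1 or PLL2 (enable_bit 1 or 2) with the CFG bit (bit 20) also set, the multiplier is doubled to 48 and the function returns parent->rate * 48.
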
static struct clk div4_clks[DIV4_NR] = {
[DIV4_I] = DIV4(FRQCRA, 20, 0xfff, CLK_ENABLE_ON_INIT),
[DIV4_ZG] = DIV4(FRQCRA, 16, 0xbff, CLK_ENABLE_ON_INIT),
- [DIV4_M3] = DIV4(FRQCRA, 8, 0xfff, CLK_ENABLE_ON_INIT),
+ [DIV4_M3] = DIV4(FRQCRA, 12, 0xfff, CLK_ENABLE_ON_INIT),
[DIV4_B] = DIV4(FRQCRA, 8, 0xfff, CLK_ENABLE_ON_INIT),
[DIV4_M1] = DIV4(FRQCRA, 4, 0xfff, 0),
[DIV4_M2] = DIV4(FRQCRA, 0, 0xfff, 0),
};
enum { MSTP001,
- MSTP125, MSTP116,
+ MSTP125, MSTP118, MSTP116, MSTP100,
MSTP219,
MSTP207, MSTP206, MSTP204, MSTP203, MSTP202, MSTP201, MSTP200,
- MSTP331, MSTP329, MSTP323, MSTP312,
+ MSTP331, MSTP329, MSTP325, MSTP323, MSTP312,
MSTP411, MSTP410, MSTP403,
MSTP_NR };
static struct clk mstp_clks[MSTP_NR] = {
[MSTP001] = MSTP(&div4_clks[DIV4_HP], SMSTPCR0, 1, 0), /* IIC2 */
[MSTP125] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR1, 25, 0), /* TMU0 */
+ [MSTP118] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 18, 0), /* DSITX0 */
[MSTP116] = MSTP(&div4_clks[DIV4_HP], SMSTPCR1, 16, 0), /* IIC0 */
+ [MSTP100] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 0, 0), /* LCDC0 */
[MSTP219] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 19, 0), /* SCIFA7 */
[MSTP207] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 7, 0), /* SCIFA5 */
[MSTP206] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 6, 0), /* SCIFB */
[MSTP200] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 0, 0), /* SCIFA4 */
[MSTP331] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 31, 0), /* SCIFA6 */
[MSTP329] = MSTP(&r_clk, SMSTPCR3, 29, 0), /* CMT10 */
+ [MSTP325] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR3, 25, 0), /* IrDA */
[MSTP323] = MSTP(&div4_clks[DIV4_HP], SMSTPCR3, 23, 0), /* IIC1 */
[MSTP312] = MSTP(&div4_clks[DIV4_HP], SMSTPCR3, 12, 0), /* MMCIF0 */
[MSTP411] = MSTP(&div4_clks[DIV4_HP], SMSTPCR4, 11, 0), /* IIC3 */
#define CLKDEV_CON_ID(_id, _clk) { .con_id = _id, .clk = _clk }
#define CLKDEV_DEV_ID(_id, _clk) { .dev_id = _id, .clk = _clk }
+#define CLKDEV_ICK_ID(_cid, _did, _clk) { .con_id = _cid, .dev_id = _did, .clk = _clk }
static struct clk_lookup lookups[] = {
/* main clocks */
CLKDEV_CON_ID("r_clk", &r_clk),
+ /* DIV6 clocks */
+ CLKDEV_ICK_ID("dsit_clk", "sh-mipi-dsi.0", &div6_clks[DIV6_DSIT]),
+ CLKDEV_ICK_ID("dsit_clk", "sh-mipi-dsi.1", &div6_clks[DIV6_DSIT]),
+ CLKDEV_ICK_ID("dsi0p_clk", "sh-mipi-dsi.0", &div6_clks[DIV6_DSI0P]),
+ CLKDEV_ICK_ID("dsi1p_clk", "sh-mipi-dsi.1", &div6_clks[DIV6_DSI1P]),
+
/* MSTP32 clocks */
CLKDEV_DEV_ID("i2c-sh_mobile.2", &mstp_clks[MSTP001]), /* I2C2 */
+ CLKDEV_DEV_ID("sh_mobile_lcdc_fb.0", &mstp_clks[MSTP100]), /* LCDC0 */
CLKDEV_DEV_ID("sh_tmu.0", &mstp_clks[MSTP125]), /* TMU00 */
CLKDEV_DEV_ID("sh_tmu.1", &mstp_clks[MSTP125]), /* TMU01 */
CLKDEV_DEV_ID("i2c-sh_mobile.0", &mstp_clks[MSTP116]), /* I2C0 */
+ CLKDEV_DEV_ID("sh-mipi-dsi.0", &mstp_clks[MSTP118]), /* DSITX */
CLKDEV_DEV_ID("sh-sci.7", &mstp_clks[MSTP219]), /* SCIFA7 */
CLKDEV_DEV_ID("sh-sci.5", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.8", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("sh-sci.4", &mstp_clks[MSTP200]), /* SCIFA4 */
CLKDEV_DEV_ID("sh-sci.6", &mstp_clks[MSTP331]), /* SCIFA6 */
CLKDEV_DEV_ID("sh_cmt.10", &mstp_clks[MSTP329]), /* CMT10 */
+ CLKDEV_DEV_ID("sh_irda.0", &mstp_clks[MSTP325]), /* IrDA */
CLKDEV_DEV_ID("i2c-sh_mobile.1", &mstp_clks[MSTP323]), /* I2C1 */
CLKDEV_DEV_ID("sh_mmcif.0", &mstp_clks[MSTP312]), /* MMCIF0 */
CLKDEV_DEV_ID("i2c-sh_mobile.3", &mstp_clks[MSTP411]), /* I2C3 */
enum {
UNUSED_INTCS = 0,
+ ENABLED_INTCS,
INTCS,
CMT4,
DSITX1_DSITX1_0,
DSITX1_DSITX1_1,
- /* MFIS2 */
+ MFIS2_INTCS, /* Priority always enabled using ENABLED_INTCS */
CPORTS2R,
/* CEC */
JPU6E,
INTCS_VECT(CMT4, 0x1980),
INTCS_VECT(DSITX1_DSITX1_0, 0x19a0),
INTCS_VECT(DSITX1_DSITX1_1, 0x19c0),
- /* MFIS2 */
+ INTCS_VECT(MFIS2_INTCS, 0x1a00),
INTCS_VECT(CPORTS2R, 0x1a20),
/* CEC */
INTCS_VECT(JPU6E, 0x1a80),
{ 0, TMU1_TUNI2, TMU1_TUNI1, TMU1_TUNI0,
CMT4, DSITX1_DSITX1_0, DSITX1_DSITX1_1, 0 } },
{ 0xffd5019c, 0xffd501dc, 8, /* IMR7SA3 / IMCR7SA3 */
- { 0, CPORTS2R, 0, 0,
+ { MFIS2_INTCS, CPORTS2R, 0, 0,
JPU6E, 0, 0, 0 } },
{ 0xffd20104, 0, 16, /* INTAMASK */
{ 0, 0, 0, 0, 0, 0, 0, 0,
{ 0xffd50030, 0, 16, 4, /* IPRMS3 */ { TMU1, 0, 0, 0 } },
{ 0xffd50034, 0, 16, 4, /* IPRNS3 */ { CMT4, DSITX1_DSITX1_0,
DSITX1_DSITX1_1, 0 } },
- { 0xffd50038, 0, 16, 4, /* IPROS3 */ { 0, CPORTS2R, 0, 0 } },
+ { 0xffd50038, 0, 16, 4, /* IPROS3 */ { ENABLED_INTCS, CPORTS2R,
+ 0, 0 } },
{ 0xffd5003c, 0, 16, 4, /* IPRPS3 */ { JPU6E, 0, 0, 0 } },
};
static struct intc_desc intcs_desc __initdata = {
.name = "sh7372-intcs",
+ .force_enable = ENABLED_INTCS,
.resource = intcs_resources,
.num_resources = ARRAY_SIZE(intcs_resources),
.hw = INTC_HW_DESC(intcs_vectors, intcs_groups, intcs_mask_registers,
void __init sh73a0_init_irq(void)
{
- void __iomem *gic_base = __io(0xf0001000);
+ void __iomem *gic_dist_base = __io(0xf0001000);
+ void __iomem *gic_cpu_base = __io(0xf0000100);
void __iomem *intevtsa = ioremap_nocache(0xffd20100, PAGE_SIZE);
- gic_init(0, 29, gic_base, gic_base);
+ gic_init(0, 29, gic_dist_base, gic_cpu_base);
register_intc_controller(&intcs_desc);
#define SPEAR320_SMII1_BASE 0xAB000000
#define SPEAR320_SMII1_SIZE 0x01000000
-#define SPEAR320_SOC_CONFIG_BASE 0xB4000000
+#define SPEAR320_SOC_CONFIG_BASE 0xB3000000
#define SPEAR320_SOC_CONFIG_SIZE 0x00000070
/* Interrupt registers offsets and masks */
#define INT_STS_MASK_REG 0x04
spin_unlock_irqrestore(&bank->lvl_lock[port], flags);
if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
- __set_irq_handler_unlocked(irq, handle_level_irq);
+ __set_irq_handler_unlocked(d->irq, handle_level_irq);
else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
- __set_irq_handler_unlocked(irq, handle_edge_irq);
+ __set_irq_handler_unlocked(d->irq, handle_edge_irq);
return 0;
}
#ifndef __MACH_CLK_H
#define __MACH_CLK_H
+struct clk;
+
void tegra_periph_reset_deassert(struct clk *c);
void tegra_periph_reset_assert(struct clk *c);
#ifndef __MACH_CLKDEV_H
#define __MACH_CLKDEV_H
+struct clk;
+
static inline int __clk_get(struct clk *clk)
{
return 1;
--- /dev/null
+/*
+ * Platform definitions for tegra-kbc keyboard input driver
+ *
+ * Copyright (c) 2010-2011, NVIDIA Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef ASMARM_ARCH_TEGRA_KBC_H
+#define ASMARM_ARCH_TEGRA_KBC_H
+
+#include <linux/types.h>
+#include <linux/input/matrix_keypad.h>
+
+#ifdef CONFIG_ARCH_TEGRA_2x_SOC
+#define KBC_MAX_GPIO 24
+#define KBC_MAX_KPENT 8
+#else
+#define KBC_MAX_GPIO 20
+#define KBC_MAX_KPENT 7
+#endif
+
+#define KBC_MAX_ROW 16
+#define KBC_MAX_COL 8
+#define KBC_MAX_KEY (KBC_MAX_ROW * KBC_MAX_COL)
+
+struct tegra_kbc_pin_cfg {
+ bool is_row;
+ unsigned char num;
+};
+
+struct tegra_kbc_wake_key {
+ u8 row:4;
+ u8 col:4;
+};
+
+struct tegra_kbc_platform_data {
+ unsigned int debounce_cnt;
+ unsigned int repeat_cnt;
+
+ unsigned int wake_cnt; /* 0: wake on any key; >1: wake only on the keys listed in wake_cfg */
+ const struct tegra_kbc_wake_key *wake_cfg;
+
+ struct tegra_kbc_pin_cfg pin_cfg[KBC_MAX_GPIO];
+ const struct matrix_keymap_data *keymap_data;
+
+ bool wakeup;
+};
+#endif
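
A minimal, hypothetical board-file sketch using the platform data declared above (all values are made up for illustration; only the field names come from the new header):

	static const struct tegra_kbc_wake_key example_wake_keys[] = {
		{ .row = 0, .col = 0 },		/* wake on the key at row 0, column 0 */
		{ .row = 1, .col = 7 },
	};

	static struct tegra_kbc_platform_data example_kbc_pdata = {
		.debounce_cnt	= 20,
		.repeat_cnt	= 1024,
		.wake_cnt	= ARRAY_SIZE(example_wake_keys),
		.wake_cfg	= example_wake_keys,
		.wakeup		= true,
		/* .pin_cfg[] and .keymap_data would be filled in from the board wiring */
	};
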
#define ICTLR_COP_IER_CLR 0x38
#define ICTLR_COP_IEP_CLASS 0x3c
-static void (*gic_mask_irq)(struct irq_data *d);
-static void (*gic_unmask_irq)(struct irq_data *d);
+static void (*tegra_gic_mask_irq)(struct irq_data *d);
+static void (*tegra_gic_unmask_irq)(struct irq_data *d);
-#define irq_to_ictlr(irq) (((irq)-32) >> 5)
+#define irq_to_ictlr(irq) (((irq) - 32) >> 5)
static void __iomem *tegra_ictlr_base = IO_ADDRESS(TEGRA_PRIMARY_ICTLR_BASE);
-#define ictlr_to_virt(ictlr) (tegra_ictlr_base + (ictlr)*0x100)
+#define ictlr_to_virt(ictlr) (tegra_ictlr_base + (ictlr) * 0x100)
static void tegra_mask(struct irq_data *d)
{
void __iomem *addr = ictlr_to_virt(irq_to_ictlr(d->irq));
- gic_mask_irq(d);
- writel(1<<(d->irq&31), addr+ICTLR_CPU_IER_CLR);
+ tegra_gic_mask_irq(d);
+ writel(1 << (d->irq & 31), addr+ICTLR_CPU_IER_CLR);
}
static void tegra_unmask(struct irq_data *d)
{
void __iomem *addr = ictlr_to_virt(irq_to_ictlr(d->irq));
- gic_unmask_irq(d);
+ tegra_gic_unmask_irq(d);
writel(1<<(d->irq&31), addr+ICTLR_CPU_IER_SET);
}
IO_ADDRESS(TEGRA_ARM_PERIF_BASE + 0x100));
gic = get_irq_chip(29);
- gic_unmask_irq = gic->irq_unmask;
- gic_mask_irq = gic->irq_mask;
+ tegra_gic_unmask_irq = gic->irq_unmask;
+ tegra_gic_mask_irq = gic->irq_mask;
tegra_irq.irq_ack = gic->irq_ack;
#ifdef CONFIG_SMP
tegra_irq.irq_set_affinity = gic->irq_set_affinity;
depends on ARCH_VERSATILE
config ARCH_VERSATILE_PB
- bool "Support Versatile/PB platform"
+ bool "Support Versatile Platform Baseboard for ARM926EJ-S"
select CPU_ARM926T
select MIGHT_HAVE_PCI
default y
help
- Include support for the ARM(R) Versatile/PB platform.
+ Include support for the ARM(R) Versatile Platform Baseboard
+ for the ARM926EJ-S.
config MACH_VERSATILE_AB
- bool "Support Versatile/AB platform"
+ bool "Support Versatile Application Baseboard for ARM926EJ-S"
select CPU_ARM926T
help
- Include support for the ARM(R) Versatile/AP platform.
+ Include support for the ARM(R) Versatile Application Baseboard
+ for the ARM926EJ-S.
endmenu
* observers, irrespective of whether they're taking part in coherency
* or not. This is necessary for the hotplug code to work reliably.
*/
-static void write_pen_release(int val)
+static void __cpuinit write_pen_release(int val)
{
pen_release = val;
smp_wmb();
#include <asm/mach/time.h>
#include <asm/hardware/arm_timer.h>
#include <asm/hardware/timer-sp.h>
+#include <asm/hardware/sp810.h>
#include <mach/motherboard.h>
static void __init v2m_timer_init(void)
{
+ u32 scctrl;
+
versatile_sched_clock_init(MMIO_P2V(V2M_SYS_24MHZ), 24000000);
+ /* Select 1MHz TIMCLK as the reference clock for SP804 timers */
+ scctrl = readl(MMIO_P2V(V2M_SYSCTL + SCCTRL));
+ scctrl |= SCCTRL_TIMEREN0SEL_TIMCLK;
+ scctrl |= SCCTRL_TIMEREN1SEL_TIMCLK;
+ writel(scctrl, MMIO_P2V(V2M_SYSCTL + SCCTRL));
+
writel(0, MMIO_P2V(V2M_TIMER0) + TIMER_CTRL);
writel(0, MMIO_P2V(V2M_TIMER1) + TIMER_CTRL);
config CPU_32v6K
bool "Support ARM V6K processor extensions" if !SMP
depends on CPU_V6 || CPU_V7
- default y if SMP && !(ARCH_MX3 || ARCH_OMAP2)
+ default y if SMP
help
Say Y here if your ARMv6 processor supports the 'K' extension.
This enables the kernel to use some instructions not present
# ARMv7
config CPU_V7
bool "Support ARM V7 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX
- select CPU_32v6K if !ARCH_OMAP2
+ select CPU_32v6K
select CPU_32v7
select CPU_ABRT_EV7
select CPU_PABRT_V7
config SWP_EMULATE
bool "Emulate SWP/SWPB instructions"
- depends on CPU_V7 && !CPU_V6
+ depends on !CPU_USE_DOMAINS && CPU_V7 && !CPU_V6
select HAVE_PROC_CPU if PROC_FS
default y if SMP
help
static inline void cache_sync(void)
{
void __iomem *base = l2x0_base;
+
+#ifdef CONFIG_ARM_ERRATA_753970
+ /* write to an unmapped register */
+ writel_relaxed(0, base + L2X0_DUMMY_REG);
+#else
writel_relaxed(0, base + L2X0_CACHE_SYNC);
+#endif
cache_wait(base + L2X0_CACHE_SYNC, 1);
}
memblock_reserve(__pa(_stext), _end - _stext);
#endif
#ifdef CONFIG_BLK_DEV_INITRD
+ if (phys_initrd_size &&
+ memblock_is_region_reserved(phys_initrd_start, phys_initrd_size)) {
+ pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region - disabling initrd\n",
+ phys_initrd_start, phys_initrd_size);
+ phys_initrd_start = phys_initrd_size = 0;
+ }
if (phys_initrd_size) {
memblock_reserve(phys_initrd_start, phys_initrd_size);
orreq r10, r10, #1 << 6 @ set bit #6
mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register
#endif
+#ifdef CONFIG_ARM_ERRATA_751472
+ cmp r6, #0x30 @ present prior to r3p0
+ mrclt p15, 0, r10, c15, c0, 1 @ read diagnostic register
+ orrlt r10, r10, #1 << 11 @ set bit #11
+ mcrlt p15, 0, r10, c15, c0, 1 @ write diagnostic register
+#endif
3: mov r10, #0
#ifdef HARVARD_CACHE
*/
#include <linux/cpumask.h>
-#include <linux/err.h>
-#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/oprofile.h>
return NULL;
}
}
+#endif
static int report_trace(struct stackframe *frame, void *d)
{
/* frame pointers should strictly progress back up the stack
* (towards higher addresses) */
- if (tail >= buftail[0].fp)
+ if (tail + 1 >= buftail[0].fp)
return NULL;
return buftail[0].fp-1;
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
+ /* provide backtrace support also in timer mode: */
ops->backtrace = arm_backtrace;
return oprofile_perf_init(ops);
{
oprofile_perf_exit();
}
-#else
-int __init oprofile_arch_init(struct oprofile_operations *ops)
-{
- pr_info("oprofile: hardware counters not available\n");
- return -ENODEV;
-}
-void __exit oprofile_arch_exit(void) {}
-#endif /* CONFIG_HW_PERF_EVENTS */
case MACH_TYPE_MX35_3DS:
case MACH_TYPE_PCM043:
case MACH_TYPE_LILLY1131:
+ case MACH_TYPE_VPR200:
uart_base = MX3X_UART1_BASE_ADDR;
break;
case MACH_TYPE_MAGX_ZN5:
break;
case MACH_TYPE_MX51_BABBAGE:
case MACH_TYPE_EUKREA_CPUIMX51SD:
+ case MACH_TYPE_MX51_3DS:
uart_base = MX51_UART1_BASE_ADDR;
break;
case MACH_TYPE_MX50_RDP:
config OMAP_IOMMU_IVA2
bool
-choice
- prompt "System timer"
- default OMAP_32K_TIMER if !ARCH_OMAP15XX
-
config OMAP_MPU_TIMER
bool "Use mpu timer"
+ depends on ARCH_OMAP1
help
Select this option if you want to use the OMAP mpu timer. This
timer provides more intra-tick resolution than the 32KHz timer,
config OMAP_32K_TIMER
bool "Use 32KHz timer"
depends on ARCH_OMAP16XX || ARCH_OMAP2PLUS
+ default y if (ARCH_OMAP16XX || ARCH_OMAP2PLUS)
help
Select this option if you want to enable the OMAP 32KHz timer.
This timer saves power compared to the OMAP_MPU_TIMER, and has
intra-tick resolution than OMAP_MPU_TIMER. The 32KHz timer is
currently only available for OMAP16XX, 24XX, 34XX and OMAP4.
-endchoice
-
config OMAP3_L2_AUX_SECURE_SAVE_RESTORE
bool "OMAP3 HS/EMU save and restore for L2 AUX control register"
depends on ARCH_OMAP3 && PM
#define OMAP16XX_TIMER_32K_SYNCHRONIZED 0xfffbc410
-#if !(defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP15XX))
-
#include <linux/clocksource.h>
/*
#define SC_MULT 4000000000u
#define SC_SHIFT 17
-unsigned long long notrace sched_clock(void)
+static inline unsigned long long notrace _omap_32k_sched_clock(void)
{
u32 cyc = clocksource_32k.read(&clocksource_32k);
return cyc_to_fixed_sched_clock(&cd, cyc, (u32)~0, SC_MULT, SC_SHIFT);
}
+#ifndef CONFIG_OMAP_MPU_TIMER
+unsigned long long notrace sched_clock(void)
+{
+ return _omap_32k_sched_clock();
+}
+#else
+unsigned long long notrace omap_32k_sched_clock(void)
+{
+ return _omap_32k_sched_clock();
+}
+#endif
+
static void notrace omap_update_sched_clock(void)
{
u32 cyc = clocksource_32k.read(&clocksource_32k);
*ts = *tsp;
}
-static int __init omap_init_clocksource_32k(void)
+int __init omap_init_clocksource_32k(void)
{
static char err[] __initdata = KERN_ERR
"%s: can't register clocksource!\n";
}
return 0;
}
-arch_initcall(omap_init_clocksource_32k);
-
-#endif /* !(defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP15XX)) */
-
#endif
#define OMAP_DMA_ACTIVE 0x01
-#define OMAP2_DMA_CSR_CLEAR_MASK 0xffe
+#define OMAP2_DMA_CSR_CLEAR_MASK 0xffffffff
#define OMAP_FUNC_MUX_ARM_BASE (0xfffe1000 + 0xec)
printk(KERN_INFO "DMA misaligned error with device %d\n",
dma_chan[ch].dev_id);
- p->dma_write(OMAP2_DMA_CSR_CLEAR_MASK, CSR, ch);
+ p->dma_write(status, CSR, ch);
p->dma_write(1 << ch, IRQSTATUS_L0, ch);
/* read back the register to flush the write */
p->dma_read(IRQSTATUS_L0, ch);
OMAP_DMA_CHAIN_INCQHEAD(chain_id);
status = p->dma_read(CSR, ch);
+ p->dma_write(status, CSR, ch);
}
- p->dma_write(status, CSR, ch);
-
if (likely(dma_chan[ch].callback != NULL))
dma_chan[ch].callback(ch, status, dma_chan[ch].data);
extern void omap_map_common_io(void);
extern struct sys_timer omap_timer;
+extern bool omap_32k_timer_init(void);
+extern int __init omap_init_clocksource_32k(void);
+extern unsigned long long notrace omap_32k_sched_clock(void);
extern void omap_reserve(void);
#define mfp_configured(p) ((p)->config != -1)
/*
- * perform a read-back of any MFPR register to make sure the
+ * perform a read-back of any valid MFPR register to make sure the
* previous writes have finished
*/
-#define mfpr_sync() (void)__raw_readl(mfpr_mmio_base + 0)
+static unsigned long mfpr_off_readback;
+#define mfpr_sync() (void)__raw_readl(mfpr_mmio_base + mfpr_off_readback)
static inline void __mfp_config_run(struct mfp_pin *p)
{
spin_lock_irqsave(&mfp_spin_lock, flags);
+ /* mfp offset for readback */
+ mfpr_off_readback = map[0].offset;
+
for (p = map; p->start != MFP_PIN_INVALID; p++) {
offset = p->offset;
i = p->start;
help
Common code for the GPIO interrupts (other than external interrupts.)
+comment "System MMU"
+
+config S5P_SYSTEM_MMU
+ bool "S5P SYSTEM MMU"
+ depends on ARCH_S5PV310
+ help
+ Say Y here if you want to enable the System MMU.
+
config S5P_DEV_FIMC0
bool
help
bool
help
Compile in platform device definitions for MIPI-CSIS channel 1
-
-menuconfig S5P_SYSMMU
- bool "SYSMMU support"
- depends on ARCH_S5PV310
- help
- This is a System MMU driver for Samsung ARM based Soc.
-
-if S5P_SYSMMU
-
-config S5P_SYSMMU_DEBUG
- bool "Enables debug messages"
- depends on S5P_SYSMMU
- help
- This enables SYSMMU driver debug massages.
-
-endif
obj-y += irq.o
obj-$(CONFIG_S5P_EXT_INT) += irq-eint.o
obj-$(CONFIG_S5P_GPIO_INT) += irq-gpioint.o
+obj-$(CONFIG_S5P_SYSTEM_MMU) += sysmmu.o
obj-$(CONFIG_PM) += pm.o
obj-$(CONFIG_PM) += irq-pm.o
obj-$(CONFIG_S5P_DEV_ONENAND) += dev-onenand.o
obj-$(CONFIG_S5P_DEV_CSIS0) += dev-csis0.o
obj-$(CONFIG_S5P_DEV_CSIS1) += dev-csis1.o
-obj-$(CONFIG_S5P_SYSMMU) += sysmmu.o
static struct resource s5p_uart0_resource[] = {
[0] = {
.start = S5P_PA_UART0,
- .end = S5P_PA_UART0 + S5P_SZ_UART,
+ .end = S5P_PA_UART0 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
static struct resource s5p_uart1_resource[] = {
[0] = {
.start = S5P_PA_UART1,
- .end = S5P_PA_UART1 + S5P_SZ_UART,
+ .end = S5P_PA_UART1 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
static struct resource s5p_uart2_resource[] = {
[0] = {
.start = S5P_PA_UART2,
- .end = S5P_PA_UART2 + S5P_SZ_UART,
+ .end = S5P_PA_UART2 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 3
[0] = {
.start = S5P_PA_UART3,
- .end = S5P_PA_UART3 + S5P_SZ_UART,
+ .end = S5P_PA_UART3 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 4
[0] = {
.start = S5P_PA_UART4,
- .end = S5P_PA_UART4 + S5P_SZ_UART,
+ .end = S5P_PA_UART4 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 5
[0] = {
.start = S5P_PA_UART5,
- .end = S5P_PA_UART5 + S5P_SZ_UART,
+ .end = S5P_PA_UART5 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
+++ /dev/null
-/* linux/arch/arm/plat-s5p/include/plat/sysmmu.h
- *
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- *
- * Samsung sysmmu driver
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
-
-#ifndef __ASM_PLAT_S5P_SYSMMU_H
-#define __ASM_PLAT_S5P_SYSMMU_H __FILE__
-
-/* debug macro */
-#ifdef CONFIG_S5P_SYSMMU_DEBUG
-#define sysmmu_debug(fmt, arg...) printk(KERN_INFO "[%s] " fmt, __func__, ## arg)
-#else
-#define sysmmu_debug(fmt, arg...) do { } while (0)
-#endif
-
-#endif /* __ASM_PLAT_S5P_SYSMMU_H */
#include <mach/regs-sysmmu.h>
#include <mach/sysmmu.h>
-#include <plat/sysmmu.h>
-
struct sysmmu_controller s5p_sysmmu_cntlrs[S5P_SYSMMU_TOTAL_IPNUM];
void s5p_sysmmu_register(struct sysmmu_controller *sysmmuconp)
: "=r" (pg) : : "cc"); \
pg &= ~0x3fff;
- sysmmu_debug("CP15 TTBR0 : 0x%x\n", pg);
+ printk(KERN_INFO "%s: CP15 TTBR0 : 0x%x\n", __func__, pg);
/* Set sysmmu page table base address */
__raw_writel(pg, sysmmuconp->regs + S5P_PT_BASE_ADDR);
s3c_device_ts.dev.platform_data = npd;
}
-EXPORT_SYMBOL(s3c24xx_ts_set_platdata);
#include <linux/irq.h>
+struct sys_device;
+
#ifdef CONFIG_PM
extern __init int s3c_pm_init(void);
{
void __iomem *base = (void __iomem *)SPEAR_DBG_UART_BASE;
- while (readl(base + UART01x_FR) & UART01x_FR_TXFF)
+ while (readl_relaxed(base + UART01x_FR) & UART01x_FR_TXFF)
barrier();
- writel(c, base + UART01x_DR);
+ writel_relaxed(c, base + UART01x_DR);
}
static inline void flush(void)
#ifndef __PLAT_VMALLOC_H
#define __PLAT_VMALLOC_H
-#define VMALLOC_END 0xF0000000
+#define VMALLOC_END 0xF0000000UL
#endif /* __PLAT_VMALLOC_H */
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
-# Last update: Sun Dec 12 23:24:27 2010
+# Last update: Mon Feb 7 08:59:27 2011
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
vs_v210 MACH_VS_V210 VS_V210 2252
vs_v212 MACH_VS_V212 VS_V212 2253
hmt MACH_HMT HMT 2254
-suen3 MACH_SUEN3 SUEN3 2255
+km_kirkwood MACH_KM_KIRKWOOD KM_KIRKWOOD 2255
vesper MACH_VESPER VESPER 2256
str9 MACH_STR9 STR9 2257
omap3_wl_ff MACH_OMAP3_WL_FF OMAP3_WL_FF 2258
ea20 MACH_EA20 EA20 3002
awm2 MACH_AWM2 AWM2 3003
ti8148evm MACH_TI8148EVM TI8148EVM 3004
-tegra_seaboard MACH_TEGRA_SEABOARD TEGRA_SEABOARD 3005
+seaboard MACH_SEABOARD SEABOARD 3005
linkstation_chlv2 MACH_LINKSTATION_CHLV2 LINKSTATION_CHLV2 3006
tera_pro2_rack MACH_TERA_PRO2_RACK TERA_PRO2_RACK 3007
rubys MACH_RUBYS RUBYS 3008
ics_if_voip MACH_ICS_IF_VOIP ICS_IF_VOIP 3206
wlf_cragg_6410 MACH_WLF_CRAGG_6410 WLF_CRAGG_6410 3207
punica MACH_PUNICA PUNICA 3208
-sbc_nt250 MACH_SBC_NT250 SBC_NT250 3209
+trimslice MACH_TRIMSLICE TRIMSLICE 3209
mx27_wmultra MACH_MX27_WMULTRA MX27_WMULTRA 3210
mackerel MACH_MACKEREL MACKEREL 3211
fa9x27 MACH_FA9X27 FA9X27 3213
pcm048 MACH_PCM048 PCM048 3236
dds MACH_DDS DDS 3237
chalten_xa1 MACH_CHALTEN_XA1 CHALTEN_XA1 3238
+ts48xx MACH_TS48XX TS48XX 3239
+tonga2_tfttimer MACH_TONGA2_TFTTIMER TONGA2_TFTTIMER 3240
+whistler MACH_WHISTLER WHISTLER 3241
+asl_phoenix MACH_ASL_PHOENIX ASL_PHOENIX 3242
+at91sam9263otlite MACH_AT91SAM9263OTLITE AT91SAM9263OTLITE 3243
+ddplug MACH_DDPLUG DDPLUG 3244
+d2plug MACH_D2PLUG D2PLUG 3245
+kzm9d MACH_KZM9D KZM9D 3246
+verdi_lte MACH_VERDI_LTE VERDI_LTE 3247
+nanozoom MACH_NANOZOOM NANOZOOM 3248
+dm3730_som_lv MACH_DM3730_SOM_LV DM3730_SOM_LV 3249
+dm3730_torpedo MACH_DM3730_TORPEDO DM3730_TORPEDO 3250
+anchovy MACH_ANCHOVY ANCHOVY 3251
+re2rev20 MACH_RE2REV20 RE2REV20 3253
+re2rev21 MACH_RE2REV21 RE2REV21 3254
+cns21xx MACH_CNS21XX CNS21XX 3255
+rider MACH_RIDER RIDER 3257
+nsk330 MACH_NSK330 NSK330 3258
+cns2133evb MACH_CNS2133EVB CNS2133EVB 3259
+z3_816x_mod MACH_Z3_816X_MOD Z3_816X_MOD 3260
+z3_814x_mod MACH_Z3_814X_MOD Z3_814X_MOD 3261
+beect MACH_BEECT BEECT 3262
+dma_thunderbug MACH_DMA_THUNDERBUG DMA_THUNDERBUG 3263
+omn_at91sam9g20 MACH_OMN_AT91SAM9G20 OMN_AT91SAM9G20 3264
+mx25_e2s_uc MACH_MX25_E2S_UC MX25_E2S_UC 3265
+mione MACH_MIONE MIONE 3266
+top9000_tcu MACH_TOP9000_TCU TOP9000_TCU 3267
+top9000_bsl MACH_TOP9000_BSL TOP9000_BSL 3268
+kingdom MACH_KINGDOM KINGDOM 3269
+armadillo460 MACH_ARMADILLO460 ARMADILLO460 3270
+lq2 MACH_LQ2 LQ2 3271
+sweda_tms2 MACH_SWEDA_TMS2 SWEDA_TMS2 3272
+mx53_loco MACH_MX53_LOCO MX53_LOCO 3273
+acer_a8 MACH_ACER_A8 ACER_A8 3275
+acer_gauguin MACH_ACER_GAUGUIN ACER_GAUGUIN 3276
+guppy MACH_GUPPY GUPPY 3277
+mx61_ard MACH_MX61_ARD MX61_ARD 3278
+tx53 MACH_TX53 TX53 3279
+omapl138_case_a3 MACH_OMAPL138_CASE_A3 OMAPL138_CASE_A3 3280
+uemd MACH_UEMD UEMD 3281
+ccwmx51mut MACH_CCWMX51MUT CCWMX51MUT 3282
+rockhopper MACH_ROCKHOPPER ROCKHOPPER 3283
+nookcolor MACH_NOOKCOLOR NOOKCOLOR 3284
+hkdkc100 MACH_HKDKC100 HKDKC100 3285
+ts42xx MACH_TS42XX TS42XX 3286
+aebl MACH_AEBL AEBL 3287
+wario MACH_WARIO WARIO 3288
+gfs_spm MACH_GFS_SPM GFS_SPM 3289
+cm_t3730 MACH_CM_T3730 CM_T3730 3290
+isc3 MACH_ISC3 ISC3 3291
+rascal MACH_RASCAL RASCAL 3292
+hrefv60 MACH_HREFV60 HREFV60 3293
+tpt_2_0 MACH_TPT_2_0 TPT_2_0 3294
+pyramid_td MACH_PYRAMID_TD PYRAMID_TD 3295
+splendor MACH_SPLENDOR SPLENDOR 3296
+guf_planet MACH_GUF_PLANET GUF_PLANET 3297
+msm8x60_qt MACH_MSM8X60_QT MSM8X60_QT 3298
+htc_hd_mini MACH_HTC_HD_MINI HTC_HD_MINI 3299
+athene MACH_ATHENE ATHENE 3300
+deep_r_ek_1 MACH_DEEP_R_EK_1 DEEP_R_EK_1 3301
+vivow_ct MACH_VIVOW_CT VIVOW_CT 3302
+nery_1000 MACH_NERY_1000 NERY_1000 3303
+rfl109145_ssrv MACH_RFL109145_SSRV RFL109145_SSRV 3304
+nmh MACH_NMH NMH 3305
+wn802t MACH_WN802T WN802T 3306
+dragonet MACH_DRAGONET DRAGONET 3307
+geneva_b MACH_GENEVA_B GENEVA_B 3308
+at91sam9263desk16l MACH_AT91SAM9263DESK16L AT91SAM9263DESK16L 3309
+bcmhana_sv MACH_BCMHANA_SV BCMHANA_SV 3310
+bcmhana_tablet MACH_BCMHANA_TABLET BCMHANA_TABLET 3311
+koi MACH_KOI KOI 3312
+ts4800 MACH_TS4800 TS4800 3313
+tqma9263 MACH_TQMA9263 TQMA9263 3314
+holiday MACH_HOLIDAY HOLIDAY 3315
+dma_6410 MACH_DMA6410 DMA6410 3316
+pcats_overlay MACH_PCATS_OVERLAY PCATS_OVERLAY 3317
+hwgw6410 MACH_HWGW6410 HWGW6410 3318
+shenzhou MACH_SHENZHOU SHENZHOU 3319
+cwme9210 MACH_CWME9210 CWME9210 3320
+cwme9210js MACH_CWME9210JS CWME9210JS 3321
+pgs_v1 MACH_PGS_SITARA PGS_SITARA 3322
+colibri_tegra2 MACH_COLIBRI_TEGRA2 COLIBRI_TEGRA2 3323
+w21 MACH_W21 W21 3324
+polysat1 MACH_POLYSAT1 POLYSAT1 3325
+dataway MACH_DATAWAY DATAWAY 3326
+cobral138 MACH_COBRAL138 COBRAL138 3327
+roverpcs8 MACH_ROVERPCS8 ROVERPCS8 3328
+marvelc MACH_MARVELC MARVELC 3329
+navefihid MACH_NAVEFIHID NAVEFIHID 3330
+dm365_cv100 MACH_DM365_CV100 DM365_CV100 3331
+able MACH_ABLE ABLE 3332
+legacy MACH_LEGACY LEGACY 3333
+icong MACH_ICONG ICONG 3334
+rover_g8 MACH_ROVER_G8 ROVER_G8 3335
+t5388p MACH_T5388P T5388P 3336
+dingo MACH_DINGO DINGO 3337
+goflexhome MACH_GOFLEXHOME GOFLEXHOME 3338
config AVR32
def_bool y
- # With EMBEDDED=n, we get lots of stuff automatically selected
+ # With EXPERT=n, we get lots of stuff automatically selected
# that we usually don't need on AVR32.
- select EMBEDDED
+ select EXPERT
select HAVE_CLK
select HAVE_OPROFILE
select HAVE_KPROBES
#ifndef __ASM_AVR32_PGALLOC_H
#define __ASM_AVR32_PGALLOC_H
+#include <linux/mm.h>
#include <linux/quicklist.h>
#include <asm/page.h>
#include <asm/pgtable.h>
select HAVE_KERNEL_LZO if RAMKERNEL
select HAVE_OPROFILE
select ARCH_WANT_OPTIONAL_GPIOLIB
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
+ select IRQ_PER_CPU if SMP
config GENERIC_CSUM
def_bool y
config GENERIC_FIND_NEXT_BIT
def_bool y
-config GENERIC_HARDIRQS
- def_bool y
-
-config GENERIC_IRQ_PROBE
- def_bool y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
config GENERIC_GPIO
def_bool y
depends on SMP && HOTPLUG
default y
-config IRQ_PER_CPU
- bool
- depends on SMP
- default y
-
config HAVE_LEGACY_PER_CPU_AREA
def_bool y
depends on SMP
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_ELF_CORE is not set
# CONFIG_AIO is not set
CONFIG_SLAB=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_BLK_DEV_INITRD=y
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLOB=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_HOTPLUG is not set
# CONFIG_ELF_CORE is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS_ALL=y
# CONFIG_ELF_CORE is not set
CONFIG_BLK_DEV_INITRD=y
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_FUTEX is not set
# CONFIG_RD_GZIP is not set
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
#define __BFIN_ASM_SERIAL_H__
#include <linux/serial_core.h>
+#include <linux/spinlock.h>
#include <mach/anomaly.h>
#include <mach/bfin_serial.h>
struct circ_buf rx_dma_buf;
struct timer_list rx_dma_timer;
int rx_dma_nrows;
+ spinlock_t rx_lock;
unsigned int tx_dma_channel;
unsigned int rx_dma_channel;
struct work_struct tx_dma_workqueue;
bool
default y
select HAVE_IDE
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_HARDIRQS_NO_DEPRECATED
config HZ
int
source "fs/Kconfig.binfmt"
-config GENERIC_HARDIRQS
- bool
- default y
-
config ETRAX_CMDLINE
string "Kernel command line"
default "root=/dev/mtdblock3"
IRQ31_interrupt
};
-static void enable_crisv10_irq(unsigned int irq);
-
-static unsigned int startup_crisv10_irq(unsigned int irq)
-{
- enable_crisv10_irq(irq);
- return 0;
-}
-
-#define shutdown_crisv10_irq disable_crisv10_irq
-
-static void enable_crisv10_irq(unsigned int irq)
-{
- crisv10_unmask_irq(irq);
-}
-
-static void disable_crisv10_irq(unsigned int irq)
-{
- crisv10_mask_irq(irq);
-}
-
-static void ack_crisv10_irq(unsigned int irq)
+static void enable_crisv10_irq(struct irq_data *data)
{
+ crisv10_unmask_irq(data->irq);
}
-static void end_crisv10_irq(unsigned int irq)
+static void disable_crisv10_irq(struct irq_data *data)
{
+ crisv10_mask_irq(data->irq);
}
static struct irq_chip crisv10_irq_type = {
- .name = "CRISv10",
- .startup = startup_crisv10_irq,
- .shutdown = shutdown_crisv10_irq,
- .enable = enable_crisv10_irq,
- .disable = disable_crisv10_irq,
- .ack = ack_crisv10_irq,
- .end = end_crisv10_irq,
- .set_affinity = NULL
+ .name = "CRISv10",
+ .irq_shutdown = disable_crisv10_irq,
+ .irq_enable = enable_crisv10_irq,
+ .irq_disable = disable_crisv10_irq,
};
void weird_irq(void);
/* Initialize IRQ handler descriptors. */
for(i = 2; i < NR_IRQS; i++) {
- irq_desc[i].chip = &crisv10_irq_type;
+ set_irq_chip_and_handler(i, &crisv10_irq_type,
+ handle_simple_irq);
set_int_vector(i, interrupt[i]);
}
}
-static unsigned int startup_crisv32_irq(unsigned int irq)
+static void enable_crisv32_irq(struct irq_data *data)
{
- crisv32_unmask_irq(irq);
- return 0;
-}
-
-static void shutdown_crisv32_irq(unsigned int irq)
-{
- crisv32_mask_irq(irq);
+ crisv32_unmask_irq(data->irq);
}
-static void enable_crisv32_irq(unsigned int irq)
+static void disable_crisv32_irq(struct irq_data *data)
{
- crisv32_unmask_irq(irq);
+ crisv32_mask_irq(data->irq);
}
-static void disable_crisv32_irq(unsigned int irq)
-{
- crisv32_mask_irq(irq);
-}
-
-static void ack_crisv32_irq(unsigned int irq)
-{
-}
-
-static void end_crisv32_irq(unsigned int irq)
-{
-}
-
-int set_affinity_crisv32_irq(unsigned int irq, const struct cpumask *dest)
+static int set_affinity_crisv32_irq(struct irq_data *data,
+ const struct cpumask *dest, bool force)
{
unsigned long flags;
+
spin_lock_irqsave(&irq_lock, flags);
- irq_allocations[irq - FIRST_IRQ].mask = *dest;
+ irq_allocations[data->irq - FIRST_IRQ].mask = *dest;
spin_unlock_irqrestore(&irq_lock, flags);
-
return 0;
}
static struct irq_chip crisv32_irq_type = {
- .name = "CRISv32",
- .startup = startup_crisv32_irq,
- .shutdown = shutdown_crisv32_irq,
- .enable = enable_crisv32_irq,
- .disable = disable_crisv32_irq,
- .ack = ack_crisv32_irq,
- .end = end_crisv32_irq,
- .set_affinity = set_affinity_crisv32_irq
+ .name = "CRISv32",
+ .irq_shutdown = disable_crisv32_irq,
+ .irq_enable = enable_crisv32_irq,
+ .irq_disable = disable_crisv32_irq,
+ .irq_set_affinity = set_affinity_crisv32_irq,
};
void
/* Point all IRQ's to bad handlers. */
for (i = FIRST_IRQ, j = 0; j < NR_IRQS; i++, j++) {
- irq_desc[j].chip = &crisv32_irq_type;
+ set_irq_chip_and_handler(j, &crisv32_irq_type,
+ handle_simple_irq);
set_exception_vector(i, interrupt[j]);
}
# CONFIG_SWAP is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_SWAP is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_SWAP is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_BLK_DEV_BSG is not set
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
#endif
- seq_printf(p, " %14s", irq_desc[i].chip->name);
+ seq_printf(p, " %14s", irq_desc[i].irq_data.chip->name);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
printk("do_IRQ: stack overflow: %lX\n", sp);
show_stack(NULL, (unsigned long *)sp);
}
- __do_IRQ(irq);
- irq_exit();
+ generic_handle_irq(irq);
+ irq_exit();
set_irq_regs(old_regs);
}
select HAVE_ARCH_TRACEHOOK
select HAVE_IRQ_WORK
select HAVE_PERF_EVENTS
+ select HAVE_GENERIC_HARDIRQS
config ZONE_DMA
bool
bool
default n
-config GENERIC_HARDIRQS
- bool
- default y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- bool
- default y
-
config TIME_LOW_RES
bool
default y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_MMU=y
CONFIG_FRV_OUTOFLINE_ATOMIC_OPS=y
bool
default y
select HAVE_IDE
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_HARDIRQS_NO_DEPRECATED
config SYMBOL_PREFIX
string
bool
default y
-config GENERIC_HARDIRQS
- bool
- default y
-
config GENERIC_CALIBRATE_DELAY
bool
default y
CONFIG_EXPERIMENTAL=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_UID16 is not set
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
return (irq >= EXT_IRQ0 && irq <= (EXT_IRQ0 + EXT_IRQS));
}
-static void h8300_enable_irq(unsigned int irq)
+static void h8300_enable_irq(struct irq_data *data)
{
- if (is_ext_irq(irq))
- IER_REGS |= 1 << (irq - EXT_IRQ0);
+ if (is_ext_irq(data->irq))
+ IER_REGS |= 1 << (data->irq - EXT_IRQ0);
}
-static void h8300_disable_irq(unsigned int irq)
+static void h8300_disable_irq(struct irq_data *data)
{
- if (is_ext_irq(irq))
- IER_REGS &= ~(1 << (irq - EXT_IRQ0));
+ if (is_ext_irq(data->irq))
+ IER_REGS &= ~(1 << (data->irq - EXT_IRQ0));
}
-static void h8300_end_irq(unsigned int irq)
+static unsigned int h8300_startup_irq(struct irq_data *data)
{
-}
-
-static unsigned int h8300_startup_irq(unsigned int irq)
-{
- if (is_ext_irq(irq))
- return h8300_enable_irq_pin(irq);
+ if (is_ext_irq(data->irq))
+ return h8300_enable_irq_pin(data->irq);
else
return 0;
}
-static void h8300_shutdown_irq(unsigned int irq)
+static void h8300_shutdown_irq(struct irq_data *data)
{
- if (is_ext_irq(irq))
- h8300_disable_irq_pin(irq);
+ if (is_ext_irq(data->irq))
+ h8300_disable_irq_pin(data->irq);
}
/*
*/
struct irq_chip h8300irq_chip = {
.name = "H8300-INTC",
- .startup = h8300_startup_irq,
- .shutdown = h8300_shutdown_irq,
- .enable = h8300_enable_irq,
- .disable = h8300_disable_irq,
- .ack = NULL,
- .end = h8300_end_irq,
+ .irq_startup = h8300_startup_irq,
+ .irq_shutdown = h8300_shutdown_irq,
+ .irq_enable = h8300_enable_irq,
+ .irq_disable = h8300_disable_irq,
};
#if defined(CONFIG_RAMKERNEL)
setup_vector();
- for (c = 0; c < NR_IRQS; c++) {
- irq_desc[c].status = IRQ_DISABLED;
- irq_desc[c].action = NULL;
- irq_desc[c].depth = 1;
- irq_desc[c].chip = &h8300irq_chip;
- }
+ for (c = 0; c < NR_IRQS; c++)
+ set_irq_chip_and_handler(c, &h8300irq_chip, handle_simple_irq);
}
asmlinkage void do_IRQ(int irq)
{
irq_enter();
- __do_IRQ(irq);
+ generic_handle_irq(irq);
irq_exit();
}
goto unlock;
seq_printf(p, "%3d: ",i);
seq_printf(p, "%10u ", kstat_irqs(i));
- seq_printf(p, " %14s", irq_desc[i].chip->name);
+ seq_printf(p, " %14s", irq_desc[i].irq_data.chip->name);
seq_printf(p, "-%-8s", irq_desc[i].name);
seq_printf(p, " %s", action->name);
select HAVE_KVM
select HAVE_ARCH_TRACEHOOK
select HAVE_DMA_API_DEBUG
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
+ select GENERIC_PENDING_IRQ if SMP
+ select IRQ_PER_CPU
default y
help
The Itanium Processor Family is Intel's 64-bit successor to
source "lib/Kconfig"
-#
-# Use the generic interrupt handling code in kernel/irq/:
-#
-config GENERIC_HARDIRQS
- def_bool y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
-config GENERIC_IRQ_PROBE
- bool
- default y
-
-config GENERIC_PENDING_IRQ
- bool
- depends on GENERIC_HARDIRQS && SMP
- default y
-
-config IRQ_PER_CPU
- bool
- default y
-
config IOMMU_HELPER
def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_HARDIRQS_NO_DEPRECATED
+ select GENERIC_IRQ_PROBE
config SBUS
bool
bool
default y
-config GENERIC_HARDIRQS
- bool
- default y
-
-config GENERIC_IRQ_PROBE
- bool
- default y
-
config NO_IOPORT
def_bool y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=15
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=15
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=15
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_SLAB=y
CONFIG_MODULES=y
}
if (i < NR_IRQS) {
- raw_spin_lock_irqsave(&irq_desc[i].lock, flags);
- action = irq_desc[i].action;
+ struct irq_desc *desc = irq_to_desc(i);
+
+ raw_spin_lock_irqsave(&desc->lock, flags);
+ action = desc->action;
if (!action)
goto skip;
seq_printf(p, "%3d: ",i);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
#endif
- seq_printf(p, " %14s", irq_desc[i].chip->name);
+ seq_printf(p, " %14s", desc->irq_data.chip->name);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
seq_putc(p, '\n');
skip:
- raw_spin_unlock_irqrestore(&irq_desc[i].lock, flags);
+ raw_spin_unlock_irqrestore(&desc->lock, flags);
}
return 0;
}
#ifdef CONFIG_DEBUG_STACKOVERFLOW
/* FIXME M32R */
#endif
- __do_IRQ(irq);
+ generic_handle_irq(irq);
irq_exit();
set_irq_regs(old_regs);
outl(data, port);
}
-static void mask_and_ack_m32104ut(unsigned int irq)
+static void mask_m32104ut_irq(struct irq_data *data)
{
- disable_m32104ut_irq(irq);
+ disable_m32104ut_irq(data->irq);
}
-static void end_m32104ut_irq(unsigned int irq)
+static void unmask_m32104ut_irq(struct irq_data *data)
{
- enable_m32104ut_irq(irq);
+ enable_m32104ut_irq(data->irq);
}
-static unsigned int startup_m32104ut_irq(unsigned int irq)
+static void shutdown_m32104ut_irq(struct irq_data *data)
{
- enable_m32104ut_irq(irq);
- return (0);
-}
-
-static void shutdown_m32104ut_irq(unsigned int irq)
-{
- unsigned long port;
+ unsigned int irq = data->irq;
+ unsigned long port = irq2port(irq);
- port = irq2port(irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32104ut_irq_type =
{
- .name = "M32104UT-IRQ",
- .startup = startup_m32104ut_irq,
- .shutdown = shutdown_m32104ut_irq,
- .enable = enable_m32104ut_irq,
- .disable = disable_m32104ut_irq,
- .ack = mask_and_ack_m32104ut,
- .end = end_m32104ut_irq
+ .name = "M32104UT-IRQ",
+ .irq_shutdown = shutdown_m32104ut_irq,
+ .irq_unmask = unmask_m32104ut_irq,
+ .irq_mask = mask_m32104ut_irq,
};
void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on M32104UT-LAN (SMC91C111)*/
- irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].chip = &m32104ut_irq_type;
- irq_desc[M32R_IRQ_INT0].action = 0;
- irq_desc[M32R_IRQ_INT0].depth = 1;
- icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD11; /* "H" level sense */
+ set_irq_chip_and_handler(M32R_IRQ_INT0, &m32104ut_irq_type,
+ handle_level_irq);
+ /* "H" level sense */
+ icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD11;
disable_m32104ut_irq(M32R_IRQ_INT0);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &m32104ut_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &m32104ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &m32104ut_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &m32104ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &m32104ut_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &m32104ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_SIO0_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
outl(data, port);
}
-static void mask_and_ack_m32700ut(unsigned int irq)
+static void mask_m32700ut(struct irq_data *data)
{
- disable_m32700ut_irq(irq);
+ disable_m32700ut_irq(data->irq);
}
-static void end_m32700ut_irq(unsigned int irq)
+static void unmask_m32700ut(struct irq_data *data)
{
- enable_m32700ut_irq(irq);
+ enable_m32700ut_irq(data->irq);
}
-static unsigned int startup_m32700ut_irq(unsigned int irq)
-{
- enable_m32700ut_irq(irq);
- return (0);
-}
-
-static void shutdown_m32700ut_irq(unsigned int irq)
+static void shutdown_m32700ut(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32700ut_irq_type =
{
- .name = "M32700UT-IRQ",
- .startup = startup_m32700ut_irq,
- .shutdown = shutdown_m32700ut_irq,
- .enable = enable_m32700ut_irq,
- .disable = disable_m32700ut_irq,
- .ack = mask_and_ack_m32700ut,
- .end = end_m32700ut_irq
+ .name = "M32700UT-IRQ",
+ .irq_shutdown = shutdown_m32700ut,
+ .irq_mask = mask_m32700ut,
+ .irq_unmask = unmask_m32700ut
};
/*
unsigned int pldirq;
pldirq = irq2pldirq(irq);
-// disable_m32700ut_irq(M32R_IRQ_INT1);
port = pldirq2port(pldirq);
data = pld_icu_data[pldirq].icucr|PLD_ICUCR_ILEVEL7;
outw(data, port);
unsigned int pldirq;
pldirq = irq2pldirq(irq);
-// enable_m32700ut_irq(M32R_IRQ_INT1);
port = pldirq2port(pldirq);
data = pld_icu_data[pldirq].icucr|PLD_ICUCR_IEN|PLD_ICUCR_ILEVEL6;
outw(data, port);
}
-static void mask_and_ack_m32700ut_pld(unsigned int irq)
-{
- disable_m32700ut_pld_irq(irq);
-// mask_and_ack_m32700ut(M32R_IRQ_INT1);
-}
-
-static void end_m32700ut_pld_irq(unsigned int irq)
+static void mask_m32700ut_pld(struct irq_data *data)
{
- enable_m32700ut_pld_irq(irq);
- end_m32700ut_irq(M32R_IRQ_INT1);
+ disable_m32700ut_pld_irq(data->irq);
}
-static unsigned int startup_m32700ut_pld_irq(unsigned int irq)
+static void unmask_m32700ut_pld(struct irq_data *data)
{
- enable_m32700ut_pld_irq(irq);
- return (0);
+ enable_m32700ut_pld_irq(data->irq);
+ enable_m32700ut_irq(M32R_IRQ_INT1);
}
-static void shutdown_m32700ut_pld_irq(unsigned int irq)
+static void shutdown_m32700ut_pld_irq(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2pldirq(irq);
-// shutdown_m32700ut_irq(M32R_IRQ_INT1);
+ pldirq = irq2pldirq(data->irq);
port = pldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32700ut_pld_irq_type =
{
- .name = "M32700UT-PLD-IRQ",
- .startup = startup_m32700ut_pld_irq,
- .shutdown = shutdown_m32700ut_pld_irq,
- .enable = enable_m32700ut_pld_irq,
- .disable = disable_m32700ut_pld_irq,
- .ack = mask_and_ack_m32700ut_pld,
- .end = end_m32700ut_pld_irq
+ .name = "M32700UT-PLD-IRQ",
+ .irq_shutdown = shutdown_m32700ut_pld_irq,
+ .irq_mask = mask_m32700ut_pld,
+ .irq_unmask = unmask_m32700ut_pld,
};
/*
outw(data, port);
}
-static void mask_and_ack_m32700ut_lanpld(unsigned int irq)
+static void mask_m32700ut_lanpld(struct irq_data *data)
{
- disable_m32700ut_lanpld_irq(irq);
+ disable_m32700ut_lanpld_irq(data->irq);
}
-static void end_m32700ut_lanpld_irq(unsigned int irq)
+static void unmask_m32700ut_lanpld(struct irq_data *data)
{
- enable_m32700ut_lanpld_irq(irq);
- end_m32700ut_irq(M32R_IRQ_INT0);
-}
-
-static unsigned int startup_m32700ut_lanpld_irq(unsigned int irq)
-{
- enable_m32700ut_lanpld_irq(irq);
- return (0);
+ enable_m32700ut_lanpld_irq(data->irq);
+ enable_m32700ut_irq(M32R_IRQ_INT0);
}
-static void shutdown_m32700ut_lanpld_irq(unsigned int irq)
+static void shutdown_m32700ut_lanpld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2lanpldirq(irq);
+ pldirq = irq2lanpldirq(data->irq);
port = lanpldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32700ut_lanpld_irq_type =
{
- .name = "M32700UT-PLD-LAN-IRQ",
- .startup = startup_m32700ut_lanpld_irq,
- .shutdown = shutdown_m32700ut_lanpld_irq,
- .enable = enable_m32700ut_lanpld_irq,
- .disable = disable_m32700ut_lanpld_irq,
- .ack = mask_and_ack_m32700ut_lanpld,
- .end = end_m32700ut_lanpld_irq
+ .name = "M32700UT-PLD-LAN-IRQ",
+ .irq_shutdown = shutdown_m32700ut_lanpld,
+ .irq_mask = mask_m32700ut_lanpld,
+ .irq_unmask = unmask_m32700ut_lanpld,
};
/*
outw(data, port);
}
-static void mask_and_ack_m32700ut_lcdpld(unsigned int irq)
+static void mask_m32700ut_lcdpld(struct irq_data *data)
{
- disable_m32700ut_lcdpld_irq(irq);
+ disable_m32700ut_lcdpld_irq(data->irq);
}
-static void end_m32700ut_lcdpld_irq(unsigned int irq)
+static void unmask_m32700ut_lcdpld(struct irq_data *data)
{
- enable_m32700ut_lcdpld_irq(irq);
- end_m32700ut_irq(M32R_IRQ_INT2);
-}
-
-static unsigned int startup_m32700ut_lcdpld_irq(unsigned int irq)
-{
- enable_m32700ut_lcdpld_irq(irq);
- return (0);
+ enable_m32700ut_lcdpld_irq(data->irq);
+ enable_m32700ut_irq(M32R_IRQ_INT2);
}
-static void shutdown_m32700ut_lcdpld_irq(unsigned int irq)
+static void shutdown_m32700ut_lcdpld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2lcdpldirq(irq);
+ pldirq = irq2lcdpldirq(data->irq);
port = lcdpldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32700ut_lcdpld_irq_type =
{
- .name = "M32700UT-PLD-LCD-IRQ",
- .startup = startup_m32700ut_lcdpld_irq,
- .shutdown = shutdown_m32700ut_lcdpld_irq,
- .enable = enable_m32700ut_lcdpld_irq,
- .disable = disable_m32700ut_lcdpld_irq,
- .ack = mask_and_ack_m32700ut_lcdpld,
- .end = end_m32700ut_lcdpld_irq
+ .name = "M32700UT-PLD-LCD-IRQ",
+ .irq_shutdown = shutdown_m32700ut_lcdpld,
+ .irq_mask = mask_m32700ut_lcdpld,
+ .irq_unmask = unmask_m32700ut_lcdpld,
};
void __init init_IRQ(void)
{
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on M32700UT-LAN (SMC91C111)*/
- irq_desc[M32700UT_LAN_IRQ_LAN].status = IRQ_DISABLED;
- irq_desc[M32700UT_LAN_IRQ_LAN].chip = &m32700ut_lanpld_irq_type;
- irq_desc[M32700UT_LAN_IRQ_LAN].action = 0;
- irq_desc[M32700UT_LAN_IRQ_LAN].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(M32700UT_LAN_IRQ_LAN,
+ &m32700ut_lanpld_irq_type, handle_level_irq);
lanpld_icu_data[irq2lanpldirq(M32700UT_LAN_IRQ_LAN)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* "H" edge sense */
disable_m32700ut_lanpld_irq(M32700UT_LAN_IRQ_LAN);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_m32700ut_irq(M32R_IRQ_MFT2);
/* SIO0 : receive */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_m32700ut_irq(M32R_IRQ_SIO0_R);
/* SIO0 : send */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_m32700ut_irq(M32R_IRQ_SIO0_S);
/* SIO1 : receive */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_m32700ut_irq(M32R_IRQ_SIO1_R);
/* SIO1 : send */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_m32700ut_irq(M32R_IRQ_SIO1_S);
/* DMA1 : */
- irq_desc[M32R_IRQ_DMA1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_DMA1].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_DMA1].action = 0;
- irq_desc[M32R_IRQ_DMA1].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_DMA1, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_DMA1].icucr = 0;
disable_m32700ut_irq(M32R_IRQ_DMA1);
#ifdef CONFIG_SERIAL_M32R_PLDSIO
/* INT#1: SIO0 Receive on PLD */
- irq_desc[PLD_IRQ_SIO0_RCV].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_RCV].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_SIO0_RCV].action = 0;
- irq_desc[PLD_IRQ_SIO0_RCV].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_SIO0_RCV, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_RCV)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
disable_m32700ut_pld_irq(PLD_IRQ_SIO0_RCV);
/* INT#1: SIO0 Send on PLD */
- irq_desc[PLD_IRQ_SIO0_SND].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_SND].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_SIO0_SND].action = 0;
- irq_desc[PLD_IRQ_SIO0_SND].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_SIO0_SND, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_SND)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
disable_m32700ut_pld_irq(PLD_IRQ_SIO0_SND);
#endif /* CONFIG_SERIAL_M32R_PLDSIO */
/* INT#1: CFC IREQ on PLD */
- irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_CFIREQ].action = 0;
- irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFIREQ, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFIREQ)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* 'L' level sense */
disable_m32700ut_pld_irq(PLD_IRQ_CFIREQ);
/* INT#1: CFC Insert on PLD */
- irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
- irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_INSERT, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_INSERT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD00; /* 'L' edge sense */
disable_m32700ut_pld_irq(PLD_IRQ_CFC_INSERT);
/* INT#1: CFC Eject on PLD */
- irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
- irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_EJECT, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_EJECT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* 'H' edge sense */
disable_m32700ut_pld_irq(PLD_IRQ_CFC_EJECT);
#if defined(CONFIG_USB)
outw(USBCR_OTGS, USBCR); /* USBCR: non-OTG */
+ set_irq_chip_and_handler(M32700UT_LCD_IRQ_USB_INT1,
+ &m32700ut_lcdpld_irq_type, handle_level_irq);
- irq_desc[M32700UT_LCD_IRQ_USB_INT1].status = IRQ_DISABLED;
- irq_desc[M32700UT_LCD_IRQ_USB_INT1].chip = &m32700ut_lcdpld_irq_type;
- irq_desc[M32700UT_LCD_IRQ_USB_INT1].action = 0;
- irq_desc[M32700UT_LCD_IRQ_USB_INT1].depth = 1;
- lcdpld_icu_data[irq2lcdpldirq(M32700UT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
- disable_m32700ut_lcdpld_irq(M32700UT_LCD_IRQ_USB_INT1);
+ lcdpld_icu_data[irq2lcdpldirq(M32700UT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
+ disable_m32700ut_lcdpld_irq(M32700UT_LCD_IRQ_USB_INT1);
#endif
/*
* INT2# is used for BAT, USB, AUDIO
/*
* INT3# is used for AR
*/
- irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].chip = &m32700ut_irq_type;
- irq_desc[M32R_IRQ_INT3].action = 0;
- irq_desc[M32R_IRQ_INT3].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT3, &m32700ut_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_m32700ut_irq(M32R_IRQ_INT3);
#endif /* CONFIG_VIDEO_M32R_AR */
outl(data, port);
}
-static void mask_and_ack_mappi(unsigned int irq)
+static void mask_mappi(struct irq_data *data)
{
- disable_mappi_irq(irq);
+ disable_mappi_irq(data->irq);
}
-static void end_mappi_irq(unsigned int irq)
+static void unmask_mappi(struct irq_data *data)
{
- if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
- enable_mappi_irq(irq);
+ enable_mappi_irq(data->irq);
}
-static unsigned int startup_mappi_irq(unsigned int irq)
-{
- enable_mappi_irq(irq);
- return (0);
-}
-
-static void shutdown_mappi_irq(unsigned int irq)
+static void shutdown_mappi(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip mappi_irq_type =
{
- .name = "MAPPI-IRQ",
- .startup = startup_mappi_irq,
- .shutdown = shutdown_mappi_irq,
- .enable = enable_mappi_irq,
- .disable = disable_mappi_irq,
- .ack = mask_and_ack_mappi,
- .end = end_mappi_irq
+ .name = "MAPPI-IRQ",
+ .irq_shutdown = shutdown_mappi,
+ .irq_mask = mask_mappi,
+ .irq_unmask = unmask_mappi,
};
void __init init_IRQ(void)
#ifdef CONFIG_NE2000
/* INT0 : LAN controller (RTL8019AS) */
- irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT0].action = NULL;
- irq_desc[M32R_IRQ_INT0].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT0, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD11;
disable_mappi_irq(M32R_IRQ_INT0);
#endif /* CONFIG_M32R_NE2000 */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = NULL;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_mappi_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = NULL;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = NULL;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = NULL;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = NULL;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
#if defined(CONFIG_M32R_PCC)
/* INT1 : pccard0 interrupt */
- irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT1].action = NULL;
- irq_desc[M32R_IRQ_INT1].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT1, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
disable_mappi_irq(M32R_IRQ_INT1);
/* INT2 : pccard1 interrupt */
- irq_desc[M32R_IRQ_INT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT2].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT2].action = NULL;
- irq_desc[M32R_IRQ_INT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT2, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT2].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
disable_mappi_irq(M32R_IRQ_INT2);
#endif /* CONFIG_M32RPCC */
outl(data, port);
}
-static void mask_and_ack_mappi2(unsigned int irq)
+static void mask_mappi2(struct irq_data *data)
{
- disable_mappi2_irq(irq);
+ disable_mappi2_irq(data->irq);
}
-static void end_mappi2_irq(unsigned int irq)
+static void unmask_mappi2(struct irq_data *data)
{
- enable_mappi2_irq(irq);
+ enable_mappi2_irq(data->irq);
}
-static unsigned int startup_mappi2_irq(unsigned int irq)
-{
- enable_mappi2_irq(irq);
- return (0);
-}
-
-static void shutdown_mappi2_irq(unsigned int irq)
+static void shutdown_mappi2(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip mappi2_irq_type =
{
- .name = "MAPPI2-IRQ",
- .startup = startup_mappi2_irq,
- .shutdown = shutdown_mappi2_irq,
- .enable = enable_mappi2_irq,
- .disable = disable_mappi2_irq,
- .ack = mask_and_ack_mappi2,
- .end = end_mappi2_irq
+ .name = "MAPPI2-IRQ",
+ .irq_shutdown = shutdown_mappi2,
+ .irq_mask = mask_mappi2,
+ .irq_unmask = unmask_mappi2,
};
void __init init_IRQ(void)
{
#if defined(CONFIG_SMC91X)
/* INT0 : LAN controller (SMC91111) */
- irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_INT0].action = 0;
- irq_desc[M32R_IRQ_INT0].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT0, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_mappi2_irq(M32R_IRQ_INT0);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_mappi2_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_mappi2_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi2_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_mappi2_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_mappi2_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_M32R_USE_DBG_CONSOLE */
#if defined(CONFIG_USB)
/* INT1 : USB Host controller interrupt */
- irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].chip = &mappi2_irq_type;
- irq_desc[M32R_IRQ_INT1].action = 0;
- irq_desc[M32R_IRQ_INT1].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT1, &mappi2_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_ISMOD01;
disable_mappi2_irq(M32R_IRQ_INT1);
#endif /* CONFIG_USB */
/* ICUCR40: CFC IREQ */
- irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].chip = &mappi2_irq_type;
- irq_desc[PLD_IRQ_CFIREQ].action = 0;
- irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFIREQ, &mappi2_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_CFIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
disable_mappi2_irq(PLD_IRQ_CFIREQ);
#if defined(CONFIG_M32R_CFC)
/* ICUCR41: CFC Insert */
- irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].chip = &mappi2_irq_type;
- irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
- irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_INSERT, &mappi2_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_CFC_INSERT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD00;
disable_mappi2_irq(PLD_IRQ_CFC_INSERT);
/* ICUCR42: CFC Eject */
- irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].chip = &mappi2_irq_type;
- irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
- irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_EJECT, &mappi2_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_CFC_EJECT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_mappi2_irq(PLD_IRQ_CFC_EJECT);
#endif /* CONFIG_MAPPI2_CFC */
outl(data, port);
}
-static void mask_and_ack_mappi3(unsigned int irq)
+static void mask_mappi3(struct irq_data *data)
{
- disable_mappi3_irq(irq);
+ disable_mappi3_irq(data->irq);
}
-static void end_mappi3_irq(unsigned int irq)
+static void unmask_mappi3(struct irq_data *data)
{
- enable_mappi3_irq(irq);
+ enable_mappi3_irq(data->irq);
}
-static unsigned int startup_mappi3_irq(unsigned int irq)
-{
- enable_mappi3_irq(irq);
- return (0);
-}
-
-static void shutdown_mappi3_irq(unsigned int irq)
+static void shutdown_mappi3(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
-static struct irq_chip mappi3_irq_type =
-{
- .name = "MAPPI3-IRQ",
- .startup = startup_mappi3_irq,
- .shutdown = shutdown_mappi3_irq,
- .enable = enable_mappi3_irq,
- .disable = disable_mappi3_irq,
- .ack = mask_and_ack_mappi3,
- .end = end_mappi3_irq
+static struct irq_chip mappi3_irq_type = {
+ .name = "MAPPI3-IRQ",
+ .irq_shutdown = shutdown_mappi3,
+ .irq_mask = mask_mappi3,
+ .irq_unmask = unmask_mappi3,
};
void __init init_IRQ(void)
{
#if defined(CONFIG_SMC91X)
/* INT0 : LAN controller (SMC91111) */
- irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_INT0].action = 0;
- irq_desc[M32R_IRQ_INT0].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT0, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_mappi3_irq(M32R_IRQ_INT0);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_mappi3_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_mappi3_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi3_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_mappi3_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_mappi3_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_M32R_USE_DBG_CONSOLE */
#if defined(CONFIG_USB)
/* INT1 : USB Host controller interrupt */
- irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].chip = &mappi3_irq_type;
- irq_desc[M32R_IRQ_INT1].action = 0;
- irq_desc[M32R_IRQ_INT1].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT1, &mappi3_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_ISMOD01;
disable_mappi3_irq(M32R_IRQ_INT1);
#endif /* CONFIG_USB */
/* CFC IREQ */
- irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].chip = &mappi3_irq_type;
- irq_desc[PLD_IRQ_CFIREQ].action = 0;
- irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFIREQ, &mappi3_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_CFIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
disable_mappi3_irq(PLD_IRQ_CFIREQ);
#if defined(CONFIG_M32R_CFC)
/* ICUCR41: CFC Insert & eject */
- irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].chip = &mappi3_irq_type;
- irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
- irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_INSERT, &mappi3_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_CFC_INSERT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD00;
disable_mappi3_irq(PLD_IRQ_CFC_INSERT);
#endif /* CONFIG_M32R_CFC */
/* IDE IREQ */
- irq_desc[PLD_IRQ_IDEIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_IDEIREQ].chip = &mappi3_irq_type;
- irq_desc[PLD_IRQ_IDEIREQ].action = 0;
- irq_desc[PLD_IRQ_IDEIREQ].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_IDEIREQ, &mappi3_irq_type,
+ handle_level_irq);
icu_data[PLD_IRQ_IDEIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_mappi3_irq(PLD_IRQ_IDEIREQ);
outl(data, port);
}
-static void mask_and_ack_mappi(unsigned int irq)
+static void mask_oaks32r(struct irq_data *data)
{
- disable_oaks32r_irq(irq);
+ disable_oaks32r_irq(data->irq);
}
-static void end_oaks32r_irq(unsigned int irq)
+static void unmask_oaks32r(struct irq_data *data)
{
- enable_oaks32r_irq(irq);
+ enable_oaks32r_irq(data->irq);
}
-static unsigned int startup_oaks32r_irq(unsigned int irq)
-{
- enable_oaks32r_irq(irq);
- return (0);
-}
-
-static void shutdown_oaks32r_irq(unsigned int irq)
+static void shutdown_oaks32r(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip oaks32r_irq_type =
{
- .name = "OAKS32R-IRQ",
- .startup = startup_oaks32r_irq,
- .shutdown = shutdown_oaks32r_irq,
- .enable = enable_oaks32r_irq,
- .disable = disable_oaks32r_irq,
- .ack = mask_and_ack_mappi,
- .end = end_oaks32r_irq
+ .name = "OAKS32R-IRQ",
+ .irq_shutdown = shutdown_oaks32r,
+ .irq_mask = mask_oaks32r,
+ .irq_unmask = unmask_oaks32r,
};
void __init init_IRQ(void)
#ifdef CONFIG_NE2000
/* INT3 : LAN controller (RTL8019AS) */
- irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_INT3].action = 0;
- irq_desc[M32R_IRQ_INT3].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT3, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_oaks32r_irq(M32R_IRQ_INT3);
#endif /* CONFIG_M32R_NE2000 */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_oaks32r_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_oaks32r_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_oaks32r_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_oaks32r_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &oaks32r_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &oaks32r_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_oaks32r_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
outl(data, port);
}
-static void mask_and_ack_opsput(unsigned int irq)
+static void mask_opsput(struct irq_data *data)
{
- disable_opsput_irq(irq);
+ disable_opsput_irq(data->irq);
}
-static void end_opsput_irq(unsigned int irq)
+static void unmask_opsput(struct irq_data *data)
{
- enable_opsput_irq(irq);
+ enable_opsput_irq(data->irq);
}
-static unsigned int startup_opsput_irq(unsigned int irq)
-{
- enable_opsput_irq(irq);
- return (0);
-}
-
-static void shutdown_opsput_irq(unsigned int irq)
+static void shutdown_opsput(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip opsput_irq_type =
{
- .name = "OPSPUT-IRQ",
- .startup = startup_opsput_irq,
- .shutdown = shutdown_opsput_irq,
- .enable = enable_opsput_irq,
- .disable = disable_opsput_irq,
- .ack = mask_and_ack_opsput,
- .end = end_opsput_irq
+ .name = "OPSPUT-IRQ",
+ .irq_shutdown = shutdown_opsput,
+ .irq_mask = mask_opsput,
+ .irq_unmask = unmask_opsput,
};
/*
unsigned int pldirq;
pldirq = irq2pldirq(irq);
-// disable_opsput_irq(M32R_IRQ_INT1);
port = pldirq2port(pldirq);
data = pld_icu_data[pldirq].icucr|PLD_ICUCR_ILEVEL7;
outw(data, port);
unsigned int pldirq;
pldirq = irq2pldirq(irq);
-// enable_opsput_irq(M32R_IRQ_INT1);
port = pldirq2port(pldirq);
data = pld_icu_data[pldirq].icucr|PLD_ICUCR_IEN|PLD_ICUCR_ILEVEL6;
outw(data, port);
}
-static void mask_and_ack_opsput_pld(unsigned int irq)
-{
- disable_opsput_pld_irq(irq);
-// mask_and_ack_opsput(M32R_IRQ_INT1);
-}
-
-static void end_opsput_pld_irq(unsigned int irq)
+static void mask_opsput_pld(struct irq_data *data)
{
- enable_opsput_pld_irq(irq);
- end_opsput_irq(M32R_IRQ_INT1);
+ disable_opsput_pld_irq(data->irq);
}
-static unsigned int startup_opsput_pld_irq(unsigned int irq)
+static void unmask_opsput_pld(struct irq_data *data)
{
- enable_opsput_pld_irq(irq);
- return (0);
+ enable_opsput_pld_irq(data->irq);
+ enable_opsput_irq(M32R_IRQ_INT1);
}
-static void shutdown_opsput_pld_irq(unsigned int irq)
+static void shutdown_opsput_pld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2pldirq(irq);
-// shutdown_opsput_irq(M32R_IRQ_INT1);
+ pldirq = irq2pldirq(data->irq);
port = pldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip opsput_pld_irq_type =
{
- .name = "OPSPUT-PLD-IRQ",
- .startup = startup_opsput_pld_irq,
- .shutdown = shutdown_opsput_pld_irq,
- .enable = enable_opsput_pld_irq,
- .disable = disable_opsput_pld_irq,
- .ack = mask_and_ack_opsput_pld,
- .end = end_opsput_pld_irq
+ .name = "OPSPUT-PLD-IRQ",
+ .irq_shutdown = shutdown_opsput_pld,
+ .irq_mask = mask_opsput_pld,
+ .irq_unmask = unmask_opsput_pld,
};
/*
outw(data, port);
}
-static void mask_and_ack_opsput_lanpld(unsigned int irq)
-{
- disable_opsput_lanpld_irq(irq);
-}
-
-static void end_opsput_lanpld_irq(unsigned int irq)
+static void mask_opsput_lanpld(struct irq_data *data)
{
- enable_opsput_lanpld_irq(irq);
- end_opsput_irq(M32R_IRQ_INT0);
+ disable_opsput_lanpld_irq(data->irq);
}
-static unsigned int startup_opsput_lanpld_irq(unsigned int irq)
+static void unmask_opsput_lanpld(struct irq_data *data)
{
- enable_opsput_lanpld_irq(irq);
- return (0);
+ enable_opsput_lanpld_irq(data->irq);
+ enable_opsput_irq(M32R_IRQ_INT0);
}
-static void shutdown_opsput_lanpld_irq(unsigned int irq)
+static void shutdown_opsput_lanpld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2lanpldirq(irq);
+ pldirq = irq2lanpldirq(data->irq);
port = lanpldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip opsput_lanpld_irq_type =
{
- .name = "OPSPUT-PLD-LAN-IRQ",
- .startup = startup_opsput_lanpld_irq,
- .shutdown = shutdown_opsput_lanpld_irq,
- .enable = enable_opsput_lanpld_irq,
- .disable = disable_opsput_lanpld_irq,
- .ack = mask_and_ack_opsput_lanpld,
- .end = end_opsput_lanpld_irq
+ .name = "OPSPUT-PLD-LAN-IRQ",
+ .irq_shutdown = shutdown_opsput_lanpld,
+ .irq_mask = mask_opsput_lanpld,
+ .irq_unmask = unmask_opsput_lanpld,
};
/*
outw(data, port);
}
-static void mask_and_ack_opsput_lcdpld(unsigned int irq)
-{
- disable_opsput_lcdpld_irq(irq);
-}
-
-static void end_opsput_lcdpld_irq(unsigned int irq)
+static void mask_opsput_lcdpld(struct irq_data *data)
{
- enable_opsput_lcdpld_irq(irq);
- end_opsput_irq(M32R_IRQ_INT2);
+ disable_opsput_lcdpld_irq(data->irq);
}
-static unsigned int startup_opsput_lcdpld_irq(unsigned int irq)
+static void unmask_opsput_lcdpld(struct irq_data *data)
{
- enable_opsput_lcdpld_irq(irq);
- return (0);
+ enable_opsput_lcdpld_irq(data->irq);
+ enable_opsput_irq(M32R_IRQ_INT2);
}
-static void shutdown_opsput_lcdpld_irq(unsigned int irq)
+static void shutdown_opsput_lcdpld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2lcdpldirq(irq);
+ pldirq = irq2lcdpldirq(data->irq);
port = lcdpldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
-static struct irq_chip opsput_lcdpld_irq_type =
-{
- "OPSPUT-PLD-LCD-IRQ",
- startup_opsput_lcdpld_irq,
- shutdown_opsput_lcdpld_irq,
- enable_opsput_lcdpld_irq,
- disable_opsput_lcdpld_irq,
- mask_and_ack_opsput_lcdpld,
- end_opsput_lcdpld_irq
+static struct irq_chip opsput_lcdpld_irq_type = {
+ .name = "OPSPUT-PLD-LCD-IRQ",
+ .irq_shutdown = shutdown_opsput_lcdpld,
+ .irq_mask = mask_opsput_lcdpld,
+ .irq_unmask = unmask_opsput_lcdpld,
};
void __init init_IRQ(void)
{
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on OPSPUT-LAN (SMC91C111)*/
- irq_desc[OPSPUT_LAN_IRQ_LAN].status = IRQ_DISABLED;
- irq_desc[OPSPUT_LAN_IRQ_LAN].chip = &opsput_lanpld_irq_type;
- irq_desc[OPSPUT_LAN_IRQ_LAN].action = 0;
- irq_desc[OPSPUT_LAN_IRQ_LAN].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(OPSPUT_LAN_IRQ_LAN, &opsput_lanpld_irq_type,
+ handle_level_irq);
lanpld_icu_data[irq2lanpldirq(OPSPUT_LAN_IRQ_LAN)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* "H" edge sense */
disable_opsput_lanpld_irq(OPSPUT_LAN_IRQ_LAN);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_opsput_irq(M32R_IRQ_MFT2);
/* SIO0 : receive */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_opsput_irq(M32R_IRQ_SIO0_R);
/* SIO0 : send */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_opsput_irq(M32R_IRQ_SIO0_S);
/* SIO1 : receive */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_opsput_irq(M32R_IRQ_SIO1_R);
/* SIO1 : send */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_opsput_irq(M32R_IRQ_SIO1_S);
/* DMA1 : */
- irq_desc[M32R_IRQ_DMA1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_DMA1].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_DMA1].action = 0;
- irq_desc[M32R_IRQ_DMA1].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_DMA1, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_DMA1].icucr = 0;
disable_opsput_irq(M32R_IRQ_DMA1);
#ifdef CONFIG_SERIAL_M32R_PLDSIO
/* INT#1: SIO0 Receive on PLD */
- irq_desc[PLD_IRQ_SIO0_RCV].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_RCV].chip = &opsput_pld_irq_type;
- irq_desc[PLD_IRQ_SIO0_RCV].action = 0;
- irq_desc[PLD_IRQ_SIO0_RCV].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_SIO0_RCV, &opsput_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_RCV)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
disable_opsput_pld_irq(PLD_IRQ_SIO0_RCV);
/* INT#1: SIO0 Send on PLD */
- irq_desc[PLD_IRQ_SIO0_SND].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_SND].chip = &opsput_pld_irq_type;
- irq_desc[PLD_IRQ_SIO0_SND].action = 0;
- irq_desc[PLD_IRQ_SIO0_SND].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_SIO0_SND, &opsput_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_SND)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
disable_opsput_pld_irq(PLD_IRQ_SIO0_SND);
#endif /* CONFIG_SERIAL_M32R_PLDSIO */
/* INT#1: CFC IREQ on PLD */
- irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].chip = &opsput_pld_irq_type;
- irq_desc[PLD_IRQ_CFIREQ].action = 0;
- irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFIREQ, &opsput_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFIREQ)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* 'L' level sense */
disable_opsput_pld_irq(PLD_IRQ_CFIREQ);
/* INT#1: CFC Insert on PLD */
- irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].chip = &opsput_pld_irq_type;
- irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
- irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_INSERT, &opsput_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_INSERT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD00; /* 'L' edge sense */
disable_opsput_pld_irq(PLD_IRQ_CFC_INSERT);
/* INT#1: CFC Eject on PLD */
- irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].chip = &opsput_pld_irq_type;
- irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
- irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CFC_EJECT, &opsput_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_EJECT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* 'H' edge sense */
disable_opsput_pld_irq(PLD_IRQ_CFC_EJECT);
enable_opsput_irq(M32R_IRQ_INT1);
#if defined(CONFIG_USB)
- outw(USBCR_OTGS, USBCR); /* USBCR: non-OTG */
-
- irq_desc[OPSPUT_LCD_IRQ_USB_INT1].status = IRQ_DISABLED;
- irq_desc[OPSPUT_LCD_IRQ_USB_INT1].chip = &opsput_lcdpld_irq_type;
- irq_desc[OPSPUT_LCD_IRQ_USB_INT1].action = 0;
- irq_desc[OPSPUT_LCD_IRQ_USB_INT1].depth = 1;
- lcdpld_icu_data[irq2lcdpldirq(OPSPUT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
- disable_opsput_lcdpld_irq(OPSPUT_LCD_IRQ_USB_INT1);
+ outw(USBCR_OTGS, USBCR); /* USBCR: non-OTG */
+ set_irq_chip_and_handler(OPSPUT_LCD_IRQ_USB_INT1,
+ &opsput_lcdpld_irq_type, handle_level_irq);
+ lcdpld_icu_data[irq2lcdpldirq(OPSPUT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
+ disable_opsput_lcdpld_irq(OPSPUT_LCD_IRQ_USB_INT1);
#endif
/*
* INT2# is used for BAT, USB, AUDIO
/*
* INT3# is used for AR
*/
- irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].chip = &opsput_irq_type;
- irq_desc[M32R_IRQ_INT3].action = 0;
- irq_desc[M32R_IRQ_INT3].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_INT3, &opsput_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_opsput_irq(M32R_IRQ_INT3);
#endif /* CONFIG_VIDEO_M32R_AR */
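The hunks above replace open-coded writes to irq_desc[] fields with a single genirq call. In isolation, the new registration pattern looks roughly like the sketch below; example_irq_chip and EXAMPLE_IRQ are placeholders, not names from this patch.

#include <linux/irq.h>

/* Hypothetical platform chip and IRQ number, for illustration only. */
static struct irq_chip example_irq_chip;
#define EXAMPLE_IRQ	42

static void __init example_init_one_irq(void)
{
	/*
	 * One call installs both the chip and the level-type flow handler,
	 * replacing the old direct assignments to
	 * irq_desc[irq].status/.chip/.action/.depth.
	 */
	set_irq_chip_and_handler(EXAMPLE_IRQ, &example_irq_chip,
				 handle_level_irq);
}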
outl(data, port);
}
-static void mask_and_ack_mappi(unsigned int irq)
+static void mask_mappi(struct irq_data *data)
{
- disable_mappi_irq(irq);
+ disable_mappi_irq(data->irq);
}
-static void end_mappi_irq(unsigned int irq)
+static void unmask_mappi(struct irq_data *data)
{
- enable_mappi_irq(irq);
+ enable_mappi_irq(data->irq);
}
-static unsigned int startup_mappi_irq(unsigned int irq)
-{
- enable_mappi_irq(irq);
- return 0;
-}
-
-static void shutdown_mappi_irq(unsigned int irq)
+static void shutdown_mappi(struct irq_data *data)
{
unsigned long port;
- port = irq2port(irq);
+ port = irq2port(data->irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct irq_chip mappi_irq_type =
{
- .name = "M32700-IRQ",
- .startup = startup_mappi_irq,
- .shutdown = shutdown_mappi_irq,
- .enable = enable_mappi_irq,
- .disable = disable_mappi_irq,
- .ack = mask_and_ack_mappi,
- .end = end_mappi_irq
+ .name = "M32700-IRQ",
+ .irq_shutdown = shutdown_mappi,
+ .irq_mask = mask_mappi,
+ .irq_unmask = unmask_mappi,
};
/*
outw(data, port);
}
-static void mask_and_ack_m32700ut_pld(unsigned int irq)
+static void mask_m32700ut_pld(struct irq_data *data)
{
- disable_m32700ut_pld_irq(irq);
+ disable_m32700ut_pld_irq(data->irq);
}
-static void end_m32700ut_pld_irq(unsigned int irq)
+static void unmask_m32700ut_pld(struct irq_data *data)
{
- enable_m32700ut_pld_irq(irq);
- end_mappi_irq(M32R_IRQ_INT1);
-}
-
-static unsigned int startup_m32700ut_pld_irq(unsigned int irq)
-{
- enable_m32700ut_pld_irq(irq);
- return 0;
+ enable_m32700ut_pld_irq(data->irq);
+ enable_mappi_irq(M32R_IRQ_INT1);
}
-static void shutdown_m32700ut_pld_irq(unsigned int irq)
+static void shutdown_m32700ut_pld(struct irq_data *data)
{
unsigned long port;
unsigned int pldirq;
- pldirq = irq2pldirq(irq);
+ pldirq = irq2pldirq(data->irq);
port = pldirq2port(pldirq);
outw(PLD_ICUCR_ILEVEL7, port);
}
static struct irq_chip m32700ut_pld_irq_type =
{
- .name = "USRV-PLD-IRQ",
- .startup = startup_m32700ut_pld_irq,
- .shutdown = shutdown_m32700ut_pld_irq,
- .enable = enable_m32700ut_pld_irq,
- .disable = disable_m32700ut_pld_irq,
- .ack = mask_and_ack_m32700ut_pld,
- .end = end_m32700ut_pld_irq
+ .name = "USRV-PLD-IRQ",
+ .irq_shutdown = shutdown_m32700ut_pld,
+ .irq_mask = mask_m32700ut_pld,
+ .irq_unmask = unmask_m32700ut_pld,
};
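The irq_chip conversions above switch the callbacks from taking a bare IRQ number to taking a struct irq_data, and drop the obsolete startup/shutdown/ack/end hooks in favour of irq_mask/irq_unmask. A minimal sketch of a chip written in the new style, with placeholder names, might look like this.

#include <linux/irq.h>

static void example_hw_mask(unsigned int irq)   { /* poke the hardware */ }
static void example_hw_unmask(unsigned int irq) { /* poke the hardware */ }

/* New-style callbacks receive struct irq_data and read ->irq from it. */
static void example_irq_mask(struct irq_data *data)
{
	example_hw_mask(data->irq);
}

static void example_irq_unmask(struct irq_data *data)
{
	example_hw_unmask(data->irq);
}

static struct irq_chip example_irq_chip = {
	.name       = "EXAMPLE-IRQ",
	.irq_mask   = example_irq_mask,
	.irq_unmask = example_irq_unmask,
};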
void __init init_IRQ(void)
once++;
/* MFT2 : system timer */
- irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
- irq_desc[M32R_IRQ_MFT2].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_MFT2, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_mappi_irq(M32R_IRQ_MFT2);
#if defined(CONFIG_SERIAL_M32R_SIO)
/* SIO0_R : uart receive data */
- irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
- irq_desc[M32R_IRQ_SIO0_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_R, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
- irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
- irq_desc[M32R_IRQ_SIO0_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO0_S, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
- irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
- irq_desc[M32R_IRQ_SIO1_R].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_R, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
- irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
- irq_desc[M32R_IRQ_SIO1_S].depth = 1;
+ set_irq_chip_and_handler(M32R_IRQ_SIO1_S, &mappi_irq_type,
+ handle_level_irq);
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
/* INT#67-#71: CFC#0 IREQ on PLD */
for (i = 0 ; i < CONFIG_M32R_CFC_NUM ; i++ ) {
- irq_desc[PLD_IRQ_CF0 + i].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CF0 + i].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_CF0 + i].action = 0;
- irq_desc[PLD_IRQ_CF0 + i].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_CF0 + i,
+ &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_CF0 + i)].icucr
= PLD_ICUCR_ISMOD01; /* 'L' level sense */
disable_m32700ut_pld_irq(PLD_IRQ_CF0 + i);
#if defined(CONFIG_SERIAL_8250) || defined(CONFIG_SERIAL_8250_MODULE)
/* INT#76: 16552D#0 IREQ on PLD */
- irq_desc[PLD_IRQ_UART0].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_UART0].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_UART0].action = 0;
- irq_desc[PLD_IRQ_UART0].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_UART0, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_UART0)].icucr
= PLD_ICUCR_ISMOD03; /* 'H' level sense */
disable_m32700ut_pld_irq(PLD_IRQ_UART0);
/* INT#77: 16552D#1 IREQ on PLD */
- irq_desc[PLD_IRQ_UART1].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_UART1].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_UART1].action = 0;
- irq_desc[PLD_IRQ_UART1].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_UART1, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_UART1)].icucr
= PLD_ICUCR_ISMOD03; /* 'H' level sense */
disable_m32700ut_pld_irq(PLD_IRQ_UART1);
#if defined(CONFIG_IDC_AK4524) || defined(CONFIG_IDC_AK4524_MODULE)
/* INT#80: AK4524 IREQ on PLD */
- irq_desc[PLD_IRQ_SNDINT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SNDINT].chip = &m32700ut_pld_irq_type;
- irq_desc[PLD_IRQ_SNDINT].action = 0;
- irq_desc[PLD_IRQ_SNDINT].depth = 1; /* disable nested irq */
+ set_irq_chip_and_handler(PLD_IRQ_SNDINT, &m32700ut_pld_irq_type,
+ handle_level_irq);
pld_icu_data[irq2pldirq(PLD_IRQ_SNDINT)].icucr
= PLD_ICUCR_ISMOD01; /* 'L' level sense */
disable_m32700ut_pld_irq(PLD_IRQ_SNDINT);
static int __init amiga_savekmsg_setup(char *arg)
{
- static struct resource debug_res = { .name = "Debug" };
-
if (!MACH_IS_AMIGA || strcmp(arg, "mem"))
- goto done;
+ return 0;
- if (!AMIGAHW_PRESENT(CHIP_RAM)) {
- printk("Warning: no chipram present for debugging\n");
- goto done;
+ if (amiga_chip_size < SAVEKMSG_MAXMEM) {
+ pr_err("Not enough chipram for debugging\n");
+ return -ENOMEM;
}
- savekmsg = amiga_chip_alloc_res(SAVEKMSG_MAXMEM, &debug_res);
+ /* Just steal the block, the chipram allocator isn't functional yet */
+ amiga_chip_size -= SAVEKMSG_MAXMEM;
+ savekmsg = (void *)ZTWO_VADDR(CHIP_PHYSADDR + amiga_chip_size);
savekmsg->magic1 = SAVEKMSG_MAGIC1;
savekmsg->magic2 = SAVEKMSG_MAGIC2;
savekmsg->magicptr = ZTWO_PADDR(savekmsg);
amiga_console_driver.write = amiga_mem_console_write;
register_console(&amiga_console_driver);
-
-done:
return 0;
}
}
if (ATARIHW_PRESENT(SCC) && !atari_SCC_reset_done) {
- scc.cha_a_ctrl = 9;
+ atari_scc.cha_a_ctrl = 9;
MFPDELAY();
- scc.cha_a_ctrl = (char) 0xc0; /* hardware reset */
+ atari_scc.cha_a_ctrl = (char) 0xc0; /* hardware reset */
}
if (ATARIHW_PRESENT(SCU)) {
ATARIHW_SET(SCC_DMA);
printk("SCC_DMA ");
}
- if (scc_test(&scc.cha_a_ctrl)) {
+ if (scc_test(&atari_scc.cha_a_ctrl)) {
ATARIHW_SET(SCC);
printk("SCC ");
}
{
do {
MFPDELAY();
- } while (!(scc.cha_b_ctrl & 0x04)); /* wait for tx buf empty */
+ } while (!(atari_scc.cha_b_ctrl & 0x04)); /* wait for tx buf empty */
MFPDELAY();
- scc.cha_b_data = c;
+ atari_scc.cha_b_data = c;
}
static void atari_scc_console_write(struct console *co, const char *str,
{
do {
MFPDELAY();
- } while (!(scc.cha_b_ctrl & 0x01)); /* wait for rx buf filled */
+ } while (!(atari_scc.cha_b_ctrl & 0x01)); /* wait for rx buf filled */
MFPDELAY();
- return scc.cha_b_data;
+ return atari_scc.cha_b_data;
}
int atari_midi_console_wait_key(struct console *co)
#define SCC_WRITE(reg, val) \
do { \
- scc.cha_b_ctrl = (reg); \
+ atari_scc.cha_b_ctrl = (reg); \
MFPDELAY(); \
- scc.cha_b_ctrl = (val); \
+ atari_scc.cha_b_ctrl = (val); \
MFPDELAY(); \
} while (0)
reg3 = (cflag & CSIZE) == CS8 ? 0xc0 : 0x40;
reg5 = (cflag & CSIZE) == CS8 ? 0x60 : 0x20 | 0x82 /* assert DTR/RTS */;
- (void)scc.cha_b_ctrl; /* reset reg pointer */
+ (void)atari_scc.cha_b_ctrl; /* reset reg pointer */
SCC_WRITE(9, 0xc0); /* reset */
LONG_DELAY(); /* extra delay after WR9 access */
SCC_WRITE(4, (cflag & PARENB) ? ((cflag & PARODD) ? 0x01 : 0x03)
u_char char_dummy3;
u_char cha_b_data;
};
-# define scc ((*(volatile struct SCC*)SCC_BAS))
+# define atari_scc ((*(volatile struct SCC*)SCC_BAS))
/* The ESCC (Z85230) in an Atari ST. The channels are reversed! */
# define st_escc ((*(volatile struct SCC*)0xfffffa31))
strcpy(__d + strlen(__d), (s)); \
})
-#define __HAVE_ARCH_STRCHR
-static inline char *strchr(const char *s, int c)
-{
- char sc, ch = c;
-
- for (; (sc = *s++) != ch; ) {
- if (!sc)
- return NULL;
- }
- return (char *)s - 1;
-}
-
#ifndef CONFIG_COLDFIRE
#define __HAVE_ARCH_STRCMP
static inline int strcmp(const char *cs, const char *ct)
: "+a" (cs), "+a" (ct), "=d" (res));
return res;
}
+#endif /* CONFIG_COLDFIRE */
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *, const void *, __kernel_size_t);
-#define __HAVE_ARCH_MEMCMP
-extern int memcmp(const void *, const void *, __kernel_size_t);
#define memcmp(d, s, n) __builtin_memcmp(d, s, n)
-#endif /* CONFIG_COLDFIRE */
#define __HAVE_ARCH_MEMSET
extern void *memset(void *, int, __kernel_size_t);
return xdest;
}
EXPORT_SYMBOL(memmove);
-
-int memcmp(const void *cs, const void *ct, size_t count)
-{
- const unsigned char *su1, *su2;
-
- for (su1 = cs, su2 = ct; count > 0; ++su1, ++su2, count--)
- if (*su1 != *su2)
- return *su1 < *su2 ? -1 : +1;
- return 0;
-}
-EXPORT_SYMBOL(memcmp);
bool
default y
select HAVE_IDE
+ select HAVE_GENERIC_HARDIRQS
config MMU
bool
bool
default y
-config GENERIC_HARDIRQS
- bool
- default y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- bool
- default y
-
config GENERIC_CALIBRATE_DELAY
bool
default y
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
CONFIG_EXPERIMENTAL=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_FUTEX is not set
*(__param)
__stop___param = .;
+ /* Built-in module versions */
+ . = ALIGN(4) ;
+ __start___modver = .;
+ *(__modver)
+ __stop___modver = .;
+
. = ALIGN(4) ;
_etext = . ;
} > TEXT
lib-y := ashldi3.o ashrdi3.o lshrdi3.o \
muldi3.o mulsi3.o divsi3.o udivsi3.o modsi3.o umodsi3.o \
- checksum.o memcpy.o memset.o delay.o
+ checksum.o memcpy.o memmove.o memset.o delay.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive
+ * for more details.
+ */
+
+#define __IN_STRING_C
+
+#include <linux/module.h>
+#include <linux/string.h>
+
+void *memmove(void *dest, const void *src, size_t n)
+{
+ void *xdest = dest;
+ size_t temp;
+
+ if (!n)
+ return xdest;
+
+ if (dest < src) {
+ if ((long)dest & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ *cdest++ = *csrc++;
+ dest = cdest;
+ src = csrc;
+ n--;
+ }
+ if (n > 2 && (long)dest & 2) {
+ short *sdest = dest;
+ const short *ssrc = src;
+ *sdest++ = *ssrc++;
+ dest = sdest;
+ src = ssrc;
+ n -= 2;
+ }
+ temp = n >> 2;
+ if (temp) {
+ long *ldest = dest;
+ const long *lsrc = src;
+ temp--;
+ do
+ *ldest++ = *lsrc++;
+ while (temp--);
+ dest = ldest;
+ src = lsrc;
+ }
+ if (n & 2) {
+ short *sdest = dest;
+ const short *ssrc = src;
+ *sdest++ = *ssrc++;
+ dest = sdest;
+ src = ssrc;
+ }
+ if (n & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ *cdest = *csrc;
+ }
+ } else {
+ dest = (char *)dest + n;
+ src = (const char *)src + n;
+ if ((long)dest & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ *--cdest = *--csrc;
+ dest = cdest;
+ src = csrc;
+ n--;
+ }
+ if (n > 2 && (long)dest & 2) {
+ short *sdest = dest;
+ const short *ssrc = src;
+ *--sdest = *--ssrc;
+ dest = sdest;
+ src = ssrc;
+ n -= 2;
+ }
+ temp = n >> 2;
+ if (temp) {
+ long *ldest = dest;
+ const long *lsrc = src;
+ temp--;
+ do
+ *--ldest = *--lsrc;
+ while (temp--);
+ dest = ldest;
+ src = lsrc;
+ }
+ if (n & 2) {
+ short *sdest = dest;
+ const short *ssrc = src;
+ *--sdest = *--ssrc;
+ dest = sdest;
+ src = ssrc;
+ }
+ if (n & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ *--cdest = *--csrc;
+ }
+ }
+ return xdest;
+}
+EXPORT_SYMBOL(memmove);
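For reference, the property this generic memmove() preserves is safe copying between overlapping buffers: it copies forward when the destination lies below the source and backward otherwise. A plain user-space sketch of that guarantee (not part of the patch):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[16] = "abcdef";

	/*
	 * Shift "abcdef" right by two within the same buffer; the regions
	 * overlap, which memcpy() is not required to handle correctly.
	 */
	memmove(buf + 2, buf, 6);
	buf[8] = '\0';

	printf("%s\n", buf);	/* prints "ababcdef" */
	return 0;
}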
int irq;
/* GPIO interrupt sources */
- for (irq = MCFINTC2_GPIOIRQ0; (irq <= MCFINTC2_GPIOIRQ7); irq++)
+ for (irq = MCFINTC2_GPIOIRQ0; (irq <= MCFINTC2_GPIOIRQ7); irq++) {
irq_desc[irq].chip = &intc2_irq_gpio_chip;
+ set_irq_handler(irq, handle_edge_irq);
+ }
return 0;
}
movel %d1,%a2
1:
move %a2@(TI_FLAGS),%d1 /* thread_info->flags */
- andl #_TIF_WORK_MASK,%d1
jne Lwork_to_do
RESTORE_ALL
cpm_install_handler(int vec, void (*handler)(), void *dev_id)
{
- request_irq(vec, handler, IRQ_FLG_LOCK, "timer", dev_id);
+ request_irq(vec, handler, 0, "timer", dev_id);
/* if (cpm_vecs[vec].handler != 0) */
/* printk(KERN_INFO "CPM interrupt %x replacing %x\n", */
/* Set compare register 32Khz / 32 / 10 = 100 */
TCMP = 10;
- request_irq(IRQ_MACHSPEC | 1, timer_routine, IRQ_FLG_LOCK, "timer", NULL);
+ request_irq(IRQ_MACHSPEC | 1, timer_routine, 0, "timer", NULL);
#endif
/* General purpose quicc timers: MC68360UM p7-20 */
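The request_irq() calls above simply drop the obsolete IRQ_FLG_LOCK flag and pass 0 instead. A self-contained sketch of a registration in that style, with placeholder names:

#include <linux/interrupt.h>

static irqreturn_t example_timer_interrupt(int irq, void *dev_id)
{
	/* acknowledge the hardware here */
	return IRQ_HANDLED;
}

static int example_register_timer_irq(unsigned int irq)
{
	/* Flags are 0 (or a modern IRQF_* flag); IRQ_FLG_LOCK is gone. */
	return request_irq(irq, example_timer_interrupt, 0, "timer", NULL);
}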
movel %d1,%a2
1:
move %a2@(TI_FLAGS),%d1 /* thread_info->flags */
- andl #_TIF_WORK_MASK,%d1
jne Lwork_to_do
RESTORE_ALL
pquicc->intr_cimr = 0x00000000;
for (i = 0; (i < NR_IRQS); i++) {
- set_irq_chip(irq, &intc_irq_chip);
- set_irq_handler(irq, handle_level_irq);
+ set_irq_chip(i, &intc_irq_chip);
+ set_irq_handler(i, handle_level_irq);
}
}
andl #-THREAD_SIZE,%d1 /* at base of kernel stack */
movel %d1,%a0
movel %a0@(TI_FLAGS),%d1 /* get thread_info->flags */
- andl #0xefff,%d1
jne Lwork_to_do /* still work to do */
Lreturn:
select TRACING_SUPPORT
select OF
select OF_EARLY_FLATTREE
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
config SWAP
def_bool n
config GENERIC_HWEIGHT
def_bool y
-config GENERIC_HARDIRQS
- def_bool y
-
-config GENERIC_IRQ_PROBE
- def_bool y
-
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_CLOCKEVENTS
def_bool y
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
config GENERIC_GPIO
def_bool y
CONFIG_INITRAMFS_SOURCE="rootfs.cpio"
CONFIG_INITRAMFS_COMPRESSION_GZIP=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_HOTPLUG is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_HOTPLUG is not set
#include <linux/types.h>
#include <asm/registers.h>
-#ifdef CONFIG_XILINX_MICROBLAZE0_USE_MSR_INSTR
+#if CONFIG_XILINX_MICROBLAZE0_USE_MSR_INSTR
static inline unsigned long arch_local_irq_save(void)
{
static inline unsigned long pte_update(pte_t *p, unsigned long clr,
unsigned long set)
{
- unsigned long old, tmp, msr;
-
- __asm__ __volatile__("\
- msrclr %2, 0x2\n\
- nop\n\
- lw %0, %4, r0\n\
- andn %1, %0, %5\n\
- or %1, %1, %6\n\
- sw %1, %4, r0\n\
- mts rmsr, %2\n\
- nop"
- : "=&r" (old), "=&r" (tmp), "=&r" (msr), "=m" (*p)
- : "r" ((unsigned long)(p + 1) - 4), "r" (clr), "r" (set), "m" (*p)
- : "cc");
+ unsigned long flags, old, tmp;
+
+ raw_local_irq_save(flags);
+
+ __asm__ __volatile__( "lw %0, %2, r0 \n"
+ "andn %1, %0, %3 \n"
+ "or %1, %1, %4 \n"
+ "sw %1, %2, r0 \n"
+ : "=&r" (old), "=&r" (tmp)
+ : "r" ((unsigned long)(p + 1) - 4), "r" (clr), "r" (set)
+ : "cc");
+
+ raw_local_irq_restore(flags);
return old;
}
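The rewritten pte_update() above is an instance of the usual interrupts-off read-modify-write pattern, with the bit manipulation still done in inline assembly. Stripped of the assembly, the idea reads roughly as follows; example_update_word is an illustrative name, not from this patch.

#include <linux/irqflags.h>

/*
 * Illustrative only: clear the 'clr' bits and set the 'set' bits in *p,
 * atomically with respect to local interrupts, returning the old value.
 */
static inline unsigned long example_update_word(unsigned long *p,
						unsigned long clr,
						unsigned long set)
{
	unsigned long flags, old;

	raw_local_irq_save(flags);	/* no interrupt may intervene ... */
	old = *p;
	*p = (old & ~clr) | set;	/* ... while we read, modify, write */
	raw_local_irq_restore(flags);

	return old;
}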
register unsigned tmp __asm__("r3"); \
tmp = 0x0; /* Prevent warning about unused */ \
__asm__ __volatile__ ( \
- "mfs %0, rpvr" #pvrid ";" \
+ "mfs %0, rpvr" #pvrid ";" \
: "=r" (tmp) : : "memory"); \
val = tmp; \
}
if (!(flags & PVR_MSR_BIT))
return 0;
- get_single_pvr(0x00, pvr0);
+ get_single_pvr(0, pvr0);
pr_debug("%s: pvr0 is 0x%08x\n", __func__, pvr0);
if (pvr0 & PVR0_PVR_FULL_MASK)
andi r1, r1, ~2
mts rmsr, r1
/*
- * Here is checking mechanism which check if Microblaze has msr instructions
- * We load msr and compare it with previous r1 value - if is the same,
- * msr instructions works if not - cpu don't have them.
+ * According to Xilinx, msrclr instruction behaves like 'mfs rX,rpc'
+ * if the msrclr instruction is not enabled. We use this to detect
+ * if the opcode is available, by issuing msrclr and then testing the result.
+ * r8 == 0 - msr instructions are implemented
+ * r8 != 0 - msr instructions are not implemented
*/
- /* r8=0 - I have msr instr, 1 - I don't have them */
- rsubi r0, r0, 1 /* set the carry bit */
- msrclr r0, 0x4 /* try to clear it */
- /* read the carry bit, r8 will be '0' if msrclr exists */
- addik r8, r0, 0
+ msrclr r8, 0 /* clear nothing - just read msr for test */
+ cmpu r8, r8, r1 /* r1 must contain msr reg content */
/* r7 may point to an FDT, or there may be one linked in.
if it's in r7, we've got to save it away ASAP.
We ensure r7 points to a valid FDT, just in case the bootloader
is broken or non-existent */
beqi r7, no_fdt_arg /* NULL pointer? don't copy */
- lw r11, r0, r7 /* Does r7 point to a */
- rsubi r11, r11, OF_DT_HEADER /* valid FDT? */
+/* Does r7 point to a valid FDT? Load HEADER magic number */
+ /* Run time Big/Little endian platform */
+ /* Save 1 as word and load byte - 0 - BIG, 1 - LITTLE */
+ addik r11, r0, 0x1 /* BIG/LITTLE checking value */
+ /* __bss_start will be zeroed later - it is just temp location */
+ swi r11, r0, TOPHYS(__bss_start)
+ lbui r11, r0, TOPHYS(__bss_start)
+ beqid r11, big_endian /* DO NOT break delay stop dependency */
+ lw r11, r0, r7 /* Big endian load in delay slot */
+ lwr r11, r0, r7 /* Little endian load */
+big_endian:
+ rsubi r11, r11, OF_DT_HEADER /* Check FDT header */
beqi r11, _prepare_copy_fdt
or r7, r0, r0 /* clear R7 when not valid DTB */
bnei r11, no_fdt_arg /* No - get out of here */
#if CONFIG_XILINX_MICROBLAZE0_USE_BARREL > 0
#define BSRLI(rD, rA, imm) \
bsrli rD, rA, imm
- #elif CONFIG_XILINX_MICROBLAZE0_USE_DIV > 0
- #define BSRLI(rD, rA, imm) \
- ori rD, r0, (1 << imm); \
- idivu rD, rD, rA
#else
#define BSRLI(rD, rA, imm) BSRLI ## imm (rD, rA)
/* Only the used shift constants defined here - add more if needed */
#if CONFIG_XILINX_MICROBLAZE0_USE_MSR_INSTR
if (msr)
eprintk("!!!Your kernel has setup MSR instruction but "
- "CPU don't have it %d\n", msr);
+ "CPU don't have it %x\n", msr);
#else
if (!msr)
eprintk("!!!Your kernel not setup MSR instruction but "
- "CPU have it %d\n", msr);
+ "CPU have it %x\n", msr);
#endif
for (src = __ivt_start; src < __ivt_end; src++, dst++)
* between mem locations with size of xfer spec'd in bytes
*/
+#ifdef __MICROBLAZEEL__
+#error Microblaze LE does not support the ASM-optimized library functions. Disable OPT_LIB_ASM.
+#endif
+
#include <linux/linkage.h>
.text
.globl memcpy
bool
default y
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
#
# Select some configuration options automatically based on user selections.
#
source "lib/Kconfig.debug"
config EARLY_PRINTK
- bool "Early printk" if EMBEDDED
+ bool "Early printk" if EXPERT
depends on SYS_HAS_EARLY_PRINTK
default y
help
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_ELF_CORE is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_LZMA=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_SWAP is not set
CONFIG_TINY_RCU=y
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
CONFIG_NET_NS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_KERNEL_LZMA=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_MODULES=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_CPUSETS=y
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_RELAY=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SHMEM is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_RD_GZIP is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS_ALL=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_ELF_CORE is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
# CONFIG_PCSPKR_PLATFORM is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_NAMESPACES=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_SLAB=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_EPOLL is not set
CONFIG_SLAB=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_RELAY=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
config MN10300
def_bool y
select HAVE_OPROFILE
+ select GENERIC_HARDIRQS
config AM33_2
def_bool n
config RWSEM_XCHGADD_ALGORITHM
bool
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
config GENERIC_CALIBRATE_DELAY
def_bool y
config ARCH_HAS_ILOG2_U32
def_bool y
-# Use the generic interrupt handling code in kernel/irq/
-config GENERIC_HARDIRQS
- def_bool y
-
config HOTPLUG_CPU
def_bool n
CONFIG_TINY_RCU=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_RESOURCE_COUNTERS=y
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_SLAB=y
select HAVE_IRQ_WORK
select HAVE_PERF_EVENTS
select GENERIC_ATOMIC64 if !64BIT
- select GENERIC_HARDIRQS_NO__DO_IRQ
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
+ select IRQ_PER_CPU
+
help
The PA-RISC microprocessor is designed by Hewlett-Packard and used
in many of their workstations & servers (HP9000 700 and 800 series,
depends on SMP
default y
-config GENERIC_HARDIRQS
- def_bool y
-
-config GENERIC_IRQ_PROBE
- def_bool y
-
config HAVE_LATENCYTOP_SUPPORT
def_bool y
-config IRQ_PER_CPU
- bool
- default y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
# unless you want to implement ACPI on PA-RISC ... ;-)
config PM
bool
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_SLAB=y
CONFIG_PROFILING=y
struct console *tmp;
- acquire_console_sem();
+ console_lock();
for_each_console(tmp)
if (tmp == &pdc_cons)
break;
- release_console_sem();
+ console_unlock();
if (!tmp) {
printk(KERN_INFO "PDC console driver not registered anymore, not creating %s\n", pdc_cons.name);
config GENERIC_CLOCKEVENTS
def_bool y
-config GENERIC_HARDIRQS
- bool
- default y
-
-config GENERIC_HARDIRQS_NO__DO_IRQ
- bool
- default y
-
config HAVE_SETUP_PER_CPU_AREA
def_bool PPC64
config NEED_PER_CPU_EMBED_FIRST_CHUNK
def_bool PPC64
-config IRQ_PER_CPU
- bool
- default y
-
config NR_IRQS
int "Number of virtual interrupt numbers"
range 32 32768
select HAVE_PERF_EVENTS
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_HW_BREAKPOINT if PERF_EVENTS && PPC_BOOK3S_64
+ select HAVE_GENERIC_HARDIRQS
+ select HAVE_SPARSE_IRQ
+ select IRQ_PER_CPU
config EARLY_PRINTK
bool
CPU. Generally saying Y is safe, although some problems have been
reported with SMP Power Macintoshes with this option enabled.
-config SPARSE_IRQ
- bool "Support sparse irq numbering"
- default n
- help
- This enables support for sparse irqs. This is useful for distro
- kernels that want to define a high CONFIG_NR_CPUS value but still
- want to have low kernel memory footprint on smaller machines.
-
- ( Sparse IRQs can also be beneficial on NUMA boxes, as they spread
- out the irq_desc[] array in a more NUMA-friendly way. )
-
- If you don't know what to do here, say N.
-
config NUMA
bool "NUMA support"
depends on PPC64
extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(extra-y))
hostprogs-installed := $(patsubst %, $(DESTDIR)$(WRAPPER_BINDIR)/%, $(hostprogs-y))
wrapper-installed := $(DESTDIR)$(WRAPPER_BINDIR)/wrapper
-dts-installed := $(patsubst $(obj)/dts/%, $(DESTDIR)$(WRAPPER_DTSDIR)/%, $(wildcard $(obj)/dts/*.dts))
+dts-installed := $(patsubst $(dtstree)/%, $(DESTDIR)$(WRAPPER_DTSDIR)/%, $(wildcard $(dtstree)/*.dts))
all-installed := $(extra-installed) $(hostprogs-installed) $(wrapper-installed) $(dts-installed)
#address-cells = <1>;
#size-cells = <1>;
device_type = "soc";
- compatible = "fsl,mpc8315-immr", "simple-bus";
+ compatible = "fsl,mpc8308-immr", "simple-bus";
ranges = <0 0xe0000000 0x00100000>;
reg = <0xe0000000 0x00000200>;
bus-frequency = <0>;
ranges = <0x0 0xc100 0x200>;
cell-index = <1>;
dma00: dma-channel@0 {
- compatible = "fsl,eloplus-dma-channel";
+ compatible = "fsl,ssi-dma-channel";
reg = <0x0 0x80>;
cell-index = <0>;
interrupts = <76 2>;
};
dma01: dma-channel@80 {
- compatible = "fsl,eloplus-dma-channel";
+ compatible = "fsl,ssi-dma-channel";
reg = <0x80 0x80>;
cell-index = <1>;
interrupts = <77 2>;
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_PCI_QUIRKS is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_PROFILING=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_EPOLL is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_SLAB=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_KALLSYMS is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_KSI8560=y
CONFIG_CPM2=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_MPC8540_ADS=y
CONFIG_NO_HZ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_MPC8560_ADS=y
CONFIG_BINFMT_MISC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_MPC85xx_CDS=y
CONFIG_NO_HZ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_SBC8548=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_SBC8560=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
# CONFIG_EPOLL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_ELF_CORE is not set
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_BASE_FULL is not set
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_SLAB=y
# CONFIG_IOSCHED_CFQ is not set
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_BASE_FULL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_ELF_CORE is not set
CONFIG_PERF_COUNTERS=y
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_PPC_CHRP is not set
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_SLAB=y
# CONFIG_IOSCHED_CFQ is not set
CONFIG_SYSVIPC=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_HOTPLUG is not set
# CONFIG_BUG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_PPC_CHRP is not set
# CONFIG_PPC_PMAC is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_PPC_CHRP is not set
# CONFIG_PPC_PMAC is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_HOTPLUG is not set
# CONFIG_BUG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_BASE_FULL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_MODULES=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_PPC_CHRP is not set
# CONFIG_PPC_PMAC is not set
CONFIG_POSIX_MQUEUE=y
CONFIG_NAMESPACES=y
CONFIG_BLK_DEV_INITRD=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_KALLSYMS_EXTRA_PASS=y
# CONFIG_PERF_EVENTS is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_ALTIVEC=y
CONFIG_VSX=y
CONFIG_SMP=y
-CONFIG_NR_CPUS=128
+CONFIG_NR_CPUS=1024
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IRQ_ALL_CPUS=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_PPC_64K_PAGES=y
+CONFIG_PPC_SUBPAGE_PROT=y
CONFIG_SCHED_SMT=y
CONFIG_HOTPLUG_PCI=m
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_TIGON3=y
+CONFIG_BNX2=m
CONFIG_CHELSIO_T1=m
CONFIG_CHELSIO_T3=m
CONFIG_EHEA=y
# CONFIG_RCU_CPU_STALL_DETECTOR is not set
CONFIG_LATENCYTOP=y
CONFIG_SYSCTL_SYSCALL_CHECK=y
-CONFIG_IRQSOFF_TRACER=y
CONFIG_SCHED_TRACER=y
-CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_EXPERIMENTAL=y
CONFIG_SYSVIPC=y
CONFIG_LOG_BUF_SHIFT=14
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_SYSCTL_SYSCALL is not set
# CONFIG_ELF_CORE is not set
# CONFIG_BASE_FULL is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_ELF_CORE is not set
CONFIG_PERF_COUNTERS=y
# CONFIG_VM_EVENT_COUNTERS is not set
.align 2; \
label##3:
-#define MAKE_FTR_SECTION_ENTRY(msk, val, label, sect) \
-label##4: \
- .popsection; \
- .pushsection sect,"a"; \
- .align 3; \
-label##5: \
- FTR_ENTRY_LONG msk; \
- FTR_ENTRY_LONG val; \
- FTR_ENTRY_OFFSET label##1b-label##5b; \
- FTR_ENTRY_OFFSET label##2b-label##5b; \
- FTR_ENTRY_OFFSET label##3b-label##5b; \
- FTR_ENTRY_OFFSET label##4b-label##5b; \
+#define MAKE_FTR_SECTION_ENTRY(msk, val, label, sect) \
+label##4: \
+ .popsection; \
+ .pushsection sect,"a"; \
+ .align 3; \
+label##5: \
+ FTR_ENTRY_LONG msk; \
+ FTR_ENTRY_LONG val; \
+ FTR_ENTRY_OFFSET label##1b-label##5b; \
+ FTR_ENTRY_OFFSET label##2b-label##5b; \
+ FTR_ENTRY_OFFSET label##3b-label##5b; \
+ FTR_ENTRY_OFFSET label##4b-label##5b; \
+ .ifgt (label##4b-label##3b)-(label##2b-label##1b); \
+ .error "Feature section else case larger than body"; \
+ .endif; \
.popsection;
extern struct qe_immap __iomem *qe_immr;
extern phys_addr_t get_qe_base(void);
-static inline unsigned long immrbar_virt_to_phys(void *address)
+/*
+ * Returns the offset within the QE address space of the given pointer.
+ *
+ * Note that the QE does not support 36-bit physical addresses, so if
+ * get_qe_base() returns a number above 4GB, the caller will probably fail.
+ */
+static inline phys_addr_t immrbar_virt_to_phys(void *address)
{
- if ( ((u32)address >= (u32)qe_immr) &&
- ((u32)address < ((u32)qe_immr + QE_IMMAP_SIZE)) )
- return (unsigned long)(address - (u32)qe_immr +
- (u32)get_qe_base());
- return (unsigned long)virt_to_phys(address);
+ void *q = (void *)qe_immr;
+
+ /* Is it a MURAM address? */
+ if ((address >= q) && (address < (q + QE_IMMAP_SIZE)))
+ return get_qe_base() + (address - q);
+
+ /* It's an address returned by kmalloc */
+ return virt_to_phys(address);
}
#endif /* __KERNEL__ */
#else
#ifdef CONFIG_TRACE_IRQFLAGS
+#ifdef CONFIG_IRQSOFF_TRACER
+/*
+ * Since the ftrace irqsoff latency trace checks CALLER_ADDR1,
+ * which is the stack frame here, we need to force a stack frame
+ * in case we came from user space.
+ */
+#define TRACE_WITH_FRAME_BUFFER(func) \
+ mflr r0; \
+ stdu r1, -32(r1); \
+ std r0, 16(r1); \
+ stdu r1, -32(r1); \
+ bl func; \
+ ld r1, 0(r1); \
+ ld r1, 0(r1);
+#else
+#define TRACE_WITH_FRAME_BUFFER(func) \
+ bl func;
+#endif
+
/*
* Most of the CPU's IRQ-state tracing is done from assembly code; we
* have to call a C function so call a wrapper that saves all the
* C-clobbered registers.
*/
-#define TRACE_ENABLE_INTS bl .trace_hardirqs_on
-#define TRACE_DISABLE_INTS bl .trace_hardirqs_off
-#define TRACE_AND_RESTORE_IRQ_PARTIAL(en,skip) \
- cmpdi en,0; \
- bne 95f; \
- stb en,PACASOFTIRQEN(r13); \
- bl .trace_hardirqs_off; \
- b skip; \
-95: bl .trace_hardirqs_on; \
+#define TRACE_ENABLE_INTS TRACE_WITH_FRAME_BUFFER(.trace_hardirqs_on)
+#define TRACE_DISABLE_INTS TRACE_WITH_FRAME_BUFFER(.trace_hardirqs_off)
+
+#define TRACE_AND_RESTORE_IRQ_PARTIAL(en,skip) \
+ cmpdi en,0; \
+ bne 95f; \
+ stb en,PACASOFTIRQEN(r13); \
+ TRACE_WITH_FRAME_BUFFER(.trace_hardirqs_off) \
+ b skip; \
+95: TRACE_WITH_FRAME_BUFFER(.trace_hardirqs_on) \
li en,1;
#define TRACE_AND_RESTORE_IRQ(en) \
TRACE_AND_RESTORE_IRQ_PARTIAL(en,96f); \
- stb en,PACASOFTIRQEN(r13); \
+ stb en,PACASOFTIRQEN(r13); \
96:
#else
#define TRACE_ENABLE_INTS
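The dummy frame exists because CALLER_ADDR1 in <linux/ftrace.h> expands to __builtin_return_address(1), i.e. the caller's caller. A small userspace illustration of what that walks (assuming frame pointers are kept; not part of the patch):

	#include <stdio.h>

	static void __attribute__((noinline)) leaf(void)
	{
		/* CALLER_ADDR0/CALLER_ADDR1 in the kernel are these builtins. */
		printf("caller          : %p\n", __builtin_return_address(0));
		printf("caller's caller : %p\n", __builtin_return_address(1));
	}

	static void __attribute__((noinline)) middle(void)
	{
		leaf();		/* leaf() sees middle() at level 0, main() at level 1 */
	}

	int main(void)
	{
		/* With only one real frame on the stack, level 1 would be junk --
		 * which is what the stwu/mflr sequence above guards against when
		 * the kernel was entered from user space. */
		middle();
		return 0;
	}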
* If for some reason there is no irq, but the interrupt
* shouldn't be counted as spurious, return NO_IRQ_IGNORE. */
unsigned int (*get_irq)(void);
-#ifdef CONFIG_KEXEC
- void (*kexec_cpu_down)(int crash_shutdown, int secondary);
-#endif
/* PCI stuff */
/* Called after scanning the bus, before allocating resources */
void (*machine_shutdown)(void);
#ifdef CONFIG_KEXEC
- /* Called to do the minimal shutdown needed to run a kexec'd kernel
- * to run successfully.
- * XXX Should we move this one out of kexec scope?
- */
- void (*machine_crash_shutdown)(struct pt_regs *regs);
+ void (*kexec_cpu_down)(int crash_shutdown, int secondary);
/* Called to do what every setup is needed on image and the
* reboot code buffer. Returns 0 on success.
* claims to support kexec.
*/
int (*machine_kexec_prepare)(struct kimage *image);
-
- /* Called to handle any machine specific cleanup on image */
- void (*machine_kexec_cleanup)(struct kimage *image);
-
- /* Called to perform the _real_ kexec.
- * Do NOT allocate memory or fail here. We are past the point of
- * no return.
- */
- void (*machine_kexec)(struct kimage *image);
#endif /* CONFIG_KEXEC */
#ifdef CONFIG_SUSPEND
/* MAS registers bit definitions */
-#define MAS0_TLBSEL(x) ((x << 28) & 0x30000000)
-#define MAS0_ESEL(x) ((x << 16) & 0x0FFF0000)
+#define MAS0_TLBSEL(x) (((x) << 28) & 0x30000000)
+#define MAS0_ESEL(x) (((x) << 16) & 0x0FFF0000)
#define MAS0_NV(x) ((x) & 0x00000FFF)
#define MAS0_HES 0x00004000
#define MAS0_WQ_ALLWAYS 0x00000000
#define MAS1_VALID 0x80000000
#define MAS1_IPROT 0x40000000
-#define MAS1_TID(x) ((x << 16) & 0x3FFF0000)
+#define MAS1_TID(x) (((x) << 16) & 0x3FFF0000)
#define MAS1_IND 0x00002000
#define MAS1_TS 0x00001000
#define MAS1_TSIZE_MASK 0x00000f80
#define MAS1_TSIZE_SHIFT 7
-#define MAS1_TSIZE(x) ((x << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK)
+#define MAS1_TSIZE(x) (((x) << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK)
#define MAS2_EPN 0xFFFFF000
#define MAS2_X0 0x00000040
#ifdef CONFIG_FLATMEM
#define ARCH_PFN_OFFSET (MEMORY_START >> PAGE_SHIFT)
-#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < (ARCH_PFN_OFFSET + max_mapnr))
+#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
#endif
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define HID0_NOPTI (1<<0) /* No-op dcbt and dcbst instr. */
#define SPRN_HID1 0x3F1 /* Hardware Implementation Register 1 */
+#ifdef CONFIG_6xx
#define HID1_EMCP (1<<31) /* 7450 Machine Check Pin Enable */
#define HID1_DFS (1<<22) /* 7447A Dynamic Frequency Scaling */
#define HID1_PC0 (1<<16) /* 7450 PLL_CFG[0] */
#define HID1_SYNCBE (1<<11) /* 7450 ABE for sync, eieio */
#define HID1_ABE (1<<10) /* 7450 Address Broadcast Enable */
#define HID1_PS (1<<16) /* 750FX PLL selection */
+#endif
#define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */
#define SPRN_HID2_GEKKO 0x398 /* Gekko HID2 Register */
#define SPRN_IABR 0x3F2 /* Instruction Address Breakpoint Register */
store or cache line push */
#endif
+/* Bit definitions for the HID1 */
+#ifdef CONFIG_E500
+/* e500v1/v2 */
+#define HID1_PLL_CFG_MASK 0xfc000000 /* PLL_CFG input pins */
+#define HID1_RFXE 0x00020000 /* Read fault exception enable */
+#define HID1_R1DPE 0x00008000 /* R1 data bus parity enable */
+#define HID1_R2DPE 0x00004000 /* R2 data bus parity enable */
+#define HID1_ASTME 0x00002000 /* Address bus streaming mode enable */
+#define HID1_ABE 0x00001000 /* Address broadcast enable */
+#define HID1_MPXTT 0x00000400 /* MPX re-map transfer type */
+#define HID1_ATS 0x00000080 /* Atomic status */
+#define HID1_MID_MASK 0x0000000f /* MID input pins */
+#endif
+
/* Bit definitions for the DBSR. */
/*
* DBSR bits which have conflicting definitions on true Book E versus IBM 40x.
void spu_setup_kernel_slbs(struct spu *spu, struct spu_lscsa *lscsa,
void *code, int code_size);
-#ifdef CONFIG_KEXEC
-void crash_register_spus(struct list_head *list);
-#else
-static inline void crash_register_spus(struct list_head *list)
-{
-}
-#endif
-
extern void spu_invalidate_slbs(struct spu *spu);
extern void spu_associate_mm(struct spu *spu, struct mm_struct *mm);
int spu_64k_pages_available(void);
#include <asm/mmu.h>
_GLOBAL(__setup_cpu_603)
- mflr r4
+ mflr r5
BEGIN_MMU_FTR_SECTION
li r10,0
mtspr SPRN_SPRG_603_LRU,r10 /* init SW LRU tracking */
bl __init_fpu_registers
END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE)
bl setup_common_caches
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_604)
- mflr r4
+ mflr r5
bl setup_common_caches
bl setup_604_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750cx)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
bl setup_750cx
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_750fx)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_common_caches
bl setup_750_7400_hid0
bl setup_750fx
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_7400)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_7400_workarounds
bl setup_common_caches
bl setup_750_7400_hid0
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_7410)
- mflr r4
+ mflr r5
bl __init_fpu_registers
bl setup_7410_workarounds
bl setup_common_caches
bl setup_750_7400_hid0
li r3,0
mtspr SPRN_L2CR2,r3
- mtlr r4
+ mtlr r5
blr
_GLOBAL(__setup_cpu_745x)
- mflr r4
+ mflr r5
bl setup_common_caches
bl setup_745x_specifics
- mtlr r4
+ mtlr r5
blr
/* Enable caches for 603's, 604, 750 & 7400 */
cror 4*cr0+eq,4*cr0+eq,4*cr1+eq
cror 4*cr0+eq,4*cr0+eq,4*cr2+eq
bnelr
- lwz r6,CPU_SPEC_FEATURES(r5)
+ lwz r6,CPU_SPEC_FEATURES(r4)
li r7,CPU_FTR_CAN_NAP
andc r6,r6,r7
- stw r6,CPU_SPEC_FEATURES(r5)
+ stw r6,CPU_SPEC_FEATURES(r4)
blr
/* 750fx specific
andis. r11,r11,L3CR_L3E@h
beq 1f
END_FTR_SECTION_IFSET(CPU_FTR_L3CR)
- lwz r6,CPU_SPEC_FEATURES(r5)
+ lwz r6,CPU_SPEC_FEATURES(r4)
andi. r0,r6,CPU_FTR_L3_DISABLE_NAP
beq 1f
li r7,CPU_FTR_CAN_NAP
andc r6,r6,r7
- stw r6,CPU_SPEC_FEATURES(r5)
+ stw r6,CPU_SPEC_FEATURES(r4)
1:
mfspr r11,SPRN_HID0
bl __e500_icache_setup
bl __e500_dcache_setup
bl __setup_e500_ivors
+#ifdef CONFIG_RAPIDIO
+ /* Ensure that RFXE is set */
+ mfspr r3,SPRN_HID1
+ oris r3,r3,HID1_RFXE@h
+ mtspr SPRN_HID1,r3
+#endif
mtlr r4
blr
_GLOBAL(__setup_cpu_e500mc)
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/power3",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "power3",
},
{ /* Power3+ */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/power3",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "power3",
},
{ /* Northstar */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/rs64",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "rs64",
},
{ /* Pulsar */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/rs64",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "rs64",
},
{ /* I-star */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/rs64",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "rs64",
},
{ /* S-star */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/rs64",
.oprofile_type = PPC_OPROFILE_RS64,
- .machine_check = machine_check_generic,
.platform = "rs64",
},
{ /* Power4 */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/power4",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "power4",
},
{ /* Power4+ */
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/power4",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "power4",
},
{ /* PPC970 */
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "ppc970",
},
{ /* PPC970FX */
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "ppc970",
},
{ /* PPC970MP DD1.0 - no DEEPNAP, use regular 970 init */
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970MP",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "ppc970",
},
{ /* PPC970MP */
.cpu_restore = __restore_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970MP",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "ppc970",
},
{ /* PPC970GX */
.cpu_setup = __setup_cpu_ppc970,
.oprofile_cpu_type = "ppc64/970",
.oprofile_type = PPC_OPROFILE_POWER4,
- .machine_check = machine_check_generic,
.platform = "ppc970",
},
{ /* Power5 GR */
*/
.oprofile_mmcra_sihv = MMCRA_SIHV,
.oprofile_mmcra_sipr = MMCRA_SIPR,
- .machine_check = machine_check_generic,
.platform = "power5",
},
{ /* Power5++ */
.oprofile_type = PPC_OPROFILE_POWER4,
.oprofile_mmcra_sihv = MMCRA_SIHV,
.oprofile_mmcra_sipr = MMCRA_SIPR,
- .machine_check = machine_check_generic,
.platform = "power5+",
},
{ /* Power5 GS */
.oprofile_type = PPC_OPROFILE_POWER4,
.oprofile_mmcra_sihv = MMCRA_SIHV,
.oprofile_mmcra_sipr = MMCRA_SIPR,
- .machine_check = machine_check_generic,
.platform = "power5+",
},
{ /* POWER6 in P5+ mode; 2.04-compliant processor */
.mmu_features = MMU_FTR_HPTE_TABLE,
.icache_bsize = 128,
.dcache_bsize = 128,
- .machine_check = machine_check_generic,
.oprofile_cpu_type = "ppc64/ibm-compat-v1",
.oprofile_type = PPC_OPROFILE_POWER4,
.platform = "power5+",
.oprofile_mmcra_sipr = POWER6_MMCRA_SIPR,
.oprofile_mmcra_clear = POWER6_MMCRA_THRM |
POWER6_MMCRA_OTHER,
- .machine_check = machine_check_generic,
.platform = "power6x",
},
{ /* 2.05-compliant processor, i.e. Power6 "architected" mode */
.mmu_features = MMU_FTR_HPTE_TABLE,
.icache_bsize = 128,
.dcache_bsize = 128,
- .machine_check = machine_check_generic,
.oprofile_cpu_type = "ppc64/ibm-compat-v1",
.oprofile_type = PPC_OPROFILE_POWER4,
.platform = "power6",
MMU_FTR_TLBIE_206,
.icache_bsize = 128,
.dcache_bsize = 128,
- .machine_check = machine_check_generic,
.oprofile_type = PPC_OPROFILE_POWER4,
.oprofile_cpu_type = "ppc64/ibm-compat-v1",
.platform = "power7",
.pmc_type = PPC_PMC_IBM,
.oprofile_cpu_type = "ppc64/cell-be",
.oprofile_type = PPC_OPROFILE_CELL,
- .machine_check = machine_check_generic,
.platform = "ppc-cell-be",
},
{ /* PA Semi PA6T */
.cpu_restore = __restore_cpu_pa6t,
.oprofile_cpu_type = "ppc64/pa6t",
.oprofile_type = PPC_OPROFILE_PA6T,
- .machine_check = machine_check_generic,
.platform = "pa6t",
},
{ /* default match */
.dcache_bsize = 128,
.num_pmcs = 6,
.pmc_type = PPC_PMC_IBM,
- .machine_check = machine_check_generic,
.platform = "power4",
}
#endif /* CONFIG_PPC_BOOK3S_64 */
* pointer on ppc64 and booke as we are running at 0 in real mode
* on ppc64 and reloc_offset is always 0 on booke.
*/
- if (s->cpu_setup) {
- s->cpu_setup(offset, s);
+ if (t->cpu_setup) {
+ t->cpu_setup(offset, t);
}
#endif /* CONFIG_PPC64 || CONFIG_BOOKE */
}
static cpumask_t cpus_in_crash = CPU_MASK_NONE;
cpumask_t cpus_in_sr = CPU_MASK_NONE;
-#define CRASH_HANDLER_MAX 2
+#define CRASH_HANDLER_MAX 3
/* NULL terminated list of shutdown handles */
static crash_shutdown_t crash_shutdown_handles[CRASH_HANDLER_MAX+1];
static DEFINE_SPINLOCK(crash_handlers_lock);
smp_wmb();
/*
- * FIXME: Until we will have the way to stop other CPUSs reliabally,
+ * FIXME: Until we have a way to stop other CPUs reliably,
* the crash CPU will send an IPI and wait for other CPUs to
* respond.
* Delay of at least 10 seconds.
cpus_in_sr = CPU_MASK_NONE;
}
#endif
-#ifdef CONFIG_SPU_BASE
-
-#include <asm/spu.h>
-#include <asm/spu_priv1.h>
-
-struct crash_spu_info {
- struct spu *spu;
- u32 saved_spu_runcntl_RW;
- u32 saved_spu_status_R;
- u32 saved_spu_npc_RW;
- u64 saved_mfc_sr1_RW;
- u64 saved_mfc_dar;
- u64 saved_mfc_dsisr;
-};
-
-#define CRASH_NUM_SPUS 16 /* Enough for current hardware */
-static struct crash_spu_info crash_spu_info[CRASH_NUM_SPUS];
-
-static void crash_kexec_stop_spus(void)
-{
- struct spu *spu;
- int i;
- u64 tmp;
-
- for (i = 0; i < CRASH_NUM_SPUS; i++) {
- if (!crash_spu_info[i].spu)
- continue;
-
- spu = crash_spu_info[i].spu;
-
- crash_spu_info[i].saved_spu_runcntl_RW =
- in_be32(&spu->problem->spu_runcntl_RW);
- crash_spu_info[i].saved_spu_status_R =
- in_be32(&spu->problem->spu_status_R);
- crash_spu_info[i].saved_spu_npc_RW =
- in_be32(&spu->problem->spu_npc_RW);
-
- crash_spu_info[i].saved_mfc_dar = spu_mfc_dar_get(spu);
- crash_spu_info[i].saved_mfc_dsisr = spu_mfc_dsisr_get(spu);
- tmp = spu_mfc_sr1_get(spu);
- crash_spu_info[i].saved_mfc_sr1_RW = tmp;
-
- tmp &= ~MFC_STATE1_MASTER_RUN_CONTROL_MASK;
- spu_mfc_sr1_set(spu, tmp);
-
- __delay(200);
- }
-}
-
-void crash_register_spus(struct list_head *list)
-{
- struct spu *spu;
-
- list_for_each_entry(spu, list, full_list) {
- if (WARN_ON(spu->number >= CRASH_NUM_SPUS))
- continue;
-
- crash_spu_info[spu->number].spu = spu;
- }
-}
-
-#else
-static inline void crash_kexec_stop_spus(void)
-{
-}
-#endif /* CONFIG_SPU_BASE */
/*
* Register a function to be called on shutdown. Only use this if you
crash_shutdown_cpu = -1;
__debugger_fault_handler = old_handler;
- crash_kexec_stop_spus();
-
if (ppc_md.kexec_cpu_down)
ppc_md.kexec_cpu_down(1, 0);
}
*/
andi. r10,r9,MSR_EE
beq 1f
+ /*
+ * Since the ftrace irqsoff latency trace checks CALLER_ADDR1,
+ * which is the stack frame here, we need to force a stack frame
+ * in case we came from user space.
+ */
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,4(r1)
+ stwu r1,-32(r1)
bl trace_hardirqs_on
+ lwz r1,0(r1)
+ lwz r1,0(r1)
lwz r9,_MSR(r1)
1:
#endif /* CONFIG_TRACE_IRQFLAGS */
#include <linux/memblock.h>
#include <linux/of.h>
#include <linux/irq.h>
+#include <linux/ftrace.h>
#include <asm/machdep.h>
#include <asm/prom.h>
void machine_crash_shutdown(struct pt_regs *regs)
{
- if (ppc_md.machine_crash_shutdown)
- ppc_md.machine_crash_shutdown(regs);
- else
- default_machine_crash_shutdown(regs);
+ default_machine_crash_shutdown(regs);
}
/*
void machine_kexec_cleanup(struct kimage *image)
{
- if (ppc_md.machine_kexec_cleanup)
- ppc_md.machine_kexec_cleanup(image);
}
void arch_crash_save_vmcoreinfo(void)
*/
void machine_kexec(struct kimage *image)
{
- if (ppc_md.machine_kexec)
- ppc_md.machine_kexec(image);
- else
- default_machine_kexec(image);
+ int save_ftrace_enabled;
+
+ save_ftrace_enabled = __ftrace_enabled_save();
+
+ default_machine_kexec(image);
+
+ __ftrace_enabled_restore(save_ftrace_enabled);
/* Fall back to normal restart if we're still alive. */
machine_restart(NULL);
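machine_kexec() now calls default_machine_kexec() directly and brackets it with ftrace state handling. A stripped-down sketch of that save/disable/restore pattern (the surrounding function is hypothetical):

	#include <linux/ftrace.h>

	static void example_untraceable_section(void)
	{
		/* Turn the function tracer off and remember its previous state;
		 * restore it on the (unexpected) failure path so a kexec that
		 * falls back to machine_restart() leaves tracing usable. */
		int saved = __ftrace_enabled_save();

		/* ... work that must not be function-traced goes here ... */

		__ftrace_enabled_restore(saved);
	}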
if (left <= 0)
left = period;
record = 1;
+ event->hw.last_period = event->hw.sample_period;
}
if (left < 0x80000000LL)
val = 0x80000000LL - left;
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
printk("DEAR: "REG", ESR: "REG"\n", regs->dar, regs->dsisr);
#else
- printk("DAR: "REG", DSISR: "REG"\n", regs->dar, regs->dsisr);
+ printk("DAR: "REG", DSISR: %08lx\n", regs->dar, regs->dsisr);
#endif
printk("TASK = %p[%d] '%s' THREAD: %p",
current, task_pid_nr(current), current->comm, task_thread_info(current));
struct proc_dir_entry *dp = PDE(file->f_path.dentry->d_inode);
struct rtas_update_flash_t *uf;
char msg[RTAS_MSG_MAXLEN];
- int msglen;
- uf = (struct rtas_update_flash_t *) dp->data;
+ uf = dp->data;
if (!strcmp(dp->name, FIRMWARE_FLASH_NAME)) {
get_flash_status_msg(uf->status, msg);
} else { /* FIRMWARE_UPDATE_NAME */
sprintf(msg, "%d\n", uf->status);
}
- msglen = strlen(msg);
- if (msglen > count)
- msglen = count;
-
- if (ppos && *ppos != 0)
- return 0; /* be cheap */
-
- if (!access_ok(VERIFY_WRITE, buf, msglen))
- return -EINVAL;
- if (copy_to_user(buf, msg, msglen))
- return -EFAULT;
-
- if (ppos)
- *ppos = msglen;
- return msglen;
+ return simple_read_from_buffer(buf, count, ppos, msg, strlen(msg));
}
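All three rtas_flash read routines now funnel through simple_read_from_buffer(), which does the *ppos/count clamping and the copy_to_user() itself. A minimal sketch of the helper in an arbitrary read handler (the file, message and status value are hypothetical):

	#include <linux/fs.h>

	static ssize_t example_read(struct file *file, char __user *buf,
				    size_t count, loff_t *ppos)
	{
		char msg[32];
		int msglen = sprintf(msg, "%d\n", 42);	/* some status to expose */

		/* Clamps to msglen and count, copies to user space, advances
		 * *ppos and returns the number of bytes copied (or -EFAULT). */
		return simple_read_from_buffer(buf, count, ppos, msg, msglen);
	}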
/* constructor for flash_block_cache */
char msg[RTAS_MSG_MAXLEN];
int msglen;
- args_buf = (struct rtas_manage_flash_t *) dp->data;
+ args_buf = dp->data;
if (args_buf == NULL)
return 0;
msglen = sprintf(msg, "%d\n", args_buf->status);
- if (msglen > count)
- msglen = count;
- if (ppos && *ppos != 0)
- return 0; /* be cheap */
-
- if (!access_ok(VERIFY_WRITE, buf, msglen))
- return -EINVAL;
-
- if (copy_to_user(buf, msg, msglen))
- return -EFAULT;
-
- if (ppos)
- *ppos = msglen;
- return msglen;
+ return simple_read_from_buffer(buf, count, ppos, msg, msglen);
}
static ssize_t manage_flash_write(struct file *file, const char __user *buf,
char msg[RTAS_MSG_MAXLEN];
int msglen;
- args_buf = (struct rtas_validate_flash_t *) dp->data;
+ args_buf = dp->data;
- if (ppos && *ppos != 0)
- return 0; /* be cheap */
-
msglen = get_validate_flash_msg(args_buf, msg);
- if (msglen > count)
- msglen = count;
-
- if (!access_ok(VERIFY_WRITE, buf, msglen))
- return -EINVAL;
-
- if (copy_to_user(buf, msg, msglen))
- return -EFAULT;
- if (ppos)
- *ppos = msglen;
- return msglen;
+ return simple_read_from_buffer(buf, count, ppos, msg, msglen);
}
static ssize_t validate_flash_write(struct file *file, const char __user *buf,
/* rtas fixed header */
len = 8;
err = (struct rtas_error_log *)buf;
- if (err->extended_log_length) {
+ if (err->extended && err->extended_log_length) {
/* extended header */
len += err->extended_log_length;
{
u64 sst, ust;
- sst = scan_dispatch_log(get_paca()->starttime_user);
- ust = scan_dispatch_log(get_paca()->starttime);
- get_paca()->system_time -= sst;
- get_paca()->user_time -= ust;
- get_paca()->stolen_time += ust + sst;
+ u8 save_soft_enabled = local_paca->soft_enabled;
+ u8 save_hard_enabled = local_paca->hard_enabled;
+
+ /* We are called early in the exception entry, before
+ * soft/hard_enabled are sync'ed to the expected state
+ * for the exception. We are hard disabled but the PACA
+ * needs to reflect that so various debug stuff doesn't
+ * complain
+ */
+ local_paca->soft_enabled = 0;
+ local_paca->hard_enabled = 0;
+
+ sst = scan_dispatch_log(local_paca->starttime_user);
+ ust = scan_dispatch_log(local_paca->starttime);
+ local_paca->system_time -= sst;
+ local_paca->user_time -= ust;
+ local_paca->stolen_time += ust + sst;
+
+ local_paca->soft_enabled = save_soft_enabled;
+ local_paca->hard_enabled = save_hard_enabled;
}
static inline u64 calculate_stolen_time(u64 stop_tb)
if (recover > 0)
return;
- if (user_mode(regs)) {
- regs->msr |= MSR_RI;
- _exception(SIGBUS, regs, BUS_ADRERR, regs->nip);
- return;
- }
-
#if defined(CONFIG_8xx) && defined(CONFIG_PCI)
/* the qspan pci read routines can cause machine checks -- Cort
*
return;
#endif
- if (debugger_fault_handler(regs)) {
- regs->msr |= MSR_RI;
+ if (debugger_fault_handler(regs))
return;
- }
if (check_io_access(regs))
return;
- if (debugger_fault_handler(regs))
- return;
die("Machine check", regs, SIGBUS);
/* Must die if the interrupt is not recoverable */
3: or 3,3,3
+#if 0
+/* Test that the assembler spots an else case larger than the body and
+ * reports an error. #if 0'ed so as not to break the build normally.
+ */
+ftr_fixup_test7:
+ or 1,1,1
+BEGIN_FTR_SECTION
+ or 2,2,2
+ or 2,2,2
+ or 2,2,2
+FTR_SECTION_ELSE
+ or 3,3,3
+ or 3,3,3
+ or 3,3,3
+ or 3,3,3
+ALT_FTR_SECTION_END(0, 1)
+ or 1,1,1
+#endif
+
#define MAKE_MACRO_TEST(TYPE) \
globl(ftr_fixup_test_ ##TYPE##_macros) \
or 1,1,1; \
dbg("removing cpu %lu from node %d\n", cpu, node);
if (cpumask_test_cpu(cpu, node_to_cpumask_map[node])) {
- cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
+ cpumask_clear_cpu(cpu, node_to_cpumask_map[node]);
} else {
printk(KERN_ERR "WARNING: cpu %lu not found in node %d\n",
cpu, node);
}
#endif /* CONFIG_MEMORY_HOTPLUG */
-/* Vrtual Processor Home Node (VPHN) support */
+/* Virtual Processor Home Node (VPHN) support */
#ifdef CONFIG_PPC_SPLPAR
-#define VPHN_NR_CHANGE_CTRS (8)
-static u8 vphn_cpu_change_counts[NR_CPUS][VPHN_NR_CHANGE_CTRS];
+static u8 vphn_cpu_change_counts[NR_CPUS][MAX_DISTANCE_REF_POINTS];
static cpumask_t cpu_associativity_changes_mask;
static int vphn_enabled;
static void set_topology_timer(void);
*/
static void setup_cpu_associativity_change_counters(void)
{
- int cpu = 0;
+ int cpu;
+
+ /* The VPHN feature supports a maximum of 8 reference points */
+ BUILD_BUG_ON(MAX_DISTANCE_REF_POINTS > 8);
for_each_possible_cpu(cpu) {
- int i = 0;
+ int i;
u8 *counts = vphn_cpu_change_counts[cpu];
volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts;
- for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) {
+ for (i = 0; i < distance_ref_points_depth; i++)
counts[i] = hypervisor_counts[i];
- }
}
}
*/
static int update_cpu_associativity_changes_mask(void)
{
- int cpu = 0, nr_cpus = 0;
+ int cpu, nr_cpus = 0;
cpumask_t *changes = &cpu_associativity_changes_mask;
cpumask_clear(changes);
u8 *counts = vphn_cpu_change_counts[cpu];
volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts;
- for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) {
- if (hypervisor_counts[i] > counts[i]) {
+ for (i = 0; i < distance_ref_points_depth; i++) {
+ if (hypervisor_counts[i] != counts[i]) {
counts[i] = hypervisor_counts[i];
changed = 1;
}
return nr_cpus;
}
-/* 6 64-bit registers unpacked into 12 32-bit associativity values */
-#define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32))
+/*
+ * 6 64-bit registers unpacked into 12 32-bit associativity values. To form
+ * the complete property we have to add the length in the first cell.
+ */
+#define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32) + 1)
/*
* Convert the associativity domain numbers returned from the hypervisor
*/
static int vphn_unpack_associativity(const long *packed, unsigned int *unpacked)
{
- int i = 0;
- int nr_assoc_doms = 0;
+ int i, nr_assoc_doms = 0;
const u16 *field = (const u16*) packed;
#define VPHN_FIELD_UNUSED (0xffff)
#define VPHN_FIELD_MSB (0x8000)
#define VPHN_FIELD_MASK (~VPHN_FIELD_MSB)
- for (i = 0; i < VPHN_ASSOC_BUFSIZE; i++) {
+ for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
if (*field == VPHN_FIELD_UNUSED) {
/* All significant fields processed, and remaining
* fields contain the reserved value of all 1's.
*/
unpacked[i] = *((u32*)field);
field += 2;
- }
- else if (*field & VPHN_FIELD_MSB) {
+ } else if (*field & VPHN_FIELD_MSB) {
/* Data is in the lower 15 bits of this field */
unpacked[i] = *field & VPHN_FIELD_MASK;
field++;
nr_assoc_doms++;
- }
- else {
+ } else {
/* Data is in the lower 15 bits of this field
* concatenated with the next 16 bit field
*/
}
}
+ /* The first cell contains the length of the property */
+ unpacked[0] = nr_assoc_doms;
+
return nr_assoc_doms;
}
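With the extra length cell, VPHN_ASSOC_BUFSIZE works out to 6*8/4 + 1 = 13 u32 cells, and the unpacked buffer mimics a length-prefixed ibm,associativity property. A small illustrative consumer (example_show_layout() is a made-up name):

	#include <linux/kernel.h>

	static void example_show_layout(const unsigned int *unpacked)
	{
		unsigned int i;

		/* unpacked[0] holds the number of associativity domains filled
		 * in by vphn_unpack_associativity(); the domain values start
		 * at index 1, just like the device-tree property. */
		for (i = 1; i <= unpacked[0]; i++)
			pr_info("associativity domain %u: 0x%x\n", i, unpacked[i]);
	}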
*/
static long hcall_vphn(unsigned long cpu, unsigned int *associativity)
{
- long rc = 0;
+ long rc;
long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
u64 flags = 1;
int hwcpu = get_hard_smp_processor_id(cpu);
static long vphn_get_associativity(unsigned long cpu,
unsigned int *associativity)
{
- long rc = 0;
+ long rc;
rc = hcall_vphn(cpu, associativity);
*/
int arch_update_cpu_topology(void)
{
- int cpu = 0, nid = 0, old_nid = 0;
+ int cpu, nid, old_nid;
unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0};
- struct sys_device *sysdev = NULL;
+ struct sys_device *sysdev;
for_each_cpu_mask(cpu, cpu_associativity_changes_mask) {
vphn_get_associativity(cpu, associativity);
{
int rc = 0;
- if (firmware_has_feature(FW_FEATURE_VPHN)) {
+ if (firmware_has_feature(FW_FEATURE_VPHN) &&
+ get_lppaca()->shared_proc) {
vphn_enabled = 1;
setup_cpu_associativity_change_counters();
init_timer_deferrable(&topology_timer);
ipic_set_default_priority();
}
-struct const char *board[] __initdata = {
+static const char *board[] __initdata = {
"MPC8308RDB",
"fsl,mpc8308rdb",
"denx,mpc8308_p1m",
NULL
-}
+};
/*
* Called very early, MMU is off, device-tree isn't unflattened
ipic_set_default_priority();
}
-struct const char *board[] __initdata = {
+static const char *board[] __initdata = {
"MPC8313ERDB",
"fsl,mpc8315erdb",
NULL
-}
+};
/*
* Called very early, MMU is off, device-tree isn't unflattened
/* system i/o configuration register high */
#define MPC83XX_SICRH_OFFS 0x118
+#define MPC8308_SICRH_USB_MASK 0x000c0000
+#define MPC8308_SICRH_USB_ULPI 0x00040000
#define MPC834X_SICRH_USB_UTMI 0x00020000
#define MPC831X_SICRH_USB_MASK 0x000000e0
#define MPC831X_SICRH_USB_ULPI 0x000000a0
/* Configure clock */
immr_node = of_get_parent(np);
- if (immr_node && of_device_is_compatible(immr_node, "fsl,mpc8315-immr"))
+ if (immr_node && (of_device_is_compatible(immr_node, "fsl,mpc8315-immr") ||
+ of_device_is_compatible(immr_node, "fsl,mpc8308-immr")))
clrsetbits_be32(immap + MPC83XX_SCCR_OFFS,
MPC8315_SCCR_USB_MASK,
MPC8315_SCCR_USB_DRCM_01);
/* Configure pin mux for ULPI. There is no pin mux for UTMI */
if (prop && !strcmp(prop, "ulpi")) {
- if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) {
+ if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) {
+ clrsetbits_be32(immap + MPC83XX_SICRH_OFFS,
+ MPC8308_SICRH_USB_MASK,
+ MPC8308_SICRH_USB_ULPI);
+ } else if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) {
clrsetbits_be32(immap + MPC83XX_SICRL_OFFS,
MPC8315_SICRL_USB_MASK,
MPC8315_SICRL_USB_ULPI);
!strcmp(prop, "utmi"))) {
u32 refsel;
+ if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr"))
+ goto out;
+
if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr"))
refsel = CONTROL_REFSEL_24MHZ;
else
temp = CONTROL_PHY_CLK_SEL_ULPI;
#ifdef CONFIG_USB_OTG
/* Set OTG_PORT */
- dr_mode = of_get_property(np, "dr_mode", NULL);
- if (dr_mode && !strcmp(dr_mode, "otg"))
- temp |= CONTROL_OTG_PORT;
+ if (!of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) {
+ dr_mode = of_get_property(np, "dr_mode", NULL);
+ if (dr_mode && !strcmp(dr_mode, "otg"))
+ temp |= CONTROL_OTG_PORT;
+ }
#endif /* CONFIG_USB_OTG */
out_be32(usb_regs + FSL_USB2_CONTROL_OFFS, temp);
} else {
ret = -EINVAL;
}
+out:
iounmap(usb_regs);
of_node_put(np);
return ret;
};
static DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info);
-static struct workqueue_struct *kspugov_wq;
-
static int calc_freq(struct spu_gov_info_struct *info)
{
int cpu;
__cpufreq_driver_target(info->policy, target_freq, CPUFREQ_RELATION_H);
delay = usecs_to_jiffies(info->poll_int);
- queue_delayed_work_on(info->policy->cpu, kspugov_wq, &info->work, delay);
+ schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}
static void spu_gov_init_work(struct spu_gov_info_struct *info)
{
int delay = usecs_to_jiffies(info->poll_int);
INIT_DELAYED_WORK_DEFERRABLE(&info->work, spu_gov_work);
- queue_delayed_work_on(info->policy->cpu, kspugov_wq, &info->work, delay);
+ schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}
static void spu_gov_cancel_work(struct spu_gov_info_struct *info)
{
int ret;
- kspugov_wq = create_workqueue("kspugov");
- if (!kspugov_wq) {
- printk(KERN_ERR "creation of kspugov failed\n");
- ret = -EFAULT;
- goto out;
- }
-
ret = cpufreq_register_governor(&spu_governor);
- if (ret) {
+ if (ret)
printk(KERN_ERR "registration of governor failed\n");
- destroy_workqueue(kspugov_wq);
- goto out;
- }
-out:
return ret;
}
static void __exit spu_gov_exit(void)
{
cpufreq_unregister_governor(&spu_governor);
- destroy_workqueue(kspugov_wq);
}
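The governor no longer owns a private kspugov workqueue; its delayed work simply runs on the shared kernel workqueue. A minimal sketch of that pattern (names are made up, and the real governor uses the deferrable INIT_DELAYED_WORK_DEFERRABLE variant):

	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	static void example_poll(struct work_struct *work);
	static DECLARE_DELAYED_WORK(example_work, example_poll);

	static void example_poll(struct work_struct *work)
	{
		/* ... periodic work ... */

		/* Re-arm on the shared kernel workqueue; no private
		 * create_workqueue()/destroy_workqueue() pair is needed. */
		schedule_delayed_work_on(0, &example_work, msecs_to_jiffies(100));
	}

	static void example_stop(void)
	{
		cancel_delayed_work_sync(&example_work);
	}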
.calibrate_decr = generic_calibrate_decr,
.progress = qpace_progress,
.init_IRQ = iic_init_IRQ,
-#ifdef CONFIG_KEXEC
- .machine_kexec = default_machine_kexec,
- .machine_kexec_prepare = default_machine_kexec_prepare,
- .machine_crash_shutdown = default_machine_crash_shutdown,
-#endif
};
#include <asm/spu_csa.h>
#include <asm/xmon.h>
#include <asm/prom.h>
+#include <asm/kexec.h>
const struct spu_management_ops *spu_management_ops;
EXPORT_SYMBOL_GPL(spu_management_ops);
static SYSDEV_ATTR(stat, 0644, spu_stat_show, NULL);
+#ifdef CONFIG_KEXEC
+
+struct crash_spu_info {
+ struct spu *spu;
+ u32 saved_spu_runcntl_RW;
+ u32 saved_spu_status_R;
+ u32 saved_spu_npc_RW;
+ u64 saved_mfc_sr1_RW;
+ u64 saved_mfc_dar;
+ u64 saved_mfc_dsisr;
+};
+
+#define CRASH_NUM_SPUS 16 /* Enough for current hardware */
+static struct crash_spu_info crash_spu_info[CRASH_NUM_SPUS];
+
+static void crash_kexec_stop_spus(void)
+{
+ struct spu *spu;
+ int i;
+ u64 tmp;
+
+ for (i = 0; i < CRASH_NUM_SPUS; i++) {
+ if (!crash_spu_info[i].spu)
+ continue;
+
+ spu = crash_spu_info[i].spu;
+
+ crash_spu_info[i].saved_spu_runcntl_RW =
+ in_be32(&spu->problem->spu_runcntl_RW);
+ crash_spu_info[i].saved_spu_status_R =
+ in_be32(&spu->problem->spu_status_R);
+ crash_spu_info[i].saved_spu_npc_RW =
+ in_be32(&spu->problem->spu_npc_RW);
+
+ crash_spu_info[i].saved_mfc_dar = spu_mfc_dar_get(spu);
+ crash_spu_info[i].saved_mfc_dsisr = spu_mfc_dsisr_get(spu);
+ tmp = spu_mfc_sr1_get(spu);
+ crash_spu_info[i].saved_mfc_sr1_RW = tmp;
+
+ tmp &= ~MFC_STATE1_MASTER_RUN_CONTROL_MASK;
+ spu_mfc_sr1_set(spu, tmp);
+
+ __delay(200);
+ }
+}
+
+static void crash_register_spus(struct list_head *list)
+{
+ struct spu *spu;
+ int ret;
+
+ list_for_each_entry(spu, list, full_list) {
+ if (WARN_ON(spu->number >= CRASH_NUM_SPUS))
+ continue;
+
+ crash_spu_info[spu->number].spu = spu;
+ }
+
+ ret = crash_shutdown_register(&crash_kexec_stop_spus);
+ if (ret)
+ printk(KERN_ERR "Could not register SPU crash handler");
+}
+
+#else
+static inline void crash_register_spus(struct list_head *list)
+{
+}
+#endif
+
static int __init init_spu_base(void)
{
int i, ret = 0;
loff_t pos = *ppos;
int ret;
- if (pos < 0)
- return -EINVAL;
if (pos > LS_SIZE)
return -EFBIG;
- if (size > LS_SIZE - pos)
- size = LS_SIZE - pos;
ret = spu_acquire(ctx);
if (ret)
return ret;
local_store = ctx->ops->get_ls(ctx);
- ret = copy_from_user(local_store + pos, buffer, size);
+ size = simple_write_to_buffer(local_store, LS_SIZE, ppos, buffer, size);
spu_release(ctx);
- if (ret)
- return -EFAULT;
- *ppos = pos + size;
return size;
}
if (*pos >= sizeof(lscsa->gprs))
return -EFBIG;
- size = min_t(ssize_t, sizeof(lscsa->gprs) - *pos, size);
- *pos += size;
-
ret = spu_acquire_saved(ctx);
if (ret)
return ret;
- ret = copy_from_user((char *)lscsa->gprs + *pos - size,
- buffer, size) ? -EFAULT : size;
+ size = simple_write_to_buffer(lscsa->gprs, sizeof(lscsa->gprs), pos,
+ buffer, size);
spu_release_saved(ctx);
- return ret;
+ return size;
}
static const struct file_operations spufs_regs_fops = {
if (*pos >= sizeof(lscsa->fpcr))
return -EFBIG;
- size = min_t(ssize_t, sizeof(lscsa->fpcr) - *pos, size);
-
ret = spu_acquire_saved(ctx);
if (ret)
return ret;
- *pos += size;
- ret = copy_from_user((char *)&lscsa->fpcr + *pos - size,
- buffer, size) ? -EFAULT : size;
+ size = simple_write_to_buffer(&lscsa->fpcr, sizeof(lscsa->fpcr), pos,
+ buffer, size);
spu_release_saved(ctx);
- return ret;
+ return size;
}
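The spufs writers get the matching libfs helper: simple_write_to_buffer() bounds-checks against the destination size, copies from user space and advances *ppos. A hedged sketch (the backing buffer and handler are hypothetical):

	#include <linux/fs.h>

	static char example_state[64];

	static ssize_t example_write(struct file *file, const char __user *buf,
				     size_t count, loff_t *ppos)
	{
		/* Copies at most sizeof(example_state) - *ppos bytes, updates
		 * *ppos and returns the byte count (or a negative errno). */
		return simple_write_to_buffer(example_state, sizeof(example_state),
					      ppos, buf, count);
	}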
static const struct file_operations spufs_fpcr_fops = {
flipper_quiesce();
}
-#ifdef CONFIG_KEXEC
-static int gamecube_kexec_prepare(struct kimage *image)
-{
- return 0;
-}
-#endif /* CONFIG_KEXEC */
-
-
define_machine(gamecube) {
.name = "gamecube",
.probe = gamecube_probe,
.calibrate_decr = generic_calibrate_decr,
.progress = udbg_progress,
.machine_shutdown = gamecube_shutdown,
-#ifdef CONFIG_KEXEC
- .machine_kexec_prepare = gamecube_kexec_prepare,
-#endif
};
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/seq_file.h>
-#include <linux/kexec.h>
#include <linux/of_platform.h>
#include <linux/memblock.h>
#include <mm/mmu_decl.h>
flipper_quiesce();
}
-#ifdef CONFIG_KEXEC
-static int wii_machine_kexec_prepare(struct kimage *image)
-{
- return 0;
-}
-#endif /* CONFIG_KEXEC */
-
define_machine(wii) {
.name = "wii",
.probe = wii_probe,
.calibrate_decr = generic_calibrate_decr,
.progress = udbg_progress,
.machine_shutdown = wii_shutdown,
-#ifdef CONFIG_KEXEC
- .machine_kexec_prepare = wii_machine_kexec_prepare,
-#endif
};
static struct of_device_id wii_of_bus[] = {
bool "IBM Legacy iSeries"
depends on PPC64 && PPC_BOOK3S
select PPC_INDIRECT_IO
- select PPC_PCI_CHOICE if EMBEDDED
+ select PPC_PCI_CHOICE if EXPERT
menu "iSeries device drivers"
depends on PPC_ISERIES
select RTAS_ERROR_LOGGING
select PPC_UDBG_16550
select PPC_NATIVE
- select PPC_PCI_CHOICE if EMBEDDED
+ select PPC_PCI_CHOICE if EXPERT
default y
config PPC_SPLPAR
two or more partitions.
config EEH
- bool "PCI Extended Error Handling (EEH)" if EMBEDDED
+ bool "PCI Extended Error Handling (EEH)" if EXPERT
depends on PPC_PSERIES && PCI
- default y if !EMBEDDED
+ default y if !EXPERT
config PSERIES_MSI
bool
{
ppc_md.kexec_cpu_down = pseries_kexec_cpu_down_xics;
}
-
-static int __init pseries_kexec_setup(void)
-{
- ppc_md.machine_kexec = default_machine_kexec;
- ppc_md.machine_kexec_prepare = default_machine_kexec_prepare;
- ppc_md.machine_crash_shutdown = default_machine_crash_shutdown;
-
- return 0;
-}
-machine_device_initcall(pseries, pseries_kexec_setup);
/* NB: reg/unreg are called while guarded with the tracepoints_mutex */
extern long hcall_tracepoint_refcount;
+/*
+ * Since the tracing code might execute hcalls we need to guard against
+ * recursion. One example is spinlocks calling H_YIELD on
+ * shared processor partitions.
+ */
+static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
+
void hcall_tracepoint_regfunc(void)
{
hcall_tracepoint_refcount++;
void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
{
+ unsigned long flags;
+ unsigned int *depth;
+
+ local_irq_save(flags);
+
+ depth = &__get_cpu_var(hcall_trace_depth);
+
+ if (*depth)
+ goto out;
+
+ (*depth)++;
trace_hcall_entry(opcode, args);
+ (*depth)--;
+
+out:
+ local_irq_restore(flags);
}
void __trace_hcall_exit(long opcode, unsigned long retval,
unsigned long *retbuf)
{
+ unsigned long flags;
+ unsigned int *depth;
+
+ local_irq_save(flags);
+
+ depth = &__get_cpu_var(hcall_trace_depth);
+
+ if (*depth)
+ goto out;
+
+ (*depth)++;
trace_hcall_exit(opcode, retval, retbuf);
+ (*depth)--;
+
+out:
+ local_irq_restore(flags);
}
#endif
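The per-CPU depth counter is a generic way to stop a tracepoint from re-entering itself when the trace path uses the traced primitive (here, spinlocks issuing H_YIELD). A stripped-down sketch of the pattern outside the hcall specifics (names are made up):

	#include <linux/percpu.h>
	#include <linux/irqflags.h>

	static DEFINE_PER_CPU(unsigned int, example_trace_depth);

	static void example_trace_entry(void)
	{
		unsigned long flags;
		unsigned int *depth;

		local_irq_save(flags);		/* stay on this CPU's counter */

		depth = &__get_cpu_var(example_trace_depth);
		if (*depth)
			goto out;		/* already tracing: do not recurse */

		(*depth)++;
		/* ... emit the tracepoint; anything it calls that ends up back
		 *     here takes the early-out above instead of recursing ... */
		(*depth)--;
	out:
		local_irq_restore(flags);
	}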
static unsigned char ras_log_buf[RTAS_ERROR_LOG_MAX];
static DEFINE_SPINLOCK(ras_log_buf_lock);
-static char mce_data_buf[RTAS_ERROR_LOG_MAX];
+static char global_mce_data_buf[RTAS_ERROR_LOG_MAX];
+static DEFINE_PER_CPU(__u64, mce_data_buf);
static int ras_get_sensor_state_token;
static int ras_check_exception_token;
return IRQ_HANDLED;
}
-/* Get the error information for errors coming through the
+/*
+ * Some versions of FWNMI place the buffer inside the 4kB page starting at
+ * 0x7000. Other versions place it inside the rtas buffer. We check both.
+ */
+#define VALID_FWNMI_BUFFER(A) \
+ ((((A) >= 0x7000) && ((A) < 0x7ff0)) || \
+ (((A) >= rtas.base) && ((A) < (rtas.base + rtas.size - 16))))
+
+/*
+ * Get the error information for errors coming through the
* FWNMI vectors. The pt_regs' r3 will be updated to reflect
* the actual r3 if possible, and a ptr to the error log entry
* will be returned if found.
*
- * The mce_data_buf does not have any locks or protection around it,
+ * If the RTAS error is not of the extended type, then we put it in a per
+ * cpu 64bit buffer. If it is the extended type we use global_mce_data_buf.
+ *
+ * The global_mce_data_buf does not have any locks or protection around it,
* if a second machine check comes in, or a system reset is done
* before we have logged the error, then we will get corruption in the
* error log. This is preferable over holding off on calling
*/
static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
{
- unsigned long errdata = regs->gpr[3];
- struct rtas_error_log *errhdr = NULL;
unsigned long *savep;
+ struct rtas_error_log *h, *errhdr = NULL;
+
+ if (!VALID_FWNMI_BUFFER(regs->gpr[3])) {
+ printk(KERN_ERR "FWNMI: corrupt r3\n");
+ return NULL;
+ }
- if ((errdata >= 0x7000 && errdata < 0x7fff0) ||
- (errdata >= rtas.base && errdata < rtas.base + rtas.size - 16)) {
- savep = __va(errdata);
- regs->gpr[3] = savep[0]; /* restore original r3 */
- memset(mce_data_buf, 0, RTAS_ERROR_LOG_MAX);
- memcpy(mce_data_buf, (char *)(savep + 1), RTAS_ERROR_LOG_MAX);
- errhdr = (struct rtas_error_log *)mce_data_buf;
+ savep = __va(regs->gpr[3]);
+ regs->gpr[3] = savep[0]; /* restore original r3 */
+
+ /* If it isn't an extended log we can use the per cpu 64bit buffer */
+ h = (struct rtas_error_log *)&savep[1];
+ if (!h->extended) {
+ memcpy(&__get_cpu_var(mce_data_buf), h, sizeof(__u64));
+ errhdr = (struct rtas_error_log *)&__get_cpu_var(mce_data_buf);
} else {
- printk("FWNMI: corrupt r3\n");
+ int len;
+
+ len = max_t(int, 8+h->extended_log_length, RTAS_ERROR_LOG_MAX);
+ memset(global_mce_data_buf, 0, RTAS_ERROR_LOG_MAX);
+ memcpy(global_mce_data_buf, h, len);
+ errhdr = (struct rtas_error_log *)global_mce_data_buf;
}
+
return errhdr;
}
{
int ret = rtas_call(rtas_token("ibm,nmi-interlock"), 0, 1, NULL);
if (ret != 0)
- printk("FWNMI: nmi-interlock failed: %d\n", ret);
+ printk(KERN_ERR "FWNMI: nmi-interlock failed: %d\n", ret);
}
int pSeries_system_reset_exception(struct pt_regs *regs)
* Return 1 if corrected (or delivered a signal).
* Return 0 if there is nothing we can do.
*/
-static int recover_mce(struct pt_regs *regs, struct rtas_error_log * err)
+static int recover_mce(struct pt_regs *regs, struct rtas_error_log *err)
{
- int nonfatal = 0;
+ int recovered = 0;
- if (err->disposition == RTAS_DISP_FULLY_RECOVERED) {
+ if (!(regs->msr & MSR_RI)) {
+ /* If MSR_RI isn't set, we cannot recover */
+ recovered = 0;
+
+ } else if (err->disposition == RTAS_DISP_FULLY_RECOVERED) {
/* Platform corrected itself */
- nonfatal = 1;
- } else if ((regs->msr & MSR_RI) &&
- user_mode(regs) &&
- err->severity == RTAS_SEVERITY_ERROR_SYNC &&
- err->disposition == RTAS_DISP_NOT_RECOVERED &&
- err->target == RTAS_TARGET_MEMORY &&
- err->type == RTAS_TYPE_ECC_UNCORR &&
- !(current->pid == 0 || is_global_init(current))) {
- /* Kill off a user process with an ECC error */
- printk(KERN_ERR "MCE: uncorrectable ecc error for pid %d\n",
- current->pid);
- /* XXX something better for ECC error? */
- _exception(SIGBUS, regs, BUS_ADRERR, regs->nip);
- nonfatal = 1;
+ recovered = 1;
+
+ } else if (err->disposition == RTAS_DISP_LIMITED_RECOVERY) {
+ /* Platform corrected itself but could be degraded */
+ printk(KERN_ERR "MCE: limited recovery, system may "
+ "be degraded\n");
+ recovered = 1;
+
+ } else if (user_mode(regs) && !is_global_init(current) &&
+ err->severity == RTAS_SEVERITY_ERROR_SYNC) {
+
+ /*
+ * If we received a synchronous error when in userspace
+ * kill the task. Firmware may report details of the fail
+ * asynchronously, so we can't rely on the target and type
+ * fields being valid here.
+ */
+ printk(KERN_ERR "MCE: uncorrectable error, killing task "
+ "%s:%d\n", current->comm, current->pid);
+
+ _exception(SIGBUS, regs, BUS_MCEERR_AR, regs->nip);
+ recovered = 1;
}
- log_error((char *)err, ERR_TYPE_RTAS_LOG, !nonfatal);
+ log_error((char *)err, ERR_TYPE_RTAS_LOG, 0);
- return nonfatal;
+ return recovered;
}
/*
saved_mcheck_exception = ppc_md.machine_check_exception;
ppc_md.machine_check_exception = fsl_rio_mcheck_exception;
#endif
- /* Ensure that RFXE is set */
- mtspr(SPRN_HID1, (mfspr(SPRN_HID1) | 0x20000));
return 0;
err:
/* make sure mask gets to controller before we return to user */
do {
if (!loops--) {
- printk(KERN_ERR "mpic_enable_irq timeout\n");
+ printk(KERN_ERR "%s: timeout on hwirq %u\n",
+ __func__, src);
break;
}
} while(mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI)) & MPIC_VECPRI_MASK);
/* make sure mask gets to controller before we return to user */
do {
if (!loops--) {
- printk(KERN_ERR "mpic_enable_irq timeout\n");
+ printk(KERN_ERR "%s: timeout on hwirq %u\n",
+ __func__, src);
break;
}
} while(!(mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI)) & MPIC_VECPRI_MASK));
If unsure, say Y.
config CHSC_SCH
- def_tristate y
+ def_tristate m
prompt "Support for CHSC subchannels"
help
This driver allows usage of CHSC subchannels. A CHSC subchannel
unsigned long output_addr;
unsigned char *output;
- check_ipl_parmblock((void *) 0, (unsigned long) output + SZ__bss_start);
+ output_addr = ((unsigned long) &_end + HEAP_SIZE + 4095UL) & -4096UL;
+ check_ipl_parmblock((void *) 0, output_addr + SZ__bss_start);
memset(&_bss, 0, &_ebss - &_bss);
free_mem_ptr = (unsigned long)&_end;
free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
- output = (unsigned char *) ((free_mem_end_ptr + 4095UL) & -4096UL);
+ output = (unsigned char *) output_addr;
#ifdef CONFIG_BLK_DEV_INITRD
/*
BUG_ON(ret != bsize);
data += bsize - index;
len -= bsize - index;
+ index = 0;
}
/* process as many blocks as possible */
static inline int atomic_read(const atomic_t *v)
{
- barrier();
- return v->counter;
+ int c;
+
+ asm volatile(
+ " l %0,%1\n"
+ : "=d" (c) : "Q" (v->counter));
+ return c;
}
static inline void atomic_set(atomic_t *v, int i)
{
- v->counter = i;
- barrier();
+ asm volatile(
+ " st %1,%0\n"
+ : "=Q" (v->counter) : "d" (i));
}
static inline int atomic_add_return(int i, atomic_t *v)
static inline long long atomic64_read(const atomic64_t *v)
{
- barrier();
- return v->counter;
+ long long c;
+
+ asm volatile(
+ " lg %0,%1\n"
+ : "=d" (c) : "Q" (v->counter));
+ return c;
}
static inline void atomic64_set(atomic64_t *v, long long i)
{
- v->counter = i;
- barrier();
+ asm volatile(
+ " stg %1,%0\n"
+ : "=Q" (v->counter) : "d" (i));
}
static inline long long atomic64_add_return(long long i, atomic64_t *v)
#define L1_CACHE_BYTES 256
#define L1_CACHE_SHIFT 8
+#define NET_SKB_PAD 32
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
#ifndef _S390_CACHEFLUSH_H
#define _S390_CACHEFLUSH_H
-/* Keep includes the same across arches. */
-#include <linux/mm.h>
-
/* Caches aren't brain-dead on the s390. */
-#define flush_cache_all() do { } while (0)
-#define flush_cache_mm(mm) do { } while (0)
-#define flush_cache_dup_mm(mm) do { } while (0)
-#define flush_cache_range(vma, start, end) do { } while (0)
-#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
-#define flush_dcache_page(page) do { } while (0)
-#define flush_dcache_mmap_lock(mapping) do { } while (0)
-#define flush_dcache_mmap_unlock(mapping) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
-#define flush_icache_page(vma,pg) do { } while (0)
-#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
-#define flush_cache_vmap(start, end) do { } while (0)
-#define flush_cache_vunmap(start, end) do { } while (0)
-
-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
+#include <asm-generic/cacheflush.h>
#ifdef CONFIG_DEBUG_PAGEALLOC
void kernel_map_pages(struct page *page, int numpages, int enable);
*/
extern unsigned long thread_saved_pc(struct task_struct *t);
-/*
- * Print register of task into buffer. Used in fs/proc/array.c.
- */
-extern void task_show_regs(struct seq_file *m, struct task_struct *task);
-
extern void show_code(struct pt_regs *regs);
unsigned long get_wchan(struct task_struct *p);
*/
#include <linux/mm.h>
+#include <linux/pagemap.h>
#include <linux/swap.h>
#include <asm/processor.h>
#include <asm/pgalloc.h>
show_last_breaking_event(regs);
}
-/* This is called from fs/proc/array.c */
-void task_show_regs(struct seq_file *m, struct task_struct *task)
-{
- struct pt_regs *regs;
-
- regs = task_pt_regs(task);
- seq_printf(m, "task: %p, ksp: %p\n",
- task, (void *)task->thread.ksp);
- seq_printf(m, "User PSW : %p %p\n",
- (void *) regs->psw.mask, (void *)regs->psw.addr);
-
- seq_printf(m, "User GPRS: " FOURLONG,
- regs->gprs[0], regs->gprs[1],
- regs->gprs[2], regs->gprs[3]);
- seq_printf(m, " " FOURLONG,
- regs->gprs[4], regs->gprs[5],
- regs->gprs[6], regs->gprs[7]);
- seq_printf(m, " " FOURLONG,
- regs->gprs[8], regs->gprs[9],
- regs->gprs[10], regs->gprs[11]);
- seq_printf(m, " " FOURLONG,
- regs->gprs[12], regs->gprs[13],
- regs->gprs[14], regs->gprs[15]);
- seq_printf(m, "User ACRS: %08x %08x %08x %08x\n",
- task->thread.acrs[0], task->thread.acrs[1],
- task->thread.acrs[2], task->thread.acrs[3]);
- seq_printf(m, " %08x %08x %08x %08x\n",
- task->thread.acrs[4], task->thread.acrs[5],
- task->thread.acrs[6], task->thread.acrs[7]);
- seq_printf(m, " %08x %08x %08x %08x\n",
- task->thread.acrs[8], task->thread.acrs[9],
- task->thread.acrs[10], task->thread.acrs[11]);
- seq_printf(m, " %08x %08x %08x %08x\n",
- task->thread.acrs[12], task->thread.acrs[13],
- task->thread.acrs[14], task->thread.acrs[15]);
-}
-
static DEFINE_SPINLOCK(die_lock);
void die(const char * str, struct pt_regs * regs, long err)
unsigned long tmp1;
asm volatile(
+ " sacf 256\n"
" "AHI" %0,-1\n"
" jo 5f\n"
- " sacf 256\n"
" bras %3,3f\n"
"0:"AHI" %0,257\n"
"1: mvc 0(1,%1),0(%2)\n"
"3:"AHI" %0,-256\n"
" jnm 2b\n"
"4: ex %0,1b-0b(%3)\n"
- " sacf 0\n"
"5: "SLR" %0,%0\n"
- "6:\n"
+ "6: sacf 0\n"
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "+a" (from), "=a" (tmp1)
: : "cc", "memory");
unsigned long tmp1, tmp2;
asm volatile(
+ " sacf 256\n"
" "AHI" %0,-1\n"
" jo 5f\n"
- " sacf 256\n"
" bras %3,3f\n"
" xc 0(1,%1),0(%1)\n"
"0:"AHI" %0,257\n"
"3:"AHI" %0,-256\n"
" jnm 2b\n"
"4: ex %0,0(%3)\n"
- " sacf 0\n"
"5: "SLR" %0,%0\n"
- "6:\n"
+ "6: sacf 0\n"
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "=a" (tmp1), "=a" (tmp2)
: : "cc", "memory");
page->flags ^= bits;
if (page->flags & FRAG_MASK) {
/* Page now has some free pgtable fragments. */
- list_move(&page->lru, &mm->context.pgtable_list);
+ if (!list_empty(&page->lru))
+ list_move(&page->lru, &mm->context.pgtable_list);
page = NULL;
} else
/* All fragments of the 4K page have been freed. */
menu "Machine selection"
+config SCORE
+ def_bool y
+ select HAVE_GENERIC_HARDIRQS
+
choice
prompt "System type"
default MACH_SPCT6600
config SCHED_NO_NO_OMIT_FRAME_POINTER
def_bool y
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
config GENERIC_SYSCALL_TABLE
def_bool y
config 32BIT
def_bool y
-config GENERIC_HARDIRQS
- def_bool y
-
config ARCH_FLATMEM_ENABLE
def_bool y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_KALLSYMS is not set
# CONFIG_HOTPLUG is not set
CONFIG_SLAB=y
config SUPERH
def_bool y
- select EMBEDDED
+ select EXPERT
select CLKDEV_LOOKUP
select HAVE_IDE if HAS_IOPORT
select HAVE_MEMBLOCK
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_KERNEL_XZ
select HAVE_KERNEL_LZO
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_REGS_AND_STACK_ACCESS_API
libs-$(CONFIG_SUPERH32) := arch/sh/lib/ $(libs-y)
libs-$(CONFIG_SUPERH64) := arch/sh/lib64/ $(libs-y)
-BOOT_TARGETS = uImage uImage.bz2 uImage.gz uImage.lzma uImage.lzo \
+BOOT_TARGETS = uImage uImage.bz2 uImage.gz uImage.lzma uImage.xz uImage.lzo \
uImage.srec uImage.bin zImage vmlinux.bin vmlinux.srec \
romImage
PHONY += $(BOOT_TARGETS)
@echo '* uImage.gz - Kernel-only image for U-Boot (gzip)'
@echo ' uImage.bz2 - Kernel-only image for U-Boot (bzip2)'
@echo ' uImage.lzma - Kernel-only image for U-Boot (lzma)'
+ @echo ' uImage.xz - Kernel-only image for U-Boot (xz)'
@echo ' uImage.lzo - Kernel-only image for U-Boot (lzo)'
endef
i2c_register_board_info(1, i2c1_devices,
ARRAY_SIZE(i2c1_devices));
+#if defined(CONFIG_VIDEO_SH_VOU) || defined(CONFIG_VIDEO_SH_VOU_MODULE)
/* VOU */
gpio_request(GPIO_FN_DV_D15, NULL);
gpio_request(GPIO_FN_DV_D14, NULL);
/* Remove reset */
gpio_set_value(GPIO_PTG4, 1);
+#endif
return platform_add_devices(ecovec_devices,
ARRAY_SIZE(ecovec_devices));
suffix-$(CONFIG_KERNEL_GZIP) := gz
suffix-$(CONFIG_KERNEL_BZIP2) := bz2
suffix-$(CONFIG_KERNEL_LZMA) := lzma
+suffix-$(CONFIG_KERNEL_XZ) := xz
suffix-$(CONFIG_KERNEL_LZO) := lzo
targets := zImage vmlinux.srec romImage uImage uImage.srec uImage.gz \
- uImage.bz2 uImage.lzma uImage.lzo uImage.bin
+ uImage.bz2 uImage.lzma uImage.xz uImage.lzo uImage.bin
extra-y += vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
- vmlinux.bin.lzo
+ vmlinux.bin.xz vmlinux.bin.lzo
subdir- := compressed romimage
$(obj)/zImage: $(obj)/compressed/vmlinux FORCE
$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzma)
+$(obj)/vmlinux.bin.xz: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,xzkern)
+
$(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzo)
$(obj)/uImage.lzma: $(obj)/vmlinux.bin.lzma
$(call if_changed,uimage,lzma)
+$(obj)/uImage.xz: $(obj)/vmlinux.bin.xz
+ $(call if_changed,uimage,xz)
+
$(obj)/uImage.lzo: $(obj)/vmlinux.bin.lzo
$(call if_changed,uimage,lzo)
targets := vmlinux vmlinux.bin vmlinux.bin.gz \
vmlinux.bin.bz2 vmlinux.bin.lzma \
- vmlinux.bin.lzo \
+ vmlinux.bin.xz vmlinux.bin.lzo \
head_$(BITS).o misc.o piggy.o
OBJECTS = $(obj)/head_$(BITS).o $(obj)/misc.o $(obj)/cache.o
$(call if_changed,bzip2)
$(obj)/vmlinux.bin.lzma: $(vmlinux.bin.all-y) FORCE
$(call if_changed,lzma)
+$(obj)/vmlinux.bin.xz: $(vmlinux.bin.all-y) FORCE
+ $(call if_changed,xzkern)
$(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
$(call if_changed,lzo)
#include "../../../../lib/decompress_unlzma.c"
#endif
+#ifdef CONFIG_KERNEL_XZ
+#include "../../../../lib/decompress_unxz.c"
+#endif
+
#ifdef CONFIG_KERNEL_LZO
#include "../../../../lib/decompress_unlzo.c"
#endif
extern void pgtable_cache_init(void);
struct vm_area_struct;
+struct mm_struct;
extern void __update_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte);
static int __init sh7750_devices_setup(void)
{
if (mach_is_rts7751r2d()) {
- platform_register_device(&scif_device);
+ platform_device_register(&scif_device);
} else {
- platform_register_device(&sci_device);
- platform_register_device(&scif_device);
+ platform_device_register(&sci_device);
+ platform_device_register(&scif_device);
}
return platform_add_devices(sh7750_devices,
static DEFINE_PER_CPU(struct cpu, cpu_devices);
cpumask_t cpu_core_map[NR_CPUS];
+EXPORT_SYMBOL(cpu_core_map);
static cpumask_t cpu_coregroup_map(unsigned int cpu)
{
select RTC_DRV_STARFIRE
select HAVE_PERF_EVENTS
select PERF_USE_VMALLOC
+ select HAVE_GENERIC_HARDIRQS
config ARCH_DEFCONFIG
string
config NEED_PER_CPU_PAGE_FIRST_CHUNK
def_bool y if SPARC64
-config GENERIC_HARDIRQS_NO__DO_IRQ
- bool
- def_bool y if SPARC64
-
config MMU
bool
default y
can be controlled through /sys/devices/system/cpu/cpu#.
Say N if you want to disable CPU hotplug.
-config GENERIC_HARDIRQS
- bool
- default y if SPARC64
-
source "kernel/time/Kconfig"
if SPARC64
extern u64 pcr_enable;
+extern int pcr_arch_init(void);
+
#endif /* __PCR_H */
static int iommu_alloc_ctx(struct iommu *iommu)
{
int lowest = iommu->ctx_lowest_free;
- int sz = IOMMU_NUM_CTXS - lowest;
- int n = find_next_zero_bit(iommu->ctx_bitmap, sz, lowest);
+ int n = find_next_zero_bit(iommu->ctx_bitmap, IOMMU_NUM_CTXS, lowest);
- if (unlikely(n == sz)) {
+ if (unlikely(n == IOMMU_NUM_CTXS)) {
n = find_next_zero_bit(iommu->ctx_bitmap, lowest, 1);
if (unlikely(n == lowest)) {
printk(KERN_WARNING "IOMMU: Ran out of contexts.\n");
unregister_perf_hsvc();
return err;
}
-
-early_initcall(pcr_arch_init);
#include <asm/mdesc.h>
#include <asm/ldc.h>
#include <asm/hypervisor.h>
+#include <asm/pcr.h>
#include "cpumap.h"
void __init smp_cpus_done(unsigned int max_cpus)
{
+ pcr_arch_init();
}
void smp_send_reschedule(int cpu)
.globl __do_int_store
__do_int_store:
ld [%o2], %g1
- cmp %1, 2
+ cmp %o1, 2
be 2f
- cmp %1, 4
+ cmp %o1, 4
be 1f
srl %g1, 24, %g2
srl %g1, 16, %g7
*/
#include <linux/string.h>
-#include <linux/bitops.h>
+#include <linux/bitmap.h>
#include <asm/bitext.h>
while (test_bit(offset + i, t->map) == 0) {
i++;
if (i == len) {
- for (i = 0; i < len; i++)
- __set_bit(offset + i, t->map);
+ bitmap_set(t->map, offset, len);
if (offset == t->first_free)
t->first_free = find_next_zero_bit
(t->map, t->size,
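bitmap_set() replaces the open-coded loop of __set_bit() calls. A one-function comparison (example_mark_range() is a made-up name):

	#include <linux/bitmap.h>

	static void example_mark_range(unsigned long *map, int offset, int len)
	{
		/* Equivalent to:  for (i = 0; i < len; i++) __set_bit(offset + i, map);
		 * but the helper can fill whole words at a time. */
		bitmap_set(map, offset, len);
	}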
# For a description of the syntax of this configuration file,
# see Documentation/kbuild/config-language.txt.
-config MMU
- def_bool y
-
-config GENERIC_CSUM
- def_bool y
-
-config GENERIC_HARDIRQS
+config TILE
def_bool y
+ select HAVE_KVM if !TILEGX
+ select GENERIC_FIND_FIRST_BIT
+ select GENERIC_FIND_NEXT_BIT
+ select USE_GENERIC_SMP_HELPERS
+ select CC_OPTIMIZE_FOR_SIZE
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_IRQ_PROBE
+ select GENERIC_PENDING_IRQ if SMP
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
+# FIXME: investigate whether we need/want these options.
+# select HAVE_IOREMAP_PROT
+# select HAVE_OPTPROBES
+# select HAVE_REGS_AND_STACK_ACCESS_API
+# select HAVE_HW_BREAKPOINT
+# select PERF_EVENTS
+# select HAVE_USER_RETURN_NOTIFIER
+# config NO_BOOTMEM
+# config ARCH_SUPPORTS_DEBUG_PAGEALLOC
+# config HUGETLB_PAGE_SIZE_VARIABLE
-config GENERIC_IRQ_PROBE
+config MMU
def_bool y
-config GENERIC_PENDING_IRQ
+config GENERIC_CSUM
def_bool y
- depends on GENERIC_HARDIRQS && SMP
config SEMAPHORE_SLEEPERS
def_bool y
select HVC_DRIVER
def_bool y
-config TILE
- def_bool y
- select HAVE_KVM if !TILEGX
- select GENERIC_FIND_FIRST_BIT
- select GENERIC_FIND_NEXT_BIT
- select USE_GENERIC_SMP_HELPERS
- select CC_OPTIMIZE_FOR_SIZE
-
-# FIXME: investigate whether we need/want these options.
-# select HAVE_IOREMAP_PROT
-# select HAVE_OPTPROBES
-# select HAVE_REGS_AND_STACK_ACCESS_API
-# select HAVE_HW_BREAKPOINT
-# select PERF_EVENTS
-# select HAVE_USER_RETURN_NOTIFIER
-# config NO_BOOTMEM
-# config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-# config HUGETLB_PAGE_SIZE_VARIABLE
-
-
# Please note: TILE-Gx support is not yet finalized; this is
# the preliminary support. TILE-Gx drivers are only provided
# with the alpha or beta test versions for Tilera customers.
choice
depends on !TILEGX
- prompt "Memory split" if EMBEDDED
+ prompt "Memory split" if EXPERT
default VMSPLIT_3G
---help---
Select the desired split between kernel and user memory.
source "lib/Kconfig.debug"
config EARLY_PRINTK
- bool "Early printk" if EMBEDDED && DEBUG_KERNEL
+ bool "Early printk" if EXPERT && DEBUG_KERNEL
default y
help
Write kernel log output directly via the hypervisor console.
CONFIG_SYSVIPC=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="usr/contents.txt"
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_MODULES=y
option defconfig_list
default "arch/$ARCH/defconfig"
-# UML uses the generic IRQ subsystem
-config GENERIC_HARDIRQS
- bool
- default y
-
config UML
bool
default y
+ select HAVE_GENERIC_HARDIRQS
config MMU
bool
If you don't know what to do, say N.
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
-# CONFIG_EMBEDDED is not set
+# CONFIG_EXPERT is not set
CONFIG_UID16=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
as it is off-chip. APB timers are always running regardless of CPU
C states, they are used as per CPU clockevent device when possible.
-# Mark as embedded because too many people got it wrong.
+# Mark as expert because too many people got it wrong.
# The code disables itself when not needed.
config DMI
default y
- bool "Enable DMI scanning" if EMBEDDED
+ bool "Enable DMI scanning" if EXPERT
---help---
Enabled scanning of DMI to identify machine quirks. Say Y
here unless you have verified that your setup is not
BIOS code.
config GART_IOMMU
- bool "GART IOMMU support" if EMBEDDED
+ bool "GART IOMMU support" if EXPERT
default y
select SWIOTLB
depends on X86_64 && PCI && AMD_NB
depends on X86_MCE_INTEL
config VM86
- bool "Enable VM86 support" if EMBEDDED
+ bool "Enable VM86 support" if EXPERT
default y
depends on X86_32
---help---
choice
depends on EXPERIMENTAL
- prompt "Memory split" if EMBEDDED
+ prompt "Memory split" if EXPERT
default VMSPLIT_3G
depends on X86_32
---help---
def_bool X86_64 || HIGHMEM64G
config DIRECT_GBPAGES
- bool "Enable 1GB pages for kernel pagetables" if EMBEDDED
+ bool "Enable 1GB pages for kernel pagetables" if EXPERT
default y
depends on X86_64
---help---
config MTRR
def_bool y
- prompt "MTRR (Memory Type Range Register) support" if EMBEDDED
+ prompt "MTRR (Memory Type Range Register) support" if EXPERT
---help---
On Intel P6 family processors (Pentium Pro, Pentium II and later)
the Memory Type Range Registers (MTRRs) may be used to control
config X86_PAT
def_bool y
- prompt "x86 PAT support" if EMBEDDED
+ prompt "x86 PAT support" if EXPERT
depends on MTRR
---help---
Use PAT attributes to setup page level cache control.
code in physical address mode via KEXEC
config PHYSICAL_START
- hex "Physical address where the kernel is loaded" if (EMBEDDED || CRASH_DUMP)
+ hex "Physical address where the kernel is loaded" if (EXPERT || CRASH_DUMP)
default "0x1000000"
---help---
This gives the physical address where the kernel is loaded.
depends on X86_64 && PCI && ACPI
config PCI_CNB20LE_QUIRK
- bool "Read CNB20LE Host Bridge Windows" if EMBEDDED
+ bool "Read CNB20LE Host Bridge Windows" if EXPERT
default n
depends on PCI && EXPERIMENTAL
help
depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486 || M386) && !UML
menuconfig PROCESSOR_SELECT
- bool "Supported processor vendors" if EMBEDDED
+ bool "Supported processor vendors" if EXPERT
---help---
This lets you choose what x86 vendor support code your kernel
will include.
see errors. Disable this if you want silent bootup.
config EARLY_PRINTK
- bool "Early printk" if EMBEDDED
+ bool "Early printk" if EXPERT
default y
---help---
Write kernel log output directly into the VGA buffer or to a serial
config DOUBLEFAULT
default y
- bool "Enable doublefault exception handler" if EMBEDDED
+ bool "Enable doublefault exception handler" if EXPERT
depends on X86_32
---help---
This option allows trapping of rare doublefault exceptions that
extern void init_bsp_APIC(void);
extern void setup_local_APIC(void);
extern void end_local_APIC_setup(void);
+extern void bsp_end_local_APIC_setup(void);
extern void init_apic_mappings(void);
void register_lapic_address(unsigned long address);
extern void setup_boot_APIC_clock(void);
#ifndef _ASM_X86_CACHEFLUSH_H
#define _ASM_X86_CACHEFLUSH_H
-/* Keep includes the same across arches. */
-#include <linux/mm.h>
-
/* Caches aren't brain-dead on the intel. */
-static inline void flush_cache_all(void) { }
-static inline void flush_cache_mm(struct mm_struct *mm) { }
-static inline void flush_cache_dup_mm(struct mm_struct *mm) { }
-static inline void flush_cache_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end) { }
-static inline void flush_cache_page(struct vm_area_struct *vma,
- unsigned long vmaddr, unsigned long pfn) { }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
-static inline void flush_dcache_page(struct page *page) { }
-static inline void flush_dcache_mmap_lock(struct address_space *mapping) { }
-static inline void flush_dcache_mmap_unlock(struct address_space *mapping) { }
-static inline void flush_icache_range(unsigned long start,
- unsigned long end) { }
-static inline void flush_icache_page(struct vm_area_struct *vma,
- struct page *page) { }
-static inline void flush_icache_user_range(struct vm_area_struct *vma,
- struct page *page,
- unsigned long addr,
- unsigned long len) { }
-static inline void flush_cache_vmap(unsigned long start, unsigned long end) { }
-static inline void flush_cache_vunmap(unsigned long start,
- unsigned long end) { }
-
-static inline void copy_to_user_page(struct vm_area_struct *vma,
- struct page *page, unsigned long vaddr,
- void *dst, const void *src,
- unsigned long len)
-{
- memcpy(dst, src, len);
-}
-
-static inline void copy_from_user_page(struct vm_area_struct *vma,
- struct page *page, unsigned long vaddr,
- void *dst, const void *src,
- unsigned long len)
-{
- memcpy(dst, src, len);
-}
+#include <asm-generic/cacheflush.h>
#ifdef CONFIG_X86_PAT
/*
DECLARE_PER_CPU(int, cpu_state);
+int mwait_usable(const struct cpuinfo_x86 *);
#endif /* _ASM_X86_CPU_H */
do { \
asm goto("1:" \
JUMP_LABEL_INITIAL_NOP \
- ".pushsection __jump_table, \"a\" \n\t"\
+ ".pushsection __jump_table, \"aw\" \n\t"\
_ASM_PTR "1b, %l[" #label "], %c0 \n\t" \
".popsection \n\t" \
: : "i" (key) : : label); \
unsigned cpu = smp_processor_id();
if (likely(prev != next)) {
- /* stop flush ipis for the previous mm */
- cpumask_clear_cpu(cpu, mm_cpumask(prev));
#ifdef CONFIG_SMP
percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
percpu_write(cpu_tlbstate.active_mm, next);
/* Re-load page tables */
load_cr3(next->pgd);
+ /* stop flush ipis for the previous mm */
+ cpumask_clear_cpu(cpu, mm_cpumask(prev));
+
/*
* load the LDT, if the LDT is different:
*/
#ifndef _ASM_X86_NUMA_32_H
#define _ASM_X86_NUMA_32_H
+extern int numa_off;
+
extern int pxm_to_nid(int pxm);
extern void numa_remove_cpu(int cpu);
#ifdef CONFIG_NUMA_EMU
#define FAKE_NODE_MIN_SIZE ((u64)32 << 20)
#define FAKE_NODE_MIN_HASH_MASK (~(FAKE_NODE_MIN_SIZE - 1UL))
+void numa_emu_cmdline(char *);
#endif /* CONFIG_NUMA_EMU */
#else
static inline void init_cpu_to_node(void) { }
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, pmd_t pmd)
{
-#if PAGETABLE_LEVELS >= 3
if (sizeof(pmdval_t) > sizeof(long))
/* 5 arg words */
pv_mmu_ops.set_pmd_at(mm, addr, pmdp, pmd);
else
- PVOP_VCALL4(pv_mmu_ops.set_pmd_at, mm, addr, pmdp, pmd.pmd);
-#endif
+ PVOP_VCALL4(pv_mmu_ops.set_pmd_at, mm, addr, pmdp,
+ native_pmd_val(pmd));
}
#endif
typeof(var) pxo_new__ = (nval); \
switch (sizeof(var)) { \
case 1: \
- asm("\n1:mov "__percpu_arg(1)",%%al" \
- "\n\tcmpxchgb %2, "__percpu_arg(1) \
+ asm("\n\tmov "__percpu_arg(1)",%%al" \
+ "\n1:\tcmpxchgb %2, "__percpu_arg(1) \
"\n\tjnz 1b" \
- : "=a" (pxo_ret__), "+m" (var) \
+ : "=&a" (pxo_ret__), "+m" (var) \
: "q" (pxo_new__) \
: "memory"); \
break; \
case 2: \
- asm("\n1:mov "__percpu_arg(1)",%%ax" \
- "\n\tcmpxchgw %2, "__percpu_arg(1) \
+ asm("\n\tmov "__percpu_arg(1)",%%ax" \
+ "\n1:\tcmpxchgw %2, "__percpu_arg(1) \
"\n\tjnz 1b" \
- : "=a" (pxo_ret__), "+m" (var) \
+ : "=&a" (pxo_ret__), "+m" (var) \
: "r" (pxo_new__) \
: "memory"); \
break; \
case 4: \
- asm("\n1:mov "__percpu_arg(1)",%%eax" \
- "\n\tcmpxchgl %2, "__percpu_arg(1) \
+ asm("\n\tmov "__percpu_arg(1)",%%eax" \
+ "\n1:\tcmpxchgl %2, "__percpu_arg(1) \
"\n\tjnz 1b" \
- : "=a" (pxo_ret__), "+m" (var) \
+ : "=&a" (pxo_ret__), "+m" (var) \
: "r" (pxo_new__) \
: "memory"); \
break; \
case 8: \
- asm("\n1:mov "__percpu_arg(1)",%%rax" \
- "\n\tcmpxchgq %2, "__percpu_arg(1) \
+ asm("\n\tmov "__percpu_arg(1)",%%rax" \
+ "\n1:\tcmpxchgq %2, "__percpu_arg(1) \
"\n\tjnz 1b" \
- : "=a" (pxo_ret__), "+m" (var) \
+ : "=&a" (pxo_ret__), "+m" (var) \
: "r" (pxo_new__) \
: "memory"); \
break; \
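
The percpu_xchg_op() rework above is the classic cmpxchg retry idiom: load the old value once, then let a failed compare-and-swap supply the fresh value for the next attempt (on x86 a failed cmpxchg reloads the accumulator from memory, which is why the explicit mov can sit outside the loop and why the output operand gains the "&" early clobber). A minimal, stand-alone user-space sketch of that idiom, using a GCC __sync builtin rather than the kernel macro:

#include <stdio.h>

/* Sketch only: emulate xchg on top of compare-and-swap, as the asm does. */
static unsigned long xchg_via_cmpxchg(unsigned long *ptr, unsigned long new_val)
{
	unsigned long old = *ptr;	/* initial load, outside the retry loop */
	unsigned long cur;

	while ((cur = __sync_val_compare_and_swap(ptr, old, new_val)) != old)
		old = cur;		/* CAS failed: someone raced us, retry with the fresh value */

	return old;			/* previous contents, i.e. the xchg result */
}

int main(void)
{
	unsigned long v = 42;
	unsigned long prev = xchg_via_cmpxchg(&v, 7);

	printf("previous=%lu current=%lu\n", prev, v);
	return 0;
}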
#define this_cpu_xchg_1(pcp, nval) percpu_xchg_op(pcp, nval)
#define this_cpu_xchg_2(pcp, nval) percpu_xchg_op(pcp, nval)
#define this_cpu_xchg_4(pcp, nval) percpu_xchg_op(pcp, nval)
-#define this_cpu_xchg_8(pcp, nval) percpu_xchg_op(pcp, nval)
-#define this_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
#define irqsafe_cpu_add_1(pcp, val) percpu_add_op((pcp), val)
#define irqsafe_cpu_add_2(pcp, val) percpu_add_op((pcp), val)
#define irqsafe_cpu_xchg_1(pcp, nval) percpu_xchg_op(pcp, nval)
#define irqsafe_cpu_xchg_2(pcp, nval) percpu_xchg_op(pcp, nval)
#define irqsafe_cpu_xchg_4(pcp, nval) percpu_xchg_op(pcp, nval)
-#define irqsafe_cpu_xchg_8(pcp, nval) percpu_xchg_op(pcp, nval)
-#define irqsafe_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
#ifndef CONFIG_M386
#define __this_cpu_add_return_1(pcp, val) percpu_add_return_op(pcp, val)
#define this_cpu_or_8(pcp, val) percpu_to_op("or", (pcp), val)
#define this_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
#define this_cpu_add_return_8(pcp, val) percpu_add_return_op(pcp, val)
+#define this_cpu_xchg_8(pcp, nval) percpu_xchg_op(pcp, nval)
+#define this_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
#define irqsafe_cpu_add_8(pcp, val) percpu_add_op((pcp), val)
#define irqsafe_cpu_and_8(pcp, val) percpu_to_op("and", (pcp), val)
#define irqsafe_cpu_or_8(pcp, val) percpu_to_op("or", (pcp), val)
#define irqsafe_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
+#define irqsafe_cpu_xchg_8(pcp, nval) percpu_xchg_op(pcp, nval)
+#define irqsafe_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
#endif
/* This is not atomic against other CPUs -- CPU preemption needs to be off */
DECLARE_EARLY_PER_CPU(u16, x86_bios_cpu_apicid);
/* Static state in head.S used to set up a CPU */
-extern struct {
- void *sp;
- unsigned short ss;
-} stack_start;
+extern unsigned long stack_start; /* Initial stack pointer address */
struct smp_ops {
void (*smp_prepare_boot_cpu)(void);
+++ /dev/null
-#ifndef _ASM_X86_SYSTEM_64_H
-#define _ASM_X86_SYSTEM_64_H
-
-#include <asm/segment.h>
-#include <asm/cmpxchg.h>
-
-
-static inline unsigned long read_cr8(void)
-{
- unsigned long cr8;
- asm volatile("movq %%cr8,%0" : "=r" (cr8));
- return cr8;
-}
-
-static inline void write_cr8(unsigned long val)
-{
- asm volatile("movq %0,%%cr8" :: "r" (val) : "memory");
-}
-
-#include <linux/irqflags.h>
-
-#endif /* _ASM_X86_SYSTEM_64_H */
#include <linux/cpumask.h>
#include <asm/segment.h>
#include <asm/desc.h>
-
-#ifdef CONFIG_X86_32
#include <asm/pgtable.h>
-#endif
+#include <asm/cacheflush.h>
#include "realmode/wakeup.h"
#include "sleep.h"
#else /* CONFIG_64BIT */
header->trampoline_segment = setup_trampoline() >> 4;
#ifdef CONFIG_SMP
- stack_start.sp = temp_stack + sizeof(temp_stack);
+ stack_start = (unsigned long)temp_stack + sizeof(temp_stack);
early_gdt_descr.address =
(unsigned long)get_cpu_gdt_table(smp_processor_id());
initial_gs = per_cpu_offset(smp_processor_id());
memblock_x86_reserve_range(mem, mem + WAKEUP_SIZE, "ACPI WAKEUP");
}
+int __init acpi_configure_wakeup_memory(void)
+{
+ if (acpi_realmode)
+ set_memory_x(acpi_realmode, WAKEUP_SIZE >> PAGE_SHIFT);
+
+ return 0;
+}
+arch_initcall(acpi_configure_wakeup_memory);
+
static int __init acpi_sleep_setup(char *str)
{
atomic_set(&stop_machine_first, 1);
wrote_text = 0;
- stop_machine(stop_machine_text_poke, (void *)&tpp, NULL);
+ __stop_machine(stop_machine_text_poke, (void *)&tpp, NULL);
}
#if defined(CONFIG_DYNAMIC_FTRACE) || defined(HAVE_JUMP_LABEL)
#endif
apic_pm_activate();
+}
+
+void __init bsp_end_local_APIC_setup(void)
+{
+ end_local_APIC_setup();
/*
* Now that local APIC setup is completed for BP, configure the fault
* handling for interrupt remapping.
*/
- if (!smp_processor_id() && intr_remapping_enabled)
+ if (intr_remapping_enabled)
enable_drhd_fault_handling();
}
enable_IO_APIC();
#endif
- end_local_APIC_setup();
+ bsp_end_local_APIC_setup();
#ifdef CONFIG_X86_IO_APIC
if (smp_found_config && !skip_ioapic_setup && nr_ioapics)
{
int i = 0;
+ if (nr_ioapics == 0)
+ return -1;
+
/* Find the IOAPIC that manages this GSI. */
for (i = 0; i < nr_ioapics; i++) {
if ((gsi >= mp_gsi_routing[i].gsi_base)
{ 0x0a, LVL_1_DATA, 8 }, /* 2 way set assoc, 32 byte line size */
{ 0x0c, LVL_1_DATA, 16 }, /* 4-way set assoc, 32 byte line size */
{ 0x0d, LVL_1_DATA, 16 }, /* 4-way set assoc, 64 byte line size */
+ { 0x0e, LVL_1_DATA, 24 }, /* 6-way set assoc, 64 byte line size */
{ 0x21, LVL_2, 256 }, /* 8-way set assoc, 64 byte line size */
{ 0x22, LVL_3, 512 }, /* 4-way set assoc, sectored cache, 64 byte line size */
{ 0x23, LVL_3, MB(1) }, /* 8-way set assoc, sectored cache, 64 byte line size */
{ 0x45, LVL_2, MB(2) }, /* 4-way set assoc, 32 byte line size */
{ 0x46, LVL_3, MB(4) }, /* 4-way set assoc, 64 byte line size */
{ 0x47, LVL_3, MB(8) }, /* 8-way set assoc, 64 byte line size */
+ { 0x48, LVL_2, MB(3) }, /* 12-way set assoc, 64 byte line size */
{ 0x49, LVL_3, MB(4) }, /* 16-way set assoc, 64 byte line size */
{ 0x4a, LVL_3, MB(6) }, /* 12-way set assoc, 64 byte line size */
{ 0x4b, LVL_3, MB(8) }, /* 16-way set assoc, 64 byte line size */
{ 0x7c, LVL_2, MB(1) }, /* 8-way set assoc, sectored cache, 64 byte line size */
{ 0x7d, LVL_2, MB(2) }, /* 8-way set assoc, 64 byte line size */
{ 0x7f, LVL_2, 512 }, /* 2-way set assoc, 64 byte line size */
+ { 0x80, LVL_2, 512 }, /* 8-way set assoc, 64 byte line size */
{ 0x82, LVL_2, 256 }, /* 8-way set assoc, 32 byte line size */
{ 0x83, LVL_2, 512 }, /* 8-way set assoc, 32 byte line size */
{ 0x84, LVL_2, MB(1) }, /* 8-way set assoc, 32 byte line size */
/* Callback to handle core threshold interrupts */
int (*platform_thermal_notify)(__u64 msr_val);
+EXPORT_SYMBOL(platform_thermal_notify);
static DEFINE_PER_CPU(struct thermal_state, thermal_state);
}
/*
- * MTRR initialization for all AP's
+ * Delayed MTRR initialization for all AP's
*/
void mtrr_aps_init(void)
{
if (!use_intel())
return;
+ /*
+ * Check if someone has requested the delay of AP MTRR initialization,
+ * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
+ * then we are done.
+ */
+ if (!mtrr_aps_delayed_init)
+ return;
+
set_mtrr(~0U, 0, 0, 0);
mtrr_aps_delayed_init = false;
}
* if an event is shared across the logical threads
* the user needs special permissions to be able to use it
*/
- if (p4_event_bind_map[v].shared) {
+ if (p4_ht_active() && p4_event_bind_map[v].shared) {
if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
return -EACCES;
}
event->hw.config = p4_set_ht_bit(event->hw.config);
if (event->attr.type == PERF_TYPE_RAW) {
-
+ struct p4_event_bind *bind;
+ unsigned int esel;
/*
* Clear bits we reserve to be managed by kernel itself
* and never allowed from a user space
* bits since we keep additional info here (for cache events and etc)
*/
event->hw.config |= event->attr.config;
+ bind = p4_config_get_bind(event->attr.config);
+ if (!bind) {
+ rc = -EINVAL;
+ goto out;
+ }
+ esel = P4_OPCODE_ESEL(bind->opcode);
+ event->hw.config |= p4_config_pack_cccr(P4_CCCR_ESEL(esel));
}
rc = x86_setup_perfctr(event);
unsigned used = 0;
struct thread_info *tinfo;
int graph = 0;
+ unsigned long dummy;
unsigned long bp;
if (!task)
task = current;
if (!stack) {
- unsigned long dummy;
stack = &dummy;
if (task && task != current)
stack = (unsigned long *)task->thread.sp;
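
Hoisting "dummy" out of the if-block matters because the pointer taken to it is still used after that block ends. A purely illustrative user-space sketch (not kernel code; the names "inner" and "dummy" are made up) of the scoping rule involved:

#include <stdio.h>

int main(void)
{
	unsigned long *stack;

	{
		unsigned long inner = 0;
		stack = &inner;		/* only valid while this inner block is live */
	}
	/* Dereferencing stack here would be undefined behaviour: the object's
	 * lifetime ended with the block. Declaring the variable at function
	 * scope, as the patch does with "dummy", keeps the pointer valid. */

	unsigned long dummy = 0;
	stack = &dummy;			/* function scope: safe for the rest of main() */
	printf("stack points at %p\n", (void *)stack);
	return 0;
}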
*/
__HEAD
ENTRY(startup_32)
+ movl pa(stack_start),%ecx
+
/* test KEEP_SEGMENTS flag to see if the bootloader is asking
us to not reload segments */
testb $(1<<6), BP_loadflags(%esi)
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
+ movl %eax,%ss
2:
+ leal -__PAGE_OFFSET(%ecx),%esp
/*
* Clear BSS first so that there are no surprises...
* _brk_end is set up to point to the first "safe" location.
* Mappings are created both at virtual address 0 (identity mapping)
* and PAGE_OFFSET for up to _end.
- *
- * Note that the stack is not yet set up!
*/
#ifdef CONFIG_X86_PAE
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
+ movl pa(stack_start),%ecx
+ movl %eax,%ss
+ leal -__PAGE_OFFSET(%ecx),%esp
#endif /* CONFIG_SMP */
default_entry:
movl %eax,%cr0 /* ..and set paging (PG) bit */
ljmp $__BOOT_CS,$1f /* Clear prefetch and normalize %eip */
1:
- /* Set up the stack pointer */
- lss stack_start,%esp
+ /* Shift the stack pointer to a virtual address */
+ addl $__PAGE_OFFSET, %esp
/*
* Initialize eflags. Some BIOS's leave bits like NT set. This would
#ifdef CONFIG_SMP
cmpb $0, ready
- jz 1f /* Initial CPU cleans BSS */
- jmp checkCPUtype
-1:
+ jnz checkCPUtype
#endif /* CONFIG_SMP */
/*
cld # gcc2 wants the direction flag cleared at all times
pushl $0 # fake return address for unwinder
-#ifdef CONFIG_SMP
- movb ready, %cl
movb $1, ready
- cmpb $0,%cl # the first CPU calls start_kernel
- je 1f
- movl (stack_start), %esp
-1:
-#endif /* CONFIG_SMP */
jmp *(initial_code)
/*
#endif
.data
+.balign 4
ENTRY(stack_start)
.long init_thread_union+THREAD_SIZE
- .long __BOOT_DS
-
-ready: .byte 0
early_recursion_flag:
.long 0
+ready: .byte 0
+
int_msg:
.asciz "Unknown interrupt or fault at: %p %p %p\n"
if (irr & (1 << (vector % 32))) {
irq = __this_cpu_read(vector_irq[vector]);
- data = irq_get_irq_data(irq);
+ desc = irq_to_desc(irq);
+ data = &desc->irq_data;
raw_spin_lock(&desc->lock);
if (data->chip->irq_retrigger)
data->chip->irq_retrigger(data);
#include <linux/utsname.h>
#include <trace/events/power.h>
#include <linux/hw_breakpoint.h>
+#include <asm/cpu.h>
#include <asm/system.h>
#include <asm/apic.h>
#include <asm/syscalls.h>
void show_regs_common(void)
{
- const char *board, *product;
+ const char *vendor, *product, *board;
- board = dmi_get_system_info(DMI_BOARD_NAME);
- if (!board)
- board = "";
+ vendor = dmi_get_system_info(DMI_SYS_VENDOR);
+ if (!vendor)
+ vendor = "";
product = dmi_get_system_info(DMI_PRODUCT_NAME);
if (!product)
product = "";
+ /* Board Name is optional */
+ board = dmi_get_system_info(DMI_BOARD_NAME);
+
printk(KERN_CONT "\n");
- printk(KERN_DEFAULT "Pid: %d, comm: %.20s %s %s %.*s %s/%s\n",
+ printk(KERN_DEFAULT "Pid: %d, comm: %.20s %s %s %.*s",
current->pid, current->comm, print_tainted(),
init_utsname()->release,
(int)strcspn(init_utsname()->version, " "),
- init_utsname()->version, board, product);
+ init_utsname()->version);
+ printk(KERN_CONT " ");
+ printk(KERN_CONT "%s %s", vendor, product);
+ if (board) {
+ printk(KERN_CONT "/");
+ printk(KERN_CONT "%s", board);
+ }
+ printk(KERN_CONT "\n");
}
void flush_thread(void)
#define MWAIT_ECX_EXTENDED_INFO 0x01
#define MWAIT_EDX_C1 0xf0
-static int __cpuinit mwait_usable(const struct cpuinfo_x86 *c)
+int mwait_usable(const struct cpuinfo_x86 *c)
{
u32 eax, ebx, ecx, edx;
* target processor state.
*/
startup_ipi_hook(phys_apicid, (unsigned long) start_secondary,
- (unsigned long)stack_start.sp);
+ stack_start);
/*
* Run STARTUP IPI loop.
#endif
early_gdt_descr.address = (unsigned long)get_cpu_gdt_table(cpu);
initial_code = (unsigned long)start_secondary;
- stack_start.sp = (void *) c_idle.idle->thread.sp;
+ stack_start = c_idle.idle->thread.sp;
/* start_ip had better be page-aligned! */
start_ip = setup_trampoline();
connect_bsp_APIC();
setup_local_APIC();
- end_local_APIC_setup();
+ bsp_end_local_APIC_setup();
return -1;
}
if (!skip_ioapic_setup && nr_ioapics)
enable_IO_APIC();
- end_local_APIC_setup();
+ bsp_end_local_APIC_setup();
map_cpu_to_logical_apicid();
unsigned int highest_subcstate = 0;
int i;
void *mwait_ptr;
+ struct cpuinfo_x86 *c = __this_cpu_ptr(&cpu_info);
- if (!cpu_has(__this_cpu_ptr(&cpu_info), X86_FEATURE_MWAIT))
+ if (!(cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)))
return;
if (!cpu_has(__this_cpu_ptr(&cpu_info), X86_FEATURE_CLFLSH))
return;
#ifdef CONFIG_X86_32
OUTPUT_ARCH(i386)
ENTRY(phys_startup_32)
+jiffies = jiffies_64;
#else
OUTPUT_ARCH(i386:x86-64)
ENTRY(phys_startup_64)
+jiffies_64 = jiffies;
#endif
#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
DATA_DATA
- /*
- * Workaround a binutils (2.20.51.0.12 to 2.21.51.0.3) bug.
- * This makes jiffies relocatable in such binutils
- */
-#ifdef CONFIG_X86_32
- jiffies = jiffies_64;
-#else
- jiffies_64 = jiffies;
-#endif
CONSTRUCTORS
/* rarely changed data like cpu maps */
kvm_load_ldt(svm->host.ldt);
#ifdef CONFIG_X86_64
loadsegment(fs, svm->host.fs);
- load_gs_index(svm->host.gs);
wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
+ load_gs_index(svm->host.gs);
#else
loadsegment(gs, svm->host.gs);
#endif
bool "Lguest guest support"
select PARAVIRT
depends on X86_32
+ select VIRTUALIZATION
select VIRTIO
select VIRTIO_RING
select VIRTIO_CONSOLE
for (i = FIRST_EXTERNAL_VECTOR; i < NR_VECTORS; i++) {
/* Some systems map "vectors" to interrupts weirdly. Not us! */
- __get_cpu_var(vector_irq)[i] = i - FIRST_EXTERNAL_VECTOR;
+ __this_cpu_write(vector_irq[i], i - FIRST_EXTERNAL_VECTOR);
if (i != SYSCALL_VECTOR)
set_intr_gate(i, interrupt[i - FIRST_EXTERNAL_VECTOR]);
}
#include <linux/topology.h>
#include <linux/module.h>
#include <linux/bootmem.h>
+#include <asm/numa.h>
+#include <asm/acpi.h>
+
+int __initdata numa_off;
+
+static __init int numa_setup(char *opt)
+{
+ if (!opt)
+ return -EINVAL;
+ if (!strncmp(opt, "off", 3))
+ numa_off = 1;
+#ifdef CONFIG_NUMA_EMU
+ if (!strncmp(opt, "fake=", 5))
+ numa_emu_cmdline(opt + 5);
+#endif
+#ifdef CONFIG_ACPI_NUMA
+ if (!strncmp(opt, "noacpi", 6))
+ acpi_numa = -1;
+#endif
+ return 0;
+}
+early_param("numa", numa_setup);
/*
* Which logical CPUs are on which nodes
[0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
};
-int numa_off __initdata;
static unsigned long __initdata nodemap_addr;
static unsigned long __initdata nodemap_size;
static struct bootnode physnodes[MAX_NUMNODES] __cpuinitdata;
static char *cmdline __initdata;
+void __init numa_emu_cmdline(char *str)
+{
+ cmdline = str;
+}
+
static int __init setup_physnodes(unsigned long start, unsigned long end,
int acpi, int amd)
{
return pages;
}
-static __init int numa_setup(char *opt)
-{
- if (!opt)
- return -EINVAL;
- if (!strncmp(opt, "off", 3))
- numa_off = 1;
-#ifdef CONFIG_NUMA_EMU
- if (!strncmp(opt, "fake=", 5))
- cmdline = opt + 5;
-#endif
-#ifdef CONFIG_ACPI_NUMA
- if (!strncmp(opt, "noacpi", 6))
- acpi_numa = -1;
-#endif
- return 0;
-}
-early_param("numa", numa_setup);
-
#ifdef CONFIG_NUMA
static __init int find_near_online_node(int node)
unsigned long pfn)
{
pgprot_t forbidden = __pgprot(0);
- pgprot_t required = __pgprot(0);
/*
* The BIOS area between 640k and 1Mb needs to be executable for
if (within(pfn, __pa((unsigned long)__start_rodata) >> PAGE_SHIFT,
__pa((unsigned long)__end_rodata) >> PAGE_SHIFT))
pgprot_val(forbidden) |= _PAGE_RW;
- /*
- * .data and .bss should always be writable.
- */
- if (within(address, (unsigned long)_sdata, (unsigned long)_edata) ||
- within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))
- pgprot_val(required) |= _PAGE_RW;
#if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)
/*
#endif
prot = __pgprot(pgprot_val(prot) & ~pgprot_val(forbidden));
- prot = __pgprot(pgprot_val(prot) | pgprot_val(required));
return prot;
}
static int __initdata num_memory_chunks; /* total number of memory chunks */
static u8 __initdata apicid_to_pxm[MAX_APICID];
-int numa_off __initdata;
int acpi_numa __initdata;
static __init void bad_srat(void)
per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
local_irq_disable();
- early_boot_irqs_off();
+ early_boot_irqs_disabled = true;
memblock_init();
#endif
};
-void __init xen_init_irq_ops()
+void __init xen_init_irq_ops(void)
{
pv_irq_ops = xen_irq_ops;
x86_init.irqs.intr_init = xen_init_IRQ;
p2m_top[topidx] = mid;
}
+ /*
+ * As long as the mfn_list has enough entries to completely
+ * fill a p2m page, pointing into the array is ok. But if
+ * not, the entries beyond the last pfn will be undefined.
+ */
+ if (unlikely(pfn + P2M_PER_PAGE > max_pfn)) {
+ unsigned long p2midx;
+
+ p2midx = max_pfn % P2M_PER_PAGE;
+ for ( ; p2midx < P2M_PER_PAGE; p2midx++)
+ mfn_list[pfn + p2midx] = INVALID_P2M_ENTRY;
+ }
p2m_top[topidx][mididx] = &mfn_list[pfn];
}
e820.nr_map = 0;
xen_extra_mem_start = mem_end;
for (i = 0; i < memmap.nr_entries; i++) {
- unsigned long long end = map[i].addr + map[i].size;
+ unsigned long long end;
+ /* Guard against non-page aligned E820 entries. */
+ if (map[i].type == E820_RAM)
+ map[i].size -= (map[i].size + map[i].addr) % PAGE_SIZE;
+
+ end = map[i].addr + map[i].size;
if (map[i].type == E820_RAM && end > mem_end) {
/* RAM off the end - may be partially included */
u64 delta = min(map[i].size, end - mem_end);
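
The new guard clips a RAM E820 entry so that addr + size lands on a page boundary before any further accounting. A small stand-alone sketch of that trim with hypothetical numbers:

#include <stdio.h>

#define DEMO_PAGE_SIZE 0x1000ULL	/* 4 KiB, as on x86 */

int main(void)
{
	unsigned long long addr = 0x100000ULL;		/* made-up E820 entry */
	unsigned long long size = 0x7fff801ULL;		/* not page aligned */

	size -= (size + addr) % DEMO_PAGE_SIZE;		/* same trim as the patch */
	printf("end is now 0x%llx (page aligned)\n", addr + size);
	return 0;
}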
boot_cpu_data.hlt_works_ok = 1;
#endif
pm_idle = default_idle;
+ boot_option_idle_override = IDLE_HALT;
fiddle_vdso();
}
# CONFIG_HOTPLUG is not set
CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
-# CONFIG_EMBEDDED is not set
+# CONFIG_EXPERT is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
-CONFIG_EMBEDDED=y
+CONFIG_EXPERT=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# Block layer core configuration
#
menuconfig BLOCK
- bool "Enable the block layer" if EMBEDDED
+ bool "Enable the block layer" if EXPERT
default y
help
Provide block layer support for the kernel.
* tree of blkg (instead of traversing through hash list all
* the time.
*/
- tg = tg_of_blkg(blkiocg_lookup_group(blkcg, key));
+
+ /*
+ * This is the common case when there are no blkio cgroups.
+ * Avoid lookup in this case
+ */
+ if (blkcg == &blkio_root_cgroup)
+ tg = &td->root_tg;
+ else
+ tg = tg_of_blkg(blkiocg_lookup_group(blkcg, key));
/* Fill in device details for root group */
if (tg && !tg->blkg.dev && bdi->dev && dev_name(bdi->dev)) {
}
static inline unsigned
-cfq_scaled_group_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+cfq_scaled_cfqq_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
unsigned slice = cfq_prio_to_slice(cfqd, cfqq);
if (cfqd->cfq_latency) {
static inline void
cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
- unsigned slice = cfq_scaled_group_slice(cfqd, cfqq);
+ unsigned slice = cfq_scaled_cfqq_slice(cfqd, cfqq);
cfqq->slice_start = jiffies;
cfqq->slice_end = jiffies + slice;
*/
if (timed_out) {
if (cfq_cfqq_slice_new(cfqq))
- cfqq->slice_resid = cfq_scaled_group_slice(cfqd, cfqq);
+ cfqq->slice_resid = cfq_scaled_cfqq_slice(cfqd, cfqq);
else
cfqq->slice_resid = cfqq->slice_end - jiffies;
cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
{
struct cfq_io_context *cic = cfqd->active_cic;
+ /* If the queue already has requests, don't wait */
+ if (!RB_EMPTY_ROOT(&cfqq->sort_list))
+ return false;
+
/* If there are other queues in the group, don't wait */
if (cfqq->cfqg->nr_cfqq > 1)
return false;
# regulators early, since some subsystems rely on them to initialize
obj-$(CONFIG_REGULATOR) += regulator/
-# char/ comes before serial/ etc so that the VT console is the boot-time
+# tty/ comes before char/ so that the VT console is the boot-time
# default.
obj-y += tty/
obj-y += char/
obj-$(CONFIG_FB_I810) += video/i810/
obj-$(CONFIG_FB_INTEL) += video/intelfb/
-obj-y += serial/
obj-$(CONFIG_PARPORT) += parport/
obj-y += base/ block/ misc/ mfd/ nfc/
obj-$(CONFIG_NUBUS) += nubus/
the module will be called pci_slot.
config X86_PM_TIMER
- bool "Power Management Timer Support" if EMBEDDED
+ bool "Power Management Timer Support" if EXPERT
depends on X86
default y
help
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define AOPOBJ_OBJECT_INITIALIZED 0x08 /* Region is initialized, _REG was run */
#define AOPOBJ_SETUP_COMPLETE 0x10 /* Region setup is complete */
#define AOPOBJ_INVALID 0x20 /* Host OS won't allow a Region address */
-#define AOPOBJ_MODULE_LEVEL 0x40 /* Method is actually module-level code */
-#define AOPOBJ_MODIFIED_NAMESPACE 0x80 /* Method modified the namespace */
/******************************************************************************
*
};
struct acpi_object_method {
- ACPI_OBJECT_COMMON_HEADER u8 method_flags;
+ ACPI_OBJECT_COMMON_HEADER u8 info_flags;
u8 param_count;
u8 sync_level;
union acpi_operand_object *mutex;
union {
ACPI_INTERNAL_METHOD implementation;
union acpi_operand_object *handler;
- } extra;
+ } dispatch;
u32 aml_length;
u8 thread_count;
acpi_owner_id owner_id;
};
+/* Flags for info_flags field above */
+
+#define ACPI_METHOD_MODULE_LEVEL 0x01 /* Method is actually module-level code */
+#define ACPI_METHOD_INTERNAL_ONLY 0x02 /* Method is implemented internally (_OSI) */
+#define ACPI_METHOD_SERIALIZED 0x04 /* Method is serialized */
+#define ACPI_METHOD_SERIALIZED_PENDING 0x08 /* Method is to be marked serialized */
+#define ACPI_METHOD_MODIFIED_NAMESPACE 0x10 /* Method modified the namespace */
+
/******************************************************************************
*
* Objects that can be notified. All share a common notify_info area.
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
AML_FIELD_ATTRIB_SMB_BLOCK_CALL = 0x0D
} AML_ACCESS_ATTRIBUTE;
-/* Bit fields in method_flags byte */
+/* Bit fields in the AML method_flags byte */
#define AML_METHOD_ARG_COUNT 0x07
#define AML_METHOD_SERIALIZED 0x08
#define AML_METHOD_SYNC_LEVEL 0xF0
-/* METHOD_FLAGS_ARG_COUNT is not used internally, define additional flags */
-
-#define AML_METHOD_INTERNAL_ONLY 0x01
-#define AML_METHOD_RESERVED1 0x02
-#define AML_METHOD_RESERVED2 0x04
-
#endif /* __AMLCODE_H__ */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#include <acpi/acpi.h>
#include "accommon.h"
-#include "amlcode.h"
#include "acdispat.h"
#include "acinterp.h"
#include "acnamesp.h"
/*
* If this method is serialized, we need to acquire the method mutex.
*/
- if (obj_desc->method.method_flags & AML_METHOD_SERIALIZED) {
+ if (obj_desc->method.info_flags & ACPI_METHOD_SERIALIZED) {
/*
* Create a mutex for the method if it is defined to be Serialized
* and a mutex has not already been created. We defer the mutex creation
/* Invoke an internal method if necessary */
- if (obj_desc->method.method_flags & AML_METHOD_INTERNAL_ONLY) {
- status = obj_desc->method.extra.implementation(next_walk_state);
+ if (obj_desc->method.info_flags & ACPI_METHOD_INTERNAL_ONLY) {
+ status =
+ obj_desc->method.dispatch.implementation(next_walk_state);
if (status == AE_OK) {
status = AE_CTRL_TERMINATE;
}
/*
* Delete any namespace objects created anywhere within the
- * namespace by the execution of this method. Unless this method
- * is a module-level executable code method, in which case we
- * want make the objects permanent.
+ * namespace by the execution of this method. Unless:
+ * 1) This method is a module-level executable code method, in which
+ * case we want to make the objects permanent.
+ * 2) There are other threads executing the method, in which case we
+ * will wait until the last thread has completed.
*/
- if (!(method_desc->method.flags & AOPOBJ_MODULE_LEVEL)) {
+ if (!(method_desc->method.info_flags & ACPI_METHOD_MODULE_LEVEL)
+ && (method_desc->method.thread_count == 1)) {
/* Delete any direct children of (created by) this method */
/*
* Delete any objects that were created by this method
* elsewhere in the namespace (if any were created).
+ * Use of the ACPI_METHOD_MODIFIED_NAMESPACE optimizes the
+ * deletion such that we don't have to perform an entire
+ * namespace walk for every control method execution.
*/
if (method_desc->method.
- flags & AOPOBJ_MODIFIED_NAMESPACE) {
+ info_flags & ACPI_METHOD_MODIFIED_NAMESPACE) {
acpi_ns_delete_namespace_by_owner(method_desc->
method.
owner_id);
+ method_desc->method.info_flags &=
+ ~ACPI_METHOD_MODIFIED_NAMESPACE;
}
}
}
* Serialized if it appears that the method is incorrectly written and
* does not support multiple thread execution. The best example of this
* is if such a method creates namespace objects and blocks. A second
- * thread will fail with an AE_ALREADY_EXISTS exception
+ * thread will fail with an AE_ALREADY_EXISTS exception.
*
* This code is here because we must wait until the last thread exits
- * before creating the synchronization semaphore.
+ * before marking the method as serialized.
*/
- if ((method_desc->method.method_flags & AML_METHOD_SERIALIZED)
- && (!method_desc->method.mutex)) {
- (void)acpi_ds_create_method_mutex(method_desc);
+ if (method_desc->method.
+ info_flags & ACPI_METHOD_SERIALIZED_PENDING) {
+ if (walk_state) {
+ ACPI_INFO((AE_INFO,
+ "Marking method %4.4s as Serialized because of AE_ALREADY_EXISTS error",
+ walk_state->method_node->name.
+ ascii));
+ }
+
+ /*
+ * Method tried to create an object twice and was marked as
+ * "pending serialized". The probable cause is that the method
+ * cannot handle reentrancy.
+ *
+ * The method was created as not_serialized, but it tried to create
+ * a named object and then blocked, causing the second thread
+ * entrance to begin and then fail. Work around this problem by
+ * marking the method permanently as Serialized when the last
+ * thread exits here.
+ */
+ method_desc->method.info_flags &=
+ ~ACPI_METHOD_SERIALIZED_PENDING;
+ method_desc->method.info_flags |=
+ ACPI_METHOD_SERIALIZED;
+ method_desc->method.sync_level = 0;
}
/* No more threads, we can free the owner_id */
- if (!(method_desc->method.flags & AOPOBJ_MODULE_LEVEL)) {
+ if (!
+ (method_desc->method.
+ info_flags & ACPI_METHOD_MODULE_LEVEL)) {
acpi_ut_release_owner_id(&method_desc->method.owner_id);
}
}
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
if (ACPI_FAILURE(status)) {
+ ACPI_FREE(local_gpe_event_info);
return_VOID;
}
if (!acpi_ev_valid_gpe_event(gpe_event_info)) {
status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+ ACPI_FREE(local_gpe_event_info);
return_VOID;
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* See acpi_ns_exec_module_code
*/
if (obj_desc->method.
- flags & AOPOBJ_MODULE_LEVEL) {
+ info_flags & ACPI_METHOD_MODULE_LEVEL) {
handler_obj =
- obj_desc->method.extra.handler;
+ obj_desc->method.dispatch.handler;
}
break;
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
- /* Validate wake_device is of type Device */
-
- device_node = ACPI_CAST_PTR(struct acpi_namespace_node, wake_device);
- if (device_node->type != ACPI_TYPE_DEVICE) {
- return_ACPI_STATUS(AE_BAD_PARAMETER);
- }
-
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
/* Ensure that we have a valid GPE number */
gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
- if (gpe_event_info) {
- /*
- * If there is no method or handler for this GPE, then the
- * wake_device will be notified whenever this GPE fires (aka
- * "implicit notify") Note: The GPE is assumed to be
- * level-triggered (for windows compatibility).
- */
- if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
- ACPI_GPE_DISPATCH_NONE) {
- gpe_event_info->flags =
- (ACPI_GPE_DISPATCH_NOTIFY |
- ACPI_GPE_LEVEL_TRIGGERED);
- gpe_event_info->dispatch.device_node = device_node;
- }
+ if (!gpe_event_info) {
+ goto unlock_and_exit;
+ }
+
+ /*
+ * If there is no method or handler for this GPE, then the
+ * wake_device will be notified whenever this GPE fires (aka
+ * "implicit notify") Note: The GPE is assumed to be
+ * level-triggered (for windows compatibility).
+ */
+ if (((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+ ACPI_GPE_DISPATCH_NONE) && (wake_device != ACPI_ROOT_OBJECT)) {
- gpe_event_info->flags |= ACPI_GPE_CAN_WAKE;
- status = AE_OK;
+ /* Validate wake_device is of type Device */
+
+ device_node = ACPI_CAST_PTR(struct acpi_namespace_node,
+ wake_device);
+ if (device_node->type != ACPI_TYPE_DEVICE) {
+ goto unlock_and_exit;
+ }
+ gpe_event_info->flags = (ACPI_GPE_DISPATCH_NOTIFY |
+ ACPI_GPE_LEVEL_TRIGGERED);
+ gpe_event_info->dispatch.device_node = device_node;
}
+ gpe_event_info->flags |= ACPI_GPE_CAN_WAKE;
+ status = AE_OK;
+
+ unlock_and_exit:
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return_ACPI_STATUS(status);
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
obj_desc->method.aml_length = aml_length;
/*
- * Disassemble the method flags. Split off the Arg Count
- * for efficiency
+ * Disassemble the method flags. Split off the arg_count, Serialized
+ * flag, and sync_level for efficiency.
*/
method_flags = (u8) operand[1]->integer.value;
- obj_desc->method.method_flags =
- (u8) (method_flags & ~AML_METHOD_ARG_COUNT);
obj_desc->method.param_count =
(u8) (method_flags & AML_METHOD_ARG_COUNT);
* created for this method when it is parsed.
*/
if (method_flags & AML_METHOD_SERIALIZED) {
+ obj_desc->method.info_flags = ACPI_METHOD_SERIALIZED;
+
/*
* ACPI 1.0: sync_level = 0
* ACPI 2.0: sync_level = sync_level in method declaration
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
static struct acpi_exdump_info acpi_ex_dump_method[9] = {
{ACPI_EXD_INIT, ACPI_EXD_TABLE_SIZE(acpi_ex_dump_method), NULL},
- {ACPI_EXD_UINT8, ACPI_EXD_OFFSET(method.method_flags), "Method Flags"},
+ {ACPI_EXD_UINT8, ACPI_EXD_OFFSET(method.info_flags), "Info Flags"},
{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(method.param_count),
"Parameter Count"},
{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(method.sync_level), "Sync Level"},
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#else
/* Mark this as a very SPECIAL method */
- obj_desc->method.method_flags =
- AML_METHOD_INTERNAL_ONLY;
- obj_desc->method.extra.implementation =
+ obj_desc->method.info_flags =
+ ACPI_METHOD_INTERNAL_ONLY;
+ obj_desc->method.dispatch.implementation =
acpi_ut_osi_implementation;
#endif
break;
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modified the namespace. This is used for cleanup when the
* method exits.
*/
- walk_state->method_desc->method.flags |=
- AOPOBJ_MODIFIED_NAMESPACE;
+ walk_state->method_desc->method.info_flags |=
+ ACPI_METHOD_MODIFIED_NAMESPACE;
}
}
{
struct acpi_namespace_node *child_node = NULL;
u32 level = 1;
+ acpi_status status;
ACPI_FUNCTION_TRACE(ns_delete_namespace_subtree);
return_VOID;
}
+ /* Lock namespace for possible update */
+
+ status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+ if (ACPI_FAILURE(status)) {
+ return_VOID;
+ }
+
/*
* Traverse the tree of objects until we bubble back up
* to where we started.
}
}
+ (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_VOID;
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_owner_id owner_id, acpi_handle start_handle)
{
struct acpi_walk_info info;
+ acpi_status status;
ACPI_FUNCTION_ENTRY();
+ /*
+ * Just lock the entire namespace for the duration of the dump.
+ * We don't want any changes to the namespace during this time,
+ * especially the temporary nodes since we are going to display
+ * them also.
+ */
+ status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+ if (ACPI_FAILURE(status)) {
+ acpi_os_printf("Could not acquire namespace mutex\n");
+ return;
+ }
+
info.debug_level = ACPI_LV_TABLES;
info.owner_id = owner_id;
info.display_type = display_type;
ACPI_NS_WALK_TEMP_NODES,
acpi_ns_dump_one_object, NULL,
(void *)&info, NULL);
+
+ (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
}
#endif /* ACPI_FUTURE_USAGE */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* acpi_gbl_root_node->Object is NULL at PASS1.
*/
if ((type == ACPI_TYPE_DEVICE) && parent_node->object) {
- method_obj->method.extra.handler =
+ method_obj->method.dispatch.handler =
parent_node->object->device.handler;
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
method_obj->method.param_count = (u8)
(method_flags & AML_METHOD_ARG_COUNT);
- method_obj->method.method_flags = (u8)
- (method_flags & ~AML_METHOD_ARG_COUNT);
-
if (method_flags & AML_METHOD_SERIALIZED) {
+ method_obj->method.info_flags = ACPI_METHOD_SERIALIZED;
+
method_obj->method.sync_level = (u8)
((method_flags & AML_METHOD_SYNC_LEVEL) >> 4);
}
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
method_obj->method.aml_start = aml_start;
method_obj->method.aml_length = aml_length;
method_obj->method.owner_id = owner_id;
- method_obj->method.flags |= AOPOBJ_MODULE_LEVEL;
+ method_obj->method.info_flags |= ACPI_METHOD_MODULE_LEVEL;
/*
* Save the parent node in next_object. This is cheating, but we
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#include "acparser.h"
#include "acdispat.h"
#include "amlcode.h"
-#include "acnamesp.h"
#include "acinterp.h"
#define _COMPONENT ACPI_PARSER
/* Check for possible multi-thread reentrancy problem */
if ((status == AE_ALREADY_EXISTS) &&
- (!walk_state->method_desc->method.mutex)) {
- ACPI_INFO((AE_INFO,
- "Marking method %4.4s as Serialized because of AE_ALREADY_EXISTS error",
- walk_state->method_node->name.
- ascii));
-
+ (!(walk_state->method_desc->method.
+ info_flags & ACPI_METHOD_SERIALIZED))) {
/*
- * Method tried to create an object twice. The probable cause is
- * that the method cannot handle reentrancy.
- *
- * The method is marked not_serialized, but it tried to create
- * a named object, causing the second thread entrance to fail.
- * Workaround this problem by marking the method permanently
- * as Serialized.
+ * Method is not serialized and tried to create an object
+ * twice. The probable cause is that the method cannot
+ * handle reentrancy. Mark as "pending serialized" now, and
+ * then mark "serialized" when the last thread exits.
*/
- walk_state->method_desc->method.method_flags |=
- AML_METHOD_SERIALIZED;
- walk_state->method_desc->method.sync_level = 0;
+ walk_state->method_desc->method.info_flags |=
+ ACPI_METHOD_SERIALIZED_PENDING;
}
}
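
The hunk above only marks the method as "pending serialized"; per its comment, the real ACPI_METHOD_SERIALIZED flag is applied later, when the last thread leaves the method. A minimal stand-alone sketch of that promotion step follows; the struct, helper name and numeric flag values are invented for illustration, and only the flag names mirror the patch.

#include <stdio.h>

/* Illustration only: types and flag values are hypothetical. */
#define ACPI_METHOD_SERIALIZED          0x04
#define ACPI_METHOD_SERIALIZED_PENDING  0x08

struct method_info {
	unsigned char info_flags;
	unsigned char thread_count;
};

/* Called when one thread leaves the method; once no thread remains,
 * "pending serialized" is promoted to "serialized" so the next caller
 * must acquire the method mutex before entering. */
static void method_thread_exit(struct method_info *m)
{
	if (m->thread_count)
		m->thread_count--;

	if (m->thread_count == 0 &&
	    (m->info_flags & ACPI_METHOD_SERIALIZED_PENDING)) {
		m->info_flags &= ~ACPI_METHOD_SERIALIZED_PENDING;
		m->info_flags |= ACPI_METHOD_SERIALIZED;
	}
}

int main(void)
{
	struct method_info m = { ACPI_METHOD_SERIALIZED_PENDING, 1 };

	method_thread_exit(&m);
	printf("serialized=%d pending=%d\n",
	       !!(m.info_flags & ACPI_METHOD_SERIALIZED),
	       !!(m.info_flags & ACPI_METHOD_SERIALIZED_PENDING));
	return 0;
}
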
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#include "acdispat.h"
#include "acinterp.h"
#include "actables.h"
-#include "amlcode.h"
#define _COMPONENT ACPI_PARSER
ACPI_MODULE_NAME("psxface")
goto cleanup;
}
- if (info->obj_desc->method.flags & AOPOBJ_MODULE_LEVEL) {
+ if (info->obj_desc->method.info_flags & ACPI_METHOD_MODULE_LEVEL) {
walk_state->parse_flags |= ACPI_PARSE_MODULE_LEVEL;
}
/* Invoke an internal method if necessary */
- if (info->obj_desc->method.method_flags & AML_METHOD_INTERNAL_ONLY) {
+ if (info->obj_desc->method.info_flags & ACPI_METHOD_INTERNAL_ONLY) {
status =
- info->obj_desc->method.extra.implementation(walk_state);
+ info->obj_desc->method.dispatch.implementation(walk_state);
info->return_object = walk_state->return_desc;
/* Cleanup states */
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
if (!device)
return -EINVAL;
battery = acpi_driver_data(device);
- acpi_battery_refresh(battery);
battery->update_time = 0;
acpi_battery_update(battery);
return 0;
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/acpi.h>
+#include <linux/acpi_io.h>
#include <acpi/acpiosxf.h>
/*
free_page((unsigned long)entry->data);
entry->data = NULL;
if (entry->kaddr) {
- acpi_os_unmap_memory(entry->kaddr, entry->size);
+ iounmap(entry->kaddr);
entry->kaddr = NULL;
}
}
list_for_each_entry(entry, &nvs_list, node)
if (entry->data) {
- entry->kaddr = acpi_os_map_memory(entry->phys_start,
- entry->size);
+ entry->kaddr = acpi_os_ioremap(entry->phys_start,
+ entry->size);
if (!entry->kaddr) {
suspend_nvs_free();
return -ENOMEM;
#include <linux/workqueue.h>
#include <linux/nmi.h>
#include <linux/acpi.h>
+#include <linux/acpi_io.h>
#include <linux/efi.h>
#include <linux/ioport.h>
#include <linux/list.h>
acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
{
struct acpi_ioremap *map, *tmp_map;
- unsigned long flags, pg_sz;
+ unsigned long flags;
void __iomem *virt;
- phys_addr_t pg_off;
+ acpi_physical_address pg_off;
+ acpi_size pg_sz;
if (phys > ULONG_MAX) {
printk(KERN_ERR PREFIX "Cannot map memory that high\n");
pg_off = round_down(phys, PAGE_SIZE);
pg_sz = round_up(phys + size, PAGE_SIZE) - pg_off;
- virt = ioremap_cache(pg_off, pg_sz);
+ virt = acpi_os_ioremap(pg_off, pg_sz);
if (!virt) {
kfree(map);
return NULL;
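
For concreteness, the rounding above maps the whole page range covering [phys, phys + size): with 4 KiB pages, phys = 0x12345 and size = 0x10 give pg_off = 0x12000 and pg_sz = 0x1000. A stand-alone sketch of the same arithmetic, using simplified power-of-two round_down/round_up macros and hypothetical values:

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Simplified variants of the kernel macros; valid for power-of-two y. */
#define round_down(x, y) ((x) & ~((y) - 1))
#define round_up(x, y)   (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	unsigned long phys = 0x12345, size = 0x10;	/* hypothetical example */
	unsigned long pg_off = round_down(phys, PAGE_SIZE);
	unsigned long pg_sz  = round_up(phys + size, PAGE_SIZE) - pg_off;

	/* Prints pg_off=0x12000 pg_sz=0x1000 */
	printf("pg_off=0x%lx pg_sz=0x%lx\n", pg_off, pg_sz);
	return 0;
}
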
acpi_status
acpi_os_read_memory(acpi_physical_address phys_addr, u32 * value, u32 width)
{
- u32 dummy;
void __iomem *virt_addr;
- int size = width / 8, unmap = 0;
+ unsigned int size = width / 8;
+ bool unmap = false;
+ u32 dummy;
rcu_read_lock();
virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
- rcu_read_unlock();
if (!virt_addr) {
- virt_addr = ioremap_cache(phys_addr, size);
- unmap = 1;
+ rcu_read_unlock();
+ virt_addr = acpi_os_ioremap(phys_addr, size);
+ if (!virt_addr)
+ return AE_BAD_ADDRESS;
+ unmap = true;
}
+
if (!value)
value = &dummy;
if (unmap)
iounmap(virt_addr);
+ else
+ rcu_read_unlock();
return AE_OK;
}
acpi_os_write_memory(acpi_physical_address phys_addr, u32 value, u32 width)
{
void __iomem *virt_addr;
- int size = width / 8, unmap = 0;
+ unsigned int size = width / 8;
+ bool unmap = false;
rcu_read_lock();
virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
- rcu_read_unlock();
if (!virt_addr) {
- virt_addr = ioremap_cache(phys_addr, size);
- unmap = 1;
+ rcu_read_unlock();
+ virt_addr = acpi_os_ioremap(phys_addr, size);
+ if (!virt_addr)
+ return AE_BAD_ADDRESS;
+ unmap = true;
}
switch (width) {
if (unmap)
iounmap(virt_addr);
+ else
+ rcu_read_unlock();
return AE_OK;
}
u32 acpi_state = acpi_target_sleep_state;
acpi_ec_unblock_transactions();
+ suspend_nvs_free();
if (acpi_state == ACPI_STATE_S0)
return;
*/
static void acpi_pm_end(void)
{
- suspend_nvs_free();
/*
* This is necessary in case acpi_pm_finish() is not called during a
* failing transition to a sleep state.
if (!device)
return 0;
+ /* Is this device able to support video switching ? */
+ if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOD", &h_dummy)) ||
+ ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOS", &h_dummy)))
+ video_caps |= ACPI_VIDEO_OUTPUT_SWITCHING;
+
/* Is this device able to retrieve a video ROM ? */
if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_ROM", &h_dummy)))
video_caps |= ACPI_VIDEO_ROM_AVAILABLE;
struct acpi_device *dev = container_of(node,
struct acpi_device,
wakeup_list);
- if (device_can_wakeup(&dev->dev))
+ if (device_can_wakeup(&dev->dev)) {
+ /* Button GPEs are supposed to be always enabled. */
+ acpi_enable_gpe(dev->wakeup.gpe_device,
+ dev->wakeup.gpe_number);
device_set_wakeup_enable(&dev->dev, true);
+ }
}
mutex_unlock(&acpi_device_lock);
return 0;
config PATA_PLATFORM
tristate "Generic platform device PATA support"
- depends on EMBEDDED || PPC || HAVE_PATA_PLATFORM
+ depends on EXPERT || PPC || HAVE_PATA_PLATFORM
help
This option enables support for generic directly connected ATA
devices commonly found on embedded systems.
{ PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PBG AHCI */
{ PCI_VDEVICE(INTEL, 0x1d04), board_ahci }, /* PBG RAID */
{ PCI_VDEVICE(INTEL, 0x1d06), board_ahci }, /* PBG RAID */
+ { PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* DH89xxCC AHCI */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
{ PCI_VDEVICE(MARVELL, 0x6145), board_ahci_mv }, /* 6145 */
{ PCI_VDEVICE(MARVELL, 0x6121), board_ahci_mv }, /* 6121 */
{ PCI_DEVICE(0x1b4b, 0x9123),
+ .class = PCI_CLASS_STORAGE_SATA_AHCI,
+ .class_mask = 0xffffff,
.driver_data = board_ahci_yes_fbs }, /* 88se9128 */
/* Promise */
* device and controller are SATA.
*/
{ "PIONEER DVD-RW DVRTD08", "1.00", ATA_HORKAGE_NOSETXFER },
+ { "PIONEER DVD-RW DVR-212D", "1.28", ATA_HORKAGE_NOSETXFER },
/* End Marker */
{ }
struct request_queue *q = sdev->request_queue;
void *buf;
- /* set the min alignment and padding */
- blk_queue_update_dma_alignment(sdev->request_queue,
- ATA_DMA_PAD_SZ - 1);
+ sdev->sector_size = ATA_SECT_SIZE;
+
+ /* set DMA padding */
blk_queue_update_dma_pad(sdev->request_queue,
ATA_DMA_PAD_SZ - 1);
blk_queue_dma_drain(q, atapi_drain_needed, buf, ATAPI_MAX_DRAIN);
} else {
- /* ATA devices must be sector aligned */
sdev->sector_size = ata_id_logical_sector_size(dev->id);
- blk_queue_update_dma_alignment(sdev->request_queue,
- sdev->sector_size - 1);
sdev->manage_start_stop = 1;
}
+ /*
+ * ata_pio_sectors() expects buffer for each sector to not cross
+ * page boundary. Enforce it by requiring buffers to be sector
+ * aligned, which works iff sector_size is not larger than
+ * PAGE_SIZE. ATAPI devices also need the alignment as
+ * IDENTIFY_PACKET is executed as ATA_PROT_PIO.
+ */
+ if (sdev->sector_size > PAGE_SIZE)
+ ata_dev_printk(dev, KERN_WARNING,
+ "sector_size=%u > PAGE_SIZE, PIO may malfunction\n",
+ sdev->sector_size);
+
+ blk_queue_update_dma_alignment(sdev->request_queue,
+ sdev->sector_size - 1);
+
if (dev->flags & ATA_DFLAG_AN)
set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
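
To make the comment above concrete: with the DMA alignment set to sector_size - 1, every buffer starts on a sector boundary, and a sector-sized buffer then stays within a single page exactly when sector_size is not larger than PAGE_SIZE (for the usual power-of-two sizes). A stand-alone sketch of that check, with hypothetical sizes:

#include <stdio.h>

/* Check whether a sector-aligned buffer of sector_size bytes can ever
 * cross a page boundary.  Sizes are assumed to be powers of two, as is
 * normal for ATA sector sizes and CPU page sizes. */
static int sector_buffer_may_cross_page(unsigned long sector_size,
					unsigned long page_size)
{
	unsigned long offset;

	for (offset = 0; offset < page_size; offset += sector_size) {
		unsigned long first = offset / page_size;
		unsigned long last = (offset + sector_size - 1) / page_size;

		if (first != last)
			return 1;
	}
	return 0;
}

int main(void)
{
	/* 512-byte sectors on 4 KiB pages: always within one page. */
	printf("512/4096:  %d\n", sector_buffer_may_cross_page(512, 4096));
	/* 8 KiB sectors on 4 KiB pages: necessarily cross a page. */
	printf("8192/4096: %d\n", sector_buffer_may_cross_page(8192, 4096));
	return 0;
}
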
#include <linux/libata.h>
#define DRV_NAME "pata_hpt366"
-#define DRV_VERSION "0.6.9"
+#define DRV_VERSION "0.6.10"
struct hpt_clock {
u8 xfer_mode;
while (list[i] != NULL) {
if (!strcmp(list[i], model_num)) {
- printk(KERN_WARNING DRV_NAME ": %s is not supported for %s.\n",
- modestr, list[i]);
+ pr_warning(DRV_NAME ": %s is not supported for %s.\n",
+ modestr, list[i]);
return 1;
}
i++;
#include <linux/libata.h>
#define DRV_NAME "pata_hpt37x"
-#define DRV_VERSION "0.6.18"
+#define DRV_VERSION "0.6.22"
struct hpt_clock {
u8 xfer_speed;
while (list[i] != NULL) {
if (!strcmp(list[i], model_num)) {
- printk(KERN_WARNING DRV_NAME ": %s is not supported for %s.\n",
- modestr, list[i]);
+ pr_warning(DRV_NAME ": %s is not supported for %s.\n",
+ modestr, list[i]);
return 1;
}
i++;
static struct ata_port_operations hpt374_fn1_port_ops = {
.inherits = &hpt372_port_ops,
.cable_detect = hpt374_fn1_cable_detect,
- .prereset = hpt37x_pre_reset,
};
/**
.udma_mask = ATA_UDMA6,
.port_ops = &hpt302_port_ops
};
- /* HPT374 - UDMA100, function 1 uses different prereset method */
+ /* HPT374 - UDMA100, function 1 uses different cable_detect method */
static const struct ata_port_info info_hpt374_fn0 = {
.flags = ATA_FLAG_SLAVE_POSS,
.pio_mask = ATA_PIO4,
if (rc)
return rc;
- if (dev->device == PCI_DEVICE_ID_TTI_HPT366) {
+ switch (dev->device) {
+ case PCI_DEVICE_ID_TTI_HPT366:
/* May be a later chip in disguise. Check */
/* Older chips are in the HPT366 driver. Ignore them */
if (rev < 3)
chip_table = &hpt372;
break;
default:
- printk(KERN_ERR "pata_hpt37x: Unknown HPT366 subtype, "
+ pr_err(DRV_NAME ": Unknown HPT366 subtype, "
"please report (%d).\n", rev);
return -ENODEV;
}
- } else {
- switch (dev->device) {
- case PCI_DEVICE_ID_TTI_HPT372:
- /* 372N if rev >= 2 */
- if (rev >= 2)
- return -ENODEV;
- ppi[0] = &info_hpt372;
- chip_table = &hpt372a;
- break;
- case PCI_DEVICE_ID_TTI_HPT302:
- /* 302N if rev > 1 */
- if (rev > 1)
- return -ENODEV;
- ppi[0] = &info_hpt302;
- /* Check this */
- chip_table = &hpt302;
- break;
- case PCI_DEVICE_ID_TTI_HPT371:
- if (rev > 1)
- return -ENODEV;
- ppi[0] = &info_hpt302;
- chip_table = &hpt371;
- /*
- * Single channel device, master is not present
- * but the BIOS (or us for non x86) must mark it
- * absent
- */
- pci_read_config_byte(dev, 0x50, &mcr1);
- mcr1 &= ~0x04;
- pci_write_config_byte(dev, 0x50, mcr1);
- break;
- case PCI_DEVICE_ID_TTI_HPT374:
- chip_table = &hpt374;
- if (!(PCI_FUNC(dev->devfn) & 1))
- *ppi = &info_hpt374_fn0;
- else
- *ppi = &info_hpt374_fn1;
- break;
- default:
- printk(KERN_ERR
- "pata_hpt37x: PCI table is bogus, please report (%d).\n",
- dev->device);
- return -ENODEV;
- }
+ break;
+ case PCI_DEVICE_ID_TTI_HPT372:
+ /* 372N if rev >= 2 */
+ if (rev >= 2)
+ return -ENODEV;
+ ppi[0] = &info_hpt372;
+ chip_table = &hpt372a;
+ break;
+ case PCI_DEVICE_ID_TTI_HPT302:
+ /* 302N if rev > 1 */
+ if (rev > 1)
+ return -ENODEV;
+ ppi[0] = &info_hpt302;
+ /* Check this */
+ chip_table = &hpt302;
+ break;
+ case PCI_DEVICE_ID_TTI_HPT371:
+ if (rev > 1)
+ return -ENODEV;
+ ppi[0] = &info_hpt302;
+ chip_table = &hpt371;
+ /*
+ * Single channel device, master is not present but the BIOS
+ * (or us for non x86) must mark it absent
+ */
+ pci_read_config_byte(dev, 0x50, &mcr1);
+ mcr1 &= ~0x04;
+ pci_write_config_byte(dev, 0x50, mcr1);
+ break;
+ case PCI_DEVICE_ID_TTI_HPT374:
+ chip_table = &hpt374;
+ if (!(PCI_FUNC(dev->devfn) & 1))
+ *ppi = &info_hpt374_fn0;
+ else
+ *ppi = &info_hpt374_fn1;
+ break;
+ default:
+ pr_err(DRV_NAME ": PCI table is bogus, please report (%d).\n",
+ dev->device);
+ return -ENODEV;
}
/* Ok so this is a chip we support */
u8 sr;
u32 total = 0;
- printk(KERN_WARNING
- "pata_hpt37x: BIOS has not set timing clocks.\n");
+ pr_warning(DRV_NAME ": BIOS has not set timing clocks.\n");
/* This is the process the HPT371 BIOS is reported to use */
for (i = 0; i < 128; i++) {
(f_high << 16) | f_low | 0x100);
}
if (adjust == 8) {
- printk(KERN_ERR "pata_hpt37x: DPLL did not stabilize!\n");
+ pr_err(DRV_NAME ": DPLL did not stabilize!\n");
return -ENODEV;
}
if (dpll == 3)
else
private_data = (void *)hpt37x_timings_50;
- printk(KERN_INFO "pata_hpt37x: bus clock %dMHz, using %dMHz DPLL.\n",
- MHz[clock_slot], MHz[dpll]);
+ pr_info(DRV_NAME ": bus clock %dMHz, using %dMHz DPLL.\n",
+ MHz[clock_slot], MHz[dpll]);
} else {
private_data = (void *)chip_table->clocks[clock_slot];
/*
ppi[0] = &info_hpt370_33;
if (clock_slot < 2 && ppi[0] == &info_hpt370a)
ppi[0] = &info_hpt370a_33;
- printk(KERN_INFO "pata_hpt37x: %s using %dMHz bus clock.\n",
- chip_table->name, MHz[clock_slot]);
+
+ pr_info(DRV_NAME ": %s using %dMHz bus clock.\n",
+ chip_table->name, MHz[clock_slot]);
}
/* Now kick off ATA set up */
#include <linux/libata.h>
#define DRV_NAME "pata_hpt3x2n"
-#define DRV_VERSION "0.3.13"
+#define DRV_VERSION "0.3.14"
enum {
HPT_PCI_FAST = (1 << 31),
u16 sr;
u32 total = 0;
- printk(KERN_WARNING "pata_hpt3x2n: BIOS clock data not set.\n");
+ pr_warning(DRV_NAME ": BIOS clock data not set.\n");
/* This is the process the HPT371 BIOS is reported to use */
for (i = 0; i < 128; i++) {
ppi[0] = &info_hpt372n;
break;
default:
- printk(KERN_ERR
- "pata_hpt3x2n: PCI table is bogus please report (%d).\n",
+ pr_err(DRV_NAME ": PCI table is bogus, please report (%d).\n",
dev->device);
return -ENODEV;
}
pci_write_config_dword(dev, 0x5C, (f_high << 16) | f_low);
}
if (adjust == 8) {
- printk(KERN_ERR "pata_hpt3x2n: DPLL did not stabilize!\n");
+ pr_err(DRV_NAME ": DPLL did not stabilize!\n");
return -ENODEV;
}
- printk(KERN_INFO "pata_hpt37x: bus clock %dMHz, using 66MHz DPLL.\n",
- pci_mhz);
+ pr_info(DRV_NAME ": bus clock %dMHz, using 66MHz DPLL.\n", pci_mhz);
/*
* Set our private data up. We only need a few flags
};
static struct ata_port_operations mpc52xx_ata_port_ops = {
- .inherits = &ata_sff_port_ops,
+ .inherits = &ata_bmdma_port_ops,
.sff_dev_select = mpc52xx_ata_dev_select,
.set_piomode = mpc52xx_ata_set_piomode,
.set_dmamode = mpc52xx_ata_set_dmamode,
spin_unlock_irqrestore(&idt77105_priv_lock, flags);
if (arg == NULL)
return 0;
- return copy_to_user(arg, &PRIV(dev)->stats,
+ return copy_to_user(arg, &stats,
sizeof(struct idt77105_stats)) ? -EFAULT : 0;
}
}
skb = alloc_skb(sizeof(*header), GFP_ATOMIC);
- if (!skb && net_ratelimit()) {
- dev_warn(&card->dev->dev, "Failed to allocate sk_buff in popen()\n");
+ if (!skb) {
+ if (net_ratelimit())
+ dev_warn(&card->dev->dev, "Failed to allocate sk_buff in popen()\n");
return -ENOMEM;
}
header = (void *)skb_put(skb, sizeof(*header));
If unsure say Y here.
config FW_LOADER
- tristate "Userspace firmware loading support" if EMBEDDED
+ tristate "Userspace firmware loading support" if EXPERT
default y
---help---
This option is provided for the case where no in-kernel-tree modules
goto out;
}
+ /* Maybe the parent is now able to suspend. */
if (parent && !parent->power.ignore_children && !dev->power.irq_safe) {
- spin_unlock_irq(&dev->power.lock);
+ spin_unlock(&dev->power.lock);
- pm_request_idle(parent);
+ spin_lock(&parent->power.lock);
+ rpm_idle(parent, RPM_ASYNC);
+ spin_unlock(&parent->power.lock);
- spin_lock_irq(&dev->power.lock);
+ spin_lock(&dev->power.lock);
}
out:
obj-$(CONFIG_BLK_DEV_DRBD) += drbd/
obj-$(CONFIG_BLK_DEV_RBD) += rbd.o
-swim_mod-objs := swim.o swim_asm.o
+swim_mod-y := swim.o swim_asm.o
#
obj-$(CONFIG_ATA_OVER_ETH) += aoe.o
-aoe-objs := aoeblk.o aoechr.o aoecmd.o aoedev.o aoemain.o aoenet.o
+aoe-y := aoeblk.o aoechr.o aoecmd.o aoedev.o aoemain.o aoenet.o
sector_t total_size;
InquiryData_struct *inq_buff = NULL;
- for (logvol = 0; logvol < CISS_MAX_LUN; logvol++) {
+ for (logvol = 0; logvol <= h->highest_lun; logvol++) {
if (!h->drv[logvol])
continue;
if (memcmp(h->drv[logvol]->LunID, drv->LunID,
static void loop_free(struct loop_device *lo)
{
+ if (!lo->lo_queue->queue_lock)
+ lo->lo_queue->queue_lock = &lo->lo_queue->__queue_lock;
+
blk_cleanup_queue(lo->lo_queue);
put_disk(lo->lo_disk);
list_del(&lo->lo_list);
#define DBG_BLKDEV 0x0100
#define DBG_RX 0x0200
#define DBG_TX 0x0400
-static DEFINE_MUTEX(nbd_mutex);
static unsigned int debugflags;
#endif /* NDEBUG */
dprintk(DBG_IOCTL, "%s: nbd_ioctl cmd=%s(0x%x) arg=%lu\n",
lo->disk->disk_name, ioctl_cmd_to_ascii(cmd), cmd, arg);
- mutex_lock(&nbd_mutex);
mutex_lock(&lo->tx_lock);
error = __nbd_ioctl(bdev, lo, cmd, arg);
mutex_unlock(&lo->tx_lock);
- mutex_unlock(&nbd_mutex);
return error;
}
/* Atheros AR3011 with sflash firmware*/
{ USB_DEVICE(0x0CF3, 0x3002) },
+ /* Atheros AR9285 Malbec with sflash firmware */
+ { USB_DEVICE(0x03F0, 0x311D) },
{ } /* Terminating entry */
};
#define USB_REQ_DFU_DNLOAD 1
#define BULK_SIZE 4096
-struct ath3k_data {
- struct usb_device *udev;
- u8 *fw_data;
- u32 fw_size;
- u32 fw_sent;
-};
-
-static int ath3k_load_firmware(struct ath3k_data *data,
- unsigned char *firmware,
- int count)
+static int ath3k_load_firmware(struct usb_device *udev,
+ const struct firmware *firmware)
{
u8 *send_buf;
int err, pipe, len, size, sent = 0;
+ int count = firmware->size;
- BT_DBG("ath3k %p udev %p", data, data->udev);
+ BT_DBG("udev %p", udev);
- pipe = usb_sndctrlpipe(data->udev, 0);
+ pipe = usb_sndctrlpipe(udev, 0);
- if ((usb_control_msg(data->udev, pipe,
+ send_buf = kmalloc(BULK_SIZE, GFP_ATOMIC);
+ if (!send_buf) {
+ BT_ERR("Can't allocate memory chunk for firmware");
+ return -ENOMEM;
+ }
+
+ memcpy(send_buf, firmware->data, 20);
+ if ((err = usb_control_msg(udev, pipe,
USB_REQ_DFU_DNLOAD,
USB_TYPE_VENDOR, 0, 0,
- firmware, 20, USB_CTRL_SET_TIMEOUT)) < 0) {
+ send_buf, 20, USB_CTRL_SET_TIMEOUT)) < 0) {
BT_ERR("Can't change to loading configuration err");
- return -EBUSY;
+ goto error;
}
sent += 20;
count -= 20;
- send_buf = kmalloc(BULK_SIZE, GFP_ATOMIC);
- if (!send_buf) {
- BT_ERR("Can't allocate memory chunk for firmware");
- return -ENOMEM;
- }
-
while (count) {
size = min_t(uint, count, BULK_SIZE);
- pipe = usb_sndbulkpipe(data->udev, 0x02);
- memcpy(send_buf, firmware + sent, size);
+ pipe = usb_sndbulkpipe(udev, 0x02);
+ memcpy(send_buf, firmware->data + sent, size);
- err = usb_bulk_msg(data->udev, pipe, send_buf, size,
+ err = usb_bulk_msg(udev, pipe, send_buf, size,
&len, 3000);
if (err || (len != size)) {
{
const struct firmware *firmware;
struct usb_device *udev = interface_to_usbdev(intf);
- struct ath3k_data *data;
- int size;
BT_DBG("intf %p id %p", intf, id);
if (intf->cur_altsetting->desc.bInterfaceNumber != 0)
return -ENODEV;
- data = kzalloc(sizeof(*data), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
-
- data->udev = udev;
-
if (request_firmware(&firmware, "ath3k-1.fw", &udev->dev) < 0) {
- kfree(data);
return -EIO;
}
- size = max_t(uint, firmware->size, 4096);
- data->fw_data = kmalloc(size, GFP_KERNEL);
- if (!data->fw_data) {
+ if (ath3k_load_firmware(udev, firmware)) {
release_firmware(firmware);
- kfree(data);
- return -ENOMEM;
- }
-
- memcpy(data->fw_data, firmware->data, firmware->size);
- data->fw_size = firmware->size;
- data->fw_sent = 0;
- release_firmware(firmware);
-
- usb_set_intfdata(intf, data);
- if (ath3k_load_firmware(data, data->fw_data, data->fw_size)) {
- usb_set_intfdata(intf, NULL);
- kfree(data->fw_data);
- kfree(data);
return -EIO;
}
+ release_firmware(firmware);
return 0;
}
static void ath3k_disconnect(struct usb_interface *intf)
{
- struct ath3k_data *data = usb_get_intfdata(intf);
-
BT_DBG("ath3k_disconnect intf %p", intf);
-
- kfree(data->fw_data);
- kfree(data);
}
static struct usb_driver ath3k_driver = {
/* Atheros 3011 with sflash firmware */
{ USB_DEVICE(0x0cf3, 0x3002), .driver_info = BTUSB_IGNORE },
+ /* Atheros AR9285 Malbec with sflash firmware */
+ { USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
+
/* Broadcom BCM2035 */
{ USB_DEVICE(0x0a5c, 0x2035), .driver_info = BTUSB_WRONG_SCO_MTU },
{ USB_DEVICE(0x0a5c, 0x200a), .driver_info = BTUSB_WRONG_SCO_MTU },
}
ENSURE(drive_status, CDC_DRIVE_STATUS );
- ENSURE(media_changed, CDC_MEDIA_CHANGED);
+ if (cdo->check_events == NULL && cdo->media_changed == NULL)
+ *change_capability = ~(CDC_MEDIA_CHANGED | CDC_SELECT_DISC);
ENSURE(tray_move, CDC_CLOSE_TRAY | CDC_OPEN_TRAY);
ENSURE(lock_door, CDC_LOCK);
ENSURE(select_speed, CDC_SELECT_SPEED);
menu "Character devices"
config VT
- bool "Virtual terminal" if EMBEDDED
+ bool "Virtual terminal" if EXPERT
depends on !S390
select INPUT
default y
config CONSOLE_TRANSLATIONS
depends on VT
default y
- bool "Enable character translations in console" if EMBEDDED
+ bool "Enable character translations in console" if EXPERT
---help---
This enables support for font mapping and Unicode translation
on virtual consoles.
config VT_CONSOLE
- bool "Support for console on virtual terminal" if EMBEDDED
+ bool "Support for console on virtual terminal" if EXPERT
depends on VT
default y
---help---
If you have an SGI Altix with an attached SABrick
say Y or M here, otherwise say N.
-source "drivers/serial/Kconfig"
+source "drivers/tty/serial/Kconfig"
config UNIX98_PTYS
- bool "Unix98 PTY support" if EMBEDDED
+ bool "Unix98 PTY support" if EXPERT
default y
---help---
A pseudo terminal (PTY) is a software device consisting of two
config TTY_PRINTK
bool "TTY driver to output user messages via printk"
- depends on EMBEDDED
+ depends on EXPERT
default n
---help---
If you say Y here, the support for writing user messages (i.e.
obj-$(CONFIG_AMIGA_BUILTIN_SERIAL) += amiserial.o
obj-$(CONFIG_SX) += sx.o generic_serial.o
obj-$(CONFIG_RIO) += rio/ generic_serial.o
-obj-$(CONFIG_HVC_CONSOLE) += hvc_vio.o hvsi.o
-obj-$(CONFIG_HVC_ISERIES) += hvc_iseries.o
-obj-$(CONFIG_HVC_RTAS) += hvc_rtas.o
-obj-$(CONFIG_HVC_TILE) += hvc_tile.o
-obj-$(CONFIG_HVC_DCC) += hvc_dcc.o
-obj-$(CONFIG_HVC_BEAT) += hvc_beat.o
-obj-$(CONFIG_HVC_DRIVER) += hvc_console.o
-obj-$(CONFIG_HVC_IRQ) += hvc_irq.o
-obj-$(CONFIG_HVC_XEN) += hvc_xen.o
-obj-$(CONFIG_HVC_IUCV) += hvc_iucv.o
-obj-$(CONFIG_HVC_UDBG) += hvc_udbg.o
obj-$(CONFIG_VIRTIO_CONSOLE) += virtio_console.o
obj-$(CONFIG_RAW_DRIVER) += raw.o
obj-$(CONFIG_SGI_SNSC) += snsc.o snsc_event.o
obj-$(CONFIG_MMTIMER) += mmtimer.o
obj-$(CONFIG_UV_MMTIMER) += uv_mmtimer.o
obj-$(CONFIG_VIOTAPE) += viotape.o
-obj-$(CONFIG_HVCS) += hvcs.o
obj-$(CONFIG_IBM_BSR) += bsr.o
obj-$(CONFIG_SGI_MBCS) += mbcs.o
obj-$(CONFIG_BRIQ_PANEL) += briq_panel.o
config AGP_AMD
tristate "AMD Irongate, 761, and 762 chipset support"
- depends on AGP && (X86_32 || ALPHA)
+ depends on AGP && X86_32
help
This option gives you AGP support for the GLX component of
X on AMD Irongate, 761, and 762 chipsets.
if (page_map->real == NULL)
return -ENOMEM;
-#ifndef CONFIG_X86
- SetPageReserved(virt_to_page(page_map->real));
- global_cache_flush();
- page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real),
- PAGE_SIZE);
- if (page_map->remapped == NULL) {
- ClearPageReserved(virt_to_page(page_map->real));
- free_page((unsigned long) page_map->real);
- page_map->real = NULL;
- return -ENOMEM;
- }
- global_cache_flush();
-#else
set_memory_uc((unsigned long)page_map->real, 1);
page_map->remapped = page_map->real;
-#endif
for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++) {
writel(agp_bridge->scratch_page, page_map->remapped+i);
static void amd_free_page_map(struct amd_page_map *page_map)
{
-#ifndef CONFIG_X86
- iounmap(page_map->remapped);
- ClearPageReserved(virt_to_page(page_map->real));
-#else
set_memory_wb((unsigned long)page_map->real, 1);
-#endif
free_page((unsigned long) page_map->real);
}
dev_info(&pdev->dev, "Intel %s Chipset\n", intel_agp_chipsets[i].name);
- /*
- * If the device has not been properly setup, the following will catch
- * the problem and should stop the system from crashing.
- * 20030610 - hamish@zot.org
- */
- if (pci_enable_device(pdev)) {
- dev_err(&pdev->dev, "can't enable PCI device\n");
- agp_put_bridge(bridge);
- return -ENODEV;
- }
-
/*
* The following fixes the case where the BIOS has "forgotten" to
* provide an address range for the GART.
* 20030610 - hamish@zot.org
+ * This happens before pci_enable_device() intentionally;
+ * calling pci_enable_device() before assigning the resource
+ * will result in the GART being disabled on machines with such
+ * BIOSs (the GART ends up with a BAR starting at 0, which
+ * conflicts with a lot of other devices).
*/
r = &pdev->resource[0];
if (!r->start && r->end) {
}
}
+ /*
+ * If the device has not been properly setup, the following will catch
+ * the problem and should stop the system from crashing.
+ * 20030610 - hamish@zot.org
+ */
+ if (pci_enable_device(pdev)) {
+ dev_err(&pdev->dev, "can't enable PCI device\n");
+ agp_put_bridge(bridge);
+ return -ENODEV;
+ }
+
/* Fill in the mode register */
if (cap_ptr) {
pci_read_config_dword(pdev,
phys_addr_t gma_bus_addr;
u32 PGETBL_save;
u32 __iomem *gtt; /* I915G */
+ bool clear_fake_agp; /* on first access via agp, fill with scratch */
int num_dcache_entries;
union {
void __iomem *i9xx_flush_page;
static int intel_fake_agp_configure(void)
{
- int i;
-
if (!intel_enable_gtt())
return -EIO;
+ intel_private.clear_fake_agp = true;
agp_bridge->gart_bus_addr = intel_private.gma_bus_addr;
- for (i = 0; i < intel_private.base.gtt_total_entries; i++) {
- intel_private.driver->write_entry(intel_private.scratch_page_dma,
- i, 0);
- }
- readl(intel_private.gtt+i-1); /* PCI Posting. */
-
- global_cache_flush();
-
return 0;
}
{
int ret = -EINVAL;
+ if (intel_private.clear_fake_agp) {
+ int start = intel_private.base.stolen_size / PAGE_SIZE;
+ int end = intel_private.base.gtt_mappable_entries;
+ intel_gtt_clear_range(start, end - start);
+ intel_private.clear_fake_agp = false;
+ }
+
if (INTEL_GTT_GEN == 1 && type == AGP_DCACHE_MEMORY)
return i810_insert_dcache_entries(mem, pg_start, type);
}
#ifndef CONFIG_BFIN_JTAG_COMM_CONSOLE
-# define acquire_console_sem()
-# define release_console_sem()
+# define console_lock()
+# define console_unlock()
#endif
static int
bfin_jc_write(struct tty_struct *tty, const unsigned char *buf, int count)
{
int i;
- acquire_console_sem();
+ console_lock();
i = bfin_jc_circ_write(buf, count);
- release_console_sem();
+ console_unlock();
wake_up_process(bfin_jc_kthread);
return i;
}
static int add_smi(struct smi_info *smi);
static int try_smi_init(struct smi_info *smi);
static void cleanup_one_si(struct smi_info *to_clean);
+static void cleanup_ipmi_si(void);
static ATOMIC_NOTIFIER_HEAD(xaction_notifier_list);
static int register_xaction_notifier(struct notifier_block *nb)
mutex_lock(&smi_infos_lock);
if (unload_when_empty && list_empty(&smi_infos)) {
mutex_unlock(&smi_infos_lock);
-#ifdef CONFIG_PCI
- if (pci_registered)
- pci_unregister_driver(&ipmi_pci_driver);
-#endif
-
-#ifdef CONFIG_PPC_OF
- if (of_registered)
- of_unregister_platform_driver(&ipmi_of_platform_driver);
-#endif
- driver_unregister(&ipmi_driver.driver);
+ cleanup_ipmi_si();
printk(KERN_WARNING PFX
"Unable to find any System Interface(s)\n");
return -ENODEV;
tpm_protected_ordinal_duration[ordinal &
TPM_PROTECTED_ORDINAL_MASK];
- if (duration_idx != TPM_UNDEFINED)
+ if (duration_idx != TPM_UNDEFINED) {
duration = chip->vendor.duration[duration_idx];
- if (duration <= 0)
+ /*
+  * If duration is 0, it's because chip->vendor.duration wasn't
+  * filled yet, so we return the lowest timeout just to give
+  * tpm_get_timeouts() enough time to succeed.
+  */
+ return (duration <= 0 ? HZ : duration);
+ } else
return 2 * 60 * HZ;
- else
- return duration;
}
EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
"1.2 TPM (device-id 0x%X, rev-id %d)\n",
vendor >> 16, ioread8(chip->vendor.iobase + TPM_RID(0)));
- if (is_itpm(to_pnp_dev(dev)))
- itpm = 1;
-
if (itpm)
dev_info(dev, "Intel iTPM workaround enabled\n");
else
interrupts = 0;
+ if (is_itpm(pnp_dev))
+ itpm = 1;
+
return tpm_tis_init(&pnp_dev->dev, start, len, irq);
}
/*
* Copyright (C) 2006, 2007, 2009 Rusty Russell, IBM Corporation
- * Copyright (C) 2009, 2010 Red Hat, Inc.
+ * Copyright (C) 2009, 2010, 2011 Red Hat, Inc.
+ * Copyright (C) 2009, 2010, 2011 Amit Shah <amit.shah@redhat.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
#include <linux/virtio_console.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
-#include "hvc_console.h"
+#include "../tty/hvc/hvc_console.h"
/*
* This is a global struct for storing common data for all the devices
spin_unlock(&portdev->cvq_lock);
}
+static void out_intr(struct virtqueue *vq)
+{
+ struct port *port;
+
+ port = find_port_by_vq(vq->vdev->priv, vq);
+ if (!port)
+ return;
+
+ wake_up_interruptible(&port->waitqueue);
+}
+
static void in_intr(struct virtqueue *vq)
{
struct port *port;
*/
j = 0;
io_callbacks[j] = in_intr;
- io_callbacks[j + 1] = NULL;
+ io_callbacks[j + 1] = out_intr;
io_names[j] = "input";
io_names[j + 1] = "output";
j += 2;
for (i = 1; i < nr_ports; i++) {
j += 2;
io_callbacks[j] = in_intr;
- io_callbacks[j + 1] = NULL;
+ io_callbacks[j + 1] = out_intr;
io_names[j] = "input";
io_names[j + 1] = "output";
}
printk(KERN_INFO "PM-Timer had inconsistent results:"
" 0x%#llx, 0x%#llx - aborting.\n",
value1, value2);
+ pmtmr_ioport = 0;
return -EINVAL;
}
if (i == ACPI_PM_READ_CHECKS) {
printk(KERN_INFO "PM-Timer failed consistency check "
" (0x%#llx) - aborting.\n", value1);
+ pmtmr_ioport = 0;
return -ENODEV;
}
}
- if (verify_pmtmr_rate() != 0)
+ if (verify_pmtmr_rate() != 0) {
+ pmtmr_ioport = 0;
return -ENODEV;
+ }
return clocksource_register_hz(&clocksource_acpi_pm,
PMTMR_TICKS_PER_SEC);
clkevt.clkevt.min_delta_ns = clockevent_delta2ns(1, &clkevt.clkevt) + 1;
clkevt.clkevt.cpumask = cpumask_of(0);
- setup_irq(irq, &tc_irqaction);
-
clockevents_register_device(&clkevt.clkevt);
+
+ setup_irq(irq, &tc_irqaction);
}
#else /* !CONFIG_GENERIC_CLOCKEVENTS */
config CPU_FREQ_DEFAULT_GOV_POWERSAVE
bool "powersave"
- depends on EMBEDDED
+ depends on EXPERT
select CPU_FREQ_GOV_POWERSAVE
help
Use the CPUFreq governor 'powersave' as default. This sets
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
+#include <linux/delay.h>
#include <linux/dmapool.h>
#include <linux/dmaengine.h>
#include <linux/amba/bus.h>
}
/*
- * Overall DMAC remains enabled always.
+ * Pause the channel by setting the HALT bit.
*
- * Disabling individual channels could lose data.
+ * For M->P transfers, pause the DMAC first and then stop the peripheral -
+ * the FIFO can only drain if the peripheral is still requesting data.
+ * (note: this can still timeout if the DMAC FIFO never drains of data.)
*
- * Disable the peripheral DMA after disabling the DMAC in order to allow
- * the DMAC FIFO to drain, and hence allow the channel to show inactive
+ * For P->M transfers, disable the peripheral first to stop it filling
+ * the DMAC FIFO, and then pause the DMAC.
*/
static void pl08x_pause_phy_chan(struct pl08x_phy_chan *ch)
{
u32 val;
+ int timeout;
/* Set the HALT bit and wait for the FIFO to drain */
val = readl(ch->base + PL080_CH_CONFIG);
writel(val, ch->base + PL080_CH_CONFIG);
/* Wait for channel inactive */
- while (pl08x_phy_channel_busy(ch))
- cpu_relax();
+ for (timeout = 1000; timeout; timeout--) {
+ if (!pl08x_phy_channel_busy(ch))
+ break;
+ udelay(1);
+ }
+ if (pl08x_phy_channel_busy(ch))
+ pr_err("pl08x: channel%u timeout waiting for pause\n", ch->id);
}
static void pl08x_resume_phy_chan(struct pl08x_phy_chan *ch)
}
-/* Stops the channel */
-static void pl08x_stop_phy_chan(struct pl08x_phy_chan *ch)
+/*
+ * pl08x_terminate_phy_chan() stops the channel, clears the FIFO and
+ * clears any pending interrupt status. This should not be used for
+ * an on-going transfer, but as a method of shutting down a channel
+ * (eg, when it's no longer used) or terminating a transfer.
+ */
+static void pl08x_terminate_phy_chan(struct pl08x_driver_data *pl08x,
+ struct pl08x_phy_chan *ch)
{
- u32 val;
+ u32 val = readl(ch->base + PL080_CH_CONFIG);
- pl08x_pause_phy_chan(ch);
+ val &= ~(PL080_CONFIG_ENABLE | PL080_CONFIG_ERR_IRQ_MASK |
+ PL080_CONFIG_TC_IRQ_MASK);
- /* Disable channel */
- val = readl(ch->base + PL080_CH_CONFIG);
- val &= ~PL080_CONFIG_ENABLE;
- val &= ~PL080_CONFIG_ERR_IRQ_MASK;
- val &= ~PL080_CONFIG_TC_IRQ_MASK;
writel(val, ch->base + PL080_CH_CONFIG);
+
+ writel(1 << ch->id, pl08x->base + PL080_ERR_CLEAR);
+ writel(1 << ch->id, pl08x->base + PL080_TC_CLEAR);
}
static inline u32 get_bytes_in_cctl(u32 cctl)
{
unsigned long flags;
+ spin_lock_irqsave(&ch->lock, flags);
+
/* Stop the channel and clear its interrupts */
- pl08x_stop_phy_chan(ch);
- writel((1 << ch->id), pl08x->base + PL080_ERR_CLEAR);
- writel((1 << ch->id), pl08x->base + PL080_TC_CLEAR);
+ pl08x_terminate_phy_chan(pl08x, ch);
/* Mark it as free */
- spin_lock_irqsave(&ch->lock, flags);
ch->serving = NULL;
spin_unlock_irqrestore(&ch->lock, flags);
}
plchan->state = PL08X_CHAN_IDLE;
if (plchan->phychan) {
- pl08x_stop_phy_chan(plchan->phychan);
+ pl08x_terminate_phy_chan(pl08x, plchan->phychan);
/*
* Mark physical channel as free and free any slave
struct imxdma_engine {
struct device *dev;
+ struct device_dma_parameters dma_parms;
struct dma_device dma_device;
struct imxdma_channel channel[MAX_DMA_CHANNELS];
};
else
dmamode = DMA_MODE_WRITE;
+ switch (imxdmac->word_size) {
+ case DMA_SLAVE_BUSWIDTH_4_BYTES:
+ if (sgl->length & 3 || sgl->dma_address & 3)
+ return NULL;
+ break;
+ case DMA_SLAVE_BUSWIDTH_2_BYTES:
+ if (sgl->length & 1 || sgl->dma_address & 1)
+ return NULL;
+ break;
+ case DMA_SLAVE_BUSWIDTH_1_BYTE:
+ break;
+ default:
+ return NULL;
+ }
+
ret = imx_dma_setup_sg(imxdmac->imxdma_channel, sgl, sg_len,
dma_length, imxdmac->per_address, dmamode);
if (ret)
INIT_LIST_HEAD(&imxdma->dma_device.channels);
+ dma_cap_set(DMA_SLAVE, imxdma->dma_device.cap_mask);
+ dma_cap_set(DMA_CYCLIC, imxdma->dma_device.cap_mask);
+
/* Initialize channel parameters */
for (i = 0; i < MAX_DMA_CHANNELS; i++) {
struct imxdma_channel *imxdmac = &imxdma->channel[i];
imxdmac->imxdma = imxdma;
spin_lock_init(&imxdmac->lock);
- dma_cap_set(DMA_SLAVE, imxdma->dma_device.cap_mask);
- dma_cap_set(DMA_CYCLIC, imxdma->dma_device.cap_mask);
-
imxdmac->chan.device = &imxdma->dma_device;
- imxdmac->chan.chan_id = i;
imxdmac->channel = i;
/* Add the channel to the DMAC list */
platform_set_drvdata(pdev, imxdma);
+ imxdma->dma_device.dev->dma_parms = &imxdma->dma_parms;
+ dma_set_max_seg_size(imxdma->dma_device.dev, 0xffffff);
+
ret = dma_async_device_register(&imxdma->dma_device);
if (ret) {
dev_err(&pdev->dev, "unable to register\n");
* struct sdma_channel - housekeeping for a SDMA channel
*
* @sdma pointer to the SDMA engine for this channel
- * @channel the channel number, matches dmaengine chan_id
+ * @channel the channel number, matches dmaengine chan_id + 1
* @direction transfer type. Needed for setting SDMA script
* @peripheral_type Peripheral type. Needed for setting SDMA script
* @event_id0 aka dma request line
struct sdma_engine {
struct device *dev;
+ struct device_dma_parameters dma_parms;
struct sdma_channel channel[MAX_DMA_CHANNELS];
struct sdma_channel_control *channel_control;
void __iomem *regs;
if (bd->mode.status & BD_RROR)
sdmac->status = DMA_ERROR;
else
- sdmac->status = DMA_SUCCESS;
+ sdmac->status = DMA_IN_PROGRESS;
bd->mode.status |= BD_DONE;
sdmac->buf_tail++;
__raw_writel(1 << channel, sdma->regs + SDMA_H_START);
}
-static dma_cookie_t sdma_assign_cookie(struct sdma_channel *sdma)
+static dma_cookie_t sdma_assign_cookie(struct sdma_channel *sdmac)
{
- dma_cookie_t cookie = sdma->chan.cookie;
+ dma_cookie_t cookie = sdmac->chan.cookie;
if (++cookie < 0)
cookie = 1;
- sdma->chan.cookie = cookie;
- sdma->desc.cookie = cookie;
+ sdmac->chan.cookie = cookie;
+ sdmac->desc.cookie = cookie;
return cookie;
}
cookie = sdma_assign_cookie(sdmac);
- sdma_enable_channel(sdma, tx->chan->chan_id);
+ sdma_enable_channel(sdma, sdmac->channel);
spin_unlock_irq(&sdmac->lock);
struct imx_dma_data *data = chan->private;
int prio, ret;
- /* No need to execute this for internal channel 0 */
- if (chan->chan_id == 0)
- return 0;
-
if (!data)
return -EINVAL;
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;
int ret, i, count;
- int channel = chan->chan_id;
+ int channel = sdmac->channel;
struct scatterlist *sg;
if (sdmac->status == DMA_IN_PROGRESS)
ret = -EINVAL;
goto err_out;
}
- if (sdmac->word_size == DMA_SLAVE_BUSWIDTH_4_BYTES)
+
+ switch (sdmac->word_size) {
+ case DMA_SLAVE_BUSWIDTH_4_BYTES:
bd->mode.command = 0;
- else
- bd->mode.command = sdmac->word_size;
+ if (count & 3 || sg->dma_address & 3)
+ return NULL;
+ break;
+ case DMA_SLAVE_BUSWIDTH_2_BYTES:
+ bd->mode.command = 2;
+ if (count & 1 || sg->dma_address & 1)
+ return NULL;
+ break;
+ case DMA_SLAVE_BUSWIDTH_1_BYTE:
+ bd->mode.command = 1;
+ break;
+ default:
+ return NULL;
+ }
param = BD_DONE | BD_EXTD | BD_CONT;
- if (sdmac->flags & IMX_DMA_SG_LOOP) {
+ if (i + 1 == sg_len) {
param |= BD_INTR;
- if (i + 1 == sg_len)
- param |= BD_WRAP;
+ param |= BD_LAST;
+ param &= ~BD_CONT;
}
- if (i + 1 == sg_len)
- param |= BD_INTR;
-
dev_dbg(sdma->dev, "entry %d: count: %d dma: 0x%08x %s%s\n",
i, count, sg->dma_address,
param & BD_WRAP ? "wrap" : "",
return &sdmac->desc;
err_out:
+ sdmac->status = DMA_ERROR;
return NULL;
}
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;
int num_periods = buf_len / period_len;
- int channel = chan->chan_id;
+ int channel = sdmac->channel;
int ret, i = 0, buf = 0;
dev_dbg(sdma->dev, "%s channel: %d\n", __func__, channel);
{
struct sdma_channel *sdmac = to_sdma_chan(chan);
dma_cookie_t last_used;
- enum dma_status ret;
last_used = chan->cookie;
- ret = dma_async_is_complete(cookie, sdmac->last_completed, last_used);
dma_set_tx_state(txstate, sdmac->last_completed, last_used, 0);
- return ret;
+ return sdmac->status;
}
static void sdma_issue_pending(struct dma_chan *chan)
/* download the RAM image for SDMA */
sdma_load_script(sdma, ram_code,
header->ram_code_size,
- sdma->script_addrs->ram_code_start_addr);
+ addr->ram_code_start_addr);
clk_disable(sdma->clk);
sdma_add_scripts(sdma, addr);
struct resource *iores;
struct sdma_platform_data *pdata = pdev->dev.platform_data;
int i;
- dma_cap_mask_t mask;
struct sdma_engine *sdma;
sdma = kzalloc(sizeof(*sdma), GFP_KERNEL);
sdma->version = pdata->sdma_version;
+ dma_cap_set(DMA_SLAVE, sdma->dma_device.cap_mask);
+ dma_cap_set(DMA_CYCLIC, sdma->dma_device.cap_mask);
+
INIT_LIST_HEAD(&sdma->dma_device.channels);
/* Initialize channel parameters */
for (i = 0; i < MAX_DMA_CHANNELS; i++) {
sdmac->sdma = sdma;
spin_lock_init(&sdmac->lock);
- dma_cap_set(DMA_SLAVE, sdma->dma_device.cap_mask);
- dma_cap_set(DMA_CYCLIC, sdma->dma_device.cap_mask);
-
sdmac->chan.device = &sdma->dma_device;
- sdmac->chan.chan_id = i;
sdmac->channel = i;
- /* Add the channel to the DMAC list */
- list_add_tail(&sdmac->chan.device_node, &sdma->dma_device.channels);
+ /*
+ * Add the channel to the DMAC list. Do not add channel 0 though
+ * because we need it internally in the SDMA driver. This also means
+ * that channel 0 in dmaengine counting matches sdma channel 1.
+ */
+ if (i)
+ list_add_tail(&sdmac->chan.device_node,
+ &sdma->dma_device.channels);
}
ret = sdma_init(sdma);
sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
sdma->dma_device.device_control = sdma_control;
sdma->dma_device.device_issue_pending = sdma_issue_pending;
+ sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
+ dma_set_max_seg_size(sdma->dma_device.dev, 65535);
ret = dma_async_device_register(&sdma->dma_device);
if (ret) {
goto err_init;
}
- /* request channel 0. This is an internal control channel
- * to the SDMA engine and not available to clients.
- */
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
- dma_request_channel(mask, NULL, NULL);
-
dev_info(sdma->dev, "initialized\n");
return 0;
err_request_region:
err_irq:
kfree(sdma);
- return 0;
+ return ret;
}
static int __exit sdma_remove(struct platform_device *pdev)
reg = idmac_read_icreg(ipu, IDMAC_CHA_EN);
idmac_write_icreg(ipu, reg & ~chan_mask, IDMAC_CHA_EN);
- /*
- * Problem (observed with channel DMAIC_7): after enabling the channel
- * and initialising buffers, there comes an interrupt with current still
- * pointing at buffer 0, whereas it should use buffer 0 first and only
- * generate an interrupt when it is done, then current should already
- * point to buffer 1. This spurious interrupt also comes on channel
- * DMASDC_0. With DMAIC_7 normally, is we just leave the ISR after the
- * first interrupt, there comes the second with current correctly
- * pointing to buffer 1 this time. But sometimes this second interrupt
- * doesn't come and the channel hangs. Clearing BUFx_RDY when disabling
- * the channel seems to prevent the channel from hanging, but it doesn't
- * prevent the spurious interrupt. This might also be unsafe. Think
- * about the IDMAC controller trying to switch to a buffer, when we
- * clear the ready bit, and re-enable it a moment later.
- */
- reg = idmac_read_ipureg(ipu, IPU_CHA_BUF0_RDY);
- idmac_write_ipureg(ipu, 0, IPU_CHA_BUF0_RDY);
- idmac_write_ipureg(ipu, reg & ~(1UL << channel), IPU_CHA_BUF0_RDY);
-
- reg = idmac_read_ipureg(ipu, IPU_CHA_BUF1_RDY);
- idmac_write_ipureg(ipu, 0, IPU_CHA_BUF1_RDY);
- idmac_write_ipureg(ipu, reg & ~(1UL << channel), IPU_CHA_BUF1_RDY);
-
spin_unlock_irqrestore(&ipu->lock, flags);
return 0;
/* Other interrupts do not interfere with this channel */
spin_lock(&ichan->lock);
- if (unlikely(chan_id != IDMAC_SDC_0 && chan_id != IDMAC_SDC_1 &&
- ((curbuf >> chan_id) & 1) == ichan->active_buffer &&
- !list_is_last(ichan->queue.next, &ichan->queue))) {
- int i = 100;
-
- /* This doesn't help. See comment in ipu_disable_channel() */
- while (--i) {
- curbuf = idmac_read_ipureg(&ipu_data, IPU_CHA_CUR_BUF);
- if (((curbuf >> chan_id) & 1) != ichan->active_buffer)
- break;
- cpu_relax();
- }
-
- if (!i) {
- spin_unlock(&ichan->lock);
- dev_dbg(dev,
- "IRQ on active buffer on channel %x, active "
- "%d, ready %x, %x, current %x!\n", chan_id,
- ichan->active_buffer, ready0, ready1, curbuf);
- return IRQ_NONE;
- } else
- dev_dbg(dev,
- "Buffer deactivated on channel %x, active "
- "%d, ready %x, %x, current %x, rest %d!\n", chan_id,
- ichan->active_buffer, ready0, ready1, curbuf, i);
- }
-
if (unlikely((ichan->active_buffer && (ready1 >> chan_id) & 1) ||
(!ichan->active_buffer && (ready0 >> chan_id) & 1)
)) {
/* Display and decode various NB registers for debug purposes. */
static void amd64_dump_misc_regs(struct amd64_pvt *pvt)
{
- int ganged;
-
debugf1("F3xE8 (NB Cap): 0x%08x\n", pvt->nbcap);
debugf1(" NB two channel DRAM capable: %s\n",
debugf1(" DramHoleValid: %s\n",
(pvt->dhar & DHAR_VALID) ? "yes" : "no");
+ amd64_debug_display_dimm_sizes(0, pvt);
+
/* everything below this point is Fam10h and above */
- if (boot_cpu_data.x86 == 0xf) {
- amd64_debug_display_dimm_sizes(0, pvt);
+ if (boot_cpu_data.x86 == 0xf)
return;
- }
+
+ amd64_debug_display_dimm_sizes(1, pvt);
amd64_info("using %s syndromes.\n", ((pvt->syn_type == 8) ? "x8" : "x4"));
/* Only if NOT ganged does dclr1 have valid info */
if (!dct_ganging_enabled(pvt))
amd64_dump_dramcfg_low(pvt->dclr1, 1);
-
- /*
- * Determine if ganged and then dump memory sizes for first controller,
- * and if NOT ganged dump info for 2nd controller.
- */
- ganged = dct_ganging_enabled(pvt);
-
- amd64_debug_display_dimm_sizes(0, pvt);
-
- if (!ganged)
- amd64_debug_display_dimm_sizes(1, pvt);
}
/* Read in both of DBAM registers */
WARN_ON(ctrl != 0);
}
- debugf1("F2x%d80 (DRAM Bank Address Mapping): 0x%08x\n",
- ctrl, ctrl ? pvt->dbam1 : pvt->dbam0);
+ dbam = (ctrl && !dct_ganging_enabled(pvt)) ? pvt->dbam1 : pvt->dbam0;
+ dcsb = (ctrl && !dct_ganging_enabled(pvt)) ? pvt->dcsb1 : pvt->dcsb0;
- dbam = ctrl ? pvt->dbam1 : pvt->dbam0;
- dcsb = ctrl ? pvt->dcsb1 : pvt->dcsb0;
+ debugf1("F2x%d80 (DRAM Bank Address Mapping): 0x%08x\n", ctrl, dbam);
edac_printk(KERN_DEBUG, EDAC_MC, "DCT%d chip selects:\n", ctrl);
configuration section.
config FIREWIRE_NET
- tristate "IP networking over 1394 (EXPERIMENTAL)"
- depends on FIREWIRE && INET && EXPERIMENTAL
+ tristate "IP networking over 1394"
+ depends on FIREWIRE && INET
help
This enables IPv4 over IEEE 1394, providing IP connectivity with
other implementations of RFC 2734 as found on several operating
systems. Multicast support is currently limited.
- NOTE, this driver is not stable yet!
-
To compile this driver as a module, say M here: The module will be
called firewire-net.
#define BIB_IRMC ((1) << 31)
#define NODE_CAPABILITIES 0x0c0083c0 /* per IEEE 1394 clause 8.3.2.6.5.2 */
+#define CANON_OUI 0x000085
+
static void generate_config_rom(struct fw_card *card, __be32 *config_rom)
{
struct fw_descriptor *desc;
bool root_device_is_running;
bool root_device_is_cmc;
bool irm_is_1394_1995_only;
+ bool keep_this_irm;
spin_lock_irq(&card->lock);
irm_is_1394_1995_only = irm_device && irm_device->config_rom &&
(irm_device->config_rom[2] & 0x000000f0) == 0;
+ /* Canon MV5i works unreliably if it is not the root node. */
+ keep_this_irm = irm_device && irm_device->config_rom &&
+ irm_device->config_rom[3] >> 8 == CANON_OUI;
+
root_id = root_node->node_id;
irm_id = card->irm_node->node_id;
local_id = card->local_node->node_id;
goto pick_me;
}
- if (irm_is_1394_1995_only) {
+ if (irm_is_1394_1995_only && !keep_this_irm) {
new_root_id = local_id;
fw_notify("%s, making local node (%02x) root.\n",
"IRM is not 1394a compliant", new_root_id);
spin_lock_irq(&card->lock);
- if (rcode != RCODE_COMPLETE) {
+ if (rcode != RCODE_COMPLETE && !keep_this_irm) {
/*
* The lock request failed, maybe the IRM
* isn't really IRM capable after all. Let's
struct fwnet_device *dev;
u64 guid;
u64 fifo;
+ __be32 ip;
/* guarded by dev->lock */
struct list_head pd_list; /* received partial datagrams */
peer->speed = sspd;
if (peer->max_payload > max_payload)
peer->max_payload = max_payload;
+
+ peer->ip = arp1394->sip;
}
spin_unlock_irqrestore(&dev->lock, flags);
peer->dev = dev;
peer->guid = (u64)device->config_rom[3] << 32 | device->config_rom[4];
peer->fifo = FWNET_NO_FIFO_ADDR;
+ peer->ip = 0;
INIT_LIST_HEAD(&peer->pd_list);
peer->pdg_size = 0;
peer->datagram_label = 0;
mutex_lock(&fwnet_device_mutex);
+ net = dev->netdev;
+ if (net && peer->ip)
+ arp_invalidate(net, peer->ip);
+
fwnet_remove_peer(peer, dev);
if (list_empty(&dev->peer_list)) {
- net = dev->netdev;
unregister_netdev(net);
if (dev->local_fifo != FWNET_NO_FIFO_ADDR)
using the kernel parameter 'edd={on|skipmbr|off}'.
config FIRMWARE_MEMMAP
- bool "Add firmware-provided memory map to sysfs" if EMBEDDED
+ bool "Add firmware-provided memory map to sysfs" if EXPERT
default X86
help
Add the firmware-provided (unmodified) memory map to /sys/firmware/memmap.
static void __init dmi_dump_ids(void)
{
+ const char *board; /* Board Name is optional */
+
printk(KERN_DEBUG "DMI: ");
- print_filtered(dmi_get_system_info(DMI_BOARD_NAME));
- printk(KERN_CONT "/");
+ print_filtered(dmi_get_system_info(DMI_SYS_VENDOR));
+ printk(KERN_CONT " ");
print_filtered(dmi_get_system_info(DMI_PRODUCT_NAME));
+ board = dmi_get_system_info(DMI_BOARD_NAME);
+ if (board) {
+ printk(KERN_CONT "/");
+ print_filtered(board);
+ }
printk(KERN_CONT ", BIOS ");
print_filtered(dmi_get_system_info(DMI_BIOS_VERSION));
printk(KERN_CONT " ");
static void lnw_irq_handler(unsigned irq, struct irq_desc *desc)
{
- struct lnw_gpio *lnw = (struct lnw_gpio *)get_irq_data(irq);
+ struct lnw_gpio *lnw = get_irq_data(irq);
u32 base, gpio;
void __iomem *gedr;
u32 gedr_v;
/* clear the edge detect status bit */
writel(gedr_v, gedr);
}
- desc->chip->eoi(irq);
+
+ if (desc->chip->irq_eoi)
+ desc->chip->irq_eoi(irq_get_irq_data(irq));
+ else
+ dev_warn(lnw->chip.dev, "missing EOI handler for irq %d\n", irq);
+
}
static int __devinit lnw_gpio_probe(struct pci_dev *pdev,
unsigned gpio_start;
uint16_t reg_output;
uint16_t reg_direction;
+ struct mutex i2c_lock;
#ifdef CONFIG_GPIO_PCA953X_IRQ
struct mutex irq_lock;
chip = container_of(gc, struct pca953x_chip, gpio_chip);
+ mutex_lock(&chip->i2c_lock);
reg_val = chip->reg_direction | (1u << off);
ret = pca953x_write_reg(chip, PCA953X_DIRECTION, reg_val);
if (ret)
- return ret;
+ goto exit;
chip->reg_direction = reg_val;
- return 0;
+ ret = 0;
+exit:
+ mutex_unlock(&chip->i2c_lock);
+ return ret;
}
static int pca953x_gpio_direction_output(struct gpio_chip *gc,
chip = container_of(gc, struct pca953x_chip, gpio_chip);
+ mutex_lock(&chip->i2c_lock);
/* set output level */
if (val)
reg_val = chip->reg_output | (1u << off);
ret = pca953x_write_reg(chip, PCA953X_OUTPUT, reg_val);
if (ret)
- return ret;
+ goto exit;
chip->reg_output = reg_val;
reg_val = chip->reg_direction & ~(1u << off);
ret = pca953x_write_reg(chip, PCA953X_DIRECTION, reg_val);
if (ret)
- return ret;
+ goto exit;
chip->reg_direction = reg_val;
- return 0;
+ ret = 0;
+exit:
+ mutex_unlock(&chip->i2c_lock);
+ return ret;
}
static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
chip = container_of(gc, struct pca953x_chip, gpio_chip);
+ mutex_lock(&chip->i2c_lock);
ret = pca953x_read_reg(chip, PCA953X_INPUT, &reg_val);
+ mutex_unlock(&chip->i2c_lock);
if (ret < 0) {
/* NOTE: diagnostic already emitted; that's all we should
* do unless gpio_*_value_cansleep() calls become different
chip = container_of(gc, struct pca953x_chip, gpio_chip);
+ mutex_lock(&chip->i2c_lock);
if (val)
reg_val = chip->reg_output | (1u << off);
else
ret = pca953x_write_reg(chip, PCA953X_OUTPUT, reg_val);
if (ret)
- return;
+ goto exit;
chip->reg_output = reg_val;
+exit:
+ mutex_unlock(&chip->i2c_lock);
}
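The hunks above bracket every cached-register read-modify-write with chip->i2c_lock, so two GPIO calls can no longer interleave between updating the cached value and writing it back out. A user-space sketch of the same guarded pattern (POSIX mutex, hypothetical names, not the driver's API):

#include <pthread.h>
#include <stdint.h>

struct fake_chip {
        pthread_mutex_t lock;
        uint16_t reg_output;            /* cached output register */
};

/* Set or clear one output bit under the lock, mirroring the
 * mutex_lock()/mutex_unlock() bracketing added above. */
static void fake_set_value(struct fake_chip *chip, unsigned int off, int val)
{
        pthread_mutex_lock(&chip->lock);
        if (val)
                chip->reg_output |= (uint16_t)(1u << off);
        else
                chip->reg_output &= (uint16_t)~(1u << off);
        /* a real driver would write chip->reg_output to the device here */
        pthread_mutex_unlock(&chip->lock);
}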
static void pca953x_setup_gpio(struct pca953x_chip *chip, int gpios)
chip->names = pdata->names;
+ mutex_init(&chip->i2c_lock);
+
/* initialize cached registers from their original values.
* we can't share this chip with another i2c master.
*/
tristate
depends on DRM
select FB
- select FRAMEBUFFER_CONSOLE if !EMBEDDED
+ select FRAMEBUFFER_CONSOLE if !EXPERT
help
FB and CRTC helpers for KMS drivers.
config DRM_I915
tristate "i915 driver"
depends on AGP_INTEL
+ # we need shmfs for the swappable backing store, and in particular
+ # the shmem_readpage() which depends upon tmpfs
select SHMEM
+ select TMPFS
select DRM_KMS_HELPER
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
mutex_unlock(&dev->mode_config.mutex);
return ret;
}
+
+void drm_mode_config_reset(struct drm_device *dev)
+{
+ struct drm_crtc *crtc;
+ struct drm_encoder *encoder;
+ struct drm_connector *connector;
+
+ list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
+ if (crtc->funcs->reset)
+ crtc->funcs->reset(crtc);
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head)
+ if (encoder->funcs->reset)
+ encoder->funcs->reset(encoder);
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+ if (connector->funcs->reset)
+ connector->funcs->reset(connector);
+}
+EXPORT_SYMBOL(drm_mode_config_reset);
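A driver opts into this reset pass by filling the new .reset member of its drm_crtc_funcs, drm_encoder_funcs or drm_connector_funcs; the intel_crt_reset() and intel_crtc_reset() hunks further below do exactly that, and the hooks are then run via the drm_mode_config_reset() calls added to the i915 resume and GPU-reset paths.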
struct drm_encoder *encoder;
bool ret = true;
- adjusted_mode = drm_mode_duplicate(dev, mode);
-
crtc->enabled = drm_helper_crtc_in_use(crtc);
-
if (!crtc->enabled)
return true;
+ adjusted_mode = drm_mode_duplicate(dev, mode);
+
saved_hwmode = crtc->hwmode;
saved_mode = crtc->mode;
saved_x = crtc->x;
*/
drm_calc_timestamping_constants(crtc);
- /* XXX free adjustedmode */
- drm_mode_destroy(dev, adjusted_mode);
/* FIXME: add subpixel order */
done:
+ drm_mode_destroy(dev, adjusted_mode);
if (!ret) {
crtc->hwmode = saved_hwmode;
crtc->mode = saved_mode;
crtc_funcs = set->crtc->helper_private;
+ if (!set->mode)
+ set->fb = NULL;
+
if (set->fb) {
DRM_DEBUG_KMS("[CRTC:%d] [FB:%d] #connectors=%d (x y) (%i %i)\n",
set->crtc->base.id, set->fb->base.id,
(int)set->num_connectors, set->x, set->y);
} else {
- DRM_DEBUG_KMS("[CRTC:%d] [NOFB] #connectors=%d (x y) (%i %i)\n",
- set->crtc->base.id, (int)set->num_connectors,
- set->x, set->y);
+ DRM_DEBUG_KMS("[CRTC:%d] [NOFB]\n", set->crtc->base.id);
+ set->mode = NULL;
+ set->num_connectors = 0;
}
dev = set->crtc->dev;
mode_changed = true;
if (mode_changed) {
- set->crtc->enabled = (set->mode != NULL);
- if (set->mode != NULL) {
+ set->crtc->enabled = drm_helper_crtc_in_use(set->crtc);
+ if (set->crtc->enabled) {
DRM_DEBUG_KMS("attempting to set mode from"
" userspace\n");
drm_mode_debug_printmodeline(set->mode);
ret = -EINVAL;
goto fail;
}
+ DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
+ for (i = 0; i < set->num_connectors; i++) {
+ DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
+ drm_get_connector_name(set->connectors[i]));
+ set->connectors[i]->dpms = DRM_MODE_DPMS_ON;
+ }
}
drm_helper_disable_unused_functions(dev);
} else if (fb_changed) {
goto fail;
}
}
- DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
- for (i = 0; i < set->num_connectors; i++) {
- DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
- drm_get_connector_name(set->connectors[i]));
- set->connectors[i]->dpms = DRM_MODE_DPMS_ON;
- }
kfree(save_connectors);
kfree(save_encoders);
}
EXPORT_SYMBOL(drm_fb_helper_hotplug_event);
-/* The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EMBEDDED)
+/* The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EXPERT)
* but the module doesn't depend on any fb console symbols. At least
* attempt to load fbcon to avoid leaving the system without a usable console.
*/
-#if defined(CONFIG_FRAMEBUFFER_CONSOLE_MODULE) && !defined(CONFIG_EMBEDDED)
+#if defined(CONFIG_FRAMEBUFFER_CONSOLE_MODULE) && !defined(CONFIG_EXPERT)
static int __init drm_fb_helper_modinit(void)
{
const char *name = "fbcon";
#endif
mutex_lock(&dev->struct_mutex);
- seq_printf(m, "vma use count: %d, high_memory = %p, 0x%08llx\n",
+ seq_printf(m, "vma use count: %d, high_memory = %pK, 0x%pK\n",
atomic_read(&dev->vma_count),
- high_memory, (u64)virt_to_phys(high_memory));
+ high_memory, (void *)virt_to_phys(high_memory));
list_for_each_entry(pt, &dev->vmalist, head) {
vma = pt->vma;
if (!vma)
continue;
seq_printf(m,
- "\n%5d 0x%08lx-0x%08lx %c%c%c%c%c%c 0x%08lx000",
- pt->pid, vma->vm_start, vma->vm_end,
+ "\n%5d 0x%pK-0x%pK %c%c%c%c%c%c 0x%08lx000",
+ pt->pid,
+ (void *)vma->vm_start, (void *)vma->vm_end,
vma->vm_flags & VM_READ ? 'r' : '-',
vma->vm_flags & VM_WRITE ? 'w' : '-',
vma->vm_flags & VM_EXEC ? 'x' : '-',
* Drivers should call this routine in their vblank interrupt handlers to
* update the vblank counter and send any signals that may be pending.
*/
-void drm_handle_vblank(struct drm_device *dev, int crtc)
+bool drm_handle_vblank(struct drm_device *dev, int crtc)
{
u32 vblcount;
s64 diff_ns;
unsigned long irqflags;
if (!dev->num_crtcs)
- return;
+ return false;
/* Need timestamp lock to prevent concurrent execution with
* vblank enable/disable, as this would cause inconsistent
/* Vblank irq handling disabled. Nothing to do. */
if (!dev->vblank_enabled[crtc]) {
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
- return;
+ return false;
}
/* Fetch corresponding timestamp for this vblank interval from
drm_handle_vblank_events(dev, crtc);
spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
+ return true;
}
EXPORT_SYMBOL(drm_handle_vblank);
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
- struct intel_ring_buffer *ring = LP_RING(dev_priv);
+ int ret;
master_priv->sarea = drm_getsarea(dev);
if (master_priv->sarea) {
}
if (init->ring_size != 0) {
- if (ring->obj != NULL) {
+ if (LP_RING(dev_priv)->obj != NULL) {
i915_dma_cleanup(dev);
DRM_ERROR("Client tried to initialize ringbuffer in "
"GEM mode\n");
return -EINVAL;
}
- ring->size = init->ring_size;
-
- ring->map.offset = init->ring_start;
- ring->map.size = init->ring_size;
- ring->map.type = 0;
- ring->map.flags = 0;
- ring->map.mtrr = 0;
-
- drm_core_ioremap_wc(&ring->map, dev);
-
- if (ring->map.handle == NULL) {
+ ret = intel_render_ring_init_dri(dev,
+ init->ring_start,
+ init->ring_size);
+ if (ret) {
i915_dma_cleanup(dev);
- DRM_ERROR("can not ioremap virtual address for"
- " ring buffer\n");
- return -ENOMEM;
+ return ret;
}
}
- ring->virtual_start = ring->map.handle;
-
dev_priv->cpp = init->cpp;
dev_priv->back_offset = init->back_offset;
dev_priv->front_offset = init->front_offset;
if (ret)
DRM_INFO("failed to find VBIOS tables\n");
- /* if we have > 1 VGA cards, then disable the radeon VGA resources */
+ /* If we have more than one VGA card, then we need to arbitrate access
+ * to the common VGA resources.
+ *
+ * If we are a secondary display controller (!PCI_DISPLAY_CLASS_VGA),
+ * then we do not take part in VGA arbitration and the
+ * vga_client_register() fails with -ENODEV.
+ */
ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode);
- if (ret)
+ if (ret && ret != -ENODEV)
goto cleanup_ringbuffer;
intel_register_dsm_handler();
unsigned int i915_powersave = 1;
module_param_named(powersave, i915_powersave, int, 0600);
+unsigned int i915_enable_rc6 = 0;
+module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0600);
+
unsigned int i915_lvds_downclock = 0;
module_param_named(lvds_downclock, i915_lvds_downclock, int, 0400);
#define INTEL_VGA_DEVICE(id, info) { \
.class = PCI_CLASS_DISPLAY_VGA << 8, \
- .class_mask = 0xffff00, \
+ .class_mask = 0xff0000, \
.vendor = 0x8086, \
.device = id, \
.subvendor = PCI_ANY_ID, \
error = i915_gem_init_ringbuffer(dev);
mutex_unlock(&dev->struct_mutex);
+ drm_mode_config_reset(dev);
drm_irq_install(dev);
/* Resume the modeset for every activated CRTC */
drm_helper_resume_force_mode(dev);
- if (dev_priv->renderctx && dev_priv->pwrctx)
+ if (IS_IRONLAKE_M(dev))
ironlake_enable_rc6(dev);
}
mutex_unlock(&dev->struct_mutex);
drm_irq_uninstall(dev);
+ drm_mode_config_reset(dev);
drm_irq_install(dev);
mutex_lock(&dev->struct_mutex);
}
static int __devinit
i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
+ /* Only bind to function 0 of the device. Early generations
+ * used function 1 as a placeholder for multi-head. This only causes
+ * confusion, especially on systems where both
+ * functions have the same PCI-ID!
+ */
+ if (PCI_FUNC(pdev->devfn))
+ return -ENODEV;
+
return drm_get_pci_dev(pdev, ent, &driver);
}
driver.driver_features &= ~DRIVER_MODESET;
#endif
+ if (!(driver.driver_features & DRIVER_MODESET))
+ driver.get_vblank_timestamp = NULL;
+
return drm_init(&driver);
}
/** List of all objects in gtt_space. Used to restore gtt
* mappings on resume */
struct list_head gtt_list;
- /** End of mappable part of GTT */
+
+ /** Usable portion of the GTT for GEM */
+ unsigned long gtt_start;
unsigned long gtt_mappable_end;
+ unsigned long gtt_end;
struct io_mapping *gtt_mapping;
int gtt_mtrr;
extern unsigned int i915_powersave;
extern unsigned int i915_lvds_downclock;
extern unsigned int i915_panel_use_ssc;
+extern unsigned int i915_enable_rc6;
extern int i915_suspend(struct drm_device *dev, pm_message_t state);
extern int i915_resume(struct drm_device *dev);
{
drm_i915_private_t *dev_priv = dev->dev_private;
- drm_mm_init(&dev_priv->mm.gtt_space, start,
- end - start);
+ drm_mm_init(&dev_priv->mm.gtt_space, start, end - start);
+ dev_priv->mm.gtt_start = start;
+ dev_priv->mm.gtt_mappable_end = mappable_end;
+ dev_priv->mm.gtt_end = end;
dev_priv->mm.gtt_total = end - start;
dev_priv->mm.mappable_gtt_total = min(end, mappable_end) - start;
- dev_priv->mm.gtt_mappable_end = mappable_end;
+
+ /* Take over this portion of the GTT */
+ intel_gtt_clear_range(start / PAGE_SIZE, (end-start) / PAGE_SIZE);
}
int
seqno = ring->get_seqno(ring);
- for (i = 0; i < I915_NUM_RINGS; i++)
+ for (i = 0; i < ARRAY_SIZE(ring->sync_seqno); i++)
if (seqno >= ring->sync_seqno[i])
ring->sync_seqno[i] = 0;
goto err;
seqno = i915_gem_next_request_seqno(dev, ring);
- for (i = 0; i < I915_NUM_RINGS-1; i++) {
+ for (i = 0; i < ARRAY_SIZE(ring->sync_seqno); i++) {
if (seqno < ring->sync_seqno[i]) {
/* The GPU can not handle its semaphore value wrapping,
* so every billion or so execbuffers, we need to stall
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj;
+ /* First fill our portion of the GTT with scratch pages */
+ intel_gtt_clear_range(dev_priv->mm.gtt_start / PAGE_SIZE,
+ (dev_priv->mm.gtt_end - dev_priv->mm.gtt_start) / PAGE_SIZE);
+
list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
i915_gem_clflush_object(obj);
return ret;
}
-int i915_get_vblank_timestamp(struct drm_device *dev, int crtc,
+int i915_get_vblank_timestamp(struct drm_device *dev, int pipe,
int *max_error,
struct timeval *vblank_time,
unsigned flags)
{
- struct drm_crtc *drmcrtc;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_crtc *crtc;
- if (crtc < 0 || crtc >= dev->num_crtcs) {
- DRM_ERROR("Invalid crtc %d\n", crtc);
+ if (pipe < 0 || pipe >= dev_priv->num_pipe) {
+ DRM_ERROR("Invalid crtc %d\n", pipe);
return -EINVAL;
}
/* Get drm_crtc to timestamp: */
- drmcrtc = intel_get_crtc_for_pipe(dev, crtc);
+ crtc = intel_get_crtc_for_pipe(dev, pipe);
+ if (crtc == NULL) {
+ DRM_ERROR("Invalid crtc %d\n", pipe);
+ return -EINVAL;
+ }
+
+ if (!crtc->enabled) {
+ DRM_DEBUG_KMS("crtc %d is disabled\n", pipe);
+ return -EBUSY;
+ }
/* Helper routine in DRM core does all the work: */
- return drm_calc_vbltimestamp_from_scanoutpos(dev, crtc, max_error,
- vblank_time, flags, drmcrtc);
+ return drm_calc_vbltimestamp_from_scanoutpos(dev, pipe, max_error,
+ vblank_time, flags,
+ crtc);
}
/*
struct intel_ring_buffer *ring)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 seqno = ring->get_seqno(ring);
+ u32 seqno;
+ if (ring->obj == NULL)
+ return;
+
+ seqno = ring->get_seqno(ring);
trace_i915_gem_request_complete(dev, seqno);
ring->irq_seqno = seqno;
i++;
error->pinned_bo_count = i - error->active_bo_count;
+ error->active_bo = NULL;
+ error->pinned_bo = NULL;
if (i) {
error->active_bo = kmalloc(sizeof(*error->active_bo)*i,
GFP_ATOMIC);
intel_finish_page_flip_plane(dev, 1);
}
- if (pipea_stats & vblank_status) {
+ if (pipea_stats & vblank_status &&
+ drm_handle_vblank(dev, 0)) {
vblank++;
- drm_handle_vblank(dev, 0);
if (!dev_priv->flip_pending_is_done) {
i915_pageflip_stall_check(dev, 0);
intel_finish_page_flip(dev, 0);
}
}
- if (pipeb_stats & vblank_status) {
+ if (pipeb_stats & vblank_status &&
+ drm_handle_vblank(dev, 1)) {
vblank++;
- drm_handle_vblank(dev, 1);
if (!dev_priv->flip_pending_is_done) {
i915_pageflip_stall_check(dev, 1);
intel_finish_page_flip(dev, 1);
if (master_priv->sarea_priv)
master_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
- ret = -ENODEV;
if (ring->irq_get(ring)) {
DRM_WAIT_ON(ret, ring->irq_queue, 3 * DRM_HZ,
READ_BREADCRUMB(dev_priv) >= irq_nr);
ring->irq_put(ring);
- }
+ } else if (wait_for(READ_BREADCRUMB(dev_priv) >= irq_nr, 3000))
+ ret = -EBUSY;
if (ret == -EBUSY) {
DRM_ERROR("EBUSY -- rec: %d emitted: %d\n",
* address/value pairs. Don't overdo it, though, x <= 2^4 must hold!
*/
#define MI_LOAD_REGISTER_IMM(x) MI_INSTR(0x22, 2*x-1)
-#define MI_FLUSH_DW MI_INSTR(0x26, 2) /* for GEN6 */
+#define MI_FLUSH_DW MI_INSTR(0x26, 1) /* for GEN6 */
+#define MI_INVALIDATE_TLB (1<<18)
+#define MI_INVALIDATE_BSD (1<<7)
#define MI_BATCH_BUFFER MI_INSTR(0x30, 1)
#define MI_BATCH_NON_SECURE (1)
#define MI_BATCH_NON_SECURE_I965 (1<<8)
#define GEN6_BLITTER_SYNC_STATUS (1 << 24)
#define GEN6_BLITTER_USER_INTERRUPT (1 << 22)
+#define GEN6_BLITTER_ECOSKPD 0x221d0
+#define GEN6_BLITTER_LOCK_SHIFT 16
+#define GEN6_BLITTER_FBC_NOTIFY (1<<3)
+
#define GEN6_BSD_SLEEP_PSMI_CONTROL 0x12050
#define GEN6_BSD_SLEEP_PSMI_CONTROL_RC_ILDL_MESSAGE_MODIFY_MASK (1 << 16)
#define GEN6_BSD_SLEEP_PSMI_CONTROL_RC_ILDL_MESSAGE_DISABLE (1 << 0)
/* Backlight control */
#define BLC_PWM_CTL 0x61254
-#define BACKLIGHT_MODULATION_FREQ_SHIFT (17)
#define BLC_PWM_CTL2 0x61250 /* 965+ only */
-#define BLM_COMBINATION_MODE (1 << 30)
-/*
- * This is the most significant 15 bits of the number of backlight cycles in a
- * complete cycle of the modulated backlight control.
- *
- * The actual value is this field multiplied by two.
- */
-#define BACKLIGHT_MODULATION_FREQ_MASK (0x7fff << 17)
-#define BLM_LEGACY_MODE (1 << 16)
/*
* This is the number of cycles out of the backlight modulation cycle for which
* the backlight is on.
#define DISPLAY_PORT_PLL_BIOS_2 0x46014
#define PCH_DSPCLK_GATE_D 0x42020
+# define DPFCUNIT_CLOCK_GATE_DISABLE (1 << 9)
+# define DPFCRUNIT_CLOCK_GATE_DISABLE (1 << 8)
# define DPFDUNIT_CLOCK_GATE_DISABLE (1 << 7)
# define DPARBUNIT_CLOCK_GATE_DISABLE (1 << 5)
return 0;
}
+static void intel_crt_reset(struct drm_connector *connector)
+{
+ struct drm_device *dev = connector->dev;
+ struct intel_crt *crt = intel_attached_crt(connector);
+
+ if (HAS_PCH_SPLIT(dev))
+ crt->force_hotplug_required = 1;
+}
+
/*
* Routines for controlling stuff on the analog port
*/
};
static const struct drm_connector_funcs intel_crt_connector_funcs = {
+ .reset = intel_crt_reset,
.dpms = drm_helper_connector_dpms,
.detect = intel_crt_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
}
+static void sandybridge_blit_fbc_update(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 blt_ecoskpd;
+
+ /* Make sure blitter notifies FBC of writes */
+ __gen6_force_wake_get(dev_priv);
+ blt_ecoskpd = I915_READ(GEN6_BLITTER_ECOSKPD);
+ blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY <<
+ GEN6_BLITTER_LOCK_SHIFT;
+ I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
+ blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY;
+ I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
+ blt_ecoskpd &= ~(GEN6_BLITTER_FBC_NOTIFY <<
+ GEN6_BLITTER_LOCK_SHIFT);
+ I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
+ POSTING_READ(GEN6_BLITTER_ECOSKPD);
+ __gen6_force_wake_put(dev_priv);
+}
+
static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
{
struct drm_device *dev = crtc->dev;
I915_WRITE(SNB_DPFC_CTL_SA,
SNB_CPU_FENCE_ENABLE | dev_priv->cfb_fence);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
+ sandybridge_blit_fbc_update(dev);
}
DRM_DEBUG_KMS("enabled fbc on plane %d\n", intel_crtc->plane);
return ret;
}
+static void intel_crtc_reset(struct drm_crtc *crtc)
+{
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ /* Reset flags back to the 'unknown' status so that they
+ * will be correctly set on the initial modeset.
+ */
+ intel_crtc->dpms_mode = -1;
+}
+
static struct drm_crtc_helper_funcs intel_helper_funcs = {
.dpms = intel_crtc_dpms,
.mode_fixup = intel_crtc_mode_fixup,
};
static const struct drm_crtc_funcs intel_crtc_funcs = {
+ .reset = intel_crtc_reset,
.cursor_set = intel_crtc_cursor_set,
.cursor_move = intel_crtc_cursor_move,
.gamma_set = intel_crtc_gamma_set,
dev_priv->plane_to_crtc_mapping[intel_crtc->plane] = &intel_crtc->base;
dev_priv->pipe_to_crtc_mapping[intel_crtc->pipe] = &intel_crtc->base;
- intel_crtc->cursor_addr = 0;
- intel_crtc->dpms_mode = -1;
+ intel_crtc_reset(&intel_crtc->base);
intel_crtc->active = true; /* force the pipe off on setup_init_config */
if (HAS_PCH_SPLIT(dev)) {
if (IS_GEN5(dev)) {
/* Required for FBC */
- dspclk_gate |= DPFDUNIT_CLOCK_GATE_DISABLE;
+ dspclk_gate |= DPFCUNIT_CLOCK_GATE_DISABLE |
+ DPFCRUNIT_CLOCK_GATE_DISABLE |
+ DPFDUNIT_CLOCK_GATE_DISABLE;
/* Required for CxSR */
dspclk_gate |= DPARBUNIT_CLOCK_GATE_DISABLE;
}
}
-void intel_disable_clock_gating(struct drm_device *dev)
+static void ironlake_teardown_rc6(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (dev_priv->renderctx) {
- struct drm_i915_gem_object *obj = dev_priv->renderctx;
-
- I915_WRITE(CCID, 0);
- POSTING_READ(CCID);
-
- i915_gem_object_unpin(obj);
- drm_gem_object_unreference(&obj->base);
+ i915_gem_object_unpin(dev_priv->renderctx);
+ drm_gem_object_unreference(&dev_priv->renderctx->base);
dev_priv->renderctx = NULL;
}
if (dev_priv->pwrctx) {
- struct drm_i915_gem_object *obj = dev_priv->pwrctx;
+ i915_gem_object_unpin(dev_priv->pwrctx);
+ drm_gem_object_unreference(&dev_priv->pwrctx->base);
+ dev_priv->pwrctx = NULL;
+ }
+}
+
+static void ironlake_disable_rc6(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ if (I915_READ(PWRCTXA)) {
+ /* Wake the GPU, prevent RC6, then restore RSTDBYCTL */
+ I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) | RCX_SW_EXIT);
+ wait_for(((I915_READ(RSTDBYCTL) & RSX_STATUS_MASK) == RSX_STATUS_ON),
+ 50);
I915_WRITE(PWRCTXA, 0);
POSTING_READ(PWRCTXA);
- i915_gem_object_unpin(obj);
- drm_gem_object_unreference(&obj->base);
- dev_priv->pwrctx = NULL;
+ I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
+ POSTING_READ(RSTDBYCTL);
}
+
+ ironlake_disable_rc6(dev);
}
-static void ironlake_disable_rc6(struct drm_device *dev)
+static int ironlake_setup_rc6(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- /* Wake the GPU, prevent RC6, then restore RSTDBYCTL */
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) | RCX_SW_EXIT);
- wait_for(((I915_READ(RSTDBYCTL) & RSX_STATUS_MASK) == RSX_STATUS_ON),
- 10);
- POSTING_READ(CCID);
- I915_WRITE(PWRCTXA, 0);
- POSTING_READ(PWRCTXA);
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
- POSTING_READ(RSTDBYCTL);
- i915_gem_object_unpin(dev_priv->renderctx);
- drm_gem_object_unreference(&dev_priv->renderctx->base);
- dev_priv->renderctx = NULL;
- i915_gem_object_unpin(dev_priv->pwrctx);
- drm_gem_object_unreference(&dev_priv->pwrctx->base);
- dev_priv->pwrctx = NULL;
+ if (dev_priv->renderctx == NULL)
+ dev_priv->renderctx = intel_alloc_context_page(dev);
+ if (!dev_priv->renderctx)
+ return -ENOMEM;
+
+ if (dev_priv->pwrctx == NULL)
+ dev_priv->pwrctx = intel_alloc_context_page(dev);
+ if (!dev_priv->pwrctx) {
+ ironlake_teardown_rc6(dev);
+ return -ENOMEM;
+ }
+
+ return 0;
}
void ironlake_enable_rc6(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
+ /* rc6 disabled by default due to repeated reports of hanging during
+ * boot and resume.
+ */
+ if (!i915_enable_rc6)
+ return;
+
+ ret = ironlake_setup_rc6(dev);
+ if (ret)
+ return;
+
/*
* GPU can automatically power down the render unit if given a page
* to save state.
*/
ret = BEGIN_LP_RING(6);
if (ret) {
- ironlake_disable_rc6(dev);
+ ironlake_teardown_rc6(dev);
return;
}
+
OUT_RING(MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN);
OUT_RING(MI_SET_CONTEXT);
OUT_RING(dev_priv->renderctx->gtt_offset |
I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
}
+
/* Set up chip specific display functions */
static void intel_init_display(struct drm_device *dev)
{
if (IS_GEN6(dev))
gen6_enable_rps(dev_priv);
- if (IS_IRONLAKE_M(dev)) {
- dev_priv->renderctx = intel_alloc_context_page(dev);
- if (!dev_priv->renderctx)
- goto skip_rc6;
- dev_priv->pwrctx = intel_alloc_context_page(dev);
- if (!dev_priv->pwrctx) {
- i915_gem_object_unpin(dev_priv->renderctx);
- drm_gem_object_unreference(&dev_priv->renderctx->base);
- dev_priv->renderctx = NULL;
- goto skip_rc6;
- }
+ if (IS_IRONLAKE_M(dev))
ironlake_enable_rc6(dev);
- }
-skip_rc6:
INIT_WORK(&dev_priv->idle_work, intel_idle_update);
setup_timer(&dev_priv->idle_timer, intel_gpu_idle_timer,
(unsigned long)dev);
return 0;
}
+static bool
+intel_dp_detect_audio(struct drm_connector *connector)
+{
+ struct intel_dp *intel_dp = intel_attached_dp(connector);
+ struct edid *edid;
+ bool has_audio = false;
+
+ edid = drm_get_edid(connector, &intel_dp->adapter);
+ if (edid) {
+ has_audio = drm_detect_monitor_audio(edid);
+
+ connector->display_info.raw_edid = NULL;
+ kfree(edid);
+ }
+
+ return has_audio;
+}
+
static int
intel_dp_set_property(struct drm_connector *connector,
struct drm_property *property,
return ret;
if (property == intel_dp->force_audio_property) {
- if (val == intel_dp->force_audio)
+ int i = val;
+ bool has_audio;
+
+ if (i == intel_dp->force_audio)
return 0;
- intel_dp->force_audio = val;
+ intel_dp->force_audio = i;
- if (val > 0 && intel_dp->has_audio)
- return 0;
- if (val < 0 && !intel_dp->has_audio)
+ if (i == 0)
+ has_audio = intel_dp_detect_audio(connector);
+ else
+ has_audio = i > 0;
+
+ if (has_audio == intel_dp->has_audio)
return 0;
- intel_dp->has_audio = val > 0;
+ intel_dp->has_audio = has_audio;
goto done;
}
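The reworked property handler treats force_audio as a tri-state: a positive value forces audio on, a negative value forces it off, and zero means probe the EDID. A compact sketch of that decision, with a hypothetical stand-in for the EDID probe:

#include <stdbool.h>

/* Hypothetical stand-in for the EDID query done by intel_dp_detect_audio(). */
static bool probe_edid_for_audio(void)
{
        return true;    /* pretend the sink advertises audio */
}

/* force_audio: 0 = auto (probe EDID), > 0 = force on, < 0 = force off. */
static bool resolve_has_audio(int force_audio)
{
        if (force_audio == 0)
                return probe_edid_for_audio();
        return force_audio > 0;
}

The same pattern recurs in the HDMI and SDVO force_audio handlers further below.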
extern void intel_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
u16 *blue, int regno);
extern void intel_enable_clock_gating(struct drm_device *dev);
-extern void intel_disable_clock_gating(struct drm_device *dev);
extern void ironlake_enable_drps(struct drm_device *dev);
extern void ironlake_disable_drps(struct drm_device *dev);
extern void gen6_enable_rps(struct drm_i915_private *dev_priv);
&dev_priv->gmbus[intel_hdmi->ddc_bus].adapter);
}
+static bool
+intel_hdmi_detect_audio(struct drm_connector *connector)
+{
+ struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+ struct drm_i915_private *dev_priv = connector->dev->dev_private;
+ struct edid *edid;
+ bool has_audio = false;
+
+ edid = drm_get_edid(connector,
+ &dev_priv->gmbus[intel_hdmi->ddc_bus].adapter);
+ if (edid) {
+ if (edid->input & DRM_EDID_INPUT_DIGITAL)
+ has_audio = drm_detect_monitor_audio(edid);
+
+ connector->display_info.raw_edid = NULL;
+ kfree(edid);
+ }
+
+ return has_audio;
+}
+
static int
intel_hdmi_set_property(struct drm_connector *connector,
struct drm_property *property,
return ret;
if (property == intel_hdmi->force_audio_property) {
- if (val == intel_hdmi->force_audio)
+ int i = val;
+ bool has_audio;
+
+ if (i == intel_hdmi->force_audio)
return 0;
- intel_hdmi->force_audio = val;
+ intel_hdmi->force_audio = i;
- if (val > 0 && intel_hdmi->has_audio)
- return 0;
- if (val < 0 && !intel_hdmi->has_audio)
+ if (i == 0)
+ has_audio = intel_hdmi_detect_audio(connector);
+ else
+ has_audio = i > 0;
+
+ if (has_audio == intel_hdmi->has_audio)
return 0;
- intel_hdmi->has_audio = val > 0;
+ intel_hdmi->has_audio = has_audio;
goto done;
}
return true;
}
- /* Make sure pre-965s set dither correctly */
- if (INTEL_INFO(dev)->gen < 4) {
- if (dev_priv->lvds_dither)
- pfit_control |= PANEL_8TO6_DITHER_ENABLE;
- }
-
/* Native modes don't need fitting */
if (adjusted_mode->hdisplay == mode->hdisplay &&
adjusted_mode->vdisplay == mode->vdisplay)
}
out:
+ /* If not enabling scaling, be consistent and always use 0. */
if ((pfit_control & PFIT_ENABLE) == 0) {
pfit_control = 0;
pfit_pgm_ratios = 0;
}
+
+ /* Make sure pre-965s set dither correctly */
+ if (INTEL_INFO(dev)->gen < 4 && dev_priv->lvds_dither)
+ pfit_control |= PANEL_8TO6_DITHER_ENABLE;
+
if (pfit_control != intel_lvds->pfit_control ||
pfit_pgm_ratios != intel_lvds->pfit_pgm_ratios) {
intel_lvds->pfit_control = pfit_control;
*/
#include <linux/acpi.h>
+#include <linux/acpi_io.h>
#include <acpi/video.h>
#include "drmP.h"
return -ENOTSUPP;
}
- base = ioremap(asls, OPREGION_SIZE);
+ base = acpi_os_ioremap(asls, OPREGION_SIZE);
if (!base)
return -ENOMEM;
#include "intel_drv.h"
-#define PCI_LBPC 0xf4 /* legacy/combination backlight modes */
-
void
intel_fixed_panel_mode(struct drm_display_mode *fixed_mode,
struct drm_display_mode *adjusted_mode)
dev_priv->pch_pf_size = (width << 16) | height;
}
-static int is_backlight_combination_mode(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (INTEL_INFO(dev)->gen >= 4)
- return I915_READ(BLC_PWM_CTL2) & BLM_COMBINATION_MODE;
-
- if (IS_GEN2(dev))
- return I915_READ(BLC_PWM_CTL) & BLM_LEGACY_MODE;
-
- return 0;
-}
-
static u32 i915_read_blc_pwm_ctl(struct drm_i915_private *dev_priv)
{
u32 val;
if (INTEL_INFO(dev)->gen < 4)
max &= ~1;
}
-
- if (is_backlight_combination_mode(dev))
- max *= 0xff;
}
DRM_DEBUG_DRIVER("max backlight PWM = %d\n", max);
val = I915_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
if (IS_PINEVIEW(dev))
val >>= 1;
-
- if (is_backlight_combination_mode(dev)){
- u8 lbpc;
-
- val &= ~1;
- pci_read_config_byte(dev->pdev, PCI_LBPC, &lbpc);
- val *= lbpc;
- val >>= 1;
- }
}
DRM_DEBUG_DRIVER("get backlight PWM = %d\n", val);
if (HAS_PCH_SPLIT(dev))
return intel_pch_panel_set_backlight(dev, level);
-
- if (is_backlight_combination_mode(dev)){
- u32 max = intel_panel_get_max_backlight(dev);
- u8 lpbc;
-
- lpbc = level * 0xfe / max + 1;
- level /= lpbc;
- pci_write_config_byte(dev->pdev, PCI_LBPC, lpbc);
- }
-
tmp = I915_READ(BLC_PWM_CTL);
if (IS_PINEVIEW(dev)) {
tmp &= ~(BACKLIGHT_DUTY_CYCLE_MASK - 1);
#include "i915_trace.h"
#include "intel_drv.h"
+static inline int ring_space(struct intel_ring_buffer *ring)
+{
+ int space = (ring->head & HEAD_ADDR) - (ring->tail + 8);
+ if (space < 0)
+ space += ring->size;
+ return space;
+}
+
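ring_space() reports the free bytes in the ring as head minus (tail + 8), wrapping by the ring size when the result goes negative. For example, with a 64 KiB ring, head = 0x100 and tail = 0xff00, the raw difference is 0x100 - 0xff08 = -0xfe08; adding the 0x10000 ring size gives 0x1f8 (504) free bytes.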
static u32 i915_gem_get_seqno(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
if (!drm_core_check_feature(ring->dev, DRIVER_MODESET))
i915_kernel_lost_context(ring->dev);
else {
- ring->head = I915_READ_HEAD(ring) & HEAD_ADDR;
+ ring->head = I915_READ_HEAD(ring);
ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
- ring->space = ring->head - (ring->tail + 8);
- if (ring->space < 0)
- ring->space += ring->size;
+ ring->space = ring_space(ring);
}
return 0;
}
ring->tail = 0;
- ring->space = ring->head - 8;
+ ring->space = ring_space(ring);
return 0;
}
unsigned long end;
u32 head;
+ /* If the reported head position has wrapped or hasn't advanced,
+ * fall back to the slow and accurate path.
+ */
+ head = intel_read_status_page(ring, 4);
+ if (head > ring->head) {
+ ring->head = head;
+ ring->space = ring_space(ring);
+ if (ring->space >= n)
+ return 0;
+ }
+
trace_i915_ring_wait_begin (dev);
end = jiffies + 3 * HZ;
do {
- /* If the reported head position has wrapped or hasn't advanced,
- * fallback to the slow and accurate path.
- */
- head = intel_read_status_page(ring, 4);
- if (head < ring->actual_head)
- head = I915_READ_HEAD(ring);
- ring->actual_head = head;
- ring->head = head & HEAD_ADDR;
- ring->space = ring->head - (ring->tail + 8);
- if (ring->space < 0)
- ring->space += ring->size;
+ ring->head = I915_READ_HEAD(ring);
+ ring->space = ring_space(ring);
if (ring->space >= n) {
trace_i915_ring_wait_end(dev);
return 0;
}
static int gen6_ring_flush(struct intel_ring_buffer *ring,
- u32 invalidate_domains,
- u32 flush_domains)
+ u32 invalidate, u32 flush)
{
+ uint32_t cmd;
int ret;
- if ((flush_domains & I915_GEM_DOMAIN_RENDER) == 0)
+ if (((invalidate | flush) & I915_GEM_GPU_DOMAINS) == 0)
return 0;
ret = intel_ring_begin(ring, 4);
if (ret)
return ret;
- intel_ring_emit(ring, MI_FLUSH_DW);
- intel_ring_emit(ring, 0);
+ cmd = MI_FLUSH_DW;
+ if (invalidate & I915_GEM_GPU_DOMAINS)
+ cmd |= MI_INVALIDATE_TLB | MI_INVALIDATE_BSD;
+ intel_ring_emit(ring, cmd);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
}
static int blt_ring_flush(struct intel_ring_buffer *ring,
- u32 invalidate_domains,
- u32 flush_domains)
+ u32 invalidate, u32 flush)
{
+ uint32_t cmd;
int ret;
- if ((flush_domains & I915_GEM_DOMAIN_RENDER) == 0)
+ if (((invalidate | flush) & I915_GEM_DOMAIN_RENDER) == 0)
return 0;
ret = blt_ring_begin(ring, 4);
if (ret)
return ret;
- intel_ring_emit(ring, MI_FLUSH_DW);
- intel_ring_emit(ring, 0);
+ cmd = MI_FLUSH_DW;
+ if (invalidate & I915_GEM_DOMAIN_RENDER)
+ cmd |= MI_INVALIDATE_TLB;
+ intel_ring_emit(ring, cmd);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
return intel_init_ring_buffer(dev, ring);
}
+int intel_render_ring_init_dri(struct drm_device *dev, u64 start, u32 size)
+{
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ struct intel_ring_buffer *ring = &dev_priv->ring[RCS];
+
+ *ring = render_ring;
+ if (INTEL_INFO(dev)->gen >= 6) {
+ ring->add_request = gen6_add_request;
+ ring->irq_get = gen6_render_ring_get_irq;
+ ring->irq_put = gen6_render_ring_put_irq;
+ } else if (IS_GEN5(dev)) {
+ ring->add_request = pc_render_add_request;
+ ring->get_seqno = pc_render_get_seqno;
+ }
+
+ ring->dev = dev;
+ INIT_LIST_HEAD(&ring->active_list);
+ INIT_LIST_HEAD(&ring->request_list);
+ INIT_LIST_HEAD(&ring->gpu_write_list);
+
+ ring->size = size;
+ ring->effective_size = ring->size;
+ if (IS_I830(ring->dev))
+ ring->effective_size -= 128;
+
+ ring->map.offset = start;
+ ring->map.size = size;
+ ring->map.type = 0;
+ ring->map.flags = 0;
+ ring->map.mtrr = 0;
+
+ drm_core_ioremap_wc(&ring->map, dev);
+ if (ring->map.handle == NULL) {
+ DRM_ERROR("can not ioremap virtual address for"
+ " ring buffer\n");
+ return -ENOMEM;
+ }
+
+ ring->virtual_start = (void __force __iomem *)ring->map.handle;
+ return 0;
+}
+
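The i915_dma.c hunk earlier in this section now calls intel_render_ring_init_dri() instead of open-coding the ring map and ioremap, so the legacy DRI initialization path shares its ring setup with the rest of the driver.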
int intel_init_bsd_ring_buffer(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_device *dev;
struct drm_i915_gem_object *obj;
- u32 actual_head;
u32 head;
u32 tail;
int space;
u32 intel_ring_get_active_head(struct intel_ring_buffer *ring);
void intel_ring_setup_status_page(struct intel_ring_buffer *ring);
+/* DRI warts */
+int intel_render_ring_init_dri(struct drm_device *dev, u64 start, u32 size);
+
#endif /* _INTEL_RINGBUFFER_H_ */
SDVO_TV_MASK)
#define IS_TV(c) (c->output_flag & SDVO_TV_MASK)
+#define IS_TMDS(c) (c->output_flag & SDVO_TMDS_MASK)
#define IS_LVDS(c) (c->output_flag & SDVO_LVDS_MASK)
#define IS_TV_OR_LVDS(c) (c->output_flag & (SDVO_TV_MASK | SDVO_LVDS_MASK))
return false;
}
- i = 3;
- while (status == SDVO_CMD_STATUS_PENDING && i--) {
- if (!intel_sdvo_read_byte(intel_sdvo,
- SDVO_I2C_CMD_STATUS,
- &status))
- return false;
- }
- if (status != SDVO_CMD_STATUS_SUCCESS) {
- DRM_DEBUG_KMS("command returns response %s [%d]\n",
- status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP ? cmd_status_names[status] : "???",
- status);
- return false;
- }
-
return true;
}
u8 status;
int i;
+ DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(intel_sdvo));
+
/*
* The documentation states that all commands will be
* processed within 15µs, and that we need only poll
*
* Check 5 times in case the hardware failed to read the docs.
*/
- do {
+ if (!intel_sdvo_read_byte(intel_sdvo,
+ SDVO_I2C_CMD_STATUS,
+ &status))
+ goto log_fail;
+
+ while (status == SDVO_CMD_STATUS_PENDING && retry--) {
+ udelay(15);
if (!intel_sdvo_read_byte(intel_sdvo,
SDVO_I2C_CMD_STATUS,
&status))
- return false;
- } while (status == SDVO_CMD_STATUS_PENDING && --retry);
+ goto log_fail;
+ }
- DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(intel_sdvo));
if (status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP)
DRM_LOG_KMS("(%s)", cmd_status_names[status]);
else
return true;
log_fail:
- DRM_LOG_KMS("\n");
+ DRM_LOG_KMS("... failed\n");
return false;
}
static bool intel_sdvo_set_control_bus_switch(struct intel_sdvo *intel_sdvo,
u8 ddc_bus)
{
+ /* This must be the immediately preceding write before the i2c xfer */
return intel_sdvo_write_cmd(intel_sdvo,
SDVO_CMD_SET_CONTROL_BUS_SWITCH,
&ddc_bus, 1);
static bool intel_sdvo_set_value(struct intel_sdvo *intel_sdvo, u8 cmd, const void *data, int len)
{
- return intel_sdvo_write_cmd(intel_sdvo, cmd, data, len);
+ if (!intel_sdvo_write_cmd(intel_sdvo, cmd, data, len))
+ return false;
+
+ return intel_sdvo_read_response(intel_sdvo, NULL, 0);
}
static bool
intel_dip_infoframe_csum(&avi_if);
- if (!intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_INDEX,
+ if (!intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_INDEX,
set_buf_index, 2))
return false;
for (i = 0; i < sizeof(avi_if); i += 8) {
- if (!intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_DATA,
+ if (!intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_DATA,
data, 8))
return false;
data++;
}
- return intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_HBUF_TXRATE,
+ return intel_sdvo_set_value(intel_sdvo,
+ SDVO_CMD_SET_HBUF_TXRATE,
&tx_rate, 1);
}
intel_sdvo->has_hdmi_monitor = drm_detect_hdmi_monitor(edid);
intel_sdvo->has_hdmi_audio = drm_detect_monitor_audio(edid);
}
- }
+ } else
+ status = connector_status_disconnected;
connector->display_info.raw_edid = NULL;
kfree(edid);
}
if ((intel_sdvo_connector->output_flag & response) == 0)
ret = connector_status_disconnected;
- else if (response & SDVO_TMDS_MASK)
+ else if (IS_TMDS(intel_sdvo_connector))
ret = intel_sdvo_hdmi_sink_detect(connector);
- else
- ret = connector_status_connected;
+ else {
+ struct edid *edid;
+
+ /* if we have an edid check it matches the connection */
+ edid = intel_sdvo_get_edid(connector);
+ if (edid == NULL)
+ edid = intel_sdvo_get_analog_edid(connector);
+ if (edid != NULL) {
+ if (edid->input & DRM_EDID_INPUT_DIGITAL)
+ ret = connector_status_disconnected;
+ else
+ ret = connector_status_connected;
+ connector->display_info.raw_edid = NULL;
+ kfree(edid);
+ } else
+ ret = connector_status_connected;
+ }
/* May update encoder flag for like clock for SDVO TV, etc.*/
if (ret == connector_status_connected) {
edid = intel_sdvo_get_analog_edid(connector);
if (edid != NULL) {
- if (edid->input & DRM_EDID_INPUT_DIGITAL) {
+ struct intel_sdvo_connector *intel_sdvo_connector = to_intel_sdvo_connector(connector);
+ bool monitor_is_digital = !!(edid->input & DRM_EDID_INPUT_DIGITAL);
+ bool connector_is_digital = !!IS_TMDS(intel_sdvo_connector);
+
+ if (connector_is_digital == monitor_is_digital) {
drm_mode_connector_update_edid_property(connector, edid);
drm_add_edid_modes(connector, edid);
}
+
connector->display_info.raw_edid = NULL;
kfree(edid);
}
kfree(connector);
}
+static bool intel_sdvo_detect_hdmi_audio(struct drm_connector *connector)
+{
+ struct intel_sdvo *intel_sdvo = intel_attached_sdvo(connector);
+ struct edid *edid;
+ bool has_audio = false;
+
+ if (!intel_sdvo->is_hdmi)
+ return false;
+
+ edid = intel_sdvo_get_edid(connector);
+ if (edid != NULL && edid->input & DRM_EDID_INPUT_DIGITAL)
+ has_audio = drm_detect_monitor_audio(edid);
+
+ return has_audio;
+}
+
static int
intel_sdvo_set_property(struct drm_connector *connector,
struct drm_property *property,
return ret;
if (property == intel_sdvo_connector->force_audio_property) {
- if (val == intel_sdvo_connector->force_audio)
+ int i = val;
+ bool has_audio;
+
+ if (i == intel_sdvo_connector->force_audio)
return 0;
- intel_sdvo_connector->force_audio = val;
+ intel_sdvo_connector->force_audio = i;
- if (val > 0 && intel_sdvo->has_hdmi_audio)
- return 0;
- if (val < 0 && !intel_sdvo->has_hdmi_audio)
+ if (i == 0)
+ has_audio = intel_sdvo_detect_hdmi_audio(connector);
+ else
+ has_audio = i > 0;
+
+ if (has_audio == intel_sdvo->has_hdmi_audio)
return 0;
- intel_sdvo->has_hdmi_audio = val > 0;
+ intel_sdvo->has_hdmi_audio = has_audio;
goto done;
}
* \return false if TV is disconnected.
*/
static int
-intel_tv_detect_type (struct intel_tv *intel_tv)
+intel_tv_detect_type (struct intel_tv *intel_tv,
+ struct drm_connector *connector)
{
struct drm_encoder *encoder = &intel_tv->base.base;
struct drm_device *dev = encoder->dev;
int type;
/* Disable TV interrupts around load detect or we'll recurse */
- spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
- i915_disable_pipestat(dev_priv, 0,
- PIPE_HOTPLUG_INTERRUPT_ENABLE |
- PIPE_HOTPLUG_TV_INTERRUPT_ENABLE);
- spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ if (connector->polled & DRM_CONNECTOR_POLL_HPD) {
+ spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+ i915_disable_pipestat(dev_priv, 0,
+ PIPE_HOTPLUG_INTERRUPT_ENABLE |
+ PIPE_HOTPLUG_TV_INTERRUPT_ENABLE);
+ spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ }
save_tv_dac = tv_dac = I915_READ(TV_DAC);
save_tv_ctl = tv_ctl = I915_READ(TV_CTL);
I915_WRITE(TV_CTL, save_tv_ctl);
/* Restore interrupt config */
- spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
- i915_enable_pipestat(dev_priv, 0,
- PIPE_HOTPLUG_INTERRUPT_ENABLE |
- PIPE_HOTPLUG_TV_INTERRUPT_ENABLE);
- spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ if (connector->polled & DRM_CONNECTOR_POLL_HPD) {
+ spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+ i915_enable_pipestat(dev_priv, 0,
+ PIPE_HOTPLUG_INTERRUPT_ENABLE |
+ PIPE_HOTPLUG_TV_INTERRUPT_ENABLE);
+ spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ }
return type;
}
drm_mode_set_crtcinfo(&mode, CRTC_INTERLACE_HALVE_V);
if (intel_tv->base.base.crtc && intel_tv->base.base.crtc->enabled) {
- type = intel_tv_detect_type(intel_tv);
+ type = intel_tv_detect_type(intel_tv, connector);
} else if (force) {
struct drm_crtc *crtc;
int dpms_mode;
crtc = intel_get_load_detect_pipe(&intel_tv->base, connector,
&mode, &dpms_mode);
if (crtc) {
- type = intel_tv_detect_type(intel_tv);
+ type = intel_tv_detect_type(intel_tv, connector);
intel_release_load_detect_pipe(&intel_tv->base, connector,
dpms_mode);
} else
intel_encoder = &intel_tv->base;
connector = &intel_connector->base;
+ /* The documentation, for the older chipsets at least, recommends
+ * using a polling method rather than hotplug detection for TVs.
+ * This is because, in order to perform the hotplug detection, the PLLs
+ * for the TV must be kept alive, increasing power drain and starving
+ * bandwidth from other encoders. Notably, it causes
+ * pipe underruns on Crestline when this encoder is supposedly idle.
+ *
+ * More recent chipsets favour HDMI rather than integrated S-Video.
+ */
+ connector->polled =
+ DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+
drm_connector_init(dev, connector, &intel_tv_connector_funcs,
DRM_MODE_CONNECTOR_SVIDEO);
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB
- select FRAMEBUFFER_CONSOLE if !EMBEDDED
+ select FRAMEBUFFER_CONSOLE if !EXPERT
select FB_BACKLIGHT if DRM_NOUVEAU_BACKLIGHT
select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && VIDEO_OUTPUT_CONTROL && INPUT
help
entry->tvconf.has_component_output = false;
break;
case OUTPUT_LVDS:
- if ((conn & 0x00003f00) != 0x10)
+ if ((conn & 0x00003f00) >> 8 != 0x10)
entry->lvdsconf.use_straps_for_mode = true;
entry->lvdsconf.use_power_scripts = true;
break;
static bool
apply_dcb_encoder_quirks(struct drm_device *dev, int idx, u32 *conn, u32 *conf)
{
+ struct drm_nouveau_private *dev_priv = dev->dev_private;
+ struct dcb_table *dcb = &dev_priv->vbios.dcb;
+
/* Dell Precision M6300
* DCB entry 2: 02025312 00000010
* DCB entry 3: 02026312 00000020
return false;
}
+ /* GeForce3 Ti 200
+ *
+ * DCB reports an LVDS output that should be TMDS:
+ * DCB entry 1: f2005014 ffffffff
+ */
+ if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) {
+ if (*conn == 0xf2005014 && *conf == 0xffffffff) {
+ fabricate_dcb_output(dcb, OUTPUT_TMDS, 1, 1, 1);
+ return false;
+ }
+ }
+
return true;
}
}
}
+ nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
nouveau_bo_placement_set(nvbo, flags, 0);
nvbo->channel = chan;
set_placement_range(struct nouveau_bo *nvbo, uint32_t type)
{
struct drm_nouveau_private *dev_priv = nouveau_bdev(nvbo->bo.bdev);
+ int vram_pages = dev_priv->vram_size >> PAGE_SHIFT;
if (dev_priv->card_type == NV_10 &&
- nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM)) {
+ nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM) &&
+ nvbo->bo.mem.num_pages < vram_pages / 2) {
/*
* Make sure that the color and depth buffers are handled
* by independent memory controller units. Up to a 9x
* speed up when alpha-blending and depth-test are enabled
* at the same time.
*/
- int vram_pages = dev_priv->vram_size >> PAGE_SHIFT;
-
if (nvbo->tile_flags & NOUVEAU_GEM_TILE_ZETA) {
nvbo->placement.fpfn = vram_pages / 2;
nvbo->placement.lpfn = ~0;
if (ret)
goto out;
- ret = ttm_bo_move_ttm(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
+ ret = ttm_bo_move_ttm(bo, true, no_wait_reserve, no_wait_gpu, new_mem);
out:
ttm_bo_mem_put(bo, &tmp_mem);
return ret;
if (ret)
return ret;
- ret = ttm_bo_move_ttm(bo, evict, no_wait_reserve, no_wait_gpu, &tmp_mem);
+ ret = ttm_bo_move_ttm(bo, true, no_wait_reserve, no_wait_gpu, &tmp_mem);
if (ret)
goto out;
- ret = nouveau_bo_move_m2mf(bo, evict, intr, no_wait_reserve, no_wait_gpu, new_mem);
+ ret = nouveau_bo_move_m2mf(bo, true, intr, no_wait_reserve, no_wait_gpu, new_mem);
if (ret)
goto out;
int high_w = 0, high_h = 0, high_v = 0;
list_for_each_entry(mode, &nv_connector->base.probed_modes, head) {
+ mode->vrefresh = drm_mode_vrefresh(mode);
if (helper->mode_valid(connector, mode) != MODE_OK ||
(mode->flags & DRM_MODE_FLAG_INTERLACE))
continue;
pci_set_power_state(pdev, PCI_D3hot);
}
- acquire_console_sem();
+ console_lock();
nouveau_fbcon_set_suspend(dev, 1);
- release_console_sem();
+ console_unlock();
nouveau_fbcon_restore_accel(dev);
return 0;
nv_crtc->lut.depth = 0;
}
- acquire_console_sem();
+ console_lock();
nouveau_fbcon_set_suspend(dev, 0);
- release_console_sem();
+ console_unlock();
nouveau_fbcon_zfill_all(dev);
struct nouveau_fence *fence);
extern const struct ttm_mem_type_manager_func nouveau_vram_manager;
-/* nvc0_vram.c */
-extern const struct ttm_mem_type_manager_func nvc0_vram_manager;
-
/* nouveau_notifier.c */
extern int nouveau_notifier_init_channel(struct nouveau_channel *);
extern void nouveau_notifier_takedown_channel(struct nouveau_channel *);
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
if (pm->hwmon) {
- sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
+ sysfs_remove_group(&dev->pdev->dev.kobj, &hwmon_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
#endif
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
struct nouveau_pm_level *perflvl;
- if (pm->cur == &pm->boot)
+ if (!pm->cur || pm->cur == &pm->boot)
return;
perflvl = pm->cur;
struct i2c_board_info info[] = {
{ I2C_BOARD_INFO("w83l785ts", 0x2d) },
{ I2C_BOARD_INFO("w83781d", 0x2d) },
- { I2C_BOARD_INFO("f75375", 0x2e) },
{ I2C_BOARD_INFO("adt7473", 0x2e) },
+ { I2C_BOARD_INFO("f75375", 0x2e) },
{ I2C_BOARD_INFO("lm99", 0x4c) },
{ }
};
if (nv_encoder->dcb->type == OUTPUT_LVDS) {
bool duallink, dummy;
- nouveau_bios_parse_lvds_table(dev, nv_connector->native_mode->
- clock, &duallink, &dummy);
+ nouveau_bios_parse_lvds_table(dev, output_mode->clock,
+ &duallink, &dummy);
if (duallink)
regp->fp_control |= (8 << 28);
} else
return;
if (nv_encoder->dcb->lvdsconf.use_power_scripts) {
- struct nouveau_connector *nv_connector = nouveau_encoder_connector_get(nv_encoder);
-
/* when removing an output, crtc may not be set, but PANEL_OFF
* must still be run
*/
nv04_dfp_get_bound_head(dev, nv_encoder->dcb);
if (mode == DRM_MODE_DPMS_ON) {
- if (!nv_connector->native_mode) {
- NV_ERROR(dev, "Not turning on LVDS without native mode\n");
- return;
- }
call_lvds_script(dev, nv_encoder->dcb, head,
- LVDS_PANEL_ON, nv_connector->native_mode->clock);
+ LVDS_PANEL_ON, nv_encoder->mode.clock);
} else
/* pxclk of 0 is fine for PANEL_OFF, and for a
* disconnected LVDS encoder there is no native_mode
struct nouveau_tile_reg *tile = &dev_priv->tile.reg[i];
switch (dev_priv->chipset) {
+ case 0x40:
+ case 0x41: /* guess */
+ case 0x42:
+ case 0x43:
+ case 0x45: /* guess */
+ case 0x4e:
+ nv_wr32(dev, NV20_PGRAPH_TSIZE(i), tile->pitch);
+ nv_wr32(dev, NV20_PGRAPH_TLIMIT(i), tile->limit);
+ nv_wr32(dev, NV20_PGRAPH_TILE(i), tile->addr);
+ nv_wr32(dev, NV40_PGRAPH_TSIZE1(i), tile->pitch);
+ nv_wr32(dev, NV40_PGRAPH_TLIMIT1(i), tile->limit);
+ nv_wr32(dev, NV40_PGRAPH_TILE1(i), tile->addr);
+ break;
case 0x44:
case 0x4a:
- case 0x4e:
nv_wr32(dev, NV20_PGRAPH_TSIZE(i), tile->pitch);
nv_wr32(dev, NV20_PGRAPH_TLIMIT(i), tile->limit);
nv_wr32(dev, NV20_PGRAPH_TILE(i), tile->addr);
break;
-
case 0x46:
case 0x47:
case 0x49:
case 0x4b:
+ case 0x4c:
+ case 0x67:
+ default:
nv_wr32(dev, NV47_PGRAPH_TSIZE(i), tile->pitch);
nv_wr32(dev, NV47_PGRAPH_TLIMIT(i), tile->limit);
nv_wr32(dev, NV47_PGRAPH_TILE(i), tile->addr);
nv_wr32(dev, NV40_PGRAPH_TLIMIT1(i), tile->limit);
nv_wr32(dev, NV40_PGRAPH_TILE1(i), tile->addr);
break;
-
- default:
- nv_wr32(dev, NV20_PGRAPH_TSIZE(i), tile->pitch);
- nv_wr32(dev, NV20_PGRAPH_TLIMIT(i), tile->limit);
- nv_wr32(dev, NV20_PGRAPH_TILE(i), tile->addr);
- nv_wr32(dev, NV40_PGRAPH_TSIZE1(i), tile->pitch);
- nv_wr32(dev, NV40_PGRAPH_TLIMIT1(i), tile->limit);
- nv_wr32(dev, NV40_PGRAPH_TILE1(i), tile->addr);
- break;
}
}
break;
default:
switch (dev_priv->chipset) {
- case 0x46:
- case 0x47:
- case 0x49:
- case 0x4b:
- nv_wr32(dev, 0x400DF0, nv_rd32(dev, NV04_PFB_CFG0));
- nv_wr32(dev, 0x400DF4, nv_rd32(dev, NV04_PFB_CFG1));
- break;
- default:
+ case 0x41:
+ case 0x42:
+ case 0x43:
+ case 0x45:
+ case 0x4e:
+ case 0x44:
+ case 0x4a:
nv_wr32(dev, 0x4009F0, nv_rd32(dev, NV04_PFB_CFG0));
nv_wr32(dev, 0x4009F4, nv_rd32(dev, NV04_PFB_CFG1));
break;
+ default:
+ nv_wr32(dev, 0x400DF0, nv_rd32(dev, NV04_PFB_CFG0));
+ nv_wr32(dev, 0x400DF4, nv_rd32(dev, NV04_PFB_CFG1));
+ break;
}
nv_wr32(dev, 0x4069F0, nv_rd32(dev, NV04_PFB_CFG0));
nv_wr32(dev, 0x4069F4, nv_rd32(dev, NV04_PFB_CFG1));
nv50_evo_channel_del(&dev_priv->evo);
return ret;
}
- } else
- if (dev_priv->chipset != 0x50) {
+ } else {
ret = nv50_evo_dmaobj_new(evo, 0x3d, NvEvoFB16, 0x70, 0x19,
0, 0xffffffff, 0x00010000);
if (ret) {
struct drm_device *dev = chan->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph;
+ struct nouveau_fifo_engine *pfifo = &dev_priv->engine.fifo;
int i, hdr = (dev_priv->chipset == 0x50) ? 0x200 : 0x20;
unsigned long flags;
return;
spin_lock_irqsave(&dev_priv->context_switch_lock, flags);
+ pfifo->reassign(dev, false);
pgraph->fifo_access(dev, false);
if (pgraph->channel(dev) == chan)
dev_priv->engine.instmem.flush(dev);
pgraph->fifo_access(dev, true);
+ pfifo->reassign(dev, true);
spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags);
nouveau_gpuobj_ref(NULL, &chan->ramin_grctx);
}
if (phys & 1) {
- if (dev_priv->vram_sys_base) {
- phys += dev_priv->vram_sys_base;
- phys |= 0x30;
- }
-
if (coverage <= 32 * 1024 * 1024)
phys |= 0x60;
else if (coverage <= 64 * 1024 * 1024)
#include "nvc0_graph.h"
static void nvc0_graph_isr(struct drm_device *);
+static void nvc0_runk140_isr(struct drm_device *);
static int nvc0_graph_unload_context_to(struct drm_device *dev, u64 chan);
void
return;
nouveau_irq_unregister(dev, 12);
+ nouveau_irq_unregister(dev, 25);
nouveau_gpuobj_ref(NULL, &priv->unk4188b8);
nouveau_gpuobj_ref(NULL, &priv->unk4188b4);
}
nouveau_irq_register(dev, 12, nvc0_graph_isr);
+ nouveau_irq_register(dev, 25, nvc0_runk140_isr);
NVOBJ_CLASS(dev, 0x902d, GR); /* 2D */
NVOBJ_CLASS(dev, 0x9039, GR); /* M2MF */
NVOBJ_CLASS(dev, 0x9097, GR); /* 3D */
nv_wr32(dev, TP_UNIT(gpc, tp, 0x224), 0xc0000000);
nv_wr32(dev, TP_UNIT(gpc, tp, 0x48c), 0xc0000000);
nv_wr32(dev, TP_UNIT(gpc, tp, 0x084), 0xc0000000);
- nv_wr32(dev, TP_UNIT(gpc, tp, 0xe44), 0x001ffffe);
- nv_wr32(dev, TP_UNIT(gpc, tp, 0xe4c), 0x0000000f);
+ nv_wr32(dev, TP_UNIT(gpc, tp, 0x644), 0x001ffffe);
+ nv_wr32(dev, TP_UNIT(gpc, tp, 0x64c), 0x0000000f);
}
nv_wr32(dev, GPC_UNIT(gpc, 0x2c90), 0xffffffff);
nv_wr32(dev, GPC_UNIT(gpc, 0x2c94), 0xffffffff);
nv_wr32(dev, 0x400500, 0x00010001);
}
+
+static void
+nvc0_runk140_isr(struct drm_device *dev)
+{
+ u32 units = nv_rd32(dev, 0x00017c) & 0x1f;
+
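+ /* 0x17c appears to carry one bit per PRUNK140 unit; log two status-looking words for each flagged unit */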
+ while (units) {
+ u32 unit = ffs(units) - 1;
+ u32 reg = 0x140000 + unit * 0x2000;
+ u32 st0 = nv_mask(dev, reg + 0x1020, 0, 0);
+ u32 st1 = nv_mask(dev, reg + 0x1420, 0, 0);
+
+ NV_INFO(dev, "PRUNK140: %d 0x%08x 0x%08x\n", unit, st0, st1);
+ units &= ~(1 << unit);
+ }
+}
for (tp = 0, id = 0; tp < 4; tp++) {
for (gpc = 0; gpc < priv->gpc_nr; gpc++) {
- if (tp <= priv->tp_nr[gpc]) {
+ if (tp < priv->tp_nr[gpc]) {
nv_wr32(dev, TP_UNIT(gpc, tp, 0x698), id);
nv_wr32(dev, TP_UNIT(gpc, tp, 0x4e8), id);
nv_wr32(dev, GPC_UNIT(gpc, 0x0c10 + tp * 4), id);
switch (radeon_crtc->rmx_type) {
case RMX_CENTER:
- args.usOverscanTop = (adjusted_mode->crtc_vdisplay - mode->crtc_vdisplay) / 2;
- args.usOverscanBottom = (adjusted_mode->crtc_vdisplay - mode->crtc_vdisplay) / 2;
- args.usOverscanLeft = (adjusted_mode->crtc_hdisplay - mode->crtc_hdisplay) / 2;
- args.usOverscanRight = (adjusted_mode->crtc_hdisplay - mode->crtc_hdisplay) / 2;
+ args.usOverscanTop = cpu_to_le16((adjusted_mode->crtc_vdisplay - mode->crtc_vdisplay) / 2);
+ args.usOverscanBottom = cpu_to_le16((adjusted_mode->crtc_vdisplay - mode->crtc_vdisplay) / 2);
+ args.usOverscanLeft = cpu_to_le16((adjusted_mode->crtc_hdisplay - mode->crtc_hdisplay) / 2);
+ args.usOverscanRight = cpu_to_le16((adjusted_mode->crtc_hdisplay - mode->crtc_hdisplay) / 2);
break;
case RMX_ASPECT:
a1 = mode->crtc_vdisplay * adjusted_mode->crtc_hdisplay;
a2 = adjusted_mode->crtc_vdisplay * mode->crtc_hdisplay;
if (a1 > a2) {
- args.usOverscanLeft = (adjusted_mode->crtc_hdisplay - (a2 / mode->crtc_vdisplay)) / 2;
- args.usOverscanRight = (adjusted_mode->crtc_hdisplay - (a2 / mode->crtc_vdisplay)) / 2;
+ args.usOverscanLeft = cpu_to_le16((adjusted_mode->crtc_hdisplay - (a2 / mode->crtc_vdisplay)) / 2);
+ args.usOverscanRight = cpu_to_le16((adjusted_mode->crtc_hdisplay - (a2 / mode->crtc_vdisplay)) / 2);
} else if (a2 > a1) {
- args.usOverscanLeft = (adjusted_mode->crtc_vdisplay - (a1 / mode->crtc_hdisplay)) / 2;
- args.usOverscanRight = (adjusted_mode->crtc_vdisplay - (a1 / mode->crtc_hdisplay)) / 2;
+ args.usOverscanLeft = cpu_to_le16((adjusted_mode->crtc_vdisplay - (a1 / mode->crtc_hdisplay)) / 2);
+ args.usOverscanRight = cpu_to_le16((adjusted_mode->crtc_vdisplay - (a1 / mode->crtc_hdisplay)) / 2);
}
break;
case RMX_FULL:
default:
- args.usOverscanRight = radeon_crtc->h_border;
- args.usOverscanLeft = radeon_crtc->h_border;
- args.usOverscanBottom = radeon_crtc->v_border;
- args.usOverscanTop = radeon_crtc->v_border;
+ args.usOverscanRight = cpu_to_le16(radeon_crtc->h_border);
+ args.usOverscanLeft = cpu_to_le16(radeon_crtc->h_border);
+ args.usOverscanBottom = cpu_to_le16(radeon_crtc->v_border);
+ args.usOverscanTop = cpu_to_le16(radeon_crtc->v_border);
break;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
memset(&args, 0, sizeof(args));
if (ASIC_IS_DCE5(rdev)) {
- args.v3.usSpreadSpectrumAmountFrac = 0;
+ args.v3.usSpreadSpectrumAmountFrac = cpu_to_le16(0);
args.v3.ucSpreadSpectrumType = ss->type;
switch (pll_id) {
case ATOM_PPLL1:
args.v3.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V3_P1PLL;
- args.v3.usSpreadSpectrumAmount = ss->amount;
- args.v3.usSpreadSpectrumStep = ss->step;
+ args.v3.usSpreadSpectrumAmount = cpu_to_le16(ss->amount);
+ args.v3.usSpreadSpectrumStep = cpu_to_le16(ss->step);
break;
case ATOM_PPLL2:
args.v3.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V3_P2PLL;
- args.v3.usSpreadSpectrumAmount = ss->amount;
- args.v3.usSpreadSpectrumStep = ss->step;
+ args.v3.usSpreadSpectrumAmount = cpu_to_le16(ss->amount);
+ args.v3.usSpreadSpectrumStep = cpu_to_le16(ss->step);
break;
case ATOM_DCPLL:
args.v3.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V3_DCPLL;
- args.v3.usSpreadSpectrumAmount = 0;
- args.v3.usSpreadSpectrumStep = 0;
+ args.v3.usSpreadSpectrumAmount = cpu_to_le16(0);
+ args.v3.usSpreadSpectrumStep = cpu_to_le16(0);
break;
case ATOM_PPLL_INVALID:
return;
switch (pll_id) {
case ATOM_PPLL1:
args.v2.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V2_P1PLL;
- args.v2.usSpreadSpectrumAmount = ss->amount;
- args.v2.usSpreadSpectrumStep = ss->step;
+ args.v2.usSpreadSpectrumAmount = cpu_to_le16(ss->amount);
+ args.v2.usSpreadSpectrumStep = cpu_to_le16(ss->step);
break;
case ATOM_PPLL2:
args.v2.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V2_P2PLL;
- args.v2.usSpreadSpectrumAmount = ss->amount;
- args.v2.usSpreadSpectrumStep = ss->step;
+ args.v2.usSpreadSpectrumAmount = cpu_to_le16(ss->amount);
+ args.v2.usSpreadSpectrumStep = cpu_to_le16(ss->step);
break;
case ATOM_DCPLL:
args.v2.ucSpreadSpectrumType |= ATOM_PPLL_SS_TYPE_V2_DCPLL;
- args.v2.usSpreadSpectrumAmount = 0;
- args.v2.usSpreadSpectrumStep = 0;
+ args.v2.usSpreadSpectrumAmount = cpu_to_le16(0);
+ args.v2.usSpreadSpectrumStep = cpu_to_le16(0);
break;
case ATOM_PPLL_INVALID:
return;
pll->flags |= RADEON_PLL_PREFER_HIGH_FB_DIV;
else
pll->flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
-
}
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
dp_clock = dig_connector->dp_clock;
}
}
-#if 0 /* doesn't work properly on some laptops */
+
/* use recommended ref_div for ss */
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (ss_enabled) {
if (ss->refdiv) {
+ pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
pll->flags |= RADEON_PLL_USE_REF_DIV;
pll->reference_div = ss->refdiv;
+ if (ASIC_IS_AVIVO(rdev))
+ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV;
}
}
}
-#endif
+
if (ASIC_IS_AVIVO(rdev)) {
/* DVO wants 2x pixel clock if the DVO chip is in 12 bit mode */
if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1)
adjusted_clock = mode->clock * 2;
if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT))
pll->flags |= RADEON_PLL_PREFER_CLOSEST_LOWER;
+ if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
+ pll->flags |= RADEON_PLL_IS_LCD;
} else {
if (encoder->encoder_type != DRM_MODE_ENCODER_DAC)
pll->flags |= RADEON_PLL_NO_ODD_POST_DIV;
args.v1.usPixelClock = cpu_to_le16(mode->clock / 10);
args.v1.ucTransmitterID = radeon_encoder->encoder_id;
args.v1.ucEncodeMode = encoder_mode;
- if (encoder_mode == ATOM_ENCODER_MODE_DP) {
- if (ss_enabled)
- args.v1.ucConfig |=
- ADJUST_DISPLAY_CONFIG_SS_ENABLE;
- } else if (encoder_mode == ATOM_ENCODER_MODE_LVDS) {
+ if (ss_enabled)
args.v1.ucConfig |=
ADJUST_DISPLAY_CONFIG_SS_ENABLE;
- }
atom_execute_table(rdev->mode_info.atom_context,
index, (uint32_t *)&args);
args.v3.sInput.ucTransmitterID = radeon_encoder->encoder_id;
args.v3.sInput.ucEncodeMode = encoder_mode;
args.v3.sInput.ucDispPllConfig = 0;
+ if (ss_enabled)
+ args.v3.sInput.ucDispPllConfig |=
+ DISPPLL_CONFIG_SS_ENABLE;
if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
if (encoder_mode == ATOM_ENCODER_MODE_DP) {
- if (ss_enabled)
- args.v3.sInput.ucDispPllConfig |=
- DISPPLL_CONFIG_SS_ENABLE;
args.v3.sInput.ucDispPllConfig |=
DISPPLL_CONFIG_COHERENT_MODE;
/* 16200 or 27000 */
}
} else if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (encoder_mode == ATOM_ENCODER_MODE_DP) {
- if (ss_enabled)
- args.v3.sInput.ucDispPllConfig |=
- DISPPLL_CONFIG_SS_ENABLE;
args.v3.sInput.ucDispPllConfig |=
DISPPLL_CONFIG_COHERENT_MODE;
/* 16200 or 27000 */
args.v3.sInput.usPixelClock = cpu_to_le16(dp_clock / 10);
- } else if (encoder_mode == ATOM_ENCODER_MODE_LVDS) {
- if (ss_enabled)
- args.v3.sInput.ucDispPllConfig |=
- DISPPLL_CONFIG_SS_ENABLE;
- } else {
+ } else if (encoder_mode != ATOM_ENCODER_MODE_LVDS) {
if (mode->clock > 165000)
args.v3.sInput.ucDispPllConfig |=
DISPPLL_CONFIG_DUAL_LINK;
index, (uint32_t *)&args);
adjusted_clock = le32_to_cpu(args.v3.sOutput.ulDispPllFreq) * 10;
if (args.v3.sOutput.ucRefDiv) {
+ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV;
pll->flags |= RADEON_PLL_USE_REF_DIV;
pll->reference_div = args.v3.sOutput.ucRefDiv;
}
if (args.v3.sOutput.ucPostDiv) {
+ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV;
pll->flags |= RADEON_PLL_USE_POST_DIV;
pll->post_div = args.v3.sOutput.ucPostDiv;
}
* SetPixelClock provides the dividers
*/
args.v5.ucCRTC = ATOM_CRTC_INVALID;
- args.v5.usPixelClock = dispclk;
+ args.v5.usPixelClock = cpu_to_le16(dispclk);
args.v5.ucPpll = ATOM_DCPLL;
break;
case 6:
/* if the default dcpll clock is specified,
* SetPixelClock provides the dividers
*/
- args.v6.ulDispEngClkFreq = dispclk;
+ args.v6.ulDispEngClkFreq = cpu_to_le32(dispclk);
args.v6.ucPpll = ATOM_DCPLL;
break;
default:
/* adjust pixel clock as needed */
adjusted_clock = atombios_adjust_pll(crtc, mode, pll, ss_enabled, &ss);
- radeon_compute_pll(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
- &ref_div, &post_div);
+ if (ASIC_IS_AVIVO(rdev))
+ radeon_compute_pll_avivo(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
+ &ref_div, &post_div);
+ else
+ radeon_compute_pll_legacy(pll, adjusted_clock, &pll_clock, &fb_div, &frac_fb_div,
+ &ref_div, &post_div);
atombios_crtc_program_ss(crtc, ATOM_DISABLE, radeon_crtc->pll_id, &ss);
}
}
-static int evergreen_crtc_do_set_base(struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- int x, int y, int atomic)
+static int dce4_crtc_do_set_base(struct drm_crtc *crtc,
+ struct drm_framebuffer *fb,
+ int x, int y, int atomic)
{
struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct radeon_bo *rbo;
uint64_t fb_location;
uint32_t fb_format, fb_pitch_pixels, tiling_flags;
+ u32 fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_NONE);
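+ /* default is no byte swapping; on big-endian hosts fb_swap is overridden per framebuffer depth below */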
int r;
/* no fb bound */
case 16:
fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) |
EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB565));
+#ifdef __BIG_ENDIAN
+ fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
+#endif
break;
case 24:
case 32:
fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_32BPP) |
EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB8888));
+#ifdef __BIG_ENDIAN
+ fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32);
+#endif
break;
default:
DRM_ERROR("Unsupported screen depth %d\n",
WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
(u32) fb_location & EVERGREEN_GRPH_SURFACE_ADDRESS_MASK);
WREG32(EVERGREEN_GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format);
+ WREG32(EVERGREEN_GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap);
WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0);
WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0);
WREG32(EVERGREEN_VIEWPORT_SIZE + radeon_crtc->crtc_offset,
(crtc->mode.hdisplay << 16) | crtc->mode.vdisplay);
- if (crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)
- WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset,
- EVERGREEN_INTERLEAVE_EN);
- else
- WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, 0);
-
if (!atomic && fb && fb != crtc->fb) {
radeon_fb = to_radeon_framebuffer(fb);
rbo = radeon_fb->obj->driver_private;
struct drm_framebuffer *target_fb;
uint64_t fb_location;
uint32_t fb_format, fb_pitch_pixels, tiling_flags;
+ u32 fb_swap = R600_D1GRPH_SWAP_ENDIAN_NONE;
int r;
/* no fb bound */
fb_format =
AVIVO_D1GRPH_CONTROL_DEPTH_16BPP |
AVIVO_D1GRPH_CONTROL_16BPP_RGB565;
+#ifdef __BIG_ENDIAN
+ fb_swap = R600_D1GRPH_SWAP_ENDIAN_16BIT;
+#endif
break;
case 24:
case 32:
fb_format =
AVIVO_D1GRPH_CONTROL_DEPTH_32BPP |
AVIVO_D1GRPH_CONTROL_32BPP_ARGB8888;
+#ifdef __BIG_ENDIAN
+ fb_swap = R600_D1GRPH_SWAP_ENDIAN_32BIT;
+#endif
break;
default:
DRM_ERROR("Unsupported screen depth %d\n",
WREG32(AVIVO_D1GRPH_SECONDARY_SURFACE_ADDRESS +
radeon_crtc->crtc_offset, (u32) fb_location);
WREG32(AVIVO_D1GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format);
+ if (rdev->family >= CHIP_R600)
+ WREG32(R600_D1GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap);
WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0);
WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0);
WREG32(AVIVO_D1MODE_VIEWPORT_SIZE + radeon_crtc->crtc_offset,
(crtc->mode.hdisplay << 16) | crtc->mode.vdisplay);
- if (crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)
- WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset,
- AVIVO_D1MODE_INTERLEAVE_EN);
- else
- WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, 0);
-
if (!atomic && fb && fb != crtc->fb) {
radeon_fb = to_radeon_framebuffer(fb);
rbo = radeon_fb->obj->driver_private;
struct radeon_device *rdev = dev->dev_private;
if (ASIC_IS_DCE4(rdev))
- return evergreen_crtc_do_set_base(crtc, old_fb, x, y, 0);
+ return dce4_crtc_do_set_base(crtc, old_fb, x, y, 0);
else if (ASIC_IS_AVIVO(rdev))
return avivo_crtc_do_set_base(crtc, old_fb, x, y, 0);
else
struct radeon_device *rdev = dev->dev_private;
if (ASIC_IS_DCE4(rdev))
- return evergreen_crtc_do_set_base(crtc, fb, x, y, 1);
+ return dce4_crtc_do_set_base(crtc, fb, x, y, 1);
else if (ASIC_IS_AVIVO(rdev))
return avivo_crtc_do_set_base(crtc, fb, x, y, 1);
else
int dp_mode_valid(u8 dpcd[DP_DPCD_SIZE], int mode_clock)
{
int lanes = dp_lanes_for_mode_clock(dpcd, mode_clock);
- int bw = dp_lanes_for_mode_clock(dpcd, mode_clock);
+ int dp_clock = dp_link_clock_for_mode_clock(dpcd, mode_clock);
- if ((lanes == 0) || (bw == 0))
+ if ((lanes == 0) || (dp_clock == 0))
return MODE_CLOCK_HIGH;
return MODE_OK;
}
/* get temperature in millidegrees */
-u32 evergreen_get_temp(struct radeon_device *rdev)
+int evergreen_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_MULT_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
u32 actual_temp = 0;
- if ((temp >> 10) & 1)
- actual_temp = 0;
- else if ((temp >> 9) & 1)
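+ /* ASIC_T looks like a signed half-degree value: handle the sign/limit bits here, then halve below when converting to millidegrees */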
+ if (temp & 0x400)
+ actual_temp = -256;
+ else if (temp & 0x200)
actual_temp = 255;
- else
- actual_temp = (temp >> 1) & 0xff;
+ else if (temp & 0x100) {
+ actual_temp = temp & 0x1ff;
+ actual_temp |= ~0x1ff;
+ } else
+ actual_temp = temp & 0xff;
- return actual_temp * 1000;
+ return (actual_temp * 1000) / 2;
}
-u32 sumo_get_temp(struct radeon_device *rdev)
+int sumo_get_temp(struct radeon_device *rdev)
{
u32 temp = RREG32(CG_THERMAL_STATUS) & 0xff;
- u32 actual_temp = (temp >> 1) & 0xff;
+ int actual_temp = temp - 49;
return actual_temp * 1000;
}
/*
* CP.
*/
+void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
+{
+ /* set to DX10/11 mode */
+ radeon_ring_write(rdev, PACKET3(PACKET3_MODE_CONTROL, 0));
+ radeon_ring_write(rdev, 1);
+ /* FIXME: implement */
+ radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+ radeon_ring_write(rdev,
+#ifdef __BIG_ENDIAN
+ (2 << 0) |
+#endif
+ (ib->gpu_addr & 0xFFFFFFFC));
+ radeon_ring_write(rdev, upper_32_bits(ib->gpu_addr) & 0xFF);
+ radeon_ring_write(rdev, ib->length_dw);
+}
+
static int evergreen_cp_load_microcode(struct radeon_device *rdev)
{
return -EINVAL;
r700_cp_stop(rdev);
- WREG32(CP_RB_CNTL, RB_NO_UPDATE | (15 << 8) | (3 << 0));
+ WREG32(CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ BUF_SWAP_32BIT |
+#endif
+ RB_NO_UPDATE | RB_BLKSZ(15) | RB_BUFSZ(3));
fw_data = (const __be32 *)rdev->pfp_fw->data;
WREG32(CP_PFP_UCODE_ADDR, 0);
cp_me = 0xff;
WREG32(CP_ME_CNTL, cp_me);
- r = radeon_ring_lock(rdev, evergreen_default_size + 15);
+ r = radeon_ring_lock(rdev, evergreen_default_size + 19);
if (r) {
DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
return r;
radeon_ring_write(rdev, 0xffffffff);
radeon_ring_write(rdev, 0xffffffff);
+ radeon_ring_write(rdev, 0xc0026900);
+ radeon_ring_write(rdev, 0x00000316);
+ radeon_ring_write(rdev, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
+ radeon_ring_write(rdev, 0x00000010); /* */
+
radeon_ring_unlock_commit(rdev);
return 0;
WREG32(CP_RB_WPTR, 0);
/* set the wb address whether it's enabled or not */
- WREG32(CP_RB_RPTR_ADDR, (rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC);
+ WREG32(CP_RB_RPTR_ADDR,
+#ifdef __BIG_ENDIAN
+ RB_RPTR_SWAP(2) |
+#endif
+ ((rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC));
WREG32(CP_RB_RPTR_ADDR_HI, upper_32_bits(rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFF);
WREG32(SCRATCH_ADDR, ((rdev->wb.gpu_addr + RADEON_WB_SCRATCH_OFFSET) >> 8) & 0xFFFFFFFF);
WREG32(VGT_CACHE_INVALIDATION, vgt_cache_invalidation);
WREG32(VGT_GS_VERTEX_REUSE, 16);
+ WREG32(PA_SU_LINE_STIPPLE_VALUE, 0);
WREG32(PA_SC_LINE_STIPPLE_STATE, 0);
WREG32(VGT_VERTEX_REUSE_BLOCK_CNTL, 14);
struct evergreen_mc_save save;
u32 grbm_reset = 0;
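+ /* nothing to reset if the graphics engine is already idle */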
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ return 0;
+
dev_info(rdev->dev, "GPU softreset \n");
dev_info(rdev->dev, " GRBM_STATUS=0x%08X\n",
RREG32(GRBM_STATUS));
while (rptr != wptr) {
/* wptr/rptr are in bytes! */
ring_index = rptr / 4;
- src_id = rdev->ih.ring[ring_index] & 0xff;
- src_data = rdev->ih.ring[ring_index + 1] & 0xfffffff;
+ src_id = le32_to_cpu(rdev->ih.ring[ring_index]) & 0xff;
+ src_data = le32_to_cpu(rdev->ih.ring[ring_index + 1]) & 0xfffffff;
switch (src_id) {
case 1: /* D1 vblank/vline */
if (h < 8)
h = 8;
- cb_color_info = ((format << 2) | (1 << 24));
+ cb_color_info = ((format << 2) | (1 << 24) | (1 << 8));
pitch = (w / 8) - 1;
slice = ((w * h) / 64) - 1;
/* high addr, stride */
sq_vtx_constant_word2 = ((upper_32_bits(gpu_addr) & 0xff) | (16 << 8));
+#ifdef __BIG_ENDIAN
+ sq_vtx_constant_word2 |= (2 << 30);
+#endif
/* xyzw swizzles */
sq_vtx_constant_word3 = (0 << 3) | (1 << 6) | (2 << 9) | (3 << 12);
sq_tex_resource_word0 = (1 << 0); /* 2D */
sq_tex_resource_word0 |= ((((pitch >> 3) - 1) << 6) |
((w - 1) << 18));
- sq_tex_resource_word1 = ((h - 1) << 0);
+ sq_tex_resource_word1 = ((h - 1) << 0) | (1 << 28);
/* xyzw swizzles */
sq_tex_resource_word4 = (0 << 16) | (1 << 19) | (2 << 22) | (3 << 25);
radeon_ring_write(rdev, DI_PT_RECTLIST);
radeon_ring_write(rdev, PACKET3(PACKET3_INDEX_TYPE, 0));
- radeon_ring_write(rdev, DI_INDEX_SIZE_16_BIT);
+ radeon_ring_write(rdev,
+#ifdef __BIG_ENDIAN
+ (2 << 2) |
+#endif
+ DI_INDEX_SIZE_16_BIT);
radeon_ring_write(rdev, PACKET3(PACKET3_NUM_INSTANCES, 0));
radeon_ring_write(rdev, 1);
}
-/* emits 30 */
+/* emits 36 */
static void
set_default_state(struct radeon_device *rdev)
{
int num_hs_threads, num_ls_threads;
int num_ps_stack_entries, num_vs_stack_entries, num_gs_stack_entries, num_es_stack_entries;
int num_hs_stack_entries, num_ls_stack_entries;
+ u64 gpu_addr;
+ int dwords;
switch (rdev->family) {
case CHIP_CEDAR:
radeon_ring_write(rdev, 0x00000000);
radeon_ring_write(rdev, 0x00000000);
+ /* set to DX10/11 mode */
+ radeon_ring_write(rdev, PACKET3(PACKET3_MODE_CONTROL, 0));
+ radeon_ring_write(rdev, 1);
+
+ /* emit an IB pointing at default state */
+ dwords = ALIGN(rdev->r600_blit.state_len, 0x10);
+ gpu_addr = rdev->r600_blit.shader_gpu_addr + rdev->r600_blit.state_offset;
+ radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+ radeon_ring_write(rdev, gpu_addr & 0xFFFFFFFC);
+ radeon_ring_write(rdev, upper_32_bits(gpu_addr) & 0xFF);
+ radeon_ring_write(rdev, dwords);
+
}
static inline uint32_t i2f(uint32_t input)
int evergreen_blit_init(struct radeon_device *rdev)
{
u32 obj_size;
- int r;
+ int i, r, dwords;
void *ptr;
+ u32 packet2s[16];
+ int num_packet2s = 0;
/* pin copy shader into vram if already initialized */
if (rdev->r600_blit.shader_obj)
mutex_init(&rdev->r600_blit.mutex);
rdev->r600_blit.state_offset = 0;
- rdev->r600_blit.state_len = 0;
- obj_size = 0;
+
+ rdev->r600_blit.state_len = evergreen_default_size;
+
+ dwords = rdev->r600_blit.state_len;
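+ /* pad the saved default state to a 16-dword boundary with type-2 filler packets */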
+ while (dwords & 0xf) {
+ packet2s[num_packet2s++] = cpu_to_le32(PACKET2(0));
+ dwords++;
+ }
+
+ obj_size = dwords * 4;
+ obj_size = ALIGN(obj_size, 256);
rdev->r600_blit.vs_offset = obj_size;
obj_size += evergreen_vs_size * 4;
return r;
}
- memcpy(ptr + rdev->r600_blit.vs_offset, evergreen_vs, evergreen_vs_size * 4);
- memcpy(ptr + rdev->r600_blit.ps_offset, evergreen_ps, evergreen_ps_size * 4);
+ memcpy_toio(ptr + rdev->r600_blit.state_offset,
+ evergreen_default_state, rdev->r600_blit.state_len * 4);
+
+ if (num_packet2s)
+ memcpy_toio(ptr + rdev->r600_blit.state_offset + (rdev->r600_blit.state_len * 4),
+ packet2s, num_packet2s * 4);
+ for (i = 0; i < evergreen_vs_size; i++)
+ *(u32 *)((unsigned long)ptr + rdev->r600_blit.vs_offset + i * 4) = cpu_to_le32(evergreen_vs[i]);
+ for (i = 0; i < evergreen_ps_size; i++)
+ *(u32 *)((unsigned long)ptr + rdev->r600_blit.ps_offset + i * 4) = cpu_to_le32(evergreen_ps[i]);
radeon_bo_kunmap(rdev->r600_blit.shader_obj);
radeon_bo_unreserve(rdev->r600_blit.shader_obj);
/* calculate number of loops correctly */
ring_size = num_loops * dwords_per_loop;
/* set default + shaders */
- ring_size += 46; /* shaders + def state */
+ ring_size += 52; /* shaders + def state */
ring_size += 10; /* fence emit for VB IB */
ring_size += 5; /* done copy */
ring_size += 10; /* fence emit for done copy */
if (r)
return r;
- set_default_state(rdev); /* 30 */
+ set_default_state(rdev); /* 36 */
set_shaders(rdev); /* 16 */
return 0;
}
0x00000000,
0x3c000000,
0x67961001,
+#ifdef __BIG_ENDIAN
+ 0x000a0000,
+#else
0x00080000,
+#endif
0x00000000,
0x1c000000,
0x67961000,
+#ifdef __BIG_ENDIAN
+ 0x00020008,
+#else
0x00000008,
+#endif
0x00000000,
};
#define BUF_SWAP_32BIT (2 << 16)
#define CP_RB_RPTR 0x8700
#define CP_RB_RPTR_ADDR 0xC10C
+#define RB_RPTR_SWAP(x) ((x) << 0)
#define CP_RB_RPTR_ADDR_HI 0xC110
#define CP_RB_RPTR_WR 0xC108
#define CP_RB_WPTR 0xC114
#define FORCE_EOV_MAX_CLK_CNT(x) ((x) << 0)
#define FORCE_EOV_MAX_REZ_CNT(x) ((x) << 16)
#define PA_SC_LINE_STIPPLE 0x28A0C
+#define PA_SU_LINE_STIPPLE_VALUE 0x8A60
#define PA_SC_LINE_STIPPLE_STATE 0x8B10
#define SCRATCH_REG0 0x8500
#define PACKET3_DISPATCH_DIRECT 0x15
#define PACKET3_DISPATCH_INDIRECT 0x16
#define PACKET3_INDIRECT_BUFFER_END 0x17
+#define PACKET3_MODE_CONTROL 0x18
#define PACKET3_SET_PREDICATION 0x20
#define PACKET3_REG_RMW 0x21
#define PACKET3_COND_EXEC 0x22
last_reg = strtol(last_reg_s, NULL, 16);
do {
- if (fgets(buf, 1024, file) == NULL)
+ if (fgets(buf, 1024, file) == NULL) {
+ fclose(file);
return -1;
+ }
len = strlen(buf);
if (ftell(file) == end)
done = 1;
fprintf(stderr,
"Error matching regular expression %d in %s\n",
r, filename);
+ fclose(file);
return -1;
} else {
buf[match[0].rm_eo] = 0;
WREG32(RADEON_CP_CSQ_MODE,
REG_SET(RADEON_INDIRECT2_START, indirect2_start) |
REG_SET(RADEON_INDIRECT1_START, indirect1_start));
- WREG32(0x718, 0);
- WREG32(0x744, 0x00004D4D);
+ WREG32(RADEON_CP_RB_WPTR_DELAY, 0);
+ WREG32(RADEON_CP_CSQ_MODE, 0x00004D4D);
WREG32(RADEON_CP_CSQ_CNTL, RADEON_CSQ_PRIBM_INDBM);
radeon_ring_start(rdev);
r = radeon_ring_test(rdev);
}
track->zb.robj = reloc->robj;
track->zb.offset = idx_value;
+ track->zb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case RADEON_RB3D_COLOROFFSET:
}
track->cb[0].robj = reloc->robj;
track->cb[0].offset = idx_value;
+ track->cb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case RADEON_PP_TXOFFSET_0:
}
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case RADEON_PP_CUBIC_OFFSET_T0_0:
case RADEON_PP_CUBIC_OFFSET_T0_1:
track->textures[0].cube_info[i].offset = idx_value;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[0].cube_info[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case RADEON_PP_CUBIC_OFFSET_T1_0:
case RADEON_PP_CUBIC_OFFSET_T1_1:
track->textures[1].cube_info[i].offset = idx_value;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[1].cube_info[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case RADEON_PP_CUBIC_OFFSET_T2_0:
case RADEON_PP_CUBIC_OFFSET_T2_1:
track->textures[2].cube_info[i].offset = idx_value;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[2].cube_info[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case RADEON_RE_WIDTH_HEIGHT:
track->maxy = ((idx_value >> 16) & 0x7FF);
+ track->cb_dirty = true;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_COLORPITCH:
r = r100_cs_packet_next_reloc(p, &reloc);
ib[idx] = tmp;
track->cb[0].pitch = idx_value & RADEON_COLORPITCH_MASK;
+ track->cb_dirty = true;
break;
case RADEON_RB3D_DEPTHPITCH:
track->zb.pitch = idx_value & RADEON_DEPTHPITCH_MASK;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_CNTL:
switch ((idx_value >> RADEON_RB3D_COLOR_FORMAT_SHIFT) & 0x1f) {
return -EINVAL;
}
track->z_enabled = !!(idx_value & RADEON_Z_ENABLE);
+ track->cb_dirty = true;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_ZSTENCILCNTL:
switch (idx_value & 0xf) {
default:
break;
}
+ track->zb_dirty = true;
break;
case RADEON_RB3D_ZPASS_ADDR:
r = r100_cs_packet_next_reloc(p, &reloc);
uint32_t temp = idx_value >> 4;
for (i = 0; i < track->num_texture; i++)
track->textures[i].enabled = !!(temp & (1 << i));
+ track->tex_dirty = true;
}
break;
case RADEON_SE_VF_CNTL:
i = (reg - RADEON_PP_TEX_SIZE_0) / 8;
track->textures[i].width = (idx_value & RADEON_TEX_USIZE_MASK) + 1;
track->textures[i].height = ((idx_value & RADEON_TEX_VSIZE_MASK) >> RADEON_TEX_VSIZE_SHIFT) + 1;
+ track->tex_dirty = true;
break;
case RADEON_PP_TEX_PITCH_0:
case RADEON_PP_TEX_PITCH_1:
case RADEON_PP_TEX_PITCH_2:
i = (reg - RADEON_PP_TEX_PITCH_0) / 8;
track->textures[i].pitch = idx_value + 32;
+ track->tex_dirty = true;
break;
case RADEON_PP_TXFILTER_0:
case RADEON_PP_TXFILTER_1:
tmp = (idx_value >> 27) & 0x7;
if (tmp == 2 || tmp == 6)
track->textures[i].roundup_h = false;
+ track->tex_dirty = true;
break;
case RADEON_PP_TXFORMAT_0:
case RADEON_PP_TXFORMAT_1:
}
track->textures[i].cube_info[4].width = 1 << ((idx_value >> 16) & 0xf);
track->textures[i].cube_info[4].height = 1 << ((idx_value >> 20) & 0xf);
+ track->tex_dirty = true;
break;
case RADEON_PP_CUBIC_FACES_0:
case RADEON_PP_CUBIC_FACES_1:
track->textures[i].cube_info[face].width = 1 << ((tmp >> (face * 8)) & 0xf);
track->textures[i].cube_info[face].height = 1 << ((tmp >> ((face * 8) + 4)) & 0xf);
}
+ track->tex_dirty = true;
break;
default:
printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n",
temp = RREG32(RADEON_CONFIG_CNTL);
if (state == false) {
- temp &= ~(1<<8);
- temp |= (1<<9);
+ temp &= ~RADEON_CFG_VGA_RAM_EN;
+ temp |= RADEON_CFG_VGA_IO_DIS;
} else {
- temp &= ~(1<<9);
+ temp &= ~RADEON_CFG_VGA_IO_DIS;
}
WREG32(RADEON_CONFIG_CNTL, temp);
}
unsigned long size;
unsigned prim_walk;
unsigned nverts;
- unsigned num_cb = track->num_cb;
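+ /* only re-check colorbuffer state when it was actually touched since the last validation */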
+ unsigned num_cb = track->cb_dirty ? track->num_cb : 0;
- if (!track->zb_cb_clear && !track->color_channel_mask &&
+ if (num_cb && !track->zb_cb_clear && !track->color_channel_mask &&
!track->blend_read_enable)
num_cb = 0;
return -EINVAL;
}
}
- if (track->z_enabled) {
+ track->cb_dirty = false;
+
+ if (track->zb_dirty && track->z_enabled) {
if (track->zb.robj == NULL) {
DRM_ERROR("[drm] No buffer for z buffer !\n");
return -EINVAL;
return -EINVAL;
}
}
+ track->zb_dirty = false;
+
+ if (track->aa_dirty && track->aaresolve) {
+ if (track->aa.robj == NULL) {
+ DRM_ERROR("[drm] No buffer for AA resolve buffer %d !\n", i);
+ return -EINVAL;
+ }
+ /* I believe the format comes from colorbuffer0. */
+ size = track->aa.pitch * track->cb[0].cpp * track->maxy;
+ size += track->aa.offset;
+ if (size > radeon_bo_size(track->aa.robj)) {
+ DRM_ERROR("[drm] Buffer too small for AA resolve buffer %d "
+ "(need %lu have %lu) !\n", i, size,
+ radeon_bo_size(track->aa.robj));
+ DRM_ERROR("[drm] AA resolve buffer %d (%u %u %u %u)\n",
+ i, track->aa.pitch, track->cb[0].cpp,
+ track->aa.offset, track->maxy);
+ return -EINVAL;
+ }
+ }
+ track->aa_dirty = false;
+
prim_walk = (track->vap_vf_cntl >> 4) & 0x3;
if (track->vap_vf_cntl & (1 << 14)) {
nverts = track->vap_alt_nverts;
prim_walk);
return -EINVAL;
}
- return r100_cs_track_texture_check(rdev, track);
+
+ if (track->tex_dirty) {
+ track->tex_dirty = false;
+ return r100_cs_track_texture_check(rdev, track);
+ }
+ return 0;
}
void r100_cs_track_clear(struct radeon_device *rdev, struct r100_cs_track *track)
{
unsigned i, face;
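+ /* mark all state dirty so the first check after a reset validates everything */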
+ track->cb_dirty = true;
+ track->zb_dirty = true;
+ track->tex_dirty = true;
+ track->aa_dirty = true;
+
if (rdev->family < CHIP_R300) {
track->num_cb = 1;
if (rdev->family <= CHIP_RS200)
track->num_texture = 16;
track->maxy = 4096;
track->separate_cube = 0;
+ track->aaresolve = true;
+ track->aa.robj = NULL;
}
for (i = 0; i < track->num_cb; i++) {
if (i < rdev->usec_timeout) {
DRM_INFO("ring test succeeded in %d usecs\n", i);
} else {
- DRM_ERROR("radeon: ring test failed (sracth(0x%04X)=0x%08X)\n",
+ DRM_ERROR("radeon: ring test failed (scratch(0x%04X)=0x%08X)\n",
scratch, tmp);
r = -EINVAL;
}
unsigned compress_format;
};
-struct r100_cs_track_limits {
- unsigned num_cb;
- unsigned num_texture;
- unsigned max_levels;
-};
-
struct r100_cs_track {
- struct radeon_device *rdev;
unsigned num_cb;
unsigned num_texture;
unsigned maxy;
struct r100_cs_track_array arrays[11];
struct r100_cs_track_cb cb[R300_MAX_CB];
struct r100_cs_track_cb zb;
+ struct r100_cs_track_cb aa;
struct r100_cs_track_texture textures[R300_TRACK_MAX_TEXTURE];
bool z_enabled;
bool separate_cube;
bool zb_cb_clear;
bool blend_read_enable;
+ bool cb_dirty;
+ bool zb_dirty;
+ bool tex_dirty;
+ bool aa_dirty;
+ bool aaresolve;
};
int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track);
}
track->zb.robj = reloc->robj;
track->zb.offset = idx_value;
+ track->zb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case RADEON_RB3D_COLOROFFSET:
}
track->cb[0].robj = reloc->robj;
track->cb[0].offset = idx_value;
+ track->cb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case R200_PP_TXOFFSET_0:
}
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case R200_PP_CUBIC_OFFSET_F1_0:
case R200_PP_CUBIC_OFFSET_F2_0:
track->textures[i].cube_info[face - 1].offset = idx_value;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
track->textures[i].cube_info[face - 1].robj = reloc->robj;
+ track->tex_dirty = true;
break;
case RADEON_RE_WIDTH_HEIGHT:
track->maxy = ((idx_value >> 16) & 0x7FF);
+ track->cb_dirty = true;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_COLORPITCH:
r = r100_cs_packet_next_reloc(p, &reloc);
ib[idx] = tmp;
track->cb[0].pitch = idx_value & RADEON_COLORPITCH_MASK;
+ track->cb_dirty = true;
break;
case RADEON_RB3D_DEPTHPITCH:
track->zb.pitch = idx_value & RADEON_DEPTHPITCH_MASK;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_CNTL:
switch ((idx_value >> RADEON_RB3D_COLOR_FORMAT_SHIFT) & 0x1f) {
}
track->z_enabled = !!(idx_value & RADEON_Z_ENABLE);
+ track->cb_dirty = true;
+ track->zb_dirty = true;
break;
case RADEON_RB3D_ZSTENCILCNTL:
switch (idx_value & 0xf) {
default:
break;
}
+ track->zb_dirty = true;
break;
case RADEON_RB3D_ZPASS_ADDR:
r = r100_cs_packet_next_reloc(p, &reloc);
uint32_t temp = idx_value >> 4;
for (i = 0; i < track->num_texture; i++)
track->textures[i].enabled = !!(temp & (1 << i));
+ track->tex_dirty = true;
}
break;
case RADEON_SE_VF_CNTL:
i = (reg - R200_PP_TXSIZE_0) / 32;
track->textures[i].width = (idx_value & RADEON_TEX_USIZE_MASK) + 1;
track->textures[i].height = ((idx_value & RADEON_TEX_VSIZE_MASK) >> RADEON_TEX_VSIZE_SHIFT) + 1;
+ track->tex_dirty = true;
break;
case R200_PP_TXPITCH_0:
case R200_PP_TXPITCH_1:
case R200_PP_TXPITCH_5:
i = (reg - R200_PP_TXPITCH_0) / 32;
track->textures[i].pitch = idx_value + 32;
+ track->tex_dirty = true;
break;
case R200_PP_TXFILTER_0:
case R200_PP_TXFILTER_1:
tmp = (idx_value >> 27) & 0x7;
if (tmp == 2 || tmp == 6)
track->textures[i].roundup_h = false;
+ track->tex_dirty = true;
break;
case R200_PP_TXMULTI_CTL_0:
case R200_PP_TXMULTI_CTL_1:
track->textures[i].tex_coord_type = 1;
break;
}
+ track->tex_dirty = true;
break;
case R200_PP_TXFORMAT_0:
case R200_PP_TXFORMAT_1:
}
track->textures[i].cube_info[4].width = 1 << ((idx_value >> 16) & 0xf);
track->textures[i].cube_info[4].height = 1 << ((idx_value >> 20) & 0xf);
+ track->tex_dirty = true;
break;
case R200_PP_CUBIC_FACES_0:
case R200_PP_CUBIC_FACES_1:
track->textures[i].cube_info[face].width = 1 << ((tmp >> (face * 8)) & 0xf);
track->textures[i].cube_info[face].height = 1 << ((tmp >> ((face * 8) + 4)) & 0xf);
}
+ track->tex_dirty = true;
break;
default:
printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n",
mb();
}
+#define R300_PTE_WRITEABLE (1 << 2)
+#define R300_PTE_READABLE (1 << 3)
+
int rv370_pcie_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr)
{
void __iomem *ptr = (void *)rdev->gart.table.vram.ptr;
}
addr = (lower_32_bits(addr) >> 8) |
((upper_32_bits(addr) & 0xff) << 24) |
- 0xc;
+ R300_PTE_WRITEABLE | R300_PTE_READABLE;
/* on x86 we want this to be CPU endian; on powerpc
* without HW swappers it'll get swapped on the way
* into VRAM - so no need for cpu_to_le32 on VRAM tables */
WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_LO, rdev->mc.vram_start);
WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_HI, 0);
/* Clear error */
- WREG32_PCIE(0x18, 0);
+ WREG32_PCIE(RADEON_PCIE_TX_GART_ERROR, 0);
tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
tmp |= RADEON_PCIE_TX_GART_EN;
tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
}
track->cb[i].robj = reloc->robj;
track->cb[i].offset = idx_value;
+ track->cb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case R300_ZB_DEPTHOFFSET:
}
track->zb.robj = reloc->robj;
track->zb.offset = idx_value;
+ track->zb_dirty = true;
ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
break;
case R300_TX_OFFSET_0:
tmp |= tile_flags;
ib[idx] = tmp;
track->textures[i].robj = reloc->robj;
+ track->tex_dirty = true;
break;
/* Tracked registers */
case 0x2084:
if (p->rdev->family < CHIP_RV515) {
track->maxy -= 1440;
}
+ track->cb_dirty = true;
+ track->zb_dirty = true;
break;
case 0x4E00:
/* RB3D_CCTL */
return -EINVAL;
}
track->num_cb = ((idx_value >> 5) & 0x3) + 1;
+ track->cb_dirty = true;
break;
case 0x4E38:
case 0x4E3C:
((idx_value >> 21) & 0xF));
return -EINVAL;
}
+ track->cb_dirty = true;
break;
case 0x4F00:
/* ZB_CNTL */
} else {
track->z_enabled = false;
}
+ track->zb_dirty = true;
break;
case 0x4F10:
/* ZB_FORMAT */
(idx_value & 0xF));
return -EINVAL;
}
+ track->zb_dirty = true;
break;
case 0x4F24:
/* ZB_DEPTHPITCH */
ib[idx] = tmp;
track->zb.pitch = idx_value & 0x3FFC;
+ track->zb_dirty = true;
break;
case 0x4104:
+ /* TX_ENABLE */
for (i = 0; i < 16; i++) {
bool enabled;
enabled = !!(idx_value & (1 << i));
track->textures[i].enabled = enabled;
}
+ track->tex_dirty = true;
break;
case 0x44C0:
case 0x44C4:
track->textures[i].compress_format = R100_TRACK_COMP_NONE;
break;
case R300_TX_FORMAT_X16:
+ case R300_TX_FORMAT_FL_I16:
case R300_TX_FORMAT_Y8X8:
case R300_TX_FORMAT_Z5Y6X5:
case R300_TX_FORMAT_Z6Y5X5:
track->textures[i].compress_format = R100_TRACK_COMP_NONE;
break;
case R300_TX_FORMAT_Y16X16:
+ case R300_TX_FORMAT_FL_I16A16:
case R300_TX_FORMAT_Z11Y11X10:
case R300_TX_FORMAT_Z10Y11X11:
case R300_TX_FORMAT_W8Z8Y8X8:
DRM_ERROR("Invalid texture format %u\n",
(idx_value & 0x1F));
return -EINVAL;
- break;
}
+ track->tex_dirty = true;
break;
case 0x4400:
case 0x4404:
if (tmp == 2 || tmp == 4 || tmp == 6) {
track->textures[i].roundup_h = false;
}
+ track->tex_dirty = true;
break;
case 0x4500:
case 0x4504:
DRM_ERROR("Forbidden bit TXFORMAT_MSB\n");
return -EINVAL;
}
+ track->tex_dirty = true;
break;
case 0x4480:
case 0x4484:
track->textures[i].use_pitch = !!tmp;
tmp = (idx_value >> 22) & 0xF;
track->textures[i].txdepth = tmp;
+ track->tex_dirty = true;
break;
case R300_ZB_ZPASS_ADDR:
r = r100_cs_packet_next_reloc(p, &reloc);
case 0x4e0c:
/* RB3D_COLOR_CHANNEL_MASK */
track->color_channel_mask = idx_value;
+ track->cb_dirty = true;
break;
case 0x43a4:
/* SC_HYPERZ_EN */
case 0x4f1c:
/* ZB_BW_CNTL */
track->zb_cb_clear = !!(idx_value & (1 << 5));
+ track->cb_dirty = true;
+ track->zb_dirty = true;
if (p->rdev->hyperz_filp != p->filp) {
if (idx_value & (R300_HIZ_ENABLE |
R300_RD_COMP_ENABLE |
case 0x4e04:
/* RB3D_BLENDCNTL */
track->blend_read_enable = !!(idx_value & (1 << 2));
+ track->cb_dirty = true;
+ break;
+ case R300_RB3D_AARESOLVE_OFFSET:
+ r = r100_cs_packet_next_reloc(p, &reloc);
+ if (r) {
+ DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
+ idx, reg);
+ r100_cs_dump_packet(p, pkt);
+ return r;
+ }
+ track->aa.robj = reloc->robj;
+ track->aa.offset = idx_value;
+ track->aa_dirty = true;
+ ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
+ break;
+ case R300_RB3D_AARESOLVE_PITCH:
+ track->aa.pitch = idx_value & 0x3FFE;
+ track->aa_dirty = true;
break;
- case 0x4f28: /* ZB_DEPTHCLEARVALUE */
+ case R300_RB3D_AARESOLVE_CTL:
+ track->aaresolve = idx_value & 0x1;
+ track->aa_dirty = true;
break;
case 0x4f30: /* ZB_MASK_OFFSET */
case 0x4f34: /* ZB_ZMASK_PITCH */
#define R300_RB3D_COLORPITCH2 0x4E40 /* GUESS */
#define R300_RB3D_COLORPITCH3 0x4E44 /* GUESS */
+#define R300_RB3D_AARESOLVE_OFFSET 0x4E80
+#define R300_RB3D_AARESOLVE_PITCH 0x4E84
#define R300_RB3D_AARESOLVE_CTL 0x4E88
/* gap */
"programming pipes. Bad things might happen.\n");
}
/* get max number of pipes */
- gb_pipe_select = RREG32(0x402C);
+ gb_pipe_select = RREG32(R400_GB_PIPE_SELECT);
num_pipes = ((gb_pipe_select >> 12) & 3) + 1;
/* SE chips have 1 pipe */
WREG32(0x4128, 0xFF);
}
r420_pipes_init(rdev);
- gb_pipe_select = RREG32(0x402C);
- tmp = RREG32(0x170C);
+ gb_pipe_select = RREG32(R400_GB_PIPE_SELECT);
+ tmp = RREG32(R300_DST_PIPE_CONFIG);
pipe_select_current = (tmp >> 2) & 3;
tmp = (1 << pipe_select_current) |
(((gb_pipe_select >> 8) & 0xF) << 4);
static void r600_pcie_gen2_enable(struct radeon_device *rdev);
/* get temperature in millidegrees */
-u32 rv6xx_get_temp(struct radeon_device *rdev)
+int rv6xx_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
+ int actual_temp = temp & 0xff;
- return temp * 1000;
+ if (temp & 0x100)
+ actual_temp -= 256;
+
+ return actual_temp * 1000;
}
void r600_pm_get_dynpm_state(struct radeon_device *rdev)
S_008014_CB2_BUSY(1) | S_008014_CB3_BUSY(1);
u32 tmp;
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ return 0;
+
dev_info(rdev->dev, "GPU softreset \n");
dev_info(rdev->dev, " R_008010_GRBM_STATUS=0x%08X\n",
RREG32(R_008010_GRBM_STATUS));
r600_cp_stop(rdev);
- WREG32(CP_RB_CNTL, RB_NO_UPDATE | RB_BLKSZ(15) | RB_BUFSZ(3));
+ WREG32(CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ BUF_SWAP_32BIT |
+#endif
+ RB_NO_UPDATE | RB_BLKSZ(15) | RB_BUFSZ(3));
/* Reset cp */
WREG32(GRBM_SOFT_RESET, SOFT_RESET_CP);
WREG32(CP_RB_WPTR, 0);
/* set the wb address whether it's enabled or not */
- WREG32(CP_RB_RPTR_ADDR, (rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC);
+ WREG32(CP_RB_RPTR_ADDR,
+#ifdef __BIG_ENDIAN
+ RB_RPTR_SWAP(2) |
+#endif
+ ((rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC));
WREG32(CP_RB_RPTR_ADDR_HI, upper_32_bits(rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFF);
WREG32(SCRATCH_ADDR, ((rdev->wb.gpu_addr + RADEON_WB_SCRATCH_OFFSET) >> 8) & 0xFFFFFFFF);
{
/* FIXME: implement */
radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
- radeon_ring_write(rdev, ib->gpu_addr & 0xFFFFFFFC);
+ radeon_ring_write(rdev,
+#ifdef __BIG_ENDIAN
+ (2 << 0) |
+#endif
+ (ib->gpu_addr & 0xFFFFFFFC));
radeon_ring_write(rdev, upper_32_bits(ib->gpu_addr) & 0xFF);
radeon_ring_write(rdev, ib->length_dw);
}
while (rptr != wptr) {
/* wptr/rptr are in bytes! */
ring_index = rptr / 4;
- src_id = rdev->ih.ring[ring_index] & 0xff;
- src_data = rdev->ih.ring[ring_index + 1] & 0xfffffff;
+ src_id = le32_to_cpu(rdev->ih.ring[ring_index]) & 0xff;
+ src_data = le32_to_cpu(rdev->ih.ring[ring_index + 1]) & 0xfffffff;
switch (src_id) {
case 1: /* D1 vblank/vline */
ps = (u32 *) ((char *)dev->agp_buffer_map->handle + dev_priv->blit_vb->offset + 256);
for (i = 0; i < r6xx_vs_size; i++)
- vs[i] = r6xx_vs[i];
+ vs[i] = cpu_to_le32(r6xx_vs[i]);
for (i = 0; i < r6xx_ps_size; i++)
- ps[i] = r6xx_ps[i];
+ ps[i] = cpu_to_le32(r6xx_ps[i]);
dev_priv->blit_vb->used = 512;
DRM_DEBUG("\n");
sq_vtx_constant_word2 = (((gpu_addr >> 32) & 0xff) | (16 << 8));
+#ifdef __BIG_ENDIAN
+ sq_vtx_constant_word2 |= (2 << 30);
+#endif
BEGIN_RING(9);
OUT_RING(CP_PACKET3(R600_IT_SET_RESOURCE, 7));
OUT_RING(DI_PT_RECTLIST);
OUT_RING(CP_PACKET3(R600_IT_INDEX_TYPE, 0));
+#ifdef __BIG_ENDIAN
+ OUT_RING((2 << 2) | DI_INDEX_SIZE_16_BIT);
+#else
OUT_RING(DI_INDEX_SIZE_16_BIT);
+#endif
OUT_RING(CP_PACKET3(R600_IT_NUM_INSTANCES, 0));
OUT_RING(1);
if (h < 8)
h = 8;
- cb_color_info = ((format << 2) | (1 << 27));
+ cb_color_info = ((format << 2) | (1 << 27) | (1 << 8));
pitch = (w / 8) - 1;
slice = ((w * h) / 64) - 1;
u32 sq_vtx_constant_word2;
sq_vtx_constant_word2 = ((upper_32_bits(gpu_addr) & 0xff) | (16 << 8));
+#ifdef __BIG_ENDIAN
+ sq_vtx_constant_word2 |= (2 << 30);
+#endif
radeon_ring_write(rdev, PACKET3(PACKET3_SET_RESOURCE, 7));
radeon_ring_write(rdev, 0x460);
if (h < 1)
h = 1;
- sq_tex_resource_word0 = (1 << 0);
+ sq_tex_resource_word0 = (1 << 0) | (1 << 3);
sq_tex_resource_word0 |= ((((pitch >> 3) - 1) << 8) |
((w - 1) << 19));
radeon_ring_write(rdev, DI_PT_RECTLIST);
radeon_ring_write(rdev, PACKET3(PACKET3_INDEX_TYPE, 0));
- radeon_ring_write(rdev, DI_INDEX_SIZE_16_BIT);
+ radeon_ring_write(rdev,
+#ifdef __BIG_ENDIAN
+ (2 << 2) |
+#endif
+ DI_INDEX_SIZE_16_BIT);
radeon_ring_write(rdev, PACKET3(PACKET3_NUM_INSTANCES, 0));
radeon_ring_write(rdev, 1);
dwords = ALIGN(rdev->r600_blit.state_len, 0x10);
gpu_addr = rdev->r600_blit.shader_gpu_addr + rdev->r600_blit.state_offset;
radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
- radeon_ring_write(rdev, gpu_addr & 0xFFFFFFFC);
+ radeon_ring_write(rdev,
+#ifdef __BIG_ENDIAN
+ (2 << 0) |
+#endif
+ (gpu_addr & 0xFFFFFFFC));
radeon_ring_write(rdev, upper_32_bits(gpu_addr) & 0xFF);
radeon_ring_write(rdev, dwords);
int r600_blit_init(struct radeon_device *rdev)
{
u32 obj_size;
- int r, dwords;
+ int i, r, dwords;
void *ptr;
u32 packet2s[16];
int num_packet2s = 0;
dwords = rdev->r600_blit.state_len;
while (dwords & 0xf) {
- packet2s[num_packet2s++] = PACKET2(0);
+ packet2s[num_packet2s++] = cpu_to_le32(PACKET2(0));
dwords++;
}
if (num_packet2s)
memcpy_toio(ptr + rdev->r600_blit.state_offset + (rdev->r600_blit.state_len * 4),
packet2s, num_packet2s * 4);
- memcpy(ptr + rdev->r600_blit.vs_offset, r6xx_vs, r6xx_vs_size * 4);
- memcpy(ptr + rdev->r600_blit.ps_offset, r6xx_ps, r6xx_ps_size * 4);
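+ /* copy the shaders dword-by-dword through cpu_to_le32 so they end up little-endian in VRAM regardless of host endianness */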
+ for (i = 0; i < r6xx_vs_size; i++)
+ *(u32 *)((unsigned long)ptr + rdev->r600_blit.vs_offset + i * 4) = cpu_to_le32(r6xx_vs[i]);
+ for (i = 0; i < r6xx_ps_size; i++)
+ *(u32 *)((unsigned long)ptr + rdev->r600_blit.ps_offset + i * 4) = cpu_to_le32(r6xx_ps[i]);
radeon_bo_kunmap(rdev->r600_blit.shader_obj);
radeon_bo_unreserve(rdev->r600_blit.shader_obj);
0x00000000,
0x3c000000,
0x68cd1000,
+#ifdef __BIG_ENDIAN
+ 0x000a0000,
+#else
0x00080000,
+#endif
0x00000000,
};
r600_do_cp_stop(dev_priv);
RADEON_WRITE(R600_CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ R600_BUF_SWAP_32BIT |
+#endif
R600_RB_NO_UPDATE |
R600_RB_BLKSZ(15) |
R600_RB_BUFSZ(3));
r600_do_cp_stop(dev_priv);
RADEON_WRITE(R600_CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ R600_BUF_SWAP_32BIT |
+#endif
R600_RB_NO_UPDATE |
- (15 << 8) |
- (3 << 0));
+ R600_RB_BLKSZ(15) |
+ R600_RB_BUFSZ(3));
RADEON_WRITE(R600_GRBM_SOFT_RESET, R600_SOFT_RESET_CP);
RADEON_READ(R600_GRBM_SOFT_RESET);
if (!dev_priv->writeback_works) {
/* Disable writeback to avoid unnecessary bus master transfer */
- RADEON_WRITE(R600_CP_RB_CNTL, RADEON_READ(R600_CP_RB_CNTL) |
- RADEON_RB_NO_UPDATE);
+ RADEON_WRITE(R600_CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ R600_BUF_SWAP_32BIT |
+#endif
+ RADEON_READ(R600_CP_RB_CNTL) |
+ R600_RB_NO_UPDATE);
RADEON_WRITE(R600_SCRATCH_UMSK, 0);
}
}
RADEON_WRITE(R600_CP_RB_WPTR_DELAY, 0);
cp_rb_cntl = RADEON_READ(R600_CP_RB_CNTL);
- RADEON_WRITE(R600_CP_RB_CNTL, R600_RB_RPTR_WR_ENA);
+ RADEON_WRITE(R600_CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ R600_BUF_SWAP_32BIT |
+#endif
+ R600_RB_RPTR_WR_ENA);
RADEON_WRITE(R600_CP_RB_RPTR_WR, cp_ptr);
RADEON_WRITE(R600_CP_RB_WPTR, cp_ptr);
+ dev_priv->gart_vm_start;
}
RADEON_WRITE(R600_CP_RB_RPTR_ADDR,
- rptr_addr & 0xffffffff);
+#ifdef __BIG_ENDIAN
+ (2 << 0) |
+#endif
+ (rptr_addr & 0xfffffffc));
RADEON_WRITE(R600_CP_RB_RPTR_ADDR_HI,
upper_32_bits(rptr_addr));
{
u64 scratch_addr;
- scratch_addr = RADEON_READ(R600_CP_RB_RPTR_ADDR);
+ scratch_addr = RADEON_READ(R600_CP_RB_RPTR_ADDR) & 0xFFFFFFFC;
scratch_addr |= ((u64)RADEON_READ(R600_CP_RB_RPTR_ADDR_HI)) << 32;
scratch_addr += R600_SCRATCH_REG_OFFSET;
scratch_addr >>= 8;
}
if (!IS_ALIGNED(pitch, pitch_align)) {
- dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n",
- __func__, __LINE__, pitch);
+ dev_warn(p->dev, "%s:%d cb pitch (%d, 0x%x, %d) invalid\n",
+ __func__, __LINE__, pitch, pitch_align, array_mode);
return -EINVAL;
}
if (!IS_ALIGNED(height, height_align)) {
- dev_warn(p->dev, "%s:%d cb height (%d) invalid\n",
- __func__, __LINE__, height);
+ dev_warn(p->dev, "%s:%d cb height (%d, 0x%x, %d) invalid\n",
+ __func__, __LINE__, height, height_align, array_mode);
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
- dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset);
+ dev_warn(p->dev, "%s offset[%d] 0x%llx 0x%llx, %d not aligned\n", __func__, i,
+ base_offset, base_align, array_mode);
return -EINVAL;
}
* broken userspace.
*/
} else {
- dev_warn(p->dev, "%s offset[%d] %d %d %lu too big\n", __func__, i, track->cb_color_bo_offset[i], tmp, radeon_bo_size(track->cb_color_bo[i]));
+ dev_warn(p->dev, "%s offset[%d] %d %d %d %lu too big\n", __func__, i,
+ array_mode,
+ track->cb_color_bo_offset[i], tmp,
+ radeon_bo_size(track->cb_color_bo[i]));
return -EINVAL;
}
}
}
if (!IS_ALIGNED(pitch, pitch_align)) {
- dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n",
- __func__, __LINE__, pitch);
+ dev_warn(p->dev, "%s:%d db pitch (%d, 0x%x, %d) invalid\n",
+ __func__, __LINE__, pitch, pitch_align, array_mode);
return -EINVAL;
}
if (!IS_ALIGNED(height, height_align)) {
- dev_warn(p->dev, "%s:%d db height (%d) invalid\n",
- __func__, __LINE__, height);
+ dev_warn(p->dev, "%s:%d db height (%d, 0x%x, %d) invalid\n",
+ __func__, __LINE__, height, height_align, array_mode);
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
- dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset);
+ dev_warn(p->dev, "%s offset[%d] 0x%llx, 0x%llx, %d not aligned\n", __func__, i,
+ base_offset, base_align, array_mode);
return -EINVAL;
}
nviews = G_028004_SLICE_MAX(track->db_depth_view) + 1;
tmp = ntiles * bpe * 64 * nviews;
if ((tmp + track->db_offset) > radeon_bo_size(track->db_bo)) {
- dev_warn(p->dev, "z/stencil buffer too small (0x%08X %d %d %d -> %u have %lu)\n",
- track->db_depth_size, ntiles, nviews, bpe, tmp + track->db_offset,
- radeon_bo_size(track->db_bo));
+ dev_warn(p->dev, "z/stencil buffer (%d) too small (0x%08X %d %d %d -> %u have %lu)\n",
+ array_mode,
+ track->db_depth_size, ntiles, nviews, bpe, tmp + track->db_offset,
+ radeon_bo_size(track->db_bo));
return -EINVAL;
}
}
/* XXX check height as well... */
if (!IS_ALIGNED(pitch, pitch_align)) {
- dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n",
- __func__, __LINE__, pitch);
+ dev_warn(p->dev, "%s:%d tex pitch (%d, 0x%x, %d) invalid\n",
+ __func__, __LINE__, pitch, pitch_align, G_038000_TILE_MODE(word0));
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
- dev_warn(p->dev, "%s:%d tex base offset (0x%llx) invalid\n",
- __func__, __LINE__, base_offset);
+ dev_warn(p->dev, "%s:%d tex base offset (0x%llx, 0x%llx, %d) invalid\n",
+ __func__, __LINE__, base_offset, base_align, G_038000_TILE_MODE(word0));
return -EINVAL;
}
if (!IS_ALIGNED(mip_offset, base_align)) {
- dev_warn(p->dev, "%s:%d tex mip offset (0x%llx) invalid\n",
- __func__, __LINE__, mip_offset);
+ dev_warn(p->dev, "%s:%d tex mip offset (0x%llx, 0x%llx, %d) invalid\n",
+ __func__, __LINE__, mip_offset, base_align, G_038000_TILE_MODE(word0));
return -EINVAL;
}
#define R600_MEDIUM_VID_LOWER_GPIO_CNTL 0x720
#define R600_LOW_VID_LOWER_GPIO_CNTL 0x724
-
+#define R600_D1GRPH_SWAP_CONTROL 0x610C
+# define R600_D1GRPH_SWAP_ENDIAN_NONE (0 << 0)
+# define R600_D1GRPH_SWAP_ENDIAN_16BIT (1 << 0)
+# define R600_D1GRPH_SWAP_ENDIAN_32BIT (2 << 0)
+# define R600_D1GRPH_SWAP_ENDIAN_64BIT (3 << 0)
#define R600_HDP_NONSURFACE_BASE 0x2c04
#define ROQ_IB2_START(x) ((x) << 8)
#define CP_RB_BASE 0xC100
#define CP_RB_CNTL 0xC104
-#define RB_BUFSZ(x) ((x)<<0)
-#define RB_BLKSZ(x) ((x)<<8)
-#define RB_NO_UPDATE (1<<27)
-#define RB_RPTR_WR_ENA (1<<31)
+#define RB_BUFSZ(x) ((x) << 0)
+#define RB_BLKSZ(x) ((x) << 8)
+#define RB_NO_UPDATE (1 << 27)
+#define RB_RPTR_WR_ENA (1 << 31)
#define BUF_SWAP_32BIT (2 << 16)
#define CP_RB_RPTR 0x8700
#define CP_RB_RPTR_ADDR 0xC10C
+#define RB_RPTR_SWAP(x) ((x) << 0)
#define CP_RB_RPTR_ADDR_HI 0xC110
#define CP_RB_RPTR_WR 0xC108
#define CP_RB_WPTR 0xC114
void radeon_atombios_get_power_modes(struct radeon_device *rdev);
void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level);
void rs690_pm_info(struct radeon_device *rdev);
-extern u32 rv6xx_get_temp(struct radeon_device *rdev);
-extern u32 rv770_get_temp(struct radeon_device *rdev);
-extern u32 evergreen_get_temp(struct radeon_device *rdev);
-extern u32 sumo_get_temp(struct radeon_device *rdev);
+extern int rv6xx_get_temp(struct radeon_device *rdev);
+extern int rv770_get_temp(struct radeon_device *rdev);
+extern int evergreen_get_temp(struct radeon_device *rdev);
+extern int sumo_get_temp(struct radeon_device *rdev);
/*
* Fences.
fixed20_12 sclk;
fixed20_12 mclk;
fixed20_12 needed_bandwidth;
- /* XXX: use a define for num power modes */
- struct radeon_power_state power_state[8];
+ struct radeon_power_state *power_state;
/* number of valid power states */
int num_power_states;
int current_power_state_index;
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
.gart_set_page = &rs600_gart_set_page,
.ring_test = &r600_ring_test,
- .ring_ib_execute = &r600_ring_ib_execute,
+ .ring_ib_execute = &evergreen_ring_ib_execute,
.irq_set = &evergreen_irq_set,
.irq_process = &evergreen_irq_process,
.get_vblank_counter = &evergreen_get_vblank_counter,
bool evergreen_gpu_is_lockup(struct radeon_device *rdev);
int evergreen_asic_reset(struct radeon_device *rdev);
void evergreen_bandwidth_update(struct radeon_device *rdev);
+void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
int evergreen_copy_blit(struct radeon_device *rdev,
uint64_t src_offset, uint64_t dst_offset,
unsigned num_pages, struct radeon_fence *fence);
/* some evergreen boards have bad data for this entry */
if (ASIC_IS_DCE4(rdev)) {
if ((i == 7) &&
- (gpio->usClkMaskRegisterIndex == 0x1936) &&
+ (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) &&
(gpio->sucI2cId.ucAccess == 0)) {
gpio->sucI2cId.ucAccess = 0x97;
gpio->ucDataMaskShift = 8;
/* some DCE3 boards have bad data for this entry */
if (ASIC_IS_DCE3(rdev)) {
if ((i == 4) &&
- (gpio->usClkMaskRegisterIndex == 0x1fda) &&
+ (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) &&
(gpio->sucI2cId.ucAccess == 0x94))
gpio->sucI2cId.ucAccess = 0x14;
}
/* some evergreen boards have bad data for this entry */
if (ASIC_IS_DCE4(rdev)) {
if ((i == 7) &&
- (gpio->usClkMaskRegisterIndex == 0x1936) &&
+ (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) &&
(gpio->sucI2cId.ucAccess == 0)) {
gpio->sucI2cId.ucAccess = 0x97;
gpio->ucDataMaskShift = 8;
/* some DCE3 boards have bad data for this entry */
if (ASIC_IS_DCE3(rdev)) {
if ((i == 4) &&
- (gpio->usClkMaskRegisterIndex == 0x1fda) &&
+ (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) &&
(gpio->sucI2cId.ucAccess == 0x94))
gpio->sucI2cId.ucAccess = 0x14;
}
pin = &gpio_info->asGPIO_Pin[i];
if (id == pin->ucGPIO_ID) {
gpio.id = pin->ucGPIO_ID;
- gpio.reg = pin->usGpioPin_AIndex * 4;
+ gpio.reg = le16_to_cpu(pin->usGpioPin_AIndex) * 4;
gpio.mask = (1 << pin->ucGpioPinBitShift);
gpio.valid = true;
break;
*line_mux = 0x90;
}
- /* mac rv630 */
- if ((dev->pdev->device == 0x9588) &&
- (dev->pdev->subsystem_vendor == 0x106b) &&
- (dev->pdev->subsystem_device == 0x00a6)) {
- if ((supported_device == ATOM_DEVICE_TV1_SUPPORT) &&
- (*connector_type == DRM_MODE_CONNECTOR_DVII)) {
- *connector_type = DRM_MODE_CONNECTOR_9PinDIN;
- *line_mux = CONNECTOR_7PIN_DIN_ENUM_ID1;
- }
+ /* mac rv630, rv730, others */
+ if ((supported_device == ATOM_DEVICE_TV1_SUPPORT) &&
+ (*connector_type == DRM_MODE_CONNECTOR_DVII)) {
+ *connector_type = DRM_MODE_CONNECTOR_9PinDIN;
+ *line_mux = CONNECTOR_7PIN_DIN_ENUM_ID1;
}
/* ASUS HD 3600 XT board lists the DVI port as HDMI */
p1pll->pll_out_min = 64800;
else
p1pll->pll_out_min = 20000;
- } else if (p1pll->pll_out_min > 64800) {
- /* Limiting the pll output range is a good thing generally as
- * it limits the number of possible pll combinations for a given
- * frequency presumably to the ones that work best on each card.
- * However, certain duallink DVI monitors seem to like
- * pll combinations that would be limited by this at least on
- * pre-DCE 3.0 r6xx hardware. This might need to be adjusted per
- * family.
- */
- p1pll->pll_out_min = 64800;
}
p1pll->pll_in_min =
data_offset);
switch (crev) {
case 1:
- if (igp_info->info.ulBootUpMemoryClock)
+ if (le32_to_cpu(igp_info->info.ulBootUpMemoryClock))
return true;
break;
case 2:
- if (igp_info->info_2.ulBootUpSidePortClock)
+ if (le32_to_cpu(igp_info->info_2.ulBootUpSidePortClock))
return true;
break;
default:
for (i = 0; i < num_indices; i++) {
if ((ss_info->info.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= ss_info->info.asSpreadSpectrum[i].ulTargetClockRange)) {
+ (clock <= le32_to_cpu(ss_info->info.asSpreadSpectrum[i].ulTargetClockRange))) {
ss->percentage =
le16_to_cpu(ss_info->info.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
ss->type = ss_info->info.asSpreadSpectrum[i].ucSpreadSpectrumMode;
sizeof(ATOM_ASIC_SS_ASSIGNMENT_V2);
for (i = 0; i < num_indices; i++) {
if ((ss_info->info_2.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= ss_info->info_2.asSpreadSpectrum[i].ulTargetClockRange)) {
+ (clock <= le32_to_cpu(ss_info->info_2.asSpreadSpectrum[i].ulTargetClockRange))) {
ss->percentage =
le16_to_cpu(ss_info->info_2.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
ss->type = ss_info->info_2.asSpreadSpectrum[i].ucSpreadSpectrumMode;
sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3);
for (i = 0; i < num_indices; i++) {
if ((ss_info->info_3.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= ss_info->info_3.asSpreadSpectrum[i].ulTargetClockRange)) {
+ (clock <= le32_to_cpu(ss_info->info_3.asSpreadSpectrum[i].ulTargetClockRange))) {
ss->percentage =
le16_to_cpu(ss_info->info_3.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
ss->type = ss_info->info_3.asSpreadSpectrum[i].ucSpreadSpectrumMode;
if (misc & ATOM_DOUBLE_CLOCK_MODE)
lvds->native_mode.flags |= DRM_MODE_FLAG_DBLSCAN;
- lvds->native_mode.width_mm = lvds_info->info.sLCDTiming.usImageHSize;
- lvds->native_mode.height_mm = lvds_info->info.sLCDTiming.usImageVSize;
+ lvds->native_mode.width_mm = le16_to_cpu(lvds_info->info.sLCDTiming.usImageHSize);
+ lvds->native_mode.height_mm = le16_to_cpu(lvds_info->info.sLCDTiming.usImageVSize);
/* set crtc values */
drm_mode_set_crtcinfo(&lvds->native_mode, CRTC_INTERLACE_HALVE_V);
lvds->linkb = false;
/* parse the lcd record table */
- if (lvds_info->info.usModePatchTableOffset) {
+ if (le16_to_cpu(lvds_info->info.usModePatchTableOffset)) {
ATOM_FAKE_EDID_PATCH_RECORD *fake_edid_record;
ATOM_PANEL_RESOLUTION_PATCH_RECORD *panel_res_record;
bool bad_record = false;
u8 *record = (u8 *)(mode_info->atom_context->bios +
data_offset +
- lvds_info->info.usModePatchTableOffset);
+ le16_to_cpu(lvds_info->info.usModePatchTableOffset));
while (*record != ATOM_RECORD_END_TYPE) {
switch (*record) {
case LCD_MODE_PATCH_RECORD_MODE_TYPE:
num_modes = power_info->info.ucNumOfPowerModeEntries;
if (num_modes > ATOM_MAX_NUMBEROF_POWER_BLOCK)
num_modes = ATOM_MAX_NUMBEROF_POWER_BLOCK;
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) * num_modes, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
/* last mode is usually default, array is low to high */
for (i = 0; i < num_modes; i++) {
rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
firmware_info =
(union firmware_info *)(mode_info->atom_context->bios +
data_offset);
- vddc = firmware_info->info_14.usBootUpVDDCVoltage;
+ vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
}
return vddc;
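The string of le16_to_cpu()/le32_to_cpu() additions in these hunks all correct the same class of bug: ATOM BIOS tables are stored little-endian, so multi-byte fields read straight out of the table come back byte-swapped on big-endian hosts. A stand-alone user-space sketch of the idea, using a made-up two-byte buffer rather than any real ATOM structure:

#include <stdint.h>
#include <stdio.h>

/* Decode a 16-bit little-endian field from a raw BIOS image,
 * independent of the host's native byte order. */
static uint16_t le16_field(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

int main(void)
{
	/* Hypothetical table bytes: the value 0x1fda stored little-endian. */
	const uint8_t table[2] = { 0xda, 0x1f };

	/* Prints 0x1fda on every host, which is what the quirk check above needs. */
	printf("clk mask register index: 0x%04x\n", le16_field(table));
	return 0;
}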
rdev->pm.power_state[state_index].clock_info[mode_index].voltage.type =
VOLTAGE_SW;
rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage =
- clock_info->evergreen.usVDDC;
+ le16_to_cpu(clock_info->evergreen.usVDDC);
} else {
sclk = le16_to_cpu(clock_info->r600.usEngineClockLow);
sclk |= clock_info->r600.ucEngineClockHigh << 16;
rdev->pm.power_state[state_index].clock_info[mode_index].voltage.type =
VOLTAGE_SW;
rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage =
- clock_info->r600.usVDDC;
+ le16_to_cpu(clock_info->r600.usVDDC);
}
if (rdev->flags & RADEON_IS_IGP) {
power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
radeon_atombios_add_pplib_thermal_controller(rdev, &power_info->pplib.sThermalController);
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) *
+ power_info->pplib.ucNumStates, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
/* first mode is usually default, followed by low to high */
for (i = 0; i < power_info->pplib.ucNumStates; i++) {
mode_index = 0;
radeon_atombios_add_pplib_thermal_controller(rdev, &power_info->pplib.sThermalController);
state_array = (struct StateArray *)
(mode_info->atom_context->bios + data_offset +
- power_info->pplib.usStateArrayOffset);
+ le16_to_cpu(power_info->pplib.usStateArrayOffset));
clock_info_array = (struct ClockInfoArray *)
(mode_info->atom_context->bios + data_offset +
- power_info->pplib.usClockInfoArrayOffset);
+ le16_to_cpu(power_info->pplib.usClockInfoArrayOffset));
non_clock_info_array = (struct NonClockInfoArray *)
(mode_info->atom_context->bios + data_offset +
- power_info->pplib.usNonClockInfoArrayOffset);
+ le16_to_cpu(power_info->pplib.usNonClockInfoArrayOffset));
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) *
+ state_array->ucNumEntries, GFP_KERNEL);
+ if (!rdev->pm.power_state)
+ return state_index;
for (i = 0; i < state_array->ucNumEntries; i++) {
mode_index = 0;
power_state = (union pplib_power_state *)&state_array->states[i];
break;
}
} else {
- /* add the default mode */
- rdev->pm.power_state[state_index].type =
- POWER_STATE_TYPE_DEFAULT;
- rdev->pm.power_state[state_index].num_clock_modes = 1;
- rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk;
- rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk;
- rdev->pm.power_state[state_index].default_clock_mode =
- &rdev->pm.power_state[state_index].clock_info[0];
- rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
- rdev->pm.power_state[state_index].pcie_lanes = 16;
- rdev->pm.default_power_state_index = state_index;
- rdev->pm.power_state[state_index].flags = 0;
- state_index++;
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state), GFP_KERNEL);
+ if (rdev->pm.power_state) {
+ /* add the default mode */
+ rdev->pm.power_state[state_index].type =
+ POWER_STATE_TYPE_DEFAULT;
+ rdev->pm.power_state[state_index].num_clock_modes = 1;
+ rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk;
+ rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk;
+ rdev->pm.power_state[state_index].default_clock_mode =
+ &rdev->pm.power_state[state_index].clock_info[0];
+ rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
+ rdev->pm.power_state[state_index].pcie_lanes = 16;
+ rdev->pm.default_power_state_index = state_index;
+ rdev->pm.power_state[state_index].flags = 0;
+ state_index++;
+ }
}
rdev->pm.num_power_states = state_index;
int index = GetIndexIntoMasterTable(COMMAND, GetEngineClock);
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
- return args.ulReturnEngineClock;
+ return le32_to_cpu(args.ulReturnEngineClock);
}
uint32_t radeon_atom_get_memory_clock(struct radeon_device *rdev)
int index = GetIndexIntoMasterTable(COMMAND, GetMemoryClock);
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
- return args.ulReturnMemoryClock;
+ return le32_to_cpu(args.ulReturnMemoryClock);
}
void radeon_atom_set_engine_clock(struct radeon_device *rdev,
SET_ENGINE_CLOCK_PS_ALLOCATION args;
int index = GetIndexIntoMasterTable(COMMAND, SetEngineClock);
- args.ulTargetEngineClock = eng_clock; /* 10 khz */
+ args.ulTargetEngineClock = cpu_to_le32(eng_clock); /* 10 khz */
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
if (rdev->flags & RADEON_IS_IGP)
return;
- args.ulTargetMemoryClock = mem_clock; /* 10 khz */
+ args.ulTargetMemoryClock = cpu_to_le32(mem_clock); /* 10 khz */
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
bios_2_scratch &= ~ATOM_S2_VRI_BRIGHT_ENABLE;
/* tell the bios not to handle mode switching */
- bios_6_scratch |= (ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH | ATOM_S6_ACC_MODE);
+ bios_6_scratch |= ATOM_S6_ACC_BLOCK_DISPLAY_SWITCH;
if (rdev->family >= CHIP_R600) {
WREG32(R600_BIOS_2_SCRATCH, bios_2_scratch);
else
bios_6_scratch = RREG32(RADEON_BIOS_6_SCRATCH);
- if (lock)
+ if (lock) {
bios_6_scratch |= ATOM_S6_CRITICAL_STATE;
- else
+ bios_6_scratch &= ~ATOM_S6_ACC_MODE;
+ } else {
bios_6_scratch &= ~ATOM_S6_CRITICAL_STATE;
+ bios_6_scratch |= ATOM_S6_ACC_MODE;
+ }
if (rdev->family >= CHIP_R600)
WREG32(R600_BIOS_6_SCRATCH, bios_6_scratch);
(rdev->pdev->subsystem_device == 0x4a48)) {
/* Mac X800 */
rdev->mode_info.connector_table = CT_MAC_X800;
+ } else if ((rdev->pdev->device == 0x4150) &&
+ (rdev->pdev->subsystem_vendor == 0x1002) &&
+ (rdev->pdev->subsystem_device == 0x4150)) {
+ /* Mac G5 9600 */
+ rdev->mode_info.connector_table = CT_MAC_G5_9600;
} else
#endif /* CONFIG_PPC_PMAC */
#ifdef CONFIG_PPC64
CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I,
&hpd);
break;
+ case CT_MAC_G5_9600:
+ DRM_INFO("Connector Table: %d (mac g5 9600)\n",
+ rdev->mode_info.connector_table);
+ /* DVI - tv dac, dvo */
+ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_DVI, 0, 0);
+ hpd.hpd = RADEON_HPD_1; /* ??? */
+ radeon_add_legacy_encoder(dev,
+ radeon_get_encoder_enum(dev,
+ ATOM_DEVICE_DFP2_SUPPORT,
+ 0),
+ ATOM_DEVICE_DFP2_SUPPORT);
+ radeon_add_legacy_encoder(dev,
+ radeon_get_encoder_enum(dev,
+ ATOM_DEVICE_CRT2_SUPPORT,
+ 2),
+ ATOM_DEVICE_CRT2_SUPPORT);
+ radeon_add_legacy_connector(dev, 0,
+ ATOM_DEVICE_DFP2_SUPPORT |
+ ATOM_DEVICE_CRT2_SUPPORT,
+ DRM_MODE_CONNECTOR_DVII, &ddc_i2c,
+ CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I,
+ &hpd);
+ /* ADC - primary dac, internal tmds */
+ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_VGA, 0, 0);
+ hpd.hpd = RADEON_HPD_2; /* ??? */
+ radeon_add_legacy_encoder(dev,
+ radeon_get_encoder_enum(dev,
+ ATOM_DEVICE_DFP1_SUPPORT,
+ 0),
+ ATOM_DEVICE_DFP1_SUPPORT);
+ radeon_add_legacy_encoder(dev,
+ radeon_get_encoder_enum(dev,
+ ATOM_DEVICE_CRT1_SUPPORT,
+ 1),
+ ATOM_DEVICE_CRT1_SUPPORT);
+ radeon_add_legacy_connector(dev, 1,
+ ATOM_DEVICE_DFP1_SUPPORT |
+ ATOM_DEVICE_CRT1_SUPPORT,
+ DRM_MODE_CONNECTOR_DVII, &ddc_i2c,
+ CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I,
+ &hpd);
+ break;
default:
DRM_INFO("Connector table: %d (invalid)\n",
rdev->mode_info.connector_table);
rdev->pm.default_power_state_index = -1;
+ /* allocate 2 power states */
+ rdev->pm.power_state = kzalloc(sizeof(struct radeon_power_state) * 2, GFP_KERNEL);
+ if (!rdev->pm.power_state) {
+ rdev->pm.default_power_state_index = state_index;
+ rdev->pm.num_power_states = 0;
+
+ rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
+ rdev->pm.current_clock_mode_index = 0;
+ return;
+ }
+
if (rdev->flags & RADEON_IS_MOBILITY) {
offset = combios_get_table_offset(dev, COMBIOS_POWERPLAY_INFO_TABLE);
if (offset) {
pci_disable_device(dev->pdev);
pci_set_power_state(dev->pdev, PCI_D3hot);
}
- acquire_console_sem();
+ console_lock();
radeon_fbdev_set_suspend(rdev, 1);
- release_console_sem();
+ console_unlock();
return 0;
}
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
- acquire_console_sem();
+ console_lock();
pci_set_power_state(dev->pdev, PCI_D0);
pci_restore_state(dev->pdev);
if (pci_enable_device(dev->pdev)) {
- release_console_sem();
+ console_unlock();
return -1;
}
pci_set_master(dev->pdev);
radeon_restore_bios_scratch_regs(rdev);
radeon_fbdev_set_suspend(rdev, 0);
- release_console_sem();
+ console_unlock();
/* reset hpd state */
radeon_hpd_init(rdev);
int radeon_gpu_reset(struct radeon_device *rdev)
{
int r;
+ int resched;
radeon_save_bios_scratch_regs(rdev);
+ /* block TTM */
+ resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
radeon_suspend(rdev);
r = radeon_asic_reset(rdev);
radeon_resume(rdev);
radeon_restore_bios_scratch_regs(rdev);
drm_helper_resume_force_mode(rdev->ddev);
+ ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
return 0;
}
/* bad news, how to tell it to userspace ? */
return ret;
}
+/* avivo */
+static void avivo_get_fb_div(struct radeon_pll *pll,
+ u32 target_clock,
+ u32 post_div,
+ u32 ref_div,
+ u32 *fb_div,
+ u32 *frac_fb_div)
+{
+ u32 tmp = post_div * ref_div;
+
+ tmp *= target_clock;
+ *fb_div = tmp / pll->reference_freq;
+ *frac_fb_div = tmp % pll->reference_freq;
+
+ if (*fb_div > pll->max_feedback_div)
+ *fb_div = pll->max_feedback_div;
+ else if (*fb_div < pll->min_feedback_div)
+ *fb_div = pll->min_feedback_div;
+}
+
+static u32 avivo_get_post_div(struct radeon_pll *pll,
+ u32 target_clock)
+{
+ u32 vco, post_div, tmp;
+
+ if (pll->flags & RADEON_PLL_USE_POST_DIV)
+ return pll->post_div;
+
+ if (pll->flags & RADEON_PLL_PREFER_MINM_OVER_MAXP) {
+ if (pll->flags & RADEON_PLL_IS_LCD)
+ vco = pll->lcd_pll_out_min;
+ else
+ vco = pll->pll_out_min;
+ } else {
+ if (pll->flags & RADEON_PLL_IS_LCD)
+ vco = pll->lcd_pll_out_max;
+ else
+ vco = pll->pll_out_max;
+ }
+
+ post_div = vco / target_clock;
+ tmp = vco % target_clock;
+
+ if (pll->flags & RADEON_PLL_PREFER_MINM_OVER_MAXP) {
+ if (tmp)
+ post_div++;
+ } else {
+ if (!tmp)
+ post_div--;
+ }
+
+ if (post_div > pll->max_post_div)
+ post_div = pll->max_post_div;
+ else if (post_div < pll->min_post_div)
+ post_div = pll->min_post_div;
+
+ return post_div;
+}
+
+#define MAX_TOLERANCE 10
+
+void radeon_compute_pll_avivo(struct radeon_pll *pll,
+ u32 freq,
+ u32 *dot_clock_p,
+ u32 *fb_div_p,
+ u32 *frac_fb_div_p,
+ u32 *ref_div_p,
+ u32 *post_div_p)
+{
+ u32 target_clock = freq / 10;
+ u32 post_div = avivo_get_post_div(pll, target_clock);
+ u32 ref_div = pll->min_ref_div;
+ u32 fb_div = 0, frac_fb_div = 0, tmp;
+
+ if (pll->flags & RADEON_PLL_USE_REF_DIV)
+ ref_div = pll->reference_div;
+
+ if (pll->flags & RADEON_PLL_USE_FRAC_FB_DIV) {
+ avivo_get_fb_div(pll, target_clock, post_div, ref_div, &fb_div, &frac_fb_div);
+ frac_fb_div = (100 * frac_fb_div) / pll->reference_freq;
+ if (frac_fb_div >= 5) {
+ frac_fb_div -= 5;
+ frac_fb_div = frac_fb_div / 10;
+ frac_fb_div++;
+ }
+ if (frac_fb_div >= 10) {
+ fb_div++;
+ frac_fb_div = 0;
+ }
+ } else {
+ while (ref_div <= pll->max_ref_div) {
+ avivo_get_fb_div(pll, target_clock, post_div, ref_div,
+ &fb_div, &frac_fb_div);
+ if (frac_fb_div >= (pll->reference_freq / 2))
+ fb_div++;
+ frac_fb_div = 0;
+ tmp = (pll->reference_freq * fb_div) / (post_div * ref_div);
+ tmp = (tmp * 10000) / target_clock;
+
+ if (tmp > (10000 + MAX_TOLERANCE))
+ ref_div++;
+ else if (tmp >= (10000 - MAX_TOLERANCE))
+ break;
+ else
+ ref_div++;
+ }
+ }
+
+ *dot_clock_p = ((pll->reference_freq * fb_div * 10) + (pll->reference_freq * frac_fb_div)) /
+ (ref_div * post_div * 10);
+ *fb_div_p = fb_div;
+ *frac_fb_div_p = frac_fb_div;
+ *ref_div_p = ref_div;
+ *post_div_p = post_div;
+ DRM_DEBUG_KMS("%d, pll dividers - fb: %d.%d ref: %d, post %d\n",
+ *dot_clock_p, fb_div, frac_fb_div, ref_div, post_div);
+}
+
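The new radeon_compute_pll_avivo() picks post, reference and (fractional) feedback dividers so that reference_freq * (fb + frac/10) / (ref_div * post_div) lands on the requested clock, and reports the result with the integer formula at the end of the function. A rough user-space check of that relationship with invented divider values, not taken from any real mode:

#include <stdint.h>
#include <stdio.h>

/* Same integer formula radeon_compute_pll_avivo() uses to report the
 * resulting dot clock: ref_freq * (fb + frac/10) / (ref_div * post_div). */
static uint32_t dot_clock(uint32_t ref_freq, uint32_t fb, uint32_t frac,
			  uint32_t ref_div, uint32_t post_div)
{
	return (ref_freq * fb * 10 + ref_freq * frac) /
	       (ref_div * post_div * 10);
}

int main(void)
{
	/* Invented example: 27 MHz reference (2700 in 10 kHz units),
	 * feedback divider 100.5, reference divider 4, post divider 6. */
	printf("dot clock: %u (10 kHz units)\n",
	       dot_clock(2700, 100, 5, 4, 6)); /* prints 11306, i.e. ~113.06 MHz */
	return 0;
}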
+/* pre-avivo */
static inline uint32_t radeon_div(uint64_t n, uint32_t d)
{
uint64_t mod;
return n;
}
-void radeon_compute_pll(struct radeon_pll *pll,
- uint64_t freq,
- uint32_t *dot_clock_p,
- uint32_t *fb_div_p,
- uint32_t *frac_fb_div_p,
- uint32_t *ref_div_p,
- uint32_t *post_div_p)
+void radeon_compute_pll_legacy(struct radeon_pll *pll,
+ uint64_t freq,
+ uint32_t *dot_clock_p,
+ uint32_t *fb_div_p,
+ uint32_t *frac_fb_div_p,
+ uint32_t *ref_div_p,
+ uint32_t *post_div_p)
{
uint32_t min_ref_div = pll->min_ref_div;
uint32_t max_ref_div = pll->max_ref_div;
pll_out_max = pll->pll_out_max;
}
+ if (pll_out_min > 64800)
+ pll_out_min = 64800;
+
if (pll->flags & RADEON_PLL_USE_REF_DIV)
min_ref_div = max_ref_div = pll->reference_div;
else {
max_fractional_feed_div = pll->max_frac_feedback_div;
}
- for (post_div = max_post_div; post_div >= min_post_div; --post_div) {
+ for (post_div = min_post_div; post_div <= max_post_div; ++post_div) {
uint32_t ref_div;
if ((pll->flags & RADEON_PLL_NO_ODD_POST_DIV) && (post_div & 1))
*frac_fb_div_p = best_frac_feedback_div;
*ref_div_p = best_ref_div;
*post_div_p = best_post_div;
+ DRM_DEBUG_KMS("%d %d, pll dividers - fb: %d.%d ref: %d, post %d\n",
+ freq, best_freq / 1000, best_feedback_div, best_frac_feedback_div,
+ best_ref_div, best_post_div);
+
}
static void radeon_user_framebuffer_destroy(struct drm_framebuffer *fb)
* - 2.5.0 - add get accel 2 to work around ddx breakage for evergreen
* - 2.6.0 - add tiling config query (r6xx+), add initial HiZ support (r300->r500)
* 2.7.0 - fixups for r600 2D tiling support. (no external ABI change), add eg dyn gpr regs
- * 2.8.0 - pageflip support, r500 US_FORMAT regs. r500 ARGB2101010 colorbuf, r300->r500 CMASK
+ * 2.8.0 - pageflip support, r500 US_FORMAT regs. r500 ARGB2101010 colorbuf, r300->r500 CMASK, clock crystal query
*/
#define KMS_DRIVER_MAJOR 2
#define KMS_DRIVER_MINOR 8
#define R600_CP_RB_CNTL 0xc104
# define R600_RB_BUFSZ(x) ((x) << 0)
# define R600_RB_BLKSZ(x) ((x) << 8)
+# define R600_BUF_SWAP_32BIT (2 << 16)
# define R600_RB_NO_UPDATE (1 << 27)
# define R600_RB_RPTR_WR_ENA (1 << 31)
#define R600_CP_RB_RPTR_WR 0xc108
switch (connector->connector_type) {
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */
- if (drm_detect_monitor_audio(radeon_connector->edid)) {
+ if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) {
/* fix me */
if (ASIC_IS_DCE4(rdev))
return ATOM_ENCODER_MODE_DVI;
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
default:
- if (drm_detect_monitor_audio(radeon_connector->edid)) {
+ if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) {
/* fix me */
if (ASIC_IS_DCE4(rdev))
return ATOM_ENCODER_MODE_DVI;
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
(dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
return ATOM_ENCODER_MODE_DP;
- else if (drm_detect_monitor_audio(radeon_connector->edid)) {
+ else if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) {
/* fix me */
if (ASIC_IS_DCE4(rdev))
return ATOM_ENCODER_MODE_DVI;
args.v1.ucAction = action;
if (action == ATOM_TRANSMITTER_ACTION_INIT) {
- args.v1.usInitInfo = connector_object_id;
+ args.v1.usInitInfo = cpu_to_le16(connector_object_id);
} else if (action == ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH) {
args.v1.asMode.ucLaneSel = lane_num;
args.v1.asMode.ucLaneSet = lane_set;
if (!ASIC_IS_DCE4(rdev))
return;
- if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) ||
+ if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) &&
(action != ATOM_TRANSMITTER_ACTION_POWER_OFF))
return;
case 3:
args.v3.sExtEncoder.ucAction = action;
if (action == EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT)
- args.v3.sExtEncoder.usConnectorId = connector_object_id;
+ args.v3.sExtEncoder.usConnectorId = cpu_to_le16(connector_object_id);
else
args.v3.sExtEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
args.v3.sExtEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder);
}
/* set scaler clears this on some chips */
- /* XXX check DCE4 */
- if (!(radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT))) {
- if (ASIC_IS_AVIVO(rdev) && (mode->flags & DRM_MODE_FLAG_INTERLACE))
- WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset,
- AVIVO_D1MODE_INTERLEAVE_EN);
+ if (ASIC_IS_AVIVO(rdev) &&
+ (!(radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)))) {
+ if (ASIC_IS_DCE4(rdev)) {
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset,
+ EVERGREEN_INTERLEAVE_EN);
+ else
+ WREG32(EVERGREEN_DATA_FORMAT + radeon_crtc->crtc_offset, 0);
+ } else {
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset,
+ AVIVO_D1MODE_INTERLEAVE_EN);
+ else
+ WREG32(AVIVO_D1MODE_DATA_FORMAT + radeon_crtc->crtc_offset, 0);
+ }
}
}
int radeon_irq_kms_init(struct radeon_device *rdev)
{
+ int i;
int r = 0;
INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
spin_lock_init(&rdev->irq.sw_lock);
+ for (i = 0; i < rdev->num_crtc; i++)
+ spin_lock_init(&rdev->irq.pflip_lock[i]);
r = drm_vblank_init(rdev->ddev, rdev->num_crtc);
if (r) {
return r;
}
radeon_set_filp_rights(dev, &rdev->cmask_filp, filp, &value);
break;
+ case RADEON_INFO_CLOCK_CRYSTAL_FREQ:
+ /* return clock value in KHz */
+ value = rdev->clock.spll.reference_freq * 10;
+ break;
default:
DRM_DEBUG_KMS("Invalid request %d\n", info->request);
return -EINVAL;
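The new RADEON_INFO_CLOCK_CRYSTAL_FREQ query hands the SPLL reference crystal to userspace in kHz; the driver keeps clocks in 10 kHz units internally, hence the multiply by 10. A one-liner with an assumed 27 MHz crystal:

#include <stdio.h>

int main(void)
{
	unsigned int reference_freq = 2700;		/* 10 kHz units, i.e. 27 MHz */
	printf("crystal: %u kHz\n", reference_freq * 10);	/* 27000 kHz */
	return 0;
}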
struct radeon_device *rdev = dev->dev_private;
if (rdev->hyperz_filp == file_priv)
rdev->hyperz_filp = NULL;
+ if (rdev->cmask_filp == file_priv)
+ rdev->cmask_filp = NULL;
}
/*
DRM_DEBUG_KMS("\n");
if (!use_bios_divs) {
- radeon_compute_pll(pll, mode->clock,
- &freq, &feedback_div, &frac_fb_div,
- &reference_div, &post_divider);
+ radeon_compute_pll_legacy(pll, mode->clock,
+ &freq, &feedback_div, &frac_fb_div,
+ &reference_div, &post_divider);
for (post_div = &post_divs[0]; post_div->divider; ++post_div) {
if (post_div->divider == post_divider)
#define RADEON_PLL_PREFER_CLOSEST_LOWER (1 << 11)
#define RADEON_PLL_USE_POST_DIV (1 << 12)
#define RADEON_PLL_IS_LCD (1 << 13)
+#define RADEON_PLL_PREFER_MINM_OVER_MAXP (1 << 14)
struct radeon_pll {
/* reference frequency */
CT_EMAC,
CT_RN50_POWER,
CT_MAC_X800,
+ CT_MAC_G5_9600,
};
enum radeon_dvo_chip {
struct radeon_atom_ss *ss,
int id, u32 clock);
-extern void radeon_compute_pll(struct radeon_pll *pll,
- uint64_t freq,
- uint32_t *dot_clock_p,
- uint32_t *fb_div_p,
- uint32_t *frac_fb_div_p,
- uint32_t *ref_div_p,
- uint32_t *post_div_p);
+extern void radeon_compute_pll_legacy(struct radeon_pll *pll,
+ uint64_t freq,
+ uint32_t *dot_clock_p,
+ uint32_t *fb_div_p,
+ uint32_t *frac_fb_div_p,
+ uint32_t *ref_div_p,
+ uint32_t *post_div_p);
+
+extern void radeon_compute_pll_avivo(struct radeon_pll *pll,
+ u32 freq,
+ u32 *dot_clock_p,
+ u32 *fb_div_p,
+ u32 *frac_fb_div_p,
+ u32 *ref_div_p,
+ u32 *post_div_p);
extern void radeon_setup_encoder_clones(struct drm_device *dev);
{
struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
struct radeon_device *rdev = ddev->dev_private;
- u32 temp;
+ int temp;
switch (rdev->pm.int_thermal_type) {
case THERMAL_TYPE_RV6XX:
#endif
}
+ if (rdev->pm.power_state)
+ kfree(rdev->pm.power_state);
+
radeon_hwmon_fini(rdev);
}
#define RADEON_CONFIG_APER_SIZE 0x0108
#define RADEON_CONFIG_BONDS 0x00e8
#define RADEON_CONFIG_CNTL 0x00e0
+# define RADEON_CFG_VGA_RAM_EN (1 << 8)
+# define RADEON_CFG_VGA_IO_DIS (1 << 9)
# define RADEON_CFG_ATI_REV_A11 (0 << 16)
# define RADEON_CFG_ATI_REV_A12 (1 << 16)
# define RADEON_CFG_ATI_REV_A13 (2 << 16)
radeon_mem_types_list[i].show = &radeon_mm_dump_table;
radeon_mem_types_list[i].driver_features = 0;
if (i == 0)
- radeon_mem_types_list[i].data = &rdev->mman.bdev.man[TTM_PL_VRAM].priv;
+ radeon_mem_types_list[i].data = rdev->mman.bdev.man[TTM_PL_VRAM].priv;
else
- radeon_mem_types_list[i].data = &rdev->mman.bdev.man[TTM_PL_TT].priv;
+ radeon_mem_types_list[i].data = rdev->mman.bdev.man[TTM_PL_TT].priv;
}
/* Add ttm page pool to debugfs */
0x4DF4 US_ALU_CONST_G_31
0x4DF8 US_ALU_CONST_B_31
0x4DFC US_ALU_CONST_A_31
-0x4E04 RB3D_BLENDCNTL_R3
0x4E08 RB3D_ABLENDCNTL_R3
-0x4E0C RB3D_COLOR_CHANNEL_MASK
0x4E10 RB3D_CONSTANT_COLOR
0x4E14 RB3D_COLOR_CLEAR_VALUE
0x4E18 RB3D_ROPCNTL_R3
0x4E74 RB3D_CMASK_WRINDEX
0x4E78 RB3D_CMASK_DWORD
0x4E7C RB3D_CMASK_RDINDEX
-0x4E80 RB3D_AARESOLVE_OFFSET
-0x4E84 RB3D_AARESOLVE_PITCH
-0x4E88 RB3D_AARESOLVE_CTL
0x4EA0 RB3D_DISCARD_SRC_PIXEL_LTE_THRESHOLD
0x4EA4 RB3D_DISCARD_SRC_PIXEL_GTE_THRESHOLD
0x4F04 ZB_ZSTENCILCNTL
0x4F08 ZB_STENCILREFMASK
0x4F14 ZB_ZTOP
0x4F18 ZB_ZCACHE_CTLSTAT
+0x4F28 ZB_DEPTHCLEARVALUE
0x4F58 ZB_ZPASS_DATA
0x401C GB_SELECT
0x4020 GB_AA_CONFIG
0x4024 GB_FIFO_SIZE
-0x4028 GB_Z_PEQ_CONFIG
0x4100 TX_INVALTAGS
0x4200 GA_POINT_S0
0x4204 GA_POINT_T0
0x4DF4 US_ALU_CONST_G_31
0x4DF8 US_ALU_CONST_B_31
0x4DFC US_ALU_CONST_A_31
-0x4E04 RB3D_BLENDCNTL_R3
0x4E08 RB3D_ABLENDCNTL_R3
-0x4E0C RB3D_COLOR_CHANNEL_MASK
0x4E10 RB3D_CONSTANT_COLOR
0x4E14 RB3D_COLOR_CLEAR_VALUE
0x4E18 RB3D_ROPCNTL_R3
0x4E74 RB3D_CMASK_WRINDEX
0x4E78 RB3D_CMASK_DWORD
0x4E7C RB3D_CMASK_RDINDEX
-0x4E80 RB3D_AARESOLVE_OFFSET
-0x4E84 RB3D_AARESOLVE_PITCH
-0x4E88 RB3D_AARESOLVE_CTL
0x4EA0 RB3D_DISCARD_SRC_PIXEL_LTE_THRESHOLD
0x4EA4 RB3D_DISCARD_SRC_PIXEL_GTE_THRESHOLD
0x4F04 ZB_ZSTENCILCNTL
0x4F08 ZB_STENCILREFMASK
0x4F14 ZB_ZTOP
0x4F18 ZB_ZCACHE_CTLSTAT
+0x4F28 ZB_DEPTHCLEARVALUE
0x4F58 ZB_ZPASS_DATA
0x4DF4 US_ALU_CONST_G_31
0x4DF8 US_ALU_CONST_B_31
0x4DFC US_ALU_CONST_A_31
-0x4E04 RB3D_BLENDCNTL_R3
0x4E08 RB3D_ABLENDCNTL_R3
-0x4E0C RB3D_COLOR_CHANNEL_MASK
0x4E10 RB3D_CONSTANT_COLOR
0x4E14 RB3D_COLOR_CLEAR_VALUE
0x4E18 RB3D_ROPCNTL_R3
0x4E74 RB3D_CMASK_WRINDEX
0x4E78 RB3D_CMASK_DWORD
0x4E7C RB3D_CMASK_RDINDEX
-0x4E80 RB3D_AARESOLVE_OFFSET
-0x4E84 RB3D_AARESOLVE_PITCH
-0x4E88 RB3D_AARESOLVE_CTL
0x4EA0 RB3D_DISCARD_SRC_PIXEL_LTE_THRESHOLD
0x4EA4 RB3D_DISCARD_SRC_PIXEL_GTE_THRESHOLD
0x4F04 ZB_ZSTENCILCNTL
0x4F08 ZB_STENCILREFMASK
0x4F14 ZB_ZTOP
0x4F18 ZB_ZCACHE_CTLSTAT
+0x4F28 ZB_DEPTHCLEARVALUE
0x4F58 ZB_ZPASS_DATA
0x401C GB_SELECT
0x4020 GB_AA_CONFIG
0x4024 GB_FIFO_SIZE
-0x4028 GB_Z_PEQ_CONFIG
0x4100 TX_INVALTAGS
0x4114 SU_TEX_WRAP_PS3
0x4118 PS3_ENABLE
0x4DF4 US_ALU_CONST_G_31
0x4DF8 US_ALU_CONST_B_31
0x4DFC US_ALU_CONST_A_31
-0x4E04 RB3D_BLENDCNTL_R3
0x4E08 RB3D_ABLENDCNTL_R3
-0x4E0C RB3D_COLOR_CHANNEL_MASK
0x4E10 RB3D_CONSTANT_COLOR
0x4E14 RB3D_COLOR_CLEAR_VALUE
0x4E18 RB3D_ROPCNTL_R3
0x4E74 RB3D_CMASK_WRINDEX
0x4E78 RB3D_CMASK_DWORD
0x4E7C RB3D_CMASK_RDINDEX
-0x4E80 RB3D_AARESOLVE_OFFSET
-0x4E84 RB3D_AARESOLVE_PITCH
-0x4E88 RB3D_AARESOLVE_CTL
0x4EA0 RB3D_DISCARD_SRC_PIXEL_LTE_THRESHOLD
0x4EA4 RB3D_DISCARD_SRC_PIXEL_GTE_THRESHOLD
0x4EF8 RB3D_CONSTANT_COLOR_AR
0x4F14 ZB_ZTOP
0x4F18 ZB_ZCACHE_CTLSTAT
0x4F58 ZB_ZPASS_DATA
+0x4F28 ZB_DEPTHCLEARVALUE
0x4FD4 ZB_STENCILREFMASK_BF
radeon_gart_table_ram_free(rdev);
}
+#define RS400_PTE_WRITEABLE (1 << 2)
+#define RS400_PTE_READABLE (1 << 3)
+
int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr)
{
uint32_t entry;
entry = (lower_32_bits(addr) & PAGE_MASK) |
((upper_32_bits(addr) & 0xff) << 4) |
- 0xc;
+ RS400_PTE_WRITEABLE | RS400_PTE_READABLE;
entry = cpu_to_le32(entry);
rdev->gart.table.ram.ptr[i] = entry;
return 0;
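The 0xc constant being replaced in rs400_gart_set_page() was simply the two GART page-table flag bits OR'd together; the new names only make that explicit. A trivial check of the equivalence:

#include <assert.h>
#include <stdio.h>

#define RS400_PTE_WRITEABLE (1 << 2)
#define RS400_PTE_READABLE  (1 << 3)

int main(void)
{
	/* The named flags encode exactly the old magic number. */
	assert((RS400_PTE_WRITEABLE | RS400_PTE_READABLE) == 0xc);
	printf("flags = 0x%x\n", RS400_PTE_WRITEABLE | RS400_PTE_READABLE);
	return 0;
}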
for (i = 0; i < rdev->usec_timeout; i++) {
/* read MC_STATUS */
- tmp = RREG32(0x0150);
- if (tmp & (1 << 2)) {
+ tmp = RREG32(RADEON_MC_STATUS);
+ if (tmp & RADEON_MC_IDLE) {
return 0;
}
DRM_UDELAY(1);
r420_pipes_init(rdev);
if (rs400_mc_wait_for_idle(rdev)) {
printk(KERN_WARNING "rs400: Failed to wait MC idle while "
- "programming pipes. Bad things might happen. %08x\n", RREG32(0x150));
+ "programming pipes. Bad things might happen. %08x\n", RREG32(RADEON_MC_STATUS));
}
}
seq_printf(m, "MCCFG_AGP_BASE_2 0x%08x\n", tmp);
tmp = RREG32_MC(RS690_MCCFG_AGP_LOCATION);
seq_printf(m, "MCCFG_AGP_LOCATION 0x%08x\n", tmp);
- tmp = RREG32_MC(0x100);
+ tmp = RREG32_MC(RS690_MCCFG_FB_LOCATION);
seq_printf(m, "MCCFG_FB_LOCATION 0x%08x\n", tmp);
- tmp = RREG32(0x134);
+ tmp = RREG32(RS690_HDP_FB_LOCATION);
seq_printf(m, "HDP_FB_LOCATION 0x%08x\n", tmp);
} else {
tmp = RREG32(RADEON_AGP_BASE);
switch (crev) {
case 1:
tmp.full = dfixed_const(100);
- rdev->pm.igp_sideport_mclk.full = dfixed_const(info->info.ulBootUpMemoryClock);
+ rdev->pm.igp_sideport_mclk.full = dfixed_const(le32_to_cpu(info->info.ulBootUpMemoryClock));
rdev->pm.igp_sideport_mclk.full = dfixed_div(rdev->pm.igp_sideport_mclk, tmp);
- if (info->info.usK8MemoryClock)
+ if (le16_to_cpu(info->info.usK8MemoryClock))
rdev->pm.igp_system_mclk.full = dfixed_const(le16_to_cpu(info->info.usK8MemoryClock));
else if (rdev->clock.default_mclk) {
rdev->pm.igp_system_mclk.full = dfixed_const(rdev->clock.default_mclk);
break;
case 2:
tmp.full = dfixed_const(100);
- rdev->pm.igp_sideport_mclk.full = dfixed_const(info->info_v2.ulBootUpSidePortClock);
+ rdev->pm.igp_sideport_mclk.full = dfixed_const(le32_to_cpu(info->info_v2.ulBootUpSidePortClock));
rdev->pm.igp_sideport_mclk.full = dfixed_div(rdev->pm.igp_sideport_mclk, tmp);
- if (info->info_v2.ulBootUpUMAClock)
- rdev->pm.igp_system_mclk.full = dfixed_const(info->info_v2.ulBootUpUMAClock);
+ if (le32_to_cpu(info->info_v2.ulBootUpUMAClock))
+ rdev->pm.igp_system_mclk.full = dfixed_const(le32_to_cpu(info->info_v2.ulBootUpUMAClock));
else if (rdev->clock.default_mclk)
rdev->pm.igp_system_mclk.full = dfixed_const(rdev->clock.default_mclk);
else
rdev->pm.igp_system_mclk.full = dfixed_const(66700);
rdev->pm.igp_system_mclk.full = dfixed_div(rdev->pm.igp_system_mclk, tmp);
- rdev->pm.igp_ht_link_clk.full = dfixed_const(info->info_v2.ulHTLinkFreq);
+ rdev->pm.igp_ht_link_clk.full = dfixed_const(le32_to_cpu(info->info_v2.ulHTLinkFreq));
rdev->pm.igp_ht_link_clk.full = dfixed_div(rdev->pm.igp_ht_link_clk, tmp);
rdev->pm.igp_ht_link_width.full = dfixed_const(le16_to_cpu(info->info_v2.usMinHTLinkWidth));
break;
ISYNC_CPSCRATCH_IDLEGUI);
radeon_ring_write(rdev, PACKET0(WAIT_UNTIL, 0));
radeon_ring_write(rdev, WAIT_2D_IDLECLEAN | WAIT_3D_IDLECLEAN);
- radeon_ring_write(rdev, PACKET0(0x170C, 0));
- radeon_ring_write(rdev, 1 << 31);
+ radeon_ring_write(rdev, PACKET0(R300_DST_PIPE_CONFIG, 0));
+ radeon_ring_write(rdev, R300_PIPE_AUTO_CONFIG);
radeon_ring_write(rdev, PACKET0(GB_SELECT, 0));
radeon_ring_write(rdev, 0);
radeon_ring_write(rdev, PACKET0(GB_ENABLE, 0));
radeon_ring_write(rdev, 0);
- radeon_ring_write(rdev, PACKET0(0x42C8, 0));
+ radeon_ring_write(rdev, PACKET0(R500_SU_REG_DEST, 0));
radeon_ring_write(rdev, (1 << rdev->num_gb_pipes) - 1);
radeon_ring_write(rdev, PACKET0(VAP_INDEX_OFFSET, 0));
radeon_ring_write(rdev, 0);
}
rv515_vga_render_disable(rdev);
r420_pipes_init(rdev);
- gb_pipe_select = RREG32(0x402C);
- tmp = RREG32(0x170C);
+ gb_pipe_select = RREG32(R400_GB_PIPE_SELECT);
+ tmp = RREG32(R300_DST_PIPE_CONFIG);
pipe_select_current = (tmp >> 2) & 3;
tmp = (1 << pipe_select_current) |
(((gb_pipe_select >> 8) & 0xF) << 4);
}
/* get temperature in millidegrees */
-u32 rv770_get_temp(struct radeon_device *rdev)
+int rv770_get_temp(struct radeon_device *rdev)
{
u32 temp = (RREG32(CG_MULT_THERMAL_STATUS) & ASIC_T_MASK) >>
ASIC_T_SHIFT;
- u32 actual_temp = 0;
-
- if ((temp >> 9) & 1)
- actual_temp = 0;
- else
- actual_temp = (temp >> 1) & 0xff;
-
- return actual_temp * 1000;
+ int actual_temp;
+
+ if (temp & 0x400)
+ actual_temp = -256;
+ else if (temp & 0x200)
+ actual_temp = 255;
+ else if (temp & 0x100) {
+ actual_temp = temp & 0x1ff;
+ actual_temp |= ~0x1ff;
+ } else
+ actual_temp = temp & 0xff;
+
+ return (actual_temp * 1000) / 2;
}
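The rewritten rv770_get_temp() treats the raw thermal-status field as a signed, saturating value in 0.5 degree steps and now returns an int so negative temperatures survive the conversion to millidegrees. A user-space replay of the same decoding on a few made-up raw readings:

#include <stdint.h>
#include <stdio.h>

/* Decode a raw ASIC_T field (saturation bits plus a 9-bit signed value,
 * 0.5 degC per LSB) into millidegrees Celsius, mirroring the logic in
 * the patched rv770_get_temp(). */
static int decode_temp(uint32_t temp)
{
	int actual_temp;

	if (temp & 0x400)		/* negative saturation */
		actual_temp = -256;
	else if (temp & 0x200)		/* positive saturation */
		actual_temp = 255;
	else if (temp & 0x100) {	/* negative value: sign-extend 9 bits */
		actual_temp = temp & 0x1ff;
		actual_temp |= ~0x1ff;
	} else				/* plain positive value */
		actual_temp = temp & 0xff;

	return (actual_temp * 1000) / 2;
}

int main(void)
{
	uint32_t samples[] = { 0x50, 0x1f0, 0x400 };	/* invented raw readings */
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("raw 0x%03x -> %d mC\n", samples[i], decode_temp(samples[i]));
	return 0;
}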
void rv770_pm_misc(struct radeon_device *rdev)
return -EINVAL;
r700_cp_stop(rdev);
- WREG32(CP_RB_CNTL, RB_NO_UPDATE | (15 << 8) | (3 << 0));
+ WREG32(CP_RB_CNTL,
+#ifdef __BIG_ENDIAN
+ BUF_SWAP_32BIT |
+#endif
+ RB_NO_UPDATE | RB_BLKSZ(15) | RB_BUFSZ(3));
/* Reset cp */
WREG32(GRBM_SOFT_RESET, SOFT_RESET_CP);
#define ROQ_IB1_START(x) ((x) << 0)
#define ROQ_IB2_START(x) ((x) << 8)
#define CP_RB_CNTL 0xC104
-#define RB_BUFSZ(x) ((x)<<0)
-#define RB_BLKSZ(x) ((x)<<8)
-#define RB_NO_UPDATE (1<<27)
-#define RB_RPTR_WR_ENA (1<<31)
+#define RB_BUFSZ(x) ((x) << 0)
+#define RB_BLKSZ(x) ((x) << 8)
+#define RB_NO_UPDATE (1 << 27)
+#define RB_RPTR_WR_ENA (1 << 31)
#define BUF_SWAP_32BIT (2 << 16)
#define CP_RB_RPTR 0x8700
#define CP_RB_RPTR_ADDR 0xC10C
config STUB_POULSBO
tristate "Intel GMA500 Stub Driver"
depends on PCI
+ depends on NET # for THERMAL
# Poulsbo stub depends on ACPI_VIDEO when ACPI is enabled
# but for select to work, need to select ACPI_VIDEO's dependencies, ick
select BACKLIGHT_CLASS_DEVICE if ACPI
select INPUT if ACPI
select ACPI_VIDEO if ACPI
+ select THERMAL if ACPI
help
Choose this option if you have a system that has Intel GMA500
(Poulsbo) integrated graphics. If M is selected, the module will
config VGA_ARB
- bool "VGA Arbitration" if EMBEDDED
+ bool "VGA Arbitration" if EXPERT
default y
depends on PCI
help
void (*irq_set_state)(void *cookie, bool state),
unsigned int (*set_vga_decode)(void *cookie, bool decode))
{
- int ret = -1;
+ int ret = -ENODEV;
struct vga_device *vgadev;
unsigned long flags;
Support for 3M PCT touch screens.
config HID_A4TECH
- tristate "A4 tech mice" if EMBEDDED
+ tristate "A4 tech mice" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for A4 tech X5 and WOP-35 / Trust 450L mice.
game controllers.
config HID_APPLE
- tristate "Apple {i,Power,Mac}Books" if EMBEDDED
+ tristate "Apple {i,Power,Mac}Books" if EXPERT
depends on (USB_HID || BT_HIDP)
- default !EMBEDDED
+ default !EXPERT
---help---
Support for some Apple devices which more or less break the
HID specification.
MacBooks, MacBook Pros and Apple Aluminum.
config HID_BELKIN
- tristate "Belkin Flip KVM and Wireless keyboard" if EMBEDDED
+ tristate "Belkin Flip KVM and Wireless keyboard" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Belkin Flip KVM and Wireless keyboard.
Support for Cando dual touch panel.
config HID_CHERRY
- tristate "Cherry Cymotion keyboard" if EMBEDDED
+ tristate "Cherry Cymotion keyboard" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Cherry Cymotion keyboard.
config HID_CHICONY
- tristate "Chicony Tactical pad" if EMBEDDED
+ tristate "Chicony Tactical pad" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Chicony Tactical pad.
and some additional multimedia keys.
config HID_CYPRESS
- tristate "Cypress mouse and barcode readers" if EMBEDDED
+ tristate "Cypress mouse and barcode readers" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Cypress mouse and barcode readers.

Support for the ELECOM BM084 (bluetooth mouse).
config HID_EZKEY
- tristate "Ezkey BTC 8193 keyboard" if EMBEDDED
+ tristate "Ezkey BTC 8193 keyboard" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Ezkey BTC 8193 keyboard.
config HID_KYE
- tristate "Kye/Genius Ergo Mouse" if EMBEDDED
+ tristate "Kye/Genius Ergo Mouse" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Kye/Genius Ergo Mouse.
Support for Twinhan IR remote control.
config HID_KENSINGTON
- tristate "Kensington Slimblade Trackball" if EMBEDDED
+ tristate "Kensington Slimblade Trackball" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Kensington Slimblade Trackball.
config HID_LOGITECH
- tristate "Logitech devices" if EMBEDDED
+ tristate "Logitech devices" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Logitech devices that are not fully compliant with the HID standard.
Apple Wireless "Magic" Mouse.
config HID_MICROSOFT
- tristate "Microsoft non-fully HID-compliant devices" if EMBEDDED
+ tristate "Microsoft non-fully HID-compliant devices" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Microsoft devices that are not fully compliant with the HID standard.
Support for MosArt dual-touch panels.
config HID_MONTEREY
- tristate "Monterey Genius KB29E keyboard" if EMBEDDED
+ tristate "Monterey Genius KB29E keyboard" if EXPERT
depends on USB_HID
- default !EMBEDDED
+ default !EXPERT
---help---
Support for Monterey Genius KB29E.
- IR
config HID_PICOLCD_FB
- bool "Framebuffer support" if EMBEDDED
- default !EMBEDDED
+ bool "Framebuffer support" if EXPERT
+ default !EXPERT
depends on HID_PICOLCD
depends on HID_PICOLCD=FB || FB=y
select FB_DEFERRED_IO
framebuffer device.
config HID_PICOLCD_BACKLIGHT
- bool "Backlight control" if EMBEDDED
- default !EMBEDDED
+ bool "Backlight control" if EXPERT
+ default !EXPERT
depends on HID_PICOLCD
depends on HID_PICOLCD=BACKLIGHT_CLASS_DEVICE || BACKLIGHT_CLASS_DEVICE=y
---help---
class.
config HID_PICOLCD_LCD
- bool "Contrast control" if EMBEDDED
- default !EMBEDDED
+ bool "Contrast control" if EXPERT
+ default !EXPERT
depends on HID_PICOLCD
depends on HID_PICOLCD=LCD_CLASS_DEVICE || LCD_CLASS_DEVICE=y
---help---
Provide access to PicoLCD's LCD contrast via lcd class.
config HID_PICOLCD_LEDS
- bool "GPO via leds class" if EMBEDDED
- default !EMBEDDED
+ bool "GPO via leds class" if EXPERT
+ default !EXPERT
depends on HID_PICOLCD
depends on HID_PICOLCD=LEDS_CLASS || LEDS_CLASS=y
---help---
If unsure, say Y.
menu "USB HID Boot Protocol drivers"
- depends on USB!=n && USB_HID!=y && EMBEDDED
+ depends on USB!=n && USB_HID!=y && EXPERT
config USB_KBD
tristate "USB HIDBP Keyboard (simple Boot) support"
will be called k8temp.
config SENSORS_K10TEMP
- tristate "AMD Phenom/Sempron/Turion/Opteron temperature sensor"
+ tristate "AMD Family 10h/11h/12h/14h temperature sensor"
depends on X86 && PCI
help
If you say yes here you get support for the temperature
sensor(s) inside your CPU. Supported are later revisions of
- the AMD Family 10h and all revisions of the AMD Family 11h
- microarchitectures.
+ the AMD Family 10h and all revisions of the AMD Family 11h,
+ 12h (Llano), and 14h (Brazos) microarchitectures.
This driver can also be built as a module. If so, the module
will be called k10temp.
called jz4740-hwmon.
config SENSORS_JC42
- tristate "JEDEC JC42.4 compliant temperature sensors"
+ tristate "JEDEC JC42.4 compliant memory module temperature sensors"
depends on I2C
help
- If you say yes here you get support for Jedec JC42.4 compliant
- temperature sensors. Support will include, but not be limited to,
- ADT7408, CAT34TS02,, CAT6095, MAX6604, MCP9805, MCP98242, MCP98243,
- MCP9843, SE97, SE98, STTS424, TSE2002B3, and TS3000B3.
+ If you say yes here, you get support for JEDEC JC42.4 compliant
+ temperature sensors, which are used on many DDR3 memory modules for
+ mobile devices and servers. Support will include, but not be limited
+ to, ADT7408, CAT34TS02, CAT6095, MAX6604, MCP9805, MCP98242, MCP98243,
+ MCP9843, SE97, SE98, STTS424(E), TSE2002B3, and TS3000B3.
This driver can also be built as a module. If so, the module
will be called jc42.
help
If you say yes here you get support for National Semiconductor LM85
sensor chips and clones: ADM1027, ADT7463, ADT7468, EMC6D100,
- EMC6D101 and EMC6D102.
+ EMC6D101, EMC6D102, and EMC6D103.
This driver can also be built as a module. If so, the module
will be called lm85.
node->sda.dev_attr.show = grp->show;
node->sda.dev_attr.store = grp->store;
attr = &node->sda.dev_attr.attr;
+ sysfs_attr_init(attr);
attr->name = node->name;
attr->mode = S_IRUGO | (grp->store ? S_IWUSR : 0);
ret = sysfs_create_file(&pdev->dev.kobj, attr);
#include <linux/list.h>
#include <linux/module.h>
#include <linux/slab.h>
+#include <linux/dmi.h>
#include <acpi/acpi.h>
#include <acpi/acpixf.h>
#define ATK_HID "ATK0110"
+static bool new_if;
+module_param(new_if, bool, 0);
+MODULE_PARM_DESC(new_if, "Override detection heuristic and force the use of the new ATK0110 interface");
+
+static const struct dmi_system_id __initconst atk_force_new_if[] = {
+ {
+ /* Old interface has broken MCH temp monitoring */
+ .ident = "Asus Sabertooth X58",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "SABERTOOTH X58")
+ }
+ },
+ { }
+};
+
/* Minimum time between readings, enforced in order to avoid
* hogging the CPU.
*/
* analysis of multiple DSDTs indicates that when both interfaces
* are present the new one (GGRP/GITM) is not functional.
*/
- if (data->rtmp_handle && data->rvlt_handle && data->rfan_handle)
+ if (new_if)
+ dev_info(dev, "Overriding interface detection\n");
+ if (data->rtmp_handle && data->rvlt_handle && data->rfan_handle && !new_if)
data->old_interface = true;
else if (data->enumerate_handle && data->read_handle &&
data->write_handle)
return -EBUSY;
}
+ if (dmi_check_system(atk_force_new_if))
+ new_if = true;
+
ret = acpi_bus_register_driver(&atk_driver);
if (ret)
pr_info("acpi_bus_register_driver failed: %d\n", ret);
}
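The new new_if parameter plus the atk_force_new_if[] DMI table let the driver be forced onto the GGRP/GITM interface, either manually or automatically on boards (currently the Asus Sabertooth X58) whose old interface is known broken. A stripped-down user-space sketch of the same "quirk table overrides the heuristic" pattern, with an invented board-name lookup standing in for the kernel's dmi_check_system():

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct quirk { const char *board; };

/* Boards whose old interface is known broken (illustrative list). */
static const struct quirk force_new_if[] = {
	{ "SABERTOOTH X58" },
	{ NULL }
};

static bool board_needs_new_if(const char *board)
{
	const struct quirk *q;

	for (q = force_new_if; q->board; q++)
		if (!strcmp(q->board, board))
			return true;
	return false;
}

int main(void)
{
	bool new_if = false;			/* would come from the module parameter */
	const char *board = "SABERTOOTH X58";	/* would come from DMI */

	if (board_needs_new_if(board))
		new_if = true;

	printf("using %s interface\n", new_if ? "new (GGRP/GITM)" : "old");
	return 0;
}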
static const unsigned short emc1403_address_list[] = {
- 0x18, 0x2a, 0x4c, 0x4d, I2C_CLIENT_END
+ 0x18, 0x29, 0x4c, 0x4d, I2C_CLIENT_END
};
static const struct i2c_device_id emc1403_idtable[] = {
/* Configuration register defines */
#define JC42_CFG_CRIT_ONLY (1 << 2)
+#define JC42_CFG_TCRIT_LOCK (1 << 6)
+#define JC42_CFG_EVENT_LOCK (1 << 7)
#define JC42_CFG_SHUTDOWN (1 << 8)
#define JC42_CFG_HYST_SHIFT 9
#define JC42_CFG_HYST_MASK 0x03
{
struct i2c_client *client = to_i2c_client(dev);
struct jc42_data *data = i2c_get_clientdata(client);
- long val;
+ unsigned long val;
int diff, hyst;
int err;
int ret = count;
static DEVICE_ATTR(temp1_input, S_IRUGO,
show_temp_input, NULL);
-static DEVICE_ATTR(temp1_crit, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(temp1_crit, S_IRUGO,
show_temp_crit, set_temp_crit);
-static DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(temp1_min, S_IRUGO,
show_temp_min, set_temp_min);
-static DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(temp1_max, S_IRUGO,
show_temp_max, set_temp_max);
-static DEVICE_ATTR(temp1_crit_hyst, S_IWUSR | S_IRUGO,
+static DEVICE_ATTR(temp1_crit_hyst, S_IRUGO,
show_temp_crit_hyst, set_temp_crit_hyst);
static DEVICE_ATTR(temp1_max_hyst, S_IRUGO,
show_temp_max_hyst, NULL);
NULL
};
+static mode_t jc42_attribute_mode(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct i2c_client *client = to_i2c_client(dev);
+ struct jc42_data *data = i2c_get_clientdata(client);
+ unsigned int config = data->config;
+ bool readonly;
+
+ if (attr == &dev_attr_temp1_crit.attr)
+ readonly = config & JC42_CFG_TCRIT_LOCK;
+ else if (attr == &dev_attr_temp1_min.attr ||
+ attr == &dev_attr_temp1_max.attr)
+ readonly = config & JC42_CFG_EVENT_LOCK;
+ else if (attr == &dev_attr_temp1_crit_hyst.attr)
+ readonly = config & (JC42_CFG_EVENT_LOCK | JC42_CFG_TCRIT_LOCK);
+ else
+ readonly = true;
+
+ return S_IRUGO | (readonly ? 0 : S_IWUSR);
+}
+
static const struct attribute_group jc42_group = {
.attrs = jc42_attributes,
+ .is_visible = jc42_attribute_mode,
};
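jc42_attribute_mode() is an is_visible callback: when the attribute group is registered, sysfs calls it once per attribute and uses the returned mode, so limits locked down by the TCRIT/EVENT lock bits show up read-only instead of failing at write time. A small user-space rendition of the same mode selection, keyed by attribute name rather than by sysfs attribute pointers:

#include <stdio.h>
#include <string.h>

#define JC42_CFG_TCRIT_LOCK (1 << 6)
#define JC42_CFG_EVENT_LOCK (1 << 7)

/* Return "rw" or "ro" for a given attribute, mirroring the lock-bit
 * checks in jc42_attribute_mode(). */
static const char *mode_for(const char *attr, unsigned int config)
{
	int readonly;

	if (!strcmp(attr, "temp1_crit"))
		readonly = config & JC42_CFG_TCRIT_LOCK;
	else if (!strcmp(attr, "temp1_min") || !strcmp(attr, "temp1_max"))
		readonly = config & JC42_CFG_EVENT_LOCK;
	else if (!strcmp(attr, "temp1_crit_hyst"))
		readonly = config & (JC42_CFG_EVENT_LOCK | JC42_CFG_TCRIT_LOCK);
	else
		readonly = 1;	/* everything else stays read-only */

	return readonly ? "ro" : "rw";
}

int main(void)
{
	unsigned int config = JC42_CFG_EVENT_LOCK;	/* invented config word */
	const char *attrs[] = { "temp1_crit", "temp1_min", "temp1_max",
				"temp1_crit_hyst", "temp1_input" };
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		printf("%-16s %s\n", attrs[i], mode_for(attrs[i], config));
	return 0;
}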
/* Return 0 if detection is successful, -ENODEV otherwise */
/*
- * k10temp.c - AMD Family 10h/11h processor hardware monitoring
+ * k10temp.c - AMD Family 10h/11h/12h/14h processor hardware monitoring
*
* Copyright (c) 2009 Clemens Ladisch <clemens@ladisch.de>
*
#include <linux/pci.h>
#include <asm/processor.h>
-MODULE_DESCRIPTION("AMD Family 10h/11h CPU core temperature monitor");
+MODULE_DESCRIPTION("AMD Family 10h/11h/12h/14h CPU core temperature monitor");
MODULE_AUTHOR("Clemens Ladisch <clemens@ladisch.de>");
MODULE_LICENSE("GPL");
static const struct pci_device_id k10temp_id_table[] = {
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_10H_NB_MISC) },
{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_11H_NB_MISC) },
+ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) },
{}
};
MODULE_DEVICE_TABLE(pci, k10temp_id_table);
/* bail if we did not get an IRQ from the bus layer */
if (!dev->irq) {
- pr_err("No IRQ. Disabling /dev/freefall\n");
+ pr_debug("No IRQ. Disabling /dev/freefall\n");
goto out;
}
* value, it uses signed 8-bit values with LSB = 1 degree Celsius.
* For remote temperature, low and high limits, it uses signed 11-bit values
* with LSB = 0.125 degree Celsius, left-justified in 16-bit registers.
+ * For LM64 the actual remote diode temperature is 16 degrees Celsius higher
+ * than the register reading. Remote temperature setpoints have to be
+ * adapted accordingly.
*/
#define FAN_FROM_REG(reg) ((reg) == 0xFFFC || (reg) == 0 ? 0 : \
struct mutex update_lock;
char valid; /* zero until following fields are valid */
unsigned long last_updated; /* in jiffies */
+ int kind;
+ int temp2_offset;
/* registers values */
u8 config, config_fan;
return sprintf(buf, "%d\n", data->config_fan & 0x20 ? 1 : 2);
}
-static ssize_t show_temp8(struct device *dev, struct device_attribute *devattr,
- char *buf)
+/*
+ * There are 8-bit registers for both the local (temp1) and remote (temp2) sensor.
+ * For the remote sensor registers temp2_offset has to be taken into account,
+ * while for the local sensor it must not.
+ * So we need separate 8-bit accessors for the local and remote sensor.
+ */
+static ssize_t show_local_temp8(struct device *dev,
+ struct device_attribute *devattr,
+ char *buf)
{
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct lm63_data *data = lm63_update_device(dev);
return sprintf(buf, "%d\n", TEMP8_FROM_REG(data->temp8[attr->index]));
}
-static ssize_t set_temp8(struct device *dev, struct device_attribute *dummy,
- const char *buf, size_t count)
+static ssize_t show_remote_temp8(struct device *dev,
+ struct device_attribute *devattr,
+ char *buf)
+{
+ struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ struct lm63_data *data = lm63_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP8_FROM_REG(data->temp8[attr->index])
+ + data->temp2_offset);
+}
+
+static ssize_t set_local_temp8(struct device *dev,
+ struct device_attribute *dummy,
+ const char *buf, size_t count)
{
struct i2c_client *client = to_i2c_client(dev);
struct lm63_data *data = i2c_get_clientdata(client);
{
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct lm63_data *data = lm63_update_device(dev);
- return sprintf(buf, "%d\n", TEMP11_FROM_REG(data->temp11[attr->index]));
+ return sprintf(buf, "%d\n", TEMP11_FROM_REG(data->temp11[attr->index])
+ + data->temp2_offset);
}
static ssize_t set_temp11(struct device *dev, struct device_attribute *devattr,
int nr = attr->index;
mutex_lock(&data->update_lock);
- data->temp11[nr] = TEMP11_TO_REG(val);
+ data->temp11[nr] = TEMP11_TO_REG(val - data->temp2_offset);
i2c_smbus_write_byte_data(client, reg[(nr - 1) * 2],
data->temp11[nr] >> 8);
i2c_smbus_write_byte_data(client, reg[(nr - 1) * 2 + 1],
{
struct lm63_data *data = lm63_update_device(dev);
return sprintf(buf, "%d\n", TEMP8_FROM_REG(data->temp8[2])
+ + data->temp2_offset
- TEMP8_FROM_REG(data->temp2_crit_hyst));
}
long hyst;
mutex_lock(&data->update_lock);
- hyst = TEMP8_FROM_REG(data->temp8[2]) - val;
+ hyst = TEMP8_FROM_REG(data->temp8[2]) + data->temp2_offset - val;
i2c_smbus_write_byte_data(client, LM63_REG_REMOTE_TCRIT_HYST,
HYST_TO_REG(hyst));
mutex_unlock(&data->update_lock);
static DEVICE_ATTR(pwm1, S_IWUSR | S_IRUGO, show_pwm1, set_pwm1);
static DEVICE_ATTR(pwm1_enable, S_IRUGO, show_pwm1_enable, NULL);
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_temp8, NULL, 0);
-static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_temp8,
- set_temp8, 1);
+static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, show_local_temp8, NULL, 0);
+static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, show_local_temp8,
+ set_local_temp8, 1);
static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO, show_temp11, NULL, 0);
static SENSOR_DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp11,
set_temp11, 1);
static SENSOR_DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp11,
set_temp11, 2);
-static SENSOR_DEVICE_ATTR(temp2_crit, S_IRUGO, show_temp8, NULL, 2);
+/*
+ * On LM63, temp2_crit can be set only once, which should be the job
+ * of the bootloader.
+ */
+static SENSOR_DEVICE_ATTR(temp2_crit, S_IRUGO, show_remote_temp8,
+ NULL, 2);
static DEVICE_ATTR(temp2_crit_hyst, S_IWUSR | S_IRUGO, show_temp2_crit_hyst,
set_temp2_crit_hyst);
data->valid = 0;
mutex_init(&data->update_lock);
- /* Initialize the LM63 chip */
+ /* Set the device type */
+ data->kind = id->driver_data;
+ if (data->kind == lm64)
+ data->temp2_offset = 16000;
+
+ /* Initialize chip */
lm63_init_client(new_client);
/* Register sysfs hooks */
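The temp2_offset introduced for the LM64 (16000 millidegrees) is applied in both directions: it is added to every decoded remote-channel reading and subtracted again before a user-supplied limit is written back. A toy illustration of that symmetry, working in millidegrees rather than the chip's register encodings:

#include <stdio.h>

/* LM64 remote diode runs 16 degC hotter than the register value. */
#define LM64_TEMP2_OFFSET 16000	/* millidegrees Celsius */

static int reg_to_user(int reg_mdeg)  { return reg_mdeg + LM64_TEMP2_OFFSET; }
static int user_to_reg(int user_mdeg) { return user_mdeg - LM64_TEMP2_OFFSET; }

int main(void)
{
	int reg = 49000;	/* hypothetical decoded register reading */
	int limit = 85000;	/* hypothetical limit requested via sysfs */

	printf("reported temp2_input: %d mC\n", reg_to_user(reg));   /* 65000 */
	printf("stored limit value:   %d mC\n", user_to_reg(limit)); /* 69000 */
	return 0;
}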
enum chips {
any_chip, lm85b, lm85c,
adm1027, adt7463, adt7468,
- emc6d100, emc6d102
+ emc6d100, emc6d102, emc6d103
};
/* The LM85 registers */
#define LM85_VERSTEP_EMC6D100_A0 0x60
#define LM85_VERSTEP_EMC6D100_A1 0x61
#define LM85_VERSTEP_EMC6D102 0x65
+#define LM85_VERSTEP_EMC6D103_A0 0x68
+#define LM85_VERSTEP_EMC6D103_A1 0x69
+#define LM85_VERSTEP_EMC6D103S 0x6A /* Also known as EMC6D103:A2 */
#define LM85_REG_CONFIG 0x40
{ "emc6d100", emc6d100 },
{ "emc6d101", emc6d100 },
{ "emc6d102", emc6d102 },
+ { "emc6d103", emc6d103 },
{ }
};
MODULE_DEVICE_TABLE(i2c, lm85_id);
case LM85_VERSTEP_EMC6D102:
type_name = "emc6d102";
break;
+ case LM85_VERSTEP_EMC6D103_A0:
+ case LM85_VERSTEP_EMC6D103_A1:
+ type_name = "emc6d103";
+ break;
+ /*
+ * Registers apparently missing in EMC6D103S/EMC6D103:A2
+ * compared to EMC6D103:A0, EMC6D103:A1, and EMC6D102
+ * (according to the data sheets), but used unconditionally
+ * in the driver: 62[5:7], 6D[0:7], and 6E[0:7].
+ * So skip EMC6D103S for now.
+ case LM85_VERSTEP_EMC6D103S:
+ type_name = "emc6d103s";
+ break;
+ */
}
} else {
dev_dbg(&adapter->dev,
case adt7468:
case emc6d100:
case emc6d102:
+ case emc6d103:
data->freq_map = adm1027_freq_map;
break;
default:
/* More alarm bits */
data->alarms |= lm85_read_value(client,
EMC6D100_REG_ALARM3) << 16;
- } else if (data->type == emc6d102) {
+ } else if (data->type == emc6d102 || data->type == emc6d103) {
/* Have to read LSB bits after the MSB ones because
the reading of the MSB bits has frozen the
LSBs (backward from the ADM1027).
module will be called ide-cd.
config BLK_DEV_IDECD_VERBOSE_ERRORS
- bool "Verbose error logging for IDE/ATAPI CDROM driver" if EMBEDDED
+ bool "Verbose error logging for IDE/ATAPI CDROM driver" if EXPERT
depends on BLK_DEV_IDECD
default y
help
clockevents_notify(reason, &cpu);
}
-static int __cpuinit setup_broadcast_cpuhp_notify(struct notifier_block *n,
+static int setup_broadcast_cpuhp_notify(struct notifier_block *n,
unsigned long action, void *hcpu)
{
int hotcpu = (unsigned long)hcpu;
smp_call_function_single(hotcpu, __setup_broadcast_timer,
(void *)true, 1);
break;
- case CPU_DOWN_PREPARE:
- smp_call_function_single(hotcpu, __setup_broadcast_timer,
- (void *)false, 1);
- break;
}
return NOTIFY_OK;
}
-static struct notifier_block __cpuinitdata setup_broadcast_notifier = {
+static struct notifier_block setup_broadcast_notifier = {
.notifier_call = setup_broadcast_cpuhp_notify,
};
ib_unregister_event_handler(&sa_dev->event_handler);
- flush_scheduled_work();
+ flush_workqueue(ib_wq);
for (i = 0; i <= sa_dev->end_port - sa_dev->start_port; ++i) {
if (rdma_port_get_link_layer(device, i + 1) == IB_LINK_LAYER_INFINIBAND) {
}
}
+static void ucma_copy_iw_route(struct rdma_ucm_query_route_resp *resp,
+ struct rdma_route *route)
+{
+ struct rdma_dev_addr *dev_addr;
+
+ dev_addr = &route->addr.dev_addr;
+ rdma_addr_get_dgid(dev_addr, (union ib_gid *) &resp->ib_route[0].dgid);
+ rdma_addr_get_sgid(dev_addr, (union ib_gid *) &resp->ib_route[0].sgid);
+}
+
static ssize_t ucma_query_route(struct ucma_file *file,
const char __user *inbuf,
int in_len, int out_len)
resp.node_guid = (__force __u64) ctx->cm_id->device->node_guid;
resp.port_num = ctx->cm_id->port_num;
- if (rdma_node_get_transport(ctx->cm_id->device->node_type) == RDMA_TRANSPORT_IB) {
- switch (rdma_port_get_link_layer(ctx->cm_id->device, ctx->cm_id->port_num)) {
+ switch (rdma_node_get_transport(ctx->cm_id->device->node_type)) {
+ case RDMA_TRANSPORT_IB:
+ switch (rdma_port_get_link_layer(ctx->cm_id->device,
+ ctx->cm_id->port_num)) {
case IB_LINK_LAYER_INFINIBAND:
ucma_copy_ib_route(&resp, &ctx->cm_id->route);
break;
default:
break;
}
+ break;
+ case RDMA_TRANSPORT_IWARP:
+ ucma_copy_iw_route(&resp, &ctx->cm_id->route);
+ break;
+ default:
+ break;
}
out:
r = kmalloc(sizeof(struct c2_vq_req), GFP_KERNEL);
if (r) {
init_waitqueue_head(&r->wait_object);
- r->reply_msg = (u64) NULL;
+ r->reply_msg = 0;
r->event = 0;
r->cm_id = NULL;
r->qp = NULL;
*/
void vq_req_free(struct c2_dev *c2dev, struct c2_vq_req *r)
{
- r->reply_msg = (u64) NULL;
+ r->reply_msg = 0;
if (atomic_dec_and_test(&r->refcnt)) {
kfree(r);
}
void vq_req_put(struct c2_dev *c2dev, struct c2_vq_req *r)
{
if (atomic_dec_and_test(&r->refcnt)) {
- if (r->reply_msg != (u64) NULL)
+ if (r->reply_msg != 0)
vq_repbuf_free(c2dev,
(void *) (unsigned long) r->reply_msg);
kfree(r);
16)) | FW_WR_FLOWID(ep->hwtid));
flowc->mnemval[0].mnemonic = FW_FLOWC_MNEM_PFNVFN;
- flowc->mnemval[0].val = cpu_to_be32(0);
+ flowc->mnemval[0].val = cpu_to_be32(PCI_FUNC(ep->com.dev->rdev.lldi.pdev->devfn) << 8);
flowc->mnemval[1].mnemonic = FW_FLOWC_MNEM_CH;
flowc->mnemval[1].val = cpu_to_be32(ep->tx_chan);
flowc->mnemval[2].mnemonic = FW_FLOWC_MNEM_PORT;
V_FW_RI_RES_WR_DCAEN(0) |
V_FW_RI_RES_WR_DCACPU(0) |
V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(3) |
+ V_FW_RI_RES_WR_FBMAX(2) |
V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
V_FW_RI_RES_WR_CIDXFTHRESH(0) |
V_FW_RI_RES_WR_EQSIZE(eqsize));
V_FW_RI_RES_WR_DCAEN(0) |
V_FW_RI_RES_WR_DCACPU(0) |
V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(3) |
+ V_FW_RI_RES_WR_FBMAX(2) |
V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
V_FW_RI_RES_WR_CIDXFTHRESH(0) |
V_FW_RI_RES_WR_EQSIZE(eqsize));
("Tavor") and the MT25208 PCI Express HCA ("Arbel").
config INFINIBAND_MTHCA_DEBUG
- bool "Verbose debugging output" if EMBEDDED
+ bool "Verbose debugging output" if EXPERT
depends on INFINIBAND_MTHCA
default y
---help---
netif_carrier_on(nesvnic->netdev);
spin_lock(&nesvnic->port_ibevent_lock);
- if (nesdev->iw_status == 0) {
- nesdev->iw_status = 1;
- nes_port_ibevent(nesvnic);
+ if (nesvnic->of_device_registered) {
+ if (nesdev->iw_status == 0) {
+ nesdev->iw_status = 1;
+ nes_port_ibevent(nesvnic);
+ }
}
spin_unlock(&nesvnic->port_ibevent_lock);
}
netif_carrier_off(nesvnic->netdev);
spin_lock(&nesvnic->port_ibevent_lock);
- if (nesdev->iw_status == 1) {
- nesdev->iw_status = 0;
- nes_port_ibevent(nesvnic);
+ if (nesvnic->of_device_registered) {
+ if (nesdev->iw_status == 1) {
+ nesdev->iw_status = 0;
+ nes_port_ibevent(nesvnic);
+ }
}
spin_unlock(&nesvnic->port_ibevent_lock);
}
netif_carrier_on(nesvnic->netdev);
spin_lock(&nesvnic->port_ibevent_lock);
- if (nesdev->iw_status == 0) {
- nesdev->iw_status = 1;
- nes_port_ibevent(nesvnic);
+ if (nesvnic->of_device_registered) {
+ if (nesdev->iw_status == 0) {
+ nesdev->iw_status = 1;
+ nes_port_ibevent(nesvnic);
+ }
}
spin_unlock(&nesvnic->port_ibevent_lock);
}
netif_carrier_off(nesvnic->netdev);
spin_lock(&nesvnic->port_ibevent_lock);
- if (nesdev->iw_status == 1) {
- nesdev->iw_status = 0;
- nes_port_ibevent(nesvnic);
+ if (nesvnic->of_device_registered) {
+ if (nesdev->iw_status == 1) {
+ nesdev->iw_status = 0;
+ nes_port_ibevent(nesvnic);
+ }
}
spin_unlock(&nesvnic->port_ibevent_lock);
}
u8 ibmalfusesnap;
struct qib_qsfp_data qsfp_data;
char epmsgbuf[192]; /* for port error interrupt msg buffer */
- u8 bounced;
};
static struct {
IB_PHYSPORTSTATE_DISABLED)
qib_set_ib_7322_lstate(ppd, 0,
QLOGIC_IB_IBCC_LINKINITCMD_DISABLE);
- else {
- u32 lstate;
- /*
- * We need the current logical link state before
- * lflags are set in handle_e_ibstatuschanged.
- */
- lstate = qib_7322_iblink_state(ibcs);
-
- if (IS_QMH(dd) && !ppd->cpspec->bounced &&
- ltstate == IB_PHYSPORTSTATE_LINKUP &&
- (lstate >= IB_PORT_INIT &&
- lstate <= IB_PORT_ACTIVE)) {
- ppd->cpspec->bounced = 1;
- qib_7322_set_ib_cfg(ppd, QIB_IB_CFG_LSTATE,
- IB_LINKCMD_DOWN | IB_LINKINITCMD_POLL);
- }
-
+ else
/*
* Since going into a recovery state causes the link
* state to go down and since recovery is transitory,
ltstate != IB_PHYSPORTSTATE_RECOVERY_WAITRMT &&
ltstate != IB_PHYSPORTSTATE_RECOVERY_IDLE)
qib_handle_e_ibstatuschanged(ppd, ibcs);
- }
}
if (*msg && iserr)
qib_dev_porterr(dd, ppd->port, "%s error\n", msg);
qib_write_kreg_port(ppd, krp_rcvctrl, ppd->p_rcvctrl);
spin_unlock_irqrestore(&dd->cspec->rcvmod_lock, flags);
+ /* Hold the link state machine for mezz boards */
+ if (IS_QMH(dd) || IS_QME(dd))
+ qib_set_ib_7322_lstate(ppd, 0,
+ QLOGIC_IB_IBCC_LINKINITCMD_DISABLE);
+
/* Also enable IBSTATUSCHG interrupt. */
val = qib_read_kreg_port(ppd, krp_errmask);
qib_write_kreg_port(ppd, krp_errmask,
ppd->cpspec->h1_val = h1;
/* now change the IBC and serdes, overriding generic */
init_txdds_table(ppd, 1);
+ /* Re-enable the physical state machine on mezz boards
+ * now that the correct settings have been set. */
+ if (IS_QMH(dd) || IS_QME(dd))
+ qib_set_ib_7322_lstate(ppd, 0,
+ QLOGIC_IB_IBCC_LINKINITCMD_SLEEP);
any++;
}
if (*nxt == '\n')
* there are still requests that haven't been acked.
*/
if ((psn & IB_BTH_REQ_ACK) && qp->s_acked != qp->s_tail &&
- !(qp->s_flags & (QIB_S_TIMER | QIB_S_WAIT_RNR | QIB_S_WAIT_PSN)))
+ !(qp->s_flags & (QIB_S_TIMER | QIB_S_WAIT_RNR | QIB_S_WAIT_PSN)) &&
+ (ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK))
start_timer(qp);
while (qp->s_last != qp->s_acked) {
}
spin_lock_irqsave(&qp->s_lock, flags);
+ if (!(ib_qib_state_ops[qp->state] & QIB_PROCESS_RECV_OK))
+ goto ack_done;
/* Ignore invalid responses. */
if (qib_cmp24(psn, qp->s_next_psn) >= 0)
unless you limit mtu for these destinations to 2044.
config INFINIBAND_IPOIB_DEBUG
- bool "IP-over-InfiniBand debugging" if EMBEDDED
+ bool "IP-over-InfiniBand debugging" if EXPERT
depends on INFINIBAND_IPOIB
default y
---help---
depends on !S390
config INPUT
- tristate "Generic input layer (needed for keyboard, mouse, ...)" if EMBEDDED
+ tristate "Generic input layer (needed for keyboard, mouse, ...)" if EXPERT
default y
help
Say Y here if you have any input device (mouse, keyboard, tablet,
comment "Userland interfaces"
config INPUT_MOUSEDEV
- tristate "Mouse interface" if EMBEDDED
+ tristate "Mouse interface" if EXPERT
default y
help
Say Y here if you want your mouse to be accessible as char devices
module will be called evbug.
config INPUT_APMPOWER
- tristate "Input Power Event -> APM Bridge" if EMBEDDED
+ tristate "Input Power Event -> APM Bridge" if EXPERT
depends on INPUT && APM_EMULATION
help
Say Y here if you want suspend key events to trigger a user
* dev->event_lock held and interrupts disabled.
*/
static void input_pass_event(struct input_dev *dev,
- struct input_handler *src_handler,
unsigned int type, unsigned int code, int value)
{
struct input_handler *handler;
continue;
handler = handle->handler;
-
- /*
- * If this is the handler that injected this
- * particular event we want to skip it to avoid
- * filters firing again and again.
- */
- if (handler == src_handler)
- continue;
-
if (!handler->filter) {
if (filtered)
break;
if (test_bit(dev->repeat_key, dev->key) &&
is_event_supported(dev->repeat_key, dev->keybit, KEY_MAX)) {
- input_pass_event(dev, NULL, EV_KEY, dev->repeat_key, 2);
+ input_pass_event(dev, EV_KEY, dev->repeat_key, 2);
if (dev->sync) {
/*
* Otherwise assume that the driver will send
* SYN_REPORT once it's done.
*/
- input_pass_event(dev, NULL, EV_SYN, SYN_REPORT, 1);
+ input_pass_event(dev, EV_SYN, SYN_REPORT, 1);
}
if (dev->rep[REP_PERIOD])
#define INPUT_PASS_TO_ALL (INPUT_PASS_TO_HANDLERS | INPUT_PASS_TO_DEVICE)
static int input_handle_abs_event(struct input_dev *dev,
- struct input_handler *src_handler,
unsigned int code, int *pval)
{
bool is_mt_event;
/* Flush pending "slot" event */
if (is_mt_event && dev->slot != input_abs_get_val(dev, ABS_MT_SLOT)) {
input_abs_set_val(dev, ABS_MT_SLOT, dev->slot);
- input_pass_event(dev, src_handler,
- EV_ABS, ABS_MT_SLOT, dev->slot);
+ input_pass_event(dev, EV_ABS, ABS_MT_SLOT, dev->slot);
}
return INPUT_PASS_TO_HANDLERS;
}
static void input_handle_event(struct input_dev *dev,
- struct input_handler *src_handler,
unsigned int type, unsigned int code, int value)
{
int disposition = INPUT_IGNORE_EVENT;
case EV_ABS:
if (is_event_supported(code, dev->absbit, ABS_MAX))
- disposition = input_handle_abs_event(dev, src_handler,
- code, &value);
+ disposition = input_handle_abs_event(dev, code, &value);
break;
dev->event(dev, type, code, value);
if (disposition & INPUT_PASS_TO_HANDLERS)
- input_pass_event(dev, src_handler, type, code, value);
+ input_pass_event(dev, type, code, value);
}
/**
spin_lock_irqsave(&dev->event_lock, flags);
add_input_randomness(type, code, value);
- input_handle_event(dev, NULL, type, code, value);
+ input_handle_event(dev, type, code, value);
spin_unlock_irqrestore(&dev->event_lock, flags);
}
}
rcu_read_lock();
grab = rcu_dereference(dev->grab);
if (!grab || grab == handle)
- input_handle_event(dev, handle->handler,
- type, code, value);
+ input_handle_event(dev, type, code, value);
rcu_read_unlock();
spin_unlock_irqrestore(&dev->event_lock, flags);
for (code = 0; code <= KEY_MAX; code++) {
if (is_event_supported(code, dev->keybit, KEY_MAX) &&
__test_and_clear_bit(code, dev->key)) {
- input_pass_event(dev, NULL, EV_KEY, code, 0);
+ input_pass_event(dev, EV_KEY, code, 0);
}
}
- input_pass_event(dev, NULL, EV_SYN, SYN_REPORT, 1);
+ input_pass_event(dev, EV_SYN, SYN_REPORT, 1);
}
}
!is_event_supported(old_keycode, dev->keybit, KEY_MAX) &&
__test_and_clear_bit(old_keycode, dev->key)) {
- input_pass_event(dev, NULL, EV_KEY, old_keycode, 0);
+ input_pass_event(dev, EV_KEY, old_keycode, 0);
if (dev->sync)
- input_pass_event(dev, NULL, EV_SYN, SYN_REPORT, 1);
+ input_pass_event(dev, EV_SYN, SYN_REPORT, 1);
}
out:
# Input core configuration
#
menuconfig INPUT_KEYBOARD
- bool "Keyboards" if EMBEDDED || !X86
+ bool "Keyboards" if EXPERT || !X86
default y
help
Say Y here, and a list of supported keyboards will be displayed.
module will be called atakbd.
config KEYBOARD_ATKBD
- tristate "AT keyboard" if EMBEDDED || !X86
+ tristate "AT keyboard" if EXPERT || !X86
default y
select SERIO
select SERIO_LIBPS2
To compile this driver as a module, choose M here: the
module will be called nmk-ske-keypad.
+config KEYBOARD_TEGRA
+ tristate "NVIDIA Tegra internal matrix keyboard controller support"
+ depends on ARCH_TEGRA
+ help
+ Say Y here if you want to use a matrix keyboard connected directly
+ to the internal keyboard controller on Tegra SoCs.
+
+ To compile this driver as a module, choose M here: the
+ module will be called tegra-kbc.
+
config KEYBOARD_OPENCORES
tristate "OpenCores Keyboard Controller"
help
obj-$(CONFIG_KEYBOARD_STOWAWAY) += stowaway.o
obj-$(CONFIG_KEYBOARD_SUNKBD) += sunkbd.o
obj-$(CONFIG_KEYBOARD_TC3589X) += tc3589x-keypad.o
+obj-$(CONFIG_KEYBOARD_TEGRA) += tegra-kbc.o
obj-$(CONFIG_KEYBOARD_TNETV107X) += tnetv107x-keypad.o
obj-$(CONFIG_KEYBOARD_TWL4030) += twl4030_keypad.o
obj-$(CONFIG_KEYBOARD_XTKBD) += xtkbd.o
struct gpio_keys_button *button = bdata->button;
struct input_dev *input = bdata->input;
unsigned int type = button->type ?: EV_KEY;
- int state = (gpio_get_value(button->gpio) ? 1 : 0) ^ button->active_low;
+ int state = (gpio_get_value_cansleep(button->gpio) ? 1 : 0) ^ button->active_low;
input_event(input, type, button->code, !!state);
input_sync(input);
if (!button->can_disable)
irqflags |= IRQF_SHARED;
- error = request_irq(irq, gpio_keys_isr, irqflags, desc, bdata);
- if (error) {
+ error = request_any_context_irq(irq, gpio_keys_isr, irqflags, desc, bdata);
+ if (error < 0) {
dev_err(dev, "Unable to claim irq %d; error %d\n",
irq, error);
goto fail3;
--- /dev/null
+/*
+ * Keyboard class input driver for the NVIDIA Tegra SoC internal matrix
+ * keyboard controller
+ *
+ * Copyright (c) 2009-2011, NVIDIA Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/module.h>
+#include <linux/input.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/clk.h>
+#include <linux/slab.h>
+#include <mach/clk.h>
+#include <mach/kbc.h>
+
+#define KBC_MAX_DEBOUNCE_CNT 0x3ffu
+
+/* KBC row scan time and delay for beginning the row scan. */
+#define KBC_ROW_SCAN_TIME 16
+#define KBC_ROW_SCAN_DLY 5
+
+/* KBC uses a 32 kHz clock, so one cycle = 1/32 kHz (~32 us) */
+#define KBC_CYCLE_USEC 32
+
+/* KBC Registers */
+
+/* KBC Control Register */
+#define KBC_CONTROL_0 0x0
+#define KBC_FIFO_TH_CNT_SHIFT(cnt) (cnt << 14)
+#define KBC_DEBOUNCE_CNT_SHIFT(cnt) (cnt << 4)
+#define KBC_CONTROL_FIFO_CNT_INT_EN (1 << 3)
+#define KBC_CONTROL_KBC_EN (1 << 0)
+
+/* KBC Interrupt Register */
+#define KBC_INT_0 0x4
+#define KBC_INT_FIFO_CNT_INT_STATUS (1 << 2)
+
+#define KBC_ROW_CFG0_0 0x8
+#define KBC_COL_CFG0_0 0x18
+#define KBC_INIT_DLY_0 0x28
+#define KBC_RPT_DLY_0 0x2c
+#define KBC_KP_ENT0_0 0x30
+#define KBC_KP_ENT1_0 0x34
+#define KBC_ROW0_MASK_0 0x38
+
+#define KBC_ROW_SHIFT 3
+
+struct tegra_kbc {
+ void __iomem *mmio;
+ struct input_dev *idev;
+ unsigned int irq;
+ unsigned int wake_enable_rows;
+ unsigned int wake_enable_cols;
+ spinlock_t lock;
+ unsigned int repoll_dly;
+ unsigned long cp_dly_jiffies;
+ const struct tegra_kbc_platform_data *pdata;
+ unsigned short keycode[KBC_MAX_KEY];
+ unsigned short current_keys[KBC_MAX_KPENT];
+ unsigned int num_pressed_keys;
+ struct timer_list timer;
+ struct clk *clk;
+};
+
+static const u32 tegra_kbc_default_keymap[] = {
+ KEY(0, 2, KEY_W),
+ KEY(0, 3, KEY_S),
+ KEY(0, 4, KEY_A),
+ KEY(0, 5, KEY_Z),
+ KEY(0, 7, KEY_FN),
+
+ KEY(1, 7, KEY_LEFTMETA),
+
+ KEY(2, 6, KEY_RIGHTALT),
+ KEY(2, 7, KEY_LEFTALT),
+
+ KEY(3, 0, KEY_5),
+ KEY(3, 1, KEY_4),
+ KEY(3, 2, KEY_R),
+ KEY(3, 3, KEY_E),
+ KEY(3, 4, KEY_F),
+ KEY(3, 5, KEY_D),
+ KEY(3, 6, KEY_X),
+
+ KEY(4, 0, KEY_7),
+ KEY(4, 1, KEY_6),
+ KEY(4, 2, KEY_T),
+ KEY(4, 3, KEY_H),
+ KEY(4, 4, KEY_G),
+ KEY(4, 5, KEY_V),
+ KEY(4, 6, KEY_C),
+ KEY(4, 7, KEY_SPACE),
+
+ KEY(5, 0, KEY_9),
+ KEY(5, 1, KEY_8),
+ KEY(5, 2, KEY_U),
+ KEY(5, 3, KEY_Y),
+ KEY(5, 4, KEY_J),
+ KEY(5, 5, KEY_N),
+ KEY(5, 6, KEY_B),
+ KEY(5, 7, KEY_BACKSLASH),
+
+ KEY(6, 0, KEY_MINUS),
+ KEY(6, 1, KEY_0),
+ KEY(6, 2, KEY_O),
+ KEY(6, 3, KEY_I),
+ KEY(6, 4, KEY_L),
+ KEY(6, 5, KEY_K),
+ KEY(6, 6, KEY_COMMA),
+ KEY(6, 7, KEY_M),
+
+ KEY(7, 1, KEY_EQUAL),
+ KEY(7, 2, KEY_RIGHTBRACE),
+ KEY(7, 3, KEY_ENTER),
+ KEY(7, 7, KEY_MENU),
+
+ KEY(8, 4, KEY_RIGHTSHIFT),
+ KEY(8, 5, KEY_LEFTSHIFT),
+
+ KEY(9, 5, KEY_RIGHTCTRL),
+ KEY(9, 7, KEY_LEFTCTRL),
+
+ KEY(11, 0, KEY_LEFTBRACE),
+ KEY(11, 1, KEY_P),
+ KEY(11, 2, KEY_APOSTROPHE),
+ KEY(11, 3, KEY_SEMICOLON),
+ KEY(11, 4, KEY_SLASH),
+ KEY(11, 5, KEY_DOT),
+
+ KEY(12, 0, KEY_F10),
+ KEY(12, 1, KEY_F9),
+ KEY(12, 2, KEY_BACKSPACE),
+ KEY(12, 3, KEY_3),
+ KEY(12, 4, KEY_2),
+ KEY(12, 5, KEY_UP),
+ KEY(12, 6, KEY_PRINT),
+ KEY(12, 7, KEY_PAUSE),
+
+ KEY(13, 0, KEY_INSERT),
+ KEY(13, 1, KEY_DELETE),
+ KEY(13, 3, KEY_PAGEUP),
+ KEY(13, 4, KEY_PAGEDOWN),
+ KEY(13, 5, KEY_RIGHT),
+ KEY(13, 6, KEY_DOWN),
+ KEY(13, 7, KEY_LEFT),
+
+ KEY(14, 0, KEY_F11),
+ KEY(14, 1, KEY_F12),
+ KEY(14, 2, KEY_F8),
+ KEY(14, 3, KEY_Q),
+ KEY(14, 4, KEY_F4),
+ KEY(14, 5, KEY_F3),
+ KEY(14, 6, KEY_1),
+ KEY(14, 7, KEY_F7),
+
+ KEY(15, 0, KEY_ESC),
+ KEY(15, 1, KEY_GRAVE),
+ KEY(15, 2, KEY_F5),
+ KEY(15, 3, KEY_TAB),
+ KEY(15, 4, KEY_F1),
+ KEY(15, 5, KEY_F2),
+ KEY(15, 6, KEY_CAPSLOCK),
+ KEY(15, 7, KEY_F6),
+};
+
+static const struct matrix_keymap_data tegra_kbc_default_keymap_data = {
+ .keymap = tegra_kbc_default_keymap,
+ .keymap_size = ARRAY_SIZE(tegra_kbc_default_keymap),
+};
+
+static void tegra_kbc_report_released_keys(struct input_dev *input,
+ unsigned short old_keycodes[],
+ unsigned int old_num_keys,
+ unsigned short new_keycodes[],
+ unsigned int new_num_keys)
+{
+ unsigned int i, j;
+
+ for (i = 0; i < old_num_keys; i++) {
+ for (j = 0; j < new_num_keys; j++)
+ if (old_keycodes[i] == new_keycodes[j])
+ break;
+
+ if (j == new_num_keys)
+ input_report_key(input, old_keycodes[i], 0);
+ }
+}
+
+static void tegra_kbc_report_pressed_keys(struct input_dev *input,
+ unsigned char scancodes[],
+ unsigned short keycodes[],
+ unsigned int num_pressed_keys)
+{
+ unsigned int i;
+
+ for (i = 0; i < num_pressed_keys; i++) {
+ input_event(input, EV_MSC, MSC_SCAN, scancodes[i]);
+ input_report_key(input, keycodes[i], 1);
+ }
+}
+
+static void tegra_kbc_report_keys(struct tegra_kbc *kbc)
+{
+ unsigned char scancodes[KBC_MAX_KPENT];
+ unsigned short keycodes[KBC_MAX_KPENT];
+ u32 val = 0;
+ unsigned int i;
+ unsigned int num_down = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&kbc->lock, flags);
+ for (i = 0; i < KBC_MAX_KPENT; i++) {
+ if ((i % 4) == 0)
+ val = readl(kbc->mmio + KBC_KP_ENT0_0 + i);
+
+ if (val & 0x80) {
+ unsigned int col = val & 0x07;
+ unsigned int row = (val >> 3) & 0x0f;
+ unsigned char scancode =
+ MATRIX_SCAN_CODE(row, col, KBC_ROW_SHIFT);
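+ /*
+ * Illustrative example (hypothetical FIFO entry): row 13,
+ * column 6 gives scancode (13 << 3) + 6 == 0x6e, which is
+ * then used to index kbc->keycode[] below.
+ */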
+
+ scancodes[num_down] = scancode;
+ keycodes[num_down++] = kbc->keycode[scancode];
+ }
+
+ val >>= 8;
+ }
+ spin_unlock_irqrestore(&kbc->lock, flags);
+
+ tegra_kbc_report_released_keys(kbc->idev,
+ kbc->current_keys, kbc->num_pressed_keys,
+ keycodes, num_down);
+ tegra_kbc_report_pressed_keys(kbc->idev, scancodes, keycodes, num_down);
+ input_sync(kbc->idev);
+
+ memcpy(kbc->current_keys, keycodes, sizeof(kbc->current_keys));
+ kbc->num_pressed_keys = num_down;
+}
+
+static void tegra_kbc_keypress_timer(unsigned long data)
+{
+ struct tegra_kbc *kbc = (struct tegra_kbc *)data;
+ unsigned long flags;
+ u32 val;
+ unsigned int i;
+
+ val = (readl(kbc->mmio + KBC_INT_0) >> 4) & 0xf;
+ if (val) {
+ unsigned long dly;
+
+ tegra_kbc_report_keys(kbc);
+
+ /*
+ * If more than one key is pressed we need not wait
+ * for the repoll delay.
+ */
+ dly = (val == 1) ? kbc->repoll_dly : 1;
+ mod_timer(&kbc->timer, jiffies + msecs_to_jiffies(dly));
+ } else {
+ /* Release any pressed keys and exit the polling loop */
+ for (i = 0; i < kbc->num_pressed_keys; i++)
+ input_report_key(kbc->idev, kbc->current_keys[i], 0);
+ input_sync(kbc->idev);
+
+ kbc->num_pressed_keys = 0;
+
+ /* All keys are released so enable the keypress interrupt */
+ spin_lock_irqsave(&kbc->lock, flags);
+ val = readl(kbc->mmio + KBC_CONTROL_0);
+ val |= KBC_CONTROL_FIFO_CNT_INT_EN;
+ writel(val, kbc->mmio + KBC_CONTROL_0);
+ spin_unlock_irqrestore(&kbc->lock, flags);
+ }
+}
+
+static irqreturn_t tegra_kbc_isr(int irq, void *args)
+{
+ struct tegra_kbc *kbc = args;
+ u32 val, ctl;
+
+ /*
+ * Until all keys are released, defer further processing to
+ * the polling loop in tegra_kbc_keypress_timer
+ */
+ ctl = readl(kbc->mmio + KBC_CONTROL_0);
+ ctl &= ~KBC_CONTROL_FIFO_CNT_INT_EN;
+ writel(ctl, kbc->mmio + KBC_CONTROL_0);
+
+ /*
+ * Quickly bail out and re-enable interrupts if the FIFO threshold
+ * count interrupt wasn't the interrupt source.
+ */
+ val = readl(kbc->mmio + KBC_INT_0);
+ writel(val, kbc->mmio + KBC_INT_0);
+
+ if (val & KBC_INT_FIFO_CNT_INT_STATUS) {
+ /*
+ * Schedule timer to run when hardware is in continuous
+ * polling mode.
+ */
+ mod_timer(&kbc->timer, jiffies + kbc->cp_dly_jiffies);
+ } else {
+ ctl |= KBC_CONTROL_FIFO_CNT_INT_EN;
+ writel(ctl, kbc->mmio + KBC_CONTROL_0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void tegra_kbc_setup_wakekeys(struct tegra_kbc *kbc, bool filter)
+{
+ const struct tegra_kbc_platform_data *pdata = kbc->pdata;
+ int i;
+ unsigned int rst_val;
+
+ BUG_ON(pdata->wake_cnt > KBC_MAX_KEY);
+ rst_val = (filter && pdata->wake_cnt) ? ~0 : 0;
+
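+ /*
+ * Every row mask register is first written with rst_val; when
+ * filtering, the loop below then clears the per-column bit for
+ * each configured wake key (presumably so only those keys remain
+ * unmasked as wake sources).
+ */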
+ for (i = 0; i < KBC_MAX_ROW; i++)
+ writel(rst_val, kbc->mmio + KBC_ROW0_MASK_0 + i * 4);
+
+ if (filter) {
+ for (i = 0; i < pdata->wake_cnt; i++) {
+ u32 val, addr;
+ addr = pdata->wake_cfg[i].row * 4 + KBC_ROW0_MASK_0;
+ val = readl(kbc->mmio + addr);
+ val &= ~(1 << pdata->wake_cfg[i].col);
+ writel(val, kbc->mmio + addr);
+ }
+ }
+}
+
+static void tegra_kbc_config_pins(struct tegra_kbc *kbc)
+{
+ const struct tegra_kbc_platform_data *pdata = kbc->pdata;
+ int i;
+
+ for (i = 0; i < KBC_MAX_GPIO; i++) {
+ u32 r_shft = 5 * (i % 6);
+ u32 c_shft = 4 * (i % 8);
+ u32 r_mask = 0x1f << r_shft;
+ u32 c_mask = 0x0f << c_shft;
+ u32 r_offs = (i / 6) * 4 + KBC_ROW_CFG0_0;
+ u32 c_offs = (i / 8) * 4 + KBC_COL_CFG0_0;
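+ /*
+ * Illustrative example (hypothetical pin): GPIO 7 used as a
+ * row is programmed in the register at KBC_ROW_CFG0_0 + 4
+ * (7 / 6 == 1), in the 5-bit field at bits 5..9 (5 * (7 % 6)).
+ */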
+ u32 row_cfg = readl(kbc->mmio + r_offs);
+ u32 col_cfg = readl(kbc->mmio + c_offs);
+
+ row_cfg &= ~r_mask;
+ col_cfg &= ~c_mask;
+
+ if (pdata->pin_cfg[i].is_row)
+ row_cfg |= ((pdata->pin_cfg[i].num << 1) | 1) << r_shft;
+ else
+ col_cfg |= ((pdata->pin_cfg[i].num << 1) | 1) << c_shft;
+
+ writel(row_cfg, kbc->mmio + r_offs);
+ writel(col_cfg, kbc->mmio + c_offs);
+ }
+}
+
+static int tegra_kbc_start(struct tegra_kbc *kbc)
+{
+ const struct tegra_kbc_platform_data *pdata = kbc->pdata;
+ unsigned long flags;
+ unsigned int debounce_cnt;
+ u32 val = 0;
+
+ clk_enable(kbc->clk);
+
+ /* Reset the KBC controller to clear all previous status. */
+ tegra_periph_reset_assert(kbc->clk);
+ udelay(100);
+ tegra_periph_reset_deassert(kbc->clk);
+ udelay(100);
+
+ tegra_kbc_config_pins(kbc);
+ tegra_kbc_setup_wakekeys(kbc, false);
+
+ writel(pdata->repeat_cnt, kbc->mmio + KBC_RPT_DLY_0);
+
+ /* Keyboard debounce count is a maximum of 12 bits. */
+ debounce_cnt = min(pdata->debounce_cnt, KBC_MAX_DEBOUNCE_CNT);
+ val = KBC_DEBOUNCE_CNT_SHIFT(debounce_cnt);
+ val |= KBC_FIFO_TH_CNT_SHIFT(1); /* set fifo interrupt threshold to 1 */
+ val |= KBC_CONTROL_FIFO_CNT_INT_EN; /* interrupt on FIFO threshold */
+ val |= KBC_CONTROL_KBC_EN; /* enable */
+ writel(val, kbc->mmio + KBC_CONTROL_0);
+
+ /*
+ * Compute the delay (in usec) from interrupt mode to continuous polling
+ * mode so the timer routine is scheduled appropriately.
+ */
+ val = readl(kbc->mmio + KBC_INIT_DLY_0);
+ kbc->cp_dly_jiffies = usecs_to_jiffies((val & 0xfffff) * 32);
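+ /*
+ * Worked example (hypothetical register value): an init delay of
+ * 0x400 cycles is 1024 * 32 = 32768 usec, i.e. roughly 33 ms before
+ * the controller switches to continuous polling.
+ */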
+
+ kbc->num_pressed_keys = 0;
+
+ /*
+ * Atomically clear out any remaining entries in the key FIFO
+ * and enable keyboard interrupts.
+ */
+ spin_lock_irqsave(&kbc->lock, flags);
+ while (1) {
+ val = readl(kbc->mmio + KBC_INT_0);
+ val >>= 4;
+ if (!val)
+ break;
+
+ val = readl(kbc->mmio + KBC_KP_ENT0_0);
+ val = readl(kbc->mmio + KBC_KP_ENT1_0);
+ }
+ writel(0x7, kbc->mmio + KBC_INT_0);
+ spin_unlock_irqrestore(&kbc->lock, flags);
+
+ enable_irq(kbc->irq);
+
+ return 0;
+}
+
+static void tegra_kbc_stop(struct tegra_kbc *kbc)
+{
+ unsigned long flags;
+ u32 val;
+
+ spin_lock_irqsave(&kbc->lock, flags);
+ val = readl(kbc->mmio + KBC_CONTROL_0);
+ val &= ~1;
+ writel(val, kbc->mmio + KBC_CONTROL_0);
+ spin_unlock_irqrestore(&kbc->lock, flags);
+
+ disable_irq(kbc->irq);
+ del_timer_sync(&kbc->timer);
+
+ clk_disable(kbc->clk);
+}
+
+static int tegra_kbc_open(struct input_dev *dev)
+{
+ struct tegra_kbc *kbc = input_get_drvdata(dev);
+
+ return tegra_kbc_start(kbc);
+}
+
+static void tegra_kbc_close(struct input_dev *dev)
+{
+ struct tegra_kbc *kbc = input_get_drvdata(dev);
+
+ return tegra_kbc_stop(kbc);
+}
+
+static bool __devinit
+tegra_kbc_check_pin_cfg(const struct tegra_kbc_platform_data *pdata,
+ struct device *dev, unsigned int *num_rows)
+{
+ int i;
+
+ *num_rows = 0;
+
+ for (i = 0; i < KBC_MAX_GPIO; i++) {
+ const struct tegra_kbc_pin_cfg *pin_cfg = &pdata->pin_cfg[i];
+
+ if (pin_cfg->is_row) {
+ if (pin_cfg->num >= KBC_MAX_ROW) {
+ dev_err(dev,
+ "pin_cfg[%d]: invalid row number %d\n",
+ i, pin_cfg->num);
+ return false;
+ }
+ (*num_rows)++;
+ } else {
+ if (pin_cfg->num >= KBC_MAX_COL) {
+ dev_err(dev,
+ "pin_cfg[%d]: invalid column number %d\n",
+ i, pin_cfg->num);
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
+static int __devinit tegra_kbc_probe(struct platform_device *pdev)
+{
+ const struct tegra_kbc_platform_data *pdata = pdev->dev.platform_data;
+ const struct matrix_keymap_data *keymap_data;
+ struct tegra_kbc *kbc;
+ struct input_dev *input_dev;
+ struct resource *res;
+ int irq;
+ int err;
+ int i;
+ int num_rows = 0;
+ unsigned int debounce_cnt;
+ unsigned int scan_time_rows;
+
+ if (!pdata)
+ return -EINVAL;
+
+ if (!tegra_kbc_check_pin_cfg(pdata, &pdev->dev, &num_rows))
+ return -EINVAL;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "failed to get I/O memory\n");
+ return -ENXIO;
+ }
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(&pdev->dev, "failed to get keyboard IRQ\n");
+ return -ENXIO;
+ }
+
+ kbc = kzalloc(sizeof(*kbc), GFP_KERNEL);
+ input_dev = input_allocate_device();
+ if (!kbc || !input_dev) {
+ err = -ENOMEM;
+ goto err_free_mem;
+ }
+
+ kbc->pdata = pdata;
+ kbc->idev = input_dev;
+ kbc->irq = irq;
+ spin_lock_init(&kbc->lock);
+ setup_timer(&kbc->timer, tegra_kbc_keypress_timer, (unsigned long)kbc);
+
+ res = request_mem_region(res->start, resource_size(res), pdev->name);
+ if (!res) {
+ dev_err(&pdev->dev, "failed to request I/O memory\n");
+ err = -EBUSY;
+ goto err_free_mem;
+ }
+
+ kbc->mmio = ioremap(res->start, resource_size(res));
+ if (!kbc->mmio) {
+ dev_err(&pdev->dev, "failed to remap I/O memory\n");
+ err = -ENXIO;
+ goto err_free_mem_region;
+ }
+
+ kbc->clk = clk_get(&pdev->dev, NULL);
+ if (IS_ERR(kbc->clk)) {
+ dev_err(&pdev->dev, "failed to get keyboard clock\n");
+ err = PTR_ERR(kbc->clk);
+ goto err_iounmap;
+ }
+
+ kbc->wake_enable_rows = 0;
+ kbc->wake_enable_cols = 0;
+ for (i = 0; i < pdata->wake_cnt; i++) {
+ kbc->wake_enable_rows |= (1 << pdata->wake_cfg[i].row);
+ kbc->wake_enable_cols |= (1 << pdata->wake_cfg[i].col);
+ }
+
+ /*
+ * The time delay between two consecutive reads of the FIFO is
+ * the sum of the repeat time and the time taken for scanning
+ * the rows. There is an additional delay before the row scanning
+ * starts. The repoll delay is computed in milliseconds.
+ */
+ debounce_cnt = min(pdata->debounce_cnt, KBC_MAX_DEBOUNCE_CNT);
+ scan_time_rows = (KBC_ROW_SCAN_TIME + debounce_cnt) * num_rows;
+ kbc->repoll_dly = KBC_ROW_SCAN_DLY + scan_time_rows + pdata->repeat_cnt;
+ kbc->repoll_dly = ((kbc->repoll_dly * KBC_CYCLE_USEC) + 999) / 1000;
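+ /*
+ * Worked example (hypothetical platform data): with debounce_cnt = 2,
+ * 16 row pins and repeat_cnt = 1024, the delay is
+ * 5 + (16 + 2) * 16 + 1024 = 1317 cycles, i.e.
+ * (1317 * 32 + 999) / 1000 = 43 ms between FIFO reads.
+ */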
+
+ input_dev->name = pdev->name;
+ input_dev->id.bustype = BUS_HOST;
+ input_dev->dev.parent = &pdev->dev;
+ input_dev->open = tegra_kbc_open;
+ input_dev->close = tegra_kbc_close;
+
+ input_set_drvdata(input_dev, kbc);
+
+ input_dev->evbit[0] = BIT_MASK(EV_KEY);
+ input_set_capability(input_dev, EV_MSC, MSC_SCAN);
+
+ input_dev->keycode = kbc->keycode;
+ input_dev->keycodesize = sizeof(kbc->keycode[0]);
+ input_dev->keycodemax = ARRAY_SIZE(kbc->keycode);
+
+ keymap_data = pdata->keymap_data ?: &tegra_kbc_default_keymap_data;
+ matrix_keypad_build_keymap(keymap_data, KBC_ROW_SHIFT,
+ input_dev->keycode, input_dev->keybit);
+
+ err = request_irq(kbc->irq, tegra_kbc_isr, IRQF_TRIGGER_HIGH,
+ pdev->name, kbc);
+ if (err) {
+ dev_err(&pdev->dev, "failed to request keyboard IRQ\n");
+ goto err_put_clk;
+ }
+
+ disable_irq(kbc->irq);
+
+ err = input_register_device(kbc->idev);
+ if (err) {
+ dev_err(&pdev->dev, "failed to register input device\n");
+ goto err_free_irq;
+ }
+
+ platform_set_drvdata(pdev, kbc);
+ device_init_wakeup(&pdev->dev, pdata->wakeup);
+
+ return 0;
+
+err_free_irq:
+ free_irq(kbc->irq, pdev);
+err_put_clk:
+ clk_put(kbc->clk);
+err_iounmap:
+ iounmap(kbc->mmio);
+err_free_mem_region:
+ release_mem_region(res->start, resource_size(res));
+err_free_mem:
+ input_free_device(kbc->idev);
+ kfree(kbc);
+
+ return err;
+}
+
+static int __devexit tegra_kbc_remove(struct platform_device *pdev)
+{
+ struct tegra_kbc *kbc = platform_get_drvdata(pdev);
+ struct resource *res;
+
+ free_irq(kbc->irq, pdev);
+ clk_put(kbc->clk);
+
+ input_unregister_device(kbc->idev);
+ iounmap(kbc->mmio);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(res->start, resource_size(res));
+
+ kfree(kbc);
+
+ platform_set_drvdata(pdev, NULL);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int tegra_kbc_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct tegra_kbc *kbc = platform_get_drvdata(pdev);
+
+ if (device_may_wakeup(&pdev->dev)) {
+ tegra_kbc_setup_wakekeys(kbc, true);
+ enable_irq_wake(kbc->irq);
+ /* Forcefully clear the interrupt status */
+ writel(0x7, kbc->mmio + KBC_INT_0);
+ msleep(30);
+ } else {
+ mutex_lock(&kbc->idev->mutex);
+ if (kbc->idev->users)
+ tegra_kbc_stop(kbc);
+ mutex_unlock(&kbc->idev->mutex);
+ }
+
+ return 0;
+}
+
+static int tegra_kbc_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct tegra_kbc *kbc = platform_get_drvdata(pdev);
+ int err = 0;
+
+ if (device_may_wakeup(&pdev->dev)) {
+ disable_irq_wake(kbc->irq);
+ tegra_kbc_setup_wakekeys(kbc, false);
+ } else {
+ mutex_lock(&kbc->idev->mutex);
+ if (kbc->idev->users)
+ err = tegra_kbc_start(kbc);
+ mutex_unlock(&kbc->idev->mutex);
+ }
+
+ return err;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(tegra_kbc_pm_ops, tegra_kbc_suspend, tegra_kbc_resume);
+
+static struct platform_driver tegra_kbc_driver = {
+ .probe = tegra_kbc_probe,
+ .remove = __devexit_p(tegra_kbc_remove),
+ .driver = {
+ .name = "tegra-kbc",
+ .owner = THIS_MODULE,
+ .pm = &tegra_kbc_pm_ops,
+ },
+};
+
+static void __exit tegra_kbc_exit(void)
+{
+ platform_driver_unregister(&tegra_kbc_driver);
+}
+module_exit(tegra_kbc_exit);
+
+static int __init tegra_kbc_init(void)
+{
+ return platform_driver_register(&tegra_kbc_driver);
+}
+module_init(tegra_kbc_init);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Rakesh Iyer <riyer@nvidia.com>");
+MODULE_DESCRIPTION("Tegra matrix keyboard controller driver");
+MODULE_ALIAS("platform:tegra-kbc");
*/
#include <linux/kernel.h>
+#include <linux/err.h>
#include <linux/errno.h>
#include <linux/input.h>
#include <linux/platform_device.h>
}
kp->clk = clk_get(dev, NULL);
- if (!kp->clk) {
+ if (IS_ERR(kp->clk)) {
dev_err(dev, "cannot claim device clock\n");
- error = -EINVAL;
+ error = PTR_ERR(kp->clk);
goto error_clk;
}
}
if (value > 20 && value < 32767)
-#ifndef FREQ
- count = (ixp4xx_get_board_tick_rate() / (value * 4)) - 1;
-#else
- count = (FREQ / (value * 4)) - 1;
-#endif
+ count = (IXP4XX_TIMER_FREQ / (value * 4)) - 1;
ixp4xx_spkr_control(pin, count);
/* request the IRQs */
err = request_irq(encoder->irq_a, &rotary_encoder_irq,
- IORESOURCE_IRQ_HIGHEDGE | IORESOURCE_IRQ_LOWEDGE,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
DRV_NAME, encoder);
if (err) {
dev_err(&pdev->dev, "unable to request IRQ %d\n",
}
err = request_irq(encoder->irq_b, &rotary_encoder_irq,
- IORESOURCE_IRQ_HIGHEDGE | IORESOURCE_IRQ_LOWEDGE,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
DRV_NAME, encoder);
if (err) {
dev_err(&pdev->dev, "unable to request IRQ %d\n",
module will be called psmouse.
config MOUSE_PS2_ALPS
- bool "ALPS PS/2 mouse protocol extension" if EMBEDDED
+ bool "ALPS PS/2 mouse protocol extension" if EXPERT
default y
depends on MOUSE_PS2
help
If unsure, say Y.
config MOUSE_PS2_LOGIPS2PP
- bool "Logitech PS/2++ mouse protocol extension" if EMBEDDED
+ bool "Logitech PS/2++ mouse protocol extension" if EXPERT
default y
depends on MOUSE_PS2
help
If unsure, say Y.
config MOUSE_PS2_SYNAPTICS
- bool "Synaptics PS/2 mouse protocol extension" if EMBEDDED
+ bool "Synaptics PS/2 mouse protocol extension" if EXPERT
default y
depends on MOUSE_PS2
help
If unsure, say Y.
config MOUSE_PS2_LIFEBOOK
- bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EMBEDDED
+ bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EXPERT
default y
depends on MOUSE_PS2 && X86 && DMI
help
If unsure, say Y.
config MOUSE_PS2_TRACKPOINT
- bool "IBM Trackpoint PS/2 mouse protocol extension" if EMBEDDED
+ bool "IBM Trackpoint PS/2 mouse protocol extension" if EXPERT
default y
depends on MOUSE_PS2
help
{
struct synaptics_data *priv = psmouse->private;
struct synaptics_data old_priv = *priv;
+ int retry = 0;
+ int error;
- psmouse_reset(psmouse);
+ do {
+ psmouse_reset(psmouse);
+ error = synaptics_detect(psmouse, 0);
+ } while (error && ++retry < 3);
- if (synaptics_detect(psmouse, 0))
+ if (error)
return -1;
+ if (retry > 1)
+ printk(KERN_DEBUG "Synaptics reconnected after %d tries\n",
+ retry);
+
if (synaptics_query_hardware(psmouse)) {
printk(KERN_ERR "Unable to query Synaptics hardware.\n");
return -1;
}
- if (old_priv.identity != priv->identity ||
- old_priv.model_id != priv->model_id ||
- old_priv.capabilities != priv->capabilities ||
- old_priv.ext_cap != priv->ext_cap)
- return -1;
-
if (synaptics_set_absolute_mode(psmouse)) {
printk(KERN_ERR "Unable to initialize Synaptics hardware.\n");
return -1;
return -1;
}
+ if (old_priv.identity != priv->identity ||
+ old_priv.model_id != priv->model_id ||
+ old_priv.capabilities != priv->capabilities ||
+ old_priv.ext_cap != priv->ext_cap) {
+ printk(KERN_ERR "Synaptics hardware appears to be different: "
+ "id(%ld-%ld), model(%ld-%ld), caps(%lx-%lx), ext(%lx-%lx).\n",
+ old_priv.identity, priv->identity,
+ old_priv.model_id, priv->model_id,
+ old_priv.capabilities, priv->capabilities,
+ old_priv.ext_cap, priv->ext_cap);
+ return -1;
+ }
+
return 0;
}
# Input core configuration
#
config SERIO
- tristate "Serial I/O support" if EMBEDDED || !X86
+ tristate "Serial I/O support" if EXPERT || !X86
default y
help
Say Yes here if you have any input device that uses serial I/O to
if SERIO
config SERIO_I8042
- tristate "i8042 PC Keyboard controller" if EMBEDDED || !X86
+ tristate "i8042 PC Keyboard controller" if EXPERT || !X86
default y
depends on !PARISC && (!ARM || ARCH_SHARK || FOOTBRIDGE_HOST) && \
(!SUPERH || SH_CAYMAN) && !M68K && !BLACKFIN
module will be called maceps2.
config SERIO_LIBPS2
- tristate "PS/2 driver library" if EMBEDDED
+ tristate "PS/2 driver library" if EXPERT
depends on SERIO_I8042 || SERIO_I8042=n
help
Say Y here if you are using a driver for device connected
static int ct82c710_open(struct serio *serio)
{
unsigned char status;
+ int err;
- if (request_irq(CT82C710_IRQ, ct82c710_interrupt, 0, "ct82c710", NULL))
- return -1;
+ err = request_irq(CT82C710_IRQ, ct82c710_interrupt, 0, "ct82c710", NULL);
+ if (err)
+ return err;
status = inb_p(CT82C710_STATUS);
status &= ~(CT82C710_ENABLE | CT82C710_INTS_ON);
outb_p(status, CT82C710_STATUS);
free_irq(CT82C710_IRQ, NULL);
- return -1;
+ return -EBUSY;
}
return 0;
kfree(event);
}
-static void serio_remove_duplicate_events(struct serio_event *event)
+static void serio_remove_duplicate_events(void *object,
+ enum serio_event_type type)
{
struct serio_event *e, *next;
unsigned long flags;
spin_lock_irqsave(&serio_event_lock, flags);
list_for_each_entry_safe(e, next, &serio_event_list, node) {
- if (event->object == e->object) {
+ if (object == e->object) {
/*
* If this event is of different type we should not
* look further - we only suppress duplicate events
* that were sent back-to-back.
*/
- if (event->type != e->type)
+ if (type != e->type)
break;
list_del_init(&e->node);
break;
}
- serio_remove_duplicate_events(event);
+ serio_remove_duplicate_events(event->object, event->type);
serio_free_event(event);
}
} else if (!strncmp(buf, "rescan", count)) {
serio_disconnect_port(serio);
serio_find_driver(serio);
+ serio_remove_duplicate_events(serio, SERIO_RESCAN_PORT);
} else if ((drv = driver_find(buf, &serio_bus)) != NULL) {
serio_disconnect_port(serio);
error = serio_bind_driver(serio, to_serio_driver(drv));
put_driver(drv);
+ serio_remove_duplicate_events(serio, SERIO_RESCAN_PORT);
} else {
error = -EINVAL;
}
/*
* serport_ldisc_receive() is called by the low level tty driver when characters
- * are ready for us. We forward the characters, one by one to the 'interrupt'
- * routine.
+ * are ready for us. We forward the characters and flags, one by one, to the
+ * 'interrupt' routine.
*/
static void serport_ldisc_receive(struct tty_struct *tty, const unsigned char *cp, char *fp, int count)
{
struct serport *serport = (struct serport*) tty->disc_data;
unsigned long flags;
+ unsigned int ch_flags;
int i;
spin_lock_irqsave(&serport->lock, flags);
if (!test_bit(SERPORT_ACTIVE, &serport->flags))
goto out;
- for (i = 0; i < count; i++)
- serio_interrupt(serport->serio, cp[i], 0);
+ for (i = 0; i < count; i++) {
+ switch (fp[i]) {
+ case TTY_FRAME:
+ ch_flags = SERIO_FRAME;
+ break;
+
+ case TTY_PARITY:
+ ch_flags = SERIO_PARITY;
+ break;
+
+ default:
+ ch_flags = 0;
+ break;
+ }
+
+ serio_interrupt(serport->serio, cp[i], ch_flags);
+ }
out:
spin_unlock_irqrestore(&serport->lock, flags);
break;
case KE_SW:
+ case KE_VSW:
__set_bit(EV_SW, dev->evbit);
__set_bit(entry->sw.code, dev->swbit);
break;
/* Retrieve the physical and logical size for OEM devices */
error = wacom_retrieve_hid_descriptor(intf, features);
if (error)
- goto fail2;
+ goto fail3;
wacom_setup_device_quirks(features);
}
}
+static unsigned int wacom_calculate_touch_res(unsigned int logical_max,
+ unsigned int physical_max)
+{
+ /* Touch physical dimensions are in hundredths of a mm */
+ return (logical_max * 100) / physical_max;
+}
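+/*
+ * Illustrative example (hypothetical tablet values): logical_max = 15360
+ * with physical_max = 12000 (120.00 mm) yields 15360 * 100 / 12000 = 128
+ * points per mm, the unit expected by input_abs_set_res().
+ */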
+
void wacom_setup_input_capabilities(struct input_dev *input_dev,
struct wacom_wac *wacom_wac)
{
case TABLETPC:
if (features->device_type == BTN_TOOL_DOUBLETAP ||
features->device_type == BTN_TOOL_TRIPLETAP) {
- input_set_abs_params(input_dev, ABS_RX, 0, features->x_phy, 0, 0);
- input_set_abs_params(input_dev, ABS_RY, 0, features->y_phy, 0, 0);
+ input_abs_set_res(input_dev, ABS_X,
+ wacom_calculate_touch_res(features->x_max,
+ features->x_phy));
+ input_abs_set_res(input_dev, ABS_Y,
+ wacom_calculate_touch_res(features->y_max,
+ features->y_phy));
__set_bit(BTN_TOOL_DOUBLETAP, input_dev->keybit);
}
input_set_abs_params(input_dev, ABS_MT_PRESSURE,
0, features->pressure_max,
features->pressure_fuzz, 0);
+ input_abs_set_res(input_dev, ABS_X,
+ wacom_calculate_touch_res(features->x_max,
+ features->x_phy));
+ input_abs_set_res(input_dev, ABS_Y,
+ wacom_calculate_touch_res(features->y_max,
+ features->y_phy));
} else if (features->device_type == BTN_TOOL_PEN) {
__set_bit(BTN_TOOL_RUBBER, input_dev->keybit);
__set_bit(BTN_TOOL_PEN, input_dev->keybit);
{ "Wacom Bamboo 2FG 6x8", WACOM_PKGLEN_BBFUN, 21648, 13530, 1023, 63, BAMBOO_PT };
static const struct wacom_features wacom_features_0xD4 =
{ "Wacom Bamboo Pen", WACOM_PKGLEN_BBFUN, 14720, 9200, 255, 63, BAMBOO_PT };
+static struct wacom_features wacom_features_0xD6 =
+ { "Wacom BambooPT 2FG 4x5", WACOM_PKGLEN_BBFUN, 14720, 9200, 1023, 63, BAMBOO_PT };
+static struct wacom_features wacom_features_0xD7 =
+ { "Wacom BambooPT 2FG Small", WACOM_PKGLEN_BBFUN, 14720, 9200, 1023, 63, BAMBOO_PT };
static struct wacom_features wacom_features_0xD8 =
{ "Wacom Bamboo Comic 2FG", WACOM_PKGLEN_BBFUN, 21648, 13530, 1023, 63, BAMBOO_PT };
static struct wacom_features wacom_features_0xDA =
{ USB_DEVICE_WACOM(0xD2) },
{ USB_DEVICE_WACOM(0xD3) },
{ USB_DEVICE_WACOM(0xD4) },
+ { USB_DEVICE_WACOM(0xD6) },
+ { USB_DEVICE_WACOM(0xD7) },
{ USB_DEVICE_WACOM(0xD8) },
{ USB_DEVICE_WACOM(0xDA) },
{ USB_DEVICE_WACOM(0xDB) },
config TOUCHSCREEN_USB_EGALAX
default y
- bool "eGalax, eTurboTouch CT-410/510/700 device support" if EMBEDDED
+ bool "eGalax, eTurboTouch CT-410/510/700 device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_PANJIT
default y
- bool "PanJit device support" if EMBEDDED
+ bool "PanJit device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_3M
default y
- bool "3M/Microtouch EX II series device support" if EMBEDDED
+ bool "3M/Microtouch EX II series device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_ITM
default y
- bool "ITM device support" if EMBEDDED
+ bool "ITM device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_ETURBO
default y
- bool "eTurboTouch (non-eGalax compatible) device support" if EMBEDDED
+ bool "eTurboTouch (non-eGalax compatible) device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_GUNZE
default y
- bool "Gunze AHL61 device support" if EMBEDDED
+ bool "Gunze AHL61 device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_DMC_TSC10
default y
- bool "DMC TSC-10/25 device support" if EMBEDDED
+ bool "DMC TSC-10/25 device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_IRTOUCH
default y
- bool "IRTOUCHSYSTEMS/UNITOP device support" if EMBEDDED
+ bool "IRTOUCHSYSTEMS/UNITOP device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_IDEALTEK
default y
- bool "IdealTEK URTC1000 device support" if EMBEDDED
+ bool "IdealTEK URTC1000 device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_GENERAL_TOUCH
default y
- bool "GeneralTouch Touchscreen device support" if EMBEDDED
+ bool "GeneralTouch Touchscreen device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_GOTOP
default y
- bool "GoTop Super_Q2/GogoPen/PenPower tablet device support" if EMBEDDED
+ bool "GoTop Super_Q2/GogoPen/PenPower tablet device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_JASTEC
default y
- bool "JASTEC/DigiTech DTR-02U USB touch controller device support" if EMBEDDED
+ bool "JASTEC/DigiTech DTR-02U USB touch controller device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_E2I
config TOUCHSCREEN_USB_ZYTRONIC
default y
- bool "Zytronic controller" if EMBEDDED
+ bool "Zytronic controller" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_ETT_TC45USB
default y
- bool "ET&T USB series TC4UM/TC5UH touchscreen controller support" if EMBEDDED
+ bool "ET&T USB series TC4UM/TC5UH touchscreen controller support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_USB_NEXIO
default y
- bool "NEXIO/iNexio device support" if EMBEDDED
+ bool "NEXIO/iNexio device support" if EXPERT
depends on TOUCHSCREEN_USB_COMPOSITE
config TOUCHSCREEN_TOUCHIT213
struct ads7846_platform_data *pdata = spi->dev.platform_data;
int err;
- /* REVISIT when the irq can be triggered active-low, or if for some
+ /*
+ * REVISIT when the irq can be triggered active-low, or if for some
* reason the touchscreen isn't hooked up, we don't need to access
* the pendown state.
*/
- if (!pdata->get_pendown_state && !gpio_is_valid(pdata->gpio_pendown)) {
- dev_err(&spi->dev, "no get_pendown_state nor gpio_pendown?\n");
- return -EINVAL;
- }
if (pdata->get_pendown_state) {
ts->get_pendown_state = pdata->get_pendown_state;
- return 0;
- }
+ } else if (gpio_is_valid(pdata->gpio_pendown)) {
- err = gpio_request(pdata->gpio_pendown, "ads7846_pendown");
- if (err) {
- dev_err(&spi->dev, "failed to request pendown GPIO%d\n",
- pdata->gpio_pendown);
- return err;
- }
+ err = gpio_request(pdata->gpio_pendown, "ads7846_pendown");
+ if (err) {
+ dev_err(&spi->dev, "failed to request pendown GPIO%d\n",
+ pdata->gpio_pendown);
+ return err;
+ }
- ts->gpio_pendown = pdata->gpio_pendown;
+ ts->gpio_pendown = pdata->gpio_pendown;
+
+ } else {
+ dev_err(&spi->dev, "no get_pendown_state nor gpio_pendown?\n");
+ return -EINVAL;
+ }
return 0;
}
err_put_regulator:
regulator_put(ts->reg);
err_free_gpio:
- if (ts->gpio_pendown != -1)
+ if (!ts->get_pendown_state)
gpio_free(ts->gpio_pendown);
err_cleanup_filter:
if (ts->filter_cleanup)
regulator_disable(ts->reg);
regulator_put(ts->reg);
- if (ts->gpio_pendown != -1)
+ if (!ts->get_pendown_state) {
+ /*
+ * If we are not using a specialized pendown method we must
+ * have been relying on a GPIO we set up ourselves.
+ */
gpio_free(ts->gpio_pendown);
+ }
if (ts->filter_cleanup)
ts->filter_cleanup(ts->filter_data);
#include <linux/input.h>
#include <linux/input/bu21013.h>
#include <linux/slab.h>
+#include <linux/regulator/consumer.h>
#define PEN_DOWN_INTR 0
#define MAX_FINGERS 2
* @chip: pointer to the touch panel controller
* @in_dev: pointer to the input device structure
* @intr_pin: interrupt pin value
+ * @regulator: pointer to the Regulator used for touch screen
*
* Touch panel device data structure
*/
const struct bu21013_platform_device *chip;
struct input_dev *in_dev;
unsigned int intr_pin;
+ struct regulator *regulator;
};
/**
bu21013_data->in_dev = in_dev;
bu21013_data->chip = pdata;
bu21013_data->client = client;
+
+ bu21013_data->regulator = regulator_get(&client->dev, "V-TOUCH");
+ if (IS_ERR(bu21013_data->regulator)) {
+ dev_err(&client->dev, "regulator_get failed\n");
+ error = PTR_ERR(bu21013_data->regulator);
+ goto err_free_mem;
+ }
+
+ error = regulator_enable(bu21013_data->regulator);
+ if (error < 0) {
+ dev_err(&client->dev, "regulator enable failed\n");
+ goto err_put_regulator;
+ }
+
bu21013_data->touch_stopped = false;
init_waitqueue_head(&bu21013_data->wait);
error = pdata->cs_en(pdata->cs_pin);
if (error < 0) {
dev_err(&client->dev, "chip init failed\n");
- goto err_free_mem;
+ goto err_disable_regulator;
}
}
__set_bit(EV_ABS, in_dev->evbit);
input_set_abs_params(in_dev, ABS_MT_POSITION_X, 0,
- pdata->x_max_res, 0, 0);
+ pdata->touch_x_max, 0, 0);
input_set_abs_params(in_dev, ABS_MT_POSITION_Y, 0,
- pdata->y_max_res, 0, 0);
+ pdata->touch_y_max, 0, 0);
input_set_drvdata(in_dev, bu21013_data);
error = request_threaded_irq(pdata->irq, NULL, bu21013_gpio_irq,
bu21013_free_irq(bu21013_data);
err_cs_disable:
pdata->cs_dis(pdata->cs_pin);
+err_disable_regulator:
+ regulator_disable(bu21013_data->regulator);
+err_put_regulator:
+ regulator_put(bu21013_data->regulator);
err_free_mem:
input_free_device(in_dev);
kfree(bu21013_data);
bu21013_data->chip->cs_dis(bu21013_data->chip->cs_pin);
input_unregister_device(bu21013_data->in_dev);
+
+ regulator_disable(bu21013_data->regulator);
+ regulator_put(bu21013_data->regulator);
+
kfree(bu21013_data);
device_init_wakeup(&client->dev, false);
else
disable_irq(bu21013_data->chip->irq);
+ regulator_disable(bu21013_data->regulator);
+
return 0;
}
struct i2c_client *client = bu21013_data->client;
int retval;
+ retval = regulator_enable(bu21013_data->regulator);
+ if (retval < 0) {
+ dev_err(&client->dev, "bu21013 regulator enable failed\n");
+ return retval;
+ }
+
retval = bu21013_init_chip(bu21013_data);
if (retval < 0) {
dev_err(&client->dev, "bu21013 controller config failed\n");
*/
#include <linux/kernel.h>
+#include <linux/err.h>
#include <linux/errno.h>
#include <linux/input.h>
#include <linux/platform_device.h>
}
ts->clk = clk_get(dev, NULL);
- if (!ts->clk) {
+ if (IS_ERR(ts->clk)) {
dev_err(dev, "cannot claim device clock\n");
- error = -EINVAL;
+ error = PTR_ERR(ts->clk);
goto error_clk;
}
#define W8001_PKTLEN_TPCCTL 11 /* control packet */
#define W8001_PKTLEN_TOUCH2FG 13
+/* resolution in points/mm */
+#define W8001_PEN_RESOLUTION 100
+#define W8001_TOUCH_RESOLUTION 10
+
struct w8001_coord {
u8 rdy;
u8 tsw;
query->y = 1024;
if (query->panel_res)
query->x = query->y = (1 << query->panel_res);
- query->panel_res = 10;
+ query->panel_res = W8001_TOUCH_RESOLUTION;
}
}
input_set_abs_params(dev, ABS_X, 0, coord.x, 0, 0);
input_set_abs_params(dev, ABS_Y, 0, coord.y, 0, 0);
+ input_abs_set_res(dev, ABS_X, W8001_PEN_RESOLUTION);
+ input_abs_set_res(dev, ABS_Y, W8001_PEN_RESOLUTION);
input_set_abs_params(dev, ABS_PRESSURE, 0, coord.pen_pressure, 0, 0);
if (coord.tilt_x && coord.tilt_y) {
input_set_abs_params(dev, ABS_TILT_X, 0, coord.tilt_x, 0, 0);
w8001->max_touch_x = touch.x;
w8001->max_touch_y = touch.y;
- /* scale to pen maximum */
if (w8001->max_pen_x && w8001->max_pen_y) {
+ /* if pen is supported, scale to the pen maximum */
touch.x = w8001->max_pen_x;
touch.y = w8001->max_pen_y;
+ touch.panel_res = W8001_PEN_RESOLUTION;
}
input_set_abs_params(dev, ABS_X, 0, touch.x, 0, 0);
input_set_abs_params(dev, ABS_Y, 0, touch.y, 0, 0);
+ input_abs_set_res(dev, ABS_X, touch.panel_res);
+ input_abs_set_res(dev, ABS_Y, touch.panel_res);
switch (touch.sensor_id) {
case 0:
l2_pull_iqueue(struct FsmInst *fi, int event, void *arg)
{
struct PStack *st = fi->userdata;
- struct sk_buff *skb, *oskb;
+ struct sk_buff *skb;
struct Layer2 *l2 = &st->l2;
u_char header[MAX_HEADER_LEN];
- int i;
+ int i, hdr_space_needed;
int unsigned p1;
u_long flags;
if (!skb)
return;
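+ /*
+ * Make sure the skb has enough headroom for the layer-2 header;
+ * if not, reallocate it before the header is pushed below.
+ */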
+ hdr_space_needed = l2headersize(l2, 0);
+ if (hdr_space_needed > skb_headroom(skb)) {
+ struct sk_buff *orig_skb = skb;
+
+ skb = skb_realloc_headroom(skb, hdr_space_needed);
+ if (!skb) {
+ dev_kfree_skb(orig_skb);
+ return;
+ }
+ }
spin_lock_irqsave(&l2->lock, flags);
if(test_bit(FLG_MOD128, &l2->flag))
p1 = (l2->vs - l2->va) % 128;
l2->vs = (l2->vs + 1) % 8;
}
spin_unlock_irqrestore(&l2->lock, flags);
- p1 = skb->data - skb->head;
- if (p1 >= i)
- memcpy(skb_push(skb, i), header, i);
- else {
- printk(KERN_WARNING
- "isdl2 pull_iqueue skb header(%d/%d) too short\n", i, p1);
- oskb = skb;
- skb = alloc_skb(oskb->len + i, GFP_ATOMIC);
- memcpy(skb_put(skb, i), header, i);
- skb_copy_from_linear_data(oskb,
- skb_put(skb, oskb->len), oskb->len);
- dev_kfree_skb(oskb);
- }
+ memcpy(skb_push(skb, i), header, i);
st->l2.l2l1(st, PH_PULL | INDICATION, skb);
test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
if (!test_and_set_bit(FLG_T200_RUN, &st->l2.flag)) {
/*************************/
/* im/exported functions */
/*************************/
-extern char *hysdn_getrev(const char *);
/* hysdn_procconf.c */
extern int hysdn_procconf_init(void); /* init proc config filesys */
/* hysdn_net.c */
extern unsigned int hynet_enable;
-extern char *hysdn_net_revision;
extern int hysdn_net_create(hysdn_card *); /* create a new net device */
extern int hysdn_net_release(hysdn_card *); /* delete the device */
extern char *hysdn_net_getname(hysdn_card *); /* get name of net interface */
MODULE_AUTHOR("Werner Cornelius");
MODULE_LICENSE("GPL");
-static char *hysdn_init_revision = "$Revision: 1.6.6.6 $";
static int cardmax; /* number of found cards */
hysdn_card *card_root = NULL; /* pointer to first card */
static hysdn_card *card_last = NULL; /* pointer to first card */
/* Additionally newer versions may be activated without rebooting. */
/****************************************************************************/
-/******************************************************/
-/* extract revision number from string for log output */
-/******************************************************/
-char *
-hysdn_getrev(const char *revision)
-{
- char *rev;
- char *p;
-
- if ((p = strchr(revision, ':'))) {
- rev = p + 2;
- p = strchr(rev, '$');
- *--p = 0;
- } else
- rev = "???";
- return rev;
-}
-
-
/****************************************************************************/
/* init_module is called once when the module is loaded to do all necessary */
/* things like autodetect... */
static int __init
hysdn_init(void)
{
- char tmp[50];
int rc;
- strcpy(tmp, hysdn_init_revision);
- printk(KERN_NOTICE "HYSDN: module Rev: %s loaded\n", hysdn_getrev(tmp));
- strcpy(tmp, hysdn_net_revision);
- printk(KERN_NOTICE "HYSDN: network interface Rev: %s \n", hysdn_getrev(tmp));
+ printk(KERN_NOTICE "HYSDN: module loaded\n");
rc = pci_register_driver(&hysdn_pci_driver);
if (rc)
unsigned int hynet_enable = 0xffffffff;
module_param(hynet_enable, uint, 0);
-/* store the actual version for log reporting */
-char *hysdn_net_revision = "$Revision: 1.8.6.4 $";
-
#define MAX_SKB_BUFFERS 20 /* number of buffers for keeping TX-data */
/****************************************************************************/
#include "hysdn_defs.h"
static DEFINE_MUTEX(hysdn_conf_mutex);
-static char *hysdn_procconf_revision = "$Revision: 1.8.6.4 $";
#define INFO_OUT_LEN 80 /* length of info line including lf */
card = card->next; /* next entry */
}
- printk(KERN_NOTICE "HYSDN: procfs Rev. %s initialised\n", hysdn_getrev(hysdn_procconf_revision));
+ printk(KERN_NOTICE "HYSDN: procfs initialised\n");
return (0);
} /* hysdn_procconf_init */
static int __init icn_init(void)
{
char *p;
- char rev[20];
+ char rev[21];
memset(&dev, 0, sizeof(icn_dev));
dev.memaddr = (membase & 0x0ffc000);
if ((p = strchr(revision, ':'))) {
strncpy(rev, p + 1, 20);
+ rev[20] = '\0';
p = strchr(rev, '$');
if (p)
*p = 0;
led_dat->pwm = pwm_request(cur_led->pwm_id,
cur_led->name);
if (IS_ERR(led_dat->pwm)) {
+ ret = PTR_ERR(led_dat->pwm);
dev_err(&pdev->dev, "unable to request PWM %d\n",
cur_led->pwm_id);
goto err;
struct led_classdev *led = dev_get_drvdata(dev);
struct gpio_trig_data *gpio_data = led->trigger_data;
- return sprintf(buf, "%s\n", gpio_data->inverted ? "yes" : "no");
+ return sprintf(buf, "%u\n", gpio_data->inverted);
}
static ssize_t gpio_trig_inverted_store(struct device *dev,
{
struct led_classdev *led = dev_get_drvdata(dev);
struct gpio_trig_data *gpio_data = led->trigger_data;
- unsigned inverted;
+ unsigned long inverted;
int ret;
- ret = sscanf(buf, "%u", &inverted);
- if (ret < 1) {
- dev_err(dev, "invalid value\n");
+ ret = strict_strtoul(buf, 10, &inverted);
+ if (ret < 0)
+ return ret;
+
+ if (inverted > 1)
return -EINVAL;
- }
- gpio_data->inverted = !!inverted;
+ gpio_data->inverted = inverted;
/* After inverting, we need to update the LED. */
schedule_work(&gpio_data->work);
*/
void map_switcher_in_guest(struct lg_cpu *cpu, struct lguest_pages *pages)
{
- pte_t *switcher_pte_page = __get_cpu_var(switcher_pte_pages);
+ pte_t *switcher_pte_page = __this_cpu_read(switcher_pte_pages);
pte_t regs_pte;
#ifdef CONFIG_X86_PAE
* meanwhile). If that's not the case, we pretend everything in the
* Guest has changed.
*/
- if (__get_cpu_var(lg_last_cpu) != cpu || cpu->last_pages != pages) {
- __get_cpu_var(lg_last_cpu) = cpu;
+ if (__this_cpu_read(lg_last_cpu) != cpu || cpu->last_pages != pages) {
+ __this_cpu_write(lg_last_cpu, cpu);
cpu->last_pages = pages;
cpu->changed = CHANGED_ALL;
}
tries = 0;
for (;;) {
nr = i2c_master_recv(fcu, buf, nb);
- if (nr > 0 || (nr < 0 && nr != ENODEV) || tries >= 100)
+ if (nr > 0 || (nr < 0 && nr != -ENODEV) || tries >= 100)
break;
msleep(10);
++tries;
tries = 0;
for (;;) {
nw = i2c_master_send(fcu, buf, nb);
- if (nw > 0 || (nw < 0 && nw != EIO) || tries >= 100)
+ if (nw > 0 || (nw < 0 && nw != -EIO) || tries >= 100)
break;
msleep(10);
++tries;
mddev_t *mddev = q->queuedata;
int rv;
int cpu;
+ unsigned int sectors;
if (mddev == NULL || mddev->pers == NULL
|| !mddev->ready) {
atomic_inc(&mddev->active_io);
rcu_read_unlock();
+ /*
+ * save the number of sectors now since our bio
+ * can go away inside make_request
+ */
+ sectors = bio_sectors(bio);
rv = mddev->pers->make_request(mddev, bio);
cpu = part_stat_lock();
part_stat_inc(cpu, &mddev->gendisk->part0, ios[rw]);
- part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw],
- bio_sectors(bio));
+ part_stat_add(cpu, &mddev->gendisk->part0, sectors[rw], sectors);
part_stat_unlock();
if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended)
__bdevname(dev, b));
return PTR_ERR(bdev);
}
- if (!shared)
- set_bit(AllReserved, &rdev->flags);
rdev->bdev = bdev;
return err;
}
if (rdev->raid_disk != -1)
return -EBUSY;
+ if (test_bit(MD_RECOVERY_RUNNING, &rdev->mddev->recovery))
+ return -EBUSY;
+
if (rdev->mddev->pers->hot_add_disk == NULL)
return -EINVAL;
mddev_lock(mddev);
list_for_each_entry(rdev2, &mddev->disks, same_set)
- if (test_bit(AllReserved, &rdev2->flags) ||
- (rdev->bdev == rdev2->bdev &&
- rdev != rdev2 &&
- overlaps(rdev->data_offset, rdev->sectors,
- rdev2->data_offset,
- rdev2->sectors))) {
+ if (rdev->bdev == rdev2->bdev &&
+ rdev != rdev2 &&
+ overlaps(rdev->data_offset, rdev->sectors,
+ rdev2->data_offset,
+ rdev2->sectors)) {
overlap = 1;
break;
}
mddev->delta_disks = raid_disks - mddev->raid_disks;
rv = mddev->pers->check_reshape(mddev);
+ if (rv < 0)
+ mddev->delta_disks = 0;
return rv;
}
} else if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
mddev->resync_min = mddev->curr_resync_completed;
mddev->curr_resync = 0;
- if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
- mddev->curr_resync_completed = 0;
- sysfs_notify(&mddev->kobj, NULL, "sync_completed");
wake_up(&resync_wait);
set_bit(MD_RECOVERY_DONE, &mddev->recovery);
md_wakeup_thread(mddev->thread);
}
}
- if (mddev->degraded && ! mddev->ro && !mddev->recovery_disabled) {
+ if (mddev->degraded && !mddev->recovery_disabled) {
list_for_each_entry(rdev, &mddev->disks, same_set) {
if (rdev->raid_disk >= 0 &&
!test_bit(In_sync, &rdev->flags) &&
/* Only thing we do on a ro array is remove
* failed devices.
*/
- remove_and_add_spares(mddev);
+ mdk_rdev_t *rdev;
+ list_for_each_entry(rdev, &mddev->disks, same_set)
+ if (rdev->raid_disk >= 0 &&
+ !test_bit(Blocked, &rdev->flags) &&
+ test_bit(Faulty, &rdev->flags) &&
+ atomic_read(&rdev->nr_pending)==0) {
+ if (mddev->pers->hot_remove_disk(
+ mddev, rdev->raid_disk)==0) {
+ char nm[20];
+ sprintf(nm,"rd%d", rdev->raid_disk);
+ sysfs_remove_link(&mddev->kobj, nm);
+ rdev->raid_disk = -1;
+ }
+ }
clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
goto unlock;
}
#define Faulty 1 /* device is known to have a fault */
#define In_sync 2 /* device is in_sync with rest of array */
#define WriteMostly 4 /* Avoid reading if at all possible */
-#define AllReserved 6 /* If whole device is reserved for
- * one array */
#define AutoDetected 7 /* added by auto-detect */
#define Blocked 8 /* An error occured on an externally
* managed array, don't allow writes
rdev1->new_raid_disk = j;
}
+ if (mddev->level == 1) {
+ /* taking over a raid1 array -
+ * we have only one active disk
+ */
+ j = 0;
+ rdev1->new_raid_disk = j;
+ }
+
if (j < 0 || j >= mddev->raid_disks) {
printk(KERN_ERR "md/raid0:%s: bad disk number %d - "
"aborting!\n", mdname(mddev), j);
return priv_conf;
}
+static void *raid0_takeover_raid1(mddev_t *mddev)
+{
+ raid0_conf_t *priv_conf;
+
+ /* Check layout:
+ * - (N - 1) mirror drives must be already faulty
+ */
+ if ((mddev->raid_disks - 1) != mddev->degraded) {
+ printk(KERN_ERR "md/raid0:%s: (N - 1) mirrors drives must be already faulty!\n",
+ mdname(mddev));
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* Set new parameters */
+ mddev->new_level = 0;
+ mddev->new_layout = 0;
+ mddev->new_chunk_sectors = 128; /* by default set chunk size to 64k */
+ mddev->delta_disks = 1 - mddev->raid_disks;
+ /* make sure it will be not marked as dirty */
+ mddev->recovery_cp = MaxSector;
+
+ create_strip_zones(mddev, &priv_conf);
+ return priv_conf;
+}
+
static void *raid0_takeover(mddev_t *mddev)
{
/* raid0 can take over:
* raid4 - if all data disks are active.
* raid5 - providing it is Raid4 layout and one disk is faulty
* raid10 - assuming we have all necessary active disks
+ * raid1 - with (N -1) mirror drives faulty
*/
if (mddev->level == 4)
return raid0_takeover_raid45(mddev);
if (mddev->level == 10)
return raid0_takeover_raid10(mddev);
+ if (mddev->level == 1)
+ return raid0_takeover_raid1(mddev);
+
+ printk(KERN_ERR "Takeover from raid%i to raid0 not supported\n",
+ mddev->level);
+
return ERR_PTR(-EINVAL);
}
mddev->recovery_cp = MaxSector;
conf = setup_conf(mddev);
- if (!IS_ERR(conf))
+ if (!IS_ERR(conf)) {
list_for_each_entry(rdev, &mddev->disks, same_set)
if (rdev->raid_disk >= 0)
rdev->new_raid_disk = rdev->raid_disk * 2;
-
+ conf->barrier = 1;
+ }
+
return conf;
}
raid5_conf_t *conf = mddev->private;
mdk_rdev_t *rdev;
int spares = 0;
- int added_devices = 0;
unsigned long flags;
if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
return -ENOSPC;
list_for_each_entry(rdev, &mddev->disks, same_set)
- if ((rdev->raid_disk < 0 || rdev->raid_disk >= conf->raid_disks)
- && !test_bit(Faulty, &rdev->flags))
+ if (!test_bit(In_sync, &rdev->flags)
+ && !test_bit(Faulty, &rdev->flags))
spares++;
if (spares - mddev->degraded < mddev->delta_disks - conf->max_degraded)
* to correctly record the "partially reconstructed" state of
* such devices during the reshape and confusion could result.
*/
- if (mddev->delta_disks >= 0)
- list_for_each_entry(rdev, &mddev->disks, same_set)
- if (rdev->raid_disk < 0 &&
- !test_bit(Faulty, &rdev->flags)) {
- if (raid5_add_disk(mddev, rdev) == 0) {
- char nm[20];
- if (rdev->raid_disk >= conf->previous_raid_disks) {
- set_bit(In_sync, &rdev->flags);
- added_devices++;
- } else
- rdev->recovery_offset = 0;
- sprintf(nm, "rd%d", rdev->raid_disk);
- if (sysfs_create_link(&mddev->kobj,
- &rdev->kobj, nm))
- /* Failure here is OK */;
- } else
- break;
- } else if (rdev->raid_disk >= conf->previous_raid_disks
- && !test_bit(Faulty, &rdev->flags)) {
- /* This is a spare that was manually added */
- set_bit(In_sync, &rdev->flags);
- added_devices++;
- }
+ if (mddev->delta_disks >= 0) {
+ int added_devices = 0;
+ list_for_each_entry(rdev, &mddev->disks, same_set)
+ if (rdev->raid_disk < 0 &&
+ !test_bit(Faulty, &rdev->flags)) {
+ if (raid5_add_disk(mddev, rdev) == 0) {
+ char nm[20];
+ if (rdev->raid_disk
+ >= conf->previous_raid_disks) {
+ set_bit(In_sync, &rdev->flags);
+ added_devices++;
+ } else
+ rdev->recovery_offset = 0;
+ sprintf(nm, "rd%d", rdev->raid_disk);
+ if (sysfs_create_link(&mddev->kobj,
+ &rdev->kobj, nm))
+ /* Failure here is OK */;
+ }
+ } else if (rdev->raid_disk >= conf->previous_raid_disks
+ && !test_bit(Faulty, &rdev->flags)) {
+ /* This is a spare that was manually added */
+ set_bit(In_sync, &rdev->flags);
+ added_devices++;
+ }
- /* When a reshape changes the number of devices, ->degraded
- * is measured against the larger of the pre and post number of
- * devices.*/
- if (mddev->delta_disks > 0) {
+ /* When a reshape changes the number of devices,
+ * ->degraded is measured against the larger of the
+ * pre and post number of devices.
+ */
spin_lock_irqsave(&conf->device_lock, flags);
mddev->degraded += (conf->raid_disks - conf->previous_raid_disks)
- added_devices;
INFO(("found saa7146 @ mem %p (revision %d, irq %d) (0x%04x,0x%04x).\n", dev->mem, dev->revision, pci->irq, pci->subsystem_vendor, pci->subsystem_device));
dev->ext = ext;
- mutex_init(&dev->lock);
+ mutex_init(&dev->v4l2_lock);
spin_lock_init(&dev->int_slock);
spin_lock_init(&dev->slock);
}
/* is it free? */
- mutex_lock(&dev->lock);
if (vv->resources & bit) {
DEB_D(("locked! vv->resources:0x%02x, we want:0x%02x\n",vv->resources,bit));
/* no, someone else uses it */
- mutex_unlock(&dev->lock);
return 0;
}
/* it's free, grab it */
fh->resources |= bit;
vv->resources |= bit;
DEB_D(("res: get 0x%02x, cur:0x%02x\n",bit,vv->resources));
- mutex_unlock(&dev->lock);
return 1;
}
BUG_ON((fh->resources & bits) != bits);
- mutex_lock(&dev->lock);
fh->resources &= ~bits;
vv->resources &= ~bits;
DEB_D(("res: put 0x%02x, cur:0x%02x\n",bits,vv->resources));
- mutex_unlock(&dev->lock);
}
.write = fops_write,
.poll = fops_poll,
.mmap = fops_mmap,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
};
static void vv_callback(struct saa7146_dev *dev, unsigned long status)
vfd->fops = &video_fops;
vfd->ioctl_ops = &dev->ext_vv_data->ops;
vfd->release = video_device_release;
+ vfd->lock = &dev->v4l2_lock;
vfd->tvnorms = 0;
for (i = 0; i < dev->ext_vv_data->num_stds; i++)
vfd->tvnorms |= dev->ext_vv_data->stds[i].id;
V4L2_BUF_TYPE_VBI_CAPTURE,
V4L2_FIELD_SEQ_TB, // FIXME: does this really work?
sizeof(struct saa7146_buf),
- file, NULL);
+ file, &dev->v4l2_lock);
init_timer(&fh->vbi_read_timeout);
fh->vbi_read_timeout.function = vbi_read_timeout;
}
}
- mutex_lock(&dev->lock);
-
/* ok, accept it */
vv->ov_fb = *fb;
vv->ov_fmt = fmt;
vv->ov_fb.fmt.bytesperline = vv->ov_fb.fmt.width * fmt->depth / 8;
DEB_D(("setting bytesperline to %d\n", vv->ov_fb.fmt.bytesperline));
}
-
- mutex_unlock(&dev->lock);
return 0;
}
return -EINVAL;
}
- mutex_lock(&dev->lock);
-
switch (ctrl->type) {
case V4L2_CTRL_TYPE_BOOLEAN:
case V4L2_CTRL_TYPE_MENU:
/* fixme: we can support changing VFLIP and HFLIP here... */
if (IS_CAPTURE_ACTIVE(fh) != 0) {
DEB_D(("V4L2_CID_HFLIP while active capture.\n"));
- mutex_unlock(&dev->lock);
return -EBUSY;
}
vv->hflip = c->value;
case V4L2_CID_VFLIP:
if (IS_CAPTURE_ACTIVE(fh) != 0) {
DEB_D(("V4L2_CID_VFLIP while active capture.\n"));
- mutex_unlock(&dev->lock);
return -EBUSY;
}
vv->vflip = c->value;
break;
default:
- mutex_unlock(&dev->lock);
return -EINVAL;
}
- mutex_unlock(&dev->lock);
if (IS_OVERLAY_ACTIVE(fh) != 0) {
saa7146_stop_preview(fh);
err = vidioc_try_fmt_vid_overlay(file, fh, f);
if (0 != err)
return err;
- mutex_lock(&dev->lock);
fh->ov.win = f->fmt.win;
fh->ov.nclips = f->fmt.win.clipcount;
if (fh->ov.nclips > 16)
fh->ov.nclips = 16;
if (copy_from_user(fh->ov.clips, f->fmt.win.clips,
sizeof(struct v4l2_clip) * fh->ov.nclips)) {
- mutex_unlock(&dev->lock);
return -EFAULT;
}
/* fh->ov.fh is used to indicate that we have valid overlay informations, too */
fh->ov.fh = fh;
- mutex_unlock(&dev->lock);
-
/* check if our current overlay is active */
if (IS_OVERLAY_ACTIVE(fh) != 0) {
saa7146_stop_preview(fh);
}
}
- mutex_lock(&dev->lock);
-
for (i = 0; i < dev->ext_vv_data->num_stds; i++)
if (*id & dev->ext_vv_data->stds[i].id)
break;
found = 1;
}
- mutex_unlock(&dev->lock);
-
if (vv->ov_suspend != NULL) {
saa7146_start_preview(vv->ov_suspend);
vv->ov_suspend = NULL;
V4L2_BUF_TYPE_VIDEO_CAPTURE,
V4L2_FIELD_INTERLACED,
sizeof(struct saa7146_buf),
- file, NULL);
+ file, &dev->v4l2_lock);
return 0;
}
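
The saa7146 hunks above convert the driver from the old locked .ioctl entry point with hand-rolled mutex_lock(&dev->lock) calls to .unlocked_ioctl = video_ioctl2 plus a per-device v4l2_lock that is handed to the V4L2 core through vfd->lock (and to the videobuf queues), so the core serializes file operations and the per-ioctl locking in the driver can be dropped. A minimal sketch of that registration pattern, assuming a hypothetical driver structure (my_dev, my_fops and my_register are illustrative names, not saa7146 symbols):

#include <linux/module.h>
#include <linux/mutex.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-ioctl.h>

struct my_dev {
        struct mutex v4l2_lock;         /* serializes all file operations */
};

static const struct v4l2_file_operations my_fops = {
        .owner          = THIS_MODULE,
        .unlocked_ioctl = video_ioctl2, /* core takes vfd->lock around ioctls */
};

static int my_register(struct my_dev *dev, struct video_device *vfd)
{
        mutex_init(&dev->v4l2_lock);
        vfd->fops = &my_fops;
        vfd->lock = &dev->v4l2_lock;    /* no mutex_lock() needed in handlers */
        return video_register_device(vfd, VFL_TYPE_GRABBER, -1);
}
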
config MEDIA_TUNER_CUSTOMISE
bool "Customize analog and hybrid tuner modules to build"
depends on MEDIA_TUNER
- default y if EMBEDDED
+ default y if EXPERT
help
This allows the user to deselect tuner drivers unnecessary
for their hardware from the build. Use this option with care
msleep(20);
} else {
msg = disable;
- tuner_i2c_xfer_send(&priv->i2c_props, msg, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &msg[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props, msg, 1, &msg[1], 1);
buf[2] = msg[1];
buf[2] &= ~0x04;
tuner_i2c_xfer_send(&priv->i2c_props, pll_bw_nom, 2);
}
+
tda8290_i2c_bridge(fe, 1);
if (fe->ops.tuner_ops.set_analog_params)
fe->ops.tuner_ops.set_analog_params(fe, params);
for (i = 0; i < 3; i++) {
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_pll_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &pll_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_pll_stat, 1, &pll_stat, 1);
if (pll_stat & 0x80) {
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_adc_sat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &adc_sat, 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_agc_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &agc_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_adc_sat, 1,
+ &adc_sat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_agc_stat, 1,
+ &agc_stat, 1);
tuner_dbg("tda8290 is locked, AGC: %d\n", agc_stat);
break;
} else {
agc_stat, adc_sat, pll_stat & 0x80);
tuner_i2c_xfer_send(&priv->i2c_props, gainset_2, 2);
msleep(100);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_agc_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &agc_stat, 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_pll_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &pll_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_agc_stat, 1, &agc_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_pll_stat, 1, &pll_stat, 1);
if ((agc_stat > 115) || !(pll_stat & 0x80)) {
tuner_dbg("adjust gain, step 2. Agc: %d, lock: %d\n",
agc_stat, pll_stat & 0x80);
if (priv->cfg.agcf)
priv->cfg.agcf(fe);
msleep(100);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_agc_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &agc_stat, 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_pll_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &pll_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_agc_stat, 1,
+ &agc_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_pll_stat, 1,
+ &pll_stat, 1);
if((agc_stat > 115) || !(pll_stat & 0x80)) {
tuner_dbg("adjust gain, step 3. Agc: %d\n", agc_stat);
tuner_i2c_xfer_send(&priv->i2c_props, adc_head_12, 2);
/* l/ l' deadlock? */
if(priv->tda8290_easy_mode & 0x60) {
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_adc_sat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &adc_sat, 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &addr_pll_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &pll_stat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_adc_sat, 1,
+ &adc_sat, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &addr_pll_stat, 1,
+ &pll_stat, 1);
if ((adc_sat > 20) || !(pll_stat & 0x80)) {
tuner_dbg("trying to resolve SECAM L deadlock\n");
tuner_i2c_xfer_send(&priv->i2c_props, agc_rst_on, 2);
struct tda8290_priv *priv = fe->analog_demod_priv;
unsigned char buf[] = { 0x30, 0x00 }; /* clb_stdbt */
- tuner_i2c_xfer_send(&priv->i2c_props, &buf[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &buf[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props, &buf[0], 1, &buf[1], 1);
if (enable)
buf[1] = 0x01;
struct tda8290_priv *priv = fe->analog_demod_priv;
unsigned char buf[] = { 0x01, 0x00 };
- tuner_i2c_xfer_send(&priv->i2c_props, &buf[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &buf[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props, &buf[0], 1, &buf[1], 1);
if (enable)
buf[1] = 0x01; /* rising edge sets regs 0x02 - 0x23 */
struct tda8290_priv *priv = fe->analog_demod_priv;
unsigned char buf[] = { 0x02, 0x00 }; /* DIV_FUNC */
- tuner_i2c_xfer_send(&priv->i2c_props, &buf[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &buf[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props, &buf[0], 1, &buf[1], 1);
if (enable)
buf[1] &= ~0x40;
unsigned char set_gpio_cf[] = { 0x44, 0x00 };
unsigned char set_gpio_val[] = { 0x46, 0x00 };
- tuner_i2c_xfer_send(&priv->i2c_props, &set_gpio_cf[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &set_gpio_cf[1], 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &set_gpio_val[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &set_gpio_val[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &set_gpio_cf[0], 1, &set_gpio_cf[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &set_gpio_val[0], 1, &set_gpio_val[1], 1);
set_gpio_cf[1] &= 0xf0; /* clear GPIO_0 bits 3-0 */
unsigned char hvpll_stat = 0x26;
unsigned char ret;
- tuner_i2c_xfer_send(&priv->i2c_props, &hvpll_stat, 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &ret, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props, &hvpll_stat, 1, &ret, 1);
return (ret & 0x01) ? 65535 : 0;
}
tda8295_power(fe, 1);
tda8295_agc1_out(fe, 1);
- tuner_i2c_xfer_send(&priv->i2c_props, &blanking_mode[0], 1);
- tuner_i2c_xfer_recv(&priv->i2c_props, &blanking_mode[1], 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ &blanking_mode[0], 1, &blanking_mode[1], 1);
tda8295_set_video_std(fe);
unsigned char i2c_get_afc[1] = { 0x1B };
unsigned char afc = 0;
- tuner_i2c_xfer_send(&priv->i2c_props, i2c_get_afc, ARRAY_SIZE(i2c_get_afc));
- tuner_i2c_xfer_recv(&priv->i2c_props, &afc, 1);
+ tuner_i2c_xfer_send_recv(&priv->i2c_props,
+ i2c_get_afc, ARRAY_SIZE(i2c_get_afc), &afc, 1);
return (afc & 0x80)? 65535:0;
}
static int tda8290_probe(struct tuner_i2c_props *i2c_props)
{
#define TDA8290_ID 0x89
- unsigned char tda8290_id[] = { 0x1f, 0x00 };
+ u8 reg = 0x1f, id;
+ struct i2c_msg msg_read[] = {
+ { .addr = 0x4b, .flags = 0, .len = 1, .buf = &reg },
+ { .addr = 0x4b, .flags = I2C_M_RD, .len = 1, .buf = &id },
+ };
/* detect tda8290 */
- tuner_i2c_xfer_send(i2c_props, &tda8290_id[0], 1);
- tuner_i2c_xfer_recv(i2c_props, &tda8290_id[1], 1);
+ if (i2c_transfer(i2c_props->adap, msg_read, 2) != 2) {
+ printk(KERN_WARNING "%s: tda8290 couldn't read register 0x%02x\n",
+ __func__, reg);
+ return -ENODEV;
+ }
- if (tda8290_id[1] == TDA8290_ID) {
+ if (id == TDA8290_ID) {
if (debug)
printk(KERN_DEBUG "%s: tda8290 detected @ %d-%04x\n",
__func__, i2c_adapter_id(i2c_props->adap),
i2c_props->addr);
return 0;
}
-
return -ENODEV;
}
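
The tda8290/tda8295 changes above replace every tuner_i2c_xfer_send() + tuner_i2c_xfer_recv() pair with one combined transfer (tuner_i2c_xfer_send_recv(), or a two-message i2c_transfer() as in the probe), so the register-address write and the data read happen in a single bus transaction with a repeated start and no other traffic can slip onto the bus in between. A hedged sketch of the same read-one-register pattern (read_reg and its arguments are illustrative, not the driver's helpers):

#include <linux/errno.h>
#include <linux/i2c.h>

/* Sketch: read one 8-bit register with a combined write + read transfer
 * (repeated start) instead of two separate transactions. */
static int read_reg(struct i2c_adapter *adap, u8 addr, u8 reg, u8 *val)
{
        struct i2c_msg msg[] = {
                { .addr = addr, .flags = 0,        .len = 1, .buf = &reg },
                { .addr = addr, .flags = I2C_M_RD, .len = 1, .buf = val  },
        };

        /* i2c_transfer() returns the number of messages transferred */
        return (i2c_transfer(adap, msg, 2) == 2) ? 0 : -EIO;
}
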
{
#define TDA8295_ID 0x8a
#define TDA8295C2_ID 0x8b
- unsigned char tda8295_id[] = { 0x2f, 0x00 };
+ u8 reg = 0x2f, id;
+ struct i2c_msg msg_read[] = {
+ { .addr = 0x4b, .flags = 0, .len = 1, .buf = &reg },
+ { .addr = 0x4b, .flags = I2C_M_RD, .len = 1, .buf = &id },
+ };
- /* detect tda8295 */
- tuner_i2c_xfer_send(i2c_props, &tda8295_id[0], 1);
- tuner_i2c_xfer_recv(i2c_props, &tda8295_id[1], 1);
+ /* detect tda8295 */
+ if (i2c_transfer(i2c_props->adap, msg_read, 2) != 2) {
+ printk(KERN_WARNING "%s: tda8295 couldn't read register 0x%02x\n",
+ __func__, reg);
+ return -ENODEV;
+ }
- if ((tda8295_id[1] & 0xfe) == TDA8295_ID) {
+ if ((id & 0xfe) == TDA8295_ID) {
if (debug)
printk(KERN_DEBUG "%s: %s detected @ %d-%04x\n",
- __func__, (tda8295_id[1] == TDA8295_ID) ?
+ __func__, (id == TDA8295_ID) ?
"tda8295c1" : "tda8295c2",
i2c_adapter_id(i2c_props->adap),
i2c_props->addr);
sizeof(struct analog_demod_ops));
}
- if ((!(cfg) || (TDA829X_PROBE_TUNER == cfg->probe_tuner)) &&
- (tda829x_find_tuner(fe) < 0))
- goto fail;
+ if (!(cfg) || (TDA829X_PROBE_TUNER == cfg->probe_tuner)) {
+ tda8295_power(fe, 1);
+ if (tda829x_find_tuner(fe) < 0)
+ goto fail;
+ }
switch (priv->ver) {
case TDA8290:
return fe;
fail:
+ memset(&fe->ops.analog_ops, 0, sizeof(struct analog_demod_ops));
+
tda829x_release(fe);
return NULL;
}
int i;
/* rule out tda9887, which would return the same byte repeatedly */
- tuner_i2c_xfer_send(&i2c_props, soft_reset, 1);
- tuner_i2c_xfer_recv(&i2c_props, buf, PROBE_BUFFER_SIZE);
+ tuner_i2c_xfer_send_recv(&i2c_props,
+ soft_reset, 1, buf, PROBE_BUFFER_SIZE);
for (i = 1; i < PROBE_BUFFER_SIZE; i++) {
if (buf[i] != buf[0])
break;
/* fall back to old probing method */
tuner_i2c_xfer_send(&i2c_props, easy_mode_b, 2);
tuner_i2c_xfer_send(&i2c_props, soft_reset, 2);
- tuner_i2c_xfer_send(&i2c_props, &addr_dto_lsb, 1);
- tuner_i2c_xfer_recv(&i2c_props, &data, 1);
+ tuner_i2c_xfer_send_recv(&i2c_props, &addr_dto_lsb, 1, &data, 1);
if (data == 0) {
tuner_i2c_xfer_send(&i2c_props, easy_mode_g, 2);
tuner_i2c_xfer_send(&i2c_props, soft_reset, 2);
- tuner_i2c_xfer_send(&i2c_props, &addr_dto_lsb, 1);
- tuner_i2c_xfer_recv(&i2c_props, &data, 1);
+ tuner_i2c_xfer_send_recv(&i2c_props,
+ &addr_dto_lsb, 1, &data, 1);
if (data == 0x7b) {
return 0;
}
union {
u16 system16;
struct {
- u8 system;
u8 not_system;
+ u8 system;
};
};
u8 data;
if ((poll_reply->system ^ poll_reply->not_system) != 0xff) {
deb_data("NEC extended protocol\n");
/* NEC extended code - 24 bits */
- keycode = poll_reply->system16 << 8 | poll_reply->data;
+ keycode = be16_to_cpu(poll_reply->system16) << 8 | poll_reply->data;
} else {
deb_data("NEC normal protocol\n");
/* normal NEC code - 16 bits */
deb_data("RC5 protocol\n");
/* RC5 Protocol */
toggle = poll_reply->report_id;
- keycode = poll_reply->system16 << 8 | poll_reply->data;
+ keycode = poll_reply->system << 8 | poll_reply->data;
break;
}
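
The struct reordering and be16_to_cpu() fix above follow from the NEC protocol itself: when the address byte and its complement do not XOR to 0xff the remote is using extended NEC, where the full 16-bit address produces a 24-bit scancode, otherwise the plain 8-bit address gives a 16-bit scancode. That is also why the Pixelview SBTVD keymap further down grows from 0x86xx to 0x866bxx entries. A small sketch of that scancode assembly, following the convention used by the kernel's NEC decoder (nec_scancode is an illustrative helper, not a driver function):

#include <linux/types.h>

/* Sketch: build an rc-core scancode from the raw NEC bytes. */
static u32 nec_scancode(u8 address, u8 not_address, u8 command)
{
        if ((address ^ not_address) != 0xff)
                /* extended NEC: 16-bit address, 24-bit scancode */
                return ((u32)address << 16) | ((u32)not_address << 8) | command;

        /* normal NEC: 8-bit address, 16-bit scancode */
        return ((u32)address << 8) | command;
}
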
void fdtv_handle_rc(struct firedtv *fdtv, unsigned int code)
{
- u16 *keycode = fdtv->remote_ctrl_dev->keycode;
+ struct input_dev *idev = fdtv->remote_ctrl_dev;
+ u16 *keycode = idev->keycode;
if (code >= 0x0300 && code <= 0x031f)
code = keycode[code - 0x0300];
return;
}
- input_report_key(fdtv->remote_ctrl_dev, code, 1);
- input_report_key(fdtv->remote_ctrl_dev, code, 0);
+ input_report_key(idev, code, 1);
+ input_sync(idev);
+ input_report_key(idev, code, 0);
+ input_sync(idev);
}
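
The firedtv change above reports press and release as two distinct events, with input_sync() after each report so each state change reaches userspace as its own event packet. A minimal sketch of that pattern (report_keypress is an illustrative name):

#include <linux/input.h>

/* Sketch: deliver a full press + release for one keycode. */
static void report_keypress(struct input_dev *idev, unsigned int keycode)
{
        input_report_key(idev, keycode, 1);     /* press   */
        input_sync(idev);
        input_report_key(idev, keycode, 0);     /* release */
        input_sync(idev);
}
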
config DVB_FE_CUSTOMISE
bool "Customise the frontend modules to build"
depends on DVB_CORE
- default y if EMBEDDED
+ default y if EXPERT
help
This allows the user to select/deselect frontend drivers for their
hardware from the build.
if_sample_freq = 3300000; /* 3.3 MHz */
break;
case BANDWIDTH_7_MHZ:
- if_sample_freq = 3800000; /* 3.8 MHz */
+ if_sample_freq = 3500000; /* 3.5 MHz */
break;
case BANDWIDTH_8_MHZ:
default:
- if_sample_freq = 4300000; /* 4.3 MHz */
+ if_sample_freq = 4000000; /* 4.0 MHz */
break;
}
} else if (state->config.tuner == AF9013_TUNER_TDA18218) {
return fe;
error:
- ix2505v_release(fe);
+ kfree(state);
return NULL;
}
EXPORT_SYMBOL(ix2505v_attach);
const struct mb86a20s_config *config;
struct dvb_frontend frontend;
+
+ bool need_init;
};
struct regdata {
rc = i2c_transfer(state->i2c, &msg, 1);
if (rc != 1) {
- printk("%s: writereg rcor(rc == %i, reg == 0x%02x,"
+ printk("%s: writereg error (rc == %i, reg == 0x%02x,"
" data == 0x%02x)\n", __func__, rc, reg, data);
return rc;
}
rc = i2c_transfer(state->i2c, msg, 2);
if (rc != 2) {
- rc("%s: reg=0x%x (rcor=%d)\n", __func__, reg, rc);
+ rc("%s: reg=0x%x (error=%d)\n", __func__, reg, rc);
return rc;
}
/* Initialize the frontend */
rc = mb86a20s_writeregdata(state, mb86a20s_init);
if (rc < 0)
- return rc;
+ goto err;
if (!state->config->is_serial) {
regD5 &= ~1;
rc = mb86a20s_writereg(state, 0x50, 0xd5);
if (rc < 0)
- return rc;
+ goto err;
rc = mb86a20s_writereg(state, 0x51, regD5);
if (rc < 0)
- return rc;
+ goto err;
}
if (fe->ops.i2c_gate_ctrl)
fe->ops.i2c_gate_ctrl(fe, 1);
- return 0;
+err:
+ if (rc < 0) {
+ state->need_init = true;
+ printk(KERN_INFO "mb86a20s: Init failed. Will try again later\n");
+ } else {
+ state->need_init = false;
+ dprintk("Initialization succeded.\n");
+ }
+ return rc;
}
static int mb86a20s_read_signal_strength(struct dvb_frontend *fe, u16 *strength)
if (fe->ops.i2c_gate_ctrl)
fe->ops.i2c_gate_ctrl(fe, 1);
+ dprintk("Calling tuner set parameters\n");
fe->ops.tuner_ops.set_params(fe, p);
+ /*
+ * Make it more reliable: if, for some reason, the initial
+ * device initialization doesn't happen, initialize it when
+ * the SBTVD parameters are adjusted.
+ *
+ * Unfortunately, due to a hard-to-track bug in tda829x/tda18271,
+ * the agc callback logic is not called during DVB attach time,
+ * causing mb86a20s not to be initialized with the Kworld SBTVD.
+ * So, this hack is needed in order to make the Kworld SBTVD work.
+ */
+ if (state->need_init)
+ mb86a20s_initfe(fe);
+
if (fe->ops.i2c_gate_ctrl)
fe->ops.i2c_gate_ctrl(fe, 0);
rc = mb86a20s_writeregdata(state, mb86a20s_reset_reception);
{
ca_slot_info_t *info=(ca_slot_info_t *)parg;
- if (info->num > 1)
+ if (info->num < 0 || info->num > 1)
return -EINVAL;
av7110->ci_slot[info->num].num = info->num;
av7110->ci_slot[info->num].type = FW_CI_LL_SUPPORT(av7110->arm_app) ?
following ports will be probed: 0x20c, 0x30c, 0x24c, 0x34c, 0x248 and
0x28c.
-config RADIO_GEMTEK_PCI
- tristate "GemTek PCI Radio Card support"
- depends on VIDEO_V4L2 && PCI
- ---help---
- Choose Y here if you have this PCI FM radio card.
-
- In order to control your radio card, you will need to use programs
- that are compatible with the Video for Linux API. Information on
- this API and pointers to "v4l" programs may be found at
- <file:Documentation/video4linux/API.html>.
-
- To compile this driver as a module, choose M here: the
- module will be called radio-gemtek-pci.
-
config RADIO_MAXIRADIO
tristate "Guillemot MAXI Radio FM 2000 radio"
depends on VIDEO_V4L2 && PCI
obj-$(CONFIG_RADIO_RTRACK) += radio-aimslab.o
obj-$(CONFIG_RADIO_ZOLTRIX) += radio-zoltrix.o
obj-$(CONFIG_RADIO_GEMTEK) += radio-gemtek.o
-obj-$(CONFIG_RADIO_GEMTEK_PCI) += radio-gemtek-pci.o
obj-$(CONFIG_RADIO_TRUST) += radio-trust.o
obj-$(CONFIG_I2C_SI4713) += si4713-i2c.o
obj-$(CONFIG_RADIO_SI4713) += radio-si4713.o
#include <linux/module.h> /* Modules */
#include <linux/init.h> /* Initdata */
#include <linux/ioport.h> /* request_region */
+#include <linux/delay.h> /* msleep */
#include <linux/videodev2.h> /* kernel radio structs */
#include <linux/version.h> /* for KERNEL_VERSION MACRO */
#include <linux/io.h> /* outb, outb_p */
+++ /dev/null
-/*
- ***************************************************************************
- *
- * radio-gemtek-pci.c - Gemtek PCI Radio driver
- * (C) 2001 Vladimir Shebordaev <vshebordaev@mail.ru>
- *
- ***************************************************************************
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of
- * the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public
- * License along with this program; if not, write to the Free
- * Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139,
- * USA.
- *
- ***************************************************************************
- *
- * Gemtek Corp still silently refuses to release any specifications
- * of their multimedia devices, so the protocol still has to be
- * reverse engineered.
- *
- * The v4l code was inspired by Jonas Munsin's Gemtek serial line
- * radio device driver.
- *
- * Please, let me know if this piece of code was useful :)
- *
- * TODO: multiple device support and portability were not tested
- *
- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>
- *
- ***************************************************************************
- */
-
-#include <linux/types.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/videodev2.h>
-#include <linux/errno.h>
-#include <linux/version.h> /* for KERNEL_VERSION MACRO */
-#include <linux/io.h>
-#include <linux/slab.h>
-#include <media/v4l2-device.h>
-#include <media/v4l2-ioctl.h>
-
-MODULE_AUTHOR("Vladimir Shebordaev <vshebordaev@mail.ru>");
-MODULE_DESCRIPTION("The video4linux driver for the Gemtek PCI Radio Card");
-MODULE_LICENSE("GPL");
-
-static int nr_radio = -1;
-static int mx = 1;
-
-module_param(mx, bool, 0);
-MODULE_PARM_DESC(mx, "single digit: 1 - turn off the turner upon module exit (default), 0 - do not");
-module_param(nr_radio, int, 0);
-MODULE_PARM_DESC(nr_radio, "video4linux device number to use");
-
-#define RADIO_VERSION KERNEL_VERSION(0, 0, 2)
-
-#ifndef PCI_VENDOR_ID_GEMTEK
-#define PCI_VENDOR_ID_GEMTEK 0x5046
-#endif
-
-#ifndef PCI_DEVICE_ID_GEMTEK_PR103
-#define PCI_DEVICE_ID_GEMTEK_PR103 0x1001
-#endif
-
-#ifndef GEMTEK_PCI_RANGE_LOW
-#define GEMTEK_PCI_RANGE_LOW (87*16000)
-#endif
-
-#ifndef GEMTEK_PCI_RANGE_HIGH
-#define GEMTEK_PCI_RANGE_HIGH (108*16000)
-#endif
-
-struct gemtek_pci {
- struct v4l2_device v4l2_dev;
- struct video_device vdev;
- struct mutex lock;
- struct pci_dev *pdev;
-
- u32 iobase;
- u32 length;
-
- u32 current_frequency;
- u8 mute;
-};
-
-static inline struct gemtek_pci *to_gemtek_pci(struct v4l2_device *v4l2_dev)
-{
- return container_of(v4l2_dev, struct gemtek_pci, v4l2_dev);
-}
-
-static inline u8 gemtek_pci_out(u16 value, u32 port)
-{
- outw(value, port);
-
- return (u8)value;
-}
-
-#define _b0(v) (*((u8 *)&v))
-
-static void __gemtek_pci_cmd(u16 value, u32 port, u8 *last_byte, int keep)
-{
- u8 byte = *last_byte;
-
- if (!value) {
- if (!keep)
- value = (u16)port;
- byte &= 0xfd;
- } else
- byte |= 2;
-
- _b0(value) = byte;
- outw(value, port);
- byte |= 1;
- _b0(value) = byte;
- outw(value, port);
- byte &= 0xfe;
- _b0(value) = byte;
- outw(value, port);
-
- *last_byte = byte;
-}
-
-static inline void gemtek_pci_nil(u32 port, u8 *last_byte)
-{
- __gemtek_pci_cmd(0x00, port, last_byte, false);
-}
-
-static inline void gemtek_pci_cmd(u16 cmd, u32 port, u8 *last_byte)
-{
- __gemtek_pci_cmd(cmd, port, last_byte, true);
-}
-
-static void gemtek_pci_setfrequency(struct gemtek_pci *card, unsigned long frequency)
-{
- int i;
- u32 value = frequency / 200 + 856;
- u16 mask = 0x8000;
- u8 last_byte;
- u32 port = card->iobase;
-
- mutex_lock(&card->lock);
- card->current_frequency = frequency;
- last_byte = gemtek_pci_out(0x06, port);
-
- i = 0;
- do {
- gemtek_pci_nil(port, &last_byte);
- i++;
- } while (i < 9);
-
- i = 0;
- do {
- gemtek_pci_cmd(value & mask, port, &last_byte);
- mask >>= 1;
- i++;
- } while (i < 16);
-
- outw(0x10, port);
- mutex_unlock(&card->lock);
-}
-
-
-static void gemtek_pci_mute(struct gemtek_pci *card)
-{
- mutex_lock(&card->lock);
- outb(0x1f, card->iobase);
- card->mute = true;
- mutex_unlock(&card->lock);
-}
-
-static void gemtek_pci_unmute(struct gemtek_pci *card)
-{
- if (card->mute) {
- gemtek_pci_setfrequency(card, card->current_frequency);
- card->mute = false;
- }
-}
-
-static int gemtek_pci_getsignal(struct gemtek_pci *card)
-{
- int sig;
-
- mutex_lock(&card->lock);
- sig = (inb(card->iobase) & 0x08) ? 0 : 1;
- mutex_unlock(&card->lock);
- return sig;
-}
-
-static int vidioc_querycap(struct file *file, void *priv,
- struct v4l2_capability *v)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- strlcpy(v->driver, "radio-gemtek-pci", sizeof(v->driver));
- strlcpy(v->card, "GemTek PCI Radio", sizeof(v->card));
- snprintf(v->bus_info, sizeof(v->bus_info), "PCI:%s", pci_name(card->pdev));
- v->version = RADIO_VERSION;
- v->capabilities = V4L2_CAP_TUNER | V4L2_CAP_RADIO;
- return 0;
-}
-
-static int vidioc_g_tuner(struct file *file, void *priv,
- struct v4l2_tuner *v)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- if (v->index > 0)
- return -EINVAL;
-
- strlcpy(v->name, "FM", sizeof(v->name));
- v->type = V4L2_TUNER_RADIO;
- v->rangelow = GEMTEK_PCI_RANGE_LOW;
- v->rangehigh = GEMTEK_PCI_RANGE_HIGH;
- v->rxsubchans = V4L2_TUNER_SUB_MONO;
- v->capability = V4L2_TUNER_CAP_LOW;
- v->audmode = V4L2_TUNER_MODE_MONO;
- v->signal = 0xffff * gemtek_pci_getsignal(card);
- return 0;
-}
-
-static int vidioc_s_tuner(struct file *file, void *priv,
- struct v4l2_tuner *v)
-{
- return v->index ? -EINVAL : 0;
-}
-
-static int vidioc_s_frequency(struct file *file, void *priv,
- struct v4l2_frequency *f)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- if (f->tuner != 0 || f->type != V4L2_TUNER_RADIO)
- return -EINVAL;
- if (f->frequency < GEMTEK_PCI_RANGE_LOW ||
- f->frequency > GEMTEK_PCI_RANGE_HIGH)
- return -EINVAL;
- gemtek_pci_setfrequency(card, f->frequency);
- card->mute = false;
- return 0;
-}
-
-static int vidioc_g_frequency(struct file *file, void *priv,
- struct v4l2_frequency *f)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- if (f->tuner != 0)
- return -EINVAL;
- f->type = V4L2_TUNER_RADIO;
- f->frequency = card->current_frequency;
- return 0;
-}
-
-static int vidioc_queryctrl(struct file *file, void *priv,
- struct v4l2_queryctrl *qc)
-{
- switch (qc->id) {
- case V4L2_CID_AUDIO_MUTE:
- return v4l2_ctrl_query_fill(qc, 0, 1, 1, 1);
- case V4L2_CID_AUDIO_VOLUME:
- return v4l2_ctrl_query_fill(qc, 0, 65535, 65535, 65535);
- }
- return -EINVAL;
-}
-
-static int vidioc_g_ctrl(struct file *file, void *priv,
- struct v4l2_control *ctrl)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- switch (ctrl->id) {
- case V4L2_CID_AUDIO_MUTE:
- ctrl->value = card->mute;
- return 0;
- case V4L2_CID_AUDIO_VOLUME:
- if (card->mute)
- ctrl->value = 0;
- else
- ctrl->value = 65535;
- return 0;
- }
- return -EINVAL;
-}
-
-static int vidioc_s_ctrl(struct file *file, void *priv,
- struct v4l2_control *ctrl)
-{
- struct gemtek_pci *card = video_drvdata(file);
-
- switch (ctrl->id) {
- case V4L2_CID_AUDIO_MUTE:
- if (ctrl->value)
- gemtek_pci_mute(card);
- else
- gemtek_pci_unmute(card);
- return 0;
- case V4L2_CID_AUDIO_VOLUME:
- if (ctrl->value)
- gemtek_pci_unmute(card);
- else
- gemtek_pci_mute(card);
- return 0;
- }
- return -EINVAL;
-}
-
-static int vidioc_g_input(struct file *filp, void *priv, unsigned int *i)
-{
- *i = 0;
- return 0;
-}
-
-static int vidioc_s_input(struct file *filp, void *priv, unsigned int i)
-{
- return i ? -EINVAL : 0;
-}
-
-static int vidioc_g_audio(struct file *file, void *priv,
- struct v4l2_audio *a)
-{
- a->index = 0;
- strlcpy(a->name, "Radio", sizeof(a->name));
- a->capability = V4L2_AUDCAP_STEREO;
- return 0;
-}
-
-static int vidioc_s_audio(struct file *file, void *priv,
- struct v4l2_audio *a)
-{
- return a->index ? -EINVAL : 0;
-}
-
-enum {
- GEMTEK_PR103
-};
-
-static char *card_names[] __devinitdata = {
- "GEMTEK_PR103"
-};
-
-static struct pci_device_id gemtek_pci_id[] =
-{
- { PCI_VENDOR_ID_GEMTEK, PCI_DEVICE_ID_GEMTEK_PR103,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, GEMTEK_PR103 },
- { 0 }
-};
-
-MODULE_DEVICE_TABLE(pci, gemtek_pci_id);
-
-static const struct v4l2_file_operations gemtek_pci_fops = {
- .owner = THIS_MODULE,
- .unlocked_ioctl = video_ioctl2,
-};
-
-static const struct v4l2_ioctl_ops gemtek_pci_ioctl_ops = {
- .vidioc_querycap = vidioc_querycap,
- .vidioc_g_tuner = vidioc_g_tuner,
- .vidioc_s_tuner = vidioc_s_tuner,
- .vidioc_g_audio = vidioc_g_audio,
- .vidioc_s_audio = vidioc_s_audio,
- .vidioc_g_input = vidioc_g_input,
- .vidioc_s_input = vidioc_s_input,
- .vidioc_g_frequency = vidioc_g_frequency,
- .vidioc_s_frequency = vidioc_s_frequency,
- .vidioc_queryctrl = vidioc_queryctrl,
- .vidioc_g_ctrl = vidioc_g_ctrl,
- .vidioc_s_ctrl = vidioc_s_ctrl,
-};
-
-static int __devinit gemtek_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pci_id)
-{
- struct gemtek_pci *card;
- struct v4l2_device *v4l2_dev;
- int res;
-
- card = kzalloc(sizeof(struct gemtek_pci), GFP_KERNEL);
- if (card == NULL) {
- dev_err(&pdev->dev, "out of memory\n");
- return -ENOMEM;
- }
-
- v4l2_dev = &card->v4l2_dev;
- mutex_init(&card->lock);
- card->pdev = pdev;
-
- strlcpy(v4l2_dev->name, "gemtek_pci", sizeof(v4l2_dev->name));
-
- res = v4l2_device_register(&pdev->dev, v4l2_dev);
- if (res < 0) {
- v4l2_err(v4l2_dev, "Could not register v4l2_device\n");
- kfree(card);
- return res;
- }
-
- if (pci_enable_device(pdev))
- goto err_pci;
-
- card->iobase = pci_resource_start(pdev, 0);
- card->length = pci_resource_len(pdev, 0);
-
- if (request_region(card->iobase, card->length, card_names[pci_id->driver_data]) == NULL) {
- v4l2_err(v4l2_dev, "i/o port already in use\n");
- goto err_pci;
- }
-
- strlcpy(card->vdev.name, v4l2_dev->name, sizeof(card->vdev.name));
- card->vdev.v4l2_dev = v4l2_dev;
- card->vdev.fops = &gemtek_pci_fops;
- card->vdev.ioctl_ops = &gemtek_pci_ioctl_ops;
- card->vdev.release = video_device_release_empty;
- video_set_drvdata(&card->vdev, card);
-
- gemtek_pci_mute(card);
-
- if (video_register_device(&card->vdev, VFL_TYPE_RADIO, nr_radio) < 0)
- goto err_video;
-
- v4l2_info(v4l2_dev, "Gemtek PCI Radio (rev. %d) found at 0x%04x-0x%04x.\n",
- pdev->revision, card->iobase, card->iobase + card->length - 1);
-
- return 0;
-
-err_video:
- release_region(card->iobase, card->length);
-
-err_pci:
- v4l2_device_unregister(v4l2_dev);
- kfree(card);
- return -ENODEV;
-}
-
-static void __devexit gemtek_pci_remove(struct pci_dev *pdev)
-{
- struct v4l2_device *v4l2_dev = dev_get_drvdata(&pdev->dev);
- struct gemtek_pci *card = to_gemtek_pci(v4l2_dev);
-
- video_unregister_device(&card->vdev);
- v4l2_device_unregister(v4l2_dev);
-
- release_region(card->iobase, card->length);
-
- if (mx)
- gemtek_pci_mute(card);
-
- kfree(card);
-}
-
-static struct pci_driver gemtek_pci_driver = {
- .name = "gemtek_pci",
- .id_table = gemtek_pci_id,
- .probe = gemtek_pci_probe,
- .remove = __devexit_p(gemtek_pci_remove),
-};
-
-static int __init gemtek_pci_init(void)
-{
- return pci_register_driver(&gemtek_pci_driver);
-}
-
-static void __exit gemtek_pci_exit(void)
-{
- pci_unregister_driver(&gemtek_pci_driver);
-}
-
-module_init(gemtek_pci_init);
-module_exit(gemtek_pci_exit);
/* TEA5757 pin mappings */
static const int clk = 1, data = 2, wren = 4, mo_st = 8, power = 16;
-#define FREQ_LO (50 * 16000)
-#define FREQ_HI (150 * 16000)
+#define FREQ_LO (87 * 16000)
+#define FREQ_HI (108 * 16000)
#define FREQ_IF 171200 /* 10.7*16000 */
#define FREQ_STEP 200 /* 12.5*16 */
.read = wl1273_fm_fops_read,
.write = wl1273_fm_fops_write,
.poll = wl1273_fm_fops_poll,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
.open = wl1273_fm_fops_open,
.release = wl1273_fm_fops_release,
};
goto done;
/* sysconfig 1 */
- radio->registers[SYSCONFIG1] = SYSCONFIG1_DE;
+ radio->registers[SYSCONFIG1] =
+ (de << 11) & SYSCONFIG1_DE; /* DE*/
retval = si470x_set_register(radio, SYSCONFIG1);
if (retval < 0)
goto done;
/* driver constants */
strcpy(tuner->name, "FM");
tuner->type = V4L2_TUNER_RADIO;
-#if defined(CONFIG_USB_SI470X) || defined(CONFIG_USB_SI470X_MODULE)
tuner->capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_STEREO |
V4L2_TUNER_CAP_RDS | V4L2_TUNER_CAP_RDS_BLOCK_IO;
-#else
- tuner->capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_STEREO;
-#endif
/* range limits */
switch ((radio->registers[SYSCONFIG2] & SYSCONFIG2_BAND) >> 6) {
tuner->rxsubchans = V4L2_TUNER_SUB_MONO;
else
tuner->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_STEREO;
-#if defined(CONFIG_USB_SI470X) || defined(CONFIG_USB_SI470X_MODULE)
/* If there is a reliable method of detecting an RDS channel,
then this code should check for that before setting this
RDS subchannel. */
tuner->rxsubchans |= V4L2_TUNER_SUB_RDS;
-#endif
/* mono/stereo selector */
if ((radio->registers[POWERCFG] & POWERCFG_MONO) == 0)
select_timeout:
if (dev->rx_fan_input_inuse) {
- dev->rdev->rx_resolution = MS_TO_NS(ENE_FW_SAMPLE_PERIOD_FAN);
+ dev->rdev->rx_resolution = US_TO_NS(ENE_FW_SAMPLE_PERIOD_FAN);
/* Fan input doesn't support timeouts, it just ends the
input with a maximum sample */
dev->rdev->min_timeout = dev->rdev->max_timeout =
- MS_TO_NS(ENE_FW_SMPL_BUF_FAN_MSK *
+ US_TO_NS(ENE_FW_SMPL_BUF_FAN_MSK *
ENE_FW_SAMPLE_PERIOD_FAN);
} else {
- dev->rdev->rx_resolution = MS_TO_NS(sample_period);
+ dev->rdev->rx_resolution = US_TO_NS(sample_period);
/* Theoretically the timeout is unlimited, but we cap it
* because it was seen that on one device, it
* would stop sending spaces after around 250 msec.
* Besides, this is close to 2^32 anyway and timeout is u32.
*/
- dev->rdev->min_timeout = MS_TO_NS(127 * sample_period);
- dev->rdev->max_timeout = MS_TO_NS(200000);
+ dev->rdev->min_timeout = US_TO_NS(127 * sample_period);
+ dev->rdev->max_timeout = US_TO_NS(200000);
}
if (dev->hw_learning_and_tx_capable)
- dev->rdev->tx_resolution = MS_TO_NS(sample_period);
+ dev->rdev->tx_resolution = US_TO_NS(sample_period);
if (dev->rdev->timeout > dev->rdev->max_timeout)
dev->rdev->timeout = dev->rdev->max_timeout;
dbg("RX: %d (%s)", hw_sample, pulse ? "pulse" : "space");
- ev.duration = MS_TO_NS(hw_sample);
+ ev.duration = US_TO_NS(hw_sample);
ev.pulse = pulse;
ir_raw_event_store_with_filter(dev->rdev, &ev);
}
dev->learning_mode_enabled = learning_mode_force;
/* Set reasonable default timeout */
- dev->rdev->timeout = MS_TO_NS(150000);
+ dev->rdev->timeout = US_TO_NS(150000);
}
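
The ene_ir conversions above (and the matching mceusb and streamzap hunks below) drop the driver-local MS_TO_NS() macro, which despite its name only multiplied by 1000, in favour of the core US_TO_NS(): the hardware sample lengths and timeouts are in microseconds, while rc-core stores ir_raw_event durations and rdev->timeout in nanoseconds. A quick illustration of the arithmetic with made-up numbers (the 75 µs sample period is hypothetical, not a hardware value):

#include <linux/types.h>

/* Sketch: microsecond hardware samples vs. nanosecond rc-core durations. */
#define SAMPLE_US_TO_NS(us)     ((us) * 1000)

/* e.g. a 127-sample timeout at a 75 us sample period:
 * 127 * 75 us = 9525 us -> 9525000 ns */
static const u32 example_timeout_ns = SAMPLE_US_TO_NS(127 * 75);
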
/* Upload all hardware settings at once. Used at load and resume time */
/* validate resources */
error = -ENODEV;
+ /* init these to -1, as 0 is valid for both */
+ dev->hw_io = -1;
+ dev->irq = -1;
+
if (!pnp_port_valid(pnp_dev, 0) ||
pnp_port_len(pnp_dev, 0) < ENE_IO_SIZE)
goto error;
rdev->input_name = "ENE eHome Infrared Remote Transceiver";
}
+ dev->rdev = rdev;
+
ene_rx_setup_hw_buffer(dev);
ene_setup_default_settings(dev);
ene_setup_hw_settings(dev);
if (error < 0)
goto error;
- dev->rdev = rdev;
ene_notice("driver has been succesfully loaded");
return 0;
error:
#define dbg_verbose(format, ...) __dbg(2, format, ## __VA_ARGS__)
#define dbg_regs(format, ...) __dbg(3, format, ## __VA_ARGS__)
-#define MS_TO_NS(msec) ((msec) * 1000)
-
struct ene_device {
struct pnp_dev *pnp_dev;
struct rc_dev *rdev;
int retval;
struct imon_context *ictx = rc->priv;
struct device *dev = ictx->dev;
- bool pad_mouse;
unsigned char ir_proto_packet[] = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x86 };
case RC_TYPE_RC6:
dev_dbg(dev, "Configuring IR receiver for MCE protocol\n");
ir_proto_packet[0] = 0x01;
- pad_mouse = false;
break;
case RC_TYPE_UNKNOWN:
case RC_TYPE_OTHER:
dev_dbg(dev, "Configuring IR receiver for iMON protocol\n");
- if (pad_stabilize && !nomouse)
- pad_mouse = true;
- else {
+ if (!pad_stabilize)
dev_dbg(dev, "PAD stabilize functionality disabled\n");
- pad_mouse = false;
- }
/* ir_proto_packet[0] = 0x00; // already the default */
rc_type = RC_TYPE_OTHER;
break;
default:
dev_warn(dev, "Unsupported IR protocol specified, overriding "
"to iMON IR protocol\n");
- if (pad_stabilize && !nomouse)
- pad_mouse = true;
- else {
+ if (!pad_stabilize)
dev_dbg(dev, "PAD stabilize functionality disabled\n");
- pad_mouse = false;
- }
/* ir_proto_packet[0] = 0x00; // already the default */
rc_type = RC_TYPE_OTHER;
break;
goto out;
ictx->rc_type = rc_type;
- ictx->pad_mouse = pad_mouse;
+ ictx->pad_mouse = false;
out:
return retval;
spin_unlock_irqrestore(&ictx->kc_lock, flags);
return;
} else {
- ictx->pad_mouse = 0;
+ ictx->pad_mouse = false;
dev_dbg(dev, "mouse mode disabled, passing key value\n");
}
}
printk(KERN_CONT " (id 0x%02x)\n", ffdc_cfg_byte);
ictx->display_type = detected_display_type;
- ictx->rdev->allowed_protos = allowed_protos;
ictx->rc_type = allowed_protos;
}
rdev->allowed_protos = RC_TYPE_OTHER | RC_TYPE_RC6; /* iMON PAD or MCE */
rdev->change_protocol = imon_ir_change_protocol;
rdev->driver_name = MOD_NAME;
- if (ictx->rc_type == RC_TYPE_RC6)
- rdev->map_name = RC_MAP_IMON_MCE;
- else
- rdev->map_name = RC_MAP_IMON_PAD;
/* Enable front-panel buttons and/or knobs */
memcpy(ictx->usb_tx_buf, &fp_packet, sizeof(fp_packet));
if (ret)
dev_info(ictx->dev, "panel buttons/knobs setup failed\n");
- if (ictx->product == 0xffdc)
+ if (ictx->product == 0xffdc) {
imon_get_ffdc_type(ictx);
+ rdev->allowed_protos = ictx->rc_type;
+ }
imon_set_display_type(ictx);
+ if (ictx->rc_type == RC_TYPE_RC6)
+ rdev->map_name = RC_MAP_IMON_MCE;
+ else
+ rdev->map_name = RC_MAP_IMON_PAD;
+
ret = rc_register_device(rdev);
if (ret < 0) {
dev_err(ictx->dev, "remote input dev register failed\n");
goto find_endpoint_failed;
}
- ictx->idev = imon_init_idev(ictx);
- if (!ictx->idev) {
- dev_err(dev, "%s: input device setup failed\n", __func__);
- goto idev_setup_failed;
- }
-
- ictx->rdev = imon_init_rdev(ictx);
- if (!ictx->rdev) {
- dev_err(dev, "%s: rc device setup failed\n", __func__);
- goto rdev_setup_failed;
- }
-
usb_fill_int_urb(ictx->rx_urb_intf0, ictx->usbdev_intf0,
usb_rcvintpipe(ictx->usbdev_intf0,
ictx->rx_endpoint_intf0->bEndpointAddress),
goto urb_submit_failed;
}
+ ictx->idev = imon_init_idev(ictx);
+ if (!ictx->idev) {
+ dev_err(dev, "%s: input device setup failed\n", __func__);
+ goto idev_setup_failed;
+ }
+
+ ictx->rdev = imon_init_rdev(ictx);
+ if (!ictx->rdev) {
+ dev_err(dev, "%s: rc device setup failed\n", __func__);
+ goto rdev_setup_failed;
+ }
+
return ictx;
-urb_submit_failed:
- rc_unregister_device(ictx->rdev);
rdev_setup_failed:
input_unregister_device(ictx->idev);
idev_setup_failed:
+ usb_kill_urb(ictx->rx_urb_intf0);
+urb_submit_failed:
find_endpoint_failed:
mutex_unlock(&ictx->lock);
usb_free_urb(tx_urb);
-/* ir-lirc-codec.c - ir-core to classic lirc interface bridge
+/* ir-lirc-codec.c - rc-core to classic lirc interface bridge
*
* Copyright (C) 2010 by Jarod Wilson <jarod@redhat.com>
*
/* Carrier reports */
if (ev.carrier_report) {
sample = LIRC_FREQUENCY(ev.carrier);
+ IR_dprintk(2, "carrier report (freq: %d)\n", sample);
/* Packet end */
} else if (ev.timeout) {
return 0;
sample = LIRC_TIMEOUT(ev.duration / 1000);
+ IR_dprintk(2, "timeout report (duration: %d)\n", sample);
/* Normal sample */
} else {
sample = ev.pulse ? LIRC_PULSE(ev.duration / 1000) :
LIRC_SPACE(ev.duration / 1000);
+ IR_dprintk(2, "delivering %uus %s to lirc_dev\n",
+ TO_US(ev.duration), TO_STR(ev.pulse));
}
lirc_buffer_write(dev->raw->lirc.drv->rbuf,
/* used internally by the sysfs interface */
u64
-ir_raw_get_allowed_protocols()
+ir_raw_get_allowed_protocols(void)
{
u64 protocols;
mutex_lock(&ir_raw_handler_lock);
static struct rc_map_table dib0700_nec_table[] = {
/* Key codes for the Pixelview SBTVD remote */
- { 0x8613, KEY_MUTE },
- { 0x8612, KEY_POWER },
- { 0x8601, KEY_1 },
- { 0x8602, KEY_2 },
- { 0x8603, KEY_3 },
- { 0x8604, KEY_4 },
- { 0x8605, KEY_5 },
- { 0x8606, KEY_6 },
- { 0x8607, KEY_7 },
- { 0x8608, KEY_8 },
- { 0x8609, KEY_9 },
- { 0x8600, KEY_0 },
- { 0x860d, KEY_CHANNELUP },
- { 0x8619, KEY_CHANNELDOWN },
- { 0x8610, KEY_VOLUMEUP },
- { 0x860c, KEY_VOLUMEDOWN },
+ { 0x866b13, KEY_MUTE },
+ { 0x866b12, KEY_POWER },
+ { 0x866b01, KEY_1 },
+ { 0x866b02, KEY_2 },
+ { 0x866b03, KEY_3 },
+ { 0x866b04, KEY_4 },
+ { 0x866b05, KEY_5 },
+ { 0x866b06, KEY_6 },
+ { 0x866b07, KEY_7 },
+ { 0x866b08, KEY_8 },
+ { 0x866b09, KEY_9 },
+ { 0x866b00, KEY_0 },
+ { 0x866b0d, KEY_CHANNELUP },
+ { 0x866b19, KEY_CHANNELDOWN },
+ { 0x866b10, KEY_VOLUMEUP },
+ { 0x866b0c, KEY_VOLUMEDOWN },
- { 0x860a, KEY_CAMERA },
- { 0x860b, KEY_ZOOM },
- { 0x861b, KEY_BACKSPACE },
- { 0x8615, KEY_ENTER },
+ { 0x866b0a, KEY_CAMERA },
+ { 0x866b0b, KEY_ZOOM },
+ { 0x866b1b, KEY_BACKSPACE },
+ { 0x866b15, KEY_ENTER },
- { 0x861d, KEY_UP },
- { 0x861e, KEY_DOWN },
- { 0x860e, KEY_LEFT },
- { 0x860f, KEY_RIGHT },
+ { 0x866b1d, KEY_UP },
+ { 0x866b1e, KEY_DOWN },
+ { 0x866b0e, KEY_LEFT },
+ { 0x866b0f, KEY_RIGHT },
- { 0x8618, KEY_RECORD },
- { 0x861a, KEY_STOP },
+ { 0x866b18, KEY_RECORD },
+ { 0x866b1a, KEY_STOP },
/* Key codes for the EvolutePC TVWay+ remote */
{ 0x7a00, KEY_MENU },
*
* Copyright (c) 2010 by Jarod Wilson <jarod@redhat.com>
*
+ * See http://mediacenterguides.com/book/export/html/31 for details on
+ * key mappings.
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
{ 0x800f0426, KEY_EPG }, /* Guide */
{ 0x800f0427, KEY_ZOOM }, /* Aspect */
+ { 0x800f0432, KEY_MODE }, /* Visualization */
+ { 0x800f0433, KEY_PRESENTATION }, /* Slide Show */
+ { 0x800f0434, KEY_EJECTCD },
{ 0x800f043a, KEY_BRIGHTNESSUP },
{ 0x800f0446, KEY_TV },
#define USB_BUFLEN 32 /* USB reception buffer length */
#define USB_CTRL_MSG_SZ 2 /* Size of usb ctrl msg on gen1 hw */
#define MCE_G1_INIT_MSGS 40 /* Init messages on gen1 hw to throw out */
-#define MS_TO_NS(msec) ((msec) * 1000)
/* MCE constants */
#define MCE_CMDBUF_SIZE 384 /* MCE Command buffer length */
switch (ir->buf_in[index]) {
/* 2-byte return value commands */
case MCE_CMD_S_TIMEOUT:
- ir->rc->timeout = MS_TO_NS((hi << 8 | lo) / 2);
+ ir->rc->timeout = US_TO_NS((hi << 8 | lo) / 2);
break;
/* 1-byte return value commands */
break;
case PARSE_IRDATA:
ir->rem--;
+ init_ir_raw_event(&rawir);
rawir.pulse = ((ir->buf_in[i] & MCE_PULSE_BIT) != 0);
rawir.duration = (ir->buf_in[i] & MCE_PULSE_MASK)
- * MS_TO_NS(MCE_TIME_UNIT);
+ * US_TO_NS(MCE_TIME_UNIT);
dev_dbg(ir->dev, "Storing %s with duration %d\n",
rawir.pulse ? "pulse" : "space",
i, ir->rem + 1, false);
if (ir->rem)
ir->parser_state = PARSE_IRDATA;
+ else
+ ir_raw_event_reset(ir->rc);
break;
}
rc->priv = ir;
rc->driver_type = RC_DRIVER_IR_RAW;
rc->allowed_protos = RC_TYPE_ALL;
- rc->timeout = MS_TO_NS(1000);
+ rc->timeout = US_TO_NS(1000);
if (!ir->flags.no_tx) {
rc->s_tx_mask = mceusb_set_tx_mask;
rc->s_tx_carrier = mceusb_set_tx_carrier;
return 0;
}
- carrier = (count * 1000000) / duration;
+ carrier = MS_TO_NS(count) / duration;
if ((carrier > MAX_CARRIER) || (carrier < MIN_CARRIER))
nvt_dbg("WTF? Carrier frequency out of range!");
sample = nvt->buf[i];
rawir.pulse = ((sample & BUF_PULSE_BIT) != 0);
- rawir.duration = (sample & BUF_LEN_MASK)
- * SAMPLE_PERIOD * 1000;
+ rawir.duration = US_TO_NS((sample & BUF_LEN_MASK)
+ * SAMPLE_PERIOD);
if ((sample & BUF_LEN_MASK) == BUF_LEN_MASK) {
if (nvt->rawir.pulse == rawir.pulse)
index = ir_lookup_by_scancode(rc_map, scancode);
}
- if (index >= rc_map->len) {
- if (!(ke->flags & INPUT_KEYMAP_BY_INDEX))
- IR_dprintk(1, "unknown key for scancode 0x%04x\n",
- scancode);
+ if (index < rc_map->len) {
+ entry = &rc_map->scan[index];
+
+ ke->index = index;
+ ke->keycode = entry->keycode;
+ ke->len = sizeof(entry->scancode);
+ memcpy(ke->scancode, &entry->scancode, sizeof(entry->scancode));
+
+ } else if (!(ke->flags & INPUT_KEYMAP_BY_INDEX)) {
+ /*
+ * We do not really know the valid range of scancodes
+ * so let's respond with KEY_RESERVED to anything we
+ * do not have mapping for [yet].
+ */
+ ke->index = index;
+ ke->keycode = KEY_RESERVED;
+ } else {
retval = -EINVAL;
goto out;
}
- entry = &rc_map->scan[index];
-
- ke->index = index;
- ke->keycode = entry->keycode;
- ke->len = sizeof(entry->scancode);
- memcpy(ke->scancode, &entry->scancode, sizeof(entry->scancode));
-
retval = 0;
out:
sz->signal_start.tv_usec -
sz->signal_last.tv_usec);
rawir.duration -= sz->sum;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
rawir.duration &= IR_MAX_DURATION;
}
sz_push(sz, rawir);
rawir.duration = ((int) value) * SZ_RESOLUTION;
rawir.duration += SZ_RESOLUTION / 2;
sz->sum += rawir.duration;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
rawir.duration &= IR_MAX_DURATION;
sz_push(sz, rawir);
}
rawir.duration = ((int) value) * SZ_RESOLUTION;
rawir.duration += SZ_RESOLUTION / 2;
sz->sum += rawir.duration;
- rawir.duration *= 1000;
+ rawir.duration = US_TO_NS(rawir.duration);
sz_push(sz, rawir);
}
if (sz->timeout_enabled)
sz_push(sz, rawir);
ir_raw_event_handle(sz->rdev);
+ ir_raw_event_reset(sz->rdev);
} else {
sz_push_full_space(sz, sz->buf_in[i]);
}
}
}
+ ir_raw_event_handle(sz->rdev);
usb_submit_urb(urb, GFP_ATOMIC);
return;
sz->decoder_state = PulseSpace;
/* FIXME: don't yet have a way to set this */
sz->timeout_enabled = true;
- sz->rdev->timeout = (((SZ_TIMEOUT * SZ_RESOLUTION * 1000) &
+ sz->rdev->timeout = ((US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION) &
IR_MAX_DURATION) | 0x03000000);
#if 0
/* not yet supported, depends on patches from maxim */
/* see also: LIRC_GET_REC_RESOLUTION and LIRC_SET_REC_TIMEOUT */
- sz->min_timeout = SZ_TIMEOUT * SZ_RESOLUTION * 1000;
- sz->max_timeout = SZ_TIMEOUT * SZ_RESOLUTION * 1000;
+ sz->min_timeout = US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION);
+ sz->max_timeout = US_TO_NS(SZ_TIMEOUT * SZ_RESOLUTION);
#endif
do_gettimeofday(&sz->signal_start);
config VIDEO_HELPER_CHIPS_AUTO
bool "Autoselect pertinent encoders/decoders and other helper chips"
- default y if !EMBEDDED
+ default y if !EXPERT
---help---
Most video cards may require additional modules to encode or
decode audio/video standards. This option will autoselect
To compile this driver as a module, choose M here: the
module will be called tda9840.
-config VIDEO_TDA9875
- tristate "Philips TDA9875 audio processor"
- depends on VIDEO_V4L2 && I2C
- ---help---
- Support for tda9875 audio decoder chip found on some bt8xx boards.
-
- To compile this driver as a module, choose M here: the
- module will be called tda9875.
-
config VIDEO_TEA6415C
tristate "Philips TEA6415C audio processor"
depends on I2C
obj-$(CONFIG_VIDEO_TUNER) += tuner.o
obj-$(CONFIG_VIDEO_TVAUDIO) += tvaudio.o
obj-$(CONFIG_VIDEO_TDA7432) += tda7432.o
-obj-$(CONFIG_VIDEO_TDA9875) += tda9875.o
obj-$(CONFIG_VIDEO_SAA6588) += saa6588.o
obj-$(CONFIG_VIDEO_TDA9840) += tda9840.o
obj-$(CONFIG_VIDEO_TEA6415C) += tea6415c.o
return v4l2_chip_ident_i2c_client(client, chip, V4L2_IDENT_ADV7175, 0);
}
+static int adv7175_s_power(struct v4l2_subdev *sd, int on)
+{
+ if (on)
+ adv7175_write(sd, 0x01, 0x00);
+ else
+ adv7175_write(sd, 0x01, 0x78);
+
+ return 0;
+}
+
/* ----------------------------------------------------------------------- */
static const struct v4l2_subdev_core_ops adv7175_core_ops = {
.g_chip_ident = adv7175_g_chip_ident,
.init = adv7175_init,
+ .s_power = adv7175_s_power,
};
static const struct v4l2_subdev_video_ops adv7175_video_ops = {
.gpiomute = 0x1800,
.audio_mode_gpio= fv2000s_audio,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.needs_tvaudio = 1,
.pll = PLL_28,
.tuner_type = TUNER_PHILIPS_PAL,
.gpiomute = 0x09,
.needs_tvaudio = 1,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.pll = PLL_28,
.tuner_type = TUNER_PHILIPS_PAL,
.tuner_addr = ADDR_UNSET,
.gpiomask2 = 0x07ff,
.muxsel = MUXSEL(3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3),
.no_msp34xx = 1,
- .no_tda9875 = 1,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.muxsel_hook = rv605_muxsel,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY1x0_848] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY1x1] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY1x1_SVID] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY2xx] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY2x0] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY500] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
[BTTV_BOARD_OSPREY540] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1, /* must avoid, conflicts with the bt860 */
},
[BTTV_BOARD_IDS_EAGLE] = {
.muxsel = MUXSEL(2, 2, 2, 2),
.muxsel_hook = eagle_muxsel,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.pll = PLL_28,
},
[BTTV_BOARD_PINNACLESAT] = {
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.muxsel = MUXSEL(3, 1),
.pll = PLL_28,
.svhs = 2,
.gpiomask = 0,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.muxsel = MUXSEL(2, 0, 1),
.pll = PLL_28,
/* Tuner, CVid, SVid, CVid over SVid connector */
.muxsel = MUXSEL(2, 3, 1, 1),
.gpiomask = 0,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.tuner_type = TUNER_PHILIPS_PAL_I,
.tuner_addr = ADDR_UNSET,
.muxsel = MUXSEL(2,2,2,2, 3,3,3,3, 1,1,1,1, 0,0,0,0),
.muxsel_hook = xguard_muxsel,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
},
.svhs = NO_SVHS,
.muxsel = MUXSEL(2, 3, 1, 0),
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
.tuner_type = TUNER_ABSENT,
.svhs = NO_SVHS, /* card has no svhs */
.needs_tvaudio = 0,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.gpiomask = 0x00,
.muxsel = MUXSEL(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
[BTTV_BOARD_TWINHAN_DST] = {
.name = "Twinhan DST + clones",
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
/* Vid In, SVid In, Vid over SVid in connector */
.muxsel = MUXSEL(3, 1, 1, 3),
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.svhs = NO_SVHS,
.muxsel = MUXSEL(2, 3, 1, 0),
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.needs_tvaudio = 0,
.tuner_type = TUNER_ABSENT,
.gpiomask = 0,
.gpiomask2 = 0x3C<<16,/*Set the GPIO[18]->GPIO[21] as output pin.==> drive the video inputs through analog multiplexers*/
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
/*878A input is always MUX0, see above.*/
.muxsel = MUXSEL(2, 2, 2, 2),
.tuner_type = TUNER_TEMIC_PAL,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
},
[BTTV_BOARD_AVDVBT_771] = {
/* Wolfram Joost <wojo@frokaschwei.de> */
.tuner_addr = ADDR_UNSET,
.muxsel = MUXSEL(3, 3),
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
.has_dvb = 1,
.svhs = 1,
.muxsel = MUXSEL(3, 1, 2, 0), /* Comp0, S-Video, ?, ? */
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
.tuner_type = TUNER_ABSENT,
/* Chris Pascoe <c.pascoe@itee.uq.edu.au> */
.name = "DViCO FusionHDTV DVB-T Lite",
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
.no_video = 1,
.muxsel = MUXSEL(2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2),
.pll = PLL_28,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.pll = PLL_28,
.no_msp34xx = 1,
.no_tda7432 = 1,
- .no_tda9875 = 1,
.muxsel_hook = kodicom4400r_muxsel,
},
[BTTV_BOARD_KODICOM_4400R_SL] = {
.pll = PLL_28,
.no_msp34xx = 1,
.no_tda7432 = 1,
- .no_tda9875 = 1,
.muxsel_hook = kodicom4400r_muxsel,
},
/* ---- card 0x86---------------------------------- */
.gpiomux = { 0x00400005, 0, 0x00000001, 0 },
.gpiomute = 0x00c00007,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.has_dvb = 1,
},
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
/* ---- card 0x8d ---------------------------------- */
.muxsel = MUXSEL(2, 3, 1, 1),
.gpiomux = { 100000, 100002, 100002, 100000 },
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.pll = PLL_28,
.tuner_type = TUNER_TNF_5335MF,
.gpiomask = 0x0f, /* old: 7 */
.muxsel = MUXSEL(0, 1, 3, 2), /* Composite 0-3 */
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
.tuner_type = TUNER_ABSENT,
.tuner_addr = ADDR_UNSET,
.gpiomux = { 0x00400005, 0, 0x00000001, 0 },
.gpiomute = 0x00c00007,
.no_msp34xx = 1,
- .no_tda9875 = 1,
.no_tda7432 = 1,
},
/* ---- card 0x95---------------------------------- */
.pll = PLL_28,
.no_msp34xx = 1,
.no_tda7432 = 1,
- .no_tda9875 = 1,
.muxsel_hook = gv800s_muxsel,
},
[BTTV_BOARD_GEOVISION_GV800S_SL] = {
.pll = PLL_28,
.no_msp34xx = 1,
.no_tda7432 = 1,
- .no_tda9875 = 1,
.muxsel_hook = gv800s_muxsel,
},
[BTTV_BOARD_PV183] = {
/* i2c audio flags */
unsigned int no_msp34xx:1;
- unsigned int no_tda9875:1;
unsigned int no_tda7432:1;
unsigned int needs_tvaudio:1;
unsigned int msp34xx_alt:1;
.min_width = 320,
.min_height = 240,
};
+ struct i2c_board_info ov7670_info = {
+ .type = "ov7670",
+ .addr = 0x42,
+ .platform_data = &sensor_cfg,
+ };
/*
* Start putting together one of our big camera structures.
if (dmi_check_system(olpc_xo1_dmi))
sensor_cfg.clock_speed = 45;
- cam->sensor_addr = 0x42;
- cam->sensor = v4l2_i2c_new_subdev_cfg(&cam->v4l2_dev, &cam->i2c_adapter,
- "ov7670", 0, &sensor_cfg, cam->sensor_addr, NULL);
+ cam->sensor_addr = ov7670_info.addr;
+ cam->sensor = v4l2_i2c_new_subdev_board(&cam->v4l2_dev, &cam->i2c_adapter,
+ &ov7670_info, NULL);
if (cam->sensor == NULL) {
ret = -ENODEV;
goto out_smbus;
struct camera_data {
/* locks */
- struct mutex busy_lock; /* guard against SMP multithreading */
+ struct mutex v4l2_lock; /* serialize file operations */
struct v4l2_prio_state prio;
/* camera status */
cam->present = 1;
- mutex_init(&cam->busy_lock);
+ mutex_init(&cam->v4l2_lock);
init_waitqueue_head(&cam->wq_stream);
return cam;
char __user *buf, unsigned long count, int noblock)
{
struct framebuf *frame;
- if (!count) {
+
+ if (!count)
return 0;
- }
if (!buf) {
ERR("%s: buffer NULL\n",__func__);
return -EINVAL;
}
- /* make this _really_ smp and multithread-safe */
- if (mutex_lock_interruptible(&cam->busy_lock))
- return -ERESTARTSYS;
-
if (!cam->present) {
LOG("%s: camera removed\n",__func__);
- mutex_unlock(&cam->busy_lock);
return 0; /* EOF */
}
- if(!cam->streaming) {
+ if (!cam->streaming) {
/* Start streaming */
cpia2_usb_stream_start(cam,
cam->params.camera_state.stream_mode);
/* Copy cam->curbuff in case it changes while we're processing */
frame = cam->curbuff;
if (noblock && frame->status != FRAME_READY) {
- mutex_unlock(&cam->busy_lock);
return -EAGAIN;
}
- if(frame->status != FRAME_READY) {
- mutex_unlock(&cam->busy_lock);
+ if (frame->status != FRAME_READY) {
+ mutex_unlock(&cam->v4l2_lock);
wait_event_interruptible(cam->wq_stream,
!cam->present ||
(frame = cam->curbuff)->status == FRAME_READY);
+ mutex_lock(&cam->v4l2_lock);
if (signal_pending(current))
return -ERESTARTSYS;
- /* make this _really_ smp and multithread-safe */
- if (mutex_lock_interruptible(&cam->busy_lock)) {
- return -ERESTARTSYS;
- }
- if(!cam->present) {
- mutex_unlock(&cam->busy_lock);
+ if (!cam->present)
return 0;
- }
}
/* copy data to user space */
- if (frame->length > count) {
- mutex_unlock(&cam->busy_lock);
+ if (frame->length > count)
return -EFAULT;
- }
- if (copy_to_user(buf, frame->data, frame->length)) {
- mutex_unlock(&cam->busy_lock);
+ if (copy_to_user(buf, frame->data, frame->length))
return -EFAULT;
- }
count = frame->length;
frame->status = FRAME_EMPTY;
- mutex_unlock(&cam->busy_lock);
return count;
}
{
unsigned int status=0;
- if(!cam) {
+ if (!cam) {
ERR("%s: Internal error, camera_data not found!\n",__func__);
return POLLERR;
}
- mutex_lock(&cam->busy_lock);
-
- if(!cam->present) {
- mutex_unlock(&cam->busy_lock);
+ if (!cam->present)
return POLLHUP;
- }
if(!cam->streaming) {
/* Start streaming */
cam->params.camera_state.stream_mode);
}
- mutex_unlock(&cam->busy_lock);
poll_wait(filp, &cam->wq_stream, wait);
- mutex_lock(&cam->busy_lock);
if(!cam->present)
status = POLLHUP;
else if(cam->curbuff->status == FRAME_READY)
status = POLLIN | POLLRDNORM;
- mutex_unlock(&cam->busy_lock);
return status;
}
DBG("mmap offset:%ld size:%ld\n", start_offset, size);
- /* make this _really_ smp-safe */
- if (mutex_lock_interruptible(&cam->busy_lock))
- return -ERESTARTSYS;
-
- if (!cam->present) {
- mutex_unlock(&cam->busy_lock);
+ if (!cam->present)
return -ENODEV;
- }
if (size > cam->frame_size*cam->num_frames ||
(start_offset % cam->frame_size) != 0 ||
- (start_offset+size > cam->frame_size*cam->num_frames)) {
- mutex_unlock(&cam->busy_lock);
+ (start_offset+size > cam->frame_size*cam->num_frames))
return -EINVAL;
- }
pos = ((unsigned long) (cam->frame_buffer)) + start_offset;
while (size > 0) {
page = kvirt_to_pa(pos);
- if (remap_pfn_range(vma, start, page >> PAGE_SHIFT, PAGE_SIZE, PAGE_SHARED)) {
- mutex_unlock(&cam->busy_lock);
+ if (remap_pfn_range(vma, start, page >> PAGE_SHIFT, PAGE_SIZE, PAGE_SHARED))
return -EAGAIN;
- }
start += PAGE_SIZE;
pos += PAGE_SIZE;
if (size > PAGE_SIZE)
}
cam->mmapped = true;
- mutex_unlock(&cam->busy_lock);
return 0;
}
-
static int cpia2_open(struct file *file)
{
struct camera_data *cam = video_drvdata(file);
- int retval = 0;
+ struct cpia2_fh *fh;
if (!cam) {
ERR("Internal error, camera_data not found!\n");
return -ENODEV;
}
- if(mutex_lock_interruptible(&cam->busy_lock))
- return -ERESTARTSYS;
-
- if(!cam->present) {
- retval = -ENODEV;
- goto err_return;
- }
+ if (!cam->present)
+ return -ENODEV;
- if (cam->open_count > 0) {
- goto skip_init;
- }
+ if (cam->open_count == 0) {
+ if (cpia2_allocate_buffers(cam))
+ return -ENOMEM;
- if (cpia2_allocate_buffers(cam)) {
- retval = -ENOMEM;
- goto err_return;
- }
+ /* reset the camera */
+ if (cpia2_reset_camera(cam) < 0)
+ return -EIO;
- /* reset the camera */
- if (cpia2_reset_camera(cam) < 0) {
- retval = -EIO;
- goto err_return;
+ cam->APP_len = 0;
+ cam->COM_len = 0;
}
- cam->APP_len = 0;
- cam->COM_len = 0;
-
-skip_init:
- {
- struct cpia2_fh *fh = kmalloc(sizeof(*fh),GFP_KERNEL);
- if(!fh) {
- retval = -ENOMEM;
- goto err_return;
- }
- file->private_data = fh;
- fh->prio = V4L2_PRIORITY_UNSET;
- v4l2_prio_open(&cam->prio, &fh->prio);
- fh->mmapped = 0;
- }
+ fh = kmalloc(sizeof(*fh), GFP_KERNEL);
+ if (!fh)
+ return -ENOMEM;
+ file->private_data = fh;
+ fh->prio = V4L2_PRIORITY_UNSET;
+ v4l2_prio_open(&cam->prio, &fh->prio);
+ fh->mmapped = 0;
++cam->open_count;
cpia2_dbg_dump_registers(cam);
-
-err_return:
- mutex_unlock(&cam->busy_lock);
- return retval;
+ return 0;
}
/******************************************************************************
struct camera_data *cam = video_get_drvdata(dev);
struct cpia2_fh *fh = file->private_data;
- mutex_lock(&cam->busy_lock);
-
if (cam->present &&
- (cam->open_count == 1
- || fh->prio == V4L2_PRIORITY_RECORD
- )) {
+ (cam->open_count == 1 || fh->prio == V4L2_PRIORITY_RECORD)) {
cpia2_usb_stream_stop(cam);
- if(cam->open_count == 1) {
+ if (cam->open_count == 1) {
/* save camera state for later open */
cpia2_save_camera_state(cam);
}
}
- {
- if(fh->mmapped)
- cam->mmapped = 0;
- v4l2_prio_close(&cam->prio, fh->prio);
- file->private_data = NULL;
- kfree(fh);
- }
+ if (fh->mmapped)
+ cam->mmapped = 0;
+ v4l2_prio_close(&cam->prio, fh->prio);
+ file->private_data = NULL;
+ kfree(fh);
if (--cam->open_count == 0) {
cpia2_free_buffers(cam);
if (!cam->present) {
video_unregister_device(dev);
- mutex_unlock(&cam->busy_lock);
kfree(cam);
return 0;
}
}
- mutex_unlock(&cam->busy_lock);
-
return 0;
}
return 0;
}
- mutex_unlock(&cam->busy_lock);
+ mutex_unlock(&cam->v4l2_lock);
wait_event_interruptible(cam->wq_stream,
!cam->streaming ||
frame->status == FRAME_READY);
- mutex_lock(&cam->busy_lock);
+ mutex_lock(&cam->v4l2_lock);
if (signal_pending(current))
return -ERESTARTSYS;
if(!cam->present)
if(frame < 0) {
/* Wait for a frame to become available */
struct framebuf *cb=cam->curbuff;
- mutex_unlock(&cam->busy_lock);
+ mutex_unlock(&cam->v4l2_lock);
wait_event_interruptible(cam->wq_stream,
!cam->present ||
(cb=cam->curbuff)->status == FRAME_READY);
- mutex_lock(&cam->busy_lock);
+ mutex_lock(&cam->v4l2_lock);
if (signal_pending(current))
return -ERESTARTSYS;
if(!cam->present)
if (!cam)
return -ENOTTY;
- /* make this _really_ smp-safe */
- if (mutex_lock_interruptible(&cam->busy_lock))
- return -ERESTARTSYS;
-
- if (!cam->present) {
- mutex_unlock(&cam->busy_lock);
+ if (!cam->present)
return -ENODEV;
- }
/* Priority check */
switch (cmd) {
{
struct cpia2_fh *fh = file->private_data;
retval = v4l2_prio_check(&cam->prio, fh->prio);
- if(retval) {
- mutex_unlock(&cam->busy_lock);
+ if (retval)
return retval;
- }
break;
}
default:
break;
}
- mutex_unlock(&cam->busy_lock);
return retval;
}
.release = cpia2_close,
.read = cpia2_v4l_read,
.poll = cpia2_v4l_poll,
- .ioctl = cpia2_ioctl,
+ .unlocked_ioctl = cpia2_ioctl,
.mmap = cpia2_mmap,
};
memcpy(cam->vdev, &cpia2_template, sizeof(cpia2_template));
video_set_drvdata(cam->vdev, cam);
+ cam->vdev->lock = &cam->v4l2_lock;
reset_camera_struct_v4l(cam);
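
The cpia2 rework above removes the driver's busy_lock juggling from read, poll, mmap, open, release and ioctl and instead lets the V4L2 core take cam->vdev->lock (now v4l2_lock) around every file operation; where the driver has to sleep for a frame it drops that lock around wait_event_interruptible() so other file operations can still run. A reduced sketch of the sleep pattern, assuming a hypothetical per-device structure (my_cam stands in for camera_data):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/wait.h>

struct my_cam {
        struct mutex            v4l2_lock;      /* also installed as vdev->lock */
        wait_queue_head_t       wq_stream;
        bool                    present;
        bool                    frame_ready;
};

/* Sketch: the core holds ->v4l2_lock when calling the file operation;
 * release it around the sleep so other fds are not blocked. */
static int wait_for_frame(struct my_cam *cam)
{
        mutex_unlock(&cam->v4l2_lock);
        wait_event_interruptible(cam->wq_stream,
                                 !cam->present || cam->frame_ready);
        mutex_lock(&cam->v4l2_lock);

        if (signal_pending(current))
                return -ERESTARTSYS;
        if (!cam->present)
                return 0;       /* camera was unplugged while sleeping */
        return 1;
}
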
{
snprintf(cx->in_workq_name, sizeof(cx->in_workq_name), "%s-in",
cx->v4l2_dev.name);
- cx->in_work_queue = create_singlethread_workqueue(cx->in_workq_name);
+ cx->in_work_queue = alloc_ordered_workqueue(cx->in_workq_name, 0);
if (cx->in_work_queue == NULL) {
CX18_ERR("Unable to create incoming mailbox handler thread\n");
return -ENOMEM;
return 0;
}
-static int __devinit cx18_create_out_workq(struct cx18 *cx)
-{
- snprintf(cx->out_workq_name, sizeof(cx->out_workq_name), "%s-out",
- cx->v4l2_dev.name);
- cx->out_work_queue = create_workqueue(cx->out_workq_name);
- if (cx->out_work_queue == NULL) {
- CX18_ERR("Unable to create outgoing mailbox handler threads\n");
- return -ENOMEM;
- }
- return 0;
-}
-
static void __devinit cx18_init_in_work_orders(struct cx18 *cx)
{
int i;
mutex_init(&cx->epu2apu_mb_lock);
mutex_init(&cx->epu2cpu_mb_lock);
- ret = cx18_create_out_workq(cx);
- if (ret)
- return ret;
-
ret = cx18_create_in_workq(cx);
- if (ret) {
- destroy_workqueue(cx->out_work_queue);
+ if (ret)
return ret;
- }
cx18_init_in_work_orders(cx);
release_mem_region(cx->base_addr, CX18_MEM_SIZE);
free_workqueues:
destroy_workqueue(cx->in_work_queue);
- destroy_workqueue(cx->out_work_queue);
err:
if (retval == 0)
retval = -ENODEV;
cx18_halt_firmware(cx);
destroy_workqueue(cx->in_work_queue);
- destroy_workqueue(cx->out_work_queue);
cx18_streams_cleanup(cx, 1);
struct cx18_in_work_order in_work_order[CX18_MAX_IN_WORK_ORDERS];
char epu_debug_str[256]; /* CX18_EPU_DEBUG is rare: use shared space */
- struct workqueue_struct *out_work_queue;
- char out_workq_name[12]; /* "cx18-NN-out" */
-
/* i2c */
struct i2c_adapter i2c_adap[2];
struct i2c_algo_bit_data i2c_algo[2];
/* Related to submission of mdls to firmware */
static inline void cx18_stream_load_fw_queue(struct cx18_stream *s)
{
- struct cx18 *cx = s->cx;
- queue_work(cx->out_work_queue, &s->out_work_order);
+ schedule_work(&s->out_work_order);
}
static inline void cx18_stream_put_mdl_fw(struct cx18_stream *s,
#include <media/videobuf-vmalloc.h>
#include "xc5000.h"
-#include "dvb_dummy_fe.h"
#include "s5h1432.h"
#include "tda18271.h"
#include "s5h1411.h"
if (dev->dvb->frontend == NULL) {
printk(DRIVER_NAME
- ": Failed to attach dummy front end\n");
+ ": Failed to attach s5h1411 front end\n");
result = -EINVAL;
goto out_free;
}
if (dev->dvb->frontend == NULL) {
printk(DRIVER_NAME
- ": Failed to attach dummy front end\n");
+ ": Failed to attach s5h1411 front end\n");
result = -EINVAL;
goto out_free;
}
return 0;
}
-static int cx25840_s_config(struct v4l2_subdev *sd, int irq, void *platform_data)
-{
- struct cx25840_state *state = to_state(sd);
- struct i2c_client *client = v4l2_get_subdevdata(sd);
-
- if (platform_data) {
- struct cx25840_platform_data *pdata = platform_data;
-
- state->pvr150_workaround = pdata->pvr150_workaround;
- set_input(client, state->vid_input, state->aud_input);
- }
- return 0;
-}
-
static int cx23885_irq_handler(struct v4l2_subdev *sd, u32 status,
bool *handled)
{
static const struct v4l2_subdev_core_ops cx25840_core_ops = {
.log_status = cx25840_log_status,
- .s_config = cx25840_s_config,
.g_chip_ident = cx25840_g_chip_ident,
.g_ctrl = v4l2_subdev_g_ctrl,
.s_ctrl = v4l2_subdev_s_ctrl,
state->vid_input = CX25840_COMPOSITE7;
state->aud_input = CX25840_AUDIO8;
state->audclk_freq = 48000;
- state->pvr150_workaround = 0;
state->audmode = V4L2_TUNER_MODE_LANG1;
state->vbi_line_offset = 8;
state->id = id;
v4l2_ctrl_cluster(2, &state->volume);
v4l2_ctrl_handler_setup(&state->hdl);
+ if (client->dev.platform_data) {
+ struct cx25840_platform_data *pdata = client->dev.platform_data;
+
+ state->pvr150_workaround = pdata->pvr150_workaround;
+ }
+
cx25840_ir_probe(sd);
return 0;
}
void __iomem *vpif_base;
+/**
+ * ch_params: video standard configuration parameters for vpif
+ * The table must include all presets from supported subdevices.
+ */
+const struct vpif_channel_config_params ch_params[] = {
+ /* HDTV formats */
+ {
+ .name = "480p59_94",
+ .width = 720,
+ .height = 480,
+ .frm_fmt = 1,
+ .ycmux_mode = 0,
+ .eav2sav = 138 - 8,
+ .sav2eav = 720,
+ .l1 = 1,
+ .l3 = 43,
+ .l5 = 523,
+ .vsize = 525,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_480P59_94,
+ },
+ {
+ .name = "576p50",
+ .width = 720,
+ .height = 576,
+ .frm_fmt = 1,
+ .ycmux_mode = 0,
+ .eav2sav = 144 - 8,
+ .sav2eav = 720,
+ .l1 = 1,
+ .l3 = 45,
+ .l5 = 621,
+ .vsize = 625,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_576P50,
+ },
+ {
+ .name = "720p50",
+ .width = 1280,
+ .height = 720,
+ .frm_fmt = 1,
+ .ycmux_mode = 0,
+ .eav2sav = 700 - 8,
+ .sav2eav = 1280,
+ .l1 = 1,
+ .l3 = 26,
+ .l5 = 746,
+ .vsize = 750,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_720P50,
+ },
+ {
+ .name = "720p60",
+ .width = 1280,
+ .height = 720,
+ .frm_fmt = 1,
+ .ycmux_mode = 0,
+ .eav2sav = 370 - 8,
+ .sav2eav = 1280,
+ .l1 = 1,
+ .l3 = 26,
+ .l5 = 746,
+ .vsize = 750,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_720P60,
+ },
+ {
+ .name = "1080I50",
+ .width = 1920,
+ .height = 1080,
+ .frm_fmt = 0,
+ .ycmux_mode = 0,
+ .eav2sav = 720 - 8,
+ .sav2eav = 1920,
+ .l1 = 1,
+ .l3 = 21,
+ .l5 = 561,
+ .l7 = 563,
+ .l9 = 584,
+ .l11 = 1124,
+ .vsize = 1125,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_1080I50,
+ },
+ {
+ .name = "1080I60",
+ .width = 1920,
+ .height = 1080,
+ .frm_fmt = 0,
+ .ycmux_mode = 0,
+ .eav2sav = 280 - 8,
+ .sav2eav = 1920,
+ .l1 = 1,
+ .l3 = 21,
+ .l5 = 561,
+ .l7 = 563,
+ .l9 = 584,
+ .l11 = 1124,
+ .vsize = 1125,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_1080I60,
+ },
+ {
+ .name = "1080p60",
+ .width = 1920,
+ .height = 1080,
+ .frm_fmt = 1,
+ .ycmux_mode = 0,
+ .eav2sav = 280 - 8,
+ .sav2eav = 1920,
+ .l1 = 1,
+ .l3 = 42,
+ .l5 = 1122,
+ .vsize = 1125,
+ .capture_format = 0,
+ .vbi_supported = 0,
+ .hd_sd = 1,
+ .dv_preset = V4L2_DV_1080P60,
+ },
+
+ /* SDTV formats */
+ {
+ .name = "NTSC_M",
+ .width = 720,
+ .height = 480,
+ .frm_fmt = 0,
+ .ycmux_mode = 1,
+ .eav2sav = 268,
+ .sav2eav = 1440,
+ .l1 = 1,
+ .l3 = 23,
+ .l5 = 263,
+ .l7 = 266,
+ .l9 = 286,
+ .l11 = 525,
+ .vsize = 525,
+ .capture_format = 0,
+ .vbi_supported = 1,
+ .hd_sd = 0,
+ .stdid = V4L2_STD_525_60,
+ },
+ {
+ .name = "PAL_BDGHIK",
+ .width = 720,
+ .height = 576,
+ .frm_fmt = 0,
+ .ycmux_mode = 1,
+ .eav2sav = 280,
+ .sav2eav = 1440,
+ .l1 = 1,
+ .l3 = 23,
+ .l5 = 311,
+ .l7 = 313,
+ .l9 = 336,
+ .l11 = 624,
+ .vsize = 625,
+ .capture_format = 0,
+ .vbi_supported = 1,
+ .hd_sd = 0,
+ .stdid = V4L2_STD_625_50,
+ },
+};
+
+const unsigned int vpif_ch_params_count = ARRAY_SIZE(ch_params);
+
static inline void vpif_wr_bit(u32 reg, u32 bit, u32 val)
{
if (val)
char name[VPIF_MAX_NAME]; /* Name of the mode */
u16 width; /* Indicates width of the image */
u16 height; /* Indicates height of the image */
- u8 fps;
- u8 frm_fmt; /* Indicates whether this is interlaced
- * or progressive format */
- u8 ycmux_mode; /* Indicates whether this mode requires
- * single or two channels */
- u16 eav2sav; /* length of sav 2 eav */
+ u8 frm_fmt; /* Interlaced (0) or progressive (1) */
+ u8 ycmux_mode; /* This mode requires one (0) or two (1)
+ channels */
+ u16 eav2sav; /* length of eav 2 sav */
u16 sav2eav; /* length of sav 2 eav */
u16 l1, l3, l5, l7, l9, l11; /* Other parameter configurations */
u16 vsize; /* Vertical size of the image */
* is in BT or in CCD/CMOS */
u8 vbi_supported; /* Indicates whether this mode
* supports capturing vbi or not */
- u8 hd_sd;
- v4l2_std_id stdid;
+ u8 hd_sd; /* HDTV (1) or SDTV (0) format */
+ v4l2_std_id stdid; /* SDTV format */
+ u32 dv_preset; /* HDTV format */
};
+extern const unsigned int vpif_ch_params_count;
+extern const struct vpif_channel_config_params ch_params[];
+
struct vpif_video_params;
struct vpif_params;
struct vpif_vbi_params;
#include <linux/slab.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
+#include <media/v4l2-chip-ident.h>
#include "vpif_capture.h"
#include "vpif.h"
static struct vpif_device vpif_obj = { {NULL} };
static struct device *vpif_dev;
-/**
- * ch_params: video standard configuration parameters for vpif
- */
-static const struct vpif_channel_config_params ch_params[] = {
- {
- "NTSC_M", 720, 480, 30, 0, 1, 268, 1440, 1, 23, 263, 266,
- 286, 525, 525, 0, 1, 0, V4L2_STD_525_60,
- },
- {
- "PAL_BDGHIK", 720, 576, 25, 0, 1, 280, 1440, 1, 23, 311, 313,
- 336, 624, 625, 0, 1, 0, V4L2_STD_625_50,
- },
-};
-
/**
* vpif_uservirt_to_phys : translate user/virtual address to phy address
* @virtp: user/virtual address
* @dev_id: dev_id ptr
*
* It changes status of the captured buffer, takes next buffer from the queue
- * and sets its address in VPIF registers
+ * and sets its address in VPIF registers
*/
static irqreturn_t vpif_channel_isr(int irq, void *dev_id)
{
struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
struct vpif_params *vpifparams = &ch->vpifparams;
const struct vpif_channel_config_params *config;
- struct vpif_channel_config_params *std_info;
+ struct vpif_channel_config_params *std_info = &vpifparams->std_info;
struct video_obj *vid_ch = &ch->video;
int index;
vpif_dbg(2, debug, "vpif_update_std_info\n");
- std_info = &vpifparams->std_info;
-
- for (index = 0; index < ARRAY_SIZE(ch_params); index++) {
+ for (index = 0; index < vpif_ch_params_count; index++) {
config = &ch_params[index];
- if (config->stdid & vid_ch->stdid) {
- memcpy(std_info, config, sizeof(*config));
- break;
+ if (config->hd_sd == 0) {
+ vpif_dbg(2, debug, "SD format\n");
+ if (config->stdid & vid_ch->stdid) {
+ memcpy(std_info, config, sizeof(*config));
+ break;
+ }
+ } else {
+ vpif_dbg(2, debug, "HD format\n");
+ if (config->dv_preset == vid_ch->dv_preset) {
+ memcpy(std_info, config, sizeof(*config));
+ break;
+ }
}
}
/* standard not found */
- if (index == ARRAY_SIZE(ch_params))
+ if (index == vpif_ch_params_count)
return -EINVAL;
common->fmt.fmt.pix.width = std_info->width;
common->fmt.fmt.pix.bytesperline = std_info->width;
vpifparams->video_params.hpitch = std_info->width;
vpifparams->video_params.storage_mode = std_info->frm_fmt;
+
return 0;
}
struct video_obj *vid_ch;
struct channel_obj *ch;
struct vpif_fh *fh;
- int i, ret = 0;
+ int i;
vpif_dbg(2, debug, "vpif_open\n");
vid_ch = &ch->video;
common = &ch->common[VPIF_VIDEO_INDEX];
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
if (NULL == ch->curr_subdev_info) {
/**
* search through the sub device to see a registered
}
if (i == config->subdev_count) {
vpif_err("No sub device registered\n");
- ret = -ENOENT;
- goto exit;
+ return -ENOENT;
}
}
fh = kzalloc(sizeof(struct vpif_fh), GFP_KERNEL);
if (NULL == fh) {
vpif_err("unable to allocate memory for file handle object\n");
- ret = -ENOMEM;
- goto exit;
+ return -ENOMEM;
}
/* store pointer to fh in private_data member of filep */
/* Initialize priority of this instance to default priority */
fh->prio = V4L2_PRIORITY_UNSET;
v4l2_prio_open(&ch->prio, &fh->prio);
-exit:
- mutex_unlock(&common->lock);
- return ret;
+ return 0;
}
/**
common = &ch->common[VPIF_VIDEO_INDEX];
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* if this instance is doing IO */
if (fh->io_allowed[VPIF_VIDEO_INDEX]) {
/* Reset io_usrs member of channel object */
/* Decrement channel usrs counter */
ch->usrs--;
- /* unlock mutex on channel object */
- mutex_unlock(&common->lock);
-
/* Close the priority */
v4l2_prio_close(&ch->prio, fh->prio);
struct channel_obj *ch = fh->channel;
struct common_obj *common;
u8 index = 0;
- int ret = 0;
vpif_dbg(2, debug, "vpif_reqbufs\n");
common = &ch->common[index];
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
- if (0 != common->io_usrs) {
- ret = -EBUSY;
- goto reqbuf_exit;
- }
+ if (0 != common->io_usrs)
+ return -EBUSY;
/* Initialize videobuf queue as per the buffer type */
videobuf_queue_dma_contig_init(&common->buffer_queue,
reqbuf->type,
common->fmt.fmt.pix.field,
sizeof(struct videobuf_buffer), fh,
- NULL);
+ &common->lock);
/* Set io allowed member of file handle to TRUE */
fh->io_allowed[index] = 1;
INIT_LIST_HEAD(&common->dma_queue);
/* Allocate buffers */
- ret = videobuf_reqbufs(&common->buffer_queue, reqbuf);
-
-reqbuf_exit:
- mutex_unlock(&common->lock);
- return ret;
+ return videobuf_reqbufs(&common->buffer_queue, reqbuf);
}
/**
return ret;
}
- if (mutex_lock_interruptible(&common->lock)) {
- ret = -ERESTARTSYS;
- goto streamoff_exit;
- }
-
/* If buffer queue is empty, return error */
if (list_empty(&common->dma_queue)) {
vpif_dbg(1, debug, "buffer queue is empty\n");
enable_channel1(1);
}
channel_first_int[VPIF_VIDEO_INDEX][ch->channel_id] = 1;
- mutex_unlock(&common->lock);
return ret;
exit:
- mutex_unlock(&common->lock);
-streamoff_exit:
- ret = videobuf_streamoff(&common->buffer_queue);
+ videobuf_streamoff(&common->buffer_queue);
return ret;
}
return -EINVAL;
}
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* disable channel */
if (VPIF_CHANNEL0_VIDEO == ch->channel_id) {
enable_channel0(0);
if (ret && (ret != -ENOIOCTLCMD))
vpif_dbg(1, debug, "stream off failed in subdev\n");
- mutex_unlock(&common->lock);
-
return videobuf_streamoff(&common->buffer_queue);
}
{
struct vpif_fh *fh = priv;
struct channel_obj *ch = fh->channel;
- struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
int ret = 0;
vpif_dbg(2, debug, "vpif_querystd\n");
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* Call querystd function of decoder device */
ret = v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index], video,
querystd, std_id);
if (ret < 0)
vpif_dbg(1, debug, "Failed to set standard for sub devices\n");
- mutex_unlock(&common->lock);
return ret;
}
fh->initialized = 1;
/* Call encoder subdevice function to set the standard */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
ch->video.stdid = *std_id;
+ ch->video.dv_preset = V4L2_DV_INVALID;
+ memset(&ch->video.bt_timings, 0, sizeof(ch->video.bt_timings));
/* Get the information about the standard */
if (vpif_update_std_info(ch)) {
- ret = -EINVAL;
vpif_err("Error getting the standard info\n");
- goto s_std_exit;
+ return -EINVAL;
}
/* Configure the default format information */
s_std, *std_id);
if (ret < 0)
vpif_dbg(1, debug, "Failed to set standard for sub devices\n");
-
-s_std_exit:
- mutex_unlock(&common->lock);
return ret;
}
return -EINVAL;
}
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* first setup input path from sub device to vpif */
if (config->setup_input_path) {
ret = config->setup_input_path(ch->channel_id,
vpif_dbg(1, debug, "couldn't setup input path for the"
" sub device %s, for input index %d\n",
subdev_info->name, index);
- goto exit;
+ return ret;
}
}
input, output, 0);
if (ret < 0) {
vpif_dbg(1, debug, "Failed to set input\n");
- goto exit;
+ return ret;
}
}
vid_ch->input_idx = index;
/* update tvnorms from the sub device input info */
ch->video_dev->tvnorms = chan_cfg->inputs[index].input.std;
-
-exit:
- mutex_unlock(&common->lock);
return ret;
}
return -EINVAL;
/* Fill in the information about format */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
*fmt = common->fmt;
- mutex_unlock(&common->lock);
return 0;
}
struct v4l2_pix_format *pixfmt;
int ret = 0;
- vpif_dbg(2, debug, "VIDIOC_S_FMT\n");
+ vpif_dbg(2, debug, "%s\n", __func__);
/* If streaming is started, return error */
if (common->started) {
if (ret)
return ret;
/* store the format in the channel object */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
common->fmt = *fmt;
- mutex_unlock(&common->lock);
-
return 0;
}
return 0;
}
+/**
+ * vpif_enum_dv_presets() - ENUM_DV_PRESETS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_enum_dv_presets(struct file *file, void *priv,
+ struct v4l2_dv_enum_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ return v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index],
+ video, enum_dv_presets, preset);
+}
+
+/**
+ * vpif_query_dv_preset() - QUERY_DV_PRESET handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_query_dv_preset(struct file *file, void *priv,
+ struct v4l2_dv_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ return v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index],
+ video, query_dv_preset, preset);
+}
+/**
+ * vpif_s_dv_preset() - S_DV_PRESET handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_s_dv_preset(struct file *file, void *priv,
+ struct v4l2_dv_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
+ int ret = 0;
+
+ if (common->started) {
+ vpif_dbg(1, debug, "streaming in progress\n");
+ return -EBUSY;
+ }
+
+ if ((VPIF_CHANNEL0_VIDEO == ch->channel_id) ||
+ (VPIF_CHANNEL1_VIDEO == ch->channel_id)) {
+ if (!fh->initialized) {
+ vpif_dbg(1, debug, "Channel Busy\n");
+ return -EBUSY;
+ }
+ }
+
+ ret = v4l2_prio_check(&ch->prio, fh->prio);
+ if (ret)
+ return ret;
+
+ fh->initialized = 1;
+
+ /* Call decoder subdevice function to set the DV preset */
+ if (mutex_lock_interruptible(&common->lock))
+ return -ERESTARTSYS;
+
+ ch->video.dv_preset = preset->preset;
+ ch->video.stdid = V4L2_STD_UNKNOWN;
+ memset(&ch->video.bt_timings, 0, sizeof(ch->video.bt_timings));
+
+ /* Get the information about the standard */
+ if (vpif_update_std_info(ch)) {
+ vpif_dbg(1, debug, "Error getting the standard info\n");
+ ret = -EINVAL;
+ } else {
+ /* Configure the default format information */
+ vpif_config_format(ch);
+
+ ret = v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index],
+ video, s_dv_preset, preset);
+ }
+
+ mutex_unlock(&common->lock);
+
+ return ret;
+}
+/**
+ * vpif_g_dv_preset() - G_DV_PRESET handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_g_dv_preset(struct file *file, void *priv,
+ struct v4l2_dv_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ preset->preset = ch->video.dv_preset;
+
+ return 0;
+}
+
+/**
+ * vpif_s_dv_timings() - S_DV_TIMINGS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @timings: digital video timings
+ */
+static int vpif_s_dv_timings(struct file *file, void *priv,
+ struct v4l2_dv_timings *timings)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct vpif_params *vpifparams = &ch->vpifparams;
+ struct vpif_channel_config_params *std_info = &vpifparams->std_info;
+ struct video_obj *vid_ch = &ch->video;
+ struct v4l2_bt_timings *bt = &vid_ch->bt_timings;
+ int ret;
+
+ if (timings->type != V4L2_DV_BT_656_1120) {
+ vpif_dbg(2, debug, "Timing type not defined\n");
+ return -EINVAL;
+ }
+
+ /* Configure subdevice timings, if any */
+ ret = v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index],
+ video, s_dv_timings, timings);
+ if (ret == -ENOIOCTLCMD) {
+ vpif_dbg(2, debug, "Custom DV timings not supported by "
+ "subdevice\n");
+ return -EINVAL;
+ }
+ if (ret < 0) {
+ vpif_dbg(2, debug, "Error setting custom DV timings\n");
+ return ret;
+ }
+
+ if (!(timings->bt.width && timings->bt.height &&
+ (timings->bt.hbackporch ||
+ timings->bt.hfrontporch ||
+ timings->bt.hsync) &&
+ timings->bt.vfrontporch &&
+ (timings->bt.vbackporch ||
+ timings->bt.vsync))) {
+ vpif_dbg(2, debug, "Timings for width, height, "
+ "horizontal back porch, horizontal sync, "
+ "horizontal front porch, vertical back porch, "
+ "vertical sync and vertical back porch "
+ "must be defined\n");
+ return -EINVAL;
+ }
+
+ *bt = timings->bt;
+
+ /* Configure video port timings */
+
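+ /*
+ * The EAV->SAV interval is the total horizontal blanking (back porch +
+ * front porch + sync) minus 8, presumably the bytes taken by the
+ * EAV/SAV codes; the SAV->EAV interval is the active line width.
+ */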
+ std_info->eav2sav = bt->hbackporch + bt->hfrontporch +
+ bt->hsync - 8;
+ std_info->sav2eav = bt->width;
+
+ std_info->l1 = 1;
+ std_info->l3 = bt->vsync + bt->vbackporch + 1;
+
+ if (bt->interlaced) {
+ if (bt->il_vbackporch || bt->il_vfrontporch || bt->il_vsync) {
+ std_info->vsize = bt->height * 2 +
+ bt->vfrontporch + bt->vsync + bt->vbackporch +
+ bt->il_vfrontporch + bt->il_vsync +
+ bt->il_vbackporch;
+ std_info->l5 = std_info->vsize/2 -
+ (bt->vfrontporch - 1);
+ std_info->l7 = std_info->vsize/2 + 1;
+ std_info->l9 = std_info->l7 + bt->il_vsync +
+ bt->il_vbackporch + 1;
+ std_info->l11 = std_info->vsize -
+ (bt->il_vfrontporch - 1);
+ } else {
+ vpif_dbg(2, debug, "Required timing values for "
+ "interlaced BT format missing\n");
+ return -EINVAL;
+ }
+ } else {
+ std_info->vsize = bt->height + bt->vfrontporch +
+ bt->vsync + bt->vbackporch;
+ std_info->l5 = std_info->vsize - (bt->vfrontporch - 1);
+ }
+ strncpy(std_info->name, "Custom timings BT656/1120", VPIF_MAX_NAME);
+ std_info->width = bt->width;
+ std_info->height = bt->height;
+ std_info->frm_fmt = bt->interlaced ? 0 : 1;
+ std_info->ycmux_mode = 0;
+ std_info->capture_format = 0;
+ std_info->vbi_supported = 0;
+ std_info->hd_sd = 1;
+ std_info->stdid = 0;
+ std_info->dv_preset = V4L2_DV_INVALID;
+
+ vid_ch->stdid = 0;
+ vid_ch->dv_preset = V4L2_DV_INVALID;
+ return 0;
+}
+
+/**
+ * vpif_g_dv_timings() - G_DV_TIMINGS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @timings: digital video timings
+ */
+static int vpif_g_dv_timings(struct file *file, void *priv,
+ struct v4l2_dv_timings *timings)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct video_obj *vid_ch = &ch->video;
+ struct v4l2_bt_timings *bt = &vid_ch->bt_timings;
+
+ timings->bt = *bt;
+
+ return 0;
+}
+
+/*
+ * vpif_g_chip_ident() - Identify the chip
+ * @file: file ptr
+ * @priv: file handle
+ * @chip: chip identity
+ *
+ * Returns zero or -EINVAL if the read operation fails.
+ */
+static int vpif_g_chip_ident(struct file *file, void *priv,
+ struct v4l2_dbg_chip_ident *chip)
+{
+ chip->ident = V4L2_IDENT_NONE;
+ chip->revision = 0;
+ if (chip->match.type != V4L2_CHIP_MATCH_I2C_DRIVER &&
+ chip->match.type != V4L2_CHIP_MATCH_I2C_ADDR) {
+ vpif_dbg(2, debug, "match_type is invalid.\n");
+ return -EINVAL;
+ }
+
+ return v4l2_device_call_until_err(&vpif_obj.v4l2_dev, 0, core,
+ g_chip_ident, chip);
+}
+
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+/*
+ * vpif_dbg_g_register() - Read register
+ * @file: file ptr
+ * @priv: file handle
+ * @reg: register to be read
+ *
+ * Debugging only
+ * Returns zero or -EINVAL if the read operation fails.
+ */
+static int vpif_dbg_g_register(struct file *file, void *priv,
+ struct v4l2_dbg_register *reg)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ return v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index], core,
+ g_register, reg);
+}
+
+/*
+ * vpif_dbg_s_register() - Write to register
+ * @file: file ptr
+ * @priv: file handle
+ * @reg: register to be modified
+ *
+ * Debugging only
+ * Returns zero or -EINVAL if the write operation fails.
+ */
+static int vpif_dbg_s_register(struct file *file, void *priv,
+ struct v4l2_dbg_register *reg)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ return v4l2_subdev_call(vpif_obj.sd[ch->curr_sd_index], core,
+ s_register, reg);
+}
+#endif
+
+/*
+ * vpif_log_status() - Status information
+ * @file: file ptr
+ * @priv: file handle
+ *
+ * Returns zero.
+ */
+static int vpif_log_status(struct file *filep, void *priv)
+{
+ /* status for sub devices */
+ v4l2_device_call_all(&vpif_obj.v4l2_dev, 0, core, log_status);
+
+ return 0;
+}
+
/* vpif capture ioctl operations */
static const struct v4l2_ioctl_ops vpif_ioctl_ops = {
.vidioc_querycap = vpif_querycap,
.vidioc_streamon = vpif_streamon,
.vidioc_streamoff = vpif_streamoff,
.vidioc_cropcap = vpif_cropcap,
+ .vidioc_enum_dv_presets = vpif_enum_dv_presets,
+ .vidioc_s_dv_preset = vpif_s_dv_preset,
+ .vidioc_g_dv_preset = vpif_g_dv_preset,
+ .vidioc_query_dv_preset = vpif_query_dv_preset,
+ .vidioc_s_dv_timings = vpif_s_dv_timings,
+ .vidioc_g_dv_timings = vpif_g_dv_timings,
+ .vidioc_g_chip_ident = vpif_g_chip_ident,
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+ .vidioc_g_register = vpif_dbg_g_register,
+ .vidioc_s_register = vpif_dbg_s_register,
+#endif
+ .vidioc_log_status = vpif_log_status,
};
/* vpif file operations */
.owner = THIS_MODULE,
.open = vpif_open,
.release = vpif_release,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
.mmap = vpif_mmap,
.poll = vpif_poll
};
common = &(ch->common[VPIF_VIDEO_INDEX]);
spin_lock_init(&common->irqlock);
mutex_init(&common->lock);
+ ch->video_dev->lock = &common->lock;
/* Initialize prio member of channel object */
v4l2_prio_init(&ch->prio);
err = video_register_device(ch->video_dev,
if (vpif_obj.sd[i])
vpif_obj.sd[i]->grp_id = 1 << i;
}
- v4l2_info(&vpif_obj.v4l2_dev, "DM646x VPIF Capture driver"
- " initialized\n");
+ v4l2_info(&vpif_obj.v4l2_dev,
+ "DM646x VPIF capture driver initialized\n");
return 0;
probe_subdev_out:
enum v4l2_field buf_field;
/* Currently selected or default standard */
v4l2_std_id stdid;
+ u32 dv_preset;
+ struct v4l2_bt_timings bt_timings;
/* This is to track the last input that is passed to application */
u32 input_idx;
};
#include <media/adv7343.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
+#include <media/v4l2-chip-ident.h>
#include <mach/dm646x.h>
static struct vpif_device vpif_obj = { {NULL} };
static struct device *vpif_dev;
-static const struct vpif_channel_config_params ch_params[] = {
- {
- "NTSC", 720, 480, 30, 0, 1, 268, 1440, 1, 23, 263, 266,
- 286, 525, 525, 0, 1, 0, V4L2_STD_525_60,
- },
- {
- "PAL", 720, 576, 25, 0, 1, 280, 1440, 1, 23, 311, 313,
- 336, 624, 625, 0, 1, 0, V4L2_STD_625_50,
- },
-};
-
/*
* vpif_uservirt_to_phys: This function is used to convert user
* space virtual address to physical address.
return IRQ_HANDLED;
}
-static int vpif_get_std_info(struct channel_obj *ch)
+static int vpif_update_std_info(struct channel_obj *ch)
{
- struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
struct video_obj *vid_ch = &ch->video;
struct vpif_params *vpifparams = &ch->vpifparams;
struct vpif_channel_config_params *std_info = &vpifparams->std_info;
const struct vpif_channel_config_params *config;
- int index;
-
- std_info->stdid = vid_ch->stdid;
- if (!std_info->stdid)
- return -1;
+ int i;
- for (index = 0; index < ARRAY_SIZE(ch_params); index++) {
- config = &ch_params[index];
- if (config->stdid & std_info->stdid) {
- memcpy(std_info, config, sizeof(*config));
- break;
+ for (i = 0; i < vpif_ch_params_count; i++) {
+ config = &ch_params[i];
+ if (config->hd_sd == 0) {
+ vpif_dbg(2, debug, "SD format\n");
+ if (config->stdid & vid_ch->stdid) {
+ memcpy(std_info, config, sizeof(*config));
+ break;
+ }
+ } else {
+ vpif_dbg(2, debug, "HD format\n");
+ if (config->dv_preset == vid_ch->dv_preset) {
+ memcpy(std_info, config, sizeof(*config));
+ break;
+ }
}
}
- if (index == ARRAY_SIZE(ch_params))
- return -1;
+ if (i == vpif_ch_params_count) {
+ vpif_dbg(1, debug, "Format not found\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
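+/*
+ * Update the channel resolution from the currently selected standard,
+ * DV preset or custom BT timings; fails if none of these has been set.
+ */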
+static int vpif_update_resolution(struct channel_obj *ch)
+{
+ struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
+ struct video_obj *vid_ch = &ch->video;
+ struct vpif_params *vpifparams = &ch->vpifparams;
+ struct vpif_channel_config_params *std_info = &vpifparams->std_info;
+
+ if (!vid_ch->stdid && !vid_ch->dv_preset && !vid_ch->bt_timings.height)
+ return -EINVAL;
+
+ if (vid_ch->stdid || vid_ch->dv_preset) {
+ if (vpif_update_std_info(ch))
+ return -EINVAL;
+ }
common->fmt.fmt.pix.width = std_info->width;
common->fmt.fmt.pix.height = std_info->height;
common->fmt.fmt.pix.width, common->fmt.fmt.pix.height);
/* Set height and width parameters */
- ch->common[VPIF_VIDEO_INDEX].height = std_info->height;
- ch->common[VPIF_VIDEO_INDEX].width = std_info->width;
+ common->height = std_info->height;
+ common->width = std_info->width;
return 0;
}
else
sizeimage = config_params.channel_bufsize[ch->channel_id];
- if (vpif_get_std_info(ch)) {
- vpif_err("Error getting the standard info\n");
+ if (vpif_update_resolution(ch))
return -EINVAL;
- }
hpitch = pixfmt->bytesperline;
vpitch = sizeimage / (hpitch * 2);
static int vpif_mmap(struct file *filep, struct vm_area_struct *vma)
{
struct vpif_fh *fh = filep->private_data;
- struct common_obj *common = &fh->channel->common[VPIF_VIDEO_INDEX];
+ struct channel_obj *ch = fh->channel;
+ struct common_obj *common = &(ch->common[VPIF_VIDEO_INDEX]);
+
+ vpif_dbg(2, debug, "vpif_mmap\n");
return videobuf_mmap_mapper(&common->buffer_queue, vma);
}
struct channel_obj *ch = fh->channel;
struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* if this instance is doing IO */
if (fh->io_allowed[VPIF_VIDEO_INDEX]) {
/* Reset io_usrs member of channel object */
config_params.numbuffers[ch->channel_id];
}
- mutex_unlock(&common->lock);
-
/* Decrement channel usrs counter */
atomic_dec(&ch->usrs);
/* If this file handle has initialize encoder device, reset it */
}
/* functions implementing ioctls */
-
+/**
+ * vpif_querycap() - QUERYCAP handler
+ * @file: file ptr
+ * @priv: file handle
+ * @cap: ptr to v4l2_capability structure
+ */
static int vpif_querycap(struct file *file, void *priv,
struct v4l2_capability *cap)
{
if (common->fmt.type != fmt->type)
return -EINVAL;
- /* Fill in the information about format */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
- if (vpif_get_std_info(ch)) {
- vpif_err("Error getting the standard info\n");
+ if (vpif_update_resolution(ch))
return -EINVAL;
- }
-
*fmt = common->fmt;
- mutex_unlock(&common->lock);
return 0;
}
/* store the pix format in the channel object */
common->fmt.fmt.pix = *pixfmt;
/* store the format in the channel object */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
common->fmt = *fmt;
- mutex_unlock(&common->lock);
-
return 0;
}
struct common_obj *common;
enum v4l2_field field;
u8 index = 0;
- int ret = 0;
/* This file handle has not initialized the channel,
It is not allowed to do settings */
index = VPIF_VIDEO_INDEX;
common = &ch->common[index];
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
- if (common->fmt.type != reqbuf->type) {
- ret = -EINVAL;
- goto reqbuf_exit;
- }
+ if (common->fmt.type != reqbuf->type)
+ return -EINVAL;
- if (0 != common->io_usrs) {
- ret = -EBUSY;
- goto reqbuf_exit;
- }
+ if (0 != common->io_usrs)
+ return -EBUSY;
if (reqbuf->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
if (common->fmt.fmt.pix.field == V4L2_FIELD_ANY)
&common->irqlock,
reqbuf->type, field,
sizeof(struct videobuf_buffer), fh,
- NULL);
+ &common->lock);
/* Set io allowed member of file handle to TRUE */
fh->io_allowed[index] = 1;
INIT_LIST_HEAD(&common->dma_queue);
/* Allocate buffers */
- ret = videobuf_reqbufs(&common->buffer_queue, reqbuf);
-
-reqbuf_exit:
- mutex_unlock(&common->lock);
- return ret;
+ return videobuf_reqbufs(&common->buffer_queue, reqbuf);
}
static int vpif_querybuf(struct file *file, void *priv,
}
/* Call encoder subdevice function to set the standard */
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
ch->video.stdid = *std_id;
+ ch->video.dv_preset = V4L2_DV_INVALID;
+ memset(&ch->video.bt_timings, 0, sizeof(ch->video.bt_timings));
+
/* Get the information about the standard */
- if (vpif_get_std_info(ch)) {
- vpif_err("Error getting the standard info\n");
+ if (vpif_update_resolution(ch))
return -EINVAL;
- }
if ((ch->vpifparams.std_info.width *
ch->vpifparams.std_info.height * 2) >
config_params.channel_bufsize[ch->channel_id]) {
vpif_err("invalid std for this size\n");
- ret = -EINVAL;
- goto s_std_exit;
+ return -EINVAL;
}
common->fmt.fmt.pix.bytesperline = common->fmt.fmt.pix.width;
s_std_output, *std_id);
if (ret < 0) {
vpif_err("Failed to set output standard\n");
- goto s_std_exit;
+ return ret;
}
ret = v4l2_device_call_until_err(&vpif_obj.v4l2_dev, 1, core,
s_std, *std_id);
if (ret < 0)
vpif_err("Failed to set standard for sub devices\n");
-
-s_std_exit:
- mutex_unlock(&common->lock);
return ret;
}
if (ret < 0)
return ret;
- /* Call videobuf_streamon to start streaming in videobuf */
+ /* Call videobuf_streamon to start streaming in videobuf */
ret = videobuf_streamon(&common->buffer_queue);
if (ret < 0) {
vpif_err("videobuf_streamon\n");
return ret;
}
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
/* If buffer queue is empty, return error */
if (list_empty(&common->dma_queue)) {
vpif_err("buffer queue is empty\n");
- ret = -EIO;
- goto streamon_exit;
+ return -EIO;
}
/* Get the next frame from the buffer queue */
|| (!ch->vpifparams.std_info.frm_fmt
&& (common->fmt.fmt.pix.field == V4L2_FIELD_NONE))) {
vpif_err("conflict in field format and std format\n");
- ret = -EINVAL;
- goto streamon_exit;
+ return -EINVAL;
}
/* clock settings */
ch->vpifparams.std_info.hd_sd);
if (ret < 0) {
vpif_err("can't set clock\n");
- goto streamon_exit;
+ return ret;
}
/* set the parameters and addresses */
ret = vpif_set_video_params(vpif, ch->channel_id + 2);
if (ret < 0)
- goto streamon_exit;
+ return ret;
common->started = ret;
vpif_config_addr(ch, ret);
}
channel_first_int[VPIF_VIDEO_INDEX][ch->channel_id] = 1;
}
-
-streamon_exit:
- mutex_unlock(&common->lock);
return ret;
}
return -EINVAL;
}
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
if (buftype == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
/* disable channel */
if (VPIF_CHANNEL2_VIDEO == ch->channel_id) {
}
common->started = 0;
- mutex_unlock(&common->lock);
-
return videobuf_streamoff(&common->buffer_queue);
}
struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
int ret = 0;
- if (mutex_lock_interruptible(&common->lock))
- return -ERESTARTSYS;
-
if (common->started) {
vpif_err("Streaming in progress\n");
- ret = -EBUSY;
- goto s_output_exit;
+ return -EBUSY;
}
ret = v4l2_device_call_until_err(&vpif_obj.v4l2_dev, 1, video,
vpif_err("Failed to set output standard\n");
vid_ch->output_id = i;
-
-s_output_exit:
- mutex_unlock(&common->lock);
return ret;
}
return v4l2_prio_change(&ch->prio, &fh->prio, p);
}
+/**
+ * vpif_enum_dv_presets() - ENUM_DV_PRESETS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_enum_dv_presets(struct file *file, void *priv,
+ struct v4l2_dv_enum_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct video_obj *vid_ch = &ch->video;
+
+ return v4l2_subdev_call(vpif_obj.sd[vid_ch->output_id],
+ video, enum_dv_presets, preset);
+}
+
+/**
+ * vpif_s_dv_preset() - S_DV_PRESET handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_s_dv_preset(struct file *file, void *priv,
+ struct v4l2_dv_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX];
+ struct video_obj *vid_ch = &ch->video;
+ int ret = 0;
+
+ if (common->started) {
+ vpif_dbg(1, debug, "streaming in progress\n");
+ return -EBUSY;
+ }
+
+ ret = v4l2_prio_check(&ch->prio, fh->prio);
+ if (ret != 0)
+ return ret;
+
+ fh->initialized = 1;
+
+ /* Call encoder subdevice function to set the standard */
+ if (mutex_lock_interruptible(&common->lock))
+ return -ERESTARTSYS;
+
+ ch->video.dv_preset = preset->preset;
+ ch->video.stdid = V4L2_STD_UNKNOWN;
+ memset(&ch->video.bt_timings, 0, sizeof(ch->video.bt_timings));
+
+ /* Get the information about the standard */
+ if (vpif_update_resolution(ch)) {
+ ret = -EINVAL;
+ } else {
+ /* Configure the default format information */
+ vpif_config_format(ch);
+
+ ret = v4l2_subdev_call(vpif_obj.sd[vid_ch->output_id],
+ video, s_dv_preset, preset);
+ }
+
+ mutex_unlock(&common->lock);
+
+ return ret;
+}
+/**
+ * vpif_g_dv_preset() - G_DV_PRESET handler
+ * @file: file ptr
+ * @priv: file handle
+ * @preset: input preset
+ */
+static int vpif_g_dv_preset(struct file *file, void *priv,
+ struct v4l2_dv_preset *preset)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+
+ preset->preset = ch->video.dv_preset;
+
+ return 0;
+}
+/**
+ * vpif_s_dv_timings() - S_DV_TIMINGS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @timings: digital video timings
+ */
+static int vpif_s_dv_timings(struct file *file, void *priv,
+ struct v4l2_dv_timings *timings)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct vpif_params *vpifparams = &ch->vpifparams;
+ struct vpif_channel_config_params *std_info = &vpifparams->std_info;
+ struct video_obj *vid_ch = &ch->video;
+ struct v4l2_bt_timings *bt = &vid_ch->bt_timings;
+ int ret;
+
+ if (timings->type != V4L2_DV_BT_656_1120) {
+ vpif_dbg(2, debug, "Timing type not defined\n");
+ return -EINVAL;
+ }
+
+ /* Configure subdevice timings, if any */
+ ret = v4l2_subdev_call(vpif_obj.sd[vid_ch->output_id],
+ video, s_dv_timings, timings);
+ if (ret == -ENOIOCTLCMD) {
+ vpif_dbg(2, debug, "Custom DV timings not supported by "
+ "subdevice\n");
+ return -EINVAL;
+ }
+ if (ret < 0) {
+ vpif_dbg(2, debug, "Error setting custom DV timings\n");
+ return ret;
+ }
+
+ if (!(timings->bt.width && timings->bt.height &&
+ (timings->bt.hbackporch ||
+ timings->bt.hfrontporch ||
+ timings->bt.hsync) &&
+ timings->bt.vfrontporch &&
+ (timings->bt.vbackporch ||
+ timings->bt.vsync))) {
+ vpif_dbg(2, debug, "Timings for width, height, "
+ "horizontal back porch, horizontal sync, "
+ "horizontal front porch, vertical back porch, "
+ "vertical sync and vertical back porch "
+ "must be defined\n");
+ return -EINVAL;
+ }
+
+ *bt = timings->bt;
+
+ /* Configure video port timings */
+
+ std_info->eav2sav = bt->hbackporch + bt->hfrontporch +
+ bt->hsync - 8;
+ std_info->sav2eav = bt->width;
+
+ std_info->l1 = 1;
+ std_info->l3 = bt->vsync + bt->vbackporch + 1;
+
+ if (bt->interlaced) {
+ if (bt->il_vbackporch || bt->il_vfrontporch || bt->il_vsync) {
+ std_info->vsize = bt->height * 2 +
+ bt->vfrontporch + bt->vsync + bt->vbackporch +
+ bt->il_vfrontporch + bt->il_vsync +
+ bt->il_vbackporch;
+ std_info->l5 = std_info->vsize/2 -
+ (bt->vfrontporch - 1);
+ std_info->l7 = std_info->vsize/2 + 1;
+ std_info->l9 = std_info->l7 + bt->il_vsync +
+ bt->il_vbackporch + 1;
+ std_info->l11 = std_info->vsize -
+ (bt->il_vfrontporch - 1);
+ } else {
+ vpif_dbg(2, debug, "Required timing values for "
+ "interlaced BT format missing\n");
+ return -EINVAL;
+ }
+ } else {
+ std_info->vsize = bt->height + bt->vfrontporch +
+ bt->vsync + bt->vbackporch;
+ std_info->l5 = std_info->vsize - (bt->vfrontporch - 1);
+ }
+ strncpy(std_info->name, "Custom timings BT656/1120",
+ VPIF_MAX_NAME);
+ std_info->width = bt->width;
+ std_info->height = bt->height;
+ std_info->frm_fmt = bt->interlaced ? 0 : 1;
+ std_info->ycmux_mode = 0;
+ std_info->capture_format = 0;
+ std_info->vbi_supported = 0;
+ std_info->hd_sd = 1;
+ std_info->stdid = 0;
+ std_info->dv_preset = V4L2_DV_INVALID;
+
+ vid_ch->stdid = 0;
+ vid_ch->dv_preset = V4L2_DV_INVALID;
+
+ return 0;
+}
+
+/**
+ * vpif_g_dv_timings() - G_DV_TIMINGS handler
+ * @file: file ptr
+ * @priv: file handle
+ * @timings: digital video timings
+ */
+static int vpif_g_dv_timings(struct file *file, void *priv,
+ struct v4l2_dv_timings *timings)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct video_obj *vid_ch = &ch->video;
+ struct v4l2_bt_timings *bt = &vid_ch->bt_timings;
+
+ timings->bt = *bt;
+
+ return 0;
+}
+
+/*
+ * vpif_g_chip_ident() - Identify the chip
+ * @file: file ptr
+ * @priv: file handle
+ * @chip: chip identity
+ *
+ * Returns zero or -EINVAL if the read operation fails.
+ */
+static int vpif_g_chip_ident(struct file *file, void *priv,
+ struct v4l2_dbg_chip_ident *chip)
+{
+ chip->ident = V4L2_IDENT_NONE;
+ chip->revision = 0;
+ if (chip->match.type != V4L2_CHIP_MATCH_I2C_DRIVER &&
+ chip->match.type != V4L2_CHIP_MATCH_I2C_ADDR) {
+ vpif_dbg(2, debug, "match_type is invalid.\n");
+ return -EINVAL;
+ }
+
+ return v4l2_device_call_until_err(&vpif_obj.v4l2_dev, 0, core,
+ g_chip_ident, chip);
+}
+
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+/*
+ * vpif_dbg_g_register() - Read register
+ * @file: file ptr
+ * @priv: file handle
+ * @reg: register to be read
+ *
+ * Debugging only
+ * Returns zero or -EINVAL if the read operation fails.
+ */
+static int vpif_dbg_g_register(struct file *file, void *priv,
+ struct v4l2_dbg_register *reg)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct video_obj *vid_ch = &ch->video;
+
+ return v4l2_subdev_call(vpif_obj.sd[vid_ch->output_id], core,
+ g_register, reg);
+}
+
+/*
+ * vpif_dbg_s_register() - Write to register
+ * @file: file ptr
+ * @priv: file handle
+ * @reg: register to be modified
+ *
+ * Debugging only
+ * Returns zero or -EINVAL if the write operation fails.
+ */
+static int vpif_dbg_s_register(struct file *file, void *priv,
+ struct v4l2_dbg_register *reg)
+{
+ struct vpif_fh *fh = priv;
+ struct channel_obj *ch = fh->channel;
+ struct video_obj *vid_ch = &ch->video;
+
+ return v4l2_subdev_call(vpif_obj.sd[vid_ch->output_id], core,
+ s_register, reg);
+}
+#endif
+
+/*
+ * vpif_log_status() - Status information
+ * @file: file ptr
+ * @priv: file handle
+ *
+ * Returns zero.
+ */
+static int vpif_log_status(struct file *filep, void *priv)
+{
+ /* status for sub devices */
+ v4l2_device_call_all(&vpif_obj.v4l2_dev, 0, core, log_status);
+
+ return 0;
+}
+
/* vpif display ioctl operations */
static const struct v4l2_ioctl_ops vpif_ioctl_ops = {
.vidioc_querycap = vpif_querycap,
.vidioc_s_output = vpif_s_output,
.vidioc_g_output = vpif_g_output,
.vidioc_cropcap = vpif_cropcap,
+ .vidioc_enum_dv_presets = vpif_enum_dv_presets,
+ .vidioc_s_dv_preset = vpif_s_dv_preset,
+ .vidioc_g_dv_preset = vpif_g_dv_preset,
+ .vidioc_s_dv_timings = vpif_s_dv_timings,
+ .vidioc_g_dv_timings = vpif_g_dv_timings,
+ .vidioc_g_chip_ident = vpif_g_chip_ident,
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+ .vidioc_g_register = vpif_dbg_g_register,
+ .vidioc_s_register = vpif_dbg_s_register,
+#endif
+ .vidioc_log_status = vpif_log_status,
};
static const struct v4l2_file_operations vpif_fops = {
.owner = THIS_MODULE,
.open = vpif_open,
.release = vpif_release,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
.mmap = vpif_mmap,
.poll = vpif_poll
};
v4l2_prio_init(&ch->prio);
ch->common[VPIF_VIDEO_INDEX].fmt.type =
V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ ch->video_dev->lock = &common->lock;
/* register video device */
vpif_dbg(1, debug, "channel=%x,channel->video_dev=%x\n",
vpif_obj.sd[i]->grp_id = 1 << i;
}
+ v4l2_info(&vpif_obj.v4l2_dev,
+ "DM646x VPIF display driver initialized\n");
return 0;
probe_subdev_out:
* most recent displayed frame only */
v4l2_std_id stdid; /* Currently selected or default
* standard */
+ u32 dv_preset;
+ struct v4l2_bt_timings bt_timings;
u32 output_id; /* Current output id */
};
#include <media/saa7115.h>
#include <media/tvp5150.h>
#include <media/tvaudio.h>
+#include <media/mt9v011.h>
#include <media/i2c-addr.h>
#include <media/tveeprom.h>
#include <media/v4l2-common.h>
I2C_CLIENT_END
};
-static unsigned short mt9v011_addrs[] = {
- 0xba >> 1,
- I2C_CLIENT_END
-};
-
static unsigned short msp3400_addrs[] = {
0x80 >> 1,
0x88 >> 1,
dev->init_data.ir_codes = RC_MAP_RC5_HAUPPAUGE_NEW;
dev->init_data.get_key = em28xx_get_key_em_haup;
dev->init_data.name = "i2c IR (EM2840 Hauppauge)";
+ break;
case EM2820_BOARD_LEADTEK_WINFAST_USBII_DELUXE:
dev->init_data.ir_codes = RC_MAP_WINFAST_USBII_DELUXE;
dev->init_data.get_key = em28xx_get_key_winfast_usbii_deluxe;
"tvp5150", 0, tvp5150_addrs);
if (dev->em28xx_sensor == EM28XX_MT9V011) {
+ struct mt9v011_platform_data pdata;
+ struct i2c_board_info mt9v011_info = {
+ .type = "mt9v011",
+ .addr = 0xba >> 1,
+ .platform_data = &pdata,
+ };
struct v4l2_subdev *sd;
- sd = v4l2_i2c_new_subdev(&dev->v4l2_dev,
- &dev->i2c_adap, "mt9v011", 0, mt9v011_addrs);
- v4l2_subdev_call(sd, core, s_config, 0, &dev->sensor_xtal);
+ pdata.xtal = dev->sensor_xtal;
+ sd = v4l2_i2c_new_subdev_board(&dev->v4l2_dev, &dev->i2c_adap,
+ &mt9v011_info, NULL);
}
/*****************************************************************************/
static const struct usb_device_id et61x251_id_table[] = {
- { USB_DEVICE(0x102c, 0x6151), },
{ USB_DEVICE(0x102c, 0x6251), },
- { USB_DEVICE(0x102c, 0x6253), },
- { USB_DEVICE(0x102c, 0x6254), },
- { USB_DEVICE(0x102c, 0x6255), },
- { USB_DEVICE(0x102c, 0x6256), },
- { USB_DEVICE(0x102c, 0x6257), },
- { USB_DEVICE(0x102c, 0x6258), },
- { USB_DEVICE(0x102c, 0x6259), },
- { USB_DEVICE(0x102c, 0x625a), },
- { USB_DEVICE(0x102c, 0x625b), },
- { USB_DEVICE(0x102c, 0x625c), },
- { USB_DEVICE(0x102c, 0x625d), },
- { USB_DEVICE(0x102c, 0x625e), },
- { USB_DEVICE(0x102c, 0x625f), },
- { USB_DEVICE(0x102c, 0x6260), },
- { USB_DEVICE(0x102c, 0x6261), },
- { USB_DEVICE(0x102c, 0x6262), },
- { USB_DEVICE(0x102c, 0x6263), },
- { USB_DEVICE(0x102c, 0x6264), },
- { USB_DEVICE(0x102c, 0x6265), },
- { USB_DEVICE(0x102c, 0x6266), },
- { USB_DEVICE(0x102c, 0x6267), },
- { USB_DEVICE(0x102c, 0x6268), },
- { USB_DEVICE(0x102c, 0x6269), },
{ }
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x04a5, 0x3035)},
{}
};
};
/* -- module initialisation -- */
-static const struct usb_device_id device_table[] __devinitconst = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0572, 0x0041)},
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
/* -- device connect -- */
-static int __devinit sd_probe(struct usb_interface *intf,
+static int sd_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0553, 0x0002)},
{USB_DEVICE(0x0813, 0x0001)},
{}
};
/* -- module initialisation -- */
-static const struct usb_device_id device_table[] __devinitconst = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x102c, 0x6151), .driver_info = SENSOR_PAS106},
#if !defined CONFIG_USB_ET61X251 && !defined CONFIG_USB_ET61X251_MODULE
{USB_DEVICE(0x102c, 0x6251), .driver_info = SENSOR_TAS5130CXX},
MODULE_DEVICE_TABLE(usb, device_table);
/* -- device connect -- */
-static int __devinit sd_probe(struct usb_interface *intf,
+static int sd_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
}
/* Table of supported USB devices */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x04cb, 0x0104)},
{USB_DEVICE(0x04cb, 0x0109)},
{USB_DEVICE(0x04cb, 0x010b)},
/*=================== USB driver structure initialisation ==================*/
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x05e3, 0x0503)},
{USB_DEVICE(0x05e3, 0xf191)},
{}
MODULE_DESCRIPTION("GSPCA USB Camera Driver");
MODULE_LICENSE("GPL");
-#define DRIVER_VERSION_NUMBER KERNEL_VERSION(2, 11, 0)
+#define DRIVER_VERSION_NUMBER KERNEL_VERSION(2, 12, 0)
#ifdef GSPCA_DEBUG
int gspca_debug = D_ERR | D_PROBE;
return 0;
}
-static int frame_alloc(struct gspca_dev *gspca_dev,
- unsigned int count)
+static int frame_alloc(struct gspca_dev *gspca_dev, struct file *file,
+ enum v4l2_memory memory, unsigned int count)
{
struct gspca_frame *frame;
unsigned int frsz;
frsz = gspca_dev->cam.cam_mode[i].sizeimage;
PDEBUG(D_STREAM, "frame alloc frsz: %d", frsz);
frsz = PAGE_ALIGN(frsz);
- gspca_dev->frsz = frsz;
if (count >= GSPCA_MAX_FRAMES)
count = GSPCA_MAX_FRAMES - 1;
gspca_dev->frbuf = vmalloc_32(frsz * count);
err("frame alloc failed");
return -ENOMEM;
}
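+ /* Record the capturing file, memory type and frame size only once
+ * the buffer allocation has succeeded */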
+ gspca_dev->capt_file = file;
+ gspca_dev->memory = memory;
+ gspca_dev->frsz = frsz;
gspca_dev->nframes = count;
for (i = 0; i < count; i++) {
frame = &gspca_dev->frame[i];
frame->v4l2_buf.flags = 0;
frame->v4l2_buf.field = V4L2_FIELD_NONE;
frame->v4l2_buf.length = frsz;
- frame->v4l2_buf.memory = gspca_dev->memory;
+ frame->v4l2_buf.memory = memory;
frame->v4l2_buf.sequence = 0;
frame->data = gspca_dev->frbuf + i * frsz;
frame->v4l2_buf.m.offset = i * frsz;
gspca_dev->frame[i].data = NULL;
}
gspca_dev->nframes = 0;
+ gspca_dev->frsz = 0;
+ gspca_dev->capt_file = NULL;
+ gspca_dev->memory = GSPCA_MEMORY_NO;
}
static void destroy_urbs(struct gspca_dev *gspca_dev)
static int dev_open(struct file *file)
{
struct gspca_dev *gspca_dev;
- int ret;
PDEBUG(D_STREAM, "[%s] open", current->comm);
gspca_dev = (struct gspca_dev *) video_devdata(file);
- if (mutex_lock_interruptible(&gspca_dev->queue_lock))
- return -ERESTARTSYS;
- if (!gspca_dev->present) {
- ret = -ENODEV;
- goto out;
- }
-
- if (gspca_dev->users > 4) { /* (arbitrary value) */
- ret = -EBUSY;
- goto out;
- }
+ if (!gspca_dev->present)
+ return -ENODEV;
/* protect the subdriver against rmmod */
- if (!try_module_get(gspca_dev->module)) {
- ret = -ENODEV;
- goto out;
- }
-
- gspca_dev->users++;
+ if (!try_module_get(gspca_dev->module))
+ return -ENODEV;
file->private_data = gspca_dev;
#ifdef GSPCA_DEBUG
gspca_dev->vdev.debug &= ~(V4L2_DEBUG_IOCTL
| V4L2_DEBUG_IOCTL_ARG);
#endif
- ret = 0;
-out:
- mutex_unlock(&gspca_dev->queue_lock);
- if (ret != 0)
- PDEBUG(D_ERR|D_STREAM, "open failed err %d", ret);
- else
- PDEBUG(D_STREAM, "open done");
- return ret;
+ return 0;
}
static int dev_close(struct file *file)
PDEBUG(D_STREAM, "[%s] close", current->comm);
if (mutex_lock_interruptible(&gspca_dev->queue_lock))
return -ERESTARTSYS;
- gspca_dev->users--;
/* if the file did the capture, free the streaming resources */
if (gspca_dev->capt_file == file) {
mutex_unlock(&gspca_dev->usb_lock);
}
frame_free(gspca_dev);
- gspca_dev->capt_file = NULL;
- gspca_dev->memory = GSPCA_MEMORY_NO;
}
file->private_data = NULL;
module_put(gspca_dev->module);
return -ERESTARTSYS;
if (gspca_dev->memory != GSPCA_MEMORY_NO
+ && gspca_dev->memory != GSPCA_MEMORY_READ
&& gspca_dev->memory != rb->memory) {
ret = -EBUSY;
goto out;
gspca_stream_off(gspca_dev);
mutex_unlock(&gspca_dev->usb_lock);
}
+ /* Don't restart the stream when switching from read to mmap mode */
+ if (gspca_dev->memory == GSPCA_MEMORY_READ)
+ streaming = 0;
/* free the previous allocated buffers, if any */
- if (gspca_dev->nframes != 0) {
+ if (gspca_dev->nframes != 0)
frame_free(gspca_dev);
- gspca_dev->capt_file = NULL;
- }
if (rb->count == 0) /* unrequest */
goto out;
- gspca_dev->memory = rb->memory;
- ret = frame_alloc(gspca_dev, rb->count);
+ ret = frame_alloc(gspca_dev, file, rb->memory, rb->count);
if (ret == 0) {
rb->count = gspca_dev->nframes;
- gspca_dev->capt_file = file;
if (streaming)
ret = gspca_init_transfer(gspca_dev);
}
if (buf_type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
- if (!gspca_dev->streaming)
- return 0;
+
if (mutex_lock_interruptible(&gspca_dev->queue_lock))
return -ERESTARTSYS;
+ if (!gspca_dev->streaming) {
+ ret = 0;
+ goto out;
+ }
+
/* check the capture file */
if (gspca_dev->capt_file != file) {
ret = -EBUSY;
gspca_dev->usb_err = 0;
gspca_stream_off(gspca_dev);
mutex_unlock(&gspca_dev->usb_lock);
+ /* In case another thread is waiting in dqbuf */
+ wake_up_interruptible(&gspca_dev->wq);
/* empty the transfer queues */
atomic_set(&gspca_dev->fr_q, 0);
return ret;
}
+static int frame_ready_nolock(struct gspca_dev *gspca_dev, struct file *file,
+ enum v4l2_memory memory)
+{
+ if (!gspca_dev->present)
+ return -ENODEV;
+ if (gspca_dev->capt_file != file || gspca_dev->memory != memory ||
+ !gspca_dev->streaming)
+ return -EINVAL;
+
+ /* check if a frame is ready */
+ return gspca_dev->fr_o != atomic_read(&gspca_dev->fr_i);
+}
+
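+/* Locked wrapper around frame_ready_nolock(); used below as the
+ * wait_event condition in vidioc_dqbuf() */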
+static int frame_ready(struct gspca_dev *gspca_dev, struct file *file,
+ enum v4l2_memory memory)
+{
+ int ret;
+
+ if (mutex_lock_interruptible(&gspca_dev->queue_lock))
+ return -ERESTARTSYS;
+ ret = frame_ready_nolock(gspca_dev, file, memory);
+ mutex_unlock(&gspca_dev->queue_lock);
+ return ret;
+}
+
/*
- * wait for a video frame
+ * dequeue a video buffer
*
- * If a frame is ready, its index is returned.
+ * If O_NONBLOCK is not set, block until a buffer is available.
*/
-static int frame_wait(struct gspca_dev *gspca_dev,
- int nonblock_ing)
+static int vidioc_dqbuf(struct file *file, void *priv,
+ struct v4l2_buffer *v4l2_buf)
{
- int i, ret;
+ struct gspca_dev *gspca_dev = priv;
+ struct gspca_frame *frame;
+ int i, j, ret;
- /* check if a frame is ready */
- i = gspca_dev->fr_o;
- if (i == atomic_read(&gspca_dev->fr_i)) {
- if (nonblock_ing)
+ PDEBUG(D_FRAM, "dqbuf");
+
+ if (mutex_lock_interruptible(&gspca_dev->queue_lock))
+ return -ERESTARTSYS;
+
+ for (;;) {
+ ret = frame_ready_nolock(gspca_dev, file, v4l2_buf->memory);
+ if (ret < 0)
+ goto out;
+ if (ret > 0)
+ break;
+
+ mutex_unlock(&gspca_dev->queue_lock);
+
+ if (file->f_flags & O_NONBLOCK)
return -EAGAIN;
/* wait till a frame is ready */
ret = wait_event_interruptible_timeout(gspca_dev->wq,
- i != atomic_read(&gspca_dev->fr_i) ||
- !gspca_dev->streaming || !gspca_dev->present,
+ frame_ready(gspca_dev, file, v4l2_buf->memory),
msecs_to_jiffies(3000));
if (ret < 0)
return ret;
- if (ret == 0 || !gspca_dev->streaming || !gspca_dev->present)
+ if (ret == 0)
return -EIO;
+
+ if (mutex_lock_interruptible(&gspca_dev->queue_lock))
+ return -ERESTARTSYS;
}
+ i = gspca_dev->fr_o;
+ j = gspca_dev->fr_queue[i];
+ frame = &gspca_dev->frame[j];
+
gspca_dev->fr_o = (i + 1) % GSPCA_MAX_FRAMES;
if (gspca_dev->sd_desc->dq_callback) {
gspca_dev->sd_desc->dq_callback(gspca_dev);
mutex_unlock(&gspca_dev->usb_lock);
}
- return gspca_dev->fr_queue[i];
-}
-
-/*
- * dequeue a video buffer
- *
- * If nonblock_ing is false, block until a buffer is available.
- */
-static int vidioc_dqbuf(struct file *file, void *priv,
- struct v4l2_buffer *v4l2_buf)
-{
- struct gspca_dev *gspca_dev = priv;
- struct gspca_frame *frame;
- int i, ret;
-
- PDEBUG(D_FRAM, "dqbuf");
- if (v4l2_buf->memory != gspca_dev->memory)
- return -EINVAL;
-
- if (!gspca_dev->present)
- return -ENODEV;
-
- /* if not streaming, be sure the application will not loop forever */
- if (!(file->f_flags & O_NONBLOCK)
- && !gspca_dev->streaming && gspca_dev->users == 1)
- return -EINVAL;
- /* only the capturing file may dequeue */
- if (gspca_dev->capt_file != file)
- return -EINVAL;
-
- /* only one dequeue / read at a time */
- if (mutex_lock_interruptible(&gspca_dev->read_lock))
- return -ERESTARTSYS;
+ frame->v4l2_buf.flags &= ~V4L2_BUF_FLAG_DONE;
+ memcpy(v4l2_buf, &frame->v4l2_buf, sizeof *v4l2_buf);
+ PDEBUG(D_FRAM, "dqbuf %d", j);
+ ret = 0;
- ret = frame_wait(gspca_dev, file->f_flags & O_NONBLOCK);
- if (ret < 0)
- goto out;
- i = ret; /* frame index */
- frame = &gspca_dev->frame[i];
if (gspca_dev->memory == V4L2_MEMORY_USERPTR) {
if (copy_to_user((__u8 __user *) frame->v4l2_buf.m.userptr,
frame->data,
PDEBUG(D_ERR|D_STREAM,
"dqbuf cp to user failed");
ret = -EFAULT;
- goto out;
}
}
- frame->v4l2_buf.flags &= ~V4L2_BUF_FLAG_DONE;
- memcpy(v4l2_buf, &frame->v4l2_buf, sizeof *v4l2_buf);
- PDEBUG(D_FRAM, "dqbuf %d", i);
- ret = 0;
out:
- mutex_unlock(&gspca_dev->read_lock);
+ mutex_unlock(&gspca_dev->queue_lock);
return ret;
}
poll_wait(file, &gspca_dev->wq, wait);
/* if reqbufs is not done, the user would use read() */
- if (gspca_dev->nframes == 0) {
- if (gspca_dev->memory != GSPCA_MEMORY_NO)
- return POLLERR; /* not the 1st time */
+ if (gspca_dev->memory == GSPCA_MEMORY_NO) {
ret = read_alloc(gspca_dev, file);
if (ret != 0)
return POLLERR;
PDEBUG(D_FRAM, "read (%zd)", count);
if (!gspca_dev->present)
return -ENODEV;
- switch (gspca_dev->memory) {
- case GSPCA_MEMORY_NO: /* first time */
+ if (gspca_dev->memory == GSPCA_MEMORY_NO) { /* first time ? */
ret = read_alloc(gspca_dev, file);
if (ret != 0)
return ret;
- break;
- case GSPCA_MEMORY_READ:
- if (gspca_dev->capt_file == file)
- break;
- /* fall thru */
- default:
- return -EINVAL;
}
/* get a frame */
goto out;
mutex_init(&gspca_dev->usb_lock);
- mutex_init(&gspca_dev->read_lock);
mutex_init(&gspca_dev->queue_lock);
init_waitqueue_head(&gspca_dev->wq);
PDEBUG(D_PROBE, "%s disconnect",
video_device_node_name(&gspca_dev->vdev));
mutex_lock(&gspca_dev->usb_lock);
+
gspca_dev->present = 0;
+ wake_up_interruptible(&gspca_dev->wq);
- if (gspca_dev->streaming) {
- destroy_urbs(gspca_dev);
- wake_up_interruptible(&gspca_dev->wq);
- }
+ destroy_urbs(gspca_dev);
#if defined(CONFIG_INPUT) || defined(CONFIG_INPUT_MODULE)
gspca_input_destroy_urb(gspca_dev);
wait_queue_head_t wq; /* wait queue */
struct mutex usb_lock; /* usb exchange protection */
- struct mutex read_lock; /* read protection */
struct mutex queue_lock; /* ISOC queue protection */
int usb_err; /* USB error - protected by usb_lock */
u16 pkt_size; /* ISOC packet size */
#ifdef CONFIG_PM
char frozen; /* suspend - resume */
#endif
- char users; /* number of opens */
char present; /* device connected */
char nbufread; /* number of buffers for read() */
char memory; /* memory type (V4L2_MEMORY_xxx) */
}
/* Table of supported USB devices */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0979, 0x0280)},
{}
};
memcpy(jpeg_hdr, jpeg_head, sizeof jpeg_head);
#ifndef CONEX_CAM
jpeg_hdr[JPEG_HEIGHT_OFFSET + 0] = height >> 8;
- jpeg_hdr[JPEG_HEIGHT_OFFSET + 1] = height & 0xff;
+ jpeg_hdr[JPEG_HEIGHT_OFFSET + 1] = height;
jpeg_hdr[JPEG_HEIGHT_OFFSET + 2] = width >> 8;
- jpeg_hdr[JPEG_HEIGHT_OFFSET + 3] = width & 0xff;
+ jpeg_hdr[JPEG_HEIGHT_OFFSET + 3] = width;
jpeg_hdr[JPEG_HEIGHT_OFFSET + 6] = samplesY;
#endif
}
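The dropped "& 0xff" masks are redundant here: the destination is a byte array, so assigning a 16-bit value already keeps only the low byte. A minimal standalone sketch of the same big-endian store (illustrative only, not driver code):

static void put_be16(unsigned char *p, unsigned short v)
{
	p[0] = v >> 8;	/* high byte */
	p[1] = v;	/* low byte; truncation on assignment replaces "& 0xff" */
}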
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x04c8, 0x0720)}, /* Intel YC 76 */
{}
};
static int dump_bridge;
int dump_sensor;
-static const __devinitdata struct usb_device_id m5602_table[] = {
+static const struct usb_device_id m5602_table[] = {
{USB_DEVICE(0x0402, 0x5602)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x093a, 0x050f)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x08ca, 0x0110)}, /* Trust Spyc@m 100 */
{USB_DEVICE(0x08ca, 0x0111)}, /* Aiptek Pencam VGA+ */
{USB_DEVICE(0x093a, 0x010f)}, /* All other known MR97310A VGA cams */
#define R511_SNAP_PXDIV 0x1c
#define R511_SNAP_LNDIV 0x1d
#define R511_SNAP_UV_EN 0x1e
-#define R511_SNAP_UV_EN 0x1e
#define R511_SNAP_OPTS 0x1f
#define R511_DRAM_FLOW_CTL 0x20
{ 0x6c, 0x0a },
{ 0x6d, 0x55 },
{ 0x6e, 0x11 },
- { 0x6f, 0x9f },
- /* "9e for advance AWB" */
+ { 0x6f, 0x9f }, /* "9e for advance AWB" */
{ 0x6a, 0x40 },
{ OV7670_R01_BLUE, 0x40 },
{ OV7670_R02_RED, 0x60 },
{
static const struct ov_regvals init_519[] = {
{ 0x5a, 0x6d }, /* EnableSystem */
- { 0x53, 0x9b },
+ { 0x53, 0x9b }, /* don't enable the microcontroller */
{ OV519_R54_EN_CLK1, 0xff }, /* set bit2 to enable jpeg */
{ 0x5d, 0x03 },
{ 0x49, 0x01 },
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x4003), .driver_info = BRIDGE_W9968CF },
{USB_DEVICE(0x041e, 0x4052), .driver_info = BRIDGE_OV519 },
{USB_DEVICE(0x041e, 0x405f), .driver_info = BRIDGE_OV519 },
struct usb_device *udev = gspca_dev->dev;
int ret;
- PDEBUG(D_USBO, "reg=0x%04x, val=0%02x", reg, val);
+ if (gspca_dev->usb_err < 0)
+ return;
+
+ PDEBUG(D_USBO, "SET 01 0000 %04x %02x", reg, val);
gspca_dev->usb_buf[0] = val;
ret = usb_control_msg(udev,
usb_sndctrlpipe(udev, 0),
0x01,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0x00, reg, gspca_dev->usb_buf, 1, CTRL_TIMEOUT);
- if (ret < 0)
+ if (ret < 0) {
err("write failed %d", ret);
+ gspca_dev->usb_err = ret;
+ }
}
static u8 ov534_reg_read(struct gspca_dev *gspca_dev, u16 reg)
struct usb_device *udev = gspca_dev->dev;
int ret;
+ if (gspca_dev->usb_err < 0)
+ return 0;
ret = usb_control_msg(udev,
usb_rcvctrlpipe(udev, 0),
0x01,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0x00, reg, gspca_dev->usb_buf, 1, CTRL_TIMEOUT);
- PDEBUG(D_USBI, "reg=0x%04x, data=0x%02x", reg, gspca_dev->usb_buf[0]);
- if (ret < 0)
+ PDEBUG(D_USBI, "GET 01 0000 %04x %02x", reg, gspca_dev->usb_buf[0]);
+ if (ret < 0) {
err("read failed %d", ret);
+ gspca_dev->usb_err = ret;
+ }
return gspca_dev->usb_buf[0];
}
static void sccb_reg_write(struct gspca_dev *gspca_dev, u8 reg, u8 val)
{
- PDEBUG(D_USBO, "reg: 0x%02x, val: 0x%02x", reg, val);
+ PDEBUG(D_USBO, "sccb write: %02x %02x", reg, val);
ov534_reg_write(gspca_dev, OV534_REG_SUBADDR, reg);
ov534_reg_write(gspca_dev, OV534_REG_WRITE, val);
ov534_reg_write(gspca_dev, OV534_REG_OPERATION, OV534_OP_WRITE_3);
- if (!sccb_check_status(gspca_dev))
+ if (!sccb_check_status(gspca_dev)) {
err("sccb_reg_write failed");
+ gspca_dev->usb_err = -EIO;
+ }
}
static u8 sccb_reg_read(struct gspca_dev *gspca_dev, u16 reg)
ov534_set_led(gspca_dev, 0);
set_frame_rate(gspca_dev);
- return 0;
+ return gspca_dev->usb_err;
}
static int sd_start(struct gspca_dev *gspca_dev)
ov534_set_led(gspca_dev, 1);
ov534_reg_write(gspca_dev, 0xe0, 0x00);
- return 0;
+ return gspca_dev->usb_err;
}
static void sd_stopN(struct gspca_dev *gspca_dev)
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x1415, 0x2000)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x06f8, 0x3003)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x4028)},
{USB_DEVICE(0x093a, 0x2460)},
{USB_DEVICE(0x093a, 0x2461)},
};
/* -- module initialisation -- */
-static const struct usb_device_id device_table[] __devinitconst = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x06f8, 0x3009)},
{USB_DEVICE(0x093a, 0x2620)},
{USB_DEVICE(0x093a, 0x2621)},
MODULE_DEVICE_TABLE(usb, device_table);
/* -- device connect -- */
-static int __devinit sd_probe(struct usb_interface *intf,
+static int sd_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
};
/* -- module initialisation -- */
-static const struct usb_device_id device_table[] __devinitconst = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x093a, 0x2600)},
{USB_DEVICE(0x093a, 0x2601)},
{USB_DEVICE(0x093a, 0x2603)},
MODULE_DEVICE_TABLE(usb, device_table);
/* -- device connect -- */
-static int __devinit sd_probe(struct usb_interface *intf,
+static int sd_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0458, 0x7005)}, /* Genius Smart 300, version 2 */
/* The Genius Smart is untested. I can't find an owner ! */
/* {USB_DEVICE(0x0c45, 0x8000)}, DC31VC, Don't know this camera */
| (SENSOR_ ## sensor << 8) \
| (i2c_addr)
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0c45, 0x6240), SN9C20X(MT9M001, 0x5d, 0)},
{USB_DEVICE(0x0c45, 0x6242), SN9C20X(MT9M111, 0x5d, 0)},
{USB_DEVICE(0x0c45, 0x6248), SN9C20X(OV9655, 0x30, 0)},
/* Some documentation on known sonixb registers:
Reg Use
+sn9c101 / sn9c102:
0x10 high nibble red gain low nibble blue gain
0x11 low nibble green gain
+sn9c103:
+0x05 red gain 0-127
+0x06 blue gain 0-127
+0x07 green gain 0-127
+all:
+0x08-0x0f i2c / 3wire registers
0x12 hstart
0x13 vstart
0x15 hsize (hsize = register-value * 16)
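Per the comment above (size register = pixels / 16), the 640x480 values used throughout the init tables below work out to 0x28 and 0x1e. A sketch of that relation (illustrative only, not driver code):

/* sonixb size register value from a pixel count (see comment above) */
static inline unsigned char sn9c10x_size_reg(unsigned int pixels)
{
	return pixels / 16;	/* 640 -> 0x28, 480 -> 0x1e, 352 -> 0x16, 288 -> 0x12 */
}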
typedef const __u8 sensor_init_t[8];
struct sensor_data {
- const __u8 *bridge_init[2];
- int bridge_init_size[2];
+ const __u8 *bridge_init;
sensor_init_t *sensor_init;
int sensor_init_size;
- sensor_init_t *sensor_bridge_init[2];
- int sensor_bridge_init_size[2];
int flags;
unsigned ctrl_dis;
__u8 sensor_addr;
#define NO_FREQ (1 << FREQ_IDX)
#define NO_BRIGHTNESS (1 << BRIGHTNESS_IDX)
-#define COMP2 0x8f
#define COMP 0xc7 /* 0x87 //0x07 */
#define COMP1 0xc9 /* 0x89 //0x09 */
#define SYS_CLK 0x04
-#define SENS(bridge_1, bridge_3, sensor, sensor_1, \
- sensor_3, _flags, _ctrl_dis, _sensor_addr) \
+#define SENS(bridge, sensor, _flags, _ctrl_dis, _sensor_addr) \
{ \
- .bridge_init = { bridge_1, bridge_3 }, \
- .bridge_init_size = { sizeof(bridge_1), sizeof(bridge_3) }, \
+ .bridge_init = bridge, \
.sensor_init = sensor, \
.sensor_init_size = sizeof(sensor), \
- .sensor_bridge_init = { sensor_1, sensor_3,}, \
- .sensor_bridge_init_size = { sizeof(sensor_1), sizeof(sensor_3)}, \
.flags = _flags, .ctrl_dis = _ctrl_dis, .sensor_addr = _sensor_addr \
}
0x00, 0x00,
0x00, 0x00, 0x00, 0x02, 0x02, 0x00,
0x28, 0x1e, 0x60, 0x8e, 0x42,
- 0x1d, 0x10, 0x02, 0x03, 0x0f, 0x0c
};
static const __u8 hv7131d_sensor_init[][8] = {
{0xa0, 0x11, 0x01, 0x04, 0x00, 0x00, 0x00, 0x17},
0x00, 0x00,
0x00, 0x00, 0x00, 0x02, 0x01, 0x00,
0x28, 0x1e, 0x60, 0x8a, 0x20,
- 0x1d, 0x10, 0x02, 0x03, 0x0f, 0x0c
};
static const __u8 hv7131r_sensor_init[][8] = {
{0xc0, 0x11, 0x31, 0x38, 0x2a, 0x2e, 0x00, 0x10},
0x44, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80,
0x60, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x01, 0x0a, 0x16, 0x12, 0x68, 0x8b,
- 0x10, 0x1d, 0x10, 0x02, 0x02, 0x09, 0x07
+ 0x10,
};
static const __u8 ov6650_sensor_init[][8] = {
/* Bright, contrast, etc are set through SCBB interface.
0x21, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* r09 .. r10 */
0x00, 0x01, 0x01, 0x0a, /* r11 .. r14 */
0x28, 0x1e, /* H & V sizes r15 .. r16 */
- 0x68, COMP2, MCK_INIT1, /* r17 .. r19 */
- 0x1d, 0x10, 0x02, 0x03, 0x0f, 0x0c /* r1a .. r1f */
-};
-static const __u8 initOv7630_3[] = {
- 0x44, 0x44, 0x00, 0x1a, 0x20, 0x20, 0x20, 0x80, /* r01 .. r08 */
- 0x21, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* r09 .. r10 */
- 0x00, 0x02, 0x01, 0x0a, /* r11 .. r14 */
- 0x28, 0x1e, /* H & V sizes r15 .. r16 */
0x68, 0x8f, MCK_INIT1, /* r17 .. r19 */
- 0x1d, 0x10, 0x02, 0x03, 0x0f, 0x0c, 0x00, /* r1a .. r20 */
- 0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80, /* r21 .. r28 */
- 0x90, 0xa0, 0xb0, 0xc0, 0xd0, 0xe0, 0xf0, 0xff /* r29 .. r30 */
};
static const __u8 ov7630_sensor_init[][8] = {
{0xa0, 0x21, 0x12, 0x80, 0x00, 0x00, 0x00, 0x10},
{0xb0, 0x21, 0x01, 0x77, 0x3a, 0x00, 0x00, 0x10},
/* {0xd0, 0x21, 0x12, 0x7c, 0x01, 0x80, 0x34, 0x10}, jfm */
- {0xd0, 0x21, 0x12, 0x1c, 0x00, 0x80, 0x34, 0x10}, /* jfm */
+ {0xd0, 0x21, 0x12, 0x5c, 0x00, 0x80, 0x34, 0x10}, /* jfm */
{0xa0, 0x21, 0x1b, 0x04, 0x00, 0x80, 0x34, 0x10},
{0xa0, 0x21, 0x20, 0x44, 0x00, 0x80, 0x34, 0x10},
{0xa0, 0x21, 0x23, 0xee, 0x00, 0x80, 0x34, 0x10},
{0xd0, 0x21, 0x17, 0x1c, 0xbd, 0x06, 0xf6, 0x10},
};
-static const __u8 ov7630_sensor_init_3[][8] = {
- {0xa0, 0x21, 0x13, 0x80, 0x00, 0x00, 0x00, 0x10},
-};
-
static const __u8 initPas106[] = {
0x04, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81, 0x40, 0x00, 0x00, 0x00,
0x00, 0x00,
0x00, 0x00, 0x00, 0x04, 0x01, 0x00,
0x16, 0x12, 0x24, COMP1, MCK_INIT1,
- 0x18, 0x10, 0x02, 0x02, 0x09, 0x07
};
/* compression 0x86 mckinit1 0x2b */
0x00, 0x00,
0x00, 0x00, 0x00, 0x06, 0x03, 0x0a,
0x28, 0x1e, 0x20, 0x89, 0x20,
- 0x00, 0x00, 0x02, 0x03, 0x0f, 0x0c
};
/* "Known" PAS202BCB registers:
0x00, 0x00,
0x00, 0x00, 0x00, 0x45, 0x09, 0x0a,
0x16, 0x12, 0x60, 0x86, 0x2b,
- 0x14, 0x0a, 0x02, 0x02, 0x09, 0x07
};
/* Same as above, except a different hstart */
static const __u8 initTas5110d[] = {
0x00, 0x00,
0x00, 0x00, 0x00, 0x41, 0x09, 0x0a,
0x16, 0x12, 0x60, 0x86, 0x2b,
- 0x14, 0x0a, 0x02, 0x02, 0x09, 0x07
};
-static const __u8 tas5110_sensor_init[][8] = {
+/* tas5110c is 3 wire, tas5110d is 2 wire (regular i2c) */
+static const __u8 tas5110c_sensor_init[][8] = {
{0x30, 0x11, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x10},
{0x30, 0x11, 0x02, 0x20, 0xa9, 0x00, 0x00, 0x10},
- {0xa0, 0x61, 0x9a, 0xca, 0x00, 0x00, 0x00, 0x17},
+};
+/* Known TAS5110D registers
+ * reg02: gain, bit order reversed!! 0 == max gain, 255 == min gain
+ * reg03: bit3: vflip, bit4: ~hflip, bit7: ~gainboost (~ == inverted)
+ * Note: writing reg03 seems to only work when written together with 02
+ */
+static const __u8 tas5110d_sensor_init[][8] = {
+ {0xa0, 0x61, 0x9a, 0xca, 0x00, 0x00, 0x00, 0x17}, /* reset */
};
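The reversed bit order noted above means the value written to reg02 is the mirror image of the 8-bit gain; the unrolled shift/mask sequence added to setgain() below does exactly that. A loop-based sketch of the same reversal (illustrative only):

/* mirror the 8 bits of v: bit0 <-> bit7, bit1 <-> bit6, ... */
static unsigned char reverse8(unsigned char v)
{
	unsigned char r = 0;
	int i;

	for (i = 0; i < 8; i++)
		r |= ((v >> i) & 1) << (7 - i);
	return r;
}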
static const __u8 initTas5130[] = {
0x00, 0x00,
0x00, 0x00, 0x00, 0x68, 0x0c, 0x0a,
0x28, 0x1e, 0x60, COMP, MCK_INIT,
- 0x18, 0x10, 0x04, 0x03, 0x11, 0x0c
};
static const __u8 tas5130_sensor_init[][8] = {
/* {0x30, 0x11, 0x00, 0x40, 0x47, 0x00, 0x00, 0x10},
};
static struct sensor_data sensor_data[] = {
-SENS(initHv7131d, NULL, hv7131d_sensor_init, NULL, NULL, F_GAIN, NO_BRIGHTNESS|NO_FREQ, 0),
-SENS(initHv7131r, NULL, hv7131r_sensor_init, NULL, NULL, 0, NO_BRIGHTNESS|NO_EXPO|NO_FREQ, 0),
-SENS(initOv6650, NULL, ov6650_sensor_init, NULL, NULL, F_GAIN|F_SIF, 0, 0x60),
-SENS(initOv7630, initOv7630_3, ov7630_sensor_init, NULL, ov7630_sensor_init_3,
- F_GAIN, 0, 0x21),
-SENS(initPas106, NULL, pas106_sensor_init, NULL, NULL, F_GAIN|F_SIF, NO_FREQ,
- 0),
-SENS(initPas202, initPas202, pas202_sensor_init, NULL, NULL, F_GAIN,
- NO_FREQ, 0),
-SENS(initTas5110c, NULL, tas5110_sensor_init, NULL, NULL,
- F_GAIN|F_SIF|F_COARSE_EXPO, NO_BRIGHTNESS|NO_FREQ, 0),
-SENS(initTas5110d, NULL, tas5110_sensor_init, NULL, NULL,
- F_GAIN|F_SIF|F_COARSE_EXPO, NO_BRIGHTNESS|NO_FREQ, 0),
-SENS(initTas5130, NULL, tas5130_sensor_init, NULL, NULL, 0, NO_EXPO|NO_FREQ,
- 0),
+SENS(initHv7131d, hv7131d_sensor_init, F_GAIN, NO_BRIGHTNESS|NO_FREQ, 0),
+SENS(initHv7131r, hv7131r_sensor_init, 0, NO_BRIGHTNESS|NO_EXPO|NO_FREQ, 0),
+SENS(initOv6650, ov6650_sensor_init, F_GAIN|F_SIF, 0, 0x60),
+SENS(initOv7630, ov7630_sensor_init, F_GAIN, 0, 0x21),
+SENS(initPas106, pas106_sensor_init, F_GAIN|F_SIF, NO_FREQ, 0),
+SENS(initPas202, pas202_sensor_init, F_GAIN, NO_FREQ, 0),
+SENS(initTas5110c, tas5110c_sensor_init, F_GAIN|F_SIF|F_COARSE_EXPO,
+ NO_BRIGHTNESS|NO_FREQ, 0),
+SENS(initTas5110d, tas5110d_sensor_init, F_GAIN|F_SIF|F_COARSE_EXPO,
+ NO_BRIGHTNESS|NO_FREQ, 0),
+SENS(initTas5130, tas5130_sensor_init, F_GAIN,
+ NO_BRIGHTNESS|NO_EXPO|NO_FREQ, 0),
};
/* get one byte in gspca_dev->usb_buf */
static void setbrightness(struct gspca_dev *gspca_dev)
{
struct sd *sd = (struct sd *) gspca_dev;
- __u8 value;
switch (sd->sensor) {
case SENSOR_OV6650:
goto err;
break;
}
- case SENSOR_TAS5130CXX: {
- __u8 i2c[] =
- {0x30, 0x11, 0x02, 0x20, 0x70, 0x00, 0x00, 0x10};
-
- value = 0xff - sd->brightness;
- i2c[4] = value;
- PDEBUG(D_CONF, "brightness %d : %d", value, i2c[4]);
- if (i2c_w(gspca_dev, i2c) < 0)
- goto err;
- break;
- }
}
return;
err:
break;
}
case SENSOR_TAS5110C:
- case SENSOR_TAS5110D: {
+ case SENSOR_TAS5130CXX: {
__u8 i2c[] =
{0x30, 0x11, 0x02, 0x20, 0x70, 0x00, 0x00, 0x10};
goto err;
break;
}
+ case SENSOR_TAS5110D: {
+ __u8 i2c[] = {
+ 0xb0, 0x61, 0x02, 0x00, 0x10, 0x00, 0x00, 0x17 };
+ gain = 255 - gain;
+ /* The bits in the register are the wrong way around!! */
+ i2c[3] |= (gain & 0x80) >> 7;
+ i2c[3] |= (gain & 0x40) >> 5;
+ i2c[3] |= (gain & 0x20) >> 3;
+ i2c[3] |= (gain & 0x10) >> 1;
+ i2c[3] |= (gain & 0x08) << 1;
+ i2c[3] |= (gain & 0x04) << 3;
+ i2c[3] |= (gain & 0x02) << 5;
+ i2c[3] |= (gain & 0x01) << 7;
+ if (i2c_w(gspca_dev, i2c) < 0)
+ goto err;
+ break;
+ }
case SENSOR_OV6650:
gain >>= 1;
{
struct sd *sd = (struct sd *) gspca_dev;
__u8 gain;
- __u8 buf[2] = { 0, 0 };
+ __u8 buf[3] = { 0, 0, 0 };
if (sensor_data[sd->sensor].flags & F_GAIN) {
/* Use the sensor gain to do the actual gain */
return;
}
- gain = sd->gain >> 4;
-
- /* red and blue gain */
- buf[0] = gain << 4 | gain;
- /* green gain */
- buf[1] = gain;
- reg_w(gspca_dev, 0x10, buf, 2);
+ if (sd->bridge == BRIDGE_103) {
+ gain = sd->gain >> 1;
+ buf[0] = gain; /* Red */
+ buf[1] = gain; /* Green */
+ buf[2] = gain; /* Blue */
+ reg_w(gspca_dev, 0x05, buf, 3);
+ } else {
+ gain = sd->gain >> 4;
+ buf[0] = gain << 4 | gain; /* Red and blue */
+ buf[1] = gain; /* Green */
+ reg_w(gspca_dev, 0x10, buf, 2);
+ }
}
static void setexposure(struct gspca_dev *gspca_dev)
desired_avg_lum = 5000;
} else {
deadzone = 1500;
- desired_avg_lum = 18000;
+ desired_avg_lum = 13000;
}
if (sensor_data[sd->sensor].flags & F_COARSE_EXPO)
{
struct sd *sd = (struct sd *) gspca_dev;
struct cam *cam = &gspca_dev->cam;
- int mode, l;
- const __u8 *sn9c10x;
- __u8 reg12_19[8];
+ int i, mode;
+ __u8 regs[0x31];
mode = cam->cam_mode[gspca_dev->curr_mode].priv & 0x07;
- sn9c10x = sensor_data[sd->sensor].bridge_init[sd->bridge];
- l = sensor_data[sd->sensor].bridge_init_size[sd->bridge];
- memcpy(reg12_19, &sn9c10x[0x12 - 1], 8);
- reg12_19[6] = sn9c10x[0x18 - 1] | (mode << 4);
- /* Special cases where reg 17 and or 19 value depends on mode */
+ /* Copy registers 0x01 - 0x19 from the template */
+ memcpy(&regs[0x01], sensor_data[sd->sensor].bridge_init, 0x19);
+ /* Set the mode */
+ regs[0x18] |= mode << 4;
+
+ /* Set bridge gain to 1.0 */
+ if (sd->bridge == BRIDGE_103) {
+ regs[0x05] = 0x20; /* Red */
+ regs[0x06] = 0x20; /* Green */
+ regs[0x07] = 0x20; /* Blue */
+ } else {
+ regs[0x10] = 0x00; /* Red and blue */
+ regs[0x11] = 0x00; /* Green */
+ }
+
+ /* Setup pixel numbers and auto exposure window */
+ if (sensor_data[sd->sensor].flags & F_SIF) {
+ regs[0x1a] = 0x14; /* HO_SIZE 640, makes no sense */
+ regs[0x1b] = 0x0a; /* VO_SIZE 320, makes no sense */
+ regs[0x1c] = 0x02; /* AE H-start 64 */
+ regs[0x1d] = 0x02; /* AE V-start 64 */
+ regs[0x1e] = 0x09; /* AE H-end 288 */
+ regs[0x1f] = 0x07; /* AE V-end 224 */
+ } else {
+ regs[0x1a] = 0x1d; /* HO_SIZE 960, makes no sense */
+ regs[0x1b] = 0x10; /* VO_SIZE 512, makes no sense */
+ regs[0x1c] = 0x05; /* AE H-start 160 */
+ regs[0x1d] = 0x03; /* AE V-start 96 */
+ regs[0x1e] = 0x0f; /* AE H-end 480 */
+ regs[0x1f] = 0x0c; /* AE V-end 384 */
+ }
+
+ /* Setup the gamma table (only used with the sn9c103 bridge) */
+ for (i = 0; i < 16; i++)
+ regs[0x20 + i] = i * 16;
+ regs[0x20 + i] = 255;
+
+ /* Special cases where some regs depend on mode or bridge */
switch (sd->sensor) {
case SENSOR_TAS5130CXX:
- /* probably not mode specific at all most likely the upper
+ /* FIXME / TESTME
+ probably not mode specific at all most likely the upper
nibble of 0x19 is exposure (clock divider) just as with
the tas5110, we need someone to test this. */
- reg12_19[7] = mode ? 0x23 : 0x43;
+ regs[0x19] = mode ? 0x23 : 0x43;
break;
+ case SENSOR_OV7630:
+ /* FIXME / TESTME for some reason with the 101/102 bridge the
+ clock is set to 12 MHz (reg1 == 0x04), rather than 24.
+ Also the hstart needs to go from 1 to 2 when using a 103,
+ which is likely related. This does not seem right. */
+ if (sd->bridge == BRIDGE_103) {
+ regs[0x01] = 0x44; /* Select 24 MHz clock */
+ regs[0x12] = 0x02; /* Set hstart to 2 */
+ }
}
/* Disable compression when the raw bayer format has been selected */
if (cam->cam_mode[gspca_dev->curr_mode].priv & MODE_RAW)
- reg12_19[6] &= ~0x80;
+ regs[0x18] &= ~0x80;
/* Vga mode emulation on SIF sensor? */
if (cam->cam_mode[gspca_dev->curr_mode].priv & MODE_REDUCED_SIF) {
- reg12_19[0] += 16; /* 0x12: hstart adjust */
- reg12_19[1] += 24; /* 0x13: vstart adjust */
- reg12_19[3] = 320 / 16; /* 0x15: hsize */
- reg12_19[4] = 240 / 16; /* 0x16: vsize */
+ regs[0x12] += 16; /* hstart adjust */
+ regs[0x13] += 24; /* vstart adjust */
+ regs[0x15] = 320 / 16; /* hsize */
+ regs[0x16] = 240 / 16; /* vsize */
}
/* reg 0x01 bit 2 video transfert on */
- reg_w(gspca_dev, 0x01, &sn9c10x[0x01 - 1], 1);
+ reg_w(gspca_dev, 0x01, &regs[0x01], 1);
/* reg 0x17 SensorClk enable inv Clk 0x60 */
- reg_w(gspca_dev, 0x17, &sn9c10x[0x17 - 1], 1);
+ reg_w(gspca_dev, 0x17, ®s[0x17], 1);
/* Set the registers from the template */
- reg_w(gspca_dev, 0x01, sn9c10x, l);
+ reg_w(gspca_dev, 0x01, &regs[0x01],
+ (sd->bridge == BRIDGE_103) ? 0x30 : 0x1f);
/* Init the sensor */
i2c_w_vector(gspca_dev, sensor_data[sd->sensor].sensor_init,
sensor_data[sd->sensor].sensor_init_size);
- if (sensor_data[sd->sensor].sensor_bridge_init[sd->bridge])
- i2c_w_vector(gspca_dev,
- sensor_data[sd->sensor].sensor_bridge_init[sd->bridge],
- sensor_data[sd->sensor].sensor_bridge_init_size[
- sd->bridge]);
- /* Mode specific sensor setup */
+ /* Mode / bridge specific sensor setup */
switch (sd->sensor) {
case SENSOR_PAS202: {
const __u8 i2cpclockdiv[] =
/* clockdiv from 4 to 3 (7.5 -> 10 fps) when in low res mode */
if (mode)
i2c_w(gspca_dev, i2cpclockdiv);
+ break;
}
+ case SENSOR_OV7630:
+ /* FIXME / TESTME We should be able to handle this identical
+ for the 101/102 and the 103 case */
+ if (sd->bridge == BRIDGE_103) {
+ const __u8 i2c[] = { 0xa0, 0x21, 0x13,
+ 0x80, 0x00, 0x00, 0x00, 0x10 };
+ i2c_w(gspca_dev, i2c);
+ }
+ break;
}
/* H_size V_size 0x28, 0x1e -> 640x480. 0x16, 0x12 -> 352x288 */
- reg_w(gspca_dev, 0x15, &reg12_19[3], 2);
+ reg_w(gspca_dev, 0x15, &regs[0x15], 2);
/* compression register */
- reg_w(gspca_dev, 0x18, &reg12_19[6], 1);
+ reg_w(gspca_dev, 0x18, &regs[0x18], 1);
/* H_start */
- reg_w(gspca_dev, 0x12, &reg12_19[0], 1);
+ reg_w(gspca_dev, 0x12, &regs[0x12], 1);
/* V_START */
- reg_w(gspca_dev, 0x13, &reg12_19[1], 1);
+ reg_w(gspca_dev, 0x13, &regs[0x13], 1);
/* reset 0x17 SensorClk enable inv Clk 0x60 */
/*fixme: ov7630 [17]=68 8f (+20 if 102)*/
- reg_w(gspca_dev, 0x17, &reg12_19[5], 1);
+ reg_w(gspca_dev, 0x17, &regs[0x17], 1);
/*MCKSIZE ->3 */ /*fixme: not ov7630*/
- reg_w(gspca_dev, 0x19, &reg12_19[7], 1);
+ reg_w(gspca_dev, 0x19, &regs[0x19], 1);
/* AE_STRX AE_STRY AE_ENDX AE_ENDY */
- reg_w(gspca_dev, 0x1c, &sn9c10x[0x1c - 1], 4);
+ reg_w(gspca_dev, 0x1c, &regs[0x1c], 4);
/* Enable video transfert */
- reg_w(gspca_dev, 0x01, &sn9c10x[0], 1);
+ reg_w(gspca_dev, 0x01, ®s[0x01], 1);
/* Compression */
+ reg_w(gspca_dev, 0x18, &regs[0x18], 2);
+ reg_w(gspca_dev, 0x18, ®s[0x18], 2);
msleep(20);
sd->reg11 = -1;
.driver_info = (SENSOR_ ## sensor << 8) | BRIDGE_ ## bridge
-static const struct usb_device_id device_table[] __devinitconst = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0c45, 0x6001), SB(TAS5110C, 102)}, /* TAS5110C1B */
{USB_DEVICE(0x0c45, 0x6005), SB(TAS5110C, 101)}, /* TAS5110C1B */
{USB_DEVICE(0x0c45, 0x6007), SB(TAS5110D, 101)}, /* TAS5110D */
{USB_DEVICE(0x0c45, 0x6009), SB(PAS106, 101)},
{USB_DEVICE(0x0c45, 0x600d), SB(PAS106, 101)},
{USB_DEVICE(0x0c45, 0x6011), SB(OV6650, 101)},
-#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
{USB_DEVICE(0x0c45, 0x6019), SB(OV7630, 101)},
+#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
{USB_DEVICE(0x0c45, 0x6024), SB(TAS5130CXX, 102)},
{USB_DEVICE(0x0c45, 0x6025), SB(TAS5130CXX, 102)},
#endif
{USB_DEVICE(0x0c45, 0x602c), SB(OV7630, 102)},
{USB_DEVICE(0x0c45, 0x602d), SB(HV7131R, 102)},
{USB_DEVICE(0x0c45, 0x602e), SB(OV7630, 102)},
- /* {USB_DEVICE(0x0c45, 0x602b), SB(MI03XX, 102)}, */ /* MI0343 MI0360 MI0330 */
+ /* {USB_DEVICE(0x0c45, 0x6030), SB(MI03XX, 102)}, */ /* MI0343 MI0360 MI0330 */
+ /* {USB_DEVICE(0x0c45, 0x6082), SB(MI03XX, 103)}, */ /* MI0343 MI0360 */
+ {USB_DEVICE(0x0c45, 0x6083), SB(HV7131D, 103)},
+ {USB_DEVICE(0x0c45, 0x608c), SB(HV7131R, 103)},
+ /* {USB_DEVICE(0x0c45, 0x608e), SB(CISVF10, 103)}, */
{USB_DEVICE(0x0c45, 0x608f), SB(OV7630, 103)},
-#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
+ {USB_DEVICE(0x0c45, 0x60a8), SB(PAS106, 103)},
+ {USB_DEVICE(0x0c45, 0x60aa), SB(TAS5130CXX, 103)},
{USB_DEVICE(0x0c45, 0x60af), SB(PAS202, 103)},
-#endif
{USB_DEVICE(0x0c45, 0x60b0), SB(OV7630, 103)},
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
/* -- device connect -- */
-static int __devinit sd_probe(struct usb_interface *intf,
+static int sd_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return gspca_dev_probe(intf, id, &sd_desc, sizeof(struct sd),
#include "gspca.h"
#include "jpeg.h"
-#define V4L2_CID_INFRARED (V4L2_CID_PRIVATE_BASE + 0)
-
MODULE_AUTHOR("Jean-François Moine <http://moinejf.free.fr>");
MODULE_DESCRIPTION("GSPCA/SONIX JPEG USB Camera Driver");
MODULE_LICENSE("GPL");
+static int starcam;
+
/* controls */
enum e_ctrl {
BRIGHTNESS,
HFLIP,
VFLIP,
SHARPNESS,
- INFRARED,
+ ILLUM,
FREQ,
NCTRLS /* number of controls */
};
};
/* device flags */
-#define PDN_INV 1 /* inverse pin S_PWR_DN / sn_xxx tables */
+#define F_PDN_INV 0x01 /* inverse pin S_PWR_DN / sn_xxx tables */
+#define F_ILLUM 0x02 /* presence of illuminator */
/* sn9c1xx definitions */
/* register 0x01 */
static void setautogain(struct gspca_dev *gspca_dev);
static void sethvflip(struct gspca_dev *gspca_dev);
static void setsharpness(struct gspca_dev *gspca_dev);
-static void setinfrared(struct gspca_dev *gspca_dev);
+static void setillum(struct gspca_dev *gspca_dev);
static void setfreq(struct gspca_dev *gspca_dev);
static const struct ctrl sd_ctrls[NCTRLS] = {
},
.set_control = setsharpness
},
-/* mt9v111 only */
-[INFRARED] = {
+[ILLUM] = {
{
- .id = V4L2_CID_INFRARED,
+ .id = V4L2_CID_ILLUMINATORS_1,
.type = V4L2_CTRL_TYPE_BOOLEAN,
- .name = "Infrared",
+ .name = "Illuminator / infrared",
.minimum = 0,
.maximum = 1,
.step = 1,
.default_value = 0,
},
- .set_control = setinfrared
+ .set_control = setillum
},
/* ov7630/ov7648/ov7660 only */
[FREQ] = {
/* table of the disabled controls */
static const __u32 ctrl_dis[] = {
[SENSOR_ADCM1700] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_GC0307] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_GC0307] = (1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_HV7131R] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_HV7131R] = (1 << HFLIP) |
(1 << FREQ),
-[SENSOR_MI0360] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_MI0360] = (1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_MI0360B] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_MI0360B] = (1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_MO4000] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_MO4000] = (1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_OM6802] = (1 << INFRARED) |
- (1 << HFLIP) |
+[SENSOR_OM6802] = (1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
-[SENSOR_OV7630] = (1 << INFRARED) |
- (1 << HFLIP),
+[SENSOR_OV7630] = (1 << HFLIP),
-[SENSOR_OV7648] = (1 << INFRARED) |
- (1 << HFLIP),
+[SENSOR_OV7648] = (1 << HFLIP),
[SENSOR_OV7660] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << HFLIP) |
(1 << VFLIP),
[SENSOR_PO1030] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
[SENSOR_PO2030N] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << FREQ),
[SENSOR_SOI768] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
[SENSOR_SP80708] = (1 << AUTOGAIN) |
- (1 << INFRARED) |
(1 << HFLIP) |
(1 << VFLIP) |
(1 << FREQ),
PDEBUG(D_PROBE, "Sonix chip id: %02x", regF1);
switch (sd->bridge) {
case BRIDGE_SN9C102P:
+ case BRIDGE_SN9C105:
if (regF1 != 0x11)
return -ENODEV;
+ break;
+ default:
+/* case BRIDGE_SN9C110: */
+/* case BRIDGE_SN9C120: */
+ if (regF1 != 0x12)
+ return -ENODEV;
+ }
+
+ switch (sd->sensor) {
+ case SENSOR_MI0360:
+ mi0360_probe(gspca_dev);
+ break;
+ case SENSOR_OV7630:
+ ov7630_probe(gspca_dev);
+ break;
+ case SENSOR_OV7648:
+ ov7648_probe(gspca_dev);
+ break;
+ case SENSOR_PO2030N:
+ po2030n_probe(gspca_dev);
+ break;
+ }
+
+ switch (sd->bridge) {
+ case BRIDGE_SN9C102P:
reg_w1(gspca_dev, 0x02, regGpio[1]);
break;
case BRIDGE_SN9C105:
- if (regF1 != 0x11)
- return -ENODEV;
- if (sd->sensor == SENSOR_MI0360)
- mi0360_probe(gspca_dev);
reg_w(gspca_dev, 0x01, regGpio, 2);
break;
+ case BRIDGE_SN9C110:
+ reg_w1(gspca_dev, 0x02, 0x62);
+ break;
case BRIDGE_SN9C120:
- if (regF1 != 0x12)
- return -ENODEV;
- switch (sd->sensor) {
- case SENSOR_MI0360:
- mi0360_probe(gspca_dev);
- break;
- case SENSOR_OV7630:
- ov7630_probe(gspca_dev);
- break;
- case SENSOR_OV7648:
- ov7648_probe(gspca_dev);
- break;
- case SENSOR_PO2030N:
- po2030n_probe(gspca_dev);
- break;
- }
regGpio[1] = 0x70; /* no audio */
reg_w(gspca_dev, 0x01, regGpio, 2);
break;
- default:
-/* case BRIDGE_SN9C110: */
-/* case BRIDGE_SN9C325: */
- if (regF1 != 0x12)
- return -ENODEV;
- reg_w1(gspca_dev, 0x02, 0x62);
- break;
}
if (sd->sensor == SENSOR_OM6802)
sd->i2c_addr = sn9c1xx[9];
gspca_dev->ctrl_dis = ctrl_dis[sd->sensor];
+ if (!(sd->flags & F_ILLUM))
+ gspca_dev->ctrl_dis |= (1 << ILLUM);
return gspca_dev->usb_err;
}
reg_w1(gspca_dev, 0x99, sd->ctrls[SHARPNESS].val);
}
-static void setinfrared(struct gspca_dev *gspca_dev)
+static void setillum(struct gspca_dev *gspca_dev)
{
struct sd *sd = (struct sd *) gspca_dev;
- if (gspca_dev->ctrl_dis & (1 << INFRARED))
+ if (gspca_dev->ctrl_dis & (1 << ILLUM))
return;
-/*fixme: different sequence for StarCam Clip and StarCam 370i */
-/* Clip */
- i2c_w1(gspca_dev, 0x02, /* gpio */
- sd->ctrls[INFRARED].val ? 0x66 : 0x64);
+ switch (sd->sensor) {
+ case SENSOR_ADCM1700:
+ reg_w1(gspca_dev, 0x02, /* gpio */
+ sd->ctrls[ILLUM].val ? 0x64 : 0x60);
+ break;
+ case SENSOR_MT9V111:
+ if (starcam)
+ reg_w1(gspca_dev, 0x02,
+ sd->ctrls[ILLUM].val ?
+ 0x55 : 0x54); /* 370i */
+ else
+ reg_w1(gspca_dev, 0x02,
+ sd->ctrls[ILLUM].val ?
+ 0x66 : 0x64); /* Clip */
+ break;
+ }
}
static void setfreq(struct gspca_dev *gspca_dev)
/* sensor clock already enabled in sd_init */
/* reg_w1(gspca_dev, 0xf1, 0x00); */
reg01 = sn9c1xx[1];
- if (sd->flags & PDN_INV)
+ if (sd->flags & F_PDN_INV)
reg01 ^= S_PDN_INV; /* power down inverted */
reg_w1(gspca_dev, 0x01, reg01);
.driver_info = (BRIDGE_ ## bridge << 16) \
| (SENSOR_ ## sensor << 8) \
| (flags)
-static const __devinitdata struct usb_device_id device_table[] = {
-#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0458, 0x7025), BS(SN9C120, MI0360)},
{USB_DEVICE(0x0458, 0x702e), BS(SN9C120, OV7660)},
-#endif
- {USB_DEVICE(0x045e, 0x00f5), BSF(SN9C105, OV7660, PDN_INV)},
- {USB_DEVICE(0x045e, 0x00f7), BSF(SN9C105, OV7660, PDN_INV)},
+ {USB_DEVICE(0x045e, 0x00f5), BSF(SN9C105, OV7660, F_PDN_INV)},
+ {USB_DEVICE(0x045e, 0x00f7), BSF(SN9C105, OV7660, F_PDN_INV)},
{USB_DEVICE(0x0471, 0x0327), BS(SN9C105, MI0360)},
{USB_DEVICE(0x0471, 0x0328), BS(SN9C105, MI0360)},
{USB_DEVICE(0x0471, 0x0330), BS(SN9C105, MI0360)},
/* {USB_DEVICE(0x0c45, 0x607b), BS(SN9C102P, OV7660)}, */
{USB_DEVICE(0x0c45, 0x607c), BS(SN9C102P, HV7131R)},
/* {USB_DEVICE(0x0c45, 0x607e), BS(SN9C102P, OV7630)}, */
- {USB_DEVICE(0x0c45, 0x60c0), BS(SN9C105, MI0360)},
+ {USB_DEVICE(0x0c45, 0x60c0), BSF(SN9C105, MI0360, F_ILLUM)},
/* or MT9V111 */
/* {USB_DEVICE(0x0c45, 0x60c2), BS(SN9C105, P1030xC)}, */
/* {USB_DEVICE(0x0c45, 0x60c8), BS(SN9C105, OM6802)}, */
/* {USB_DEVICE(0x0c45, 0x60fa), BS(SN9C105, OV7648)}, */
/* {USB_DEVICE(0x0c45, 0x60f2), BS(SN9C105, OV7660)}, */
{USB_DEVICE(0x0c45, 0x60fb), BS(SN9C105, OV7660)},
-#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
{USB_DEVICE(0x0c45, 0x60fc), BS(SN9C105, HV7131R)},
{USB_DEVICE(0x0c45, 0x60fe), BS(SN9C105, OV7630)},
-#endif
{USB_DEVICE(0x0c45, 0x6100), BS(SN9C120, MI0360)}, /*sn9c128*/
{USB_DEVICE(0x0c45, 0x6102), BS(SN9C120, PO2030N)}, /* /GC0305*/
/* {USB_DEVICE(0x0c45, 0x6108), BS(SN9C120, OM6802)}, */
/* {USB_DEVICE(0x0c45, 0x6132), BS(SN9C120, OV7670)}, */
{USB_DEVICE(0x0c45, 0x6138), BS(SN9C120, MO4000)},
{USB_DEVICE(0x0c45, 0x613a), BS(SN9C120, OV7648)},
-#if !defined CONFIG_USB_SN9C102 && !defined CONFIG_USB_SN9C102_MODULE
{USB_DEVICE(0x0c45, 0x613b), BS(SN9C120, OV7660)},
-#endif
{USB_DEVICE(0x0c45, 0x613c), BS(SN9C120, HV7131R)},
{USB_DEVICE(0x0c45, 0x613e), BS(SN9C120, OV7630)},
{USB_DEVICE(0x0c45, 0x6142), BS(SN9C120, PO2030N)}, /*sn9c120b*/
/* or GC0305 / GC0307 */
{USB_DEVICE(0x0c45, 0x6143), BS(SN9C120, SP80708)}, /*sn9c120b*/
{USB_DEVICE(0x0c45, 0x6148), BS(SN9C120, OM6802)}, /*sn9c120b*/
- {USB_DEVICE(0x0c45, 0x614a), BS(SN9C120, ADCM1700)}, /*sn9c120b*/
+ {USB_DEVICE(0x0c45, 0x614a), BSF(SN9C120, ADCM1700, F_ILLUM)},
+/* {USB_DEVICE(0x0c45, 0x614c), BS(SN9C120, GC0306)}, */ /*sn9c120b*/
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
module_init(sd_mod_init);
module_exit(sd_mod_exit);
+
+module_param(starcam, int, 0644);
+MODULE_PARM_DESC(starcam,
+ "StarCam model. 0: Clip, 1: 370i");
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x04fc, 0x1528)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x040a, 0x0300), .driver_info = KodakEZ200},
{USB_DEVICE(0x041e, 0x400a), .driver_info = CreativePCCam300},
{USB_DEVICE(0x046d, 0x0890), .driver_info = LogitechTraveler},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x040a, 0x0002), .driver_info = KodakDVC325},
{USB_DEVICE(0x0497, 0xc001), .driver_info = SmileIntlCamera},
{USB_DEVICE(0x0506, 0x00df), .driver_info = ThreeComHomeConnectLite},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x401d), .driver_info = Nxultra},
{USB_DEVICE(0x0733, 0x0430), .driver_info = IntelPCCameraPro},
/*fixme: may be UsbGrabberPV321 BRIDGE_SPCA506 SENSOR_SAA7113 */
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0130, 0x0130), .driver_info = HamaUSBSightcam},
{USB_DEVICE(0x041e, 0x4018), .driver_info = CreativeVista},
{USB_DEVICE(0x0733, 0x0110), .driver_info = ViewQuestVQ110},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x401a), .driver_info = Rev072A},
{USB_DEVICE(0x041e, 0x403b), .driver_info = Rev012A},
{USB_DEVICE(0x0458, 0x7004), .driver_info = Rev072A},
}
/* Table of supported USB devices */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x2770, 0x9120)},
{}
};
}
/* Table of supported USB devices */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x2770, 0x905c)},
{USB_DEVICE(0x2770, 0x9050)},
{USB_DEVICE(0x2770, 0x9051)},
#define ST(sensor, type) \
.driver_info = (SENSOR_ ## sensor << 8) \
| (type)
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x4038), ST(MI0360, 0)},
{USB_DEVICE(0x041e, 0x403c), ST(LZ24BP, 0)},
{USB_DEVICE(0x041e, 0x403d), ST(LZ24BP, 0)},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x05e1, 0x0893)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x0553, 0x0202)},
{USB_DEVICE(0x041e, 0x4007)},
{}
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
/* QuickCam Express */
{USB_DEVICE(0x046d, 0x0840), .driver_info = BRIDGE_STV600 },
/* LEGO cam / QuickCam Web */
#define BS(bridge, subtype) \
.driver_info = (BRIDGE_ ## bridge << 8) \
| (subtype)
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x400b), BS(SPCA504C, 0)},
{USB_DEVICE(0x041e, 0x4012), BS(SPCA504C, 0)},
{USB_DEVICE(0x041e, 0x4013), BS(SPCA504C, 0)},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x17a1, 0x0128)},
{}
};
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x046d, 0x0920)},
{USB_DEVICE(0x046d, 0x0921)},
{USB_DEVICE(0x0545, 0x808b)},
#define BF(bridge, flags) \
.driver_info = (BRIDGE_ ## bridge << 8) \
| (flags)
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x405b), BF(VC0323, FL_VFLIP)},
{USB_DEVICE(0x046d, 0x0892), BF(VC0321, 0)},
{USB_DEVICE(0x046d, 0x0896), BF(VC0321, 0)},
};
/* -- module initialisation -- */
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{ USB_DEVICE_VER(0x0545, 0x8080, 0x0001, 0x0001), .driver_info = CIT_MODEL0 },
{ USB_DEVICE_VER(0x0545, 0x8080, 0x0002, 0x0002), .driver_info = CIT_MODEL1 },
{ USB_DEVICE_VER(0x0545, 0x8080, 0x030a, 0x030a), .driver_info = CIT_MODEL2 },
break;
default:
/* case 0xdd: * delay */
- msleep(action->val / 64 + 10);
+ msleep(action->idx);
break;
}
action++;
[SENSOR_GC0305] = gc0305_matrix,
[SENSOR_HDCS2020b] = NULL,
[SENSOR_HV7131B] = NULL,
- [SENSOR_HV7131R] = NULL,
+ [SENSOR_HV7131R] = po2030_matrix,
[SENSOR_ICM105A] = po2030_matrix,
[SENSOR_MC501CB] = NULL,
[SENSOR_MT9V111_1] = gc0305_matrix,
case SENSOR_ADCM2700:
case SENSOR_GC0305:
case SENSOR_HV7131B:
+ case SENSOR_HV7131R:
case SENSOR_OV7620:
case SENSOR_PAS202B:
case SENSOR_PO2030:
reg_w(gspca_dev, 0x02, 0x003b);
reg_w(gspca_dev, 0x00, 0x0038);
break;
+ case SENSOR_HV7131R:
case SENSOR_PAS202B:
reg_w(gspca_dev, 0x03, 0x003b);
reg_w(gspca_dev, 0x0c, 0x003a);
reg_w(gspca_dev, 0x0b, 0x0039);
- reg_w(gspca_dev, 0x0b, 0x0038);
+ if (sensor == SENSOR_PAS202B)
+ reg_w(gspca_dev, 0x0b, 0x0038);
break;
}
}
reg_w(gspca_dev, 0x02, 0x003b);
reg_w(gspca_dev, 0x00, 0x0038);
break;
+ case SENSOR_HV7131R:
case SENSOR_PAS202B:
reg_w(gspca_dev, 0x03, 0x003b);
reg_w(gspca_dev, 0x0c, 0x003a);
reg_w(gspca_dev, 0x0b, 0x0039);
+ if (sd->sensor == SENSOR_HV7131R)
+ reg_w(gspca_dev, 0x50, ZC3XX_R11D_GLOBALGAIN);
break;
}
break;
case SENSOR_PAS202B:
case SENSOR_GC0305:
+ case SENSOR_HV7131R:
case SENSOR_TAS5130C:
reg_r(gspca_dev, 0x0008);
/* fall thru */
/* ms-win + */
reg_w(gspca_dev, 0x40, 0x0117);
break;
+ case SENSOR_HV7131R:
+ i2c_write(gspca_dev, 0x25, 0x04, 0x00); /* exposure */
+ i2c_write(gspca_dev, 0x26, 0x93, 0x00);
+ i2c_write(gspca_dev, 0x27, 0xe0, 0x00);
+ reg_w(gspca_dev, 0x00, ZC3XX_R1A7_CALCGLOBALMEAN);
+ break;
case SENSOR_GC0305:
case SENSOR_TAS5130C:
reg_w(gspca_dev, 0x09, 0x01ad); /* (from win traces) */
{
struct sd *sd = (struct sd *) gspca_dev;
- if (data[0] == 0xff && data[1] == 0xd8) { /* start of frame */
+ /* check the JPEG end of frame */
+ if (len >= 3
+ && data[len - 3] == 0xff && data[len - 2] == 0xd9) {
+/*fixme: what does the last byte mean?*/
gspca_frame_add(gspca_dev, LAST_PACKET,
- NULL, 0);
+ data, len - 1);
+ return;
+ }
+
+ /* check the JPEG start of a frame */
+ if (data[0] == 0xff && data[1] == 0xd8) {
/* put the JPEG header in the new frame */
gspca_frame_add(gspca_dev, FIRST_PACKET,
sd->jpeg_hdr, JPEG_HDR_SZ);
#endif
};
-static const __devinitdata struct usb_device_id device_table[] = {
+static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x041e, 0x041e)},
{USB_DEVICE(0x041e, 0x4017)},
{USB_DEVICE(0x041e, 0x401c), .driver_info = SENSOR_PAS106},
-hdpvr-objs := hdpvr-control.o hdpvr-core.o hdpvr-video.o
-
-hdpvr-$(CONFIG_I2C) += hdpvr-i2c.o
+hdpvr-objs := hdpvr-control.o hdpvr-core.o hdpvr-video.o hdpvr-i2c.o
obj-$(CONFIG_VIDEO_HDPVR) += hdpvr.o
struct hdpvr_device *dev;
struct usb_host_interface *iface_desc;
struct usb_endpoint_descriptor *endpoint;
+ struct i2c_client *client;
size_t buffer_size;
int i;
int retval = -ENOMEM;
goto error;
}
-#ifdef CONFIG_I2C
- /* until i2c is working properly */
- retval = 0; /* hdpvr_register_i2c_adapter(dev); */
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+ retval = hdpvr_register_i2c_adapter(dev);
if (retval < 0) {
- v4l2_err(&dev->v4l2_dev, "registering i2c adapter failed\n");
+ v4l2_err(&dev->v4l2_dev, "i2c adapter register failed\n");
goto error;
}
- /* until i2c is working properly */
- retval = 0; /* hdpvr_register_i2c_ir(dev); */
- if (retval < 0)
- v4l2_err(&dev->v4l2_dev, "registering i2c IR devices failed\n");
-#endif /* CONFIG_I2C */
+ client = hdpvr_register_ir_rx_i2c(dev);
+ if (!client) {
+ v4l2_err(&dev->v4l2_dev, "i2c IR RX device register failed\n");
+ goto reg_fail;
+ }
+
+ client = hdpvr_register_ir_tx_i2c(dev);
+ if (!client) {
+ v4l2_err(&dev->v4l2_dev, "i2c IR TX device register failed\n");
+ goto reg_fail;
+ }
+#endif
/* let the user know what node this device is now attached to */
v4l2_info(&dev->v4l2_dev, "device now attached to %s\n",
video_device_node_name(dev->video_dev));
return 0;
+reg_fail:
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+ i2c_del_adapter(&dev->i2c_adapter);
+#endif
error:
if (dev) {
/* Destroy single thread */
mutex_lock(&dev->io_mutex);
hdpvr_cancel_queue(dev);
mutex_unlock(&dev->io_mutex);
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+ i2c_del_adapter(&dev->i2c_adapter);
+#endif
video_unregister_device(dev->video_dev);
atomic_dec(&dev_nr);
}
*
*/
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+
#include <linux/i2c.h>
#include <linux/slab.h>
#define Z8F0811_IR_TX_I2C_ADDR 0x70
#define Z8F0811_IR_RX_I2C_ADDR 0x71
-static const u8 ir_i2c_addrs[] = {
- Z8F0811_IR_TX_I2C_ADDR,
- Z8F0811_IR_RX_I2C_ADDR,
-};
-
-static const char * const ir_devicenames[] = {
- "ir_tx_z8f0811_hdpvr",
- "ir_rx_z8f0811_hdpvr",
-};
-static int hdpvr_new_i2c_ir(struct hdpvr_device *dev, struct i2c_adapter *adap,
- const char *type, u8 addr)
+struct i2c_client *hdpvr_register_ir_tx_i2c(struct hdpvr_device *dev)
{
- struct i2c_board_info info;
struct IR_i2c_init_data *init_data = &dev->ir_i2c_init_data;
- unsigned short addr_list[2] = { addr, I2C_CLIENT_END };
+ struct i2c_board_info hdpvr_ir_tx_i2c_board_info = {
+ I2C_BOARD_INFO("ir_tx_z8f0811_hdpvr", Z8F0811_IR_TX_I2C_ADDR),
+ };
- memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, type, I2C_NAME_SIZE);
-
- /* Our default information for ir-kbd-i2c.c to use */
- switch (addr) {
- case Z8F0811_IR_RX_I2C_ADDR:
- init_data->ir_codes = RC_MAP_HAUPPAUGE_NEW;
- init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP_XVR;
- init_data->type = RC_TYPE_RC5;
- init_data->name = "HD PVR";
- info.platform_data = init_data;
- break;
- }
+ init_data->name = "HD-PVR";
+ hdpvr_ir_tx_i2c_board_info.platform_data = init_data;
- return i2c_new_probed_device(adap, &info, addr_list, NULL) == NULL ?
- -1 : 0;
+ return i2c_new_device(&dev->i2c_adapter, &hdpvr_ir_tx_i2c_board_info);
}
-int hdpvr_register_i2c_ir(struct hdpvr_device *dev)
+struct i2c_client *hdpvr_register_ir_rx_i2c(struct hdpvr_device *dev)
{
- int i;
- int ret = 0;
+ struct IR_i2c_init_data *init_data = &dev->ir_i2c_init_data;
+ struct i2c_board_info hdpvr_ir_rx_i2c_board_info = {
+ I2C_BOARD_INFO("ir_rx_z8f0811_hdpvr", Z8F0811_IR_RX_I2C_ADDR),
+ };
- for (i = 0; i < ARRAY_SIZE(ir_i2c_addrs); i++)
- ret += hdpvr_new_i2c_ir(dev, dev->i2c_adapter,
- ir_devicenames[i], ir_i2c_addrs[i]);
+ /* Our default information for ir-kbd-i2c.c to use */
+ init_data->ir_codes = RC_MAP_HAUPPAUGE_NEW;
+ init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP_XVR;
+ init_data->type = RC_TYPE_RC5;
+ init_data->name = "HD-PVR";
+ hdpvr_ir_rx_i2c_board_info.platform_data = init_data;
- return ret;
+ return i2c_new_device(&dev->i2c_adapter, &hdpvr_ir_rx_i2c_board_info);
}
-static int hdpvr_i2c_read(struct hdpvr_device *dev, unsigned char addr,
- char *data, int len)
+static int hdpvr_i2c_read(struct hdpvr_device *dev, int bus,
+ unsigned char addr, char *data, int len)
{
int ret;
- char *buf = kmalloc(len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
+
+ if (len > sizeof(dev->i2c_buf))
+ return -EINVAL;
ret = usb_control_msg(dev->udev,
usb_rcvctrlpipe(dev->udev, 0),
REQTYPE_I2C_READ, CTRL_READ_REQUEST,
- 0x100|addr, 0, buf, len, 1000);
+ (bus << 8) | addr, 0, &dev->i2c_buf, len, 1000);
if (ret == len) {
- memcpy(data, buf, len);
+ memcpy(data, &dev->i2c_buf, len);
ret = 0;
} else if (ret >= 0)
ret = -EIO;
- kfree(buf);
-
return ret;
}
-static int hdpvr_i2c_write(struct hdpvr_device *dev, unsigned char addr,
- char *data, int len)
+static int hdpvr_i2c_write(struct hdpvr_device *dev, int bus,
+ unsigned char addr, char *data, int len)
{
int ret;
- char *buf = kmalloc(len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
- memcpy(buf, data, len);
+ if (len > sizeof(dev->i2c_buf))
+ return -EINVAL;
+
+ memcpy(&dev->i2c_buf, data, len);
ret = usb_control_msg(dev->udev,
usb_sndctrlpipe(dev->udev, 0),
REQTYPE_I2C_WRITE, CTRL_WRITE_REQUEST,
- 0x100|addr, 0, buf, len, 1000);
+ (bus << 8) | addr, 0, &dev->i2c_buf, len, 1000);
if (ret < 0)
- goto error;
+ return ret;
ret = usb_control_msg(dev->udev,
usb_rcvctrlpipe(dev->udev, 0),
REQTYPE_I2C_WRITE_STATT, CTRL_READ_REQUEST,
- 0, 0, buf, 2, 1000);
+ 0, 0, &dev->i2c_buf, 2, 1000);
- if (ret == 2)
+ if ((ret == 2) && (dev->i2c_buf[1] == (len - 1)))
ret = 0;
else if (ret >= 0)
ret = -EIO;
-error:
- kfree(buf);
return ret;
}
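Both helpers now share dev->i2c_buf and encode the target in the control request's wValue. A sketch of that packing, taken from the calls above (the helper name and example values are illustrative):

/* bus number in the high byte, address byte in the low byte of wValue */
static inline unsigned short hdpvr_i2c_wvalue(int bus, unsigned char addr)
{
	return (bus << 8) | addr;	/* e.g. bus 1, addr 0xe2 -> 0x01e2 */
}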
addr = msgs[i].addr << 1;
if (msgs[i].flags & I2C_M_RD)
- retval = hdpvr_i2c_read(dev, addr, msgs[i].buf,
+ retval = hdpvr_i2c_read(dev, 1, addr, msgs[i].buf,
msgs[i].len);
else
- retval = hdpvr_i2c_write(dev, addr, msgs[i].buf,
+ retval = hdpvr_i2c_write(dev, 1, addr, msgs[i].buf,
msgs[i].len);
}
.functionality = hdpvr_functionality,
};
+static struct i2c_adapter hdpvr_i2c_adapter_template = {
+ .name = "Hauppage HD PVR I2C",
+ .owner = THIS_MODULE,
+ .algo = &hdpvr_algo,
+};
+
+static int hdpvr_activate_ir(struct hdpvr_device *dev)
+{
+ char buffer[8];
+
+ mutex_lock(&dev->i2c_mutex);
+
+ hdpvr_i2c_read(dev, 0, 0x54, buffer, 1);
+
+ buffer[0] = 0;
+ buffer[1] = 0x8;
+ hdpvr_i2c_write(dev, 1, 0x54, buffer, 2);
+
+ buffer[1] = 0x18;
+ hdpvr_i2c_write(dev, 1, 0x54, buffer, 2);
+
+ mutex_unlock(&dev->i2c_mutex);
+
+ return 0;
+}
+
int hdpvr_register_i2c_adapter(struct hdpvr_device *dev)
{
- struct i2c_adapter *i2c_adap;
int retval = -ENOMEM;
- i2c_adap = kzalloc(sizeof(struct i2c_adapter), GFP_KERNEL);
- if (i2c_adap == NULL)
- goto error;
-
- strlcpy(i2c_adap->name, "Hauppauge HD PVR I2C",
- sizeof(i2c_adap->name));
- i2c_adap->algo = &hdpvr_algo;
- i2c_adap->owner = THIS_MODULE;
- i2c_adap->dev.parent = &dev->udev->dev;
+ hdpvr_activate_ir(dev);
- i2c_set_adapdata(i2c_adap, dev);
+ memcpy(&dev->i2c_adapter, &hdpvr_i2c_adapter_template,
+ sizeof(struct i2c_adapter));
+ dev->i2c_adapter.dev.parent = &dev->udev->dev;
- retval = i2c_add_adapter(i2c_adap);
+ i2c_set_adapdata(&dev->i2c_adapter, dev);
- if (!retval)
- dev->i2c_adapter = i2c_adap;
- else
- kfree(i2c_adap);
+ retval = i2c_add_adapter(&dev->i2c_adapter);
-error:
return retval;
}
+
+#endif
v4l2_device_unregister(&dev->v4l2_dev);
/* deregister I2C adapter */
-#ifdef CONFIG_I2C
+#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
mutex_lock(&dev->i2c_mutex);
- if (dev->i2c_adapter)
- i2c_del_adapter(dev->i2c_adapter);
- kfree(dev->i2c_adapter);
- dev->i2c_adapter = NULL;
+ i2c_del_adapter(&dev->i2c_adapter);
mutex_unlock(&dev->i2c_mutex);
#endif /* CONFIG_I2C */
KERNEL_VERSION(HDPVR_MAJOR_VERSION, HDPVR_MINOR_VERSION, HDPVR_RELEASE)
#define HDPVR_MAX 8
+#define HDPVR_I2C_MAX_SIZE 128
/* Define these values to match your devices */
#define HD_PVR_VENDOR_ID 0x2040
struct work_struct worker;
/* I2C adapter */
- struct i2c_adapter *i2c_adapter;
+ struct i2c_adapter i2c_adapter;
/* I2C lock */
struct mutex i2c_mutex;
+ /* I2C message buffer space */
+ char i2c_buf[HDPVR_I2C_MAX_SIZE];
/* For passing data to ir-kbd-i2c */
struct IR_i2c_init_data ir_i2c_init_data;
/* i2c adapter registration */
int hdpvr_register_i2c_adapter(struct hdpvr_device *dev);
-int hdpvr_register_i2c_ir(struct hdpvr_device *dev);
+struct i2c_client *hdpvr_register_ir_rx_i2c(struct hdpvr_device *dev);
+struct i2c_client *hdpvr_register_ir_tx_i2c(struct hdpvr_device *dev);
/*========================================================================*/
/* buffer management */
static int get_key_haup_xvr(struct IR_i2c *ir, u32 *ir_key, u32 *ir_raw)
{
+ int ret;
+ unsigned char buf[1] = { 0 };
+
+ /*
+ * This is the same apparent "are you ready?" poll command observed
+ * watching Windows driver traffic and implemented in lirc_zilog. With
+ * this added, we get far saner remote behavior with z8 chips on usb
+ * connected devices, even with the default polling interval of 100ms.
+ */
+ ret = i2c_master_send(ir->c, buf, 1);
+ if (ret != 1)
+ return (ret < 0) ? ret : -EINVAL;
+
return get_key_haup_common (ir, ir_key, ir_raw, 6, 3);
}
static u32 ir_key, ir_raw;
int rc;
- dprintk(2,"ir_poll_key\n");
+ dprintk(3, "%s\n", __func__);
rc = ir->get_key(ir, &ir_key, &ir_raw);
if (rc < 0) {
dprintk(2,"error\n");
return;
}
- if (rc)
+ if (rc) {
+ dprintk(1, "%s: keycode = 0x%04x\n", __func__, ir_key);
rc_keydown(ir->rc, ir_key, 0);
+ }
}
static void ir_work(struct work_struct *work)
rc_type = RC_TYPE_OTHER;
ir_codes = RC_MAP_AVERMEDIA_CARDBUS;
break;
+ case 0x71:
+ name = "Hauppauge/Zilog Z8";
+ ir->get_key = get_key_haup_xvr;
+ rc_type = RC_TYPE_RC5;
+ ir_codes = hauppauge ? RC_MAP_HAUPPAUGE_NEW : RC_MAP_RC5_TV;
+ break;
}
/* Let the caller override settings */
adap, type, 0, I2C_ADDRS(hw_addrs[idx]));
} else if (hw == IVTV_HW_CX25840) {
struct cx25840_platform_data pdata;
+ struct i2c_board_info cx25840_info = {
+ .type = "cx25840",
+ .addr = hw_addrs[idx],
+ .platform_data = &pdata,
+ };
pdata.pvr150_workaround = itv->pvr150_workaround;
- sd = v4l2_i2c_new_subdev_cfg(&itv->v4l2_dev,
- adap, type, 0, &pdata, hw_addrs[idx], NULL);
+ sd = v4l2_i2c_new_subdev_board(&itv->v4l2_dev, adap,
+ &cx25840_info, NULL);
} else {
sd = v4l2_i2c_new_subdev(&itv->v4l2_dev,
adap, type, hw_addrs[idx], NULL);
#include <asm/div64.h>
#include <media/v4l2-device.h>
#include <media/v4l2-chip-ident.h>
-#include "mt9v011.h"
+#include <media/mt9v011.h>
MODULE_DESCRIPTION("Micron mt9v011 sensor driver");
MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@redhat.com>");
MODULE_LICENSE("GPL");
-
static int debug;
module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-2)");
+#define R00_MT9V011_CHIP_VERSION 0x00
+#define R01_MT9V011_ROWSTART 0x01
+#define R02_MT9V011_COLSTART 0x02
+#define R03_MT9V011_HEIGHT 0x03
+#define R04_MT9V011_WIDTH 0x04
+#define R05_MT9V011_HBLANK 0x05
+#define R06_MT9V011_VBLANK 0x06
+#define R07_MT9V011_OUT_CTRL 0x07
+#define R09_MT9V011_SHUTTER_WIDTH 0x09
+#define R0A_MT9V011_CLK_SPEED 0x0a
+#define R0B_MT9V011_RESTART 0x0b
+#define R0C_MT9V011_SHUTTER_DELAY 0x0c
+#define R0D_MT9V011_RESET 0x0d
+#define R1E_MT9V011_DIGITAL_ZOOM 0x1e
+#define R20_MT9V011_READ_MODE 0x20
+#define R2B_MT9V011_GREEN_1_GAIN 0x2b
+#define R2C_MT9V011_BLUE_GAIN 0x2c
+#define R2D_MT9V011_RED_GAIN 0x2d
+#define R2E_MT9V011_GREEN_2_GAIN 0x2e
+#define R35_MT9V011_GLOBAL_GAIN 0x35
+#define RF1_MT9V011_CHIP_ENABLE 0xf1
+
+#define MT9V011_VERSION 0x8232
+#define MT9V011_REV_B_VERSION 0x8243
+
/* supported controls */
static struct v4l2_queryctrl mt9v011_qctrl[] = {
{
return 0;
}
-static int mt9v011_s_config(struct v4l2_subdev *sd, int dumb, void *data)
-{
- struct mt9v011 *core = to_mt9v011(sd);
- unsigned *xtal = data;
-
- v4l2_dbg(1, debug, sd, "s_config called\n");
-
- if (xtal) {
- core->xtal = *xtal;
- v4l2_dbg(1, debug, sd, "xtal set to %d.%03d MHz\n",
- *xtal / 1000000, (*xtal / 1000) % 1000);
- }
-
- return 0;
-}
-
-
#ifdef CONFIG_VIDEO_ADV_DEBUG
static int mt9v011_g_register(struct v4l2_subdev *sd,
struct v4l2_dbg_register *reg)
.g_ctrl = mt9v011_g_ctrl,
.s_ctrl = mt9v011_s_ctrl,
.reset = mt9v011_reset,
- .s_config = mt9v011_s_config,
.g_chip_ident = mt9v011_g_chip_ident,
#ifdef CONFIG_VIDEO_ADV_DEBUG
.g_register = mt9v011_g_register,
core->height = 480;
core->xtal = 27000000; /* Hz */
+ if (c->dev.platform_data) {
+ struct mt9v011_platform_data *pdata = c->dev.platform_data;
+
+ core->xtal = pdata->xtal;
+ v4l2_dbg(1, debug, sd, "xtal set to %d.%03d MHz\n",
+ core->xtal / 1000000, (core->xtal / 1000) % 1000);
+ }
+
v4l_info(c, "chip found @ 0x%02x (%s - chip version 0x%04x)\n",
c->addr << 1, c->adapter->name, version);
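With the s_config callback removed, the crystal frequency now arrives through ordinary i2c platform data. A hedged example of what a bridge/board driver might register (the address and the demo_* names are illustrative, not taken from this patch):

#include <linux/i2c.h>
#include <media/mt9v011.h>

static struct mt9v011_platform_data demo_mt9v011_pdata = {
	.xtal = 27000000,			/* Hz, same as the driver's default above */
};

static struct i2c_board_info demo_mt9v011_info = {
	I2C_BOARD_INFO("mt9v011", 0x5c),	/* 7-bit address is an assumption */
	.platform_data = &demo_mt9v011_pdata,
};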
+++ /dev/null
-/*
- * mt9v011 -Micron 1/4-Inch VGA Digital Image Sensor
- *
- * Copyright (c) 2009 Mauro Carvalho Chehab (mchehab@redhat.com)
- * This code is placed under the terms of the GNU General Public License v2
- */
-
-#ifndef MT9V011_H_
-#define MT9V011_H_
-
-#define R00_MT9V011_CHIP_VERSION 0x00
-#define R01_MT9V011_ROWSTART 0x01
-#define R02_MT9V011_COLSTART 0x02
-#define R03_MT9V011_HEIGHT 0x03
-#define R04_MT9V011_WIDTH 0x04
-#define R05_MT9V011_HBLANK 0x05
-#define R06_MT9V011_VBLANK 0x06
-#define R07_MT9V011_OUT_CTRL 0x07
-#define R09_MT9V011_SHUTTER_WIDTH 0x09
-#define R0A_MT9V011_CLK_SPEED 0x0a
-#define R0B_MT9V011_RESTART 0x0b
-#define R0C_MT9V011_SHUTTER_DELAY 0x0c
-#define R0D_MT9V011_RESET 0x0d
-#define R1E_MT9V011_DIGITAL_ZOOM 0x1e
-#define R20_MT9V011_READ_MODE 0x20
-#define R2B_MT9V011_GREEN_1_GAIN 0x2b
-#define R2C_MT9V011_BLUE_GAIN 0x2c
-#define R2D_MT9V011_RED_GAIN 0x2d
-#define R2E_MT9V011_GREEN_2_GAIN 0x2e
-#define R35_MT9V011_GLOBAL_GAIN 0x35
-#define RF1_MT9V011_CHIP_ENABLE 0xf1
-
-#define MT9V011_VERSION 0x8232
-#define MT9V011_REV_B_VERSION 0x8243
-
-#endif
return v4l2_chip_ident_i2c_client(client, chip, V4L2_IDENT_OV7670, 0);
}
-static int ov7670_s_config(struct v4l2_subdev *sd, int dumb, void *data)
-{
- struct i2c_client *client = v4l2_get_subdevdata(sd);
- struct ov7670_config *config = data;
- struct ov7670_info *info = to_state(sd);
- int ret;
-
- info->clock_speed = 30; /* default: a guess */
-
- /*
- * Must apply configuration before initializing device, because it
- * selects I/O method.
- */
- if (config) {
- info->min_width = config->min_width;
- info->min_height = config->min_height;
- info->use_smbus = config->use_smbus;
-
- if (config->clock_speed)
- info->clock_speed = config->clock_speed;
- }
-
- /* Make sure it's an ov7670 */
- ret = ov7670_detect(sd);
- if (ret) {
- v4l_dbg(1, debug, client,
- "chip found @ 0x%x (%s) is not an ov7670 chip.\n",
- client->addr << 1, client->adapter->name);
- kfree(info);
- return ret;
- }
- v4l_info(client, "chip found @ 0x%02x (%s)\n",
- client->addr << 1, client->adapter->name);
-
- info->fmt = &ov7670_formats[0];
- info->sat = 128; /* Review this */
- info->clkrc = info->clock_speed / 30;
-
- return 0;
-}
-
#ifdef CONFIG_VIDEO_ADV_DEBUG
static int ov7670_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg)
{
.s_ctrl = ov7670_s_ctrl,
.queryctrl = ov7670_queryctrl,
.reset = ov7670_reset,
- .s_config = ov7670_s_config,
.init = ov7670_init,
#ifdef CONFIG_VIDEO_ADV_DEBUG
.g_register = ov7670_g_register,
{
struct v4l2_subdev *sd;
struct ov7670_info *info;
+ int ret;
info = kzalloc(sizeof(struct ov7670_info), GFP_KERNEL);
if (info == NULL)
sd = &info->sd;
v4l2_i2c_subdev_init(sd, client, &ov7670_ops);
+ info->clock_speed = 30; /* default: a guess */
+ if (client->dev.platform_data) {
+ struct ov7670_config *config = client->dev.platform_data;
+
+ /*
+ * Must apply configuration before initializing device, because it
+ * selects I/O method.
+ */
+ info->min_width = config->min_width;
+ info->min_height = config->min_height;
+ info->use_smbus = config->use_smbus;
+
+ if (config->clock_speed)
+ info->clock_speed = config->clock_speed;
+ }
+
+ /* Make sure it's an ov7670 */
+ ret = ov7670_detect(sd);
+ if (ret) {
+ v4l_dbg(1, debug, client,
+ "chip found @ 0x%x (%s) is not an ov7670 chip.\n",
+ client->addr << 1, client->adapter->name);
+ kfree(info);
+ return ret;
+ }
+ v4l_info(client, "chip found @ 0x%02x (%s)\n",
+ client->addr << 1, client->adapter->name);
+
+ info->fmt = &ov7670_formats[0];
+ info->sat = 128; /* Review this */
+ info->clkrc = info->clock_speed / 30;
return 0;
}
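The same pattern applies here: the former s_config logic now runs in probe, keyed off client->dev.platform_data. A sketch of the configuration a board might supply (values and the demo_* name are illustrative; the field names are those read by the probe code above):

#include <media/ov7670.h>

static struct ov7670_config demo_ov7670_cfg = {
	.min_width   = 320,
	.min_height  = 240,
	.clock_speed = 30,	/* MHz; matches the driver's default guess */
	.use_smbus   = false,
};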
#include "pvrusb2-io.h"
#include <media/v4l2-device.h>
#include <media/cx2341x.h>
+#include <media/ir-kbd-i2c.h>
#include "pvrusb2-devattr.h"
/* Legal values for PVR2_CID_HSM */
/* IR related */
unsigned int ir_scheme_active; /* IR scheme as seen from the outside */
+ struct IR_i2c_init_data ir_init_data; /* params passed to IR modules */
/* Frequency table */
unsigned int freqTable[FREQTABLE_SIZE];
*/
#include <linux/i2c.h>
+#include <media/ir-kbd-i2c.h>
#include "pvrusb2-i2c-core.h"
#include "pvrusb2-hdw-internal.h"
#include "pvrusb2-debug.h"
MODULE_PARM_DESC(disable_autoload_ir_video,
"1=do not try to autoload ir_video IR receiver");
-/* Mapping of IR schemes to known I2C addresses - if any */
-static const unsigned char ir_video_addresses[] = {
- [PVR2_IR_SCHEME_ZILOG] = 0x71,
- [PVR2_IR_SCHEME_29XXX] = 0x18,
- [PVR2_IR_SCHEME_24XXX] = 0x18,
-};
-
static int pvr2_i2c_write(struct pvr2_hdw *hdw, /* Context */
u8 i2c_addr, /* I2C address we're talking to */
u8 *data, /* Data to write */
static void pvr2_i2c_register_ir(struct pvr2_hdw *hdw)
{
struct i2c_board_info info;
- unsigned char addr = 0;
+ struct IR_i2c_init_data *init_data = &hdw->ir_init_data;
if (pvr2_disable_ir_video) {
pvr2_trace(PVR2_TRACE_INFO,
"Automatic binding of ir_video has been disabled.");
return;
}
- if (hdw->ir_scheme_active < ARRAY_SIZE(ir_video_addresses)) {
- addr = ir_video_addresses[hdw->ir_scheme_active];
- }
- if (!addr) {
+ memset(&info, 0, sizeof(struct i2c_board_info));
+ switch (hdw->ir_scheme_active) {
+ case PVR2_IR_SCHEME_24XXX: /* FX2-controlled IR */
+ case PVR2_IR_SCHEME_29XXX: /* Original 29xxx device */
+ init_data->ir_codes = RC_MAP_HAUPPAUGE_NEW;
+ init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP;
+ init_data->type = RC_TYPE_RC5;
+ init_data->name = hdw->hdw_desc->description;
+		init_data->polling_interval  = 100; /* ms, from ir-kbd-i2c */
+ /* IR Receiver */
+ info.addr = 0x18;
+ info.platform_data = init_data;
+ strlcpy(info.type, "ir_video", I2C_NAME_SIZE);
+ pvr2_trace(PVR2_TRACE_INFO, "Binding %s to i2c address 0x%02x.",
+ info.type, info.addr);
+ i2c_new_device(&hdw->i2c_adap, &info);
+ break;
+ case PVR2_IR_SCHEME_ZILOG: /* HVR-1950 style */
+ case PVR2_IR_SCHEME_24XXX_MCE: /* 24xxx MCE device */
+ init_data->ir_codes = RC_MAP_HAUPPAUGE_NEW;
+ init_data->internal_get_key_func = IR_KBD_GET_KEY_HAUP_XVR;
+ init_data->type = RC_TYPE_RC5;
+ init_data->name = hdw->hdw_desc->description;
+ /* IR Receiver */
+ info.addr = 0x71;
+ info.platform_data = init_data;
+ strlcpy(info.type, "ir_rx_z8f0811_haup", I2C_NAME_SIZE);
+ pvr2_trace(PVR2_TRACE_INFO, "Binding %s to i2c address 0x%02x.",
+ info.type, info.addr);
+ i2c_new_device(&hdw->i2c_adap, &info);
+		/* IR Transmitter */
+ info.addr = 0x70;
+ info.platform_data = init_data;
+ strlcpy(info.type, "ir_tx_z8f0811_haup", I2C_NAME_SIZE);
+ pvr2_trace(PVR2_TRACE_INFO, "Binding %s to i2c address 0x%02x.",
+ info.type, info.addr);
+ i2c_new_device(&hdw->i2c_adap, &info);
+ break;
+ default:
/* The device either doesn't support I2C-based IR or we
don't know (yet) how to operate IR on the device. */
- return;
+ break;
}
- pvr2_trace(PVR2_TRACE_INFO,
- "Binding ir_video to i2c address 0x%02x.", addr);
- memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, "ir_video", I2C_NAME_SIZE);
- info.addr = addr;
- i2c_new_device(&hdw->i2c_adap, &info);
}
void pvr2_i2c_core_init(struct pvr2_hdw *hdw)
chip_id = name[5];
/* Check whether this chip is part of the saa711x series */
- if (memcmp(name, "1f711", 5)) {
+ if (memcmp(name + 1, "f711", 4)) {
v4l_dbg(1, debug, client, "chip found @ 0x%x (ID %s) does not match a known saa711x chip.\n",
client->addr << 1, name);
return -ENODEV;
[SAA7134_BOARD_KWORLD_PCI_SBTVD_FULLSEG] = {
.name = "Kworld PCI SBTVD/ISDB-T Full-Seg Hybrid",
.audio_clock = 0x00187de7,
-#if 0
- /*
- * FIXME: Analog mode doesn't work, if digital is enabled. The proper
- * fix is to use tda8290 driver, but Kworld seems to use an
- * unsupported version of tda8295.
- */
- .tuner_type = TUNER_NXP_TDA18271, /* TUNER_PHILIPS_TDA8290 */
- .tuner_addr = 0x60,
-#else
- .tuner_type = UNSET,
+ .tuner_type = TUNER_PHILIPS_TDA8290,
.tuner_addr = ADDR_UNSET,
-#endif
.radio_type = UNSET,
.radio_addr = ADDR_UNSET,
.gpiomask = 0x8e054000,
/* toggle AGC switch through GPIO 27 */
switch (mode) {
case TDA18271_ANALOG:
- saa7134_set_gpio(dev, 27, 0);
+ saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x4000);
+ saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x4000);
+ msleep(20);
break;
case TDA18271_DIGITAL:
- saa7134_set_gpio(dev, 27, 1);
+ saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x14000);
+ saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x14000);
+ msleep(20);
+ saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x54000);
+ saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x54000);
+ msleep(30);
break;
default:
return -EINVAL;
int saa7134_tuner_callback(void *priv, int component, int command, int arg)
{
struct saa7134_dev *dev = priv;
+
if (dev != NULL) {
switch (dev->tuner_type) {
case TUNER_PHILIPS_TDA8290:
break;
}
case SAA7134_BOARD_KWORLD_PCI_SBTVD_FULLSEG:
- {
- struct i2c_msg msg = { .addr = 0x4b, .flags = 0 };
- int i;
- static u8 buffer[][2] = {
- {0x30, 0x31},
- {0xff, 0x00},
- {0x41, 0x03},
- {0x41, 0x1a},
- {0xff, 0x02},
- {0x34, 0x00},
- {0x45, 0x97},
- {0x45, 0xc1},
- };
saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x4000);
saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x4000);
- /*
- * FIXME: identify what device is at addr 0x4b and what means
- * this initialization
- */
- for (i = 0; i < ARRAY_SIZE(buffer); i++) {
- msg.buf = &buffer[i][0];
- msg.len = ARRAY_SIZE(buffer[0]);
- if (i2c_transfer(&dev->i2c_adap, &msg, 1) != 1)
- printk(KERN_WARNING
- "%s: Unable to enable tuner(%i).\n",
- dev->name, i);
- }
+ saa7134_set_gpio(dev, 27, 0);
break;
- }
} /* switch() */
/* initialize tuner */
static struct tda18271_config kworld_tda18271_config = {
.std_map = &mb86a20s_tda18271_std_map,
.gate = TDA18271_GATE_DIGITAL,
+ .config = 3, /* Use tuner callback for AGC */
};
static const struct mb86a20s_config kworld_mb86a20s_config = {
.demod_address = 0x10,
};
+static int kworld_sbtvd_gate_ctrl(struct dvb_frontend* fe, int enable)
+{
+ struct saa7134_dev *dev = fe->dvb->priv;
+
+ unsigned char initmsg[] = {0x45, 0x97};
+ unsigned char msg_enable[] = {0x45, 0xc1};
+ unsigned char msg_disable[] = {0x45, 0x81};
+ struct i2c_msg msg = {.addr = 0x4b, .flags = 0, .buf = initmsg, .len = 2};
+
+ if (i2c_transfer(&dev->i2c_adap, &msg, 1) != 1) {
+ wprintk("could not access the I2C gate\n");
+ return -EIO;
+ }
+ if (enable)
+ msg.buf = msg_enable;
+ else
+ msg.buf = msg_disable;
+ if (i2c_transfer(&dev->i2c_adap, &msg, 1) != 1) {
+ wprintk("could not access the I2C gate\n");
+ return -EIO;
+ }
+ msleep(20);
+ return 0;
+}
+
/* ==================================================================
* tda1004x based DVB-T cards, helper functions
*/
/* ------------------------------------------------------------------ */
-static int __kworld_sbtvd_i2c_gate_ctrl(struct saa7134_dev *dev, int enable)
-{
- unsigned char initmsg[] = {0x45, 0x97};
- unsigned char msg_enable[] = {0x45, 0xc1};
- unsigned char msg_disable[] = {0x45, 0x81};
- struct i2c_msg msg = {.addr = 0x4b, .flags = 0, .buf = initmsg, .len = 2};
-
- if (i2c_transfer(&dev->i2c_adap, &msg, 1) != 1) {
- wprintk("could not access the I2C gate\n");
- return -EIO;
- }
- if (enable)
- msg.buf = msg_enable;
- else
- msg.buf = msg_disable;
- if (i2c_transfer(&dev->i2c_adap, &msg, 1) != 1) {
- wprintk("could not access the I2C gate\n");
- return -EIO;
- }
- msleep(20);
- return 0;
-}
-static int kworld_sbtvd_i2c_gate_ctrl(struct dvb_frontend *fe, int enable)
-{
- struct saa7134_dev *dev = fe->dvb->priv;
-
- return __kworld_sbtvd_i2c_gate_ctrl(dev, enable);
-}
-
-/* ------------------------------------------------------------------ */
-
static struct tda1004x_config tda827x_lifeview_config = {
.demod_address = 0x08,
.invert = 1,
}
break;
case SAA7134_BOARD_KWORLD_PCI_SBTVD_FULLSEG:
- __kworld_sbtvd_i2c_gate_ctrl(dev, 0);
- saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x14000);
- saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x14000);
- msleep(20);
- saa_writel(SAA7134_GPIO_GPMODE0 >> 2, 0x54000);
- saa_writel(SAA7134_GPIO_GPSTATUS0 >> 2, 0x54000);
- msleep(20);
+ /* Switch to digital mode */
+ saa7134_tuner_callback(dev, 0,
+ TDA18271_CALLBACK_CMD_AGC_ENABLE, 1);
fe0->dvb.frontend = dvb_attach(mb86a20s_attach,
&kworld_mb86a20s_config,
&dev->i2c_adap);
- __kworld_sbtvd_i2c_gate_ctrl(dev, 1);
if (fe0->dvb.frontend != NULL) {
+ dvb_attach(tda829x_attach, fe0->dvb.frontend,
+ &dev->i2c_adap, 0x4b,
+ &tda829x_no_probe);
dvb_attach(tda18271_attach, fe0->dvb.frontend,
0x60, &dev->i2c_adap,
&kworld_tda18271_config);
- /*
- * Only after success, it can initialize the gate, otherwise
- * an OOPS will hit, due to kfree(fe0->dvb.frontend)
- */
- fe0->dvb.frontend->ops.i2c_gate_ctrl = kworld_sbtvd_i2c_gate_ctrl;
+ fe0->dvb.frontend->ops.i2c_gate_ctrl = kworld_sbtvd_gate_ctrl;
}
+
+		/* The mb86a20s needs to use the I2C gateway */
break;
default:
wprintk("Huh? unknown DVB card?\n");
{ SN9C102_USB_DEVICE(0x0c45, 0x6009, BRIDGE_SN9C102), },
{ SN9C102_USB_DEVICE(0x0c45, 0x600d, BRIDGE_SN9C102), },
/* { SN9C102_USB_DEVICE(0x0c45, 0x6011, BRIDGE_SN9C102), }, OV6650 */
-#endif
{ SN9C102_USB_DEVICE(0x0c45, 0x6019, BRIDGE_SN9C102), },
+#endif
{ SN9C102_USB_DEVICE(0x0c45, 0x6024, BRIDGE_SN9C102), },
{ SN9C102_USB_DEVICE(0x0c45, 0x6025, BRIDGE_SN9C102), },
#if !defined CONFIG_USB_GSPCA_SONIXB && !defined CONFIG_USB_GSPCA_SONIXB_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x6029, BRIDGE_SN9C102), },
{ SN9C102_USB_DEVICE(0x0c45, 0x602a, BRIDGE_SN9C102), },
#endif
- { SN9C102_USB_DEVICE(0x0c45, 0x602b, BRIDGE_SN9C102), },
+ { SN9C102_USB_DEVICE(0x0c45, 0x602b, BRIDGE_SN9C102), }, /* not in sonixb */
#if !defined CONFIG_USB_GSPCA_SONIXB && !defined CONFIG_USB_GSPCA_SONIXB_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x602c, BRIDGE_SN9C102), },
/* { SN9C102_USB_DEVICE(0x0c45, 0x602d, BRIDGE_SN9C102), }, HV7131R */
{ SN9C102_USB_DEVICE(0x0c45, 0x602e, BRIDGE_SN9C102), },
#endif
- { SN9C102_USB_DEVICE(0x0c45, 0x6030, BRIDGE_SN9C102), },
+ { SN9C102_USB_DEVICE(0x0c45, 0x6030, BRIDGE_SN9C102), }, /* not in sonixb */
/* SN9C103 */
- { SN9C102_USB_DEVICE(0x0c45, 0x6080, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x6082, BRIDGE_SN9C103), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x6080, BRIDGE_SN9C103), }, non existent ? */
+ { SN9C102_USB_DEVICE(0x0c45, 0x6082, BRIDGE_SN9C103), }, /* not in sonixb */
+#if !defined CONFIG_USB_GSPCA_SONIXB && !defined CONFIG_USB_GSPCA_SONIXB_MODULE
/* { SN9C102_USB_DEVICE(0x0c45, 0x6083, BRIDGE_SN9C103), }, HY7131D/E */
- { SN9C102_USB_DEVICE(0x0c45, 0x6088, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x608a, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x608b, BRIDGE_SN9C103), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x6088, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x608a, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x608b, BRIDGE_SN9C103), }, non existent ? */
{ SN9C102_USB_DEVICE(0x0c45, 0x608c, BRIDGE_SN9C103), },
/* { SN9C102_USB_DEVICE(0x0c45, 0x608e, BRIDGE_SN9C103), }, CISVF10 */
-#if !defined CONFIG_USB_GSPCA_SONIXB && !defined CONFIG_USB_GSPCA_SONIXB_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x608f, BRIDGE_SN9C103), },
-#endif
- { SN9C102_USB_DEVICE(0x0c45, 0x60a0, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60a2, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60a3, BRIDGE_SN9C103), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60a0, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60a2, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60a3, BRIDGE_SN9C103), }, non existent ? */
/* { SN9C102_USB_DEVICE(0x0c45, 0x60a8, BRIDGE_SN9C103), }, PAS106 */
/* { SN9C102_USB_DEVICE(0x0c45, 0x60aa, BRIDGE_SN9C103), }, TAS5130 */
-/* { SN9C102_USB_DEVICE(0x0c45, 0x60ab, BRIDGE_SN9C103), }, TAS5130 */
- { SN9C102_USB_DEVICE(0x0c45, 0x60ac, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60ae, BRIDGE_SN9C103), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ab, BRIDGE_SN9C103), }, TAS5110, non existent */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ac, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ae, BRIDGE_SN9C103), }, non existent ? */
{ SN9C102_USB_DEVICE(0x0c45, 0x60af, BRIDGE_SN9C103), },
-#if !defined CONFIG_USB_GSPCA_SONIXB && !defined CONFIG_USB_GSPCA_SONIXB_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x60b0, BRIDGE_SN9C103), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60b2, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60b3, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60b8, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ba, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60bb, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60bc, BRIDGE_SN9C103), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60be, BRIDGE_SN9C103), }, non existent ? */
#endif
- { SN9C102_USB_DEVICE(0x0c45, 0x60b2, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60b3, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60b8, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60ba, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60bb, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60bc, BRIDGE_SN9C103), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60be, BRIDGE_SN9C103), },
/* SN9C105 */
#if !defined CONFIG_USB_GSPCA_SONIXJ && !defined CONFIG_USB_GSPCA_SONIXJ_MODULE
{ SN9C102_USB_DEVICE(0x045e, 0x00f5, BRIDGE_SN9C105), },
{ SN9C102_USB_DEVICE(0x045e, 0x00f7, BRIDGE_SN9C105), },
{ SN9C102_USB_DEVICE(0x0471, 0x0327, BRIDGE_SN9C105), },
{ SN9C102_USB_DEVICE(0x0471, 0x0328, BRIDGE_SN9C105), },
-#endif
{ SN9C102_USB_DEVICE(0x0c45, 0x60c0, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60c2, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60c8, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60cc, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60ea, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60ec, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60ef, BRIDGE_SN9C105), },
- { SN9C102_USB_DEVICE(0x0c45, 0x60fa, BRIDGE_SN9C105), },
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60c2, BRIDGE_SN9C105), }, PO1030 */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60c8, BRIDGE_SN9C105), }, OM6801 */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60cc, BRIDGE_SN9C105), }, HV7131GP */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ea, BRIDGE_SN9C105), }, non existent ? */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ec, BRIDGE_SN9C105), }, MO4000 */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60ef, BRIDGE_SN9C105), }, ICM105C */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x60fa, BRIDGE_SN9C105), }, OV7648 */
{ SN9C102_USB_DEVICE(0x0c45, 0x60fb, BRIDGE_SN9C105), },
{ SN9C102_USB_DEVICE(0x0c45, 0x60fc, BRIDGE_SN9C105), },
{ SN9C102_USB_DEVICE(0x0c45, 0x60fe, BRIDGE_SN9C105), },
/* SN9C120 */
{ SN9C102_USB_DEVICE(0x0458, 0x7025, BRIDGE_SN9C120), },
-#if !defined CONFIG_USB_GSPCA_SONIXJ && !defined CONFIG_USB_GSPCA_SONIXJ_MODULE
- { SN9C102_USB_DEVICE(0x0c45, 0x6102, BRIDGE_SN9C120), },
-#endif
- { SN9C102_USB_DEVICE(0x0c45, 0x6108, BRIDGE_SN9C120), },
- { SN9C102_USB_DEVICE(0x0c45, 0x610f, BRIDGE_SN9C120), },
-#if !defined CONFIG_USB_GSPCA_SONIXJ && !defined CONFIG_USB_GSPCA_SONIXJ_MODULE
+/* { SN9C102_USB_DEVICE(0x0c45, 0x6102, BRIDGE_SN9C120), }, po2030 */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x6108, BRIDGE_SN9C120), }, om6801 */
+/* { SN9C102_USB_DEVICE(0x0c45, 0x610f, BRIDGE_SN9C120), }, S5K53BEB */
{ SN9C102_USB_DEVICE(0x0c45, 0x6130, BRIDGE_SN9C120), },
-#endif
/* { SN9C102_USB_DEVICE(0x0c45, 0x6138, BRIDGE_SN9C120), }, MO8000 */
-#if !defined CONFIG_USB_GSPCA_SONIXJ && !defined CONFIG_USB_GSPCA_SONIXJ_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x613a, BRIDGE_SN9C120), },
-#endif
{ SN9C102_USB_DEVICE(0x0c45, 0x613b, BRIDGE_SN9C120), },
-#if !defined CONFIG_USB_GSPCA_SONIXJ && !defined CONFIG_USB_GSPCA_SONIXJ_MODULE
{ SN9C102_USB_DEVICE(0x0c45, 0x613c, BRIDGE_SN9C120), },
{ SN9C102_USB_DEVICE(0x0c45, 0x613e, BRIDGE_SN9C120), },
#endif
return ret;
}
-static int sr030pc30_s_config(struct v4l2_subdev *sd,
- int irq, void *platform_data)
-{
- struct sr030pc30_info *info = to_sr030pc30(sd);
-
- info->pdata = platform_data;
- return 0;
-}
-
static int sr030pc30_s_stream(struct v4l2_subdev *sd, int enable)
{
return 0;
}
static const struct v4l2_subdev_core_ops sr030pc30_core_ops = {
- .s_config = sr030pc30_s_config,
.s_power = sr030pc30_s_power,
.queryctrl = sr030pc30_queryctrl,
.s_ctrl = sr030pc30_s_ctrl,
+++ /dev/null
-/*
- * For the TDA9875 chip
- * (The TDA9875 is used on the Diamond DTV2000 french version
- * Other cards probably use these chips as well.)
- * This driver will not complain if used with any
- * other i2c device with the same address.
- *
- * Copyright (c) 2000 Guillaume Delvit based on Gerd Knorr source and
- * Eric Sandeen
- * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org>
- * This code is placed under the terms of the GNU General Public License
- * Based on tda9855.c by Steve VanDeBogart (vandebo@uclink.berkeley.edu)
- * Which was based on tda8425.c by Greg Alexander (c) 1998
- *
- * OPTIONS:
- * debug - set to 1 if you'd like to see debug messages
- *
- * Revision: 0.1 - original version
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/delay.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/i2c.h>
-#include <linux/videodev2.h>
-#include <media/v4l2-device.h>
-#include <media/i2c-addr.h>
-
-static int debug; /* insmod parameter */
-module_param(debug, int, S_IRUGO | S_IWUSR);
-MODULE_LICENSE("GPL");
-
-
-/* This is a superset of the TDA9875 */
-struct tda9875 {
- struct v4l2_subdev sd;
- int rvol, lvol;
- int bass, treble;
-};
-
-static inline struct tda9875 *to_state(struct v4l2_subdev *sd)
-{
- return container_of(sd, struct tda9875, sd);
-}
-
-#define dprintk if (debug) printk
-
-/* The TDA9875 is made by Philips Semiconductor
- * http://www.semiconductors.philips.com
- * TDA9875: I2C-bus controlled DSP audio processor, FM demodulator
- *
- */
-
- /* subaddresses for TDA9875 */
-#define TDA9875_MUT 0x12 /*General mute (value --> 0b11001100*/
-#define TDA9875_CFG 0x01 /* Config register (value --> 0b00000000 */
-#define TDA9875_DACOS 0x13 /*DAC i/o select (ADC) 0b0000100*/
-#define TDA9875_LOSR 0x16 /*Line output select regirter 0b0100 0001*/
-
-#define TDA9875_CH1V 0x0c /*Channel 1 volume (mute)*/
-#define TDA9875_CH2V 0x0d /*Channel 2 volume (mute)*/
-#define TDA9875_SC1 0x14 /*SCART 1 in (mono)*/
-#define TDA9875_SC2 0x15 /*SCART 2 in (mono)*/
-
-#define TDA9875_ADCIS 0x17 /*ADC input select (mono) 0b0110 000*/
-#define TDA9875_AER 0x19 /*Audio effect (AVL+Pseudo) 0b0000 0110*/
-#define TDA9875_MCS 0x18 /*Main channel select (DAC) 0b0000100*/
-#define TDA9875_MVL 0x1a /* Main volume gauche */
-#define TDA9875_MVR 0x1b /* Main volume droite */
-#define TDA9875_MBA 0x1d /* Main Basse */
-#define TDA9875_MTR 0x1e /* Main treble */
-#define TDA9875_ACS 0x1f /* Auxilary channel select (FM) 0b0000000*/
-#define TDA9875_AVL 0x20 /* Auxilary volume gauche */
-#define TDA9875_AVR 0x21 /* Auxilary volume droite */
-#define TDA9875_ABA 0x22 /* Auxilary Basse */
-#define TDA9875_ATR 0x23 /* Auxilary treble */
-
-#define TDA9875_MSR 0x02 /* Monitor select register */
-#define TDA9875_C1MSB 0x03 /* Carrier 1 (FM) frequency register MSB */
-#define TDA9875_C1MIB 0x04 /* Carrier 1 (FM) frequency register (16-8]b */
-#define TDA9875_C1LSB 0x05 /* Carrier 1 (FM) frequency register LSB */
-#define TDA9875_C2MSB 0x06 /* Carrier 2 (nicam) frequency register MSB */
-#define TDA9875_C2MIB 0x07 /* Carrier 2 (nicam) frequency register (16-8]b */
-#define TDA9875_C2LSB 0x08 /* Carrier 2 (nicam) frequency register LSB */
-#define TDA9875_DCR 0x09 /* Demodulateur configuration regirter*/
-#define TDA9875_DEEM 0x0a /* FM de-emphasis regirter*/
-#define TDA9875_FMAT 0x0b /* FM Matrix regirter*/
-
-/* values */
-#define TDA9875_MUTE_ON 0xff /* general mute */
-#define TDA9875_MUTE_OFF 0xcc /* general no mute */
-
-
-
-/* Begin code */
-
-static int tda9875_write(struct v4l2_subdev *sd, int subaddr, unsigned char val)
-{
- struct i2c_client *client = v4l2_get_subdevdata(sd);
- unsigned char buffer[2];
-
- v4l2_dbg(1, debug, sd, "Writing %d 0x%x\n", subaddr, val);
- buffer[0] = subaddr;
- buffer[1] = val;
- if (2 != i2c_master_send(client, buffer, 2)) {
- v4l2_warn(sd, "I/O error, trying (write %d 0x%x)\n",
- subaddr, val);
- return -1;
- }
- return 0;
-}
-
-
-static int i2c_read_register(struct i2c_client *client, int addr, int reg)
-{
- unsigned char write[1];
- unsigned char read[1];
- struct i2c_msg msgs[2] = {
- { addr, 0, 1, write },
- { addr, I2C_M_RD, 1, read }
- };
-
- write[0] = reg;
-
- if (2 != i2c_transfer(client->adapter, msgs, 2)) {
- v4l_warn(client, "I/O error (read2)\n");
- return -1;
- }
- v4l_dbg(1, debug, client, "chip_read2: reg%d=0x%x\n", reg, read[0]);
- return read[0];
-}
-
-static void tda9875_set(struct v4l2_subdev *sd)
-{
- struct tda9875 *tda = to_state(sd);
- unsigned char a;
-
- v4l2_dbg(1, debug, sd, "tda9875_set(%04x,%04x,%04x,%04x)\n",
- tda->lvol, tda->rvol, tda->bass, tda->treble);
-
- a = tda->lvol & 0xff;
- tda9875_write(sd, TDA9875_MVL, a);
- a =tda->rvol & 0xff;
- tda9875_write(sd, TDA9875_MVR, a);
- a =tda->bass & 0xff;
- tda9875_write(sd, TDA9875_MBA, a);
- a =tda->treble & 0xff;
- tda9875_write(sd, TDA9875_MTR, a);
-}
-
-static void do_tda9875_init(struct v4l2_subdev *sd)
-{
- struct tda9875 *t = to_state(sd);
-
- v4l2_dbg(1, debug, sd, "In tda9875_init\n");
- tda9875_write(sd, TDA9875_CFG, 0xd0); /*reg de config 0 (reset)*/
- tda9875_write(sd, TDA9875_MSR, 0x03); /* Monitor 0b00000XXX*/
- tda9875_write(sd, TDA9875_C1MSB, 0x00); /*Car1(FM) MSB XMHz*/
- tda9875_write(sd, TDA9875_C1MIB, 0x00); /*Car1(FM) MIB XMHz*/
- tda9875_write(sd, TDA9875_C1LSB, 0x00); /*Car1(FM) LSB XMHz*/
- tda9875_write(sd, TDA9875_C2MSB, 0x00); /*Car2(NICAM) MSB XMHz*/
- tda9875_write(sd, TDA9875_C2MIB, 0x00); /*Car2(NICAM) MIB XMHz*/
- tda9875_write(sd, TDA9875_C2LSB, 0x00); /*Car2(NICAM) LSB XMHz*/
- tda9875_write(sd, TDA9875_DCR, 0x00); /*Demod config 0x00*/
- tda9875_write(sd, TDA9875_DEEM, 0x44); /*DE-Emph 0b0100 0100*/
- tda9875_write(sd, TDA9875_FMAT, 0x00); /*FM Matrix reg 0x00*/
- tda9875_write(sd, TDA9875_SC1, 0x00); /* SCART 1 (SC1)*/
- tda9875_write(sd, TDA9875_SC2, 0x01); /* SCART 2 (sc2)*/
-
- tda9875_write(sd, TDA9875_CH1V, 0x10); /* Channel volume 1 mute*/
- tda9875_write(sd, TDA9875_CH2V, 0x10); /* Channel volume 2 mute */
- tda9875_write(sd, TDA9875_DACOS, 0x02); /* sig DAC i/o(in:nicam)*/
- tda9875_write(sd, TDA9875_ADCIS, 0x6f); /* sig ADC input(in:mono)*/
- tda9875_write(sd, TDA9875_LOSR, 0x00); /* line out (in:mono)*/
- tda9875_write(sd, TDA9875_AER, 0x00); /*06 Effect (AVL+PSEUDO) */
- tda9875_write(sd, TDA9875_MCS, 0x44); /* Main ch select (DAC) */
- tda9875_write(sd, TDA9875_MVL, 0x03); /* Vol Main left 10dB */
- tda9875_write(sd, TDA9875_MVR, 0x03); /* Vol Main right 10dB*/
- tda9875_write(sd, TDA9875_MBA, 0x00); /* Main Bass Main 0dB*/
- tda9875_write(sd, TDA9875_MTR, 0x00); /* Main Treble Main 0dB*/
- tda9875_write(sd, TDA9875_ACS, 0x44); /* Aux chan select (dac)*/
- tda9875_write(sd, TDA9875_AVL, 0x00); /* Vol Aux left 0dB*/
- tda9875_write(sd, TDA9875_AVR, 0x00); /* Vol Aux right 0dB*/
- tda9875_write(sd, TDA9875_ABA, 0x00); /* Aux Bass Main 0dB*/
- tda9875_write(sd, TDA9875_ATR, 0x00); /* Aux Aigus Main 0dB*/
-
- tda9875_write(sd, TDA9875_MUT, 0xcc); /* General mute */
-
- t->lvol = t->rvol = 0; /* 0dB */
- t->bass = 0; /* 0dB */
- t->treble = 0; /* 0dB */
- tda9875_set(sd);
-}
-
-
-static int tda9875_g_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
-{
- struct tda9875 *t = to_state(sd);
-
- switch (ctrl->id) {
- case V4L2_CID_AUDIO_VOLUME:
- {
- int left = (t->lvol+84)*606;
- int right = (t->rvol+84)*606;
-
- ctrl->value=max(left,right);
- return 0;
- }
- case V4L2_CID_AUDIO_BALANCE:
- {
- int left = (t->lvol+84)*606;
- int right = (t->rvol+84)*606;
- int volume = max(left,right);
- int balance = (32768*min(left,right))/
- (volume ? volume : 1);
- ctrl->value=(left<right)?
- (65535-balance) : balance;
- return 0;
- }
- case V4L2_CID_AUDIO_BASS:
- ctrl->value = (t->bass+12)*2427; /* min -12 max +15 */
- return 0;
- case V4L2_CID_AUDIO_TREBLE:
- ctrl->value = (t->treble+12)*2730;/* min -12 max +12 */
- return 0;
- }
- return -EINVAL;
-}
-
-static int tda9875_s_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
-{
- struct tda9875 *t = to_state(sd);
- int chvol = 0, volume = 0, balance = 0, left, right;
-
- switch (ctrl->id) {
- case V4L2_CID_AUDIO_VOLUME:
- left = (t->lvol+84)*606;
- right = (t->rvol+84)*606;
-
- volume = max(left,right);
- balance = (32768*min(left,right))/
- (volume ? volume : 1);
- balance =(left<right)?
- (65535-balance) : balance;
-
- volume = ctrl->value;
-
- chvol=1;
- break;
- case V4L2_CID_AUDIO_BALANCE:
- left = (t->lvol+84)*606;
- right = (t->rvol+84)*606;
-
- volume=max(left,right);
-
- balance = ctrl->value;
-
- chvol=1;
- break;
- case V4L2_CID_AUDIO_BASS:
- t->bass = ((ctrl->value/2400)-12) & 0xff;
- if (t->bass > 15)
- t->bass = 15;
- if (t->bass < -12)
- t->bass = -12 & 0xff;
- break;
- case V4L2_CID_AUDIO_TREBLE:
- t->treble = ((ctrl->value/2700)-12) & 0xff;
- if (t->treble > 12)
- t->treble = 12;
- if (t->treble < -12)
- t->treble = -12 & 0xff;
- break;
- default:
- return -EINVAL;
- }
-
- if (chvol) {
- left = (min(65536 - balance,32768) *
- volume) / 32768;
- right = (min(balance,32768) *
- volume) / 32768;
- t->lvol = ((left/606)-84) & 0xff;
- if (t->lvol > 24)
- t->lvol = 24;
- if (t->lvol < -84)
- t->lvol = -84 & 0xff;
-
- t->rvol = ((right/606)-84) & 0xff;
- if (t->rvol > 24)
- t->rvol = 24;
- if (t->rvol < -84)
- t->rvol = -84 & 0xff;
- }
-
- tda9875_set(sd);
- return 0;
-}
-
-static int tda9875_queryctrl(struct v4l2_subdev *sd, struct v4l2_queryctrl *qc)
-{
- switch (qc->id) {
- case V4L2_CID_AUDIO_VOLUME:
- return v4l2_ctrl_query_fill(qc, 0, 65535, 65535 / 100, 58880);
- case V4L2_CID_AUDIO_BASS:
- case V4L2_CID_AUDIO_TREBLE:
- return v4l2_ctrl_query_fill(qc, 0, 65535, 65535 / 100, 32768);
- }
- return -EINVAL;
-}
-
-/* ----------------------------------------------------------------------- */
-
-static const struct v4l2_subdev_core_ops tda9875_core_ops = {
- .queryctrl = tda9875_queryctrl,
- .g_ctrl = tda9875_g_ctrl,
- .s_ctrl = tda9875_s_ctrl,
-};
-
-static const struct v4l2_subdev_ops tda9875_ops = {
- .core = &tda9875_core_ops,
-};
-
-/* ----------------------------------------------------------------------- */
-
-
-/* *********************** *
- * i2c interface functions *
- * *********************** */
-
-static int tda9875_checkit(struct i2c_client *client, int addr)
-{
- int dic, rev;
-
- dic = i2c_read_register(client, addr, 254);
- rev = i2c_read_register(client, addr, 255);
-
- if (dic == 0 || dic == 2) { /* tda9875 and tda9875A */
- v4l_info(client, "tda9875%s rev. %d detected at 0x%02x\n",
- dic == 0 ? "" : "A", rev, addr << 1);
- return 1;
- }
- v4l_info(client, "no such chip at 0x%02x (dic=0x%x rev=0x%x)\n",
- addr << 1, dic, rev);
- return 0;
-}
-
-static int tda9875_probe(struct i2c_client *client,
- const struct i2c_device_id *id)
-{
- struct tda9875 *t;
- struct v4l2_subdev *sd;
-
- v4l_info(client, "chip found @ 0x%02x (%s)\n",
- client->addr << 1, client->adapter->name);
-
- if (!tda9875_checkit(client, client->addr))
- return -ENODEV;
-
- t = kzalloc(sizeof(*t), GFP_KERNEL);
- if (!t)
- return -ENOMEM;
- sd = &t->sd;
- v4l2_i2c_subdev_init(sd, client, &tda9875_ops);
-
- do_tda9875_init(sd);
- return 0;
-}
-
-static int tda9875_remove(struct i2c_client *client)
-{
- struct v4l2_subdev *sd = i2c_get_clientdata(client);
-
- do_tda9875_init(sd);
- v4l2_device_unregister_subdev(sd);
- kfree(to_state(sd));
- return 0;
-}
-
-static const struct i2c_device_id tda9875_id[] = {
- { "tda9875", 0 },
- { }
-};
-MODULE_DEVICE_TABLE(i2c, tda9875_id);
-
-static struct i2c_driver tda9875_driver = {
- .driver = {
- .owner = THIS_MODULE,
- .name = "tda9875",
- },
- .probe = tda9875_probe,
- .remove = tda9875_remove,
- .id_table = tda9875_id,
-};
-
-static __init int init_tda9875(void)
-{
- return i2c_add_driver(&tda9875_driver);
-}
-
-static __exit void exit_tda9875(void)
-{
- i2c_del_driver(&tda9875_driver);
-}
-
-module_init(init_tda9875);
-module_exit(exit_tda9875);
int buf_size, gfp_t gfp_flags,
usb_complete_t complete_fn, void *context)
{
- struct urb *urb;
- void *mem;
- int i;
+ int i = 0;
- for (i = 0; i < num; i++) {
- urb = usb_alloc_urb(0, gfp_flags);
+ for (; i < num; i++) {
+ void *mem;
+ struct urb *urb = usb_alloc_urb(0, gfp_flags);
if (urb == NULL)
return i;
mem = usb_alloc_coherent(udev, buf_size, gfp_flags,
&urb->transfer_dma);
- if (mem == NULL)
+ if (mem == NULL) {
+ usb_free_urb(urb);
return i;
+ }
usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, ep_addr),
mem, buf_size, complete_fn, context);
/* Decrease the module use count to match the first try_module_get. */
module_put(client->driver->driver.owner);
- if (sd) {
- /* We return errors from v4l2_subdev_call only if we have the
- callback as the .s_config is not mandatory */
- int err = v4l2_subdev_call(sd, core, s_config,
- info->irq, info->platform_data);
-
- if (err && err != -ENOIOCTLCMD) {
- v4l2_device_unregister_subdev(sd);
- sd = NULL;
- }
- }
-
error:
/* If we have a client but no subdev, then something went wrong and
we must unregister the client. */
}
EXPORT_SYMBOL_GPL(v4l2_i2c_new_subdev_board);
-struct v4l2_subdev *v4l2_i2c_new_subdev_cfg(struct v4l2_device *v4l2_dev,
+struct v4l2_subdev *v4l2_i2c_new_subdev(struct v4l2_device *v4l2_dev,
struct i2c_adapter *adapter, const char *client_type,
- int irq, void *platform_data,
u8 addr, const unsigned short *probe_addrs)
{
struct i2c_board_info info;
memset(&info, 0, sizeof(info));
strlcpy(info.type, client_type, sizeof(info.type));
info.addr = addr;
- info.irq = irq;
- info.platform_data = platform_data;
return v4l2_i2c_new_subdev_board(v4l2_dev, adapter, &info, probe_addrs);
}
-EXPORT_SYMBOL_GPL(v4l2_i2c_new_subdev_cfg);
+EXPORT_SYMBOL_GPL(v4l2_i2c_new_subdev);
/* Return i2c client address of v4l2_subdev. */
unsigned short v4l2_i2c_subdev_addr(struct v4l2_subdev *sd)
int ret;
u32 size;
- ctrl->has_new = 1;
+ ctrl->is_new = 1;
switch (ctrl->type) {
case V4L2_CTRL_TYPE_INTEGER64:
ctrl->val64 = c->value64;
if (ctrl->done)
continue;
- for (i = 0; i < master->ncontrols; i++)
- cur_to_new(master->cluster[i]);
+ for (i = 0; i < master->ncontrols; i++) {
+ if (master->cluster[i]) {
+ cur_to_new(master->cluster[i]);
+ master->cluster[i]->is_new = 1;
+ }
+ }
/* Skip button controls and read-only controls. */
if (ctrl->type == V4L2_CTRL_TYPE_BUTTON ||
ctrl = ref->ctrl;
memset(qc, 0, sizeof(*qc));
- qc->id = ctrl->id;
+ if (id >= V4L2_CID_PRIVATE_BASE)
+ qc->id = id;
+ else
+ qc->id = ctrl->id;
strlcpy(qc->name, ctrl->name, sizeof(qc->name));
qc->minimum = ctrl->minimum;
qc->maximum = ctrl->maximum;
qc->default_value = ctrl->default_value;
- if (qc->type == V4L2_CTRL_TYPE_MENU)
+ if (ctrl->type == V4L2_CTRL_TYPE_MENU)
qc->step = 1;
else
qc->step = ctrl->step;
if (ctrl == NULL)
continue;
- if (ctrl->has_new) {
+ if (ctrl->is_new) {
/* Double check this: it may have changed since the
last check in try_or_set_ext_ctrls(). */
if (set && (ctrl->flags & V4L2_CTRL_FLAG_GRABBED))
v4l2_ctrl_lock(ctrl);
- /* Reset the 'has_new' flags of the cluster */
+ /* Reset the 'is_new' flags of the cluster */
for (j = 0; j < master->ncontrols; j++)
if (master->cluster[j])
- master->cluster[j]->has_new = 0;
+ master->cluster[j]->is_new = 0;
/* Copy the new caller-supplied control values.
- user_to_new() sets 'has_new' to 1. */
+ user_to_new() sets 'is_new' to 1. */
ret = cluster_walk(i, cs, helpers, user_to_new);
if (!ret)
int ret;
int i;
+ if (ctrl->flags & V4L2_CTRL_FLAG_READ_ONLY)
+ return -EACCES;
+
v4l2_ctrl_lock(ctrl);
- /* Reset the 'has_new' flags of the cluster */
+ /* Reset the 'is_new' flags of the cluster */
for (i = 0; i < master->ncontrols; i++)
if (master->cluster[i])
- master->cluster[i]->has_new = 0;
+ master->cluster[i]->is_new = 0;
ctrl->val = *val;
- ctrl->has_new = 1;
+ ctrl->is_new = 1;
ret = try_or_set_control_cluster(master, false);
if (!ret)
ret = try_or_set_control_cluster(master, true);
* The registration code assigns minor numbers and device node numbers
* based on the requested type and registers the new device node with
* the kernel.
+ *
+ * This function assumes that struct video_device was zeroed when it
+ * was allocated and does not contain any stale data.
+ *
* An error is returned if no free minor or device node number could be
* found, or if the registration of the device node failed.
*
int minor_offset = 0;
int minor_cnt = VIDEO_NUM_DEVICES;
const char *name_base;
- void *priv = vdev->dev.p;
/* A minor value of -1 marks this video device as never
having been registered */
}
/* Part 4: register the device with sysfs */
- memset(&vdev->dev, 0, sizeof(vdev->dev));
- /* The memset above cleared the device's device_private, so
- put back the copy we made earlier. */
- vdev->dev.p = priv;
vdev->dev.class = &video_class;
vdev->dev.devt = MKDEV(VIDEO_MAJOR, vdev->minor);
if (vdev->parent)
is a platform bus, then it is never deleted. */
if (client)
i2c_unregister_device(client);
+ continue;
}
#endif
#if defined(CONFIG_SPI)
if (spi)
spi_unregister_device(spi);
+ continue;
}
#endif
}
WARN_ON(sd->v4l2_dev != NULL);
if (!try_module_get(sd->owner))
return -ENODEV;
+ sd->v4l2_dev = v4l2_dev;
+ if (sd->internal_ops && sd->internal_ops->registered) {
+ err = sd->internal_ops->registered(sd);
+ if (err)
+ return err;
+ }
/* This just returns 0 if either of the two args is NULL */
err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler);
- if (err)
+ if (err) {
+ if (sd->internal_ops && sd->internal_ops->unregistered)
+ sd->internal_ops->unregistered(sd);
return err;
- sd->v4l2_dev = v4l2_dev;
+ }
spin_lock(&v4l2_dev->lock);
list_add_tail(&sd->list, &v4l2_dev->subdevs);
spin_unlock(&v4l2_dev->lock);
spin_lock(&sd->v4l2_dev->lock);
list_del(&sd->list);
spin_unlock(&sd->v4l2_dev->lock);
+ if (sd->internal_ops && sd->internal_ops->unregistered)
+ sd->internal_ops->unregistered(sd);
sd->v4l2_dev = NULL;
module_put(sd->owner);
}
{
struct v4l2_dbg_register *p = arg;
- if (!capable(CAP_SYS_ADMIN))
- ret = -EPERM;
- else if (ops->vidioc_g_register)
- ret = ops->vidioc_g_register(file, fh, p);
+ if (ops->vidioc_g_register) {
+ if (!capable(CAP_SYS_ADMIN))
+ ret = -EPERM;
+ else
+ ret = ops->vidioc_g_register(file, fh, p);
+ }
break;
}
case VIDIOC_DBG_S_REGISTER:
{
struct v4l2_dbg_register *p = arg;
- if (!capable(CAP_SYS_ADMIN))
- ret = -EPERM;
- else if (ops->vidioc_s_register)
- ret = ops->vidioc_s_register(file, fh, p);
+ if (ops->vidioc_s_register) {
+ if (!capable(CAP_SYS_ADMIN))
+ ret = -EPERM;
+ else
+ ret = ops->vidioc_s_register(file, fh, p);
+ }
break;
}
#endif
parport_unregister_device(cam->pdev);
w9966_set_state(cam, W9966_STATE_PDEV, 0);
}
+ memset(cam, 0, sizeof(*cam));
}
/* allocate memory *before* doing anything to the hardware
* in case allocation fails */
zr->stat_com = kzalloc(BUZ_NUM_STAT_COM * 4, GFP_KERNEL);
- zr->video_dev = kmalloc(sizeof(struct video_device), GFP_KERNEL);
+ zr->video_dev = video_device_alloc();
if (!zr->stat_com || !zr->video_dev) {
dprintk(1,
KERN_ERR
{
int rc;
- workqueue = create_freezeable_workqueue("kmemstick");
+ workqueue = create_freezable_workqueue("kmemstick");
if (!workqueue)
return -ENOMEM;
#define COPYRIGHT "Copyright (c) 1999-2008 " MODULEAUTHOR
#endif
-#define MPT_LINUX_VERSION_COMMON "3.04.17"
-#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.17"
+#define MPT_LINUX_VERSION_COMMON "3.04.18"
+#define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.04.18"
#define WHAT_MAGIC_STRING "@" "(" "#" ")"
#define show_mptmod_ver(s,ver) \
return 1;
}
+static int
+mptctl_release(struct inode *inode, struct file *filep)
+{
+ fasync_helper(-1, filep, 0, &async_queue);
+ return 0;
+}
+
static int
mptctl_fasync(int fd, struct file *filep, int mode)
{
.llseek = no_llseek,
.fasync = mptctl_fasync,
.unlocked_ioctl = mptctl_ioctl,
+ .release = mptctl_release,
#ifdef CONFIG_COMPAT
.compat_ioctl = compat_mpctl_ioctl,
#endif
}
out:
- printk(MYIOC_s_INFO_FMT "task abort: %s (sc=%p)\n",
- ioc->name, ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), SCpnt);
+ printk(MYIOC_s_INFO_FMT "task abort: %s (rv=%04x) (sc=%p) (sn=%ld)\n",
+ ioc->name, ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), retval,
+ SCpnt, SCpnt->serial_number);
return retval;
}
vdevice = SCpnt->device->hostdata;
if (!vdevice || !vdevice->vtarget) {
- retval = SUCCESS;
+ retval = 0;
goto out;
}
{
int rc;
- workqueue = create_freezeable_workqueue("tifm");
+ workqueue = create_freezable_workqueue("tifm");
if (!workqueue)
return -ENOMEM;
if (x86_hyper != &x86_hyper_vmware)
return -ENODEV;
- vmballoon_wq = create_freezeable_workqueue("vmmemctl");
+ vmballoon_wq = create_freezable_workqueue("vmmemctl");
if (!vmballoon_wq) {
pr_err("failed to create workqueue\n");
return -ENOMEM;
goto out;
}
- mmc = mmc_alloc_host(sizeof(*mmc), &pdev->dev);
+ mmc = mmc_alloc_host(sizeof(struct sdh_host), &pdev->dev);
if (!mmc) {
ret = -ENOMEM;
goto out;
*/
#include <linux/mmc/host.h>
+#include <linux/err.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
}
host->clk = clk_get(&pdev->dev, "mmc");
- if (!host->clk) {
- ret = -ENOENT;
+ if (IS_ERR(host->clk)) {
+ ret = PTR_ERR(host->clk);
dev_err(&pdev->dev, "Failed to get mmc clock\n");
goto err_free_host;
}
#include <linux/ioport.h>
#include <linux/device.h>
#include <linux/interrupt.h>
+#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/highmem.h>
* is asserted (likewise for RX)
* @fifohalfsize: number of bytes that can be written when MCI_TXFIFOHALFEMPTY
* is asserted (likewise for RX)
- * @broken_blockend: the MCI_DATABLOCKEND is broken on the hardware
- * and will not work at all.
- * @broken_blockend_dma: the MCI_DATABLOCKEND is broken on the hardware when
- * using DMA.
* @sdio: variant supports SDIO
* @st_clkdiv: true if using a ST-specific clock divider algorithm
*/
unsigned int datalength_bits;
unsigned int fifosize;
unsigned int fifohalfsize;
- bool broken_blockend;
- bool broken_blockend_dma;
bool sdio;
bool st_clkdiv;
};
.fifohalfsize = 8 * 4,
.clkreg_enable = 1 << 13, /* HWFCEN */
.datalength_bits = 16,
- .broken_blockend_dma = true,
.sdio = true,
};
.clkreg = MCI_CLK_ENABLE,
.clkreg_enable = 1 << 14, /* HWFCEN */
.datalength_bits = 24,
- .broken_blockend = true,
.sdio = true,
.st_clkdiv = true,
};
host->data = data;
host->size = data->blksz * data->blocks;
host->data_xfered = 0;
- host->blockend = false;
- host->dataend = false;
mmci_init_sg(host, data);
mmci_data_irq(struct mmci_host *host, struct mmc_data *data,
unsigned int status)
{
- struct variant_data *variant = host->variant;
-
/* First check for errors */
if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
+ u32 remain, success;
+
+ /* Calculate how far we are into the transfer */
+ remain = readl(host->base + MMCIDATACNT);
+ success = data->blksz * data->blocks - remain;
+
dev_dbg(mmc_dev(host->mmc), "MCI ERROR IRQ (status %08x)\n", status);
- if (status & MCI_DATACRCFAIL)
+ if (status & MCI_DATACRCFAIL) {
+ /* Last block was not successful */
+ host->data_xfered = round_down(success - 1, data->blksz);
data->error = -EILSEQ;
- else if (status & MCI_DATATIMEOUT)
+ } else if (status & MCI_DATATIMEOUT) {
+ host->data_xfered = round_down(success, data->blksz);
data->error = -ETIMEDOUT;
- else if (status & (MCI_TXUNDERRUN|MCI_RXOVERRUN))
+ } else if (status & (MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
+ host->data_xfered = round_down(success, data->blksz);
data->error = -EIO;
-
- /* Force-complete the transaction */
- host->blockend = true;
- host->dataend = true;
+ }
/*
* We hit an error condition. Ensure that any data
}
}
- /*
- * On ARM variants in PIO mode, MCI_DATABLOCKEND
- * is always sent first, and we increase the
- * transfered number of bytes for that IRQ. Then
- * MCI_DATAEND follows and we conclude the transaction.
- *
- * On the Ux500 single-IRQ variant MCI_DATABLOCKEND
- * doesn't seem to immediately clear from the status,
- * so we can't use it keep count when only one irq is
- * used because the irq will hit for other reasons, and
- * then the flag is still up. So we use the MCI_DATAEND
- * IRQ at the end of the entire transfer because
- * MCI_DATABLOCKEND is broken.
- *
- * In the U300, the IRQs can arrive out-of-order,
- * e.g. MCI_DATABLOCKEND sometimes arrives after MCI_DATAEND,
- * so for this case we use the flags "blockend" and
- * "dataend" to make sure both IRQs have arrived before
- * concluding the transaction. (This does not apply
- * to the Ux500 which doesn't fire MCI_DATABLOCKEND
- * at all.) In DMA mode it suffers from the same problem
- * as the Ux500.
- */
- if (status & MCI_DATABLOCKEND) {
- /*
- * Just being a little over-cautious, we do not
- * use this progressive update if the hardware blockend
- * flag is unreliable: since it can stay high between
- * IRQs it will corrupt the transfer counter.
- */
- if (!variant->broken_blockend)
- host->data_xfered += data->blksz;
- host->blockend = true;
- }
-
- if (status & MCI_DATAEND)
- host->dataend = true;
+ if (status & MCI_DATABLOCKEND)
+ dev_err(mmc_dev(host->mmc), "stray MCI_DATABLOCKEND interrupt\n");
- /*
- * On variants with broken blockend we shall only wait for dataend,
- * on others we must sync with the blockend signal since they can
- * appear out-of-order.
- */
- if (host->dataend && (host->blockend || variant->broken_blockend)) {
+ if (status & MCI_DATAEND || data->error) {
mmci_stop_data(host);
- /* Reset these flags */
- host->blockend = false;
- host->dataend = false;
-
- /*
- * Variants with broken blockend flags need to handle the
- * end of the entire transfer here.
- */
- if (variant->broken_blockend && !data->error)
+ if (!data->error)
+ /* The error clause is handled above, success! */
host->data_xfered += data->blksz * data->blocks;
if (!data->stop) {
host->cmd = NULL;
- cmd->resp[0] = readl(base + MMCIRESPONSE0);
- cmd->resp[1] = readl(base + MMCIRESPONSE1);
- cmd->resp[2] = readl(base + MMCIRESPONSE2);
- cmd->resp[3] = readl(base + MMCIRESPONSE3);
-
if (status & MCI_CMDTIMEOUT) {
cmd->error = -ETIMEDOUT;
} else if (status & MCI_CMDCRCFAIL && cmd->flags & MMC_RSP_CRC) {
cmd->error = -EILSEQ;
+ } else {
+ cmd->resp[0] = readl(base + MMCIRESPONSE0);
+ cmd->resp[1] = readl(base + MMCIRESPONSE1);
+ cmd->resp[2] = readl(base + MMCIRESPONSE2);
+ cmd->resp[3] = readl(base + MMCIRESPONSE3);
}
if (!cmd->data || cmd->error) {
struct variant_data *variant = id->data;
struct mmci_host *host;
struct mmc_host *mmc;
- unsigned int mask;
int ret;
/* must have platform data */
goto irq0_free;
}
- mask = MCI_IRQENABLE;
- /* Don't use the datablockend flag if it's broken */
- if (variant->broken_blockend)
- mask &= ~MCI_DATABLOCKEND;
-
- writel(mask, host->base + MMCIMASK0);
+ writel(MCI_IRQENABLE, host->base + MMCIMASK0);
amba_set_drvdata(dev, mmc);
#define MCI_IRQENABLE \
(MCI_CMDCRCFAILMASK|MCI_DATACRCFAILMASK|MCI_CMDTIMEOUTMASK| \
MCI_DATATIMEOUTMASK|MCI_TXUNDERRUNMASK|MCI_RXOVERRUNMASK| \
- MCI_CMDRESPENDMASK|MCI_CMDSENTMASK|MCI_DATABLOCKENDMASK)
+ MCI_CMDRESPENDMASK|MCI_CMDSENTMASK)
/* These interrupts are directed to IRQ1 when two IRQ lines are available */
#define MCI_IRQ1MASK \
struct timer_list timer;
unsigned int oldstat;
- bool blockend;
- bool dataend;
-
/* pio stuff */
struct sg_mapping_iter sg_miter;
unsigned int size;
host->curr.user_pages = 0;
box = &nc->cmd[0];
- for (i = 0; i < host->dma.num_ents; i++) {
- box->cmd = CMD_MODE_BOX;
- /* Initialize sg dma address */
- sg->dma_address = page_to_dma(mmc_dev(host->mmc), sg_page(sg))
- + sg->offset;
+ /* location of command block must be 64 bit aligned */
+ BUG_ON(host->dma.cmd_busaddr & 0x07);
- if (i == (host->dma.num_ents - 1))
+ nc->cmdptr = (host->dma.cmd_busaddr >> 3) | CMD_PTR_LP;
+ host->dma.hdr.cmdptr = DMOV_CMD_PTR_LIST |
+ DMOV_CMD_ADDR(host->dma.cmdptr_busaddr);
+ host->dma.hdr.complete_func = msmsdcc_dma_complete_func;
+
+ n = dma_map_sg(mmc_dev(host->mmc), host->dma.sg,
+ host->dma.num_ents, host->dma.dir);
+ if (n == 0) {
+ printk(KERN_ERR "%s: Unable to map in all sg elements\n",
+ mmc_hostname(host->mmc));
+ host->dma.sg = NULL;
+ host->dma.num_ents = 0;
+ return -ENOMEM;
+ }
+
+ for_each_sg(host->dma.sg, sg, n, i) {
+
+ box->cmd = CMD_MODE_BOX;
+
+ if (i == n - 1)
box->cmd |= CMD_LC;
rows = (sg_dma_len(sg) % MCI_FIFOSIZE) ?
(sg_dma_len(sg) / MCI_FIFOSIZE) + 1 :
box->cmd |= CMD_DST_CRCI(crci);
}
box++;
- sg++;
- }
-
- /* location of command block must be 64 bit aligned */
- BUG_ON(host->dma.cmd_busaddr & 0x07);
-
- nc->cmdptr = (host->dma.cmd_busaddr >> 3) | CMD_PTR_LP;
- host->dma.hdr.cmdptr = DMOV_CMD_PTR_LIST |
- DMOV_CMD_ADDR(host->dma.cmdptr_busaddr);
- host->dma.hdr.complete_func = msmsdcc_dma_complete_func;
-
- n = dma_map_sg(mmc_dev(host->mmc), host->dma.sg,
- host->dma.num_ents, host->dma.dir);
-/* dsb inside dma_map_sg will write nc out to mem as well */
-
- if (n != host->dma.num_ents) {
- printk(KERN_ERR "%s: Unable to map in all sg elements\n",
- mmc_hostname(host->mmc));
- host->dma.sg = NULL;
- host->dma.num_ents = 0;
- return -ENOMEM;
}
return 0;
if (host->timer.function)
pr_info("%s: Polling status mode enabled\n", mmc_hostname(mmc));
-#if BUSCLK_PWRSAVE
- msmsdcc_disable_clocks(host, 1);
-#endif
return 0;
cmd_irq_free:
free_irq(cmd_irqres->start, host);
host->clock = clock;
}
+/**
+ * sdhci_s3c_platform_8bit_width - support 8bit buswidth
+ * @host: The SDHCI host being queried
+ * @width: MMC_BUS_WIDTH_ macro for the bus width being requested
+ *
+ * The controller supports an 8-bit bus width but is not a v3 controller,
+ * so we provide platform_8bit_width() to enable 8-bit transfers.
+ */
+static int sdhci_s3c_platform_8bit_width(struct sdhci_host *host, int width)
+{
+ u8 ctrl;
+
+ ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
+
+ switch (width) {
+ case MMC_BUS_WIDTH_8:
+ ctrl |= SDHCI_CTRL_8BITBUS;
+ ctrl &= ~SDHCI_CTRL_4BITBUS;
+ break;
+ case MMC_BUS_WIDTH_4:
+ ctrl |= SDHCI_CTRL_4BITBUS;
+ ctrl &= ~SDHCI_CTRL_8BITBUS;
+ break;
+ default:
+ break;
+ }
+
+ sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
+
+ return 0;
+}
+
static struct sdhci_ops sdhci_s3c_ops = {
.get_max_clock = sdhci_s3c_get_max_clk,
.set_clock = sdhci_s3c_set_clock,
.get_min_clock = sdhci_s3c_get_min_clock,
+ .platform_8bit_width = sdhci_s3c_platform_8bit_width,
};
static void sdhci_s3c_notify_change(struct platform_device *dev, int state)
if (pdata->cd_type == S3C_SDHCI_CD_PERMANENT)
host->mmc->caps = MMC_CAP_NONREMOVABLE;
+ if (pdata->host_caps)
+ host->mmc->caps |= pdata->host_caps;
+
host->quirks |= (SDHCI_QUIRK_32BIT_DMA_ADDR |
SDHCI_QUIRK_32BIT_DMA_SIZE);
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/kernel.h>
-#include <linux/usb.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/mmc/host.h>
init_completion(&dev->dma_done);
- dev->card_workqueue = create_freezeable_workqueue(DRV_NAME);
+ dev->card_workqueue = create_freezable_workqueue(DRV_NAME);
if (!dev->card_workqueue)
goto error9;
static __init int sm_module_init(void)
{
int error = 0;
- cache_flush_workqueue = create_freezeable_workqueue("smflush");
+ cache_flush_workqueue = create_freezable_workqueue("smflush");
if (IS_ERR(cache_flush_workqueue))
return PTR_ERR(cache_flush_workqueue);
ubi->nor_flash = 1;
}
- /*
- * Set UBI min. I/O size (@ubi->min_io_size). We use @mtd->writebufsize
- * for these purposes, not @mtd->writesize. At the moment this does not
- * matter for NAND, because currently @mtd->writebufsize is equivalent to
- * @mtd->writesize for all NANDs. However, some CFI NOR flashes may
- * have @mtd->writebufsize which is multiple of @mtd->writesize.
- *
- * The reason we use @mtd->writebufsize for @ubi->min_io_size is that
- * UBI and UBIFS recovery algorithms rely on the fact that if there was
- * an unclean power cut, then we can find offset of the last corrupted
- * node, align the offset to @ubi->min_io_size, read the rest of the
- * eraseblock starting from this offset, and check whether there are
- * only 0xFF bytes. If yes, then we are probably dealing with a
- * corruption caused by a power cut, if not, then this is probably some
- * severe corruption.
- *
- * Thus, we have to use the maximum write unit size of the flash, which
- * is @mtd->writebufsize, because @mtd->writesize is the minimum write
- * size, not the maximum.
- */
- if (ubi->mtd->type == MTD_NANDFLASH)
- ubi_assert(ubi->mtd->writebufsize == ubi->mtd->writesize);
- else if (ubi->mtd->type == MTD_NORFLASH)
- ubi_assert(ubi->mtd->writebufsize % ubi->mtd->writesize == 0);
-
- ubi->min_io_size = ubi->mtd->writebufsize;
-
+ ubi->min_io_size = ubi->mtd->writesize;
ubi->hdrs_min_io_size = ubi->mtd->writesize >> ubi->mtd->subpage_sft;
/*
default n
config MLX4_DEBUG
- bool "Verbose debugging output" if (MLX4_CORE && EMBEDDED)
+ bool "Verbose debugging output" if (MLX4_CORE && EXPERT)
depends on MLX4_CORE
default y
---help---
module_init(ks8695_init);
module_exit(ks8695_cleanup);
-MODULE_AUTHOR("Simtec Electronics")
+MODULE_AUTHOR("Simtec Electronics");
MODULE_DESCRIPTION("Micrel KS8695 (Centaur) Ethernet driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:" MODULENAME);
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L2C_B)},
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L2C_B2)},
{PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L1D)},
+ {PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATHEROS_L1D_2_0)},
/* required last entry */
{ 0 }
};
spin_lock_bh(&adapter->mcc_lock);
wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+ status = -EBUSY;
+ goto err;
+ }
req = nonemb_cmd->va;
sge = nonembedded_sgl(wrb);
status = be_mcc_notify_wait(adapter);
+err:
spin_unlock_bh(&adapter->mcc_lock);
return status;
}
if (adapter->link_up != link_up) {
adapter->link_speed = -1;
if (link_up) {
- netif_start_queue(netdev);
netif_carrier_on(netdev);
printk(KERN_INFO "%s: Link up\n", netdev->name);
} else {
- netif_stop_queue(netdev);
netif_carrier_off(netdev);
printk(KERN_INFO "%s: Link down\n", netdev->name);
}
netif_napi_add(netdev, &adapter->tx_eq.napi, be_poll_tx_mcc,
BE_NAPI_WEIGHT);
-
- netif_stop_queue(netdev);
}
static void be_unmap_pci_bars(struct be_adapter *adapter)
!(data & ETH_FLAG_RXVLAN))
return -EINVAL;
+ /* TSO with VLAN tag won't work with current firmware */
+ if (!(data & ETH_FLAG_TXVLAN))
+ return -EINVAL;
+
rc = ethtool_op_set_flags(dev, data, ETH_FLAG_RXHASH | ETH_FLAG_RXVLAN |
ETH_FLAG_TXVLAN);
if (rc)
/* AER (Advanced Error Reporting) hooks */
err = pci_enable_pcie_error_reporting(pdev);
- if (err) {
- dev_err(&pdev->dev, "pci_enable_pcie_error_reporting "
- "failed 0x%x\n", err);
- /* non-fatal, continue */
- }
+ if (!err)
+ bp->flags |= BNX2_FLAG_AER_ENABLED;
} else {
bp->pcix_cap = pci_find_capability(pdev, PCI_CAP_ID_PCIX);
return 0;
err_out_unmap:
- if (bp->flags & BNX2_FLAG_PCIE)
+ if (bp->flags & BNX2_FLAG_AER_ENABLED) {
pci_disable_pcie_error_reporting(pdev);
+ bp->flags &= ~BNX2_FLAG_AER_ENABLED;
+ }
if (bp->regview) {
iounmap(bp->regview);
kfree(bp->temp_stats_blk);
- if (bp->flags & BNX2_FLAG_PCIE)
+ if (bp->flags & BNX2_FLAG_AER_ENABLED) {
pci_disable_pcie_error_reporting(pdev);
+ bp->flags &= ~BNX2_FLAG_AER_ENABLED;
+ }
free_netdev(dev);
}
rtnl_unlock();
- if (!(bp->flags & BNX2_FLAG_PCIE))
+ if (!(bp->flags & BNX2_FLAG_AER_ENABLED))
return result;
err = pci_cleanup_aer_uncorrect_error_status(pdev);
#define BNX2_FLAG_JUMBO_BROKEN 0x00000800
#define BNX2_FLAG_CAN_KEEP_VLAN 0x00001000
#define BNX2_FLAG_BROKEN_STATS 0x00002000
+#define BNX2_FLAG_AER_ENABLED 0x00004000
struct bnx2_napi bnx2_napi[BNX2_MAX_MSIX_VEC];
* (you will need to reboot afterwards) */
/* #define BNX2X_STOP_ON_ERROR */
-#define DRV_MODULE_VERSION "1.62.00-3"
-#define DRV_MODULE_RELDATE "2010/12/21"
+#define DRV_MODULE_VERSION "1.62.00-5"
+#define DRV_MODULE_RELDATE "2011/01/30"
#define BNX2X_BC_VER 0x040200
#define BNX2X_MULTI_QUEUE
#define PORT_HW_CFG_LANE_SWAP_CFG_31203120 0x0000d8d8
/* forced only */
#define PORT_HW_CFG_LANE_SWAP_CFG_32103210 0x0000e4e4
+ /* Indicate whether to swap the external phy polarity */
+#define PORT_HW_CFG_SWAP_PHY_POLARITY_MASK 0x00010000
+#define PORT_HW_CFG_SWAP_PHY_POLARITY_DISABLED 0x00000000
+#define PORT_HW_CFG_SWAP_PHY_POLARITY_ENABLED 0x00010000
u32 external_phy_config;
#define PORT_HW_CFG_SERDES_EXT_PHY_TYPE_MASK 0xff000000
offset = phy->addr + ser_lane;
if (CHIP_IS_E2(bp))
- aer_val = 0x2800 + offset - 1;
+ aer_val = 0x3800 + offset - 1;
else
aer_val = 0x3800 + offset;
CL45_WR_OVER_CL22(bp, phy,
if (!vars->link_up)
break;
case LED_MODE_ON:
- if (SINGLE_MEDIA_DIRECT(params)) {
+ if (params->phy[EXT_PHY1].type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8727 &&
+ CHIP_IS_E2(bp) && params->num_phys == 2) {
+ /**
+ * This is a work-around for E2+8727 Configurations
+			 * This is a work-around for E2+8727 configurations.
+ if (mode == LED_MODE_ON ||
+ speed == SPEED_10000){
+ REG_WR(bp, NIG_REG_LED_MODE_P0 + port*4, 0);
+ REG_WR(bp, NIG_REG_LED_10G_P0 + port*4, 1);
+
+ tmp = EMAC_RD(bp, EMAC_REG_EMAC_LED);
+ EMAC_WR(bp, EMAC_REG_EMAC_LED,
+ (tmp | EMAC_LED_OVERRIDE));
+ return rc;
+ }
+ } else if (SINGLE_MEDIA_DIRECT(params)) {
/**
* This is a work-around for HW issue found when link
* is up in CL73
pause_result);
}
}
-
-static void bnx2x_8073_8727_external_rom_boot(struct bnx2x *bp,
+static u8 bnx2x_8073_8727_external_rom_boot(struct bnx2x *bp,
struct bnx2x_phy *phy,
u8 port)
{
+ u32 count = 0;
+ u16 fw_ver1, fw_msgout;
+ u8 rc = 0;
+
/* Boot port from external ROM */
/* EDC grst */
bnx2x_cl45_write(bp, phy,
MDIO_PMA_REG_GEN_CTRL,
MDIO_PMA_REG_GEN_CTRL_ROM_RESET_INTERNAL_MP);
- /* wait for 120ms for code download via SPI port */
- msleep(120);
+ /* Delay 100ms per the PHY specifications */
+ msleep(100);
+
+	/* The 8073 sometimes takes longer to download */
+ do {
+ count++;
+ if (count > 300) {
+ DP(NETIF_MSG_LINK,
+			   "bnx2x_8073_8727_external_rom_boot port %x: "
+ "Download failed. fw version = 0x%x\n",
+ port, fw_ver1);
+ rc = -EINVAL;
+ break;
+ }
+
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_ROM_VER1, &fw_ver1);
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_M8051_MSGOUT_REG, &fw_msgout);
+
+ msleep(1);
+ } while (fw_ver1 == 0 || fw_ver1 == 0x4321 ||
+ ((fw_msgout & 0xff) != 0x03 && (phy->type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8073)));
/* Clear ser_boot_ctl bit */
bnx2x_cl45_write(bp, phy,
MDIO_PMA_DEVAD,
MDIO_PMA_REG_MISC_CTRL1, 0x0000);
bnx2x_save_bcm_spirom_ver(bp, phy, port);
-}
-static void bnx2x_8073_set_xaui_low_power_mode(struct bnx2x *bp,
- struct bnx2x_phy *phy)
-{
- u16 val;
- bnx2x_cl45_read(bp, phy,
- MDIO_PMA_DEVAD, MDIO_PMA_REG_8073_CHIP_REV, &val);
-
- if (val == 0) {
- /* Mustn't set low power mode in 8073 A0 */
- return;
- }
-
- /* Disable PLL sequencer (use read-modify-write to clear bit 13) */
- bnx2x_cl45_read(bp, phy,
- MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, &val);
- val &= ~(1<<13);
- bnx2x_cl45_write(bp, phy,
- MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, val);
-
- /* PLL controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805E, 0x1077);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805D, 0x0000);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805C, 0x030B);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805B, 0x1240);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x805A, 0x2490);
-
- /* Tx Controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A7, 0x0C74);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A6, 0x9041);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80A5, 0x4640);
-
- /* Rx Controls */
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FE, 0x01C4);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FD, 0x9249);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, 0x80FC, 0x2015);
+ DP(NETIF_MSG_LINK,
+	   "bnx2x_8073_8727_external_rom_boot port %x: "
+ "Download complete. fw version = 0x%x\n",
+ port, fw_ver1);
- /* Enable PLL sequencer (use read-modify-write to set bit 13) */
- bnx2x_cl45_read(bp, phy, MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, &val);
- val |= (1<<13);
- bnx2x_cl45_write(bp, phy, MDIO_XS_DEVAD, MDIO_XS_PLL_SEQUENCER, val);
+ return rc;
}
/******************************************************************/
bnx2x_8073_set_pause_cl37(params, phy, vars);
- bnx2x_8073_set_xaui_low_power_mode(bp, phy);
-
bnx2x_cl45_read(bp, phy,
MDIO_PMA_DEVAD, MDIO_PMA_REG_M8051_MSGOUT_REG, &tmp1);
DP(NETIF_MSG_LINK, "Before rom RX_ALARM(port1): 0x%x\n", tmp1);
+ /**
+	 * If this is forced speed, set to KR or KX (all others are not
+ * supported)
+ */
+ /* Swap polarity if required - Must be done only in non-1G mode */
+ if (params->lane_config & PORT_HW_CFG_SWAP_PHY_POLARITY_ENABLED) {
+ /* Configure the 8073 to swap _P and _N of the KR lines */
+ DP(NETIF_MSG_LINK, "Swapping polarity for the 8073\n");
+ /* 10G Rx/Tx and 1G Tx signal polarity swap */
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8073_OPT_DIGITAL_CTRL, &val);
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8073_OPT_DIGITAL_CTRL,
+ (val | (3<<9)));
+ }
+
+
/* Enable CL37 BAM */
if (REG_RD(bp, params->shmem_base +
offsetof(struct shmem_region, dev_info.
}
if (link_up) {
+ /* Swap polarity if required */
+ if (params->lane_config &
+ PORT_HW_CFG_SWAP_PHY_POLARITY_ENABLED) {
+ /* Configure the 8073 to swap P and N of the KR lines */
+ bnx2x_cl45_read(bp, phy,
+ MDIO_XS_DEVAD,
+ MDIO_XS_REG_8073_RX_CTRL_PCIE, &val1);
+ /**
+ * Set bit 3 to invert Rx in 1G mode and clear this bit
+			 * when it's in 10G mode.
+ */
+ if (vars->line_speed == SPEED_1000) {
+				DP(NETIF_MSG_LINK, "Swapping 1G polarity for "
+					      "the 8073\n");
+ val1 |= (1<<3);
+ } else
+ val1 &= ~(1<<3);
+
+ bnx2x_cl45_write(bp, phy,
+ MDIO_XS_DEVAD,
+ MDIO_XS_REG_8073_RX_CTRL_PCIE,
+ val1);
+ }
bnx2x_ext_phy_10G_an_resolve(bp, phy, vars);
bnx2x_8073_resolve_fc(phy, params, vars);
+ vars->duplex = DUPLEX_FULL;
}
return link_up;
}
else
vars->line_speed = SPEED_10000;
bnx2x_ext_phy_resolve_fc(phy, params, vars);
+ vars->duplex = DUPLEX_FULL;
}
return link_up;
}
DP(NETIF_MSG_LINK, "port %x: External link is down\n",
params->port);
}
- if (link_up)
+ if (link_up) {
bnx2x_ext_phy_resolve_fc(phy, params, vars);
+ vars->duplex = DUPLEX_FULL;
+ DP(NETIF_MSG_LINK, "duplex = 0x%x\n", vars->duplex);
+ }
if ((DUAL_MEDIA(params)) &&
(phy->req_line_speed == SPEED_1000)) {
MDIO_PMA_REG_8481_LED2_MASK,
0x18);
+ /* Select activity source by Tx and Rx, as suggested by PHY AE */
bnx2x_cl45_write(bp, phy,
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_LED3_MASK,
- 0x0040);
+ 0x0006);
+
+ /* Select the closest activity blink rate to that in 10/100/1000 */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_BLINK,
+ 0);
+
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_84823_CTL_LED_CTL_1, &val);
+ val |= MDIO_PMA_REG_84823_LED3_STRETCH_EN; /* stretch_en for LED3 */
+
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_84823_CTL_LED_CTL_1, val);
/* 'Interrupt Mask' */
bnx2x_cl45_write(bp, phy,
/* Check link 10G */
if (val2 & (1<<11)) {
vars->line_speed = SPEED_10000;
+ vars->duplex = DUPLEX_FULL;
link_up = 1;
bnx2x_ext_phy_10G_an_resolve(bp, phy, vars);
} else { /* Check Legacy speed link */
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_LED1_MASK,
0x80);
+
+ /* Tell LED3 to blink on source */
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ &val);
+ val &= ~(7<<6);
+ val |= (1<<6); /* A83B[8:6]= 1 */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ val);
}
break;
}
MDIO_AN_DEVAD, MDIO_AN_REG_MASTER_STATUS,
&val2);
vars->line_speed = SPEED_10000;
+ vars->duplex = DUPLEX_FULL;
DP(NETIF_MSG_LINK, "SFX7101 AN status 0x%x->Master=%x\n",
val2, (val2 & (1<<14)));
bnx2x_ext_phy_10G_an_resolve(bp, phy, vars);
struct bnx2x_phy phy[PORT_MAX];
struct bnx2x_phy *phy_blk[PORT_MAX];
u16 val;
- s8 port;
+ s8 port = 0;
s8 port_of_path = 0;
-
- bnx2x_ext_phy_hw_reset(bp, 0);
+ u32 swap_val, swap_override;
+ swap_val = REG_RD(bp, NIG_REG_PORT_SWAP);
+ swap_override = REG_RD(bp, NIG_REG_STRAP_OVERRIDE);
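+ /* honor the NIG port swap strapping when selecting the port to reset */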
+ port ^= (swap_val && swap_override);
+ bnx2x_ext_phy_hw_reset(bp, port);
/* PART1 - Reset both phys */
for (port = PORT_MAX - 1; port >= PORT_0; port--) {
u32 shmem_base, shmem2_base;
/* PART2 - Download firmware to both phys */
for (port = PORT_MAX - 1; port >= PORT_0; port--) {
- u16 fw_ver1;
if (CHIP_IS_E2(bp))
port_of_path = 0;
else
DP(NETIF_MSG_LINK, "Loading spirom for phy address 0x%x\n",
phy_blk[port]->addr);
- bnx2x_8073_8727_external_rom_boot(bp, phy_blk[port],
- port_of_path);
-
- bnx2x_cl45_read(bp, phy_blk[port],
- MDIO_PMA_DEVAD,
- MDIO_PMA_REG_ROM_VER1, &fw_ver1);
- if (fw_ver1 == 0 || fw_ver1 == 0x4321) {
- DP(NETIF_MSG_LINK,
- "bnx2x_8073_common_init_phy port %x:"
- "Download failed. fw version = 0x%x\n",
- port, fw_ver1);
+ if (bnx2x_8073_8727_external_rom_boot(bp, phy_blk[port],
+ port_of_path))
return -EINVAL;
- }
/* Only set bit 10 = 1 (Tx power down) */
bnx2x_cl45_read(bp, phy_blk[port],
}
/* PART2 - Download firmware to both phys */
for (port = PORT_MAX - 1; port >= PORT_0; port--) {
- u16 fw_ver1;
if (CHIP_IS_E2(bp))
port_of_path = 0;
else
port_of_path = port;
DP(NETIF_MSG_LINK, "Loading spirom for phy address 0x%x\n",
phy_blk[port]->addr);
- bnx2x_8073_8727_external_rom_boot(bp, phy_blk[port],
- port_of_path);
- bnx2x_cl45_read(bp, phy_blk[port],
- MDIO_PMA_DEVAD,
- MDIO_PMA_REG_ROM_VER1, &fw_ver1);
- if (fw_ver1 == 0 || fw_ver1 == 0x4321) {
- DP(NETIF_MSG_LINK,
- "bnx2x_8727_common_init_phy port %x:"
- "Download failed. fw version = 0x%x\n",
- port, fw_ver1);
+ if (bnx2x_8073_8727_external_rom_boot(bp, phy_blk[port],
+ port_of_path))
return -EINVAL;
- }
- }
+ }
return 0;
}
u32 shmem2_base_path[], u32 chip_id)
{
u8 rc = 0;
+ u32 phy_ver;
u8 phy_index;
u32 ext_phy_type, ext_phy_config;
DP(NETIF_MSG_LINK, "Begin common phy init\n");
if (CHIP_REV_IS_EMUL(bp))
return 0;
+ /* Check if common init was already done */
+ phy_ver = REG_RD(bp, shmem_base_path[0] +
+ offsetof(struct shmem_region,
+ port_mb[PORT_0].ext_phy_fw_version));
+ if (phy_ver) {
+ DP(NETIF_MSG_LINK, "Not doing common init; phy ver is 0x%x\n",
+ phy_ver);
+ return 0;
+ }
+
/* Read the ext_phy_type for arbitrary port(0) */
for (phy_index = EXT_PHY1; phy_index < MAX_PHYS;
phy_index++) {
/* accept matched ucast */
drop_all_ucast = 0;
}
- if (filters & BNX2X_ACCEPT_MULTICAST) {
+ if (filters & BNX2X_ACCEPT_MULTICAST)
/* accept matched mcast */
drop_all_mcast = 0;
- if (IS_MF_SI(bp))
- /* since mcast addresses won't arrive with ovlan,
- * fw needs to accept all of them in
- * switch-independent mode */
- accp_all_mcast = 1;
- }
+
if (filters & BNX2X_ACCEPT_ALL_UNICAST) {
 /* accept all ucast */
drop_all_ucast = 0;
def_q_filters |= BNX2X_ACCEPT_UNICAST | BNX2X_ACCEPT_BROADCAST |
BNX2X_ACCEPT_MULTICAST;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id,
+ BNX2X_ACCEPT_UNICAST |
+ BNX2X_ACCEPT_MULTICAST);
+ }
#endif
break;
def_q_filters |= BNX2X_ACCEPT_UNICAST | BNX2X_ACCEPT_BROADCAST |
BNX2X_ACCEPT_ALL_MULTICAST;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ /*
+ * Prevent duplication of multicast packets by configuring FCoE
+ * L2 Client to receive only matched unicast frames.
+ */
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id,
+ BNX2X_ACCEPT_UNICAST);
+ }
#endif
break;
case BNX2X_RX_MODE_PROMISC:
def_q_filters |= BNX2X_PROMISCUOUS_MODE;
#ifdef BCM_CNIC
- cl_id = bnx2x_fcoe(bp, cl_id);
- bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_UNICAST |
- BNX2X_ACCEPT_MULTICAST);
+ /*
+ * Prevent packets duplication by configuring DROP_ALL for FCoE
+ * L2 Client.
+ */
+ if (!NO_FCOE(bp)) {
+ cl_id = bnx2x_fcoe(bp, cl_id);
+ bnx2x_rxq_set_mac_filters(bp, cl_id, BNX2X_ACCEPT_NONE);
+ }
#endif
/* pass management unicast packets as well */
llh_mask |= NIG_LLH0_BRB1_DRV_MASK_REG_LLH0_BRB1_DRV_MASK_UNCST;
}
}
- bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
- bp->common.shmem_base,
- bp->common.shmem2_base);
-
bnx2x_setup_fan_failure_detection(bp);
/* clear PXP2 attentions */
bnx2x_init_block(bp, MCP_BLOCK, init_stage);
bnx2x_init_block(bp, DMAE_BLOCK, init_stage);
- bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
- bp->common.shmem_base,
- bp->common.shmem2_base);
if (bnx2x_fan_failure_det_req(bp, bp->common.shmem_base,
bp->common.shmem2_base, port)) {
u32 reg_addr = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
(ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN))
bp->mdio.prtad =
XGXS_EXT_PHY_ADDR(ext_phy_config);
+
+ /*
+ * Check if a hw lock is required to access the MDC/MDIO bus to the PHY(s).
+ * In MF mode it is always set, to cover the self-test cases.
+ */
+ if (IS_MF(bp))
+ bp->port.need_hw_lock = 1;
+ else
+ bp->port.need_hw_lock = bnx2x_hw_lock_required(bp,
+ bp->common.shmem_base,
+ bp->common.shmem2_base);
}
static void __devinit bnx2x_get_mac_hwinfo(struct bnx2x *bp)
#define MDIO_CTL_REG_84823_MEDIA_PRIORITY_COPPER 0x0000
#define MDIO_CTL_REG_84823_MEDIA_PRIORITY_FIBER 0x0100
#define MDIO_CTL_REG_84823_MEDIA_FIBER_1G 0x1000
+#define MDIO_CTL_REG_84823_USER_CTRL_REG 0x4005
+#define MDIO_CTL_REG_84823_USER_CTRL_CMS 0x0080
+#define MDIO_PMA_REG_84823_CTL_LED_CTL_1 0xa8e3
+#define MDIO_PMA_REG_84823_LED3_STRETCH_EN 0x0080
#define IGU_FUNC_BASE 0x0400
if (!(dev->flags & IFF_MASTER))
goto out;
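+ /* the skb may still be shared (e.g. with a packet tap);
+ * work on a private copy before pulling the header
+ */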
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ goto out;
+
if (!pskb_may_pull(skb, sizeof(struct lacpdu)))
goto out;
goto out;
}
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ goto out;
+
if (!pskb_may_pull(skb, arp_hdr_len(bond_dev)))
goto out;
if (!slave || !slave_do_arp_validate(bond, slave))
goto out_unlock;
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ goto out_unlock;
+
if (!pskb_may_pull(skb, arp_hdr_len(dev)))
goto out_unlock;
As only the sending and receiving of CAN frames is implemented, this
driver should work with the (serial/USB) CAN hardware from:
- www.canusb.com / www.can232.com / www.mictronic.com / www.canhack.de
+ www.canusb.com / www.can232.com / www.mictronics.de / www.canhack.de
Userspace tools to attach the SLCAN line discipline (slcan_attach,
slcand) can be found in the can-utils at the SocketCAN SVN, see
source "drivers/net/can/usb/Kconfig"
+source "drivers/net/can/softing/Kconfig"
+
config CAN_DEBUG_DEVICES
bool "CAN devices debugging messages"
depends on CAN
can-dev-y := dev.o
obj-y += usb/
+obj-y += softing/
obj-$(CONFIG_CAN_SJA1000) += sja1000/
obj-$(CONFIG_CAN_MSCAN) += mscan/
* at91_can.c - CAN network driver for AT91 SoC CAN controller
*
* (C) 2007 by Hans J. Koch <hjk@hansjkoch.de>
- * (C) 2008, 2009, 2010 by Marc Kleine-Budde <kernel@pengutronix.de>
+ * (C) 2008, 2009, 2010, 2011 by Marc Kleine-Budde <kernel@pengutronix.de>
*
* This software may be distributed under the terms of the GNU General
* Public License ("GPL") version 2 as distributed in the 'COPYING'
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>
+#include <linux/rtnetlink.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <mach/board.h>
-#define AT91_NAPI_WEIGHT 12
+#define AT91_NAPI_WEIGHT 11
/*
* RX/TX Mailbox split
* don't dare to touch
*/
-#define AT91_MB_RX_NUM 12
+#define AT91_MB_RX_NUM 11
#define AT91_MB_TX_SHIFT 2
-#define AT91_MB_RX_FIRST 0
+#define AT91_MB_RX_FIRST 1
#define AT91_MB_RX_LAST (AT91_MB_RX_FIRST + AT91_MB_RX_NUM - 1)
#define AT91_MB_RX_MASK(i) ((1 << (i)) - 1)
#define AT91_MB_RX_SPLIT 8
#define AT91_MB_RX_LOW_LAST (AT91_MB_RX_SPLIT - 1)
-#define AT91_MB_RX_LOW_MASK (AT91_MB_RX_MASK(AT91_MB_RX_SPLIT))
+#define AT91_MB_RX_LOW_MASK (AT91_MB_RX_MASK(AT91_MB_RX_SPLIT) & \
+ ~AT91_MB_RX_MASK(AT91_MB_RX_FIRST))
#define AT91_MB_TX_NUM (1 << AT91_MB_TX_SHIFT)
#define AT91_MB_TX_FIRST (AT91_MB_RX_LAST + 1)
struct clk *clk;
struct at91_can_data *pdata;
+
+ canid_t mb0_id;
};
static struct can_bittiming_const at91_bittiming_const = {
set_mb_mode_prio(priv, mb, mode, 0);
}
+static inline u32 at91_can_id_to_reg_mid(canid_t can_id)
+{
+ u32 reg_mid;
+
+ if (can_id & CAN_EFF_FLAG)
+ reg_mid = (can_id & CAN_EFF_MASK) | AT91_MID_MIDE;
+ else
+ reg_mid = (can_id & CAN_SFF_MASK) << 18;
+
+ return reg_mid;
+}
+
/*
 * Switch transceiver on or off
*/
{
struct at91_priv *priv = netdev_priv(dev);
unsigned int i;
+ u32 reg_mid;
/*
- * The first 12 mailboxes are used as a reception FIFO. The
- * last mailbox is configured with overwrite option. The
- * overwrite flag indicates a FIFO overflow.
+ * Due to a chip bug (errata 50.2.6.3 & 50.3.5.3) the first
+ * mailbox is disabled. The next 11 mailboxes are used as a
+ * reception FIFO. The last mailbox is configured with
+ * overwrite option. The overwrite flag indicates a FIFO
+ * overflow.
*/
+ reg_mid = at91_can_id_to_reg_mid(priv->mb0_id);
+ for (i = 0; i < AT91_MB_RX_FIRST; i++) {
+ set_mb_mode(priv, i, AT91_MB_MODE_DISABLED);
+ at91_write(priv, AT91_MID(i), reg_mid);
+ at91_write(priv, AT91_MCR(i), 0x0); /* clear dlc */
+ }
+
for (i = AT91_MB_RX_FIRST; i < AT91_MB_RX_LAST; i++)
set_mb_mode(priv, i, AT91_MB_MODE_RX);
set_mb_mode(priv, AT91_MB_RX_LAST, AT91_MB_MODE_RX_OVRWR);
set_mb_mode_prio(priv, i, AT91_MB_MODE_TX, 0);
/* Reset tx and rx helper pointers */
- priv->tx_next = priv->tx_echo = priv->rx_next = 0;
+ priv->tx_next = priv->tx_echo = 0;
+ priv->rx_next = AT91_MB_RX_FIRST;
}
static int at91_set_bittiming(struct net_device *dev)
netdev_err(dev, "BUG! TX buffer full when queue awake!\n");
return NETDEV_TX_BUSY;
}
-
- if (cf->can_id & CAN_EFF_FLAG)
- reg_mid = (cf->can_id & CAN_EFF_MASK) | AT91_MID_MIDE;
- else
- reg_mid = (cf->can_id & CAN_SFF_MASK) << 18;
-
+ reg_mid = at91_can_id_to_reg_mid(cf->can_id);
reg_mcr = ((cf->can_id & CAN_RTR_FLAG) ? AT91_MCR_MRTR : 0) |
(cf->can_dlc << 16) | AT91_MCR_MTCR;
*
* Theory of Operation:
*
- * 12 of the 16 mailboxes on the chip are reserved for RX. we split
- * them into 2 groups. The lower group holds 8 and upper 4 mailboxes.
+ * 11 of the 16 mailboxes on the chip are reserved for RX. We split
+ * them into 2 groups. The lower group holds 7 and the upper 4 mailboxes.
*
* Like it or not, but the chip always saves a received CAN message
* into the first free mailbox it finds (starting with the
* lowest). This makes it very difficult to read the messages in the
* right order from the chip. This is how we work around that problem:
*
- * The first message goes into mb nr. 0 and issues an interrupt. All
+ * The first message goes into mb nr. 1 and issues an interrupt. All
* rx ints are disabled in the interrupt handler and a napi poll is
* scheduled. We read the mailbox, but do _not_ reenable the mb (to
* receive another message).
*
* lower mbxs upper
- * ______^______ __^__
- * / \ / \
+ * ____^______ __^__
+ * / \ / \
* +-+-+-+-+-+-+-+-++-+-+-+-+
- * |x|x|x|x|x|x|x|x|| | | | |
+ * | |x|x|x|x|x|x|x|| | | | |
* +-+-+-+-+-+-+-+-++-+-+-+-+
* 0 0 0 0 0 0 0 0 0 0 1 1 \ mail
* 0 1 2 3 4 5 6 7 8 9 0 1 / box
+ * ^
+ * |
+ * \
+ * unused, due to chip bug
*
* The variable priv->rx_next points to the next mailbox to read a
* message from. As long we're in the lower mailboxes we just read the
"order of incoming frames cannot be guaranteed\n");
again:
- for (mb = find_next_bit(addr, AT91_MB_RX_NUM, priv->rx_next);
- mb < AT91_MB_RX_NUM && quota > 0;
+ for (mb = find_next_bit(addr, AT91_MB_RX_LAST + 1, priv->rx_next);
+ mb < AT91_MB_RX_LAST + 1 && quota > 0;
reg_sr = at91_read(priv, AT91_SR),
- mb = find_next_bit(addr, AT91_MB_RX_NUM, ++priv->rx_next)) {
+ mb = find_next_bit(addr, AT91_MB_RX_LAST + 1, ++priv->rx_next)) {
at91_read_msg(dev, mb);
/* reactivate mailboxes */
/* upper group completed, look again in lower */
if (priv->rx_next > AT91_MB_RX_LOW_LAST &&
- quota > 0 && mb >= AT91_MB_RX_NUM) {
- priv->rx_next = 0;
+ quota > 0 && mb > AT91_MB_RX_LAST) {
+ priv->rx_next = AT91_MB_RX_FIRST;
goto again;
}
.ndo_start_xmit = at91_start_xmit,
};
+static ssize_t at91_sysfs_show_mb0_id(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct at91_priv *priv = netdev_priv(to_net_dev(dev));
+
+ if (priv->mb0_id & CAN_EFF_FLAG)
+ return snprintf(buf, PAGE_SIZE, "0x%08x\n", priv->mb0_id);
+ else
+ return snprintf(buf, PAGE_SIZE, "0x%03x\n", priv->mb0_id);
+}
+
+static ssize_t at91_sysfs_set_mb0_id(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct net_device *ndev = to_net_dev(dev);
+ struct at91_priv *priv = netdev_priv(ndev);
+ unsigned long can_id;
+ ssize_t ret;
+ int err;
+
+ rtnl_lock();
+
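+ /*
+ * mb0_id only takes effect when the mailboxes are set up again,
+ * i.e. on the next ifup, so refuse changes on a running interface
+ */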
+ if (ndev->flags & IFF_UP) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ err = strict_strtoul(buf, 0, &can_id);
+ if (err) {
+ ret = err;
+ goto out;
+ }
+
+ if (can_id & CAN_EFF_FLAG)
+ can_id &= CAN_EFF_MASK | CAN_EFF_FLAG;
+ else
+ can_id &= CAN_SFF_MASK;
+
+ priv->mb0_id = can_id;
+ ret = count;
+
+ out:
+ rtnl_unlock();
+ return ret;
+}
+
+static DEVICE_ATTR(mb0_id, S_IWUSR | S_IRUGO,
+ at91_sysfs_show_mb0_id, at91_sysfs_set_mb0_id);
+
+static struct attribute *at91_sysfs_attrs[] = {
+ &dev_attr_mb0_id.attr,
+ NULL,
+};
+
+static struct attribute_group at91_sysfs_attr_group = {
+ .attrs = at91_sysfs_attrs,
+};
+
static int __devinit at91_can_probe(struct platform_device *pdev)
{
struct net_device *dev;
dev->netdev_ops = &at91_netdev_ops;
dev->irq = irq;
dev->flags |= IFF_ECHO;
+ dev->sysfs_groups[0] = &at91_sysfs_attr_group;
priv = netdev_priv(dev);
priv->can.clock.freq = clk_get_rate(clk);
priv->dev = dev;
priv->clk = clk;
priv->pdata = pdev->dev.platform_data;
+ priv->mb0_id = 0x7ff;
netif_napi_add(dev, &priv->napi, at91_poll, AT91_NAPI_WEIGHT);
return count;
}
-static DEVICE_ATTR(termination, S_IWUGO | S_IRUGO, ican3_sysfs_show_term,
+static DEVICE_ATTR(termination, S_IWUSR | S_IRUGO, ican3_sysfs_show_term,
ican3_sysfs_set_term);
static struct attribute *ican3_sysfs_attrs[] = {
goto open_unlock;
}
- priv->wq = create_freezeable_workqueue("mcp251x_wq");
+ priv->wq = create_freezable_workqueue("mcp251x_wq");
INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
config CAN_MSCAN
- depends on CAN_DEV && (PPC || M68K || M68KNOMMU)
+ depends on CAN_DEV && (PPC || M68K)
tristate "Support for Freescale MSCAN based chips"
---help---
The Motorola Scalable Controller Area Network (MSCAN) definition
static struct can_bittiming_const pch_can_bittiming_const = {
.name = KBUILD_MODNAME,
- .tseg1_min = 1,
+ .tseg1_min = 2,
.tseg1_max = 16,
.tseg2_min = 1,
.tseg2_max = 8,
struct pch_can_priv *priv = netdev_priv(ndev);
unregister_candev(priv->ndev);
- pci_iounmap(pdev, priv->regs);
if (priv->use_msi)
pci_disable_msi(priv->dev);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
pch_can_reset(priv);
+ pci_iounmap(pdev, priv->regs);
free_candev(priv->ndev);
}
priv->use_msi = 0;
} else {
netdev_err(ndev, "PCH CAN opened with MSI\n");
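+ /* MSI interrupts are inbound memory writes, so bus mastering must be enabled */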
+ pci_set_master(pdev);
priv->use_msi = 1;
}
--- /dev/null
+config CAN_SOFTING
+ tristate "Softing Gmbh CAN generic support"
+ depends on CAN_DEV && HAS_IOMEM
+ ---help---
+ Support for CAN cards from Softing GmbH and some cards
+ from Vector GmbH.
+ Softing GmbH CAN cards come with 1 or 2 physical busses.
+ Those cards typically use dual-port RAM to communicate
+ with the host CPU. The interface is then identical for PCI
+ and PCMCIA cards. This driver operates on a platform device,
+ which is created by the softing_cs or softing_pci driver.
+ Warning:
+ The API of the card does not allow fine-grained control per bus;
+ it controls the 2 busses on the card together.
+ As such, some actions (start/stop/bus-off recovery) on 1 bus
+ must temporarily bring down the other bus as well.
+
+config CAN_SOFTING_CS
+ tristate "Softing Gmbh CAN pcmcia cards"
+ depends on PCMCIA
+ depends on CAN_SOFTING
+ ---help---
+ Support for PCMCIA cards from Softing GmbH and some cards
+ from Vector GmbH.
+ You need firmware for these, which you can get at
+ http://developer.berlios.de/projects/socketcan/
+ This version of the driver is written against
+ firmware version 4.6 (softing-fw-4.6-binaries.tar.gz).
+ In order to use the card as a CAN device, you need the Softing
+ generic support too.
--- /dev/null
+
+softing-y := softing_main.o softing_fw.o
+obj-$(CONFIG_CAN_SOFTING) += softing.o
+obj-$(CONFIG_CAN_SOFTING_CS) += softing_cs.o
+
+ccflags-$(CONFIG_CAN_DEBUG_DEVICES) := -DDEBUG
--- /dev/null
+/*
+ * softing common interfaces
+ *
+ * by Kurt Van Dijck, 2008-2010
+ */
+
+#include <linux/atomic.h>
+#include <linux/netdevice.h>
+#include <linux/ktime.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+#include <linux/can.h>
+#include <linux/can/dev.h>
+
+#include "softing_platform.h"
+
+struct softing;
+
+struct softing_priv {
+ struct can_priv can; /* must be the first member! */
+ struct net_device *netdev;
+ struct softing *card;
+ struct {
+ int pending;
+ /* variables which hold the circular buffer */
+ int echo_put;
+ int echo_get;
+ } tx;
+ struct can_bittiming_const btr_const;
+ int index;
+ uint8_t output;
+ uint16_t chip;
+};
+#define netdev2softing(netdev) ((struct softing_priv *)netdev_priv(netdev))
+
+struct softing {
+ const struct softing_platform_data *pdat;
+ struct platform_device *pdev;
+ struct net_device *net[2];
+ spinlock_t spin; /* protect this structure & DPRAM access */
+ ktime_t ts_ref;
+ ktime_t ts_overflow; /* timestamp overflow value, in ktime */
+
+ struct {
+ /* indication of firmware status */
+ int up;
+ /* protection of the 'up' variable */
+ struct mutex lock;
+ } fw;
+ struct {
+ int nr;
+ int requested;
+ int svc_count;
+ unsigned int dpram_position;
+ } irq;
+ struct {
+ int pending;
+ int last_bus;
+ /*
+ * keep the bus that last tx'd a message,
+ * in order to let every netdev queue resume
+ */
+ } tx;
+ __iomem uint8_t *dpram;
+ unsigned long dpram_phys;
+ unsigned long dpram_size;
+ struct {
+ uint16_t fw_version, hw_version, license, serial;
+ uint16_t chip[2];
+ unsigned int freq; /* remote cpu's operating frequency */
+ } id;
+};
+
+extern int softing_default_output(struct net_device *netdev);
+
+extern ktime_t softing_raw2ktime(struct softing *card, u32 raw);
+
+extern int softing_chip_poweron(struct softing *card);
+
+extern int softing_bootloader_command(struct softing *card, int16_t cmd,
+ const char *msg);
+
+/* Load firmware after reset */
+extern int softing_load_fw(const char *file, struct softing *card,
+ __iomem uint8_t *virt, unsigned int size, int offset);
+
+/* Load final application firmware after bootloader */
+extern int softing_load_app_fw(const char *file, struct softing *card);
+
+/*
+ * enable or disable irq
+ * only called with fw.lock locked
+ */
+extern int softing_enable_irq(struct softing *card, int enable);
+
+/* start/stop 1 bus on card */
+extern int softing_startstop(struct net_device *netdev, int up);
+
+/* netif_rx() */
+extern int softing_netdev_rx(struct net_device *netdev,
+ const struct can_frame *msg, ktime_t ktime);
+
+/* SOFTING DPRAM mappings */
+#define DPRAM_RX 0x0000
+ #define DPRAM_RX_SIZE 32
+ #define DPRAM_RX_CNT 16
+#define DPRAM_RX_RD 0x0201 /* uint8_t */
+#define DPRAM_RX_WR 0x0205 /* uint8_t */
+#define DPRAM_RX_LOST 0x0207 /* uint8_t */
+
+#define DPRAM_FCT_PARAM 0x0300 /* int16_t [20] */
+#define DPRAM_FCT_RESULT 0x0328 /* int16_t */
+#define DPRAM_FCT_HOST 0x032b /* uint16_t */
+
+#define DPRAM_INFO_BUSSTATE 0x0331 /* uint16_t */
+#define DPRAM_INFO_BUSSTATE2 0x0335 /* uint16_t */
+#define DPRAM_INFO_ERRSTATE 0x0339 /* uint16_t */
+#define DPRAM_INFO_ERRSTATE2 0x033d /* uint16_t */
+#define DPRAM_RESET 0x0341 /* uint16_t */
+#define DPRAM_CLR_RECV_FIFO 0x0345 /* uint16_t */
+#define DPRAM_RESET_TIME 0x034d /* uint16_t */
+#define DPRAM_TIME 0x0350 /* uint64_t */
+#define DPRAM_WR_START 0x0358 /* uint8_t */
+#define DPRAM_WR_END 0x0359 /* uint8_t */
+#define DPRAM_RESET_RX_FIFO 0x0361 /* uint16_t */
+#define DPRAM_RESET_TX_FIFO 0x0364 /* uint8_t */
+#define DPRAM_READ_FIFO_LEVEL 0x0365 /* uint8_t */
+#define DPRAM_RX_FIFO_LEVEL 0x0366 /* uint16_t */
+#define DPRAM_TX_FIFO_LEVEL 0x0366 /* uint16_t */
+
+#define DPRAM_TX 0x0400 /* uint16_t */
+ #define DPRAM_TX_SIZE 16
+ #define DPRAM_TX_CNT 32
+#define DPRAM_TX_RD 0x0601 /* uint8_t */
+#define DPRAM_TX_WR 0x0605 /* uint8_t */
+
+#define DPRAM_COMMAND 0x07e0 /* uint16_t */
+#define DPRAM_RECEIPT 0x07f0 /* uint16_t */
+#define DPRAM_IRQ_TOHOST 0x07fe /* uint8_t */
+#define DPRAM_IRQ_TOCARD 0x07ff /* uint8_t */
+
+#define DPRAM_V2_RESET 0x0e00 /* uint8_t */
+#define DPRAM_V2_IRQ_TOHOST 0x0e02 /* uint8_t */
+
+#define TXMAX (DPRAM_TX_CNT - 1)
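+/* wr == rd means 'TX fifo full', so one of the DPRAM_TX_CNT slots stays unused */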
+
+/* DPRAM return codes */
+#define RES_NONE 0
+#define RES_OK 1
+#define RES_NOK 2
+#define RES_UNKNOWN 3
+/* DPRAM flags */
+#define CMD_TX 0x01
+#define CMD_ACK 0x02
+#define CMD_XTD 0x04
+#define CMD_RTR 0x08
+#define CMD_ERR 0x10
+#define CMD_BUS2 0x80
+
+/* returned fifo entry bus state masks */
+#define SF_MASK_BUSOFF 0x80
+#define SF_MASK_EPASSIVE 0x60
+
+/* bus states */
+#define STATE_BUSOFF 2
+#define STATE_EPASSIVE 1
+#define STATE_EACTIVE 0
--- /dev/null
+/*
+ * Copyright (C) 2008-2010
+ *
+ * - Kurt Van Dijck, EIA Electronics
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the version 2 of the GNU General Public License
+ * as published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+
+#include <pcmcia/cistpl.h>
+#include <pcmcia/ds.h>
+
+#include "softing_platform.h"
+
+static int softingcs_index;
+static spinlock_t softingcs_index_lock;
+
+static int softingcs_reset(struct platform_device *pdev, int v);
+static int softingcs_enable_irq(struct platform_device *pdev, int v);
+
+/*
+ * platform_data descriptions
+ */
+#define MHZ (1000*1000)
+static const struct softing_platform_data softingcs_platform_data[] = {
+{
+ .name = "CANcard",
+ .manf = 0x0168, .prod = 0x001,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 16 * MHZ, .max_brp = 32, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "CANcard-NEC",
+ .manf = 0x0168, .prod = 0x002,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 16 * MHZ, .max_brp = 32, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "CANcard-SJA",
+ .manf = 0x0168, .prod = 0x004,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 20 * MHZ, .max_brp = 32, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cansja.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "CANcard-2",
+ .manf = 0x0168, .prod = 0x005,
+ .generation = 2,
+ .nbus = 2,
+ .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4,
+ .dpram_size = 0x1000,
+ .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = NULL,
+}, {
+ .name = "Vector-CANcard",
+ .manf = 0x0168, .prod = 0x081,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 16 * MHZ, .max_brp = 64, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "Vector-CANcard-SJA",
+ .manf = 0x0168, .prod = 0x084,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 20 * MHZ, .max_brp = 32, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cansja.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "Vector-CANcard-2",
+ .manf = 0x0168, .prod = 0x085,
+ .generation = 2,
+ .nbus = 2,
+ .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4,
+ .dpram_size = 0x1000,
+ .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = NULL,
+}, {
+ .name = "EDICcard-NEC",
+ .manf = 0x0168, .prod = 0x102,
+ .generation = 1,
+ .nbus = 2,
+ .freq = 16 * MHZ, .max_brp = 64, .max_sjw = 4,
+ .dpram_size = 0x0800,
+ .boot = {0x0000, 0x000000, fw_dir "bcard.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = softingcs_enable_irq,
+}, {
+ .name = "EDICcard-2",
+ .manf = 0x0168, .prod = 0x105,
+ .generation = 2,
+ .nbus = 2,
+ .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4,
+ .dpram_size = 0x1000,
+ .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",},
+ .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",},
+ .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",},
+ .reset = softingcs_reset,
+ .enable_irq = NULL,
+}, {
+ 0, 0,
+},
+};
+
+MODULE_FIRMWARE(fw_dir "bcard.bin");
+MODULE_FIRMWARE(fw_dir "ldcard.bin");
+MODULE_FIRMWARE(fw_dir "cancard.bin");
+MODULE_FIRMWARE(fw_dir "cansja.bin");
+
+MODULE_FIRMWARE(fw_dir "bcard2.bin");
+MODULE_FIRMWARE(fw_dir "ldcard2.bin");
+MODULE_FIRMWARE(fw_dir "cancrd2.bin");
+
+static __devinit const struct softing_platform_data
+*softingcs_find_platform_data(unsigned int manf, unsigned int prod)
+{
+ const struct softing_platform_data *lp;
+
+ for (lp = softingcs_platform_data; lp->manf; ++lp) {
+ if ((lp->manf == manf) && (lp->prod == prod))
+ return lp;
+ }
+ return NULL;
+}
+
+/*
+ * platformdata callbacks
+ */
+static int softingcs_reset(struct platform_device *pdev, int v)
+{
+ struct pcmcia_device *pcmcia = to_pcmcia_dev(pdev->dev.parent);
+
+ dev_dbg(&pdev->dev, "pcmcia config [2] %02x\n", v ? 0 : 0x20);
+ return pcmcia_write_config_byte(pcmcia, 2, v ? 0 : 0x20);
+}
+
+static int softingcs_enable_irq(struct platform_device *pdev, int v)
+{
+ struct pcmcia_device *pcmcia = to_pcmcia_dev(pdev->dev.parent);
+
+ dev_dbg(&pdev->dev, "pcmcia config [0] %02x\n", v ? 0x60 : 0);
+ return pcmcia_write_config_byte(pcmcia, 0, v ? 0x60 : 0);
+}
+
+/*
+ * pcmcia check
+ */
+static __devinit int softingcs_probe_config(struct pcmcia_device *pcmcia,
+ void *priv_data)
+{
+ struct softing_platform_data *pdat = priv_data;
+ struct resource *pres;
+ int memspeed = 0;
+
+ WARN_ON(!pdat);
+ pres = pcmcia->resource[PCMCIA_IOMEM_0];
+ if (resource_size(pres) < 0x1000)
+ return -ERANGE;
+
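+ /* gen-1 cards need an 8-bit, wait-stated memory window; gen-2 cards use 16 bit */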
+ pres->flags |= WIN_MEMORY_TYPE_CM | WIN_ENABLE;
+ if (pdat->generation < 2) {
+ pres->flags |= WIN_USE_WAIT | WIN_DATA_WIDTH_8;
+ memspeed = 3;
+ } else {
+ pres->flags |= WIN_DATA_WIDTH_16;
+ }
+ return pcmcia_request_window(pcmcia, pres, memspeed);
+}
+
+static __devexit void softingcs_remove(struct pcmcia_device *pcmcia)
+{
+ struct platform_device *pdev = pcmcia->priv;
+
+ /* free bits */
+ platform_device_unregister(pdev);
+ /* release pcmcia stuff */
+ pcmcia_disable_device(pcmcia);
+}
+
+/*
+ * platform_device wrapper
+ * pdev->resource has 2 entries: io & irq
+ */
+static void softingcs_pdev_release(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ kfree(pdev);
+}
+
+static __devinit int softingcs_probe(struct pcmcia_device *pcmcia)
+{
+ int ret;
+ struct platform_device *pdev;
+ const struct softing_platform_data *pdat;
+ struct resource *pres;
+ struct dev {
+ struct platform_device pdev;
+ struct resource res[2];
+ } *dev;
+
+ /* find matching platform_data */
+ pdat = softingcs_find_platform_data(pcmcia->manf_id, pcmcia->card_id);
+ if (!pdat)
+ return -ENOTTY;
+
+ /* setup pcmcia device */
+ pcmcia->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IOMEM |
+ CONF_AUTO_SET_VPP | CONF_AUTO_CHECK_VCC;
+ ret = pcmcia_loop_config(pcmcia, softingcs_probe_config, (void *)pdat);
+ if (ret)
+ goto pcmcia_failed;
+
+ ret = pcmcia_enable_device(pcmcia);
+ if (ret < 0)
+ goto pcmcia_failed;
+
+ pres = pcmcia->resource[PCMCIA_IOMEM_0];
+ if (!pres) {
+ ret = -EBADF;
+ goto pcmcia_bad;
+ }
+
+ /* create softing platform device */
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+ goto mem_failed;
+ }
+ dev->pdev.resource = dev->res;
+ dev->pdev.num_resources = ARRAY_SIZE(dev->res);
+ dev->pdev.dev.release = softingcs_pdev_release;
+
+ pdev = &dev->pdev;
+ pdev->dev.platform_data = (void *)pdat;
+ pdev->dev.parent = &pcmcia->dev;
+ pcmcia->priv = pdev;
+
+ /* platform device resources */
+ pdev->resource[0].flags = IORESOURCE_MEM;
+ pdev->resource[0].start = pres->start;
+ pdev->resource[0].end = pres->end;
+
+ pdev->resource[1].flags = IORESOURCE_IRQ;
+ pdev->resource[1].start = pcmcia->irq;
+ pdev->resource[1].end = pdev->resource[1].start;
+
+ /* platform device setup */
+ spin_lock(&softingcs_index_lock);
+ pdev->id = softingcs_index++;
+ spin_unlock(&softingcs_index_lock);
+ pdev->name = "softing";
+ dev_set_name(&pdev->dev, "softingcs.%i", pdev->id);
+ ret = platform_device_register(pdev);
+ if (ret < 0)
+ goto platform_failed;
+
+ dev_info(&pcmcia->dev, "created %s\n", dev_name(&pdev->dev));
+ return 0;
+
+platform_failed:
+ kfree(dev);
+mem_failed:
+pcmcia_bad:
+pcmcia_failed:
+ pcmcia_disable_device(pcmcia);
+ pcmcia->priv = NULL;
+ return ret ?: -ENODEV;
+}
+
+static /*const*/ struct pcmcia_device_id softingcs_ids[] = {
+ /* softing */
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0001),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0002),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0004),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0005),
+ /* vector, manufacturer? */
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0081),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0084),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0085),
+ /* EDIC */
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0102),
+ PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0105),
+ PCMCIA_DEVICE_NULL,
+};
+
+MODULE_DEVICE_TABLE(pcmcia, softingcs_ids);
+
+static struct pcmcia_driver softingcs_driver = {
+ .owner = THIS_MODULE,
+ .name = "softingcs",
+ .id_table = softingcs_ids,
+ .probe = softingcs_probe,
+ .remove = __devexit_p(softingcs_remove),
+};
+
+static int __init softingcs_start(void)
+{
+ spin_lock_init(&softingcs_index_lock);
+ return pcmcia_register_driver(&softingcs_driver);
+}
+
+static void __exit softingcs_stop(void)
+{
+ pcmcia_unregister_driver(&softingcs_driver);
+}
+
+module_init(softingcs_start);
+module_exit(softingcs_stop);
+
+MODULE_DESCRIPTION("softing CANcard driver"
+ ", links PCMCIA card to softing driver");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+/*
+ * Copyright (C) 2008-2010
+ *
+ * - Kurt Van Dijck, EIA Electronics
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the version 2 of the GNU General Public License
+ * as published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/firmware.h>
+#include <linux/sched.h>
+#include <asm/div64.h>
+
+#include "softing.h"
+
+/*
+ * low level DPRAM command.
+ * Make sure that card->dpram[DPRAM_FCT_HOST] is preset
+ */
+static int _softing_fct_cmd(struct softing *card, int16_t cmd, uint16_t vector,
+ const char *msg)
+{
+ int ret;
+ unsigned long stamp;
+
+ iowrite16(cmd, &card->dpram[DPRAM_FCT_PARAM]);
+ iowrite8(vector >> 8, &card->dpram[DPRAM_FCT_HOST + 1]);
+ iowrite8(vector, &card->dpram[DPRAM_FCT_HOST]);
+ /* be sure to flush this to the card */
+ wmb();
+ stamp = jiffies + 1 * HZ;
+ /* wait for card */
+ do {
+ /* DPRAM_FCT_HOST is _not_ aligned */
+ ret = ioread8(&card->dpram[DPRAM_FCT_HOST]) +
+ (ioread8(&card->dpram[DPRAM_FCT_HOST + 1]) << 8);
+ /* don't have any cached variables */
+ rmb();
+ if (ret == RES_OK)
+ /* read return-value now */
+ return ioread16(&card->dpram[DPRAM_FCT_RESULT]);
+
+ if ((ret != vector) || time_after(jiffies, stamp))
+ break;
+ /* process context => relax */
+ usleep_range(500, 10000);
+ } while (1);
+
+ ret = (ret == RES_NONE) ? -ETIMEDOUT : -ECANCELED;
+ dev_alert(&card->pdev->dev, "firmware %s failed (%i)\n", msg, ret);
+ return ret;
+}
+
+static int softing_fct_cmd(struct softing *card, int16_t cmd, const char *msg)
+{
+ int ret;
+
+ ret = _softing_fct_cmd(card, cmd, 0, msg);
+ if (ret > 0) {
+ dev_alert(&card->pdev->dev, "%s returned %u\n", msg, ret);
+ ret = -EIO;
+ }
+ return ret;
+}
+
+int softing_bootloader_command(struct softing *card, int16_t cmd,
+ const char *msg)
+{
+ int ret;
+ unsigned long stamp;
+
+ iowrite16(RES_NONE, &card->dpram[DPRAM_RECEIPT]);
+ iowrite16(cmd, &card->dpram[DPRAM_COMMAND]);
+ /* be sure to flush this to the card */
+ wmb();
+ stamp = jiffies + 3 * HZ;
+ /* wait for card */
+ do {
+ ret = ioread16(&card->dpram[DPRAM_RECEIPT]);
+ /* don't have any cached variables */
+ rmb();
+ if (ret == RES_OK)
+ return 0;
+ if (time_after(jiffies, stamp))
+ break;
+ /* process context => relax */
+ usleep_range(500, 10000);
+ } while (!signal_pending(current));
+
+ ret = (ret == RES_NONE) ? -ETIMEDOUT : -ECANCELED;
+ dev_alert(&card->pdev->dev, "bootloader %s failed (%i)\n", msg, ret);
+ return ret;
+}
+
+static int fw_parse(const uint8_t **pmem, uint16_t *ptype, uint32_t *paddr,
+ uint16_t *plen, const uint8_t **pdat)
+{
+ uint16_t checksum[2];
+ const uint8_t *mem;
+ const uint8_t *end;
+
+ /*
+ * firmware records are a binary, unaligned stream composed of:
+ * uint16_t type;
+ * uint32_t addr;
+ * uint16_t len;
+ * uint8_t dat[len];
+ * uint16_t checksum;
+ * all values in little endian.
+ * We could define a struct for this, with __attribute__((packed)),
+ * but would that solve the alignment in _all_ cases (cf. the
+ * struct itself may be at an odd address)?
+ *
+ * I chose to use leXX_to_cpup() since this solves both
+ * endianness & alignment.
+ */
+ mem = *pmem;
+ *ptype = le16_to_cpup((void *)&mem[0]);
+ *paddr = le32_to_cpup((void *)&mem[2]);
+ *plen = le16_to_cpup((void *)&mem[6]);
+ *pdat = &mem[8];
+ /* verify checksum */
+ end = &mem[8 + *plen];
+ checksum[0] = le16_to_cpup((void *)end);
+ for (checksum[1] = 0; mem < end; ++mem)
+ checksum[1] += *mem;
+ if (checksum[0] != checksum[1])
+ return -EINVAL;
+ /* advance past type (2) + addr (4) + len (2) + data (len) + checksum (2) */
+ *pmem += 10 + *plen;
+ return 0;
+}
+
+int softing_load_fw(const char *file, struct softing *card,
+ __iomem uint8_t *dpram, unsigned int size, int offset)
+{
+ const struct firmware *fw;
+ int ret;
+ const uint8_t *mem, *end, *dat;
+ uint16_t type, len;
+ uint32_t addr;
+ uint8_t *buf = NULL;
+ int buflen = 0;
+ int8_t type_end = 0;
+
+ ret = request_firmware(&fw, file, &card->pdev->dev);
+ if (ret < 0)
+ return ret;
+ dev_dbg(&card->pdev->dev, "%s, firmware(%s) got %u bytes"
+ ", offset %c0x%04x\n",
+ card->pdat->name, file, (unsigned int)fw->size,
+ (offset >= 0) ? '+' : '-', (unsigned int)abs(offset));
+ /* parse the firmware */
+ mem = fw->data;
+ end = &mem[fw->size];
+ /* look for header record */
+ ret = fw_parse(&mem, &type, &addr, &len, &dat);
+ if (ret < 0)
+ goto failed;
+ if (type != 0xffff) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ if (strncmp("Structured Binary Format, Softing GmbH" , dat, len)) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ /* ok, we had a header */
+ while (mem < end) {
+ ret = fw_parse(&mem, &type, &addr, &len, &dat);
+ if (ret < 0)
+ goto failed;
+ if (type == 3) {
+ /* start address, not used here */
+ continue;
+ } else if (type == 1) {
+ /* eof */
+ type_end = 1;
+ break;
+ } else if (type != 0) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ if ((addr + len + offset) > size) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ memcpy_toio(&dpram[addr + offset], dat, len);
+ /* be sure to flush caches from IO space */
+ mb();
+ if (len > buflen) {
+ /* align buflen */
+ buflen = (len + (1024-1)) & ~(1024-1);
+ buf = krealloc(buf, buflen, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto failed;
+ }
+ }
+ /* verify record data */
+ memcpy_fromio(buf, &dpram[addr + offset], len);
+ if (memcmp(buf, dat, len)) {
+ /* is not ok */
+ dev_alert(&card->pdev->dev, "DPRAM readback failed\n");
+ ret = -EIO;
+ goto failed;
+ }
+ }
+ if (!type_end) {
+ /* no end record seen */
+ ret = -EINVAL;
+ goto failed;
+ }
+ ret = 0;
+failed:
+ kfree(buf);
+ release_firmware(fw);
+ if (ret < 0)
+ dev_info(&card->pdev->dev, "firmware %s failed\n", file);
+ return ret;
+}
+
+int softing_load_app_fw(const char *file, struct softing *card)
+{
+ const struct firmware *fw;
+ const uint8_t *mem, *end, *dat;
+ int ret, j;
+ uint16_t type, len;
+ uint32_t addr, start_addr = 0;
+ unsigned int sum, rx_sum;
+ int8_t type_end = 0, type_entrypoint = 0;
+
+ ret = request_firmware(&fw, file, &card->pdev->dev);
+ if (ret) {
+ dev_alert(&card->pdev->dev, "request_firmware(%s) got %i\n",
+ file, ret);
+ return ret;
+ }
+ dev_dbg(&card->pdev->dev, "firmware(%s) got %lu bytes\n",
+ file, (unsigned long)fw->size);
+ /* parse the firmware */
+ mem = fw->data;
+ end = &mem[fw->size];
+ /* look for header record */
+ ret = fw_parse(&mem, &type, &addr, &len, &dat);
+ if (ret)
+ goto failed;
+ ret = -EINVAL;
+ if (type != 0xffff) {
+ dev_alert(&card->pdev->dev, "firmware starts with type 0x%x\n",
+ type);
+ goto failed;
+ }
+ if (strncmp("Structured Binary Format, Softing GmbH", dat, len)) {
+ dev_alert(&card->pdev->dev, "firmware string '%.*s' fault\n",
+ len, dat);
+ goto failed;
+ }
+ /* ok, we had a header */
+ while (mem < end) {
+ ret = fw_parse(&mem, &type, &addr, &len, &dat);
+ if (ret)
+ goto failed;
+
+ if (type == 3) {
+ /* start address */
+ start_addr = addr;
+ type_entrypoint = 1;
+ continue;
+ } else if (type == 1) {
+ /* eof */
+ type_end = 1;
+ break;
+ } else if (type != 0) {
+ dev_alert(&card->pdev->dev,
+ "unknown record type 0x%04x\n", type);
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ /* regular data */
+ for (sum = 0, j = 0; j < len; ++j)
+ sum += dat[j];
+ /* work in 16bit (target) */
+ sum &= 0xffff;
+
+ memcpy_toio(&card->dpram[card->pdat->app.offs], dat, len);
+ iowrite32(card->pdat->app.offs + card->pdat->app.addr,
+ &card->dpram[DPRAM_COMMAND + 2]);
+ iowrite32(addr, &card->dpram[DPRAM_COMMAND + 6]);
+ iowrite16(len, &card->dpram[DPRAM_COMMAND + 10]);
+ iowrite8(1, &card->dpram[DPRAM_COMMAND + 12]);
+ ret = softing_bootloader_command(card, 1, "loading app.");
+ if (ret < 0)
+ goto failed;
+ /* verify checksum */
+ rx_sum = ioread16(&card->dpram[DPRAM_RECEIPT + 2]);
+ if (rx_sum != sum) {
+ dev_alert(&card->pdev->dev, "SRAM seems to be damaged"
+ ", wanted 0x%04x, got 0x%04x\n", sum, rx_sum);
+ ret = -EIO;
+ goto failed;
+ }
+ }
+ if (!type_end || !type_entrypoint) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ /* start application in card */
+ iowrite32(start_addr, &card->dpram[DPRAM_COMMAND + 2]);
+ iowrite8(1, &card->dpram[DPRAM_COMMAND + 6]);
+ ret = softing_bootloader_command(card, 3, "start app.");
+ if (ret < 0)
+ goto failed;
+ ret = 0;
+failed:
+ release_firmware(fw);
+ if (ret < 0)
+ dev_info(&card->pdev->dev, "firmware %s failed\n", file);
+ return ret;
+}
+
+static int softing_reset_chip(struct softing *card)
+{
+ int ret;
+
+ do {
+ /* reset chip */
+ iowrite8(0, &card->dpram[DPRAM_RESET_RX_FIFO]);
+ iowrite8(0, &card->dpram[DPRAM_RESET_RX_FIFO+1]);
+ iowrite8(1, &card->dpram[DPRAM_RESET]);
+ iowrite8(0, &card->dpram[DPRAM_RESET+1]);
+
+ ret = softing_fct_cmd(card, 0, "reset_can");
+ if (!ret)
+ break;
+ if (signal_pending(current))
+ /* don't wait any longer */
+ break;
+ } while (1);
+ card->tx.pending = 0;
+ return ret;
+}
+
+int softing_chip_poweron(struct softing *card)
+{
+ int ret;
+ /* sync */
+ ret = _softing_fct_cmd(card, 99, 0x55, "sync-a");
+ if (ret < 0)
+ goto failed;
+
+ ret = _softing_fct_cmd(card, 99, 0xaa, "sync-b");
+ if (ret < 0)
+ goto failed;
+
+ ret = softing_reset_chip(card);
+ if (ret < 0)
+ goto failed;
+ /* get_serial */
+ ret = softing_fct_cmd(card, 43, "get_serial_number");
+ if (ret < 0)
+ goto failed;
+ card->id.serial = ioread32(&card->dpram[DPRAM_FCT_PARAM]);
+ /* get_version */
+ ret = softing_fct_cmd(card, 12, "get_version");
+ if (ret < 0)
+ goto failed;
+ card->id.fw_version = ioread16(&card->dpram[DPRAM_FCT_PARAM + 2]);
+ card->id.hw_version = ioread16(&card->dpram[DPRAM_FCT_PARAM + 4]);
+ card->id.license = ioread16(&card->dpram[DPRAM_FCT_PARAM + 6]);
+ card->id.chip[0] = ioread16(&card->dpram[DPRAM_FCT_PARAM + 8]);
+ card->id.chip[1] = ioread16(&card->dpram[DPRAM_FCT_PARAM + 10]);
+ return 0;
+failed:
+ return ret;
+}
+
+static void softing_initialize_timestamp(struct softing *card)
+{
+ uint64_t ovf;
+
+ card->ts_ref = ktime_get();
+
+ /* 16MHz is the reference */
+ ovf = 0x100000000ULL * 16;
+ do_div(ovf, card->pdat->freq ?: 16);
+
+ card->ts_overflow = ktime_add_us(ktime_set(0, 0), ovf);
+}
+
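+/*
+ * convert the card's free-running timestamp counter into host ktime,
+ * accounting for counter wrap-around via ts_overflow
+ */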
+ktime_t softing_raw2ktime(struct softing *card, u32 raw)
+{
+ uint64_t rawl;
+ ktime_t now, real_offset;
+ ktime_t target;
+ ktime_t tmp;
+
+ now = ktime_get();
+ real_offset = ktime_sub(ktime_get_real(), now);
+
+ /* find nsec from card */
+ rawl = raw * 16;
+ do_div(rawl, card->pdat->freq ?: 16);
+ target = ktime_add_us(card->ts_ref, rawl);
+ /* test for overflows */
+ tmp = ktime_add(target, card->ts_overflow);
+ while (unlikely(ktime_to_ns(tmp) > ktime_to_ns(now))) {
+ card->ts_ref = ktime_add(card->ts_ref, card->ts_overflow);
+ target = tmp;
+ tmp = ktime_add(target, card->ts_overflow);
+ }
+ return ktime_add(target, real_offset);
+}
+
+static inline int softing_error_reporting(struct net_device *netdev)
+{
+ struct softing_priv *priv = netdev_priv(netdev);
+
+ return (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ ? 1 : 0;
+}
+
+int softing_startstop(struct net_device *dev, int up)
+{
+ int ret;
+ struct softing *card;
+ struct softing_priv *priv;
+ struct net_device *netdev;
+ int bus_bitmask_start;
+ int j, error_reporting;
+ struct can_frame msg;
+ const struct can_bittiming *bt;
+
+ priv = netdev_priv(dev);
+ card = priv->card;
+
+ if (!card->fw.up)
+ return -EIO;
+
+ ret = mutex_lock_interruptible(&card->fw.lock);
+ if (ret)
+ return ret;
+
+ bus_bitmask_start = 0;
+ if (dev && up)
+ /* prepare to start this bus as well */
+ bus_bitmask_start |= (1 << priv->index);
+ /* bring netdevs down */
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ netdev = card->net[j];
+ if (!netdev)
+ continue;
+ priv = netdev_priv(netdev);
+
+ if (dev != netdev)
+ netif_stop_queue(netdev);
+
+ if (netif_running(netdev)) {
+ if (dev != netdev)
+ bus_bitmask_start |= (1 << j);
+ priv->tx.pending = 0;
+ priv->tx.echo_put = 0;
+ priv->tx.echo_get = 0;
+ /*
+ * this bus may just have called open_candev(),
+ * so calling close_candev() right away looks odd,
+ * but we may also come here from bus-off recovery,
+ * in which case the echo_skb queue _needs_ flushing.
+ * Just be sure to call open_candev() again afterwards.
+ */
+ close_candev(netdev);
+ }
+ priv->can.state = CAN_STATE_STOPPED;
+ }
+ card->tx.pending = 0;
+
+ softing_enable_irq(card, 0);
+ ret = softing_reset_chip(card);
+ if (ret)
+ goto failed;
+ if (!bus_bitmask_start)
+ /* no busses to be brought up */
+ goto card_done;
+
+ if ((bus_bitmask_start & 1) && (bus_bitmask_start & 2)
+ && (softing_error_reporting(card->net[0])
+ != softing_error_reporting(card->net[1]))) {
+ dev_alert(&card->pdev->dev,
+ "err_reporting flag differs for busses\n");
+ goto invalid;
+ }
+ error_reporting = 0;
+ if (bus_bitmask_start & 1) {
+ netdev = card->net[0];
+ priv = netdev_priv(netdev);
+ error_reporting += softing_error_reporting(netdev);
+ /* init chip 1 */
+ bt = &priv->can.bittiming;
+ iowrite16(bt->brp, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(bt->sjw, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ iowrite16(bt->phase_seg1 + bt->prop_seg,
+ &card->dpram[DPRAM_FCT_PARAM + 6]);
+ iowrite16(bt->phase_seg2, &card->dpram[DPRAM_FCT_PARAM + 8]);
+ iowrite16((priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) ? 1 : 0,
+ &card->dpram[DPRAM_FCT_PARAM + 10]);
+ ret = softing_fct_cmd(card, 1, "initialize_chip[0]");
+ if (ret < 0)
+ goto failed;
+ /* set mode */
+ iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ ret = softing_fct_cmd(card, 3, "set_mode[0]");
+ if (ret < 0)
+ goto failed;
+ /* set filter */
+ /* 11bit id & mask */
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(0x07ff, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ /* 29bit id.lo & mask.lo & id.hi & mask.hi */
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 6]);
+ iowrite16(0xffff, &card->dpram[DPRAM_FCT_PARAM + 8]);
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 10]);
+ iowrite16(0x1fff, &card->dpram[DPRAM_FCT_PARAM + 12]);
+ ret = softing_fct_cmd(card, 7, "set_filter[0]");
+ if (ret < 0)
+ goto failed;
+ /* set output control */
+ iowrite16(priv->output, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ ret = softing_fct_cmd(card, 5, "set_output[0]");
+ if (ret < 0)
+ goto failed;
+ }
+ if (bus_bitmask_start & 2) {
+ netdev = card->net[1];
+ priv = netdev_priv(netdev);
+ error_reporting += softing_error_reporting(netdev);
+ /* init chip2 */
+ bt = &priv->can.bittiming;
+ iowrite16(bt->brp, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(bt->sjw, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ iowrite16(bt->phase_seg1 + bt->prop_seg,
+ &card->dpram[DPRAM_FCT_PARAM + 6]);
+ iowrite16(bt->phase_seg2, &card->dpram[DPRAM_FCT_PARAM + 8]);
+ iowrite16((priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) ? 1 : 0,
+ &card->dpram[DPRAM_FCT_PARAM + 10]);
+ ret = softing_fct_cmd(card, 2, "initialize_chip[1]");
+ if (ret < 0)
+ goto failed;
+ /* set mode2 */
+ iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ ret = softing_fct_cmd(card, 4, "set_mode[1]");
+ if (ret < 0)
+ goto failed;
+ /* set filter2 */
+ /* 11bit id & mask */
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(0x07ff, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ /* 29bit id.lo & mask.lo & id.hi & mask.hi */
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 6]);
+ iowrite16(0xffff, &card->dpram[DPRAM_FCT_PARAM + 8]);
+ iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 10]);
+ iowrite16(0x1fff, &card->dpram[DPRAM_FCT_PARAM + 12]);
+ ret = softing_fct_cmd(card, 8, "set_filter[1]");
+ if (ret < 0)
+ goto failed;
+ /* set output control2 */
+ iowrite16(priv->output, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ ret = softing_fct_cmd(card, 6, "set_output[1]");
+ if (ret < 0)
+ goto failed;
+ }
+ /* enable_error_frame */
+ /*
+ * Error reporting is switched off at the moment since
+ * receiving error frames is not yet 100% verified.
+ * This should be enabled sooner or later.
+ *
+ if (error_reporting) {
+ ret = softing_fct_cmd(card, 51, "enable_error_frame");
+ if (ret < 0)
+ goto failed;
+ }
+ */
+ /* initialize interface */
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 2]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 4]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 6]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 8]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 10]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 12]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 14]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 16]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 18]);
+ iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 20]);
+ ret = softing_fct_cmd(card, 17, "initialize_interface");
+ if (ret < 0)
+ goto failed;
+ /* enable_fifo */
+ ret = softing_fct_cmd(card, 36, "enable_fifo");
+ if (ret < 0)
+ goto failed;
+ /* enable fifo tx ack */
+ ret = softing_fct_cmd(card, 13, "fifo_tx_ack[0]");
+ if (ret < 0)
+ goto failed;
+ /* enable fifo tx ack2 */
+ ret = softing_fct_cmd(card, 14, "fifo_tx_ack[1]");
+ if (ret < 0)
+ goto failed;
+ /* start_chip */
+ ret = softing_fct_cmd(card, 11, "start_chip");
+ if (ret < 0)
+ goto failed;
+ iowrite8(0, &card->dpram[DPRAM_INFO_BUSSTATE]);
+ iowrite8(0, &card->dpram[DPRAM_INFO_BUSSTATE2]);
+ if (card->pdat->generation < 2) {
+ iowrite8(0, &card->dpram[DPRAM_V2_IRQ_TOHOST]);
+ /* flush the DPRAM caches */
+ wmb();
+ }
+
+ softing_initialize_timestamp(card);
+
+ /*
+ * do socketcan notifications/status changes.
+ * From here on, no errors should occur, or the failed: part
+ * must be reviewed.
+ */
+ memset(&msg, 0, sizeof(msg));
+ msg.can_id = CAN_ERR_FLAG | CAN_ERR_RESTARTED;
+ msg.can_dlc = CAN_ERR_DLC;
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ if (!(bus_bitmask_start & (1 << j)))
+ continue;
+ netdev = card->net[j];
+ if (!netdev)
+ continue;
+ priv = netdev_priv(netdev);
+ priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ open_candev(netdev);
+ if (dev != netdev) {
+ /* notify other busses on the restart */
+ softing_netdev_rx(netdev, &msg, ktime_set(0, 0));
+ ++priv->can.can_stats.restarts;
+ }
+ netif_wake_queue(netdev);
+ }
+
+ /* enable interrupts */
+ ret = softing_enable_irq(card, 1);
+ if (ret)
+ goto failed;
+card_done:
+ mutex_unlock(&card->fw.lock);
+ return 0;
+invalid:
+ ret = -EINVAL;
+failed:
+ softing_enable_irq(card, 0);
+ softing_reset_chip(card);
+ mutex_unlock(&card->fw.lock);
+ /* bring all other interfaces down */
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ netdev = card->net[j];
+ if (!netdev)
+ continue;
+ dev_close(netdev);
+ }
+ return ret;
+}
+
+int softing_default_output(struct net_device *netdev)
+{
+ struct softing_priv *priv = netdev_priv(netdev);
+ struct softing *card = priv->card;
+
+ switch (priv->chip) {
+ case 1000:
+ return (card->pdat->generation < 2) ? 0xfb : 0xfa;
+ case 5:
+ return 0x60;
+ default:
+ return 0x40;
+ }
+}
--- /dev/null
+/*
+ * Copyright (C) 2008-2010
+ *
+ * - Kurt Van Dijck, EIA Electronics
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the version 2 of the GNU General Public License
+ * as published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+
+#include "softing.h"
+
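+/* TXMAX fifo slots are shared by both busses; give each netdev about half */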
+#define TX_ECHO_SKB_MAX (((TXMAX+1)/2)-1)
+
+/*
+ * test if a specific CAN netdev is online
+ * (i.e. up and running, not sleeping, not in bus-off)
+ */
+static inline int canif_is_active(struct net_device *netdev)
+{
+ struct can_priv *can = netdev_priv(netdev);
+
+ if (!netif_running(netdev))
+ return 0;
+ return (can->state <= CAN_STATE_ERROR_PASSIVE);
+}
+
+/* reset DPRAM */
+static inline void softing_set_reset_dpram(struct softing *card)
+{
+ if (card->pdat->generation >= 2) {
+ spin_lock_bh(&card->spin);
+ iowrite8(ioread8(&card->dpram[DPRAM_V2_RESET]) & ~1,
+ &card->dpram[DPRAM_V2_RESET]);
+ spin_unlock_bh(&card->spin);
+ }
+}
+
+static inline void softing_clr_reset_dpram(struct softing *card)
+{
+ if (card->pdat->generation >= 2) {
+ spin_lock_bh(&card->spin);
+ iowrite8(ioread8(&card->dpram[DPRAM_V2_RESET]) | 1,
+ &card->dpram[DPRAM_V2_RESET]);
+ spin_unlock_bh(&card->spin);
+ }
+}
+
+/* trigger the tx queueing */
+static netdev_tx_t softing_netdev_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct softing_priv *priv = netdev_priv(dev);
+ struct softing *card = priv->card;
+ int ret;
+ uint8_t *ptr;
+ uint8_t fifo_wr, fifo_rd;
+ struct can_frame *cf = (struct can_frame *)skb->data;
+ uint8_t buf[DPRAM_TX_SIZE];
+
+ if (can_dropped_invalid_skb(dev, skb))
+ return NETDEV_TX_OK;
+
+ spin_lock(&card->spin);
+
+ ret = NETDEV_TX_BUSY;
+ if (!card->fw.up ||
+ (card->tx.pending >= TXMAX) ||
+ (priv->tx.pending >= TX_ECHO_SKB_MAX))
+ goto xmit_done;
+ fifo_wr = ioread8(&card->dpram[DPRAM_TX_WR]);
+ fifo_rd = ioread8(&card->dpram[DPRAM_TX_RD]);
+ if (fifo_wr == fifo_rd)
+ /* fifo full */
+ goto xmit_done;
+ memset(buf, 0, sizeof(buf));
+ ptr = buf;
+ *ptr = CMD_TX;
+ if (cf->can_id & CAN_RTR_FLAG)
+ *ptr |= CMD_RTR;
+ if (cf->can_id & CAN_EFF_FLAG)
+ *ptr |= CMD_XTD;
+ if (priv->index)
+ *ptr |= CMD_BUS2;
+ ++ptr;
+ *ptr++ = cf->can_dlc;
+ *ptr++ = (cf->can_id >> 0);
+ *ptr++ = (cf->can_id >> 8);
+ if (cf->can_id & CAN_EFF_FLAG) {
+ *ptr++ = (cf->can_id >> 16);
+ *ptr++ = (cf->can_id >> 24);
+ } else {
+ /* increment 1, not 2 as you might think */
+ ptr += 1;
+ }
+ if (!(cf->can_id & CAN_RTR_FLAG))
+ memcpy(ptr, &cf->data[0], cf->can_dlc);
+ memcpy_toio(&card->dpram[DPRAM_TX + DPRAM_TX_SIZE * fifo_wr],
+ buf, DPRAM_TX_SIZE);
+ if (++fifo_wr >= DPRAM_TX_CNT)
+ fifo_wr = 0;
+ iowrite8(fifo_wr, &card->dpram[DPRAM_TX_WR]);
+ card->tx.last_bus = priv->index;
+ ++card->tx.pending;
+ ++priv->tx.pending;
+ can_put_echo_skb(skb, dev, priv->tx.echo_put);
+ ++priv->tx.echo_put;
+ if (priv->tx.echo_put >= TX_ECHO_SKB_MAX)
+ priv->tx.echo_put = 0;
+ /* can_put_echo_skb() saves the skb, safe to return TX_OK */
+ ret = NETDEV_TX_OK;
+xmit_done:
+ spin_unlock(&card->spin);
+ if (card->tx.pending >= TXMAX) {
+ int j;
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ if (card->net[j])
+ netif_stop_queue(card->net[j]);
+ }
+ }
+ if (ret != NETDEV_TX_OK)
+ netif_stop_queue(dev);
+
+ return ret;
+}
+
+/*
+ * shortcut for skb delivery
+ */
+int softing_netdev_rx(struct net_device *netdev, const struct can_frame *msg,
+ ktime_t ktime)
+{
+ struct sk_buff *skb;
+ struct can_frame *cf;
+
+ skb = alloc_can_skb(netdev, &cf);
+ if (!skb)
+ return -ENOMEM;
+ memcpy(cf, msg, sizeof(*msg));
+ skb->tstamp = ktime;
+ return netif_rx(skb);
+}
+
+/*
+ * softing_handle_1
+ * pop 1 entry from the DPRAM queue, and process
+ */
+static int softing_handle_1(struct softing *card)
+{
+ struct net_device *netdev;
+ struct softing_priv *priv;
+ ktime_t ktime;
+ struct can_frame msg;
+ int cnt = 0, lost_msg;
+ uint8_t fifo_rd, fifo_wr, cmd;
+ uint8_t *ptr;
+ uint32_t tmp_u32;
+ uint8_t buf[DPRAM_RX_SIZE];
+
+ memset(&msg, 0, sizeof(msg));
+ /* test for lost msgs */
+ lost_msg = ioread8(&card->dpram[DPRAM_RX_LOST]);
+ if (lost_msg) {
+ int j;
+ /* reset condition */
+ iowrite8(0, &card->dpram[DPRAM_RX_LOST]);
+ /* prepare msg */
+ msg.can_id = CAN_ERR_FLAG | CAN_ERR_CRTL;
+ msg.can_dlc = CAN_ERR_DLC;
+ msg.data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+ /*
+ * service all busses; we don't know which one the overflow applies to,
+ * but only service busses that are online
+ */
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ netdev = card->net[j];
+ if (!netdev)
+ continue;
+ if (!canif_is_active(netdev))
+ /* a dead bus has no overflows */
+ continue;
+ ++netdev->stats.rx_over_errors;
+ softing_netdev_rx(netdev, &msg, ktime_set(0, 0));
+ }
+ /* prepare for other use */
+ memset(&msg, 0, sizeof(msg));
+ ++cnt;
+ }
+
+ fifo_rd = ioread8(&card->dpram[DPRAM_RX_RD]);
+ fifo_wr = ioread8(&card->dpram[DPRAM_RX_WR]);
+
+ if (++fifo_rd >= DPRAM_RX_CNT)
+ fifo_rd = 0;
+ if (fifo_wr == fifo_rd)
+ return cnt;
+
+ memcpy_fromio(buf, &card->dpram[DPRAM_RX + DPRAM_RX_SIZE*fifo_rd],
+ DPRAM_RX_SIZE);
+ mb();
+ /* trigger dual port RAM */
+ iowrite8(fifo_rd, &card->dpram[DPRAM_RX_RD]);
+
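+ /*
+ * DPRAM rx message layout, as parsed below: command byte, then either a
+ * bus-state byte (error frames) or a dlc byte plus a 16/32-bit
+ * little-endian can_id, followed by a 32-bit raw timestamp and up to
+ * 8 data bytes
+ */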
+ ptr = buf;
+ cmd = *ptr++;
+ if (cmd == 0xff)
+ /* not quite useful; the card has probably dropped out */
+ return 0;
+ netdev = card->net[0];
+ if (cmd & CMD_BUS2)
+ netdev = card->net[1];
+ priv = netdev_priv(netdev);
+
+ if (cmd & CMD_ERR) {
+ uint8_t can_state, state;
+
+ state = *ptr++;
+
+ msg.can_id = CAN_ERR_FLAG;
+ msg.can_dlc = CAN_ERR_DLC;
+
+ if (state & SF_MASK_BUSOFF) {
+ can_state = CAN_STATE_BUS_OFF;
+ msg.can_id |= CAN_ERR_BUSOFF;
+ state = STATE_BUSOFF;
+ } else if (state & SF_MASK_EPASSIVE) {
+ can_state = CAN_STATE_ERROR_PASSIVE;
+ msg.can_id |= CAN_ERR_CRTL;
+ msg.data[1] = CAN_ERR_CRTL_TX_PASSIVE;
+ state = STATE_EPASSIVE;
+ } else {
+ can_state = CAN_STATE_ERROR_ACTIVE;
+ msg.can_id |= CAN_ERR_CRTL;
+ state = STATE_EACTIVE;
+ }
+ /* update DPRAM */
+ iowrite8(state, &card->dpram[priv->index ?
+ DPRAM_INFO_BUSSTATE2 : DPRAM_INFO_BUSSTATE]);
+ /* timestamp */
+ tmp_u32 = le32_to_cpup((void *)ptr);
+ ptr += 4;
+ ktime = softing_raw2ktime(card, tmp_u32);
+
+ ++netdev->stats.rx_errors;
+ /* update internal status */
+ if (can_state != priv->can.state) {
+ priv->can.state = can_state;
+ if (can_state == CAN_STATE_ERROR_PASSIVE)
+ ++priv->can.can_stats.error_passive;
+ else if (can_state == CAN_STATE_BUS_OFF) {
+ /* this calls can_close_cleanup() */
+ can_bus_off(netdev);
+ netif_stop_queue(netdev);
+ }
+ /* trigger socketcan */
+ softing_netdev_rx(netdev, &msg, ktime);
+ }
+
+ } else {
+ if (cmd & CMD_RTR)
+ msg.can_id |= CAN_RTR_FLAG;
+ msg.can_dlc = get_can_dlc(*ptr++);
+ if (cmd & CMD_XTD) {
+ msg.can_id |= CAN_EFF_FLAG;
+ msg.can_id |= le32_to_cpup((void *)ptr);
+ ptr += 4;
+ } else {
+ msg.can_id |= le16_to_cpup((void *)ptr);
+ ptr += 2;
+ }
+ /* timestamp */
+ tmp_u32 = le32_to_cpup((void *)ptr);
+ ptr += 4;
+ ktime = softing_raw2ktime(card, tmp_u32);
+ if (!(msg.can_id & CAN_RTR_FLAG))
+ memcpy(&msg.data[0], ptr, 8);
+ ptr += 8;
+ /* update socket */
+ if (cmd & CMD_ACK) {
+ /* acknowledge, was tx msg */
+ struct sk_buff *skb;
+ skb = priv->can.echo_skb[priv->tx.echo_get];
+ if (skb)
+ skb->tstamp = ktime;
+ can_get_echo_skb(netdev, priv->tx.echo_get);
+ ++priv->tx.echo_get;
+ if (priv->tx.echo_get >= TX_ECHO_SKB_MAX)
+ priv->tx.echo_get = 0;
+ if (priv->tx.pending)
+ --priv->tx.pending;
+ if (card->tx.pending)
+ --card->tx.pending;
+ ++netdev->stats.tx_packets;
+ if (!(msg.can_id & CAN_RTR_FLAG))
+ netdev->stats.tx_bytes += msg.can_dlc;
+ } else {
+ int ret;
+
+ ret = softing_netdev_rx(netdev, &msg, ktime);
+ if (ret == NET_RX_SUCCESS) {
+ ++netdev->stats.rx_packets;
+ if (!(msg.can_id & CAN_RTR_FLAG))
+ netdev->stats.rx_bytes += msg.can_dlc;
+ } else {
+ ++netdev->stats.rx_dropped;
+ }
+ }
+ }
+ ++cnt;
+ return cnt;
+}
+
+/*
+ * real interrupt handler
+ */
+static irqreturn_t softing_irq_thread(int irq, void *dev_id)
+{
+ struct softing *card = (struct softing *)dev_id;
+ struct net_device *netdev;
+ struct softing_priv *priv;
+ int j, offset, work_done;
+
+ work_done = 0;
+ spin_lock_bh(&card->spin);
+ while (softing_handle_1(card) > 0) {
+ ++card->irq.svc_count;
+ ++work_done;
+ }
+ spin_unlock_bh(&card->spin);
+ /* resume the tx queues, round-robin from the last serviced bus */
+ offset = card->tx.last_bus;
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ if (card->tx.pending >= TXMAX)
+ break;
+ netdev = card->net[(j + offset + 1) % card->pdat->nbus];
+ if (!netdev)
+ continue;
+ priv = netdev_priv(netdev);
+ if (!canif_is_active(netdev))
+ /* it makes no sense to wake dead busses */
+ continue;
+ if (priv->tx.pending >= TX_ECHO_SKB_MAX)
+ continue;
+ ++work_done;
+ netif_wake_queue(netdev);
+ }
+ return work_done ? IRQ_HANDLED : IRQ_NONE;
+}
+
+/*
+ * interrupt routines:
+ * schedule the 'real interrupt handler'
+ */
+static irqreturn_t softing_irq_v2(int irq, void *dev_id)
+{
+ struct softing *card = (struct softing *)dev_id;
+ uint8_t ir;
+
+ ir = ioread8(&card->dpram[DPRAM_V2_IRQ_TOHOST]);
+ iowrite8(0, &card->dpram[DPRAM_V2_IRQ_TOHOST]);
+ return (1 == ir) ? IRQ_WAKE_THREAD : IRQ_NONE;
+}
+
+static irqreturn_t softing_irq_v1(int irq, void *dev_id)
+{
+ struct softing *card = (struct softing *)dev_id;
+ uint8_t ir;
+
+ ir = ioread8(&card->dpram[DPRAM_IRQ_TOHOST]);
+ iowrite8(0, &card->dpram[DPRAM_IRQ_TOHOST]);
+ return ir ? IRQ_WAKE_THREAD : IRQ_NONE;
+}
+
+/*
+ * netdev/candev inter-operability
+ */
+static int softing_netdev_open(struct net_device *ndev)
+{
+ int ret;
+
+ /* check or determine and set bittime */
+ ret = open_candev(ndev);
+ if (!ret)
+ ret = softing_startstop(ndev, 1);
+ return ret;
+}
+
+static int softing_netdev_stop(struct net_device *ndev)
+{
+ int ret;
+
+ netif_stop_queue(ndev);
+
+ /* softing_startstop() does close_candev() */
+ ret = softing_startstop(ndev, 0);
+ return ret;
+}
+
+static int softing_candev_set_mode(struct net_device *ndev, enum can_mode mode)
+{
+ int ret;
+
+ switch (mode) {
+ case CAN_MODE_START:
+ /* softing_startstop does close_candev() */
+ ret = softing_startstop(ndev, 1);
+ return ret;
+ case CAN_MODE_STOP:
+ case CAN_MODE_SLEEP:
+ return -EOPNOTSUPP;
+ }
+ return 0;
+}
+
+/*
+ * Softing device management helpers
+ */
+int softing_enable_irq(struct softing *card, int enable)
+{
+ int ret;
+
+ if (!card->irq.nr) {
+ return 0;
+ } else if (card->irq.requested && !enable) {
+ free_irq(card->irq.nr, card);
+ card->irq.requested = 0;
+ } else if (!card->irq.requested && enable) {
+ ret = request_threaded_irq(card->irq.nr,
+ (card->pdat->generation >= 2) ?
+ softing_irq_v2 : softing_irq_v1,
+ softing_irq_thread, IRQF_SHARED,
+ dev_name(&card->pdev->dev), card);
+ if (ret) {
+ dev_alert(&card->pdev->dev,
+ "request_threaded_irq(%u) failed\n",
+ card->irq.nr);
+ return ret;
+ }
+ card->irq.requested = 1;
+ }
+ return 0;
+}
+
+static void softing_card_shutdown(struct softing *card)
+{
+ int fw_up = 0;
+
+ if (mutex_lock_interruptible(&card->fw.lock))
+ /* return -ERESTARTSYS */;
+ fw_up = card->fw.up;
+ card->fw.up = 0;
+
+ if (card->irq.requested && card->irq.nr) {
+ free_irq(card->irq.nr, card);
+ card->irq.requested = 0;
+ }
+ if (fw_up) {
+ if (card->pdat->enable_irq)
+ card->pdat->enable_irq(card->pdev, 0);
+ softing_set_reset_dpram(card);
+ if (card->pdat->reset)
+ card->pdat->reset(card->pdev, 1);
+ }
+ mutex_unlock(&card->fw.lock);
+}
+
+static __devinit int softing_card_boot(struct softing *card)
+{
+ int ret, j;
+ static const uint8_t stream[] = {
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, };
+ unsigned char back[sizeof(stream)];
+
+ if (mutex_lock_interruptible(&card->fw.lock))
+ return -ERESTARTSYS;
+ if (card->fw.up) {
+ mutex_unlock(&card->fw.lock);
+ return 0;
+ }
+ /* reset board */
+ if (card->pdat->enable_irq)
+ card->pdat->enable_irq(card->pdev, 1);
+ /* boot card */
+ softing_set_reset_dpram(card);
+ if (card->pdat->reset)
+ card->pdat->reset(card->pdev, 1);
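+ /*
+ * walk the whole DPRAM window with a 16-byte test pattern and read it
+ * back; a mismatch means the mapping (or the card) is broken, so bail
+ * out before loading any firmware
+ */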
+ for (j = 0; (j + sizeof(stream)) < card->dpram_size;
+ j += sizeof(stream)) {
+
+ memcpy_toio(&card->dpram[j], stream, sizeof(stream));
+ /* flush IO cache */
+ mb();
+ memcpy_fromio(back, &card->dpram[j], sizeof(stream));
+
+ if (!memcmp(back, stream, sizeof(stream)))
+ continue;
+ /* memory contents do not match */
+ dev_alert(&card->pdev->dev, "dpram failed at 0x%04x\n", j);
+ ret = -EIO;
+ goto failed;
+ }
+ wmb();
+ /* load boot firmware */
+ ret = softing_load_fw(card->pdat->boot.fw, card, card->dpram,
+ card->dpram_size,
+ card->pdat->boot.offs - card->pdat->boot.addr);
+ if (ret < 0)
+ goto failed;
+ /* load loader firmware */
+ ret = softing_load_fw(card->pdat->load.fw, card, card->dpram,
+ card->dpram_size,
+ card->pdat->load.offs - card->pdat->load.addr);
+ if (ret < 0)
+ goto failed;
+
+ if (card->pdat->reset)
+ card->pdat->reset(card->pdev, 0);
+ softing_clr_reset_dpram(card);
+ ret = softing_bootloader_command(card, 0, "card boot");
+ if (ret < 0)
+ goto failed;
+ ret = softing_load_app_fw(card->pdat->app.fw, card);
+ if (ret < 0)
+ goto failed;
+
+ ret = softing_chip_poweron(card);
+ if (ret < 0)
+ goto failed;
+
+ card->fw.up = 1;
+ mutex_unlock(&card->fw.lock);
+ return 0;
+failed:
+ card->fw.up = 0;
+ if (card->pdat->enable_irq)
+ card->pdat->enable_irq(card->pdev, 0);
+ softing_set_reset_dpram(card);
+ if (card->pdat->reset)
+ card->pdat->reset(card->pdev, 1);
+ mutex_unlock(&card->fw.lock);
+ return ret;
+}
+
+/*
+ * netdev sysfs
+ */
+static ssize_t show_channel(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct net_device *ndev = to_net_dev(dev);
+ struct softing_priv *priv = netdev2softing(ndev);
+
+ return sprintf(buf, "%i\n", priv->index);
+}
+
+static ssize_t show_chip(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct net_device *ndev = to_net_dev(dev);
+ struct softing_priv *priv = netdev2softing(ndev);
+
+ return sprintf(buf, "%i\n", priv->chip);
+}
+
+static ssize_t show_output(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct net_device *ndev = to_net_dev(dev);
+ struct softing_priv *priv = netdev2softing(ndev);
+
+ return sprintf(buf, "0x%02x\n", priv->output);
+}
+
+static ssize_t store_output(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct net_device *ndev = to_net_dev(dev);
+ struct softing_priv *priv = netdev2softing(ndev);
+ struct softing *card = priv->card;
+ unsigned long val;
+ int ret;
+
+ ret = strict_strtoul(buf, 0, &val);
+ if (ret < 0)
+ return ret;
+ val &= 0xFF;
+
+ ret = mutex_lock_interruptible(&card->fw.lock);
+ if (ret)
+ return -ERESTARTSYS;
+ if (netif_running(ndev)) {
+ mutex_unlock(&card->fw.lock);
+ return -EBUSY;
+ }
+ priv->output = val;
+ mutex_unlock(&card->fw.lock);
+ return count;
+}
+
+static const DEVICE_ATTR(channel, S_IRUGO, show_channel, NULL);
+static const DEVICE_ATTR(chip, S_IRUGO, show_chip, NULL);
+static const DEVICE_ATTR(output, S_IRUGO | S_IWUSR, show_output, store_output);
+
+static const struct attribute *const netdev_sysfs_attrs[] = {
+ &dev_attr_channel.attr,
+ &dev_attr_chip.attr,
+ &dev_attr_output.attr,
+ NULL,
+};
+static const struct attribute_group netdev_sysfs_group = {
+ .name = NULL,
+ .attrs = (struct attribute **)netdev_sysfs_attrs,
+};
+
+static const struct net_device_ops softing_netdev_ops = {
+ .ndo_open = softing_netdev_open,
+ .ndo_stop = softing_netdev_stop,
+ .ndo_start_xmit = softing_netdev_start_xmit,
+};
+
+static const struct can_bittiming_const softing_btr_const = {
+ .tseg1_min = 1,
+ .tseg1_max = 16,
+ .tseg2_min = 1,
+ .tseg2_max = 8,
+ .sjw_max = 4, /* overruled */
+ .brp_min = 1,
+ .brp_max = 32, /* overruled */
+ .brp_inc = 1,
+};
+
+static __devinit struct net_device *softing_netdev_create(struct softing *card,
+ uint16_t chip_id)
+{
+ struct net_device *netdev;
+ struct softing_priv *priv;
+
+ netdev = alloc_candev(sizeof(*priv), TX_ECHO_SKB_MAX);
+ if (!netdev) {
+ dev_alert(&card->pdev->dev, "alloc_candev failed\n");
+ return NULL;
+ }
+ priv = netdev_priv(netdev);
+ priv->netdev = netdev;
+ priv->card = card;
+ memcpy(&priv->btr_const, &softing_btr_const, sizeof(priv->btr_const));
+ priv->btr_const.brp_max = card->pdat->max_brp;
+ priv->btr_const.sjw_max = card->pdat->max_sjw;
+ priv->can.bittiming_const = &priv->btr_const;
+ priv->can.clock.freq = 8000000;
+ priv->chip = chip_id;
+ priv->output = softing_default_output(netdev);
+ SET_NETDEV_DEV(netdev, &card->pdev->dev);
+
+ netdev->flags |= IFF_ECHO;
+ netdev->netdev_ops = &softing_netdev_ops;
+ priv->can.do_set_mode = softing_candev_set_mode;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
+
+ return netdev;
+}
+
+static __devinit int softing_netdev_register(struct net_device *netdev)
+{
+ int ret;
+
+ netdev->sysfs_groups[0] = &netdev_sysfs_group;
+ ret = register_candev(netdev);
+ if (ret) {
+ dev_alert(&netdev->dev, "register failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+static void softing_netdev_cleanup(struct net_device *netdev)
+{
+ unregister_candev(netdev);
+ free_candev(netdev);
+}
+
+/*
+ * sysfs for Platform device
+ */
+#define DEV_ATTR_RO(name, member) \
+static ssize_t show_##name(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
+ return sprintf(buf, "%u\n", card->member); \
+} \
+static DEVICE_ATTR(name, 0444, show_##name, NULL)
+
+#define DEV_ATTR_RO_STR(name, member) \
+static ssize_t show_##name(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
+ return sprintf(buf, "%s\n", card->member); \
+} \
+static DEVICE_ATTR(name, 0444, show_##name, NULL)
+
+DEV_ATTR_RO(serial, id.serial);
+DEV_ATTR_RO_STR(firmware, pdat->app.fw);
+DEV_ATTR_RO(firmware_version, id.fw_version);
+DEV_ATTR_RO_STR(hardware, pdat->name);
+DEV_ATTR_RO(hardware_version, id.hw_version);
+DEV_ATTR_RO(license, id.license);
+DEV_ATTR_RO(frequency, id.freq);
+DEV_ATTR_RO(txpending, tx.pending);
+
+static struct attribute *softing_pdev_attrs[] = {
+ &dev_attr_serial.attr,
+ &dev_attr_firmware.attr,
+ &dev_attr_firmware_version.attr,
+ &dev_attr_hardware.attr,
+ &dev_attr_hardware_version.attr,
+ &dev_attr_license.attr,
+ &dev_attr_frequency.attr,
+ &dev_attr_txpending.attr,
+ NULL,
+};
+
+static const struct attribute_group softing_pdev_group = {
+ .name = NULL,
+ .attrs = softing_pdev_attrs,
+};
+
+/*
+ * platform driver
+ */
+static __devexit int softing_pdev_remove(struct platform_device *pdev)
+{
+ struct softing *card = platform_get_drvdata(pdev);
+ int j;
+
+ /* first, disable the card */
+ softing_card_shutdown(card);
+
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ if (!card->net[j])
+ continue;
+ softing_netdev_cleanup(card->net[j]);
+ card->net[j] = NULL;
+ }
+ sysfs_remove_group(&pdev->dev.kobj, &softing_pdev_group);
+
+ iounmap(card->dpram);
+ kfree(card);
+ return 0;
+}
+
+static __devinit int softing_pdev_probe(struct platform_device *pdev)
+{
+ const struct softing_platform_data *pdat = pdev->dev.platform_data;
+ struct softing *card;
+ struct net_device *netdev;
+ struct softing_priv *priv;
+ struct resource *pres;
+ int ret;
+ int j;
+
+ if (!pdat) {
+ dev_warn(&pdev->dev, "no platform data\n");
+ return -EINVAL;
+ }
+ if (pdat->nbus > ARRAY_SIZE(card->net)) {
+ dev_warn(&pdev->dev, "%u nets??\n", pdat->nbus);
+ return -EINVAL;
+ }
+
+ card = kzalloc(sizeof(*card), GFP_KERNEL);
+ if (!card)
+ return -ENOMEM;
+ card->pdat = pdat;
+ card->pdev = pdev;
+ platform_set_drvdata(pdev, card);
+ mutex_init(&card->fw.lock);
+ spin_lock_init(&card->spin);
+
+ ret = -EINVAL;
+ pres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!pres)
+ goto platform_resource_failed;
+ card->dpram_phys = pres->start;
+ card->dpram_size = pres->end - pres->start + 1;
+ card->dpram = ioremap_nocache(card->dpram_phys, card->dpram_size);
+ if (!card->dpram) {
+ dev_alert(&card->pdev->dev, "dpram ioremap failed\n");
+ goto ioremap_failed;
+ }
+
+ pres = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (pres)
+ card->irq.nr = pres->start;
+
+ /* reset card */
+ ret = softing_card_boot(card);
+ if (ret < 0) {
+ dev_alert(&pdev->dev, "failed to boot\n");
+ goto boot_failed;
+ }
+
+ /* only now are the chips known */
+ card->id.freq = card->pdat->freq;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &softing_pdev_group);
+ if (ret < 0) {
+ dev_alert(&card->pdev->dev, "sysfs failed\n");
+ goto sysfs_failed;
+ }
+
+ ret = -ENOMEM;
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ card->net[j] = netdev =
+ softing_netdev_create(card, card->id.chip[j]);
+ if (!netdev) {
+ dev_alert(&pdev->dev, "failed to make can[%i]\n", j);
+ goto netdev_failed;
+ }
+ priv = netdev_priv(card->net[j]);
+ priv->index = j;
+ ret = softing_netdev_register(netdev);
+ if (ret) {
+ free_candev(netdev);
+ card->net[j] = NULL;
+ dev_alert(&card->pdev->dev,
+ "failed to register can[%i]\n", j);
+ goto netdev_failed;
+ }
+ }
+ dev_info(&card->pdev->dev, "%s ready.\n", card->pdat->name);
+ return 0;
+
+netdev_failed:
+ for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
+ if (!card->net[j])
+ continue;
+ softing_netdev_cleanup(card->net[j]);
+ }
+ sysfs_remove_group(&pdev->dev.kobj, &softing_pdev_group);
+sysfs_failed:
+ softing_card_shutdown(card);
+boot_failed:
+ iounmap(card->dpram);
+ioremap_failed:
+platform_resource_failed:
+ kfree(card);
+ return ret;
+}
+
+static struct platform_driver softing_driver = {
+ .driver = {
+ .name = "softing",
+ .owner = THIS_MODULE,
+ },
+ .probe = softing_pdev_probe,
+ .remove = __devexit_p(softing_pdev_remove),
+};
+
+MODULE_ALIAS("platform:softing");
+
+static int __init softing_start(void)
+{
+ return platform_driver_register(&softing_driver);
+}
+
+static void __exit softing_stop(void)
+{
+ platform_driver_unregister(&softing_driver);
+}
+
+module_init(softing_start);
+module_exit(softing_stop);
+
+MODULE_DESCRIPTION("Softing DPRAM CAN driver");
+MODULE_AUTHOR("Kurt Van Dijck <kurt.van.dijck@eia.be>");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+
+#include <linux/platform_device.h>
+
+#ifndef _SOFTING_DEVICE_H_
+#define _SOFTING_DEVICE_H_
+
+/* softing firmware directory prefix */
+#define fw_dir "softing-4.6/"
+
+struct softing_platform_data {
+ unsigned int manf;
+ unsigned int prod;
+ /*
+ * generation
+ * 1st with NEC or SJA1000
+ * 8bit, exclusive interrupt, ...
+ * 2nd only SJA1000
+ * 16bit, shared interrupt
+ */
+ int generation;
+ int nbus; /* # busses on device */
+ unsigned int freq; /* operating frequency in Hz */
+ unsigned int max_brp;
+ unsigned int max_sjw;
+ unsigned long dpram_size;
+ const char *name;
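+ /*
+ * per-stage firmware descriptors (bootstrap, loader, application):
+ * 'fw' names the firmware file; 'offs' - 'addr' is passed to
+ * softing_load_fw() as the load offset
+ */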
+ struct {
+ unsigned long offs;
+ unsigned long addr;
+ const char *fw;
+ } boot, load, app;
+ /*
+ * reset() function
+ * bring pdev in or out of reset, depending on value
+ */
+ int (*reset)(struct platform_device *pdev, int value);
+ int (*enable_irq)(struct platform_device *pdev, int value);
+};
+
+#endif
static void cnic_setup_page_tbl(struct cnic_dev *dev, struct cnic_dma *dma)
{
int i;
- u32 *page_table = dma->pgtbl;
+ __le32 *page_table = (__le32 *) dma->pgtbl;
for (i = 0; i < dma->num_pages; i++) {
/* Each entry needs to be in big endian format. */
- *page_table = (u32) ((u64) dma->pg_map_arr[i] >> 32);
+ *page_table = cpu_to_le32((u64) dma->pg_map_arr[i] >> 32);
page_table++;
- *page_table = (u32) dma->pg_map_arr[i];
+ *page_table = cpu_to_le32(dma->pg_map_arr[i] & 0xffffffff);
page_table++;
}
}
static void cnic_setup_page_tbl_le(struct cnic_dev *dev, struct cnic_dma *dma)
{
int i;
- u32 *page_table = dma->pgtbl;
+ __le32 *page_table = (__le32 *) dma->pgtbl;
for (i = 0; i < dma->num_pages; i++) {
/* Each entry needs to be in little endian format. */
- *page_table = dma->pg_map_arr[i] & 0xffffffff;
+ *page_table = cpu_to_le32(dma->pg_map_arr[i] & 0xffffffff);
page_table++;
- *page_table = (u32) ((u64) dma->pg_map_arr[i] >> 32);
+ *page_table = cpu_to_le32((u64) dma->pg_map_arr[i] >> 32);
page_table++;
}
}
struct port_info *pi = netdev_priv(dev);
struct adapter *adapter = pi->adapter;
+ netif_carrier_off(dev);
+
if (!(adapter->flags & FULL_INIT_DONE)) {
err = cxgb_up(adapter);
if (err < 0)
pi->xact_addr_filt = -1;
pi->rx_offload = RX_CSO;
pi->port_id = i;
- netif_carrier_off(netdev);
netdev->irq = pdev->irq;
netdev->features |= NETIF_F_SG | TSO_FLAGS;
{
int i;
- BUG_ON(adapter->debugfs_root == NULL);
+ BUG_ON(IS_ERR_OR_NULL(adapter->debugfs_root));
/*
* Debugfs support is best effort.
*/
static void cleanup_debugfs(struct adapter *adapter)
{
- BUG_ON(adapter->debugfs_root == NULL);
+ BUG_ON(IS_ERR_OR_NULL(adapter->debugfs_root));
/*
* Unlike our sister routine cleanup_proc(), we don't need to remove
struct port_info *pi;
struct net_device *netdev;
- /*
- * Vet our module parameters.
- */
- if (msi != MSI_MSIX && msi != MSI_MSI) {
- dev_err(&pdev->dev, "bad module parameter msi=%d; must be %d"
- " (MSI-X or MSI) or %d (MSI)\n", msi, MSI_MSIX,
- MSI_MSI);
- err = -EINVAL;
- goto err_out;
- }
-
/*
* Print our driver banner the first time we're called to initialize a
* device.
/*
* Set up our debugfs entries.
*/
- if (cxgb4vf_debugfs_root) {
+ if (!IS_ERR_OR_NULL(cxgb4vf_debugfs_root)) {
adapter->debugfs_root =
debugfs_create_dir(pci_name(pdev),
cxgb4vf_debugfs_root);
- if (adapter->debugfs_root == NULL)
+ if (IS_ERR_OR_NULL(adapter->debugfs_root))
dev_warn(&pdev->dev, "could not create debugfs"
" directory");
else
*/
err_free_debugfs:
- if (adapter->debugfs_root) {
+ if (!IS_ERR_OR_NULL(adapter->debugfs_root)) {
cleanup_debugfs(adapter);
debugfs_remove_recursive(adapter->debugfs_root);
}
err_disable_device:
pci_disable_device(pdev);
-err_out:
return err;
}
/*
* Tear down our debugfs entries.
*/
- if (adapter->debugfs_root) {
+ if (!IS_ERR_OR_NULL(adapter->debugfs_root)) {
cleanup_debugfs(adapter);
debugfs_remove_recursive(adapter->debugfs_root);
}
pci_release_regions(pdev);
}
+/*
+ * "Shutdown" quiesce the device, stopping Ingress Packet and Interrupt
+ * delivery.
+ */
+static void __devexit cxgb4vf_pci_shutdown(struct pci_dev *pdev)
+{
+ struct adapter *adapter;
+ int pidx;
+
+ adapter = pci_get_drvdata(pdev);
+ if (!adapter)
+ return;
+
+ /*
+ * Disable all Virtual Interfaces. This will shut down the
+ * delivery of all ingress packets into the chip for these
+ * Virtual Interfaces.
+ */
+ for_each_port(adapter, pidx) {
+ struct net_device *netdev;
+ struct port_info *pi;
+
+ if (!test_bit(pidx, &adapter->registered_device_map))
+ continue;
+
+ netdev = adapter->port[pidx];
+ if (!netdev)
+ continue;
+
+ pi = netdev_priv(netdev);
+ t4vf_enable_vi(adapter, pi->viid, false, false);
+ }
+
+ /*
+ * Free up all Queues which will prevent further DMA and
+ * Interrupts allowing various internal pathways to drain.
+ */
+ t4vf_free_sge_resources(adapter);
+}
+
/*
* PCI Device registration data structures.
*/
.id_table = cxgb4vf_pci_tbl,
.probe = cxgb4vf_pci_probe,
.remove = __devexit_p(cxgb4vf_pci_remove),
+ .shutdown = __devexit_p(cxgb4vf_pci_shutdown),
};
/*
{
int ret;
+ /*
+ * Vet our module parameters.
+ */
+ if (msi != MSI_MSIX && msi != MSI_MSI) {
+ printk(KERN_WARNING KBUILD_MODNAME
+ ": bad module parameter msi=%d; must be %d"
+ " (MSI-X or MSI) or %d (MSI)\n",
+ msi, MSI_MSIX, MSI_MSI);
+ return -EINVAL;
+ }
+
/* Debugfs support is optional, just warn if this fails */
cxgb4vf_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
- if (!cxgb4vf_debugfs_root)
+ if (IS_ERR_OR_NULL(cxgb4vf_debugfs_root))
printk(KERN_WARNING KBUILD_MODNAME ": could not create"
" debugfs entry, continuing\n");
ret = pci_register_driver(&cxgb4vf_driver);
- if (ret < 0)
+ if (ret < 0 && !IS_ERR_OR_NULL(cxgb4vf_debugfs_root))
debugfs_remove(cxgb4vf_debugfs_root);
return ret;
}
delay_idx = 0;
ms = delay[0];
- for (i = 0; i < 500; i += ms) {
+ for (i = 0; i < FW_CMD_MAX_TIMEOUT; i += ms) {
if (sleep_ok) {
ms = delay[delay_idx];
if (delay_idx < ARRAY_SIZE(delay) - 1)
}
}
/* Change buffer ownership for this last frame, back to the adapter */
- for (; lp->rx_old != entry; lp->rx_old = (++lp->rx_old) & lp->rxRingMask) {
+ for (; lp->rx_old != entry; lp->rx_old = (lp->rx_old + 1) & lp->rxRingMask) {
writel(readl(&lp->rx_ring[lp->rx_old].base) | R_OWN, &lp->rx_ring[lp->rx_old].base);
}
writel(readl(&lp->rx_ring[entry].base) | R_OWN, &lp->rx_ring[entry].base);
/*
** Update entry information
*/
- lp->rx_new = (++lp->rx_new) & lp->rxRingMask;
+ lp->rx_new = (lp->rx_new + 1) & lp->rxRingMask;
}
return 0;
}
/* Update all the pointers */
- lp->tx_old = (++lp->tx_old) & lp->txRingMask;
+ lp->tx_old = (lp->tx_old + 1) & lp->txRingMask;
}
return 0;
/* Free all the skbuffs in the queue. */
for (i = 0; i < RX_RING_SIZE; i++) {
- np->rx_ring[i].status = 0;
- np->rx_ring[i].fraginfo = 0;
skb = np->rx_skbuff[i];
if (skb) {
pci_unmap_single(np->pdev,
dev_kfree_skb (skb);
np->rx_skbuff[i] = NULL;
}
+ np->rx_ring[i].status = 0;
+ np->rx_ring[i].fraginfo = 0;
}
for (i = 0; i < TX_RING_SIZE; i++) {
skb = np->tx_skbuff[i];
case M88E1000_I_PHY_ID:
case M88E1011_I_PHY_ID:
case M88E1111_I_PHY_ID:
+ case M88E1118_E_PHY_ID:
hw->phy_type = e1000_phy_m88;
break;
case IGP01E1000_I_PHY_ID:
break;
case e1000_ce4100:
if ((hw->phy_id == RTL8211B_PHY_ID) ||
- (hw->phy_id == RTL8201N_PHY_ID))
+ (hw->phy_id == RTL8201N_PHY_ID) ||
+ (hw->phy_id == M88E1118_E_PHY_ID))
match = true;
break;
case e1000_82541:
#define M88E1000_14_PHY_ID M88E1000_E_PHY_ID
#define M88E1011_I_REV_4 0x04
#define M88E1111_I_PHY_ID 0x01410CC0
+#define M88E1118_E_PHY_ID 0x01410E40
#define L1LXT971A_PHY_ID 0x001378E0
#define RTL8211B_PHY_ID 0x001CC910
u16 phy_status, phy_1000t_status, phy_ext_status;
u16 pci_status;
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
e1e_rphy(hw, PHY_STATUS, &phy_status);
e1e_rphy(hw, PHY_1000T_STATUS, &phy_1000t_status);
e1e_rphy(hw, PHY_EXT_STATUS, &phy_ext_status);
struct e1000_adapter *adapter = container_of(work,
struct e1000_adapter, downshift_task);
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
e1000e_gig_downshift_workaround_ich8lan(&adapter->hw);
}
return 0;
}
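+/*
+ * Force any descriptor write-backs still pending in DMA-burst mode out to
+ * memory; callers rely on this before cleaning the rings or checking for
+ * a Tx hang.
+ */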
+static void e1000e_flush_descriptors(struct e1000_adapter *adapter)
+{
+ struct e1000_hw *hw = &adapter->hw;
+
+ if (!(adapter->flags2 & FLAG2_DMA_BURST))
+ return;
+
+ /* flush pending descriptor writebacks to memory */
+ ew32(TIDV, adapter->tx_int_delay | E1000_TIDV_FPD);
+ ew32(RDTR, adapter->rx_int_delay | E1000_RDTR_FPD);
+
+ /* execute the writes immediately */
+ e1e_flush();
+}
+
void e1000e_down(struct e1000_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
if (!pci_channel_offline(adapter->pdev))
e1000e_reset(adapter);
+
+ e1000e_flush_descriptors(adapter);
+
e1000_clean_tx_ring(adapter);
e1000_clean_rx_ring(adapter);
{
struct e1000_adapter *adapter = container_of(work,
struct e1000_adapter, update_phy_task);
+
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
e1000_get_phy_info(&adapter->hw);
}
static void e1000_update_phy_info(unsigned long data)
{
struct e1000_adapter *adapter = (struct e1000_adapter *) data;
+
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
schedule_work(&adapter->update_phy_task);
}
u32 link, tctl;
int tx_pending = 0;
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
link = e1000e_has_link(adapter);
if ((netif_carrier_ok(netdev)) && link) {
/* Cancel scheduled suspend requests. */
* to get done, so reset controller to flush Tx.
* (Do the reset outside of interrupt context).
*/
- adapter->tx_timeout_count++;
schedule_work(&adapter->reset_task);
/* return immediately since reset is imminent */
return;
else
ew32(ICS, E1000_ICS_RXDMT0);
+ /* flush pending descriptors to memory before detecting Tx hang */
+ e1000e_flush_descriptors(adapter);
+
/* Force detection of hung controller every watchdog period */
adapter->detect_tx_hung = 1;
- /* flush partial descriptors to memory before detecting Tx hang */
- if (adapter->flags2 & FLAG2_DMA_BURST) {
- ew32(TIDV, adapter->tx_int_delay | E1000_TIDV_FPD);
- ew32(RDTR, adapter->rx_int_delay | E1000_RDTR_FPD);
- /*
- * no need to flush the writes because the timeout code does
- * an er32 first thing
- */
- }
-
/*
* With 82571 controllers, LAA may be overwritten due to controller
* reset from the other port. Set the appropriate LAA in RAR[0]
struct e1000_adapter *adapter;
adapter = container_of(work, struct e1000_adapter, reset_task);
+ /* don't run the task if already down */
+ if (test_bit(__E1000_DOWN, &adapter->state))
+ return;
+
if (!((adapter->flags & FLAG_RX_NEEDS_RESTART) &&
(adapter->flags & FLAG_RX_RESTART_NOW))) {
e1000e_dump(adapter);
if (netif_msg_hw(priv))
printk(KERN_DEBUG DRV_NAME ": reading TSV at addr:0x%04x\n",
endptr + 1);
- enc28j60_mem_read(priv, endptr + 1, sizeof(tsv), tsv);
+ enc28j60_mem_read(priv, endptr + 1, TSV_SIZE, tsv);
}
static void enc28j60_dump_tsv(struct enc28j60_net *priv, const char *msg,
goto out_error;
}
+ netif_carrier_off(dev);
+
dev_info(&pci_dev->dev, "ifname %s, PHY OUI 0x%x @ %d, addr %pM\n",
dev->name, np->phy_oui, np->phyaddr, dev->dev_addr);
if (err) {
for (j = 0; j < i; j++)
free_grp_irqs(&priv->gfargrp[j]);
- goto irq_fail;
+ goto irq_fail;
}
}
ret = sh_irda_set_baudrate(self, speed);
if (ret < 0)
- return ret;
+ goto sh_irda_hard_xmit_end;
self->tx_buff.len = 0;
if (skb->len) {
sh_irda_write(self, IRTFLR, self->tx_buff.len);
sh_irda_write(self, IRTCTR, ARMOD | TE);
- }
+ } else
+ goto sh_irda_hard_xmit_end;
dev_kfree_skb(skb);
return 0;
+
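+ /*
+ * transmit error path: fall back to 9600 baud, re-enable reception and
+ * the tx queue, and free the skb before returning the error
+ */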
+sh_irda_hard_xmit_end:
+ sh_irda_set_baudrate(self, 9600);
+ netif_wake_queue(self->ndev);
+ sh_irda_rcv_ctrl(self, 1);
+ dev_kfree_skb(skb);
+
+ return ret;
+
}
static int sh_irda_ioctl(struct net_device *ndev, struct ifreq *ifreq, int cmd)
hw_dbg(hw, " New MAC Addr =%pM\n", hw->mac.addr);
hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0, IXGBE_RAH_AV);
+
+ /* clear VMDq pool/queue selection for RAR 0 */
+ hw->mac.ops.clear_vmdq(hw, 0, IXGBE_CLEAR_VMDQ_ALL);
}
hw->addr_ctrl.overflow_promisc = 0;
struct scatterlist *sg;
unsigned int i, j, dmacount;
unsigned int len;
- static const unsigned int bufflen = 4096;
+ static const unsigned int bufflen = IXGBE_FCBUFF_MIN;
unsigned int firstoff = 0;
unsigned int lastsize;
unsigned int thisoff = 0;
unsigned int thislen = 0;
u32 fcbuff, fcdmarw, fcfltrw;
- dma_addr_t addr;
+ dma_addr_t addr = 0;
if (!netdev || !sgl)
return 0;
/* only the last buffer may have non-full bufflen */
lastsize = thisoff + thislen;
+ /*
+ * lastsize must not equal bufflen; if it does,
+ * add another buffer with lastsize = 1.
+ */
+ if (lastsize == bufflen) {
+ if (j >= IXGBE_BUFFCNT_MAX) {
+ e_err(drv, "xid=%x:%d,%d,%d:addr=%llx "
+ "not enough user buffers. We need an extra "
+ "buffer because lastsize is bufflen.\n",
+ xid, i, j, dmacount, (u64)addr);
+ goto out_noddp_free;
+ }
+
+ ddp->udl[j] = (u64)(fcoe->extra_ddp_buffer_dma);
+ j++;
+ lastsize = 1;
+ }
+
fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT);
fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT);
fcbuff |= (firstoff << IXGBE_FCBUFF_OFFSET_SHIFT);
e_err(drv, "failed to allocated FCoE DDP pool\n");
spin_lock_init(&fcoe->lock);
+
+ /* Extra buffer to be shared by all DDPs for HW work around */
+ fcoe->extra_ddp_buffer = kmalloc(IXGBE_FCBUFF_MIN, GFP_ATOMIC);
+ if (fcoe->extra_ddp_buffer == NULL) {
+ e_err(drv, "failed to allocated extra DDP buffer\n");
+ goto out_extra_ddp_buffer_alloc;
+ }
+
+ fcoe->extra_ddp_buffer_dma =
+ dma_map_single(&adapter->pdev->dev,
+ fcoe->extra_ddp_buffer,
+ IXGBE_FCBUFF_MIN,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(&adapter->pdev->dev,
+ fcoe->extra_ddp_buffer_dma)) {
+ e_err(drv, "failed to map extra DDP buffer\n");
+ goto out_extra_ddp_buffer_dma;
+ }
}
/* Enable L2 eth type filter for FCoE */
}
}
#endif
+
+ return;
+
+out_extra_ddp_buffer_dma:
+ kfree(fcoe->extra_ddp_buffer);
+out_extra_ddp_buffer_alloc:
+ pci_pool_destroy(fcoe->pool);
+ fcoe->pool = NULL;
}
/**
if (fcoe->pool) {
for (i = 0; i < IXGBE_FCOE_DDP_MAX; i++)
ixgbe_fcoe_ddp_put(adapter->netdev, i);
+ dma_unmap_single(&adapter->pdev->dev,
+ fcoe->extra_ddp_buffer_dma,
+ IXGBE_FCBUFF_MIN,
+ DMA_FROM_DEVICE);
+ kfree(fcoe->extra_ddp_buffer);
pci_pool_destroy(fcoe->pool);
fcoe->pool = NULL;
}
spinlock_t lock;
struct pci_pool *pool;
struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
+ unsigned char *extra_ddp_buffer;
+ dma_addr_t extra_ddp_buffer_dma;
};
#endif /* _IXGBE_FCOE_H */
static const char ixgbe_driver_string[] =
"Intel(R) 10 Gigabit PCI Express Network Driver";
-#define DRV_VERSION "3.0.12-k2"
+#define DRV_VERSION "3.2.9-k2"
const char ixgbe_driver_version[] = DRV_VERSION;
static char ixgbe_copyright[] = "Copyright (c) 1999-2010 Intel Corporation.";
u32 mhadd, hlreg0;
/* Decide whether to use packet split mode or not */
+ /* On by default */
+ adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED;
+
/* Do not use packet split if we're in SR-IOV Mode */
- if (!adapter->num_vfs)
- adapter->flags |= IXGBE_FLAG_RX_PS_ENABLED;
+ if (adapter->num_vfs)
+ adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED;
+
+ /* Disable packet split due to 82599 erratum #45 */
+ if (hw->mac.type == ixgbe_mac_82599EB)
+ adapter->flags &= ~IXGBE_FLAG_RX_PS_ENABLED;
/* Set the RX buffer length according to the mode */
if (adapter->flags & IXGBE_FLAG_RX_PS_ENABLED) {
* We need to try and force an autonegotiation
* session, then bring up link.
*/
- hw->mac.ops.setup_sfp(hw);
+ if (hw->mac.ops.setup_sfp)
+ hw->mac.ops.setup_sfp(hw);
if (!(adapter->flags & IXGBE_FLAG_IN_SFP_LINK_TASK))
schedule_work(&adapter->multispeed_fiber_task);
} else {
{
int q_idx, num_q_vectors;
struct ixgbe_q_vector *q_vector;
- int napi_vectors;
int (*poll)(struct napi_struct *, int);
if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED) {
num_q_vectors = adapter->num_msix_vectors - NON_Q_VECTORS;
- napi_vectors = adapter->num_rx_queues;
poll = &ixgbe_clean_rxtx_many;
} else {
num_q_vectors = 1;
- napi_vectors = 1;
poll = &ixgbe_poll;
}
unregister_netdev(adapter->netdev);
return;
}
- hw->mac.ops.setup_sfp(hw);
+ if (hw->mac.ops.setup_sfp)
+ hw->mac.ops.setup_sfp(hw);
if (!(adapter->flags & IXGBE_FLAG_IN_SFP_LINK_TASK))
/* This will also work for DA Twinax connections */
return adapter->hw.mac.ops.set_vfta(&adapter->hw, vid, vf, (bool)add);
}
-
static void ixgbe_set_vmolr(struct ixgbe_hw *hw, u32 vf, bool aupe)
{
u32 vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
vmolr |= (IXGBE_VMOLR_ROMPE |
- IXGBE_VMOLR_ROPE |
IXGBE_VMOLR_BAM);
if (aupe)
vmolr |= IXGBE_VMOLR_AUPE;
}
ctrl = IXGBE_READ_REG(hw, IXGBE_CTRL);
- IXGBE_WRITE_REG(hw, IXGBE_CTRL, (ctrl | IXGBE_CTRL_RST));
+ IXGBE_WRITE_REG(hw, IXGBE_CTRL, (ctrl | reset_bit));
IXGBE_WRITE_FLUSH(hw);
/* Poll for reset bit to self-clear indicating reset is complete */
for (i = 0; i < 10; i++) {
udelay(1);
ctrl = IXGBE_READ_REG(hw, IXGBE_CTRL);
- if (!(ctrl & IXGBE_CTRL_RST))
+ if (!(ctrl & reset_bit))
break;
}
- if (ctrl & IXGBE_CTRL_RST) {
+ if (ctrl & reset_bit) {
status = IXGBE_ERR_RESET_FAILED;
hw_dbg(hw, "Reset polling failed to complete.\n");
}
{ PCI_VDEVICE(MELLANOX, 0x6764) }, /* MT26468 ConnectX EN 10GigE PCIe gen2*/
{ PCI_VDEVICE(MELLANOX, 0x6746) }, /* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
{ PCI_VDEVICE(MELLANOX, 0x676e) }, /* MT26478 ConnectX2 40GigE PCIe gen2 */
+ { PCI_VDEVICE(MELLANOX, 0x1002) }, /* MT25400 Family [ConnectX-2 Virtual Function] */
+ { PCI_VDEVICE(MELLANOX, 0x1003) }, /* MT27500 Family [ConnectX-3] */
+ { PCI_VDEVICE(MELLANOX, 0x1004) }, /* MT27500 Family [ConnectX-3 Virtual Function] */
+ { PCI_VDEVICE(MELLANOX, 0x1005) }, /* MT27510 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1006) }, /* MT27511 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1007) }, /* MT27520 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1008) }, /* MT27521 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1009) }, /* MT27530 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100a) }, /* MT27531 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100b) }, /* MT27540 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100c) }, /* MT27541 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100d) }, /* MT27550 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100e) }, /* MT27551 Family */
+ { PCI_VDEVICE(MELLANOX, 0x100f) }, /* MT27560 Family */
+ { PCI_VDEVICE(MELLANOX, 0x1010) }, /* MT27561 Family */
{ 0, }
};
{
struct niu_parent *parent = np->parent;
int first_rx_channel, first_tx_channel;
+ int num_rx_rings, num_tx_rings;
+ struct rx_ring_info *rx_rings;
+ struct tx_ring_info *tx_rings;
int i, port, err;
port = np->port;
first_tx_channel += parent->txchan_per_port[i];
}
- np->num_rx_rings = parent->rxchan_per_port[port];
- np->num_tx_rings = parent->txchan_per_port[port];
+ num_rx_rings = parent->rxchan_per_port[port];
+ num_tx_rings = parent->txchan_per_port[port];
- netif_set_real_num_rx_queues(np->dev, np->num_rx_rings);
- netif_set_real_num_tx_queues(np->dev, np->num_tx_rings);
-
- np->rx_rings = kcalloc(np->num_rx_rings, sizeof(struct rx_ring_info),
- GFP_KERNEL);
+ rx_rings = kcalloc(num_rx_rings, sizeof(struct rx_ring_info),
+ GFP_KERNEL);
err = -ENOMEM;
- if (!np->rx_rings)
+ if (!rx_rings)
goto out_err;
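+ /*
+ * Publish the ring count before the ring pointer; the stats readers
+ * fetch np->rx_rings with ACCESS_ONCE() and must not see the pointer
+ * before a valid count.
+ */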
+ np->num_rx_rings = num_rx_rings;
+ smp_wmb();
+ np->rx_rings = rx_rings;
+
+ netif_set_real_num_rx_queues(np->dev, num_rx_rings);
+
for (i = 0; i < np->num_rx_rings; i++) {
struct rx_ring_info *rp = &np->rx_rings[i];
return err;
}
- np->tx_rings = kcalloc(np->num_tx_rings, sizeof(struct tx_ring_info),
- GFP_KERNEL);
+ tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),
+ GFP_KERNEL);
err = -ENOMEM;
- if (!np->tx_rings)
+ if (!tx_rings)
goto out_err;
+ np->num_tx_rings = num_tx_rings;
+ smp_wmb();
+ np->tx_rings = tx_rings;
+
+ netif_set_real_num_tx_queues(np->dev, num_tx_rings);
+
for (i = 0; i < np->num_tx_rings; i++) {
struct tx_ring_info *rp = &np->tx_rings[i];
static void niu_get_rx_stats(struct niu *np)
{
unsigned long pkts, dropped, errors, bytes;
+ struct rx_ring_info *rx_rings;
int i;
pkts = dropped = errors = bytes = 0;
+
+ rx_rings = ACCESS_ONCE(np->rx_rings);
+ if (!rx_rings)
+ goto no_rings;
+
for (i = 0; i < np->num_rx_rings; i++) {
- struct rx_ring_info *rp = &np->rx_rings[i];
+ struct rx_ring_info *rp = &rx_rings[i];
niu_sync_rx_discard_stats(np, rp, 0);
dropped += rp->rx_dropped;
errors += rp->rx_errors;
}
+
+no_rings:
np->dev->stats.rx_packets = pkts;
np->dev->stats.rx_bytes = bytes;
np->dev->stats.rx_dropped = dropped;
static void niu_get_tx_stats(struct niu *np)
{
unsigned long pkts, errors, bytes;
+ struct tx_ring_info *tx_rings;
int i;
pkts = errors = bytes = 0;
+
+ tx_rings = ACCESS_ONCE(np->tx_rings);
+ if (!tx_rings)
+ goto no_rings;
+
for (i = 0; i < np->num_tx_rings; i++) {
- struct tx_ring_info *rp = &np->tx_rings[i];
+ struct tx_ring_info *rp = &tx_rings[i];
pkts += rp->tx_packets;
bytes += rp->tx_bytes;
errors += rp->tx_errors;
}
+
+no_rings:
np->dev->stats.tx_packets = pkts;
np->dev->stats.tx_bytes = bytes;
np->dev->stats.tx_errors = errors;
{
struct niu *np = netdev_priv(dev);
- niu_get_rx_stats(np);
- niu_get_tx_stats(np);
-
+ if (netif_running(dev)) {
+ niu_get_rx_stats(np);
+ niu_get_tx_stats(np);
+ }
return &dev->stats;
}
}
ndev = alloc_etherdev(sizeof(struct ns83820));
- dev = PRIV(ndev);
-
err = -ENOMEM;
- if (!dev)
+ if (!ndev)
goto out;
+ dev = PRIV(ndev);
dev->ndev = ndev;
spin_lock_init(&dev->rx_info.lock);
struct pch_gbe_regs_mac_adr mac_adr[16];
u32 ADDR_MASK;
u32 MIIM;
- u32 reserve2;
+ u32 MAC_ADDR_LOAD;
u32 RGMII_ST;
u32 RGMII_CTRL;
u32 reserve3[3];
#define PCH_GBE_SHORT_PKT 64
#define DSC_INIT16 0xC000
#define PCH_GBE_DMA_ALIGN 0
+#define PCH_GBE_DMA_PADDING 2
#define PCH_GBE_WATCHDOG_PERIOD (1 * HZ) /* watchdog time */
#define PCH_GBE_COPYBREAK_DEFAULT 256
#define PCH_GBE_PCI_BAR 1
static int pch_gbe_mdio_read(struct net_device *netdev, int addr, int reg);
static void pch_gbe_mdio_write(struct net_device *netdev, int addr, int reg,
int data);
+
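+/* kick the MAC_ADDR_LOAD register so the hardware reloads its station address */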
+inline void pch_gbe_mac_load_mac_addr(struct pch_gbe_hw *hw)
+{
+ iowrite32(0x01, &hw->reg->MAC_ADDR_LOAD);
+}
+
/**
* pch_gbe_mac_read_mac_addr - Read MAC address
* @hw: Pointer to the HW structure
struct pch_gbe_adapter *adapter;
adapter = container_of(work, struct pch_gbe_adapter, reset_task);
+ rtnl_lock();
pch_gbe_reinit_locked(adapter);
+ rtnl_unlock();
}
/**
*/
void pch_gbe_reinit_locked(struct pch_gbe_adapter *adapter)
{
- struct net_device *netdev = adapter->netdev;
-
- rtnl_lock();
- if (netif_running(netdev)) {
- pch_gbe_down(adapter);
- pch_gbe_up(adapter);
- }
- rtnl_unlock();
+ pch_gbe_down(adapter);
+ pch_gbe_up(adapter);
}
/**
struct pch_gbe_buffer *buffer_info;
struct pch_gbe_rx_desc *rx_desc;
u32 length;
- unsigned char tmp_packet[ETH_HLEN];
unsigned int i;
unsigned int cleaned_count = 0;
bool cleaned = false;
- struct sk_buff *skb;
+ struct sk_buff *skb, *new_skb;
u8 dma_status;
u16 gbec_status;
u32 tcp_ip_status;
- u8 skb_copy_flag = 0;
- u8 skb_padding_flag = 0;
i = rx_ring->next_to_clean;
pr_err("Receive CRC Error\n");
} else {
/* get receive length */
- /* length convert[-3], padding[-2] */
- length = (rx_desc->rx_words_eob) - 3 - 2;
+ /* length convert[-3] */
+ length = (rx_desc->rx_words_eob) - 3;
/* Decide the data conversion method */
if (!adapter->rx_csum) {
/* [Header:14][payload] */
- skb_padding_flag = 0;
- skb_copy_flag = 1;
+ if (NET_IP_ALIGN) {
+ /* The alignment differs, so allocate
+ * new_skb and copy the data into it. */
+ new_skb = netdev_alloc_skb(netdev,
+ length + NET_IP_ALIGN);
+ if (!new_skb) {
+ /* drop on error */
+ pr_err("New skb allocation "
+ "Error\n");
+ goto dorrop;
+ }
+ skb_reserve(new_skb, NET_IP_ALIGN);
+ memcpy(new_skb->data, skb->data,
+ length);
+ skb = new_skb;
+ } else {
+ /* DMA buffer is used as SKB as it is.*/
+ buffer_info->skb = NULL;
+ }
} else {
/* [Header:14][padding:2][payload] */
- skb_padding_flag = 1;
- if (length < copybreak)
- skb_copy_flag = 1;
- else
- skb_copy_flag = 0;
- }
-
- /* Data conversion */
- if (skb_copy_flag) { /* recycle skb */
- struct sk_buff *new_skb;
- new_skb =
- netdev_alloc_skb(netdev,
- length + NET_IP_ALIGN);
- if (new_skb) {
- if (!skb_padding_flag) {
- skb_reserve(new_skb,
- NET_IP_ALIGN);
+ /* The length includes padding length */
+ length = length - PCH_GBE_DMA_PADDING;
+ if ((length < copybreak) ||
+ (NET_IP_ALIGN != PCH_GBE_DMA_PADDING)) {
+ /* The alignment differs, so allocate
+ * new_skb and copy the data into it;
+ * the padding is dropped during the
+ * copy. */
+ new_skb = netdev_alloc_skb(netdev,
+ length + NET_IP_ALIGN);
+ if (!new_skb) {
+ /* drop on error */
+ pr_err("New skb allocation "
+ "Error\n");
+ goto dorrop;
}
+ skb_reserve(new_skb, NET_IP_ALIGN);
memcpy(new_skb->data, skb->data,
- length);
- /* save the skb
- * in buffer_info as good */
+ ETH_HLEN);
+ memcpy(&new_skb->data[ETH_HLEN],
+ &skb->data[ETH_HLEN +
+ PCH_GBE_DMA_PADDING],
+ length - ETH_HLEN);
skb = new_skb;
- } else if (!skb_padding_flag) {
- /* dorrop error */
- pr_err("New skb allocation Error\n");
- goto dorrop;
+ } else {
+ /* Drop the padding by moving the
+ * header data. */
+ memmove(&skb->data[PCH_GBE_DMA_PADDING],
+ &skb->data[0], ETH_HLEN);
+ skb_reserve(skb, NET_IP_ALIGN);
+ buffer_info->skb = NULL;
}
- } else {
- buffer_info->skb = NULL;
}
- if (skb_padding_flag) {
- memcpy(&tmp_packet[0], &skb->data[0], ETH_HLEN);
- memcpy(&skb->data[NET_IP_ALIGN], &tmp_packet[0],
- ETH_HLEN);
- skb_reserve(skb, NET_IP_ALIGN);
-
- }
-
+ /* The length includes FCS length */
+ length = length - ETH_FCS_LEN;
/* update status of driver */
adapter->stats.rx_bytes += length;
adapter->stats.rx_packets++;
struct net_device *netdev = pci_get_drvdata(pdev);
struct pch_gbe_adapter *adapter = netdev_priv(netdev);
- flush_scheduled_work();
+ cancel_work_sync(&adapter->reset_task);
unregister_netdev(netdev);
pch_gbe_hal_phy_hw_reset(&adapter->hw);
netdev->features = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_GRO;
pch_gbe_set_ethtool_ops(netdev);
+ pch_gbe_mac_load_mac_addr(&adapter->hw);
pch_gbe_mac_reset_hw(&adapter->hw);
/* setup the private structure */
/*
* Wait a full Tx time (1.2ms) + some guard time, NS says 1.6ms total.
- * Early datasheets said to poll the reset bit, but now they say that
- * it "is not a reliable indicator and subsequently should be ignored."
- * We wait at least 10ms.
+ * We wait at least 2ms.
*/
- mdelay(10);
+ mdelay(2);
/*
* Reset RBCR[01] back to zero as per magic incantation.
if (pm)
pm_request_resume(&tp->pci_dev->dev);
netif_carrier_on(dev);
- netif_info(tp, ifup, dev, "link up\n");
+ if (net_ratelimit())
+ netif_info(tp, ifup, dev, "link up\n");
} else {
netif_carrier_off(dev);
netif_info(tp, ifdown, dev, "link down\n");
if (pci_dev_run_wake(pdev))
pm_runtime_put_noidle(&pdev->dev);
+ netif_carrier_off(dev);
+
out:
return rc;
RTL_W16(IntrMitigate, 0x5151);
/* Work around for RxFIFO overflow. */
- if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_22) {
tp->intr_event |= RxFIFOOver | PCSTimeout;
tp->intr_event &= ~RxOverflow;
}
break;
}
- /* Work around for rx fifo overflow */
- if (unlikely(status & RxFIFOOver) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_11)) {
- netif_stop_queue(dev);
- rtl8169_tx_timeout(dev);
- break;
+ if (unlikely(status & RxFIFOOver)) {
+ switch (tp->mac_version) {
+ /* Work around for rx fifo overflow */
+ case RTL_GIGA_MAC_VER_11:
+ case RTL_GIGA_MAC_VER_22:
+ case RTL_GIGA_MAC_VER_26:
+ netif_stop_queue(dev);
+ rtl8169_tx_timeout(dev);
+ goto done;
+ /* Testers needed. */
+ case RTL_GIGA_MAC_VER_17:
+ case RTL_GIGA_MAC_VER_19:
+ case RTL_GIGA_MAC_VER_20:
+ case RTL_GIGA_MAC_VER_21:
+ case RTL_GIGA_MAC_VER_23:
+ case RTL_GIGA_MAC_VER_24:
+ case RTL_GIGA_MAC_VER_27:
+ case RTL_GIGA_MAC_VER_28:
+ /* Experimental science. Pktgen proof. */
+ case RTL_GIGA_MAC_VER_12:
+ case RTL_GIGA_MAC_VER_25:
+ if (status == RxFIFOOver)
+ goto done;
+ break;
+ default:
+ break;
+ }
}
if (unlikely(status & SYSErr)) {
(status & RxFIFOOver) ? (status | RxOverflow) : status);
status = RTL_R16(IntrStatus);
}
-
+done:
return IRQ_RETVAL(handled);
}
"cur_rx:%4.4d, dirty_rx:%4.4d\n",
net_dev->name, sis_priv->cur_rx,
sis_priv->dirty_rx);
+ dev_kfree_skb(skb);
break;
}
priv->hw = device;
- if (device_can_wakeup(priv->device))
+ if (device_can_wakeup(priv->device)) {
priv->wolopts = WAKE_MAGIC; /* Magic Frame as default */
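+ /* let this interrupt wake the system once WoL is armed */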
+ enable_irq_wake(dev->irq);
+ }
return 0;
}
#define BAR_0 0
#define BAR_2 2
-#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
-#define TG3_VLAN_TAG_USED 1
-#else
-#define TG3_VLAN_TAG_USED 0
-#endif
-
#include "tg3.h"
#define DRV_MODULE_NAME "tg3"
TG3_TX_RING_SIZE)
#define NEXT_TX(N) (((N) + 1) & (TG3_TX_RING_SIZE - 1))
-#define TG3_RX_DMA_ALIGN 16
-#define TG3_RX_HEADROOM ALIGN(VLAN_HLEN, TG3_RX_DMA_ALIGN)
-
#define TG3_DMA_BYTE_ENAB 64
#define TG3_RX_STD_DMA_SZ 1536
struct sk_buff *skb;
dma_addr_t dma_addr;
u32 opaque_key, desc_idx, *post_ptr;
- bool hw_vlan __maybe_unused = false;
- u16 vtag __maybe_unused = 0;
desc_idx = desc->opaque & RXD_OPAQUE_INDEX_MASK;
opaque_key = desc->opaque & RXD_OPAQUE_RING_MASK;
tg3_recycle_rx(tnapi, tpr, opaque_key,
desc_idx, *post_ptr);
- copy_skb = netdev_alloc_skb(tp->dev, len + VLAN_HLEN +
+ copy_skb = netdev_alloc_skb(tp->dev, len +
TG3_RAW_IP_ALIGN);
if (copy_skb == NULL)
goto drop_it_no_recycle;
- skb_reserve(copy_skb, TG3_RAW_IP_ALIGN + VLAN_HLEN);
+ skb_reserve(copy_skb, TG3_RAW_IP_ALIGN);
skb_put(copy_skb, len);
pci_dma_sync_single_for_cpu(tp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE);
skb_copy_from_linear_data(skb, copy_skb->data, len);
}
if (desc->type_flags & RXD_FLAG_VLAN &&
- !(tp->rx_mode & RX_MODE_KEEP_VLAN_TAG)) {
- vtag = desc->err_vlan & RXD_VLAN_MASK;
-#if TG3_VLAN_TAG_USED
- if (tp->vlgrp)
- hw_vlan = true;
- else
-#endif
- {
- struct vlan_ethhdr *ve = (struct vlan_ethhdr *)
- __skb_push(skb, VLAN_HLEN);
-
- memmove(ve, skb->data + VLAN_HLEN,
- ETH_ALEN * 2);
- ve->h_vlan_proto = htons(ETH_P_8021Q);
- ve->h_vlan_TCI = htons(vtag);
- }
- }
+ !(tp->rx_mode & RX_MODE_KEEP_VLAN_TAG))
+ __vlan_hwaccel_put_tag(skb,
+ desc->err_vlan & RXD_VLAN_MASK);
-#if TG3_VLAN_TAG_USED
- if (hw_vlan)
- vlan_gro_receive(&tnapi->napi, tp->vlgrp, vtag, skb);
- else
-#endif
- napi_gro_receive(&tnapi->napi, skb);
+ napi_gro_receive(&tnapi->napi, skb);
received++;
budget--;
base_flags |= TXD_FLAG_TCPUDP_CSUM;
}
-#if TG3_VLAN_TAG_USED
if (vlan_tx_tag_present(skb))
base_flags |= (TXD_FLAG_VLAN |
(vlan_tx_tag_get(skb) << 16));
-#endif
len = skb_headlen(skb);
}
}
}
-#if TG3_VLAN_TAG_USED
+
if (vlan_tx_tag_present(skb))
base_flags |= (TXD_FLAG_VLAN |
(vlan_tx_tag_get(skb) << 16));
-#endif
if ((tp->tg3_flags3 & TG3_FLG3_USE_JUMBO_BDFLAG) &&
!mss && skb->len > VLAN_ETH_FRAME_LEN)
rx_mode = tp->rx_mode & ~(RX_MODE_PROMISC |
RX_MODE_KEEP_VLAN_TAG);
+#if !defined(CONFIG_VLAN_8021Q) && !defined(CONFIG_VLAN_8021Q_MODULE)
/* When ASF is in use, we always keep the RX_MODE_KEEP_VLAN_TAG
* flag clear.
*/
-#if TG3_VLAN_TAG_USED
- if (!tp->vlgrp &&
- !(tp->tg3_flags & TG3_FLAG_ENABLE_ASF))
- rx_mode |= RX_MODE_KEEP_VLAN_TAG;
-#else
- /* By definition, VLAN is disabled always in this
- * case.
- */
if (!(tp->tg3_flags & TG3_FLAG_ENABLE_ASF))
rx_mode |= RX_MODE_KEEP_VLAN_TAG;
#endif
if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES)
break; /* We have no PHY */
- if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)
+ if ((tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) ||
+ ((tp->tg3_flags & TG3_FLAG_ENABLE_ASF) &&
+ !netif_running(dev)))
return -EAGAIN;
spin_lock_bh(&tp->lock);
if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES)
break; /* We have no PHY */
- if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)
+ if ((tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) ||
+ ((tp->tg3_flags & TG3_FLAG_ENABLE_ASF) &&
+ !netif_running(dev)))
return -EAGAIN;
spin_lock_bh(&tp->lock);
return -EOPNOTSUPP;
}
-#if TG3_VLAN_TAG_USED
-static void tg3_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
-{
- struct tg3 *tp = netdev_priv(dev);
-
- if (!netif_running(dev)) {
- tp->vlgrp = grp;
- return;
- }
-
- tg3_netif_stop(tp);
-
- tg3_full_lock(tp, 0);
-
- tp->vlgrp = grp;
-
- /* Update RX_MODE_KEEP_VLAN_TAG bit in RX_MODE register. */
- __tg3_set_rx_mode(dev);
-
- tg3_netif_start(tp);
-
- tg3_full_unlock(tp);
-}
-#endif
-
static int tg3_get_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
{
struct tg3 *tp = netdev_priv(dev);
static void inline vlan_features_add(struct net_device *dev, unsigned long flags)
{
-#if TG3_VLAN_TAG_USED
dev->vlan_features |= flags;
-#endif
}
static inline u32 tg3_rx_ret_ring_size(struct tg3 *tp)
else
tp->tg3_flags &= ~TG3_FLAG_POLL_SERDES;
- tp->rx_offset = NET_IP_ALIGN + TG3_RX_HEADROOM;
+ tp->rx_offset = NET_IP_ALIGN;
tp->rx_copy_thresh = TG3_RX_COPY_THRESHOLD;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5701 &&
(tp->tg3_flags & TG3_FLAG_PCIX_MODE) != 0) {
- tp->rx_offset -= NET_IP_ALIGN;
+ tp->rx_offset = 0;
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
tp->rx_copy_thresh = ~(u16)0;
#endif
.ndo_do_ioctl = tg3_ioctl,
.ndo_tx_timeout = tg3_tx_timeout,
.ndo_change_mtu = tg3_change_mtu,
-#if TG3_VLAN_TAG_USED
- .ndo_vlan_rx_register = tg3_vlan_rx_register,
-#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = tg3_poll_controller,
#endif
.ndo_do_ioctl = tg3_ioctl,
.ndo_tx_timeout = tg3_tx_timeout,
.ndo_change_mtu = tg3_change_mtu,
-#if TG3_VLAN_TAG_USED
- .ndo_vlan_rx_register = tg3_vlan_rx_register,
-#endif
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = tg3_poll_controller,
#endif
SET_NETDEV_DEV(dev, &pdev->dev);
-#if TG3_VLAN_TAG_USED
dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
-#endif
tp = netdev_priv(dev);
tp->pdev = pdev;
u32 rx_std_max_post;
u32 rx_offset;
u32 rx_pkt_map_sz;
-#if TG3_VLAN_TAG_USED
- struct vlan_group *vlgrp;
-#endif
/* begin "everything else" cacheline(s) section */
/*
* cdc_ncm.c
*
- * Copyright (C) ST-Ericsson 2010
+ * Copyright (C) ST-Ericsson 2010-2011
* Contact: Alexey Orishko <alexey.orishko@stericsson.com>
* Original author: Hans Petter Selasky <hans.petter.selasky@stericsson.com>
*
#include <linux/usb/usbnet.h>
#include <linux/usb/cdc.h>
-#define DRIVER_VERSION "30-Nov-2010"
+#define DRIVER_VERSION "7-Feb-2011"
/* CDC NCM subclass 3.2.1 */
#define USB_CDC_NCM_NDP16_LENGTH_MIN 0x10
*/
#define CDC_NCM_DPT_DATAGRAMS_MAX 32
+/* Maximum amount of IN datagrams in NTB */
+#define CDC_NCM_DPT_DATAGRAMS_IN_MAX 0 /* unlimited */
+
/* Restart the timer, if amount of datagrams is less than given value */
#define CDC_NCM_RESTART_TIMER_DATAGRAM_CNT 3
(sizeof(struct usb_cdc_ncm_nth16) + sizeof(struct usb_cdc_ncm_ndp16) + \
(CDC_NCM_DPT_DATAGRAMS_MAX + 1) * sizeof(struct usb_cdc_ncm_dpe16))
-struct connection_speed_change {
- __le32 USBitRate; /* holds 3GPP downlink value, bits per second */
- __le32 DSBitRate; /* holds 3GPP uplink value, bits per second */
-} __attribute__ ((packed));
-
struct cdc_ncm_data {
struct usb_cdc_ncm_nth16 nth16;
struct usb_cdc_ncm_ndp16 ndp16;
{
struct usb_cdc_notification req;
u32 val;
- __le16 max_datagram_size;
u8 flags;
u8 iface_no;
int err;
+ u16 ntb_fmt_supported;
iface_no = ctx->control->cur_altsetting->desc.bInterfaceNumber;
ctx->tx_remainder = le16_to_cpu(ctx->ncm_parm.wNdpOutPayloadRemainder);
ctx->tx_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutDivisor);
ctx->tx_ndp_modulus = le16_to_cpu(ctx->ncm_parm.wNdpOutAlignment);
+ /* devices prior to NCM Errata shall set this field to zero */
+ ctx->tx_max_datagrams = le16_to_cpu(ctx->ncm_parm.wNtbOutMaxDatagrams);
+ ntb_fmt_supported = le16_to_cpu(ctx->ncm_parm.bmNtbFormatsSupported);
if (ctx->func_desc != NULL)
flags = ctx->func_desc->bmNetworkCapabilities;
pr_debug("dwNtbInMaxSize=%u dwNtbOutMaxSize=%u "
"wNdpOutPayloadRemainder=%u wNdpOutDivisor=%u "
- "wNdpOutAlignment=%u flags=0x%x\n",
+ "wNdpOutAlignment=%u wNtbOutMaxDatagrams=%u flags=0x%x\n",
ctx->rx_max, ctx->tx_max, ctx->tx_remainder, ctx->tx_modulus,
- ctx->tx_ndp_modulus, flags);
+ ctx->tx_ndp_modulus, ctx->tx_max_datagrams, flags);
- /* max count of tx datagrams without terminating NULL entry */
- ctx->tx_max_datagrams = CDC_NCM_DPT_DATAGRAMS_MAX;
+ /* max count of tx datagrams */
+ if ((ctx->tx_max_datagrams == 0) ||
+ (ctx->tx_max_datagrams > CDC_NCM_DPT_DATAGRAMS_MAX))
+ ctx->tx_max_datagrams = CDC_NCM_DPT_DATAGRAMS_MAX;
/* verify maximum size of received NTB in bytes */
- if ((ctx->rx_max <
- (CDC_NCM_MIN_HDR_SIZE + CDC_NCM_MIN_DATAGRAM_SIZE)) ||
- (ctx->rx_max > CDC_NCM_NTB_MAX_SIZE_RX)) {
+ if (ctx->rx_max < USB_CDC_NCM_NTB_MIN_IN_SIZE) {
+ pr_debug("Using min receive length=%d\n",
+ USB_CDC_NCM_NTB_MIN_IN_SIZE);
+ ctx->rx_max = USB_CDC_NCM_NTB_MIN_IN_SIZE;
+ }
+
+ if (ctx->rx_max > CDC_NCM_NTB_MAX_SIZE_RX) {
pr_debug("Using default maximum receive length=%d\n",
CDC_NCM_NTB_MAX_SIZE_RX);
ctx->rx_max = CDC_NCM_NTB_MAX_SIZE_RX;
}
+ /* inform device about NTB input size changes */
+ if (ctx->rx_max != le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize)) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_NTB_INPUT_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+
+ if (flags & USB_CDC_NCM_NCAP_NTB_INPUT_SIZE) {
+ struct usb_cdc_ncm_ndp_input_size ndp_in_sz;
+
+ req.wLength = 8;
+ ndp_in_sz.dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+ ndp_in_sz.wNtbInMaxDatagrams =
+ cpu_to_le16(CDC_NCM_DPT_DATAGRAMS_MAX);
+ ndp_in_sz.wReserved = 0;
+ err = cdc_ncm_do_request(ctx, &req, &ndp_in_sz, 0, NULL,
+ 1000);
+ } else {
+ __le32 dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+
+ req.wLength = 4;
+ err = cdc_ncm_do_request(ctx, &req, &dwNtbInMaxSize, 0,
+ NULL, 1000);
+ }
+
+ if (err)
+ pr_debug("Setting NTB Input Size failed\n");
+ }
+
/* verify maximum size of transmitted NTB in bytes */
if ((ctx->tx_max <
(CDC_NCM_MIN_HDR_SIZE + CDC_NCM_MIN_DATAGRAM_SIZE)) ||
/* additional configuration */
/* set CRC Mode */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_SET_CRC_MODE;
- req.wValue = cpu_to_le16(USB_CDC_NCM_CRC_NOT_APPENDED);
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = 0;
-
- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
- if (err)
- pr_debug("Setting CRC mode off failed\n");
+ if (flags & USB_CDC_NCM_NCAP_CRC_MODE) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_CRC_MODE;
+ req.wValue = cpu_to_le16(USB_CDC_NCM_CRC_NOT_APPENDED);
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 0;
+
+ err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+ if (err)
+ pr_debug("Setting CRC mode off failed\n");
+ }
- /* set NTB format */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_SET_NTB_FORMAT;
- req.wValue = cpu_to_le16(USB_CDC_NCM_NTB16_FORMAT);
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = 0;
+ /* set NTB format, if both formats are supported */
+ if (ntb_fmt_supported & USB_CDC_NCM_NTH32_SIGN) {
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_NTB_FORMAT;
+ req.wValue = cpu_to_le16(USB_CDC_NCM_NTB16_FORMAT);
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 0;
+
+ err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+ if (err)
+ pr_debug("Setting NTB format to 16-bit failed\n");
+ }
- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
- if (err)
- pr_debug("Setting NTB format to 16-bit failed\n");
+ ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
/* set Max Datagram Size (MTU) */
- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE;
- req.bNotificationType = USB_CDC_GET_MAX_DATAGRAM_SIZE;
- req.wValue = 0;
- req.wIndex = cpu_to_le16(iface_no);
- req.wLength = cpu_to_le16(2);
+ if (flags & USB_CDC_NCM_NCAP_MAX_DATAGRAM_SIZE) {
+ __le16 max_datagram_size;
+ u16 eth_max_sz = le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
+
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_GET_MAX_DATAGRAM_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = cpu_to_le16(2);
+
+ err = cdc_ncm_do_request(ctx, &req, &max_datagram_size, 0, NULL,
+ 1000);
+ if (err) {
+ pr_debug("GET_MAX_DATAGRAM_SIZE failed, use size=%u\n",
+ CDC_NCM_MIN_DATAGRAM_SIZE);
+ } else {
+ ctx->max_datagram_size = le16_to_cpu(max_datagram_size);
+ /* Check Eth descriptor value */
+ if (eth_max_sz < CDC_NCM_MAX_DATAGRAM_SIZE) {
+ if (ctx->max_datagram_size > eth_max_sz)
+ ctx->max_datagram_size = eth_max_sz;
+ } else {
+ if (ctx->max_datagram_size >
+ CDC_NCM_MAX_DATAGRAM_SIZE)
+ ctx->max_datagram_size =
+ CDC_NCM_MAX_DATAGRAM_SIZE;
+ }
- err = cdc_ncm_do_request(ctx, &req, &max_datagram_size, 0, NULL, 1000);
- if (err) {
- pr_debug(" GET_MAX_DATAGRAM_SIZE failed, using size=%u\n",
- CDC_NCM_MIN_DATAGRAM_SIZE);
- /* use default */
- ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
- } else {
- ctx->max_datagram_size = le16_to_cpu(max_datagram_size);
+ if (ctx->max_datagram_size < CDC_NCM_MIN_DATAGRAM_SIZE)
+ ctx->max_datagram_size =
+ CDC_NCM_MIN_DATAGRAM_SIZE;
+
+ /* if value changed, update device */
+ req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+ USB_RECIP_INTERFACE;
+ req.bNotificationType = USB_CDC_SET_MAX_DATAGRAM_SIZE;
+ req.wValue = 0;
+ req.wIndex = cpu_to_le16(iface_no);
+ req.wLength = 2;
+ max_datagram_size = cpu_to_le16(ctx->max_datagram_size);
+
+ err = cdc_ncm_do_request(ctx, &req, &max_datagram_size,
+ 0, NULL, 1000);
+ if (err)
+ pr_debug("SET_MAX_DATAGRAM_SIZE failed\n");
+ }
- if (ctx->max_datagram_size < CDC_NCM_MIN_DATAGRAM_SIZE)
- ctx->max_datagram_size = CDC_NCM_MIN_DATAGRAM_SIZE;
- else if (ctx->max_datagram_size > CDC_NCM_MAX_DATAGRAM_SIZE)
- ctx->max_datagram_size = CDC_NCM_MAX_DATAGRAM_SIZE;
}
if (ctx->netdev->mtu != (ctx->max_datagram_size - ETH_HLEN))
ctx->ether_desc =
(const struct usb_cdc_ether_desc *)buf;
-
dev->hard_mtu =
le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
- if (dev->hard_mtu <
- (CDC_NCM_MIN_DATAGRAM_SIZE - ETH_HLEN))
- dev->hard_mtu =
- CDC_NCM_MIN_DATAGRAM_SIZE - ETH_HLEN;
-
- else if (dev->hard_mtu >
- (CDC_NCM_MAX_DATAGRAM_SIZE - ETH_HLEN))
- dev->hard_mtu =
- CDC_NCM_MAX_DATAGRAM_SIZE - ETH_HLEN;
+ if (dev->hard_mtu < CDC_NCM_MIN_DATAGRAM_SIZE)
+ dev->hard_mtu = CDC_NCM_MIN_DATAGRAM_SIZE;
+ else if (dev->hard_mtu > CDC_NCM_MAX_DATAGRAM_SIZE)
+ dev->hard_mtu = CDC_NCM_MAX_DATAGRAM_SIZE;
break;
case USB_CDC_NCM_TYPE:
u32 offset;
u32 last_offset;
u16 n = 0;
- u8 timeout = 0;
+ u8 ready2send = 0;
/* if there is a remaining skb, it gets priority */
if (skb != NULL)
swap(skb, ctx->tx_rem_skb);
else
- timeout = 1;
+ ready2send = 1;
/*
* +----------------+
for (; n < ctx->tx_max_datagrams; n++) {
/* check if end of transmit buffer is reached */
- if (offset >= ctx->tx_max)
+ if (offset >= ctx->tx_max) {
+ ready2send = 1;
break;
-
+ }
/* compute maximum buffer size */
rem = ctx->tx_max - offset;
}
ctx->tx_rem_skb = skb;
skb = NULL;
-
- /* loop one more time */
- timeout = 1;
+ ready2send = 1;
}
break;
}
ctx->tx_curr_last_offset = last_offset;
goto exit_no_skb;
- } else if ((n < ctx->tx_max_datagrams) && (timeout == 0)) {
+ } else if ((n < ctx->tx_max_datagrams) && (ready2send == 0)) {
/* wait for more frames */
/* push variables */
ctx->tx_curr_skb = skb_out;
cpu_to_le16(sizeof(ctx->tx_ncm.nth16));
ctx->tx_ncm.nth16.wSequence = cpu_to_le16(ctx->tx_seq);
ctx->tx_ncm.nth16.wBlockLength = cpu_to_le16(last_offset);
- ctx->tx_ncm.nth16.wFpIndex = ALIGN(sizeof(struct usb_cdc_ncm_nth16),
+ ctx->tx_ncm.nth16.wNdpIndex = ALIGN(sizeof(struct usb_cdc_ncm_nth16),
ctx->tx_ndp_modulus);
memcpy(skb_out->data, &(ctx->tx_ncm.nth16), sizeof(ctx->tx_ncm.nth16));
rem = sizeof(ctx->tx_ncm.ndp16) + ((ctx->tx_curr_frame_num + 1) *
sizeof(struct usb_cdc_ncm_dpe16));
ctx->tx_ncm.ndp16.wLength = cpu_to_le16(rem);
- ctx->tx_ncm.ndp16.wNextFpIndex = 0; /* reserved */
+ ctx->tx_ncm.ndp16.wNextNdpIndex = 0; /* reserved */
- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wFpIndex,
+ memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex,
&(ctx->tx_ncm.ndp16),
sizeof(ctx->tx_ncm.ndp16));
- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wFpIndex +
+ memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex +
sizeof(ctx->tx_ncm.ndp16),
&(ctx->tx_ncm.dpe16),
(ctx->tx_curr_frame_num + 1) *
if (ctx->tx_timer_pending != 0) {
ctx->tx_timer_pending--;
restart = 1;
- } else
+ } else {
restart = 0;
+ }
spin_unlock(&ctx->mtx);
- if (restart)
+ if (restart) {
+ spin_lock(&ctx->mtx);
cdc_ncm_tx_timeout_start(ctx);
- else if (ctx->netdev != NULL)
+ spin_unlock(&ctx->mtx);
+ } else if (ctx->netdev != NULL) {
usbnet_start_xmit(NULL, ctx->netdev);
+ }
}
static struct sk_buff *
skb_out = cdc_ncm_fill_tx_frame(ctx, skb);
if (ctx->tx_curr_skb != NULL)
need_timer = 1;
- spin_unlock(&ctx->mtx);
/* Start timer, if there is a remaining skb */
if (need_timer)
if (skb_out)
dev->net->stats.tx_packets += ctx->tx_curr_frame_num;
+
+ spin_unlock(&ctx->mtx);
return skb_out;
error:
goto error;
}
- temp = le16_to_cpu(ctx->rx_ncm.nth16.wFpIndex);
+ temp = le16_to_cpu(ctx->rx_ncm.nth16.wNdpIndex);
if ((temp + sizeof(ctx->rx_ncm.ndp16)) > actlen) {
pr_debug("invalid DPT16 index\n");
goto error;
if (((offset + temp) > actlen) ||
(temp > CDC_NCM_MAX_DATAGRAM_SIZE) || (temp < ETH_HLEN)) {
pr_debug("invalid frame detected (ignored)"
- "offset[%u]=%u, length=%u, skb=%p\n",
- x, offset, temp, skb_in);
+ "offset[%u]=%u, length=%u, skb=%p\n",
+ x, offset, temp, skb_in);
if (!x)
goto error;
break;
static void
cdc_ncm_speed_change(struct cdc_ncm_ctx *ctx,
- struct connection_speed_change *data)
+ struct usb_cdc_speed_change *data)
{
- uint32_t rx_speed = le32_to_cpu(data->USBitRate);
- uint32_t tx_speed = le32_to_cpu(data->DSBitRate);
+ uint32_t rx_speed = le32_to_cpu(data->DLBitRRate);
+ uint32_t tx_speed = le32_to_cpu(data->ULBitRate);
/*
* Currently the USB-NET API does not support reporting the actual
/* test for split data in 8-byte chunks */
if (test_and_clear_bit(EVENT_STS_SPLIT, &dev->flags)) {
cdc_ncm_speed_change(ctx,
- (struct connection_speed_change *)urb->transfer_buffer);
+ (struct usb_cdc_speed_change *)urb->transfer_buffer);
return;
}
break;
case USB_CDC_NOTIFY_SPEED_CHANGE:
- if (urb->actual_length <
- (sizeof(*event) + sizeof(struct connection_speed_change)))
+ if (urb->actual_length < (sizeof(*event) +
+ sizeof(struct usb_cdc_speed_change)))
set_bit(EVENT_STS_SPLIT, &dev->flags);
else
cdc_ncm_speed_change(ctx,
- (struct connection_speed_change *) &event[1]);
+ (struct usb_cdc_speed_change *) &event[1]);
break;
default:
static void hso_free_tiomget(struct hso_serial *serial)
{
- struct hso_tiocmget *tiocmget = serial->tiocmget;
+ struct hso_tiocmget *tiocmget;
+ if (!serial)
+ return;
+ tiocmget = serial->tiocmget;
if (tiocmget) {
- if (tiocmget->urb) {
- usb_free_urb(tiocmget->urb);
- tiocmget->urb = NULL;
- }
+ usb_free_urb(tiocmget->urb);
+ tiocmget->urb = NULL;
serial->tiocmget = NULL;
kfree(tiocmget);
-
}
}
if (fw->size > KAWETH_FIRMWARE_BUF_SIZE) {
err("Firmware too big: %zu", fw->size);
+ release_firmware(fw);
return -ENOSPC;
}
data_len = fw->size;
if (urb != NULL) {
clear_bit (EVENT_RX_MEMORY, &dev->flags);
status = usb_autopm_get_interface(dev->intf);
- if (status < 0)
+ if (status < 0) {
+ usb_free_urb(urb);
goto fail_lowmem;
+ }
if (rx_submit (dev, urb, GFP_KERNEL) == -ENOLINK)
resched = 0;
usb_autopm_put_interface(dev->intf);
}
}
+static void virtnet_napi_enable(struct virtnet_info *vi)
+{
+ napi_enable(&vi->napi);
+
+	/* If all buffers were filled by the other side before we enabled NAPI,
+	 * we won't get another interrupt, so process any outstanding packets
+	 * now. virtnet_poll wants to re-enable the queue, so we disable it here.
+	 * We synchronize against interrupts via NAPI_STATE_SCHED. */
+ if (napi_schedule_prep(&vi->napi)) {
+ virtqueue_disable_cb(vi->rvq);
+ __napi_schedule(&vi->napi);
+ }
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
vi = container_of(work, struct virtnet_info, refill.work);
napi_disable(&vi->napi);
still_empty = !try_fill_recv(vi, GFP_KERNEL);
- napi_enable(&vi->napi);
+ virtnet_napi_enable(vi);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again. */
{
struct virtnet_info *vi = netdev_priv(dev);
- napi_enable(&vi->napi);
-
- /* If all buffers were filled by other side before we napi_enabled, we
- * won't get another interrupt, so process any outstanding packets
- * now. virtnet_poll wants re-enable the queue, so we disable here.
- * We synchronize against interrupts via NAPI_STATE_SCHED */
- if (napi_schedule_prep(&vi->napi)) {
- virtqueue_disable_cb(vi->rvq);
- __napi_schedule(&vi->napi);
- }
+ virtnet_napi_enable(vi);
return 0;
}
static int enable_mq = 1;
static int irq_share_mode;
+static void
+vmxnet3_write_mac_addr(struct vmxnet3_adapter *adapter, u8 *mac);
+
/*
* Enable/Disable the given intr
*/
{
u32 ret;
int i;
+ unsigned long flags;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_LINK);
ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+
adapter->link_speed = ret >> 16;
if (ret & 1) { /* Link is up. */
printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n",
/* Check if there is an error on xmit/recv queues */
if (events & (VMXNET3_ECR_TQERR | VMXNET3_ECR_RQERR)) {
+ spin_lock(&adapter->cmd_lock);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_GET_QUEUE_STATUS);
+ spin_unlock(&adapter->cmd_lock);
for (i = 0; i < adapter->num_tx_queues; i++)
if (adapter->tqd_start[i].status.stopped)
skb_transport_header(skb))->doff * 4;
ctx->copy_size = ctx->eth_ip_hdr_size + ctx->l4_hdr_size;
} else {
- unsigned int pull_size;
-
if (skb->ip_summed == CHECKSUM_PARTIAL) {
ctx->eth_ip_hdr_size = skb_checksum_start_offset(skb);
if (ctx->ipv4) {
struct iphdr *iph = (struct iphdr *)
skb_network_header(skb);
- if (iph->protocol == IPPROTO_TCP) {
- pull_size = ctx->eth_ip_hdr_size +
- sizeof(struct tcphdr);
-
- if (unlikely(!pskb_may_pull(skb,
- pull_size))) {
- goto err;
- }
+ if (iph->protocol == IPPROTO_TCP)
ctx->l4_hdr_size = ((struct tcphdr *)
skb_transport_header(skb))->doff * 4;
- } else if (iph->protocol == IPPROTO_UDP) {
+ else if (iph->protocol == IPPROTO_UDP)
+				/*
+				 * Use the TCP header size so that the number
+				 * of bytes copied is at least as large as the
+				 * device requires.
+				 */
ctx->l4_hdr_size =
- sizeof(struct udphdr);
- } else {
+ sizeof(struct tcphdr);
+ else
ctx->l4_hdr_size = 0;
- }
} else {
/* for simplicity, don't copy L4 headers */
ctx->l4_hdr_size = 0;
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
struct Vmxnet3_DriverShared *shared = adapter->shared;
u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;
+ unsigned long flags;
if (grp) {
/* add vlan rx stripping. */
if (adapter->netdev->features & NETIF_F_HW_VLAN_RX) {
int i;
- struct Vmxnet3_DSDevRead *devRead = &shared->devRead;
adapter->vlan_grp = grp;
- /* update FEATURES to device */
- devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
- VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
- VMXNET3_CMD_UPDATE_FEATURE);
/*
* Clear entire vfTable; then enable untagged pkts.
* Note: setting one entry in vfTable to non-zero turns
vfTable[i] = 0;
VMXNET3_SET_VFTABLE_ENTRY(vfTable, 0);
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
} else {
printk(KERN_ERR "%s: vlan_rx_register when device has "
"no NETIF_F_HW_VLAN_RX\n", netdev->name);
*/
vfTable[i] = 0;
}
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_VLAN_FILTERS);
-
- /* update FEATURES to device */
- devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
- VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
- VMXNET3_CMD_UPDATE_FEATURE);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
}
}
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;
+ unsigned long flags;
VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid);
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable;
+ unsigned long flags;
VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vid);
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
vmxnet3_set_mc(struct net_device *netdev)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+ unsigned long flags;
struct Vmxnet3_RxFilterConf *rxConf =
&adapter->shared->devRead.rxFilterConf;
u8 *new_table = NULL;
rxConf->mfTablePA = 0;
}
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
if (new_mode != rxConf->rxMode) {
rxConf->rxMode = cpu_to_le32(new_mode);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_MAC_FILTERS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
kfree(new_table);
}
devRead->misc.uptFeatures |= UPT1_F_LRO;
devRead->misc.maxNumRxSG = cpu_to_le16(1 + MAX_SKB_FRAGS);
}
- if ((adapter->netdev->features & NETIF_F_HW_VLAN_RX) &&
- adapter->vlan_grp) {
+ if (adapter->netdev->features & NETIF_F_HW_VLAN_RX)
devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
- }
devRead->misc.mtu = cpu_to_le32(adapter->netdev->mtu);
devRead->misc.queueDescPA = cpu_to_le64(adapter->queue_desc_pa);
/* rx filter settings */
devRead->rxFilterConf.rxMode = 0;
vmxnet3_restore_vlan(adapter);
+ vmxnet3_write_mac_addr(adapter, adapter->netdev->dev_addr);
+
/* the rest are already zeroed */
}
{
int err, i;
u32 ret;
+ unsigned long flags;
dev_dbg(&adapter->netdev->dev, "%s: skb_buf_size %d, rx_buf_per_pkt %d,"
" ring sizes %u %u %u\n", adapter->netdev->name,
adapter->shared_pa));
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAH, VMXNET3_GET_ADDR_HI(
adapter->shared_pa));
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_ACTIVATE_DEV);
ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
if (ret != 0) {
printk(KERN_ERR "Failed to activate dev %s: error %u\n",
void
vmxnet3_reset_dev(struct vmxnet3_adapter *adapter)
{
+ unsigned long flags;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_RESET_DEV);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter)
{
int i;
+ unsigned long flags;
if (test_and_set_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state))
return 0;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_QUIESCE_DEV);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
vmxnet3_disable_all_intrs(adapter);
for (i = 0; i < adapter->num_rx_queues; i++)
sz = adapter->rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN;
ring0_size = adapter->rx_queue[0].rx_ring[0].size;
ring0_size = (ring0_size + sz - 1) / sz * sz;
- ring0_size = min_t(u32, rq->rx_ring[0].size, VMXNET3_RX_RING_MAX_SIZE /
+ ring0_size = min_t(u32, ring0_size, VMXNET3_RX_RING_MAX_SIZE /
sz * sz);
ring1_size = adapter->rx_queue[0].rx_ring[1].size;
comp_size = ring0_size + ring1_size;
break;
} else {
/* If fails to enable required number of MSI-x vectors
- * try enabling 3 of them. One each for rx, tx and event
+	 * try enabling the minimum number of vectors required.
*/
vectors = vector_threshold;
printk(KERN_ERR "Failed to enable %d MSI-X for %s, try"
u32 cfg;
/* intr settings */
+ spin_lock(&adapter->cmd_lock);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_GET_CONF_INTR);
cfg = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+ spin_unlock(&adapter->cmd_lock);
adapter->intr.type = cfg & 0x3;
adapter->intr.mask_mode = (cfg >> 2) & 0x3;
*/
if (err == VMXNET3_LINUX_MIN_MSIX_VECT) {
if (adapter->share_intr != VMXNET3_INTR_BUDDYSHARE
- || adapter->num_rx_queues != 2) {
+ || adapter->num_rx_queues != 1) {
adapter->share_intr = VMXNET3_INTR_TXSHARE;
printk(KERN_ERR "Number of rx queues : 1\n");
adapter->num_rx_queues = 1;
adapter->netdev = netdev;
adapter->pdev = pdev;
+ spin_lock_init(&adapter->cmd_lock);
adapter->shared = pci_alloc_consistent(adapter->pdev,
sizeof(struct Vmxnet3_DriverShared),
&adapter->shared_pa);
u8 *arpreq;
struct in_device *in_dev;
struct in_ifaddr *ifa;
+ unsigned long flags;
int i = 0;
if (!netif_running(netdev))
return 0;
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ napi_disable(&adapter->rx_queue[i].napi);
+
vmxnet3_disable_all_intrs(adapter);
vmxnet3_free_irqs(adapter);
vmxnet3_free_intr_resources(adapter);
adapter->shared->devRead.pmConfDesc.confPA = cpu_to_le64(virt_to_phys(
pmConf));
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_PMCFG);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
pci_save_state(pdev);
pci_enable_wake(pdev, pci_choose_state(pdev, PMSG_SUSPEND),
static int
vmxnet3_resume(struct device *device)
{
- int err;
+ int err, i = 0;
+ unsigned long flags;
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *netdev = pci_get_drvdata(pdev);
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
pci_enable_wake(pdev, PCI_D0, 0);
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_PMCFG);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
vmxnet3_alloc_intr_resources(adapter);
vmxnet3_request_irqs(adapter);
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ napi_enable(&adapter->rx_queue[i].napi);
vmxnet3_enable_all_intrs(adapter);
return 0;
vmxnet3_set_rx_csum(struct net_device *netdev, u32 val)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+ unsigned long flags;
if (adapter->rxcsum != val) {
adapter->rxcsum = val;
adapter->shared->devRead.misc.uptFeatures &=
~UPT1_F_RXCSUM;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_FEATURE);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
}
return 0;
static const struct vmxnet3_stat_desc
vmxnet3_tq_dev_stats[] = {
/* description, offset */
- { "TSO pkts tx", offsetof(struct UPT1_TxStats, TSOPktsTxOK) },
- { "TSO bytes tx", offsetof(struct UPT1_TxStats, TSOBytesTxOK) },
- { "ucast pkts tx", offsetof(struct UPT1_TxStats, ucastPktsTxOK) },
- { "ucast bytes tx", offsetof(struct UPT1_TxStats, ucastBytesTxOK) },
- { "mcast pkts tx", offsetof(struct UPT1_TxStats, mcastPktsTxOK) },
- { "mcast bytes tx", offsetof(struct UPT1_TxStats, mcastBytesTxOK) },
- { "bcast pkts tx", offsetof(struct UPT1_TxStats, bcastPktsTxOK) },
- { "bcast bytes tx", offsetof(struct UPT1_TxStats, bcastBytesTxOK) },
- { "pkts tx err", offsetof(struct UPT1_TxStats, pktsTxError) },
- { "pkts tx discard", offsetof(struct UPT1_TxStats, pktsTxDiscard) },
+ { "Tx Queue#", 0 },
+ { " TSO pkts tx", offsetof(struct UPT1_TxStats, TSOPktsTxOK) },
+ { " TSO bytes tx", offsetof(struct UPT1_TxStats, TSOBytesTxOK) },
+ { " ucast pkts tx", offsetof(struct UPT1_TxStats, ucastPktsTxOK) },
+ { " ucast bytes tx", offsetof(struct UPT1_TxStats, ucastBytesTxOK) },
+ { " mcast pkts tx", offsetof(struct UPT1_TxStats, mcastPktsTxOK) },
+ { " mcast bytes tx", offsetof(struct UPT1_TxStats, mcastBytesTxOK) },
+ { " bcast pkts tx", offsetof(struct UPT1_TxStats, bcastPktsTxOK) },
+ { " bcast bytes tx", offsetof(struct UPT1_TxStats, bcastBytesTxOK) },
+ { " pkts tx err", offsetof(struct UPT1_TxStats, pktsTxError) },
+ { " pkts tx discard", offsetof(struct UPT1_TxStats, pktsTxDiscard) },
};
/* per tq stats maintained by the driver */
static const struct vmxnet3_stat_desc
vmxnet3_tq_driver_stats[] = {
/* description, offset */
- {"drv dropped tx total", offsetof(struct vmxnet3_tq_driver_stats,
- drop_total) },
- { " too many frags", offsetof(struct vmxnet3_tq_driver_stats,
- drop_too_many_frags) },
- { " giant hdr", offsetof(struct vmxnet3_tq_driver_stats,
- drop_oversized_hdr) },
- { " hdr err", offsetof(struct vmxnet3_tq_driver_stats,
- drop_hdr_inspect_err) },
- { " tso", offsetof(struct vmxnet3_tq_driver_stats,
- drop_tso) },
- { "ring full", offsetof(struct vmxnet3_tq_driver_stats,
- tx_ring_full) },
- { "pkts linearized", offsetof(struct vmxnet3_tq_driver_stats,
- linearized) },
- { "hdr cloned", offsetof(struct vmxnet3_tq_driver_stats,
- copy_skb_header) },
- { "giant hdr", offsetof(struct vmxnet3_tq_driver_stats,
- oversized_hdr) },
+	{ " drv dropped tx total", offsetof(struct vmxnet3_tq_driver_stats,
+ drop_total) },
+ { " too many frags", offsetof(struct vmxnet3_tq_driver_stats,
+ drop_too_many_frags) },
+ { " giant hdr", offsetof(struct vmxnet3_tq_driver_stats,
+ drop_oversized_hdr) },
+ { " hdr err", offsetof(struct vmxnet3_tq_driver_stats,
+ drop_hdr_inspect_err) },
+ { " tso", offsetof(struct vmxnet3_tq_driver_stats,
+ drop_tso) },
+ { " ring full", offsetof(struct vmxnet3_tq_driver_stats,
+ tx_ring_full) },
+ { " pkts linearized", offsetof(struct vmxnet3_tq_driver_stats,
+ linearized) },
+ { " hdr cloned", offsetof(struct vmxnet3_tq_driver_stats,
+ copy_skb_header) },
+ { " giant hdr", offsetof(struct vmxnet3_tq_driver_stats,
+ oversized_hdr) },
};
/* per rq stats maintained by the device */
static const struct vmxnet3_stat_desc
vmxnet3_rq_dev_stats[] = {
- { "LRO pkts rx", offsetof(struct UPT1_RxStats, LROPktsRxOK) },
- { "LRO byte rx", offsetof(struct UPT1_RxStats, LROBytesRxOK) },
- { "ucast pkts rx", offsetof(struct UPT1_RxStats, ucastPktsRxOK) },
- { "ucast bytes rx", offsetof(struct UPT1_RxStats, ucastBytesRxOK) },
- { "mcast pkts rx", offsetof(struct UPT1_RxStats, mcastPktsRxOK) },
- { "mcast bytes rx", offsetof(struct UPT1_RxStats, mcastBytesRxOK) },
- { "bcast pkts rx", offsetof(struct UPT1_RxStats, bcastPktsRxOK) },
- { "bcast bytes rx", offsetof(struct UPT1_RxStats, bcastBytesRxOK) },
- { "pkts rx out of buf", offsetof(struct UPT1_RxStats, pktsRxOutOfBuf) },
- { "pkts rx err", offsetof(struct UPT1_RxStats, pktsRxError) },
+ { "Rx Queue#", 0 },
+ { " LRO pkts rx", offsetof(struct UPT1_RxStats, LROPktsRxOK) },
+ { " LRO byte rx", offsetof(struct UPT1_RxStats, LROBytesRxOK) },
+ { " ucast pkts rx", offsetof(struct UPT1_RxStats, ucastPktsRxOK) },
+ { " ucast bytes rx", offsetof(struct UPT1_RxStats, ucastBytesRxOK) },
+ { " mcast pkts rx", offsetof(struct UPT1_RxStats, mcastPktsRxOK) },
+ { " mcast bytes rx", offsetof(struct UPT1_RxStats, mcastBytesRxOK) },
+ { " bcast pkts rx", offsetof(struct UPT1_RxStats, bcastPktsRxOK) },
+ { " bcast bytes rx", offsetof(struct UPT1_RxStats, bcastBytesRxOK) },
+ { " pkts rx OOB", offsetof(struct UPT1_RxStats, pktsRxOutOfBuf) },
+ { " pkts rx err", offsetof(struct UPT1_RxStats, pktsRxError) },
};
/* per rq stats maintained by the driver */
static const struct vmxnet3_stat_desc
vmxnet3_rq_driver_stats[] = {
/* description, offset */
- { "drv dropped rx total", offsetof(struct vmxnet3_rq_driver_stats,
- drop_total) },
- { " err", offsetof(struct vmxnet3_rq_driver_stats,
- drop_err) },
- { " fcs", offsetof(struct vmxnet3_rq_driver_stats,
- drop_fcs) },
- { "rx buf alloc fail", offsetof(struct vmxnet3_rq_driver_stats,
- rx_buf_alloc_failure) },
+ { " drv dropped rx total", offsetof(struct vmxnet3_rq_driver_stats,
+ drop_total) },
+ { " err", offsetof(struct vmxnet3_rq_driver_stats,
+ drop_err) },
+ { " fcs", offsetof(struct vmxnet3_rq_driver_stats,
+ drop_fcs) },
+ { " rx buf alloc fail", offsetof(struct vmxnet3_rq_driver_stats,
+ rx_buf_alloc_failure) },
};
/* global stats maintained by the driver */
static const struct vmxnet3_stat_desc
vmxnet3_global_stats[] = {
/* description, offset */
- { "tx timeout count", offsetof(struct vmxnet3_adapter,
+ { "tx timeout count", offsetof(struct vmxnet3_adapter,
tx_timeout_count) }
};
struct UPT1_TxStats *devTxStats;
struct UPT1_RxStats *devRxStats;
struct net_device_stats *net_stats = &netdev->stats;
+ unsigned long flags;
int i;
adapter = netdev_priv(netdev);
/* Collect the dev stats into the shared area */
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
memset(net_stats, 0, sizeof(*net_stats));
for (i = 0; i < adapter->num_tx_queues; i++) {
static int
vmxnet3_get_sset_count(struct net_device *netdev, int sset)
{
+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
switch (sset) {
case ETH_SS_STATS:
- return ARRAY_SIZE(vmxnet3_tq_dev_stats) +
- ARRAY_SIZE(vmxnet3_tq_driver_stats) +
- ARRAY_SIZE(vmxnet3_rq_dev_stats) +
- ARRAY_SIZE(vmxnet3_rq_driver_stats) +
+ return (ARRAY_SIZE(vmxnet3_tq_dev_stats) +
+ ARRAY_SIZE(vmxnet3_tq_driver_stats)) *
+ adapter->num_tx_queues +
+ (ARRAY_SIZE(vmxnet3_rq_dev_stats) +
+ ARRAY_SIZE(vmxnet3_rq_driver_stats)) *
+ adapter->num_rx_queues +
ARRAY_SIZE(vmxnet3_global_stats);
default:
return -EOPNOTSUPP;
}
+/* Should be a multiple of 4 */
+#define NUM_TX_REGS 8
+#define NUM_RX_REGS 12
+
static int
vmxnet3_get_regs_len(struct net_device *netdev)
{
- return 20 * sizeof(u32);
+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+ return (adapter->num_tx_queues * NUM_TX_REGS * sizeof(u32) +
+ adapter->num_rx_queues * NUM_RX_REGS * sizeof(u32));
}
static void
vmxnet3_get_strings(struct net_device *netdev, u32 stringset, u8 *buf)
{
+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
if (stringset == ETH_SS_STATS) {
- int i;
-
- for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++) {
- memcpy(buf, vmxnet3_tq_dev_stats[i].desc,
- ETH_GSTRING_LEN);
- buf += ETH_GSTRING_LEN;
- }
- for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++) {
- memcpy(buf, vmxnet3_tq_driver_stats[i].desc,
- ETH_GSTRING_LEN);
- buf += ETH_GSTRING_LEN;
- }
- for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++) {
- memcpy(buf, vmxnet3_rq_dev_stats[i].desc,
- ETH_GSTRING_LEN);
- buf += ETH_GSTRING_LEN;
+ int i, j;
+ for (j = 0; j < adapter->num_tx_queues; j++) {
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++) {
+ memcpy(buf, vmxnet3_tq_dev_stats[i].desc,
+ ETH_GSTRING_LEN);
+ buf += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats);
+ i++) {
+ memcpy(buf, vmxnet3_tq_driver_stats[i].desc,
+ ETH_GSTRING_LEN);
+ buf += ETH_GSTRING_LEN;
+ }
}
- for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++) {
- memcpy(buf, vmxnet3_rq_driver_stats[i].desc,
- ETH_GSTRING_LEN);
- buf += ETH_GSTRING_LEN;
+
+ for (j = 0; j < adapter->num_rx_queues; j++) {
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++) {
+ memcpy(buf, vmxnet3_rq_dev_stats[i].desc,
+ ETH_GSTRING_LEN);
+ buf += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats);
+ i++) {
+ memcpy(buf, vmxnet3_rq_driver_stats[i].desc,
+ ETH_GSTRING_LEN);
+ buf += ETH_GSTRING_LEN;
+ }
}
+
for (i = 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++) {
memcpy(buf, vmxnet3_global_stats[i].desc,
ETH_GSTRING_LEN);
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
u8 lro_requested = (data & ETH_FLAG_LRO) == 0 ? 0 : 1;
u8 lro_present = (netdev->features & NETIF_F_LRO) == 0 ? 0 : 1;
+ unsigned long flags;
if (data & ~ETH_FLAG_LRO)
return -EOPNOTSUPP;
else
adapter->shared->devRead.misc.uptFeatures &=
~UPT1_F_LRO;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_FEATURE);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
}
return 0;
}
struct ethtool_stats *stats, u64 *buf)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+ unsigned long flags;
u8 *base;
int i;
int j = 0;
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
/* this does assume each counter is 64-bit wide */
-/* TODO change this for multiple queues */
-
- base = (u8 *)&adapter->tqd_start[j].stats;
- for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++)
- *buf++ = *(u64 *)(base + vmxnet3_tq_dev_stats[i].offset);
-
- base = (u8 *)&adapter->tx_queue[j].stats;
- for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++)
- *buf++ = *(u64 *)(base + vmxnet3_tq_driver_stats[i].offset);
-
- base = (u8 *)&adapter->rqd_start[j].stats;
- for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++)
- *buf++ = *(u64 *)(base + vmxnet3_rq_dev_stats[i].offset);
+ for (j = 0; j < adapter->num_tx_queues; j++) {
+ base = (u8 *)&adapter->tqd_start[j].stats;
+ *buf++ = (u64)j;
+ for (i = 1; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++)
+ *buf++ = *(u64 *)(base +
+ vmxnet3_tq_dev_stats[i].offset);
+
+ base = (u8 *)&adapter->tx_queue[j].stats;
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++)
+ *buf++ = *(u64 *)(base +
+ vmxnet3_tq_driver_stats[i].offset);
+ }
- base = (u8 *)&adapter->rx_queue[j].stats;
- for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++)
- *buf++ = *(u64 *)(base + vmxnet3_rq_driver_stats[i].offset);
+	for (j = 0; j < adapter->num_rx_queues; j++) {
+ base = (u8 *)&adapter->rqd_start[j].stats;
+ *buf++ = (u64) j;
+ for (i = 1; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++)
+ *buf++ = *(u64 *)(base +
+ vmxnet3_rq_dev_stats[i].offset);
+
+ base = (u8 *)&adapter->rx_queue[j].stats;
+ for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++)
+ *buf++ = *(u64 *)(base +
+ vmxnet3_rq_driver_stats[i].offset);
+ }
base = (u8 *)adapter;
for (i = 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
u32 *buf = p;
- int i = 0;
+ int i = 0, j = 0;
memset(p, 0, vmxnet3_get_regs_len(netdev));
/* Update vmxnet3_get_regs_len if we want to dump more registers */
/* make each ring use multiple of 16 bytes */
-/* TODO change this for multiple queues */
- buf[0] = adapter->tx_queue[i].tx_ring.next2fill;
- buf[1] = adapter->tx_queue[i].tx_ring.next2comp;
- buf[2] = adapter->tx_queue[i].tx_ring.gen;
- buf[3] = 0;
-
- buf[4] = adapter->tx_queue[i].comp_ring.next2proc;
- buf[5] = adapter->tx_queue[i].comp_ring.gen;
- buf[6] = adapter->tx_queue[i].stopped;
- buf[7] = 0;
-
- buf[8] = adapter->rx_queue[i].rx_ring[0].next2fill;
- buf[9] = adapter->rx_queue[i].rx_ring[0].next2comp;
- buf[10] = adapter->rx_queue[i].rx_ring[0].gen;
- buf[11] = 0;
-
- buf[12] = adapter->rx_queue[i].rx_ring[1].next2fill;
- buf[13] = adapter->rx_queue[i].rx_ring[1].next2comp;
- buf[14] = adapter->rx_queue[i].rx_ring[1].gen;
- buf[15] = 0;
-
- buf[16] = adapter->rx_queue[i].comp_ring.next2proc;
- buf[17] = adapter->rx_queue[i].comp_ring.gen;
- buf[18] = 0;
- buf[19] = 0;
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ buf[j++] = adapter->tx_queue[i].tx_ring.next2fill;
+ buf[j++] = adapter->tx_queue[i].tx_ring.next2comp;
+ buf[j++] = adapter->tx_queue[i].tx_ring.gen;
+ buf[j++] = 0;
+
+ buf[j++] = adapter->tx_queue[i].comp_ring.next2proc;
+ buf[j++] = adapter->tx_queue[i].comp_ring.gen;
+ buf[j++] = adapter->tx_queue[i].stopped;
+ buf[j++] = 0;
+ }
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ buf[j++] = adapter->rx_queue[i].rx_ring[0].next2fill;
+ buf[j++] = adapter->rx_queue[i].rx_ring[0].next2comp;
+ buf[j++] = adapter->rx_queue[i].rx_ring[0].gen;
+ buf[j++] = 0;
+
+ buf[j++] = adapter->rx_queue[i].rx_ring[1].next2fill;
+ buf[j++] = adapter->rx_queue[i].rx_ring[1].next2comp;
+ buf[j++] = adapter->rx_queue[i].rx_ring[1].gen;
+ buf[j++] = 0;
+
+ buf[j++] = adapter->rx_queue[i].comp_ring.next2proc;
+ buf[j++] = adapter->rx_queue[i].comp_ring.gen;
+ buf[j++] = 0;
+ buf[j++] = 0;
+ }
+
}
const struct ethtool_rxfh_indir *p)
{
unsigned int i;
+ unsigned long flags;
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
struct UPT1_RSSConf *rssConf = adapter->rss_conf;
for (i = 0; i < rssConf->indTableSize; i++)
rssConf->indTable[i] = p->ring_index[i];
+ spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
VMXNET3_CMD_UPDATE_RSSIDT);
+ spin_unlock_irqrestore(&adapter->cmd_lock, flags);
return 0;
/*
* Version numbers
*/
-#define VMXNET3_DRIVER_VERSION_STRING "1.0.16.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.0.25.0-k"
/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM 0x01001000
+#define VMXNET3_DRIVER_VERSION_NUM 0x01001900
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
#define VMXNET3_LINUX_MAX_MSIX_VECT (VMXNET3_DEVICE_MAX_TX_QUEUES + \
VMXNET3_DEVICE_MAX_RX_QUEUES + 1)
-#define VMXNET3_LINUX_MIN_MSIX_VECT 3 /* 1 for each : tx, rx and event */
+#define VMXNET3_LINUX_MIN_MSIX_VECT 2 /* 1 for tx-rx pair and 1 for event */
struct vmxnet3_intr {
struct vmxnet3_rx_queue rx_queue[VMXNET3_DEVICE_MAX_RX_QUEUES];
struct vlan_group *vlan_grp;
struct vmxnet3_intr intr;
+ spinlock_t cmd_lock;
struct Vmxnet3_DriverShared *shared;
struct Vmxnet3_PMConf *pm_conf;
struct Vmxnet3_TxQueueDesc *tqd_start; /* all tx queue desc */
if (status != VXGE_HW_OK)
goto exit;
- if ((rts_table != VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) ||
+ if ((rts_table != VXGE_HW_RTS_ACCESS_STEER_CTRL_DATA_STRUCT_SEL_DA) &&
(rts_table !=
VXGE_HW_RTS_ACS_STEER_CTRL_DATA_STRUCT_SEL_RTH_MULTI_IT))
*data1 = 0;
int i;
bool needreset = false;
+ mutex_lock(&sc->lock);
+
for (i = 0; i < ARRAY_SIZE(sc->txqs); i++) {
if (sc->txqs[i].setup) {
txq = &sc->txqs[i];
ath5k_reset(sc, NULL, true);
}
+ mutex_unlock(&sc->lock);
+
ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work,
msecs_to_jiffies(ATH5K_TX_COMPLETE_POLL_INT));
}
for (i = 0; i < qmax; i++) {
err = ath5k_hw_stop_tx_dma(ah, i);
/* -EINVAL -> queue inactive */
- if (err != -EINVAL)
+ if (err && err != -EINVAL)
return err;
}
- return err;
+ return 0;
}
if (!ah->ah_bwmode) {
dur = ieee80211_generic_frame_duration(sc->hw,
NULL, len, rate);
- return dur;
+ return le16_to_cpu(dur);
}
bitrate = rate->bitrate;
* what rate we should choose to TX ACKs. */
tx_time = ath5k_hw_get_frame_duration(ah, 10, rate);
- tx_time = le16_to_cpu(tx_time);
-
ath5k_hw_reg_write(ah, tx_time, reg);
if (!(rate->flags & IEEE80211_RATE_SHORT_PREAMBLE))
/* Do NF cal only at longer intervals */
if (longcal || nfcal_pending) {
- /* Do periodic PAOffset Cal */
- ar9002_hw_pa_cal(ah, false);
- ar9002_hw_olc_temp_compensation(ah);
-
/*
* Get the value from the previous NF cal and update
* history buffer.
ath9k_hw_loadnf(ah, ah->curchan);
}
- if (longcal)
+ if (longcal) {
ath9k_hw_start_nfcal(ah, false);
+ /* Do periodic PAOffset Cal */
+ ar9002_hw_pa_cal(ah, false);
+ ar9002_hw_olc_temp_compensation(ah);
+ }
}
return iscaldone;
}
/* WAR for ASPM system hang */
- if (AR_SREV_9280(ah) || AR_SREV_9285(ah) || AR_SREV_9287(ah)) {
+ if (AR_SREV_9285(ah) || AR_SREV_9287(ah))
val |= (AR_WA_BIT6 | AR_WA_BIT7);
- }
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
static const u32 ar9300PciePhy_pll_on_clkreq_disable_L1_2p2[][2] = {
/* Addr allmodes */
- {0x00004040, 0x08212e5e},
+ {0x00004040, 0x0821265e},
{0x00004040, 0x0008003b},
{0x00004044, 0x00000000},
};
/* Sleep Setting */
INIT_INI_ARRAY(&ah->iniPcieSerdesLowPower,
- ar9300PciePhy_clkreq_enable_L1_2p2,
- ARRAY_SIZE(ar9300PciePhy_clkreq_enable_L1_2p2),
+ ar9300PciePhy_pll_on_clkreq_disable_L1_2p2,
+ ARRAY_SIZE(ar9300PciePhy_pll_on_clkreq_disable_L1_2p2),
2);
/* Fast clock modal settings */
struct ath_buf_state {
u8 bf_type;
u8 bfs_paprd;
+ unsigned long bfs_paprd_timestamp;
enum ath9k_internal_frame_type bfs_ftype;
};
struct work_struct paprd_work;
struct work_struct hw_check_work;
struct completion paprd_complete;
- bool paprd_pending;
u32 intrstatus;
u32 sc_flags; /* SC_OP_* */
u8 node_idx;
u8 vif_idx;
u8 tidno;
- u32 flags; /* ATH9K_HTC_TX_* */
+ __be32 flags; /* ATH9K_HTC_TX_* */
u8 key_type;
u8 keyix;
u8 reserved[26];
{
ath9k_htc_exit_debug(priv->ah);
ath9k_hw_deinit(priv->ah);
- tasklet_kill(&priv->swba_tasklet);
- tasklet_kill(&priv->rx_tasklet);
- tasklet_kill(&priv->tx_tasklet);
kfree(priv->ah);
priv->ah = NULL;
}
int ret = 0;
u8 cmd_rsp;
- /* Cancel all the running timers/work .. */
- cancel_work_sync(&priv->fatal_work);
- cancel_work_sync(&priv->ps_work);
- cancel_delayed_work_sync(&priv->ath9k_led_blink_work);
- ath9k_led_stop_brightness(priv);
-
mutex_lock(&priv->mutex);
if (priv->op_flags & OP_INVALID) {
WMI_CMD(WMI_DISABLE_INTR_CMDID);
WMI_CMD(WMI_DRAIN_TXQ_ALL_CMDID);
WMI_CMD(WMI_STOP_RECV_CMDID);
+
+ tasklet_kill(&priv->swba_tasklet);
+ tasklet_kill(&priv->rx_tasklet);
+ tasklet_kill(&priv->tx_tasklet);
+
skb_queue_purge(&priv->tx_queue);
+ mutex_unlock(&priv->mutex);
+
+ /* Cancel all the running timers/work .. */
+ cancel_work_sync(&priv->fatal_work);
+ cancel_work_sync(&priv->ps_work);
+ cancel_delayed_work_sync(&priv->ath9k_led_blink_work);
+ ath9k_led_stop_brightness(priv);
+
+ mutex_lock(&priv->mutex);
+
/* Remove monitor interface here */
if (ah->opmode == NL80211_IFTYPE_MONITOR) {
if (ath9k_htc_remove_monitor_interface(priv))
if (ieee80211_is_data(fc)) {
struct tx_frame_hdr tx_hdr;
+ u32 flags = 0;
u8 *qc;
memset(&tx_hdr, 0, sizeof(struct tx_frame_hdr));
/* Check for RTS protection */
if (priv->hw->wiphy->rts_threshold != (u32) -1)
if (skb->len > priv->hw->wiphy->rts_threshold)
- tx_hdr.flags |= ATH9K_HTC_TX_RTSCTS;
+ flags |= ATH9K_HTC_TX_RTSCTS;
/* CTS-to-self */
- if (!(tx_hdr.flags & ATH9K_HTC_TX_RTSCTS) &&
+ if (!(flags & ATH9K_HTC_TX_RTSCTS) &&
(priv->op_flags & OP_PROTECT_ENABLE))
- tx_hdr.flags |= ATH9K_HTC_TX_CTSONLY;
+ flags |= ATH9K_HTC_TX_CTSONLY;
+ tx_hdr.flags = cpu_to_be32(flags);
tx_hdr.key_type = ath9k_cmn_get_hw_crypto_keytype(skb);
if (tx_hdr.key_type == ATH9K_KEY_TYPE_CLEAR)
tx_hdr.keyix = (u8) ATH9K_TXKEYIX_INVALID;
else
ah->config.ht_enable = 0;
+ /* PAPRD needs some more work to be enabled */
+ ah->config.paprd_disable = 1;
+
ah->config.rx_intr_mitigation = true;
ah->config.pcieSerDesWrite = true;
pCap->rx_status_len = sizeof(struct ar9003_rxs);
pCap->tx_desc_len = sizeof(struct ar9003_txc);
pCap->txs_len = sizeof(struct ar9003_txs);
- if (ah->eep_ops->get_eeprom(ah, EEP_PAPRD))
+ if (!ah->config.paprd_disable &&
+ ah->eep_ops->get_eeprom(ah, EEP_PAPRD))
pCap->hw_caps |= ATH9K_HW_CAP_PAPRD;
} else {
pCap->tx_desc_len = sizeof(struct ath_desc);
u32 pcie_waen;
u8 analog_shiftreg;
u8 ht_enable;
+ u8 paprd_disable;
u32 ofdm_trig_low;
u32 ofdm_trig_high;
u32 cck_trig_high;
err_queues:
ath9k_hw_deinit(ah);
err_hw:
- tasklet_kill(&sc->intr_tq);
- tasklet_kill(&sc->bcon_tasklet);
kfree(ah);
sc->sc_ah = NULL;
ath9k_hw_deinit(sc->sc_ah);
- tasklet_kill(&sc->intr_tq);
- tasklet_kill(&sc->bcon_tasklet);
-
kfree(sc->sc_ah);
sc->sc_ah = NULL;
}
wiphy_rfkill_stop_polling(sc->hw->wiphy);
ath_deinit_leds(sc);
+ ath9k_ps_restore(sc);
+
for (i = 0; i < sc->num_sec_wiphy; i++) {
struct ath_wiphy *aphy = sc->sec_wiphy[i];
if (aphy == NULL)
{
struct ieee80211_hw *hw = sc->hw;
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ath_hw *ah = sc->sc_ah;
+ struct ath_common *common = ath9k_hw_common(ah);
struct ath_tx_control txctl;
int time_left;
tx_info->control.rates[1].idx = -1;
init_completion(&sc->paprd_complete);
- sc->paprd_pending = true;
txctl.paprd = BIT(chain);
- if (ath_tx_start(hw, skb, &txctl) != 0)
+
+ if (ath_tx_start(hw, skb, &txctl) != 0) {
+ ath_dbg(common, ATH_DBG_XMIT, "PAPRD TX failed\n");
+ dev_kfree_skb_any(skb);
return false;
+ }
time_left = wait_for_completion_timeout(&sc->paprd_complete,
msecs_to_jiffies(ATH_PAPRD_TIMEOUT));
- sc->paprd_pending = false;
if (!time_left)
ath_dbg(ath9k_hw_common(sc->sc_ah), ATH_DBG_CALIBRATE,
u32 status = sc->intrstatus;
u32 rxmask;
- ath9k_ps_wakeup(sc);
-
if (status & ATH9K_INT_FATAL) {
ath_reset(sc, true);
- ath9k_ps_restore(sc);
return;
}
+ ath9k_ps_wakeup(sc);
spin_lock(&sc->sc_pcu_lock);
if (!ath9k_hw_check_alive(ah))
spin_unlock_bh(&sc->sc_pcu_lock);
ath9k_ps_restore(sc);
-
- ath9k_setpower(sc, ATH9K_PM_FULL_SLEEP);
}
int ath_reset(struct ath_softc *sc, bool retry_tx)
/* Stop ANI */
del_timer_sync(&common->ani.timer);
+ ath9k_ps_wakeup(sc);
spin_lock_bh(&sc->sc_pcu_lock);
ieee80211_stop_queues(hw);
/* Start ANI */
ath_start_ani(common);
+ ath9k_ps_restore(sc);
return r;
}
spin_lock_bh(&sc->sc_pcu_lock);
+	/* prevent tasklets from re-enabling interrupts once we disable them */
+ ah->imask &= ~ATH9K_INT_GLOBAL;
+
/* make sure h/w will not generate any interrupt
* before setting the invalid flag. */
ath9k_hw_disable_interrupts(ah);
spin_unlock_bh(&sc->sc_pcu_lock);
+ /* we can now sync irq and kill any running tasklets, since we already
+	 * disabled interrupts and are not holding a spin lock */
+ synchronize_irq(sc->irq);
+ tasklet_kill(&sc->intr_tq);
+ tasklet_kill(&sc->bcon_tasklet);
+
ath9k_ps_restore(sc);
sc->ps_idle = true;
skip_chan_change:
if (changed & IEEE80211_CONF_CHANGE_POWER) {
sc->config.txpowlimit = 2 * conf->power_level;
+ ath9k_ps_wakeup(sc);
ath_update_txpow(sc);
+ ath9k_ps_restore(sc);
}
spin_lock_bh(&sc->wiphy_lock);
ar9003_hw_set_paprd_txdesc(sc->sc_ah, bf->bf_desc,
bf->bf_state.bfs_paprd);
+ if (txctl->paprd)
+ bf->bf_state.bfs_paprd_timestamp = jiffies;
+
ath_tx_send_normal(sc, txctl->txq, tid, &bf_head);
}
bf->bf_buf_addr = 0;
if (bf->bf_state.bfs_paprd) {
- if (!sc->paprd_pending)
+ if (time_after(jiffies,
+ bf->bf_state.bfs_paprd_timestamp +
+ msecs_to_jiffies(ATH_PAPRD_TIMEOUT)))
dev_kfree_skb_any(skb);
else
complete(&sc->paprd_complete);
if (needreset) {
ath_dbg(ath9k_hw_common(sc->sc_ah), ATH_DBG_RESET,
"tx hung, resetting the chip\n");
- ath9k_ps_wakeup(sc);
ath_reset(sc, true);
- ath9k_ps_restore(sc);
}
ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work,
cam = ieee80211_check_tim(tim_ie, tim_len, ar->common.curaid);
/* 2. Maybe the AP wants to send multicast/broadcast data? */
- cam = !!(tim_ie->bitmap_ctrl & 0x01);
+ cam |= !!(tim_ie->bitmap_ctrl & 0x01);
if (!cam) {
/* back to low-power land. */
}
#endif
-/**
- * iwl3945_good_plcp_health - checks for plcp error.
- *
- * When the plcp error is exceeding the thresholds, reset the radio
- * to improve the throughput.
- */
-static bool iwl3945_good_plcp_health(struct iwl_priv *priv,
- struct iwl_rx_packet *pkt)
-{
- bool rc = true;
- struct iwl3945_notif_statistics current_stat;
- int combined_plcp_delta;
- unsigned int plcp_msec;
- unsigned long plcp_received_jiffies;
-
- if (priv->cfg->base_params->plcp_delta_threshold ==
- IWL_MAX_PLCP_ERR_THRESHOLD_DISABLE) {
- IWL_DEBUG_RADIO(priv, "plcp_err check disabled\n");
- return rc;
- }
-	memcpy(&current_stat, pkt->u.raw, sizeof(struct
- iwl3945_notif_statistics));
- /*
- * check for plcp_err and trigger radio reset if it exceeds
- * the plcp error threshold plcp_delta.
- */
- plcp_received_jiffies = jiffies;
- plcp_msec = jiffies_to_msecs((long) plcp_received_jiffies -
- (long) priv->plcp_jiffies);
- priv->plcp_jiffies = plcp_received_jiffies;
- /*
- * check to make sure plcp_msec is not 0 to prevent division
- * by zero.
- */
- if (plcp_msec) {
- combined_plcp_delta =
- (le32_to_cpu(current_stat.rx.ofdm.plcp_err) -
- le32_to_cpu(priv->_3945.statistics.rx.ofdm.plcp_err));
-
- if ((combined_plcp_delta > 0) &&
- ((combined_plcp_delta * 100) / plcp_msec) >
- priv->cfg->base_params->plcp_delta_threshold) {
- /*
- * if plcp_err exceed the threshold, the following
- * data is printed in csv format:
- * Text: plcp_err exceeded %d,
- * Received ofdm.plcp_err,
- * Current ofdm.plcp_err,
- * combined_plcp_delta,
- * plcp_msec
- */
- IWL_DEBUG_RADIO(priv, "plcp_err exceeded %u, "
- "%u, %d, %u mSecs\n",
- priv->cfg->base_params->plcp_delta_threshold,
- le32_to_cpu(current_stat.rx.ofdm.plcp_err),
- combined_plcp_delta, plcp_msec);
- /*
- * Reset the RF radio due to the high plcp
- * error rate
- */
- rc = false;
- }
- }
- return rc;
-}
-
void iwl3945_hw_rx_statistics(struct iwl_priv *priv,
struct iwl_rx_mem_buffer *rxb)
{
.isr_ops = {
.isr = iwl_isr_legacy,
},
- .check_plcp_health = iwl3945_good_plcp_health,
.debugfs_ops = {
.rx_stats_read = iwl3945_ucode_rx_stats_read,
.fw_name_pre = IWL4965_FW_PRE,
.ucode_api_max = IWL4965_UCODE_API_MAX,
.ucode_api_min = IWL4965_UCODE_API_MIN,
+ .sku = IWL_SKU_A|IWL_SKU_G|IWL_SKU_N,
.valid_tx_ant = ANT_AB,
.valid_rx_ant = ANT_ABC,
.eeprom_ver = EEPROM_4965_EEPROM_VERSION,
.fw_name_pre = IWL6050_FW_PRE, \
.ucode_api_max = IWL6050_UCODE_API_MAX, \
.ucode_api_min = IWL6050_UCODE_API_MIN, \
+ .valid_tx_ant = ANT_AB, /* .cfg overwrite */ \
+ .valid_rx_ant = ANT_AB, /* .cfg overwrite */ \
.ops = &iwl6050_ops, \
.eeprom_ver = EEPROM_6050_EEPROM_VERSION, \
.eeprom_calib_ver = EEPROM_6050_TX_POWER_VERSION, \
eeprom_sku = iwl_eeprom_query16(priv, EEPROM_SKU_CAP);
- priv->cfg->sku = ((eeprom_sku & EEPROM_SKU_CAP_BAND_SELECTION) >>
+ if (!priv->cfg->sku) {
+ /* not using sku overwrite */
+ priv->cfg->sku =
+ ((eeprom_sku & EEPROM_SKU_CAP_BAND_SELECTION) >>
EEPROM_SKU_CAP_BAND_POS);
- if (eeprom_sku & EEPROM_SKU_CAP_11N_ENABLE)
- priv->cfg->sku |= IWL_SKU_N;
-
+ if (eeprom_sku & EEPROM_SKU_CAP_11N_ENABLE)
+ priv->cfg->sku |= IWL_SKU_N;
+ }
if (!priv->cfg->sku) {
IWL_ERR(priv, "Invalid device sku\n");
return -EINVAL;
/* not using .cfg overwrite */
radio_cfg = iwl_eeprom_query16(priv, EEPROM_RADIO_CONFIG);
priv->cfg->valid_tx_ant = EEPROM_RF_CFG_TX_ANT_MSK(radio_cfg);
- priv->cfg->valid_rx_ant = EEPROM_RF_CFG_TX_ANT_MSK(radio_cfg);
+ priv->cfg->valid_rx_ant = EEPROM_RF_CFG_RX_ANT_MSK(radio_cfg);
if (!priv->cfg->valid_tx_ant || !priv->cfg->valid_rx_ant) {
IWL_ERR(priv, "Invalid chain (0X%x, 0X%x)\n",
priv->cfg->valid_tx_ant,
/* only Re-enable if disabled by irq */
if (test_bit(STATUS_INT_ENABLED, &priv->status))
iwl_enable_interrupts(priv);
+ /* Re-enable RF_KILL if it occurred */
+ else if (handled & CSR_INT_BIT_RF_KILL)
+ iwl_enable_rfkill_int(priv);
#ifdef CONFIG_IWLWIFI_DEBUG
if (iwl_get_debug_level(priv) & (IWL_DL_ISR)) {
/* only Re-enable if disabled by irq */
if (test_bit(STATUS_INT_ENABLED, &priv->status))
iwl_enable_interrupts(priv);
+ /* Re-enable RF_KILL if it occurred */
+ else if (handled & CSR_INT_BIT_RF_KILL)
+ iwl_enable_rfkill_int(priv);
}
/* the threshold ratio of actual_ack_cnt to expected_ack_cnt in percent */
ndev = alloc_netdev_mq(0, "wlan%d", ether_setup, IWM_TX_QUEUES);
if (!ndev) {
dev_err(dev, "no memory for network device instance\n");
+ ret = -ENOMEM;
goto out_priv;
}
GFP_KERNEL);
if (!iwm->umac_profile) {
dev_err(dev, "Couldn't alloc memory for profile\n");
+ ret = -ENOMEM;
goto out_profile;
}
if (!fw || !fw->size || !fw->data) {
ERROR(rt2x00dev, "Failed to read Firmware.\n");
+ release_firmware(fw);
return -ENOENT;
}
{ USB_DEVICE(0x04bb, 0x093d), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x148f, 0x2671), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0812, 0x3101), USB_DEVICE_DATA(&rt73usb_ops) },
/* Qcom */
{ USB_DEVICE(0x18e8, 0x6196), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x18e8, 0x6229), USB_DEVICE_DATA(&rt73usb_ops) },
}
static void efuse_write_data_case1(struct ieee80211_hw *hw, u16 *efuse_addr,
- u8 efuse_data, u8 offset, int *bcontinual,
- u8 *write_state, struct pgpkt_struct target_pkt,
- int *repeat_times, int *bresult, u8 word_en)
+ u8 efuse_data, u8 offset, int *bcontinual,
+ u8 *write_state, struct pgpkt_struct *target_pkt,
+ int *repeat_times, int *bresult, u8 word_en)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct pgpkt_struct tmp_pkt;
tmp_pkt.word_en = tmp_header & 0x0F;
tmp_word_cnts = efuse_calculate_word_cnts(tmp_pkt.word_en);
- if (tmp_pkt.offset != target_pkt.offset) {
- efuse_addr = efuse_addr + (tmp_word_cnts * 2) + 1;
+ if (tmp_pkt.offset != target_pkt->offset) {
+ *efuse_addr = *efuse_addr + (tmp_word_cnts * 2) + 1;
*write_state = PG_STATE_HEADER;
} else {
for (tmpindex = 0; tmpindex < (tmp_word_cnts * 2); tmpindex++) {
}
if (bdataempty == false) {
- efuse_addr = efuse_addr + (tmp_word_cnts * 2) + 1;
+ *efuse_addr = *efuse_addr + (tmp_word_cnts * 2) + 1;
*write_state = PG_STATE_HEADER;
} else {
match_word_en = 0x0F;
- if (!((target_pkt.word_en & BIT(0)) |
+ if (!((target_pkt->word_en & BIT(0)) |
(tmp_pkt.word_en & BIT(0))))
match_word_en &= (~BIT(0));
- if (!((target_pkt.word_en & BIT(1)) |
+ if (!((target_pkt->word_en & BIT(1)) |
(tmp_pkt.word_en & BIT(1))))
match_word_en &= (~BIT(1));
- if (!((target_pkt.word_en & BIT(2)) |
+ if (!((target_pkt->word_en & BIT(2)) |
(tmp_pkt.word_en & BIT(2))))
match_word_en &= (~BIT(2));
- if (!((target_pkt.word_en & BIT(3)) |
+ if (!((target_pkt->word_en & BIT(3)) |
(tmp_pkt.word_en & BIT(3))))
match_word_en &= (~BIT(3));
badworden = efuse_word_enable_data_write(
hw, *efuse_addr + 1,
tmp_pkt.word_en,
- target_pkt.data);
+ target_pkt->data);
if (0x0F != (badworden & 0x0F)) {
u8 reorg_offset = offset;
}
tmp_word_en = 0x0F;
- if ((target_pkt.word_en & BIT(0)) ^
+ if ((target_pkt->word_en & BIT(0)) ^
(match_word_en & BIT(0)))
tmp_word_en &= (~BIT(0));
- if ((target_pkt.word_en & BIT(1)) ^
+ if ((target_pkt->word_en & BIT(1)) ^
(match_word_en & BIT(1)))
tmp_word_en &= (~BIT(1));
- if ((target_pkt.word_en & BIT(2)) ^
+ if ((target_pkt->word_en & BIT(2)) ^
(match_word_en & BIT(2)))
tmp_word_en &= (~BIT(2));
- if ((target_pkt.word_en & BIT(3)) ^
+ if ((target_pkt->word_en & BIT(3)) ^
(match_word_en & BIT(3)))
tmp_word_en &= (~BIT(3));
if ((tmp_word_en & 0x0F) != 0x0F) {
*efuse_addr = efuse_get_current_size(hw);
- target_pkt.offset = offset;
- target_pkt.word_en = tmp_word_en;
+ target_pkt->offset = offset;
+ target_pkt->word_en = tmp_word_en;
} else
*bcontinual = false;
*write_state = PG_STATE_HEADER;
}
} else {
*efuse_addr += (2 * tmp_word_cnts) + 1;
- target_pkt.offset = offset;
- target_pkt.word_en = word_en;
+ target_pkt->offset = offset;
+ target_pkt->word_en = word_en;
*write_state = PG_STATE_HEADER;
}
}
efuse_write_data_case1(hw, &efuse_addr,
efuse_data, offset,
&bcontinual,
- &write_state, target_pkt,
+ &write_state, &target_pkt,
&repeat_times, &bresult,
word_en);
else
struct sk_buff *uskb = NULL;
u8 *pdata;
uskb = dev_alloc_skb(skb->len + 128);
+ if (!uskb) {
+ RT_TRACE(rtlpriv,
+ (COMP_INTR | COMP_RECV),
+ DBG_EMERG,
+ ("can't alloc rx skb\n"));
+ goto done;
+ }
memcpy(IEEE80211_SKB_RXCB(uskb),
&rx_status,
sizeof(rx_status));
new_skb = dev_alloc_skb(rtlpci->rxbuffersize);
if (unlikely(!new_skb)) {
RT_TRACE(rtlpriv, (COMP_INTR | COMP_RECV),
- DBG_DMESG,
+ DBG_EMERG,
("can't alloc skb for rx\n"));
goto done;
}
struct sk_buff *skb =
dev_alloc_skb(rtlpci->rxbuffersize);
u32 bufferaddress;
- entry = &rtlpci->rx_ring[rx_queue_idx].desc[i];
if (!skb)
return 0;
+ entry = &rtlpci->rx_ring[rx_queue_idx].desc[i];
/*skb->dev = dev; */
if (changed & BSS_CHANGED_BEACON) {
beacon = ieee80211_beacon_get(hw, vif);
+ if (!beacon)
+ goto out_sleep;
+
ret = wl1251_cmd_template_set(wl, CMD_BEACON, beacon->data,
beacon->len);
spi_message_add_tail(&t, &m);
spi_sync(wl_to_spi(wl), &m);
- kfree(cmd);
-
wl1271_dump(DEBUG_SPI, "spi reset -> ", cmd, WSPI_INIT_CMD_LEN);
+ kfree(cmd);
}
static void wl1271_spi_init(struct wl1271 *wl)
unsigned long rx_pfn_array[NET_RX_RING_SIZE];
struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+
+ /* Statistics */
+ int rx_gso_checksum_fixup;
};
struct netfront_rx_info {
return cons;
}
-static int skb_checksum_setup(struct sk_buff *skb)
+static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
{
struct iphdr *iph;
unsigned char *th;
int err = -EPROTO;
+ int recalculate_partial_csum = 0;
+
+ /*
+	 * A GSO SKB must be CHECKSUM_PARTIAL. However, some buggy
+	 * peers can fail to set NETRXF_csum_blank when sending a GSO
+	 * frame. In this case, force the SKB to CHECKSUM_PARTIAL and
+ * recalculate the partial checksum.
+ */
+ if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
+ struct netfront_info *np = netdev_priv(dev);
+ np->rx_gso_checksum_fixup++;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ recalculate_partial_csum = 1;
+ }
+
+ /* A non-CHECKSUM_PARTIAL SKB does not require setup. */
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
if (skb->protocol != htons(ETH_P_IP))
goto out;
switch (iph->protocol) {
case IPPROTO_TCP:
skb->csum_offset = offsetof(struct tcphdr, check);
+
+ if (recalculate_partial_csum) {
+ struct tcphdr *tcph = (struct tcphdr *)th;
+ tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
+ skb->len - iph->ihl*4,
+ IPPROTO_TCP, 0);
+ }
break;
case IPPROTO_UDP:
skb->csum_offset = offsetof(struct udphdr, check);
+
+ if (recalculate_partial_csum) {
+ struct udphdr *udph = (struct udphdr *)th;
+ udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
+ skb->len - iph->ihl*4,
+ IPPROTO_UDP, 0);
+ }
break;
default:
if (net_ratelimit())
/* Ethernet work: Delayed to here as it peeks the header. */
skb->protocol = eth_type_trans(skb, dev);
- if (skb->ip_summed == CHECKSUM_PARTIAL) {
- if (skb_checksum_setup(skb)) {
- kfree_skb(skb);
- packets_dropped++;
- dev->stats.rx_errors++;
- continue;
- }
+ if (checksum_setup(dev, skb)) {
+ kfree_skb(skb);
+ packets_dropped++;
+ dev->stats.rx_errors++;
+ continue;
}
dev->stats.rx_packets++;
}
}
+static const struct xennet_stat {
+ char name[ETH_GSTRING_LEN];
+ u16 offset;
+} xennet_stats[] = {
+ {
+ "rx_gso_checksum_fixup",
+ offsetof(struct netfront_info, rx_gso_checksum_fixup)
+ },
+};
+
+static int xennet_get_sset_count(struct net_device *dev, int string_set)
+{
+ switch (string_set) {
+ case ETH_SS_STATS:
+ return ARRAY_SIZE(xennet_stats);
+ default:
+ return -EINVAL;
+ }
+}
+
+static void xennet_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 * data)
+{
+ void *np = netdev_priv(dev);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
+ data[i] = *(int *)(np + xennet_stats[i].offset);
+}
+
+static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
+{
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
+ memcpy(data + i * ETH_GSTRING_LEN,
+ xennet_stats[i].name, ETH_GSTRING_LEN);
+ break;
+ }
+}
+
static const struct ethtool_ops xennet_ethtool_ops =
{
.set_tx_csum = ethtool_op_set_tx_csum,
.set_sg = xennet_set_sg,
.set_tso = xennet_set_tso,
.get_link = ethtool_op_get_link,
+
+ .get_sset_count = xennet_get_sset_count,
+ .get_ethtool_stats = xennet_get_ethtool_stats,
+ .get_strings = xennet_get_strings,
};
#ifdef CONFIG_SYSFS
/* Make sure we haven't left any pointers around in the wait
* list. */
- spin_lock (&port->waitlist_lock);
+ spin_lock_irq(&port->waitlist_lock);
if (dev->waitprev || dev->waitnext || port->waithead == dev) {
if (dev->waitprev)
dev->waitprev->waitnext = dev->waitnext;
else
port->waittail = dev->waitprev;
}
- spin_unlock (&port->waitlist_lock);
+ spin_unlock_irq(&port->waitlist_lock);
kfree(dev->state);
kfree(dev);
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/capability.h>
+#include <linux/security.h>
#include <linux/pci-aspm.h>
#include <linux/slab.h>
#include "pci.h"
u8 *data = (u8*) buf;
/* Several chips lock up trying to read undefined config space */
- if (cap_raised(filp->f_cred->cap_effective, CAP_SYS_ADMIN)) {
+ if (security_capable(filp->f_cred, CAP_SYS_ADMIN) == 0) {
size = dev->cfg_size;
} else if (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS) {
size = 128;
# PCI Express ASPM
#
config PCIEASPM
- bool "PCI Express ASPM control" if EMBEDDED
+ bool "PCI Express ASPM control" if EXPERT
depends on PCI && PCIEPORTBUS
default y
help
config YENTA
tristate "CardBus yenta-compatible bridge support"
depends on PCI
- select CARDBUS if !EMBEDDED
+ select CARDBUS if !EXPERT
select PCCARD_NONSTATIC if PCMCIA != n
---help---
This option enables support for CardBus host bridges. Virtually
config YENTA_O2
default y
- bool "Special initialization for O2Micro bridges" if EMBEDDED
+ bool "Special initialization for O2Micro bridges" if EXPERT
depends on YENTA
config YENTA_RICOH
default y
- bool "Special initialization for Ricoh bridges" if EMBEDDED
+ bool "Special initialization for Ricoh bridges" if EXPERT
depends on YENTA
config YENTA_TI
default y
- bool "Special initialization for TI and EnE bridges" if EMBEDDED
+ bool "Special initialization for TI and EnE bridges" if EXPERT
depends on YENTA
config YENTA_ENE_TUNE
default y
- bool "Auto-tune EnE bridges for CB cards" if EMBEDDED
+ bool "Auto-tune EnE bridges for CB cards" if EXPERT
depends on YENTA_TI && CARDBUS
config YENTA_TOSHIBA
default y
- bool "Special initialization for Toshiba ToPIC bridges" if EMBEDDED
+ bool "Special initialization for Toshiba ToPIC bridges" if EXPERT
depends on YENTA
config PD6729
config IDEAPAD_LAPTOP
tristate "Lenovo IdeaPad Laptop Extras"
depends on ACPI
- depends on RFKILL
+ depends on RFKILL && INPUT
select INPUT_SPARSEKMAP
help
This is a driver for the rfkill switches on Lenovo IdeaPad netbooks.
*/
#define AMW0_GUID1 "67C3371D-95A3-4C37-BB61-DD47B491DAAB"
#define AMW0_GUID2 "431F16ED-0C2B-444C-B267-27DEB140CF9C"
-#define WMID_GUID1 "6AF4F258-B401-42fd-BE91-3D4AC2D7C0D3"
+#define WMID_GUID1 "6AF4F258-B401-42FD-BE91-3D4AC2D7C0D3"
#define WMID_GUID2 "95764E09-FB56-4e83-B31A-37761F60994A"
#define WMID_GUID3 "61EF69EA-865C-4BC3-A502-A0DEBA0CB531"
return -EINVAL;
return count;
}
-static DEVICE_ATTR(threeg, S_IWUGO | S_IRUGO | S_IWUSR, show_bool_threeg,
+static DEVICE_ATTR(threeg, S_IRUGO | S_IWUSR, show_bool_threeg,
set_bool_threeg);
static ssize_t show_interface(struct device *dev, struct device_attribute *attr,
struct proc_dir_entry *proc;
mode_t mode;
- /*
- * If parameter uid or gid is not changed, keep the default setting for
- * our proc entries (-rw-rw-rw-) else, it means we care about security,
- * and then set to -rw-rw----
- */
-
if ((asus_uid == 0) && (asus_gid == 0)) {
- mode = S_IFREG | S_IRUGO | S_IWUGO;
+ mode = S_IFREG | S_IRUGO | S_IWUSR | S_IWGRP;
} else {
mode = S_IFREG | S_IRUSR | S_IRGRP | S_IWUSR | S_IWGRP;
printk(KERN_WARNING " asus_uid and asus_gid parameters are "
dell_send_request(buffer, 17, 11);
/* If the hardware switch controls this radio, and the hardware
- switch is disabled, don't allow changing the software state */
+ switch is disabled, don't allow changing the software state.
+ If the hardware switch is reported as not supported, always
+ fire the SMI to toggle the killswitch. */
if ((hwswitch_state & BIT(hwswitch_bit)) &&
- !(buffer->output[1] & BIT(16))) {
+ !(buffer->output[1] & BIT(16)) &&
+ (buffer->output[1] & BIT(0))) {
ret = -EINVAL;
goto out;
}
static void dell_update_rfkill(struct work_struct *ignored)
{
+ int status;
+
+ get_buffer();
+ dell_send_request(buffer, 17, 11);
+ status = buffer->output[1];
+ release_buffer();
+
+ /* if hardware rfkill is not supported, set it explicitly */
+ if (!(status & BIT(0))) {
+ if (wifi_rfkill)
+ dell_rfkill_set((void *)1, !((status & BIT(17)) >> 17));
+ if (bluetooth_rfkill)
+ dell_rfkill_set((void *)2, !((status & BIT(18)) >> 18));
+ if (wwan_rfkill)
+ dell_rfkill_set((void *)3, !((status & BIT(19)) >> 19));
+ }
+
if (wifi_rfkill)
dell_rfkill_query(wifi_rfkill, (void *)1);
if (bluetooth_rfkill)
#define GPOSW_DOU 0x08
#define GPOSW_RDRV 0x30
+#define GPIO_UPDATE_TYPE 0x80000000
#define NUM_GPIO 24
-struct pmic_gpio_irq {
- spinlock_t lock;
- u32 trigger[NUM_GPIO];
- u32 dirty;
- struct work_struct work;
-};
-
-
struct pmic_gpio {
+ struct mutex buslock;
struct gpio_chip chip;
- struct pmic_gpio_irq irqtypes;
void *gpiointr;
int irq;
unsigned irq_base;
+ unsigned int update_type;
+ u32 trigger_type;
};
-static void pmic_program_irqtype(int gpio, int type)
-{
- if (type & IRQ_TYPE_EDGE_RISING)
- intel_scu_ipc_update_register(GPIO0 + gpio, 0x20, 0x20);
- else
- intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x20);
-
- if (type & IRQ_TYPE_EDGE_FALLING)
- intel_scu_ipc_update_register(GPIO0 + gpio, 0x10, 0x10);
- else
- intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x10);
-};
-
-static void pmic_irqtype_work(struct work_struct *work)
-{
- struct pmic_gpio_irq *t =
- container_of(work, struct pmic_gpio_irq, work);
- unsigned long flags;
- int i;
- u16 type;
-
- spin_lock_irqsave(&t->lock, flags);
- /* As we drop the lock, we may need multiple scans if we race the
- pmic_irq_type function */
- while (t->dirty) {
- /*
- * For each pin that has the dirty bit set send an IPC
- * message to configure the hardware via the PMIC
- */
- for (i = 0; i < NUM_GPIO; i++) {
- if (!(t->dirty & (1 << i)))
- continue;
- t->dirty &= ~(1 << i);
- /* We can't trust the array entry or dirty
- once the lock is dropped */
- type = t->trigger[i];
- spin_unlock_irqrestore(&t->lock, flags);
- pmic_program_irqtype(i, type);
- spin_lock_irqsave(&t->lock, flags);
- }
- }
- spin_unlock_irqrestore(&t->lock, flags);
-}
-
static int pmic_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
if (offset > 8) {
1 << (offset - 16));
}
-static int pmic_irq_type(unsigned irq, unsigned type)
+/*
+ * This is called from genirq with pg->buslock locked and
+ * irq_desc->lock held. We can not access the scu bus here, so we
+ * store the change and update in the bus_sync_unlock() function below
+ */
+static int pmic_irq_type(struct irq_data *data, unsigned type)
{
- struct pmic_gpio *pg = get_irq_chip_data(irq);
- u32 gpio = irq - pg->irq_base;
- unsigned long flags;
+ struct pmic_gpio *pg = irq_data_get_irq_chip_data(data);
+ u32 gpio = data->irq - pg->irq_base;
if (gpio >= pg->chip.ngpio)
return -EINVAL;
- spin_lock_irqsave(&pg->irqtypes.lock, flags);
- pg->irqtypes.trigger[gpio] = type;
- pg->irqtypes.dirty |= (1 << gpio);
- spin_unlock_irqrestore(&pg->irqtypes.lock, flags);
- schedule_work(&pg->irqtypes.work);
+ pg->trigger_type = type;
+ pg->update_type = gpio | GPIO_UPDATE_TYPE;
return 0;
}
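/*
 * Illustrative sketch only (not part of the patch hunks above): the
 * bus_sync_unlock() path referred to in the comment would perform the
 * deferred SCU access once irq_desc->lock has been dropped, then
 * release pg->buslock taken by the matching irq_bus_lock callback.
 * The helper name pmic_bus_sync_unlock() is an assumption here; the
 * register writes mirror the removed pmic_program_irqtype() above.
 */
static void pmic_bus_sync_unlock(struct irq_data *data)
{
	struct pmic_gpio *pg = irq_data_get_irq_chip_data(data);

	if (pg->update_type & GPIO_UPDATE_TYPE) {
		unsigned int gpio = pg->update_type & ~GPIO_UPDATE_TYPE;

		/* Safe to talk to the SCU now that irq_desc->lock is dropped */
		if (pg->trigger_type & IRQ_TYPE_EDGE_RISING)
			intel_scu_ipc_update_register(GPIO0 + gpio, 0x20, 0x20);
		else
			intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x20);
		if (pg->trigger_type & IRQ_TYPE_EDGE_FALLING)
			intel_scu_ipc_update_register(GPIO0 + gpio, 0x10, 0x10);
		else
			intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x10);

		pg->update_type = 0;
	}
	mutex_unlock(&pg->buslock);
}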
-
-
static int pmic_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
{
struct pmic_gpio *pg = container_of(chip, struct pmic_gpio, chip);
}
/* the gpiointr register is read-clear, so just do nothing. */
-static void pmic_irq_unmask(unsigned irq)
-{
-};
+static void pmic_irq_unmask(struct irq_data *data) { }
-static void pmic_irq_mask(unsigned irq)
-{
-};
+static void pmic_irq_mask(struct irq_data *data) { }
static struct irq_chip pmic_irqchip = {
.name = "PMIC-GPIO",
- .mask = pmic_irq_mask,
- .unmask = pmic_irq_unmask,
- .set_type = pmic_irq_type,
+ .irq_mask = pmic_irq_mask,
+ .irq_unmask = pmic_irq_unmask,
+ .irq_set_type = pmic_irq_type,
};
-static void pmic_irq_handler(unsigned irq, struct irq_desc *desc)
+static irqreturn_t pmic_irq_handler(int irq, void *data)
{
- struct pmic_gpio *pg = (struct pmic_gpio *)get_irq_data(irq);
+ struct pmic_gpio *pg = data;
u8 intsts = *((u8 *)pg->gpiointr + 4);
int gpio;
+ irqreturn_t ret = IRQ_NONE;
for (gpio = 0; gpio < 8; gpio++) {
if (intsts & (1 << gpio)) {
pr_debug("pmic pin %d triggered\n", gpio);
generic_handle_irq(pg->irq_base + gpio);
+ ret = IRQ_HANDLED;
}
}
-
- if (desc->chip->irq_eoi)
- desc->chip->irq_eoi(irq_get_irq_data(irq));
- else
- dev_warn(pg->chip.dev, "missing EOI handler for irq %d\n", irq);
+ return ret;
}
static int __devinit platform_pmic_gpio_probe(struct platform_device *pdev)
pg->chip.can_sleep = 1;
pg->chip.dev = dev;
- INIT_WORK(&pg->irqtypes.work, pmic_irqtype_work);
- spin_lock_init(&pg->irqtypes.lock);
+ mutex_init(&pg->buslock);
pg->chip.dev = dev;
retval = gpiochip_add(&pg->chip);
printk(KERN_ERR "%s: Can not add pmic gpio chip.\n", __func__);
goto err;
}
- set_irq_data(pg->irq, pg);
- set_irq_chained_handler(pg->irq, pmic_irq_handler);
+
+ retval = request_irq(pg->irq, pmic_irq_handler, 0, "pmic", pg);
+ if (retval) {
+ printk(KERN_WARNING "pmic: Interrupt request failed\n");
+ goto err;
+ }
+
for (i = 0; i < 8; i++) {
set_irq_chip_and_handler_name(i + pg->irq_base, &pmic_irqchip,
handle_simple_irq, "demux");
#include <linux/sfi.h>
#include <asm/mrst.h>
#include <asm/intel_scu_ipc.h>
-#include <asm/mrst.h>
/* IPC defines the following message types */
#define IPCMSG_WATCHDOG_TIMER 0xF8 /* Set Kernel Watchdog Threshold */
{
int i, nc, bytes, d;
u32 offset = 0;
- u32 err = 0;
+ int err;
u8 cbuf[IPC_WWBUF_SIZE] = { };
u32 *wbuf = (u32 *)&cbuf;
*/
int intel_scu_ipc_simple_command(int cmd, int sub)
{
- u32 err = 0;
+ int err;
mutex_lock(&ipclock);
if (ipcdev.pdev == NULL) {
int intel_scu_ipc_command(int cmd, int sub, u32 *in, int inlen,
u32 *out, int outlen)
{
- u32 err = 0;
- int i = 0;
+ int i, err;
mutex_lock(&ipclock);
if (ipcdev.pdev == NULL) {
module_init(ipc_module_init);
module_exit(ipc_module_exit);
-MODULE_LICENSE("GPL V2");
+MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Utility driver for intel scu ipc");
MODULE_AUTHOR("Sreedhara <sreedhara.ds@intel.com>");
return -EINVAL; \
return count; \
} \
-static DEVICE_ATTR(value, S_IWUGO | S_IRUGO | S_IWUSR, \
+static DEVICE_ATTR(value, S_IRUGO | S_IWUSR, \
show_bool_##value, set_bool_##value);
show_set_bool(wireless, TC1100_INSTANCE_WIRELESS);
if (keycode != KEY_RESERVED) {
mutex_lock(&tpacpi_inputdev_send_mutex);
+ input_event(tpacpi_inputdev, EV_MSC, MSC_SCAN, scancode);
input_report_key(tpacpi_inputdev, keycode, 1);
- if (keycode == KEY_UNKNOWN)
- input_event(tpacpi_inputdev, EV_MSC, MSC_SCAN,
- scancode);
input_sync(tpacpi_inputdev);
+ input_event(tpacpi_inputdev, EV_MSC, MSC_SCAN, scancode);
input_report_key(tpacpi_inputdev, keycode, 0);
- if (keycode == KEY_UNKNOWN)
- input_event(tpacpi_inputdev, EV_MSC, MSC_SCAN,
- scancode);
input_sync(tpacpi_inputdev);
mutex_unlock(&tpacpi_inputdev_send_mutex);
/* First of all we get the time stamp... */
pps_get_ts(&ts);
- dev_info(pps->dev, "PPS event at %lu\n", jiffies);
-
pps_event(pps, &ts, PPS_CAPTUREASSERT, NULL);
mod_timer(&ktimer, jiffies + HZ);
}
device->pardev = parport_register_device(port, KBUILD_MODNAME,
- NULL, NULL, parport_irq, 0, device);
+ NULL, NULL, parport_irq, PARPORT_FLAG_EXCL, device);
if (!device->pardev) {
pr_err("couldn't register with %s\n", port->name);
goto err_free;
}
device.pardev = parport_register_device(port, KBUILD_MODNAME,
- NULL, NULL, NULL, 0, &device);
+ NULL, NULL, NULL, PARPORT_FLAG_EXCL, &device);
if (!device.pardev) {
pr_err("couldn't register with %s\n", port->name);
return;
* @port: Master port to send transactions
* @destid: Current destination ID in network
* @hopcount: Number of hops into the network
+ * @prev: previous rio_dev
+ * @prev_port: previous port number
*
* Recursively discovers a RIO network. Transactions are sent via the
* master port passed in @port.
rtc->id = id;
rtc->ops = ops;
rtc->owner = owner;
+ rtc->irq_freq = 1;
rtc->max_user_freq = 64;
rtc->dev.parent = dev;
rtc->dev.class = rtc_class;
#include <linux/log2.h>
#include <linux/workqueue.h>
+static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer);
+static void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer);
+
static int __rtc_read_time(struct rtc_device *rtc, struct rtc_time *tm)
{
int err;
err = mutex_lock_interruptible(&rtc->ops_lock);
if (err)
return err;
- alarm->enabled = rtc->aie_timer.enabled;
- if (alarm->enabled)
+ if (rtc->ops == NULL)
+ err = -ENODEV;
+ else if (!rtc->ops->read_alarm)
+ err = -EINVAL;
+ else {
+ memset(alarm, 0, sizeof(struct rtc_wkalrm));
+ alarm->enabled = rtc->aie_timer.enabled;
alarm->time = rtc_ktime_to_tm(rtc->aie_timer.node.expires);
+ }
mutex_unlock(&rtc->ops_lock);
- return 0;
+ return err;
}
EXPORT_SYMBOL_GPL(rtc_read_alarm);
return err;
if (rtc->aie_timer.enabled) {
rtc_timer_remove(rtc, &rtc->aie_timer);
- rtc->aie_timer.enabled = 0;
}
rtc->aie_timer.node.expires = rtc_tm_to_ktime(alarm->time);
rtc->aie_timer.period = ktime_set(0, 0);
if (alarm->enabled) {
- rtc->aie_timer.enabled = 1;
- rtc_timer_enqueue(rtc, &rtc->aie_timer);
+ err = rtc_timer_enqueue(rtc, &rtc->aie_timer);
}
mutex_unlock(&rtc->ops_lock);
- return 0;
+ return err;
}
EXPORT_SYMBOL_GPL(rtc_set_alarm);
return err;
if (rtc->aie_timer.enabled != enabled) {
- if (enabled) {
- rtc->aie_timer.enabled = 1;
- rtc_timer_enqueue(rtc, &rtc->aie_timer);
- } else {
+ if (enabled)
+ err = rtc_timer_enqueue(rtc, &rtc->aie_timer);
+ else
rtc_timer_remove(rtc, &rtc->aie_timer);
- rtc->aie_timer.enabled = 0;
- }
}
- if (!rtc->ops)
+ if (err)
+ /* nothing */;
+ else if (!rtc->ops)
err = -ENODEV;
else if (!rtc->ops->alarm_irq_enable)
err = -EINVAL;
if (err)
return err;
+#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
+ if (enabled == 0 && rtc->uie_irq_active) {
+ mutex_unlock(&rtc->ops_lock);
+ return rtc_dev_update_irq_enable_emul(rtc, 0);
+ }
+#endif
/* make sure we're changing state */
if (rtc->uie_rtctimer.enabled == enabled)
goto out;
now = rtc_tm_to_ktime(tm);
rtc->uie_rtctimer.node.expires = ktime_add(now, onesec);
rtc->uie_rtctimer.period = ktime_set(1, 0);
- rtc->uie_rtctimer.enabled = 1;
- rtc_timer_enqueue(rtc, &rtc->uie_rtctimer);
- } else {
+ err = rtc_timer_enqueue(rtc, &rtc->uie_rtctimer);
+ } else
rtc_timer_remove(rtc, &rtc->uie_rtctimer);
- rtc->uie_rtctimer.enabled = 0;
- }
out:
mutex_unlock(&rtc->ops_lock);
+#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
+ /*
+ * Enable emulation if the driver did not provide
+ * the update_irq_enable function pointer or if it returned
+ * -EINVAL to signal that it has been configured without
+ * interrupts or that they are not available at the moment.
+ */
+ if (err == -EINVAL)
+ err = rtc_dev_update_irq_enable_emul(rtc, enabled);
+#endif
return err;
}
*
* Triggers the registered irq_task function callback.
*/
-static void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
+void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode)
{
unsigned long flags;
int err = 0;
unsigned long flags;
+ if (freq <= 0)
+ return -EINVAL;
+
spin_lock_irqsave(&rtc->irq_task_lock, flags);
if (rtc->irq_task != NULL && task == NULL)
err = -EBUSY;
* Enqueues a timer onto the rtc device's timerqueue and sets
* the next alarm event appropriately.
*
+ * Sets the enabled bit on the added timer.
+ *
* Must hold ops_lock for proper serialization of timerqueue
*/
-void rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
{
+ timer->enabled = 1;
timerqueue_add(&rtc->timerqueue, &timer->node);
if (&timer->node == timerqueue_getnext(&rtc->timerqueue)) {
struct rtc_wkalrm alarm;
err = __rtc_set_alarm(rtc, &alarm);
if (err == -ETIME)
schedule_work(&rtc->irqwork);
+ else if (err) {
+ timerqueue_del(&rtc->timerqueue, &timer->node);
+ timer->enabled = 0;
+ return err;
+ }
}
+ return 0;
}
/**
* Removes a timer from the rtc device's timerqueue and sets
* the next alarm event appropriately.
*
+ * Clears the enabled bit on the removed timer.
+ *
* Must hold ops_lock for proper serialization of timerqueue
*/
-void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer)
+static void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer)
{
struct timerqueue_node *next = timerqueue_getnext(&rtc->timerqueue);
timerqueue_del(&rtc->timerqueue, &timer->node);
-
+ timer->enabled = 0;
if (next == &timer->node) {
struct rtc_wkalrm alarm;
int err;
timer->node.expires = expires;
timer->period = period;
- timer->enabled = 1;
- rtc_timer_enqueue(rtc, timer);
+ ret = rtc_timer_enqueue(rtc, timer);
mutex_unlock(&rtc->ops_lock);
return ret;
mutex_lock(&rtc->ops_lock);
if (timer->enabled)
rtc_timer_remove(rtc, timer);
- timer->enabled = 0;
mutex_unlock(&rtc->ops_lock);
return ret;
}
return ret;
}
-static int at32_rtc_ioctl(struct device *dev, unsigned int cmd,
- unsigned long arg)
+static int at32_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct rtc_at32ap700x *rtc = dev_get_drvdata(dev);
int ret = 0;
spin_lock_irq(&rtc->lock);
- switch (cmd) {
- case RTC_AIE_ON:
+ if (enabled) {
if (rtc_readl(rtc, VAL) > rtc->alarm_time) {
ret = -EINVAL;
- break;
+ goto out;
}
rtc_writel(rtc, CTRL, rtc_readl(rtc, CTRL)
| RTC_BIT(CTRL_TOPEN));
rtc_writel(rtc, ICR, RTC_BIT(ICR_TOPI));
rtc_writel(rtc, IER, RTC_BIT(IER_TOPI));
- break;
- case RTC_AIE_OFF:
+ } else {
rtc_writel(rtc, CTRL, rtc_readl(rtc, CTRL)
& ~RTC_BIT(CTRL_TOPEN));
rtc_writel(rtc, IDR, RTC_BIT(IDR_TOPI));
rtc_writel(rtc, ICR, RTC_BIT(ICR_TOPI));
- break;
- default:
- ret = -ENOIOCTLCMD;
- break;
}
-
+out:
spin_unlock_irq(&rtc->lock);
return ret;
}
static struct rtc_class_ops at32_rtc_ops = {
- .ioctl = at32_rtc_ioctl,
.read_time = at32_rtc_readtime,
.set_time = at32_rtc_settime,
.read_alarm = at32_rtc_readalarm,
.set_alarm = at32_rtc_setalarm,
+ .alarm_irq_enable = at32_rtc_alarm_irq_enable,
};
static int __init at32_rtc_probe(struct platform_device *pdev)
/* important: scrub old status before enabling IRQs */
switch (cmd) {
- case RTC_AIE_OFF: /* alarm off */
- at91_sys_write(AT91_RTC_IDR, AT91_RTC_ALARM);
- break;
- case RTC_AIE_ON: /* alarm on */
- at91_sys_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
- at91_sys_write(AT91_RTC_IER, AT91_RTC_ALARM);
- break;
case RTC_UIE_OFF: /* update off */
at91_sys_write(AT91_RTC_IDR, AT91_RTC_SECEV);
break;
return ret;
}
+static int at91_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ pr_debug("%s(): cmd=%08x\n", __func__, enabled);
+
+ if (enabled) {
+ at91_sys_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
+ at91_sys_write(AT91_RTC_IER, AT91_RTC_ALARM);
+ } else
+ at91_sys_write(AT91_RTC_IDR, AT91_RTC_ALARM);
+
+ return 0;
+}
/*
* Provide additional RTC information in /proc/driver/rtc
*/
.read_alarm = at91_rtc_readalarm,
.set_alarm = at91_rtc_setalarm,
.proc = at91_rtc_proc,
+ .alarm_irq_enable = at91_rtc_alarm_irq_enable,
};
/*
dev_dbg(dev, "ioctl: cmd=%08x, arg=%08lx, mr %08x\n", cmd, arg, mr);
switch (cmd) {
- case RTC_AIE_OFF: /* alarm off */
- rtt_writel(rtc, MR, mr & ~AT91_RTT_ALMIEN);
- break;
- case RTC_AIE_ON: /* alarm on */
- rtt_writel(rtc, MR, mr | AT91_RTT_ALMIEN);
- break;
case RTC_UIE_OFF: /* update off */
rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN);
break;
return ret;
}
+static int at91_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ struct sam9_rtc *rtc = dev_get_drvdata(dev);
+ u32 mr = rtt_readl(rtc, MR);
+
+ dev_dbg(dev, "alarm_irq_enable: enabled=%08x, mr %08x\n", enabled, mr);
+ if (enabled)
+ rtt_writel(rtc, MR, mr | AT91_RTT_ALMIEN);
+ else
+ rtt_writel(rtc, MR, mr & ~AT91_RTT_ALMIEN);
+ return 0;
+}
+
/*
* Provide additional RTC information in /proc/driver/rtc
*/
.read_alarm = at91_rtc_readalarm,
.set_alarm = at91_rtc_setalarm,
.proc = at91_rtc_proc,
+ .alarm_irq_enable = at91_rtc_alarm_irq_enable,
};
/*
bfin_rtc_int_clear(~RTC_ISTAT_SEC);
break;
- case RTC_AIE_ON:
- dev_dbg_stamp(dev);
- bfin_rtc_int_set_alarm(rtc);
- break;
- case RTC_AIE_OFF:
- dev_dbg_stamp(dev);
- bfin_rtc_int_clear(~(RTC_ISTAT_ALARM | RTC_ISTAT_ALARM_DAY));
- break;
-
default:
dev_dbg_stamp(dev);
ret = -ENOIOCTLCMD;
return ret;
}
+static int bfin_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ struct bfin_rtc *rtc = dev_get_drvdata(dev);
+
+ dev_dbg_stamp(dev);
+ if (enabled)
+ bfin_rtc_int_set_alarm(rtc);
+ else
+ bfin_rtc_int_clear(~(RTC_ISTAT_ALARM | RTC_ISTAT_ALARM_DAY));
+
+ return 0;
+}
+
static int bfin_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
struct bfin_rtc *rtc = dev_get_drvdata(dev);
.read_alarm = bfin_rtc_read_alarm,
.set_alarm = bfin_rtc_set_alarm,
.proc = bfin_rtc_proc,
+ .alarm_irq_enable = bfin_rtc_alarm_irq_enable,
};
static int __devinit bfin_rtc_probe(struct platform_device *pdev)
return err;
}
+#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
+/*
+ * Routine to poll RTC seconds field for change as often as possible,
+ * after the first RTC_UIE, use a timer to reduce polling
+ */
+static void rtc_uie_task(struct work_struct *work)
+{
+ struct rtc_device *rtc =
+ container_of(work, struct rtc_device, uie_task);
+ struct rtc_time tm;
+ int num = 0;
+ int err;
+
+ err = rtc_read_time(rtc, &tm);
+
+ spin_lock_irq(&rtc->irq_lock);
+ if (rtc->stop_uie_polling || err) {
+ rtc->uie_task_active = 0;
+ } else if (rtc->oldsecs != tm.tm_sec) {
+ num = (tm.tm_sec + 60 - rtc->oldsecs) % 60;
+ rtc->oldsecs = tm.tm_sec;
+ rtc->uie_timer.expires = jiffies + HZ - (HZ/10);
+ rtc->uie_timer_active = 1;
+ rtc->uie_task_active = 0;
+ add_timer(&rtc->uie_timer);
+ } else if (schedule_work(&rtc->uie_task) == 0) {
+ rtc->uie_task_active = 0;
+ }
+ spin_unlock_irq(&rtc->irq_lock);
+ if (num)
+ rtc_handle_legacy_irq(rtc, num, RTC_UF);
+}
+static void rtc_uie_timer(unsigned long data)
+{
+ struct rtc_device *rtc = (struct rtc_device *)data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rtc->irq_lock, flags);
+ rtc->uie_timer_active = 0;
+ rtc->uie_task_active = 1;
+ if ((schedule_work(&rtc->uie_task) == 0))
+ rtc->uie_task_active = 0;
+ spin_unlock_irqrestore(&rtc->irq_lock, flags);
+}
+
+static int clear_uie(struct rtc_device *rtc)
+{
+ spin_lock_irq(&rtc->irq_lock);
+ if (rtc->uie_irq_active) {
+ rtc->stop_uie_polling = 1;
+ if (rtc->uie_timer_active) {
+ spin_unlock_irq(&rtc->irq_lock);
+ del_timer_sync(&rtc->uie_timer);
+ spin_lock_irq(&rtc->irq_lock);
+ rtc->uie_timer_active = 0;
+ }
+ if (rtc->uie_task_active) {
+ spin_unlock_irq(&rtc->irq_lock);
+ flush_scheduled_work();
+ spin_lock_irq(&rtc->irq_lock);
+ }
+ rtc->uie_irq_active = 0;
+ }
+ spin_unlock_irq(&rtc->irq_lock);
+ return 0;
+}
+
+static int set_uie(struct rtc_device *rtc)
+{
+ struct rtc_time tm;
+ int err;
+
+ err = rtc_read_time(rtc, &tm);
+ if (err)
+ return err;
+ spin_lock_irq(&rtc->irq_lock);
+ if (!rtc->uie_irq_active) {
+ rtc->uie_irq_active = 1;
+ rtc->stop_uie_polling = 0;
+ rtc->oldsecs = tm.tm_sec;
+ rtc->uie_task_active = 1;
+ if (schedule_work(&rtc->uie_task) == 0)
+ rtc->uie_task_active = 0;
+ }
+ rtc->irq_data = 0;
+ spin_unlock_irq(&rtc->irq_lock);
+ return 0;
+}
+
+int rtc_dev_update_irq_enable_emul(struct rtc_device *rtc, unsigned int enabled)
+{
+ if (enabled)
+ return set_uie(rtc);
+ else
+ return clear_uie(rtc);
+}
+EXPORT_SYMBOL(rtc_dev_update_irq_enable_emul);
+
+#endif /* CONFIG_RTC_INTF_DEV_UIE_EMUL */
static ssize_t
rtc_dev_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
if (err)
goto done;
- /* try the driver's ioctl interface */
- if (ops->ioctl) {
- err = ops->ioctl(rtc->dev.parent, cmd, arg);
- if (err != -ENOIOCTLCMD) {
- mutex_unlock(&rtc->ops_lock);
- return err;
- }
- }
-
- /* if the driver does not provide the ioctl interface
- * or if that particular ioctl was not implemented
- * (-ENOIOCTLCMD), we will try to emulate here.
- *
+ /*
* Drivers *SHOULD NOT* provide ioctl implementations
* for these requests. Instead, provide methods to
* support the following code, so that the RTC's main
return err;
default:
- err = -ENOTTY;
+ /* Finally try the driver's ioctl interface */
+ if (ops->ioctl) {
+ err = ops->ioctl(rtc->dev.parent, cmd, arg);
+ if (err == -ENOIOCTLCMD)
+ err = -ENOTTY;
+ }
break;
}
rtc->dev.devt = MKDEV(MAJOR(rtc_devt), rtc->id);
+#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
+ INIT_WORK(&rtc->uie_task, rtc_uie_task);
+ setup_timer(&rtc->uie_timer, rtc_uie_timer, (unsigned long)rtc);
+#endif
+
cdev_init(&rtc->char_dev, &rtc_dev_fops);
rtc->char_dev.owner = rtc->owner;
}
__raw_writel(data, &priv->rtcregs[reg]);
}
+
+static int ds1286_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ struct ds1286_priv *priv = dev_get_drvdata(dev);
+ unsigned long flags;
+ unsigned char val;
+
+ /* Allow or mask alarm interrupts */
+ spin_lock_irqsave(&priv->lock, flags);
+ val = ds1286_rtc_read(priv, RTC_CMD);
+ if (enabled)
+ val &= ~RTC_TDM;
+ else
+ val |= RTC_TDM;
+ ds1286_rtc_write(priv, val, RTC_CMD);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return 0;
+}
+
#ifdef CONFIG_RTC_INTF_DEV
static int ds1286_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
unsigned char val;
switch (cmd) {
- case RTC_AIE_OFF:
- /* Mask alarm int. enab. bit */
- spin_lock_irqsave(&priv->lock, flags);
- val = ds1286_rtc_read(priv, RTC_CMD);
- val |= RTC_TDM;
- ds1286_rtc_write(priv, val, RTC_CMD);
- spin_unlock_irqrestore(&priv->lock, flags);
- break;
- case RTC_AIE_ON:
- /* Allow alarm interrupts. */
- spin_lock_irqsave(&priv->lock, flags);
- val = ds1286_rtc_read(priv, RTC_CMD);
- val &= ~RTC_TDM;
- ds1286_rtc_write(priv, val, RTC_CMD);
- spin_unlock_irqrestore(&priv->lock, flags);
- break;
case RTC_WIE_OFF:
/* Mask watchdog int. enab. bit */
spin_lock_irqsave(&priv->lock, flags);
}
static const struct rtc_class_ops ds1286_ops = {
- .ioctl = ds1286_ioctl,
- .proc = ds1286_proc,
+ .ioctl = ds1286_ioctl,
+ .proc = ds1286_proc,
.read_time = ds1286_read_time,
.set_time = ds1286_set_time,
.read_alarm = ds1286_read_alarm,
.set_alarm = ds1286_set_alarm,
+ .alarm_irq_enable = ds1286_alarm_irq_enable,
};
static int __devinit ds1286_probe(struct platform_device *pdev)
* Interface to RTC framework
*/
-#ifdef CONFIG_RTC_INTF_DEV
-
-/*
- * Context: caller holds rtc->ops_lock (to protect ds1305->ctrl)
- */
-static int ds1305_ioctl(struct device *dev, unsigned cmd, unsigned long arg)
+static int ds1305_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct ds1305 *ds1305 = dev_get_drvdata(dev);
u8 buf[2];
- int status = -ENOIOCTLCMD;
+ long err = -EINVAL;
buf[0] = DS1305_WRITE | DS1305_CONTROL;
buf[1] = ds1305->ctrl[0];
- switch (cmd) {
- case RTC_AIE_OFF:
- status = 0;
- if (!(buf[1] & DS1305_AEI0))
- goto done;
- buf[1] &= ~DS1305_AEI0;
- break;
-
- case RTC_AIE_ON:
- status = 0;
+ if (enabled) {
if (ds1305->ctrl[0] & DS1305_AEI0)
goto done;
buf[1] |= DS1305_AEI0;
- break;
- }
- if (status == 0) {
- status = spi_write_then_read(ds1305->spi, buf, sizeof buf,
- NULL, 0);
- if (status >= 0)
- ds1305->ctrl[0] = buf[1];
+ } else {
+ if (!(buf[1] & DS1305_AEI0))
+ goto done;
+ buf[1] &= ~DS1305_AEI0;
}
-
+ err = spi_write_then_read(ds1305->spi, buf, sizeof buf, NULL, 0);
+ if (err >= 0)
+ ds1305->ctrl[0] = buf[1];
done:
- return status;
+ return err;
+
}
-#else
-#define ds1305_ioctl NULL
-#endif
/*
* Get/set of date and time is pretty normal.
#endif
static const struct rtc_class_ops ds1305_ops = {
- .ioctl = ds1305_ioctl,
.read_time = ds1305_get_time,
.set_time = ds1305_set_time,
.read_alarm = ds1305_get_alarm,
.set_alarm = ds1305_set_alarm,
.proc = ds1305_proc,
+ .alarm_irq_enable = ds1305_alarm_irq_enable,
};
static void ds1305_work(struct work_struct *work)
return 0;
}
-static int ds1307_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+static int ds1307_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct i2c_client *client = to_i2c_client(dev);
struct ds1307 *ds1307 = i2c_get_clientdata(client);
int ret;
- switch (cmd) {
- case RTC_AIE_OFF:
- if (!test_bit(HAS_ALARM, &ds1307->flags))
- return -ENOTTY;
-
- ret = i2c_smbus_read_byte_data(client, DS1337_REG_CONTROL);
- if (ret < 0)
- return ret;
-
- ret &= ~DS1337_BIT_A1IE;
-
- ret = i2c_smbus_write_byte_data(client,
- DS1337_REG_CONTROL, ret);
- if (ret < 0)
- return ret;
-
- break;
-
- case RTC_AIE_ON:
- if (!test_bit(HAS_ALARM, &ds1307->flags))
- return -ENOTTY;
+ if (!test_bit(HAS_ALARM, &ds1307->flags))
+ return -ENOTTY;
- ret = i2c_smbus_read_byte_data(client, DS1337_REG_CONTROL);
- if (ret < 0)
- return ret;
+ ret = i2c_smbus_read_byte_data(client, DS1337_REG_CONTROL);
+ if (ret < 0)
+ return ret;
+ if (enabled)
ret |= DS1337_BIT_A1IE;
+ else
+ ret &= ~DS1337_BIT_A1IE;
- ret = i2c_smbus_write_byte_data(client,
- DS1337_REG_CONTROL, ret);
- if (ret < 0)
- return ret;
-
- break;
-
- default:
- return -ENOIOCTLCMD;
- }
+ ret = i2c_smbus_write_byte_data(client, DS1337_REG_CONTROL, ret);
+ if (ret < 0)
+ return ret;
return 0;
}
.set_time = ds1307_set_time,
.read_alarm = ds1337_read_alarm,
.set_alarm = ds1337_set_alarm,
- .ioctl = ds1307_ioctl,
+ .alarm_irq_enable = ds1307_alarm_irq_enable,
};
/*----------------------------------------------------------------------*/
mutex_unlock(&ds1374->mutex);
}
-static int ds1374_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+static int ds1374_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct i2c_client *client = to_i2c_client(dev);
struct ds1374 *ds1374 = i2c_get_clientdata(client);
- int ret = -ENOIOCTLCMD;
+ int ret;
mutex_lock(&ds1374->mutex);
- switch (cmd) {
- case RTC_AIE_OFF:
- ret = i2c_smbus_read_byte_data(client, DS1374_REG_CR);
- if (ret < 0)
- goto out;
-
- ret &= ~DS1374_REG_CR_WACE;
-
- ret = i2c_smbus_write_byte_data(client, DS1374_REG_CR, ret);
- if (ret < 0)
- goto out;
-
- break;
-
- case RTC_AIE_ON:
- ret = i2c_smbus_read_byte_data(client, DS1374_REG_CR);
- if (ret < 0)
- goto out;
+ ret = i2c_smbus_read_byte_data(client, DS1374_REG_CR);
+ if (ret < 0)
+ goto out;
+ if (enabled) {
ret |= DS1374_REG_CR_WACE | DS1374_REG_CR_AIE;
ret &= ~DS1374_REG_CR_WDALM;
-
- ret = i2c_smbus_write_byte_data(client, DS1374_REG_CR, ret);
- if (ret < 0)
- goto out;
-
- break;
+ } else {
+ ret &= ~DS1374_REG_CR_WACE;
}
+ ret = i2c_smbus_write_byte_data(client, DS1374_REG_CR, ret);
out:
mutex_unlock(&ds1374->mutex);
.set_time = ds1374_set_time,
.read_alarm = ds1374_read_alarm,
.set_alarm = ds1374_set_alarm,
- .ioctl = ds1374_ioctl,
+ .alarm_irq_enable = ds1374_alarm_irq_enable,
};
static int ds1374_probe(struct i2c_client *client,
return m41t80_set_datetime(to_i2c_client(dev), tm);
}
-#if defined(CONFIG_RTC_INTF_DEV) || defined(CONFIG_RTC_INTF_DEV_MODULE)
-static int
-m41t80_rtc_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+static int m41t80_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct i2c_client *client = to_i2c_client(dev);
int rc;
- switch (cmd) {
- case RTC_AIE_OFF:
- case RTC_AIE_ON:
- break;
- default:
- return -ENOIOCTLCMD;
- }
-
rc = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_MON);
if (rc < 0)
goto err;
- switch (cmd) {
- case RTC_AIE_OFF:
- rc &= ~M41T80_ALMON_AFE;
- break;
- case RTC_AIE_ON:
+
+ if (enabled)
rc |= M41T80_ALMON_AFE;
- break;
- }
+ else
+ rc &= ~M41T80_ALMON_AFE;
+
if (i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, rc) < 0)
goto err;
+
return 0;
err:
return -EIO;
}
-#else
-#define m41t80_rtc_ioctl NULL
-#endif
static int m41t80_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *t)
{
.read_alarm = m41t80_rtc_read_alarm,
.set_alarm = m41t80_rtc_set_alarm,
.proc = m41t80_rtc_proc,
- .ioctl = m41t80_rtc_ioctl,
+ .alarm_irq_enable = m41t80_rtc_alarm_irq_enable,
};
#if defined(CONFIG_RTC_INTF_SYSFS) || defined(CONFIG_RTC_INTF_SYSFS_MODULE)
/*
* Handle commands from user-space
*/
-static int m48t59_rtc_ioctl(struct device *dev, unsigned int cmd,
- unsigned long arg)
+static int m48t59_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct platform_device *pdev = to_platform_device(dev);
struct m48t59_plat_data *pdata = pdev->dev.platform_data;
struct m48t59_private *m48t59 = platform_get_drvdata(pdev);
unsigned long flags;
- int ret = 0;
spin_lock_irqsave(&m48t59->lock, flags);
- switch (cmd) {
- case RTC_AIE_OFF: /* alarm interrupt off */
- M48T59_WRITE(0x00, M48T59_INTR);
- break;
- case RTC_AIE_ON: /* alarm interrupt on */
+ if (enabled)
M48T59_WRITE(M48T59_INTR_AFE, M48T59_INTR);
- break;
- default:
- ret = -ENOIOCTLCMD;
- break;
- }
+ else
+ M48T59_WRITE(0x00, M48T59_INTR);
spin_unlock_irqrestore(&m48t59->lock, flags);
- return ret;
+ return 0;
}
static int m48t59_rtc_proc(struct device *dev, struct seq_file *seq)
}
static const struct rtc_class_ops m48t59_rtc_ops = {
- .ioctl = m48t59_rtc_ioctl,
.read_time = m48t59_rtc_read_time,
.set_time = m48t59_rtc_set_time,
.read_alarm = m48t59_rtc_readalarm,
.set_alarm = m48t59_rtc_setalarm,
.proc = m48t59_rtc_proc,
+ .alarm_irq_enable = m48t59_rtc_alarm_irq_enable,
};
static const struct rtc_class_ops m48t02_rtc_ops = {
return 0;
}
-#if defined(CONFIG_RTC_INTF_DEV) || defined(CONFIG_RTC_INTF_DEV_MODULE)
-
/* Currently, the vRTC doesn't support UIE ON/OFF */
-static int
-mrst_rtc_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+static int mrst_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned long flags;
- switch (cmd) {
- case RTC_AIE_OFF:
- case RTC_AIE_ON:
- if (!mrst->irq)
- return -EINVAL;
- break;
- default:
- /* PIE ON/OFF is handled by mrst_irq_set_state() */
- return -ENOIOCTLCMD;
- }
-
spin_lock_irqsave(&rtc_lock, flags);
- switch (cmd) {
- case RTC_AIE_OFF: /* alarm off */
- mrst_irq_disable(mrst, RTC_AIE);
- break;
- case RTC_AIE_ON: /* alarm on */
+ if (enabled)
mrst_irq_enable(mrst, RTC_AIE);
- break;
- }
+ else
+ mrst_irq_disable(mrst, RTC_AIE);
spin_unlock_irqrestore(&rtc_lock, flags);
return 0;
}
-#else
-#define mrst_rtc_ioctl NULL
-#endif
#if defined(CONFIG_RTC_INTF_PROC) || defined(CONFIG_RTC_INTF_PROC_MODULE)
#endif
static const struct rtc_class_ops mrst_rtc_ops = {
- .ioctl = mrst_rtc_ioctl,
.read_time = mrst_read_time,
.set_time = mrst_set_time,
.read_alarm = mrst_read_alarm,
.set_alarm = mrst_set_alarm,
.proc = mrst_procfs,
.irq_set_state = mrst_irq_set_state,
+ .alarm_irq_enable = mrst_rtc_alarm_irq_enable,
};
static struct mrst_rtc mrst_rtc;
static inline void msm6242_write(struct msm6242_priv *priv, unsigned int val,
unsigned int reg)
{
- return __raw_writel(val, &priv->regs[reg]);
+ __raw_writel(val, &priv->regs[reg]);
}
static inline void msm6242_set(struct msm6242_priv *priv, unsigned int val,
return 0;
}
-static int mv_rtc_ioctl(struct device *dev, unsigned int cmd,
- unsigned long arg)
+static int mv_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct platform_device *pdev = to_platform_device(dev);
struct rtc_plat_data *pdata = platform_get_drvdata(pdev);
void __iomem *ioaddr = pdata->ioaddr;
if (pdata->irq < 0)
- return -ENOIOCTLCMD; /* fall back into rtc-dev's emulation */
- switch (cmd) {
- case RTC_AIE_OFF:
- writel(0, ioaddr + RTC_ALARM_INTERRUPT_MASK_REG_OFFS);
- break;
- case RTC_AIE_ON:
+ return -EINVAL; /* fall back into rtc-dev's emulation */
+
+ if (enabled)
writel(1, ioaddr + RTC_ALARM_INTERRUPT_MASK_REG_OFFS);
- break;
- default:
- return -ENOIOCTLCMD;
- }
+ else
+ writel(0, ioaddr + RTC_ALARM_INTERRUPT_MASK_REG_OFFS);
return 0;
}
.set_time = mv_rtc_set_time,
.read_alarm = mv_rtc_read_alarm,
.set_alarm = mv_rtc_set_alarm,
- .ioctl = mv_rtc_ioctl,
+ .alarm_irq_enable = mv_rtc_alarm_irq_enable,
};
static int __devinit mv_rtc_probe(struct platform_device *pdev)
u8 reg;
switch (cmd) {
- case RTC_AIE_OFF:
- case RTC_AIE_ON:
case RTC_UIE_OFF:
case RTC_UIE_ON:
break;
rtc_wait_not_busy();
reg = rtc_read(OMAP_RTC_INTERRUPTS_REG);
switch (cmd) {
- /* AIE = Alarm Interrupt Enable */
- case RTC_AIE_OFF:
- reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM;
- break;
- case RTC_AIE_ON:
- reg |= OMAP_RTC_INTERRUPTS_IT_ALARM;
- break;
/* UIE = Update Interrupt Enable (1/second) */
case RTC_UIE_OFF:
reg &= ~OMAP_RTC_INTERRUPTS_IT_TIMER;
#define omap_rtc_ioctl NULL
#endif
+static int omap_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ u8 reg;
+
+ local_irq_disable();
+ rtc_wait_not_busy();
+ reg = rtc_read(OMAP_RTC_INTERRUPTS_REG);
+ if (enabled)
+ reg |= OMAP_RTC_INTERRUPTS_IT_ALARM;
+ else
+ reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM;
+ rtc_wait_not_busy();
+ rtc_write(reg, OMAP_RTC_INTERRUPTS_REG);
+ local_irq_enable();
+
+ return 0;
+}
+
/* this hardware doesn't support "don't care" alarm fields */
static int tm2bcd(struct rtc_time *tm)
{
.set_time = omap_rtc_set_time,
.read_alarm = omap_rtc_read_alarm,
.set_alarm = omap_rtc_set_alarm,
+ .alarm_irq_enable = omap_rtc_alarm_irq_enable,
};
static int omap_rtc_alarm;
static int rtc_proc_open(struct inode *inode, struct file *file)
{
+ int ret;
struct rtc_device *rtc = PDE(inode)->data;
if (!try_module_get(THIS_MODULE))
return -ENODEV;
- return single_open(file, rtc_proc_show, rtc);
+ ret = single_open(file, rtc_proc_show, rtc);
+ if (ret)
+ module_put(THIS_MODULE);
+ return ret;
}
static int rtc_proc_release(struct inode *inode, struct file *file)
static inline void rp5c01_write(struct rp5c01_priv *priv, unsigned int val,
unsigned int reg)
{
- return __raw_writel(val, &priv->regs[reg]);
+ __raw_writel(val, &priv->regs[reg]);
}
static void rp5c01_lock(struct rp5c01_priv *priv)
if (rs5c->type == rtc_rs5c372a
&& (buf & RS5C372A_CTRL1_SL1))
return -ENOIOCTLCMD;
- case RTC_AIE_OFF:
- case RTC_AIE_ON:
- /* these irq management calls only make sense for chips
- * which are wired up to an IRQ.
- */
- if (!rs5c->has_irq)
- return -ENOIOCTLCMD;
- break;
default:
return -ENOIOCTLCMD;
}
addr = RS5C_ADDR(RS5C_REG_CTRL1);
switch (cmd) {
- case RTC_AIE_OFF: /* alarm off */
- buf &= ~RS5C_CTRL1_AALE;
- break;
- case RTC_AIE_ON: /* alarm on */
- buf |= RS5C_CTRL1_AALE;
- break;
case RTC_UIE_OFF: /* update off */
buf &= ~RS5C_CTRL1_CT_MASK;
break;
#endif
+static int rs5c_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rs5c372 *rs5c = i2c_get_clientdata(client);
+ unsigned char buf;
+ int status, addr;
+
+ buf = rs5c->regs[RS5C_REG_CTRL1];
+
+ if (!rs5c->has_irq)
+ return -EINVAL;
+
+ status = rs5c_get_regs(rs5c);
+ if (status < 0)
+ return status;
+
+ addr = RS5C_ADDR(RS5C_REG_CTRL1);
+ if (enabled)
+ buf |= RS5C_CTRL1_AALE;
+ else
+ buf &= ~RS5C_CTRL1_AALE;
+
+ if (i2c_smbus_write_byte_data(client, addr, buf) < 0) {
+ printk(KERN_WARNING "%s: can't update alarm\n",
+ rs5c->rtc->name);
+ status = -EIO;
+ } else
+ rs5c->regs[RS5C_REG_CTRL1] = buf;
+
+ return status;
+}
+
+
/* NOTE: Since RTC_WKALM_{RD,SET} were originally defined for EFI,
* which only exposes a polled programming interface; and since
* these calls map directly to those EFI requests; we don't demand
.set_time = rs5c372_rtc_set_time,
.read_alarm = rs5c_read_alarm,
.set_alarm = rs5c_set_alarm,
+ .alarm_irq_enable = rs5c_rtc_alarm_irq_enable,
};
#if defined(CONFIG_RTC_INTF_SYSFS) || defined(CONFIG_RTC_INTF_SYSFS_MODULE)
unsigned long arg)
{
switch (cmd) {
- case RTC_AIE_OFF:
- spin_lock_irq(&sa1100_rtc_lock);
- RTSR &= ~RTSR_ALE;
- spin_unlock_irq(&sa1100_rtc_lock);
- return 0;
- case RTC_AIE_ON:
- spin_lock_irq(&sa1100_rtc_lock);
- RTSR |= RTSR_ALE;
- spin_unlock_irq(&sa1100_rtc_lock);
- return 0;
case RTC_UIE_OFF:
spin_lock_irq(&sa1100_rtc_lock);
RTSR &= ~RTSR_HZE;
return -ENOIOCTLCMD;
}
+static int sa1100_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ spin_lock_irq(&sa1100_rtc_lock);
+ if (enabled)
+ RTSR |= RTSR_ALE;
+ else
+ RTSR &= ~RTSR_ALE;
+ spin_unlock_irq(&sa1100_rtc_lock);
+ return 0;
+}
+
static int sa1100_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
rtc_time_to_tm(RCNR, tm);
.proc = sa1100_rtc_proc,
.irq_set_freq = sa1100_irq_set_freq,
.irq_set_state = sa1100_irq_set_state,
+ .alarm_irq_enable = sa1100_rtc_alarm_irq_enable,
};
static int sa1100_rtc_probe(struct platform_device *pdev)
unsigned int ret = 0;
switch (cmd) {
- case RTC_AIE_OFF:
- case RTC_AIE_ON:
- sh_rtc_setaie(dev, cmd == RTC_AIE_ON);
- break;
case RTC_UIE_OFF:
rtc->periodic_freq &= ~PF_OXS;
sh_rtc_setcie(dev, 0);
return ret;
}
+static int sh_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ sh_rtc_setaie(dev, enabled);
+ return 0;
+}
+
static int sh_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
struct platform_device *pdev = to_platform_device(dev);
.irq_set_state = sh_rtc_irq_set_state,
.irq_set_freq = sh_rtc_irq_set_freq,
.proc = sh_rtc_proc,
+ .alarm_irq_enable = sh_rtc_alarm_irq_enable,
};
static int __init sh_rtc_probe(struct platform_device *pdev)
return 0;
}
-static int test_rtc_ioctl(struct device *dev, unsigned int cmd,
- unsigned long arg)
+static int test_rtc_alarm_irq_enable(struct device *dev, unsigned int enable)
{
- /* We do support interrupts, they're generated
- * using the sysfs interface.
- */
- switch (cmd) {
- case RTC_PIE_ON:
- case RTC_PIE_OFF:
- case RTC_UIE_ON:
- case RTC_UIE_OFF:
- case RTC_AIE_ON:
- case RTC_AIE_OFF:
- return 0;
-
- default:
- return -ENOIOCTLCMD;
- }
+ return 0;
}
static const struct rtc_class_ops test_rtc_ops = {
.read_alarm = test_rtc_read_alarm,
.set_alarm = test_rtc_set_alarm,
.set_mmss = test_rtc_set_mmss,
- .ioctl = test_rtc_ioctl,
+ .alarm_irq_enable = test_rtc_alarm_irq_enable,
};
static ssize_t test_irq_show(struct device *dev,
static int vr41xx_rtc_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
- case RTC_AIE_ON:
- spin_lock_irq(&rtc_lock);
-
- if (!alarm_enabled) {
- enable_irq(aie_irq);
- alarm_enabled = 1;
- }
-
- spin_unlock_irq(&rtc_lock);
- break;
- case RTC_AIE_OFF:
- spin_lock_irq(&rtc_lock);
-
- if (alarm_enabled) {
- disable_irq(aie_irq);
- alarm_enabled = 0;
- }
-
- spin_unlock_irq(&rtc_lock);
- break;
case RTC_EPOCH_READ:
return put_user(epoch, (unsigned long __user *)arg);
case RTC_EPOCH_SET:
return 0;
}
+static int vr41xx_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
+{
+ spin_lock_irq(&rtc_lock);
+ if (enabled) {
+ if (!alarm_enabled) {
+ enable_irq(aie_irq);
+ alarm_enabled = 1;
+ }
+ } else {
+ if (alarm_enabled) {
+ disable_irq(aie_irq);
+ alarm_enabled = 0;
+ }
+ }
+ spin_unlock_irq(&rtc_lock);
+ return 0;
+}
+
static irqreturn_t elapsedtime_interrupt(int irq, void *dev_id)
{
struct platform_device *pdev = (struct platform_device *)dev_id;
private = (struct dasd_eckd_private *) device->private;
lcu = private->lcu;
+ /* nothing to do if already disconnected */
+ if (!lcu)
+ return;
device->discipline->get_uid(device, &uid);
spin_lock_irqsave(&lcu->lock, flags);
list_del_init(&device->alias_list);
private = (struct dasd_eckd_private *) device->private;
lcu = private->lcu;
+ /* nothing to do if already removed */
+ if (!lcu)
+ return 0;
spin_lock_irqsave(&lcu->lock, flags);
_remove_device_from_lcu(lcu, device);
spin_unlock_irqrestore(&lcu->lock, flags);
static struct ccw_device_id dasd_eckd_ids[] = {
{ CCW_DEVICE_DEVTYPE (0x3990, 0, 0x3390, 0), .driver_info = 0x1},
{ CCW_DEVICE_DEVTYPE (0x2105, 0, 0x3390, 0), .driver_info = 0x2},
- { CCW_DEVICE_DEVTYPE (0x3880, 0, 0x3390, 0), .driver_info = 0x3},
+ { CCW_DEVICE_DEVTYPE (0x3880, 0, 0x3380, 0), .driver_info = 0x3},
{ CCW_DEVICE_DEVTYPE (0x3990, 0, 0x3380, 0), .driver_info = 0x4},
{ CCW_DEVICE_DEVTYPE (0x2105, 0, 0x3380, 0), .driver_info = 0x5},
{ CCW_DEVICE_DEVTYPE (0x9343, 0, 0x9345, 0), .driver_info = 0x6},
static int get_inbound_buffer_frontier(struct qdio_q *q)
{
int count, stop;
- unsigned char state;
+ unsigned char state = 0;
/*
* Don't check 128 buffers, as otherwise qdio_inbound_q_moved
static int get_outbound_buffer_frontier(struct qdio_q *q)
{
int count, stop;
- unsigned char state;
+ unsigned char state = 0;
if (need_siga_sync(q))
if (((queue_type(q) != QDIO_IQDIO_QFMT) &&
struct iucv_event ev;
int rc;
- if (memcmp(iucvMagic, ipuser, sizeof(ipuser)))
+ if (memcmp(iucvMagic, ipuser, 16))
/* ipuser must match iucvMagic. */
return -EINVAL;
rc = -EINVAL;
chp_dsc = (struct channelPath_dsc *)ccw_device_get_chp_desc(ccwdev, 0);
if (chp_dsc != NULL) {
/* CHPP field bit 6 == 1 -> single queue */
- if ((chp_dsc->chpp & 0x02) == 0x02)
+ if ((chp_dsc->chpp & 0x02) == 0x02) {
+ if ((atomic_read(&card->qdio.state) !=
+ QETH_QDIO_UNINITIALIZED) &&
+ (card->qdio.no_out_queues == 4))
+ /* change from 4 to 1 outbound queues */
+ qeth_free_qdio_buffers(card);
card->qdio.no_out_queues = 1;
+ if (card->qdio.default_out_queue != 0)
+ dev_info(&card->gdev->dev,
+ "Priority Queueing not supported\n");
+ card->qdio.default_out_queue = 0;
+ } else {
+ if ((atomic_read(&card->qdio.state) !=
+ QETH_QDIO_UNINITIALIZED) &&
+ (card->qdio.no_out_queues == 1)) {
+ /* change from 1 to 4 outbound queues */
+ qeth_free_qdio_buffers(card);
+ card->qdio.default_out_queue = 2;
+ }
+ card->qdio.no_out_queues = 4;
+ }
card->info.func_level = 0x4100 + chp_dsc->desc;
kfree(chp_dsc);
}
- if (card->qdio.no_out_queues == 1) {
- card->qdio.default_out_queue = 0;
- dev_info(&card->gdev->dev,
- "Priority Queueing not supported\n");
- }
QETH_DBF_TEXT_(SETUP, 2, "nr:%x", card->qdio.no_out_queues);
QETH_DBF_TEXT_(SETUP, 2, "lvl:%02x", card->info.func_level);
return;
}
}
-static inline int qeth_get_max_mtu_for_card(int cardtype)
-{
- switch (cardtype) {
-
- case QETH_CARD_TYPE_UNKNOWN:
- case QETH_CARD_TYPE_OSD:
- case QETH_CARD_TYPE_OSN:
- case QETH_CARD_TYPE_OSM:
- case QETH_CARD_TYPE_OSX:
- return 61440;
- case QETH_CARD_TYPE_IQD:
- return 57344;
- default:
- return 1500;
- }
-}
-
-static inline int qeth_get_mtu_out_of_mpc(int cardtype)
-{
- switch (cardtype) {
- case QETH_CARD_TYPE_IQD:
- return 1;
- default:
- return 0;
- }
-}
-
static inline int qeth_get_mtu_outof_framesize(int framesize)
{
switch (framesize) {
case QETH_CARD_TYPE_OSD:
case QETH_CARD_TYPE_OSM:
case QETH_CARD_TYPE_OSX:
- return ((mtu >= 576) && (mtu <= 61440));
case QETH_CARD_TYPE_IQD:
return ((mtu >= 576) &&
- (mtu <= card->info.max_mtu + 4096 - 32));
+ (mtu <= card->info.max_mtu));
case QETH_CARD_TYPE_OSN:
case QETH_CARD_TYPE_UNKNOWN:
default:
memcpy(&card->token.ulp_filter_r,
QETH_ULP_ENABLE_RESP_FILTER_TOKEN(iob->data),
QETH_MPC_TOKEN_LENGTH);
- if (qeth_get_mtu_out_of_mpc(card->info.type)) {
+ if (card->info.type == QETH_CARD_TYPE_IQD) {
memcpy(&framesize, QETH_ULP_ENABLE_RESP_MAX_MTU(iob->data), 2);
mtu = qeth_get_mtu_outof_framesize(framesize);
if (!mtu) {
QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc);
return 0;
}
- card->info.max_mtu = mtu;
+ if (card->info.initial_mtu && (card->info.initial_mtu != mtu)) {
+ /* frame size has changed */
+ if (card->dev &&
+ ((card->dev->mtu == card->info.initial_mtu) ||
+ (card->dev->mtu > mtu)))
+ card->dev->mtu = mtu;
+ qeth_free_qdio_buffers(card);
+ }
card->info.initial_mtu = mtu;
+ card->info.max_mtu = mtu;
card->qdio.in_buf_size = mtu + 2 * PAGE_SIZE;
} else {
card->info.initial_mtu = qeth_get_initial_mtu_for_card(card);
- card->info.max_mtu = qeth_get_max_mtu_for_card(card->info.type);
+ card->info.max_mtu = *(__u16 *)QETH_ULP_ENABLE_RESP_MAX_MTU(
+ iob->data);
card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT;
}
}
}
+static void qeth_determine_capabilities(struct qeth_card *card)
+{
+ int rc;
+ int length;
+ char *prcd;
+ struct ccw_device *ddev;
+ int ddev_offline = 0;
+
+ QETH_DBF_TEXT(SETUP, 2, "detcapab");
+ ddev = CARD_DDEV(card);
+ if (!ddev->online) {
+ ddev_offline = 1;
+ rc = ccw_device_set_online(ddev);
+ if (rc) {
+ QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ goto out;
+ }
+ }
+
+ rc = qeth_read_conf_data(card, (void **) &prcd, &length);
+ if (rc) {
+ QETH_DBF_MESSAGE(2, "%s qeth_read_conf_data returned %i\n",
+ dev_name(&card->gdev->dev), rc);
+ QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ goto out_offline;
+ }
+ qeth_configure_unitaddr(card, prcd);
+ qeth_configure_blkt_default(card, prcd);
+ kfree(prcd);
+
+ rc = qdio_get_ssqd_desc(ddev, &card->ssqd);
+ if (rc)
+ QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+
+out_offline:
+ if (ddev_offline == 1)
+ ccw_device_set_offline(ddev);
+out:
+ return;
+}
+
static int qeth_qdio_establish(struct qeth_card *card)
{
struct qdio_initialize init_data;
QETH_DBF_TEXT(SETUP, 2, "hrdsetup");
atomic_set(&card->force_alloc_skb, 0);
+ qeth_get_channel_path_desc(card);
retry:
if (retries)
QETH_DBF_MESSAGE(2, "%s Retrying to do IDX activates.\n",
else
goto retry;
}
+ qeth_determine_capabilities(card);
qeth_init_tokens(card);
qeth_init_func_level(card);
rc = qeth_idx_activate_channel(&card->read, qeth_idx_read_cb);
card->discipline.ccwgdriver = NULL;
}
-static void qeth_determine_capabilities(struct qeth_card *card)
-{
- int rc;
- int length;
- char *prcd;
-
- QETH_DBF_TEXT(SETUP, 2, "detcapab");
- rc = ccw_device_set_online(CARD_DDEV(card));
- if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
- goto out;
- }
-
-
- rc = qeth_read_conf_data(card, (void **) &prcd, &length);
- if (rc) {
- QETH_DBF_MESSAGE(2, "%s qeth_read_conf_data returned %i\n",
- dev_name(&card->gdev->dev), rc);
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
- goto out_offline;
- }
- qeth_configure_unitaddr(card, prcd);
- qeth_configure_blkt_default(card, prcd);
- kfree(prcd);
-
- rc = qdio_get_ssqd_desc(CARD_DDEV(card), &card->ssqd);
- if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
-
-out_offline:
- ccw_device_set_offline(CARD_DDEV(card));
-out:
- return;
-}
-
static int qeth_core_probe_device(struct ccwgroup_device *gdev)
{
struct qeth_card *card;
case IPA_RC_L2_DUP_LAYER3_MAC:
dev_warn(&card->gdev->dev,
"MAC address %pM already exists\n",
- card->dev->dev_addr);
+ cmd->data.setdelmac.mac);
break;
case IPA_RC_L2_MAC_NOT_AUTH_BY_HYP:
case IPA_RC_L2_MAC_NOT_AUTH_BY_ADP:
dev_warn(&card->gdev->dev,
"MAC address %pM is not authorized\n",
- card->dev->dev_addr);
+ cmd->data.setdelmac.mac);
break;
default:
break;
return NETDEV_TX_OK;
}
-static int qeth_l2_open(struct net_device *dev)
+static int __qeth_l2_open(struct net_device *dev)
{
struct qeth_card *card = dev->ml_priv;
int rc = 0;
QETH_CARD_TEXT(card, 4, "qethopen");
+ if (card->state == CARD_STATE_UP)
+ return rc;
if (card->state != CARD_STATE_SOFTSETUP)
return -ENODEV;
return rc;
}
+static int qeth_l2_open(struct net_device *dev)
+{
+ struct qeth_card *card = dev->ml_priv;
+
+ QETH_CARD_TEXT(card, 5, "qethope_");
+ if (qeth_wait_for_threads(card, QETH_RECOVER_THREAD)) {
+ QETH_CARD_TEXT(card, 3, "openREC");
+ return -ERESTARTSYS;
+ }
+ return __qeth_l2_open(dev);
+}
+
static int qeth_l2_stop(struct net_device *dev)
{
struct qeth_card *card = dev->ml_priv;
if (recover_flag == CARD_STATE_RECOVER) {
if (recovery_mode &&
card->info.type != QETH_CARD_TYPE_OSN) {
- qeth_l2_open(card->dev);
+ __qeth_l2_open(card->dev);
} else {
rtnl_lock();
dev_open(card->dev);
*/
if (iph->protocol == IPPROTO_UDP)
hdr->hdr.l3.ext_flags |= QETH_HDR_EXT_UDP;
- hdr->hdr.l3.ext_flags |= QETH_HDR_EXT_CSUM_TRANSP_REQ;
+ hdr->hdr.l3.ext_flags |= QETH_HDR_EXT_CSUM_TRANSP_REQ |
+ QETH_HDR_EXT_CSUM_HDR_REQ;
+ iph->check = 0;
if (card->options.performance_stats)
card->perf_stats.tx_csum++;
}
return NETDEV_TX_OK;
}
-static int qeth_l3_open(struct net_device *dev)
+static int __qeth_l3_open(struct net_device *dev)
{
struct qeth_card *card = dev->ml_priv;
int rc = 0;
QETH_CARD_TEXT(card, 4, "qethopen");
+ if (card->state == CARD_STATE_UP)
+ return rc;
if (card->state != CARD_STATE_SOFTSETUP)
return -ENODEV;
card->data.state = CH_STATE_UP;
return rc;
}
+static int qeth_l3_open(struct net_device *dev)
+{
+ struct qeth_card *card = dev->ml_priv;
+
+ QETH_CARD_TEXT(card, 5, "qethope_");
+ if (qeth_wait_for_threads(card, QETH_RECOVER_THREAD)) {
+ QETH_CARD_TEXT(card, 3, "openREC");
+ return -ERESTARTSYS;
+ }
+ return __qeth_l3_open(dev);
+}
+
static int qeth_l3_stop(struct net_device *dev)
{
struct qeth_card *card = dev->ml_priv;
netif_carrier_off(card->dev);
if (recover_flag == CARD_STATE_RECOVER) {
if (recovery_mode)
- qeth_l3_open(card->dev);
+ __qeth_l3_open(card->dev);
else {
rtnl_lock();
dev_open(card->dev);
static int smsg_path_pending(struct iucv_path *path, u8 ipvmid[8],
u8 ipuser[16])
{
- if (strncmp(ipvmid, "*MSG ", sizeof(ipvmid)) != 0)
+ if (strncmp(ipvmid, "*MSG ", 8) != 0)
return -EINVAL;
/* Path pending from *MSG. */
return iucv_path_accept(path, &smsg_handler, "SMSGIUCV ", NULL);
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr.h
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: SCSI RAID Device Driver for
** ARECA RAID Host adapter
*******************************************************************************
struct device_attribute;
/*The limit of outstanding scsi command that firmware can handle*/
#define ARCMSR_MAX_OUTSTANDING_CMD 256
-#define ARCMSR_MAX_FREECCB_NUM 320
-#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.15 2010/02/02"
+#ifdef CONFIG_XEN
+ #define ARCMSR_MAX_FREECCB_NUM 160
+#else
+ #define ARCMSR_MAX_FREECCB_NUM 320
+#endif
+#define ARCMSR_DRIVER_VERSION "Driver Version 1.20.00.15 2010/08/05"
#define ARCMSR_SCSI_INITIATOR_ID 255
#define ARCMSR_MAX_XFER_SECTORS 512
#define ARCMSR_MAX_XFER_SECTORS_B 4096
#define ARCMSR_MAX_HBB_POSTQUEUE 264
#define ARCMSR_MAX_XFER_LEN 0x26000 /* 152K */
#define ARCMSR_CDB_SG_PAGE_LENGTH 256
-#define SCSI_CMD_ARECA_SPECIFIC 0xE1
#ifndef PCI_DEVICE_ID_ARECA_1880
#define PCI_DEVICE_ID_ARECA_1880 0x1880
#endif
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr_attr.c
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: attributes exported to sysfs and device host
*******************************************************************************
** Copyright (C) 2002 - 2005, Areca Technology Corporation All rights reserved
*******************************************************************************
** O.S : Linux
** FILE NAME : arcmsr_hba.c
-** BY : Erich Chen
+** BY : Nick Cheng
** Description: SCSI RAID Device Driver for
** ARECA RAID Host adapter
*******************************************************************************
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(ARCMSR_DRIVER_VERSION);
static int sleeptime = 10;
-static int retrycount = 30;
+static int retrycount = 12;
wait_queue_head_t wait_q;
static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
struct scsi_cmnd *cmd);
if (isleep > 0) {
msleep(isleep*1000);
}
- printk(KERN_NOTICE "wake-up\n");
return 0;
}
}
static void arcmsr_drain_donequeue(struct AdapterControlBlock *acb, struct CommandControlBlock *pCCB, bool error)
-
{
int id, lun;
if ((pCCB->acb != acb) || (pCCB->startdone != ARCMSR_CCB_START)) {
, pCCB->startdone
, atomic_read(&acb->ccboutstandingcount));
return;
- }
+ }
arcmsr_report_ccb_state(acb, pCCB, error);
}
case ACB_ADAPTER_TYPE_B: {
struct MessageUnit_B *reg = acb->pmuB;
/*clear all outbound posted Q*/
- writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, &reg->iop2drv_doorbell); /* clear doorbell interrupt */
+ writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, reg->iop2drv_doorbell); /* clear doorbell interrupt */
for (i = 0; i < ARCMSR_MAX_HBB_POSTQUEUE; i++) {
if ((flag_ccb = readl(&reg->done_qbuffer[i])) != 0) {
writel(0, &reg->done_qbuffer[i]);
arcmsr_drain_donequeue(acb, pCCB, error);
}
}
-
static void arcmsr_hbb_postqueue_isr(struct AdapterControlBlock *acb)
{
uint32_t index;
if (atomic_read(&acb->ccboutstandingcount) >=
ARCMSR_MAX_OUTSTANDING_CMD)
return SCSI_MLQUEUE_HOST_BUSY;
- if ((scsicmd == SCSI_CMD_ARECA_SPECIFIC)) {
- printk(KERN_NOTICE "Receiveing SCSI_CMD_ARECA_SPECIFIC command..\n");
- return 0;
- }
ccb = arcmsr_get_freeccb(acb);
if (!ccb)
return SCSI_MLQUEUE_HOST_BUSY;
int index, rtn;
bool error;
polling_hbb_ccb_retry:
+
poll_count++;
/* clear doorbell interrupt */
writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, reg->iop2drv_doorbell);
{
struct MessageUnit_A __iomem *reg = acb->pmuA;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0 ) || ((acb->acb_flags & ACB_F_ABORT) != 0 )){
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if (atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_INBOUND_MESG0_GET_CONFIG, &reg->inbound_msgaddr0);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
}
{
struct MessageUnit_B __iomem *reg = acb->pmuB;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0 ) || ((acb->acb_flags & ACB_F_ABORT) != 0 )){
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
if (atomic_read(&acb->ante_token_value) == atomic_read(&acb->rq_map_token)) {
- atomic_set(&acb->rq_map_token,16);
+ atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if(atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_MESSAGE_GET_CONFIG, reg->drv2iop_doorbell);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
}
{
struct MessageUnit_C __iomem *reg = acb->pmuC;
if (unlikely(atomic_read(&acb->rq_map_token) == 0) || ((acb->acb_flags & ACB_F_BUS_RESET) != 0) || ((acb->acb_flags & ACB_F_ABORT) != 0)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
} else {
acb->fw_flag = FW_NORMAL;
atomic_set(&acb->rq_map_token, 16);
}
atomic_set(&acb->ante_token_value, atomic_read(&acb->rq_map_token));
- if (atomic_dec_and_test(&acb->rq_map_token))
+ if (atomic_dec_and_test(&acb->rq_map_token)) {
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
return;
+ }
writel(ARCMSR_INBOUND_MESG0_GET_CONFIG, &reg->inbound_msgaddr0);
writel(ARCMSR_HBCMU_DRV2IOP_MESSAGE_CMD_DONE, &reg->inbound_doorbell);
mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
uint32_t intmask_org;
uint8_t rtnval = 0x00;
int i = 0;
+ unsigned long flags;
+
if (atomic_read(&acb->ccboutstandingcount) != 0) {
/* disable all outbound interrupt */
intmask_org = arcmsr_disable_outbound_ints(acb);
for (i = 0; i < ARCMSR_MAX_FREECCB_NUM; i++) {
ccb = acb->pccb_pool[i];
if (ccb->startdone == ARCMSR_CCB_START) {
- arcmsr_ccb_complete(ccb);
+ scsi_dma_unmap(ccb->pcmd);
+ ccb->startdone = ARCMSR_CCB_DONE;
+ ccb->ccb_flags = 0;
+ spin_lock_irqsave(&acb->ccblist_lock, flags);
+ list_add_tail(&ccb->list, &acb->ccb_free_list);
+ spin_unlock_irqrestore(&acb->ccblist_lock, flags);
}
}
atomic_set(&acb->ccboutstandingcount, 0);
static int arcmsr_bus_reset(struct scsi_cmnd *cmd)
{
- struct AdapterControlBlock *acb =
- (struct AdapterControlBlock *)cmd->device->host->hostdata;
+ struct AdapterControlBlock *acb;
uint32_t intmask_org, outbound_doorbell;
int retry_count = 0;
int rtn = FAILED;
atomic_set(&acb->rq_map_token, 16);
atomic_set(&acb->ante_token_value, 16);
acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
acb->acb_flags &= ~ACB_F_BUS_RESET;
rtn = SUCCESS;
printk(KERN_ERR "arcmsr: scsi bus reset eh returns with success\n");
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
rtn = SUCCESS;
}
break;
rtn = FAILED;
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
rtn = SUCCESS;
}
break;
atomic_set(&acb->rq_map_token, 16);
atomic_set(&acb->ante_token_value, 16);
acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6 * HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6 * HZ));
acb->acb_flags &= ~ACB_F_BUS_RESET;
rtn = SUCCESS;
printk(KERN_ERR "arcmsr: scsi bus reset eh returns with success\n");
} else {
acb->acb_flags &= ~ACB_F_BUS_RESET;
- if (atomic_read(&acb->rq_map_token) == 0) {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- init_timer(&acb->eternal_timer);
- acb->eternal_timer.expires = jiffies + msecs_to_jiffies(6*HZ);
- acb->eternal_timer.data = (unsigned long) acb;
- acb->eternal_timer.function = &arcmsr_request_device_map;
- add_timer(&acb->eternal_timer);
- } else {
- atomic_set(&acb->rq_map_token, 16);
- atomic_set(&acb->ante_token_value, 16);
- acb->fw_flag = FW_NORMAL;
- mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
- }
+ atomic_set(&acb->rq_map_token, 16);
+ atomic_set(&acb->ante_token_value, 16);
+ acb->fw_flag = FW_NORMAL;
+ mod_timer(&acb->eternal_timer, jiffies + msecs_to_jiffies(6*HZ));
rtn = SUCCESS;
}
break;
spin_lock_irqsave(shost->host_lock, flags);
list_splice_init(&shost->eh_cmd_q, &eh_work_q);
+ shost->host_eh_scheduled = 0;
spin_unlock_irqrestore(shost->host_lock, flags);
SAS_DPRINTK("Enter %s\n", __func__);
/* adjust hba_queue_depth, reply_free_queue_depth,
* and queue_size
*/
- ioc->hba_queue_depth -= queue_diff;
- ioc->reply_free_queue_depth -= queue_diff;
- queue_size -= queue_diff;
+ ioc->hba_queue_depth -= (queue_diff / 2);
+ ioc->reply_free_queue_depth -= (queue_diff / 2);
+ queue_size = facts->MaxReplyDescriptorPostQueueDepth;
}
ioc->reply_post_queue_depth = queue_size;
static void
_base_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase)
{
+ mpt2sas_scsih_reset_handler(ioc, reset_phase);
+ mpt2sas_ctl_reset_handler(ioc, reset_phase);
switch (reset_phase) {
case MPT2_IOC_PRE_RESET:
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: "
"MPT2_IOC_DONE_RESET\n", ioc->name, __func__));
break;
}
- mpt2sas_scsih_reset_handler(ioc, reset_phase);
- mpt2sas_ctl_reset_handler(ioc, reset_phase);
}
/**
{
int r;
unsigned long flags;
+ u8 pe_complete = ioc->wait_for_port_enable_to_complete;
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter\n", ioc->name,
__func__));
if (r)
goto out;
_base_reset_handler(ioc, MPT2_IOC_AFTER_RESET);
+
+ /* If this hard reset is called while port enable is active, then
+ * there is no reason to call make_ioc_operational
+ */
+ if (pe_complete) {
+ r = -EFAULT;
+ goto out;
+ }
r = _base_make_ioc_operational(ioc, sleep_flag);
if (!r)
_base_reset_handler(ioc, MPT2_IOC_DONE_RESET);
}
/**
- * mptscsih_get_scsi_lookup - returns scmd entry
+ * _scsih_scsi_lookup_get - returns scmd entry
* @ioc: per adapter object
* @smid: system request message index
*
return ioc->scsi_lookup[smid - 1].scmd;
}
+/**
+ * _scsih_scsi_lookup_get_clear - returns scmd entry
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns the smid stored scmd pointer.
+ * Then clears the stored scmd pointer.
+ */
+static inline struct scsi_cmnd *
+_scsih_scsi_lookup_get_clear(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+ unsigned long flags;
+ struct scsi_cmnd *scmd;
+
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+ scmd = ioc->scsi_lookup[smid - 1].scmd;
+ ioc->scsi_lookup[smid - 1].scmd = NULL;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+ return scmd;
+}
+
/**
* _scsih_scsi_lookup_find_by_scmd - scmd lookup
* @ioc: per adapter object
u16 handle;
for (i = 0 ; i < event_data->NumEntries; i++) {
- if (event_data->PHY[i].PhyStatus &
- MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT)
- continue;
handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
if (!handle)
continue;
u16 count = 0;
for (smid = 1; smid <= ioc->scsiio_depth; smid++) {
- scmd = _scsih_scsi_lookup_get(ioc, smid);
+ scmd = _scsih_scsi_lookup_get_clear(ioc, smid);
if (!scmd)
continue;
count++;
u32 response_code = 0;
mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
- scmd = _scsih_scsi_lookup_get(ioc, smid);
+ scmd = _scsih_scsi_lookup_get_clear(ioc, smid);
if (scmd == NULL)
return 1;
event_data);
#endif
+ /* In MPI Revision K (0xC), the internal device reset complete was
+ * implemented, so avoid setting tm_busy flag for older firmware.
+ */
+ if ((ioc->facts.HeaderVersion >> 8) < 0xC)
+ return;
+
if (event_data->ReasonCode !=
MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET &&
event_data->ReasonCode !=
struct fw_event_work *fw_event)
{
struct scsi_cmnd *scmd;
+ struct scsi_device *sdev;
u16 smid, handle;
u32 lun;
struct MPT2SAS_DEVICE *sas_device_priv_data;
Mpi2EventDataSasBroadcastPrimitive_t *event_data = fw_event->event_data;
#endif
u16 ioc_status;
+ unsigned long flags;
+ int r;
+
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "broadcast primative: "
"phy number(%d), width(%d)\n", ioc->name, event_data->PhyNum,
event_data->PortWidth));
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter\n", ioc->name,
__func__));
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+ ioc->broadcast_aen_busy = 0;
termination_count = 0;
query_count = 0;
mpi_reply = ioc->tm_cmds.reply;
scmd = _scsih_scsi_lookup_get(ioc, smid);
if (!scmd)
continue;
- sas_device_priv_data = scmd->device->hostdata;
+ sdev = scmd->device;
+ sas_device_priv_data = sdev->hostdata;
if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
continue;
/* skip hidden raid components */
lun = sas_device_priv_data->lun;
query_count++;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
mpt2sas_scsih_issue_tm(ioc, handle, 0, 0, lun,
MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK, smid, 30, NULL);
ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
(mpi_reply->ResponseCode ==
MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED ||
mpi_reply->ResponseCode ==
- MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC))
+ MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC)) {
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
continue;
-
- mpt2sas_scsih_issue_tm(ioc, handle, 0, 0, lun,
- MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET, 0, 30, NULL);
+ }
+ r = mpt2sas_scsih_issue_tm(ioc, handle, sdev->channel, sdev->id,
+ sdev->lun, MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30,
+ scmd);
+ if (r == FAILED)
+ sdev_printk(KERN_WARNING, sdev, "task abort: FAILED "
+ "scmd(%p)\n", scmd);
termination_count += le32_to_cpu(mpi_reply->TerminationCount);
+ spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
}
- ioc->broadcast_aen_busy = 0;
+ spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
dtmprintk(ioc, printk(MPT2SAS_INFO_FMT
"%s - exit, query_count = %d termination_count = %d\n",
destroy_workqueue(wq);
/* release all the volumes */
+ _scsih_ir_shutdown(ioc);
list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list,
list) {
if (raid_device->starget) {
{
struct Scsi_Host *host = rport_to_shost(rport);
fc_port_t *fcport = *(fc_port_t **)rport->dd_data;
+ unsigned long flags;
if (!fcport)
return;
* Transport has effectively 'deleted' the rport, clear
* all local references.
*/
- spin_lock_irq(host->host_lock);
+ spin_lock_irqsave(host->host_lock, flags);
fcport->rport = fcport->drport = NULL;
*((fc_port_t **)rport->dd_data) = NULL;
- spin_unlock_irq(host->host_lock);
+ spin_unlock_irqrestore(host->host_lock, flags);
if (test_bit(ABORT_ISP_ACTIVE, &fcport->vha->dpc_flags))
return;
{
fc_port_t *fcport = data;
struct fc_rport *rport;
+ unsigned long flags;
- spin_lock_irq(fcport->vha->host->host_lock);
+ spin_lock_irqsave(fcport->vha->host->host_lock, flags);
rport = fcport->drport ? fcport->drport: fcport->rport;
fcport->drport = NULL;
- spin_unlock_irq(fcport->vha->host->host_lock);
+ spin_unlock_irqrestore(fcport->vha->host->host_lock, flags);
if (rport)
fc_remote_port_delete(rport);
}
struct fc_rport_identifiers rport_ids;
struct fc_rport *rport;
struct qla_hw_data *ha = vha->hw;
+ unsigned long flags;
qla2x00_rport_del(fcport);
"Unable to allocate fc remote port!\n");
return;
}
- spin_lock_irq(fcport->vha->host->host_lock);
+ spin_lock_irqsave(fcport->vha->host->host_lock, flags);
*((fc_port_t **)rport->dd_data) = fcport;
- spin_unlock_irq(fcport->vha->host->host_lock);
+ spin_unlock_irqrestore(fcport->vha->host->host_lock, flags);
rport->supported_classes = fcport->supported_classes;
}
if (atomic_read(&fcport->state) != FCS_ONLINE) {
if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD ||
- atomic_read(&fcport->state) == FCS_DEVICE_LOST ||
atomic_read(&base_vha->loop_state) == LOOP_DEAD) {
cmd->result = DID_NO_CONNECT << 16;
goto qc24_fail_command;
{
struct fc_rport *rport;
scsi_qla_host_t *base_vha;
+ unsigned long flags;
if (!fcport->rport)
return;
rport = fcport->rport;
if (defer) {
base_vha = pci_get_drvdata(vha->hw->pdev);
- spin_lock_irq(vha->host->host_lock);
+ spin_lock_irqsave(vha->host->host_lock, flags);
fcport->drport = rport;
- spin_unlock_irq(vha->host->host_lock);
+ spin_unlock_irqrestore(vha->host->host_lock, flags);
set_bit(FCPORT_UPDATE_NEEDED, &base_vha->dpc_flags);
qla2xxx_wake_dpc(base_vha);
} else
set_user_nice(current, -20);
+ set_current_state(TASK_INTERRUPTIBLE);
while (!kthread_should_stop()) {
DEBUG3(printk("qla2x00: DPC handler sleeping\n"));
- set_current_state(TASK_INTERRUPTIBLE);
schedule();
__set_current_state(TASK_RUNNING);
qla2x00_do_dpc_all_vps(base_vha);
ha->dpc_active = 0;
+ set_current_state(TASK_INTERRUPTIBLE);
} /* End of while(1) */
+ __set_current_state(TASK_RUNNING);
DEBUG(printk("scsi(%ld): DPC handler exiting\n", base_vha->host_no));
unsigned long long lba, unsigned int num, int write)
{
int ret;
- unsigned int block, rest = 0;
+ unsigned long long block, rest = 0;
int (*func)(struct scsi_cmnd *, unsigned char *, int);
func = write ? fetch_to_dev_buffer : fill_from_dev_buffer;
return 0;
}
-#define VALID(x) (x | 0x80)
+#define SENSE_VALID_FLAG 0x80
+#define VALID(x) (x | SENSE_VALID_FLAG)
static unsigned char intc_irq_sense_table[IRQ_TYPE_SENSE_MASK + 1] = {
[IRQ_TYPE_EDGE_FALLING] = VALID(0),
ihp = intc_find_irq(d->sense, d->nr_sense, irq);
if (ihp) {
addr = INTC_REG(d, _INTC_ADDR_E(ihp->handle), 0);
- intc_reg_fns[_INTC_FN(ihp->handle)](addr, ihp->handle, value);
+ intc_reg_fns[_INTC_FN(ihp->handle)](addr, ihp->handle,
+ value & ~SENSE_VALID_FLAG);
}
return 0;
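For the intc sense change above: bit 7 (SENSE_VALID_FLAG) in an intc_irq_sense_table entry only marks the slot as populated, so it has to be masked off before the value reaches the controller register, or the stray bit would corrupt the programmed sense setting. A small standalone illustration of the masking, with made-up values (not taken from the SH intc code):

    #include <stdio.h>

    #define SENSE_VALID_FLAG 0x80
    #define VALID(x) ((x) | SENSE_VALID_FLAG)

    int main(void)
    {
        unsigned char entry = VALID(2);                  /* valid marker + sense value 2 */
        unsigned char to_hw = entry & ~SENSE_VALID_FLAG; /* what should reach the register */

        /* prints "entry 0x82 -> register value 0x02" */
        printf("entry 0x%02x -> register value 0x%02x\n", entry, to_hw);
        return 0;
    }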
#include <linux/of_device.h>
#include <linux/spi/pxa2xx_spi.h>
-struct awesome_struct {
+struct ce4100_info {
struct ssp_device ssp;
- struct platform_device spi_pdev;
- struct pxa2xx_spi_master spi_pdata;
+ struct platform_device *spi_pdev;
};
static DEFINE_MUTEX(ssp_lock);
}
EXPORT_SYMBOL_GPL(pxa_ssp_free);
-static void plat_dev_release(struct device *dev)
-{
- struct awesome_struct *as = container_of(dev,
- struct awesome_struct, spi_pdev.dev);
-
- of_device_node_put(&as->spi_pdev.dev);
-}
-
static int __devinit ce4100_spi_probe(struct pci_dev *dev,
const struct pci_device_id *ent)
{
int ret;
resource_size_t phys_beg;
resource_size_t phys_len;
- struct awesome_struct *spi_info;
+ struct ce4100_info *spi_info;
struct platform_device *pdev;
- struct pxa2xx_spi_master *spi_pdata;
+ struct pxa2xx_spi_master spi_pdata;
struct ssp_device *ssp;
ret = pci_enable_device(dev);
return ret;
}
+ pdev = platform_device_alloc("pxa2xx-spi", dev->devfn);
spi_info = kzalloc(sizeof(*spi_info), GFP_KERNEL);
- if (!spi_info) {
+ if (!pdev || !spi_info) {
ret = -ENOMEM;
- goto err_kz;
+ goto err_nomem;
}
- ssp = &spi_info->ssp;
- pdev = &spi_info->spi_pdev;
- spi_pdata = &spi_info->spi_pdata;
+ memset(&spi_pdata, 0, sizeof(spi_pdata));
+ spi_pdata.num_chipselect = dev->devfn;
- pdev->name = "pxa2xx-spi";
- pdev->id = dev->devfn;
- pdev->dev.parent = &dev->dev;
- pdev->dev.platform_data = &spi_info->spi_pdata;
+ ret = platform_device_add_data(pdev, &spi_pdata, sizeof(spi_pdata));
+ if (ret)
+ goto err_nomem;
+ pdev->dev.parent = &dev->dev;
#ifdef CONFIG_OF
pdev->dev.of_node = dev->dev.of_node;
#endif
- pdev->dev.release = plat_dev_release;
-
- spi_pdata->num_chipselect = dev->devfn;
-
+ ssp = &spi_info->ssp;
ssp->phys_base = pci_resource_start(dev, 0);
ssp->mmio_base = ioremap(phys_beg, phys_len);
if (!ssp->mmio_base) {
dev_err(&pdev->dev, "failed to ioremap() registers\n");
ret = -EIO;
- goto err_remap;
+ goto err_nomem;
}
ssp->irq = dev->irq;
ssp->port_id = pdev->id;
pci_set_drvdata(dev, spi_info);
- ret = platform_device_register(pdev);
+ ret = platform_device_add(pdev);
if (ret)
goto err_dev_add;
mutex_unlock(&ssp_lock);
iounmap(ssp->mmio_base);
-err_remap:
- kfree(spi_info);
-
-err_kz:
+err_nomem:
release_mem_region(phys_beg, phys_len);
-
+ platform_device_put(pdev);
+ kfree(spi_info);
return ret;
}
static void __devexit ce4100_spi_remove(struct pci_dev *dev)
{
- struct awesome_struct *spi_info;
- struct platform_device *pdev;
+ struct ce4100_info *spi_info;
struct ssp_device *ssp;
spi_info = pci_get_drvdata(dev);
-
ssp = &spi_info->ssp;
- pdev = &spi_info->spi_pdev;
-
- platform_device_unregister(pdev);
+ platform_device_unregister(spi_info->spi_pdev);
iounmap(ssp->mmio_base);
release_mem_region(pci_resource_start(dev, 0),
}
static struct pci_device_id ce4100_spi_devices[] __devinitdata = {
-
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2e6a) },
{ },
};
bytes_done = 0;
while (bytes_done < t->len) {
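+	/* rx_buf/tx_buf may be NULL for half-duplex transfers; only offset non-NULL pointers */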
+ void *rx_buf = t->rx_buf ? t->rx_buf + bytes_done : NULL;
+ const void *tx_buf = t->tx_buf ? t->tx_buf + bytes_done : NULL;
n = sh_msiof_spi_txrx_once(p, tx_fifo, rx_fifo,
- t->tx_buf + bytes_done,
- t->rx_buf + bytes_done,
+ tx_buf,
+ rx_buf,
words, bits);
if (n < 0)
break;
config SSB_SILENT
bool "No SSB kernel messages"
- depends on SSB && EMBEDDED
+ depends on SSB && EXPERT
help
This option turns off all Sonics Silicon Backplane printks.
Note that you won't be able to identify problems, once
/* Fetch the vendor specific tuples. */
res = pcmcia_loop_tuple(bus->host_pcmcia, SSB_PCMCIA_CIS,
- ssb_pcmcia_do_get_invariants, sprom);
+ ssb_pcmcia_do_get_invariants, iv);
if ((res == 0) || (res == -ENOSPC))
return 0;
status = 1;
goto complete;
}
- len = (firmware->size > MAX_BDADDR_FORMAT_LENGTH)? MAX_BDADDR_FORMAT_LENGTH: firmware->size;
- memcpy(config_bdaddr, firmware->data,len);
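+	/* leave room for the terminating NUL appended below */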
+ len = min(firmware->size, MAX_BDADDR_FORMAT_LENGTH - 1);
+ memcpy(config_bdaddr, firmware->data, len);
config_bdaddr[len] = '\0';
write_bdaddr(hdev,config_bdaddr,BDADDR_TYPE_STRING);
A_RELEASE_FIRMWARE(firmware);
struct wl_info *wl = hw->priv;
ASSERT(wl);
WL_LOCK(wl);
- wl_down(wl);
ieee80211_stop_queues(hw);
WL_UNLOCK(wl);
-
- return;
}
static int
static void
wl_ops_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
{
- return;
+ struct wl_info *wl;
+
+ wl = HW_TO_WL(hw);
+
+ /* put driver in down state */
+ WL_LOCK(wl);
+ wl_down(wl);
+ WL_UNLOCK(wl);
}
static int
switch (type) {
case NL80211_CHAN_HT20:
case NL80211_CHAN_NO_HT:
- WL_LOCK(wl);
err = wlc_set(wl->wlc, WLC_SET_CHANNEL, chan->hw_value);
- WL_UNLOCK(wl);
break;
case NL80211_CHAN_HT40MINUS:
case NL80211_CHAN_HT40PLUS:
int err = 0;
int new_int;
+ WL_LOCK(wl);
if (changed & IEEE80211_CONF_CHANGE_LISTEN_INTERVAL) {
WL_NONE("%s: Setting listen interval to %d\n",
__func__, conf->listen_interval);
}
config_out:
+ WL_UNLOCK(wl);
return err;
}
static void wl_ops_sw_scan_start(struct ieee80211_hw *hw)
{
+ struct wl_info *wl = hw->priv;
WL_NONE("Scan Start\n");
+ WL_LOCK(wl);
+ wlc_scan_start(wl->wlc);
+ WL_UNLOCK(wl);
return;
}
static void wl_ops_sw_scan_complete(struct ieee80211_hw *hw)
{
+ struct wl_info *wl = hw->priv;
WL_NONE("Scan Complete\n");
+ WL_LOCK(wl);
+ wlc_scan_stop(wl->wlc);
+ WL_UNLOCK(wl);
return;
}
wl_found++;
return wl;
- fail:
+fail:
wl_free(wl);
fail1:
return NULL;
return 0;
}
-#ifdef LINUXSTA_PS
static int wl_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct wl_info *wl;
return -ENODEV;
}
+ /* only need to flag hw is down for proper resume */
WL_LOCK(wl);
- wl_down(wl);
wl->pub->hw_up = false;
WL_UNLOCK(wl);
- pci_save_state(pdev, wl->pci_psstate);
+
+ pci_save_state(pdev);
pci_disable_device(pdev);
return pci_set_power_state(pdev, PCI_D3hot);
}
if (err)
return err;
- pci_restore_state(pdev, wl->pci_psstate);
+ pci_restore_state(pdev);
err = pci_enable_device(pdev);
if (err)
if ((val & 0x0000ff00) != 0)
pci_write_config_dword(pdev, 0x40, val & 0xffff00ff);
- WL_LOCK(wl);
- err = wl_up(wl);
- WL_UNLOCK(wl);
-
+ /*
+ * done; the driver will be brought back up
+ * by the wl_ops_add_interface() call.
+ */
return err;
}
-#endif /* LINUXSTA_PS */
static void wl_remove(struct pci_dev *pdev)
{
}
static struct pci_driver wl_pci_driver = {
- .name = "brcm80211",
- .probe = wl_pci_probe,
-#ifdef LINUXSTA_PS
- .suspend = wl_suspend,
- .resume = wl_resume,
-#endif /* LINUXSTA_PS */
- .remove = __devexit_p(wl_remove),
- .id_table = wl_id_table,
+ .name = "brcm80211",
+ .probe = wl_pci_probe,
+ .suspend = wl_suspend,
+ .resume = wl_resume,
+ .remove = __devexit_p(wl_remove),
+ .id_table = wl_id_table,
};
/**
fifo = prio2fifo[prio];
ASSERT((uint) skb_headroom(sdu) >= TXOFF);
- ASSERT(!(sdu->cloned));
ASSERT(!(sdu->next));
ASSERT(!(sdu->prev));
ASSERT(fifo < NFIFO);
kfree(qi);
}
+
+/*
+ * Flag 'scan in progress' to withhold dynamic phy calibration
+ */
+void wlc_scan_start(struct wlc_info *wlc)
+{
+ wlc_phy_hold_upd(wlc->band->pi, PHY_HOLD_FOR_SCAN, true);
+}
+
+void wlc_scan_stop(struct wlc_info *wlc)
+{
+ wlc_phy_hold_upd(wlc->band->pi, PHY_HOLD_FOR_SCAN, false);
+}
extern u16 wlc_rate_shm_offset(struct wlc_info *wlc, u8 rate);
extern u32 wlc_get_rspec_history(struct wlc_bsscfg *cfg);
extern u32 wlc_get_current_highest_rate(struct wlc_bsscfg *cfg);
+extern void wlc_scan_start(struct wlc_info *wlc);
+extern void wlc_scan_stop(struct wlc_info *wlc);
static inline int wlc_iovar_getuint(struct wlc_info *wlc, const char *name,
uint *arg)
config COMEDI_NI_ATMIO
tristate "NI AT-MIO E series ISA-PNP card support"
depends on ISAPNP && COMEDI_NI_TIO && COMEDI_NI_COMMON
+ select COMEDI_8255
default N
---help---
Enable support for National Instruments AT-MIO E series cards
config COMEDI_NI_PCIMIO
tristate "NI PCI-MIO-E series and M series support"
depends on COMEDI_NI_TIO && COMEDI_NI_COMMON
+ select COMEDI_8255
+ select COMEDI_FC
default N
---help---
Enable support for National Instruments PCI-MIO-E series and M series
config COMEDI_NI_MIO_CS
tristate "NI DAQCard E series PCMCIA support"
depends on COMEDI_NI_TIO && COMEDI_NI_COMMON
+ select COMEDI_8255
select COMEDI_FC
default N
---help---
config COMEDI_NI_TIO
tristate "NI general purpose counter support"
depends on COMEDI_MITE
- select COMEDI_8255
default N
---help---
Enable support for National Instruments general purpose counters.
#define PCI_DAQ_SIZE 4096
#define PCI_DAQ_SIZE_660X 8192
-MODULE_LICENSE("GPL");
-
struct mite_struct *mite_devices;
EXPORT_SYMBOL(mite_devices);
module_init(driver_ni6527_init_module);
module_exit(driver_ni6527_cleanup_module);
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
module_init(driver_ni_65xx_init_module);
module_exit(driver_ni_65xx_cleanup_module);
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
};
return 0;
}
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
mite_list_devices();
return -EIO;
}
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
/* grab our IRQ */
if (irq) {
isr_flags = 0;
- if (thisboard->bustype == pci_bustype)
+ if (thisboard->bustype == pci_bustype
+ || thisboard->bustype == pcmcia_bustype)
isr_flags |= IRQF_SHARED;
if (request_irq(irq, labpc_interrupt, isr_flags,
driver_labpc.driver_name, dev)) {
module_init(driver_pcidio_init_module);
module_exit(driver_pcidio_cleanup_module);
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
return 0;
}
+
+MODULE_AUTHOR("Comedi http://www.comedi.org");
+MODULE_DESCRIPTION("Comedi low-level driver");
+MODULE_LICENSE("GPL");
blkdev->gd->first_minor = 0;
blkdev->gd->fops = &block_ops;
blkdev->gd->private_data = blkdev;
+ blkdev->gd->driverfs_dev = &(blkdev->device_ctx->device);
sprintf(blkdev->gd->disk_name, "hd%c", 'a' + devnum);
blkvsc_do_inquiry(blkdev);
/* ASSERT(device); */
packet = kzalloc(NETVSC_PACKET_SIZE * sizeof(unsigned char),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!packet)
return;
buffer = packet;
if (status == 1) {
netif_carrier_on(net);
netif_wake_queue(net);
+ netif_notify_peers(net);
} else {
netif_carrier_off(net);
netif_stop_queue(net);
/* Set initial state */
netif_carrier_off(net);
- netif_stop_queue(net);
net_device_ctx = netdev_priv(net);
net_device_ctx->device_ctx = device_ctx;
/* Corresponds to Vref / 2^(bits) */
unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits;
- return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000);
+ return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000);
}
static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad7476_show_scale, NULL, 0);
/* Corresponds to Vref / 2^(bits) */
unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits;
- return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000);
+ return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000);
}
static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad7887_show_scale, NULL, 0);
/* Corresponds to Vref / 2^(bits) */
unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits;
- return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000);
+ return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000);
}
static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad799x_show_scale, NULL, 0);
/* Corresponds to Vref / 2^(bits) */
unsigned int scale_uv = (st->vref_mv * 1000) >> st->chip_info->bits;
- return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000);
+ return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000);
}
static IIO_DEVICE_ATTR(out_scale, S_IRUGO, ad5446_show_scale, NULL, 0);
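The four "%d.%03d" conversions above all fix the same formatting bug: scale_uv % 1000 is a fractional-millivolt value in the range 0-999 and must be zero-padded to three digits, or any fraction below 100 silently loses its leading zeros. A standalone userspace illustration with a made-up value (not taken from any of these drivers):

    #include <stdio.h>

    int main(void)
    {
        int scale_uv = 1007;    /* hypothetical 1.007 mV per LSB */

        printf("%d.%d\n", scale_uv / 1000, scale_uv % 1000);    /* prints "1.7"   (wrong) */
        printf("%d.%03d\n", scale_uv / 1000, scale_uv % 1000);  /* prints "1.007" (right) */
        return 0;
    }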
sc_access[3].reg_addr = 0x109;
sc_access[3].mask = MASK6;
sc_access[3].value = 0x00;
- num_val = 4;
+ sc_access[4].reg_addr = 0x104;
+ sc_access[4].value = 0x3C;
+ sc_access[4].mask = 0xff;
+ num_val = 5;
break;
default:
return -EINVAL;
-The binding between hdpvr and lirc_zilog is currently disabled,
+1. Both ir-kbd-i2c and lirc_zilog provide support for RX events.
+The 'tx_only' lirc_zilog module parameter will allow ir-kbd-i2c
+and lirc_zilog to coexist in the kernel, if the user requires such a set-up.
+However, the IR unit will not work well without coordination between the
+two modules. A shared mutex for transceiver access locking needs to be
+supplied by bridge drivers, via struct IR_i2c_init_data, to both ir-kbd-i2c
+and lirc_zilog before they will coexist usefully (a sketch of the intended
+shared-lock usage follows item 5 below). This should be fixed before moving
+out of staging.
+
+2. References and locking need careful examination. For cx18 and ivtv PCI
+cards, which are not easily "hot unplugged", the imperfect state of reference
+counting and locking is acceptable if not correct. For USB connected units
+like HD PVR, PVR USB2, HVR-1900, and HVR-1950, the likelihood of an Oops on
+unplug is probably great. Proper reference counting and locking needs to be
+implemented before this module is moved out of staging.
+
+3. The binding between hdpvr and lirc_zilog is currently disabled,
due to an OOPS reported a few years ago when both the hdpvr and cx18
drivers were loaded on the reporter's system. More details can be seen at:
http://www.mail-archive.com/linux-media@vger.kernel.org/msg09163.html
More tests need to be done, in order to fix the reported issue.
-There's a conflict between ir-kbd-i2c: Both provide support for RX events.
-Such conflict needs to be fixed, before moving it out of staging.
+4. In addition to providing a shared mutex for transceiver access
+locking, bridge drivers, if able, should provide a chip reset() callback
+to lirc_zilog via struct IR_i2c_init_data. cx18 and ivtv already have routines
+to perform Z8 chip resets via GPIO manipulations. This will allow lirc_zilog
+to bring the chip back to normal when it hangs, in the same places the
+original lirc_pvr150 driver code does. This is not strictly needed, so it
+is not required to move lirc_zilog out of staging.
+
+5. Both lirc_zilog and ir-kbd-i2c support the Zilog Z8 for IR, as programmed
+and installed on Hauppauge products. When working on either module, developers
+must consider at least the following bridge drivers which mention an IR Rx unit
+at address 0x71 (indicative of a Z8):
-The way I2C probe works, it will try to register the driver twice, one
-for RX and another for TX. The logic needs to be fixed to avoid such
-issue.
+ ivtv cx18 hdpvr pvrusb2 bt8xx cx88 saa7134
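Items 1 and 4 above amount to serializing all Z8 transceiver I/O behind one bridge-supplied mutex. The following is only a sketch of what the shared-lock usage could look like in either module; the helper name and the 'xcvr_lock' hand-over are assumptions, since no such member exists in struct IR_i2c_init_data yet:

    #include <linux/i2c.h>
    #include <linux/mutex.h>

    /*
     * Sketch only: 'xcvr_lock' would be supplied by the bridge driver
     * (for example through a new struct IR_i2c_init_data member, per
     * item 1). Both ir-kbd-i2c Rx polling and lirc_zilog Tx uploads
     * would take the same lock around every i2c_master_send() or
     * i2c_master_recv() aimed at the Z8.
     */
    static int z8_locked_recv(struct i2c_client *c, struct mutex *xcvr_lock,
                              char *buf, int len)
    {
        int ret;

        mutex_lock(xcvr_lock);          /* one lock shared by Rx and Tx users */
        ret = i2c_master_recv(c, buf, len);
        mutex_unlock(xcvr_lock);
        return ret;
    }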
exit:
mutex_unlock(&context->ctx_lock);
+ kfree(data_buf);
return (!retval) ? n_bytes : retval;
}
i++;
}
terminate_send(tx_buf[i - 1]);
+ kfree(tx_buf);
return n;
}
unsigned long flags;
int counttimer;
int *wbuf;
+ ssize_t ret;
if (!is_claimed)
return -EBUSY;
if (timer == 0) {
/* try again if device is ready */
timer = init_lirc_timer();
- if (timer == 0)
- return -EIO;
+ if (timer == 0) {
+ ret = -EIO;
+ goto out;
+ }
}
/* adjust values from usecs */
if (check_pselecd && (in(1) & LP_PSELECD)) {
lirc_off();
local_irq_restore(flags);
- return -EIO;
+ ret = -EIO;
+ goto out;
}
} while (counttimer < wbuf[i]);
i++;
level = newlevel;
if (check_pselecd && (in(1) & LP_PSELECD)) {
local_irq_restore(flags);
- return -EIO;
+ ret = -EIO;
+ goto out;
}
} while (counttimer < wbuf[i]);
i++;
#else
/* place code that handles write without external timer here */
#endif
- return n;
+ ret = n;
+out:
+ kfree(wbuf);
+
+ return ret;
}
static unsigned int lirc_poll(struct file *file, poll_table *wait)
exit:
mutex_unlock(&context->ctx_lock);
+ kfree(data_buf);
return (!retval) ? n_bytes : retval;
}
if (n % sizeof(int) || count % 2 == 0)
return -EINVAL;
wbuf = memdup_user(buf, n);
- if (PTR_ERR(wbuf))
+ if (IS_ERR(wbuf))
return PTR_ERR(wbuf);
spin_lock_irqsave(&hardware[type].lock, flags);
if (type == LIRC_IRDEO) {
}
off();
spin_unlock_irqrestore(&hardware[type].lock, flags);
+ kfree(wbuf);
return n;
}
/* enable receiver */
Ser2UTCR3 = UTCR3_RXE|UTCR3_RIE;
#endif
+ kfree(tx_buf);
return count;
}
*
* parts are cut&pasted from the lirc_i2c.c driver
*
+ * Numerous changes updating lirc_zilog.c in kernel 2.6.38 and later are
+ * Copyright (C) 2011 Andy Walls <awalls@md.metrocast.net>
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
#include <media/lirc_dev.h>
#include <media/lirc.h>
-struct IR {
- struct lirc_driver l;
-
- /* Device info */
- struct mutex ir_lock;
- int open;
- bool is_hdpvr;
-
+struct IR_rx {
/* RX device */
- struct i2c_client c_rx;
- int have_rx;
+ struct i2c_client *c;
/* RX device buffer & lock */
struct lirc_buffer buf;
struct mutex buf_lock;
/* RX polling thread data */
- struct completion *t_notify;
- struct completion *t_notify2;
- int shutdown;
struct task_struct *task;
/* RX read data */
unsigned char b[3];
+ bool hdpvr_data_fmt;
+};
+struct IR_tx {
/* TX device */
- struct i2c_client c_tx;
+ struct i2c_client *c;
+
+ /* TX additional actions needed */
int need_boot;
- int have_tx;
+ bool post_tx_ready_poll;
+};
+
+struct IR {
+ struct lirc_driver l;
+
+ struct mutex ir_lock;
+ int open;
+
+ struct i2c_adapter *adapter;
+ struct IR_rx *rx;
+ struct IR_tx *tx;
};
/* Minor -> data mapping */
+static struct mutex ir_devices_lock;
static struct IR *ir_devices[MAX_IRCTL_DEVICES];
/* Block size for IR transmitter */
#define zilog_notify(s, args...) printk(KERN_NOTICE KBUILD_MODNAME ": " s, \
## args)
#define zilog_error(s, args...) printk(KERN_ERR KBUILD_MODNAME ": " s, ## args)
-
-#define ZILOG_HAUPPAUGE_IR_RX_NAME "Zilog/Hauppauge IR RX"
-#define ZILOG_HAUPPAUGE_IR_TX_NAME "Zilog/Hauppauge IR TX"
+#define zilog_info(s, args...) printk(KERN_INFO KBUILD_MODNAME ": " s, ## args)
/* module parameters */
static int debug; /* debug output */
-static int disable_rx; /* disable RX device */
-static int disable_tx; /* disable TX device */
+static int tx_only; /* only handle the IR Tx function */
static int minor = -1; /* minor number */
#define dprintk(fmt, args...) \
int ret;
int failures = 0;
unsigned char sendbuf[1] = { 0 };
+ struct IR_rx *rx = ir->rx;
- if (lirc_buffer_full(&ir->buf)) {
+ if (rx == NULL)
+ return -ENXIO;
+
+ if (lirc_buffer_full(&rx->buf)) {
dprintk("buffer overflow\n");
return -EOVERFLOW;
}
* data and we have space
*/
do {
+ if (kthread_should_stop())
+ return -ENODATA;
+
/*
* Lock i2c bus for the duration. RX/TX chips interfere so
* this is worth it
*/
mutex_lock(&ir->ir_lock);
+ if (kthread_should_stop()) {
+ mutex_unlock(&ir->ir_lock);
+ return -ENODATA;
+ }
+
/*
* Send random "poll command" (?) Windows driver does this
* and it is a good point to detect chip failure.
*/
- ret = i2c_master_send(&ir->c_rx, sendbuf, 1);
+ ret = i2c_master_send(rx->c, sendbuf, 1);
if (ret != 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
if (failures >= 3) {
"trying reset\n");
set_current_state(TASK_UNINTERRUPTIBLE);
+ if (kthread_should_stop()) {
+ mutex_unlock(&ir->ir_lock);
+ return -ENODATA;
+ }
schedule_timeout((100 * HZ + 999) / 1000);
- ir->need_boot = 1;
+ ir->tx->need_boot = 1;
++failures;
mutex_unlock(&ir->ir_lock);
continue;
}
- ret = i2c_master_recv(&ir->c_rx, keybuf, sizeof(keybuf));
+ if (kthread_should_stop()) {
+ mutex_unlock(&ir->ir_lock);
+ return -ENODATA;
+ }
+ ret = i2c_master_recv(rx->c, keybuf, sizeof(keybuf));
mutex_unlock(&ir->ir_lock);
if (ret != sizeof(keybuf)) {
zilog_error("i2c_master_recv failed with %d -- "
"keeping last read buffer\n", ret);
} else {
- ir->b[0] = keybuf[3];
- ir->b[1] = keybuf[4];
- ir->b[2] = keybuf[5];
- dprintk("key (0x%02x/0x%02x)\n", ir->b[0], ir->b[1]);
+ rx->b[0] = keybuf[3];
+ rx->b[1] = keybuf[4];
+ rx->b[2] = keybuf[5];
+ dprintk("key (0x%02x/0x%02x)\n", rx->b[0], rx->b[1]);
}
/* key pressed ? */
- if (ir->is_hdpvr) {
+ if (rx->hdpvr_data_fmt) {
if (got_data && (keybuf[0] == 0x80))
return 0;
else if (got_data && (keybuf[0] == 0x00))
return -ENODATA;
- } else if ((ir->b[0] & 0x80) == 0)
+ } else if ((rx->b[0] & 0x80) == 0)
return got_data ? 0 : -ENODATA;
/* look what we have */
- code = (((__u16)ir->b[0] & 0x7f) << 6) | (ir->b[1] >> 2);
+ code = (((__u16)rx->b[0] & 0x7f) << 6) | (rx->b[1] >> 2);
codes[0] = (code >> 8) & 0xff;
codes[1] = code & 0xff;
/* return it */
- lirc_buffer_write(&ir->buf, codes);
+ lirc_buffer_write(&rx->buf, codes);
++got_data;
- } while (!lirc_buffer_full(&ir->buf));
+ } while (!lirc_buffer_full(&rx->buf));
return 0;
}
static int lirc_thread(void *arg)
{
struct IR *ir = arg;
-
- if (ir->t_notify != NULL)
- complete(ir->t_notify);
+ struct IR_rx *rx = ir->rx;
dprintk("poll thread started\n");
- do {
- if (ir->open) {
- set_current_state(TASK_INTERRUPTIBLE);
+ while (!kthread_should_stop()) {
+ set_current_state(TASK_INTERRUPTIBLE);
- /*
- * This is ~113*2 + 24 + jitter (2*repeat gap +
- * code length). We use this interval as the chip
- * resets every time you poll it (bad!). This is
- * therefore just sufficient to catch all of the
- * button presses. It makes the remote much more
- * responsive. You can see the difference by
- * running irw and holding down a button. With
- * 100ms, the old polling interval, you'll notice
- * breaks in the repeat sequence corresponding to
- * lost keypresses.
- */
- schedule_timeout((260 * HZ) / 1000);
- if (ir->shutdown)
- break;
- if (!add_to_buf(ir))
- wake_up_interruptible(&ir->buf.wait_poll);
- } else {
- /* if device not opened so we can sleep half a second */
- set_current_state(TASK_INTERRUPTIBLE);
+ /* if device not opened, we can sleep half a second */
+ if (!ir->open) {
schedule_timeout(HZ/2);
+ continue;
}
- } while (!ir->shutdown);
-
- if (ir->t_notify2 != NULL)
- wait_for_completion(ir->t_notify2);
- ir->task = NULL;
- if (ir->t_notify != NULL)
- complete(ir->t_notify);
+ /*
+ * This is ~113*2 + 24 + jitter (2*repeat gap + code length).
+ * We use this interval as the chip resets every time you poll
+ * it (bad!). This is therefore just sufficient to catch all
+ * of the button presses. It makes the remote much more
+ * responsive. You can see the difference by running irw and
+ * holding down a button. With 100ms, the old polling
+ * interval, you'll notice breaks in the repeat sequence
+ * corresponding to lost keypresses.
+ */
+ schedule_timeout((260 * HZ) / 1000);
+ if (kthread_should_stop())
+ break;
+ if (!add_to_buf(ir))
+ wake_up_interruptible(&rx->buf.wait_poll);
+ }
dprintk("poll thread ended\n");
return 0;
* this is completely broken code. lirc_unregister_driver()
* must be possible even when the device is open
*/
- if (ir->c_rx.addr)
- i2c_use_client(&ir->c_rx);
- if (ir->c_tx.addr)
- i2c_use_client(&ir->c_tx);
+ if (ir->rx != NULL)
+ i2c_use_client(ir->rx->c);
+ if (ir->tx != NULL)
+ i2c_use_client(ir->tx->c);
return 0;
}
{
struct IR *ir = data;
- if (ir->c_rx.addr)
- i2c_release_client(&ir->c_rx);
- if (ir->c_tx.addr)
- i2c_release_client(&ir->c_tx);
+ if (ir->rx)
+ i2c_release_client(ir->rx->c);
+ if (ir->tx)
+ i2c_release_client(ir->tx->c);
if (ir->l.owner != NULL)
module_put(ir->l.owner);
}
}
/* send a block of data to the IR TX device */
-static int send_data_block(struct IR *ir, unsigned char *data_block)
+static int send_data_block(struct IR_tx *tx, unsigned char *data_block)
{
int i, j, ret;
unsigned char buf[5];
buf[1 + j] = data_block[i + j];
dprintk("%02x %02x %02x %02x %02x",
buf[0], buf[1], buf[2], buf[3], buf[4]);
- ret = i2c_master_send(&ir->c_tx, buf, tosend + 1);
+ ret = i2c_master_send(tx->c, buf, tosend + 1);
if (ret != tosend + 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
/* send boot data to the IR TX device */
-static int send_boot_data(struct IR *ir)
+static int send_boot_data(struct IR_tx *tx)
{
- int ret;
+ int ret, i;
unsigned char buf[4];
/* send the boot block */
- ret = send_data_block(ir, tx_data->boot_data);
+ ret = send_data_block(tx, tx_data->boot_data);
if (ret != 0)
return ret;
- /* kick it off? */
+ /* Hit the go button to activate the new boot data */
buf[0] = 0x00;
buf[1] = 0x20;
- ret = i2c_master_send(&ir->c_tx, buf, 2);
+ ret = i2c_master_send(tx->c, buf, 2);
if (ret != 2) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
- ret = i2c_master_send(&ir->c_tx, buf, 1);
+
+ /*
+ * Wait for the Zilog to settle after hitting "go" following the boot block upload.
+ * Without this delay, the HD-PVR and HVR-1950 both return an -EIO
+ * upon attempting to get firmware revision, and tx probe thus fails.
+ */
+ for (i = 0; i < 10; i++) {
+ ret = i2c_master_send(tx->c, buf, 1);
+ if (ret == 1)
+ break;
+ udelay(100);
+ }
+
if (ret != 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
/* Here comes the firmware version... (hopefully) */
- ret = i2c_master_recv(&ir->c_tx, buf, 4);
+ ret = i2c_master_recv(tx->c, buf, 4);
if (ret != 4) {
zilog_error("i2c_master_recv failed with %d\n", ret);
return 0;
}
- if (buf[0] != 0x80) {
- zilog_error("unexpected IR TX response: %02x\n", buf[0]);
+ if ((buf[0] != 0x80) && (buf[0] != 0xa0)) {
+ zilog_error("unexpected IR TX init response: %02x\n", buf[0]);
return 0;
}
zilog_notify("Zilog/Hauppauge IR blaster firmware version "
}
/* load "firmware" for the IR TX device */
-static int fw_load(struct IR *ir)
+static int fw_load(struct IR_tx *tx)
{
int ret;
unsigned int i;
}
/* Request codeset data file */
- ret = request_firmware(&fw_entry, "haup-ir-blaster.bin", &ir->c_tx.dev);
+ ret = request_firmware(&fw_entry, "haup-ir-blaster.bin", &tx->c->dev);
if (ret != 0) {
zilog_error("firmware haup-ir-blaster.bin not available "
"(%d)\n", ret);
}
/* initialise the IR TX device */
-static int tx_init(struct IR *ir)
+static int tx_init(struct IR_tx *tx)
{
int ret;
/* Load 'firmware' */
- ret = fw_load(ir);
+ ret = fw_load(tx);
if (ret != 0)
return ret;
/* Send boot block */
- ret = send_boot_data(ir);
+ ret = send_boot_data(tx);
if (ret != 0)
return ret;
- ir->need_boot = 0;
+ tx->need_boot = 0;
/* Looks good */
return 0;
static ssize_t read(struct file *filep, char *outbuf, size_t n, loff_t *ppos)
{
struct IR *ir = filep->private_data;
- unsigned char buf[ir->buf.chunk_size];
+ struct IR_rx *rx = ir->rx;
int ret = 0, written = 0;
DECLARE_WAITQUEUE(wait, current);
dprintk("read called\n");
- if (ir->c_rx.addr == 0)
+ if (rx == NULL)
return -ENODEV;
- if (mutex_lock_interruptible(&ir->buf_lock))
+ if (mutex_lock_interruptible(&rx->buf_lock))
return -ERESTARTSYS;
- if (n % ir->buf.chunk_size) {
+ if (n % rx->buf.chunk_size) {
dprintk("read result = -EINVAL\n");
- mutex_unlock(&ir->buf_lock);
+ mutex_unlock(&rx->buf_lock);
return -EINVAL;
}
* to avoid losing scan code (in case when queue is awaken somewhere
* between while condition checking and scheduling)
*/
- add_wait_queue(&ir->buf.wait_poll, &wait);
+ add_wait_queue(&rx->buf.wait_poll, &wait);
set_current_state(TASK_INTERRUPTIBLE);
/*
* mode and 'copy_to_user' is happy, wait for data.
*/
while (written < n && ret == 0) {
- if (lirc_buffer_empty(&ir->buf)) {
+ if (lirc_buffer_empty(&rx->buf)) {
/*
* According to the read(2) man page, 'written' can be
* returned as less than 'n', instead of blocking
schedule();
set_current_state(TASK_INTERRUPTIBLE);
} else {
- lirc_buffer_read(&ir->buf, buf);
+ unsigned char buf[rx->buf.chunk_size];
+ lirc_buffer_read(&rx->buf, buf);
ret = copy_to_user((void *)outbuf+written, buf,
- ir->buf.chunk_size);
- written += ir->buf.chunk_size;
+ rx->buf.chunk_size);
+ written += rx->buf.chunk_size;
}
}
- remove_wait_queue(&ir->buf.wait_poll, &wait);
+ remove_wait_queue(&rx->buf.wait_poll, &wait);
set_current_state(TASK_RUNNING);
- mutex_unlock(&ir->buf_lock);
+ mutex_unlock(&rx->buf_lock);
dprintk("read result = %s (%d)\n",
ret ? "-EFAULT" : "OK", ret);
}
/* send a keypress to the IR TX device */
-static int send_code(struct IR *ir, unsigned int code, unsigned int key)
+static int send_code(struct IR_tx *tx, unsigned int code, unsigned int key)
{
unsigned char data_block[TX_BLOCK_SIZE];
unsigned char buf[2];
return ret;
/* Send the data block */
- ret = send_data_block(ir, data_block);
+ ret = send_data_block(tx, data_block);
if (ret != 0)
return ret;
/* Send data block length? */
buf[0] = 0x00;
buf[1] = 0x40;
- ret = i2c_master_send(&ir->c_tx, buf, 2);
+ ret = i2c_master_send(tx->c, buf, 2);
if (ret != 2) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
- ret = i2c_master_send(&ir->c_tx, buf, 1);
+
+ /* Give the z8 a moment to process data block */
+ for (i = 0; i < 10; i++) {
+ ret = i2c_master_send(tx->c, buf, 1);
+ if (ret == 1)
+ break;
+ udelay(100);
+ }
+
if (ret != 1) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
}
/* Send finished download? */
- ret = i2c_master_recv(&ir->c_tx, buf, 1);
+ ret = i2c_master_recv(tx->c, buf, 1);
if (ret != 1) {
zilog_error("i2c_master_recv failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
/* Send prepare command? */
buf[0] = 0x00;
buf[1] = 0x80;
- ret = i2c_master_send(&ir->c_tx, buf, 2);
+ ret = i2c_master_send(tx->c, buf, 2);
if (ret != 2) {
zilog_error("i2c_master_send failed with %d\n", ret);
return ret < 0 ? ret : -EFAULT;
* last i2c_master_recv always fails with a -5, so for now, we're
* going to skip this whole mess and say we're done on the HD PVR
*/
- if (ir->is_hdpvr) {
+ if (!tx->post_tx_ready_poll) {
dprintk("sent code %u, key %u\n", code, key);
return 0;
}
for (i = 0; i < 20; ++i) {
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout((50 * HZ + 999) / 1000);
- ret = i2c_master_send(&ir->c_tx, buf, 1);
+ ret = i2c_master_send(tx->c, buf, 1);
if (ret == 1)
break;
dprintk("NAK expected: i2c_master_send "
}
/* Seems to be an 'ok' response */
- i = i2c_master_recv(&ir->c_tx, buf, 1);
+ i = i2c_master_recv(tx->c, buf, 1);
if (i != 1) {
zilog_error("i2c_master_recv failed with %d\n", ret);
return -EFAULT;
loff_t *ppos)
{
struct IR *ir = filep->private_data;
+ struct IR_tx *tx = ir->tx;
size_t i;
int failures = 0;
- if (ir->c_tx.addr == 0)
+ if (tx == NULL)
return -ENODEV;
/* Validate user parameters */
}
/* Send boot data first if required */
- if (ir->need_boot == 1) {
- ret = send_boot_data(ir);
+ if (tx->need_boot == 1) {
+ ret = send_boot_data(tx);
if (ret == 0)
- ir->need_boot = 0;
+ tx->need_boot = 0;
}
/* Send the code */
if (ret == 0) {
- ret = send_code(ir, (unsigned)command >> 16,
+ ret = send_code(tx, (unsigned)command >> 16,
(unsigned)command & 0xFFFF);
if (ret == -EPROTO) {
mutex_unlock(&ir->ir_lock);
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout((100 * HZ + 999) / 1000);
- ir->need_boot = 1;
+ tx->need_boot = 1;
++failures;
} else
i += sizeof(int);
static unsigned int poll(struct file *filep, poll_table *wait)
{
struct IR *ir = filep->private_data;
+ struct IR_rx *rx = ir->rx;
unsigned int ret;
dprintk("poll called\n");
- if (ir->c_rx.addr == 0)
+ if (rx == NULL)
return -ENODEV;
- mutex_lock(&ir->buf_lock);
+ mutex_lock(&rx->buf_lock);
- poll_wait(filep, &ir->buf.wait_poll, wait);
+ poll_wait(filep, &rx->buf.wait_poll, wait);
dprintk("poll result = %s\n",
- lirc_buffer_empty(&ir->buf) ? "0" : "POLLIN|POLLRDNORM");
+ lirc_buffer_empty(&rx->buf) ? "0" : "POLLIN|POLLRDNORM");
- ret = lirc_buffer_empty(&ir->buf) ? 0 : (POLLIN|POLLRDNORM);
+ ret = lirc_buffer_empty(&rx->buf) ? 0 : (POLLIN|POLLRDNORM);
- mutex_unlock(&ir->buf_lock);
+ mutex_unlock(&rx->buf_lock);
return ret;
}
int result;
unsigned long mode, features = 0;
- if (ir->c_rx.addr != 0)
+ features |= LIRC_CAN_SEND_PULSE;
+ if (ir->rx != NULL)
features |= LIRC_CAN_REC_LIRCCODE;
- if (ir->c_tx.addr != 0)
- features |= LIRC_CAN_SEND_PULSE;
switch (cmd) {
case LIRC_GET_LENGTH:
result = -EINVAL;
break;
case LIRC_GET_SEND_MODE:
- if (!(features&LIRC_CAN_SEND_MASK))
- return -ENOSYS;
-
result = put_user(LIRC_MODE_PULSE, (unsigned long *) arg);
break;
case LIRC_SET_SEND_MODE:
- if (!(features&LIRC_CAN_SEND_MASK))
- return -ENOSYS;
-
result = get_user(mode, (unsigned long *) arg);
if (!result && mode != LIRC_MODE_PULSE)
return -EINVAL;
return result;
}
+/* ir_devices_lock must be held */
+static struct IR *find_ir_device_by_minor(unsigned int minor)
+{
+ if (minor >= MAX_IRCTL_DEVICES)
+ return NULL;
+
+ return ir_devices[minor];
+}
+
/*
* Open the IR device. Get hold of our IR structure and
* stash it in private_data for the file
{
struct IR *ir;
int ret;
+ unsigned int minor = MINOR(node->i_rdev);
/* find our IR struct */
- unsigned minor = MINOR(node->i_rdev);
- if (minor >= MAX_IRCTL_DEVICES) {
- dprintk("minor %d: open result = -ENODEV\n",
- minor);
+ mutex_lock(&ir_devices_lock);
+ ir = find_ir_device_by_minor(minor);
+ mutex_unlock(&ir_devices_lock);
+
+ if (ir == NULL)
return -ENODEV;
- }
- ir = ir_devices[minor];
/* increment in use count */
mutex_lock(&ir->ir_lock);
static int ir_remove(struct i2c_client *client);
static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id);
-static int ir_command(struct i2c_client *client, unsigned int cmd, void *arg);
#define ID_FLAG_TX 0x01
#define ID_FLAG_HDPVR 0x02
},
.probe = ir_probe,
.remove = ir_remove,
- .command = ir_command,
.id_table = ir_transceiver_id,
};
.release = close
};
-static int ir_remove(struct i2c_client *client)
+static void destroy_rx_kthread(struct IR_rx *rx)
{
- struct IR *ir = i2c_get_clientdata(client);
+ /* stop the Rx polling thread, if it is running */
+ if (rx != NULL && !IS_ERR_OR_NULL(rx->task)) {
+ kthread_stop(rx->task);
+ rx->task = NULL;
+ }
+}
- mutex_lock(&ir->ir_lock);
+/* ir_devices_lock must be held */
+static int add_ir_device(struct IR *ir)
+{
+ int i;
- if (ir->have_rx || ir->have_tx) {
- DECLARE_COMPLETION(tn);
- DECLARE_COMPLETION(tn2);
-
- /* end up polling thread */
- if (ir->task && !IS_ERR(ir->task)) {
- ir->t_notify = &tn;
- ir->t_notify2 = &tn2;
- ir->shutdown = 1;
- wake_up_process(ir->task);
- complete(&tn2);
- wait_for_completion(&tn);
- ir->t_notify = NULL;
- ir->t_notify2 = NULL;
+ for (i = 0; i < MAX_IRCTL_DEVICES; i++)
+ if (ir_devices[i] == NULL) {
+ ir_devices[i] = ir;
+ break;
}
- } else {
- mutex_unlock(&ir->ir_lock);
- zilog_error("%s: detached from something we didn't "
- "attach to\n", __func__);
- return -ENODEV;
+ return i == MAX_IRCTL_DEVICES ? -ENOMEM : i;
+}
+
+/* ir_devices_lock must be held */
+static void del_ir_device(struct IR *ir)
+{
+ int i;
+
+ for (i = 0; i < MAX_IRCTL_DEVICES; i++)
+ if (ir_devices[i] == ir) {
+ ir_devices[i] = NULL;
+ break;
+ }
+}
+
+static int ir_remove(struct i2c_client *client)
+{
+ struct IR *ir = i2c_get_clientdata(client);
+
+ mutex_lock(&ir_devices_lock);
+
+ if (ir == NULL) {
+ /* We destroyed everything when the first client came through */
+ mutex_unlock(&ir_devices_lock);
+ return 0;
}
- /* unregister lirc driver */
- if (ir->l.minor >= 0 && ir->l.minor < MAX_IRCTL_DEVICES) {
- lirc_unregister_driver(ir->l.minor);
- ir_devices[ir->l.minor] = NULL;
+ /* Good-bye LIRC */
+ lirc_unregister_driver(ir->l.minor);
+
+ /* Good-bye Rx */
+ destroy_rx_kthread(ir->rx);
+ if (ir->rx != NULL) {
+ if (ir->rx->buf.fifo_initialized)
+ lirc_buffer_free(&ir->rx->buf);
+ i2c_set_clientdata(ir->rx->c, NULL);
+ kfree(ir->rx);
}
- /* free memory */
- lirc_buffer_free(&ir->buf);
- mutex_unlock(&ir->ir_lock);
+ /* Good-bye Tx */
+ i2c_set_clientdata(ir->tx->c, NULL);
+ kfree(ir->tx);
+
+ /* Good-bye IR */
+ del_ir_device(ir);
kfree(ir);
+ mutex_unlock(&ir_devices_lock);
return 0;
}
-static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
+
+/* ir_devices_lock must be held */
+static struct IR *find_ir_device_by_adapter(struct i2c_adapter *adapter)
{
+ int i;
struct IR *ir = NULL;
+
+ for (i = 0; i < MAX_IRCTL_DEVICES; i++)
+ if (ir_devices[i] != NULL &&
+ ir_devices[i]->adapter == adapter) {
+ ir = ir_devices[i];
+ break;
+ }
+
+ return ir;
+}
+
+static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ struct IR *ir;
struct i2c_adapter *adap = client->adapter;
- char buf;
int ret;
- int have_rx = 0, have_tx = 0;
+ bool tx_probe = false;
- dprintk("%s: adapter name (%s) nr %d, i2c_device_id name (%s), "
- "client addr=0x%02x\n",
- __func__, adap->name, adap->nr, id->name, client->addr);
+ dprintk("%s: %s on i2c-%d (%s), client addr=0x%02x\n",
+ __func__, id->name, adap->nr, adap->name, client->addr);
/*
- * FIXME - This probe function probes both the Tx and Rx
- * addresses of the IR microcontroller.
- *
- * However, the I2C subsystem is passing along one I2C client at a
- * time, based on matches to the ir_transceiver_id[] table above.
- * The expectation is that each i2c_client address will be probed
- * individually by drivers so the I2C subsystem can mark all client
- * addresses as claimed or not.
- *
- * This probe routine causes only one of the client addresses, TX or RX,
- * to be claimed. This will cause a problem if the I2C subsystem is
- * subsequently triggered to probe unclaimed clients again.
+ * The IR receiver is at i2c address 0x71.
+ * The IR transmitter is at i2c address 0x70.
*/
- /*
- * The external IR receiver is at i2c address 0x71.
- * The IR transmitter is at 0x70.
- */
- client->addr = 0x70;
- if (!disable_tx) {
- if (i2c_master_recv(client, &buf, 1) == 1)
- have_tx = 1;
- dprintk("probe 0x70 @ %s: %s\n",
- adap->name, have_tx ? "success" : "failed");
- }
+ if (id->driver_data & ID_FLAG_TX)
+ tx_probe = true;
+ else if (tx_only) /* module option */
+ return -ENXIO;
- if (!disable_rx) {
- client->addr = 0x71;
- if (i2c_master_recv(client, &buf, 1) == 1)
- have_rx = 1;
- dprintk("probe 0x71 @ %s: %s\n",
- adap->name, have_rx ? "success" : "failed");
- }
+ zilog_info("probing IR %s on %s (i2c-%d)\n",
+ tx_probe ? "Tx" : "Rx", adap->name, adap->nr);
- if (!(have_rx || have_tx)) {
- zilog_error("%s: no devices found\n", adap->name);
- goto out_nodev;
- }
+ mutex_lock(&ir_devices_lock);
- printk(KERN_INFO "lirc_zilog: chip found with %s\n",
- have_rx && have_tx ? "RX and TX" :
- have_rx ? "RX only" : "TX only");
+ /* Use a single struct IR instance for both the Rx and Tx functions */
+ ir = find_ir_device_by_adapter(adap);
+ if (ir == NULL) {
+ ir = kzalloc(sizeof(struct IR), GFP_KERNEL);
+ if (ir == NULL) {
+ ret = -ENOMEM;
+ goto out_no_ir;
+ }
+ /* store for use in ir_probe() again, and open() later on */
+ ret = add_ir_device(ir);
+ if (ret)
+ goto out_free_ir;
+
+ ir->adapter = adap;
+ mutex_init(&ir->ir_lock);
+
+ /* set lirc_dev stuff */
+ memcpy(&ir->l, &lirc_template, sizeof(struct lirc_driver));
+ ir->l.minor = minor; /* module option */
+ ir->l.code_length = 13;
+ ir->l.rbuf = NULL;
+ ir->l.fops = &lirc_fops;
+ ir->l.data = ir;
+ ir->l.dev = &adap->dev;
+ ir->l.sample_rate = 0;
+ }
- ir = kzalloc(sizeof(struct IR), GFP_KERNEL);
+ if (tx_probe) {
+ /* Set up a struct IR_tx instance */
+ ir->tx = kzalloc(sizeof(struct IR_tx), GFP_KERNEL);
+ if (ir->tx == NULL) {
+ ret = -ENOMEM;
+ goto out_free_xx;
+ }
- if (!ir)
- goto out_nomem;
+ ir->tx->c = client;
+ ir->tx->need_boot = 1;
+ ir->tx->post_tx_ready_poll =
+ (id->driver_data & ID_FLAG_HDPVR) ? false : true;
+ } else {
+ /* Set up a struct IR_rx instance */
+ ir->rx = kzalloc(sizeof(struct IR_rx), GFP_KERNEL);
+ if (ir->rx == NULL) {
+ ret = -ENOMEM;
+ goto out_free_xx;
+ }
- ret = lirc_buffer_init(&ir->buf, 2, BUFLEN / 2);
- if (ret)
- goto out_nomem;
+ ret = lirc_buffer_init(&ir->rx->buf, 2, BUFLEN / 2);
+ if (ret)
+ goto out_free_xx;
- mutex_init(&ir->ir_lock);
- mutex_init(&ir->buf_lock);
- ir->need_boot = 1;
- ir->is_hdpvr = (id->driver_data & ID_FLAG_HDPVR) ? true : false;
+ mutex_init(&ir->rx->buf_lock);
+ ir->rx->c = client;
+ ir->rx->hdpvr_data_fmt =
+ (id->driver_data & ID_FLAG_HDPVR) ? true : false;
- memcpy(&ir->l, &lirc_template, sizeof(struct lirc_driver));
- ir->l.minor = -1;
+ /* set lirc_dev stuff */
+ ir->l.rbuf = &ir->rx->buf;
+ }
- /* I2C attach to device */
i2c_set_clientdata(client, ir);
- /* initialise RX device */
- if (have_rx) {
- DECLARE_COMPLETION(tn);
- memcpy(&ir->c_rx, client, sizeof(struct i2c_client));
-
- ir->c_rx.addr = 0x71;
- strlcpy(ir->c_rx.name, ZILOG_HAUPPAUGE_IR_RX_NAME,
- I2C_NAME_SIZE);
+ /* Proceed only if we have the required Tx and Rx clients ready to go */
+ if (ir->tx == NULL ||
+ (ir->rx == NULL && !tx_only)) {
+ zilog_info("probe of IR %s on %s (i2c-%d) done. Waiting on "
+ "IR %s.\n", tx_probe ? "Tx" : "Rx", adap->name,
+ adap->nr, tx_probe ? "Rx" : "Tx");
+ goto out_ok;
+ }
+ /* initialise RX device */
+ if (ir->rx != NULL) {
/* try to fire up polling thread */
- ir->t_notify = &tn;
- ir->task = kthread_run(lirc_thread, ir, "lirc_zilog");
- if (IS_ERR(ir->task)) {
- ret = PTR_ERR(ir->task);
- zilog_error("lirc_register_driver: cannot run "
- "poll thread %d\n", ret);
- goto err;
+ ir->rx->task = kthread_run(lirc_thread, ir,
+ "zilog-rx-i2c-%d", adap->nr);
+ if (IS_ERR(ir->rx->task)) {
+ ret = PTR_ERR(ir->rx->task);
+ zilog_error("%s: could not start IR Rx polling thread"
+ "\n", __func__);
+ goto out_free_xx;
}
- wait_for_completion(&tn);
- ir->t_notify = NULL;
- ir->have_rx = 1;
}
- /* initialise TX device */
- if (have_tx) {
- memcpy(&ir->c_tx, client, sizeof(struct i2c_client));
- ir->c_tx.addr = 0x70;
- strlcpy(ir->c_tx.name, ZILOG_HAUPPAUGE_IR_TX_NAME,
- I2C_NAME_SIZE);
- ir->have_tx = 1;
- }
-
- /* set lirc_dev stuff */
- ir->l.code_length = 13;
- ir->l.rbuf = &ir->buf;
- ir->l.fops = &lirc_fops;
- ir->l.data = ir;
- ir->l.minor = minor;
- ir->l.dev = &adap->dev;
- ir->l.sample_rate = 0;
-
/* register with lirc */
ir->l.minor = lirc_register_driver(&ir->l);
if (ir->l.minor < 0 || ir->l.minor >= MAX_IRCTL_DEVICES) {
- zilog_error("ir_attach: \"minor\" must be between 0 and %d "
- "(%d)!\n", MAX_IRCTL_DEVICES-1, ir->l.minor);
+ zilog_error("%s: \"minor\" must be between 0 and %d (%d)!\n",
+ __func__, MAX_IRCTL_DEVICES-1, ir->l.minor);
ret = -EBADRQC;
- goto err;
+ goto out_free_thread;
}
- /* store this for getting back in open() later on */
- ir_devices[ir->l.minor] = ir;
-
/*
* if we have the tx device, load the 'firmware'. We do this
* after registering with lirc as otherwise hotplug seems to take
* 10s to create the lirc device.
*/
- if (have_tx) {
- /* Special TX init */
- ret = tx_init(ir);
- if (ret != 0)
- goto err;
- }
+ ret = tx_init(ir->tx);
+ if (ret != 0)
+ goto out_unregister;
+ zilog_info("probe of IR %s on %s (i2c-%d) done. IR unit ready.\n",
+ tx_probe ? "Tx" : "Rx", adap->name, adap->nr);
+out_ok:
+ mutex_unlock(&ir_devices_lock);
return 0;
-err:
- /* undo everything, hopefully... */
- if (ir->c_rx.addr)
- ir_remove(&ir->c_rx);
- if (ir->c_tx.addr)
- ir_remove(&ir->c_tx);
- return ret;
-
-out_nodev:
- zilog_error("no device found\n");
- return -ENODEV;
-
-out_nomem:
- zilog_error("memory allocation failure\n");
+out_unregister:
+ lirc_unregister_driver(ir->l.minor);
+out_free_thread:
+ destroy_rx_kthread(ir->rx);
+out_free_xx:
+ if (ir->rx != NULL) {
+ if (ir->rx->buf.fifo_initialized)
+ lirc_buffer_free(&ir->rx->buf);
+ if (ir->rx->c != NULL)
+ i2c_set_clientdata(ir->rx->c, NULL);
+ kfree(ir->rx);
+ }
+ if (ir->tx != NULL) {
+ if (ir->tx->c != NULL)
+ i2c_set_clientdata(ir->tx->c, NULL);
+ kfree(ir->tx);
+ }
+out_free_ir:
+ del_ir_device(ir);
kfree(ir);
- return -ENOMEM;
-}
-
-static int ir_command(struct i2c_client *client, unsigned int cmd, void *arg)
-{
- /* nothing */
- return 0;
+out_no_ir:
+ zilog_error("%s: probing IR %s on %s (i2c-%d) failed with %d\n",
+ __func__, tx_probe ? "Tx" : "Rx", adap->name, adap->nr,
+ ret);
+ mutex_unlock(&ir_devices_lock);
+ return ret;
}
static int __init zilog_init(void)
zilog_notify("Zilog/Hauppauge IR driver initializing\n");
mutex_init(&tx_data_lock);
+ mutex_init(&ir_devices_lock);
request_module("firmware_class");
MODULE_DESCRIPTION("Zilog/Hauppauge infrared transmitter driver (i2c stack)");
MODULE_AUTHOR("Gerd Knorr, Michal Kochanowicz, Christoph Bartelmus, "
- "Ulrich Mueller, Stefan Jahn, Jerome Brock, Mark Weaver");
+ "Ulrich Mueller, Stefan Jahn, Jerome Brock, Mark Weaver, "
+ "Andy Walls");
MODULE_LICENSE("GPL");
/* for compat with old name, which isn't all that accurate anymore */
MODULE_ALIAS("lirc_pvr150");
module_param(debug, bool, 0644);
MODULE_PARM_DESC(debug, "Enable debugging messages");
-module_param(disable_rx, bool, 0644);
-MODULE_PARM_DESC(disable_rx, "Disable the IR receiver device");
-
-module_param(disable_tx, bool, 0644);
-MODULE_PARM_DESC(disable_tx, "Disable the IR transmitter device");
+module_param(tx_only, bool, 0644);
+MODULE_PARM_DESC(tx_only, "Only handle the IR transmit function");
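The lirc_zilog hunks above start the Rx polling loop with kthread_run() and check the result with IS_ERR()/PTR_ERR(), since kthread_run() reports failure through an ERR_PTR rather than NULL. A minimal sketch of that pattern, using a hypothetical rx_ctx structure and poll function rather than the driver's own types:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

struct rx_ctx {
	struct task_struct *task;	/* polling thread once started */
};

/* placeholder poll loop; the real driver polls the IR chip here */
static int rx_poll(void *data)
{
	while (!kthread_should_stop())
		schedule_timeout_interruptible(HZ);
	return 0;
}

static int start_rx(struct rx_ctx *rx, int id)
{
	rx->task = kthread_run(rx_poll, rx, "example-rx-%d", id);
	if (IS_ERR(rx->task))
		return PTR_ERR(rx->task);	/* negative errno, never NULL */
	return 0;
}

On failure the probe path above unwinds through the goto labels (out_free_thread, out_free_xx, out_free_ir, ...), each label undoing exactly one earlier setup step.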
if ((!mfd) || (mfd->key != MFD_KEY))
return 0;
- acquire_console_sem();
+ console_lock();
fb_set_suspend(mfd->fbi, 1);
ret = msm_fb_suspend_sub(mfd);
pdev->dev.power.power_state = state;
}
- release_console_sem();
+ console_unlock();
return ret;
}
#else
if ((!mfd) || (mfd->key != MFD_KEY))
return 0;
- acquire_console_sem();
+ console_lock();
ret = msm_fb_resume_sub(mfd);
pdev->dev.power.power_state = PMSG_ON;
fb_set_suspend(mfd->fbi, 1);
- release_console_sem();
+ console_unlock();
return ret;
}
*
* For now, we just hope..
*/
- acquire_console_sem();
+ console_lock();
ignore_fb_events = 1;
if (fb_blank(fbinfo, FB_BLANK_UNBLANK)) {
ignore_fb_events = 0;
- release_console_sem();
+ console_unlock();
printk(KERN_ERR "olpc-dcon: Failed to enter CPU mode\n");
dcon_pending = DCON_SOURCE_DCON;
return;
}
ignore_fb_events = 0;
- release_console_sem();
+ console_unlock();
/* And turn off the DCON */
pdata->set_dconload(1);
}
}
- acquire_console_sem();
+ console_lock();
ignore_fb_events = 1;
if (fb_blank(fbinfo, FB_BLANK_POWERDOWN))
printk(KERN_ERR "olpc-dcon: couldn't blank fb!\n");
ignore_fb_events = 0;
- release_console_sem();
+ console_unlock();
printk(KERN_INFO "olpc-dcon: The DCON has control\n");
break;
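The acquire_console_sem()/release_console_sem() calls in these framebuffer hunks are converted to console_lock()/console_unlock(), the newer names for the same console locking primitive; the locking rules are unchanged. A hedged sketch of the usual suspend-path usage around fb_set_suspend(), with a made-up helper name:

#include <linux/console.h>
#include <linux/fb.h>

/* sketch only: fbi is assumed to be an already-registered struct fb_info */
static void example_fb_suspend(struct fb_info *fbi)
{
	console_lock();			/* was acquire_console_sem() */
	fb_set_suspend(fbi, 1);		/* must run with the console lock held */
	console_unlock();		/* was release_console_sem() */
}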
net_dev->ml_priv = (void *)pAd;
pAd->net_dev = net_dev;
- netif_stop_queue(net_dev);
-
return net_dev;
}
{USB_DEVICE(0x0411, 0x016f)}, /* MelCo.,Inc. WLI-UC-G301N */
{USB_DEVICE(0x1737, 0x0070)}, /* Linksys WUSB100 */
{USB_DEVICE(0x1737, 0x0071)}, /* Linksys WUSB600N */
+ {USB_DEVICE(0x1737, 0x0078)}, /* Linksys WUSB100v2 */
{USB_DEVICE(0x0411, 0x00e8)}, /* Buffalo WLI-UC-G300N */
{USB_DEVICE(0x050d, 0x815c)}, /* Belkin F5D8053 */
{USB_DEVICE(0x100D, 0x9031)}, /* Motorola 2770 */
u8 *ptmpchar = NULL, *ppayload, *ptr;
struct tx_desc *ptx_desc;
u32 txdscp_sz = sizeof(struct tx_desc);
+ u8 ret = _FAIL;
ulfilelength = rtl871x_open_fw(padapter, &phfwfile_hdl, &pmappedfw);
if (pmappedfw && (ulfilelength > 0)) {
update_fwhdr(&fwhdr, pmappedfw);
if (chk_fwhdr(&fwhdr, ulfilelength) == _FAIL)
- goto exit_fail;
+ goto firmware_rel;
fill_fwpriv(padapter, &fwhdr.fwpriv);
/* firmware check ok */
maxlen = (fwhdr.img_IMEM_size > fwhdr.img_SRAM_size) ?
maxlen += txdscp_sz;
ptmpchar = _malloc(maxlen + FWBUFF_ALIGN_SZ);
if (ptmpchar == NULL)
- return _FAIL;
+ goto firmware_rel;
ptx_desc = (struct tx_desc *)(ptmpchar + FWBUFF_ALIGN_SZ -
((addr_t)(ptmpchar) & (FWBUFF_ALIGN_SZ - 1)));
goto exit_fail;
} else
goto exit_fail;
- return _SUCCESS;
+ ret = _SUCCESS;
exit_fail:
kfree(ptmpchar);
- return _FAIL;
+firmware_rel:
+ release_firmware((struct firmware *)phfwfile_hdl);
+ return ret;
}
uint rtl8712_hal_init(struct _adapter *padapter)
static void r871xu_dev_remove(struct usb_interface *pusb_intf);
static struct usb_device_id rtl871x_usb_id_tbl[] = {
- /*92SU
- * Realtek */
- {USB_DEVICE(0x0bda, 0x8171)},
- {USB_DEVICE(0x0bda, 0x8172)},
+
+/* RTL8188SU */
+ /* Realtek */
+ {USB_DEVICE(0x0BDA, 0x8171)},
{USB_DEVICE(0x0bda, 0x8173)},
- {USB_DEVICE(0x0bda, 0x8174)},
{USB_DEVICE(0x0bda, 0x8712)},
{USB_DEVICE(0x0bda, 0x8713)},
{USB_DEVICE(0x0bda, 0xC512)},
- /* Abocom */
+ /* Abocom */
{USB_DEVICE(0x07B8, 0x8188)},
+ /* ASUS */
+ {USB_DEVICE(0x0B05, 0x1786)},
+ {USB_DEVICE(0x0B05, 0x1791)}, /* 11n mode disable */
+ /* Belkin */
+ {USB_DEVICE(0x050D, 0x945A)},
/* Corega */
- {USB_DEVICE(0x07aa, 0x0047)},
- /* Dlink */
- {USB_DEVICE(0x07d1, 0x3303)},
- {USB_DEVICE(0x07d1, 0x3302)},
- {USB_DEVICE(0x07d1, 0x3300)},
- /* Dlink for Skyworth */
- {USB_DEVICE(0x14b2, 0x3300)},
- {USB_DEVICE(0x14b2, 0x3301)},
- {USB_DEVICE(0x14b2, 0x3302)},
+ {USB_DEVICE(0x07AA, 0x0047)},
+ /* D-Link */
+ {USB_DEVICE(0x2001, 0x3306)},
+ {USB_DEVICE(0x07D1, 0x3306)}, /* 11n mode disable */
+ /* Edimax */
+ {USB_DEVICE(0x7392, 0x7611)},
/* EnGenius */
{USB_DEVICE(0x1740, 0x9603)},
- {USB_DEVICE(0x1740, 0x9605)},
+ /* Hawking */
+ {USB_DEVICE(0x0E66, 0x0016)},
+ /* Hercules */
+ {USB_DEVICE(0x06F8, 0xE034)},
+ {USB_DEVICE(0x06F8, 0xE032)},
+ /* Logitec */
+ {USB_DEVICE(0x0789, 0x0167)},
+ /* PCI */
+ {USB_DEVICE(0x2019, 0xAB28)},
+ {USB_DEVICE(0x2019, 0xED16)},
+ /* Sitecom */
+ {USB_DEVICE(0x0DF6, 0x0057)},
+ {USB_DEVICE(0x0DF6, 0x0045)},
+ {USB_DEVICE(0x0DF6, 0x0059)}, /* 11n mode disable */
+ {USB_DEVICE(0x0DF6, 0x004B)},
+ {USB_DEVICE(0x0DF6, 0x0063)},
+ /* Sweex */
+ {USB_DEVICE(0x177F, 0x0154)},
+ /* Thinkware */
+ {USB_DEVICE(0x0BDA, 0x5077)},
+ /* Toshiba */
+ {USB_DEVICE(0x1690, 0x0752)},
+ /* - */
+ {USB_DEVICE(0x20F4, 0x646B)},
+ {USB_DEVICE(0x083A, 0xC512)},
+
+/* RTL8191SU */
+ /* Realtek */
+ {USB_DEVICE(0x0BDA, 0x8172)},
+ /* Amigo */
+ {USB_DEVICE(0x0EB0, 0x9061)},
+ /* ASUS/EKB */
+ {USB_DEVICE(0x0BDA, 0x8172)},
+ {USB_DEVICE(0x13D3, 0x3323)},
+ {USB_DEVICE(0x13D3, 0x3311)}, /* 11n mode disable */
+ {USB_DEVICE(0x13D3, 0x3342)},
+ /* ASUS/EKBLenovo */
+ {USB_DEVICE(0x13D3, 0x3333)},
+ {USB_DEVICE(0x13D3, 0x3334)},
+ {USB_DEVICE(0x13D3, 0x3335)}, /* 11n mode disable */
+ {USB_DEVICE(0x13D3, 0x3336)}, /* 11n mode disable */
+ /* ASUS/Media BOX */
+ {USB_DEVICE(0x13D3, 0x3309)},
/* Belkin */
- {USB_DEVICE(0x050d, 0x815F)},
- {USB_DEVICE(0x050d, 0x945A)},
- {USB_DEVICE(0x050d, 0x845A)},
- /* Guillemot */
- {USB_DEVICE(0x06f8, 0xe031)},
+ {USB_DEVICE(0x050D, 0x815F)},
+ /* D-Link */
+ {USB_DEVICE(0x07D1, 0x3302)},
+ {USB_DEVICE(0x07D1, 0x3300)},
+ {USB_DEVICE(0x07D1, 0x3303)},
/* Edimax */
- {USB_DEVICE(0x7392, 0x7611)},
{USB_DEVICE(0x7392, 0x7612)},
- {USB_DEVICE(0x7392, 0x7622)},
- /* Sitecom */
- {USB_DEVICE(0x0DF6, 0x0045)},
+ /* EnGenius */
+ {USB_DEVICE(0x1740, 0x9605)},
+ /* Guillemot */
+ {USB_DEVICE(0x06F8, 0xE031)},
/* Hawking */
{USB_DEVICE(0x0E66, 0x0015)},
- {USB_DEVICE(0x0E66, 0x0016)},
- {USB_DEVICE(0x0b05, 0x1786)},
- {USB_DEVICE(0x0b05, 0x1791)}, /* 11n mode disable */
-
+ /* Mediao */
{USB_DEVICE(0x13D3, 0x3306)},
- {USB_DEVICE(0x13D3, 0x3309)},
+ /* PCI */
+ {USB_DEVICE(0x2019, 0xED18)},
+ {USB_DEVICE(0x2019, 0x4901)},
+ /* Sitecom */
+ {USB_DEVICE(0x0DF6, 0x0058)},
+ {USB_DEVICE(0x0DF6, 0x0049)},
+ {USB_DEVICE(0x0DF6, 0x004C)},
+ {USB_DEVICE(0x0DF6, 0x0064)},
+ /* Skyworth */
+ {USB_DEVICE(0x14b2, 0x3300)},
+ {USB_DEVICE(0x14b2, 0x3301)},
+ {USB_DEVICE(0x14B2, 0x3302)},
+ /* - */
+ {USB_DEVICE(0x04F2, 0xAFF2)},
+ {USB_DEVICE(0x04F2, 0xAFF5)},
+ {USB_DEVICE(0x04F2, 0xAFF6)},
+ {USB_DEVICE(0x13D3, 0x3339)},
+ {USB_DEVICE(0x13D3, 0x3340)}, /* 11n mode disable */
+ {USB_DEVICE(0x13D3, 0x3341)}, /* 11n mode disable */
{USB_DEVICE(0x13D3, 0x3310)},
- {USB_DEVICE(0x13D3, 0x3311)}, /* 11n mode disable */
{USB_DEVICE(0x13D3, 0x3325)},
- {USB_DEVICE(0x083A, 0xC512)},
+
+/* RTL8192SU */
+ /* Realtek */
+ {USB_DEVICE(0x0BDA, 0x8174)},
+ {USB_DEVICE(0x0BDA, 0x8174)},
+ /* Belkin */
+ {USB_DEVICE(0x050D, 0x845A)},
+ /* Corega */
+ {USB_DEVICE(0x07AA, 0x0051)},
+ /* Edimax */
+ {USB_DEVICE(0x7392, 0x7622)},
+ /* NEC */
+ {USB_DEVICE(0x0409, 0x02B6)},
{}
};
static struct specific_device_id specific_device_id_tbl[] = {
{.idVendor = 0x0b05, .idProduct = 0x1791,
.flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x0df6, .idProduct = 0x0059,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x13d3, .idProduct = 0x3306,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
{.idVendor = 0x13D3, .idProduct = 0x3311,
.flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x13d3, .idProduct = 0x3335,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x13d3, .idProduct = 0x3336,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x13d3, .idProduct = 0x3340,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
+ {.idVendor = 0x13d3, .idProduct = 0x3341,
+ .flags = SPEC_DEV_ID_DISABLE_HT},
{}
};
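The r8712u ID lists above are standard usb_device_id tables: the USB core matches on the VID/PID pairs and MODULE_DEVICE_TABLE() exports the table so the module can be autoloaded on hotplug. A minimal sketch of the shape, reusing two IDs from the table above:

#include <linux/module.h>
#include <linux/usb.h>

static const struct usb_device_id example_id_tbl[] = {
	{USB_DEVICE(0x0bda, 0x8171)},	/* Realtek RTL8188SU */
	{USB_DEVICE(0x07b8, 0x8188)},	/* Abocom */
	{}				/* all-zero terminating entry */
};
MODULE_DEVICE_TABLE(usb, example_id_tbl);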
/* when doing suspend, call fb apis and pci apis */
if (msg.event == PM_EVENT_SUSPEND) {
- acquire_console_sem();
+ console_lock();
fb_set_suspend(&sfb->fb, 1);
- release_console_sem();
+ console_unlock();
retv = pci_save_state(pdev);
pci_disable_device(pdev);
retv = pci_choose_state(pdev, msg);
smtcfb_setmode(sfb);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(&sfb->fb, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
unsigned long flags;
len = strlen(buf);
- if (len > 0 || len < 3) {
+ if (len > 0 && len < 3) {
ch = buf[0];
if (ch == '\n')
ch = '0';
input_set_abs_params(rmi4_data->input_dev, ABS_MT_TOUCH_MAJOR, 0,
MAX_TOUCH_MAJOR, 0, 0);
- retval = input_register_device(rmi4_data->input_dev);
- if (retval) {
- dev_err(&client->dev, "%s:input register failed\n", __func__);
- goto err_input_register;
- }
-
/* Clear interrupts */
synaptics_rmi4_i2c_block_read(rmi4_data,
rmi4_data->fn01_data_base_addr + 1, intr_status,
if (retval) {
dev_err(&client->dev, "%s:Unable to get attn irq %d\n",
__func__, platformdata->irq_number);
- goto err_request_irq;
+ goto err_unset_clientdata;
+ }
+
+ retval = input_register_device(rmi4_data->input_dev);
+ if (retval) {
+ dev_err(&client->dev, "%s:input register failed\n", __func__);
+ goto err_free_irq;
}
return retval;
-err_request_irq:
+err_free_irq:
free_irq(platformdata->irq_number, rmi4_data);
- input_unregister_device(rmi4_data->input_dev);
-err_input_register:
+err_unset_clientdata:
i2c_set_clientdata(client, NULL);
err_query_dev:
if (platformdata->regulator_en) {
* Calls the Bridge's CHNL_ISR to determine if this interrupt is ours, then
* schedules a DPC to dispatch I/O.
*/
-void io_mbox_msg(u32 msg)
+int io_mbox_msg(struct notifier_block *self, unsigned long len, void *msg)
{
struct io_mgr *pio_mgr;
struct dev_object *dev_obj;
dev_get_io_mgr(dev_obj, &pio_mgr);
if (!pio_mgr)
- return;
+ return NOTIFY_BAD;
- pio_mgr->intr_val = (u16)msg;
+ pio_mgr->intr_val = (u16)((u32)msg);
if (pio_mgr->intr_val & MBX_PM_CLASS)
io_dispatch_pm(pio_mgr);
spin_unlock_irqrestore(&pio_mgr->dpc_lock, flags);
tasklet_schedule(&pio_mgr->dpc_tasklet);
}
- return;
+ return NOTIFY_OK;
}
/*
bridge_msg_set_queue_id,
};
+static struct notifier_block dsp_mbox_notifier = {
+ .notifier_call = io_mbox_msg,
+};
+
static inline void flush_all(struct bridge_dev_context *dev_context)
{
if (dev_context->dw_brd_state == BRD_DSP_HIBERNATION ||
* Enable Mailbox events and also drain any pending
* stale messages.
*/
- dev_context->mbox = omap_mbox_get("dsp");
+ dev_context->mbox = omap_mbox_get("dsp", &dsp_mbox_notifier);
if (IS_ERR(dev_context->mbox)) {
dev_context->mbox = NULL;
pr_err("%s: Failed to get dsp mailbox handle\n",
}
if (!status) {
- dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg;
-
/*PM_IVA2GRPSEL_PER = 0xC0;*/
temp = readl(resources->dw_per_pm_base + 0xA8);
temp = (temp & 0xFFFFFF30) | 0xC0;
/* Disable the mailbox interrupts */
if (dev_context->mbox) {
omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
- omap_mbox_put(dev_context->mbox);
+ omap_mbox_put(dev_context->mbox, &dsp_mbox_notifier);
dev_context->mbox = NULL;
}
/* Reset IVA2 clocks*/
pt_attrs = kzalloc(sizeof(struct pg_table_attrs), GFP_KERNEL);
if (pt_attrs != NULL) {
- /* Assuming that we use only DSP's memory map
- * until 0x4000:0000 , we would need only 1024
- * L1 enties i.e L1 size = 4K */
- pt_attrs->l1_size = 0x1000;
+ pt_attrs->l1_size = SZ_16K; /* 4096 entries of 32 bits */
align_size = pt_attrs->l1_size;
/* Align sizes are expected to be power of 2 */
/* we like to get aligned on L1 table size */
/*
* ======== io_mbox_msg ========
* Purpose:
- * Main interrupt handler for the shared memory Bridge channel manager.
- * Calls the Bridge's chnlsm_isr to determine if this interrupt is ours,
- * then schedules a DPC to dispatch I/O.
+ * Main message handler for the shared memory Bridge channel manager.
+ * Determine if this message is ours, then schedule a DPC to
+ * dispatch I/O.
* Parameters:
- * ref_data: Pointer to the channel manager object for this board.
- * Set in an initial call to ISR_Install().
+ * self: Pointer to its own notifier_block struct.
+ * len: Length of message.
+ * msg: Message code received.
* Returns:
- * TRUE if interrupt handled; FALSE otherwise.
- * Requires:
- * Must be in locked memory if executing in kernel mode.
- * Must only call functions which are in locked memory if Kernel mode.
- * Must only call asynchronous services.
- * Interrupts are disabled and EOI for this interrupt has been sent.
- * Ensures:
+ * NOTIFY_OK if handled; NOTIFY_BAD otherwise.
*/
-void io_mbox_msg(u32 msg);
+int io_mbox_msg(struct notifier_block *self, unsigned long len, void *msg);
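The tidspbridge conversion replaces the raw mailbox callback with a struct notifier_block whose notifier_call returns NOTIFY_OK or NOTIFY_BAD, matching the notifier-chain interface that omap_mbox_get() now registers against. A small sketch of the callback shape this implies; the handler body and names are illustrative only:

#include <linux/notifier.h>
#include <linux/types.h>

static int example_mbox_msg(struct notifier_block *self, unsigned long len,
			    void *msg)
{
	/* the mailbox payload arrives as the void * argument */
	u32 val = (u32)(unsigned long)msg;

	if (!val)
		return NOTIFY_BAD;	/* could not handle the message */
	/* ... dispatch val to the I/O manager ... */
	return NOTIFY_OK;
}

static struct notifier_block example_mbox_notifier = {
	.notifier_call = example_mbox_msg,
};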
/*
* ======== io_request_chnl ========
* ------------------------------------------------------------------
*/
-int tm6000_v4l2_register(struct tm6000_core *dev)
+static struct video_device *vdev_init(struct tm6000_core *dev,
+ const struct video_device
+ *template, const char *type_name)
{
- int ret = -1;
struct video_device *vfd;
vfd = video_device_alloc();
- if(!vfd) {
+ if (NULL == vfd)
+ return NULL;
+
+ *vfd = *template;
+ vfd->v4l2_dev = &dev->v4l2_dev;
+ vfd->release = video_device_release;
+ vfd->debug = tm6000_debug;
+ vfd->lock = &dev->lock;
+
+ snprintf(vfd->name, sizeof(vfd->name), "%s %s", dev->name, type_name);
+
+ video_set_drvdata(vfd, dev);
+ return vfd;
+}
+
+int tm6000_v4l2_register(struct tm6000_core *dev)
+{
+ int ret = -1;
+
+ dev->vfd = vdev_init(dev, &tm6000_template, "video");
+
+ if (!dev->vfd) {
+ printk(KERN_INFO "%s: can't register video device\n",
+ dev->name);
return -ENOMEM;
}
- dev->vfd = vfd;
/* init video dma queues */
INIT_LIST_HEAD(&dev->vidq.active);
INIT_LIST_HEAD(&dev->vidq.queued);
- memcpy(dev->vfd, &tm6000_template, sizeof(*(dev->vfd)));
- dev->vfd->debug = tm6000_debug;
- dev->vfd->lock = &dev->lock;
+ ret = video_register_device(dev->vfd, VFL_TYPE_GRABBER, video_nr);
- vfd->v4l2_dev = &dev->v4l2_dev;
- video_set_drvdata(vfd, dev);
+ if (ret < 0) {
+ printk(KERN_INFO "%s: can't register video device\n",
+ dev->name);
+ return ret;
+ }
+
+ printk(KERN_INFO "%s: registered device %s\n",
+ dev->name, video_device_node_name(dev->vfd));
- ret = video_register_device(dev->vfd, VFL_TYPE_GRABBER, video_nr);
printk(KERN_INFO "Trident TVMaster TM5600/TM6000/TM6010 USB2 board (Load status: %d)\n", ret);
return ret;
}
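The tm6000 change moves video_device setup into a vdev_init() helper: allocate with video_device_alloc(), copy a static template, set drvdata, then register. A hedged sketch of that flow with the tm6000-specific fields left out and a hypothetical helper name:

#include <media/v4l2-dev.h>

static struct video_device *example_vdev_init(const struct video_device *tmpl,
					      void *drvdata)
{
	struct video_device *vfd = video_device_alloc();

	if (!vfd)
		return NULL;
	*vfd = *tmpl;				/* copy fops, ioctl ops, name */
	vfd->release = video_device_release;	/* freed on last release */
	video_set_drvdata(vfd, drvdata);
	if (video_register_device(vfd, VFL_TYPE_GRABBER, -1) < 0) {
		video_device_release(vfd);
		return NULL;
	}
	return vfd;
}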
struct stub_device {
struct usb_interface *interface;
+ struct usb_device *udev;
struct list_head list;
struct usbip_device ud;
static void stub_device_reset(struct usbip_device *ud)
{
struct stub_device *sdev = container_of(ud, struct stub_device, ud);
- struct usb_device *udev = interface_to_usbdev(sdev->interface);
+ struct usb_device *udev = sdev->udev;
int ret;
usbip_udbg("device reset");
+
ret = usb_lock_device_for_reset(udev, sdev->interface);
if (ret < 0) {
dev_err(&udev->dev, "lock for reset\n");
*
* Allocates and initializes a new stub_device struct.
*/
-static struct stub_device *stub_device_alloc(struct usb_interface *interface)
+static struct stub_device *stub_device_alloc(struct usb_device *udev,
+ struct usb_interface *interface)
{
struct stub_device *sdev;
int busnum = interface_to_busnum(interface);
return NULL;
}
- sdev->interface = interface;
+ sdev->interface = usb_get_intf(interface);
+ sdev->udev = usb_get_dev(udev);
/*
* devid is defined with devnum when this driver is first allocated.
return err;
}
+ usb_get_intf(interface);
return 0;
}
/* ok. this is my device. */
- sdev = stub_device_alloc(interface);
+ sdev = stub_device_alloc(udev, interface);
if (!sdev)
return -ENOMEM;
dev_err(&interface->dev, "create sysfs files for %s\n",
udev_busid);
usb_set_intfdata(interface, NULL);
+ usb_put_intf(interface);
+
busid_priv->interf_count = 0;
busid_priv->sdev = NULL;
if (busid_priv->interf_count > 1) {
busid_priv->interf_count--;
shutdown_busid(busid_priv);
+ usb_put_intf(interface);
return;
}
/* 1. shutdown the current connection */
shutdown_busid(busid_priv);
+ usb_put_dev(sdev->udev);
+ usb_put_intf(interface);
+
/* 3. free sdev */
busid_priv->sdev = NULL;
stub_device_free(sdev);
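The usbip stub hunks pair every usb_get_intf()/usb_get_dev() taken at bind time with usb_put_intf()/usb_put_dev() in the disconnect path, so the stub keeps its own references instead of trusting interface_to_usbdev() to stay valid. A hedged sketch of the pairing, with hypothetical names:

#include <linux/usb.h>

struct example_stub {
	struct usb_interface *interface;
	struct usb_device *udev;
};

static void example_bind(struct example_stub *s, struct usb_device *udev,
			 struct usb_interface *intf)
{
	s->interface = usb_get_intf(intf);	/* take a reference */
	s->udev = usb_get_dev(udev);
}

static void example_unbind(struct example_stub *s)
{
	usb_put_dev(s->udev);			/* drop what bind took */
	usb_put_intf(s->interface);
}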
static int get_pipe(struct stub_device *sdev, int epnum, int dir)
{
- struct usb_device *udev = interface_to_usbdev(sdev->interface);
+ struct usb_device *udev = sdev->udev;
struct usb_host_endpoint *ep;
struct usb_endpoint_descriptor *epd = NULL;
int ret;
struct stub_priv *priv;
struct usbip_device *ud = &sdev->ud;
- struct usb_device *udev = interface_to_usbdev(sdev->interface);
+ struct usb_device *udev = sdev->udev;
int pipe = get_pipe(sdev, pdu->base.ep, pdu->base.direction);
* But, the index of this array begins from 0.
*/
struct vhci_device vdev[VHCI_NPORTS];
-
- /* vhci_device which has not been assiged its address yet */
- int pending_port;
};
void vhci_rx_loop(struct usbip_task *ut);
void vhci_tx_loop(struct usbip_task *ut);
+struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev,
+ __u32 seqnum);
+
#define hardware (&the_controller->pdev.dev)
static inline struct vhci_device *port_to_vdev(__u32 port)
* the_controller->vdev[rhport].ud.status = VDEV_CONNECT;
* spin_unlock(&the_controller->vdev[rhport].ud.lock); */
- the_controller->pending_port = rhport;
-
spin_unlock_irqrestore(&the_controller->lock, flags);
usb_hcd_poll_rh_status(vhci_to_hcd(the_controller));
struct device *dev = &urb->dev->dev;
int ret = 0;
unsigned long flags;
+ struct vhci_device *vdev;
usbip_dbg_vhci_hc("enter, usb_hcd %p urb %p mem_flags %d\n",
hcd, urb, mem_flags);
return urb->status;
}
+ vdev = port_to_vdev(urb->dev->portnum-1);
+
+ /* refuse enqueue for dead connection */
+ spin_lock(&vdev->ud.lock);
+ if (vdev->ud.status == VDEV_ST_NULL || vdev->ud.status == VDEV_ST_ERROR) {
+ usbip_uerr("enqueue for inactive port %d\n", vdev->rhport);
+ spin_unlock(&vdev->ud.lock);
+ spin_unlock_irqrestore(&the_controller->lock, flags);
+ return -ENODEV;
+ }
+ spin_unlock(&vdev->ud.lock);
+
ret = usb_hcd_link_urb_to_ep(hcd, urb);
if (ret)
goto no_need_unlink;
__u8 type = usb_pipetype(urb->pipe);
struct usb_ctrlrequest *ctrlreq =
(struct usb_ctrlrequest *) urb->setup_packet;
- struct vhci_device *vdev =
- port_to_vdev(the_controller->pending_port);
if (type != PIPE_CONTROL || !ctrlreq) {
dev_err(dev, "invalid request to devnum 0\n");
dev_info(dev, "SetAddress Request (%d) to port %d\n",
ctrlreq->wValue, vdev->rhport);
- vdev->udev = urb->dev;
+ if (vdev->udev)
+ usb_put_dev(vdev->udev);
+ vdev->udev = usb_get_dev(urb->dev);
spin_lock(&vdev->ud.lock);
vdev->ud.status = VDEV_ST_USED;
"Get_Descriptor to device 0 "
"(get max pipe size)\n");
- /* FIXME: reference count? (usb_get_dev()) */
- vdev->udev = urb->dev;
+ if (vdev->udev)
+ usb_put_dev(vdev->udev);
+ vdev->udev = usb_get_dev(urb->dev);
goto out;
default:
return 0;
}
-
static void vhci_device_unlink_cleanup(struct vhci_device *vdev)
{
struct vhci_unlink *unlink, *tmp;
spin_lock(&vdev->priv_lock);
list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) {
+ usbip_uinfo("unlink cleanup tx %lu\n", unlink->unlink_seqnum);
list_del(&unlink->list);
kfree(unlink);
}
list_for_each_entry_safe(unlink, tmp, &vdev->unlink_rx, list) {
+ struct urb *urb;
+
+ /* give back URB of unanswered unlink request */
+ usbip_uinfo("unlink cleanup rx %lu\n", unlink->unlink_seqnum);
+
+ urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum);
+ if (!urb) {
+ usbip_uinfo("the urb (seqnum %lu) was already given back\n",
+ unlink->unlink_seqnum);
+ list_del(&unlink->list);
+ kfree(unlink);
+ continue;
+ }
+
+ urb->status = -ENODEV;
+
+ spin_lock(&the_controller->lock);
+ usb_hcd_unlink_urb_from_ep(vhci_to_hcd(the_controller), urb);
+ spin_unlock(&the_controller->lock);
+
+ usb_hcd_giveback_urb(vhci_to_hcd(the_controller), urb, urb->status);
+
list_del(&unlink->list);
kfree(unlink);
}
vdev->speed = 0;
vdev->devid = 0;
+ if (vdev->udev)
+ usb_put_dev(vdev->udev);
+ vdev->udev = NULL;
+
ud->tcp_socket = NULL;
ud->status = VDEV_ST_NULL;
#include "vhci.h"
-/* get URB from transmitted urb queue */
-static struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev,
+/* get URB from transmitted urb queue. caller must hold vdev->priv_lock */
+struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev,
__u32 seqnum)
{
struct vhci_priv *priv, *tmp;
struct urb *urb = NULL;
int status;
- spin_lock(&vdev->priv_lock);
-
list_for_each_entry_safe(priv, tmp, &vdev->priv_rx, list) {
if (priv->seqnum == seqnum) {
urb = priv->urb;
}
}
- spin_unlock(&vdev->priv_lock);
-
return urb;
}
struct usbip_device *ud = &vdev->ud;
struct urb *urb;
+ spin_lock(&vdev->priv_lock);
urb = pickup_urb_and_free_priv(vdev, pdu->base.seqnum);
+ spin_unlock(&vdev->priv_lock);
if (!urb) {
usbip_uerr("cannot find a urb of seqnum %u\n",
return;
}
+ spin_lock(&vdev->priv_lock);
+
urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum);
+
+ spin_unlock(&vdev->priv_lock);
+
if (!urb) {
/*
* I get the result of a unlink request. But, it seems that I
return;
}
+static int vhci_priv_tx_empty(struct vhci_device *vdev)
+{
+ int empty = 0;
+
+ spin_lock(&vdev->priv_lock);
+
+ empty = list_empty(&vdev->priv_rx);
+
+ spin_unlock(&vdev->priv_lock);
+
+ return empty;
+}
+
/* recv a pdu */
static void vhci_rx_pdu(struct usbip_device *ud)
{
memset(&pdu, 0, sizeof(pdu));
-
/* 1. receive a pdu header */
ret = usbip_xmit(0, ud->tcp_socket, (char *) &pdu, sizeof(pdu), 0);
+ if (ret < 0) {
+ if (ret == -ECONNRESET)
+ usbip_uinfo("connection reset by peer\n");
+ else if (ret == -EAGAIN) {
+ /* ignore if connection was idle */
+ if (vhci_priv_tx_empty(vdev))
+ return;
+ usbip_uinfo("connection timed out with pending urbs\n");
+ } else if (ret != -ERESTARTSYS)
+ usbip_uinfo("xmit failed %d\n", ret);
+
+ usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
+ return;
+ }
+ if (ret == 0) {
+ usbip_uinfo("connection closed");
+ usbip_event_add(ud, VDEV_EVENT_DOWN);
+ return;
+ }
if (ret != sizeof(pdu)) {
- usbip_uerr("receiving pdu failed! size is %d, should be %d\n",
+ usbip_uerr("received pdu size is %d, should be %d\n",
ret, (unsigned int)sizeof(pdu));
usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
return;
unsigned char XGI_IsLCDDualLink(struct vb_device_info *pVBInfo)
{
- if ((((pVBInfo->VBInfo & SetCRT2ToLCD) | SetCRT2ToLCDA))
- && (pVBInfo->LCDInfo & SetLCDDualLink)) /* shampoo0129 */
+ if ((pVBInfo->VBInfo & (SetCRT2ToLCD | SetCRT2ToLCDA)) &&
+ (pVBInfo->LCDInfo & SetLCDDualLink)) /* shampoo0129 */
return 1;
return 0;
if (pVBInfo->IF_DEF_LVDS == 0) {
CRT2Index = CRT2Index >> 6; /* for LCD */
- if (((pVBInfo->VBInfo & SetCRT2ToLCD) | SetCRT2ToLCDA)) { /*301b*/
+ if (pVBInfo->VBInfo & (SetCRT2ToLCD | SetCRT2ToLCDA)) { /*301b*/
if (pVBInfo->LCDResInfo != Panel1024x768)
VCLKIndex = LCDXlat2VCLK[CRT2Index];
else
if (zram_test_flag(zram, index, ZRAM_ZERO)) {
handle_zero_page(page);
+ index++;
continue;
}
pr_debug("Read before write: sector=%lu, size=%u",
(ulong)(bio->bi_sector), bio->bi_size);
/* Do nothing */
+ index++;
continue;
}
/* Page is stored uncompressed since it's incompressible */
if (unlikely(zram_test_flag(zram, index, ZRAM_UNCOMPRESSED))) {
handle_uncompressed_page(zram, page, index);
+ index++;
continue;
}
mutex_unlock(&zram->lock);
zram_stat_inc(&zram->stats.pages_zero);
zram_set_flag(zram, index, ZRAM_ZERO);
+ index++;
continue;
}
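The zram hunks all cure the same defect: each continue in the per-request loop skipped the index increment at the bottom of the loop body, so the next iteration reprocessed a stale page index. A minimal standalone sketch of the pattern and its fix; the helper names are made up, not zram functions:

/* hypothetical helpers standing in for zram's zero/uncompressed handling */
static int page_is_zero(unsigned long i) { return (i & 1) == 0; }
static void handle_zero_page(unsigned long i) { (void)i; }
static void compress_page(unsigned long i) { (void)i; }

static void process_pages(unsigned long nr_pages)
{
	unsigned long index = 0;

	while (index < nr_pages) {
		if (page_is_zero(index)) {
			handle_zero_page(index);
			index++;	/* the fix: advance before 'continue' */
			continue;
		}
		compress_page(index);
		index++;
	}
}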
target_core_transport.o \
target_core_cdb.o \
target_core_ua.o \
- target_core_rd.o \
- target_core_mib.o
+ target_core_rd.o
obj-$(CONFIG_TARGET_CORE) += target_core_mod.o
#include <linux/parser.h>
#include <linux/syscalls.h>
#include <linux/configfs.h>
-#include <linux/proc_fs.h>
#include <target/target_core_base.h>
#include <target/target_core_device.h>
{
struct se_subsystem_dev *se_dev = container_of(to_config_group(item),
struct se_subsystem_dev, se_dev_group);
- struct config_group *dev_cg;
-
- if (!(se_dev))
- return;
+ struct se_hba *hba = item_to_hba(&se_dev->se_dev_hba->hba_group.cg_item);
+ struct se_subsystem_api *t = hba->transport;
+ struct config_group *dev_cg = &se_dev->se_dev_group;
- dev_cg = &se_dev->se_dev_group;
kfree(dev_cg->default_groups);
+ /*
+ * This pointer will be set when the storage is enabled with:
+ * `echo 1 > $CONFIGFS/core/$HBA/$DEV/dev_enable`
+ */
+ if (se_dev->se_dev_ptr) {
+ printk(KERN_INFO "Target_Core_ConfigFS: Calling se_free_"
+ "virtual_device() for se_dev_ptr: %p\n",
+ se_dev->se_dev_ptr);
+
+ se_free_virtual_device(se_dev->se_dev_ptr, hba);
+ } else {
+ /*
+ * Release struct se_subsystem_dev->se_dev_su_ptr..
+ */
+ printk(KERN_INFO "Target_Core_ConfigFS: Calling t->free_"
+ "device() for se_dev_su_ptr: %p\n",
+ se_dev->se_dev_su_ptr);
+
+ t->free_device(se_dev->se_dev_su_ptr);
+ }
+
+ printk(KERN_INFO "Target_Core_ConfigFS: Deallocating se_subsystem"
+ "_dev_t: %p\n", se_dev);
+ kfree(se_dev);
}
static ssize_t target_core_dev_show(struct config_item *item,
NULL,
};
+static void target_core_alua_lu_gp_release(struct config_item *item)
+{
+ struct t10_alua_lu_gp *lu_gp = container_of(to_config_group(item),
+ struct t10_alua_lu_gp, lu_gp_group);
+
+ core_alua_free_lu_gp(lu_gp);
+}
+
static struct configfs_item_operations target_core_alua_lu_gp_ops = {
+ .release = target_core_alua_lu_gp_release,
.show_attribute = target_core_alua_lu_gp_attr_show,
.store_attribute = target_core_alua_lu_gp_attr_store,
};
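The recurring theme in these target-core hunks is moving object teardown out of the configfs drop_item() callbacks and into configfs_item_operations->release(), which configfs invokes once the final reference is dropped by config_item_put(). A hedged sketch of the shape with a hypothetical object:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/configfs.h>

struct example_obj {
	struct config_group group;
};

static void example_release(struct config_item *item)
{
	struct example_obj *obj = container_of(to_config_group(item),
					       struct example_obj, group);

	kfree(obj);	/* real teardown lives here, not in drop_item() */
}

static struct configfs_item_operations example_item_ops = {
	.release = example_release,	/* runs on the final config_item_put() */
};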
printk(KERN_INFO "Target_Core_ConfigFS: Releasing ALUA Logical Unit"
" Group: core/alua/lu_gps/%s, ID: %hu\n",
config_item_name(item), lu_gp->lu_gp_id);
-
+ /*
+ * core_alua_free_lu_gp() is called from target_core_alua_lu_gp_ops->release()
+ * -> target_core_alua_lu_gp_release()
+ */
config_item_put(item);
- core_alua_free_lu_gp(lu_gp);
}
static struct configfs_group_operations target_core_alua_lu_gps_group_ops = {
NULL,
};
+static void target_core_alua_tg_pt_gp_release(struct config_item *item)
+{
+ struct t10_alua_tg_pt_gp *tg_pt_gp = container_of(to_config_group(item),
+ struct t10_alua_tg_pt_gp, tg_pt_gp_group);
+
+ core_alua_free_tg_pt_gp(tg_pt_gp);
+}
+
static struct configfs_item_operations target_core_alua_tg_pt_gp_ops = {
+ .release = target_core_alua_tg_pt_gp_release,
.show_attribute = target_core_alua_tg_pt_gp_attr_show,
.store_attribute = target_core_alua_tg_pt_gp_attr_store,
};
printk(KERN_INFO "Target_Core_ConfigFS: Releasing ALUA Target Port"
" Group: alua/tg_pt_gps/%s, ID: %hu\n",
config_item_name(item), tg_pt_gp->tg_pt_gp_id);
-
+ /*
+ * core_alua_free_tg_pt_gp() is called from target_core_alua_tg_pt_gp_ops->release()
+ * -> target_core_alua_tg_pt_gp_release().
+ */
config_item_put(item);
- core_alua_free_tg_pt_gp(tg_pt_gp);
}
static struct configfs_group_operations target_core_alua_tg_pt_gps_group_ops = {
struct se_subsystem_api *t;
struct config_item *df_item;
struct config_group *dev_cg, *tg_pt_gp_cg;
- int i, ret;
+ int i;
hba = item_to_hba(&se_dev->se_dev_hba->hba_group.cg_item);
- if (mutex_lock_interruptible(&hba->hba_access_mutex))
- goto out;
-
+ mutex_lock(&hba->hba_access_mutex);
t = hba->transport;
spin_lock(&se_global->g_device_lock);
config_item_put(df_item);
}
kfree(tg_pt_gp_cg->default_groups);
- core_alua_free_tg_pt_gp(T10_ALUA(se_dev)->default_tg_pt_gp);
+ /*
+ * core_alua_free_tg_pt_gp() is called from ->default_tg_pt_gp
+ * directly from target_core_alua_tg_pt_gp_release().
+ */
T10_ALUA(se_dev)->default_tg_pt_gp = NULL;
dev_cg = &se_dev->se_dev_group;
dev_cg->default_groups[i] = NULL;
config_item_put(df_item);
}
-
- config_item_put(item);
/*
- * This pointer will set when the storage is enabled with:
- * `echo 1 > $CONFIGFS/core/$HBA/$DEV/dev_enable`
+ * se_dev and its associated se_dev->se_dev_ptr are released from
+ * target_core_dev_item_ops->release() -> target_core_dev_release().
*/
- if (se_dev->se_dev_ptr) {
- printk(KERN_INFO "Target_Core_ConfigFS: Calling se_free_"
- "virtual_device() for se_dev_ptr: %p\n",
- se_dev->se_dev_ptr);
-
- ret = se_free_virtual_device(se_dev->se_dev_ptr, hba);
- if (ret < 0)
- goto hba_out;
- } else {
- /*
- * Release struct se_subsystem_dev->se_dev_su_ptr..
- */
- printk(KERN_INFO "Target_Core_ConfigFS: Calling t->free_"
- "device() for se_dev_su_ptr: %p\n",
- se_dev->se_dev_su_ptr);
-
- t->free_device(se_dev->se_dev_su_ptr);
- }
-
- printk(KERN_INFO "Target_Core_ConfigFS: Deallocating se_subsystem"
- "_dev_t: %p\n", se_dev);
-
-hba_out:
+ config_item_put(item);
mutex_unlock(&hba->hba_access_mutex);
-out:
- kfree(se_dev);
}
static struct configfs_group_operations target_core_hba_group_ops = {
CONFIGFS_EATTR_OPS(target_core_hba, se_hba, hba_group);
+static void target_core_hba_release(struct config_item *item)
+{
+ struct se_hba *hba = container_of(to_config_group(item),
+ struct se_hba, hba_group);
+ core_delete_hba(hba);
+}
+
static struct configfs_attribute *target_core_hba_attrs[] = {
&target_core_hba_hba_info.attr,
&target_core_hba_hba_mode.attr,
};
static struct configfs_item_operations target_core_hba_item_ops = {
+ .release = target_core_hba_release,
.show_attribute = target_core_hba_attr_show,
.store_attribute = target_core_hba_attr_store,
};
struct config_group *group,
struct config_item *item)
{
- struct se_hba *hba = item_to_hba(item);
-
+ /*
+ * core_delete_hba() is called from target_core_hba_item_ops->release()
+ * -> target_core_hba_release()
+ */
config_item_put(item);
- core_delete_hba(hba);
}
static struct configfs_group_operations target_core_group_ops = {
struct config_group *target_cg, *hba_cg = NULL, *alua_cg = NULL;
struct config_group *lu_gp_cg = NULL;
struct configfs_subsystem *subsys;
- struct proc_dir_entry *scsi_target_proc = NULL;
struct t10_alua_lu_gp *lu_gp;
int ret;
if (core_dev_setup_virtual_lun0() < 0)
goto out;
- scsi_target_proc = proc_mkdir("scsi_target", 0);
- if (!(scsi_target_proc)) {
- printk(KERN_ERR "proc_mkdir(scsi_target, 0) failed\n");
- goto out;
- }
- ret = init_scsi_target_mib();
- if (ret < 0)
- goto out;
-
return 0;
out:
configfs_unregister_subsystem(subsys);
- if (scsi_target_proc)
- remove_proc_entry("scsi_target", 0);
core_dev_release_virtual_lun0();
rd_module_exit();
out_global:
config_item_put(item);
}
kfree(lu_gp_cg->default_groups);
- core_alua_free_lu_gp(se_global->default_lu_gp);
- se_global->default_lu_gp = NULL;
+ lu_gp_cg->default_groups = NULL;
alua_cg = &se_global->alua_group;
for (i = 0; alua_cg->default_groups[i]; i++) {
config_item_put(item);
}
kfree(alua_cg->default_groups);
+ alua_cg->default_groups = NULL;
hba_cg = &se_global->target_core_hbagroup;
for (i = 0; hba_cg->default_groups[i]; i++) {
config_item_put(item);
}
kfree(hba_cg->default_groups);
-
- for (i = 0; subsys->su_group.default_groups[i]; i++) {
- item = &subsys->su_group.default_groups[i]->cg_item;
- subsys->su_group.default_groups[i] = NULL;
- config_item_put(item);
- }
+ hba_cg->default_groups = NULL;
+ /*
+ * We expect subsys->su_group.default_groups to be released
+ * by configfs subsystem provider logic.
+ */
+ configfs_unregister_subsystem(subsys);
kfree(subsys->su_group.default_groups);
- configfs_unregister_subsystem(subsys);
+ core_alua_free_lu_gp(se_global->default_lu_gp);
+ se_global->default_lu_gp = NULL;
+
printk(KERN_INFO "TARGET_CORE[0]: Released ConfigFS Fabric"
" Infrastructure\n");
- remove_scsi_target_mib();
- remove_proc_entry("scsi_target", 0);
core_dev_release_virtual_lun0();
rd_module_exit();
release_se_global();
/*
* deve->se_lun_acl will be NULL for demo-mode created LUNs
 * that have not been explicitly converted to MappedLUNs ->
- * struct se_lun_acl.
+ * struct se_lun_acl, but we remove deve->alua_port_list from
+ * port->sep_alua_list. This also means that active UAs and
+ * NodeACL context specific PR metadata for demo-mode
+ * MappedLUN *deve will be released below..
*/
- if (!(deve->se_lun_acl))
- return 0;
-
spin_lock_bh(&port->sep_alua_lock);
list_del(&deve->alua_port_list);
spin_unlock_bh(&port->sep_alua_lock);
printk(KERN_ERR "struct se_dev_entry->se_lun_acl"
" already set for demo mode -> explict"
" LUN ACL transition\n");
+ spin_unlock_irq(&nacl->device_list_lock);
return -1;
}
if (deve->se_lun != lun) {
printk(KERN_ERR "struct se_dev_entry->se_lun does"
" match passed struct se_lun for demo mode"
" -> explict LUN ACL transition\n");
+ spin_unlock_irq(&nacl->device_list_lock);
return -1;
}
deve->se_lun_acl = lun_acl;
}
}
spin_unlock(&hba->device_lock);
-
- while (atomic_read(&hba->dev_mib_access_count))
- cpu_relax();
}
int se_dev_check_online(struct se_device *dev)
CONFIGFS_EATTR_OPS(target_fabric_mappedlun, se_lun_acl, se_lun_group);
+static void target_fabric_mappedlun_release(struct config_item *item)
+{
+ struct se_lun_acl *lacl = container_of(to_config_group(item),
+ struct se_lun_acl, se_lun_group);
+ struct se_portal_group *se_tpg = lacl->se_lun_nacl->se_tpg;
+
+ core_dev_free_initiator_node_lun_acl(se_tpg, lacl);
+}
+
static struct configfs_attribute *target_fabric_mappedlun_attrs[] = {
&target_fabric_mappedlun_write_protect.attr,
NULL,
};
static struct configfs_item_operations target_fabric_mappedlun_item_ops = {
+ .release = target_fabric_mappedlun_release,
.show_attribute = target_fabric_mappedlun_attr_show,
.store_attribute = target_fabric_mappedlun_attr_store,
.allow_link = target_fabric_mappedlun_link,
struct config_group *group,
struct config_item *item)
{
- struct se_lun_acl *lacl = container_of(to_config_group(item),
- struct se_lun_acl, se_lun_group);
- struct se_portal_group *se_tpg = lacl->se_lun_nacl->se_tpg;
-
config_item_put(item);
- core_dev_free_initiator_node_lun_acl(se_tpg, lacl);
+}
+
+static void target_fabric_nacl_base_release(struct config_item *item)
+{
+ struct se_node_acl *se_nacl = container_of(to_config_group(item),
+ struct se_node_acl, acl_group);
+ struct se_portal_group *se_tpg = se_nacl->se_tpg;
+ struct target_fabric_configfs *tf = se_tpg->se_tpg_wwn->wwn_tf;
+
+ tf->tf_ops.fabric_drop_nodeacl(se_nacl);
}
static struct configfs_item_operations target_fabric_nacl_base_item_ops = {
+ .release = target_fabric_nacl_base_release,
.show_attribute = target_fabric_nacl_base_attr_show,
.store_attribute = target_fabric_nacl_base_attr_store,
};
struct config_group *group,
struct config_item *item)
{
- struct se_portal_group *se_tpg = container_of(group,
- struct se_portal_group, tpg_acl_group);
- struct target_fabric_configfs *tf = se_tpg->se_tpg_wwn->wwn_tf;
struct se_node_acl *se_nacl = container_of(to_config_group(item),
struct se_node_acl, acl_group);
struct config_item *df_item;
nacl_cg->default_groups[i] = NULL;
config_item_put(df_item);
}
-
+ /*
+ * struct se_node_acl free is done in target_fabric_nacl_base_release()
+ */
config_item_put(item);
- tf->tf_ops.fabric_drop_nodeacl(se_nacl);
}
static struct configfs_group_operations target_fabric_nacl_group_ops = {
CONFIGFS_EATTR_OPS(target_fabric_np_base, se_tpg_np, tpg_np_group);
+static void target_fabric_np_base_release(struct config_item *item)
+{
+ struct se_tpg_np *se_tpg_np = container_of(to_config_group(item),
+ struct se_tpg_np, tpg_np_group);
+ struct se_portal_group *se_tpg = se_tpg_np->tpg_np_parent;
+ struct target_fabric_configfs *tf = se_tpg->se_tpg_wwn->wwn_tf;
+
+ tf->tf_ops.fabric_drop_np(se_tpg_np);
+}
+
static struct configfs_item_operations target_fabric_np_base_item_ops = {
+ .release = target_fabric_np_base_release,
.show_attribute = target_fabric_np_base_attr_show,
.store_attribute = target_fabric_np_base_attr_store,
};
if (!(se_tpg_np) || IS_ERR(se_tpg_np))
return ERR_PTR(-EINVAL);
+ se_tpg_np->tpg_np_parent = se_tpg;
config_group_init_type_name(&se_tpg_np->tpg_np_group, name,
&TF_CIT_TMPL(tf)->tfc_tpg_np_base_cit);
struct config_group *group,
struct config_item *item)
{
- struct se_portal_group *se_tpg = container_of(group,
- struct se_portal_group, tpg_np_group);
- struct target_fabric_configfs *tf = se_tpg->se_tpg_wwn->wwn_tf;
- struct se_tpg_np *se_tpg_np = container_of(to_config_group(item),
- struct se_tpg_np, tpg_np_group);
-
+ /*
+ * struct se_tpg_np is released via target_fabric_np_base_release()
+ */
config_item_put(item);
- tf->tf_ops.fabric_drop_np(se_tpg_np);
}
static struct configfs_group_operations target_fabric_np_group_ops = {
*/
CONFIGFS_EATTR_OPS(target_fabric_tpg, se_portal_group, tpg_group);
+static void target_fabric_tpg_release(struct config_item *item)
+{
+ struct se_portal_group *se_tpg = container_of(to_config_group(item),
+ struct se_portal_group, tpg_group);
+ struct se_wwn *wwn = se_tpg->se_tpg_wwn;
+ struct target_fabric_configfs *tf = wwn->wwn_tf;
+
+ tf->tf_ops.fabric_drop_tpg(se_tpg);
+}
+
static struct configfs_item_operations target_fabric_tpg_base_item_ops = {
+ .release = target_fabric_tpg_release,
.show_attribute = target_fabric_tpg_attr_show,
.store_attribute = target_fabric_tpg_attr_store,
};
struct config_group *group,
struct config_item *item)
{
- struct se_wwn *wwn = container_of(group, struct se_wwn, wwn_group);
- struct target_fabric_configfs *tf = wwn->wwn_tf;
struct se_portal_group *se_tpg = container_of(to_config_group(item),
struct se_portal_group, tpg_group);
struct config_group *tpg_cg = &se_tpg->tpg_group;
}
config_item_put(item);
- tf->tf_ops.fabric_drop_tpg(se_tpg);
}
+static void target_fabric_release_wwn(struct config_item *item)
+{
+ struct se_wwn *wwn = container_of(to_config_group(item),
+ struct se_wwn, wwn_group);
+ struct target_fabric_configfs *tf = wwn->wwn_tf;
+
+ tf->tf_ops.fabric_drop_wwn(wwn);
+}
+
+static struct configfs_item_operations target_fabric_tpg_item_ops = {
+ .release = target_fabric_release_wwn,
+};
+
static struct configfs_group_operations target_fabric_tpg_group_ops = {
.make_group = target_fabric_make_tpg,
.drop_item = target_fabric_drop_tpg,
};
-TF_CIT_SETUP(tpg, NULL, &target_fabric_tpg_group_ops, NULL);
+TF_CIT_SETUP(tpg, &target_fabric_tpg_item_ops, &target_fabric_tpg_group_ops,
+ NULL);
/* End of tfc_tpg_cit */
struct config_group *group,
struct config_item *item)
{
- struct target_fabric_configfs *tf = container_of(group,
- struct target_fabric_configfs, tf_group);
- struct se_wwn *wwn = container_of(to_config_group(item),
- struct se_wwn, wwn_group);
-
config_item_put(item);
- tf->tf_ops.fabric_drop_wwn(wwn);
}
static struct configfs_group_operations target_fabric_wwn_group_ops = {
bd = blkdev_get_by_path(ib_dev->ibd_udev_path,
FMODE_WRITE|FMODE_READ|FMODE_EXCL, ib_dev);
- if (!(bd))
+ if (IS_ERR(bd))
goto failed;
/*
* Setup the local scope queue_limits from struct request_queue->limits
{
struct iblock_dev *ib_dev = p;
- blkdev_put(ib_dev->ibd_bd, FMODE_WRITE|FMODE_READ|FMODE_EXCL);
- bioset_free(ib_dev->ibd_bio_set);
+ if (ib_dev->ibd_bd != NULL)
+ blkdev_put(ib_dev->ibd_bd, FMODE_WRITE|FMODE_READ|FMODE_EXCL);
+ if (ib_dev->ibd_bio_set != NULL)
+ bioset_free(ib_dev->ibd_bio_set);
kfree(ib_dev);
}
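blkdev_get_by_path() reports failure with an ERR_PTR-encoded pointer, never NULL, which is why the iblock hunk replaces the !bd test with IS_ERR(bd); the free path is likewise guarded so a half-constructed device can be torn down. A short sketch of the open-side check, with an illustrative wrapper name:

#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/err.h>

/* sketch: open a block device by path for exclusive read/write; 'holder' owns it */
static struct block_device *example_open_bdev(const char *path, void *holder)
{
	struct block_device *bd;

	bd = blkdev_get_by_path(path, FMODE_WRITE | FMODE_READ | FMODE_EXCL,
				holder);
	if (IS_ERR(bd))		/* failure is encoded in the pointer, not NULL */
		return NULL;	/* this sketch maps any error to NULL */
	return bd;
}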
+++ /dev/null
-/*******************************************************************************
- * Filename: target_core_mib.c
- *
- * Copyright (c) 2006-2007 SBE, Inc. All Rights Reserved.
- * Copyright (c) 2007-2010 Rising Tide Systems
- * Copyright (c) 2008-2010 Linux-iSCSI.org
- *
- * Nicholas A. Bellinger <nab@linux-iscsi.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
- *
- ******************************************************************************/
-
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/delay.h>
-#include <linux/timer.h>
-#include <linux/string.h>
-#include <linux/version.h>
-#include <generated/utsrelease.h>
-#include <linux/utsname.h>
-#include <linux/proc_fs.h>
-#include <linux/seq_file.h>
-#include <linux/blkdev.h>
-#include <scsi/scsi.h>
-#include <scsi/scsi_device.h>
-#include <scsi/scsi_host.h>
-
-#include <target/target_core_base.h>
-#include <target/target_core_transport.h>
-#include <target/target_core_fabric_ops.h>
-#include <target/target_core_configfs.h>
-
-#include "target_core_hba.h"
-#include "target_core_mib.h"
-
-/* SCSI mib table index */
-static struct scsi_index_table scsi_index_table;
-
-#ifndef INITIAL_JIFFIES
-#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ))
-#endif
-
-/* SCSI Instance Table */
-#define SCSI_INST_SW_INDEX 1
-#define SCSI_TRANSPORT_INDEX 1
-
-#define NONE "None"
-#define ISPRINT(a) ((a >= ' ') && (a <= '~'))
-
-static inline int list_is_first(const struct list_head *list,
- const struct list_head *head)
-{
- return list->prev == head;
-}
-
-static void *locate_hba_start(
- struct seq_file *seq,
- loff_t *pos)
-{
- spin_lock(&se_global->g_device_lock);
- return seq_list_start(&se_global->g_se_dev_list, *pos);
-}
-
-static void *locate_hba_next(
- struct seq_file *seq,
- void *v,
- loff_t *pos)
-{
- return seq_list_next(v, &se_global->g_se_dev_list, pos);
-}
-
-static void locate_hba_stop(struct seq_file *seq, void *v)
-{
- spin_unlock(&se_global->g_device_lock);
-}
-
-/****************************************************************************
- * SCSI MIB Tables
- ****************************************************************************/
-
-/*
- * SCSI Instance Table
- */
-static void *scsi_inst_seq_start(
- struct seq_file *seq,
- loff_t *pos)
-{
- spin_lock(&se_global->hba_lock);
- return seq_list_start(&se_global->g_hba_list, *pos);
-}
-
-static void *scsi_inst_seq_next(
- struct seq_file *seq,
- void *v,
- loff_t *pos)
-{
- return seq_list_next(v, &se_global->g_hba_list, pos);
-}
-
-static void scsi_inst_seq_stop(struct seq_file *seq, void *v)
-{
- spin_unlock(&se_global->hba_lock);
-}
-
-static int scsi_inst_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba = list_entry(v, struct se_hba, hba_list);
-
- if (list_is_first(&hba->hba_list, &se_global->g_hba_list))
- seq_puts(seq, "inst sw_indx\n");
-
- seq_printf(seq, "%u %u\n", hba->hba_index, SCSI_INST_SW_INDEX);
- seq_printf(seq, "plugin: %s version: %s\n",
- hba->transport->name, TARGET_CORE_VERSION);
-
- return 0;
-}
-
-static const struct seq_operations scsi_inst_seq_ops = {
- .start = scsi_inst_seq_start,
- .next = scsi_inst_seq_next,
- .stop = scsi_inst_seq_stop,
- .show = scsi_inst_seq_show
-};
-
-static int scsi_inst_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_inst_seq_ops);
-}
-
-static const struct file_operations scsi_inst_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_inst_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Device Table
- */
-static void *scsi_dev_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_dev_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_dev_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-static int scsi_dev_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- char str[28];
- int k;
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst indx role ports\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- seq_printf(seq, "%u %u %s %u\n", hba->hba_index,
- dev->dev_index, "Target", dev->dev_port_count);
-
- memcpy(&str[0], (void *)DEV_T10_WWN(dev), 28);
-
- /* vendor */
- for (k = 0; k < 8; k++)
- str[k] = ISPRINT(DEV_T10_WWN(dev)->vendor[k]) ?
- DEV_T10_WWN(dev)->vendor[k] : 0x20;
- str[k] = 0x20;
-
- /* model */
- for (k = 0; k < 16; k++)
- str[k+9] = ISPRINT(DEV_T10_WWN(dev)->model[k]) ?
- DEV_T10_WWN(dev)->model[k] : 0x20;
- str[k + 9] = 0;
-
- seq_printf(seq, "dev_alias: %s\n", str);
-
- return 0;
-}
-
-static const struct seq_operations scsi_dev_seq_ops = {
- .start = scsi_dev_seq_start,
- .next = scsi_dev_seq_next,
- .stop = scsi_dev_seq_stop,
- .show = scsi_dev_seq_show
-};
-
-static int scsi_dev_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_dev_seq_ops);
-}
-
-static const struct file_operations scsi_dev_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_dev_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Port Table
- */
-static void *scsi_port_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_port_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_port_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-static int scsi_port_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- struct se_port *sep, *sep_tmp;
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst device indx role busy_count\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- /* FIXME: scsiPortBusyStatuses count */
- spin_lock(&dev->se_port_lock);
- list_for_each_entry_safe(sep, sep_tmp, &dev->dev_sep_list, sep_list) {
- seq_printf(seq, "%u %u %u %s%u %u\n", hba->hba_index,
- dev->dev_index, sep->sep_index, "Device",
- dev->dev_index, 0);
- }
- spin_unlock(&dev->se_port_lock);
-
- return 0;
-}
-
-static const struct seq_operations scsi_port_seq_ops = {
- .start = scsi_port_seq_start,
- .next = scsi_port_seq_next,
- .stop = scsi_port_seq_stop,
- .show = scsi_port_seq_show
-};
-
-static int scsi_port_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_port_seq_ops);
-}
-
-static const struct file_operations scsi_port_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_port_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Transport Table
- */
-static void *scsi_transport_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_transport_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_transport_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-static int scsi_transport_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- struct se_port *se, *se_tmp;
- struct se_portal_group *tpg;
- struct t10_wwn *wwn;
- char buf[64];
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst device indx dev_name\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- wwn = DEV_T10_WWN(dev);
-
- spin_lock(&dev->se_port_lock);
- list_for_each_entry_safe(se, se_tmp, &dev->dev_sep_list, sep_list) {
- tpg = se->sep_tpg;
- sprintf(buf, "scsiTransport%s",
- TPG_TFO(tpg)->get_fabric_name());
-
- seq_printf(seq, "%u %s %u %s+%s\n",
- hba->hba_index, /* scsiTransportIndex */
- buf, /* scsiTransportType */
- (TPG_TFO(tpg)->tpg_get_inst_index != NULL) ?
- TPG_TFO(tpg)->tpg_get_inst_index(tpg) :
- 0,
- TPG_TFO(tpg)->tpg_get_wwn(tpg),
- (strlen(wwn->unit_serial)) ?
- /* scsiTransportDevName */
- wwn->unit_serial : wwn->vendor);
- }
- spin_unlock(&dev->se_port_lock);
-
- return 0;
-}
-
-static const struct seq_operations scsi_transport_seq_ops = {
- .start = scsi_transport_seq_start,
- .next = scsi_transport_seq_next,
- .stop = scsi_transport_seq_stop,
- .show = scsi_transport_seq_show
-};
-
-static int scsi_transport_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_transport_seq_ops);
-}
-
-static const struct file_operations scsi_transport_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_transport_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Target Device Table
- */
-static void *scsi_tgt_dev_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_tgt_dev_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_tgt_dev_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-
-#define LU_COUNT 1 /* for now */
-static int scsi_tgt_dev_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- int non_accessible_lus = 0;
- char status[16];
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst indx num_LUs status non_access_LUs"
- " resets\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- switch (dev->dev_status) {
- case TRANSPORT_DEVICE_ACTIVATED:
- strcpy(status, "activated");
- break;
- case TRANSPORT_DEVICE_DEACTIVATED:
- strcpy(status, "deactivated");
- non_accessible_lus = 1;
- break;
- case TRANSPORT_DEVICE_SHUTDOWN:
- strcpy(status, "shutdown");
- non_accessible_lus = 1;
- break;
- case TRANSPORT_DEVICE_OFFLINE_ACTIVATED:
- case TRANSPORT_DEVICE_OFFLINE_DEACTIVATED:
- strcpy(status, "offline");
- non_accessible_lus = 1;
- break;
- default:
- sprintf(status, "unknown(%d)", dev->dev_status);
- non_accessible_lus = 1;
- }
-
- seq_printf(seq, "%u %u %u %s %u %u\n",
- hba->hba_index, dev->dev_index, LU_COUNT,
- status, non_accessible_lus, dev->num_resets);
-
- return 0;
-}
-
-static const struct seq_operations scsi_tgt_dev_seq_ops = {
- .start = scsi_tgt_dev_seq_start,
- .next = scsi_tgt_dev_seq_next,
- .stop = scsi_tgt_dev_seq_stop,
- .show = scsi_tgt_dev_seq_show
-};
-
-static int scsi_tgt_dev_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_tgt_dev_seq_ops);
-}
-
-static const struct file_operations scsi_tgt_dev_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_tgt_dev_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Target Port Table
- */
-static void *scsi_tgt_port_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_tgt_port_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_tgt_port_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-static int scsi_tgt_port_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- struct se_port *sep, *sep_tmp;
- struct se_portal_group *tpg;
- u32 rx_mbytes, tx_mbytes;
- unsigned long long num_cmds;
- char buf[64];
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst device indx name port_index in_cmds"
- " write_mbytes read_mbytes hs_in_cmds\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- spin_lock(&dev->se_port_lock);
- list_for_each_entry_safe(sep, sep_tmp, &dev->dev_sep_list, sep_list) {
- tpg = sep->sep_tpg;
- sprintf(buf, "%sPort#",
- TPG_TFO(tpg)->get_fabric_name());
-
- seq_printf(seq, "%u %u %u %s%d %s%s%d ",
- hba->hba_index,
- dev->dev_index,
- sep->sep_index,
- buf, sep->sep_index,
- TPG_TFO(tpg)->tpg_get_wwn(tpg), "+t+",
- TPG_TFO(tpg)->tpg_get_tag(tpg));
-
- spin_lock(&sep->sep_lun->lun_sep_lock);
- num_cmds = sep->sep_stats.cmd_pdus;
- rx_mbytes = (sep->sep_stats.rx_data_octets >> 20);
- tx_mbytes = (sep->sep_stats.tx_data_octets >> 20);
- spin_unlock(&sep->sep_lun->lun_sep_lock);
-
- seq_printf(seq, "%llu %u %u %u\n", num_cmds,
- rx_mbytes, tx_mbytes, 0);
- }
- spin_unlock(&dev->se_port_lock);
-
- return 0;
-}
-
-static const struct seq_operations scsi_tgt_port_seq_ops = {
- .start = scsi_tgt_port_seq_start,
- .next = scsi_tgt_port_seq_next,
- .stop = scsi_tgt_port_seq_stop,
- .show = scsi_tgt_port_seq_show
-};
-
-static int scsi_tgt_port_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_tgt_port_seq_ops);
-}
-
-static const struct file_operations scsi_tgt_port_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_tgt_port_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Authorized Initiator Table:
- * It contains the SCSI Initiators authorized to be attached to one of the
- * local Target ports.
- * Iterates through all active TPGs and extracts the info from the ACLs
- */
-static void *scsi_auth_intr_seq_start(struct seq_file *seq, loff_t *pos)
-{
- spin_lock_bh(&se_global->se_tpg_lock);
- return seq_list_start(&se_global->g_se_tpg_list, *pos);
-}
-
-static void *scsi_auth_intr_seq_next(struct seq_file *seq, void *v,
- loff_t *pos)
-{
- return seq_list_next(v, &se_global->g_se_tpg_list, pos);
-}
-
-static void scsi_auth_intr_seq_stop(struct seq_file *seq, void *v)
-{
- spin_unlock_bh(&se_global->se_tpg_lock);
-}
-
-static int scsi_auth_intr_seq_show(struct seq_file *seq, void *v)
-{
- struct se_portal_group *se_tpg = list_entry(v, struct se_portal_group,
- se_tpg_list);
- struct se_dev_entry *deve;
- struct se_lun *lun;
- struct se_node_acl *se_nacl;
- int j;
-
- if (list_is_first(&se_tpg->se_tpg_list,
- &se_global->g_se_tpg_list))
- seq_puts(seq, "inst dev port indx dev_or_port intr_name "
- "map_indx att_count num_cmds read_mbytes "
- "write_mbytes hs_num_cmds creation_time row_status\n");
-
- if (!(se_tpg))
- return 0;
-
- spin_lock(&se_tpg->acl_node_lock);
- list_for_each_entry(se_nacl, &se_tpg->acl_node_list, acl_list) {
-
- atomic_inc(&se_nacl->mib_ref_count);
- smp_mb__after_atomic_inc();
- spin_unlock(&se_tpg->acl_node_lock);
-
- spin_lock_irq(&se_nacl->device_list_lock);
- for (j = 0; j < TRANSPORT_MAX_LUNS_PER_TPG; j++) {
- deve = &se_nacl->device_list[j];
- if (!(deve->lun_flags &
- TRANSPORT_LUNFLAGS_INITIATOR_ACCESS) ||
- (!deve->se_lun))
- continue;
- lun = deve->se_lun;
- if (!lun->lun_se_dev)
- continue;
-
- seq_printf(seq, "%u %u %u %u %u %s %u %u %u %u %u %u"
- " %u %s\n",
- /* scsiInstIndex */
- (TPG_TFO(se_tpg)->tpg_get_inst_index != NULL) ?
- TPG_TFO(se_tpg)->tpg_get_inst_index(se_tpg) :
- 0,
- /* scsiDeviceIndex */
- lun->lun_se_dev->dev_index,
- /* scsiAuthIntrTgtPortIndex */
- TPG_TFO(se_tpg)->tpg_get_tag(se_tpg),
- /* scsiAuthIntrIndex */
- se_nacl->acl_index,
- /* scsiAuthIntrDevOrPort */
- 1,
- /* scsiAuthIntrName */
- se_nacl->initiatorname[0] ?
- se_nacl->initiatorname : NONE,
- /* FIXME: scsiAuthIntrLunMapIndex */
- 0,
- /* scsiAuthIntrAttachedTimes */
- deve->attach_count,
- /* scsiAuthIntrOutCommands */
- deve->total_cmds,
- /* scsiAuthIntrReadMegaBytes */
- (u32)(deve->read_bytes >> 20),
- /* scsiAuthIntrWrittenMegaBytes */
- (u32)(deve->write_bytes >> 20),
- /* FIXME: scsiAuthIntrHSOutCommands */
- 0,
- /* scsiAuthIntrLastCreation */
- (u32)(((u32)deve->creation_time -
- INITIAL_JIFFIES) * 100 / HZ),
- /* FIXME: scsiAuthIntrRowStatus */
- "Ready");
- }
- spin_unlock_irq(&se_nacl->device_list_lock);
-
- spin_lock(&se_tpg->acl_node_lock);
- atomic_dec(&se_nacl->mib_ref_count);
- smp_mb__after_atomic_dec();
- }
- spin_unlock(&se_tpg->acl_node_lock);
-
- return 0;
-}
-
-static const struct seq_operations scsi_auth_intr_seq_ops = {
- .start = scsi_auth_intr_seq_start,
- .next = scsi_auth_intr_seq_next,
- .stop = scsi_auth_intr_seq_stop,
- .show = scsi_auth_intr_seq_show
-};
-
-static int scsi_auth_intr_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_auth_intr_seq_ops);
-}
-
-static const struct file_operations scsi_auth_intr_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_auth_intr_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Attached Initiator Port Table:
- * It lists the SCSI Initiators attached to one of the local Target ports.
- * Iterates through all active TPGs and use active sessions from each TPG
- * to list the info fo this table.
- */
-static void *scsi_att_intr_port_seq_start(struct seq_file *seq, loff_t *pos)
-{
- spin_lock_bh(&se_global->se_tpg_lock);
- return seq_list_start(&se_global->g_se_tpg_list, *pos);
-}
-
-static void *scsi_att_intr_port_seq_next(struct seq_file *seq, void *v,
- loff_t *pos)
-{
- return seq_list_next(v, &se_global->g_se_tpg_list, pos);
-}
-
-static void scsi_att_intr_port_seq_stop(struct seq_file *seq, void *v)
-{
- spin_unlock_bh(&se_global->se_tpg_lock);
-}
-
-static int scsi_att_intr_port_seq_show(struct seq_file *seq, void *v)
-{
- struct se_portal_group *se_tpg = list_entry(v, struct se_portal_group,
- se_tpg_list);
- struct se_dev_entry *deve;
- struct se_lun *lun;
- struct se_node_acl *se_nacl;
- struct se_session *se_sess;
- unsigned char buf[64];
- int j;
-
- if (list_is_first(&se_tpg->se_tpg_list,
- &se_global->g_se_tpg_list))
- seq_puts(seq, "inst dev port indx port_auth_indx port_name"
- " port_ident\n");
-
- if (!(se_tpg))
- return 0;
-
- spin_lock(&se_tpg->session_lock);
- list_for_each_entry(se_sess, &se_tpg->tpg_sess_list, sess_list) {
- if ((TPG_TFO(se_tpg)->sess_logged_in(se_sess)) ||
- (!se_sess->se_node_acl) ||
- (!se_sess->se_node_acl->device_list))
- continue;
-
- atomic_inc(&se_sess->mib_ref_count);
- smp_mb__after_atomic_inc();
- se_nacl = se_sess->se_node_acl;
- atomic_inc(&se_nacl->mib_ref_count);
- smp_mb__after_atomic_inc();
- spin_unlock(&se_tpg->session_lock);
-
- spin_lock_irq(&se_nacl->device_list_lock);
- for (j = 0; j < TRANSPORT_MAX_LUNS_PER_TPG; j++) {
- deve = &se_nacl->device_list[j];
- if (!(deve->lun_flags &
- TRANSPORT_LUNFLAGS_INITIATOR_ACCESS) ||
- (!deve->se_lun))
- continue;
-
- lun = deve->se_lun;
- if (!lun->lun_se_dev)
- continue;
-
- memset(buf, 0, 64);
- if (TPG_TFO(se_tpg)->sess_get_initiator_sid != NULL)
- TPG_TFO(se_tpg)->sess_get_initiator_sid(
- se_sess, (unsigned char *)&buf[0], 64);
-
- seq_printf(seq, "%u %u %u %u %u %s+i+%s\n",
- /* scsiInstIndex */
- (TPG_TFO(se_tpg)->tpg_get_inst_index != NULL) ?
- TPG_TFO(se_tpg)->tpg_get_inst_index(se_tpg) :
- 0,
- /* scsiDeviceIndex */
- lun->lun_se_dev->dev_index,
- /* scsiPortIndex */
- TPG_TFO(se_tpg)->tpg_get_tag(se_tpg),
- /* scsiAttIntrPortIndex */
- (TPG_TFO(se_tpg)->sess_get_index != NULL) ?
- TPG_TFO(se_tpg)->sess_get_index(se_sess) :
- 0,
- /* scsiAttIntrPortAuthIntrIdx */
- se_nacl->acl_index,
- /* scsiAttIntrPortName */
- se_nacl->initiatorname[0] ?
- se_nacl->initiatorname : NONE,
- /* scsiAttIntrPortIdentifier */
- buf);
- }
- spin_unlock_irq(&se_nacl->device_list_lock);
-
- spin_lock(&se_tpg->session_lock);
- atomic_dec(&se_nacl->mib_ref_count);
- smp_mb__after_atomic_dec();
- atomic_dec(&se_sess->mib_ref_count);
- smp_mb__after_atomic_dec();
- }
- spin_unlock(&se_tpg->session_lock);
-
- return 0;
-}
-
-static const struct seq_operations scsi_att_intr_port_seq_ops = {
- .start = scsi_att_intr_port_seq_start,
- .next = scsi_att_intr_port_seq_next,
- .stop = scsi_att_intr_port_seq_stop,
- .show = scsi_att_intr_port_seq_show
-};
-
-static int scsi_att_intr_port_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_att_intr_port_seq_ops);
-}
-
-static const struct file_operations scsi_att_intr_port_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_att_intr_port_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/*
- * SCSI Logical Unit Table
- */
-static void *scsi_lu_seq_start(struct seq_file *seq, loff_t *pos)
-{
- return locate_hba_start(seq, pos);
-}
-
-static void *scsi_lu_seq_next(struct seq_file *seq, void *v, loff_t *pos)
-{
- return locate_hba_next(seq, v, pos);
-}
-
-static void scsi_lu_seq_stop(struct seq_file *seq, void *v)
-{
- locate_hba_stop(seq, v);
-}
-
-#define SCSI_LU_INDEX 1
-static int scsi_lu_seq_show(struct seq_file *seq, void *v)
-{
- struct se_hba *hba;
- struct se_subsystem_dev *se_dev = list_entry(v, struct se_subsystem_dev,
- g_se_dev_list);
- struct se_device *dev = se_dev->se_dev_ptr;
- int j;
- char str[28];
-
- if (list_is_first(&se_dev->g_se_dev_list, &se_global->g_se_dev_list))
- seq_puts(seq, "inst dev indx LUN lu_name vend prod rev"
- " dev_type status state-bit num_cmds read_mbytes"
- " write_mbytes resets full_stat hs_num_cmds creation_time\n");
-
- if (!(dev))
- return 0;
-
- hba = dev->se_hba;
- if (!(hba)) {
- /* Log error ? */
- return 0;
- }
-
- /* Fix LU state, if we can read it from the device */
- seq_printf(seq, "%u %u %u %llu %s", hba->hba_index,
- dev->dev_index, SCSI_LU_INDEX,
- (unsigned long long)0, /* FIXME: scsiLuDefaultLun */
- (strlen(DEV_T10_WWN(dev)->unit_serial)) ?
- /* scsiLuWwnName */
- (char *)&DEV_T10_WWN(dev)->unit_serial[0] :
- "None");
-
- memcpy(&str[0], (void *)DEV_T10_WWN(dev), 28);
- /* scsiLuVendorId */
- for (j = 0; j < 8; j++)
- str[j] = ISPRINT(DEV_T10_WWN(dev)->vendor[j]) ?
- DEV_T10_WWN(dev)->vendor[j] : 0x20;
- str[8] = 0;
- seq_printf(seq, " %s", str);
-
- /* scsiLuProductId */
- for (j = 0; j < 16; j++)
- str[j] = ISPRINT(DEV_T10_WWN(dev)->model[j]) ?
- DEV_T10_WWN(dev)->model[j] : 0x20;
- str[16] = 0;
- seq_printf(seq, " %s", str);
-
- /* scsiLuRevisionId */
- for (j = 0; j < 4; j++)
- str[j] = ISPRINT(DEV_T10_WWN(dev)->revision[j]) ?
- DEV_T10_WWN(dev)->revision[j] : 0x20;
- str[4] = 0;
- seq_printf(seq, " %s", str);
-
- seq_printf(seq, " %u %s %s %llu %u %u %u %u %u %u\n",
- /* scsiLuPeripheralType */
- TRANSPORT(dev)->get_device_type(dev),
- (dev->dev_status == TRANSPORT_DEVICE_ACTIVATED) ?
- "available" : "notavailable", /* scsiLuStatus */
- "exposed", /* scsiLuState */
- (unsigned long long)dev->num_cmds,
- /* scsiLuReadMegaBytes */
- (u32)(dev->read_bytes >> 20),
- /* scsiLuWrittenMegaBytes */
- (u32)(dev->write_bytes >> 20),
- dev->num_resets, /* scsiLuInResets */
- 0, /* scsiLuOutTaskSetFullStatus */
- 0, /* scsiLuHSInCommands */
- (u32)(((u32)dev->creation_time - INITIAL_JIFFIES) *
- 100 / HZ));
-
- return 0;
-}
-
-static const struct seq_operations scsi_lu_seq_ops = {
- .start = scsi_lu_seq_start,
- .next = scsi_lu_seq_next,
- .stop = scsi_lu_seq_stop,
- .show = scsi_lu_seq_show
-};
-
-static int scsi_lu_seq_open(struct inode *inode, struct file *file)
-{
- return seq_open(file, &scsi_lu_seq_ops);
-}
-
-static const struct file_operations scsi_lu_seq_fops = {
- .owner = THIS_MODULE,
- .open = scsi_lu_seq_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-/****************************************************************************/
-
-/*
- * Remove proc fs entries
- */
-void remove_scsi_target_mib(void)
-{
- remove_proc_entry("scsi_target/mib/scsi_inst", NULL);
- remove_proc_entry("scsi_target/mib/scsi_dev", NULL);
- remove_proc_entry("scsi_target/mib/scsi_port", NULL);
- remove_proc_entry("scsi_target/mib/scsi_transport", NULL);
- remove_proc_entry("scsi_target/mib/scsi_tgt_dev", NULL);
- remove_proc_entry("scsi_target/mib/scsi_tgt_port", NULL);
- remove_proc_entry("scsi_target/mib/scsi_auth_intr", NULL);
- remove_proc_entry("scsi_target/mib/scsi_att_intr_port", NULL);
- remove_proc_entry("scsi_target/mib/scsi_lu", NULL);
- remove_proc_entry("scsi_target/mib", NULL);
-}
-
-/*
- * Create proc fs entries for the mib tables
- */
-int init_scsi_target_mib(void)
-{
- struct proc_dir_entry *dir_entry;
- struct proc_dir_entry *scsi_inst_entry;
- struct proc_dir_entry *scsi_dev_entry;
- struct proc_dir_entry *scsi_port_entry;
- struct proc_dir_entry *scsi_transport_entry;
- struct proc_dir_entry *scsi_tgt_dev_entry;
- struct proc_dir_entry *scsi_tgt_port_entry;
- struct proc_dir_entry *scsi_auth_intr_entry;
- struct proc_dir_entry *scsi_att_intr_port_entry;
- struct proc_dir_entry *scsi_lu_entry;
-
- dir_entry = proc_mkdir("scsi_target/mib", NULL);
- if (!(dir_entry)) {
- printk(KERN_ERR "proc_mkdir() failed.\n");
- return -1;
- }
-
- scsi_inst_entry =
- create_proc_entry("scsi_target/mib/scsi_inst", 0, NULL);
- if (scsi_inst_entry)
- scsi_inst_entry->proc_fops = &scsi_inst_seq_fops;
- else
- goto error;
-
- scsi_dev_entry =
- create_proc_entry("scsi_target/mib/scsi_dev", 0, NULL);
- if (scsi_dev_entry)
- scsi_dev_entry->proc_fops = &scsi_dev_seq_fops;
- else
- goto error;
-
- scsi_port_entry =
- create_proc_entry("scsi_target/mib/scsi_port", 0, NULL);
- if (scsi_port_entry)
- scsi_port_entry->proc_fops = &scsi_port_seq_fops;
- else
- goto error;
-
- scsi_transport_entry =
- create_proc_entry("scsi_target/mib/scsi_transport", 0, NULL);
- if (scsi_transport_entry)
- scsi_transport_entry->proc_fops = &scsi_transport_seq_fops;
- else
- goto error;
-
- scsi_tgt_dev_entry =
- create_proc_entry("scsi_target/mib/scsi_tgt_dev", 0, NULL);
- if (scsi_tgt_dev_entry)
- scsi_tgt_dev_entry->proc_fops = &scsi_tgt_dev_seq_fops;
- else
- goto error;
-
- scsi_tgt_port_entry =
- create_proc_entry("scsi_target/mib/scsi_tgt_port", 0, NULL);
- if (scsi_tgt_port_entry)
- scsi_tgt_port_entry->proc_fops = &scsi_tgt_port_seq_fops;
- else
- goto error;
-
- scsi_auth_intr_entry =
- create_proc_entry("scsi_target/mib/scsi_auth_intr", 0, NULL);
- if (scsi_auth_intr_entry)
- scsi_auth_intr_entry->proc_fops = &scsi_auth_intr_seq_fops;
- else
- goto error;
-
- scsi_att_intr_port_entry =
- create_proc_entry("scsi_target/mib/scsi_att_intr_port", 0, NULL);
- if (scsi_att_intr_port_entry)
- scsi_att_intr_port_entry->proc_fops =
- &scsi_att_intr_port_seq_fops;
- else
- goto error;
-
- scsi_lu_entry = create_proc_entry("scsi_target/mib/scsi_lu", 0, NULL);
- if (scsi_lu_entry)
- scsi_lu_entry->proc_fops = &scsi_lu_seq_fops;
- else
- goto error;
-
- return 0;
-
-error:
- printk(KERN_ERR "create_proc_entry() failed.\n");
- remove_scsi_target_mib();
- return -1;
-}
-
-/*
- * Initialize the index table for allocating unique row indexes to various mib
- * tables
- */
-void init_scsi_index_table(void)
-{
- memset(&scsi_index_table, 0, sizeof(struct scsi_index_table));
- spin_lock_init(&scsi_index_table.lock);
-}
-
-/*
- * Allocate a new row index for the entry type specified
- */
-u32 scsi_get_new_index(scsi_index_t type)
-{
- u32 new_index;
-
- if ((type < 0) || (type >= SCSI_INDEX_TYPE_MAX)) {
- printk(KERN_ERR "Invalid index type %d\n", type);
- return -1;
- }
-
- spin_lock(&scsi_index_table.lock);
- new_index = ++scsi_index_table.scsi_mib_index[type];
- if (new_index == 0)
- new_index = ++scsi_index_table.scsi_mib_index[type];
- spin_unlock(&scsi_index_table.lock);
-
- return new_index;
-}
-EXPORT_SYMBOL(scsi_get_new_index);
+++ /dev/null
-#ifndef TARGET_CORE_MIB_H
-#define TARGET_CORE_MIB_H
-
-typedef enum {
- SCSI_INST_INDEX,
- SCSI_DEVICE_INDEX,
- SCSI_AUTH_INTR_INDEX,
- SCSI_INDEX_TYPE_MAX
-} scsi_index_t;
-
-struct scsi_index_table {
- spinlock_t lock;
- u32 scsi_mib_index[SCSI_INDEX_TYPE_MAX];
-} ____cacheline_aligned;
-
-/* SCSI Port stats */
-struct scsi_port_stats {
- u64 cmd_pdus;
- u64 tx_data_octets;
- u64 rx_data_octets;
-} ____cacheline_aligned;
-
-extern int init_scsi_target_mib(void);
-extern void remove_scsi_target_mib(void);
-extern void init_scsi_index_table(void);
-extern u32 scsi_get_new_index(scsi_index_t);
-
-#endif /*** TARGET_CORE_MIB_H ***/
*/
bd = blkdev_get_by_path(se_dev->se_dev_udev_path,
FMODE_WRITE|FMODE_READ|FMODE_EXCL, pdv);
- if (!(bd)) {
- printk("pSCSI: blkdev_get_by_path() failed\n");
+ if (IS_ERR(bd)) {
+ printk(KERN_ERR "pSCSI: blkdev_get_by_path() failed\n");
scsi_device_put(sd);
return NULL;
}
spin_lock_init(&acl->device_list_lock);
spin_lock_init(&acl->nacl_sess_lock);
atomic_set(&acl->acl_pr_ref_count, 0);
- atomic_set(&acl->mib_ref_count, 0);
acl->queue_depth = TPG_TFO(tpg)->tpg_get_default_depth(tpg);
snprintf(acl->initiatorname, TRANSPORT_IQN_LEN, "%s", initiatorname);
acl->se_tpg = tpg;
cpu_relax();
}
-void core_tpg_wait_for_mib_ref(struct se_node_acl *nacl)
-{
- while (atomic_read(&nacl->mib_ref_count) != 0)
- cpu_relax();
-}
-
void core_tpg_clear_object_luns(struct se_portal_group *tpg)
{
int i, ret;
spin_unlock_bh(&tpg->session_lock);
core_tpg_wait_for_nacl_pr_ref(acl);
- core_tpg_wait_for_mib_ref(acl);
core_clear_initiator_node_from_tpg(acl, tpg);
core_free_device_list_for_node(acl, tpg);
int core_tpg_deregister(struct se_portal_group *se_tpg)
{
+ struct se_node_acl *nacl, *nacl_tmp;
+
printk(KERN_INFO "TARGET_CORE[%s]: Deallocating %s struct se_portal_group"
" for endpoint: %s Portal Tag %u\n",
(se_tpg->se_tpg_type == TRANSPORT_TPG_TYPE_NORMAL) ?
while (atomic_read(&se_tpg->tpg_pr_ref_count) != 0)
cpu_relax();
+ /*
+ * Release any remaining demo-mode generated se_node_acl that have
+ * not been released because of TFO->tpg_check_demo_mode_cache() == 1
+ * in transport_deregister_session().
+ */
+ spin_lock_bh(&se_tpg->acl_node_lock);
+ list_for_each_entry_safe(nacl, nacl_tmp, &se_tpg->acl_node_list,
+ acl_list) {
+ list_del(&nacl->acl_list);
+ se_tpg->num_node_acls--;
+ spin_unlock_bh(&se_tpg->acl_node_lock);
+
+ core_tpg_wait_for_nacl_pr_ref(nacl);
+ core_free_device_list_for_node(nacl, se_tpg);
+ TPG_TFO(se_tpg)->tpg_release_fabric_acl(se_tpg, nacl);
+
+ spin_lock_bh(&se_tpg->acl_node_lock);
+ }
+ spin_unlock_bh(&se_tpg->acl_node_lock);
if (se_tpg->se_tpg_type == TRANSPORT_TPG_TYPE_NORMAL)
core_tpg_release_virtual_lun0(se_tpg);
se_global = NULL;
}
+/* SCSI statistics table index */
+static struct scsi_index_table scsi_index_table;
+
+/*
+ * Initialize the index table for allocating unique row indexes to various mib
+ * tables.
+ */
+void init_scsi_index_table(void)
+{
+ memset(&scsi_index_table, 0, sizeof(struct scsi_index_table));
+ spin_lock_init(&scsi_index_table.lock);
+}
+
+/*
+ * Allocate a new row index for the entry type specified
+ */
+u32 scsi_get_new_index(scsi_index_t type)
+{
+ u32 new_index;
+
+ if ((type < 0) || (type >= SCSI_INDEX_TYPE_MAX)) {
+ printk(KERN_ERR "Invalid index type %d\n", type);
+ return -EINVAL;
+ }
+
+ spin_lock(&scsi_index_table.lock);
+ new_index = ++scsi_index_table.scsi_mib_index[type];
+ if (new_index == 0)
+ new_index = ++scsi_index_table.scsi_mib_index[type];
+ spin_unlock(&scsi_index_table.lock);
+
+ return new_index;
+}
+
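For reference, a minimal sketch of how a caller might consume these row indexes, assuming the scsi_index_t values (SCSI_INST_INDEX, SCSI_DEVICE_INDEX, SCSI_AUTH_INTR_INDEX) remain visible to the target core after the header removal; the helper name below is purely illustrative:

	/* Illustrative only: assign statistics row indexes once, at object
	 * creation time, instead of deriving them while walking /proc.
	 */
	static void example_assign_stat_indexes(struct se_device *dev,
						struct se_node_acl *nacl)
	{
		/* each call returns a non-zero, monotonically increasing index */
		dev->dev_index = scsi_get_new_index(SCSI_DEVICE_INDEX);
		nacl->acl_index = scsi_get_new_index(SCSI_AUTH_INTR_INDEX);
	}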
void transport_init_queue_obj(struct se_queue_obj *qobj)
{
atomic_set(&qobj->queue_cnt, 0);
}
INIT_LIST_HEAD(&se_sess->sess_list);
INIT_LIST_HEAD(&se_sess->sess_acl_list);
- atomic_set(&se_sess->mib_ref_count, 0);
return se_sess;
}
transport_free_session(se_sess);
return;
}
- /*
- * Wait for possible reference in drivers/target/target_core_mib.c:
- * scsi_att_intr_port_seq_show()
- */
- while (atomic_read(&se_sess->mib_ref_count) != 0)
- cpu_relax();
spin_lock_bh(&se_tpg->session_lock);
list_del(&se_sess->sess_list);
spin_unlock_bh(&se_tpg->acl_node_lock);
core_tpg_wait_for_nacl_pr_ref(se_nacl);
- core_tpg_wait_for_mib_ref(se_nacl);
core_free_device_list_for_node(se_nacl, se_tpg);
TPG_TFO(se_tpg)->tpg_release_fabric_acl(se_tpg,
se_nacl);
return ret;
}
+
+ BUG_ON(list_empty(se_mem_list));
/*
* This is the normal path for all normal non BIDI and BIDI-COMMAND
* WRITE payloads.. If we need to do BIDI READ passthrough for
struct se_mem *se_mem = NULL, *se_mem_lout = NULL;
u32 se_mem_cnt = 0, task_offset = 0;
- BUG_ON(list_empty(cmd->t_task->t_mem_list));
+ if (!list_empty(T_TASK(cmd)->t_mem_list))
+ se_mem = list_entry(T_TASK(cmd)->t_mem_list->next,
+ struct se_mem, se_list);
ret = transport_do_se_mem_map(dev, task,
cmd->t_task->t_mem_list, NULL, se_mem,
obj-$(CONFIG_R3964) += n_r3964.o
obj-y += vt/
+obj-$(CONFIG_HVC_DRIVER) += hvc/
+obj-y += serial/
--- /dev/null
+obj-$(CONFIG_HVC_CONSOLE) += hvc_vio.o hvsi.o
+obj-$(CONFIG_HVC_ISERIES) += hvc_iseries.o
+obj-$(CONFIG_HVC_RTAS) += hvc_rtas.o
+obj-$(CONFIG_HVC_TILE) += hvc_tile.o
+obj-$(CONFIG_HVC_DCC) += hvc_dcc.o
+obj-$(CONFIG_HVC_BEAT) += hvc_beat.o
+obj-$(CONFIG_HVC_DRIVER) += hvc_console.o
+obj-$(CONFIG_HVC_IRQ) += hvc_irq.o
+obj-$(CONFIG_HVC_XEN) += hvc_xen.o
+obj-$(CONFIG_HVC_IUCV) += hvc_iucv.o
+obj-$(CONFIG_HVC_UDBG) += hvc_udbg.o
+obj-$(CONFIG_HVCS) += hvcs.o
gsm->initiator = c->initiator;
gsm->mru = c->mru;
+ gsm->mtu = c->mtu;
gsm->encoding = c->encapsulation;
gsm->adaption = c->adaption;
gsm->n2 = c->n2;
__u8 __user *buf, size_t nr)
{
struct n_hdlc *n_hdlc = tty2n_hdlc(tty);
- int ret;
+ int ret = 0;
struct n_hdlc_buf *rbuf;
+ DECLARE_WAITQUEUE(wait, current);
if (debuglevel >= DEBUG_LEVEL_INFO)
printk("%s(%d)n_hdlc_tty_read() called\n",__FILE__,__LINE__);
return -EFAULT;
}
- tty_lock();
+ add_wait_queue(&tty->read_wait, &wait);
for (;;) {
if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
- tty_unlock();
- return -EIO;
+ ret = -EIO;
+ break;
}
+ if (tty_hung_up_p(file))
+ break;
- n_hdlc = tty2n_hdlc (tty);
- if (!n_hdlc || n_hdlc->magic != HDLC_MAGIC ||
- tty != n_hdlc->tty) {
- tty_unlock();
- return 0;
- }
+ set_current_state(TASK_INTERRUPTIBLE);
rbuf = n_hdlc_buf_get(&n_hdlc->rx_buf_list);
- if (rbuf)
+ if (rbuf) {
+ if (rbuf->count > nr) {
+ /* too large for caller's buffer */
+ ret = -EOVERFLOW;
+ } else {
+ if (copy_to_user(buf, rbuf->buf, rbuf->count))
+ ret = -EFAULT;
+ else
+ ret = rbuf->count;
+ }
+
+ if (n_hdlc->rx_free_buf_list.count >
+ DEFAULT_RX_BUF_COUNT)
+ kfree(rbuf);
+ else
+ n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf);
break;
+ }
/* no data */
if (file->f_flags & O_NONBLOCK) {
- tty_unlock();
- return -EAGAIN;
+ ret = -EAGAIN;
+ break;
}
-
- interruptible_sleep_on (&tty->read_wait);
+
+ schedule();
+
if (signal_pending(current)) {
- tty_unlock();
- return -EINTR;
+ ret = -EINTR;
+ break;
}
}
-
- if (rbuf->count > nr)
- /* frame too large for caller's buffer (discard frame) */
- ret = -EOVERFLOW;
- else {
- /* Copy the data to the caller's buffer */
- if (copy_to_user(buf, rbuf->buf, rbuf->count))
- ret = -EFAULT;
- else
- ret = rbuf->count;
- }
-
- /* return HDLC buffer to free list unless the free list */
- /* count has exceeded the default value, in which case the */
- /* buffer is freed back to the OS to conserve memory */
- if (n_hdlc->rx_free_buf_list.count > DEFAULT_RX_BUF_COUNT)
- kfree(rbuf);
- else
- n_hdlc_buf_put(&n_hdlc->rx_free_buf_list,rbuf);
- tty_unlock();
+
+ remove_wait_queue(&tty->read_wait, &wait);
+ __set_current_state(TASK_RUNNING);
+
return ret;
} /* end of n_hdlc_tty_read() */
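The rework above drops interruptible_sleep_on()/tty_lock() in favour of the standard open-coded wait-queue idiom; a generic sketch of that pattern follows (the wait queue, condition and return variable are placeholders, not n_hdlc symbols):

	/* Generic blocking-wait idiom, as used in the read path above. */
	DECLARE_WAITQUEUE(wait, current);
	int ret = 0;

	add_wait_queue(&waitq, &wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (condition)			/* e.g. a buffer became available */
			break;
		if (signal_pending(current)) {
			ret = -EINTR;
			break;
		}
		schedule();			/* sleep until woken or signalled */
	}
	__set_current_state(TASK_RUNNING);
	remove_wait_queue(&waitq, &wait);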
count = maxframe;
}
- tty_lock();
-
add_wait_queue(&tty->write_wait, &wait);
- set_current_state(TASK_INTERRUPTIBLE);
+
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
- /* Allocate transmit buffer */
- /* sleep until transmit buffer available */
- while (!(tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list))) {
+ tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list);
+ if (tbuf)
+ break;
+
if (file->f_flags & O_NONBLOCK) {
error = -EAGAIN;
break;
}
}
- set_current_state(TASK_RUNNING);
+ __set_current_state(TASK_RUNNING);
remove_wait_queue(&tty->write_wait, &wait);
if (!error) {
n_hdlc_buf_put(&n_hdlc->tx_buf_list,tbuf);
n_hdlc_send_frames(n_hdlc,tty);
}
- tty_unlock();
+
return error;
} /* end of n_hdlc_tty_write() */
static void receive_chars(struct m68k_serial *info, unsigned short rx)
{
- struct tty_struct *tty = info->port.tty;
+ struct tty_struct *tty = info->tty;
m68328_uart *uart = &uart_addr[info->line];
unsigned char ch, flag;
goto clear_and_return;
}
- if((info->xmit_cnt <= 0) || info->port.tty->stopped) {
+ if((info->xmit_cnt <= 0) || info->tty->stopped) {
/* That's peculiar... TX ints off */
uart->ustcnt &= ~USTCNT_TX_INTR_MASK;
goto clear_and_return;
struct m68k_serial *info = container_of(work, struct m68k_serial, tqueue);
struct tty_struct *tty;
- tty = info->port.tty;
+ tty = info->tty;
if (!tty)
return;
#if 0
struct m68k_serial *info = container_of(work, struct m68k_serial, tqueue_hangup);
struct tty_struct *tty;
- tty = info->port.tty;
+ tty = info->tty;
if (!tty)
return;
uart->ustcnt = USTCNT_UEN | USTCNT_RXEN | USTCNT_RX_INTR_MASK;
#endif
- if (info->port.tty)
- clear_bit(TTY_IO_ERROR, &info->port.tty->flags);
+ if (info->tty)
+ clear_bit(TTY_IO_ERROR, &info->tty->flags);
info->xmit_cnt = info->xmit_head = info->xmit_tail = 0;
/*
info->xmit_buf = 0;
}
- if (info->port.tty)
- set_bit(TTY_IO_ERROR, &info->port.tty->flags);
+ if (info->tty)
+ set_bit(TTY_IO_ERROR, &info->tty->flags);
info->flags &= ~S_INITIALIZED;
local_irq_restore(flags);
unsigned cflag;
int i;
- if (!info->port.tty || !info->port.tty->termios)
+ if (!info->tty || !info->tty->termios)
return;
- cflag = info->port.tty->termios->c_cflag;
+ cflag = info->tty->termios->c_cflag;
if (!(port = info->port))
return;
static int rs_ioctl(struct tty_struct *tty, struct file * file,
unsigned int cmd, unsigned long arg)
{
- int error;
struct m68k_serial * info = (struct m68k_serial *)tty->driver_data;
int retval;
tty_ldisc_flush(tty);
tty->closing = 0;
info->event = 0;
- info->port.tty = NULL;
+ info->tty = NULL;
#warning "This is not and has never been valid so fix it"
#if 0
if (tty->ldisc.num != ldiscs[N_TTY].num) {
info->event = 0;
info->count = 0;
info->flags &= ~S_NORMAL_ACTIVE;
- info->port.tty = NULL;
+ info->tty = NULL;
wake_up_interruptible(&info->open_wait);
}
info->count++;
tty->driver_data = info;
- info->port.tty = tty;
+ info->tty = tty;
/*
* Start up serial port
info = &m68k_soft[i];
info->magic = SERIAL_MAGIC;
info->port = (int) &uart_addr[i];
- info->port.tty = NULL;
+ info->tty = NULL;
info->irq = uart_irqs[i];
info->custom_divisor = 16;
info->close_delay = 50;
/* .read_proc = rs_360_read_proc, */
.tiocmget = rs_360_tiocmget,
.tiocmset = rs_360_tiocmset,
+ .get_icount = rs_360_get_icount,
};
static int __init rs_360_init(void)
.fifo_size = 128,
.tx_loadsz = 128,
.fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
- .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP,
+ /* UART_CAP_EFR breaks Billionton CF bluetooth card. */
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
},
[PORT_16654] = {
.name = "ST16654",
default SERIAL_8250
config SERIAL_8250_PCI
- tristate "8250/16550 PCI device support" if EMBEDDED
+ tristate "8250/16550 PCI device support" if EXPERT
depends on SERIAL_8250 && PCI
default SERIAL_8250
help
Saves about 9K.
config SERIAL_8250_PNP
- tristate "8250/16550 PNP device support" if EMBEDDED
+ tristate "8250/16550 PNP device support" if EXPERT
depends on SERIAL_8250 && PNP
default SERIAL_8250
help
config SERIAL_GRLIB_GAISLER_APBUART
tristate "GRLIB APBUART serial support"
depends on OF
+ select SERIAL_CORE
---help---
Add support for the GRLIB APBUART serial port.
{
struct bfin_serial_port *uart = dev_id;
- spin_lock(&uart->port.lock);
while (UART_GET_LSR(uart) & DR)
bfin_serial_rx_chars(uart);
- spin_unlock(&uart->port.lock);
return IRQ_HANDLED;
}
{
int x_pos, pos;
- dma_disable_irq(uart->tx_dma_channel);
- dma_disable_irq(uart->rx_dma_channel);
- spin_lock_bh(&uart->port.lock);
+ dma_disable_irq_nosync(uart->rx_dma_channel);
+ spin_lock_bh(&uart->rx_lock);
/* 2D DMA RX buffer ring is used. Because curr_y_count and
* curr_x_count can't be read as an atomic operation,
uart->rx_dma_buf.tail = uart->rx_dma_buf.head;
}
- spin_unlock_bh(&uart->port.lock);
- dma_enable_irq(uart->tx_dma_channel);
+ spin_unlock_bh(&uart->rx_lock);
dma_enable_irq(uart->rx_dma_channel);
mod_timer(&(uart->rx_dma_timer), jiffies + DMA_RX_FLUSH_JIFFIES);
unsigned short irqstat;
int x_pos, pos;
- spin_lock(&uart->port.lock);
+ spin_lock(&uart->rx_lock);
irqstat = get_dma_curr_irqstat(uart->rx_dma_channel);
clear_dma_irqstat(uart->rx_dma_channel);
uart->rx_dma_buf.tail = uart->rx_dma_buf.head;
}
- spin_unlock(&uart->port.lock);
+ spin_unlock(&uart->rx_lock);
return IRQ_HANDLED;
}
}
#ifdef CONFIG_SERIAL_BFIN_DMA
+ spin_lock_init(&uart->rx_lock);
uart->tx_done = 1;
uart->tx_count = 0;
s->rts = 0;
sprintf(b, "max3100-%d", s->minor);
- s->workqueue = create_freezeable_workqueue(b);
+ s->workqueue = create_freezable_workqueue(b);
if (!s->workqueue) {
dev_warn(&s->spi->dev, "cannot create workqueue\n");
return -EBUSY;
struct max3107_port *s = container_of(port, struct max3107_port, port);
/* Initialize work queue */
- s->workqueue = create_freezeable_workqueue("max3107");
+ s->workqueue = create_freezable_workqueue("max3107");
if (!s->workqueue) {
dev_err(&s->spi->dev, "Workqueue creation failed\n");
return -EBUSY;
#ifdef CONFIG_SERIAL_SB1250_DUART_CONSOLE
/*
* Serial console stuff. Very basic, polling driver for doing serial
- * console output. The console_sem is held by the caller, so we
+ * console output. The console_lock is held by the caller, so we
* shouldn't be interrupted for more console activity.
*/
static void sbd_console_putchar(struct uart_port *uport, int ch)
#include <asm/irq_regs.h>
/* Whether we react on sysrq keys or just ignore them */
-static int __read_mostly sysrq_enabled = 1;
+static int __read_mostly sysrq_enabled = SYSRQ_DEFAULT_ENABLE;
static bool __read_mostly sysrq_always_enabled;
static bool sysrq_on(void)
unsigned int alt_use;
bool active;
bool need_reinject;
+ bool reinjecting;
};
static void sysrq_reinject_alt_sysrq(struct work_struct *work)
unsigned int alt_code = sysrq->alt_use;
if (sysrq->need_reinject) {
+ /* we do not want the assignment to be reordered */
+ sysrq->reinjecting = true;
+ mb();
+
/* Simulate press and release of Alt + SysRq */
input_inject_event(handle, EV_KEY, alt_code, 1);
input_inject_event(handle, EV_KEY, KEY_SYSRQ, 1);
input_inject_event(handle, EV_KEY, KEY_SYSRQ, 0);
input_inject_event(handle, EV_KEY, alt_code, 0);
input_inject_event(handle, EV_SYN, SYN_REPORT, 1);
+
+ mb();
+ sysrq->reinjecting = false;
}
}
bool was_active = sysrq->active;
bool suppress;
+ /*
+ * Do not filter anything if we are in the process of re-injecting
+ * Alt+SysRq combination.
+ */
+ if (sysrq->reinjecting)
+ return false;
+
switch (type) {
case EV_SYN:
sysrq->alt_use = sysrq->alt;
/*
* If nothing else will be pressed we'll need
- * to * re-inject Alt-SysRq keysroke.
+ * to re-inject Alt-SysRq keystroke.
*/
sysrq->need_reinject = true;
}
struct console *c;
ssize_t count = 0;
- acquire_console_sem();
- for (c = console_drivers; c; c = c->next) {
+ console_lock();
+ for_each_console(c) {
if (!c->device)
continue;
if (!c->write)
while (i--)
count += sprintf(buf + count, "%s%d%c",
cs[i]->name, cs[i]->index, i ? ' ':'\n');
- release_console_sem();
+ console_unlock();
return count;
}
if (IS_ERR(consdev))
consdev = NULL;
else
- device_create_file(consdev, &dev_attr_active);
+ WARN_ON(device_create_file(consdev, &dev_attr_active) < 0);
#ifdef CONFIG_VT
vty_init(&console_fops);
/* always called with BTM from vt_ioctl */
WARN_ON(!tty_locked());
- acquire_console_sem();
+ console_lock();
poke_blanked_console();
- release_console_sem();
+ console_unlock();
ld = tty_ldisc_ref(tty);
if (!ld) {
/* Select the proper current console and verify
* sanity of the situation under the console lock.
*/
- acquire_console_sem();
+ console_lock();
attr = (currcons & 128);
currcons = (currcons & 127);
* the pagefault handling code may want to call printk().
*/
- release_console_sem();
+ console_unlock();
ret = copy_to_user(buf, con_buf_start, orig_count);
- acquire_console_sem();
+ console_lock();
if (ret) {
read += (orig_count - ret);
if (read)
ret = read;
unlock_out:
- release_console_sem();
+ console_unlock();
mutex_unlock(&con_buf_mtx);
return ret;
}
/* Select the proper current console and verify
* sanity of the situation under the console lock.
*/
- acquire_console_sem();
+ console_lock();
attr = (currcons & 128);
currcons = (currcons & 127);
/* Temporarily drop the console lock so that we can read
* in the write data from userspace safely.
*/
- release_console_sem();
+ console_unlock();
ret = copy_from_user(con_buf, buf, this_round);
- acquire_console_sem();
+ console_lock();
if (ret) {
this_round -= ret;
vcs_scr_updated(vc);
unlock_out:
- release_console_sem();
+ console_unlock();
mutex_unlock(&con_buf_mtx);
struct vc_data *vc = tty->driver_data;
int ret;
- acquire_console_sem();
+ console_lock();
ret = vc_do_resize(tty, vc, ws->ws_col, ws->ws_row);
- release_console_sem();
+ console_unlock();
return ret;
}
vc->vc_color = vc->vc_def_color;
}
-/* console_sem is held */
+/* console_lock is held */
static void csi_m(struct vc_data *vc)
{
int i;
return vc_cons[fg_console].d->vc_report_mouse;
}
-/* console_sem is held */
+/* console_lock is held */
static void set_mode(struct vc_data *vc, int on_off)
{
int i;
}
}
-/* console_sem is held */
+/* console_lock is held */
static void setterm_command(struct vc_data *vc)
{
switch(vc->vc_par[0]) {
}
}
-/* console_sem is held */
+/* console_lock is held */
static void csi_at(struct vc_data *vc, unsigned int nr)
{
if (nr > vc->vc_cols - vc->vc_x)
insert_char(vc, nr);
}
-/* console_sem is held */
+/* console_lock is held */
static void csi_L(struct vc_data *vc, unsigned int nr)
{
if (nr > vc->vc_rows - vc->vc_y)
vc->vc_need_wrap = 0;
}
-/* console_sem is held */
+/* console_lock is held */
static void csi_P(struct vc_data *vc, unsigned int nr)
{
if (nr > vc->vc_cols - vc->vc_x)
delete_char(vc, nr);
}
-/* console_sem is held */
+/* console_lock is held */
static void csi_M(struct vc_data *vc, unsigned int nr)
{
if (nr > vc->vc_rows - vc->vc_y)
vc->vc_need_wrap = 0;
}
-/* console_sem is held (except via vc_init->reset_terminal */
+/* console_lock is held (except via vc_init->reset_terminal) */
static void save_cur(struct vc_data *vc)
{
vc->vc_saved_x = vc->vc_x;
vc->vc_saved_G1 = vc->vc_G1_charset;
}
-/* console_sem is held */
+/* console_lock is held */
static void restore_cur(struct vc_data *vc)
{
gotoxy(vc, vc->vc_saved_x, vc->vc_saved_y);
EShash, ESsetG0, ESsetG1, ESpercent, ESignore, ESnonstd,
ESpalette };
-/* console_sem is held (except via vc_init()) */
+/* console_lock is held (except via vc_init()) */
static void reset_terminal(struct vc_data *vc, int do_clear)
{
vc->vc_top = 0;
csi_J(vc, 2);
}
-/* console_sem is held */
+/* console_lock is held */
static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
{
/*
return bisearch(ucs, double_width, ARRAY_SIZE(double_width) - 1);
}
-/* acquires console_sem */
+/* acquires console_lock */
static int do_con_write(struct tty_struct *tty, const unsigned char *buf, int count)
{
#ifdef VT_BUF_VRAM_ONLY
might_sleep();
- acquire_console_sem();
+ console_lock();
vc = tty->driver_data;
if (vc == NULL) {
printk(KERN_ERR "vt: argh, driver_data is NULL !\n");
- release_console_sem();
+ console_unlock();
return 0;
}
if (!vc_cons_allocated(currcons)) {
/* could this happen? */
printk_once("con_write: tty %d not allocated\n", currcons+1);
- release_console_sem();
+ console_unlock();
return 0;
}
}
FLUSH
console_conditional_schedule();
- release_console_sem();
+ console_unlock();
notify_update(vc);
return n;
#undef FLUSH
* us to do the switches asynchronously (needed when we want
* to switch due to a keyboard interrupt). Synchronization
* with other console code and prevention of re-entrancy is
- * ensured with console_sem.
+ * ensured with console_lock.
*/
static void console_callback(struct work_struct *ignored)
{
- acquire_console_sem();
+ console_lock();
if (want_console >= 0) {
if (want_console != fg_console &&
}
notify_update(vc_cons[fg_console].d);
- release_console_sem();
+ console_unlock();
}
int set_console(int nr)
*/
/*
- * Generally a bit racy with respect to console_sem().
+ * Generally a bit racy with respect to console_lock().
*
* There are some functions which don't need it.
*
switch (type)
{
case TIOCL_SETSEL:
- acquire_console_sem();
+ console_lock();
ret = set_selection((struct tiocl_selection __user *)(p+1), tty);
- release_console_sem();
+ console_unlock();
break;
case TIOCL_PASTESEL:
ret = paste_selection(tty);
break;
case TIOCL_UNBLANKSCREEN:
- acquire_console_sem();
+ console_lock();
unblank_screen();
- release_console_sem();
+ console_unlock();
break;
case TIOCL_SELLOADLUT:
ret = sel_loadlut(p);
}
break;
case TIOCL_BLANKSCREEN: /* until explicitly unblanked, not only poked */
- acquire_console_sem();
+ console_lock();
ignore_poke = 1;
do_blank_screen(0);
- release_console_sem();
+ console_unlock();
break;
case TIOCL_BLANKEDSCREEN:
ret = console_blanked;
return;
/* if we race with con_close(), vt may be null */
- acquire_console_sem();
+ console_lock();
vc = tty->driver_data;
if (vc)
set_cursor(vc);
- release_console_sem();
+ console_unlock();
}
/*
unsigned int currcons = tty->index;
int ret = 0;
- acquire_console_sem();
+ console_lock();
if (tty->driver_data == NULL) {
ret = vc_allocate(currcons);
if (ret == 0) {
/* Still being freed */
if (vc->port.tty) {
- release_console_sem();
+ console_unlock();
return -ERESTARTSYS;
}
tty->driver_data = vc;
tty->termios->c_iflag |= IUTF8;
else
tty->termios->c_iflag &= ~IUTF8;
- release_console_sem();
+ console_unlock();
return ret;
}
}
- release_console_sem();
+ console_unlock();
return ret;
}
{
struct vc_data *vc = tty->driver_data;
BUG_ON(vc == NULL);
- acquire_console_sem();
+ console_lock();
vc->port.tty = NULL;
- release_console_sem();
+ console_unlock();
tty_shutdown(tty);
}
struct vc_data *vc;
unsigned int currcons = 0, i;
- acquire_console_sem();
+ console_lock();
if (conswitchp)
display_desc = conswitchp->con_startup();
if (!display_desc) {
fg_console = 0;
- release_console_sem();
+ console_unlock();
return 0;
}
printable = 1;
printk("\n");
- release_console_sem();
+ console_unlock();
#ifdef CONFIG_VT_CONSOLE
register_console(&vt_console_driver);
if (IS_ERR(tty0dev))
tty0dev = NULL;
else
- device_create_file(tty0dev, &dev_attr_active);
+ WARN_ON(device_create_file(tty0dev, &dev_attr_active) < 0);
vcs_init();
if (!try_module_get(owner))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
/* check if driver is registered */
for (i = 0; i < MAX_NR_CON_DRIVER; i++) {
retval = 0;
err:
- release_console_sem();
+ console_unlock();
module_put(owner);
return retval;
};
if (!try_module_get(owner))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
/* check if driver is registered and if it is unbindable */
for (i = 0; i < MAX_NR_CON_DRIVER; i++) {
}
if (retval) {
- release_console_sem();
+ console_unlock();
goto err;
}
}
if (retval) {
- release_console_sem();
+ console_unlock();
goto err;
}
if (!con_is_bound(csw)) {
- release_console_sem();
+ console_unlock();
goto err;
}
if (!con_is_bound(csw))
con_driver->flag &= ~CON_DRIVER_FLAG_INIT;
- release_console_sem();
+ console_unlock();
/* ignore return value, binding should not fail */
bind_con_driver(defcsw, first, last, deflt);
err:
if (!try_module_get(owner))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
for (i = 0; i < MAX_NR_CON_DRIVER; i++) {
con_driver = &registered_con_driver[i];
/* already registered */
if (con_driver->con == csw)
- retval = -EINVAL;
+ retval = -EBUSY;
}
if (retval)
}
err:
- release_console_sem();
+ console_unlock();
module_put(owner);
return retval;
}
{
int i, retval = -ENODEV;
- acquire_console_sem();
+ console_lock();
/* cannot unregister a bound driver */
if (con_is_bound(csw))
}
}
err:
- release_console_sem();
+ console_unlock();
return retval;
}
EXPORT_SYMBOL(unregister_con_driver);
int err;
err = register_con_driver(csw, first, last);
-
+ /* if we get a busy error we still want to bind the console driver
+ * and return success, as we may have unbound the console driver
+ * but not unregistered it.
+ */
+ if (err == -EBUSY)
+ err = 0;
if (!err)
bind_con_driver(csw, first, last, deflt);
{
int rc;
- acquire_console_sem();
+ console_lock();
rc = set_get_cmap (arg,1);
- release_console_sem();
+ console_unlock();
return rc;
}
{
int rc;
- acquire_console_sem();
+ console_lock();
rc = set_get_cmap (arg,0);
- release_console_sem();
+ console_unlock();
return rc;
}
} else
font.data = NULL;
- acquire_console_sem();
+ console_lock();
if (vc->vc_sw->con_font_get)
rc = vc->vc_sw->con_font_get(vc, &font);
else
rc = -ENOSYS;
- release_console_sem();
+ console_unlock();
if (rc)
goto out;
font.data = memdup_user(op->data, size);
if (IS_ERR(font.data))
return PTR_ERR(font.data);
- acquire_console_sem();
+ console_lock();
if (vc->vc_sw->con_font_set)
rc = vc->vc_sw->con_font_set(vc, &font, op->flags);
else
rc = -ENOSYS;
- release_console_sem();
+ console_unlock();
kfree(font.data);
return rc;
}
else
name[MAX_FONT_NAME - 1] = 0;
- acquire_console_sem();
+ console_lock();
if (vc->vc_sw->con_font_default)
rc = vc->vc_sw->con_font_default(vc, &font, s);
else
rc = -ENOSYS;
- release_console_sem();
+ console_unlock();
if (!rc) {
op->width = font.width;
op->height = font.height;
if (vc->vc_mode != KD_TEXT)
return -EINVAL;
- acquire_console_sem();
+ console_lock();
if (!vc->vc_sw->con_font_copy)
rc = -ENOSYS;
else if (con < 0 || !vc_cons_allocated(con))
rc = 0;
else
rc = vc->vc_sw->con_font_copy(vc, con);
- release_console_sem();
+ console_unlock();
return rc;
}
/*
* explicitly blank/unblank the screen if switching modes
*/
- acquire_console_sem();
+ console_lock();
if (arg == KD_TEXT)
do_unblank_screen(1);
else
do_blank_screen(1);
- release_console_sem();
+ console_unlock();
break;
case KDGETMODE:
ret = -EINVAL;
goto out;
}
- acquire_console_sem();
+ console_lock();
vc->vt_mode = tmp;
/* the frsig is ignored, so we set it to 0 */
vc->vt_mode.frsig = 0;
vc->vt_pid = get_pid(task_pid(current));
/* no switch is required -- saw@shade.msu.ru */
vc->vt_newvt = -1;
- release_console_sem();
+ console_unlock();
break;
}
struct vt_mode tmp;
int rc;
- acquire_console_sem();
+ console_lock();
memcpy(&tmp, &vc->vt_mode, sizeof(struct vt_mode));
- release_console_sem();
+ console_unlock();
rc = copy_to_user(up, &tmp, sizeof(struct vt_mode));
if (rc)
ret = -ENXIO;
else {
arg--;
- acquire_console_sem();
+ console_lock();
ret = vc_allocate(arg);
- release_console_sem();
+ console_unlock();
if (ret)
break;
set_console(arg);
ret = -ENXIO;
else {
vsa.console--;
- acquire_console_sem();
+ console_lock();
ret = vc_allocate(vsa.console);
if (ret == 0) {
struct vc_data *nvc;
put_pid(nvc->vt_pid);
nvc->vt_pid = get_pid(task_pid(current));
}
- release_console_sem();
+ console_unlock();
if (ret)
break;
/* Commence switch and lock */
/*
* Switching-from response
*/
- acquire_console_sem();
+ console_lock();
if (vc->vt_newvt >= 0) {
if (arg == 0)
/*
vc->vt_newvt = -1;
ret = vc_allocate(newvt);
if (ret) {
- release_console_sem();
+ console_unlock();
break;
}
/*
if (arg != VT_ACKACQ)
ret = -EINVAL;
}
- release_console_sem();
+ console_unlock();
break;
/*
}
if (arg == 0) {
/* deallocate all unused consoles, but leave 0 */
- acquire_console_sem();
+ console_lock();
for (i=1; i<MAX_NR_CONSOLES; i++)
if (! VT_BUSY(i))
vc_deallocate(i);
- release_console_sem();
+ console_unlock();
} else {
/* deallocate a single console, if possible */
arg--;
if (VT_BUSY(arg))
ret = -EBUSY;
else if (arg) { /* leave 0 */
- acquire_console_sem();
+ console_lock();
vc_deallocate(arg);
- release_console_sem();
+ console_unlock();
}
}
break;
get_user(cc, &vtsizes->v_cols))
ret = -EFAULT;
else {
- acquire_console_sem();
+ console_lock();
for (i = 0; i < MAX_NR_CONSOLES; i++) {
vc = vc_cons[i].d;
vc_resize(vc_cons[i].d, cc, ll);
}
}
- release_console_sem();
+ console_unlock();
}
break;
}
for (i = 0; i < MAX_NR_CONSOLES; i++) {
if (!vc_cons[i].d)
continue;
- acquire_console_sem();
+ console_lock();
if (vlin)
vc_cons[i].d->vc_scan_lines = vlin;
if (clin)
vc_cons[i].d->vc_font.height = clin;
vc_cons[i].d->vc_resize_user = 1;
vc_resize(vc_cons[i].d, cc, ll);
- release_console_sem();
+ console_unlock();
}
break;
}
struct vc_data *vc;
struct tty_struct *tty;
- acquire_console_sem();
+ console_lock();
vc = vc_con->d;
if (vc) {
tty = vc->port.tty;
__do_SAK(tty);
reset_vc(vc);
}
- release_console_sem();
+ console_unlock();
}
#ifdef CONFIG_COMPAT
{
int prev;
- acquire_console_sem();
+ console_lock();
/* Graphics mode - up to X */
if (disable_vt_switch) {
- release_console_sem();
+ console_unlock();
return 0;
}
prev = fg_console;
if (alloc && vc_allocate(vt)) {
/* we can't have a free VC for now. Too bad,
* we don't want to mess the screen for now. */
- release_console_sem();
+ console_unlock();
return -ENOSPC;
}
* Let the calling function know so it can decide
* what to do.
*/
- release_console_sem();
+ console_unlock();
return -EIO;
}
- release_console_sem();
+ console_unlock();
tty_lock();
if (vt_waitactive(vt + 1)) {
pr_debug("Suspend: Can't switch VCs.");
*/
void pm_set_vt_switch(int do_switch)
{
- acquire_console_sem();
+ console_lock();
disable_vt_switch = !do_switch;
- release_console_sem();
+ console_unlock();
}
EXPORT_SYMBOL(pm_set_vt_switch);
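The console_sem-to-console_lock conversion in the hunks above is mechanical; the call sites keep the same pairing. A minimal sketch of that pairing, assuming the caller is in a context that may sleep:

	/* Sketch: VT/console state is only touched under the console lock,
	 * exactly as at the converted call sites above.
	 */
	console_lock();
	/* ... inspect or modify console / vc state ... */
	console_unlock();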
{ NOKIA_PCSUITE_ACM_INFO(0x0154), }, /* Nokia 5800 XpressMusic */
{ NOKIA_PCSUITE_ACM_INFO(0x04ce), }, /* Nokia E90 */
{ NOKIA_PCSUITE_ACM_INFO(0x01d4), }, /* Nokia E55 */
+ { NOKIA_PCSUITE_ACM_INFO(0x0302), }, /* Nokia N8 */
{ SAMSUNG_PCSUITE_ACM_INFO(0x6651), }, /* Samsung GTi8510 (INNOV8) */
/* NOTE: non-Nokia COMM/ACM/0xff is likely MSFT RNDIS... NOT a modem! */
goto outnp;
}
- if (!file->f_flags && O_NONBLOCK)
+ if (!(file->f_flags & O_NONBLOCK))
r = wait_event_interruptible(desc->wait, !test_bit(WDM_IN_USE,
&desc->flags));
else
config USB_OTG_WHITELIST
bool "Rely on OTG Targeted Peripherals List"
- depends on USB_OTG || EMBEDDED
+ depends on USB_OTG || EXPERT
default y if USB_OTG
- default n if EMBEDDED
+ default n if EXPERT
help
If you say Y here, the "otg_whitelist.h" file will be used as a
product whitelist, so USB peripherals not listed there will be
config USB_OTG_BLACKLIST_HUB
bool "Disable external hubs"
- depends on USB_OTG || EMBEDDED
+ depends on USB_OTG || EXPERT
help
If you say Y here, then Linux will refuse to enumerate
external hubs. OTG hosts are allowed to reduce hardware
ep_dev->dev.parent = parent;
ep_dev->dev.release = ep_device_release;
dev_set_name(&ep_dev->dev, "ep_%02x", endpoint->desc.bEndpointAddress);
- device_enable_async_suspend(&ep_dev->dev);
retval = device_register(&ep_dev->dev);
if (retval)
goto error_register;
+ device_enable_async_suspend(&ep_dev->dev);
endpoint->ep_dev = ep_dev;
return retval;
return retval;
}
- synchronize_irq(pci_dev->irq);
+ /* If MSI-X is enabled, the driver will have synchronized all vectors
+ * in pci_suspend(). If MSI or legacy PCI is enabled, that will be
+ * synchronized here.
+ */
+ if (!hcd->msix_enabled)
+ synchronize_irq(pci_dev->irq);
/* Downstream ports from this root hub should already be quiesced, so
* there will be no DMA activity. Now we can shut down the upstream
dev_dbg(&rhdev->dev, "usb %s%s\n",
(msg.event & PM_EVENT_AUTO ? "auto-" : ""), "resume");
- clear_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags);
if (!hcd->driver->bus_resume)
return -ENOENT;
if (hcd->state == HC_STATE_RUNNING)
hcd->state = HC_STATE_RESUMING;
status = hcd->driver->bus_resume(hcd);
+ clear_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags);
if (status == 0) {
/* TRSMRCY = 10 msec */
msleep(10);
static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
{
struct usb_device *hdev = hub->hdev;
+ struct usb_hcd *hcd;
+ int ret;
int port1;
int status;
bool need_debounce_delay = false;
usb_autopm_get_interface_no_resume(
to_usb_interface(hub->intfdev));
return; /* Continues at init2: below */
+ } else if (type == HUB_RESET_RESUME) {
+ /* The internal host controller state for the hub device
+ * may be gone after a host power loss on system resume.
+ * Update the device's info so the HW knows it's a hub.
+ */
+ hcd = bus_to_hcd(hdev->bus);
+ if (hcd->driver->update_hub_device) {
+ ret = hcd->driver->update_hub_device(hcd, hdev,
+ &hub->tt, GFP_NOIO);
+ if (ret < 0) {
+ dev_err(hub->intfdev, "Host not "
+ "accepting hub info "
+ "update.\n");
+ dev_err(hub->intfdev, "LS/FS devices "
+ "and hubs may not work "
+ "under this hub\n.");
+ }
+ }
+ hub_power_on(hub, true);
} else {
hub_power_on(hub, true);
}
udev->ttport = hdev->ttport;
} else if (udev->speed != USB_SPEED_HIGH
&& hdev->speed == USB_SPEED_HIGH) {
+ if (!hub->tt.hub) {
+ dev_err(&udev->dev, "parent hub has no TT\n");
+ retval = -EINVAL;
+ goto fail;
+ }
udev->tt = &hub->tt;
udev->ttport = port1;
}
select USB_GADGET_SELECTED
config USB_GADGET_EG20T
- boolean "Intel EG20T(Topcliff) USB Device controller"
+ boolean "Intel EG20T PCH/OKI SEMICONDUCTOR ML7213 IOH UDC"
depends on PCI
select USB_GADGET_DUALSPEED
help
This driver does not support interrupt transfer or isochronous
transfer modes.
+ This driver can also be used for OKI SEMICONDUCTOR's ML7213, which is
+ intended for IVI (In-Vehicle Infotainment) use.
+ ML7213 is a companion chip for the Intel Atom E6xx series and is
+ fully compatible with the Intel EG20T PCH.
+
config USB_EG20T
tristate
depends on USB_GADGET_EG20T
ci13xxx_udc core.
This driver depends on OTG driver for PHY initialization,
clock management, powering up VBUS, and power management.
+ This driver is not supported on boards like trout which
+ have an external PHY.
Say "y" to link the driver statically, or "m" to build a
dynamically linked module called "ci13xxx_msm" and force all
/* control endpoint description */
static const struct usb_endpoint_descriptor
-ctrl_endpt_desc = {
+ctrl_endpt_out_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_CONTROL,
+ .wMaxPacketSize = cpu_to_le16(CTRL_PAYLOAD_MAX),
+};
+
+static const struct usb_endpoint_descriptor
+ctrl_endpt_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+
+ .bEndpointAddress = USB_DIR_IN,
.bmAttributes = USB_ENDPOINT_XFER_CONTROL,
.wMaxPacketSize = cpu_to_le16(CTRL_PAYLOAD_MAX),
};
hw_bank.size /= sizeof(u32);
reg = hw_aread(ABS_DCCPARAMS, DCCPARAMS_DEN) >> ffs_nr(DCCPARAMS_DEN);
- if (reg == 0 || reg > ENDPT_MAX)
- return -ENODEV;
+ hw_ep_max = reg * 2; /* cache hw ENDPT_MAX */
- hw_ep_max = reg; /* cache hw ENDPT_MAX */
+ if (hw_ep_max == 0 || hw_ep_max > ENDPT_MAX)
+ return -ENODEV;
/* setup lock mode ? */
}
spin_lock_irqsave(udc->lock, flags);
- for (i = 0; i < hw_ep_max; i++) {
- struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i];
+ for (i = 0; i < hw_ep_max/2; i++) {
+ struct ci13xxx_ep *mEpRx = &udc->ci13xxx_ep[i];
+ struct ci13xxx_ep *mEpTx = &udc->ci13xxx_ep[i + hw_ep_max/2];
n += scnprintf(buf + n, PAGE_SIZE - n,
"EP=%02i: RX=%08X TX=%08X\n",
- i, (u32)mEp->qh[RX].dma, (u32)mEp->qh[TX].dma);
+ i, (u32)mEpRx->qh.dma, (u32)mEpTx->qh.dma);
for (j = 0; j < (sizeof(struct ci13xxx_qh)/sizeof(u32)); j++) {
n += scnprintf(buf + n, PAGE_SIZE - n,
" %04X: %08X %08X\n", j,
- *((u32 *)mEp->qh[RX].ptr + j),
- *((u32 *)mEp->qh[TX].ptr + j));
+ *((u32 *)mEpRx->qh.ptr + j),
+ *((u32 *)mEpTx->qh.ptr + j));
}
}
spin_unlock_irqrestore(udc->lock, flags);
unsigned long flags;
struct list_head *ptr = NULL;
struct ci13xxx_req *req = NULL;
- unsigned i, j, k, n = 0, qSize = sizeof(struct ci13xxx_td)/sizeof(u32);
+ unsigned i, j, n = 0, qSize = sizeof(struct ci13xxx_td)/sizeof(u32);
dbg_trace("[%s] %p\n", __func__, buf);
if (attr == NULL || buf == NULL) {
spin_lock_irqsave(udc->lock, flags);
for (i = 0; i < hw_ep_max; i++)
- for (k = RX; k <= TX; k++)
- list_for_each(ptr, &udc->ci13xxx_ep[i].qh[k].queue)
- {
- req = list_entry(ptr,
- struct ci13xxx_req, queue);
+ list_for_each(ptr, &udc->ci13xxx_ep[i].qh.queue)
+ {
+ req = list_entry(ptr, struct ci13xxx_req, queue);
+
+ n += scnprintf(buf + n, PAGE_SIZE - n,
+ "EP=%02i: TD=%08X %s\n",
+ i % hw_ep_max/2, (u32)req->dma,
+ ((i < hw_ep_max/2) ? "RX" : "TX"));
+ for (j = 0; j < qSize; j++)
n += scnprintf(buf + n, PAGE_SIZE - n,
- "EP=%02i: TD=%08X %s\n",
- i, (u32)req->dma,
- ((k == RX) ? "RX" : "TX"));
-
- for (j = 0; j < qSize; j++)
- n += scnprintf(buf + n, PAGE_SIZE - n,
- " %04X: %08X\n", j,
- *((u32 *)req->ptr + j));
- }
+ " %04X: %08X\n", j,
+ *((u32 *)req->ptr + j));
+ }
spin_unlock_irqrestore(udc->lock, flags);
return n;
* At this point it's guaranteed exclusive access to qhead
* (endpt is not primed) so it's no need to use tripwire
*/
- mEp->qh[mEp->dir].ptr->td.next = mReq->dma; /* TERMINATE = 0 */
- mEp->qh[mEp->dir].ptr->td.token &= ~TD_STATUS; /* clear status */
+ mEp->qh.ptr->td.next = mReq->dma; /* TERMINATE = 0 */
+ mEp->qh.ptr->td.token &= ~TD_STATUS; /* clear status */
if (mReq->req.zero == 0)
- mEp->qh[mEp->dir].ptr->cap |= QH_ZLT;
+ mEp->qh.ptr->cap |= QH_ZLT;
else
- mEp->qh[mEp->dir].ptr->cap &= ~QH_ZLT;
+ mEp->qh.ptr->cap &= ~QH_ZLT;
wmb(); /* synchronize before ep prime */
hw_ep_flush(mEp->num, mEp->dir);
- while (!list_empty(&mEp->qh[mEp->dir].queue)) {
+ while (!list_empty(&mEp->qh.queue)) {
/* pop oldest request */
struct ci13xxx_req *mReq = \
- list_entry(mEp->qh[mEp->dir].queue.next,
+ list_entry(mEp->qh.queue.next,
struct ci13xxx_req, queue);
list_del_init(&mReq->queue);
mReq->req.status = -ESHUTDOWN;
{
struct usb_ep *ep;
struct ci13xxx *udc = container_of(gadget, struct ci13xxx, gadget);
- struct ci13xxx_ep *mEp = container_of(gadget->ep0,
- struct ci13xxx_ep, ep);
trace("%p", gadget);
gadget_for_each_ep(ep, gadget) {
usb_ep_fifo_flush(ep);
}
- usb_ep_fifo_flush(gadget->ep0);
+ usb_ep_fifo_flush(&udc->ep0out.ep);
+ usb_ep_fifo_flush(&udc->ep0in.ep);
udc->driver->disconnect(gadget);
gadget_for_each_ep(ep, gadget) {
usb_ep_disable(ep);
}
- usb_ep_disable(gadget->ep0);
+ usb_ep_disable(&udc->ep0out.ep);
+ usb_ep_disable(&udc->ep0in.ep);
- if (mEp->status != NULL) {
- usb_ep_free_request(gadget->ep0, mEp->status);
- mEp->status = NULL;
+ if (udc->status != NULL) {
+ usb_ep_free_request(&udc->ep0in.ep, udc->status);
+ udc->status = NULL;
}
return 0;
__releases(udc->lock)
__acquires(udc->lock)
{
- struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[0];
int retval;
trace("%p", udc);
if (retval)
goto done;
- retval = usb_ep_enable(&mEp->ep, &ctrl_endpt_desc);
+ retval = usb_ep_enable(&udc->ep0out.ep, &ctrl_endpt_out_desc);
+ if (retval)
+ goto done;
+
+ retval = usb_ep_enable(&udc->ep0in.ep, &ctrl_endpt_in_desc);
if (!retval) {
- mEp->status = usb_ep_alloc_request(&mEp->ep, GFP_ATOMIC);
- if (mEp->status == NULL) {
- usb_ep_disable(&mEp->ep);
+ udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_ATOMIC);
+ if (udc->status == NULL) {
+ usb_ep_disable(&udc->ep0out.ep);
retval = -ENOMEM;
}
}
/**
* isr_get_status_response: get_status request response
- * @ep: endpoint
+ * @udc: udc struct
* @setup: setup request packet
*
* This function returns an error code
*/
-static int isr_get_status_response(struct ci13xxx_ep *mEp,
+static int isr_get_status_response(struct ci13xxx *udc,
struct usb_ctrlrequest *setup)
__releases(mEp->lock)
__acquires(mEp->lock)
{
+ struct ci13xxx_ep *mEp = &udc->ep0in;
struct usb_request *req = NULL;
gfp_t gfp_flags = GFP_ATOMIC;
int dir, num, retval;
/**
* isr_setup_status_phase: queues the status phase of a setup transation
- * @mEp: endpoint
+ * @udc: udc struct
*
* This function returns an error code
*/
-static int isr_setup_status_phase(struct ci13xxx_ep *mEp)
+static int isr_setup_status_phase(struct ci13xxx *udc)
__releases(mEp->lock)
__acquires(mEp->lock)
{
int retval;
+ struct ci13xxx_ep *mEp;
- trace("%p", mEp);
-
- /* mEp is always valid & configured */
-
- if (mEp->type == USB_ENDPOINT_XFER_CONTROL)
- mEp->dir = (mEp->dir == TX) ? RX : TX;
+ trace("%p", udc);
- mEp->status->no_interrupt = 1;
+ mEp = (udc->ep0_dir == TX) ? &udc->ep0out : &udc->ep0in;
spin_unlock(mEp->lock);
- retval = usb_ep_queue(&mEp->ep, mEp->status, GFP_ATOMIC);
+ retval = usb_ep_queue(&mEp->ep, udc->status, GFP_ATOMIC);
spin_lock(mEp->lock);
return retval;
trace("%p", mEp);
- if (list_empty(&mEp->qh[mEp->dir].queue))
+ if (list_empty(&mEp->qh.queue))
return -EINVAL;
/* pop oldest request */
- mReq = list_entry(mEp->qh[mEp->dir].queue.next,
+ mReq = list_entry(mEp->qh.queue.next,
struct ci13xxx_req, queue);
list_del_init(&mReq->queue);
dbg_done(_usb_addr(mEp), mReq->ptr->token, retval);
- if (!list_empty(&mEp->qh[mEp->dir].queue)) {
+ if (!list_empty(&mEp->qh.queue)) {
struct ci13xxx_req* mReqEnq;
- mReqEnq = list_entry(mEp->qh[mEp->dir].queue.next,
+ mReqEnq = list_entry(mEp->qh.queue.next,
struct ci13xxx_req, queue);
_hardware_enqueue(mEp, mReqEnq);
}
int type, num, err = -EINVAL;
struct usb_ctrlrequest req;
-
if (mEp->desc == NULL)
continue; /* not configured */
- if ((mEp->dir == RX && hw_test_and_clear_complete(i)) ||
- (mEp->dir == TX && hw_test_and_clear_complete(i + 16))) {
+ if (hw_test_and_clear_complete(i)) {
err = isr_tr_complete_low(mEp);
if (mEp->type == USB_ENDPOINT_XFER_CONTROL) {
if (err > 0) /* needs status phase */
- err = isr_setup_status_phase(mEp);
+ err = isr_setup_status_phase(udc);
if (err < 0) {
dbg_event(_usb_addr(mEp),
"ERROR", err);
continue;
}
+ /*
+ * Flush data and handshake transactions of previous
+ * setup packet.
+ */
+ _ep_nuke(&udc->ep0out);
+ _ep_nuke(&udc->ep0in);
+
/* read_setup_packet */
do {
hw_test_and_set_setup_guard();
- memcpy(&req, &mEp->qh[RX].ptr->setup, sizeof(req));
+ memcpy(&req, &mEp->qh.ptr->setup, sizeof(req));
} while (!hw_test_and_clear_setup_guard());
type = req.bRequestType;
- mEp->dir = (type & USB_DIR_IN) ? TX : RX;
+ udc->ep0_dir = (type & USB_DIR_IN) ? TX : RX;
dbg_setup(_usb_addr(mEp), &req);
if (err)
break;
}
- err = isr_setup_status_phase(mEp);
+ err = isr_setup_status_phase(udc);
break;
case USB_REQ_GET_STATUS:
if (type != (USB_DIR_IN|USB_RECIP_DEVICE) &&
if (le16_to_cpu(req.wLength) != 2 ||
le16_to_cpu(req.wValue) != 0)
break;
- err = isr_get_status_response(mEp, &req);
+ err = isr_get_status_response(udc, &req);
break;
case USB_REQ_SET_ADDRESS:
if (type != (USB_DIR_OUT|USB_RECIP_DEVICE))
err = hw_usb_set_address((u8)le16_to_cpu(req.wValue));
if (err)
break;
- err = isr_setup_status_phase(mEp);
+ err = isr_setup_status_phase(udc);
break;
case USB_REQ_SET_FEATURE:
if (type != (USB_DIR_OUT|USB_RECIP_ENDPOINT) &&
spin_lock(udc->lock);
if (err)
break;
- err = isr_setup_status_phase(mEp);
+ err = isr_setup_status_phase(udc);
break;
default:
delegate:
if (req.wLength == 0) /* no data phase */
- mEp->dir = TX;
+ udc->ep0_dir = TX;
spin_unlock(udc->lock);
err = udc->driver->setup(&udc->gadget, &req);
const struct usb_endpoint_descriptor *desc)
{
struct ci13xxx_ep *mEp = container_of(ep, struct ci13xxx_ep, ep);
- int direction, retval = 0;
+ int retval = 0;
unsigned long flags;
trace("%p, %p", ep, desc);
mEp->desc = desc;
- if (!list_empty(&mEp->qh[mEp->dir].queue))
+ if (!list_empty(&mEp->qh.queue))
warn("enabling a non-empty endpoint!");
mEp->dir = usb_endpoint_dir_in(desc) ? TX : RX;
mEp->ep.maxpacket = __constant_le16_to_cpu(desc->wMaxPacketSize);
- direction = mEp->dir;
- do {
- dbg_event(_usb_addr(mEp), "ENABLE", 0);
+ dbg_event(_usb_addr(mEp), "ENABLE", 0);
- mEp->qh[mEp->dir].ptr->cap = 0;
+ mEp->qh.ptr->cap = 0;
- if (mEp->type == USB_ENDPOINT_XFER_CONTROL)
- mEp->qh[mEp->dir].ptr->cap |= QH_IOS;
- else if (mEp->type == USB_ENDPOINT_XFER_ISOC)
- mEp->qh[mEp->dir].ptr->cap &= ~QH_MULT;
- else
- mEp->qh[mEp->dir].ptr->cap &= ~QH_ZLT;
-
- mEp->qh[mEp->dir].ptr->cap |=
- (mEp->ep.maxpacket << ffs_nr(QH_MAX_PKT)) & QH_MAX_PKT;
- mEp->qh[mEp->dir].ptr->td.next |= TD_TERMINATE; /* needed? */
-
- retval |= hw_ep_enable(mEp->num, mEp->dir, mEp->type);
+ if (mEp->type == USB_ENDPOINT_XFER_CONTROL)
+ mEp->qh.ptr->cap |= QH_IOS;
+ else if (mEp->type == USB_ENDPOINT_XFER_ISOC)
+ mEp->qh.ptr->cap &= ~QH_MULT;
+ else
+ mEp->qh.ptr->cap &= ~QH_ZLT;
- if (mEp->type == USB_ENDPOINT_XFER_CONTROL)
- mEp->dir = (mEp->dir == TX) ? RX : TX;
+ mEp->qh.ptr->cap |=
+ (mEp->ep.maxpacket << ffs_nr(QH_MAX_PKT)) & QH_MAX_PKT;
+ mEp->qh.ptr->td.next |= TD_TERMINATE; /* needed? */
- } while (mEp->dir != direction);
+ retval |= hw_ep_enable(mEp->num, mEp->dir, mEp->type);
spin_unlock_irqrestore(mEp->lock, flags);
return retval;
spin_lock_irqsave(mEp->lock, flags);
if (mEp->type == USB_ENDPOINT_XFER_CONTROL &&
- !list_empty(&mEp->qh[mEp->dir].queue)) {
+ !list_empty(&mEp->qh.queue)) {
_ep_nuke(mEp);
retval = -EOVERFLOW;
warn("endpoint ctrl %X nuked", _usb_addr(mEp));
/* push request */
mReq->req.status = -EINPROGRESS;
mReq->req.actual = 0;
- list_add_tail(&mReq->queue, &mEp->qh[mEp->dir].queue);
+ list_add_tail(&mReq->queue, &mEp->qh.queue);
- if (list_is_singular(&mEp->qh[mEp->dir].queue))
+ if (list_is_singular(&mEp->qh.queue))
retval = _hardware_enqueue(mEp, mReq);
if (retval == -EALREADY) {
trace("%p, %p", ep, req);
if (ep == NULL || req == NULL || mEp->desc == NULL ||
- list_empty(&mReq->queue) || list_empty(&mEp->qh[mEp->dir].queue))
+ list_empty(&mReq->queue) || list_empty(&mEp->qh.queue))
return -EINVAL;
spin_lock_irqsave(mEp->lock, flags);
#ifndef STALL_IN
/* g_file_storage MS compliant but g_zero fails chapter 9 compliance */
if (value && mEp->type == USB_ENDPOINT_XFER_BULK && mEp->dir == TX &&
- !list_empty(&mEp->qh[mEp->dir].queue)) {
+ !list_empty(&mEp->qh.queue)) {
spin_unlock_irqrestore(mEp->lock, flags);
return -EAGAIN;
}
if (is_active) {
pm_runtime_get_sync(&_gadget->dev);
hw_device_reset(udc);
- hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma);
+ hw_device_state(udc->ep0out.qh.dma);
} else {
hw_device_state(0);
if (udc->udc_driver->notify_event)
int (*bind)(struct usb_gadget *))
{
struct ci13xxx *udc = _udc;
- unsigned long i, k, flags;
+ unsigned long flags;
+ int i, j;
int retval = -ENOMEM;
trace("%p", driver);
info("hw_ep_max = %d", hw_ep_max);
- udc->driver = driver;
udc->gadget.dev.driver = NULL;
retval = 0;
- for (i = 0; i < hw_ep_max; i++) {
- struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i];
+ for (i = 0; i < hw_ep_max/2; i++) {
+ for (j = RX; j <= TX; j++) {
+ int k = i + j * hw_ep_max/2;
+ struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[k];
- scnprintf(mEp->name, sizeof(mEp->name), "ep%i", (int)i);
+ scnprintf(mEp->name, sizeof(mEp->name), "ep%i%s", i,
+ (j == TX) ? "in" : "out");
- mEp->lock = udc->lock;
- mEp->device = &udc->gadget.dev;
- mEp->td_pool = udc->td_pool;
+ mEp->lock = udc->lock;
+ mEp->device = &udc->gadget.dev;
+ mEp->td_pool = udc->td_pool;
- mEp->ep.name = mEp->name;
- mEp->ep.ops = &usb_ep_ops;
- mEp->ep.maxpacket = CTRL_PAYLOAD_MAX;
+ mEp->ep.name = mEp->name;
+ mEp->ep.ops = &usb_ep_ops;
+ mEp->ep.maxpacket = CTRL_PAYLOAD_MAX;
- /* this allocation cannot be random */
- for (k = RX; k <= TX; k++) {
- INIT_LIST_HEAD(&mEp->qh[k].queue);
+ INIT_LIST_HEAD(&mEp->qh.queue);
spin_unlock_irqrestore(udc->lock, flags);
- mEp->qh[k].ptr = dma_pool_alloc(udc->qh_pool,
- GFP_KERNEL,
- &mEp->qh[k].dma);
+ mEp->qh.ptr = dma_pool_alloc(udc->qh_pool, GFP_KERNEL,
+ &mEp->qh.dma);
spin_lock_irqsave(udc->lock, flags);
- if (mEp->qh[k].ptr == NULL)
+ if (mEp->qh.ptr == NULL)
retval = -ENOMEM;
else
- memset(mEp->qh[k].ptr, 0,
- sizeof(*mEp->qh[k].ptr));
- }
- if (i == 0)
- udc->gadget.ep0 = &mEp->ep;
- else
+ memset(mEp->qh.ptr, 0, sizeof(*mEp->qh.ptr));
+
+ /* skip ep0 out and in endpoints */
+ if (i == 0)
+ continue;
+
list_add_tail(&mEp->ep.ep_list, &udc->gadget.ep_list);
+ }
}
if (retval)
goto done;
+ udc->gadget.ep0 = &udc->ep0in.ep;
/* bind gadget */
driver->driver.bus = NULL;
udc->gadget.dev.driver = &driver->driver;
goto done;
}
+ udc->driver = driver;
pm_runtime_get_sync(&udc->gadget.dev);
if (udc->udc_driver->flags & CI13XXX_PULLUP_ON_VBUS) {
if (udc->vbus_active) {
}
}
- retval = hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma);
+ retval = hw_device_state(udc->ep0out.qh.dma);
if (retval)
pm_runtime_put_sync(&udc->gadget.dev);
done:
spin_unlock_irqrestore(udc->lock, flags);
- if (retval)
- usb_gadget_unregister_driver(driver);
return retval;
}
EXPORT_SYMBOL(usb_gadget_probe_driver);
int usb_gadget_unregister_driver(struct usb_gadget_driver *driver)
{
struct ci13xxx *udc = _udc;
- unsigned long i, k, flags;
+ unsigned long i, flags;
trace("%p", driver);
for (i = 0; i < hw_ep_max; i++) {
struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i];
- if (i == 0)
- udc->gadget.ep0 = NULL;
- else if (!list_empty(&mEp->ep.ep_list))
+ if (!list_empty(&mEp->ep.ep_list))
list_del_init(&mEp->ep.ep_list);
- for (k = RX; k <= TX; k++)
- if (mEp->qh[k].ptr != NULL)
- dma_pool_free(udc->qh_pool,
- mEp->qh[k].ptr, mEp->qh[k].dma);
+ if (mEp->qh.ptr != NULL)
+ dma_pool_free(udc->qh_pool, mEp->qh.ptr, mEp->qh.dma);
}
+ udc->gadget.ep0 = NULL;
udc->driver = NULL;
spin_unlock_irqrestore(udc->lock, flags);
* DEFINE
*****************************************************************************/
#define CI13XXX_PAGE_SIZE 4096ul /* page size for TD's */
-#define ENDPT_MAX (16)
+#define ENDPT_MAX (32)
#define CTRL_PAYLOAD_MAX (64)
#define RX (0) /* similar to USB_DIR_OUT but can be used as an index */
#define TX (1) /* similar to USB_DIR_IN but can be used as an index */
struct list_head queue;
struct ci13xxx_qh *ptr;
dma_addr_t dma;
- } qh[2];
- struct usb_request *status;
+ } qh;
int wedge;
/* global resources */
struct dma_pool *qh_pool; /* DMA pool for queue heads */
struct dma_pool *td_pool; /* DMA pool for transfer descs */
+ struct usb_request *status; /* ep0 status request */
struct usb_gadget gadget; /* USB slave device */
struct ci13xxx_ep ci13xxx_ep[ENDPT_MAX]; /* extended endpts */
+ u32 ep0_dir; /* ep0 direction */
+#define ep0out ci13xxx_ep[0]
+#define ep0in ci13xxx_ep[16]
struct usb_gadget_driver *driver; /* 3rd party gadget driver */
struct ci13xxx_udc_driver *udc_driver; /* device controller driver */
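The hunks above rearrange the ci13xxx endpoint array so that OUT endpoints occupy the first half and IN endpoints the second half, which is what the ep0out/ep0in macros encode (slot 0 and slot 16, with ENDPT_MAX now 32). A minimal, self-contained sketch of that indexing, assuming hw_ep_max is 32 and using made-up helper names:

#include <stdio.h>

#define HW_EP_MAX 32	/* assumption for this sketch; matches the new ENDPT_MAX */
#define RX 0		/* OUT direction, as in the driver */
#define TX 1		/* IN direction */

/* OUT endpoints land in slots 0..15 and IN endpoints in slots 16..31,
 * mirroring k = i + j * hw_ep_max/2 in the probe loop above.
 */
static int ep_slot(int num, int dir)
{
	return num + dir * (HW_EP_MAX / 2);
}

int main(void)
{
	int i, dir;

	for (i = 0; i < HW_EP_MAX / 2; i++)
		for (dir = RX; dir <= TX; dir++)
			printf("ep%d%s -> slot %d\n", i,
			       dir == TX ? "in" : "out", ep_slot(i, dir));
	return 0;
}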
*/
switch (ctrl->bRequestType & USB_RECIP_MASK) {
case USB_RECIP_INTERFACE:
- if (cdev->config)
- f = cdev->config->interface[intf];
+ if (!cdev->config || w_index >= MAX_CONFIG_INTERFACES)
+ break;
+ f = cdev->config->interface[intf];
break;
case USB_RECIP_ENDPOINT:
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
+#include <linux/usb/composite.h>
#include "gadget_chips.h"
return ERR_PTR(-ENOMEM);
common->free_storage_on_release = 1;
} else {
- memset(common, 0, sizeof common);
+ memset(common, 0, sizeof *common);
common->free_storage_on_release = 0;
}
#define PCH_UDC_BRLEN 0x0F /* Burst length */
#define PCH_UDC_THLEN 0x1F /* Threshold length */
/* Value of EP Buffer Size */
-#define UDC_EP0IN_BUFF_SIZE 64
-#define UDC_EPIN_BUFF_SIZE 512
-#define UDC_EP0OUT_BUFF_SIZE 64
-#define UDC_EPOUT_BUFF_SIZE 512
+#define UDC_EP0IN_BUFF_SIZE 16
+#define UDC_EPIN_BUFF_SIZE 256
+#define UDC_EP0OUT_BUFF_SIZE 16
+#define UDC_EPOUT_BUFF_SIZE 256
/* Value of EP maximum packet size */
#define UDC_EP0IN_MAX_PKT_SIZE 64
#define UDC_EP0OUT_MAX_PKT_SIZE 64
struct pci_pool *data_requests;
struct pci_pool *stp_requests;
dma_addr_t dma_addr;
- unsigned long ep0out_buf[64];
+ void *ep0out_buf;
struct usb_ctrlrequest setup_data;
unsigned long phys_addr;
void __iomem *base_addr;
#define PCH_UDC_PCI_BAR 1
#define PCI_DEVICE_ID_INTEL_EG20T_UDC 0x8808
+#define PCI_VENDOR_ID_ROHM 0x10DB
+#define PCI_DEVICE_ID_ML7213_IOH_UDC 0x801D
static const char ep0_string[] = "ep0in";
static DEFINE_SPINLOCK(udc_stall_spinlock); /* stall spin lock */
dev = ep->dev;
if (req->dma_mapped) {
if (ep->in)
- pci_unmap_single(dev->pdev, req->req.dma,
- req->req.length, PCI_DMA_TODEVICE);
+ dma_unmap_single(&dev->pdev->dev, req->req.dma,
+ req->req.length, DMA_TO_DEVICE);
else
- pci_unmap_single(dev->pdev, req->req.dma,
- req->req.length, PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&dev->pdev->dev, req->req.dma,
+ req->req.length, DMA_FROM_DEVICE);
req->dma_mapped = 0;
req->req.dma = DMA_ADDR_INVALID;
}
pch_udc_clear_dma(ep->dev, DMA_DIR_RX);
td_data = req->td_data;
- ep->td_data = req->td_data;
/* Set the status bits for all descriptors */
while (1) {
td_data->status = (td_data->status & ~PCH_UDC_BUFF_STS) |
if (usbreq->length &&
((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) {
if (ep->in)
- usbreq->dma = pci_map_single(dev->pdev, usbreq->buf,
- usbreq->length, PCI_DMA_TODEVICE);
+ usbreq->dma = dma_map_single(&dev->pdev->dev,
+ usbreq->buf,
+ usbreq->length,
+ DMA_TO_DEVICE);
else
- usbreq->dma = pci_map_single(dev->pdev, usbreq->buf,
- usbreq->length, PCI_DMA_FROMDEVICE);
+ usbreq->dma = dma_map_single(&dev->pdev->dev,
+ usbreq->buf,
+ usbreq->length,
+ DMA_FROM_DEVICE);
req->dma_mapped = 1;
}
if (usbreq->length > 0) {
- retval = prepare_dma(ep, req, gfp);
+ retval = prepare_dma(ep, req, GFP_ATOMIC);
if (retval)
goto probe_end;
}
pch_udc_wait_ep_stall(ep);
pch_udc_ep_clear_nak(ep);
pch_udc_enable_ep_interrupts(ep->dev, (1 << ep->num));
- pch_udc_set_dma(dev, DMA_DIR_TX);
}
}
/* Now add this request to the ep's pending requests */
PCH_UDC_BS_DMA_DONE)
return;
pch_udc_clear_dma(ep->dev, DMA_DIR_RX);
+ pch_udc_ep_set_ddptr(ep, 0);
if ((req->td_data_last->status & PCH_UDC_RXTX_STS) !=
PCH_UDC_RTS_SUCC) {
dev_err(&dev->pdev->dev, "Invalid RXTX status (0x%08x) "
u32 epsts;
struct pch_udc_ep *ep;
- ep = &dev->ep[2*ep_num];
+ ep = &dev->ep[UDC_EPIN_IDX(ep_num)];
epsts = ep->epsts;
ep->epsts = 0;
struct pch_udc_ep *ep;
struct pch_udc_request *req = NULL;
- ep = &dev->ep[2*ep_num + 1];
+ ep = &dev->ep[UDC_EPOUT_IDX(ep_num)];
epsts = ep->epsts;
ep->epsts = 0;
}
if (epsts & UDC_EPSTS_HE)
return;
- if (epsts & UDC_EPSTS_RSS)
+ if (epsts & UDC_EPSTS_RSS) {
pch_udc_ep_set_stall(ep);
pch_udc_enable_ep_interrupts(ep->dev,
PCH_UDC_EPINT(ep->in, ep->num));
+ }
if (epsts & UDC_EPSTS_RCS) {
if (!dev->prot_stall) {
pch_udc_ep_clear_stall(ep);
{
u32 epsts;
struct pch_udc_ep *ep;
+ struct pch_udc_ep *ep_out;
ep = &dev->ep[UDC_EP0IN_IDX];
+ ep_out = &dev->ep[UDC_EP0OUT_IDX];
epsts = ep->epsts;
ep->epsts = 0;
return;
if (epsts & UDC_EPSTS_HE)
return;
- if ((epsts & UDC_EPSTS_TDC) && (!dev->stall))
+ if ((epsts & UDC_EPSTS_TDC) && (!dev->stall)) {
pch_udc_complete_transfer(ep);
+ pch_udc_clear_dma(dev, DMA_DIR_RX);
+ ep_out->td_data->status = (ep_out->td_data->status &
+ ~PCH_UDC_BUFF_STS) |
+ PCH_UDC_BS_HST_RDY;
+ pch_udc_ep_clear_nak(ep_out);
+ pch_udc_set_dma(dev, DMA_DIR_RX);
+ pch_udc_ep_set_rrdy(ep_out);
+ }
/* On IN interrupt, provide data if we have any */
if ((epsts & UDC_EPSTS_IN) && !(epsts & UDC_EPSTS_TDC) &&
!(epsts & UDC_EPSTS_TXEMPTY))
dev->stall = 0;
dev->ep[UDC_EP0IN_IDX].halted = 0;
dev->ep[UDC_EP0OUT_IDX].halted = 0;
- /* In data not ready */
- pch_udc_ep_set_nak(&(dev->ep[UDC_EP0IN_IDX]));
dev->setup_data = ep->td_stp->request;
pch_udc_init_setup_buff(ep->td_stp);
- pch_udc_clear_dma(dev, DMA_DIR_TX);
+ pch_udc_clear_dma(dev, DMA_DIR_RX);
pch_udc_ep_fifo_flush(&(dev->ep[UDC_EP0IN_IDX]),
dev->ep[UDC_EP0IN_IDX].in);
if ((dev->setup_data.bRequestType & USB_DIR_IN))
setup_supported = dev->driver->setup(&dev->gadget,
&dev->setup_data);
spin_lock(&dev->lock);
+
+ if (dev->setup_data.bRequestType & USB_DIR_IN) {
+ ep->td_data->status = (ep->td_data->status &
+ ~PCH_UDC_BUFF_STS) |
+ PCH_UDC_BS_HST_RDY;
+ pch_udc_ep_set_ddptr(ep, ep->td_data_phys);
+ }
/* ep0 in returns data on IN phase */
if (setup_supported >= 0 && setup_supported <
UDC_EP0IN_MAX_PKT_SIZE) {
pch_udc_ep_clear_nak(&(dev->ep[UDC_EP0IN_IDX]));
/* Gadget would have queued a request when
* we called the setup */
- pch_udc_set_dma(dev, DMA_DIR_RX);
- pch_udc_ep_clear_nak(ep);
+ if (!(dev->setup_data.bRequestType & USB_DIR_IN)) {
+ pch_udc_set_dma(dev, DMA_DIR_RX);
+ pch_udc_ep_clear_nak(ep);
+ }
} else if (setup_supported < 0) {
/* if unsupported request, then stall */
pch_udc_ep_set_stall(&(dev->ep[UDC_EP0IN_IDX]));
}
} else if ((((stat & UDC_EPSTS_OUT_MASK) >> UDC_EPSTS_OUT_SHIFT) ==
UDC_EPSTS_OUT_DATA) && !dev->stall) {
- if (list_empty(&ep->queue)) {
- dev_err(&dev->pdev->dev, "%s: No request\n", __func__);
- ep->td_data->status = (ep->td_data->status &
- ~PCH_UDC_BUFF_STS) |
- PCH_UDC_BS_HST_RDY;
- pch_udc_set_dma(dev, DMA_DIR_RX);
- } else {
- /* control write */
- /* next function will pickuo an clear the status */
+ pch_udc_clear_dma(dev, DMA_DIR_RX);
+ pch_udc_ep_set_ddptr(ep, 0);
+ if (!list_empty(&ep->queue)) {
ep->epsts = stat;
-
- pch_udc_svc_data_out(dev, 0);
- /* re-program desc. pointer for possible ZLPs */
- pch_udc_ep_set_ddptr(ep, ep->td_data_phys);
- pch_udc_set_dma(dev, DMA_DIR_RX);
+ pch_udc_svc_data_out(dev, PCH_UDC_EP0);
}
+ pch_udc_set_dma(dev, DMA_DIR_RX);
}
pch_udc_ep_set_rrdy(ep);
}
struct pch_udc_ep *ep;
struct pch_udc_request *req;
- ep = &dev->ep[2*ep_num];
+ ep = &dev->ep[UDC_EPIN_IDX(ep_num)];
if (!list_empty(&ep->queue)) {
req = list_entry(ep->queue.next, struct pch_udc_request, queue);
pch_udc_enable_ep_interrupts(ep->dev,
for (i = 0; i < PCH_UDC_USED_EP_NUM; i++) {
/* IN */
if (ep_intr & (0x1 << i)) {
- ep = &dev->ep[2*i];
+ ep = &dev->ep[UDC_EPIN_IDX(i)];
ep->epsts = pch_udc_read_ep_status(ep);
pch_udc_clear_ep_status(ep, ep->epsts);
}
/* OUT */
if (ep_intr & (0x10000 << i)) {
- ep = &dev->ep[2*i+1];
+ ep = &dev->ep[UDC_EPOUT_IDX(i)];
ep->epsts = pch_udc_read_ep_status(ep);
pch_udc_clear_ep_status(ep, ep->epsts);
}
dev->ep[UDC_EP0IN_IDX].ep.maxpacket = UDC_EP0IN_MAX_PKT_SIZE;
dev->ep[UDC_EP0OUT_IDX].ep.maxpacket = UDC_EP0OUT_MAX_PKT_SIZE;
- dev->dma_addr = pci_map_single(dev->pdev, dev->ep0out_buf, 256,
- PCI_DMA_FROMDEVICE);
-
/* remove ep0 in and out from the list. They have own pointer */
list_del_init(&dev->ep[UDC_EP0IN_IDX].ep.ep_list);
list_del_init(&dev->ep[UDC_EP0OUT_IDX].ep.ep_list);
dev->ep[UDC_EP0IN_IDX].td_stp_phys = 0;
dev->ep[UDC_EP0IN_IDX].td_data = NULL;
dev->ep[UDC_EP0IN_IDX].td_data_phys = 0;
+
+ dev->ep0out_buf = kzalloc(UDC_EP0OUT_BUFF_SIZE * 4, GFP_KERNEL);
+ if (!dev->ep0out_buf)
+ return -ENOMEM;
+ dev->dma_addr = dma_map_single(&dev->pdev->dev, dev->ep0out_buf,
+ UDC_EP0OUT_BUFF_SIZE * 4,
+ DMA_FROM_DEVICE);
return 0;
}
pch_udc_disable_interrupts(dev, UDC_DEVINT_MSK);
- /* Assues that there are no pending requets with this driver */
+	/* Ensures that there are no pending requests with this driver */
+ driver->disconnect(&dev->gadget);
driver->unbind(&dev->gadget);
dev->gadget.dev.driver = NULL;
dev->driver = NULL;
pci_pool_destroy(dev->stp_requests);
}
+ if (dev->dma_addr)
+ dma_unmap_single(&dev->pdev->dev, dev->dma_addr,
+ UDC_EP0OUT_BUFF_SIZE * 4, DMA_FROM_DEVICE);
+ kfree(dev->ep0out_buf);
+
pch_udc_exit(dev);
if (dev->irq_registered)
int ret;
pci_set_power_state(pdev, PCI_D0);
- ret = pci_restore_state(pdev);
- if (ret) {
- dev_err(&pdev->dev, "%s: pci_restore_state failed\n", __func__);
- return ret;
- }
+ pci_restore_state(pdev);
ret = pci_enable_device(pdev);
if (ret) {
dev_err(&pdev->dev, "%s: pci_enable_device failed\n", __func__);
.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
.class_mask = 0xffffffff,
},
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_ROHM, PCI_DEVICE_ID_ML7213_IOH_UDC),
+ .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+ .class_mask = 0xffffffff,
+ },
{ 0 },
};
* parameters are in UTF-8 (superset of ASCII's 7 bit characters).
*/
-static ushort __initdata idVendor;
+static ushort idVendor;
module_param(idVendor, ushort, S_IRUGO);
MODULE_PARM_DESC(idVendor, "USB Vendor ID");
-static ushort __initdata idProduct;
+static ushort idProduct;
module_param(idProduct, ushort, S_IRUGO);
MODULE_PARM_DESC(idProduct, "USB Product ID");
-static ushort __initdata bcdDevice;
+static ushort bcdDevice;
module_param(bcdDevice, ushort, S_IRUGO);
MODULE_PARM_DESC(bcdDevice, "USB Device version (BCD)");
-static char *__initdata iManufacturer;
+static char *iManufacturer;
module_param(iManufacturer, charp, S_IRUGO);
MODULE_PARM_DESC(iManufacturer, "USB Manufacturer string");
-static char *__initdata iProduct;
+static char *iProduct;
module_param(iProduct, charp, S_IRUGO);
MODULE_PARM_DESC(iProduct, "USB Product string");
-static char *__initdata iSerialNum;
+static char *iSerialNum;
module_param(iSerialNum, charp, S_IRUGO);
MODULE_PARM_DESC(iSerialNum, "1");
-static char *__initdata iPNPstring;
+static char *iPNPstring;
module_param(iPNPstring, charp, S_IRUGO);
MODULE_PARM_DESC(iPNPstring, "MFG:linux;MDL:g_printer;CLS:PRINTER;SN:1;");
int status;
mutex_lock(&usb_printer_gadget.lock_printer_io);
- class_destroy(usb_gadget_class);
- unregister_chrdev_region(g_printer_devno, 2);
-
status = usb_gadget_unregister_driver(&printer_driver);
if (status)
ERROR(dev, "usb_gadget_unregister_driver %x\n", status);
+ unregister_chrdev_region(g_printer_devno, 2);
+ class_destroy(usb_gadget_class);
mutex_unlock(&usb_printer_gadget.lock_printer_io);
}
module_exit(cleanup);
break;
case R8A66597_BULK:
/* isochronous pipes may be used as bulk pipes */
- if (info->pipe > R8A66597_BASE_PIPENUM_BULK)
+ if (info->pipe >= R8A66597_BASE_PIPENUM_BULK)
bufnum = info->pipe - R8A66597_BASE_PIPENUM_BULK;
else
bufnum = info->pipe - R8A66597_BASE_PIPENUM_ISOC;
Qualcomm chipsets. Root Hub has inbuilt TT.
This driver depends on OTG driver for PHY initialization,
clock management, powering up VBUS, and power management.
+	  This driver is not supported on boards like trout, which
+	  have an external PHY.
config USB_EHCI_HCD_PPC_OF
bool "EHCI support for PPC USB controller on OF platform bus"
* mark HW unaccessible. The PM and USB cores make sure that
* the root hub is either suspended or stopped.
*/
- spin_lock_irqsave(&ehci->lock, flags);
ehci_prepare_ports_for_controller_suspend(ehci, device_may_wakeup(dev));
+ spin_lock_irqsave(&ehci->lock, flags);
ehci_writel(ehci, 0, &ehci->regs->intr_enable);
(void)ehci_readl(ehci, &ehci->regs->intr_enable);
struct resource *res;
int irq;
int retval;
- unsigned int temp;
pr_debug("initializing FSL-SOC USB Controller\n");
goto err3;
}
- /*
- * Check if it is MPC5121 SoC, otherwise set pdata->have_sysif_regs
- * flag for 83xx or 8536 system interface registers.
- */
- if (pdata->big_endian_mmio)
- temp = in_be32(hcd->regs + FSL_SOC_USB_ID);
- else
- temp = in_le32(hcd->regs + FSL_SOC_USB_ID);
-
- if ((temp & ID_MSK) != (~((temp & NID_MSK) >> 8) & ID_MSK))
- pdata->have_sysif_regs = 1;
-
/* Enable USB controller, 83xx or 8536 */
if (pdata->have_sysif_regs)
setbits32(hcd->regs + FSL_SOC_USB_CTRL, 0x4);
#define _EHCI_FSL_H
/* offsets for the non-ehci registers in the FSL SOC USB controller */
-#define FSL_SOC_USB_ID 0x0
-#define ID_MSK 0x3f
-#define NID_MSK 0x3f00
#define FSL_SOC_USB_ULPIVP 0x170
#define FSL_SOC_USB_PORTSC1 0x184
#define PORT_PTS_MSK (3<<30)
ehci->iaa_watchdog.function = ehci_iaa_watchdog;
ehci->iaa_watchdog.data = (unsigned long) ehci;
+ hcc_params = ehci_readl(ehci, &ehci->caps->hcc_params);
+
/*
* hw default: 1K periodic list heads, one per frame.
* periodic_size can shrink by USBCMD update if hcc_params allows.
ehci->periodic_size = DEFAULT_I_TDPS;
INIT_LIST_HEAD(&ehci->cached_itd_list);
INIT_LIST_HEAD(&ehci->cached_sitd_list);
+
+ if (HCC_PGM_FRAMELISTLEN(hcc_params)) {
+ /* periodic schedule size can be smaller than default */
+ switch (EHCI_TUNE_FLS) {
+ case 0: ehci->periodic_size = 1024; break;
+ case 1: ehci->periodic_size = 512; break;
+ case 2: ehci->periodic_size = 256; break;
+ default: BUG();
+ }
+ }
if ((retval = ehci_mem_init(ehci, GFP_KERNEL)) < 0)
return retval;
/* controllers may cache some of the periodic schedule ... */
- hcc_params = ehci_readl(ehci, &ehci->caps->hcc_params);
if (HCC_ISOC_CACHE(hcc_params)) // full frame cache
ehci->i_thresh = 2 + 8;
else // N microframes cached
/* periodic schedule size can be smaller than default */
temp &= ~(3 << 2);
temp |= (EHCI_TUNE_FLS << 2);
- switch (EHCI_TUNE_FLS) {
- case 0: ehci->periodic_size = 1024; break;
- case 1: ehci->periodic_size = 512; break;
- case 2: ehci->periodic_size = 256; break;
- default: BUG();
- }
}
if (HCC_LPM(hcc_params)) {
/* support link power management EHCI 1.1 addendum */
{
int port;
u32 temp;
+ unsigned long flags;
/* If remote wakeup is enabled for the root hub but disabled
* for the controller, we must adjust all the port wakeup flags
if (!ehci_to_hcd(ehci)->self.root_hub->do_remote_wakeup || do_wakeup)
return;
+ spin_lock_irqsave(&ehci->lock, flags);
+
/* clear phy low-power mode before changing wakeup flags */
if (ehci->has_hostpc) {
port = HCS_N_PORTS(ehci->hcs_params);
temp = ehci_readl(ehci, hostpc_reg);
ehci_writel(ehci, temp & ~HOSTPC_PHCD, hostpc_reg);
}
+ spin_unlock_irqrestore(&ehci->lock, flags);
msleep(5);
+ spin_lock_irqsave(&ehci->lock, flags);
}
port = HCS_N_PORTS(ehci->hcs_params);
/* Does the root hub have a port wakeup pending? */
if (!suspending && (ehci_readl(ehci, &ehci->regs->status) & STS_PCD))
usb_hcd_resume_root_hub(ehci_to_hcd(ehci));
+
+ spin_unlock_irqrestore(&ehci->lock, flags);
}
static int ehci_bus_suspend (struct usb_hcd *hcd)
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/usb/otg.h>
+#include <linux/usb/ulpi.h>
#include <linux/slab.h>
#include <mach/mxc_ehci.h>
+#include <asm/mach-types.h>
+
#define ULPI_VIEWPORT_OFFSET 0x170
struct ehci_mxc_priv {
struct usb_hcd *hcd;
struct resource *res;
int irq, ret;
+ unsigned int flags;
struct ehci_mxc_priv *priv;
struct device *dev = &pdev->dev;
struct ehci_hcd *ehci;
clk_enable(priv->ahbclk);
}
- /* "dr" device has its own clock */
- if (pdev->id == 0) {
+ /* "dr" device has its own clock on i.MX51 */
+ if (cpu_is_mx51() && (pdev->id == 0)) {
priv->phy1clk = clk_get(dev, "usb_phy1");
if (IS_ERR(priv->phy1clk)) {
ret = PTR_ERR(priv->phy1clk);
if (ret)
goto err_add;
+ if (pdata->otg) {
+ /*
+		 * efikamx and efikasb have a hardware bug that prevents USB
+		 * from working unless CHRGVBUS is set.
+		 * This is in violation of the USB specs.
+ */
+ if (machine_is_mx51_efikamx() || machine_is_mx51_efikasb()) {
+ flags = otg_io_read(pdata->otg, ULPI_OTG_CTRL);
+ flags |= ULPI_OTG_CTRL_CHRGVBUS;
+ ret = otg_io_write(pdata->otg, flags, ULPI_OTG_CTRL);
+ if (ret) {
+				dev_err(dev, "unable to set CHRGVBUS\n");
+ goto err_add;
+ }
+ }
+ }
+
return 0;
err_add:
hcd = usb_create_hcd(&ehci_omap_hc_driver, &pdev->dev,
dev_name(&pdev->dev));
if (!hcd) {
- dev_dbg(&pdev->dev, "failed to create hcd with err %d\n", ret);
+ dev_err(&pdev->dev, "failed to create hcd with err %d\n", ret);
ret = -ENOMEM;
goto err_create_hcd;
}
ret = omap_start_ehc(omap, hcd);
if (ret) {
- dev_dbg(&pdev->dev, "failed to start ehci\n");
+ dev_err(&pdev->dev, "failed to start ehci with err %d\n", ret);
goto err_start;
}
ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED);
if (ret) {
- dev_dbg(&pdev->dev, "failed to add hcd with err %d\n", ret);
+ dev_err(&pdev->dev, "failed to add hcd with err %d\n", ret);
goto err_add_hcd;
}
return 0;
}
-static int ehci_quirk_amd_SB800(struct ehci_hcd *ehci)
+static int ehci_quirk_amd_hudson(struct ehci_hcd *ehci)
{
struct pci_dev *amd_smbus_dev;
u8 rev = 0;
amd_smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL);
- if (!amd_smbus_dev)
- return 0;
-
- pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev);
- if (rev < 0x40) {
- pci_dev_put(amd_smbus_dev);
- amd_smbus_dev = NULL;
- return 0;
+ if (amd_smbus_dev) {
+ pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev);
+ if (rev < 0x40) {
+ pci_dev_put(amd_smbus_dev);
+ amd_smbus_dev = NULL;
+ return 0;
+ }
+ } else {
+ amd_smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x780b, NULL);
+ if (!amd_smbus_dev)
+ return 0;
+ pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev);
+ if (rev < 0x11 || rev > 0x18) {
+ pci_dev_put(amd_smbus_dev);
+ amd_smbus_dev = NULL;
+ return 0;
+ }
}
if (!amd_nb_dev)
amd_nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL);
- if (!amd_nb_dev)
- ehci_err(ehci, "QUIRK: unable to get AMD NB device\n");
- ehci_info(ehci, "QUIRK: Enable AMD SB800 L1 fix\n");
+ ehci_info(ehci, "QUIRK: Enable exception for AMD Hudson ASPM\n");
pci_dev_put(amd_smbus_dev);
amd_smbus_dev = NULL;
/* cache this readonly data; minimize chip reads */
ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params);
- if (ehci_quirk_amd_SB800(ehci))
+ if (ehci_quirk_amd_hudson(ehci))
ehci->amd_l1_fix = 1;
retval = ehci_halt(ehci);
* mark HW unaccessible. The PM and USB cores make sure that
* the root hub is either suspended or stopped.
*/
- spin_lock_irqsave (&ehci->lock, flags);
ehci_prepare_ports_for_controller_suspend(ehci, do_wakeup);
+ spin_lock_irqsave (&ehci->lock, flags);
ehci_writel(ehci, 0, &ehci->regs->intr_enable);
(void)ehci_readl(ehci, &ehci->regs->intr_enable);
}
}
-struct fsl_usb2_platform_data fsl_usb2_mpc5121_pd = {
+static struct fsl_usb2_platform_data fsl_usb2_mpc5121_pd = {
.big_endian_desc = 1,
.big_endian_mmio = 1,
.es = 1,
+ .have_sysif_regs = 0,
.le_setup_buf = 1,
.init = fsl_usb2_mpc5121_init,
.exit = fsl_usb2_mpc5121_exit,
};
#endif /* CONFIG_PPC_MPC512x */
+static struct fsl_usb2_platform_data fsl_usb2_mpc8xxx_pd = {
+ .have_sysif_regs = 1,
+};
+
static const struct of_device_id fsl_usb2_mph_dr_of_match[] = {
- { .compatible = "fsl-usb2-mph", },
- { .compatible = "fsl-usb2-dr", },
+ { .compatible = "fsl-usb2-mph", .data = &fsl_usb2_mpc8xxx_pd, },
+ { .compatible = "fsl-usb2-dr", .data = &fsl_usb2_mpc8xxx_pd, },
#ifdef CONFIG_PPC_MPC512x
{ .compatible = "fsl,mpc5121-usb2-dr", .data = &fsl_usb2_mpc5121_pd, },
#endif
DBG("dev %d ep%d maxpacket %d\n",
udev->devnum, epnum, ep->maxpacket);
retval = -EINVAL;
+ kfree(ep);
goto fail;
}
/* Ring the host controller doorbell after placing a command on the ring */
void xhci_ring_cmd_db(struct xhci_hcd *xhci)
{
- u32 temp;
-
xhci_dbg(xhci, "// Ding dong!\n");
- temp = xhci_readl(xhci, &xhci->dba->doorbell[0]) & DB_MASK;
- xhci_writel(xhci, temp | DB_TARGET_HOST, &xhci->dba->doorbell[0]);
+ xhci_writel(xhci, DB_VALUE_HOST, &xhci->dba->doorbell[0]);
/* Flush PCI posted writes */
xhci_readl(xhci, &xhci->dba->doorbell[0]);
}
unsigned int ep_index,
unsigned int stream_id)
{
- struct xhci_virt_ep *ep;
- unsigned int ep_state;
- u32 field;
__u32 __iomem *db_addr = &xhci->dba->doorbell[slot_id];
+ struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+ unsigned int ep_state = ep->ep_state;
- ep = &xhci->devs[slot_id]->eps[ep_index];
- ep_state = ep->ep_state;
/* Don't ring the doorbell for this endpoint if there are pending
- * cancellations because the we don't want to interrupt processing.
+ * cancellations because we don't want to interrupt processing.
* We don't want to restart any stream rings if there's a set dequeue
* pointer command pending because the device can choose to start any
* stream once the endpoint is on the HW schedule.
* FIXME - check all the stream rings for pending cancellations.
*/
- if (!(ep_state & EP_HALT_PENDING) && !(ep_state & SET_DEQ_PENDING)
- && !(ep_state & EP_HALTED)) {
- field = xhci_readl(xhci, db_addr) & DB_MASK;
- field |= EPI_TO_DB(ep_index) | STREAM_ID_TO_DB(stream_id);
- xhci_writel(xhci, field, db_addr);
- }
+ if ((ep_state & EP_HALT_PENDING) || (ep_state & SET_DEQ_PENDING) ||
+ (ep_state & EP_HALTED))
+ return;
+ xhci_writel(xhci, DB_VALUE(ep_index, stream_id), db_addr);
+ /* The CPU has better things to do at this point than wait for a
+ * write-posting flush. It'll get there soon enough.
+ */
}
/* Ring the doorbell for any rings with pending URBs */
addr = &xhci->op_regs->port_status_base + NUM_PORT_REGS * (port_id - 1);
temp = xhci_readl(xhci, addr);
- if ((temp & PORT_CONNECT) && (hcd->state == HC_STATE_SUSPENDED)) {
+ if (hcd->state == HC_STATE_SUSPENDED) {
xhci_dbg(xhci, "resume root hub\n");
usb_hcd_resume_root_hub(hcd);
}
/* Others already handled above */
break;
}
- dev_dbg(&td->urb->dev->dev,
- "ep %#x - asked for %d bytes, "
+ xhci_dbg(xhci, "ep %#x - asked for %d bytes, "
"%d bytes untransferred\n",
td->urb->ep->desc.bEndpointAddress,
td->urb->transfer_buffer_length,
}
xhci_dbg(xhci, "\n");
if (!in_interrupt())
- dev_dbg(&urb->dev->dev, "ep %#x - urb len = %d, sglist used, num_trbs = %d\n",
+ xhci_dbg(xhci, "ep %#x - urb len = %d, sglist used, "
+ "num_trbs = %d\n",
urb->ep->desc.bEndpointAddress,
urb->transfer_buffer_length,
num_trbs);
static void giveback_first_trb(struct xhci_hcd *xhci, int slot_id,
unsigned int ep_index, unsigned int stream_id, int start_cycle,
- struct xhci_generic_trb *start_trb, struct xhci_td *td)
+ struct xhci_generic_trb *start_trb)
{
/*
* Pass all the TRBs to the hardware at once and make sure this write
* isn't reordered.
*/
wmb();
- start_trb->field[3] |= start_cycle;
+ if (start_cycle)
+ start_trb->field[3] |= start_cycle;
+ else
+ start_trb->field[3] &= ~0x1;
xhci_ring_ep_doorbell(xhci, slot_id, ep_index, stream_id);
}
* to set the polling interval (once the API is added).
*/
if (xhci_interval != ep_interval) {
- if (!printk_ratelimit())
+ if (printk_ratelimit())
dev_dbg(&urb->dev->dev, "Driver uses different interval"
" (%d microframe%s) than xHCI "
"(%d microframe%s)\n",
u32 remainder = 0;
/* Don't change the cycle bit of the first TRB until later */
- if (first_trb)
+ if (first_trb) {
first_trb = false;
- else
+ if (start_cycle == 0)
+ field |= 0x1;
+ } else
field |= ep_ring->cycle_state;
/* Chain all the TRBs together; clear the chain bit in the last
check_trb_math(urb, num_trbs, running_total);
giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
- start_cycle, start_trb, td);
+ start_cycle, start_trb);
return 0;
}
/* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */
if (!in_interrupt())
- dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d), addr = %#llx, num_trbs = %d\n",
+ xhci_dbg(xhci, "ep %#x - urb len = %#x (%d), "
+ "addr = %#llx, num_trbs = %d\n",
urb->ep->desc.bEndpointAddress,
urb->transfer_buffer_length,
urb->transfer_buffer_length,
field = 0;
/* Don't change the cycle bit of the first TRB until later */
- if (first_trb)
+ if (first_trb) {
first_trb = false;
- else
+ if (start_cycle == 0)
+ field |= 0x1;
+ } else
field |= ep_ring->cycle_state;
/* Chain all the TRBs together; clear the chain bit in the last
check_trb_math(urb, num_trbs, running_total);
giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
- start_cycle, start_trb, td);
+ start_cycle, start_trb);
return 0;
}
/* Queue setup TRB - see section 6.4.1.2.1 */
/* FIXME better way to translate setup_packet into two u32 fields? */
setup = (struct usb_ctrlrequest *) urb->setup_packet;
+ field = 0;
+ field |= TRB_IDT | TRB_TYPE(TRB_SETUP);
+ if (start_cycle == 0)
+ field |= 0x1;
queue_trb(xhci, ep_ring, false, true,
/* FIXME endianness is probably going to bite my ass here. */
setup->bRequestType | setup->bRequest << 8 | setup->wValue << 16,
setup->wIndex | setup->wLength << 16,
TRB_LEN(8) | TRB_INTR_TARGET(0),
/* Immediate data in pointer */
- TRB_IDT | TRB_TYPE(TRB_SETUP));
+ field);
/* If there's data, queue data TRBs */
field = 0;
field | TRB_IOC | TRB_TYPE(TRB_STATUS) | ep_ring->cycle_state);
giveback_first_trb(xhci, slot_id, ep_index, 0,
- start_cycle, start_trb, td);
+ start_cycle, start_trb);
return 0;
}
int running_total, trb_buff_len, td_len, td_remain_len, ret;
u64 start_addr, addr;
int i, j;
+ bool more_trbs_coming;
ep_ring = xhci->devs[slot_id]->eps[ep_index].ring;
}
if (!in_interrupt())
- dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d),"
+ xhci_dbg(xhci, "ep %#x - urb len = %#x (%d),"
" addr = %#llx, num_tds = %d\n",
urb->ep->desc.bEndpointAddress,
urb->transfer_buffer_length,
field |= TRB_TYPE(TRB_ISOC);
/* Assume URB_ISO_ASAP is set */
field |= TRB_SIA;
- if (i > 0)
+ if (i == 0) {
+ if (start_cycle == 0)
+ field |= 0x1;
+ } else
field |= ep_ring->cycle_state;
first_trb = false;
} else {
*/
if (j < trbs_per_td - 1) {
field |= TRB_CHAIN;
+ more_trbs_coming = true;
} else {
td->last_trb = ep_ring->enqueue;
field |= TRB_IOC;
+ more_trbs_coming = false;
}
/* Calculate TRB length */
length_field = TRB_LEN(trb_buff_len) |
remainder |
TRB_INTR_TARGET(0);
- queue_trb(xhci, ep_ring, false, false,
+ queue_trb(xhci, ep_ring, false, more_trbs_coming,
lower_32_bits(addr),
upper_32_bits(addr),
length_field,
}
}
- wmb();
- start_trb->field[3] |= start_cycle;
-
- xhci_ring_ep_doorbell(xhci, slot_id, ep_index, urb->stream_id);
+ giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+ start_cycle, start_trb);
return 0;
}
* to set the polling interval (once the API is added).
*/
if (xhci_interval != ep_interval) {
- if (!printk_ratelimit())
+ if (printk_ratelimit())
dev_dbg(&urb->dev->dev, "Driver uses different interval"
" (%d microframe%s) than xHCI "
"(%d microframe%s)\n",
static int xhci_setup_msix(struct xhci_hcd *xhci)
{
int i, ret = 0;
- struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+ struct usb_hcd *hcd = xhci_to_hcd(xhci);
+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
/*
* calculate number of msi-x vectors supported.
goto disable_msix;
}
+ hcd->msix_enabled = 1;
return ret;
disable_msix:
/* Free any IRQs and disable MSI-X */
static void xhci_cleanup_msix(struct xhci_hcd *xhci)
{
- struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+ struct usb_hcd *hcd = xhci_to_hcd(xhci);
+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
xhci_free_irq(xhci);
pci_disable_msi(pdev);
}
+ hcd->msix_enabled = 0;
return;
}
spin_lock_irq(&xhci->lock);
xhci_halt(xhci);
xhci_reset(xhci);
- xhci_cleanup_msix(xhci);
spin_unlock_irq(&xhci->lock);
+ xhci_cleanup_msix(xhci);
+
#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
/* Tell the event ring poll function not to reschedule */
xhci->zombie = 1;
spin_lock_irq(&xhci->lock);
xhci_halt(xhci);
- xhci_cleanup_msix(xhci);
spin_unlock_irq(&xhci->lock);
+ xhci_cleanup_msix(xhci);
+
xhci_dbg(xhci, "xhci_shutdown completed - status = %x\n",
xhci_readl(xhci, &xhci->op_regs->status));
}
int rc = 0;
struct usb_hcd *hcd = xhci_to_hcd(xhci);
u32 command;
+ int i;
spin_lock_irq(&xhci->lock);
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
spin_unlock_irq(&xhci->lock);
return -ETIMEDOUT;
}
- /* step 5: remove core well power */
- xhci_cleanup_msix(xhci);
spin_unlock_irq(&xhci->lock);
+ /* step 5: remove core well power */
+ /* synchronize irq when using MSI-X */
+ if (xhci->msix_entries) {
+ for (i = 0; i < xhci->msix_count; i++)
+ synchronize_irq(xhci->msix_entries[i].vector);
+ }
+
return rc;
}
{
u32 command, temp = 0;
struct usb_hcd *hcd = xhci_to_hcd(xhci);
- struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
int old_state, retval;
old_state = hcd->state;
xhci_dbg(xhci, "Stop HCD\n");
xhci_halt(xhci);
xhci_reset(xhci);
- if (hibernated)
- xhci_cleanup_msix(xhci);
spin_unlock_irq(&xhci->lock);
+ xhci_cleanup_msix(xhci);
#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
/* Tell the event ring poll function not to reschedule */
return retval;
}
- spin_unlock_irq(&xhci->lock);
- /* Re-setup MSI-X */
- if (hcd->irq)
- free_irq(hcd->irq, hcd);
- hcd->irq = -1;
-
- retval = xhci_setup_msix(xhci);
- if (retval)
- /* fall back to msi*/
- retval = xhci_setup_msi(xhci);
-
- if (retval) {
- /* fall back to legacy interrupt*/
- retval = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
- hcd->irq_descr, hcd);
- if (retval) {
- xhci_err(xhci, "request interrupt %d failed\n",
- pdev->irq);
- return retval;
- }
- hcd->irq = pdev->irq;
- }
-
- spin_lock_irq(&xhci->lock);
/* step 4: set Run/Stop bit */
command = xhci_readl(xhci, &xhci->op_regs->command);
command |= CMD_RUN;
xhci_err(xhci, "Error while assigning device slot ID\n");
return 0;
}
- /* xhci_alloc_virt_device() does not touch rings; no need to lock */
- if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_KERNEL)) {
+ /* xhci_alloc_virt_device() does not touch rings; no need to lock.
+ * Use GFP_NOIO, since this function can be called from
+ * xhci_discover_or_reset_device(), which may be called as part of
+ * mass storage driver error handling.
+ */
+ if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) {
/* Disable slot, if we can do it without mem alloc */
xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
spin_lock_irqsave(&xhci->lock, flags);
/**
* struct doorbell_array
*
+ * Bits 0 - 7: Endpoint target
+ * Bits 8 - 15: RsvdZ
+ * Bits 16 - 31: Stream ID
+ *
* Section 5.6
*/
struct xhci_doorbell_array {
u32 doorbell[256];
};
-#define DB_TARGET_MASK 0xFFFFFF00
-#define DB_STREAM_ID_MASK 0x0000FFFF
-#define DB_TARGET_HOST 0x0
-#define DB_STREAM_ID_HOST 0x0
-#define DB_MASK (0xff << 8)
-
-/* Endpoint Target - bits 0:7 */
-#define EPI_TO_DB(p) (((p) + 1) & 0xff)
-#define STREAM_ID_TO_DB(p) (((p) & 0xffff) << 16)
-
+#define DB_VALUE(ep, stream) ((((ep) + 1) & 0xff) | ((stream) << 16))
+#define DB_VALUE_HOST 0x00000000
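A small userspace sketch of the doorbell layout documented above (bits 0-7 endpoint target, bits 8-15 reserved, bits 16-31 stream ID), mirroring what the DB_VALUE()/DB_VALUE_HOST macros compute; the helper name is made up for illustration:

#include <stdint.h>
#include <stdio.h>

/* Endpoint target is ep_index + 1 in bits 0-7, stream ID in bits 16-31. */
static uint32_t db_value(unsigned int ep_index, unsigned int stream_id)
{
	return ((ep_index + 1) & 0xff) | (stream_id << 16);
}

int main(void)
{
	printf("0x%08x\n", db_value(2, 5));	/* 0x00050003 */
	printf("0x%08x\n", 0x00000000u);	/* DB_VALUE_HOST rings the command ring doorbell */
	return 0;
}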
/**
* struct xhci_protocol_caps
static void change_color(struct usb_led *led)
{
- int retval;
+ int retval = 0;
unsigned char *buffer;
buffer = kmalloc(8, GFP_KERNEL);
{ USB_DEVICE(0x0557, 0x2001) },
{ USB_DEVICE(0x0729, 0x1284) },
{ USB_DEVICE(0x1293, 0x0002) },
- { USB_DEVICE(0x1293, 0x0002) },
{ USB_DEVICE(0x050d, 0x0002) },
{ } /* Terminating entry */
};
musb->xceiv->set_power = bfin_musb_set_power;
musb->isr = blackfin_interrupt;
+ musb->double_buffer_not_ok = true;
return 0;
}
static inline struct musb *dev_to_musb(struct device *dev)
{
-#ifdef CONFIG_USB_MUSB_HDRC_HCD
- /* usbcore insists dev->driver_data is a "struct hcd *" */
- return hcd_to_musb(dev_get_drvdata(dev));
-#else
return dev_get_drvdata(dev);
-#endif
}
/*-------------------------------------------------------------------------*/
musb = kzalloc(sizeof *musb, GFP_KERNEL);
if (!musb)
return NULL;
- dev_set_drvdata(dev, musb);
#endif
-
+ dev_set_drvdata(dev, musb);
musb->mregs = mbase;
musb->ctrl_base = mbase;
musb->nIrq = -ENODEV;
void __iomem *base;
iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!iomem || irq == 0)
+ if (!iomem || irq <= 0)
return -ENODEV;
base = ioremap(iomem->start, resource_size(iomem));
unsigned set_address:1;
unsigned test_mode:1;
unsigned softconnect:1;
+ /*
+ * FIXME: Remove this flag.
+ *
+	 * This is only added to allow Blackfin to work
+	 * with the current driver. For some unknown reason
+	 * Blackfin doesn't work with double buffering,
+	 * which is enabled by default.
+ *
+ * We added this flag to forcefully disable double
+ * buffering until we get it working.
+ */
+ unsigned double_buffer_not_ok:1 __deprecated;
u8 address;
u8 test_mode_nr;
dma_addr_t dma_addr,
u32 length);
int (*channel_abort)(struct dma_channel *);
+ int (*is_compatible)(struct dma_channel *channel,
+ u16 maxpacket,
+ void *buf, u32 length);
};
/* called after channel_program(), may indicate a fault */
/* ----------------------------------------------------------------------- */
+#define is_buffer_mapped(req) (is_dma_capable() && \
+ (req->map_state != UN_MAPPED))
+
/* Maps the buffer to dma */
static inline void map_dma_buffer(struct musb_request *request,
- struct musb *musb)
+ struct musb *musb, struct musb_ep *musb_ep)
{
+ int compatible = true;
+ struct dma_controller *dma = musb->dma_controller;
+
+ request->map_state = UN_MAPPED;
+
+ if (!is_dma_capable() || !musb_ep->dma)
+ return;
+
+ /* Check if DMA engine can handle this request.
+ * DMA code must reject the USB request explicitly.
+ * Default behaviour is to map the request.
+ */
+ if (dma->is_compatible)
+ compatible = dma->is_compatible(musb_ep->dma,
+ musb_ep->packet_sz, request->request.buf,
+ request->request.length);
+ if (!compatible)
+ return;
+
if (request->request.dma == DMA_ADDR_INVALID) {
request->request.dma = dma_map_single(
musb->controller,
request->tx
? DMA_TO_DEVICE
: DMA_FROM_DEVICE);
- request->mapped = 1;
+ request->map_state = MUSB_MAPPED;
} else {
dma_sync_single_for_device(musb->controller,
request->request.dma,
request->tx
? DMA_TO_DEVICE
: DMA_FROM_DEVICE);
- request->mapped = 0;
+ request->map_state = PRE_MAPPED;
}
}
static inline void unmap_dma_buffer(struct musb_request *request,
struct musb *musb)
{
+ if (!is_buffer_mapped(request))
+ return;
+
if (request->request.dma == DMA_ADDR_INVALID) {
DBG(20, "not unmapping a never mapped buffer\n");
return;
}
- if (request->mapped) {
+ if (request->map_state == MUSB_MAPPED) {
dma_unmap_single(musb->controller,
request->request.dma,
request->request.length,
? DMA_TO_DEVICE
: DMA_FROM_DEVICE);
request->request.dma = DMA_ADDR_INVALID;
- request->mapped = 0;
- } else {
+ } else { /* PRE_MAPPED */
dma_sync_single_for_cpu(musb->controller,
request->request.dma,
request->request.length,
request->tx
? DMA_TO_DEVICE
: DMA_FROM_DEVICE);
-
}
+ request->map_state = UN_MAPPED;
}
/*
ep->busy = 1;
spin_unlock(&musb->lock);
- if (is_dma_capable() && ep->dma)
- unmap_dma_buffer(req, musb);
+ unmap_dma_buffer(req, musb);
if (request->status == 0)
DBG(5, "%s done request %p, %d/%d\n",
ep->end_point.name, request,
csr);
#ifndef CONFIG_MUSB_PIO_ONLY
- if (is_dma_capable() && musb_ep->dma) {
+ if (is_buffer_mapped(req)) {
struct dma_controller *c = musb->dma_controller;
size_t request_size;
* Unmap the dma buffer back to cpu if dma channel
* programming fails
*/
- if (is_dma_capable() && musb_ep->dma)
- unmap_dma_buffer(req, musb);
+ unmap_dma_buffer(req, musb);
musb_write_fifo(musb_ep->hw_ep, fifo_count,
(u8 *) (request->buf + request->actual));
return;
}
- if (is_cppi_enabled() && musb_ep->dma) {
+ if (is_cppi_enabled() && is_buffer_mapped(req)) {
struct dma_controller *c = musb->dma_controller;
struct dma_channel *channel = musb_ep->dma;
len = musb_readw(epio, MUSB_RXCOUNT);
if (request->actual < request->length) {
#ifdef CONFIG_USB_INVENTRA_DMA
- if (is_dma_capable() && musb_ep->dma) {
+ if (is_buffer_mapped(req)) {
struct dma_controller *c;
struct dma_channel *channel;
int use_dma = 0;
fifo_count = min_t(unsigned, len, fifo_count);
#ifdef CONFIG_USB_TUSB_OMAP_DMA
- if (tusb_dma_omap() && musb_ep->dma) {
+ if (tusb_dma_omap() && is_buffer_mapped(req)) {
struct dma_controller *c = musb->dma_controller;
struct dma_channel *channel = musb_ep->dma;
u32 dma_addr = request->dma + request->actual;
* programming fails. This buffer is mapped if the
* channel allocation is successful
*/
- if (is_dma_capable() && musb_ep->dma) {
+ if (is_buffer_mapped(req)) {
unmap_dma_buffer(req, musb);
/*
/* Set TXMAXP with the FIFO size of the endpoint
* to disable double buffering mode.
*/
- musb_writew(regs, MUSB_TXMAXP, musb_ep->packet_sz | (musb_ep->hb_mult << 11));
+ if (musb->double_buffer_not_ok)
+ musb_writew(regs, MUSB_TXMAXP, hw_ep->max_packet_sz_tx);
+ else
+ musb_writew(regs, MUSB_TXMAXP, musb_ep->packet_sz
+ | (musb_ep->hb_mult << 11));
csr = MUSB_TXCSR_MODE | MUSB_TXCSR_CLRDATATOG;
if (musb_readw(regs, MUSB_TXCSR)
/* Set RXMAXP with the FIFO size of the endpoint
* to disable double buffering mode.
*/
- musb_writew(regs, MUSB_RXMAXP, musb_ep->packet_sz | (musb_ep->hb_mult << 11));
+ if (musb->double_buffer_not_ok)
+ musb_writew(regs, MUSB_RXMAXP, hw_ep->max_packet_sz_tx);
+ else
+ musb_writew(regs, MUSB_RXMAXP, musb_ep->packet_sz
+ | (musb_ep->hb_mult << 11));
/* force shared fifo to OUT-only mode */
if (hw_ep->is_shared_fifo) {
request->epnum = musb_ep->current_epnum;
request->tx = musb_ep->is_in;
- if (is_dma_capable() && musb_ep->dma)
- map_dma_buffer(request, musb);
- else
- request->mapped = 0;
+ map_dma_buffer(request, musb, musb_ep);
spin_lock_irqsave(&musb->lock, lockflags);
#ifndef __MUSB_GADGET_H
#define __MUSB_GADGET_H
+enum buffer_map_state {
+ UN_MAPPED = 0,
+ PRE_MAPPED,
+ MUSB_MAPPED
+};
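As a rough, self-contained illustration of the three mapping states above (the request struct and helper names are invented for the sketch; the real logic lives in map_dma_buffer()/unmap_dma_buffer()):

#include <stdbool.h>
#include <stdio.h>

enum buffer_map_state { UN_MAPPED = 0, PRE_MAPPED, MUSB_MAPPED };

/* Hypothetical stand-in for struct musb_request, just for this sketch. */
struct sketch_request {
	bool dma_usable;	/* DMA-capable endpoint and the engine accepts the buffer */
	bool premapped;		/* the caller already supplied a DMA address */
	enum buffer_map_state map_state;
};

/* Queue time: PIO requests stay UN_MAPPED; otherwise remember whether the
 * buffer was mapped here (MUSB_MAPPED) or only needs syncing (PRE_MAPPED).
 */
static void sketch_map(struct sketch_request *req)
{
	if (!req->dma_usable) {
		req->map_state = UN_MAPPED;
		return;
	}
	req->map_state = req->premapped ? PRE_MAPPED : MUSB_MAPPED;
}

/* Completion time: MUSB_MAPPED means unmap, PRE_MAPPED means sync back to
 * the CPU; either way the request ends up UN_MAPPED again.
 */
static void sketch_unmap(struct sketch_request *req)
{
	req->map_state = UN_MAPPED;
}

int main(void)
{
	struct sketch_request req = { .dma_usable = true, .premapped = false };

	sketch_map(&req);
	printf("after map:   %d\n", req.map_state);	/* 2 == MUSB_MAPPED */
	sketch_unmap(&req);
	printf("after unmap: %d\n", req.map_state);	/* 0 == UN_MAPPED */
	return 0;
}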
+
struct musb_request {
struct usb_request request;
struct musb_ep *ep;
struct musb *musb;
u8 tx; /* endpoint direction */
u8 epnum;
- u8 mapped;
+ enum buffer_map_state map_state;
};
static inline struct musb_request *to_musb_request(struct usb_request *req)
/* Set RXMAXP with the FIFO size of the endpoint
* to disable double buffer mode.
*/
- if (musb->hwvers < MUSB_HWVERS_2000)
+ if (musb->double_buffer_not_ok)
musb_writew(ep->regs, MUSB_RXMAXP, ep->max_packet_sz_rx);
else
musb_writew(ep->regs, MUSB_RXMAXP,
/* protocol/endpoint/interval/NAKlimit */
if (epnum) {
musb_writeb(epio, MUSB_TXTYPE, qh->type_reg);
- if (can_bulk_split(musb, qh->type))
+ if (musb->double_buffer_not_ok)
musb_writew(epio, MUSB_TXMAXP,
- packet_sz
- | ((hw_ep->max_packet_sz_tx /
- packet_sz) - 1) << 11);
+ hw_ep->max_packet_sz_tx);
else
musb_writew(epio, MUSB_TXMAXP,
- packet_sz);
+ qh->maxpacket |
+ ((qh->hb_mult - 1) << 11));
musb_writeb(epio, MUSB_TXINTERVAL, qh->intv_reg);
} else {
musb_writeb(epio, MUSB_NAKLIMIT0, qh->intv_reg);
{
musb_writew(mbase,
MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_ADDR_LOW),
- ((u16)((u32) dma_addr & 0xFFFF)));
+ dma_addr);
musb_writew(mbase,
MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_ADDR_HIGH),
- ((u16)(((u32) dma_addr >> 16) & 0xFFFF)));
+ (dma_addr >> 16));
}
static inline u32 musb_read_hsdma_count(void __iomem *mbase, u8 bchannel)
{
- return musb_readl(mbase,
+ u32 count = musb_readw(mbase,
MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_COUNT_HIGH));
+
+ count = count << 16;
+
+ count |= musb_readw(mbase,
+ MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_COUNT_LOW));
+
+ return count;
}
static inline void musb_write_hsdma_count(void __iomem *mbase,
u8 bchannel, u32 len)
{
- musb_writel(mbase,
+ musb_writew(mbase,
+ MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_COUNT_LOW),len);
+ musb_writew(mbase,
MUSB_HSDMA_CHANNEL_OFFSET(bchannel, MUSB_HSDMA_COUNT_HIGH),
- len);
+ (len >> 16));
}
#endif /* CONFIG_BLACKFIN */
required after resetting the hardware and power management.
This driver is required even for peripheral only or host only
mode configurations.
+	  This driver is not supported on boards like trout, which
+	  have an external PHY.
config AB8500_USB
tristate "AB8500 USB Transceiver Driver"
platform_set_drvdata(pdev, nop);
+ BLOCKING_INIT_NOTIFIER_HEAD(&nop->otg.notifier);
+
return 0;
exit:
kfree(nop);
/* ULPI hardcoded IDs, used for probing */
static struct ulpi_info ulpi_ids[] = {
ULPI_INFO(ULPI_ID(0x04cc, 0x1504), "NXP ISP1504"),
- ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB3319"),
+ ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB331x"),
};
static int ulpi_set_otg_flags(struct otg_transceiver *otg)
if (actual_length >= 4) {
struct ch341_private *priv = usb_get_serial_port_data(port);
unsigned long flags;
+ u8 prev_line_status = priv->line_status;
spin_lock_irqsave(&priv->lock, flags);
priv->line_status = (~(data[2])) & CH341_BITS_MODEM_STAT;
if ((data[1] & CH341_MULT_STAT))
priv->multi_status_change = 1;
spin_unlock_irqrestore(&priv->lock, flags);
+
+ if ((priv->line_status ^ prev_line_status) & CH341_BIT_DCD) {
+ struct tty_struct *tty = tty_port_tty_get(&port->port);
+ if (tty)
+ usb_serial_handle_dcd_change(port, tty,
+ priv->line_status & CH341_BIT_DCD);
+ tty_kref_put(tty);
+ }
+
wake_up_interruptible(&priv->delta_msr_wait);
}
static void cp210x_break_ctl(struct tty_struct *, int);
static int cp210x_startup(struct usb_serial *);
static void cp210x_dtr_rts(struct usb_serial_port *p, int on);
-static int cp210x_carrier_raised(struct usb_serial_port *p);
static int debug;
{ USB_DEVICE(0x10C4, 0x8115) }, /* Arygon NFC/Mifare Reader */
{ USB_DEVICE(0x10C4, 0x813D) }, /* Burnside Telecom Deskmobile */
{ USB_DEVICE(0x10C4, 0x813F) }, /* Tams Master Easy Control */
- { USB_DEVICE(0x10C4, 0x8149) }, /* West Mountain Radio Computerized Battery Analyzer */
{ USB_DEVICE(0x10C4, 0x814A) }, /* West Mountain Radio RIGblaster P&P */
{ USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
{ USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
{ USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */
{ USB_DEVICE(0x10C4, 0x8382) }, /* Cygnal Integrated Products, Inc. */
{ USB_DEVICE(0x10C4, 0x83A8) }, /* Amber Wireless AMB2560 */
+ { USB_DEVICE(0x10C4, 0x83D8) }, /* DekTec DTA Plus VHF/UHF Booster/Attenuator */
{ USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */
+ { USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */
{ USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */
{ USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */
{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
.tiocmget = cp210x_tiocmget,
.tiocmset = cp210x_tiocmset,
.attach = cp210x_startup,
- .dtr_rts = cp210x_dtr_rts,
- .carrier_raised = cp210x_carrier_raised
+ .dtr_rts = cp210x_dtr_rts
};
/* Config request types */
return result;
}
-static int cp210x_carrier_raised(struct usb_serial_port *p)
-{
- unsigned int control;
- cp210x_get_config(p, CP210X_GET_MDMSTS, &control, 1);
- if (control & CONTROL_DCD)
- return 1;
- return 0;
-}
-
static void cp210x_break_ctl (struct tty_struct *tty, int break_state)
{
struct usb_serial_port *port = tty->driver_data;
static int digi_chars_in_buffer(struct tty_struct *tty);
static int digi_open(struct tty_struct *tty, struct usb_serial_port *port);
static void digi_close(struct usb_serial_port *port);
-static int digi_carrier_raised(struct usb_serial_port *port);
static void digi_dtr_rts(struct usb_serial_port *port, int on);
static int digi_startup_device(struct usb_serial *serial);
static int digi_startup(struct usb_serial *serial);
.open = digi_open,
.close = digi_close,
.dtr_rts = digi_dtr_rts,
- .carrier_raised = digi_carrier_raised,
.write = digi_write,
.write_room = digi_write_room,
.write_bulk_callback = digi_write_bulk_callback,
digi_set_modem_signals(port, on * (TIOCM_DTR|TIOCM_RTS), 1);
}
-static int digi_carrier_raised(struct usb_serial_port *port)
-{
- struct digi_port *priv = usb_get_serial_port_data(port);
- if (priv->dp_modem_signals & TIOCM_CD)
- return 1;
- return 0;
-}
-
static int digi_open(struct tty_struct *tty, struct usb_serial_port *port)
{
int ret;
static int ftdi_jtag_probe(struct usb_serial *serial);
static int ftdi_mtxorb_hack_setup(struct usb_serial *serial);
static int ftdi_NDI_device_setup(struct usb_serial *serial);
+static int ftdi_stmclite_probe(struct usb_serial *serial);
static void ftdi_USB_UIRT_setup(struct ftdi_private *priv);
static void ftdi_HE_TIRA1_setup(struct ftdi_private *priv);
.port_probe = ftdi_HE_TIRA1_setup,
};
+static struct ftdi_sio_quirk ftdi_stmclite_quirk = {
+ .probe = ftdi_stmclite_probe,
+};
+
/*
* The 8U232AM has the same API as the sio except for:
* - it can support MUCH higher baudrates; up to:
{ USB_DEVICE(FTDI_VID, FTDI_OCEANIC_PID) },
{ USB_DEVICE(TTI_VID, TTI_QL355P_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_RM_CANVIEW_PID) },
+ { USB_DEVICE(ACTON_VID, ACTON_SPECTRAPRO_PID) },
{ USB_DEVICE(CONTEC_VID, CONTEC_COM1USBH_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USOTL4_PID) },
{ USB_DEVICE(BANDB_VID, BANDB_USTL4_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_PCDJ_DAC2_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_RRCIRKITS_LOCOBUFFER_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ASK_RDR400_PID) },
- { USB_DEVICE(ICOM_ID1_VID, ICOM_ID1_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_1_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_OPC_U_UC_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2C1_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2C2_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2D_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2VT_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2VR_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP4KVT_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP4KVR_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2KVT_PID) },
+ { USB_DEVICE(ICOM_VID, ICOM_ID_RP2KVR_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_DOTEC_PID) },
{ USB_DEVICE(QIHARDWARE_VID, MILKYMISTONE_JTAGSERIAL_PID),
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ { USB_DEVICE(ST_VID, ST_STMCLT1030_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_stmclite_quirk },
{ }, /* Optional parameter entry */
{ } /* Terminating entry */
};
return 0;
}
+/*
+ * The first and second ports on STMCLite adapters are reserved for the JTAG
+ * interface and the fourth port for PIO
+ */
+static int ftdi_stmclite_probe(struct usb_serial *serial)
+{
+ struct usb_device *udev = serial->dev;
+ struct usb_interface *interface = serial->interface;
+
+ dbg("%s", __func__);
+
+ if (interface == udev->actconfig->interface[2])
+ return 0;
+
+ dev_info(&udev->dev, "Ignoring serial port reserved for JTAG\n");
+
+ return -ENODEV;
+}
+
/*
* The Matrix Orbital VK204-25-USB has an invalid IN endpoint.
* We have to correct it if we want to read from it.
#define RATOC_VENDOR_ID 0x0584
#define RATOC_PRODUCT_ID_USB60F 0xb020
+/*
+ * Acton Research Corp.
+ */
+#define ACTON_VID 0x0647 /* Vendor ID */
+#define ACTON_SPECTRAPRO_PID 0x0100
+
/*
* Contec products (http://www.contec.com)
* Submitted by Daniel Sangorrin
#define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */
/*
- * Icom ID-1 digital transceiver
+ * Definitions for Icom Inc. devices
*/
-
-#define ICOM_ID1_VID 0x0C26
-#define ICOM_ID1_PID 0x0004
+#define ICOM_VID 0x0C26 /* Icom vendor ID */
+/* Note: ID-1 is a communications transceiver for HAM-radio operators */
+#define ICOM_ID_1_PID 0x0004 /* ID-1 USB to RS-232 */
+/* Note: OPC is an optional cable to connect an Icom transceiver */
+#define ICOM_OPC_U_UC_PID 0x0018 /* OPC-478UC, OPC-1122U cloning cable */
+/* Note: ID-RP* devices are Icom Repeater Devices for HAM-radio */
+#define ICOM_ID_RP2C1_PID 0x0009 /* ID-RP2C Asset 1 to RS-232 */
+#define ICOM_ID_RP2C2_PID 0x000A /* ID-RP2C Asset 2 to RS-232 */
+#define ICOM_ID_RP2D_PID 0x000B /* ID-RP2D configuration port*/
+#define ICOM_ID_RP2VT_PID 0x000C /* ID-RP2V Transmit config port */
+#define ICOM_ID_RP2VR_PID 0x000D /* ID-RP2V Receive config port */
+#define ICOM_ID_RP4KVT_PID 0x0010 /* ID-RP4000V Transmit config port */
+#define ICOM_ID_RP4KVR_PID 0x0011 /* ID-RP4000V Receive config port */
+#define ICOM_ID_RP2KVT_PID 0x0012 /* ID-RP2000V Transmit config port */
+#define ICOM_ID_RP2KVR_PID 0x0013 /* ID-RP2000V Receive config port */
/*
* GN Otometrics (http://www.otometrics.com)
#define STB_PID 0x0001 /* Sensor Terminal Board */
#define WHT_PID 0x0004 /* Wireless Handheld Terminal */
+/*
+ * STMicroelectronics
+ */
+#define ST_VID 0x0483
+#define ST_STMCLT1030_PID 0x3747 /* ST Micro Connect Lite STMCLT1030 */
+
/*
* Papouch products (http://www.papouch.com/)
* Submitted by Folkert van Heusden
}
EXPORT_SYMBOL_GPL(usb_serial_handle_break);
+/**
+ * usb_serial_handle_dcd_change - handle a change of carrier detect state
+ * @port: usb_serial_port structure for the open port
+ * @tty: tty_struct structure for the port
+ * @status: new carrier detect status, nonzero if active
+ */
+void usb_serial_handle_dcd_change(struct usb_serial_port *usb_port,
+ struct tty_struct *tty, unsigned int status)
+{
+ struct tty_port *port = &usb_port->port;
+
+ dbg("%s - port %d, status %d", __func__, usb_port->number, status);
+
+ if (status)
+ wake_up_interruptible(&port->open_wait);
+ else if (tty && !C_CLOCAL(tty))
+ tty_hangup(tty);
+}
+EXPORT_SYMBOL_GPL(usb_serial_handle_dcd_change);
+
int usb_serial_generic_resume(struct usb_serial *serial)
{
struct usb_serial_port *port;
dbg("%s %d.%d.%d", fw_info, rec->data[0], rec->data[1], build);
- edge_serial->product_info.FirmwareMajorVersion = fw->data[0];
- edge_serial->product_info.FirmwareMinorVersion = fw->data[1];
+ edge_serial->product_info.FirmwareMajorVersion = rec->data[0];
+ edge_serial->product_info.FirmwareMinorVersion = rec->data[1];
edge_serial->product_info.FirmwareBuildNumber = cpu_to_le16(build);
for (rec = ihex_next_binrec(rec); rec;
.name = "epic",
},
.description = "EPiC device",
+ .usb_driver = &io_driver,
.id_table = Epic_port_id_table,
.num_ports = 1,
.open = edge_open,
.name = "iuu_phoenix",
},
.id_table = id_table,
+ .usb_driver = &iuu_driver,
.num_ports = 1,
.bulk_in_size = 512,
.bulk_out_size = 512,
.name = "keyspan_no_firm",
},
.description = "Keyspan - (without firmware)",
+ .usb_driver = &keyspan_driver,
.id_table = keyspan_pre_ids,
.num_ports = 1,
.attach = keyspan_fake_startup,
.name = "keyspan_1",
},
.description = "Keyspan 1 port adapter",
+ .usb_driver = &keyspan_driver,
.id_table = keyspan_1port_ids,
.num_ports = 1,
.open = keyspan_open,
.name = "keyspan_2",
},
.description = "Keyspan 2 port adapter",
+ .usb_driver = &keyspan_driver,
.id_table = keyspan_2port_ids,
.num_ports = 2,
.open = keyspan_open,
.name = "keyspan_4",
},
.description = "Keyspan 4 port adapter",
+ .usb_driver = &keyspan_driver,
.id_table = keyspan_4port_ids,
.num_ports = 4,
.open = keyspan_open,
}
}
-static int keyspan_pda_carrier_raised(struct usb_serial_port *port)
-{
- struct usb_serial *serial = port->serial;
- unsigned char modembits;
-
- /* If we can read the modem status and the DCD is low then
- carrier is not raised yet */
- if (keyspan_pda_get_modem_info(serial, &modembits) >= 0) {
- if (!(modembits & (1>>6)))
- return 0;
- }
- /* Carrier raised, or we failed (eg disconnected) so
- progress accordingly */
- return 1;
-}
-
static int keyspan_pda_open(struct tty_struct *tty,
struct usb_serial_port *port)
.id_table = id_table_std,
.num_ports = 1,
.dtr_rts = keyspan_pda_dtr_rts,
- .carrier_raised = keyspan_pda_carrier_raised,
.open = keyspan_pda_open,
.close = keyspan_pda_close,
.write = keyspan_pda_write,
.name = "moto-modem",
},
.id_table = id_table,
+ .usb_driver = &moto_driver,
.num_ports = 1,
};
#define HAIER_VENDOR_ID 0x201e
#define HAIER_PRODUCT_CE100 0x2009
-#define CINTERION_VENDOR_ID 0x0681
+/* Cinterion (formerly Siemens) products */
+#define SIEMENS_VENDOR_ID 0x0681
+#define CINTERION_VENDOR_ID 0x1e2d
+#define CINTERION_PRODUCT_HC25_MDM 0x0047
+#define CINTERION_PRODUCT_HC25_MDMNET 0x0040
+#define CINTERION_PRODUCT_HC28_MDM 0x004C
+#define CINTERION_PRODUCT_HC28_MDMNET 0x004A /* same for HC28J */
+#define CINTERION_PRODUCT_EU3_E 0x0051
+#define CINTERION_PRODUCT_EU3_P 0x0052
+#define CINTERION_PRODUCT_PH8 0x0053
/* Olivetti products */
#define OLIVETTI_VENDOR_ID 0x0b3c
{ USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100F) },
{ USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1011)},
{ USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1012)},
- { USB_DEVICE(CINTERION_VENDOR_ID, 0x0047) },
+ /* Cinterion */
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_E) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_P) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) },
+ { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDM) },
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) },
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
+
{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
{ USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
{ USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
.name = "oti6858",
},
.id_table = id_table,
+ .usb_driver = &oti6858_driver,
.num_ports = 1,
.open = oti6858_open,
.close = oti6858_close,
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MMX) },
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_GPRS) },
{ USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_HCR331) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MOTOROLA) },
{ USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) },
{ USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) },
{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) },
{
struct pl2303_private *priv = usb_get_serial_port_data(port);
+ struct tty_struct *tty;
unsigned long flags;
u8 status_idx = UART_STATE;
u8 length = UART_STATE + 1;
+ u8 prev_line_status;
u16 idv, idp;
idv = le16_to_cpu(port->serial->dev->descriptor.idVendor);
/* Save off the uart status for others to look at */
spin_lock_irqsave(&priv->lock, flags);
+ prev_line_status = priv->line_status;
priv->line_status = data[status_idx];
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->line_status & UART_BREAK_ERROR)
usb_serial_handle_break(port);
wake_up_interruptible(&priv->delta_msr_wait);
+
+ tty = tty_port_tty_get(&port->port);
+ if (!tty)
+ return;
+ if ((priv->line_status ^ prev_line_status) & UART_DCD)
+ usb_serial_handle_dcd_change(port, tty,
+ priv->line_status & UART_DCD);
+ tty_kref_put(tty);
}
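
The hunk above derives DCD transitions by XOR-ing the previous and the current line status and notifying the tty layer only when the DCD bit actually changed. A minimal stand-alone sketch of that edge-detection idea (names and values are illustrative, not part of the driver):

    #include <stdint.h>
    #include <stdio.h>

    #define UART_DCD 0x01  /* same bit position the driver tests */

    /* Report a carrier change only when the DCD bit differs between samples. */
    static void check_dcd_edge(uint8_t prev_status, uint8_t cur_status)
    {
            if ((prev_status ^ cur_status) & UART_DCD)
                    printf("DCD is now %s\n",
                           (cur_status & UART_DCD) ? "asserted" : "dropped");
    }

    int main(void)
    {
            check_dcd_edge(0x00, UART_DCD);     /* prints: DCD is now asserted */
            check_dcd_edge(UART_DCD, UART_DCD); /* no change, prints nothing */
            return 0;
    }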
static void pl2303_read_int_callback(struct urb *urb)
#define PL2303_PRODUCT_ID_MMX 0x0612
#define PL2303_PRODUCT_ID_GPRS 0x0609
#define PL2303_PRODUCT_ID_HCR331 0x331a
+#define PL2303_PRODUCT_ID_MOTOROLA 0x0307
#define ATEN_VENDOR_ID 0x0557
#define ATEN_VENDOR_ID2 0x0547
#define UTSTARCOM_PRODUCT_UM175_V1 0x3712
#define UTSTARCOM_PRODUCT_UM175_V2 0x3714
#define UTSTARCOM_PRODUCT_UM175_ALLTEL 0x3715
+#define PANTECH_PRODUCT_UML290_VZW 0x3718
/* CMOTECH devices */
#define CMOTECH_VENDOR_ID 0x16d8
{ USB_DEVICE_AND_INTERFACE_INFO(LG_VENDOR_ID, LG_PRODUCT_VX4400_6000, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SANYO_VENDOR_ID, SANYO_PRODUCT_KATANA_LX, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_U520, 0xff, 0x00, 0x00) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xff, 0xff) },
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);
.name = "qcaux",
},
.id_table = id_table,
+ .usb_driver = &qcaux_driver,
.num_ports = 1,
};
.name = "siemens_mpi",
},
.id_table = id_table,
+ .usb_driver = &siemens_usb_mpi_driver,
.num_ports = 1,
};
/* how come ??? */
#define UART_STATE 0x08
-#define UART_STATE_TRANSIENT_MASK 0x74
+#define UART_STATE_TRANSIENT_MASK 0x75
#define UART_DCD 0x01
#define UART_DSR 0x02
#define UART_BREAK_ERROR 0x04
/* overrun is special, not associated with a char */
if (status & UART_OVERRUN_ERROR)
tty_insert_flip_char(tty, 0, TTY_OVERRUN);
+
+ if (status & UART_DCD)
+ usb_serial_handle_dcd_change(port, tty,
+ priv->line_status & MSR_STATUS_LINE_DCD);
}
tty_insert_flip_string_fixed_flag(tty, data, tty_flag,
.name = "SPCP8x5",
},
.id_table = id_table,
+ .usb_driver = &spcp8x5_driver,
.num_ports = 1,
.open = spcp8x5_open,
.dtr_rts = spcp8x5_dtr_rts,
static void __exit ti_exit(void)
{
+ usb_deregister(&ti_usb_driver);
usb_serial_deregister(&ti_1port_device);
usb_serial_deregister(&ti_2port_device);
- usb_deregister(&ti_usb_driver);
}
return -ENODEV;
fixup_generic(driver);
- if (driver->usb_driver)
- driver->usb_driver->supports_autosuspend = 1;
if (!driver->description)
driver->description = driver->driver.name;
+ if (!driver->usb_driver) {
+ WARN(1, "Serial driver %s has no usb_driver\n",
+ driver->description);
+ return -EINVAL;
+ }
+ driver->usb_driver->supports_autosuspend = 1;
/* Add this device to our list of devices */
mutex_lock(&table_lock);
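
This core change turns a missing .usb_driver pointer into a hard registration failure, which is why the surrounding hunks add a .usb_driver member to each usb_serial_driver (moto-modem, oti6858, qcaux, siemens_mpi, spcp8x5, usb_debug). A hedged sketch of the pairing the core now expects; the structs are simplified stand-ins and the foo_* names are placeholders, not a real driver:

    #include <stdio.h>

    /* Simplified stand-ins for the kernel structs, for illustration only. */
    struct usb_driver { const char *name; };
    struct usb_serial_driver {
            const char *description;
            struct usb_driver *usb_driver;  /* must now be non-NULL */
    };

    /* Mirrors the new check: refuse registration when the back-pointer is missing. */
    static int register_serial_driver(struct usb_serial_driver *drv)
    {
            if (!drv->usb_driver) {
                    fprintf(stderr, "Serial driver %s has no usb_driver\n",
                            drv->description);
                    return -1;      /* the kernel returns -EINVAL here */
            }
            return 0;
    }

    int main(void)
    {
            struct usb_driver foo_usb = { .name = "foo" };
            struct usb_serial_driver foo_serial = {
                    .description = "foo",
                    .usb_driver = &foo_usb,
            };
            return register_serial_driver(&foo_serial);
    }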
.name = "debug",
},
.id_table = id_table,
+ .usb_driver = &debug_driver,
.num_ports = 1,
.bulk_out_size = USB_DEBUG_MAX_PACKET_SIZE,
.break_ctl = usb_debug_break_ctl,
"Cypress ISD-300LP",
USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0),
+UNUSUAL_DEV( 0x14cd, 0x6116, 0x0000, 0x9999,
+ "Super Top",
+ "USB 2.0 SATA BRIDGE",
+ USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0),
+
#endif /* defined(CONFIG_USB_STORAGE_CYPRESS_ATACB) || ... */
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_BULK32),
+/* Reported by <ttkspam@free.fr>
+ * The device reports a vendor-specific device class, requiring an
+ * explicit vendor/product match.
+ */
+UNUSUAL_DEV( 0x0851, 0x1542, 0x0002, 0x0002,
+ "MagicPixel",
+ "FW_Omega2",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL, 0),
+
/* Andrew Lunn <andrew@lunn.ch>
* PanDigital Digital Picture Frame. Does not like ALLOW_MEDIUM_REMOVAL
* on LUN 4.
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_IGNORE_RESIDUE ),
+/* Submitted by Nick Holloway */
+UNUSUAL_DEV( 0x0f88, 0x042e, 0x0100, 0x0100,
+ "VTech",
+ "Kidizoom",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_FIX_CAPACITY ),
+
/* Reported by Michael Stattmann <michael@stattmann.com> */
UNUSUAL_DEV( 0x0fce, 0xd008, 0x0000, 0x0000,
"Sony Ericsson",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NO_READ_DISC_INFO ),
+/* Patch by Richard Schütz <r.schtz@t-online.de>
+ * This external hard drive enclosure uses a JMicron chip which
+ * needs the US_FL_IGNORE_RESIDUE flag to work properly. */
+UNUSUAL_DEV( 0x1e68, 0x001b, 0x0000, 0x0000,
+ "TrekStor GmbH & Co. KG",
+ "DataStation maxi g.u",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_IGNORE_RESIDUE | US_FL_SANE_SENSE ),
+
+/* Reported by Jasper Mackenzie <scarletpimpernal@hotmail.com> */
+UNUSUAL_DEV( 0x1e74, 0x4621, 0x0000, 0x0000,
+ "Coby Electronics",
+ "MP3 Player",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ),
+
UNUSUAL_DEV( 0x2116, 0x0320, 0x0001, 0x0001,
"ST",
"2A",
size_t hdr_size;
struct socket *sock;
- /* TODO: check that we are running from vhost_worker?
- * Not sure it's worth it, it's straight-forward enough. */
+ /* TODO: check that we are running from vhost_worker? */
sock = rcu_dereference_check(vq->private_data, 1);
if (!sock)
return;
size_t len, total_len = 0;
int err;
size_t hdr_size;
- struct socket *sock = rcu_dereference(vq->private_data);
+ /* TODO: check that we are running from vhost_worker? */
+ struct socket *sock = rcu_dereference_check(vq->private_data, 1);
if (!sock || skb_queue_empty(&sock->sk->sk_receive_queue))
return;
int err, headcount;
size_t vhost_hlen, sock_hlen;
size_t vhost_len, sock_len;
- struct socket *sock = rcu_dereference(vq->private_data);
+ /* TODO: check that we are running from vhost_worker? */
+ struct socket *sock = rcu_dereference_check(vq->private_data, 1);
if (!sock || skb_queue_empty(&sock->sk->sk_receive_queue))
return;
{
unsigned acked_features;
- acked_features =
- rcu_dereference_index_check(dev->acked_features,
- lockdep_is_held(&dev->mutex));
+ /* TODO: check that we are running from vhost_worker or dev mutex is
+ * held? */
+ acked_features = rcu_dereference_index_check(dev->acked_features, 1);
return acked_features & (1 << bit);
}
config FB_INTEL
tristate "Intel 830M/845G/852GM/855GM/865G/915G/945G/945GM/965G/965GM support (EXPERIMENTAL)"
- depends on EXPERIMENTAL && FB && PCI && X86 && AGP_INTEL && EMBEDDED
+ depends on EXPERIMENTAL && FB && PCI && X86 && AGP_INTEL && EXPERT
select FB_MODE_HELPERS
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
#include <linux/svga.h>
#include <linux/init.h>
#include <linux/pci.h>
-#include <linux/console.h> /* Why should fb driver call console functions? because acquire_console_sem() */
+#include <linux/console.h> /* Why should fb driver call console functions? because console_lock() */
#include <video/vga.h>
#ifdef CONFIG_MTRR
dev_info(info->device, "suspend\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if ((state.event == PM_EVENT_FREEZE) || (par->ref_count == 0)) {
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
pci_set_power_state(dev, pci_choose_state(dev, state));
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
dev_info(info->device, "resume\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if (par->ref_count == 0)
fail:
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
#else
{
struct aty128fb_par *par = data;
- if (try_acquire_console_sem())
+ if (!console_trylock())
return;
pci_restore_state(par->pdev);
aty128_do_resume(par->pdev);
- release_console_sem();
+ console_unlock();
}
#endif /* CONFIG_PPC_PMAC */
printk(KERN_DEBUG "aty128fb: suspending...\n");
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
if (state.event != PM_EVENT_ON)
aty128_set_suspend(par, 1);
- release_console_sem();
+ console_unlock();
pdev->dev.power.power_state = state;
{
int rc;
- acquire_console_sem();
+ console_lock();
rc = aty128_do_resume(pdev);
- release_console_sem();
+ console_unlock();
return rc;
}
if (state.event == pdev->dev.power.power_state.event)
return 0;
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
par->lock_blank = 0;
atyfb_blank(FB_BLANK_UNBLANK, info);
fb_set_suspend(info, 0);
- release_console_sem();
+ console_unlock();
return -EIO;
}
#else
pci_set_power_state(pdev, pci_choose_state(pdev, state));
#endif
- release_console_sem();
+ console_unlock();
pdev->dev.power.power_state = state;
if (pdev->dev.power.power_state.event == PM_EVENT_ON)
return 0;
- acquire_console_sem();
+ console_lock();
/*
* PCI state will have been restored by the core, so
par->lock_blank = 0;
atyfb_blank(FB_BLANK_UNBLANK, info);
- release_console_sem();
+ console_unlock();
pdev->dev.power.power_state = PMSG_ON;
goto done;
}
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
if (rinfo->pm_mode & radeon_pm_d2)
radeon_set_suspend(rinfo, 1);
- release_console_sem();
+ console_unlock();
done:
pdev->dev.power.power_state = mesg;
return 0;
if (rinfo->no_schedule) {
- if (try_acquire_console_sem())
+ if (!console_trylock())
return 0;
} else
- acquire_console_sem();
+ console_lock();
printk(KERN_DEBUG "radeonfb (%s): resuming from state: %d...\n",
pci_name(pdev), pdev->dev.power.power_state.event);
pdev->dev.power.power_state = PMSG_ON;
bail:
- release_console_sem();
+ console_unlock();
return rc;
}
#define MAX_BRIGHTNESS (0xFF)
#define MIN_BRIGHTNESS (0)
-#define CURRENT_MASK (0x1F << 1)
+#define CURRENT_BITMASK (0x1F << 1)
struct pm860x_backlight_data {
struct pm860x_chip *chip;
if ((data->current_brightness == 0) && brightness) {
if (data->iset) {
ret = pm860x_set_bits(data->i2c, wled_idc(data->port),
- CURRENT_MASK, data->iset);
+ CURRENT_BITMASK, data->iset);
if (ret < 0)
goto out;
}
{
struct backlight_properties props;
dma_addr_t dma_handle;
+ int ret;
if (request_dma(CH_PPI, KBUILD_MODNAME)) {
pr_err("couldn't request PPI DMA\n");
if (request_ports()) {
pr_err("couldn't request gpio port\n");
- free_dma(CH_PPI);
- return -EFAULT;
+ ret = -EFAULT;
+ goto out_ports;
}
fb_buffer = dma_alloc_coherent(NULL, TOTAL_VIDEO_MEM_SIZE,
&dma_handle, GFP_KERNEL);
if (fb_buffer == NULL) {
pr_err("couldn't allocate dma buffer\n");
- free_dma(CH_PPI);
- free_ports();
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_dma_coherent;
}
if (L1_DATA_A_LENGTH)
if (dma_desc_table == NULL) {
pr_err("couldn't allocate dma descriptor\n");
- free_dma(CH_PPI);
- free_ports();
- dma_free_coherent(NULL, TOTAL_VIDEO_MEM_SIZE, fb_buffer, 0);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_table;
}
bfin_lq035_fb.screen_base = (void *)fb_buffer;
bfin_lq035_fb.pseudo_palette = kzalloc(sizeof(u32) * 16, GFP_KERNEL);
if (bfin_lq035_fb.pseudo_palette == NULL) {
pr_err("failed to allocate pseudo_palette\n");
- free_dma(CH_PPI);
- free_ports();
- dma_free_coherent(NULL, TOTAL_VIDEO_MEM_SIZE, fb_buffer, 0);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_palette;
}
if (fb_alloc_cmap(&bfin_lq035_fb.cmap, NBR_PALETTE, 0) < 0) {
pr_err("failed to allocate colormap (%d entries)\n",
NBR_PALETTE);
- free_dma(CH_PPI);
- free_ports();
- dma_free_coherent(NULL, TOTAL_VIDEO_MEM_SIZE, fb_buffer, 0);
- kfree(bfin_lq035_fb.pseudo_palette);
- return -EFAULT;
+ ret = -EFAULT;
+ goto out_cmap;
}
if (register_framebuffer(&bfin_lq035_fb) < 0) {
pr_err("unable to register framebuffer\n");
- free_dma(CH_PPI);
- free_ports();
- dma_free_coherent(NULL, TOTAL_VIDEO_MEM_SIZE, fb_buffer, 0);
- fb_buffer = NULL;
- kfree(bfin_lq035_fb.pseudo_palette);
- fb_dealloc_cmap(&bfin_lq035_fb.cmap);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out_reg;
}
i2c_add_driver(&ad5280_driver);
lcd_dev = lcd_device_register(KBUILD_MODNAME, &pdev->dev, NULL,
&bfin_lcd_ops);
+ if (IS_ERR(lcd_dev)) {
+ pr_err("unable to register lcd\n");
+ ret = PTR_ERR(lcd_dev);
+ goto out_lcd;
+ }
lcd_dev->props.max_contrast = 255,
pr_info("initialized");
return 0;
+out_lcd:
+ unregister_framebuffer(&bfin_lq035_fb);
+out_reg:
+ fb_dealloc_cmap(&bfin_lq035_fb.cmap);
+out_cmap:
+ kfree(bfin_lq035_fb.pseudo_palette);
+out_palette:
+out_table:
+ dma_free_coherent(NULL, TOTAL_VIDEO_MEM_SIZE, fb_buffer, 0);
+ fb_buffer = NULL;
+out_dma_coherent:
+ free_ports();
+out_ports:
+ free_dma(CH_PPI);
+ return ret;
}
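
The probe rework above replaces repeated ad-hoc cleanup (free_dma, free_ports, dma_free_coherent, kfree, ...) with a single goto ladder that unwinds only what has already been set up, and it finally checks lcd_device_register() for errors. A small stand-alone sketch of the pattern, with made-up step_a/step_b names:

    #include <stdio.h>

    static int step_a(void) { return 0; }   /* pretend setup steps */
    static int step_b(void) { return -1; }  /* this one fails */
    static void undo_a(void) { puts("undo a"); }

    /* Acquire resources in order; on failure, release them in reverse order. */
    static int probe(void)
    {
            int ret;

            ret = step_a();
            if (ret)
                    goto out;
            ret = step_b();
            if (ret)
                    goto out_a;     /* only step_a needs undoing */
            return 0;

    out_a:
            undo_a();
    out:
            return ret;
    }

    int main(void)
    {
            return probe() ? 1 : 0;
    }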
static int __devexit bfin_lq035_remove(struct platform_device *pdev)
if (!(state.event & PM_EVENT_SLEEP))
goto done;
- acquire_console_sem();
+ console_lock();
chipsfb_blank(1, p);
fb_set_suspend(p, 1);
- release_console_sem();
+ console_unlock();
done:
pdev->dev.power.power_state = state;
return 0;
{
struct fb_info *p = pci_get_drvdata(pdev);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(p, 0);
chipsfb_blank(0, p);
- release_console_sem();
+ console_unlock();
pdev->dev.power.power_state = PMSG_ON;
return 0;
menu "Console display driver support"
config VGA_CONSOLE
- bool "VGA text console" if EMBEDDED || !X86
+ bool "VGA text console" if EXPERT || !X86
depends on !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && !SUPERH && !BLACKFIN && !AVR32 && !MN10300 && (!ARM || ARCH_FOOTBRIDGE || ARCH_INTEGRATOR || ARCH_NETWINDER)
default y
help
int c;
int mode;
- acquire_console_sem();
+ console_lock();
if (ops && ops->currcon != -1)
vc = vc_cons[ops->currcon].d;
if (!vc || !CON_IS_VISIBLE(vc) ||
registered_fb[con2fb_map[vc->vc_num]] != info ||
vc->vc_deccm != 1) {
- release_console_sem();
+ console_unlock();
return;
}
CM_ERASE : CM_DRAW;
ops->cursor(vc, info, mode, softback_lines, get_color(vc, info, c, 1),
get_color(vc, info, c, 0));
- release_console_sem();
+ console_unlock();
}
static void cursor_timer_handler(unsigned long dev_addr)
found = search_fb_in_map(newidx);
- acquire_console_sem();
+ console_lock();
con2fb_map[unit] = newidx;
if (!err && !found)
err = con2fb_acquire_newinfo(vc, info, unit, oldidx);
if (!search_fb_in_map(info_idx))
info_idx = newidx;
- release_console_sem();
+ console_unlock();
return err;
}
if (fbcon_has_exited)
return count;
- acquire_console_sem();
+ console_lock();
idx = con2fb_map[fg_console];
if (idx == -1 || registered_fb[idx] == NULL)
rotate = simple_strtoul(buf, last, 0);
fbcon_rotate(info, rotate);
err:
- release_console_sem();
+ console_unlock();
return count;
}
if (fbcon_has_exited)
return count;
- acquire_console_sem();
+ console_lock();
idx = con2fb_map[fg_console];
if (idx == -1 || registered_fb[idx] == NULL)
rotate = simple_strtoul(buf, last, 0);
fbcon_rotate_all(info, rotate);
err:
- release_console_sem();
+ console_unlock();
return count;
}
if (fbcon_has_exited)
return 0;
- acquire_console_sem();
+ console_lock();
idx = con2fb_map[fg_console];
if (idx == -1 || registered_fb[idx] == NULL)
info = registered_fb[idx];
rotate = fbcon_get_rotate(info);
err:
- release_console_sem();
+ console_unlock();
return snprintf(buf, PAGE_SIZE, "%d\n", rotate);
}
if (fbcon_has_exited)
return 0;
- acquire_console_sem();
+ console_lock();
idx = con2fb_map[fg_console];
if (idx == -1 || registered_fb[idx] == NULL)
blink = (ops->flags & FBCON_FLAGS_CURSOR_TIMER) ? 1 : 0;
err:
- release_console_sem();
+ console_unlock();
return snprintf(buf, PAGE_SIZE, "%d\n", blink);
}
if (fbcon_has_exited)
return count;
- acquire_console_sem();
+ console_lock();
idx = con2fb_map[fg_console];
if (idx == -1 || registered_fb[idx] == NULL)
}
err:
- release_console_sem();
+ console_unlock();
return count;
}
if (num_registered_fb) {
int i;
- acquire_console_sem();
+ console_lock();
for (i = 0; i < FB_MAX; i++) {
if (registered_fb[i] != NULL) {
}
}
- release_console_sem();
+ console_unlock();
fbcon_takeover(0);
}
}
{
int i;
- acquire_console_sem();
+ console_lock();
fb_register_client(&fbcon_event_notifier);
fbcon_device = device_create(fb_class, NULL, MKDEV(0, 0), NULL,
"fbcon");
for (i = 0; i < MAX_NR_CONSOLES; i++)
con2fb_map[i] = -1;
- release_console_sem();
+ console_unlock();
fbcon_start();
return 0;
}
static void __exit fb_console_exit(void)
{
- acquire_console_sem();
+ console_lock();
fb_unregister_client(&fbcon_event_notifier);
fbcon_deinit_device();
device_destroy(fb_class, MKDEV(0, 0));
fbcon_exit();
- release_console_sem();
+ console_unlock();
unregister_con_driver(&fb_con);
}
}
}
-/*
- * Called only duing init so call of alloc_bootmen is ok.
- * Marked __init_refok to silence modpost.
- */
-static void __init_refok vgacon_scrollback_startup(void)
+static void vgacon_scrollback_startup(void)
{
vgacon_scrollback = kcalloc(CONFIG_VGACON_SOFT_SCROLLBACK_SIZE, 1024, GFP_NOWAIT);
vgacon_scrollback_init(vga_video_num_columns * 2);
irq_freq:
#ifdef CONFIG_CPU_FREQ
+ lcd_da8xx_cpufreq_deregister(par);
+#endif
err_cpu_freq:
unregister_framebuffer(da8xx_fb_info);
-#endif
err_dealloc_cmap:
fb_dealloc_cmap(&da8xx_fb_info->cmap);
struct fb_info *info = platform_get_drvdata(dev);
struct da8xx_fb_par *par = info->par;
- acquire_console_sem();
+ console_lock();
if (par->panel_power_ctrl)
par->panel_power_ctrl(0);
fb_set_suspend(info, 1);
lcd_disable_raster();
clk_disable(par->lcdc_clk);
- release_console_sem();
+ console_unlock();
return 0;
}
struct fb_info *info = platform_get_drvdata(dev);
struct da8xx_fb_par *par = info->par;
- acquire_console_sem();
+ console_lock();
if (par->panel_power_ctrl)
par->panel_power_ctrl(1);
clk_enable(par->lcdc_clk);
lcd_enable_raster();
fb_set_suspend(info, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
return -EFAULT;
if (!lock_fb_info(info))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
info->flags |= FBINFO_MISC_USEREVENT;
ret = fb_set_var(info, &var);
info->flags &= ~FBINFO_MISC_USEREVENT;
- release_console_sem();
+ console_unlock();
unlock_fb_info(info);
if (!ret && copy_to_user(argp, &var, sizeof(var)))
ret = -EFAULT;
return -EFAULT;
if (!lock_fb_info(info))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
ret = fb_pan_display(info, &var);
- release_console_sem();
+ console_unlock();
unlock_fb_info(info);
if (ret == 0 && copy_to_user(argp, &var, sizeof(var)))
return -EFAULT;
case FBIOBLANK:
if (!lock_fb_info(info))
return -ENODEV;
- acquire_console_sem();
+ console_lock();
info->flags |= FBINFO_MISC_USEREVENT;
ret = fb_blank(info, arg);
info->flags &= ~FBINFO_MISC_USEREVENT;
- release_console_sem();
+ console_unlock();
unlock_fb_info(info);
break;
default:
int err;
var->activate |= FB_ACTIVATE_FORCE;
- acquire_console_sem();
+ console_lock();
fb_info->flags |= FBINFO_MISC_USEREVENT;
err = fb_set_var(fb_info, var);
fb_info->flags &= ~FBINFO_MISC_USEREVENT;
- release_console_sem();
+ console_unlock();
if (err)
return err;
return 0;
if (i * sizeof(struct fb_videomode) != count)
return -EINVAL;
- acquire_console_sem();
+ console_lock();
list_splice(&fb_info->modelist, &old_list);
fb_videomode_to_modelist((const struct fb_videomode *)buf, i,
&fb_info->modelist);
} else
fb_destroy_modelist(&old_list);
- release_console_sem();
+ console_unlock();
return 0;
}
char *last = NULL;
int err;
- acquire_console_sem();
+ console_lock();
fb_info->flags |= FBINFO_MISC_USEREVENT;
err = fb_blank(fb_info, simple_strtoul(buf, &last, 0));
fb_info->flags &= ~FBINFO_MISC_USEREVENT;
- release_console_sem();
+ console_unlock();
if (err < 0)
return err;
return count;
return -EINVAL;
var.yoffset = simple_strtoul(last, &last, 0);
- acquire_console_sem();
+ console_lock();
err = fb_pan_display(fb_info, &var);
- release_console_sem();
+ console_unlock();
if (err < 0)
return err;
state = simple_strtoul(buf, &last, 0);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(fb_info, (int)state);
- release_console_sem();
+ console_unlock();
return count;
}
struct fb_info *info = pci_get_drvdata(pdev);
if (state.event == PM_EVENT_SUSPEND) {
- acquire_console_sem();
+ console_lock();
gx_powerdown(info);
fb_set_suspend(info, 1);
- release_console_sem();
+ console_unlock();
}
/* there's no point in setting PCI states; we emulate PCI, so
struct fb_info *info = pci_get_drvdata(pdev);
int ret;
- acquire_console_sem();
+ console_lock();
ret = gx_powerup(info);
if (ret) {
printk(KERN_ERR "gxfb: power up failed!\n");
}
fb_set_suspend(info, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
#endif
struct fb_info *info = pci_get_drvdata(pdev);
if (state.event == PM_EVENT_SUSPEND) {
- acquire_console_sem();
+ console_lock();
lx_powerdown(info);
fb_set_suspend(info, 1);
- release_console_sem();
+ console_unlock();
}
/* there's no point in setting PCI states; we emulate PCI, so
struct fb_info *info = pci_get_drvdata(pdev);
int ret;
- acquire_console_sem();
+ console_lock();
ret = lx_powerup(info);
if (ret) {
printk(KERN_ERR "lxfb: power up failed!\n");
}
fb_set_suspend(info, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
#else
return 0;
}
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
if (info->fbops->fb_sync)
pci_save_state(dev);
pci_disable_device(dev);
pci_set_power_state(dev, pci_choose_state(dev, mesg));
- release_console_sem();
+ console_unlock();
return 0;
}
return 0;
}
- acquire_console_sem();
+ console_lock();
pci_set_power_state(dev, PCI_D0);
pci_restore_state(dev);
fb_set_suspend (info, 0);
info->fbops->fb_blank(VESA_NO_BLANKING, info);
fail:
- release_console_sem();
+ console_unlock();
return 0;
}
/***********************************************************************
{
struct jzfb *jzfb = dev_get_drvdata(dev);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(jzfb->fb, 1);
- release_console_sem();
+ console_unlock();
mutex_lock(&jzfb->lock);
if (jzfb->is_enabled)
jzfb_enable(jzfb);
mutex_unlock(&jzfb->lock);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(jzfb->fb, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
struct mx3fb_data *mx3fb = platform_get_drvdata(pdev);
struct mx3fb_info *mx3_fbi = mx3fb->fbi->par;
- acquire_console_sem();
+ console_lock();
fb_set_suspend(mx3fb->fbi, 1);
- release_console_sem();
+ console_unlock();
if (mx3_fbi->blank == FB_BLANK_UNBLANK) {
sdc_disable_channel(mx3_fbi);
sdc_set_brightness(mx3fb, mx3fb->backlight_level);
}
- acquire_console_sem();
+ console_lock();
fb_set_suspend(mx3fb->fbi, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
nuc900fb_stop_lcd(fbinfo);
msleep(1);
+ unregister_framebuffer(fbinfo);
+ nuc900fb_cpufreq_deregister(fbi);
nuc900fb_unmap_video_memory(fbinfo);
iounmap(fbi->io);
struct fb_info *fbinfo = platform_get_drvdata(dev);
struct nuc900fb_info *info = fbinfo->par;
- nuc900fb_stop_lcd();
+ nuc900fb_stop_lcd(fbinfo);
msleep(1);
clk_disable(info->clk);
return 0;
msleep(1);
nuc900fb_init_registers(fbinfo);
- nuc900fb_activate_var(bfinfo);
+ nuc900fb_activate_var(fbinfo);
return 0;
}
if (mesg.event == PM_EVENT_PRETHAW)
mesg.event = PM_EVENT_FREEZE;
- acquire_console_sem();
+ console_lock();
par->pm_state = mesg.event;
if (mesg.event & PM_EVENT_SLEEP) {
}
dev->dev.power.power_state = mesg;
- release_console_sem();
+ console_unlock();
return 0;
}
struct fb_info *info = pci_get_drvdata(dev);
struct nvidia_par *par = info->par;
- acquire_console_sem();
+ console_lock();
pci_set_power_state(dev, PCI_D0);
if (par->pm_state != PM_EVENT_FREEZE) {
nvidiafb_blank(FB_BLANK_UNBLANK, info);
fail:
- release_console_sem();
+ console_unlock();
return 0;
}
#else
if (atomic_dec_and_test(&ps3fb.f_count)) {
if (atomic_read(&ps3fb.ext_flip)) {
atomic_set(&ps3fb.ext_flip, 0);
- if (!try_acquire_console_sem()) {
+ if (console_trylock()) {
ps3fb_sync(info, 0); /* single buffer */
- release_console_sem();
+ console_unlock();
}
}
}
if (vmode) {
var = info->var;
fb_videomode_to_var(&var, vmode);
- acquire_console_sem();
+ console_lock();
info->flags |= FBINFO_MISC_USEREVENT;
/* Force, in case only special bits changed */
var.activate |= FB_ACTIVATE_FORCE;
par->new_mode_id = val;
retval = fb_set_var(info, &var);
info->flags &= ~FBINFO_MISC_USEREVENT;
- release_console_sem();
+ console_unlock();
}
break;
}
break;
dev_dbg(info->device, "PS3FB_IOCTL_FSEL:%d\n", val);
- acquire_console_sem();
+ console_lock();
retval = ps3fb_sync(info, val);
- release_console_sem();
+ console_unlock();
break;
default:
set_current_state(TASK_INTERRUPTIBLE);
if (ps3fb.is_kicked) {
ps3fb.is_kicked = 0;
- acquire_console_sem();
+ console_lock();
ps3fb_sync(info, 0); /* single buffer */
- release_console_sem();
+ console_unlock();
}
schedule();
}
*/
pxa168fb_init_mode(info, mi);
- ret = pxa168fb_check_var(&info->var, info);
- if (ret)
- goto failed_free_fbmem;
-
/*
* Fill in sane defaults.
*/
ret = pxa168fb_check_var(&info->var, info);
if (ret)
- goto failed;
+ goto failed_free_fbmem;
/*
* enable controller clock
/*
- * pxa3xx-gc.c - Linux kernel module for PXA3xx graphics controllers
+ * pxa3xx-gcu.c - Linux kernel module for PXA3xx graphics controllers
*
* This driver needs a DirectFB counterpart in user space, communication
* is handled via mmap()ed memory areas and an ioctl.
buffer->next = priv->free;
priv->free = buffer;
spin_unlock_irqrestore(&priv->spinlock, flags);
- return ret;
+ return -EFAULT;
}
buffer->length = words;
#include <linux/svga.h>
#include <linux/init.h>
#include <linux/pci.h>
-#include <linux/console.h> /* Why should fb driver call console functions? because acquire_console_sem() */
+#include <linux/console.h> /* Why should fb driver call console functions? because console_lock() */
#include <video/vga.h>
#ifdef CONFIG_MTRR
dev_info(info->device, "suspend\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if ((state.event == PM_EVENT_FREEZE) || (par->ref_count == 0)) {
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
pci_set_power_state(dev, pci_choose_state(dev, state));
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
dev_info(info->device, "resume\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if (par->ref_count == 0) {
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
err = pci_enable_device(dev);
if (err) {
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
dev_err(info->device, "error %d enabling device for resume\n", err);
return err;
}
fb_set_suspend(info, 0);
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
if (mesg.event == PM_EVENT_FREEZE)
return 0;
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
if (info->fbops->fb_sync)
pci_save_state(dev);
pci_disable_device(dev);
pci_set_power_state(dev, pci_choose_state(dev, mesg));
- release_console_sem();
+ console_unlock();
return 0;
}
return 0;
}
- acquire_console_sem();
+ console_lock();
pci_set_power_state(dev, PCI_D0);
pci_restore_state(dev);
savagefb_set_par(info);
fb_set_suspend(info, 0);
savagefb_blank(FB_BLANK_UNBLANK, info);
- release_console_sem();
+ console_unlock();
return 0;
}
ch = info->par;
- acquire_console_sem();
+ console_lock();
/* HDMI plug in */
if (!sh_hdmi_must_reconfigure(hdmi) &&
fb_set_suspend(info, 0);
}
- release_console_sem();
+ console_unlock();
} else {
ret = 0;
if (!hdmi->info)
fb_destroy_modedb(hdmi->monspec.modedb);
hdmi->monspec.modedb = NULL;
- acquire_console_sem();
+ console_lock();
/* HDMI disconnect */
fb_set_suspend(hdmi->info, 1);
- release_console_sem();
+ console_unlock();
pm_runtime_put(hdmi->dev);
}
/* Nothing to reconfigure, when called from fbcon */
if (user) {
- acquire_console_sem();
+ console_lock();
sh_mobile_fb_reconfig(info);
- release_console_sem();
+ console_unlock();
}
mutex_unlock(&ch->open_lock);
/* tell console/fb driver we are suspending */
- acquire_console_sem();
+ console_lock();
fb_set_suspend(fbi, 1);
- release_console_sem();
+ console_unlock();
/* backup copies in case chip is powered down over suspend */
memcpy_toio(par->cursor.k_addr, par->store_cursor,
par->cursor.size);
- acquire_console_sem();
+ console_lock();
fb_set_suspend(fbi, 0);
- release_console_sem();
+ console_unlock();
vfree(par->store_fb);
vfree(par->store_cursor);
#include <linux/fb.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
-/* Why should fb driver call console functions? because acquire_console_sem() */
+/* Why should fb driver call console functions? because console_lock() */
#include <linux/console.h>
#include <linux/mfd/core.h>
#include <linux/mfd/tmio.h>
struct mfd_cell *cell = dev->dev.platform_data;
int retval = 0;
- acquire_console_sem();
+ console_lock();
fb_set_suspend(info, 1);
if (cell->suspend)
retval = cell->suspend(dev);
- release_console_sem();
+ console_unlock();
return retval;
}
struct mfd_cell *cell = dev->dev.platform_data;
int retval = 0;
- acquire_console_sem();
+ console_lock();
if (cell->resume) {
retval = cell->resume(dev);
fb_set_suspend(info, 0);
out:
- release_console_sem();
+ console_unlock();
return retval;
}
#else
#ifdef CONFIG_PM
static int viafb_suspend(void *unused)
{
- acquire_console_sem();
+ console_lock();
fb_set_suspend(viafbinfo, 1);
viafb_sync(viafbinfo);
- release_console_sem();
+ console_unlock();
return 0;
}
static int viafb_resume(void *unused)
{
- acquire_console_sem();
+ console_lock();
if (viaparinfo->shared->vdev->engine_mmio)
viafb_reset_engine(viaparinfo);
viafb_set_par(viafbinfo);
viafb_set_par(viafbinfo1);
fb_set_suspend(viafbinfo, 0);
- release_console_sem();
+ console_unlock();
return 0;
}
#include <linux/svga.h>
#include <linux/init.h>
#include <linux/pci.h>
-#include <linux/console.h> /* Why should fb driver call console functions? because acquire_console_sem() */
+#include <linux/console.h> /* Why should fb driver call console functions? because console_lock() */
#include <video/vga.h>
#ifdef CONFIG_MTRR
dev_info(info->device, "suspend\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if ((state.event == PM_EVENT_FREEZE) || (par->ref_count == 0)) {
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
pci_set_power_state(dev, pci_choose_state(dev, state));
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
dev_info(info->device, "resume\n");
- acquire_console_sem();
+ console_lock();
mutex_lock(&(par->open_lock));
if (par->ref_count == 0)
fail:
mutex_unlock(&(par->open_lock));
- release_console_sem();
+ console_unlock();
return 0;
}
if (console_set_on_cmdline)
return;
- acquire_console_sem();
+ console_lock();
for_each_console(c) {
if (!strcmp(c->name, "tty") && c->index == 0)
break;
}
- release_console_sem();
+ console_unlock();
if (c) {
unregister_console(c);
c->flags |= CON_CONSDEV;
MODULE_DEVICE_TABLE(pci, virtio_pci_id_table);
-/* A PCI device has it's own struct device and so does a virtio device so
- * we create a place for the virtio devices to show up in sysfs. I think it
- * would make more sense for virtio to not insist on having it's own device. */
-static struct device *virtio_pci_root;
-
/* Convert a generic virtio device to our structure */
static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev)
{
if (vp_dev == NULL)
return -ENOMEM;
- vp_dev->vdev.dev.parent = virtio_pci_root;
+ vp_dev->vdev.dev.parent = &pci_dev->dev;
vp_dev->vdev.dev.release = virtio_pci_release_dev;
vp_dev->vdev.config = &virtio_pci_config_ops;
vp_dev->pci_dev = pci_dev;
static int __init virtio_pci_init(void)
{
- int err;
-
- virtio_pci_root = root_device_register("virtio-pci");
- if (IS_ERR(virtio_pci_root))
- return PTR_ERR(virtio_pci_root);
-
- err = pci_register_driver(&virtio_pci_driver);
- if (err)
- root_device_unregister(virtio_pci_root);
-
- return err;
+ return pci_register_driver(&virtio_pci_driver);
}
module_init(virtio_pci_init);
static void __exit virtio_pci_exit(void)
{
pci_unregister_driver(&virtio_pci_driver);
- root_device_unregister(virtio_pci_root);
}
module_exit(virtio_pci_exit);
/* get interface & functional clock objects */
hdq_data->hdq_ick = clk_get(&pdev->dev, "ick");
- hdq_data->hdq_fck = clk_get(&pdev->dev, "fck");
+ if (IS_ERR(hdq_data->hdq_ick)) {
+ dev_dbg(&pdev->dev, "Can't get HDQ ick clock object\n");
+ ret = PTR_ERR(hdq_data->hdq_ick);
+ goto err_ick;
+ }
- if (IS_ERR(hdq_data->hdq_ick) || IS_ERR(hdq_data->hdq_fck)) {
- dev_dbg(&pdev->dev, "Can't get HDQ clock objects\n");
- if (IS_ERR(hdq_data->hdq_ick)) {
- ret = PTR_ERR(hdq_data->hdq_ick);
- goto err_clk;
- }
- if (IS_ERR(hdq_data->hdq_fck)) {
- ret = PTR_ERR(hdq_data->hdq_fck);
- clk_put(hdq_data->hdq_ick);
- goto err_clk;
- }
+ hdq_data->hdq_fck = clk_get(&pdev->dev, "fck");
+ if (IS_ERR(hdq_data->hdq_fck)) {
+ dev_dbg(&pdev->dev, "Can't get HDQ fck clock object\n");
+ ret = PTR_ERR(hdq_data->hdq_fck);
+ goto err_fck;
}
hdq_data->hdq_usecount = 0;
clk_disable(hdq_data->hdq_ick);
err_intfclk:
- clk_put(hdq_data->hdq_ick);
clk_put(hdq_data->hdq_fck);
-err_clk:
+err_fck:
+ clk_put(hdq_data->hdq_ick);
+
+err_ick:
iounmap(hdq_data->hdq_base);
err_ioremap:
# M68K Architecture
-config M548x_WATCHDOG
- tristate "MCF548x watchdog support"
+config M54xx_WATCHDOG
+ tristate "MCF54xx watchdog support"
depends on M548x
help
To compile this driver as a module, choose M here: the
- module will be called m548x_wdt.
+ module will be called m54xx_wdt.
# MIPS Architecture
# M32R Architecture
# M68K Architecture
-obj-$(CONFIG_M548x_WATCHDOG) += m548x_wdt.o
+obj-$(CONFIG_M54xx_WATCHDOG) += m54xx_wdt.o
# MIPS Architecture
obj-$(CONFIG_ATH79_WDT) += ath79_wdt.o
/*
- * drivers/watchdog/m548x_wdt.c
+ * drivers/watchdog/m54xx_wdt.c
*
- * Watchdog driver for ColdFire MCF548x processors
+ * Watchdog driver for ColdFire MCF547x & MCF548x processors
* Copyright 2010 (c) Philippe De Muyter <phdm@macqel.be>
*
* Adapted from the IXP4xx watchdog driver, which carries these notices:
#include <linux/uaccess.h>
#include <asm/coldfire.h>
-#include <asm/m548xsim.h>
-#include <asm/m548xgpt.h>
+#include <asm/m54xxsim.h>
+#include <asm/m54xxgpt.h>
static int nowayout = WATCHDOG_NOWAYOUT;
static unsigned int heartbeat = 30; /* (secs) Default is 0.5 minute */
__raw_writel(gms0, MCF_MBAR + MCF_GPT_GMS0);
}
-static int m548x_wdt_open(struct inode *inode, struct file *file)
+static int m54xx_wdt_open(struct inode *inode, struct file *file)
{
if (test_and_set_bit(WDT_IN_USE, &wdt_status))
return -EBUSY;
return nonseekable_open(inode, file);
}
-static ssize_t m548x_wdt_write(struct file *file, const char *data,
+static ssize_t m54xx_wdt_write(struct file *file, const char *data,
size_t len, loff_t *ppos)
{
if (len) {
static const struct watchdog_info ident = {
.options = WDIOF_MAGICCLOSE | WDIOF_SETTIMEOUT |
WDIOF_KEEPALIVEPING,
- .identity = "Coldfire M548x Watchdog",
+ .identity = "Coldfire M54xx Watchdog",
};
-static long m548x_wdt_ioctl(struct file *file, unsigned int cmd,
+static long m54xx_wdt_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
int ret = -ENOTTY;
return ret;
}
-static int m548x_wdt_release(struct inode *inode, struct file *file)
+static int m54xx_wdt_release(struct inode *inode, struct file *file)
{
if (test_bit(WDT_OK_TO_CLOSE, &wdt_status))
wdt_disable();
}
-static const struct file_operations m548x_wdt_fops = {
+static const struct file_operations m54xx_wdt_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
- .write = m548x_wdt_write,
- .unlocked_ioctl = m548x_wdt_ioctl,
- .open = m548x_wdt_open,
- .release = m548x_wdt_release,
+ .write = m54xx_wdt_write,
+ .unlocked_ioctl = m54xx_wdt_ioctl,
+ .open = m54xx_wdt_open,
+ .release = m54xx_wdt_release,
};
-static struct miscdevice m548x_wdt_miscdev = {
+static struct miscdevice m54xx_wdt_miscdev = {
.minor = WATCHDOG_MINOR,
.name = "watchdog",
- .fops = &m548x_wdt_fops,
+ .fops = &m54xx_wdt_fops,
};
-static int __init m548x_wdt_init(void)
+static int __init m54xx_wdt_init(void)
{
if (!request_mem_region(MCF_MBAR + MCF_GPT_GCIR0, 4,
- "Coldfire M548x Watchdog")) {
+ "Coldfire M54xx Watchdog")) {
printk(KERN_WARNING
- "Coldfire M548x Watchdog : I/O region busy\n");
+ "Coldfire M54xx Watchdog : I/O region busy\n");
return -EBUSY;
}
printk(KERN_INFO "ColdFire watchdog driver is loaded.\n");
- return misc_register(&m548x_wdt_miscdev);
+ return misc_register(&m54xx_wdt_miscdev);
}
-static void __exit m548x_wdt_exit(void)
+static void __exit m54xx_wdt_exit(void)
{
- misc_deregister(&m548x_wdt_miscdev);
+ misc_deregister(&m54xx_wdt_miscdev);
release_mem_region(MCF_MBAR + MCF_GPT_GCIR0, 4);
}
-module_init(m548x_wdt_init);
-module_exit(m548x_wdt_exit);
+module_init(m54xx_wdt_init);
+module_exit(m54xx_wdt_exit);
MODULE_AUTHOR("Philippe De Muyter <phdm@macqel.be>");
-MODULE_DESCRIPTION("Coldfire M548x Watchdog");
+MODULE_DESCRIPTION("Coldfire M54xx Watchdog");
module_param(heartbeat, int, 0);
MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 30s)");
#ifdef CONFIG_PM_SLEEP
static int xen_hvm_suspend(void *data)
{
+ int err;
struct sched_shutdown r = { .reason = SHUTDOWN_suspend };
int *cancelled = data;
BUG_ON(!irqs_disabled());
+ err = sysdev_suspend(PMSG_SUSPEND);
+ if (err) {
+ printk(KERN_ERR "xen_hvm_suspend: sysdev_suspend failed: %d\n",
+ err);
+ return err;
+ }
+
*cancelled = HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);
xen_hvm_post_suspend(*cancelled);
xen_timer_resume();
}
+ sysdev_resume();
+
return 0;
}
int ret;
mutex_lock(&u->reply_mutex);
+again:
while (list_empty(&u->read_buffers)) {
mutex_unlock(&u->reply_mutex);
if (filp->f_flags & O_NONBLOCK)
i += sz - ret;
rb->cons += sz - ret;
- if (ret != sz) {
+ if (ret != 0) {
if (i == 0)
i = -EFAULT;
goto out;
struct read_buffer, list);
}
}
+ if (i == 0)
+ goto again;
out:
mutex_unlock(&u->reply_mutex);
mutex_lock(&u->reply_mutex);
rc = queue_reply(&u->read_buffers, &reply, sizeof(reply));
+ wake_up(&u->read_waitq);
mutex_unlock(&u->reply_mutex);
}
ret = copy_from_user(u->u.buffer + u->len, ubuf, len);
- if (ret == len) {
+ if (ret != 0) {
rc = -EFAULT;
goto out;
}
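
copy_from_user() returns the number of bytes that could not be copied, so any non-zero return signals a (possibly partial) fault; the old `ret == len` test only caught the case where nothing was copied at all. A small illustration of why checking for non-zero is the safer convention (copy_bytes is a made-up stand-in for copy_from_user):

    #include <stddef.h>
    #include <stdio.h>

    /* Stand-in that "fails" partway through, like a fault on the second page. */
    static size_t copy_bytes(char *dst, const char *src, size_t len)
    {
            size_t copied = len / 2;        /* pretend only half was copyable */
            for (size_t i = 0; i < copied; i++)
                    dst[i] = src[i];
            return len - copied;            /* bytes NOT copied, like copy_from_user */
    }

    int main(void)
    {
            char dst[8];
            size_t ret = copy_bytes(dst, "abcdefgh", sizeof(dst));

            if (ret != 0)                   /* correct: any shortfall is an error */
                    puts("-EFAULT");
            if (ret == sizeof(dst))         /* old check: misses partial copies */
                    puts("never reached for a half copy");
            return 0;
    }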
msg_type = u->u.msg.type;
switch (msg_type) {
- case XS_TRANSACTION_START:
- case XS_TRANSACTION_END:
- case XS_DIRECTORY:
- case XS_READ:
- case XS_GET_PERMS:
- case XS_RELEASE:
- case XS_GET_DOMAIN_PATH:
- case XS_WRITE:
- case XS_MKDIR:
- case XS_RM:
- case XS_SET_PERMS:
- /* Send out a transaction */
- ret = xenbus_write_transaction(msg_type, u);
- break;
-
case XS_WATCH:
case XS_UNWATCH:
/* (Un)Ask for some path to be watched for changes */
break;
default:
- ret = -EINVAL;
+ /* Send out a transaction */
+ ret = xenbus_write_transaction(msg_type, u);
break;
}
if (ret != 0)
struct xenbus_file_priv *u = filp->private_data;
struct xenbus_transaction_holder *trans, *tmp;
struct watch_adapter *watch, *tmp_watch;
+ struct read_buffer *rb, *tmp_rb;
/*
* No need for locking here because there are no other users,
free_watch_adapter(watch);
}
+ list_for_each_entry_safe(rb, tmp_rb, &u->read_buffers, list) {
+ list_del(&rb->list);
+ kfree(rb);
+ }
kfree(u);
return 0;
tristate
config FILE_LOCKING
- bool "Enable POSIX file locking API" if EMBEDDED
+ bool "Enable POSIX file locking API" if EXPERT
default y
help
This option enables standard file locking support, required
res = __blkdev_get(bdev, mode, 0);
- /* __blkdev_get() may alter read only status, check it afterwards */
- if (!res && (mode & FMODE_WRITE) && bdev_read_only(bdev)) {
- __blkdev_put(bdev, mode, 0);
- res = -EACCES;
- }
-
if (whole) {
/* finish claiming */
mutex_lock(&bdev->bd_mutex);
if (err)
return ERR_PTR(err);
+ if ((mode & FMODE_WRITE) && bdev_read_only(bdev)) {
+ blkdev_put(bdev, mode);
+ return ERR_PTR(-EACCES);
+ }
+
return bdev;
}
EXPORT_SYMBOL(blkdev_get_by_path);
char *value = NULL;
struct posix_acl *acl;
+ if (!IS_POSIXACL(inode))
+ return NULL;
+
acl = get_cached_acl(inode, type);
if (acl != ACL_NOT_CACHED)
return acl;
struct posix_acl *acl;
int ret = 0;
+ if (!IS_POSIXACL(dentry->d_inode))
+ return -EOPNOTSUPP;
+
acl = btrfs_get_acl(dentry->d_inode, type);
if (IS_ERR(acl))
u64 em_len;
u64 em_start;
struct extent_map *em;
- int ret;
+ int ret = -ENOMEM;
u32 *sums;
tree = &BTRFS_I(inode)->io_tree;
compressed_len = em->block_len;
cb = kmalloc(compressed_bio_size(root, compressed_len), GFP_NOFS);
+ if (!cb)
+ goto out;
+
atomic_set(&cb->pending_bios, 0);
cb->errors = 0;
cb->inode = inode;
nr_pages = (compressed_len + PAGE_CACHE_SIZE - 1) /
PAGE_CACHE_SIZE;
- cb->compressed_pages = kmalloc(sizeof(struct page *) * nr_pages,
+ cb->compressed_pages = kzalloc(sizeof(struct page *) * nr_pages,
GFP_NOFS);
+ if (!cb->compressed_pages)
+ goto fail1;
+
bdev = BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev;
for (page_index = 0; page_index < nr_pages; page_index++) {
cb->compressed_pages[page_index] = alloc_page(GFP_NOFS |
__GFP_HIGHMEM);
+ if (!cb->compressed_pages[page_index])
+ goto fail2;
}
cb->nr_pages = nr_pages;
cb->len = uncompressed_len;
comp_bio = compressed_bio_alloc(bdev, cur_disk_byte, GFP_NOFS);
+ if (!comp_bio)
+ goto fail2;
comp_bio->bi_private = cb;
comp_bio->bi_end_io = end_compressed_bio_read;
atomic_inc(&cb->pending_bios);
bio_put(comp_bio);
return 0;
+
+fail2:
+ for (page_index = 0; page_index < nr_pages; page_index++)
+ free_page((unsigned long)cb->compressed_pages[page_index]);
+
+ kfree(cb->compressed_pages);
+fail1:
+ kfree(cb);
+out:
+ free_extent_map(em);
+ return ret;
}
static struct list_head comp_idle_workspace[BTRFS_COMPRESS_TYPES];
return ret;
}
-void __exit btrfs_exit_compress(void)
+void btrfs_exit_compress(void)
{
free_workspaces();
}
tree = &BTRFS_I(page->mapping->host)->io_tree;
- if (page->private == EXTENT_PAGE_PRIVATE)
+ if (page->private == EXTENT_PAGE_PRIVATE) {
+ WARN_ON(1);
goto out;
- if (!page->private)
+ }
+ if (!page->private) {
+ WARN_ON(1);
goto out;
+ }
len = page->private >> 2;
WARN_ON(len == 0);
spin_unlock(&root->fs_info->new_trans_lock);
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
if (transid == trans->transid) {
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
up_write(&root->fs_info->cleanup_work_sem);
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
/* run commit again to drop the original snapshot */
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
btrfs_commit_transaction(trans, root);
ret = btrfs_write_and_wait_transaction(NULL, root);
BUG_ON(ret);
kfree(fs_info->chunk_root);
kfree(fs_info->dev_root);
kfree(fs_info->csum_root);
+ kfree(fs_info);
+
return 0;
}
int ret;
path = btrfs_alloc_path();
+ if (!path)
+ return ERR_PTR(-ENOMEM);
if (dir->i_ino == BTRFS_FIRST_FREE_OBJECTID) {
key.objectid = root->root_key.objectid;
if (!path)
return -ENOMEM;
- exclude_super_stripes(extent_root, block_group);
- spin_lock(&block_group->space_info->lock);
- block_group->space_info->bytes_readonly += block_group->bytes_super;
- spin_unlock(&block_group->space_info->lock);
-
last = max_t(u64, block_group->key.objectid, BTRFS_SUPER_INFO_OFFSET);
/*
cache->cached = BTRFS_CACHE_NO;
}
spin_unlock(&cache->lock);
- if (ret == 1)
+ if (ret == 1) {
+ free_excluded_extents(fs_info->extent_root, cache);
return 0;
+ }
}
if (load_cache_only)
u64 reserved;
u64 max_reclaim;
u64 reclaimed = 0;
+ long time_left;
int pause = 1;
int nr_pages = (2 * 1024 * 1024) >> PAGE_CACHE_SHIFT;
+ int loops = 0;
block_rsv = &root->fs_info->delalloc_block_rsv;
space_info = block_rsv->space_info;
max_reclaim = min(reserved, to_reclaim);
- while (1) {
+ while (loops < 1024) {
/* have the flusher threads jump in and do some IO */
smp_mb();
nr_pages = min_t(unsigned long, nr_pages,
writeback_inodes_sb_nr_if_idle(root->fs_info->sb, nr_pages);
spin_lock(&space_info->lock);
- if (reserved > space_info->bytes_reserved)
+ if (reserved > space_info->bytes_reserved) {
+ loops = 0;
reclaimed += reserved - space_info->bytes_reserved;
+ } else {
+ loops++;
+ }
reserved = space_info->bytes_reserved;
spin_unlock(&space_info->lock);
return -EAGAIN;
__set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(pause);
+ time_left = schedule_timeout(pause);
+
+ /* We were interrupted, exit */
+ if (time_left)
+ break;
+
pause <<= 1;
if (pause > HZ / 10)
pause = HZ / 10;
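
schedule_timeout() returns the number of jiffies left when the task is woken before the timeout expires, so a non-zero time_left here means the sleep was interrupted and the loop bails out; the new loops counter additionally bounds the retries when no space is being reclaimed. A rough sketch of that return-value convention, with a made-up sleep_for() standing in for schedule_timeout():

    #include <stdio.h>

    /* Stand-in for schedule_timeout(): returns jiffies left if woken early. */
    static long sleep_for(long timeout, long woken_after)
    {
            return woken_after < timeout ? timeout - woken_after : 0;
    }

    int main(void)
    {
            long time_left = sleep_for(8 /* jiffies */, 3 /* woken after 3 */);

            if (time_left)  /* woken before the timeout: stop retrying */
                    puts("interrupted, exit the reclaim loop");
            else
                    puts("full timeout elapsed, retry");
            return 0;
    }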
if (num_bytes > 0) {
if (dest) {
- block_rsv_add_bytes(dest, num_bytes, 0);
- } else {
+ spin_lock(&dest->lock);
+ if (!dest->full) {
+ u64 bytes_to_add;
+
+ bytes_to_add = dest->size - dest->reserved;
+ bytes_to_add = min(num_bytes, bytes_to_add);
+ dest->reserved += bytes_to_add;
+ if (dest->reserved >= dest->size)
+ dest->full = 1;
+ num_bytes -= bytes_to_add;
+ }
+ spin_unlock(&dest->lock);
+ }
+ if (num_bytes) {
spin_lock(&space_info->lock);
space_info->bytes_reserved -= num_bytes;
spin_unlock(&space_info->lock);
num_bytes = ALIGN(num_bytes, root->sectorsize);
atomic_dec(&BTRFS_I(inode)->outstanding_extents);
+ WARN_ON(atomic_read(&BTRFS_I(inode)->outstanding_extents) < 0);
spin_lock(&BTRFS_I(inode)->accounting_lock);
nr_extents = atomic_read(&BTRFS_I(inode)->outstanding_extents);
struct btrfs_root *root, u32 blocksize)
{
struct btrfs_block_rsv *block_rsv;
+ struct btrfs_block_rsv *global_rsv = &root->fs_info->global_block_rsv;
int ret;
block_rsv = get_block_rsv(trans, root);
if (block_rsv->size == 0) {
ret = reserve_metadata_bytes(trans, root, block_rsv,
blocksize, 0);
- if (ret)
+ /*
+ * If we couldn't reserve metadata bytes try and use some from
+ * the global reserve.
+ */
+ if (ret && block_rsv != global_rsv) {
+ ret = block_rsv_use_bytes(global_rsv, blocksize);
+ if (!ret)
+ return global_rsv;
+ return ERR_PTR(ret);
+ } else if (ret) {
return ERR_PTR(ret);
+ }
return block_rsv;
}
ret = block_rsv_use_bytes(block_rsv, blocksize);
if (!ret)
return block_rsv;
+ if (ret) {
+ WARN_ON(1);
+ ret = reserve_metadata_bytes(trans, root, block_rsv, blocksize,
+ 0);
+ if (!ret) {
+ spin_lock(&block_rsv->lock);
+ block_rsv->size += blocksize;
+ spin_unlock(&block_rsv->lock);
+ return block_rsv;
+ } else if (ret && block_rsv != global_rsv) {
+ ret = block_rsv_use_bytes(global_rsv, blocksize);
+ if (!ret)
+ return global_rsv;
+ }
+ }
return ERR_PTR(-ENOSPC);
}
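
The rework above gives block reservations a fallback path: if the per-root reserve cannot supply blocksize bytes, the allocation tries the global reserve before giving up with -ENOSPC. A simplified sketch of that two-tier fallback (plain C, illustrative types rather than the btrfs structures):

    #include <stdio.h>

    struct rsv { long reserved; };

    /* Take bytes from a reserve if it has them; non-zero means "not enough". */
    static int rsv_use_bytes(struct rsv *r, long bytes)
    {
            if (r->reserved < bytes)
                    return -1;
            r->reserved -= bytes;
            return 0;
    }

    static struct rsv *pick_rsv(struct rsv *local, struct rsv *global, long bytes)
    {
            if (!rsv_use_bytes(local, bytes))
                    return local;
            if (!rsv_use_bytes(global, bytes))      /* fall back to the global pool */
                    return global;
            return NULL;                            /* caller sees -ENOSPC */
    }

    int main(void)
    {
            struct rsv local = { .reserved = 0 }, global = { .reserved = 4096 };
            struct rsv *used = pick_rsv(&local, &global, 4096);

            puts(used == &global ? "fell back to global reserve" : "local reserve");
            return 0;
    }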
BUG_ON(!wc);
trans = btrfs_start_transaction(tree_root, 0);
+ BUG_ON(IS_ERR(trans));
+
if (block_rsv)
trans->block_rsv = block_rsv;
btrfs_end_transaction_throttle(trans, tree_root);
trans = btrfs_start_transaction(tree_root, 0);
+ BUG_ON(IS_ERR(trans));
if (block_rsv)
trans->block_rsv = block_rsv;
}
int ret = 0;
ra = kzalloc(sizeof(*ra), GFP_NOFS);
+ if (!ra)
+ return -ENOMEM;
mutex_lock(&inode->i_mutex);
first_index = start >> PAGE_CACHE_SHIFT;
u64 end = start + extent_key->offset - 1;
em = alloc_extent_map(GFP_NOFS);
- BUG_ON(!em || IS_ERR(em));
+ BUG_ON(!em);
em->start = start;
em->len = extent_key->offset;
BUG_ON(reloc_root->commit_root != NULL);
while (1) {
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
mutex_lock(&root->fs_info->drop_mutex);
ret = btrfs_drop_snapshot(trans, reloc_root);
if (found) {
trans = btrfs_start_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_commit_transaction(trans, root);
BUG_ON(ret);
}
trans = btrfs_start_transaction(extent_root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
if (extent_key->objectid == 0) {
ret = del_extent_zero(trans, extent_root, path, extent_key);
if (block_group->cached == BTRFS_CACHE_STARTED)
wait_block_group_cache_done(block_group);
+ /*
+ * We haven't cached this block group, which means we could
+ * possibly have excluded extents on this block group.
+ */
+ if (block_group->cached == BTRFS_CACHE_NO)
+ free_excluded_extents(info->extent_root, block_group);
+
btrfs_remove_free_space_cache(block_group);
btrfs_put_block_group(block_group);
cache->flags = btrfs_block_group_flags(&cache->item);
cache->sectorsize = root->sectorsize;
+ /*
+ * We need to exclude the super stripes now so that the space
+ * info has super bytes accounted for, otherwise we'll think
+ * we have more space than we actually do.
+ */
+ exclude_super_stripes(root, cache);
+
/*
* check for two cases, either we are full, and therefore
* don't need to bother with the caching work since we won't
* time, particularly in the full case.
*/
if (found_key.offset == btrfs_block_group_used(&cache->item)) {
- exclude_super_stripes(root, cache);
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
free_excluded_extents(root, cache);
} else if (btrfs_block_group_used(&cache->item) == 0) {
- exclude_super_stripes(root, cache);
cache->last_byte_to_unpin = (u64)-1;
cache->cached = BTRFS_CACHE_FINISHED;
add_new_free_space(cache, root->fs_info,
bio_get(bio);
if (tree->ops && tree->ops->submit_bio_hook)
- tree->ops->submit_bio_hook(page->mapping->host, rw, bio,
+ ret = tree->ops->submit_bio_hook(page->mapping->host, rw, bio,
mirror_num, bio_flags, start);
else
submit_bio(rw, bio);
nr = bio_get_nr_vecs(bdev);
bio = btrfs_bio_alloc(bdev, sector, nr, GFP_NOFS | __GFP_HIGH);
+ if (!bio)
+ return -ENOMEM;
bio_add_page(bio, page, page_size, offset);
bio->bi_end_io = end_io_func;
static void set_page_extent_head(struct page *page, unsigned long len)
{
+ WARN_ON(!PagePrivate(page));
set_page_private(page, EXTENT_PAGE_PRIVATE_FIRST_PAGE | len << 2);
}
ret = __extent_read_full_page(tree, page, get_extent, &bio, 0,
&bio_flags);
if (bio)
- submit_one_bio(READ, bio, 0, bio_flags);
+ ret = submit_one_bio(READ, bio, 0, bio_flags);
return ret;
}
* at this point we can safely clear everything except the
* locked bit and the nodatasum bit
*/
- clear_extent_bit(tree, start, end,
+ ret = clear_extent_bit(tree, start, end,
~(EXTENT_LOCKED | EXTENT_NODATASUM),
0, 0, NULL, mask);
+
+ /* if clear_extent_bit failed for enomem reasons,
+ * we can't allow the release to continue.
+ */
+ if (ret < 0)
+ ret = 0;
+ else
+ ret = 1;
}
return ret;
}
}
if (!PageUptodate(p))
uptodate = 0;
- unlock_page(p);
+
+ /*
+ * see below about how we avoid a nasty race with release page
+ * and why we unlock later
+ */
+ if (i != 0)
+ unlock_page(p);
}
if (uptodate)
set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
atomic_inc(&eb->refs);
spin_unlock(&tree->buffer_lock);
radix_tree_preload_end();
+
+ /*
+ * there is a race where release page may have
+ * tried to find this extent buffer in the radix
+ * but failed. It will tell the VM it is safe to
+	 * reclaim the page, and it will clear the page private bit.
+ * We must make sure to set the page private bit properly
+ * after the extent buffer is in the radix tree so
+ * it doesn't get lost
+ */
+ set_page_extent_mapped(eb->first_page);
+ set_page_extent_head(eb->first_page, eb->len);
+ if (!page0)
+ unlock_page(eb->first_page);
return eb;
free_eb:
+ if (eb->first_page && !page0)
+ unlock_page(eb->first_page);
+
if (!atomic_dec_and_test(&eb->refs))
return exists;
btrfs_release_extent_buffer(eb);
continue;
lock_page(page);
+ WARN_ON(!PagePrivate(page));
+
+ set_page_extent_mapped(page);
if (i == 0)
set_page_extent_head(page, eb->len);
- else
- set_page_private(page, EXTENT_PAGE_PRIVATE);
clear_page_dirty_for_io(page);
spin_lock_irq(&page->mapping->tree_lock);
for (i = start_i; i < num_pages; i++) {
page = extent_buffer_page(eb, i);
+
+ WARN_ON(!PagePrivate(page));
+
+ set_page_extent_mapped(page);
+ if (i == 0)
+ set_page_extent_head(page, eb->len);
+
if (inc_all_pages)
page_cache_get(page);
if (!PageUptodate(page)) {
{
struct extent_map *em;
em = kmem_cache_alloc(extent_map_cache, mask);
- if (!em || IS_ERR(em))
- return em;
+ if (!em)
+ return NULL;
em->in_tree = 0;
em->flags = 0;
em->compress_type = BTRFS_COMPRESS_NONE;
root = root->fs_info->csum_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
while (1) {
key.objectid = BTRFS_EXTENT_CSUM_OBJECTID;
if (path->slots[0] == 0)
goto out;
path->slots[0]--;
+ } else if (ret < 0) {
+ goto out;
}
+
leaf = path->nodes[0];
btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
split = alloc_extent_map(GFP_NOFS);
if (!split2)
split2 = alloc_extent_map(GFP_NOFS);
+ BUG_ON(!split || !split2);
write_lock(&em_tree->lock);
em = lookup_extent_mapping(em_tree, start, len);
for (i = 0; i < num_pages; i++) {
pages[i] = grab_cache_page(inode->i_mapping, index + i);
if (!pages[i]) {
- err = -ENOMEM;
- BUG_ON(1);
+ int c;
+ for (c = i - 1; c >= 0; c--) {
+ unlock_page(pages[c]);
+ page_cache_release(pages[c]);
+ }
+ return -ENOMEM;
}
wait_on_page_writeback(pages[i]);
}
PAGE_CACHE_SIZE, PAGE_CACHE_SIZE /
(sizeof(struct page *)));
pages = kmalloc(nrptrs * sizeof(struct page *), GFP_KERNEL);
+ if (!pages) {
+ ret = -ENOMEM;
+ goto out;
+ }
/* generic_write_checks can change our pos */
start_pos = pos;
size_t write_bytes = min(iov_iter_count(&i),
nrptrs * (size_t)PAGE_CACHE_SIZE -
offset);
- size_t num_pages = (write_bytes + PAGE_CACHE_SIZE - 1) >>
- PAGE_CACHE_SHIFT;
+ size_t num_pages = (write_bytes + offset +
+ PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
WARN_ON(num_pages > nrptrs);
memset(pages, 0, sizeof(struct page *) * nrptrs);
copied = btrfs_copy_from_user(pos, num_pages,
write_bytes, pages, &i);
- dirty_pages = (copied + PAGE_CACHE_SIZE - 1) >>
- PAGE_CACHE_SHIFT;
+ dirty_pages = (copied + offset + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT;
if (num_pages > dirty_pages) {
if (copied > 0)
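
The point of adding `offset` to both calculations is that a write does not necessarily start at a page boundary, so the number of pages touched depends on where in the first page it begins: with 4 KiB pages, a 2-byte write starting at offset 4095 spans two pages, but the old `(write_bytes + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT` rounds that to one. A quick check of the corrected formula:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Pages spanned by a write of `bytes` starting `offset` bytes into a page. */
    static unsigned long pages_spanned(unsigned long offset, unsigned long bytes)
    {
            return (bytes + offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
    }

    int main(void)
    {
            /* 2 bytes starting at the last byte of a page really touch 2 pages */
            printf("new: %lu\n", pages_spanned(4095, 2));             /* 2 */
            /* the old formula, without offset, said only 1 */
            printf("old: %lu\n", (2UL + PAGE_SIZE - 1) >> PAGE_SHIFT); /* 1 */
            return 0;
    }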
return entry;
}
-static void unlink_free_space(struct btrfs_block_group_cache *block_group,
- struct btrfs_free_space *info)
+static inline void
+__unlink_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info)
{
rb_erase(&info->offset_index, &block_group->free_space_offset);
block_group->free_extents--;
+}
+
+static void unlink_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info)
+{
+ __unlink_free_space(block_group, info);
block_group->free_space -= info->bytes;
}
u64 max_bytes;
u64 bitmap_bytes;
u64 extent_bytes;
+ u64 size = block_group->key.offset;
/*
* The goal is to keep the total amount of memory used per 1gb of space
* at or below 32k, so we need to adjust how much memory we allow to be
* used by extent based free space tracking
*/
- max_bytes = MAX_CACHE_BYTES_PER_GIG *
- (div64_u64(block_group->key.offset, 1024 * 1024 * 1024));
+ if (size < 1024 * 1024 * 1024)
+ max_bytes = MAX_CACHE_BYTES_PER_GIG;
+ else
+ max_bytes = MAX_CACHE_BYTES_PER_GIG *
+ div64_u64(size, 1024 * 1024 * 1024);
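
The reason for special-casing block groups smaller than 1 GiB is integer truncation: div64_u64(size, 1 GiB) is 0 for any size below 1 GiB, which would have made max_bytes 0 and starved the bitmap cache entirely. A quick arithmetic check, taking MAX_CACHE_BYTES_PER_GIG as 32 KiB per the comment above (an assumption for this sketch):

    #include <stdint.h>
    #include <stdio.h>

    #define GIG (1024ULL * 1024 * 1024)
    #define MAX_CACHE_BYTES_PER_GIG (32ULL * 1024)  /* assumed from the comment */

    static uint64_t max_cache_bytes(uint64_t size)
    {
            if (size < GIG)                 /* clamp small block groups */
                    return MAX_CACHE_BYTES_PER_GIG;
            return MAX_CACHE_BYTES_PER_GIG * (size / GIG);
    }

    int main(void)
    {
            uint64_t small = 256ULL * 1024 * 1024;  /* 256 MiB block group */

            /* old formula: 32K * (256M / 1G) == 32K * 0 == 0 bytes of cache */
            printf("old: %llu\n",
                   (unsigned long long)(MAX_CACHE_BYTES_PER_GIG * (small / GIG)));
            printf("new: %llu\n", (unsigned long long)max_cache_bytes(small));
            return 0;
    }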
/*
* we want to account for 1 more bitmap than what we have so we can make
recalculate_thresholds(block_group);
}
+static void free_bitmap(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *bitmap_info)
+{
+ unlink_free_space(block_group, bitmap_info);
+ kfree(bitmap_info->bitmap);
+ kfree(bitmap_info);
+ block_group->total_bitmaps--;
+ recalculate_thresholds(block_group);
+}
+
static noinline int remove_from_bitmap(struct btrfs_block_group_cache *block_group,
struct btrfs_free_space *bitmap_info,
u64 *offset, u64 *bytes)
*/
search_start = *offset;
search_bytes = *bytes;
+ search_bytes = min(search_bytes, end - search_start + 1);
ret = search_bitmap(block_group, bitmap_info, &search_start,
&search_bytes);
BUG_ON(ret < 0 || search_start != *offset);
if (*bytes) {
struct rb_node *next = rb_next(&bitmap_info->offset_index);
- if (!bitmap_info->bytes) {
- unlink_free_space(block_group, bitmap_info);
- kfree(bitmap_info->bitmap);
- kfree(bitmap_info);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ if (!bitmap_info->bytes)
+ free_bitmap(block_group, bitmap_info);
/*
* no entry after this bitmap, but we still have bytes to
return -EAGAIN;
goto again;
- } else if (!bitmap_info->bytes) {
- unlink_free_space(block_group, bitmap_info);
- kfree(bitmap_info->bitmap);
- kfree(bitmap_info);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ } else if (!bitmap_info->bytes)
+ free_bitmap(block_group, bitmap_info);
return 0;
}
return ret;
}
-int btrfs_add_free_space(struct btrfs_block_group_cache *block_group,
- u64 offset, u64 bytes)
+bool try_merge_free_space(struct btrfs_block_group_cache *block_group,
+ struct btrfs_free_space *info, bool update_stat)
{
- struct btrfs_free_space *right_info = NULL;
- struct btrfs_free_space *left_info = NULL;
- struct btrfs_free_space *info = NULL;
- int ret = 0;
-
- info = kzalloc(sizeof(struct btrfs_free_space), GFP_NOFS);
- if (!info)
- return -ENOMEM;
-
- info->offset = offset;
- info->bytes = bytes;
-
- spin_lock(&block_group->tree_lock);
+ struct btrfs_free_space *left_info;
+ struct btrfs_free_space *right_info;
+ bool merged = false;
+ u64 offset = info->offset;
+ u64 bytes = info->bytes;
/*
* first we want to see if there is free space adjacent to the range we
else
left_info = tree_search_offset(block_group, offset - 1, 0, 0);
- /*
- * If there was no extent directly to the left or right of this new
- * extent then we know we're going to have to allocate a new extent, so
- * before we do that see if we need to drop this into a bitmap
- */
- if ((!left_info || left_info->bitmap) &&
- (!right_info || right_info->bitmap)) {
- ret = insert_into_bitmap(block_group, info);
-
- if (ret < 0) {
- goto out;
- } else if (ret) {
- ret = 0;
- goto out;
- }
- }
-
if (right_info && !right_info->bitmap) {
- unlink_free_space(block_group, right_info);
+ if (update_stat)
+ unlink_free_space(block_group, right_info);
+ else
+ __unlink_free_space(block_group, right_info);
info->bytes += right_info->bytes;
kfree(right_info);
+ merged = true;
}
if (left_info && !left_info->bitmap &&
left_info->offset + left_info->bytes == offset) {
- unlink_free_space(block_group, left_info);
+ if (update_stat)
+ unlink_free_space(block_group, left_info);
+ else
+ __unlink_free_space(block_group, left_info);
info->offset = left_info->offset;
info->bytes += left_info->bytes;
kfree(left_info);
+ merged = true;
}
+ return merged;
+}
+
+int btrfs_add_free_space(struct btrfs_block_group_cache *block_group,
+ u64 offset, u64 bytes)
+{
+ struct btrfs_free_space *info;
+ int ret = 0;
+
+ info = kzalloc(sizeof(struct btrfs_free_space), GFP_NOFS);
+ if (!info)
+ return -ENOMEM;
+
+ info->offset = offset;
+ info->bytes = bytes;
+
+ spin_lock(&block_group->tree_lock);
+
+ if (try_merge_free_space(block_group, info, true))
+ goto link;
+
+ /*
+ * If there was no extent directly to the left or right of this new
+ * extent, then we know we're going to have to allocate a new extent,
+ * so before we do that, see if we need to drop this into a bitmap.
+ */
+ ret = insert_into_bitmap(block_group, info);
+ if (ret < 0) {
+ goto out;
+ } else if (ret) {
+ ret = 0;
+ goto out;
+ }
+link:
ret = link_free_space(block_group, info);
if (ret)
kfree(info);
node = rb_next(&entry->offset_index);
rb_erase(&entry->offset_index, &cluster->root);
BUG_ON(entry->bitmap);
+ try_merge_free_space(block_group, entry, false);
tree_insert_offset(&block_group->free_space_offset,
entry->offset, &entry->offset_index, 0);
}
ret = offset;
if (entry->bitmap) {
bitmap_clear_bits(block_group, entry, offset, bytes);
- if (!entry->bytes) {
- unlink_free_space(block_group, entry);
- kfree(entry->bitmap);
- kfree(entry);
- block_group->total_bitmaps--;
- recalculate_thresholds(block_group);
- }
+ if (!entry->bytes)
+ free_bitmap(block_group, entry);
} else {
unlink_free_space(block_group, entry);
entry->offset += bytes;
ret = search_start;
bitmap_clear_bits(block_group, entry, ret, bytes);
+ if (entry->bytes == 0)
+ free_bitmap(block_group, entry);
out:
spin_unlock(&cluster->lock);
spin_unlock(&block_group->tree_lock);
entry->offset += bytes;
entry->bytes -= bytes;
- if (entry->bytes == 0) {
+ if (entry->bytes == 0)
rb_erase(&entry->offset_index, &cluster->root);
- kfree(entry);
- }
break;
}
out:
spin_unlock(&cluster->lock);
+ if (!ret)
+ return 0;
+
+ spin_lock(&block_group->tree_lock);
+
+ block_group->free_space -= bytes;
+ if (entry->bytes == 0) {
+ block_group->free_extents--;
+ kfree(entry);
+ }
+
+ spin_unlock(&block_group->tree_lock);
+
return ret;
}
}
if (start == 0) {
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
GFP_NOFS);
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_reserve_extent(trans, root,
async_extent->compressed_size,
async_extent->compressed_size,
async_extent->ram_size - 1, 0);
em = alloc_extent_map(GFP_NOFS);
+ BUG_ON(!em);
em->start = async_extent->start;
em->len = async_extent->ram_size;
em->orig_start = em->start;
BUG_ON(root == root->fs_info->tree_root);
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
BUG_ON(ret);
em = alloc_extent_map(GFP_NOFS);
+ BUG_ON(!em);
em->start = start;
em->orig_start = em->start;
ram_size = ins.offset;
} else {
trans = btrfs_join_transaction(root, 1);
}
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
cow_start = (u64)-1;
cur_offset = start;
struct extent_map_tree *em_tree;
em_tree = &BTRFS_I(inode)->extent_tree;
em = alloc_extent_map(GFP_NOFS);
+ BUG_ON(!em);
em->start = cur_offset;
em->orig_start = em->start;
em->len = num_bytes;
out_page:
unlock_page(page);
page_cache_release(page);
+ kfree(fixup);
}
/*
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
ret = btrfs_update_inode(trans, root, inode);
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
*/
if (is_bad_inode(inode)) {
trans = btrfs_start_transaction(root, 0);
+ BUG_ON(IS_ERR(trans));
btrfs_orphan_del(trans, inode);
btrfs_end_transaction(trans, root);
iput(inode);
if (root->orphan_block_rsv || root->orphan_item_inserted) {
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_end_transaction(trans, root);
}
path = btrfs_alloc_path();
if (!path) {
ret = -ENOMEM;
- goto err;
+ goto out;
}
path->leave_spinning = 1;
struct extent_buffer *eb;
int level;
u64 refs = 1;
- int uninitialized_var(ret);
for (level = 0; level < BTRFS_MAX_LEVEL; level++) {
+ int ret;
+
if (!path->nodes[level])
break;
eb = path->nodes[level];
if (refs > 1)
return 1;
}
- return ret; /* XXX callers? */
+ return 0;
}
/*
}
srcu_read_unlock(&root->fs_info->subvol_srcu, index);
- if (root != sub_root) {
+ if (!IS_ERR(inode) && root != sub_root) {
down_read(&root->fs_info->cleanup_work_sem);
if (!(inode->i_sb->s_flags & MS_RDONLY))
btrfs_orphan_cleanup(sub_root);
trans = btrfs_join_transaction_nolock(root, 1);
else
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
btrfs_set_trans_block_group(trans, inode);
if (nolock)
ret = btrfs_end_transaction_nolock(trans, root);
return;
trans = btrfs_join_transaction(root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_set_trans_block_group(trans, inode);
ret = btrfs_update_inode(trans, root, inode);
em = NULL;
btrfs_release_path(root, path);
trans = btrfs_join_transaction(root, 1);
+ if (IS_ERR(trans))
+ return ERR_CAST(trans);
goto again;
}
map = kmap(page);
btrfs_drop_extent_cache(inode, start, start + len - 1, 0);
trans = btrfs_join_transaction(root, 0);
- if (!trans)
- return ERR_PTR(-ENOMEM);
+ if (IS_ERR(trans))
+ return ERR_CAST(trans);
trans->block_rsv = &root->fs_info->delalloc_block_rsv;
* while we look for nocow cross refs
*/
trans = btrfs_join_transaction(root, 0);
- if (!trans)
+ if (IS_ERR(trans))
goto must_cow;
if (can_nocow_odirect(trans, inode, start, len) == 1) {
BUG_ON(!ordered);
trans = btrfs_join_transaction(root, 1);
- if (!trans) {
+ if (IS_ERR(trans)) {
err = -ENOMEM;
goto out;
}
trans = btrfs_join_transaction(root, 1);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_update_inode(trans, root, inode);
BUG_ON(ret);
if (new_size > old_size) {
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ goto out_unlock;
+ }
ret = btrfs_grow_device(trans, device, new_size);
btrfs_commit_transaction(trans, root);
} else {
memcpy(&new_key, &key, sizeof(new_key));
new_key.objectid = inode->i_ino;
- new_key.offset = key.offset + destoff - off;
+ if (off <= key.offset)
+ new_key.offset = key.offset + destoff - off;
+ else
+ new_key.offset = destoff;
trans = btrfs_start_transaction(root, 1);
if (IS_ERR(trans)) {
ret = -ENOMEM;
trans = btrfs_start_ioctl_transaction(root, 0);
- if (!trans)
+ if (IS_ERR(trans))
goto out_drop;
file->private_data = trans;
path->leave_spinning = 1;
trans = btrfs_start_transaction(root, 1);
- if (!trans) {
+ if (IS_ERR(trans)) {
btrfs_free_path(path);
- return -ENOMEM;
+ return PTR_ERR(trans);
}
dir_id = btrfs_super_root_dir(&root->fs_info->super_copy);
int num_types = 4;
int alloc_size;
int ret = 0;
- int slot_count = 0;
+ u64 slot_count = 0;
int i, c;
if (copy_from_user(&space_args,
goto out;
}
- slot_count = min_t(int, space_args.space_slots, slot_count);
+ slot_count = min_t(u64, space_args.space_slots, slot_count);
alloc_size = sizeof(*dest) * slot_count;
for (i = 0; i < num_types; i++) {
struct btrfs_space_info *tmp;
+ if (!slot_count)
+ break;
+
info = NULL;
rcu_read_lock();
list_for_each_entry_rcu(tmp, &root->fs_info->space_info,
memcpy(dest, &space, sizeof(space));
dest++;
space_args.total_spaces++;
+ slot_count--;
}
+ if (!slot_count)
+ break;
}
up_read(&info->groups_sem);
}
u64 transid;
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
transid = trans->transid;
btrfs_commit_transaction_async(trans, root, 0);
u64 file_offset)
{
struct rb_root *root = &tree->tree;
- struct rb_node *prev;
+ struct rb_node *prev = NULL;
struct rb_node *ret;
struct btrfs_ordered_extent *entry;
#else
BUG();
#endif
+ break;
case BTRFS_BLOCK_GROUP_ITEM_KEY:
bi = btrfs_item_ptr(l, i,
struct btrfs_block_group_item);
new_node->bytenr = dest->node->start;
new_node->level = node->level;
new_node->lowest = node->lowest;
+ new_node->checked = 1;
new_node->root = dest;
if (!node->lowest) {
while (1) {
trans = btrfs_start_transaction(root, 0);
+ BUG_ON(IS_ERR(trans));
trans->block_rsv = rc->block_rsv;
ret = btrfs_block_rsv_check(trans, root, rc->block_rsv,
}
trans = btrfs_join_transaction(rc->extent_root, 1);
+ if (IS_ERR(trans)) {
+ if (!err)
+ btrfs_block_rsv_release(rc->extent_root,
+ rc->block_rsv, num_bytes);
+ return PTR_ERR(trans);
+ }
if (!err) {
if (num_bytes != rc->merging_rsv_size) {
trans = btrfs_join_transaction(root, 0);
if (IS_ERR(trans)) {
btrfs_free_path(path);
+ ret = PTR_ERR(trans);
goto out;
}
set_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
+ BUG_ON(IS_ERR(trans));
btrfs_commit_transaction(trans, rc->extent_root);
return 0;
}
while (1) {
trans = btrfs_start_transaction(rc->extent_root, 0);
+ BUG_ON(IS_ERR(trans));
if (update_backref_cache(trans, &rc->backref_cache)) {
btrfs_end_transaction(trans, rc->extent_root);
/* get rid of pinned extents */
trans = btrfs_join_transaction(rc->extent_root, 1);
- btrfs_commit_transaction(trans, rc->extent_root);
+ if (IS_ERR(trans))
+ err = PTR_ERR(trans);
+ else
+ btrfs_commit_transaction(trans, rc->extent_root);
out_free:
btrfs_free_block_rsv(rc->extent_root, rc->block_rsv);
btrfs_free_path(path);
int ret;
trans = btrfs_start_transaction(root->fs_info->tree_root, 0);
+ BUG_ON(IS_ERR(trans));
memset(&root->root_item.drop_progress, 0,
sizeof(root->root_item.drop_progress));
set_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
+ if (IS_ERR(trans)) {
+ unset_reloc_control(rc);
+ err = PTR_ERR(trans);
+ goto out_free;
+ }
rc->merge_reloc_tree = 1;
unset_reloc_control(rc);
trans = btrfs_join_transaction(rc->extent_root, 1);
- btrfs_commit_transaction(trans, rc->extent_root);
-out:
+ if (IS_ERR(trans))
+ err = PTR_ERR(trans);
+ else
+ btrfs_commit_transaction(trans, rc->extent_root);
+out_free:
kfree(rc);
+out:
while (!list_empty(&reloc_roots)) {
reloc_root = list_entry(reloc_roots.next,
struct btrfs_root, root_list);
struct btrfs_fs_devices **fs_devices)
{
substring_t args[MAX_OPT_ARGS];
- char *opts, *p;
+ char *opts, *orig, *p;
int error = 0;
int intarg;
opts = kstrdup(options, GFP_KERNEL);
if (!opts)
return -ENOMEM;
+ orig = opts;
while ((p = strsep(&opts, ",")) != NULL) {
int token;
}
out_free_opts:
- kfree(opts);
+ kfree(orig);
out:
/*
* If no subvolume name is specified we use the default one. Allocate
btrfs_wait_ordered_extents(root, 0, 0);
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
ret = btrfs_commit_transaction(trans, root);
return ret;
}
}
btrfs_close_devices(fs_devices);
+ kfree(fs_info);
+ kfree(tree_root);
} else {
char b[BDEVNAME_SIZE];
INIT_DELAYED_WORK(&ac->work, do_async_commit);
ac->root = root;
ac->newtrans = btrfs_join_transaction(root, 0);
+ if (IS_ERR(ac->newtrans)) {
+ int err = PTR_ERR(ac->newtrans);
+ kfree(ac);
+ return err;
+ }
/* take transaction reference */
mutex_lock(&root->fs_info->trans_mutex);
}
dst_copy = kmalloc(item_size, GFP_NOFS);
src_copy = kmalloc(item_size, GFP_NOFS);
+ if (!dst_copy || !src_copy) {
+ btrfs_release_path(root, path);
+ kfree(dst_copy);
+ kfree(src_copy);
+ return -ENOMEM;
+ }
read_extent_buffer(eb, src_copy, src_ptr, item_size);
btrfs_dir_item_key_to_cpu(leaf, di, &location);
name_len = btrfs_dir_name_len(leaf, di);
name = kmalloc(name_len, GFP_NOFS);
+ if (!name)
+ return -ENOMEM;
+
read_extent_buffer(leaf, name, (unsigned long)(di + 1), name_len);
btrfs_release_path(root, path);
int match = 0;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
ret = btrfs_search_slot(NULL, log, key, path, 0, 0);
if (ret != 0)
goto out;
key.offset = (u64)-1;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
while (1) {
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
name_len = btrfs_dir_name_len(eb, di);
name = kmalloc(name_len, GFP_NOFS);
+ if (!name)
+ return -ENOMEM;
+
log_type = btrfs_dir_type(eb, di);
read_extent_buffer(eb, name, (unsigned long)(di + 1),
name_len);
root_owner = btrfs_header_owner(parent);
next = btrfs_find_create_tree_block(root, bytenr, blocksize);
+ if (!next)
+ return -ENOMEM;
if (*level == 1) {
wc->process_func(root, next, wc, ptr_gen);
wait_log_commit(trans, log_root_tree,
log_root_tree->log_transid);
mutex_unlock(&log_root_tree->log_mutex);
+ ret = 0;
goto out;
}
atomic_set(&log_root_tree->log_commit[index2], 1);
smp_mb();
if (waitqueue_active(&root->log_commit_wait[index1]))
wake_up(&root->log_commit_wait[index1]);
- return 0;
+ return ret;
}
static void free_log_tree(struct btrfs_trans_handle *trans,
log = root->log_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
di = btrfs_lookup_dir_item(trans, log, path, dir->i_ino,
name, name_len, -1);
if (IS_ERR(di)) {
ins_data = kmalloc(nr * sizeof(struct btrfs_key) +
nr * sizeof(u32), GFP_NOFS);
+ if (!ins_data)
+ return -ENOMEM;
+
ins_sizes = (u32 *)ins_data;
ins_keys = (struct btrfs_key *)(ins_data + nr * sizeof(u32));
log = root->log_root;
path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
dst_path = btrfs_alloc_path();
+ if (!dst_path) {
+ btrfs_free_path(path);
+ return -ENOMEM;
+ }
min_key.objectid = inode->i_ino;
min_key.type = BTRFS_INODE_ITEM_KEY;
BUG_ON(!path);
trans = btrfs_start_transaction(fs_info->tree_root, 0);
+ BUG_ON(IS_ERR(trans));
wc.trans = trans;
wc.pin = 1;
return -ENOMEM;
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ btrfs_free_path(path);
+ return PTR_ERR(trans);
+ }
key.objectid = BTRFS_DEV_ITEMS_OBJECTID;
key.type = BTRFS_DEV_ITEM_KEY;
key.offset = device->devid;
ret = find_next_devid(root, &device->devid);
if (ret) {
+ kfree(device->name);
kfree(device);
goto error;
}
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ kfree(device->name);
+ kfree(device);
+ ret = PTR_ERR(trans);
+ goto error;
+ }
+
lock_chunks(root);
device->writeable = 1;
return ret;
trans = btrfs_start_transaction(root, 0);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
lock_chunks(root);
BUG_ON(ret);
trans = btrfs_start_transaction(dev_root, 0);
- BUG_ON(!trans);
+ BUG_ON(IS_ERR(trans));
ret = btrfs_grow_device(trans, device, old_size);
BUG_ON(ret);
/* Shrinking succeeded, else we would be at "done". */
trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ goto done;
+ }
+
lock_chunks(root);
device->disk_total_bytes = new_size;
/* NOTE: no side-effects allowed, until we take s_mutex */
revoking = cap->implemented & ~cap->issued;
- if (revoking)
- dout(" mds%d revoking %s\n", cap->mds,
- ceph_cap_string(revoking));
+ dout(" mds%d cap %p issued %s implemented %s revoking %s\n",
+ cap->mds, cap, ceph_cap_string(cap->issued),
+ ceph_cap_string(cap->implemented),
+ ceph_cap_string(revoking));
if (cap == ci->i_auth_cap &&
(cap->issued & CEPH_CAP_FILE_WR)) {
if (cap == ci->i_auth_cap && ci->i_dirty_caps)
flushing = __mark_caps_flushing(inode, session);
+ else
+ flushing = 0;
mds = cap->mds; /* remember mds, so we don't repeat */
sent++;
}
}
+static void kick_flushing_inode_caps(struct ceph_mds_client *mdsc,
+ struct ceph_mds_session *session,
+ struct inode *inode)
+{
+ struct ceph_inode_info *ci = ceph_inode(inode);
+ struct ceph_cap *cap;
+ int delayed = 0;
+
+ spin_lock(&inode->i_lock);
+ cap = ci->i_auth_cap;
+ dout("kick_flushing_inode_caps %p flushing %s flush_seq %lld\n", inode,
+ ceph_cap_string(ci->i_flushing_caps), ci->i_cap_flush_seq);
+ __ceph_flush_snaps(ci, &session, 1);
+ if (ci->i_flushing_caps) {
+ delayed = __send_cap(mdsc, cap, CEPH_CAP_OP_FLUSH,
+ __ceph_caps_used(ci),
+ __ceph_caps_wanted(ci),
+ cap->issued | cap->implemented,
+ ci->i_flushing_caps, NULL);
+ if (delayed) {
+ spin_lock(&inode->i_lock);
+ __cap_delay_requeue(mdsc, ci);
+ spin_unlock(&inode->i_lock);
+ }
+ } else {
+ spin_unlock(&inode->i_lock);
+ }
+}
+
/*
* Take references to capabilities we hold, so that we don't release
ceph_add_cap(inode, session, cap_id, -1,
issued, wanted, seq, mseq, realmino, CEPH_CAP_FLAG_AUTH,
NULL /* no caps context */);
- try_flush_caps(inode, session, NULL);
+ kick_flushing_inode_caps(mdsc, session, inode);
up_read(&mdsc->snap_rwsem);
/* make sure we re-request max_size, if necessary */
case CEPH_CAP_OP_IMPORT:
handle_cap_import(mdsc, inode, h, session,
snaptrace, snaptrace_len);
- ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY,
- session);
+ ceph_check_caps(ceph_inode(inode), 0, session);
goto done_unlocked;
}
}
di->dentry = dentry;
di->lease_session = NULL;
+ di->parent_inode = igrab(dentry->d_parent->d_inode);
dentry->d_fsdata = di;
dentry->d_time = jiffies;
ceph_dentry_lru_add(dentry);
u64 snapid = CEPH_NOSNAP;
if (!IS_ROOT(dentry)) {
- parent_inode = dentry->d_parent->d_inode;
+ parent_inode = di->parent_inode;
if (parent_inode)
snapid = ceph_snap(parent_inode);
}
kmem_cache_free(ceph_dentry_cachep, di);
dentry->d_fsdata = NULL;
}
+ if (parent_inode)
+ iput(parent_inode);
}
static int ceph_snapdir_d_revalidate(struct dentry *dentry,
ci->i_ceph_flags |= CEPH_I_COMPLETE;
ci->i_max_offset = 2;
}
-
- /* it may be better to set st_size in getattr instead? */
- if (ceph_test_mount_opt(ceph_sb_to_client(inode->i_sb), RBYTES))
- inode->i_size = ci->i_rbytes;
break;
default:
pr_err("fill_inode %llx.%llx BAD mode 0%o\n",
else
stat->dev = 0;
if (S_ISDIR(inode->i_mode)) {
- stat->size = ci->i_rbytes;
+ if (ceph_test_mount_opt(ceph_sb_to_client(inode->i_sb),
+ RBYTES))
+ stat->size = ci->i_rbytes;
+ else
+ stat->size = ci->i_files + ci->i_subdirs;
stat->blocks = 0;
stat->blksize = 65536;
}
dout("choose_mds %p %llx.%llx "
"frag %u mds%d (%d/%d)\n",
inode, ceph_vinop(inode),
- frag.frag, frag.mds,
+ frag.frag, mds,
(int)r, frag.ndist);
- return mds;
+ if (ceph_mdsmap_get_state(mdsc->mdsmap, mds) >=
+ CEPH_MDS_STATE_ACTIVE)
+ return mds;
}
/* since this file/dir wasn't known to be
dout("choose_mds %p %llx.%llx "
"frag %u mds%d (auth)\n",
inode, ceph_vinop(inode), frag.frag, mds);
- return mds;
+ if (ceph_mdsmap_get_state(mdsc->mdsmap, mds) >=
+ CEPH_MDS_STATE_ACTIVE)
+ return mds;
}
}
}
if (lastinode)
iput(lastinode);
- dout("queue_realm_cap_snaps %p %llx children\n", realm, realm->ino);
- list_for_each_entry(child, &realm->children, child_item)
- queue_realm_cap_snaps(child);
+ list_for_each_entry(child, &realm->children, child_item) {
+ dout("queue_realm_cap_snaps %p %llx queue child %p %llx\n",
+ realm, realm->ino, child, child->ino);
+ list_del_init(&child->dirty_item);
+ list_add(&child->dirty_item, &realm->dirty_item);
+ }
+ list_del_init(&realm->dirty_item);
dout("queue_realm_cap_snaps %p %llx done\n", realm, realm->ino);
}
* queue cap snaps _after_ we've built the new snap contexts,
* so that i_head_snapc can be set appropriately.
*/
- list_for_each_entry(realm, &dirty_realms, dirty_item) {
+ while (!list_empty(&dirty_realms)) {
+ realm = list_first_entry(&dirty_realms, struct ceph_snap_realm,
+ dirty_item);
queue_realm_cap_snaps(realm);
}
fsopt->rsize = CEPH_MOUNT_RSIZE_DEFAULT;
fsopt->snapdir_name = kstrdup(CEPH_SNAPDIRNAME_DEFAULT, GFP_KERNEL);
+ fsopt->caps_wanted_delay_min = CEPH_CAPS_WANTED_DELAY_MIN_DEFAULT;
+ fsopt->caps_wanted_delay_max = CEPH_CAPS_WANTED_DELAY_MAX_DEFAULT;
fsopt->cap_release_safety = CEPH_CAP_RELEASE_SAFETY_DEFAULT;
fsopt->max_readdir = CEPH_MAX_READDIR_DEFAULT;
fsopt->max_readdir_bytes = CEPH_MAX_READDIR_BYTES_DEFAULT;
struct dentry *dentry;
u64 time;
u64 offset;
+ struct inode *parent_inode;
};
struct ceph_inode_xattrs_info {
struct rb_node **p;
struct rb_node *parent = NULL;
struct ceph_inode_xattr *xattr = NULL;
+ int name_len = strlen(name);
int c;
p = &ci->i_xattrs.index.rb_node;
parent = *p;
xattr = rb_entry(parent, struct ceph_inode_xattr, node);
c = strncmp(name, xattr->name, xattr->name_len);
+ if (c == 0 && name_len > xattr->name_len)
+ c = 1;
if (c < 0)
p = &(*p)->rb_left;
else if (c > 0)
depends on INET
select NLS
select CRYPTO
+ select CRYPTO_MD4
select CRYPTO_MD5
select CRYPTO_HMAC
select CRYPTO_ARC4
cifs-y := cifsfs.o cifssmb.o cifs_debug.o connect.o dir.o file.o inode.o \
link.o misc.o netmisc.o smbdes.o smbencrypt.o transport.o asn1.o \
- md4.o md5.o cifs_unicode.o nterr.o xattr.o cifsencrypt.o \
+ cifs_unicode.o nterr.o xattr.o cifsencrypt.o \
readdir.o ioctl.o sess.o export.o
cifs-$(CONFIG_CIFS_ACL) += cifsacl.o
if oplock (caching token) is granted and held. Note that
direct allows write operations larger than page size
to be sent to the server.
+ strictcache Use for switching on strict cache mode. In this mode the
+ client reads from the cache whenever it holds Oplock Level II;
+ otherwise it reads from the server. All written data is stored
+ in the cache, but if the client does not hold an Exclusive Oplock,
+ it also writes the data to the server.
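+ For example, a mount enabling this mode might look like (the server
+ and share names below are placeholders):
+   mount -t cifs //server/share /mnt -o strictcache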
acl Allow setfacl and getfacl to manage posix ACLs if server
supports them. (default)
noacl Do not allow setfacl and getfacl calls on this mount
spin_lock(&GlobalMid_Lock);
list_for_each(tmp, &server->pending_mid_q) {
mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
- cERROR(1, "State: %d Cmd: %d Pid: %d Tsk: %p Mid %d",
+ cERROR(1, "State: %d Cmd: %d Pid: %d Cbdata: %p Mid %d",
mid_entry->midState,
(int)mid_entry->command,
mid_entry->pid,
- mid_entry->tsk,
+ mid_entry->callback_data,
mid_entry->mid);
#ifdef CONFIG_CIFS_STATS2
cERROR(1, "IsLarge: %d buf: %p time rcv: %ld now: %ld",
mid_entry = list_entry(tmp3, struct mid_q_entry,
qhead);
seq_printf(m, "\tState: %d com: %d pid:"
- " %d tsk: %p mid %d\n",
+ " %d cbdata: %p mid %d\n",
mid_entry->midState,
(int)mid_entry->command,
mid_entry->pid,
- mid_entry->tsk,
+ mid_entry->callback_data,
mid_entry->mid);
}
spin_unlock(&GlobalMid_Lock);
atomic_read(&totSmBufAllocCount));
#endif /* CONFIG_CIFS_STATS2 */
- seq_printf(m, "Operations (MIDs): %d\n", midCount.counter);
+ seq_printf(m, "Operations (MIDs): %d\n", atomic_read(&midCount));
seq_printf(m,
"\n%d session %d share reconnects\n",
tcpSesReconnectCount.counter, tconInfoReconnectCount.counter);
cFYI(1, "in %s", __func__);
BUG_ON(IS_ROOT(mntpt));
- xid = GetXid();
-
/*
* The MSDFS spec states that paths in DFS referral requests and
* responses must be prefixed by a single '\' character instead of
mnt = ERR_PTR(-ENOMEM);
full_path = build_path_from_dentry(mntpt);
if (full_path == NULL)
- goto free_xid;
+ goto cdda_exit;
cifs_sb = CIFS_SB(mntpt->d_inode->i_sb);
tlink = cifs_sb_tlink(cifs_sb);
- mnt = ERR_PTR(-EINVAL);
if (IS_ERR(tlink)) {
mnt = ERR_CAST(tlink);
goto free_full_path;
}
ses = tlink_tcon(tlink)->ses;
+ xid = GetXid();
rc = get_dfs_path(xid, ses, full_path + 1, cifs_sb->local_nls,
&num_referrals, &referrals,
cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);
+ FreeXid(xid);
cifs_put_tlink(tlink);
free_dfs_info_array(referrals, num_referrals);
free_full_path:
kfree(full_path);
-free_xid:
- FreeXid(xid);
+cdda_exit:
cFYI(1, "leaving %s" , __func__);
return mnt;
}
#define CIFS_MOUNT_FSCACHE 0x8000 /* local caching enabled */
#define CIFS_MOUNT_MF_SYMLINKS 0x10000 /* Minshall+French Symlinks enabled */
#define CIFS_MOUNT_MULTIUSER 0x20000 /* multiuser mount */
+#define CIFS_MOUNT_STRICT_IO 0x40000 /* strict cache mode */
struct cifs_sb_info {
struct rb_root tlink_tree;
int charlen, outlen = 0;
int maxwords = maxbytes / 2;
char tmp[NLS_MAX_CHARSET_SIZE];
+ __u16 ftmp;
- for (i = 0; i < maxwords && from[i]; i++) {
- charlen = codepage->uni2char(le16_to_cpu(from[i]), tmp,
- NLS_MAX_CHARSET_SIZE);
+ for (i = 0; i < maxwords; i++) {
+ ftmp = get_unaligned_le16(&from[i]);
+ if (ftmp == 0)
+ break;
+
+ charlen = codepage->uni2char(ftmp, tmp, NLS_MAX_CHARSET_SIZE);
if (charlen > 0)
outlen += charlen;
else
}
/*
- * cifs_mapchar - convert a little-endian char to proper char in codepage
+ * cifs_mapchar - convert a host-endian char to proper char in codepage
* @target - where converted character should be copied
- * @src_char - 2 byte little-endian source character
+ * @src_char - 2 byte host-endian source character
* @cp - codepage to which character should be converted
* @mapchar - should character be mapped according to mapchars mount option?
*
* enough to hold the result of the conversion (at least NLS_MAX_CHARSET_SIZE).
*/
static int
-cifs_mapchar(char *target, const __le16 src_char, const struct nls_table *cp,
+cifs_mapchar(char *target, const __u16 src_char, const struct nls_table *cp,
bool mapchar)
{
int len = 1;
* build_path_from_dentry are modified, as they use slash as
* separator.
*/
- switch (le16_to_cpu(src_char)) {
+ switch (src_char) {
case UNI_COLON:
*target = ':';
break;
return len;
cp_convert:
- len = cp->uni2char(le16_to_cpu(src_char), target,
- NLS_MAX_CHARSET_SIZE);
+ len = cp->uni2char(src_char, target, NLS_MAX_CHARSET_SIZE);
if (len <= 0) {
*target = '?';
len = 1;
int nullsize = nls_nullsize(codepage);
int fromwords = fromlen / 2;
char tmp[NLS_MAX_CHARSET_SIZE];
+ __u16 ftmp;
/*
* because the chars can be of varying widths, we need to take care
*/
safelen = tolen - (NLS_MAX_CHARSET_SIZE + nullsize);
- for (i = 0; i < fromwords && from[i]; i++) {
+ for (i = 0; i < fromwords; i++) {
+ ftmp = get_unaligned_le16(&from[i]);
+ if (ftmp == 0)
+ break;
+
/*
* check to see if converting this character might make the
* conversion bleed into the null terminator
*/
if (outlen >= safelen) {
- charlen = cifs_mapchar(tmp, from[i], codepage, mapchar);
+ charlen = cifs_mapchar(tmp, ftmp, codepage, mapchar);
if ((outlen + charlen) > (tolen - nullsize))
break;
}
/* put converted char into 'to' buffer */
- charlen = cifs_mapchar(&to[outlen], from[i], codepage, mapchar);
+ charlen = cifs_mapchar(&to[outlen], ftmp, codepage, mapchar);
outlen += charlen;
}
{
int charlen;
int i;
- wchar_t *wchar_to = (wchar_t *)to; /* needed to quiet sparse */
+ wchar_t wchar_to; /* needed to quiet sparse */
for (i = 0; len && *from; i++, from += charlen, len -= charlen) {
-
- /* works for 2.4.0 kernel or later */
- charlen = codepage->char2uni(from, len, &wchar_to[i]);
+ charlen = codepage->char2uni(from, len, &wchar_to);
if (charlen < 1) {
- cERROR(1, "strtoUCS: char2uni of %d returned %d",
- (int)*from, charlen);
+ cERROR(1, "strtoUCS: char2uni of 0x%x returned %d",
+ *from, charlen);
/* A question mark */
- to[i] = cpu_to_le16(0x003f);
+ wchar_to = 0x003f;
charlen = 1;
- } else
- to[i] = cpu_to_le16(wchar_to[i]);
-
+ }
+ put_unaligned_le16(wchar_to, &to[i]);
}
- to[i] = 0;
+ put_unaligned_le16(0, &to[i]);
return i;
}
return dst;
}
+/*
+ * Convert 16 bit Unicode pathname to wire format from string in current code
+ * page. Conversion may involve remapping the six characters that are
+ * only legal in a POSIX-like OS (if they are present in the string). Path
+ * names are little endian 16 bit Unicode on the wire.
+ */
+int
+cifsConvertToUCS(__le16 *target, const char *source, int maxlen,
+ const struct nls_table *cp, int mapChars)
+{
+ int i, j, charlen;
+ int len_remaining = maxlen;
+ char src_char;
+ __u16 temp;
+
+ if (!mapChars)
+ return cifs_strtoUCS(target, source, PATH_MAX, cp);
+
+ for (i = 0, j = 0; i < maxlen; j++) {
+ src_char = source[i];
+ switch (src_char) {
+ case 0:
+ put_unaligned_le16(0, &target[j]);
+ goto ctoUCS_out;
+ case ':':
+ temp = UNI_COLON;
+ break;
+ case '*':
+ temp = UNI_ASTERIK;
+ break;
+ case '?':
+ temp = UNI_QUESTION;
+ break;
+ case '<':
+ temp = UNI_LESSTHAN;
+ break;
+ case '>':
+ temp = UNI_GRTRTHAN;
+ break;
+ case '|':
+ temp = UNI_PIPE;
+ break;
+ /*
+ * FIXME: We can not handle remapping backslash (UNI_SLASH)
+ * until all the calls to build_path_from_dentry are modified,
+ * as they use backslash as separator.
+ */
+ default:
+ charlen = cp->char2uni(source+i, len_remaining,
+ &temp);
+ /*
+ * if no match, use question mark, which at least in
+ * some cases serves as wild card
+ */
+ if (charlen < 1) {
+ temp = 0x003f;
+ charlen = 1;
+ }
+ len_remaining -= charlen;
+ /*
+ * character may take more than one byte in the source
+ * string, but will take exactly two bytes in the
+ * target string
+ */
+ i += charlen;
+ continue;
+ }
+ put_unaligned_le16(temp, &target[j]);
+ i++; /* move to next char in source string */
+ len_remaining--;
+ }
+
+ctoUCS_out:
+ return i;
+}
+
;
-/* security id for everyone */
+/* security id for everyone/world system group */
static const struct cifs_sid sid_everyone = {
1, 1, {0, 0, 0, 0, 0, 1}, {0} };
+/* security id for Authenticated Users system group */
+static const struct cifs_sid sid_authusers = {
+ 1, 1, {0, 0, 0, 0, 0, 5}, {11} };
/* group users */
static const struct cifs_sid sid_user = {1, 2 , {0, 0, 0, 0, 0, 5}, {} };
if (num_aces > 0) {
umode_t user_mask = S_IRWXU;
umode_t group_mask = S_IRWXG;
- umode_t other_mask = S_IRWXO;
+ umode_t other_mask = S_IRWXU | S_IRWXG | S_IRWXO;
ppace = kmalloc(num_aces * sizeof(struct cifs_ace *),
GFP_KERNEL);
+ if (!ppace) {
+ cERROR(1, "DACL memory allocation error");
+ return;
+ }
for (i = 0; i < num_aces; ++i) {
ppace[i] = (struct cifs_ace *) (acl_base + acl_size);
ppace[i]->type,
&fattr->cf_mode,
&other_mask);
+ if (compare_sids(&(ppace[i]->sid), &sid_authusers))
+ access_flags_to_mode(ppace[i]->access_req,
+ ppace[i]->type,
+ &fattr->cf_mode,
+ &other_mask);
+
/* memcpy((void *)(&(cifscred->aces[i])),
(void *)ppace[i],
#include "cifspdu.h"
#include "cifsglob.h"
#include "cifs_debug.h"
-#include "md5.h"
#include "cifs_unicode.h"
#include "cifsproto.h"
#include "ntlmssp.h"
/* Note that the smb header signature field on input contains the
sequence number before this function is called */
-extern void mdfour(unsigned char *out, unsigned char *in, int n);
-extern void E_md4hash(const unsigned char *passwd, unsigned char *p16);
-extern void SMBencrypt(unsigned char *passwd, const unsigned char *c8,
- unsigned char *p24);
-
static int cifs_calculate_signature(const struct smb_hdr *cifs_pdu,
struct TCP_Server_Info *server, char *signature)
{
/* first calculate 24 bytes ntlm response and then 16 byte session key */
int setup_ntlm_response(struct cifsSesInfo *ses)
{
+ int rc = 0;
unsigned int temp_len = CIFS_SESS_KEY_SIZE + CIFS_AUTH_RESP_SIZE;
char temp_key[CIFS_SESS_KEY_SIZE];
}
ses->auth_key.len = temp_len;
- SMBNTencrypt(ses->password, ses->server->cryptkey,
+ rc = SMBNTencrypt(ses->password, ses->server->cryptkey,
ses->auth_key.response + CIFS_SESS_KEY_SIZE);
+ if (rc) {
+ cFYI(1, "%s Can't generate NTLM response, error: %d",
+ __func__, rc);
+ return rc;
+ }
- E_md4hash(ses->password, temp_key);
- mdfour(ses->auth_key.response, temp_key, CIFS_SESS_KEY_SIZE);
+ rc = E_md4hash(ses->password, temp_key);
+ if (rc) {
+ cFYI(1, "%s Can't generate NT hash, error: %d", __func__, rc);
+ return rc;
+ }
- return 0;
+ rc = mdfour(ses->auth_key.response, temp_key, CIFS_SESS_KEY_SIZE);
+ if (rc)
+ cFYI(1, "%s Can't generate NTLM session key, error: %d",
+ __func__, rc);
+
+ return rc;
}
#ifdef CONFIG_CIFS_WEAK_PW_HASH
get_random_bytes(sec_key, CIFS_SESS_KEY_SIZE);
tfm_arc4 = crypto_alloc_blkcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
- if (!tfm_arc4 || IS_ERR(tfm_arc4)) {
+ if (IS_ERR(tfm_arc4)) {
+ rc = PTR_ERR(tfm_arc4);
cERROR(1, "could not allocate crypto API arc4\n");
- return PTR_ERR(tfm_arc4);
+ return rc;
}
desc.tfm = tfm_arc4;
unsigned int size;
server->secmech.hmacmd5 = crypto_alloc_shash("hmac(md5)", 0, 0);
- if (!server->secmech.hmacmd5 ||
- IS_ERR(server->secmech.hmacmd5)) {
+ if (IS_ERR(server->secmech.hmacmd5)) {
cERROR(1, "could not allocate crypto hmacmd5\n");
return PTR_ERR(server->secmech.hmacmd5);
}
server->secmech.md5 = crypto_alloc_shash("md5", 0, 0);
- if (!server->secmech.md5 || IS_ERR(server->secmech.md5)) {
+ if (IS_ERR(server->secmech.md5)) {
cERROR(1, "could not allocate crypto md5\n");
rc = PTR_ERR(server->secmech.md5);
goto crypto_allocate_md5_fail;
+++ /dev/null
-/*
- * fs/cifs/cifsencrypt.h
- *
- * Copyright (c) International Business Machines Corp., 2005
- * Author(s): Steve French (sfrench@us.ibm.com)
- *
- * Externs for misc. small encryption routines
- * so we do not have to put them in cifsproto.h
- *
- * This library is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; either version 2.1 of the License, or
- * (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
- * the GNU Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public License
- * along with this library; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-/* md4.c */
-extern void mdfour(unsigned char *out, unsigned char *in, int n);
-/* smbdes.c */
-extern void E_P16(unsigned char *p14, unsigned char *p16);
-extern void E_P24(unsigned char *p21, const unsigned char *c8,
- unsigned char *p24);
-
-
-
module_param(cifs_max_pending, int, 0);
MODULE_PARM_DESC(cifs_max_pending, "Simultaneous requests to server. "
"Default: 50 Range: 2 to 256");
-
+unsigned short echo_retries = 5;
+module_param(echo_retries, ushort, 0644);
+MODULE_PARM_DESC(echo_retries, "Number of echo attempts before giving up and "
+ "reconnecting server. Default: 5. 0 means "
+ "never reconnect.");
extern mempool_t *cifs_sm_req_poolp;
extern mempool_t *cifs_req_poolp;
extern mempool_t *cifs_mid_poolp;
{
struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode;
ssize_t written;
+ int rc;
written = generic_file_aio_write(iocb, iov, nr_segs, pos);
- if (!CIFS_I(inode)->clientCanCacheAll)
- filemap_fdatawrite(inode->i_mapping);
+
+ if (CIFS_I(inode)->clientCanCacheAll)
+ return written;
+
+ rc = filemap_fdatawrite(inode->i_mapping);
+ if (rc)
+ cFYI(1, "cifs_file_aio_write: %d rc on %p inode", rc, inode);
+
return written;
}
.setlease = cifs_setlease,
};
+const struct file_operations cifs_file_strict_ops = {
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .aio_read = cifs_strict_readv,
+ .aio_write = cifs_strict_writev,
+ .open = cifs_open,
+ .release = cifs_close,
+ .lock = cifs_lock,
+ .fsync = cifs_strict_fsync,
+ .flush = cifs_flush,
+ .mmap = cifs_file_strict_mmap,
+ .splice_read = generic_file_splice_read,
+ .llseek = cifs_llseek,
+#ifdef CONFIG_CIFS_POSIX
+ .unlocked_ioctl = cifs_ioctl,
+#endif /* CONFIG_CIFS_POSIX */
+ .setlease = cifs_setlease,
+};
+
const struct file_operations cifs_file_direct_ops = {
/* no aio, no readv -
BB reevaluate whether they can be done with directio, no cache */
.llseek = cifs_llseek,
.setlease = cifs_setlease,
};
+
const struct file_operations cifs_file_nobrl_ops = {
.read = do_sync_read,
.write = do_sync_write,
.setlease = cifs_setlease,
};
+const struct file_operations cifs_file_strict_nobrl_ops = {
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .aio_read = cifs_strict_readv,
+ .aio_write = cifs_strict_writev,
+ .open = cifs_open,
+ .release = cifs_close,
+ .fsync = cifs_strict_fsync,
+ .flush = cifs_flush,
+ .mmap = cifs_file_strict_mmap,
+ .splice_read = generic_file_splice_read,
+ .llseek = cifs_llseek,
+#ifdef CONFIG_CIFS_POSIX
+ .unlocked_ioctl = cifs_ioctl,
+#endif /* CONFIG_CIFS_POSIX */
+ .setlease = cifs_setlease,
+};
+
const struct file_operations cifs_file_direct_nobrl_ops = {
/* no mmap, no aio, no readv -
BB reevaluate whether they can be done with directio, no cache */
struct dentry *);
extern int cifs_revalidate_file(struct file *filp);
extern int cifs_revalidate_dentry(struct dentry *);
+extern void cifs_invalidate_mapping(struct inode *inode);
extern int cifs_getattr(struct vfsmount *, struct dentry *, struct kstat *);
extern int cifs_setattr(struct dentry *, struct iattr *);
/* Functions related to files and directories */
extern const struct file_operations cifs_file_ops;
extern const struct file_operations cifs_file_direct_ops; /* if directio mnt */
-extern const struct file_operations cifs_file_nobrl_ops;
-extern const struct file_operations cifs_file_direct_nobrl_ops; /* no brlocks */
+extern const struct file_operations cifs_file_strict_ops; /* if strictio mnt */
+extern const struct file_operations cifs_file_nobrl_ops; /* no brlocks */
+extern const struct file_operations cifs_file_direct_nobrl_ops;
+extern const struct file_operations cifs_file_strict_nobrl_ops;
extern int cifs_open(struct inode *inode, struct file *file);
extern int cifs_close(struct inode *inode, struct file *file);
extern int cifs_closedir(struct inode *inode, struct file *file);
extern ssize_t cifs_user_read(struct file *file, char __user *read_data,
- size_t read_size, loff_t *poffset);
+ size_t read_size, loff_t *poffset);
+extern ssize_t cifs_strict_readv(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos);
extern ssize_t cifs_user_write(struct file *file, const char __user *write_data,
- size_t write_size, loff_t *poffset);
+ size_t write_size, loff_t *poffset);
+extern ssize_t cifs_strict_writev(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos);
extern int cifs_lock(struct file *, int, struct file_lock *);
extern int cifs_fsync(struct file *, int);
+extern int cifs_strict_fsync(struct file *, int);
extern int cifs_flush(struct file *, fl_owner_t id);
extern int cifs_file_mmap(struct file * , struct vm_area_struct *);
+extern int cifs_file_strict_mmap(struct file * , struct vm_area_struct *);
extern const struct file_operations cifs_dir_ops;
extern int cifs_dir_open(struct inode *inode, struct file *file);
extern int cifs_readdir(struct file *file, void *direntry, filldir_t filldir);
extern const struct export_operations cifs_export_ops;
#endif /* EXPERIMENTAL */
-#define CIFS_VERSION "1.68"
+#define CIFS_VERSION "1.71"
#endif /* _CIFSFS_H */
int srv_count; /* reference counter */
/* 15 character server name + 0x20 16th byte indicating type = srv */
char server_RFC1001_name[RFC1001_NAME_LEN_WITH_NULL];
+ enum statusEnum tcpStatus; /* what we think the status is */
char *hostname; /* hostname portion of UNC string */
struct socket *ssocket;
struct sockaddr_storage dstaddr;
struct sockaddr_storage srcaddr; /* locally bind to this IP */
+#ifdef CONFIG_NET_NS
+ struct net *net;
+#endif
wait_queue_head_t response_q;
wait_queue_head_t request_q; /* if more than maxmpx to srvr must block*/
struct list_head pending_mid_q;
- void *Server_NlsInfo; /* BB - placeholder for future NLS info */
- unsigned short server_codepage; /* codepage for the server */
- enum protocolEnum protocolType;
- char versionMajor;
- char versionMinor;
- bool svlocal:1; /* local server or remote */
bool noblocksnd; /* use blocking sendmsg */
bool noautotune; /* do not autotune send buf sizes */
bool tcp_nodelay;
atomic_t inFlight; /* number of requests on the wire to server */
-#ifdef CONFIG_CIFS_STATS2
- atomic_t inSend; /* requests trying to send */
- atomic_t num_waiters; /* blocked waiting to get in sendrecv */
-#endif
- enum statusEnum tcpStatus; /* what we think the status is */
struct mutex srv_mutex;
struct task_struct *tsk;
char server_GUID[16];
char secMode;
+ bool session_estab; /* mark when very first sess is established */
+ u16 dialect; /* dialect index that server chose */
enum securityEnum secType;
unsigned int maxReq; /* Clients should submit no more */
/* than maxReq distinct unanswered SMBs to the server when using */
/* multiplexed reads or writes */
unsigned int maxBuf; /* maxBuf specifies the maximum */
/* message size the server can send or receive for non-raw SMBs */
+ /* maxBuf is returned by SMB NegotiateProtocol so maxBuf is only 0 */
+ /* when socket is setup (and during reconnect) before NegProt sent */
unsigned int max_rw; /* maxRw specifies the maximum */
/* message size the server can send or receive for */
/* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */
unsigned int max_vcs; /* maximum number of smb sessions, at least
those that can be specified uniquely with
vcnumbers */
- char sessid[4]; /* unique token id for this session */
- /* (returned on Negotiate */
int capabilities; /* allow selective disabling of caps by smb sess */
int timeAdj; /* Adjust for difference in server time zone in sec */
__u16 CurrentMid; /* multiplex id - rotating counter */
__u32 sequence_number; /* for signing, protected by srv_mutex */
struct session_key session_key;
unsigned long lstrp; /* when we got last response from this server */
- u16 dialect; /* dialect index that server chose */
struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */
/* extended security flavors that server supports */
+ bool sec_ntlmssp; /* supports NTLMSSP */
+ bool sec_kerberosu2u; /* supports U2U Kerberos */
bool sec_kerberos; /* supports plain Kerberos */
bool sec_mskerberos; /* supports legacy MS Kerberos */
- bool sec_kerberosu2u; /* supports U2U Kerberos */
- bool sec_ntlmssp; /* supports NTLMSSP */
- bool session_estab; /* mark when very first sess is established */
+ struct delayed_work echo; /* echo ping workqueue job */
#ifdef CONFIG_CIFS_FSCACHE
struct fscache_cookie *fscache; /* client index cache cookie */
#endif
+#ifdef CONFIG_CIFS_STATS2
+ atomic_t inSend; /* requests trying to send */
+ atomic_t num_waiters; /* blocked waiting to get in sendrecv */
+#endif
};
+/*
+ * Macros to allow the TCP_Server_Info->net field and related code to drop out
+ * when CONFIG_NET_NS isn't set.
+ */
+
+#ifdef CONFIG_NET_NS
+
+static inline struct net *cifs_net_ns(struct TCP_Server_Info *srv)
+{
+ return srv->net;
+}
+
+static inline void cifs_set_net_ns(struct TCP_Server_Info *srv, struct net *net)
+{
+ srv->net = net;
+}
+
+#else
+
+static inline struct net *cifs_net_ns(struct TCP_Server_Info *srv)
+{
+ return &init_net;
+}
+
+static inline void cifs_set_net_ns(struct TCP_Server_Info *srv, struct net *net)
+{
+}
+
+#endif
+
/*
* Session structure. One of these for each uid session with a particular host
*/
/* BB add in lists for dirty pages i.e. write caching info for oplock */
struct list_head openFileList;
__u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
- unsigned long time; /* jiffies of last update/check of inode */
- bool clientCanCacheRead:1; /* read oplock */
- bool clientCanCacheAll:1; /* read and writebehind oplock */
- bool delete_pending:1; /* DELETE_ON_CLOSE is set */
- bool invalid_mapping:1; /* pagecache is invalid */
+ bool clientCanCacheRead; /* read oplock */
+ bool clientCanCacheAll; /* read and writebehind oplock */
+ bool delete_pending; /* DELETE_ON_CLOSE is set */
+ bool invalid_mapping; /* pagecache is invalid */
+ unsigned long time; /* jiffies of last update of inode */
u64 server_eof; /* current file size on server */
u64 uniqueid; /* server inode number */
u64 createtime; /* creation time on server */
#endif
+struct mid_q_entry;
+
+/*
+ * This is the prototype for the mid callback function. When creating one,
+ * take special care to avoid deadlocks. Things to bear in mind:
+ *
+ * - it will be called by cifsd
+ * - the GlobalMid_Lock will be held
+ * - the mid will be removed from the pending_mid_q list
+ */
+typedef void (mid_callback_t)(struct mid_q_entry *mid);
+
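+/*
+ * A minimal sketch of such a callback (illustrative only; the function name
+ * and the use of a completion stored in callback_data are assumptions, not
+ * part of this change):
+ *
+ *	static void sample_mid_callback(struct mid_q_entry *mid)
+ *	{
+ *		struct completion *done = mid->callback_data;
+ *
+ *		complete(done);
+ *	}
+ */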
/* one of these for every pending CIFS request to the server */
struct mid_q_entry {
struct list_head qhead; /* mids waiting on reply from this server */
unsigned long when_sent; /* time when smb send finished */
unsigned long when_received; /* when demux complete (taken off wire) */
#endif
- struct task_struct *tsk; /* task waiting for response */
+ mid_callback_t *callback; /* call completion callback */
+ void *callback_data; /* general purpose pointer for callback */
struct smb_hdr *resp_buf; /* response buffer */
int midState; /* wish this were enum but can not pass to wait_event */
__u8 command; /* smb command code */
#define MID_REQUEST_SUBMITTED 2
#define MID_RESPONSE_RECEIVED 4
#define MID_RETRY_NEEDED 8 /* session closed while this request out */
-#define MID_NO_RESP_NEEDED 0x10
+#define MID_RESPONSE_MALFORMED 0x10
/* Types of response buffer returned from SendReceive2 */
#define CIFS_NO_BUFFER 0 /* Response buffer not returned */
#define CIFS_IOVEC 4 /* array of response buffers */
/* Type of Request to SendReceive2 */
-#define CIFS_STD_OP 0 /* normal request timeout */
-#define CIFS_LONG_OP 1 /* long op (up to 45 sec, oplock time) */
-#define CIFS_VLONG_OP 2 /* sloow op - can take up to 180 seconds */
-#define CIFS_BLOCKING_OP 4 /* operation can block */
-#define CIFS_ASYNC_OP 8 /* do not wait for response */
-#define CIFS_TIMEOUT_MASK 0x00F /* only one of 5 above set in req */
+#define CIFS_BLOCKING_OP 1 /* operation can block */
+#define CIFS_ASYNC_OP 2 /* do not wait for response */
+#define CIFS_TIMEOUT_MASK 0x003 /* only one of above set in req */
#define CIFS_LOG_ERROR 0x010 /* log NT STATUS if non-zero */
#define CIFS_LARGE_BUF_OP 0x020 /* large request buffer */
#define CIFS_NO_RESP 0x040 /* no response buffer required */
GLOBAL_EXTERN unsigned int cifs_min_small; /* min size of small buf pool */
GLOBAL_EXTERN unsigned int cifs_max_pending; /* MAX requests at once to server*/
+/* reconnect after this many failed echo attempts */
+GLOBAL_EXTERN unsigned short echo_retries;
+
void cifs_oplock_break(struct work_struct *work);
void cifs_oplock_break_get(struct cifsFileInfo *cfile);
void cifs_oplock_break_put(struct cifsFileInfo *cfile);
#define _CIFSPDU_H
#include <net/sock.h>
+#include <asm/unaligned.h>
#include "smbfsctl.h"
#ifdef CONFIG_CIFS_WEAK_PW_HASH
#define SMB_COM_SETATTR 0x09 /* trivial response */
#define SMB_COM_LOCKING_ANDX 0x24 /* trivial response */
#define SMB_COM_COPY 0x29 /* trivial rsp, fail filename ignrd*/
+#define SMB_COM_ECHO 0x2B /* echo request */
#define SMB_COM_OPEN_ANDX 0x2D /* Legacy open for old servers */
#define SMB_COM_READ_ANDX 0x2E
#define SMB_COM_WRITE_ANDX 0x2F
__u16 Mid;
__u8 WordCount;
} __attribute__((packed));
-/* given a pointer to an smb_hdr retrieve the value of byte count */
-#define BCC(smb_var) (*(__u16 *)((char *)(smb_var) + sizeof(struct smb_hdr) + (2 * (smb_var)->WordCount)))
-#define BCC_LE(smb_var) (*(__le16 *)((char *)(smb_var) + sizeof(struct smb_hdr) + (2 * (smb_var)->WordCount)))
+
+/* given a pointer to an smb_hdr retrieve a char pointer to the byte count */
+#define BCC(smb_var) ((unsigned char *)(smb_var) + sizeof(struct smb_hdr) + \
+ (2 * (smb_var)->WordCount))
+
/* given a pointer to an smb_hdr retrieve the pointer to the byte area */
-#define pByteArea(smb_var) ((unsigned char *)(smb_var) + sizeof(struct smb_hdr) + (2 * (smb_var)->WordCount) + 2)
+#define pByteArea(smb_var) (BCC(smb_var) + 2)
+
+/* get the converted ByteCount for a SMB packet and return it */
+static inline __u16
+get_bcc(struct smb_hdr *hdr)
+{
+ __u16 *bc_ptr = (__u16 *)BCC(hdr);
+
+ return get_unaligned(bc_ptr);
+}
+
+/* get the unconverted ByteCount for a SMB packet and return it */
+static inline __u16
+get_bcc_le(struct smb_hdr *hdr)
+{
+ __le16 *bc_ptr = (__le16 *)BCC(hdr);
+
+ return get_unaligned_le16(bc_ptr);
+}
+
+/* set the ByteCount for a SMB packet in host-byte order */
+static inline void
+put_bcc(__u16 count, struct smb_hdr *hdr)
+{
+ __u16 *bc_ptr = (__u16 *)BCC(hdr);
+
+ put_unaligned(count, bc_ptr);
+}
+
+/* set the ByteCount for a SMB packet in little-endian */
+static inline void
+put_bcc_le(__u16 count, struct smb_hdr *hdr)
+{
+ __le16 *bc_ptr = (__le16 *)BCC(hdr);
+
+ put_unaligned_le16(count, bc_ptr);
+}
/*
* Computer Name Length (since Netbios name was length 16 with last byte 0x20)
*
*/
+typedef struct smb_com_echo_req {
+ struct smb_hdr hdr;
+ __le16 EchoCount;
+ __le16 ByteCount;
+ char Data[1];
+} __attribute__((packed)) ECHO_REQ;
+
+typedef struct smb_com_echo_rsp {
+ struct smb_hdr hdr;
+ __le16 SequenceNumber;
+ __le16 ByteCount;
+ char Data[1];
+} __attribute__((packed)) ECHO_RSP;
+
typedef struct smb_com_logoff_andx_req {
struct smb_hdr hdr; /* wct = 2 */
__u8 AndXCommand;
const char *fullpath, const struct dfs_info3_param *ref,
char **devname);
/* extern void renew_parental_timestamps(struct dentry *direntry);*/
+extern struct mid_q_entry *AllocMidQEntry(const struct smb_hdr *smb_buffer,
+ struct TCP_Server_Info *server);
+extern void DeleteMidQEntry(struct mid_q_entry *midEntry);
+extern int cifs_call_async(struct TCP_Server_Info *server,
+ struct smb_hdr *in_buf, mid_callback_t *callback,
+ void *cbdata);
extern int SendReceive(const unsigned int /* xid */ , struct cifsSesInfo *,
struct smb_hdr * /* input */ ,
struct smb_hdr * /* out */ ,
extern bool is_valid_oplock_break(struct smb_hdr *smb,
struct TCP_Server_Info *);
extern bool is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof);
+extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
+ unsigned int bytes_written);
extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, bool);
extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *, bool);
extern unsigned int smbCalcSize(struct smb_hdr *ptr);
const __u16 netfid, const __u64 len,
const __u64 offset, const __u32 numUnlock,
const __u32 numLock, const __u8 lockType,
- const bool waitFlag);
+ const bool waitFlag, const __u8 oplock_level);
extern int CIFSSMBPosixLock(const int xid, struct cifsTconInfo *tcon,
const __u16 smb_file_id, const int get_flag,
const __u64 len, struct file_lock *,
const __u16 lock_type, const bool waitFlag);
extern int CIFSSMBTDis(const int xid, struct cifsTconInfo *tcon);
+extern int CIFSSMBEcho(struct TCP_Server_Info *server);
extern int CIFSSMBLogoff(const int xid, struct cifsSesInfo *ses);
extern struct cifsSesInfo *sesInfoAlloc(void);
extern int cifs_verify_signature(struct smb_hdr *,
struct TCP_Server_Info *server,
__u32 expected_sequence_number);
-extern void SMBNTencrypt(unsigned char *, unsigned char *, unsigned char *);
+extern int SMBNTencrypt(unsigned char *, unsigned char *, unsigned char *);
extern int setup_ntlm_response(struct cifsSesInfo *);
extern int setup_ntlmv2_rsp(struct cifsSesInfo *, const struct nls_table *);
extern int cifs_crypto_shash_allocate(struct TCP_Server_Info *);
extern int CIFSCheckMFSymlink(struct cifs_fattr *fattr,
const unsigned char *path,
struct cifs_sb_info *cifs_sb, int xid);
+extern int mdfour(unsigned char *, unsigned char *, int);
+extern int E_md4hash(const unsigned char *passwd, unsigned char *p16);
+extern void SMBencrypt(unsigned char *passwd, const unsigned char *c8,
+ unsigned char *p24);
+extern void E_P16(unsigned char *p14, unsigned char *p16);
+extern void E_P24(unsigned char *p21, const unsigned char *c8,
+ unsigned char *p24);
#endif /* _CIFSPROTO_H */
}
}
- if (ses->status == CifsExiting)
- return -EIO;
-
/*
* Give demultiplex thread up to 10 seconds to reconnect, should be
* greater than cifs socket timeout which is 7 seconds
* retrying until process is killed or server comes
* back on-line
*/
- if (!tcon->retry || ses->status == CifsExiting) {
+ if (!tcon->retry) {
cFYI(1, "gave up waiting on reconnect in smb_init");
return -EHOSTDOWN;
}
static int validate_t2(struct smb_t2_rsp *pSMB)
{
- int rc = -EINVAL;
- int total_size;
- char *pBCC;
+ unsigned int total_size;
+
+ /* check for plausible wct */
+ if (pSMB->hdr.WordCount < 10)
+ goto vt2_err;
- /* check for plausible wct, bcc and t2 data and parm sizes */
/* check for parm and data offset going beyond end of smb */
- if (pSMB->hdr.WordCount >= 10) {
- if ((le16_to_cpu(pSMB->t2_rsp.ParameterOffset) <= 1024) &&
- (le16_to_cpu(pSMB->t2_rsp.DataOffset) <= 1024)) {
- /* check that bcc is at least as big as parms + data */
- /* check that bcc is less than negotiated smb buffer */
- total_size = le16_to_cpu(pSMB->t2_rsp.ParameterCount);
- if (total_size < 512) {
- total_size +=
- le16_to_cpu(pSMB->t2_rsp.DataCount);
- /* BCC le converted in SendReceive */
- pBCC = (pSMB->hdr.WordCount * 2) +
- sizeof(struct smb_hdr) +
- (char *)pSMB;
- if ((total_size <= (*(u16 *)pBCC)) &&
- (total_size <
- CIFSMaxBufSize+MAX_CIFS_HDR_SIZE)) {
- return 0;
- }
- }
- }
- }
+ if (get_unaligned_le16(&pSMB->t2_rsp.ParameterOffset) > 1024 ||
+ get_unaligned_le16(&pSMB->t2_rsp.DataOffset) > 1024)
+ goto vt2_err;
+
+ /* check that bcc is at least as big as parms + data */
+ /* check that bcc is less than negotiated smb buffer */
+ total_size = get_unaligned_le16(&pSMB->t2_rsp.ParameterCount);
+ if (total_size >= 512)
+ goto vt2_err;
+
+ total_size += get_unaligned_le16(&pSMB->t2_rsp.DataCount);
+ if (total_size > get_bcc(&pSMB->hdr) ||
+ total_size >= CIFSMaxBufSize + MAX_CIFS_HDR_SIZE)
+ goto vt2_err;
+
+ return 0;
+vt2_err:
cifs_dump_mem("Invalid transact2 SMB: ", (char *)pSMB,
sizeof(struct smb_t2_rsp) + 16);
- return rc;
+ return -EINVAL;
}
+
int
CIFSSMBNegotiate(unsigned int xid, struct cifsSesInfo *ses)
{
server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize),
(__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);
server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs);
- GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey);
/* even though we do not use raw we might as well set this
accurately, in case we ever find a need for it */
if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {
(__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);
server->max_rw = le32_to_cpu(pSMBr->MaxRawSize);
cFYI(DBG2, "Max buf = %d", ses->server->maxBuf);
- GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey);
server->capabilities = le32_to_cpu(pSMBr->Capabilities);
server->timeAdj = (int)(__s16)le16_to_cpu(pSMBr->ServerTimeZone);
server->timeAdj *= 60;
return rc;
}
+/*
+ * This is a no-op for now. We're not really interested in the reply, but
+ * rather in the fact that the server sent one and that server->lstrp
+ * gets updated.
+ *
+ * FIXME: maybe we should consider checking that the reply matches the request?
+ */
+static void
+cifs_echo_callback(struct mid_q_entry *mid)
+{
+ struct TCP_Server_Info *server = mid->callback_data;
+
+ DeleteMidQEntry(mid);
+ atomic_dec(&server->inFlight);
+ wake_up(&server->request_q);
+}
+
+int
+CIFSSMBEcho(struct TCP_Server_Info *server)
+{
+ ECHO_REQ *smb;
+ int rc = 0;
+
+ cFYI(1, "In echo request");
+
+ rc = small_smb_init(SMB_COM_ECHO, 0, NULL, (void **)&smb);
+ if (rc)
+ return rc;
+
+ /* set up echo request */
+ smb->hdr.Tid = cpu_to_le16(0xffff);
+ smb->hdr.WordCount = 1;
+ put_unaligned_le16(1, &smb->EchoCount);
+ put_bcc_le(1, &smb->hdr);
+ smb->Data[0] = 'a';
+ smb->hdr.smb_buf_length += 3;
+
+ rc = cifs_call_async(server, (struct smb_hdr *)smb,
+ cifs_echo_callback, server);
+ if (rc)
+ cFYI(1, "Echo request failed: %d", rc);
+
+ cifs_small_buf_release(smb);
+
+ return rc;
+}
+
int
CIFSSMBLogoff(const int xid, struct cifsSesInfo *ses)
{
pSMB->ByteCount = cpu_to_le16(count);
/* long_op set to 1 to allow for oplock break timeouts */
rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
- (struct smb_hdr *)pSMBr, &bytes_returned, CIFS_LONG_OP);
+ (struct smb_hdr *)pSMBr, &bytes_returned, 0);
cifs_stats_inc(&tcon->num_opens);
if (rc) {
cFYI(1, "Error in Open = %d", rc);
pSMB->ByteCount = cpu_to_le16(count);
/* long_op set to 1 to allow for oplock break timeouts */
rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
- (struct smb_hdr *)pSMBr, &bytes_returned, CIFS_LONG_OP);
+ (struct smb_hdr *)pSMBr, &bytes_returned, 0);
cifs_stats_inc(&tcon->num_opens);
if (rc) {
cFYI(1, "Error in Open = %d", rc);
iov[0].iov_base = (char *)pSMB;
iov[0].iov_len = pSMB->hdr.smb_buf_length + 4;
rc = SendReceive2(xid, tcon->ses, iov, 1 /* num iovecs */,
- &resp_buf_type, CIFS_STD_OP | CIFS_LOG_ERROR);
+ &resp_buf_type, CIFS_LOG_ERROR);
cifs_stats_inc(&tcon->num_reads);
pSMBr = (READ_RSP *)iov[0].iov_base;
if (rc) {
CIFSSMBLock(const int xid, struct cifsTconInfo *tcon,
const __u16 smb_file_id, const __u64 len,
const __u64 offset, const __u32 numUnlock,
- const __u32 numLock, const __u8 lockType, const bool waitFlag)
+ const __u32 numLock, const __u8 lockType,
+ const bool waitFlag, const __u8 oplock_level)
{
int rc = 0;
LOCK_REQ *pSMB = NULL;
pSMB->NumberOfLocks = cpu_to_le16(numLock);
pSMB->NumberOfUnlocks = cpu_to_le16(numUnlock);
pSMB->LockType = lockType;
+ pSMB->OplockLevel = oplock_level;
pSMB->AndXCommand = 0xFF; /* none */
pSMB->Fid = smb_file_id; /* netfid stays le */
iov[0].iov_len = pSMB->hdr.smb_buf_length + 4;
rc = SendReceive2(xid, tcon->ses, iov, 1 /* num iovec */, &buf_type,
- CIFS_STD_OP);
+ 0);
cifs_stats_inc(&tcon->num_acl_get);
if (rc) {
cFYI(1, "Send error in QuerySecDesc = %d", rc);
__u16 fid, __u32 pid_of_opener, bool SetAllocation)
{
struct smb_com_transaction2_sfi_req *pSMB = NULL;
- char *data_offset;
struct file_end_of_file_info *parm_data;
int rc = 0;
__u16 params, param_offset, offset, byte_count, count;
param_offset = offsetof(struct smb_com_transaction2_sfi_req, Fid) - 4;
offset = param_offset + params;
- data_offset = (char *) (&pSMB->hdr.Protocol) + offset;
-
count = sizeof(struct file_end_of_file_info);
pSMB->MaxParameterCount = cpu_to_le16(2);
/* BB find exact max SMB PDU from sess structure BB */
}
/* make sure list_len doesn't go past end of SMB */
- end_of_smb = (char *)pByteArea(&pSMBr->hdr) + BCC(&pSMBr->hdr);
+ end_of_smb = (char *)pByteArea(&pSMBr->hdr) + get_bcc(&pSMBr->hdr);
if ((char *)ea_response_data + list_len > end_of_smb) {
cFYI(1, "EA list appears to go beyond SMB");
rc = -EIO;
#define CIFS_PORT 445
#define RFC1001_PORT 139
-extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8,
- unsigned char *p24);
+/* SMB echo "timeout" -- FIXME: tunable? */
+#define SMB_ECHO_INTERVAL (60 * HZ)
extern mempool_t *cifs_req_poolp;
bool no_xattr:1; /* set if xattr (EA) support should be disabled*/
bool server_ino:1; /* use inode numbers from server ie UniqueId */
bool direct_io:1;
+ bool strict_io:1; /* strict cache behavior */
bool remap:1; /* set to remap seven reserved chars in filenames */
bool posix_paths:1; /* unset to not ask for posix pathnames. */
bool no_linux_ext:1;
/* before reconnecting the tcp session, mark the smb session (uid)
and the tid bad so they are not used until reconnected */
+ cFYI(1, "%s: marking sessions and tcons for reconnect", __func__);
spin_lock(&cifs_tcp_ses_lock);
list_for_each(tmp, &server->smb_ses_list) {
ses = list_entry(tmp, struct cifsSesInfo, smb_ses_list);
}
}
spin_unlock(&cifs_tcp_ses_lock);
+
/* do not want to be sending data on a socket we are freeing */
+ cFYI(1, "%s: tearing down socket", __func__);
mutex_lock(&server->srv_mutex);
if (server->ssocket) {
cFYI(1, "State: 0x%x Flags: 0x%lx", server->ssocket->state,
kfree(server->session_key.response);
server->session_key.response = NULL;
server->session_key.len = 0;
+ server->lstrp = jiffies;
+ mutex_unlock(&server->srv_mutex);
+ /* mark submitted MIDs for retry and issue callback */
+ cFYI(1, "%s: issuing mid callbacks", __func__);
spin_lock(&GlobalMid_Lock);
- list_for_each(tmp, &server->pending_mid_q) {
- mid_entry = list_entry(tmp, struct
- mid_q_entry,
- qhead);
- if (mid_entry->midState == MID_REQUEST_SUBMITTED) {
- /* Mark other intransit requests as needing
- retry so we do not immediately mark the
- session bad again (ie after we reconnect
- below) as they timeout too */
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ if (mid_entry->midState == MID_REQUEST_SUBMITTED)
mid_entry->midState = MID_RETRY_NEEDED;
- }
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
}
spin_unlock(&GlobalMid_Lock);
- mutex_unlock(&server->srv_mutex);
while ((server->tcpStatus != CifsExiting) &&
(server->tcpStatus != CifsGood)) {
if (server->tcpStatus != CifsExiting)
server->tcpStatus = CifsGood;
spin_unlock(&GlobalMid_Lock);
- /* atomic_set(&server->inFlight,0);*/
- wake_up(&server->response_q);
}
}
+
return rc;
}
static int check2ndT2(struct smb_hdr *pSMB, unsigned int maxBufSize)
{
struct smb_t2_rsp *pSMBt;
- int total_data_size;
- int data_in_this_rsp;
int remaining;
+ __u16 total_data_size, data_in_this_rsp;
if (pSMB->Command != SMB_COM_TRANSACTION2)
return 0;
pSMBt = (struct smb_t2_rsp *)pSMB;
- total_data_size = le16_to_cpu(pSMBt->t2_rsp.TotalDataCount);
- data_in_this_rsp = le16_to_cpu(pSMBt->t2_rsp.DataCount);
+ total_data_size = get_unaligned_le16(&pSMBt->t2_rsp.TotalDataCount);
+ data_in_this_rsp = get_unaligned_le16(&pSMBt->t2_rsp.DataCount);
remaining = total_data_size - data_in_this_rsp;
{
struct smb_t2_rsp *pSMB2 = (struct smb_t2_rsp *)psecond;
struct smb_t2_rsp *pSMBt = (struct smb_t2_rsp *)pTargetSMB;
- int total_data_size;
- int total_in_buf;
- int remaining;
- int total_in_buf2;
char *data_area_of_target;
char *data_area_of_buf2;
- __u16 byte_count;
+ int remaining;
+ __u16 byte_count, total_data_size, total_in_buf, total_in_buf2;
- total_data_size = le16_to_cpu(pSMBt->t2_rsp.TotalDataCount);
+ total_data_size = get_unaligned_le16(&pSMBt->t2_rsp.TotalDataCount);
- if (total_data_size != le16_to_cpu(pSMB2->t2_rsp.TotalDataCount)) {
+ if (total_data_size !=
+ get_unaligned_le16(&pSMB2->t2_rsp.TotalDataCount))
cFYI(1, "total data size of primary and secondary t2 differ");
- }
- total_in_buf = le16_to_cpu(pSMBt->t2_rsp.DataCount);
+ total_in_buf = get_unaligned_le16(&pSMBt->t2_rsp.DataCount);
remaining = total_data_size - total_in_buf;
if (remaining == 0) /* nothing to do, ignore */
return 0;
- total_in_buf2 = le16_to_cpu(pSMB2->t2_rsp.DataCount);
+ total_in_buf2 = get_unaligned_le16(&pSMB2->t2_rsp.DataCount);
if (remaining < total_in_buf2) {
cFYI(1, "transact2 2nd response contains too much data");
}
/* find end of first SMB data area */
data_area_of_target = (char *)&pSMBt->hdr.Protocol +
- le16_to_cpu(pSMBt->t2_rsp.DataOffset);
+ get_unaligned_le16(&pSMBt->t2_rsp.DataOffset);
/* validate target area */
- data_area_of_buf2 = (char *) &pSMB2->hdr.Protocol +
- le16_to_cpu(pSMB2->t2_rsp.DataOffset);
+ data_area_of_buf2 = (char *)&pSMB2->hdr.Protocol +
+ get_unaligned_le16(&pSMB2->t2_rsp.DataOffset);
data_area_of_target += total_in_buf;
/* copy second buffer into end of first buffer */
memcpy(data_area_of_target, data_area_of_buf2, total_in_buf2);
total_in_buf += total_in_buf2;
- pSMBt->t2_rsp.DataCount = cpu_to_le16(total_in_buf);
- byte_count = le16_to_cpu(BCC_LE(pTargetSMB));
+ put_unaligned_le16(total_in_buf, &pSMBt->t2_rsp.DataCount);
+ byte_count = get_bcc_le(pTargetSMB);
byte_count += total_in_buf2;
- BCC_LE(pTargetSMB) = cpu_to_le16(byte_count);
+ put_bcc_le(byte_count, pTargetSMB);
byte_count = pTargetSMB->smb_buf_length;
byte_count += total_in_buf2;
return 0; /* we are done */
} else /* more responses to go */
return 1;
+}
+
+static void
+cifs_echo_request(struct work_struct *work)
+{
+ int rc;
+ struct TCP_Server_Info *server = container_of(work,
+ struct TCP_Server_Info, echo.work);
+
+ /*
+ * We cannot send an echo until the NEGOTIATE_PROTOCOL request is
+ * done, which is indicated by maxBuf != 0. Also, no need to ping if
+ * we got a response recently
+ */
+ if (server->maxBuf == 0 ||
+ time_before(jiffies, server->lstrp + SMB_ECHO_INTERVAL - HZ))
+ goto requeue_echo;
+
+ rc = CIFSSMBEcho(server);
+ if (rc)
+ cFYI(1, "Unable to send echo request to server: %s",
+ server->hostname);
+requeue_echo:
+ queue_delayed_work(system_nrt_wq, &server->echo, SMB_ECHO_INTERVAL);
}
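
To make the periodic-echo mechanism above easier to follow, here is a minimal, self-contained sketch of the self-rearming delayed-work pattern that cifs_echo_request relies on. The names my_ctx, my_ping and MY_INTERVAL are hypothetical; only the workqueue interfaces (INIT_DELAYED_WORK, queue_delayed_work, container_of, system_nrt_wq) are real kernel API, used the same way the patch uses them for server->echo.

#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct my_ctx {
	struct delayed_work ping;	/* embedded, like the echo member of TCP_Server_Info */
};

#define MY_INTERVAL (60 * HZ)		/* same cadence as SMB_ECHO_INTERVAL */

/* work handler: runs MY_INTERVAL after it was (re)queued */
static void my_ping(struct work_struct *work)
{
	/* recover the owning context from the embedded work item */
	struct my_ctx *ctx = container_of(work, struct my_ctx, ping.work);

	/* ... send the periodic request for ctx here ... */

	/* re-arm from inside the handler so the ping keeps firing */
	queue_delayed_work(system_nrt_wq, &ctx->ping, MY_INTERVAL);
}

static void my_ctx_start(struct my_ctx *ctx)
{
	INIT_DELAYED_WORK(&ctx->ping, my_ping);
	queue_delayed_work(system_nrt_wq, &ctx->ping, MY_INTERVAL);
}

Teardown would call cancel_delayed_work_sync(&ctx->ping), matching the cancel_delayed_work_sync(&server->echo) that the connection-teardown hunk below adds.
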
static int
struct msghdr smb_msg;
struct kvec iov;
struct socket *csocket = server->ssocket;
- struct list_head *tmp;
- struct cifsSesInfo *ses;
+ struct list_head *tmp, *tmp2;
struct task_struct *task_to_wake = NULL;
struct mid_q_entry *mid_entry;
char temp;
smb_msg.msg_control = NULL;
smb_msg.msg_controllen = 0;
pdu_length = 4; /* enough to get RFC1001 header */
+
incomplete_rcv:
+ if (echo_retries > 0 &&
+ time_after(jiffies, server->lstrp +
+ (echo_retries * SMB_ECHO_INTERVAL))) {
+ cERROR(1, "Server %s has not responded in %d seconds. "
+ "Reconnecting...", server->hostname,
+ (echo_retries * SMB_ECHO_INTERVAL / HZ));
+ cifs_reconnect(server);
+ csocket = server->ssocket;
+ wake_up(&server->response_q);
+ continue;
+ }
+
length =
kernel_recvmsg(csocket, &smb_msg,
&iov, 1, pdu_length, 0 /* BB other flags? */);
else if (reconnect == 1)
continue;
- length += 4; /* account for rfc1002 hdr */
+ total_read += 4; /* account for rfc1002 hdr */
+ dump_smb(smb_buffer, total_read);
- dump_smb(smb_buffer, length);
- if (checkSMB(smb_buffer, smb_buffer->Mid, total_read+4)) {
- cifs_dump_mem("Bad SMB: ", smb_buffer, 48);
- continue;
- }
+ /*
+ * We know that we received enough to get to the MID as we
+ * checked the pdu_length earlier. Now check to see
+ * if the rest of the header is OK. We borrow the length
+ * var for the rest of the loop to avoid a new stack var.
+ *
+ * 48 bytes is enough to display the header and a little bit
+ * into the payload for debugging purposes.
+ */
+ length = checkSMB(smb_buffer, smb_buffer->Mid, total_read);
+ if (length != 0)
+ cifs_dump_mem("Bad SMB: ", smb_buffer,
+ min_t(unsigned int, total_read, 48));
+ mid_entry = NULL;
+ server->lstrp = jiffies;
- task_to_wake = NULL;
spin_lock(&GlobalMid_Lock);
- list_for_each(tmp, &server->pending_mid_q) {
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
if ((mid_entry->mid == smb_buffer->Mid) &&
(mid_entry->midState == MID_REQUEST_SUBMITTED) &&
(mid_entry->command == smb_buffer->Command)) {
- if (check2ndT2(smb_buffer,server->maxBuf) > 0) {
+ if (length == 0 &&
+ check2ndT2(smb_buffer, server->maxBuf) > 0) {
/* We have a multipart transact2 resp */
isMultiRsp = true;
if (mid_entry->resp_buf) {
mid_entry->resp_buf = smb_buffer;
mid_entry->largeBuf = isLargeBuf;
multi_t2_fnd:
- task_to_wake = mid_entry->tsk;
- mid_entry->midState = MID_RESPONSE_RECEIVED;
+ if (length == 0)
+ mid_entry->midState =
+ MID_RESPONSE_RECEIVED;
+ else
+ mid_entry->midState =
+ MID_RESPONSE_MALFORMED;
#ifdef CONFIG_CIFS_STATS2
mid_entry->when_received = jiffies;
#endif
- /* so we do not time out requests to server
- which is still responding (since server could
- be busy but not dead) */
- server->lstrp = jiffies;
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
break;
}
+ mid_entry = NULL;
}
spin_unlock(&GlobalMid_Lock);
- if (task_to_wake) {
+
+ if (mid_entry != NULL) {
/* Was previous buf put in mpx struct for multi-rsp? */
if (!isMultiRsp) {
/* smb buffer will be freed by user thread */
else
smallbuf = NULL;
}
- wake_up_process(task_to_wake);
+ } else if (length != 0) {
+ /* response sanity checks failed */
+ continue;
} else if (!is_valid_oplock_break(smb_buffer, server) &&
!isMultiRsp) {
cERROR(1, "No task to wake, unknown frame received! "
- "NumMids %d", midCount.counter);
+ "NumMids %d", atomic_read(&midCount));
cifs_dump_mem("Received Data is: ", (char *)smb_buffer,
sizeof(struct smb_hdr));
#ifdef CONFIG_CIFS_DEBUG2
if (smallbuf) /* no sense logging a debug message if NULL */
cifs_small_buf_release(smallbuf);
- /*
- * BB: we shouldn't have to do any of this. It shouldn't be
- * possible to exit from the thread with active SMB sessions
- */
- spin_lock(&cifs_tcp_ses_lock);
- if (list_empty(&server->pending_mid_q)) {
- /* loop through server session structures attached to this and
- mark them dead */
- list_for_each(tmp, &server->smb_ses_list) {
- ses = list_entry(tmp, struct cifsSesInfo,
- smb_ses_list);
- ses->status = CifsExiting;
- ses->server = NULL;
- }
- spin_unlock(&cifs_tcp_ses_lock);
- } else {
- /* although we can not zero the server struct pointer yet,
- since there are active requests which may depnd on them,
- mark the corresponding SMB sessions as exiting too */
- list_for_each(tmp, &server->smb_ses_list) {
- ses = list_entry(tmp, struct cifsSesInfo,
- smb_ses_list);
- ses->status = CifsExiting;
- }
-
+ if (!list_empty(&server->pending_mid_q)) {
spin_lock(&GlobalMid_Lock);
- list_for_each(tmp, &server->pending_mid_q) {
- mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
- if (mid_entry->midState == MID_REQUEST_SUBMITTED) {
- cFYI(1, "Clearing Mid 0x%x - waking up ",
+ list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
+ mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
+ cFYI(1, "Clearing Mid 0x%x - issuing callback",
mid_entry->mid);
- task_to_wake = mid_entry->tsk;
- if (task_to_wake)
- wake_up_process(task_to_wake);
- }
+ list_del_init(&mid_entry->qhead);
+ mid_entry->callback(mid_entry);
}
spin_unlock(&GlobalMid_Lock);
- spin_unlock(&cifs_tcp_ses_lock);
/* 1/8th of sec is more than enough time for them to exit */
msleep(125);
}
coming home not much else we can do but free the memory */
}
- /* last chance to mark ses pointers invalid
- if there are any pointing to this (e.g
- if a crazy root user tried to kill cifsd
- kernel thread explicitly this might happen) */
- /* BB: This shouldn't be necessary, see above */
- spin_lock(&cifs_tcp_ses_lock);
- list_for_each(tmp, &server->smb_ses_list) {
- ses = list_entry(tmp, struct cifsSesInfo, smb_ses_list);
- ses->server = NULL;
- }
- spin_unlock(&cifs_tcp_ses_lock);
-
kfree(server->hostname);
task_to_wake = xchg(&server->tsk, NULL);
kfree(server);
vol->direct_io = 1;
} else if (strnicmp(data, "forcedirectio", 13) == 0) {
vol->direct_io = 1;
+ } else if (strnicmp(data, "strictcache", 11) == 0) {
+ vol->strict_io = 1;
} else if (strnicmp(data, "noac", 4) == 0) {
printk(KERN_WARNING "CIFS: Mount option noac not "
"supported. Instead set "
spin_lock(&cifs_tcp_ses_lock);
list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) {
+ if (!net_eq(cifs_net_ns(server), current->nsproxy->net_ns))
+ continue;
+
if (!match_address(server, addr,
(struct sockaddr *)&vol->srcaddr))
continue;
return;
}
+ put_net(cifs_net_ns(server));
+
list_del_init(&server->tcp_ses_list);
spin_unlock(&cifs_tcp_ses_lock);
+ cancel_delayed_work_sync(&server->echo);
+
spin_lock(&GlobalMid_Lock);
server->tcpStatus = CifsExiting;
spin_unlock(&GlobalMid_Lock);
goto out_err;
}
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
tcp_ses->hostname = extract_hostname(volume_info->UNC);
if (IS_ERR(tcp_ses->hostname)) {
rc = PTR_ERR(tcp_ses->hostname);
volume_info->target_rfc1001_name, RFC1001_NAME_LEN_WITH_NULL);
tcp_ses->session_estab = false;
tcp_ses->sequence_number = 0;
+ tcp_ses->lstrp = jiffies;
INIT_LIST_HEAD(&tcp_ses->tcp_ses_list);
INIT_LIST_HEAD(&tcp_ses->smb_ses_list);
+ INIT_DELAYED_WORK(&tcp_ses->echo, cifs_echo_request);
/*
* at this point we are the only ones with the pointer
cifs_fscache_get_client_cookie(tcp_ses);
+ /* queue echo request delayed work */
+ queue_delayed_work(system_nrt_wq, &tcp_ses->echo, SMB_ECHO_INTERVAL);
+
return tcp_ses;
out_err_crypto_release:
cifs_crypto_shash_release(tcp_ses);
+ put_net(cifs_net_ns(tcp_ses));
+
out_err:
if (tcp_ses) {
if (!IS_ERR(tcp_ses->hostname))
}
if (socket == NULL) {
- rc = sock_create_kern(sfamily, SOCK_STREAM,
- IPPROTO_TCP, &socket);
+ rc = __sock_create(cifs_net_ns(server), sfamily, SOCK_STREAM,
+ IPPROTO_TCP, &socket, 1);
if (rc < 0) {
cERROR(1, "Error %d creating socket", rc);
server->ssocket = NULL;
if (pvolume_info->multiuser)
cifs_sb->mnt_cifs_flags |= (CIFS_MOUNT_MULTIUSER |
CIFS_MOUNT_NO_PERM);
+ if (pvolume_info->strict_io)
+ cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_STRICT_IO;
if (pvolume_info->direct_io) {
cFYI(1, "mounting share using direct i/o");
cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DIRECT_IO;
TCONX_RSP *pSMBr;
unsigned char *bcc_ptr;
int rc = 0;
- int length, bytes_left;
- __u16 count;
+ int length;
+ __u16 bytes_left, count;
if (ses == NULL)
return -EIO;
bcc_ptr++; /* skip password */
/* already aligned so no need to do it below */
} else {
- pSMB->PasswordLength = cpu_to_le16(CIFS_SESS_KEY_SIZE);
+ pSMB->PasswordLength = cpu_to_le16(CIFS_AUTH_RESP_SIZE);
/* BB FIXME add code to fail this if NTLMv2 or Kerberos
specified as required (when that support is added to
the vfs in the future) as only NTLM or the much
bcc_ptr);
else
#endif /* CIFS_WEAK_PW_HASH */
- SMBNTencrypt(tcon->password, ses->server->cryptkey, bcc_ptr);
+ rc = SMBNTencrypt(tcon->password, ses->server->cryptkey,
+ bcc_ptr);
- bcc_ptr += CIFS_SESS_KEY_SIZE;
+ bcc_ptr += CIFS_AUTH_RESP_SIZE;
if (ses->capabilities & CAP_UNICODE) {
/* must align unicode strings */
*bcc_ptr = 0; /* null byte password */
pSMB->ByteCount = cpu_to_le16(count);
rc = SendReceive(xid, ses, smb_buffer, smb_buffer_response, &length,
- CIFS_STD_OP);
+ 0);
/* above now done in SendReceive */
if ((rc == 0) && (tcon != NULL)) {
tcon->need_reconnect = false;
tcon->tid = smb_buffer_response->Tid;
bcc_ptr = pByteArea(smb_buffer_response);
- bytes_left = BCC(smb_buffer_response);
+ bytes_left = get_bcc(smb_buffer_response);
length = strnlen(bcc_ptr, bytes_left - 2);
if (smb_buffer->Flags2 & SMBFLG2_UNICODE)
is_unicode = true;
struct inode *inode = cifs_file->dentry->d_inode;
struct cifsTconInfo *tcon = tlink_tcon(cifs_file->tlink);
struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
struct cifsLockInfo *li, *tmp;
spin_lock(&cifs_file_list_lock);
if (list_empty(&cifsi->openFileList)) {
cFYI(1, "closing last open instance for inode %p",
cifs_file->dentry->d_inode);
+
+ /* in strict cache mode we need to invalidate the mapping on the last
+ close because it may cause an error when we open this file
+ again and get at least a level II oplock */
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO)
+ CIFS_I(inode)->invalid_mapping = true;
+
cifs_set_oplock_level(cifsi, 0);
}
spin_unlock(&cifs_file_list_lock);
struct cifsTconInfo *tcon;
struct tcon_link *tlink;
struct cifsFileInfo *pCifsFile = NULL;
- struct cifsInodeInfo *pCifsInode;
char *full_path = NULL;
bool posix_open_ok = false;
__u16 netfid;
}
tcon = tlink_tcon(tlink);
- pCifsInode = CIFS_I(file->f_path.dentry->d_inode);
-
full_path = build_path_from_dentry(file->f_path.dentry);
if (full_path == NULL) {
rc = -ENOMEM;
/* BB we could chain these into one lock request BB */
rc = CIFSSMBLock(xid, tcon, netfid, length, pfLock->fl_start,
- 0, 1, lockType, 0 /* wait flag */ );
+ 0, 1, lockType, 0 /* wait flag */, 0);
if (rc == 0) {
rc = CIFSSMBLock(xid, tcon, netfid, length,
pfLock->fl_start, 1 /* numUnlock */ ,
0 /* numLock */ , lockType,
- 0 /* wait flag */ );
+ 0 /* wait flag */, 0);
pfLock->fl_type = F_UNLCK;
if (rc != 0)
cERROR(1, "Error unlocking previously locked "
rc = CIFSSMBLock(xid, tcon, netfid, length,
pfLock->fl_start, 0, 1,
lockType | LOCKING_ANDX_SHARED_LOCK,
- 0 /* wait flag */);
+ 0 /* wait flag */, 0);
if (rc == 0) {
rc = CIFSSMBLock(xid, tcon, netfid,
length, pfLock->fl_start, 1, 0,
lockType |
LOCKING_ANDX_SHARED_LOCK,
- 0 /* wait flag */);
+ 0 /* wait flag */, 0);
pfLock->fl_type = F_RDLCK;
if (rc != 0)
cERROR(1, "Error unlocking "
if (numLock) {
rc = CIFSSMBLock(xid, tcon, netfid, length,
- pfLock->fl_start,
- 0, numLock, lockType, wait_flag);
+ pfLock->fl_start, 0, numLock, lockType,
+ wait_flag, 0);
if (rc == 0) {
/* For Windows locks we must store them. */
(pfLock->fl_start + length) >=
(li->offset + li->length)) {
stored_rc = CIFSSMBLock(xid, tcon,
- netfid,
- li->length, li->offset,
- 1, 0, li->type, false);
+ netfid, li->length,
+ li->offset, 1, 0,
+ li->type, false, 0);
if (stored_rc)
rc = stored_rc;
else {
return rc;
}
-/*
- * Set the timeout on write requests past EOF. For some servers (Windows)
- * these calls can be very long.
- *
- * If we're writing >10M past the EOF we give a 180s timeout. Anything less
- * than that gets a 45s timeout. Writes not past EOF get 15s timeouts.
- * The 10M cutoff is totally arbitrary. A better scheme for this would be
- * welcome if someone wants to suggest one.
- *
- * We may be able to do a better job with this if there were some way to
- * declare that a file should be sparse.
- */
-static int
-cifs_write_timeout(struct cifsInodeInfo *cifsi, loff_t offset)
-{
- if (offset <= cifsi->server_eof)
- return CIFS_STD_OP;
- else if (offset > (cifsi->server_eof + (10 * 1024 * 1024)))
- return CIFS_VLONG_OP;
- else
- return CIFS_LONG_OP;
-}
-
/* update the file size (if needed) after a write */
-static void
+void
cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
unsigned int bytes_written)
{
unsigned int total_written;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
- int xid, long_op;
+ int xid;
struct cifsFileInfo *open_file;
struct cifsInodeInfo *cifsi = CIFS_I(inode);
xid = GetXid();
- long_op = cifs_write_timeout(cifsi, *poffset);
for (total_written = 0; write_size > total_written;
total_written += bytes_written) {
rc = -EAGAIN;
min_t(const int, cifs_sb->wsize,
write_size - total_written),
*poffset, &bytes_written,
- NULL, write_data + total_written, long_op);
+ NULL, write_data + total_written, 0);
}
if (rc || (bytes_written == 0)) {
if (total_written)
cifs_update_eof(cifsi, *poffset, bytes_written);
*poffset += bytes_written;
}
- long_op = CIFS_STD_OP; /* subsequent writes fast -
- 15 seconds is plenty */
}
cifs_stats_bytes_written(pTcon, total_written);
unsigned int total_written;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
- int xid, long_op;
+ int xid;
struct dentry *dentry = open_file->dentry;
struct cifsInodeInfo *cifsi = CIFS_I(dentry->d_inode);
xid = GetXid();
- long_op = cifs_write_timeout(cifsi, *poffset);
for (total_written = 0; write_size > total_written;
total_written += bytes_written) {
rc = -EAGAIN;
rc = CIFSSMBWrite2(xid, pTcon,
open_file->netfid, len,
*poffset, &bytes_written,
- iov, 1, long_op);
+ iov, 1, 0);
} else
rc = CIFSSMBWrite(xid, pTcon,
open_file->netfid,
write_size - total_written),
*poffset, &bytes_written,
write_data + total_written,
- NULL, long_op);
+ NULL, 0);
}
if (rc || (bytes_written == 0)) {
if (total_written)
cifs_update_eof(cifsi, *poffset, bytes_written);
*poffset += bytes_written;
}
- long_op = CIFS_STD_OP; /* subsequent writes fast -
- 15 seconds is plenty */
}
cifs_stats_bytes_written(pTcon, total_written);
char *write_data;
int rc = -EFAULT;
int bytes_written = 0;
- struct cifs_sb_info *cifs_sb;
struct inode *inode;
struct cifsFileInfo *open_file;
return -EFAULT;
inode = page->mapping->host;
- cifs_sb = CIFS_SB(inode->i_sb);
offset += (loff_t)from;
write_data = kmap(page);
struct pagevec pvec;
int rc = 0;
int scanned = 0;
- int xid, long_op;
+ int xid;
cifs_sb = CIFS_SB(mapping->host->i_sb);
break;
}
if (n_iov) {
+retry_write:
open_file = find_writable_file(CIFS_I(mapping->host),
false);
if (!open_file) {
cERROR(1, "No writable handles for inode");
rc = -EBADF;
} else {
- long_op = cifs_write_timeout(cifsi, offset);
rc = CIFSSMBWrite2(xid, tcon, open_file->netfid,
bytes_to_write, offset,
&bytes_written, iov, n_iov,
- long_op);
+ 0);
cifsFileInfo_put(open_file);
- cifs_update_eof(cifsi, offset, bytes_written);
}
- if (rc || bytes_written < bytes_to_write) {
- cERROR(1, "Write2 ret %d, wrote %d",
- rc, bytes_written);
- mapping_set_error(mapping, rc);
- } else {
+ cFYI(1, "Write2 rc=%d, wrote=%u", rc, bytes_written);
+
+ /*
+ * For now, treat a short write as if nothing got
+ * written. A zero length write however indicates
+ * ENOSPC or EFBIG. We have no way to know which
+ * though, so call it ENOSPC for now. EFBIG would
+ * get translated to AS_EIO anyway.
+ *
+ * FIXME: make it take into account the data that did
+ * get written
+ */
+ if (rc == 0) {
+ if (bytes_written == 0)
+ rc = -ENOSPC;
+ else if (bytes_written < bytes_to_write)
+ rc = -EAGAIN;
+ }
+
+ /* retry on data-integrity flush */
+ if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN)
+ goto retry_write;
+
+ /* fix the stats and EOF */
+ if (bytes_written > 0) {
cifs_stats_bytes_written(tcon, bytes_written);
+ cifs_update_eof(cifsi, offset, bytes_written);
}
for (i = 0; i < n_iov; i++) {
page = pvec.pages[first + i];
- /* Should we also set page error on
- success rc but too little data written? */
- /* BB investigate retry logic on temporary
- server crash cases and how recovery works
- when page marked as error */
- if (rc)
+ /* on retryable write error, redirty page */
+ if (rc == -EAGAIN)
+ redirty_page_for_writepage(wbc, page);
+ else if (rc != 0)
SetPageError(page);
kunmap(page);
unlock_page(page);
end_page_writeback(page);
page_cache_release(page);
}
+
+ if (rc != -EAGAIN)
+ mapping_set_error(mapping, rc);
+ else
+ rc = 0;
+
if ((wbc->nr_to_write -= n_iov) <= 0)
done = 1;
index = next;
return rc;
}
-int cifs_fsync(struct file *file, int datasync)
+int cifs_strict_fsync(struct file *file, int datasync)
{
int xid;
int rc = 0;
struct cifsTconInfo *tcon;
struct cifsFileInfo *smbfile = file->private_data;
struct inode *inode = file->f_path.dentry->d_inode;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
xid = GetXid();
cFYI(1, "Sync file - name: %s datasync: 0x%x",
file->f_path.dentry->d_name.name, datasync);
- rc = filemap_write_and_wait(inode->i_mapping);
- if (rc == 0) {
- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ if (!CIFS_I(inode)->clientCanCacheRead)
+ cifs_invalidate_mapping(inode);
- tcon = tlink_tcon(smbfile->tlink);
- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC))
- rc = CIFSSMBFlush(xid, tcon, smbfile->netfid);
- }
+ tcon = tlink_tcon(smbfile->tlink);
+ if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC))
+ rc = CIFSSMBFlush(xid, tcon, smbfile->netfid);
+
+ FreeXid(xid);
+ return rc;
+}
+
+int cifs_fsync(struct file *file, int datasync)
+{
+ int xid;
+ int rc = 0;
+ struct cifsTconInfo *tcon;
+ struct cifsFileInfo *smbfile = file->private_data;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
+
+ xid = GetXid();
+
+ cFYI(1, "Sync file - name: %s datasync: 0x%x",
+ file->f_path.dentry->d_name.name, datasync);
+
+ tcon = tlink_tcon(smbfile->tlink);
+ if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC))
+ rc = CIFSSMBFlush(xid, tcon, smbfile->netfid);
FreeXid(xid);
return rc;
return rc;
}
-ssize_t cifs_user_read(struct file *file, char __user *read_data,
- size_t read_size, loff_t *poffset)
+static int
+cifs_write_allocate_pages(struct page **pages, unsigned long num_pages)
{
- int rc = -EACCES;
+ int rc = 0;
+ unsigned long i;
+
+ for (i = 0; i < num_pages; i++) {
+ pages[i] = alloc_page(__GFP_HIGHMEM);
+ if (!pages[i]) {
+ /*
+ * save number of pages we have already allocated and
+ * return with ENOMEM error
+ */
+ num_pages = i;
+ rc = -ENOMEM;
+ goto error;
+ }
+ }
+
+ return rc;
+
+error:
+ for (i = 0; i < num_pages; i++)
+ put_page(pages[i]);
+ return rc;
+}
+
+static inline
+size_t get_numpages(const size_t wsize, const size_t len, size_t *cur_len)
+{
+ size_t num_pages;
+ size_t clen;
+
+ clen = min_t(const size_t, len, wsize);
+ num_pages = clen / PAGE_CACHE_SIZE;
+ if (clen % PAGE_CACHE_SIZE)
+ num_pages++;
+
+ if (cur_len)
+ *cur_len = clen;
+
+ return num_pages;
+}
+
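+
+/*
+ * Editor's worked example of the page math above -- illustrative only,
+ * assuming PAGE_CACHE_SIZE == 4096 and wsize == 57344:
+ *
+ *   len = 200000: clen = min(200000, 57344) = 57344
+ *                 num_pages = 57344 / 4096 = 14 (no remainder, so 14 pages)
+ *   len = 10000:  clen = min(10000, 57344)  = 10000
+ *                 num_pages = 10000 / 4096 = 2, remainder 1808 -> 3 pages
+ *
+ * cifs_iovec_write() below therefore never allocates more than
+ * wsize/PAGE_CACHE_SIZE (rounded up) pages per pass and loops until
+ * len is exhausted.
+ */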
+static ssize_t
+cifs_iovec_write(struct file *file, const struct iovec *iov,
+ unsigned long nr_segs, loff_t *poffset)
+{
+ unsigned int written;
+ unsigned long num_pages, npages, i;
+ size_t copied, len, cur_len;
+ ssize_t total_written = 0;
+ struct kvec *to_send;
+ struct page **pages;
+ struct iov_iter it;
+ struct inode *inode;
+ struct cifsFileInfo *open_file;
+ struct cifsTconInfo *pTcon;
+ struct cifs_sb_info *cifs_sb;
+ int xid, rc;
+
+ len = iov_length(iov, nr_segs);
+ if (!len)
+ return 0;
+
+ rc = generic_write_checks(file, poffset, &len, 0);
+ if (rc)
+ return rc;
+
+ cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
+ num_pages = get_numpages(cifs_sb->wsize, len, &cur_len);
+
+ pages = kmalloc(sizeof(struct page *)*num_pages, GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ to_send = kmalloc(sizeof(struct kvec)*(num_pages + 1), GFP_KERNEL);
+ if (!to_send) {
+ kfree(pages);
+ return -ENOMEM;
+ }
+
+ rc = cifs_write_allocate_pages(pages, num_pages);
+ if (rc) {
+ kfree(pages);
+ kfree(to_send);
+ return rc;
+ }
+
+ xid = GetXid();
+ open_file = file->private_data;
+ pTcon = tlink_tcon(open_file->tlink);
+ inode = file->f_path.dentry->d_inode;
+
+ iov_iter_init(&it, iov, nr_segs, len, 0);
+ npages = num_pages;
+
+ do {
+ size_t save_len = cur_len;
+ for (i = 0; i < npages; i++) {
+ copied = min_t(const size_t, cur_len, PAGE_CACHE_SIZE);
+ copied = iov_iter_copy_from_user(pages[i], &it, 0,
+ copied);
+ cur_len -= copied;
+ iov_iter_advance(&it, copied);
+ to_send[i+1].iov_base = kmap(pages[i]);
+ to_send[i+1].iov_len = copied;
+ }
+
+ cur_len = save_len - cur_len;
+
+ do {
+ if (open_file->invalidHandle) {
+ rc = cifs_reopen_file(open_file, false);
+ if (rc != 0)
+ break;
+ }
+ rc = CIFSSMBWrite2(xid, pTcon, open_file->netfid,
+ cur_len, *poffset, &written,
+ to_send, npages, 0);
+ } while (rc == -EAGAIN);
+
+ for (i = 0; i < npages; i++)
+ kunmap(pages[i]);
+
+ if (written) {
+ len -= written;
+ total_written += written;
+ cifs_update_eof(CIFS_I(inode), *poffset, written);
+ *poffset += written;
+ } else if (rc < 0) {
+ if (!total_written)
+ total_written = rc;
+ break;
+ }
+
+ /* get length and number of kvecs of the next write */
+ npages = get_numpages(cifs_sb->wsize, len, &cur_len);
+ } while (len > 0);
+
+ if (total_written > 0) {
+ spin_lock(&inode->i_lock);
+ if (*poffset > inode->i_size)
+ i_size_write(inode, *poffset);
+ spin_unlock(&inode->i_lock);
+ }
+
+ cifs_stats_bytes_written(pTcon, total_written);
+ mark_inode_dirty_sync(inode);
+
+ for (i = 0; i < num_pages; i++)
+ put_page(pages[i]);
+ kfree(to_send);
+ kfree(pages);
+ FreeXid(xid);
+ return total_written;
+}
+
+static ssize_t cifs_user_writev(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ ssize_t written;
+ struct inode *inode;
+
+ inode = iocb->ki_filp->f_path.dentry->d_inode;
+
+ /*
+ * BB - optimize the case when signing is disabled. We can drop this
+ * extra memory-to-memory copying and use iovec buffers to construct
+ * the write request.
+ */
+
+ written = cifs_iovec_write(iocb->ki_filp, iov, nr_segs, &pos);
+ if (written > 0) {
+ CIFS_I(inode)->invalid_mapping = true;
+ iocb->ki_pos = pos;
+ }
+
+ return written;
+}
+
+ssize_t cifs_strict_writev(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct inode *inode;
+
+ inode = iocb->ki_filp->f_path.dentry->d_inode;
+
+ if (CIFS_I(inode)->clientCanCacheAll)
+ return generic_file_aio_write(iocb, iov, nr_segs, pos);
+
+ /*
+ * In strict cache mode we need to write the data to the server exactly
+ * from pos to pos+len-1 rather than flushing all affected pages,
+ * because flushing may fail with an error due to mandatory locks on
+ * those pages even when no lock covers the region from pos to pos+len-1.
+ */
+
+ return cifs_user_writev(iocb, iov, nr_segs, pos);
+}
+
+static ssize_t
+cifs_iovec_read(struct file *file, const struct iovec *iov,
+ unsigned long nr_segs, loff_t *poffset)
+{
+ int rc;
+ int xid;
+ ssize_t total_read;
unsigned int bytes_read = 0;
- unsigned int total_read = 0;
- unsigned int current_read_size;
+ size_t len, cur_len;
+ int iov_offset = 0;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
- int xid;
struct cifsFileInfo *open_file;
- char *smb_read_data;
- char __user *current_offset;
struct smb_com_read_rsp *pSMBr;
+ char *read_data;
+
+ if (!nr_segs)
+ return 0;
+
+ len = iov_length(iov, nr_segs);
+ if (!len)
+ return 0;
xid = GetXid();
cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
- if (file->private_data == NULL) {
- rc = -EBADF;
- FreeXid(xid);
- return rc;
- }
open_file = file->private_data;
pTcon = tlink_tcon(open_file->tlink);
if ((file->f_flags & O_ACCMODE) == O_WRONLY)
cFYI(1, "attempting read on write only file instance");
- for (total_read = 0, current_offset = read_data;
- read_size > total_read;
- total_read += bytes_read, current_offset += bytes_read) {
- current_read_size = min_t(const int, read_size - total_read,
- cifs_sb->rsize);
+ for (total_read = 0; total_read < len; total_read += bytes_read) {
+ cur_len = min_t(const size_t, len - total_read, cifs_sb->rsize);
rc = -EAGAIN;
- smb_read_data = NULL;
+ read_data = NULL;
+
while (rc == -EAGAIN) {
int buf_type = CIFS_NO_BUFFER;
if (open_file->invalidHandle) {
if (rc != 0)
break;
}
- rc = CIFSSMBRead(xid, pTcon,
- open_file->netfid,
- current_read_size, *poffset,
- &bytes_read, &smb_read_data,
- &buf_type);
- pSMBr = (struct smb_com_read_rsp *)smb_read_data;
- if (smb_read_data) {
- if (copy_to_user(current_offset,
- smb_read_data +
- 4 /* RFC1001 length field */ +
- le16_to_cpu(pSMBr->DataOffset),
- bytes_read))
+ rc = CIFSSMBRead(xid, pTcon, open_file->netfid,
+ cur_len, *poffset, &bytes_read,
+ &read_data, &buf_type);
+ pSMBr = (struct smb_com_read_rsp *)read_data;
+ if (read_data) {
+ char *data_offset = read_data + 4 +
+ le16_to_cpu(pSMBr->DataOffset);
+ if (memcpy_toiovecend(iov, data_offset,
+ iov_offset, bytes_read))
rc = -EFAULT;
-
if (buf_type == CIFS_SMALL_BUFFER)
- cifs_small_buf_release(smb_read_data);
+ cifs_small_buf_release(read_data);
else if (buf_type == CIFS_LARGE_BUFFER)
- cifs_buf_release(smb_read_data);
- smb_read_data = NULL;
+ cifs_buf_release(read_data);
+ read_data = NULL;
+ iov_offset += bytes_read;
}
}
+
if (rc || (bytes_read == 0)) {
if (total_read) {
break;
*poffset += bytes_read;
}
}
+
FreeXid(xid);
return total_read;
}
+ssize_t cifs_user_read(struct file *file, char __user *read_data,
+ size_t read_size, loff_t *poffset)
+{
+ struct iovec iov;
+ iov.iov_base = read_data;
+ iov.iov_len = read_size;
+
+ return cifs_iovec_read(file, &iov, 1, poffset);
+}
+
+static ssize_t cifs_user_readv(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ ssize_t read;
+
+ read = cifs_iovec_read(iocb->ki_filp, iov, nr_segs, &pos);
+ if (read > 0)
+ iocb->ki_pos = pos;
+
+ return read;
+}
+
+ssize_t cifs_strict_readv(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct inode *inode;
+
+ inode = iocb->ki_filp->f_path.dentry->d_inode;
+
+ if (CIFS_I(inode)->clientCanCacheRead)
+ return generic_file_aio_read(iocb, iov, nr_segs, pos);
+
+ /*
+ * In strict cache mode we need to read from the server every time if
+ * we don't have a level II oplock, because the server can delay the
+ * mtime change and so we cannot decide whether to invalidate the
+ * inode's cache. Reading cached pages can also fail if there are
+ * mandatory locks on pages affected by this read but none on the
+ * region from pos to pos+len-1.
+ */
+
+ return cifs_user_readv(iocb, iov, nr_segs, pos);
+}
static ssize_t cifs_read(struct file *file, char *read_data, size_t read_size,
- loff_t *poffset)
+ loff_t *poffset)
{
int rc = -EACCES;
unsigned int bytes_read = 0;
return total_read;
}
+int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ int rc, xid;
+ struct inode *inode = file->f_path.dentry->d_inode;
+
+ xid = GetXid();
+
+ if (!CIFS_I(inode)->clientCanCacheRead)
+ cifs_invalidate_mapping(inode);
+
+ rc = generic_file_mmap(file, vma);
+ FreeXid(xid);
+ return rc;
+}
+
int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
int rc, xid;
*/
if (!cfile->oplock_break_cancelled) {
rc = CIFSSMBLock(0, tlink_tcon(cfile->tlink), cfile->netfid, 0,
- 0, 0, 0, LOCKING_ANDX_OPLOCK_RELEASE, false);
+ 0, 0, 0, LOCKING_ANDX_OPLOCK_RELEASE, false,
+ cinode->clientCanCacheRead ? 1 : 0);
cFYI(1, "Oplock release rc = %d", rc);
}
inode->i_fop = &cifs_file_direct_nobrl_ops;
else
inode->i_fop = &cifs_file_direct_ops;
+ } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
+ inode->i_fop = &cifs_file_strict_nobrl_ops;
+ else
+ inode->i_fop = &cifs_file_strict_ops;
} else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
inode->i_fop = &cifs_file_nobrl_ops;
else { /* not direct, send byte range locks */
inode->i_fop = &cifs_file_ops;
}
-
/* check if server can support readpages */
if (cifs_sb_master_tcon(cifs_sb)->ses->server->maxBuf <
PAGE_CACHE_SIZE + MAX_CIFS_HDR_SIZE)
/*
* Zap the cache. Called when invalid_mapping flag is set.
*/
-static void
+void
cifs_invalidate_mapping(struct inode *inode)
{
int rc;
#include "cifsproto.h"
#include "cifs_debug.h"
#include "cifs_fs_sb.h"
-#include "md5.h"
#define CIFS_MF_SYMLINK_LEN_OFFSET (4+1)
#define CIFS_MF_SYMLINK_MD5_OFFSET (CIFS_MF_SYMLINK_LEN_OFFSET+(4+1))
md5_hash[8], md5_hash[9], md5_hash[10], md5_hash[11],\
md5_hash[12], md5_hash[13], md5_hash[14], md5_hash[15]
+static int
+symlink_hash(unsigned int link_len, const char *link_str, u8 *md5_hash)
+{
+ int rc;
+ unsigned int size;
+ struct crypto_shash *md5;
+ struct sdesc *sdescmd5;
+
+ md5 = crypto_alloc_shash("md5", 0, 0);
+ if (IS_ERR(md5)) {
+ rc = PTR_ERR(md5);
+ cERROR(1, "%s: Crypto md5 allocation error %d\n", __func__, rc);
+ return rc;
+ }
+ size = sizeof(struct shash_desc) + crypto_shash_descsize(md5);
+ sdescmd5 = kmalloc(size, GFP_KERNEL);
+ if (!sdescmd5) {
+ rc = -ENOMEM;
+ cERROR(1, "%s: Memory allocation failure\n", __func__);
+ goto symlink_hash_err;
+ }
+ sdescmd5->shash.tfm = md5;
+ sdescmd5->shash.flags = 0x0;
+
+ rc = crypto_shash_init(&sdescmd5->shash);
+ if (rc) {
+ cERROR(1, "%s: Could not init md5 shash\n", __func__);
+ goto symlink_hash_err;
+ }
+ crypto_shash_update(&sdescmd5->shash, link_str, link_len);
+ rc = crypto_shash_final(&sdescmd5->shash, md5_hash);
+
+symlink_hash_err:
+ crypto_free_shash(md5);
+ kfree(sdescmd5);
+
+ return rc;
+}
+
static int
CIFSParseMFSymlink(const u8 *buf,
unsigned int buf_len,
unsigned int link_len;
const char *md5_str1;
const char *link_str;
- struct MD5Context md5_ctx;
u8 md5_hash[16];
char md5_str2[34];
if (rc != 1)
return -EINVAL;
- cifs_MD5_init(&md5_ctx);
- cifs_MD5_update(&md5_ctx, (const u8 *)link_str, link_len);
- cifs_MD5_final(md5_hash, &md5_ctx);
+ rc = symlink_hash(link_len, link_str, md5_hash);
+ if (rc) {
+ cFYI(1, "%s: MD5 hash failure: %d\n", __func__, rc);
+ return rc;
+ }
snprintf(md5_str2, sizeof(md5_str2),
CIFS_MF_SYMLINK_MD5_FORMAT,
static int
CIFSFormatMFSymlink(u8 *buf, unsigned int buf_len, const char *link_str)
{
+ int rc;
unsigned int link_len;
unsigned int ofs;
- struct MD5Context md5_ctx;
u8 md5_hash[16];
if (buf_len != CIFS_MF_SYMLINK_FILE_SIZE)
if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN)
return -ENAMETOOLONG;
- cifs_MD5_init(&md5_ctx);
- cifs_MD5_update(&md5_ctx, (const u8 *)link_str, link_len);
- cifs_MD5_final(md5_hash, &md5_ctx);
+ rc = symlink_hash(link_len, link_str, md5_hash);
+ if (rc) {
+ cFYI(1, "%s: MD5 hash failure: %d\n", __func__, rc);
+ return rc;
+ }
snprintf(buf, buf_len,
CIFS_MF_SYMLINK_LEN_FORMAT CIFS_MF_SYMLINK_MD5_FORMAT,
+++ /dev/null
-/*
- Unix SMB/Netbios implementation.
- Version 1.9.
- a implementation of MD4 designed for use in the SMB authentication protocol
- Copyright (C) Andrew Tridgell 1997-1998.
- Modified by Steve French (sfrench@us.ibm.com) 2002-2003
-
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; either version 2 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-*/
-#include <linux/module.h>
-#include <linux/fs.h>
-#include "cifsencrypt.h"
-
-/* NOTE: This code makes no attempt to be fast! */
-
-static __u32
-F(__u32 X, __u32 Y, __u32 Z)
-{
- return (X & Y) | ((~X) & Z);
-}
-
-static __u32
-G(__u32 X, __u32 Y, __u32 Z)
-{
- return (X & Y) | (X & Z) | (Y & Z);
-}
-
-static __u32
-H(__u32 X, __u32 Y, __u32 Z)
-{
- return X ^ Y ^ Z;
-}
-
-static __u32
-lshift(__u32 x, int s)
-{
- x &= 0xFFFFFFFF;
- return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s));
-}
-
-#define ROUND1(a,b,c,d,k,s) (*a) = lshift((*a) + F(*b,*c,*d) + X[k], s)
-#define ROUND2(a,b,c,d,k,s) (*a) = lshift((*a) + G(*b,*c,*d) + X[k] + (__u32)0x5A827999,s)
-#define ROUND3(a,b,c,d,k,s) (*a) = lshift((*a) + H(*b,*c,*d) + X[k] + (__u32)0x6ED9EBA1,s)
-
-/* this applies md4 to 64 byte chunks */
-static void
-mdfour64(__u32 *M, __u32 *A, __u32 *B, __u32 *C, __u32 *D)
-{
- int j;
- __u32 AA, BB, CC, DD;
- __u32 X[16];
-
-
- for (j = 0; j < 16; j++)
- X[j] = M[j];
-
- AA = *A;
- BB = *B;
- CC = *C;
- DD = *D;
-
- ROUND1(A, B, C, D, 0, 3);
- ROUND1(D, A, B, C, 1, 7);
- ROUND1(C, D, A, B, 2, 11);
- ROUND1(B, C, D, A, 3, 19);
- ROUND1(A, B, C, D, 4, 3);
- ROUND1(D, A, B, C, 5, 7);
- ROUND1(C, D, A, B, 6, 11);
- ROUND1(B, C, D, A, 7, 19);
- ROUND1(A, B, C, D, 8, 3);
- ROUND1(D, A, B, C, 9, 7);
- ROUND1(C, D, A, B, 10, 11);
- ROUND1(B, C, D, A, 11, 19);
- ROUND1(A, B, C, D, 12, 3);
- ROUND1(D, A, B, C, 13, 7);
- ROUND1(C, D, A, B, 14, 11);
- ROUND1(B, C, D, A, 15, 19);
-
- ROUND2(A, B, C, D, 0, 3);
- ROUND2(D, A, B, C, 4, 5);
- ROUND2(C, D, A, B, 8, 9);
- ROUND2(B, C, D, A, 12, 13);
- ROUND2(A, B, C, D, 1, 3);
- ROUND2(D, A, B, C, 5, 5);
- ROUND2(C, D, A, B, 9, 9);
- ROUND2(B, C, D, A, 13, 13);
- ROUND2(A, B, C, D, 2, 3);
- ROUND2(D, A, B, C, 6, 5);
- ROUND2(C, D, A, B, 10, 9);
- ROUND2(B, C, D, A, 14, 13);
- ROUND2(A, B, C, D, 3, 3);
- ROUND2(D, A, B, C, 7, 5);
- ROUND2(C, D, A, B, 11, 9);
- ROUND2(B, C, D, A, 15, 13);
-
- ROUND3(A, B, C, D, 0, 3);
- ROUND3(D, A, B, C, 8, 9);
- ROUND3(C, D, A, B, 4, 11);
- ROUND3(B, C, D, A, 12, 15);
- ROUND3(A, B, C, D, 2, 3);
- ROUND3(D, A, B, C, 10, 9);
- ROUND3(C, D, A, B, 6, 11);
- ROUND3(B, C, D, A, 14, 15);
- ROUND3(A, B, C, D, 1, 3);
- ROUND3(D, A, B, C, 9, 9);
- ROUND3(C, D, A, B, 5, 11);
- ROUND3(B, C, D, A, 13, 15);
- ROUND3(A, B, C, D, 3, 3);
- ROUND3(D, A, B, C, 11, 9);
- ROUND3(C, D, A, B, 7, 11);
- ROUND3(B, C, D, A, 15, 15);
-
- *A += AA;
- *B += BB;
- *C += CC;
- *D += DD;
-
- *A &= 0xFFFFFFFF;
- *B &= 0xFFFFFFFF;
- *C &= 0xFFFFFFFF;
- *D &= 0xFFFFFFFF;
-
- for (j = 0; j < 16; j++)
- X[j] = 0;
-}
-
-static void
-copy64(__u32 *M, unsigned char *in)
-{
- int i;
-
- for (i = 0; i < 16; i++)
- M[i] = (in[i * 4 + 3] << 24) | (in[i * 4 + 2] << 16) |
- (in[i * 4 + 1] << 8) | (in[i * 4 + 0] << 0);
-}
-
-static void
-copy4(unsigned char *out, __u32 x)
-{
- out[0] = x & 0xFF;
- out[1] = (x >> 8) & 0xFF;
- out[2] = (x >> 16) & 0xFF;
- out[3] = (x >> 24) & 0xFF;
-}
-
-/* produce a md4 message digest from data of length n bytes */
-void
-mdfour(unsigned char *out, unsigned char *in, int n)
-{
- unsigned char buf[128];
- __u32 M[16];
- __u32 b = n * 8;
- int i;
- __u32 A = 0x67452301;
- __u32 B = 0xefcdab89;
- __u32 C = 0x98badcfe;
- __u32 D = 0x10325476;
-
- while (n > 64) {
- copy64(M, in);
- mdfour64(M, &A, &B, &C, &D);
- in += 64;
- n -= 64;
- }
-
- for (i = 0; i < 128; i++)
- buf[i] = 0;
- memcpy(buf, in, n);
- buf[n] = 0x80;
-
- if (n <= 55) {
- copy4(buf + 56, b);
- copy64(M, buf);
- mdfour64(M, &A, &B, &C, &D);
- } else {
- copy4(buf + 120, b);
- copy64(M, buf);
- mdfour64(M, &A, &B, &C, &D);
- copy64(M, buf + 64);
- mdfour64(M, &A, &B, &C, &D);
- }
-
- for (i = 0; i < 128; i++)
- buf[i] = 0;
- copy64(M, buf);
-
- copy4(out, A);
- copy4(out + 4, B);
- copy4(out + 8, C);
- copy4(out + 12, D);
-
- A = B = C = D = 0;
-}
+++ /dev/null
-/*
- * This code implements the MD5 message-digest algorithm.
- * The algorithm is due to Ron Rivest. This code was
- * written by Colin Plumb in 1993, no copyright is claimed.
- * This code is in the public domain; do with it what you wish.
- *
- * Equivalent code is available from RSA Data Security, Inc.
- * This code has been tested against that, and is equivalent,
- * except that you don't need to include two pages of legalese
- * with every copy.
- *
- * To compute the message digest of a chunk of bytes, declare an
- * MD5Context structure, pass it to cifs_MD5_init, call cifs_MD5_update as
- * needed on buffers full of bytes, and then call cifs_MD5_final, which
- * will fill a supplied 16-byte array with the digest.
- */
-
-/* This code slightly modified to fit into Samba by
- abartlet@samba.org Jun 2001
- and to fit the cifs vfs by
- Steve French sfrench@us.ibm.com */
-
-#include <linux/string.h>
-#include "md5.h"
-
-static void MD5Transform(__u32 buf[4], __u32 const in[16]);
-
-/*
- * Note: this code is harmless on little-endian machines.
- */
-static void
-byteReverse(unsigned char *buf, unsigned longs)
-{
- __u32 t;
- do {
- t = (__u32) ((unsigned) buf[3] << 8 | buf[2]) << 16 |
- ((unsigned) buf[1] << 8 | buf[0]);
- *(__u32 *) buf = t;
- buf += 4;
- } while (--longs);
-}
-
-/*
- * Start MD5 accumulation. Set bit count to 0 and buffer to mysterious
- * initialization constants.
- */
-void
-cifs_MD5_init(struct MD5Context *ctx)
-{
- ctx->buf[0] = 0x67452301;
- ctx->buf[1] = 0xefcdab89;
- ctx->buf[2] = 0x98badcfe;
- ctx->buf[3] = 0x10325476;
-
- ctx->bits[0] = 0;
- ctx->bits[1] = 0;
-}
-
-/*
- * Update context to reflect the concatenation of another buffer full
- * of bytes.
- */
-void
-cifs_MD5_update(struct MD5Context *ctx, unsigned char const *buf, unsigned len)
-{
- register __u32 t;
-
- /* Update bitcount */
-
- t = ctx->bits[0];
- if ((ctx->bits[0] = t + ((__u32) len << 3)) < t)
- ctx->bits[1]++; /* Carry from low to high */
- ctx->bits[1] += len >> 29;
-
- t = (t >> 3) & 0x3f; /* Bytes already in shsInfo->data */
-
- /* Handle any leading odd-sized chunks */
-
- if (t) {
- unsigned char *p = (unsigned char *) ctx->in + t;
-
- t = 64 - t;
- if (len < t) {
- memmove(p, buf, len);
- return;
- }
- memmove(p, buf, t);
- byteReverse(ctx->in, 16);
- MD5Transform(ctx->buf, (__u32 *) ctx->in);
- buf += t;
- len -= t;
- }
- /* Process data in 64-byte chunks */
-
- while (len >= 64) {
- memmove(ctx->in, buf, 64);
- byteReverse(ctx->in, 16);
- MD5Transform(ctx->buf, (__u32 *) ctx->in);
- buf += 64;
- len -= 64;
- }
-
- /* Handle any remaining bytes of data. */
-
- memmove(ctx->in, buf, len);
-}
-
-/*
- * Final wrapup - pad to 64-byte boundary with the bit pattern
- * 1 0* (64-bit count of bits processed, MSB-first)
- */
-void
-cifs_MD5_final(unsigned char digest[16], struct MD5Context *ctx)
-{
- unsigned int count;
- unsigned char *p;
-
- /* Compute number of bytes mod 64 */
- count = (ctx->bits[0] >> 3) & 0x3F;
-
- /* Set the first char of padding to 0x80. This is safe since there is
- always at least one byte free */
- p = ctx->in + count;
- *p++ = 0x80;
-
- /* Bytes of padding needed to make 64 bytes */
- count = 64 - 1 - count;
-
- /* Pad out to 56 mod 64 */
- if (count < 8) {
- /* Two lots of padding: Pad the first block to 64 bytes */
- memset(p, 0, count);
- byteReverse(ctx->in, 16);
- MD5Transform(ctx->buf, (__u32 *) ctx->in);
-
- /* Now fill the next block with 56 bytes */
- memset(ctx->in, 0, 56);
- } else {
- /* Pad block to 56 bytes */
- memset(p, 0, count - 8);
- }
- byteReverse(ctx->in, 14);
-
- /* Append length in bits and transform */
- ((__u32 *) ctx->in)[14] = ctx->bits[0];
- ((__u32 *) ctx->in)[15] = ctx->bits[1];
-
- MD5Transform(ctx->buf, (__u32 *) ctx->in);
- byteReverse((unsigned char *) ctx->buf, 4);
- memmove(digest, ctx->buf, 16);
- memset(ctx, 0, sizeof(*ctx)); /* In case it's sensitive */
-}
-
-/* The four core functions - F1 is optimized somewhat */
-
-/* #define F1(x, y, z) (x & y | ~x & z) */
-#define F1(x, y, z) (z ^ (x & (y ^ z)))
-#define F2(x, y, z) F1(z, x, y)
-#define F3(x, y, z) (x ^ y ^ z)
-#define F4(x, y, z) (y ^ (x | ~z))
-
-/* This is the central step in the MD5 algorithm. */
-#define MD5STEP(f, w, x, y, z, data, s) \
- (w += f(x, y, z) + data, w = w<<s | w>>(32-s), w += x)
-
-/*
- * The core of the MD5 algorithm, this alters an existing MD5 hash to
- * reflect the addition of 16 longwords of new data. cifs_MD5_update blocks
- * the data and converts bytes into longwords for this routine.
- */
-static void
-MD5Transform(__u32 buf[4], __u32 const in[16])
-{
- register __u32 a, b, c, d;
-
- a = buf[0];
- b = buf[1];
- c = buf[2];
- d = buf[3];
-
- MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7);
- MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12);
- MD5STEP(F1, c, d, a, b, in[2] + 0x242070db, 17);
- MD5STEP(F1, b, c, d, a, in[3] + 0xc1bdceee, 22);
- MD5STEP(F1, a, b, c, d, in[4] + 0xf57c0faf, 7);
- MD5STEP(F1, d, a, b, c, in[5] + 0x4787c62a, 12);
- MD5STEP(F1, c, d, a, b, in[6] + 0xa8304613, 17);
- MD5STEP(F1, b, c, d, a, in[7] + 0xfd469501, 22);
- MD5STEP(F1, a, b, c, d, in[8] + 0x698098d8, 7);
- MD5STEP(F1, d, a, b, c, in[9] + 0x8b44f7af, 12);
- MD5STEP(F1, c, d, a, b, in[10] + 0xffff5bb1, 17);
- MD5STEP(F1, b, c, d, a, in[11] + 0x895cd7be, 22);
- MD5STEP(F1, a, b, c, d, in[12] + 0x6b901122, 7);
- MD5STEP(F1, d, a, b, c, in[13] + 0xfd987193, 12);
- MD5STEP(F1, c, d, a, b, in[14] + 0xa679438e, 17);
- MD5STEP(F1, b, c, d, a, in[15] + 0x49b40821, 22);
-
- MD5STEP(F2, a, b, c, d, in[1] + 0xf61e2562, 5);
- MD5STEP(F2, d, a, b, c, in[6] + 0xc040b340, 9);
- MD5STEP(F2, c, d, a, b, in[11] + 0x265e5a51, 14);
- MD5STEP(F2, b, c, d, a, in[0] + 0xe9b6c7aa, 20);
- MD5STEP(F2, a, b, c, d, in[5] + 0xd62f105d, 5);
- MD5STEP(F2, d, a, b, c, in[10] + 0x02441453, 9);
- MD5STEP(F2, c, d, a, b, in[15] + 0xd8a1e681, 14);
- MD5STEP(F2, b, c, d, a, in[4] + 0xe7d3fbc8, 20);
- MD5STEP(F2, a, b, c, d, in[9] + 0x21e1cde6, 5);
- MD5STEP(F2, d, a, b, c, in[14] + 0xc33707d6, 9);
- MD5STEP(F2, c, d, a, b, in[3] + 0xf4d50d87, 14);
- MD5STEP(F2, b, c, d, a, in[8] + 0x455a14ed, 20);
- MD5STEP(F2, a, b, c, d, in[13] + 0xa9e3e905, 5);
- MD5STEP(F2, d, a, b, c, in[2] + 0xfcefa3f8, 9);
- MD5STEP(F2, c, d, a, b, in[7] + 0x676f02d9, 14);
- MD5STEP(F2, b, c, d, a, in[12] + 0x8d2a4c8a, 20);
-
- MD5STEP(F3, a, b, c, d, in[5] + 0xfffa3942, 4);
- MD5STEP(F3, d, a, b, c, in[8] + 0x8771f681, 11);
- MD5STEP(F3, c, d, a, b, in[11] + 0x6d9d6122, 16);
- MD5STEP(F3, b, c, d, a, in[14] + 0xfde5380c, 23);
- MD5STEP(F3, a, b, c, d, in[1] + 0xa4beea44, 4);
- MD5STEP(F3, d, a, b, c, in[4] + 0x4bdecfa9, 11);
- MD5STEP(F3, c, d, a, b, in[7] + 0xf6bb4b60, 16);
- MD5STEP(F3, b, c, d, a, in[10] + 0xbebfbc70, 23);
- MD5STEP(F3, a, b, c, d, in[13] + 0x289b7ec6, 4);
- MD5STEP(F3, d, a, b, c, in[0] + 0xeaa127fa, 11);
- MD5STEP(F3, c, d, a, b, in[3] + 0xd4ef3085, 16);
- MD5STEP(F3, b, c, d, a, in[6] + 0x04881d05, 23);
- MD5STEP(F3, a, b, c, d, in[9] + 0xd9d4d039, 4);
- MD5STEP(F3, d, a, b, c, in[12] + 0xe6db99e5, 11);
- MD5STEP(F3, c, d, a, b, in[15] + 0x1fa27cf8, 16);
- MD5STEP(F3, b, c, d, a, in[2] + 0xc4ac5665, 23);
-
- MD5STEP(F4, a, b, c, d, in[0] + 0xf4292244, 6);
- MD5STEP(F4, d, a, b, c, in[7] + 0x432aff97, 10);
- MD5STEP(F4, c, d, a, b, in[14] + 0xab9423a7, 15);
- MD5STEP(F4, b, c, d, a, in[5] + 0xfc93a039, 21);
- MD5STEP(F4, a, b, c, d, in[12] + 0x655b59c3, 6);
- MD5STEP(F4, d, a, b, c, in[3] + 0x8f0ccc92, 10);
- MD5STEP(F4, c, d, a, b, in[10] + 0xffeff47d, 15);
- MD5STEP(F4, b, c, d, a, in[1] + 0x85845dd1, 21);
- MD5STEP(F4, a, b, c, d, in[8] + 0x6fa87e4f, 6);
- MD5STEP(F4, d, a, b, c, in[15] + 0xfe2ce6e0, 10);
- MD5STEP(F4, c, d, a, b, in[6] + 0xa3014314, 15);
- MD5STEP(F4, b, c, d, a, in[13] + 0x4e0811a1, 21);
- MD5STEP(F4, a, b, c, d, in[4] + 0xf7537e82, 6);
- MD5STEP(F4, d, a, b, c, in[11] + 0xbd3af235, 10);
- MD5STEP(F4, c, d, a, b, in[2] + 0x2ad7d2bb, 15);
- MD5STEP(F4, b, c, d, a, in[9] + 0xeb86d391, 21);
-
- buf[0] += a;
- buf[1] += b;
- buf[2] += c;
- buf[3] += d;
-}
-
-#if 0 /* currently unused */
-/***********************************************************************
- the rfc 2104 version of hmac_md5 initialisation.
-***********************************************************************/
-static void
-hmac_md5_init_rfc2104(unsigned char *key, int key_len,
- struct HMACMD5Context *ctx)
-{
- int i;
-
- /* if key is longer than 64 bytes reset it to key=MD5(key) */
- if (key_len > 64) {
- unsigned char tk[16];
- struct MD5Context tctx;
-
- cifs_MD5_init(&tctx);
- cifs_MD5_update(&tctx, key, key_len);
- cifs_MD5_final(tk, &tctx);
-
- key = tk;
- key_len = 16;
- }
-
- /* start out by storing key in pads */
- memset(ctx->k_ipad, 0, sizeof(ctx->k_ipad));
- memset(ctx->k_opad, 0, sizeof(ctx->k_opad));
- memcpy(ctx->k_ipad, key, key_len);
- memcpy(ctx->k_opad, key, key_len);
-
- /* XOR key with ipad and opad values */
- for (i = 0; i < 64; i++) {
- ctx->k_ipad[i] ^= 0x36;
- ctx->k_opad[i] ^= 0x5c;
- }
-
- cifs_MD5_init(&ctx->ctx);
- cifs_MD5_update(&ctx->ctx, ctx->k_ipad, 64);
-}
-#endif
-
-/***********************************************************************
- the microsoft version of hmac_md5 initialisation.
-***********************************************************************/
-void
-hmac_md5_init_limK_to_64(const unsigned char *key, int key_len,
- struct HMACMD5Context *ctx)
-{
- int i;
-
- /* if key is longer than 64 bytes truncate it */
- if (key_len > 64)
- key_len = 64;
-
- /* start out by storing key in pads */
- memset(ctx->k_ipad, 0, sizeof(ctx->k_ipad));
- memset(ctx->k_opad, 0, sizeof(ctx->k_opad));
- memcpy(ctx->k_ipad, key, key_len);
- memcpy(ctx->k_opad, key, key_len);
-
- /* XOR key with ipad and opad values */
- for (i = 0; i < 64; i++) {
- ctx->k_ipad[i] ^= 0x36;
- ctx->k_opad[i] ^= 0x5c;
- }
-
- cifs_MD5_init(&ctx->ctx);
- cifs_MD5_update(&ctx->ctx, ctx->k_ipad, 64);
-}
-
-/***********************************************************************
- update hmac_md5 "inner" buffer
-***********************************************************************/
-void
-hmac_md5_update(const unsigned char *text, int text_len,
- struct HMACMD5Context *ctx)
-{
- cifs_MD5_update(&ctx->ctx, text, text_len); /* then text of datagram */
-}
-
-/***********************************************************************
- finish off hmac_md5 "inner" buffer and generate outer one.
-***********************************************************************/
-void
-hmac_md5_final(unsigned char *digest, struct HMACMD5Context *ctx)
-{
- struct MD5Context ctx_o;
-
- cifs_MD5_final(digest, &ctx->ctx);
-
- cifs_MD5_init(&ctx_o);
- cifs_MD5_update(&ctx_o, ctx->k_opad, 64);
- cifs_MD5_update(&ctx_o, digest, 16);
- cifs_MD5_final(digest, &ctx_o);
-}
-
-/***********************************************************
- single function to calculate an HMAC MD5 digest from data.
- use the microsoft hmacmd5 init method because the key is 16 bytes.
-************************************************************/
-#if 0 /* currently unused */
-static void
-hmac_md5(unsigned char key[16], unsigned char *data, int data_len,
- unsigned char *digest)
-{
- struct HMACMD5Context ctx;
- hmac_md5_init_limK_to_64(key, 16, &ctx);
- if (data_len != 0)
- hmac_md5_update(data, data_len, &ctx);
-
- hmac_md5_final(digest, &ctx);
-}
-#endif
+++ /dev/null
-#ifndef MD5_H
-#define MD5_H
-#ifndef HEADER_MD5_H
-/* Try to avoid clashes with OpenSSL */
-#define HEADER_MD5_H
-#endif
-
-struct MD5Context {
- __u32 buf[4];
- __u32 bits[2];
- unsigned char in[64];
-};
-#endif /* !MD5_H */
-
-#ifndef _HMAC_MD5_H
-struct HMACMD5Context {
- struct MD5Context ctx;
- unsigned char k_ipad[65];
- unsigned char k_opad[65];
-};
-#endif /* _HMAC_MD5_H */
-
-void cifs_MD5_init(struct MD5Context *context);
-void cifs_MD5_update(struct MD5Context *context, unsigned char const *buf,
- unsigned len);
-void cifs_MD5_final(unsigned char digest[16], struct MD5Context *context);
-
-/* The following definitions come from lib/hmacmd5.c */
-
-/* void hmac_md5_init_rfc2104(unsigned char *key, int key_len,
- struct HMACMD5Context *ctx);*/
-void hmac_md5_init_limK_to_64(const unsigned char *key, int key_len,
- struct HMACMD5Context *ctx);
-void hmac_md5_update(const unsigned char *text, int text_len,
- struct HMACMD5Context *ctx);
-void hmac_md5_final(unsigned char *digest, struct HMACMD5Context *ctx);
-/* void hmac_md5(unsigned char key[16], unsigned char *data, int data_len,
- unsigned char *digest);*/
{
__u16 mid = 0;
__u16 last_mid;
- int collision;
-
- if (server == NULL)
- return mid;
+ bool collision;
spin_lock(&GlobalMid_Lock);
last_mid = server->CurrentMid; /* we do not want to loop forever */
(and it would also have to have been a request that
did not time out) */
while (server->CurrentMid != last_mid) {
- struct list_head *tmp;
struct mid_q_entry *mid_entry;
+ unsigned int num_mids;
- collision = 0;
+ collision = false;
if (server->CurrentMid == 0)
server->CurrentMid++;
- list_for_each(tmp, &server->pending_mid_q) {
- mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
-
- if ((mid_entry->mid == server->CurrentMid) &&
- (mid_entry->midState == MID_REQUEST_SUBMITTED)) {
+ num_mids = 0;
+ list_for_each_entry(mid_entry, &server->pending_mid_q, qhead) {
+ ++num_mids;
+ if (mid_entry->mid == server->CurrentMid &&
+ mid_entry->midState == MID_REQUEST_SUBMITTED) {
/* This mid is in use, try a different one */
- collision = 1;
+ collision = true;
break;
}
}
- if (collision == 0) {
+
+ /*
+ * if we have more than 32k mids in the list, then something
+ * is very wrong. Possibly a local user is trying to DoS the
+ * box by issuing long-running calls and SIGKILL'ing them. If
+ * we get to 2^16 mids then we're in big trouble as this
+ * function could loop forever.
+ *
+ * Go ahead and assign out the mid in this situation, but force
+ * an eventual reconnect to clean out the pending_mid_q.
+ */
+ if (num_mids > 32768)
+ server->tcpStatus = CifsNeedReconnect;
+
+ if (!collision) {
mid = server->CurrentMid;
break;
}
}
static int
-checkSMBhdr(struct smb_hdr *smb, __u16 mid)
+check_smb_hdr(struct smb_hdr *smb, __u16 mid)
{
- /* Make sure that this really is an SMB, that it is a response,
- and that the message ids match */
- if ((*(__le32 *) smb->Protocol == cpu_to_le32(0x424d53ff)) &&
- (mid == smb->Mid)) {
- if (smb->Flags & SMBFLG_RESPONSE)
- return 0;
- else {
- /* only one valid case where server sends us request */
- if (smb->Command == SMB_COM_LOCKING_ANDX)
- return 0;
- else
- cERROR(1, "Received Request not response");
- }
- } else { /* bad signature or mid */
- if (*(__le32 *) smb->Protocol != cpu_to_le32(0x424d53ff))
- cERROR(1, "Bad protocol string signature header %x",
- *(unsigned int *) smb->Protocol);
- if (mid != smb->Mid)
- cERROR(1, "Mids do not match");
+ /* does it have the right SMB "signature" ? */
+ if (*(__le32 *) smb->Protocol != cpu_to_le32(0x424d53ff)) {
+ cERROR(1, "Bad protocol string signature header 0x%x",
+ *(unsigned int *)smb->Protocol);
+ return 1;
}
- cERROR(1, "bad smb detected. The Mid=%d", smb->Mid);
+
+ /* Make sure that message ids match */
+ if (mid != smb->Mid) {
+ cERROR(1, "Mids do not match. received=%u expected=%u",
+ smb->Mid, mid);
+ return 1;
+ }
+
+ /* if it's a response then accept */
+ if (smb->Flags & SMBFLG_RESPONSE)
+ return 0;
+
+ /* only one valid case where server sends us request */
+ if (smb->Command == SMB_COM_LOCKING_ANDX)
+ return 0;
+
+ cERROR(1, "Server sent request, not response. mid=%u", smb->Mid);
return 1;
}
return 1;
}
- if (checkSMBhdr(smb, mid))
+ if (check_smb_hdr(smb, mid))
return 1;
clc_len = smbCalcSize_LE(smb);
if (((4 + len) & 0xFFFF) == (clc_len & 0xFFFF))
return 0; /* bcc wrapped */
}
- cFYI(1, "Calculated size %d vs length %d mismatch for mid %d",
+ cFYI(1, "Calculated size %u vs length %u mismatch for mid=%u",
clc_len, 4 + len, smb->Mid);
- /* Windows XP can return a few bytes too much, presumably
- an illegal pad, at the end of byte range lock responses
- so we allow for that three byte pad, as long as actual
- received length is as long or longer than calculated length */
- /* We have now had to extend this more, since there is a
- case in which it needs to be bigger still to handle a
- malformed response to transact2 findfirst from WinXP when
- access denied is returned and thus bcc and wct are zero
- but server says length is 0x21 bytes too long as if the server
- forget to reset the smb rfc1001 length when it reset the
- wct and bcc to minimum size and drop the t2 parms and data */
- if ((4+len > clc_len) && (len <= clc_len + 512))
- return 0;
- else {
- cERROR(1, "RFC1001 size %d bigger than SMB for Mid=%d",
+
+ if (4 + len < clc_len) {
+ cERROR(1, "RFC1001 size %u smaller than SMB for mid=%u",
len, smb->Mid);
return 1;
+ } else if (len > clc_len + 512) {
+ /*
+ * Some servers (Windows XP in particular) send more
+ * data than the lengths in the SMB packet would
+ * indicate on certain calls (byte range locks and
+ * trans2 find first calls in particular). While the
+ * client can handle such a frame by ignoring the
+			 * trailing data, we choose to limit the amount of extra
+ * data to 512 bytes.
+ */
+ cERROR(1, "RFC1001 size %u more than 512 bytes larger "
+ "than SMB for mid=%u", len, smb->Mid);
+ return 1;
}
}
return 0;
pCifsInode = CIFS_I(netfile->dentry->d_inode);
cifs_set_oplock_level(pCifsInode,
- pSMB->OplockLevel);
+ pSMB->OplockLevel ? OPLOCK_READ : 0);
/*
* cifs_oplock_break_put() can't be called
* from here. Get reference after queueing
return;
}
-/* Convert 16 bit Unicode pathname to wire format from string in current code
- page. Conversion may involve remapping up the seven characters that are
- only legal in POSIX-like OS (if they are present in the string). Path
- names are little endian 16 bit Unicode on the wire */
-int
-cifsConvertToUCS(__le16 *target, const char *source, int maxlen,
- const struct nls_table *cp, int mapChars)
-{
- int i, j, charlen;
- int len_remaining = maxlen;
- char src_char;
- __u16 temp;
-
- if (!mapChars)
- return cifs_strtoUCS(target, source, PATH_MAX, cp);
-
- for (i = 0, j = 0; i < maxlen; j++) {
- src_char = source[i];
- switch (src_char) {
- case 0:
- target[j] = 0;
- goto ctoUCS_out;
- case ':':
- target[j] = cpu_to_le16(UNI_COLON);
- break;
- case '*':
- target[j] = cpu_to_le16(UNI_ASTERIK);
- break;
- case '?':
- target[j] = cpu_to_le16(UNI_QUESTION);
- break;
- case '<':
- target[j] = cpu_to_le16(UNI_LESSTHAN);
- break;
- case '>':
- target[j] = cpu_to_le16(UNI_GRTRTHAN);
- break;
- case '|':
- target[j] = cpu_to_le16(UNI_PIPE);
- break;
- /* BB We can not handle remapping slash until
- all the calls to build_path_from_dentry
- are modified, as they use slash as separator BB */
- /* case '\\':
- target[j] = cpu_to_le16(UNI_SLASH);
- break;*/
- default:
- charlen = cp->char2uni(source+i,
- len_remaining, &temp);
- /* if no match, use question mark, which
- at least in some cases servers as wild card */
- if (charlen < 1) {
- target[j] = cpu_to_le16(0x003f);
- charlen = 1;
- } else
- target[j] = cpu_to_le16(temp);
- len_remaining -= charlen;
- /* character may take more than one byte in the
- the source string, but will take exactly two
- bytes in the target string */
- i += charlen;
- continue;
- }
- i++; /* move to next char in source string */
- len_remaining--;
- }
-
-ctoUCS_out:
- return i;
-}
-
void
cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb)
{
{
int rc, alen, slen;
const char *pct;
- char *endp, scope_id[13];
+ char scope_id[13];
struct sockaddr_in *s4 = (struct sockaddr_in *) dst;
struct sockaddr_in6 *s6 = (struct sockaddr_in6 *) dst;
memcpy(scope_id, pct + 1, slen);
scope_id[slen] = '\0';
- s6->sin6_scope_id = (u32) simple_strtoul(pct, &endp, 0);
- if (endp != scope_id + slen)
- return 0;
+ rc = strict_strtoul(scope_id, 0,
+ (unsigned long *)&s6->sin6_scope_id);
+ rc = (rc == 0) ? 1 : 0;
}
return rc;
smbCalcSize(struct smb_hdr *ptr)
{
return (sizeof(struct smb_hdr) + (2 * ptr->WordCount) +
- 2 /* size of the bcc field */ + BCC(ptr));
+ 2 /* size of the bcc field */ + get_bcc(ptr));
}
unsigned int
smbCalcSize_LE(struct smb_hdr *ptr)
{
return (sizeof(struct smb_hdr) + (2 * ptr->WordCount) +
- 2 /* size of the bcc field */ + le16_to_cpu(BCC_LE(ptr)));
+ 2 /* size of the bcc field */ + get_bcc_le(ptr));
}
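The removed BCC()/BCC_LE() macro accesses in these hunks are replaced by get_bcc()/get_bcc_le() and put_bcc()/put_bcc_le() accessors whose definitions are not part of this excerpt. A plausible sketch, inferred only from the call sites above (the ByteCount is the 16-bit field that directly follows the 2 * WordCount parameter bytes), might look like the following; the real accessors live in fs/cifs/cifspdu.h and may differ in detail:

#include <asm/unaligned.h>

/* hypothetical sketch, not taken from the patch */
static inline __u16 get_bcc(struct smb_hdr *hdr)
{
	/* ByteCount sits right after the header and the parameter words */
	return get_unaligned((__u16 *)((char *)hdr + sizeof(struct smb_hdr) +
				       (2 * hdr->WordCount)));
}

static inline void put_bcc(__u16 count, struct smb_hdr *hdr)
{
	put_unaligned(count, (__u16 *)((char *)hdr + sizeof(struct smb_hdr) +
				       (2 * hdr->WordCount)));
}

The _le variants would operate on the same field in its little-endian wire form.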
/* The following are taken from fs/ntfs/util.c */
{
int rc = 0;
int xid, i;
- struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
struct cifsFileInfo *cifsFile = NULL;
char *current_entry;
xid = GetXid();
- cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
-
/*
* Ensure FindFirst doesn't fail before doing filldir() for '.' and
* '..'. Otherwise we won't be able to notify VFS in case of failure.
}
static void
-decode_unicode_ssetup(char **pbcc_area, int bleft, struct cifsSesInfo *ses,
+decode_unicode_ssetup(char **pbcc_area, __u16 bleft, struct cifsSesInfo *ses,
const struct nls_table *nls_cp)
{
int len;
return;
}
-static int decode_ascii_ssetup(char **pbcc_area, int bleft,
+static int decode_ascii_ssetup(char **pbcc_area, __u16 bleft,
struct cifsSesInfo *ses,
const struct nls_table *nls_cp)
{
char *str_area;
SESSION_SETUP_ANDX *pSMB;
__u32 capabilities;
- int count;
+ __u16 count;
int resp_buf_type;
struct kvec iov[3];
enum securityEnum type;
- __u16 action;
- int bytes_remaining;
+ __u16 action, bytes_remaining;
struct key *spnego_key = NULL;
__le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */
u16 blob_len;
if (type == LANMAN) {
#ifdef CONFIG_CIFS_WEAK_PW_HASH
- char lnm_session_key[CIFS_SESS_KEY_SIZE];
+ char lnm_session_key[CIFS_AUTH_RESP_SIZE];
pSMB->req.hdr.Flags2 &= ~SMBFLG2_UNICODE;
/* no capabilities flags in old lanman negotiation */
- pSMB->old_req.PasswordLength = cpu_to_le16(CIFS_SESS_KEY_SIZE);
+ pSMB->old_req.PasswordLength = cpu_to_le16(CIFS_AUTH_RESP_SIZE);
/* Calculate hash with password and copy into bcc_ptr.
* Encryption Key (stored as in cryptkey) gets used if the
true : false, lnm_session_key);
ses->flags |= CIFS_SES_LANMAN;
- memcpy(bcc_ptr, (char *)lnm_session_key, CIFS_SESS_KEY_SIZE);
- bcc_ptr += CIFS_SESS_KEY_SIZE;
+ memcpy(bcc_ptr, (char *)lnm_session_key, CIFS_AUTH_RESP_SIZE);
+ bcc_ptr += CIFS_AUTH_RESP_SIZE;
/* can not sign if LANMAN negotiated so no need
to calculate signing key? but what if server
count = iov[1].iov_len + iov[2].iov_len;
smb_buf->smb_buf_length += count;
- BCC_LE(smb_buf) = cpu_to_le16(count);
+ put_bcc_le(count, smb_buf);
rc = SendReceive2(xid, ses, iov, 3 /* num_iovecs */, &resp_buf_type,
- CIFS_STD_OP /* not long */ | CIFS_LOG_ERROR);
+ CIFS_LOG_ERROR);
/* SMB request buf freed in SendReceive2 */
pSMB = (SESSION_SETUP_ANDX *)iov[0].iov_base;
cFYI(1, "UID = %d ", ses->Suid);
/* response can have either 3 or 4 word count - Samba sends 3 */
/* and lanman response is 3 */
- bytes_remaining = BCC(smb_buf);
+ bytes_remaining = get_bcc(smb_buf);
bcc_ptr = pByteArea(smb_buf);
if (smb_buf->WordCount == 4) {
up with a different answer to the one above)
*/
#include <linux/slab.h>
-#include "cifsencrypt.h"
#define uchar unsigned char
static uchar perm1[56] = { 57, 49, 41, 33, 25, 17, 9,
#include "cifs_unicode.h"
#include "cifspdu.h"
#include "cifsglob.h"
-#include "md5.h"
#include "cifs_debug.h"
-#include "cifsencrypt.h"
+#include "cifsproto.h"
#ifndef false
#define false 0
#define SSVALX(buf,pos,val) (CVAL(buf,pos)=(val)&0xFF,CVAL(buf,pos+1)=(val)>>8)
#define SSVAL(buf,pos,val) SSVALX((buf),(pos),((__u16)(val)))
-/*The following definitions come from libsmb/smbencrypt.c */
+/* produce a md4 message digest from data of length n bytes */
+int
+mdfour(unsigned char *md4_hash, unsigned char *link_str, int link_len)
+{
+ int rc;
+ unsigned int size;
+ struct crypto_shash *md4;
+ struct sdesc *sdescmd4;
+
+ md4 = crypto_alloc_shash("md4", 0, 0);
+ if (IS_ERR(md4)) {
+ rc = PTR_ERR(md4);
+ cERROR(1, "%s: Crypto md4 allocation error %d\n", __func__, rc);
+ return rc;
+ }
+ size = sizeof(struct shash_desc) + crypto_shash_descsize(md4);
+ sdescmd4 = kmalloc(size, GFP_KERNEL);
+ if (!sdescmd4) {
+ rc = -ENOMEM;
+ cERROR(1, "%s: Memory allocation failure\n", __func__);
+ goto mdfour_err;
+ }
+ sdescmd4->shash.tfm = md4;
+ sdescmd4->shash.flags = 0x0;
+
+ rc = crypto_shash_init(&sdescmd4->shash);
+ if (rc) {
+ cERROR(1, "%s: Could not init md4 shash\n", __func__);
+ goto mdfour_err;
+ }
+ crypto_shash_update(&sdescmd4->shash, link_str, link_len);
+ rc = crypto_shash_final(&sdescmd4->shash, md4_hash);
-void SMBencrypt(unsigned char *passwd, const unsigned char *c8,
- unsigned char *p24);
-void E_md4hash(const unsigned char *passwd, unsigned char *p16);
-static void SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8,
- unsigned char p24[24]);
-void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24);
+mdfour_err:
+ crypto_free_shash(md4);
+ kfree(sdescmd4);
+
+ return rc;
+}
+
+/* Does the des encryption from the NT or LM MD4 hash. */
+static void
+SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8,
+ unsigned char p24[24])
+{
+ unsigned char p21[21];
+
+ memset(p21, '\0', 21);
+
+ memcpy(p21, passwd, 16);
+ E_P24(p21, c8, p24);
+}
/*
This implements the X/Open SMB password encryption
* Creates the MD4 Hash of the users password in NT UNICODE.
*/
-void
+int
E_md4hash(const unsigned char *passwd, unsigned char *p16)
{
+ int rc;
int len;
__u16 wpwd[129];
/* Calculate length in bytes */
len = _my_wcslen(wpwd) * sizeof(__u16);
- mdfour(p16, (unsigned char *) wpwd, len);
+ rc = mdfour(p16, (unsigned char *) wpwd, len);
memset(wpwd, 0, 129 * 2);
+
+ return rc;
}
#if 0 /* currently unused */
}
#endif
-/* Does the des encryption from the NT or LM MD4 hash. */
-static void
-SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8,
- unsigned char p24[24])
-{
- unsigned char p21[21];
-
- memset(p21, '\0', 21);
-
- memcpy(p21, passwd, 16);
- E_P24(p21, c8, p24);
-}
-
/* Does the des encryption from the FIRST 8 BYTES of the NT or LM MD4 hash. */
#if 0 /* currently unused */
static void
#endif
/* Does the NT MD4 hash then des encryption. */
-
-void
+int
SMBNTencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24)
{
+ int rc;
unsigned char p21[21];
memset(p21, '\0', 21);
- E_md4hash(passwd, p21);
+ rc = E_md4hash(passwd, p21);
+ if (rc) {
+ cFYI(1, "%s Can't generate NT hash, error: %d", __func__, rc);
+ return rc;
+ }
SMBOWFencrypt(p21, c8, p24);
+ return rc;
}
extern mempool_t *cifs_mid_poolp;
-static struct mid_q_entry *
+static void
+wake_up_task(struct mid_q_entry *mid)
+{
+ wake_up_process(mid->callback_data);
+}
+
+struct mid_q_entry *
AllocMidQEntry(const struct smb_hdr *smb_buffer, struct TCP_Server_Info *server)
{
struct mid_q_entry *temp;
/* do_gettimeofday(&temp->when_sent);*/ /* easier to use jiffies */
/* when mid allocated can be before when sent */
temp->when_alloc = jiffies;
- temp->tsk = current;
+
+ /*
+ * The default is for the mid to be synchronous, so the
+ * default callback just wakes up the current task.
+ */
+ temp->callback = wake_up_task;
+ temp->callback_data = current;
}
- spin_lock(&GlobalMid_Lock);
- list_add_tail(&temp->qhead, &server->pending_mid_q);
atomic_inc(&midCount);
temp->midState = MID_REQUEST_ALLOCATED;
- spin_unlock(&GlobalMid_Lock);
return temp;
}
-static void
+void
DeleteMidQEntry(struct mid_q_entry *midEntry)
{
#ifdef CONFIG_CIFS_STATS2
unsigned long now;
#endif
- spin_lock(&GlobalMid_Lock);
midEntry->midState = MID_FREE;
- list_del(&midEntry->qhead);
atomic_dec(&midCount);
- spin_unlock(&GlobalMid_Lock);
if (midEntry->largeBuf)
cifs_buf_release(midEntry->resp_buf);
else
mempool_free(midEntry, cifs_mid_poolp);
}
+static void
+delete_mid(struct mid_q_entry *mid)
+{
+ spin_lock(&GlobalMid_Lock);
+ list_del(&mid->qhead);
+ spin_unlock(&GlobalMid_Lock);
+
+ DeleteMidQEntry(mid);
+}
+
static int
smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec)
{
server->tcpStatus = CifsNeedReconnect;
}
- if (rc < 0) {
+ if (rc < 0 && rc != -EINTR)
cERROR(1, "Error %d sending data on socket to server", rc);
- } else
+ else
rc = 0;
/* Don't want to modify the buffer as a
return smb_sendv(server, &iov, 1);
}
-static int wait_for_free_request(struct cifsSesInfo *ses, const int long_op)
+static int wait_for_free_request(struct TCP_Server_Info *server,
+ const int long_op)
{
if (long_op == CIFS_ASYNC_OP) {
/* oplock breaks must not be held up */
- atomic_inc(&ses->server->inFlight);
+ atomic_inc(&server->inFlight);
return 0;
}
spin_lock(&GlobalMid_Lock);
while (1) {
- if (atomic_read(&ses->server->inFlight) >=
- cifs_max_pending){
+ if (atomic_read(&server->inFlight) >= cifs_max_pending) {
spin_unlock(&GlobalMid_Lock);
#ifdef CONFIG_CIFS_STATS2
- atomic_inc(&ses->server->num_waiters);
+ atomic_inc(&server->num_waiters);
#endif
- wait_event(ses->server->request_q,
- atomic_read(&ses->server->inFlight)
+ wait_event(server->request_q,
+ atomic_read(&server->inFlight)
< cifs_max_pending);
#ifdef CONFIG_CIFS_STATS2
- atomic_dec(&ses->server->num_waiters);
+ atomic_dec(&server->num_waiters);
#endif
spin_lock(&GlobalMid_Lock);
} else {
- if (ses->server->tcpStatus == CifsExiting) {
+ if (server->tcpStatus == CifsExiting) {
spin_unlock(&GlobalMid_Lock);
return -ENOENT;
}
/* update # of requests on the wire to server */
if (long_op != CIFS_BLOCKING_OP)
- atomic_inc(&ses->server->inFlight);
+ atomic_inc(&server->inFlight);
spin_unlock(&GlobalMid_Lock);
break;
}
*ppmidQ = AllocMidQEntry(in_buf, ses->server);
if (*ppmidQ == NULL)
return -ENOMEM;
+ spin_lock(&GlobalMid_Lock);
+ list_add_tail(&(*ppmidQ)->qhead, &ses->server->pending_mid_q);
+ spin_unlock(&GlobalMid_Lock);
return 0;
}
-static int wait_for_response(struct cifsSesInfo *ses,
- struct mid_q_entry *midQ,
- unsigned long timeout,
- unsigned long time_to_wait)
+static int
+wait_for_response(struct TCP_Server_Info *server, struct mid_q_entry *midQ)
{
- unsigned long curr_timeout;
+ int error;
- for (;;) {
- curr_timeout = timeout + jiffies;
- wait_event_timeout(ses->server->response_q,
- midQ->midState != MID_REQUEST_SUBMITTED, timeout);
+ error = wait_event_killable(server->response_q,
+ midQ->midState != MID_REQUEST_SUBMITTED);
+ if (error < 0)
+ return -ERESTARTSYS;
- if (time_after(jiffies, curr_timeout) &&
- (midQ->midState == MID_REQUEST_SUBMITTED) &&
- ((ses->server->tcpStatus == CifsGood) ||
- (ses->server->tcpStatus == CifsNew))) {
+ return 0;
+}
- unsigned long lrt;
- /* We timed out. Is the server still
- sending replies ? */
- spin_lock(&GlobalMid_Lock);
- lrt = ses->server->lstrp;
- spin_unlock(&GlobalMid_Lock);
+/*
+ * Send an SMB request and set the callback function in the mid to handle
+ * the result. Caller is responsible for dealing with timeouts.
+ */
+int
+cifs_call_async(struct TCP_Server_Info *server, struct smb_hdr *in_buf,
+ mid_callback_t *callback, void *cbdata)
+{
+ int rc;
+ struct mid_q_entry *mid;
- /* Calculate time_to_wait past last receive time.
- Although we prefer not to time out if the
- server is still responding - we will time
- out if the server takes more than 15 (or 45
- or 180) seconds to respond to this request
- and has not responded to any request from
- other threads on the client within 10 seconds */
- lrt += time_to_wait;
- if (time_after(jiffies, lrt)) {
- /* No replies for time_to_wait. */
- cERROR(1, "server not responding");
- return -1;
- }
- } else {
- return 0;
- }
+ rc = wait_for_free_request(server, CIFS_ASYNC_OP);
+ if (rc)
+ return rc;
+
+ /* enable signing if server requires it */
+ if (server->secMode & (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED))
+ in_buf->Flags2 |= SMBFLG2_SECURITY_SIGNATURE;
+
+ mutex_lock(&server->srv_mutex);
+ mid = AllocMidQEntry(in_buf, server);
+ if (mid == NULL) {
+ mutex_unlock(&server->srv_mutex);
+ return -ENOMEM;
}
-}
+ /* put it on the pending_mid_q */
+ spin_lock(&GlobalMid_Lock);
+ list_add_tail(&mid->qhead, &server->pending_mid_q);
+ spin_unlock(&GlobalMid_Lock);
+
+ rc = cifs_sign_smb(in_buf, server, &mid->sequence_number);
+ if (rc) {
+ mutex_unlock(&server->srv_mutex);
+ goto out_err;
+ }
+
+ mid->callback = callback;
+ mid->callback_data = cbdata;
+ mid->midState = MID_REQUEST_SUBMITTED;
+#ifdef CONFIG_CIFS_STATS2
+ atomic_inc(&server->inSend);
+#endif
+ rc = smb_send(server, in_buf, in_buf->smb_buf_length);
+#ifdef CONFIG_CIFS_STATS2
+ atomic_dec(&server->inSend);
+ mid->when_sent = jiffies;
+#endif
+ mutex_unlock(&server->srv_mutex);
+ if (rc)
+ goto out_err;
+
+ return rc;
+out_err:
+ delete_mid(mid);
+ atomic_dec(&server->inFlight);
+ wake_up(&server->request_q);
+ return rc;
+}
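A sketch of how a caller might use cifs_call_async() (hypothetical names, not taken from the patch): the callback runs once the demultiplex thread has matched the reply, and it is responsible for freeing the mid and releasing the in-flight slot that cifs_call_async() acquired via wait_for_free_request():

/* hypothetical callback; the server pointer is passed as cbdata */
static void sample_callback(struct mid_q_entry *mid)
{
	struct TCP_Server_Info *server = mid->callback_data;

	/* mid->resp_buf holds the reply when midState == MID_RESPONSE_RECEIVED */
	DeleteMidQEntry(mid);
	atomic_dec(&server->inFlight);
	wake_up(&server->request_q);
}

	/* issue a filled-in SMB request without blocking for the response */
	rc = cifs_call_async(server, in_buf, sample_callback, server);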
/*
*
return rc;
}
+static int
+sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
+{
+ int rc = 0;
+
+ cFYI(1, "%s: cmd=%d mid=%d state=%d", __func__, mid->command,
+ mid->mid, mid->midState);
+
+ spin_lock(&GlobalMid_Lock);
+ /* ensure that it's no longer on the pending_mid_q */
+ list_del_init(&mid->qhead);
+
+ switch (mid->midState) {
+ case MID_RESPONSE_RECEIVED:
+ spin_unlock(&GlobalMid_Lock);
+ return rc;
+ case MID_REQUEST_SUBMITTED:
+ /* socket is going down, reject all calls */
+ if (server->tcpStatus == CifsExiting) {
+ cERROR(1, "%s: canceling mid=%d cmd=0x%x state=%d",
+ __func__, mid->mid, mid->command, mid->midState);
+ rc = -EHOSTDOWN;
+ break;
+ }
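+		/* otherwise fall through and treat it as needing a retry */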
+ case MID_RETRY_NEEDED:
+ rc = -EAGAIN;
+ break;
+ case MID_RESPONSE_MALFORMED:
+ rc = -EIO;
+ break;
+ default:
+ cERROR(1, "%s: invalid mid state mid=%d state=%d", __func__,
+ mid->mid, mid->midState);
+ rc = -EIO;
+ }
+ spin_unlock(&GlobalMid_Lock);
+
+ DeleteMidQEntry(mid);
+ return rc;
+}
+
+/*
+ * An NT cancel request header looks just like the original request except:
+ *
+ * The Command is SMB_COM_NT_CANCEL
+ * The WordCount is zeroed out
+ * The ByteCount is zeroed out
+ *
+ * This function mangles an existing request buffer into a
+ * SMB_COM_NT_CANCEL request and then sends it.
+ */
+static int
+send_nt_cancel(struct TCP_Server_Info *server, struct smb_hdr *in_buf,
+ struct mid_q_entry *mid)
+{
+ int rc = 0;
+
+ /* -4 for RFC1001 length and +2 for BCC field */
+ in_buf->smb_buf_length = sizeof(struct smb_hdr) - 4 + 2;
+ in_buf->Command = SMB_COM_NT_CANCEL;
+ in_buf->WordCount = 0;
+ put_bcc_le(0, in_buf);
+
+ mutex_lock(&server->srv_mutex);
+ rc = cifs_sign_smb(in_buf, server, &mid->sequence_number);
+ if (rc) {
+ mutex_unlock(&server->srv_mutex);
+ return rc;
+ }
+ rc = smb_send(server, in_buf, in_buf->smb_buf_length);
+ mutex_unlock(&server->srv_mutex);
+
+ cFYI(1, "issued NT_CANCEL for mid %u, rc = %d",
+ in_buf->Mid, rc);
+
+ return rc;
+}
+
int
SendReceive2(const unsigned int xid, struct cifsSesInfo *ses,
struct kvec *iov, int n_vec, int *pRespBufType /* ret */,
int rc = 0;
int long_op;
unsigned int receive_len;
- unsigned long timeout;
struct mid_q_entry *midQ;
struct smb_hdr *in_buf = iov[0].iov_base;
to the same server. We may make this configurable later or
use ses->maxReq */
- rc = wait_for_free_request(ses, long_op);
+ rc = wait_for_free_request(ses->server, long_op);
if (rc) {
cifs_small_buf_release(in_buf);
return rc;
#endif
mutex_unlock(&ses->server->srv_mutex);
- cifs_small_buf_release(in_buf);
- if (rc < 0)
- goto out;
-
- if (long_op == CIFS_STD_OP)
- timeout = 15 * HZ;
- else if (long_op == CIFS_VLONG_OP) /* e.g. slow writes past EOF */
- timeout = 180 * HZ;
- else if (long_op == CIFS_LONG_OP)
- timeout = 45 * HZ; /* should be greater than
- servers oplock break timeout (about 43 seconds) */
- else if (long_op == CIFS_ASYNC_OP)
- goto out;
- else if (long_op == CIFS_BLOCKING_OP)
- timeout = 0x7FFFFFFF; /* large, but not so large as to wrap */
- else {
- cERROR(1, "unknown timeout flag %d", long_op);
- rc = -EIO;
+ if (rc < 0) {
+ cifs_small_buf_release(in_buf);
goto out;
}
- /* wait for 15 seconds or until woken up due to response arriving or
- due to last connection to this server being unmounted */
- if (signal_pending(current)) {
- /* if signal pending do not hold up user for full smb timeout
- but we still give response a chance to complete */
- timeout = 2 * HZ;
+ if (long_op == CIFS_ASYNC_OP) {
+ cifs_small_buf_release(in_buf);
+ goto out;
}
- /* No user interrupts in wait - wreaks havoc with performance */
- wait_for_response(ses, midQ, timeout, 10 * HZ);
-
- spin_lock(&GlobalMid_Lock);
-
- if (midQ->resp_buf == NULL) {
- cERROR(1, "No response to cmd %d mid %d",
- midQ->command, midQ->mid);
+ rc = wait_for_response(ses->server, midQ);
+ if (rc != 0) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
if (midQ->midState == MID_REQUEST_SUBMITTED) {
- if (ses->server->tcpStatus == CifsExiting)
- rc = -EHOSTDOWN;
- else {
- ses->server->tcpStatus = CifsNeedReconnect;
- midQ->midState = MID_RETRY_NEEDED;
- }
- }
-
- if (rc != -EHOSTDOWN) {
- if (midQ->midState == MID_RETRY_NEEDED) {
- rc = -EAGAIN;
- cFYI(1, "marking request for retry");
- } else {
- rc = -EIO;
- }
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ cifs_small_buf_release(in_buf);
+ atomic_dec(&ses->server->inFlight);
+ wake_up(&ses->server->request_q);
+ return rc;
}
spin_unlock(&GlobalMid_Lock);
- DeleteMidQEntry(midQ);
- /* Update # of requests on wire to server */
+ }
+
+ cifs_small_buf_release(in_buf);
+
+ rc = sync_mid_result(midQ, ses->server);
+ if (rc != 0) {
atomic_dec(&ses->server->inFlight);
wake_up(&ses->server->request_q);
return rc;
}
- spin_unlock(&GlobalMid_Lock);
receive_len = midQ->resp_buf->smb_buf_length;
if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) {
if (receive_len >= sizeof(struct smb_hdr) - 4
/* do not count RFC1001 header */ +
(2 * midQ->resp_buf->WordCount) + 2 /* bcc */ )
- BCC(midQ->resp_buf) =
- le16_to_cpu(BCC_LE(midQ->resp_buf));
+ put_bcc(get_bcc_le(midQ->resp_buf), midQ->resp_buf);
if ((flags & CIFS_NO_RESP) == 0)
midQ->resp_buf = NULL; /* mark it so buf will
not be freed by
- DeleteMidQEntry */
+ delete_mid */
} else {
rc = -EIO;
cFYI(1, "Bad MID state?");
}
out:
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
atomic_dec(&ses->server->inFlight);
wake_up(&ses->server->request_q);
{
int rc = 0;
unsigned int receive_len;
- unsigned long timeout;
struct mid_q_entry *midQ;
if (ses == NULL) {
return -EIO;
}
- rc = wait_for_free_request(ses, long_op);
+ rc = wait_for_free_request(ses->server, long_op);
if (rc)
return rc;
if (rc < 0)
goto out;
- if (long_op == CIFS_STD_OP)
- timeout = 15 * HZ;
- /* wait for 15 seconds or until woken up due to response arriving or
- due to last connection to this server being unmounted */
- else if (long_op == CIFS_ASYNC_OP)
- goto out;
- else if (long_op == CIFS_VLONG_OP) /* writes past EOF can be slow */
- timeout = 180 * HZ;
- else if (long_op == CIFS_LONG_OP)
- timeout = 45 * HZ; /* should be greater than
- servers oplock break timeout (about 43 seconds) */
- else if (long_op == CIFS_BLOCKING_OP)
- timeout = 0x7FFFFFFF; /* large but no so large as to wrap */
- else {
- cERROR(1, "unknown timeout flag %d", long_op);
- rc = -EIO;
+ if (long_op == CIFS_ASYNC_OP)
goto out;
- }
-
- if (signal_pending(current)) {
- /* if signal pending do not hold up user for full smb timeout
- but we still give response a chance to complete */
- timeout = 2 * HZ;
- }
-
- /* No user interrupts in wait - wreaks havoc with performance */
- wait_for_response(ses, midQ, timeout, 10 * HZ);
- spin_lock(&GlobalMid_Lock);
- if (midQ->resp_buf == NULL) {
- cERROR(1, "No response for cmd %d mid %d",
- midQ->command, midQ->mid);
+ rc = wait_for_response(ses->server, midQ);
+ if (rc != 0) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
if (midQ->midState == MID_REQUEST_SUBMITTED) {
- if (ses->server->tcpStatus == CifsExiting)
- rc = -EHOSTDOWN;
- else {
- ses->server->tcpStatus = CifsNeedReconnect;
- midQ->midState = MID_RETRY_NEEDED;
- }
- }
-
- if (rc != -EHOSTDOWN) {
- if (midQ->midState == MID_RETRY_NEEDED) {
- rc = -EAGAIN;
- cFYI(1, "marking request for retry");
- } else {
- rc = -EIO;
- }
+ /* no longer considered to be "in-flight" */
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ atomic_dec(&ses->server->inFlight);
+ wake_up(&ses->server->request_q);
+ return rc;
}
spin_unlock(&GlobalMid_Lock);
- DeleteMidQEntry(midQ);
- /* Update # of requests on wire to server */
+ }
+
+ rc = sync_mid_result(midQ, ses->server);
+ if (rc != 0) {
atomic_dec(&ses->server->inFlight);
wake_up(&ses->server->request_q);
return rc;
}
- spin_unlock(&GlobalMid_Lock);
receive_len = midQ->resp_buf->smb_buf_length;
if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) {
if (receive_len >= sizeof(struct smb_hdr) - 4
/* do not count RFC1001 header */ +
(2 * out_buf->WordCount) + 2 /* bcc */ )
- BCC(out_buf) = le16_to_cpu(BCC_LE(out_buf));
+ put_bcc(get_bcc_le(midQ->resp_buf), midQ->resp_buf);
} else {
rc = -EIO;
cERROR(1, "Bad MID state?");
}
out:
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
atomic_dec(&ses->server->inFlight);
wake_up(&ses->server->request_q);
return rc;
}
-/* Send an NT_CANCEL SMB to cause the POSIX blocking lock to return. */
-
-static int
-send_nt_cancel(struct cifsTconInfo *tcon, struct smb_hdr *in_buf,
- struct mid_q_entry *midQ)
-{
- int rc = 0;
- struct cifsSesInfo *ses = tcon->ses;
- __u16 mid = in_buf->Mid;
-
- header_assemble(in_buf, SMB_COM_NT_CANCEL, tcon, 0);
- in_buf->Mid = mid;
- mutex_lock(&ses->server->srv_mutex);
- rc = cifs_sign_smb(in_buf, ses->server, &midQ->sequence_number);
- if (rc) {
- mutex_unlock(&ses->server->srv_mutex);
- return rc;
- }
- rc = smb_send(ses->server, in_buf, in_buf->smb_buf_length);
- mutex_unlock(&ses->server->srv_mutex);
- return rc;
-}
-
/* We send a LOCKINGX_CANCEL_LOCK to cause the Windows
blocking lock to return. */
pSMB->hdr.Mid = GetNextMid(ses->server);
return SendReceive(xid, ses, in_buf, out_buf,
- &bytes_returned, CIFS_STD_OP);
+ &bytes_returned, 0);
}
int
return -EIO;
}
- rc = wait_for_free_request(ses, CIFS_BLOCKING_OP);
+ rc = wait_for_free_request(ses->server, CIFS_BLOCKING_OP);
if (rc)
return rc;
rc = cifs_sign_smb(in_buf, ses->server, &midQ->sequence_number);
if (rc) {
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
mutex_unlock(&ses->server->srv_mutex);
return rc;
}
mutex_unlock(&ses->server->srv_mutex);
if (rc < 0) {
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
return rc;
}
if (in_buf->Command == SMB_COM_TRANSACTION2) {
/* POSIX lock. We send a NT_CANCEL SMB to cause the
blocking lock to return. */
-
- rc = send_nt_cancel(tcon, in_buf, midQ);
+ rc = send_nt_cancel(ses->server, in_buf, midQ);
if (rc) {
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
return rc;
}
} else {
/* If we get -ENOLCK back the lock may have
already been removed. Don't exit in this case. */
if (rc && rc != -ENOLCK) {
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
return rc;
}
}
- /* Wait 5 seconds for the response. */
- if (wait_for_response(ses, midQ, 5 * HZ, 5 * HZ) == 0) {
- /* We got the response - restart system call. */
- rstart = 1;
- }
- }
-
- spin_lock(&GlobalMid_Lock);
- if (midQ->resp_buf) {
- spin_unlock(&GlobalMid_Lock);
- receive_len = midQ->resp_buf->smb_buf_length;
- } else {
- cERROR(1, "No response for cmd %d mid %d",
- midQ->command, midQ->mid);
- if (midQ->midState == MID_REQUEST_SUBMITTED) {
- if (ses->server->tcpStatus == CifsExiting)
- rc = -EHOSTDOWN;
- else {
- ses->server->tcpStatus = CifsNeedReconnect;
- midQ->midState = MID_RETRY_NEEDED;
+ rc = wait_for_response(ses->server, midQ);
+ if (rc) {
+ send_nt_cancel(ses->server, in_buf, midQ);
+ spin_lock(&GlobalMid_Lock);
+ if (midQ->midState == MID_REQUEST_SUBMITTED) {
+ /* no longer considered to be "in-flight" */
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ return rc;
}
+ spin_unlock(&GlobalMid_Lock);
}
- if (rc != -EHOSTDOWN) {
- if (midQ->midState == MID_RETRY_NEEDED) {
- rc = -EAGAIN;
- cFYI(1, "marking request for retry");
- } else {
- rc = -EIO;
- }
- }
- spin_unlock(&GlobalMid_Lock);
- DeleteMidQEntry(midQ);
- return rc;
+ /* We got the response - restart system call. */
+ rstart = 1;
}
+ rc = sync_mid_result(midQ, ses->server);
+ if (rc != 0)
+ return rc;
+
+ receive_len = midQ->resp_buf->smb_buf_length;
if (receive_len > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE) {
cERROR(1, "Frame too large received. Length: %d Xid: %d",
receive_len, xid);
if (receive_len >= sizeof(struct smb_hdr) - 4
/* do not count RFC1001 header */ +
(2 * out_buf->WordCount) + 2 /* bcc */ )
- BCC(out_buf) = le16_to_cpu(BCC_LE(out_buf));
+ put_bcc(get_bcc_le(out_buf), out_buf);
out:
- DeleteMidQEntry(midQ);
+ delete_mid(midQ);
if (rstart && rc == -EACCES)
return -ERESTARTSYS;
return rc;
/**
* dentry_rcuwalk_barrier - invalidate in-progress rcu-walk lookups
+ * @dentry: the target dentry
* After this call, in-progress rcu-walk path lookup will fail. This
* should be called after unhashing, and after changing d_inode (if
* the dentry has not already been unhashed).
/**
* d_kill - kill dentry and return parent
* @dentry: dentry to kill
+ * @parent: parent dentry
*
* The dentry must already be unhashed and removed from the LRU.
*
/**
* d_validate - verify dentry provided from insecure source (deprecated)
* @dentry: The dentry alleged to be valid child of @dparent
- * @parent: The parent dentry (known to be valid)
+ * @dparent: The parent dentry (known to be valid)
*
* An insecure source has sent us a dentry, here we verify it and dget() it.
* This is used by ncpfs in its readdir implementation.
}
EXPORT_SYMBOL_GPL(dio_end_io);
-static int
+static void
dio_bio_alloc(struct dio *dio, struct block_device *bdev,
sector_t first_sector, int nr_vecs)
{
struct bio *bio;
+ /*
+ * bio_alloc() is guaranteed to return a bio when called with
+ * __GFP_WAIT and we request a valid number of vectors.
+ */
bio = bio_alloc(GFP_KERNEL, nr_vecs);
bio->bi_bdev = bdev;
dio->bio = bio;
dio->logical_offset_in_bio = dio->cur_page_fs_offset;
- return 0;
}
/*
goto out;
sector = start_sector << (dio->blkbits - 9);
nr_pages = min(dio->pages_in_io, bio_get_nr_vecs(dio->map_bh.b_bdev));
+ nr_pages = min(nr_pages, BIO_MAX_PAGES);
BUG_ON(nr_pages <= 0);
- ret = dio_bio_alloc(dio, dio->map_bh.b_bdev, sector, nr_pages);
+ dio_bio_alloc(dio, dio->map_bh.b_bdev, sector, nr_pages);
dio->boundary = 0;
out:
return ret;
static int work_start(void)
{
- recv_workqueue = alloc_workqueue("dlm_recv", WQ_MEM_RECLAIM |
- WQ_HIGHPRI | WQ_FREEZEABLE, 0);
+ recv_workqueue = create_singlethread_workqueue("dlm_recv");
if (!recv_workqueue) {
log_print("can't start dlm_recv");
return -ENOMEM;
}
- send_workqueue = alloc_workqueue("dlm_send", WQ_MEM_RECLAIM |
- WQ_HIGHPRI | WQ_FREEZEABLE, 0);
+ send_workqueue = create_singlethread_workqueue("dlm_send");
if (!send_workqueue) {
log_print("can't start dlm_send");
destroy_workqueue(recv_workqueue);
{
struct dentry *lower_dentry;
struct vfsmount *lower_mnt;
- struct dentry *dentry_save;
- struct vfsmount *vfsmount_save;
+ struct dentry *dentry_save = NULL;
+ struct vfsmount *vfsmount_save = NULL;
int rc = 1;
- if (nd->flags & LOOKUP_RCU)
+ if (nd && nd->flags & LOOKUP_RCU)
return -ECHILD;
lower_dentry = ecryptfs_dentry_to_lower(dentry);
lower_mnt = ecryptfs_dentry_to_lower_mnt(dentry);
if (!lower_dentry->d_op || !lower_dentry->d_op->d_revalidate)
goto out;
- dentry_save = nd->path.dentry;
- vfsmount_save = nd->path.mnt;
- nd->path.dentry = lower_dentry;
- nd->path.mnt = lower_mnt;
+ if (nd) {
+ dentry_save = nd->path.dentry;
+ vfsmount_save = nd->path.mnt;
+ nd->path.dentry = lower_dentry;
+ nd->path.mnt = lower_mnt;
+ }
rc = lower_dentry->d_op->d_revalidate(lower_dentry, nd);
- nd->path.dentry = dentry_save;
- nd->path.mnt = vfsmount_save;
+ if (nd) {
+ nd->path.dentry = dentry_save;
+ nd->path.mnt = vfsmount_save;
+ }
if (dentry->d_inode) {
struct inode *lower_inode =
ecryptfs_inode_to_lower(dentry->d_inode);
u32 flags);
int ecryptfs_lookup_and_interpose_lower(struct dentry *ecryptfs_dentry,
struct dentry *lower_dentry,
- struct inode *ecryptfs_dir_inode,
- struct nameidata *ecryptfs_nd);
+ struct inode *ecryptfs_dir_inode);
int ecryptfs_decode_and_decrypt_filename(char **decrypted_name,
size_t *decrypted_name_size,
struct dentry *ecryptfs_dentry,
const struct file_operations ecryptfs_dir_fops = {
.readdir = ecryptfs_readdir,
+ .read = generic_read_dir,
.unlocked_ioctl = ecryptfs_unlocked_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ecryptfs_compat_ioctl,
unsigned int flags_save;
int rc;
- dentry_save = nd->path.dentry;
- vfsmount_save = nd->path.mnt;
- flags_save = nd->flags;
- nd->path.dentry = lower_dentry;
- nd->path.mnt = lower_mnt;
- nd->flags &= ~LOOKUP_OPEN;
+ if (nd) {
+ dentry_save = nd->path.dentry;
+ vfsmount_save = nd->path.mnt;
+ flags_save = nd->flags;
+ nd->path.dentry = lower_dentry;
+ nd->path.mnt = lower_mnt;
+ nd->flags &= ~LOOKUP_OPEN;
+ }
rc = vfs_create(lower_dir_inode, lower_dentry, mode, nd);
- nd->path.dentry = dentry_save;
- nd->path.mnt = vfsmount_save;
- nd->flags = flags_save;
+ if (nd) {
+ nd->path.dentry = dentry_save;
+ nd->path.mnt = vfsmount_save;
+ nd->flags = flags_save;
+ }
return rc;
}
*/
int ecryptfs_lookup_and_interpose_lower(struct dentry *ecryptfs_dentry,
struct dentry *lower_dentry,
- struct inode *ecryptfs_dir_inode,
- struct nameidata *ecryptfs_nd)
+ struct inode *ecryptfs_dir_inode)
{
struct dentry *lower_dir_dentry;
struct vfsmount *lower_mnt;
goto out;
if (special_file(lower_inode->i_mode))
goto out;
- if (!ecryptfs_nd)
- goto out;
/* Released in this function */
page_virt = kmem_cache_zalloc(ecryptfs_header_cache_2, GFP_USER);
if (!page_virt) {
return rc;
}
-/**
- * ecryptfs_new_lower_dentry
- * @name: The name of the new dentry.
- * @lower_dir_dentry: Parent directory of the new dentry.
- * @nd: nameidata from last lookup.
- *
- * Create a new dentry or get it from lower parent dir.
- */
-static struct dentry *
-ecryptfs_new_lower_dentry(struct qstr *name, struct dentry *lower_dir_dentry,
- struct nameidata *nd)
-{
- struct dentry *new_dentry;
- struct dentry *tmp;
- struct inode *lower_dir_inode;
-
- lower_dir_inode = lower_dir_dentry->d_inode;
-
- tmp = d_alloc(lower_dir_dentry, name);
- if (!tmp)
- return ERR_PTR(-ENOMEM);
-
- mutex_lock(&lower_dir_inode->i_mutex);
- new_dentry = lower_dir_inode->i_op->lookup(lower_dir_inode, tmp, nd);
- mutex_unlock(&lower_dir_inode->i_mutex);
-
- if (!new_dentry)
- new_dentry = tmp;
- else
- dput(tmp);
-
- return new_dentry;
-}
-
-
-/**
- * ecryptfs_lookup_one_lower
- * @ecryptfs_dentry: The eCryptfs dentry that we are looking up
- * @lower_dir_dentry: lower parent directory
- * @name: lower file name
- *
- * Get the lower dentry from vfs. If lower dentry does not exist yet,
- * create it.
- */
-static struct dentry *
-ecryptfs_lookup_one_lower(struct dentry *ecryptfs_dentry,
- struct dentry *lower_dir_dentry, struct qstr *name)
-{
- struct nameidata nd;
- struct vfsmount *lower_mnt;
- int err;
-
- lower_mnt = mntget(ecryptfs_dentry_to_lower_mnt(
- ecryptfs_dentry->d_parent));
- err = vfs_path_lookup(lower_dir_dentry, lower_mnt, name->name , 0, &nd);
- mntput(lower_mnt);
-
- if (!err) {
- /* we dont need the mount */
- mntput(nd.path.mnt);
- return nd.path.dentry;
- }
- if (err != -ENOENT)
- return ERR_PTR(err);
-
- /* create a new lower dentry */
- return ecryptfs_new_lower_dentry(name, lower_dir_dentry, &nd);
-}
-
/**
* ecryptfs_lookup
* @ecryptfs_dir_inode: The eCryptfs directory inode
size_t encrypted_and_encoded_name_size;
struct ecryptfs_mount_crypt_stat *mount_crypt_stat = NULL;
struct dentry *lower_dir_dentry, *lower_dentry;
- struct qstr lower_name;
int rc = 0;
if ((ecryptfs_dentry->d_name.len == 1
goto out_d_drop;
}
lower_dir_dentry = ecryptfs_dentry_to_lower(ecryptfs_dentry->d_parent);
- lower_name.name = ecryptfs_dentry->d_name.name;
- lower_name.len = ecryptfs_dentry->d_name.len;
- lower_name.hash = ecryptfs_dentry->d_name.hash;
- if (lower_dir_dentry->d_op && lower_dir_dentry->d_op->d_hash) {
- rc = lower_dir_dentry->d_op->d_hash(lower_dir_dentry,
- lower_dir_dentry->d_inode, &lower_name);
- if (rc < 0)
- goto out_d_drop;
- }
- lower_dentry = ecryptfs_lookup_one_lower(ecryptfs_dentry,
- lower_dir_dentry, &lower_name);
+ mutex_lock(&lower_dir_dentry->d_inode->i_mutex);
+ lower_dentry = lookup_one_len(ecryptfs_dentry->d_name.name,
+ lower_dir_dentry,
+ ecryptfs_dentry->d_name.len);
+ mutex_unlock(&lower_dir_dentry->d_inode->i_mutex);
if (IS_ERR(lower_dentry)) {
rc = PTR_ERR(lower_dentry);
- ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_lower() returned "
+ ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_len() returned "
"[%d] on lower_dentry = [%s]\n", __func__, rc,
encrypted_and_encoded_name);
goto out_d_drop;
"filename; rc = [%d]\n", __func__, rc);
goto out_d_drop;
}
- lower_name.name = encrypted_and_encoded_name;
- lower_name.len = encrypted_and_encoded_name_size;
- lower_name.hash = full_name_hash(lower_name.name, lower_name.len);
- if (lower_dir_dentry->d_op && lower_dir_dentry->d_op->d_hash) {
- rc = lower_dir_dentry->d_op->d_hash(lower_dir_dentry,
- lower_dir_dentry->d_inode, &lower_name);
- if (rc < 0)
- goto out_d_drop;
- }
- lower_dentry = ecryptfs_lookup_one_lower(ecryptfs_dentry,
- lower_dir_dentry, &lower_name);
+ mutex_lock(&lower_dir_dentry->d_inode->i_mutex);
+ lower_dentry = lookup_one_len(encrypted_and_encoded_name,
+ lower_dir_dentry,
+ encrypted_and_encoded_name_size);
+ mutex_unlock(&lower_dir_dentry->d_inode->i_mutex);
if (IS_ERR(lower_dentry)) {
rc = PTR_ERR(lower_dentry);
- ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_lower() returned "
+ ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_len() returned "
"[%d] on lower_dentry = [%s]\n", __func__, rc,
encrypted_and_encoded_name);
goto out_d_drop;
}
lookup_and_interpose:
rc = ecryptfs_lookup_and_interpose_lower(ecryptfs_dentry, lower_dentry,
- ecryptfs_dir_inode,
- ecryptfs_nd);
+ ecryptfs_dir_inode);
goto out;
out_d_drop:
d_drop(ecryptfs_dentry);
rc = vfs_getattr(ecryptfs_dentry_to_lower_mnt(dentry),
ecryptfs_dentry_to_lower(dentry), &lower_stat);
if (!rc) {
+ fsstack_copy_attr_all(dentry->d_inode,
+ ecryptfs_inode_to_lower(dentry->d_inode));
generic_fillattr(dentry->d_inode, stat);
stat->blocks = lower_stat.blocks;
}
* @ctx: [in] Pointer to eventfd context.
*
* The eventfd context reference must have been previously acquired either
- * with eventfd_ctx_get() or eventfd_ctx_fdget()).
+ * with eventfd_ctx_get() or eventfd_ctx_fdget().
*/
void eventfd_ctx_put(struct eventfd_ctx *ctx)
{
* eventfd_ctx_remove_wait_queue - Read the current counter and removes wait queue.
* @ctx: [in] Pointer to eventfd context.
* @wait: [in] Wait queue to be removed.
- * @cnt: [out] Pointer to the 64bit conter value.
+ * @cnt: [out] Pointer to the 64-bit counter value.
*
- * Returns zero if successful, or the following error codes:
+ * Returns %0 if successful, or the following error codes:
*
* -EAGAIN : The operation would have blocked.
*
* eventfd_ctx_read - Reads the eventfd counter or wait if it is zero.
* @ctx: [in] Pointer to eventfd context.
* @no_wait: [in] Different from zero if the operation should not block.
- * @cnt: [out] Pointer to the 64bit conter value.
+ * @cnt: [out] Pointer to the 64-bit counter value.
*
- * Returns zero if successful, or the following error codes:
+ * Returns %0 if successful, or the following error codes:
*
- * -EAGAIN : The operation would have blocked but @no_wait was nonzero.
+ * -EAGAIN : The operation would have blocked but @no_wait was non-zero.
* -ERESTARTSYS : A signal interrupted the wait operation.
*
* If @no_wait is zero, the function might sleep until the eventfd internal
return ep_scan_ready_list(ep, ep_send_events_proc, &esed);
}
+static inline struct timespec ep_set_mstimeout(long ms)
+{
+ struct timespec now, ts = {
+ .tv_sec = ms / MSEC_PER_SEC,
+ .tv_nsec = NSEC_PER_MSEC * (ms % MSEC_PER_SEC),
+ };
+
+ ktime_get_ts(&now);
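+	/* timespec_add_safe() saturates instead of wrapping if the sum overflows */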
+ return timespec_add_safe(now, ts);
+}
+
static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
int maxevents, long timeout)
{
unsigned long flags;
long slack;
wait_queue_t wait;
- struct timespec end_time;
ktime_t expires, *to = NULL;
if (timeout > 0) {
- ktime_get_ts(&end_time);
- timespec_add_ns(&end_time, (u64)timeout * NSEC_PER_MSEC);
+ struct timespec end_time = ep_set_mstimeout(timeout);
+
slack = select_estimate_accuracy(&end_time);
to = &expires;
*to = timespec_to_ktime(end_time);
goto out;
file = do_filp_open(AT_FDCWD, tmp,
- O_LARGEFILE | O_RDONLY | FMODE_EXEC, 0,
+ O_LARGEFILE | O_RDONLY | __FMODE_EXEC, 0,
MAY_READ | MAY_EXEC | MAY_OPEN);
putname(tmp);
error = PTR_ERR(file);
int err;
file = do_filp_open(AT_FDCWD, name,
- O_LARGEFILE | O_RDONLY | FMODE_EXEC, 0,
+ O_LARGEFILE | O_RDONLY | __FMODE_EXEC, 0,
MAY_EXEC | MAY_OPEN);
if (IS_ERR(file))
goto out;
memcpy(oi->i_data, fcb.i_data, sizeof(fcb.i_data));
}
- inode->i_mapping->backing_dev_info = sb->s_bdi;
if (S_ISREG(inode->i_mode)) {
inode->i_op = &exofs_file_inode_operations;
inode->i_fop = &exofs_file_operations;
sbi = sb->s_fs_info;
- inode->i_mapping->backing_dev_info = sb->s_bdi;
sb->s_dirt = 1;
inode_init_owner(inode, dir, mode);
inode->i_ino = sbi->s_nextid++;
static int ext3_mark_dquot_dirty(struct dquot *dquot);
static int ext3_write_info(struct super_block *sb, int type);
static int ext3_quota_on(struct super_block *sb, int type, int format_id,
- char *path);
+ struct path *path);
static int ext3_quota_on_mount(struct super_block *sb, int type);
static ssize_t ext3_quota_read(struct super_block *sb, int type, char *data,
size_t len, loff_t off);
* Standard function to be called on quota_on
*/
static int ext3_quota_on(struct super_block *sb, int type, int format_id,
- char *name)
+ struct path *path)
{
int err;
- struct path path;
if (!test_opt(sb, QUOTA))
return -EINVAL;
- err = kern_path(name, LOOKUP_FOLLOW, &path);
- if (err)
- return err;
-
/* Quotafile not on the same filesystem? */
- if (path.mnt->mnt_sb != sb) {
- path_put(&path);
+ if (path->mnt->mnt_sb != sb)
return -EXDEV;
- }
/* Journaling quota? */
if (EXT3_SB(sb)->s_qf_names[type]) {
/* Quotafile not of fs root? */
- if (path.dentry->d_parent != sb->s_root)
+ if (path->dentry->d_parent != sb->s_root)
ext3_msg(sb, KERN_WARNING,
"warning: Quota file not on filesystem root. "
"Journaled quota will not work.");
* When we journal data on quota file, we have to flush journal to see
* all updates to the file when we bypass pagecache...
*/
- if (ext3_should_journal_data(path.dentry->d_inode)) {
+ if (ext3_should_journal_data(path->dentry->d_inode)) {
/*
* We don't need to lock updates but journal_flush() could
* otherwise be livelocked...
journal_lock_updates(EXT3_SB(sb)->s_journal);
err = journal_flush(EXT3_SB(sb)->s_journal);
journal_unlock_updates(EXT3_SB(sb)->s_journal);
- if (err) {
- path_put(&path);
+ if (err)
return err;
- }
}
- err = dquot_quota_on_path(sb, type, format_id, &path);
- path_put(&path);
- return err;
+ return dquot_quota_on(sb, type, format_id, path);
}
/* Read data from quotafile - avoid pagecache and such because we cannot afford
atomic_t i_ioend_count; /* Number of outstanding io_end structs */
/* current io_end structure for async DIO write*/
ext4_io_end_t *cur_aio_dio;
+ atomic_t i_aiodio_unwritten; /* Nr. of inflight conversions pending */
spinlock_t i_block_reservation_lock;
#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
+/* For ioend & aio unwritten conversion wait queues */
+#define EXT4_WQ_HASH_SZ 37
+#define ext4_ioend_wq(v) (&ext4__ioend_wq[((unsigned long)(v)) %\
+ EXT4_WQ_HASH_SZ])
+#define ext4_aio_mutex(v) (&ext4__aio_mutex[((unsigned long)(v)) %\
+ EXT4_WQ_HASH_SZ])
+extern wait_queue_head_t ext4__ioend_wq[EXT4_WQ_HASH_SZ];
+extern struct mutex ext4__aio_mutex[EXT4_WQ_HASH_SZ];
+
#endif /* __KERNEL__ */
#endif /* _EXT4_H */
	 * that this IO needs conversion to written when IO is
* completed
*/
- if (io)
+ if (io && !(io->flag & EXT4_IO_END_UNWRITTEN)) {
io->flag = EXT4_IO_END_UNWRITTEN;
- else
+ atomic_inc(&EXT4_I(inode)->i_aiodio_unwritten);
+ } else
ext4_set_inode_state(inode, EXT4_STATE_DIO_UNWRITTEN);
if (ext4_should_dioread_nolock(inode))
map->m_flags |= EXT4_MAP_UNINIT;
	 * that we need to perform conversion when IO is done.
*/
if ((flags & EXT4_GET_BLOCKS_PRE_IO)) {
- if (io)
+ if (io && !(io->flag & EXT4_IO_END_UNWRITTEN)) {
io->flag = EXT4_IO_END_UNWRITTEN;
- else
+ atomic_inc(&EXT4_I(inode)->i_aiodio_unwritten);
+ } else
ext4_set_inode_state(inode,
EXT4_STATE_DIO_UNWRITTEN);
}
return 0;
}
+static void ext4_aiodio_wait(struct inode *inode)
+{
+ wait_queue_head_t *wq = ext4_ioend_wq(inode);
+
+ wait_event(*wq, (atomic_read(&EXT4_I(inode)->i_aiodio_unwritten) == 0));
+}
+
+/*
+ * This tests whether the IO in question is block-aligned or not.
+ * Ext4 utilizes unwritten extents when hole-filling during direct IO, and they
+ * are converted to written only after the IO is complete. Until they are
+ * mapped, these blocks appear as holes, so dio_zero_block() will assume that
+ * it needs to zero out portions of the start and/or end block. If 2 AIO
+ * threads are at work on the same unwritten block, they must be synchronized
+ * or one thread will zero the other's data, causing corruption.
+ */
+static int
+ext4_unaligned_aio(struct inode *inode, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct super_block *sb = inode->i_sb;
+ int blockmask = sb->s_blocksize - 1;
+ size_t count = iov_length(iov, nr_segs);
+ loff_t final_size = pos + count;
+
+ if (pos >= inode->i_size)
+ return 0;
+
+ if ((pos & blockmask) || (final_size & blockmask))
+ return 1;
+
+ return 0;
+}
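As a worked example of the test above (assuming a 4 KiB block size, so blockmask is 0xfff, and a position below i_size): an AIO write of 2048 bytes at offset 1024 leaves both pos and final_size mid-block and is flagged unaligned, so it gets serialized in ext4_file_write() below; an 8192-byte write at offset 4096 is block-aligned at both ends and is allowed to run concurrently.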
+
static ssize_t
ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode;
+ int unaligned_aio = 0;
+ int ret;
/*
* If we have encountered a bitmap-format file, the size limit
nr_segs = iov_shorten((struct iovec *)iov, nr_segs,
sbi->s_bitmap_maxbytes - pos);
}
+ } else if (unlikely((iocb->ki_filp->f_flags & O_DIRECT) &&
+ !is_sync_kiocb(iocb))) {
+ unaligned_aio = ext4_unaligned_aio(inode, iov, nr_segs, pos);
}
- return generic_file_aio_write(iocb, iov, nr_segs, pos);
+ /* Unaligned direct AIO must be serialized; see comment above */
+ if (unaligned_aio) {
+ static unsigned long unaligned_warn_time;
+
+ /* Warn about this once per day */
+ if (printk_timed_ratelimit(&unaligned_warn_time, 60*60*24*HZ))
+ ext4_msg(inode->i_sb, KERN_WARNING,
+ "Unaligned AIO/DIO on inode %ld by %s; "
+ "performance will be poor.",
+ inode->i_ino, current->comm);
+ mutex_lock(ext4_aio_mutex(inode));
+ ext4_aiodio_wait(inode);
+ }
+
+ ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
+
+ if (unaligned_aio)
+ mutex_unlock(ext4_aio_mutex(inode));
+
+ return ret;
}
static const struct vm_operations_struct ext4_file_vm_ops = {
/* We create slab caches for groupinfo data structures based on the
* superblock block size. There will be one per mounted filesystem for
* each unique s_blocksize_bits */
-#define NR_GRPINFO_CACHES \
- (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE + 1)
+#define NR_GRPINFO_CACHES 8
static struct kmem_cache *ext4_groupinfo_caches[NR_GRPINFO_CACHES];
+static const char *ext4_groupinfo_slab_names[NR_GRPINFO_CACHES] = {
+ "ext4_groupinfo_1k", "ext4_groupinfo_2k", "ext4_groupinfo_4k",
+ "ext4_groupinfo_8k", "ext4_groupinfo_16k", "ext4_groupinfo_32k",
+ "ext4_groupinfo_64k", "ext4_groupinfo_128k"
+};
+
static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
ext4_group_t group);
static void ext4_mb_generate_from_freelist(struct super_block *sb, void *bitmap,
return -ENOMEM;
}
+static void ext4_groupinfo_destroy_slabs(void)
+{
+ int i;
+
+ for (i = 0; i < NR_GRPINFO_CACHES; i++) {
+ if (ext4_groupinfo_caches[i])
+ kmem_cache_destroy(ext4_groupinfo_caches[i]);
+ ext4_groupinfo_caches[i] = NULL;
+ }
+}
+
+static int ext4_groupinfo_create_slab(size_t size)
+{
+ static DEFINE_MUTEX(ext4_grpinfo_slab_create_mutex);
+ int slab_size;
+ int blocksize_bits = order_base_2(size);
+ int cache_index = blocksize_bits - EXT4_MIN_BLOCK_LOG_SIZE;
+ struct kmem_cache *cachep;
+
+ if (cache_index >= NR_GRPINFO_CACHES)
+ return -EINVAL;
+
+ if (unlikely(cache_index < 0))
+ cache_index = 0;
+
+ mutex_lock(&ext4_grpinfo_slab_create_mutex);
+ if (ext4_groupinfo_caches[cache_index]) {
+ mutex_unlock(&ext4_grpinfo_slab_create_mutex);
+ return 0; /* Already created */
+ }
+
+ slab_size = offsetof(struct ext4_group_info,
+ bb_counters[blocksize_bits + 2]);
+
+ cachep = kmem_cache_create(ext4_groupinfo_slab_names[cache_index],
+ slab_size, 0, SLAB_RECLAIM_ACCOUNT,
+ NULL);
+
+ mutex_unlock(&ext4_grpinfo_slab_create_mutex);
+ if (!cachep) {
+ printk(KERN_EMERG "EXT4: no memory for groupinfo slab cache\n");
+ return -ENOMEM;
+ }
+
+ ext4_groupinfo_caches[cache_index] = cachep;
+
+ return 0;
+}
+
int ext4_mb_init(struct super_block *sb, int needs_recovery)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
unsigned offset;
unsigned max;
int ret;
- int cache_index;
- struct kmem_cache *cachep;
- char *namep = NULL;
i = (sb->s_blocksize_bits + 2) * sizeof(*sbi->s_mb_offsets);
goto out;
}
- cache_index = sb->s_blocksize_bits - EXT4_MIN_BLOCK_LOG_SIZE;
- cachep = ext4_groupinfo_caches[cache_index];
- if (!cachep) {
- char name[32];
- int len = offsetof(struct ext4_group_info,
- bb_counters[sb->s_blocksize_bits + 2]);
-
- sprintf(name, "ext4_groupinfo_%d", sb->s_blocksize_bits);
- namep = kstrdup(name, GFP_KERNEL);
- if (!namep) {
- ret = -ENOMEM;
- goto out;
- }
-
- /* Need to free the kmem_cache_name() when we
- * destroy the slab */
- cachep = kmem_cache_create(namep, len, 0,
- SLAB_RECLAIM_ACCOUNT, NULL);
- if (!cachep) {
- ret = -ENOMEM;
- goto out;
- }
- ext4_groupinfo_caches[cache_index] = cachep;
- }
+ ret = ext4_groupinfo_create_slab(sb->s_blocksize);
+ if (ret < 0)
+ goto out;
/* order 0 is regular bitmap */
sbi->s_mb_maxs[0] = sb->s_blocksize << 3;
if (ret) {
kfree(sbi->s_mb_offsets);
kfree(sbi->s_mb_maxs);
- kfree(namep);
}
return ret;
}
void ext4_exit_mballoc(void)
{
- int i;
/*
* Wait for completion of call_rcu()'s on ext4_pspace_cachep
* before destroying the slab cache.
kmem_cache_destroy(ext4_pspace_cachep);
kmem_cache_destroy(ext4_ac_cachep);
kmem_cache_destroy(ext4_free_ext_cachep);
-
- for (i = 0; i < NR_GRPINFO_CACHES; i++) {
- struct kmem_cache *cachep = ext4_groupinfo_caches[i];
- if (cachep) {
- char *name = (char *)kmem_cache_name(cachep);
- kmem_cache_destroy(cachep);
- kfree(name);
- }
- }
+ ext4_groupinfo_destroy_slabs();
ext4_remove_debugfs_entry();
}
static struct kmem_cache *io_page_cachep, *io_end_cachep;
-#define WQ_HASH_SZ 37
-#define to_ioend_wq(v) (&ioend_wq[((unsigned long)v) % WQ_HASH_SZ])
-static wait_queue_head_t ioend_wq[WQ_HASH_SZ];
-
int __init ext4_init_pageio(void)
{
- int i;
-
io_page_cachep = KMEM_CACHE(ext4_io_page, SLAB_RECLAIM_ACCOUNT);
if (io_page_cachep == NULL)
return -ENOMEM;
kmem_cache_destroy(io_page_cachep);
return -ENOMEM;
}
- for (i = 0; i < WQ_HASH_SZ; i++)
- init_waitqueue_head(&ioend_wq[i]);
-
return 0;
}
void ext4_ioend_wait(struct inode *inode)
{
- wait_queue_head_t *wq = to_ioend_wq(inode);
+ wait_queue_head_t *wq = ext4_ioend_wq(inode);
wait_event(*wq, (atomic_read(&EXT4_I(inode)->i_ioend_count) == 0));
}
for (i = 0; i < io->num_io_pages; i++)
put_io_page(io->pages[i]);
io->num_io_pages = 0;
- wq = to_ioend_wq(io->inode);
+ wq = ext4_ioend_wq(io->inode);
if (atomic_dec_and_test(&EXT4_I(io->inode)->i_ioend_count) &&
waitqueue_active(wq))
wake_up_all(wq);
struct inode *inode = io->inode;
loff_t offset = io->offset;
ssize_t size = io->size;
+ wait_queue_head_t *wq;
int ret = 0;
ext4_debug("ext4_end_io_nolock: io 0x%p from inode %lu,list->next 0x%p,"
if (io->iocb)
aio_complete(io->iocb, io->result, 0);
/* clear the DIO AIO unwritten flag */
- io->flag &= ~EXT4_IO_END_UNWRITTEN;
+ if (io->flag & EXT4_IO_END_UNWRITTEN) {
+ io->flag &= ~EXT4_IO_END_UNWRITTEN;
+ /* Wake up anyone waiting on unwritten extent conversion */
+ wq = ext4_ioend_wq(io->inode);
+ if (atomic_dec_and_test(&EXT4_I(inode)->i_aiodio_unwritten) &&
+ waitqueue_active(wq)) {
+ wake_up_all(wq);
+ }
+ }
+
return ret;
}
struct inode *inode;
unsigned long flags;
int i;
+ sector_t bi_sector = bio->bi_sector;
BUG_ON(!io_end);
bio->bi_private = NULL;
if (error)
SetPageError(page);
BUG_ON(!head);
- if (head->b_size == PAGE_CACHE_SIZE)
- clear_buffer_dirty(head);
- else {
+ if (head->b_size != PAGE_CACHE_SIZE) {
loff_t offset;
loff_t io_end_offset = io_end->offset + io_end->size;
if (error)
buffer_io_error(bh);
- clear_buffer_dirty(bh);
}
if (buffer_delay(bh))
partial_write = 1;
(unsigned long long) io_end->offset,
(long) io_end->size,
(unsigned long long)
- bio->bi_sector >> (inode->i_blkbits - 9));
+ bi_sector >> (inode->i_blkbits - 9));
}
/* Add the io_end to per-inode completed io list*/
blocksize = 1 << inode->i_blkbits;
+ BUG_ON(!PageLocked(page));
BUG_ON(PageWriteback(page));
set_page_writeback(page);
ClearPageError(page);
for (bh = head = page_buffers(page), block_start = 0;
bh != head || !block_start;
block_start = block_end, bh = bh->b_this_page) {
+
block_end = block_start + blocksize;
if (block_start >= len) {
clear_buffer_dirty(bh);
set_buffer_uptodate(bh);
continue;
}
+ clear_buffer_dirty(bh);
ret = io_submit_add_bh(io, io_page, inode, wbc, bh);
if (ret) {
/*
const char *dev_name, void *data);
static void ext4_destroy_lazyinit_thread(void);
static void ext4_unregister_li_request(struct super_block *sb);
+static void ext4_clear_request_list(void);
#if !defined(CONFIG_EXT3_FS) && !defined(CONFIG_EXT3_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23)
static struct file_system_type ext3_fs_type = {
ei->i_sync_tid = 0;
ei->i_datasync_tid = 0;
atomic_set(&ei->i_ioend_count, 0);
+ atomic_set(&ei->i_aiodio_unwritten, 0);
return &ei->vfs_inode;
}
static int ext4_mark_dquot_dirty(struct dquot *dquot);
static int ext4_write_info(struct super_block *sb, int type);
static int ext4_quota_on(struct super_block *sb, int type, int format_id,
- char *path);
+ struct path *path);
static int ext4_quota_off(struct super_block *sb, int type);
static int ext4_quota_on_mount(struct super_block *sb, int type);
static ssize_t ext4_quota_read(struct super_block *sb, int type, char *data,
mutex_unlock(&ext4_li_info->li_list_mtx);
}
+static struct task_struct *ext4_lazyinit_task;
+
/*
* This is the function where ext4lazyinit thread lives. It walks
* through the request list searching for next scheduled filesystem.
if (time_before(jiffies, next_wakeup))
schedule();
finish_wait(&eli->li_wait_daemon, &wait);
+ if (kthread_should_stop()) {
+ ext4_clear_request_list();
+ goto exit_thread;
+ }
}
exit_thread:
wake_up(&eli->li_wait_task);
kfree(ext4_li_info);
+ ext4_lazyinit_task = NULL;
ext4_li_info = NULL;
mutex_unlock(&ext4_li_mtx);
static int ext4_run_lazyinit_thread(void)
{
- struct task_struct *t;
-
- t = kthread_run(ext4_lazyinit_thread, ext4_li_info, "ext4lazyinit");
- if (IS_ERR(t)) {
- int err = PTR_ERR(t);
+ ext4_lazyinit_task = kthread_run(ext4_lazyinit_thread,
+ ext4_li_info, "ext4lazyinit");
+ if (IS_ERR(ext4_lazyinit_task)) {
+ int err = PTR_ERR(ext4_lazyinit_task);
ext4_clear_request_list();
del_timer_sync(&ext4_li_info->li_timer);
kfree(ext4_li_info);
* If thread exited earlier
* there's nothing to be done.
*/
- if (!ext4_li_info)
+ if (!ext4_li_info || !ext4_lazyinit_task)
return;
- ext4_clear_request_list();
-
- while (ext4_li_info->li_task) {
- wake_up(&ext4_li_info->li_wait_daemon);
- wait_event(ext4_li_info->li_wait_task,
- ext4_li_info->li_task == NULL);
- }
+ kthread_stop(ext4_lazyinit_task);
}
static int ext4_fill_super(struct super_block *sb, void *data, int silent)
* Standard function to be called on quota_on
*/
static int ext4_quota_on(struct super_block *sb, int type, int format_id,
- char *name)
+ struct path *path)
{
int err;
- struct path path;
if (!test_opt(sb, QUOTA))
return -EINVAL;
- err = kern_path(name, LOOKUP_FOLLOW, &path);
- if (err)
- return err;
-
/* Quotafile not on the same filesystem? */
- if (path.mnt->mnt_sb != sb) {
- path_put(&path);
+ if (path->mnt->mnt_sb != sb)
return -EXDEV;
- }
/* Journaling quota? */
if (EXT4_SB(sb)->s_qf_names[type]) {
/* Quotafile not in fs root? */
- if (path.dentry->d_parent != sb->s_root)
+ if (path->dentry->d_parent != sb->s_root)
ext4_msg(sb, KERN_WARNING,
"Quota file not on filesystem root. "
"Journaled quota will not work");
* all updates to the file when we bypass pagecache...
*/
if (EXT4_SB(sb)->s_journal &&
- ext4_should_journal_data(path.dentry->d_inode)) {
+ ext4_should_journal_data(path->dentry->d_inode)) {
/*
* We don't need to lock updates but journal_flush() could
* otherwise be livelocked...
jbd2_journal_lock_updates(EXT4_SB(sb)->s_journal);
err = jbd2_journal_flush(EXT4_SB(sb)->s_journal);
jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
- if (err) {
- path_put(&path);
+ if (err)
return err;
- }
}
- err = dquot_quota_on_path(sb, type, format_id, &path);
- path_put(&path);
- return err;
+ return dquot_quota_on(sb, type, format_id, path);
}
static int ext4_quota_off(struct super_block *sb, int type)
.fs_flags = FS_REQUIRES_DEV,
};
-int __init ext4_init_feat_adverts(void)
+static int __init ext4_init_feat_adverts(void)
{
struct ext4_features *ef;
int ret = -ENOMEM;
return ret;
}
+static void ext4_exit_feat_adverts(void)
+{
+ kobject_put(&ext4_feat->f_kobj);
+ wait_for_completion(&ext4_feat->f_kobj_unregister);
+ kfree(ext4_feat);
+}
+
+/* Shared across all ext4 file systems */
+wait_queue_head_t ext4__ioend_wq[EXT4_WQ_HASH_SZ];
+struct mutex ext4__aio_mutex[EXT4_WQ_HASH_SZ];
+
static int __init ext4_init_fs(void)
{
- int err;
+ int i, err;
ext4_check_flag_values();
+
+ for (i = 0; i < EXT4_WQ_HASH_SZ; i++) {
+ mutex_init(&ext4__aio_mutex[i]);
+ init_waitqueue_head(&ext4__ioend_wq[i]);
+ }
+
err = ext4_init_pageio();
if (err)
return err;
err = ext4_init_system_zone();
if (err)
- goto out5;
+ goto out7;
ext4_kset = kset_create_and_add("ext4", NULL, fs_kobj);
if (!ext4_kset)
- goto out4;
+ goto out6;
ext4_proc_root = proc_mkdir("fs/ext4", NULL);
+ if (!ext4_proc_root)
+ goto out5;
err = ext4_init_feat_adverts();
+ if (err)
+ goto out4;
err = ext4_init_mballoc();
if (err)
out2:
ext4_exit_mballoc();
out3:
- kfree(ext4_feat);
+ ext4_exit_feat_adverts();
+out4:
remove_proc_entry("fs/ext4", NULL);
+out5:
kset_unregister(ext4_kset);
-out4:
+out6:
ext4_exit_system_zone();
-out5:
+out7:
ext4_exit_pageio();
return err;
}
destroy_inodecache();
ext4_exit_xattr();
ext4_exit_mballoc();
+ ext4_exit_feat_adverts();
remove_proc_entry("fs/ext4", NULL);
kset_unregister(ext4_kset);
ext4_exit_system_zone();
__O_SYNC | O_DSYNC | FASYNC |
O_DIRECT | O_LARGEFILE | O_DIRECTORY |
O_NOFOLLOW | O_NOATIME | O_CLOEXEC |
- FMODE_EXEC
+ __FMODE_EXEC
));
fasync_cache = kmem_cache_create("fasync_cache",
goto fail;
percpu_counter_inc(&nr_files);
+ f->f_cred = get_cred(cred);
if (security_file_alloc(f))
goto fail_sec;
INIT_LIST_HEAD(&f->f_u.fu_list);
atomic_long_set(&f->f_count, 1);
rwlock_init(&f->f_owner.lock);
- f->f_cred = get_cred(cred);
spin_lock_init(&f->f_lock);
eventpoll_init_file(f);
/* f->f_version: 0 */
#endif
glock_workqueue = alloc_workqueue("glock_workqueue", WQ_MEM_RECLAIM |
- WQ_HIGHPRI | WQ_FREEZEABLE, 0);
+ WQ_HIGHPRI | WQ_FREEZABLE, 0);
if (IS_ERR(glock_workqueue))
return PTR_ERR(glock_workqueue);
gfs2_delete_workqueue = alloc_workqueue("delete_workqueue",
- WQ_MEM_RECLAIM | WQ_FREEZEABLE,
+ WQ_MEM_RECLAIM | WQ_FREEZABLE,
0);
if (IS_ERR(gfs2_delete_workqueue)) {
destroy_workqueue(glock_workqueue);
}
/**
- * GFS2 lookup code fills in vfs inode contents based on info obtained
- * from directory entry inside gfs2_inode_lookup(). This has caused issues
- * with NFS code path since its get_dentry routine doesn't have the relevant
- * directory entry when gfs2_inode_lookup() is invoked. Part of the code
- * segment inside gfs2_inode_lookup code needs to get moved around.
+ * gfs2_set_iop - Sets inode operations
+ * @inode: The inode with correct i_mode filled in
*
- * Clears I_NEW as well.
- **/
+ * GFS2 lookup code fills in vfs inode contents based on info obtained
+ * from directory entry inside gfs2_inode_lookup().
+ */
-void gfs2_set_iop(struct inode *inode)
+static void gfs2_set_iop(struct inode *inode)
{
struct gfs2_sbd *sdp = GFS2_SB(inode);
umode_t mode = inode->i_mode;
inode->i_op = &gfs2_file_iops;
init_special_inode(inode, inode->i_mode, inode->i_rdev);
}
-
- unlock_new_inode(inode);
}
/**
* Returns: A VFS inode, or an error
*/
-struct inode *gfs2_inode_lookup(struct super_block *sb,
- unsigned int type,
- u64 no_addr,
- u64 no_formal_ino)
+struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned int type,
+ u64 no_addr, u64 no_formal_ino)
{
struct inode *inode;
struct gfs2_inode *ip;
error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);
if (unlikely(error))
goto fail_iopen;
- ip->i_iopen_gh.gh_gl->gl_object = ip;
+ ip->i_iopen_gh.gh_gl->gl_object = ip;
gfs2_glock_put(io_gl);
io_gl = NULL;
- if ((type == DT_UNKNOWN) && (no_formal_ino == 0))
- goto gfs2_nfsbypass;
-
- inode->i_mode = DT2IF(type);
-
- /*
- * We must read the inode in order to work out its type in
- * this case. Note that this doesn't happen often as we normally
- * know the type beforehand. This code path only occurs during
- * unlinked inode recovery (where it is safe to do this glock,
- * which is not true in the general case).
- */
if (type == DT_UNKNOWN) {
- struct gfs2_holder gh;
- error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &gh);
- if (unlikely(error))
- goto fail_glock;
- /* Inode is now uptodate */
- gfs2_glock_dq_uninit(&gh);
+ /* Inode glock must be locked already */
+ error = gfs2_inode_refresh(GFS2_I(inode));
+ if (error)
+ goto fail_refresh;
+ } else {
+ inode->i_mode = DT2IF(type);
}
gfs2_set_iop(inode);
+ unlock_new_inode(inode);
}
-gfs2_nfsbypass:
return inode;
-fail_glock:
- gfs2_glock_dq(&ip->i_iopen_gh);
+
+fail_refresh:
+ ip->i_iopen_gh.gh_gl->gl_object = NULL;
+ gfs2_glock_dq_uninit(&ip->i_iopen_gh);
fail_iopen:
if (io_gl)
gfs2_glock_put(io_gl);
fail_put:
- if (inode->i_state & I_NEW)
- ip->i_gl->gl_object = NULL;
+ ip->i_gl->gl_object = NULL;
gfs2_glock_put(ip->i_gl);
fail:
- if (inode->i_state & I_NEW)
- iget_failed(inode);
- else
- iput(inode);
+ iget_failed(inode);
return ERR_PTR(error);
}
if (IS_ERR(inode))
goto fail;
- error = gfs2_inode_refresh(GFS2_I(inode));
- if (error)
- goto fail_iput;
-
- /* Pick up the works we bypass in gfs2_inode_lookup */
- if (inode->i_state & I_NEW)
- gfs2_set_iop(inode);
-
/* Two extra checks for NFS only */
if (no_formal_ino) {
error = -ESTALE;
return -EIO;
}
-extern void gfs2_set_iop(struct inode *inode);
extern struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned type,
u64 no_addr, u64 no_formal_ino);
extern struct inode *gfs2_lookup_by_inum(struct gfs2_sbd *sdp, u64 no_addr,
error = -ENOMEM;
gfs_recovery_wq = alloc_workqueue("gfs_recovery",
- WQ_MEM_RECLAIM | WQ_FREEZEABLE, 0);
+ WQ_MEM_RECLAIM | WQ_FREEZABLE, 0);
if (!gfs_recovery_wq)
goto fail_wq;
if (error)
goto out_truncate;
+ ip->i_iopen_gh.gh_flags |= GL_NOCACHE;
gfs2_glock_dq_wait(&ip->i_iopen_gh);
gfs2_holder_reinit(LM_ST_EXCLUSIVE, LM_FLAG_TRY_1CB | GL_NOCACHE, &ip->i_iopen_gh);
error = gfs2_glock_nq(&ip->i_iopen_gh);
u32 start, len, goal;
int res;
- if (sbi->total_blocks - sbi->free_blocks + 8 >
- sbi->alloc_file->i_size * 8) {
+ if (sbi->alloc_file->i_size * 8 <
+ sbi->total_blocks - sbi->free_blocks + 8) {
/* extend alloc file */
printk(KERN_ERR "hfs: extend alloc file! "
"(%llu,%u,%u)\n",
res = hfsplus_submit_bio(sb->s_bdev, *part_start + HFS_PMAP_BLK,
data, READ);
if (res)
- return res;
+ goto out;
switch (be16_to_cpu(*((__be16 *)data))) {
case HFS_OLD_PMAP_MAGIC:
res = -ENOENT;
break;
}
-
+out:
kfree(data);
return res;
}
struct inode *root, *inode;
struct qstr str;
struct nls_table *nls = NULL;
- int err = -EINVAL;
+ int err;
+ err = -EINVAL;
sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
if (!sbi)
- return -ENOMEM;
+ goto out;
sb->s_fs_info = sbi;
mutex_init(&sbi->alloc_mutex);
mutex_init(&sbi->vh_mutex);
hfsplus_fill_defaults(sbi);
+
+ err = -EINVAL;
if (!hfsplus_parse_options(data, sbi)) {
printk(KERN_ERR "hfs: unable to parse mount options\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
/* temporarily use utf8 to correctly find the hidden dir below */
sbi->nls = load_nls("utf8");
if (!sbi->nls) {
printk(KERN_ERR "hfs: unable to load nls for utf8\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
/* Grab the volume header */
if (hfsplus_read_wrapper(sb)) {
if (!silent)
printk(KERN_WARNING "hfs: unable to find HFS+ superblock\n");
- err = -EINVAL;
- goto cleanup;
+ goto out_unload_nls;
}
vhdr = sbi->s_vhdr;
if (be16_to_cpu(vhdr->version) < HFSPLUS_MIN_VERSION ||
be16_to_cpu(vhdr->version) > HFSPLUS_CURRENT_VERSION) {
printk(KERN_ERR "hfs: wrong filesystem version\n");
- goto cleanup;
+ goto out_free_vhdr;
}
sbi->total_blocks = be32_to_cpu(vhdr->total_blocks);
sbi->free_blocks = be32_to_cpu(vhdr->free_blocks);
sbi->ext_tree = hfs_btree_open(sb, HFSPLUS_EXT_CNID);
if (!sbi->ext_tree) {
printk(KERN_ERR "hfs: failed to load extents file\n");
- goto cleanup;
+ goto out_free_vhdr;
}
sbi->cat_tree = hfs_btree_open(sb, HFSPLUS_CAT_CNID);
if (!sbi->cat_tree) {
printk(KERN_ERR "hfs: failed to load catalog file\n");
- goto cleanup;
+ goto out_close_ext_tree;
}
inode = hfsplus_iget(sb, HFSPLUS_ALLOC_CNID);
if (IS_ERR(inode)) {
printk(KERN_ERR "hfs: failed to load allocation file\n");
err = PTR_ERR(inode);
- goto cleanup;
+ goto out_close_cat_tree;
}
sbi->alloc_file = inode;
if (IS_ERR(root)) {
printk(KERN_ERR "hfs: failed to load root directory\n");
err = PTR_ERR(root);
- goto cleanup;
- }
- sb->s_d_op = &hfsplus_dentry_operations;
- sb->s_root = d_alloc_root(root);
- if (!sb->s_root) {
- iput(root);
- err = -ENOMEM;
- goto cleanup;
+ goto out_put_alloc_file;
}
str.len = sizeof(HFSP_HIDDENDIR_NAME) - 1;
if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
hfs_find_exit(&fd);
if (entry.type != cpu_to_be16(HFSPLUS_FOLDER))
- goto cleanup;
+ goto out_put_root;
inode = hfsplus_iget(sb, be32_to_cpu(entry.folder.id));
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
- goto cleanup;
+ goto out_put_root;
}
sbi->hidden_dir = inode;
} else
hfs_find_exit(&fd);
- if (sb->s_flags & MS_RDONLY)
- goto out;
+ if (!(sb->s_flags & MS_RDONLY)) {
+ /*
+ * H+LX == hfsplusutils, H+Lx == this driver, H+lx is unused
+ * all three are registered with Apple for our use
+ */
+ vhdr->last_mount_vers = cpu_to_be32(HFSP_MOUNT_VERSION);
+ vhdr->modify_date = hfsp_now2mt();
+ be32_add_cpu(&vhdr->write_count, 1);
+ vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_UNMNT);
+ vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_INCNSTNT);
+ hfsplus_sync_fs(sb, 1);
- /* H+LX == hfsplusutils, H+Lx == this driver, H+lx is unused
- * all three are registered with Apple for our use
- */
- vhdr->last_mount_vers = cpu_to_be32(HFSP_MOUNT_VERSION);
- vhdr->modify_date = hfsp_now2mt();
- be32_add_cpu(&vhdr->write_count, 1);
- vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_UNMNT);
- vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_INCNSTNT);
- hfsplus_sync_fs(sb, 1);
-
- if (!sbi->hidden_dir) {
- mutex_lock(&sbi->vh_mutex);
- sbi->hidden_dir = hfsplus_new_inode(sb, S_IFDIR);
- hfsplus_create_cat(sbi->hidden_dir->i_ino, sb->s_root->d_inode,
- &str, sbi->hidden_dir);
- mutex_unlock(&sbi->vh_mutex);
-
- hfsplus_mark_inode_dirty(sbi->hidden_dir, HFSPLUS_I_CAT_DIRTY);
+ if (!sbi->hidden_dir) {
+ mutex_lock(&sbi->vh_mutex);
+ sbi->hidden_dir = hfsplus_new_inode(sb, S_IFDIR);
+ hfsplus_create_cat(sbi->hidden_dir->i_ino, root, &str,
+ sbi->hidden_dir);
+ mutex_unlock(&sbi->vh_mutex);
+
+ hfsplus_mark_inode_dirty(sbi->hidden_dir,
+ HFSPLUS_I_CAT_DIRTY);
+ }
}
-out:
+
+ sb->s_d_op = &hfsplus_dentry_operations;
+ sb->s_root = d_alloc_root(root);
+ if (!sb->s_root) {
+ err = -ENOMEM;
+ goto out_put_hidden_dir;
+ }
+
unload_nls(sbi->nls);
sbi->nls = nls;
return 0;
-cleanup:
- hfsplus_put_super(sb);
+out_put_hidden_dir:
+ iput(sbi->hidden_dir);
+out_put_root:
+ iput(root);
+out_put_alloc_file:
+ iput(sbi->alloc_file);
+out_close_cat_tree:
+ hfs_btree_close(sbi->cat_tree);
+out_close_ext_tree:
+ hfs_btree_close(sbi->ext_tree);
+out_free_vhdr:
+ kfree(sbi->s_vhdr);
+ kfree(sbi->s_backup_vhdr);
+out_unload_nls:
+ unload_nls(sbi->nls);
unload_nls(nls);
+ kfree(sbi);
+out:
return err;
}
break;
case cpu_to_be16(HFSP_WRAP_MAGIC):
if (!hfsplus_read_mdb(sbi->s_vhdr, &wd))
- goto out;
+ goto out_free_backup_vhdr;
wd.ablk_size >>= HFSPLUS_SECTOR_SHIFT;
part_start += wd.ablk_start + wd.embed_start * wd.ablk_size;
part_size = wd.embed_count * wd.ablk_size;
* (should do this only for cdrom/loop though)
*/
if (hfs_part_find(sb, &part_start, &part_size))
- goto out;
+ goto out_free_backup_vhdr;
goto reread;
}
len = isize;
}
+ /*
+ * Some filesystems can't deal with being asked to map less than
+ * blocksize, so make sure our len is at least block length.
+ */
+ if (logical_to_blk(inode, len) == 0)
+ len = blk_to_logical(inode, 1);
+
start_blk = logical_to_blk(inode, start);
last_blk = logical_to_blk(inode, start + len - 1);
}
/*
- * Called under j_state_lock. Returns true if a transaction commit was started.
+ * Called with j_state_lock locked for writing.
+ * Returns true if a transaction commit was started.
*/
int __jbd2_log_start_commit(journal_t *journal, tid_t target)
{
{
transaction_t *transaction = NULL;
tid_t tid;
+ int need_to_start = 0;
read_lock(&journal->j_state_lock);
if (journal->j_running_transaction && !current->journal_info) {
transaction = journal->j_running_transaction;
- __jbd2_log_start_commit(journal, transaction->t_tid);
+ if (!tid_geq(journal->j_commit_request, transaction->t_tid))
+ need_to_start = 1;
} else if (journal->j_committing_transaction)
transaction = journal->j_committing_transaction;
tid = transaction->t_tid;
read_unlock(&journal->j_state_lock);
+ if (need_to_start)
+ jbd2_log_start_commit(journal, tid);
jbd2_log_wait_commit(journal, tid);
return 1;
}
static int start_this_handle(journal_t *journal, handle_t *handle,
int gfp_mask)
{
- transaction_t *transaction;
- int needed;
- int nblocks = handle->h_buffer_credits;
- transaction_t *new_transaction = NULL;
+ transaction_t *transaction, *new_transaction = NULL;
+ tid_t tid;
+ int needed, need_to_start;
+ int nblocks = handle->h_buffer_credits;
if (nblocks > journal->j_max_transaction_buffers) {
printk(KERN_ERR "JBD: %s wants too many credits (%d > %d)\n",
atomic_sub(nblocks, &transaction->t_outstanding_credits);
prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
TASK_UNINTERRUPTIBLE);
- __jbd2_log_start_commit(journal, transaction->t_tid);
+ tid = transaction->t_tid;
+ need_to_start = !tid_geq(journal->j_commit_request, tid);
read_unlock(&journal->j_state_lock);
+ if (need_to_start)
+ jbd2_log_start_commit(journal, tid);
schedule();
finish_wait(&journal->j_wait_transaction_locked, &wait);
goto repeat;
{
transaction_t *transaction = handle->h_transaction;
journal_t *journal = transaction->t_journal;
- int ret;
+ tid_t tid;
+ int need_to_start, ret;
/* If we've had an abort of any type, don't even think about
* actually doing the restart! */
spin_unlock(&transaction->t_handle_lock);
jbd_debug(2, "restarting handle %p\n", handle);
- __jbd2_log_start_commit(journal, transaction->t_tid);
+ tid = transaction->t_tid;
+ need_to_start = !tid_geq(journal->j_commit_request, tid);
read_unlock(&journal->j_state_lock);
+ if (need_to_start)
+ jbd2_log_start_commit(journal, tid);
lock_map_release(&handle->h_lockdep_map);
handle->h_buffer_credits = nblocks;
struct nsm_handle *nsm,
const struct nlm_reboot *info)
{
- struct nlm_host *host = NULL;
+ struct nlm_host *host;
struct hlist_head *chain;
struct hlist_node *pos;
host->h_state++;
nlm_get_host(host);
- goto out;
+ mutex_unlock(&nlm_host_mutex);
+ return host;
}
}
-out:
+
mutex_unlock(&nlm_host_mutex);
- return host;
+ return NULL;
}
/**
struct fs_struct *fs = current->fs;
struct dentry *parent = nd->path.dentry;
- /*
- * It can be possible to revalidate the dentry that we started
- * the path walk with. force_reval_path may also revalidate the
- * dentry already committed to the nameidata.
- */
- if (unlikely(parent == dentry))
- return nameidata_drop_rcu(nd);
-
BUG_ON(!(nd->flags & LOOKUP_RCU));
if (nd->root.mnt) {
spin_lock(&fs->lock);
*/
void release_open_intent(struct nameidata *nd)
{
- if (nd->intent.open.file->f_path.dentry == NULL)
- put_filp(nd->intent.open.file);
- else
- fput(nd->intent.open.file);
-}
-
-/*
- * Call d_revalidate and handle filesystems that request rcu-walk
- * to be dropped. This may be called and return in rcu-walk mode,
- * regardless of success or error. If -ECHILD is returned, the caller
- * must return -ECHILD back up the path walk stack so path walk may
- * be restarted in ref-walk mode.
- */
-static int d_revalidate(struct dentry *dentry, struct nameidata *nd)
-{
- int status;
+ struct file *file = nd->intent.open.file;
- status = dentry->d_op->d_revalidate(dentry, nd);
- if (status == -ECHILD) {
- if (nameidata_dentry_drop_rcu(nd, dentry))
- return status;
- status = dentry->d_op->d_revalidate(dentry, nd);
+ if (file && !IS_ERR(file)) {
+ if (file->f_path.dentry == NULL)
+ put_filp(file);
+ else
+ fput(file);
}
+}
- return status;
+static inline int d_revalidate(struct dentry *dentry, struct nameidata *nd)
+{
+ return dentry->d_op->d_revalidate(dentry, nd);
}
-static inline struct dentry *
+static struct dentry *
do_revalidate(struct dentry *dentry, struct nameidata *nd)
{
- int status;
-
- status = d_revalidate(dentry, nd);
+ int status = d_revalidate(dentry, nd);
if (unlikely(status <= 0)) {
/*
* The dentry failed validation.
* to return a fail status.
*/
if (status < 0) {
- /* If we're in rcu-walk, we don't have a ref */
- if (!(nd->flags & LOOKUP_RCU))
- dput(dentry);
+ dput(dentry);
dentry = ERR_PTR(status);
-
- } else {
- /* Don't d_invalidate in rcu-walk mode */
- if (nameidata_dentry_drop_rcu_maybe(nd, dentry))
- return ERR_PTR(-ECHILD);
- if (!d_invalidate(dentry)) {
- dput(dentry);
- dentry = NULL;
- }
+ } else if (!d_invalidate(dentry)) {
+ dput(dentry);
+ dentry = NULL;
}
}
return dentry;
}
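+
+/*
+ * rcu-walk counterpart of do_revalidate(): if ->d_revalidate() returns
+ * -ECHILD we drop out of rcu-walk and retry in ref-walk mode; a dentry
+ * that fails revalidation is only d_invalidate()d once rcu-walk has
+ * been left.
+ */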
+static inline struct dentry *
+do_revalidate_rcu(struct dentry *dentry, struct nameidata *nd)
+{
+ int status = d_revalidate(dentry, nd);
+ if (likely(status > 0))
+ return dentry;
+ if (status == -ECHILD) {
+ if (nameidata_dentry_drop_rcu(nd, dentry))
+ return ERR_PTR(-ECHILD);
+ return do_revalidate(dentry, nd);
+ }
+ if (status < 0)
+ return ERR_PTR(status);
+ /* Don't d_invalidate in rcu-walk mode */
+ if (nameidata_dentry_drop_rcu(nd, dentry))
+ return ERR_PTR(-ECHILD);
+ if (!d_invalidate(dentry)) {
+ dput(dentry);
+ dentry = NULL;
+ }
+ return dentry;
+}
+
static inline int need_reval_dot(struct dentry *dentry)
{
if (likely(!(dentry->d_flags & DCACHE_OP_REVALIDATE)))
return 0;
if (!status) {
- /* Don't d_invalidate in rcu-walk mode */
- if (nameidata_drop_rcu(nd))
- return -ECHILD;
d_invalidate(dentry);
status = -ESTALE;
}
int error;
struct dentry *dentry = link->dentry;
+ BUG_ON(nd->flags & LOOKUP_RCU);
+
touch_atime(link->mnt, dentry);
nd_set_link(nd, NULL);
* Without that kind of total limit, nasty chains of consecutive
* symlinks can cause almost arbitrarily long lookups.
*/
-static inline int do_follow_link(struct path *path, struct nameidata *nd)
+static inline int do_follow_link(struct inode *inode, struct path *path, struct nameidata *nd)
{
void *cookie;
int err = -ELOOP;
+
+ /* We drop rcu-walk here */
+ if (nameidata_dentry_drop_rcu_maybe(nd, path->dentry))
+ return -ECHILD;
+ BUG_ON(inode != path->dentry->d_inode);
+
if (current->link_count >= MAX_NESTED_LINKS)
goto loop;
if (current->total_link_count >= 40)
return -ECHILD;
nd->seq = seq;
- if (dentry->d_flags & DCACHE_OP_REVALIDATE)
- goto need_revalidate;
-done2:
+ if (unlikely(dentry->d_flags & DCACHE_OP_REVALIDATE)) {
+ dentry = do_revalidate_rcu(dentry, nd);
+ if (!dentry)
+ goto need_lookup;
+ if (IS_ERR(dentry))
+ goto fail;
+ if (!(nd->flags & LOOKUP_RCU))
+ goto done;
+ }
path->mnt = mnt;
path->dentry = dentry;
if (likely(__follow_mount_rcu(nd, path, inode, false)))
if (!dentry)
goto need_lookup;
found:
- if (dentry->d_flags & DCACHE_OP_REVALIDATE)
- goto need_revalidate;
+ if (unlikely(dentry->d_flags & DCACHE_OP_REVALIDATE)) {
+ dentry = do_revalidate(dentry, nd);
+ if (!dentry)
+ goto need_lookup;
+ if (IS_ERR(dentry))
+ goto fail;
+ }
done:
path->mnt = mnt;
path->dentry = dentry;
mutex_unlock(&dir->i_mutex);
goto found;
-need_revalidate:
- dentry = do_revalidate(dentry, nd);
- if (!dentry)
- goto need_lookup;
- if (IS_ERR(dentry))
- goto fail;
- if (nd->flags & LOOKUP_RCU)
- goto done2;
- goto done;
-
fail:
return PTR_ERR(dentry);
}
goto out_dput;
if (inode->i_op->follow_link) {
- /* We commonly drop rcu-walk here */
- if (nameidata_dentry_drop_rcu_maybe(nd, next.dentry))
- return -ECHILD;
- BUG_ON(inode != next.dentry->d_inode);
- err = do_follow_link(&next, nd);
+ err = do_follow_link(inode, &next, nd);
if (err)
goto return_err;
nd->inode = nd->path.dentry->d_inode;
break;
if (inode && unlikely(inode->i_op->follow_link) &&
(lookup_flags & LOOKUP_FOLLOW)) {
- if (nameidata_dentry_drop_rcu_maybe(nd, next.dentry))
- return -ECHILD;
- BUG_ON(inode != next.dentry->d_inode);
- err = do_follow_link(&next, nd);
+ err = do_follow_link(inode, &next, nd);
if (err)
goto return_err;
nd->inode = nd->path.dentry->d_inode;
* We may need to check the cached dentry for staleness.
*/
if (need_reval_dot(nd->path.dentry)) {
+ if (nameidata_drop_rcu_last_maybe(nd))
+ return -ECHILD;
/* Note: we do not d_invalidate() */
err = d_revalidate(nd->path.dentry, nd);
if (!err)
err = -ESTALE;
if (err < 0)
break;
+ return 0;
}
return_base:
if (nameidata_drop_rcu_last_maybe(nd))
return filp;
exit:
- if (!IS_ERR(nd->intent.open.file))
- release_open_intent(nd);
path_put(&nd->path);
return ERR_PTR(error);
}
exit_dput:
path_put_conditional(path, nd);
exit:
- if (!IS_ERR(nd->intent.open.file))
- release_open_intent(nd);
path_put(&nd->path);
return ERR_PTR(error);
}
}
audit_inode(pathname, nd.path.dentry);
filp = finish_open(&nd, open_flag, acc_mode);
+ release_open_intent(&nd);
return filp;
creat:
path_put(&nd.root);
if (filp == ERR_PTR(-ESTALE) && !(flags & LOOKUP_REVAL))
goto reval;
+ release_open_intent(&nd);
return filp;
exit_dput:
out_path:
path_put(&nd.path);
out_filp:
- if (!IS_ERR(nd.intent.open.file))
- release_open_intent(&nd);
filp = ERR_PTR(error);
goto out;
}
}
#if defined(CONFIG_NFS_V4_1)
-/*
- * * CB_SEQUENCE operations will fail until the callback sessionid is set.
- * */
-int nfs4_set_callback_sessionid(struct nfs_client *clp)
-{
- struct svc_serv *serv = clp->cl_rpcclient->cl_xprt->bc_serv;
- struct nfs4_sessionid *bc_sid;
-
- if (!serv->sv_bc_xprt)
- return -EINVAL;
-
- /* on success freed in xprt_free */
- bc_sid = kmalloc(sizeof(struct nfs4_sessionid), GFP_KERNEL);
- if (!bc_sid)
- return -ENOMEM;
- memcpy(bc_sid->data, &clp->cl_session->sess_id.data,
- NFS4_MAX_SESSIONID_LEN);
- spin_lock_bh(&serv->sv_cb_lock);
- serv->sv_bc_xprt->xpt_bc_sid = bc_sid;
- spin_unlock_bh(&serv->sv_cb_lock);
- dprintk("%s set xpt_bc_sid=%u:%u:%u:%u for sv_bc_xprt %p\n", __func__,
- ((u32 *)bc_sid->data)[0], ((u32 *)bc_sid->data)[1],
- ((u32 *)bc_sid->data)[2], ((u32 *)bc_sid->data)[3],
- serv->sv_bc_xprt);
- return 0;
-}
-
/*
* The callback service for NFSv4.1 callbacks
*/
struct nfs_callback_data *cb_info)
{
}
-int nfs4_set_callback_sessionid(struct nfs_client *clp)
-{
- return 0;
-}
#endif /* CONFIG_NFS_V4_1 */
/*
mutex_unlock(&nfs_callback_mutex);
}
-static int check_gss_callback_principal(struct nfs_client *clp,
- struct svc_rqst *rqstp)
+/* Boolean check of RPC_AUTH_GSS principal */
+int
+check_gss_callback_principal(struct nfs_client *clp, struct svc_rqst *rqstp)
{
struct rpc_clnt *r = clp->cl_rpcclient;
char *p = svc_gss_principal(rqstp);
+ if (rqstp->rq_authop->flavour != RPC_AUTH_GSS)
+ return 1;
+
/* No RPC_AUTH_GSS on NFSv4.1 back channel yet */
if (clp->cl_minorversion != 0)
- return SVC_DROP;
+ return 0;
/*
* It might just be a normal user principal, in which case
* userspace won't bother to tell us the name at all.
*/
if (p == NULL)
- return SVC_DENIED;
+ return 0;
/* Expect a GSS_C_NT_HOSTBASED_NAME like "nfs@serverhostname" */
if (memcmp(p, "nfs@", 4) != 0)
- return SVC_DENIED;
+ return 0;
p += 4;
if (strcmp(p, r->cl_server) != 0)
- return SVC_DENIED;
- return SVC_OK;
+ return 0;
+ return 1;
}
-/* pg_authenticate method helper */
-static struct nfs_client *nfs_cb_find_client(struct svc_rqst *rqstp)
-{
- struct nfs4_sessionid *sessionid = bc_xprt_sid(rqstp);
- int is_cb_compound = rqstp->rq_proc == CB_COMPOUND ? 1 : 0;
-
- dprintk("--> %s rq_proc %d\n", __func__, rqstp->rq_proc);
- if (svc_is_backchannel(rqstp))
- /* Sessionid (usually) set after CB_NULL ping */
- return nfs4_find_client_sessionid(svc_addr(rqstp), sessionid,
- is_cb_compound);
- else
- /* No callback identifier in pg_authenticate */
- return nfs4_find_client_no_ident(svc_addr(rqstp));
-}
-
-/* pg_authenticate method for nfsv4 callback threads. */
+/*
+ * pg_authenticate method for nfsv4 callback threads.
+ *
+ * The authflavor has been negotiated, so an incorrect flavor is a server
+ * bug. Drop packets with incorrect authflavor.
+ *
+ * All other checking done after NFS decoding where the nfs_client can be
+ * found in nfs4_callback_compound
+ */
static int nfs_callback_authenticate(struct svc_rqst *rqstp)
{
- struct nfs_client *clp;
- RPC_IFDEBUG(char buf[RPC_MAX_ADDRBUFLEN]);
- int ret = SVC_OK;
-
- /* Don't talk to strangers */
- clp = nfs_cb_find_client(rqstp);
- if (clp == NULL)
- return SVC_DROP;
-
- dprintk("%s: %s NFSv4 callback!\n", __func__,
- svc_print_addr(rqstp, buf, sizeof(buf)));
-
switch (rqstp->rq_authop->flavour) {
- case RPC_AUTH_NULL:
- if (rqstp->rq_proc != CB_NULL)
- ret = SVC_DENIED;
- break;
- case RPC_AUTH_UNIX:
- break;
- case RPC_AUTH_GSS:
- ret = check_gss_callback_principal(clp, rqstp);
- break;
- default:
- ret = SVC_DENIED;
+ case RPC_AUTH_NULL:
+ if (rqstp->rq_proc != CB_NULL)
+ return SVC_DROP;
+ break;
+ case RPC_AUTH_GSS:
+ /* No RPC_AUTH_GSS support yet in NFSv4.1 */
+ if (svc_is_backchannel(rqstp))
+ return SVC_DROP;
}
- nfs_put_client(clp);
- return ret;
+ return SVC_OK;
}
/*
*/
#ifndef __LINUX_FS_NFS_CALLBACK_H
#define __LINUX_FS_NFS_CALLBACK_H
+#include <linux/sunrpc/svc.h>
#define NFS4_CALLBACK 0x40000000
#define NFS4_CALLBACK_XDRSIZE 2048
struct cb_process_state {
__be32 drc_status;
struct nfs_client *clp;
- struct nfs4_sessionid *svc_sid; /* v4.1 callback service sessionid */
};
struct cb_compound_hdr_arg {
extern void nfs4_check_drain_bc_complete(struct nfs4_session *ses);
extern void nfs4_cb_take_slot(struct nfs_client *clp);
#endif /* CONFIG_NFS_V4_1 */
-
+extern int check_gss_callback_principal(struct nfs_client *, struct svc_rqst *);
extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
struct cb_getattrres *res,
struct cb_process_state *cps);
{
struct nfs_client *clp;
int i;
- __be32 status;
+ __be32 status = htonl(NFS4ERR_BADSESSION);
cps->clp = NULL;
- status = htonl(NFS4ERR_BADSESSION);
- /* Incoming session must match the callback session */
- if (memcmp(&args->csa_sessionid, cps->svc_sid, NFS4_MAX_SESSIONID_LEN))
- goto out;
-
- clp = nfs4_find_client_sessionid(args->csa_addr,
- &args->csa_sessionid, 1);
+ clp = nfs4_find_client_sessionid(args->csa_addr, &args->csa_sessionid);
if (clp == NULL)
goto out;
res->csr_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
res->csr_target_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
nfs4_cb_take_slot(clp);
- cps->clp = clp; /* put in nfs4_callback_compound */
out:
+ cps->clp = clp; /* put in nfs4_callback_compound */
for (i = 0; i < args->csa_nrclists; i++)
kfree(args->csa_rclists[i].rcl_refcalls);
kfree(args->csa_rclists);
if (hdr_arg.minorversion == 0) {
cps.clp = nfs4_find_client_ident(hdr_arg.cb_ident);
- if (!cps.clp)
+ if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp))
return rpc_drop_reply;
- } else
- cps.svc_sid = bc_xprt_sid(rqstp);
+ }
hdr_res.taglen = hdr_arg.taglen;
hdr_res.tag = hdr_arg.tag;
* For CB_COMPOUND calls, find a client by IP address, protocol version,
* minorversion, and sessionID
*
- * CREATE_SESSION triggers a CB_NULL ping from servers. The callback service
- * sessionid can only be set after the CREATE_SESSION return, so a CB_NULL
- * can arrive before the callback sessionid is set. For CB_NULL calls,
- * find a client by IP address protocol version, and minorversion.
- *
* Returns NULL if no such client
*/
struct nfs_client *
nfs4_find_client_sessionid(const struct sockaddr *addr,
- struct nfs4_sessionid *sid, int is_cb_compound)
+ struct nfs4_sessionid *sid)
{
struct nfs_client *clp;
if (!nfs4_has_session(clp))
continue;
- /* Match sessionid unless cb_null call*/
- if (is_cb_compound && (memcmp(clp->cl_session->sess_id.data,
- sid->data, NFS4_MAX_SESSIONID_LEN) != 0))
+ /* Match sessionid*/
+ if (memcmp(clp->cl_session->sess_id.data,
+ sid->data, NFS4_MAX_SESSIONID_LEN) != 0)
continue;
atomic_inc(&clp->cl_count);
struct nfs_client *
nfs4_find_client_sessionid(const struct sockaddr *addr,
- struct nfs4_sessionid *sid, int is_cb_compound)
+ struct nfs4_sessionid *sid)
{
return NULL;
}
static void nfs_do_free_delegation(struct nfs_delegation *delegation)
{
- if (delegation->cred)
- put_rpccred(delegation->cred);
kfree(delegation);
}
static void nfs_free_delegation(struct nfs_delegation *delegation)
{
+ if (delegation->cred) {
+ put_rpccred(delegation->cred);
+ delegation->cred = NULL;
+ }
call_rcu(&delegation->rcu, nfs_free_delegation_callback);
}
pos += vec->iov_len;
}
+ /*
+ * If no bytes were started, return the error, and let the
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+ nfs_direct_req_release(dreq);
+ return result < 0 ? result : -EIO;
+ }
+
if (put_dreq(dreq))
nfs_direct_complete(dreq);
-
- if (requested_bytes != 0)
- return 0;
-
- if (result < 0)
- return result;
- return -EIO;
+ return 0;
}
static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
pos += vec->iov_len;
}
+ /*
+ * If no bytes were started, return the error, and let the
+ * generic layer handle the completion.
+ */
+ if (requested_bytes == 0) {
+ nfs_direct_req_release(dreq);
+ return result < 0 ? result : -EIO;
+ }
+
if (put_dreq(dreq))
nfs_direct_write_complete(dreq, dreq->inode);
-
- if (requested_bytes != 0)
- return 0;
-
- if (result < 0)
- return result;
- return -EIO;
+ return 0;
}
static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
return ret;
}
-static void nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
{
struct nfs_inode *nfsi = NFS_I(inode);
+ unsigned long ret = 0;
if ((fattr->valid & NFS_ATTR_FATTR_PRECHANGE)
&& (fattr->valid & NFS_ATTR_FATTR_CHANGE)
nfsi->change_attr = fattr->change_attr;
if (S_ISDIR(inode->i_mode))
nfsi->cache_validity |= NFS_INO_INVALID_DATA;
+ ret |= NFS_INO_INVALID_ATTR;
}
/* If we have atomic WCC data, we may update some attributes */
if ((fattr->valid & NFS_ATTR_FATTR_PRECTIME)
&& (fattr->valid & NFS_ATTR_FATTR_CTIME)
- && timespec_equal(&inode->i_ctime, &fattr->pre_ctime))
- memcpy(&inode->i_ctime, &fattr->ctime, sizeof(inode->i_ctime));
+ && timespec_equal(&inode->i_ctime, &fattr->pre_ctime)) {
+ memcpy(&inode->i_ctime, &fattr->ctime, sizeof(inode->i_ctime));
+ ret |= NFS_INO_INVALID_ATTR;
+ }
if ((fattr->valid & NFS_ATTR_FATTR_PREMTIME)
&& (fattr->valid & NFS_ATTR_FATTR_MTIME)
&& timespec_equal(&inode->i_mtime, &fattr->pre_mtime)) {
- memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime));
- if (S_ISDIR(inode->i_mode))
- nfsi->cache_validity |= NFS_INO_INVALID_DATA;
+ memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime));
+ if (S_ISDIR(inode->i_mode))
+ nfsi->cache_validity |= NFS_INO_INVALID_DATA;
+ ret |= NFS_INO_INVALID_ATTR;
}
if ((fattr->valid & NFS_ATTR_FATTR_PRESIZE)
&& (fattr->valid & NFS_ATTR_FATTR_SIZE)
&& i_size_read(inode) == nfs_size_to_loff_t(fattr->pre_size)
- && nfsi->npages == 0)
- i_size_write(inode, nfs_size_to_loff_t(fattr->size));
+ && nfsi->npages == 0) {
+ i_size_write(inode, nfs_size_to_loff_t(fattr->size));
+ ret |= NFS_INO_INVALID_ATTR;
+ }
+ return ret;
}
/**
| NFS_INO_REVAL_PAGECACHE);
/* Do atomic weak cache consistency updates */
- nfs_wcc_update_inode(inode, fattr);
+ invalid |= nfs_wcc_update_inode(inode, fattr);
/* More cache consistency checks */
if (fattr->valid & NFS_ATTR_FATTR_CHANGE) {
extern struct nfs_client *nfs4_find_client_no_ident(const struct sockaddr *);
extern struct nfs_client *nfs4_find_client_ident(int);
extern struct nfs_client *
-nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *,
- int);
+nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *);
extern struct nfs_server *nfs_create_server(
const struct nfs_parsed_mount_data *,
struct nfs_fh *);
if (!nfs_server_capable(inode, NFS_CAP_ACLS))
goto out;
- /* We are doing this here, because XDR marshalling can only
- return -ENOMEM. */
+ /* We are doing this here because XDR marshalling does not
+ * return any results, it BUGs. */
status = -ENOSPC;
if (acl != NULL && acl->a_count > NFS_ACL_MAX_ENTRIES)
goto out;
encode_nfs_fh3(xdr, NFS_FH(args->inode));
encode_uint32(xdr, args->mask);
+
+ base = req->rq_slen;
if (args->npages != 0)
xdr_write_pages(xdr, args->pages, 0, args->len);
+ else
+ xdr_reserve_space(xdr, NFS_ACL_INLINE_BUFSIZE);
- base = req->rq_slen;
error = nfsacl_encode(xdr->buf, base, args->inode,
(args->mask & NFS_ACL) ?
args->acl_access : NULL, 1, 0);
/* ipv6 length plus port is legal */
if (rlen > INET6_ADDRSTRLEN + 8) {
- dprintk("%s Invalid address, length %d\n", __func__,
+ dprintk("%s: Invalid address, length %d\n", __func__,
rlen);
goto out_err;
}
/* replace the port dots with dashes for the in4_pton() delimiter*/
for (i = 0; i < 2; i++) {
char *res = strrchr(buf, '.');
+ if (!res) {
+ dprintk("%s: Failed finding expected dots in port\n",
+ __func__);
+ goto out_free;
+ }
*res = '-';
}
port = htons((tmp[0] << 8) | (tmp[1]));
ds = nfs4_pnfs_ds_add(inode, ip_addr, port);
- dprintk("%s Decoded address and port %s\n", __func__, buf);
+ dprintk("%s: Decoded address and port %s\n", __func__, buf);
out_free:
kfree(buf);
out_err:
#include <linux/module.h>
#include <linux/sunrpc/bc_xprt.h>
#include <linux/xattr.h>
+#include <linux/utsname.h>
#include "nfs4_fs.h"
#include "delegation.h"
*p = htonl((u32)clp->cl_boot_time.tv_nsec);
args.verifier = &verifier;
- while (1) {
- args.id_len = scnprintf(args.id, sizeof(args.id),
- "%s/%s %u",
- clp->cl_ipaddr,
- rpc_peeraddr2str(clp->cl_rpcclient,
- RPC_DISPLAY_ADDR),
- clp->cl_id_uniquifier);
-
- status = rpc_call_sync(clp->cl_rpcclient, &msg, 0);
-
- if (status != -NFS4ERR_CLID_INUSE)
- break;
-
- if (signalled())
- break;
-
- if (++clp->cl_id_uniquifier == 0)
- break;
- }
+ args.id_len = scnprintf(args.id, sizeof(args.id),
+ "%s/%s.%s/%u",
+ clp->cl_ipaddr,
+ init_utsname()->nodename,
+ init_utsname()->domainname,
+ clp->cl_rpcclient->cl_auth->au_flavor);
- status = nfs4_check_cl_exchange_flags(clp->cl_exchange_flags);
+ status = rpc_call_sync(clp->cl_rpcclient, &msg, 0);
+ if (!status)
+ status = nfs4_check_cl_exchange_flags(clp->cl_exchange_flags);
dprintk("<-- %s status= %d\n", __func__, status);
return status;
}
status = nfs4_proc_create_session(clp);
if (status != 0)
goto out;
- status = nfs4_set_callback_sessionid(clp);
- if (status != 0) {
- printk(KERN_WARNING "Sessionid not set. No callback service\n");
- nfs_callback_down(1);
- status = 0;
- }
nfs41_setup_state_renewal(clp);
nfs_mark_client_ready(clp, NFS_CS_READY);
out:
__be32 *p = xdr_inline_decode(xdr, 4);
if (unlikely(!p))
goto out_overflow;
- if (!ntohl(*p++)) {
+ if (*p == xdr_zero) {
p = xdr_inline_decode(xdr, 4);
if (unlikely(!p))
goto out_overflow;
- if (!ntohl(*p++))
+ if (*p == xdr_zero)
return -EAGAIN;
entry->eof = 1;
return -EBADCOOKIE;
goto out_overflow;
entry->prev_cookie = entry->cookie;
p = xdr_decode_hyper(p, &entry->cookie);
- entry->len = ntohl(*p++);
+ entry->len = be32_to_cpup(p);
p = xdr_inline_decode(xdr, entry->len);
if (unlikely(!p))
if (entry->fattr->valid & NFS_ATTR_FATTR_TYPE)
entry->d_type = nfs_umode_to_dtype(entry->fattr->mode);
- if (verify_attr_len(xdr, p, len) < 0)
- goto out_overflow;
-
return 0;
out_overflow:
{
struct pnfs_deviceid_cache *local = clp->cl_devid_cache;
- dprintk("--> %s cl_devid_cache %p\n", __func__, clp->cl_devid_cache);
+ dprintk("--> %s ({%d})\n", __func__, atomic_read(&local->dc_ref));
if (atomic_dec_and_lock(&local->dc_ref, &clp->cl_lock)) {
int i;
/* Verify cache is empty */
while (!list_empty(&list)) {
data = list_entry(list.next, struct nfs_write_data, pages);
list_del(&data->pages);
- nfs_writedata_release(data);
+ nfs_writedata_free(data);
}
nfs_redirty_request(req);
return -ENOMEM;
gid_t gid;
};
+struct nfsacl_simple_acl {
+ struct posix_acl acl;
+ struct posix_acl_entry ace[4];
+};
+
static int
xdr_nfsace_encode(struct xdr_array2_desc *desc, void *elem)
{
return 0;
}
-unsigned int
-nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode,
- struct posix_acl *acl, int encode_entries, int typeflag)
+/**
+ * nfsacl_encode - Encode an NFSv3 ACL
+ *
+ * @buf: destination xdr_buf to contain XDR encoded ACL
+ * @base: byte offset in xdr_buf where XDR'd ACL begins
+ * @inode: inode of file whose ACL this is
+ * @acl: posix_acl to encode
+ * @encode_entries: whether to encode ACEs as well
+ * @typeflag: ACL type: NFS_ACL_DEFAULT or zero
+ *
+ * Returns size of encoded ACL in bytes or a negative errno value.
+ */
+int nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode,
+ struct posix_acl *acl, int encode_entries, int typeflag)
{
int entries = (acl && acl->a_count) ? max_t(int, acl->a_count, 4) : 0;
struct nfsacl_encode_desc nfsacl_desc = {
.uid = inode->i_uid,
.gid = inode->i_gid,
};
+ struct nfsacl_simple_acl aclbuf;
int err;
- struct posix_acl *acl2 = NULL;
if (entries > NFS_ACL_MAX_ENTRIES ||
xdr_encode_word(buf, base, entries))
return -EINVAL;
if (encode_entries && acl && acl->a_count == 3) {
- /* Fake up an ACL_MASK entry. */
- acl2 = posix_acl_alloc(4, GFP_KERNEL);
- if (!acl2)
- return -ENOMEM;
+ struct posix_acl *acl2 = &aclbuf.acl;
+
+ /* Avoid the use of posix_acl_alloc(). nfsacl_encode() is
+ * invoked in contexts where a memory allocation failure is
+ * fatal. Fortunately this fake ACL is small enough to
+ * construct on the stack. */
+ memset(acl2, 0, sizeof(aclbuf));
+ posix_acl_init(acl2, 4);
+
/* Insert entries in canonical order: other orders seem
to confuse Solaris VxFS. */
acl2->a_entries[0] = acl->a_entries[0]; /* ACL_USER_OBJ */
nfsacl_desc.acl = acl2;
}
err = xdr_encode_array2(buf, base + 4, &nfsacl_desc.desc);
- if (acl2)
- posix_acl_release(acl2);
if (!err)
err = 8 + nfsacl_desc.desc.elem_size *
nfsacl_desc.desc.array_len;
return 0;
}
-unsigned int
-nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt,
- struct posix_acl **pacl)
+/**
+ * nfsacl_decode - Decode an NFSv3 ACL
+ *
+ * @buf: xdr_buf containing XDR'd ACL data to decode
+ * @base: byte offset in xdr_buf where XDR'd ACL begins
+ * @aclcnt: count of ACEs in decoded posix_acl
+ * @pacl: buffer in which to place decoded posix_acl
+ *
+ * Returns the length of the decoded ACL in bytes, or a negative errno value.
+ */
+int nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt,
+ struct posix_acl **pacl)
{
struct nfsacl_decode_desc nfsacl_desc = {
.desc = {
out:
return status;
out_default:
- return nfs_cb_stat_to_errno(status);
+ return nfs_cb_stat_to_errno(nfserr);
}
/*
if (unlikely(status))
goto out;
if (unlikely(nfserr != NFS4_OK))
- goto out_default;
+ status = nfs_cb_stat_to_errno(nfserr);
out:
return status;
-out_default:
- return nfs_cb_stat_to_errno(status);
}
/*
dp->dl_client = clp;
get_nfs4_file(fp);
dp->dl_file = fp;
- dp->dl_vfs_file = find_readable_file(fp);
- get_file(dp->dl_vfs_file);
- dp->dl_flock = NULL;
dp->dl_type = type;
dp->dl_stateid.si_boot = boot_time;
dp->dl_stateid.si_stateownerid = current_delegid++;
fh_copy_shallow(&dp->dl_fh, &current_fh->fh_handle);
dp->dl_time = 0;
atomic_set(&dp->dl_count, 1);
- list_add(&dp->dl_perfile, &fp->fi_delegations);
- list_add(&dp->dl_perclnt, &clp->cl_delegations);
INIT_WORK(&dp->dl_recall.cb_work, nfsd4_do_callback_rpc);
return dp;
}
if (atomic_dec_and_test(&dp->dl_count)) {
dprintk("NFSD: freeing dp %p\n",dp);
put_nfs4_file(dp->dl_file);
- fput(dp->dl_vfs_file);
kmem_cache_free(deleg_slab, dp);
num_delegations--;
}
}
-/* Remove the associated file_lock first, then remove the delegation.
- * lease_modify() is called to remove the FS_LEASE file_lock from
- * the i_flock list, eventually calling nfsd's lock_manager
- * fl_release_callback.
- */
-static void
-nfs4_close_delegation(struct nfs4_delegation *dp)
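+/*
+ * Drop the lease backing a file's delegations once the last delegation
+ * referencing it has been put.
+ */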
+static void nfs4_put_deleg_lease(struct nfs4_file *fp)
{
- dprintk("NFSD: close_delegation dp %p\n",dp);
- /* XXX: do we even need this check?: */
- if (dp->dl_flock)
- vfs_setlease(dp->dl_vfs_file, F_UNLCK, &dp->dl_flock);
+ if (atomic_dec_and_test(&fp->fi_delegees)) {
+ vfs_setlease(fp->fi_deleg_file, F_UNLCK, &fp->fi_lease);
+ fp->fi_lease = NULL;
+ fp->fi_deleg_file = NULL;
+ }
}
/* Called under the state lock. */
static void
unhash_delegation(struct nfs4_delegation *dp)
{
- list_del_init(&dp->dl_perfile);
list_del_init(&dp->dl_perclnt);
spin_lock(&recall_lock);
+ list_del_init(&dp->dl_perfile);
list_del_init(&dp->dl_recall_lru);
spin_unlock(&recall_lock);
- nfs4_close_delegation(dp);
+ nfs4_put_deleg_lease(dp->dl_file);
nfs4_put_delegation(dp);
}
spin_lock(&recall_lock);
while (!list_empty(&clp->cl_delegations)) {
dp = list_entry(clp->cl_delegations.next, struct nfs4_delegation, dl_perclnt);
- dprintk("NFSD: expire client. dp %p, fp %p\n", dp,
- dp->dl_flock);
list_del_init(&dp->dl_perclnt);
list_move(&dp->dl_recall_lru, &reaplist);
}
fp->fi_inode = igrab(ino);
fp->fi_id = current_fileid++;
fp->fi_had_conflict = false;
+ fp->fi_lease = NULL;
memset(fp->fi_fds, 0, sizeof(fp->fi_fds));
memset(fp->fi_access, 0, sizeof(fp->fi_access));
spin_lock(&recall_lock);
nfs4_file_put_access(fp, O_RDONLY);
}
-/*
- * Spawn a thread to perform a recall on the delegation represented
- * by the lease (file_lock)
- *
- * Called from break_lease() with lock_flocks() held.
- * Note: we assume break_lease will only call this *once* for any given
- * lease.
- */
-static
-void nfsd_break_deleg_cb(struct file_lock *fl)
+static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
{
- struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
-
- dprintk("NFSD nfsd_break_deleg_cb: dp %p fl %p\n",dp,fl);
- if (!dp)
- return;
-
/* We're assuming the state code never drops its reference
* without first removing the lease. Since we're in this lease
* callback (and since the lease code is serialized by the kernel
* lock) it's safe to take a reference: */
atomic_inc(&dp->dl_count);
- spin_lock(&recall_lock);
list_add_tail(&dp->dl_recall_lru, &del_recall_lru);
- spin_unlock(&recall_lock);
/* only place dl_time is set. protected by lock_flocks*/
dp->dl_time = get_seconds();
+ nfsd4_cb_recall(dp);
+}
+
+/* Called from break_lease() with lock_flocks() held. */
+static void nfsd_break_deleg_cb(struct file_lock *fl)
+{
+ struct nfs4_file *fp = (struct nfs4_file *)fl->fl_owner;
+ struct nfs4_delegation *dp;
+
+ BUG_ON(!fp);
+ /* We assume break_lease is only called once per lease: */
+ BUG_ON(fp->fi_had_conflict);
/*
* We don't want the locks code to timeout the lease for us;
- * we'll remove it ourself if the delegation isn't returned
- * in time.
+ * we'll remove it ourself if a delegation isn't returned
+ * in time:
*/
fl->fl_break_time = 0;
- dp->dl_file->fi_had_conflict = true;
- nfsd4_cb_recall(dp);
+ spin_lock(&recall_lock);
+ fp->fi_had_conflict = true;
+ list_for_each_entry(dp, &fp->fi_delegations, dl_perfile)
+ nfsd_break_one_deleg(dp);
+ spin_unlock(&recall_lock);
}
static
static struct nfs4_delegation *
find_delegation_file(struct nfs4_file *fp, stateid_t *stid)
{
- struct nfs4_delegation *dp;
+ struct nfs4_delegation *dp = NULL;
+ spin_lock(&recall_lock);
list_for_each_entry(dp, &fp->fi_delegations, dl_perfile) {
if (dp->dl_stateid.si_stateownerid == stid->si_stateownerid)
- return dp;
+ break;
}
- return NULL;
+ spin_unlock(&recall_lock);
+ return dp;
}
int share_access_to_flags(u32 share_access)
return clp->cl_minorversion && clp->cl_cb_state == NFSD4_CB_UNKNOWN;
}
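+
+/* Allocate and initialise the file_lock used as a delegation lease. */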
+static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp, int flag)
+{
+ struct file_lock *fl;
+
+ fl = locks_alloc_lock();
+ if (!fl)
+ return NULL;
+ locks_init_lock(fl);
+ fl->fl_lmops = &nfsd_lease_mng_ops;
+ fl->fl_flags = FL_LEASE;
+ fl->fl_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
+ fl->fl_end = OFFSET_MAX;
+ fl->fl_owner = (fl_owner_t)(dp->dl_file);
+ fl->fl_pid = current->tgid;
+ return fl;
+}
+
+static int nfs4_setlease(struct nfs4_delegation *dp, int flag)
+{
+ struct nfs4_file *fp = dp->dl_file;
+ struct file_lock *fl;
+ int status;
+
+ fl = nfs4_alloc_init_lease(dp, flag);
+ if (!fl)
+ return -ENOMEM;
+ fl->fl_file = find_readable_file(fp);
+ list_add(&dp->dl_perclnt, &dp->dl_client->cl_delegations);
+ status = vfs_setlease(fl->fl_file, fl->fl_type, &fl);
+ if (status) {
+ list_del_init(&dp->dl_perclnt);
+ locks_free_lock(fl);
+ return -ENOMEM;
+ }
+ fp->fi_lease = fl;
+ fp->fi_deleg_file = fl->fl_file;
+ get_file(fp->fi_deleg_file);
+ atomic_set(&fp->fi_delegees, 1);
+ list_add(&dp->dl_perfile, &fp->fi_delegations);
+ return 0;
+}
+
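+/*
+ * Attach a delegation to the file: reuse the file's existing lease when
+ * one is already installed, otherwise set up a new one via nfs4_setlease().
+ * Fails with -EAGAIN if a conflicting access has already broken the lease.
+ */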
+static int nfs4_set_delegation(struct nfs4_delegation *dp, int flag)
+{
+ struct nfs4_file *fp = dp->dl_file;
+
+ if (!fp->fi_lease)
+ return nfs4_setlease(dp, flag);
+ spin_lock(&recall_lock);
+ if (fp->fi_had_conflict) {
+ spin_unlock(&recall_lock);
+ return -EAGAIN;
+ }
+ atomic_inc(&fp->fi_delegees);
+ list_add(&dp->dl_perfile, &fp->fi_delegations);
+ spin_unlock(&recall_lock);
+ list_add(&dp->dl_perclnt, &dp->dl_client->cl_delegations);
+ return 0;
+}
+
/*
* Attempt to hand out a delegation.
*/
struct nfs4_delegation *dp;
struct nfs4_stateowner *sop = stp->st_stateowner;
int cb_up;
- struct file_lock *fl;
int status, flag = 0;
cb_up = nfsd4_cb_channel_good(sop->so_client);
}
dp = alloc_init_deleg(sop->so_client, stp, fh, flag);
- if (dp == NULL) {
- flag = NFS4_OPEN_DELEGATE_NONE;
- goto out;
- }
- status = -ENOMEM;
- fl = locks_alloc_lock();
- if (!fl)
- goto out;
- locks_init_lock(fl);
- fl->fl_lmops = &nfsd_lease_mng_ops;
- fl->fl_flags = FL_LEASE;
- fl->fl_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
- fl->fl_end = OFFSET_MAX;
- fl->fl_owner = (fl_owner_t)dp;
- fl->fl_file = find_readable_file(stp->st_file);
- BUG_ON(!fl->fl_file);
- fl->fl_pid = current->tgid;
- dp->dl_flock = fl;
-
- /* vfs_setlease checks to see if delegation should be handed out.
- * the lock_manager callback fl_change is used
- */
- if ((status = vfs_setlease(fl->fl_file, fl->fl_type, &fl))) {
- dprintk("NFSD: setlease failed [%d], no delegation\n", status);
- dp->dl_flock = NULL;
- locks_free_lock(fl);
- unhash_delegation(dp);
- flag = NFS4_OPEN_DELEGATE_NONE;
- goto out;
- }
+ if (dp == NULL)
+ goto out_no_deleg;
+ status = nfs4_set_delegation(dp, flag);
+ if (status)
+ goto out_free;
memcpy(&open->op_delegate_stateid, &dp->dl_stateid, sizeof(dp->dl_stateid));
&& open->op_delegate_type != NFS4_OPEN_DELEGATE_NONE)
dprintk("NFSD: WARNING: refusing delegation reclaim\n");
open->op_delegate_type = flag;
+ return;
+out_free:
+ nfs4_put_delegation(dp);
+out_no_deleg:
+ flag = NFS4_OPEN_DELEGATE_NONE;
+ goto out;
}
/*
test_val = u;
break;
}
- dprintk("NFSD: purging unused delegation dp %p, fp %p\n",
- dp, dp->dl_flock);
list_move(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&recall_lock);
goto out;
renew_client(dp->dl_client);
if (filpp) {
- *filpp = find_readable_file(dp->dl_file);
+ *filpp = dp->dl_file->fi_deleg_file;
BUG_ON(!*filpp);
}
} else { /* open or lock stateid */
READ_BUF(dummy32);
len += (XDR_QUADLEN(dummy32) << 2);
READMEM(buf, dummy32);
- if ((host_err = nfsd_map_name_to_uid(argp->rqstp, buf, dummy32, &iattr->ia_uid)))
- goto out_nfserr;
+ if ((status = nfsd_map_name_to_uid(argp->rqstp, buf, dummy32, &iattr->ia_uid)))
+ return status;
iattr->ia_valid |= ATTR_UID;
}
if (bmval[1] & FATTR4_WORD1_OWNER_GROUP) {
READ_BUF(dummy32);
len += (XDR_QUADLEN(dummy32) << 2);
READMEM(buf, dummy32);
- if ((host_err = nfsd_map_name_to_gid(argp->rqstp, buf, dummy32, &iattr->ia_gid)))
- goto out_nfserr;
+ if ((status = nfsd_map_name_to_gid(argp->rqstp, buf, dummy32, &iattr->ia_gid)))
+ return status;
iattr->ia_valid |= ATTR_GID;
}
if (bmval[1] & FATTR4_WORD1_TIME_ACCESS_SET) {
atomic_t dl_count; /* ref count */
struct nfs4_client *dl_client;
struct nfs4_file *dl_file;
- struct file *dl_vfs_file;
- struct file_lock *dl_flock;
u32 dl_type;
time_t dl_time;
/* For recall: */
*/
atomic_t fi_readers;
atomic_t fi_writers;
+ struct file *fi_deleg_file;
+ struct file_lock *fi_lease;
+ atomic_t fi_delegees;
struct inode *fi_inode;
u32 fi_id; /* used with stateowner->so_id
* for stateid_hashtbl hash */
if (ra->p_count == 0)
frap = rap;
}
- depth = nfsdstats.ra_size*11/10;
+ depth = nfsdstats.ra_size;
if (!frap) {
spin_unlock(&rab->pb_lock);
return NULL;
goto out_dput_new;
host_err = nfsd_break_lease(odentry->d_inode);
+ if (host_err)
+ goto out_drop_write;
+ if (ndentry->d_inode) {
+ host_err = nfsd_break_lease(ndentry->d_inode);
+ if (host_err)
+ goto out_drop_write;
+ }
if (host_err)
goto out_drop_write;
host_err = vfs_rename(fdir, odentry, tdir, ndentry);
host_err = mnt_want_write(fhp->fh_export->ex_path.mnt);
if (host_err)
- goto out_nfserr;
+ goto out_put;
host_err = nfsd_break_lease(rdentry->d_inode);
if (host_err)
- goto out_put;
+ goto out_drop_write;
if (type != S_IFDIR)
host_err = vfs_unlink(dirp, rdentry);
else
host_err = vfs_rmdir(dirp, rdentry);
-out_put:
- dput(rdentry);
-
if (!host_err)
host_err = commit_metadata(fhp);
-
+out_drop_write:
mnt_drop_write(fhp->fh_export->ex_path.mnt);
+out_put:
+ dput(rdentry);
+
out_nfserr:
err = nfserrno(host_err);
out:
sbp[0]->s_state =
cpu_to_le16(le16_to_cpu(sbp[0]->s_state) & ~NILFS_VALID_FS);
/* synchronize sbp[1] with sbp[0] */
- memcpy(sbp[1], sbp[0], nilfs->ns_sbsize);
+ if (sbp[1])
+ memcpy(sbp[1], sbp[0], nilfs->ns_sbsize);
return nilfs_commit_super(sbi, NILFS_SB_COMMIT_ALL);
}
/**
* mft.c - NTFS kernel mft record operations. Part of the Linux-NTFS project.
*
- * Copyright (c) 2001-2006 Anton Altaparmakov
+ * Copyright (c) 2001-2011 Anton Altaparmakov and Tuxera Inc.
* Copyright (c) 2002 Richard Russon
*
* This program/include file is free software; you can redistribute it and/or
flush_dcache_page(page);
SetPageUptodate(page);
if (base_ni) {
+ MFT_RECORD *m_tmp;
+
/*
* Setup the base mft record in the extent mft record. This
* completes initialization of the allocated extent mft record
* attach it to the base inode @base_ni and map, pin, and lock
* its, i.e. the allocated, mft record.
*/
- m = map_extent_mft_record(base_ni, bit, &ni);
- if (IS_ERR(m)) {
+ m_tmp = map_extent_mft_record(base_ni, bit, &ni);
+ if (IS_ERR(m_tmp)) {
ntfs_error(vol->sb, "Failed to map allocated extent "
"mft record 0x%llx.", (long long)bit);
- err = PTR_ERR(m);
+ err = PTR_ERR(m_tmp);
/* Set the mft record itself not in use. */
m->flags &= cpu_to_le16(
~le16_to_cpu(MFT_RECORD_IN_USE));
ntfs_unmap_page(page);
goto undo_mftbmp_alloc;
}
+ BUG_ON(m != m_tmp);
/*
* Make sure the allocated mft record is written out to disk.
* No need to set the inode dirty because the caller is going
}
/* Handle quota on quotactl */
-static int ocfs2_quota_on(struct super_block *sb, int type, int format_id,
- char *path)
+static int ocfs2_quota_on(struct super_block *sb, int type, int format_id)
{
unsigned int feature[MAXQUOTAS] = { OCFS2_FEATURE_RO_COMPAT_USRQUOTA,
OCFS2_FEATURE_RO_COMPAT_GRPQUOTA};
}
static const struct quotactl_ops ocfs2_quotactl_ops = {
- .quota_on = ocfs2_quota_on,
+ .quota_on_meta = ocfs2_quota_on,
.quota_off = ocfs2_quota_off,
.quota_sync = dquot_quota_sync,
.get_info = dquot_get_dqinfo,
/* Pick up the filp from the open intent */
filp = nd->intent.open.file;
+ nd->intent.open.file = NULL;
+
/* Has the filesystem initialised the file for us? */
if (filp->f_path.dentry == NULL) {
path_get(&nd->path);
int mac_partition(struct parsed_partitions *state)
{
- int slot = 1;
Sector sect;
unsigned char *data;
- int blk, blocks_in_map;
+ int slot, blocks_in_map;
unsigned secsize;
#ifdef CONFIG_PPC_PMAC
int found_root = 0;
put_dev_sector(sect);
return 0; /* not a MacOS disk */
}
- strlcat(state->pp_buf, " [mac]", PAGE_SIZE);
blocks_in_map = be32_to_cpu(part->map_count);
- for (blk = 1; blk <= blocks_in_map; ++blk) {
- int pos = blk * secsize;
+ if (blocks_in_map < 0 || blocks_in_map >= DISK_MAX_PARTS) {
+ put_dev_sector(sect);
+ return 0;
+ }
+ strlcat(state->pp_buf, " [mac]", PAGE_SIZE);
+ for (slot = 1; slot <= blocks_in_map; ++slot) {
+ int pos = slot * secsize;
put_dev_sector(sect);
data = read_part_sector(state, pos/512, &sect);
if (!data)
}
if (goodness > found_root_goodness) {
- found_root = blk;
+ found_root = slot;
found_root_goodness = goodness;
}
}
#endif /* CONFIG_PPC_PMAC */
-
- ++slot;
}
#ifdef CONFIG_PPC_PMAC
if (found_root_goodness)
break;
}
if (do_wakeup) {
- wake_up_interruptible_sync_poll(&pipe->wait, POLLOUT);
+ wake_up_interruptible_sync_poll(&pipe->wait, POLLOUT | POLLWRNORM);
kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
}
pipe_wait(pipe);
/* Signal writers asynchronously that there is more room. */
if (do_wakeup) {
- wake_up_interruptible_sync_poll(&pipe->wait, POLLOUT);
+ wake_up_interruptible_sync_poll(&pipe->wait, POLLOUT | POLLWRNORM);
kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
}
if (ret > 0)
break;
}
if (do_wakeup) {
- wake_up_interruptible_sync_poll(&pipe->wait, POLLIN);
+ wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM);
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
do_wakeup = 0;
}
out:
mutex_unlock(&inode->i_mutex);
if (do_wakeup) {
- wake_up_interruptible_sync_poll(&pipe->wait, POLLIN);
+ wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM);
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
}
if (ret > 0)
if (!pipe->readers && !pipe->writers) {
free_pipe_info(inode);
} else {
- wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLOUT);
+ wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLOUT | POLLRDNORM | POLLWRNORM | POLLERR | POLLHUP);
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
}
#include <linux/errno.h>
+EXPORT_SYMBOL(posix_acl_init);
EXPORT_SYMBOL(posix_acl_alloc);
EXPORT_SYMBOL(posix_acl_clone);
EXPORT_SYMBOL(posix_acl_valid);
EXPORT_SYMBOL(posix_acl_chmod_masq);
EXPORT_SYMBOL(posix_acl_permission);
+/*
+ * Init a fresh posix_acl
+ */
+void
+posix_acl_init(struct posix_acl *acl, int count)
+{
+ atomic_set(&acl->a_refcount, 1);
+ acl->a_count = count;
+}
+
/*
* Allocate a new ACL with the specified number of entries.
*/
const size_t size = sizeof(struct posix_acl) +
count * sizeof(struct posix_acl_entry);
struct posix_acl *acl = kmalloc(size, flags);
- if (acl) {
- atomic_set(&acl->a_refcount, 1);
- acl->a_count = count;
- }
+ if (acl)
+ posix_acl_init(acl, count);
return acl;
}
config PROC_FS
- bool "/proc file system support" if EMBEDDED
+ bool "/proc file system support" if EXPERT
default y
help
This is a virtual file system providing information about the status
Exports the dump image of crashed kernel in ELF format.
config PROC_SYSCTL
- bool "Sysctl support (/proc/sys)" if EMBEDDED
+ bool "Sysctl support (/proc/sys)" if EXPERT
depends on PROC_FS
select SYSCTL
default y
config PROC_PAGE_MONITOR
default y
depends on PROC_FS && MMU
- bool "Enable /proc page monitoring" if EMBEDDED
+ bool "Enable /proc page monitoring" if EXPERT
help
Various /proc files exist to monitor process memory utilization:
/proc/pid/smaps, /proc/pid/clear_refs, /proc/pid/pagemap,
task_cap(m, task);
task_cpus_allowed(m, task);
cpuset_task_status_allowed(m, task);
-#if defined(CONFIG_S390)
- task_show_regs(m, task);
-#endif
task_context_switch_counts(m, task);
return 0;
}
struct console *con;
loff_t off = 0;
- acquire_console_sem();
+ console_lock();
for_each_console(con)
if (off++ == *pos)
break;
static void c_stop(struct seq_file *m, void *v)
{
- release_console_sem();
+ console_unlock();
}
static const struct seq_operations consoles_op = {
}
EXPORT_SYMBOL(dquot_resume);
-int dquot_quota_on_path(struct super_block *sb, int type, int format_id,
- struct path *path)
+int dquot_quota_on(struct super_block *sb, int type, int format_id,
+ struct path *path)
{
int error = security_quota_on(path->dentry);
if (error)
DQUOT_LIMITS_ENABLED);
return error;
}
-EXPORT_SYMBOL(dquot_quota_on_path);
-
-int dquot_quota_on(struct super_block *sb, int type, int format_id, char *name)
-{
- struct path path;
- int error;
-
- error = kern_path(name, LOOKUP_FOLLOW, &path);
- if (!error) {
- error = dquot_quota_on_path(sb, type, format_id, &path);
- path_put(&path);
- }
- return error;
-}
EXPORT_SYMBOL(dquot_quota_on);
/*
}
static int quota_quotaon(struct super_block *sb, int type, int cmd, qid_t id,
- void __user *addr)
+ struct path *path)
{
- char *pathname;
- int ret = -ENOSYS;
-
- pathname = getname(addr);
- if (IS_ERR(pathname))
- return PTR_ERR(pathname);
- if (sb->s_qcop->quota_on)
- ret = sb->s_qcop->quota_on(sb, type, id, pathname);
- putname(pathname);
- return ret;
+ if (!sb->s_qcop->quota_on && !sb->s_qcop->quota_on_meta)
+ return -ENOSYS;
+ if (sb->s_qcop->quota_on_meta)
+ return sb->s_qcop->quota_on_meta(sb, type, id);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+ return sb->s_qcop->quota_on(sb, type, id, path);
}
static int quota_getfmt(struct super_block *sb, int type, void __user *addr)
/* Copy parameters and call proper function */
static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
- void __user *addr)
+ void __user *addr, struct path *path)
{
int ret;
switch (cmd) {
case Q_QUOTAON:
- return quota_quotaon(sb, type, cmd, id, addr);
+ return quota_quotaon(sb, type, cmd, id, path);
case Q_QUOTAOFF:
if (!sb->s_qcop->quota_off)
return -ENOSYS;
{
uint cmds, type;
struct super_block *sb = NULL;
+ struct path path, *pathp = NULL;
int ret;
cmds = cmd >> SUBCMDSHIFT;
return -ENODEV;
}
+ /*
+	 * The path for quotaon has to be resolved before grabbing the superblock,
+	 * because that takes the s_umount sem, which path resolution may also need
+	 * (think of autofs), and deadlocks could arise otherwise.
+ */
+ if (cmds == Q_QUOTAON) {
+ ret = user_path_at(AT_FDCWD, addr, LOOKUP_FOLLOW, &path);
+ if (ret)
+ pathp = ERR_PTR(ret);
+ else
+ pathp = &path;
+ }
+
sb = quotactl_block(special);
if (IS_ERR(sb))
return PTR_ERR(sb);
- ret = do_quotactl(sb, type, cmds, id, addr);
+ ret = do_quotactl(sb, type, cmds, id, addr, pathp);
drop_super(sb);
+ if (pathp && !IS_ERR(pathp))
+ path_put(pathp);
return ret;
}
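The user-visible contract of Q_QUOTAON is unchanged: the quota file path still arrives through the addr argument, it is merely resolved earlier now. A hedged userspace sketch; the device, format and file names are illustrative, and header layout varies between libc versions:

#include <sys/quota.h>
#include <linux/quota.h>	/* for QFMT_VFS_V0 on most setups (assumption) */

int example_quotaon(void)
{
	/* Enable user quotas on /dev/sda1 with /mnt/aquota.user as quota file. */
	return quotactl(QCMD(Q_QUOTAON, USRQUOTA), "/dev/sda1",
			QFMT_VFS_V0, (void *)"/mnt/aquota.user");
}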
static int reiserfs_release_dquot(struct dquot *);
static int reiserfs_mark_dquot_dirty(struct dquot *);
static int reiserfs_write_info(struct super_block *, int);
-static int reiserfs_quota_on(struct super_block *, int, int, char *);
+static int reiserfs_quota_on(struct super_block *, int, int, struct path *);
static const struct dquot_operations reiserfs_quota_operations = {
.write_dquot = reiserfs_write_dquot,
* Standard function to be called on quota_on
*/
static int reiserfs_quota_on(struct super_block *sb, int type, int format_id,
- char *name)
+ struct path *path)
{
int err;
- struct path path;
struct inode *inode;
struct reiserfs_transaction_handle th;
if (!(REISERFS_SB(sb)->s_mount_opt & (1 << REISERFS_QUOTA)))
return -EINVAL;
- err = kern_path(name, LOOKUP_FOLLOW, &path);
- if (err)
- return err;
/* Quotafile not on the same filesystem? */
- if (path.mnt->mnt_sb != sb) {
+ if (path->mnt->mnt_sb != sb) {
err = -EXDEV;
goto out;
}
- inode = path.dentry->d_inode;
+ inode = path->dentry->d_inode;
/* We must not pack tails for quota files on reiserfs for quota IO to work */
if (!(REISERFS_I(inode)->i_flags & i_nopack_mask)) {
err = reiserfs_unpack(inode, NULL);
/* Journaling quota? */
if (REISERFS_SB(sb)->s_qf_names[type]) {
/* Quotafile not of fs root? */
- if (path.dentry->d_parent != sb->s_root)
+ if (path->dentry->d_parent != sb->s_root)
reiserfs_warning(sb, "super-6521",
"Quota file not on filesystem root. "
"Journalled quota will not work.");
if (err)
goto out;
}
- err = dquot_quota_on_path(sb, type, format_id, &path);
+ err = dquot_quota_on(sb, type, format_id, path);
out:
- path_put(&path);
return err;
}
*length = (unsigned char) bh->b_data[*offset] |
(unsigned char) bh->b_data[*offset + 1] << 8;
*offset += 2;
+
+ if (*offset == msblk->devblksize) {
+ put_bh(bh);
+ bh = sb_bread(sb, ++(*cur_index));
+ if (bh == NULL)
+ return NULL;
+ *offset = 0;
+ }
}
return bh;
if (!buffer_uptodate(bh[k]))
goto release_mutex;
- if (avail == 0) {
- offset = 0;
- put_bh(bh[k++]);
- continue;
- }
-
stream->buf.in = bh[k]->b_data + offset;
stream->buf.in_size = avail;
stream->buf.in_pos = 0;
if (!buffer_uptodate(bh[k]))
goto release_mutex;
- if (avail == 0) {
- offset = 0;
- put_bh(bh[k++]);
- continue;
- }
-
stream->next_in = bh[k]->b_data + offset;
stream->avail_in = avail;
offset = 0;
struct file_system_type *fs = s->s_type;
if (atomic_dec_and_test(&s->s_active)) {
fs->kill_sb(s);
+ /*
+		 * We need to call rcu_barrier() so that all delayed RCU frees of
+		 * inodes have completed before we release the fs module.
+ */
+ rcu_barrier();
put_filesystem(fs);
put_super(s);
} else {
config SYSFS
- bool "sysfs file system support" if EMBEDDED
+ bool "sysfs file system support" if EXPERT
default y
help
The sysfs filesystem is a virtual filesystem that the kernel uses to
/*
* Extent size must be a multiple of the appropriate block
- * size, if set at all.
+ * size, if set at all. It must also be smaller than the
+ * maximum extent size supported by the filesystem.
+ *
+ * Also, for non-realtime files, limit the extent size hint to
+ * half the size of the AGs in the filesystem so alignment
+ * doesn't result in extents larger than an AG.
*/
if (fa->fsx_extsize != 0) {
- xfs_extlen_t size;
+ xfs_extlen_t size;
+ xfs_fsblock_t extsize_fsb;
+
+ extsize_fsb = XFS_B_TO_FSB(mp, fa->fsx_extsize);
+ if (extsize_fsb > MAXEXTLEN) {
+ code = XFS_ERROR(EINVAL);
+ goto error_return;
+ }
if (XFS_IS_REALTIME_INODE(ip) ||
((mask & FSX_XFLAGS) &&
mp->m_sb.sb_blocklog;
} else {
size = mp->m_sb.sb_blocksize;
+ if (extsize_fsb > mp->m_sb.sb_agblocks / 2) {
+ code = XFS_ERROR(EINVAL);
+ goto error_return;
+ }
}
if (fa->fsx_extsize % size) {
xfs_dquot_t *dqpout;
xfs_dquot_t *dqp;
int restarts;
+ int startagain;
restarts = 0;
dqpout = NULL;
/* lockorder: hashchainlock, freelistlock, mplistlock, dqlock, dqflock */
-startagain:
+again:
+ startagain = 0;
mutex_lock(&xfs_Gqm->qm_dqfrlist_lock);
list_for_each_entry(dqp, &xfs_Gqm->qm_dqfrlist, q_freelist) {
ASSERT(! (dqp->dq_flags & XFS_DQ_INACTIVE));
trace_xfs_dqreclaim_want(dqp);
-
- xfs_dqunlock(dqp);
- mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock);
- if (++restarts >= XFS_QM_RECLAIM_MAX_RESTARTS)
- return NULL;
XQM_STATS_INC(xqmstats.xs_qm_dqwants);
- goto startagain;
+ restarts++;
+ startagain = 1;
+ goto dqunlock;
}
/*
ASSERT(list_empty(&dqp->q_mplist));
list_del_init(&dqp->q_freelist);
xfs_Gqm->qm_dqfrlist_cnt--;
- xfs_dqunlock(dqp);
dqpout = dqp;
XQM_STATS_INC(xqmstats.xs_qm_dqinact_reclaims);
- break;
+ goto dqunlock;
}
ASSERT(dqp->q_hash);
ASSERT(!list_empty(&dqp->q_mplist));
/*
- * Try to grab the flush lock. If this dquot is in the process of
- * getting flushed to disk, we don't want to reclaim it.
+ * Try to grab the flush lock. If this dquot is in the process
+ * of getting flushed to disk, we don't want to reclaim it.
*/
- if (!xfs_dqflock_nowait(dqp)) {
- xfs_dqunlock(dqp);
- continue;
- }
+ if (!xfs_dqflock_nowait(dqp))
+ goto dqunlock;
/*
* We have the flush lock so we know that this is not in the
xfs_fs_cmn_err(CE_WARN, mp,
"xfs_qm_dqreclaim: dquot %p flush failed", dqp);
}
- xfs_dqunlock(dqp); /* dqflush unlocks dqflock */
- continue;
+ goto dqunlock;
}
/*
*/
if (!mutex_trylock(&mp->m_quotainfo->qi_dqlist_lock)) {
restarts++;
- mutex_unlock(&dqp->q_hash->qh_lock);
- xfs_dqfunlock(dqp);
- xfs_dqunlock(dqp);
- mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock);
- if (restarts++ >= XFS_QM_RECLAIM_MAX_RESTARTS)
- return NULL;
- goto startagain;
+ startagain = 1;
+ goto qhunlock;
}
ASSERT(dqp->q_nrefs == 0);
xfs_Gqm->qm_dqfrlist_cnt--;
dqpout = dqp;
mutex_unlock(&mp->m_quotainfo->qi_dqlist_lock);
+qhunlock:
mutex_unlock(&dqp->q_hash->qh_lock);
dqfunlock:
xfs_dqfunlock(dqp);
+dqunlock:
xfs_dqunlock(dqp);
if (dqpout)
break;
if (restarts >= XFS_QM_RECLAIM_MAX_RESTARTS)
- return NULL;
+ break;
+ if (startagain) {
+ mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock);
+ goto again;
+ }
}
mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock);
return dqpout;
*/
#define XFS_ALLOC_SET_ASIDE(mp) (4 + ((mp)->m_sb.sb_agcount * 4))
+/*
+ * When deciding how much space to allocate out of an AG, we limit the
+ * allocation maximum size to the size of the AG. However, we cannot use all the
+ * blocks in the AG - some are permanently used by metadata. These
+ * blocks are generally:
+ * - the AG superblock, AGF, AGI and AGFL
+ * - the AGF (bno and cnt) and AGI btree root blocks
+ * - 4 blocks on the AGFL according to XFS_ALLOC_SET_ASIDE() limits
+ *
+ * The AG headers are sector sized, so the amount of space they take up is
+ * dependent on filesystem geometry. The others are all single blocks.
+ */
+#define XFS_ALLOC_AG_MAX_USABLE(mp) \
+ ((mp)->m_sb.sb_agblocks - XFS_BB_TO_FSB(mp, XFS_FSS_TO_BB(mp, 4)) - 7)
+
+
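As a worked example, assuming 512-byte sectors and 4 KiB filesystem blocks: the four sector-sized AG headers occupy 2 KiB, which XFS_BB_TO_FSB() rounds up to one filesystem block, so the macro evaluates to sb_agblocks - 1 - 7 = sb_agblocks - 8.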
/*
* Argument structure for xfs_alloc routines.
* This is turned into a structure to avoid having 20 arguments passed
* Filling in the middle part of a previous delayed allocation.
* Contiguity is impossible here.
* This case is avoided almost all the time.
+ *
+ * We start with a delayed allocation:
+ *
+ * +ddddddddddddddddddddddddddddddddddddddddddddddddddddddd+
+ * PREV @ idx
+ *
+ * and we are allocating:
+ * +rrrrrrrrrrrrrrrrr+
+ * new
+ *
+ * and we set it up for insertion as:
+ * +ddddddddddddddddddd+rrrrrrrrrrrrrrrrr+ddddddddddddddddd+
+ * new
+ * PREV @ idx LEFT RIGHT
+ * inserted at idx + 1
*/
temp = new->br_startoff - PREV.br_startoff;
- trace_xfs_bmap_pre_update(ip, idx, 0, _THIS_IP_);
- xfs_bmbt_set_blockcount(ep, temp);
- r[0] = *new;
- r[1].br_state = PREV.br_state;
- r[1].br_startblock = 0;
- r[1].br_startoff = new_endoff;
temp2 = PREV.br_startoff + PREV.br_blockcount - new_endoff;
- r[1].br_blockcount = temp2;
- xfs_iext_insert(ip, idx + 1, 2, &r[0], state);
+ trace_xfs_bmap_pre_update(ip, idx, 0, _THIS_IP_);
+ xfs_bmbt_set_blockcount(ep, temp); /* truncate PREV */
+ LEFT = *new;
+ RIGHT.br_state = PREV.br_state;
+ RIGHT.br_startblock = nullstartblock(
+ (int)xfs_bmap_worst_indlen(ip, temp2));
+ RIGHT.br_startoff = new_endoff;
+ RIGHT.br_blockcount = temp2;
+ /* insert LEFT (r[0]) and RIGHT (r[1]) at the same time */
+ xfs_iext_insert(ip, idx + 1, 2, &LEFT, state);
ip->i_df.if_lastex = idx + 1;
ip->i_d.di_nextents++;
if (cur == NULL)
startag = ag = 0;
pag = xfs_perag_get(mp, ag);
- while (*blen < ap->alen) {
+ while (*blen < args->maxlen) {
if (!pag->pagf_init) {
error = xfs_alloc_pagf_init(mp, args->tp, ag,
XFS_ALLOC_FLAG_TRYLOCK);
notinit = 1;
if (xfs_inode_is_filestream(ap->ip)) {
- if (*blen >= ap->alen)
+ if (*blen >= args->maxlen)
break;
if (ap->userdata) {
* If the best seen length is less than the request
* length, use the best as the minimum.
*/
- else if (*blen < ap->alen)
+ else if (*blen < args->maxlen)
args->minlen = *blen;
/*
- * Otherwise we've seen an extent as big as alen,
+ * Otherwise we've seen an extent as big as maxlen,
* use that as the minimum.
*/
else
- args->minlen = ap->alen;
+ args->minlen = args->maxlen;
/*
* set the failure fallback case to look in the selected
args.tp = ap->tp;
args.mp = mp;
args.fsbno = ap->rval;
- args.maxlen = MIN(ap->alen, mp->m_sb.sb_agblocks);
+
+ /* Trim the allocation back to the maximum an AG can fit. */
+ args.maxlen = MIN(ap->alen, XFS_ALLOC_AG_MAX_USABLE(mp));
args.firstblock = ap->firstblock;
blen = 0;
if (nullfb) {
/*
* Adjust for alignment
*/
- if (blen > args.alignment && blen <= ap->alen)
+ if (blen > args.alignment && blen <= args.maxlen)
args.minlen = blen - args.alignment;
args.minalignslop = 0;
} else {
* of minlen+alignment+slop doesn't go up
* between the calls.
*/
- if (blen > mp->m_dalign && blen <= ap->alen)
+ if (blen > mp->m_dalign && blen <= args.maxlen)
nextminlen = blen - mp->m_dalign;
else
nextminlen = args.minlen;
/* Figure out the extent size, adjust alen */
extsz = xfs_get_extsz_hint(ip);
if (extsz) {
+ /*
+ * make sure we don't exceed a single
+ * extent length when we align the
+ * extent by reducing length we are
+ * going to allocate by the maximum
+				 * amount extent size alignment may
+ * require.
+ */
+ alen = XFS_FILBLKS_MIN(len,
+ MAXEXTLEN - (2 * extsz - 1));
error = xfs_bmap_extsize_align(mp,
&got, &prev, extsz,
rt, eof,
if (remove) {
/*
- * We have to remove the log item from the transaction
- * as we are about to release our reference to the
- * buffer. If we don't, the unlock that occurs later
- * in xfs_trans_uncommit() will ry to reference the
+ * If we are in a transaction context, we have to
+ * remove the log item from the transaction as we are
+ * about to release our reference to the buffer. If we
+ * don't, the unlock that occurs later in
+ * xfs_trans_uncommit() will try to reference the
* buffer which we no longer have a hold on.
*/
- xfs_trans_del_item(lip);
+ if (lip->li_desc)
+ xfs_trans_del_item(lip);
/*
* Since the transaction no longer refers to the buffer,
if (remove) {
ASSERT(!(lip->li_flags & XFS_LI_IN_AIL));
- xfs_trans_del_item(lip);
+ if (lip->li_desc)
+ xfs_trans_del_item(lip);
xfs_efi_item_free(efip);
return;
}
int shift = 0;
int64_t freesp;
- alloc_blocks = XFS_B_TO_FSB(mp, ip->i_size);
+ /*
+ * rounddown_pow_of_two() returns an undefined result
+ * if we pass in alloc_blocks = 0. Hence the "+ 1" to
+ * ensure we always pass in a non-zero value.
+ */
+ alloc_blocks = XFS_B_TO_FSB(mp, ip->i_size) + 1;
alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN,
rounddown_pow_of_two(alloc_blocks));
xlog_tid_t xfs_log_get_trans_ident(struct xfs_trans *tp);
-int xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp,
+void xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp,
struct xfs_log_vec *log_vector,
xfs_lsn_t *commit_lsn, int flags);
bool xfs_log_item_in_current_chkpt(struct xfs_log_item *lip);
error = xlog_write(log, &lvhdr, tic, &ctx->start_lsn, NULL, 0);
if (error)
- goto out_abort;
+ goto out_abort_free_ticket;
/*
* now that we've written the checkpoint into the log, strictly
}
spin_unlock(&cil->xc_cil_lock);
+ /* xfs_log_done always frees the ticket on error. */
commit_lsn = xfs_log_done(log->l_mp, tic, &commit_iclog, 0);
- if (error || commit_lsn == -1)
+ if (commit_lsn == -1)
goto out_abort;
/* attach all the transactions w/ busy extents to iclog */
kmem_free(new_ctx);
return 0;
+out_abort_free_ticket:
+ xfs_log_ticket_put(tic);
out_abort:
xlog_cil_committed(ctx, XFS_LI_ABORTED);
return XFS_ERROR(EIO);
* background commit, returns without it held once background commits are
* allowed again.
*/
-int
+void
xfs_log_commit_cil(
struct xfs_mount *mp,
struct xfs_trans *tp,
if (flags & XFS_TRANS_RELEASE_LOG_RES)
log_flags = XFS_LOG_REL_PERM_RESERV;
- if (XLOG_FORCED_SHUTDOWN(log)) {
- xlog_cil_free_logvec(log_vector);
- return XFS_ERROR(EIO);
- }
-
/*
* do all the hard work of formatting items (including memory
* allocation) outside the CIL context lock. This prevents stalling CIL
*/
if (push)
xlog_cil_push(log, 0);
- return 0;
}
/*
* Bulk operation version of xfs_trans_committed that takes a log vector of
* items to insert into the AIL. This uses bulk AIL insertion techniques to
* minimise lock traffic.
+ *
+ * If we are called with the aborted flag set, it is because a log write during
+ * a CIL checkpoint commit has failed. In this case, all the items in the
+ * checkpoint have already gone through IOP_COMMITTED and IOP_UNLOCK, which
+ * means that checkpoint commit abort handling is treated exactly the same
+ * as an iclog write error even though we haven't started any IO yet. Hence in
+ * this case all we need to do is IOP_COMMITTED processing, followed by an
+ * IOP_UNPIN(aborted) call.
*/
void
xfs_trans_committed_bulk(
if (XFS_LSN_CMP(item_lsn, (xfs_lsn_t)-1) == 0)
continue;
+ /*
+ * if we are aborting the operation, no point in inserting the
+ * object into the AIL as we are in a shutdown situation.
+ */
+ if (aborted) {
+ ASSERT(XFS_FORCED_SHUTDOWN(ailp->xa_mount));
+ IOP_UNPIN(lip, 1);
+ continue;
+ }
+
if (item_lsn != commit_lsn) {
/*
}
/*
- * Called from the trans_commit code when we notice that
- * the filesystem is in the middle of a forced shutdown.
+ * Called from the trans_commit code when we notice that the filesystem is in
+ * the middle of a forced shutdown.
+ *
+ * When we are called here, we have already pinned all the items in the
+ * transaction. However, neither IOP_COMMITTING nor IOP_UNLOCK has been called
+ * so we can simply walk the items in the transaction, unpin them with an abort
+ * flag and then free the items. Note that unpinning the items can result in
+ * them being freed immediately, so we need to use a safe list traversal method
+ * here.
*/
STATIC void
xfs_trans_uncommit(
struct xfs_trans *tp,
uint flags)
{
- struct xfs_log_item_desc *lidp;
+ struct xfs_log_item_desc *lidp, *n;
- list_for_each_entry(lidp, &tp->t_items, lid_trans) {
- /*
- * Unpin all but those that aren't dirty.
- */
+ list_for_each_entry_safe(lidp, n, &tp->t_items, lid_trans) {
if (lidp->lid_flags & XFS_LID_DIRTY)
IOP_UNPIN(lidp->lid_item, 1);
}
int flags)
{
struct xfs_log_vec *log_vector;
- int error;
/*
* Get each log item to allocate a vector structure for
if (!log_vector)
return ENOMEM;
- error = xfs_log_commit_cil(mp, tp, log_vector, commit_lsn, flags);
- if (error)
- return error;
+ xfs_log_commit_cil(mp, tp, log_vector, commit_lsn, flags);
current_restore_flags_nested(&tp->t_pflags, PF_FSTRANS);
xfs_trans_free(tp);
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/* Current ACPICA subsystem version in YYYYMMDD format */
-#define ACPI_CA_VERSION 0x20101209
+#define ACPI_CA_VERSION 0x20110112
#include "actypes.h"
#include "actbl.h"
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2010, Intel Corp.
+ * Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#endif
#ifdef CONFIG_EVENT_TRACING
-#define FTRACE_EVENTS() VMLINUX_SYMBOL(__start_ftrace_events) = .; \
+#define FTRACE_EVENTS() . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start_ftrace_events) = .; \
*(_ftrace_events) \
VMLINUX_SYMBOL(__stop_ftrace_events) = .;
#else
#endif
#ifdef CONFIG_FTRACE_SYSCALLS
-#define TRACE_SYSCALLS() VMLINUX_SYMBOL(__start_syscalls_metadata) = .; \
+#define TRACE_SYSCALLS() . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start_syscalls_metadata) = .; \
*(__syscalls_metadata) \
VMLINUX_SYMBOL(__stop_syscalls_metadata) = .;
#else
CPU_KEEP(exit.data) \
MEM_KEEP(init.data) \
MEM_KEEP(exit.data) \
- . = ALIGN(32); \
- VMLINUX_SYMBOL(__start___tracepoints) = .; \
+ STRUCT_ALIGN(); \
*(__tracepoints) \
- VMLINUX_SYMBOL(__stop___tracepoints) = .; \
/* implement dynamic printk debug */ \
. = ALIGN(8); \
VMLINUX_SYMBOL(__start___verbose) = .; \
VMLINUX_SYMBOL(__stop___verbose) = .; \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
- TRACE_PRINTKS() \
- \
- STRUCT_ALIGN(); \
- FTRACE_EVENTS() \
- \
- STRUCT_ALIGN(); \
- TRACE_SYSCALLS()
+ TRACE_PRINTKS()
/*
* Data section helpers
VMLINUX_SYMBOL(__start_rodata) = .; \
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
+ . = ALIGN(8); \
+ VMLINUX_SYMBOL(__start___tracepoints_ptrs) = .; \
+ *(__tracepoints_ptrs) /* Tracepoints: pointer array */\
+ VMLINUX_SYMBOL(__stop___tracepoints_ptrs) = .; \
*(__markers_strings) /* Markers: strings */ \
*(__tracepoints_strings)/* Tracepoints: strings */ \
} \
VMLINUX_SYMBOL(__start___param) = .; \
*(__param) \
VMLINUX_SYMBOL(__stop___param) = .; \
+ } \
+ \
+ /* Built-in module versions. */ \
+ __modver : AT(ADDR(__modver) - LOAD_OFFSET) { \
+ VMLINUX_SYMBOL(__start___modver) = .; \
+ *(__modver) \
+ VMLINUX_SYMBOL(__stop___modver) = .; \
. = ALIGN((align)); \
VMLINUX_SYMBOL(__end_rodata) = .; \
} \
KERNEL_CTORS() \
*(.init.rodata) \
MCOUNT_REC() \
+ FTRACE_EVENTS() \
+ TRACE_SYSCALLS() \
DEV_DISCARD(init.rodata) \
CPU_DISCARD(init.rodata) \
MEM_DISCARD(init.rodata) \
extern u32 drm_vblank_count(struct drm_device *dev, int crtc);
extern u32 drm_vblank_count_and_time(struct drm_device *dev, int crtc,
struct timeval *vblanktime);
-extern void drm_handle_vblank(struct drm_device *dev, int crtc);
+extern bool drm_handle_vblank(struct drm_device *dev, int crtc);
extern int drm_vblank_get(struct drm_device *dev, int crtc);
extern void drm_vblank_put(struct drm_device *dev, int crtc);
extern void drm_vblank_off(struct drm_device *dev, int crtc);
/**
* drm_crtc_funcs - control CRTCs for a given device
+ * @reset: reset CRTC after state has been invalidated (e.g. resume)
* @dpms: control display power levels
* @save: save CRTC state
 * @restore: restore CRTC state
void (*save)(struct drm_crtc *crtc); /* suspend? */
/* Restore CRTC state */
void (*restore)(struct drm_crtc *crtc); /* resume? */
+ /* Reset CRTC state */
+ void (*reset)(struct drm_crtc *crtc);
/* cursor controls */
int (*cursor_set)(struct drm_crtc *crtc, struct drm_file *file_priv,
* @dpms: set power state (see drm_crtc_funcs above)
* @save: save connector state
* @restore: restore connector state
+ * @reset: reset connector after state has been invalidated (e.g. resume)
* @mode_valid: is this mode valid on the given connector?
* @mode_fixup: try to fixup proposed mode for this connector
* @mode_set: set this mode
void (*dpms)(struct drm_connector *connector, int mode);
void (*save)(struct drm_connector *connector);
void (*restore)(struct drm_connector *connector);
+ void (*reset)(struct drm_connector *connector);
/* Check to see if anything is attached to the connector.
* @force is set to false whilst polling, true when checking the
};
struct drm_encoder_funcs {
+ void (*reset)(struct drm_encoder *encoder);
void (*destroy)(struct drm_encoder *encoder);
};
struct drm_display_mode *mode);
extern void drm_mode_debug_printmodeline(struct drm_display_mode *mode);
extern void drm_mode_config_init(struct drm_device *dev);
+extern void drm_mode_config_reset(struct drm_device *dev);
extern void drm_mode_config_cleanup(struct drm_device *dev);
extern void drm_mode_set_name(struct drm_display_mode *mode);
extern bool drm_mode_equal(struct drm_display_mode *mode1, struct drm_display_mode *mode2);
{0x1002, 0x4156, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
{0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP}, \
{0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
- {0x1002, 0x4243, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
{0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
{0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
{0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|RADEON_IS_IGP|RADEON_IS_MOBILITY}, \
#define RADEON_INFO_TILING_CONFIG 0x06
#define RADEON_INFO_WANT_HYPERZ 0x07
#define RADEON_INFO_WANT_CMASK 0x08 /* get access to CMASK on r300 */
+#define RADEON_INFO_CLOCK_CRYSTAL_FREQ 0x09 /* clock crystal frequency */
struct drm_radeon_info {
uint32_t request;
header-y += byteorder/
header-y += can/
+header-y += caif/
header-y += dvb/
header-y += hdlc/
header-y += isdn/
u32 *mask, u32 req);
extern void acpi_early_init(void);
-int acpi_os_map_generic_address(struct acpi_generic_address *addr);
-void acpi_os_unmap_generic_address(struct acpi_generic_address *addr);
-
#else /* !CONFIG_ACPI */
#define acpi_disabled 1
--- /dev/null
+#ifndef _ACPI_IO_H_
+#define _ACPI_IO_H_
+
+#include <linux/io.h>
+#include <acpi/acpi.h>
+
+static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
+ acpi_size size)
+{
+ return ioremap_cache(phys, size);
+}
+
+int acpi_os_map_generic_address(struct acpi_generic_address *addr);
+void acpi_os_unmap_generic_address(struct acpi_generic_address *addr);
+
+#endif
--- /dev/null
+header-y += caif_socket.h
+header-y += if_caif.h
extern void register_console(struct console *);
extern int unregister_console(struct console *);
extern struct console *console_drivers;
-extern void acquire_console_sem(void);
-extern int try_acquire_console_sem(void);
-extern void release_console_sem(void);
+extern void console_lock(void);
+extern int console_trylock(void);
+extern void console_unlock(void);
extern void console_conditional_schedule(void);
extern void console_unblank(void);
extern struct tty_driver *console_device(int *);
}
/*
- * Check if the task should be counted as freezeable by the freezer
+ * Check if the task should be counted as freezable by the freezer
*/
static inline int freezer_should_skip(struct task_struct *p)
{
void __user *buffer, size_t *lenp, loff_t *ppos);
int __init get_filesystem_list(char *buf);
+#define __FMODE_EXEC ((__force int) FMODE_EXEC)
+#define __FMODE_NONOTIFY ((__force int) FMODE_NONOTIFY)
+
#define ACC_MODE(x) ("\004\002\006\006"[(x)&O_ACCMODE])
#define OPEN_FMODE(flag) ((__force fmode_t)(((flag + 1) & O_ACCMODE) | \
- (flag & FMODE_NONOTIFY)))
+ (flag & __FMODE_NONOTIFY)))
#endif /* __KERNEL__ */
#endif /* _LINUX_FS_H */
((1 << ZONES_SHIFT) - 1);
if (__builtin_constant_p(bit))
- MAYBE_BUILD_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
+ BUILD_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
else {
#ifdef CONFIG_DEBUG_VM
BUG_ON((GFP_ZONE_BAD >> bit) & 1);
(transparent_hugepage_flags & \
(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG) && \
((__vma)->vm_flags & VM_HUGEPAGE))) && \
- !((__vma)->vm_flags & VM_NOHUGEPAGE))
+ !((__vma)->vm_flags & VM_NOHUGEPAGE) && \
+ !is_vma_temporary_stack(__vma))
#define transparent_hugepage_defrag(__vma) \
((transparent_hugepage_flags & \
(1<<TRANSPARENT_HUGEPAGE_DEFRAG_FLAG)) || \
/* block-ack parameters */
#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
+#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
* @cs_en: pointer to the cs enable function
* @cs_dis: pointer to the cs disable function
* @irq_read_val: pointer to read the pen irq value function
- * @x_max_res: xmax resolution
- * @y_max_res: ymax resolution
* @touch_x_max: touch x max
* @touch_y_max: touch y max
* @cs_pin: chip select pin
int (*cs_en)(int reset_pin);
int (*cs_dis)(int reset_pin);
int (*irq_read_val)(void);
- int x_max_res;
- int y_max_res;
int touch_x_max;
int touch_y_max;
unsigned int cs_pin;
#include <linux/types.h>
#include <linux/input.h>
-#define MATRIX_MAX_ROWS 16
-#define MATRIX_MAX_COLS 16
+#define MATRIX_MAX_ROWS 32
+#define MATRIX_MAX_COLS 32
#define KEY(row, col, val) ((((row) & (MATRIX_MAX_ROWS - 1)) << 24) |\
(((col) & (MATRIX_MAX_COLS - 1)) << 16) |\
#define IRQF_MODIFY_MASK \
(IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
- IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL)
+ IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
+ IRQ_PER_CPU)
#ifdef CONFIG_IRQ_PER_CPU
# define CHECK_IRQ_PER_CPU(var) ((var) & IRQ_PER_CPU)
#define get_irq_desc_data(desc) ((desc)->irq_data.handler_data)
#define get_irq_desc_msi(desc) ((desc)->irq_data.msi_desc)
-/*
- * Monolithic do_IRQ implementation.
- */
-#ifndef CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ
-extern unsigned int __do_IRQ(unsigned int irq);
-#endif
-
/*
* Architectures call this to let the generic IRQ layer
* handle an interrupt. If the descriptor is attached to an
*/
static inline void generic_handle_irq_desc(unsigned int irq, struct irq_desc *desc)
{
-#ifdef CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ
desc->handle_irq(irq, desc);
-#else
- if (likely(desc->handle_irq))
- desc->handle_irq(irq, desc);
- else
- __do_IRQ(irq);
-#endif
}
static inline void generic_handle_irq(unsigned int irq)
extern unsigned long get_taint(void);
extern int root_mountflags;
+extern bool early_boot_irqs_disabled;
+
/* Values used for system_state */
extern enum system_states {
SYSTEM_BOOTING,
char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding: libc5 uses this.. */
};
-/* Force a compilation error if condition is true */
-#define BUILD_BUG_ON(condition) ((void)BUILD_BUG_ON_ZERO(condition))
-
-/* Force a compilation error if condition is constant and true */
-#define MAYBE_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))
-
/* Force a compilation error if a constant expression is not a power of 2 */
#define BUILD_BUG_ON_NOT_POWER_OF_2(n) \
BUILD_BUG_ON((n) == 0 || (((n) & ((n) - 1)) != 0))
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
#define BUILD_BUG_ON_NULL(e) ((void *)sizeof(struct { int:-!!(e); }))
+/**
+ * BUILD_BUG_ON - break compile if a condition is true.
+ * @condition: the condition which the compiler should know is false.
+ *
+ * If you have some code which relies on certain constants being equal, or
+ * other compile-time-evaluated condition, you should use BUILD_BUG_ON to
+ * detect if someone changes it.
+ *
+ * The implementation uses gcc's reluctance to create a negative array, but
+ * gcc (as of 4.4) only emits that error for obvious cases (eg. not arguments
+ * to inline functions). So as a fallback we use the optimizer; if it can't
+ * prove the condition is false, it will cause a link error on the undefined
+ * "__build_bug_on_failed". This error message can be harder to track down
+ * though, hence the two different methods.
+ */
+#ifndef __OPTIMIZE__
+#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
+#else
+extern int __build_bug_on_failed;
+#define BUILD_BUG_ON(condition) \
+ do { \
+ ((void)sizeof(char[1 - 2*!!(condition)])); \
+ if (condition) __build_bug_on_failed = 1; \
+ } while(0)
+#endif
+
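A typical use, sketched here with a made-up structure, is pinning a layout assumption at compile time; note that BUILD_BUG_ON is a statement and must appear inside a function body:

struct example_wire_header {
	__le32 magic;
	__le32 len;
};

static inline void example_check_wire_layout(void)
{
	/* Compiles away when the assumption holds; breaks the build otherwise. */
	BUILD_BUG_ON(sizeof(struct example_wire_header) != 8);
}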
/* Trap pasters of __FUNCTION__ at compile-time */
#define __FUNCTION__ (__func__)
struct list_head k_list;
void (*get)(struct klist_node *);
void (*put)(struct klist_node *);
-} __attribute__ ((aligned (4)));
+} __attribute__ ((aligned (sizeof(void *))));
#define KLIST_INIT(_name, _get, _put) \
{ .k_lock = __SPIN_LOCK_UNLOCKED(_name.k_lock), \
\
_n = (long) &((ptr)->name##_end) \
- (long) &((ptr)->name##_begin); \
- MAYBE_BUILD_BUG_ON(_n < 0); \
+ BUILD_BUG_ON(_n < 0); \
\
kmemcheck_mark_initialized(&((ptr)->name##_begin), _n); \
} while (0)
* in an undefined state.
*/
#ifndef CONFIG_DEBUG_LIST
+static inline void __list_del_entry(struct list_head *entry)
+{
+ __list_del(entry->prev, entry->next);
+}
+
static inline void list_del(struct list_head *entry)
{
__list_del(entry->prev, entry->next);
entry->prev = LIST_POISON2;
}
#else
+extern void __list_del_entry(struct list_head *entry);
extern void list_del(struct list_head *entry);
#endif
*/
static inline void list_del_init(struct list_head *entry)
{
- __list_del(entry->prev, entry->next);
+ __list_del_entry(entry);
INIT_LIST_HEAD(entry);
}
*/
static inline void list_move(struct list_head *list, struct list_head *head)
{
- __list_del(list->prev, list->next);
+ __list_del_entry(list);
list_add(list, head);
}
static inline void list_move_tail(struct list_head *list,
struct list_head *head)
{
- __list_del(list->prev, list->next);
+ __list_del_entry(list);
list_add_tail(list, head);
}
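The call pattern these helpers serve is unchanged; a minimal sketch moving the first entry of one list to the tail of another (names are illustrative):

static void example_requeue_first(struct list_head *src, struct list_head *dst)
{
	if (!list_empty(src))
		list_move_tail(src->next, dst);	/* unlinks via __list_del_entry() */
}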
#endif /* CONFIG_LOCKDEP */
#ifdef CONFIG_TRACE_IRQFLAGS
-extern void early_boot_irqs_off(void);
-extern void early_boot_irqs_on(void);
extern void print_irqtrace_events(struct task_struct *curr);
#else
-static inline void early_boot_irqs_off(void)
-{
-}
-static inline void early_boot_irqs_on(void)
-{
-}
static inline void print_irqtrace_events(struct task_struct *curr)
{
}
#ifdef CONFIG_DEBUG_LOCK_ALLOC
# ifdef CONFIG_PROVE_LOCKING
# define lock_map_acquire(l) lock_acquire(l, 0, 0, 0, 2, NULL, _THIS_IP_)
+# define lock_map_acquire_read(l) lock_acquire(l, 0, 0, 2, 2, NULL, _THIS_IP_)
# else
# define lock_map_acquire(l) lock_acquire(l, 0, 0, 0, 1, NULL, _THIS_IP_)
+# define lock_map_acquire_read(l) lock_acquire(l, 0, 0, 2, 1, NULL, _THIS_IP_)
# endif
# define lock_map_release(l) lock_release(l, 1, _THIS_IP_)
#else
# define lock_map_acquire(l) do { } while (0)
+# define lock_map_acquire_read(l) do { } while (0)
# define lock_map_release(l) do { } while (0)
#endif
gfp_t gfp_mask);
u64 mem_cgroup_get_limit(struct mem_cgroup *mem);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+void mem_cgroup_split_huge_fixup(struct page *head, struct page *tail);
+#endif
+
#else /* CONFIG_CGROUP_MEM_RES_CTLR */
struct mem_cgroup;
return 0;
}
+static inline void mem_cgroup_split_huge_fixup(struct page *head,
+ struct page *tail)
+{
+}
+
#endif /* CONFIG_CGROUP_MEM_CONT */
#endif /* _LINUX_MEMCONTROL_H */
page[1].lru.prev = (void *)order;
}
+#ifdef CONFIG_MMU
/*
* Do pte_mkwrite, but only if the vma says VM_WRITE. We do this when
* servicing faults for write access. In the normal case, do always want
pte = pte_mkwrite(pte);
return pte;
}
+#endif
/*
* Multiple processes may "see" the same page. E.g. for untouched
static inline u32 sh_mmcif_readl(void __iomem *addr, int reg)
{
- return readl(addr + reg);
+ return __raw_readl(addr + reg);
}
static inline void sh_mmcif_writel(void __iomem *addr, int reg, u32 val)
{
- writel(val, addr + reg);
+ __raw_writel(val, addr + reg);
}
#define SH_MMCIF_BBS 512 /* boot block size */
void (*free)(struct module *);
};
+struct module_version_attribute {
+ struct module_attribute mattr;
+ const char *module_name;
+ const char *version;
+} __attribute__ ((__aligned__(sizeof(void *))));
+
struct module_kobject
{
struct kobject kobj;
Using this automatically adds a checksum of the .c files and the
local headers in "srcversion".
*/
+
+#if defined(MODULE) || !defined(CONFIG_SYSFS)
#define MODULE_VERSION(_version) MODULE_INFO(version, _version)
+#else
+#define MODULE_VERSION(_version) \
+ extern ssize_t __modver_version_show(struct module_attribute *, \
+ struct module *, char *); \
+ static struct module_version_attribute __modver_version_attr \
+ __used \
+ __attribute__ ((__section__ ("__modver"),aligned(sizeof(void *)))) \
+ = { \
+ .mattr = { \
+ .attr = { \
+ .name = "version", \
+ .mode = S_IRUGO, \
+ }, \
+ .show = __modver_version_show, \
+ }, \
+ .module_name = KBUILD_MODNAME, \
+ .version = _version, \
+ }
+#endif
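Module authors keep using the macro exactly as before; with the __modver section the version also becomes visible for built-in code. A hedged sketch for a hypothetical driver source file foo.c:

MODULE_LICENSE("GPL");
MODULE_VERSION("1.2.3");	/* expected at /sys/module/foo/version (assumption) */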
/* Optional firmware file (or files) needed by the module
* format is simply firmware file name. Multiple firmware
keeping pointers to this stuff */
char *args;
#ifdef CONFIG_TRACEPOINTS
- struct tracepoint *tracepoints;
+ struct tracepoint * const *tracepoints_ptrs;
unsigned int num_tracepoints;
#endif
#ifdef HAVE_JUMP_LABEL
unsigned int num_trace_bprintk_fmt;
#endif
#ifdef CONFIG_EVENT_TRACING
- struct ftrace_event_call *trace_events;
+ struct ftrace_event_call **trace_events;
unsigned int num_trace_events;
#endif
#ifdef CONFIG_FTRACE_MCOUNT_RECORD
/* Chosen so that structs with an unsigned long line up. */
#define MAX_PARAM_PREFIX_LEN (64 - sizeof(unsigned long))
-#ifdef MODULE
#define ___module_cat(a,b) __mod_ ## a ## b
#define __module_cat(a,b) ___module_cat(a,b)
+#ifdef MODULE
#define __MODULE_INFO(tag, name, info) \
static const char __module_cat(name,__LINE__)[] \
__used __attribute__((section(".modinfo"), unused, aligned(1))) \
= __stringify(tag) "=" info
#else /* !MODULE */
-#define __MODULE_INFO(tag, name, info)
+/* This struct is here for syntactic coherency; it is not used */
+#define __MODULE_INFO(tag, name, info) \
+ struct __module_cat(name,__LINE__) {}
#endif
#define __MODULE_PARM_TYPE(name, _type) \
__MODULE_INFO(parmtype, name##type, #name ":" _type)
extern int ip_mroute_setsockopt(struct sock *, int, char __user *, unsigned int);
extern int ip_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
extern int ipmr_ioctl(struct sock *sk, int cmd, void __user *arg);
+extern int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
extern int ip_mr_init(void);
#else
static inline
extern int ip6_mroute_getsockopt(struct sock *, int, char __user *, int __user *);
extern int ip6_mr_input(struct sk_buff *skb);
extern int ip6mr_ioctl(struct sock *sk, int cmd, void __user *arg);
+extern int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
extern int ip6_mr_init(void);
extern void ip6_mr_cleanup(void);
#else
return w;
}
-extern unsigned int
+extern int
nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode,
struct posix_acl *acl, int encode_entries, int typeflag);
-extern unsigned int
+extern int
nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt,
struct posix_acl **pacl);
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/printk.h>
#include <asm/atomic.h>
/* Each escaped entry is prefixed by ESCAPE_CODE
int oprofile_add_data64(struct op_entry *entry, u64 val);
int oprofile_write_commit(struct op_entry *entry);
-#ifdef CONFIG_PERF_EVENTS
+#ifdef CONFIG_HW_PERF_EVENTS
int __init oprofile_perf_init(struct oprofile_operations *ops);
void oprofile_perf_exit(void);
char *op_name_from_perf_id(void);
-#endif /* CONFIG_PERF_EVENTS */
+#else
+static inline int __init oprofile_perf_init(struct oprofile_operations *ops)
+{
+ pr_info("oprofile: hardware counters not available\n");
+ return -ENODEV;
+}
+static inline void oprofile_perf_exit(void) { }
+#endif /* CONFIG_HW_PERF_EVENTS */
#endif /* OPROFILE_H */
/* posix_acl.c */
+extern void posix_acl_init(struct posix_acl *, int);
extern struct posix_acl *posix_acl_alloc(int, gfp_t);
extern struct posix_acl *posix_acl_clone(const struct posix_acl *, gfp_t);
extern int posix_acl_valid(const struct posix_acl *);
qsize_t *(*get_reserved_space) (struct inode *);
};
+struct path;
+
/* Operations handling requests from userspace */
struct quotactl_ops {
- int (*quota_on)(struct super_block *, int, int, char *);
+ int (*quota_on)(struct super_block *, int, int, struct path *);
+ int (*quota_on_meta)(struct super_block *, int, int);
int (*quota_off)(struct super_block *, int);
int (*quota_sync)(struct super_block *, int, int);
int (*get_info)(struct super_block *, int, struct if_dqinfo *);
int dquot_file_open(struct inode *inode, struct file *file);
-int dquot_quota_on(struct super_block *sb, int type, int format_id,
- char *path);
int dquot_enable(struct inode *inode, int type, int format_id,
unsigned int flags);
-int dquot_quota_on_path(struct super_block *sb, int type, int format_id,
+int dquot_quota_on(struct super_block *sb, int type, int format_id,
struct path *path);
int dquot_quota_on_mount(struct super_block *sb, char *qf_name,
int format_id, int type);
return ret;
}
+/**
+ * res_counter_check_margin - check if the counter allows charging
+ * @cnt: the resource counter to check
+ * @bytes: the number of bytes to check the remaining space against
+ *
+ * Returns true if the counter has enough room left under its limit to be
+ * charged @bytes, false if the charge would exceed the limit.
+ */
+static inline bool res_counter_check_margin(struct res_counter *cnt,
+ unsigned long bytes)
+{
+ bool ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cnt->lock, flags);
+ ret = cnt->limit - cnt->usage >= bytes;
+ spin_unlock_irqrestore(&cnt->lock, flags);
+ return ret;
+}
+
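A brief sketch of the intended call pattern; the surrounding charge logic is illustrative only:

static bool example_can_charge_page(struct res_counter *cnt)
{
	/* True when at least one page of headroom remains under the limit. */
	return res_counter_check_margin(cnt, PAGE_SIZE);
}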
static inline bool res_counter_check_under_soft_limit(struct res_counter *cnt)
{
bool ret;
struct hrtimer pie_timer; /* sub second exp, so needs hrtimer */
int pie_enabled;
struct work_struct irqwork;
+
+
+#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL
+ struct work_struct uie_task;
+ struct timer_list uie_timer;
+ /* Those fields are protected by rtc->irq_lock */
+ unsigned int oldsecs;
+ unsigned int uie_irq_active:1;
+ unsigned int stop_uie_polling:1;
+ unsigned int uie_task_active:1;
+ unsigned int uie_timer_active:1;
+#endif
};
#define to_rtc_device(d) container_of(d, struct rtc_device, dev)
extern int rtc_dev_update_irq_enable_emul(struct rtc_device *rtc,
unsigned int enabled);
+void rtc_handle_legacy_irq(struct rtc_device *rtc, int num, int mode);
void rtc_aie_update_irq(void *private);
void rtc_uie_update_irq(void *private);
enum hrtimer_restart rtc_pie_update_irq(struct hrtimer *timer);
int rtc_unregister(rtc_task_t *task);
int rtc_control(rtc_task_t *t, unsigned int cmd, unsigned long arg);
-void rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer);
-void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer);
void rtc_timer_init(struct rtc_timer *timer, void (*f)(void* p), void* data);
int rtc_timer_start(struct rtc_device *rtc, struct rtc_timer* timer,
ktime_t expires, ktime_t period);
#define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */
#define PF_MEMPOLICY 0x10000000 /* Non-default NUMA mempolicy */
#define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */
-#define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it as freezeable */
+#define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it as freezable */
#define PF_FREEZER_NOSIG 0x80000000 /* Freezer won't send signals to it */
/*
const kernel_cap_t *effective,
const kernel_cap_t *inheritable,
const kernel_cap_t *permitted);
-int security_capable(int cap);
+int security_capable(const struct cred *cred, int cap);
int security_real_capable(struct task_struct *tsk, int cap);
int security_real_capable_noaudit(struct task_struct *tsk, int cap);
int security_sysctl(struct ctl_table *table, int op);
return cap_capset(new, old, effective, inheritable, permitted);
}
-static inline int security_capable(int cap)
+static inline int security_capable(const struct cred *cred, int cap)
{
- return cap_capable(current, current_cred(), cap, SECURITY_CAP_AUDIT);
+ return cap_capable(current, cred, cap, SECURITY_CAP_AUDIT);
}
static inline int security_real_capable(struct task_struct *tsk, int cap)
return 1;
return 0;
}
-static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
-{
- if (svc_is_backchannel(rqstp))
- return (struct nfs4_sessionid *)
- rqstp->rq_server->sv_bc_xprt->xpt_bc_sid;
- return NULL;
-}
-
#else /* CONFIG_NFS_V4_1 */
static inline int xprt_setup_backchannel(struct rpc_xprt *xprt,
unsigned int min_reqs)
return 0;
}
-static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
-{
- return NULL;
-}
-
static inline void xprt_free_bc_request(struct rpc_rqst *req)
{
}
size_t xpt_remotelen; /* length of address */
struct rpc_wait_queue xpt_bc_pending; /* backchannel wait queue */
struct list_head xpt_users; /* callbacks on free */
- void *xpt_bc_sid; /* back channel session ID */
struct net *xpt_net;
struct rpc_xprt *xpt_bc_xprt; /* NFSv4.1 backchannel */
extern struct trace_event_functions exit_syscall_print_funcs;
#define SYSCALL_TRACE_ENTER_EVENT(sname) \
- static struct syscall_metadata \
- __attribute__((__aligned__(4))) __syscall_meta_##sname; \
+ static struct syscall_metadata __syscall_meta_##sname; \
static struct ftrace_event_call __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("_ftrace_events"))) \
event_enter_##sname = { \
.name = "sys_enter"#sname, \
.class = &event_class_syscall_enter, \
.event.funcs = &enter_syscall_print_funcs, \
.data = (void *)&__syscall_meta_##sname,\
}; \
+ static struct ftrace_event_call __used \
+ __attribute__((section("_ftrace_events"))) \
+ *__event_enter_##sname = &event_enter_##sname; \
__TRACE_EVENT_FLAGS(enter_##sname, TRACE_EVENT_FL_CAP_ANY)
#define SYSCALL_TRACE_EXIT_EVENT(sname) \
- static struct syscall_metadata \
- __attribute__((__aligned__(4))) __syscall_meta_##sname; \
+ static struct syscall_metadata __syscall_meta_##sname; \
static struct ftrace_event_call __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("_ftrace_events"))) \
event_exit_##sname = { \
.name = "sys_exit"#sname, \
.class = &event_class_syscall_exit, \
.event.funcs = &exit_syscall_print_funcs, \
.data = (void *)&__syscall_meta_##sname,\
}; \
+ static struct ftrace_event_call __used \
+ __attribute__((section("_ftrace_events"))) \
+ *__event_exit_##sname = &event_exit_##sname; \
__TRACE_EVENT_FLAGS(exit_##sname, TRACE_EVENT_FL_CAP_ANY)
#define SYSCALL_METADATA(sname, nb) \
SYSCALL_TRACE_ENTER_EVENT(sname); \
SYSCALL_TRACE_EXIT_EVENT(sname); \
static struct syscall_metadata __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("__syscalls_metadata"))) \
__syscall_meta_##sname = { \
.name = "sys"#sname, \
.nb_args = nb, \
.enter_event = &event_enter_##sname, \
.exit_event = &event_exit_##sname, \
.enter_fields = LIST_HEAD_INIT(__syscall_meta_##sname.enter_fields), \
- };
+ }; \
+ static struct syscall_metadata __used \
+ __attribute__((section("__syscalls_metadata"))) \
+ *__p_syscall_meta_##sname = &__syscall_meta_##sname;
#define SYSCALL_DEFINE0(sname) \
SYSCALL_TRACE_ENTER_EVENT(_##sname); \
SYSCALL_TRACE_EXIT_EVENT(_##sname); \
static struct syscall_metadata __used \
- __attribute__((__aligned__(4))) \
- __attribute__((section("__syscalls_metadata"))) \
__syscall_meta__##sname = { \
.name = "sys_"#sname, \
.nb_args = 0, \
.exit_event = &event_exit__##sname, \
.enter_fields = LIST_HEAD_INIT(__syscall_meta__##sname.enter_fields), \
}; \
+ static struct syscall_metadata __used \
+ __attribute__((section("__syscalls_metadata"))) \
+ *__p_syscall_meta_##sname = &__syscall_meta__##sname; \
asmlinkage long sys_##sname(void)
#else
#define SYSCALL_DEFINE0(name) asmlinkage long sys_##name(void)
#include <linux/errno.h>
#include <linux/types.h>
+/* Enable/disable SYSRQ support by default (0==no, 1==yes). */
+#define SYSRQ_DEFAULT_ENABLE 1
+
/* Possible values of bitmask for enabling sysrq functions */
/* 0x0001 is reserved for enable everything */
#define SYSRQ_ENABLE_LOG 0x0002
void (*regfunc)(void);
void (*unregfunc)(void);
struct tracepoint_func __rcu *funcs;
-} __attribute__((aligned(32))); /*
- * Aligned on 32 bytes because it is
- * globally visible and gcc happily
- * align these on the structure size.
- * Keep in sync with vmlinux.lds.h.
- */
+};
/*
* Connect a probe to a tracepoint.
struct tracepoint_iter {
struct module *module;
- struct tracepoint *tracepoint;
+ struct tracepoint * const *tracepoint;
};
extern void tracepoint_iter_start(struct tracepoint_iter *iter);
extern void tracepoint_iter_next(struct tracepoint_iter *iter);
extern void tracepoint_iter_stop(struct tracepoint_iter *iter);
extern void tracepoint_iter_reset(struct tracepoint_iter *iter);
-extern int tracepoint_get_iter_range(struct tracepoint **tracepoint,
- struct tracepoint *begin, struct tracepoint *end);
+extern int tracepoint_get_iter_range(struct tracepoint * const **tracepoint,
+ struct tracepoint * const *begin, struct tracepoint * const *end);
/*
* tracepoint_synchronize_unregister must be called between the last tracepoint
#define PARAMS(args...) args
#ifdef CONFIG_TRACEPOINTS
-extern void tracepoint_update_probe_range(struct tracepoint *begin,
- struct tracepoint *end);
+extern
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end);
#else
-static inline void tracepoint_update_probe_range(struct tracepoint *begin,
- struct tracepoint *end)
+static inline
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end)
{ }
#endif /* CONFIG_TRACEPOINTS */
{ \
}
+/*
+ * We have no guarantee that gcc and the linker won't up-align the tracepoint
+ * structures, so we create an array of pointers that is used to iterate
+ * over the tracepoints.
+ */
#define DEFINE_TRACE_FN(name, reg, unreg) \
static const char __tpstrtab_##name[] \
__attribute__((section("__tracepoints_strings"))) = #name; \
struct tracepoint __tracepoint_##name \
- __attribute__((section("__tracepoints"), aligned(32))) = \
- { __tpstrtab_##name, 0, reg, unreg, NULL }
+ __attribute__((section("__tracepoints"))) = \
+ { __tpstrtab_##name, 0, reg, unreg, NULL }; \
+ static struct tracepoint * const __tracepoint_ptr_##name __used \
+ __attribute__((section("__tracepoints_ptrs"))) = \
+ &__tracepoint_##name;
#define DEFINE_TRACE(name) \
DEFINE_TRACE_FN(name, NULL, NULL);
#define USB_CDC_COMM_FEATURE 0x01
#define USB_CDC_CAP_LINE 0x02
-#define USB_CDC_CAP_BRK 0x04
+#define USB_CDC_CAP_BRK 0x04
#define USB_CDC_CAP_NOTIFY 0x08
/* "Union Functional Descriptor" from CDC spec 5.2.3.8 */
__le16 wLength;
} __attribute__ ((packed));
+struct usb_cdc_speed_change {
+ __le32 DLBitRRate; /* contains the downlink bit rate (IN pipe) */
+ __le32 ULBitRate; /* contains the uplink bit rate (OUT pipe) */
+} __attribute__ ((packed));
+
/*-------------------------------------------------------------------------*/
/*
__le16 wNdpOutDivisor;
__le16 wNdpOutPayloadRemainder;
__le16 wNdpOutAlignment;
- __le16 wPadding2;
+ __le16 wNtbOutMaxDatagrams;
} __attribute__ ((packed));
/*
__le16 wHeaderLength;
__le16 wSequence;
__le16 wBlockLength;
- __le16 wFpIndex;
+ __le16 wNdpIndex;
} __attribute__ ((packed));
struct usb_cdc_ncm_nth32 {
__le16 wHeaderLength;
__le16 wSequence;
__le32 dwBlockLength;
- __le32 dwFpIndex;
+ __le32 dwNdpIndex;
} __attribute__ ((packed));
/*
struct usb_cdc_ncm_ndp16 {
__le32 dwSignature;
__le16 wLength;
- __le16 wNextFpIndex;
+ __le16 wNextNdpIndex;
struct usb_cdc_ncm_dpe16 dpe16[0];
} __attribute__ ((packed));
#define USB_CDC_NCM_NCAP_ENCAP_COMMAND (1 << 2)
#define USB_CDC_NCM_NCAP_MAX_DATAGRAM_SIZE (1 << 3)
#define USB_CDC_NCM_NCAP_CRC_MODE (1 << 4)
+#define USB_CDC_NCM_NCAP_NTB_INPUT_SIZE (1 << 5)
/* CDC NCM subclass Table 6-3: NTB Parameter Structure */
#define USB_CDC_NCM_NTB16_SUPPORTED (1 << 0)
#define USB_CDC_NCM_NTB_MIN_IN_SIZE 2048
#define USB_CDC_NCM_NTB_MIN_OUT_SIZE 2048
+/* NTB Input Size Structure */
+struct usb_cdc_ncm_ndp_input_size {
+ __le32 dwNtbInMaxSize;
+ __le16 wNtbInMaxDatagrams;
+ __le16 wReserved;
+} __attribute__ ((packed));
+
/* CDC NCM subclass 6.2.11 SetCrcMode */
#define USB_CDC_NCM_CRC_NOT_APPENDED 0x00
#define USB_CDC_NCM_CRC_APPENDED 0x01
/* Flags that get set only during HCD registration or removal. */
unsigned rh_registered:1;/* is root hub registered? */
unsigned rh_pollable:1; /* may we poll the root hub? */
+ unsigned msix_enabled:1; /* driver has MSI-X enabled? */
/* The next flag is a stopgap, to be removed when all the HCDs
* support the new root-hub polling mechanism. */
#ifndef __LINUX_USB_GADGET_MSM72K_UDC_H__
#define __LINUX_USB_GADGET_MSM72K_UDC_H__
-#ifdef CONFIG_ARCH_MSM7X00A
-#define USB_SBUSCFG (MSM_USB_BASE + 0x0090)
-#else
#define USB_AHBBURST (MSM_USB_BASE + 0x0090)
#define USB_AHBMODE (MSM_USB_BASE + 0x0098)
-#endif
#define USB_CAPLENGTH (MSM_USB_BASE + 0x0100) /* 8 bit */
#define USB_USBCMD (MSM_USB_BASE + 0x0140)
extern int usb_serial_handle_sysrq_char(struct usb_serial_port *port,
unsigned int ch);
extern int usb_serial_handle_break(struct usb_serial_port *port);
+extern void usb_serial_handle_dcd_change(struct usb_serial_port *usb_port,
+ struct tty_struct *tty,
+ unsigned int status);
extern int usb_serial_bus_register(struct usb_serial_driver *device);
unsigned int fbit)
{
/* Did you forget to fix assumptions on max features? */
- MAYBE_BUILD_BUG_ON(fbit >= 32);
+ if (__builtin_constant_p(fbit))
+ BUILD_BUG_ON(fbit >= 32);
+ else
+ BUG_ON(fbit >= 32);
if (fbit < VIRTIO_TRANSPORT_F_START)
virtio_check_driver_offered_feature(vdev, fbit);
* This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
* anyone can use the definitions to implement compatible drivers/servers.
*
- * Copyright (C) Red Hat, Inc., 2009, 2010
+ * Copyright (C) Red Hat, Inc., 2009, 2010, 2011
+ * Copyright (C) Amit Shah <amit.shah@redhat.com>, 2009, 2010, 2011
*/
/* Feature bits */
enum {
WQ_NON_REENTRANT = 1 << 0, /* guarantee non-reentrance */
WQ_UNBOUND = 1 << 1, /* not bound to any cpu */
- WQ_FREEZEABLE = 1 << 2, /* freeze during suspend */
+ WQ_FREEZABLE = 1 << 2, /* freeze during suspend */
WQ_MEM_RECLAIM = 1 << 3, /* may be used for memory reclaim */
WQ_HIGHPRI = 1 << 4, /* high priority */
WQ_CPU_INTENSIVE	= 1 << 5, /* cpu intensive workqueue */
/**
* alloc_ordered_workqueue - allocate an ordered workqueue
* @name: name of the workqueue
- * @flags: WQ_* flags (only WQ_FREEZEABLE and WQ_MEM_RECLAIM are meaningful)
+ * @flags: WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful)
*
* Allocate an ordered workqueue. An ordered workqueue executes at
* most one work item at any given time in the queued order. They are
#define create_workqueue(name) \
alloc_workqueue((name), WQ_MEM_RECLAIM, 1)
-#define create_freezeable_workqueue(name) \
- alloc_workqueue((name), WQ_FREEZEABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
+#define create_freezable_workqueue(name) \
+ alloc_workqueue((name), WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
#define create_singlethread_workqueue(name) \
alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
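Call sites only need to pick up the new spelling; a minimal sketch with a made-up workqueue name:

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	example_wq = create_freezable_workqueue("example");
	if (!example_wq)
		return -ENOMEM;
	return 0;
}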
--- /dev/null
+/* mt9v011 sensor
+ *
+ * Copyright (C) 2011 Hans Verkuil <hverkuil@xs4all.nl>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __MT9V011_H__
+#define __MT9V011_H__
+
+struct mt9v011_platform_data {
+ unsigned xtal; /* Hz */
+};
+
+#endif
}
#define IR_MAX_DURATION 0xFFFFFFFF /* a bit more than 4 seconds */
+#define US_TO_NS(usec) ((usec) * 1000)
+#define MS_TO_US(msec) ((msec) * 1000)
+#define MS_TO_NS(msec) ((msec) * 1000 * 1000)
void ir_raw_event_handle(struct rc_dev *dev);
int ir_raw_event_store(struct rc_dev *dev, struct ir_raw_event *ev);
/* different device locks */
spinlock_t slock;
- struct mutex lock;
+ struct mutex v4l2_lock;
unsigned char __iomem *mem; /* pointer to mapped IO memory */
u32 revision; /* chip revision; needed for bug-workarounds*/
/* Load an i2c module and return an initialized v4l2_subdev struct.
The client_type argument is the name of the chip that's on the adapter. */
-struct v4l2_subdev *v4l2_i2c_new_subdev_cfg(struct v4l2_device *v4l2_dev,
+struct v4l2_subdev *v4l2_i2c_new_subdev(struct v4l2_device *v4l2_dev,
struct i2c_adapter *adapter, const char *client_type,
- int irq, void *platform_data,
u8 addr, const unsigned short *probe_addrs);
-/* Load an i2c module and return an initialized v4l2_subdev struct.
- The client_type argument is the name of the chip that's on the adapter. */
-static inline struct v4l2_subdev *v4l2_i2c_new_subdev(struct v4l2_device *v4l2_dev,
- struct i2c_adapter *adapter, const char *client_type,
- u8 addr, const unsigned short *probe_addrs)
-{
- return v4l2_i2c_new_subdev_cfg(v4l2_dev, adapter, client_type, 0, NULL,
- addr, probe_addrs);
-}
-
struct i2c_board_info;
struct v4l2_subdev *v4l2_i2c_new_subdev_board(struct v4l2_device *v4l2_dev,
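A hedged sketch of the trimmed helper in use; the chip name, I2C address and error handling are illustrative. Drivers that still need to pass platform data should use v4l2_i2c_new_subdev_board() instead.

#include <linux/i2c.h>
#include <media/v4l2-device.h>
#include <media/v4l2-common.h>

static int example_register_sensor(struct v4l2_device *v4l2_dev,
				   struct i2c_adapter *adapter)
{
	struct v4l2_subdev *sd;

	/* the irq/platform_data arguments are gone from this call */
	sd = v4l2_i2c_new_subdev(v4l2_dev, adapter, "mt9v011", 0x5c, NULL);
	return sd ? 0 : -ENODEV;
}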
#include <linux/list.h>
#include <linux/device.h>
+#include <linux/videodev2.h>
/* forward references */
struct v4l2_ctrl_handler;
* @handler: The handler that owns the control.
* @cluster: Point to start of cluster array.
* @ncontrols: Number of controls in cluster array.
- * @has_new: Internal flag: set when there is a valid new value.
* @done: Internal flag: set for each processed control.
+ * @is_new: Set when the user specified a new value for this control. It
+ * is also set when called from v4l2_ctrl_handler_setup. Drivers
+ * should never set this flag.
* @is_private: If set, then this control is private to its handler and it
* will not be added to any other handlers. Drivers can set
* this flag.
struct v4l2_ctrl_handler *handler;
struct v4l2_ctrl **cluster;
unsigned ncontrols;
- unsigned int has_new:1;
unsigned int done:1;
+ unsigned int is_new:1;
unsigned int is_private:1;
unsigned int is_volatile:1;
u8 strength; /* Pin drive strength */
};
-/* s_config: if set, then it is always called by the v4l2_i2c_new_subdev*
- functions after the v4l2_subdev was registered. It is used to pass
- platform data to the subdev which can be used during initialization.
-
+/*
s_io_pin_config: configure one or more chip I/O pins for chips that
multiplex different internal signal pads out to IO pins. This function
takes a pointer to an array of 'n' pin configuration entries, one for
struct v4l2_subdev_core_ops {
int (*g_chip_ident)(struct v4l2_subdev *sd, struct v4l2_dbg_chip_ident *chip);
int (*log_status)(struct v4l2_subdev *sd);
- int (*s_config)(struct v4l2_subdev *sd, int irq, void *platform_data);
int (*s_io_pin_config)(struct v4l2_subdev *sd, size_t n,
struct v4l2_subdev_io_pin_config *pincfg);
int (*init)(struct v4l2_subdev *sd, u32 val);
const struct v4l2_subdev_sensor_ops *sensor;
};
+/*
+ * Internal ops. Never call this from drivers, only the v4l2 framework can call
+ * these ops.
+ *
+ * registered: called when this subdev is registered. When called the v4l2_dev
+ * field is set to the correct v4l2_device.
+ *
+ * unregistered: called when this subdev is unregistered. When called the
+ * v4l2_dev field is still set to the correct v4l2_device.
+ */
+struct v4l2_subdev_internal_ops {
+ int (*registered)(struct v4l2_subdev *sd);
+ void (*unregistered)(struct v4l2_subdev *sd);
+};
+
#define V4L2_SUBDEV_NAME_SIZE 32
/* Set this flag if this subdev is a i2c device. */
u32 flags;
struct v4l2_device *v4l2_dev;
const struct v4l2_subdev_ops *ops;
+ /* Never call these internal ops from within a driver! */
+ const struct v4l2_subdev_internal_ops *internal_ops;
/* The control handler of this subdev. May be NULL. */
struct v4l2_ctrl_handler *ctrl_handler;
/* name must be unique */
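A hedged sketch of a subdev driver wiring up the new internal ops; the function bodies are placeholders:

#include <media/v4l2-subdev.h>

static int example_registered(struct v4l2_subdev *sd)
{
	/* sd->v4l2_dev is valid here; finish bridge-dependent setup */
	return 0;
}

static void example_unregistered(struct v4l2_subdev *sd)
{
	/* sd->v4l2_dev is still valid; undo example_registered() */
}

static const struct v4l2_subdev_internal_ops example_internal_ops = {
	.registered   = example_registered,
	.unregistered = example_unregistered,
};

/* in the driver's probe():  sd->internal_ops = &example_internal_ops; */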
__u32 link_mode;
__u8 auth_type;
__u8 sec_level;
+ __u8 pending_sec_level;
__u8 power_save;
__u16 disc_timeout;
unsigned long pend;
*/
static inline void genlmsg_cancel(struct sk_buff *skb, void *hdr)
{
- nlmsg_cancel(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
+ if (hdr)
+ nlmsg_cancel(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
}
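An illustrative sketch of why the NULL check matters: an attribute-fill error path can now call genlmsg_cancel() unconditionally. The family, command and attribute below are made up for the example.

#include <net/genetlink.h>

enum { EXAMPLE_ATTR_VALUE = 1, __EXAMPLE_ATTR_MAX };
enum { EXAMPLE_CMD_GET = 1 };

static struct genl_family example_family = {
	.id      = GENL_ID_GENERATE,
	.name    = "example",
	.version = 1,
	.maxattr = __EXAMPLE_ATTR_MAX - 1,
};

static int example_fill(struct sk_buff *skb, u32 pid, u32 seq, u32 val)
{
	void *hdr = genlmsg_put(skb, pid, seq, &example_family, 0,
				EXAMPLE_CMD_GET);

	if (!hdr || nla_put_u32(skb, EXAMPLE_ATTR_VALUE, val)) {
		genlmsg_cancel(skb, hdr);	/* safe even if hdr == NULL */
		return -EMSGSIZE;
	}
	return genlmsg_end(skb, hdr);
}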
/**
if (e == NULL)
return;
- if (!(e->ctmask & (1 << event)))
- return;
-
set_bit(event, &e->cache);
}
{
__skb_queue_tail(list, skb);
sch->qstats.backlog += qdisc_pkt_len(skb);
- qdisc_bstats_update(sch, skb);
return NET_XMIT_SUCCESS;
}
{
struct sk_buff *skb = __skb_dequeue(list);
- if (likely(skb != NULL))
+ if (likely(skb != NULL)) {
sch->qstats.backlog -= qdisc_pkt_len(skb);
+ qdisc_bstats_update(sch, skb);
+ }
return skb;
}
static inline unsigned int __qdisc_queue_drop_head(struct Qdisc *sch,
struct sk_buff_head *list)
{
- struct sk_buff *skb = __qdisc_dequeue_head(sch, list);
+ struct sk_buff *skb = __skb_dequeue(list);
if (likely(skb != NULL)) {
unsigned int len = qdisc_pkt_len(skb);
+ sch->qstats.backlog -= len;
kfree_skb(skb);
return len;
}
#define SCTP_GET_PEER_ADDR_INFO 15
#define SCTP_DELAYED_ACK_TIME 16
#define SCTP_DELAYED_ACK SCTP_DELAYED_ACK_TIME
+#define SCTP_DELAYED_SACK SCTP_DELAYED_ACK_TIME
#define SCTP_CONTEXT 17
#define SCTP_FRAGMENT_INTERLEAVE 18
#define SCTP_PARTIAL_DELIVERY_POINT 19 /* Set/Get partial delivery point */
int level,
int optname, char __user *optval,
int __user *option);
+ int (*compat_ioctl)(struct sock *sk,
+ unsigned int cmd, unsigned long arg);
#endif
int (*sendmsg)(struct kiocb *iocb, struct sock *sk,
struct msghdr *msg, size_t len);
#define _SCSI_SCSI_H
#include <linux/types.h>
+#include <linux/scatterlist.h>
struct scsi_cmnd;
#include <scsi/scsi_cmnd.h>
#include <net/sock.h>
#include <net/tcp.h>
-#include "target_core_mib.h"
#define TARGET_CORE_MOD_VERSION "v4.0.0-rc6"
#define SHUTDOWN_SIGS (sigmask(SIGKILL)|sigmask(SIGINT)|sigmask(SIGABRT))
SAM_TASK_ATTR_EMULATED
} t10_task_attr_index_t;
+/*
+ * Used for target SCSI statistics
+ */
+typedef enum {
+ SCSI_INST_INDEX,
+ SCSI_DEVICE_INDEX,
+ SCSI_AUTH_INTR_INDEX,
+ SCSI_INDEX_TYPE_MAX
+} scsi_index_t;
+
+struct scsi_index_table {
+ spinlock_t lock;
+ u32 scsi_mib_index[SCSI_INDEX_TYPE_MAX];
+} ____cacheline_aligned;
+
struct se_cmd;
struct t10_alua {
spinlock_t stats_lock;
/* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */
atomic_t acl_pr_ref_count;
- /* Used for MIB access */
- atomic_t mib_ref_count;
struct se_dev_entry *device_list;
struct se_session *nacl_sess;
struct se_portal_group *se_tpg;
} ____cacheline_aligned;
struct se_session {
- /* Used for MIB access */
- atomic_t mib_ref_count;
u64 sess_bin_isid;
struct se_node_acl *se_node_acl;
struct se_portal_group *se_tpg;
/* Virtual iSCSI devices attached. */
u32 dev_count;
u32 hba_index;
- atomic_t dev_mib_access_count;
atomic_t load_balance_queue;
atomic_t left_queue_depth;
/* Maximum queue depth the HBA can handle. */
#define SE_LUN(c) ((struct se_lun *)(c)->se_lun)
+struct scsi_port_stats {
+ u64 cmd_pdus;
+ u64 tx_data_octets;
+ u64 rx_data_octets;
+} ____cacheline_aligned;
+
struct se_port {
 /* RELATIVE TARGET PORT IDENTIFIER */
u16 sep_rtpi;
} ____cacheline_aligned;
struct se_tpg_np {
+ struct se_portal_group *tpg_np_parent;
struct config_group tpg_np_group;
} ____cacheline_aligned;
extern int init_se_global(void);
extern void release_se_global(void);
+extern void init_scsi_index_table(void);
+extern u32 scsi_get_new_index(scsi_index_t);
extern void transport_init_queue_obj(struct se_queue_obj *);
extern int transport_subsystem_check_init(void);
extern int transport_subsystem_register(struct se_subsystem_api *);
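A hedged usage sketch of the new statistics-index helper; the surrounding setup code is illustrative:

/* scsi_get_new_index() is declared in the header shown above */
static u32 example_dev_index;

static void example_setup_stats(void)
{
	/* hand out a unique index for this device's statistics entries */
	example_dev_index = scsi_get_new_index(SCSI_DEVICE_INDEX);
}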
* .reg = ftrace_event_reg,
* };
*
- * static struct ftrace_event_call __used
- * __attribute__((__aligned__(4)))
- * __attribute__((section("_ftrace_events"))) event_<call> = {
+ * static struct ftrace_event_call event_<call> = {
* .name = "<call>",
* .class = event_class_<template>,
* .event = &ftrace_event_type_<call>,
* .print_fmt = print_fmt_<call>,
* };
+ * // it's only safe to use pointers when doing linker tricks to
+ * // create an array.
+ * static struct ftrace_event_call __used
+ * __attribute__((section("_ftrace_events"))) *__event_<call> = &event_<call>;
*
*/
#undef DEFINE_EVENT
#define DEFINE_EVENT(template, call, proto, args) \
\
-static struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+static struct ftrace_event_call __used event_##call = { \
.name = #call, \
.class = &event_class_##template, \
.event.funcs = &ftrace_event_type_funcs_##template, \
.print_fmt = print_fmt_##template, \
-};
+}; \
+static struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
#undef DEFINE_EVENT_PRINT
#define DEFINE_EVENT_PRINT(template, call, proto, args, print) \
\
static const char print_fmt_##call[] = print; \
\
-static struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+static struct ftrace_event_call __used event_##call = { \
.name = #call, \
.class = &event_class_##template, \
.event.funcs = &ftrace_event_type_funcs_##call, \
.print_fmt = print_fmt_##call, \
-}
+}; \
+static struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
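For illustration, a minimal standalone sketch of the pointer-in-section pattern the comment above refers to; all names are made up, and the section is assumed to be collected contiguously by the linker:

#include <linux/compiler.h>

struct example_hook {
	const char *name;
	void (*fn)(void);
};

static struct example_hook my_hook = { .name = "demo" };

/* only a pointer goes into the collected section, so the object itself
 * needs no forced alignment */
static struct example_hook __used
__attribute__((section("_example_hooks"))) *__my_hook_ptr = &my_hook;

/* the linker provides start/stop symbols for the section */
extern struct example_hook *__start__example_hooks[];
extern struct example_hook *__stop__example_hooks[];

static void example_walk(void)
{
	struct example_hook **p;

	for (p = __start__example_hooks; p < __stop__example_hooks; p++)
		if ((*p)->fn)
			(*p)->fn();
}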
endif # CGROUPS
menuconfig NAMESPACES
- bool "Namespaces support" if EMBEDDED
- default !EMBEDDED
+ bool "Namespaces support" if EXPERT
+ default !EXPERT
help
Provides the way to make tasks work with different objects using
the same id. For example same IPC id may refer to different objects
config ANON_INODES
bool
-menuconfig EMBEDDED
- bool "Configure standard kernel features (for small systems)"
+menuconfig EXPERT
+ bool "Configure standard kernel features (expert users)"
help
This option allows certain base kernel options and settings
to be disabled or tweaked. This is for specialized
environments which can tolerate a "non-standard" kernel.
Only use this if you really know what you are doing.
+config EMBEDDED
+ bool "Embedded system"
+ select EXPERT
+ help
+ This option should be enabled if compiling the kernel for
+ an embedded system so certain expert options are available
+ for configuration.
+
config UID16
- bool "Enable 16-bit UID system calls" if EMBEDDED
+ bool "Enable 16-bit UID system calls" if EXPERT
depends on ARM || BLACKFIN || CRIS || FRV || H8300 || X86_32 || M68K || (S390 && !64BIT) || SUPERH || SPARC32 || (SPARC64 && COMPAT) || UML || (X86_64 && IA32_EMULATION)
default y
help
This enables the legacy 16-bit UID syscall wrappers.
config SYSCTL_SYSCALL
- bool "Sysctl syscall support" if EMBEDDED
+ bool "Sysctl syscall support" if EXPERT
depends on PROC_SYSCTL
default y
select SYSCTL
If unsure say Y here.
config KALLSYMS
- bool "Load all symbols for debugging/ksymoops" if EMBEDDED
+ bool "Load all symbols for debugging/ksymoops" if EXPERT
default y
help
Say Y here to let the kernel print out symbolic crash information and
config HOTPLUG
- bool "Support for hot-pluggable devices" if EMBEDDED
+ bool "Support for hot-pluggable devices" if EXPERT
default y
help
This option is provided for the case where no hotplug or uevent
config PRINTK
default y
- bool "Enable support for printk" if EMBEDDED
+ bool "Enable support for printk" if EXPERT
help
This option enables normal printk support. Removing it
eliminates most of the message strings from the kernel image
strongly discouraged.
config BUG
- bool "BUG() support" if EMBEDDED
+ bool "BUG() support" if EXPERT
default y
help
Disabling this option eliminates support for BUG and WARN, reducing
config ELF_CORE
default y
- bool "Enable ELF core dumps" if EMBEDDED
+ bool "Enable ELF core dumps" if EXPERT
help
Enable support for generating core dumps. Disabling saves about 4k.
config PCSPKR_PLATFORM
- bool "Enable PC-Speaker support" if EMBEDDED
+ bool "Enable PC-Speaker support" if EXPERT
depends on ALPHA || X86 || MIPS || PPC_PREP || PPC_CHRP || PPC_PSERIES
default y
help
config BASE_FULL
default y
- bool "Enable full-sized data structures for core" if EMBEDDED
+ bool "Enable full-sized data structures for core" if EXPERT
help
Disabling this option reduces the size of miscellaneous core
kernel data structures. This saves memory on small machines,
but may reduce performance.
config FUTEX
- bool "Enable futex support" if EMBEDDED
+ bool "Enable futex support" if EXPERT
default y
select RT_MUTEXES
help
run glibc-based applications correctly.
config EPOLL
- bool "Enable eventpoll support" if EMBEDDED
+ bool "Enable eventpoll support" if EXPERT
default y
select ANON_INODES
help
support for epoll family of system calls.
config SIGNALFD
- bool "Enable signalfd() system call" if EMBEDDED
+ bool "Enable signalfd() system call" if EXPERT
select ANON_INODES
default y
help
If unsure, say Y.
config TIMERFD
- bool "Enable timerfd() system call" if EMBEDDED
+ bool "Enable timerfd() system call" if EXPERT
select ANON_INODES
default y
help
If unsure, say Y.
config EVENTFD
- bool "Enable eventfd() system call" if EMBEDDED
+ bool "Enable eventfd() system call" if EXPERT
select ANON_INODES
default y
help
If unsure, say Y.
config SHMEM
- bool "Use full shmem filesystem" if EMBEDDED
+ bool "Use full shmem filesystem" if EXPERT
default y
depends on MMU
help
which may be appropriate on small systems without swap.
config AIO
- bool "Enable AIO support" if EMBEDDED
+ bool "Enable AIO support" if EXPERT
default y
help
This option enables POSIX asynchronous I/O which may by used
config VM_EVENT_COUNTERS
default y
- bool "Enable VM event counters for /proc/vmstat" if EMBEDDED
+ bool "Enable VM event counters for /proc/vmstat" if EXPERT
help
VM event counters are needed for event counts to be shown.
This option allows the disabling of the VM event counters
- on EMBEDDED systems. /proc/vmstat will only show page counts
+ on EXPERT systems. /proc/vmstat will only show page counts
if VM event counters are disabled.
config PCI_QUIRKS
default y
- bool "Enable PCI quirk workarounds" if EMBEDDED
+ bool "Enable PCI quirk workarounds" if EXPERT
depends on PCI
help
This enables workarounds for various PCI chipset
config SLUB_DEBUG
default y
- bool "Enable SLUB debugging support" if EMBEDDED
+ bool "Enable SLUB debugging support" if EXPERT
depends on SLUB && SYSFS
help
SLUB has extensive debug support features. Disabling these can
a slab allocator.
config SLOB
- depends on EMBEDDED
+ depends on EXPERT
bool "SLOB (Simple Allocator)"
help
SLOB replaces the stock allocator with a drastically simpler
config MMAP_ALLOW_UNINITIALIZED
bool "Allow mmapped anonymous memory to be uninitialized"
- depends on EMBEDDED && !MMU
+ depends on EXPERT && !MMU
default n
help
Normally, and according to the Linux spec, anonymous memory obtained
pre_start = 0;
read_current_timer(&start);
start_jiffies = jiffies;
- while (jiffies <= (start_jiffies + 1)) {
+ while (time_before_eq(jiffies, start_jiffies + 1)) {
pre_start = start;
read_current_timer(&start);
}
pre_end = 0;
end = post_start;
- while (jiffies <=
- (start_jiffies + 1 + DELAY_CALIBRATION_TICKS)) {
+ while (time_before_eq(jiffies, start_jiffies + 1 +
+ DELAY_CALIBRATION_TICKS)) {
pre_end = end;
read_current_timer(&end);
}
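The switch to time_before_eq() keeps the comparison correct across a jiffies wrap; a minimal sketch of the idiom (the 100 ms timeout is illustrative):

#include <linux/jiffies.h>
#include <asm/processor.h>

static void example_spin_for_100ms(void)
{
	unsigned long deadline = jiffies + msecs_to_jiffies(100);

	/* "jiffies <= deadline" would misbehave when jiffies wraps */
	while (time_before_eq(jiffies, deadline))
		cpu_relax();
}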
extern void tc_init(void);
#endif
+/*
+ * Debug helper: via this flag we know that we are in 'early bootup code'
+ * where only the boot processor is running with IRQs disabled. This means
+ * two things: IRQs must not be enabled before the flag is cleared, and some
+ * operations which are normally not allowed with IRQs disabled are allowed
+ * while the flag is set.
+ */
+bool early_boot_irqs_disabled __read_mostly;
+
enum system_states system_state __read_mostly;
EXPORT_SYMBOL(system_state);
cgroup_init_early();
local_irq_disable();
- early_boot_irqs_off();
+ early_boot_irqs_disabled = true;
/*
* Interrupts are still disabled. Do necessary setups, then
if (!irqs_disabled())
printk(KERN_CRIT "start_kernel(): bug: interrupts were "
"enabled early\n");
- early_boot_irqs_on();
+ early_boot_irqs_disabled = false;
local_irq_enable();
/* Interrupts are enabled now so all GFP allocations are safe. */
BUG();
}
- if (security_capable(cap) == 0) {
+ if (security_capable(current_cred(), cap) == 0) {
current->flags |= PF_SUPERPRIV;
return 1;
}
#endif
atomic_set(&new->usage, 1);
+#ifdef CONFIG_DEBUG_CREDENTIALS
+ new->magic = CRED_MAGIC;
+#endif
if (security_cred_alloc_blank(new, GFP_KERNEL) < 0)
goto error;
-#ifdef CONFIG_DEBUG_CREDENTIALS
- new->magic = CRED_MAGIC;
-#endif
return new;
error:
validate_creds(old);
*new = *old;
+ atomic_set(&new->usage, 1);
+ set_cred_subscribers(new, 0);
get_uid(new->user);
get_group_info(new->group_info);
if (security_prepare_creds(new, old, GFP_KERNEL) < 0)
goto error;
- atomic_set(&new->usage, 1);
- set_cred_subscribers(new, 0);
put_cred(old);
validate_creds(new);
return new;
if (cred->magic != CRED_MAGIC)
return true;
#ifdef CONFIG_SECURITY_SELINUX
- if (selinux_is_enabled()) {
+ /*
+ * cred->security == NULL if security_cred_alloc_blank() or
+ * security_prepare_creds() returned an error.
+ */
+ if (selinux_is_enabled() && cred->security) {
if ((unsigned long) cred->security < PAGE_SIZE)
return true;
if ((*(u32 *)cred->security & 0xffffff00) ==
config GENERIC_HARDIRQS
def_bool y
-config GENERIC_HARDIRQS_NO__DO_IRQ
- def_bool y
-
# Select this to disable the deprecated stuff
config GENERIC_HARDIRQS_NO_DEPRECATED
def_bool n
return retval;
}
-
-#ifndef CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ
-
-#ifdef CONFIG_ENABLE_WARN_DEPRECATED
-# warning __do_IRQ is deprecated. Please convert to proper flow handlers
-#endif
-
-/**
- * __do_IRQ - original all in one highlevel IRQ handler
- * @irq: the interrupt number
- *
- * __do_IRQ handles all normal device IRQ's (the special
- * SMP cross-CPU interrupts have their own specific
- * handlers).
- *
- * This is the original x86 implementation which is used for every
- * interrupt type.
- */
-unsigned int __do_IRQ(unsigned int irq)
-{
- struct irq_desc *desc = irq_to_desc(irq);
- struct irqaction *action;
- unsigned int status;
-
- kstat_incr_irqs_this_cpu(irq, desc);
-
- if (CHECK_IRQ_PER_CPU(desc->status)) {
- irqreturn_t action_ret;
-
- /*
- * No locking required for CPU-local interrupts:
- */
- if (desc->irq_data.chip->ack)
- desc->irq_data.chip->ack(irq);
- if (likely(!(desc->status & IRQ_DISABLED))) {
- action_ret = handle_IRQ_event(irq, desc->action);
- if (!noirqdebug)
- note_interrupt(irq, desc, action_ret);
- }
- desc->irq_data.chip->end(irq);
- return 1;
- }
-
- raw_spin_lock(&desc->lock);
- if (desc->irq_data.chip->ack)
- desc->irq_data.chip->ack(irq);
- /*
- * REPLAY is when Linux resends an IRQ that was dropped earlier
- * WAITING is used by probe to mark irqs that are being tested
- */
- status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING);
- status |= IRQ_PENDING; /* we _want_ to handle it */
-
- /*
- * If the IRQ is disabled for whatever reason, we cannot
- * use the action we have.
- */
- action = NULL;
- if (likely(!(status & (IRQ_DISABLED | IRQ_INPROGRESS)))) {
- action = desc->action;
- status &= ~IRQ_PENDING; /* we commit to handling */
- status |= IRQ_INPROGRESS; /* we are handling it */
- }
- desc->status = status;
-
- /*
- * If there is no IRQ handler or it was disabled, exit early.
- * Since we set PENDING, if another processor is handling
- * a different instance of this same irq, the other processor
- * will take care of it.
- */
- if (unlikely(!action))
- goto out;
-
- /*
- * Edge triggered interrupts need to remember
- * pending events.
- * This applies to any hw interrupts that allow a second
- * instance of the same irq to arrive while we are in do_IRQ
- * or in the handler. But the code here only handles the _second_
- * instance of the irq, not the third or fourth. So it is mostly
- * useful for irq hardware that does not mask cleanly in an
- * SMP environment.
- */
- for (;;) {
- irqreturn_t action_ret;
-
- raw_spin_unlock(&desc->lock);
-
- action_ret = handle_IRQ_event(irq, action);
- if (!noirqdebug)
- note_interrupt(irq, desc, action_ret);
-
- raw_spin_lock(&desc->lock);
- if (likely(!(desc->status & IRQ_PENDING)))
- break;
- desc->status &= ~IRQ_PENDING;
- }
- desc->status &= ~IRQ_INPROGRESS;
-
-out:
- /*
- * The ->end() handler has to deal with interrupts which got
- * disabled while the handler was running.
- */
- desc->irq_data.chip->end(irq);
- raw_spin_unlock(&desc->lock);
-
- return 1;
-}
-#endif
void move_native_irq(int irq)
{
struct irq_desc *desc = irq_to_desc(irq);
+ bool masked;
if (likely(!(desc->status & IRQ_MOVE_PENDING)))
return;
if (unlikely(desc->status & IRQ_DISABLED))
return;
- desc->irq_data.chip->irq_mask(&desc->irq_data);
+ /*
+ * Be careful vs. already masked interrupts. If this is a
+ * threaded interrupt with ONESHOT set, we can end up with an
+ * interrupt storm.
+ */
+ masked = desc->status & IRQ_MASKED;
+ if (!masked)
+ desc->irq_data.chip->irq_mask(&desc->irq_data);
move_masked_irq(irq);
- desc->irq_data.chip->irq_unmask(&desc->irq_data);
+ if (!masked)
+ desc->irq_data.chip->irq_unmask(&desc->irq_data);
}
-
return 1;
}
-/*
- * Debugging helper: via this flag we know that we are in
- * 'early bootup code', and will warn about any invalid irqs-on event:
- */
-static int early_boot_irqs_enabled;
-
-void early_boot_irqs_off(void)
-{
- early_boot_irqs_enabled = 0;
-}
-
-void early_boot_irqs_on(void)
-{
- early_boot_irqs_enabled = 1;
-}
-
/*
* Hardirqs will be enabled:
*/
if (unlikely(!debug_locks || current->lockdep_recursion))
return;
- if (DEBUG_LOCKS_WARN_ON(unlikely(!early_boot_irqs_enabled)))
+ if (DEBUG_LOCKS_WARN_ON(unlikely(early_boot_irqs_disabled)))
return;
if (unlikely(curr->hardirqs_enabled)) {
#endif
#ifdef CONFIG_TRACEPOINTS
- mod->tracepoints = section_objs(info, "__tracepoints",
- sizeof(*mod->tracepoints),
- &mod->num_tracepoints);
+ mod->tracepoints_ptrs = section_objs(info, "__tracepoints_ptrs",
+ sizeof(*mod->tracepoints_ptrs),
+ &mod->num_tracepoints);
#endif
#ifdef HAVE_JUMP_LABEL
mod->jump_entries = section_objs(info, "__jump_table",
struct modversion_info *ver,
struct kernel_param *kp,
struct kernel_symbol *ks,
- struct tracepoint *tp)
+ struct tracepoint * const *tp)
{
}
EXPORT_SYMBOL(module_layout);
mutex_lock(&module_mutex);
list_for_each_entry(mod, &modules, list)
if (!mod->taints)
- tracepoint_update_probe_range(mod->tracepoints,
- mod->tracepoints + mod->num_tracepoints);
+ tracepoint_update_probe_range(mod->tracepoints_ptrs,
+ mod->tracepoints_ptrs + mod->num_tracepoints);
mutex_unlock(&module_mutex);
}
else if (iter_mod > iter->module)
iter->tracepoint = NULL;
found = tracepoint_get_iter_range(&iter->tracepoint,
- iter_mod->tracepoints,
- iter_mod->tracepoints
+ iter_mod->tracepoints_ptrs,
+ iter_mod->tracepoints_ptrs
+ iter_mod->num_tracepoints);
if (found) {
iter->module = iter_mod;
params[i].ops->free(params[i].arg);
}
-static void __init kernel_add_sysfs_param(const char *name,
- struct kernel_param *kparam,
- unsigned int name_skip)
+static struct module_kobject * __init locate_module_kobject(const char *name)
{
struct module_kobject *mk;
struct kobject *kobj;
kobj = kset_find_obj(module_kset, name);
if (kobj) {
- /* We already have one. Remove params so we can add more. */
mk = to_module_kobject(kobj);
- /* We need to remove it before adding parameters. */
- sysfs_remove_group(&mk->kobj, &mk->mp->grp);
} else {
mk = kzalloc(sizeof(struct module_kobject), GFP_KERNEL);
BUG_ON(!mk);
"%s", name);
if (err) {
kobject_put(&mk->kobj);
- printk(KERN_ERR "Module '%s' failed add to sysfs, "
- "error number %d\n", name, err);
- printk(KERN_ERR "The system will be unstable now.\n");
- return;
+ printk(KERN_ERR
+ "Module '%s' failed add to sysfs, error number %d\n",
+ name, err);
+ printk(KERN_ERR
+ "The system will be unstable now.\n");
+ return NULL;
}
- /* So that exit path is even. */
+
+ /* So that we hold reference in both cases. */
kobject_get(&mk->kobj);
}
+ return mk;
+}
+
+static void __init kernel_add_sysfs_param(const char *name,
+ struct kernel_param *kparam,
+ unsigned int name_skip)
+{
+ struct module_kobject *mk;
+ int err;
+
+ mk = locate_module_kobject(name);
+ if (!mk)
+ return;
+
+ /* We need to remove old parameters before adding more. */
+ if (mk->mp)
+ sysfs_remove_group(&mk->kobj, &mk->mp->grp);
+
/* These should not fail at boot. */
err = add_sysfs_param(mk, kparam, kparam->name + name_skip);
BUG_ON(err);
}
}
+ssize_t __modver_version_show(struct module_attribute *mattr,
+ struct module *mod, char *buf)
+{
+ struct module_version_attribute *vattr =
+ container_of(mattr, struct module_version_attribute, mattr);
+
+ return sprintf(buf, "%s\n", vattr->version);
+}
+
+extern struct module_version_attribute __start___modver[], __stop___modver[];
+
+static void __init version_sysfs_builtin(void)
+{
+ const struct module_version_attribute *vattr;
+ struct module_kobject *mk;
+ int err;
+
+ for (vattr = __start___modver; vattr < __stop___modver; vattr++) {
+ mk = locate_module_kobject(vattr->module_name);
+ if (mk) {
+ err = sysfs_create_file(&mk->kobj, &vattr->mattr.attr);
+ kobject_uevent(&mk->kobj, KOBJ_ADD);
+ kobject_put(&mk->kobj);
+ }
+ }
+}
/* module-related sysfs stuff */
}
module_sysfs_initialized = 1;
+ version_sysfs_builtin();
param_sysfs_builtin();
return 0;
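With this in place, a MODULE_VERSION() string from a built-in driver is exported as well; an illustrative sketch (driver name and version are assumptions):

#include <linux/module.h>

MODULE_VERSION("1.2.3");
MODULE_DESCRIPTION("example driver");
/* even when built in, /sys/module/<name>/version should now read "1.2.3" */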
return;
raw_spin_lock(&ctx->lock);
- update_context_time(ctx);
+ if (ctx->is_active)
+ update_context_time(ctx);
update_event_times(event);
+ if (event->state == PERF_EVENT_STATE_ACTIVE)
+ event->pmu->read(event);
raw_spin_unlock(&ctx->lock);
-
- event->pmu->read(event);
}
static inline u64 perf_event_count(struct perf_event *event)
* accessed from NMI. Use a temporary manual per cpu allocation
* until that gets sorted out.
*/
- size = sizeof(*entries) + sizeof(struct perf_callchain_entry *) *
- num_possible_cpus();
+ size = offsetof(struct callchain_cpus_entries, cpu_entries[nr_cpu_ids]);
entries = kzalloc(size, GFP_KERNEL);
if (!entries)
if (!task)
return ERR_PTR(-ESRCH);
- /*
- * Can't attach events to a dying task.
- */
- err = -ESRCH;
- if (task->flags & PF_EXITING)
- goto errout;
-
/* Reuse ptrace permission checks for now. */
err = -EACCES;
if (!ptrace_may_access(task, PTRACE_MODE_READ))
get_ctx(ctx);
- if (cmpxchg(&task->perf_event_ctxp[ctxn], NULL, ctx)) {
- /*
- * We raced with some other task; use
- * the context they set.
- */
+ err = 0;
+ mutex_lock(&task->perf_event_mutex);
+ /*
+ * If it has already passed perf_event_exit_task().
+ * we must see PF_EXITING, it takes this mutex too.
+ */
+ if (task->flags & PF_EXITING)
+ err = -ESRCH;
+ else if (task->perf_event_ctxp[ctxn])
+ err = -EAGAIN;
+ else
+ rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
+ mutex_unlock(&task->perf_event_mutex);
+
+ if (unlikely(err)) {
put_task_struct(task);
kfree(ctx);
- goto retry;
+
+ if (err == -EAGAIN)
+ goto retry;
+ goto errout;
}
}
goto out;
}
+static struct lock_class_key cpuctx_mutex;
+
int perf_pmu_register(struct pmu *pmu, char *name, int type)
{
int cpu, ret;
cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
__perf_event_init_context(&cpuctx->ctx);
+ lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex);
cpuctx->ctx.type = cpu_context;
cpuctx->ctx.pmu = pmu;
cpuctx->jiffies_interval = 1;
* scheduled, so we are now safe from rescheduling changing
* our context.
*/
- child_ctx = child->perf_event_ctxp[ctxn];
+ child_ctx = rcu_dereference_raw(child->perf_event_ctxp[ctxn]);
task_ctx_sched_out(child_ctx, EVENT_ALL);
/*
unsigned long flags;
int ret = 0;
- child->perf_event_ctxp[ctxn] = NULL;
-
- mutex_init(&child->perf_event_mutex);
- INIT_LIST_HEAD(&child->perf_event_list);
-
if (likely(!parent->perf_event_ctxp[ctxn]))
return 0;
{
int ctxn, ret;
+ memset(child->perf_event_ctxp, 0, sizeof(child->perf_event_ctxp));
+ mutex_init(&child->perf_event_mutex);
+ INIT_LIST_HEAD(&child->perf_event_list);
+
for_each_task_context_nr(ctxn) {
ret = perf_event_init_context(child, ctxn);
if (ret)
static int __init pm_start_workqueue(void)
{
- pm_wq = alloc_workqueue("pm", WQ_FREEZEABLE, 0);
+ pm_wq = alloc_workqueue("pm", WQ_FREEZABLE, 0);
return pm_wq ? 0 : -ENOMEM;
}
*/
#define TIMEOUT (20 * HZ)
-static inline int freezeable(struct task_struct * p)
+static inline int freezable(struct task_struct * p)
{
if ((p == current) ||
(p->flags & PF_NOFREEZE) ||
todo = 0;
read_lock(&tasklist_lock);
do_each_thread(g, p) {
- if (frozen(p) || !freezeable(p))
+ if (frozen(p) || !freezable(p))
continue;
if (!freeze_task(p, sig_only))
read_lock(&tasklist_lock);
do_each_thread(g, p) {
- if (!freezeable(p))
+ if (!freezable(p))
continue;
if (nosig_only && should_send_signal(p))
swsusp_alloc(struct memory_bitmap *orig_bm, struct memory_bitmap *copy_bm,
unsigned int nr_pages, unsigned int nr_highmem)
{
- int error = 0;
-
if (nr_highmem > 0) {
- error = get_highmem_buffer(PG_ANY);
- if (error)
+ if (get_highmem_buffer(PG_ANY))
goto err_out;
if (nr_highmem > alloc_highmem) {
nr_highmem -= alloc_highmem;
err_out:
swsusp_free();
- return error;
+ return -ENOMEM;
}
asmlinkage int swsusp_save(void)
/*
* logbuf_lock protects log_buf, log_start, log_end, con_start and logged_chars
* It is also used in interesting ways to provide interlocking in
- * release_console_sem().
+ * console_unlock().
*/
static DEFINE_SPINLOCK(logbuf_lock);
int dmesg_restrict;
#endif
+static int syslog_action_restricted(int type)
+{
+ if (dmesg_restrict)
+ return 1;
+ /* Unless restricted, we allow "read all" and "get buffer size" for everybody */
+ return type != SYSLOG_ACTION_READ_ALL && type != SYSLOG_ACTION_SIZE_BUFFER;
+}
+
+static int check_syslog_permissions(int type, bool from_file)
+{
+ /*
+ * If this is from /proc/kmsg and we've already opened it, then we've
+ * already done the capabilities checks at open time.
+ */
+ if (from_file && type != SYSLOG_ACTION_OPEN)
+ return 0;
+
+ if (syslog_action_restricted(type)) {
+ if (capable(CAP_SYSLOG))
+ return 0;
+ /* For historical reasons, accept CAP_SYS_ADMIN too, with a warning */
+ if (capable(CAP_SYS_ADMIN)) {
+ WARN_ONCE(1, "Attempt to access syslog with CAP_SYS_ADMIN "
+ "but no CAP_SYSLOG (deprecated).\n");
+ return 0;
+ }
+ return -EPERM;
+ }
+ return 0;
+}
+
int do_syslog(int type, char __user *buf, int len, bool from_file)
{
unsigned i, j, limit, count;
int do_clear = 0;
char c;
- int error = 0;
+ int error;
- /*
- * If this is from /proc/kmsg we only do the capabilities checks
- * at open time.
- */
- if (type == SYSLOG_ACTION_OPEN || !from_file) {
- if (dmesg_restrict && !capable(CAP_SYSLOG))
- goto warn; /* switch to return -EPERM after 2.6.39 */
- if ((type != SYSLOG_ACTION_READ_ALL &&
- type != SYSLOG_ACTION_SIZE_BUFFER) &&
- !capable(CAP_SYSLOG))
- goto warn; /* switch to return -EPERM after 2.6.39 */
- }
+ error = check_syslog_permissions(type, from_file);
+ if (error)
+ goto out;
error = security_syslog(type);
if (error)
}
out:
return error;
-warn:
- /* remove after 2.6.39 */
- if (capable(CAP_SYS_ADMIN))
- WARN_ONCE(1, "Attempt to access syslog with CAP_SYS_ADMIN "
- "but no CAP_SYSLOG (deprecated and denied).\n");
- return -EPERM;
}
SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len)
/*
* Call the console drivers, asking them to write out
* log_buf[start] to log_buf[end - 1].
- * The console_sem must be held.
+ * The console_lock must be held.
*/
static void call_console_drivers(unsigned start, unsigned end)
{
*
* This is printk(). It can be called from any context. We want it to work.
*
- * We try to grab the console_sem. If we succeed, it's easy - we log the output and
+ * We try to grab the console_lock. If we succeed, it's easy - we log the output and
* call the console drivers. If we fail to get the semaphore we place the output
* into the log buffer and return. The current holder of the console_sem will
- * notice the new output in release_console_sem() and will send it to the
- * consoles before releasing the semaphore.
+ * notice the new output in console_unlock() and will send it to the
+ * consoles before releasing the lock.
*
* One effect of this deferred printing is that code which calls printk() and
* then changes console_loglevel may break. This is because console_loglevel
/*
* Try to get console ownership to actually show the kernel
* messages from a 'printk'. Return true (and with the
- * console_semaphore held, and 'console_locked' set) if it
+ * console_lock held, and 'console_locked' set) if it
* is successful, false otherwise.
*
* This gets called with the 'logbuf_lock' spinlock held and
 * interrupts disabled. It should return with 'logbuf_lock'
* released but interrupts still disabled.
*/
-static int acquire_console_semaphore_for_printk(unsigned int cpu)
+static int console_trylock_for_printk(unsigned int cpu)
__releases(&logbuf_lock)
{
int retval = 0;
- if (!try_acquire_console_sem()) {
+ if (console_trylock()) {
retval = 1;
/*
* actual magic (print out buffers, wake up klogd,
* etc).
*
- * The acquire_console_semaphore_for_printk() function
+ * The console_trylock_for_printk() function
* will release 'logbuf_lock' regardless of whether it
* actually gets the semaphore or not.
*/
- if (acquire_console_semaphore_for_printk(this_cpu))
- release_console_sem();
+ if (console_trylock_for_printk(this_cpu))
+ console_unlock();
lockdep_on();
out_restore_irqs:
if (!console_suspend_enabled)
return;
printk("Suspending console(s) (use no_console_suspend to debug)\n");
- acquire_console_sem();
+ console_lock();
console_suspended = 1;
up(&console_sem);
}
return;
down(&console_sem);
console_suspended = 0;
- release_console_sem();
+ console_unlock();
}
/**
case CPU_DYING:
case CPU_DOWN_FAILED:
case CPU_UP_CANCELED:
- acquire_console_sem();
- release_console_sem();
+ console_lock();
+ console_unlock();
}
return NOTIFY_OK;
}
/**
- * acquire_console_sem - lock the console system for exclusive use.
+ * console_lock - lock the console system for exclusive use.
*
- * Acquires a semaphore which guarantees that the caller has
+ * Acquires a lock which guarantees that the caller has
* exclusive access to the console system and the console_drivers list.
*
* Can sleep, returns nothing.
*/
-void acquire_console_sem(void)
+void console_lock(void)
{
BUG_ON(in_interrupt());
down(&console_sem);
console_locked = 1;
console_may_schedule = 1;
}
-EXPORT_SYMBOL(acquire_console_sem);
+EXPORT_SYMBOL(console_lock);
-int try_acquire_console_sem(void)
+/**
+ * console_trylock - try to lock the console system for exclusive use.
+ *
+ * Tries to acquire a lock which guarantees that the caller has
+ * exclusive access to the console system and the console_drivers list.
+ *
+ * returns 1 on success, and 0 on failure to acquire the lock.
+ */
+int console_trylock(void)
{
if (down_trylock(&console_sem))
- return -1;
+ return 0;
if (console_suspended) {
up(&console_sem);
- return -1;
+ return 0;
}
console_locked = 1;
console_may_schedule = 0;
- return 0;
+ return 1;
}
-EXPORT_SYMBOL(try_acquire_console_sem);
+EXPORT_SYMBOL(console_trylock);
int is_console_locked(void)
{
}
/**
- * release_console_sem - unlock the console system
+ * console_unlock - unlock the console system
*
- * Releases the semaphore which the caller holds on the console system
+ * Releases the console_lock which the caller holds on the console system
* and the console driver list.
*
- * While the semaphore was held, console output may have been buffered
- * by printk(). If this is the case, release_console_sem() emits
- * the output prior to releasing the semaphore.
+ * While the console_lock was held, console output may have been buffered
+ * by printk(). If this is the case, console_unlock() emits
+ * the output prior to releasing the lock.
*
* If there is output waiting for klogd, we wake it up.
*
- * release_console_sem() may be called from any context.
+ * console_unlock() may be called from any context.
*/
-void release_console_sem(void)
+void console_unlock(void)
{
unsigned long flags;
unsigned _con_start, _log_end;
if (wake_klogd)
wake_up_klogd();
}
-EXPORT_SYMBOL(release_console_sem);
+EXPORT_SYMBOL(console_unlock);
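A hedged sketch of the renamed console-locking API in use; the caller context is illustrative:

#include <linux/types.h>
#include <linux/console.h>

static void example_touch_consoles(void)
{
	console_lock();
	/* the console_drivers list cannot change here */
	console_unlock();	/* also flushes anything printk() buffered meanwhile */
}

static bool example_try(void)
{
	if (!console_trylock())		/* now returns 1 on success, 0 on failure */
		return false;
	console_unlock();
	return true;
}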
/**
* console_conditional_schedule - yield the CPU if required
* if this CPU should yield the CPU to another task, do
* so here.
*
- * Must be called within acquire_console_sem().
+ * Must be called within console_lock().
*/
void __sched console_conditional_schedule(void)
{
if (down_trylock(&console_sem) != 0)
return;
} else
- acquire_console_sem();
+ console_lock();
console_locked = 1;
console_may_schedule = 0;
for_each_console(c)
if ((c->flags & CON_ENABLED) && c->unblank)
c->unblank();
- release_console_sem();
+ console_unlock();
}
/*
struct console *c;
struct tty_driver *driver = NULL;
- acquire_console_sem();
+ console_lock();
for_each_console(c) {
if (!c->device)
continue;
if (driver)
break;
}
- release_console_sem();
+ console_unlock();
return driver;
}
*/
void console_stop(struct console *console)
{
- acquire_console_sem();
+ console_lock();
console->flags &= ~CON_ENABLED;
- release_console_sem();
+ console_unlock();
}
EXPORT_SYMBOL(console_stop);
void console_start(struct console *console)
{
- acquire_console_sem();
+ console_lock();
console->flags |= CON_ENABLED;
- release_console_sem();
+ console_unlock();
}
EXPORT_SYMBOL(console_start);
* Put this console in the list - keep the
* preferred driver at the head of the list.
*/
- acquire_console_sem();
+ console_lock();
if ((newcon->flags & CON_CONSDEV) || console_drivers == NULL) {
newcon->next = console_drivers;
console_drivers = newcon;
}
if (newcon->flags & CON_PRINTBUFFER) {
/*
- * release_console_sem() will print out the buffered messages
+ * console_unlock() will print out the buffered messages
* for us.
*/
spin_lock_irqsave(&logbuf_lock, flags);
con_start = log_start;
spin_unlock_irqrestore(&logbuf_lock, flags);
}
- release_console_sem();
+ console_unlock();
console_sysfs_notify();
/*
return braille_unregister_console(console);
#endif
- acquire_console_sem();
+ console_lock();
if (console_drivers == console) {
console_drivers=console->next;
res = 0;
if (console_drivers != NULL && console->flags & CON_CONSDEV)
console_drivers->flags |= CON_CONSDEV;
- release_console_sem();
+ console_unlock();
console_sysfs_notify();
return res;
}
child->exit_code = data;
dead = __ptrace_detach(current, child);
if (!child->exit_state)
- wake_up_process(child);
+ wake_up_state(child, TASK_TRACED | TASK_STOPPED);
}
write_unlock_irq(&tasklist_lock);
/* try_to_wake_up() stats */
unsigned int ttwu_count;
unsigned int ttwu_local;
-
- /* BKL stats */
- unsigned int bkl_count;
#endif
};
struct task_group *tg;
struct cgroup_subsys_state *css;
+ if (p->flags & PF_EXITING)
+ return &root_task_group;
+
css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
lockdep_is_held(&task_rq(p)->lock));
tg = container_of(css, struct task_group, css);
schedstat_inc(this_rq(), sched_count);
#ifdef CONFIG_SCHEDSTATS
if (unlikely(prev->lock_depth >= 0)) {
- schedstat_inc(this_rq(), bkl_count);
+ schedstat_inc(this_rq(), rq_sched_info.bkl_count);
schedstat_inc(prev, sched_info.bkl_count);
}
#endif
* assigned.
*/
if (rt_bandwidth_enabled() && rt_policy(policy) &&
- task_group(p)->rt_bandwidth.rt_runtime == 0) {
+ task_group(p)->rt_bandwidth.rt_runtime == 0 &&
+ !task_group_is_autogroup(task_group(p))) {
__task_rq_unlock(rq);
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
return -EPERM;
}
}
+static void
+cpu_cgroup_exit(struct cgroup_subsys *ss, struct task_struct *task)
+{
+ /*
+ * cgroup_exit() is called in the copy_process() failure path.
+ * Ignore this case since the task hasn't ran yet, this avoids
+ * trying to poke a half freed task state from generic code.
+ */
+ if (!(task->flags & PF_EXITING))
+ return;
+
+ sched_move_task(task);
+}
+
#ifdef CONFIG_FAIR_GROUP_SCHED
static int cpu_shares_write_u64(struct cgroup *cgrp, struct cftype *cftype,
u64 shareval)
.destroy = cpu_cgroup_destroy,
.can_attach = cpu_cgroup_can_attach,
.attach = cpu_cgroup_attach,
+ .exit = cpu_cgroup_exit,
.populate = cpu_cgroup_populate,
.subsys_id = cpu_cgroup_subsys_id,
.early_init = 1,
{
struct autogroup *ag = container_of(kref, struct autogroup, kref);
+#ifdef CONFIG_RT_GROUP_SCHED
+ /* We've redirected RT tasks to the root task group... */
+ ag->tg->rt_se = NULL;
+ ag->tg->rt_rq = NULL;
+#endif
sched_destroy_group(ag->tg);
}
return ag;
}
+#ifdef CONFIG_RT_GROUP_SCHED
+static void free_rt_sched_group(struct task_group *tg);
+#endif
+
static inline struct autogroup *autogroup_create(void)
{
struct autogroup *ag = kzalloc(sizeof(*ag), GFP_KERNEL);
init_rwsem(&ag->lock);
ag->id = atomic_inc_return(&autogroup_seq_nr);
ag->tg = tg;
+#ifdef CONFIG_RT_GROUP_SCHED
+ /*
+ * Autogroup RT tasks are redirected to the root task group
+ * so we don't have to move tasks around upon policy change,
+ * or flail around trying to allocate bandwidth on the fly.
+ * A bandwidth exception in __sched_setscheduler() allows
+ * the policy change to proceed. Thereafter, task_group()
+ * returns &root_task_group, so zero bandwidth is required.
+ */
+ free_rt_sched_group(tg);
+ tg->rt_se = root_task_group.rt_se;
+ tg->rt_rq = root_task_group.rt_rq;
+#endif
tg->autogroup = ag;
return ag;
return true;
}
+static inline bool task_group_is_autogroup(struct task_group *tg)
+{
+ return tg != &root_task_group && tg->autogroup;
+}
+
static inline struct task_group *
autogroup_task_group(struct task_struct *p, struct task_group *tg)
{
#ifdef CONFIG_SCHED_DEBUG
static inline int autogroup_path(struct task_group *tg, char *buf, int buflen)
{
+ int enabled = ACCESS_ONCE(sysctl_sched_autogroup_enabled);
+
+ if (!enabled || !tg->autogroup)
+ return 0;
+
return snprintf(buf, buflen, "%s-%ld", "/autogroup", tg->autogroup->id);
}
#endif /* CONFIG_SCHED_DEBUG */
static inline void autogroup_init(struct task_struct *init_task) { }
static inline void autogroup_free(struct task_group *tg) { }
+static inline bool task_group_is_autogroup(struct task_group *tg)
+{
+ return 0;
+}
static inline struct task_group *
autogroup_task_group(struct task_struct *p, struct task_group *tg)
#include <linux/kallsyms.h>
#include <linux/utsname.h>
+static DEFINE_SPINLOCK(sched_debug_lock);
+
/*
* This allows printing both to /proc/sched_debug and
* to the console
}
#endif
+#ifdef CONFIG_CGROUP_SCHED
+static char group_path[PATH_MAX];
+
+static char *task_group_path(struct task_group *tg)
+{
+ if (autogroup_path(tg, group_path, PATH_MAX))
+ return group_path;
+
+ /*
+ * May be NULL if the underlying cgroup isn't fully-created yet
+ */
+ if (!tg->css.cgroup) {
+ group_path[0] = '\0';
+ return group_path;
+ }
+ cgroup_path(tg->css.cgroup, group_path, PATH_MAX);
+ return group_path;
+}
+#endif
+
static void
print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
{
SEQ_printf(m, "%15Ld %15Ld %15Ld.%06ld %15Ld.%06ld %15Ld.%06ld",
0LL, 0LL, 0LL, 0L, 0LL, 0L, 0LL, 0L);
#endif
+#ifdef CONFIG_CGROUP_SCHED
+ SEQ_printf(m, " %s", task_group_path(task_group(p)));
+#endif
SEQ_printf(m, "\n");
}
struct sched_entity *last;
unsigned long flags;
+#ifdef CONFIG_FAIR_GROUP_SCHED
+ SEQ_printf(m, "\ncfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
+#else
SEQ_printf(m, "\ncfs_rq[%d]:\n", cpu);
+#endif
SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "exec_clock",
SPLIT_NS(cfs_rq->exec_clock));
void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
{
+#ifdef CONFIG_RT_GROUP_SCHED
+ SEQ_printf(m, "\nrt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
+#else
SEQ_printf(m, "\nrt_rq[%d]:\n", cpu);
+#endif
#define P(x) \
SEQ_printf(m, " .%-30s: %Ld\n", #x, (long long)(rt_rq->x))
static void print_cpu(struct seq_file *m, int cpu)
{
struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
#ifdef CONFIG_X86
{
P(ttwu_count);
P(ttwu_local);
- P(bkl_count);
+ SEQ_printf(m, " .%-30s: %d\n", "bkl_count",
+ rq->rq_sched_info.bkl_count);
#undef P
+#undef P64
#endif
+ spin_lock_irqsave(&sched_debug_lock, flags);
print_cfs_stats(m, cpu);
print_rt_stats(m, cpu);
+ rcu_read_lock();
print_rq(m, rq, cpu);
+ rcu_read_unlock();
+ spin_unlock_irqrestore(&sched_debug_lock, flags);
}
static const char *sched_tunable_scaling_names[] = {
cfs_rq->nr_running--;
}
-#if defined CONFIG_SMP && defined CONFIG_FAIR_GROUP_SCHED
+#ifdef CONFIG_FAIR_GROUP_SCHED
+# ifdef CONFIG_SMP
static void update_cfs_rq_load_contribution(struct cfs_rq *cfs_rq,
int global_update)
{
u64 now, delta;
unsigned long load = cfs_rq->load.weight;
- if (!cfs_rq)
+ if (cfs_rq->tg == &root_task_group)
return;
- now = rq_of(cfs_rq)->clock;
+ now = rq_of(cfs_rq)->clock_task;
delta = now - cfs_rq->load_stamp;
/* truncate load history at 4 idle periods */
list_del_leaf_cfs_rq(cfs_rq);
}
+static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg,
+ long weight_delta)
+{
+ long load_weight, load, shares;
+
+ load = cfs_rq->load.weight + weight_delta;
+
+ load_weight = atomic_read(&tg->load_weight);
+ load_weight -= cfs_rq->load_contribution;
+ load_weight += load;
+
+ shares = (tg->shares * load);
+ if (load_weight)
+ shares /= load_weight;
+
+ if (shares < MIN_SHARES)
+ shares = MIN_SHARES;
+ if (shares > tg->shares)
+ shares = tg->shares;
+
+ return shares;
+}
+
+static void update_entity_shares_tick(struct cfs_rq *cfs_rq)
+{
+ if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
+ update_cfs_load(cfs_rq, 0);
+ update_cfs_shares(cfs_rq, 0);
+ }
+}
+# else /* CONFIG_SMP */
+static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
+{
+}
+
+static inline long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg,
+ long weight_delta)
+{
+ return tg->shares;
+}
+
+static inline void update_entity_shares_tick(struct cfs_rq *cfs_rq)
+{
+}
+# endif /* CONFIG_SMP */
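For reference, a worked pass through the calc_cfs_shares() formula factored out above, using illustrative numbers:

/*
 *   tg->shares                  = 1024
 *   cfs_rq->load.weight         = 2048, weight_delta = 0  ->  load = 2048
 *   tg->load_weight (all cpus)  = 8192, this cfs_rq's contribution = 2048
 *   load_weight                 = 8192 - 2048 + 2048 = 8192
 *   shares                      = 1024 * 2048 / 8192 = 256
 *
 * 256 lies within [MIN_SHARES, tg->shares], so no clamping is needed.
 */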
static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
unsigned long weight)
{
{
struct task_group *tg;
struct sched_entity *se;
- long load_weight, load, shares;
-
- if (!cfs_rq)
- return;
+ long shares;
tg = cfs_rq->tg;
se = tg->se[cpu_of(rq_of(cfs_rq))];
if (!se)
return;
-
- load = cfs_rq->load.weight + weight_delta;
-
- load_weight = atomic_read(&tg->load_weight);
- load_weight -= cfs_rq->load_contribution;
- load_weight += load;
-
- shares = (tg->shares * load);
- if (load_weight)
- shares /= load_weight;
-
- if (shares < MIN_SHARES)
- shares = MIN_SHARES;
- if (shares > tg->shares)
- shares = tg->shares;
+#ifndef CONFIG_SMP
+ if (likely(se->load.weight == tg->shares))
+ return;
+#endif
+ shares = calc_cfs_shares(cfs_rq, tg, weight_delta);
reweight_entity(cfs_rq_of(se), se, shares);
}
-
-static void update_entity_shares_tick(struct cfs_rq *cfs_rq)
-{
- if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
- update_cfs_load(cfs_rq, 0);
- update_cfs_shares(cfs_rq, 0);
- }
-}
#else /* CONFIG_FAIR_GROUP_SCHED */
static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
{
struct sched_entity *se = __pick_next_entity(cfs_rq);
s64 delta = curr->vruntime - se->vruntime;
+ if (delta < 0)
+ return;
+
if (delta > ideal_runtime)
resched_task(rq_of(cfs_rq)->curr);
}
return wl;
for_each_sched_entity(se) {
- long S, rw, s, a, b;
+ long lw, w;
- S = se->my_q->tg->shares;
- s = se->load.weight;
- rw = se->my_q->load.weight;
+ tg = se->my_q->tg;
+ w = se->my_q->load.weight;
- a = S*(rw + wl);
- b = S*rw + s*wg;
+ /* use this cpu's instantaneous contribution */
+ lw = atomic_read(&tg->load_weight);
+ lw -= se->my_q->load_contribution;
+ lw += w + wg;
- wl = s*(a-b);
+ wl += w;
- if (likely(b))
- wl /= b;
+ if (lw > 0 && wl < lw)
+ wl = (wl * tg->shares) / lw;
+ else
+ wl = tg->shares;
- /*
- * Assume the group is already running and will
- * thus already be accounted for in the weight.
- *
- * That is, moving shares between CPUs, does not
- * alter the group weight.
- */
+ /* zero point is MIN_SHARES */
+ if (wl < MIN_SHARES)
+ wl = MIN_SHARES;
+ wl -= se->load.weight;
wg = 0;
}
static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
{
- unsigned long this_load, load;
+ s64 this_load, load;
int idx, this_cpu, prev_cpu;
unsigned long tl_per_task;
struct task_group *tg;
* Otherwise check if either cpus are near enough in load to allow this
* task to be woken on this_cpu.
*/
- if (this_load) {
- unsigned long this_eff_load, prev_eff_load;
+ if (this_load > 0) {
+ s64 this_eff_load, prev_eff_load;
this_eff_load = 100;
this_eff_load *= power_of(prev_cpu);
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
u64 delta_exec;
- if (!task_has_rt_policy(curr))
+ if (curr->sched_class != &rt_sched_class)
return;
delta_exec = rq->clock_task - curr->se.exec_start;
*/
list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
int refs;
+ void (*func) (void *info);
- if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
+ /*
+ * Since we walk the list without any locks, we might
+ * see an entry that was completed, removed from the
+ * list and is in the process of being reused.
+ *
+ * We must check that the cpu is in the cpumask before
+ * checking the refs, and both must be set before
+ * executing the callback on this cpu.
+ */
+
+ if (!cpumask_test_cpu(cpu, data->cpumask))
+ continue;
+
+ smp_rmb();
+
+ if (atomic_read(&data->refs) == 0)
continue;
+ func = data->csd.func; /* for later warn */
data->csd.func(data->csd.info);
+ /*
+ * If the cpu mask is not still set then the callback enabled interrupts,
+ * we took another smp interrupt, and executed the function
+ * twice on this cpu. In theory that copy decremented refs.
+ */
+ if (!cpumask_test_and_clear_cpu(cpu, data->cpumask)) {
+ WARN(1, "%pS enabled interrupts and double executed\n",
+ func);
+ continue;
+ }
+
refs = atomic_dec_return(&data->refs);
WARN_ON(refs < 0);
- if (!refs) {
- raw_spin_lock(&call_function.lock);
- list_del_rcu(&data->csd.list);
- raw_spin_unlock(&call_function.lock);
- }
if (refs)
continue;
+ WARN_ON(!cpumask_empty(data->cpumask));
+
+ raw_spin_lock(&call_function.lock);
+ list_del_rcu(&data->csd.list);
+ raw_spin_unlock(&call_function.lock);
+
csd_unlock(&data->csd);
}
* can't happen.
*/
WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
- && !oops_in_progress);
+ && !oops_in_progress && !early_boot_irqs_disabled);
/* So, what's a CPU they want? Ignoring this one. */
cpu = cpumask_first_and(mask, cpu_online_mask);
data = &__get_cpu_var(cfd_data);
csd_lock(&data->csd);
+ BUG_ON(atomic_read(&data->refs) || !cpumask_empty(data->cpumask));
data->csd.func = func;
data->csd.info = info;
cpumask_and(data->cpumask, mask, cpu_online_mask);
cpumask_clear_cpu(this_cpu, data->cpumask);
+
+ /*
+ * To ensure the interrupt handler gets a complete view
+ * we order the cpumask and refs writes and order the read
+ * of them in the interrupt handler. In addition we may
+ * only clear our own cpu bit from the mask.
+ */
+ smp_wmb();
+
atomic_set(&data->refs, cpumask_weight(data->cpumask));
raw_spin_lock_irqsave(&call_function.lock, flags);
#endif /* USE_GENERIC_SMP_HELPERS */
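For illustration, a generic sketch of smp_wmb()/smp_rmb() publish-and-consume pairing; the variables are stand-ins, and the actual ordering of cpumask versus refs in the hunk above is more subtle (see its comments):

#include <linux/kernel.h>
#include <asm/system.h>

static int example_payload;
static int example_ready;

static void example_writer(void)
{
	example_payload = 42;	/* fill the data first ... */
	smp_wmb();		/* ... then publish the flag */
	example_ready = 1;
}

static void example_reader(void)
{
	if (!example_ready)
		return;
	smp_rmb();		/* pairs with the writer's smp_wmb() */
	pr_info("payload = %d\n", example_payload);
}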
/*
- * Call a function on all processors
+ * Call a function on all processors. May be used during early boot while
+ * early_boot_irqs_disabled is set. Use local_irq_save()/restore() instead
+ * of local_irq_disable/enable().
*/
int on_each_cpu(void (*func) (void *info), void *info, int wait)
{
+ unsigned long flags;
int ret = 0;
preempt_disable();
ret = smp_call_function(func, info, wait);
- local_irq_disable();
+ local_irq_save(flags);
func(info);
- local_irq_enable();
+ local_irq_restore(flags);
preempt_enable();
return ret;
}
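An illustrative sketch of on_each_cpu(); the per-cpu counter and names are made up:

#include <linux/smp.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, example_hits);

static void example_bump(void *unused)
{
	/* runs on every online CPU with interrupts disabled */
	__this_cpu_inc(example_hits);
}

static void example_kick_all(void)
{
	on_each_cpu(example_bump, NULL, 1);	/* 1: wait for all CPUs */
}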
const struct cred *cred = current_cred(), *tcred;
tcred = __task_cred(task);
- if ((cred->uid != tcred->euid ||
+ if (current != task &&
+ (cred->uid != tcred->euid ||
cred->uid != tcred->suid ||
cred->uid != tcred->uid ||
cred->gid != tcred->egid ||
#endif
#ifdef CONFIG_MAGIC_SYSRQ
-static int __sysrq_enabled; /* Note: sysrq code ises it's own private copy */
+/* Note: sysrq code uses its own private copy */
+static int __sysrq_enabled = SYSRQ_DEFAULT_ENABLE;
static int sysrq_sysctl_handler(ctl_table *table, int write,
void __user *buffer, size_t *lenp,
}
local_irq_enable();
- printk(KERN_INFO "Switched to NOHz mode on CPU #%d\n",
- smp_processor_id());
+ printk(KERN_INFO "Switched to NOHz mode on CPU #%d\n", smp_processor_id());
}
/*
}
#ifdef CONFIG_NO_HZ
- if (tick_nohz_enabled)
+ if (tick_nohz_enabled) {
ts->nohz_mode = NOHZ_MODE_HIGHRES;
+ printk(KERN_INFO "Switched to NOHz mode on CPU #%d\n", smp_processor_id());
+ }
#endif
}
#endif /* HIGH_RES_TIMERS */
char symname[KSYM_NAME_LEN];
if (lookup_symbol_name((unsigned long)sym, symname) < 0)
- SEQ_printf(m, "<%p>", sym);
+ SEQ_printf(m, "<%pK>", sym);
else
SEQ_printf(m, "%s", symname);
}
static void
print_base(struct seq_file *m, struct hrtimer_clock_base *base, u64 now)
{
- SEQ_printf(m, " .base: %p\n", base);
+ SEQ_printf(m, " .base: %pK\n", base);
SEQ_printf(m, " .index: %d\n",
base->index);
SEQ_printf(m, " .resolution: %Lu nsecs\n",
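The %pK format used above prints the pointer as zeroes for unprivileged readers when kptr_restrict is set; an illustrative one-liner (the function wrapper is an assumption):

#include <linux/kernel.h>
#include <linux/hrtimer.h>

static void example_show(struct hrtimer_clock_base *base)
{
	/* hidden from unprivileged readers depending on kptr_restrict */
	pr_info("timer base at %pK\n", base);
}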
*
* Synchronization rules: Callers must prevent restarting of the timer,
* otherwise this function is meaningless. It must not be called from
- * hardirq contexts. The caller must not hold locks which would prevent
+ * interrupt contexts. The caller must not hold locks which would prevent
* completion of the timer's handler. The timer's handler must not call
* add_timer_on(). Upon exit the timer is not queued and the handler is
* not running on any CPU.
int del_timer_sync(struct timer_list *timer)
{
#ifdef CONFIG_LOCKDEP
- local_bh_disable();
+ unsigned long flags;
+
+ local_irq_save(flags);
lock_map_acquire(&timer->lockdep_map);
lock_map_release(&timer->lockdep_map);
- local_bh_enable();
+ local_irq_restore(flags);
#endif
/*
* don't use it in hardirq context, because it
!blk_tracer_enabled))
return;
+ /*
+ * If the BLK_TC_NOTIFY action mask isn't set, don't send any note
+ * message to the trace.
+ */
+ if (!(bt->act_mask & BLK_TC_NOTIFY))
+ return;
+
local_irq_save(flags);
buf = per_cpu_ptr(bt->msg_data, smp_processor_id());
va_start(args, fmt);
static void trace_module_add_events(struct module *mod)
{
struct ftrace_module_file_ops *file_ops = NULL;
- struct ftrace_event_call *call, *start, *end;
+ struct ftrace_event_call **call, **start, **end;
start = mod->trace_events;
end = mod->trace_events + mod->num_trace_events;
return;
for_each_event(call, start, end) {
- __trace_add_event_call(call, mod,
+ __trace_add_event_call(*call, mod,
&file_ops->id, &file_ops->enable,
&file_ops->filter, &file_ops->format);
}
.priority = 0,
};
-extern struct ftrace_event_call __start_ftrace_events[];
-extern struct ftrace_event_call __stop_ftrace_events[];
+extern struct ftrace_event_call *__start_ftrace_events[];
+extern struct ftrace_event_call *__stop_ftrace_events[];
static char bootup_event_buf[COMMAND_LINE_SIZE] __initdata;
static __init int event_trace_init(void)
{
- struct ftrace_event_call *call;
+ struct ftrace_event_call **call;
struct dentry *d_tracer;
struct dentry *entry;
struct dentry *d_events;
pr_warning("tracing: Failed to allocate common fields");
for_each_event(call, __start_ftrace_events, __stop_ftrace_events) {
- __trace_add_event_call(call, NULL, &ftrace_event_id_fops,
+ __trace_add_event_call(*call, NULL, &ftrace_event_id_fops,
&ftrace_enable_fops,
&ftrace_event_filter_fops,
&ftrace_event_format_fops);
.fields = LIST_HEAD_INIT(event_class_ftrace_##call.fields),\
}; \
\
-struct ftrace_event_call __used \
-__attribute__((__aligned__(4))) \
-__attribute__((section("_ftrace_events"))) event_##call = { \
+struct ftrace_event_call __used event_##call = { \
.name = #call, \
.event.type = etype, \
.class = &event_class_ftrace_##call, \
.print_fmt = print, \
}; \
+struct ftrace_event_call __used \
+__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call;
#include "trace_entries.h"
* Stubs:
*/
-void early_boot_irqs_off(void)
-{
-}
-
-void early_boot_irqs_on(void)
-{
-}
-
void trace_softirqs_on(unsigned long ip)
{
}
.raw_init = init_syscall_trace,
};
-extern unsigned long __start_syscalls_metadata[];
-extern unsigned long __stop_syscalls_metadata[];
+extern struct syscall_metadata *__start_syscalls_metadata[];
+extern struct syscall_metadata *__stop_syscalls_metadata[];
static struct syscall_metadata **syscalls_metadata;
-static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
+static __init struct syscall_metadata *
+find_syscall_meta(unsigned long syscall)
{
- struct syscall_metadata *start;
- struct syscall_metadata *stop;
+ struct syscall_metadata **start;
+ struct syscall_metadata **stop;
char str[KSYM_SYMBOL_LEN];
- start = (struct syscall_metadata *)__start_syscalls_metadata;
- stop = (struct syscall_metadata *)__stop_syscalls_metadata;
+ start = __start_syscalls_metadata;
+ stop = __stop_syscalls_metadata;
kallsyms_lookup(syscall, NULL, NULL, NULL, str);
for ( ; start < stop; start++) {
* with "SyS" instead of "sys", leading to an unwanted
* mismatch.
*/
- if (start->name && !strcmp(start->name + 3, str + 3))
- return start;
+ if ((*start)->name && !strcmp((*start)->name + 3, str + 3))
+ return *start;
}
return NULL;
}
#include <linux/sched.h>
#include <linux/jump_label.h>
-extern struct tracepoint __start___tracepoints[];
-extern struct tracepoint __stop___tracepoints[];
+extern struct tracepoint * const __start___tracepoints_ptrs[];
+extern struct tracepoint * const __stop___tracepoints_ptrs[];
/* Set to 1 to enable tracepoint debug output */
static const int tracepoint_debug;
*
* Updates the probe callback corresponding to a range of tracepoints.
*/
-void
-tracepoint_update_probe_range(struct tracepoint *begin, struct tracepoint *end)
+void tracepoint_update_probe_range(struct tracepoint * const *begin,
+ struct tracepoint * const *end)
{
- struct tracepoint *iter;
+ struct tracepoint * const *iter;
struct tracepoint_entry *mark_entry;
if (!begin)
mutex_lock(&tracepoints_mutex);
for (iter = begin; iter < end; iter++) {
- mark_entry = get_tracepoint(iter->name);
+ mark_entry = get_tracepoint((*iter)->name);
if (mark_entry) {
- set_tracepoint(&mark_entry, iter,
+ set_tracepoint(&mark_entry, *iter,
!!mark_entry->refcount);
} else {
- disable_tracepoint(iter);
+ disable_tracepoint(*iter);
}
}
mutex_unlock(&tracepoints_mutex);
static void tracepoint_update_probes(void)
{
/* Core kernel tracepoints */
- tracepoint_update_probe_range(__start___tracepoints,
- __stop___tracepoints);
+ tracepoint_update_probe_range(__start___tracepoints_ptrs,
+ __stop___tracepoints_ptrs);
/* tracepoints in modules. */
module_update_tracepoints();
}
* Will return the first tracepoint in the range if the input tracepoint is
* NULL.
*/
-int tracepoint_get_iter_range(struct tracepoint **tracepoint,
- struct tracepoint *begin, struct tracepoint *end)
+int tracepoint_get_iter_range(struct tracepoint * const **tracepoint,
+ struct tracepoint * const *begin, struct tracepoint * const *end)
{
if (!*tracepoint && begin != end) {
*tracepoint = begin;
/* Core kernel tracepoints */
if (!iter->module) {
found = tracepoint_get_iter_range(&iter->tracepoint,
- __start___tracepoints, __stop___tracepoints);
+ __start___tracepoints_ptrs,
+ __stop___tracepoints_ptrs);
if (found)
goto end;
}
switch (val) {
case MODULE_STATE_COMING:
case MODULE_STATE_GOING:
- tracepoint_update_probe_range(mod->tracepoints,
- mod->tracepoints + mod->num_tracepoints);
+ tracepoint_update_probe_range(mod->tracepoints_ptrs,
+ mod->tracepoints_ptrs + mod->num_tracepoints);
break;
}
return 0;
#include <asm/irq_regs.h>
#include <linux/perf_event.h>
-int watchdog_enabled;
+int watchdog_enabled = 1;
int __read_mostly softlockup_thresh = 60;
static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
#endif
-static int no_watchdog;
-
-
/* boot commands */
/*
* Should we panic when a soft-lockup or hard-lockup occurs:
if (!strncmp(str, "panic", 5))
hardlockup_panic = 1;
else if (!strncmp(str, "0", 1))
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nmi_watchdog=", hardlockup_panic_setup);
static int __init nowatchdog_setup(char *str)
{
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nowatchdog", nowatchdog_setup);
/* deprecated */
static int __init nosoftlockup_setup(char *str)
{
- no_watchdog = 1;
+ watchdog_enabled = 0;
return 1;
}
__setup("nosoftlockup", nosoftlockup_setup);
goto out_save;
}
- printk(KERN_ERR "NMI watchdog disabled for cpu%i: unable to create perf event: %ld\n",
- cpu, PTR_ERR(event));
+
+ /* vary the KERN level based on the returned errno */
+ if (PTR_ERR(event) == -EOPNOTSUPP)
+ printk(KERN_INFO "NMI watchdog disabled (cpu%i): not supported (no LAPIC?)\n", cpu);
+ else if (PTR_ERR(event) == -ENOENT)
+ printk(KERN_WARNING "NMI watchdog disabled (cpu%i): hardware events not enabled\n", cpu);
+ else
+ printk(KERN_ERR "NMI watchdog disabled (cpu%i): unable to create perf event: %ld\n", cpu, PTR_ERR(event));
return PTR_ERR(event);
/* success path */
wake_up_process(p);
}
- /* if any cpu succeeds, watchdog is considered enabled for the system */
- watchdog_enabled = 1;
-
return 0;
}
static void watchdog_enable_all_cpus(void)
{
int cpu;
- int result = 0;
+
+ watchdog_enabled = 0;
for_each_online_cpu(cpu)
- result += watchdog_enable(cpu);
+ if (!watchdog_enable(cpu))
+ /* if any cpu succeeds, watchdog is considered
+ enabled for the system */
+ watchdog_enabled = 1;
- if (result)
+ if (!watchdog_enabled)
printk(KERN_ERR "watchdog: failed to be enabled on some cpus\n");
}
{
int cpu;
- if (no_watchdog)
- return;
-
for_each_online_cpu(cpu)
watchdog_disable(cpu);
{
proc_dointvec(table, write, buffer, length, ppos);
- if (watchdog_enabled)
- watchdog_enable_all_cpus();
- else
- watchdog_disable_all_cpus();
+ if (write) {
+ if (watchdog_enabled)
+ watchdog_enable_all_cpus();
+ else
+ watchdog_disable_all_cpus();
+ }
return 0;
}
break;
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
- err = watchdog_enable(hotcpu);
+ if (watchdog_enabled)
+ err = watchdog_enable(hotcpu);
break;
#ifdef CONFIG_HOTPLUG_CPU
case CPU_UP_CANCELED:
void *cpu = (void *)(long)smp_processor_id();
int err;
- if (no_watchdog)
- return;
-
err = cpu_callback(&cpu_nfb, CPU_UP_PREPARE, cpu);
WARN_ON(notifier_to_errno(err));
MAX_IDLE_WORKERS_RATIO = 4, /* 1/4 of busy can be idle */
IDLE_WORKER_TIMEOUT = 300 * HZ, /* keep idle ones for 5 mins */
- MAYDAY_INITIAL_TIMEOUT = HZ / 100, /* call for help after 10ms */
+ MAYDAY_INITIAL_TIMEOUT = HZ / 100 >= 2 ? HZ / 100 : 2,
+ /* call for help after 10ms
+ (min two ticks) */
MAYDAY_INTERVAL = HZ / 10, /* and then every 100ms */
 CREATE_COOLDOWN = HZ, /* time to breathe after fail */
TRUSTEE_COOLDOWN = HZ / 10, /* for trustee draining */
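
A quick worked example of the clamp introduced above (illustrative; actual values depend on CONFIG_HZ):

/*
 * HZ = 1000: HZ / 100 = 10 ticks (~10ms), already >= 2, used as-is.
 * HZ = 100 : HZ / 100 = 1 tick, below the minimum, clamped to 2 ticks (~20ms).
 */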
worker->flags &= ~flags;
- /* if transitioning out of NOT_RUNNING, increment nr_running */
+ /*
+ * If transitioning out of NOT_RUNNING, increment nr_running. Note
+ * that the nested NOT_RUNNING is not a noop. NOT_RUNNING is a mask
+ * of multiple flags, not a single flag.
+ */
if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
if (!(worker->flags & WORKER_NOT_RUNNING))
atomic_inc(get_gcwq_nr_running(gcwq->cpu));
spin_unlock_irq(&gcwq->lock);
work_clear_pending(work);
- lock_map_acquire(&cwq->wq->lockdep_map);
+ lock_map_acquire_read(&cwq->wq->lockdep_map);
lock_map_acquire(&lockdep_map);
trace_workqueue_execute_start(work);
f(work);
move_linked_works(work, scheduled, &n);
process_scheduled_works(rescuer);
+
+ /*
+ * Leave this gcwq. If keep_working() is %true, notify a
+ * regular worker; otherwise, we end up with 0 concurrency
+ * and stalling the execution.
+ */
+ if (keep_working(gcwq))
+ wake_up_worker(gcwq);
+
spin_unlock_irq(&gcwq->lock);
}
insert_wq_barrier(cwq, barr, work, worker);
spin_unlock_irq(&gcwq->lock);
- lock_map_acquire(&cwq->wq->lockdep_map);
+ /*
+ * If @max_active is 1 or rescuer is in use, flushing another work
+ * item on the same workqueue may lead to deadlock. Make sure the
+ * flusher is not running on the same workqueue by verifying write
+ * access.
+ */
+ if (cwq->wq->saved_max_active == 1 || cwq->wq->flags & WQ_RESCUER)
+ lock_map_acquire(&cwq->wq->lockdep_map);
+ else
+ lock_map_acquire_read(&cwq->wq->lockdep_map);
lock_map_release(&cwq->wq->lockdep_map);
+
return true;
already_gone:
spin_unlock_irq(&gcwq->lock);
*/
spin_lock(&workqueue_lock);
- if (workqueue_freezing && wq->flags & WQ_FREEZEABLE)
+ if (workqueue_freezing && wq->flags & WQ_FREEZABLE)
for_each_cwq_cpu(cpu, wq)
get_cwq(cpu, wq)->max_active = 0;
spin_lock_irq(&gcwq->lock);
- if (!(wq->flags & WQ_FREEZEABLE) ||
+ if (!(wq->flags & WQ_FREEZABLE) ||
!(gcwq->flags & GCWQ_FREEZING))
get_cwq(gcwq->cpu, wq)->max_active = max_active;
* want to get it over with ASAP - spam rescuers, wake up as
* many idlers as necessary and create new ones till the
* worklist is empty. Note that if the gcwq is frozen, there
- * may be frozen works in freezeable cwqs. Don't declare
+ * may be frozen works in freezable cwqs. Don't declare
* completion while frozen.
*/
while (gcwq->nr_workers != gcwq->nr_idle ||
/**
* freeze_workqueues_begin - begin freezing workqueues
*
- * Start freezing workqueues. After this function returns, all
- * freezeable workqueues will queue new works to their frozen_works
- * list instead of gcwq->worklist.
+ * Start freezing workqueues. After this function returns, all freezable
+ * workqueues will queue new works to their frozen_works list instead of
+ * gcwq->worklist.
*
* CONTEXT:
* Grabs and releases workqueue_lock and gcwq->lock's.
list_for_each_entry(wq, &workqueues, list) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
- if (cwq && wq->flags & WQ_FREEZEABLE)
+ if (cwq && wq->flags & WQ_FREEZABLE)
cwq->max_active = 0;
}
}
/**
- * freeze_workqueues_busy - are freezeable workqueues still busy?
+ * freeze_workqueues_busy - are freezable workqueues still busy?
*
* Check whether freezing is complete. This function must be called
* between freeze_workqueues_begin() and thaw_workqueues().
* Grabs and releases workqueue_lock.
*
* RETURNS:
- * %true if some freezeable workqueues are still busy. %false if
- * freezing is complete.
+ * %true if some freezable workqueues are still busy. %false if freezing
+ * is complete.
*/
bool freeze_workqueues_busy(void)
{
list_for_each_entry(wq, &workqueues, list) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
- if (!cwq || !(wq->flags & WQ_FREEZEABLE))
+ if (!cwq || !(wq->flags & WQ_FREEZABLE))
continue;
BUG_ON(cwq->nr_active < 0);
list_for_each_entry(wq, &workqueues, list) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
- if (!cwq || !(wq->flags & WQ_FREEZEABLE))
+ if (!cwq || !(wq->flags & WQ_FREEZABLE))
continue;
/* restore max_active and repopulate worklist */
Disable for production systems.
config DEBUG_BUGVERBOSE
- bool "Verbose BUG() reporting (adds 70K)" if DEBUG_KERNEL && EMBEDDED
+ bool "Verbose BUG() reporting (adds 70K)" if DEBUG_KERNEL && EXPERT
depends on BUG
depends on ARM || AVR32 || M32R || M68K || SPARC32 || SPARC64 || \
FRV || SUPERH || GENERIC_BUG || BLACKFIN || MN10300
If unsure, say N.
config DEBUG_MEMORY_INIT
- bool "Debug memory initialisation" if EMBEDDED
- default !EMBEDDED
+ bool "Debug memory initialisation" if EXPERT
+ default !EXPERT
help
Enable this for additional checks during memory initialisation.
The sanity checks verify aspects of the VM such as the memory model
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
depends on DEBUG_KERNEL && \
- (CRIS || M68K || M68KNOMMU || FRV || UML || \
+ (CRIS || M68K || FRV || UML || \
AVR32 || SUPERH || BLACKFIN || MN10300) || \
ARCH_WANT_FRAME_POINTERS
default y if (DEBUG_INFO && UML) || ARCH_WANT_FRAME_POINTERS
}
EXPORT_SYMBOL(__list_add);
+void __list_del_entry(struct list_head *entry)
+{
+ struct list_head *prev, *next;
+
+ prev = entry->prev;
+ next = entry->next;
+
+ if (WARN(next == LIST_POISON1,
+ "list_del corruption, %p->next is LIST_POISON1 (%p)\n",
+ entry, LIST_POISON1) ||
+ WARN(prev == LIST_POISON2,
+ "list_del corruption, %p->prev is LIST_POISON2 (%p)\n",
+ entry, LIST_POISON2) ||
+ WARN(prev->next != entry,
+ "list_del corruption. prev->next should be %p, "
+ "but was %p\n", entry, prev->next) ||
+ WARN(next->prev != entry,
+ "list_del corruption. next->prev should be %p, "
+ "but was %p\n", entry, next->prev))
+ return;
+
+ __list_del(prev, next);
+}
+EXPORT_SYMBOL(__list_del_entry);
+
/**
* list_del - deletes entry from list.
* @entry: the element to delete from the list.
*/
void list_del(struct list_head *entry)
{
- WARN(entry->next == LIST_POISON1,
- "list_del corruption, next is LIST_POISON1 (%p)\n",
- LIST_POISON1);
- WARN(entry->next != LIST_POISON1 && entry->prev == LIST_POISON2,
- "list_del corruption, prev is LIST_POISON2 (%p)\n",
- LIST_POISON2);
- WARN(entry->prev->next != entry,
- "list_del corruption. prev->next should be %p, "
- "but was %p\n", entry, entry->prev->next);
- WARN(entry->next->prev != entry,
- "list_del corruption. next->prev should be %p, "
- "but was %p\n", entry, entry->next->prev);
- __list_del(entry->prev, entry->next);
+ __list_del_entry(entry);
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
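
A stand-alone sketch (editorial, not kernel code) of the double-delete detection that __list_del_entry() centralizes above: once a deleted entry's pointers are poisoned, a second delete trips the check and bails out before corrupting its neighbours. The ex_* names and the simplified poison values are invented for the example.

#include <stdio.h>

struct ex_list_head {
	struct ex_list_head *next, *prev;
};

/* simplified stand-ins for the kernel's poison pointers */
#define EX_LIST_POISON1 ((struct ex_list_head *)0x00100100)
#define EX_LIST_POISON2 ((struct ex_list_head *)0x00200200)

static void ex_list_del(struct ex_list_head *entry)
{
	/* poison left by a previous delete makes a double delete detectable */
	if (entry->next == EX_LIST_POISON1 || entry->prev == EX_LIST_POISON2) {
		fprintf(stderr, "list_del corruption on %p: already deleted\n",
			(void *)entry);
		return;		/* bail out instead of corrupting neighbours */
	}
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->next = EX_LIST_POISON1;
	entry->prev = EX_LIST_POISON2;
}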
}
}
/*
- * The iftag must have been set somewhere because otherwise
- * we would return immediated at the beginning of the function
+ * There is no need to set the root tag if no tag was set with settag
+ * within the range from *first_indexp to last_index.
*/
- root_tag_set(root, settag);
+ if (tagged > 0)
+ root_tag_set(root, settag);
*first_indexp = index;
return tagged;
rb_augment_path(node, func, data);
}
+EXPORT_SYMBOL(rb_augment_insert);
/*
* before removing the node, find the deepest node on the rebalance path
return deepest;
}
+EXPORT_SYMBOL(rb_augment_erase_begin);
/*
* after removal, update the tree to account for the removed entry
if (node)
rb_augment_path(node, func, data);
}
+EXPORT_SYMBOL(rb_augment_erase_end);
/*
* This function returns the first node (in sort order) of the tree.
*
* INTRODUCTION
*
- * The textsearch infrastructure provides text searching facitilies for
+ * The textsearch infrastructure provides text searching facilities for
* both linear and non-linear data. Individual search algorithms are
* implemented in modules and chosen by the user.
*
* to the algorithm to store persistent variables.
* (4) Core eventually resets the search offset and forwards the find()
* request to the algorithm.
- * (5) Algorithm calls get_next_block() provided by the user continously
+ * (5) Algorithm calls get_next_block() provided by the user continuously
* to fetch the data to be searched in block by block.
* (6) Algorithm invokes finish() after the last call to get_next_block
* to clean up any leftovers from get_next_block. (Optional)
* the pattern to look for and flags. As a flag, you can set TS_IGNORECASE
* to perform case insensitive matching. But it might slow down
* performance of algorithm, so you should use it at own your risk.
- * The returned configuration may then be used for an arbitary
+ * The returned configuration may then be used for an arbitrary
* amount of times and even in parallel as long as a separate struct
* ts_state variable is provided to every instance.
*
* The actual search is performed by either calling textsearch_find_-
* continuous() for linear data or by providing an own get_next_block()
* implementation and calling textsearch_find(). Both functions return
- * the position of the first occurrence of the patern or UINT_MAX if
- * no match was found. Subsequent occurences can be found by calling
+ * the position of the first occurrence of the pattern or UINT_MAX if
+ * no match was found. Subsequent occurrences can be found by calling
* textsearch_next() regardless of the linearity of the data.
*
* Once you're done using a configuration it must be given back via
CRC32 is supported. See Documentation/xz.txt for more information.
config XZ_DEC_X86
- bool "x86 BCJ filter decoder" if EMBEDDED
+ bool "x86 BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config XZ_DEC_POWERPC
- bool "PowerPC BCJ filter decoder" if EMBEDDED
+ bool "PowerPC BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config XZ_DEC_IA64
- bool "IA-64 BCJ filter decoder" if EMBEDDED
+ bool "IA-64 BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config XZ_DEC_ARM
- bool "ARM BCJ filter decoder" if EMBEDDED
+ bool "ARM BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config XZ_DEC_ARMTHUMB
- bool "ARM-Thumb BCJ filter decoder" if EMBEDDED
+ bool "ARM-Thumb BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config XZ_DEC_SPARC
- bool "SPARC BCJ filter decoder" if EMBEDDED
+ bool "SPARC BCJ filter decoder" if EXPERT
default y
depends on XZ_DEC
select XZ_DEC_BCJ
config COMPACTION
bool "Allow for memory compaction"
select MIGRATION
- depends on EXPERIMENTAL && HUGETLB_PAGE && MMU
+ depends on MMU
help
Allows the compaction of memory for the allocation of huge pages.
if (!zone_watermark_ok(zone, cc->order, watermark, 0, 0))
return COMPACT_CONTINUE;
+ /*
+ * order == -1 is expected when compacting via
+ * /proc/sys/vm/compact_memory
+ */
if (cc->order == -1)
return COMPACT_CONTINUE;
if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
return COMPACT_SKIPPED;
+ /*
+ * order == -1 is expected when compacting via
+ * /proc/sys/vm/compact_memory
+ */
+ if (order == -1)
+ return COMPACT_CONTINUE;
+
/*
* fragmentation index determines if allocation failures are due to
* low memory or external fragmentation
/* after clearing PageTail the gup refcount can be released */
smp_mb();
- page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+ /*
+ * Retain the hwpoison flag of the poisoned tail page: otherwise
+ * memory-failure may kill the wrong process on a guest machine
+ * (KVM).
+ */
+ page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON;
page_tail->flags |= (page->flags &
((1L << PG_referenced) |
(1L << PG_swapbacked) |
BUG_ON(!PageDirty(page_tail));
BUG_ON(!PageSwapBacked(page_tail));
+ mem_cgroup_split_huge_fixup(page, page_tail);
+
lru_add_page_tail(zone, page, page_tail);
}
/* VM_PFNMAP vmas may have vm_ops null but vm_file set */
if (!vma->anon_vma || vma->vm_ops || vma->vm_file)
goto out;
+ if (is_vma_temporary_stack(vma))
+ goto out;
VM_BUG_ON(is_linear_pfn_mapping(vma) || is_pfn_mapping(vma));
pgd = pgd_offset(mm, address);
spin_lock(ptl);
isolated = __collapse_huge_page_isolate(vma, address, pte);
spin_unlock(ptl);
- pte_unmap(pte);
if (unlikely(!isolated)) {
+ pte_unmap(pte);
spin_lock(&mm->page_table_lock);
BUG_ON(!pmd_none(*pmd));
set_pmd_at(mm, address, pmd, _pmd);
spin_unlock(&mm->page_table_lock);
anon_vma_unlock(vma->anon_vma);
- mem_cgroup_uncharge_page(new_page);
goto out;
}
anon_vma_unlock(vma->anon_vma);
__collapse_huge_page_copy(pte, new_page, vma, address, ptl);
+ pte_unmap(pte);
__SetPageUptodate(new_page);
pgtable = pmd_pgtable(_pmd);
VM_BUG_ON(page_count(pgtable) != 1);
return;
out:
+ mem_cgroup_uncharge_page(new_page);
#ifdef CONFIG_NUMA
put_page(new_page);
#endif
if ((!(vma->vm_flags & VM_HUGEPAGE) &&
!khugepaged_always()) ||
(vma->vm_flags & VM_NOHUGEPAGE)) {
+ skip:
progress++;
continue;
}
-
/* VM_PFNMAP vmas may have vm_ops null but vm_file set */
- if (!vma->anon_vma || vma->vm_ops || vma->vm_file) {
- khugepaged_scan.address = vma->vm_end;
- progress++;
- continue;
- }
+ if (!vma->anon_vma || vma->vm_ops || vma->vm_file)
+ goto skip;
+ if (is_vma_temporary_stack(vma))
+ goto skip;
+
VM_BUG_ON(is_linear_pfn_mapping(vma) || is_pfn_mapping(vma));
hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
hend = vma->vm_end & HPAGE_PMD_MASK;
- if (hstart >= hend) {
- progress++;
- continue;
- }
+ if (hstart >= hend)
+ goto skip;
+ if (khugepaged_scan.address > hend)
+ goto skip;
if (khugepaged_scan.address < hstart)
khugepaged_scan.address = hstart;
- if (khugepaged_scan.address > hend) {
- khugepaged_scan.address = hend + HPAGE_PMD_SIZE;
- progress++;
- continue;
- }
- BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
+ VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
while (khugepaged_scan.address < hend) {
int ret;
breakouterloop_mmap_sem:
spin_lock(&khugepaged_mm_lock);
- BUG_ON(khugepaged_scan.mm_slot != mm_slot);
+ VM_BUG_ON(khugepaged_scan.mm_slot != mm_slot);
/*
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
for (;;) {
mutex_unlock(&khugepaged_mutex);
- BUG_ON(khugepaged_thread != current);
+ VM_BUG_ON(khugepaged_thread != current);
khugepaged_loop();
- BUG_ON(khugepaged_thread != current);
+ VM_BUG_ON(khugepaged_thread != current);
mutex_lock(&khugepaged_mutex);
if (!khugepaged_enabled())
* after the module is removed.
*/
for (i = 0; i < 10; i++) {
- elem = kmalloc(sizeof(*elem), GFP_KERNEL);
- pr_info("kmemleak: kmalloc(sizeof(*elem)) = %p\n", elem);
+ elem = kzalloc(sizeof(*elem), GFP_KERNEL);
+ pr_info("kmemleak: kzalloc(sizeof(*elem)) = %p\n", elem);
if (!elem)
return -ENOMEM;
- memset(elem, 0, sizeof(*elem));
INIT_LIST_HEAD(&elem->list);
-
list_add_tail(&elem->list, &test_list);
}
#define BYTES_PER_POINTER sizeof(void *)
/* GFP bitmask for kmemleak internal allocations */
-#define GFP_KMEMLEAK_MASK (GFP_KERNEL | GFP_ATOMIC)
+#define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
+ __GFP_NORETRY | __GFP_NOMEMALLOC | \
+ __GFP_NOWARN)
/* scanning area inside a memory block */
struct kmemleak_scan_area {
struct kmemleak_object *object;
struct prio_tree_node *node;
- object = kmem_cache_alloc(object_cache, gfp & GFP_KMEMLEAK_MASK);
+ object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
if (!object) {
- kmemleak_stop("Cannot allocate a kmemleak_object structure\n");
+ pr_warning("Cannot allocate a kmemleak_object structure\n");
+ kmemleak_disable();
return NULL;
}
return;
}
- area = kmem_cache_alloc(scan_area_cache, gfp & GFP_KMEMLEAK_MASK);
+ area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
if (!area) {
- kmemleak_warn("Cannot allocate a scan area\n");
+ pr_warning("Cannot allocate a scan area\n");
goto out;
}
BUG_ON(0 == size);
- size = memblock_align_up(size, align);
-
/* Pump up max_addr */
if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
end = memblock.current_limit;
int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
{
- int idx = memblock_search(&memblock.reserved, base);
+ int idx = memblock_search(&memblock.memory, base);
if (idx == -1)
return 0;
- return memblock.reserved.regions[idx].base <= base &&
- (memblock.reserved.regions[idx].base +
- memblock.reserved.regions[idx].size) >= (base + size);
+ return memblock.memory.regions[idx].base <= base &&
+ (memblock.memory.regions[idx].base +
+ memblock.memory.regions[idx].size) >= (base + size);
}
int __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t size)
}
static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
- struct page_cgroup *pc,
- bool charge)
+ bool file, int nr_pages)
{
- int val = (charge) ? 1 : -1;
-
preempt_disable();
- if (PageCgroupCache(pc))
- __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_CACHE], val);
+ if (file)
+ __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_CACHE], nr_pages);
else
- __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_RSS], val);
+ __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_RSS], nr_pages);
- if (charge)
+ /* pagein of a big page is an event. So, ignore page size */
+ if (nr_pages > 0)
__this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGIN_COUNT]);
- else
+ else {
__this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_PGPGOUT_COUNT]);
- __this_cpu_inc(mem->stat->count[MEM_CGROUP_EVENTS]);
+ nr_pages = -nr_pages; /* for event */
+ }
+
+ __this_cpu_add(mem->stat->count[MEM_CGROUP_EVENTS], nr_pages);
preempt_enable();
}
* removed from global LRU.
*/
mz = page_cgroup_zoneinfo(pc);
- MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+ /* huge page split is done under lru_lock. so, we have no races. */
+ MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
if (mem_cgroup_is_root(pc->mem_cgroup))
return;
VM_BUG_ON(list_empty(&pc->lru));
return;
pc = lookup_page_cgroup(page);
- /*
- * Used bit is set without atomic ops but after smp_wmb().
- * For making pc->mem_cgroup visible, insert smp_rmb() here.
- */
- smp_rmb();
/* unused or root page is not rotated. */
- if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
+ if (!PageCgroupUsed(pc))
+ return;
+ /* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
+ smp_rmb();
+ if (mem_cgroup_is_root(pc->mem_cgroup))
return;
mz = page_cgroup_zoneinfo(pc);
list_move(&pc->lru, &mz->lists[lru]);
return;
pc = lookup_page_cgroup(page);
VM_BUG_ON(PageCgroupAcctLRU(pc));
- /*
- * Used bit is set without atomic ops but after smp_wmb().
- * For making pc->mem_cgroup visible, insert smp_rmb() here.
- */
- smp_rmb();
if (!PageCgroupUsed(pc))
return;
-
+ /* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
+ smp_rmb();
mz = page_cgroup_zoneinfo(pc);
- MEM_CGROUP_ZSTAT(mz, lru) += 1;
+ /* huge page split is done under lru_lock. so, we have no races. */
+ MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
SetPageCgroupAcctLRU(pc);
if (mem_cgroup_is_root(pc->mem_cgroup))
return;
return NULL;
pc = lookup_page_cgroup(page);
- /*
- * Used bit is set without atomic ops but after smp_wmb().
- * For making pc->mem_cgroup visible, insert smp_rmb() here.
- */
- smp_rmb();
if (!PageCgroupUsed(pc))
return NULL;
-
+ /* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
+ smp_rmb();
mz = page_cgroup_zoneinfo(pc);
if (!mz)
return NULL;
return false;
}
+/**
+ * mem_cgroup_check_margin - check if the memory cgroup allows charging
+ * @mem: memory cgroup to check
+ * @bytes: the number of bytes the caller intends to charge
+ *
+ * Returns a boolean value on whether @mem can be charged @bytes or
+ * whether this would exceed the limit.
+ */
+static bool mem_cgroup_check_margin(struct mem_cgroup *mem, unsigned long bytes)
+{
+ if (!res_counter_check_margin(&mem->res, bytes))
+ return false;
+ if (do_swap_account && !res_counter_check_margin(&mem->memsw, bytes))
+ return false;
+ return true;
+}
+
static unsigned int get_swappiness(struct mem_cgroup *memcg)
{
struct cgroup *cgrp = memcg->css.cgroup;
if (unlikely(!mem || !PageCgroupUsed(pc)))
goto out;
/* pc->mem_cgroup is unstable ? */
- if (unlikely(mem_cgroup_stealed(mem))) {
+ if (unlikely(mem_cgroup_stealed(mem)) || PageTransHuge(page)) {
/* take a lock against to access pc->mem_cgroup */
move_lock_page_cgroup(pc, &flags);
need_unlock = true;
if (likely(!ret))
return CHARGE_OK;
+ res_counter_uncharge(&mem->res, csize);
mem_over_limit = mem_cgroup_from_res_counter(fail_res, memsw);
flags |= MEM_CGROUP_RECLAIM_NOSWAP;
} else
mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
-
- if (csize > PAGE_SIZE) /* change csize and retry */
+ /*
+ * csize can be either a huge page (HPAGE_SIZE), a batch of
+ * regular pages (CHARGE_SIZE), or a single regular page
+ * (PAGE_SIZE).
+ *
+ * Never reclaim on behalf of optional batching, retry with a
+ * single page instead.
+ */
+ if (csize == CHARGE_SIZE)
return CHARGE_RETRY;
if (!(gfp_mask & __GFP_WAIT))
return CHARGE_WOULDBLOCK;
ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
- gfp_mask, flags);
+ gfp_mask, flags);
+ if (mem_cgroup_check_margin(mem_over_limit, csize))
+ return CHARGE_RETRY;
/*
- * try_to_free_mem_cgroup_pages() might not give us a full
- * picture of reclaim. Some pages are reclaimed and might be
- * moved to swap cache or just unmapped from the cgroup.
- * Check the limit again to see if the reclaim reduced the
- * current usage of the cgroup before giving up
+ * Even though the limit is exceeded at this point, reclaim
+ * may have been able to free some pages. Retry the charge
+ * before killing the task.
+ *
+ * Only for regular pages, though: huge pages are rather
+ * unlikely to succeed so close to the limit, and we fall back
+ * to regular pages anyway in case of failure.
*/
- if (ret || mem_cgroup_check_under_limit(mem_over_limit))
+ if (csize == PAGE_SIZE && ret)
return CHARGE_RETRY;
/*
return mem;
}
-/*
- * commit a charge got by __mem_cgroup_try_charge() and makes page_cgroup to be
- * USED state. If already USED, uncharge and return.
- */
-static void ____mem_cgroup_commit_charge(struct mem_cgroup *mem,
- struct page_cgroup *pc,
- enum charge_type ctype)
+static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
+ struct page_cgroup *pc,
+ enum charge_type ctype,
+ int page_size)
{
+ int nr_pages = page_size >> PAGE_SHIFT;
+
+ /* try_charge() can return NULL to *memcg, taking care of it. */
+ if (!mem)
+ return;
+
+ lock_page_cgroup(pc);
+ if (unlikely(PageCgroupUsed(pc))) {
+ unlock_page_cgroup(pc);
+ mem_cgroup_cancel_charge(mem, page_size);
+ return;
+ }
+ /*
+ * we don't need page_cgroup_lock for tail pages, because they are not
+ * accessed by any other context at this point.
+ */
pc->mem_cgroup = mem;
/*
* We access a page_cgroup asynchronously without lock_page_cgroup().
break;
}
- mem_cgroup_charge_statistics(mem, pc, true);
+ mem_cgroup_charge_statistics(mem, PageCgroupCache(pc), nr_pages);
+ unlock_page_cgroup(pc);
+ /*
+ * "charge_statistics" updated the event counter, so check it now and
+ * insert the ancestor (and the ancestor's ancestors) into the softlimit
+ * RB-tree if they exceed the softlimit.
+ */
+ memcg_check_events(mem, pc->page);
}
-static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
- struct page_cgroup *pc,
- enum charge_type ctype,
- int page_size)
-{
- int i;
- int count = page_size >> PAGE_SHIFT;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- /* try_charge() can return NULL to *memcg, taking care of it. */
- if (!mem)
- return;
+#define PCGF_NOCOPY_AT_SPLIT ((1 << PCG_LOCK) | (1 << PCG_MOVE_LOCK) |\
+ (1 << PCG_ACCT_LRU) | (1 << PCG_MIGRATION))
+/*
+ * Because tail pages are not marked as "used", set it. We're under
+ * zone->lru_lock, 'splitting on pmd' and compound_lock.
+ */
+void mem_cgroup_split_huge_fixup(struct page *head, struct page *tail)
+{
+ struct page_cgroup *head_pc = lookup_page_cgroup(head);
+ struct page_cgroup *tail_pc = lookup_page_cgroup(tail);
+ unsigned long flags;
- lock_page_cgroup(pc);
- if (unlikely(PageCgroupUsed(pc))) {
- unlock_page_cgroup(pc);
- mem_cgroup_cancel_charge(mem, page_size);
+ if (mem_cgroup_disabled())
return;
- }
-
/*
- * we don't need page_cgroup_lock about tail pages, becase they are not
- * accessed by any other context at this point.
+ * We have no races with charge/uncharge but will have races with
+ * page state accounting.
*/
- for (i = 0; i < count; i++)
- ____mem_cgroup_commit_charge(mem, pc + i, ctype);
+ move_lock_page_cgroup(head_pc, &flags);
- unlock_page_cgroup(pc);
- /*
- * "charge_statistics" updated event counter. Then, check it.
- * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree.
- * if they exceeds softlimit.
- */
- memcg_check_events(mem, pc->page);
+ tail_pc->mem_cgroup = head_pc->mem_cgroup;
+ smp_wmb(); /* see __commit_charge() */
+ if (PageCgroupAcctLRU(head_pc)) {
+ enum lru_list lru;
+ struct mem_cgroup_per_zone *mz;
+
+ /*
+ * LRU flags cannot be copied because we need to add the tail
+ * page to the LRU by the generic call, and our hook will be called.
+ * We hold lru_lock, so reduce the counter directly.
+ */
+ lru = page_lru(head);
+ mz = page_cgroup_zoneinfo(head_pc);
+ MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+ }
+ tail_pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
+ move_unlock_page_cgroup(head_pc, &flags);
}
+#endif
/**
* __mem_cgroup_move_account - move account of the page
*/
static void __mem_cgroup_move_account(struct page_cgroup *pc,
- struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge)
+ struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge,
+ int charge_size)
{
+ int nr_pages = charge_size >> PAGE_SHIFT;
+
VM_BUG_ON(from == to);
VM_BUG_ON(PageLRU(pc->page));
VM_BUG_ON(!page_is_cgroup_locked(pc));
__this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
preempt_enable();
}
- mem_cgroup_charge_statistics(from, pc, false);
+ mem_cgroup_charge_statistics(from, PageCgroupCache(pc), -nr_pages);
if (uncharge)
/* This is not "cancel", but cancel_charge does all we need. */
- mem_cgroup_cancel_charge(from, PAGE_SIZE);
+ mem_cgroup_cancel_charge(from, charge_size);
/* caller should have done css_get */
pc->mem_cgroup = to;
- mem_cgroup_charge_statistics(to, pc, true);
+ mem_cgroup_charge_statistics(to, PageCgroupCache(pc), nr_pages);
/*
* We charges against "to" which may not have any tasks. Then, "to"
* can be under rmdir(). But in current implementation, caller of
* __mem_cgroup_move_account()
*/
static int mem_cgroup_move_account(struct page_cgroup *pc,
- struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge)
+ struct mem_cgroup *from, struct mem_cgroup *to,
+ bool uncharge, int charge_size)
{
int ret = -EINVAL;
unsigned long flags;
+ /*
+ * The page is isolated from the LRU, so the collapse function
+ * will not handle it. But page splitting can still happen.
+ * Do this check under compound_page_lock(). The caller should
+ * hold it.
+ */
+ if ((charge_size > PAGE_SIZE) && !PageTransHuge(pc->page))
+ return -EBUSY;
lock_page_cgroup(pc);
if (PageCgroupUsed(pc) && pc->mem_cgroup == from) {
move_lock_page_cgroup(pc, &flags);
- __mem_cgroup_move_account(pc, from, to, uncharge);
+ __mem_cgroup_move_account(pc, from, to, uncharge, charge_size);
move_unlock_page_cgroup(pc, &flags);
ret = 0;
}
struct cgroup *cg = child->css.cgroup;
struct cgroup *pcg = cg->parent;
struct mem_cgroup *parent;
+ int page_size = PAGE_SIZE;
+ unsigned long flags;
int ret;
/* Is ROOT ? */
if (isolate_lru_page(page))
goto put;
+ if (PageTransHuge(page))
+ page_size = HPAGE_SIZE;
+
parent = mem_cgroup_from_cont(pcg);
- ret = __mem_cgroup_try_charge(NULL, gfp_mask, &parent, false,
- PAGE_SIZE);
+ ret = __mem_cgroup_try_charge(NULL, gfp_mask,
+ &parent, false, page_size);
if (ret || !parent)
goto put_back;
- ret = mem_cgroup_move_account(pc, child, parent, true);
+ if (page_size > PAGE_SIZE)
+ flags = compound_lock_irqsave(page);
+
+ ret = mem_cgroup_move_account(pc, child, parent, true, page_size);
if (ret)
- mem_cgroup_cancel_charge(parent, PAGE_SIZE);
+ mem_cgroup_cancel_charge(parent, page_size);
+
+ if (page_size > PAGE_SIZE)
+ compound_unlock_irqrestore(page, flags);
put_back:
putback_lru_page(page);
put:
gfp_t gfp_mask, enum charge_type ctype)
{
struct mem_cgroup *mem = NULL;
+ int page_size = PAGE_SIZE;
struct page_cgroup *pc;
+ bool oom = true;
int ret;
- int page_size = PAGE_SIZE;
if (PageTransHuge(page)) {
page_size <<= compound_order(page);
VM_BUG_ON(!PageTransHuge(page));
+ /*
+ * Never OOM-kill a process for a huge page. The
+ * fault handler will fall back to regular pages.
+ */
+ oom = false;
}
pc = lookup_page_cgroup(page);
return 0;
prefetchw(pc);
- ret = __mem_cgroup_try_charge(mm, gfp_mask, &mem, true, page_size);
+ ret = __mem_cgroup_try_charge(mm, gfp_mask, &mem, oom, page_size);
if (ret || !mem)
return ret;
static struct mem_cgroup *
__mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype)
{
- int i;
int count;
struct page_cgroup *pc;
struct mem_cgroup *mem = NULL;
break;
}
- for (i = 0; i < count; i++)
- mem_cgroup_charge_statistics(mem, pc + i, false);
+ mem_cgroup_charge_statistics(mem, PageCgroupCache(pc), -count);
ClearPageCgroupUsed(pc);
/*
goto put;
pc = lookup_page_cgroup(page);
if (!mem_cgroup_move_account(pc,
- mc.from, mc.to, false)) {
+ mc.from, mc.to, false, PAGE_SIZE)) {
mc.precharge--;
/* we uncharge from mc.from later. */
mc.moved_charge++;
static int __init enable_swap_account(char *s)
{
/* consider enabled if no parameter or 1 is given */
- if (!s || !strcmp(s, "1"))
+ if (!(*s) || !strcmp(s, "=1"))
really_do_swap_account = 1;
- else if (!strcmp(s, "0"))
+ else if (!strcmp(s, "=0"))
really_do_swap_account = 0;
return 1;
}
static int __init disable_swap_account(char *s)
{
- enable_swap_account("0");
+ printk_once("noswapaccount is deprecated and will be removed in 2.6.40. Use swapaccount=0 instead\n");
+ enable_swap_account("=0");
return 1;
}
__setup("noswapaccount", disable_swap_account);
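
A minimal user-space sketch of the parsing above, assuming the handler is registered through an early __setup-style hook that receives everything after the parameter name (including the '='): "swapaccount=0" hands it "=0", a bare "swapaccount" hands it an empty string, and the deprecated "noswapaccount" simply forwards "=0". The example_ names are invented.

#include <stdio.h>
#include <string.h>

static int example_do_swap_account = 1;

/* mirrors the comparison logic above: "" or "=1" enables, "=0" disables */
static int example_enable_swap_account(const char *s)
{
	if (!*s || !strcmp(s, "=1"))
		example_do_swap_account = 1;
	else if (!strcmp(s, "=0"))
		example_do_swap_account = 0;
	return 1;
}

int main(void)
{
	example_enable_swap_account("=0");	/* kernel command line: swapaccount=0 */
	printf("swapaccount=0  -> %d\n", example_do_swap_account);

	example_enable_swap_account("");	/* bare "swapaccount" */
	printf("swapaccount    -> %d\n", example_do_swap_account);

	example_enable_swap_account("=0");	/* what the deprecated noswapaccount forwards */
	printf("noswapaccount  -> %d\n", example_do_swap_account);
	return 0;
}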
}
/*
- * Only all shrink_slab here (which would also
- * shrink other caches) if access is not potentially fatal.
+ * Only call shrink_slab here (which would also shrink other caches) if
+ * access is not potentially fatal.
*/
if (access) {
int nr;
struct task_struct *tsk;
struct anon_vma *av;
- if (!PageHuge(page) && unlikely(split_huge_page(page)))
- return;
read_lock(&tasklist_lock);
av = page_lock_anon_vma(page);
if (av == NULL) /* Not actually mapped anymore */
int ret;
int kill = 1;
struct page *hpage = compound_head(p);
+ struct page *ppage;
if (PageReserved(p) || PageSlab(p))
return SWAP_SUCCESS;
}
}
+ /*
+ * ppage: the poisoned page to operate on.
+ * If p is a regular (4k) page, ppage == the real poisoned page;
+ * if p is hugetlb or THP, ppage == the head page.
+ */
+ ppage = hpage;
+
+ if (PageTransHuge(hpage)) {
+ /*
+ * Verify that this isn't a hugetlbfs head page; the check for
+ * PageAnon is just to avoid tripping a split_huge_page
+ * internal debug check, as split_huge_page refuses to deal with
+ * anything that isn't an anon page. PageAnon can't go away from
+ * under us because we hold a refcount on the hpage; without a
+ * refcount on the hpage, split_huge_page can't be safely called
+ * in the first place, and having a refcount on the tail isn't
+ * enough to be safe.
+ */
+ if (!PageHuge(hpage) && PageAnon(hpage)) {
+ if (unlikely(split_huge_page(hpage))) {
+ /*
+ * FIXME: if splitting THP is failed, it is
+ * better to stop the following operation rather
+ * than causing panic by unmapping. System might
+ * survive if the page is freed later.
+ */
+ printk(KERN_INFO
+ "MCE %#lx: failed to split THP\n", pfn);
+
+ BUG_ON(!PageHWPoison(p));
+ return SWAP_FAIL;
+ }
+ /* THP is split, so ppage should be the real poisoned page. */
+ ppage = p;
+ }
+ }
+
/*
* First collect all the processes that have the page
* mapped in dirty form. This has to be done before try_to_unmap,
* there's nothing that can be done.
*/
if (kill)
- collect_procs(hpage, &tokill);
+ collect_procs(ppage, &tokill);
+
+ if (hpage != ppage)
+ lock_page_nosync(ppage);
- ret = try_to_unmap(hpage, ttu);
+ ret = try_to_unmap(ppage, ttu);
if (ret != SWAP_SUCCESS)
printk(KERN_ERR "MCE %#lx: failed to unmap page (mapcount=%d)\n",
- pfn, page_mapcount(hpage));
+ pfn, page_mapcount(ppage));
+
+ if (hpage != ppage)
+ unlock_page(ppage);
/*
* Now that the dirty bit has been propagated to the
* use a more force-full uncatchable kill to prevent
* any accesses to the poisoned memory.
*/
- kill_procs_ao(&tokill, !!PageDirty(hpage), trapno,
+ kill_procs_ao(&tokill, !!PageDirty(ppage), trapno,
ret != SWAP_SUCCESS, p, pfn);
return ret;
* The check (unnecessarily) ignores LRU pages being isolated and
* walked by the page reclaim code, however that's not a big loss.
*/
- if (!PageLRU(p) && !PageHuge(p))
- shake_page(p, 0);
- if (!PageLRU(p) && !PageHuge(p)) {
- /*
- * shake_page could have turned it free.
- */
- if (is_free_buddy_page(p)) {
- action_result(pfn, "free buddy, 2nd try", DELAYED);
- return 0;
+ if (!PageHuge(p) && !PageTransCompound(p)) {
+ if (!PageLRU(p))
+ shake_page(p, 0);
+ if (!PageLRU(p)) {
+ /*
+ * shake_page could have turned it free.
+ */
+ if (is_free_buddy_page(p)) {
+ action_result(pfn, "free buddy, 2nd try",
+ DELAYED);
+ return 0;
+ }
+ action_result(pfn, "non LRU", IGNORED);
+ put_page(p);
+ return -EBUSY;
}
- action_result(pfn, "non LRU", IGNORED);
- put_page(p);
- return -EBUSY;
}
/*
* For error on the tail page, we should set PG_hwpoison
* on the head page to show that the hugepage is hwpoisoned
*/
- if (PageTail(p) && TestSetPageHWPoison(hpage)) {
+ if (PageHuge(p) && PageTail(p) && TestSetPageHWPoison(hpage)) {
action_result(pfn, "hugepage already hardware poisoned",
IGNORED);
unlock_page(hpage);
ret = migrate_huge_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL, 0,
true);
if (ret) {
- putback_lru_pages(&pagelist);
+ struct page *page1, *page2;
+ list_for_each_entry_safe(page1, page2, &pagelist, lru)
+ put_page(page1);
+
pr_debug("soft offline: %#lx: migration failed %d, type %lx\n",
pfn, ret, page->flags);
if (ret > 0)
ret = migrate_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL,
0, true);
if (ret) {
+ putback_lru_pages(&pagelist);
pr_info("soft offline: %#lx: migration failed %d, type %lx\n",
pfn, ret, page->flags);
if (ret > 0)
&ptl);
if (!pte_same(*page_table, orig_pte)) {
unlock_page(old_page);
- page_cache_release(old_page);
goto unlock;
}
page_cache_release(old_page);
&ptl);
if (!pte_same(*page_table, orig_pte)) {
unlock_page(old_page);
- page_cache_release(old_page);
goto unlock;
}
}
__SetPageUptodate(new_page);
- /*
- * Don't let another task, with possibly unlocked vma,
- * keep the mlocked page.
- */
- if ((vma->vm_flags & VM_LOCKED) && old_page) {
- lock_page(old_page); /* for LRU manipulation */
- clear_page_mlock(old_page);
- unlock_page(old_page);
- }
-
if (mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))
goto oom_free_new;
if (new_page)
page_cache_release(new_page);
- if (old_page)
- page_cache_release(old_page);
unlock:
pte_unmap_unlock(page_table, ptl);
+ if (old_page) {
+ /*
+ * Don't let another task, with possibly unlocked vma,
+ * keep the mlocked page.
+ */
+ if ((ret & VM_FAULT_WRITE) && (vma->vm_flags & VM_LOCKED)) {
+ lock_page(old_page); /* LRU manipulation */
+ munlock_vma_page(old_page);
+ unlock_page(old_page);
+ }
+ page_cache_release(old_page);
+ }
return ret;
oom_free_new:
page_cache_release(new_page);
goto out;
}
charged = 1;
- /*
- * Don't let another task, with possibly unlocked vma,
- * keep the mlocked page.
- */
- if (vma->vm_flags & VM_LOCKED)
- clear_page_mlock(vmf.page);
copy_user_highpage(page, vmf.page, address, vma);
__SetPageUptodate(page);
} else {
unlock:
unlock_page(page);
+move_newpage:
if (rc != -EAGAIN) {
/*
* A page that has been migrated has all references
putback_lru_page(page);
}
-move_newpage:
-
/*
* Move the new page to the LRU. If migration was not successful
* then this will free the page.
* are movable anymore because to has become empty
* or no retryable pages exist anymore.
* Caller should call putback_lru_pages to return pages to the LRU
- * or free list.
+ * or free list only if ret != 0.
*
* Return: Number of pages not migrated or error code.
*/
}
rc = 0;
out:
-
- list_for_each_entry_safe(page, page2, from, lru)
- put_page(page);
-
if (rc)
return rc;
if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
gup_flags |= FOLL_WRITE;
+ /*
+ * We want mlock to succeed for regions that have any permissions
+ * other than PROT_NONE.
+ */
+ if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
+ gup_flags |= FOLL_FORCE;
+
if (vma->vm_flags & VM_LOCKED)
gup_flags |= FOLL_MLOCK;
pset = per_cpu_ptr(zone->pageset, cpu);
pcp = &pset->pcp;
- free_pcppages_bulk(zone, pcp->count, pcp);
- pcp->count = 0;
+ if (pcp->count) {
+ free_pcppages_bulk(zone, pcp->count, pcp);
+ pcp->count = 0;
+ }
local_irq_restore(flags);
}
}
*/
alloc_flags = gfp_to_alloc_flags(gfp_mask);
+ /*
+ * Find the true preferred zone if the allocation is unconstrained by
+ * cpusets.
+ */
+ if (!(alloc_flags & ALLOC_CPUSET) && !nodemask)
+ first_zones_zonelist(zonelist, high_zoneidx, NULL,
+ &preferred_zone);
+
/* This is the last chance, in general, before the goto nopage. */
page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
get_mems_allowed();
/* The preferred zone is used for statistics later */
- first_zones_zonelist(zonelist, high_zoneidx, nodemask, &preferred_zone);
+ first_zones_zonelist(zonelist, high_zoneidx,
+ nodemask ? : &cpuset_current_mems_allowed,
+ &preferred_zone);
if (!preferred_zone) {
put_mems_allowed();
return NULL;
* Copyright (C) 2010 Linus Torvalds
*/
+#include <linux/pagemap.h>
#include <asm/tlb.h>
#include <asm-generic/pgtable.h>
* @inode: inode
* @newsize: new file size
*
- * truncate_setsize updastes i_size update and performs pagecache
- * truncation (if necessary) for a file size updates. It will be
- * typically be called from the filesystem's setattr function when
- * ATTR_SIZE is passed in.
+ * truncate_setsize updates i_size and performs pagecache truncation (if
+ * necessary) to @newsize. It is typically called from the filesystem's
+ * setattr function when ATTR_SIZE is passed in.
*
- * Must be called with inode_mutex held and after all filesystem
- * specific block truncation has been performed.
+ * Must be called with inode_mutex held and before all filesystem specific
+ * block truncation has been performed.
*/
void truncate_setsize(struct inode *inode, loff_t newsize)
{
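
As a hedged sketch of the calling convention described in the comment above, modelled loosely on a simple_setattr-style flow; example_setattr is an invented name, not any particular filesystem's method.

#include <linux/fs.h>
#include <linux/mm.h>

static int example_setattr(struct dentry *dentry, struct iattr *attr)
{
	struct inode *inode = dentry->d_inode;
	int error;

	error = inode_change_ok(inode, attr);
	if (error)
		return error;

	if (attr->ia_valid & ATTR_SIZE)
		truncate_setsize(inode, attr->ia_size);	/* i_mutex held by the VFS caller */

	setattr_copy(inode, attr);
	mark_inode_dirty(inode);
	return 0;
}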
#include <linux/memcontrol.h>
#include <linux/delayacct.h>
#include <linux/sysctl.h>
-#include <linux/compaction.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
unsigned long nr[NR_LRU_LISTS];
unsigned long nr_to_scan;
enum lru_list l;
- unsigned long nr_reclaimed;
+ unsigned long nr_reclaimed, nr_scanned;
unsigned long nr_to_reclaim = sc->nr_to_reclaim;
- unsigned long nr_scanned = sc->nr_scanned;
restart:
nr_reclaimed = 0;
+ nr_scanned = sc->nr_scanned;
get_scan_count(zone, sc, nr, priority);
while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
struct zone *preferred_zone;
first_zones_zonelist(zonelist, gfp_zone(sc->gfp_mask),
- NULL, &preferred_zone);
+ &cpuset_current_mems_allowed,
+ &preferred_zone);
wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/10);
}
}
} \
while (0)
#else /* !CONFIG_BATMAN_ADV_DEBUG */
-static inline void bat_dbg(char type __attribute__((unused)),
- struct bat_priv *bat_priv __attribute__((unused)),
- char *fmt __attribute__((unused)), ...)
+static inline void bat_dbg(char type __always_unused,
+ struct bat_priv *bat_priv __always_unused,
+ char *fmt __always_unused, ...)
{
}
#endif
uint8_t num_hna;
uint8_t gw_flags; /* flags related to gateway class */
uint8_t align;
-} __attribute__((packed));
+} __packed;
#define BAT_PACKET_LEN sizeof(struct batman_packet)
uint8_t orig[6];
uint16_t seqno;
uint8_t uid;
-} __attribute__((packed));
+} __packed;
#define BAT_RR_LEN 16
uint8_t uid;
uint8_t rr_cur;
uint8_t rr[BAT_RR_LEN][ETH_ALEN];
-} __attribute__((packed));
+} __packed;
struct unicast_packet {
uint8_t packet_type;
uint8_t version; /* batman version field */
uint8_t dest[6];
uint8_t ttl;
-} __attribute__((packed));
+} __packed;
struct unicast_frag_packet {
uint8_t packet_type;
uint8_t flags;
uint8_t orig[6];
uint16_t seqno;
-} __attribute__((packed));
+} __packed;
struct bcast_packet {
uint8_t packet_type;
uint8_t orig[6];
uint8_t ttl;
uint32_t seqno;
-} __attribute__((packed));
+} __packed;
struct vis_packet {
uint8_t packet_type;
* neighbors */
uint8_t target_orig[6]; /* who should receive this packet */
uint8_t sender_orig[6]; /* who sent or rebroadcasted this packet */
-} __attribute__((packed));
+} __packed;
#endif /* _NET_BATMAN_ADV_PACKET_H_ */
/* this packet might be part of the vis send queue. */
struct sk_buff *skb_packet;
/* vis_info may follow here*/
-} __attribute__((packed));
+} __packed;
struct vis_info_entry {
uint8_t src[ETH_ALEN];
uint8_t dest[ETH_ALEN];
uint8_t quality; /* quality = 0 means HNA */
-} __attribute__((packed));
+} __packed;
struct recvlist_node {
struct list_head list;
skb = tfp->skb;
}
+ if (skb_linearize(skb) < 0 || skb_linearize(tmp_skb) < 0)
+ goto err;
+
skb_pull(tmp_skb, sizeof(struct unicast_frag_packet));
- if (pskb_expand_head(skb, 0, tmp_skb->len, GFP_ATOMIC) < 0) {
- /* free buffered skb, skb will be freed later */
- kfree_skb(tfp->skb);
- return NULL;
- }
+ if (pskb_expand_head(skb, 0, tmp_skb->len, GFP_ATOMIC) < 0)
+ goto err;
/* move free entry to end */
tfp->skb = NULL;
unicast_packet->packet_type = BAT_UNICAST;
return skb;
+
+err:
+ /* free buffered skb, skb will be freed later */
+ kfree_skb(tfp->skb);
+ return NULL;
}
static void frag_create_entry(struct list_head *head, struct sk_buff *skb)
if (!bat_priv->primary_if)
goto dropped;
- unicast_packet = (struct unicast_packet *) skb->data;
+ frag_skb = dev_alloc_skb(data_len - (data_len / 2) + ucf_hdr_len);
+ if (!frag_skb)
+ goto dropped;
+ unicast_packet = (struct unicast_packet *) skb->data;
memcpy(&tmp_uc, unicast_packet, uc_hdr_len);
- frag_skb = dev_alloc_skb(data_len - (data_len / 2) + ucf_hdr_len);
skb_split(skb, frag_skb, data_len / 2);
if (my_skb_head_push(skb, ucf_hdr_len - uc_hdr_len) < 0 ||
spin_unlock_bh(&bat_priv->vis_list_lock);
kfree_skb(info->skb_packet);
+ kfree(info);
}
/* Compare two vis packets, used by the hashing algorithm */
buff_pos += sprintf(buff + buff_pos, "%pM,",
entry->addr);
- for (i = 0; i < packet->entries; i++)
+ for (j = 0; j < packet->entries; j++)
buff_pos += vis_data_read_entry(
buff + buff_pos,
- &entries[i],
+ &entries[j],
entry->addr,
entry->primary);
info);
if (hash_added < 0) {
/* did not work (for some reason) */
- kref_put(&old_info->refcount, free_info);
+ kref_put(&info->refcount, free_info);
info = NULL;
}
container_of(work, struct delayed_work, work);
struct bat_priv *bat_priv =
container_of(delayed_work, struct bat_priv, vis_work);
- struct vis_info *info, *temp;
+ struct vis_info *info;
spin_lock_bh(&bat_priv->vis_hash_lock);
purge_vis_packets(bat_priv);
send_list_add(bat_priv, bat_priv->my_vis_info);
}
- list_for_each_entry_safe(info, temp, &bat_priv->vis_send_list,
- send_list) {
+ while (!list_empty(&bat_priv->vis_send_list)) {
+ info = list_first_entry(&bat_priv->vis_send_list,
+ typeof(*info), send_list);
kref_get(&info->refcount);
spin_unlock_bh(&bat_priv->vis_hash_lock);
hci_conn_hold(acl);
if (acl->state == BT_OPEN || acl->state == BT_CLOSED) {
- acl->sec_level = sec_level;
+ acl->sec_level = BT_SECURITY_LOW;
+ acl->pending_sec_level = sec_level;
acl->auth_type = auth_type;
hci_acl_connect(acl);
- } else {
- if (acl->sec_level < sec_level)
- acl->sec_level = sec_level;
- if (acl->auth_type < auth_type)
- acl->auth_type = auth_type;
}
if (type == ACL_LINK)
{
BT_DBG("conn %p", conn);
+ if (conn->pending_sec_level > sec_level)
+ sec_level = conn->pending_sec_level;
+
if (sec_level > conn->sec_level)
- conn->sec_level = sec_level;
+ conn->pending_sec_level = sec_level;
else if (conn->link_mode & HCI_LM_AUTH)
return 1;
+ /* Make sure we preserve an existing MITM requirement*/
+ auth_type |= (conn->auth_type & 0x01);
+
conn->auth_type = auth_type;
if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->pend)) {
destroy_workqueue(hdev->workqueue);
+ hci_dev_lock_bh(hdev);
+ hci_blacklist_clear(hdev);
+ hci_dev_unlock_bh(hdev);
+
__hci_dev_put(hdev);
return 0;
if (conn->state != BT_CONFIG || !conn->out)
return 0;
- if (conn->sec_level == BT_SECURITY_SDP)
+ if (conn->pending_sec_level == BT_SECURITY_SDP)
return 0;
/* Only request authentication for SSP connections or non-SSP
* devices with sec_level HIGH */
if (!(hdev->ssp_mode > 0 && conn->ssp_mode > 0) &&
- conn->sec_level != BT_SECURITY_HIGH)
+ conn->pending_sec_level != BT_SECURITY_HIGH)
return 0;
return 1;
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
- if (!ev->status)
+ if (!ev->status) {
conn->link_mode |= HCI_LM_AUTH;
- else
+ conn->sec_level = conn->pending_sec_level;
+ } else
conn->sec_level = BT_SECURITY_LOW;
clear_bit(HCI_CONN_AUTH_PEND, &conn->pend);
}
}
-/* Service level security */
-static inline int l2cap_check_security(struct sock *sk)
+static inline u8 l2cap_get_auth_type(struct sock *sk)
{
- struct l2cap_conn *conn = l2cap_pi(sk)->conn;
- __u8 auth_type;
+ if (sk->sk_type == SOCK_RAW) {
+ switch (l2cap_pi(sk)->sec_level) {
+ case BT_SECURITY_HIGH:
+ return HCI_AT_DEDICATED_BONDING_MITM;
+ case BT_SECURITY_MEDIUM:
+ return HCI_AT_DEDICATED_BONDING;
+ default:
+ return HCI_AT_NO_BONDING;
+ }
+ } else if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) {
+ if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW)
+ l2cap_pi(sk)->sec_level = BT_SECURITY_SDP;
- if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) {
if (l2cap_pi(sk)->sec_level == BT_SECURITY_HIGH)
- auth_type = HCI_AT_NO_BONDING_MITM;
+ return HCI_AT_NO_BONDING_MITM;
else
- auth_type = HCI_AT_NO_BONDING;
-
- if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW)
- l2cap_pi(sk)->sec_level = BT_SECURITY_SDP;
+ return HCI_AT_NO_BONDING;
} else {
switch (l2cap_pi(sk)->sec_level) {
case BT_SECURITY_HIGH:
- auth_type = HCI_AT_GENERAL_BONDING_MITM;
- break;
+ return HCI_AT_GENERAL_BONDING_MITM;
case BT_SECURITY_MEDIUM:
- auth_type = HCI_AT_GENERAL_BONDING;
- break;
+ return HCI_AT_GENERAL_BONDING;
default:
- auth_type = HCI_AT_NO_BONDING;
- break;
+ return HCI_AT_NO_BONDING;
}
}
+}
+
+/* Service level security */
+static inline int l2cap_check_security(struct sock *sk)
+{
+ struct l2cap_conn *conn = l2cap_pi(sk)->conn;
+ __u8 auth_type;
+
+ auth_type = l2cap_get_auth_type(sk);
return hci_conn_security(conn->hcon, l2cap_pi(sk)->sec_level,
auth_type);
result = L2CAP_CR_SEC_BLOCK;
else
result = L2CAP_CR_BAD_PSM;
+ sk->sk_state = BT_DISCONN;
rsp.scid = cpu_to_le16(l2cap_pi(sk)->dcid);
rsp.dcid = cpu_to_le16(l2cap_pi(sk)->scid);
err = -ENOMEM;
- if (sk->sk_type == SOCK_RAW) {
- switch (l2cap_pi(sk)->sec_level) {
- case BT_SECURITY_HIGH:
- auth_type = HCI_AT_DEDICATED_BONDING_MITM;
- break;
- case BT_SECURITY_MEDIUM:
- auth_type = HCI_AT_DEDICATED_BONDING;
- break;
- default:
- auth_type = HCI_AT_NO_BONDING;
- break;
- }
- } else if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) {
- if (l2cap_pi(sk)->sec_level == BT_SECURITY_HIGH)
- auth_type = HCI_AT_NO_BONDING_MITM;
- else
- auth_type = HCI_AT_NO_BONDING;
-
- if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW)
- l2cap_pi(sk)->sec_level = BT_SECURITY_SDP;
- } else {
- switch (l2cap_pi(sk)->sec_level) {
- case BT_SECURITY_HIGH:
- auth_type = HCI_AT_GENERAL_BONDING_MITM;
- break;
- case BT_SECURITY_MEDIUM:
- auth_type = HCI_AT_GENERAL_BONDING;
- break;
- default:
- auth_type = HCI_AT_NO_BONDING;
- break;
- }
- }
+ auth_type = l2cap_get_auth_type(sk);
hcon = hci_connect(hdev, ACL_LINK, dst,
l2cap_pi(sk)->sec_level, auth_type);
if (sk->sk_type != SOCK_SEQPACKET &&
sk->sk_type != SOCK_STREAM) {
l2cap_sock_clear_timer(sk);
- sk->sk_state = BT_CONNECTED;
+ if (l2cap_check_security(sk))
+ sk->sk_state = BT_CONNECTED;
} else
l2cap_do_start(sk);
}
if (pi->mode == L2CAP_MODE_STREAMING) {
l2cap_streaming_send(sk);
} else {
- if (pi->conn_state & L2CAP_CONN_REMOTE_BUSY &&
- pi->conn_state && L2CAP_CONN_WAIT_F) {
+ if ((pi->conn_state & L2CAP_CONN_REMOTE_BUSY) &&
+ (pi->conn_state & L2CAP_CONN_WAIT_F)) {
err = len;
break;
}
* initiator rfcomm_process_rx already calls
* rfcomm_session_put() */
if (s->sock->sk->sk_state != BT_CLOSED)
- rfcomm_session_put(s);
+ if (list_empty(&s->dlcs))
+ rfcomm_session_put(s);
break;
}
}
fdb = kmem_cache_alloc(br_fdb_cache, GFP_ATOMIC);
if (fdb) {
memcpy(fdb->addr.addr, addr, ETH_ALEN);
- hlist_add_head_rcu(&fdb->hlist, head);
-
fdb->dst = source;
fdb->is_local = is_local;
fdb->is_static = is_local;
fdb->ageing_timer = jiffies;
+
+ hlist_add_head_rcu(&fdb->hlist, head);
}
return fdb;
}
if (is_multicast_ether_addr(dest)) {
mdst = br_mdb_get(br, skb);
if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) {
- if ((mdst && !hlist_unhashed(&mdst->mglist)) ||
+ if ((mdst && mdst->mglist) ||
br_multicast_is_router(br))
skb2 = skb;
br_multicast_forward(mdst, skb, skb2);
if (!netif_running(br->dev) || timer_pending(&mp->timer))
goto out;
- if (!hlist_unhashed(&mp->mglist))
- hlist_del_init(&mp->mglist);
+ mp->mglist = false;
if (mp->ports)
goto out;
del_timer(&p->query_timer);
call_rcu_bh(&p->rcu, br_multicast_free_pg);
- if (!mp->ports && hlist_unhashed(&mp->mglist) &&
+ if (!mp->ports && !mp->mglist &&
netif_running(br->dev))
mod_timer(&mp->timer, jiffies);
struct net_bridge *br = mp->br;
spin_lock(&br->multicast_lock);
- if (!netif_running(br->dev) || hlist_unhashed(&mp->mglist) ||
+ if (!netif_running(br->dev) || !mp->mglist ||
mp->queries_sent >= br->multicast_last_member_count)
goto out;
goto err;
if (!port) {
- hlist_add_head(&mp->mglist, &br->mglist);
+ mp->mglist = true;
mod_timer(&mp->timer, now + br->multicast_membership_interval);
goto out;
}
max_delay *= br->multicast_last_member_count;
- if (!hlist_unhashed(&mp->mglist) &&
+ if (mp->mglist &&
(timer_pending(&mp->timer) ?
time_after(mp->timer.expires, now + max_delay) :
try_to_del_timer_sync(&mp->timer) >= 0))
if (timer_pending(&p->timer) ?
time_after(p->timer.expires, now + max_delay) :
try_to_del_timer_sync(&p->timer) >= 0)
- mod_timer(&mp->timer, now + max_delay);
+ mod_timer(&p->timer, now + max_delay);
}
out:
goto out;
max_delay *= br->multicast_last_member_count;
- if (!hlist_unhashed(&mp->mglist) &&
+ if (mp->mglist &&
(timer_pending(&mp->timer) ?
time_after(mp->timer.expires, now + max_delay) :
try_to_del_timer_sync(&mp->timer) >= 0))
if (timer_pending(&p->timer) ?
time_after(p->timer.expires, now + max_delay) :
try_to_del_timer_sync(&p->timer) >= 0)
- mod_timer(&mp->timer, now + max_delay);
+ mod_timer(&p->timer, now + max_delay);
}
out:
br->multicast_last_member_interval;
if (!port) {
- if (!hlist_unhashed(&mp->mglist) &&
+ if (mp->mglist &&
(timer_pending(&mp->timer) ?
time_after(mp->timer.expires, time) :
try_to_del_timer_sync(&mp->timer) >= 0)) {
struct net_bridge_mdb_entry
{
struct hlist_node hlist[2];
- struct hlist_node mglist;
struct net_bridge *br;
struct net_bridge_port_group __rcu *ports;
struct rcu_head rcu;
struct timer_list timer;
struct timer_list query_timer;
struct br_ip addr;
+ bool mglist;
u32 queries_sent;
};
spinlock_t multicast_lock;
struct net_bridge_mdb_htable __rcu *mdb;
struct hlist_head router_list;
- struct hlist_head mglist;
struct timer_list multicast_router_timer;
struct timer_list multicast_querier_timer;
struct cflayer *servl = NULL;
struct cfcnfg_phyinfo *phyinfo = NULL;
u8 phyid = 0;
+
caif_assert(adap_layer != NULL);
channel_id = adap_layer->id;
if (adap_layer->dn == NULL || channel_id == 0) {
goto end;
}
servl = cfmuxl_remove_uplayer(cnfg->mux, channel_id);
- if (servl == NULL)
- goto end;
- layer_set_up(servl, NULL);
- ret = cfctrl_linkdown_req(cnfg->ctrl, channel_id, adap_layer);
if (servl == NULL) {
pr_err("PROTOCOL ERROR - Error removing service_layer Channel_Id(%d)",
channel_id);
ret = -EINVAL;
goto end;
}
+ layer_set_up(servl, NULL);
+ ret = cfctrl_linkdown_req(cnfg->ctrl, channel_id, adap_layer);
+ if (ret)
+ goto end;
caif_assert(channel_id == servl->id);
if (adap_layer->dn != NULL) {
phyid = cfsrvl_getphyid(adap_layer->dn);
priv->conn_req.sockaddr.u.dgm.connection_id = -1;
priv->flowenabled = false;
- ASSERT_RTNL();
init_waitqueue_head(&priv->netmgmt_wq);
- list_add(&priv->list_field, &chnl_net_list);
}
ret = register_netdevice(dev);
if (ret)
pr_warn("device rtml registration failed\n");
+ else
+ list_add(&caifdev->list_field, &chnl_net_list);
return ret;
}
struct sockaddr_can *addr =
(struct sockaddr_can *)msg->msg_name;
+ if (msg->msg_namelen < sizeof(*addr))
+ return -EINVAL;
+
if (addr->can_family != AF_CAN)
return -EINVAL;
struct sockaddr_can *addr =
(struct sockaddr_can *)msg->msg_name;
+ if (msg->msg_namelen < sizeof(*addr))
+ return -EINVAL;
+
if (addr->can_family != AF_CAN)
return -EINVAL;
{
struct kvec iov = {buf, len};
struct msghdr msg = { .msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL };
+ int r;
- return kernel_recvmsg(sock, &msg, &iov, 1, len, msg.msg_flags);
+ r = kernel_recvmsg(sock, &msg, &iov, 1, len, msg.msg_flags);
+ if (r == -EAGAIN)
+ r = 0;
+ return r;
}
/*
size_t kvlen, size_t len, int more)
{
struct msghdr msg = { .msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL };
+ int r;
if (more)
msg.msg_flags |= MSG_MORE;
else
msg.msg_flags |= MSG_EOR; /* superfluous, but what the hell */
- return kernel_sendmsg(sock, &msg, iov, kvlen, len);
+ r = kernel_sendmsg(sock, &msg, iov, kvlen, len);
+ if (r == -EAGAIN)
+ r = 0;
+ return r;
}
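
The two wrappers above fold -EAGAIN into a zero return so the messenger's try_read()/try_write() loops can keep using a single "ret <= 0 means stop for now" exit check. A small userspace sketch of the same convention on a non-blocking socket (not Ceph code; the helper name is made up):

#include <stdio.h>
#include <errno.h>
#include <sys/socket.h>

/* Fold "would block" into 0 so callers only distinguish progress (>0),
 * no progress (0) and real errors (<0). */
static ssize_t recv_or_zero(int fd, void *buf, size_t len)
{
	ssize_t r = recv(fd, buf, len, MSG_DONTWAIT);

	if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
		r = 0;
	return r;
}

int main(void)
{
	int sv[2];
	char buf[16];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;

	/* Nothing queued yet: a plain recv() would fail with EAGAIN. */
	printf("empty  : %zd\n", recv_or_zero(sv[0], buf, sizeof(buf)));

	send(sv[1], "hi", 2, 0);
	printf("2 bytes: %zd\n", recv_or_zero(sv[0], buf, sizeof(buf)));
	return 0;
}
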
(msg->pages || msg->pagelist || msg->bio || in_trail))
kunmap(page);
+ if (ret == -EAGAIN)
+ ret = 0;
if (ret <= 0)
goto out;
if (con->out_skip) {
ret = write_partial_skip(con);
if (ret <= 0)
- goto done;
- if (ret < 0) {
- dout("try_write write_partial_skip err %d\n", ret);
- goto done;
- }
+ goto out;
}
if (con->out_kvec_left) {
ret = write_partial_kvec(con);
if (ret <= 0)
- goto done;
+ goto out;
}
/* msg pages? */
if (ret == 1)
goto more_kvec; /* we need to send the footer, too! */
if (ret == 0)
- goto done;
+ goto out;
if (ret < 0) {
dout("try_write write_partial_msg_pages err %d\n",
ret);
- goto done;
+ goto out;
}
}
/* Nothing to do! */
clear_bit(WRITE_PENDING, &con->state);
dout("try_write nothing else to write.\n");
-done:
ret = 0;
out:
- dout("try_write done on %p\n", con);
+ dout("try_write done on %p ret %d\n", con, ret);
return ret;
}
dout("try_read connecting\n");
ret = read_partial_banner(con);
if (ret <= 0)
- goto done;
- if (process_banner(con) < 0) {
- ret = -1;
goto out;
- }
+ ret = process_banner(con);
+ if (ret < 0)
+ goto out;
}
ret = read_partial_connect(con);
if (ret <= 0)
- goto done;
- if (process_connect(con) < 0) {
- ret = -1;
goto out;
- }
+ ret = process_connect(con);
+ if (ret < 0)
+ goto out;
goto more;
}
dout("skipping %d / %d bytes\n", skip, -con->in_base_pos);
ret = ceph_tcp_recvmsg(con->sock, buf, skip);
if (ret <= 0)
- goto done;
+ goto out;
con->in_base_pos += ret;
if (con->in_base_pos)
goto more;
*/
ret = ceph_tcp_recvmsg(con->sock, &con->in_tag, 1);
if (ret <= 0)
- goto done;
+ goto out;
dout("try_read got tag %d\n", (int)con->in_tag);
switch (con->in_tag) {
case CEPH_MSGR_TAG_MSG:
break;
case CEPH_MSGR_TAG_CLOSE:
set_bit(CLOSED, &con->state); /* fixme */
- goto done;
+ goto out;
default:
goto bad_tag;
}
case -EBADMSG:
con->error_msg = "bad crc";
ret = -EIO;
- goto out;
+ break;
case -EIO:
con->error_msg = "io error";
- goto out;
- default:
- goto done;
+ break;
}
+ goto out;
}
if (con->in_tag == CEPH_MSGR_TAG_READY)
goto more;
if (con->in_tag == CEPH_MSGR_TAG_ACK) {
ret = read_partial_ack(con);
if (ret <= 0)
- goto done;
+ goto out;
process_ack(con);
goto more;
}
-done:
- ret = 0;
out:
- dout("try_read done on %p\n", con);
+ dout("try_read done on %p ret %d\n", con, ret);
return ret;
bad_tag:
* @ha: hardware address
*
* Search for an interface by MAC address. Returns NULL if the device
- * is not found or a pointer to the device. The caller must hold RCU
+ * is not found or a pointer to the device.
+ * The caller must hold RCU or RTNL.
* The returned device has not had its ref count increased
* and the caller must therefore be careful about locking
*
static int __dev_close(struct net_device *dev)
{
+ int retval;
LIST_HEAD(single);
list_add(&dev->unreg_list, &single);
- return __dev_close_many(&single);
+ retval = __dev_close_many(&single);
+ list_del(&single);
+ return retval;
}
int dev_close_many(struct list_head *head)
list_add(&dev->unreg_list, &single);
dev_close_many(&single);
-
+ list_del(&single);
return 0;
}
EXPORT_SYMBOL(dev_close);
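
Both dev_close() paths above wrap the device in a one-entry list built on the stack (LIST_HEAD(single)); the added list_del(&single) calls unhook that temporary head again, leaving dev->unreg_list self-linked instead of pointing into a stack frame that is about to disappear. A minimal userspace sketch of the pattern, with simplified stand-ins for the <linux/list.h> helpers:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name)	{ &(name), &(name) }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *entry)	/* simplified: no poisoning */
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

struct fake_dev { struct list_head unreg_list; };

static struct fake_dev dev = { .unreg_list = LIST_HEAD_INIT(dev.unreg_list) };

int main(void)
{
	{
		struct list_head single = LIST_HEAD_INIT(single);

		list_add(&dev.unreg_list, &single);
		/* ... __dev_close_many(&single) would walk the list here ... */
		list_del(&single);	/* unhook the stack-local head again */
	}
	/* Without the list_del(), both pointers would still reference the
	 * dead stack slot that held `single`. */
	printf("self-linked again: %d\n",
	       dev.unreg_list.next == &dev.unreg_list &&
	       dev.unreg_list.prev == &dev.unreg_list);
	return 0;
}
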
static int harmonize_features(struct sk_buff *skb, __be16 protocol, int features)
{
- if (!can_checksum_protocol(protocol, features)) {
+ if (!can_checksum_protocol(features, protocol)) {
features &= ~NETIF_F_ALL_CSUM;
features &= ~NETIF_F_SG;
} else if (illegal_highdma(skb->dev, skb)) {
return harmonize_features(skb, protocol, features);
}
- features &= skb->dev->vlan_features;
+ features &= (skb->dev->vlan_features | NETIF_F_HW_VLAN_TX);
if (protocol != htons(ETH_P_8021Q)) {
return harmonize_features(skb, protocol, features);
} else {
features &= NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST |
- NETIF_F_GEN_CSUM;
+ NETIF_F_GEN_CSUM | NETIF_F_HW_VLAN_TX;
return harmonize_features(skb, protocol, features);
}
}
map = rcu_dereference(rxqueue->rps_map);
if (map) {
- if (map->len == 1) {
+ if (map->len == 1 &&
+ !rcu_dereference_raw(rxqueue->rps_flow_table)) {
tcpu = map->cpus[0];
if (cpu_online(tcpu))
cpu = tcpu;
__skb_pull(skb, skb_headlen(skb));
skb_reserve(skb, NET_IP_ALIGN - skb_headroom(skb));
skb->vlan_tci = 0;
+ skb->dev = napi->dev;
+ skb->skb_iif = 0;
napi->skb = skb;
}
list_add(&dev->unreg_list, &single);
rollback_registered_many(&single);
+ list_del(&single);
}
unsigned long netdev_fix_features(unsigned long features, const char *name)
dev_net_set(dev, &init_net);
+ dev->gso_max_size = GSO_MAX_SIZE;
+
+ INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
+ dev->ethtool_ntuple_list.count = 0;
+ INIT_LIST_HEAD(&dev->napi_list);
+ INIT_LIST_HEAD(&dev->unreg_list);
+ INIT_LIST_HEAD(&dev->link_watch_list);
+ dev->priv_flags = IFF_XMIT_DST_RELEASE;
+ setup(dev);
+
dev->num_tx_queues = txqs;
dev->real_num_tx_queues = txqs;
if (netif_alloc_netdev_queues(dev))
- goto free_pcpu;
+ goto free_all;
#ifdef CONFIG_RPS
dev->num_rx_queues = rxqs;
dev->real_num_rx_queues = rxqs;
if (netif_alloc_rx_queues(dev))
- goto free_pcpu;
+ goto free_all;
#endif
- dev->gso_max_size = GSO_MAX_SIZE;
-
- INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
- dev->ethtool_ntuple_list.count = 0;
- INIT_LIST_HEAD(&dev->napi_list);
- INIT_LIST_HEAD(&dev->unreg_list);
- INIT_LIST_HEAD(&dev->link_watch_list);
- dev->priv_flags = IFF_XMIT_DST_RELEASE;
- setup(dev);
strcpy(dev->name, name);
return dev;
+free_all:
+ free_netdev(dev);
+ return NULL;
+
free_pcpu:
free_percpu(dev->pcpu_refcnt);
kfree(dev->_tx);
}
}
unregister_netdevice_many(&dev_kill_list);
+ list_del(&dev_kill_list);
rtnl_unlock();
}
if (regs.len > reglen)
regs.len = reglen;
- regbuf = vmalloc(reglen);
+ regbuf = vzalloc(reglen);
if (!regbuf)
return -ENOMEM;
return -EOPNOTSUPP;
if (af_ops->validate_link_af) {
- err = af_ops->validate_link_af(dev,
- tb[IFLA_AF_SPEC]);
+ err = af_ops->validate_link_af(dev, af);
if (err < 0)
return err;
}
snprintf(ifname, IFNAMSIZ, "%s%%d", ops->kind);
dest_net = rtnl_link_get_net(net, tb);
+ if (IS_ERR(dest_net))
+ return PTR_ERR(dest_net);
+
dev = rtnl_create_link(net, dest_net, ifname, ops, tb);
if (IS_ERR(dev))
if (kind != 2 && security_netlink_recv(skb, CAP_NET_ADMIN))
return -EPERM;
- if (kind == 2 && (nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+ if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) {
struct sock *rtnl;
rtnl_dumpit_func dumpit;
shinfo = skb_shinfo(skb);
memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
atomic_set(&shinfo->dataref, 1);
+ kmemcheck_annotate_variable(shinfo->destructor_arg);
if (fclone) {
struct sk_buff *child = skb + 1;
merge:
if (offset > headlen) {
- skbinfo->frags[0].page_offset += offset - headlen;
- skbinfo->frags[0].size -= offset - headlen;
+ unsigned int eat = offset - headlen;
+
+ skbinfo->frags[0].page_offset += eat;
+ skbinfo->frags[0].size -= eat;
+ skb->data_len -= eat;
+ skb->len -= eat;
offset = headlen;
}
u8 up, idtype;
int ret = -EINVAL;
- if (!tb[DCB_ATTR_APP] || !netdev->dcbnl_ops->getapp)
+ if (!tb[DCB_ATTR_APP])
goto out;
ret = nla_parse_nested(app_tb, DCB_APP_ATTR_MAX, tb[DCB_ATTR_APP],
goto out;
id = nla_get_u16(app_tb[DCB_APP_ATTR_ID]);
- up = netdev->dcbnl_ops->getapp(netdev, idtype, id);
+
+ if (netdev->dcbnl_ops->getapp) {
+ up = netdev->dcbnl_ops->getapp(netdev, idtype, id);
+ } else {
+ struct dcb_app app = {
+ .selector = idtype,
+ .protocol = id,
+ };
+ up = dcb_getapp(netdev, &app);
+ }
/* send this back */
dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
dcb->cmd = DCB_CMD_GAPP;
app_nest = nla_nest_start(dcbnl_skb, DCB_ATTR_APP);
+ if (!app_nest)
+ goto out_cancel;
+
ret = nla_put_u8(dcbnl_skb, DCB_APP_ATTR_IDTYPE, idtype);
if (ret)
goto out_cancel;
u8 dcb_setapp(struct net_device *dev, struct dcb_app *new)
{
struct dcb_app_type *itr;
+ struct dcb_app_type event;
+
+ memcpy(&event.name, dev->name, sizeof(event.name));
+ memcpy(&event.app, new, sizeof(event.app));
spin_lock(&dcb_lock);
/* Search for existing match and replace */
}
out:
spin_unlock(&dcb_lock);
- call_dcbevent_notifiers(DCB_APP_EVENT, new);
+ call_dcbevent_notifiers(DCB_APP_EVENT, &event);
return 0;
}
EXPORT_SYMBOL(dcb_setapp);
}
module_exit(dsa_cleanup_module);
-MODULE_AUTHOR("Lennert Buytenhek <buytenh@wantstofly.org>")
+MODULE_AUTHOR("Lennert Buytenhek <buytenh@wantstofly.org>");
MODULE_DESCRIPTION("Driver for Distributed Switch Architecture switch chips");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:dsa");
static int econet_sendmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len)
{
- struct sock *sk = sock->sk;
struct sockaddr_ec *saddr=(struct sockaddr_ec *)msg->msg_name;
struct net_device *dev;
struct ec_addr addr;
int err;
unsigned char port, cb;
#if defined(CONFIG_ECONET_AUNUDP) || defined(CONFIG_ECONET_NATIVE)
+ struct sock *sk = sock->sk;
struct sk_buff *skb;
struct ec_cb *eb;
#endif
error_free_buf:
vfree(userbuf);
+error:
#else
err = -EPROTOTYPE;
#endif
- error:
mutex_unlock(&econet_mutex);
return err;
}
EXPORT_SYMBOL(inet_ioctl);
+#ifdef CONFIG_COMPAT
+int inet_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+{
+ struct sock *sk = sock->sk;
+ int err = -ENOIOCTLCMD;
+
+ if (sk->sk_prot->compat_ioctl)
+ err = sk->sk_prot->compat_ioctl(sk, cmd, arg);
+
+ return err;
+}
+#endif
+
const struct proto_ops inet_stream_ops = {
.family = PF_INET,
.owner = THIS_MODULE,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
EXPORT_SYMBOL(inet_stream_ops);
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
EXPORT_SYMBOL(inet_dgram_ops);
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
+ .compat_ioctl = inet_compat_ioctl,
#endif
};
IPV4_DEVCONF_ALL(net, PROXY_ARP) = on;
return 0;
}
- if (__in_dev_get_rcu(dev)) {
- IN_DEV_CONF_SET(__in_dev_get_rcu(dev), PROXY_ARP, on);
+ if (__in_dev_get_rtnl(dev)) {
+ IN_DEV_CONF_SET(__in_dev_get_rtnl(dev), PROXY_ARP, on);
return 0;
}
return -ENXIO;
}
-/* must be called with rcu_read_lock() */
static int arp_req_set_public(struct net *net, struct arpreq *r,
struct net_device *dev)
{
if (!(r.arp_flags & ATF_NETMASK))
((struct sockaddr_in *)&r.arp_netmask)->sin_addr.s_addr =
htonl(0xFFFFFFFFUL);
- rcu_read_lock();
+ rtnl_lock();
if (r.arp_dev[0]) {
err = -ENODEV;
- dev = dev_get_by_name_rcu(net, r.arp_dev);
+ dev = __dev_get_by_name(net, r.arp_dev);
if (dev == NULL)
goto out;
break;
}
out:
- rcu_read_unlock();
+ rtnl_unlock();
if (cmd == SIOCGARP && !err && copy_to_user(arg, &r, sizeof(r)))
err = -EFAULT;
return err;
return mtu >= 68;
}
+static void inetdev_send_gratuitous_arp(struct net_device *dev,
+ struct in_device *in_dev)
+
+{
+ struct in_ifaddr *ifa = in_dev->ifa_list;
+
+ if (!ifa)
+ return;
+
+ arp_send(ARPOP_REQUEST, ETH_P_ARP,
+ ifa->ifa_address, dev,
+ ifa->ifa_address, NULL,
+ dev->dev_addr, NULL);
+}
+
/* Called only under RTNL semaphore */
static int inetdev_event(struct notifier_block *this, unsigned long event,
}
ip_mc_up(in_dev);
/* fall through */
- case NETDEV_NOTIFY_PEERS:
case NETDEV_CHANGEADDR:
+ if (!IN_DEV_ARP_NOTIFY(in_dev))
+ break;
+ /* fall through */
+ case NETDEV_NOTIFY_PEERS:
/* Send gratuitous ARP to notify of link change */
- if (IN_DEV_ARP_NOTIFY(in_dev)) {
- struct in_ifaddr *ifa = in_dev->ifa_list;
-
- if (ifa)
- arp_send(ARPOP_REQUEST, ETH_P_ARP,
- ifa->ifa_address, dev,
- ifa->ifa_address, NULL,
- dev->dev_addr, NULL);
- }
+ inetdev_send_gratuitous_arp(dev, in_dev);
break;
case NETDEV_DOWN:
ip_mc_down(in_dev);
nlmsg_len(nlh) < hdrlen)
return -EINVAL;
- if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+ if (nlh->nlmsg_flags & NLM_F_DUMP) {
if (nlmsg_attrlen(nlh, hdrlen)) {
struct nlattr *attr;
struct inet_peer *inet_getpeer(struct inetpeer_addr *daddr, int create)
{
struct inet_peer __rcu **stack[PEER_MAXDEPTH], ***stackptr;
- struct inet_peer_base *base = family_to_base(AF_INET);
+ struct inet_peer_base *base = family_to_base(daddr->family);
struct inet_peer *p;
/* Look up for the address quickly, lockless.
.fl4_dst = dst,
.fl4_src = tiph->saddr,
.fl4_tos = RT_TOS(tos),
+ .proto = IPPROTO_GRE,
.fl_gre_key = tunnel->parms.o_key
};
if (ip_route_output_key(dev_net(dev), &rt, &fl)) {
#include <linux/notifier.h>
#include <linux/if_arp.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/compat.h>
#include <net/ipip.h>
#include <net/checksum.h>
#include <net/netlink.h>
}
}
+#ifdef CONFIG_COMPAT
+struct compat_sioc_sg_req {
+ struct in_addr src;
+ struct in_addr grp;
+ compat_ulong_t pktcnt;
+ compat_ulong_t bytecnt;
+ compat_ulong_t wrong_if;
+};
+
+struct compat_sioc_vif_req {
+ vifi_t vifi; /* Which iface */
+ compat_ulong_t icount;
+ compat_ulong_t ocount;
+ compat_ulong_t ibytes;
+ compat_ulong_t obytes;
+};
+
+int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+{
+ struct compat_sioc_sg_req sr;
+ struct compat_sioc_vif_req vr;
+ struct vif_device *vif;
+ struct mfc_cache *c;
+ struct net *net = sock_net(sk);
+ struct mr_table *mrt;
+
+ mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ if (mrt == NULL)
+ return -ENOENT;
+
+ switch (cmd) {
+ case SIOCGETVIFCNT:
+ if (copy_from_user(&vr, arg, sizeof(vr)))
+ return -EFAULT;
+ if (vr.vifi >= mrt->maxvif)
+ return -EINVAL;
+ read_lock(&mrt_lock);
+ vif = &mrt->vif_table[vr.vifi];
+ if (VIF_EXISTS(mrt, vr.vifi)) {
+ vr.icount = vif->pkt_in;
+ vr.ocount = vif->pkt_out;
+ vr.ibytes = vif->bytes_in;
+ vr.obytes = vif->bytes_out;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &vr, sizeof(vr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ case SIOCGETSGCNT:
+ if (copy_from_user(&sr, arg, sizeof(sr)))
+ return -EFAULT;
+
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr.src.s_addr, sr.grp.s_addr);
+ if (c) {
+ sr.pktcnt = c->mfc_un.res.pkt;
+ sr.bytecnt = c->mfc_un.res.bytes;
+ sr.wrong_if = c->mfc_un.res.wrong_if;
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+ return -EFAULT;
+ return 0;
+ }
+ rcu_read_unlock();
+ return -EADDRNOTAVAIL;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+#endif
+
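
The compat_sioc_*_req structures added above exist because a 32-bit process lays out the `unsigned long` counters of these ioctl arguments as 32-bit fields, so a 64-bit kernel cannot copy_from_user() straight into its native structure and has to declare a matching layout and translate field by field. A standalone sketch of the size mismatch (the types are illustrative stand-ins: compat_ulong_t is a 32-bit integer in the kernel's compat headers, vifi_t an unsigned short, and the native layout mirrors struct sioc_vif_req from <linux/mroute.h>):

#include <stdio.h>
#include <stdint.h>

typedef unsigned short vifi_t;
typedef uint32_t compat_ulong_t;

struct native_sioc_vif_req {		/* what a 64-bit kernel uses */
	vifi_t vifi;
	unsigned long icount, ocount, ibytes, obytes;
};

struct compat_sioc_vif_req {		/* what 32-bit userspace hands in */
	vifi_t vifi;
	compat_ulong_t icount, ocount, ibytes, obytes;
};

int main(void)
{
	/* On an LP64 build these differ (typically 40 vs 20 bytes). */
	printf("native: %zu bytes, compat: %zu bytes\n",
	       sizeof(struct native_sioc_vif_req),
	       sizeof(struct compat_sioc_vif_req));
	return 0;
}
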
static int ipmr_device_event(struct notifier_block *this, unsigned long event, void *ptr)
{
if (mangle->flags & ~ARPT_MANGLE_MASK ||
!(mangle->flags & ARPT_MANGLE_MASK))
- return false;
+ return -EINVAL;
if (mangle->target != NF_DROP && mangle->target != NF_ACCEPT &&
mangle->target != XT_CONTINUE)
- return false;
- return true;
+ return -EINVAL;
+ return 0;
}
static struct xt_target arpt_mangle_reg __read_mostly = {
#include <linux/seq_file.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/compat.h>
static struct raw_hashinfo raw_v4_hashinfo = {
.lock = __RW_LOCK_UNLOCKED(raw_v4_hashinfo.lock),
}
}
+#ifdef CONFIG_COMPAT
+static int compat_raw_ioctl(struct sock *sk, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case SIOCOUTQ:
+ case SIOCINQ:
+ return -ENOIOCTLCMD;
+ default:
+#ifdef CONFIG_IP_MROUTE
+ return ipmr_compat_ioctl(sk, cmd, compat_ptr(arg));
+#else
+ return -ENOIOCTLCMD;
+#endif
+ }
+}
+#endif
+
struct proto raw_prot = {
.name = "RAW",
.owner = THIS_MODULE,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_raw_setsockopt,
.compat_getsockopt = compat_raw_getsockopt,
+ .compat_ioctl = compat_raw_ioctl,
#endif
};
return NULL;
}
+static unsigned int ipv4_blackhole_default_mtu(const struct dst_entry *dst)
+{
+ return 0;
+}
+
static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu)
{
}
.protocol = cpu_to_be16(ETH_P_IP),
.destroy = ipv4_dst_destroy,
.check = ipv4_blackhole_dst_check,
+ .default_mtu = ipv4_blackhole_default_mtu,
+ .default_advmss = ipv4_default_advmss,
.update_pmtu = ipv4_rt_blackhole_update_pmtu,
};
if (!skb_copy_datagram_iovec(skb, 0, tp->ucopy.iov, chunk)) {
tp->ucopy.len -= chunk;
tp->copied_seq += chunk;
- eaten = (chunk == skb->len && !th->fin);
+ eaten = (chunk == skb->len);
tcp_rcv_space_adjust(sk);
}
local_bh_disable();
}
req = req->dl_next;
}
- st->offset = 0;
if (++st->sbucket >= icsk->icsk_accept_queue.listen_opt->nr_table_entries)
break;
get_req:
dev->type == ARPHRD_TUNNEL6 ||
dev->type == ARPHRD_SIT ||
dev->type == ARPHRD_NONE) {
- printk(KERN_INFO
- "%s: Disabled Privacy Extensions\n",
- dev->name);
ndev->cnf.use_tempaddr = -1;
} else {
in6_dev_hold(ndev);
struct net *net = dev_net(dev);
struct inet6_dev *idev;
struct inet6_ifaddr *ifa;
- LIST_HEAD(keep_list);
- int state;
+ int state, i;
ASSERT_RTNL();
- /* Flush routes if device is being removed or it is not loopback */
- if (how || !(dev->flags & IFF_LOOPBACK))
- rt6_ifdown(net, dev);
+ rt6_ifdown(net, dev);
+ neigh_ifdown(&nd_tbl, dev);
idev = __in6_dev_get(dev);
if (idev == NULL)
}
+ /* Step 2: clear hash table */
+ for (i = 0; i < IN6_ADDR_HSIZE; i++) {
+ struct hlist_head *h = &inet6_addr_lst[i];
+ struct hlist_node *n;
+
+ spin_lock_bh(&addrconf_hash_lock);
+ restart:
+ hlist_for_each_entry_rcu(ifa, n, h, addr_lst) {
+ if (ifa->idev == idev) {
+ hlist_del_init_rcu(&ifa->addr_lst);
+ addrconf_del_timer(ifa);
+ goto restart;
+ }
+ }
+ spin_unlock_bh(&addrconf_hash_lock);
+ }
+
write_lock_bh(&idev->lock);
/* Step 2: clear flags for stateless addrconf */
struct inet6_ifaddr, if_list);
addrconf_del_timer(ifa);
- /* If just doing link down, and address is permanent
- and not link-local, then retain it. */
- if (!how &&
- (ifa->flags&IFA_F_PERMANENT) &&
- !(ipv6_addr_type(&ifa->addr) & IPV6_ADDR_LINKLOCAL)) {
- list_move_tail(&ifa->if_list, &keep_list);
-
- /* If not doing DAD on this address, just keep it. */
- if ((dev->flags&(IFF_NOARP|IFF_LOOPBACK)) ||
- idev->cnf.accept_dad <= 0 ||
- (ifa->flags & IFA_F_NODAD))
- continue;
+ list_del(&ifa->if_list);
- /* If it was tentative already, no need to notify */
- if (ifa->flags & IFA_F_TENTATIVE)
- continue;
+ write_unlock_bh(&idev->lock);
- /* Flag it for later restoration when link comes up */
- ifa->flags |= IFA_F_TENTATIVE;
- ifa->state = INET6_IFADDR_STATE_DAD;
- } else {
- list_del(&ifa->if_list);
-
- /* clear hash table */
- spin_lock_bh(&addrconf_hash_lock);
- hlist_del_init_rcu(&ifa->addr_lst);
- spin_unlock_bh(&addrconf_hash_lock);
-
- write_unlock_bh(&idev->lock);
- spin_lock_bh(&ifa->state_lock);
- state = ifa->state;
- ifa->state = INET6_IFADDR_STATE_DEAD;
- spin_unlock_bh(&ifa->state_lock);
-
- if (state != INET6_IFADDR_STATE_DEAD) {
- __ipv6_ifa_notify(RTM_DELADDR, ifa);
- atomic_notifier_call_chain(&inet6addr_chain,
- NETDEV_DOWN, ifa);
- }
+ spin_lock_bh(&ifa->state_lock);
+ state = ifa->state;
+ ifa->state = INET6_IFADDR_STATE_DEAD;
+ spin_unlock_bh(&ifa->state_lock);
- in6_ifa_put(ifa);
- write_lock_bh(&idev->lock);
+ if (state != INET6_IFADDR_STATE_DEAD) {
+ __ipv6_ifa_notify(RTM_DELADDR, ifa);
+ atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifa);
}
- }
+ in6_ifa_put(ifa);
- list_splice(&keep_list, &idev->addr_list);
+ write_lock_bh(&idev->lock);
+ }
write_unlock_bh(&idev->lock);
addrconf_leave_solict(ifp->idev, &ifp->addr);
dst_hold(&ifp->rt->dst);
- if (ifp->state == INET6_IFADDR_STATE_DEAD &&
- ip6_del_rt(ifp->rt))
+ if (ip6_del_rt(ifp->rt))
dst_free(&ifp->rt->dst);
break;
}
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/slab.h>
+#include <linux/compat.h>
#include <net/protocol.h>
#include <linux/skbuff.h>
#include <net/sock.h>
}
}
+#ifdef CONFIG_COMPAT
+struct compat_sioc_sg_req6 {
+ struct sockaddr_in6 src;
+ struct sockaddr_in6 grp;
+ compat_ulong_t pktcnt;
+ compat_ulong_t bytecnt;
+ compat_ulong_t wrong_if;
+};
+
+struct compat_sioc_mif_req6 {
+ mifi_t mifi;
+ compat_ulong_t icount;
+ compat_ulong_t ocount;
+ compat_ulong_t ibytes;
+ compat_ulong_t obytes;
+};
+
+int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+{
+ struct compat_sioc_sg_req6 sr;
+ struct compat_sioc_mif_req6 vr;
+ struct mif_device *vif;
+ struct mfc6_cache *c;
+ struct net *net = sock_net(sk);
+ struct mr6_table *mrt;
+
+ mrt = ip6mr_get_table(net, raw6_sk(sk)->ip6mr_table ? : RT6_TABLE_DFLT);
+ if (mrt == NULL)
+ return -ENOENT;
+
+ switch (cmd) {
+ case SIOCGETMIFCNT_IN6:
+ if (copy_from_user(&vr, arg, sizeof(vr)))
+ return -EFAULT;
+ if (vr.mifi >= mrt->maxvif)
+ return -EINVAL;
+ read_lock(&mrt_lock);
+ vif = &mrt->vif6_table[vr.mifi];
+ if (MIF_EXISTS(mrt, vr.mifi)) {
+ vr.icount = vif->pkt_in;
+ vr.ocount = vif->pkt_out;
+ vr.ibytes = vif->bytes_in;
+ vr.obytes = vif->bytes_out;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &vr, sizeof(vr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ case SIOCGETSGCNT_IN6:
+ if (copy_from_user(&sr, arg, sizeof(sr)))
+ return -EFAULT;
+
+ read_lock(&mrt_lock);
+ c = ip6mr_cache_find(mrt, &sr.src.sin6_addr, &sr.grp.sin6_addr);
+ if (c) {
+ sr.pktcnt = c->mfc_un.res.pkt;
+ sr.bytecnt = c->mfc_un.res.bytes;
+ sr.wrong_if = c->mfc_un.res.wrong_if;
+ read_unlock(&mrt_lock);
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+ return -EFAULT;
+ return 0;
+ }
+ read_unlock(&mrt_lock);
+ return -EADDRNOTAVAIL;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+#endif
static inline int ip6mr_forward2_finish(struct sk_buff *skb)
{
#include <linux/netfilter.h>
#include <linux/netfilter_ipv6.h>
#include <linux/skbuff.h>
+#include <linux/compat.h>
#include <asm/uaccess.h>
#include <asm/ioctls.h>
}
}
+#ifdef CONFIG_COMPAT
+static int compat_rawv6_ioctl(struct sock *sk, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case SIOCOUTQ:
+ case SIOCINQ:
+ return -ENOIOCTLCMD;
+ default:
+#ifdef CONFIG_IPV6_MROUTE
+ return ip6mr_compat_ioctl(sk, cmd, compat_ptr(arg));
+#else
+ return -ENOIOCTLCMD;
+#endif
+ }
+}
+#endif
+
static void rawv6_close(struct sock *sk, long timeout)
{
if (inet_sk(sk)->inet_num == IPPROTO_RAW)
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_rawv6_setsockopt,
.compat_getsockopt = compat_rawv6_getsockopt,
+ .compat_ioctl = compat_rawv6_ioctl,
#endif
};
#define RT6_TRACE(x...) do { ; } while (0)
#endif
-#define CLONE_OFFLINK_ROUTE 0
-
static struct rt6_info * ip6_rt_copy(struct rt6_info *ort);
static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie);
static unsigned int ip6_default_advmss(const struct dst_entry *dst);
.local_out = __ip6_local_out,
};
+static unsigned int ip6_blackhole_default_mtu(const struct dst_entry *dst)
+{
+ return 0;
+}
+
static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu)
{
}
.protocol = cpu_to_be16(ETH_P_IPV6),
.destroy = ip6_dst_destroy,
.check = ip6_dst_check,
+ .default_mtu = ip6_blackhole_default_mtu,
+ .default_advmss = ip6_default_advmss,
.update_pmtu = ip6_rt_blackhole_update_pmtu,
};
in6_dev_put(idev);
}
if (peer) {
- BUG_ON(!(rt->rt6i_flags & RTF_CACHE));
rt->rt6i_peer = NULL;
inet_putpeer(peer);
}
{
struct inet_peer *peer;
- if (WARN_ON(!(rt->rt6i_flags & RTF_CACHE)))
- return;
-
peer = inet_getpeer_v6(&rt->rt6i_dst.addr, create);
if (peer && cmpxchg(&rt->rt6i_peer, NULL, peer) != NULL)
inet_putpeer(peer);
if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP))
nrt = rt6_alloc_cow(rt, &fl->fl6_dst, &fl->fl6_src);
- else {
-#if CLONE_OFFLINK_ROUTE
+ else
nrt = rt6_alloc_clone(rt, &fl->fl6_dst);
-#else
- goto out2;
-#endif
- }
dst_release(&rt->dst);
rt = nrt ? : net->ipv6.ip6_null_entry;
#include <net/addrconf.h>
#include <net/inet_frag.h>
+static struct ctl_table empty[1];
+
static ctl_table ipv6_table_template[] = {
{
.procname = "route",
.mode = 0644,
.proc_handler = proc_dointvec
},
+ {
+ .procname = "neigh",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = empty,
+ },
{ }
};
int ipv6_static_sysctl_register(void)
{
- static struct ctl_table empty[1];
ip6_base = register_sysctl_paths(net_ipv6_ctl_path, empty);
if (ip6_base == NULL)
return -ENOMEM;
if (!xdst->u.rt6.rt6i_idev)
return -ENODEV;
+ xdst->u.rt6.rt6i_peer = rt->rt6i_peer;
+ if (rt->rt6i_peer)
+ atomic_inc(&rt->rt6i_peer->refcnt);
+
/* Sheit... I remember I did this right. Apparently,
* it was magically lost, so this code needs audit */
xdst->u.rt6.rt6i_flags = rt->rt6i_flags & (RTF_ANYCAST |
if (likely(xdst->u.rt6.rt6i_idev))
in6_dev_put(xdst->u.rt6.rt6i_idev);
+ if (likely(xdst->u.rt6.rt6i_peer))
+ inet_putpeer(xdst->u.rt6.rt6i_peer);
xfrm_dst_destroy(xdst);
}
def_bool n
config MAC80211_RC_PID
- bool "PID controller based rate control algorithm" if EMBEDDED
+ bool "PID controller based rate control algorithm" if EXPERT
select MAC80211_HAS_RC
---help---
This option enables a TX rate control algorithm for
rate.
config MAC80211_RC_MINSTREL
- bool "Minstrel" if EMBEDDED
+ bool "Minstrel" if EXPERT
select MAC80211_HAS_RC
default y
---help---
This option enables the 'minstrel' TX rate control algorithm
config MAC80211_RC_MINSTREL_HT
- bool "Minstrel 802.11n support" if EMBEDDED
+ bool "Minstrel 802.11n support" if EXPERT
depends on MAC80211_RC_MINSTREL
default y
---help---
struct ieee80211_mgmt *mgmt,
size_t len)
{
- struct ieee80211_hw *hw = &local->hw;
- struct ieee80211_conf *conf = &hw->conf;
struct tid_ampdu_rx *tid_agg_rx;
u16 capab, tid, timeout, ba_policy, buf_size, start_seq_num, status;
u8 dialog_token;
goto end_no_lock;
}
/* determine default buffer size */
- if (buf_size == 0) {
- struct ieee80211_supported_band *sband;
-
- sband = local->hw.wiphy->bands[conf->channel->band];
- buf_size = IEEE80211_MIN_AMPDU_BUF;
- buf_size = buf_size << sband->ht_cap.ampdu_factor;
- }
+ if (buf_size == 0)
+ buf_size = IEEE80211_MAX_AMPDU_BUF;
/* examine state machine */
*cookie ^= 2;
IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_TX_OFFCHAN;
local->hw_roc_skb = skb;
+ local->hw_roc_skb_for_status = skb;
mutex_unlock(&local->mtx);
return 0;
if (ret == 0) {
kfree_skb(local->hw_roc_skb);
local->hw_roc_skb = NULL;
+ local->hw_roc_skb_for_status = NULL;
}
mutex_unlock(&local->mtx);
struct ieee80211_channel *hw_roc_channel;
struct net_device *hw_roc_dev;
- struct sk_buff *hw_roc_skb;
+ struct sk_buff *hw_roc_skb, *hw_roc_skb_for_status;
struct work_struct hw_roc_start, hw_roc_done;
enum nl80211_channel_type hw_roc_channel_type;
unsigned int hw_roc_duration;
MODULE_PARM_DESC(ieee80211_disable_40mhz_24ghz,
"Disable 40MHz support in the 2.4GHz band");
+static struct lock_class_key ieee80211_rx_skb_queue_class;
+
void ieee80211_configure_filter(struct ieee80211_local *local)
{
u64 mc;
spin_lock_init(&local->filter_lock);
spin_lock_init(&local->queue_stop_reason_lock);
- skb_queue_head_init(&local->rx_skb_queue);
+ /*
+ * The rx_skb_queue is only accessed from tasklets,
+ * but other SKB queues are used from within IRQ
+ * context. Therefore, this one needs a different
+ * locking class so our direct, non-irq-safe use of
+ * the queue's lock doesn't throw lockdep warnings.
+ */
+ skb_queue_head_init_class(&local->rx_skb_queue,
+ &ieee80211_rx_skb_queue_class);
INIT_DELAYED_WORK(&local->scan_work, ieee80211_scan_work);
if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) {
struct ieee80211_work *wk;
+ u64 cookie = (unsigned long)skb;
rcu_read_lock();
list_for_each_entry_rcu(wk, &local->work_list, list) {
break;
}
rcu_read_unlock();
+ if (local->hw_roc_skb_for_status == skb) {
+ cookie = local->hw_roc_cookie ^ 2;
+ local->hw_roc_skb_for_status = NULL;
+ }
cfg80211_mgmt_tx_status(
- skb->dev, (unsigned long) skb, skb->data, skb->len,
+ skb->dev, cookie, skb->data, skb->len,
!!(info->flags & IEEE80211_TX_STAT_ACK), GFP_ATOMIC);
}
skb_orphan(skb);
}
- if (skb_header_cloned(skb))
+ if (skb_cloned(skb))
I802_DEBUG_INC(local->tx_expand_skb_head_cloned);
else if (head_need || tail_need)
I802_DEBUG_INC(local->tx_expand_skb_head);
sdata = vif_to_sdata(vif);
+ if (!ieee80211_sdata_running(sdata))
+ goto out;
+
if (tim_offset)
*tim_offset = 0;
if (tim_length)
switch (sdata->vif.type) {
case NL80211_IFTYPE_STATION:
changed |= BSS_CHANGED_ASSOC;
+ mutex_lock(&sdata->u.mgd.mtx);
ieee80211_bss_info_change_notify(sdata, changed);
+ mutex_unlock(&sdata->u.mgd.mtx);
break;
case NL80211_IFTYPE_ADHOC:
changed |= BSS_CHANGED_IBSS;
/* Optimization: we don't need to hold module
reference here, since function can't sleep. --RR */
+repeat:
verdict = elem->hook(hook, skb, indev, outdev, okfn);
if (verdict != NF_ACCEPT) {
#ifdef CONFIG_NETFILTER_DEBUG
#endif
if (verdict != NF_REPEAT)
return verdict;
- *i = (*i)->prev;
+ goto repeat;
}
}
return NF_ACCEPT;
if (set_reply && !test_and_set_bit(IPS_SEEN_REPLY_BIT, &ct->status))
nf_conntrack_event_cache(IPCT_REPLY, ct);
out:
- if (tmpl)
- nf_ct_put(tmpl);
+ if (tmpl) {
+ /* Special case: we have to repeat this hook, assign the
+ * template again to this packet. We assume that this packet
+ * has no conntrack assigned. This is used by nf_ct_tcp. */
+ if (ret == NF_REPEAT)
+ skb->nfct = (struct nf_conntrack *)tmpl;
+ else
+ nf_ct_put(tmpl);
+ }
return ret;
}
* this does not harm and it happens very rarely. */
unsigned long missed = e->missed;
+ if (!((events | missed) & e->ctmask))
+ goto out_unlock;
+
ret = notify->fcn(events | missed, &item);
if (unlikely(ret < 0 || missed)) {
spin_lock_bh(&ct->lock);
if (ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).pid,
cb->nlh->nlmsg_seq,
IPCTNL_MSG_CT_NEW, ct) < 0) {
+ nf_conntrack_get(&ct->ct_general);
cb->args[1] = (unsigned long)ct;
goto out;
}
u16 zone;
int err;
- if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP)
+ if (nlh->nlmsg_flags & NLM_F_DUMP)
return netlink_dump_start(ctnl, skb, nlh, ctnetlink_dump_table,
ctnetlink_done);
u16 zone;
int err;
- if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+ if (nlh->nlmsg_flags & NLM_F_DUMP) {
return netlink_dump_start(ctnl, skb, nlh,
ctnetlink_exp_dump_table,
ctnetlink_exp_done);
}
static inline int
-iprange_ipv6_sub(const struct in6_addr *a, const struct in6_addr *b)
+iprange_ipv6_lt(const struct in6_addr *a, const struct in6_addr *b)
{
unsigned int i;
- int r;
for (i = 0; i < 4; ++i) {
- r = ntohl(a->s6_addr32[i]) - ntohl(b->s6_addr32[i]);
- if (r != 0)
- return r;
+ if (a->s6_addr32[i] != b->s6_addr32[i])
+ return ntohl(a->s6_addr32[i]) < ntohl(b->s6_addr32[i]);
}
return 0;
bool m;
if (info->flags & IPRANGE_SRC) {
- m = iprange_ipv6_sub(&iph->saddr, &info->src_min.in6) < 0;
- m |= iprange_ipv6_sub(&iph->saddr, &info->src_max.in6) > 0;
+ m = iprange_ipv6_lt(&iph->saddr, &info->src_min.in6);
+ m |= iprange_ipv6_lt(&info->src_max.in6, &iph->saddr);
m ^= !!(info->flags & IPRANGE_SRC_INV);
if (m)
return false;
}
if (info->flags & IPRANGE_DST) {
- m = iprange_ipv6_sub(&iph->daddr, &info->dst_min.in6) < 0;
- m |= iprange_ipv6_sub(&iph->daddr, &info->dst_max.in6) > 0;
+ m = iprange_ipv6_lt(&iph->daddr, &info->dst_min.in6);
+ m |= iprange_ipv6_lt(&info->dst_max.in6, &iph->daddr);
m ^= !!(info->flags & IPRANGE_DST_INV);
if (m)
return false;
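
The rewrite above replaces iprange_ipv6_sub()'s signed-subtraction comparison with an explicit per-word less-than, because the difference of two 32-bit ntohl() values does not fit in an int and its sign can report the wrong ordering. A standalone illustration of the failure mode (plain userspace C, not the xt_iprange code):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint32_t a = htonl(0x00000001);		/* a numerically small address */
	uint32_t b = htonl(0xff000000);		/* a numerically large address */

	/* Old style: the sign of the subtraction decides the ordering,
	 * but 1 - 0xff000000 wraps around to a positive value. */
	int r = ntohl(a) - ntohl(b);

	/* New style: compare the host-order words directly. */
	int lt = ntohl(a) < ntohl(b);

	printf("subtraction claims a > b: %d (r = %d)\n", r > 0, r);
	printf("comparison  says  a < b: %d\n", lt);
	return 0;
}
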
security_netlink_recv(skb, CAP_NET_ADMIN))
return -EPERM;
- if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+ if (nlh->nlmsg_flags & NLM_F_DUMP) {
if (ops->dumpit == NULL)
return -EOPNOTSUPP;
default y
config RFKILL_INPUT
- bool "RF switch input support" if EMBEDDED
+ bool "RF switch input support" if EXPERT
depends on RFKILL
depends on INPUT = y || RFKILL = INPUT
- default y if !EMBEDDED
+ default y if !EXPERT
ret = qdisc_enqueue(skb, cl->q);
if (ret == NET_XMIT_SUCCESS) {
sch->q.qlen++;
- qdisc_bstats_update(sch, skb);
cbq_mark_toplevel(q, cl);
if (!cl->next_alive)
cbq_activate_class(cl);
ret = qdisc_enqueue(skb, cl->q);
if (ret == NET_XMIT_SUCCESS) {
sch->q.qlen++;
- qdisc_bstats_update(sch, skb);
if (!cl->next_alive)
cbq_activate_class(cl);
return 0;
skb = cbq_dequeue_1(sch);
if (skb) {
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
sch->flags &= ~TCQ_F_THROTTLED;
return skb;
}
bstats_update(&cl->bstats, skb);
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
return err;
skb = qdisc_dequeue_peeked(cl->qdisc);
if (cl->qdisc->q.qlen == 0)
list_del(&cl->alist);
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
return skb;
}
return err;
}
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
return NET_XMIT_SUCCESS;
if (skb == NULL)
return NULL;
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
index = skb->tc_index & (p->indices - 1);
static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc* sch)
{
- struct sk_buff *skb_head;
struct fifo_sched_data *q = qdisc_priv(sch);
if (likely(skb_queue_len(&sch->q) < q->limit))
return qdisc_enqueue_tail(skb, sch);
/* queue full, remove one skb to fulfill the limit */
- skb_head = qdisc_dequeue_head(sch);
+ __qdisc_queue_drop_head(sch, &sch->q);
sch->qstats.drops++;
- kfree_skb(skb_head);
-
qdisc_enqueue_tail(skb, sch);
return NET_XMIT_CN;
set_active(cl, qdisc_pkt_len(skb));
bstats_update(&cl->bstats, skb);
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
return NET_XMIT_SUCCESS;
}
sch->flags &= ~TCQ_F_THROTTLED;
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
return skb;
}
sch->q.qlen++;
- qdisc_bstats_update(sch, skb);
return NET_XMIT_SUCCESS;
}
static struct sk_buff *htb_dequeue(struct Qdisc *sch)
{
- struct sk_buff *skb = NULL;
+ struct sk_buff *skb;
struct htb_sched *q = qdisc_priv(sch);
int level;
psched_time_t next_event;
/* try to dequeue direct packets as high prio (!) to minimize cpu work */
skb = __skb_dequeue(&q->direct_queue);
if (skb != NULL) {
+ok:
+ qdisc_bstats_update(sch, skb);
sch->flags &= ~TCQ_F_THROTTLED;
sch->q.qlen--;
return skb;
int prio = ffz(m);
m |= 1 << prio;
skb = htb_dequeue_tree(q, prio, level);
- if (likely(skb != NULL)) {
- sch->q.qlen--;
- sch->flags &= ~TCQ_F_THROTTLED;
- goto fin;
- }
+ if (likely(skb != NULL))
+ goto ok;
}
}
sch->qstats.overlimits++;
ret = qdisc_enqueue(skb, qdisc);
if (ret == NET_XMIT_SUCCESS) {
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
return NET_XMIT_SUCCESS;
}
qdisc = q->queues[q->curband];
skb = qdisc->dequeue(qdisc);
if (skb) {
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
return skb;
}
if (likely(ret == NET_XMIT_SUCCESS)) {
sch->q.qlen++;
- qdisc_bstats_update(sch, skb);
} else if (net_xmit_drop_count(ret)) {
sch->qstats.drops++;
}
skb->tstamp.tv64 = 0;
#endif
pr_debug("netem_dequeue: return skb=%p\n", skb);
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
return skb;
}
__skb_queue_after(list, skb, nskb);
sch->qstats.backlog += qdisc_pkt_len(nskb);
- qdisc_bstats_update(sch, nskb);
return NET_XMIT_SUCCESS;
}
ret = qdisc_enqueue(skb, qdisc);
if (ret == NET_XMIT_SUCCESS) {
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
return NET_XMIT_SUCCESS;
}
struct Qdisc *qdisc = q->queues[prio];
struct sk_buff *skb = qdisc->dequeue(qdisc);
if (skb) {
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
return skb;
}
ret = qdisc_enqueue(skb, child);
if (likely(ret == NET_XMIT_SUCCESS)) {
- qdisc_bstats_update(sch, skb);
sch->q.qlen++;
} else if (net_xmit_drop_count(ret)) {
q->stats.pdrop++;
struct Qdisc *child = q->qdisc;
skb = child->dequeue(child);
- if (skb)
+ if (skb) {
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
- else if (!red_is_idling(&q->parms))
- red_start_of_idle_period(&q->parms);
-
+ } else {
+ if (!red_is_idling(&q->parms))
+ red_start_of_idle_period(&q->parms);
+ }
return skb;
}
q->tail = slot;
slot->allot = q->scaled_quantum;
}
- if (++sch->q.qlen <= q->limit) {
- qdisc_bstats_update(sch, skb);
+ if (++sch->q.qlen <= q->limit)
return NET_XMIT_SUCCESS;
- }
sfq_drop(sch);
return NET_XMIT_CN;
}
skb = slot_dequeue_head(slot);
sfq_dec(q, a);
+ qdisc_bstats_update(sch, skb);
sch->q.qlen--;
sch->qstats.backlog -= qdisc_pkt_len(skb);
}
sch->q.qlen++;
- qdisc_bstats_update(sch, skb);
return NET_XMIT_SUCCESS;
}
q->ptokens = ptoks;
sch->q.qlen--;
sch->flags &= ~TCQ_F_THROTTLED;
+ qdisc_bstats_update(sch, skb);
return skb;
}
if (q->q.qlen < dev->tx_queue_len) {
__skb_queue_tail(&q->q, skb);
- qdisc_bstats_update(sch, skb);
return NET_XMIT_SUCCESS;
}
dat->m->slaves = sch;
netif_wake_queue(m);
}
+ } else {
+ qdisc_bstats_update(sch, skb);
}
sch->q.qlen = dat->q.qlen + dat_queue->qdisc->q.qlen;
return skb;
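
The sch_* hunks above consistently move qdisc_bstats_update() from the enqueue paths into the dequeue paths, so a qdisc's byte and packet counters describe traffic actually handed onwards rather than traffic merely accepted into (and possibly later dropped from) the queue. A toy model of that accounting choice, not kernel code:

#include <stdio.h>

struct toy_qdisc {
	unsigned int pkt[8], head, tail, count, limit;
	unsigned long tx_packets, tx_bytes;
};

static int toy_enqueue(struct toy_qdisc *q, unsigned int bytes)
{
	if (q->count == q->limit)
		return -1;			/* dropped: never counted */
	q->pkt[q->tail++ % 8] = bytes;
	q->count++;
	return 0;
}

static int toy_dequeue(struct toy_qdisc *q)
{
	unsigned int bytes;

	if (!q->count)
		return -1;
	bytes = q->pkt[q->head++ % 8];
	q->count--;
	q->tx_packets++;			/* stats bumped at dequeue time */
	q->tx_bytes += bytes;
	return (int)bytes;
}

int main(void)
{
	struct toy_qdisc q = { .limit = 2 };

	toy_enqueue(&q, 1500);
	toy_enqueue(&q, 1500);
	toy_enqueue(&q, 1500);			/* over the limit, dropped */
	toy_dequeue(&q);

	printf("sent %lu packet(s) / %lu bytes, %u still queued\n",
	       q.tx_packets, q.tx_bytes, q.count);
	return 0;
}
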
retval = sctp_setsockopt_peer_addr_params(sk, optval, optlen);
break;
- case SCTP_DELAYED_ACK:
+ case SCTP_DELAYED_SACK:
retval = sctp_setsockopt_delayed_ack(sk, optval, optlen);
break;
case SCTP_PARTIAL_DELIVERY_POINT:
retval = sctp_getsockopt_peer_addr_params(sk, len, optval,
optlen);
break;
- case SCTP_DELAYED_ACK:
+ case SCTP_DELAYED_SACK:
retval = sctp_getsockopt_delayed_ack(sk, len, optval,
optlen);
break;
*/
static void svc_bc_sock_free(struct svc_xprt *xprt)
{
- if (xprt) {
- kfree(xprt->xpt_bc_sid);
+ if (xprt)
kfree(container_of(xprt, struct svc_sock, sk_xprt));
- }
}
#endif /* CONFIG_NFS_V4_1 */
If unsure, say N.
config CFG80211_INTERNAL_REGDB
- bool "use statically compiled regulatory rules database" if EMBEDDED
+ bool "use statically compiled regulatory rules database" if EXPERT
default n
depends on CFG80211
---help---
#include <net/sock.h>
#include <net/x25.h>
-/*
- * Parse a set of facilities into the facilities structures. Unrecognised
- * facilities are written to the debug log file.
+/**
+ * x25_parse_facilities - Parse facilities from skb into the facilities structs
+ *
+ * @skb: sk_buff to parse
+ * @facilities: Regular facilities, updated as facilities are found
+ * @dte_facs: ITU DTE facilities, updated as DTE facilities are found
+ * @vc_fac_mask: mask is updated with all facilities found
+ *
+ * Return codes:
+ * -1 - Parsing error, caller should drop call and clean up
+ * 0 - Parse OK, this skb has no facilities
+ * >0 - Parse OK, returns the length of the facilities header
+ *
*/
int x25_parse_facilities(struct sk_buff *skb, struct x25_facilities *facilities,
struct x25_dte_facilities *dte_facs, unsigned long *vc_fac_mask)
switch (*p & X25_FAC_CLASS_MASK) {
case X25_FAC_CLASS_A:
if (len < 2)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_REVERSE:
if((p[1] & 0x81) == 0x81) {
break;
case X25_FAC_CLASS_B:
if (len < 3)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_PACKET_SIZE:
facilities->pacsize_in = p[1];
break;
case X25_FAC_CLASS_C:
if (len < 4)
- return 0;
+ return -1;
printk(KERN_DEBUG "X.25: unknown facility %02X, "
"values %02X, %02X, %02X\n",
p[0], p[1], p[2], p[3]);
break;
case X25_FAC_CLASS_D:
if (len < p[1] + 2)
- return 0;
+ return -1;
switch (*p) {
case X25_FAC_CALLING_AE:
if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ return -1;
dte_facs->calling_len = p[2];
memcpy(dte_facs->calling_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLING_AE;
break;
case X25_FAC_CALLED_AE:
if (p[1] > X25_MAX_DTE_FACIL_LEN || p[1] <= 1)
- return 0;
+ return -1;
dte_facs->called_len = p[2];
memcpy(dte_facs->called_ae, &p[3], p[1] - 1);
*vc_fac_mask |= X25_MASK_CALLED_AE;
{
struct x25_address source_addr, dest_addr;
int len;
+ struct x25_sock *x25 = x25_sk(sk);
switch (frametype) {
case X25_CALL_ACCEPTED: {
- struct x25_sock *x25 = x25_sk(sk);
x25_stop_timer(sk);
x25->condition = 0x00;
&dest_addr);
if (len > 0)
skb_pull(skb, len);
+ else if (len < 0)
+ goto out_clear;
len = x25_parse_facilities(skb, &x25->facilities,
&x25->dte_facilities,
&x25->vc_facil_mask);
if (len > 0)
skb_pull(skb, len);
- else
- return -1;
+ else if (len < 0)
+ goto out_clear;
/*
* Copy any Call User Data.
*/
}
return 0;
+
+out_clear:
+ x25_write_internal(sk, X25_CLEAR_REQUEST);
+ x25->state = X25_STATE_2;
+ x25_start_t23timer(sk);
+ return 0;
}
/*
write_lock_bh(&x25_neigh_list_lock);
list_for_each_safe(entry, tmp, &x25_neigh_list) {
+ struct net_device *dev;
+
nb = list_entry(entry, struct x25_neigh, node);
+ dev = nb->dev;
__x25_remove_neigh(nb);
- dev_put(nb->dev);
+ dev_put(dev);
}
write_unlock_bh(&x25_neigh_list_lock);
}
default:
BUG();
}
- xdst = dst_alloc(dst_ops) ?: ERR_PTR(-ENOBUFS);
+ xdst = dst_alloc(dst_ops);
xfrm_policy_put_afinfo(afinfo);
- xdst->flo.ops = &xfrm_bundle_fc_ops;
+ if (likely(xdst))
+ xdst->flo.ops = &xfrm_bundle_fc_ops;
+ else
+ xdst = ERR_PTR(-ENOBUFS);
return xdst;
}
if ((type == (XFRM_MSG_GETSA - XFRM_MSG_BASE) ||
type == (XFRM_MSG_GETPOLICY - XFRM_MSG_BASE)) &&
- (nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP) {
+ (nlh->nlmsg_flags & NLM_F_DUMP)) {
if (link->dump == NULL)
return -EINVAL;
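
Several hunks in this patch (rtnetlink, inet_diag, ctnetlink, genetlink and the xfrm_user change just above) switch the dump test from "(flags & NLM_F_DUMP) == NLM_F_DUMP" to "flags & NLM_F_DUMP". NLM_F_DUMP is the composite NLM_F_ROOT | NLM_F_MATCH, so the two forms disagree exactly when a request sets only one of those bits; the relaxed form treats such requests as dump requests as well. A small worked example using the flag values from <linux/netlink.h>:

#include <stdio.h>

#define NLM_F_ROOT	0x100	/* values from <linux/netlink.h> */
#define NLM_F_MATCH	0x200
#define NLM_F_DUMP	(NLM_F_ROOT | NLM_F_MATCH)

int main(void)
{
	unsigned int flags[] = { 0, NLM_F_ROOT, NLM_F_MATCH, NLM_F_DUMP };

	for (int i = 0; i < 4; i++) {
		unsigned int f = flags[i];

		printf("flags=0x%03x  strict=%d  relaxed=%d\n", f,
		       (f & NLM_F_DUMP) == NLM_F_DUMP,	/* old test */
		       (f & NLM_F_DUMP) != 0);		/* new test */
	}
	return 0;
}
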
char *end = m + len;
char *p;
char s[PATH_MAX];
+ int first;
p = strchr(m, ':');
if (!p) {
clear_config();
+ first = 1;
while (m < end) {
while (m < end && (*m == ' ' || *m == '\\' || *m == '\n'))
m++;
if (strrcmp(s, "include/generated/autoconf.h") &&
strrcmp(s, "arch/um/include/uml-config.h") &&
strrcmp(s, ".ver")) {
- printf(" %s \\\n", s);
+ /*
+ * Do not output the first dependency (the
+ * source file), so that kbuild is not confused
+ * if a .c file is rewritten into .S or vice
+ * versa.
+ */
+ if (!first)
+ printf(" %s \\\n", s);
do_config_file(s);
}
+ first = 0;
m = p + 1;
}
printf("\n%s: $(deps_%s)\n\n", target, target);
fi
# Build header package
-find . -name Makefile -o -name Kconfig\* -o -name \*.pl > /tmp/files$$
-find arch/x86/include include scripts -type f >> /tmp/files$$
+(cd $srctree; find . -name Makefile -o -name Kconfig\* -o -name \*.pl > /tmp/files$$)
+(cd $srctree; find arch/$SRCARCH/include include scripts -type f >> /tmp/files$$)
(cd $objtree; find .config Module.symvers include scripts -type f >> /tmp/objfiles$$)
destdir=$kernel_headers_dir/usr/src/linux-headers-$version
mkdir -p "$destdir"
-tar -c -f - -T /tmp/files$$ | (cd $destdir; tar -xf -)
+(cd $srctree; tar -c -f - -T /tmp/files$$) | (cd $destdir; tar -xf -)
(cd $objtree; tar -c -f - -T /tmp/objfiles$$) | (cd $destdir; tar -xf -)
rm -f /tmp/files$$ /tmp/objfiles$$
arch=$(dpkg --print-architecture)
request_key_auth.o \
user_defined.o
-obj-$(CONFIG_TRUSTED_KEYS) += trusted_defined.o
-obj-$(CONFIG_ENCRYPTED_KEYS) += encrypted_defined.o
+obj-$(CONFIG_TRUSTED_KEYS) += trusted.o
+obj-$(CONFIG_ENCRYPTED_KEYS) += encrypted.o
obj-$(CONFIG_KEYS_COMPAT) += compat.o
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_SYSCTL) += sysctl.o
-/* compat.c: 32-bit compatibility syscall for 64-bit systems
+/* 32-bit compatibility syscall for 64-bit systems
*
* Copyright (C) 2004-5 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
#include <linux/compat.h>
#include "internal.h"
-/*****************************************************************************/
/*
- * the key control system call, 32-bit compatibility version for 64-bit archs
- * - this should only be called if the 64-bit arch uses weird pointers in
- * 32-bit mode or doesn't guarantee that the top 32-bits of the argument
- * registers on taking a 32-bit syscall are zero
- * - if you can, you should call sys_keyctl directly
+ * The key control system call, 32-bit compatibility version for 64-bit archs
+ *
+ * This should only be called if the 64-bit arch uses weird pointers in 32-bit
+ * mode or doesn't guarantee that the top 32-bits of the argument registers on
+ * taking a 32-bit syscall are zero. If you can, you should call sys_keyctl()
+ * directly.
*/
asmlinkage long compat_sys_keyctl(u32 option,
u32 arg2, u32 arg3, u32 arg4, u32 arg5)
default:
return -EOPNOTSUPP;
}
-
-} /* end compat_sys_keyctl() */
+}
#include <crypto/sha.h>
#include <crypto/aes.h>
-#include "encrypted_defined.h"
+#include "encrypted.h"
static const char KEY_TRUSTED_PREFIX[] = "trusted:";
static const char KEY_USER_PREFIX[] = "user:";
static time_t key_gc_new_timer;
/*
- * Schedule a garbage collection run
- * - precision isn't particularly important
+ * Schedule a garbage collection run.
+ * - time precision isn't particularly important
*/
void key_schedule_gc(time_t gc_at)
{
}
/*
- * Garbage collect pointers from a keyring
- * - return true if we altered the keyring
+ * Garbage collect pointers from a keyring.
+ *
+ * Return true if we altered the keyring.
*/
static bool key_gc_keyring(struct key *keyring, time_t limit)
__releases(key_serial_lock)
}
/*
- * Garbage collector for keys
- * - this involves scanning the keyrings for dead, expired and revoked keys
- * that have overstayed their welcome
+ * Garbage collector for keys. This involves scanning the keyrings for dead,
+ * expired and revoked keys that have overstayed their welcome
*/
static void key_garbage_collector(struct work_struct *work)
{
-/* internal.h: authentication token and access key management internal defs
+/* Authentication token and access key management internal defs
*
* Copyright (C) 2003-5, 2007 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
/*****************************************************************************/
/*
- * keep track of keys for a user
- * - this needs to be separate to user_struct to avoid a refcount-loop
- * (user_struct pins some keyrings which pin this struct)
- * - this also keeps track of keys under request from userspace for this UID
+ * Keep track of keys for a user.
+ *
+ * This needs to be separate to user_struct to avoid a refcount-loop
+ * (user_struct pins some keyrings which pin this struct).
+ *
+ * We also keep track of keys under request from userspace for this UID here.
*/
struct key_user {
struct rb_node node;
extern void key_user_put(struct key_user *user);
/*
- * key quota limits
+ * Key quota limits.
* - root has its own separate limits to everyone else
*/
extern unsigned key_quota_root_maxkeys;
extern int __key_link_begin(struct key *keyring,
const struct key_type *type,
const char *description,
- struct keyring_list **_prealloc);
+ unsigned long *_prealloc);
extern int __key_link_check_live_key(struct key *keyring, struct key *key);
extern void __key_link(struct key *keyring, struct key *key,
- struct keyring_list **_prealloc);
+ unsigned long *_prealloc);
extern void __key_link_end(struct key *keyring,
struct key_type *type,
- struct keyring_list *prealloc);
+ unsigned long prealloc);
extern key_ref_t __keyring_search_one(key_ref_t keyring_ref,
const struct key_type *type,
extern void keyring_gc(struct key *keyring, time_t limit);
extern void key_schedule_gc(time_t expiry_at);
-/*
- * check to see whether permission is granted to use a key in the desired way
- */
extern int key_task_permission(const key_ref_t key_ref,
const struct cred *cred,
key_perm_t perm);
+/*
+ * Check to see whether permission is granted to use a key in the desired way.
+ */
static inline int key_permission(const key_ref_t key_ref, key_perm_t perm)
{
return key_task_permission(key_ref, current_cred(), perm);
#define KEY_ALL 0x3f /* all the above permissions */
/*
- * request_key authorisation
+ * Authorisation record for request_key().
*/
struct request_key_auth {
struct key *target_key;
extern struct key *key_get_instantiation_authkey(key_serial_t target_id);
/*
- * keyctl functions
+ * keyctl() functions
*/
extern long keyctl_get_keyring_ID(key_serial_t, int);
extern long keyctl_join_session_keyring(const char __user *);
extern long keyctl_session_to_parent(void);
/*
- * debugging key validation
+ * Debugging key validation
*/
#ifdef KEY_DEBUGGING
extern void __key_check(const struct key *);
static void key_cleanup(struct work_struct *work);
static DECLARE_WORK(key_cleanup_task, key_cleanup);
-/* we serialise key instantiation and link */
+/* We serialise key instantiation and link */
DEFINE_MUTEX(key_construction_mutex);
-/* any key who's type gets unegistered will be re-typed to this */
+/* Any key whose type gets unregistered will be re-typed to this */
static struct key_type key_type_dead = {
.name = "dead",
};
}
#endif
-/*****************************************************************************/
/*
- * get the key quota record for a user, allocating a new record if one doesn't
- * already exist
+ * Get the key quota record for a user, allocating a new record if one doesn't
+ * already exist.
*/
struct key_user *key_user_lookup(uid_t uid, struct user_namespace *user_ns)
{
struct rb_node *parent = NULL;
struct rb_node **p;
- try_again:
+try_again:
p = &key_user_tree.rb_node;
spin_lock(&key_user_lock);
goto out;
/* okay - we found a user record for this UID */
- found:
+found:
atomic_inc(&user->usage);
spin_unlock(&key_user_lock);
kfree(candidate);
- out:
+out:
return user;
+}
-} /* end key_user_lookup() */
-
-/*****************************************************************************/
/*
- * dispose of a user structure
+ * Dispose of a user structure
*/
void key_user_put(struct key_user *user)
{
kfree(user);
}
+}
-} /* end key_user_put() */
-
-/*****************************************************************************/
/*
- * assign a key the next unique serial number
- * - these are assigned randomly to avoid security issues through covert
- * channel problems
+ * Allocate a serial number for a key. These are assigned randomly to avoid
+ * security issues through covert channel problems.
*/
static inline void key_alloc_serial(struct key *key)
{
if (key->serial < xkey->serial)
goto attempt_insertion;
}
+}
-} /* end key_alloc_serial() */
-
-/*****************************************************************************/
-/*
- * allocate a key of the specified type
- * - update the user's quota to reflect the existence of the key
- * - called from a key-type operation with key_types_sem read-locked by
- * key_create_or_update()
- * - this prevents unregistration of the key type
- * - upon return the key is as yet uninstantiated; the caller needs to either
- * instantiate the key or discard it before returning
+/**
+ * key_alloc - Allocate a key of the specified type.
+ * @type: The type of key to allocate.
+ * @desc: The key description to allow the key to be searched out.
+ * @uid: The owner of the new key.
+ * @gid: The group ID for the new key's group permissions.
+ * @cred: The credentials specifying UID namespace.
+ * @perm: The permissions mask of the new key.
+ * @flags: Flags specifying quota properties.
+ *
+ * Allocate a key of the specified type with the attributes given. The key is
+ * returned in an uninstantiated state and the caller needs to instantiate the
+ * key before returning.
+ *
+ * The user's key count quota is updated to reflect the creation of the key and
+ * the user's key data quota has the default for the key type reserved. The
+ * instantiation function should amend this as necessary. If insufficient
+ * quota is available, -EDQUOT will be returned.
+ *
+ * The LSM security modules can prevent a key being created, in which case
+ * -EACCES will be returned.
+ *
+ * Returns a pointer to the new key if successful and an error code otherwise.
+ *
+ * Note that the caller needs to ensure the key type isn't unregistered.
+ * Internally this can be done by locking key_types_sem. Externally, this can
+ * be done by either never unregistering the key type, or making sure
+ * key_alloc() calls don't race with module unloading.
*/
struct key *key_alloc(struct key_type *type, const char *desc,
uid_t uid, gid_t gid, const struct cred *cred,
key_user_put(user);
key = ERR_PTR(-EDQUOT);
goto error;
-
-} /* end key_alloc() */
-
+}
EXPORT_SYMBOL(key_alloc);
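
A hedged sketch of the call sequence the key_alloc() kernel-doc above describes, using the "user" key type purely as an example payload carrier; the function name, description string and payload are made up and error handling is abbreviated:

#include <linux/key.h>
#include <linux/cred.h>
#include <linux/err.h>
#include <keys/user-type.h>

static struct key *example_make_key(void)
{
	const struct cred *cred = current_cred();
	struct key *key;
	int ret;

	/* Quota is charged to current_uid(); the key comes back uninstantiated. */
	key = key_alloc(&key_type_user, "example:desc",
			current_uid(), current_gid(), cred,
			KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ,
			KEY_ALLOC_IN_QUOTA);
	if (IS_ERR(key))
		return key;

	/* Attach payload data; no destination keyring, no authorisation token. */
	ret = key_instantiate_and_link(key, "payload", 7, NULL, NULL);
	if (ret < 0) {
		key_put(key);
		return ERR_PTR(ret);
	}
	return key;
}
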
-/*****************************************************************************/
-/*
- * reserve an amount of quota for the key's payload
+/**
+ * key_payload_reserve - Adjust data quota reservation for the key's payload
+ * @key: The key to make the reservation for.
+ * @datalen: The amount of data payload the caller now wants.
+ *
+ * Adjust the amount of the owning user's key data quota that a key reserves.
+ * If the amount is increased, then -EDQUOT may be returned if there isn't
+ * enough free quota available.
+ *
+ * If successful, 0 is returned.
*/
int key_payload_reserve(struct key *key, size_t datalen)
{
key->datalen = datalen;
return ret;
-
-} /* end key_payload_reserve() */
-
+}
EXPORT_SYMBOL(key_payload_reserve);
-/*****************************************************************************/
/*
- * instantiate a key and link it into the target keyring atomically
- * - called with the target keyring's semaphore writelocked
+ * Instantiate a key and link it into the target keyring atomically. Must be
+ * called with the target keyring's semaphore writelocked. The target key's
+ * semaphore need not be locked as instantiation is serialised by
+ * key_construction_mutex.
*/
static int __key_instantiate_and_link(struct key *key,
const void *data,
size_t datalen,
struct key *keyring,
struct key *authkey,
- struct keyring_list **_prealloc)
+ unsigned long *_prealloc)
{
int ret, awaken;
wake_up_bit(&key->flags, KEY_FLAG_USER_CONSTRUCT);
return ret;
+}
-} /* end __key_instantiate_and_link() */
-
-/*****************************************************************************/
-/*
- * instantiate a key and link it into the target keyring atomically
+/**
+ * key_instantiate_and_link - Instantiate a key and link it into the keyring.
+ * @key: The key to instantiate.
+ * @data: The data to use to instantiate the key.
+ * @datalen: The length of @data.
+ * @keyring: Keyring to create a link in on success (or NULL).
+ * @authkey: The authorisation token permitting instantiation.
+ *
+ * Instantiate a key that's in the uninstantiated state using the provided data
+ * and, if successful, link it in to the destination keyring if one is
+ * supplied.
+ *
+ * If successful, 0 is returned, the authorisation token is revoked and anyone
+ * waiting for the key is woken up. If the key was already instantiated,
+ * -EBUSY will be returned.
*/
int key_instantiate_and_link(struct key *key,
const void *data,
struct key *keyring,
struct key *authkey)
{
- struct keyring_list *prealloc;
+ unsigned long prealloc;
int ret;
if (keyring) {
__key_link_end(keyring, key->type, prealloc);
return ret;
-
-} /* end key_instantiate_and_link() */
+}
EXPORT_SYMBOL(key_instantiate_and_link);
-/*****************************************************************************/
-/*
- * negatively instantiate a key and link it into the target keyring atomically
+/**
+ * key_negate_and_link - Negatively instantiate a key and link it into the keyring.
+ * @key: The key to instantiate.
+ * @timeout: The timeout on the negative key.
+ * @keyring: Keyring to create a link in on success (or NULL).
+ * @authkey: The authorisation token permitting instantiation.
+ *
+ * Negatively instantiate a key that's in the uninstantiated state and, if
+ * successful, set its timeout and link it in to the destination keyring if one
+ * is supplied. The key and any links to the key will be automatically garbage
+ * collected after the timeout expires.
+ *
+ * Negative keys are used to rate limit repeated request_key() calls by causing
+ * them to return -ENOKEY until the negative key expires.
+ *
+ * If successful, 0 is returned, the authorisation token is revoked and anyone
+ * waiting for the key is woken up. If the key was already instantiated,
+ * -EBUSY will be returned.
*/
int key_negate_and_link(struct key *key,
unsigned timeout,
struct key *keyring,
struct key *authkey)
{
- struct keyring_list *prealloc;
+ unsigned long prealloc;
struct timespec now;
int ret, awaken, link_ret = 0;
wake_up_bit(&key->flags, KEY_FLAG_USER_CONSTRUCT);
return ret == 0 ? link_ret : ret;
-
-} /* end key_negate_and_link() */
+}
EXPORT_SYMBOL(key_negate_and_link);
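
A hedged sketch of the negative-instantiation path key_negate_and_link() is documented for above: when an upcall cannot produce a payload, a negative result is parked for a while so repeated request_key() calls fail quickly with -ENOKEY instead of re-triggering the upcall. Everything apart from the two key_*_and_link() calls is made up for illustration:

#include <linux/key.h>

static int example_finish_request(struct key *key, struct key *dest_keyring,
				  struct key *authkey, const void *payload,
				  size_t plen)
{
	if (!payload)
		/* No data available: fail lookups with -ENOKEY for 60 seconds. */
		return key_negate_and_link(key, 60, dest_keyring, authkey);

	return key_instantiate_and_link(key, payload, plen, dest_keyring,
					authkey);
}
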
-/*****************************************************************************/
/*
- * do cleaning up in process context so that we don't have to disable
- * interrupts all over the place
+ * Garbage collect keys in process context so that we don't have to disable
+ * interrupts all over the place.
+ *
+ * key_put() schedules this rather than trying to do the cleanup itself, which
+ * means key_put() doesn't have to sleep.
*/
static void key_cleanup(struct work_struct *work)
{
struct rb_node *_n;
struct key *key;
- go_again:
+go_again:
/* look for a dead key in the tree */
spin_lock(&key_serial_lock);
spin_unlock(&key_serial_lock);
return;
- found_dead_key:
+found_dead_key:
/* we found a dead key - once we've removed it from the tree, we can
* drop the lock */
rb_erase(&key->serial_node, &key_serial_tree);
/* there may, of course, be more than one key to destroy */
goto go_again;
+}
-} /* end key_cleanup() */
-
-/*****************************************************************************/
-/*
- * dispose of a reference to a key
- * - when all the references are gone, we schedule the cleanup task to come and
- * pull it out of the tree in definite process context
+/**
+ * key_put - Discard a reference to a key.
+ * @key: The key to discard a reference from.
+ *
+ * Discard a reference to a key, and when all the references are gone, we
+ * schedule the cleanup task to come and pull it out of the tree in process
+ * context at some later time.
*/
void key_put(struct key *key)
{
if (atomic_dec_and_test(&key->usage))
schedule_work(&key_cleanup_task);
}
-
-} /* end key_put() */
-
+}
EXPORT_SYMBOL(key_put);
-/*****************************************************************************/
/*
- * find a key by its serial number
+ * Find a key by its serial number.
*/
struct key *key_lookup(key_serial_t id)
{
goto found;
}
- not_found:
+not_found:
key = ERR_PTR(-ENOKEY);
goto error;
- found:
+found:
/* pretend it doesn't exist if it is awaiting deletion */
if (atomic_read(&key->usage) == 0)
goto not_found;
*/
atomic_inc(&key->usage);
- error:
+error:
spin_unlock(&key_serial_lock);
return key;
+}
-} /* end key_lookup() */
-
-/*****************************************************************************/
/*
- * find and lock the specified key type against removal
- * - we return with the sem readlocked
+ * Find and lock the specified key type against removal.
+ *
+ * We return with the sem read-locked if successful. If the type wasn't
+ * available -ENOKEY is returned instead.
*/
struct key_type *key_type_lookup(const char *type)
{
up_read(&key_types_sem);
ktype = ERR_PTR(-ENOKEY);
- found_kernel_type:
+found_kernel_type:
return ktype;
+}
-} /* end key_type_lookup() */
-
-/*****************************************************************************/
/*
- * unlock a key type
+ * Unlock a key type locked by key_type_lookup().
*/
void key_type_put(struct key_type *ktype)
{
up_read(&key_types_sem);
+}
-} /* end key_type_put() */
-
-/*****************************************************************************/
/*
- * attempt to update an existing key
- * - the key has an incremented refcount
- * - we need to put the key if we get an error
+ * Attempt to update an existing key.
+ *
+ * The key is given to us with an incremented refcount that we need to discard
+ * if we get an error.
*/
static inline key_ref_t __key_update(key_ref_t key_ref,
const void *payload, size_t plen)
key_put(key);
key_ref = ERR_PTR(ret);
goto out;
+}
-} /* end __key_update() */
-
-/*****************************************************************************/
-/*
- * search the specified keyring for a key of the same description; if one is
- * found, update it, otherwise add a new one
+/**
+ * key_create_or_update - Update or create and instantiate a key.
+ * @keyring_ref: A pointer to the destination keyring with possession flag.
+ * @type: The type of key.
+ * @description: The searchable description for the key.
+ * @payload: The data to use to instantiate or update the key.
+ * @plen: The length of @payload.
+ * @perm: The permissions mask for a new key.
+ * @flags: The quota flags for a new key.
+ *
+ * Search the destination keyring for a key of the same description and if one
+ * is found, update it, otherwise create and instantiate a new one and create a
+ * link to it from that keyring.
+ *
+ * If perm is KEY_PERM_UNDEF then an appropriate key permissions mask will be
+ * concocted.
+ *
+ * Returns a pointer to the new key if successful, -ENODEV if the key type
+ * wasn't available, -ENOTDIR if the keyring wasn't a keyring, -EACCES if the
+ * caller isn't permitted to modify the keyring or the LSM did not permit
+ * creation of the key.
+ *
+ * On success, the possession flag from the keyring ref will be tacked on to
+ * the key ref before it is returned.
*/
key_ref_t key_create_or_update(key_ref_t keyring_ref,
const char *type,
key_perm_t perm,
unsigned long flags)
{
- struct keyring_list *prealloc;
+ unsigned long prealloc;
const struct cred *cred = current_cred();
struct key_type *ktype;
struct key *keyring, *key = NULL;
key_ref = __key_update(key_ref, payload, plen);
goto error;
-
-} /* end key_create_or_update() */
-
+}
EXPORT_SYMBOL(key_create_or_update);
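For illustration only, a minimal in-kernel sketch of calling key_create_or_update() might look like the following; the destination keyring, key description and payload are invented for the example, and KEY_PERM_UNDEF/KEY_ALLOC_IN_QUOTA are used as the documented defaults.

#include <linux/err.h>
#include <linux/key.h>

/* Hypothetical sketch: add or update a "user" key in a keyring we already
 * hold a reference to (and possess). */
static int example_store_blob(struct key *dest_keyring,
			      const void *blob, size_t blob_len)
{
	key_ref_t kref;

	kref = key_create_or_update(make_key_ref(dest_keyring, 1),
				    "user", "example:blob",
				    blob, blob_len,
				    KEY_PERM_UNDEF,	/* let an appropriate mask be concocted */
				    KEY_ALLOC_IN_QUOTA);
	if (IS_ERR(kref))
		return PTR_ERR(kref);

	key_ref_put(kref);
	return 0;
}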
-/*****************************************************************************/
-/*
- * update a key
+/**
+ * key_update - Update a key's contents.
+ * @key_ref: The pointer (plus possession flag) to the key.
+ * @payload: The data to be used to update the key.
+ * @plen: The length of @payload.
+ *
+ * Attempt to update the contents of a key with the given payload data. The
+ * caller must be granted Write permission on the key. Negative keys can be
+ * instantiated by this method.
+ *
+ * Returns 0 on success, -EACCES if not permitted and -EOPNOTSUPP if the key
+ * type does not support updating. The key type may return other errors.
*/
int key_update(key_ref_t key_ref, const void *payload, size_t plen)
{
error:
return ret;
-
-} /* end key_update() */
-
+}
EXPORT_SYMBOL(key_update);
-/*****************************************************************************/
-/*
- * revoke a key
+/**
+ * key_revoke - Revoke a key.
+ * @key: The key to be revoked.
+ *
+ * Mark a key as being revoked and ask the type to free up its resources. The
+ * revocation timeout is set and the key and all its links will be
+ * automatically garbage collected after key_gc_delay amount of time if they
+ * are not manually dealt with first.
*/
void key_revoke(struct key *key)
{
}
up_write(&key->sem);
-
-} /* end key_revoke() */
-
+}
EXPORT_SYMBOL(key_revoke);
-/*****************************************************************************/
-/*
- * register a type of key
+/**
+ * register_key_type - Register a type of key.
+ * @ktype: The new key type.
+ *
+ * Register a new key type.
+ *
+ * Returns 0 on success or -EEXIST if a type of this name already exists.
*/
int register_key_type(struct key_type *ktype)
{
list_add(&ktype->link, &key_types_list);
ret = 0;
- out:
+out:
up_write(&key_types_sem);
return ret;
-
-} /* end register_key_type() */
-
+}
EXPORT_SYMBOL(register_key_type);
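As a rough illustration of registering a key type, a module might do something like the sketch below; the "example" type, its trivial payload handling and the helper names are assumptions made up for the example, not anything defined by this patch.

#include <linux/key-type.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/string.h>

/* Hypothetical key type that only records its payload length. */
static int example_instantiate(struct key *key, const void *data, size_t datalen)
{
	key->payload.value = datalen;	/* no payload copy in this sketch */
	return 0;
}

static int example_match(const struct key *key, const void *description)
{
	return strcmp(key->description, description) == 0;
}

static void example_describe(const struct key *key, struct seq_file *m)
{
	seq_puts(m, key->description);
}

static struct key_type key_type_example = {
	.name		= "example",
	.instantiate	= example_instantiate,
	.match		= example_match,
	.describe	= example_describe,
};

static int __init example_init(void)
{
	return register_key_type(&key_type_example);
}

static void __exit example_exit(void)
{
	unregister_key_type(&key_type_example);
}

module_init(example_init);
module_exit(example_exit);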
-/*****************************************************************************/
-/*
- * unregister a type of key
+/**
+ * unregister_key_type - Unregister a type of key.
+ * @ktype: The key type.
+ *
+ * Unregister a key type and mark all the extant keys of this type as dead.
+ * Those keys of this type are then destroyed to get rid of their payloads and
+ * they and their links will be garbage collected as soon as possible.
*/
void unregister_key_type(struct key_type *ktype)
{
up_write(&key_types_sem);
key_schedule_gc(0);
-
-} /* end unregister_key_type() */
-
+}
EXPORT_SYMBOL(unregister_key_type);
-/*****************************************************************************/
/*
- * initialise the key management stuff
+ * Initialise the key management state.
*/
void __init key_init(void)
{
rb_insert_color(&root_key_user.node,
&key_user_tree);
-
-} /* end key_init() */
+}
-/* keyctl.c: userspace keyctl operations
+/* Userspace key control operations
*
* Copyright (C) 2004-5 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
int ret;
ret = strncpy_from_user(type, _type, len);
-
if (ret < 0)
return ret;
-
if (ret == 0 || ret >= len)
return -EINVAL;
-
if (type[0] == '.')
return -EPERM;
-
type[len - 1] = '\0';
-
return 0;
}
-/*****************************************************************************/
/*
- * extract the description of a new key from userspace and either add it as a
- * new key to the specified keyring or update a matching key in that keyring
- * - the keyring must be writable
- * - returns the new key's serial number
- * - implements add_key()
+ * Extract the description of a new key from userspace and either add it as a
+ * new key to the specified keyring or update a matching key in that keyring.
+ *
+ * The keyring must be writable so that we can attach the key to it.
+ *
+ * If successful, the new key's serial number is returned, otherwise an error
+ * code is returned.
*/
SYSCALL_DEFINE5(add_key, const char __user *, _type,
const char __user *, _description,
kfree(description);
error:
return ret;
+}
-} /* end sys_add_key() */
-
-/*****************************************************************************/
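From userspace this syscall is normally reached through the add_key(2) wrapper provided by libkeyutils; the snippet below is a small sketch under that assumption, with the key type, description and payload invented for the example.

#include <keyutils.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *secret = "hunter2";
	key_serial_t id;

	/* Add (or update) a "user" key in the session keyring. */
	id = add_key("user", "example:password",
		     secret, strlen(secret), KEY_SPEC_SESSION_KEYRING);
	if (id == -1) {
		perror("add_key");
		return 1;
	}
	printf("key %d added\n", id);
	return 0;
}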
/*
- * search the process keyrings for a matching key
- * - nested keyrings may also be searched if they have Search permission
- * - if a key is found, it will be attached to the destination keyring if
- * there's one specified
- * - /sbin/request-key will be invoked if _callout_info is non-NULL
- * - the _callout_info string will be passed to /sbin/request-key
- * - if the _callout_info string is empty, it will be rendered as "-"
- * - implements request_key()
+ * Search the process keyrings and keyring trees linked from those for a
+ * matching key. Keyrings must have appropriate Search permission to be
+ * searched.
+ *
+ * If a key is found, it will be attached to the destination keyring if there's
+ * one specified and the serial number of the key will be returned.
+ *
+ * If no key is found, /sbin/request-key will be invoked if _callout_info is
+ * non-NULL in an attempt to create a key. The _callout_info string will be
+ * passed to /sbin/request-key to aid with completing the request. If the
+ * _callout_info string is "" then it will be changed to "-".
*/
SYSCALL_DEFINE4(request_key, const char __user *, _type,
const char __user *, _description,
kfree(description);
error:
return ret;
+}
-} /* end sys_request_key() */
-
-/*****************************************************************************/
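The userspace counterpart is the request_key(2) wrapper in libkeyutils; a minimal sketch (with a made-up description, and no callout info so no upcall is attempted) might look like this:

#include <keyutils.h>
#include <stdio.h>

int main(void)
{
	/* Search the process keyrings and cache any hit in the session
	 * keyring; with NULL callout info, a miss returns ENOKEY rather
	 * than invoking /sbin/request-key. */
	key_serial_t id = request_key("user", "example:password",
				      NULL, KEY_SPEC_SESSION_KEYRING);
	if (id == -1) {
		perror("request_key");
		return 1;
	}
	printf("found key %d\n", id);
	return 0;
}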
/*
- * get the ID of the specified process keyring
- * - the keyring must have search permission to be found
- * - implements keyctl(KEYCTL_GET_KEYRING_ID)
+ * Get the ID of the specified process keyring.
+ *
+ * The requested keyring must have search permission to be found.
+ *
+ * If successful, the ID of the requested keyring will be returned.
*/
long keyctl_get_keyring_ID(key_serial_t id, int create)
{
key_ref_put(key_ref);
error:
return ret;
+}
-} /* end keyctl_get_keyring_ID() */
-
-/*****************************************************************************/
/*
- * join the session keyring
- * - implements keyctl(KEYCTL_JOIN_SESSION_KEYRING)
+ * Join a (named) session keyring.
+ *
+ * Create and join an anonymous session keyring or join a named session
+ * keyring, creating it if necessary. A named session keyring must have Search
+ * permission for it to be joined. Session keyrings without this permit will
+ * be skipped over.
+ *
+ * If successful, the ID of the joined session keyring will be returned.
*/
long keyctl_join_session_keyring(const char __user *_name)
{
error:
return ret;
+}
-} /* end keyctl_join_session_keyring() */
-
-/*****************************************************************************/
/*
- * update a key's data payload
- * - the key must be writable
- * - implements keyctl(KEYCTL_UPDATE)
+ * Update a key's data payload from the given data.
+ *
+ * The key must grant the caller Write permission and the key type must support
+ * updating for this to work. A negative key can be positively instantiated
+ * with this call.
+ *
+ * If successful, 0 will be returned. If the key type does not support
+ * updating, then -EOPNOTSUPP will be returned.
*/
long keyctl_update_key(key_serial_t id,
const void __user *_payload,
kfree(payload);
error:
return ret;
+}
-} /* end keyctl_update_key() */
-
-/*****************************************************************************/
/*
- * revoke a key
- * - the key must be writable
- * - implements keyctl(KEYCTL_REVOKE)
+ * Revoke a key.
+ *
+ * The key must grant the caller Write or Setattr permission for this to
+ * work. The key type should give up its quota claim when revoked. The key
+ * and any links to the key will be automatically garbage collected after a
+ * certain amount of time (/proc/sys/kernel/keys/gc_delay).
+ *
+ * If successful, 0 is returned.
*/
long keyctl_revoke_key(key_serial_t id)
{
key_ref_put(key_ref);
error:
return ret;
+}
-} /* end keyctl_revoke_key() */
-
-/*****************************************************************************/
/*
- * clear the specified process keyring
- * - the keyring must be writable
- * - implements keyctl(KEYCTL_CLEAR)
+ * Clear the specified keyring, creating an empty process keyring if one of the
+ * special keyring IDs is used.
+ *
+ * The keyring must grant the caller Write permission for this to work. If
+ * successful, 0 will be returned.
*/
long keyctl_keyring_clear(key_serial_t ringid)
{
key_ref_put(keyring_ref);
error:
return ret;
+}
-} /* end keyctl_keyring_clear() */
-
-/*****************************************************************************/
/*
- * link a key into a keyring
- * - the keyring must be writable
- * - the key must be linkable
- * - implements keyctl(KEYCTL_LINK)
+ * Create a link from a keyring to a key if there's no matching key in the
+ * keyring, otherwise replace the link to the matching key with a link to the
+ * new key.
+ *
+ * The key must grant the caller Link permission and the keyring must grant
+ * the caller Write permission. Furthermore, if an additional link is created,
+ * the keyring's quota will be extended.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_keyring_link(key_serial_t id, key_serial_t ringid)
{
key_ref_put(keyring_ref);
error:
return ret;
+}
-} /* end keyctl_keyring_link() */
-
-/*****************************************************************************/
/*
- * unlink the first attachment of a key from a keyring
- * - the keyring must be writable
- * - we don't need any permissions on the key
- * - implements keyctl(KEYCTL_UNLINK)
+ * Unlink a key from a keyring.
+ *
+ * The keyring must grant the caller Write permission for this to work; the key
+ * itself need not grant the caller anything. If the last link to a key is
+ * removed then that key will be scheduled for destruction.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_keyring_unlink(key_serial_t id, key_serial_t ringid)
{
key_ref_put(keyring_ref);
error:
return ret;
+}
-} /* end keyctl_keyring_unlink() */
-
-/*****************************************************************************/
/*
- * describe a user key
- * - the key must have view permission
- * - if there's a buffer, we place up to buflen bytes of data into it
- * - unless there's an error, we return the amount of description available,
- * irrespective of how much we may have copied
- * - the description is formatted thus:
+ * Return a description of a key to userspace.
+ *
+ * The key must grant the caller View permission for this to work.
+ *
+ * If there's a buffer, we place up to buflen bytes of data into it formatted
+ * in the following way:
+ *
* type;uid;gid;perm;description<NUL>
- * - implements keyctl(KEYCTL_DESCRIBE)
+ *
+ * If successful, we return the amount of description available, irrespective
+ * of how much we may have copied into the buffer.
*/
long keyctl_describe_key(key_serial_t keyid,
char __user *buffer,
key_ref_put(key_ref);
error:
return ret;
+}
-} /* end keyctl_describe_key() */
-
-/*****************************************************************************/
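A hedged userspace sketch of reading that "type;uid;gid;perm;description" string through the libkeyutils keyctl_describe() wrapper; the buffer size and helper name are arbitrary choices for the example.

#include <keyutils.h>
#include <stdio.h>

/* Hypothetical helper: print a key's description record. */
static int print_key_description(key_serial_t id)
{
	char buf[256];
	long n;

	n = keyctl_describe(id, buf, sizeof(buf));
	if (n == -1)
		return -1;
	if ((size_t)n > sizeof(buf))
		return -1;	/* full description didn't fit our buffer */
	printf("%s\n", buf);
	return 0;
}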
/*
- * search the specified keyring for a matching key
- * - the start keyring must be searchable
- * - nested keyrings may also be searched if they are searchable
- * - only keys with search permission may be found
- * - if a key is found, it will be attached to the destination keyring if
- * there's one specified
- * - implements keyctl(KEYCTL_SEARCH)
+ * Search the specified keyring and any keyrings it links to for a matching
+ * key. Only keyrings that grant the caller Search permission will be searched
+ * (this includes the starting keyring). Only keys with Search permission can
+ * be found.
+ *
+ * If successful, the found key will be linked to the destination keyring if
+ * supplied and the key has Link permission, and the found key ID will be
+ * returned.
*/
long keyctl_keyring_search(key_serial_t ringid,
const char __user *_type,
kfree(description);
error:
return ret;
+}
-} /* end keyctl_keyring_search() */
-
-/*****************************************************************************/
/*
- * read a user key's payload
- * - the keyring must be readable or the key must be searchable from the
- * process's keyrings
- * - if there's a buffer, we place up to buflen bytes of data into it
- * - unless there's an error, we return the amount of data in the key,
- * irrespective of how much we may have copied
- * - implements keyctl(KEYCTL_READ)
+ * Read a key's payload.
+ *
+ * The key must either grant the caller Read permission, or it must grant the
+ * caller Search permission when searched for from the process keyrings.
+ *
+ * If successful, we place up to buflen bytes of data into the buffer, if one
+ * is provided, and return the amount of data that is available in the key,
+ * irrespective of how much we copied into the buffer.
*/
long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
{
key_put(key);
error:
return ret;
+}
-} /* end keyctl_read_key() */
-
-/*****************************************************************************/
/*
- * change the ownership of a key
- * - the keyring owned by the changer
- * - if the uid or gid is -1, then that parameter is not changed
- * - implements keyctl(KEYCTL_CHOWN)
+ * Change the ownership of a key
+ *
+ * The key must grant the caller Setattr permission for this to work, though
+ * the key need not be fully instantiated yet. For the UID to be changed, or
+ * for the GID to be changed to a group the caller is not a member of, the
+ * caller must have sysadmin capability. If either uid or gid is -1 then that
+ * attribute is not changed.
+ *
+ * If the UID is to be changed, the new user must have sufficient quota to
+ * accept the key. The quota deduction will be moved from the old user to
+ * the new user should the attribute be changed.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_chown_key(key_serial_t id, uid_t uid, gid_t gid)
{
zapowner = newowner;
ret = -EDQUOT;
goto error_put;
+}
-} /* end keyctl_chown_key() */
-
-/*****************************************************************************/
/*
- * change the permission mask on a key
- * - the keyring owned by the changer
- * - implements keyctl(KEYCTL_SETPERM)
+ * Change the permission mask on a key.
+ *
+ * The key must grant the caller Setattr permission for this to work, though
+ * the key need not be fully instantiated yet. If the caller does not have
+ * sysadmin capability, it may only change the permission on keys that it owns.
*/
long keyctl_setperm_key(key_serial_t id, key_perm_t perm)
{
key_put(key);
error:
return ret;
-
-} /* end keyctl_setperm_key() */
+}
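For illustration, the chown and setperm operations are usually driven from userspace via the libkeyutils wrappers; the sketch below assumes those wrappers and the permission mask constants from keyutils.h, and the target uid is invented.

#include <keyutils.h>

/* Hypothetical sketch: restrict a key so that only the possessor has full
 * access and the owner may merely view/read it, then hand it to uid 1000
 * (which needs sufficient privilege and quota on the new owner). */
static int restrict_and_chown(key_serial_t id)
{
	if (keyctl_setperm(id, KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ) == -1)
		return -1;
	if (keyctl_chown(id, 1000, -1) == -1)	/* -1: leave the gid alone */
		return -1;
	return 0;
}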
/*
- * get the destination keyring for instantiation
+ * Get the destination keyring for instantiation and check that the caller has
+ * Write permission on it.
*/
static long get_instantiation_keyring(key_serial_t ringid,
struct request_key_auth *rka,
}
/*
- * change the request_key authorisation key on the current process
+ * Change the request_key authorisation key on the current process.
*/
static int keyctl_change_reqkey_auth(struct key *key)
{
return commit_creds(new);
}
-/*****************************************************************************/
/*
- * instantiate the key with the specified payload, and, if one is given, link
- * the key into the keyring
+ * Instantiate a key with the specified payload and link the key into the
+ * destination keyring if one is given.
+ *
+ * The caller must have the appropriate instantiation permit set for this to
+ * work (see keyctl_assume_authority). No other permissions are required.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_instantiate_key(key_serial_t id,
const void __user *_payload,
vfree(payload);
error:
return ret;
+}
-} /* end keyctl_instantiate_key() */
-
-/*****************************************************************************/
/*
- * negatively instantiate the key with the given timeout (in seconds), and, if
- * one is given, link the key into the keyring
+ * Negatively instantiate the key with the given timeout (in seconds) and link
+ * the key into the destination keyring if one is given.
+ *
+ * The caller must have the appropriate instantiation permit set for this to
+ * work (see keyctl_assume_authority). No other permissions are required.
+ *
+ * The key and any links to the key will be automatically garbage collected
+ * after the timeout expires.
+ *
+ * Negative keys are used to rate limit repeated request_key() calls by causing
+ * them to return -ENOKEY until the negative key expires.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_negate_key(key_serial_t id, unsigned timeout, key_serial_t ringid)
{
error:
return ret;
+}
-} /* end keyctl_negate_key() */
-
-/*****************************************************************************/
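A request-key handler that cannot obtain the key material would typically reach this via the libkeyutils keyctl_negate() wrapper; the fragment below is a sketch under that assumption, with an arbitrary 60-second timeout and no destination keyring.

#include <keyutils.h>
#include <stdlib.h>

/* Hypothetical /sbin/request-key handler fragment: negate the key under
 * construction for 60 seconds; a ring id of 0 means no keyring link. */
static void fail_request(key_serial_t key_under_construction)
{
	if (keyctl_negate(key_under_construction, 60, 0) == -1)
		exit(1);
}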
/*
- * set the default keyring in which request_key() will cache keys
- * - return the old setting
+ * Read or set the default keyring in which request_key() will cache keys and
+ * return the old setting.
+ *
+ * If a process keyring is specified then this will be created if it doesn't
+ * yet exist. The old setting will be returned if successful.
*/
long keyctl_set_reqkey_keyring(int reqkey_defl)
{
error:
abort_creds(new);
return ret;
+}
-} /* end keyctl_set_reqkey_keyring() */
-
-/*****************************************************************************/
/*
- * set or clear the timeout for a key
+ * Set or clear the timeout on a key.
+ *
+ * Either the key must grant the caller Setattr permission or else the caller
+ * must hold an instantiation authorisation token for the key.
+ *
+ * The timeout is either 0 to clear the timeout, or a number of seconds from
+ * the current time. The key and any links to the key will be automatically
+ * garbage collected after the timeout expires.
+ *
+ * If successful, 0 is returned.
*/
long keyctl_set_timeout(key_serial_t id, unsigned timeout)
{
ret = 0;
error:
return ret;
+}
-} /* end keyctl_set_timeout() */
-
-/*****************************************************************************/
/*
- * assume the authority to instantiate the specified key
+ * Assume (or clear) the authority to instantiate the specified key.
+ *
+ * This sets the authoritative token currently in force for key instantiation.
+ * This must be done for a key to be instantiated. It has the effect of making
+ * available all the keys from the caller of the request_key() that created a
+ * key to request_key() calls made by the caller of this function.
+ *
+ * The caller must have the instantiation key in their process keyrings with a
+ * Search permission grant available to the caller.
+ *
+ * If the ID given is 0, then the setting will be cleared and 0 returned.
+ *
+ * If the ID given matches an authorisation key, then that key will be
+ * set and its ID will be returned. The authorisation key can be read to get
+ * the callout information passed to request_key().
*/
long keyctl_assume_authority(key_serial_t id)
{
ret = authkey->serial;
error:
return ret;
-
-} /* end keyctl_assume_authority() */
+}
/*
- * get the security label of a key
- * - the key must grant us view permission
- * - if there's a buffer, we place up to buflen bytes of data into it
- * - unless there's an error, we return the amount of information available,
- * irrespective of how much we may have copied (including the terminal NUL)
- * - implements keyctl(KEYCTL_GET_SECURITY)
+ * Get a key's LSM security label.
+ *
+ * The key must grant the caller View permission for this to work.
+ *
+ * If there's a buffer, then up to buflen bytes of data will be placed into it.
+ *
+ * If successful, the amount of information available will be returned,
+ * irrespective of how much was copied (including the terminal NUL).
*/
long keyctl_get_security(key_serial_t keyid,
char __user *buffer,
}
/*
- * attempt to install the calling process's session keyring on the process's
- * parent process
- * - the keyring must exist and must grant us LINK permission
- * - implements keyctl(KEYCTL_SESSION_TO_PARENT)
+ * Attempt to install the calling process's session keyring on the process's
+ * parent process.
+ *
+ * The keyring must exist and must grant the caller LINK permission, and the
+ * parent process must be single-threaded and must have the same effective
+ * ownership as this process and mustn't be SUID/SGID.
+ *
+ * The keyring will be emplaced on the parent when it next resumes userspace.
+ *
+ * If successful, 0 will be returned.
*/
long keyctl_session_to_parent(void)
{
#endif /* !TIF_NOTIFY_RESUME */
}
-/*****************************************************************************/
/*
- * the key control system call
+ * The key control system call
*/
SYSCALL_DEFINE5(keyctl, int, option, unsigned long, arg2, unsigned long, arg3,
unsigned long, arg4, unsigned long, arg5)
default:
return -EOPNOTSUPP;
}
-
-} /* end sys_keyctl() */
+}
(keyring)->payload.subscriptions, \
rwsem_is_locked((struct rw_semaphore *)&(keyring)->sem)))
+#define KEY_LINK_FIXQUOTA 1UL
+
/*
- * when plumbing the depths of the key tree, this sets a hard limit set on how
- * deep we're willing to go
+ * When plumbing the depths of the key tree, this sets a hard limit on how
+ * deep we're willing to go.
*/
#define KEYRING_SEARCH_MAX_DEPTH 6
/*
- * we keep all named keyrings in a hash to speed looking them up
+ * We keep all named keyrings in a hash to speed looking them up.
*/
#define KEYRING_NAME_HASH_SIZE (1 << 5)
}
/*
- * the keyring type definition
+ * The keyring key type definition. Keyrings are simply keys of this type and
+ * can be treated as ordinary keys in addition to having their own special
+ * operations.
*/
static int keyring_instantiate(struct key *keyring,
const void *data, size_t datalen);
.describe = keyring_describe,
.read = keyring_read,
};
-
EXPORT_SYMBOL(key_type_keyring);
/*
- * semaphore to serialise link/link calls to prevent two link calls in parallel
- * introducing a cycle
+ * Semaphore to serialise link/link calls to prevent two link calls in parallel
+ * introducing a cycle.
*/
static DECLARE_RWSEM(keyring_serialise_link_sem);
-/*****************************************************************************/
/*
- * publish the name of a keyring so that it can be found by name (if it has
- * one)
+ * Publish the name of a keyring so that it can be found by name (if it has
+ * one).
*/
static void keyring_publish_name(struct key *keyring)
{
write_unlock(&keyring_name_lock);
}
+}
-} /* end keyring_publish_name() */
-
-/*****************************************************************************/
/*
- * initialise a keyring
- * - we object if we were given any data
+ * Initialise a keyring.
+ *
+ * Returns 0 on success, -EINVAL if given any data.
*/
static int keyring_instantiate(struct key *keyring,
const void *data, size_t datalen)
}
return ret;
+}
-} /* end keyring_instantiate() */
-
-/*****************************************************************************/
/*
- * match keyrings on their name
+ * Match keyrings on their name
*/
static int keyring_match(const struct key *keyring, const void *description)
{
return keyring->description &&
strcmp(keyring->description, description) == 0;
+}
-} /* end keyring_match() */
-
-/*****************************************************************************/
/*
- * dispose of the data dangling from the corpse of a keyring
+ * Clean up a keyring when it is destroyed. Unpublish its name if it had one
+ * and dispose of its data.
*/
static void keyring_destroy(struct key *keyring)
{
key_put(klist->keys[loop]);
kfree(klist);
}
+}
-} /* end keyring_destroy() */
-
-/*****************************************************************************/
/*
- * describe the keyring
+ * Describe a keyring for /proc.
*/
static void keyring_describe(const struct key *keyring, struct seq_file *m)
{
else
seq_puts(m, ": empty");
rcu_read_unlock();
+}
-} /* end keyring_describe() */
-
-/*****************************************************************************/
/*
- * read a list of key IDs from the keyring's contents
- * - the keyring's semaphore is read-locked
+ * Read a list of key IDs from the keyring's contents in binary form
+ *
+ * The keyring's semaphore is read-locked by the caller.
*/
static long keyring_read(const struct key *keyring,
char __user *buffer, size_t buflen)
error:
return ret;
+}
-} /* end keyring_read() */
-
-/*****************************************************************************/
/*
- * allocate a keyring and link into the destination keyring
+ * Allocate a keyring and link into the destination keyring.
*/
struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid,
const struct cred *cred, unsigned long flags,
}
return keyring;
+}
-} /* end keyring_alloc() */
-
-/*****************************************************************************/
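A rough sketch of how a caller inside security/keys might use keyring_alloc(); this assumes the sketch is built within the keys subsystem (where the internal declaration is visible) and that the trailing argument, not shown in this hunk, is the optional destination keyring to link the new keyring into.

#include <linux/cred.h>
#include <linux/err.h>
#include <linux/key.h>
#include "internal.h"	/* assumed: keyring_alloc() is a keys-internal interface */

/* Hypothetical sketch: create an unquota'd keyring owned by the current
 * credentials and don't link it anywhere (NULL destination). */
static struct key *example_make_keyring(void)
{
	const struct cred *cred = current_cred();

	return keyring_alloc("_example", cred->fsuid, cred->fsgid, cred,
			     KEY_ALLOC_NOT_IN_QUOTA, NULL);
}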
-/*
- * search the supplied keyring tree for a key that matches the criterion
- * - perform a breadth-then-depth search up to the prescribed limit
- * - we only find keys on which we have search permission
- * - we use the supplied match function to see if the description (or other
- * feature of interest) matches
- * - we rely on RCU to prevent the keyring lists from disappearing on us
- * - we return -EAGAIN if we didn't find any matching key
- * - we return -ENOKEY if we only found negative matching keys
- * - we propagate the possession attribute from the keyring ref to the key ref
+/**
+ * keyring_search_aux - Search a keyring tree for a key matching some criteria
+ * @keyring_ref: A pointer to the keyring with possession indicator.
+ * @cred: The credentials to use for permissions checks.
+ * @type: The type of key to search for.
+ * @description: Parameter for @match.
+ * @match: Function to rule on whether or not a key is the one required.
+ *
+ * Search the supplied keyring tree for a key that matches the criteria given.
+ * The root keyring and any linked keyrings must grant Search permission to the
+ * caller to be searchable and keys can only be found if they too grant Search
+ * to the caller. The possession flag on the root keyring pointer controls use
+ * of the possessor bits in permissions checking of the entire tree. In
+ * addition, the LSM gets to forbid keyring searches and key matches.
+ *
+ * The search is performed as a breadth-then-depth search up to the prescribed
+ * limit (KEYRING_SEARCH_MAX_DEPTH).
+ *
+ * Keys are matched to the type provided and are then filtered by the match
+ * function, which is given the description to use in any way it sees fit. The
+ * match function may use any attributes of a key that it wishes to
+ * determine the match. Normally the match function from the key type would be
+ * used.
+ *
+ * RCU is used to prevent the keyring key lists from disappearing without the
+ * need to take lots of locks.
+ *
+ * Returns a pointer to the found key and increments the key usage count if
+ * successful; -EAGAIN if no matching keys were found, or if expired or revoked
+ * keys were found; -ENOKEY if only negative keys were found; -ENOTDIR if the
+ * specified keyring wasn't a keyring.
+ *
+ * In the case of a successful return, the possession attribute from
+ * @keyring_ref is propagated to the returned key reference.
*/
key_ref_t keyring_search_aux(key_ref_t keyring_ref,
const struct cred *cred,
rcu_read_unlock();
error:
return key_ref;
+}
-} /* end keyring_search_aux() */
-
-/*****************************************************************************/
-/*
- * search the supplied keyring tree for a key that matches the criterion
- * - perform a breadth-then-depth search up to the prescribed limit
- * - we only find keys on which we have search permission
- * - we readlock the keyrings as we search down the tree
- * - we return -EAGAIN if we didn't find any matching key
- * - we return -ENOKEY if we only found negative matching keys
+/**
+ * keyring_search - Search the supplied keyring tree for a matching key
+ * @keyring: The root of the keyring tree to be searched.
+ * @type: The type of keyring we want to find.
+ * @description: The name of the keyring we want to find.
+ *
+ * As keyring_search_aux() above, but using the current task's credentials and
+ * type's default matching function.
*/
key_ref_t keyring_search(key_ref_t keyring,
struct key_type *type,
return keyring_search_aux(keyring, current->cred,
type, description, type->match);
-
-} /* end keyring_search() */
-
+}
EXPORT_SYMBOL(keyring_search);
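As an illustration of the simpler wrapper, an in-kernel caller might search a keyring it possesses for a "user" key like this; the keyring pointer and description are assumptions for the sketch.

#include <keys/user-type.h>
#include <linux/err.h>
#include <linux/key.h>

/* Hypothetical sketch: find a "user" key called "example:token" under a
 * keyring we possess, using the current task's credentials. */
static struct key *example_find_token(struct key *keyring)
{
	key_ref_t ref;

	ref = keyring_search(make_key_ref(keyring, 1),
			     &key_type_user, "example:token");
	if (IS_ERR(ref))
		return ERR_CAST(ref);

	return key_ref_to_ptr(ref);	/* caller must key_put() this */
}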
-/*****************************************************************************/
/*
- * search the given keyring only (no recursion)
- * - keyring must be locked by caller
- * - caller must guarantee that the keyring is a keyring
+ * Search the given keyring only (no recursion).
+ *
+ * The caller must guarantee that the keyring is a keyring and that the
+ * permission is granted to search the keyring as no check is made here.
+ *
+ * RCU is used to make it unnecessary to lock the keyring key list here.
+ *
+ * Returns a pointer to the found key with usage count incremented if
+ * successful and returns -ENOKEY if not found. Revoked keys and keys not
+ * providing the requested permission are skipped over.
+ *
+ * If successful, the possession indicator is propagated from the keyring ref
+ * to the returned key reference.
*/
key_ref_t __keyring_search_one(key_ref_t keyring_ref,
const struct key_type *ktype,
atomic_inc(&key->usage);
rcu_read_unlock();
return make_key_ref(key, possessed);
+}
-} /* end __keyring_search_one() */
-
-/*****************************************************************************/
/*
- * find a keyring with the specified name
- * - all named keyrings are searched
- * - normally only finds keyrings with search permission for the current process
+ * Find a keyring with the specified name.
+ *
+ * All named keyrings in the current user namespace are searched, provided they
+ * grant Search permission directly to the caller (unless this check is
+ * skipped). Keyrings whose usage counts have reached zero or which have been
+ * revoked are skipped.
+ *
+ * Returns a pointer to the keyring with the keyring's refcount having been
+ * incremented on success. -ENOKEY is returned if a key could not be found.
*/
struct key *find_keyring_by_name(const char *name, bool skip_perm_check)
{
out:
read_unlock(&keyring_name_lock);
return keyring;
+}
-} /* end find_keyring_by_name() */
-
-/*****************************************************************************/
/*
- * see if a cycle will will be created by inserting acyclic tree B in acyclic
- * tree A at the topmost level (ie: as a direct child of A)
- * - since we are adding B to A at the top level, checking for cycles should
- * just be a matter of seeing if node A is somewhere in tree B
+ * See if a cycle will be created by inserting acyclic tree B in acyclic
+ * tree A at the topmost level (ie: as a direct child of A).
+ *
+ * Since we are adding B to A at the top level, checking for cycles should just
+ * be a matter of seeing if node A is somewhere in tree B.
*/
static int keyring_detect_cycle(struct key *A, struct key *B)
{
cycle_detected:
ret = -EDEADLK;
goto error;
-
-} /* end keyring_detect_cycle() */
+}
/*
- * dispose of a keyring list after the RCU grace period, freeing the unlinked
+ * Dispose of a keyring list after the RCU grace period, freeing the unlinked
* key
*/
static void keyring_unlink_rcu_disposal(struct rcu_head *rcu)
}
/*
- * preallocate memory so that a key can be linked into to a keyring
+ * Preallocate memory so that a key can be linked into a keyring.
*/
int __key_link_begin(struct key *keyring, const struct key_type *type,
- const char *description,
- struct keyring_list **_prealloc)
+ const char *description, unsigned long *_prealloc)
__acquires(&keyring->sem)
{
struct keyring_list *klist, *nklist;
+ unsigned long prealloc;
unsigned max;
size_t size;
int loop, ret;
/* note replacement slot */
klist->delkey = nklist->delkey = loop;
+ prealloc = (unsigned long)nklist;
goto done;
}
}
if (klist && klist->nkeys < klist->maxkeys) {
/* there's sufficient slack space to append directly */
nklist = NULL;
+ prealloc = KEY_LINK_FIXQUOTA;
} else {
/* grow the key list */
max = 4;
nklist->keys[nklist->delkey] = NULL;
}
+ prealloc = (unsigned long)nklist | KEY_LINK_FIXQUOTA;
done:
- *_prealloc = nklist;
+ *_prealloc = prealloc;
kleave(" = 0");
return 0;
}
/*
- * check already instantiated keys aren't going to be a problem
- * - the caller must have called __key_link_begin()
- * - don't need to call this for keys that were created since __key_link_begin()
- * was called
+ * Check already instantiated keys aren't going to be a problem.
+ *
+ * The caller must have called __key_link_begin(). Don't need to call this for
+ * keys that were created since __key_link_begin() was called.
*/
int __key_link_check_live_key(struct key *keyring, struct key *key)
{
}
/*
- * link a key into to a keyring
- * - must be called with __key_link_begin() having being called
- * - discard already extant link to matching key if there is one
+ * Link a key into a keyring.
+ *
+ * Must be called with __key_link_begin() having been called. Discards any
+ * already extant link to matching key if there is one, so that each keyring
+ * holds at most one link to any given key of a particular type+description
+ * combination.
*/
void __key_link(struct key *keyring, struct key *key,
- struct keyring_list **_prealloc)
+ unsigned long *_prealloc)
{
struct keyring_list *klist, *nklist;
- nklist = *_prealloc;
- *_prealloc = NULL;
+ nklist = (struct keyring_list *)(*_prealloc & ~KEY_LINK_FIXQUOTA);
+ *_prealloc = 0;
kenter("%d,%d,%p", keyring->serial, key->serial, nklist);
}
/*
- * finish linking a key into to a keyring
- * - must be called with __key_link_begin() having being called
+ * Finish linking a key into a keyring.
+ *
+ * Must be called with __key_link_begin() having been called.
*/
void __key_link_end(struct key *keyring, struct key_type *type,
- struct keyring_list *prealloc)
+ unsigned long prealloc)
__releases(&keyring->sem)
{
BUG_ON(type == NULL);
BUG_ON(type->name == NULL);
- kenter("%d,%s,%p", keyring->serial, type->name, prealloc);
+ kenter("%d,%s,%lx", keyring->serial, type->name, prealloc);
if (type == &key_type_keyring)
up_write(&keyring_serialise_link_sem);
if (prealloc) {
- kfree(prealloc);
- key_payload_reserve(keyring,
- keyring->datalen - KEYQUOTA_LINK_BYTES);
+ if (prealloc & KEY_LINK_FIXQUOTA)
+ key_payload_reserve(keyring,
+ keyring->datalen -
+ KEYQUOTA_LINK_BYTES);
+ kfree((struct keyring_list *)(prealloc & ~KEY_LINK_FIXQUOTA));
}
up_write(&keyring->sem);
}
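To show how the three helpers compose with the new unsigned long prealloc word, here is a trimmed sketch of the sequence a keys-internal caller goes through (it mirrors what key_link() below does); error handling is reduced for brevity.

/* Sketch of the begin/check/link/end sequence, for a caller inside
 * security/keys that already holds references on both keys. */
static int example_link(struct key *keyring, struct key *key)
{
	unsigned long prealloc;
	int ret;

	ret = __key_link_begin(keyring, key->type, key->description, &prealloc);
	if (ret < 0)
		return ret;

	ret = __key_link_check_live_key(keyring, key);
	if (ret == 0)
		__key_link(keyring, key, &prealloc);

	__key_link_end(keyring, key->type, prealloc);
	return ret;
}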
-/*
- * link a key to a keyring
+/**
+ * key_link - Link a key to a keyring
+ * @keyring: The keyring to make the link in.
+ * @key: The key to link to.
+ *
+ * Make a link in a keyring to a key, such that the keyring holds a reference
+ * on that key and the key can potentially be found by searching that keyring.
+ *
+ * This function will write-lock the keyring's semaphore and will consume some
+ * of the user's key data quota to hold the link.
+ *
+ * Returns 0 if successful, -ENOTDIR if the keyring isn't a keyring,
+ * -EKEYREVOKED if the keyring has been revoked, -ENFILE if the keyring is
+ * full, -EDQUOT if there is insufficient key data quota remaining to add
+ * another link or -ENOMEM if there's insufficient memory.
+ *
+ * It is assumed that the caller has checked that it is permitted for a link to
+ * be made (the keyring should have Write permission and the key Link
+ * permission).
*/
int key_link(struct key *keyring, struct key *key)
{
- struct keyring_list *prealloc;
+ unsigned long prealloc;
int ret;
key_check(keyring);
return ret;
}
-
EXPORT_SYMBOL(key_link);
-/*****************************************************************************/
-/*
- * unlink the first link to a key from a keyring
+/**
+ * key_unlink - Unlink the first link to a key from a keyring.
+ * @keyring: The keyring to remove the link from.
+ * @key: The key the link is to.
+ *
+ * Remove a link from a keyring to a key.
+ *
+ * This function will write-lock the keyring's semaphore.
+ *
+ * Returns 0 if successful, -ENOTDIR if the keyring isn't a keyring, -ENOENT if
+ * the key isn't linked to by the keyring or -ENOMEM if there's insufficient
+ * memory.
+ *
+ * It is assumed that the caller has checked that it is permitted for a link to
+ * be removed (the keyring should have Write permission; no permissions are
+ * required on the key).
*/
int key_unlink(struct key *keyring, struct key *key)
{
ret = -ENOMEM;
up_write(&keyring->sem);
goto error;
-
-} /* end key_unlink() */
-
+}
EXPORT_SYMBOL(key_unlink);
-/*****************************************************************************/
/*
- * dispose of a keyring list after the RCU grace period, releasing the keys it
- * links to
+ * Dispose of a keyring list after the RCU grace period, releasing the keys it
+ * links to.
*/
static void keyring_clear_rcu_disposal(struct rcu_head *rcu)
{
key_put(klist->keys[loop]);
kfree(klist);
+}
-} /* end keyring_clear_rcu_disposal() */
-
-/*****************************************************************************/
-/*
- * clear the specified process keyring
- * - implements keyctl(KEYCTL_CLEAR)
+/**
+ * keyring_clear - Clear a keyring
+ * @keyring: The keyring to clear.
+ *
+ * Clear the contents of the specified keyring.
+ *
+ * Returns 0 if successful or -ENOTDIR if the keyring isn't a keyring.
*/
int keyring_clear(struct key *keyring)
{
}
return ret;
-
-} /* end keyring_clear() */
-
+}
EXPORT_SYMBOL(keyring_clear);
-/*****************************************************************************/
/*
- * dispose of the links from a revoked keyring
- * - called with the key sem write-locked
+ * Dispose of the links from a revoked keyring.
+ *
+ * This is called with the key sem write-locked.
*/
static void keyring_revoke(struct key *keyring)
{
rcu_assign_pointer(keyring->payload.subscriptions, NULL);
call_rcu(&klist->rcu, keyring_clear_rcu_disposal);
}
-
-} /* end keyring_revoke() */
+}
/*
- * Determine whether a key is dead
+ * Determine whether a key is dead.
*/
static bool key_is_dead(struct key *key, time_t limit)
{
}
/*
- * Collect garbage from the contents of a keyring
+ * Collect garbage from the contents of a keyring, replacing the old list with
+ * a new one with the pointers all shuffled down.
+ *
+ * Dead keys are classed as ones that are flagged as being dead or are revoked,
+ * expired or negative keys that were revoked or expired before the specified
+ * limit.
*/
void keyring_gc(struct key *keyring, time_t limit)
{
-/* permission.c: key permission determination
+/* Key permission checking
*
* Copyright (C) 2005 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
#include <linux/security.h>
#include "internal.h"
-/*****************************************************************************/
/**
* key_task_permission - Check a key can be used
- * @key_ref: The key to check
- * @cred: The credentials to use
- * @perm: The permissions to check for
+ * @key_ref: The key to check.
+ * @cred: The credentials to use.
+ * @perm: The permissions to check for.
*
* Check to see whether permission is granted to use a key in the desired way,
* but permit the security modules to override.
*
- * The caller must hold either a ref on cred or must hold the RCU readlock or a
- * spinlock.
+ * The caller must hold either a ref on cred or must hold the RCU readlock.
+ *
+ * Returns 0 if successful, -EACCES if access is denied based on the
+ * permissions bits or the LSM check.
*/
int key_task_permission(const key_ref_t key_ref, const struct cred *cred,
key_perm_t perm)
/* let LSM be the final arbiter */
return security_key_permission(key_ref, cred, perm);
-
-} /* end key_task_permission() */
-
+}
EXPORT_SYMBOL(key_task_permission);
-/*****************************************************************************/
-/*
- * validate a key
+/**
+ * key_validate - Validate a key.
+ * @key: The key to be validated.
+ *
+ * Check that a key is valid, returning 0 if the key is okay, -EKEYREVOKED if
+ * the key's type has been removed or if the key has been revoked or
+ * -EKEYEXPIRED if the key has expired.
*/
int key_validate(struct key *key)
{
error:
return ret;
-
-} /* end key_validate() */
-
+}
EXPORT_SYMBOL(key_validate);
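A small sketch combining the two checks above; the KEY_READ permission constant is assumed from this kernel's key permission definitions, and the helper name is invented.

#include <linux/cred.h>
#include <linux/key.h>

/* Hypothetical sketch: check a possessed key is live and that the current
 * credentials may read it. */
static int example_may_read(struct key *key)
{
	int ret;

	ret = key_validate(key);
	if (ret < 0)
		return ret;

	return key_task_permission(make_key_ref(key, 1), current_cred(),
				   KEY_READ);
}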
-/* proc.c: proc files for key database enumeration
+/* procfs files for key database enumeration
*
* Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
.release = seq_release,
};
-/*****************************************************************************/
/*
- * declare the /proc files
+ * Declare the /proc files.
*/
static int __init key_proc_init(void)
{
panic("Cannot create /proc/key-users\n");
return 0;
-
-} /* end key_proc_init() */
+}
__initcall(key_proc_init);
-/*****************************************************************************/
/*
- * implement "/proc/keys" to provides a list of the keys on the system
+ * Implement "/proc/keys" to provide a list of the keys on the system that
+ * grant View permission to the caller.
*/
#ifdef CONFIG_KEYS_DEBUG_PROC_KEYS
return __key_user_next(n);
}
-/*****************************************************************************/
/*
- * implement "/proc/key-users" to provides a list of the key users
+ * Implement "/proc/key-users" to provides a list of the key users and their
+ * quotas.
*/
static int proc_key_users_open(struct inode *inode, struct file *file)
{
maxbytes);
return 0;
-
}
-/* Management of a process's keyrings
+/* Manage a process's keyrings
*
* Copyright (C) 2004-2005, 2008 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
#include <asm/uaccess.h>
#include "internal.h"
-/* session keyring create vs join semaphore */
+/* Session keyring create vs join semaphore */
static DEFINE_MUTEX(key_session_mutex);
-/* user keyring creation semaphore */
+/* User keyring creation semaphore */
static DEFINE_MUTEX(key_user_keyring_mutex);
-/* the root user's tracking struct */
+/* The root user's tracking struct */
struct key_user root_key_user = {
.usage = ATOMIC_INIT(3),
.cons_lock = __MUTEX_INITIALIZER(root_key_user.cons_lock),
.user_ns = &init_user_ns,
};
-/*****************************************************************************/
/*
- * install user and user session keyrings for a particular UID
+ * Install the user and user session keyrings for the current process's UID.
*/
int install_user_keyrings(void)
{
}
/*
- * install a fresh thread keyring directly to new credentials
+ * Install a fresh thread keyring directly to new credentials. This keyring is
+ * allowed to overrun the quota.
*/
int install_thread_keyring_to_cred(struct cred *new)
{
}
/*
- * install a fresh thread keyring, discarding the old one
+ * Install a fresh thread keyring, discarding the old one.
*/
static int install_thread_keyring(void)
{
}
/*
- * install a process keyring directly to a credentials struct
- * - returns -EEXIST if there was already a process keyring, 0 if one installed,
- * and other -ve on any other error
+ * Install a process keyring directly to a credentials struct.
+ *
+ * Returns -EEXIST if there was already a process keyring, 0 if one installed,
+ * and a negative error code on any other error.
*/
int install_process_keyring_to_cred(struct cred *new)
{
}
/*
- * make sure a process keyring is installed
- * - we
+ * Make sure a process keyring is installed for the current process. The
+ * existing process keyring is not replaced.
+ *
+ * Returns 0 if there is a process keyring by the end of this function, some
+ * error otherwise.
*/
static int install_process_keyring(void)
{
}
/*
- * install a session keyring directly to a credentials struct
+ * Install a session keyring directly to a credentials struct.
*/
int install_session_keyring_to_cred(struct cred *cred, struct key *keyring)
{
}
/*
- * install a session keyring, discarding the old one
- * - if a keyring is not supplied, an empty one is invented
+ * Install a session keyring, discarding the old one. If a keyring is not
+ * supplied, an empty one is invented.
*/
static int install_session_keyring(struct key *keyring)
{
return commit_creds(new);
}
-/*****************************************************************************/
/*
- * the filesystem user ID changed
+ * Handle the fsuid changing.
*/
void key_fsuid_changed(struct task_struct *tsk)
{
tsk->cred->thread_keyring->uid = tsk->cred->fsuid;
up_write(&tsk->cred->thread_keyring->sem);
}
+}
-} /* end key_fsuid_changed() */
-
-/*****************************************************************************/
/*
- * the filesystem group ID changed
+ * Handle the fsgid changing.
*/
void key_fsgid_changed(struct task_struct *tsk)
{
tsk->cred->thread_keyring->gid = tsk->cred->fsgid;
up_write(&tsk->cred->thread_keyring->sem);
}
+}
-} /* end key_fsgid_changed() */
-
-/*****************************************************************************/
/*
- * search only my process keyrings for the first matching key
- * - we use the supplied match function to see if the description (or other
- * feature of interest) matches
- * - we return -EAGAIN if we didn't find any matching key
- * - we return -ENOKEY if we found only negative matching keys
+ * Search the process keyrings attached to the supplied cred for the first
+ * matching key.
+ *
+ * The search criteria are the type and the match function. The description is
+ * given to the match function as a parameter, but doesn't otherwise influence
+ * the search. Typically the match function will compare the description
+ * parameter to the key's description.
+ *
+ * This can only search keyrings that grant Search permission to the supplied
+ * credentials. Keyrings linked to searched keyrings will also be searched if
+ * they grant Search permission too. Keys can only be found if they grant
+ * Search permission to the credentials.
+ *
+ * Returns a pointer to the key with the key usage count incremented if
+ * successful, -EAGAIN if we didn't find any matching key or -ENOKEY if we only
+ * matched negative keys.
+ *
+ * In the case of a successful return, the possession attribute is set on the
+ * returned key reference.
*/
key_ref_t search_my_process_keyrings(struct key_type *type,
const void *description,
return key_ref;
}
-/*****************************************************************************/
/*
- * search the process keyrings for the first matching key
- * - we use the supplied match function to see if the description (or other
- * feature of interest) matches
- * - we return -EAGAIN if we didn't find any matching key
- * - we return -ENOKEY if we found only negative matching keys
+ * Search the process keyrings attached to the supplied cred for the first
+ * matching key in the manner of search_my_process_keyrings(), but also search
+ * the keys attached to the assumed authorisation key using its credentials if
+ * one is available.
+ *
+ * Return same as search_my_process_keyrings().
*/
key_ref_t search_process_keyrings(struct key_type *type,
const void *description,
found:
return key_ref;
+}
-} /* end search_process_keyrings() */
-
-/*****************************************************************************/
/*
- * see if the key we're looking at is the target key
+ * See if the key we're looking at is the target key.
*/
int lookup_user_key_possessed(const struct key *key, const void *target)
{
return key == target;
+}
-} /* end lookup_user_key_possessed() */
-
-/*****************************************************************************/
/*
- * lookup a key given a key ID from userspace with a given permissions mask
- * - don't create special keyrings unless so requested
- * - partially constructed keys aren't found unless requested
+ * Look up a key ID given us by userspace with a given permissions mask to get
+ * the key it refers to.
+ *
+ * Flags can be passed to request that special keyrings be created if referred
+ * to directly, to permit partially constructed keys to be found and to skip
+ * validity and permission checks on the found key.
+ *
+ * Returns a pointer to the key with an incremented usage count if successful;
+ * -EINVAL if the key ID is invalid; -ENOKEY if the key ID does not correspond
+ * to a key or the best found key was a negative key; -EKEYREVOKED or
+ * -EKEYEXPIRED if the best found key was revoked or expired; -EACCES if the
+ * found key doesn't grant the requested permit or the LSM denied access to it;
+ * or -ENOMEM if a special keyring couldn't be created.
+ *
+ * In the case of a successful return, the possession attribute is set on the
+ * returned key reference.
*/
key_ref_t lookup_user_key(key_serial_t id, unsigned long lflags,
key_perm_t perm)
reget_creds:
put_cred(cred);
goto try_again;
+}
-} /* end lookup_user_key() */
-
-/*****************************************************************************/
/*
- * join the named keyring as the session keyring if possible, or attempt to
- * create a new one of that name if not
- * - if the name is NULL, an empty anonymous keyring is installed instead
- * - named session keyring joining is done with a semaphore held
+ * Join the named keyring as the session keyring if possible else attempt to
+ * create a new one of that name and join that.
+ *
+ * If the name is NULL, an empty anonymous keyring will be installed as the
+ * session keyring.
+ *
+ * Named session keyrings are joined with a semaphore held to prevent the
+ * keyrings from going away whilst the attempt is made to join them and also
+ * to prevent a race in creating compatible session keyrings.
*/
long join_session_keyring(const char *name)
{
}
/*
- * Replace a process's session keyring when that process resumes userspace on
- * behalf of one of its children
+ * Replace a process's session keyring on behalf of one of its children when
+ * the target process is about to resume userspace execution.
*/
void key_replace_session_keyring(void)
{
return signal_pending(current) ? -ERESTARTSYS : 0;
}
-/*
- * call to complete the construction of a key
+/**
+ * complete_request_key - Complete the construction of a key.
+ * @cons: The key construction record.
+ * @error: The success or failure of the construction.
+ *
+ * Complete the attempt to construct a key. The key will be negated
+ * if an error is indicated. The authorisation key will be revoked
+ * unconditionally.
*/
void complete_request_key(struct key_construction *cons, int error)
{
}
EXPORT_SYMBOL(complete_request_key);
+/*
+ * Initialise a usermode helper that is going to have a specific session
+ * keyring.
+ *
+ * This is called in context of freshly forked kthread before kernel_execve(),
+ * so we can simply install the desired session_keyring at this point.
+ */
static int umh_keys_init(struct subprocess_info *info)
{
struct cred *cred = (struct cred*)current_cred();
struct key *keyring = info->data;
- /*
- * This is called in context of freshly forked kthread before
- * kernel_execve(), we can just change our ->session_keyring.
- */
+
return install_session_keyring_to_cred(cred, keyring);
}
+/*
+ * Clean up a usermode helper with session keyring.
+ */
static void umh_keys_cleanup(struct subprocess_info *info)
{
struct key *keyring = info->data;
key_put(keyring);
}
+/*
+ * Call a usermode helper with a specific session keyring.
+ */
static int call_usermodehelper_keys(char *path, char **argv, char **envp,
struct key *session_keyring, enum umh_wait wait)
{
}
/*
- * request userspace finish the construction of a key
+ * Request userspace finish the construction of a key
* - execute "/sbin/request-key <op> <key> <uid> <gid> <keyring> <keyring> <keyring>"
*/
static int call_sbin_request_key(struct key_construction *cons,
}
/*
- * call out to userspace for key construction
- * - we ignore program failure and go on key status instead
+ * Call out to userspace for key construction.
+ *
+ * Program failure is ignored in favour of key status.
*/
static int construct_key(struct key *key, const void *callout_info,
size_t callout_len, void *aux,
}
/*
- * get the appropriate destination keyring for the request
- * - we return whatever keyring we select with an extra reference upon it which
- * the caller must release
+ * Get the appropriate destination keyring for the request.
+ *
+ * The keyring selected is returned with an extra reference upon it which the
+ * caller must release.
*/
static void construct_get_dest_keyring(struct key **_dest_keyring)
{
}
/*
- * allocate a new key in under-construction state and attempt to link it in to
- * the requested place
- * - may return a key that's already under construction instead
+ * Allocate a new key in under-construction state and attempt to link it in to
+ * the requested keyring.
+ *
+ * May return a key that's already under construction instead if there was a
+ * race between two threads calling request_key().
*/
static int construct_alloc_key(struct key_type *type,
const char *description,
struct key_user *user,
struct key **_key)
{
- struct keyring_list *prealloc;
const struct cred *cred = current_cred();
+ unsigned long prealloc;
struct key *key;
key_ref_t key_ref;
int ret;
}
/*
- * commence key construction
+ * Commence key construction.
*/
static struct key *construct_key_and_link(struct key_type *type,
const char *description,
return ERR_PTR(ret);
}
-/*
- * request a key
- * - search the process's keyrings
- * - check the list of keys being created or updated
- * - call out to userspace for a key if supplementary info was provided
- * - cache the key in an appropriate keyring
+/**
+ * request_key_and_link - Request a key and cache it in a keyring.
+ * @type: The type of key we want.
+ * @description: The searchable description of the key.
+ * @callout_info: The data to pass to the instantiation upcall (or NULL).
+ * @callout_len: The length of callout_info.
+ * @aux: Auxiliary data for the upcall.
+ * @dest_keyring: Where to cache the key.
+ * @flags: Flags to key_alloc().
+ *
+ * A key matching the specified criteria is searched for in the process's
+ * keyrings and returned with its usage count incremented if found. Otherwise,
+ * if callout_info is not NULL, a key will be allocated and some service
+ * (probably in userspace) will be asked to instantiate it.
+ *
+ * If successfully found or created, the key will be linked to the destination
+ * keyring if one is provided.
+ *
+ * Returns a pointer to the key if successful; -EACCES, -ENOKEY, -EKEYREVOKED
+ * or -EKEYEXPIRED if an inaccessible, negative, revoked or expired key was
+ * found; -ENOKEY if no key was found and no @callout_info was given; -EDQUOT
+ * if insufficient key quota was available to create a new key; or -ENOMEM if
+ * insufficient memory was available.
+ *
+ * If the returned key was created, then it may still be under construction,
+ * and wait_for_key_construction() should be used to wait for that to complete.
*/
struct key *request_key_and_link(struct key_type *type,
const char *description,
return key;
}
-/*
- * wait for construction of a key to complete
+/**
+ * wait_for_key_construction - Wait for construction of a key to complete
+ * @key: The key being waited for.
+ * @intr: Whether to wait interruptibly.
+ *
+ * Wait for a key to finish being constructed.
+ *
+ * Returns 0 if successful; -ERESTARTSYS if the wait was interrupted; -ENOKEY
+ * if the key was negated; or -EKEYREVOKED or -EKEYEXPIRED if the key was
+ * revoked or expired.
*/
int wait_for_key_construction(struct key *key, bool intr)
{
}
EXPORT_SYMBOL(wait_for_key_construction);
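A hedged sketch of the pairing described above: obtain a key asynchronously
with request_key_async() (defined later in this file) and then wait
interruptibly for its construction to finish. The type, description and
callout data are illustrative only.

	struct key *key;
	int ret;

	key = request_key_async(&key_type_user, "example:foo", "info", 4);
	if (IS_ERR(key))
		return PTR_ERR(key);
	ret = wait_for_key_construction(key, true);
	if (ret < 0) {
		key_put(key);
		return ret;
	}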
-/*
- * request a key
- * - search the process's keyrings
- * - check the list of keys being created or updated
- * - call out to userspace for a key if supplementary info was provided
- * - waits uninterruptible for creation to complete
+/**
+ * request_key - Request a key and wait for construction
+ * @type: Type of key.
+ * @description: The searchable description of the key.
+ * @callout_info: The data to pass to the instantiation upcall (or NULL).
+ *
+ * As for request_key_and_link() except that it does not add the returned key
+ * to a keyring if found, new keys are always allocated in the user's quota,
+ * the callout_info must be a NUL-terminated string and no auxiliary data can
+ * be passed.
+ *
+ * Furthermore, it then works as wait_for_key_construction() to wait for the
+ * completion of keys undergoing construction with a non-interruptible wait.
*/
struct key *request_key(struct key_type *type,
const char *description,
}
EXPORT_SYMBOL(request_key);
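A minimal example of the common case (illustrative only): look up, or upcall
for, a user-defined key by description, then drop the reference when done.
"example:foo" is a hypothetical description.

	struct key *key;

	key = request_key(&key_type_user, "example:foo", NULL);
	if (IS_ERR(key))
		return PTR_ERR(key);
	/* ... use the key's payload under its semaphore ... */
	key_put(key);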
-/*
- * request a key with auxiliary data for the upcaller
- * - search the process's keyrings
- * - check the list of keys being created or updated
- * - call out to userspace for a key if supplementary info was provided
- * - waits uninterruptible for creation to complete
+/**
+ * request_key_with_auxdata - Request a key with auxiliary data for the upcaller
+ * @type: The type of key we want.
+ * @description: The searchable description of the key.
+ * @callout_info: The data to pass to the instantiation upcall (or NULL).
+ * @callout_len: The length of callout_info.
+ * @aux: Auxiliary data for the upcall.
+ *
+ * As for request_key_and_link() except that it does not add the returned key
+ * to a keyring if found and new keys are always allocated in the user's quota.
+ *
+ * Furthermore, it then works as wait_for_key_construction() to wait for the
+ * completion of keys undergoing construction with a non-interruptible wait.
*/
struct key *request_key_with_auxdata(struct key_type *type,
const char *description,
EXPORT_SYMBOL(request_key_with_auxdata);
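A hedged sketch of how @aux is used: the pointer is passed through unchanged
to the key type's ->request_key() handler, so a caller can hand it a private
context. struct example_ctx and example_key_type are hypothetical.

	struct example_ctx ctx = { .flags = 0 };
	struct key *key;

	key = request_key_with_auxdata(&example_key_type, "example:foo",
				       "callout", 7, &ctx);
	if (IS_ERR(key))
		return PTR_ERR(key);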
/*
- * request a key (allow async construction)
- * - search the process's keyrings
- * - check the list of keys being created or updated
- * - call out to userspace for a key if supplementary info was provided
+ * request_key_async - Request a key (allow async construction)
+ * @type: Type of key.
+ * @description: The searchable description of the key.
+ * @callout_info: The data to pass to the instantiation upcall (or NULL).
+ * @callout_len: The length of callout_info.
+ *
+ * As for request_key_and_link() except that it does not add the returned key
+ * to a keyring if found, new keys are always allocated in the user's quota and
+ * no auxiliary data can be passed.
+ *
+ * The caller should call wait_for_key_construction() to wait for the
+ * completion of the returned key if it is still undergoing construction.
*/
struct key *request_key_async(struct key_type *type,
const char *description,
/*
* request a key with auxiliary data for the upcaller (allow async construction)
- * - search the process's keyrings
- * - check the list of keys being created or updated
- * - call out to userspace for a key if supplementary info was provided
+ * @type: Type of key.
+ * @description: The searchable description of the key.
+ * @callout_info: The data to pass to the instantiation upcall (or NULL).
+ * @callout_len: The length of callout_info.
+ * @aux: Auxiliary data for the upcall.
+ *
+ * As for request_key_and_link() except that it does not add the returned key
+ * to a keyring if found and new keys are always allocated in the user's quota.
+ *
+ * The caller should call wait_for_key_construction() to wait for the
+ * completion of the returned key if it is still undergoing construction.
*/
struct key *request_key_async_with_auxdata(struct key_type *type,
const char *description,
-/* request_key_auth.c: request key authorisation controlling key def
+/* Request key authorisation token key definition.
*
* Copyright (C) 2005 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
static long request_key_auth_read(const struct key *, char __user *, size_t);
/*
- * the request-key authorisation key type definition
+ * The request-key authorisation key type definition.
*/
struct key_type key_type_request_key_auth = {
.name = ".request_key_auth",
.read = request_key_auth_read,
};
-/*****************************************************************************/
/*
- * instantiate a request-key authorisation key
+ * Instantiate a request-key authorisation key.
*/
static int request_key_auth_instantiate(struct key *key,
const void *data,
{
key->payload.data = (struct request_key_auth *) data;
return 0;
+}
-} /* end request_key_auth_instantiate() */
-
-/*****************************************************************************/
/*
- * reading a request-key authorisation key retrieves the callout information
+ * Describe an authorisation token.
*/
static void request_key_auth_describe(const struct key *key,
struct seq_file *m)
seq_puts(m, "key:");
seq_puts(m, key->description);
seq_printf(m, " pid:%d ci:%zu", rka->pid, rka->callout_len);
+}
-} /* end request_key_auth_describe() */
-
-/*****************************************************************************/
/*
- * read the callout_info data
+ * Read the callout_info data (retrieves the callout information).
* - the key's semaphore is read-locked
*/
static long request_key_auth_read(const struct key *key,
}
return ret;
+}
-} /* end request_key_auth_read() */
-
-/*****************************************************************************/
/*
- * handle revocation of an authorisation token key
- * - called with the key sem write-locked
+ * Handle revocation of an authorisation token key.
+ *
+ * Called with the key sem write-locked.
*/
static void request_key_auth_revoke(struct key *key)
{
put_cred(rka->cred);
rka->cred = NULL;
}
+}
-} /* end request_key_auth_revoke() */
-
-/*****************************************************************************/
/*
- * destroy an instantiation authorisation token key
+ * Destroy an instantiation authorisation token key.
*/
static void request_key_auth_destroy(struct key *key)
{
key_put(rka->dest_keyring);
kfree(rka->callout_info);
kfree(rka);
+}
-} /* end request_key_auth_destroy() */
-
-/*****************************************************************************/
/*
- * create an authorisation token for /sbin/request-key or whoever to gain
- * access to the caller's security data
+ * Create an authorisation token for /sbin/request-key or whoever to gain
+ * access to the caller's security data.
*/
struct key *request_key_auth_new(struct key *target, const void *callout_info,
size_t callout_len, struct key *dest_keyring)
kfree(rka);
kleave("= %d", ret);
return ERR_PTR(ret);
+}
-} /* end request_key_auth_new() */
-
-/*****************************************************************************/
/*
- * see if an authorisation key is associated with a particular key
+ * See if an authorisation key is associated with a particular key.
*/
static int key_get_instantiation_authkey_match(const struct key *key,
const void *_id)
key_serial_t id = (key_serial_t)(unsigned long) _id;
return rka->target_key->serial == id;
+}
-} /* end key_get_instantiation_authkey_match() */
-
-/*****************************************************************************/
/*
- * get the authorisation key for instantiation of a specific key if attached to
- * the current process's keyrings
- * - this key is inserted into a keyring and that is set as /sbin/request-key's
- * session keyring
- * - a target_id of zero specifies any valid token
+ * Search the current process's keyrings for the authorisation key for
+ * instantiation of a key.
*/
struct key *key_get_instantiation_authkey(key_serial_t target_id)
{
error:
return authkey;
-
-} /* end key_get_instantiation_authkey() */
+}
#include <linux/tpm.h>
#include <linux/tpm_command.h>
-#include "trusted_defined.h"
+#include "trusted.h"
static const char hmac_alg[] = "hmac(sha1)";
static const char hash_alg[] = "sha1";
if (dlen == 0)
break;
data = va_arg(argp, unsigned char *);
- if (data == NULL)
- return -EINVAL;
+ if (data == NULL) {
+ ret = -EINVAL;
+ break;
+ }
ret = crypto_shash_update(&sdesc->shash, data, dlen);
if (ret < 0)
- goto out;
+ break;
}
va_end(argp);
if (!ret)
if (dlen == 0)
break;
data = va_arg(argp, unsigned char *);
- ret = crypto_shash_update(&sdesc->shash, data, dlen);
- if (ret < 0) {
- va_end(argp);
- goto out;
+ if (!data) {
+ ret = -EINVAL;
+ break;
}
+ ret = crypto_shash_update(&sdesc->shash, data, dlen);
+ if (ret < 0)
+ break;
}
va_end(argp);
- ret = crypto_shash_final(&sdesc->shash, paramdigest);
+ if (!ret)
+ ret = crypto_shash_final(&sdesc->shash, paramdigest);
if (!ret)
ret = TSS_rawhmac(digest, key, keylen, SHA1_DIGEST_SIZE,
paramdigest, TPM_NONCE_SIZE, h1,
break;
dpos = va_arg(argp, unsigned int);
ret = crypto_shash_update(&sdesc->shash, buffer + dpos, dlen);
- if (ret < 0) {
- va_end(argp);
- goto out;
- }
+ if (ret < 0)
+ break;
}
va_end(argp);
- ret = crypto_shash_final(&sdesc->shash, paramdigest);
+ if (!ret)
+ ret = crypto_shash_final(&sdesc->shash, paramdigest);
if (ret < 0)
goto out;
break;
dpos = va_arg(argp, unsigned int);
ret = crypto_shash_update(&sdesc->shash, buffer + dpos, dlen);
- if (ret < 0) {
- va_end(argp);
- goto out;
- }
+ if (ret < 0)
+ break;
}
va_end(argp);
- ret = crypto_shash_final(&sdesc->shash, paramdigest);
+ if (!ret)
+ ret = crypto_shash_final(&sdesc->shash, paramdigest);
if (ret < 0)
goto out;
/* get session for sealing key */
ret = osap(tb, &sess, keyauth, keytype, keyhandle);
if (ret < 0)
- return ret;
+ goto out;
dump_sess(&sess);
/* calculate encrypted authorization value */
memcpy(td->xorwork + SHA1_DIGEST_SIZE, sess.enonce, SHA1_DIGEST_SIZE);
ret = TSS_sha1(td->xorwork, SHA1_DIGEST_SIZE * 2, td->xorhash);
if (ret < 0)
- return ret;
+ goto out;
ret = tpm_get_random(tb, td->nonceodd, TPM_NONCE_SIZE);
if (ret < 0)
- return ret;
+ goto out;
ordinal = htonl(TPM_ORD_SEAL);
datsize = htonl(datalen);
pcrsize = htonl(pcrinfosize);
&datsize, datalen, data, 0, 0);
}
if (ret < 0)
- return ret;
+ goto out;
/* build and send the TPM request packet */
INIT_BUF(tb);
ret = trusted_tpm_send(TPM_ANY_NUM, tb->data, MAX_BUF_SIZE);
if (ret < 0)
- return ret;
+ goto out;
/* calculate the size of the returned Blob */
sealinfosize = LOAD32(tb->data, TPM_DATA_OFFSET + sizeof(uint32_t));
memcpy(blob, tb->data + TPM_DATA_OFFSET, storedsize);
*bloblen = storedsize;
}
+out:
+ kfree(td);
return ret;
}
ret = datablob_parse(datablob, new_p, new_o);
if (ret != Opt_update) {
ret = -EINVAL;
+ kfree(new_p);
goto out;
}
/* copy old key values, and reseal with new pcrs */
EXPORT_SYMBOL_GPL(key_type_user);
-/*****************************************************************************/
/*
* instantiate a user defined key
*/
error:
return ret;
-
-} /* end user_instantiate() */
+}
EXPORT_SYMBOL_GPL(user_instantiate);
-/*****************************************************************************/
/*
* dispose of the old data from an updated user defined key
*/
upayload = container_of(rcu, struct user_key_payload, rcu);
kfree(upayload);
+}
-} /* end user_update_rcu_disposal() */
-
-/*****************************************************************************/
/*
* update a user defined key
* - the key's semaphore is write-locked
error:
return ret;
-
-} /* end user_update() */
+}
EXPORT_SYMBOL_GPL(user_update);
-/*****************************************************************************/
/*
* match users on their name
*/
int user_match(const struct key *key, const void *description)
{
return strcmp(key->description, description) == 0;
-
-} /* end user_match() */
+}
EXPORT_SYMBOL_GPL(user_match);
-/*****************************************************************************/
/*
* dispose of the links from a revoked keyring
* - called with the key sem write-locked
rcu_assign_pointer(key->payload.data, NULL);
call_rcu(&upayload->rcu, user_update_rcu_disposal);
}
-
-} /* end user_revoke() */
+}
EXPORT_SYMBOL(user_revoke);
-/*****************************************************************************/
/*
* dispose of the data dangling from the corpse of a user key
*/
struct user_key_payload *upayload = key->payload.data;
kfree(upayload);
-
-} /* end user_destroy() */
+}
EXPORT_SYMBOL_GPL(user_destroy);
-/*****************************************************************************/
/*
* describe the user key
*/
seq_puts(m, key->description);
seq_printf(m, ": %u", key->datalen);
-
-} /* end user_describe() */
+}
EXPORT_SYMBOL_GPL(user_describe);
-/*****************************************************************************/
/*
* read the key data
* - the key's semaphore is read-locked
}
return ret;
-
-} /* end user_read() */
+}
EXPORT_SYMBOL_GPL(user_read);
effective, inheritable, permitted);
}
-int security_capable(int cap)
+int security_capable(const struct cred *cred, int cap)
{
- return security_ops->capable(current, current_cred(), cap,
- SECURITY_CAP_AUDIT);
+ return security_ops->capable(current, cred, cap, SECURITY_CAP_AUDIT);
}
int security_real_capable(struct task_struct *tsk, int cap)
{
struct task_security_struct *tsec = cred->security;
- BUG_ON((unsigned long) cred->security < PAGE_SIZE);
+ /*
+ * cred->security == NULL if security_cred_alloc_blank() or
+ * security_prepare_creds() returned an error.
+ */
+ BUG_ON(cred->security && (unsigned long) cred->security < PAGE_SIZE);
cred->security = (void *) 0x7UL;
kfree(tsec);
}
p->bool_val_to_struct = (struct cond_bool_datum **)
kmalloc(p->p_bools.nprim * sizeof(struct cond_bool_datum *), GFP_KERNEL);
if (!p->bool_val_to_struct)
- return -1;
+ return -ENOMEM;
return 0;
}
if (rc)
goto out;
- rc = -ENOMEM;
- if (cond_init_bool_indexes(p))
+ rc = cond_init_bool_indexes(p);
+ if (rc)
goto out;
for (i = 0; i < SYM_NUM; i++) {
#define DRIVER_NAME "aaci-pl041"
+#define FRAME_PERIOD_US 21
+
/*
* PM support is not complete. Turn it off.
*/
if (v & SLFR_1RXV)
readl(aaci->base + AACI_SL1RX);
- writel(maincr, aaci->base + AACI_MAINCR);
+ if (maincr != readl(aaci->base + AACI_MAINCR)) {
+ writel(maincr, aaci->base + AACI_MAINCR);
+ readl(aaci->base + AACI_MAINCR);
+ udelay(1);
+ }
}
/*
unsigned short val)
{
struct aaci *aaci = ac97->private_data;
+ int timeout;
u32 v;
- int timeout = 5000;
if (ac97->num >= 4)
return;
writel(val << 4, aaci->base + AACI_SL2TX);
writel(reg << 12, aaci->base + AACI_SL1TX);
- /*
- * Wait for the transmission of both slots to complete.
- */
+ /* Initially, wait one frame period */
+ udelay(FRAME_PERIOD_US);
+
+ /* And then wait an additional eight frame periods for it to be sent */
+ timeout = FRAME_PERIOD_US * 8;
do {
+ udelay(1);
v = readl(aaci->base + AACI_SLFR);
} while ((v & (SLFR_1TXB|SLFR_2TXB)) && --timeout);
- if (!timeout)
+ if (v & (SLFR_1TXB|SLFR_2TXB))
dev_err(&aaci->dev->dev,
"timeout waiting for write to complete\n");
static unsigned short aaci_ac97_read(struct snd_ac97 *ac97, unsigned short reg)
{
struct aaci *aaci = ac97->private_data;
+ int timeout, retries = 10;
u32 v;
- int timeout = 5000;
- int retries = 10;
if (ac97->num >= 4)
return ~0;
*/
writel((reg << 12) | (1 << 19), aaci->base + AACI_SL1TX);
- /*
- * Wait for the transmission to complete.
- */
+ /* Initially, wait one frame period */
+ udelay(FRAME_PERIOD_US);
+
+ /* And then wait an additional eight frame periods for it to be sent */
+ timeout = FRAME_PERIOD_US * 8;
do {
+ udelay(1);
v = readl(aaci->base + AACI_SLFR);
} while ((v & SLFR_1TXB) && --timeout);
- if (!timeout) {
+ if (v & SLFR_1TXB) {
dev_err(&aaci->dev->dev, "timeout on slot 1 TX busy\n");
v = ~0;
goto out;
}
- /*
- * Give the AC'97 codec more than enough time
- * to respond. (42us = ~2 frames at 48kHz.)
- */
- udelay(42);
+ /* Now wait for the response frame */
+ udelay(FRAME_PERIOD_US);
- /*
- * Wait for slot 2 to indicate data.
- */
- timeout = 5000;
+ /* And then wait an additional eight frame periods for data */
+ timeout = FRAME_PERIOD_US * 8;
do {
+ udelay(1);
cond_resched();
v = readl(aaci->base + AACI_SLFR) & (SLFR_1RXV|SLFR_2RXV);
} while ((v != (SLFR_1RXV|SLFR_2RXV)) && --timeout);
- if (!timeout) {
+ if (v != (SLFR_1RXV|SLFR_2RXV)) {
dev_err(&aaci->dev->dev, "timeout on RX valid\n");
v = ~0;
goto out;
int timeout = 5000;
do {
+ udelay(1);
val = readl(aacirun->base + AACI_SR);
} while (val & mask && timeout--);
}
* Give the AC'97 codec more than enough time
* to wake up. (42us = ~2 frames at 48kHz.)
*/
- udelay(42);
+ udelay(FRAME_PERIOD_US * 2);
ret = snd_ac97_bus(aaci->card, 0, &aaci_bus_ops, aaci, &ac97_bus);
if (ret)
* disabling the channel doesn't clear the FIFO.
*/
writel(aaci->maincr & ~MAINCR_IE, aaci->base + AACI_MAINCR);
+ readl(aaci->base + AACI_MAINCR);
+ udelay(1);
writel(aaci->maincr, aaci->base + AACI_MAINCR);
/*
#include <linux/dw_dmac.h>
#include <mach/cpu.h>
-#include <mach/hardware.h>
#include <mach/gpio.h>
+#ifdef CONFIG_ARCH_AT91
+#include <mach/hardware.h>
+#endif
+
#include "ac97c.h"
enum {
{
struct snd_hrtimer *stime = container_of(hrt, struct snd_hrtimer, hrt);
struct snd_timer *t = stime->timer;
+ unsigned long oruns;
if (!atomic_read(&stime->running))
return HRTIMER_NORESTART;
- hrtimer_forward_now(hrt, ns_to_ktime(t->sticks * resolution));
- snd_timer_interrupt(stime->timer, t->sticks);
+ oruns = hrtimer_forward_now(hrt, ns_to_ktime(t->sticks * resolution));
+ snd_timer_interrupt(stime->timer, t->sticks * oruns);
if (!atomic_read(&stime->running))
return HRTIMER_NORESTART;
}
static struct snd_timer_hardware hrtimer_hw = {
- .flags = SNDRV_TIMER_HW_AUTO,
+ .flags = SNDRV_TIMER_HW_AUTO | SNDRV_TIMER_HW_TASKLET,
.open = snd_hrtimer_open,
.close = snd_hrtimer_close,
.start = snd_hrtimer_start,
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/ioport.h>
+#include <linux/io.h>
#include <linux/moduleparam.h>
#include <sound/core.h>
#include <sound/initval.h>
#include <sound/rawmidi.h>
#include <linux/delay.h>
-#include <asm/io.h>
-
/*
* globals
*/
$(obj)/bin2hex pss_synth < $< > $@
else
$(obj)/pss_boot.h:
- ( \
+ $(Q)( \
echo 'static unsigned char * pss_synth = NULL;'; \
echo 'static int pss_synthLen = 0;'; \
) > $@
$(obj)/hex2hex -i trix_boot < $< > $@
else
$(obj)/trix_boot.h:
- ( \
+ $(Q)( \
echo 'static unsigned char * trix_boot = NULL;'; \
echo 'static int trix_boot_len = 0;'; \
) > $@
static int inline vortex_adbdma_getlinearpos(vortex_t * vortex, int adbdma)
{
stream_t *dma = &vortex->dma_adb[adbdma];
- int temp;
+ int temp, page, delta;
temp = hwread(vortex->mmio, VORTEX_ADBDMA_STAT + (adbdma << 2));
- temp = (dma->period_virt * dma->period_bytes) + (temp & (dma->period_bytes - 1));
- return temp;
+ page = (temp & ADB_SUBBUF_MASK) >> ADB_SUBBUF_SHIFT;
+ if (dma->nr_periods >= 4)
+ delta = (page - dma->period_real) & 3;
+ else {
+ delta = (page - dma->period_real);
+ if (delta < 0)
+ delta += dma->nr_periods;
+ }
+ return (dma->period_virt + delta) * dma->period_bytes
+ + (temp & (dma->period_bytes - 1));
}
static void vortex_adbdma_startfifo(vortex_t * vortex, int adbdma)
snd_azf3328_dbgcallenter();
switch (bitrate) {
-#define AZF_FMT_XLATE(in_freq, out_bits) \
- do { \
- case AZF_FREQ_ ## in_freq: \
- freq = SOUNDFORMAT_FREQ_ ## out_bits; \
- break; \
- } while (0);
- AZF_FMT_XLATE(4000, SUSPECTED_4000)
- AZF_FMT_XLATE(4800, SUSPECTED_4800)
- /* the AZF3328 names it "5510" for some strange reason: */
- AZF_FMT_XLATE(5512, 5510)
- AZF_FMT_XLATE(6620, 6620)
- AZF_FMT_XLATE(8000, 8000)
- AZF_FMT_XLATE(9600, 9600)
- AZF_FMT_XLATE(11025, 11025)
- AZF_FMT_XLATE(13240, SUSPECTED_13240)
- AZF_FMT_XLATE(16000, 16000)
- AZF_FMT_XLATE(22050, 22050)
- AZF_FMT_XLATE(32000, 32000)
+ case AZF_FREQ_4000: freq = SOUNDFORMAT_FREQ_SUSPECTED_4000; break;
+ case AZF_FREQ_4800: freq = SOUNDFORMAT_FREQ_SUSPECTED_4800; break;
+ case AZF_FREQ_5512:
+ /* the AZF3328 names it "5510" for some strange reason */
+ freq = SOUNDFORMAT_FREQ_5510; break;
+ case AZF_FREQ_6620: freq = SOUNDFORMAT_FREQ_6620; break;
+ case AZF_FREQ_8000: freq = SOUNDFORMAT_FREQ_8000; break;
+ case AZF_FREQ_9600: freq = SOUNDFORMAT_FREQ_9600; break;
+ case AZF_FREQ_11025: freq = SOUNDFORMAT_FREQ_11025; break;
+ case AZF_FREQ_13240: freq = SOUNDFORMAT_FREQ_SUSPECTED_13240; break;
+ case AZF_FREQ_16000: freq = SOUNDFORMAT_FREQ_16000; break;
+ case AZF_FREQ_22050: freq = SOUNDFORMAT_FREQ_22050; break;
+ case AZF_FREQ_32000: freq = SOUNDFORMAT_FREQ_32000; break;
default:
snd_printk(KERN_WARNING "unknown bitrate %d, assuming 44.1kHz!\n", bitrate);
/* fall-through */
- AZF_FMT_XLATE(44100, 44100)
- AZF_FMT_XLATE(48000, 48000)
- AZF_FMT_XLATE(66200, SUSPECTED_66200)
-#undef AZF_FMT_XLATE
+ case AZF_FREQ_44100: freq = SOUNDFORMAT_FREQ_44100; break;
+ case AZF_FREQ_48000: freq = SOUNDFORMAT_FREQ_48000; break;
+ case AZF_FREQ_66200: freq = SOUNDFORMAT_FREQ_SUSPECTED_66200; break;
}
/* val = 0xff07; 3m27.993s (65301Hz; -> 64000Hz???) hmm, 66120, 65967, 66123 */
/* val = 0xff09; 17m15.098s (13123,478Hz; -> 12000Hz???) hmm, 13237.2Hz? */
snd_print_pcm_rates(a->rates, buf, sizeof(buf));
if (a->format == AUDIO_CODING_TYPE_LPCM)
- snd_print_pcm_bits(a->sample_bits, buf2 + 8, sizeof(buf2 - 8));
+ snd_print_pcm_bits(a->sample_bits, buf2 + 8, sizeof(buf2) - 8);
else if (a->max_bitrate)
snprintf(buf2, sizeof(buf2),
", max bitrate = %d", a->max_bitrate);
SND_PCI_QUIRK(0x1043, 0x813d, "ASUS P5AD2", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1043, 0x81b3, "ASUS", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1043, 0x81e7, "ASUS M2V", POS_FIX_LPIB),
+ SND_PCI_QUIRK(0x1043, 0x8410, "ASUS", POS_FIX_LPIB),
SND_PCI_QUIRK(0x104d, 0x9069, "Sony VPCS11V9E", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1106, 0x3288, "ASUS M2V-MX SE", POS_FIX_LPIB),
SND_PCI_QUIRK(0x1179, 0xff10, "Toshiba A100-259", POS_FIX_LPIB),
if (err < 0)
goto out_free;
#ifdef CONFIG_SND_HDA_PATCH_LOADER
- if (patch[dev]) {
+ if (patch[dev] && *patch[dev]) {
snd_printk(KERN_ERR SFX "Applying patch firmware '%s'\n",
patch[dev]);
err = snd_hda_load_patch(chip->bus, patch[dev]);
unsigned int auto_mic;
int auto_mic_ext; /* autocfg.inputs[] index for ext mic */
unsigned int need_dac_fix;
+ hda_nid_t slave_dig_outs[2];
/* capture */
unsigned int num_adc_nids;
unsigned int ideapad:1;
unsigned int thinkpad:1;
unsigned int hp_laptop:1;
+ unsigned int asus:1;
unsigned int ext_mic_present;
unsigned int recording;
info->stream[SNDRV_PCM_STREAM_CAPTURE].nid =
spec->dig_in_nid;
}
+ if (spec->slave_dig_outs[0])
+ codec->slave_dig_outs = spec->slave_dig_outs;
}
return 0;
struct conexant_spec *spec;
struct conexant_jack *jack;
const char *name;
- int err;
+ int i, err;
spec = codec->spec;
snd_array_init(&spec->jacks, sizeof(*jack), 32);
+
+ jack = spec->jacks.list;
+ for (i = 0; i < spec->jacks.used; i++, jack++)
+ if (jack->nid == nid)
+ return 0; /* already present */
+
jack = snd_array_new(&spec->jacks);
name = (type == SND_JACK_HEADPHONE) ? "Headphone" : "Mic" ;
static hda_nid_t cxt5066_dac_nids[1] = { 0x10 };
static hda_nid_t cxt5066_adc_nids[3] = { 0x14, 0x15, 0x16 };
static hda_nid_t cxt5066_capsrc_nids[1] = { 0x17 };
-#define CXT5066_SPDIF_OUT 0x21
+static hda_nid_t cxt5066_digout_pin_nids[2] = { 0x20, 0x22 };
/* OLPC's microphone port is DC coupled for use with external sensors,
* therefore we use a 50% mic bias in order to center the input signal with
}
}
+
+/* toggle input of built-in digital mic and mic jack appropriately */
+static void cxt5066_asus_automic(struct hda_codec *codec)
+{
+ unsigned int present;
+
+ present = snd_hda_jack_detect(codec, 0x1b);
+ snd_printdd("CXT5066: external microphone present=%d\n", present);
+ snd_hda_codec_write(codec, 0x17, 0, AC_VERB_SET_CONNECT_SEL,
+ present ? 1 : 0);
+}
+
+
/* toggle input of built-in digital mic and mic jack appropriately */
static void cxt5066_hp_laptop_automic(struct hda_codec *codec)
{
cxt5066_update_speaker(codec);
}
-/* unsolicited event for jack sensing */
-static void cxt5066_olpc_unsol_event(struct hda_codec *codec, unsigned int res)
+/* Dispatch the right mic autoswitch function */
+static void cxt5066_automic(struct hda_codec *codec)
{
struct conexant_spec *spec = codec->spec;
- snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
- /* ignore mic events in DC mode; we're always using the jack */
- if (!spec->dc_enable)
- cxt5066_olpc_automic(codec);
- break;
- }
-}
-/* unsolicited event for jack sensing */
-static void cxt5066_vostro_event(struct hda_codec *codec, unsigned int res)
-{
- snd_printdd("CXT5066_vostro: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
+ if (spec->dell_vostro)
cxt5066_vostro_automic(codec);
- break;
- }
-}
-
-/* unsolicited event for jack sensing */
-static void cxt5066_ideapad_event(struct hda_codec *codec, unsigned int res)
-{
- snd_printdd("CXT5066_ideapad: unsol event %x (%x)\n", res, res >> 26);
- switch (res >> 26) {
- case CONEXANT_HP_EVENT:
- cxt5066_hp_automute(codec);
- break;
- case CONEXANT_MIC_EVENT:
+ else if (spec->ideapad)
cxt5066_ideapad_automic(codec);
- break;
- }
+ else if (spec->thinkpad)
+ cxt5066_thinkpad_automic(codec);
+ else if (spec->hp_laptop)
+ cxt5066_hp_laptop_automic(codec);
+ else if (spec->asus)
+ cxt5066_asus_automic(codec);
}
/* unsolicited event for jack sensing */
-static void cxt5066_hp_laptop_event(struct hda_codec *codec, unsigned int res)
+static void cxt5066_olpc_unsol_event(struct hda_codec *codec, unsigned int res)
{
- snd_printdd("CXT5066_hp_laptop: unsol event %x (%x)\n", res, res >> 26);
+ struct conexant_spec *spec = codec->spec;
+ snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
switch (res >> 26) {
case CONEXANT_HP_EVENT:
cxt5066_hp_automute(codec);
break;
case CONEXANT_MIC_EVENT:
- cxt5066_hp_laptop_automic(codec);
+ /* ignore mic events in DC mode; we're always using the jack */
+ if (!spec->dc_enable)
+ cxt5066_olpc_automic(codec);
break;
}
}
/* unsolicited event for jack sensing */
-static void cxt5066_thinkpad_event(struct hda_codec *codec, unsigned int res)
+static void cxt5066_unsol_event(struct hda_codec *codec, unsigned int res)
{
- snd_printdd("CXT5066_thinkpad: unsol event %x (%x)\n", res, res >> 26);
+ snd_printdd("CXT5066: unsol event %x (%x)\n", res, res >> 26);
switch (res >> 26) {
case CONEXANT_HP_EVENT:
cxt5066_hp_automute(codec);
break;
case CONEXANT_MIC_EVENT:
- cxt5066_thinkpad_automic(codec);
+ cxt5066_automic(codec);
break;
}
}
+
static const struct hda_input_mux cxt5066_analog_mic_boost = {
.num_items = 5,
.items = {
spec->recording = 0;
}
+static void conexant_check_dig_outs(struct hda_codec *codec,
+ hda_nid_t *dig_pins,
+ int num_pins)
+{
+ struct conexant_spec *spec = codec->spec;
+ hda_nid_t *nid_loc = &spec->multiout.dig_out_nid;
+ int i;
+
+ for (i = 0; i < num_pins; i++, dig_pins++) {
+ unsigned int cfg = snd_hda_codec_get_pincfg(codec, *dig_pins);
+ if (get_defcfg_connect(cfg) == AC_JACK_PORT_NONE)
+ continue;
+ if (snd_hda_get_connections(codec, *dig_pins, nid_loc, 1) != 1)
+ continue;
+ if (spec->slave_dig_outs[0])
+ nid_loc++;
+ else
+ nid_loc = spec->slave_dig_outs;
+ }
+}
+
static struct hda_input_mux cxt5066_capture_source = {
.num_items = 4,
.items = {
/* initialize jack-sensing, too */
static int cxt5066_init(struct hda_codec *codec)
{
- struct conexant_spec *spec = codec->spec;
-
snd_printdd("CXT5066: init\n");
conexant_init(codec);
if (codec->patch_ops.unsol_event) {
cxt5066_hp_automute(codec);
- if (spec->dell_vostro)
- cxt5066_vostro_automic(codec);
- else if (spec->ideapad)
- cxt5066_ideapad_automic(codec);
- else if (spec->thinkpad)
- cxt5066_thinkpad_automic(codec);
- else if (spec->hp_laptop)
- cxt5066_hp_laptop_automic(codec);
+ cxt5066_automic(codec);
}
cxt5066_set_mic_boost(codec);
return 0;
CXT5066_DELL_VOSTRO, /* Dell Vostro 1015i */
CXT5066_IDEAPAD, /* Lenovo IdeaPad U150 */
CXT5066_THINKPAD, /* Lenovo ThinkPad T410s, others? */
+ CXT5066_ASUS, /* Asus K52JU, Lenovo G560 - Int mic at 0x1a and Ext mic at 0x1b */
CXT5066_HP_LAPTOP, /* HP Laptop */
CXT5066_MODELS
};
[CXT5066_DELL_VOSTRO] = "dell-vostro",
[CXT5066_IDEAPAD] = "ideapad",
[CXT5066_THINKPAD] = "thinkpad",
+ [CXT5066_ASUS] = "asus",
[CXT5066_HP_LAPTOP] = "hp-laptop",
};
SND_PCI_QUIRK(0x1028, 0x0402, "Dell Vostro", CXT5066_DELL_VOSTRO),
SND_PCI_QUIRK(0x1028, 0x0408, "Dell Inspiron One 19T", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x103c, 0x360b, "HP G60", CXT5066_HP_LAPTOP),
- SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_HP_LAPTOP),
+ SND_PCI_QUIRK(0x1043, 0x13f3, "Asus A52J", CXT5066_ASUS),
+ SND_PCI_QUIRK(0x1043, 0x1643, "Asus K52JU", CXT5066_ASUS),
+ SND_PCI_QUIRK(0x1043, 0x1993, "Asus U50F", CXT5066_ASUS),
SND_PCI_QUIRK(0x1179, 0xff1e, "Toshiba Satellite C650D", CXT5066_IDEAPAD),
SND_PCI_QUIRK(0x1179, 0xff50, "Toshiba Satellite P500-PSPGSC-01800T", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x1179, 0xffe0, "Toshiba Satellite Pro T130-15F", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT5066_OLPC_XO_1_5),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400s", CXT5066_THINKPAD),
SND_PCI_QUIRK(0x17aa, 0x21c5, "Thinkpad Edge 13", CXT5066_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x21c6, "Thinkpad Edge 13", CXT5066_ASUS),
SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo Thinkpad", CXT5066_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo G560", CXT5066_ASUS),
SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo", CXT5066_IDEAPAD), /* Fallback for Lenovos without dock mic */
{}
};
spec->multiout.max_channels = 2;
spec->multiout.num_dacs = ARRAY_SIZE(cxt5066_dac_nids);
spec->multiout.dac_nids = cxt5066_dac_nids;
- spec->multiout.dig_out_nid = CXT5066_SPDIF_OUT;
+ conexant_check_dig_outs(codec, cxt5066_digout_pin_nids,
+ ARRAY_SIZE(cxt5066_digout_pin_nids));
spec->num_adc_nids = 1;
spec->adc_nids = cxt5066_adc_nids;
spec->capsrc_nids = cxt5066_capsrc_nids;
spec->num_init_verbs++;
spec->dell_automute = 1;
break;
+ case CXT5066_ASUS:
case CXT5066_HP_LAPTOP:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_hp_laptop_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->init_verbs[spec->num_init_verbs] =
cxt5066_init_verbs_hp_laptop;
spec->num_init_verbs++;
- spec->hp_laptop = 1;
+ spec->hp_laptop = board_config == CXT5066_HP_LAPTOP;
+ spec->asus = board_config == CXT5066_ASUS;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
/* no S/PDIF out */
- spec->multiout.dig_out_nid = 0;
+ if (board_config == CXT5066_HP_LAPTOP)
+ spec->multiout.dig_out_nid = 0;
/* input source automatically selected */
spec->input_mux = NULL;
spec->port_d_mode = 0;
break;
case CXT5066_DELL_VOSTRO:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_vostro_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->init_verbs[0] = cxt5066_init_verbs_vostro;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master_olpc;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
break;
case CXT5066_IDEAPAD:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_ideapad_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
spec->init_verbs[0] = cxt5066_init_verbs_ideapad;
break;
case CXT5066_THINKPAD:
codec->patch_ops.init = cxt5066_init;
- codec->patch_ops.unsol_event = cxt5066_thinkpad_event;
+ codec->patch_ops.unsol_event = cxt5066_unsol_event;
spec->mixers[spec->num_mixers++] = cxt5066_mixer_master;
spec->mixers[spec->num_mixers++] = cxt5066_mixers;
spec->init_verbs[0] = cxt5066_init_verbs_thinkpad;
}
}
spec->multiout.dac_nids = spec->private_dac_nids;
- spec->multiout.max_channels = nums * 2;
+ spec->multiout.max_channels = spec->multiout.num_dacs * 2;
if (cfg->hp_outs > 0)
spec->auto_mute = 1;
return 0;
}
-static int cx_auto_add_volume(struct hda_codec *codec, const char *basename,
+static int cx_auto_add_volume_idx(struct hda_codec *codec, const char *basename,
const char *dir, int cidx,
- hda_nid_t nid, int hda_dir)
+ hda_nid_t nid, int hda_dir, int amp_idx)
{
static char name[32];
static struct snd_kcontrol_new knew[] = {
for (i = 0; i < 2; i++) {
struct snd_kcontrol *kctl;
- knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, 3, 0, hda_dir);
+ knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, 3, amp_idx,
+ hda_dir);
knew[i].subdevice = HDA_SUBDEV_AMP_FLAG;
knew[i].index = cidx;
snprintf(name, sizeof(name), "%s%s %s", basename, dir, sfx[i]);
return 0;
}
+#define cx_auto_add_volume(codec, str, dir, cidx, nid, hda_dir) \
+ cx_auto_add_volume_idx(codec, str, dir, cidx, nid, hda_dir, 0)
+
#define cx_auto_add_pb_volume(codec, nid, str, idx) \
cx_auto_add_volume(codec, str, " Playback", idx, nid, HDA_OUTPUT)
struct conexant_spec *spec = codec->spec;
struct auto_pin_cfg *cfg = &spec->autocfg;
static const char *prev_label;
- int i, err, cidx;
+ int i, err, cidx, conn_len;
+ hda_nid_t conn[HDA_MAX_CONNECTIONS];
+
+ int multi_adc_volume = 0; /* If the ADC nid has several input volumes */
+ int adc_nid = spec->adc_nids[0];
+
+ conn_len = snd_hda_get_connections(codec, adc_nid, conn,
+ HDA_MAX_CONNECTIONS);
+ if (conn_len < 0)
+ return conn_len;
+
+ multi_adc_volume = cfg->num_inputs > 1 && conn_len > 1;
+ if (!multi_adc_volume) {
+ err = cx_auto_add_volume(codec, "Capture", "", 0, adc_nid,
+ HDA_INPUT);
+ if (err < 0)
+ return err;
+ }
- err = cx_auto_add_volume(codec, "Capture", "", 0, spec->adc_nids[0],
- HDA_INPUT);
- if (err < 0)
- return err;
prev_label = NULL;
cidx = 0;
for (i = 0; i < cfg->num_inputs; i++) {
hda_nid_t nid = cfg->inputs[i].pin;
const char *label;
- if (!(get_wcaps(codec, nid) & AC_WCAP_IN_AMP))
+ int j;
+ int pin_amp = get_wcaps(codec, nid) & AC_WCAP_IN_AMP;
+ if (!pin_amp && !multi_adc_volume)
continue;
+
label = hda_get_autocfg_input_label(codec, cfg, i);
if (label == prev_label)
cidx++;
else
cidx = 0;
prev_label = label;
- err = cx_auto_add_volume(codec, label, " Capture", cidx,
- nid, HDA_INPUT);
- if (err < 0)
- return err;
+
+ if (pin_amp) {
+ err = cx_auto_add_volume(codec, label, " Boost", cidx,
+ nid, HDA_INPUT);
+ if (err < 0)
+ return err;
+ }
+
+ if (!multi_adc_volume)
+ continue;
+ for (j = 0; j < conn_len; j++) {
+ if (conn[j] == nid) {
+ err = cx_auto_add_volume_idx(codec, label,
+ " Capture", cidx, adc_nid, HDA_INPUT, j);
+ if (err < 0)
+ return err;
+ break;
+ }
+ }
}
return 0;
}
hdmi_ai->ver = 0x01;
hdmi_ai->len = 0x0a;
hdmi_ai->CC02_CT47 = channels - 1;
+ hdmi_ai->CA = ca;
hdmi_checksum_audio_infoframe(hdmi_ai);
} else if (spec->sink_eld[i].conn_type == 1) { /* DisplayPort */
struct dp_audio_infoframe *dp_ai;
dp_ai->len = 0x1b;
dp_ai->ver = 0x11 << 2;
dp_ai->CC02_CT47 = channels - 1;
+ dp_ai->CA = ca;
} else {
snd_printd("HDMI: unknown connection type at pin %d\n",
pin_nid);
{
struct alc_spec *spec = codec->spec;
int id = spec->fixup_id;
+#ifdef CONFIG_SND_DEBUG_VERBOSE
const char *modelname = spec->fixup_name;
+#endif
int depth = 0;
if (!spec->fixup_list)
{ } /* end */
};
+static struct snd_kcontrol_new alc888_acer_aspire_4930g_mixer[] = {
+ HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
+ HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT),
+ HDA_CODEC_VOLUME("Surround Playback Volume", 0x0d, 0x0, HDA_OUTPUT),
+ HDA_BIND_MUTE("Surround Playback Switch", 0x0d, 2, HDA_INPUT),
+ HDA_CODEC_VOLUME_MONO("Center Playback Volume", 0x0f, 2, 0x0,
+ HDA_OUTPUT),
+ HDA_BIND_MUTE_MONO("Center Playback Switch", 0x0f, 2, 2, HDA_INPUT),
+ HDA_CODEC_VOLUME_MONO("LFE Playback Volume", 0x0f, 1, 0x0, HDA_OUTPUT),
+ HDA_BIND_MUTE_MONO("LFE Playback Switch", 0x0f, 1, 2, HDA_INPUT),
+ HDA_CODEC_VOLUME("Side Playback Volume", 0x0e, 0x0, HDA_OUTPUT),
+ HDA_BIND_MUTE("Side Playback Switch", 0x0e, 2, HDA_INPUT),
+ HDA_CODEC_VOLUME("CD Playback Volume", 0x0b, 0x04, HDA_INPUT),
+ HDA_CODEC_MUTE("CD Playback Switch", 0x0b, 0x04, HDA_INPUT),
+ HDA_CODEC_VOLUME("Line Playback Volume", 0x0b, 0x02, HDA_INPUT),
+ HDA_CODEC_MUTE("Line Playback Switch", 0x0b, 0x02, HDA_INPUT),
+ HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x0, HDA_INPUT),
+ HDA_CODEC_VOLUME("Mic Boost Volume", 0x18, 0, HDA_INPUT),
+ HDA_CODEC_MUTE("Mic Playback Switch", 0x0b, 0x0, HDA_INPUT),
+ { } /* end */
+};
+
+
static struct snd_kcontrol_new alc889_acer_aspire_8930g_mixer[] = {
HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT),
.init_hook = alc_automute_amp,
},
[ALC888_ACER_ASPIRE_4930G] = {
- .mixers = { alc888_base_mixer,
+ .mixers = { alc888_acer_aspire_4930g_mixer,
alc883_chmode_mixer },
.init_verbs = { alc883_init_verbs, alc880_gpio1_init_verbs,
alc888_acer_aspire_4930g_verbs },
return 0;
}
-static int alc861vd_auto_create_multi_out_ctls(struct alc_spec *spec,
- const struct auto_pin_cfg *cfg);
-
/* almost identical with ALC880 parser... */
static int alc882_parse_auto_config(struct hda_codec *codec)
{
err = alc880_auto_fill_dac_nids(spec, &spec->autocfg);
if (err < 0)
return err;
- if (codec->vendor_id == 0x10ec0887)
- err = alc861vd_auto_create_multi_out_ctls(spec, &spec->autocfg);
- else
- err = alc880_auto_create_multi_out_ctls(spec, &spec->autocfg);
+ err = alc880_auto_create_multi_out_ctls(spec, &spec->autocfg);
if (err < 0)
return err;
err = alc880_auto_create_extra_out(spec, spec->autocfg.hp_pins[0],
ALC262_HP_BPC),
SND_PCI_QUIRK_MASK(0x103c, 0xff00, 0x1300, "HP xw series",
ALC262_HP_BPC),
+ SND_PCI_QUIRK_MASK(0x103c, 0xff00, 0x1500, "HP z series",
+ ALC262_HP_BPC),
SND_PCI_QUIRK_MASK(0x103c, 0xff00, 0x1700, "HP xw series",
ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x2800, "HP D7000", ALC262_HP_BPC_D7000_WL),
SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
- SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
+ SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
{}
#define alc861vd_idx_to_mixer_switch(nid) ((nid) + 0x0c)
/* add playback controls from the parsed DAC table */
-/* Based on ALC880 version. But ALC861VD and ALC887 have separate,
+/* Based on ALC880 version. But ALC861VD has separate,
* different NIDs for mute/unmute switch and volume control */
static int alc861vd_auto_create_multi_out_ctls(struct alc_spec *spec,
const struct auto_pin_cfg *cfg)
ALC662_3ST_6ch_DIG),
SND_PCI_QUIRK_MASK(0x1854, 0xf000, 0x2000, "ASUS H13-200x",
ALC663_ASUS_H13),
+ SND_PCI_QUIRK(0x1991, 0x5628, "Ordissimo EVE", ALC662_LENOVO_101E),
{}
};
ALC662_FIXUP_ASPIRE,
ALC662_FIXUP_IDEAPAD,
ALC272_FIXUP_MARIO,
+ ALC662_FIXUP_CZC_P10T,
};
static const struct alc_fixup alc662_fixups[] = {
[ALC272_FIXUP_MARIO] = {
.type = ALC_FIXUP_FUNC,
.v.func = alc272_fixup_mario,
- }
+ },
+ [ALC662_FIXUP_CZC_P10T] = {
+ .type = ALC_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ {0x14, AC_VERB_SET_EAPD_BTLENABLE, 0},
+ {}
+ }
+ },
};
static struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0308, "Acer Aspire 8942G", ALC662_FIXUP_ASPIRE),
SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD),
SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD),
+ SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
{}
};
{
int err;
struct snd_akm4xxx *ak;
+ unsigned char tmp;
if (ice->eeprom.subvendor == ICE1712_SUBDEVICE_DELTA1010 &&
ice->eeprom.gpiodir == 0x7b)
break;
}
+ /* initialize the SPI clock to high */
+ tmp = snd_ice1712_read(ice, ICE1712_IREG_GPIO_DATA);
+ tmp |= ICE1712_DELTA_AP_CCLK;
+ snd_ice1712_write(ice, ICE1712_IREG_GPIO_DATA, tmp);
+ udelay(5);
+
/* initialize spdif */
switch (ice->eeprom.subvendor) {
case ICE1712_SUBDEVICE_AUDIOPHILE:
void (*update_dac_volume)(struct oxygen *chip);
void (*update_dac_mute)(struct oxygen *chip);
void (*update_center_lfe_mix)(struct oxygen *chip, bool mixed);
+ unsigned int (*adjust_dac_routing)(struct oxygen *chip,
+ unsigned int play_routing);
void (*gpio_changed)(struct oxygen *chip);
void (*uart_input)(struct oxygen *chip);
void (*ac97_switch)(struct oxygen *chip,
(1 << OXYGEN_PLAY_DAC1_SOURCE_SHIFT) |
(2 << OXYGEN_PLAY_DAC2_SOURCE_SHIFT) |
(3 << OXYGEN_PLAY_DAC3_SOURCE_SHIFT);
+ if (chip->model.adjust_dac_routing)
+ reg_value = chip->model.adjust_dac_routing(chip, reg_value);
oxygen_write16_masked(chip, OXYGEN_PLAY_ROUTING, reg_value,
OXYGEN_PLAY_DAC0_SOURCE_MASK |
OXYGEN_PLAY_DAC1_SOURCE_MASK |
unsigned int i;
snd_iprintf(buffer, "\nCS4398: 7?");
- for (i = 2; i <= 8; ++i)
+ for (i = 2; i < 8; ++i)
snd_iprintf(buffer, " %02x", data->cs4398_regs[i]);
snd_iprintf(buffer, "\n");
dump_cs4362a_registers(data, buffer);
*
* SPI 0 -> CS4245
*
+ * I²S 1 -> CS4245
+ * I²S 2 -> CS4361 (center/LFE)
+ * I²S 3 -> CS4361 (surround)
+ * I²S 4 -> CS4361 (front)
+ *
* GPIO 3 <- ?
* GPIO 4 <- headphone detect
* GPIO 5 -> route input jack to line-in (0) or mic-in (1)
* input 1 <- aux
* input 2 <- front mic
* input 4 <- line/mic
+ * DAC out -> headphones
* aux out -> front panel headphones
*/
cs4245_write_cached(chip, CS4245_ADC_CTRL, value);
}
+static inline unsigned int shift_bits(unsigned int value,
+ unsigned int shift_from,
+ unsigned int shift_to,
+ unsigned int mask)
+{
+ if (shift_from < shift_to)
+ return (value << (shift_to - shift_from)) & mask;
+ else
+ return (value >> (shift_from - shift_to)) & mask;
+}
+
+static unsigned int adjust_dg_dac_routing(struct oxygen *chip,
+ unsigned int play_routing)
+{
+ return (play_routing & OXYGEN_PLAY_DAC0_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC2_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC1_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC1_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC1_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC2_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC2_SOURCE_MASK) |
+ shift_bits(play_routing,
+ OXYGEN_PLAY_DAC0_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC3_SOURCE_SHIFT,
+ OXYGEN_PLAY_DAC3_SOURCE_MASK);
+}
+
static int output_switch_info(struct snd_kcontrol *ctl,
struct snd_ctl_elem_info *info)
{
.resume = dg_resume,
.set_dac_params = set_cs4245_dac_params,
.set_adc_params = set_cs4245_adc_params,
+ .adjust_dac_routing = adjust_dg_dac_routing,
.dump_registers = dump_cs4245_registers,
.model_data_size = sizeof(struct dg),
.device_config = PLAYBACK_0_TO_I2S |
#define __PDAUDIOCF_H
#include <sound/pcm.h>
-#include <asm/io.h>
+#include <linux/io.h>
#include <linux/interrupt.h>
#include <pcmcia/cistpl.h>
#include <pcmcia/ds.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/firmware.h>
+#include <linux/io.h>
#include <sound/core.h>
-#include <asm/io.h>
#include "vxpocket.h"
}
dev->pcm->private_data = dev;
- strcpy(dev->pcm->name, dev->product_name);
+ strlcpy(dev->pcm->name, dev->product_name, sizeof(dev->pcm->name));
memset(dev->sub_playback, 0, sizeof(dev->sub_playback));
memset(dev->sub_capture, 0, sizeof(dev->sub_capture));
if (ret < 0)
return ret;
- strcpy(rmidi->name, device->product_name);
+ strlcpy(rmidi->name, device->product_name, sizeof(rmidi->name));
rmidi->info_flags = SNDRV_RAWMIDI_INFO_DUPLEX;
rmidi->private_data = device;
};
-/*E-mu 0202(0404) eXtension Unit(XU) control*/
+/*E-mu 0202/0404/0204 eXtension Unit(XU) control*/
enum {
USB_XU_CLOCK_RATE = 0xe301,
USB_XU_CLOCK_SOURCE = 0xe302,
cval->initialized = 1;
} else {
if (type == USB_XU_CLOCK_RATE) {
- /* E-Mu USB 0404/0202/TrackerPre
+ /* E-Mu USB 0404/0202/TrackerPre/0204
* samplerate control quirk
*/
cval->min = 0;
.idProduct = 0x3f0a,
.bInterfaceClass = USB_CLASS_AUDIO,
},
+{
+ /* E-Mu 0204 USB */
+ .match_flags = USB_DEVICE_ID_MATCH_DEVICE,
+ .idVendor = 0x041e,
+ .idProduct = 0x3f19,
+ .bInterfaceClass = USB_CLASS_AUDIO,
+},
/*
* Logitech QuickCam: bDeviceClass is vendor-specific, so generic interface
}
/*
- * For E-Mu 0404USB/0202USB/TrackerPre sample rate should be set for device,
+ * For E-Mu 0404USB/0202USB/TrackerPre/0204 sample rate should be set for device,
* not for interface.
*/
case USB_ID(0x041e, 0x3f02): /* E-Mu 0202 USB */
case USB_ID(0x041e, 0x3f04): /* E-Mu 0404 USB */
case USB_ID(0x041e, 0x3f0a): /* E-Mu Tracker Pre */
+ case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */
set_format_emu_quirk(subs, fmt);
break;
}
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Winit-self
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wpacked
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wredundant-decls
-EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wstack-protector
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wstrict-aliasing=3
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wswitch-default
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wswitch-enum
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wno-system-headers
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wundef
-EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wvolatile-register-var
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wwrite-strings
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wbad-function-cast
EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wmissing-declarations
CFLAGS := $(CFLAGS) -fstack-protector-all
endif
+ifeq ($(call try-cc,$(SOURCE_HELLO),-Werror -Wstack-protector),y)
+ CFLAGS := $(CFLAGS) -Wstack-protector
+endif
+
+ifeq ($(call try-cc,$(SOURCE_HELLO),-Werror -Wvolatile-register-var),y)
+ CFLAGS := $(CFLAGS) -Wvolatile-register-var
+endif
### --- END CONFIGURATION SECTION ---
continue;
offset = start + i;
- sprintf(cmd, "addr2line -e %s %016llx", filename, offset);
+ sprintf(cmd, "addr2line -e %s %016" PRIx64, filename, offset);
fp = popen(cmd, "r");
if (!fp)
continue;
for (offset = 0; offset < len; ++offset)
if (h->ip[offset] != 0)
- printf("%*Lx: %Lu\n", BITS_PER_LONG / 2,
+ printf("%*" PRIx64 ": %" PRIu64 "\n", BITS_PER_LONG / 2,
sym->start + offset, h->ip[offset]);
- printf("%*s: %Lu\n", BITS_PER_LONG / 2, "h->sum", h->sum);
+ printf("%*s: %" PRIu64 "\n", BITS_PER_LONG / 2, "h->sum", h->sum);
}
static int hist_entry__tty_annotate(struct hist_entry *he)
addr = data->ptr;
if (sym != NULL)
- snprintf(buf, sizeof(buf), "%s+%Lx", sym->name,
+ snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
addr - map->unmap_ip(map, sym->start));
else
- snprintf(buf, sizeof(buf), "%#Lx", addr);
+ snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
printf(" %-34s |", buf);
printf(" %9llu/%-5lu | %9llu/%-5lu | %8lu | %8lu | %6.3f%%\n",
pr_info("%10u ", st->nr_acquired);
pr_info("%10u ", st->nr_contended);
- pr_info("%15llu ", st->wait_time_total);
- pr_info("%15llu ", st->wait_time_max);
- pr_info("%15llu ", st->wait_time_min == ULLONG_MAX ?
+ pr_info("%15" PRIu64 " ", st->wait_time_total);
+ pr_info("%15" PRIu64 " ", st->wait_time_max);
+ pr_info("%15" PRIu64 " ", st->wait_time_min == ULLONG_MAX ?
0 : st->wait_time_min);
pr_info("\n");
}
perf_session__process_machines(session, event__synthesize_guest_os);
if (!system_wide)
- event__synthesize_thread(target_tid, process_synthesized_event,
- session);
+ event__synthesize_thread_map(threads, process_synthesized_event,
+ session);
else
event__synthesize_threads(process_synthesized_event, session);
* Approximate RIP event size: 24 bytes.
*/
fprintf(stderr,
- "[ perf record: Captured and wrote %.3f MB %s (~%lld samples) ]\n",
+ "[ perf record: Captured and wrote %.3f MB %s (~%" PRIu64 " samples) ]\n",
(double)bytes_written / 1024.0 / 1024.0,
output_name,
bytes_written / 24);
event->read.value);
}
- dump_printf(": %d %d %s %Lu\n", event->read.pid, event->read.tid,
+ dump_printf(": %d %d %s %" PRIu64 "\n", event->read.pid, event->read.tid,
attr ? __event_name(attr->type, attr->config) : "FAIL",
event->read.value);
}
run_measurement_overhead = min_delta;
- printf("run measurement overhead: %Ld nsecs\n", min_delta);
+ printf("run measurement overhead: %" PRIu64 " nsecs\n", min_delta);
}
static void calibrate_sleep_measurement_overhead(void)
min_delta -= 10000;
sleep_measurement_overhead = min_delta;
- printf("sleep measurement overhead: %Ld nsecs\n", min_delta);
+ printf("sleep measurement overhead: %" PRIu64 " nsecs\n", min_delta);
}
static struct sched_atom *
burn_nsecs(1e6);
T1 = get_nsecs();
- printf("the run test took %Ld nsecs\n", T1-T0);
+ printf("the run test took %" PRIu64 " nsecs\n", T1 - T0);
T0 = get_nsecs();
sleep_nsecs(1e6);
T1 = get_nsecs();
- printf("the sleep test took %Ld nsecs\n", T1-T0);
+ printf("the sleep test took %" PRIu64 " nsecs\n", T1 - T0);
}
#define FILL_FIELD(ptr, field, event, data) \
delta = 0;
if (delta < 0)
- die("hm, delta: %Ld < 0 ?\n", delta);
+ die("hm, delta: %" PRIu64 " < 0 ?\n", delta);
if (verbose) {
- printf(" ... switch from %s/%d to %s/%d [ran %Ld nsecs]\n",
+ printf(" ... switch from %s/%d to %s/%d [ran %" PRIu64 " nsecs]\n",
switch_event->prev_comm, switch_event->prev_pid,
switch_event->next_comm, switch_event->next_pid,
delta);
delta = 0;
if (delta < 0)
- die("hm, delta: %Ld < 0 ?\n", delta);
+ die("hm, delta: %" PRIu64 " < 0 ?\n", delta);
sched_out = perf_session__findnew(session, switch_event->prev_pid);
avg = work_list->total_lat / work_list->nb_atoms;
- printf("|%11.3f ms |%9llu | avg:%9.3f ms | max:%9.3f ms | max at: %9.6f s\n",
+ printf("|%11.3f ms |%9" PRIu64 " | avg:%9.3f ms | max:%9.3f ms | max at: %9.6f s\n",
(double)work_list->total_runtime / 1e6,
work_list->nb_atoms, (double)avg / 1e6,
(double)work_list->max_lat / 1e6,
delta = 0;
if (delta < 0)
- die("hm, delta: %Ld < 0 ?\n", delta);
+ die("hm, delta: %" PRIu64 " < 0 ?\n", delta);
sched_out = perf_session__findnew(session, switch_event->prev_pid);
}
printf(" -----------------------------------------------------------------------------------------\n");
- printf(" TOTAL: |%11.3f ms |%9Ld |\n",
+ printf(" TOTAL: |%11.3f ms |%9" PRIu64 " |\n",
(double)all_runtime/1e6, all_count);
printf(" ---------------------------------------------------\n");
if (session->sample_type & PERF_SAMPLE_RAW) {
if (debug_mode) {
if (sample->time < last_timestamp) {
- pr_err("Samples misordered, previous: %llu "
- "this: %llu\n", last_timestamp,
+ pr_err("Samples misordered, previous: %" PRIu64
+ " this: %" PRIu64 "\n", last_timestamp,
sample->time);
nr_unordered++;
}
ret = perf_session__process_events(session, &event_ops);
if (debug_mode)
- pr_err("Misordered timestamps: %llu\n", nr_unordered);
+ pr_err("Misordered timestamps: %" PRIu64 "\n", nr_unordered);
return ret;
}
update_stats(&ps->res_stats[i], count[i]);
if (verbose) {
- fprintf(stderr, "%s: %Ld %Ld %Ld\n", event_name(counter),
- count[0], count[1], count[2]);
+ fprintf(stderr, "%s: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+ event_name(counter), count[0], count[1], count[2]);
}
/*
if (llabs(skew) < page_size)
continue;
- pr_debug("%#Lx: diff end addr for %s v: %#Lx k: %#Lx\n",
+ pr_debug("%#" PRIx64 ": diff end addr for %s v: %#" PRIx64 " k: %#" PRIx64 "\n",
sym->start, sym->name, sym->end, pair->end);
} else {
struct rb_node *nnd;
goto detour;
}
- pr_debug("%#Lx: diff name v: %s k: %s\n",
+ pr_debug("%#" PRIx64 ": diff name v: %s k: %s\n",
sym->start, sym->name, pair->name);
}
} else
- pr_debug("%#Lx: %s not on kallsyms\n", sym->start, sym->name);
+ pr_debug("%#" PRIx64 ": %s not on kallsyms\n", sym->start, sym->name);
err = -1;
}
if (pair->start == pos->start) {
pair->priv = 1;
- pr_info(" %Lx-%Lx %Lx %s in kallsyms as",
+ pr_info(" %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
pos->start, pos->end, pos->pgoff, pos->dso->name);
if (pos->pgoff != pair->pgoff || pos->end != pair->end)
- pr_info(": \n*%Lx-%Lx %Lx",
+ pr_info(": \n*%" PRIx64 "-%" PRIx64 " %" PRIx64 "",
pair->start, pair->end, pair->pgoff);
pr_info(" %s\n", pair->dso->name);
pair->priv = 1;
}
if (evsel->counts->cpu[0].val != nr_open_calls) {
- pr_debug("perf_evsel__read_on_cpu: expected to intercept %d calls, got %Ld\n",
+ pr_debug("perf_evsel__read_on_cpu: expected to intercept %d calls, got %" PRIu64 "\n",
nr_open_calls, evsel->counts->cpu[0].val);
goto out_close_fd;
}
struct perf_evsel *evsel;
struct perf_event_attr attr;
unsigned int nr_open_calls = 111, i;
- cpu_set_t *cpu_set;
- size_t cpu_set_size;
+ cpu_set_t cpu_set;
int id = trace_event__id("sys_enter_open");
if (id < 0) {
return -1;
}
- cpu_set = CPU_ALLOC(cpus->nr);
- if (cpu_set == NULL)
- goto out_thread_map_delete;
-
- cpu_set_size = CPU_ALLOC_SIZE(cpus->nr);
- CPU_ZERO_S(cpu_set_size, cpu_set);
+ CPU_ZERO(&cpu_set);
memset(&attr, 0, sizeof(attr));
attr.type = PERF_TYPE_TRACEPOINT;
evsel = perf_evsel__new(&attr, 0);
if (evsel == NULL) {
pr_debug("perf_evsel__new\n");
- goto out_cpu_free;
+ goto out_thread_map_delete;
}
if (perf_evsel__open(evsel, cpus, threads) < 0) {
for (cpu = 0; cpu < cpus->nr; ++cpu) {
unsigned int ncalls = nr_open_calls + cpu;
+ /*
+ * XXX eventually lift this restriction in a way that
+ * keeps perf building on older glibc installations
+ * without CPU_ALLOC. 1024 cpus in 2010 still seems
+ * a reasonable upper limit tho :-)
+ */
+ if (cpus->map[cpu] >= CPU_SETSIZE) {
+ pr_debug("Ignoring CPU %d\n", cpus->map[cpu]);
+ continue;
+ }
- CPU_SET(cpu, cpu_set);
- sched_setaffinity(0, cpu_set_size, cpu_set);
+ CPU_SET(cpus->map[cpu], &cpu_set);
+ if (sched_setaffinity(0, sizeof(cpu_set), &cpu_set) < 0) {
+ pr_debug("sched_setaffinity() failed on CPU %d: %s ",
+ cpus->map[cpu],
+ strerror(errno));
+ goto out_close_fd;
+ }
for (i = 0; i < ncalls; ++i) {
fd = open("/etc/passwd", O_RDONLY);
close(fd);
}
- CPU_CLR(cpu, cpu_set);
+ CPU_CLR(cpus->map[cpu], &cpu_set);
}
/*
for (cpu = 0; cpu < cpus->nr; ++cpu) {
unsigned int expected;
+ if (cpus->map[cpu] >= CPU_SETSIZE)
+ continue;
+
if (perf_evsel__read_on_cpu(evsel, cpu, 0) < 0) {
pr_debug("perf_evsel__open_read_on_cpu\n");
goto out_close_fd;
expected = nr_open_calls + cpu;
if (evsel->counts->cpu[cpu].val != expected) {
- pr_debug("perf_evsel__read_on_cpu: expected to intercept %d calls on cpu %d, got %Ld\n",
- expected, cpu, evsel->counts->cpu[cpu].val);
+ pr_debug("perf_evsel__read_on_cpu: expected to intercept %d calls on cpu %d, got %" PRIu64 "\n",
+ expected, cpus->map[cpu], evsel->counts->cpu[cpu].val);
goto out_close_fd;
}
}
perf_evsel__close_fd(evsel, 1, threads->nr);
out_evsel_delete:
perf_evsel__delete(evsel);
-out_cpu_free:
- CPU_FREE(cpu_set);
out_thread_map_delete:
thread_map__delete(threads);
return err;
#include <stdio.h>
#include <termios.h>
#include <unistd.h>
+#include <inttypes.h>
#include <errno.h>
#include <time.h>
len = sym->end - sym->start;
sprintf(command,
- "objdump --start-address=%#0*Lx --stop-address=%#0*Lx -dS %s",
+ "objdump --start-address=%#0*" PRIx64 " --stop-address=%#0*" PRIx64 " -dS %s",
BITS_PER_LONG / 4, map__rip_2objdump(map, sym->start),
BITS_PER_LONG / 4, map__rip_2objdump(map, sym->end), path);
struct source_line *line;
char pattern[PATTERN_LEN + 1];
- sprintf(pattern, "%0*Lx <", BITS_PER_LONG / 4,
+ sprintf(pattern, "%0*" PRIx64 " <", BITS_PER_LONG / 4,
map__rip_2objdump(syme->map, symbol->start));
pthread_mutex_lock(&syme->src->lock);
if (nr_counters == 1 || !display_weighted) {
struct perf_evsel *first;
first = list_entry(evsel_list.next, struct perf_evsel, node);
- printf("%Ld", first->attr.sample_period);
+ printf("%" PRIu64, (uint64_t)first->attr.sample_period);
if (freq)
printf("Hz ");
else
percent_color_fprintf(stdout, "%4.1f%%", pcnt);
if (verbose)
- printf(" %016llx", sym->start);
+ printf(" %016" PRIx64, sym->start);
printf(" %-*.*s", sym_width, sym_width, sym->name);
printf(" %-*.*s\n", dso_width, dso_width,
dso_width >= syme->map->dso->long_name_len ?
return -ENOMEM;
if (target_tid != -1)
- event__synthesize_thread(target_tid, event__process, session);
+ event__synthesize_thread_map(threads, event__process, session);
else
event__synthesize_threads(event__process, session);
process, session);
}
-int event__synthesize_thread(pid_t pid, event__handler_t process,
- struct perf_session *session)
+int event__synthesize_thread_map(struct thread_map *threads,
+ event__handler_t process,
+ struct perf_session *session)
{
event_t *comm_event, *mmap_event;
- int err = -1;
+ int err = -1, thread;
comm_event = malloc(sizeof(comm_event->comm) + session->id_hdr_size);
if (comm_event == NULL)
if (mmap_event == NULL)
goto out_free_comm;
- err = __event__synthesize_thread(comm_event, mmap_event, pid,
- process, session);
+ err = 0;
+ for (thread = 0; thread < threads->nr; ++thread) {
+ if (__event__synthesize_thread(comm_event, mmap_event,
+ threads->map[thread],
+ process, session)) {
+ err = -1;
+ break;
+ }
+ }
free(mmap_event);
out_free_comm:
free(comm_event);
int event__process_lost(event_t *self, struct sample_data *sample __used,
struct perf_session *session)
{
- dump_printf(": id:%Ld: lost:%Ld\n", self->lost.id, self->lost.lost);
+ dump_printf(": id:%" PRIu64 ": lost:%" PRIu64 "\n",
+ self->lost.id, self->lost.lost);
session->hists.stats.total_lost += self->lost.lost;
return 0;
}
u8 cpumode = self->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
int ret = 0;
- dump_printf(" %d/%d: [%#Lx(%#Lx) @ %#Lx]: %s\n",
+ dump_printf(" %d/%d: [%#" PRIx64 "(%#" PRIx64 ") @ %#" PRIx64 "]: %s\n",
self->mmap.pid, self->mmap.tid, self->mmap.start,
self->mmap.len, self->mmap.pgoff, self->mmap.filename);
void event__print_totals(void);
struct perf_session;
+struct thread_map;
typedef int (*event__handler_synth_t)(event_t *event,
struct perf_session *session);
typedef int (*event__handler_t)(event_t *event, struct sample_data *sample,
struct perf_session *session);
-int event__synthesize_thread(pid_t pid, event__handler_t process,
- struct perf_session *session);
+int event__synthesize_thread_map(struct thread_map *threads,
+ event__handler_t process,
+ struct perf_session *session);
int event__synthesize_threads(event__handler_t process,
struct perf_session *session);
int event__synthesize_kernel_mmap(event__handler_t process,
int cpu, thread;
struct perf_counts_values *aggr = &evsel->counts->aggr, count;
- aggr->val = 0;
+ aggr->val = aggr->ena = aggr->run = 0;
for (cpu = 0; cpu < ncpus; cpu++) {
for (thread = 0; thread < nthreads; thread++) {
int feat, int fd)
{
if (lseek(fd, self->offset, SEEK_SET) == (off_t)-1) {
- pr_debug("Failed to lseek to %Ld offset for feature %d, "
- "continuing...\n", self->offset, feat);
+ pr_debug("Failed to lseek to %" PRIu64 " offset for feature "
+ "%d, continuing...\n", self->offset, feat);
return 0;
}
}
}
} else
- ret = snprintf(s, size, sep ? "%lld" : "%12lld ", period);
+ ret = snprintf(s, size, sep ? "%" PRIu64 : "%12" PRIu64 " ", period);
if (symbol_conf.show_nr_samples) {
if (sep)
- ret += snprintf(s + ret, size - ret, "%c%lld", *sep, period);
+ ret += snprintf(s + ret, size - ret, "%c%" PRIu64, *sep, period);
else
- ret += snprintf(s + ret, size - ret, "%11lld", period);
+ ret += snprintf(s + ret, size - ret, "%11" PRIu64, period);
}
if (pair_hists) {
sym_size = sym->end - sym->start;
offset = ip - sym->start;
- pr_debug3("%s: ip=%#Lx\n", __func__, self->ms.map->unmap_ip(self->ms.map, ip));
+ pr_debug3("%s: ip=%#" PRIx64 "\n", __func__, self->ms.map->unmap_ip(self->ms.map, ip));
if (offset >= sym_size)
return 0;
h->sum++;
h->ip[offset]++;
- pr_debug3("%#Lx %s: period++ [ip: %#Lx, %#Lx] => %Ld\n", self->ms.sym->start,
- self->ms.sym->name, ip, ip - self->ms.sym->start, h->ip[offset]);
+ pr_debug3("%#" PRIx64 " %s: period++ [ip: %#" PRIx64 ", %#" PRIx64
+ "] => %" PRIu64 "\n", self->ms.sym->start, self->ms.sym->name,
+ ip, ip - self->ms.sym->start, h->ip[offset]);
return 0;
}
goto out_free_filename;
}
- pr_debug("%s: filename=%s, sym=%s, start=%#Lx, end=%#Lx\n", __func__,
+ pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
filename, sym->name, map->unmap_ip(map, sym->start),
map->unmap_ip(map, sym->end));
dso, dso->long_name, sym, sym->name);
snprintf(command, sizeof(command),
- "objdump --start-address=0x%016Lx --stop-address=0x%016Lx -dS -C %s|grep -v %s|expand",
+ "objdump --start-address=0x%016" PRIx64 " --stop-address=0x%016" PRIx64 " -dS -C %s|grep -v %s|expand",
map__rip_2objdump(map, sym->start),
map__rip_2objdump(map, sym->end),
symfs_filename, filename);
#define _PERF_LINUX_BITOPS_H_
#include <linux/kernel.h>
+#include <linux/compiler.h>
#include <asm/hweight.h>
#define BITS_PER_LONG __WORDSIZE
#include "symbol.h"
#include <errno.h>
+#include <inttypes.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>
size_t map__fprintf(struct map *self, FILE *fp)
{
- return fprintf(fp, " %Lx-%Lx %Lx %s\n",
+ return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
self->start, self->end, self->pgoff, self->dso->name);
}
static char buf[32];
if (type == PERF_TYPE_RAW) {
- sprintf(buf, "raw 0x%llx", config);
+ sprintf(buf, "raw 0x%" PRIx64, config);
return buf;
}
};
extern struct tracepoint_path *tracepoint_id_to_path(u64 config);
-extern bool have_tracepoints(struct list_head *evsel_list);
+extern bool have_tracepoints(struct list_head *evlist);
extern int nr_counters;
sym = __find_kernel_function_by_name(tp->symbol, &map);
if (sym) {
addr = map->unmap_ip(map, sym->start + tp->offset);
- pr_debug("try to find %s+%ld@%llx\n", tp->symbol,
+ pr_debug("try to find %s+%ld@%" PRIx64 "\n", tp->symbol,
tp->offset, addr);
ret = find_perf_probe_point((unsigned long)addr, pp);
}
{
unsigned int i;
- printf("... chain: nr:%Lu\n", sample->callchain->nr);
+ printf("... chain: nr:%" PRIu64 "\n", sample->callchain->nr);
for (i = 0; i < sample->callchain->nr; i++)
- printf("..... %2d: %016Lx\n", i, sample->callchain->ips[i]);
+ printf("..... %2d: %016" PRIx64 "\n",
+ i, sample->callchain->ips[i]);
}
static void perf_session__print_tstamp(struct perf_session *session,
printf("%u ", sample->cpu);
if (session->sample_type & PERF_SAMPLE_TIME)
- printf("%Lu ", sample->time);
+ printf("%" PRIu64 " ", sample->time);
}
static void dump_event(struct perf_session *session, event_t *event,
if (!dump_trace)
return;
- printf("\n%#Lx [%#x]: event: %d\n", file_offset, event->header.size,
- event->header.type);
+ printf("\n%#" PRIx64 " [%#x]: event: %d\n",
+ file_offset, event->header.size, event->header.type);
trace_event(event);
if (sample)
perf_session__print_tstamp(session, event, sample);
- printf("%#Lx [%#x]: PERF_RECORD_%s", file_offset, event->header.size,
- event__get_event_name(event->header.type));
+ printf("%#" PRIx64 " [%#x]: PERF_RECORD_%s", file_offset,
+ event->header.size, event__get_event_name(event->header.type));
}
static void dump_sample(struct perf_session *session, event_t *event,
if (!dump_trace)
return;
- printf("(IP, %d): %d/%d: %#Lx period: %Ld\n", event->header.misc,
- sample->pid, sample->tid, sample->ip, sample->period);
+ printf("(IP, %d): %d/%d: %#" PRIx64 " period: %" PRIu64 "\n",
+ event->header.misc, sample->pid, sample->tid, sample->ip,
+ sample->period);
if (session->sample_type & PERF_SAMPLE_CALLCHAIN)
callchain__printf(sample);
{
if (ops->lost == event__process_lost &&
session->hists.stats.total_lost != 0) {
- ui__warning("Processed %Lu events and LOST %Lu!\n\n"
- "Check IO/CPU overload!\n\n",
+ ui__warning("Processed %" PRIu64 " events and LOST %" PRIu64
+ "!\n\nCheck IO/CPU overload!\n\n",
session->hists.stats.total_period,
session->hists.stats.total_lost);
}
if (size == 0 ||
(skip = perf_session__process_event(self, &event, ops, head)) < 0) {
- dump_printf("%#Lx [%#x]: skipping unknown header type: %d\n",
+ dump_printf("%#" PRIx64 " [%#x]: skipping unknown header type: %d\n",
head, event.header.size, event.header.type);
/*
* assume we lost track of the stream, check alignment, and
if (size == 0 ||
perf_session__process_event(session, event, ops, file_pos) < 0) {
- dump_printf("%#Lx [%#x]: skipping unknown header type: %d\n",
+ dump_printf("%#" PRIx64 " [%#x]: skipping unknown header type: %d\n",
file_offset + head, event->header.size,
event->header.type);
/*
* of the License.
*/
+#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
return cpu2slot(cpu) * SLOT_MULT;
}
-static double time2pixels(u64 time)
+static double time2pixels(u64 __time)
{
double X;
- X = 1.0 * svg_page_width * (time - first_time) / (last_time - first_time);
+ X = 1.0 * svg_page_width * (__time - first_time) / (last_time - first_time);
return X;
}
total_height = (1 + rows + cpu2slot(cpus)) * SLOT_MULT;
fprintf(svgfile, "<?xml version=\"1.0\" standalone=\"no\"?> \n");
- fprintf(svgfile, "<svg width=\"%i\" height=\"%llu\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\">\n", svg_page_width, total_height);
+ fprintf(svgfile, "<svg width=\"%i\" height=\"%" PRIu64 "\" version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\">\n", svg_page_width, total_height);
fprintf(svgfile, "<defs>\n <style type=\"text/css\">\n <![CDATA[\n");
color = 128;
}
- fprintf(svgfile, "<line x1=\"%4.8f\" y1=\"%4.2f\" x2=\"%4.8f\" y2=\"%llu\" style=\"stroke:rgb(%i,%i,%i);stroke-width:%1.3f\"/>\n",
+ fprintf(svgfile, "<line x1=\"%4.8f\" y1=\"%4.2f\" x2=\"%4.8f\" y2=\"%" PRIu64 "\" style=\"stroke:rgb(%i,%i,%i);stroke-width:%1.3f\"/>\n",
time2pixels(i), SLOT_MULT/2, time2pixels(i), total_height, color, color, color, thickness);
i += 10000000;
#include <sys/param.h>
#include <fcntl.h>
#include <unistd.h>
+#include <inttypes.h>
#include "build-id.h"
#include "debug.h"
#include "symbol.h"
self->binding = binding;
self->namelen = namelen - 1;
- pr_debug4("%s: %s %#Lx-%#Lx\n", __func__, name, start, self->end);
+ pr_debug4("%s: %s %#" PRIx64 "-%#" PRIx64 "\n", __func__, name, start, self->end);
memcpy(self->name, name, namelen);
static size_t symbol__fprintf(struct symbol *self, FILE *fp)
{
- return fprintf(fp, " %llx-%llx %c %s\n",
+ return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %c %s\n",
self->start, self->end,
self->binding == STB_GLOBAL ? 'g' :
self->binding == STB_LOCAL ? 'l' : 'w',
section_name = elf_sec__name(&shdr, secstrs);
+ /* On ARM, symbols for thumb functions have 1 added to
+ * the symbol address as a flag - remove it */
+ if ((ehdr.e_machine == EM_ARM) &&
+ (map->type == MAP__FUNCTION) &&
+ (sym.st_value & 1))
+ --sym.st_value;
+
if (self->kernel != DSO_TYPE_USER || kmodule) {
char dso_name[PATH_MAX];
}
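The ARM hunk above strips the Thumb interworking flag from function symbols. As a rough illustration (not the patch itself; the helper below is hypothetical), the fix amounts to clearing bit 0 of the ELF st_value:

#include <stdint.h>

/*
 * On ARM, an ELF function symbol that points at Thumb code has bit 0 of
 * st_value set as an interworking flag; the real instruction address is
 * the value with that bit cleared (which is what --sym.st_value does for
 * an odd value).
 */
static uint64_t arm_function_addr(uint64_t st_value, int is_arm, int is_function)
{
	if (is_arm && is_function && (st_value & 1))
		return st_value & ~(uint64_t)1;
	return st_value;
}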
if (curr_dso->adjust_symbols) {
- pr_debug4("%s: adjusting symbol: st_value: %#Lx "
- "sh_addr: %#Lx sh_offset: %#Lx\n", __func__,
+ pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " "
+ "sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n", __func__,
(u64)sym.st_value, (u64)shdr.sh_addr,
(u64)shdr.sh_offset);
sym.st_value -= shdr.sh_addr - shdr.sh_offset;
#ifndef __PERF_TYPES_H
#define __PERF_TYPES_H
+#include <stdint.h>
+
/*
- * We define u64 as unsigned long long for every architecture
- * so that we can print it with %Lx without getting warnings.
+ * We define u64 as uint64_t for every architecture
+ * so that we can print it with "%"PRIx64 without getting warnings.
*/
-typedef unsigned long long u64;
-typedef signed long long s64;
+typedef uint64_t u64;
+typedef int64_t s64;
typedef unsigned int u32;
typedef signed int s32;
typedef unsigned short u16;
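With u64 now a uint64_t, format strings have to use the <inttypes.h> PRI* macros rather than %Lx or %llx, which is what the bulk of this series converts. A minimal sketch of the idiom (standalone, not from the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;	/* mirrors the perf typedef above */

int main(void)
{
	u64 addr = 0xffffffff81000000ULL;

	/*
	 * "%" PRIx64 expands to the correct length modifier ("lx" or "llx")
	 * for uint64_t on every architecture, so no cast and no warning.
	 */
	printf("start: %#016" PRIx64 "\n", addr);
	printf("count: %" PRIu64 "\n", (u64)42);
	return 0;
}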
if (self->ms.sym)
return self->ms.sym->name;
- snprintf(bf, bfsize, "%#Lx", self->ip);
+ snprintf(bf, bfsize, "%#" PRIx64, self->ip);
return bf;
}
#include "../libslang.h"
#include <elf.h>
+#include <inttypes.h>
#include <sys/ttydefaults.h>
#include <ctype.h>
#include <string.h>
int width;
ui_browser__set_percent_color(self, 0, current_entry);
- slsmg_printf("%*llx %*llx %c ",
+ slsmg_printf("%*" PRIx64 " %*" PRIx64 " %c ",
mb->addrlen, sym->start, mb->addrlen, sym->end,
sym->binding == STB_GLOBAL ? 'g' :
sym->binding == STB_LOCAL ? 'l' : 'w');
++mb.b.nr_entries;
}
- mb.addrlen = snprintf(tmp, sizeof(tmp), "%llx", maxaddr);
+ mb.addrlen = snprintf(tmp, sizeof(tmp), "%" PRIx64, maxaddr);
return map_browser__run(&mb);
}
if (width > tidwidth)
tidwidth = width;
for (j = 0; j < values->counters; j++) {
- width = snprintf(NULL, 0, "%Lu", values->value[i][j]);
+ width = snprintf(NULL, 0, "%" PRIu64, values->value[i][j]);
if (width > counterwidth[j])
counterwidth[j] = width;
}
fprintf(fp, " %*d %*d", pidwidth, values->pid[i],
tidwidth, values->tid[i]);
for (j = 0; j < values->counters; j++)
- fprintf(fp, " %*Lu",
+ fprintf(fp, " %*" PRIu64,
counterwidth[j], values->value[i][j]);
fprintf(fp, "\n");
}
width = strlen(values->countername[j]);
if (width > namewidth)
namewidth = width;
- width = snprintf(NULL, 0, "%llx", values->counterrawid[j]);
+ width = snprintf(NULL, 0, "%" PRIx64, values->counterrawid[j]);
if (width > rawwidth)
rawwidth = width;
}
for (i = 0; i < values->threads; i++) {
for (j = 0; j < values->counters; j++) {
- width = snprintf(NULL, 0, "%Lu", values->value[i][j]);
+ width = snprintf(NULL, 0, "%" PRIu64, values->value[i][j]);
if (width > countwidth)
countwidth = width;
}
countwidth, "Count");
for (i = 0; i < values->threads; i++)
for (j = 0; j < values->counters; j++)
- fprintf(fp, " %*d %*d %*s %*llx %*Lu\n",
+ fprintf(fp, " %*d %*d %*s %*" PRIx64 " %*" PRIu64,
pidwidth, values->pid[i],
tidwidth, values->tid[i],
namewidth, values->countername[j],
int num_cpus;
-typedef struct per_cpu_counters {
+struct counters {
unsigned long long tsc; /* per thread */
unsigned long long aperf; /* per thread */
unsigned long long mperf; /* per thread */
int pkg;
int core;
int cpu;
- struct per_cpu_counters *next;
-} PCC;
+ struct counters *next;
+};
-PCC *pcc_even;
-PCC *pcc_odd;
-PCC *pcc_delta;
-PCC *pcc_average;
+struct counters *cnt_even;
+struct counters *cnt_odd;
+struct counters *cnt_delta;
+struct counters *cnt_average;
struct timeval tv_even;
struct timeval tv_odd;
struct timeval tv_delta;
return msr;
}
-void print_header()
+void print_header(void)
{
if (show_pkg)
fprintf(stderr, "pkg ");
putc('\n', stderr);
}
-void dump_pcc(PCC *pcc)
+void dump_cnt(struct counters *cnt)
{
- fprintf(stderr, "package: %d ", pcc->pkg);
- fprintf(stderr, "core:: %d ", pcc->core);
- fprintf(stderr, "CPU: %d ", pcc->cpu);
- fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
- fprintf(stderr, "c3: %016llX\n", pcc->c3);
- fprintf(stderr, "c6: %016llX\n", pcc->c6);
- fprintf(stderr, "c7: %016llX\n", pcc->c7);
- fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
- fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
- fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
- fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
- fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
- fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+ fprintf(stderr, "package: %d ", cnt->pkg);
+ fprintf(stderr, "core:: %d ", cnt->core);
+ fprintf(stderr, "CPU: %d ", cnt->cpu);
+ fprintf(stderr, "TSC: %016llX\n", cnt->tsc);
+ fprintf(stderr, "c3: %016llX\n", cnt->c3);
+ fprintf(stderr, "c6: %016llX\n", cnt->c6);
+ fprintf(stderr, "c7: %016llX\n", cnt->c7);
+ fprintf(stderr, "aperf: %016llX\n", cnt->aperf);
+ fprintf(stderr, "pc2: %016llX\n", cnt->pc2);
+ fprintf(stderr, "pc3: %016llX\n", cnt->pc3);
+ fprintf(stderr, "pc6: %016llX\n", cnt->pc6);
+ fprintf(stderr, "pc7: %016llX\n", cnt->pc7);
+ fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, cnt->extra_msr);
}
-void dump_list(PCC *pcc)
+void dump_list(struct counters *cnt)
{
- printf("dump_list 0x%p\n", pcc);
+ printf("dump_list 0x%p\n", cnt);
- for (; pcc; pcc = pcc->next)
- dump_pcc(pcc);
+ for (; cnt; cnt = cnt->next)
+ dump_cnt(cnt);
}
-void print_pcc(PCC *p)
+void print_cnt(struct counters *p)
{
double interval_float;
interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
/* topology columns, print blanks on 1st (average) line */
- if (p == pcc_average) {
+ if (p == cnt_average) {
if (show_pkg)
fprintf(stderr, " ");
if (show_core)
putc('\n', stderr);
}
-void print_counters(PCC *cnt)
+void print_counters(struct counters *counters)
{
- PCC *pcc;
+ struct counters *cnt;
print_header();
if (num_cpus > 1)
- print_pcc(pcc_average);
+ print_cnt(cnt_average);
- for (pcc = cnt; pcc != NULL; pcc = pcc->next)
- print_pcc(pcc);
+ for (cnt = counters; cnt != NULL; cnt = cnt->next)
+ print_cnt(cnt);
}
#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
-
-int compute_delta(PCC *after, PCC *before, PCC *delta)
+int compute_delta(struct counters *after,
+ struct counters *before, struct counters *delta)
{
int errors = 0;
int perf_err = 0;
delta->extra_msr = after->extra_msr;
if (errors) {
fprintf(stderr, "ERROR cpu%d before:\n", before->cpu);
- dump_pcc(before);
+ dump_cnt(before);
fprintf(stderr, "ERROR cpu%d after:\n", before->cpu);
- dump_pcc(after);
+ dump_cnt(after);
errors = 0;
}
}
return 0;
}
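compute_delta() leans on the SUBTRACT_COUNTER() macro above, which stores the difference through the comma operator and evaluates to true when the counter went backwards. A rough usage sketch (hypothetical helper, not the actual turbostat code):

#define SUBTRACT_COUNTER(after, before, delta) \
	(delta = (after - before), (before > after))

/* returns non-zero if the counter wrapped or was reset between samples */
static int tsc_delta(unsigned long long after_tsc,
		     unsigned long long before_tsc,
		     unsigned long long *delta_out)
{
	unsigned long long d;
	int went_backwards = SUBTRACT_COUNTER(after_tsc, before_tsc, d);

	*delta_out = d;
	return went_backwards;
}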
-void compute_average(PCC *delta, PCC *avg)
+void compute_average(struct counters *delta, struct counters *avg)
{
- PCC *sum;
+ struct counters *sum;
- sum = calloc(1, sizeof(PCC));
+ sum = calloc(1, sizeof(struct counters));
if (sum == NULL) {
perror("calloc sum");
exit(1);
free(sum);
}
-void get_counters(PCC *pcc)
+void get_counters(struct counters *cnt)
{
- for ( ; pcc; pcc = pcc->next) {
- pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+ for ( ; cnt; cnt = cnt->next) {
+ cnt->tsc = get_msr(cnt->cpu, MSR_TSC);
if (do_nhm_cstates)
- pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+ cnt->c3 = get_msr(cnt->cpu, MSR_CORE_C3_RESIDENCY);
if (do_nhm_cstates)
- pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+ cnt->c6 = get_msr(cnt->cpu, MSR_CORE_C6_RESIDENCY);
if (do_snb_cstates)
- pcc->c7 = get_msr(pcc->cpu, MSR_CORE_C7_RESIDENCY);
+ cnt->c7 = get_msr(cnt->cpu, MSR_CORE_C7_RESIDENCY);
if (has_aperf)
- pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+ cnt->aperf = get_msr(cnt->cpu, MSR_APERF);
if (has_aperf)
- pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+ cnt->mperf = get_msr(cnt->cpu, MSR_MPERF);
if (do_snb_cstates)
- pcc->pc2 = get_msr(pcc->cpu, MSR_PKG_C2_RESIDENCY);
+ cnt->pc2 = get_msr(cnt->cpu, MSR_PKG_C2_RESIDENCY);
if (do_nhm_cstates)
- pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+ cnt->pc3 = get_msr(cnt->cpu, MSR_PKG_C3_RESIDENCY);
if (do_nhm_cstates)
- pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+ cnt->pc6 = get_msr(cnt->cpu, MSR_PKG_C6_RESIDENCY);
if (do_snb_cstates)
- pcc->pc7 = get_msr(pcc->cpu, MSR_PKG_C7_RESIDENCY);
+ cnt->pc7 = get_msr(cnt->cpu, MSR_PKG_C7_RESIDENCY);
if (extra_msr_offset)
- pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+ cnt->extra_msr = get_msr(cnt->cpu, extra_msr_offset);
}
}
-
-void print_nehalem_info()
+void print_nehalem_info(void)
{
unsigned long long msr;
unsigned int ratio;
}
-void free_counter_list(PCC *list)
+void free_counter_list(struct counters *list)
{
- PCC *p;
+ struct counters *p;
for (p = list; p; ) {
- PCC *free_me;
+ struct counters *free_me;
free_me = p;
p = p->next;
free(free_me);
}
- return;
}
void free_all_counters(void)
{
- free_counter_list(pcc_even);
- pcc_even = NULL;
+ free_counter_list(cnt_even);
+ cnt_even = NULL;
- free_counter_list(pcc_odd);
- pcc_odd = NULL;
+ free_counter_list(cnt_odd);
+ cnt_odd = NULL;
- free_counter_list(pcc_delta);
- pcc_delta = NULL;
+ free_counter_list(cnt_delta);
+ cnt_delta = NULL;
- free_counter_list(pcc_average);
- pcc_average = NULL;
+ free_counter_list(cnt_average);
+ cnt_average = NULL;
}
-void insert_cpu_counters(PCC **list, PCC *new)
+void insert_counters(struct counters **list,
+ struct counters *new)
{
- PCC *prev;
+ struct counters *prev;
/*
* list was empty
*/
new->next = prev->next;
prev->next = new;
-
- return;
}
-void alloc_new_cpu_counters(int pkg, int core, int cpu)
+void alloc_new_counters(int pkg, int core, int cpu)
{
- PCC *new;
+ struct counters *new;
if (verbose > 1)
printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
- new = (PCC *)calloc(1, sizeof(PCC));
+ new = (struct counters *)calloc(1, sizeof(struct counters));
if (new == NULL) {
perror("calloc");
exit(1);
new->pkg = pkg;
new->core = core;
new->cpu = cpu;
- insert_cpu_counters(&pcc_odd, new);
+ insert_counters(&cnt_odd, new);
- new = (PCC *)calloc(1, sizeof(PCC));
+ new = (struct counters *)calloc(1,
+ sizeof(struct counters));
if (new == NULL) {
perror("calloc");
exit(1);
new->pkg = pkg;
new->core = core;
new->cpu = cpu;
- insert_cpu_counters(&pcc_even, new);
+ insert_counters(&cnt_even, new);
- new = (PCC *)calloc(1, sizeof(PCC));
+ new = (struct counters *)calloc(1, sizeof(struct counters));
if (new == NULL) {
perror("calloc");
exit(1);
new->pkg = pkg;
new->core = core;
new->cpu = cpu;
- insert_cpu_counters(&pcc_delta, new);
+ insert_counters(&cnt_delta, new);
- new = (PCC *)calloc(1, sizeof(PCC));
+ new = (struct counters *)calloc(1, sizeof(struct counters));
if (new == NULL) {
perror("calloc");
exit(1);
new->pkg = pkg;
new->core = core;
new->cpu = cpu;
- pcc_average = new;
+ cnt_average = new;
}
int get_physical_package_id(int cpu)
{
printf("turbostat: topology changed, re-initializing.\n");
free_all_counters();
- num_cpus = for_all_cpus(alloc_new_cpu_counters);
+ num_cpus = for_all_cpus(alloc_new_counters);
need_reinitialize = 0;
printf("num_cpus is now %d\n", num_cpus);
}
/*
* check to see if a cpu came on-line
*/
-void verify_num_cpus()
+void verify_num_cpus(void)
{
int new_num_cpus;
num_cpus, new_num_cpus);
need_reinitialize = 1;
}
-
- return;
}
void turbostat_loop()
{
restart:
- get_counters(pcc_even);
+ get_counters(cnt_even);
gettimeofday(&tv_even, (struct timezone *)NULL);
while (1) {
goto restart;
}
sleep(interval_sec);
- get_counters(pcc_odd);
+ get_counters(cnt_odd);
gettimeofday(&tv_odd, (struct timezone *)NULL);
- compute_delta(pcc_odd, pcc_even, pcc_delta);
+ compute_delta(cnt_odd, cnt_even, cnt_delta);
timersub(&tv_odd, &tv_even, &tv_delta);
- compute_average(pcc_delta, pcc_average);
- print_counters(pcc_delta);
+ compute_average(cnt_delta, cnt_average);
+ print_counters(cnt_delta);
if (need_reinitialize) {
re_initialize();
goto restart;
}
sleep(interval_sec);
- get_counters(pcc_even);
+ get_counters(cnt_even);
gettimeofday(&tv_even, (struct timezone *)NULL);
- compute_delta(pcc_even, pcc_odd, pcc_delta);
+ compute_delta(cnt_even, cnt_odd, cnt_delta);
timersub(&tv_even, &tv_odd, &tv_delta);
- compute_average(pcc_delta, pcc_average);
- print_counters(pcc_delta);
+ compute_average(cnt_delta, cnt_average);
+ print_counters(cnt_delta);
}
}
* this check is valid for both Intel and AMD
*/
asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
- has_invariant_tsc = edx && (1 << 8);
+ has_invariant_tsc = edx & (1 << 8);
if (!has_invariant_tsc) {
fprintf(stderr, "No invariant TSC\n");
*/
asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x6));
- has_aperf = ecx && (1 << 0);
+ has_aperf = ecx & (1 << 0);
if (!has_aperf) {
fprintf(stderr, "No APERF MSR\n");
exit(1);
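The two hunks above fix a classic operator mix-up: edx && (1 << 8) is a logical AND, so it is 1 whenever edx is non-zero, while testing a single CPUID feature bit needs bitwise &. A standalone sketch of the corrected check (x86 only, illustration rather than the turbostat code):

#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 0x80000007: EDX bit 8 advertises the invariant TSC */
	asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
		    : "a" (0x80000007));

	if (edx & (1 << 8))	/* bitwise AND isolates bit 8 */
		printf("invariant TSC present\n");
	else
		printf("no invariant TSC\n");
	return 0;
}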
check_dev_msr();
check_super_user();
- num_cpus = for_all_cpus(alloc_new_cpu_counters);
+ num_cpus = for_all_cpus(alloc_new_counters);
if (verbose)
print_nehalem_info();
{
int retval;
pid_t child_pid;
- get_counters(pcc_even);
+ get_counters(cnt_even);
gettimeofday(&tv_even, (struct timezone *)NULL);
child_pid = fork();
exit(1);
}
}
- get_counters(pcc_odd);
+ get_counters(cnt_odd);
gettimeofday(&tv_odd, (struct timezone *)NULL);
- retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+ retval = compute_delta(cnt_odd, cnt_even, cnt_delta);
timersub(&tv_odd, &tv_even, &tv_delta);
- compute_average(pcc_delta, pcc_average);
+ compute_average(cnt_delta, cnt_average);
if (!retval)
- print_counters(pcc_delta);
+ print_counters(cnt_delta);
fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
If you are not sure, leave it set to "0".
config RD_GZIP
- bool "Support initial ramdisks compressed using gzip" if EMBEDDED
+ bool "Support initial ramdisks compressed using gzip" if EXPERT
default y
depends on BLK_DEV_INITRD
select DECOMPRESS_GZIP
If unsure, say Y.
config RD_BZIP2
- bool "Support initial ramdisks compressed using bzip2" if EMBEDDED
- default !EMBEDDED
+ bool "Support initial ramdisks compressed using bzip2" if EXPERT
+ default !EXPERT
depends on BLK_DEV_INITRD
select DECOMPRESS_BZIP2
help
If unsure, say N.
config RD_LZMA
- bool "Support initial ramdisks compressed using LZMA" if EMBEDDED
- default !EMBEDDED
+ bool "Support initial ramdisks compressed using LZMA" if EXPERT
+ default !EXPERT
depends on BLK_DEV_INITRD
select DECOMPRESS_LZMA
help
If unsure, say N.
config RD_XZ
- bool "Support initial ramdisks compressed using XZ" if EMBEDDED
- default !EMBEDDED
+ bool "Support initial ramdisks compressed using XZ" if EXPERT
+ default !EXPERT
depends on BLK_DEV_INITRD
select DECOMPRESS_XZ
help
If unsure, say N.
config RD_LZO
- bool "Support initial ramdisks compressed using LZO" if EMBEDDED
- default !EMBEDDED
+ bool "Support initial ramdisks compressed using LZO" if EXPERT
+ default !EXPERT
depends on BLK_DEV_INITRD
select DECOMPRESS_LZO
help