- soft limit
- moving (recharging) a task's charges when it moves to another cgroup is selectable.
- usage threshold notifier
+ - memory pressure notifier
- oom-killer disable knob and oom-notifier
- Root cgroup has no limit controls.
memory.stat # show various statistics
memory.use_hierarchy # set/show hierarchical account enabled
memory.force_empty # trigger forced move charge to parent
+ memory.pressure_level # set memory pressure notifications
memory.swappiness # set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate # set/show controls of moving charges
But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.
- Exception: If CONFIG_CGROUP_CGROUP_MEMCG_SWAP is not used.
+ Exception: If CONFIG_MEMCG_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem (tmpfs) back
into memory, charges for those pages are accounted against the
caller of swapoff rather than the users of shmem.
under_oom 0 or 1 (if 1, the memory cgroup is under OOM, tasks may
be stopped.)
-11. TODO
+11. Memory Pressure
+
+The pressure level notifications can be used to monitor the memory
+allocation cost; based on the pressure, applications can implement
+different strategies for managing their memory resources. The pressure
+levels are defined as follows:
+
+The "low" level means that the system is reclaiming memory for new
+allocations. Monitoring this reclaiming activity might be useful for
+maintaining cache level. Upon notification, the program (typically
+"Activity Manager") might analyze vmstat and act in advance (i.e.
+prematurely shutdown unimportant services).
+
+The "medium" level means that the system is experiencing medium memory
+pressure, the system might be making swap, paging out active file caches,
+etc. Upon this event applications may decide to further analyze
+vmstat/zoneinfo/memcg or internal memory usage statistics and free any
+resources that can be easily reconstructed or re-read from a disk.
+
+The "critical" level means that the system is actively thrashing, it is
+about to out of memory (OOM) or even the in-kernel OOM killer is on its
+way to trigger. Applications should do whatever they can to help the
+system. It might be too late to consult with vmstat or any other
+statistics, so it's advisable to take an immediate action.
+
+The events are propagated upward until the event is handled, i.e. the
+events are not pass-through. Here is what this means: suppose you have
+three cgroups, A->B->C, an event listener is set up on each of A, B
+and C, and group C experiences some pressure. In this situation,
+only group C will receive the notification, i.e. groups A and B will not
+receive it. This is done to avoid excessive "broadcasting" of messages,
+which disturbs the system and which is especially bad if we are low on
+memory or thrashing. So, organize the cgroups wisely, or propagate the
+events manually (or ask us to implement pass-through events,
+explaining why you would need them).
+
+The file memory.pressure_level is only used to set up an eventfd. To
+register a notification, an application must:
+
+- create an eventfd using eventfd(2);
+- open memory.pressure_level;
+- write a string like "<event_fd> <fd of memory.pressure_level> <level>"
+ to cgroup.event_control.
+
+The application will be notified through the eventfd whenever memory
+pressure is at the specified level (or higher); see the sketch below.
+Read/write operations to memory.pressure_level are not implemented.
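+
+As an illustration, here is a minimal sketch in C of the registration
+sequence described above (error handling is trimmed, and the cgroup
+path "/sys/fs/cgroup/memory/foo" is only an assumption for the
+example):
+
+	#include <stdio.h>
+	#include <string.h>
+	#include <stdint.h>
+	#include <fcntl.h>
+	#include <unistd.h>
+	#include <sys/eventfd.h>
+
+	int main(void)
+	{
+		char line[64];
+		uint64_t count;
+		int efd = eventfd(0, 0);	/* step 1: create the eventfd */
+		int pfd = open("/sys/fs/cgroup/memory/foo/memory.pressure_level",
+			       O_RDONLY);	/* step 2: open pressure_level */
+		int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
+			       O_WRONLY);
+
+		if (efd < 0 || pfd < 0 || cfd < 0)
+			return 1;
+
+		/* step 3: "<event_fd> <fd of memory.pressure_level> <level>" */
+		snprintf(line, sizeof(line), "%d %d low", efd, pfd);
+		if (write(cfd, line, strlen(line)) < 0)
+			return 1;
+
+		/* each read blocks until a "low" (or higher) event fires */
+		while (read(efd, &count, sizeof(count)) == sizeof(count))
+			printf("memory pressure event, count=%llu\n",
+			       (unsigned long long)count);
+		return 0;
+	}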
+
+Test:
+
+  Here is a small script example that makes a new cgroup, sets up a
+  memory limit, sets up a notification in the cgroup and then makes the
+  cgroup experience critical memory pressure:
+
+ # cd /sys/fs/cgroup/memory/
+ # mkdir foo
+ # cd foo
+ # cgroup_event_listener memory.pressure_level low &
+ # echo 8000000 > memory.limit_in_bytes
+ # echo 8000000 > memory.memsw.limit_in_bytes
+ # echo $$ > tasks
+ # dd if=/dev/zero | read x
+
+ (Expect a bunch of notifications, and eventually, the oom-killer will
+ trigger.)
+
+12. TODO
1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first
-* Samsung's usb phy transceiver
+SAMSUNG USB-PHY controllers
-The Samsung's phy transceiver is used for controlling usb phy for
-s3c-hsotg as well as ehci-s5p and ohci-exynos usb controllers
-across Samsung SOCs.
+** Samsung's usb 2.0 phy transceiver
+
+Samsung's usb 2.0 phy transceiver is used for controlling the
+usb 2.0 phy for s3c-hsotg as well as the ehci-s5p and ohci-exynos
+usb controllers across Samsung SoCs.
TODO: Adding the PHY binding with controller(s) according to the under
- developement generic PHY driver.
+ development generic PHY driver.
Required properties:
Exynos4210:
-- compatible : should be "samsung,exynos4210-usbphy"
+- compatible : should be "samsung,exynos4210-usb2phy"
- reg : base physical address of the phy registers and length of memory mapped
region.
+- clocks: Clock IDs array as required by the controller.
+- clock-names: names of clocks corresponding to the IDs in the clocks
+  property, as requested by the controller driver.
Exynos5250:
-- compatible : should be "samsung,exynos5250-usbphy"
+- compatible : should be "samsung,exynos5250-usb2phy"
- reg : base physical address of the phy registers and length of memory mapped
region.
usbphy@125B0000 {
#address-cells = <1>;
#size-cells = <1>;
- compatible = "samsung,exynos4210-usbphy";
+ compatible = "samsung,exynos4210-usb2phy";
reg = <0x125B0000 0x100>;
ranges;
+ clocks = <&clock 2>, <&clock 305>;
+ clock-names = "xusbxti", "otg";
+
usbphy-sys {
/* USB device and host PHY_CONTROL registers */
reg = <0x10020704 0x8>;
};
};
+
+
+** Samsung's usb 3.0 phy transceiver
+
+Starting with exynos5250, Samsung SoCs have a usb 3.0 phy transceiver,
+which is used for controlling the usb 3.0 phy for the dwc3-exynos
+usb 3.0 controllers across Samsung SoCs.
+
+Required properties:
+
+Exynos5250:
+- compatible : should be "samsung,exynos5250-usb3phy"
+- reg : base physical address of the phy registers and length of memory mapped
+ region.
+- clocks: Clock IDs array as required by the controller.
+- clock-names: names of clocks corresponding to the IDs in the clocks property
+ as requested by the controller driver.
+
+Optional properties:
+- #address-cells: should be '1' when usbphy node has a child node with 'reg'
+ property.
+- #size-cells: should be '1' when usbphy node has a child node with 'reg'
+ property.
+- ranges: allows valid translation between child's address space and parent's
+ address space.
+
+- The child node 'usbphy-sys' of the node 'usbphy' is the system controller
+  interface for the usb-phy. It should provide the following information
+  required by the usb-phy controller to control the phy.
+  - reg : base physical address of the PHY_CONTROL registers.
+    The size of this region is the total size of all PHY_CONTROL
+    registers that the SoC has. For example, the size will be
+    '0x4' in case we have only one PHY_CONTROL register (e.g. the
+    OTHERS register in S3C64XX or the USB_PHY_CONTROL register in S5PV210),
+    '0x8' in case we have two PHY_CONTROL registers (e.g. the
+    USBDEVICE_PHY_CONTROL and USBHOST_PHY_CONTROL registers in exynos4x),
+    and so on.
+
+Example:
+ usbphy@12100000 {
+ compatible = "samsung,exynos5250-usb3phy";
+ reg = <0x12100000 0x100>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ clocks = <&clock 1>, <&clock 286>;
+ clock-names = "ext_xtal", "usbdrd30";
+
+ usbphy-sys {
+ /* USB device and host PHY_CONTROL registers */
+ reg = <0x10040704 0x8>;
+ };
+ };
AVR32 AVR32 architecture is enabled.
AX25 Appropriate AX.25 support is enabled.
BLACKFIN Blackfin architecture is enabled.
+ CLK Common clock infrastructure is enabled.
DRM Direct Rendering Management support is enabled.
DYNAMIC_DEBUG Build in debug messages and enable them at runtime
EDD BIOS Enhanced Disk Drive Services (EDD) is enabled
on: enable for both 32- and 64-bit processes
off: disable for both 32- and 64-bit processes
+ alloc_snapshot [FTRACE]
+ Allocate the ftrace snapshot buffer on boot up when the
+ main buffer is allocated. This is handy when you are
+ debugging and need to use tracing_snapshot() on boot up,
+ but do not want to use tracing_snapshot_alloc(), as it needs
+ to be called where GFP_KERNEL allocations are allowed.
+
amd_iommu= [HW,X86-64]
Pass parameters to the AMD IOMMU driver in the system.
Possible values are:
cio_ignore= [S390]
See Documentation/s390/CommonIO for details.
+ clk_ignore_unused
+ [CLK]
+ Keep all clocks already enabled by the bootloader on,
+ even if no driver has claimed them. This is useful
+ for debug and development, but should not be
+ needed on a platform with proper driver support.
+ For more information, see Documentation/clk.txt.
clock= [BUGS=X86-32, HW] gettimeofday clocksource override.
[Deprecated]
is selected automatically. Check
Documentation/kdump/kdump.txt for further details.
- crashkernel_low=size[KMG]
- [KNL, x86] parts under 4G.
-
crashkernel=range1:size1[,range2:size2,...][@offset]
[KNL] Same as above, but depends on the memory
in the running system. The syntax of range is
a memory unit (amount[KMG]). See also
Documentation/kdump/kdump.txt for an example.
+ crashkernel=size[KMG],high
+ [KNL, x86_64] range could be above 4G. Allows the kernel
+ to allocate the physical memory region from the top, so it
+ could be above 4G if the system has more than 4G of ram
+ installed. Otherwise, the memory region will be allocated
+ below 4G, if available.
+ It will be ignored if crashkernel=X is specified.
+ crashkernel=size[KMG],low
+ [KNL, x86_64] range under 4G. When crashkernel=X,high is
+ passed, the kernel could allocate the physical memory region
+ above 4G, which causes a second-kernel crash on systems that
+ require some amount of low memory, e.g. swiotlb requires at
+ least 64M+32K of low memory. The kernel would try to allocate
+ 72M below 4G automatically.
+ This option lets the user specify their own low range under
+ 4G for the second kernel instead.
+ 0: to disable low allocation.
+ It will be ignored when crashkernel=X,high is not used or
+ the memory reserved is below 4G.
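+ For example (the sizes here are illustrative, not
+ recommendations), the two options are typically combined
+ on the kernel command line:
+ crashkernel=256M,high crashkernel=128M,low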
+
cs89x0_dma= [HW,NET]
Format: <dma>
(mmio) or 32-bit (mmio32).
The options are the same as for ttyS, above.
- earlyprintk= [X86,SH,BLACKFIN]
+ earlyprintk= [X86,SH,BLACKFIN,ARM]
earlyprintk=vga
earlyprintk=xen
earlyprintk=serial[,ttySn[,baudrate]]
+ earlyprintk=serial[,0x...[,baudrate]]
earlyprintk=ttySn[,baudrate]
earlyprintk=dbgp[debugController#]
+ earlyprintk is useful when the kernel crashes before
+ the normal console is initialized. It is not enabled by
+ default because it has some cosmetic problems.
+
Append ",keep" to not disable it when the real console
takes over.
Only vga or serial or usb debug port at a time.
- Currently only ttyS0 and ttyS1 are supported.
+ Currently only ttyS0 and ttyS1 may be specified by
+ name. Other I/O ports may be explicitly specified
+ on some architectures (x86 and arm at least) by
+ replacing ttySn with an I/O port address, like this:
+ earlyprintk=serial,0x1008,115200
+ You can find the port for a given device in
+ /proc/tty/driver/serial:
+ 2: uart:ST16650V2 port:00001008 irq:18 ...
Interaction with the standard serial driver is not
very good.
edd= [EDD]
Format: {"off" | "on" | "skip[mbr]"}
+ efi_no_storage_paranoia [EFI; X86]
+ Using this parameter you can use more than 50% of
+ your efi variable storage. Use this parameter only if
+ you are really sure that your UEFI does sane garbage
+ collection and fulfills the spec, otherwise your board
+ may brick.
+
eisa_irq_edge= [PARISC,HW]
See header of drivers/parisc/eisa.c.
module.sig_enforce
[KNL] When CONFIG_MODULE_SIG is set, this means that
modules without (valid) signatures will fail to load.
- Note that if CONFIG_MODULE_SIG_ENFORCE is set, that
+ Note that if CONFIG_MODULE_SIG_FORCE is set, that
is always true, so this option does nothing.
mousedev.tap_time=
noreplace-smp [X86-32,SMP] Don't replace SMP instructions
with UP alternatives
- noresidual [PPC] Don't use residual data on PReP machines.
-
nordrand [X86] Disable the direct use of the RDRAND
instruction even if it is supported by the
processor. RDRAND is still available to user
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.
Invocation of these CPUs' RCU callbacks will
- be offloaded to "rcuoN" kthreads created for
- that purpose. This reduces OS jitter on the
+ be offloaded to "rcuox/N" kthreads created for
+ that purpose, where "x" is "b" for RCU-bh, "p"
+ for RCU-preempt, and "s" for RCU-sched, and "N"
+ is the CPU number. This reduces OS jitter on the
offloaded CPUs, which can be useful for HPC and
real-time workloads. It can also improve energy
efficiency for asymmetric multiprocessors.
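+ For example, rcu_nocbs=2 would hand CPU 2's callbacks to
+ "rcuob/2", "rcuop/2", and/or "rcuos/2", depending on which
+ RCU flavors are built into the kernel.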
leaf rcu_node structure. Useful for very large
systems.
+ rcutree.jiffies_till_first_fqs= [KNL,BOOT]
+ Set delay from grace-period initialization to
+ first attempt to force quiescent states.
+ Units are jiffies, minimum value is zero,
+ and maximum value is HZ.
+
+ rcutree.jiffies_till_next_fqs= [KNL,BOOT]
+ Set delay between subsequent attempts to force
+ quiescent states. Units are jiffies, minimum
+ value is one, and maximum value is HZ.
+
rcutree.qhimark= [KNL,BOOT]
Set threshold of queued
RCU callbacks over which batch limiting is disabled.
rcutree.rcu_cpu_stall_timeout= [KNL,BOOT]
Set timeout for RCU CPU stall warning messages.
- rcutree.jiffies_till_first_fqs= [KNL,BOOT]
- Set delay from grace-period initialization to
- first attempt to force quiescent states.
- Units are jiffies, minimum value is zero,
- and maximum value is HZ.
+ rcutree.rcu_idle_gp_delay= [KNL,BOOT]
+ Set wakeup interval for idle CPUs that have
+ RCU callbacks (RCU_FAST_NO_HZ=y).
- rcutree.jiffies_till_next_fqs= [KNL,BOOT]
- Set delay between subsequent attempts to force
- quiescent states. Units are jiffies, minimum
- value is one, and maximum value is HZ.
+ rcutree.rcu_idle_lazy_gp_delay= [KNL,BOOT]
+ Set wakeup interval for idle CPUs that have
+ only "lazy" RCU callbacks (RCU_FAST_NO_HZ=y).
+ Lazy RCU callbacks are those which RCU can
+ prove do nothing more than free memory.
rcutorture.fqs_duration= [KNL,BOOT]
Set duration of force_quiescent_state bursts.
or other driver-specific files in the
Documentation/watchdog/ directory.
+ workqueue.disable_numa
+ By default, all work items queued to unbound
+ workqueues are affine to the NUMA nodes they're
+ issued on, which results in better behavior in
+ general. If NUMA affinity needs to be disabled for
+ whatever reason, this option can be used. Note
+ that this also can be controlled per-workqueue for
+ workqueues visible under /sys/bus/workqueue/.
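+ For example, for a workqueue that has opted into the
+ sysfs interface (the "writeback" workqueue is assumed
+ here), NUMA affinity can be toggled per workqueue with:
+ echo 0 > /sys/bus/workqueue/devices/writeback/numa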
+
x2apic_phys [X86-64,APIC] Use x2apic physical mode instead of
default x2apic cluster mode on platforms
supporting x2apic.
break;
case KVM_CAP_ARM_SET_DEVICE_ADDR:
r = 1;
+ break;
case KVM_CAP_NR_VCPUS:
r = num_online_cpus();
break;
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers)
|| !arm_exit_handlers[hsr_ec]) {
- kvm_err("Unkown exception class: %#08lx, "
+ kvm_err("Unknown exception class: %#08lx, "
"hsr: %#08x\n", hsr_ec,
(unsigned int)vcpu->arch.hsr);
BUG();
#define ARMADA_370_XP_MAX_PER_CPU_IRQS (28)
+#define ARMADA_370_XP_TIMER0_PER_CPU_IRQ (5)
+
#define ACTIVE_DOORBELLS (8)
static DEFINE_RAW_SPINLOCK(irq_controller_lock);
/*
* In SMP mode:
* For shared global interrupts, mask/unmask global enable bit
- * For CPU interrtups, mask/unmask the calling CPU's bit
+ * For CPU interrupts, mask/unmask the calling CPU's bit
*/
static void armada_370_xp_irq_mask(struct irq_data *d)
{
-#ifdef CONFIG_SMP
irq_hw_number_t hwirq = irqd_to_hwirq(d);
- if (hwirq > ARMADA_370_XP_MAX_PER_CPU_IRQS)
+ if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
writel(hwirq, main_int_base +
ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS);
else
writel(hwirq, per_cpu_int_base +
ARMADA_370_XP_INT_SET_MASK_OFFS);
-#else
- writel(irqd_to_hwirq(d),
- per_cpu_int_base + ARMADA_370_XP_INT_SET_MASK_OFFS);
-#endif
}
static void armada_370_xp_irq_unmask(struct irq_data *d)
{
-#ifdef CONFIG_SMP
irq_hw_number_t hwirq = irqd_to_hwirq(d);
- if (hwirq > ARMADA_370_XP_MAX_PER_CPU_IRQS)
+ if (hwirq != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
writel(hwirq, main_int_base +
ARMADA_370_XP_INT_SET_ENABLE_OFFS);
else
writel(hwirq, per_cpu_int_base +
ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
-#else
- writel(irqd_to_hwirq(d),
- per_cpu_int_base + ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
-#endif
}
#ifdef CONFIG_SMP
unsigned int virq, irq_hw_number_t hw)
{
armada_370_xp_irq_mask(irq_get_irq_data(virq));
- writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS);
+ if (hw != ARMADA_370_XP_TIMER0_PER_CPU_IRQ)
+ writel(hw, per_cpu_int_base +
+ ARMADA_370_XP_INT_CLEAR_MASK_OFFS);
+ else
+ writel(hw, main_int_base + ARMADA_370_XP_INT_SET_ENABLE_OFFS);
irq_set_status_flags(virq, IRQ_LEVEL);
- if (hw < ARMADA_370_XP_MAX_PER_CPU_IRQS) {
+ if (hw == ARMADA_370_XP_TIMER0_PER_CPU_IRQ) {
irq_set_percpu_devid(virq);
irq_set_chip_and_handler(virq, &armada_370_xp_irq_chip,
handle_percpu_devid_irq);
#define sie_intercept_code \
{0x04, "Instruction"}, \
{0x08, "Program interruption"}, \
- {0x0C, "Instruction and program interuption"}, \
+ {0x0C, "Instruction and program interruption"}, \
{0x10, "External request"}, \
{0x14, "External interruption"}, \
{0x18, "I/O request"}, \
__entry->instruction,
insn_to_mnemonic((unsigned char *)
&__entry->instruction,
- __entry->insn) ?
+ __entry->insn, sizeof(__entry->insn)) ?
"unknown" : __entry->insn)
);
intr_coalescing_ticks = ticks;
spin_unlock(&host->lock);
- DPRINTK("intrrupt coalescing, count = 0x%x, ticks = %x\n",
+ DPRINTK("interrupt coalescing, count = 0x%x, ticks = %x\n",
intr_coalescing_count, intr_coalescing_ticks);
DPRINTK("ICC register status: (hcr base: 0x%x) = 0x%x\n",
hcr_base, ioread32(hcr_base + ICC));
if (hcr_base)
iounmap(hcr_base);
- if (host_priv)
- kfree(host_priv);
+ kfree(host_priv);
return retval;
}
cpu_freq_select = ((readl(sar) >> SARL_A370_PCLK_FREQ_OPT) &
SARL_A370_PCLK_FREQ_OPT_MASK);
- if (cpu_freq_select > ARRAY_SIZE(armada_370_cpu_frequencies)) {
+ if (cpu_freq_select >= ARRAY_SIZE(armada_370_cpu_frequencies)) {
- pr_err("CPU freq select unsuported %d\n", cpu_freq_select);
+ pr_err("CPU freq select unsupported %d\n", cpu_freq_select);
cpu_freq = 0;
} else
cpu_freq = armada_370_cpu_frequencies[cpu_freq_select];
cpu_freq_select |= (((readl(sar+4) >> SARH_AXP_PCLK_FREQ_OPT) &
SARH_AXP_PCLK_FREQ_OPT_MASK)
<< SARH_AXP_PCLK_FREQ_OPT_SHIFT);
- if (cpu_freq_select > ARRAY_SIZE(armada_xp_cpu_frequencies)) {
+ if (cpu_freq_select >= ARRAY_SIZE(armada_xp_cpu_frequencies)) {
- pr_err("CPU freq select unsuported: %d\n", cpu_freq_select);
+ pr_err("CPU freq select unsupported: %d\n", cpu_freq_select);
cpu_freq = 0;
} else
cpu_freq = armada_xp_cpu_frequencies[cpu_freq_select];
#define C (((status = I915_READ_NOTRACE(ch_ctl)) & DP_AUX_CH_CTL_SEND_BUSY) == 0)
if (has_aux_irq)
- done = wait_event_timeout(dev_priv->gmbus_wait_queue, C, 10);
+ done = wait_event_timeout(dev_priv->gmbus_wait_queue, C,
+ msecs_to_jiffies(10));
else
done = wait_for_atomic(C, 10) == 0;
if (!done)
struct intel_link_m_n m_n;
int pipe = intel_crtc->pipe;
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
+ int target_clock;
/*
* Find the lane count in the intel_encoder private
}
}
+ target_clock = mode->clock;
+ for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
+ if (intel_encoder->type == INTEL_OUTPUT_EDP) {
+ target_clock = intel_edp_target_clock(intel_encoder,
+ mode);
+ break;
+ }
+ }
+
/*
* Compute the GMCH and Link ratios. The '3' here is
* the number of bytes_per_pixel post-LUT, which we always
* set up for 8-bits of R/G/B, or 3 bytes total.
*/
intel_link_compute_m_n(intel_crtc->bpp, lane_count,
- mode->clock, adjusted_mode->clock, &m_n);
+ target_clock, adjusted_mode->clock, &m_n);
if (IS_HASWELL(dev)) {
I915_WRITE(PIPE_DATA_M1(cpu_transcoder),
for (i = 0; i < intel_dp->lane_count; i++)
if ((intel_dp->train_set[i] & DP_TRAIN_MAX_SWING_REACHED) == 0)
break;
- if (i == intel_dp->lane_count && voltage_tries == 5) {
+ if (i == intel_dp->lane_count) {
++loop_tries;
if (loop_tries == 5) {
DRM_DEBUG_KMS("too many full retries, give up\n");
}
if (channel_eq)
- DRM_DEBUG_KMS("Channel EQ done. DP Training successfull\n");
+ DRM_DEBUG_KMS("Channel EQ done. DP Training successful\n");
intel_dp_set_link_train(intel_dp, DP, DP_TRAINING_PATTERN_DISABLE);
}
{
struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
struct intel_dp *intel_dp = &intel_dig_port->dp;
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
i2c_del_adapter(&intel_dp->adapter);
drm_encoder_cleanup(encoder);
if (is_edp(intel_dp)) {
cancel_delayed_work_sync(&intel_dp->panel_vdd_work);
+ mutex_lock(&dev->mode_config.mutex);
ironlake_panel_vdd_off_sync(intel_dp);
+ mutex_unlock(&dev->mode_config.mutex);
}
kfree(intel_dig_port);
}
}
/**
- * radeon_irq_kms_fini - tear down driver interrrupt info
+ * radeon_irq_kms_fini - tear down driver interrupt info
*
* @rdev: radeon device pointer
*
{
unsigned long irqflags;
+ if (!rdev->ddev->irq_enabled)
+ return;
+
spin_lock_irqsave(&rdev->irq.lock, irqflags);
rdev->irq.afmt[block] = true;
radeon_irq_set(rdev);
{
unsigned long irqflags;
+ if (!rdev->ddev->irq_enabled)
+ return;
+
spin_lock_irqsave(&rdev->irq.lock, irqflags);
rdev->irq.afmt[block] = false;
radeon_irq_set(rdev);
unsigned long irqflags;
int i;
+ if (!rdev->ddev->irq_enabled)
+ return;
+
spin_lock_irqsave(&rdev->irq.lock, irqflags);
for (i = 0; i < RADEON_MAX_HPD_PINS; ++i)
rdev->irq.hpd[i] |= !!(hpd_mask & (1 << i));
unsigned long irqflags;
int i;
+ if (!rdev->ddev->irq_enabled)
+ return;
+
spin_lock_irqsave(&rdev->irq.lock, irqflags);
for (i = 0; i < RADEON_MAX_HPD_PINS; ++i)
rdev->irq.hpd[i] &= !(hpd_mask & (1 << i));
int j;
int l;
- l = strlen(msg);
+ l = min(strlen(msg), sizeof(cmd.parm) - sizeof(cmd.parm.cmsg)
+ + sizeof(cmd.parm.cmsg.para) - 2);
+
if (!l) {
isdn_tty_modem_result(RESULT_ERROR, info);
return;
tty->termios.c_ospeed == old_termios->c_ospeed)
return;
isdn_tty_change_speed(info);
- if ((old_termios->c_cflag & CRTSCTS) &&
- !(tty->termios.c_cflag & CRTSCTS))
- tty->hw_stopped = 0;
}
}
p++;
isdn_tty_cmd_ATA(info);
return;
- break;
case 'D':
/* D - Dial */
if (info->msr & UART_MSR_DCD)
}
/**
- * mei_amthif_host_init_ - mei initialization amthif client.
+ * mei_amthif_host_init - mei initialization amthif client.
*
* @dev: the device structure
*
/**
- * mei_amthif_irq_process_completed - processes completed iamthif operation.
+ * mei_amthif_irq_write_completed - processes completed iamthif operation.
*
* @dev: the device structure.
* @slots: free slots.
struct mei_msg_hdr mei_hdr;
struct mei_cl *cl = cb->cl;
size_t len = dev->iamthif_msg_buf_size - dev->iamthif_msg_buf_index;
- size_t msg_slots = mei_data2slots(len);
+ u32 msg_slots = mei_data2slots(len);
mei_hdr.host_addr = cl->host_client_id;
mei_hdr.me_addr = cl->me_client_id;
* mei_amthif_irq_read_message - read routine after ISR to
* handle the read amthif message
*
- * @complete_list: An instance of our list structure
* @dev: the device structure
* @mei_hdr: header of amthif message
+ * @complete_list: An instance of our list structure
*
* returns 0 on success, <0 on failure.
*/
-int mei_amthif_irq_read_message(struct mei_cl_cb *complete_list,
- struct mei_device *dev, struct mei_msg_hdr *mei_hdr)
+int mei_amthif_irq_read_msg(struct mei_device *dev,
+ struct mei_msg_hdr *mei_hdr,
+ struct mei_cl_cb *complete_list)
{
struct mei_cl_cb *cb;
unsigned char *buffer;
if (!mei_hdr->msg_complete)
return 0;
- dev_dbg(&dev->pdev->dev,
- "amthif_message_buffer_index =%d\n",
+ dev_dbg(&dev->pdev->dev, "amthif_message_buffer_index =%d\n",
mei_hdr->length);
dev_dbg(&dev->pdev->dev, "completed amthif read.\n ");
*/
int mei_amthif_irq_read(struct mei_device *dev, s32 *slots)
{
+ u32 msg_slots = mei_data2slots(sizeof(struct hbm_flow_control));
- if (((*slots) * sizeof(u32)) < (sizeof(struct mei_msg_hdr)
- + sizeof(struct hbm_flow_control))) {
+ if (*slots < msg_slots)
return -EMSGSIZE;
- }
- *slots -= mei_data2slots(sizeof(struct hbm_flow_control));
+
+ *slots -= msg_slots;
+
if (mei_hbm_cl_flow_control_req(dev, &dev->iamthif_cl)) {
dev_dbg(&dev->pdev->dev, "iamthif flow control failed\n");
return -EIO;
/**
* mei_amthif_release - the release function
*
- * @inode: pointer to inode structure
+ * @dev: device structure
* @file: pointer to file structure
*
* returns 0 on success, <0 on error
* mei_io_cb_init - allocate and initialize io callback
*
* @cl - mei client
- * @file: pointer to file structure
+ * @fp: pointer to file structure
*
* returns mei_cl_cb pointer or NULL;
*/
/**
* mei_io_cb_alloc_req_buf - allocate request buffer
*
- * @cb - io callback structure
- * @size: size of the buffer
+ * @cb: io callback structure
+ * @length: size of the buffer
*
* returns 0 on success
* -EINVAL if cb is NULL
return 0;
}
/**
- * mei_io_cb_alloc_req_buf - allocate respose buffer
+ * mei_io_cb_alloc_resp_buf - allocate response buffer
*
- * @cb - io callback structure
- * @size: size of the buffer
+ * @cb: io callback structure
+ * @length: size of the buffer
*
* returns 0 on success
* -EINVAL if cb is NULL
/**
* mei_cl_flush_queues - flushes queue lists belonging to cl.
*
- * @dev: the device structure
* @cl: host client
*/
int mei_cl_flush_queues(struct mei_cl *cl)
init_waitqueue_head(&cl->rx_wait);
init_waitqueue_head(&cl->tx_wait);
INIT_LIST_HEAD(&cl->link);
+ INIT_LIST_HEAD(&cl->device_link);
cl->reading_state = MEI_IDLE;
cl->writing_state = MEI_IDLE;
cl->dev = dev;
/**
* mei_cl_find_read_cb - find this cl's callback in the read list
*
- * @dev: device structure
+ * @cl: host client
+ *
* returns cb on success, NULL on error
*/
struct mei_cl_cb *mei_cl_find_read_cb(struct mei_cl *cl)
*
* @cl - host client
* @id - fixed host id or -1 for generating one
+ *
* returns 0 on success
* -EINVAL on incorrect values
* -ENONET if client not found
/**
* mei_cl_unlink - remove me_cl from the list
*
- * @dev: the device structure
+ * @cl: host client
*/
int mei_cl_unlink(struct mei_cl *cl)
{
mei_amthif_host_init(dev);
else if (!uuid_le_cmp(client_props->protocol_name, mei_wd_guid))
mei_wd_host_init(dev);
+ else if (!uuid_le_cmp(client_props->protocol_name, mei_nfc_guid))
+ mei_nfc_host_init(dev);
+
}
dev->dev_state = MEI_DEV_ENABLED;
/**
* mei_cl_flow_ctrl_creds - checks flow_control credits for cl.
*
- * @dev: the device structure
* @cl: private data of the file object
*
* returns 1 if mei_flow_ctrl_creds >0, 0 - otherwise.
/**
* mei_cl_flow_ctrl_reduce - reduces flow_control.
*
- * @dev: the device structure
* @cl: private data of the file object
+ *
* @returns
* 0 on success
* -ENOENT when me client is not found
}
/**
- * mei_cl_start_read - the start read client message function.
+ * mei_cl_read_start - the start read client message function.
*
* @cl: host client
*
* returns 0 on success, <0 on failure.
*/
-int mei_cl_read_start(struct mei_cl *cl)
+int mei_cl_read_start(struct mei_cl *cl, size_t length)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
if (!cb)
return -ENOMEM;
- rets = mei_io_cb_alloc_resp_buf(cb,
- dev->me_clients[i].props.max_msg_length);
+ /* always allocate at least client max message */
+ length = max_t(size_t, length, dev->me_clients[i].props.max_msg_length);
+ rets = mei_io_cb_alloc_resp_buf(cb, length);
if (rets)
goto err;
return rets;
}
+/**
+ * mei_cl_write - submit a write cb to the mei device;
+ *	assumes device_lock is locked
+ *
+ * @cl: host client
+ * @cb: write callback with filled data
+ * @blocking: block until the write completes
+ *
+ * returns number of bytes sent on success, <0 on failure.
+ */
+int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
+{
+ struct mei_device *dev;
+ struct mei_msg_data *buf;
+ struct mei_msg_hdr mei_hdr;
+ int rets;
+
+
+ if (WARN_ON(!cl || !cl->dev))
+ return -ENODEV;
+
+ if (WARN_ON(!cb))
+ return -EINVAL;
+
+ dev = cl->dev;
+
+
+ buf = &cb->request_buffer;
+
+ dev_dbg(&dev->pdev->dev, "mei_cl_write %d\n", buf->size);
+
+
+ cb->fop_type = MEI_FOP_WRITE;
+
+ rets = mei_cl_flow_ctrl_creds(cl);
+ if (rets < 0)
+ goto err;
+
+ /* Host buffer is not ready, we queue the request */
+ if (rets == 0 || !dev->hbuf_is_ready) {
+ cb->buf_idx = 0;
+ /* unsetting msg_complete will enqueue the cb for write */
+ mei_hdr.msg_complete = 0;
+ cl->writing_state = MEI_WRITING;
+ rets = buf->size;
+ goto out;
+ }
+
+ dev->hbuf_is_ready = false;
+
+ /* Check for a maximum length */
+ if (buf->size > mei_hbuf_max_len(dev)) {
+ mei_hdr.length = mei_hbuf_max_len(dev);
+ mei_hdr.msg_complete = 0;
+ } else {
+ mei_hdr.length = buf->size;
+ mei_hdr.msg_complete = 1;
+ }
+
+ mei_hdr.host_addr = cl->host_client_id;
+ mei_hdr.me_addr = cl->me_client_id;
+ mei_hdr.reserved = 0;
+
+ dev_dbg(&dev->pdev->dev, "write " MEI_HDR_FMT "\n",
+ MEI_HDR_PRM(&mei_hdr));
+
+
+ if (mei_write_message(dev, &mei_hdr, buf->data)) {
+ rets = -EIO;
+ goto err;
+ }
+
+ cl->writing_state = MEI_WRITING;
+ cb->buf_idx = mei_hdr.length;
+
+ rets = buf->size;
+out:
+ if (mei_hdr.msg_complete) {
+ if (mei_cl_flow_ctrl_reduce(cl)) {
+ rets = -ENODEV;
+ goto err;
+ }
+ list_add_tail(&cb->list, &dev->write_waiting_list.list);
+ } else {
+ list_add_tail(&cb->list, &dev->write_list.list);
+ }
+
+
+ if (blocking && cl->writing_state != MEI_WRITE_COMPLETE) {
+
+ mutex_unlock(&dev->device_lock);
+ if (wait_event_interruptible(cl->tx_wait,
+ cl->writing_state == MEI_WRITE_COMPLETE)) {
+ if (signal_pending(current))
+ rets = -EINTR;
+ else
+ rets = -ERESTARTSYS;
+ }
+ mutex_lock(&dev->device_lock);
+ }
+err:
+ return rets;
+}
+
+
+
/**
* mei_cl_all_disconnect - disconnect forcefully all connected clients
*
sizeof(struct mei_me_client), GFP_KERNEL);
if (!clients) {
dev_err(&dev->pdev->dev, "memory allocation for ME clients failed.\n");
- dev->dev_state = MEI_DEV_RESETING;
+ dev->dev_state = MEI_DEV_RESETTING;
mei_reset(dev, 1);
return;
}
/**
* mei_hbm_cl_hdr - construct client hbm header
+ *
* @cl: - client
* @hbm_cmd: host bus message command
* @buf: buffer for cl header
return false;
}
+int mei_hbm_start_wait(struct mei_device *dev)
+{
+ int ret;
+ if (dev->hbm_state > MEI_HBM_START)
+ return 0;
+
+ mutex_unlock(&dev->device_lock);
+ ret = wait_event_interruptible_timeout(dev->wait_recvd_msg,
+ dev->hbm_state == MEI_HBM_IDLE ||
+ dev->hbm_state > MEI_HBM_START,
+ mei_secs_to_jiffies(MEI_INTEROP_TIMEOUT));
+ mutex_lock(&dev->device_lock);
+
+ if (ret <= 0 && (dev->hbm_state <= MEI_HBM_START)) {
+ dev->hbm_state = MEI_HBM_IDLE;
+ dev_err(&dev->pdev->dev, "wating for mei start failed\n");
+ return -ETIMEDOUT;
+ }
+ return 0;
+}
+
/**
* mei_hbm_start_req - sends start request message.
*
* @dev: the device structure
*/
-void mei_hbm_start_req(struct mei_device *dev)
+int mei_hbm_start_req(struct mei_device *dev)
{
struct mei_msg_hdr *mei_hdr = &dev->wr_msg.hdr;
struct hbm_host_version_request *start_req;
start_req->host_version.major_version = HBM_MAJOR_VERSION;
start_req->host_version.minor_version = HBM_MINOR_VERSION;
- dev->recvd_msg = false;
+ dev->hbm_state = MEI_HBM_IDLE;
if (mei_write_message(dev, mei_hdr, dev->wr_msg.data)) {
- dev_dbg(&dev->pdev->dev, "write send version message to FW fail.\n");
- dev->dev_state = MEI_DEV_RESETING;
+ dev_err(&dev->pdev->dev, "version message writet failed\n");
+ dev->dev_state = MEI_DEV_RESETTING;
mei_reset(dev, 1);
+ return -ENODEV;
}
- dev->init_clients_state = MEI_START_MESSAGE;
+ dev->hbm_state = MEI_HBM_START;
dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
- return ;
+ return 0;
}
-/**
+/*
* mei_hbm_enum_clients_req - sends enumeration client request message.
*
* @dev: the device structure
enum_req->hbm_cmd = HOST_ENUM_REQ_CMD;
if (mei_write_message(dev, mei_hdr, dev->wr_msg.data)) {
- dev->dev_state = MEI_DEV_RESETING;
- dev_dbg(&dev->pdev->dev, "write send enumeration request message to FW fail.\n");
+ dev->dev_state = MEI_DEV_RESETTING;
+ dev_err(&dev->pdev->dev, "enumeration request write failed.\n");
mei_reset(dev, 1);
}
- dev->init_clients_state = MEI_ENUM_CLIENTS_MESSAGE;
+ dev->hbm_state = MEI_HBM_ENUM_CLIENTS;
dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
return;
}
/**
- * mei_hbm_prop_requsest - request property for a single client
+ * mei_hbm_prop_req - request property for a single client
*
* @dev: the device structure
*
/* We got all client properties */
if (next_client_index == MEI_CLIENTS_MAX) {
+ dev->hbm_state = MEI_HBM_STARTED;
schedule_work(&dev->init_work);
return 0;
prop_req->address = next_client_index;
if (mei_write_message(dev, mei_hdr, dev->wr_msg.data)) {
- dev->dev_state = MEI_DEV_RESETING;
- dev_err(&dev->pdev->dev, "Properties request command failed\n");
+ dev->dev_state = MEI_DEV_RESETTING;
+ dev_err(&dev->pdev->dev, "properties request write failed\n");
mei_reset(dev, 1);
return -EIO;
}
/**
- * add_single_flow_creds - adds single buffer credentials.
+ * mei_hbm_add_single_flow_creds - adds single buffer credentials.
*
- * @file: private data ot the file object.
+ * @dev: the device structure
* @flow: flow control.
*/
static void mei_hbm_add_single_flow_creds(struct mei_device *dev,
/**
- * mei_client_disconnect_request - disconnect request initiated by me
+ * mei_hbm_fw_disconnect_req - disconnect request initiated by me
* host sends disconnect response
*
* @dev: the device structure.
dev->version = version_res->me_max_version;
dev_dbg(&dev->pdev->dev, "version mismatch.\n");
+ dev->hbm_state = MEI_HBM_STOP;
mei_hbm_stop_req_prepare(dev, &dev->wr_msg.hdr,
dev->wr_msg.data);
mei_write_message(dev, &dev->wr_msg.hdr,
dev->wr_msg.data);
+
return;
}
dev->version.major_version = HBM_MAJOR_VERSION;
dev->version.minor_version = HBM_MINOR_VERSION;
if (dev->dev_state == MEI_DEV_INIT_CLIENTS &&
- dev->init_clients_state == MEI_START_MESSAGE) {
+ dev->hbm_state == MEI_HBM_START) {
dev->init_clients_timer = 0;
mei_hbm_enum_clients_req(dev);
} else {
- dev->recvd_msg = false;
- dev_dbg(&dev->pdev->dev, "reset due to received hbm: host start\n");
+ dev_err(&dev->pdev->dev, "reset: wrong host start response\n");
mei_reset(dev, 1);
return;
}
- dev->recvd_msg = true;
+ wake_up_interruptible(&dev->wait_recvd_msg);
dev_dbg(&dev->pdev->dev, "host start response message received.\n");
break;
me_client = &dev->me_clients[dev->me_client_presentation_num];
if (props_res->status || !dev->me_clients) {
- dev_dbg(&dev->pdev->dev, "reset due to received host client properties response bus message wrong status.\n");
+ dev_err(&dev->pdev->dev, "reset: properties response hbm wrong status.\n");
mei_reset(dev, 1);
return;
}
if (me_client->client_id != props_res->address) {
- dev_err(&dev->pdev->dev,
- "Host client properties reply mismatch\n");
+ dev_err(&dev->pdev->dev, "reset: host properties response address mismatch\n");
mei_reset(dev, 1);
-
return;
}
if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
- dev->init_clients_state != MEI_CLIENT_PROPERTIES_MESSAGE) {
- dev_err(&dev->pdev->dev,
- "Unexpected client properties reply\n");
+ dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) {
+ dev_err(&dev->pdev->dev, "reset: unexpected properties response\n");
mei_reset(dev, 1);
return;
enum_res = (struct hbm_host_enum_response *) mei_msg;
memcpy(dev->me_clients_map, enum_res->valid_addresses, 32);
if (dev->dev_state == MEI_DEV_INIT_CLIENTS &&
- dev->init_clients_state == MEI_ENUM_CLIENTS_MESSAGE) {
+ dev->hbm_state == MEI_HBM_ENUM_CLIENTS) {
dev->init_clients_timer = 0;
dev->me_client_presentation_num = 0;
dev->me_client_index = 0;
mei_hbm_me_cl_allocate(dev);
- dev->init_clients_state =
- MEI_CLIENT_PROPERTIES_MESSAGE;
+ dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES;
/* first property request */
mei_hbm_prop_req(dev);
} else {
- dev_dbg(&dev->pdev->dev, "reset due to received host enumeration clients response bus message.\n");
+ dev_err(&dev->pdev->dev, "reset: unexpected enumeration response hbm.\n");
mei_reset(dev, 1);
return;
}
break;
case HOST_STOP_RES_CMD:
+
+ if (dev->hbm_state != MEI_HBM_STOP)
+ dev_err(&dev->pdev->dev, "unexpected stop response hbm.\n");
dev->dev_state = MEI_DEV_DISABLED;
- dev_dbg(&dev->pdev->dev, "resetting because of FW stop response.\n");
+ dev_info(&dev->pdev->dev, "reset: FW stop response.\n");
mei_reset(dev, 1);
break;
case ME_STOP_REQ_CMD:
+ dev->hbm_state = MEI_HBM_STOP;
mei_hbm_stop_req_prepare(dev, &dev->wr_ext_msg.hdr,
dev->wr_ext_msg.data);
break;
/**
- * mei_reg_read - Reads 32bit data from the mei device
+ * mei_me_reg_read - Reads 32bit data from the mei device
*
* @dev: the device structure
* @offset: offset from which to read the data
*
* returns register value (u32)
*/
-static inline u32 mei_reg_read(const struct mei_me_hw *hw,
+static inline u32 mei_me_reg_read(const struct mei_me_hw *hw,
unsigned long offset)
{
return ioread32(hw->mem_addr + offset);
/**
- * mei_reg_write - Writes 32bit data to the mei device
+ * mei_me_reg_write - Writes 32bit data to the mei device
*
* @dev: the device structure
* @offset: offset from which to write the data
* @value: register value to write (u32)
*/
-static inline void mei_reg_write(const struct mei_me_hw *hw,
+static inline void mei_me_reg_write(const struct mei_me_hw *hw,
unsigned long offset, u32 value)
{
iowrite32(value, hw->mem_addr + offset);
}
/**
- * mei_mecbrw_read - Reads 32bit data from ME circular buffer
+ * mei_me_mecbrw_read - Reads 32bit data from ME circular buffer
* read window register
*
* @dev: the device structure
*/
static u32 mei_me_mecbrw_read(const struct mei_device *dev)
{
- return mei_reg_read(to_me_hw(dev), ME_CB_RW);
+ return mei_me_reg_read(to_me_hw(dev), ME_CB_RW);
}
/**
- * mei_mecsr_read - Reads 32bit data from the ME CSR
+ * mei_me_mecsr_read - Reads 32bit data from the ME CSR
*
* @dev: the device structure
*
* returns ME_CSR_HA register value (u32)
*/
-static inline u32 mei_mecsr_read(const struct mei_me_hw *hw)
+static inline u32 mei_me_mecsr_read(const struct mei_me_hw *hw)
{
- return mei_reg_read(hw, ME_CSR_HA);
+ return mei_me_reg_read(hw, ME_CSR_HA);
}
/**
*/
static inline u32 mei_hcsr_read(const struct mei_me_hw *hw)
{
- return mei_reg_read(hw, H_CSR);
+ return mei_me_reg_read(hw, H_CSR);
}
/**
static inline void mei_hcsr_set(struct mei_me_hw *hw, u32 hcsr)
{
hcsr &= ~H_IS;
- mei_reg_write(hw, H_CSR, hcsr);
+ mei_me_reg_write(hw, H_CSR, hcsr);
}
/**
- * me_hw_config - configure hw dependent settings
+ * mei_me_hw_config - configure hw dependent settings
*
* @dev: mei device
*/
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
if ((hcsr & H_IS) == H_IS)
- mei_reg_write(hw, H_CSR, hcsr);
+ mei_me_reg_write(hw, H_CSR, hcsr);
}
/**
* mei_me_intr_enable - enables mei device interrupts
mei_hcsr_set(hw, hcsr);
}
+/**
+ * mei_me_hw_reset_release - release device from the reset
+ *
+ * @dev: the device structure
+ */
+static void mei_me_hw_reset_release(struct mei_device *dev)
+{
+ struct mei_me_hw *hw = to_me_hw(dev);
+ u32 hcsr = mei_hcsr_read(hw);
+
+ hcsr |= H_IG;
+ hcsr &= ~H_RST;
+ mei_hcsr_set(hw, hcsr);
+}
/**
* mei_me_hw_reset - resets fw via mei csr register.
*
* @dev: the device structure
- * @interrupts_enabled: if interrupt should be enabled after reset.
+ * @intr_enable: if interrupt should be enabled after reset.
*/
static void mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
{
if (intr_enable)
hcsr |= H_IE;
else
- hcsr &= ~H_IE;
-
- mei_hcsr_set(hw, hcsr);
-
- hcsr = mei_hcsr_read(hw) | H_IG;
- hcsr &= ~H_RST;
+ hcsr &= ~H_IE;
mei_hcsr_set(hw, hcsr);
- hcsr = mei_hcsr_read(hw);
+ if (dev->dev_state == MEI_DEV_POWER_DOWN)
+ mei_me_hw_reset_release(dev);
- dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", hcsr);
+ dev_dbg(&dev->pdev->dev, "current HCSR = 0x%08x.\n", mei_hcsr_read(hw));
}
/**
static bool mei_me_hw_is_ready(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
- hw->me_hw_state = mei_mecsr_read(hw);
+ hw->me_hw_state = mei_me_mecsr_read(hw);
return (hw->me_hw_state & ME_RDY_HRA) == ME_RDY_HRA;
}
+static int mei_me_hw_ready_wait(struct mei_device *dev)
+{
+ int err;
+ if (mei_me_hw_is_ready(dev))
+ return 0;
+
+ mutex_unlock(&dev->device_lock);
+ err = wait_event_interruptible_timeout(dev->wait_hw_ready,
+ dev->recvd_hw_ready, MEI_INTEROP_TIMEOUT);
+ mutex_lock(&dev->device_lock);
+ if (!err && !dev->recvd_hw_ready) {
+ dev_err(&dev->pdev->dev,
+ "wait hw ready failed. status = 0x%x\n", err);
+ return -ETIMEDOUT;
+ }
+
+ dev->recvd_hw_ready = false;
+ return 0;
+}
+
+static int mei_me_hw_start(struct mei_device *dev)
+{
+ int ret = mei_me_hw_ready_wait(dev);
+ if (ret)
+ return ret;
+ dev_dbg(&dev->pdev->dev, "hw is ready\n");
+
+ mei_me_host_set_ready(dev);
+ return ret;
+}
+
+
/**
* mei_hbuf_filled_slots - gets number of device filled buffer slots
*
}
/**
- * mei_hbuf_is_empty - checks if host buffer is empty.
+ * mei_me_hbuf_is_empty - checks if host buffer is empty.
*
* @dev: the device structure
*
unsigned char *buf)
{
struct mei_me_hw *hw = to_me_hw(dev);
- unsigned long rem, dw_cnt;
+ unsigned long rem;
unsigned long length = header->length;
u32 *reg_buf = (u32 *)buf;
u32 hcsr;
+ u32 dw_cnt;
int i;
int empty_slots;
if (empty_slots < 0 || dw_cnt > empty_slots)
return -EIO;
- mei_reg_write(hw, H_CB_WW, *((u32 *) header));
+ mei_me_reg_write(hw, H_CB_WW, *((u32 *) header));
for (i = 0; i < length / 4; i++)
- mei_reg_write(hw, H_CB_WW, reg_buf[i]);
+ mei_me_reg_write(hw, H_CB_WW, reg_buf[i]);
rem = length & 0x3;
if (rem > 0) {
u32 reg = 0;
memcpy(®, &buf[length - rem], rem);
- mei_reg_write(hw, H_CB_WW, reg);
+ mei_me_reg_write(hw, H_CB_WW, reg);
}
hcsr = mei_hcsr_read(hw) | H_IG;
char read_ptr, write_ptr;
unsigned char buffer_depth, filled_slots;
- hw->me_hw_state = mei_mecsr_read(hw);
+ hw->me_hw_state = mei_me_mecsr_read(hw);
buffer_depth = (unsigned char)((hw->me_hw_state & ME_CBD_HRA) >> 24);
read_ptr = (char) ((hw->me_hw_state & ME_CBRP_HRA) >> 8);
write_ptr = (char) ((hw->me_hw_state & ME_CBWP_HRA) >> 16);
return IRQ_NONE;
/* clear H_IS bit in H_CSR */
- mei_reg_write(hw, H_CSR, csr_reg);
+ mei_me_reg_write(hw, H_CSR, csr_reg);
return IRQ_WAKE_THREAD;
}
{
struct mei_device *dev = (struct mei_device *) dev_id;
struct mei_cl_cb complete_list;
- struct mei_cl_cb *cb_pos = NULL, *cb_next = NULL;
- struct mei_cl *cl;
s32 slots;
int rets;
- bool bus_message_received;
-
dev_dbg(&dev->pdev->dev, "function called after ISR to handle the interrupt processing.\n");
/* initialize our complete list */
/* check if ME wants a reset */
if (!mei_hw_is_ready(dev) &&
- dev->dev_state != MEI_DEV_RESETING &&
+ dev->dev_state != MEI_DEV_RESETTING &&
dev->dev_state != MEI_DEV_INITIALIZING) {
dev_dbg(&dev->pdev->dev, "FW not ready.\n");
mei_reset(dev, 1);
if (mei_hw_is_ready(dev)) {
dev_dbg(&dev->pdev->dev, "we need to start the dev.\n");
- mei_host_set_ready(dev);
-
- dev_dbg(&dev->pdev->dev, "link is established start sending messages.\n");
- /* link is established * start sending messages. */
-
- dev->dev_state = MEI_DEV_INIT_CLIENTS;
+ dev->recvd_hw_ready = true;
+ wake_up_interruptible(&dev->wait_hw_ready);
- mei_hbm_start_req(dev);
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
} else {
- dev_dbg(&dev->pdev->dev, "FW not ready.\n");
+ dev_dbg(&dev->pdev->dev, "Reset Completed.\n");
+ mei_me_hw_reset_release(dev);
mutex_unlock(&dev->device_lock);
return IRQ_HANDLED;
}
dev_dbg(&dev->pdev->dev, "end of bottom half function.\n");
dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
- bus_message_received = false;
- if (dev->recvd_msg && waitqueue_active(&dev->wait_recvd_msg)) {
- dev_dbg(&dev->pdev->dev, "received waiting bus message\n");
- bus_message_received = true;
- }
mutex_unlock(&dev->device_lock);
- if (bus_message_received) {
- dev_dbg(&dev->pdev->dev, "wake up dev->wait_recvd_msg\n");
- wake_up_interruptible(&dev->wait_recvd_msg);
- bus_message_received = false;
- }
- if (list_empty(&complete_list.list))
- return IRQ_HANDLED;
+ mei_irq_compl_handler(dev, &complete_list);
- list_for_each_entry_safe(cb_pos, cb_next, &complete_list.list, list) {
- cl = cb_pos->cl;
- list_del(&cb_pos->list);
- if (cl) {
- if (cl != &dev->iamthif_cl) {
- dev_dbg(&dev->pdev->dev, "completing call back.\n");
- mei_irq_complete_handler(cl, cb_pos);
- cb_pos = NULL;
- } else if (cl == &dev->iamthif_cl) {
- mei_amthif_complete(dev, cb_pos);
- }
- }
- }
return IRQ_HANDLED;
}
static const struct mei_hw_ops mei_me_hw_ops = {
- .host_set_ready = mei_me_host_set_ready,
.host_is_ready = mei_me_host_is_ready,
.hw_is_ready = mei_me_hw_is_ready,
.hw_reset = mei_me_hw_reset,
- .hw_config = mei_me_hw_config,
+ .hw_config = mei_me_hw_config,
+ .hw_start = mei_me_hw_start,
.intr_clear = mei_me_intr_clear,
.intr_enable = mei_me_intr_enable,
};
/**
- * init_mei_device - allocates and initializes the mei device structure
+ * mei_me_dev_init - allocates and initializes the mei device structure
*
* @pdev: The pci device structure
*
mei_device_init(dev);
- INIT_LIST_HEAD(&dev->wd_cl.link);
- INIT_LIST_HEAD(&dev->iamthif_cl.link);
- mei_io_list_init(&dev->amthif_cmd_list);
- mei_io_list_init(&dev->amthif_rd_complete_list);
-
- INIT_DELAYED_WORK(&dev->timer_work, mei_timer);
- INIT_WORK(&dev->init_work, mei_host_client_init);
-
dev->ops = &mei_me_hw_ops;
dev->pdev = pdev;
*/
+#include <linux/export.h>
#include <linux/pci.h>
#include <linux/kthread.h>
#include <linux/interrupt.h>
/**
- * mei_irq_complete_handler - processes completed operation.
+ * mei_cl_complete_handler - processes completed operation for a client
*
* @cl: private data of the file object.
- * @cb_pos: callback block.
+ * @cb: callback block.
*/
-void mei_irq_complete_handler(struct mei_cl *cl, struct mei_cl_cb *cb_pos)
+static void mei_cl_complete_handler(struct mei_cl *cl, struct mei_cl_cb *cb)
{
- if (cb_pos->fop_type == MEI_FOP_WRITE) {
- mei_io_cb_free(cb_pos);
- cb_pos = NULL;
+ if (cb->fop_type == MEI_FOP_WRITE) {
+ mei_io_cb_free(cb);
+ cb = NULL;
cl->writing_state = MEI_WRITE_COMPLETE;
if (waitqueue_active(&cl->tx_wait))
wake_up_interruptible(&cl->tx_wait);
- } else if (cb_pos->fop_type == MEI_FOP_READ &&
+ } else if (cb->fop_type == MEI_FOP_READ &&
MEI_READING == cl->reading_state) {
cl->reading_state = MEI_READ_COMPLETE;
if (waitqueue_active(&cl->rx_wait))
wake_up_interruptible(&cl->rx_wait);
+ else
+ mei_cl_bus_rx_event(cl);
+
+ }
+}
+
+/**
+ * mei_irq_compl_handler - dispatch completion handlers
+ * for the completed callbacks
+ *
+ * @dev - mei device
+ * @compl_list - list of completed cbs
+ */
+void mei_irq_compl_handler(struct mei_device *dev, struct mei_cl_cb *compl_list)
+{
+ struct mei_cl_cb *cb, *next;
+ struct mei_cl *cl;
+
+ list_for_each_entry_safe(cb, next, &compl_list->list, list) {
+ cl = cb->cl;
+ list_del(&cb->list);
+ if (!cl)
+ continue;
+ dev_dbg(&dev->pdev->dev, "completing call back.\n");
+ if (cl == &dev->iamthif_cl)
+ mei_amthif_complete(dev, cb);
+ else
+ mei_cl_complete_handler(cl, cb);
}
}
+EXPORT_SYMBOL_GPL(mei_irq_compl_handler);
/**
- * _mei_irq_thread_state_ok - checks if mei header matches file private data
+ * mei_cl_hbm_equal - check if hbm is addressed to the client
*
- * @cl: private data of the file object
+ * @cl: host client
* @mei_hdr: header of mei client message
*
- * returns !=0 if matches, 0 if no match.
+ * returns true if matches, false otherwise
+ */
+static inline int mei_cl_hbm_equal(struct mei_cl *cl,
+ struct mei_msg_hdr *mei_hdr)
+{
+ return cl->host_client_id == mei_hdr->host_addr &&
+ cl->me_client_id == mei_hdr->me_addr;
+}
+/**
+ * mei_cl_is_reading - checks if the client
+ *	is the one to read this message
+ *
+ * @cl: mei client
+ * @mei_hdr: header of mei message
+ *
+ * returns true on match and false otherwise
*/
-static int _mei_irq_thread_state_ok(struct mei_cl *cl,
- struct mei_msg_hdr *mei_hdr)
+static bool mei_cl_is_reading(struct mei_cl *cl, struct mei_msg_hdr *mei_hdr)
{
- return (cl->host_client_id == mei_hdr->host_addr &&
- cl->me_client_id == mei_hdr->me_addr &&
+ return mei_cl_hbm_equal(cl, mei_hdr) &&
cl->state == MEI_FILE_CONNECTED &&
- MEI_READ_COMPLETE != cl->reading_state);
+ cl->reading_state != MEI_READ_COMPLETE;
}
/**
- * mei_irq_thread_read_client_message - bottom half read routine after ISR to
- * handle the read mei client message data processing.
+ * mei_irq_read_client_message - process client message
*
- * @complete_list: An instance of our list structure
* @dev: the device structure
* @mei_hdr: header of mei client message
+ * @complete_list: An instance of our list structure
*
* returns 0 on success, <0 on failure.
*/
-static int mei_irq_thread_read_client_message(struct mei_cl_cb *complete_list,
- struct mei_device *dev,
- struct mei_msg_hdr *mei_hdr)
+static int mei_cl_irq_read_msg(struct mei_device *dev,
+ struct mei_msg_hdr *mei_hdr,
+ struct mei_cl_cb *complete_list)
{
struct mei_cl *cl;
- struct mei_cl_cb *cb_pos = NULL, *cb_next = NULL;
+ struct mei_cl_cb *cb, *next;
unsigned char *buffer = NULL;
- dev_dbg(&dev->pdev->dev, "start client msg\n");
- if (list_empty(&dev->read_list.list))
- goto quit;
+ list_for_each_entry_safe(cb, next, &dev->read_list.list, list) {
+ cl = cb->cl;
+ if (!cl || !mei_cl_is_reading(cl, mei_hdr))
+ continue;
- list_for_each_entry_safe(cb_pos, cb_next, &dev->read_list.list, list) {
- cl = cb_pos->cl;
- if (cl && _mei_irq_thread_state_ok(cl, mei_hdr)) {
- cl->reading_state = MEI_READING;
- buffer = cb_pos->response_buffer.data + cb_pos->buf_idx;
+ cl->reading_state = MEI_READING;
- if (cb_pos->response_buffer.size <
- mei_hdr->length + cb_pos->buf_idx) {
- dev_dbg(&dev->pdev->dev, "message overflow.\n");
- list_del(&cb_pos->list);
+ if (cb->response_buffer.size == 0 ||
+ cb->response_buffer.data == NULL) {
+ dev_err(&dev->pdev->dev, "response buffer is not allocated.\n");
+ list_del(&cb->list);
+ return -ENOMEM;
+ }
+
+ if (cb->response_buffer.size < mei_hdr->length + cb->buf_idx) {
+ dev_dbg(&dev->pdev->dev, "message overflow. size %d len %d idx %ld\n",
+ cb->response_buffer.size,
+ mei_hdr->length, cb->buf_idx);
+ buffer = krealloc(cb->response_buffer.data,
+ mei_hdr->length + cb->buf_idx,
+ GFP_KERNEL);
+
+ if (!buffer) {
+ dev_err(&dev->pdev->dev, "allocation failed.\n");
+ list_del(&cb->list);
return -ENOMEM;
}
- if (buffer)
- mei_read_slots(dev, buffer, mei_hdr->length);
-
- cb_pos->buf_idx += mei_hdr->length;
- if (mei_hdr->msg_complete) {
- cl->status = 0;
- list_del(&cb_pos->list);
- dev_dbg(&dev->pdev->dev,
- "completed read H cl = %d, ME cl = %d, length = %lu\n",
- cl->host_client_id,
- cl->me_client_id,
- cb_pos->buf_idx);
-
- list_add_tail(&cb_pos->list,
- &complete_list->list);
- }
-
- break;
+ cb->response_buffer.data = buffer;
+ cb->response_buffer.size =
+ mei_hdr->length + cb->buf_idx;
}
+ buffer = cb->response_buffer.data + cb->buf_idx;
+ mei_read_slots(dev, buffer, mei_hdr->length);
+
+ cb->buf_idx += mei_hdr->length;
+ if (mei_hdr->msg_complete) {
+ cl->status = 0;
+ list_del(&cb->list);
+ dev_dbg(&dev->pdev->dev, "completed read H cl = %d, ME cl = %d, length = %lu\n",
+ cl->host_client_id,
+ cl->me_client_id,
+ cb->buf_idx);
+ list_add_tail(&cb->list, &complete_list->list);
+ }
+ break;
}
-quit:
dev_dbg(&dev->pdev->dev, "message read\n");
if (!buffer) {
mei_read_slots(dev, dev->rd_msg_buf, mei_hdr->length);
struct mei_cl *cl,
struct mei_cl_cb *cmpl_list)
{
- if ((*slots * sizeof(u32)) < (sizeof(struct mei_msg_hdr) +
- sizeof(struct hbm_client_connect_request)))
- return -EBADMSG;
+ u32 msg_slots =
+ mei_data2slots(sizeof(struct hbm_client_connect_request));
- *slots -= mei_data2slots(sizeof(struct hbm_client_connect_request));
+ if (*slots < msg_slots)
+ return -EMSGSIZE;
+
+ *slots -= msg_slots;
if (mei_hbm_cl_disconnect_req(dev, cl)) {
cl->status = 0;
cb_pos->buf_idx = 0;
list_move_tail(&cb_pos->list, &cmpl_list->list);
- return -EMSGSIZE;
- } else {
- cl->state = MEI_FILE_DISCONNECTING;
- cl->status = 0;
- cb_pos->buf_idx = 0;
- list_move_tail(&cb_pos->list, &dev->ctrl_rd_list.list);
- cl->timer_count = MEI_CONNECT_TIMEOUT;
+ return -EIO;
}
+ cl->state = MEI_FILE_DISCONNECTING;
+ cl->status = 0;
+ cb_pos->buf_idx = 0;
+ list_move_tail(&cb_pos->list, &dev->ctrl_rd_list.list);
+ cl->timer_count = MEI_CONNECT_TIMEOUT;
+
return 0;
}
/**
- * _mei_hb_read - processes read related operation.
+ * _mei_irq_thread_read - processes read related operation.
*
* @dev: the device structure.
* @slots: free slots.
struct mei_cl *cl,
struct mei_cl_cb *cmpl_list)
{
- if ((*slots * sizeof(u32)) < (sizeof(struct mei_msg_hdr) +
- sizeof(struct hbm_flow_control))) {
+ u32 msg_slots = mei_data2slots(sizeof(struct hbm_flow_control));
+
+ if (*slots < msg_slots) {
/* return the cancel routine */
list_del(&cb_pos->list);
- return -EBADMSG;
+ return -EMSGSIZE;
}
- *slots -= mei_data2slots(sizeof(struct hbm_flow_control));
+ *slots -= msg_slots;
if (mei_hbm_cl_flow_control_req(dev, cl)) {
cl->status = -ENODEV;
struct mei_cl *cl,
struct mei_cl_cb *cmpl_list)
{
- if ((*slots * sizeof(u32)) < (sizeof(struct mei_msg_hdr) +
- sizeof(struct hbm_client_connect_request))) {
+ u32 msg_slots =
+ mei_data2slots(sizeof(struct hbm_client_connect_request));
+
+ if (*slots < msg_slots) {
/* return the cancel routine */
list_del(&cb_pos->list);
- return -EBADMSG;
+ return -EMSGSIZE;
}
+ *slots -= msg_slots;
+
cl->state = MEI_FILE_CONNECTING;
- *slots -= mei_data2slots(sizeof(struct hbm_client_connect_request));
+
if (mei_hbm_cl_connect_req(dev, cl)) {
cl->status = -ENODEV;
cb_pos->buf_idx = 0;
struct mei_msg_hdr mei_hdr;
struct mei_cl *cl = cb->cl;
size_t len = cb->request_buffer.size - cb->buf_idx;
- size_t msg_slots = mei_data2slots(len);
+ u32 msg_slots = mei_data2slots(len);
mei_hdr.host_addr = cl->host_client_id;
mei_hdr.me_addr = cl->me_client_id;
return -ENODEV;
}
- if (mei_cl_flow_ctrl_reduce(cl))
- return -ENODEV;
cl->status = 0;
cb->buf_idx += mei_hdr.length;
- if (mei_hdr.msg_complete)
+ if (mei_hdr.msg_complete) {
+ if (mei_cl_flow_ctrl_reduce(cl))
+ return -ENODEV;
list_move_tail(&cb->list, &dev->write_waiting_list.list);
+ }
return 0;
}
/**
- * mei_irq_thread_read_handler - bottom half read routine after ISR to
+ * mei_irq_read_handler - bottom half read routine after ISR to
* handle the read processing.
*
* @dev: the device structure
" client = %d, ME client = %d\n",
cl_pos->host_client_id,
cl_pos->me_client_id);
- if (cl_pos->host_client_id == mei_hdr->host_addr &&
- cl_pos->me_client_id == mei_hdr->me_addr)
+ if (mei_cl_hbm_equal(cl_pos, mei_hdr))
break;
}
}
}
if (((*slots) * sizeof(u32)) < mei_hdr->length) {
- dev_dbg(&dev->pdev->dev,
+ dev_err(&dev->pdev->dev,
"we can't read the message slots =%08x.\n",
*slots);
/* we can't read the message */
} else if (mei_hdr->host_addr == dev->iamthif_cl.host_client_id &&
(MEI_FILE_CONNECTED == dev->iamthif_cl.state) &&
(dev->iamthif_state == MEI_IAMTHIF_READING)) {
- dev_dbg(&dev->pdev->dev, "call mei_irq_thread_read_iamthif_message.\n");
+ dev_dbg(&dev->pdev->dev, "call mei_irq_thread_read_iamthif_message.\n");
dev_dbg(&dev->pdev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr));
- ret = mei_amthif_irq_read_message(cmpl_list, dev, mei_hdr);
+ ret = mei_amthif_irq_read_msg(dev, mei_hdr, cmpl_list);
if (ret)
goto end;
} else {
- dev_dbg(&dev->pdev->dev, "call mei_irq_thread_read_client_message.\n");
- ret = mei_irq_thread_read_client_message(cmpl_list,
- dev, mei_hdr);
+ dev_dbg(&dev->pdev->dev, "call mei_cl_irq_read_msg.\n");
+ dev_dbg(&dev->pdev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr));
+ ret = mei_cl_irq_read_msg(dev, mei_hdr, cmpl_list);
if (ret)
goto end;
-
}
/* reset the number of slots and header */
if (*slots == -EOVERFLOW) {
/* overflow - reset */
- dev_dbg(&dev->pdev->dev, "resetting due to slots overflow.\n");
+ dev_err(&dev->pdev->dev, "resetting due to slots overflow.\n");
/* set the event since message has been read */
ret = -ERANGE;
goto end;
end:
return ret;
}
+EXPORT_SYMBOL_GPL(mei_irq_read_handler);
/**
*
* returns 0 on success, <0 on failure.
*/
-int mei_irq_write_handler(struct mei_device *dev,
- struct mei_cl_cb *cmpl_list)
+int mei_irq_write_handler(struct mei_device *dev, struct mei_cl_cb *cmpl_list)
{
struct mei_cl *cl;
}
return 0;
}
+EXPORT_SYMBOL_GPL(mei_irq_write_handler);
if (dev->dev_state == MEI_DEV_INIT_CLIENTS) {
if (dev->init_clients_timer) {
if (--dev->init_clients_timer == 0) {
- dev_dbg(&dev->pdev->dev, "IMEI reset due to init clients timeout ,init clients state = %d.\n",
- dev->init_clients_state);
+ dev_err(&dev->pdev->dev, "reset: init clients timeout hbm_state = %d.\n",
+ dev->hbm_state);
mei_reset(dev, 1);
}
}
list_for_each_entry_safe(cl_pos, cl_next, &dev->file_list, link) {
if (cl_pos->timer_count) {
if (--cl_pos->timer_count == 0) {
- dev_dbg(&dev->pdev->dev, "HECI reset due to connect/disconnect timeout.\n");
+ dev_err(&dev->pdev->dev, "reset: connect/disconnect timeout.\n");
mei_reset(dev, 1);
goto out;
}
if (dev->iamthif_stall_timer) {
if (--dev->iamthif_stall_timer == 0) {
- dev_dbg(&dev->pdev->dev, "resetting because of hang to amthi.\n");
+ dev_err(&dev->pdev->dev, "reset: amthif hanged.\n");
mei_reset(dev, 1);
dev->iamthif_msg_buf_size = 0;
dev->iamthif_msg_buf_index = 0;
static struct pci_dev *mei_pdev;
/* mei_pci_tbl - PCI Device ID Table */
-static DEFINE_PCI_DEVICE_TABLE(mei_pci_tbl) = {
+static DEFINE_PCI_DEVICE_TABLE(mei_me_pci_tbl) = {
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_82946GZ)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_82G35)},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MEI_DEV_ID_82Q965)},
{0, }
};
-MODULE_DEVICE_TABLE(pci, mei_pci_tbl);
+MODULE_DEVICE_TABLE(pci, mei_me_pci_tbl);
static DEFINE_MUTEX(mei_mutex);
/**
* mei_me_quirk_probe - probe for devices that don't have a valid ME interface
+ *
* @pdev: PCI device structure
* @ent: entry into pci_device_table
*
* returns true if ME Interface is valid, false otherwise
*/
-static bool mei_quirk_probe(struct pci_dev *pdev,
+static bool mei_me_quirk_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
u32 reg;
*
* returns 0 on success, <0 on failure.
*/
-static int mei_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct mei_device *dev;
struct mei_me_hw *hw;
mutex_lock(&mei_mutex);
- if (!mei_quirk_probe(pdev, ent)) {
+ if (!mei_me_quirk_probe(pdev, ent)) {
err = -ENODEV;
goto end;
}
goto disable_msi;
}
- if (mei_hw_init(dev)) {
+ if (mei_start(dev)) {
dev_err(&pdev->dev, "init hw failure.\n");
err = -ENODEV;
goto release_irq;
}
- err = mei_register(&pdev->dev);
+ err = mei_register(dev);
if (err)
goto release_irq;
mei_pdev = pdev;
pci_set_drvdata(pdev, dev);
-
schedule_delayed_work(&dev->timer_work, HZ);
mutex_unlock(&mei_mutex);
* mei_remove is called by the PCI subsystem to alert the driver
* that it should release a PCI device.
*/
-static void mei_remove(struct pci_dev *pdev)
+static void mei_me_remove(struct pci_dev *pdev)
{
struct mei_device *dev;
struct mei_me_hw *hw;
hw = to_me_hw(dev);
- mutex_lock(&dev->device_lock);
-
- cancel_delayed_work(&dev->timer_work);
- mei_wd_stop(dev);
+ dev_err(&pdev->dev, "stop\n");
+ mei_stop(dev);
mei_pdev = NULL;
- if (dev->iamthif_cl.state == MEI_FILE_CONNECTED) {
- dev->iamthif_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->iamthif_cl);
- }
- if (dev->wd_cl.state == MEI_FILE_CONNECTED) {
- dev->wd_cl.state = MEI_FILE_DISCONNECTING;
- mei_cl_disconnect(&dev->wd_cl);
- }
-
- /* Unregistering watchdog device */
- mei_watchdog_unregister(dev);
-
- /* remove entry if already in list */
- dev_dbg(&pdev->dev, "list del iamthif and wd file list.\n");
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->wd_cl);
-
- if (dev->open_handle_count > 0)
- dev->open_handle_count--;
- mei_cl_unlink(&dev->iamthif_cl);
-
- dev->iamthif_current_cb = NULL;
- dev->me_clients_num = 0;
-
- mutex_unlock(&dev->device_lock);
-
- flush_scheduled_work();
-
/* disable interrupts */
mei_disable_interrupts(dev);
if (hw->mem_addr)
pci_iounmap(pdev, hw->mem_addr);
+ mei_deregister(dev);
+
kfree(dev);
pci_release_regions(pdev);
pci_disable_device(pdev);
- mei_deregister();
}
#ifdef CONFIG_PM
-static int mei_pci_suspend(struct device *device)
+static int mei_me_pci_suspend(struct device *device)
{
struct pci_dev *pdev = to_pci_dev(device);
struct mei_device *dev = pci_get_drvdata(pdev);
- int err;
if (!dev)
return -ENODEV;
- mutex_lock(&dev->device_lock);
- cancel_delayed_work(&dev->timer_work);
+ dev_err(&pdev->dev, "suspend\n");
- /* Stop watchdog if exists */
- err = mei_wd_stop(dev);
- /* Set new mei state */
- if (dev->dev_state == MEI_DEV_ENABLED ||
- dev->dev_state == MEI_DEV_RECOVERING_FROM_RESET) {
- dev->dev_state = MEI_DEV_POWER_DOWN;
- mei_reset(dev, 0);
- }
- mutex_unlock(&dev->device_lock);
+ mei_stop(dev);
+
+ mei_disable_interrupts(dev);
free_irq(pdev->irq, dev);
pci_disable_msi(pdev);
- return err;
+ return 0;
}
-static int mei_pci_resume(struct device *device)
+static int mei_me_pci_resume(struct device *device)
{
struct pci_dev *pdev = to_pci_dev(device);
struct mei_device *dev;
return err;
}
-static SIMPLE_DEV_PM_OPS(mei_pm_ops, mei_pci_suspend, mei_pci_resume);
-#define MEI_PM_OPS (&mei_pm_ops)
+static SIMPLE_DEV_PM_OPS(mei_me_pm_ops, mei_me_pci_suspend, mei_me_pci_resume);
+#define MEI_ME_PM_OPS (&mei_me_pm_ops)
#else
-#define MEI_PM_OPS NULL
+#define MEI_ME_PM_OPS NULL
#endif /* CONFIG_PM */
/*
* PCI driver structure
*/
-static struct pci_driver mei_driver = {
+static struct pci_driver mei_me_driver = {
.name = KBUILD_MODNAME,
- .id_table = mei_pci_tbl,
- .probe = mei_probe,
- .remove = mei_remove,
- .shutdown = mei_remove,
- .driver.pm = MEI_PM_OPS,
+ .id_table = mei_me_pci_tbl,
+ .probe = mei_me_probe,
+ .remove = mei_me_remove,
+ .shutdown = mei_me_remove,
+ .driver.pm = MEI_ME_PM_OPS,
};
-module_pci_driver(mei_driver);
+module_pci_driver(mei_me_driver);
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Intel(R) Management Engine Interface");
* mei_wd_host_init - connect to the watchdog client
*
* @dev: the device structure
+ *
* returns -ENOENT if wd client cannot be found
* -EIO if write has failed
* 0 on success
*
* returns 0 if success, negative errno code for failure
*/
-static int mei_wd_ops_set_timeout(struct watchdog_device *wd_dev, unsigned int timeout)
+static int mei_wd_ops_set_timeout(struct watchdog_device *wd_dev,
+ unsigned int timeout)
{
struct mei_device *dev;
DCBX_READ_REMOTE_MIB);
if (rc) {
- BNX2X_ERR("Faild to read remote mib from FW\n");
+ BNX2X_ERR("Failed to read remote mib from FW\n");
return rc;
}
DCBX_READ_LOCAL_MIB);
if (rc) {
- BNX2X_ERR("Faild to read local mib from FW\n");
+ BNX2X_ERR("Failed to read local mib from FW\n");
return rc;
}
break;
default:
BNX2X_ERR("Non valid capability ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
DP(BNX2X_MSG_DCB, "capid %d:%x\n", capid, *cap);
break;
default:
BNX2X_ERR("Non valid TC-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
return -EINVAL;
}
static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
{
struct bnx2x *bp = netdev_priv(netdev);
DP(BNX2X_MSG_DCB, "state = %d\n", bp->dcbx_local_feat.pfc.enabled);
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "dcbnl call not valid\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
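
The switch from -EINVAL to 1 in the hunks above follows from the dcbnl callback prototypes: these getters hand back a u8 status where nonzero simply flags failure, so a negative errno would be silently truncated to a meaningless byte value. A standalone illustration of the truncation:

	#include <stdio.h>
	#include <errno.h>

	int main(void)
	{
		unsigned char rval = -EINVAL;	/* dcbnl status is a u8 */

		printf("-EINVAL stored in a u8: %u\n", rval);	/* prints 234 */
		return 0;
	}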
{
if (type < ARRAY_SIZE(rproc_crash_names))
return rproc_crash_names[type];
- return "unkown";
+ return "unknown";
}
/*
* TODO: support predefined notifyids (via resource table)
*/
ret = idr_alloc(&rproc->notifyids, rvring, 0, 0, GFP_KERNEL);
- if (ret) {
+ if (ret < 0) {
dev_err(dev, "idr_alloc failed: %d\n", ret);
dma_free_coherent(dev->parent, size, va, dma);
return ret;
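
The `ret < 0` check matches the idr_alloc() contract (new in 3.9): on success it returns the freshly allocated id, a non-negative integer, and a negative errno only on failure. A plain `if (ret)` would misread every id except 0 as an error. The expected pattern, sketched:

	int id;

	id = idr_alloc(&rproc->notifyids, rvring, 0, 0, GFP_KERNEL);
	if (id < 0)			/* -ENOMEM or -ENOSPC */
		return id;
	rvring->notifyid = id;		/* ids 1, 2, ... are successes too */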
/* it is now safe to add the virtio device */
ret = rproc_add_virtio_dev(rvdev, rsc->id);
if (ret)
- goto free_rvdev;
+ goto remove_rvdev;
return 0;
+remove_rvdev:
+ list_del(&rvdev->node);
free_rvdev:
kfree(rvdev);
return ret;
/* Module parameter for WTSR function control */
static int wtsr_en = 1;
module_param(wtsr_en, int, 0444);
- MODULE_PARM_DESC(wtsr_en, "Wachdog Timeout & Sofware Reset (default=on)");
+ MODULE_PARM_DESC(wtsr_en, "Watchdog Timeout & Software Reset (default=on)");
/* Module parameter for SMPL function control */
static int smpl_en = 1;
module_param(smpl_en, int, 0444);
device_init_wakeup(&pdev->dev, 1);
- info->rtc_dev = rtc_device_register("max8997-rtc", &pdev->dev,
- &max8997_rtc_ops, THIS_MODULE);
+ info->rtc_dev = devm_rtc_device_register(&pdev->dev, "max8997-rtc",
+ &max8997_rtc_ops, THIS_MODULE);
if (IS_ERR(info->rtc_dev)) {
ret = PTR_ERR(info->rtc_dev);
virq = irq_create_mapping(max8997->irq_domain, MAX8997_PMICIRQ_RTCA1);
if (!virq) {
dev_err(&pdev->dev, "Failed to create mapping alarm IRQ\n");
+ ret = -ENXIO;
goto err_out;
}
info->virq = virq;
ret = devm_request_threaded_irq(&pdev->dev, virq, NULL,
max8997_rtc_alarm_irq, 0,
"rtc-alarm0", info);
- if (ret < 0) {
+ if (ret < 0)
dev_err(&pdev->dev, "Failed to request alarm IRQ: %d: %d\n",
info->virq, ret);
- goto err_out;
- }
-
- return ret;
err_out:
- rtc_device_unregister(info->rtc_dev);
return ret;
}
static int max8997_rtc_remove(struct platform_device *pdev)
{
- struct max8997_rtc_info *info = platform_get_drvdata(pdev);
-
- if (info)
- rtc_device_unregister(info->rtc_dev);
-
return 0;
}
}
/*
- * FB_VMODE_SMOOTH_XPAN will be cleared, if one of the folloing
+ * FB_VMODE_SMOOTH_XPAN will be cleared, if one of the following
* checks failed and smooth scrolling is not possible
*/
},
};
-static int __init amifb_init(void)
-{
- return platform_driver_probe(&amifb_driver, amifb_probe);
-}
-
-module_init(amifb_init);
-
-static void __exit amifb_exit(void)
-{
- platform_driver_unregister(&amifb_driver);
-}
-
-module_exit(amifb_exit);
+module_platform_driver_probe(amifb_driver, amifb_probe);
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:amiga-video");
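
module_platform_driver_probe() generates precisely the boilerplate deleted above; simplified from its definition in include/linux/platform_device.h, the macro expands to:

	static int __init amifb_driver_init(void)
	{
		return platform_driver_probe(&amifb_driver, amifb_probe);
	}
	module_init(amifb_driver_init);

	static void __exit amifb_driver_exit(void)
	{
		platform_driver_unregister(&amifb_driver);
	}
	module_exit(amifb_driver_exit);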
.w = 1024,
.h = 768,
},
+ [AUOK190X_RESOLUTION_600_800] = {
+ .w = 600,
+ .h = 800,
+ },
+ [AUOK190X_RESOLUTION_768_1024] = {
+ .w = 768,
+ .h = 1024,
+ },
};
/*
par->board->set_ctl(par, AUOK190X_I80_DC, 1);
}
-static int auok190x_issue_pixels(struct auok190xfb_par *par, int size,
- u16 *data)
+/*
+ * Conversion of 16bit color to 4bit grayscale;
+ * does roughly (0.3 * R + 0.6 * G + 0.1 * B) / 2
+ */
+static inline int rgb565_to_gray4(u16 data, struct fb_var_screeninfo *var)
+{
+ return ((((data & 0xF800) >> var->red.offset) * 77 +
+ ((data & 0x07E0) >> (var->green.offset + 1)) * 151 +
+ ((data & 0x1F) >> var->blue.offset) * 28) >> 8 >> 1);
+}
+
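A quick check of the weights: 77 + 151 + 28 = 256, so a full-scale 5-bit channel (31) maps to (31 * 256) >> 8 >> 1 = 15, the top of the 4-bit range. Worked examples (green is a 6-bit field, pre-halved by the offset + 1 shift):

	/* white, 0xFFFF:    R=31, G=63>>1=31, B=31 -> 7936 >> 9 = 15 */
	/* mid gray, 0x7BEF: R=15, G=31>>1=15, B=15 -> 3840 >> 9 = 7  */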
+static int auok190x_issue_pixels_rgb565(struct auok190xfb_par *par, int size,
+ u16 *data)
+{
+ struct fb_var_screeninfo *var = &par->info->var;
+ struct device *dev = par->info->device;
+ int i;
+ u16 tmp;
+
+ if (size & 7) {
+ dev_err(dev, "issue_pixels: size %d must be a multiple of 8\n",
+ size);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < (size >> 2); i++) {
+ par->board->set_ctl(par, AUOK190X_I80_WR, 0);
+
+ tmp = (rgb565_to_gray4(data[4*i], var) & 0x000F);
+ tmp |= (rgb565_to_gray4(data[4*i+1], var) << 4) & 0x00F0;
+ tmp |= (rgb565_to_gray4(data[4*i+2], var) << 8) & 0x0F00;
+ tmp |= (rgb565_to_gray4(data[4*i+3], var) << 12) & 0xF000;
+
+ par->board->set_hdb(par, tmp);
+ par->board->set_ctl(par, AUOK190X_I80_WR, 1);
+ }
+
+ return 0;
+}
+
+static int auok190x_issue_pixels_gray8(struct auok190xfb_par *par, int size,
+ u16 *data)
{
struct device *dev = par->info->device;
int i;
return 0;
}
+static int auok190x_issue_pixels(struct auok190xfb_par *par, int size,
+ u16 *data)
+{
+ struct fb_info *info = par->info;
+ struct device *dev = par->info->device;
+
+ if (info->var.bits_per_pixel == 8 && info->var.grayscale)
+ auok190x_issue_pixels_gray8(par, size, data);
+ else if (info->var.bits_per_pixel == 16)
+ auok190x_issue_pixels_rgb565(par, size, data);
+ else
+ dev_err(dev, "unsupported color mode (bits: %d, gray: %d)\n",
+ info->var.bits_per_pixel, info->var.grayscale);
+
+ return 0;
+}
+
static u16 auok190x_read_data(struct auok190xfb_par *par)
{
u16 data;
{
struct fb_deferred_io *fbdefio = info->fbdefio;
struct auok190xfb_par *par = info->par;
+ u16 line_length = info->fix.line_length;
u16 yres = info->var.yres;
- u16 xres = info->var.xres;
u16 y1 = 0, h = 0;
int prev_index = -1;
struct page *cur;
}
/* height increment is fixed per page */
- h_inc = DIV_ROUND_UP(PAGE_SIZE , xres);
+ h_inc = DIV_ROUND_UP(PAGE_SIZE, line_length);
/* calculate number of pages from pixel height */
threshold = par->consecutive_threshold / h_inc;
list_for_each_entry(cur, &fbdefio->pagelist, lru) {
if (prev_index < 0) {
/* just starting so assign first page */
- y1 = (cur->index << PAGE_SHIFT) / xres;
+ y1 = (cur->index << PAGE_SHIFT) / line_length;
h = h_inc;
} else if ((cur->index - prev_index) <= threshold) {
/* page is within our threshold for single updates */
par->update_partial(par, y1, y1 + h);
/* start over with our non consecutive page */
- y1 = (cur->index << PAGE_SHIFT) / xres;
+ y1 = (cur->index << PAGE_SHIFT) / line_length;
h = h_inc;
}
prev_index = cur->index;
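
Using line_length instead of xres matters once 16bpp is supported: deferred I/O tracks dirty pages, and translating a page index into a scanline needs bytes per line, not pixels. On a hypothetical 600-pixel-wide panel at 16bpp, line_length is 1200 bytes, so one 4096-byte page spans DIV_ROUND_UP(4096, 1200) = 4 lines; dividing by xres = 600 would have claimed 7 and redrawn nearly twice the dirty area.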
static int auok190xfb_check_var(struct fb_var_screeninfo *var,
struct fb_info *info)
{
- if (info->var.xres != var->xres || info->var.yres != var->yres ||
- info->var.xres_virtual != var->xres_virtual ||
- info->var.yres_virtual != var->yres_virtual) {
- pr_info("%s: Resolution not supported: X%u x Y%u\n",
- __func__, var->xres, var->yres);
+ struct device *dev = info->device;
+ struct auok190xfb_par *par = info->par;
+ struct panel_info *panel = &panel_table[par->resolution];
+ int size;
+
+ /*
+ * Color depth
+ */
+
+ if (var->bits_per_pixel == 8 && var->grayscale == 1) {
+ /*
+ * For 8-bit grayscale, R, G, and B offset are equal.
+ */
+ var->red.length = 8;
+ var->red.offset = 0;
+ var->red.msb_right = 0;
+
+ var->green.length = 8;
+ var->green.offset = 0;
+ var->green.msb_right = 0;
+
+ var->blue.length = 8;
+ var->blue.offset = 0;
+ var->blue.msb_right = 0;
+
+ var->transp.length = 0;
+ var->transp.offset = 0;
+ var->transp.msb_right = 0;
+ } else if (var->bits_per_pixel == 16) {
+ var->red.length = 5;
+ var->red.offset = 11;
+ var->red.msb_right = 0;
+
+ var->green.length = 6;
+ var->green.offset = 5;
+ var->green.msb_right = 0;
+
+ var->blue.length = 5;
+ var->blue.offset = 0;
+ var->blue.msb_right = 0;
+
+ var->transp.length = 0;
+ var->transp.offset = 0;
+ var->transp.msb_right = 0;
+ } else {
+ dev_warn(dev, "unsupported color mode (bits: %d, grayscale: %d)\n",
+ info->var.bits_per_pixel, info->var.grayscale);
return -EINVAL;
}
+ /*
+ * Dimensions
+ */
+
+ switch (var->rotate) {
+ case FB_ROTATE_UR:
+ case FB_ROTATE_UD:
+ var->xres = panel->w;
+ var->yres = panel->h;
+ break;
+ case FB_ROTATE_CW:
+ case FB_ROTATE_CCW:
+ var->xres = panel->h;
+ var->yres = panel->w;
+ break;
+ default:
+ dev_dbg(dev, "Invalid rotation request\n");
+ return -EINVAL;
+ }
+
+ var->xres_virtual = var->xres;
+ var->yres_virtual = var->yres;
+
/*
* Memory limit
*/
- if ((info->fix.line_length * var->yres_virtual) > info->fix.smem_len) {
- pr_info("%s: Memory Limit requested yres_virtual = %u\n",
- __func__, var->yres_virtual);
+ size = var->xres_virtual * var->yres_virtual * var->bits_per_pixel / 8;
+ if (size > info->fix.smem_len) {
+ dev_err(dev, "Memory limit exceeded, requested %dK\n",
+ size >> 10);
return -ENOMEM;
}
return 0;
}
+static int auok190xfb_set_fix(struct fb_info *info)
+{
+ struct fb_fix_screeninfo *fix = &info->fix;
+ struct fb_var_screeninfo *var = &info->var;
+
+ fix->line_length = var->xres_virtual * var->bits_per_pixel / 8;
+
+ fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->accel = FB_ACCEL_NONE;
+ fix->visual = (var->grayscale) ? FB_VISUAL_STATIC_PSEUDOCOLOR
+ : FB_VISUAL_TRUECOLOR;
+ fix->xpanstep = 0;
+ fix->ypanstep = 0;
+ fix->ywrapstep = 0;
+
+ return 0;
+}
+
+static int auok190xfb_set_par(struct fb_info *info)
+{
+ struct auok190xfb_par *par = info->par;
+
+ par->rotation = info->var.rotate;
+ auok190xfb_set_fix(info);
+
+ /* reinit the controller to honor the rotation */
+ par->init(par);
+
+ /* wait for init to complete */
+ par->board->wait_for_rdy(par);
+
+ return 0;
+}
+
static struct fb_ops auok190xfb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_copyarea = auok190xfb_copyarea,
.fb_imageblit = auok190xfb_imageblit,
.fb_check_var = auok190xfb_check_var,
+ .fb_set_par = auok190xfb_set_par,
};
/*
static void auok190x_recover(struct auok190xfb_par *par)
{
+ struct device *dev = par->info->device;
+
auok190x_power(par, 0);
msleep(100);
auok190x_power(par, 1);
+ /* after power-cycling the device, it's always active */
+ pm_runtime_set_active(dev);
+ par->standby = 0;
+
par->init(par);
/* wait for init to complete */
/* initialise fix, var, resolution and rotation */
strlcpy(info->fix.id, init->id, 16);
- info->fix.type = FB_TYPE_PACKED_PIXELS;
- info->fix.visual = FB_VISUAL_STATIC_PSEUDOCOLOR;
- info->fix.xpanstep = 0;
- info->fix.ypanstep = 0;
- info->fix.ywrapstep = 0;
- info->fix.accel = FB_ACCEL_NONE;
-
info->var.bits_per_pixel = 8;
info->var.grayscale = 1;
- info->var.red.length = 8;
- info->var.green.length = 8;
- info->var.blue.length = 8;
panel = &panel_table[board->resolution];
- /* if 90 degree rotation, switch width and height */
- if (board->rotation & 1) {
- info->var.xres = panel->h;
- info->var.yres = panel->w;
- info->var.xres_virtual = panel->h;
- info->var.yres_virtual = panel->w;
- info->fix.line_length = panel->h;
- } else {
- info->var.xres = panel->w;
- info->var.yres = panel->h;
- info->var.xres_virtual = panel->w;
- info->var.yres_virtual = panel->h;
- info->fix.line_length = panel->w;
- }
-
par->resolution = board->resolution;
- par->rotation = board->rotation;
+ par->rotation = 0;
/* videomemory handling */
- videomemorysize = roundup((panel->w * panel->h), PAGE_SIZE);
+ videomemorysize = roundup((panel->w * panel->h) * 2, PAGE_SIZE);
videomemory = vmalloc(videomemorysize);
if (!videomemory) {
ret = -ENOMEM;
info->flags = FBINFO_FLAG_DEFAULT | FBINFO_VIRTFB;
info->fbops = &auok190xfb_ops;
+ ret = auok190xfb_check_var(&info->var, info);
+ if (ret)
+ goto err_defio;
+
+ auok190xfb_set_fix(info);
+
/* deferred io init */
info->fbdefio = devm_kzalloc(info->device,
goto err_defio;
}
- dev_dbg(info->device, "targetting %d frames per second\n", board->fps);
+ dev_dbg(info->device, "targeting %d frames per second\n", board->fps);
info->fbdefio->delay = HZ / board->fps;
info->fbdefio->first_io = auok190xfb_dpy_first_io;
info->fbdefio->deferred_io = auok190xfb_dpy_deferred_io;
init_once((void *) fi);
- /* Initilize f2fs-specific inode info */
+ /* Initialize f2fs-specific inode info */
fi->vfs_inode.i_version = 1;
atomic_set(&fi->dirty_dents, 0);
fi->i_current_depth = 1;
.kill_sb = kill_block_super,
.fs_flags = FS_REQUIRES_DEV,
};
+MODULE_ALIAS_FS("f2fs");
static int __init init_inodecache(void)
{
*
* present_pages is physical pages existing within the zone, which
* is calculated as:
- * present_pages = spanned_pages - absent_pages(pags in holes);
+ * present_pages = spanned_pages - absent_pages(pages in holes);
*
* managed_pages is present pages managed by the buddy system, which
* is calculated as (reserved_pages includes pages allocated by the
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
-static inline unsigned zone_end_pfn(const struct zone *zone)
+static inline unsigned long zone_end_pfn(const struct zone *zone)
{
return zone->zone_start_pfn + zone->spanned_pages;
}
* Unregister hstate attributes from a single node device.
* No-op if no hstate attributes attached.
*/
- void hugetlb_unregister_node(struct node *node)
+ static void hugetlb_unregister_node(struct node *node)
{
struct hstate *h;
struct node_hstate *nhs = &node_hstates[node->dev.id];
* Register hstate attributes for a single node device.
* No-op if attributes already registered.
*/
- void hugetlb_register_node(struct node *node)
+ static void hugetlb_register_node(struct node *node)
{
struct hstate *h;
struct node_hstate *nhs = &node_hstates[node->dev.id];
nid, h->surplus_huge_pages_node[nid]);
}
+void hugetlb_show_meminfo(void)
+{
+ struct hstate *h;
+ int nid;
+
+ for_each_node_state(nid, N_MEMORY)
+ for_each_hstate(h)
+ pr_info("Node %d hugepages_total=%u hugepages_free=%u hugepages_surp=%u hugepages_size=%lukB\n",
+ nid,
+ h->nr_huge_pages_node[nid],
+ h->free_huge_pages_node[nid],
+ h->surplus_huge_pages_node[nid],
+ 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
+}
+
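For reference, hugetlb_show_meminfo() emits one line per node per hstate; with 2 MB pages (order 9 with 4 kB base pages, so 1UL << (9 + 12 - 10) = 2048 kB) the output would look like (values illustrative):

	Node 0 hugepages_total=64 hugepages_free=60 hugepages_surp=0 hugepages_size=2048kB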
/* Return the number of pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
static int hugetlb_acct_memory(struct hstate *h, long delta)
pte_t entry;
if (writable) {
- entry =
- pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
+ entry = huge_pte_mkwrite(huge_pte_mkdirty(mk_huge_pte(page,
+ vma->vm_page_prot)));
} else {
- entry = huge_pte_wrprotect(mk_pte(page, vma->vm_page_prot));
+ entry = huge_pte_wrprotect(mk_huge_pte(page,
+ vma->vm_page_prot));
}
entry = pte_mkyoung(entry);
entry = pte_mkhuge(entry);
{
pte_t entry;
- entry = pte_mkwrite(pte_mkdirty(huge_ptep_get(ptep)));
+ entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
update_mmu_cache(vma, address, ptep);
}
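
The pte_* to huge_pte_* conversions above let architectures with unusual hugepage PTE layouts supply their own helpers; everyone else picks up trivial wrappers along the lines of the asm-generic fallbacks (a sketch; see include/asm-generic/hugetlb.h):

	static inline pte_t huge_pte_mkwrite(pte_t pte)
	{
		return pte_mkwrite(pte);
	}

	static inline pte_t huge_pte_mkdirty(pte_t pte)
	{
		return pte_mkdirty(pte);
	}

	static inline int huge_pte_write(pte_t pte)
	{
		return pte_write(pte);
	}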
* HWPoisoned hugepage is already unmapped and dropped reference
*/
if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
- pte_clear(mm, address, ptep);
+ huge_pte_clear(mm, address, ptep);
continue;
}
pte = huge_ptep_get_and_clear(mm, address, ptep);
tlb_remove_tlb_entry(tlb, ptep, address);
- if (pte_dirty(pte))
+ if (huge_pte_dirty(pte))
set_page_dirty(page);
page_remove_rmap(page);
* page now as it is used to determine if a reservation has been
* consumed.
*/
- if ((flags & FAULT_FLAG_WRITE) && !pte_write(entry)) {
+ if ((flags & FAULT_FLAG_WRITE) && !huge_pte_write(entry)) {
if (vma_needs_reservation(h, vma, address) < 0) {
ret = VM_FAULT_OOM;
goto out_mutex;
if (flags & FAULT_FLAG_WRITE) {
- if (!pte_write(entry)) {
+ if (!huge_pte_write(entry)) {
ret = hugetlb_cow(mm, vma, address, ptep, entry,
pagecache_page);
goto out_page_table_lock;
}
- entry = pte_mkdirty(entry);
+ entry = huge_pte_mkdirty(entry);
}
entry = pte_mkyoung(entry);
if (huge_ptep_set_access_flags(vma, address, ptep, entry,
break;
}
- if (absent ||
- ((flags & FOLL_WRITE) && !pte_write(huge_ptep_get(pte)))) {
+ /*
+ * We need to call hugetlb_fault for both hugepages under migration
+ * (in which case hugetlb_fault waits for the migration) and
+ * hwpoisoned hugepages (in which case we need to prevent the
+ * caller from accessing them). To do this, we use is_swap_pte
+ * here instead of is_hugetlb_entry_migration and
+ * is_hugetlb_entry_hwpoisoned, because it simply covers both
+ * cases, and because we can't follow correct pages directly
+ * from any kind of swap entries.
+ */
+ if (absent || is_swap_pte(huge_ptep_get(pte)) ||
+ ((flags & FOLL_WRITE) &&
+ !huge_pte_write(huge_ptep_get(pte)))) {
int ret;
spin_unlock(&mm->page_table_lock);
}
if (!huge_pte_none(huge_ptep_get(ptep))) {
pte = huge_ptep_get_and_clear(mm, address, ptep);
- pte = pte_mkhuge(pte_modify(pte, newprot));
+ pte = pte_mkhuge(huge_pte_modify(pte, newprot));
pte = arch_make_huge_pte(pte, vma, NULL, 0);
set_huge_pte_at(mm, address, ptep, pte);
pages++;
tlb->mm = mm;
tlb->fullmm = fullmm;
+ tlb->need_flush_all = 0;
tlb->start = -1UL;
tlb->end = 0;
tlb->need_flush = 0;
* Choose text because data symbols depend on CONFIG_KALLSYMS_ALL=y
*/
if (vma->vm_ops)
- print_symbol(KERN_ALERT "vma->vm_ops->fault: %s\n",
- (unsigned long)vma->vm_ops->fault);
+ printk(KERN_ALERT "vma->vm_ops->fault: %pSR\n",
+ vma->vm_ops->fault);
if (vma->vm_file && vma->vm_file->f_op)
- print_symbol(KERN_ALERT "vma->vm_file->f_op->mmap: %s\n",
- (unsigned long)vma->vm_file->f_op->mmap);
+ printk(KERN_ALERT "vma->vm_file->f_op->mmap: %pSR\n",
+ vma->vm_file->f_op->mmap);
dump_stack();
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
}
}
EXPORT_SYMBOL(remap_pfn_range);
+/**
+ * vm_iomap_memory - remap memory to userspace
+ * @vma: user vma to map to
+ * @start: start of area
+ * @len: size of area
+ *
+ * This is a simplified io_remap_pfn_range() for common driver use. The
+ * driver just needs to give us the physical memory range to be mapped,
+ * we'll figure out the rest from the vma information.
+ *
+ * NOTE! Some drivers might want to tweak vma->vm_page_prot first to set
+ * up write-combining or similar.
+ */
+int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
+{
+ unsigned long vm_len, pfn, pages;
+
+ /* Check that the physical memory area passed in looks valid */
+ if (start + len < start)
+ return -EINVAL;
+ /*
+ * You *really* shouldn't map things that aren't page-aligned,
+ * but we've historically allowed it because IO memory might
+ * just have smaller alignment.
+ */
+ len += start & ~PAGE_MASK;
+ pfn = start >> PAGE_SHIFT;
+ pages = (len + ~PAGE_MASK) >> PAGE_SHIFT;
+ if (pfn + pages < pfn)
+ return -EINVAL;
+
+ /* We start the mapping 'vm_pgoff' pages into the area */
+ if (vma->vm_pgoff > pages)
+ return -EINVAL;
+ pfn += vma->vm_pgoff;
+ pages -= vma->vm_pgoff;
+
+ /* Can we fit all of the mapping? */
+ vm_len = vma->vm_end - vma->vm_start;
+ if (vm_len >> PAGE_SHIFT > pages)
+ return -EINVAL;
+
+ /* Ok, let it rip */
+ return io_remap_pfn_range(vma, vma->vm_start, pfn, vm_len, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(vm_iomap_memory);
+
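A typical caller is a driver mmap handler that hands its whole aperture to vm_iomap_memory() and lets the helper validate the requested offset and length. A hedged sketch with hypothetical names (mydrv_state, phys_base, phys_len are illustrative, not a real driver):

	static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct mydrv_state *st = file->private_data;

		/* offset (vm_pgoff) and size checks happen inside the helper */
		return vm_iomap_memory(vma, st->phys_base, st->phys_len);
	}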
static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
unsigned long addr, unsigned long end,
pte_fn_t fn, void *data)
page = alloc_zeroed_user_highpage_movable(vma, address);
if (!page)
goto oom;
+ /*
+ * The memory barrier inside __SetPageUptodate makes sure that
+ * preceding stores to the page contents become visible before
+ * the set_pte_at() write.
+ */
__SetPageUptodate(page);
if (mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))
static int run_dir(const char *d, const char *perf)
{
+ char v[] = "-vvvvv";
+ int vcnt = min(verbose, (int) sizeof(v) - 1);
char cmd[3*PATH_MAX];
- snprintf(cmd, 3*PATH_MAX, PYTHON " %s/attr.py -d %s/attr/ -p %s %s",
- d, d, perf, verbose ? "-v" : "");
+ if (verbose)
+ vcnt++;
+
+ snprintf(cmd, 3*PATH_MAX, PYTHON " %s/attr.py -d %s/attr/ -p %s %.*s",
+ d, d, perf, vcnt, v);
return system(cmd);
}
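
The %.*s precision trick slices the static "-vvvvv" to match the global verbosity, forwarding up to five -v flags to attr.py: verbose = 0 passes an empty string, verbose = 1 gives vcnt = 2 and "-v", verbose = 2 gives "-vv", and anything from 5 up saturates at "-vvvvv".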
!lstat(path_perf, &st))
return run_dir(path_dir, path_perf);
- fprintf(stderr, " (ommitted)");
+ fprintf(stderr, " (omitted)");
return 0;
}
#include "evsel.h"
#include "evlist.h"
#include "sysfs.h"
-#include "debugfs.h"
+#include <lk/debugfs.h>
#include "tests.h"
#include <linux/hw_breakpoint.h>
struct perf_evlist *evlist;
int ret;
- evlist = perf_evlist__new(NULL, NULL);
+ evlist = perf_evlist__new();
if (evlist == NULL)
return -ENOMEM;
ret = stat(path, &st);
if (ret) {
- pr_debug("ommiting PMU cpu events tests\n");
+ pr_debug("omitting PMU cpu events tests\n");
return 0;
}