The conn->smp_chan pointer can be NULL if SMP PDUs arrive at unexpected
moments. To avoid NULL pointer dereferences the code should check for
this and disconnect if an unexpected SMP PDU arrives. This patch fixes
the issue by adding a check for conn->smp_chan for all PDUs except the
pairing request and security request (which are the first PDUs to
arrive and initialize the SMP context).
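Sketched in C, the guard described above looks roughly like the following; the constant and field names are illustrative rather than lifted from the actual patch:

    /* Sketch: reject any SMP PDU other than Pairing Request or Security
     * Request if the SMP context has not been allocated yet.  Constant
     * names are illustrative. */
    if (!conn->smp_chan && code != SMP_CMD_PAIRING_REQ &&
        code != SMP_CMD_SECURITY_REQ) {
        BT_ERR("Unexpected SMP command 0x%02x, disconnecting", code);
        err = -EOPNOTSUPP;
        goto done;   /* error path tears down the connection */
    }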
Don't access uninitialized work-queue when removing device.
The work queue is initialized only if the device is multi-queue, so
don't call cancel_work unless this is a multi-queue device.
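A minimal sketch of that guard; the structure and field names here are hypothetical, not taken from the driver:

    /* Sketch: the work item only exists for multi-queue devices, so only
     * cancel it in that case. */
    static void my_remove(struct my_dev *dev)
    {
        if (dev->multiqueue)
            cancel_work_sync(&dev->config_work);
        /* ... rest of the teardown ... */
    }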
For BUCK10 the control registers are wrongly set to the BUCK9 control
registers. This patch corrects the control registers for BUCK10.
Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
of_regulator_match() saves some dynamically allocated state into the
match table that's passed to it. By implementation and not contract, for
each match table entry, if non-NULL state is already present,
of_regulator_match() will not overwrite it. of_regulator_match() is
typically called each time a regulator is probe()d. This means it is
called with the same match table over and over again if a regulator
triggers deferred probe. This results in stale, kfree()d data being left
in the match table from probe to probe, which causes a variety of crashes
or use of invalid data.
Explicitly free all output state from of_regulator_match() before
generating new results in order to avoid this.
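A sketch of that cleanup, assuming the per-entry state is the init_data and of_node fields of struct of_regulator_match; the exact placement inside of_regulator_match() is an assumption:

    /* Sketch: forget anything left over from a previous, deferred probe
     * before filling the match table again. */
    for (unsigned int i = 0; i < num_matches; i++) {
        matches[i].init_data = NULL;
        matches[i].of_node = NULL;
    }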
Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The correct value for the minimal voltage of the ldo10 output is
950000 uV. This patch fixes the typo introduced by patch
adf6178ad5552a7f2f742a8c85343c50 ("regulator: max8998: Use uV in
voltage_map_desc"), which solves the broken probe of max8998 in
v3.8-rc4.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kernel commits 41affd5 and 6539306 changed the locking in rtl_lps_leave()
from a spinlock to a mutex by doing the calls indirectly from a work queue
to reduce the time that interrupts were disabled. This change was fine for
most systems; however a scheduling while atomic bug was reported in
https://bugzilla.redhat.com/show_bug.cgi?id=903881. The backtrace indicates
that routine rtl_is_special(), which calls rtl_lps_leave() in three places,
was entered in atomic context. These direct calls are replaced by putting a
request on the appropriate work queue.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Reported-and-tested-by: Nathaniel Doherty <ntdoherty@gmail.com> Cc: Nathaniel Doherty <ntdoherty@gmail.com> Cc: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch (as1653) fixes a bug in ehci-hcd. Unlike iTD entries, an
siTD entry in the periodic schedule may not complete until the frame
after the one it belongs to. Consequently, when scanning the periodic
schedule it is necessary to start with the frame _preceding_ the one
where the previous scan ended.
Not doing this properly can result in memory leaks and failures to
complete isochronous URBs.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-and-tested-by: Andy Leiserson <andy@leiserson.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the xHCI driver is not available, actively switch the ports to EHCI
mode since some BIOSes leave them in xHCI mode where they would
otherwise appear dead. This was discovered on a Dell Optiplex 7010,
but it's possible other systems could be affected.
This should be backported to kernels as old as 3.0, that contain the
commit 69e848c2090aebba5698a1620604c7dccb448684 "Intel xhci: Support
EHCI/xHCI port switching."
Signed-off-by: David Moore <david.moore@gmail.com> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch (as1640) fixes a memory leak in xhci-hcd. The urb_priv
data structure isn't always deallocated in the handle_tx_event()
routine for non-control transfers. The patch adds a kfree() call so
that all paths end up freeing the memory properly.
This patch should be backported to kernels as old as 2.6.36, that
contain the commit 8e51adccd4c4b9ffcd509d7f2afce0a906139f75 "USB: xHCI:
Introduce urb_priv structure"
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Reported-and-tested-by: Martin Mokrejs <mmokrejs@fold.natur.cuni.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To calculate the TD size for a particular TRB in an isoc TD, we need to
know the endpoint's max packet size. Isochronous endpoints also encode
the number of additional service opportunities in their wMaxPacketSize
field. The TD size calculation did not mask off those bits before using
the field. This resulted in incorrect TD size information for
isochronous TRBs when an URB frame buffer crossed a 64KB boundary.
For example:
- an isoc endpoint has 2 additional service opportunities and
a max packet size of 1020 bytes
- a frame transfer buffer contains 3060 bytes
- one frame buffer crosses a 64KB boundary, and must be split into
one 1276 byte TRB, and one 1784 byte TRB.
The TD size is the number of packets that remain to be transferred
for a TD after processing all the max packet sized packets in the
current TRB and all previous TRBs.
For this TD, the number of packets to be transferred is (3060 / 1020),
or 3. The first TRB contains 1276 bytes, which means it contains one
full packet, and a 256 byte remainder. After processing all the max
packet-sized packets in the first TRB, the host will have 2 packets left
to transfer.
The old code would calculate the TD size for the first TRB as:
  total packet count = DIV_ROUND_UP(TD length, endpoint wMaxPacketSize)
  TD size = total packet count - (first TRB length / endpoint wMaxPacketSize)
using the raw, unmasked wMaxPacketSize value in both divisions.
Fix this by masking off the number of additional service opportunities
in the wMaxPacketSize field.
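As a stand-alone illustration of the arithmetic above (the 0x7ff mask for the low 11 bits of wMaxPacketSize follows the USB encoding; this is a sketch, not the driver's code):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned int wMaxPacketSize = 1020 | (2 << 11); /* 2 extra opportunities */
        unsigned int td_len = 3060, first_trb_len = 1276;

        /* Buggy: the divisor still contains the additional-opportunity bits. */
        unsigned int bad = DIV_ROUND_UP(td_len, wMaxPacketSize)
                           - first_trb_len / wMaxPacketSize;

        /* Fixed: mask down to the real max packet size first. */
        unsigned int maxp = wMaxPacketSize & 0x7ff;
        unsigned int good = DIV_ROUND_UP(td_len, maxp) - first_trb_len / maxp;

        printf("TD size: buggy=%u fixed=%u\n", bad, good); /* buggy=1 fixed=2 */
        return 0;
    }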
This patch should be backported to stable kernels as old as 3.0, that
contain the commit 4da6e6f247a2601ab9f1e63424e4d944ed4124f3 "xhci 1.0:
Update TD size field format." It may not apply well to kernels older
than 3.2 because of commit 29cc88979a8818cd8c5019426e945aed118b400e
"USB: use usb_endpoint_maxp() instead of le16_to_cpu()".
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
An isochronous TD is comprised of one isochronous TRB chained to zero or
more normal TRBs. Only the isoc TRB has the TBC and TLBPC fields. The
normal TRBs must set those fields to zeroes. The code was setting the
TBC and TLBPC fields for both isoc and normal TRBs. Fix this.
This should be backported to stable kernels as old as 3.0, that contain
the commit b61d378f2da41c748aba6ca19d77e1e1c02bcea5 " xhci 1.0: Set
transfer burst last packet count field."
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1. Optimize the match rules with a new macro for Huawei USB storage
devices, to avoid loading the USB storage driver for the modem
interface of Huawei devices.
2. Add support for the new switch command of new Huawei USB dongles.
USB 3.0 devices define function remote wakeup, which applies only to the
interface recipient rather than the device recipient. This is different
from the USB 2.0 remote wakeup feature, which is defined for the device
recipient. According to USB 3.0 spec section 9.4.5, function remote
wakeup can be modified by SetFeature() requests using the
FUNCTION_SUSPEND feature selector. This patch uses the correct way to
disable a USB 3.0 device's function remote wakeup after a suspend error
and when resuming.
This should be backported to kernels as old as 3.4, that contain the
commit 623bef9e03a60adc623b09673297ca7a1cdfb367 "USB/xhci: Enable remote
wakeup for USB3 devices."
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com> Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch (as1654) fixes a very old bug in ehci-hcd, connected with
scheduling of periodic split transfers. The calculations for
full/low-speed bus usage are all carried out after the correction for
bit-stuffing has been applied, but the values in the max_tt_usecs
array assume it hasn't been. The array should allow for allocation of
up to 90% of the bus capacity, which is 900 us, not 780 us.
The symptom caused by this bug is that any isochronous transfer to a
full-speed device with a maxpacket size larger than about 980 bytes is
always rejected with a -ENOSPC error.
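In table form, the change implied by those numbers looks like this; the exact array contents in ehci-sched.c are an assumption inferred from the 780 us vs. 900 us figures above:

    /* Old budget: 6 * 125 + 30 = 780 us of full/low-speed time per frame. */
    static const unsigned char max_tt_usecs_old[] = {
        125, 125, 125, 125, 125, 125, 30, 0
    };

    /* Corrected budget: 7 * 125 + 25 = 900 us, i.e. 90% of a 1000 us frame. */
    static const unsigned char max_tt_usecs_new[] = {
        125, 125, 125, 125, 125, 125, 125, 25
    };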
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch (as1652) fixes a long-standing bug in ehci-hcd. The driver
relies on status polls to know when to stop port-resume signalling.
It uses the root-hub status timer to schedule these status polls. But
when the driver for the root hub is resumed, the timer is rescheduled
to go off immediately -- before the port is ready. When this happens
the timer does not get re-enabled, which prevents the port resume from
finishing until some other event occurs.
The symptom is that when a new device is plugged in, it doesn't get
recognized or enumerated until lsusb is run or something else happens.
The solution is to re-enable the root-hub status timer after every
status poll while a port resume is in progress.
This bug hasn't surfaced before now because we never used to try to
suspend the root hub in the middle of a port resume (except by
coincidence).
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-and-tested-by: Norbert Preining <preining@logic.at> Tested-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch (as1648) fixes a regression affecting nVidia EHCI
controllers. Evidently they don't like to have more than one async QH
unlinked at a time. I can't imagine how they manage to mess it up,
but at least one of them does.
The patch changes the async unlink logic in two ways:
Each time an IAA cycle is started, only the first QH on the
async unlink list is handled (rather than all of them).
Async QHs do not all get unlinked as soon as they have been
empty for long enough. Instead, only the last one (i.e., the
one that has been on the schedule the longest) is unlinked,
and then only if no other unlinks are in progress at the time.
This means that when multiple QHs are empty, they won't be unlinked as
quickly as before. That's okay; it won't affect correct operation of
the driver or add an excessive load. Multiple unlinks tend to be
relatively rare in any case.
This patch (as1647) attempts to work around a problem that seems to
affect some nVidia EHCI controllers. They sometimes take a very long
time to turn off their async or periodic schedules. I don't know if
this is a result of other problems, but in any case it seems wise not
to depend on schedule enables or disables taking effect in any
specific length of time.
The patch removes the existing 20-ms timeout for enabling and
disabling the schedules. The driver will now continue to poll the
schedule state at 1-ms intervals until the controller finally decides
to obey the most recent command issued by the driver. Just in case
this hides a problem, a debugging message will be logged if the
controller takes longer than 20 polls.
I don't know if this will actually fix anything, but it can't hurt.
The RTC control register should be enabled during initialization.
Without this patch, I failed to enable the RTC on the Hisilicon Hi3620
SoC; the register mapping section of the RTC always reads as zero. I
suspect that ST already enables this register in the bootloader, so
they do not hit this issue.
Previously the alarm event was not propagated into the RTC subsystem.
By adding a call to rtc_update_irq, this fixes a timeout problem with
the hwclock utility.
Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Cc: Alessandro Zummo <a.zummo@towertech.it> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When setting a huge PTE, besides calling pte_mkhuge(), we also need to
call arch_make_huge_pte(), which we indeed do in make_huge_pte(), but we
forget to do in hugetlb_change_protection() and remove_migration_pte().
Signed-off-by: Zhigang Lu <zlu@tilera.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are situations when GC can work alone in the background, without
any other filesystem activity, for a significant time.
The nilfs_clean_segments() method calls nilfs_segctor_construct(), which
updates the superblocks when the NILFS_SC_SUPER_ROOT and
THE_NILFS_DISCONTINUED flags are set. But when GC is working alone,
nilfs_clean_segments() is called with the THE_NILFS_DISCONTINUED flag
unset. As a result, the superblocks are not updated during all this
time, and in the case of SPOR the superblocks keep very old values of
the last super root placement.
SYMPTOMS:
Trying to mount a NILFS2 volume after SPOR in such an environment ends
with a very long mounting time (it can reach several hours in some
cases).
REPRODUCING PATH:
1. Use an external USB HDD, disable automount, and do not perform any
   additional filesystem activity on the NILFS2 volume.
2. Generate a temporary file of about 100 - 500 GB (for example,
   dd if=/dev/zero of=<file_name> bs=1073741824 count=200). The size of
   the file determines how long GC runs.
3. Delete the file.
4. Start GC manually with the command "nilfs-clean -p 0". When GC is
   started this way, the superblocks are updated only once, at the end.
   So, to simulate SPOR, wait some time (15 - 40 minutes) and simply
   switch off the USB HDD manually.
5. Switch the USB HDD back on and try to mount the NILFS2 volume. As a
   result, mounting the NILFS2 volume will take a very long time.
REPRODUCIBILITY: 100%
FIX:
This patch adds a check for whether the superblocks need to be updated
and sets the THE_NILFS_DISCONTINUED flag before the
nilfs_clean_segments() call.
Commit cdeadd712f52b16a9285386d61ee26fd14eb4085 (mtd: nand: davinci: add OF
support for davinci nand controller) has never been really build tested with
the driver as a module. When the driver is built-in, the missing semicolon
after structure initializer is "compensated" by MODULE_DEVICE_TABLE() macro
being empty and so the initializer using the trailing semicolon on the next
line; when the driver is built as a module, compilation error ensues, and as
the 'davinci_all_defconfig' has the NAND driver modular, this error prevents
DaVinci family kernel from building...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the system has multiple domains do_sched_rt_period_timer()
can run on any CPU and may iterate over all rt_rq in
cpu_online_mask. This means when balance_runtime() is run for a
given rt_rq that rt_rq may be in a different rd than the current
processor. Thus if we use smp_processor_id() to get rd in
do_balance_runtime() we may borrow runtime from a rt_rq that is
not part of our rd.
This changes do_balance_runtime to get the rd from the passed in
rt_rq ensuring that we borrow runtime only from the correct rd
for the given rt_rq.
This fixes a BUG at kernel/sched/rt.c:687! in __disable_runtime when
we try to reclaim runtime lent to another rt_rq but the runtime has
been lent to a rt_rq in another rd.
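Expressed as a sketch (not the literal diff), the change in do_balance_runtime() amounts to:

    /* Before (sketch): the root domain of whichever CPU the timer ran on. */
    struct root_domain *rd_before = cpu_rq(smp_processor_id())->rd;

    /* After (sketch): the root domain of the runqueue that owns this rt_rq. */
    struct root_domain *rd_after = rq_of_rt_rq(rt_rq)->rd;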
For some reason they didn't get replaced so far by their paravirt
equivalents, resulting in code being run with interrupts disabled that
doesn't expect it (causing, in the observed case, a BUG_ON() to
trigger) when syscall auditing is enabled.
David (Cc-ed) came up with an identical fix, so likely this can
be taken to count as an ack from him.
Reported-by: Peter Moody <pmoody@google.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Link: http://lkml.kernel.org/r/5108E01902000078000BA9C5@nat28.tlf.novell.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Tested-by: Peter Moody <pmoody@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If the requested number of DWs on the ring is larger than
the size of the ring itself, return an error.
In testing with large VM updates, we've seen crashes when we try to
allocate more space on the ring than the total size of the ring
without checking.
This prevents the crash but for large VM updates or bo moves
of very large buffers, we will need to break the transaction
down into multiple batches. I have patches to use IBs for
the next kernel.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Force the crtc mem requests on/off immediately rather
than waiting for the double buffered updates to kick in.
Seems we miss the update in certain conditions. Also
handle the DCE6 case.
Signed-off-by: Christopher Staite <chris@yourdreamnet.co.uk> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The ASM version of the hash computation function was truncating the
upper bit. Make the ASM version similar to the hpt_hash() function by
removing the masking of the VSID bits.
Without this patch, we observed a hang during bootup due to page fault
requests not being satisfied correctly. The fault handler used wrong
hash values to update the HPTE, hence we kept looping on the page
fault.
net/netfilter/xt_CT.c: In function ‘xt_ct_tg_check_v1’:
net/netfilter/xt_CT.c:250:6: warning: ‘ret’ may be used uninitialized in this function [-Wmaybe-uninitialized]
net/netfilter/xt_CT.c: In function ‘xt_ct_tg_check_v0’:
net/netfilter/xt_CT.c:112:6: warning: ‘ret’ may be used uninitialized in this function [-Wmaybe-uninitialized]
CAI Qian [Fri, 25 Jan 2013 03:50:09 +0000 (22:50 -0500)]
slub: assign refcount for kmalloc_caches
This is for stable-3.7.y only and this problem has already been solved
in mainline through some slab/slub re-work which isn't suitable to
backport here. See create_kmalloc_cache() in mm/slab_common.c there.
commit cce89f4f6911286500cf7be0363f46c9b0a12ce0 ('Move kmem_cache
refcounting to common code') moves some refcount manipulation code to
common code. Unfortunately, it also removed the refcount assignment for
the kmalloc_caches, so their refcount is initially 0. This leads to an
erroneous situation.
Paul Hargrove reported that when he creates an 8-byte kmem_cache and
destroys it, he encounters the message below:
'Objects remaining in kmalloc-8 on kmem_cache_close()'
The 8-byte kmem_cache is merged with the 8-byte kmalloc cache and the
refcount is increased by one, so the resulting refcount is 1. On
destroy, the refcount hits 0, kmem_cache_close() is executed, and the
error message is printed.
This patch assigns an initial refcount of 1 to the kmalloc_caches,
fixing this erroneous situation.
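A sketch of the stable-only fix, placed in the slub kmalloc cache setup path; the exact location is an assumption:

    /* Sketch: restore the initial reference that the removed common code
     * used to take, so merging and later destroying a cache cannot drop
     * the kmalloc cache's refcount to zero. */
    s->refcount = 1;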
Reported-by: Paul Hargrove <phhargrove@lbl.gov> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Joonsoo Kim <js1304@gmail.com> Signed-off-by: CAI Qian <caiqian@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
An earlier change stopped using a read of FORCEWAKE as the posting read
and started using something from the same cacheline instead. On the
bug reporter's machine this broke entering rc6 states after a
suspend/resume cycle. It turns out reading ECOBUS as posting read
worked fine, while GTFIFODBG did not, preventing RC6 states after
suspend/resume per the bug report referenced below. It's not entirely
clear why, but clearly GTFIFODBG was nowhere near the same cacheline
or address range as FORCEWAKE.
Trying out various registers for posting reads showed that all tested
registers for which NEEDS_FORCE_WAKE() (in i915_drv.c) returns true
work. Conversely, most (but not quite all) registers for which
NEEDS_FORCE_WAKE() returns false do not work. Details in the referenced
bug.
Based on the above, add posting reads on ECOBUS where GTFIFODBG was
previously relied on.
In true cargo cult spirit, add posting reads for FORCEWAKE_VLV writes as
well, but instead of ECOBUS, use FORCEWAKE_ACK_VLV which is in the same
address range as FORCEWAKE_VLV.
v2: Add more details to the commit message. No functional changes.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=52411 Reported-and-tested-by: Alexander Bersenev <bay@hackerdom.ru> CC: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org
[danvet: add cc: stable and make the commit message a bit clearer that
this is a regression fix and what exactly broke.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: CAI Qian <caiqian@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arptables 0.0.4 (released on 10th Jan 2013) supports calling the
CLASSIFY target, but on adding a rule to the wrong chain, the
diagnostic is as follows:
# arptables -A INPUT -j CLASSIFY --set-class 0:0
arptables: Invalid argument
# dmesg | tail -n1
x_tables: arp_tables: CLASSIFY target: used from hooks
PREROUTING, but only usable from INPUT/FORWARD
This is incorrect, since xt_CLASSIFY.c does specify
(1 << NF_ARP_OUT) | (1 << NF_ARP_FORWARD).
This patch corrects the x_tables diagnostic message to print the
proper hook names for the NFPROTO_ARP case.
Affects all kernels down to and including v2.6.31.
Signed-off-by: Jan Engelhardt <jengelh@inai.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
canqun zhang reported that we're hitting BUG_ON in the
nf_conntrack_destroy path when calling kfree_skb while
rmmod'ing the nf_conntrack module.
Currently, the nf_ct_destroy hook is being set to NULL in the
destroy path of conntrack.init_net. However, this is a problem
since init_net may be destroyed before any other existing netns
(we cannot assume any specific ordering while releasing existing
netns according to what I read in recent emails).
Thanks to Gao feng for initial patch to address this issue.
Allocating the xt_recent hash table with kmalloc() can require
high-order pages, which may fail. It also wastes about half the
allocated space because of kmalloc() power-of-two roundups and
struct recent_table layout.
Use vmalloc() instead to save space and be less prone to allocation
errors when memory is fragmented.
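A sketch of the allocation change; the field and parameter names follow xt_recent, but treat the exact expression as illustrative:

    /* Sketch: was kzalloc(sizeof(*t) + sizeof(t->iphash[0]) * ip_list_hash_size,
     * GFP_KERNEL), which needs physically contiguous high-order pages and is
     * rounded up to a power of two.  vzalloc() avoids both problems; the free
     * path must then use vfree(). */
    t = vzalloc(sizeof(*t) + sizeof(t->iphash[0]) * ip_list_hash_size);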
Reported-by: Miroslav Kratochvil <exa.exa@gmail.com> Reported-by: Dave Jones <davej@redhat.com> Reported-by: Harald Reindl <h.reindl@thelounge.net> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
recent_net_exit() is called before recent_mt_destroy() in the
destroy path of network namespaces. Make sure there are no entries
in the parent proc entry xt_recent before removing it.
Signed-off-by: Vitaly E. Lavrov <lve@guap.ru> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Two packets may race to create the same entry in the hashtable;
double check whether this packet lost the race. This double checking
only happens in the path of the packet that creates the hashtable for
the first time.
Note that, with this patch, no packet drops occur if the race happens.
Florian Westphal reported that the removal of the NOTRACK target
(9655050 netfilter: remove xt_NOTRACK) is breaking some existing
setups.
That removal had been scheduled long ago, as described in
Documentation/feature-removal-schedule.txt:
What: xt_NOTRACK
Files: net/netfilter/xt_NOTRACK.c
When: April 2011
Why: Superseded by xt_CT
Still, people may not have noticed, or may have decided to stick to an
old iptables version. I agree with him that a more conservative
approach, emitting a printk to warn users for some time, is less
aggressive.
Current iptables 1.4.16.3 already contains the aliasing support
that makes it point to the CT target, so upgrading would fix it.
Still, the policy so far has been to avoid pushing our users to
upgrade.
As a solution, this patch restores the NOTRACK target inside the CT
target, and it now emits a warning.
In (0c36b48 netfilter: nfnetlink_log: fix mac address for 6in4 tunnels)
the include file that defines ARPHRD_SIT was missing. This went
unnoticed during my tests (I did not hit this problem here).
net/netfilter/nfnetlink_log.c: In function '__build_packet_message':
net/netfilter/nfnetlink_log.c:494:25: error: 'ARPHRD_SIT' undeclared (first use in this function)
net/netfilter/nfnetlink_log.c:494:25: note: each undeclared identifier is reported only once for each function it appears in
Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
For tunnelled ipv6in4 packets, the LOG target (xt_LOG.c) adjusts
the start of the mac field to start at the ethernet header instead
of the ipv4 header for the tunnel. This patch makes what is passed
by the NFLOG target through nfnetlink conform to what the LOG target
does. Code borrowed from xt_LOG.c.
Signed-off-by: Bob Hockney <bhockney@ix.netcom.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
target: fix regression with dev_link_magic in target_fabric_port_link
This fixes a regression that only affects stable (not mainline): the
stable commit fdf9d86 incorrectly placed the dev->dev_link_magic check
before the *dev assignment in target_fabric_port_link(), due to fuzzy
automatic context adjustment during the back-porting.
Reported-by: Chris Boot <bootc@bootc.net> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: CAI Qian <caiqian@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Chinner [Fri, 25 Jan 2013 18:45:07 +0000 (12:45 -0600)]
xfs: fix periodic log flushing
[Please take this patch for -stable in kernels 3.5-3.7. It doesn't have an
equivalent upstream commit because the code was removed before the bug was
discovered. See f661f1e0bf50 and 7e18530bef6a.]
There is a logic inversion in xfssyncd_worker() which means that the
log is not periodically forced or idled correctly. This means that
metadata changes aggregated in memory do not get flushed in a timely
manner, and hence if the filesystem is not cleanly unmounted those
changes can be lost. This loss can manifest itself even hours after
the changes were made if the filesystem is left to idle without a
sync() occurring between the last modification and the
crash/shutdown occurring.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit b836c99fd6c9 (ipv6: unify conntrack reassembly expire code with
the standard one) uses the standard IPv6 reassembly code
(ip6_expire_frag_queue) to handle conntrack reassembly expiration.
In ip6_expire_frag_queue, dev_get_by_index_rcu is invoked to find out
which device received the expired packet, so we must save the ifindex
when nf_conntrack gets the packet.
With this patch applied, I can see ICMP Time Exceeded sent from the
receiver when the sender sends out only one of two fragments of an
IPv6 packet.
Signed-off-by: Haibo Xi <haibbo@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The problem occurs when iptables constructs the tcp reset packet.
It doesn't initialize the pointer to the tcp header within the skb.
When the skb is passed to the ixgbe driver for transmit, the ixgbe
driver attempts to access the tcp header and crashes.
Currently, other drivers (such as our 1G e1000e or igb drivers) don't
access the tcp header on transmit unless the TSO option is turned on.
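A sketch of the kind of initialization being talked about, at the point where the reset packet's TCP header is built; the placement and surrounding variables are illustrative, not the actual patch:

    /* Sketch: record where the transport (TCP) header starts in the newly
     * constructed skb so that drivers which look at it on transmit see a
     * valid pointer. */
    skb_reset_transport_header(nskb);
    tcph = (struct tcphdr *)skb_put(nskb, sizeof(struct tcphdr));
    memset(tcph, 0, sizeof(*tcph));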
If one (but not both) of the allocations of p->chunks[].kpage[]
in radeon_cs_parser_init fails, the error path will free
the successfully allocated page, but leave a stale pointer
value in the kpage[] field. This will later cause a
double-free when radeon_cs_parser_fini is called.
This patch fixes the issue by forcing both pointers to NULL
after kfree in the error path.
The circumstances under which the problem happens are very
rare. The card must be AGP and the system must run out of
kmalloc area just at the right time so that one allocation
succeeds, while the other fails.
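A sketch of the error-path handling described above (chunk index and surrounding code are illustrative):

    p->chunks[i].kpage[0] = kmalloc(PAGE_SIZE, GFP_KERNEL);
    p->chunks[i].kpage[1] = kmalloc(PAGE_SIZE, GFP_KERNEL);
    if (p->chunks[i].kpage[0] == NULL || p->chunks[i].kpage[1] == NULL) {
        /* kfree(NULL) is a no-op, so this handles either page failing;
         * clearing the pointers prevents a second free in parser_fini. */
        kfree(p->chunks[i].kpage[0]);
        kfree(p->chunks[i].kpage[1]);
        p->chunks[i].kpage[0] = NULL;
        p->chunks[i].kpage[1] = NULL;
        return -ENOMEM;
    }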
Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com> Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When _xfs_buf_find is passed an out of range address, it will fail
to find a relevant struct xfs_perag and oops with a null
dereference. This can happen when trying to walk a filesystem with a
metadata inode that has a partially corrupted extent map (i.e. the
block number returned is corrupt, but is otherwise intact) and we
try to read from the corrupted block address.
In this case, just fail the lookup. If it is readahead being issued,
it will simply not be done, but if it is real read that fails we
will get an error being reported. Ideally this case should result
in an EFSCORRUPTED error being reported, but we cannot return an
error through xfs_buf_read() or xfs_buf_get() so this lookup failure
may result in ENOMEM or EIO errors being reported instead.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> Cc: CAI Qian <caiqian@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
efi.runtime_version is erroneously being set to the value of the
vendor's firmware revision instead of that of the implemented EFI
specification. We can't deduce which EFI functions are available based
on the revision of the vendor's firmware since the version scheme is
likely to be unique to each vendor.
What we really need to know is the revision of the implemented EFI
specification, which is available in the EFI System Table header.
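In code terms the fix is roughly the following; the field names come from the EFI system table definitions, but treat the exact assignment site as an assumption:

    /* Sketch: the EFI spec revision lives in the system table header;
     * fw_revision is the vendor's own firmware version and says nothing
     * about which EFI interfaces are implemented. */
    efi.runtime_version = systab->hdr.revision;   /* was: systab->fw_revision */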
Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: Seiji Aguchi <seiji.aguchi@hds.com> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Update efi_call_phys_prelog to install an identity mapping of all
available memory. This corrects a bug on very large systems with more
than 512 GB, in which the BIOS would not be able to access addresses
above 512 GB that were not in the mapping.
If the bootloader calls the EFI handover entry point as a standard function
call, then it'll have a return address on the stack. We need to pop that
before calling efi_main(), or the arguments will all be out of position on
the stack.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When booting under OVMF we have precisely one GOP device, and it
implements the ConOut protocol.
We break out of the loop when we look at it... and then promptly abort
because 'first_gop' never gets set. We should set first_gop *before*
breaking out of the loop. Yes, it doesn't really mean "first" any more,
but that doesn't matter. It's only a flag to indicate that a suitable
GOP was found.
In fact, we'd do just as well to initialise 'width' to zero in this
function, then just check *that* instead of first_gop. But I'll do the
minimal fix for now (and for stable@).
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It has been reported that running this driver on some Samsung laptops
with EFI can cause those machines to become bricked as detailed in the
following report,
There have also been reports of this driver causing Machine Check
Exceptions on recent EFI-enabled Samsung laptops,
https://bugzilla.kernel.org/show_bug.cgi?id=47121
So disable it if booting from EFI since this driver relies on
grovelling around in the BIOS memory map which isn't going to work.
Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: Corentin Chary <corentincj@iksaif.net> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Colin Ian King <colin.king@canonical.com> Cc: Steve Langasek <steve.langasek@canonical.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Originally 'efi_enabled' indicated whether a kernel was booted from
EFI firmware. Over time its semantics have changed, and it now
indicates whether or not we are booted on an EFI machine with
bit-native firmware, e.g. 64-bit kernel with 64-bit firmware.
The immediate motivation for this patch is the bug report at,
which details how running a platform driver on an EFI machine that is
designed to run under BIOS can cause the machine to become
bricked. Also, the following report,
https://bugzilla.kernel.org/show_bug.cgi?id=47121
details how running said driver can also cause Machine Check
Exceptions. Drivers need a new means of detecting whether they're
running on an EFI machine, as sadly the expression,
if (!efi_enabled)
hasn't been a sufficient condition for quite some time.
Users actually want to query 'efi_enabled' for different reasons -
what they really want access to is the list of available EFI
facilities.
For instance, the x86 reboot code needs to know whether it can invoke
the ResetSystem() function provided by the EFI runtime services, while
the ACPI OSL code wants to know whether the EFI config tables were
mapped successfully. There are also checks in some of the platform
driver code to simply see if they're running on an EFI machine (which
would make it a bad idea to do BIOS-y things).
This patch is a prereq for the samsung-laptop fix patch.
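After this rework, callers ask for the specific facility they need; a sketch of the resulting usage (the facility names EFI_BOOT, EFI_CONFIG_TABLES and EFI_RUNTIME_SERVICES are the ones this series introduces, while the call sites below are illustrative):

    /* Sketch: query the specific facility rather than a single flag. */
    if (efi_enabled(EFI_RUNTIME_SERVICES)) {
        /* safe to call EFI runtime services such as ResetSystem() */
    }
    if (efi_enabled(EFI_CONFIG_TABLES)) {
        /* the EFI config tables were mapped successfully */
    }
    if (!efi_enabled(EFI_BOOT)) {
        /* not booted from EFI firmware: BIOS-y things are acceptable */
    }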
Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: David Airlie <airlied@linux.ie> Cc: Corentin Chary <corentincj@iksaif.net> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Olof Johansson <olof@lixom.net> Cc: Peter Jones <pjones@redhat.com> Cc: Colin Ian King <colin.king@canonical.com> Cc: Steve Langasek <steve.langasek@canonical.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Konrad Rzeszutek Wilk <konrad@kernel.org> Cc: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
At the moment the MSR driver only relies upon file system
checks. This means that anything as root with any capability set
can write to MSRs. Historically that wasn't very interesting but
on modern processors the MSRs are such that writing to them
provides several ways to execute arbitrary code in kernel space.
Sample code and documentation on doing this are circulating, and MSR
attacks are already used in Windows 64-bit rootkits.
In the Linux case you still need to be able to open the device
file so the impact is fairly limited and reduces the security of
some capability and security model based systems down towards
that of a generic "root owns the box" setup.
Therefore they should require CAP_SYS_RAWIO to prevent an
elevation of capabilities. The impact of this is fairly minimal
on most setups because they don't have heavy use of
capabilities. Those using SELinux, SMACK or AppArmor rules might
want to consider if their rulesets on the MSR driver could be
tighter.
Signed-off-by: Alan Cox <alan@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
I get the following warning with v3.7, once or twice a day:
[ 2235.186027] WARNING: at /mnt/sda7/kernel/linux/arch/x86/kernel/apic/ipi.c:109 default_send_IPI_mask_logical+0x2f/0xb8()
As explained by Linus as well:
|
| Once we've done the "list_add_rcu()" to add it to the
| queue, we can have (another) IPI to the target CPU that can
| now see it and clear the mask.
|
| So by the time we get to actually send the IPI, the mask might
| have been cleared by another IPI.
|
This patch also fixes a system hang problem, if the data->cpumask
gets cleared after passing this point:
if (WARN_ONCE(!mask, "empty IPI mask"))
return;
then the problem in commit 83d349f35e1a ("x86: don't send an IPI to
the empty set of CPU's") will happen again.
Signed-off-by: Wang YanQing <udknight@gmail.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Jan Beulich <jbeulich@suse.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: peterz@infradead.org Cc: mina86@mina86.org Cc: srivatsa.bhat@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/20130126075357.GA3205@udknight
[ Tidied up the changelog and the comment in the code. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patch to add the Formosa Industrial Computing, Inc. Infrared Receiver
[IR605A/Q] to hid-ids.h and hid-quirks.c. This IR receiver causes about a 10
second timeout when the usbhid driver attempts to initialze the device. Adding
this device to the quirks list with HID_QUIRK_NO_INIT_REPORTS removes the
delay.
NFS4ERR_DELAY is a legal reply when we call DESTROY_SESSION. It
usually means that the server is busy handling an unfinished RPC
request. Just sleep for a second and then retry.
We also need to be able to handle the NFS4ERR_BACK_CHAN_BUSY return
value. If the NFS server has outstanding callbacks, we just want to
similarly sleep & retry.
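A sketch of the retry behaviour being described; the error codes are the real NFSv4.1 ones, while the helper name and loop placement are illustrative:

    /* Sketch: if the server is busy (NFS4ERR_DELAY) or still has
     * outstanding callbacks (NFS4ERR_BACK_CHAN_BUSY), wait a second and
     * send DESTROY_SESSION again. */
    do {
        status = _nfs4_proc_destroy_session(session);
        if (status == -NFS4ERR_DELAY || status == -NFS4ERR_BACK_CHAN_BUSY)
            ssleep(1);
    } while (status == -NFS4ERR_DELAY || status == -NFS4ERR_BACK_CHAN_BUSY);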
If walking the list in nfs4[01]_walk_client_list fails, then the most
likely explanation is that the server dropped the clientid before we
actually managed to confirm it. As long as our nfs_client is the very
last one in the list to be tested, the caller can be assured that this
is the case when the final return value is NFS4ERR_STALE_CLIENTID.
Reported-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Chuck Lever <chuck.lever@oracle.com> Tested-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The reference counting in nfs4_init_client assumes wrongly that it
is safe for nfs4_discover_server_trunking() to return a pointer to a
nfs_client prior to bumping the reference count.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Ben Greear <greearb@candelatech.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ensure that any setattr and getattr requests for junctions and/or
mountpoints are sent to the server. Ever since commit 0ec26fd0698 (vfs: automount should ignore LOOKUP_FOLLOW), we have
silently dropped any setattr requests to a server-side mountpoint.
For referrals, we have silently dropped both getattr and setattr
requests.
This patch restores the original behaviour for setattr on mountpoints,
and tries to do the same for referrals, provided that we have a
filehandle...
Currently, nfs_xdev_mount converts all errors from clone_server() to
ENOMEM, which can then leak to userspace (for instance to 'mount'). Fix that.
Also ensure that if nfs_fs_mount_common() returns an error, we
don't dprintk(0)...
DMAR support on g4x/gm45 integrated gpus seems to be totally busted.
So don't bother, but instead disable it by default to allow distros to
unconditionally enable DMAR support.
v2: Actually wire up the right quirk entry, spotted by Adam Jackson.
Note that according to intel marketing materials only g45 and gm45
support DMAR/VT-d. So we have reports for all relevant gen4 pci ids by
now. Still, keep all the other gen4 ids in the quirk table in case the
marketing stuff confused me again, which would not be the first time.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=51921
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=538163 Cc: Adam Jackson <ajax@redhat.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: stable@vger.kernel.org Acked-By: David Woodhouse <David.Woodhouse@intel.com> Tested-by: stathis <stathis@npcglib.org> Tested-by: Mihai Moldovan <ionic@ionic.de> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Mihai Moldovan <ionic@ionic.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The length parameter should be sizeof(req->name) - 1 because there is no
guarantee that the string provided by userspace will contain the trailing
'\0'.
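A sketch of the bounded copy being described; the destination shown is illustrative:

    /* Sketch: req->name comes from userspace and may not be
     * NUL-terminated, so copy at most sizeof(req->name) - 1 bytes.  The
     * destination buffer is zero-initialized, so the final byte stays
     * NUL. */
    strncpy(hid->name, req->name, sizeof(req->name) - 1);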
Can be easily reproduced by manually setting req->name to 128 non-zero
bytes prior to ioctl(HIDPCONNADD) and checking the device name setup on
input subsystem:
Signed-off-by: Chris Rattray <crattray@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For non-snoop mode, we fiddle with the page attributes of CORB/RIRB
and the position buffer, but also the ring buffers. The problem is
that the current code blindly assumes that the buffer is contiguous.
However, the ring buffers may be SG-buffers, thus a wrong vmapped
address is passed there, leading to Oops.
A Packard-Bell desktop machine gives no proper pin configuration from
BIOS. It's almost equivalent to the 6stack+fp standard config; just
take the existing fixup.
Commit 23caaf19b11e (ALSA: usb-mixer: Add support for Audio Class v2.0)
forgot to adjust the length check for UAC 2.0 feature unit descriptors.
This would make the code abort on encountering a feature unit without
per-channel controls, and thus prevented the driver to work with any
device having such a unit, such as the RME Babyface or Fireface UCX.
Reported-by: Florian Hanisch <fhanisch@uni-potsdam.de> Tested-by: Matthew Robbetts <wingfeathera@gmail.com> Tested-by: Michael Beer <beerml@sigma6audio.de> Cc: Daniel Mack <daniel@caiaq.de> Signed-off-by: Clemens Ladisch <clemens@ladisch.de> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chain swapping should only be enabled when the EEPROM chainmask is set to 5,
regardless of what the runtime chainmask is.
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fixes a reported CPU soft lockup where the tasklet tries to acquire the
lock and blocks while ath_prepare_reset (holding the lock) waits for it
to complete.
Reported-by: Robert Shade <robert.shade@gmail.com> Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The commit "ath9k: fix rx flush handling" added a deadlock that happens
because ath_rx_tasklet is called in a section that has already taken the
rx buffer lock.
It seems that the only purpose of the rxbuflock was a band-aid fix to the
reset vs rx tasklet race, which has been properly fixed in the commit
"ath9k: add a better fix for the rx tasklet vs rx flush race".
Now that the fix is in, we can safely remove the lock to avoid such issues.
Reported-by: Sujith Manoharan <c_manoha@qca.qualcomm.com> Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Right now the rx flush is not doing anything useful on AR9003+, as it only
works if the buffers in the rx FIFO have not been purged yet, as is done
by ath_stoprecv.
To fix this, always call ath_flushrecv from within ath_stoprecv before
the FIFO is emptied, but still after the hw receive path has been stopped.
This ensures that frames received (and ACKed by the hardware) shortly before
a reset will be seen by the software, which should improve A-MPDU session
stability.
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ensure that the rx tasklet is no longer running when entering the reset path.
Also remove the distinction between flush and no-flush frame processing.
If a frame has been received and ACKed by the hardware, the stack needs to see
it, so that the BA receive window does not go out of sync.
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>