git.karo-electronics.de Git - linux-beck.git/log
Mark Salter [Wed, 14 Sep 2016 22:32:29 +0000 (17:32 -0500)]
arm64: pmu: add fallback probe table

In preparation for ACPI support, add a pmu_probe_info table to
the arm_pmu_device_probe() call. This table is used when
probing in the absence of a devicetree node for the PMU.

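As an illustrative sketch (the entry values and init-function name here
are assumptions for illustration, not lifted from the patch), the
fallback table and its hook-up might look like:

  static const struct pmu_probe_info armv8_pmu_probe_table[] = {
          PMU_PROBE(0, 0, armv8_pmuv3_init),  /* cpuid/mask 0: match anything */
          { /* sentinel */ }
  };

  /* passed as the new third argument when no DT node is available */
  ret = arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids,
                             armv8_pmu_probe_table);
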
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Thu, 15 Sep 2016 09:14:41 +0000 (10:14 +0100)]
MAINTAINERS: Update ARM PMU PROFILING AND DEBUGGING entry

There are an increasing number of ARM SoC PMU drivers appearing for
things like interconnects, memory controllers and cache controllers.
Rather than have these handled on an ad-hoc basis, where SoC maintainers
each send their PMU drivers directly to arm-soc, let's take these into
drivers/perf/ and send a single pull request to arm-soc instead, much
like other subsystems.

This patch amends the ARM PMU MAINTAINERS entry to include all of
drivers/perf/ (currently just the ARM CPU PMU), changes Mark Rutland
from Reviewer to Maintainer so that he can help with the new tree, and
adds the device-tree binding to the list of maintained files.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
David A. Long [Mon, 12 Sep 2016 18:21:27 +0000 (14:21 -0400)]
arm64: Improve kprobes test for atomic sequence

Kprobes searches backwards a finite number of instructions to determine if
there is an attempt to probe a load/store exclusive sequence. It stops when
it hits the maximum number of instructions or a load or store exclusive.
However, this means it can run up past the beginning of the function and
start looking at literal constants. This has been shown to cause a false
positive and blocks insertion of the probe. To fix this, further limit the
backwards search to stop if it hits a symbol address from kallsyms. The
presumption is that this is the entry point to this code (particularly for
the common case of placing probes at the beginning of functions).

This also improves efficiency by not searching code that is not part of the
function. There is some possibility that the label might not denote the
entry path to the probed instruction, but the likelihood seems low, and
this is just another example of how kprobes users need to be careful
about what they are doing.

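The bound can be derived from kallsyms. A minimal sketch of the idea
(addr, max_backscan_insns and fn_entry are illustrative names, not
necessarily those used in the patch):

  unsigned long size = 0, offset = 0;
  kprobe_opcode_t *scan_start = addr - max_backscan_insns; /* old fixed limit */

  /* never search back past the start of the symbol containing addr */
  if (kallsyms_lookup_size_offset((unsigned long)addr, &size, &offset)) {
          kprobe_opcode_t *fn_entry = addr - (offset / sizeof(kprobe_opcode_t));

          if (fn_entry > scan_start)
                  scan_start = fn_entry;
  }
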
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Wed, 7 Sep 2016 10:07:10 +0000 (11:07 +0100)]
arm64/kvm: use alternative auto-nop

Make use of the new alternative_if and alternative_else_nop_endif and
get rid of our open-coded NOP sleds, making the code simpler to read.

Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved
out of the alternative sequence, and in the default case there will be
four additional NOPs executed.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Wed, 7 Sep 2016 10:07:09 +0000 (11:07 +0100)]
arm64: use alternative auto-nop

Make use of the new alternative_if and alternative_else_nop_endif and
get rid of our homebrew NOP sleds, making the code simpler to read.

Note that for cpu_do_switch_mm the ret has been moved out of the
alternative sequence, and in the default case there will be three
additional NOPs executed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Wed, 7 Sep 2016 10:07:08 +0000 (11:07 +0100)]
arm64: alternative: add auto-nop infrastructure

In some cases, one side of an alternative sequence is simply a number of
NOPs used to balance the other side. Keeping track of this manually is
tedious, and the presence of large chains of NOPs makes the code more
painful to read than necessary.

To ameliorate matters, this patch adds a new alternative_else_nop_endif,
which automatically balances an alternative sequence with a trivial NOP
sled.

In many cases, we would like a NOP sled in the default case, with
instructions patched in when a feature is present. To enable the NOPs
to be generated automatically for this case, this patch also adds a new
alternative_if, and updates alternative_else and alternative_endif to
work with either alternative_if or alternative_if_not.

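Usage then looks like the following sketch, where ARM64_HAS_SOME_FEATURE
is a placeholder capability, not a real one:

  alternative_if ARM64_HAS_SOME_FEATURE
          dmb     ish                     // executed only when the cap is set
  alternative_else_nop_endif              // default case: auto-generated NOPs
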
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: use new nops macro to generate nop sequences]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 6 Sep 2016 15:42:58 +0000 (16:42 +0100)]
arm64: lse: convert lse alternatives NOP padding to use __nops

The LSE atomics are implemented using alternative code sequences of
different lengths, and explicit NOP padding is used to ensure the
patching works correctly.

This patch converts the bulk of the LSE code over to using the __nops
macro, which makes it slightly clearer as to what is going on and also
consolidates all of the padding at the end of the various sequences.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 6 Sep 2016 15:40:23 +0000 (16:40 +0100)]
arm64: barriers: introduce nops and __nops macros for NOP sequences

NOP sequences tend to get used for padding out alternative sections
and uarch-specific pipeline flushes in errata workarounds.

This patch adds macros for generating these sequences both as inline
asm blocks and as strings suitable for embedding directly in other asm
blocks.

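A sketch of the two forms (the definitions in the patch may differ in
detail):

  /* as a string, for embedding inside a larger asm block */
  #define __nops(n)       ".rept  " #n "\nnop\n.endr\n"

  /* as a standalone inline asm statement */
  #define nops(n)         asm volatile(__nops(n))
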
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 6 Sep 2016 13:04:45 +0000 (14:04 +0100)]
arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s

Similar to our {read,write}_sysreg accessors for architected, named
system registers, this patch introduces {read,write}_sysreg_s variants
that can take arbitrary sys_reg output and therefore access IMPDEF
registers or registers that are unsupported by binutils.

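For example, an IMPDEF register with no assembler name can be accessed
via its encoding; the register below is purely illustrative:

  #define SYS_SOME_IMPDEF_EL1     sys_reg(3, 0, 15, 0, 0) /* op0,op1,CRn,CRm,op2 */

  u64 val = read_sysreg_s(SYS_SOME_IMPDEF_EL1);
  write_sysreg_s(val | BIT(0), SYS_SOME_IMPDEF_EL1);
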
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Robin Murphy [Wed, 7 Sep 2016 15:02:31 +0000 (16:02 +0100)]
arm64: Remove shadowed asm-generic headers

We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h,
so generating the generic ones as well is unnecessary and a potential
source of build hiccups. At the very least, having them present has
confused my source-indexing tool, and that simply will not do.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:16 +0000 (14:07 +0100)]
arm64: Work around systems with mismatched cache line sizes

Systems with mismatched CPU i-cache/d-cache line sizes can cause
problems for software cache management when execution migrates from
one CPU to another. Usually, an application reads the cache size on a
CPU and then uses that length to perform cache operations. However, if
it gets migrated to another CPU with a smaller cache line size, things
could go completely wrong. To prevent such cases, always use the
smallest cache line size among the CPUs. The kernel CPU feature
infrastructure already keeps track of the safe value for all CPUID
registers, including CTR. This patch works around the problem by:

For the kernel, dynamically patching it to read the cache size
from the system wide copy of CTR_EL0.

For applications, trapping read accesses to CTR_EL0 (by clearing
SCTLR_EL1.UCT) and emulating the mrs instruction to return the system
wide safe value of CTR_EL0.

For faster access (i.e., to avoid looking up the system wide value of
CTR_EL0 via read_system_reg), we keep track of a pointer to the table
entry for CTR_EL0 in the CPU feature infrastructure.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:15 +0000 (14:07 +0100)]
arm64: Refactor sysinstr exception handling

Right now we trap some of the userspace data cache operations
based on a few errata (ARM 819472, 826319, 827319 and 824069).
We need to trap userspace access to CTR_EL0 if we detect mismatched
cache line sizes. Since both of these traps share the same EC, refactor
the handler a little to make it more reader friendly.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:14 +0000 (14:07 +0100)]
arm64: Introduce raw_{d,i}cache_line_size

On systems with mismatched i/d cache min line sizes, we need to use
the smallest size possible across all CPUs. This will be done by fetching
the system wide safe value from the CPU feature infrastructure.
However, some special users (e.g. kexec, hibernate) need the line
size of the current CPU (rather than the system wide value), either
because the system wide value may not yet be accessible or because the
caller is guaranteed to execute without being migrated.
Provide another helper which fetches the cache line size on the current CPU.

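A sketch of the raw d-cache variant, mirroring the existing
dcache_line_size assembler macro but always reading CTR_EL0 on the
current CPU:

  .macro  raw_dcache_line_size, reg, tmp
  mrs     \tmp, ctr_el0                   // read CTR_EL0 on this CPU
  ubfm    \tmp, \tmp, #16, #19            // extract DminLine (log2 words)
  mov     \reg, #4                        // bytes per word
  lsl     \reg, \reg, \tmp                // compute the line size in bytes
  .endm
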
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:13 +0000 (14:07 +0100)]
arm64: alternative: Add support for patching adrp instructions

adrp encodes a PC-relative offset to the 4K page of a symbol. If the
instruction appears in alternative code that is patched in, we should
adjust the offset to reflect the address the code will run from. This
patch adds support for fixing up the offset of adrp instructions.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:12 +0000 (14:07 +0100)]
arm64: insn: Add helpers for adrp offsets

Add helpers for decoding/encoding the PC-relative addresses used by
adrp. These will be used to handle dynamic patching of 'adrp'
instructions in alternative code patching.

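Typical usage when relocating a patched adrp, assuming the helper names
aarch64_insn_adrp_get_offset/aarch64_insn_adrp_set_offset and with
origptr/orig_addr/new_addr as illustrative variables:

  u32 insn = le32_to_cpu(*origptr);
  s64 off  = aarch64_insn_adrp_get_offset(insn);

  /* keep pointing at the same page from the instruction's new location */
  off += (orig_addr & PAGE_MASK) - (new_addr & PAGE_MASK);
  insn  = aarch64_insn_adrp_set_offset(insn, off);
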
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:11 +0000 (14:07 +0100)]
arm64: alternative: Disallow patching instructions using literals

The alternative code patching doesn't check whether the replaced
instruction uses a PC-relative literal. This could cause silent
corruption in the instruction stream, as the instruction will be
executed from a different address than the one it was compiled for.
Catch all such cases.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:10 +0000 (14:07 +0100)]
arm64: Rearrange CPU errata workaround checks

Right now we run through the workaround checks on a CPU
from __cpuinfo_store_cpu. There are some problems with that:

1) We initialise the system wide CPU feature registers only after the
boot CPU updates its cpuinfo. Now, if a workaround depends on the
variance of a CPU ID feature (e.g. a check for cache line size mismatch),
we have no way of performing it cleanly for the boot CPU.

2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c,
which is not an obvious home for it.

This patch rearranges the CPU-specific capability (a.k.a. workaround) checks.

1) At the moment we use verify_local_cpu_capabilities() to check whether a new
CPU has all the system advertised features. Use this for the secondary CPUs
to perform the workaround check. For that we rename
  verify_local_cpu_capabilities() => check_local_cpu_capabilities()
which:

   If the system wide capabilities haven't been initialised (i.e. the CPU
   is being activated at boot), updates the system wide detected workarounds.

   Otherwise (i.e. a CPU hotplugged in later), verifies that this CPU conforms
   to the system wide capabilities.

2) The boot CPU updates the workarounds from smp_prepare_boot_cpu(), after we
have initialised the system wide CPU feature values.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:09 +0000 (14:07 +0100)]
arm64: Use consistent naming for errata handling

This is a cosmetic change to rename the functions dealing with
the errata workarounds so that their names are more consistent.

1) check_local_cpu_errata() => update_cpu_errata_workarounds()
check_local_cpu_errata() actually updates the system's errata
workarounds, so rename it to reflect that.

2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
Use errata_workarounds instead of _errata.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Suzuki K Poulose [Fri, 9 Sep 2016 13:07:08 +0000 (14:07 +0100)]
arm64: Set the safe value for L1 icache policy

Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not a
defined policy at the moment. The safe value for L1Ip should be the
weakest of the policies, which happens to be AIVIVT. While at it,
fix the comment about safe_val.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:55:04 +0000 (14:55 +0800)]
arm64/numa: remove the limitation that cpu0 must bind to node0

1. Remove the old binding code.
2. Read the nid of cpu0 from the devicetree.
3. Fall back to nid 0 for cpu0 when numa=off is passed in bootargs.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:55:03 +0000 (14:55 +0800)]
arm64/numa: remove some useless code

At the point where the deleted code is executed, only the bit for cpu0 is
set in cpu_possible_mask, so only set_cpu_numa_node(0, NUMA_NO_NODE) will
be executed, and map_cpu_to_node(0, 0) will be called soon afterwards. The
code can therefore be safely removed.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:55:00 +0000 (14:55 +0800)]
arm64/numa: support HAVE_SETUP_PER_CPU_AREA

Make each percpu area be allocated from its local NUMA node. Without this
patch, all percpu areas are allocated from the node to which cpu0
belongs.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kefeng Wang [Thu, 1 Sep 2016 06:54:59 +0000 (14:54 +0800)]
arm64: numa: Use pr_fmt()

Use pr_fmt() to prefix kernel output, and remove a duplicated
"NUMA turned off" message.

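The usual pattern, for reference; the message shown is just an example:

  #define pr_fmt(fmt) "NUMA: " fmt       /* must precede the header includes */

  #include <linux/printk.h>

  pr_info("No NUMA configuration found\n");
  /* now prints: "NUMA: No NUMA configuration found" */
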
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kefeng Wang [Thu, 1 Sep 2016 06:54:58 +0000 (14:54 +0800)]
of_numa: Use pr_fmt()

Use pr_fmt to prefix kernel output.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kefeng Wang [Thu, 1 Sep 2016 06:54:57 +0000 (14:54 +0800)]
of_numa: Use of_get_next_parent to simplify code

Use of_get_next_parent() instead of open-coding it.

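of_get_next_parent() returns the parent of a node while dropping the
reference on the node itself, collapsing the usual get-parent/put-child
dance into one call. A sketch of the loop it enables (np and the
property name are illustrative):

  /* walk up the tree until a node carries a "numa-node-id" property */
  while (np) {
          if (!of_property_read_u32(np, "numa-node-id", &nid))
                  break;
          np = of_get_next_parent(np);    /* puts np, returns its parent */
  }
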
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:54:56 +0000 (14:54 +0800)]
arm64/numa: avoid printing inconsistent information

numa_init() may return an error because of a NUMA configuration error, so
printing "No NUMA configuration found" in that case is inaccurate. Instead,
the specific configuration error information should be printed immediately
by the branch that detects it.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:54:55 +0000 (14:54 +0800)]
of/numa: remove a duplicated warning

This warning has already been printed in of_numa_parse_cpu_nodes().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:54:54 +0000 (14:54 +0800)]
of/numa: add nid check for memory block

If the numa-id configured in a memory@ devicetree node is greater
than MAX_NUMNODES, we should report a warning. We already do this for the
cpu and distance-map dt nodes; this patch makes memory nodes consistent
with them.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:54:53 +0000 (14:54 +0800)]
of/numa: fix assumption that a memory@ node contains only one memory block

For a normal memory@ devicetree node, its reg property can contain
several memory blocks.

Because we don't know in advance how many memory blocks are contained,
we try from index 0 and increment by 1 until an error is returned (the
end).

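A sketch of the resulting loop, using the generic OF accessors (the
exact code in the patch may differ):

  struct resource rsrc;
  int i;

  /* a reg property may describe several blocks: walk them all */
  for (i = 0; of_address_to_resource(node, i, &rsrc) == 0; i++)
          numa_add_memblk(nid, rsrc.start, rsrc.end + 1);
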
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Zhen Lei [Thu, 1 Sep 2016 06:54:52 +0000 (14:54 +0800)]
of/numa: remove a duplicated pr_debug information

This information is already printed by the subfunction numa_add_memblk().
The two messages are not identical, but they are very similar.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Fri, 9 Sep 2016 13:08:30 +0000 (14:08 +0100)]
drivers/perf: arm_pmu: expose a cpumask in sysfs

In systems with heterogeneous CPUs, there are multiple logical CPU PMUs,
each of which covers a subset of CPUs in the system. In some cases
userspace needs to know which CPUs a given logical PMU covers, so we'd
like to expose a cpumask under sysfs, similar to what is done for uncore
PMUs.

Unfortunately, prior to commit 00e727bb389359c8 ("perf stat: Balance
opening and reading events"), perf stat only correctly handled a cpumask
holding a single CPU, and only when profiling in system-wide mode. In
other cases, the presence of a cpumask file could cause perf stat to
behave erratically.

Thus, exposing a cpumask file would break older perf binaries in cases
where they would otherwise work.

To avoid this issue while still providing userspace with the information
it needs, this patch exposes a differently-named file (cpus) under
sysfs. New tools can look for this and operate correctly, while older
tools will not be adversely affected by its presence.

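A sketch of how such a file can be exposed (the arm_pmu field and
helper names here are assumptions):

  static ssize_t armpmu_cpumask_show(struct device *dev,
                                     struct device_attribute *attr, char *buf)
  {
          struct pmu *pmu = dev_get_drvdata(dev);
          struct arm_pmu *armpmu = to_arm_pmu(pmu);

          return cpumap_print_to_pagebuf(true, buf, &armpmu->supported_cpus);
  }
  static DEVICE_ATTR(cpus, S_IRUGO, armpmu_cpumask_show, NULL);
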
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Fri, 9 Sep 2016 13:08:29 +0000 (14:08 +0100)]
drivers/perf: arm_pmu: only use common attr_groups

Now that the 32-bit and 64-bit perf backends use the common groups
directly, remove the fallback and no longer allow the groups array to be
overridden.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Fri, 9 Sep 2016 13:08:28 +0000 (14:08 +0100)]
arm: perf: move to common attr_group fields

By using a common attr_groups array, the common arm_pmu code can set up
common files (e.g. cpumask) for us in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Fri, 9 Sep 2016 13:08:27 +0000 (14:08 +0100)]
arm64: perf: move to common attr_group fields

By using a common attr_groups array, the common arm_pmu code can set up
common files (e.g. cpumask) for us in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Fri, 9 Sep 2016 13:08:26 +0000 (14:08 +0100)]
drivers/perf: arm_pmu: add common attr group fields

In preparation for adding common attribute groups, add an array of
attribute group pointers to arm_pmu, which will be used if the
backend hasn't already set pmu::attr_groups.

Subsequent patches will move backends over to using these, before adding
common fields.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:39 +0000 (13:55 +0100)]
arm64: simplify contextidr_thread_switch

When CONFIG_PID_IN_CONTEXTIDR is not selected, we use an empty stub
definition of contextidr_thread_switch(). As everything we rely upon
exists regardless of CONFIG_PID_IN_CONTEXTIDR, we don't strictly require
an empty stub.

By using IS_ENABLED() rather than ifdeffery, we avoid duplication, and
get compiler coverage on all the code even when CONFIG_PID_IN_CONTEXTIDR
is not selected and the code is optimised away.

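The resulting shape of the function, as a sketch that should be close
to (but is not guaranteed to match) the patch:

  static inline void contextidr_thread_switch(struct task_struct *next)
  {
          if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
                  return;

          write_sysreg(task_pid_nr(next), contextidr_el1);
          isb();
  }
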
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:38 +0000 (13:55 +0100)]
arm64: simplify sysreg manipulation

A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these across arm64 to make code shorter and
clearer. For sequences with a trailing ISB, the existing isb() macro is
also used so that asm blocks can be removed entirely.

A few uses of inline assembly for msr/mrs are left as-is. Those
manipulating sp_el0 for the current thread_info value have special
clobber requirements.

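The flavour of the conversion, on a representative register:

  u64 tcr;

  /* before */
  asm volatile("mrs %0, tcr_el1" : "=r" (tcr));
  asm volatile("msr tcr_el1, %0" : : "r" (tcr));
  asm volatile("isb" : : : "memory");

  /* after */
  tcr = read_sysreg(tcr_el1);
  write_sysreg(tcr, tcr_el1);
  isb();
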
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:37 +0000 (13:55 +0100)]
arm64/kvm: use {read,write}_sysreg()

A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 KVM code to make the code
shorter and clearer.

At the same time, a comment style violation next to a system register
access is fixed up in reset_pmcr, and comments describing whether
operations are reads or writes are removed as this is now painfully
obvious.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:36 +0000 (13:55 +0100)]
arm64: dcc: simplify accessors

A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 DCC accessors to make the
code shorter and clearer.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:35 +0000 (13:55 +0100)]
arm64: arch_timer: simplify accessors

A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.

This patch makes use of these in the arm64 arch timer accessors to make
the code shorter and clearer.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 8 Sep 2016 12:55:34 +0000 (13:55 +0100)]
arm64: sysreg: allow write_sysreg to use XZR

Currently write_sysreg has to allocate a temporary register to write
zero to a system register, which is unfortunate given that the MSR
instruction accepts XZR as an operand.

Allow XZR to be used when appropriate by fiddling with the assembly
constraints.

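The key parts are the "rZ" constraint, which lets the compiler pick
either a general register or the zero register for a constant zero, and
the %x0 operand modifier. A sketch of the resulting accessor (close to,
but not necessarily identical to, the patch):

  #define write_sysreg(v, r) do {                                 \
          u64 __val = (u64)(v);                                   \
          asm volatile("msr " __stringify(r) ", %x0"              \
                       : : "rZ" (__val));                         \
  } while (0)
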
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Robin Murphy [Thu, 8 Sep 2016 10:02:20 +0000 (11:02 +0100)]
arm64/io: Allow I/O writes to use {W,X}ZR

When zeroing an I/O location, the current accessors are forced to
allocate a temporary register to store the zero for the write. By
tweaking the assembly constraints, we can allow the compiler to use
the zero register directly in such cases, and save some juggling.
Compiling a representative kernel configuration with GCC 6 shows
that 2.3KB worth of code can be wasted just on that!

  text     data    bss      dec      hex     filename
 13316776 3248256 18176769 34741801 2121e29 vmlinux.o.new
 13319140 3248256 18176769 34744165 2122765 vmlinux.o.old

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Catalin Marinas [Mon, 5 Sep 2016 17:25:48 +0000 (18:25 +0100)]
arm64: Use static keys for CPU features

This patch adds static keys transparently for all the cpu_hwcaps
features by implementing an array of default-false static keys and
enabling them when detected. The cpus_have_cap() check uses the static
keys if the feature being checked is a constant; otherwise the compiler
generates the bitmap test.

Because of the early call to static_branch_enable() via
check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
are initialised in cpuinfo_store_boot_cpu().

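The check then looks roughly like the following sketch:

  extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];

  static __always_inline bool cpus_have_cap(unsigned int num)
  {
          if (num >= ARM64_NCAPS)
                  return false;
          if (__builtin_constant_p(num))  /* constant: use the jump label */
                  return static_branch_unlikely(&cpu_hwcap_keys[num]);
          return test_bit(num, cpu_hwcaps);       /* otherwise: bitmap test */
  }
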
Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Catalin Marinas [Mon, 5 Sep 2016 17:25:47 +0000 (18:25 +0100)]
jump_labels: Allow array initialisers

The static key API is currently designed around single variable
definitions. There are cases where an array of static keys is desirable,
so extend the API to allow this rather than using the internal static
key implementation directly.

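The extension boils down to array initialisers along these lines (a
sketch of the added macro, using a GCC designated-range initialiser):

  #define DEFINE_STATIC_KEY_ARRAY_FALSE(name, count)              \
          struct static_key_false name[count] = {                 \
                  [0 ... (count) - 1] = STATIC_KEY_FALSE_INIT,    \
          }
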
Cc: Jason Baron <jbaron@akamai.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Suggested-by: Dave P Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kefeng Wang [Mon, 5 Sep 2016 11:30:22 +0000 (19:30 +0800)]
arm64: mm: drop fixup_init() and mm.h

There is only fixup_init() in mm.h, and it is only called
from free_initmem(), so move the code from fixup_init() into
free_initmem(), then drop fixup_init() and mm.h.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Tue, 6 Sep 2016 14:34:44 +0000 (15:34 +0100)]
drivers/perf: arm_pmu: Always consider IRQ0 as an error

As declared by the chief penguin, and enforced by the NO_IRQ brigade,
IRQ0 doesn't exist and is considered an error (no irq).

Unfortunately, the arm_pmu driver still considers it valid in
a large number of cases. Let's fix this.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Pratyush Anand [Mon, 5 Sep 2016 02:33:16 +0000 (08:03 +0530)]
arm64: ftrace: add save_stack_trace_regs()

Currently, enabling stacktraces for kprobe events generates a warning:

  echo stacktrace > /sys/kernel/debug/tracing/trace_options
  echo "p xhci_irq" > /sys/kernel/debug/tracing/kprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable

save_stack_trace_regs() not implemented yet.
------------[ cut here ]------------
WARNING: CPU: 1 PID: 0 at ../kernel/stacktrace.c:74 save_stack_trace_regs+0x3c/0x48
Modules linked in:

CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-rc4-dirty #5128
Hardware name: ARM Juno development board (r1) (DT)
task: ffff800975dd1900 task.stack: ffff800975ddc000
PC is at save_stack_trace_regs+0x3c/0x48
LR is at save_stack_trace_regs+0x3c/0x48
pc : [<ffff000008126c64>] lr : [<ffff000008126c64>] pstate: 600003c5
sp : ffff80097ef52c00

Call trace:
   save_stack_trace_regs+0x3c/0x48
   __ftrace_trace_stack+0x168/0x208
   trace_buffer_unlock_commit_regs+0x5c/0x7c
   kprobe_trace_func+0x308/0x3d8
   kprobe_dispatcher+0x58/0x60
   kprobe_breakpoint_handler+0xbc/0x18c
   brk_handler+0x50/0x90
   do_debug_exception+0x50/0xbc

This patch implements save_stack_trace_regs(), so that stacktraces for
kprobe events can be obtained.

After this patch, there is no warning and we can see the stacktrace for
kprobe events in the trace buffer.

more /sys/kernel/debug/tracing/trace
          <idle>-0     [004] d.h.  1356.000496: p_xhci_irq_0:(xhci_irq+0x0/0x9ac)
          <idle>-0     [004] d.h.  1356.000497: <stack trace>
  => xhci_irq
  => __handle_irq_event_percpu
  => handle_irq_event_percpu
  => handle_irq_event
  => handle_fasteoi_irq
  => generic_handle_irq
  => __handle_domain_irq
  => gic_handle_irq
  => el1_irq
  => arch_cpu_idle
  => default_idle_call
  => cpu_startup_entry
  => secondary_start_kernel
  =>

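A sketch of the implementation, built on the existing arm64 stackframe
walker and assuming the file's existing save_trace() callback and
stack_trace_data helper struct (details may differ from the patch):

  void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
  {
          struct stack_trace_data data = {
                  .trace = trace,
                  .skip  = trace->skip,
          };
          struct stackframe frame = {
                  .fp = regs->regs[29],   /* frame pointer */
                  .sp = regs->sp,
                  .pc = regs->pc,
          };

          walk_stackframe(current, &frame, save_trace, &data);
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = ULONG_MAX;
  }
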
Tested-by: David A. Long <dave.long@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Mon, 5 Sep 2016 09:23:17 +0000 (10:23 +0100)]
arm64: kernel: re-export _cpu_resume() from sleep.S

Commit b5fe242972ef ("arm64: kernel: fix style issues in sleep.S")
changed the linkage of _cpu_resume() to local, even though the symbol
is also referenced from hibernate.c. So revert this change.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Mon, 5 Sep 2016 08:43:04 +0000 (09:43 +0100)]
arm64: Drop generic xlate_dev_mem_{k,}ptr()

The code that provides /dev/mem uses xlate_dev_mem_{k,}ptr() to
avoid making a cachable mapping of a non-cachable area on ia64.
On arm64 we do this via phys_mem_access_prot() instead, but provide
dummy versions of xlate_dev_mem_{k,}ptr().

These are the same as those in asm-generic/io.h, which we include from
asm/io.h.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Thu, 1 Sep 2016 12:35:02 +0000 (13:35 +0100)]
arm64: debug: report TRAP_TRACE instead of TRAP_HWBRPT for singlestep

Single-step traps to userspace (e.g. via ptrace) are expected to use
TRAP_TRACE for the si_code field of the siginfo, as opposed to the
TRAP_HWBRPT that we report currently.

Fix the reported value, which has no effect on existing and legacy
builds of GDB.

Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:17 +0000 (12:05 +0100)]
arm64: head.S: document the use of callee saved registers

Now that the only remaining uses of callee saved registers are on the
primary boot path, add a comment to the code describing which register
is used for what.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:16 +0000 (12:05 +0100)]
arm64: head.S: use ordinary stack frame for __primary_switched()

Instead of stashing the value of the link register in x28 before setting
up the stack and calling into C code, create an ordinary PCS compatible
stack frame so that we can push the return address onto the stack.

Since exception handlers require a stack as well, assign the stack pointer
register before installing the vector table.

Note that this accounts for the difference between THREAD_START_SP and
THREAD_SIZE, given that the stack pointer is always decremented before
calling into any C code.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:15 +0000 (12:05 +0100)]
arm64: kernel: drop use of x24 from primary boot path

Keeping __PHYS_OFFSET in x24 is actually less clear than simply taking
the value of __PHYS_OFFSET using an adrp instruction in the three places
that we need it. So change that.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:14 +0000 (12:05 +0100)]
arm64: kernel: use x30 for __enable_mmu return address

Using x27 for passing to __enable_mmu what is essentially the return
address makes the code look more complicated than it needs to be. So
switch to x30/lr, and update the secondary and cpu_resume call sites to
simply call __enable_mmu as an ordinary function, with a bl instruction.
This requires the callers to be covered by .idmap.text.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:13 +0000 (12:05 +0100)]
arm64: head.S: move KASLR processing out of __enable_mmu()

The KASLR processing is only used by the primary boot path, and
complements the processing that takes place in __primary_switch().
Move the two parts together, to make the code easier to understand.

Also, fix up a minor whitespace issue.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with -rc3 due to lack of fd363bd417dd]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:12 +0000 (12:05 +0100)]
arm64: kernel: use ordinary return/argument register for el2_setup()

The function el2_setup() passes its return value in register w20, and
in the two cases where the caller actually cares about this return value,
it is passed into set_cpu_boot_mode_flag() [almost] directly, which
expects its input in w20 as well.

So there is no reason to use a 'special' callee saved register here; we
can simply follow the PCS for the return value and first argument,
respectively.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 11:05:11 +0000 (12:05 +0100)]
arm64: kernel: fix style issues in sleep.S

This fixes a number of style issues in sleep.S. No functional changes are
intended:
- replace absolute literal references with relative references in
  __cpu_suspend_enter(), which executes from its virtual address
- replace explicit lr assignment plus branch with bl in cpu_resume(), which
  aligns it with stext() and secondary_startup()
- don't export _cpu_resume()
- use adr_l for mpidr_hash reference, and fix the incorrect accompanying
  comment, which has been out of date since commit cabe1c81ea5be983 ("arm64:
  Change cpu_resume() to enable mmu early then access sleep_sp by va")
- replace leading spaces with tabs, and add a bit of whitespace for
  readability

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vladimir Murzin [Thu, 1 Sep 2016 13:35:59 +0000 (14:35 +0100)]
arm64: kernel: do not need to reset UAO on exception entry

Commit e19a6ee2460b ("arm64: kernel: Save and restore UAO and
addr_limit on exception entry") states that the exception handler
inherits the original PSTATE.UAO value, so UAO needs to be reset
explicitly. However, the ARMv8.2 Extension documentation says:

PSTATE.UAO is copied to SPSR_ELx.UAO and is then set to 0 on an
exception taken from AArch64 to AArch64

so hardware already does the right thing.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 16 Aug 2016 10:29:17 +0000 (11:29 +0100)]
arm64: debug: convert OS lock CPU hotplug notifier to new infrastructure

The arm64 debug monitor initialisation code uses a CPU hotplug notifier
to clear the OS lock when CPUs come online.

This patch converts the code to the new hotplug mechanism.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 15 Aug 2016 17:55:11 +0000 (18:55 +0100)]
arm64: hw_breakpoint: convert CPU hotplug notifier to new infrastructure

The arm64 hw_breakpoint implementation uses a CPU hotplug notifier to
reset the {break,watch}point registers when CPUs come online.

This patch converts the code to the new hotplug mechanism, whilst moving
the invocation earlier to remove the need to disable IRQs explicitly in
the driver (which could cause havoc if we trip a watchpoint in an IRQ
handler whilst restoring the debug register state).

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
zijun_hu [Thu, 1 Sep 2016 10:51:19 +0000 (18:51 +0800)]
arm64: remove duplicate macro __KERNEL__ check

remove duplicate macro __KERNEL__ check

Signed-off-by: zijun_hu <zijun_hu@htc.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Fri, 26 Aug 2016 10:36:39 +0000 (11:36 +0100)]
arm64: debug: avoid resetting stepping state machine when TIF_SINGLESTEP

When TIF_SINGLESTEP is set for a task, the single-step state machine is
enabled and we must take care not to reset it to the active-not-pending
state if it is already in the active-pending state.

Unfortunately, that's exactly what user_enable_single_step does, by
unconditionally setting the SS bit in the SPSR for the current task.
This causes failures in the GDB testsuite, where GDB ends up missing
expected step traps if the instruction being stepped generates another
trap, e.g. PTRACE_EVENT_FORK from an SVC instruction.

This patch fixes the problem by preserving the current state of the
stepping state machine when TIF_SINGLESTEP is set on the current thread.

Cc: <stable@vger.kernel.org>
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 10:31:10 +0000 (11:31 +0100)]
arm64: cpufeature: expose arm64_ftr_reg struct for CTR_EL0

Expose the arm64_ftr_reg struct covering CTR_EL0 outside of cpufeature.o
so that other code can refer to it directly (i.e., without performing the
binary search).

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 10:31:09 +0000 (11:31 +0100)]
arm64: cpufeature: constify arm64_ftr_regs array

Constify the arm64_ftr_regs array by moving the mutable arm64_ftr_reg
fields out of the array itself. This also streamlines the bsearch, since
the entire array can be covered by fewer cachelines. Moving the payload
out of the array also allows us to have a special, explicitly defined
struct instance in case other code needs to refer to it directly.

Note that this replaces the runtime sorting of the array with a runtime
BUG() check that the array is sorted correctly in the code.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ard Biesheuvel [Wed, 31 Aug 2016 10:31:08 +0000 (11:31 +0100)]
arm64: cpufeature: constify arm64_ftr_bits structures

The arm64_ftr_bits structures are never modified, so make them read-only.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kefeng Wang [Wed, 31 Aug 2016 12:38:50 +0000 (20:38 +0800)]
arm64: cleanup unused UDBG_* define

UDBG_UNDEFINED/SYSCALL/BADABORT/SEGV are only used to show
verbose user fault messages on arm, not arm64; drop them.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kim Phillips [Tue, 30 Aug 2016 19:08:39 +0000 (14:08 -0500)]
arm64: don't select PERF_USE_VMALLOC by default

Any arm64 based parts that have cache aliasing issues can set it
manually. It was apparently dragged in from the ARM(32) defaults in
commit 8c2c3df ("arm64: Build infrastructure").

Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Michal Marek [Tue, 30 Aug 2016 08:31:35 +0000 (10:31 +0200)]
arm64: Set UTS_MACHINE in the Makefile

The make rpm target depends on proper UTS_MACHINE definition.  Also, use
the variable in arch/arm64/kernel/setup.c, so that it's not accidentally
removed in the future.

Reported-and-tested-by: Fabian Vogt <fvogt@suse.com>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 22 Aug 2016 10:58:36 +0000 (11:58 +0100)]
arm64: errata: Pass --fix-cortex-a53-843419 to ld if workaround enabled

Cortex-A53 erratum 843419 is worked around by the linker, although it is
a configure-time option to GCC as to whether ld is actually asked to
apply the workaround or not.

This patch ensures that we pass --fix-cortex-a53-843419 to the linker
when both CONFIG_ARM64_ERRATUM_843419=y and the linker supports the
option.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 17 Aug 2016 12:50:27 +0000 (13:50 +0100)]
Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline"

Now that we use the MPIDR to resume on the same CPU that we hibernated on,
we no longer need to refuse to hibernate if the boot cpu is offline. (Which
we can't possibly know if kexec causes logical CPUs to be renumbered).

This reverts commit 1fe492ce6482b77807b25d29690a48c46456beee.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 17 Aug 2016 12:50:26 +0000 (13:50 +0100)]
arm64: hibernate: Resume when hibernate image created on non-boot CPU

disable_nonboot_cpus() assumes that the lowest numbered online CPU is
the boot CPU, and that this is the correct CPU to run any power
management code on.

On arm64 CPU0 can be taken offline. For hibernate/resume this means we
may hibernate on a CPU other than CPU0. If the system is rebooted with
kexec, 'CPU0' will be assigned to a different CPU. This complicates
hibernate/resume as now we can't trust the CPU numbers.

We currently forbid hibernate if CPU0 has been hotplugged out to avoid
this situation without kexec.

Save the MPIDR of the CPU we hibernated on in the hibernate arch-header,
use hibernate_resume_nonboot_cpu_disable() to direct which CPU we should
resume on based on the MPIDR of the CPU we hibernated on. This allows us to
hibernate/resume on any CPU, even if the logical numbers have been
shuffled by kexec.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 17 Aug 2016 12:50:25 +0000 (13:50 +0100)]
cpu/hotplug: Allow suspend/resume CPU to be specified

disable_nonboot_cpus() assumes that the lowest numbered online CPU is
the boot CPU, and that this is the correct CPU to run any power
management code on.

On x86 this is always correct, as CPU0 cannot (easily) be taken offline.

On arm64 CPU0 can be taken offline. For hibernate/resume this means we
may hibernate on a CPU other than CPU0. If the system is rebooted with
kexec, 'CPU0' will be assigned to a different physical CPU. This
complicates hibernate/resume as now we can't trust the CPU numbers.
Arch code can find the correct physical CPU, and ensure it is online
before resume from hibernate begins, but also needs to influence
disable_nonboot_cpus()s choice of CPU.

Rename disable_nonboot_cpus() as freeze_secondary_cpus() and add an
argument indicating which CPU should be left standing. Follow the logic
in migrate_to_reboot_cpu() to use the lowest numbered online CPU if the
requested CPU is not online.
Add disable_nonboot_cpus() as an inline function that has the existing
behaviour.

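The compatibility wrapper then reduces to (a sketch, matching the
described behaviour):

  extern int freeze_secondary_cpus(int primary);

  static inline int disable_nonboot_cpus(void)
  {
          /* old behaviour: leave the lowest numbered online CPU standing */
          return freeze_secondary_cpus(0);
  }
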
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Thu, 25 Aug 2016 16:23:23 +0000 (17:23 +0100)]
arm64: always enable DEBUG_RODATA and remove the Kconfig option

Follow the example set by x86 in commit 9ccaf77cf05915f5 ("x86/mm:
Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option"), and
make these protections a fundamental security feature rather than an
opt-in. This also results in a minor code simplification.

For those rare cases when users wish to disable this protection (e.g.
for debugging), this can be done by passing 'rodata=off' on the command
line.

As DEBUG_RODATA_ALIGN is only intended to address a performance/memory
tradeoff, and does not affect correctness, this is left user-selectable.
DEBUG_MODULE_RONX is also left user-selectable until the core code
provides a boot-time option to disable the protection for debugging
use-cases.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
AKASHI Takahiro [Mon, 22 Aug 2016 06:55:24 +0000 (15:55 +0900)]
arm64: mark reserved memblock regions explicitly in iomem

Kdump(kexec-tools) parses /proc/iomem to identify all the memory regions
on the system. Since the current kernel names "nomap" regions, like UEFI
runtime services code/data, as "System RAM," kexec-tools sets up elf core
header to include them in a crash dump file (/proc/vmcore).

Then crash dump kernel parses UEFI memory map again, re-marks those regions
as "nomap" and does not create a memory mapping for them unlike the other
areas of System RAM. In this case, copying /proc/vmcore through
copy_oldmem_page() on crash dump kernel will end up with a kernel abort,
as reported in [1].

This patch names all the "nomap" regions explicitly as "reserved" so that
we can exclude them from a crash dump file. acpi_os_ioremap() must also
be modified because those regions have WB attributes [2].

Apart from kdump, this change also matches x86's use of acpi (and
/proc/iomem).

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448186.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/450089.html

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 24 Aug 2016 17:27:30 +0000 (18:27 +0100)]
arm64: hibernate: Support DEBUG_PAGEALLOC

DEBUG_PAGEALLOC removes the valid bit of page table entries to prevent
any access to unallocated memory. Hibernate uses this as a hint that those
pages don't need to be saved/restored. This patch adds the
kernel_page_present() function it uses.

hibernate.c copies the resume kernel's linear map for use during restore.
Add _copy_pte() to fill-in the holes made by DEBUG_PAGEALLOC in the resume
kernel, so we can restore data the original kernel had at these addresses.

Finally, DEBUG_PAGEALLOC means the linear-map alias of KERNEL_START to
KERNEL_END may have holes in it, so we can't lazily clean this whole
area to the PoC. Only clean the new mmuoff region, and the kernel/kvm
idmaps.

This reverts commit da24eb1f3f9e2c7b75c5f8c40d8e48e2c4789596.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 24 Aug 2016 17:27:29 +0000 (18:27 +0100)]
arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap

Resume from hibernate needs to clean any text executed by the kernel with
the MMU off to the PoC. Collect these functions together into the
.idmap.text section as all this code is tightly coupled and also needs
the same cleaning after resume.

Data is more complicated: secondary_holding_pen_release is written with
the MMU on, cleaned and invalidated, then read with the MMU off. In
contrast, __boot_cpu_mode is written with the MMU off and the
corresponding cache line is invalidated, so when we read it with the MMU
on we don't get stale data. These cache maintenance operations conflict
with each other if the values are within a Cache Writeback Granule (CWG)
of each other. Collect the data into two sections, .mmuoff.data.read and
.mmuoff.data.write; the linker script ensures the .mmuoff.data.write
section is aligned to the architectural maximum CWG of 2KB.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
James Morse [Wed, 24 Aug 2016 17:27:28 +0000 (18:27 +0100)]
arm64: Create sections.h

Each time new section markers are added, kernel/vmlinux.ld.S is updated,
and new extern char __start_foo[] definitions are scattered through the
tree.

Create asm/sections.h to collect these definitions (and include
the existing asm-generic version).

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Catalin Marinas [Thu, 11 Aug 2016 17:44:50 +0000 (18:44 +0100)]
arm64: Introduce execute-only page access permissions

The ARMv8 architecture allows execute-only user permissions by clearing
the PTE_UXN and PTE_USER bits. However, the kernel running on a CPU
implementation without User Access Override (introduced in ARMv8.2) can
still access such pages, so execute-only page permission does not protect
against read(2)/write(2) etc. accesses. Systems requiring such
protection must enable features like SECCOMP.

This patch changes the arm64 __P100 and __S100 protection_map[] macros
to the new __PAGE_EXECONLY attributes. A side effect is that
pte_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't
set. To work around this, the check is done on the PTE_NG bit via the
pte_ng() macro. VM_READ is now also checked for page faults.
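
From userspace, an execute-only mapping is requested in the usual way;
a minimal example:

#include <stddef.h>
#include <sys/mman.h>

/* with this patch, PROT_EXEC without PROT_READ yields a PTE with
 * neither PTE_USER nor PTE_UXN set */
void *map_exec_only(size_t len)
{
	return mmap(NULL, len, PROT_EXEC,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}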

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: kprobe: Always clear pstate.D in breakpoint exception handler
Pratyush Anand [Mon, 22 Aug 2016 06:46:00 +0000 (12:16 +0530)]
arm64: kprobe: Always clear pstate.D in breakpoint exception handler

Whenever we hit a kprobe from a non-kprobe debug exception handler, we
get infinite occurrences of "Unexpected kernel single-step exception
at EL1".

PSTATE.D is the debug exception mask bit. It is set whenever we enter an
exception mode. While it is set, Watchpoint, Breakpoint, and Software
Step exceptions are masked. However, Breakpoint Instruction exceptions
can never be masked, so if we ever execute a BRK instruction,
irrespective of the D-bit setting, we will receive a corresponding
breakpoint exception.

For example:

- We are executing a kprobe pre/post handler, and a kprobe has been
  inserted in one of the instructions of a function called by the
  handler. It then executes a BRK instruction and we land in the
  KPROBE_REENTER case. (This case is already handled by the current
  code.)

- We are executing a uprobe handler or any other BRK handler, such as
  WARN_ON (BRK BUG_BRK_IMM), and we trace that path using a kprobe. We
  then enter the kprobe breakpoint handler from another BRK handler.
  (This case is not currently handled.)

In all such cases the kprobe breakpoint exception is raised while we are
already in debug exception mode. The SPSR's D bit (bit 9) shows the value
of PSTATE.D immediately before the exception was taken, so in the above
cases we would find it set in the kprobe breakpoint handler. A kprobe
breakpoint exception is always followed by a single-step exception;
however, the single step is only raised gracefully if we clear the D bit
while returning from the breakpoint exception. If the D bit is left set,
the result is an undefined exception, and when its handler enables debug
a single-step exception is generated; that exception is never handled
(the address does not match, so it is treated as unexpected).

This patch clears the D flag unconditionally in setup_singlestep, so that
we always get the single-step exception correctly after returning from
the breakpoint exception. It also removes the statement that set the
D flag on the KPROBE_REENTER return path, because the debug exception for
KPROBE_REENTER always takes place in a debug exception state, where the
D flag is already set.
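
The core of the fix amounts to clearing the debug mask bit in the saved
state, along these lines (helper name invented for the sketch):

#include <asm/ptrace.h>

/* clear PSTATE.D (bit 9) in the saved SPSR so the single-step
 * exception can be taken once we return from the breakpoint handler */
static void clear_saved_debug_mask(struct pt_regs *regs)
{
	regs->pstate &= ~PSR_D_BIT;
}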

Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: head.S: get rid of x25 and x26 with 'global' scope
Ard Biesheuvel [Tue, 16 Aug 2016 19:02:32 +0000 (21:02 +0200)]
arm64: head.S: get rid of x25 and x26 with 'global' scope

Currently, x25 and x26 hold the physical addresses of idmap_pg_dir
and swapper_pg_dir, respectively, when running early boot code. But
having registers with 'global' scope in files that contain different
sections with different lifetimes, and that are called by different
CPUs at different times is a bit messy, especially since stashing the
values does not buy us anything in terms of code size or clarity.

So simply replace each reference to x25 or x26 with an adrp instruction
referring to idmap_pg_dir or swapper_pg_dir directly.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: apply __ro_after_init to some objects
Jisheng Zhang [Mon, 15 Aug 2016 06:45:46 +0000 (14:45 +0800)]
arm64: apply __ro_after_init to some objects

These objects are set during initialization and are read-only thereafter.

Previously I only wanted to mark vdso_pages, vdso_spec, vectors_page and
cpu_ops as __read_mostly from a performance point of view. Then, inspired
by Kees's patch[1] applying more __ro_after_init for arm, I think it's
better to mark them as __ro_after_init. What's more, I found some more
objects that are also read-only after init, so apply __ro_after_init to
all of them.

This patch also removes global vdso_pagelist and tries to clean up
vdso_spec[] assignment code.

[1] http://www.spinics.net/lists/arm-kernel/msg523188.html
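
For illustration, the pattern being applied (hypothetical variable, not
taken from the patch):

#include <linux/cache.h>
#include <linux/init.h>

static unsigned long boot_setting __ro_after_init;	/* hypothetical */

static int __init boot_setting_setup(void)
{
	boot_setting = 42;	/* writes are fine until init completes */
	return 0;
}
core_initcall(boot_setting_setup);

/* once initmem is freed, the page backing boot_setting is remapped
 * read-only, so any later store faults */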

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: vdso: constify vm_special_mapping used for aarch32 vectors page
Jisheng Zhang [Mon, 15 Aug 2016 06:45:45 +0000 (14:45 +0800)]
arm64: vdso: constify vm_special_mapping used for aarch32 vectors page

The vm_special_mapping spec used for the aarch32 vectors page is
never modified, so mark it as const.
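
i.e. roughly (a sketch of the shape, field values abridged):

static struct page *vectors_page[1];

static const struct vm_special_mapping aarch32_vectors_spec = {
	.name	= "[vectors]",	/* name shown in /proc/<pid>/maps */
	.pages	= vectors_page,
};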

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: vdso: add __init section marker to alloc_vectors_page
Jisheng Zhang [Mon, 15 Aug 2016 06:45:44 +0000 (14:45 +0800)]
arm64: vdso: add __init section marker to alloc_vectors_page

It is not needed after booting, so this patch moves the
alloc_vectors_page function to the __init section.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: remove redundant "select HAVE_CLK"
Masahiro Yamada [Tue, 16 Aug 2016 09:19:22 +0000 (18:19 +0900)]
arm64: remove redundant "select HAVE_CLK"

HAVE_CLK is select'ed by CLKDEV_LOOKUP, which is select'ed by
COMMON_CLK, which is select'ed by ARM64.  No sub-architecture
needs to select HAVE_CLK explicitly.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: remove traces of perf_ops_bp
Mark Rutland [Thu, 11 Aug 2016 16:59:46 +0000 (17:59 +0100)]
arm64: remove traces of perf_ops_bp

Even though perf_ops_bp was removed/renamed back in commit
b0a873ebbf87bf38 ("perf: Register PMU implementations"), as part of
v2.6.37, its definition still lives on in some arch headers.

This patch removes the vestigial definition from arm64.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: perf: Use the builtin_platform_driver
Kefeng Wang [Wed, 10 Aug 2016 12:59:15 +0000 (20:59 +0800)]
arm64: perf: Use the builtin_platform_driver

Use builtin_platform_driver() to simplify the code.
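
builtin_platform_driver() expands to the init-time
platform_driver_register() boilerplate, so a driver that can never be a
module reduces to roughly this (names abridged from the arm64 PMU
driver):

static struct platform_driver armv8_pmu_driver = {
	.driver		= {
		.name		= "armv8-pmu",
		.of_match_table	= armv8_pmu_of_device_ids,
	},
	.probe		= armv8_pmu_device_probe,
};

builtin_platform_driver(armv8_pmu_driver);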

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: mm: convert __dma_* routines to use start, size
Kwangwoo Lee [Tue, 2 Aug 2016 00:50:50 +0000 (09:50 +0900)]
arm64: mm: convert __dma_* routines to use start, size

The __dma_* routines have been converted to use a start address and size
instead of start and end addresses. The patch was originally for adding
__clean_dcache_area_poc(), which will be used in the pmem driver to clean
the dcache to the PoC (Point of Coherency) in arch_wb_cache_pmem().

The functionality of __clean_dcache_area_poc() was equivalent to
__dma_clean_range(); the difference was that __dma_clean_range() takes an
end address, while __clean_dcache_area_poc() takes a size.

Thus, __clean_dcache_area_poc() has been reworked as a fallthrough to
__dma_clean_range() after the change that makes the __dma_* routines take
a start and size instead of a start and end.

As a consequence of using start and size, the __dma_* routines have also
been renamed following the terminology below, illustrated by the sketch
after the list:
    area: takes a start and size
    range: takes a start and end
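
A sketch of the convention: a range-based caller converts to an
area-based callee by computing the size.

#include <linux/types.h>

extern void __clean_dcache_area_poc(void *addr, size_t size);

/* range form: caller holds (start, end) */
static inline void dma_clean_range(void *start, void *end)
{
	/* area form: callee wants (start, size) */
	__clean_dcache_area_poc(start, (char *)end - (char *)start);
}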

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: factor work_pending state machine to C
Chris Metcalf [Thu, 14 Jul 2016 20:48:14 +0000 (16:48 -0400)]
arm64: factor work_pending state machine to C

Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
state machine that can be difficult to reason about due to duplicated
code and a large number of branch targets.

This patch factors the common logic out into the existing
do_notify_resume function, converting it to C in the process and
making the code more legible.

This patch tries to closely mirror the existing behaviour while using
the usual C control flow primitives. As local_irq_{disable,enable} may
be instrumented, we balance exception entry (where we will almost
certainly enable IRQs) with a call to trace_hardirqs_on just before the
return to userspace.
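
The resulting C loop has roughly this shape (a heavily simplified
sketch; only a subset of the work flags is shown):

asmlinkage void do_notify_resume(struct pt_regs *regs,
				 unsigned long thread_flags)
{
	do {
		if (thread_flags & _TIF_NEED_RESCHED) {
			schedule();
		} else {
			local_irq_enable();

			if (thread_flags & _TIF_SIGPENDING)
				do_signal(regs);

			if (thread_flags & _TIF_NOTIFY_RESUME)
				tracehook_notify_resume(regs);
		}

		/* re-check for new work with IRQs masked */
		local_irq_disable();
		thread_flags = READ_ONCE(current_thread_info()->flags);
	} while (thread_flags & _TIF_WORK_MASK);
}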

Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoarm64: hibernate: reduce TLB maintenance scope
Mark Rutland [Mon, 8 Aug 2016 10:12:07 +0000 (11:12 +0100)]
arm64: hibernate: reduce TLB maintenance scope

In break_before_make_ttbr_switch we perform broadcast TLB maintenance
for the inner shareable domain, and use a DSB ISH to complete this.
However, at the point we execute this, secondary CPUs are either
physically offline, or executing code outside of the kernel. Upon
entering the kernel, secondary CPUs will invalidate their TLBs before
enabling their MMUs.

Thus we do not need to invalidate TLBs of other CPUs, and as with
idmap_cpu_replace_ttbr1 we can reduce the scope of maintenance to the
TLBs of the local CPU. This keeps our TLB maintenance code consistent,
and is a minor optimisation.
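
In terms of the usual C helpers the change looks like this (the
hibernate path itself is assembly; the function name is invented for
the sketch):

#include <asm/tlbflush.h>

static void ttbr_switch_tlb_flush(void)
{
	/*
	 * Was: flush_tlb_all() - TLBI VMALLE1IS + DSB ISH, broadcast
	 * to the inner-shareable domain. Local maintenance suffices,
	 * since secondaries invalidate their own TLBs before enabling
	 * their MMUs.
	 */
	local_flush_tlb_all();	/* TLBI VMALLE1 + DSB NSH */
}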

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8 years agoLinux 4.8-rc3
Linus Torvalds [Sun, 21 Aug 2016 23:14:10 +0000 (16:14 -0700)]
Linux 4.8-rc3

8 years agoMerge branch 'parisc-4.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller...
Linus Torvalds [Sun, 21 Aug 2016 21:28:24 +0000 (14:28 -0700)]
Merge branch 'parisc-4.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux

Pull two parisc fixes from Helge Deller:
 "The first patch ensures that the high-res cr16 clocksource (which was
  added in kernel 4.7) gets chosen as the default clocksource for parisc.

  The second patch moves the #define of EREFUSED down inside errno.h and
  thus unbreaks building the gccgo compiler"

* 'parisc-4.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Fix order of EREFUSED define in errno.h
  parisc: Fix automatic selection of cr16 clocksource

8 years agoEDAC, skx_edac: Add EDAC driver for Skylake
Tony Luck [Sat, 20 Aug 2016 23:27:58 +0000 (16:27 -0700)]
EDAC, skx_edac: Add EDAC driver for Skylake

This is an entirely new driver instead of yet another set of patches
to sb_edac.c because:

1) Mapping from PCI devices to socket/memory controller is significantly
   different. Skylake scatters devices on a socket across a number of
   PCI buses.
2) There is an extra level of interleaving via the "mcroute" register
   that would be a little messy to squeeze into the old driver.
3) Validation is getting too expensive. Changes to sb_edac need to
   be checked against Sandy Bridge, Ivy Bridge, Haswell, Broadwell and
   Knights Landing.

Acked-by: Aristeu Rozanski <aris@redhat.com>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
8 years agoparisc: Fix order of EREFUSED define in errno.h
Helge Deller [Sat, 20 Aug 2016 09:51:38 +0000 (11:51 +0200)]
parisc: Fix order of EREFUSED define in errno.h

When building gccgo in userspace, errno.h gets parsed and the Go include file
sysinfo.go is generated.

Since EREFUSED is defined to the same value as ECONNREFUSED, and ECONNREFUSED
is defined later on in errno.h, this leads to Go complaining that EREFUSED
isn't defined yet.

Fix this trivial problem by moving the define of EREFUSED down after
ECONNREFUSED in errno.h (and clean up the indenting while touching this line).
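
The problem in miniature (the numeric value is the parisc one, shown
for illustration):

/* before: EREFUSED is defined while ECONNREFUSED does not exist yet.
 * cpp is fine with this (macros expand lazily), but tools reading the
 * header top-to-bottom, like gccgo's sysinfo.go generator, are not:
 *
 *   #define EREFUSED	ECONNREFUSED
 *   ...
 *   #define ECONNREFUSED	239
 *
 * after: the define is moved below ECONNREFUSED */
#define ECONNREFUSED	239
#define EREFUSED	ECONNREFUSED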

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
8 years agoparisc: Fix automatic selection of cr16 clocksource
Helge Deller [Fri, 19 Aug 2016 20:39:02 +0000 (22:39 +0200)]
parisc: Fix automatic selection of cr16 clocksource

Commit 54b66800907 ("parisc: Add native high-resolution sched_clock()
implementation") added support to use the CPU-internal cr16 counters as a
reliable clocksource with the help of HAVE_UNSTABLE_SCHED_CLOCK.

Sadly, the commit missed removing the hack which prevented cr16 from
becoming the default clocksource even on SMP systems.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # 4.7+
8 years agoMake the hardened user-copy code depend on having a hardened allocator
Linus Torvalds [Fri, 19 Aug 2016 19:47:01 +0000 (12:47 -0700)]
Make the hardened user-copy code depend on having a hardened allocator

The kernel test robot reported a usercopy failure in the new hardened
sanity checks, due to a page-crossing copy of the FPU state into the
task structure.

This happened because the kernel test robot was testing with SLOB, which
doesn't actually do the required book-keeping for slab allocations, and
as a result the hardening code didn't realize that the task struct
allocation was one single allocation - and the sanity checks fail.

Since SLOB doesn't even claim to support hardening (and you really
shouldn't use it), the straightforward solution is to just make the
usercopy hardening code depend on the allocator supporting it.

Reported-by: kernel test robot <xiaolong.ye@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
8 years agoMerge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa...
Linus Torvalds [Fri, 19 Aug 2016 19:10:06 +0000 (12:10 -0700)]
Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux

Pull i2c fixes from Wolfram Sang:
 "I2C has some pretty standard driver bugfixes and one minor cleanup"

* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: meson: Use complete() instead of complete_all()
  i2c: brcmstb: Use complete() instead of complete_all()
  i2c: bcm-kona: Use complete() instead of complete_all()
  i2c: bcm-iproc: Use complete() instead of complete_all()
  i2c: at91: fix support of the "alternative command" feature
  i2c: ocores: add missed clk_disable_unprepare() on failure paths
  i2c: cros-ec-tunnel: Fix usage of cros_ec_cmd_xfer()
  i2c: mux: demux-pinctrl: properly roll back when adding adapter fails

8 years agoMerge tag 'dm-4.8-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device...
Linus Torvalds [Fri, 19 Aug 2016 16:32:48 +0000 (09:32 -0700)]
Merge tag 'dm-4.8-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:

 - a stable fix for DM round robin multipath path selector to disable
   preemption before using this_cpu_ptr()

 - a slight increase in DM crypt's mempool reserves to make swap on top
   of DM crypt more performant

 - a few DM raid fixes to issues found while testing changes that were
   merged in v4.8-rc1

* tag 'dm-4.8-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm raid: support raid0 with missing metadata devices
  dm raid: enhance attempt_restore_of_faulty_devices() to support more devices
  dm raid: fix restoring of failed devices regression
  dm raid: fix frozen recovery regression
  dm crypt: increase mempool reserve to better support swapping
  dm round robin: do not use this_cpu_ptr() without having preemption disabled

8 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Fri, 19 Aug 2016 16:22:50 +0000 (09:22 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Six fairly small fixes.  The ipr, mpt3sas and ses ones all trigger
  oopses.  The megaraid one fixes an attach failure on io mapped only
  cards, the fcoe one is an obvious problem in the error path and the
  aacraid one is a theoretical security issue (ability to trick the
  kernel into a buffer overrun)"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  ses: Fix racy cleanup of /sys in remove_dev()
  mpt3sas: Fix resume on WarpDrive flash cards
  ipr: Fix sync scsi scan
  megaraid_sas: Fix probing cards without io port
  aacraid: Check size values after double-fetch from user
  fcoe: Use kfree_skb() instead of kfree()

8 years agoMerge tag 'usb-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds [Fri, 19 Aug 2016 16:21:24 +0000 (09:21 -0700)]
Merge tag 'usb-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB fixes from Greg KH:
 "Here are a number of USB fixes for reported issues for your tree.

  The normal amount of gadget fixes, xhci fixes, new device ids, and a
  few other minor things.  All of them have been in linux-next for a
  while, the full details are in the shortlog below"

* tag 'usb-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (43 commits)
  xhci: don't dereference a xhci member after removing xhci
  usb: xhci: Fix panic if disconnect
  xhci: really enqueue zero length TRBs.
  xhci: always handle "Command Ring Stopped" events
  cdc-acm: fix wrong pipe type on rx interrupt xfers
  usb: misc: usbtest: add fix for driver hang
  usb: dwc3: gadget: stop processing on HWO set
  usb: dwc3: don't set last bit for ISOC endpoints
  usb: gadget: rndis: free response queue during REMOTE_NDIS_RESET_MSG
  usb: udc: core: fix error handling
  usb: gadget: fsl_qe_udc: off by one in setup_received_handle()
  usb/gadget: fix gadgetfs aio support.
  usb: gadget: composite: Fix return value in case of error
  usb: gadget: uvc: Fix return value in case of error
  usb: gadget: fix check in sync read from ep in gadgetfs
  usb: misc: usbtest: usbtest_do_ioctl may return positive integer
  usb: dwc3: fix missing platform_set_drvdata() in dwc3_of_simple_probe()
  usb: phy: omap-otg: Fix missing platform_set_drvdata() in omap_otg_probe()
  usb: gadget: configfs: add mutex lock before unregister gadget
  usb: gadget: u_ether: fix dereference after null check coverify warning
  ...

8 years agoMerge tag 'xfs-iomap-for-linus-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Fri, 19 Aug 2016 16:06:41 +0000 (09:06 -0700)]
Merge tag 'xfs-iomap-for-linus-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs

Pull xfs and iomap fixes from Dave Chinner:
 "Changes in this update:

  Regression fixes for XFS changes introduced in 4.8-rc1:
   - buffer IO accounting assert failure
   - ENOSPC block accounting reservation issue
   - DAX IO path page cache invalidation fix
   - rmapbt on-disk block count in agf
   - correct classification of rmap block type when updating AGFL.
   - iomap support for attribute fork mapping

  Regression fixes for iomap infrastructure in 4.8-rc1:
   - fiemap: honor FIEMAP_FLAG_SYNC
   - fiemap: implement FIEMAP_FLAG_XATTR support to fix XFS regression
   - make mark_page_accessed and pagefault_disable usage consistent with
     other IO paths"

* tag 'xfs-iomap-for-linus-4.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
  xfs: remove OWN_AG rmap when allocating a block from the AGFL
  xfs: (re-)implement FIEMAP_FLAG_XATTR
  xfs: simplify xfs_file_iomap_begin
  iomap: mark ->iomap_end as optional
  iomap: prepare iomap_fiemap for attribute mappings
  iomap: fiemap should honor the FIEMAP_FLAG_SYNC flag
  iomap: remove superflous pagefault_disable from iomap_write_actor
  iomap: remove superflous mark_page_accessed from iomap_write_actor
  xfs: store rmapbt block count in the AGF
  xfs: don't invalidate whole file on DAX read/write
  xfs: fix bogus space reservation in xfs_iomap_write_allocate
  xfs: don't assert fail on non-async buffers on ioacct decrement