arm64: Fix DMA range invalidation for cache line unaligned buffers
author Catalin Marinas <catalin.marinas@arm.com>
Fri, 9 May 2014 14:58:16 +0000 (15:58 +0100)
committer Mark Brown <broonie@linaro.org>
Mon, 12 May 2014 17:10:22 +0000 (18:10 +0100)
commit 03b0d29e0369054e638328b3994b8cfbb9d34a35
tree 836ff9f64541fb946925717feaa30b90e5eb045e
parent 1487ad49044023a8ce9b7fc5a9cdf6bbec0b8748
arm64: Fix DMA range invalidation for cache line unaligned buffers

If the buffer needing cache invalidation for inbound DMA does not start
or end on a cache line aligned address, we need to use the
non-destructive clean&invalidate operation for the partial lines at the
edges; otherwise adjacent data sharing those lines would be discarded.
This issue was introduced by commit 7363590d2c46 (arm64: Implement
coherent DMA API based on swiotlb).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
(cherry picked from commit ebf81a938dade3b450eb11c57fa744cfac4b523f)
Signed-off-by: Ryan Harkin <ryan.harkin@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
arch/arm64/mm/cache.S