From: Chandler Carruth
Date: Thu, 1 Mar 2012 18:55:25 +0000 (+0000)
Subject: Rewrite LLVM's generalized support library for hashing to follow the API
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=0b66c6fca22e85f732cf58f459a06c06833d1882;p=oota-llvm.git

Rewrite LLVM's generalized support library for hashing to follow the API
of the proposed standard hashing interfaces (N3333), and to use a modified
and tuned version of the CityHash algorithm.

Some of the highlights of this change:
 -- Significantly higher quality hashing algorithm with very well
    distributed results, and extremely few collisions. Should be close to
    a checksum for up to 64-bit keys. Very little clustering or clumping
    of hash codes, to better distribute load on probed hash tables.
 -- Built-in support for reserved values.
 -- Simplified API that composes cleanly with other C++ idioms and APIs.
 -- Better scaling performance as keys grow. This is the fastest algorithm
    I've found and measured for moderately sized keys (such as those that
    show up in some of the uniquing and folding use cases).
 -- Support for enabling per-execution seeds to prevent table ordering or
    other artifacts of hashing algorithms from impacting the output of
    LLVM. The seeding would make each run different and highlight these
    problems during bootstrap.

This implementation was tested extensively using the SMHasher test suite,
and passed with flying colors, even doing better than the original CityHash
algorithm. I've included a unittest, although it is somewhat minimal at the
moment. I've also added (or refactored into the proper location) the type
traits necessary to implement this, and converted users of GeneralHash over.

My only immediate concern with this implementation is the performance of
hashing small keys. I've already started working to improve this, and will
continue to do so. Currently, the only faster algorithms produce lower
quality results, but it is likely there is a better compromise than the
current one.

Many thanks to Jeffrey Yasskin who did most of the work on the N3333 paper,
pair-programmed some of this code, and reviewed much of it. Many thanks also
go to Geoff Pike and Jyrki Alakuijala, the original authors of CityHash on
which this is heavily based, and Austin Appleby who created MurmurHash and
the SMHasher test suite.

Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for all
of the review comments! If there are further comments or concerns, please
let me know and I'll jump on 'em.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151822 91177308-0d34-0410-b5e6-96231b3b80d8
---

diff --git a/include/llvm/ADT/Hashing.h b/include/llvm/ADT/Hashing.h
index 682dc223e22..ccf352cb52b 100644
--- a/include/llvm/ADT/Hashing.h
+++ b/include/llvm/ADT/Hashing.h
@@ -7,170 +7,745 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file defines utilities for computing hash values for various data types.
+// This file implements the newly proposed standard C++ interfaces for hashing
+// arbitrary data and building hash functions for user-defined types. This
+// interface was originally proposed in N3333[1] and is currently under review
+// for inclusion in a future TR and/or standard.
+//
+// The primary interfaces provided comprise one type and three functions:
+//
+//  -- 'hash_code' class is an opaque type representing the hash code for some
+//     data.
+//     It is the intended product of hashing, and can be used to implement
+//     hash tables, checksumming, and other common uses of hashes. It is not
+//     an integer type (although it can be converted to one) because it is
+//     risky to assume much about the internals of a hash_code. In
+//     particular, each execution of the program has a high probability of
+//     producing a different hash_code for a given input. Thus their values
+//     are not suitable for saving or persisting, and should only be used
+//     during the execution in which they were computed, for the construction
+//     of hashing data structures.
+//
+//  -- 'hash_value' is a function designed to be overloaded for each
+//     user-defined type which wishes to be used within a hashing context. It
+//     should be overloaded within the user-defined type's namespace and found
+//     via ADL. Overloads for primitive types are provided by this library.
+//
+//  -- 'hash_combine' and 'hash_combine_range' are functions designed to aid
+//     programmers in easily and intuitively combining a set of data into
+//     a single hash_code for their object. They should only logically be used
+//     within the implementation of a 'hash_value' routine or similar context.
+//
+// Note that 'hash_combine_range' contains very special logic for hashing
+// a contiguous array of integers or pointers. This logic is *extremely* fast:
+// on a modern Intel "Gainestown" Xeon (Nehalem uarch) @2.2 GHz, it was
+// benchmarked at over 6.5 GiB/s for large keys, and <20 cycles/hash for keys
+// under 32 bytes.
 //
 //===----------------------------------------------------------------------===//

 #ifndef LLVM_ADT_HASHING_H
 #define LLVM_ADT_HASHING_H

-#include "llvm/ADT/ArrayRef.h"
-#include "llvm/ADT/StringRef.h"
-#include "llvm/Support/AlignOf.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/DataTypes.h"
+#include "llvm/Support/type_traits.h"
+#include <algorithm>
+#include <cassert>
 #include <cstring>
+#include <iterator>
+#include <utility>
+
+// Allow detecting C++11 feature availability when building with Clang without
+// breaking other compilers.
+#ifndef __has_feature
+# define __has_feature(x) 0
+#endif

 namespace llvm {

-/// Class to compute a hash value from multiple data fields of arbitrary
-/// types. Note that if you are hashing a single data type, such as a
-/// string, it may be cheaper to use a hash algorithm that is tailored
-/// for that specific data type.
-/// Typical Usage:
-///    GeneralHash Hash;
-///    Hash.add(someValue);
-///    Hash.add(someOtherValue);
-///    return Hash.finish();
-/// Adapted from MurmurHash2 by Austin Appleby
-class GeneralHash {
-private:
-  enum {
-    M = 0x5bd1e995
-  };
-  unsigned Hash;
-  unsigned Count;
+/// \brief An opaque object representing a hash code.
+///
+/// This object represents the result of hashing some entity. It is intended
+/// to be used to implement hashtables or other hashing-based data structures.
+/// While it wraps and exposes a numeric value, this value should not be
+/// trusted to be stable or predictable across processes or executions.
+///
+/// In order to obtain the hash_code for an object 'x':
+/// \code
+///   using llvm::hash_value;
+///   llvm::hash_code code = hash_value(x);
+/// \endcode
+///
+/// Also note that there are two numerical values which are reserved and which
+/// the implementation ensures will never be produced for real hash_codes.
+/// These can be used as sentinels within hashing data structures.
+class hash_code {
+  size_t value;
+
 public:
-  GeneralHash(unsigned Seed = 0) : Hash(Seed), Count(0) {}
+  /// \brief Default construct a hash_code.
Constructs a null code. + hash_code() : value() {} - /// Add a pointer value. - /// Note: this adds pointers to the hash using sizes and endianness that - /// depend on the host. It doesn't matter however, because hashing on - /// pointer values is inherently unstable. - template - GeneralHash& add(const T *PtrVal) { - addBits(&PtrVal, &PtrVal + 1); - return *this; + /// \brief Form a hash code directly from a numerical value. + hash_code(size_t value) : value(value) { + // Ensure we don't form a hash_code with one of the prohibited values. + assert(value != get_null_code().value); + assert(value != get_invalid_code().value); } - /// Add an ArrayRef of arbitrary data. - template - GeneralHash& add(ArrayRef ArrayVal) { - addBits(ArrayVal.begin(), ArrayVal.end()); - return *this; + /// \brief Convert the hash code to its numerical value for use. + /*explicit*/ operator size_t() const { return value; } + + /// \brief Get a hash_code object which corresponds to a null code. + /// + /// The null code must never be the result of any 'hash_value' calls and can + /// be used to detect an unset hash_code. + static hash_code get_null_code() { return hash_code(); } + + /// \brief Get a hash_code object which corresponds to an invalid code. + /// + /// The invalid code must never be the result of any 'hash_value' calls. This + /// can be used to flag invalid hash_codes or mark entries in a hash table. + static hash_code get_invalid_code() { + hash_code invalid_code; + invalid_code.value = static_cast(-1); + return invalid_code; } - /// Add a string - GeneralHash& add(StringRef StrVal) { - addBits(StrVal.begin(), StrVal.end()); - return *this; + friend bool operator==(const hash_code &lhs, const hash_code &rhs) { + return lhs.value == rhs.value; + } + friend bool operator!=(const hash_code &lhs, const hash_code &rhs) { + return lhs.value != rhs.value; } - /// Add an signed 32-bit integer. - GeneralHash& add(int32_t Data) { - addInt(uint32_t(Data)); - return *this; + /// \brief Allow a hash_code to be directly run through hash_value. + friend size_t hash_value(const hash_code &code) { return code.value; } +}; + + +// All of the implementation details of actually computing the various hash +// code values are held within this namespace. These routines are included in +// the header file mainly to allow inlining and constant propagation. +namespace hashing { +namespace detail { + +inline uint64_t fetch64(const char *p) { + uint64_t result; + memcpy(&result, p, sizeof(result)); + return result; +} + +inline uint32_t fetch32(const char *p) { + uint32_t result; + memcpy(&result, p, sizeof(result)); + return result; +} + +/// Some primes between 2^63 and 2^64 for various uses. +static const uint64_t k0 = 0xc3a5c85c97cb3127ULL; +static const uint64_t k1 = 0xb492b66fbe98f273ULL; +static const uint64_t k2 = 0x9ae16a3b2f90404fULL; +static const uint64_t k3 = 0xc949d7c7509e6557ULL; + +/// \brief Bitwise right rotate. +/// Normally this will compile to a single instruction, especially if the +/// shift is a manifest constant. +inline uint64_t rotate(uint64_t val, unsigned shift) { + // Avoid shifting by 64: doing so yields an undefined result. + return shift == 0 ? val : ((val >> shift) | (val << (64 - shift))); +} + +inline uint64_t shift_mix(uint64_t val) { + return val ^ (val >> 47); +} + +inline uint64_t hash_16_bytes(uint64_t low, uint64_t high) { + // Murmur-inspired hashing. 
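+  // The multiply by kMul disperses the xor of the two halves across all 64
+  // bits, and the xor-by-47 shift folds the well-mixed high bits back onto
+  // the low bits; repeating that multiply/xor-shift round twice more mixes
+  // the full 128 bits of input down into the final 64-bit result.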
+ const uint64_t kMul = 0x9ddfea08eb382d69ULL; + uint64_t a = (low ^ high) * kMul; + a ^= (a >> 47); + uint64_t b = (high ^ a) * kMul; + b ^= (b >> 47); + b *= kMul; + return b; +} + +inline uint64_t hash_1to3_bytes(const char *s, size_t len, uint64_t seed) { + uint8_t a = s[0]; + uint8_t b = s[len >> 1]; + uint8_t c = s[len - 1]; + uint32_t y = static_cast(a) + (static_cast(b) << 8); + uint32_t z = len + (static_cast(c) << 2); + return shift_mix(y * k2 ^ z * k3 ^ seed) * k2; +} + +inline uint64_t hash_4to8_bytes(const char *s, size_t len, uint64_t seed) { + uint64_t a = fetch32(s); + return hash_16_bytes(len + (a << 3), seed ^ fetch32(s + len - 4)); +} + +inline uint64_t hash_9to16_bytes(const char *s, size_t len, uint64_t seed) { + uint64_t a = fetch64(s); + uint64_t b = fetch64(s + len - 8); + return hash_16_bytes(seed ^ a, rotate(b + len, len)) ^ b; +} + +inline uint64_t hash_17to32_bytes(const char *s, size_t len, uint64_t seed) { + uint64_t a = fetch64(s) * k1; + uint64_t b = fetch64(s + 8); + uint64_t c = fetch64(s + len - 8) * k2; + uint64_t d = fetch64(s + len - 16) * k0; + return hash_16_bytes(rotate(a - b, 43) + rotate(c ^ seed, 30) + d, + a + rotate(b ^ k3, 20) - c + len + seed); +} + +inline uint64_t hash_33to64_bytes(const char *s, size_t len, uint64_t seed) { + uint64_t z = fetch64(s + 24); + uint64_t a = fetch64(s) + (len + fetch64(s + len - 16)) * k0; + uint64_t b = rotate(a + z, 52); + uint64_t c = rotate(a, 37); + a += fetch64(s + 8); + c += rotate(a, 7); + a += fetch64(s + 16); + uint64_t vf = a + z; + uint64_t vs = b + rotate(a, 31) + c; + a = fetch64(s + 16) + fetch64(s + len - 32); + z = fetch64(s + len - 8); + b = rotate(a + z, 52); + c = rotate(a, 37); + a += fetch64(s + len - 24); + c += rotate(a, 7); + a += fetch64(s + len - 16); + uint64_t wf = a + z; + uint64_t ws = b + rotate(a, 31) + c; + uint64_t r = shift_mix((vf + ws) * k2 + (wf + vs) * k0); + return shift_mix(seed ^ (r * k0) + vs) * k2; +} + +inline uint64_t hash_short(const char *s, size_t length, uint64_t seed) { + uint64_t hash; + if (length >= 4 && length <= 8) + hash = hash_4to8_bytes(s, length, seed); + else if (length > 8 && length <= 16) + hash = hash_9to16_bytes(s, length, seed); + else if (length > 16 && length <= 32) + hash = hash_17to32_bytes(s, length, seed); + else if (length > 32) + hash = hash_33to64_bytes(s, length, seed); + else if (length != 0) + hash = hash_1to3_bytes(s, length, seed); + else + return k2 ^ seed; + + // FIXME: The invalid hash_code check is really expensive; there should be + // a better way of ensuring these invariants hold. + if (hash == static_cast(hash_code::get_null_code())) + hash = k1 ^ seed; + else if (hash == static_cast(hash_code::get_invalid_code())) + hash = k3 ^ seed; + return hash; +} + +/// \brief The intermediate state used during hashing. +/// Currently, the algorithm for computing hash codes is based on CityHash and +/// keeps 56 bytes of arbitrary state. +struct hash_state { + uint64_t h0, h1, h2, h3, h4, h5, h6; + uint64_t seed; + + /// \brief Create a new hash_state structure and initialize it based on the + /// seed and the first 64-byte chunk. + /// This effectively performs the initial mix. + static hash_state create(const char *s, uint64_t seed) { + hash_state state = { + 0, seed, hash_16_bytes(seed, k1), rotate(seed ^ k1, 49), + seed * k1, shift_mix(seed), hash_16_bytes(state.h4, state.h5), + seed + }; + state.mix(s); + return state; } - /// Add an unsigned 32-bit integer. 
- GeneralHash& add(uint32_t Data) { - addInt(Data); - return *this; + /// \brief Mix 32-bytes from the input sequence into the 16-bytes of 'a' + /// and 'b', including whatever is already in 'a' and 'b'. + static void mix_32_bytes(const char *s, uint64_t &a, uint64_t &b) { + a += fetch64(s); + uint64_t c = fetch64(s + 24); + b = rotate(b + a + c, 21); + uint64_t d = a; + a += fetch64(s + 8) + fetch64(s + 16); + b += rotate(a, 44) + d; + a += c; } - /// Add an signed 64-bit integer. - GeneralHash& add(int64_t Data) { - addInt(uint64_t(Data)); - return *this; + /// \brief Mix in a 64-byte buffer of data. + /// We mix all 64 bytes even when the chunk length is smaller, but we + /// record the actual length. + void mix(const char *s) { + h0 = rotate(h0 + h1 + h3 + fetch64(s + 8), 37) * k1; + h1 = rotate(h1 + h4 + fetch64(s + 48), 42) * k1; + h0 ^= h6; + h1 += h3 + fetch64(s + 40); + h2 = rotate(h2 + h5, 33) * k1; + h3 = h4 * k1; + h4 = h0 + h5; + mix_32_bytes(s, h3, h4); + h5 = h2 + h6; + h6 = h1 + fetch64(s + 16); + mix_32_bytes(s + 32, h5, h6); + std::swap(h2, h0); } - /// Add an unsigned 64-bit integer. - GeneralHash& add(uint64_t Data) { - addInt(Data); - return *this; + /// \brief Compute the final 64-bit hash code value based on the current + /// state and the length of bytes hashed. + uint64_t finalize(size_t length) { + uint64_t final_value + = hash_16_bytes(hash_16_bytes(h3, h5) + shift_mix(h1) * k1 + h2, + hash_16_bytes(h4, h6) + shift_mix(length) * k1 + h0); + if (final_value == static_cast(hash_code::get_null_code())) + final_value = k1 ^ seed; + if (final_value == static_cast(hash_code::get_invalid_code())) + final_value = k3 ^ seed; + return final_value; } +}; - /// Add a float - GeneralHash& add(float Data) { - union { - float D; uint32_t I; - }; - D = Data; - addInt(I); - return *this; + +/// \brief A global, fixed seed-override variable. +/// +/// This variable can be set using the \see llvm::set_fixed_execution_seed +/// function. See that function for details. Do not, under any circumstances, +/// set or read this variable. +extern size_t fixed_seed_override; + +inline size_t get_execution_seed() { + // FIXME: This needs to be a per-execution seed. This is just a placeholder + // implementation. Switching to a per-execution seed is likely to flush out + // instability bugs and so will happen as its own commit. + // + // However, if there is a fixed seed override set the first time this is + // called, return that instead of the per-execution seed. + static size_t seed = fixed_seed_override ? fixed_seed_override + : 0xff51afd7ed558ccdULL; + return seed; +} + + +/// \brief Helper to hash the value of a single integer. +/// +/// Overloads for smaller integer types are not provided to ensure consistent +/// behavior in the presence of integral promotions. Essentially, +/// "hash_value('4')" and "hash_value('0' + 4)" should be the same. +inline hash_code hash_integer_value(uint64_t value) { + // Similar to hash_4to8_bytes but using a seed instead of length. + const uint64_t seed = get_execution_seed(); + const char *s = reinterpret_cast(&value); + const uint64_t a = fetch32(s); + return hash_16_bytes(seed + (a << 3), fetch32(s + 4)); +} + +} // namespace detail +} // namespace hashing + + +/// \brief Override the execution seed with a fixed value. 
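+///
+/// A sketch of the intended use (illustrative only; the 'ReproducibleMode'
+/// flag here is hypothetical, not part of this patch):
+/// \code
+///   int main(int argc, char **argv) {
+///     if (ReproducibleMode)
+///       llvm::set_fixed_execution_hash_seed(1); // before any hashing occurs
+///     ...
+///   }
+/// \endcode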
+///
+/// This hashing library uses a per-execution seed designed to change on each
+/// run with high probability in order to ensure that the hash codes are not
+/// attackable and to ensure that output which is intended to be stable does
+/// not rely on the particulars of the hash codes produced.
+///
+/// That said, there are use cases where it is important to be able to
+/// reproduce *exactly* a specific behavior. To that end, we provide a
+/// function which will forcibly set the seed to a fixed value. This must be
+/// done at the start of the program, before any hashes are computed. Also,
+/// it cannot be undone. This makes it thread-hostile and very hard to use
+/// outside of immediately on start of a simple program designed for
+/// reproducible behavior.
+void set_fixed_execution_hash_seed(size_t fixed_value);
+
+
+/// \brief Compute a hash_code for any integer value.
+///
+/// Note that this function is intended to compute the same hash_code for
+/// a particular value without regard to the pre-promotion type. This is in
+/// contrast to hash_combine which may produce different hash_codes for
+/// differing argument types even if they would implicitly promote to a
+/// common type without changing the value.
+template <typename T>
+typename enable_if<is_integral<T>, hash_code>::type hash_value(T value) {
+  return ::llvm::hashing::detail::hash_integer_value(value);
+}
+
+/// \brief Compute a hash_code for a pointer's address.
+///
+/// N.B.: This hashes the *address*. Not the value and not the type.
+template <typename T> hash_code hash_value(const T *ptr) {
+  return ::llvm::hashing::detail::hash_integer_value(
+    reinterpret_cast<uintptr_t>(ptr));
+}
+
+
+// Implementation details for implementing hash combining functions.
+namespace hashing {
+namespace detail {
+
+/// \brief Trait to indicate whether a type's bits can be hashed directly.
+///
+/// A type trait which is true if we want to combine values for hashing by
+/// reading the underlying data. It is false if values of this type must
+/// first be passed to hash_value, and the resulting hash_codes combined.
+//
+// FIXME: We want to replace is_integral and is_pointer here with a predicate
+// which asserts that comparing the underlying storage of two values of the
+// type for equality is equivalent to comparing the two values for equality.
+// For all the platforms we care about, this holds for integers and pointers,
+// but there are platforms where it doesn't and we would like to support
+// user-defined types which happen to satisfy this property.
+template <typename T> struct is_hashable_data
+  : integral_constant<bool, ((is_integral<T>::value || is_pointer<T>::value) &&
+                             64 % sizeof(T) == 0)> {};
+
+/// \brief Helper to get the hashable data representation for a type.
+/// This variant is enabled when the type itself can be used.
+template <typename T>
+typename enable_if<is_hashable_data<T>, T>::type
+get_hashable_data(const T &value) {
+  return value;
+}
+/// \brief Helper to get the hashable data representation for a type.
+/// This variant is enabled when we must first call hash_value and use the
+/// result as our data.
+template <typename T>
+typename enable_if_c<!is_hashable_data<T>::value, size_t>::type
+get_hashable_data(const T &value) {
+  using ::llvm::hash_value;
+  return hash_value(value);
+}
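+
+// To make the dispatch above concrete, a few illustrative evaluations of the
+// trait (sizes assume a typical 64-bit target):
+//   is_hashable_data<int>::value          // true: integral, 64 % 4 == 0
+//   is_hashable_data<const char *>::value // true: pointer, 64 % 8 == 0
+//   is_hashable_data<std::string>::value  // false: hash_value() is called
+//                                         // and the resulting size_t hashed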
+
+/// \brief Helper to store data from a value into a buffer and advance the
+/// pointer into that buffer.
+///
+/// This routine first checks whether there is enough space in the provided
+/// buffer, and if not immediately returns false. If there is space, it
+/// copies the underlying bytes of value into the buffer, advances the
+/// buffer_ptr past the copied bytes, and returns true.
+template <typename T>
+bool store_and_advance(char *&buffer_ptr, char *buffer_end, const T& value,
+                       size_t offset = 0) {
+  size_t store_size = sizeof(value) - offset;
+  if (buffer_ptr + store_size > buffer_end)
+    return false;
+  const char *value_data = reinterpret_cast<const char *>(&value);
+  memcpy(buffer_ptr, value_data + offset, store_size);
+  buffer_ptr += store_size;
+  return true;
+}
+
+/// \brief Implement the combining of integral values into a hash_code.
+///
+/// This overload is selected when the value type of the iterator is
+/// integral. Rather than computing a hash_code for each object and then
+/// combining them, this (as an optimization) directly combines the integers.
+template <typename InputIteratorT>
+hash_code hash_combine_range_impl(InputIteratorT first, InputIteratorT last) {
+  typedef typename std::iterator_traits<InputIteratorT>::value_type ValueT;
+  const size_t seed = get_execution_seed();
+  char buffer[64], *buffer_ptr = buffer;
+  char *const buffer_end = buffer_ptr + array_lengthof(buffer);
+  while (first != last && store_and_advance(buffer_ptr, buffer_end,
+                                            get_hashable_data(*first)))
+    ++first;
+  if (first == last)
+    return hash_short(buffer, buffer_ptr - buffer, seed);
+  assert(buffer_ptr == buffer_end);
+
+  hash_state state = state.create(buffer, seed);
+  size_t length = 64;
+  while (first != last) {
+    // Fill up the buffer. We don't clear it, which re-mixes the last round
+    // when only a partial 64-byte chunk is left.
+    buffer_ptr = buffer;
+    while (first != last && store_and_advance(buffer_ptr, buffer_end,
+                                              get_hashable_data(*first)))
+      ++first;
+
+    // Rotate the buffer if we did a partial fill in order to simulate doing
+    // a mix of the last 64 bytes. That is how the algorithm works when we
+    // have a contiguous byte sequence, and we want to emulate that here.
+    std::rotate(buffer, buffer_ptr, buffer_end);
+
+    // Mix this chunk into the current state.
+    state.mix(buffer);
+    length += buffer_ptr - buffer;
+  }
+
+  return state.finalize(length);
+}
+
+/// \brief Implement the combining of integral values into a hash_code.
+///
+/// This overload is selected when the value type of the iterator is integral
+/// and when the input iterator is actually a pointer. Rather than computing
+/// a hash_code for each object and then combining them, this (as an
+/// optimization) directly combines the integers. Also, because the integers
+/// are stored in contiguous memory, this routine avoids copying each value
+/// and directly reads from the underlying memory.
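+///
+/// For example, hashing the elements of a C array of integers lands on this
+/// fast path (illustrative sketch mirroring the unit tests):
+/// \code
+///   int arr[] = {1, 2, 3};
+///   hash_code code = hash_combine_range(arr, arr + 3);
+/// \endcode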
+template +typename enable_if, hash_code>::type +hash_combine_range_impl(const ValueT *first, const ValueT *last) { + const size_t seed = get_execution_seed(); + const char *s_begin = reinterpret_cast(first); + const char *s_end = reinterpret_cast(last); + const size_t length = std::distance(s_begin, s_end); + if (length <= 64) + return hash_short(s_begin, length, seed); + + const char *s_aligned_end = s_begin + (length & ~63); + hash_state state = state.create(s_begin, seed); + s_begin += 64; + while (s_begin != s_aligned_end) { + state.mix(s_begin); + s_begin += 64; } + if (length & 63) + state.mix(s_end - 64); - /// Add a double - GeneralHash& add(double Data) { - union { - double D; uint64_t I; - }; - D = Data; - addInt(I); - return *this; - } - - // Do a few final mixes of the hash to ensure the last few - // bytes are well-incorporated. - unsigned finish() { - mix(Count); - Hash ^= Hash >> 13; - Hash *= M; - Hash ^= Hash >> 15; - return Hash; - } - -private: - void mix(uint32_t Data) { - ++Count; - Data *= M; - Data ^= Data >> 24; - Data *= M; - Hash *= M; - Hash ^= Data; - } - - // Add a single uint32 value - void addInt(uint32_t Val) { - mix(Val); - } - - // Add a uint64 value - void addInt(uint64_t Val) { - mix(uint32_t(Val >> 32)); - mix(uint32_t(Val)); - } - - // Add a range of bytes from I to E. - template - void addBytes(const char *I, const char *E) { - uint32_t Data; - // Note that aliasing rules forbid us from dereferencing - // reinterpret_cast(I) even if I happens to be suitably - // aligned, so we use memcpy instead. - for (; E - I >= ptrdiff_t(sizeof Data); I += sizeof Data) { - // A clever compiler should be able to turn this memcpy into a single - // aligned or unaligned load (depending on the alignment of the type T - // that was used in the call to addBits). - std::memcpy(&Data, I, sizeof Data); - mix(Data); - } - if (!ElementsHaveEvenLength && I != E) { - Data = 0; - std::memcpy(&Data, I, E - I); - mix(Data); + return state.finalize(length); +} + +} // namespace detail +} // namespace hashing + + +/// \brief Compute a hash_code for a sequence of values. +/// +/// This hashes a sequence of values. It produces the same hash_code as +/// 'hash_combine(a, b, c, ...)', but can run over arbitrary sized sequences +/// and is significantly faster given pointers and types which can be hashed as +/// a sequence of bytes. +template +hash_code hash_combine_range(InputIteratorT first, InputIteratorT last) { + return ::llvm::hashing::detail::hash_combine_range_impl(first, last); +} + + +// Implementation details for hash_combine. +namespace hashing { +namespace detail { + +/// \brief Helper class to manage the recursive combining of hash_combine +/// arguments. +/// +/// This class exists to manage the state and various calls involved in the +/// recursive combining of arguments used in hash_combine. It is particularly +/// useful at minimizing the code in the recursive calls to ease the pain +/// caused by a lack of variadic functions. +class hash_combine_recursive_helper { + const size_t seed; + char buffer[64]; + char *const buffer_end; + char *buffer_ptr; + size_t length; + hash_state state; + +public: + /// \brief Construct a recursive hash combining helper. + /// + /// This sets up the state for a recursive hash combine, including getting + /// the seed and buffer setup. 
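+  /// Note that the hash_state member is deliberately left uninitialized
+  /// here: it is only created once the first full 64-byte buffer has been
+  /// packed (see combine_data below), and 'length == 0' marks it as unused.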
+ hash_combine_recursive_helper() + : seed(get_execution_seed()), + buffer_end(buffer + array_lengthof(buffer)), + buffer_ptr(buffer), + length(0) {} + + /// \brief Combine one chunk of data into the current in-flight hash. + /// + /// This merges one chunk of data into the hash. First it tries to buffer + /// the data. If the buffer is full, it hashes the buffer into its + /// hash_state, empties it, and then merges the new chunk in. This also + /// handles cases where the data straddles the end of the buffer. + template void combine_data(T data) { + if (!store_and_advance(buffer_ptr, buffer_end, data)) { + // Check for skew which prevents the buffer from being packed, and do + // a partial store into the buffer to fill it. This is only a concern + // with the variadic combine because that formation can have varying + // argument types. + size_t partial_store_size = buffer_end - buffer_ptr; + memcpy(buffer_ptr, &data, partial_store_size); + + // If the store fails, our buffer is full and ready to hash. We have to + // either initialize the hash state (on the first full buffer) or mix + // this buffer into the existing hash state. Length tracks the *hashed* + // length, not the buffered length. + if (length == 0) { + state = state.create(buffer, seed); + length = 64; + } else { + // Mix this chunk into the current state and bump length up by 64. + state.mix(buffer); + length += 64; + } + // Reset the buffer_ptr to the head of the buffer for the next chunk of + // data. + buffer_ptr = buffer; + + // Try again to store into the buffer -- this cannot fail as we only + // store types smaller than the buffer. + if (!store_and_advance(buffer_ptr, buffer_end, data, + partial_store_size)) + abort(); } } - // Add a range of bits from I to E. - template - void addBits(const T *I, const T *E) { - addBytes( - reinterpret_cast(I), - reinterpret_cast(E)); +#if defined(__has_feature) && __has_feature(__cxx_variadic_templates__) + + /// \brief Recursive, variadic combining method. + /// + /// This function recurses through each argument, combining that argument + /// into a single hash. + template + hash_code combine(const T &arg, const Ts &...args) { + combine_data( get_hashable_data(arg)); + + // Recurse to the next argument. + return combine(args...); + } + +#else + // Manually expanded recursive combining methods. See variadic above for + // documentation. + + template + hash_code combine(const T1 &arg1, const T2 &arg2, const T3 &arg3, + const T4 &arg4, const T5 &arg5, const T6 &arg6) { + combine_data(get_hashable_data(arg1)); + return combine(arg2, arg3, arg4, arg5, arg6); + } + template + hash_code combine(const T1 &arg1, const T2 &arg2, const T3 &arg3, + const T4 &arg4, const T5 &arg5) { + combine_data(get_hashable_data(arg1)); + return combine(arg2, arg3, arg4, arg5); + } + template + hash_code combine(const T1 &arg1, const T2 &arg2, const T3 &arg3, + const T4 &arg4) { + combine_data(get_hashable_data(arg1)); + return combine(arg2, arg3, arg4); + } + template + hash_code combine(const T1 &arg1, const T2 &arg2, const T3 &arg3) { + combine_data(get_hashable_data(arg1)); + return combine(arg2, arg3); + } + template + hash_code combine(const T1 &arg1, const T2 &arg2) { + combine_data(get_hashable_data(arg1)); + return combine(arg2); + } + template + hash_code combine(const T1 &arg1) { + combine_data(get_hashable_data(arg1)); + return combine(); + } + +#endif + + /// \brief Base case for recursive, variadic combining. 
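+  /// (Reached via the recursive combine overloads above once the parameter
+  /// pack, or the manually expanded argument list, has been exhausted.)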
+ /// + /// The base case when combining arguments recursively is reached when all + /// arguments have been handled. It flushes the remaining buffer and + /// constructs a hash_code. + hash_code combine() { + // Check whether the entire set of values fit in the buffer. If so, we'll + // use the optimized short hashing routine and skip state entirely. + if (length == 0) + return hash_short(buffer, buffer_ptr - buffer, seed); + + // Mix the final buffer, rotating it if we did a partial fill in order to + // simulate doing a mix of the last 64-bytes. That is how the algorithm + // works when we have a contiguous byte sequence, and we want to emulate + // that here. + std::rotate(buffer, buffer_ptr, buffer_end); + + // Mix this chunk into the current state. + state.mix(buffer); + length += buffer_ptr - buffer; + + return state.finalize(length); } }; -} // end namespace llvm +} // namespace detail +} // namespace hashing + + +#if __has_feature(__cxx_variadic_templates__) + +/// \brief Combine values into a single hash_code. +/// +/// This routine accepts a varying number of arguments of any type. It will +/// attempt to combine them into a single hash_code. For user-defined types it +/// attempts to call a \see hash_value overload (via ADL) for the type. For +/// integer and pointer types it directly combines their data into the +/// resulting hash_code. +/// +/// The result is suitable for returning from a user's hash_value +/// *implementation* for their user-defined type. Consumers of a type should +/// *not* call this routine, they should instead call 'hash_value'. +template hash_code hash_combine(const Ts &...args) { + // Recursively hash each argument using a helper class. + ::llvm::hashing::detail::hash_combine_recursive_helper helper; + return helper.combine(args...); +} + +#else + +// What follows are manually exploded overloads for each argument width. See +// the above variadic definition for documentation and specification. 
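+
+// Regardless of which form is compiled in, a user-defined type would
+// typically be wired into this library as in the following sketch (the
+// 'Employee' type and its members are hypothetical, purely for illustration):
+//
+//   struct Employee {
+//     StringRef Name;
+//     unsigned ID;
+//     friend hash_code hash_value(const Employee &E) {
+//       return hash_combine(E.ID, hash_combine_range(E.Name.begin(),
+//                                                    E.Name.end()));
+//     }
+//   };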
+
+template <typename T1, typename T2, typename T3, typename T4, typename T5,
+          typename T6>
+hash_code hash_combine(const T1 &arg1, const T2 &arg2, const T3 &arg3,
+                       const T4 &arg4, const T5 &arg5, const T6 &arg6) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1, arg2, arg3, arg4, arg5, arg6);
+}
+template <typename T1, typename T2, typename T3, typename T4, typename T5>
+hash_code hash_combine(const T1 &arg1, const T2 &arg2, const T3 &arg3,
+                       const T4 &arg4, const T5 &arg5) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1, arg2, arg3, arg4, arg5);
+}
+template <typename T1, typename T2, typename T3, typename T4>
+hash_code hash_combine(const T1 &arg1, const T2 &arg2, const T3 &arg3,
+                       const T4 &arg4) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1, arg2, arg3, arg4);
+}
+template <typename T1, typename T2, typename T3>
+hash_code hash_combine(const T1 &arg1, const T2 &arg2, const T3 &arg3) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1, arg2, arg3);
+}
+template <typename T1, typename T2>
+hash_code hash_combine(const T1 &arg1, const T2 &arg2) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1, arg2);
+}
+template <typename T1>
+hash_code hash_combine(const T1 &arg1) {
+  ::llvm::hashing::detail::hash_combine_recursive_helper helper;
+  return helper.combine(arg1);
+}
+
+#endif
+
+} // namespace llvm

 #endif
diff --git a/include/llvm/Support/system_error.h b/include/llvm/Support/system_error.h
index b30973271c8..af812069b9f 100644
--- a/include/llvm/Support/system_error.h
+++ b/include/llvm/Support/system_error.h
@@ -470,17 +470,6 @@ template <> struct hash;

 namespace llvm {

-template <class T, T v>
-struct integral_constant {
-  typedef T value_type;
-  static const value_type value = v;
-  typedef integral_constant<T, v> type;
-  operator value_type() { return value; }
-};
-
-typedef integral_constant<bool, true> true_type;
-typedef integral_constant<bool, false> false_type;
-
 // is_error_code_enum

 template <class Tp> struct is_error_code_enum : public false_type {};
diff --git a/include/llvm/Support/type_traits.h b/include/llvm/Support/type_traits.h
index 515295bdd66..03f85e99971 100644
--- a/include/llvm/Support/type_traits.h
+++ b/include/llvm/Support/type_traits.h
@@ -17,6 +17,7 @@
 #ifndef LLVM_SUPPORT_TYPE_TRAITS_H
 #define LLVM_SUPPORT_TYPE_TRAITS_H

+#include "llvm/Support/DataTypes.h"
 #include <cstddef>

 // This is actually the conforming implementation which works with abstract
@@ -68,17 +69,62 @@ struct isPodLike<std::pair<T, U> > {
 };

+template <typename T, T v>
+struct integral_constant {
+  typedef T value_type;
+  static const value_type value = v;
+  typedef integral_constant<T, v> type;
+  operator value_type() { return value; }
+};
+
+typedef integral_constant<bool, true> true_type;
+typedef integral_constant<bool, false> false_type;
+
 /// \brief Metafunction that determines whether the two given types are
 /// equivalent.
-template <typename T, typename U>
-struct is_same {
-  static const bool value = false;
-};
+template <typename T, typename U> struct is_same : public false_type {};
+template <typename T> struct is_same<T, T> : public true_type {};
+
+/// \brief Metafunction that removes const qualification from a type.
+template <typename T> struct remove_const { typedef T type; };
+template <typename T> struct remove_const<const T> { typedef T type; };

-template <typename T>
-struct is_same<T, T> {
-  static const bool value = true;
+/// \brief Metafunction that removes volatile qualification from a type.
+template <typename T> struct remove_volatile { typedef T type; };
+template <typename T> struct remove_volatile<volatile T> { typedef T type; };
+
+/// \brief Metafunction that removes both const and volatile qualification from
+/// a type.
+template <typename T> struct remove_cv {
+  typedef typename remove_const<typename remove_volatile<T>::type>::type type;
 };
+
+/// \brief Helper to implement is_integral metafunction.
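+/// The cv-qualifiers are stripped by is_integral below before matching one
+/// of these explicit specializations, so e.g. is_integral<const volatile
+/// long> derives from true_type via is_integral_impl<long>.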
+template <typename T> struct is_integral_impl           : false_type {};
+template <> struct is_integral_impl<              bool> : true_type {};
+template <> struct is_integral_impl<              char> : true_type {};
+template <> struct is_integral_impl<       signed char> : true_type {};
+template <> struct is_integral_impl<     unsigned char> : true_type {};
+template <> struct is_integral_impl<           wchar_t> : true_type {};
+template <> struct is_integral_impl<             short> : true_type {};
+template <> struct is_integral_impl<    unsigned short> : true_type {};
+template <> struct is_integral_impl<               int> : true_type {};
+template <> struct is_integral_impl<      unsigned int> : true_type {};
+template <> struct is_integral_impl<              long> : true_type {};
+template <> struct is_integral_impl<     unsigned long> : true_type {};
+template <> struct is_integral_impl<         long long> : true_type {};
+template <> struct is_integral_impl<unsigned long long> : true_type {};
+
+/// \brief Metafunction that determines whether the given type is an integral
+/// type.
+template <typename T>
+struct is_integral : is_integral_impl<typename remove_cv<T>::type> {};
+
+/// \brief Metafunction that determines whether the given type is a pointer
+/// type.
+template <typename T> struct is_pointer : false_type {};
+template <typename T> struct is_pointer<T*> : true_type {};
+
 // enable_if_c - Enable/disable a template based on a metafunction
 template<bool Cond, typename T = void>
diff --git a/lib/Support/CMakeLists.txt b/lib/Support/CMakeLists.txt
index 6cec47df67e..0b69238274e 100644
--- a/lib/Support/CMakeLists.txt
+++ b/lib/Support/CMakeLists.txt
@@ -26,6 +26,7 @@ add_llvm_library(LLVMSupport
   FoldingSet.cpp
   FormattedStream.cpp
   GraphWriter.cpp
+  Hashing.cpp
   IntEqClasses.cpp
   IntervalMap.cpp
   IntrusiveRefCntPtr.cpp
diff --git a/lib/VMCore/LLVMContextImpl.h b/lib/VMCore/LLVMContextImpl.h
index 9aba73781c4..a29bba2f3b4 100644
--- a/lib/VMCore/LLVMContextImpl.h
+++ b/lib/VMCore/LLVMContextImpl.h
@@ -119,10 +119,9 @@ struct AnonStructTypeKeyInfo {
     return DenseMapInfo<StructType*>::getTombstoneKey();
   }
   static unsigned getHashValue(const KeyTy& Key) {
-    GeneralHash Hash;
-    Hash.add(Key.ETypes);
-    Hash.add(Key.isPacked);
-    return Hash.finish();
+    return hash_combine(hash_combine_range(Key.ETypes.begin(),
+                                           Key.ETypes.end()),
+                        Key.isPacked);
   }
   static unsigned getHashValue(const StructType *ST) {
     return getHashValue(KeyTy(ST));
@@ -172,11 +171,10 @@ struct FunctionTypeKeyInfo {
     return DenseMapInfo<FunctionType*>::getTombstoneKey();
   }
   static unsigned getHashValue(const KeyTy& Key) {
-    GeneralHash Hash;
-    Hash.add(Key.ReturnType);
-    Hash.add(Key.Params);
-    Hash.add(Key.isVarArg);
-    return Hash.finish();
+    return hash_combine(Key.ReturnType,
+                        hash_combine_range(Key.Params.begin(),
+                                           Key.Params.end()),
+                        Key.isVarArg);
   }
   static unsigned getHashValue(const FunctionType *FT) {
     return getHashValue(KeyTy(FT));
diff --git a/unittests/ADT/HashingTest.cpp b/unittests/ADT/HashingTest.cpp
index 18bfb722f4a..1f4e4793fc9 100644
--- a/unittests/ADT/HashingTest.cpp
+++ b/unittests/ADT/HashingTest.cpp
@@ -13,45 +13,306 @@
 #include "gtest/gtest.h"
 #include "llvm/ADT/Hashing.h"
+#include "llvm/Support/DataTypes.h"
+#include <deque>
+#include <list>
+#include <map>
+#include <vector>
+
+namespace llvm {
+
+// Helper for test code to print hash codes.
+void PrintTo(const hash_code &code, std::ostream *os) {
+  *os << static_cast<size_t>(code);
+}
+
+// Fake an object that is recognized as hashable data to test super large
+// objects.
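+// The trait specialization below makes hash_combine treat LargeTestInteger's
+// raw bytes as hashable data rather than calling hash_value on it, which is
+// what lets these tests exercise the 64-byte buffer-straddling logic.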
+struct LargeTestInteger { uint64_t arr[8]; }; + +namespace hashing { +namespace detail { +template <> struct is_hashable_data : true_type {}; +} // namespace detail +} // namespace hashing + +} // namespace llvm using namespace llvm; namespace { -TEST(HashingTest, EmptyHashTest) { - GeneralHash Hash; - ASSERT_EQ(0u, Hash.finish()); +TEST(HashingTest, HashValueBasicTest) { + int x = 42, y = 43, c = 'x'; + void *p = 0; + uint64_t i = 71; + const unsigned ci = 71; + volatile int vi = 71; + const volatile int cvi = 71; + uintptr_t addr = reinterpret_cast(&y); + EXPECT_EQ(hash_value(42), hash_value(x)); + EXPECT_NE(hash_value(42), hash_value(y)); + EXPECT_NE(hash_value(42), hash_value(p)); + EXPECT_NE(hash_code::get_null_code(), hash_value(p)); + EXPECT_EQ(hash_value(71), hash_value(i)); + EXPECT_EQ(hash_value(71), hash_value(ci)); + EXPECT_EQ(hash_value(71), hash_value(vi)); + EXPECT_EQ(hash_value(71), hash_value(cvi)); + EXPECT_EQ(hash_value(c), hash_value('x')); + EXPECT_EQ(hash_value('4'), hash_value('0' + 4)); + EXPECT_EQ(hash_value(addr), hash_value(&y)); } -TEST(HashingTest, IntegerHashTest) { - ASSERT_TRUE(GeneralHash().add(1).finish() == GeneralHash().add(1).finish()); - ASSERT_TRUE(GeneralHash().add(1).finish() != GeneralHash().add(2).finish()); -} +template T *begin(T (&arr)[N]) { return arr; } +template T *end(T (&arr)[N]) { return arr + N; } + +// Provide a dummy, hashable type designed for easy verification: its hash is +// the same as its value. +struct HashableDummy { size_t value; }; +hash_code hash_value(HashableDummy dummy) { return dummy.value; } + +TEST(HashingTest, HashCombineRangeBasicTest) { + // Leave this uninitialized in the hope that valgrind will catch bad reads. + int dummy; + hash_code dummy_hash = hash_combine_range(&dummy, &dummy); + EXPECT_NE(hash_code::get_null_code(), dummy_hash); + EXPECT_NE(hash_code::get_invalid_code(), dummy_hash); + + const int arr1[] = { 1, 2, 3 }; + hash_code arr1_hash = hash_combine_range(begin(arr1), end(arr1)); + EXPECT_NE(hash_code::get_null_code(), arr1_hash); + EXPECT_NE(hash_code::get_invalid_code(), arr1_hash); + EXPECT_NE(dummy_hash, arr1_hash); + EXPECT_EQ(arr1_hash, hash_combine_range(begin(arr1), end(arr1))); -TEST(HashingTest, StringHashTest) { - ASSERT_TRUE( - GeneralHash().add("abc").finish() == GeneralHash().add("abc").finish()); - ASSERT_TRUE( - GeneralHash().add("abc").finish() != GeneralHash().add("abcd").finish()); + const std::vector vec(begin(arr1), end(arr1)); + EXPECT_EQ(arr1_hash, hash_combine_range(vec.begin(), vec.end())); + + const std::list list(begin(arr1), end(arr1)); + EXPECT_EQ(arr1_hash, hash_combine_range(list.begin(), list.end())); + + const std::deque deque(begin(arr1), end(arr1)); + EXPECT_EQ(arr1_hash, hash_combine_range(deque.begin(), deque.end())); + + const int arr2[] = { 3, 2, 1 }; + hash_code arr2_hash = hash_combine_range(begin(arr2), end(arr2)); + EXPECT_NE(hash_code::get_null_code(), arr2_hash); + EXPECT_NE(hash_code::get_invalid_code(), arr2_hash); + EXPECT_NE(dummy_hash, arr2_hash); + EXPECT_NE(arr1_hash, arr2_hash); + + const int arr3[] = { 1, 1, 2, 3 }; + hash_code arr3_hash = hash_combine_range(begin(arr3), end(arr3)); + EXPECT_NE(hash_code::get_null_code(), arr3_hash); + EXPECT_NE(hash_code::get_invalid_code(), arr3_hash); + EXPECT_NE(dummy_hash, arr3_hash); + EXPECT_NE(arr1_hash, arr3_hash); + + const int arr4[] = { 1, 2, 3, 3 }; + hash_code arr4_hash = hash_combine_range(begin(arr4), end(arr4)); + EXPECT_NE(hash_code::get_null_code(), arr4_hash); + 
EXPECT_NE(hash_code::get_invalid_code(), arr4_hash); + EXPECT_NE(dummy_hash, arr4_hash); + EXPECT_NE(arr1_hash, arr4_hash); + + const size_t arr5[] = { 1, 2, 3 }; + const HashableDummy d_arr5[] = { {1}, {2}, {3} }; + hash_code arr5_hash = hash_combine_range(begin(arr5), end(arr5)); + hash_code d_arr5_hash = hash_combine_range(begin(d_arr5), end(d_arr5)); + EXPECT_EQ(arr5_hash, d_arr5_hash); } -TEST(HashingTest, FloatHashTest) { - ASSERT_TRUE( - GeneralHash().add(1.0f).finish() == GeneralHash().add(1.0f).finish()); - ASSERT_TRUE( - GeneralHash().add(1.0f).finish() != GeneralHash().add(2.0f).finish()); +TEST(HashingTest, HashCombineRangeLengthDiff) { + // Test that as only the length varies, we compute different hash codes for + // sequences. + std::map code_to_size; + std::vector all_one_c(256, '\xff'); + for (unsigned Idx = 1, Size = all_one_c.size(); Idx < Size; ++Idx) { + hash_code code = hash_combine_range(&all_one_c[0], &all_one_c[0] + Idx); + std::map::iterator + I = code_to_size.insert(std::make_pair(code, Idx)).first; + EXPECT_EQ(Idx, I->second); + } + code_to_size.clear(); + std::vector all_zero_c(256, '\0'); + for (unsigned Idx = 1, Size = all_zero_c.size(); Idx < Size; ++Idx) { + hash_code code = hash_combine_range(&all_zero_c[0], &all_zero_c[0] + Idx); + std::map::iterator + I = code_to_size.insert(std::make_pair(code, Idx)).first; + EXPECT_EQ(Idx, I->second); + } + code_to_size.clear(); + std::vector all_one_int(512, -1); + for (unsigned Idx = 1, Size = all_one_int.size(); Idx < Size; ++Idx) { + hash_code code = hash_combine_range(&all_one_int[0], &all_one_int[0] + Idx); + std::map::iterator + I = code_to_size.insert(std::make_pair(code, Idx)).first; + EXPECT_EQ(Idx, I->second); + } + code_to_size.clear(); + std::vector all_zero_int(512, 0); + for (unsigned Idx = 1, Size = all_zero_int.size(); Idx < Size; ++Idx) { + hash_code code = hash_combine_range(&all_zero_int[0], &all_zero_int[0] + Idx); + std::map::iterator + I = code_to_size.insert(std::make_pair(code, Idx)).first; + EXPECT_EQ(Idx, I->second); + } } -TEST(HashingTest, DoubleHashTest) { - ASSERT_TRUE(GeneralHash().add(1.).finish() == GeneralHash().add(1.).finish()); - ASSERT_TRUE(GeneralHash().add(1.).finish() != GeneralHash().add(2.).finish()); +TEST(HashingTest, HashCombineRangeGoldenTest) { + struct { const char *s; uint64_t hash; } golden_data[] = { + { "a", 0xaeb6f9d5517c61f8ULL }, + { "ab", 0x7ab1edb96be496b4ULL }, + { "abc", 0xe38e60bf19c71a3fULL }, + { "abcde", 0xd24461a66de97f6eULL }, + { "abcdefgh", 0x4ef872ec411dec9dULL }, + { "abcdefghijklm", 0xe8a865539f4eadfeULL }, + { "abcdefghijklmnopqrstu", 0x261cdf85faaf4e79ULL }, + { "abcdefghijklmnopqrstuvwxyzabcdef", 0x43ba70e4198e3b2aULL }, + { "abcdefghijklmnopqrstuvwxyzabcdef" + "abcdefghijklmnopqrstuvwxyzghijkl" + "abcdefghijklmnopqrstuvwxyzmnopqr" + "abcdefghijklmnopqrstuvwxyzstuvwx" + "abcdefghijklmnopqrstuvwxyzyzabcd", 0xdcd57fb2afdf72beULL }, + { "a", 0xaeb6f9d5517c61f8ULL }, + { "aa", 0xf2b3b69a9736a1ebULL }, + { "aaa", 0xf752eb6f07b1cafeULL }, + { "aaaaa", 0x812bd21e1236954cULL }, + { "aaaaaaaa", 0xff07a2cff08ac587ULL }, + { "aaaaaaaaaaaaa", 0x84ac949d54d704ecULL }, + { "aaaaaaaaaaaaaaaaaaaaa", 0xcb2c8fb6be8f5648ULL }, + { "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", 0xcc40ab7f164091b6ULL }, + { "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", 0xc58e174c1e78ffe9ULL }, + { "z", 0x1ba160d7e8f8785cULL }, + { "zz", 0x2c5c03172f1285d7ULL 
}, + { "zzz", 0x9d2c4f4b507a2ac3ULL }, + { "zzzzz", 0x0f03b9031735693aULL }, + { "zzzzzzzz", 0xe674147c8582c08eULL }, + { "zzzzzzzzzzzzz", 0x3162d9fa6938db83ULL }, + { "zzzzzzzzzzzzzzzzzzzzz", 0x37b9a549e013620cULL }, + { "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz", 0x8921470aff885016ULL }, + { "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" + "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" + "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" + "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" + "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz", 0xf60fdcd9beb08441ULL }, + { "a", 0xaeb6f9d5517c61f8ULL }, + { "ab", 0x7ab1edb96be496b4ULL }, + { "aba", 0x3edb049950884d0aULL }, + { "ababa", 0x8f2de9e73a97714bULL }, + { "abababab", 0xee14a29ddf0ce54cULL }, + { "ababababababa", 0x38b3ddaada2d52b4ULL }, + { "ababababababababababa", 0xd3665364219f2b85ULL }, + { "abababababababababababababababab" + "abababababababababababababababab" + "abababababababababababababababab" + "abababababababababababababababab" + "abababababababababababababababab", 0x840192d129f7a22bULL } + }; + for (unsigned i = 0; i < sizeof(golden_data)/sizeof(*golden_data); ++i) { + StringRef str = golden_data[i].s; + hash_code hash = hash_combine_range(str.begin(), str.end()); +#if 0 // Enable this to generate paste-able text for the above structure. + std::string member_str = "\"" + str.str() + "\","; + fprintf(stderr, " { %-35s 0x%016lxULL },\n", + member_str.c_str(), (size_t)hash); +#endif + EXPECT_EQ(static_cast(golden_data[i].hash), + static_cast(hash)); + } } -TEST(HashingTest, IntegerArrayHashTest) { - int a[] = { 1, 2 }; - int b[] = { 1, 3 }; - ASSERT_TRUE(GeneralHash().add(a).finish() == GeneralHash().add(a).finish()); - ASSERT_TRUE(GeneralHash().add(a).finish() != GeneralHash().add(b).finish()); +TEST(HashingTest, HashCombineBasicTest) { + // Hashing a sequence of homogenous types matches range hashing. + const int i1 = 42, i2 = 43, i3 = 123, i4 = 999, i5 = 0, i6 = 79; + const int arr1[] = { i1, i2, i3, i4, i5, i6 }; + EXPECT_EQ(hash_combine_range(arr1, arr1 + 1), hash_combine(i1)); + EXPECT_EQ(hash_combine_range(arr1, arr1 + 2), hash_combine(i1, i2)); + EXPECT_EQ(hash_combine_range(arr1, arr1 + 3), hash_combine(i1, i2, i3)); + EXPECT_EQ(hash_combine_range(arr1, arr1 + 4), hash_combine(i1, i2, i3, i4)); + EXPECT_EQ(hash_combine_range(arr1, arr1 + 5), + hash_combine(i1, i2, i3, i4, i5)); + EXPECT_EQ(hash_combine_range(arr1, arr1 + 6), + hash_combine(i1, i2, i3, i4, i5, i6)); + + // Hashing a sequence of heterogenous types which *happen* to all produce the + // same data for hashing produces the same as a range-based hash of the + // fundamental values. + const size_t s1 = 1024, s2 = 8888, s3 = 9000000; + const HashableDummy d1 = { 1024 }, d2 = { 8888 }, d3 = { 9000000 }; + const size_t arr2[] = { s1, s2, s3 }; + EXPECT_EQ(hash_combine_range(begin(arr2), end(arr2)), + hash_combine(s1, s2, s3)); + EXPECT_EQ(hash_combine(s1, s2, s3), hash_combine(s1, s2, d3)); + EXPECT_EQ(hash_combine(s1, s2, s3), hash_combine(s1, d2, s3)); + EXPECT_EQ(hash_combine(s1, s2, s3), hash_combine(d1, s2, s3)); + EXPECT_EQ(hash_combine(s1, s2, s3), hash_combine(d1, d2, s3)); + EXPECT_EQ(hash_combine(s1, s2, s3), hash_combine(d1, d2, d3)); + + // Permuting values causes hashes to change. 
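+  // (Each case below either swaps two argument positions or changes a single
+  // value; an order-insensitive combiner, e.g. a plain xor of the per-value
+  // hashes, would fail several of these checks.)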
+ EXPECT_NE(hash_combine(i1, i1, i1), hash_combine(i1, i1, i2)); + EXPECT_NE(hash_combine(i1, i1, i1), hash_combine(i1, i2, i1)); + EXPECT_NE(hash_combine(i1, i1, i1), hash_combine(i2, i1, i1)); + EXPECT_NE(hash_combine(i1, i1, i1), hash_combine(i2, i2, i1)); + EXPECT_NE(hash_combine(i1, i1, i1), hash_combine(i2, i2, i2)); + EXPECT_NE(hash_combine(i2, i1, i1), hash_combine(i1, i1, i2)); + EXPECT_NE(hash_combine(i1, i1, i2), hash_combine(i1, i2, i1)); + EXPECT_NE(hash_combine(i1, i2, i1), hash_combine(i2, i1, i1)); + + // Changing type w/o changing value causes hashes to change. + EXPECT_NE(hash_combine(i1, i2, i3), hash_combine((char)i1, i2, i3)); + EXPECT_NE(hash_combine(i1, i2, i3), hash_combine(i1, (char)i2, i3)); + EXPECT_NE(hash_combine(i1, i2, i3), hash_combine(i1, i2, (char)i3)); + + // This is array of uint64, but it should have the exact same byte pattern as + // an array of LargeTestIntegers. + const uint64_t bigarr[] = { + 0xaaaaaaaaababababULL, 0xacacacacbcbcbcbcULL, 0xccddeeffeeddccbbULL, + 0xdeadbeafdeadbeefULL, 0xfefefefededededeULL, 0xafafafafededededULL, + 0xffffeeeeddddccccULL, 0xaaaacbcbffffababULL, + 0xaaaaaaaaababababULL, 0xacacacacbcbcbcbcULL, 0xccddeeffeeddccbbULL, + 0xdeadbeafdeadbeefULL, 0xfefefefededededeULL, 0xafafafafededededULL, + 0xffffeeeeddddccccULL, 0xaaaacbcbffffababULL, + 0xaaaaaaaaababababULL, 0xacacacacbcbcbcbcULL, 0xccddeeffeeddccbbULL, + 0xdeadbeafdeadbeefULL, 0xfefefefededededeULL, 0xafafafafededededULL, + 0xffffeeeeddddccccULL, 0xaaaacbcbffffababULL + }; + // Hash a preposterously large integer, both aligned with the buffer and + // misaligned. + const LargeTestInteger li = { { + 0xaaaaaaaaababababULL, 0xacacacacbcbcbcbcULL, 0xccddeeffeeddccbbULL, + 0xdeadbeafdeadbeefULL, 0xfefefefededededeULL, 0xafafafafededededULL, + 0xffffeeeeddddccccULL, 0xaaaacbcbffffababULL + } }; + // Rotate the storage from 'li'. + const LargeTestInteger l2 = { { + 0xacacacacbcbcbcbcULL, 0xccddeeffeeddccbbULL, 0xdeadbeafdeadbeefULL, + 0xfefefefededededeULL, 0xafafafafededededULL, 0xffffeeeeddddccccULL, + 0xaaaacbcbffffababULL, 0xaaaaaaaaababababULL + } }; + const LargeTestInteger l3 = { { + 0xccddeeffeeddccbbULL, 0xdeadbeafdeadbeefULL, 0xfefefefededededeULL, + 0xafafafafededededULL, 0xffffeeeeddddccccULL, 0xaaaacbcbffffababULL, + 0xaaaaaaaaababababULL, 0xacacacacbcbcbcbcULL + } }; + EXPECT_EQ(hash_combine_range(begin(bigarr), end(bigarr)), + hash_combine(li, li, li)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 9), + hash_combine(bigarr[0], l2)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 10), + hash_combine(bigarr[0], bigarr[1], l3)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 17), + hash_combine(li, bigarr[0], l2)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 18), + hash_combine(li, bigarr[0], bigarr[1], l3)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 18), + hash_combine(bigarr[0], l2, bigarr[9], l3)); + EXPECT_EQ(hash_combine_range(bigarr, bigarr + 20), + hash_combine(bigarr[0], l2, bigarr[9], l3, bigarr[18], bigarr[19])); } }