1. 22 Dec, 2022 2 commits
  2. 21 Dec, 2022 3 commits
  3. 20 Dec, 2022 2 commits
  4. 19 Dec, 2022 5 commits
  5. 17 Dec, 2022 1 commit
  6. 16 Dec, 2022 2 commits
  7. 15 Dec, 2022 2 commits
    • Fix the documentation of the atomic_hook.h registration functions to correctly state that only the first registered hook will be honored. · d241d919
      
      The comments implying otherwise were never true; they were a leftover artifact from the initial development of the feature.
      
      Also remove a TODO() I gave myself years ago; this is never going to happen and isn't worth the bother.
      
      PiperOrigin-RevId: 495687371
      Change-Id: I63f8ef57d659075bf290caae0617ea61ceb2c1db
      Greg Falcon committed
    • Add the ability to turn on warnings that get disabled in tests in GCC and LLVM build configs · e2416566
      
      This was accomplished by adding GccStyleFilterAndCombine() in copts.py.
      
      Previously, if we had a default warning of the form -Wwarning, adding
      -Wno-warning to the list of test warnings would just add conflicting
      flags. We now filter out -Wwarning when -Wno-warning is added to the test warnings.
      
      PiperOrigin-RevId: 495683815
      Change-Id: I5dfd8a30b0be09d6b48237f61d598230ab9027db
      Derek Mauro committed
  8. 14 Dec, 2022 1 commit
  9. 13 Dec, 2022 2 commits
    • Prevent all CHECK functions from expanding macros for the error string. · a13ef44b
      This was likely an unintentional behavior change made a while ago while trying to reduce duplication.  The new behavior will always include the unexpanded macro in the error string.  For example, `CHECK_EQ(MACRO(x), MACRO(y))` will now output "MACRO(x) == MACRO(y)" if it fails.  Before this change, CHECK and QCHECK were the only macros that had this behavior.
      
      Not using function-like macro aliases is a possible alternative here, but unfortunately that would flood the macro namespace downstream with CHECK* and break existing code.
      
      PiperOrigin-RevId: 495138582
      Change-Id: I6a1afd89a6b9334003362e5d3e55da68f86eec98
      Mike Kruskal committed
    • Add prefetch to crc32 · 4cb6c389
      We already prefetch for large inputs; this change does the same
      for medium-sized inputs as well. Prefetching is mostly
      performance-neutral in those cases, so this change also adds a new
      benchmark with a working size much larger than the cache size to
      confirm that we see the performance benefits of prefetching.
      The main gains are on AMD with the hardware prefetchers turned off:
      
      AMD prefetchers on:
      name                           old time/op  new time/op  delta
      BM_Calculate/0                 2.43ns ± 1%  2.43ns ± 1%     ~     (p=0.814 n=40+40)
      BM_Calculate/1                 2.50ns ± 2%  2.50ns ± 2%     ~     (p=0.745 n=39+39)
      BM_Calculate/100               9.17ns ± 1%  9.17ns ± 2%     ~     (p=0.747 n=40+40)
      BM_Calculate/10000              474ns ± 1%   474ns ± 2%     ~     (p=0.749 n=40+40)
      BM_Calculate/500000            22.8µs ± 1%  22.9µs ± 2%     ~     (p=0.298 n=39+40)
      BM_Extend/0                    1.38ns ± 1%  1.38ns ± 1%     ~     (p=0.651 n=40+40)
      BM_Extend/1                    1.53ns ± 2%  1.53ns ± 1%     ~     (p=0.957 n=40+39)
      BM_Extend/100                  9.48ns ± 1%  9.48ns ± 2%     ~     (p=1.000 n=40+40)
      BM_Extend/10000                 474ns ± 2%   474ns ± 1%     ~     (p=0.928 n=40+40)
      BM_Extend/500000               22.8µs ± 1%  22.9µs ± 2%     ~     (p=0.331 n=40+40)
      BM_Extend/100000000            4.79ms ± 1%  4.79ms ± 1%     ~     (p=0.753 n=38+38)
      BM_ExtendCacheMiss/10          25.5ms ± 2%  25.5ms ± 2%     ~     (p=0.988 n=38+40)
      BM_ExtendCacheMiss/100         23.1ms ± 2%  23.1ms ± 2%     ~     (p=0.792 n=40+40)
      BM_ExtendCacheMiss/1000        37.2ms ± 1%  28.6ms ± 2%  -23.00%  (p=0.000 n=38+40)
      BM_ExtendCacheMiss/100000      7.77ms ± 2%  7.74ms ± 2%   -0.45%  (p=0.006 n=40+40)
      
      AMD prefetchers off:
      name                           old time/op  new time/op  delta
      BM_Calculate/0                 2.43ns ± 2%  2.43ns ± 2%     ~     (p=0.351 n=40+39)
      BM_Calculate/1                 2.51ns ± 2%  2.51ns ± 1%     ~     (p=0.535 n=40+40)
      BM_Calculate/100               9.18ns ± 2%  9.15ns ± 2%     ~     (p=0.120 n=38+39)
      BM_Calculate/10000              475ns ± 2%   475ns ± 2%     ~     (p=0.852 n=40+40)
      BM_Calculate/500000            22.9µs ± 2%  22.8µs ± 2%     ~     (p=0.396 n=40+40)
      BM_Extend/0                    1.38ns ± 2%  1.38ns ± 2%     ~     (p=0.466 n=40+40)
      BM_Extend/1                    1.53ns ± 2%  1.53ns ± 2%     ~     (p=0.914 n=40+39)
      BM_Extend/100                  9.49ns ± 2%  9.49ns ± 2%     ~     (p=0.802 n=40+40)
      BM_Extend/10000                 475ns ± 2%   474ns ± 1%     ~     (p=0.589 n=40+40)
      BM_Extend/500000               22.8µs ± 2%  22.8µs ± 2%     ~     (p=0.872 n=39+40)
      BM_Extend/100000000            10.0ms ± 3%  10.0ms ± 4%     ~     (p=0.355 n=40+40)
      BM_ExtendCacheMiss/10           196ms ± 2%   196ms ± 2%     ~     (p=0.698 n=40+40)
      BM_ExtendCacheMiss/100          129ms ± 1%   129ms ± 1%     ~     (p=0.602 n=36+37)
      BM_ExtendCacheMiss/1000        88.6ms ± 1%  57.2ms ± 1%  -35.49%  (p=0.000 n=36+38)
      BM_ExtendCacheMiss/100000      14.9ms ± 1%  14.9ms ± 1%     ~     (p=0.888 n=39+40)
      
      Intel skylake:
      name                           old time/op  new time/op  delta
      BM_Calculate/0                 2.49ns ± 2%  2.44ns ± 4%  -2.15%  (p=0.001 n=31+34)
      BM_Calculate/1                 3.04ns ± 2%  2.98ns ± 9%  -1.95%  (p=0.003 n=31+35)
      BM_Calculate/100               8.64ns ± 3%  8.53ns ± 5%    ~     (p=0.065 n=31+35)
      BM_Calculate/10000              290ns ± 3%   285ns ± 7%  -1.80%  (p=0.004 n=28+34)
      BM_Calculate/500000            11.8µs ± 2%  11.6µs ± 8%  -1.59%  (p=0.003 n=26+34)
      BM_Extend/0                    1.56ns ± 1%  1.52ns ± 3%  -2.44%  (p=0.000 n=26+35)
      BM_Extend/1                    1.88ns ± 3%  1.83ns ± 6%  -2.17%  (p=0.001 n=27+35)
      BM_Extend/100                  9.31ns ± 3%  9.13ns ± 7%  -1.92%  (p=0.000 n=33+38)
      BM_Extend/10000                 290ns ± 3%   283ns ± 3%  -2.45%  (p=0.000 n=32+38)
      BM_Extend/500000               11.8µs ± 2%  11.5µs ± 8%  -1.80%  (p=0.001 n=35+37)
      BM_Extend/100000000            6.39ms ±10%  6.11ms ± 8%  -4.34%  (p=0.000 n=40+40)
      BM_ExtendCacheMiss/10          36.2ms ± 7%  35.8ms ±14%    ~     (p=0.281 n=33+37)
      BM_ExtendCacheMiss/100         26.9ms ±15%  25.9ms ±12%  -3.93%  (p=0.000 n=40+40)
      BM_ExtendCacheMiss/1000        23.8ms ± 5%  23.4ms ± 5%  -1.68%  (p=0.001 n=39+40)
      BM_ExtendCacheMiss/100000      10.1ms ± 5%  10.0ms ± 4%    ~     (p=0.051 n=39+39)
      
      PiperOrigin-RevId: 495119444
      Change-Id: I67bcf3b0282b5e1c43122de2837a24c16b8aded7
      Ilya Tokar committed
  10. 12 Dec, 2022 4 commits
  11. 10 Dec, 2022 1 commit
  12. 09 Dec, 2022 1 commit
  13. 08 Dec, 2022 5 commits
    • Fix some ClangTidy warnings in raw_hash_set code. · 522606b7
      PiperOrigin-RevId: 493993005
      Change-Id: I0705be8678022a9e08a1af9972687b7955593994
      Evan Brown committed
    • Fixing macro expansion changes in new logging macros. · ec583f2d
      This was an unintentional behavior change when we added a new layer of macros.  Not using function-like macro aliases would get around this, but unfortunately that would flood the macro namespace downstream with CHECK and LOG (and break existing code).
      
      Note that the old behavior applied only to CHECK and QCHECK. The other CHECK macros already had multiple layers of function-like macros and were unaffected.
      
      PiperOrigin-RevId: 493984662
      Change-Id: I9a050dcaf01f2b6935f02cd42e23bc3a4d5fc62a
      Mike Kruskal committed
    • Eliminate AArch64-specific code paths from LowLevelHash · c353e259
      After internal investigation, it’s no longer clear that the alternative
      LowLevelHash mixer committed in a05366d8
      unequivocally improves performance on AArch64. It unnecessarily reduces
      performance on Apple Silicon and the AWS Graviton. It also lowers hash
      quality, which offsets much of the performance gain it provides on the
      Arm Neoverse N1 (see https://github.com/abseil/abseil-cpp/issues/1093).
      Switch back to the original mixer.
      
      Closes: https://github.com/abseil/abseil-cpp/issues/1093
      PiperOrigin-RevId: 493941913
      Change-Id: I84c789b2f88c91dec22f6f0f6e8c5129d2939a6f
      Benjamin Barenblat committed
    • Change CommonFields from a private base class of raw_hash_set to be the first member of the settings_ CompressedTuple so that we can move growth_left into CommonFields. · 523b8699
      
      This allows for removing growth_left as a separate argument for a few functions.
      
      Also, move the infoz() accessor functions to be before the data members of CommonFields to comply with the style guide.
      
      PiperOrigin-RevId: 493918310
      Change-Id: I58474e37d3b16a1513d2931af6b153dea1d809c2
      Evan Brown committed
    • The abridged justification is as follows: · 2e177685
      -   The deadlock seems to occur if flag initialization happens to occur while a sample is being created.
          -   Each sample has its own mutex that is locked when a new sample is registered, i.e. created for the first time.
          -   The flag implicitly creates a global sampler object which locks `graveyard_`'s mutex.
      -   Usually, in `PushDead`, the `graveyard` is locked before the sample; the reversed order during sample registration therefore triggers deadlock detection.
      -   This lock order can never be recreated since this code is executed exactly once per sample object, and the sample object cannot be accessed until after the method returns.
      -   It should therefore be safe to ignore any locking order condition that may occur during sample creation.
      
      PiperOrigin-RevId: 493901903
      Change-Id: I094abca82c1a8a82ac392383c72469d68eef09c4
      Abseil Team committed
  14. 07 Dec, 2022 3 commits
  15. 06 Dec, 2022 4 commits
  16. 05 Dec, 2022 2 commits