Commit a9a49560 by Abseil Team, committed by CJ Johnson

Export of internal Abseil changes

--
c68f1886f5e8fd90eb0c2d2e68feaf00a7cdacda by CJ Johnson <johnsoncj@google.com>:

Introduce absl::Cleanup to the OSS repo

PiperOrigin-RevId: 354583156

--
17030cf388e10f7eb959e3e566326d1072ce392e by Abseil Team <absl-team@google.com>:

Internal change only

PiperOrigin-RevId: 354574953

--
e979d7236d4f3252e79ddda6739b67a9a326bf6d by CJ Johnson <johnsoncj@google.com>:

Internal change

PiperOrigin-RevId: 354545297

--
7ea02b3783f7f49ef97d86a8f6580a19cc57df14 by Abseil Team <absl-team@google.com>:

Pre-allocate memory for vectors where the size is known.

PiperOrigin-RevId: 354344576

--
9246c7cb11f1d6444f79ebe25acc69a8a9b870e0 by Matt Kulukundis <kfm@google.com>:

Add support for Elbrus 2000 (e2k)

Import of https://github.com/abseil/abseil-cpp/pull/889

PiperOrigin-RevId: 354344013

--
0fc93d359cc1fb307552e917b37b7b2e7eed822f by Abseil Team <absl-team@google.com>:

Integrate CordRepRing logic into cord (but do not enable it)

PiperOrigin-RevId: 354312238

--
eda05622f7da71466723acb33403f783529df24b by Abseil Team <absl-team@google.com>:

Protect ignore diagnostic with "__has_warning".

PiperOrigin-RevId: 354112334

--
47716c5d8fb10efa4fdd801d28bac414c6f8ec32 by Abseil Team <absl-team@google.com>:

Rearrange InlinedVector copy constructor and destructor to treat
a few special cases inline and then tail-call a non-inlined routine
for the rest.  In particular, we optimize for empty vectors in both
cases.

Added a couple of benchmarks that copy either an InlVec<int64> or
an InlVec<InlVec<int64>>.

Speed difference:
```
BM_CopyTrivial/0                                    0.92ns +- 0%   0.47ns +- 0%  -48.91%  (p=0.000 n=11+12)
BM_CopyTrivial/1                                    0.92ns +- 0%   1.15ns +- 0%  +25.00%  (p=0.000 n=10+9)
BM_CopyTrivial/8                                    8.57ns +- 0%  10.72ns +- 1%  +25.16%  (p=0.000 n=10+12)
BM_CopyNonTrivial/0                                 3.21ns +- 0%   0.70ns +- 0%  -78.23%  (p=0.000 n=12+10)
BM_CopyNonTrivial/1                                 5.88ns +- 1%   5.51ns +- 0%   -6.28%  (p=0.000 n=10+8)
BM_CopyNonTrivial/8                                 21.5ns +- 1%   15.2ns +- 2%  -29.23%  (p=0.000 n=12+12)
```

Note: the slowdowns are a few cycles, which is expected given the procedure
call added in that case. We decided this is a good tradeoff given the code
size reductions and the more significant speedups for empty vectors.

Size difference (as measured by nm):
```
BM_CopyTrivial     from 1048 bytes to 326 bytes.
BM_CopyNonTrivial  from  749 bytes to 470 bytes.
```

Code size for a large binary drops by ~500KB (from 349415719 to 348906015).

All of the benchmarks that showed a significant difference:

Ones that improve with this CL:
```
BM_CopyNonTrivial/0                                 3.21ns +- 0%   0.70ns +- 0%  -78.23%  (p=0.000 n=12+10)
BM_InlinedVectorFillString/0                        0.93ns +- 0%   0.24ns +- 0%  -74.19%  (p=0.000 n=12+10)
BM_InlinedVectorAssignments/1                       10.5ns +- 0%    4.1ns +- 0%  -60.64%  (p=0.000 n=11+10)
BM_InlinedVectorAssignments/2                       10.7ns +- 0%    4.4ns +- 0%  -59.08%  (p=0.000 n=11+11)
BM_CopyTrivial/0                                    0.92ns +- 0%   0.47ns +- 0%  -48.91%  (p=0.000 n=11+12)
BM_CopyNonTrivial/8                                 21.5ns +- 1%   15.2ns +- 2%  -29.23%  (p=0.000 n=12+12)
BM_StdVectorEmpty                                   0.47ns +- 1%   0.35ns +- 0%  -24.73%  (p=0.000 n=12+12)
BM_StdVectorSize                                    0.46ns +- 2%   0.35ns +- 0%  -24.32%  (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableOnly>/0                3.44ns +- 0%   2.76ns +- 1%  -19.83%  (p=0.000 n=11+11)
BM_InlinedVectorFillRange/256                       20.7ns +- 1%   17.8ns +- 0%  -14.08%  (p=0.000 n=12+9)
BM_CopyNonTrivial/1                                 5.88ns +- 1%   5.51ns +- 0%   -6.28%  (p=0.000 n=10+8)
BM_SwapElements<LargeCopyableMovable>/1             4.19ns +- 0%   3.95ns +- 1%   -5.63%  (p=0.000 n=11+12)
BM_SwapElements<LargeCopyableMovableSwappable>/1    4.18ns +- 0%   3.99ns +- 0%   -4.70%  (p=0.000 n=9+11)
BM_SwapElements<LargeCopyableMovable>/0             2.41ns +- 0%   2.31ns +- 0%   -4.45%  (p=0.000 n=12+12)
BM_InlinedVectorFillRange/64                        8.25ns +- 0%   8.04ns +- 0%   -2.51%  (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableOnly>/1                82.4ns +- 0%   81.5ns +- 0%   -1.06%  (p=0.000 n=12+12)
```

Ones that get worse with this CL:
```
BM_CopyTrivial/1                                    0.92ns +- 0%   1.15ns +- 0%  +25.00%  (p=0.000 n=10+9)
BM_CopyTrivial/8                                    8.57ns +- 0%  10.72ns +- 1%  +25.16%  (p=0.000 n=10+12)
BM_SwapElements<LargeCopyableMovableSwappable>/512  1.48ns +- 1%   1.66ns +- 1%  +11.88%  (p=0.000 n=12+12)
BM_InlinedVectorFillString/1                        11.5ns +- 0%   12.8ns +- 1%  +11.62%  (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableMovableSwappable>/64   1.48ns +- 2%   1.66ns +- 1%  +11.66%  (p=0.000 n=12+11)
BM_SwapElements<LargeCopyableMovableSwappable>/1k   1.48ns +- 1%   1.65ns +- 2%  +11.32%  (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableMovable>/512           1.48ns +- 2%   1.58ns +- 4%   +6.62%  (p=0.000 n=11+12)
BM_SwapElements<LargeCopyableMovable>/1k            1.49ns +- 2%   1.58ns +- 3%   +6.05%  (p=0.000 n=12+12)
BM_SwapElements<LargeCopyableMovable>/64            1.48ns +- 2%   1.57ns +- 4%   +6.04%  (p=0.000 n=11+12)
BM_InlinedVectorFillRange/1                         4.81ns +- 0%   5.05ns +- 0%   +4.83%  (p=0.000 n=11+11)
BM_InlinedVectorFillString/8                        79.4ns +- 1%   83.1ns +- 1%   +4.64%  (p=0.000 n=10+12)
BM_StdVectorFillString/1                            16.3ns +- 0%   16.6ns +- 0%   +2.13%  (p=0.000 n=11+8)
```

PiperOrigin-RevId: 353906786

--
8e26518b3cec9c598e5e9573c46c3bd1b03a67ef by Abseil Team <absl-team@google.com>:

Internal change

PiperOrigin-RevId: 353737330

--
f206ae0983e58c9904ed8b8f05f9caf564a446be by Matt Kulukundis <kfm@google.com>:

Import of CCTZ from GitHub.

PiperOrigin-RevId: 353682256
GitOrigin-RevId: c68f1886f5e8fd90eb0c2d2e68feaf00a7cdacda
Change-Id: I5790c1036c4f543c701d1039848fabf7ae881ad8
parent af39e133
...@@ -60,6 +60,8 @@ set(ABSL_INTERNAL_DLL_FILES
   "base/policy_checks.h"
   "base/port.h"
   "base/thread_annotations.h"
+  "cleanup/cleanup.h"
+  "cleanup/internal/cleanup.h"
   "container/btree_map.h"
   "container/btree_set.h"
   "container/fixed_array.h"
......
...@@ -72,6 +72,9 @@ Abseil contains the following C++ library components:
 * [`algorithm`](absl/algorithm/)
   <br /> The `algorithm` library contains additions to the C++ `<algorithm>`
   library and container-based versions of such algorithms.
+* [`cleanup`](absl/cleanup/)
+  <br /> The `cleanup` library contains the control-flow-construct-like type
+  `absl::Cleanup` which is used for executing a callback on scope exit.
 * [`container`](absl/container/)
   <br /> The `container` library contains additional STL-style containers,
   including Abseil's unordered "Swiss table" containers.
......
...@@ -722,4 +722,13 @@ static_assert(ABSL_INTERNAL_INLINE_NAMESPACE_STR[0] != 'h' ||
 #define ABSL_HAVE_ADDRESS_SANITIZER 1
 #endif
+
+// ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION
+//
+// Class template argument deduction is a language feature added in C++17.
+#ifdef ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION
+#error "ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION cannot be directly set."
+#elif defined(__cpp_deduction_guides)
+#define ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION 1
+#endif

 #endif  // ABSL_BASE_CONFIG_H_
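The `__cpp_deduction_guides` check above is the same feature test that gates the `absl::Cleanup` deduction guide later in this commit. A minimal stand-alone sketch of the pattern (the `Guard` type here is hypothetical; only the macro usage mirrors the real code):

```cpp
#include <cassert>
#include <utility>

// Hypothetical guard type: with C++17 class template argument deduction the
// template argument is inferred from the constructor argument, so callers can
// write `Guard g(callback);` with no explicit template parameter.
template <typename Callback>
class Guard {
 public:
  explicit Guard(Callback cb) : cb_(std::move(cb)) {}
  ~Guard() { std::move(cb_)(); }  // run the callback on scope exit

 private:
  Callback cb_;
};

#if defined(__cpp_deduction_guides)
// Same shape as the deduction guide declared for `absl::Cleanup` below:
// available only when the compiler reports C++17 deduction-guide support.
template <typename Callback>
Guard(Callback) -> Guard<Callback>;
#endif
```

Guarding the guide (rather than the whole class) keeps the type usable from C++11 via explicit template arguments or a factory function.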
...@@ -92,6 +92,7 @@ static void TestFunction(int thread_salt, SpinLock* spinlock) {
 static void ThreadedTest(SpinLock* spinlock) {
   std::vector<std::thread> threads;
+  threads.reserve(kNumThreads);
   for (int i = 0; i < kNumThreads; ++i) {
     threads.push_back(std::thread(TestFunction, i, spinlock));
   }
......
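The hunk above applies the "pre-allocate memory for vectors where the size is known" change from this export: a single up-front `reserve` removes the grow-and-move reallocations that repeated `push_back` would otherwise trigger. A stand-alone sketch of the effect (the helper name is illustrative, not Abseil's):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Counts how many times the vector's buffer moves while push_back-ing `n`
// elements. After reserve(n), the standard guarantees no reallocation until
// size exceeds the reserved capacity, so the count stays at zero.
std::size_t CountBufferMoves(std::size_t n, bool reserve_first) {
  std::vector<int> v;
  if (reserve_first) v.reserve(n);
  const int* last = v.data();
  std::size_t moves = 0;
  for (std::size_t i = 0; i < n; ++i) {
    v.push_back(static_cast<int>(i));
    if (v.data() != last) {  // buffer was reallocated and elements moved
      ++moves;
      last = v.data();
    }
  }
  return moves;
}
```

For `std::thread` elements, as in the spinlock test, avoiding reallocation also avoids moving every already-constructed thread handle on each growth step.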
# Copyright 2021 The Abseil Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
load("@rules_cc//cc:defs.bzl", "cc_library", "cc_test")
load(
"//absl:copts/configure_copts.bzl",
"ABSL_DEFAULT_COPTS",
"ABSL_DEFAULT_LINKOPTS",
"ABSL_TEST_COPTS",
)
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
cc_library(
name = "cleanup_internal",
hdrs = ["internal/cleanup.h"],
copts = ABSL_DEFAULT_COPTS,
linkopts = ABSL_DEFAULT_LINKOPTS,
deps = [
"//absl/base:base_internal",
"//absl/base:core_headers",
"//absl/utility",
],
)
cc_library(
name = "cleanup",
hdrs = [
"cleanup.h",
],
copts = ABSL_DEFAULT_COPTS,
linkopts = ABSL_DEFAULT_LINKOPTS,
deps = [
":cleanup_internal",
"//absl/base:config",
"//absl/base:core_headers",
],
)
cc_test(
name = "cleanup_test",
size = "small",
srcs = [
"cleanup_test.cc",
],
copts = ABSL_TEST_COPTS,
deps = [
":cleanup",
"//absl/base:config",
"//absl/utility",
"@com_google_googletest//:gtest_main",
],
)
# Copyright 2021 The Abseil Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
absl_cc_library(
NAME
cleanup_internal
HDRS
"internal/cleanup.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::base_internal
absl::core_headers
absl::utility
PUBLIC
)
absl_cc_library(
NAME
cleanup
HDRS
"cleanup.h"
COPTS
${ABSL_DEFAULT_COPTS}
DEPS
absl::cleanup_internal
absl::config
absl::core_headers
PUBLIC
)
absl_cc_test(
NAME
cleanup_test
SRCS
"cleanup_test.cc"
COPTS
${ABSL_TEST_COPTS}
DEPS
absl::cleanup
absl::config
absl::utility
gmock_main
)
// Copyright 2021 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// -----------------------------------------------------------------------------
// File: cleanup.h
// -----------------------------------------------------------------------------
//
// `absl::Cleanup` implements the scope guard idiom, invoking `operator()() &&`
// on the callback it was constructed with, on scope exit.
//
// Example:
//
// ```
// void CopyGoodData(const char* input_path, const char* output_path) {
// FILE* in_file = fopen(input_path, "r");
// FILE* out_file = fopen(output_path, "w");
// if (in_file == nullptr || out_file == nullptr) return;
//
// // C++17 style using class template argument deduction
// absl::Cleanup in_closer = [&in_file] { fclose(in_file); };
//
// // C++11 style using the factory function
// auto out_closer = absl::MakeCleanup([&out_file] { fclose(out_file); });
//
// // `fclose` will be called on all exit paths by the cleanup instances
//
// Data data;
// while (ReadData(in_file, &data)) {
// if (data.IsBad()) {
// LOG(ERROR) << "Found bad data.";
// return; // `in_closer` and `out_closer` will call their callbacks
// }
// SaveData(out_file, &data);
// }
// return; // `in_closer` and `out_closer` will call their callbacks
// }
// ```
//
// `std::move(cleanup).Invoke()` will execute the callback early, before
// destruction, and prevent the callback from executing in the destructor.
//
// Alternatively, `std::move(cleanup).Cancel()` will prevent the callback from
// ever executing at all.
//
// Once a cleanup object has been `std::move(...)`-ed, it may not be used again.
#ifndef ABSL_CLEANUP_CLEANUP_H_
#define ABSL_CLEANUP_CLEANUP_H_
#include <utility>
#include "absl/base/config.h"
#include "absl/base/macros.h"
#include "absl/cleanup/internal/cleanup.h"
namespace absl {
ABSL_NAMESPACE_BEGIN
template <typename Arg, typename Callback = void()>
class ABSL_MUST_USE_RESULT Cleanup {
static_assert(cleanup_internal::WasDeduced<Arg>(),
"Explicit template parameters are not supported.");
static_assert(cleanup_internal::ReturnsVoid<Callback>(),
"Callbacks that return values are not supported.");
public:
Cleanup(Callback callback) : storage_(std::move(callback)) {} // NOLINT
Cleanup(Cleanup&& other) : storage_(std::move(other.storage_)) {}
void Cancel() && {
ABSL_HARDENING_ASSERT(storage_.IsCallbackEngaged());
storage_.DisengageCallback();
}
void Invoke() && {
ABSL_HARDENING_ASSERT(storage_.IsCallbackEngaged());
storage_.DisengageCallback();
storage_.InvokeCallback();
}
~Cleanup() {
if (storage_.IsCallbackEngaged()) {
storage_.InvokeCallback();
}
}
private:
cleanup_internal::Storage<Callback> storage_;
};
// `auto c = absl::MakeCleanup(/* callback */);`
//
// C++11 type deduction API for creating an instance of `absl::Cleanup`.
template <typename... Args, typename Callback>
absl::Cleanup<cleanup_internal::Tag, Callback> MakeCleanup(Callback callback) {
static_assert(cleanup_internal::WasDeduced<cleanup_internal::Tag, Args...>(),
"Explicit template parameters are not supported.");
static_assert(cleanup_internal::ReturnsVoid<Callback>(),
"Callbacks that return values are not supported.");
return {std::move(callback)};
}
// `absl::Cleanup c = /* callback */;`
//
// C++17 type deduction API for creating an instance of `absl::Cleanup`.
#if defined(ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION)
template <typename Callback>
Cleanup(Callback callback) -> Cleanup<cleanup_internal::Tag, Callback>;
#endif // defined(ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION)
ABSL_NAMESPACE_END
} // namespace absl
#endif // ABSL_CLEANUP_CLEANUP_H_
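The `Cancel()`/`Invoke()` contract documented in the header above can be exercised without the Abseil headers. Below is a minimal stand-in with the same engaged-flag semantics; it is a sketch, not `absl::Cleanup`'s implementation, and `MakeScopeGuard` is a hypothetical helper standing in for `absl::MakeCleanup`:

```cpp
#include <cassert>
#include <utility>

// Runs the callback on scope exit unless Cancel() disengaged it; Invoke()
// runs it early, exactly once. Mirrors the documented semantics only.
template <typename Callback>
class ScopeGuard {
 public:
  explicit ScopeGuard(Callback cb) : engaged_(true), cb_(std::move(cb)) {}
  ScopeGuard(ScopeGuard&& other)
      : engaged_(std::exchange(other.engaged_, false)),
        cb_(std::move(other.cb_)) {}
  ScopeGuard(const ScopeGuard&) = delete;

  void Cancel() && { engaged_ = false; }  // callback never runs

  void Invoke() && {  // run early; destructor will then do nothing
    engaged_ = false;
    std::move(cb_)();
  }

  ~ScopeGuard() {
    if (engaged_) std::move(cb_)();
  }

 private:
  bool engaged_;
  Callback cb_;
};

// Hypothetical C++11-style factory, analogous in shape to absl::MakeCleanup.
template <typename Callback>
ScopeGuard<Callback> MakeScopeGuard(Callback cb) {
  return ScopeGuard<Callback>(std::move(cb));
}
```

Note that both member functions are rvalue-qualified (`&&`), matching the header's `std::move(cleanup).Cancel()` usage: a guard that has been cancelled or invoked reads as consumed at the call site.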
// Copyright 2021 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "absl/cleanup/cleanup.h"
#include <functional>
#include <type_traits>
#include <utility>
#include "gtest/gtest.h"
#include "absl/base/config.h"
#include "absl/utility/utility.h"
namespace {
using Tag = absl::cleanup_internal::Tag;
template <typename Type1, typename Type2>
void AssertSameType() {
static_assert(std::is_same<Type1, Type2>::value, "");
}
struct IdentityFactory {
template <typename Callback>
static Callback AsCallback(Callback callback) {
return Callback(std::move(callback));
}
};
// `FunctorClass` is a type used for testing `absl::Cleanup`. It is intended to
// represent users that make their own move-only callback types outside of
// `std::function` and lambda literals.
class FunctorClass {
using Callback = std::function<void()>;
public:
explicit FunctorClass(Callback callback) : callback_(std::move(callback)) {}
FunctorClass(FunctorClass&& other)
: callback_(absl::exchange(other.callback_, Callback())) {}
FunctorClass(const FunctorClass&) = delete;
FunctorClass& operator=(const FunctorClass&) = delete;
FunctorClass& operator=(FunctorClass&&) = delete;
void operator()() const& = delete;
void operator()() && {
ASSERT_TRUE(callback_);
callback_();
callback_ = nullptr;
}
private:
Callback callback_;
};
struct FunctorClassFactory {
template <typename Callback>
static FunctorClass AsCallback(Callback callback) {
return FunctorClass(std::move(callback));
}
};
struct StdFunctionFactory {
template <typename Callback>
static std::function<void()> AsCallback(Callback callback) {
return std::function<void()>(std::move(callback));
}
};
using CleanupTestParams =
::testing::Types<IdentityFactory, FunctorClassFactory, StdFunctionFactory>;
template <typename>
struct CleanupTest : public ::testing::Test {};
TYPED_TEST_SUITE(CleanupTest, CleanupTestParams);
bool function_pointer_called = false;
void FunctionPointerFunction() { function_pointer_called = true; }
TYPED_TEST(CleanupTest, FactoryProducesCorrectType) {
{
auto callback = TypeParam::AsCallback([] {});
auto cleanup = absl::MakeCleanup(std::move(callback));
AssertSameType<absl::Cleanup<Tag, decltype(callback)>, decltype(cleanup)>();
}
{
auto cleanup = absl::MakeCleanup(&FunctionPointerFunction);
AssertSameType<absl::Cleanup<Tag, void (*)()>, decltype(cleanup)>();
}
{
auto cleanup = absl::MakeCleanup(FunctionPointerFunction);
AssertSameType<absl::Cleanup<Tag, void (*)()>, decltype(cleanup)>();
}
}
#if defined(ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION)
TYPED_TEST(CleanupTest, CTADProducesCorrectType) {
{
auto callback = TypeParam::AsCallback([] {});
absl::Cleanup cleanup = std::move(callback);
AssertSameType<absl::Cleanup<Tag, decltype(callback)>, decltype(cleanup)>();
}
{
absl::Cleanup cleanup = &FunctionPointerFunction;
AssertSameType<absl::Cleanup<Tag, void (*)()>, decltype(cleanup)>();
}
{
absl::Cleanup cleanup = FunctionPointerFunction;
AssertSameType<absl::Cleanup<Tag, void (*)()>, decltype(cleanup)>();
}
}
TYPED_TEST(CleanupTest, FactoryAndCTADProduceSameType) {
{
auto callback = IdentityFactory::AsCallback([] {});
auto factory_cleanup = absl::MakeCleanup(callback);
absl::Cleanup deduction_cleanup = callback;
AssertSameType<decltype(factory_cleanup), decltype(deduction_cleanup)>();
}
{
auto factory_cleanup =
absl::MakeCleanup(FunctorClassFactory::AsCallback([] {}));
absl::Cleanup deduction_cleanup = FunctorClassFactory::AsCallback([] {});
AssertSameType<decltype(factory_cleanup), decltype(deduction_cleanup)>();
}
{
auto factory_cleanup =
absl::MakeCleanup(StdFunctionFactory::AsCallback([] {}));
absl::Cleanup deduction_cleanup = StdFunctionFactory::AsCallback([] {});
AssertSameType<decltype(factory_cleanup), decltype(deduction_cleanup)>();
}
{
auto factory_cleanup = absl::MakeCleanup(&FunctionPointerFunction);
absl::Cleanup deduction_cleanup = &FunctionPointerFunction;
AssertSameType<decltype(factory_cleanup), decltype(deduction_cleanup)>();
}
{
auto factory_cleanup = absl::MakeCleanup(FunctionPointerFunction);
absl::Cleanup deduction_cleanup = FunctionPointerFunction;
AssertSameType<decltype(factory_cleanup), decltype(deduction_cleanup)>();
}
}
#endif // defined(ABSL_HAVE_CLASS_TEMPLATE_ARGUMENT_DEDUCTION)
TYPED_TEST(CleanupTest, BasicUsage) {
bool called = false;
{
EXPECT_FALSE(called);
auto cleanup =
absl::MakeCleanup(TypeParam::AsCallback([&called] { called = true; }));
EXPECT_FALSE(called);
}
EXPECT_TRUE(called);
}
TYPED_TEST(CleanupTest, BasicUsageWithFunctionPointer) {
function_pointer_called = false;
{
EXPECT_FALSE(function_pointer_called);
auto cleanup =
absl::MakeCleanup(TypeParam::AsCallback(&FunctionPointerFunction));
EXPECT_FALSE(function_pointer_called);
}
EXPECT_TRUE(function_pointer_called);
}
TYPED_TEST(CleanupTest, Cancel) {
bool called = false;
{
EXPECT_FALSE(called);
auto cleanup =
absl::MakeCleanup(TypeParam::AsCallback([&called] { called = true; }));
std::move(cleanup).Cancel();
EXPECT_FALSE(called);
}
EXPECT_FALSE(called);
}
TYPED_TEST(CleanupTest, Invoke) {
bool called = false;
{
EXPECT_FALSE(called);
auto cleanup =
absl::MakeCleanup(TypeParam::AsCallback([&called] { called = true; }));
std::move(cleanup).Invoke();
EXPECT_TRUE(called);
}
EXPECT_TRUE(called);
}
} // namespace
// Copyright 2021 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef ABSL_CLEANUP_INTERNAL_CLEANUP_H_
#define ABSL_CLEANUP_INTERNAL_CLEANUP_H_
#include <type_traits>
#include <utility>
#include "absl/base/internal/invoke.h"
#include "absl/base/thread_annotations.h"
#include "absl/utility/utility.h"
namespace absl {
ABSL_NAMESPACE_BEGIN
namespace cleanup_internal {
struct Tag {};
template <typename Arg, typename... Args>
constexpr bool WasDeduced() {
return (std::is_same<cleanup_internal::Tag, Arg>::value) &&
(sizeof...(Args) == 0);
}
template <typename Callback>
constexpr bool ReturnsVoid() {
return (std::is_same<base_internal::invoke_result_t<Callback>, void>::value);
}
template <typename Callback>
class Storage {
public:
explicit Storage(Callback callback)
: engaged_(true), callback_(std::move(callback)) {}
Storage(Storage&& other)
: engaged_(absl::exchange(other.engaged_, false)),
callback_(std::move(other.callback_)) {}
Storage(const Storage& other) = delete;
Storage& operator=(Storage&& other) = delete;
Storage& operator=(const Storage& other) = delete;
bool IsCallbackEngaged() const { return engaged_; }
void DisengageCallback() { engaged_ = false; }
void InvokeCallback() ABSL_NO_THREAD_SAFETY_ANALYSIS {
std::move(callback_)();
}
private:
bool engaged_;
Callback callback_;
};
} // namespace cleanup_internal
ABSL_NAMESPACE_END
} // namespace absl
#endif // ABSL_CLEANUP_INTERNAL_CLEANUP_H_
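`Storage`'s move constructor above uses `absl::exchange` so ownership transfers in a single expression: the source reads as disengaged immediately after the move, and only one object will ever run the callback. The idiom in isolation, with `std::exchange` standing in for `absl::exchange` and an illustrative struct name:

```cpp
#include <cassert>
#include <utility>

// std::exchange writes `false` into the source and returns its previous
// value, so after a move exactly one object remains engaged.
struct EngagedFlag {
  bool engaged = true;
  EngagedFlag() = default;
  EngagedFlag(EngagedFlag&& other) noexcept
      : engaged(std::exchange(other.engaged, false)) {}
};
```

This is why `~Cleanup()` can safely check `IsCallbackEngaged()`: a moved-from instance is guaranteed to report disengaged and its destructor does nothing.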
...@@ -167,11 +167,13 @@ class InlinedVector {
   // Creates an inlined vector by copying the contents of `other` using `alloc`.
   InlinedVector(const InlinedVector& other, const allocator_type& alloc)
       : storage_(alloc) {
-    if (IsMemcpyOk::value && !other.storage_.GetIsAllocated()) {
+    if (other.empty()) {
+      // Empty; nothing to do.
+    } else if (IsMemcpyOk::value && !other.storage_.GetIsAllocated()) {
+      // Memcpy-able and do not need allocation.
       storage_.MemcpyFrom(other.storage_);
     } else {
-      storage_.Initialize(IteratorValueAdapter<const_pointer>(other.data()),
-                          other.size());
+      storage_.InitFrom(other.storage_);
     }
   }
......
...@@ -534,6 +534,28 @@ void BM_ConstructFromMove(benchmark::State& state) {
 ABSL_INTERNAL_BENCHMARK_ONE_SIZE(BM_ConstructFromMove, TrivialType);
 ABSL_INTERNAL_BENCHMARK_ONE_SIZE(BM_ConstructFromMove, NontrivialType);
+
+// Measure cost of copy-constructor+destructor.
+void BM_CopyTrivial(benchmark::State& state) {
+  const int n = state.range(0);
+  InlVec<int64_t> src(n);
+  for (auto s : state) {
+    InlVec<int64_t> copy(src);
+    benchmark::DoNotOptimize(copy);
+  }
+}
+BENCHMARK(BM_CopyTrivial)->Arg(0)->Arg(1)->Arg(kLargeSize);
+
+// Measure cost of copy-constructor+destructor.
+void BM_CopyNonTrivial(benchmark::State& state) {
+  const int n = state.range(0);
+  InlVec<InlVec<int64_t>> src(n);
+  for (auto s : state) {
+    InlVec<InlVec<int64_t>> copy(src);
+    benchmark::DoNotOptimize(copy);
+  }
+}
+BENCHMARK(BM_CopyNonTrivial)->Arg(0)->Arg(1)->Arg(kLargeSize);
+
 template <typename T, size_t FromSize, size_t ToSize>
 void BM_AssignSizeRef(benchmark::State& state) {
   auto size = ToSize;
......
...@@ -81,6 +81,23 @@ void DestroyElements(AllocatorType* alloc_ptr, Pointer destroy_first,
   }
 }
+
+// If kUseMemcpy is true, memcpy(dst, src, n); else do nothing.
+// Useful to avoid compiler warnings when memcpy() is used for T values
+// that are not trivially copyable in non-reachable code.
+template <bool kUseMemcpy>
+inline void MemcpyIfAllowed(void* dst, const void* src, size_t n);
+
+// memcpy when allowed.
+template <>
+inline void MemcpyIfAllowed<true>(void* dst, const void* src, size_t n) {
+  memcpy(dst, src, n);
+}
+
+// Do nothing for types that are not memcpy-able. This function is only
+// called from non-reachable branches.
+template <>
+inline void MemcpyIfAllowed<false>(void*, const void*, size_t) {}
+
 template <typename AllocatorType, typename Pointer, typename ValueAdapter,
           typename SizeType>
 void ConstructElements(AllocatorType* alloc_ptr, Pointer construct_first,
...@@ -310,9 +327,14 @@ class Storage {
       : metadata_(alloc, /* size and is_allocated */ 0) {}

   ~Storage() {
-    pointer data = GetIsAllocated() ? GetAllocatedData() : GetInlinedData();
-    inlined_vector_internal::DestroyElements(GetAllocPtr(), data, GetSize());
-    DeallocateIfAllocated();
+    if (GetSizeAndIsAllocated() == 0) {
+      // Empty and not allocated; nothing to do.
+    } else if (IsMemcpyOk::value) {
+      // No destructors need to be run; just deallocate if necessary.
+      DeallocateIfAllocated();
+    } else {
+      DestroyContents();
+    }
   }

   // ---------------------------------------------------------------------------
...@@ -370,6 +392,8 @@ class Storage {
   // Storage Member Mutators
   // ---------------------------------------------------------------------------

+  ABSL_ATTRIBUTE_NOINLINE void InitFrom(const Storage& other);
+
   template <typename ValueAdapter>
   void Initialize(ValueAdapter values, size_type new_size);
...@@ -452,6 +476,8 @@ class Storage {
   }

  private:
+  ABSL_ATTRIBUTE_NOINLINE void DestroyContents();
+
   using Metadata =
       container_internal::CompressedTuple<allocator_type, size_type>;
...@@ -477,6 +503,40 @@ class Storage {
 };

 template <typename T, size_t N, typename A>
+void Storage<T, N, A>::DestroyContents() {
+  pointer data = GetIsAllocated() ? GetAllocatedData() : GetInlinedData();
+  inlined_vector_internal::DestroyElements(GetAllocPtr(), data, GetSize());
+  DeallocateIfAllocated();
+}
+
+template <typename T, size_t N, typename A>
+void Storage<T, N, A>::InitFrom(const Storage& other) {
+  const auto n = other.GetSize();
+  assert(n > 0);  // Empty sources handled in caller.
+  const_pointer src;
+  pointer dst;
+  if (!other.GetIsAllocated()) {
+    dst = GetInlinedData();
+    src = other.GetInlinedData();
+  } else {
+    // Because this is only called from the `InlinedVector` constructors, it's
+    // safe to take on the allocation with size `0`. If `ConstructElements(...)`
+    // throws, deallocation will be automatically handled by `~Storage()`.
+    size_type new_capacity = ComputeCapacity(GetInlinedCapacity(), n);
+    dst = AllocatorTraits::allocate(*GetAllocPtr(), new_capacity);
+    SetAllocatedData(dst, new_capacity);
+    src = other.GetAllocatedData();
+  }
+  if (IsMemcpyOk::value) {
+    MemcpyIfAllowed<IsMemcpyOk::value>(dst, src, sizeof(dst[0]) * n);
+  } else {
+    auto values = IteratorValueAdapter<const_pointer>(src);
+    inlined_vector_internal::ConstructElements(GetAllocPtr(), dst, &values, n);
+  }
+  GetSizeAndIsAllocated() = other.GetSizeAndIsAllocated();
+}
+
+template <typename T, size_t N, typename A>
 template <typename ValueAdapter>
 auto Storage<T, N, A>::Initialize(ValueAdapter values, size_type new_size)
     -> void {
......
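The `MemcpyIfAllowed` specializations above let a plain `if` keep both branches compilable: for a non-trivially-copyable `T`, the (never-taken) memcpy branch instantiates the `<false>` no-op instead of a `memcpy` over a non-trivial type, so no warning is emitted. A stand-alone reproduction of the trick (`CopyOne` and `NonTrivial` are illustrative, not from Abseil):

```cpp
#include <cassert>
#include <cstring>
#include <type_traits>

// Selects a real memcpy or a no-op at compile time, mirroring the diff above.
template <bool kUseMemcpy>
inline void MemcpyIfAllowed(void* dst, const void* src, std::size_t n);

template <>
inline void MemcpyIfAllowed<true>(void* dst, const void* src, std::size_t n) {
  std::memcpy(dst, src, n);
}

template <>
inline void MemcpyIfAllowed<false>(void*, const void*, std::size_t) {}

// Illustrative caller: memcpy only when T is trivially copyable; otherwise
// fall back to copy assignment. The memcpy branch still compiles for
// non-trivial T because it instantiates the <false> no-op.
template <typename T>
void CopyOne(T* dst, const T* src) {
  if (std::is_trivially_copyable<T>::value) {
    MemcpyIfAllowed<std::is_trivially_copyable<T>::value>(dst, src, sizeof(T));
  } else {
    *dst = *src;
  }
}

// A type with user-provided copy operations, hence not trivially copyable
// (illustrative).
struct NonTrivial {
  int value = 0;
  NonTrivial() = default;
  NonTrivial(const NonTrivial& other) : value(other.value) {}
  NonTrivial& operator=(const NonTrivial& other) {
    value = other.value;
    return *this;
  }
};
```

In C++17 and later, `if constexpr` would discard the untaken branch outright; the specialization approach shown here is the C++11-compatible equivalent.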
...@@ -207,10 +207,12 @@ void Status::UnrefNonInlined(uintptr_t rep) {
   }
 }

-uintptr_t Status::NewRep(absl::StatusCode code, absl::string_view msg,
-                         std::unique_ptr<status_internal::Payloads> payloads) {
+uintptr_t Status::NewRep(
+    absl::StatusCode code, absl::string_view msg,
+    std::unique_ptr<status_internal::Payloads> payloads) {
   status_internal::StatusRep* rep = new status_internal::StatusRep(
-      code, std::string(msg.data(), msg.size()), std::move(payloads));
+      code, std::string(msg.data(), msg.size()),
+      std::move(payloads));
   return PointerToRep(rep);
 }
...@@ -236,8 +238,9 @@ absl::StatusCode Status::code() const {
 void Status::PrepareToModify() {
   ABSL_RAW_CHECK(!ok(), "PrepareToModify shouldn't be called on OK status.");
   if (IsInlined(rep_)) {
-    rep_ = NewRep(static_cast<absl::StatusCode>(raw_code()),
-                  absl::string_view(), nullptr);
+    rep_ =
+        NewRep(static_cast<absl::StatusCode>(raw_code()), absl::string_view(),
+               nullptr);
     return;
   }
...@@ -248,7 +251,8 @@ void Status::PrepareToModify() {
   if (rep->payloads) {
     payloads = absl::make_unique<status_internal::Payloads>(*rep->payloads);
   }
-  rep_ = NewRep(rep->code, message(), std::move(payloads));
+  rep_ = NewRep(rep->code, message(),
+                std::move(payloads));
   UnrefNonInlined(rep_i);
 }
......
...@@ -371,10 +371,10 @@ class ABSL_MUST_USE_RESULT Status final {
   Status();

   // Creates a status in the canonical error space with the specified
-  // `absl::StatusCode` and error message. If `code == absl::StatusCode::kOk`,
+  // `absl::StatusCode` and error message. If `code == absl::StatusCode::kOk`,  // NOLINT
   // `msg` is ignored and an object identical to an OK status is constructed.
   //
-  // The `msg` string must be in UTF-8. The implementation may complain (e.g.,
+  // The `msg` string must be in UTF-8. The implementation may complain (e.g.,  // NOLINT
   // by printing a warning) if it is not.
   Status(absl::StatusCode code, absl::string_view msg);
...@@ -551,7 +551,8 @@ class ABSL_MUST_USE_RESULT Status final { ...@@ -551,7 +551,8 @@ class ABSL_MUST_USE_RESULT Status final {
status_internal::Payloads* GetPayloads(); status_internal::Payloads* GetPayloads();
// Takes ownership of payload. // Takes ownership of payload.
static uintptr_t NewRep(absl::StatusCode code, absl::string_view msg, static uintptr_t NewRep(
absl::StatusCode code, absl::string_view msg,
std::unique_ptr<status_internal::Payloads> payload); std::unique_ptr<status_internal::Payloads> payload);
static bool EqualsSlow(const absl::Status& a, const absl::Status& b); static bool EqualsSlow(const absl::Status& a, const absl::Status& b);
......
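The reflowed `NewRep` declaration keeps the contract stated in the `// Takes ownership of payload.` comment: passing `std::unique_ptr` by value forces callers to `std::move()` the payloads in, leaving the caller's pointer null. A small illustration of that signature style (`Payloads` and `Consume` here are stand-ins, not Abseil's types):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Stand-in for the payloads container.
struct Payloads {
  std::string data;
};

// Mirrors the NewRep(..., std::unique_ptr<Payloads>) shape: a by-value
// unique_ptr parameter means the function body now owns the payloads.
std::string Consume(std::unique_ptr<Payloads> payloads) {
  return payloads ? payloads->data : "<none>";
}
```

After `Consume(std::move(p))`, the moved-from `unique_ptr` is guaranteed to be null, which makes the ownership transfer visible at the call site.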
@@ -78,6 +78,8 @@
 #include "absl/functional/function_ref.h"
 #include "absl/meta/type_traits.h"
 #include "absl/strings/internal/cord_internal.h"
+#include "absl/strings/internal/cord_rep_ring.h"
+#include "absl/strings/internal/cord_rep_ring_reader.h"
 #include "absl/strings/internal/resize_uninitialized.h"
 #include "absl/strings/internal/string_constant.h"
 #include "absl/strings/string_view.h"
@@ -361,6 +363,10 @@ class Cord {
   friend class CharIterator;
 
  private:
+  using CordRep = absl::cord_internal::CordRep;
+  using CordRepRing = absl::cord_internal::CordRepRing;
+  using CordRepRingReader = absl::cord_internal::CordRepRingReader;
+
   // Stack of right children of concat nodes that we have to visit.
   // Keep this at the end of the structure to avoid cache-thrashing.
   // TODO(jgm): Benchmark to see if there's a more optimal value than 47 for
@@ -385,6 +391,10 @@ class Cord {
   // Stack specific operator++
   ChunkIterator& AdvanceStack();
 
+  // Ring buffer specific operator++
+  ChunkIterator& AdvanceRing();
+  void AdvanceBytesRing(size_t n);
+
   // Iterates `n` bytes, where `n` is expected to be greater than or equal to
   // `current_chunk_.size()`.
   void AdvanceBytesSlowPath(size_t n);
@@ -398,6 +408,10 @@ class Cord {
   absl::cord_internal::CordRep* current_leaf_ = nullptr;
   // The number of bytes left in the `Cord` over which we are iterating.
   size_t bytes_remaining_ = 0;
+
+  // Cord reader for ring buffers. Empty if not traversing a ring buffer.
+  CordRepRingReader ring_reader_;
+
   // See 'Stack' alias definition.
   Stack stack_of_right_children_;
 };
@@ -1107,6 +1121,11 @@ inline bool Cord::StartsWith(absl::string_view rhs) const {
 }
 
 inline void Cord::ChunkIterator::InitTree(cord_internal::CordRep* tree) {
+  if (tree->tag == cord_internal::RING) {
+    current_chunk_ = ring_reader_.Reset(tree->ring());
+    return;
+  }
   stack_of_right_children_.push_back(tree);
   operator++();
 }
@@ -1126,13 +1145,33 @@ inline Cord::ChunkIterator::ChunkIterator(const Cord* cord)
   }
 }
 
+inline Cord::ChunkIterator& Cord::ChunkIterator::AdvanceRing() {
+  current_chunk_ = ring_reader_.Next();
+  return *this;
+}
+
+inline void Cord::ChunkIterator::AdvanceBytesRing(size_t n) {
+  assert(n >= current_chunk_.size());
+  bytes_remaining_ -= n;
+  if (bytes_remaining_) {
+    if (n == current_chunk_.size()) {
+      current_chunk_ = ring_reader_.Next();
+    } else {
+      size_t offset = ring_reader_.length() - bytes_remaining_;
+      current_chunk_ = ring_reader_.Seek(offset);
+    }
+  } else {
+    current_chunk_ = {};
+  }
+}
+
 inline Cord::ChunkIterator& Cord::ChunkIterator::operator++() {
   ABSL_HARDENING_ASSERT(bytes_remaining_ > 0 &&
                         "Attempted to iterate past `end()`");
   assert(bytes_remaining_ >= current_chunk_.size());
   bytes_remaining_ -= current_chunk_.size();
   if (bytes_remaining_ > 0) {
-    return AdvanceStack();
+    return ring_reader_ ? AdvanceRing() : AdvanceStack();
   } else {
     current_chunk_ = {};
   }
@@ -1174,7 +1213,7 @@ inline void Cord::ChunkIterator::AdvanceBytes(size_t n) {
   if (ABSL_PREDICT_TRUE(n < current_chunk_.size())) {
     RemoveChunkPrefix(n);
   } else if (n != 0) {
-    AdvanceBytesSlowPath(n);
+    ring_reader_ ? AdvanceBytesRing(n) : AdvanceBytesSlowPath(n);
   }
 }
...
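The dispatch above keeps `operator++` inline and cheap: a default-constructed `ring_reader_` converts to `false`, so tree-backed cords still take the `AdvanceStack()` path. A reduced sketch of the ring path, using a plain vector of string chunks instead of `CordRepRing` (all names here are illustrative, not Abseil's):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for CordRepRingReader: walks fixed chunks and can
// seek to an absolute byte offset, like AdvanceBytesRing() does via Seek().
class ChunkReader {
 public:
  explicit ChunkReader(std::vector<std::string> chunks)
      : chunks_(std::move(chunks)) {
    for (const auto& c : chunks_) length_ += c.size();
  }

  size_t length() const { return length_; }

  // Seek to an absolute offset; returns the remainder of the chunk
  // containing that offset.
  std::string Seek(size_t offset) {
    size_t start = 0;
    size_t index = 0;
    while (offset >= start + chunks_[index].size()) {
      start += chunks_[index++].size();
    }
    return chunks_[index].substr(offset - start);
  }

 private:
  std::vector<std::string> chunks_;
  size_t length_ = 0;
};

// Mirrors the shape of AdvanceBytesRing(): consume n bytes (n is at least the
// current chunk's size), then seek to the new position or finish.
struct RingIterator {
  ChunkReader reader;
  std::string current_chunk;
  size_t bytes_remaining;

  void AdvanceBytes(size_t n) {
    assert(n >= current_chunk.size());
    bytes_remaining -= n;
    if (bytes_remaining) {
      current_chunk = reader.Seek(reader.length() - bytes_remaining);
    } else {
      current_chunk.clear();
    }
  }
};
```

The key property, as in the real code, is that the byte position is recovered from `length() - bytes_remaining_`, so skipping several chunks costs one seek instead of repeated per-chunk advances.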
@@ -367,7 +367,7 @@ TEST(Cord, Subcord) {
     for (size_t end_pos : positions) {
       if (end_pos < pos || end_pos > a.size()) continue;
       absl::Cord sa = a.Subcord(pos, end_pos - pos);
-      EXPECT_EQ(absl::string_view(s).substr(pos, end_pos - pos),
+      ASSERT_EQ(absl::string_view(s).substr(pos, end_pos - pos),
                 std::string(sa))
           << a;
     }
@@ -379,7 +379,7 @@ TEST(Cord, Subcord) {
   for (size_t pos = 0; pos <= sh.size(); ++pos) {
     for (size_t n = 0; n <= sh.size() - pos; ++n) {
       absl::Cord sc = c.Subcord(pos, n);
-      EXPECT_EQ(sh.substr(pos, n), std::string(sc)) << c;
+      ASSERT_EQ(sh.substr(pos, n), std::string(sc)) << c;
     }
   }
@@ -389,7 +389,7 @@ TEST(Cord, Subcord) {
   while (sa.size() > 1) {
     sa = sa.Subcord(1, sa.size() - 2);
     ss = ss.substr(1, ss.size() - 2);
-    EXPECT_EQ(ss, std::string(sa)) << a;
+    ASSERT_EQ(ss, std::string(sa)) << a;
     if (HasFailure()) break;  // halt cascade
   }
...
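Switching `EXPECT_EQ` to `ASSERT_EQ` makes these quadratic `Subcord` loops halt at the first mismatch instead of emitting one failure per remaining iteration. Outside googletest, the same fail-fast shape can be sketched like this (`CheckEq` and `ScanFailFast` are illustrative helpers, not part of any library):

```cpp
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// EXPECT-style: record the failure and keep going.
// ASSERT-style: record the failure and stop the enclosing loop.
// CheckEq returns false on mismatch so the caller can bail out early.
bool CheckEq(const std::string& want, const std::string& got, int* failures) {
  if (want == got) return true;
  ++*failures;
  std::fprintf(stderr, "mismatch: want %s got %s\n", want.c_str(),
               got.c_str());
  return false;
}

// Scans (want, got) pairs, halting on the first mismatch; returns the number
// of failures reported (at most one, thanks to the ASSERT-like break).
int ScanFailFast(
    const std::vector<std::pair<std::string, std::string>>& cases) {
  int failures = 0;
  for (const auto& c : cases) {
    if (!CheckEq(c.first, c.second, &failures)) break;  // ASSERT-like halt
  }
  return failures;
}
```

With EXPECT-style checks, the second and third cases in the test below would both report; the fail-fast version reports only the first.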
@@ -43,8 +43,9 @@ static constexpr size_t kMaxFlatSize = 4096;
 static constexpr size_t kMaxFlatLength = kMaxFlatSize - kFlatOverhead;
 static constexpr size_t kMinFlatLength = kMinFlatSize - kFlatOverhead;
 
-constexpr size_t AllocatedSizeToTagUnchecked(size_t size) {
-  return (size <= 1024) ? size / 8 : 128 + size / 32 - 1024 / 32;
+constexpr uint8_t AllocatedSizeToTagUnchecked(size_t size) {
+  return static_cast<uint8_t>((size <= 1024) ? size / 8
+                                             : 128 + size / 32 - 1024 / 32);
 }
 
 static_assert(kMinFlatSize / 8 >= FLAT, "");
@@ -65,7 +66,7 @@ inline size_t RoundUpForTag(size_t size) {
 // undefined if the size exceeds the maximum size that can be encoded in
 // a tag, i.e., if size is larger than TagToAllocatedSize(<max tag>).
 inline uint8_t AllocatedSizeToTag(size_t size) {
-  const size_t tag = AllocatedSizeToTagUnchecked(size);
+  const uint8_t tag = AllocatedSizeToTagUnchecked(size);
   assert(tag <= MAX_FLAT_TAG);
   return tag;
 }
...
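The mapping packs flat allocation sizes into a `uint8_t`: sizes up to 1024 bytes use 8-byte granularity (tags up to 128), larger sizes use 32-byte granularity. The inverse below is reconstructed from that formula purely to check the round trip; it may differ in detail from the library's actual `TagToAllocatedSize`:

```cpp
#include <cstddef>
#include <cstdint>

// Size-to-tag mapping as in the diff: 8-byte steps up to 1024 bytes,
// 32-byte steps above, so the result always fits in a uint8_t.
constexpr uint8_t AllocatedSizeToTagUnchecked(size_t size) {
  return static_cast<uint8_t>((size <= 1024) ? size / 8
                                             : 128 + size / 32 - 1024 / 32);
}

// Reconstructed inverse (illustrative): maps a tag back to the allocated
// size, with the granularity switch at tag 128 (= 1024 bytes).
constexpr size_t TagToAllocatedSize(uint8_t tag) {
  return (tag <= 128) ? tag * size_t{8} : 1024 + (tag - size_t{128}) * 32;
}
```

For example, 1024 maps to tag 128 and 1056 to tag 129, and the maximum flat size of 4096 maps to tag 224, comfortably inside `uint8_t`.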
@@ -36,8 +36,10 @@ namespace cord_internal {
 #ifdef __clang__
 #pragma clang diagnostic push
 #pragma clang diagnostic ignored "-Wshadow"
+#if __has_warning("-Wshadow-field")
 #pragma clang diagnostic ignored "-Wshadow-field"
 #endif
+#endif
 
 namespace {
...
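`__has_warning` is a Clang feature-checking macro, so gating the `-Wshadow-field` pragma on it keeps older Clang versions, which lack that warning, from complaining about an unknown diagnostic. Here the check already sits inside `#ifdef __clang__`; when a pragma lives outside such a guard, the usual portable pattern adds a fallback definition, sketched below with a hypothetical `shadow_demo` function standing in for the warning-triggering code:

```cpp
// Only Clang predefines __has_warning; a fallback keeps the preprocessor
// test well-formed on GCC/MSVC if it is ever evaluated there.
#ifndef __has_warning
#define __has_warning(x) 0
#endif

#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wshadow"
#if __has_warning("-Wshadow-field")
#pragma clang diagnostic ignored "-Wshadow-field"
#endif
#endif

// ... code that would otherwise trigger the suppressed warnings ...
int shadow_demo(int value) { return value; }

#ifdef __clang__
#pragma clang diagnostic pop
#endif
```

The matching `pop` restores the diagnostic state, so the suppression stays scoped to the file section between push and pop.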
@@ -34,8 +34,10 @@ namespace cord_internal {
 #ifdef __clang__
 #pragma clang diagnostic push
 #pragma clang diagnostic ignored "-Wshadow"
+#if __has_warning("-Wshadow-field")
 #pragma clang diagnostic ignored "-Wshadow-field"
 #endif
+#endif
 
 // All operations modifying a ring buffer are implemented as static methods
 // requiring a CordRepRing instance with a reference adopted by the method.
@@ -81,7 +83,7 @@ class CordRepRing : public CordRep {
   // `end_pos` which is the `end_pos` of the previous node (or `begin_pos`) plus
   // this node's length. The purpose is to allow for a binary search on this
   // position, while allowing O(1) prepend and append operations.
-  using pos_type = uint64_t;
+  using pos_type = size_t;
 
   // `index_type` is the type for the `head`, `tail` and `capacity` indexes.
   // Ring buffers are limited to having no more than four billion entries.
...
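As the `pos_type` comment says, each entry stores the cumulative `end_pos` of its chunk, so finding the chunk containing an absolute position is a binary search over a monotone sequence, while append only pushes one new cumulative sum. A minimal sketch of that indexing scheme (the `EndPosIndex` name is hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Each entry records the cumulative end position of its chunk. The entry
// covering absolute byte position `pos` is the first one with end_pos > pos:
// an O(log n) binary search, while Append stays O(1) amortized.
class EndPosIndex {
 public:
  void Append(size_t chunk_length) {
    size_t prev = entries_.empty() ? 0 : entries_.back();
    entries_.push_back(prev + chunk_length);
  }

  // Index of the chunk containing absolute byte position `pos`;
  // `pos` must be less than the total length.
  size_t Find(size_t pos) const {
    auto it = std::upper_bound(entries_.begin(), entries_.end(), pos);
    assert(it != entries_.end());
    return static_cast<size_t>(it - entries_.begin());
  }

 private:
  std::vector<size_t> entries_;  // strictly increasing end positions
};
```

For chunks of length 3, 4, and 2 the stored end positions are 3, 7, 9; position 2 falls in chunk 0, position 3 in chunk 1, and position 8 in chunk 2.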