Set max epoch number constant #212

Merged 1 commit on Nov 10, 2021.
8 changes: 8 additions & 0 deletions include/ethash/ethash.h
@@ -49,6 +49,14 @@ extern "C" {
#define ETHASH_FULL_DATASET_ITEM_SIZE 128
#define ETHASH_NUM_DATASET_ACCESSES 64

/// The maximum epoch number supported by this implementation.
///
/// The value is the last epoch for which the light cache size fits within the 4GB limit.
/// It also allows making some assumptions about the maximum values of dataset indices.
/// The DAG size in the last epoch is about 275GB and its last block number is above 979M,
/// which gives over 60 years of blocks assuming a 2s block time.
#define ETHASH_MAX_EPOCH_NUMBER 32639
Collaborator
How did you arrive at this number?

Owner Author
It is the maximum value for which the light cache size stays under 4GB. Beyond that, some uint32 overflows start to happen.

Collaborator @axic (Nov 10, 2021)
Can you document this?

Also, can you put an estimate in terms of mainnet block numbers? (I know it depends on difficulty, but assume a linear yearly difficulty increase.) I also understand this is moot with PoS, but it would still be nice to have an idea.

Or at least document the current block + epoch number, as that gives an idea how many epochs we progressed over 6 years.

@axic
1 epoch = 30k blocks, so the limit imposed by @chfast is 979.17M blocks.
Difficulty is not related: it is adjusted during mining to keep the block interval roughly constant (between 12 and 15 seconds).
Mainnet is currently at block 13.5M, i.e. epoch 450.

Owner Author

This is documented now.

Collaborator
@AndreaLanfranchi @chfast Thanks for the explanation, this looks good now.


/** Ethash error codes. */
enum ethash_errc
{
10 changes: 6 additions & 4 deletions include/ethash/ethash.hpp
@@ -37,10 +37,12 @@ namespace ethash
{
constexpr auto revision = ETHASH_REVISION;

-static constexpr int epoch_length = ETHASH_EPOCH_LENGTH;
-static constexpr int light_cache_item_size = ETHASH_LIGHT_CACHE_ITEM_SIZE;
-static constexpr int full_dataset_item_size = ETHASH_FULL_DATASET_ITEM_SIZE;
-static constexpr int num_dataset_accesses = ETHASH_NUM_DATASET_ACCESSES;
+constexpr int epoch_length = ETHASH_EPOCH_LENGTH;
+constexpr int light_cache_item_size = ETHASH_LIGHT_CACHE_ITEM_SIZE;
+constexpr int full_dataset_item_size = ETHASH_FULL_DATASET_ITEM_SIZE;
+constexpr int num_dataset_accesses = ETHASH_NUM_DATASET_ACCESSES;

constexpr int max_epoch_number = ETHASH_MAX_EPOCH_NUMBER;

using epoch_context = ethash_epoch_context;
using epoch_context_full = ethash_epoch_context_full;
25 changes: 16 additions & 9 deletions lib/ethash/ethash.cpp
@@ -57,8 +57,6 @@ inline hash512 bitwise_xor(const hash512& x, const hash512& y) noexcept

int find_epoch_number(const hash256& seed) noexcept
{
-static constexpr int num_tries = 30000; // Divisible by 16.
-
// Thread-local cache of the last search.
static thread_local int cached_epoch_number = 0;
static thread_local hash256 cached_seed = {};
@@ -82,7 +80,7 @@ int find_epoch_number(const hash256& seed) noexcept

// Search for matching seed starting from epoch 0.
s = {};
-for (int i = 0; i < num_tries; ++i)
+for (int i = 0; i <= max_epoch_number; ++i)
{
if (s.word32s[0] == seed_part)
{
@@ -135,6 +133,9 @@ epoch_context_full* create_epoch_context(
static_assert(sizeof(epoch_context_full) < sizeof(hash512), "epoch_context too big");
static constexpr size_t context_alloc_size = sizeof(hash512);

if (epoch_number < 0 || epoch_number > max_epoch_number)
return nullptr;

const int light_cache_num_items = calculate_light_cache_num_items(epoch_number);
const int full_dataset_num_items = calculate_full_dataset_num_items(epoch_number);
const size_t light_cache_size = get_light_cache_size(light_cache_num_items);
@@ -428,29 +429,35 @@ ethash_hash256 ethash_calculate_epoch_seed(int epoch_number) noexcept

int ethash_calculate_light_cache_num_items(int epoch_number) noexcept
{
-static constexpr int item_size = sizeof(hash512);
-static constexpr int num_items_init = light_cache_init_size / item_size;
-static constexpr int num_items_growth = light_cache_growth / item_size;
+constexpr int item_size = sizeof(hash512);
+constexpr int num_items_init = light_cache_init_size / item_size;
+constexpr int num_items_growth = light_cache_growth / item_size;
static_assert(
light_cache_init_size % item_size == 0, "light_cache_init_size not multiple of item size");
static_assert(
light_cache_growth % item_size == 0, "light_cache_growth not multiple of item size");

if (epoch_number < 0 || epoch_number > max_epoch_number)
return 0;

int num_items_upper_bound = num_items_init + epoch_number * num_items_growth;
int num_items = ethash_find_largest_prime(num_items_upper_bound);
return num_items;
}

int ethash_calculate_full_dataset_num_items(int epoch_number) noexcept
{
-static constexpr int item_size = sizeof(hash1024);
-static constexpr int num_items_init = full_dataset_init_size / item_size;
-static constexpr int num_items_growth = full_dataset_growth / item_size;
+constexpr int item_size = sizeof(hash1024);
+constexpr int num_items_init = full_dataset_init_size / item_size;
+constexpr int num_items_growth = full_dataset_growth / item_size;
static_assert(full_dataset_init_size % item_size == 0,
"full_dataset_init_size not multiple of item size");
static_assert(
full_dataset_growth % item_size == 0, "full_dataset_growth not multiple of item size");

if (epoch_number < 0 || epoch_number > max_epoch_number)
return 0;

int num_items_upper_bound = num_items_init + epoch_number * num_items_growth;
int num_items = ethash_find_largest_prime(num_items_upper_bound);
return num_items;
8 changes: 6 additions & 2 deletions test/benchmarks/ethash_benchmarks.cpp
@@ -20,7 +20,9 @@ static void calculate_light_cache_num_items(benchmark::State& state)
benchmark::DoNotOptimize(&answer);
}
}
-BENCHMARK(calculate_light_cache_num_items)->Arg(32638)->Arg(32639);
+BENCHMARK(calculate_light_cache_num_items)
+    ->Arg(ethash::max_epoch_number - 1)
+    ->Arg(ethash::max_epoch_number);

static void calculate_full_dataset_num_items(benchmark::State& state)
{
@@ -32,7 +34,9 @@ static void calculate_full_dataset_num_items(benchmark::State& state)
benchmark::DoNotOptimize(&answer);
}
}
-BENCHMARK(calculate_full_dataset_num_items)->Arg(32638)->Arg(32639);
+BENCHMARK(calculate_full_dataset_num_items)
+    ->Arg(ethash::max_epoch_number - 1)
+    ->Arg(ethash::max_epoch_number);


static void seed(benchmark::State& state)
53 changes: 42 additions & 11 deletions test/unittests/test_ethash.cpp
@@ -190,7 +190,10 @@ static dataset_size_test_case dataset_size_test_cases[] = {
{1956, 273153856, 17481857408},
{2047, 285081536, 18245220736},
{30000, 3948936512, 252731976832},
-{32639, 4294836032, 274869514624},
+{max_epoch_number, 4294836032, 274'869'514'624},
+{max_epoch_number + 1, 0, 0},
+{max_epoch_number * 2, 0, 0},
+{-1, 0, 0},
};

TEST(ethash, light_cache_size)
@@ -220,21 +223,23 @@ struct epoch_seed_test_case
const char* const epoch_seed_hex;
};

-static epoch_seed_test_case epoch_seed_test_cases[] = {
+constexpr epoch_seed_test_case epoch_seed_test_cases[] = {
{0, "0000000000000000000000000000000000000000000000000000000000000000"},
{1, "290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563"},
{171, "a9b0e0c9aca72c07ba06b5bbdae8b8f69e61878301508473379bb4f71807d707"},
{2048, "20a7678ca7b50829183baac2e1e3c43fa3c4bcbc171b11cf5a9f30bebd172920"},
{29998, "1222b1faed7f93098f8ae498621fb3479805a664b70186063861c46596c66164"},
{29999, "ee1d0f61b054dff0f3025ebba821d405c8dc19a983e582e9fa5436fc3e7a07d8"},
{max_epoch_number - 1, "9472a82f992649315e3977120843a5a246e375715bd70ee98b3dd77c63154e99"},
{max_epoch_number, "09b435f2d92d0ddee038c379be8db1f895c904282e9ceb790f519a6aa3f83810"},
};

TEST(ethash, calculate_epoch_seed)
{
for (auto& t : epoch_seed_test_cases)
{
const hash256 epoch_seed = calculate_epoch_seed(t.epoch_number);
-EXPECT_EQ(epoch_seed, to_hash256(t.epoch_seed_hex));
+EXPECT_EQ(to_hex(epoch_seed), t.epoch_seed_hex);
}
}

@@ -268,19 +273,30 @@ TEST(ethash, find_epoch_number_double_descending)
TEST(ethash, find_epoch_number_sequential)
{
hash256 seed = {};
-for (int i = 0; i < 30000; ++i)
+for (int i = 0; i <= max_epoch_number; ++i)
{
auto e = find_epoch_number(seed);
EXPECT_EQ(e, i);
seed = keccak256(seed);
}
}

TEST(ethash, find_epoch_number_max)
{
const auto seed_max = to_hash256(epoch_seed_test_cases[7].epoch_seed_hex);
const auto seed_out_of_range = keccak256(seed_max);

find_epoch_number({}); // Reset cache.
EXPECT_EQ(find_epoch_number(seed_out_of_range), -1);
find_epoch_number({}); // Reset cache.
EXPECT_EQ(find_epoch_number(seed_max), max_epoch_number);
}

TEST(ethash, find_epoch_number_sequential_gap)
{
constexpr int start_epoch = 200;
hash256 seed = calculate_epoch_seed(start_epoch);
-for (int i = start_epoch; i < 30000; ++i)
+for (int i = start_epoch; i <= max_epoch_number; ++i)
{
auto e = find_epoch_number(seed);
EXPECT_EQ(e, i);
@@ -315,7 +331,7 @@ TEST(ethash, find_epoch_number_invalid)

TEST(ethash, find_epoch_number_epoch_too_high)
{
-hash256 seed = calculate_epoch_seed(30000);
+hash256 seed = calculate_epoch_seed(max_epoch_number + 1);
int epoch = find_epoch_number(seed);
EXPECT_EQ(epoch, -1);
}
@@ -324,7 +340,7 @@ TEST(ethash_multithreaded, find_epoch_number_sequential)
{
auto fn = [] {
hash256 seed = {};
-for (int i = 0; i < 30000; ++i)
+for (int i = 0; i <= max_epoch_number; ++i)
{
auto e = find_epoch_number(seed);
EXPECT_EQ(e, i);
@@ -348,6 +364,9 @@ TEST(ethash, get_epoch_number)
EXPECT_EQ(get_epoch_number(30001), 1);
EXPECT_EQ(get_epoch_number(30002), 1);
EXPECT_EQ(get_epoch_number(5000000), 166);
EXPECT_EQ(get_epoch_number(max_epoch_number * epoch_length), max_epoch_number);
constexpr auto max_block = max_epoch_number * epoch_length + epoch_length - 1;
EXPECT_EQ(get_epoch_number(max_block), max_epoch_number);
}

TEST(ethash, light_cache)
@@ -375,6 +394,12 @@ TEST(ethash, light_cache)
}
}

TEST(ethash, create_context_invalid_epoch)
{
EXPECT_EQ(create_epoch_context(-1), nullptr);
EXPECT_EQ(create_epoch_context(max_epoch_number + 1), nullptr);
}

TEST(ethash, fake_dataset_partial_items)
{
struct full_dataset_item_test_case
@@ -462,7 +487,10 @@ TEST(ethash, fake_dataset_items)
{740620450,
"7e4a3533ef6f0d9fa7e41b8304e08fe9e52556334cad0cc861337bd1155bbea211cf0b0198b4f08567cc47fcc964bbbdfb2f851437da1edba7c6f4bd3fd61a3a",
"f20969bd0407bb76560e7c099224a1ea185214808950519fafdcd02ba2874e9b4ebf1797cafb3b80e903b13a87ddac5d54d67ed58acf49bb12e03b81eb6c99af"},
-{4294967295,
{2147418082, // Max index for epoch 32639.
"a79eaa61a5c2256eb3bf9c78a2b6509929780d8826d7a7d1324328ab786ca9c23fc1437e1efb432ab823c5d5448b4183893d16168aebe21470e3515104eab67f",
"496baeac6ea83fdd5a6a20827029ddd73d1be507dc7f210c2aed29f0757eefea72ab7e4c92aab9ee34ed46027bdc9918e047b0f845c7fbbd254b8014141c7605"},
{0x7fffffff, // Max allowed index value.
"21471504c1f31007c14acd107a8ade1aad6c2a6c2ad879b3aca3b12517105483502d0e3e902acf3b128d294c0a69f2cc199bf8813be1f8bb4b5625822b70ec09",
"8e4fdb5dc602598f10a42b5061132eec05299380db872a3caf04aa21e3d4970350394dfbd58c5ab54571b1be0cc9001d788c6b14cbf003d7decc2aaef1232b8c"},
};
@@ -527,9 +555,12 @@ TEST(ethash, dataset_items_epoch13)
{740620450,
"df0c2e2f4df033a64b1bcd207c30c7ce48c7d8ca8edd1284c87a91d54372ed0cb513d1876b1dbef6fc06c496941039cba6c50676596d6379152689d9841c97e4",
"357bacef5baf4687c87e7ff07d5ab104ce39badcf9633c22ee31c3c3de0887b296f9385ea27573cb94bc3423cc39ab2a733be97a98e860290c31e94f03f39814"},
-{4294967295,
-"11fab5bafdf0e29f199cad053a542f777fcd8b4fb8a0203bf720b9a01718e8c76d0e374e979ebf0e1faf8ce992638a5e92ea8be8000c47e8307acad261df1abb",
-"164ff9a893a162319f9ccb4294e33fb6ae50ea05d02a753fd4797662676c1fad6d70b11db6d4aa0298d6aa695c9be8dea3dad70f953368cb11b283eb145d17e3"},
{2147418082, // Max index for epoch 32639.
"9705a12d9f1a193ffea9b9c6603b8d17315896b84ea6649e613fec1578c867535e6bbfd71cb18ce0c0dd6ca8051f7bfb5cfa2d89b29d1bf25a0b36ae57505844",
"c2f3475bf52ec727a0b684d9fbc5ce9234331abc585c383e87fa70e8c860819b35c12e6173df081f3f84bea218633ad54c9da6051ba90efc3985e887530cb89e"},
{0x7fffffff, // Max allowed index value.
"d463d63e393e6ccc31b240d3d12301a14e0410377657b0554d6041541303c2ddc8ec026432adf73311b56de486f6fdca808f87f3824587b413a4e4f7a571d046",
"9e15d844f137ae66e7fc23934cc51d53a36ec28a5d1a246d50773471252ae9ea30fe20e817434e771bbf77577899cf2cce8de11578b925a12af2ad9dd316f0ec"},
};
// clang-format on
