This is a random number generator with low memory overhead and low
(per generated bit) CPU overhead. The repeat cycle is around 2**88,
which means around 2**120 bits are available before a cycle, or
about 2**102 12-bit samples before repeats.
This should be more than sufficient for blob rollover purposes.
If 100 million blobs are split per second (absurdly high), then
that's about 2**27 per second. If run for 30 years, that's
2**30 seconds. If run across 128 CPUs, that's 2**7 CPUs. Thus
2**(27+30+7) = 2**64 total samples are required before a loop,
which is WAAAY less than 2**88. (And this is overly
conservative, as these generators should be one per thread...
so we're really much closer to 2**57, not that it matters.)
For this reason, there's no reseed code. The cycle length of
mt11213b is significantly longer; however, it has significantly
larger state. One goal here is to keep the amount of state for
this generator within a single cache line. As such, if the cycle
length is later shown to be significantly smaller than 2**48
or so, a reseed code path may need to be added. (This is on
the assumption that the intensive run described above would
run for more than 1 million seconds, or about two weeks.)
This permits "stateless" allocators which grab memory from a
`thread_local Alepha::Blob` instance. Each allocation
sticks a malloc cookie of type
`std::shared_ptr< Alepha::Blob::StorageReservation >`
just before the base of the allocation.
At deallocation, the allocator object `reinterpret_cast`s
the malloc cookie back to a shared pointer and runs its destructor.
This causes the Blob's underlying reference counted allocation
to be tied to the lifetime of the allocated memory. The intent
is to permit cheap allocation in one thread and deallocation
in another. Each deallocation should be a single atomic
decrement operation. Each allocation should be (usually) a
bit of pointer arithmetic and a single atomic increment operation.
This, hopefully, eliminates significant thread contention on
the global allocation mechanism between various threads in
an intensive multithreaded situation where each processing
thread may independently retire data objects allocated by
a single source.
This permits naming operators via an enhanced enum and then
looking them up. This is a useful component for quick
development of scripting-language functionality.
C++26 (I hope) is supposed to have this syntax:
```
template for( const auto &element: aggregate )
{
...;
}
```
Thus, I've adjusted this gadget to have a similar name, to enable
simple mechanical code changes. From Alepha, you'd use it
thus:
```
template_for( aggregate ) <=[&]( const auto &element )
{
...;
};
```
The `TableTest` mechanism now prints as much detailed information
as it can get about the case arguments in any failed test case.
Git-commit-built-by: Merge branch 'print-test-inputs'
This is useful when the same set of types has to go into a
std::variant, a std::tuple, and perhaps a few other template
lists.
Git-commit-built-by: Merge branch 'make-template'
A type which cannot be printed when streamed in "Relaxed"
mode will simply print its typeid and nothing more. This is
opposed to its original behaviour, which was a compile-time
error.
This probably needs to be expanded upon. The basic functionality
added is to permit a test expectation clause to be a function which
takes some kind of exception type. That function can then
perform any arbitrary checks and analyses it needs to confirm that
the caught exception passes muster for that test case.