It's mostly worked out, but there are a few odd corner cases,
especially around the auto-generation of negation options. It also
got me thinking about the question "is a required negatable option
still required of its negation?" Similar questions arise around
exclusivity for such options.
I'm punting on these for now, but I think it might make sense to
make negatable options incompatible with such domains, or to treat
the two options as a shared-fate unit. But is `-O -o -O -o` a
violation of exclusivity? If we wind up returning to the default
state, have we actually passed that option, with respect to
"requirement"? I have to think about that some more.
A commit message isn't the best place to capture this, but I
didn't want to lose this thought.
I went with expansion here, as it was easier to implement given
the complexities of how the options-parsing code works. Rather
than trying to maintain state machines and parsing for both forms
of argument, we can transform the short options into their long
forms. This might, however, lead to some issues when the code is
expanded to handle arguments to those options. I'll probably just
add a state-tracking bit to that parameter saying that it was
expanded from a specific short form.
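As a rough illustration of the expansion pass (a minimal sketch
with hypothetical names; the real parser is more involved):
```
#include <map>
#include <string>
#include <vector>

// Rewrite each recognized short option into its long form before
// the main parse, leaving everything else untouched.
std::vector< std::string >
expandShortOptions( const std::vector< std::string > &args,
		const std::map< char, std::string > &shortToLong )
{
	std::vector< std::string > expanded;
	for( const auto &arg: args )
	{
		const bool isShort= arg.size() == 2 && arg[ 0 ] == '-'
				&& shortToLong.count( arg[ 1 ] );
		if( isShort ) expanded.push_back( "--" + shortToLong.at( arg[ 1 ] ) );
		else expanded.push_back( arg );
	}
	return expanded;
}
```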
It might be worth permitting a short form to expand to a long
form _with_ a specific hardcoded option. This gets into defaults,
which might be the better way to underpin that.
To expand these into the automatic help documentation, the long
option (the main option-definition struct) should maintain a list
of the short forms it supports.
I also need to add a neat syntax. Something like:
```
-'o'_option <= --"long-option"_option
```
It might be beneficial to auto generate something like:
```
-'O'_option <= --"no-long-option"_option
```
for boolean toggles. Should it always be so? Maybe an extra
sigil to allow both?
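For what it's worth, that spelling can be made legal C++ today.
Here is one hypothetical way to back it (not the actual Alepha
machinery):
```
#include <cstddef>
#include <string>

struct ShortOption { char name; };
struct LongOption { std::string name; };

ShortOption operator ""_option( const char c ) { return { c }; }
LongOption operator ""_option( const char *s, const std::size_t n )
{
	return { std::string( s, n ) };
}

// The leading dashes are unary minuses; absorb them.
ShortOption operator -( ShortOption o ) { return o; }
LongOption operator -( LongOption o ) { return o; }

// `<=` registers the short spelling as an expansion of the long one.
void operator <=( const ShortOption s, const LongOption l )
{
	/* record s.name -> l.name in the expansion table */
}
```
With those pieces in place, `-'o'_option <= --"long-option"_option;`
parses exactly as written, with `<=` doing the binding.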
This is a random number generator with low memory overhead and
low CPU overhead per generated bit. The repeat cycle is around
2**88, which means around 2**120 bits are available before a
cycle, which in turn means that about 2**102 12-bit samples are
available before repeats.
This should be more than sufficient for blob rollover purposes.
If 100 million blobs are split per second (absurdly high), then
that's about 2**27 per second. If run for 30 years, that's about
2**30 seconds. If run across 128 CPUs, that's 2**7 CPUs. Thus
2**(27+30+7) = 2**64 total samples are consumed before looping,
which is WAAAY less than 2**88. (And this is overly conservative,
as these generators should be one per thread... so we're really
much closer to 2**57, not that it matters.)
For this reason, there's no reseed code. The cycle length of
mt11213b is significantly longer; however, it has significantly
larger state, and one goal here is to keep the state of this
generator within a single cache line. As such, if the cycle
length is later shown to be significantly smaller than 2**48
or so, a reseed code path may need to be added. (This is on
the assumption that the intensive run described above would
last more than a million seconds, or about two weeks.)
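One well-known generator fitting this description (a ~2**88
period with only twelve bytes of state) is L'Ecuyer's taus88;
whether or not it's the exact algorithm used here, it shows the
shape of the thing:
```
#include <cstdint>

// L'Ecuyer's taus88 combined Tausworthe generator. Period is
// roughly 2**88, and the state fits easily in one cache line.
struct Taus88
{
	// Seeds must satisfy s1 >= 2, s2 >= 8, s3 >= 16.
	std::uint32_t s1= 2, s2= 8, s3= 16;

	std::uint32_t
	operator ()()
	{
		std::uint32_t b;
		b= ( ( s1 << 13 ) ^ s1 ) >> 19;
		s1= ( ( s1 & 0xFFFFFFFEu ) << 12 ) ^ b;
		b= ( ( s2 << 2 ) ^ s2 ) >> 25;
		s2= ( ( s2 & 0xFFFFFFF8u ) << 4 ) ^ b;
		b= ( ( s3 << 3 ) ^ s3 ) >> 11;
		s3= ( ( s3 & 0xFFFFFFF0u ) << 17 ) ^ b;
		return s1 ^ s2 ^ s3;
	}
};
```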
This permits "stateless" allocators which grab memory from a
`thread_local Alepha::Blob` instance. Each allocation
sticks a malloc cookie of type
`std::shared_ptr< Alepha::Blob::StorageReservation >`
just before the base of the allocation.
The allocator object knows that it needs to `reinterpret_cast`
the malloc cookie into a shared pointer and run its destructor.
This causes the Blob's underlying reference counted allocation
to be tied to the lifetime of the allocated memory. The intent
is to permit cheap allocation in one thread and deallocation
in another. Each deallocation should be a single atomic
reference-count decrement. Each allocation should (usually) be a
bit of pointer arithmetic and a single atomic increment.
This, hopefully, eliminates significant contention on the global
allocation mechanism in intensive multithreaded situations where
each processing thread may independently retire data objects
allocated by a single source.
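A minimal sketch of the cookie scheme described above, with
`StorageReservation` as a stand-in for the real Alepha type and
alignment handling omitted:
```
#include <cstddef>
#include <memory>
#include <new>

struct StorageReservation {};  // stand-in for Alepha::Blob::StorageReservation
using Cookie= std::shared_ptr< StorageReservation >;

// Allocation: copy the Blob's reservation pointer (one atomic
// increment) into the cookie slot, then hand out the memory just
// past it.
void *
allocate( std::byte *slot, const Cookie &reservation )
{
	new ( slot ) Cookie( reservation );
	return slot + sizeof( Cookie );
}

// Deallocation: step back over the cookie and destroy it; this is
// the single atomic decrement, possibly releasing the Blob's
// backing storage.
void
deallocate( void *p )
{
	const auto cookie= reinterpret_cast< Cookie * >(
			static_cast< std::byte * >( p ) - sizeof( Cookie ) );
	cookie->~Cookie();
}
```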
This permits naming operators via an enhanced enum and then
looking them up, which is a useful component for quick
development of scripting-language functionality.
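A hedged sketch of the lookup idea; the enhanced enum presumably
generates the name table automatically, and every name here is
hypothetical:
```
#include <map>
#include <stdexcept>
#include <string>

enum class Operator { plus, minus, times };

// Name -> enum lookup; an enhanced enum would generate this table.
Operator
lookupOperator( const std::string &name )
{
	static const std::map< std::string, Operator > table
	{
		{ "plus", Operator::plus },
		{ "minus", Operator::minus },
		{ "times", Operator::times },
	};
	if( const auto found= table.find( name ); found != table.end() )
		return found->second;
	throw std::invalid_argument( "unknown operator: " + name );
}

// Enum -> behaviour dispatch, as a scripting layer might use it.
int
apply( const Operator op, const int a, const int b )
{
	switch( op )
	{
		case Operator::plus: return a + b;
		case Operator::minus: return a - b;
		case Operator::times: return a * b;
	}
	throw std::logic_error( "unreachable" );
}
```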
C++26 (I hope) is supposed to have this syntax:
```
template for( const auto &element: aggregate )
{
...;
}
```
Thus, I've adjusted this gadget to have a similar name, to enable
simple mechanical code changes. From Alepha, you'd use it
thus:
```
template_for( aggregate ) <= [&]( const auto &element )
{
...;
};
```
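For illustration, one plausible shape for the gadget when the
aggregate is a tuple (a sketch only; the actual Alepha
implementation presumably handles genuine aggregates):
```
#include <tuple>
#include <utility>

template< typename Tuple >
struct TemplateForProxy
{
	Tuple &aggregate;

	// `<=` applies the callable to each element, in order.
	template< typename Fn >
	void
	operator <=( Fn fn ) const
	{
		std::apply( [&]( auto &... elems ) { ( fn( elems ), ... ); }, aggregate );
	}
};

template< typename Tuple >
TemplateForProxy< Tuple >
template_for( Tuple &aggregate ) { return { aggregate }; }
```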
The `TableTest` mechanism now prints as much detailed information
as it can get about the case arguments in any failed test case.
Git-commit-built-by: Merge branch 'print-test-inputs'
This is useful when the same set of types has to go into a
std::variant, a std::tuple, and perhaps a few other template
lists.
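For instance, a single type list with a rebind alias covers all
of those uses; the names here are hypothetical, and the Alepha
spelling surely differs:
```
#include <string>
#include <tuple>
#include <variant>

template< typename ... Ts >
struct TypeList
{
	// Rebind the same pack of types onto any variadic template.
	template< template< typename ... > class Target >
	using apply= Target< Ts ... >;
};

using MyTypes= TypeList< int, double, std::string >;
using MyVariant= MyTypes::apply< std::variant >;  // std::variant< int, double, std::string >
using MyTuple= MyTypes::apply< std::tuple >;      // std::tuple< int, double, std::string >
```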
Git-commit-built-by: Merge branch 'make-template'
A type which cannot be printed when streamed in "Relaxed"
mode will simply print its typeid and nothing more. This is
opposed to the original behaviour, which was a compile-time
error.
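A minimal sketch of that fallback behaviour, assuming C++20 and
hypothetical names (this is not the actual Alepha streaming code):
```
#include <iostream>
#include <typeinfo>

template< typename T >
void
relaxedPrint( std::ostream &os, const T &value )
{
	// Streamable types print normally; anything else falls back
	// to the typeid instead of a compile-time error.
	if constexpr( requires { os << value; } ) os << value;
	else os << "[unprintable: " << typeid( T ).name() << "]";
}
```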
This probably needs to be expanded upon. The basic functionality
added is to permit a test-expectation clause to be a function
which takes some kind of exception type. That function can then
perform whatever checks and analyses it needs to confirm that
the caught exception passes muster for that test case.
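A sketch of the mechanism, independent of the actual `TableTest`
spelling (all names here are hypothetical):
```
#include <functional>

// Run `body`; the case passes only if it throws an Exception that
// the user-supplied `verify` predicate accepts.
template< typename Exception >
bool
passesExceptionExpectation( const std::function< void() > &body,
		const std::function< bool( const Exception & ) > &verify )
{
	try { body(); }
	catch( const Exception &ex ) { return verify( ex ); }
	catch( ... ) { return false; }  // Wrong exception type fails the case.
	return false;  // No exception at all also fails.
}
```
The `verify` predicate might, say, check `ex.what()` for an
expected substring, or inspect error-code fields on a custom
exception type.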