forked from Alepha/Alepha
Commit Graph

6 Commits

Author SHA1 Message Date
fc5ebba241 begin and end need conditional noexcept.
The specializations and overloads need to match what's in the
standard library.
2025-09-06 18:40:53 -04:00
54edf41d96 Fix building of Memory/Buffer.h 2024-09-06 18:09:54 -04:00
5efc8b79f0 Rewrote thread slab and added overflow protection. 2024-09-06 17:06:08 -04:00
9717ae49a4 Use fast random to decide when to split Blobs. 2024-09-05 18:48:07 -04:00
6c165b1603 Blob based per-thread slab allocator
This permits "stateless" allocators which grab memory from a
`thread_local Alepha::Blob` instance.  Each allocation
sticks a malloc cookie of type
`std::shared_ptr< Alepha::Blob::StorageReservation >`
just before the base of the allocation.

The allocator object knows that it needs to `reinterpret_cast`
the malloc cookie into a shared pointer and run its destructor.
This causes the Blob's underlying reference counted allocation
to be tied to the lifetime of the allocated memory.  The intent
is to permit cheap allocation in one thread and deallocation
in another.  Each deallocation should be a single atomic
dereference operation.  Each allocation should be (usually) a
bit of pointer arithmetic and a single atomic increment operation.

This, hopefully, eliminates significant thread contention for
the global allocation mechanism between various threads in
an intensive multithreaded situation where each processing
thread may independently retire data objects allocated by
a single source.
2024-09-05 18:35:07 -04:00
94b0a1561b Relocate Blob, Buffer, and DataChain to Memory. 2024-05-20 18:18:04 -04:00