In the Google Bigtable paper they use it like this. The reason it makes sense in that context is that bloom filters are tiny and can easily be held in memory even for a large dataset, whereas performing the actual lookup requires a slow disk access or network request. This way you can massively reduce the number of expensive lookups at the cost of a small amount of memory.
Also, since the underlying data format is immutable, the cost of recalculating the filter upon removal doesn't matter, because it never needs to happen.
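Roughly the pattern being described, as a Python sketch. The BloomFilter here is a toy implementation (not Bigtable's actual code), and slow_disk_read is just a stand-in for the expensive SSTable/network lookup:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: a small bit array plus a few hash functions."""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: str):
        # Derive num_hashes bit positions from the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        # False means "definitely not present"; True means "maybe present".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))


def slow_disk_read(key: str):
    # Stand-in for the expensive part (SSTable block read, network hop, ...).
    return None


def get(key: str, bloom: BloomFilter):
    # Cheap in-memory check first: a "no" here is definitive, so we skip
    # the slow lookup entirely.
    if not bloom.might_contain(key):
        return None
    # "Maybe" means we still pay for the real lookup, which can turn out
    # to be a false positive.
    return slow_disk_read(key)


bloom = BloomFilter()
bloom.add("row:existing-key")
get("row:existing-key", bloom)   # takes the slow lookup path
get("row:missing-key", bloom)    # almost always short-circuits in memory
```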
Compared to which alternative? I also don't understand exactly what the filter's purpose was in this case: did you have all DNS records in a bloom filter to quickly check for misses, or was it some kind of allowlist/blocklist of clients?
Finally, what metrics were you using to decide it did not benefit at all?
All records created by customers, yes. The alternative was our default structure, which is also held in RAM. The problem we faced is that a resolver can be targeted in several ways, one of which is querying for zone data that doesn't exist. When latency is the primary performance concern and you need a cheap way to check whether a request can (maybe) be served or (definitely) cannot, bloom filters look like the right tool to use.
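As a hypothetical sketch of that fast path (the names zone_bloom and authoritative_lookup are made up, and real response handling is obviously more involved):

```python
def handle_query(qname: str, zone_bloom, authoritative_lookup):
    """zone_bloom: Bloom filter over all customer-created zone names.
    authoritative_lookup: the normal, more expensive zone-data lookup."""
    # Definitive "no": the name cannot exist, so answer immediately without
    # touching the main structure. This is the cheap rejection path for
    # junk/attack queries against nonexistent zones.
    if not zone_bloom.might_contain(qname):
        return "NXDOMAIN"
    # "Maybe": fall through to the real lookup, which can still come back
    # empty on a Bloom filter false positive.
    record = authoritative_lookup(qname)
    return record if record is not None else "NXDOMAIN"
```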
Performance-wise (latency), there was no improvement in lookup performance, nor any reduction in CPU consumed. So for us, in that application, it didn't make sense.
Bigtable was the example I was thinking of. A massive k:v store that can start at 4, 8, 32 TB, more than you can hold in RAM. The problem is, you can only answer negative membership in the key space, not positive. So if you're looking up a key you either get an absolute no or an opportunity to scan the key space anyway.
What I've tended to find is that indexes make more sense in most scenarios. Not all, because there are always exceptions. But indexes tend to be faster, still consume vastly less space than the data being indexed, and they tend to offer more powerful features than just membership in a set.
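A toy illustration of that last point: a sorted index supports exact lookups and range scans, which a membership-only structure like a bloom filter can't answer. The keys here are made up:

```python
import bisect

index = sorted(["alpha", "bravo", "charlie", "delta", "echo"])

def exists(key: str) -> bool:
    # Exact-match lookup via binary search.
    i = bisect.bisect_left(index, key)
    return i < len(index) and index[i] == key

def range_scan(lo: str, hi: str):
    # All keys in [lo, hi): something set membership alone can't give you.
    return index[bisect.bisect_left(index, lo):bisect.bisect_left(index, hi)]

print(exists("charlie"))      # True
print(range_scan("b", "d"))   # ['bravo', 'charlie']
```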