Probabilistic data structures are a great fit for modern Big Data applications. They use hash functions to randomize item representations and keep memory usage small and roughly constant, supporting operations similar to those of hash tables but in far less space. The Bloom filter, invented by Burton Bloom in 1970, is an implementation of a probabilistic set: it can report that an item is definitely absent, or that it is probably present. Typical operations include membership tests and identifying unique or frequent items. Accuracy depends on both the bit-array size and the number of hash functions; adding more hash functions improves accuracy only up to an optimal point, beyond which the false-positive rate rises again. This combination of simplicity and versatility is why systems such as Cassandra use these structures when storing massive amounts of information.
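To make the idea concrete, here is a minimal Python sketch of a Bloom filter; the class name, sizes, and salting scheme are illustrative choices, not taken from any particular library:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k salted hash functions."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size            # number of bits in the filter
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive k indexes by salting a single hash function with i.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        # Set all k bits for this item; space stays constant.
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # False means definitely absent; True means probably present
        # (false positives are possible, false negatives are not).
        return all(self.bits[idx] for idx in self._indexes(item))
```

A lookup that finds any of the item's bits unset proves the item was never added, which is exactly the guarantee systems like Cassandra exploit to skip disk reads.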