O(n) time complexity for most operations #143

Open
gwy15 opened this issue Dec 3, 2020 · 0 comments

gwy15 commented Dec 3, 2020

Describe the bug
The LruCache is implemented with a BTreeMap and a VecDeque. Accessing a key calls the update_key method, which performs a VecDeque::remove, so most operations cost O(n) time.
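
To make the cost concrete, here is a minimal sketch of the pattern described above (hypothetical code, not the crate's actual implementation): moving a key to the "most recently used" end of a VecDeque first needs a linear search for its position, and VecDeque::remove then shifts the remaining elements, so every cache hit pays O(n).

use std::collections::VecDeque;

// Hypothetical stand-in for the update_key step: mark `key` as most
// recently used by moving it to the back of the recency queue.
fn update_key<K: PartialEq>(order: &mut VecDeque<K>, key: &K) {
    // Finding the key's position is a linear scan: O(n)...
    if let Some(pos) = order.iter().position(|k| k == key) {
        // ...and VecDeque::remove shifts the elements after it: also O(n).
        let k = order.remove(pos).expect("position came from the same deque");
        order.push_back(k);
    }
}

fn main() {
    let mut order: VecDeque<u32> = (0..5).collect();
    update_key(&mut order, &2);
    assert_eq!(order, VecDeque::from(vec![0, 1, 3, 4, 2]));
}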

Expected behavior
Key accesses should be O(1).

Additional context
Some other crates that get the complexity right:

  • lru-cache & linked-hash-map: use a linked hash map, O(1).
  • hashlink: uses a linked hash map, O(1) (see the sketch after this list).
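
As a rough comparison, here is a minimal sketch of the O(1) behaviour using hashlink's LruCache (the same API the benchmark below exercises; the capacity and keys here are arbitrary):

use hashlink::LruCache;

fn main() {
    let mut lru = LruCache::new(4);
    for i in 0..4u32 {
        lru.insert(i, i * 10);
    }
    // `get` takes `&mut self`: it finds the entry through the hash map and
    // bumps it to most-recently-used by relinking list pointers, so a hit
    // costs O(1) instead of a scan over the whole recency queue.
    assert_eq!(lru.get(&1), Some(&10));
    // Inserting past capacity evicts the least recently used entry (key 0).
    lru.insert(4, 40);
    assert!(lru.get(&0).is_none());
}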

Sample benchmark

#![feature(test)]

extern crate test;
use test::Bencher;

use std::cell::RefCell;
use std::collections::HashMap;
use lru_cache::LruCache as LruCacheLruCache;
use lru_time_cache::LruCache;
use hashlink::LruCache as HashLinkLruCache;
use rand::prelude::*;

fn main() { }

// Each bench pre-fills its cache with RANGE entries, then times FIND_TIMES random lookups.
const RANGE: usize = 32 * 1024;
const FIND_TIMES: usize = 10;

#[bench]
fn lru_time_cache_sum(b: &mut Bencher) {
    let mut lru = LruCache::with_capacity(RANGE);
    for i in 0..RANGE {
        lru.insert(i, i);
    }
    let lru = RefCell::new(lru);
    b.iter(|| {
        let res: usize = (0..FIND_TIMES)
            .map(|_| *lru
                .borrow_mut()
                .get(&thread_rng().gen_range(0, RANGE))
                .unwrap_or(&0))
            .sum();
        test::black_box(res);
    });
}

#[bench]
fn lru_cache_sum(b: &mut Bencher) {
    let mut lru = LruCacheLruCache::new(RANGE);
    for i in 0..RANGE {
        lru.insert(i, i);
    }
    let lru = RefCell::new(lru);
    b.iter(|| {
        let res: usize = (0..FIND_TIMES)
            .map(|_| *lru
                .borrow_mut()
                .get_mut(&thread_rng().gen_range(0, RANGE))
                .unwrap_or(&mut 0))
            .sum();
        test::black_box(res);
    });
}

#[bench]
fn hashlink_lru_cache_sum(b: &mut Bencher) {
    let mut lru = HashLinkLruCache::new(RANGE);
    for i in 0..RANGE {
        lru.insert(i, i);
    }
    let lru = RefCell::new(lru);
    b.iter(|| {
        let res: usize = (0..FIND_TIMES)
            .map(|_| *lru
                .borrow_mut()
                .get(&thread_rng().gen_range(0, RANGE))
                .unwrap_or(&0))
            .sum();
        test::black_box(res);
    });
}

#[bench]
fn hashmap_sum(b: &mut Bencher) {
    let mut map = HashMap::new();
    for i in 0..RANGE {
        map.insert(i, i);
    }
    b.iter(|| {
        let res: usize = (0..FIND_TIMES)
            .map(|_| *map
                .get(&thread_rng().gen_range(0, RANGE))
                .unwrap_or(&0))
            .sum();
        test::black_box(res);
    });
}
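
For reference: running this benchmark requires a nightly toolchain because of #![feature(test)] and #[bench] (e.g. cargo +nightly bench), and the two-argument gen_range call assumes the rand 0.7 API; exact timings will of course depend on the crate versions used.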
@gwy15 gwy15 added the bug label Dec 3, 2020
tomaszklak added a commit to NordSecurity/libtelio that referenced this issue Jul 25, 2023
Current lru cache is known to be slow:
maidsafe/lru_time_cache#143

This change replaces it with our own LRU cache based on hashlink and
rustc-hash. The new implementation uses tests from the original lru
crate. Additionally, the behaviour is verified to be the same using
property-based tests.

Signed-off-by: Tomasz Kłak <[email protected]>