Optional memory management for persistent services

Support a new context manager method, Language.memory_zone(), which allows long-running services to avoid growing memory usage from entries cached in the Vocab or StringStore. Once the memory zone block ends, spaCy evicts the Vocab and StringStore entries that were added during the block, freeing the memory. Doc objects created inside a memory zone block should not be accessed outside it.
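
For illustration, here's a minimal sketch of the intended pattern (assuming a blank English pipeline; the sample text is arbitrary and the exact store sizes will vary):

```python
import spacy

nlp = spacy.blank("en")
print(len(nlp.vocab.strings))  # size of the StringStore before the zone

with nlp.memory_zone():
    doc = nlp("Strings added here are transient")
    print(len(nlp.vocab.strings))  # larger: new entries were cached
    print(doc[0].text)  # fine: the Doc is used inside the block

print(len(nlp.vocab.strings))  # transient entries were evicted on exit
# `doc` must not be touched out here.
```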

The current implementation disables population of the tokenizer cache inside the memory zone, which has some performance impact. The difference will likely be negligible if you're running a full pipeline, but if you're only running the tokenizer, it will be much slower. If this is a problem, you can mitigate it by warming the cache first: process the first few batches of text without creating a memory zone. Support for memory zones in the tokenizer will be added in a future update.
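
As a rough sketch of that warm-up strategy (the batch size, batch count, and the `texts` stand-in are placeholders):

```python
from itertools import islice

import spacy
from spacy.util import minibatch

nlp = spacy.load("en_core_web_sm")
texts = ["example text"] * 5000  # stand-in for your real input stream
batches = minibatch(texts, 1000)

# Warm the tokenizer cache on a few batches, outside any memory zone.
for warm_batch in islice(batches, 3):
    for doc in nlp.pipe(warm_batch):
        pass

# Process the remaining batches inside memory zones as usual.
for batch in batches:
    with nlp.memory_zone():
        for doc in nlp.pipe(batch):
            ...  # use `doc` only inside the block
```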

The Language.memory_zone() context manager also checks for a memory_zone() method on pipeline components, so that components can perform similar memory management if necessary. None of the built-in components currently require this.
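
For example, a stateful component could implement the hook along these lines. This is only a sketch, assuming the hook is entered as a context manager; the component itself is hypothetical:

```python
from contextlib import contextmanager

from spacy.language import Language


@Language.factory("expensive_lookup")
def create_expensive_lookup(nlp: Language, name: str):
    return ExpensiveLookup()


class ExpensiveLookup:
    """Hypothetical component that caches a per-string computation."""

    def __init__(self):
        self.cache = {}

    def __call__(self, doc):
        for token in doc:
            self.cache.setdefault(token.text, len(token.text))  # stand-in work
        return doc

    @contextmanager
    def memory_zone(self):
        # Snapshot the cache keys, then drop anything added inside the zone.
        before = set(self.cache)
        try:
            yield
        finally:
            for key in set(self.cache) - before:
                del self.cache[key]
```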

If your component needs to add non-transient entries to the StringStore or Vocab, you can pass the allow_transient=False flag to the Vocab.add() or StringStore.add() methods.
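
For instance, a one-liner along these lines (the string itself is arbitrary):

```python
import spacy

nlp = spacy.blank("en")
# Keep this entry even if it's added while a memory zone is open.
nlp.vocab.strings.add("PERSISTENT_LABEL", allow_transient=False)
```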

Example usage:

```python
import json
from collections import Counter
from pathlib import Path
from typing import Iterator

import spacy
import typer
from spacy.util import minibatch


def texts(path: Path) -> Iterator[str]:
    """Stream the "text" field from a JSONL file, one document per line."""
    with path.open("r", encoding="utf8") as file_:
        for line in file_:
            yield json.loads(line)["text"]


def main(jsonl_path: Path) -> None:
    nlp = spacy.load("en_core_web_sm")
    counts = Counter()
    batches = minibatch(texts(jsonl_path), 1000)
    for i, batch in enumerate(batches):
        print("Batch", i)
        # Entries cached while processing this batch are evicted on exit.
        with nlp.memory_zone():
            for doc in nlp.pipe(batch):
                for token in doc:
                    # Plain Python strings are safe to keep past the zone.
                    counts[token.text] += 1
    for word, count in counts.most_common(100):
        print(count, word)


if __name__ == "__main__":
    typer.run(main)
```
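
With typer installed, you could run this along the lines of `python count_words.py texts.jsonl` (both file names are hypothetical); each line of the JSONL file is expected to be an object with a "text" field.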