I'm building a local-first macOS productivity tracker that reads from the system's KnowledgeC database to analyze focus time and context switches. Since querying this SQLite database can be expensive for aggregations, I implemented a simple in-memory cache with TTL.

Context: The application serves a web dashboard showing daily/weekly metrics, and users might refresh frequently. The data doesn't need to be real-time (1-2 minute staleness is acceptable).

Here's my in-memory-only cache implementation:

"""Simple in-memory cache with TTL."""

import time
from dataclasses import dataclass
from typing import Any


@dataclass
class CacheEntry:
    """Cache entry with expiration."""

    value: Any
    expires_at: float


class InMemoryCache:
    """Simple in-memory cache with TTL expiration."""

    def __init__(self, ttl_seconds: int = 60) -> None:
        self.ttl_seconds = ttl_seconds
        self._cache: dict[str, CacheEntry] = {}

    def get(self, key: str) -> Any | None:
        """Get value from cache if not expired."""
        entry = self._cache.get(key)
        if entry is None:
            return None

        if time.time() > entry.expires_at:
            del self._cache[key]
            return None

        return entry.value

    def set(self, key: str, value: Any) -> None:
        """Set value in cache with TTL."""
        expires_at = time.time() + self.ttl_seconds
        self._cache[key] = CacheEntry(value=value, expires_at=expires_at)

    def clear(self) -> None:
        """Clear all cache entries."""
        self._cache.clear()
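
For context, this is roughly how a dashboard endpoint uses it. The route path, database path, and query_daily_metrics are simplified stand-ins for the real aggregation code, not the actual project layout:

import sqlite3

from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool

app = FastAPI()
cache = InMemoryCache(ttl_seconds=60)


def query_daily_metrics(db_path: str) -> dict:
    """Stand-in for the expensive KnowledgeC aggregation query (~50 ms)."""
    with sqlite3.connect(db_path) as conn:
        ...  # real aggregation SQL omitted
    return {}


@app.get("/metrics/daily")
async def daily_metrics() -> dict:
    cached = cache.get("metrics:daily")
    if cached is not None:
        return cached
    # Run the blocking SQLite query off the event loop, then cache the result.
    result = await run_in_threadpool(query_daily_metrics, "knowledgeC.db")
    cache.set("metrics:daily", result)
    return result

The cache itself is only touched from the async handlers, but the query runs in a threadpool worker, which is part of why I'm unsure about concern 3 below.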

My specific concerns:

  1. Is using a dict with manual TTL checking too simplistic? Should I use cachetools or similar? (I've sketched the kind of replacement I mean below this list.)
  2. Should I add an L2 cache at the file/directory level, i.e. results persisted to disk?
  3. Thread safety: the FastAPI app is async, but the cache operations aren't protected by any lock.
  4. Memory management: there's no maximum size limit, only lazy TTL-based eviction when a key is read.
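
To make concerns 1, 3, and 4 concrete, this is the kind of replacement I'm considering: a bounded TTLCache from cachetools behind a plain threading.Lock. This is only a sketch, and the maxsize value is arbitrary:

import threading
from typing import Any

from cachetools import TTLCache

# Bounded: at most 128 entries, each expiring 60 seconds after insertion.
_cache: TTLCache = TTLCache(maxsize=128, ttl=60)
_lock = threading.Lock()


def cache_get(key: str) -> Any | None:
    with _lock:
        return _cache.get(key)


def cache_set(key: str, value: Any) -> None:
    with _lock:
        _cache[key] = value

I'm not sure the lock is strictly necessary given the GIL and the fact that there are no awaits between my cache operations, which is really what concern 3 is asking.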

The full project is available here if more context is needed, but I'm primarily interested in feedback on this caching approach.

Current behavior:

  • Cache hits reduce query time from ~50ms to <1ms
  • TTL of 60 seconds works well for my use case
  • No observed memory issues with daily/weekly queries only
