A framework that gives compute-constrained AI agents structured spatiotemporal memory. Built over a weekend at the Mistral Worldwide 2026 Hackathon with Daryl (he still needs to set up an online profile of some kind).
VIGIL represents scenes as graphs (tracked objects, spatial relations, motion events) and builds two kinds of memory on top of them.
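To make the scene-graph idea concrete, here is a minimal sketch of what such a representation could look like. This is purely illustrative: all class and field names below are assumptions, not VIGIL's actual data model.

```python
# Hypothetical scene-graph sketch: one frame holds tracked objects,
# pairwise spatial relations, and motion events. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class TrackedObject:
    track_id: int  # stable identity across frames
    label: str     # detector class, e.g. "person"
    bbox: tuple    # (x1, y1, x2, y2) in pixels


@dataclass
class SpatialRelation:
    subject_id: int  # track_id of the subject object
    relation: str    # e.g. "left_of", "on_top_of"
    object_id: int   # track_id of the reference object


@dataclass
class MotionEvent:
    track_id: int
    event: str        # e.g. "entered", "moving_right"
    timestamp: float  # seconds since stream start


@dataclass
class SceneGraph:
    timestamp: float
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)
    events: list = field(default_factory=list)


# Build one frame's graph
frame = SceneGraph(timestamp=12.5)
frame.objects.append(TrackedObject(1, "person", (10, 20, 110, 220)))
frame.objects.append(TrackedObject(2, "chair", (120, 150, 200, 230)))
frame.relations.append(SpatialRelation(1, "left_of", 2))
frame.events.append(MotionEvent(1, "entered", 12.5))
```

A structure like this is cheap to store and diff per frame, which is what makes it attractive for compute-constrained agents.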
The whole system can be run on-device. Model-agnostic by design; bring your own detector, LLM, and UI.
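The "bring your own detector" design can be sketched as a plain callable contract: anything that maps a frame to a list of detections can be plugged in. The names and the detection tuple shape below are assumptions for illustration, not VIGIL's actual interface.

```python
# Hypothetical model-agnostic detector contract (illustrative only):
# a detector is any callable returning (label, confidence, bbox) tuples.
from typing import Callable, List, Tuple

Detection = Tuple[str, float, Tuple[int, int, int, int]]  # (label, score, bbox)
Detector = Callable[[object], List[Detection]]


def run_detector(frame, detector: Detector, min_score: float = 0.5) -> List[Detection]:
    # Filter low-confidence detections; downstream stages build the
    # scene graph from whatever detector the user supplies.
    return [d for d in detector(frame) if d[1] >= min_score]


# A stub detector standing in for any on-device model
def stub_detector(frame) -> List[Detection]:
    return [
        ("person", 0.9, (0, 0, 50, 100)),
        ("cat", 0.3, (5, 5, 20, 20)),  # below threshold, filtered out
    ]


detections = run_detector(None, stub_detector)
```

Keeping the contract this narrow is what lets the detector, LLM, and UI be swapped independently.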
See the GitHub repository for more information.