Introducing Quidoris Engine - Creating an unlimited context "window" you can run locally and harness to any LLM.
Core idea: Don’t stuff your library into the model. Let the model interact with the library.
Recursive Language Models (RLMs) promise effectively unlimited context windows by providing an environment in which a model can "scan" a document library and retrieve information from it by taking small snapshots. The model then either answers the user's prompt directly from those snapshots or calls small sub-agents to help compose its reply. This model type promises to reduce context rot to zero.
The Quidoris Engine is an RLM environment: a harness that lives independently of any RLM, but with which RLMs can dock to use its environment and run with an effectively unlimited context window. Today's LLMs are limited to context windows of a few million tokens.
RLMs, in concert with the Quidoris Engine, have unlimited context windows: think context windows spanning 1,000 or even 10,000 documents. This is what the Quidoris Engine provides for every RLM.
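As a toy sketch of this idea (every name below is hypothetical and not part of the Quidoris API): the model never sees the whole library, only small snapshots retrieved from an external environment, and it answers from those snapshots.

```python
# Hypothetical sketch of an RLM-style loop. The "library" stays outside the
# model; the model only receives small retrieved excerpts ("snapshots").

def retrieve_snapshots(library, query, k=2):
    """Return up to k short excerpts whose text mentions the query term."""
    hits = [doc for doc in library if query.lower() in doc.lower()]
    return [doc[:80] for doc in hits[:k]]  # snapshot = small excerpt, not the full doc

def answer(library, prompt, query):
    """Answer a prompt from retrieved snapshots instead of the full library."""
    snapshots = retrieve_snapshots(library, query)
    if not snapshots:
        return "No evidence found."
    # A real RLM would hand these snapshots to a model (or sub-agents);
    # this sketch just cites them verbatim.
    return f"{prompt} -> evidence: " + " | ".join(snapshots)

library = [
    "The Quidoris Engine indexes documents with SQLite FTS5.",
    "Unrelated note about something else entirely.",
]
print(answer(library, "How is indexing done?", "FTS5"))
```

The point of the sketch: context never accumulates in one prompt, so the library can grow without the model's window growing.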
Quidoris Engine (RLM Harness)
Quidoris Engine is a local-first inference harness inspired by Recursive Language Models (RLMs): it treats long prompts and large document libraries as an external environment, letting models search, read, cite, and recurse—instead of cramming everything into a single context window.
It runs as a local daemon with an HTTP API (SSE-friendly) and ships with a lightweight local web UI.
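Since the daemon streams over SSE, a client needs to split the byte stream into events. A minimal parser for the standard SSE wire format is sketched below (the daemon's actual endpoints and event names are not specified here):

```python
# Minimal Server-Sent Events (SSE) parser. Per the SSE format, events are
# separated by a blank line, and each "data:" line contributes one payload
# line to the current event.

def parse_sse(stream_text):
    """Split raw SSE text into a list of event data payloads."""
    events, data_lines = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())   # drop the field name and leading space
        elif line == "" and data_lines:            # blank line terminates an event
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:                                 # flush a trailing, unterminated event
        events.append("\n".join(data_lines))
    return events

raw = "data: hello\n\ndata: part1\ndata: part2\n\n"
print(parse_sse(raw))  # -> ['hello', 'part1\npart2']
```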
Model Details
Model Description
An RLM inference harness that lets any user run RLMs locally on their own machine.
What this is
A local daemon (Bun) + SQLite/FTS5 index for fast retrieval
A provider-agnostic harness (BYOK): Local CLI / Hugging Face / OpenAI-compatible providers
A UI Launcher that can auto-start the daemon (since browsers can’t spawn local processes)
An “evidence-first” workflow: citations and evidence spotlight are first-class
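To make the SQLite/FTS5 retrieval layer concrete, here is a minimal sketch of an FTS5-backed document index (the table and column names are illustrative, not the Quidoris schema):

```python
import sqlite3

# Requires SQLite built with the FTS5 extension, which is standard in
# CPython's bundled SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("readme", "Quidoris Engine is a local-first inference harness."),
        ("notes", "SQLite FTS5 gives fast full-text retrieval."),
    ],
)

# MATCH runs a full-text query; bm25() ranks hits by relevance (lower = better).
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    ("retrieval",),
).fetchall()
print(rows)  # -> [('notes',)]
```

An index like this is what lets the harness serve snapshots quickly without ever loading the whole library into a prompt.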