How it works

context master converts your workspace into structured, graph-shaped context by combining VSCode’s symbol/reference providers with ranked projection and token-aware serialization.

  1. Initialize

     On activation, the extension creates an index of the current workspace.

  2. Extract structure

     VSCode language providers surface symbols and references. context master stitches them into a navigable graph across files.

  3. Rank symbols

     Centrality metrics help create a high-level topology of the codebase.

  4. Serve MCP tools

     The extension starts a local MCP server. Any MCP-capable assistant can connect and call tools to explore your codebase.
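The pipeline above can be sketched roughly as follows. All type and function names here are illustrative assumptions, not context master's actual API; the real extension works with data from VSCode's symbol and reference providers rather than hand-built records.

```typescript
// Hypothetical shape of one indexed symbol (not the extension's real types).
interface SymbolNode {
  id: string;        // deterministic symbol ID
  name: string;
  uri: string;       // file the symbol lives in
  refs: string[];    // IDs of symbols this one references
}

// Step 2 (sketch): stitch per-file symbols into one cross-file graph.
function buildGraph(symbols: SymbolNode[]): Map<string, SymbolNode> {
  const graph = new Map<string, SymbolNode>();
  for (const s of symbols) graph.set(s.id, s);
  return graph;
}

// Step 3 (sketch): rank by a trivial centrality, the incoming reference count.
function rankByInDegree(graph: Map<string, SymbolNode>): string[] {
  const inDegree = new Map<string, number>();
  for (const s of graph.values()) {
    for (const ref of s.refs) {
      inDegree.set(ref, (inDegree.get(ref) ?? 0) + 1);
    }
  }
  return [...graph.keys()].sort(
    (a, b) => (inDegree.get(b) ?? 0) - (inDegree.get(a) ?? 0)
  );
}
```

A symbol referenced from many places floats to the top of the ranking, which is the intuition behind step 3 even though the real metric is more sophisticated.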

Why it's a good idea

Most agents rely on regex and naive file reading. context master uses VSCode’s language providers to expose the same kind of structure developers rely on: symbols, references, and relationships across files.

Language Server Protocol
We don’t parse text; we traverse the VSCode symbol tree for each file and connect those nodes across files into a symbolic graph.
Local-First Indexing
Indexing happens in your editor. No data ever leaves your machine. You remain in control of how you access and use the provided MCP server.
Automated Context Management
Gone are the days of manually adding files to the agent's context. Instead, agents can query for exactly the context relevant to the current task.
Token Efficiency
Regular agents have to read whole files, which bloats the context window and increases the risk of hallucinations. context master, on the other hand, reads the implementations of individual symbols and thus saves tokens by fetching only the necessary context.
Fuzzy Symbol Search
Even if the LLM hallucinates a symbol's ID, the extension suggests candidate symbols it may actually have been looking for, reducing the errors the agent makes.
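One simple way to produce such candidates is edit distance over known symbol IDs. This is a minimal sketch of the idea; the function names are hypothetical and the extension's actual matching strategy may differ.

```typescript
// Classic Levenshtein edit distance via dynamic programming.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return the known IDs closest to the (possibly hallucinated) query.
function suggestCandidates(query: string, knownIds: string[], max = 3): string[] {
  return [...knownIds]
    .sort((x, y) => editDistance(query, x) - editDistance(query, y))
    .slice(0, max);
}
```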
Graph Centrality
Code isn't linear; it's a connected graph of callers and callees. We calculate centrality to identify architectural pillars versus utility scripts and give the agent a heatmap of importance.
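For intuition, a centrality like PageRank can be computed over a call graph by power iteration. This sketch assumes a caller-to-callees edge map and ignores refinements such as redistributing mass from dangling nodes; it is not the extension's actual metric.

```typescript
// Power-iteration PageRank over a call graph: caller -> callees.
// Every node must appear as a key, even if it calls nothing.
function pageRank(
  edges: Map<string, string[]>,
  iters = 30,
  d = 0.85 // damping factor
): Map<string, number> {
  const nodes = [...edges.keys()];
  const n = nodes.length;
  let rank = new Map(nodes.map(k => [k, 1 / n]));
  for (let t = 0; t < iters; t++) {
    // Base score from random jumps, then distribute each node's rank to its callees.
    const next = new Map(nodes.map(k => [k, (1 - d) / n]));
    for (const [src, outs] of edges) {
      if (outs.length === 0) continue;
      const share = (d * rank.get(src)!) / outs.length;
      for (const dst of outs) next.set(dst, (next.get(dst) ?? 0) + share);
    }
    rank = next;
  }
  return rank;
}
```

A utility function called from many entry points ends up with a higher score than any single caller, which is exactly the "architectural pillar versus utility script" signal described above.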
Large Repository Support: Indexing
Theoretically, the extension can index repositories of any size. In practice, large repos take significant memory and time to index, and indexing performance is one of our key areas of focus.
Large Repository Support: Token Budgets
Our algorithms ensure that even very large codebases under tight token budgets still yield a useful topological overview. Symbols are ranked by their importance in the graph, and once a configurable token budget is crossed, everything that isn't needed for an architectural viewpoint is pruned.
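The rank-then-prune idea reduces to a simple greedy selection: keep the highest-ranked symbols whose serialized size still fits the budget. This is a minimal sketch under that assumption; the `RankedSymbol` shape and the token accounting are illustrative only.

```typescript
// Hypothetical ranked symbol with an estimated serialization cost in tokens.
interface RankedSymbol {
  id: string;
  score: number;     // centrality / importance
  tokenCost: number; // estimated tokens to serialize this symbol
}

// Greedily keep the most important symbols that fit within the token budget.
function pruneToBudget(symbols: RankedSymbol[], budget: number): RankedSymbol[] {
  const kept: RankedSymbol[] = [];
  let used = 0;
  for (const s of [...symbols].sort((a, b) => b.score - a.score)) {
    if (used + s.tokenCost <= budget) {
      kept.push(s);
      used += s.tokenCost;
    }
  }
  return kept;
}
```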
Grounded & Traceable
Outputs are anchored to real VSCode locations (uri + range). This grounding reduces hallucinations because the agent can navigate directly to references without having to rely on regex pattern matching.
Deterministic Symbol IDs
Symbols are addressed by stable identifiers which carry semantic meaning. This helps the agent differentiate between symbols with the same name in different files.
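One common scheme for such identifiers combines the file path, the container chain, and the symbol name. The format below is an assumption for illustration, not context master's actual ID scheme.

```typescript
// Hypothetical deterministic symbol ID: path#Container.name.
// Two symbols with the same name in different files or containers
// always produce different IDs.
function makeSymbolId(
  relativePath: string,
  containerChain: string[], // e.g. enclosing class or namespace names
  name: string
): string {
  const container = containerChain.length ? containerChain.join(".") + "." : "";
  return `${relativePath}#${container}${name}`;
}
```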
Multi-Root Workspaces Support (Early)
We provide preliminary support for multi-root workspaces. Indexing is scoped to individual workspace folders. We're working on resolving references across workspace-folder boundaries.

What to expect

context master is designed to make agents more reliable by grounding exploration in editor-resolved symbols and references. Here’s what that means in practice.

What you can rely on
  • Every returned symbol is anchored to a real VSCode location (uri + range).
  • No hallucinated symbols: everything is grounded in data returned from the language servers that VSCode runs in the background.
  • Incremental updates ensure the tools never provide outdated information.
  • Full control over the indexing process through inclusions and exclusions.
What to keep in mind
  • The proper language extensions have to be installed in VSCode for a language to be indexed.
  • If VSCode can’t resolve symbols/references for a language, results may be limited.
  • Large repos can take time to index the first time; exclusions help keep it fast.
  • An agent can still make incorrect decisions — context master reduces blind spots, it doesn’t replace review.
