
Implementation Plan: Relational Document System for Neovim

Branch: 002-notex-is-a | Date: 2025-10-03 | Spec: /specs/002-notex-is-a/spec.md
Input: Feature specification from /specs/002-notex-is-a/spec.md

Execution Flow (/plan command scope)

1. Load feature spec from Input path
   → If not found: ERROR "No feature spec at {path}"
2. Fill Technical Context (scan for NEEDS CLARIFICATION)
   → Detect Project Type from file system structure or context (web=frontend+backend, mobile=app+api)
   → Set Structure Decision based on project type
3. Fill the Constitution Check section based on the content of the constitution document.
4. Evaluate Constitution Check section below
   → If violations exist: Document in Complexity Tracking
   → If no justification possible: ERROR "Simplify approach first"
   → Update Progress Tracking: Initial Constitution Check
5. Execute Phase 0 → research.md
   → If NEEDS CLARIFICATION remain: ERROR "Resolve unknowns"
6. Execute Phase 1 → contracts, data-model.md, quickstart.md, agent-specific template file (e.g., `CLAUDE.md` for Claude Code, `.github/copilot-instructions.md` for GitHub Copilot, `GEMINI.md` for Gemini CLI, `QWEN.md` for Qwen Code, or `AGENTS.md` for all other agents).
7. Re-evaluate Constitution Check section
   → If new violations: Refactor design, return to Phase 1
   → Update Progress Tracking: Post-Design Constitution Check
8. Plan Phase 2 → Describe task generation approach (DO NOT create tasks.md)
9. STOP - Ready for /tasks command

IMPORTANT: The /plan command STOPS after step 9 — it only describes the Phase 2 approach (step 8), it does not execute it. Phases 2-4 are executed by other commands:

  • Phase 2: /tasks command creates tasks.md
  • Phase 3-4: Implementation execution (manual or via tools)

Summary

A Neovim plugin that provides a relational document system similar to Notion, enabling users to query, filter, and view markdown documents based on YAML header properties through custom syntax and virtual buffers.
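To make the idea concrete (the query syntax has not been designed yet; everything below is an illustrative assumption, not a committed format), a document exposes properties through its YAML header, and a query block elsewhere selects matching documents into a virtual buffer:

```yaml
# notes/weekly-review.md begins with a YAML header of queryable properties
---
title: Weekly review
status: draft
tags: [planning, notes]
created: 2025-10-01
---
```

```
-- hypothetical query block, rendered by the plugin into a virtual buffer
FROM notes/
WHERE status = "draft" AND created >= 2025-09-01
SORT BY created DESC
```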

Technical Context

Language/Version: Lua (Neovim-compatible)
Primary Dependencies: SQLite (for performant indexing and querying)
Storage: SQLite database + markdown files with YAML headers
Testing: busted (Lua testing framework)
Target Platform: Neovim
Project Type: Single project (Neovim plugin)
Performance Goals: Query execution <100 ms; indexing of thousands of documents
Constraints: Non-blocking queries, minimal dependencies, Lua-only implementation
Scale/Scope: Libraries of several thousand markdown documents
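The non-blocking constraint can be sketched as follows. This is a minimal illustration only: `notex.query.executor` and its `run` function are assumptions of this plan, not an existing API.

```lua
-- Sketch only: defer query work off the current keystroke so the UI stays
-- responsive. A fuller implementation would run the SQLite call itself on a
-- worker (e.g. via vim.uv.new_work); here only delivery is scheduled.
local executor = require("notex.query.executor")  -- hypothetical module

local function run_query_async(sql, params, on_done)
  vim.schedule(function()
    local ok, results = pcall(executor.run, sql, params)
    on_done(ok, results)  -- runs on the main loop, safe for buffer updates
  end)
end
```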

Constitution Check

GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.

  • I. Clean Code: Is the proposed code structure and design clean and maintainable?
  • II. Functional Style: Does the design favor functional approaches for data transformation?
  • III. Descriptive Coding: Is the naming of components and files descriptive and self-documenting?
  • IV. Test-First Development: Are comprehensive tests planned before implementation?
  • V. Performance by Design: Are performance considerations adequately addressed?

Project Structure

Documentation (this feature)

specs/[###-feature]/
├── plan.md              # This file (/plan command output)
├── research.md          # Phase 0 output (/plan command)
├── data-model.md        # Phase 1 output (/plan command)
├── quickstart.md        # Phase 1 output (/plan command)
├── contracts/           # Phase 1 output (/plan command)
└── tasks.md             # Phase 2 output (/tasks command - NOT created by /plan)

Source Code (repository root)

lua/
└── notex/
    ├── init.lua              # Plugin entry point and setup
    ├── database/
    │   ├── init.lua         # Database connection and initialization
    │   ├── schema.lua       # Database schema management
    │   └── migrations.lua   # Database migration handling
    ├── parser/
    │   ├── init.lua         # Document parsing coordination
    │   ├── yaml.lua         # YAML header extraction and parsing
    │   └── markdown.lua     # Markdown content processing
    ├── query/
    │   ├── init.lua         # Query engine coordination
    │   ├── parser.lua       # Query syntax parsing
    │   ├── executor.lua     # Query execution logic
    │   └── builder.lua      # SQL query construction
    ├── ui/
    │   ├── init.lua         # UI coordination
    │   ├── buffer.lua       # Virtual buffer management
    │   ├── view.lua         # Query result visualization
    │   └── editor.lua       # Inline editing interface
    ├── index/
    │   ├── init.lua         # Document indexing coordination
    │   ├── scanner.lua      # File system scanning
    │   └── updater.lua      # Incremental index updates
    └── utils/
        ├── init.lua         # Utility functions
        ├── date.lua         # Date parsing and formatting
        ├── types.lua        # Type detection and conversion
        └── validation.lua   # Data validation helpers

tests/
├── unit/                   # Unit tests for individual modules
│   ├── database/
│   ├── parser/
│   ├── query/
│   ├── ui/
│   ├── index/
│   └── utils/
├── integration/            # Integration tests for workflows
│   ├── test_query_workflow.lua
│   ├── test_document_indexing.lua
│   └── test_virtual_buffer.lua
└── contract/               # Contract tests from API definitions
    ├── test_query_api.lua
    └── test_document_api.lua

Structure Decision: Single project structure optimized for Neovim plugin architecture with clear separation of concerns across domains (database, parsing, querying, UI, indexing).
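As a sketch of what database/schema.lua might hold (column names are assumptions; the real data model is a Phase 1 output), a documents table plus a key-value properties table keeps arbitrary YAML headers queryable without a fixed column set:

```lua
-- Hypothetical first-cut schema; not the final data-model.md.
local SCHEMA = [[
CREATE TABLE IF NOT EXISTS documents (
  id         INTEGER PRIMARY KEY,
  path       TEXT NOT NULL UNIQUE,
  mtime      INTEGER NOT NULL,       -- enables incremental re-indexing
  indexed_at INTEGER NOT NULL
);

CREATE TABLE IF NOT EXISTS properties (
  document_id INTEGER NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
  key         TEXT NOT NULL,         -- YAML header key
  value       TEXT,                  -- stored as text; typed at query time
  type        TEXT NOT NULL          -- 'string' | 'number' | 'date' | 'boolean'
);

CREATE INDEX IF NOT EXISTS idx_properties_key_value
  ON properties(key, value);
]]

return { SCHEMA = SCHEMA }
```

The key-value layout trades some query complexity for schema stability: new YAML properties never require a migration.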

Phase 0: Outline & Research

  1. Extract unknowns from Technical Context above:

    • For each NEEDS CLARIFICATION → research task
    • For each dependency → best practices task
    • For each integration → patterns task
  2. Generate and dispatch research agents:

    For each unknown in Technical Context:
      Task: "Research {unknown} for {feature context}"
    For each technology choice:
      Task: "Find best practices for {tech} in {domain}"
    
  3. Consolidate findings in research.md using format:

    • Decision: [what was chosen]
    • Rationale: [why chosen]
    • Alternatives considered: [what else evaluated]

Output: research.md with all NEEDS CLARIFICATION resolved
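For example, a research.md entry for the storage decision might look like this (content illustrative only):

```markdown
## SQLite access from Lua

- Decision: bundle a Lua SQLite binding with the plugin
- Rationale: keeps the implementation Lua-only with minimal dependencies
- Alternatives considered: shelling out to the sqlite3 CLI (per-query
  process overhead works against the <100 ms goal); a JSON flat-file
  index (filtering degrades at several thousand documents)
```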

Phase 1: Design & Contracts

Prerequisites: research.md complete

  1. Extract entities from feature spec → data-model.md:

    • Entity name, fields, relationships
    • Validation rules from requirements
    • State transitions if applicable
  2. Generate API contracts from functional requirements:

    • For each user action → endpoint
    • Use standard REST/GraphQL patterns
    • Output OpenAPI/GraphQL schema to /contracts/
  3. Generate contract tests from contracts:

    • One test file per endpoint
    • Assert request/response schemas
    • Tests must fail (no implementation yet)
  4. Extract test scenarios from user stories:

    • Each story → integration test scenario
    • Quickstart test = story validation steps
  5. Update agent file incrementally (O(1) operation):

    • Run `.specify/scripts/bash/update-agent-context.sh claude`. IMPORTANT: execute it exactly as written above; do not add or remove any arguments.
    • If exists: Add only NEW tech from current plan
    • Preserve manual additions between markers
    • Update recent changes (keep last 3)
    • Keep under 150 lines for token efficiency
    • Output to repository root

Output: data-model.md, /contracts/*, failing tests, quickstart.md, agent-specific file
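A contract test in busted would start red by requiring the not-yet-implemented module; the module path and result fields below are assumptions standing in for the actual contract:

```lua
-- tests/contract/test_query_api.lua (sketch)
describe("query API contract", function()
  it("returns documents matching a property filter", function()
    local query = require("notex.query")      -- fails until implemented
    local result = query.execute('status = "draft"')
    assert.is_table(result.documents)         -- shape asserted per contract
    assert.is_number(result.total)
  end)
end)
```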

Phase 2: Task Planning Approach

This section describes what the /tasks command will do - DO NOT execute during /plan

Task Generation Strategy:

  • Load .specify/templates/tasks-template.md as base
  • Generate tasks from Phase 1 design docs (contracts, data model, quickstart)
  • Query API contract → query parsing and execution test tasks [P]
  • Database schema models → schema and migration tasks [P]
  • Document parser contracts → YAML parsing and indexing tasks [P]
  • UI contracts → virtual buffer and view tasks [P]
  • Quickstart scenarios → integration test tasks
  • Implementation tasks to make tests pass

Ordering Strategy:

  • TDD order: Tests before implementation
  • Dependency order: Database → Parser → Query → UI → Integration
  • Mark [P] for parallel execution (independent files)

Estimated Output: 28-32 numbered, ordered tasks in tasks.md covering:

  • Database setup and schema (4-5 tasks)
  • Document parsing and indexing (6-7 tasks)
  • Query parsing and execution (6-7 tasks)
  • Virtual buffer UI (5-6 tasks)
  • Integration and testing (4-5 tasks)
  • Documentation and polish (3-4 tasks)

IMPORTANT: This phase is executed by the /tasks command, NOT by /plan

Phase 3+: Future Implementation

These phases are beyond the scope of the /plan command

Phase 3: Task execution (/tasks command creates tasks.md)
Phase 4: Implementation (execute tasks.md following constitutional principles)
Phase 5: Validation (run tests, execute quickstart.md, performance validation)

Complexity Tracking

Fill ONLY if Constitution Check has violations that must be justified

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|--------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

Progress Tracking

This checklist is updated during execution flow

Phase Status:

  • Phase 0: Research complete (/plan command)
  • Phase 1: Design complete (/plan command)
  • Phase 2: Task planning complete (/plan command - describe approach only)
  • Phase 3: Tasks generated (/tasks command)
  • Phase 4: Implementation complete
  • Phase 5: Validation passed

Gate Status:

  • Initial Constitution Check: PASS
  • Post-Design Constitution Check: PASS
  • All NEEDS CLARIFICATION resolved
  • Complexity deviations documented

Based on Constitution v1.0.0 - See /memory/constitution.md