Initial vibecoded proof of concept

Alex Selimov 2025-10-05 20:16:33 -04:00
parent 74812459af
commit 461318a656
Signed by: aselimov
GPG key ID: 3DDB9C3E023F1F31
61 changed files with 13306 additions and 0 deletions

.claude/commands/analyze.md Normal file

@@ -0,0 +1,101 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
Goal: Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`.
STRICTLY READ-ONLY: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
Constitution Authority: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`.
Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md
Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
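A minimal shell sketch of this step, assuming the script emits a flat JSON object with these fields (the sample payload and paths below are hypothetical, and `python3` stands in for whatever JSON parser is available):

```shell
# Hypothetical payload as it might be emitted by check-prerequisites.sh --json
json='{"FEATURE_DIR":"/repo/specs/001-example","AVAILABLE_DOCS":["spec.md","plan.md","tasks.md"]}'

# Parse FEATURE_DIR once; derive the three absolute artifact paths from it
FEATURE_DIR=$(printf '%s' "$json" |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["FEATURE_DIR"])')
SPEC="$FEATURE_DIR/spec.md"
PLAN="$FEATURE_DIR/plan.md"
TASKS="$FEATURE_DIR/tasks.md"

# Report (rather than silently skip) any missing required artifact
for f in "$SPEC" "$PLAN" "$TASKS"; do
  [ -f "$f" ] || echo "missing required artifact: $f (run the prerequisite command)" >&2
done
```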
2. Load artifacts:
- Parse spec.md sections: Overview/Context, Functional Requirements, Non-Functional Requirements, User Stories, Edge Cases (if present).
- Parse plan.md: Architecture/stack choices, Data Model references, Phases, Technical constraints.
- Parse tasks.md: Task IDs, descriptions, phase grouping, parallel markers [P], referenced file paths.
- Load constitution `.specify/memory/constitution.md` for principle validation.
3. Build internal semantic models:
- Requirements inventory: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" -> `user-can-upload-file`).
- User story/action inventory.
- Task coverage mapping: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases).
- Constitution rule set: Extract principle names and any MUST/SHOULD normative statements.
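For illustration, the slug derivation described above could be sketched in shell as follows (the `requirement_slug` helper name is hypothetical):

```shell
# Lowercase the phrase, drop punctuation, and join words with hyphens,
# e.g. "User can upload file" -> "user-can-upload-file"
requirement_slug() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9 -]//g' \
          -e 's/  */-/g' \
          -e 's/--*/-/g' \
          -e 's/^-//' -e 's/-$//'
}

requirement_slug "User can upload file"
```

Because the transformation is deterministic, rerunning the analysis over an unchanged spec yields the same keys, which supports the stable-ID requirement later in this command.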
4. Detection passes:
A. Duplication detection:
- Identify near-duplicate requirements. Mark lower-quality phrasing for consolidation.
B. Ambiguity detection:
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria.
- Flag unresolved placeholders (TODO, TKTK, ???, <placeholder>, etc.).
C. Underspecification:
- Requirements with verbs but missing object or measurable outcome.
- User stories missing acceptance criteria alignment.
- Tasks referencing files or components not defined in spec/plan.
D. Constitution alignment:
- Any requirement or plan element conflicting with a MUST principle.
- Missing mandated sections or quality gates from constitution.
E. Coverage gaps:
- Requirements with zero associated tasks.
- Tasks with no mapped requirement/story.
- Non-functional requirements not reflected in tasks (e.g., performance, security).
F. Inconsistency:
- Terminology drift (same concept named differently across files).
- Data entities referenced in plan but absent in spec (or vice versa).
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note).
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue as the framework).
5. Severity assignment heuristic:
- CRITICAL: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality.
- HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion.
- MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case.
- LOW: Style/wording improvements, minor redundancy not affecting execution order.
6. Produce a Markdown report (no file writes) with sections:
### Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
Additional subsections:
- Coverage Summary Table:
| Requirement Key | Has Task? | Task IDs | Notes |
- Constitution Alignment Issues (if any)
- Unmapped Tasks (if any)
- Metrics:
* Total Requirements
* Total Tasks
* Coverage % (requirements with >=1 task)
* Ambiguity Count
* Duplication Count
* Critical Issues Count
7. At end of report, output a concise Next Actions block:
- If CRITICAL issues exist: Recommend resolving before `/implement`.
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions.
- Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'".
8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
Behavior rules:
- NEVER modify files.
- NEVER hallucinate missing sections—if absent, report them.
- KEEP findings deterministic: if rerun without changes, produce consistent IDs and counts.
- LIMIT total findings in the main table to 50; aggregate remainder in a summarized overflow note.
- If zero issues found, emit a success report with coverage statistics and proceed recommendation.
Context: $ARGUMENTS

.claude/commands/clarify.md Normal file

@@ -0,0 +1,158 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
- `FEATURE_DIR`
- `FEATURE_SPEC`
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
- If JSON parsing fails, abort and instruct user to re-run `/specify` or verify feature branch environment.
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation
Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions
Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes
Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)
Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions
Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)
Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives
Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms
Completion Signals:
- Acceptance criteria testability
- Measurable Definition of Done style indicators
Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification
For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change implementation or validation strategy
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 5 total questions across the whole session.
- Each question must be answerable with EITHER:
* A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
* A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
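The (Impact * Uncertainty) heuristic can be sketched as a scored sort; the category names and 1-5 scores below are purely illustrative:

```shell
# Each line: category, impact (1-5), uncertainty (1-5).
# Multiply, sort descending, keep the top 5 candidates.
top5=$(printf '%s\n' \
  'security 5 4' \
  'data-model 4 4' \
  'performance 4 3' \
  'edge-cases 3 3' \
  'ux-flow 3 2' \
  'terminology 2 2' \
  'integration 1 2' |
  while read -r category impact uncertainty; do
    echo "$((impact * uncertainty)) $category"
  done | sort -rn | head -5)
printf '%s\n' "$top5"
```

A single high-impact, high-uncertainty category (security here) naturally outranks several low-impact ones, matching the balance rule above.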
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple-choice questions, render options as a Markdown table:
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> | (add D/E as needed up to 5)
| Short | Provide a different short answer (<=5 words) | (Include only if free-form alternative is appropriate)
- For short-answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
- After the user answers:
* Validate the answer maps to one option or fits the <=5 word constraint.
* If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
* Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
* All critical ambiguities resolved early (remaining queued items become unnecessary), OR
* User signals completion ("done", "good", "no more"), OR
* You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
* Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
* Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
* Functional ambiguity → Update or add a bullet in Functional Requirements.
* User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
* Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
* Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
* Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
* Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- Total asked (accepted) questions ≤ 5.
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- No contradictory earlier statement remains (scan to confirm now-invalid alternative choices were removed).
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- Terminology consistency: same canonical term used across all updated sections.
7. Write the updated spec back to `FEATURE_SPEC`.
8. Report completion (after questioning loop ends or early termination):
- Number of questions asked & answered.
- Path to updated spec.
- Sections touched (list names).
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/plan` or run `/clarify` again later post-plan.
- Suggested next command.
Behavior rules:
- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: $ARGUMENTS

@@ -0,0 +1,73 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow:
1. Load the existing constitution template at `.specify/memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require fewer or more principles than the template uses. If a number is specified, respect it while following the general template structure, and update the document accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
* MAJOR: Backward incompatible governance/principle removals or redefinitions.
* MINOR: New principle/section added or materially expanded guidance.
* PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
- If version bump type ambiguous, propose reasoning before finalizing.
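A sketch of the bump logic, assuming plain `MAJOR.MINOR.PATCH` version strings (the `bump` helper name is hypothetical):

```shell
# Split X.Y.Z on dots and increment the requested component;
# a MAJOR bump zeroes minor/patch, a MINOR bump zeroes patch.
bump() {
  IFS=. read -r major minor patch <<EOF
$1
EOF
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

bump 2.1.1 minor   # e.g. adding one new principle
```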
3. Draft the updated constitution content:
- Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
- Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
- Ensure each Principle section has a succinct name line, a paragraph (or bullet list) capturing non-negotiable rules, and an explicit rationale if not obvious.
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references remain (e.g., agent-specific names like CLAUDE where generic guidance is required).
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- Version change: old → new
- List of modified principles (old title → new title if renamed)
- Added sections
- Removed sections
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
- Follow-up TODOs if any placeholders intentionally deferred.
6. Validation before final output:
- No remaining unexplained bracket tokens.
- Version line matches report.
- Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
- Any files flagged for manual follow-up.
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.

@@ -0,0 +1,56 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
2. Load and analyze the implementation context:
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios
3. Parse tasks.md structure and extract:
- **Task phases**: Setup, Tests, Core, Integration, Polish
- **Task dependencies**: Sequential vs parallel execution rules
- **Task details**: ID, description, file paths, parallel markers [P]
- **Execution flow**: Order and dependency requirements
4. Execute implementation following the task plan:
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- **Validation checkpoints**: Verify each phase completion before proceeding
5. Implementation execution rules:
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: Write tests for contracts, entities, and integration scenarios before implementing them
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
- **Polish and validation**: Unit tests, performance optimization, documentation
6. Progress tracking and error handling:
- Report progress after each completed task
- Halt execution if any non-parallel task fails
- For parallel tasks [P], continue with successful tasks, report failed ones
- Provide clear error messages with context for debugging
- Suggest next steps if implementation cannot proceed
- **IMPORTANT**: Mark each completed task as [X] in the tasks file.
7. Completion validation:
- Verify all required tasks are completed
- Check that implemented features match the original specification
- Validate that tests pass and coverage meets requirements
- Confirm the implementation follows the technical plan
- Report final status with summary of completed work
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/tasks` first to regenerate the task list.

.claude/commands/plan.md Normal file

@@ -0,0 +1,43 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
Given the implementation details provided as an argument, do this:
1. Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. All future file paths must be absolute.
- BEFORE proceeding, inspect FEATURE_SPEC for a `## Clarifications` section with at least one `Session` subheading. If missing or clearly ambiguous areas remain (vague adjectives, unresolved critical choices), PAUSE and instruct the user to run `/clarify` first to reduce rework. Only continue if: (a) Clarifications exist OR (b) an explicit user override is provided (e.g., "proceed without clarification"). Do not attempt to fabricate clarifications yourself.
2. Read and analyze the feature specification to understand:
- The feature requirements and user stories
- Functional and non-functional requirements
- Success criteria and acceptance criteria
- Any technical constraints or dependencies mentioned
3. Read the constitution at `.specify/memory/constitution.md` to understand constitutional requirements.
4. Execute the implementation plan template:
- Load `.specify/templates/plan-template.md` (already copied to IMPL_PLAN path)
- Set Input path to FEATURE_SPEC
- Run the Execution Flow (main) function steps 1-9
- The template is self-contained and executable
- Follow error handling and gate checks as specified
- Let the template guide artifact generation in $SPECS_DIR:
* Phase 0 generates research.md
* Phase 1 generates data-model.md, contracts/, quickstart.md
* Phase 2 generates tasks.md
- Incorporate user-provided details from arguments into Technical Context: $ARGUMENTS
- Update Progress Tracking as you complete each phase
5. Verify execution completed:
- Check Progress Tracking shows all phases complete
- Ensure all required artifacts were generated
- Confirm no ERROR states in execution
6. Report results with branch name, file paths, and generated artifacts.
Use absolute paths with the repository root for all file operations to avoid path issues.

@@ -0,0 +1,21 @@
---
description: Create or update the feature specification from a natural language feature description.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
The text the user typed after `/specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
1. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` from repo root and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute.
**IMPORTANT**: Only ever run this script once. The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for.
2. Load `.specify/templates/spec-template.md` to understand required sections.
3. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
4. Report completion with branch name, spec file path, and readiness for the next phase.
Note: The script creates and checks out the new branch and initializes the spec file before writing.

.claude/commands/tasks.md Normal file

@@ -0,0 +1,62 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---
The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).
User input:
$ARGUMENTS
1. Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
2. Load and analyze available design documents:
- Always read plan.md for tech stack and libraries
- IF EXISTS: Read data-model.md for entities
- IF EXISTS: Read contracts/ for API endpoints
- IF EXISTS: Read research.md for technical decisions
- IF EXISTS: Read quickstart.md for test scenarios
Note: Not all projects have all documents. For example:
- CLI tools might not have contracts/
- Simple libraries might not need data-model.md
Generate tasks based on what's available.
3. Generate tasks following the template:
- Use `.specify/templates/tasks-template.md` as the base
- Replace example tasks with actual tasks based on:
* **Setup tasks**: Project init, dependencies, linting
* **Test tasks [P]**: One per contract, one per integration scenario
* **Core tasks**: One per entity, service, CLI command, endpoint
* **Integration tasks**: DB connections, middleware, logging
* **Polish tasks [P]**: Unit tests, performance, docs
4. Task generation rules:
- Each contract file → contract test task marked [P]
- Each entity in data-model → model creation task marked [P]
- Each endpoint → implementation task (not parallel if shared files)
- Each user story → integration test marked [P]
- Different files = can be parallel [P]
- Same file = sequential (no [P])
5. Order tasks by dependencies:
- Setup before everything
- Tests before implementation (TDD)
- Models before services
- Services before endpoints
- Core before integration
- Everything before polish
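The ordering rules above form a dependency chain; fed as "prerequisite follower" pairs to coreutils `tsort`, they yield a valid execution order (the phase names here are illustrative):

```shell
# Each line is "prerequisite follower"; tsort emits a topological order,
# so setup comes first and polish comes last.
order=$(printf '%s\n' \
  'setup tests' \
  'tests models' \
  'models services' \
  'services endpoints' \
  'endpoints integration' \
  'integration polish' | tsort)
printf '%s\n' "$order"
```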
6. Include parallel execution examples:
- Group [P] tasks that can run together
- Show actual Task agent commands
7. Create FEATURE_DIR/tasks.md with:
- Correct feature name from implementation plan
- Numbered tasks (T001, T002, etc.)
- Clear file paths for each task
- Dependency notes
- Parallel execution guidance
Context for task generation: $ARGUMENTS
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

.luacheckrc Normal file

@@ -0,0 +1,40 @@
-- .luacheckrc
std = "luajit"
max_line_length = 120

-- Ignore unused variables and loop variables named _, and unused
-- self arguments in methods
ignore = {"211/_.*", "212/self.*", "213/_.*"}

-- Global variables for the Neovim API
globals = {
  "vim",
  "nvim_buf_set_lines",
  "nvim_create_buf",
  "nvim_open_win",
  "nvim_win_set_buf",
  "nvim_buf_get_name",
  "nvim_get_current_buf",
  "nvim_win_get_cursor",
  "nvim_win_close",
  "nvim_command",
  "nvim_echo",
  "vim.schedule",
  "vim.fn",
  "vim.api",
  "vim.loop",
  "vim.tbl_deep_extend",
  "vim.tbl_filter",
  "vim.split",
  "vim.trim",
  "vim.validate"
}

-- Custom globals for our plugin
read_globals = {
  "notex"
}

-- Files to ignore
exclude_files = {
  "tests/**/*.lua"
}

Makefile Normal file

@@ -0,0 +1,5 @@
.PHONY: deps
deps:
	luarocks install --local luasql-sqlite3
	luarocks install --local busted
	luarocks install --local lyaml

README.md Normal file

@@ -0,0 +1,382 @@
# Notex.nvim
A relational document system for Neovim that brings Notion-like database capabilities to your markdown files. Vibe-coded with z.ai glm-4.6.
![Notex.nvim](https://img.shields.io/badge/Neovim-0.7+-green.svg)
![License](https://img.shields.io/badge/License-MIT-blue.svg)
## Features
- **Relational Document Management**: Index markdown files and query them like databases
- **Custom Query Syntax**: Simple, powerful query language for document discovery
- **Virtual Buffers**: Interactive query results displayed in Neovim buffers
- **YAML Header Parsing**: Automatically extract and index document metadata
- **Real-time Updates**: Automatic reindexing when files change
- **Performance Optimized**: Multi-tier caching system for fast queries
- **Extensible**: Plugin architecture for custom parsers and views
## Quick Start
### Installation
Using [packer.nvim](https://github.com/wbthomason/packer.nvim):
```lua
use {
'your-username/notex.nvim',
requires = {
'nvim-lua/plenary.nvim',
'nvim-tree/nvim-web-devicons'
},
config = function()
require('notex').setup({
-- Configuration options
database_path = vim.fn.stdpath('data') .. '/notex/notex.db',
auto_index = true,
performance = {
enable_caching = true,
cache_size = 100
}
})
end
}
```
Using [vim-plug](https://github.com/junegunn/vim-plug):
```vim
Plug 'your-username/notex.nvim'
Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-tree/nvim-web-devicons'
lua << EOF
require('notex').setup({
database_path = vim.fn.stdpath('data') .. '/notex/notex.db',
auto_index = true
})
EOF
```
### Basic Usage
1. **Index your workspace**:
```vim
:lua require('notex').index_workspace()
```
2. **Run a query**:
```vim
:lua require('notex').show_query_prompt()
```
3. **Example queries**:
```
FROM documents WHERE status = "published"
FROM documents WHERE tags LIKE "project" ORDER BY updated_at DESC
FROM documents WHERE created_at > "2023-01-01" LIMIT 20
```
## Configuration
```lua
require('notex').setup({
-- Database configuration
database_path = nil, -- Defaults to stdpath('data')/notex/notex.db
auto_index = true, -- Auto-index markdown files on save
index_on_startup = false, -- Index entire workspace on startup
-- File handling
max_file_size = 10 * 1024 * 1024, -- 10MB max file size
-- Performance settings
performance = {
max_query_time = 5000, -- 5 seconds max query time
cache_size = 100, -- Number of cached queries
enable_caching = true -- Enable query caching
},
-- UI settings
ui = {
border = "rounded", -- Window border style
max_width = 120, -- Max window width
max_height = 30, -- Max window height
show_help = true -- Show help in query results
},
-- Default view type
default_view_type = "table" -- "table", "list", or "grid"
})
```
## Query Syntax
Notex uses a simple, SQL-like query syntax designed for document discovery:
### Basic Structure
```
FROM documents
WHERE <conditions>
ORDER BY <field> <direction>
LIMIT <number>
```
### WHERE Conditions
```sql
-- Exact match
WHERE status = "published"
-- Partial match
WHERE tags LIKE "project"
-- Date comparison
WHERE created_at > "2023-01-01"
WHERE updated_at >= "2023-12-25"
-- Multiple conditions
WHERE status = "published" AND priority > 3
WHERE tags LIKE "urgent" OR status = "review"
```
### Ordering
```sql
-- Ascending (default)
ORDER BY title
-- Descending
ORDER BY created_at DESC
-- Multiple fields
ORDER BY priority DESC, created_at ASC
```
### Limiting Results
```sql
-- Limit number of results
LIMIT 10
-- Get most recent 5 documents
ORDER BY updated_at DESC LIMIT 5
```
## Document Metadata
Notex automatically extracts metadata from YAML frontmatter in your markdown files:
```markdown
---
title: "My Document"
status: "published"
priority: 5
tags: ["project", "urgent"]
due_date: "2023-12-25"
author: "John Doe"
---
# Document Content
Your markdown content goes here...
```
All YAML fields become queryable properties.
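For instance, the frontmatter above would let queries like these match the document (field names taken from the example; syntax as described under Query Syntax above):

```
FROM documents WHERE author = "John Doe" AND priority > 3
FROM documents WHERE due_date <= "2023-12-25" ORDER BY priority DESC
```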
## Keybindings
Default keybindings (can be customized):
- `<leader>nq` - Show new query prompt
- `<leader>nr` - Show recent queries
- `<leader>ns` - Show saved queries
- `<leader>ni` - Index current workspace
- `<leader>nv` - Switch view type
- `<leader>ne` - Export current view
- `<leader>nc` - Cleanup database
In query result buffers:
- `<Enter>` - Open document
- `e` - Toggle inline editing
- `v` - Change view type
- `s` - Save query
- `r` - Refresh results
- `q` - Close buffer
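If the defaults clash with your setup, the global mappings can be redefined in your config. A minimal sketch, assuming the `show_query_prompt` and `index_workspace` functions from the Basic Usage section:

```lua
-- Hypothetical remapping; function names are taken from the Basic Usage section
vim.keymap.set('n', '<leader>fq', function()
  require('notex').show_query_prompt()
end, { desc = 'Notex: new query' })

vim.keymap.set('n', '<leader>fi', function()
  require('notex').index_workspace()
end, { desc = 'Notex: index workspace' })
```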
## Query Results
Query results are displayed in virtual buffers with:
- **Interactive tables**: Sort, filter, and navigate results
- **Document preview**: Quick document information
- **Inline editing**: Edit document properties directly
- **Export options**: Save results as CSV, JSON, or Markdown
### View Types
1. **Table View**: Structured data display
2. **List View**: Compact document list
3. **Grid View**: Card-based layout
## API Reference
### Core Functions
```lua
-- Execute a query
local result = require('notex').execute_query_and_show_results(query_string)
-- Index a directory
require('notex').index_directory(path, options)
-- Get document details
local details = require('notex.index').get_document_details(document_id)
-- Save a query
require('notex.query').save_query(name, query_string)
-- Get statistics
local stats = require('notex').get_statistics()
```
### Query Engine
```lua
local query = require('notex.query')
-- Execute query
local result = query.execute_query(query_string)
-- Validate query syntax
local validation = query.validate_query_syntax(query_string)
-- Get suggestions
local suggestions = query.get_suggestions(partial_query, cursor_pos)
```
### Index Management
```lua
local index = require('notex.index')
-- Index documents
local result = index.index_documents(directory_path, options)
-- Search documents
local results = index.search_documents(criteria)
-- Get index statistics
local stats = index.get_statistics()
-- Validate index
local validation = index.validate_index()
```
## Performance
Notex is optimized for performance:
- **Multi-tier caching**: Memory, LRU, and timed caches
- **Incremental indexing**: Only process changed files
- **Database optimization**: SQLite with proper indexing
- **Lazy loading**: Load document details on demand
### Cache Configuration
```lua
require('notex').setup({
performance = {
enable_caching = true,
cache_size = 200, -- Increase for better performance
max_query_time = 10000 -- Allow longer queries for complex datasets
}
})
```
## Troubleshooting
### Common Issues
1. **Database locked errors**:
- Ensure you have proper file permissions
- Check if another Neovim instance is using the database
2. **Slow queries**:
- Increase `cache_size` in configuration
- Use more specific WHERE conditions
- Check database file size and consider cleanup
3. **Missing documents in results**:
- Run `:lua require('notex').index_workspace()` to reindex
- Check file permissions and YAML syntax
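Index health can also be checked programmatically. A sketch using `validate_index` and `get_statistics` from the API reference (assumes Neovim 0.9+ for `vim.print`):

```lua
-- Inspect index integrity; both functions appear in the API reference above
local index = require('notex.index')
local validation = index.validate_index()
if not validation.valid then
  vim.print(validation.issues)
end
vim.print(index.get_statistics())
```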
### Debug Mode
Enable debug logging:
```lua
vim.g.notex_debug = true
```
Check logs at `:lua print(vim.fn.stdpath('data') .. '/notex/notex.log')`
## Development
### Running Tests
```bash
# Install dependencies
luarocks install busted
luarocks install luafilesystem
# Run tests
busted tests/
```
### Project Structure
```
notex.nvim/
├── lua/notex/
│ ├── database/ # Database layer
│ ├── parser/ # Document parsing
│ ├── query/ # Query engine
│ ├── ui/ # User interface
│ ├── index/ # Document indexing
│ └── utils/ # Utilities
├── tests/
│ ├── unit/ # Unit tests
│ ├── integration/ # Integration tests
│ └── performance/ # Performance tests
└── specs/ # Specifications
```
### Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Update documentation
6. Submit a pull request
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Acknowledgments
- Inspired by Notion's database functionality
- Built with [lsqlite3](https://github.com/keplerproject/lua-sqlite3) for database operations
- UI components based on Neovim's native buffer API
- Testing with [busted](https://lunarmodules.github.io/busted/)
## Support
- 🐛 Report bugs: [GitHub Issues](https://github.com/your-username/notex.nvim/issues)
- 💡 Feature requests: [GitHub Discussions](https://github.com/your-username/notex.nvim/discussions)
- 📖 Documentation: [Wiki](https://github.com/your-username/notex.nvim/wiki)
---
Made with ❤️ for the Neovim community

151
lua/notex/database/init.lua Normal file
View file

@ -0,0 +1,151 @@
-- Database connection and initialization module
local M = {}
local sqlite3 = require('lsqlite3')
local utils = require('notex.utils')
-- Database connection state
local db = nil
local db_path = nil
-- Initialize database connection
function M.init(path)
if db then
return true, "Database already initialized"
end
db_path = path or vim.fn.stdpath('data') .. '/notex/notex.db'
-- Ensure directory exists
local dir = vim.fn.fnamemodify(db_path, ':h')
vim.fn.mkdir(dir, 'p')
local ok, err = pcall(function()
db = sqlite3.open(db_path)
end)
if not ok then
return false, "Failed to open database: " .. tostring(err)
end
-- Enable foreign key constraints
db:exec("PRAGMA foreign_keys = ON")
-- Set WAL mode for better performance
db:exec("PRAGMA journal_mode = WAL")
-- Set synchronous mode for performance vs safety balance
db:exec("PRAGMA synchronous = NORMAL")
return true, "Database initialized successfully"
end
-- Get database connection
function M.get_connection()
if not db then
error("Database not initialized. Call init() first.")
end
return db
end
-- Close database connection
function M.close()
if db then
db:close()
db = nil
return true, "Database closed successfully"
end
return false, "Database not initialized"
end
-- Execute query with error handling
function M.execute(query, params)
local conn = M.get_connection()
local stmt = conn:prepare(query)
if not stmt then
-- Include the SQLite error message so failures are diagnosable
return false, "Failed to prepare query (" .. tostring(conn:errmsg()) .. "): " .. query
end
local result = {}
local ok, err = pcall(function()
if params then
stmt:bind_names(params)
end
for row in stmt:nrows() do
table.insert(result, row)
end
stmt:finalize()
end)
if not ok then
stmt:finalize()
return false, "Query execution failed: " .. tostring(err)
end
return true, result
end
-- Execute transaction
function M.transaction(queries)
local conn = M.get_connection()
local ok, err = pcall(function()
conn:exec("BEGIN TRANSACTION")
for _, query_data in ipairs(queries) do
local stmt = conn:prepare(query_data.query)
if query_data.params then
stmt:bind_names(query_data.params)
end
stmt:step()
stmt:finalize()
end
conn:exec("COMMIT")
end)
if not ok then
conn:exec("ROLLBACK")
return false, "Transaction failed: " .. tostring(err)
end
return true, "Transaction completed successfully"
end
-- Get database status
function M.status()
if not db then
return {
initialized = false,
path = nil,
size_bytes = 0,
wal_mode = false
}
end
local size = 0
local file = io.open(db_path, "r")
if file then
size = file:seek("end")
file:close()
end
local wal_mode = false
local wal_file = db_path .. "-wal"
local wal = io.open(wal_file, "r")
if wal then
wal_mode = true
wal:close()
end
return {
initialized = true,
path = db_path,
size_bytes = size,
wal_mode = wal_mode
}
end
return M

210
lua/notex/database/migrations.lua Normal file
View file

@ -0,0 +1,210 @@
-- Database migration management
local M = {}
local database = require('notex.database.init')
-- Current database version
local CURRENT_VERSION = 1
-- Migration table and version tracking
local function create_migration_table()
local query = [[
CREATE TABLE IF NOT EXISTS schema_migrations (
version INTEGER PRIMARY KEY,
applied_at INTEGER NOT NULL
)
]]
return database.execute(query)
end
-- Get current database version
local function get_database_version()
local ok, result = database.execute("SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 1")
if ok and #result > 0 then
return result[1].version
end
return 0
end
-- Record migration
local function record_migration(version)
local query = [[
INSERT INTO schema_migrations (version, applied_at)
VALUES (:version, :applied_at)
]]
return database.execute(query, {
version = version,
applied_at = os.time()
})
end
-- Migration definitions
local migrations = {
[1] = {
description = "Initial schema creation",
up = function()
local schema = require('notex.database.schema')
return schema.init()
end,
down = function()
local queries = {
"DROP TABLE IF EXISTS properties",
"DROP TABLE IF EXISTS queries",
"DROP TABLE IF EXISTS schema_metadata",
"DROP TABLE IF EXISTS documents"
}
for _, query in ipairs(queries) do
local ok = database.execute(query)
if not ok then
return false, "Failed to drop table in rollback"
end
end
return true, "Rollback completed successfully"
end
}
}
-- Initialize migration system
function M.init()
local ok, err = create_migration_table()
if not ok then
return false, "Failed to create migration table: " .. err
end
local current_version = get_database_version()
if current_version == 0 then
-- Fresh installation - apply current version
return M.migrate_to(CURRENT_VERSION)
end
return true, string.format("Database at version %d", current_version)
end
-- Migrate to specific version
function M.migrate_to(target_version)
local current_version = get_database_version()
if target_version < current_version then
return false, "Downgrade migrations not supported"
end
if target_version > CURRENT_VERSION then
return false, string.format("Target version %d exceeds maximum version %d", target_version, CURRENT_VERSION)
end
-- Apply migrations sequentially
for version = current_version + 1, target_version do
if not migrations[version] then
return false, string.format("Migration %d not found", version)
end
local migration = migrations[version]
print(string.format("Applying migration %d: %s", version, migration.description))
local ok, err = migration.up()
if not ok then
return false, string.format("Migration %d failed: %s", version, err)
end
local record_ok, record_err = record_migration(version)
if not record_ok then
return false, string.format("Failed to record migration %d: %s", version, record_err)
end
end
return true, string.format("Migrated to version %d successfully", target_version)
end
-- Get migration status
function M.status()
local current_version = get_database_version()
local pending_migrations = {}
for version = current_version + 1, CURRENT_VERSION do
if migrations[version] then
table.insert(pending_migrations, {
version = version,
description = migrations[version].description
})
end
end
return {
current_version = current_version,
latest_version = CURRENT_VERSION,
pending_migrations = pending_migrations,
needs_migration = #pending_migrations > 0
}
end
-- Get list of all migrations
function M.list()
local migration_list = {}
for version, migration in pairs(migrations) do
table.insert(migration_list, {
version = version,
description = migration.description,
applied = version <= get_database_version()
})
end
table.sort(migration_list, function(a, b) return a.version < b.version end)
return migration_list
end
-- Validate database schema
function M.validate()
local status = M.status()
if status.needs_migration then
return false, string.format("Database needs migration from version %d to %d",
status.current_version, status.latest_version)
end
-- Check if all required tables exist
local tables = { "documents", "properties", "queries", "schema_metadata", "schema_migrations" }
for _, table_name in ipairs(tables) do
local ok, result = database.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name=:table_name",
{ table_name = table_name }
)
if not ok or #result == 0 then
return false, string.format("Required table '%s' not found", table_name)
end
end
return true, "Database schema is valid"
end
-- Reset database (for development/testing only)
function M.reset()
local queries = {
"DROP TABLE IF EXISTS properties",
"DROP TABLE IF EXISTS queries",
"DROP TABLE IF EXISTS schema_metadata",
"DROP TABLE IF EXISTS documents",
"DROP TABLE IF EXISTS schema_migrations"
}
for _, query in ipairs(queries) do
local ok = database.execute(query)
if not ok then
return false, "Failed to reset database"
end
end
return M.migrate_to(CURRENT_VERSION)
end
return M

264
lua/notex/database/schema.lua Normal file
View file

@ -0,0 +1,264 @@
-- Database schema and model definitions
local M = {}
local database = require('notex.database.init')
-- Table definitions
local SCHEMA = {
documents = [[
CREATE TABLE IF NOT EXISTS documents (
id TEXT PRIMARY KEY,
file_path TEXT UNIQUE NOT NULL,
content_hash TEXT NOT NULL,
last_modified INTEGER NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
)
]],
properties = [[
CREATE TABLE IF NOT EXISTS properties (
id TEXT PRIMARY KEY,
document_id TEXT NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL,
value_type TEXT NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL,
FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
)
]],
queries = [[
CREATE TABLE IF NOT EXISTS queries (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
definition TEXT NOT NULL,
created_at INTEGER NOT NULL,
last_used INTEGER DEFAULT 0,
use_count INTEGER DEFAULT 0
)
]],
schema_metadata = [[
CREATE TABLE IF NOT EXISTS schema_metadata (
property_key TEXT PRIMARY KEY,
detected_type TEXT NOT NULL,
validation_rules TEXT,
document_count INTEGER DEFAULT 0,
created_at INTEGER NOT NULL
)
]]
}
-- Index definitions
local INDICES = {
documents_file_path = "CREATE UNIQUE INDEX IF NOT EXISTS idx_documents_file_path ON documents(file_path)",
properties_document_id = "CREATE INDEX IF NOT EXISTS idx_properties_document_id ON properties(document_id)",
properties_key = "CREATE INDEX IF NOT EXISTS idx_properties_key ON properties(key)",
queries_last_used = "CREATE INDEX IF NOT EXISTS idx_queries_last_used ON queries(last_used)",
properties_composite = "CREATE INDEX IF NOT EXISTS idx_properties_composite ON properties(document_id, key)",
properties_type = "CREATE INDEX IF NOT EXISTS idx_properties_type ON properties(key, value_type)"
}
-- Initialize schema
function M.init()
local ok, result = database.transaction({
{ query = SCHEMA.documents },
{ query = SCHEMA.properties },
{ query = SCHEMA.queries },
{ query = SCHEMA.schema_metadata },
{ query = INDICES.documents_file_path },
{ query = INDICES.properties_document_id },
{ query = INDICES.properties_key },
{ query = INDICES.queries_last_used },
{ query = INDICES.properties_composite },
{ query = INDICES.properties_type }
})
if not ok then
return false, result
end
return true, "Schema initialized successfully"
end
-- Document model functions
M.documents = {}
function M.documents.create(document_data)
local query = [[
INSERT INTO documents (id, file_path, content_hash, last_modified, created_at, updated_at)
VALUES (:id, :file_path, :content_hash, :last_modified, :created_at, :updated_at)
]]
return database.execute(query, document_data)
end
function M.documents.get_by_id(id)
local query = "SELECT * FROM documents WHERE id = :id"
local ok, result = database.execute(query, { id = id })
if ok and #result > 0 then
return true, result[1]
end
return ok, nil
end
function M.documents.get_by_path(file_path)
local query = "SELECT * FROM documents WHERE file_path = :file_path"
local ok, result = database.execute(query, { file_path = file_path })
if ok and #result > 0 then
return true, result[1]
end
return ok, nil
end
function M.documents.update(document_data)
local query = [[
UPDATE documents
SET content_hash = :content_hash,
last_modified = :last_modified,
updated_at = :updated_at
WHERE id = :id
]]
return database.execute(query, document_data)
end
function M.documents.delete(id)
local query = "DELETE FROM documents WHERE id = :id"
return database.execute(query, { id = id })
end
-- Property model functions
M.properties = {}
function M.properties.create(property_data)
local query = [[
INSERT INTO properties (id, document_id, key, value, value_type, created_at, updated_at)
VALUES (:id, :document_id, :key, :value, :value_type, :created_at, :updated_at)
]]
return database.execute(query, property_data)
end
function M.properties.get_by_document(document_id)
local query = "SELECT * FROM properties WHERE document_id = :document_id"
local ok, result = database.execute(query, { document_id = document_id })
return ok, result or {}
end
function M.properties.get_by_key(key)
local query = "SELECT * FROM properties WHERE key = :key"
local ok, result = database.execute(query, { key = key })
return ok, result or {}
end
function M.properties.update(property_data)
local query = [[
UPDATE properties
SET value = :value,
value_type = :value_type,
updated_at = :updated_at
WHERE id = :id
]]
return database.execute(query, property_data)
end
function M.properties.delete(id)
local query = "DELETE FROM properties WHERE id = :id"
return database.execute(query, { id = id })
end
function M.properties.delete_by_document(document_id)
local query = "DELETE FROM properties WHERE document_id = :document_id"
return database.execute(query, { document_id = document_id })
end
-- Query model functions
M.queries = {}
function M.queries.create(query_data)
local query = [[
INSERT INTO queries (id, name, definition, created_at)
VALUES (:id, :name, :definition, :created_at)
]]
return database.execute(query, query_data)
end
function M.queries.get_all()
local query = "SELECT * FROM queries ORDER BY last_used DESC"
return database.execute(query)
end
function M.queries.get_by_id(id)
local query = "SELECT * FROM queries WHERE id = :id"
local ok, result = database.execute(query, { id = id })
if ok and #result > 0 then
return true, result[1]
end
return ok, nil
end
function M.queries.update_usage(id)
local query = [[
UPDATE queries
SET last_used = :last_used,
use_count = use_count + 1
WHERE id = :id
]]
return database.execute(query, {
id = id,
last_used = os.time()
})
end
function M.queries.delete(id)
local query = "DELETE FROM queries WHERE id = :id"
return database.execute(query, { id = id })
end
-- Schema metadata functions
M.schema = {}
function M.schema.update_property(property_key, detected_type, validation_rules, document_count)
local query = [[
INSERT OR REPLACE INTO schema_metadata (property_key, detected_type, validation_rules, document_count, created_at)
VALUES (:property_key, :detected_type, :validation_rules, :document_count, :created_at)
]]
return database.execute(query, {
property_key = property_key,
detected_type = detected_type,
validation_rules = validation_rules,
document_count = document_count,
created_at = os.time()
})
end
function M.schema.get_all()
local query = "SELECT * FROM schema_metadata ORDER BY document_count DESC"
return database.execute(query)
end
function M.schema.get_by_key(property_key)
local query = "SELECT * FROM schema_metadata WHERE property_key = :property_key"
local ok, result = database.execute(query, { property_key = property_key })
if ok and #result > 0 then
return true, result[1]
end
return ok, nil
end
return M

28
lua/notex/document.lua Normal file
View file

@ -0,0 +1,28 @@
-- lua/notex/document.lua
local lyaml = require("lyaml")
local M = {}
function M.parse(file_content)
-- Match a leading YAML frontmatter block delimited by "---" lines
-- (`-` is a magic character in Lua patterns, so it must be escaped as `%-`)
local yaml_part, content_part = file_content:match("^%-%-%-\n(.-)\n%-%-%-\n(.*)$")
if not yaml_part then
return {
type = "note",
properties = {},
content = file_content,
}
end
local properties = lyaml.load(yaml_part) or {}
local doc_type = properties.type or "note"
properties.type = nil
return {
type = doc_type,
properties = properties,
content = content_part,
}
end
return M

368
lua/notex/index/init.lua Normal file
View file

@ -0,0 +1,368 @@
-- Document indexing coordination module
local M = {}
local database = require('notex.database.init')
local migrations = require('notex.database.migrations')
local updater = require('notex.index.updater')
local scanner = require('notex.index.scanner')
local parser = require('notex.parser')
local utils = require('notex.utils')
-- Initialize indexing system
function M.init(database_path)
local ok, err = database.init(database_path)
if not ok then
return false, "Failed to initialize database: " .. err
end
ok, err = migrations.init()
if not ok then
return false, "Failed to initialize migrations: " .. err
end
utils.log("INFO", "Document indexing system initialized")
return true, "Indexing system initialized successfully"
end
-- Index documents in directory
function M.index_documents(directory_path, options)
options = options or {}
local force_reindex = options.force_reindex or false
local recursive = options.recursive ~= false
local result = {
success = false,
directory_path = directory_path,
stats = {},
errors = {},
operation = force_reindex and "reindex" or "update"
}
-- Validate the path exists and is a directory (io.open can succeed on a
-- directory on some platforms, so vim.fn.isdirectory is the reliable check)
if vim.fn.isdirectory(directory_path) == 0 then
table.insert(result.errors, "Directory does not exist or is not a directory: " .. directory_path)
return result
end
local start_timer = utils.timer("Document indexing")
if force_reindex then
-- Full reindex
local ok, reindex_result = updater.reindex_directory(directory_path)
if not ok then
table.insert(result.errors, "Reindex failed: " .. reindex_result)
return result
end
result.stats = reindex_result.stats
utils.log("INFO", string.format("Completed full reindex of %s", directory_path))
else
-- Incremental update
local ok, update_result = updater.update_directory(directory_path)
if not ok then
table.insert(result.errors, "Update failed: " .. update_result)
return result
end
result.stats = update_result.stats
utils.log("INFO", string.format("Completed incremental update of %s", directory_path))
end
start_timer()
result.success = true
return result
end
-- Get indexed documents
function M.get_indexed_documents(filters)
filters = filters or {}
local limit = filters.limit or 100
local offset = filters.offset or 0
local order_by = filters.order_by or "updated_at DESC"
local query = string.format([[
SELECT d.*, COUNT(p.id) as property_count
FROM documents d
LEFT JOIN properties p ON d.id = p.document_id
GROUP BY d.id
ORDER BY %s
LIMIT %d OFFSET %d
]], order_by, limit, offset)
local ok, result = database.execute(query)
if not ok then
return nil, "Failed to get indexed documents: " .. result
end
return result
end
-- Search documents by properties
function M.search_documents(search_criteria)
local conditions = {}
local params = {}
local joins = {}
-- Build WHERE clause
if search_criteria.status then
table.insert(conditions, "p.key = 'status' AND p.value = :status")
params.status = search_criteria.status
end
if search_criteria.tags then
if type(search_criteria.tags) == "string" then
table.insert(conditions, "p.key = 'tags' AND p.value LIKE :tag")
params.tag = '%' .. search_criteria.tags .. '%'
elseif type(search_criteria.tags) == "table" then
local tag_conditions = {}
for i, tag in ipairs(search_criteria.tags) do
local param_name = "tag_" .. i
table.insert(tag_conditions, "p.key = 'tags' AND p.value LIKE :" .. param_name)
params[param_name] = '%' .. tag .. '%'
end
table.insert(conditions, "(" .. table.concat(tag_conditions, " OR ") .. ")")
end
end
if search_criteria.created_after then
table.insert(conditions, "d.created_at >= :created_after")
params.created_after = search_criteria.created_after
end
if search_criteria.created_before then
table.insert(conditions, "d.created_at <= :created_before")
params.created_before = search_criteria.created_before
end
if search_criteria.text_search then
table.insert(conditions, "(d.file_path LIKE :text_search OR EXISTS (SELECT 1 FROM properties p2 WHERE p2.document_id = d.id AND p2.value LIKE :text_search))")
params.text_search = '%' .. search_criteria.text_search .. '%'
end
-- Build query
local where_clause = #conditions > 0 and "WHERE " .. table.concat(conditions, " AND ") or ""
local limit = search_criteria.limit or 50
local offset = search_criteria.offset or 0
local query = string.format([[
SELECT DISTINCT d.*, COUNT(p.id) as property_count
FROM documents d
%s
LEFT JOIN properties p ON d.id = p.document_id
%s
GROUP BY d.id
ORDER BY d.updated_at DESC
LIMIT %d OFFSET %d
]], #joins > 0 and table.concat(joins, " ") or "", where_clause, limit, offset)
local ok, result = database.execute(query, params)
if not ok then
return nil, "Search failed: " .. result
end
-- Get total count
local count_query = string.format([[
SELECT COUNT(DISTINCT d.id) as total
FROM documents d
%s
LEFT JOIN properties p ON d.id = p.document_id
%s
]], #joins > 0 and table.concat(joins, " ") or "", where_clause)
local count_ok, count_result = database.execute(count_query, params)
local total_count = count_ok and count_result[1].total or 0
return {
documents = result,
total_count = total_count,
limit = limit,
offset = offset
}
end
-- Get document details
function M.get_document_details(document_id)
-- The document and property models live in the schema module, not the connection module
local models = require('notex.database.schema')
-- Get document
local ok, doc_result = models.documents.get_by_id(document_id)
if not ok then
return nil, "Failed to get document: " .. tostring(doc_result)
end
if not doc_result then
return nil, "Document not found: " .. document_id
end
-- Get properties
local prop_ok, prop_result = models.properties.get_by_document(document_id)
if not prop_ok then
return nil, "Failed to get document properties: " .. tostring(prop_result)
end
-- Parse document for additional details
local parse_result, parse_err = parser.parse_document(doc_result.file_path)
if parse_err then
utils.log("WARN", "Failed to parse document for details", {
document_id = document_id,
error = parse_err
})
end
return {
document = doc_result,
properties = prop_result or {},
parse_result = parse_result,
file_exists = utils.file_exists(doc_result.file_path),
is_current = parse_result and parse_result.success or false
}
end
-- Remove document from index
function M.remove_document(document_id)
-- Get document details first
local doc_details, err = M.get_document_details(document_id)
if not doc_details then
return false, err
end
local ok, remove_result = updater.remove_document(doc_details.document.file_path)
if not ok then
return false, "Failed to remove document: " .. remove_result
end
utils.log("INFO", string.format("Removed document from index: %s", doc_details.document.file_path))
return true, remove_result
end
-- Update document in index
function M.update_document(file_path)
local ok, result = updater.index_document(file_path)
if not ok then
return false, "Failed to update document: " .. result
end
utils.log("INFO", string.format("Updated document in index: %s", file_path))
return true, result
end
-- Get index statistics
function M.get_statistics()
local stats = updater.get_index_stats()
-- Add additional statistics
local db_status = database.status()
stats.database = db_status
-- Get recent activity
local recent_query = [[
SELECT COUNT(*) as count,
strftime('%Y-%m-%d', datetime(created_at, 'unixepoch')) as date
FROM documents
WHERE created_at > strftime('%s', 'now', '-7 days')
GROUP BY date
ORDER BY date DESC
]]
local recent_ok, recent_result = database.execute(recent_query)
if recent_ok then
stats.recent_activity = recent_result
end
return stats
end
-- Validate index integrity
function M.validate_index()
local validation_result = {
valid = true,
issues = {},
stats = {}
}
-- Check for orphaned properties
local orphaned_query = [[
SELECT COUNT(*) as count FROM properties p
LEFT JOIN documents d ON p.document_id = d.id
WHERE d.id IS NULL
]]
local ok, result = database.execute(orphaned_query)
if ok and result[1].count > 0 then
validation_result.valid = false
table.insert(validation_result.issues, string.format("Found %d orphaned properties", result[1].count))
end
-- Check for documents that no longer exist
local docs_query = "SELECT id, file_path FROM documents"
ok, result = database.execute(docs_query)
if ok then
local missing_files = 0
for _, doc in ipairs(result) do
if not utils.file_exists(doc.file_path) then
missing_files = missing_files + 1
end
end
if missing_files > 0 then
table.insert(validation_result.issues, string.format("Found %d documents pointing to missing files", missing_files))
end
validation_result.stats.missing_files = missing_files
end
-- Attach overall statistics without overwriting the stats gathered above
validation_result.stats.overall = M.get_statistics()
return validation_result
end
-- Cleanup orphaned data
function M.cleanup_index()
local cleanup_result = {
removed_orphans = 0,
removed_missing = 0,
errors = {}
}
-- Remove orphaned properties (count them first, since execute() returns
-- result rows, not an affected-row count, for DELETE statements)
local count_ok, count_result = database.execute(
"SELECT COUNT(*) AS count FROM properties WHERE document_id NOT IN (SELECT id FROM documents)"
)
local orphan_count = (count_ok and count_result[1].count) or 0
local orphaned_query = [[
DELETE FROM properties WHERE document_id NOT IN (SELECT id FROM documents)
]]
local ok, err = database.execute(orphaned_query)
if not ok then
table.insert(cleanup_result.errors, "Failed to remove orphaned properties: " .. err)
else
cleanup_result.removed_orphans = orphan_count
end
-- Remove documents pointing to missing files
local docs_query = "SELECT id, file_path FROM documents"
-- Declare locals here to avoid leaking `result` as a global
local docs_ok, docs = database.execute(docs_query)
if docs_ok then
for _, doc in ipairs(docs) do
if not utils.file_exists(doc.file_path) then
local remove_ok, remove_err = updater.remove_document(doc.file_path)
if remove_ok then
cleanup_result.removed_missing = cleanup_result.removed_missing + 1
else
table.insert(cleanup_result.errors, string.format("Failed to remove missing document %s: %s", doc.file_path, remove_err))
end
end
end
end
return cleanup_result
end
return M
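The maintenance entry points above (`validate_index`, `cleanup_index`, `get_statistics`) are natural targets for a user command. A minimal sketch, assuming the module is loaded as `require('notex.index')` (as init.lua does) and that a `:NotexDoctor` command name is acceptable; both are hypothetical wiring, not part of this commit:

```lua
-- Sketch: surface index validation and optional cleanup as one command.
-- The command name is illustrative; the require path matches init.lua.
local index = require('notex.index')

vim.api.nvim_create_user_command('NotexDoctor', function(opts)
  local report = index.validate_index()
  if report.valid then
    vim.notify('Notex index OK', vim.log.levels.INFO)
  else
    vim.notify('Notex index issues:\n' .. table.concat(report.issues, '\n'), vim.log.levels.WARN)
  end
  -- Pass "fix" to also remove orphaned rows and entries for deleted files
  if opts.args == 'fix' then
    local cleaned = index.cleanup_index()
    vim.notify(string.format('Removed %d orphans, %d missing-file entries',
      cleaned.removed_orphans, cleaned.removed_missing), vim.log.levels.INFO)
  end
end, { nargs = '?' })
```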

-- lua/notex/index/scanner.lua (new file, 258 lines)
-- File system scanner for markdown documents
local M = {}
local utils = require('notex.utils')
local yaml_parser = require('notex.parser.yaml')
local markdown_parser = require('notex.parser.markdown')
-- Scan directory for markdown files
function M.scan_directory(directory_path, recursive)
recursive = recursive ~= false -- Default to true
local markdown_files = {}
local scan_command
if recursive then
scan_command = string.format('find "%s" -name "*.md" -type f 2>/dev/null', directory_path)
else
scan_command = string.format('find "%s" -maxdepth 1 -name "*.md" -type f 2>/dev/null', directory_path)
end
local handle = io.popen(scan_command)
if not handle then
return nil, "Failed to scan directory: " .. directory_path
end
for file_path in handle:lines() do
table.insert(markdown_files, file_path)
end
handle:close()
return markdown_files
end
-- Check if file has been modified since last index
function M.is_file_modified(file_path, last_modified)
local current_mtime = utils.get_file_mtime(file_path)
if not current_mtime then
return false, "Cannot get file modification time"
end
return current_mtime > last_modified
end
-- Scan for changed files
function M.scan_for_changes(directory_path, indexed_files)
local changed_files = {}
local removed_files = {}
-- Get current files
local current_files, err = M.scan_directory(directory_path, true)
if not current_files then
return nil, nil, err
end
-- Convert indexed files to a set for faster lookup
local indexed_set = {}
for _, file_info in ipairs(indexed_files) do
indexed_set[file_info.file_path] = file_info
end
-- Convert current files to a set
local current_set = {}
for _, file_path in ipairs(current_files) do
current_set[file_path] = true
end
-- Check for modified files
for file_path, file_info in pairs(indexed_set) do
if not current_set[file_path] then
-- File was removed
table.insert(removed_files, file_path)
else
-- Check if modified
local is_modified, mod_err = M.is_file_modified(file_path, file_info.last_modified)
if mod_err then
return nil, nil, "Error checking file modification: " .. mod_err
elseif is_modified then
table.insert(changed_files, {
file_path = file_path,
change_type = "modified"
})
end
end
end
-- Check for new files
for _, file_path in ipairs(current_files) do
if not indexed_set[file_path] then
table.insert(changed_files, {
file_path = file_path,
change_type = "new"
})
end
end
return changed_files, removed_files
end
-- Validate markdown file
function M.validate_markdown_file(file_path)
local validation_result = {
valid = true,
errors = {},
warnings = {}
}
-- Check if file exists
if not utils.file_exists(file_path) then
validation_result.valid = false
table.insert(validation_result.errors, "File does not exist")
return validation_result
end
-- Check file extension
if not file_path:match("%.md$") then
validation_result.valid = false
table.insert(validation_result.errors, "File must have .md extension")
return validation_result
end
-- Check file size (warn if too large)
local stat = vim.loop.fs_stat(file_path)
if stat and stat.size > 10 * 1024 * 1024 then -- 10MB
table.insert(validation_result.warnings, "File is very large (>10MB), indexing may be slow")
end
-- Validate UTF-8 encoding
if not utils.is_utf8(file_path) then
validation_result.valid = false
table.insert(validation_result.errors, "File is not valid UTF-8 encoding")
return validation_result
end
-- Validate markdown format
local content, err = utils.read_file(file_path)
if not content then
validation_result.valid = false
table.insert(validation_result.errors, "Cannot read file: " .. err)
return validation_result
end
local markdown_errors = markdown_parser.validate_markdown(content)
for _, error in ipairs(markdown_errors) do
table.insert(validation_result.errors, "Markdown format error: " .. error)
end
-- Check for YAML header
local yaml_content, yaml_err = yaml_parser.extract_yaml_header(content)
if not yaml_content then
table.insert(validation_result.warnings, "No YAML header found")
else
-- Validate YAML header
local yaml_data, parse_err = yaml_parser.parse_yaml(yaml_content)
if not yaml_data then
validation_result.valid = false
table.insert(validation_result.errors, "YAML parsing error: " .. parse_err)
else
local yaml_errors = yaml_parser.validate_yaml(yaml_data)
for _, error in ipairs(yaml_errors) do
table.insert(validation_result.errors, "YAML validation error: " .. error)
end
end
end
validation_result.valid = #validation_result.errors == 0
return validation_result
end
-- Scan and validate directory
function M.scan_and_validate(directory_path)
local files, err = M.scan_directory(directory_path, true)
if not files then
return nil, err
end
local valid_files = {}
local invalid_files = {}
local scan_stats = {
total_scanned = #files,
valid = 0,
invalid = 0,
warnings = 0
}
for _, file_path in ipairs(files) do
local validation = M.validate_markdown_file(file_path)
if validation.valid then
table.insert(valid_files, file_path)
scan_stats.valid = scan_stats.valid + 1
if #validation.warnings > 0 then
scan_stats.warnings = scan_stats.warnings + #validation.warnings
end
else
table.insert(invalid_files, {
file_path = file_path,
errors = validation.errors,
warnings = validation.warnings
})
scan_stats.invalid = scan_stats.invalid + 1
end
end
return {
valid_files = valid_files,
invalid_files = invalid_files,
stats = scan_stats
}
end
-- Get file metadata
function M.get_file_metadata(file_path)
local metadata = {
file_path = file_path,
exists = false,
size = 0,
last_modified = 0,
content_hash = nil,
yaml_header = false,
word_count = 0,
has_errors = false
}
-- Check if file exists
if not utils.file_exists(file_path) then
return metadata
end
metadata.exists = true
-- Get file stats
metadata.last_modified = utils.get_file_mtime(file_path) or 0
-- Read content
local content, err = utils.read_file(file_path)
if not content then
metadata.has_errors = true
return metadata
end
metadata.size = #content
metadata.content_hash = utils.sha256(content)
-- Check for YAML header
local yaml_content = yaml_parser.extract_yaml_header(content)
metadata.yaml_header = yaml_content ~= nil
-- Get word count
metadata.word_count = markdown_parser.count_words(content)
return metadata
end
return M
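`scan_and_validate` returns the `valid_files`/`invalid_files`/`stats` table built above; a caller might surface a one-off scan report like this. A sketch, assuming `require('notex.index.scanner')` as used by updater.lua; the output formatting is illustrative:

```lua
-- Sketch: scan the current workspace and print a validation summary.
local scanner = require('notex.index.scanner')

local report, err = scanner.scan_and_validate(vim.fn.getcwd())
if not report then
  vim.notify('Scan failed: ' .. err, vim.log.levels.ERROR)
  return
end
print(string.format('%d scanned, %d valid, %d invalid, %d warnings',
  report.stats.total_scanned, report.stats.valid,
  report.stats.invalid, report.stats.warnings))
for _, bad in ipairs(report.invalid_files) do
  print(bad.file_path .. ': ' .. table.concat(bad.errors, '; '))
end
```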

-- lua/notex/index/updater.lua (new file, 317 lines)
-- Incremental index updater
local M = {}
local database = require('notex.database.schema')
local scanner = require('notex.index.scanner')
local yaml_parser = require('notex.parser.yaml')
local utils = require('notex.utils')
-- Index a single document
function M.index_document(file_path)
local document_id = utils.generate_id()
local current_time = os.time()
-- Get file metadata
local metadata = scanner.get_file_metadata(file_path)
if not metadata.exists then
return false, "File does not exist: " .. file_path
end
if metadata.has_errors then
return false, "File has errors: " .. file_path
end
-- Read and parse file
local yaml_data, err = yaml_parser.parse_markdown_file(file_path)
if not yaml_data then
return false, "Failed to parse YAML: " .. err
end
-- Create document record
local document_record = {
id = document_id,
file_path = file_path,
content_hash = metadata.content_hash,
last_modified = metadata.last_modified,
created_at = current_time,
updated_at = current_time
}
-- Check if document already exists
local existing_doc, get_err = database.documents.get_by_path(file_path)
if get_err then
return false, "Failed to check existing document: " .. get_err
end
local ok
if existing_doc then
-- Update existing document
document_record.id = existing_doc.id
ok, err = database.documents.update(document_record)
if not ok then
return false, "Failed to update document: " .. err
end
-- Delete existing properties
ok, err = database.properties.delete_by_document(document_record.id)
if not ok then
return false, "Failed to delete existing properties: " .. err
end
document_id = existing_doc.id
else
-- Create new document
ok, err = database.documents.create(document_record)
if not ok then
return false, "Failed to create document: " .. err
end
end
-- Process and create properties
local properties = yaml_parser.process_properties(yaml_data)
for _, prop in ipairs(properties) do
local property_record = {
id = utils.generate_id(),
document_id = document_id,
key = prop.key,
value = tostring(prop.value),
value_type = prop.value_type,
created_at = current_time,
updated_at = current_time
}
ok, err = database.properties.create(property_record)
if not ok then
utils.log("ERROR", "Failed to create property", {
document_id = document_id,
property_key = prop.key,
error = err
})
end
end
-- Update schema metadata
M.update_schema_metadata(properties)
return true, {
document_id = document_id,
properties_count = #properties,
action = existing_doc and "updated" or "created"
}
end
-- Update schema metadata based on properties
function M.update_schema_metadata(properties)
-- Count property types
local property_counts = {}
local property_types = {}
for _, prop in ipairs(properties) do
if not property_counts[prop.key] then
property_counts[prop.key] = 0
property_types[prop.key] = {}
end
property_counts[prop.key] = property_counts[prop.key] + 1
if not property_types[prop.key][prop.value_type] then
property_types[prop.key][prop.value_type] = 0
end
property_types[prop.key][prop.value_type] = property_types[prop.key][prop.value_type] + 1
end
-- Update schema metadata for each property
for property_key, count in pairs(property_counts) do
-- Find most common type
local most_common_type = nil
local max_count = 0
for type_name, type_count in pairs(property_types[property_key]) do
if type_count > max_count then
max_count = type_count
most_common_type = type_name
end
end
-- Create validation rules
local validation_rules = vim.json.encode({
allowed_types = vim.tbl_keys(property_types[property_key]),
most_common_type = most_common_type
})
database.schema.update_property(property_key, most_common_type, validation_rules, count)
end
end
-- Remove document from index
function M.remove_document(file_path)
local existing_doc, err = database.documents.get_by_path(file_path)
if not existing_doc then
return false, "Document not found in index: " .. file_path
end
-- Properties will be deleted automatically due to foreign key constraint
local ok, delete_err = database.documents.delete(existing_doc.id)
if not ok then
return false, "Failed to delete document: " .. delete_err
end
return true, {
document_id = existing_doc.id,
file_path = file_path,
action = "deleted"
}
end
-- Incremental update for directory
function M.update_directory(directory_path)
local result = {
updated_files = {},
removed_files = {},
errors = {},
stats = {
processed = 0,
updated = 0,
removed = 0,
failed = 0
}
}
-- Get currently indexed files
local ok, indexed_docs = database.execute("SELECT file_path, last_modified FROM documents")
if not ok then
return false, "Failed to get indexed documents: " .. indexed_docs
end
-- Scan for changes
local changed_files, removed_files, scan_err = scanner.scan_for_changes(directory_path, indexed_docs)
if not changed_files then
return false, "Failed to scan for changes: " .. scan_err
end
-- Process changed files
for _, change_info in ipairs(changed_files) do
result.stats.processed = result.stats.processed + 1
local ok, update_result = M.index_document(change_info.file_path)
if ok then
result.stats.updated = result.stats.updated + 1
table.insert(result.updated_files, update_result)
utils.log("INFO", string.format("Updated document: %s", change_info.file_path))
else
result.stats.failed = result.stats.failed + 1
table.insert(result.errors, {
file_path = change_info.file_path,
error = update_result
})
utils.log("ERROR", string.format("Failed to update document: %s - %s", change_info.file_path, update_result))
end
end
-- Process removed files
for _, file_path in ipairs(removed_files) do
local ok, remove_result = M.remove_document(file_path)
if ok then
result.stats.removed = result.stats.removed + 1
table.insert(result.removed_files, remove_result)
utils.log("INFO", string.format("Removed document: %s", file_path))
else
result.stats.failed = result.stats.failed + 1
table.insert(result.errors, {
file_path = file_path,
error = remove_result
})
utils.log("ERROR", string.format("Failed to remove document: %s - %s", file_path, remove_result))
end
end
return true, result
end
-- Full reindex of directory
function M.reindex_directory(directory_path)
local result = {
indexed_files = {},
errors = {},
stats = {
scanned = 0,
indexed = 0,
failed = 0,
skipped = 0
}
}
-- Clear existing index
local ok, err = database.execute("DELETE FROM documents")
if not ok then
return false, "Failed to clear existing index: " .. err
end
-- Scan and validate directory
local scan_result, scan_err = scanner.scan_and_validate(directory_path)
if not scan_result then
return false, "Failed to scan directory: " .. scan_err
end
result.stats.scanned = scan_result.stats.total_scanned
-- Index valid files
for _, file_path in ipairs(scan_result.valid_files) do
local ok, index_result = M.index_document(file_path)
if ok then
result.stats.indexed = result.stats.indexed + 1
table.insert(result.indexed_files, index_result)
utils.log("INFO", string.format("Indexed document: %s", file_path))
else
result.stats.failed = result.stats.failed + 1
table.insert(result.errors, {
file_path = file_path,
error = index_result
})
utils.log("ERROR", string.format("Failed to index document: %s - %s", file_path, index_result))
end
end
-- Log invalid files
for _, invalid_file in ipairs(scan_result.invalid_files) do
result.stats.skipped = result.stats.skipped + 1
utils.log("WARN", string.format("Skipped invalid file: %s", invalid_file.file_path), invalid_file.errors)
end
return true, result
end
-- Get index statistics
function M.get_index_stats()
local stats = {}
-- Document counts
local ok, doc_count = database.execute("SELECT COUNT(*) as count FROM documents")
if ok then
stats.document_count = doc_count[1].count
end
-- Property counts
local prop_count
ok, prop_count = database.execute("SELECT COUNT(*) as count FROM properties")
if ok then
stats.property_count = prop_count[1].count
end
-- Schema statistics
local schema_stats, err = database.schema.get_all()
if schema_stats then
stats.unique_properties = #schema_stats
stats.schema_entries = schema_stats
end
-- Database status
local db_status = require('notex.database.init').status()
stats.database = db_status
return stats
end
return M
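An incremental run of `update_directory` returns the result table assembled above; driving it and reporting the stats might look like this. A sketch under the same assumptions as the module (workspace root as the scan directory); the notification text is illustrative:

```lua
-- Sketch: incremental update of the current workspace with a summary notice.
local updater = require('notex.index.updater')

local ok, result = updater.update_directory(vim.fn.getcwd())
if not ok then
  vim.notify('Update failed: ' .. result, vim.log.levels.ERROR)
else
  vim.notify(string.format('Processed %d, updated %d, removed %d, failed %d',
    result.stats.processed, result.stats.updated,
    result.stats.removed, result.stats.failed), vim.log.levels.INFO)
end
```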

-- lua/notex/init.lua (new file, 551 lines)
-- Notex: Relational Document System for Neovim
local M = {}
-- Plugin configuration
local config = {
database_path = nil,
auto_index = true,
index_on_startup = false,
max_file_size = 10 * 1024 * 1024, -- 10MB
default_view_type = "table",
performance = {
max_query_time = 5000, -- 5 seconds
cache_size = 100,
enable_caching = true
},
ui = {
border = "rounded",
max_width = 120,
max_height = 30,
show_help = true
}
}
-- Plugin state
local state = {
initialized = false,
database = nil,
ui = nil,
settings = {}
}
-- Initialize plugin
function M.setup(user_config)
-- Merge user configuration
M.settings = vim.tbl_deep_extend("force", config, user_config or {})
-- Validate configuration
local valid, errors = M.validate_config(M.settings)
if not valid then
vim.notify("Notex configuration error: " .. table.concat(errors, ", "), vim.log.levels.ERROR)
return false
end
-- Initialize modules
local ok, err = M.initialize_modules()
if not ok then
vim.notify("Failed to initialize Notex: " .. err, vim.log.levels.ERROR)
return false
end
-- Initialize caching system if enabled
if M.settings.performance.enable_caching then
local cache = require('notex.utils.cache')
cache.init({
memory = {
max_size = M.settings.performance.cache_size,
enabled = true
},
lru = {
max_size = math.floor(M.settings.performance.cache_size / 2),
enabled = true
},
timed = {
default_ttl = 300, -- 5 minutes
enabled = true
}
})
end
-- Setup autocommands
M.setup_autocommands()
-- Setup keymaps
M.setup_keymaps()
-- Auto-index on startup if enabled
if M.settings.index_on_startup then
vim.defer_fn(function()
M.auto_index_current_workspace()
end, 1000)
end
state.initialized = true
vim.notify("Notex initialized successfully", vim.log.levels.INFO)
return true
end
-- Validate configuration
function M.validate_config(settings)
local errors = {}
-- Validate performance settings
if settings.performance then
if settings.performance.max_query_time and settings.performance.max_query_time < 100 then
table.insert(errors, "max_query_time must be at least 100ms")
end
if settings.performance.cache_size and settings.performance.cache_size < 10 then
table.insert(errors, "cache_size must be at least 10")
end
end
-- Validate UI settings
if settings.ui then
if settings.ui.max_width and settings.ui.max_width < 40 then
table.insert(errors, "max_width must be at least 40 characters")
end
if settings.ui.max_height and settings.ui.max_height < 10 then
table.insert(errors, "max_height must be at least 10 characters")
end
end
return #errors == 0, errors
end
-- Initialize modules
function M.initialize_modules()
-- Initialize database
local database = require('notex.database.init')
local db_path = M.settings.database_path or vim.fn.stdpath('data') .. '/notex/notex.db'
local ok, err = database.init(db_path)
if not ok then
return false, err
end
state.database = database
-- Initialize migrations
local migrations = require('notex.database.migrations')
ok, err = migrations.init()
if not ok then
return false, err
end
-- Initialize UI
local ui = require('notex.ui')
ok, err = ui.init()
if not ok then
return false, err
end
state.ui = ui
-- Initialize query engine
local query_engine = require('notex.query')
ok, err = query_engine.init(db_path)
if not ok then
return false, err
end
return true, "Modules initialized successfully"
end
-- Setup autocommands
function M.setup_autocommands()
local group = vim.api.nvim_create_augroup("Notex", {clear = true})
-- Auto-index markdown files when saved (if enabled)
if M.settings.auto_index then
vim.api.nvim_create_autocmd("BufWritePost", {
pattern = "*.md",
callback = function()
M.auto_index_file(vim.fn.expand('%:p'))
end
})
end
-- Clean up on exit
vim.api.nvim_create_autocmd("VimLeavePre", {
callback = function()
M.cleanup()
end
})
-- Detect query blocks and show hover
vim.api.nvim_create_autocmd({"CursorHold", "CursorHoldI"}, {
pattern = "*.md",
callback = function()
M.check_query_block_under_cursor()
end
})
end
-- Setup keymaps
function M.setup_keymaps()
-- Global keymaps
local keymaps = {
["<leader>nq"] = {callback = M.show_query_prompt, desc = "New query"},
["<leader>nr"] = {callback = M.show_recent_queries, desc = "Recent queries"},
["<leader>ns"] = {callback = M.show_saved_queries, desc = "Saved queries"},
["<leader>ni"] = {callback = M.index_workspace, desc = "Index workspace"},
["<leader>ns"] = {callback = M.show_index_status, desc = "Index status"},
["<leader>nc"] = {callback = M.cleanup_database, desc = "Cleanup database"},
["<leader>nv"] = {callback = M.switch_view_type, desc = "Switch view type"},
["<leader>ne"] = {callback = M.show_export_menu, desc = "Export view"}
}
for key, mapping in pairs(keymaps) do
if vim.fn.maparg(key, "n") == "" then -- only map if the key is not already taken
vim.keymap.set("n", key, mapping.callback, {
noremap = true,
silent = true,
desc = mapping.desc
})
end
end
-- Buffer-local keymaps for query blocks
vim.api.nvim_create_autocmd("FileType", {
pattern = "markdown",
callback = function()
vim.keymap.set("n", "K", M.show_query_under_cursor, {
buffer = 0,
noremap = true,
silent = true,
desc = "Show query"
})
end
})
end
-- Show query prompt
function M.show_query_prompt(initial_query)
local ui = state.ui or require('notex.ui')
ui.show_query_prompt(initial_query)
end
-- Show recent queries
function M.show_recent_queries()
local query_engine = require('notex.query')
local result = query_engine.get_query_statistics()
if result.success and result.statistics.recent_queries then
local ui = state.ui or require('notex.ui')
-- Create a simple UI to show recent queries
local recent_queries = result.statistics.recent_queries
if #recent_queries > 0 then
local choices = {}
for _, query in ipairs(recent_queries) do
table.insert(choices, string.format("%s (%d uses)", query.name, query.use_count))
end
vim.ui.select(choices, {
prompt = "Select recent query:"
}, function(choice)
if choice then
local query_name = choice:match("^(.+) %(")
if query_name then
ui.execute_query_and_show_results(vim.trim(query_name))
end
end
end)
else
vim.notify("No recent queries found", vim.log.levels.INFO)
end
else
vim.notify("Failed to get query statistics", vim.log.levels.ERROR)
end
end
-- Show saved queries
function M.show_saved_queries()
local query_engine = require('notex.query')
local result = query_engine.list_saved_queries()
if result.success then
local choices = {}
for _, query in ipairs(result.queries) do
table.insert(choices, query.name)
end
vim.ui.select(choices, {
prompt = "Select saved query:"
}, function(choice)
if choice then
local ui = state.ui or require('notex.ui')
ui.execute_query_and_show_results(choice)
end
end)
else
vim.notify("Failed to get saved queries: " .. result.error, vim.log.levels.ERROR)
end
end
-- Execute query and show results
function M.execute_query_and_show_results(query_string, options)
local utils = require('notex.utils')
-- Use caching if enabled
local result
if M.settings.performance.enable_caching then
local cache_key = "query:" .. vim.fn.sha256(query_string)
result = utils.cache_get_or_set(cache_key, function()
local query_engine = require('notex.query')
return query_engine.execute_query(query_string, options)
end, "lru", 300) -- Cache for 5 minutes
else
local query_engine = require('notex.query')
result = query_engine.execute_query(query_string, options)
end
if result.success then
local ui = state.ui or require('notex.ui')
ui.show_query_results(result, M.settings.ui)
else
local ui = state.ui or require('notex.ui')
ui.show_error("Query Error", result.errors, M.settings.ui)
end
end
-- Index workspace
function M.index_workspace()
local workspace_dir = vim.fn.getcwd()
M.index_directory(workspace_dir, {
force_reindex = vim.fn.confirm("Force reindex?", "&Yes\n&No", 2) == 1
})
end
-- Index directory
function M.index_directory(directory, options)
options = options or {}
vim.notify("Indexing documents in " .. directory, vim.log.levels.INFO)
local indexer = require('notex.index')
local result = indexer.index_documents(directory, options)
if result.success then
vim.notify(string.format("Indexing complete: %d documents processed", result.stats.indexed), vim.log.levels.INFO)
M.log_indexing_result(result)
else
vim.notify("Indexing failed: " .. table.concat(result.errors, ", "), vim.log.levels.ERROR)
end
end
-- Auto-index current workspace
function M.auto_index_current_workspace()
local workspace_dir = vim.fn.getcwd()
M.index_directory(workspace_dir)
end
-- Auto-index single file
function M.auto_index_file(file_path)
local indexer = require('notex.index')
-- Validate file
local validation = require('notex.index.scanner').validate_markdown_file(file_path)
if not validation.valid then
return
end
local result = indexer.update_document(file_path)
if result.success then
vim.notify("Indexed: " .. file_path, vim.log.levels.INFO)
end
end
-- Show index status
function M.show_index_status()
local indexer = require('notex.index')
local stats = indexer.get_statistics()
local lines = {
"Notex Index Status",
string.rep("=", 50),
"",
string.format("Documents indexed: %d", stats.document_count or 0),
string.format("Properties: %d", stats.property_count or 0),
string.format("Unique properties: %d", stats.unique_properties or 0),
"",
"Database:",
string.format(" Size: %s bytes", stats.database.size_bytes or 0),
string.format(" WAL mode: %s", stats.database.wal_mode and "Yes" or "No"),
"",
"Press any key to close"
}
local buffer = vim.api.nvim_create_buf(false, true)
vim.api.nvim_buf_set_lines(buffer, 0, -1, false, lines)
vim.api.nvim_buf_set_option(buffer, "filetype", "text")
local window = vim.api.nvim_open_win(buffer, true, {
relative = "editor",
width = 60,
height = 20,
row = math.floor((vim.api.nvim_get_option_value("lines", {}) - 20) / 2),
col = math.floor((vim.api.nvim_get_option_value("columns", {}) - 60) / 2),
border = "rounded",
style = "minimal",
title = " Index Status "
})
vim.api.nvim_create_autocmd("CursorMoved,WinLeave", {
buffer = buffer,
once = true,
callback = function()
vim.api.nvim_win_close(window, true)
end
})
end
-- Switch view type
function M.switch_view_type()
local ui = state.ui or require('notex.ui')
ui.show_view_type_menu()
end
-- Show export menu
function M.show_export_menu()
local ui = state.ui or require('notex.ui')
ui.show_export_menu()
end
-- Check query block under cursor
function M.check_query_block_under_cursor()
local line = vim.api.nvim_get_current_line()
local cursor = vim.api.nvim_win_get_cursor(0)
-- Check if cursor is on a query block
if line:match("```notex%-query") or
line:match("FROM") or
line:match("WHERE") or
line:match("ORDER BY") then
-- Could be inside a query block, check surrounding context
M.show_query_under_cursor()
end
end
-- Show query under cursor
function M.show_query_under_cursor()
local line_start = math.max(1, vim.api.nvim_win_get_cursor(0)[1] - 5)
local line_end = math.min(vim.api.nvim_buf_line_count(0), vim.api.nvim_win_get_cursor(0)[1] + 5)
local lines = vim.api.nvim_buf_get_lines(0, line_start - 1, line_end, false)
local query_block = M.extract_query_block(lines)
if query_block then
M.execute_query_and_show_results(query_block)
end
end
-- Extract query block from lines
function M.extract_query_block(lines)
local in_query_block = false
local query_lines = {}
for _, line in ipairs(lines) do
if line:match("```notex%-query") then
in_query_block = true
elseif line:match("```") and in_query_block then
break
elseif in_query_block then
table.insert(query_lines, line)
end
end
if #query_lines > 0 then
return table.concat(query_lines, "\n")
end
return nil
end
-- Cleanup database
function M.cleanup_database()
local indexer = require('notex.index')
local cleanup_result = indexer.cleanup_index()
vim.notify(string.format("Cleanup completed: %d orphans removed, %d missing files removed",
cleanup_result.removed_orphans,
cleanup_result.removed_missing), vim.log.levels.INFO)
end
-- Log indexing result
function M.log_indexing_result(result)
if result.stats.scanned > 0 then
local success_rate = (result.stats.indexed / result.stats.scanned) * 100
vim.notify(string.format("Indexing success rate: %.1f%%", success_rate), vim.log.levels.INFO)
end
if #result.errors > 0 then
vim.notify("Indexing had " .. #result.errors .. " errors", vim.log.levels.WARN)
end
end
-- Cleanup
function M.cleanup()
if state.ui then
state.ui.cleanup_all()
end
if state.database then
state.database.close()
end
state.initialized = false
end
-- Get plugin status
function M.status()
return {
initialized = state.initialized,
database = state.database and state.database.status() or nil,
settings = M.settings
}
end
-- Health check
function M.health_check()
local health = {
ok = true,
checks = {}
}
-- Check database
if state.database then
local db_status = state.database.status()
table.insert(health.checks, {
name = "Database",
status = db_status.initialized and "ok" or "error",
message = db_status.initialized and "Database initialized" or "Database not initialized"
})
else
table.insert(health.checks, {
name = "Database",
status = "error",
message = "Database not initialized"
})
health.ok = false
end
-- Check modules
local modules = {"database", "parser", "query", "ui", "index"}
for _, module_name in ipairs(modules) do
local ok, module = pcall(require, "notex." .. module_name)
table.insert(health.checks, {
name = "Module: " .. module_name,
status = ok and "ok" or "error",
message = ok and "Module loaded" or "Module failed to load"
})
if not ok then
health.ok = false
end
end
return health
end
-- Export main API
return M
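The configuration table at the top of init.lua implies the following user-facing setup call; the values shown are the defaults defined above, so every key is optional. The plugin-manager spec and repository name are hypothetical for this proof of concept:

```lua
-- Example lazy.nvim spec (repo name illustrative).
-- Options mirror the default config table in init.lua.
{
  'aselimov/notex.nvim',
  config = function()
    require('notex').setup({
      auto_index = true,          -- reindex *.md buffers on save
      index_on_startup = false,   -- defer full workspace index
      default_view_type = 'table',
      performance = {
        max_query_time = 5000,    -- ms
        cache_size = 100,
        enable_caching = true,
      },
      ui = { border = 'rounded', max_width = 120, max_height = 30 },
    })
  end,
}
```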

-- lua/notex/parser/init.lua (new file, 311 lines)
-- Parser coordination module
local M = {}
local yaml_parser = require('notex.parser.yaml')
local markdown_parser = require('notex.parser.markdown')
local utils = require('notex.utils')
-- Parse complete markdown document
function M.parse_document(file_path)
local result = {
file_path = file_path,
success = false,
yaml_data = {},
markdown_analysis = {},
properties = {},
errors = {},
warnings = {}
}
-- Validate file
if not utils.file_exists(file_path) then
table.insert(result.errors, "File does not exist: " .. file_path)
return result
end
if not utils.is_utf8(file_path) then
table.insert(result.errors, "File is not valid UTF-8: " .. file_path)
return result
end
-- Read file content
local content, err = utils.read_file(file_path)
if not content then
table.insert(result.errors, "Cannot read file: " .. err)
return result
end
-- Parse YAML header
local yaml_data, yaml_err = yaml_parser.parse_markdown_file(file_path)
if yaml_data then
result.yaml_data = yaml_data
-- Process properties
result.properties = yaml_parser.process_properties(yaml_data)
elseif yaml_err then
table.insert(result.errors, "YAML parsing error: " .. yaml_err)
else
table.insert(result.warnings, "No YAML header found")
end
-- Analyze markdown content
local markdown_analysis = markdown_parser.analyze_structure(content)
result.markdown_analysis = markdown_analysis
result.success = #result.errors == 0
return result
end
-- Parse multiple documents
function M.parse_documents(file_paths)
local results = {
total = #file_paths,
successful = 0,
failed = 0,
documents = {},
errors = {}
}
for _, file_path in ipairs(file_paths) do
local doc_result = M.parse_document(file_path)
table.insert(results.documents, doc_result)
if doc_result.success then
results.successful = results.successful + 1
else
results.failed = results.failed + 1
for _, error in ipairs(doc_result.errors) do
table.insert(results.errors, {
file_path = file_path,
error = error
})
end
end
end
return results
end
-- Extract document summary
function M.get_document_summary(file_path)
local parse_result = M.parse_document(file_path)
if not parse_result.success then
return nil, "Failed to parse document: " .. table.concat(parse_result.errors, ", ")
end
local summary = {
file_path = file_path,
title = parse_result.yaml_data.title
or (parse_result.markdown_analysis.headings[1] and parse_result.markdown_analysis.headings[1].title)
or "Untitled",
status = parse_result.yaml_data.status or "unknown",
priority = parse_result.yaml_data.priority or 0,
tags = parse_result.yaml_data.tags or {},
word_count = parse_result.markdown_analysis.word_count,
created_at = parse_result.yaml_data.created_at,
updated_at = parse_result.yaml_data.updated_at,
summary = parse_result.markdown_analysis.summary,
properties_count = #parse_result.properties,
has_headings = #parse_result.markdown_analysis.headings > 0,
has_links = #parse_result.markdown_analysis.links > 0,
has_code = #parse_result.markdown_analysis.code_blocks > 0,
reading_time = parse_result.markdown_analysis.reading_time_minutes
}
return summary
end
-- Validate document against schema
function M.validate_document_schema(file_path, schema_requirements)
local parse_result = M.parse_document(file_path)
if not parse_result.success then
return false, parse_result.errors
end
local validation_errors = {}
-- Check required properties
if schema_requirements.required then
for _, required_prop in ipairs(schema_requirements.required) do
local found = false
for _, prop in ipairs(parse_result.properties) do
if prop.key == required_prop then
found = true
break
end
end
if not found then
table.insert(validation_errors, string.format("Missing required property: %s", required_prop))
end
end
end
-- Check property types
if schema_requirements.property_types then
for _, prop in ipairs(parse_result.properties) do
local expected_type = schema_requirements.property_types[prop.key]
if expected_type and prop.value_type ~= expected_type then
table.insert(validation_errors, string.format("Property '%s' should be %s, got %s", prop.key, expected_type, prop.value_type))
end
end
end
-- Check property patterns
if schema_requirements.patterns then
for _, prop in ipairs(parse_result.properties) do
local pattern = schema_requirements.patterns[prop.key]
if pattern and not prop.value:match(pattern) then
table.insert(validation_errors, string.format("Property '%s' does not match required pattern", prop.key))
end
end
end
if #validation_errors > 0 then
return false, validation_errors
end
return true, parse_result
end
-- Extract document relationships
function M.extract_relationships(file_paths)
local relationships = {
links = {},
references = {},
backlinks = {}
}
-- Parse all documents
local parse_results = M.parse_documents(file_paths)
-- Build document lookup
local docs_by_path = {}
for _, doc_result in ipairs(parse_results.documents) do
if doc_result.success then
docs_by_path[doc_result.file_path] = doc_result
end
end
-- Extract links and references
for _, doc_result in ipairs(parse_results.documents) do
if doc_result.success then
local source_doc = doc_result.file_path
-- Extract markdown links
for _, link in ipairs(doc_result.markdown_analysis.links) do
table.insert(relationships.links, {
source = source_doc,
target = link.url,
text = link.text,
type = "markdown_link"
})
end
-- Extract property references (if any)
for _, prop in ipairs(doc_result.properties) do
if prop.key:match("ref") or prop.key:match("reference") then
table.insert(relationships.references, {
source = source_doc,
target = prop.value,
property = prop.key,
type = "property_reference"
})
end
end
end
end
-- Build backlinks
for _, link in ipairs(relationships.links) do
for target_path, target_doc in pairs(docs_by_path) do
if link.target == target_path or link.target:match(target_path) then
table.insert(relationships.backlinks, {
source = target_path,
target = link.source,
text = link.text,
type = "backlink"
})
end
end
end
return relationships
end
-- Generate document statistics
function M.generate_statistics(file_paths)
local stats = {
total_documents = #file_paths,
total_words = 0,
total_properties = 0,
property_distribution = {},
status_distribution = {},
tag_distribution = {},
file_size_distribution = {},
average_word_count = 0,
documents_with_headings = 0,
documents_with_links = 0,
documents_with_code = 0
}
-- Parse all documents
local parse_results = M.parse_documents(file_paths)
for _, doc_result in ipairs(parse_results.documents) do
if doc_result.success then
-- Word count
stats.total_words = stats.total_words + doc_result.markdown_analysis.word_count
-- Properties
stats.total_properties = stats.total_properties + #doc_result.properties
-- Property distribution
for _, prop in ipairs(doc_result.properties) do
if not stats.property_distribution[prop.key] then
stats.property_distribution[prop.key] = 0
end
stats.property_distribution[prop.key] = stats.property_distribution[prop.key] + 1
end
-- Status distribution
local status = doc_result.yaml_data.status or "unknown"
if not stats.status_distribution[status] then
stats.status_distribution[status] = 0
end
stats.status_distribution[status] = stats.status_distribution[status] + 1
-- Tag distribution
local tags = doc_result.yaml_data.tags or {}
for _, tag in ipairs(tags) do
if not stats.tag_distribution[tag] then
stats.tag_distribution[tag] = 0
end
stats.tag_distribution[tag] = stats.tag_distribution[tag] + 1
end
-- Feature flags (analyze_structure returns the arrays themselves, not has_* booleans)
if #doc_result.markdown_analysis.headings > 0 then
stats.documents_with_headings = stats.documents_with_headings + 1
end
if #doc_result.markdown_analysis.links > 0 then
stats.documents_with_links = stats.documents_with_links + 1
end
if #doc_result.markdown_analysis.code_blocks > 0 then
stats.documents_with_code = stats.documents_with_code + 1
end
end
end
-- Calculate averages
if stats.total_documents > 0 then
stats.average_word_count = math.floor(stats.total_words / stats.total_documents)
end
return stats
end
return M
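For reference, here is a minimal sketch of the `schema_requirements` table that `M.validate_document_schema` expects. The field names (`required`, `property_types`, `patterns`) are inferred from the checks above; the concrete keys and pattern values are invented for illustration.

```lua
-- Hypothetical schema_requirements shape (keys inferred from the validation
-- code above; the concrete values here are made up for illustration).
local schema_requirements = {
  required = { "title", "status" },           -- every document must define these properties
  property_types = { priority = "number" },   -- expected value_type per property key
  patterns = { status = "^%a+$" },            -- Lua pattern each property value must match
}

print(schema_requirements.required[2])              -- status
print(schema_requirements.property_types.priority)  -- number
```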


@ -0,0 +1,224 @@
-- Markdown content parsing module
local M = {}
local utils = require('notex.utils')
-- Extract content between lines
function M.extract_content_between(content, start_pattern, end_pattern)
local start_pos = content:find(start_pattern)
if not start_pos then
return nil
end
local end_pos = content:find(end_pattern, start_pos)
if not end_pos then
return content:sub(start_pos)
end
return content:sub(start_pos, end_pos)
end
-- Remove YAML header from content
function M.remove_yaml_header(content)
return content:gsub("^%s*%-%-%-\n.-\n%-%-%-\n", "", 1)
end
-- Extract markdown body (content after YAML header)
function M.get_body(content)
return M.remove_yaml_header(content)
end
-- Count words in markdown content
function M.count_words(content)
local body = M.get_body(content)
if not body then
return 0
end
-- Remove markdown syntax for accurate word count (code blocks and links first,
-- so the inline-code and link patterns do not mangle fences or leave residue)
local clean = body
:gsub("```.-```", "") -- Code blocks
:gsub("%!%[.-%]%(.-%)", "") -- Images (before links; an image is "!" plus a link)
:gsub("%[.-%]%(.-%)", "") -- Links
:gsub("#+ ", "") -- Headers
:gsub("%*%*(.-)%*%*", "%1") -- Bold
:gsub("%*(.-)%*", "%1") -- Italic
:gsub("`(.-)`", "%1") -- Inline code
:gsub("%W+", " ") -- Replace non-word chars with spaces
:gsub("%s+", " ") -- Collapse multiple spaces
local words = {}
for word in clean:gmatch("%S+") do
if #word > 0 then
table.insert(words, word)
end
end
return #words
end
-- Count characters in markdown content
function M.count_characters(content)
local body = M.get_body(content)
return body and #body or 0
end
-- Extract headings from markdown
function M.extract_headings(content)
local headings = {}
local body = M.get_body(content)
-- Match line by line: "^" and "$" do not act as per-line anchors inside gmatch
for line in body:gmatch("[^\n]+") do
local hashes, title = line:match("^(#+)%s+(.+)$")
if hashes then
table.insert(headings, {
level = #hashes,
title = (title:gsub("^%s+", ""):gsub("%s+$", "")),
raw = hashes .. " " .. title
})
end
end
return headings
end
-- Extract links from markdown
function M.extract_links(content)
local links = {}
for text, url in content:gmatch("%[([^%]]*)%]%(([^)]+)%)") do
table.insert(links, {
text = text,
url = url,
raw = "[" .. text .. "](" .. url .. ")"
})
end
return links
end
-- Extract code blocks from markdown
function M.extract_code_blocks(content)
local code_blocks = {}
for lang, code in content:gmatch("```(%w*)\n(.-)\n```") do
table.insert(code_blocks, {
language = lang ~= "" and lang or "text",
code = code,
lines = select(2, code:gsub("\n", "")) + 1
})
end
return code_blocks
end
-- Get content summary (first paragraph)
function M.get_summary(content, max_length)
max_length = max_length or 200 -- default so external callers cannot hit a nil comparison below
local body = M.get_body(content)
if not body then
return ""
end
-- Remove code blocks to avoid including them in summary
local clean_body = body:gsub("```.-```", "")
-- Extract first paragraph
local first_paragraph = clean_body:match("\n\n([^%[].-)\n\n") or
clean_body:match("^([^%[].-)\n\n") or
clean_body:match("^([^%[].-)")
if not first_paragraph then
return ""
end
-- Clean up markdown formatting
local summary = first_paragraph
:gsub("#+ ", "")
:gsub("%*%*(.-)%*%*", "%1")
:gsub("%*(.-)%*", "%1")
:gsub("`(.-)`", "%1")
:gsub("%[.-%]%(.-%)", "")
summary = summary:gsub("%s+", " "):gsub("^%s+", ""):gsub("%s+$", "") -- collapse runs, then trim (string.trim is not built into Lua)
if #summary > max_length then
summary = summary:sub(1, max_length - 3) .. "..."
end
return summary
end
-- Analyze markdown structure
function M.analyze_structure(content)
local body = M.get_body(content)
return {
word_count = M.count_words(content),
character_count = M.count_characters(content),
headings = M.extract_headings(content),
links = M.extract_links(content),
code_blocks = M.extract_code_blocks(content),
summary = M.get_summary(content, 200),
line_count = select(2, body:gsub("\n", "")) + 1,
has_toc = body:find("^%s*%[TOC%]") ~= nil,
reading_time_minutes = math.ceil(M.count_words(content) / 200) -- Assuming 200 WPM
}
end
-- Validate markdown format
function M.validate_markdown(content)
local errors = {}
if not content or content == "" then
table.insert(errors, "Empty content")
return errors
end
-- Check for balanced code fences; comparing open/close counts of the same
-- "```" pattern always passes, so instead count the markers and require an even number
local fence_count = select(2, content:gsub("```", ""))
if fence_count % 2 ~= 0 then
table.insert(errors, "Unbalanced code blocks")
end
-- Check for malformed links
for link in content:gmatch("%[.-%]%(.-%)") do
if not link:match("%[.-%]%(([^)]+)%)") or
link:match("%[.-%]%(%s*%)") then
table.insert(errors, "Malformed link: " .. link)
end
end
return errors
end
-- Convert markdown to plain text
function M.to_plain_text(content)
local body = M.get_body(content)
if not body then
return ""
end
local plain = body
:gsub("```%w*\n(.-)\n```", "%1") -- Code blocks (unwrap before the inline-code pattern mangles the fences)
:gsub("%!%[([^%]]*)%]%(([^)]+)%)", "[Image: %1]") -- Images (before links; an image is "!" plus a link)
:gsub("%[([^%]]*)%]%(([^)]+)%)", "%1") -- Links to text
:gsub("^#+%s+", "") -- Header at start of content
:gsub("\n#+%s+", "\n") -- Headers of any level, not only "#"
:gsub("%*%*(.-)%*%*", "%1") -- Bold
:gsub("%*(.-)%*", "%1") -- Italic
:gsub("`(.-)`", "%1") -- Inline code
:gsub("\n%s*[-*+]%s+", "\n") -- List items
:gsub("\n%s*%d+%.%s+", "\n") -- Numbered lists
:gsub("\n%s*%[%s*%]%s+", "\n") -- Checkbox lists
:gsub("\n%s*%[%s*x%s*%]%s+", "\n") -- Checked items
:gsub("\n\n+", "\n\n") -- Multiple newlines
plain = plain:gsub("^%s+", ""):gsub("%s+$", "") -- Trim (string.trim is not built into Lua)
return plain
end
return M
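As a standalone sketch (re-implementing just the pattern rather than requiring the module), the link syntax `M.extract_links` recognizes behaves like this; the sample text is invented:

```lua
-- Same Lua pattern as M.extract_links, shown in isolation:
-- "[text](url)" with no "]" inside the text and no ")" inside the url.
local function extract_links(content)
  local links = {}
  for text, url in content:gmatch("%[([^%]]*)%]%(([^)]+)%)") do
    links[#links + 1] = { text = text, url = url }
  end
  return links
end

local links = extract_links("See [docs](https://example.com) and [repo](./src).")
print(#links)        -- 2
print(links[1].url)  -- https://example.com
```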

lua/notex/parser/yaml.lua Normal file

@ -0,0 +1,192 @@
-- YAML header parsing module
local M = {}
local lyaml = require('lyaml')
local utils = require('notex.utils')
-- Extract YAML header from markdown content
function M.extract_yaml_header(content)
if not content or content == "" then
return nil, "Empty content provided"
end
-- Check for YAML header delimiters; track where the opening fence ends so
-- leading whitespace cannot throw a hard-coded "+ 4" offset off
local open_start, open_end = content:find("^%s*%-%-%-%s*\n")
if not open_start then
return nil, "No YAML header found"
end
local close_start = content:find("\n%s*%-%-%-%s*\n", open_end)
if not close_start then
return nil, "Unclosed YAML header"
end
-- Extract YAML content
local yaml_content = content:sub(open_end + 1, close_start - 1)
return yaml_content, nil
end
-- Parse YAML header content
function M.parse_yaml(yaml_content)
if not yaml_content or yaml_content == "" then
return {}, nil
end
local ok, data = pcall(lyaml.load, yaml_content)
if not ok then
return nil, "YAML parsing failed: " .. tostring(data)
end
if type(data) ~= "table" then
return {}, nil
end
return data, nil
end
-- Parse markdown file and extract YAML header
function M.parse_markdown_file(file_path)
-- Validate file exists
if not utils.file_exists(file_path) then
return nil, "File not found: " .. file_path
end
-- Validate UTF-8 encoding
if not utils.is_utf8(file_path) then
return nil, "File is not valid UTF-8: " .. file_path
end
-- Read file content
local content, err = utils.read_file(file_path)
if not content then
return nil, err
end
-- Extract YAML header
local yaml_content, extract_err = M.extract_yaml_header(content)
if not yaml_content then
return nil, extract_err
end
-- Parse YAML
local yaml_data, parse_err = M.parse_yaml(yaml_content)
if not yaml_data then
return nil, parse_err
end
return yaml_data, nil
end
-- Flatten YAML data into key-value pairs
function M.flatten_yaml(data, prefix)
local flattened = {}
prefix = prefix or ""
for key, value in pairs(data) do
local full_key = prefix .. (prefix ~= "" and "." or "") .. key
if type(value) == "table" then
-- Recursively flatten nested tables
local nested = M.flatten_yaml(value, full_key)
for nested_key, nested_value in pairs(nested) do
flattened[nested_key] = nested_value
end
else
flattened[full_key] = value
end
end
return flattened
end
-- Validate YAML structure
function M.validate_yaml(yaml_data)
local errors = {}
if type(yaml_data) ~= "table" then
table.insert(errors, "YAML data must be a table")
return errors
end
-- Check for required fields (if any)
local required_fields = {} -- Add required fields as needed
for _, field in ipairs(required_fields) do
if yaml_data[field] == nil then
table.insert(errors, string.format("Required field '%s' is missing", field))
end
end
-- Validate field types
local field_types = {
-- Define expected types for specific fields
}
for field, expected_type in pairs(field_types) do
if yaml_data[field] ~= nil and type(yaml_data[field]) ~= expected_type then
table.insert(errors, string.format("Field '%s' should be %s, got %s",
field, expected_type, type(yaml_data[field])))
end
end
return errors
end
-- Detect and convert property types
function M.detect_property_type(value)
local value_type = type(value)
if value_type == "boolean" then
return "boolean", value
elseif value_type == "number" then
return "number", value
elseif value_type == "string" then
-- Check for ISO 8601 date format
if value:match("^%d%d%d%d%-%d%d%-%d%d$") or
value:match("^%d%d%d%d%-%d%d%-%d%dT%d%d:%d%d:%d%dZ?$") then
return "date", value
end
-- Check for numeric strings
local num = tonumber(value)
if num and value:match("^%-?%d+%.?%d*$") then
return "number", num
end
-- Check for boolean strings
local lower = value:lower()
if lower == "true" then
return "boolean", true
elseif lower == "false" then
return "boolean", false
end
return "string", value
elseif value_type == "table" then
return "array", vim.json.encode(value)
else
return "string", tostring(value)
end
end
-- Process YAML data into property format
function M.process_properties(yaml_data)
local flattened = M.flatten_yaml(yaml_data)
local properties = {}
for key, value in pairs(flattened) do
local prop_type, processed_value = M.detect_property_type(value)
table.insert(properties, {
key = key,
value = processed_value,
value_type = prop_type
})
end
return properties
end
return M
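The dotted-key flattening performed by `M.flatten_yaml` can be sketched standalone (same algorithm, no module dependency); the input table below is invented:

```lua
-- Standalone copy of the dotted-key flattening in M.flatten_yaml:
-- nested tables collapse into "parent.child" keys with scalar values.
local function flatten(data, prefix)
  local out = {}
  prefix = prefix or ""
  for key, value in pairs(data) do
    local full_key = prefix .. (prefix ~= "" and "." or "") .. key
    if type(value) == "table" then
      for k, v in pairs(flatten(value, full_key)) do
        out[k] = v
      end
    else
      out[full_key] = value
    end
  end
  return out
end

local flat = flatten({ title = "Notes", meta = { status = "draft", priority = 2 } })
print(flat["meta.status"])    -- draft
print(flat["meta.priority"])  -- 2
```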

lua/notex/query.lua Normal file

lua/notex/query/builder.lua Normal file

@ -0,0 +1,370 @@
-- SQL query builder module
local M = {}
local utils = require('notex.utils')
-- Build SQL query from parsed query object
function M.build_sql(parsed_query, options)
options = options or {}
local query = {
select = "",
from = "",
where = "",
group_by = "",
order_by = "",
limit = "",
params = {}
}
-- Build SELECT clause
query.select = M.build_select_clause(parsed_query, options)
-- Build FROM clause
query.from = M.build_from_clause(parsed_query, options)
-- Build WHERE clause (the second return value carries the named bind parameters)
query.where, query.params = M.build_where_clause(parsed_query, options)
-- Build GROUP BY clause
query.group_by = M.build_group_by_clause(parsed_query, options)
-- Build ORDER BY clause
query.order_by = M.build_order_by_clause(parsed_query, options)
-- Build LIMIT clause
query.limit = M.build_limit_clause(parsed_query, options)
-- Combine all clauses
local sql = M.combine_clauses(query)
return sql, query.params
end
-- Build SELECT clause
function M.build_select_clause(parsed_query, options)
local select_fields = {"d.*"}
-- Add property aggregates if needed
if parsed_query.group_by then
table.insert(select_fields, "COUNT(p.id) as document_count")
table.insert(select_fields, "GROUP_CONCAT(p.value) as aggregated_values")
else
-- Add property values for filtering
for field, _ in pairs(parsed_query.filters) do
table.insert(select_fields, string.format("(SELECT p.value FROM properties p WHERE p.document_id = d.id AND p.key = '%s') as %s", field, field))
end
end
if options.count_only then
select_fields = {"COUNT(DISTINCT d.id) as total_count"}
end
return "SELECT " .. table.concat(select_fields, ", ")
end
-- Build FROM clause
function M.build_from_clause(parsed_query, options)
local from_parts = {"documents d"}
-- Add properties join if we have filters or conditions
if next(parsed_query.filters) ~= nil or parsed_query.conditions then
table.insert(from_parts, "LEFT JOIN properties p ON d.id = p.document_id")
end
return "FROM " .. table.concat(from_parts, " ")
end
-- Build WHERE clause
function M.build_where_clause(parsed_query, options)
local where_conditions = {}
local params = {}
-- Add filter conditions
for field, value in pairs(parsed_query.filters) do
local condition, param = M.build_filter_condition(field, value)
table.insert(where_conditions, condition)
if param then
for key, val in pairs(param) do
params[key] = val
end
end
end
-- Add parsed conditions
if parsed_query.conditions then
local condition, param = M.build_conditions(parsed_query.conditions)
if condition then
table.insert(where_conditions, condition)
if param then
for key, val in pairs(param) do
params[key] = val
end
end
end
end
if #where_conditions == 0 then
return "", {}
end
return "WHERE " .. table.concat(where_conditions, " AND "), params
end
-- Build filter condition
function M.build_filter_condition(field, value)
local param_name = field:gsub("[^%w]", "_") .. "_filter"
local params = {}
if type(value) == "table" then
-- Handle array values: one named parameter per element
local placeholders = {}
for i = 1, #value do
local item_param = param_name .. "_" .. i
table.insert(placeholders, ":" .. item_param)
params[item_param] = value[i]
end
return string.format("(p.key = '%s' AND p.value IN (%s))", field, table.concat(placeholders, ", ")), params
else
-- Handle single value
params[param_name] = value
return string.format("(p.key = '%s' AND p.value = :%s)", field, param_name), params
end
end
-- Build conditions from parsed condition tree
function M.build_conditions(conditions)
if conditions.type == "comparison" then
return M.build_comparison_condition(conditions)
elseif conditions.type == "existence" then
return M.build_existence_condition(conditions)
elseif conditions.clauses then
return M.build_logical_condition(conditions)
end
return nil, nil
end
-- Build comparison condition
function M.build_comparison_condition(condition)
local field = condition.field
local operator = condition.operator
local value = condition.value
local negated = condition.negated
local param_name = field:gsub("[^%w]", "_") .. "_comp"
local sql_condition
local params = {}
-- Handle special operators
if operator == "CONTAINS" then
sql_condition = string.format("p.key = '%s' AND p.value LIKE :%s", field, param_name)
params[param_name] = "%" .. value .. "%"
elseif operator == "STARTS_WITH" then
sql_condition = string.format("p.key = '%s' AND p.value LIKE :%s", field, param_name)
params[param_name] = value .. "%"
elseif operator == "ENDS_WITH" then
sql_condition = string.format("p.key = '%s' AND p.value LIKE :%s", field, param_name)
params[param_name] = "%" .. value
elseif operator == "INCLUDES" then
sql_condition = string.format("p.key = '%s' AND p.value LIKE :%s", field, param_name)
params[param_name] = "%" .. value .. "%"
elseif operator == "BEFORE" then
sql_condition = string.format("p.key = '%s' AND p.value < :%s", field, param_name)
params[param_name] = value
elseif operator == "AFTER" then
sql_condition = string.format("p.key = '%s' AND p.value > :%s", field, param_name)
params[param_name] = value
elseif operator == "WITHIN" then
-- Handle relative time
if type(value) == "table" and value.type == "relative_time" then
local time_value = M.calculate_relative_time(value)
sql_condition = string.format("p.key = '%s' AND p.value >= :%s", field, param_name)
params[param_name] = time_value
else
sql_condition = string.format("p.key = '%s' AND p.value >= :%s", field, param_name)
params[param_name] = value
end
else
-- Handle standard comparison operators
local op_map = {
["="] = "=",
["!="] = "!=",
[">"] = ">",
["<"] = "<",
[">="] = ">=",
["<="] = "<="
}
local sql_op = op_map[operator] or "="
sql_condition = string.format("p.key = '%s' AND p.value %s :%s", field, sql_op, param_name)
params[param_name] = value
end
if negated then
sql_condition = "NOT (" .. sql_condition .. ")"
end
return sql_condition, params
end
-- Build existence condition
function M.build_existence_condition(condition)
local field = condition.field
local negated = condition.negated
local sql_condition = string.format("EXISTS (SELECT 1 FROM properties p2 WHERE p2.document_id = d.id AND p2.key = '%s')", field)
if negated then
sql_condition = "NOT " .. sql_condition
end
return sql_condition, {}
end
-- Build logical condition (AND/OR)
function M.build_logical_condition(conditions)
local clause_parts = {}
local all_params = {}
for _, clause in ipairs(conditions.clauses) do
local clause_sql, clause_params = M.build_conditions(clause)
if clause_sql then
table.insert(clause_parts, clause_sql)
if clause_params then
for key, value in pairs(clause_params) do
all_params[key] = value
end
end
end
end
if #clause_parts == 0 then
return nil, nil
end
local logical_op = conditions.type:upper()
local sql_condition = "(" .. table.concat(clause_parts, " " .. logical_op .. " ") .. ")"
return sql_condition, all_params
end
-- Build GROUP BY clause
function M.build_group_by_clause(parsed_query, options)
if not parsed_query.group_by then
return ""
end
local group_fields = {}
if parsed_query.group_by == "property_key" then
table.insert(group_fields, "p.key")
else
table.insert(group_fields, "p." .. parsed_query.group_by)
end
return "GROUP BY " .. table.concat(group_fields, ", ")
end
-- Build ORDER BY clause
function M.build_order_by_clause(parsed_query, options)
if not parsed_query.order_by then
return "ORDER BY d.updated_at DESC"
end
local field = parsed_query.order_by.field
local direction = parsed_query.order_by.direction or "ASC"
-- Map field names to columns
local field_map = {
created_at = "d.created_at",
updated_at = "d.updated_at",
file_path = "d.file_path",
title = "CASE WHEN p.key = 'title' THEN p.value END"
}
local column = field_map[field] or "p." .. field
return string.format("ORDER BY %s %s", column, direction)
end
-- Build LIMIT clause
function M.build_limit_clause(parsed_query, options)
if not parsed_query.limit then
return ""
end
return "LIMIT " .. parsed_query.limit
end
-- Combine all SQL clauses
function M.combine_clauses(query)
local parts = {}
table.insert(parts, query.select)
table.insert(parts, query.from)
if query.where ~= "" then
table.insert(parts, query.where)
end
if query.group_by ~= "" then
table.insert(parts, query.group_by)
end
if query.order_by ~= "" then
table.insert(parts, query.order_by)
end
if query.limit ~= "" then
table.insert(parts, query.limit)
end
return table.concat(parts, "\n")
end
-- Calculate relative time
function M.calculate_relative_time(relative_time)
local current_time = os.time()
local amount = relative_time.amount
local unit = relative_time.unit
local seconds = 0
if unit == "s" then
seconds = amount
elseif unit == "m" then
seconds = amount * 60
elseif unit == "h" then
seconds = amount * 3600
elseif unit == "d" then
seconds = amount * 86400
elseif unit == "w" then
seconds = amount * 604800
elseif unit == "mo" then -- month (approximate); "m" already means minutes above, so that branch was unreachable
seconds = amount * 2592000
elseif unit == "y" then -- year (approximate)
seconds = amount * 31536000
end
return os.date("%Y-%m-%d", current_time - seconds)
end
-- Build count query
function M.build_count_query(parsed_query)
local options = { count_only = true }
local sql, params = M.build_sql(parsed_query, options)
return sql, params
end
-- Validate built SQL
function M.validate_sql(sql)
if not sql or sql == "" then
return false, "Empty SQL query"
end
-- Basic SQL injection prevention
if sql:match(";") or sql:match("DROP") or sql:match("DELETE") or sql:match("UPDATE") or sql:match("INSERT") then
return false, "Potentially unsafe SQL detected"
end
return true, "SQL query is valid"
end
return M
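The relative-time cutoff in `M.calculate_relative_time` reduces to: map the unit to seconds, subtract from now, and format as an ISO date for lexicographic comparison against property values. A standalone sketch (the `mo` month key is an assumption; the module keeps its own unit table):

```lua
-- Standalone sketch of the relative-time cutoff logic.
local UNIT_SECONDS = {
  s = 1, m = 60, h = 3600, d = 86400, w = 604800,
  mo = 2592000,  -- month, approximate (30 days)
  y = 31536000,  -- year, approximate (365 days)
}

local function relative_cutoff(amount, unit, now)
  now = now or os.time()
  return os.date("%Y-%m-%d", now - amount * (UNIT_SECONDS[unit] or 0))
end

-- Pinned to a fixed timestamp so the result is reproducible.
local noon_oct_12 = os.time({ year = 2025, month = 10, day = 12, hour = 12 })
print(relative_cutoff(7, "d", noon_oct_12))  -- 2025-10-05
```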


@ -0,0 +1,374 @@
-- Query execution engine module
local M = {}
local database = require('notex.database.init')
local query_builder = require('notex.query.builder')
local query_parser = require('notex.query.parser')
local utils = require('notex.utils')
-- Execute parsed query
function M.execute(parsed_query, options)
options = options or {}
local start_time = vim.loop.hrtime()
local result = {
documents = {},
total_count = 0,
execution_time_ms = 0,
query_hash = "",
success = false,
errors = {},
metadata = {}
}
-- Validate parsed query
if #parsed_query.parse_errors > 0 then
result.errors = parsed_query.parse_errors
result.error_type = "parse_error"
return result
end
-- Generate query hash
result.query_hash = query_parser.generate_query_hash(parsed_query)
-- Build SQL query
local sql, params = query_builder.build_sql(parsed_query, options)
if not sql then
table.insert(result.errors, "Failed to build SQL query")
result.error_type = "build_error"
return result
end
-- Validate SQL
local valid, validation_error = query_builder.validate_sql(sql)
if not valid then
table.insert(result.errors, validation_error)
result.error_type = "validation_error"
return result
end
-- Execute query
local ok, query_result = database.execute(sql, params)
if not ok then
table.insert(result.errors, "Query execution failed: " .. query_result)
result.error_type = "execution_error"
return result
end
-- Process results
local processed_results = M.process_query_results(query_result, parsed_query, options)
result.documents = processed_results.documents
result.metadata = processed_results.metadata
-- Get total count
result.total_count = M.get_total_count(parsed_query, options)
-- Calculate execution time
local end_time = vim.loop.hrtime()
result.execution_time_ms = (end_time - start_time) / 1e6
result.success = true
-- Log slow queries
if result.execution_time_ms > 100 then
utils.log("WARN", string.format("Slow query detected: %.2fms", result.execution_time_ms), {
query_hash = result.query_hash,
document_count = #result.documents
})
end
return result
end
-- Process query results
function M.process_query_results(raw_results, parsed_query, options)
local documents = {}
local metadata = {
properties_found = {},
aggregation_results = {}
}
-- Group results by document if we have properties
local documents_by_id = {}
for _, row in ipairs(raw_results) do
local doc_id = row.id
if not documents_by_id[doc_id] then
documents_by_id[doc_id] = {
id = doc_id,
file_path = row.file_path,
content_hash = row.content_hash,
last_modified = row.last_modified,
created_at = row.created_at,
updated_at = row.updated_at,
properties = {}
}
end
-- Add properties from result row
for key, value in pairs(row) do
if key ~= "id" and key ~= "file_path" and key ~= "content_hash" and
key ~= "last_modified" and key ~= "created_at" and key ~= "updated_at" then
if value and value ~= "" then
documents_by_id[doc_id].properties[key] = value
metadata.properties_found[key] = true
end
end
end
end
-- Convert to array
for _, doc in pairs(documents_by_id) do
table.insert(documents, doc)
end
-- Apply post-processing filters
documents = M.apply_post_filters(documents, parsed_query, options)
-- Apply sorting if not handled by SQL
if parsed_query.order_by and parsed_query.order_by.field == "relevance" then
documents = M.sort_by_relevance(documents, parsed_query)
end
return {
documents = documents,
metadata = metadata
}
end
-- Apply post-processing filters
function M.apply_post_filters(documents, parsed_query, options)
local filtered = documents
-- Apply text search highlighting if requested
if options.highlight and parsed_query.conditions then
filtered = M.apply_text_highlighting(filtered, parsed_query)
end
-- Apply additional filters that couldn't be handled by SQL
filtered = M.apply_complex_filters(filtered, parsed_query)
return filtered
end
-- Apply text highlighting
function M.apply_text_highlighting(documents, parsed_query)
-- Implementation for text highlighting
-- This would mark matching text in document properties
return documents
end
-- Apply complex filters
function M.apply_complex_filters(documents, parsed_query)
local filtered = {}
for _, doc in ipairs(documents) do
local include = true
-- Apply any complex filter logic here
if include then
table.insert(filtered, doc)
end
end
return filtered
end
-- Sort by relevance
function M.sort_by_relevance(documents, parsed_query)
-- Simple relevance scoring based on filter matches
local scored = {}
for _, doc in ipairs(documents) do
local score = 0
-- Score based on filter matches
for field, _ in pairs(parsed_query.filters) do
if doc.properties[field] then
score = score + 1
end
end
-- Add document with score
table.insert(scored, {
document = doc,
score = score
})
end
-- Sort by score (descending)
table.sort(scored, function(a, b) return a.score > b.score end)
-- Extract documents
local sorted_documents = {}
for _, item in ipairs(scored) do
table.insert(sorted_documents, item.document)
end
return sorted_documents
end
-- Get total count for query
function M.get_total_count(parsed_query, options)
local count_sql, count_params = query_builder.build_count_query(parsed_query)
local ok, count_result = database.execute(count_sql, count_params)
if not ok then
utils.log("ERROR", "Failed to get total count", { error = count_result })
return 0
end
return count_result[1] and count_result[1].total_count or 0
end
-- Execute query with caching
function M.execute_cached(parsed_query, options)
options = options or {}
local cache_enabled = options.cache ~= false
if not cache_enabled then
return M.execute(parsed_query, options)
end
local query_hash = query_parser.generate_query_hash(parsed_query)
local cache_key = "query:" .. query_hash
-- Check cache (implementation would depend on cache system)
-- For now, just execute directly
return M.execute(parsed_query, options)
end
-- Validate query before execution
function M.validate_query(parsed_query)
local errors = {}
-- Check for required fields; bail out early so the loops below never index nil
if not parsed_query or not parsed_query.filters then
table.insert(errors, "Query must have filters")
return false, errors
end
-- Validate filter values
for field, value in pairs(parsed_query.filters or {}) do
if not M.is_valid_filter_value(value) then
table.insert(errors, string.format("Invalid filter value for field '%s'", field))
end
end
-- Validate conditions
if parsed_query.conditions then
M.validate_conditions_recursive(parsed_query.conditions, errors)
end
return #errors == 0, errors
end
-- Check if filter value is valid
function M.is_valid_filter_value(value)
if type(value) == "string" and #value > 1000 then
return false
end
if type(value) == "table" and #value > 100 then
return false
end
return true
end
-- Validate conditions recursively
function M.validate_conditions_recursive(conditions, errors)
if conditions.type == "comparison" then
if not conditions.field or not conditions.operator or conditions.value == nil then
table.insert(errors, "Invalid comparison condition")
end
elseif conditions.type == "existence" then
if not conditions.field then
table.insert(errors, "Invalid existence condition")
end
elseif conditions.clauses then
for _, clause in ipairs(conditions.clauses) do
M.validate_conditions_recursive(clause, errors)
end
end
end
-- Get query suggestions
function M.get_suggestions(partial_query, options)
local suggestions = {
properties = {},
values = {},
operators = {}
}
-- Get property suggestions from schema
local ok, schema_result = database.execute("SELECT DISTINCT property_key FROM schema_metadata ORDER BY document_count DESC LIMIT 20")
if ok then
for _, row in ipairs(schema_result) do
table.insert(suggestions.properties, row.property_key)
end
end
-- Get value suggestions for common properties
local common_properties = {"status", "priority", "tags", "type"}
for _, prop in ipairs(common_properties) do
local ok, values_result = database.execute(
"SELECT DISTINCT value FROM properties WHERE key = ? AND value_type = 'string' LIMIT 10",
{ prop }
)
if ok then
suggestions.values[prop] = {}
for _, row in ipairs(values_result) do
table.insert(suggestions.values[prop], row.value)
end
end
end
-- Common operators
suggestions.operators = {"=", "!=", ">", "<", ">=", "<=", "CONTAINS", "STARTS_WITH", "ENDS_WITH", "INCLUDES"}
return suggestions
end
-- Explain query execution plan
function M.explain_query(parsed_query, options)
local sql, params = query_builder.build_sql(parsed_query, options)
local explain_sql = "EXPLAIN QUERY PLAN " .. sql
local ok, explain_result = database.execute(explain_sql, params)
if not ok then
return {
success = false,
error = explain_result,
sql = sql
}
end
return {
success = true,
sql = sql,
params = params,
plan = explain_result,
estimated_cost = M.calculate_query_cost(explain_result)
}
end
-- Calculate query cost
function M.calculate_query_cost(explain_result)
local total_cost = 0
for _, row in ipairs(explain_result) do
-- Simple cost calculation based on SQLite's EXPLAIN output
if row.detail and row.detail:match("SCAN") then
total_cost = total_cost + 10
elseif row.detail and row.detail:match("SEARCH") then
total_cost = total_cost + 2
else
total_cost = total_cost + 1
end
end
return total_cost
end
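-- Illustrative example (hypothetical EXPLAIN QUERY PLAN rows): a full table
-- scan is weighted heavily, an index search lightly:
--   M.calculate_query_cost({
--     { detail = "SCAN documents" },                          -- +10
--     { detail = "SEARCH properties USING INDEX (key=?)" },   -- +2
--   })  -- returns 12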
return M

365
lua/notex/query/init.lua Normal file

@ -0,0 +1,365 @@
-- Query engine coordination module
local M = {}
local query_parser = require('notex.query.parser')
local query_executor = require('notex.query.executor')
local database = require('notex.database.schema')
local utils = require('notex.utils')
-- Execute query from string
function M.execute_query(query_string, options)
options = options or {}
local start_time = utils.timer("Query execution")
-- Parse query
local parsed_query = query_parser.parse(query_string)
if #parsed_query.parse_errors > 0 then
return {
success = false,
error_type = "parse_error",
errors = parsed_query.parse_errors,
query_string = query_string
}
end
-- Validate query
local valid, validation_errors = query_executor.validate_query(parsed_query)
if not valid then
return {
success = false,
error_type = "validation_error",
errors = validation_errors,
query_string = query_string
}
end
-- Execute query
local result = query_executor.execute(parsed_query, options)
start_time()
-- Add metadata
result.query_string = query_string
result.parsed_query = parsed_query
return result
end
-- Execute saved query
function M.execute_saved_query(query_name, options)
options = options or {}
-- Get saved query from database
local ok, query_result = database.queries.get_by_name(query_name)
if not ok then
return {
success = false,
error_type = "database_error",
error = "Failed to retrieve saved query: " .. query_result
}
end
if not query_result then
return {
success = false,
error_type = "not_found",
error = "Saved query not found: " .. query_name
}
end
-- Update usage statistics
database.queries.update_usage(query_result.id)
-- Execute the query
local result = M.execute_query(query_result.definition, options)
result.query_name = query_name
result.saved_query_id = query_result.id
return result
end
-- Save query for reuse
function M.save_query(query_name, query_string, options)
options = options or {}
-- Validate query before saving
local parsed_query = query_parser.parse(query_string)
if #parsed_query.parse_errors > 0 then
return {
success = false,
error_type = "parse_error",
errors = parsed_query.parse_errors
}
end
-- Check if query already exists (get_by_name returns ok, result_or_err, as above)
local get_ok, existing_query = database.queries.get_by_name(query_name)
if not get_ok then
return {
success = false,
error_type = "database_error",
error = "Failed to check existing query: " .. tostring(existing_query)
}
end
local query_id = utils.generate_id()
local current_time = os.time()
local query_data = {
id = existing_query and existing_query.id or query_id,
name = query_name,
definition = query_string,
created_at = existing_query and existing_query.created_at or current_time
}
local ok, err
if existing_query then
ok, err = database.queries.update(query_data)
else
ok, err = database.queries.create(query_data)
end
if not ok then
return {
success = false,
error_type = "database_error",
error = "Failed to save query: " .. err
}
end
return {
success = true,
query_id = query_data.id,
query_name = query_name,
action = existing_query and "updated" or "created"
}
end
-- List saved queries
function M.list_saved_queries(options)
options = options or {}
local ok, queries = database.queries.get_all()
if not ok then
return {
success = false,
error_type = "database_error",
error = "Failed to retrieve saved queries: " .. queries
}
end
-- Add metadata to each query
for _, query in ipairs(queries) do
query.definition_preview = query.definition:sub(1, 100) .. (#query.definition > 100 and "..." or "")
query.last_used_formatted = (query.last_used or 0) > 0 and os.date("%Y-%m-%d %H:%M", query.last_used) or "Never"
end
return {
success = true,
queries = queries,
total_count = #queries
}
end
-- Delete saved query
function M.delete_saved_query(query_name)
local ok, query_result = database.queries.get_by_name(query_name)
if not ok then
return {
success = false,
error_type = "database_error",
error = "Failed to find query: " .. query_result
}
end
if not query_result then
return {
success = false,
error_type = "not_found",
error = "Query not found: " .. query_name
}
end
local delete_ok, delete_err = database.queries.delete(query_result.id)
if not delete_ok then
return {
success = false,
error_type = "database_error",
error = "Failed to delete query: " .. delete_err
}
end
return {
success = true,
query_name = query_name,
deleted_query_id = query_result.id
}
end
-- Get query suggestions
function M.get_suggestions(partial_query, cursor_pos)
cursor_pos = cursor_pos or #partial_query
-- Parse partial query to get context
local parsed_query = query_parser.parse(partial_query)
-- Get suggestions based on context
local suggestions = query_executor.get_suggestions(parsed_query, {
cursor_pos = cursor_pos
})
return {
success = true,
suggestions = suggestions,
cursor_pos = cursor_pos
}
end
-- Validate query syntax
function M.validate_query_syntax(query_string)
local parsed_query = query_parser.parse(query_string)
return {
valid = #parsed_query.parse_errors == 0,
errors = parsed_query.parse_errors,
parsed_query = parsed_query
}
end
-- Explain query
function M.explain_query(query_string, options)
options = options or {}
local parsed_query = query_parser.parse(query_string)
if #parsed_query.parse_errors > 0 then
return {
success = false,
error_type = "parse_error",
errors = parsed_query.parse_errors
}
end
local explanation = query_executor.explain_query(parsed_query, options)
return explanation
end
-- Format query string
function M.format_query(query_string)
local parsed_query = query_parser.parse(query_string)
if #parsed_query.parse_errors > 0 then
return query_string, parsed_query.parse_errors
end
-- Rebuild query with proper formatting
local formatted_parts = {}
-- Add filters
if next(parsed_query.filters) then
local filter_parts = {}
for key, value in pairs(parsed_query.filters) do
if type(value) == "string" then
table.insert(filter_parts, string.format('%s: "%s"', key, value))
elseif type(value) == "table" then
table.insert(filter_parts, string.format('%s: [%s]', key, table.concat(vim.tbl_map(tostring, value), ", ")))
else
table.insert(filter_parts, string.format('%s: %s', key, tostring(value)))
end
end
table.insert(formatted_parts, table.concat(filter_parts, "\n"))
end
-- Add conditions
if parsed_query.conditions then
table.insert(formatted_parts, "WHERE " .. M.format_conditions(parsed_query.conditions))
end
-- Add order by
if parsed_query.order_by then
table.insert(formatted_parts, string.format("ORDER BY %s %s", parsed_query.order_by.field, parsed_query.order_by.direction))
end
-- Add group by
if parsed_query.group_by then
table.insert(formatted_parts, "GROUP BY " .. parsed_query.group_by)
end
-- Add limit
if parsed_query.limit then
table.insert(formatted_parts, "LIMIT " .. parsed_query.limit)
end
local formatted_query = table.concat(formatted_parts, "\n")
return formatted_query, {}
end
-- Format conditions recursively
function M.format_conditions(conditions)
if conditions.type == "comparison" then
return string.format("%s %s %s", conditions.field, conditions.operator, tostring(conditions.value))
elseif conditions.type == "existence" then
return conditions.field
elseif conditions.clauses then
local clause_parts = {}
for _, clause in ipairs(conditions.clauses) do
table.insert(clause_parts, M.format_conditions(clause))
end
local operator = conditions.type:upper()
return "(" .. table.concat(clause_parts, " " .. operator .. " ") .. ")"
end
return ""
end
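-- Illustrative example: a parsed condition tree such as
--   { type = "AND", clauses = {
--     { type = "comparison", field = "priority", operator = ">=", value = 3 },
--     { type = "existence", field = "due_date" },
--   } }
-- formats as: (priority >= 3 AND due_date)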
-- Get query statistics
function M.get_query_statistics(options)
options = options or {}
local stats = {
total_queries = 0,
saved_queries = 0,
recent_queries = {},
popular_queries = {},
average_execution_time = 0
}
-- Get saved queries count
local ok, saved_queries = database.queries.get_all()
if ok then
stats.saved_queries = #saved_queries
-- Get popular queries
local popular = {}
for _, query in ipairs(saved_queries) do
if (query.use_count or 0) > 0 then
table.insert(popular, {
name = query.name,
use_count = query.use_count,
last_used = query.last_used
})
end
end
table.sort(popular, function(a, b) return a.use_count > b.use_count end)
stats.popular_queries = vim.list_slice(popular, 1, 10)
end
return {
success = true,
statistics = stats
}
end
-- Initialize query engine
function M.init(database_path)
local ok, err = require('notex.database.init').init(database_path)
if not ok then
return false, "Failed to initialize database for query engine: " .. err
end
utils.log("INFO", "Query engine initialized")
return true, "Query engine initialized successfully"
end
return M

412
lua/notex/query/parser.lua Normal file

@ -0,0 +1,412 @@
-- Query syntax parser module
local M = {}
local utils = require('notex.utils')
-- Parse query string into structured object
function M.parse(query_string)
local result = {
filters = {},
conditions = nil,
order_by = nil,
group_by = nil,
limit = nil,
raw_query = query_string,
parse_errors = {}
}
if not query_string or query_string == "" then
table.insert(result.parse_errors, "Empty query string")
return result
end
-- Extract query block from markdown
local query_content = M.extract_query_block(query_string)
if not query_content then
table.insert(result.parse_errors, "No valid query block found")
return result
end
-- Parse query lines
local lines = M.split_lines(query_content)
local current_section = "filters" -- filters, where, order_by, group_by, limit
for _, line in ipairs(lines) do
line = vim.trim(line)
-- Skip empty lines and comments (Lua has no `continue` statement or string:trim() method)
if line ~= "" and not line:match("^%s*%-%-%-") then
-- Detect section changes
local section = M.detect_section(line)
if section then
current_section = section
-- Parse based on current section
elseif current_section == "filters" then
M.parse_filter_line(line, result)
elseif current_section == "where" then
M.parse_condition_line(line, result)
elseif current_section == "order_by" then
M.parse_order_by_line(line, result)
elseif current_section == "group_by" then
M.parse_group_by_line(line, result)
elseif current_section == "limit" then
M.parse_limit_line(line, result)
end
end
end
-- Validate parsed query
M.validate_parsed_query(result)
return result
end
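-- Example of the query syntax this parser is intended to accept (property
-- names here are illustrative, not a fixed schema):
--
--   ```notex-query
--   status: "active"
--   WHERE priority >= 3 AND tags CONTAINS "urgent"
--   ORDER BY created_at DESC
--   LIMIT 10
--   ```
--
-- Lines before the first section keyword are treated as property filters.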
-- Extract query block from markdown content
function M.extract_query_block(content)
-- Match ```notex-query blocks (string.match returns captures only, so capture the body directly)
local query_content = content:match("```notex%-query%s*\n(.*)\n```")
if query_content then
return query_content
end
-- Match inline query format
local inline_query = content:match("```notex%-query%s*\n(.*)")
if inline_query then
return inline_query
end
return nil
end
-- Split content into lines
function M.split_lines(content)
local lines = {}
for line in content:gmatch("[^\r\n]+") do
table.insert(lines, line)
end
return lines
end
-- Detect query section
function M.detect_section(line)
local upper_line = line:upper()
if upper_line:match("^FROM%s+") then
return "filters"
elseif upper_line:match("^WHERE%s+") then
return "where"
elseif upper_line:match("^ORDER%s+BY%s+") then
return "order_by"
elseif upper_line:match("^GROUP%s+BY%s+") then
return "group_by"
elseif upper_line:match("^LIMIT%s+") then
return "limit"
end
return nil
end
-- Parse filter line (FROM clause or direct property filters)
function M.parse_filter_line(line, result)
-- Handle FROM clause
local from_match = line:match("^FROM%s+(.+)")
if from_match then
M.parse_property_filters(from_match, result)
return
end
-- Handle direct property filters
M.parse_property_filters(line, result)
end
-- Parse property filters
function M.parse_property_filters(filter_string, result)
-- Parse YAML-style property filters
local yaml_filters = M.parse_yaml_filters(filter_string)
for key, value in pairs(yaml_filters) do
result.filters[key] = value
end
end
-- Parse YAML-style filters
function M.parse_yaml_filters(yaml_string)
local filters = {}
-- Handle key: value pairs
-- %w alone misses underscores, hyphens, and dots in property keys
for key, value in yaml_string:gmatch("([%w_%-%.]+)%s*:%s*(.+)") do
-- Parse quoted values
local quoted_value = value:match('^"(.*)"$')
if quoted_value then
filters[key] = quoted_value
else
-- Parse array values
local array_match = value:match("^%[(.*)%]$")
if array_match then
local array_values = {}
for item in array_match:gsub("%s", ""):gmatch("[^,]+") do
table.insert(array_values, item:gsub("^['\"](.*)['\"]$", "%1"))
end
filters[key] = array_values
else
filters[key] = vim.trim(value) -- plain Lua strings have no :trim() method
end
end
end
return filters
end
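-- Illustrative examples (one key: value pair per filter line):
--   M.parse_yaml_filters('status: "active"')   --> { status = "active" }
--   M.parse_yaml_filters('tags: ["a", "b"]')   --> { tags = { "a", "b" } }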
-- Parse WHERE condition line
function M.parse_condition_line(line, result)
local condition_string = line:match("^WHERE%s+(.+)")
if not condition_string then
table.insert(result.parse_errors, "Invalid WHERE clause: " .. line)
return
end
result.conditions = M.parse_conditions(condition_string)
end
-- Parse conditions with logical operators
function M.parse_conditions(condition_string)
local conditions = {
type = "AND",
clauses = {}
}
-- Split by AND/OR operators
local and_parts = M.split_logical_operators(condition_string, "AND")
local or_parts = {}
-- Check if this is an OR condition
for _, part in ipairs(and_parts) do
if part:match("OR") then
local or_split = M.split_logical_operators(part, "OR")
if #or_split > 1 then
conditions.type = "OR"
for _, or_part in ipairs(or_split) do
table.insert(conditions.clauses, M.parse_single_condition(or_part))
end
return conditions
end
end
end
-- Parse AND conditions
for _, and_part in ipairs(and_parts) do
table.insert(conditions.clauses, M.parse_single_condition(and_part))
end
return conditions
end
-- Split by logical operator
function M.split_logical_operators(str, operator)
-- A bracketed character class cannot match a multi-character operator word; scan with find instead
local parts = {}
local pattern = "%s+" .. operator .. "%s+"
local pos = 1
while true do
local s, e = str:find(pattern, pos)
if not s then break end
table.insert(parts, vim.trim(str:sub(pos, s - 1)))
pos = e + 1
end
table.insert(parts, vim.trim(str:sub(pos)))
return parts
end
-- Parse single condition
function M.parse_single_condition(condition_string)
condition_string = vim.trim(condition_string)
-- Handle NOT operator
local negated = false
if condition_string:match("^NOT%s+") then
negated = true
condition_string = vim.trim(condition_string:sub(4))
end
-- Parse comparison operators
local operators = {
{ pattern = ">=%s*", type = ">=" },
{ pattern = "<=%s*", type = "<=" },
{ pattern = "!=%s*", type = "!=" },
{ pattern = ">%s*", type = ">" },
{ pattern = "<%s*", type = "<" },
{ pattern = "=%s*", type = "=" },
{ pattern = "%s+CONTAINS%s+", type = "CONTAINS" },
{ pattern = "%s+STARTS_WITH%s+", type = "STARTS_WITH" },
{ pattern = "%s+ENDS_WITH%s+", type = "ENDS_WITH" },
{ pattern = "%s+INCLUDES%s+", type = "INCLUDES" },
{ pattern = "%s+BEFORE%s+", type = "BEFORE" },
{ pattern = "%s+AFTER%s+", type = "AFTER" },
{ pattern = "%s+WITHIN%s+", type = "WITHIN" }
}
for _, op in ipairs(operators) do
local field, value = condition_string:match("^(.-)" .. op.pattern .. "(.+)$")
if field and value then
return {
type = "comparison",
field = vim.trim(field),
operator = op.type,
value = M.parse_value(vim.trim(value)),
negated = negated
}
end
end
-- If no operator found, treat as existence check
return {
type = "existence",
field = condition_string,
negated = negated
}
end
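-- Illustrative examples of the intended output:
--   M.parse_single_condition('priority >= 3')
--     --> { type = "comparison", field = "priority", operator = ">=", value = 3, negated = false }
--   M.parse_single_condition('NOT archived')
--     --> { type = "existence", field = "archived", negated = true }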
-- Parse value (handle quotes, numbers, booleans)
function M.parse_value(value_string)
-- Handle quoted strings
local quoted = value_string:match('^"(.*)"$')
if quoted then
return quoted
end
-- Handle single quoted strings
quoted = value_string:match("^'(.*)'$")
if quoted then
return quoted
end
-- Handle numbers
local number = tonumber(value_string)
if number then
return number
end
-- Handle booleans
local lower = value_string:lower()
if lower == "true" then
return true
elseif lower == "false" then
return false
end
-- Handle date/time relative values (e.g., "7d" for 7 days)
local time_match = value_string:match("^(%d+)([hdwmy])$")
if time_match then
local amount = tonumber(time_match[1])
local unit = time_match[2]
return { type = "relative_time", amount = amount, unit = unit }
end
-- Default to string
return value_string
end
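-- Illustrative examples of the intended coercions:
--   M.parse_value('"done"')  --> "done"   (quoted string)
--   M.parse_value('42')      --> 42       (number)
--   M.parse_value('true')    --> true     (boolean)
--   M.parse_value('7d')      --> { type = "relative_time", amount = 7, unit = "d" }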
-- Parse ORDER BY line
function M.parse_order_by_line(line, result)
local order_string = line:match("^ORDER%s+BY%s+(.+)")
if not order_string then
table.insert(result.parse_errors, "Invalid ORDER BY clause: " .. line)
return
end
-- Lua patterns have no alternation (ASC|DESC); match the field, then validate the direction separately
local field, direction = order_string:match("^([%w_%-%.]+)%s*(%a*)$")
if not field then
table.insert(result.parse_errors, "Invalid ORDER BY format: " .. order_string)
return
end
direction = direction:upper()
if direction ~= "" and direction ~= "ASC" and direction ~= "DESC" then
table.insert(result.parse_errors, "Invalid ORDER BY direction: " .. order_string)
return
end
result.order_by = {
field = field,
direction = direction ~= "" and direction or "ASC"
}
end
-- Parse GROUP BY line
function M.parse_group_by_line(line, result)
local group_string = line:match("^GROUP%s+BY%s+(.+)")
if not group_string then
table.insert(result.parse_errors, "Invalid GROUP BY clause: " .. line)
return
end
result.group_by = vim.trim(group_string)
end
-- Parse LIMIT line
function M.parse_limit_line(line, result)
local limit_string = line:match("^LIMIT%s+(.+)")
if not limit_string then
table.insert(result.parse_errors, "Invalid LIMIT clause: " .. line)
return
end
local limit = tonumber(limit_string)
if not limit or limit <= 0 then
table.insert(result.parse_errors, "Invalid LIMIT value: " .. limit_string)
return
end
result.limit = limit
end
-- Validate parsed query
function M.validate_parsed_query(query)
-- Check if we have any filters or conditions
if next(query.filters) == nil and not query.conditions then
table.insert(query.parse_errors, "Query must have at least one filter or condition")
end
-- Validate field names (basic check)
local valid_fields = {} -- Could be populated from schema
for field, _ in pairs(query.filters) do
if not M.is_valid_field_name(field) then
table.insert(query.parse_errors, "Invalid field name: " .. field)
end
end
-- Validate conditions
if query.conditions then
M.validate_conditions(query.conditions, query.parse_errors)
end
end
-- Check if field name is valid
function M.is_valid_field_name(field)
return field:match("^[%w_%-%.]+$") ~= nil
end
-- Validate conditions recursively
function M.validate_conditions(conditions, errors)
if conditions.type == "comparison" then
if not M.is_valid_field_name(conditions.field) then
table.insert(errors, "Invalid field name in condition: " .. conditions.field)
end
elseif conditions.type == "existence" then
if not M.is_valid_field_name(conditions.field) then
table.insert(errors, "Invalid field name in existence check: " .. conditions.field)
end
elseif conditions.clauses then
for _, clause in ipairs(conditions.clauses) do
M.validate_conditions(clause, errors)
end
end
end
-- Generate query hash for caching
function M.generate_query_hash(query)
local hash_input = vim.json.encode({
filters = query.filters,
conditions = query.conditions,
order_by = query.order_by,
group_by = query.group_by,
limit = query.limit
})
return utils.sha256(hash_input)
end
return M

12
lua/notex/schema.lua Normal file

@ -0,0 +1,12 @@
-- lua/notex/schema.lua
local M = {}
function M.infer(doc)
local inferred_schema = {}
for key, value in pairs(doc.properties) do
inferred_schema[key] = type(value)
end
return inferred_schema
end
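-- Illustrative example: for a document whose properties are
--   { title = "Note", priority = 2, done = false }
-- infer returns { title = "string", priority = "number", done = "boolean" }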
return M

561
lua/notex/ui/buffer.lua Normal file

@ -0,0 +1,561 @@
-- Virtual buffer management module
local M = {}
local utils = require('notex.utils')
-- Active buffer management
local active_buffers = {}
local buffer_configs = {}
-- Create virtual buffer for query results
function M.create_query_buffer(query_results, options)
options = options or {}
local buffer_id = vim.api.nvim_create_buf(false, true) -- unlisted, scratch buffer
if not buffer_id then
return nil, "Failed to create virtual buffer"
end
-- Set buffer options
M.setup_buffer_options(buffer_id, options)
-- Generate buffer content
local lines, syntax = M.generate_buffer_content(query_results, options)
-- Set buffer content
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
-- Set syntax highlighting
if syntax then
vim.api.nvim_buf_set_option(buffer_id, "filetype", syntax)
end
-- Create window if requested
local window_id
if options.create_window ~= false then
window_id = M.create_query_window(buffer_id, options)
end
-- Store buffer configuration
local config = {
buffer_id = buffer_id,
window_id = window_id,
query_results = query_results,
options = options,
created_at = os.time(),
mappings = M.setup_buffer_mappings(buffer_id, options)
}
active_buffers[buffer_id] = config
buffer_configs[buffer_id] = config
return config
end
-- Setup buffer options
function M.setup_buffer_options(buffer_id, options)
local buf_opts = {
buftype = "nofile",
swapfile = false,
bufhidden = "wipe",
modifiable = options.modifiable or false,
readonly = not (options.modifiable or false),
textwidth = 0,
wrapmargin = 0,
wrap = options.wrap or false
}
for opt, value in pairs(buf_opts) do
vim.api.nvim_buf_set_option(buffer_id, opt, value)
end
-- Set buffer name
-- Make the name unique per buffer; nvim_buf_set_name fails if the name is already taken
local buf_name = options.name or ("notex://query-results/" .. buffer_id)
vim.api.nvim_buf_set_name(buffer_id, buf_name)
end
-- Generate buffer content from query results
function M.generate_buffer_content(query_results, options)
local lines = {}
local syntax = "notex"
-- Add header
table.insert(lines, "Query Results")
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
-- Add query metadata
if query_results.query_string then
table.insert(lines, "Query: " .. query_results.query_string)
table.insert(lines, "")
end
-- Add execution statistics
table.insert(lines, string.format("Found %d documents (%.2fms)",
query_results.total_count or 0,
query_results.execution_time_ms or 0))
table.insert(lines, "")
-- Add document results
if query_results.documents and #query_results.documents > 0 then
lines = M.add_document_table(lines, query_results.documents, options)
else
table.insert(lines, "No documents found matching the query criteria.")
table.insert(lines, "")
end
-- Add help section
lines = M.add_help_section(lines, options)
return lines, syntax
end
-- Add document table to buffer
function M.add_document_table(lines, documents, options)
local max_width = options.max_width or 120
local include_properties = options.include_properties or {"title", "status", "priority", "created_at"}
-- Calculate column widths
local column_widths = M.calculate_column_widths(documents, include_properties, max_width)
-- Add table header
local header_parts = {"#", "File"}
for _, prop in ipairs(include_properties) do
table.insert(header_parts, M.format_column_header(prop, column_widths[prop]))
end
table.insert(lines, table.concat(header_parts, " | "))
-- Add separator
local separator_parts = {"-", string.rep("-", 20)}
for _, prop in ipairs(include_properties) do
table.insert(separator_parts, string.rep("-", column_widths[prop]))
end
table.insert(lines, table.concat(separator_parts, " | "))
table.insert(lines, "")
-- Add document rows
for i, doc in ipairs(documents) do
local row_parts = {tostring(i), M.truncate_path(doc.file_path, 20)}
for _, prop in ipairs(include_properties) do
local value = doc.properties and doc.properties[prop] or ""
local formatted_value = M.format_property_value(value, column_widths[prop])
table.insert(row_parts, formatted_value)
end
table.insert(lines, table.concat(row_parts, " | "))
end
table.insert(lines, "")
return lines
end
-- Calculate column widths for table
function M.calculate_column_widths(documents, properties, max_width)
local widths = {}
-- Set minimum widths based on property names
for _, prop in ipairs(properties) do
widths[prop] = math.max(#prop, 10)
end
-- Adjust based on content
for _, doc in ipairs(documents) do
for _, prop in ipairs(properties) do
local value = doc.properties and doc.properties[prop] or ""
local formatted = tostring(value)
widths[prop] = math.max(widths[prop], #formatted)
end
end
-- Limit maximum width
local total_min_width = 30 -- # + File columns
local available_width = max_width - total_min_width
if #properties > 0 then
local per_column = math.floor(available_width / #properties)
for _, prop in ipairs(properties) do
widths[prop] = math.min(widths[prop], per_column)
end
end
return widths
end
-- Format column header
function M.format_column_header(property, width)
local formatted = property:gsub("_", " "):gsub("(%a)([%w_]*)", function(first, rest)
return first:upper() .. rest:lower()
end)
return M.pad_right(formatted, width)
end
-- Format property value for table
function M.format_property_value(value, width)
if not value then
return M.pad_right("", width)
end
local formatted = tostring(value)
-- Truncate if too long
if #formatted > width then
formatted = formatted:sub(1, width - 3) .. "..."
end
return M.pad_right(formatted, width)
end
-- Truncate file path
function M.truncate_path(path, max_length)
if #path <= max_length then
return M.pad_right(path, max_length)
end
local filename = vim.fn.fnamemodify(path, ":t")
local dirname = vim.fn.fnamemodify(path, ":h")
if #filename + 3 <= max_length then
local dir_length = max_length - #filename - 3
local truncated_dir = dirname:sub(-dir_length)
return M.pad_right("..." .. truncated_dir .. "/" .. filename, max_length)
else
return M.pad_right("..." .. filename:sub(-(max_length - 3)), max_length)
end
end
-- Pad string to specified width
function M.pad_right(str, width)
return str .. string.rep(" ", width - #str)
end
-- Add help section
function M.add_help_section(lines, options)
if options.show_help == false then
return lines
end
table.insert(lines, "Help:")
table.insert(lines, " <Enter> - Open document under cursor")
table.insert(lines, " o - Open document in new tab")
table.insert(lines, " e - Edit document properties")
table.insert(lines, " s - Save query")
table.insert(lines, " r - Refresh results")
table.insert(lines, " q - Close this view")
table.insert(lines, "")
table.insert(lines, "Press ? for more help")
return lines
end
-- Create window for buffer
function M.create_query_window(buffer_id, options)
local window_config = options.window or {}
-- Default window configuration
local default_config = {
relative = "editor",
width = math.min(120, vim.api.nvim_get_option_value("columns", {})),
height = math.min(30, vim.api.nvim_get_option_value("lines", {}) - 5),
row = 1,
col = 1,
border = "rounded",
style = "minimal",
title = " Query Results ",
title_pos = "center"
}
-- Merge with user config
local final_config = vim.tbl_deep_extend("force", default_config, window_config)
local window_id = vim.api.nvim_open_win(buffer_id, true, final_config)
if not window_id then
return nil, "Failed to create window"
end
-- Set window options
vim.api.nvim_win_set_option(window_id, "wrap", false)
vim.api.nvim_win_set_option(window_id, "cursorline", true)
vim.api.nvim_win_set_option(window_id, "number", false)
vim.api.nvim_win_set_option(window_id, "relativenumber", false)
vim.api.nvim_win_set_option(window_id, "signcolumn", "no")
return window_id
end
-- Setup buffer mappings
function M.setup_buffer_mappings(buffer_id, options)
local mappings = options.mappings or M.get_default_mappings()
for key, action in pairs(mappings) do
local mode = action.mode or "n"
local opts = {
buffer = buffer_id,
noremap = true,
silent = true,
nowait = true
}
if type(action.callback) == "string" then
vim.keymap.set(mode, key, action.callback, opts)
elseif type(action.callback) == "function" then
vim.keymap.set(mode, key, action.callback, opts)
end
end
return mappings
end
-- Get default mappings
function M.get_default_mappings()
return {
["<CR>"] = {
callback = function()
require('notex.ui.buffer').handle_enter_key()
end,
description = "Open document"
},
["o"] = {
callback = function()
require('notex.ui.buffer').handle_open_tab()
end,
description = "Open in new tab"
},
["e"] = {
callback = function()
require('notex.ui.buffer').handle_edit_mode()
end,
description = "Edit properties"
},
["s"] = {
callback = function()
require('notex.ui.buffer').handle_save_query()
end,
description = "Save query"
},
["r"] = {
callback = function()
require('notex.ui.buffer').handle_refresh()
end,
description = "Refresh results"
},
["q"] = {
callback = function()
require('notex.ui.buffer').handle_close_buffer()
end,
description = "Close buffer"
},
["?"] = {
callback = function()
require('notex.ui.buffer').show_help()
end,
description = "Show help"
},
["<Esc>"] = {
callback = function()
require('notex.ui.buffer').handle_close_buffer()
end,
description = "Close buffer"
}
}
end
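-- Illustrative example: callers can override or extend these bindings via
-- options.mappings when creating the buffer (key and callback are hypothetical):
--   M.create_query_buffer(results, { mappings = {
--     ["<Tab>"] = { callback = function() print("next") end, description = "Next item" },
--   } })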
-- Handle Enter key - open document
function M.handle_enter_key()
local line = vim.api.nvim_get_current_line()
local cursor = vim.api.nvim_win_get_cursor(0)
local line_num = cursor[1]
-- Skip header lines
if line_num < 6 then
return
end
-- Extract file path from line
local file_path = M.extract_file_path_from_line(line)
if file_path and utils.file_exists(file_path) then
vim.cmd('edit ' .. vim.fn.fnameescape(file_path))
else
vim.notify("Cannot open file: " .. (file_path or "unknown"), vim.log.levels.WARN)
end
end
-- Handle open in new tab
function M.handle_open_tab()
local line = vim.api.nvim_get_current_line()
local file_path = M.extract_file_path_from_line(line)
if file_path and utils.file_exists(file_path) then
vim.cmd('tabedit ' .. vim.fn.fnameescape(file_path))
else
vim.notify("Cannot open file: " .. (file_path or "unknown"), vim.log.levels.WARN)
end
end
-- Handle edit mode
function M.handle_edit_mode()
local buffer = vim.api.nvim_get_current_buf()
vim.api.nvim_buf_set_option(buffer, "modifiable", true)
vim.notify("Edit mode enabled - press <Esc> to save and exit", vim.log.levels.INFO)
end
-- Handle save query
function M.handle_save_query()
local config = buffer_configs[vim.api.nvim_get_current_buf()]
if not config or not config.query_results then
return
end
vim.ui.input({ prompt = "Query name: " }, function(query_name)
if query_name and query_name ~= "" then
local query_engine = require('notex.query')
local result = query_engine.save_query(query_name, config.query_results.query_string)
if result.success then
vim.notify("Query saved: " .. query_name, vim.log.levels.INFO)
else
vim.notify("Failed to save query: " .. result.error, vim.log.levels.ERROR)
end
end
end)
end
-- Handle refresh
function M.handle_refresh()
local config = buffer_configs[vim.api.nvim_get_current_buf()]
if not config or not config.query_results or not config.query_results.query_string then
return
end
-- Re-execute query
local query_engine = require('notex.query')
local result = query_engine.execute_query(config.query_results.query_string)
if result.success then
-- Update buffer content
local lines, _ = M.generate_buffer_content(result, config.options)
vim.api.nvim_buf_set_lines(vim.api.nvim_get_current_buf(), 0, -1, false, lines)
-- Update config
config.query_results = result
vim.notify("Query refreshed", vim.log.levels.INFO)
else
vim.notify("Failed to refresh query: " .. table.concat(result.errors, ", "), vim.log.levels.ERROR)
end
end
-- Handle close buffer
function M.handle_close_buffer()
local buffer = vim.api.nvim_get_current_buf()
local config = active_buffers[buffer]
if config and config.window_id then
vim.api.nvim_win_close(config.window_id, true)
else
vim.cmd('bdelete!')
end
end
-- Show help
function M.show_help()
local help_content = [[
Notex Query Results Help:
Navigation:
<Enter> - Open document under cursor
o - Open document in new tab
q - Close this view
<Esc> - Close this view
Actions:
e - Enable edit mode for modifying results
s - Save current query for reuse
r - Refresh query results
Other:
? - Show this help
j/k - Move up/down
gg/G - Go to top/bottom
/pattern - Search in results
Press any key to close this help
]]
local buf = vim.api.nvim_create_buf(false, true)
vim.api.nvim_buf_set_lines(buf, 0, -1, false, vim.split(help_content, "\n"))
vim.api.nvim_buf_set_option(buf, "filetype", "help")
local win_id = vim.api.nvim_open_win(buf, true, {
relative = "editor",
width = 60,
height = 20,
row = math.floor((vim.api.nvim_get_option_value("lines", {}) - 20) / 2),
col = math.floor((vim.api.nvim_get_option_value("columns", {}) - 60) / 2),
border = "rounded",
style = "minimal",
title = " Help "
})
vim.api.nvim_win_set_option(win_id, "wrap", true)
-- Close help on any key
vim.api.nvim_create_autocmd("CursorMoved,WinLeave", {
buffer = buf,
once = true,
callback = function()
vim.api.nvim_win_close(win_id, true)
end
})
end
-- Extract file path from line
function M.extract_file_path_from_line(line)
-- Match file path in table row
local match = line:match("^%s*%d+%s+|?%s*([^|]+)")
if match then
-- Clean up the path (Lua strings have no trim method; use vim.trim)
local path = vim.trim(match)
return path
end
return nil
end
-- Get active buffer configurations
function M.get_active_buffers()
local configs = {}
for buffer_id, config in pairs(active_buffers) do
if vim.api.nvim_buf_is_valid(buffer_id) then
configs[buffer_id] = config
else
-- Clean up invalid buffers
active_buffers[buffer_id] = nil
buffer_configs[buffer_id] = nil
end
end
return configs
end
-- Clean up inactive buffers
function M.cleanup_buffers()
local to_remove = {}
for buffer_id, config in pairs(active_buffers) do
if not vim.api.nvim_buf_is_valid(buffer_id) then
table.insert(to_remove, buffer_id)
end
end
for _, buffer_id in ipairs(to_remove) do
active_buffers[buffer_id] = nil
buffer_configs[buffer_id] = nil
end
end
return M
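For illustration, a minimal usage sketch of this module's row parser (assuming the module loads as `notex.ui.buffer` inside Neovim, and that `extract_file_path_from_line` trims surrounding whitespace as intended; the row text is hypothetical):

```lua
local buffer = require('notex.ui.buffer')
-- A table row as rendered in the query results view: index, pipe, path, pipe
local path = buffer.extract_file_path_from_line("  1  | notes/todo.md | done")
-- The pattern strips the leading index and pipe and captures up to the next
-- pipe, so path should be "notes/todo.md" once whitespace is trimmed
```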

lua/notex/ui/editor.lua (new file)
@@ -0,0 +1,572 @@
-- Inline editing interface module
local M = {}
local buffer_manager = require('notex.ui.buffer')
local database = require('notex.database.schema')
local parser = require('notex.parser')
local utils = require('notex.utils')
-- Editor state
local active_editors = {}
-- Start editing document properties
function M.start_edit_mode(buffer_id, line_number, column_number)
local config = buffer_manager.get_active_buffers()[buffer_id]
if not config then
return false, "Buffer not found"
end
-- Get document at cursor position
local doc_info = M.get_document_at_position(buffer_id, line_number, column_number)
if not doc_info then
return false, "No document found at cursor position"
end
-- Parse document to get current properties
local parse_result, parse_err = parser.parse_document(doc_info.file_path)
if not parse_result then
return false, "Failed to parse document: " .. parse_err
end
-- Create editor session
local editor_id = utils.generate_id()
local editor_session = {
id = editor_id,
buffer_id = buffer_id,
document_id = doc_info.document_id,
file_path = doc_info.file_path,
original_properties = vim.deepcopy(parse_result.properties),
current_properties = vim.deepcopy(parse_result.properties),
parse_result = parse_result,
created_at = os.time(),
modified = false
}
active_editors[editor_id] = editor_session
-- Switch buffer to editable mode
vim.api.nvim_buf_set_option(buffer_id, "modifiable", true)
vim.api.nvim_buf_set_option(buffer_id, "modified", false)
-- Update buffer content for editing
M.update_buffer_for_editing(buffer_id, editor_session)
-- Setup editor-specific mappings
M.setup_editor_mappings(buffer_id, editor_id)
utils.log("INFO", "Started edit mode for document", {
document_id = doc_info.document_id,
file_path = doc_info.file_path
})
return true, editor_id
end
-- Get document at position
function M.get_document_at_position(buffer_id, line_number, column_number)
local config = buffer_manager.get_active_buffers()[buffer_id]
if not config or not config.query_results.documents then
return nil
end
-- Simple mapping: line numbers correspond to document indices (accounting for headers)
local doc_index = line_number - 6 -- Account for header lines
if doc_index > 0 and doc_index <= #config.query_results.documents then
local doc = config.query_results.documents[doc_index]
return {
document_id = doc.id,
file_path = doc.file_path,
document = doc,
line_number = line_number,
column_number = column_number
}
end
return nil
end
-- Update buffer for editing
function M.update_buffer_for_editing(buffer_id, editor_session)
local lines = {}
-- Editor header
table.insert(lines, "Edit Mode - Document Properties")
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
table.insert(lines, string.format("File: %s", editor_session.file_path))
table.insert(lines, string.format("Modified: %s", editor_session.modified and "Yes" or "No"))
table.insert(lines, "")
table.insert(lines, "Properties (edit values, press <Enter> to save):")
table.insert(lines, "")
-- Property list
if editor_session.current_properties then
local sorted_props = {}
for key, value in pairs(editor_session.current_properties) do
table.insert(sorted_props, {key = key, value = value})
end
table.sort(sorted_props, function(a, b) return a.key < b.key end)
for i, prop in ipairs(sorted_props) do
local line = string.format("%-20s = %s", prop.key .. ":", tostring(prop.value))
table.insert(lines, line)
end
end
table.insert(lines, "")
-- Editor help
table.insert(lines, "Editor Commands:")
table.insert(lines, " <Enter> - Save changes")
table.insert(lines, " <Esc> - Cancel changes")
table.insert(lines, " a - Add new property")
table.insert(lines, " d - Delete property at cursor")
table.insert(lines, " u - Undo changes")
table.insert(lines, "")
-- Set buffer content
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
-- Move cursor to the first property line (properties start after the 8 header lines)
vim.api.nvim_win_set_cursor(0, {9, 0})
end
-- Setup editor mappings
function M.setup_editor_mappings(buffer_id, editor_id)
local opts = {
buffer = buffer_id,
noremap = true,
silent = true
}
-- Save and exit
vim.keymap.set("n", "<Enter>", function()
M.save_and_exit_edit_mode(buffer_id, editor_id)
end, opts)
vim.keymap.set("i", "<Enter>", function()
M.save_and_exit_edit_mode(buffer_id, editor_id)
end, opts)
-- Cancel editing
vim.keymap.set("n", "<Esc>", function()
M.cancel_edit_mode(buffer_id, editor_id)
end, opts)
vim.keymap.set("i", "<Esc>", function()
M.cancel_edit_mode(buffer_id, editor_id)
end, opts)
-- Add new property
vim.keymap.set("n", "a", function()
M.add_new_property(buffer_id, editor_id)
end, opts)
-- Delete property
vim.keymap.set("n", "d", function()
M.delete_property_at_cursor(buffer_id, editor_id)
end, opts)
-- Undo changes
vim.keymap.set("n", "u", function()
M.undo_changes(buffer_id, editor_id)
end, opts)
end
-- Save and exit edit mode
function M.save_and_exit_edit_mode(buffer_id, editor_id)
local editor_session = active_editors[editor_id]
if not editor_session then
return
end
-- Parse edited properties from buffer
local edited_properties = M.parse_properties_from_buffer(buffer_id)
if not edited_properties then
vim.notify("Failed to parse edited properties", vim.log.levels.ERROR)
return
end
-- Check if anything changed
local changes = M.detect_property_changes(editor_session.original_properties, edited_properties)
if #changes == 0 then
vim.notify("No changes detected", vim.log.levels.INFO)
M.exit_edit_mode(buffer_id, editor_id)
return
end
-- Update database
local ok, result = M.update_document_properties(editor_session.document_id, changes)
if not ok then
vim.notify("Failed to update properties: " .. result, vim.log.levels.ERROR)
return
end
-- Update file on disk
local file_ok, file_result = M.update_yaml_file(editor_session.file_path, edited_properties)
if not file_ok then
vim.notify("Failed to update file: " .. file_result, vim.log.levels.ERROR)
return
end
vim.notify("Properties updated successfully", vim.log.levels.INFO)
-- Exit edit mode and refresh view
M.exit_edit_mode(buffer_id, editor_id)
-- Refresh the query results
if result and result.refresh_query then
buffer_manager.handle_refresh()
end
end
-- Cancel edit mode
function M.cancel_edit_mode(buffer_id, editor_id)
local editor_session = active_editors[editor_id]
if not editor_session then
return
end
if editor_session.modified then
vim.ui.select({"Discard changes", "Continue editing"}, {
prompt = "You have unsaved changes. What would you like to do?"
}, function(choice)
if choice == "Discard changes" then
M.exit_edit_mode(buffer_id, editor_id)
end
end)
else
M.exit_edit_mode(buffer_id, editor_id)
end
end
-- Exit edit mode
function M.exit_edit_mode(buffer_id, editor_id)
local editor_session = active_editors[editor_id]
if not editor_session then
return
end
-- Clean up editor session
active_editors[editor_id] = nil
-- Restore buffer to view mode
local config = buffer_manager.get_active_buffers()[buffer_id]
if config then
-- Regenerate original view content
local lines, _ = buffer_manager.generate_buffer_content(config.query_results, config.options)
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
vim.api.nvim_buf_set_option(buffer_id, "modifiable", false)
vim.api.nvim_buf_set_option(buffer_id, "modified", false)
-- Restore original mappings
buffer_manager.setup_buffer_mappings(buffer_id, config.options)
end
utils.log("INFO", "Exited edit mode", {
document_id = editor_session.document_id,
file_path = editor_session.file_path
})
end
-- Parse properties from buffer
function M.parse_properties_from_buffer(buffer_id)
local lines = vim.api.nvim_buf_get_lines(buffer_id, 0, -1, false)
local properties = {}
-- Find property lines (skip header)
local in_properties = false
for _, line in ipairs(lines) do
if line:match("^Properties %(edit values") then
in_properties = true
elseif in_properties and vim.trim(line) == "" then
break
elseif in_properties then
-- Property lines are rendered as "key:                 = value"
local key, value = line:match("^%s*([%w_%-%.]+)%s*:%s*=%s*(.+)$")
if key and value then
local clean_key = vim.trim(key)
local clean_value = vim.trim(value)
-- Parse value (handle quotes, numbers, booleans)
local parsed_value = M.parse_property_value(clean_value)
properties[clean_key] = parsed_value
end
end
end
return properties
end
-- Parse property value
function M.parse_property_value(value_string)
-- Handle quoted strings
local quoted = value_string:match('^"(.*)"$')
if quoted then
return quoted
end
quoted = value_string:match("^'(.*)'$")
if quoted then
return quoted
end
-- Handle numbers
local number = tonumber(value_string)
if number then
return number
end
-- Handle booleans
local lower = value_string:lower()
if lower == "true" then
return true
elseif lower == "false" then
return false
end
-- Handle arrays (simple format)
if value_string:match("^%[.+]$") then
local array_content = value_string:sub(2, -2)
local items = {}
for item in array_content:gsub("%s", ""):gmatch("[^,]+") do
table.insert(items, item:gsub("^['\"](.*)['\"]$", "%1"))
end
return items
end
-- Default to string
return value_string
end
-- Detect property changes
function M.detect_property_changes(original, edited)
local changes = {}
-- Find modified and added properties
for key, value in pairs(edited) do
if not original[key] then
table.insert(changes, {
type = "added",
key = key,
new_value = value
})
elseif not vim.deep_equal(original[key], value) then
table.insert(changes, {
type = "modified",
key = key,
old_value = original[key],
new_value = value
})
end
end
-- Find deleted properties
for key, value in pairs(original) do
if not edited[key] then
table.insert(changes, {
type = "deleted",
key = key,
old_value = value
})
end
end
return changes
end
-- Update document properties in database
function M.update_document_properties(document_id, changes)
local updated_count = 0
for _, change in ipairs(changes) do
if change.type == "deleted" then
-- Delete property
local ok, err = database.properties.delete_by_key(document_id, change.key)
if not ok then
return false, "Failed to delete property " .. change.key .. ": " .. err
end
else
-- Add or update property
local prop_data = {
id = utils.generate_id(),
document_id = document_id,
key = change.key,
value = tostring(change.new_value),
value_type = type(change.new_value),
created_at = os.time(),
updated_at = os.time()
}
-- Check if property already exists
local existing_prop, err = database.properties.get_by_key(document_id, change.key)
if err then
return false, "Failed to check existing property: " .. err
end
local ok
if existing_prop then
prop_data.id = existing_prop.id
ok, err = database.properties.update(prop_data)
else
ok, err = database.properties.create(prop_data)
end
if not ok then
return false, "Failed to update property " .. change.key .. ": " .. err
end
end
updated_count = updated_count + 1
end
return true, {
updated_count = updated_count,
refresh_query = true
}
end
-- Update YAML file
function M.update_yaml_file(file_path, properties)
-- Read current file
local content, err = utils.read_file(file_path)
if not content then
return false, "Failed to read file: " .. err
end
-- Parse YAML and content
local yaml_content, yaml_err = parser.yaml_parser.extract_yaml_header(content)
if not yaml_content then
return false, "Failed to extract YAML: " .. yaml_err
end
local body_content = parser.yaml_parser.remove_yaml_header(content)
-- Parse current YAML
local current_yaml, parse_err = parser.yaml_parser.parse_yaml(yaml_content)
if not current_yaml then
return false, "Failed to parse YAML: " .. parse_err
end
-- Update YAML data
for key, value in pairs(properties) do
current_yaml[key] = value
end
-- Generate new YAML header
local new_yaml_content = M.generate_yaml_content(current_yaml)
-- Combine with body content
local new_content = new_yaml_content .. "\n---\n" .. body_content
-- Write file
local write_ok, write_err = utils.write_file(file_path, new_content)
if not write_ok then
return false, "Failed to write file: " .. write_err
end
return true, "File updated successfully"
end
-- Generate YAML content from data
function M.generate_yaml_content(data)
local lines = {"---"}
-- Sort keys for consistent output
local sorted_keys = {}
for key, _ in pairs(data) do
table.insert(sorted_keys, key)
end
table.sort(sorted_keys)
for _, key in ipairs(sorted_keys) do
local value = data[key]
local line = M.format_yaml_value(key, value)
table.insert(lines, line)
end
return table.concat(lines, "\n")
end
-- Format YAML key-value pair
function M.format_yaml_value(key, value)
if type(value) == "string" then
return string.format('%s: "%s"', key, value)
elseif type(value) == "number" then
return string.format("%s: %s", key, tostring(value))
elseif type(value) == "boolean" then
return string.format("%s: %s", key, tostring(value))
elseif type(value) == "table" then
return string.format("%s: %s", key, vim.json.encode(value))
else
return string.format('%s: "%s"', key, tostring(value))
end
end
-- Add new property
function M.add_new_property(buffer_id, editor_id)
vim.ui.input({ prompt = "Property name: " }, function(property_name)
if property_name and property_name ~= "" then
vim.ui.input({ prompt = "Property value: " }, function(property_value)
if property_value and property_value ~= "" then
local editor_session = active_editors[editor_id]
if editor_session then
editor_session.current_properties[property_name] = property_value
editor_session.modified = true
M.update_buffer_for_editing(buffer_id, editor_session)
end
end
end)
end
end)
end
-- Delete property at cursor
function M.delete_property_at_cursor(buffer_id, editor_id)
local cursor = vim.api.nvim_win_get_cursor(0)
local line = vim.api.nvim_buf_get_lines(buffer_id, cursor[1] - 1, cursor[1], false)[1]
local key = line:match("^%s*([%w_%-%.]+)%s*:")
if key then
local clean_key = vim.trim(key)
local editor_session = active_editors[editor_id]
if editor_session and editor_session.current_properties[clean_key] then
editor_session.current_properties[clean_key] = nil
editor_session.modified = true
M.update_buffer_for_editing(buffer_id, editor_session)
vim.notify("Deleted property: " .. clean_key, vim.log.levels.INFO)
end
end
end
-- Undo changes
function M.undo_changes(buffer_id, editor_id)
local editor_session = active_editors[editor_id]
if editor_session then
editor_session.current_properties = vim.deepcopy(editor_session.original_properties)
editor_session.modified = false
M.update_buffer_for_editing(buffer_id, editor_session)
vim.notify("Changes undone", vim.log.levels.INFO)
end
end
-- Get active editors
function M.get_active_editors()
local active = {}
for editor_id, session in pairs(active_editors) do
if vim.api.nvim_buf_is_valid(session.buffer_id) then
active[editor_id] = session
else
-- Clean up invalid sessions
active_editors[editor_id] = nil
end
end
return active
end
return M
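As a rough sketch of how the value coercion above behaves (assuming the module loads as `notex.ui.editor` inside Neovim, and that quote-stripping in the array branch works as intended):

```lua
local editor = require('notex.ui.editor')
-- Quoted strings keep their contents, numbers and booleans are coerced,
-- and bracketed lists become Lua tables
print(editor.parse_property_value('42'))       -- 42 (number)
print(editor.parse_property_value('true'))     -- true (boolean)
print(editor.parse_property_value('"done"'))   -- done (string)
local tags = editor.parse_property_value('["a", "b"]')
-- tags should be the table { "a", "b" }
```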

lua/notex/ui/init.lua (new file)
@@ -0,0 +1,480 @@
-- UI coordination module
local M = {}
local buffer_manager = require('notex.ui.buffer')
local view = require('notex.ui.view')
local editor = require('notex.ui.editor')
local query_engine = require('notex.query')
local utils = require('notex.utils')
-- UI state
local ui_state = {
active_views = {},
default_options = {
max_width = 120,
max_height = 30,
show_help = true,
border = "rounded"
}
}
-- Show query results
function M.show_query_results(query_results, options)
options = options or {}
options = vim.tbl_deep_extend("force", ui_state.default_options, options)
-- Validate query results
if not query_results.success then
M.show_error("Query Error", query_results.errors, options)
return nil, query_results.errors
end
-- Create view
local view_config = view.create_query_view(query_results, options)
if not view_config then
return nil, "Failed to create query view"
end
-- Store active view
ui_state.active_views[view_config.buffer_id] = view_config
-- Setup auto-cleanup
M.setup_view_cleanup(view_config)
utils.log("INFO", "Created query view", {
buffer_id = view_config.buffer_id,
document_count = #query_results.documents,
view_type = options.view_type or "table"
})
return view_config
end
-- Show error message
function M.show_error(title, errors, options)
options = options or {}
local error_lines = {title, string.rep("=", #title), ""}
if type(errors) == "string" then
table.insert(error_lines, errors)
elseif type(errors) == "table" then
for _, err in ipairs(errors) do
table.insert(error_lines, "• " .. err)
end
end
table.insert(error_lines, "")
table.insert(error_lines, "Press any key to close")
local buffer = vim.api.nvim_create_buf(false, true)
vim.api.nvim_buf_set_lines(buffer, 0, -1, false, error_lines)
vim.api.nvim_buf_set_option(buffer, "filetype", "text")
vim.api.nvim_buf_set_name(buffer, "notex://error")
local window = vim.api.nvim_open_win(buffer, true, {
relative = "editor",
width = math.min(80, vim.api.nvim_get_option_value("columns", {})),
height = math.min(20, vim.api.nvim_get_option_value("lines", {})),
row = math.floor((vim.api.nvim_get_option_value("lines", {}) - 20) / 2),
col = math.floor((vim.api.nvim_get_option_value("columns", {}) - 80) / 2),
border = "rounded",
style = "minimal",
title = " Error "
})
-- Close on any key
vim.api.nvim_create_autocmd({"CursorMoved", "WinLeave"}, {
buffer = buffer,
once = true,
callback = function()
vim.api.nvim_win_close(window, true)
end
})
return {
buffer_id = buffer,
window_id = window,
type = "error"
}
end
-- Show document details
function M.show_document_details(document_id, options)
options = options or {}
-- Get document details
local indexer = require('notex.index')
local doc_details, err = indexer.get_document_details(document_id)
if not doc_details then
M.show_error("Document Error", {err or "Failed to get document details"})
return nil
end
-- Create detail view
local buffer = vim.api.nvim_create_buf(false, true)
vim.api.nvim_buf_set_option(buffer, "filetype", "yaml")
vim.api.nvim_buf_set_name(buffer, "notex://document-details")
-- Generate detail content
local lines = M.generate_document_details(doc_details)
vim.api.nvim_buf_set_lines(buffer, 0, -1, false, lines)
-- Create window
local window = vim.api.nvim_open_win(buffer, true, {
relative = "editor",
width = math.min(100, vim.api.nvim_get_option_value("columns", {})),
height = math.min(40, vim.api.nvim_get_option_value("lines", {})),
row = 1,
col = 1,
border = "rounded",
style = "minimal",
title = " Document Details "
})
-- Setup mappings
local mappings = {
["<CR>"] = {
callback = function()
vim.cmd('edit ' .. doc_details.document.file_path)
vim.api.nvim_win_close(window, true)
end,
description = "Open document"
},
["e"] = {
callback = function()
editor.start_edit_mode(buffer, 1, 1)
end,
description = "Edit properties"
},
["q"] = {
callback = function()
vim.api.nvim_win_close(window, true)
end,
description = "Close"
}
}
for key, action in pairs(mappings) do
vim.keymap.set("n", key, action.callback, {
buffer = buffer,
noremap = true,
silent = true
})
end
return {
buffer_id = buffer,
window_id = window,
type = "document_details",
document_id = document_id
}
end
-- Generate document details content
function M.generate_document_details(doc_details)
local lines = {}
-- Header
table.insert(lines, "Document Details")
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
-- File information
table.insert(lines, "## File Information")
table.insert(lines, string.format("Path: %s", doc_details.document.file_path))
table.insert(lines, string.format("Created: %s", os.date("%Y-%m-%d %H:%M:%S", doc_details.document.created_at)))
table.insert(lines, string.format("Modified: %s", os.date("%Y-%m-%d %H:%M:%S", doc_details.document.updated_at)))
table.insert(lines, string.format("Content Hash: %s", doc_details.document.content_hash))
table.insert(lines, string.format("File Exists: %s", doc_details.file_exists and "Yes" or "No"))
table.insert(lines, "")
-- Properties
table.insert(lines, "## Properties")
if doc_details.properties and #doc_details.properties > 0 then
for _, prop in ipairs(doc_details.properties) do
local value_str = tostring(prop.value)
if #value_str > 100 then
value_str = value_str:sub(1, 97) .. "..."
end
table.insert(lines, string.format("%s: %s (%s)", prop.key, value_str, prop.value_type))
end
else
table.insert(lines, "No properties found")
end
table.insert(lines, "")
-- Parse information
if doc_details.parse_result and doc_details.parse_result.success then
table.insert(lines, "## Analysis")
local analysis = doc_details.parse_result.markdown_analysis
table.insert(lines, string.format("Word Count: %d", analysis.word_count))
table.insert(lines, string.format("Character Count: %d", analysis.character_count))
table.insert(lines, string.format("Line Count: %d", analysis.line_count))
table.insert(lines, string.format("Reading Time: %d minutes", analysis.reading_time_minutes))
if #analysis.headings > 0 then
table.insert(lines, "")
table.insert(lines, "### Headings")
for _, heading in ipairs(analysis.headings) do
local indent = string.rep(" ", heading.level)
table.insert(lines, indent .. heading.title)
end
end
if #analysis.links > 0 then
table.insert(lines, "")
table.insert(lines, "### Links")
for i, link in ipairs(analysis.links) do
if i <= 5 then -- Limit to 5 links
table.insert(lines, string.format("• %s → %s", link.text, link.url))
end
end
if #analysis.links > 5 then
table.insert(lines, string.format("... and %d more", #analysis.links - 5))
end
end
end
table.insert(lines, "")
table.insert(lines, "Press <Enter> to open document, e to edit, q to close")
return lines
end
-- Switch view type
function M.switch_view_type(new_view_type)
local current_buffer = vim.api.nvim_get_current_buf()
local success, result = view.switch_view_type(current_buffer, new_view_type)
if success then
vim.notify("Switched to " .. new_view_type .. " view", vim.log.levels.INFO)
else
vim.notify("Failed to switch view: " .. result, vim.log.levels.ERROR)
end
end
-- Show view type menu
function M.show_view_type_menu()
local view_types = view.get_available_view_types()
local choices = {}
for _, view_type in ipairs(view_types) do
table.insert(choices, string.format("%s %s - %s", view_type.icon, view_type.name, view_type.description))
end
vim.ui.select(choices, {
prompt = "Select view type:",
format_item = function(item)
return item
end
}, function(choice)
if choice then
-- Extract view type name from choice ("icon name - description")
local view_type = choice:match("^%S+%s+(%w+)%s+%-")
if view_type then
M.switch_view_type(view_type)
end
end
end)
end
-- Export view
function M.export_view(format)
local current_buffer = vim.api.nvim_get_current_buf()
local success, result = view.export_view(current_buffer, format)
if success then
-- Ask user for file location
local default_filename = "notex_export." .. format
vim.ui.input({ prompt = "Export to file: ", default = default_filename }, function(filename)
if filename and filename ~= "" then
local write_ok, write_err = utils.write_file(filename, result)
if write_ok then
vim.notify("Exported to " .. filename, vim.log.levels.INFO)
else
vim.notify("Failed to export: " .. write_err, vim.log.levels.ERROR)
end
end
end)
else
vim.notify("Failed to export: " .. result, vim.log.levels.ERROR)
end
end
-- Show export menu
function M.show_export_menu()
local formats = {
{name = "Markdown", extension = "md", description = "Markdown format"},
{name = "CSV", extension = "csv", description = "Comma-separated values"},
{name = "JSON", extension = "json", description = "JSON format"}
}
local choices = {}
for _, format in ipairs(formats) do
table.insert(choices, string.format("%s (%s) - %s", format.name, format.extension, format.description))
end
vim.ui.select(choices, {
prompt = "Select export format:"
}, function(choice)
if choice then
local format_name = choice:match("(%w+)%s+%(")
if format_name then
M.export_view(format_name:lower())
end
end
end)
end
-- Show query prompt
function M.show_query_prompt(initial_query)
initial_query = initial_query or ""
vim.ui.input({
prompt = "Query: ",
default = initial_query,
completion = "customlist,v:lua.require'notex.ui'.get_query_completions"
}, function(query_string)
if query_string and query_string ~= "" then
M.execute_query_and_show_results(query_string)
end
end)
end
-- Execute query and show results
function M.execute_query_and_show_results(query_string, options)
options = options or {}
vim.notify("Executing query...", vim.log.levels.INFO)
-- Execute query
local result = query_engine.execute_query(query_string, options)
if result.success then
-- Show results
local view_config = M.show_query_results(result, options)
if view_config then
utils.log("INFO", "Query executed successfully", {
document_count = #result.documents,
execution_time_ms = result.execution_time_ms
})
end
else
M.show_error("Query Failed", result.errors, options)
end
end
-- Get query completions
function M.get_query_completions()
local suggestions = query_engine.get_suggestions("", 0)
local completions = {}
-- Property suggestions
for _, prop in ipairs(suggestions.properties or {}) do
table.insert(completions, prop .. ":")
end
-- Value suggestions for common properties
for prop, values in pairs(suggestions.values or {}) do
for _, value in ipairs(values) do
table.insert(completions, prop .. ' = "' .. value .. '"')
end
end
-- Operator suggestions
for _, op in ipairs(suggestions.operators or {}) do
table.insert(completions, "WHERE " .. op)
end
table.sort(completions)
return completions
end
-- Setup view cleanup
function M.setup_view_cleanup(view_config)
local group = vim.api.nvim_create_augroup("NotexViewCleanup_" .. view_config.buffer_id, {clear = true})
vim.api.nvim_create_autocmd({"BufLeave", "WinLeave"}, {
buffer = view_config.buffer_id,
once = true,
callback = function()
-- Give user time to interact, then cleanup
vim.defer_fn(function()
if vim.api.nvim_buf_is_valid(view_config.buffer_id) then
ui_state.active_views[view_config.buffer_id] = nil
end
end, 1000)
end
})
end
-- Get UI status
function M.get_ui_status()
local active_views = buffer_manager.get_active_buffers()
local active_editors = editor.get_active_editors()
return {
active_views_count = vim.tbl_count(active_views),
active_editors_count = vim.tbl_count(active_editors),
total_windows = vim.tbl_count(vim.api.nvim_list_wins()),
current_buffer = vim.api.nvim_get_current_buf(),
current_window = vim.api.nvim_get_current_win()
}
end
-- Cleanup all UI components
function M.cleanup_all()
-- Close all notex buffers
local buffers = vim.api.nvim_list_bufs()
for _, buf in ipairs(buffers) do
local buf_name = vim.api.nvim_buf_get_name(buf)
if buf_name:match("^notex://") then
if vim.api.nvim_buf_is_valid(buf) then
vim.api.nvim_buf_delete(buf, {force = true})
end
end
end
-- Clear state
ui_state.active_views = {}
buffer_manager.cleanup_buffers()
utils.log("INFO", "Cleaned up all UI components")
end
-- Initialize UI system
function M.init()
-- Set up global keymaps if not already set
local global_keymaps = {
["<leader>nq"] = {
callback = function() M.show_query_prompt() end,
description = "New query"
},
["<leader>nv"] = {
callback = function() M.show_view_type_menu() end,
description = "Switch view type"
},
["<leader>ne"] = {
callback = function() M.show_export_menu() end,
description = "Export view"
}
}
for key, action in pairs(global_keymaps) do
if vim.fn.maparg(key, "n") == "" then -- only map if the key is not already taken
vim.keymap.set("n", key, action.callback, {
noremap = true,
silent = true,
desc = action.description
})
end
end
utils.log("INFO", "UI system initialized")
return true, "UI system initialized successfully"
end
return M
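A hypothetical end-to-end call into this coordination module (function names as defined above; the query string and options are illustrative):

```lua
local ui = require('notex.ui')
ui.init()
-- Prompt flow: show_query_prompt -> execute_query_and_show_results ->
-- show_query_results, which creates a view and registers it for cleanup
ui.execute_query_and_show_results('status = "active"', { view_type = "table" })
```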

lua/notex/ui/view.lua (new file)
@@ -0,0 +1,525 @@
-- Query result visualization module
local M = {}
local buffer_manager = require('notex.ui.buffer')
local utils = require('notex.utils')
-- View configurations
local view_configs = {}
-- Create query view
function M.create_query_view(query_results, options)
options = options or {}
local view_type = options.view_type or "table"
local view_config = {
view_type = view_type,
query_results = query_results,
options = options,
created_at = os.time()
}
-- Create buffer based on view type
if view_type == "table" then
return M.create_table_view(query_results, options)
elseif view_type == "cards" then
return M.create_cards_view(query_results, options)
elseif view_type == "list" then
return M.create_list_view(query_results, options)
elseif view_type == "tree" then
return M.create_tree_view(query_results, options)
else
return M.create_table_view(query_results, options)
end
end
-- Create table view
function M.create_table_view(query_results, options)
local table_options = vim.tbl_deep_extend("force", options, {
name = "notex://table-view",
view_type = "table",
include_properties = M.get_table_properties(query_results),
max_width = 120,
show_help = true
})
return buffer_manager.create_query_buffer(query_results, table_options)
end
-- Create cards view
function M.create_cards_view(query_results, options)
local cards_options = vim.tbl_deep_extend("force", options, {
name = "notex://cards-view",
view_type = "cards",
show_help = true,
wrap = true
})
local buffer_id = vim.api.nvim_create_buf(false, true)
buffer_manager.setup_buffer_options(buffer_id, cards_options)
local lines = M.generate_cards_content(query_results, cards_options)
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
local window_id = buffer_manager.create_query_window(buffer_id, cards_options)
local config = {
buffer_id = buffer_id,
window_id = window_id,
query_results = query_results,
options = cards_options,
created_at = os.time(),
mappings = buffer_manager.setup_buffer_mappings(buffer_id, cards_options)
}
return config
end
-- Create list view
function M.create_list_view(query_results, options)
local list_options = vim.tbl_deep_extend("force", options, {
name = "notex://list-view",
view_type = "list",
show_help = true
})
local buffer_id = vim.api.nvim_create_buf(false, true)
buffer_manager.setup_buffer_options(buffer_id, list_options)
local lines = M.generate_list_content(query_results, list_options)
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
local window_id = buffer_manager.create_query_window(buffer_id, list_options)
local config = {
buffer_id = buffer_id,
window_id = window_id,
query_results = query_results,
options = list_options,
created_at = os.time(),
mappings = buffer_manager.setup_buffer_mappings(buffer_id, list_options)
}
return config
end
-- Create tree view
function M.create_tree_view(query_results, options)
local tree_options = vim.tbl_deep_extend("force", options, {
name = "notex://tree-view",
view_type = "tree",
group_by = options.group_by or "status",
show_help = true
})
local buffer_id = vim.api.nvim_create_buf(false, true)
buffer_manager.setup_buffer_options(buffer_id, tree_options)
local lines = M.generate_tree_content(query_results, tree_options)
vim.api.nvim_buf_set_lines(buffer_id, 0, -1, false, lines)
local window_id = buffer_manager.create_query_window(buffer_id, tree_options)
local config = {
buffer_id = buffer_id,
window_id = window_id,
query_results = query_results,
options = tree_options,
created_at = os.time(),
mappings = buffer_manager.setup_buffer_mappings(buffer_id, tree_options)
}
return config
end
-- Get table properties
function M.get_table_properties(query_results)
if not query_results.documents or #query_results.documents == 0 then
return {"title", "status", "priority"}
end
-- Find most common properties
local property_counts = {}
for _, doc in ipairs(query_results.documents) do
if doc.properties then
for prop, _ in pairs(doc.properties) do
property_counts[prop] = (property_counts[prop] or 0) + 1
end
end
end
-- Sort by frequency
local sorted_props = {}
for prop, count in pairs(property_counts) do
table.insert(sorted_props, {property = prop, count = count})
end
table.sort(sorted_props, function(a, b) return a.count > b.count end)
-- Return top properties
local result = {}
for i, item in ipairs(sorted_props) do
if i > 6 then break end -- Limit to 6 columns
table.insert(result, item.property)
end
return result
end
-- Generate cards content
function M.generate_cards_content(query_results, options)
local lines = {}
-- Header
table.insert(lines, "Query Results - Card View")
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
-- Query info
if query_results.query_string then
table.insert(lines, "Query: " .. query_results.query_string)
table.insert(lines, "")
end
table.insert(lines, string.format("Found %d documents (%.2fms)",
query_results.total_count or 0,
query_results.execution_time_ms or 0))
table.insert(lines, "")
-- Document cards
if query_results.documents and #query_results.documents > 0 then
for i, doc in ipairs(query_results.documents) do
lines = M.add_document_card(lines, doc, i, options)
table.insert(lines, "")
end
else
table.insert(lines, "No documents found.")
table.insert(lines, "")
end
-- Help
lines = buffer_manager.add_help_section(lines, options)
return lines
end
-- Add document card
function M.add_document_card(lines, doc, index, options)
local max_width = options.max_width or 80
table.insert(lines, string.format("Document %d: %s", index, doc.properties and doc.properties.title or "Untitled"))
table.insert(lines, string.rep("-", max_width))
-- File path
table.insert(lines, "Path: " .. doc.file_path)
-- Properties
if doc.properties then
local sorted_props = {}
for key, value in pairs(doc.properties) do
table.insert(sorted_props, {key = key, value = value})
end
table.sort(sorted_props, function(a, b) return a.key < b.key end)
for _, prop in ipairs(sorted_props) do
if prop.key ~= "title" then
local formatted = string.format(" %s: %s", prop.key, tostring(prop.value))
if #formatted > max_width - 2 then
formatted = formatted:sub(1, max_width - 5) .. "..."
end
table.insert(lines, formatted)
end
end
end
-- Metadata
table.insert(lines, string.format(" Modified: %s", doc.updated_at and os.date("%Y-%m-%d %H:%M", doc.updated_at) or "unknown"))
return lines
end
-- Generate list content
function M.generate_list_content(query_results, options)
local lines = {}
-- Header
table.insert(lines, "Query Results - List View")
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
-- Query info
table.insert(lines, string.format("%d documents (%.2fms)",
query_results.total_count or 0,
query_results.execution_time_ms or 0))
table.insert(lines, "")
-- Document list
if query_results.documents and #query_results.documents > 0 then
for i, doc in ipairs(query_results.documents) do
local title = doc.properties and doc.properties.title or vim.fn.fnamemodify(doc.file_path, ":t")
local status = doc.properties and doc.properties.status or ""
local priority = doc.properties and doc.properties.priority or ""
local line = string.format("%3d. %-40s %-12s %-8s", i, M.truncate_string(title, 40), status, priority)
table.insert(lines, line)
end
else
table.insert(lines, "No documents found.")
end
table.insert(lines, "")
-- Help
lines = buffer_manager.add_help_section(lines, options)
return lines
end
-- Generate tree content
function M.generate_tree_content(query_results, options)
local lines = {}
local group_by = options.group_by or "status"
-- Header
table.insert(lines, string.format("Query Results - Tree View (Grouped by %s)", group_by))
table.insert(lines, string.rep("=", 50))
table.insert(lines, "")
-- Group documents
local groups = M.group_documents(query_results.documents, group_by)
-- Create tree structure, iterating groups in sorted order (pairs() order is undefined)
local group_names = {}
for group_name in pairs(groups) do
table.insert(group_names, group_name)
end
table.sort(group_names)
for _, group_name in ipairs(group_names) do
local group_docs = groups[group_name]
table.insert(lines, string.format("▼ %s (%d)", group_name, #group_docs))
for i, doc in ipairs(group_docs) do
local title = doc.properties and doc.properties.title or vim.fn.fnamemodify(doc.file_path, ":t")
local line = string.format(" ├─ %s", title)
if i == #group_docs then
line = string.format(" └─ %s", title)
end
table.insert(lines, line)
end
table.insert(lines, "")
end
-- Help
lines = buffer_manager.add_help_section(lines, options)
return lines
end
-- Group documents by property
function M.group_documents(documents, group_by)
local groups = {}
for _, doc in ipairs(documents) do
local group_value = "Unknown"
if doc.properties and doc.properties[group_by] then
group_value = tostring(doc.properties[group_by])
end
if not groups[group_value] then
groups[group_value] = {}
end
table.insert(groups[group_value], doc)
end
-- NOTE: a Lua table has no iteration order, so the groups cannot be returned
-- pre-sorted; callers must sort the keys themselves before iterating
return groups
end
-- Truncate string
function M.truncate_string(str, max_length)
if #str <= max_length then
return str
end
return str:sub(1, max_length - 3) .. "..."
end
-- Switch view type
function M.switch_view_type(buffer_id, new_view_type)
local config = require('notex.ui.buffer').get_active_buffers()[buffer_id]
if not config then
return false, "Buffer not found"
end
-- Close current view
if config.window_id then
vim.api.nvim_win_close(config.window_id, true)
end
-- Create new view
local new_options = vim.tbl_deep_extend("force", config.options, {
view_type = new_view_type
})
local new_config = M.create_query_view(config.query_results, new_options)
if new_config then
return true, "View switched to " .. new_view_type
else
return false, "Failed to create new view"
end
end
-- Get available view types
function M.get_available_view_types()
return {
{
name = "table",
description = "Tabular view with sortable columns",
icon = ""
},
{
name = "cards",
description = "Card-based view with detailed information",
icon = "📄"
},
{
name = "list",
description = "Compact list view",
icon = "📋"
},
{
name = "tree",
description = "Hierarchical view grouped by properties",
icon = "🌳"
}
}
end
-- Export view to different formats
function M.export_view(buffer_id, format)
local config = require('notex.ui.buffer').get_active_buffers()[buffer_id]
if not config then
return false, "Buffer not found"
end
format = format or "markdown"
if format == "markdown" then
return M.export_to_markdown(config.query_results, config.options)
elseif format == "csv" then
return M.export_to_csv(config.query_results, config.options)
elseif format == "json" then
return M.export_to_json(config.query_results, config.options)
else
return false, "Unsupported export format: " .. format
end
end
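-- Usage sketch (hypothetical buffer id; "markdown", "csv", and "json" are the
-- three supported formats). On failure the second return value is an error
-- message instead of content:
-- local ok, markdown = M.export_view(buf, "markdown")
-- local ok, csv = M.export_view(buf, "csv")
-- local ok, json = M.export_view(buf, "json")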
-- Export to markdown
function M.export_to_markdown(query_results, options)
local lines = {}
table.insert(lines, "# Query Results")
table.insert(lines, "")
if query_results.query_string then
table.insert(lines, "## Query")
table.insert(lines, "```")
table.insert(lines, query_results.query_string)
table.insert(lines, "```")
table.insert(lines, "")
end
table.insert(lines, string.format("**Found %d documents** (%.2fms)",
query_results.total_count or 0,
query_results.execution_time_ms or 0))
table.insert(lines, "")
if query_results.documents and #query_results.documents > 0 then
table.insert(lines, "## Documents")
table.insert(lines, "")
for i, doc in ipairs(query_results.documents) do
table.insert(lines, string.format("### %d. %s", i, doc.properties and doc.properties.title or "Untitled"))
table.insert(lines, "")
table.insert(lines, "**File:** `" .. doc.file_path .. "`")
table.insert(lines, "")
if doc.properties then
table.insert(lines, "**Properties:**")
for key, value in pairs(doc.properties) do
table.insert(lines, string.format("- **%s:** %s", key, tostring(value)))
end
table.insert(lines, "")
end
end
end
return true, table.concat(lines, "\n")
end
-- Export to CSV
function M.export_to_csv(query_results, options)
local lines = {}
-- Header
local headers = {"#", "File", "Title", "Status", "Priority", "Created", "Modified"}
table.insert(lines, table.concat(headers, ","))
-- Data rows
if query_results.documents then
for i, doc in ipairs(query_results.documents) do
local row = {
i,
doc.file_path,
doc.properties and doc.properties.title or "",
doc.properties and doc.properties.status or "",
doc.properties and doc.properties.priority or "",
doc.created_at and os.date("%Y-%m-%d", doc.created_at) or "",
doc.updated_at and os.date("%Y-%m-%d", doc.updated_at) or ""
}
-- Escape CSV values (some fields are numbers, so stringify first)
for j, value in ipairs(row) do
value = tostring(value)
if value:find('[,"]') then
value = '"' .. value:gsub('"', '""') .. '"'
end
row[j] = value
end
table.insert(lines, table.concat(row, ","))
end
end
return true, table.concat(lines, "\n")
end
-- Export to JSON
function M.export_to_json(query_results, options)
local export_data = {
query = query_results.query_string,
total_count = query_results.total_count,
execution_time_ms = query_results.execution_time_ms,
documents = query_results.documents,
exported_at = os.time(),
exported_by = "notex.nvim"
}
return true, vim.json.encode(export_data)
end
return M

lua/notex/utils/cache.lua (new file, 540 lines)
@@ -0,0 +1,540 @@
-- Caching system for performance optimization
local M = {}
-- Cache storage
local cache_storage = {
memory = {},
lru = {},
timed = {}
}
-- Cache configuration
local cache_config = {
memory = {
max_size = 1000,
enabled = true
},
lru = {
max_size = 500,
enabled = true
},
timed = {
default_ttl = 300, -- 5 minutes
cleanup_interval = 60, -- 1 minute
enabled = true
}
}
-- Performance metrics
local cache_metrics = {
hits = 0,
misses = 0,
sets = 0,
evictions = 0,
cleanups = 0
}
-- Simple memory cache
local MemoryCache = {}
MemoryCache.__index = MemoryCache
function MemoryCache.new(max_size)
local self = setmetatable({}, MemoryCache)
self.data = {}
self.max_size = max_size or 1000
self.current_size = 0
return self
end
function MemoryCache:set(key, value)
if self.data[key] == nil then
self.current_size = self.current_size + 1
end
self.data[key] = value
-- Evict if over size limit
if self.current_size > self.max_size then
self:evict()
end
cache_metrics.sets = cache_metrics.sets + 1
end
function MemoryCache:get(key)
local value = self.data[key]
if value ~= nil then
cache_metrics.hits = cache_metrics.hits + 1
return value
else
cache_metrics.misses = cache_metrics.misses + 1
return nil
end
end
function MemoryCache:evict()
-- Simple eviction: remove first item
local first_key = next(self.data)
if first_key then
self.data[first_key] = nil
self.current_size = self.current_size - 1
cache_metrics.evictions = cache_metrics.evictions + 1
end
end
function MemoryCache:clear()
self.data = {}
self.current_size = 0
end
function MemoryCache:size()
return self.current_size
end
-- LRU (Least Recently Used) cache
local LRUCache = {}
LRUCache.__index = LRUCache
function LRUCache.new(max_size)
local self = setmetatable({}, LRUCache)
self.data = {}
self.access_order = {}
self.max_size = max_size or 500
return self
end
function LRUCache:set(key, value)
if self.data[key] then
-- Update existing item
self.data[key] = value
self:update_access(key)
else
-- Add new item
self.data[key] = value
table.insert(self.access_order, key)
-- Evict if over size limit
if #self.access_order > self.max_size then
self:evict()
end
end
cache_metrics.sets = cache_metrics.sets + 1
end
function LRUCache:get(key)
local value = self.data[key]
if value ~= nil then
self:update_access(key)
cache_metrics.hits = cache_metrics.hits + 1
return value
else
cache_metrics.misses = cache_metrics.misses + 1
return nil
end
end
function LRUCache:update_access(key)
-- Remove key from current position
for i, k in ipairs(self.access_order) do
if k == key then
table.remove(self.access_order, i)
break
end
end
-- Add to end (most recently used)
table.insert(self.access_order, key)
end
function LRUCache:evict()
if #self.access_order > 0 then
local lru_key = table.remove(self.access_order, 1)
self.data[lru_key] = nil
cache_metrics.evictions = cache_metrics.evictions + 1
end
end
function LRUCache:clear()
self.data = {}
self.access_order = {}
end
function LRUCache:size()
return #self.access_order
end
-- Timed cache with TTL
local TimedCache = {}
TimedCache.__index = TimedCache
function TimedCache.new(default_ttl)
local self = setmetatable({}, TimedCache)
self.data = {}
self.default_ttl = default_ttl or 300
self.cleanup_timer = nil
self:start_cleanup_timer()
return self
end
function TimedCache:set(key, value, ttl)
ttl = ttl or self.default_ttl
local expire_time = os.time() + ttl
self.data[key] = {
value = value,
expire_time = expire_time
}
cache_metrics.sets = cache_metrics.sets + 1
end
function TimedCache:get(key)
local item = self.data[key]
if item then
if os.time() < item.expire_time then
cache_metrics.hits = cache_metrics.hits + 1
return item.value
else
-- Expired, remove it
self.data[key] = nil
end
end
cache_metrics.misses = cache_metrics.misses + 1
return nil
end
function TimedCache:cleanup()
local current_time = os.time()
local cleaned = 0
for key, item in pairs(self.data) do
if current_time >= item.expire_time then
self.data[key] = nil
cleaned = cleaned + 1
end
end
cache_metrics.cleanups = cache_metrics.cleanups + 1
return cleaned
end
function TimedCache:start_cleanup_timer()
if self.cleanup_timer then
return
end
self.cleanup_timer = vim.loop.new_timer()
if self.cleanup_timer then
self.cleanup_timer:start(
cache_config.timed.cleanup_interval * 1000,
cache_config.timed.cleanup_interval * 1000,
vim.schedule_wrap(function()
self:cleanup()
end)
)
end
end
function TimedCache:stop_cleanup_timer()
if self.cleanup_timer then
self.cleanup_timer:close()
self.cleanup_timer = nil
end
end
function TimedCache:clear()
self.data = {}
end
function TimedCache:size()
local count = 0
for _ in pairs(self.data) do
count = count + 1
end
return count
end
-- Initialize caches
function M.init(config)
config = config or {}
cache_config = vim.tbl_deep_extend("force", cache_config, config)
-- Initialize cache instances
if cache_config.memory.enabled then
cache_storage.memory = MemoryCache.new(cache_config.memory.max_size)
end
if cache_config.lru.enabled then
cache_storage.lru = LRUCache.new(cache_config.lru.max_size)
end
if cache_config.timed.enabled then
cache_storage.timed = TimedCache.new(cache_config.timed.default_ttl)
end
M.info("Cache system initialized", cache_config)
end
-- Memory cache operations
function M.memory_set(key, value)
if not cache_storage.memory then
return false, "Memory cache disabled"
end
cache_storage.memory:set(key, value)
return true
end
function M.memory_get(key)
if not cache_storage.memory then
return nil
end
return cache_storage.memory:get(key)
end
-- LRU cache operations
function M.lru_set(key, value)
if not cache_storage.lru then
return false, "LRU cache disabled"
end
cache_storage.lru:set(key, value)
return true
end
function M.lru_get(key)
if not cache_storage.lru then
return nil
end
return cache_storage.lru:get(key)
end
-- Timed cache operations
function M.timed_set(key, value, ttl)
if not cache_storage.timed then
return false, "Timed cache disabled"
end
cache_storage.timed:set(key, value, ttl)
return true
end
function M.timed_get(key)
if not cache_storage.timed then
return nil
end
return cache_storage.timed:get(key)
end
-- Generic cache operations with automatic cache selection
function M.set(key, value, cache_type, ttl)
cache_type = cache_type or "memory"
if cache_type == "memory" then
return M.memory_set(key, value)
elseif cache_type == "lru" then
return M.lru_set(key, value)
elseif cache_type == "timed" then
return M.timed_set(key, value, ttl)
else
return false, "Unknown cache type: " .. cache_type
end
end
function M.get(key, cache_type)
cache_type = cache_type or "memory"
if cache_type == "memory" then
return M.memory_get(key)
elseif cache_type == "lru" then
return M.lru_get(key)
elseif cache_type == "timed" then
return M.timed_get(key)
else
return nil, "Unknown cache type: " .. cache_type
end
end
-- Get or set pattern (compute if not cached)
function M.get_or_set(key, compute_func, cache_type, ttl)
local value = M.get(key, cache_type)
if value ~= nil then
return value
end
-- Compute value
local success, result = pcall(compute_func)
if success then
M.set(key, result, cache_type, ttl)
return result
else
error("Failed to compute cached value: " .. tostring(result))
end
end
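-- Usage sketch (hypothetical key and loader): compute once, then serve from cache.
-- local doc = M.get_or_set("document:" .. path, function()
--   return load_document(path) -- only runs on a cache miss
-- end, "timed", 60)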
-- Cache with multiple backends (try each in order)
function M.multi_get(key, cache_types)
cache_types = cache_types or {"memory", "lru", "timed"}
for _, cache_type in ipairs(cache_types) do
local value = M.get(key, cache_type)
if value ~= nil then
return value, cache_type
end
end
return nil
end
-- Invalidate cache entries
function M.invalidate(key, cache_type)
if cache_type then
-- Invalidate specific cache type
if cache_type == "memory" and cache_storage.memory then
if cache_storage.memory.data[key] ~= nil then
cache_storage.memory.data[key] = nil
cache_storage.memory.current_size = cache_storage.memory.current_size - 1
end
elseif cache_type == "lru" and cache_storage.lru then
cache_storage.lru.data[key] = nil
for i, k in ipairs(cache_storage.lru.access_order) do
if k == key then
table.remove(cache_storage.lru.access_order, i)
break
end
end
elseif cache_type == "timed" and cache_storage.timed then
cache_storage.timed.data[key] = nil
end
else
-- Invalidate from all caches
M.invalidate(key, "memory")
M.invalidate(key, "lru")
M.invalidate(key, "timed")
end
end
-- Clear all caches
function M.clear_all()
if cache_storage.memory then
cache_storage.memory:clear()
end
if cache_storage.lru then
cache_storage.lru:clear()
end
if cache_storage.timed then
cache_storage.timed:clear()
end
-- Reset metrics
cache_metrics.hits = 0
cache_metrics.misses = 0
cache_metrics.sets = 0
cache_metrics.evictions = 0
cache_metrics.cleanups = 0
M.info("All caches cleared")
end
-- Get cache statistics
function M.get_stats()
local stats = {
metrics = vim.deepcopy(cache_metrics),
sizes = {},
config = vim.deepcopy(cache_config)
}
-- Calculate hit ratio
local total_requests = cache_metrics.hits + cache_metrics.misses
stats.metrics.hit_ratio = total_requests > 0 and (cache_metrics.hits / total_requests) or 0
-- Get cache sizes
if cache_storage.memory then
stats.sizes.memory = cache_storage.memory:size()
end
if cache_storage.lru then
stats.sizes.lru = cache_storage.lru:size()
end
if cache_storage.timed then
stats.sizes.timed = cache_storage.timed:size()
end
return stats
end
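-- Usage sketch: inspect cache effectiveness at runtime.
-- local stats = M.get_stats()
-- print(string.format("hit ratio: %.1f%%", stats.metrics.hit_ratio * 100))
-- print("LRU entries: " .. (stats.sizes.lru or 0))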
-- Cache warming functions
function M.warm_query_cache(queries)
if not cache_storage.lru then
return false, "LRU cache not available"
end
local query_engine = require('notex.query')
local warmed = 0
for _, query in ipairs(queries) do
local key = "query:" .. query
local cached = M.lru_get(key)
if not cached then
-- Execute query and cache result
local result = query_engine.execute_query(query)
if result.success then
M.lru_set(key, result)
warmed = warmed + 1
end
end
end
M.info("Warmed query cache", {queries_warmed = warmed})
return true
end
function M.warm_document_cache(document_paths)
if not cache_storage.memory then
return false, "Memory cache not available"
end
local indexer = require('notex.index')
local warmed = 0
for _, path in ipairs(document_paths) do
local key = "document:" .. path
local cached = M.memory_get(key)
if not cached then
-- Get document details and cache
local details, err = indexer.get_document_details_by_path(path)
if details then
M.memory_set(key, details)
warmed = warmed + 1
end
end
end
M.info("Warmed document cache", {documents_warmed = warmed})
return true
end
-- Cleanup function
function M.cleanup()
if cache_storage.timed then
cache_storage.timed:stop_cleanup_timer()
end
M.clear_all()
M.info("Cache system cleaned up")
end
-- Export cache metrics for monitoring
M.metrics = cache_metrics
M.config = cache_config
-- Forward logging functions (circular dependency resolution)
function M.info(message, context)
local ok, logging = pcall(require, 'notex.utils.logging')
if ok then
logging.info(message, context)
else
vim.notify("Cache: " .. message, vim.log.levels.INFO)
end
end
return M

lua/notex/utils/date.lua (new file, 398 lines)
@@ -0,0 +1,398 @@
-- Date parsing and formatting utilities
local M = {}
-- Date format patterns
local DATE_PATTERNS = {
ISO_8601 = "^%d%d%d%d%-%d%d%-%d%d$",
ISO_8601_TIME = "^%d%d%d%d%-%d%d%-%d%dT%d%d:%d%d:%d%dZ?$",
ISO_8601_OFFSET = "^%d%d%d%d%-%d%d%-%d%dT%d%d:%d%d:%d%d[%+%-]%d%d:%d%d$",
RELATIVE = "^(%d+)([hdwmy])$",
NATURAL = "^%a+%s+%d+%s+%d+$" -- e.g. "January 5 2024"
}
-- Month names
local MONTH_NAMES = {
"January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"
}
local MONTH_SHORT = {
"Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
}
-- Parse date string to timestamp
function M.parse_date(date_string)
if not date_string or date_string == "" then
return nil
end
-- Handle relative dates
if date_string:match(DATE_PATTERNS.RELATIVE) then
return M.parse_relative_date(date_string)
end
-- Handle ISO 8601 with time
if date_string:match(DATE_PATTERNS.ISO_8601_TIME) then
return M.parse_iso8601_datetime(date_string)
end
-- Handle ISO 8601 with offset
if date_string:match(DATE_PATTERNS.ISO_8601_OFFSET) then
return M.parse_iso8601_with_offset(date_string)
end
-- Handle natural language dates
if date_string:match(DATE_PATTERNS.NATURAL) then
return M.parse_natural_date(date_string)
end
-- Handle ISO 8601 date only
if date_string:match(DATE_PATTERNS.ISO_8601) then
return M.parse_iso8601_date(date_string)
end
-- Handle common formats
return M.parse_common_formats(date_string)
end
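-- Usage sketch: each form below yields a Unix timestamp, or nil if unparseable.
-- M.parse_date("2024-03-01")                 -- ISO 8601 date
-- M.parse_date("2024-03-01T12:30:00Z")       -- ISO 8601 datetime
-- M.parse_date("2024-03-01T12:30:00+02:00")  -- with timezone offset
-- M.parse_date("7d")                         -- relative: 7 days ago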
-- Parse ISO 8601 date
function M.parse_iso8601_date(date_string)
local year, month, day = date_string:match("^(%d%d%d%d)%-(%d%d)%-(%d%d)$")
if not year then
return nil
end
local timestamp = os.time({
year = tonumber(year),
month = tonumber(month),
day = tonumber(day),
hour = 0,
min = 0,
sec = 0
})
return timestamp
end
-- Parse ISO 8601 datetime
function M.parse_iso8601_datetime(date_string)
-- NOTE: a trailing "Z" is accepted but the timestamp is built in local time
local year, month, day, hour, min, sec = date_string:match("^(%d%d%d%d)%-(%d%d)%-(%d%d)T(%d%d):(%d%d):(%d%d)Z?$")
if not year then
return nil
end
local timestamp = os.time({
year = tonumber(year),
month = tonumber(month),
day = tonumber(day),
hour = tonumber(hour),
min = tonumber(min),
sec = tonumber(sec)
})
return timestamp
end
-- Parse ISO 8601 with timezone offset
function M.parse_iso8601_with_offset(date_string)
local year, month, day, hour, min, sec, offset_sign, offset_hour, offset_min = date_string:match("^(%d%d%d%d)%-(%d%d)%-(%d%d)T(%d%d):(%d%d):(%d%d)([%+%-])(%d%d):(%d%d)$")
if not year then
return nil
end
local timestamp = os.time({
year = tonumber(year),
month = tonumber(month),
day = tonumber(day),
hour = tonumber(hour),
min = tonumber(min),
sec = tonumber(sec)
})
-- Apply timezone offset (convert to UTC)
local offset_total = tonumber(offset_hour) * 3600 + tonumber(offset_min) * 60
if offset_sign == "-" then
offset_total = -offset_total
end
timestamp = timestamp - offset_total
return timestamp
end
-- Parse natural date (ISO "YYYY-MM-DD" or month-name form like "January 5 2024")
function M.parse_natural_date(date_string)
local year, month, day = date_string:match("^(%d%d%d%d)%-(%d%d)%-(%d%d)$")
-- Fall back to natural language month names
if not year then
local month_name, day_part, year_part = date_string:match("^(%a+)%s+(%d+)%s+(%d+)$")
if month_name then
month = M.get_month_number(month_name)
day = tonumber(day_part)
year = tonumber(year_part)
end
end
if year and month and day then
return os.time({
year = tonumber(year),
month = tonumber(month),
day = tonumber(day),
hour = 0,
min = 0,
sec = 0
})
end
return nil
end
-- Parse relative date (e.g. "7d" means 7 days ago)
function M.parse_relative_date(date_string)
local amount, unit = date_string:match("^(%d+)([hdwmy])$")
if not amount or not unit then
return nil
end
local current_time = os.time()
amount = tonumber(amount)
local seconds = 0
if unit == "h" then
seconds = amount * 3600
elseif unit == "d" then
seconds = amount * 86400
elseif unit == "w" then
seconds = amount * 604800
elseif unit == "m" then
seconds = amount * 2592000 -- months, approximated as 30 days
elseif unit == "y" then
seconds = amount * 31536000 -- years, approximated as 365 days
end
return current_time - seconds
end
-- Parse common formats (MM/DD/YYYY, MM-DD-YYYY)
function M.parse_common_formats(date_string)
local formats = {
"^%d%d%/%d%d%/%d%d%d%d$", -- MM/DD/YYYY
"^%d%d%-%d%d%-%d%d%d%d$", -- MM-DD-YYYY
}
for _, pattern in ipairs(formats) do
if date_string:match(pattern) then
-- Try to parse with Lua's built-in date parsing
local timestamp = os.time({
year = tonumber(date_string:sub(-4)),
month = tonumber(date_string:sub(1, 2)),
day = tonumber(date_string:sub(4, 5)),
hour = 0,
min = 0,
sec = 0
})
if timestamp > 0 then
return timestamp
end
end
end
return nil
end
-- Format timestamp to string
function M.format_date(timestamp, format)
format = format or "%Y-%m-%d"
if not timestamp then
return ""
end
return os.date(format, timestamp)
end
-- Get relative time string
function M.get_relative_time(timestamp)
local current_time = os.time()
local diff = current_time - timestamp
if diff < 60 then
return "just now"
elseif diff < 3600 then
local minutes = math.floor(diff / 60)
return string.format("%d minute%s ago", minutes, minutes > 1 and "s" or "")
elseif diff < 86400 then
local hours = math.floor(diff / 3600)
return string.format("%d hour%s ago", hours, hours > 1 and "s" or "")
elseif diff < 2592000 then
local days = math.floor(diff / 86400)
return string.format("%d day%s ago", days, days > 1 and "s" or "")
elseif diff < 31536000 then
local months = math.floor(diff / 2592000)
return string.format("%d month%s ago", months, months > 1 and "s" or "")
else
local years = math.floor(diff / 31536000)
return string.format("%d year%s ago", years, years > 1 and "s" or "")
end
end
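-- Usage sketch: human-readable age of a document timestamp.
-- local age = M.get_relative_time(doc.updated_at) -- e.g. "3 days ago"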
-- Get month number from name
function M.get_month_number(month_name)
local lower_name = month_name:lower()
for i, name in ipairs(MONTH_NAMES) do
if name:lower() == lower_name then
return i
end
end
for i, name in ipairs(MONTH_SHORT) do
if name:lower() == lower_name then
return i
end
end
return nil
end
-- Get month name from number
function M.get_month_name(month_number, short)
if month_number < 1 or month_number > 12 then
return nil
end
if short then
return MONTH_SHORT[month_number]
else
return MONTH_NAMES[month_number]
end
end
-- Validate date string
function M.is_valid_date(date_string)
return M.parse_date(date_string) ~= nil
end
-- Add time to timestamp
function M.add_time(timestamp, amount, unit)
unit = unit or "days"
local seconds = 0
if unit == "seconds" then
seconds = amount
elseif unit == "minutes" then
seconds = amount * 60
elseif unit == "hours" then
seconds = amount * 3600
elseif unit == "days" then
seconds = amount * 86400
elseif unit == "weeks" then
seconds = amount * 604800
elseif unit == "months" then
seconds = amount * 2592000
elseif unit == "years" then
seconds = amount * 31536000
end
return timestamp + seconds
end
-- Get date range
function M.get_date_range(start_date, end_date)
local start_timestamp = M.parse_date(start_date)
local end_timestamp = M.parse_date(end_date)
if not start_timestamp or not end_timestamp then
return nil
end
return {
start_timestamp = start_timestamp,
end_timestamp = end_timestamp,
start_formatted = M.format_date(start_timestamp),
end_formatted = M.format_date(end_timestamp),
duration_days = math.floor((end_timestamp - start_timestamp) / 86400)
}
end
-- Get week start/end
function M.get_week_bounds(timestamp)
timestamp = timestamp or os.time()
local date_table = os.date("*t", timestamp)
local day_of_week = date_table.wday -- Sunday = 1, Monday = 2, etc.
-- Adjust to Monday = 0
day_of_week = (day_of_week + 5) % 7
local week_start = timestamp - (day_of_week * 86400)
local week_end = week_start + (6 * 86400)
return {
start_timestamp = week_start,
end_timestamp = week_end,
start_formatted = M.format_date(week_start),
end_formatted = M.format_date(week_end)
}
end
-- Get month start/end
function M.get_month_bounds(timestamp)
timestamp = timestamp or os.time()
local date_table = os.date("*t", timestamp)
local month_start = os.time({
year = date_table.year,
month = date_table.month,
day = 1,
hour = 0,
min = 0,
sec = 0
})
local next_month = os.time({
year = date_table.year,
month = date_table.month + 1,
day = 1,
hour = 0,
min = 0,
sec = 0
})
local month_end = next_month - 1
return {
start_timestamp = month_start,
end_timestamp = month_end,
start_formatted = M.format_date(month_start),
end_formatted = M.format_date(month_end)
}
end
-- Get time zones
function M.get_timezones()
return {
"UTC",
"America/New_York",
"America/Chicago",
"America/Denver",
"America/Los_Angeles",
"Europe/London",
"Europe/Paris",
"Asia/Tokyo",
"Australia/Sydney"
}
end
return M

lua/notex/utils/errors.lua (new file, 402 lines)
@@ -0,0 +1,402 @@
-- Centralized error handling and recovery system
local M = {}
local logging = require('notex.utils.logging')
-- Error types with specific handling strategies
local ERROR_TYPES = {
DATABASE_CONNECTION = {
category = "database",
recoverable = true,
retry_strategy = "exponential_backoff",
max_retries = 3,
user_message = "Database connection error. Retrying..."
},
DATABASE_QUERY = {
category = "database",
recoverable = false,
retry_strategy = "none",
max_retries = 0,
user_message = "Query execution failed. Please check your query syntax."
},
FILE_NOT_FOUND = {
category = "filesystem",
recoverable = true,
retry_strategy = "immediate",
max_retries = 1,
user_message = "File not found. It may have been moved or deleted."
},
FILE_PARSE_ERROR = {
category = "parsing",
recoverable = false,
retry_strategy = "none",
max_retries = 0,
user_message = "Failed to parse file. Please check the file format."
},
QUERY_SYNTAX_ERROR = {
category = "query",
recoverable = false,
retry_strategy = "none",
max_retries = 0,
user_message = "Query syntax error. Please check your query syntax."
},
VALIDATION_ERROR = {
category = "validation",
recoverable = false,
retry_strategy = "none",
max_retries = 0,
user_message = "Validation error. Please check your input."
},
UI_ERROR = {
category = "ui",
recoverable = true,
retry_strategy = "immediate",
max_retries = 1,
user_message = "UI error. Attempting to recover..."
},
PERMISSION_ERROR = {
category = "filesystem",
recoverable = false,
retry_strategy = "none",
max_retries = 0,
user_message = "Permission denied. Please check file permissions."
},
NETWORK_ERROR = {
category = "network",
recoverable = true,
retry_strategy = "exponential_backoff",
max_retries = 3,
user_message = "Network error. Retrying..."
},
PERFORMANCE_TIMEOUT = {
category = "performance",
recoverable = true,
retry_strategy = "immediate",
max_retries = 1,
user_message = "Operation timed out. Retrying with simpler approach..."
}
}
-- Error state tracking
local error_state = {
recent_errors = {},
error_counts = {},
last_recovery_attempt = {},
recovery_in_progress = {}
}
-- Create standardized error object
function M.create_error(error_type, message, context, original_error)
local error_def = ERROR_TYPES[error_type] or ERROR_TYPES.UI_ERROR
local error_obj = {
type = error_type,
message = message,
context = context or {},
original_error = original_error,
timestamp = os.time(),
recoverable = error_def.recoverable,
category = error_def.category,
user_message = error_def.user_message,
retry_strategy = error_def.retry_strategy,
max_retries = error_def.max_retries,
error_id = M.generate_error_id()
}
-- Track error
M.track_error(error_obj)
return error_obj
end
-- Generate unique error ID
function M.generate_error_id()
return string.format("ERR_%d_%s", os.time(), math.random(1000, 9999))
end
-- Track error occurrence
function M.track_error(error_obj)
-- Add to recent errors
table.insert(error_state.recent_errors, error_obj)
-- Keep only last 50 errors
if #error_state.recent_errors > 50 then
table.remove(error_state.recent_errors, 1)
end
-- Update error counts
local key = error_obj.type
error_state.error_counts[key] = (error_state.error_counts[key] or 0) + 1
-- Log the error
logging.handle_error(error_obj.message, error_obj.category, error_obj)
end
-- Check if error should be retried
function M.should_retry(error_obj, current_attempt)
if not error_obj.recoverable then
return false, "Error is not recoverable"
end
if current_attempt >= error_obj.max_retries then
return false, "Maximum retries exceeded"
end
-- Check if we recently attempted recovery for this error type
local last_attempt = error_state.last_recovery_attempt[error_obj.type]
if last_attempt and (os.time() - last_attempt) < 5 then
return false, "Recovery attempt too recent"
end
return true, "Retry allowed"
end
-- Execute operation with error handling and recovery
function M.safe_execute(operation, error_type, context, func, ...)
local current_attempt = 0
local max_attempts = (ERROR_TYPES[error_type] and ERROR_TYPES[error_type].max_retries or 0) + 1
while current_attempt < max_attempts do
local success, result = pcall(func, ...)
if success then
-- Reset recovery state on success
error_state.last_recovery_attempt[error_type] = nil
error_state.recovery_in_progress[error_type] = nil
return true, result
else
current_attempt = current_attempt + 1
local error_obj = M.create_error(error_type, result, context)
local should_retry, retry_reason = M.should_retry(error_obj, current_attempt)
if should_retry and current_attempt < max_attempts then
error_state.last_recovery_attempt[error_type] = os.time()
error_state.recovery_in_progress[error_type] = true
-- Show user message
if error_obj.user_message then
vim.notify(error_obj.user_message, vim.log.levels.WARN)
end
-- Apply retry strategy
M.apply_retry_strategy(error_obj.retry_strategy, current_attempt)
logging.info("Retrying operation", {
operation = operation,
attempt = current_attempt,
error_type = error_type,
reason = retry_reason
})
else
-- Final failure
error_state.recovery_in_progress[error_type] = nil
-- Show final error message
M.show_final_error(error_obj, current_attempt)
return false, error_obj
end
end
end
return false, "Operation failed after all retry attempts"
end
-- Apply retry strategy
function M.apply_retry_strategy(strategy, attempt)
if strategy == "immediate" then
-- No delay
elseif strategy == "exponential_backoff" then
local delay = math.min(2 ^ attempt, 10) -- Cap at 10 seconds
vim.defer_fn(function() end, delay * 1000)
elseif strategy == "linear_backoff" then
local delay = attempt * 1000 -- 1 second per attempt
vim.defer_fn(function() end, delay)
end
end
-- Show final error to user
function M.show_final_error(error_obj, attempt_count)
local message = string.format("%s (%d attempts made)", error_obj.user_message or error_obj.message, attempt_count)
local hard_failure = { validation = true, query = true, filesystem = true, database = true }
if hard_failure[error_obj.category] then
vim.notify(message, vim.log.levels.ERROR)
else
vim.notify(message, vim.log.levels.WARN)
end
end
-- Wrap function for safe execution
function M.wrap(operation_name, error_type, func)
return function(...)
return M.safe_execute(operation_name, error_type, {operation = operation_name}, func, ...)
end
end
-- Handle specific error types with custom recovery
local error_handlers = {}
function M.register_error_handler(error_type, handler)
error_handlers[error_type] = handler
end
function M.handle_specific_error(error_obj)
local handler = error_handlers[error_obj.type]
if handler then
local success, result = pcall(handler, error_obj)
if success then
return result
else
logging.error("Error handler failed", {
error_type = error_obj.type,
handler_error = result
})
end
end
return nil
end
-- Register default error handlers
M.register_error_handler("DATABASE_CONNECTION", function(error_obj)
-- Try to reinitialize database connection
local database = require('notex.database.init')
local ok, err = database.reconnect()
if ok then
vim.notify("Database connection restored", vim.log.levels.INFO)
return true
end
return false
end)
M.register_error_handler("FILE_NOT_FOUND", function(error_obj)
-- Remove from index if file no longer exists
if error_obj.context and error_obj.context.file_path then
local indexer = require('notex.index')
local ok, err = indexer.remove_document_by_path(error_obj.context.file_path)
if ok then
vim.notify("Removed missing file from index", vim.log.levels.INFO)
return true
end
end
return false
end)
M.register_error_handler("UI_ERROR", function(error_obj)
-- Try to cleanup UI state
local ui = require('notex.ui')
ui.cleanup_all()
vim.notify("UI state reset", vim.log.levels.INFO)
return true
end)
-- Get error statistics
function M.get_error_statistics()
local stats = {
total_errors = 0,
by_type = vim.deepcopy(error_state.error_counts),
recent_errors = vim.list_slice(error_state.recent_errors, math.max(1, #error_state.recent_errors - 9)), -- last 10 errors (vim.list_slice does not support negative start indices)
recovery_in_progress = vim.deepcopy(error_state.recovery_in_progress)
}
-- Calculate total
for _, count in pairs(error_state.error_counts) do
stats.total_errors = stats.total_errors + count
end
-- Get error rate in last hour
local one_hour_ago = os.time() - 3600
local recent_count = 0
for _, error in ipairs(error_state.recent_errors) do
if error.timestamp > one_hour_ago then
recent_count = recent_count + 1
end
end
stats.errors_per_hour = recent_count
return stats
end
-- Clear error history
function M.clear_error_history()
error_state.recent_errors = {}
error_state.error_counts = {}
error_state.last_recovery_attempt = {}
error_state.recovery_in_progress = {}
logging.info("Error history cleared")
end
-- Check system health based on errors
function M.check_system_health()
local stats = M.get_error_statistics()
local health = {
status = "healthy",
issues = {},
recommendations = {}
}
-- Check error rate
if stats.errors_per_hour > 10 then
health.status = "degraded"
table.insert(health.issues, "High error rate: " .. stats.errors_per_hour .. " errors/hour")
table.insert(health.recommendations, "Check system logs for recurring issues")
end
-- Check for stuck recovery operations
local stuck_recoveries = 0
for error_type, in_progress in pairs(error_state.recovery_in_progress) do
if in_progress then
stuck_recoveries = stuck_recoveries + 1
end
end
if stuck_recoveries > 0 then
health.status = "degraded"
table.insert(health.issues, stuck_recoveries .. " recovery operations in progress")
table.insert(health.recommendations, "Consider restarting the plugin")
end
-- Check for specific error patterns
local db_errors = error_state.error_counts["DATABASE_CONNECTION"] or 0
local file_errors = error_state.error_counts["FILE_NOT_FOUND"] or 0
if db_errors > 5 then
health.status = "unhealthy"
table.insert(health.issues, "Frequent database connection errors")
table.insert(health.recommendations, "Check database file permissions and disk space")
end
if file_errors > 10 then
table.insert(health.issues, "Many file not found errors")
table.insert(health.recommendations, "Consider reindexing the workspace")
end
return health
end
-- Create user-friendly error messages
function M.format_error_for_user(error_obj)
local message = error_obj.user_message or error_obj.message
-- Add contextual information
if error_obj.context.operation then
message = message .. " (during: " .. error_obj.context.operation .. ")"
end
if error_obj.context.file_path then
message = message .. " (file: " .. vim.fn.fnamemodify(error_obj.context.file_path, ":t") .. ")"
end
-- Add error ID for support
message = message .. " [ID: " .. error_obj.error_id .. "]"
return message
end
-- Export error types for use in other modules
M.ERROR_TYPES = ERROR_TYPES
return M
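For illustration, a usage sketch of the error-handling API above. The module path `notex.utils.errors` is inferred from the `require` calls elsewhere in this commit; `list_documents` is a hypothetical caller, not part of the plugin.

```lua
local errors = require('notex.utils.errors')

-- Hypothetical operation that may throw on a bad path
local function list_documents(dir)
  assert(vim.fn.isdirectory(dir) == 1, "not a directory: " .. dir)
  return vim.fn.glob(dir .. "/*.md", false, true)
end

-- The wrapped version retries per the FILE_NOT_FOUND policy and notifies the user
local safe_list = errors.wrap("list_documents", "FILE_NOT_FOUND", list_documents)

local ok, result = safe_list(vim.fn.expand("~/notes"))
if ok then
  vim.notify(#result .. " documents found", vim.log.levels.INFO)
else
  -- On final failure, result is the standardized error object
  vim.notify(errors.format_error_for_user(result), vim.log.levels.ERROR)
end
```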

lua/notex/utils/init.lua Normal file
@@ -0,0 +1,213 @@
-- Core utility functions
local M = {}
-- Import submodules
local logging = require('notex.utils.logging')
local errors = require('notex.utils.errors')
local types = require('notex.utils.types')
local validation = require('notex.utils.validation')
local date_utils = require('notex.utils.date')
local cache = require('notex.utils.cache')
-- Generate unique ID
function M.generate_id()
local template = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'
return string.gsub(template, '[xy]', function(c)
local v = (c == 'x') and math.random(0, 0xf) or math.random(8, 0xb)
return string.format('%x', v)
end)
end
-- Generate SHA256 hash (vim.fn.sha256() is built in; shelling out to sha256sum
-- via string interpolation broke on quotes and was a shell-injection risk)
function M.sha256(data)
return vim.fn.sha256(data)
end
-- Deep merge tables
function M.deep_merge(target, source)
for key, value in pairs(source) do
if type(value) == 'table' and type(target[key]) == 'table' then
M.deep_merge(target[key], value)
else
target[key] = value
end
end
return target
end
-- Check if file exists
function M.file_exists(path)
local f = io.open(path, "r")
if f then
f:close()
return true
end
return false
end
-- Read file content
function M.read_file(path)
local file = io.open(path, "r")
if not file then
return nil, "Cannot open file: " .. path
end
local content = file:read("*a")
file:close()
return content
end
-- Write file content
function M.write_file(path, content)
local file = io.open(path, "w")
if not file then
return false, "Cannot write to file: " .. path
end
file:write(content)
file:close()
return true
end
-- Get file modification time (libuv stat is portable; "stat -c %Y" is GNU-only
-- and the unquoted path broke on spaces)
function M.get_file_mtime(path)
local stat = vim.loop.fs_stat(path)
return stat and stat.mtime.sec or nil
end
-- Validate file encoding (UTF-8 check)
-- Note: the utf8 library is Lua 5.3+; Neovim's LuaJIT does not provide it,
-- so fall back to accepting the file rather than indexing a nil global
function M.is_utf8(path)
local file = io.open(path, "rb")
if not file then
return false
end
local content = file:read("*a")
file:close()
if utf8 and utf8.len then
return utf8.len(content) ~= nil
end
return true
end
-- Format error message
function M.format_error(error_type, message, context)
local error_obj = {
error_type = error_type,
message = message,
timestamp = os.time()
}
if context then
error_obj.context = context
end
return error_obj
end
-- Forward logging functions to centralized logging
M.trace = logging.trace
M.debug = logging.debug
M.info = logging.info
M.warn = logging.warn
M.error = logging.error
M.fatal = logging.fatal
M.log = logging.log
M.timer = logging.timer
-- Forward error handling functions
M.handle_error = errors.handle_error
M.safe_execute = errors.safe_execute
M.wrap = errors.wrap
M.create_error = errors.create_error
-- Forward type utilities
M.detect_type = types.detect_type
M.convert_to_type = types.convert_to_type
M.infer_schema = types.infer_schema
-- Forward validation utilities
M.validate_value = validation.validate_value
M.validate_document_properties = validation.validate_document_properties
M.sanitize_input = validation.sanitize_input
-- Forward date utilities
M.parse_date = date_utils.parse_date
M.format_date = date_utils.format_date
M.get_relative_time = date_utils.get_relative_time
-- Forward cache utilities
M.cache_set = cache.set
M.cache_get = cache.get
M.cache_get_or_set = cache.get_or_set
M.cache_invalidate = cache.invalidate
M.cache_clear_all = cache.clear_all
M.cache_get_stats = cache.get_stats
M.cache_init = cache.init
-- Simple timer for backward compatibility
function M.simple_timer(name)
local start_time = vim.loop.hrtime()
return function()
local end_time = vim.loop.hrtime()
local elapsed_ms = (end_time - start_time) / 1e6
M.log("INFO", string.format("%s completed in %.2fms", name, elapsed_ms))
return elapsed_ms
end
end
-- Validate data types
function M.validate_type(value, expected_type)
local actual_type = type(value)
if expected_type == "number" then
return actual_type == "number"
elseif expected_type == "string" then
return actual_type == "string"
elseif expected_type == "boolean" then
return actual_type == "boolean"
elseif expected_type == "table" then
return actual_type == "table"
elseif expected_type == "function" then
return actual_type == "function"
end
return false
end
-- Escape SQL values
function M.escape_sql(value)
if type(value) == "string" then
return "'" .. value:gsub("'", "''") .. "'"
elseif type(value) == "number" then
return tostring(value)
elseif type(value) == "boolean" then
return value and "1" or "0"
elseif value == nil then
return "NULL"
else
return "'" .. tostring(value):gsub("'", "''") .. "'"
end
end
return M
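A quick, self-contained sketch of the pure-Lua helpers above (behavior shown follows the definitions in this file; the module path `notex.utils` is inferred from this commit). Note that parameterized queries remain preferable to string escaping wherever the database layer supports them.

```lua
local utils = require('notex.utils')

-- UUID-v4-style identifier, e.g. "9f1c2d3e-4a5b-4c6d-8e7f-0a1b2c3d4e5f"
local id = utils.generate_id()
assert(#id == 36)

-- SQL value escaping
assert(utils.escape_sql("O'Brien") == "'O''Brien'")
assert(utils.escape_sql(42) == "42")
assert(utils.escape_sql(true) == "1")
assert(utils.escape_sql(nil) == "NULL")
```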

lua/notex/utils/logging.lua Normal file
@@ -0,0 +1,382 @@
-- Centralized logging and error handling system
local M = {}
-- Log levels
local LOG_LEVELS = {
TRACE = 1,
DEBUG = 2,
INFO = 3,
WARN = 4,
ERROR = 5,
FATAL = 6
}
-- Current log level (can be configured)
local current_log_level = LOG_LEVELS.INFO
-- Log file configuration
local log_config = {
file_enabled = true,
file_path = nil, -- Will be set to stdpath('data')/notex/notex.log
max_file_size = 1024 * 1024, -- 1MB
backup_count = 3,
console_enabled = true
}
-- Error categories for better handling
local ERROR_CATEGORIES = {
DATABASE = "database",
PARSING = "parsing",
QUERY = "query",
UI = "ui",
FILESYSTEM = "filesystem",
VALIDATION = "validation",
CONFIGURATION = "configuration",
NETWORK = "network",
PERFORMANCE = "performance"
}
-- Error context stack for nested operations
local error_context = {}
-- Performance tracking
local performance_metrics = {
query_times = {},
index_times = {},
operation_counts = {},
error_counts = {}
}
-- Initialize logging system
function M.init(config)
config = config or {}
-- Set log level
if config.log_level then
local level = LOG_LEVELS[config.log_level:upper()]
if level then
current_log_level = level
end
end
-- Configure logging
log_config = vim.tbl_deep_extend("force", log_config, config)
-- Set default log file path
if not log_config.file_path then
log_config.file_path = vim.fn.stdpath('data') .. '/notex/notex.log'
end
-- Ensure log directory exists
local log_dir = vim.fn.fnamemodify(log_config.file_path, ':h')
vim.fn.mkdir(log_dir, 'p')
-- Clean up old log files
M.cleanup_log_files()
M.log("INFO", "Logging system initialized", {
log_level = M.get_log_level_name(),
log_file = log_config.file_path
})
return true, "Logging system initialized"
end
-- Core logging function
function M.log(level, message, context)
level = level:upper()
local level_value = LOG_LEVELS[level] or LOG_LEVELS.INFO
-- Skip if below current log level
if level_value < current_log_level then
return
end
local timestamp = os.date("%Y-%m-%d %H:%M:%S")
local context_str = context and " | " .. vim.inspect(context) or ""
local log_entry = string.format("[%s] %s: %s%s", timestamp, level, message, context_str)
-- Console output
if log_config.console_enabled then
if level_value >= LOG_LEVELS.ERROR then
vim.notify(message, vim.log.levels.ERROR)
elseif level_value >= LOG_LEVELS.WARN then
vim.notify(message, vim.log.levels.WARN)
elseif level_value >= LOG_LEVELS.INFO then
vim.notify(message, vim.log.levels.INFO)
else
-- Debug/trace go to message history but not notifications
vim.schedule(function()
vim.cmd('echomsg "' .. message:gsub('"', '\\"') .. '"')
end)
end
end
-- File output
if log_config.file_enabled then
M.write_to_file(log_entry)
end
end
-- Write to log file with rotation
function M.write_to_file(log_entry)
-- Check file size and rotate if necessary
local file_info = vim.fn.getfsize(log_config.file_path)
if file_info > log_config.max_file_size then
M.rotate_log_file()
end
-- Append to log file
local file = io.open(log_config.file_path, "a")
if file then
file:write(log_entry .. "\n")
file:close()
end
end
-- Rotate log files
function M.rotate_log_file()
-- Remove oldest backup
local oldest_backup = log_config.file_path .. "." .. log_config.backup_count
if vim.fn.filereadable(oldest_backup) > 0 then
os.remove(oldest_backup)
end
-- Rotate existing backups
for i = log_config.backup_count - 1, 1, -1 do
local current_backup = log_config.file_path .. "." .. i
local next_backup = log_config.file_path .. "." .. (i + 1)
if vim.fn.filereadable(current_backup) > 0 then
os.rename(current_backup, next_backup)
end
end
-- Move current log to backup
if vim.fn.filereadable(log_config.file_path) > 0 then
os.rename(log_config.file_path, log_config.file_path .. ".1")
end
end
-- Clean up old log files
function M.cleanup_log_files()
for i = 1, log_config.backup_count do
local backup_file = log_config.file_path .. "." .. i
if vim.fn.filereadable(backup_file) > 0 then
local file_time = vim.fn.getftime(backup_file)
local age_days = (os.time() - file_time) / 86400
-- Remove backups older than 30 days
if age_days > 30 then
os.remove(backup_file)
end
end
end
end
-- Convenience logging functions
function M.trace(message, context)
M.log("TRACE", message, context)
end
function M.debug(message, context)
M.log("DEBUG", message, context)
end
function M.info(message, context)
M.log("INFO", message, context)
end
function M.warn(message, context)
M.log("WARN", message, context)
end
function M.error(message, context)
M.log("ERROR", message, context)
-- Track error metrics
local category = context and context.category or "unknown"
performance_metrics.error_counts[category] = (performance_metrics.error_counts[category] or 0) + 1
end
function M.fatal(message, context)
M.log("FATAL", message, context)
-- Fatal errors should also be shown as error notifications
vim.notify("FATAL: " .. message, vim.log.levels.ERROR)
end
-- Error handling with context
function M.handle_error(error_msg, category, context)
category = category or "unknown"
context = context or {}
local error_info = {
message = error_msg,
category = category,
context = context,
timestamp = os.time(),
stack_trace = debug.traceback()
}
-- Add current error context
if #error_context > 0 then
error_info.nested_context = vim.deepcopy(error_context)
end
M.error("Error in " .. category, error_info)
return error_info
end
-- Push error context (for nested operations)
function M.push_context(operation, context)
table.insert(error_context, {
operation = operation,
context = context or {},
timestamp = os.time()
})
end
-- Pop error context
function M.pop_context()
return table.remove(error_context)
end
-- Execute with error context
function M.with_context(operation, context, func, ...)
context = context or {} -- guard: context.category is read below on failure
M.push_context(operation, context)
local results = {pcall(func, ...)}
local success = table.remove(results, 1)
M.pop_context()
if not success then
local error_msg = results[1]
M.handle_error(error_msg, context.category or "operation", context)
return nil, error_msg
end
return unpack(results)
end
-- Performance tracking
function M.start_timer(operation)
return {
operation = operation,
start_time = vim.loop.hrtime()
}
end
function M.end_timer(timer)
local end_time = vim.loop.hrtime()
local duration_ms = (end_time - timer.start_time) / 1000000
-- Store performance metrics
if timer.operation:match("query") then
table.insert(performance_metrics.query_times, duration_ms)
-- Keep only last 100 measurements
if #performance_metrics.query_times > 100 then
table.remove(performance_metrics.query_times, 1)
end
elseif timer.operation:match("index") then
table.insert(performance_metrics.index_times, duration_ms)
if #performance_metrics.index_times > 100 then
table.remove(performance_metrics.index_times, 1)
end
end
-- Track operation counts
performance_metrics.operation_counts[timer.operation] = (performance_metrics.operation_counts[timer.operation] or 0) + 1
M.debug("Operation completed", {
operation = timer.operation,
duration_ms = duration_ms
})
return duration_ms
end
-- Timer utility function
function M.timer(operation)
return function()
return M.end_timer(M.start_timer(operation))
end
end
-- Get performance statistics
function M.get_performance_stats()
local stats = {
operations = vim.deepcopy(performance_metrics.operation_counts),
errors = vim.deepcopy(performance_metrics.error_counts)
}
-- Calculate averages
if #performance_metrics.query_times > 0 then
local total = 0
for _, time in ipairs(performance_metrics.query_times) do
total = total + time
end
stats.average_query_time = total / #performance_metrics.query_times
stats.query_count = #performance_metrics.query_times
end
if #performance_metrics.index_times > 0 then
local total = 0
for _, time in ipairs(performance_metrics.index_times) do
total = total + time
end
stats.average_index_time = total / #performance_metrics.index_times
stats.index_count = #performance_metrics.index_times
end
return stats
end
-- Get log level name
function M.get_log_level_name()
for name, level in pairs(LOG_LEVELS) do
if level == current_log_level then
return name
end
end
return "UNKNOWN"
end
-- Set log level
function M.set_log_level(level)
local level_value = LOG_LEVELS[level:upper()]
if level_value then
current_log_level = level_value
M.info("Log level changed to " .. level:upper())
return true
end
return false, "Invalid log level: " .. tostring(level)
end
-- Configuration validation
function M.validate_config(config)
local errors = {}
-- Validate log level
if config.log_level and not LOG_LEVELS[config.log_level:upper()] then
table.insert(errors, "Invalid log level: " .. config.log_level)
end
-- Validate file size
if config.max_file_size and (type(config.max_file_size) ~= "number" or config.max_file_size <= 0) then
table.insert(errors, "max_file_size must be a positive number")
end
-- Validate backup count
if config.backup_count and (type(config.backup_count) ~= "number" or config.backup_count < 1) then
table.insert(errors, "backup_count must be a positive number")
end
return #errors == 0, errors
end
-- Export error categories for use in other modules
M.ERROR_CATEGORIES = ERROR_CATEGORIES
return M
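For illustration, a hedged sketch of how a consumer might drive the logging module above (module path inferred from this commit; the operation names and context tables are illustrative):

```lua
local logging = require('notex.utils.logging')

-- One-time setup; log_level values follow the LOG_LEVELS table
logging.init({ log_level = "DEBUG" })

-- Structured logging with context tables
logging.info("Index rebuilt", { documents = 120, duration_ms = 85 })

-- Timing an operation; operations matching "query" feed the query metrics
local timer = logging.start_timer("query:recent_notes")
-- ... run the query ...
local elapsed_ms = logging.end_timer(timer)

-- Nested error context: failures inside func are logged with this context attached
local result = logging.with_context("reindex", { category = "database" }, function()
  return 42
end)
assert(result == 42)
```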

lua/notex/utils/types.lua Normal file
@@ -0,0 +1,381 @@
-- Type detection and conversion utilities
local M = {}
-- Type definitions
local TYPE_PATTERNS = {
BOOLEAN_TRUE = "^(true|yes|on|enabled|1)$",
BOOLEAN_FALSE = "^(false|no|off|disabled|0)$",
INTEGER = "^%-?%d+$",
FLOAT = "^%-?%d*%.?%d+$",
DATE_ISO8601 = "^%d%d%d%d%-%d%d%-%d%d$",
DATE_ISO8601_TIME = "^%d%d%d%d%-%d%d%-%d%dT%d%d:%d%d:%d%d",
URL = "^https?://[%w%-%.~/?:%[%]#@!$&'()*+,;=]*$", -- character class deduplicated; a stray "[" had crept into the set
EMAIL = "^[%w%-%.]+@[%w%-%.]+%.%w+$",
ARRAY_JSON = "^%[.*%]$",
OBJECT_JSON = "^{.*}$"
}
-- Type detection rules
local TYPE_DETECTION_RULES = {
-- Specific patterns first
{pattern = TYPE_PATTERNS.BOOLEAN_TRUE, type = "boolean", value = true},
{pattern = TYPE_PATTERNS.BOOLEAN_FALSE, type = "boolean", value = false},
{pattern = TYPE_PATTERNS.DATE_ISO8601_TIME, type = "date"},
{pattern = TYPE_PATTERNS.DATE_ISO8601, type = "date"},
{pattern = TYPE_PATTERNS.URL, type = "url"},
{pattern = TYPE_PATTERNS.EMAIL, type = "email"},
{pattern = TYPE_PATTERNS.ARRAY_JSON, type = "array"},
{pattern = TYPE_PATTERNS.OBJECT_JSON, type = "object"},
-- Numeric patterns
{pattern = TYPE_PATTERNS.FLOAT, type = "number"},
{pattern = TYPE_PATTERNS.INTEGER, type = "number"},
-- Default
{pattern = ".*", type = "string"}
}
-- Detect type of value
function M.detect_type(value)
if value == nil then
return "nil"
end
local value_str = tostring(value)
for _, rule in ipairs(TYPE_DETECTION_RULES) do
if value_str:match(rule.pattern) then
if rule.value ~= nil then
return rule.type, rule.value
else
return rule.type
end
end
end
return "string"
end
-- Convert value to specific type
function M.convert_to_type(value, target_type)
if value == nil then
return nil
end
local current_type, _ = M.detect_type(value)
if current_type == target_type then
return value
end
local value_str = tostring(value)
if target_type == "boolean" then
return M.convert_to_boolean(value_str)
elseif target_type == "number" then
return M.convert_to_number(value_str)
elseif target_type == "string" then
return M.convert_to_string(value)
elseif target_type == "date" then
return M.convert_to_date(value_str)
elseif target_type == "array" then
return M.convert_to_array(value_str)
elseif target_type == "object" then
return M.convert_to_object(value_str)
else
return value
end
end
-- Convert to boolean
function M.convert_to_boolean(value_str)
local lower = value_str:lower()
if lower:match(TYPE_PATTERNS.BOOLEAN_TRUE) then
return true
elseif lower:match(TYPE_PATTERNS.BOOLEAN_FALSE) then
return false
else
return nil
end
end
-- Convert to number
function M.convert_to_number(value_str)
local number = tonumber(value_str)
if number then
return number
end
-- Try scientific notation (string.match returns separate captures, not a table)
local mantissa, exponent = value_str:match("^([%d%.]+)[eE]([%+%-]?%d+)$")
if mantissa then
return tonumber(mantissa .. "e" .. exponent)
end
return nil
end
-- Convert to string
function M.convert_to_string(value)
if type(value) == "string" then
return value
elseif type(value) == "number" then
return tostring(value)
elseif type(value) == "boolean" then
return value and "true" or "false"
elseif type(value) == "table" then
return vim.json.encode(value)
else
return tostring(value)
end
end
-- Convert to date
function M.convert_to_date(value_str)
local date_parser = require('notex.utils.date')
local timestamp = date_parser.parse_date(value_str)
return timestamp and os.date("%Y-%m-%d", timestamp) or value_str
end
-- Convert to array
function M.convert_to_array(value_str)
-- Handle JSON array
if value_str:match(TYPE_PATTERNS.ARRAY_JSON) then
local ok, result = pcall(vim.json.decode, value_str)
if ok and type(result) == "table" then
return result
end
end
-- Handle comma-separated values (trim whitespace around items instead of stripping it from inside them)
local items = {}
for item in value_str:gmatch("[^,]+") do
item = item:match("^%s*(.-)%s*$")
if item ~= "" then
table.insert(items, item)
end
end
return #items > 0 and items or {value_str}
end
-- Convert to object
function M.convert_to_object(value_str)
-- Handle JSON object
if value_str:match(TYPE_PATTERNS.OBJECT_JSON) then
local ok, result = pcall(vim.json.decode, value_str)
if ok and type(result) == "table" then
return result
end
end
-- Handle key=value pairs (trim whitespace around keys and values instead of stripping it from inside them)
local obj = {}
for pair in value_str:gmatch("[^,]+") do
local key, value = pair:match("^%s*(.-)%s*=%s*(.-)%s*$")
if key and key ~= "" then
obj[key] = value
end
end
return next(obj) and obj or {}
end
-- Validate type conversion
function M.validate_conversion(value, target_type)
local converted = M.convert_to_type(value, target_type)
local detected_type, _ = M.detect_type(converted)
return detected_type == target_type
end
-- Get type info
function M.get_type_info(value)
local detected_type, converted_value = M.detect_type(value)
local is_valid = true
local info = {
detected_type = detected_type,
original_value = value,
converted_value = converted_value,
is_valid = is_valid,
possible_conversions = M.get_possible_conversions(value)
}
return info
end
-- Get possible conversions for a value
function M.get_possible_conversions(value)
local conversions = {}
for _, rule in ipairs(TYPE_DETECTION_RULES) do
local value_str = tostring(value)
if value_str:match(rule.pattern) then
local converted = M.convert_to_type(value, rule.type)
if converted ~= nil then
table.insert(conversions, {
type = rule.type,
value = converted,
is_same_type = rule.type == (M.detect_type(value))
})
end
end
end
return conversions
end
-- Compare types
function M.compare_types(value1, value2)
local type1, _ = M.detect_type(value1)
local type2, _ = M.detect_type(value2)
return {
type1 = type1,
type2 = type2,
same_type = type1 == type2,
compatible = M.are_types_compatible(type1, type2)
}
end
-- Check if types are compatible
function M.are_types_compatible(type1, type2)
-- Exact match
if type1 == type2 then
return true
end
-- Number compatibility
if (type1 == "number" and type2 == "number") or
(type1 == "integer" and type2 == "number") or
(type1 == "number" and type2 == "integer") then
return true
end
-- String compatibility (string can represent many types)
if type1 == "string" or type2 == "string" then
return true
end
return false
end
-- Cast value with validation
function M.cast_value(value, target_type, strict)
strict = strict or false
local converted = M.convert_to_type(value, target_type)
if converted == nil then
if strict then
error("Cannot cast value to type: " .. target_type)
else
return value
end
end
return converted
end
-- Infer schema from values
function M.infer_schema(values)
local schema = {}
local type_counts = {}
-- Guard against an empty sample: avoids a 0/0 (NaN) confidence below
if #values == 0 then
return { detected_type = nil, type_distribution = {}, confidence = 0, sample_size = 0, constraints = {} }
end
for _, value in ipairs(values) do
local detected_type, _ = M.detect_type(value)
if not type_counts[detected_type] then
type_counts[detected_type] = 0
end
type_counts[detected_type] = type_counts[detected_type] + 1
end
-- Find most common type
local most_common_type = nil
local max_count = 0
for type_name, count in pairs(type_counts) do
if count > max_count then
max_count = count
most_common_type = type_name
end
end
schema = {
detected_type = most_common_type,
type_distribution = type_counts,
confidence = max_count / #values,
sample_size = #values,
constraints = M.infer_constraints(values, most_common_type)
}
return schema
end
-- Infer constraints from values
function M.infer_constraints(values, detected_type)
local constraints = {}
if detected_type == "string" then
local min_length = math.huge
local max_length = 0
for _, value in ipairs(values) do
local value_str = tostring(value)
min_length = math.min(min_length, #value_str)
max_length = math.max(max_length, #value_str)
end
constraints.min_length = min_length
constraints.max_length = max_length
-- Detect common patterns
local email_count = 0
local url_count = 0
local date_count = 0
for _, value in ipairs(values) do
local value_str = tostring(value)
if value_str:match(TYPE_PATTERNS.EMAIL) then
email_count = email_count + 1
elseif value_str:match(TYPE_PATTERNS.URL) then
url_count = url_count + 1
elseif value_str:match(TYPE_PATTERNS.DATE_ISO8601) or value_str:match(TYPE_PATTERNS.DATE_ISO8601_TIME) then
date_count = date_count + 1
end
end
if email_count > #values / 2 then
constraints.format = "email"
elseif url_count > #values / 2 then
constraints.format = "url"
elseif date_count > #values / 2 then
constraints.format = "date"
end
elseif detected_type == "number" then
local min_value = math.huge
local max_value = -math.huge
local is_integer = true
for _, value in ipairs(values) do
local num = tonumber(value)
if num then
min_value = math.min(min_value, num)
max_value = math.max(max_value, num)
if num ~= math.floor(num) then
is_integer = false
end
end
end
constraints.min_value = min_value
constraints.max_value = max_value
constraints.is_integer = is_integer
end
return constraints
end
return M
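To make the detection order above concrete, a small sketch (module path inferred from this commit). Note the pitfall that `"1"`, `"0"`, `"true"`, and `"no"` hit the boolean rules before the numeric or string ones, since the boolean patterns are listed first in `TYPE_DETECTION_RULES`:

```lua
local types = require('notex.utils.types')

assert(types.detect_type("2024-05-01") == "date")
assert(types.detect_type("3.14") == "number")
assert(types.detect_type("user@example.com") == "email")
assert(types.detect_type("hello world") == "string")

-- Pitfall: "1" is classified as a boolean, not a number
local t, v = types.detect_type("1")
assert(t == "boolean" and v == true)

-- Schema inference over a column of values
local schema = types.infer_schema({ "10", "20", "3.5" })
assert(schema.detected_type == "number")
assert(schema.constraints.is_integer == false)
```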

@@ -0,0 +1,472 @@
-- Data validation utilities
local M = {}
local type_utils = require('notex.utils.types')
-- Validation rules
local VALIDATION_RULES = {
string = {
min_length = 0,
max_length = 1000,
required = false,
pattern = nil,
enum = nil
},
number = {
min_value = nil,
max_value = nil,
integer = false,
required = false
},
boolean = {
required = false
},
date = {
required = false,
min_date = nil,
max_date = nil,
format = "iso8601"
},
email = {
required = false,
domain_whitelist = nil
},
url = {
required = false,
schemes = {"http", "https"},
domain_whitelist = nil
},
array = {
min_items = 0,
max_items = 100,
item_type = nil,
required = false
},
object = {
required_fields = {},
optional_fields = {},
field_types = {},
strict = false,
required = false
}
}
-- Validate value against schema
function M.validate_value(value, schema)
if not schema or not schema.type then
return false, "Schema must specify type"
end
-- Handle null/nil values
if value == nil then
if schema.required == true then
return false, "Value is required"
end
return true, "Value is optional and nil"
end
-- Type validation
local detected_type = type_utils.detect_type(value)
if detected_type ~= schema.type then
local converted = type_utils.convert_to_type(value, schema.type)
if converted == nil then
return false, string.format("Expected %s, got %s", schema.type, detected_type)
end
value = converted -- Use converted value for further validation
end
-- Type-specific validation
local type_validator = "validate_" .. schema.type
if M[type_validator] then
local valid, error = M[type_validator](value, schema)
if not valid then
return false, error
end
end
return true, "Validation passed"
end
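-- Usage sketch (illustrative; assumes type_utils.detect_type reports
-- "string" for plain Lua strings):
--
--   local ok = M.validate_value("draft", { type = "string", enum = { "draft" } })
--   -- ok == true
--
--   local ok2, err = M.validate_value(nil, { type = "string", required = true })
--   -- ok2 == false, err == "Value is required"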
-- Validate string
function M.validate_string(value, schema)
local str = tostring(value)
-- Length validation
if schema.min_length and #str < schema.min_length then
return false, string.format("String too short (min %d characters)", schema.min_length)
end
if schema.max_length and #str > schema.max_length then
return false, string.format("String too long (max %d characters)", schema.max_length)
end
-- Pattern validation
if schema.pattern then
if not str:match(schema.pattern) then
return false, "String does not match required pattern"
end
end
-- Enum validation
if schema.enum then
local found = false
for _, enum_value in ipairs(schema.enum) do
if str == tostring(enum_value) then
found = true
break
end
end
if not found then
return false, string.format("Value must be one of: %s", vim.inspect(schema.enum))
end
end
return true, "String validation passed"
end
-- Validate number
function M.validate_number(value, schema)
local num = tonumber(value)
if not num then
return false, "Value is not a number"
end
-- Integer validation
if schema.integer and num ~= math.floor(num) then
return false, "Value must be an integer"
end
-- Range validation
if schema.min_value and num < schema.min_value then
return false, string.format("Value too small (minimum: %s)", tostring(schema.min_value))
end
if schema.max_value and num > schema.max_value then
return false, string.format("Value too large (maximum: %s)", tostring(schema.max_value))
end
return true, "Number validation passed"
end
-- Validate boolean
function M.validate_boolean(value, schema)
local bool = value
if type(value) ~= "boolean" then
bool = type_utils.convert_to_boolean(tostring(value))
if bool == nil then
return false, "Value is not a boolean"
end
end
return true, "Boolean validation passed"
end
-- Validate date
function M.validate_date(value, schema)
local date_parser = require('notex.utils.date')
local timestamp = date_parser.parse_date(tostring(value))
if not timestamp then
return false, "Invalid date format"
end
-- Range validation
if schema.min_date then
local min_timestamp = date_parser.parse_date(schema.min_date)
if min_timestamp and timestamp < min_timestamp then
return false, "Date is before minimum allowed date"
end
end
if schema.max_date then
local max_timestamp = date_parser.parse_date(schema.max_date)
if max_timestamp and timestamp > max_timestamp then
return false, "Date is after maximum allowed date"
end
end
return true, "Date validation passed"
end
-- Validate email
function M.validate_email(value, schema)
local email = tostring(value)
if not email:match("^[^@]+@[^@]+%.[^@]+$") then
return false, "Invalid email format"
end
-- Domain whitelist validation
if schema.domain_whitelist then
local domain = email:match("@([^@]+)$")
local found = false
for _, allowed_domain in ipairs(schema.domain_whitelist) do
if domain == allowed_domain then
found = true
break
end
end
if not found then
return false, string.format("Domain not in whitelist: %s", domain)
end
end
return true, "Email validation passed"
end
-- Validate URL
function M.validate_url(value, schema)
local url = tostring(value)
-- Basic URL validation
if not url:match("^https?://") then
-- Check if scheme is required
if schema.schemes and #schema.schemes > 0 then
return false, string.format("URL must start with: %s", table.concat(schema.schemes, ", "))
end
end
-- Scheme validation
if schema.schemes then
local scheme = url:match("^(https?)://")
if not scheme then
return false, "Invalid URL scheme"
end
local found = false
for _, allowed_scheme in ipairs(schema.schemes) do
if scheme == allowed_scheme then
found = true
break
end
end
if not found then
return false, string.format("URL scheme not allowed: %s", scheme)
end
end
return true, "URL validation passed"
end
-- Validate array
function M.validate_array(value, schema)
local arr = value
if type(arr) ~= "table" then
-- Try to convert to array
arr = type_utils.convert_to_array(tostring(value))
if type(arr) ~= "table" then
return false, "Value is not an array"
end
end
-- Length validation
if schema.min_items and #arr < schema.min_items then
return false, string.format("Array too short (min %d items)", schema.min_items)
end
if schema.max_items and #arr > schema.max_items then
return false, string.format("Array too long (max %d items)", schema.max_items)
end
-- Item type validation
if schema.item_type then
for i, item in ipairs(arr) do
local valid, error = M.validate_value(item, {
type = schema.item_type,
required = false
})
if not valid then
return false, string.format("Array item %d invalid: %s", i, error)
end
end
end
return true, "Array validation passed"
end
-- Validate object
function M.validate_object(value, schema)
local obj = value
if type(obj) ~= "table" then
return false, "Value is not an object"
end
-- Required fields validation
for _, field in ipairs(schema.required_fields or {}) do
if obj[field] == nil then
return false, string.format("Required field missing: %s", field)
end
end
-- Field validation
local all_fields = {}
for field, _ in pairs(obj) do
table.insert(all_fields, field)
end
for _, field in ipairs(all_fields) do
local field_schema = schema.field_types and schema.field_types[field]
if field_schema then
local valid, error = M.validate_value(obj[field], field_schema)
if not valid then
return false, string.format("Field '%s' invalid: %s", field, error)
end
elseif schema.strict then
return false, string.format("Unexpected field '%s' (strict mode)", field)
end
end
return true, "Object validation passed"
end
-- Validate document schema
function M.validate_document_properties(properties, schema_definition)
local errors = {}
local warnings = {}
if not properties then
return false, {"No properties provided"}
end
-- Validate each property against schema
for prop_name, prop_value in pairs(properties) do
local prop_schema = schema_definition[prop_name]
if prop_schema then
local valid, error = M.validate_value(prop_value, prop_schema)
if not valid then
table.insert(errors, string.format("Property '%s': %s", prop_name, error))
end
else
table.insert(warnings, string.format("Unknown property '%s'", prop_name))
end
end
-- Check for missing required properties
for prop_name, prop_schema in pairs(schema_definition) do
if prop_schema.required and properties[prop_name] == nil then
table.insert(errors, string.format("Missing required property: %s", prop_name))
end
end
return #errors == 0, {errors = errors, warnings = warnings}
end
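-- Usage sketch (illustrative; property names and rules are examples only):
--
--   local schema_definition = {
--     status   = { type = "string", required = true, enum = { "draft", "review" } },
--     priority = { type = "number", integer = true, min_value = 1, max_value = 10 },
--   }
--   local ok, result = M.validate_document_properties(
--     { status = "draft", priority = 5, owner = "alex" }, schema_definition)
--   -- 'owner' is not in the schema, so result.warnings gains an entry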
-- Create validation schema
function M.create_schema(field_name, options)
options = options or {}
local base_schema = VALIDATION_RULES[options.type] or VALIDATION_RULES.string
local schema = vim.tbl_deep_extend("force", base_schema, options)
schema.field_name = field_name
return schema
end
-- Validate query parameters
function M.validate_query_params(params, allowed_params)
local errors = {}
for param_name, param_value in pairs(params) do
if not allowed_params[param_name] then
table.insert(errors, string.format("Unknown parameter: %s", param_name))
end
end
for param_name, param_schema in pairs(allowed_params) do
if params[param_name] == nil and param_schema.required then
table.insert(errors, string.format("Required parameter missing: %s", param_name))
elseif params[param_name] ~= nil then
local valid, error = M.validate_value(params[param_name], param_schema)
if not valid then
table.insert(errors, string.format("Parameter '%s': %s", param_name, error))
end
end
end
return #errors == 0, errors
end
-- Sanitize input
function M.sanitize_input(value, options)
options = options or {}
local max_length = options.max_length or 1000
if not value then
return nil
end
local sanitized = tostring(value)
-- Remove potentially dangerous characters
sanitized = sanitized:gsub("[<>\"'&]", "")
-- Trim whitespace (Lua strings have no built-in trim method)
sanitized = sanitized:match("^%s*(.-)%s*$") or sanitized
-- Limit length
if #sanitized > max_length then
sanitized = sanitized:sub(1, max_length)
end
return sanitized
end
-- Validate file path
function M.validate_file_path(path)
if not path or path == "" then
return false, "Empty file path"
end
-- Check for invalid characters
if path:match('[<>:"|?*]') then
return false, "Invalid characters in file path"
end
-- Check for directory traversal
if path:match("%.%.") then
return false, "Directory traversal not allowed"
end
return true, "File path is valid"
end
-- Return validation summary
function M.create_validation_summary(results)
local summary = {
total = #results,
valid = 0,
invalid = 0,
errors = {},
warnings = {}
}
for _, result in ipairs(results) do
if result.valid then
summary.valid = summary.valid + 1
else
summary.invalid = summary.invalid + 1
if result.errors then
for _, error in ipairs(result.errors) do
table.insert(summary.errors, error)
end
end
if result.warnings then
for _, warning in ipairs(result.warnings) do
table.insert(summary.warnings, warning)
end
end
end
end
return summary
end
return M

lua/notex/view.lua Normal file

rocks.toml Normal file

@@ -0,0 +1,20 @@
[project]
name = "notex.nvim"
version = "0.1.0"
description = "Relational document system for Neovim"
homepage = "https://github.com/username/notex.nvim"
license = "MIT"
[dependencies]
lsqlite3 = "^0.9"
lyaml = "^6.2"
[dev-dependencies]
busted = "^2.0"
nvim-test = "^1.0"
[scripts]
test = "busted"
test-watch = "busted --watch"
lint = "luacheck lua/notex"
format = "stylua lua/notex"


@@ -0,0 +1,172 @@
# Query API Contract
## Query Syntax
### Basic Query Structure
````
```notex-query
FROM <property_filters>
[WHERE <conditions>]
[ORDER BY <property> [ASC|DESC]]
[GROUP BY <property>]
[LIMIT <number>]
```
````
### Property Filters
```yaml
# Filter by property existence
status: # Returns documents with 'status' property
# Filter by property value
status: "draft" # Returns documents where status = "draft"
# Multiple property filters
status: "draft"
priority: "high"
tags: ["research", "urgent"]
```
### Conditions
```text
# Comparison operators
WHERE priority > 5
WHERE created_at >= "2024-01-01"
WHERE status != "archived"
# Logical operators
WHERE status = "active" AND priority > 3
WHERE status = "urgent" OR priority > 5
WHERE NOT status = "archived"
# String matching
WHERE title CONTAINS "important"
WHERE tags STARTS_WITH "project"
WHERE author ENDS_WITH "@company.com"
# Array operations
WHERE tags INCLUDES "urgent"
WHERE tags SIZE > 3
# Date operations
WHERE created_at BEFORE "2024-06-01"
WHERE updated_at AFTER "2024-01-01"
WHERE due_date WITHIN 7d
```
## Query Execution API
### Parse Query
**Input**: Raw query string
**Output**: Parsed query object
```lua
{
filters = {
status = "draft",
priority = "high"
},
conditions = {
type = "AND",
clauses = {
{ type = "comparison", field = "priority", operator = ">", value = 3 }
}
},
order_by = { field = "created_at", direction = "DESC" },
group_by = "status",
limit = 50
}
```
### Execute Query
**Input**: Parsed query object
**Output**: Query results
```lua
{
documents = {
{
id = "doc123",
file_path = "/path/to/document.md",
properties = {
status = "draft",
priority = "high",
created_at = "2024-03-15T10:30:00Z"
}
}
},
total_count = 1,
execution_time_ms = 45,
query_hash = "abc123def"
}
```
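Internally, the executor is expected to translate the parsed query object into parameterized SQL against the properties table. A minimal sketch of that translation (illustrative only; the real logic would live in `query/builder.lua`, and the operator whitelist is assumed to be enforced by the parser):

```lua
-- Build a WHERE fragment from parsed comparison clauses. Each clause becomes
-- an EXISTS subquery over the properties table; clauses are combined with the
-- parsed logical type ("AND"/"OR"). Operators are interpolated directly, so
-- they must come from the parser's whitelist; values are always bound.
local function build_where(conditions)
  local fragments, binds = {}, {}
  for _, clause in ipairs(conditions.clauses) do
    if clause.type == "comparison" then
      table.insert(fragments, string.format(
        "EXISTS (SELECT 1 FROM properties p " ..
        "WHERE p.document_id = d.id AND p.key = ? AND p.value %s ?)",
        clause.operator))
      table.insert(binds, clause.field)
      table.insert(binds, clause.value)
    end
  end
  return table.concat(fragments, " " .. conditions.type .. " "), binds
end
```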
## Virtual Buffer API
### Create Query View
**Input**: Query results
**Output**: Virtual buffer configuration
```lua
{
buffer_id = 5,
window_id = 1002,
lines = {
"Results for: status=draft, priority=high",
"",
"| Title | Status | Priority | Created |",
"|--------------|--------|----------|-------------|",
"| Research Doc | draft | high | 2024-03-15 |"
},
mappings = {
["<CR>"] = "open_document",
["e"] = "edit_document",
["q"] = "close_query_view"
},
syntax = "notex_query_results"
}
```
### Update Document Property
**Input**: Document ID, property name, new value
**Output**: Update result
```lua
{
success = true,
document_id = "doc123",
property = "status",
old_value = "draft",
new_value = "review",
updated_file = true
}
```
## Error Handling
### Query Parse Errors
```lua
{
error_type = "parse_error",
message = "Invalid syntax at line 3: column 5",
line = 3,
column = 5,
context = "WHERE priority >"
}
```
### Query Execution Errors
```lua
{
error_type = "execution_error",
message = "Property 'nonexistent' not found in any document",
property_name = "nonexistent",
suggested_properties = {"status", "priority", "type"}
}
```
### Document Update Errors
```lua
{
error_type = "update_error",
message = "File not found or not writable",
document_id = "doc123",
file_path = "/path/to/missing.md"
}
```


@@ -0,0 +1,108 @@
# Data Model: Relational Document System
## Core Entities
### Document
**Purpose**: Represents a markdown file with indexed properties
**Fields**:
- id (string): Unique identifier, typically file path hash
- file_path (string): Absolute path to markdown file
- content_hash (string): SHA256 hash of file content for change detection
- last_modified (integer): Unix timestamp of last file modification
- created_at (integer): Timestamp when document was first indexed
- updated_at (integer): Timestamp of last index update
### Property
**Purpose**: Individual key-value pairs extracted from YAML headers
**Fields**:
- id (string): Unique property identifier
- document_id (string): Foreign key to Document
- key (string): Property name from YAML header
- value (string): Serialized property value
- value_type (string): Data type (string, number, boolean, date, array)
- created_at (integer): Timestamp when property was created
- updated_at (integer): Timestamp of last property update
### Query
**Purpose**: Saved query definitions for reuse
**Fields**:
- id (string): Unique query identifier
- name (string): Human-readable query name
- definition (string): Query syntax definition
- created_at (integer): Query creation timestamp
- last_used (integer): Timestamp of last query execution
- use_count (integer): Number of times query has been executed
### Schema
**Purpose**: Metadata about property types and validation rules
**Fields**:
- property_key (string): Property name across documents
- detected_type (string): Most common data type for this property
- validation_rules (string): JSON-encoded validation rules
- document_count (integer): Number of documents containing this property
- created_at (integer): Timestamp when schema entry was created
## Relationships
### Document ↔ Property
- One-to-many: Each document has multiple properties
- Cascade delete: Properties are removed when document is deleted
### Document ↔ Query
- Many-to-many: Queries can reference multiple documents
- Junction table: QueryResults stores execution history
### Property ↔ Schema
- Many-to-one: Multiple properties with same key map to one schema entry
## Data Types
### Supported Property Types
- **string**: Text values (default type)
- **number**: Numeric values (integer or float)
- **boolean**: true/false values
- **date**: ISO 8601 date strings
- **array**: JSON-encoded arrays
- **object**: JSON-encoded objects (nested structures)
### Type Detection Logic
1. Parse YAML value using native YAML parser
2. Apply type detection rules:
- Strings matching ISO 8601 format → date
- Numeric strings without decimals → number (integer)
- Numeric strings with decimals → number (float)
- "true"/"false" (case insensitive) → boolean
- Arrays/objects → respective types
- Everything else → string
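The rules above can be sketched as a single dispatch function (illustrative; the real implementation would live in `utils/types.lua`):

```lua
local function detect_type(value)
  local t = type(value)
  if t == "boolean" then return "boolean" end
  if t == "number" then return "number" end
  if t == "table" then
    -- sequential integer keys suggest an array, otherwise an object
    return (#value > 0 or next(value) == nil) and "array" or "object"
  end
  local s = tostring(value)
  if s:match("^%d%d%d%d%-%d%d%-%d%d") then return "date" end -- ISO 8601 prefix
  if s:match("^%-?%d+$") then return "number" end            -- integer string
  if s:match("^%-?%d+%.%d+$") then return "number" end       -- float string
  local lower = s:lower()
  if lower == "true" or lower == "false" then return "boolean" end
  return "string"
end
```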
## Indexing Strategy
### Primary Indices
- documents.file_path (unique)
- properties.document_id (foreign key)
- properties.key (for property-based queries)
- queries.id (unique)
### Composite Indices
- properties(document_id, key) for fast document property lookup
- properties(key, value_type) for type-constrained queries
- queries(last_used) for recent query tracking
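Assuming lsqlite3, the tables and indices above might be created with DDL along these lines (a sketch; column sets mirror the entity definitions, and `PRAGMA foreign_keys` must be enabled for the cascade delete to take effect):

```lua
local sqlite3 = require('lsqlite3')

local db = sqlite3.open('notex.db')
db:exec('PRAGMA foreign_keys = ON;') -- required for ON DELETE CASCADE

db:exec([[
CREATE TABLE IF NOT EXISTS documents (
  id            TEXT PRIMARY KEY,
  file_path     TEXT NOT NULL UNIQUE,
  content_hash  TEXT NOT NULL,
  last_modified INTEGER,
  created_at    INTEGER,
  updated_at    INTEGER
);

CREATE TABLE IF NOT EXISTS properties (
  id          TEXT PRIMARY KEY,
  document_id TEXT NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
  key         TEXT NOT NULL,
  value       TEXT,
  value_type  TEXT,
  created_at  INTEGER,
  updated_at  INTEGER
);

CREATE INDEX IF NOT EXISTS idx_prop_doc      ON properties(document_id);
CREATE INDEX IF NOT EXISTS idx_prop_key      ON properties(key);
CREATE INDEX IF NOT EXISTS idx_prop_doc_key  ON properties(document_id, key);
CREATE INDEX IF NOT EXISTS idx_prop_key_type ON properties(key, value_type);
]])
```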
## Validation Rules
### Document Validation
- File must exist and be readable
- File must have valid YAML header (--- delimiters)
- YAML must parse without errors
- File must be UTF-8 encoded
### Property Validation
- Property keys must be non-empty strings
- Property values must be serializable
- Array/object values must be valid JSON
- Date values must match ISO 8601 format
### Query Validation
- Query syntax must be valid according to defined grammar
- Query must reference existing properties
- Query complexity must be within performance limits


@@ -0,0 +1,237 @@
# Implementation Plan: Relational Document System for Neovim
**Branch**: `002-notex-is-a` | **Date**: 2025-10-03 | **Spec**: /specs/002-notex-is-a/spec.md
**Input**: Feature specification from `/specs/002-notex-is-a/spec.md`
## Execution Flow (/plan command scope)
```
1. Load feature spec from Input path
→ If not found: ERROR "No feature spec at {path}"
2. Fill Technical Context (scan for NEEDS CLARIFICATION)
→ Detect Project Type from file system structure or context (web=frontend+backend, mobile=app+api)
→ Set Structure Decision based on project type
3. Fill the Constitution Check section based on the content of the constitution document.
4. Evaluate Constitution Check section below
→ If violations exist: Document in Complexity Tracking
→ If no justification possible: ERROR "Simplify approach first"
→ Update Progress Tracking: Initial Constitution Check
5. Execute Phase 0 → research.md
→ If NEEDS CLARIFICATION remain: ERROR "Resolve unknowns"
6. Execute Phase 1 → contracts, data-model.md, quickstart.md, agent-specific template file (e.g., `CLAUDE.md` for Claude Code, `.github/copilot-instructions.md` for GitHub Copilot, `GEMINI.md` for Gemini CLI, `QWEN.md` for Qwen Code, or `AGENTS.md` for all other agents).
7. Re-evaluate Constitution Check section
→ If new violations: Refactor design, return to Phase 1
→ Update Progress Tracking: Post-Design Constitution Check
8. Plan Phase 2 → Describe task generation approach (DO NOT create tasks.md)
9. STOP - Ready for /tasks command
```
**IMPORTANT**: The /plan command STOPS at step 9. Phases 2-4 are executed by other commands:
- Phase 2: /tasks command creates tasks.md
- Phase 3-4: Implementation execution (manual or via tools)
## Summary
A Neovim plugin that provides a relational document system similar to Notion, enabling users to query, filter, and view markdown documents based on YAML header properties through custom syntax and virtual buffers.
## Technical Context
**Language/Version**: Lua (Neovim compatible)
**Primary Dependencies**: SQLite (for performant indexing and querying)
**Storage**: SQLite database + markdown files with YAML headers
**Testing**: busted (Lua testing framework)
**Target Platform**: Neovim
**Project Type**: Single project (Neovim plugin)
**Performance Goals**: Query execution <100ms, indexing thousands of documents
**Constraints**: Non-blocking queries, minimal dependencies, Lua-only implementation
**Scale/Scope**: Support libraries of several thousand markdown documents
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
* **I. Clean Code**: Is the proposed code structure and design clean and maintainable?
* **II. Functional Style**: Does the design favor functional approaches for data transformation?
* **III. Descriptive Coding**: Is the naming of components and files descriptive and self-documenting?
* **IV. Test-First Development**: Are comprehensive tests planned before implementation?
* **V. Performance by Design**: Are performance considerations adequately addressed?
## Project Structure
### Documentation (this feature)
```
specs/[###-feature]/
├── plan.md # This file (/plan command output)
├── research.md # Phase 0 output (/plan command)
├── data-model.md # Phase 1 output (/plan command)
├── quickstart.md # Phase 1 output (/plan command)
├── contracts/ # Phase 1 output (/plan command)
└── tasks.md # Phase 2 output (/tasks command - NOT created by /plan)
```
### Source Code (repository root)
```
lua/
└── notex/
├── init.lua # Plugin entry point and setup
├── database/
│ ├── init.lua # Database connection and initialization
│ ├── schema.lua # Database schema management
│ └── migrations.lua # Database migration handling
├── parser/
│ ├── init.lua # Document parsing coordination
│ ├── yaml.lua # YAML header extraction and parsing
│ └── markdown.lua # Markdown content processing
├── query/
│ ├── init.lua # Query engine coordination
│ ├── parser.lua # Query syntax parsing
│ ├── executor.lua # Query execution logic
│ └── builder.lua # SQL query construction
├── ui/
│ ├── init.lua # UI coordination
│ ├── buffer.lua # Virtual buffer management
│ ├── view.lua # Query result visualization
│ └── editor.lua # Inline editing interface
├── index/
│ ├── init.lua # Document indexing coordination
│ ├── scanner.lua # File system scanning
│ └── updater.lua # Incremental index updates
└── utils/
├── init.lua # Utility functions
├── date.lua # Date parsing and formatting
├── types.lua # Type detection and conversion
└── validation.lua # Data validation helpers
tests/
├── unit/ # Unit tests for individual modules
│ ├── database/
│ ├── parser/
│ ├── query/
│ ├── ui/
│ ├── index/
│ └── utils/
├── integration/ # Integration tests for workflows
│ ├── test_query_workflow.lua
│ ├── test_document_indexing.lua
│ └── test_virtual_buffer.lua
└── contract/ # Contract tests from API definitions
├── test_query_api.lua
└── test_document_api.lua
```
**Structure Decision**: Single project structure optimized for Neovim plugin architecture with clear separation of concerns across domains (database, parsing, querying, UI, indexing).
## Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
```
For each unknown in Technical Context:
Task: "Research {unknown} for {feature context}"
For each technology choice:
Task: "Find best practices for {tech} in {domain}"
```
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
## Phase 1: Design & Contracts
*Prerequisites: research.md complete*
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
2. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Generate contract tests** from contracts:
- One test file per endpoint
- Assert request/response schemas
- Tests must fail (no implementation yet)
4. **Extract test scenarios** from user stories:
- Each story → integration test scenario
- Quickstart test = story validation steps
5. **Update agent file incrementally** (O(1) operation):
- Run `.specify/scripts/bash/update-agent-context.sh claude`
**IMPORTANT**: Execute it exactly as specified above. Do not add or remove any arguments.
- If exists: Add only NEW tech from current plan
- Preserve manual additions between markers
- Update recent changes (keep last 3)
- Keep under 150 lines for token efficiency
- Output to repository root
**Output**: data-model.md, /contracts/*, failing tests, quickstart.md, agent-specific file
## Phase 2: Task Planning Approach
*This section describes what the /tasks command will do - DO NOT execute during /plan*
**Task Generation Strategy**:
- Load `.specify/templates/tasks-template.md` as base
- Generate tasks from Phase 1 design docs (contracts, data model, quickstart)
- Query API contract → query parsing and execution test tasks [P]
- Database schema models → schema and migration tasks [P]
- Document parser contracts → YAML parsing and indexing tasks [P]
- UI contracts → virtual buffer and view tasks [P]
- Quickstart scenarios → integration test tasks
- Implementation tasks to make tests pass
**Ordering Strategy**:
- TDD order: Tests before implementation
- Dependency order: Database → Parser → Query → UI → Integration
- Mark [P] for parallel execution (independent files)
**Estimated Output**: 28-32 numbered, ordered tasks in tasks.md covering:
- Database setup and schema (4-5 tasks)
- Document parsing and indexing (6-7 tasks)
- Query parsing and execution (6-7 tasks)
- Virtual buffer UI (5-6 tasks)
- Integration and testing (4-5 tasks)
- Documentation and polish (3-4 tasks)
**IMPORTANT**: This phase is executed by the /tasks command, NOT by /plan
## Phase 3+: Future Implementation
*These phases are beyond the scope of the /plan command*
**Phase 3**: Task execution (/tasks command creates tasks.md)
**Phase 4**: Implementation (execute tasks.md following constitutional principles)
**Phase 5**: Validation (run tests, execute quickstart.md, performance validation)
## Complexity Tracking
*Fill ONLY if Constitution Check has violations that must be justified*
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
## Progress Tracking
*This checklist is updated during execution flow*
**Phase Status**:
- [x] Phase 0: Research complete (/plan command)
- [x] Phase 1: Design complete (/plan command)
- [x] Phase 2: Task planning complete (/plan command - describe approach only)
- [ ] Phase 3: Tasks generated (/tasks command)
- [ ] Phase 4: Implementation complete
- [ ] Phase 5: Validation passed
**Gate Status**:
- [x] Initial Constitution Check: PASS
- [x] Post-Design Constitution Check: PASS
- [x] All NEEDS CLARIFICATION resolved
- [x] Complexity deviations documented
---
*Based on Constitution v1.0.0 - See `/memory/constitution.md`*


@@ -0,0 +1,192 @@
# Quickstart Guide: Notex Relational Document System
## Installation
1. Install the plugin using your preferred Neovim plugin manager
2. Ensure SQLite is available on your system
3. Restart Neovim to load the plugin
## Initial Setup
### 1. Index Your Documents
```vim
"Notex will automatically scan for markdown files in your workspace
"You can also manually trigger indexing:
:NotexIndex /path/to/your/documents
```
### 2. Create Your First Query
Open a markdown file and add a query block:
````markdown
# My Project Dashboard

```notex-query
status: "active"
priority: "high"
ORDER BY created_at DESC
```
````
### 3. View Query Results
- Hover over the query block with your cursor
- Press `<leader>q` to open the query in a virtual buffer
- The results will show all matching documents in a table format
## Basic Usage
### Querying Documents
#### Simple Property Filter
````markdown
```notex-query
status: "draft"
```
````
#### Multiple Properties
````markdown
```notex-query
status: "active"
priority: "high"
tags: ["urgent", "review"]
```
````
#### Advanced Filtering
````markdown
```notex-query
FROM priority: "high"
WHERE created_at >= "2024-01-01"
ORDER BY due_date ASC
LIMIT 10
```
````
### Working with Query Results
#### Open Documents
- Move cursor to any row in the results table
- Press `<Enter>` to open the document in a new split
- Press `o` to open in a new tab
#### Edit Properties
- Press `e` on any result row to enter edit mode
- Change property values directly in the table
- Press `<Enter>` to save changes to the source document
#### Save Queries
- Press `s` to save the current query for reuse
- Give your query a name like "My Active Tasks"
- Access saved queries with `:NotexQueries`
## Document Properties
### YAML Header Format
```yaml
---
title: "Project Documentation"
status: "active"
priority: "high"
tags: ["documentation", "project"]
created_at: "2024-03-15"
due_date: "2024-04-01"
assignee: "john@example.com"
progress: 75
---
```
### Supported Property Types
- **Text**: "draft", "documentation", "john@example.com"
- **Numbers**: 75, 1.5, -10
- **Booleans**: true, false
- **Dates**: "2024-03-15", "2024-03-15T10:30:00Z"
- **Arrays**: ["tag1", "tag2"], [1, 2, 3]
## Advanced Features
### Query Conditions
````markdown
```notex-query
WHERE priority > 5 AND status != "archived"
WHERE tags INCLUDES "urgent"
WHERE created_at BEFORE "2024-01-01"
WHERE title CONTAINS "important"
```
````
### Grouping and Aggregation
````markdown
```notex-query
GROUP BY status
ORDER BY priority DESC
```
````
### Custom Query Shortcuts
Add to your Neovim config:
```lua
vim.keymap.set('n', '<leader>qt', ':NotexQuery<CR>')
vim.keymap.set('n', '<leader>qr', ':NotexRefresh<CR>')
vim.keymap.set('n', '<leader>qs', ':NotexSaveQuery<CR>')
```
## Testing Your Setup
### 1. Create Test Documents
Create two markdown files with YAML headers:
`doc1.md`:
```yaml
---
title: "First Document"
status: "draft"
priority: "high"
tags: ["test"]
---
```
`doc2.md`:
```yaml
---
title: "Second Document"
status: "review"
priority: "medium"
tags: ["test", "review"]
---
```
### 2. Create Test Query
````markdown
```notex-query
tags: "test"
```
````
### 3. Verify Results
The query should return both documents in a table format.
## Troubleshooting
### Query Not Working
- Check that YAML headers are properly formatted with `---` delimiters
- Ensure property names in queries match exactly with YAML keys
- Verify that documents have been indexed (`:NotexStatus`)
### Performance Issues
- Use more specific filters to reduce result sets
- Consider adding `LIMIT` to large queries
- Check `:NotexStatus` for indexing performance metrics
### File Not Found Errors
- Ensure file paths in your workspace are accessible
- Check file permissions for read/write access
- Run `:NotexReindex` to refresh the document database
## Next Steps
- Explore the query syntax documentation for advanced filtering
- Set up custom query templates for common workflows
- Integrate with your existing Neovim workflow and plugins
- Share queries with your team for consistent document management


@@ -0,0 +1,43 @@
# Research: Relational Document System for Neovim
## SQLite Integration
**Decision**: Use lsqlite3 for SQLite database operations
**Rationale**: lsqlite3 is the most mature and widely used SQLite binding for Lua, with excellent Neovim compatibility and minimal dependencies
**Alternatives considered**:
- lua-sqlite3 (less mature, fewer updates)
- Custom file-based indexing (limited query capabilities)
## YAML Parsing
**Decision**: Use lyaml for YAML header parsing
**Rationale**: lyaml provides robust YAML parsing with proper error handling for malformed headers, essential for document reliability
**Alternatives considered**:
- Custom regex parsing (brittle, fails with complex YAML)
- yaml.lua (less maintained than lyaml)
## Virtual Buffer Management
**Decision**: Use Neovim's native nvim_buf_ API
**Rationale**: Native API provides the most seamless integration with Neovim's buffer system, ensuring compatibility with existing plugins and user workflows
**Alternatives considered**:
- Floating windows (less persistent, not ideal for editing)
- External terminal buffers (breaks Neovim integration)
## Query Syntax Design
**Decision**: Custom block syntax similar to code blocks
**Rationale**: Block syntax is familiar to markdown users and provides clear delimiters for query parsing while maintaining readability
**Alternatives considered**:
- Inline syntax (cluttered, hard to parse)
- Special comments (not intuitive for query visualization)
## Performance Optimization
**Decision**: Hybrid indexing approach (SQLite + in-memory cache)
**Rationale**: SQLite provides persistent storage and complex query capabilities, while in-memory caching ensures sub-100ms response times for frequently accessed data
**Alternatives considered**:
- Pure file-based indexing (slow for large document sets)
- Pure in-memory (data loss on restart)
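A minimal sketch of the cache layer implied by this decision (illustrative; invalidation is keyed to an index version bumped on every re-index):

```lua
local cache = {} -- query_hash -> { results, index_version }

local function cached_execute(query_hash, index_version, execute_fn)
  local entry = cache[query_hash]
  if entry and entry.index_version == index_version then
    return entry.results -- served from memory, no SQLite round trip
  end
  local results = execute_fn()
  cache[query_hash] = { results = results, index_version = index_version }
  return results
end
```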
## Testing Framework
**Decision**: busted with nvim-test plugin
**Rationale**: busted provides comprehensive testing capabilities with good async support, while nvim-test enables Neovim-specific testing scenarios
**Alternatives considered**:
- luaunit (limited Neovim integration)
- Custom testing harness (maintenance overhead)


@@ -0,0 +1,123 @@
# Feature Specification: Relational Document System for Neovim
**Feature Branch**: `002-notex-is-a`
**Created**: 2025-10-03
**Status**: Draft
**Input**: User description: "Notex is a relational database plugin that mimics notion databases. This should define a custom query syntax. When mousing over the custom syntax, this should open virtual buffer that shows an editable view into the database. This database should summarize properties of markdown documents that are defined in a yaml header. This project should leverage sqlite database to improve performance"
## Execution Flow (main)
```
1. Parse user description from Input
→ If empty: ERROR "No feature description provided"
2. Extract key concepts from description
→ Identify: actors, actions, data, constraints
3. For each unclear aspect:
→ Mark with [NEEDS CLARIFICATION: specific question]
4. Fill User Scenarios & Testing section
→ If no clear user flow: ERROR "Cannot determine user scenarios"
5. Generate Functional Requirements
→ Each requirement must be testable
→ Mark ambiguous requirements
6. Identify Key Entities (if data involved)
7. Run Review Checklist
→ If any [NEEDS CLARIFICATION]: WARN "Spec has uncertainties"
→ If implementation details found: ERROR "Remove tech details"
8. Return: SUCCESS (spec ready for planning)
```
---
## ⚡ Quick Guidelines
- ✅ Focus on WHAT users need and WHY
- ❌ Avoid HOW to implement (no tech stack, APIs, code structure)
- 👥 Written for business stakeholders, not developers
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Mark all ambiguities**: Use [NEEDS CLARIFICATION: specific question] for any assumption you'd need to make
2. **Don't guess**: If the prompt doesn't specify something (e.g., "login system" without auth method), mark it
3. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
4. **Common underspecified areas**:
- User types and permissions
- Data retention/deletion policies
- Performance targets and scale
- Error handling behaviors
- Integration requirements
- Security/compliance needs
---
## User Scenarios & Testing *(mandatory)*
### Primary User Story
As a Neovim user, I want to work with my markdown documents as if they were entries in a relational database similar to Notion, so that I can query, filter, and view my documents based on their properties without leaving my editor.
### Acceptance Scenarios
1. **Given** I have markdown documents with YAML headers, **When** I write a custom query syntax in my buffer, **Then** hovering over it should display a virtual buffer with the query results
2. **Given** I'm viewing query results in a virtual buffer, **When** I modify the data, **Then** the changes should be reflected in the underlying markdown documents
3. **Given** I have multiple markdown documents, **When** I execute a query, **Then** the system should return results based on the YAML header properties
4. **Given** I define a custom query, **When** I save the query, **Then** I should be able to reuse it later
### Edge Cases
- What happens when markdown documents have malformed or missing YAML headers?
- How does the system handle queries that return no results?
- What happens when multiple documents have conflicting property definitions?
- How does the system handle very large document collections?
- What happens when the virtual buffer is edited while the underlying files are changed externally?
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: System MUST parse YAML headers from markdown documents to extract document properties
- **FR-002**: System MUST provide a custom query syntax for filtering and organizing documents
- **FR-003**: Users MUST be able to hover over query syntax to see results in a virtual buffer
- **FR-004**: Query results MUST be editable and changes must propagate back to source documents
- **FR-005**: System MUST maintain a performant index of document properties for fast querying
- **FR-006**: Users MUST be able to save and reuse common queries
- **FR-007**: System MUST support filtering documents by any property defined in YAML headers
- **FR-008**: Virtual buffer views MUST support sorting and grouping of query results
### Key Entities
- **Document**: Represents a markdown file with YAML header properties and content
- **Query**: Defines filtering criteria and display options for documents
- **Property**: Individual key-value pairs extracted from YAML headers
- **View**: Virtual buffer representation of query results
- **Schema**: Defines the structure and types of properties across documents
---
## Review & Acceptance Checklist
*GATE: Automated checks run during main() execution*
### Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
### Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
---
## Execution Status
*Updated by main() during processing*
- [x] User description parsed
- [x] Key concepts extracted
- [x] Ambiguities marked
- [x] User scenarios defined
- [x] Requirements generated
- [x] Entities identified
- [x] Review checklist passed
---

@@ -0,0 +1,205 @@
# Tasks: Relational Document System for Neovim
**Input**: Design documents from `/specs/002-notex-is-a/`
**Prerequisites**: plan.md (required), research.md, data-model.md, contracts/, quickstart.md
## Execution Flow (main)
```
1. Load plan.md from feature directory
→ If not found: ERROR "No implementation plan found"
→ Extract: tech stack, libraries, structure
2. Load optional design documents:
→ data-model.md: Extract entities → model tasks
→ contracts/: Each file → contract test task
→ research.md: Extract decisions → setup tasks
3. Generate tasks by category:
→ Setup: project init, dependencies, linting
→ Tests: contract tests, integration tests
→ Core: models, services, CLI commands
→ Integration: DB, middleware, logging
→ Polish: unit tests, performance, docs
4. Apply task rules:
→ Different files = mark [P] for parallel
→ Same file = sequential (no [P])
→ Tests before implementation (TDD)
5. Number tasks sequentially (T001, T002...)
6. Generate dependency graph
7. Create parallel execution examples
8. Validate task completeness:
→ All contracts have tests?
→ All entities have models?
→ All endpoints implemented?
9. Return: SUCCESS (tasks ready for execution)
```
## Format: `[ID] [P?] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- Include exact file paths in descriptions
## Path Conventions
- **Neovim plugin**: `lua/notex/`, `tests/` at repository root
- Paths shown below follow the project structure from plan.md
## Phase 3.1: Setup
- [x] T001 Create project structure per implementation plan (lua/notex/, tests/)
- [x] T002 Initialize Lua project with lsqlite3, lyaml, busted dependencies
- [x] T003 [P] Configure luacheck and stylua for Lua code formatting
## Phase 3.2: Tests First (TDD) ⚠️ MUST COMPLETE BEFORE 3.3
**CRITICAL: These tests MUST be written and MUST FAIL before ANY implementation**
- [x] T004 [P] Contract test query parsing API in tests/contract/test_query_api.lua
- [x] T005 Contract test query execution API in tests/contract/test_query_api.lua
- [x] T006 Contract test virtual buffer API in tests/contract/test_query_api.lua
- [x] T007 [P] Integration test document indexing workflow in tests/integration/test_document_indexing.lua
- [x] T008 [P] Integration test query workflow in tests/integration/test_query_workflow.lua
- [x] T009 [P] Integration test virtual buffer workflow in tests/integration/test_virtual_buffer.lua
## Phase 3.3: Core Implementation - Database Layer (ONLY after tests are failing)
- [x] T010 [P] Database connection module in lua/notex/database/init.lua
- [x] T011 [P] Database schema module in lua/notex/database/schema.lua
- [x] T012 [P] Database migrations module in lua/notex/database/migrations.lua
- [x] T013 Document model implementation in lua/notex/database/schema.lua
- [x] T014 Property model implementation in lua/notex/database/schema.lua
- [x] T015 Query model implementation in lua/notex/database/schema.lua
- [x] T016 Schema model implementation in lua/notex/database/schema.lua
## Phase 3.4: Core Implementation - Parser Layer
- [x] T017 [P] YAML header parser in lua/notex/parser/yaml.lua
- [x] T018 [P] Markdown content parser in lua/notex/parser/markdown.lua
- [x] T019 Parser coordination module in lua/notex/parser/init.lua
- [x] T020 Document indexing coordination in lua/notex/index/init.lua
- [x] T021 [P] File system scanner in lua/notex/index/scanner.lua
- [x] T022 [P] Incremental index updater in lua/notex/index/updater.lua
## Phase 3.5: Core Implementation - Query Layer
- [x] T023 [P] Query syntax parser in lua/notex/query/parser.lua
- [x] T024 [P] SQL query builder in lua/notex/query/builder.lua
- [x] T025 [P] Query execution engine in lua/notex/query/executor.lua
- [x] T026 Query coordination module in lua/notex/query/init.lua
## Phase 3.6: Core Implementation - UI Layer
- [x] T027 [P] Virtual buffer manager in lua/notex/ui/buffer.lua
- [x] T028 [P] Query result visualization in lua/notex/ui/view.lua
- [x] T029 [P] Inline editing interface in lua/notex/ui/editor.lua
- [x] T030 UI coordination module in lua/notex/ui/init.lua
## Phase 3.7: Core Implementation - Plugin Integration
- [ ] T031 [P] Utility functions in lua/notex/utils/init.lua
- [ ] T032 [P] Date parsing and formatting in lua/notex/utils/date.lua
- [ ] T033 [P] Type detection and conversion in lua/notex/utils/types.lua
- [ ] T034 [P] Data validation helpers in lua/notex/utils/validation.lua
- [ ] T035 Plugin entry point and setup in lua/notex/init.lua
## Phase 3.8: Integration and Error Handling
- [ ] T036 Connect database layer to parser layer
- [ ] T037 Connect parser layer to query layer
- [ ] T038 Connect query layer to UI layer
- [ ] T039 Error handling and logging integration
- [ ] T040 Performance optimization and caching
## Phase 3.9: Polish and Documentation
- [ ] T041 [P] Unit tests for utility functions in tests/unit/utils/
- [ ] T042 [P] Unit tests for database operations in tests/unit/database/
- [ ] T043 [P] Unit tests for parser functions in tests/unit/parser/
- [ ] T044 [P] Unit tests for query operations in tests/unit/query/
- [ ] T045 [P] Unit tests for UI components in tests/unit/ui/
- [ ] T046 Performance tests (<100ms query execution)
- [ ] T047 [P] Update README.md with installation and usage
- [ ] T048 [P] Create comprehensive API documentation
- [ ] T049 Remove code duplication and optimize
- [ ] T050 Run quickstart.md manual testing scenarios
## Dependencies
- Setup (T001-T003) before everything
- Tests (T004-T009) before all implementation (T010-T050)
- Database layer (T010-T016) blocks parser layer (T017-T022)
- Parser layer (T017-T022) blocks query layer (T023-T026)
- Query layer (T023-T026) blocks UI layer (T027-T030)
- UI layer (T027-T030) blocks plugin integration (T031-T035)
- Integration (T036-T040) before polish (T041-T050)
## Parallel Execution Examples
### Phase 3.2 - Contract Tests (Parallel)
```
# Launch T004 and T007-T009 together (T005-T006 share test_query_api.lua and run sequentially):
Task: "Contract test query parsing API in tests/contract/test_query_api.lua"
Task: "Integration test document indexing workflow in tests/integration/test_document_indexing.lua"
Task: "Integration test query workflow in tests/integration/test_query_workflow.lua"
Task: "Integration test virtual buffer workflow in tests/integration/test_virtual_buffer.lua"
```
### Phase 3.3 - Database Models (Parallel)
```
# Launch T010-T012 together (T013-T016 share schema.lua and run sequentially):
Task: "Database connection module in lua/notex/database/init.lua"
Task: "Database schema module in lua/notex/database/schema.lua"
Task: "Database migrations module in lua/notex/database/migrations.lua"
```
### Phase 3.4 - Parser Components (Parallel)
```
# Launch T017, T018, T021, T022 together (T019-T020 are coordination modules and run after):
Task: "YAML header parser in lua/notex/parser/yaml.lua"
Task: "Markdown content parser in lua/notex/parser/markdown.lua"
Task: "File system scanner in lua/notex/index/scanner.lua"
Task: "Incremental index updater in lua/notex/index/updater.lua"
```
### Phase 3.9 - Unit Tests (Parallel)
```
# Launch T041-T045 together:
Task: "Unit tests for utility functions in tests/unit/utils/"
Task: "Unit tests for database operations in tests/unit/database/"
Task: "Unit tests for parser functions in tests/unit/parser/"
Task: "Unit tests for query operations in tests/unit/query/"
Task: "Unit tests for UI components in tests/unit/ui/"
```
## Notes
- [P] tasks = different files, no dependencies
- Verify tests fail before implementing
- Commit after each task
- Avoid: vague tasks, same file conflicts
- Follow constitutional principles: Clean Code, Functional Style, Descriptive Coding, Test-First Development, Performance by Design
## Task Generation Rules Applied
1. **From Contracts**:
- query-api.md → contract tests T004-T006 [P]
- API endpoints → implementation tasks T023-T030
2. **From Data Model**:
- Document entity → model task T013 [P]
- Property entity → model task T014 [P]
- Query entity → model task T015 [P]
- Schema entity → model task T016 [P]
3. **From Research Decisions**:
- lsqlite3 dependency → setup task T002
- lyaml dependency → parser task T017
- busted framework → test tasks T004-T009
4. **From User Stories**:
- Document indexing → integration test T007 [P]
- Query execution → integration test T008 [P]
- Virtual buffer interaction → integration test T009 [P]
- Quickstart scenarios → validation task T050
5. **Ordering Strategy**:
- Setup → Tests → Database → Parser → Query → UI → Integration → Polish
- Dependencies block parallel execution
## Validation Checklist
- [x] All contracts have corresponding tests
- [x] All entities have model tasks
- [x] All tests come before implementation
- [x] Parallel tasks truly independent
- [x] Each task specifies exact file path
- [x] No task modifies same file as another [P] task

stylua.toml Normal file
@@ -0,0 +1,4 @@
# stylua.toml
column_width = 80
indent_type = "Spaces"
indent_width = 2

@@ -0,0 +1,117 @@
-- Contract tests for Query API
-- busted provides describe/it/assert as globals when run via the busted CLI
describe("Query API Contract Tests", function()
local query_parser, query_executor, buffer_manager
before_each(function()
-- These modules don't exist yet - tests should fail
query_parser = require('notex.query.parser')
query_executor = require('notex.query.executor')
buffer_manager = require('notex.ui.buffer')
end)
describe("Query Parsing API", function()
it("should parse basic property filters", function()
local query_string = [[
```notex-query
status: "draft"
priority: "high"
```
]]
local result = query_parser.parse(query_string)
assert.are.equal("draft", result.filters.status)
assert.are.equal("high", result.filters.priority)
assert.is_not_nil(result.order_by)
end)
it("should parse WHERE conditions", function()
local query_string = [[
```notex-query
FROM status: "active"
WHERE priority > 5 AND created_at >= "2024-01-01"
```
]]
local result = query_parser.parse(query_string)
assert.is_not_nil(result.conditions)
assert.are.equal("AND", result.conditions.type)
assert.are.equal(2, #result.conditions.clauses)
end)
it("should handle parse errors gracefully", function()
local invalid_query = [[
```notex-query
WHERE INVALID SYNTAX HERE
```
]]
assert.has_error(function()
query_parser.parse(invalid_query)
end)
end)
end)
describe("Query Execution API", function()
it("should execute parsed queries and return results", function()
local parsed_query = {
filters = { status = "draft" },
conditions = nil,
order_by = { field = "created_at", direction = "DESC" },
limit = 50
}
local result = query_executor.execute(parsed_query)
assert.is_not_nil(result.documents)
assert.is_number(result.total_count)
assert.is_number(result.execution_time_ms)
assert.is_string(result.query_hash)
end)
it("should handle execution errors gracefully", function()
local invalid_query = {
filters = { nonexistent_property = "value" }
}
local result = query_executor.execute(invalid_query)
assert.is_not_nil(result.error)
assert.are.equal("execution_error", result.error.error_type)
end)
end)
describe("Virtual Buffer API", function()
it("should create query view buffers", function()
local query_results = {
documents = {
{
id = "doc123",
file_path = "/path/to/document.md",
properties = {
status = "draft",
priority = "high"
}
}
},
total_count = 1
}
local buffer_config = buffer_manager.create_query_view(query_results)
assert.is_not_nil(buffer_config.buffer_id)
assert.is_not_nil(buffer_config.window_id)
assert.is_table(buffer_config.lines)
assert.is_table(buffer_config.mappings)
end)
it("should update document properties", function()
local update_result = buffer_manager.update_document_property("doc123", "status", "review")
assert.is_true(update_result.success)
assert.are.equal("doc123", update_result.document_id)
assert.are.equal("status", update_result.property)
assert.are.equal("review", update_result.new_value)
end)
end)
end)

@@ -0,0 +1,129 @@
-- Integration tests for document indexing workflow
-- busted provides describe/it/assert as globals when run via the busted CLI
describe("Document Indexing Workflow Integration", function()
local indexer
before_each(function()
-- These modules don't exist yet - tests should fail
indexer = require('notex.index')
end)
it("should index markdown files with YAML headers", function()
local test_files = {
"/tmp/test_doc1.md",
"/tmp/test_doc2.md"
}
-- Create test markdown files
local file1 = io.open(test_files[1], "w")
file1:write([[
---
title: "Test Document 1"
status: "draft"
priority: "high"
tags: ["test", "urgent"]
---
# Test Document 1
This is a test document with YAML header.
]])
file1:close()
local file2 = io.open(test_files[2], "w")
file2:write([[
---
title: "Test Document 2"
status: "review"
priority: "medium"
tags: ["test", "review"]
---
# Test Document 2
Another test document.
]])
file2:close()
-- Index the documents
local result = indexer.index_documents("/tmp")
assert.is_true(result.success)
assert.are.equal(2, result.indexed_count)
-- Clean up
os.remove(test_files[1])
os.remove(test_files[2])
end)
it("should handle documents with malformed YAML headers", function()
local malformed_file = "/tmp/malformed.md"
local file = io.open(malformed_file, "w")
file:write([[
---
title: "Malformed Document"
status: "draft"
invalid_yaml: [unclosed array
---
# Malformed Document
This has bad YAML.
]])
file:close()
local result = indexer.index_documents("/tmp")
assert.is_true(result.success)
assert.are.equal(0, result.indexed_count)
assert.is_not_nil(result.errors)
assert.are.equal(1, #result.errors)
-- Clean up
os.remove(malformed_file)
end)
it("should incrementally update index when files change", function()
local test_file = "/tmp/incremental_test.md"
-- Create initial document
local file = io.open(test_file, "w")
file:write([[
---
title: "Incremental Test"
status: "draft"
---
# Incremental Test
Initial content.
]])
file:close()
-- Initial indexing
local result1 = indexer.index_documents("/tmp")
assert.is_true(result1.success)
assert.are.equal(1, result1.indexed_count)
-- Modify the file
vim.wait(100) -- Ensure different timestamp
local file2 = io.open(test_file, "w")
file2:write([[
---
title: "Incremental Test"
status: "review"
priority: "high"
---
# Incremental Test
Modified content.
]])
file2:close()
-- Incremental update
local result2 = indexer.update_index("/tmp")
assert.is_true(result2.success)
assert.are.equal(1, result2.updated_count)
-- Clean up
os.remove(test_file)
end)
end)

@@ -0,0 +1,143 @@
-- Integration tests for query workflow
-- busted provides describe/it/assert as globals when run via the busted CLI
describe("Query Workflow Integration", function()
local query_engine
before_each(function()
-- These modules don't exist yet - tests should fail
query_engine = require('notex.query')
end)
it("should execute end-to-end query workflow", function()
-- Setup test data
local test_documents = {
{
id = "doc1",
file_path = "/tmp/doc1.md",
properties = {
title = "Project Plan",
status = "draft",
priority = "high",
created_at = "2024-03-15T10:30:00Z"
}
},
{
id = "doc2",
file_path = "/tmp/doc2.md",
properties = {
title = "Meeting Notes",
status = "review",
priority = "medium",
created_at = "2024-03-14T15:20:00Z"
}
}
}
-- Initialize query engine with test data
query_engine.initialize(test_documents)
-- Execute query
local query_string = [[
```notex-query
status: "draft"
priority: "high"
ORDER BY created_at DESC
```
]]
local result = query_engine.execute_query(query_string)
-- Validate results
assert.is_not_nil(result.documents)
assert.are.equal(1, #result.documents)
assert.are.equal("Project Plan", result.documents[1].properties.title)
assert.are.equal("draft", result.documents[1].properties.status)
assert.is_true(result.execution_time_ms < 100) -- Performance requirement
end)
it("should handle complex queries with conditions", function()
local test_documents = {
{
id = "doc1",
properties = {
title = "Important Task",
status = "active",
priority = 5,
created_at = "2024-01-15T10:00:00Z",
tags = {"urgent", "project"}
}
},
{
id = "doc2",
properties = {
title = "Regular Task",
status = "active",
priority = 2,
created_at = "2024-02-01T14:30:00Z",
tags = {"routine"}
}
}
}
query_engine.initialize(test_documents)
local complex_query = [[
```notex-query
FROM status: "active"
WHERE priority > 3 AND tags INCLUDES "urgent"
ORDER BY created_at DESC
```
]]
local result = query_engine.execute_query(complex_query)
assert.are.equal(1, #result.documents)
assert.are.equal("Important Task", result.documents[1].properties.title)
end)
it("should handle queries that return no results", function()
local test_documents = {
{
id = "doc1",
properties = {
title = "Document 1",
status = "archived"
}
}
}
query_engine.initialize(test_documents)
local query = [[
```notex-query
status: "active"
```
]]
local result = query_engine.execute_query(query)
assert.are.equal(0, #result.documents)
assert.are.equal(0, result.total_count)
end)
it("should save and reuse queries", function()
local query_name = "My Active Tasks"
local query_definition = [[
```notex-query
status: "active"
priority: "high"
```
]]
-- Save query
local save_result = query_engine.save_query(query_name, query_definition)
assert.is_true(save_result.success)
-- List saved queries
local queries = query_engine.list_saved_queries()
assert.is_not_nil(queries[query_name])
-- Execute saved query
local result = query_engine.execute_saved_query(query_name)
assert.is_not_nil(result)
assert.is_number(result.execution_time_ms)
end)
end)

@@ -0,0 +1,174 @@
-- Integration tests for virtual buffer workflow
-- busted provides describe/it/assert as globals when run via the busted CLI
describe("Virtual Buffer Workflow Integration", function()
local ui_manager
before_each(function()
-- These modules don't exist yet - tests should fail
ui_manager = require('notex.ui')
end)
it("should create virtual buffer for query results", function()
local query_results = {
documents = {
{
id = "doc1",
file_path = "/tmp/document.md",
properties = {
title = "Test Document",
status = "draft",
priority = "high",
created_at = "2024-03-15T10:30:00Z"
}
}
},
total_count = 1,
execution_time_ms = 25
}
local buffer_result = ui_manager.show_query_results(query_results)
assert.is_not_nil(buffer_result.buffer_id)
assert.is_not_nil(buffer_result.window_id)
assert.is_table(buffer_result.lines)
assert.is_table(buffer_result.mappings)
-- Verify buffer content
local lines = buffer_result.lines
assert.is_true(#lines > 0)
-- Check for header line
local found_header = false
for _, line in ipairs(lines) do
if line:find("Results for:") then
found_header = true
break
end
end
assert.is_true(found_header, "Buffer should contain query results header")
end)
it("should handle keyboard interactions in virtual buffer", function()
local query_results = {
documents = {
{
id = "doc1",
file_path = "/tmp/document.md",
properties = {
title = "Test Document",
status = "draft"
}
}
},
total_count = 1
}
local buffer_result = ui_manager.show_query_results(query_results)
-- Test opening document with Enter key
local open_result = ui_manager.handle_keypress(buffer_result.buffer_id, "<CR>", 3)
assert.is_true(open_result.success)
assert.are.equal("/tmp/document.md", open_result.file_path)
-- Test editing with 'e' key
local edit_result = ui_manager.handle_keypress(buffer_result.buffer_id, "e", 3)
assert.is_true(edit_result.success)
assert.are.equal("doc1", edit_result.document_id)
-- Test closing with 'q' key
local close_result = ui_manager.handle_keypress(buffer_result.buffer_id, "q", 1)
assert.is_true(close_result.success)
end)
it("should update document properties through virtual buffer", function()
-- Create initial buffer
local query_results = {
documents = {
{
id = "doc1",
file_path = "/tmp/document.md",
properties = {
title = "Test Document",
status = "draft",
priority = "high"
}
}
},
total_count = 1
}
local buffer_result = ui_manager.show_query_results(query_results)
-- Simulate user editing status field
local update_result = ui_manager.update_property_in_buffer(
buffer_result.buffer_id,
3, -- line number
2, -- column number
"review" -- new value
)
assert.is_true(update_result.success)
assert.are.equal("doc1", update_result.document_id)
assert.are.equal("status", update_result.property)
assert.are.equal("review", update_result.new_value)
assert.are.equal("draft", update_result.old_value)
-- Verify the underlying file was updated
local updated_content = ui_manager.get_document_content("doc1")
assert.is_not_nil(updated_content:find('status: "review"'))
end)
it("should handle large query result sets efficiently", function()
-- Generate large test dataset
local large_results = { documents = {}, total_count = 1000, execution_time_ms = 45 }
for i = 1, 1000 do
table.insert(large_results.documents, {
id = "doc" .. i,
file_path = "/tmp/doc" .. i .. ".md",
properties = {
title = "Document " .. i,
status = i % 2 == 0 and "active" or "draft",
priority = math.random(1, 5)
}
})
end
local start_time = vim.loop.hrtime()
local buffer_result = ui_manager.show_query_results(large_results)
local end_time = vim.loop.hrtime()
local buffer_creation_time = (end_time - start_time) / 1e6 -- Convert to milliseconds
assert.is_not_nil(buffer_result.buffer_id)
assert.is_true(buffer_creation_time < 100, "Buffer creation should be under 100ms")
assert.is_true(#buffer_result.lines <= 100, "Should limit lines for performance")
end)
it("should gracefully handle buffer cleanup", function()
local query_results = {
documents = {
{
id = "doc1",
properties = { title = "Test Document" }
}
},
total_count = 1
}
local buffer_result = ui_manager.show_query_results(query_results)
-- Verify buffer exists
local buffer_exists = vim.api.nvim_buf_is_valid(buffer_result.buffer_id)
assert.is_true(buffer_exists)
-- Close buffer
local close_result = ui_manager.close_query_view(buffer_result.buffer_id)
assert.is_true(close_result.success)
-- Verify buffer no longer exists
buffer_exists = vim.api.nvim_buf_is_valid(buffer_result.buffer_id)
assert.is_false(buffer_exists)
end)
end)

@@ -0,0 +1,180 @@
-- Performance tests for query execution
local query_engine = require('notex.query')
describe("query performance", function()
local setup_test_data = function()
-- This would normally be handled by the test setup
-- For now, we'll assume test data exists
return true
end
before_each(function()
setup_test_data()
end)
describe("query execution time", function()
it("should execute simple queries under 100ms", function()
local start_time = vim.loop.hrtime()
local result = query_engine.execute_query('FROM documents LIMIT 10')
local end_time = vim.loop.hrtime()
local execution_time_ms = (end_time - start_time) / 1000000
assert.is_true(result.success, "Query should execute successfully")
assert.is_true(execution_time_ms < 100,
string.format("Query took %.2fms, expected < 100ms", execution_time_ms))
end)
it("should execute complex queries under 5 seconds", function()
local start_time = vim.loop.hrtime()
local result = query_engine.execute_query([[
FROM documents
WHERE status = "published"
ORDER BY created_at DESC
LIMIT 100
]])
local end_time = vim.loop.hrtime()
local execution_time_ms = (end_time - start_time) / 1000000
assert.is_true(result.success, "Query should execute successfully")
assert.is_true(execution_time_ms < 5000,
string.format("Complex query took %.2fms, expected < 5000ms", execution_time_ms))
end)
end)
describe("caching performance", function()
it("should improve performance with cached queries", function()
local cache = require('notex.utils.cache')
cache.init({lru = {enabled = true, max_size = 100}})
local query = 'FROM documents WHERE tags LIKE "test" ORDER BY updated_at DESC LIMIT 20'
-- First execution (cold cache)
local start_time = vim.loop.hrtime()
local result1 = query_engine.execute_query(query)
local cold_time = (vim.loop.hrtime() - start_time) / 1000000
-- Second execution (warm cache)
start_time = vim.loop.hrtime()
local result2 = query_engine.execute_query(query)
local warm_time = (vim.loop.hrtime() - start_time) / 1000000
assert.is_true(result1.success, "First query should succeed")
assert.is_true(result2.success, "Second query should succeed")
-- Warm cache should be faster (or at least not significantly slower)
local improvement_ratio = warm_time / cold_time
assert.is_true(improvement_ratio <= 1.5,
string.format("Cache didn't help: cold=%.2fms, warm=%.2fms, ratio=%.2f",
cold_time, warm_time, improvement_ratio))
cache.cleanup()
end)
end)
describe("concurrent query performance", function()
it("should handle multiple concurrent queries", function()
local queries = {
'FROM documents LIMIT 10',
'FROM documents WHERE status = "draft" LIMIT 10',
'FROM documents WHERE created_at > "2023-01-01" LIMIT 10',
'FROM documents ORDER BY updated_at DESC LIMIT 10'
}
local start_time = vim.loop.hrtime()
local results = {}
local errors = {}
-- Execute queries concurrently (simulated with immediate execution)
for _, query in ipairs(queries) do
local ok, result = pcall(query_engine.execute_query, query)
if ok then
table.insert(results, result)
else
table.insert(errors, result)
end
end
local total_time = (vim.loop.hrtime() - start_time) / 1000000
-- table.insert keeps both tables dense, so # gives an accurate count
assert.equals(#queries, #results, "All queries should execute")
assert.equals(0, #errors, "No query errors should occur")
for _, result in ipairs(results) do
assert.is_true(result.success, "Each query should succeed")
end
-- Should complete in reasonable time
assert.is_true(total_time < 1000,
string.format("Concurrent queries took %.2fms, expected < 1000ms", total_time))
end)
end)
describe("large result set performance", function()
it("should handle large result sets efficiently", function()
local start_time = vim.loop.hrtime()
local result = query_engine.execute_query('FROM documents LIMIT 1000')
local end_time = vim.loop.hrtime()
local execution_time_ms = (end_time - start_time) / 1000000
assert.is_true(result.success, "Query should execute successfully")
if result.documents then
assert.is_true(#result.documents <= 1000, "Should not exceed limit")
end
-- Even large result sets should be reasonably fast
assert.is_true(execution_time_ms < 2000,
string.format("Large result set took %.2fms, expected < 2000ms", execution_time_ms))
end)
end)
describe("memory usage", function()
it("should not leak memory during repeated queries", function()
local initial_memory = collectgarbage("count")
local query = 'FROM documents LIMIT 10'
-- Execute many queries
for i = 1, 100 do
local result = query_engine.execute_query(query)
assert.is_true(result.success, "Query should succeed")
end
-- Force garbage collection
collectgarbage("collect")
collectgarbage("collect")
local final_memory = collectgarbage("count")
local memory_increase = final_memory - initial_memory
-- Memory increase should be minimal (< 1MB)
assert.is_true(memory_increase < 1000,
string.format("Memory increased by %.2fKB, expected < 1000KB", memory_increase))
end)
end)
describe("index performance", function()
it("should use indexes effectively", function()
-- Test query that should use index
local indexed_query = 'FROM documents WHERE id = "test-id"'
local start_time = vim.loop.hrtime()
local result = query_engine.execute_query(indexed_query)
local indexed_time = (vim.loop.hrtime() - start_time) / 1000000
assert.is_true(result.success, "Indexed query should succeed")
-- Should be very fast with index
assert.is_true(indexed_time < 50,
string.format("Indexed query took %.2fms, expected < 50ms", indexed_time))
end)
end)
end)


@ -0,0 +1,32 @@
-- tests/spec/document_spec.lua
local document = require("notex.document")
describe("Document Parsing", function()
it("should parse a markdown file with YAML frontmatter", function()
local file_content = [[
---
type: person
name: John Doe
email: john.doe@example.com
---
# John Doe
This is a document about John Doe.
]]
local doc = document.parse(file_content)
assert.are.same("person", doc.type)
assert.are.same("John Doe", doc.properties.name)
assert.are.same("john.doe@example.com", doc.properties.email)
assert.are.same("# John Doe\n\nThis is a document about John Doe.\n", doc.content)
end)
end)


@ -0,0 +1,23 @@
-- tests/spec/schema_spec.lua
local schema = require("notex.schema")
describe("Schema Inference", function()
it("should infer schema from a document", function()
local doc = {
type = "person",
properties = {
name = "John Doe",
age = 30,
is_student = false,
},
}
local inferred_schema = schema.infer(doc)
assert.are.same({
name = "string",
age = "number",
is_student = "boolean",
}, inferred_schema)
end)
end)

View file

@ -0,0 +1,324 @@
-- Unit tests for caching utilities
local cache = require('notex.utils.cache')
describe("cache utilities", function()
before_each(function()
-- Reset cache before each test
cache.init({
memory = {enabled = true, max_size = 10},
lru = {enabled = true, max_size = 5},
timed = {enabled = true, default_ttl = 1}
})
end)
after_each(function()
cache.cleanup()
end)
describe("memory cache", function()
it("should store and retrieve values", function()
local ok = cache.memory_set("test_key", "test_value")
assert.is_true(ok)
local value = cache.memory_get("test_key")
assert.equals("test_value", value)
end)
it("should return nil for non-existent keys", function()
local value = cache.memory_get("non_existent")
assert.is_nil(value)
end)
it("should handle cache size limits", function()
-- Fill cache beyond max size
for i = 1, 15 do
cache.memory_set("key_" .. i, "value_" .. i)
end
-- Should still be able to set new values (eviction happens)
local ok = cache.memory_set("new_key", "new_value")
assert.is_true(ok)
-- Should still be able to get some values
local value = cache.memory_get("new_key")
assert.equals("new_value", value)
end)
end)
describe("LRU cache", function()
it("should store and retrieve values with LRU eviction", function()
cache.lru_set("key1", "value1")
cache.lru_set("key2", "value2")
cache.lru_set("key3", "value3")
-- Access key1 to make it most recently used
local value = cache.lru_get("key1")
assert.equals("value1", value)
-- Add more items to trigger eviction
cache.lru_set("key4", "value4")
cache.lru_set("key5", "value5")
cache.lru_set("key6", "value6") -- Should evict key2 (least recently used)
-- key2 should be evicted, key1 should still exist
assert.is_nil(cache.lru_get("key2"))
assert.equals("value1", cache.lru_get("key1"))
end)
it("should update access order on get", function()
cache.lru_set("key1", "value1")
cache.lru_set("key2", "value2")
-- Get key1 to make it most recently used
cache.lru_get("key1")
-- Fill cache to evict
cache.lru_set("key3", "value3")
cache.lru_set("key4", "value4")
cache.lru_set("key5", "value5")
cache.lru_set("key6", "value6") -- Should evict key2
assert.is_not_nil(cache.lru_get("key1")) -- Should still exist
assert.is_nil(cache.lru_get("key2")) -- Should be evicted
end)
end)
describe("timed cache", function()
it("should store values with TTL", function()
cache.timed_set("test_key", "test_value", 2) -- 2 second TTL
local value = cache.timed_get("test_key")
assert.equals("test_value", value)
end)
it("should expire values after TTL", function()
cache.timed_set("test_key", "test_value", 1) -- 1 second TTL
-- Should be available immediately
local value = cache.timed_get("test_key")
assert.equals("test_value", value)
-- Wait for expiration; block so the assertion runs inside this test
-- (an assertion inside vim.defer_fn would fire after the spec returns)
vim.wait(1100)
value = cache.timed_get("test_key")
assert.is_nil(value)
end)
it("should use default TTL when not specified", function()
cache.timed_set("test_key", "test_value")
local value = cache.timed_get("test_key")
assert.equals("test_value", value)
end)
end)
describe("generic cache operations", function()
it("should set and get with specified cache type", function()
local ok = cache.set("test_key", "test_value", "memory")
assert.is_true(ok)
local value = cache.get("test_key", "memory")
assert.equals("test_value", value)
end)
it("should default to memory cache", function()
local ok = cache.set("test_key", "test_value")
assert.is_true(ok)
local value = cache.get("test_key")
assert.equals("test_value", value)
end)
it("should handle unknown cache types", function()
local ok = cache.set("test_key", "test_value", "unknown")
assert.is_false(ok)
local value = cache.get("test_key", "unknown")
assert.is_nil(value)
end)
end)
describe("get_or_set", function()
it("should return cached value when exists", function()
cache.set("test_key", "cached_value")
local value = cache.get_or_set("test_key", function()
return "computed_value"
end)
assert.equals("cached_value", value)
end)
it("should compute and cache value when not exists", function()
local call_count = 0
local value = cache.get_or_set("test_key", function()
call_count = call_count + 1
return "computed_value"
end)
assert.equals("computed_value", value)
assert.equals(1, call_count)
-- Second call should use cached value
value = cache.get_or_set("test_key", function()
call_count = call_count + 1
return "new_value"
end)
assert.equals("computed_value", value)
assert.equals(1, call_count) -- Should not be called again
end)
it("should handle computation errors", function()
assert.has_error(function()
cache.get_or_set("test_key", function()
error("Computation failed")
end)
end)
end)
end)
describe("multi_get", function()
it("should try multiple cache types in order", function()
cache.set("test_key", "memory_value", "memory")
cache.set("test_key", "lru_value", "lru")
local value, cache_type = cache.multi_get("test_key", {"lru", "memory"})
assert.equals("lru_value", value)
assert.equals("lru", cache_type)
-- Should try memory if lru doesn't have it
cache.invalidate("test_key", "lru")
value, cache_type = cache.multi_get("test_key", {"lru", "memory"})
assert.equals("memory_value", value)
assert.equals("memory", cache_type)
end)
it("should return nil when not found in any cache", function()
local value = cache.multi_get("non_existent", {"memory", "lru", "timed"})
assert.is_nil(value)
end)
it("should use default cache types when not specified", function()
cache.set("test_key", "memory_value", "memory")
local value = cache.multi_get("test_key")
assert.equals("memory_value", value)
end)
end)
describe("invalidate", function()
it("should invalidate from specific cache type", function()
cache.set("test_key", "test_value", "memory")
cache.set("test_key", "test_value", "lru")
cache.invalidate("test_key", "memory")
assert.is_nil(cache.get("test_key", "memory"))
assert.equals("test_value", cache.get("test_key", "lru"))
end)
it("should invalidate from all caches when type not specified", function()
cache.set("test_key", "test_value", "memory")
cache.set("test_key", "test_value", "lru")
cache.set("test_key", "test_value", "timed")
cache.invalidate("test_key")
assert.is_nil(cache.get("test_key", "memory"))
assert.is_nil(cache.get("test_key", "lru"))
assert.is_nil(cache.get("test_key", "timed"))
end)
end)
describe("clear_all", function()
it("should clear all caches", function()
cache.set("key1", "value1", "memory")
cache.set("key2", "value2", "lru")
cache.set("key3", "value3", "timed")
cache.clear_all()
assert.is_nil(cache.get("key1", "memory"))
assert.is_nil(cache.get("key2", "lru"))
assert.is_nil(cache.get("key3", "timed"))
end)
it("should reset metrics", function()
-- Generate some metrics
cache.set("test", "value")
cache.get("test")
cache.get("non_existent")
local stats_before = cache.get_stats()
assert.is_true(stats_before.metrics.hits > 0)
assert.is_true(stats_before.metrics.misses > 0)
cache.clear_all()
local stats_after = cache.get_stats()
assert.equals(0, stats_after.metrics.hits)
assert.equals(0, stats_after.metrics.misses)
end)
end)
describe("get_stats", function()
it("should return comprehensive statistics", function()
-- Generate some activity
cache.set("test1", "value1")
cache.set("test2", "value2")
cache.get("test1")
cache.get("non_existent")
local stats = cache.get_stats()
assert.is_not_nil(stats.metrics)
assert.is_not_nil(stats.sizes)
assert.is_not_nil(stats.config)
assert.is_true(stats.metrics.sets >= 2)
assert.is_true(stats.metrics.hits >= 1)
assert.is_true(stats.metrics.misses >= 1)
assert.is_not_nil(stats.metrics.hit_ratio)
end)
it("should calculate hit ratio correctly", function()
-- Generate known metrics
cache.set("test", "value")
cache.get("test") -- hit
cache.get("test") -- hit
cache.get("non_existent") -- miss
local stats = cache.get_stats()
-- Should be 2 hits out of 3 total requests = ~0.67
assert.is_true(stats.metrics.hit_ratio > 0.6)
assert.is_true(stats.metrics.hit_ratio < 0.7)
end)
end)
describe("configuration", function()
it("should initialize with custom configuration", function()
cache.init({
memory = {enabled = false},
lru = {enabled = true, max_size = 20},
timed = {enabled = true, default_ttl = 10}
})
-- Memory cache should be disabled
local ok = cache.memory_set("test", "value")
assert.is_false(ok)
-- LRU cache should work with new size
ok = cache.lru_set("test", "value")
assert.is_true(ok)
-- Timed cache should work with new TTL
cache.timed_set("test2", "value")
local value = cache.timed_get("test2")
assert.equals("value", value)
end)
end)
end)
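The `get_or_set` behaviour exercised above (compute once, serve from cache afterwards, propagate computation errors) can be sketched as follows. This is a hypothetical illustration of the contract the tests assume, not the actual `notex.utils.cache` implementation:

```lua
-- Sketch of the get_or_set contract implied by the tests above.
-- `store` stands in for whichever backing cache is configured.
local store = {}

local function get_or_set(key, compute)
  local cached = store[key]
  if cached ~= nil then
    return cached           -- hit: the compute function is not called again
  end
  local value = compute()   -- may raise; the error propagates to the caller
  store[key] = value        -- miss: cache the computed value
  return value
end
```

Note that a raised error caches nothing, so a later call with the same key retries the computation.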


@ -0,0 +1,277 @@
-- Unit tests for date parsing and formatting utilities
local date_utils = require('notex.utils.date')
describe("date utilities", function()
describe("parse_date", function()
it("should parse ISO 8601 dates", function()
local timestamp = date_utils.parse_date("2023-12-25")
assert.is_not_nil(timestamp)
local formatted = os.date("%Y-%m-%d", timestamp)
assert.equals("2023-12-25", formatted)
end)
it("should parse ISO 8601 datetimes", function()
local timestamp = date_utils.parse_date("2023-12-25T10:30:00")
assert.is_not_nil(timestamp)
local formatted = os.date("%Y-%m-%d %H:%M:%S", timestamp)
assert.equals("2023-12-25 10:30:00", formatted)
end)
it("should parse ISO 8601 with timezone", function()
local timestamp = date_utils.parse_date("2023-12-25T10:30:00+02:00")
assert.is_not_nil(timestamp)
-- Should be converted to UTC
local utc_formatted = os.date("%Y-%m-%d %H:%M:%S", timestamp)
assert.equals("2023-12-25 08:30:00", utc_formatted)
end)
it("should parse relative dates", function()
-- Test 1 day ago
local one_day_ago = os.time() - 86400
local timestamp = date_utils.parse_date("1d")
assert.is_true(math.abs(timestamp - one_day_ago) < 60) -- Within 1 minute
-- Test 1 hour ago
local one_hour_ago = os.time() - 3600
timestamp = date_utils.parse_date("1h")
assert.is_true(math.abs(timestamp - one_hour_ago) < 60) -- Within 1 minute
end)
it("should parse natural dates", function()
local timestamp = date_utils.parse_date("2023-12-25")
assert.is_not_nil(timestamp)
local formatted = os.date("%Y-%m-%d", timestamp)
assert.equals("2023-12-25", formatted)
end)
it("should return nil for invalid dates", function()
local timestamp = date_utils.parse_date("invalid date")
assert.is_nil(timestamp)
timestamp = date_utils.parse_date("")
assert.is_nil(timestamp)
timestamp = date_utils.parse_date(nil)
assert.is_nil(timestamp)
end)
it("should parse common date formats", function()
-- MM/DD/YYYY
local timestamp = date_utils.parse_date("12/25/2023")
assert.is_not_nil(timestamp)
-- MM-DD-YYYY
timestamp = date_utils.parse_date("12-25-2023")
assert.is_not_nil(timestamp)
end)
end)
describe("format_date", function()
it("should format timestamp to string", function()
local timestamp = os.time({year = 2023, month = 12, day = 25, hour = 10, min = 30, sec = 0})
local formatted = date_utils.format_date(timestamp, "%Y-%m-%d %H:%M:%S")
assert.equals("2023-12-25 10:30:00", formatted)
end)
it("should use default format", function()
local timestamp = os.time({year = 2023, month = 12, day = 25})
local formatted = date_utils.format_date(timestamp)
assert.equals("2023-12-25", formatted)
end)
it("should handle nil timestamp", function()
local formatted = date_utils.format_date(nil)
assert.equals("", formatted)
end)
end)
describe("get_relative_time", function()
it("should return 'just now' for recent times", function()
local current_time = os.time()
local relative = date_utils.get_relative_time(current_time)
assert.equals("just now", relative)
end)
it("should return minutes ago", function()
local timestamp = os.time() - 120 -- 2 minutes ago
local relative = date_utils.get_relative_time(timestamp)
assert.equals("2 minutes ago", relative)
end)
it("should return hours ago", function()
local timestamp = os.time() - 7200 -- 2 hours ago
local relative = date_utils.get_relative_time(timestamp)
assert.equals("2 hours ago", relative)
end)
it("should return days ago", function()
local timestamp = os.time() - 172800 -- 2 days ago
local relative = date_utils.get_relative_time(timestamp)
assert.equals("2 days ago", relative)
end)
it("should return months ago", function()
local timestamp = os.time() - (60 * 86400) -- ~2 months ago
local relative = date_utils.get_relative_time(timestamp)
assert.matches("months ago", relative)
end)
it("should return years ago", function()
local timestamp = os.time() - (365 * 86400) -- ~1 year ago
local relative = date_utils.get_relative_time(timestamp)
assert.matches("year ago", relative)
end)
it("should handle singular forms", function()
local timestamp = os.time() - 60 -- 1 minute ago
local relative = date_utils.get_relative_time(timestamp)
assert.equals("1 minute ago", relative)
timestamp = os.time() - 3600 -- 1 hour ago
relative = date_utils.get_relative_time(timestamp)
assert.equals("1 hour ago", relative)
end)
end)
describe("get_month_number", function()
it("should convert month names to numbers", function()
assert.equals(1, date_utils.get_month_number("January"))
assert.equals(12, date_utils.get_month_number("December"))
assert.equals(6, date_utils.get_month_number("June"))
end)
it("should handle short month names", function()
assert.equals(1, date_utils.get_month_number("Jan"))
assert.equals(12, date_utils.get_month_number("Dec"))
assert.equals(6, date_utils.get_month_number("Jun"))
end)
it("should be case insensitive", function()
assert.equals(1, date_utils.get_month_number("JANUARY"))
assert.equals(1, date_utils.get_month_number("january"))
assert.equals(1, date_utils.get_month_number("jan"))
end)
it("should return nil for invalid month names", function()
assert.is_nil(date_utils.get_month_number("NotAMonth"))
assert.is_nil(date_utils.get_month_number(""))
end)
end)
describe("get_month_name", function()
it("should convert month numbers to names", function()
assert.equals("January", date_utils.get_month_name(1))
assert.equals("December", date_utils.get_month_name(12))
assert.equals("June", date_utils.get_month_name(6))
end)
it("should return short names when requested", function()
assert.equals("Jan", date_utils.get_month_name(1, true))
assert.equals("Dec", date_utils.get_month_name(12, true))
assert.equals("Jun", date_utils.get_month_name(6, true))
end)
it("should return nil for invalid month numbers", function()
assert.is_nil(date_utils.get_month_name(0))
assert.is_nil(date_utils.get_month_name(13))
assert.is_nil(date_utils.get_month_name(-1))
end)
end)
describe("is_valid_date", function()
it("should validate correct dates", function()
assert.is_true(date_utils.is_valid_date("2023-12-25"))
assert.is_true(date_utils.is_valid_date("2023-12-25T10:30:00"))
assert.is_true(date_utils.is_valid_date("1d"))
end)
it("should reject invalid dates", function()
assert.is_false(date_utils.is_valid_date("invalid"))
assert.is_false(date_utils.is_valid_date(""))
assert.is_false(date_utils.is_valid_date(nil))
end)
end)
describe("add_time", function()
it("should add time to timestamp", function()
local timestamp = os.time({year = 2023, month = 12, day = 25})
local new_timestamp = date_utils.add_time(timestamp, 1, "days")
local expected = os.time({year = 2023, month = 12, day = 26})
assert.equals(expected, new_timestamp)
end)
it("should add different time units", function()
local timestamp = os.time({year = 2023, month = 12, day = 25, hour = 10})
-- Add hours
local new_timestamp = date_utils.add_time(timestamp, 2, "hours")
local expected = os.time({year = 2023, month = 12, day = 25, hour = 12})
assert.equals(expected, new_timestamp)
-- Add minutes
new_timestamp = date_utils.add_time(timestamp, 30, "minutes")
expected = os.time({year = 2023, month = 12, day = 25, hour = 10, min = 30})
assert.equals(expected, new_timestamp)
end)
end)
describe("get_date_range", function()
it("should calculate date range", function()
local range = date_utils.get_date_range("2023-12-25", "2023-12-27")
assert.is_not_nil(range)
assert.equals(2, range.duration_days)
assert.equals("2023-12-25", range.start_formatted)
assert.equals("2023-12-27", range.end_formatted)
end)
it("should return nil for invalid dates", function()
local range = date_utils.get_date_range("invalid", "2023-12-27")
assert.is_nil(range)
end)
end)
describe("get_week_bounds", function()
it("should get week start and end", function()
local week_bounds = date_utils.get_week_bounds()
assert.is_not_nil(week_bounds.start_timestamp)
assert.is_not_nil(week_bounds.end_timestamp)
assert.is_not_nil(week_bounds.start_formatted)
assert.is_not_nil(week_bounds.end_formatted)
-- Start and end of an inclusive 7-day week are 6 days apart
local duration = week_bounds.end_timestamp - week_bounds.start_timestamp
assert.equals(6 * 86400, duration)
end)
end)
describe("get_month_bounds", function()
it("should get month start and end", function()
local month_bounds = date_utils.get_month_bounds(os.time({year = 2023, month = 12, day = 15}))
assert.is_not_nil(month_bounds.start_timestamp)
assert.is_not_nil(month_bounds.end_timestamp)
assert.equals("2023-12-01", month_bounds.start_formatted)
assert.equals("2023-12-31", month_bounds.end_formatted)
end)
end)
describe("get_timezones", function()
it("should return list of timezones", function()
local timezones = date_utils.get_timezones()
assert.is_table(timezones)
assert.is_true(#timezones > 0)
assert.is_true(vim.tbl_contains(timezones, "UTC"))
assert.is_true(vim.tbl_contains(timezones, "America/New_York"))
end)
end)
end)
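The relative-time strings asserted above follow a simple bucketing scheme with singular/plural handling. A sketch consistent with the thresholds the tests imply (hypothetical, not the actual `notex.utils.date` code):

```lua
-- Largest unit first; sizes are approximate for months/years.
local units = {
  { 31536000, "year"   },  -- ~365 days
  { 2592000,  "month"  },  -- ~30 days
  { 86400,    "day"    },
  { 3600,     "hour"   },
  { 60,       "minute" },
}

local function get_relative_time(timestamp, now)
  now = now or os.time()
  local diff = now - timestamp
  if diff < 60 then return "just now" end
  for _, unit in ipairs(units) do
    local size, name = unit[1], unit[2]
    if diff >= size then
      local n = math.floor(diff / size)
      local suffix = (n == 1) and name or (name .. "s")
      return string.format("%d %s ago", n, suffix)
    end
  end
end
```

Under this scheme `diff = 120` yields "2 minutes ago" and `diff = 60 * 86400` falls into the month bucket, matching the assertions above.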


@ -0,0 +1,247 @@
-- Unit tests for type detection and conversion utilities
local types = require('notex.utils.types')
describe("type utilities", function()
describe("detect_type", function()
it("should detect boolean true values", function()
local detected_type, converted_value = types.detect_type("true")
assert.equals("boolean", detected_type)
assert.is_true(converted_value)
detected_type, converted_value = types.detect_type("yes")
assert.equals("boolean", detected_type)
assert.is_true(converted_value)
detected_type, converted_value = types.detect_type("1")
assert.equals("boolean", detected_type)
assert.is_true(converted_value)
end)
it("should detect boolean false values", function()
local detected_type, converted_value = types.detect_type("false")
assert.equals("boolean", detected_type)
assert.is_false(converted_value)
detected_type, converted_value = types.detect_type("no")
assert.equals("boolean", detected_type)
assert.is_false(converted_value)
detected_type, converted_value = types.detect_type("0")
assert.equals("boolean", detected_type)
assert.is_false(converted_value)
end)
it("should detect ISO 8601 dates", function()
local detected_type = types.detect_type("2023-12-25")
assert.equals("date", detected_type)
detected_type = types.detect_type("2023-12-25T10:30:00")
assert.equals("date", detected_type)
end)
it("should detect URLs", function()
local detected_type = types.detect_type("https://example.com")
assert.equals("url", detected_type)
detected_type = types.detect_type("http://test.org/path")
assert.equals("url", detected_type)
end)
it("should detect email addresses", function()
local detected_type = types.detect_type("user@example.com")
assert.equals("email", detected_type)
detected_type = types.detect_type("test.email+tag@domain.co.uk")
assert.equals("email", detected_type)
end)
it("should detect numbers", function()
local detected_type = types.detect_type("42")
assert.equals("number", detected_type)
detected_type = types.detect_type("-17")
assert.equals("number", detected_type)
detected_type = types.detect_type("3.14159")
assert.equals("number", detected_type)
end)
it("should detect JSON arrays", function()
local detected_type = types.detect_type('[1, 2, 3]')
assert.equals("array", detected_type)
detected_type = types.detect_type('["a", "b", "c"]')
assert.equals("array", detected_type)
end)
it("should detect JSON objects", function()
local detected_type = types.detect_type('{"key": "value"}')
assert.equals("object", detected_type)
detected_type = types.detect_type('{"a": 1, "b": 2}')
assert.equals("object", detected_type)
end)
it("should detect strings by default", function()
local detected_type = types.detect_type("plain text")
assert.equals("string", detected_type)
detected_type = types.detect_type("not a special pattern")
assert.equals("string", detected_type)
end)
it("should detect nil values", function()
local detected_type = types.detect_type(nil)
assert.equals("nil", detected_type)
end)
end)
describe("convert_to_type", function()
it("should convert to boolean", function()
assert.is_true(types.convert_to_type("true", "boolean"))
assert.is_false(types.convert_to_type("false", "boolean"))
assert.is_true(types.convert_to_type("yes", "boolean"))
assert.is_false(types.convert_to_type("no", "boolean"))
end)
it("should convert to number", function()
assert.equals(42, types.convert_to_type("42", "number"))
assert.equals(-17.5, types.convert_to_type("-17.5", "number"))
assert.equals(0, types.convert_to_type("invalid", "number"))
end)
it("should convert to string", function()
assert.equals("hello", types.convert_to_type("hello", "string"))
assert.equals("42", types.convert_to_type(42, "string"))
assert.equals("true", types.convert_to_type(true, "string"))
end)
it("should convert to array", function()
local array = types.convert_to_type('[1, 2, 3]', "array")
assert.is_table(array)
assert.equals(1, array[1])
assert.equals(2, array[2])
assert.equals(3, array[3])
-- Comma-separated values
array = types.convert_to_type("a,b,c", "array")
assert.is_table(array)
assert.equals("a", array[1])
assert.equals("b", array[2])
assert.equals("c", array[3])
end)
it("should convert to object", function()
local obj = types.convert_to_type('{"key": "value"}', "object")
assert.is_table(obj)
assert.equals("value", obj.key)
-- Key=value pairs
obj = types.convert_to_type("a=1,b=2", "object")
assert.is_table(obj)
assert.equals("1", obj.a)
assert.equals("2", obj.b)
end)
end)
describe("compare_types", function()
it("should compare types correctly", function()
local result = types.compare_types("hello", 42)
assert.equals("string", result.type1)
assert.equals("number", result.type2)
assert.is_false(result.same_type)
assert.is_true(result.compatible) -- strings can represent numbers
result = types.compare_types(42, 17)
assert.equals("number", result.type1)
assert.equals("number", result.type2)
assert.is_true(result.same_type)
assert.is_true(result.compatible)
end)
end)
describe("are_types_compatible", function()
it("should check type compatibility", function()
assert.is_true(types.are_types_compatible("string", "string"))
assert.is_true(types.are_types_compatible("number", "number"))
assert.is_true(types.are_types_compatible("string", "number")) -- string can convert to number
assert.is_true(types.are_types_compatible("number", "string")) -- string can represent number
assert.is_false(types.are_types_compatible("boolean", "table"))
end)
end)
describe("cast_value", function()
it("should cast values with validation", function()
assert.equals(42, types.cast_value("42", "number"))
assert.equals("hello", types.cast_value("hello", "string"))
-- Invalid cast returns original in non-strict mode
assert.equals("invalid", types.cast_value("invalid", "number"))
end)
it("should error in strict mode", function()
assert.has_error(function()
types.cast_value("invalid", "number", true)
end)
end)
end)
describe("infer_schema", function()
it("should infer schema from values", function()
local values = {"hello", "world", "test"}
local schema = types.infer_schema(values)
assert.equals("string", schema.detected_type)
assert.equals(1.0, schema.confidence)
assert.equals(3, schema.sample_size)
assert.equals(5, schema.constraints.max_length)
assert.equals(4, schema.constraints.min_length)
end)
it("should handle mixed types", function()
local values = {"hello", 42, true}
local schema = types.infer_schema(values)
-- Should pick the most common type
assert.is_not_nil(schema.detected_type)
assert.is_true(schema.confidence < 1.0)
end)
end)
describe("get_possible_conversions", function()
it("should get all possible conversions", function()
local conversions = types.get_possible_conversions("42")
-- Should include number, boolean, string, and possibly date conversions
local found_types = {}
for _, conversion in ipairs(conversions) do
found_types[conversion.type] = true
end
assert.is_true(found_types.number)
assert.is_true(found_types.string)
assert.is_true(found_types.boolean)
end)
end)
describe("validate_conversion", function()
it("should validate type conversion", function()
assert.is_true(types.validate_conversion("42", "number"))
assert.is_true(types.validate_conversion("true", "boolean"))
assert.is_false(types.validate_conversion("invalid", "number"))
end)
end)
describe("get_type_info", function()
it("should get comprehensive type information", function()
local info = types.get_type_info("42")
assert.equals("number", info.detected_type)
assert.equals("42", info.original_value)
assert.equals(42, info.converted_value)
assert.is_table(info.possible_conversions)
end)
end)
end)
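The tests above implicitly fix a detection order: `"1"` and `"0"` must come out as booleans even though they also parse as numbers, so boolean patterns have to be checked before numeric ones. A minimal sketch of that ordering (hypothetical, not the actual `notex.utils.types` code):

```lua
local TRUE_WORDS  = { ["true"] = true,  yes = true, ["1"] = true }
local FALSE_WORDS = { ["false"] = true, no = true,  ["0"] = true }

local function detect_type(value)
  if value == nil then return "nil" end
  local s = tostring(value)
  -- Booleans first: "1"/"0" would otherwise match as numbers
  if TRUE_WORDS[s]  then return "boolean", true  end
  if FALSE_WORDS[s] then return "boolean", false end
  if s:match("^%d%d%d%d%-%d%d%-%d%d") then return "date", s end
  if s:match("^https?://")            then return "url", s end
  if s:match("^[%w%.%+%-]+@[%w%.%-]+%.%a+$") then return "email", s end
  if tonumber(s) then return "number", tonumber(s) end
  return "string", s
end
```

Reordering these checks would silently change results (e.g. numbers before booleans would turn `"1"` into a number), which is exactly what the boolean and number specs above guard against.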


@ -0,0 +1,282 @@
-- Unit tests for validation utilities
local validation = require('notex.utils.validation')
describe("validation utilities", function()
describe("validate_value", function()
it("should validate string values", function()
local schema = {type = "string", required = true}
local valid, error = validation.validate_value("hello", schema)
assert.is_true(valid)
assert.equals("Validation passed", error)
end)
it("should reject required string when nil", function()
local schema = {type = "string", required = true}
local valid, error = validation.validate_value(nil, schema)
assert.is_false(valid)
assert.equals("Value is required", error)
end)
it("should validate string length constraints", function()
local schema = {type = "string", min_length = 5, max_length = 10}
-- Too short
local valid, error = validation.validate_value("hi", schema)
assert.is_false(valid)
assert.matches("too short", error)
-- Too long
valid, error = validation.validate_value("this is too long", schema)
assert.is_false(valid)
assert.matches("too long", error)
-- Just right
valid, error = validation.validate_value("perfect", schema)
assert.is_true(valid)
end)
it("should validate number values", function()
local schema = {type = "number", min_value = 1, max_value = 10}
-- Valid number
local valid, error = validation.validate_value(5, schema)
assert.is_true(valid)
-- Too small
valid, error = validation.validate_value(0, schema)
assert.is_false(valid)
assert.matches("too small", error)
-- Too large
valid, error = validation.validate_value(11, schema)
assert.is_false(valid)
assert.matches("too large", error)
end)
it("should validate boolean values", function()
local schema = {type = "boolean"}
-- Valid booleans
local valid, error = validation.validate_value(true, schema)
assert.is_true(valid)
valid, error = validation.validate_value(false, schema)
assert.is_true(valid)
-- String conversion
valid, error = validation.validate_value("true", schema)
assert.is_true(valid)
valid, error = validation.validate_value("false", schema)
assert.is_true(valid)
end)
it("should validate array values", function()
local schema = {type = "array", min_items = 2, max_items = 5}
-- Valid array
local valid, error = validation.validate_value({1, 2, 3}, schema)
assert.is_true(valid)
-- Too few items
valid, error = validation.validate_value({1}, schema)
assert.is_false(valid)
assert.matches("too short", error)
-- Too many items
valid, error = validation.validate_value({1, 2, 3, 4, 5, 6}, schema)
assert.is_false(valid)
assert.matches("too long", error)
end)
it("should validate object values", function()
local schema = {
type = "object",
required_fields = {"name"},
field_types = {
name = {type = "string"},
age = {type = "number"}
}
}
-- Valid object
local valid, error = validation.validate_value({name = "John", age = 30}, schema)
assert.is_true(valid)
-- Missing required field
valid, error = validation.validate_value({age = 30}, schema)
assert.is_false(valid)
assert.matches("Missing required field", error)
-- Invalid field type
valid, error = validation.validate_value({name = 123, age = 30}, schema)
assert.is_false(valid)
assert.matches("Field 'name' invalid", error)
end)
end)
describe("validate_document_properties", function()
it("should validate document properties against schema", function()
local schema_definition = {
title = {type = "string", required = true},
status = {type = "string", enum = {"draft", "published"}},
priority = {type = "number", min_value = 1, max_value = 5}
}
local properties = {
title = "Test Document",
status = "draft",
priority = 3
}
local valid, result = validation.validate_document_properties(properties, schema_definition)
assert.is_true(valid)
assert.equals(0, #result.errors)
end)
it("should return errors for invalid properties", function()
local schema_definition = {
title = {type = "string", required = true},
status = {type = "string", enum = {"draft", "published"}}
}
local properties = {
status = "invalid_status" -- Missing required title, invalid status
}
local valid, result = validation.validate_document_properties(properties, schema_definition)
assert.is_false(valid)
assert.is_true(#result.errors > 0)
end)
it("should include warnings for unknown properties", function()
local schema_definition = {
title = {type = "string", required = true}
}
local properties = {
title = "Test Document",
unknown_property = "value"
}
local valid, result = validation.validate_document_properties(properties, schema_definition)
assert.is_true(valid)
assert.is_true(#result.warnings > 0)
end)
end)
describe("validate_query_params", function()
it("should validate query parameters", function()
local allowed_params = {
limit = {type = "number", min_value = 1, max_value = 100},
sort = {type = "string", enum = {"asc", "desc"}},
filter = {type = "string", required = false}
}
local params = {
limit = 10,
sort = "asc"
}
local valid, errors = validation.validate_query_params(params, allowed_params)
assert.is_true(valid)
assert.equals(0, #errors)
end)
it("should reject unknown parameters", function()
local allowed_params = {
limit = {type = "number"}
}
local params = {
limit = 10,
unknown = "value"
}
local valid, errors = validation.validate_query_params(params, allowed_params)
assert.is_false(valid)
assert.is_true(#errors > 0)
end)
end)
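-- Sketch of the validate_query_params shape implied by these tests (hypothetical;
-- the enum and min/max range checks declared in allowed_params are elided here):
--
--   function validation.validate_query_params(params, allowed_params)
--     local errors = {}
--     for name, value in pairs(params) do
--       local rule = allowed_params[name]
--       if not rule then
--         table.insert(errors, "Unknown parameter: " .. name)
--       elseif type(value) ~= rule.type then
--         table.insert(errors, "Parameter '" .. name .. "' must be of type " .. rule.type)
--       end
--     end
--     return #errors == 0, errors
--   end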
describe("sanitize_input", function()
it("should remove dangerous characters", function()
local input = '<script>alert("xss")</script>'
local sanitized = validation.sanitize_input(input)
assert.equals('scriptalertxss/script', sanitized)
end)
it("should limit length", function()
local input = string.rep("a", 100)
local sanitized = validation.sanitize_input(input, {max_length = 10})
assert.equals(10, #sanitized)
end)
it("should trim whitespace", function()
local input = " hello world "
local sanitized = validation.sanitize_input(input)
assert.equals("hello world", sanitized)
end)
end)
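-- A minimal sanitize_input consistent with the three expectations above
-- (hypothetical sketch; the real implementation in the validation module may differ):
--
--   function validation.sanitize_input(input, opts)
--     opts = opts or {}
--     local s = tostring(input):gsub('[<>"()\';]', '') -- strip common injection characters
--     s = s:match("^%s*(.-)%s*$")                      -- trim surrounding whitespace
--     if opts.max_length then s = s:sub(1, opts.max_length) end
--     return s
--   end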
describe("validate_file_path", function()
it("should validate safe file paths", function()
local valid, error = validation.validate_file_path("/home/user/document.md")
assert.is_true(valid)
end)
it("should reject paths with invalid characters", function()
local valid, error = validation.validate_file_path('file<name>.md')
assert.is_false(valid)
assert.matches("Invalid characters", error)
end)
it("should reject directory traversal", function()
local valid, error = validation.validate_file_path("../../../etc/passwd")
assert.is_false(valid)
assert.matches("Directory traversal", error)
end)
it("should reject empty paths", function()
local valid, error = validation.validate_file_path("")
assert.is_false(valid)
assert.equals("Empty file path", error)
end)
end)
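-- Sketch of a validate_file_path consistent with the error messages asserted
-- above (hypothetical; the actual module may perform additional checks):
--
--   function validation.validate_file_path(path)
--     if path == nil or path == "" then return false, "Empty file path" end
--     if path:find("%.%.") then return false, "Directory traversal is not allowed" end
--     if path:find('[<>|"?*]') then return false, "Invalid characters in path" end
--     return true
--   end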
describe("create_schema", function()
it("should create validation schema", function()
local schema = validation.create_schema("test_field", {
type = "string",
required = true,
min_length = 5
})
assert.equals("test_field", schema.field_name)
assert.equals("string", schema.type)
assert.is_true(schema.required)
assert.equals(5, schema.min_length)
end)
end)
describe("create_validation_summary", function()
it("should create validation summary", function()
local results = {
{valid = true},
{valid = false, errors = {"Error 1"}},
{valid = false, errors = {"Error 2", "Error 3"}, warnings = {"Warning 1"}},
{valid = true, warnings = {"Warning 2"}}
}
local summary = validation.create_validation_summary(results)
assert.equals(4, summary.total)
assert.equals(2, summary.valid)
assert.equals(2, summary.invalid)
assert.equals(3, #summary.errors)
assert.equals(2, #summary.warnings)
end)
end)
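-- The counts asserted above imply an aggregation along these lines
-- (hypothetical sketch, fully determined by the expected totals in this test):
--
--   function validation.create_validation_summary(results)
--     local summary = {total = #results, valid = 0, invalid = 0, errors = {}, warnings = {}}
--     for _, r in ipairs(results) do
--       if r.valid then summary.valid = summary.valid + 1
--       else summary.invalid = summary.invalid + 1 end
--       for _, e in ipairs(r.errors or {}) do table.insert(summary.errors, e) end
--       for _, w in ipairs(r.warnings or {}) do table.insert(summary.warnings, w) end
--     end
--     return summary
--   end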
end)