docs: initialize constitution v1.0.0
Commit ef65e38bb2
17 changed files with 2226 additions and 0 deletions
105  .gemini/commands/analyze.toml  Normal file
@@ -0,0 +1,105 @@
description = "Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation."

prompt = """
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Goal: Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`.

STRICTLY READ-ONLY: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (the user must explicitly approve it before any follow-up editing commands are invoked manually).

Constitution Authority: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from the repo root and parse the JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
   - SPEC = FEATURE_DIR/spec.md
   - PLAN = FEATURE_DIR/plan.md
   - TASKS = FEATURE_DIR/tasks.md
   Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).

2. Load artifacts:
   - Parse spec.md sections: Overview/Context, Functional Requirements, Non-Functional Requirements, User Stories, Edge Cases (if present).
   - Parse plan.md: architecture/stack choices, Data Model references, phases, technical constraints.
   - Parse tasks.md: task IDs, descriptions, phase grouping, parallel markers [P], referenced file paths.
   - Load the constitution `.specify/memory/constitution.md` for principle validation.

3. Build internal semantic models:
   - Requirements inventory: each functional and non-functional requirement with a stable key (derive a slug from the imperative phrase; e.g., "User can upload file" -> `user-can-upload-file`).
   - User story/action inventory.
   - Task coverage mapping: map each task to one or more requirements or stories (inferred from keywords or explicit reference patterns such as IDs or key phrases).
   - Constitution rule set: extract principle names and any MUST/SHOULD normative statements.

4. Detection passes:
   A. Duplication detection:
      - Identify near-duplicate requirements. Mark the lower-quality phrasing for consolidation.
   B. Ambiguity detection:
      - Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria.
      - Flag unresolved placeholders (TODO, TKTK, ???, <placeholder>, etc.).
   C. Underspecification:
      - Requirements with verbs but no object or measurable outcome.
      - User stories missing acceptance criteria alignment.
      - Tasks referencing files or components not defined in the spec/plan.
   D. Constitution alignment:
      - Any requirement or plan element conflicting with a MUST principle.
      - Missing mandated sections or quality gates from the constitution.
   E. Coverage gaps:
      - Requirements with zero associated tasks.
      - Tasks with no mapped requirement/story.
      - Non-functional requirements not reflected in tasks (e.g., performance, security).
   F. Inconsistency:
      - Terminology drift (the same concept named differently across files).
      - Data entities referenced in the plan but absent from the spec (or vice versa).
      - Task ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note).
      - Conflicting requirements (e.g., one requires Next.js while another specifies Vue as the framework).

5. Severity assignment heuristic:
   - CRITICAL: Violates a constitution MUST, a core spec artifact is missing, or a requirement with zero coverage blocks baseline functionality.
   - HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion.
   - MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case.
   - LOW: Style/wording improvements, minor redundancy not affecting execution order.

6. Produce a Markdown report (no file writes) with sections:

### Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by the category initial.)

Additional subsections:
- Coverage Summary Table:
  | Requirement Key | Has Task? | Task IDs | Notes |
- Constitution Alignment Issues (if any)
- Unmapped Tasks (if any)
- Metrics:
  * Total Requirements
  * Total Tasks
  * Coverage % (requirements with >=1 task)
  * Ambiguity Count
  * Duplication Count
  * Critical Issues Count

7. At the end of the report, output a concise Next Actions block:
   - If CRITICAL issues exist: recommend resolving them before `/implement`.
   - If only LOW/MEDIUM issues exist: the user may proceed, but provide improvement suggestions.
   - Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'".

8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

Behavior rules:
- NEVER modify files.
- NEVER hallucinate missing sections—if absent, report them.
- KEEP findings deterministic: if rerun without changes, produce consistent IDs and counts.
- LIMIT total findings in the main table to 50; aggregate the remainder in a summarized overflow note.
- If zero issues are found, emit a success report with coverage statistics and a proceed recommendation.

Context: {{args}}
"""
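The stable requirement keys in step 3 follow the same slug convention the repo already uses for branch names in `create-new-feature.sh`. A minimal sketch of that derivation, with a hypothetical helper name, might look like:

```bash
#!/usr/bin/env bash
# Hypothetical helper: derive a stable requirement key from an imperative phrase,
# mirroring the branch-name slug pipeline in create-new-feature.sh.
slugify_requirement() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

slugify_requirement "User can upload file"   # prints: user-can-upload-file
```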
162  .gemini/commands/clarify.toml  Normal file
@@ -0,0 +1,162 @@
description = "Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec."

prompt = """
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/specify` or verify the feature branch environment.

2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation

Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions

Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes

Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)

Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions

Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)

Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives

Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms

Completion Signals:
- Acceptance criteria testability
- Measurable Definition-of-Done style indicators

Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification

For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change the implementation or validation strategy
- The information is better deferred to the planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
      * A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
      * A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless they block correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact * Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions, render the options as a Markdown table:

| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> | (add D/E as needed, up to 5)
| Short | Provide a different short answer (<=5 words) | (include only if a free-form alternative is appropriate)

   - For short-answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
   - After the user answers:
      * Validate that the answer maps to one option or fits the <=5 word constraint.
      * If ambiguous, ask for a quick disambiguation (this still counts toward the same question; do not advance).
      * Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
      * All critical ambiguities are resolved early (remaining queued items become unnecessary), OR
      * The user signals completion ("done", "good", "no more"), OR
      * You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at the start, immediately report that there are no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec (loaded once at the start) plus the raw file contents.
   - For the first integrated answer in this session:
      * Ensure a `## Clarifications` section exists (if missing, create it just after the highest-level contextual/overview section per the spec template).
      * Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
      * Functional ambiguity → update or add a bullet in Functional Requirements.
      * User interaction / actor distinction → update User Stories or the Actors subsection (if present) with the clarified role, constraint, or scenario.
      * Data shape / entities → update the Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
      * Non-functional constraint → add/modify measurable criteria in the Non-Functional / Quality Attributes section (convert the vague adjective into a metric or explicit target).
      * Edge case / negative flow → add a new bullet under Edge Cases / Error Handling (or create that subsection if the template provides a placeholder for it).
      * Terminology conflict → normalize the term across the spec; retain the original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating it; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize the risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep the heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).

6. Validation (performed after EACH write plus a final pass):
   - The Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders that the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices and remove them).
   - Markdown structure is valid; the only allowed new headings are `## Clarifications` and `### Session YYYY-MM-DD`.
   - Terminology consistency: the same canonical term is used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after the questioning loop ends or terminates early):
   - Number of questions asked & answered.
   - Path to the updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with status: Resolved (was Partial/Missing and addressed), Deferred (exceeds the question quota or better suited to planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred items remain, recommend whether to proceed to `/plan` or run `/clarify` again later post-plan.
   - Suggested next command.

Behavior rules:
- If no meaningful ambiguities are found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech-stack questions unless their absence blocks functional clarity.
- Respect user early-termination signals ("stop", "done", "proceed").
- If no questions are asked due to full coverage, output a compact coverage summary (all categories Clear), then suggest advancing.
- If the quota is reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with a rationale.

Context for prioritization: {{args}}
"""
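For step 1, one way an agent could consume the `--json --paths-only` payload is with `jq`; this is only an illustrative sketch, and `jq` is an assumed dependency rather than something the command requires:

```bash
#!/usr/bin/env bash
# Sketch: read FEATURE_DIR and FEATURE_SPEC from the paths-only JSON payload (assumes jq).
paths_json="$(.specify/scripts/bash/check-prerequisites.sh --json --paths-only)"
FEATURE_DIR="$(jq -r '.FEATURE_DIR' <<< "$paths_json")"
FEATURE_SPEC="$(jq -r '.FEATURE_SPEC' <<< "$paths_json")"
[[ -f "$FEATURE_SPEC" ]] || { echo "Spec not found; run /specify first." >&2; exit 1; }
```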
77  .gemini/commands/constitution.toml  Normal file
@@ -0,0 +1,77 @@
description = "Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync."

prompt = """
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

1. Load the existing constitution template at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user may require fewer or more principles than the template uses. If a number is specified, respect it and follow the general template structure, updating the document accordingly.

2. Collect/derive values for placeholders:
   - If the user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep the previous value.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
      * MAJOR: Backward-incompatible governance/principle removals or redefinitions.
      * MINOR: New principle/section added or materially expanded guidance.
      * PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose your reasoning before finalizing.

3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any that remain).
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing its non-negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert the prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with the updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update it if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (e.g., agent-specific names like CLAUDE only) remain where generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after the update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders were intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - The version line matches the report.
   - Dates in ISO format (YYYY-MM-DD).
   - Principles are declarative, testable, and free of vague language (replace "should" with MUST/SHOULD plus rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - The new version and bump rationale.
   - Any files flagged for manual follow-up.
   - A suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines for readability (<100 chars ideally), but do not hard-enforce this with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
"""
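The versioning rule in step 2 is plain semantic versioning; a small illustrative helper (hypothetical, not part of the repo's scripts) makes the bump rules concrete:

```bash
#!/usr/bin/env bash
# Hypothetical helper: bump CONSTITUTION_VERSION per the rules above.
bump_version() {
  local version="$1" kind="$2" major minor patch
  IFS=. read -r major minor patch <<< "$version"
  case "$kind" in
    MAJOR) echo "$((major + 1)).0.0" ;;                  # incompatible removals/redefinitions
    MINOR) echo "${major}.$((minor + 1)).0" ;;           # new principle/section or expanded guidance
    PATCH) echo "${major}.${minor}.$((patch + 1))" ;;    # clarifications and typo fixes
  esac
}

bump_version 1.0.0 MINOR   # prints: 1.1.0
```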
60  .gemini/commands/implement.toml  Normal file
@@ -0,0 +1,60 @@
description = "Execute the implementation plan by processing and executing all tasks defined in tasks.md"

prompt = """
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute.

2. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for the tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

3. Parse the tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements

4. Execute the implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow the TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase is complete before proceeding

5. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If tests are needed, write them for contracts, entities, and integration scenarios before the corresponding code
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

6. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks and report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if the implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task as [X] in the tasks file.

7. Completion validation:
   - Verify all required tasks are completed
   - Check that the implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report the final status with a summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/tasks` first to regenerate the task list.
"""
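For the task-completion rule in step 6, assuming tasks.md uses checkbox lines of the form `- [ ] T001 ...` (that format is an assumption, not confirmed by this commit), the marking step could be as small as:

```bash
#!/usr/bin/env bash
# Sketch: mark a completed task as [X] in tasks.md (assumes "- [ ] T###" checkbox lines).
mark_task_done() {
  local task_id="$1" tasks_file="$2"
  sed -i.bak "s/^- \[ \] ${task_id}/- [X] ${task_id}/" "$tasks_file"
}

mark_task_done "T001" "specs/001-example-feature/tasks.md"   # hypothetical path
```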
47  .gemini/commands/plan.toml  Normal file
@@ -0,0 +1,47 @@
description = "Execute the implementation planning workflow using the plan template to generate design artifacts."

prompt = """
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Given the implementation details provided as an argument, do this:

1. Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse the JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH. All future file paths must be absolute.
   - BEFORE proceeding, inspect FEATURE_SPEC for a `## Clarifications` section with at least one `Session` subheading. If it is missing, or clearly ambiguous areas remain (vague adjectives, unresolved critical choices), PAUSE and instruct the user to run `/clarify` first to reduce rework. Only continue if: (a) clarifications exist, OR (b) an explicit user override is provided (e.g., "proceed without clarification"). Do not attempt to fabricate clarifications yourself.
2. Read and analyze the feature specification to understand:
   - The feature requirements and user stories
   - Functional and non-functional requirements
   - Success criteria and acceptance criteria
   - Any technical constraints or dependencies mentioned

3. Read the constitution at `.specify/memory/constitution.md` to understand constitutional requirements.

4. Execute the implementation plan template:
   - Load `.specify/templates/plan-template.md` (already copied to the IMPL_PLAN path)
   - Set the Input path to FEATURE_SPEC
   - Run the Execution Flow (main) function steps 1-9
   - The template is self-contained and executable
   - Follow error handling and gate checks as specified
   - Let the template guide artifact generation in $SPECS_DIR:
      * Phase 0 generates research.md
      * Phase 1 generates data-model.md, contracts/, quickstart.md
      * Phase 2 generates tasks.md
   - Incorporate user-provided details from the arguments into the Technical Context: {{args}}
   - Update Progress Tracking as you complete each phase

5. Verify execution completed:
   - Check that Progress Tracking shows all phases complete
   - Ensure all required artifacts were generated
   - Confirm there are no ERROR states in the execution

6. Report results with the branch name, file paths, and generated artifacts.

Use absolute paths (rooted at the repository root) for all file operations to avoid path issues.
"""
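The clarification gate in step 1 amounts to a simple check; a sketch, with a hypothetical spec path standing in for the value parsed from the setup-plan.sh JSON:

```bash
#!/usr/bin/env bash
# Sketch of the /plan pre-check: require a ## Clarifications section with a Session subheading.
FEATURE_SPEC="specs/001-example-feature/spec.md"   # hypothetical path for illustration
if grep -q '^## Clarifications' "$FEATURE_SPEC" && grep -q '^### Session ' "$FEATURE_SPEC"; then
  echo "Clarifications found; continuing with /plan."
else
  echo "No clarification session found; run /clarify first (or give an explicit override)." >&2
fi
```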
25  .gemini/commands/specify.toml  Normal file
@@ -0,0 +1,25 @@
description = "Create or update the feature specification from a natural language feature description."

prompt = """
---
description: Create or update the feature specification from a natural language feature description.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

The text the user typed after `/specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{{args}}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. Run the script `.specify/scripts/bash/create-new-feature.sh --json "{{args}}"` from the repo root and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute.
   **IMPORTANT**: You must only ever run this script once. The JSON is provided as terminal output - always refer to it for the actual content you're looking for.
2. Load `.specify/templates/spec-template.md` to understand the required sections.
3. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
4. Report completion with the branch name, spec file path, and readiness for the next phase.

Note: The script creates and checks out the new branch and initializes the spec file before writing.
"""
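A sketch of step 1, assuming jq is available and using a made-up feature description, running the script exactly once and reusing its JSON output:

```bash
#!/usr/bin/env bash
# Sketch: run create-new-feature.sh once and parse BRANCH_NAME / SPEC_FILE (assumes jq).
feature_json="$(.specify/scripts/bash/create-new-feature.sh --json "add markdown preview window")"
BRANCH_NAME="$(jq -r '.BRANCH_NAME' <<< "$feature_json")"
SPEC_FILE="$(jq -r '.SPEC_FILE' <<< "$feature_json")"
echo "Writing spec for $BRANCH_NAME to $SPEC_FILE"
```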
66  .gemini/commands/tasks.toml  Normal file
@@ -0,0 +1,66 @@
description = "Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts."

prompt = """
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---

The user input can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

1. Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute.
2. Load and analyze available design documents:
   - Always read plan.md for the tech stack and libraries
   - IF EXISTS: Read data-model.md for entities
   - IF EXISTS: Read contracts/ for API endpoints
   - IF EXISTS: Read research.md for technical decisions
   - IF EXISTS: Read quickstart.md for test scenarios

   Note: Not all projects have all documents. For example:
   - CLI tools might not have contracts/
   - Simple libraries might not need data-model.md
   - Generate tasks based on what's available

3. Generate tasks following the template:
   - Use `.specify/templates/tasks-template.md` as the base
   - Replace the example tasks with actual tasks based on:
      * **Setup tasks**: Project init, dependencies, linting
      * **Test tasks [P]**: One per contract, one per integration scenario
      * **Core tasks**: One per entity, service, CLI command, endpoint
      * **Integration tasks**: DB connections, middleware, logging
      * **Polish tasks [P]**: Unit tests, performance, docs

4. Task generation rules:
   - Each contract file → contract test task marked [P]
   - Each entity in the data model → model creation task marked [P]
   - Each endpoint → implementation task (not parallel if files are shared)
   - Each user story → integration test marked [P]
   - Different files = can be parallel [P]
   - Same file = sequential (no [P])

5. Order tasks by dependencies:
   - Setup before everything
   - Tests before implementation (TDD)
   - Models before services
   - Services before endpoints
   - Core before integration
   - Everything before polish

6. Include parallel execution examples:
   - Group [P] tasks that can run together
   - Show actual Task agent commands

7. Create FEATURE_DIR/tasks.md with:
   - The correct feature name from the implementation plan
   - Numbered tasks (T001, T002, etc.)
   - Clear file paths for each task
   - Dependency notes
   - Parallel execution guidance

Context for task generation: {{args}}

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
"""
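Rule 4 ("each contract file → contract test task marked [P]") is a simple enumeration; a sketch, with a hypothetical feature directory and task wording:

```bash
#!/usr/bin/env bash
# Sketch: emit one [P] contract-test task per file in contracts/ (hypothetical feature dir).
FEATURE_DIR="specs/001-example-feature"
n=0
for contract in "$FEATURE_DIR"/contracts/*; do
  [ -f "$contract" ] || continue
  n=$((n + 1))
  printf -- "- [ ] T%03d [P] Contract test for %s\n" "$n" "$(basename "$contract")"
done
```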
42  .specify/memory/constitution.md  Normal file
@@ -0,0 +1,42 @@
<!--
Sync Impact Report:
- Version change: none → 1.0.0
- Added sections:
  - Core Principles
  - Governance
- Removed sections:
  - [SECTION_2_NAME]
  - [SECTION_3_NAME]
- Modified principles: none
- Templates requiring updates:
  - ✅ .specify/memory/constitution.md
  - ⚠ pending: .specify/templates/plan-template.md
  - ⚠ pending: .specify/templates/spec-template.md
  - ⚠ pending: .specify/templates/tasks-template.md
  - ⚠ pending: .gemini/commands/analyze.toml
  - ⚠ pending: .gemini/commands/clarify.toml
  - ⚠ pending: .gemini/commands/constitution.toml
  - ⚠ pending: .gemini/commands/implement.toml
  - ⚠ pending: .gemini/commands/plan.toml
  - ⚠ pending: .gemini/commands/specify.toml
  - ⚠ pending: .gemini/commands/tasks.toml
- Follow-up TODOs: none
-->
# notex.nvim Constitution

## Core Principles

### I. Clean Code
Code should be written in a way that is easy to read, understand, and maintain. Follow established style guides and best practices.

### II. Functional Style
Favor a functional programming style with immutable data structures and pure functions where possible and appropriate for the language.

### III. Descriptive Coding
Write self-documenting code with descriptive function and variable names. Avoid comments that explain *what* the code is doing; the code should speak for itself. Comments should only be used to explain *why* a certain implementation was chosen when it's not obvious.

## Governance

All pull requests and reviews must verify compliance with this constitution. Any deviation from these principles must be explicitly justified and approved.

**Version**: 1.0.0 | **Ratified**: 2025-10-01 | **Last Amended**: 2025-10-01
166  .specify/scripts/bash/check-prerequisites.sh  Executable file
@@ -0,0 +1,166 @@
#!/usr/bin/env bash

# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for the Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
#   --json              Output in JSON format
#   --require-tasks     Require tasks.md to exist (for implementation phase)
#   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
#   --paths-only        Only output path variables (no validation)
#   --help, -h          Show help message
#
# OUTPUTS:
#   JSON mode:   {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
#   Text mode:   FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
#   Paths only:  REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.

set -e

# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --require-tasks)
            REQUIRE_TASKS=true
            ;;
        --include-tasks)
            INCLUDE_TASKS=true
            ;;
        --paths-only)
            PATHS_ONLY=true
            ;;
        --help|-h)
            cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]

Consolidated prerequisite checking for the Spec-Driven Development workflow.

OPTIONS:
  --json              Output in JSON format
  --require-tasks     Require tasks.md to exist (for implementation phase)
  --include-tasks     Include tasks.md in AVAILABLE_DOCS list
  --paths-only        Only output path variables (no prerequisite validation)
  --help, -h          Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  ./check-prerequisites.sh --json

  # Check implementation prerequisites (plan.md + tasks.md required)
  ./check-prerequisites.sh --json --require-tasks --include-tasks

  # Get feature paths only (no validation)
  ./check-prerequisites.sh --paths-only

EOF
            exit 0
            ;;
        *)
            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
            exit 1
            ;;
    esac
done

# Source common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
    if $JSON_MODE; then
        # Minimal JSON paths payload (no validation performed)
        printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
            "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
    else
        echo "REPO_ROOT: $REPO_ROOT"
        echo "BRANCH: $CURRENT_BRANCH"
        echo "FEATURE_DIR: $FEATURE_DIR"
        echo "FEATURE_SPEC: $FEATURE_SPEC"
        echo "IMPL_PLAN: $IMPL_PLAN"
        echo "TASKS: $TASKS"
    fi
    exit 0
fi

# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
    echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
    echo "Run /specify first to create the feature structure." >&2
    exit 1
fi

if [[ ! -f "$IMPL_PLAN" ]]; then
    echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
    echo "Run /plan first to create the implementation plan." >&2
    exit 1
fi

# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
    echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
    echo "Run /tasks first to create the task list." >&2
    exit 1
fi

# Build list of available documents
docs=()

# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")

# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
    docs+=("contracts/")
fi

[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")

# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
    docs+=("tasks.md")
fi

# Output results
if $JSON_MODE; then
    # Build JSON array of documents
    if [[ ${#docs[@]} -eq 0 ]]; then
        json_docs="[]"
    else
        json_docs=$(printf '"%s",' "${docs[@]}")
        json_docs="[${json_docs%,}]"
    fi

    printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
    # Text output
    echo "FEATURE_DIR:$FEATURE_DIR"
    echo "AVAILABLE_DOCS:"

    # Show status of each potential document
    check_file "$RESEARCH" "research.md"
    check_file "$DATA_MODEL" "data-model.md"
    check_dir "$CONTRACTS_DIR" "contracts/"
    check_file "$QUICKSTART" "quickstart.md"

    if $INCLUDE_TASKS; then
        check_file "$TASKS" "tasks.md"
    fi
fi
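The header comments above already document the flag combinations; for completeness, a usage sketch that consumes the JSON output (jq is an assumed extra, not required by the script itself):

```bash
#!/usr/bin/env bash
# Implementation-phase check: print FEATURE_DIR plus the available design docs (assumes jq).
.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks |
  jq -r '"FEATURE_DIR: \(.FEATURE_DIR)", "DOCS: \(.AVAILABLE_DOCS | join(", "))"'

# Paths only, plain-text form:
.specify/scripts/bash/check-prerequisites.sh --paths-only
```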
113  .specify/scripts/bash/common.sh  Executable file
@@ -0,0 +1,113 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts

# Get repository root, with fallback for non-git repositories
get_repo_root() {
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
    else
        # Fall back to script location for non-git repos
        local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
        (cd "$script_dir/../../.." && pwd)
    fi
}

# Get current branch, with fallback for non-git repositories
get_current_branch() {
    # First check if SPECIFY_FEATURE environment variable is set
    if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
        echo "$SPECIFY_FEATURE"
        return
    fi

    # Then check git if available
    if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
        git rev-parse --abbrev-ref HEAD
        return
    fi

    # For non-git repos, try to find the latest feature directory
    local repo_root=$(get_repo_root)
    local specs_dir="$repo_root/specs"

    if [[ -d "$specs_dir" ]]; then
        local latest_feature=""
        local highest=0

        for dir in "$specs_dir"/*; do
            if [[ -d "$dir" ]]; then
                local dirname=$(basename "$dir")
                if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
                    local number=${BASH_REMATCH[1]}
                    number=$((10#$number))
                    if [[ "$number" -gt "$highest" ]]; then
                        highest=$number
                        latest_feature=$dirname
                    fi
                fi
            fi
        done

        if [[ -n "$latest_feature" ]]; then
            echo "$latest_feature"
            return
        fi
    fi

    echo "main"  # Final fallback
}

# Check if we have git available
has_git() {
    git rev-parse --show-toplevel >/dev/null 2>&1
}

check_feature_branch() {
    local branch="$1"
    local has_git_repo="$2"

    # For non-git repos, we can't enforce branch naming but still provide output
    if [[ "$has_git_repo" != "true" ]]; then
        echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
        return 0
    fi

    if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
        echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
        echo "Feature branches should be named like: 001-feature-name" >&2
        return 1
    fi

    return 0
}

get_feature_dir() { echo "$1/specs/$2"; }

get_feature_paths() {
    local repo_root=$(get_repo_root)
    local current_branch=$(get_current_branch)
    local has_git_repo="false"

    if has_git; then
        has_git_repo="true"
    fi

    local feature_dir=$(get_feature_dir "$repo_root" "$current_branch")

    cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}

check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
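common.sh is not run directly; the sibling scripts source it and eval the variable assignments printed by get_feature_paths. A minimal consumer sketch of that pattern:

```bash
#!/usr/bin/env bash
# Sketch of how the sibling scripts consume common.sh.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

eval "$(get_feature_paths)"                                # defines REPO_ROOT, FEATURE_DIR, FEATURE_SPEC, ...
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
echo "Feature dir: $FEATURE_DIR (spec: $FEATURE_SPEC)"
```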
97  .specify/scripts/bash/create-new-feature.sh  Executable file
@@ -0,0 +1,97 @@
#!/usr/bin/env bash

set -e

JSON_MODE=false
ARGS=()
for arg in "$@"; do
    case "$arg" in
        --json) JSON_MODE=true ;;
        --help|-h) echo "Usage: $0 [--json] <feature_description>"; exit 0 ;;
        *) ARGS+=("$arg") ;;
    esac
done

FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Usage: $0 [--json] <feature_description>" >&2
    exit 1
fi

# Function to find the repository root by searching for existing project markers
find_repo_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if git rev-parse --show-toplevel >/dev/null 2>&1; then
    REPO_ROOT=$(git rev-parse --show-toplevel)
    HAS_GIT=true
else
    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
    if [ -z "$REPO_ROOT" ]; then
        echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
        exit 1
    fi
    HAS_GIT=false
fi

cd "$REPO_ROOT"

SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"

HIGHEST=0
if [ -d "$SPECS_DIR" ]; then
    for dir in "$SPECS_DIR"/*; do
        [ -d "$dir" ] || continue
        dirname=$(basename "$dir")
        number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
        number=$((10#$number))
        if [ "$number" -gt "$HIGHEST" ]; then HIGHEST=$number; fi
    done
fi

NEXT=$((HIGHEST + 1))
FEATURE_NUM=$(printf "%03d" "$NEXT")

BRANCH_NAME=$(echo "$FEATURE_DESCRIPTION" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//')
WORDS=$(echo "$BRANCH_NAME" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//')
BRANCH_NAME="${FEATURE_NUM}-${WORDS}"

if [ "$HAS_GIT" = true ]; then
    git checkout -b "$BRANCH_NAME"
else
    >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi

FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"

TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi

# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"

if $JSON_MODE; then
    printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
    echo "BRANCH_NAME: $BRANCH_NAME"
    echo "SPEC_FILE: $SPEC_FILE"
    echo "FEATURE_NUM: $FEATURE_NUM"
    echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
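An example run with a made-up description: the branch name combines the next free three-digit number with the first three slug words, and `<repo>` below stands for the repository root.

```bash
#!/usr/bin/env bash
# Example invocation (the description is illustrative).
.specify/scripts/bash/create-new-feature.sh --json "Add markdown preview window"
# Expected JSON shape:
# {"BRANCH_NAME":"001-add-markdown-preview","SPEC_FILE":"<repo>/specs/001-add-markdown-preview/spec.md","FEATURE_NUM":"001"}
```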
60  .specify/scripts/bash/setup-plan.sh  Executable file
@@ -0,0 +1,60 @@
#!/usr/bin/env bash

set -e

# Parse command line arguments
JSON_MODE=false
ARGS=()

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json]"
            echo "  --json    Output results in JSON format"
            echo "  --help    Show this help message"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
done

# Get script directory and load common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"

# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
    cp "$TEMPLATE" "$IMPL_PLAN"
    echo "Copied plan template to $IMPL_PLAN"
else
    echo "Warning: Plan template not found at $TEMPLATE"
    # Create a basic plan file if template doesn't exist
    touch "$IMPL_PLAN"
fi

# Output results
if $JSON_MODE; then
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
    echo "FEATURE_SPEC: $FEATURE_SPEC"
    echo "IMPL_PLAN: $IMPL_PLAN"
    echo "SPECS_DIR: $FEATURE_DIR"
    echo "BRANCH: $CURRENT_BRANCH"
    echo "HAS_GIT: $HAS_GIT"
fi
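A usage sketch for setup-plan.sh, run from the repo root on a feature branch (jq is an assumed extra for the JSON form):

```bash
#!/usr/bin/env bash
# Sketch: capture the plan-setup paths emitted as JSON (assumes jq).
setup_json="$(.specify/scripts/bash/setup-plan.sh --json)"
IMPL_PLAN="$(jq -r '.IMPL_PLAN' <<< "$setup_json")"
FEATURE_SPEC="$(jq -r '.FEATURE_SPEC' <<< "$setup_json")"
echo "Plan template ready at $IMPL_PLAN (spec input: $FEATURE_SPEC)"
```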
719
.specify/scripts/bash/update-agent-context.sh
Executable file
719
.specify/scripts/bash/update-agent-context.sh
Executable file
|
@ -0,0 +1,719 @@
#!/usr/bin/env bash

# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
#    - Verifies git repository structure and branch information
#    - Checks for required plan.md files and templates
#    - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
#    - Parses plan.md files to extract project metadata
#    - Identifies language/version, frameworks, databases, and project types
#    - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
#    - Creates new agent context files from templates when needed
#    - Updates existing agent files with new project information
#    - Preserves manual additions and custom configurations
#    - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
#    - Generates language-specific build/test commands
#    - Creates appropriate project directory structures
#    - Updates technology stacks and recent changes sections
#    - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
#    - Handles agent-specific file paths and naming conventions
#    - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf
#    - Can update single agents or all existing agent files
#    - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf
# Leave empty to update all existing agent files
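#
# Examples (illustrative):
#   ./update-agent-context.sh           # update every agent context file that already exists
#   ./update-agent-context.sh claude    # update (or create) only the Claude Code file, CLAUDE.md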

set -e

# Enable strict error handling
set -u
set -o pipefail

#==============================================================================
# Configuration and Global Variables
#==============================================================================

# Get script directory and load common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

NEW_PLAN="$IMPL_PLAN"  # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"

# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"

# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"

# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""

#==============================================================================
# Utility Functions
#==============================================================================

log_info() {
    echo "INFO: $1"
}

log_success() {
    echo "✓ $1"
}

log_error() {
    echo "ERROR: $1" >&2
}

log_warning() {
    echo "WARNING: $1" >&2
}

# Cleanup function for temporary files
cleanup() {
    local exit_code=$?
    rm -f /tmp/agent_update_*_$$
    rm -f /tmp/manual_additions_$$
    exit $exit_code
}

# Set up cleanup trap
trap cleanup EXIT INT TERM

#==============================================================================
# Validation Functions
#==============================================================================

validate_environment() {
    # Check if we have a current branch/feature (git or non-git)
    if [[ -z "$CURRENT_BRANCH" ]]; then
        log_error "Unable to determine current feature"
        if [[ "$HAS_GIT" == "true" ]]; then
            log_info "Make sure you're on a feature branch"
        else
            log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
        fi
        exit 1
    fi

    # Check if plan.md exists
    if [[ ! -f "$NEW_PLAN" ]]; then
        log_error "No plan.md found at $NEW_PLAN"
        log_info "Make sure you're working on a feature with a corresponding spec directory"
        if [[ "$HAS_GIT" != "true" ]]; then
            log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
        fi
        exit 1
    fi

    # Check if template exists (needed for new files)
    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_warning "Template file not found at $TEMPLATE_FILE"
        log_warning "Creating new agent files will fail"
    fi
}

#==============================================================================
# Plan Parsing Functions
#==============================================================================

extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}
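
# Example (hypothetical plan.md line, for illustration only):
#   **Language/Version**: Python 3.11
#   extract_plan_field "Language/Version" plan.md  ->  "Python 3.11"
# Values of "NEEDS CLARIFICATION" or "N/A" are filtered out and yield an empty string.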

parse_plan_data() {
    local plan_file="$1"

    if [[ ! -f "$plan_file" ]]; then
        log_error "Plan file not found: $plan_file"
        return 1
    fi

    if [[ ! -r "$plan_file" ]]; then
        log_error "Plan file is not readable: $plan_file"
        return 1
    fi

    log_info "Parsing plan data from $plan_file"

    NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
    NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
    NEW_DB=$(extract_plan_field "Storage" "$plan_file")
    NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")

    # Log what we found
    if [[ -n "$NEW_LANG" ]]; then
        log_info "Found language: $NEW_LANG"
    else
        log_warning "No language information found in plan"
    fi

    if [[ -n "$NEW_FRAMEWORK" ]]; then
        log_info "Found framework: $NEW_FRAMEWORK"
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
        log_info "Found database: $NEW_DB"
    fi

    if [[ -n "$NEW_PROJECT_TYPE" ]]; then
        log_info "Found project type: $NEW_PROJECT_TYPE"
    fi
}

format_technology_stack() {
    local lang="$1"
    local framework="$2"
    local parts=()

    # Add non-empty parts
    [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
    [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")

    # Join with proper formatting
    if [[ ${#parts[@]} -eq 0 ]]; then
        echo ""
    elif [[ ${#parts[@]} -eq 1 ]]; then
        echo "${parts[0]}"
    else
        # Join multiple parts with " + "
        local result="${parts[0]}"
        for ((i=1; i<${#parts[@]}; i++)); do
            result="$result + ${parts[i]}"
        done
        echo "$result"
    fi
}
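
# Examples (hypothetical values, for illustration only):
#   format_technology_stack "Python 3.11" "FastAPI"  ->  "Python 3.11 + FastAPI"
#   format_technology_stack "Rust 1.75" "N/A"        ->  "Rust 1.75"
#   format_technology_stack "" ""                    ->  "" (empty)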

#==============================================================================
# Template and Content Generation Functions
#==============================================================================

get_project_structure() {
    local project_type="$1"

    if [[ "$project_type" == *"web"* ]]; then
        echo "backend/\\nfrontend/\\ntests/"
    else
        echo "src/\\ntests/"
    fi
}

get_commands_for_language() {
    local lang="$1"

    case "$lang" in
        *"Python"*)
            echo "cd src && pytest && ruff check ."
            ;;
        *"Rust"*)
            echo "cargo test && cargo clippy"
            ;;
        *"JavaScript"*|*"TypeScript"*)
            echo "npm test && npm run lint"
            ;;
        *)
            echo "# Add commands for $lang"
            ;;
    esac
}
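
# Example (hypothetical value): get_commands_for_language "Python 3.11"
#   -> "cd src && pytest && ruff check ."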

get_language_conventions() {
    local lang="$1"
    echo "$lang: Follow standard conventions"
}

create_new_agent_file() {
    local target_file="$1"
    local temp_file="$2"
    local project_name="$3"
    local current_date="$4"

    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_error "Template not found at $TEMPLATE_FILE"
        return 1
    fi

    if [[ ! -r "$TEMPLATE_FILE" ]]; then
        log_error "Template file is not readable: $TEMPLATE_FILE"
        return 1
    fi

    log_info "Creating new agent context file from template..."

    if ! cp "$TEMPLATE_FILE" "$temp_file"; then
        log_error "Failed to copy template file"
        return 1
    fi

    # Replace template placeholders
    local project_structure
    project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")

    local commands
    commands=$(get_commands_for_language "$NEW_LANG")

    local language_conventions
    language_conventions=$(get_language_conventions "$NEW_LANG")

    # Perform substitutions with error checking using safer approach
    # Escape special characters for sed by using a different delimiter or escaping
    local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')

    # Build technology stack and recent change strings conditionally
    local tech_stack
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
    elif [[ -n "$escaped_lang" ]]; then
        tech_stack="- $escaped_lang ($escaped_branch)"
    elif [[ -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_framework ($escaped_branch)"
    else
        tech_stack="- ($escaped_branch)"
    fi

    local recent_change
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
    elif [[ -n "$escaped_lang" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang"
    elif [[ -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_framework"
    else
        recent_change="- $escaped_branch: Added"
    fi

    local substitutions=(
        "s|\[PROJECT NAME\]|$project_name|"
        "s|\[DATE\]|$current_date|"
        "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
        "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
        "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
        "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
        "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
    )

    for substitution in "${substitutions[@]}"; do
        if ! sed -i.bak -e "$substitution" "$temp_file"; then
            log_error "Failed to perform substitution: $substitution"
            rm -f "$temp_file" "$temp_file.bak"
            return 1
        fi
    done

    # Convert \n sequences to actual newlines
    newline=$(printf '\n')
    sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"

    # Clean up backup files
    rm -f "$temp_file.bak" "$temp_file.bak2"

    return 0
}

update_existing_agent_file() {
    local target_file="$1"
    local current_date="$2"

    log_info "Updating existing agent context file..."

    # Use a single temporary file for atomic update
    local temp_file
    temp_file=$(mktemp) || {
        log_error "Failed to create temporary file"
        return 1
    }

    # Process the file in one pass
    local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
    local new_tech_entries=()
    local new_change_entry=""

    # Prepare new technology entries
    if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
        new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
        new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
    fi

    # Prepare new change entry
    if [[ -n "$tech_stack" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
    elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
    fi

    # Process file line by line
    local in_tech_section=false
    local in_changes_section=false
    local tech_entries_added=false
    local changes_entries_added=false
    local existing_changes_count=0

    while IFS= read -r line || [[ -n "$line" ]]; do
        # Handle Active Technologies section
        if [[ "$line" == "## Active Technologies" ]]; then
            echo "$line" >> "$temp_file"
            in_tech_section=true
            continue
        elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            # Add new tech entries before closing the section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            in_tech_section=false
            continue
        elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
            # Add new tech entries before empty line in tech section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            continue
        fi

        # Handle Recent Changes section
        if [[ "$line" == "## Recent Changes" ]]; then
            echo "$line" >> "$temp_file"
            # Add new change entry right after the heading
            if [[ -n "$new_change_entry" ]]; then
                echo "$new_change_entry" >> "$temp_file"
            fi
            in_changes_section=true
            changes_entries_added=true
            continue
        elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            echo "$line" >> "$temp_file"
            in_changes_section=false
            continue
        elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
            # Keep only first 2 existing changes
            if [[ $existing_changes_count -lt 2 ]]; then
                echo "$line" >> "$temp_file"
                ((existing_changes_count++))
            fi
            continue
        fi

        # Update timestamp
        if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
            echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
        else
            echo "$line" >> "$temp_file"
        fi
    done < "$target_file"

    # Post-loop check: if we're still in the Active Technologies section and haven't added new entries
    if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
        printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
    fi

    # Move temp file to target atomically
    if ! mv "$temp_file" "$target_file"; then
        log_error "Failed to update target file"
        rm -f "$temp_file"
        return 1
    fi

    return 0
}

#==============================================================================
# Main Agent File Update Function
#==============================================================================

update_agent_file() {
    local target_file="$1"
    local agent_name="$2"

    if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
        log_error "update_agent_file requires target_file and agent_name parameters"
        return 1
    fi

    log_info "Updating $agent_name context file: $target_file"

    local project_name
    project_name=$(basename "$REPO_ROOT")
    local current_date
    current_date=$(date +%Y-%m-%d)

    # Create directory if it doesn't exist
    local target_dir
    target_dir=$(dirname "$target_file")
    if [[ ! -d "$target_dir" ]]; then
        if ! mkdir -p "$target_dir"; then
            log_error "Failed to create directory: $target_dir"
            return 1
        fi
    fi

    if [[ ! -f "$target_file" ]]; then
        # Create new file from template
        local temp_file
        temp_file=$(mktemp) || {
            log_error "Failed to create temporary file"
            return 1
        }

        if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
            if mv "$temp_file" "$target_file"; then
                log_success "Created new $agent_name context file"
            else
                log_error "Failed to move temporary file to $target_file"
                rm -f "$temp_file"
                return 1
            fi
        else
            log_error "Failed to create new agent file"
            rm -f "$temp_file"
            return 1
        fi
    else
        # Update existing file
        if [[ ! -r "$target_file" ]]; then
            log_error "Cannot read existing file: $target_file"
            return 1
        fi

        if [[ ! -w "$target_file" ]]; then
            log_error "Cannot write to existing file: $target_file"
            return 1
        fi

        if update_existing_agent_file "$target_file" "$current_date"; then
            log_success "Updated existing $agent_name context file"
        else
            log_error "Failed to update existing agent file"
            return 1
        fi
    fi

    return 0
}

#==============================================================================
# Agent Selection and Processing
#==============================================================================

update_specific_agent() {
    local agent_type="$1"

    case "$agent_type" in
        claude)
            update_agent_file "$CLAUDE_FILE" "Claude Code"
            ;;
        gemini)
            update_agent_file "$GEMINI_FILE" "Gemini CLI"
            ;;
        copilot)
            update_agent_file "$COPILOT_FILE" "GitHub Copilot"
            ;;
        cursor)
            update_agent_file "$CURSOR_FILE" "Cursor IDE"
            ;;
        qwen)
            update_agent_file "$QWEN_FILE" "Qwen Code"
            ;;
        opencode)
            update_agent_file "$AGENTS_FILE" "opencode"
            ;;
        codex)
            update_agent_file "$AGENTS_FILE" "Codex CLI"
            ;;
        windsurf)
            update_agent_file "$WINDSURF_FILE" "Windsurf"
            ;;
        kilocode)
            update_agent_file "$KILOCODE_FILE" "Kilo Code"
            ;;
        auggie)
            update_agent_file "$AUGGIE_FILE" "Auggie CLI"
            ;;
        roo)
            update_agent_file "$ROO_FILE" "Roo Code"
            ;;
        *)
            log_error "Unknown agent type '$agent_type'"
            log_error "Expected: claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf|kilocode|auggie|roo"
            exit 1
            ;;
    esac
}

update_all_existing_agents() {
    local found_agent=false

    # Check each possible agent file and update if it exists
    if [[ -f "$CLAUDE_FILE" ]]; then
        update_agent_file "$CLAUDE_FILE" "Claude Code"
        found_agent=true
    fi

    if [[ -f "$GEMINI_FILE" ]]; then
        update_agent_file "$GEMINI_FILE" "Gemini CLI"
        found_agent=true
    fi

    if [[ -f "$COPILOT_FILE" ]]; then
        update_agent_file "$COPILOT_FILE" "GitHub Copilot"
        found_agent=true
    fi

    if [[ -f "$CURSOR_FILE" ]]; then
        update_agent_file "$CURSOR_FILE" "Cursor IDE"
        found_agent=true
    fi

    if [[ -f "$QWEN_FILE" ]]; then
        update_agent_file "$QWEN_FILE" "Qwen Code"
        found_agent=true
    fi

    if [[ -f "$AGENTS_FILE" ]]; then
        update_agent_file "$AGENTS_FILE" "Codex/opencode"
        found_agent=true
    fi

    if [[ -f "$WINDSURF_FILE" ]]; then
        update_agent_file "$WINDSURF_FILE" "Windsurf"
        found_agent=true
    fi

    if [[ -f "$KILOCODE_FILE" ]]; then
        update_agent_file "$KILOCODE_FILE" "Kilo Code"
        found_agent=true
    fi

    if [[ -f "$AUGGIE_FILE" ]]; then
        update_agent_file "$AUGGIE_FILE" "Auggie CLI"
        found_agent=true
    fi

    if [[ -f "$ROO_FILE" ]]; then
        update_agent_file "$ROO_FILE" "Roo Code"
        found_agent=true
    fi

    # If no agent files exist, create a default Claude file
    if [[ "$found_agent" == false ]]; then
        log_info "No existing agent files found, creating default Claude file..."
        update_agent_file "$CLAUDE_FILE" "Claude Code"
    fi
}

print_summary() {
    echo
    log_info "Summary of changes:"

    if [[ -n "$NEW_LANG" ]]; then
        echo "  - Added language: $NEW_LANG"
    fi

    if [[ -n "$NEW_FRAMEWORK" ]]; then
        echo "  - Added framework: $NEW_FRAMEWORK"
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
        echo "  - Added database: $NEW_DB"
    fi

    echo
    log_info "Usage: $0 [claude|gemini|copilot|cursor|qwen|opencode|codex|windsurf|kilocode|auggie|roo]"
}

#==============================================================================
# Main Execution
#==============================================================================

main() {
    # Validate environment before proceeding
    validate_environment

    log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="

    # Parse the plan file to extract project information
    if ! parse_plan_data "$NEW_PLAN"; then
        log_error "Failed to parse plan data"
        exit 1
    fi

    # Process based on agent type argument
    local success=true

    if [[ -z "$AGENT_TYPE" ]]; then
        # No specific agent provided - update all existing agent files
        log_info "No agent specified, updating all existing agent files..."
        if ! update_all_existing_agents; then
            success=false
        fi
    else
        # Specific agent provided - update only that agent
        log_info "Updating specific agent: $AGENT_TYPE"
        if ! update_specific_agent "$AGENT_TYPE"; then
            success=false
        fi
    fi

    # Print summary
    print_summary

    if [[ "$success" == true ]]; then
        log_success "Agent context update completed successfully"
        exit 0
    else
        log_error "Agent context update completed with errors"
        exit 1
    fi
}

# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
23
.specify/templates/agent-file-template.md
Normal file

@@ -0,0 +1,23 @@
# [PROJECT NAME] Development Guidelines

Auto-generated from all feature plans. Last updated: [DATE]

## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]

## Project Structure
```
[ACTUAL STRUCTURE FROM PLANS]
```

## Commands
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]

## Code Style
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]

## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
221
.specify/templates/plan-template.md
Normal file

@@ -0,0 +1,221 @@
# Implementation Plan: [FEATURE]

**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`

## Execution Flow (/plan command scope)
```
1. Load feature spec from Input path
   → If not found: ERROR "No feature spec at {path}"
2. Fill Technical Context (scan for NEEDS CLARIFICATION)
   → Detect Project Type from file system structure or context (web=frontend+backend, mobile=app+api)
   → Set Structure Decision based on project type
3. Fill the Constitution Check section based on the content of the constitution document.
4. Evaluate Constitution Check section below
   → If violations exist: Document in Complexity Tracking
   → If no justification possible: ERROR "Simplify approach first"
   → Update Progress Tracking: Initial Constitution Check
5. Execute Phase 0 → research.md
   → If NEEDS CLARIFICATION remain: ERROR "Resolve unknowns"
6. Execute Phase 1 → contracts, data-model.md, quickstart.md, agent-specific template file (e.g., `CLAUDE.md` for Claude Code, `.github/copilot-instructions.md` for GitHub Copilot, `GEMINI.md` for Gemini CLI, `QWEN.md` for Qwen Code or `AGENTS.md` for opencode).
7. Re-evaluate Constitution Check section
   → If new violations: Refactor design, return to Phase 1
   → Update Progress Tracking: Post-Design Constitution Check
8. Plan Phase 2 → Describe task generation approach (DO NOT create tasks.md)
9. STOP - Ready for /tasks command
```

**IMPORTANT**: The /plan command STOPS at step 7. Phases 2-4 are executed by other commands:
- Phase 2: /tasks command creates tasks.md
- Phase 3-4: Implementation execution (manual or via tools)

## Summary
[Extract from feature spec: primary requirement + technical approach from research]

## Technical Context
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

* **I. Clean Code**: Is the proposed code structure and design clean and maintainable?
* **II. Functional Style**: Does the design favor a functional approach where appropriate?
* **III. Descriptive Coding**: Is the naming of components and files descriptive and self-documenting?

## Project Structure

### Documentation (this feature)
```
specs/[###-feature]/
├── plan.md              # This file (/plan command output)
├── research.md          # Phase 0 output (/plan command)
├── data-model.md        # Phase 1 output (/plan command)
├── quickstart.md        # Phase 1 output (/plan command)
├── contracts/           # Phase 1 output (/plan command)
└── tasks.md             # Phase 2 output (/tasks command - NOT created by /plan)
```

### Source Code (repository root)
<!--
  ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
  for this feature. Delete unused options and expand the chosen structure with
  real paths (e.g., apps/admin, packages/something). The delivered plan must
  not include Option labels.
-->
```
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/

tests/
├── contract/
├── integration/
└── unit/

# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│   ├── models/
│   ├── services/
│   └── api/
└── tests/

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── services/
└── tests/

# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]

ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```

**Structure Decision**: [Document the selected structure and reference the real
directories captured above]

## Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:
   ```
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved

## Phase 1: Design & Contracts
*Prerequisites: research.md complete*

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Generate API contracts** from functional requirements:
   - For each user action → endpoint
   - Use standard REST/GraphQL patterns
   - Output OpenAPI/GraphQL schema to `/contracts/`

3. **Generate contract tests** from contracts:
   - One test file per endpoint
   - Assert request/response schemas
   - Tests must fail (no implementation yet)

4. **Extract test scenarios** from user stories:
   - Each story → integration test scenario
   - Quickstart test = story validation steps

5. **Update agent file incrementally** (O(1) operation):
   - Run `.specify/scripts/bash/update-agent-context.sh gemini`
     **IMPORTANT**: Execute it exactly as specified above. Do not add or remove any arguments.
   - If exists: Add only NEW tech from current plan
   - Preserve manual additions between markers
   - Update recent changes (keep last 3)
   - Keep under 150 lines for token efficiency
   - Output to repository root

**Output**: data-model.md, /contracts/*, failing tests, quickstart.md, agent-specific file

## Phase 2: Task Planning Approach
*This section describes what the /tasks command will do - DO NOT execute during /plan*

**Task Generation Strategy**:
- Load `.specify/templates/tasks-template.md` as base
- Generate tasks from Phase 1 design docs (contracts, data model, quickstart)
- Each contract → contract test task [P]
- Each entity → model creation task [P]
- Each user story → integration test task
- Implementation tasks to make tests pass

**Ordering Strategy**:
- TDD order: Tests before implementation
- Dependency order: Models before services before UI
- Mark [P] for parallel execution (independent files)

**Estimated Output**: 25-30 numbered, ordered tasks in tasks.md

**IMPORTANT**: This phase is executed by the /tasks command, NOT by /plan

## Phase 3+: Future Implementation
*These phases are beyond the scope of the /plan command*

**Phase 3**: Task execution (/tasks command creates tasks.md)
**Phase 4**: Implementation (execute tasks.md following constitutional principles)
**Phase 5**: Validation (run tests, execute quickstart.md, performance validation)

## Complexity Tracking
*Fill ONLY if Constitution Check has violations that must be justified*

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

## Progress Tracking
*This checklist is updated during execution flow*

**Phase Status**:
- [ ] Phase 0: Research complete (/plan command)
- [ ] Phase 1: Design complete (/plan command)
- [ ] Phase 2: Task planning complete (/plan command - describe approach only)
- [ ] Phase 3: Tasks generated (/tasks command)
- [ ] Phase 4: Implementation complete
- [ ] Phase 5: Validation passed

**Gate Status**:
- [ ] Initial Constitution Check: PASS
- [ ] Post-Design Constitution Check: PASS
- [ ] All NEEDS CLARIFICATION resolved
- [ ] Complexity deviations documented

---
*Based on Constitution v1.0.0 - See `/memory/constitution.md`*
116
.specify/templates/spec-template.md
Normal file

@@ -0,0 +1,116 @@
# Feature Specification: [FEATURE NAME]

**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"

## Execution Flow (main)
```
1. Parse user description from Input
   → If empty: ERROR "No feature description provided"
2. Extract key concepts from description
   → Identify: actors, actions, data, constraints
3. For each unclear aspect:
   → Mark with [NEEDS CLARIFICATION: specific question]
4. Fill User Scenarios & Testing section
   → If no clear user flow: ERROR "Cannot determine user scenarios"
5. Generate Functional Requirements
   → Each requirement must be testable
   → Mark ambiguous requirements
6. Identify Key Entities (if data involved)
7. Run Review Checklist
   → If any [NEEDS CLARIFICATION]: WARN "Spec has uncertainties"
   → If implementation details found: ERROR "Remove tech details"
8. Return: SUCCESS (spec ready for planning)
```

---

## ⚡ Quick Guidelines
- ✅ Focus on WHAT users need and WHY
- ❌ Avoid HOW to implement (no tech stack, APIs, code structure)
- 👥 Written for business stakeholders, not developers

### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")

### For AI Generation
When creating this spec from a user prompt:
1. **Mark all ambiguities**: Use [NEEDS CLARIFICATION: specific question] for any assumption you'd need to make
2. **Don't guess**: If the prompt doesn't specify something (e.g., "login system" without auth method), mark it
3. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
4. **Common underspecified areas**:
   - User types and permissions
   - Data retention/deletion policies
   - Performance targets and scale
   - Error handling behaviors
   - Integration requirements
   - Security/compliance needs

---

## User Scenarios & Testing *(mandatory)*

### Primary User Story
[Describe the main user journey in plain language]

### Acceptance Scenarios
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]

### Edge Cases
- What happens when [boundary condition]?
- How does system handle [error scenario]?

## Requirements *(mandatory)*

### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

---

## Review & Acceptance Checklist
*GATE: Automated checks run during main() execution*

### Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed

### Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified

---

## Execution Status
*Updated by main() during processing*

- [ ] User description parsed
- [ ] Key concepts extracted
- [ ] Ambiguities marked
- [ ] User scenarios defined
- [ ] Requirements generated
- [ ] Entities identified
- [ ] Review checklist passed

---
127
.specify/templates/tasks-template.md
Normal file

@@ -0,0 +1,127 @@
# Tasks: [FEATURE NAME]

**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), research.md, data-model.md, contracts/

## Execution Flow (main)
```
1. Load plan.md from feature directory
   → If not found: ERROR "No implementation plan found"
   → Extract: tech stack, libraries, structure
2. Load optional design documents:
   → data-model.md: Extract entities → model tasks
   → contracts/: Each file → contract test task
   → research.md: Extract decisions → setup tasks
3. Generate tasks by category:
   → Setup: project init, dependencies, linting
   → Tests: contract tests, integration tests
   → Core: models, services, CLI commands
   → Integration: DB, middleware, logging
   → Polish: unit tests, performance, docs
4. Apply task rules:
   → Different files = mark [P] for parallel
   → Same file = sequential (no [P])
   → Tests before implementation (TDD)
5. Number tasks sequentially (T001, T002...)
6. Generate dependency graph
7. Create parallel execution examples
8. Validate task completeness:
   → All contracts have tests?
   → All entities have models?
   → All endpoints implemented?
9. Return: SUCCESS (tasks ready for execution)
```

## Format: `[ID] [P?] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- Include exact file paths in descriptions

## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure

## Phase 3.1: Setup
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools

## Phase 3.2: Tests First (TDD) ⚠️ MUST COMPLETE BEFORE 3.3
**CRITICAL: These tests MUST be written and MUST FAIL before ANY implementation**
- [ ] T004 [P] Contract test POST /api/users in tests/contract/test_users_post.py
- [ ] T005 [P] Contract test GET /api/users/{id} in tests/contract/test_users_get.py
- [ ] T006 [P] Integration test user registration in tests/integration/test_registration.py
- [ ] T007 [P] Integration test auth flow in tests/integration/test_auth.py

## Phase 3.3: Core Implementation (ONLY after tests are failing)
- [ ] T008 [P] User model in src/models/user.py
- [ ] T009 [P] UserService CRUD in src/services/user_service.py
- [ ] T010 [P] CLI --create-user in src/cli/user_commands.py
- [ ] T011 POST /api/users endpoint
- [ ] T012 GET /api/users/{id} endpoint
- [ ] T013 Input validation
- [ ] T014 Error handling and logging

## Phase 3.4: Integration
- [ ] T015 Connect UserService to DB
- [ ] T016 Auth middleware
- [ ] T017 Request/response logging
- [ ] T018 CORS and security headers

## Phase 3.5: Polish
- [ ] T019 [P] Unit tests for validation in tests/unit/test_validation.py
- [ ] T020 Performance tests (<200ms)
- [ ] T021 [P] Update docs/api.md
- [ ] T022 Remove duplication
- [ ] T023 Run manual-testing.md

## Dependencies
- Tests (T004-T007) before implementation (T008-T014)
- T008 blocks T009, T015
- T016 blocks T018
- Implementation before polish (T019-T023)

## Parallel Example
```
# Launch T004-T007 together:
Task: "Contract test POST /api/users in tests/contract/test_users_post.py"
Task: "Contract test GET /api/users/{id} in tests/contract/test_users_get.py"
Task: "Integration test registration in tests/integration/test_registration.py"
Task: "Integration test auth in tests/integration/test_auth.py"
```

## Notes
- [P] tasks = different files, no dependencies
- Verify tests fail before implementing
- Commit after each task
- Avoid: vague tasks, same file conflicts

## Task Generation Rules
*Applied during main() execution*

1. **From Contracts**:
   - Each contract file → contract test task [P]
   - Each endpoint → implementation task

2. **From Data Model**:
   - Each entity → model creation task [P]
   - Relationships → service layer tasks

3. **From User Stories**:
   - Each story → integration test [P]
   - Quickstart scenarios → validation tasks

4. **Ordering**:
   - Setup → Tests → Models → Services → Endpoints → Polish
   - Dependencies block parallel execution

## Validation Checklist
*GATE: Checked by main() before returning*

- [ ] All contracts have corresponding tests
- [ ] All entities have model tasks
- [ ] All tests come before implementation
- [ ] Parallel tasks truly independent
- [ ] Each task specifies exact file path
- [ ] No task modifies same file as another [P] task