initial commit

commit 8d2fd778c2

CLAUDE.md (new file, 352 lines)
@@ -0,0 +1,352 @@
# Claude Code Configuration - SPARC Development Environment

## 🚨 CRITICAL: CONCURRENT EXECUTION & FILE MANAGEMENT

**ABSOLUTE RULES**:

1. ALL operations MUST be concurrent/parallel in a single message
2. **NEVER save working files, text/markdown files, or tests to the root folder**
3. ALWAYS organize files in appropriate subdirectories
4. **USE CLAUDE CODE'S TASK TOOL** for spawning agents concurrently, not just MCP

### ⚡ GOLDEN RULE: "1 MESSAGE = ALL RELATED OPERATIONS"

**MANDATORY PATTERNS:**

- **TodoWrite**: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
- **Task tool (Claude Code)**: ALWAYS spawn ALL agents in ONE message with full instructions
- **File operations**: ALWAYS batch ALL reads/writes/edits in ONE message
- **Bash commands**: ALWAYS batch ALL terminal operations in ONE message
- **Memory operations**: ALWAYS batch ALL memory store/retrieve operations in ONE message

### 🎯 CRITICAL: Claude Code Task Tool for Agent Execution

**Claude Code's Task tool is the PRIMARY way to spawn agents:**

```javascript
// ✅ CORRECT: Use Claude Code's Task tool for parallel agent execution
[Single Message]:
Task("Research agent", "Analyze requirements and patterns...", "researcher")
Task("Coder agent", "Implement core features...", "coder")
Task("Tester agent", "Create comprehensive tests...", "tester")
Task("Reviewer agent", "Review code quality...", "reviewer")
Task("Architect agent", "Design system architecture...", "system-architect")
```

**MCP tools are ONLY for coordination setup:**

- `mcp__claude-flow__swarm_init` - Initialize coordination topology
- `mcp__claude-flow__agent_spawn` - Define agent types for coordination
- `mcp__claude-flow__task_orchestrate` - Orchestrate high-level workflows

### 📁 File Organization Rules

**NEVER save to the root folder. Use these directories:**

- `/src` - Source code files
- `/tests` - Test files
- `/docs` - Documentation and markdown files
- `/config` - Configuration files
- `/scripts` - Utility scripts
- `/examples` - Example code

## Project Overview

This project uses the SPARC (Specification, Pseudocode, Architecture, Refinement, Completion) methodology with Claude-Flow orchestration for systematic Test-Driven Development.

## SPARC Commands

### Core Commands

- `npx claude-flow sparc modes` - List available modes
- `npx claude-flow sparc run <mode> "<task>"` - Execute specific mode
- `npx claude-flow sparc tdd "<feature>"` - Run complete TDD workflow
- `npx claude-flow sparc info <mode>` - Get mode details

### Batchtools Commands

- `npx claude-flow sparc batch <modes> "<task>"` - Parallel execution
- `npx claude-flow sparc pipeline "<task>"` - Full pipeline processing
- `npx claude-flow sparc concurrent <mode> "<tasks-file>"` - Multi-task processing

### Build Commands

- `npm run build` - Build project
- `npm run test` - Run tests
- `npm run lint` - Linting
- `npm run typecheck` - Type checking

## SPARC Workflow Phases

1. **Specification** - Requirements analysis (`sparc run spec-pseudocode`)
2. **Pseudocode** - Algorithm design (`sparc run spec-pseudocode`)
3. **Architecture** - System design (`sparc run architect`)
4. **Refinement** - TDD implementation (`sparc tdd`)
5. **Completion** - Integration (`sparc run integration`)

## Code Style & Best Practices

- **Modular Design**: Files under 500 lines
- **Environment Safety**: Never hardcode secrets
- **Test-First**: Write tests before implementation
- **Clean Architecture**: Separate concerns
- **Documentation**: Keep documentation up to date

## 🚀 Available Agents (54 Total)

### Core Development
`coder`, `reviewer`, `tester`, `planner`, `researcher`

### Swarm Coordination
`hierarchical-coordinator`, `mesh-coordinator`, `adaptive-coordinator`, `collective-intelligence-coordinator`, `swarm-memory-manager`

### Consensus & Distributed
`byzantine-coordinator`, `raft-manager`, `gossip-coordinator`, `consensus-builder`, `crdt-synchronizer`, `quorum-manager`, `security-manager`

### Performance & Optimization
`perf-analyzer`, `performance-benchmarker`, `task-orchestrator`, `memory-coordinator`, `smart-agent`

### GitHub & Repository
`github-modes`, `pr-manager`, `code-review-swarm`, `issue-tracker`, `release-manager`, `workflow-automation`, `project-board-sync`, `repo-architect`, `multi-repo-swarm`

### SPARC Methodology
`sparc-coord`, `sparc-coder`, `specification`, `pseudocode`, `architecture`, `refinement`

### Specialized Development
`backend-dev`, `mobile-dev`, `ml-developer`, `cicd-engineer`, `api-docs`, `system-architect`, `code-analyzer`, `base-template-generator`

### Testing & Validation
`tdd-london-swarm`, `production-validator`

### Migration & Planning
`migration-planner`, `swarm-init`

## 🎯 Claude Code vs MCP Tools

### Claude Code Handles ALL EXECUTION:

- **Task tool**: Spawn and run agents concurrently for actual work
- File operations (Read, Write, Edit, MultiEdit, Glob, Grep)
- Code generation and programming
- Bash commands and system operations
- Implementation work
- Project navigation and analysis
- TodoWrite and task management
- Git operations
- Package management
- Testing and debugging

### MCP Tools ONLY COORDINATE:

- Swarm initialization (topology setup)
- Agent type definitions (coordination patterns)
- Task orchestration (high-level planning)
- Memory management
- Neural features
- Performance tracking
- GitHub integration

**KEY**: MCP coordinates the strategy, Claude Code's Task tool executes with real agents.

## 🚀 Quick Setup

```bash
# Add MCP servers (Claude Flow required, others optional)
claude mcp add claude-flow npx claude-flow@alpha mcp start
claude mcp add ruv-swarm npx ruv-swarm mcp start  # Optional: Enhanced coordination
claude mcp add flow-nexus npx flow-nexus@latest mcp start  # Optional: Cloud features
```

## MCP Tool Categories

### Coordination
`swarm_init`, `agent_spawn`, `task_orchestrate`

### Monitoring
`swarm_status`, `agent_list`, `agent_metrics`, `task_status`, `task_results`

### Memory & Neural
`memory_usage`, `neural_status`, `neural_train`, `neural_patterns`

### GitHub Integration
`github_swarm`, `repo_analyze`, `pr_enhance`, `issue_triage`, `code_review`

### System
`benchmark_run`, `features_detect`, `swarm_monitor`

### Flow-Nexus MCP Tools (Optional Advanced Features)

Flow-Nexus extends MCP capabilities with 70+ cloud-based orchestration tools:

**Key MCP Tool Categories:**

- **Swarm & Agents**: `swarm_init`, `swarm_scale`, `agent_spawn`, `task_orchestrate`
- **Sandboxes**: `sandbox_create`, `sandbox_execute`, `sandbox_upload` (cloud execution)
- **Templates**: `template_list`, `template_deploy` (pre-built project templates)
- **Neural AI**: `neural_train`, `neural_patterns`, `seraphina_chat` (AI assistant)
- **GitHub**: `github_repo_analyze`, `github_pr_manage` (repository management)
- **Real-time**: `execution_stream_subscribe`, `realtime_subscribe` (live monitoring)
- **Storage**: `storage_upload`, `storage_list` (cloud file management)

**Authentication Required:**

- Register: `mcp__flow-nexus__user_register` or `npx flow-nexus@latest register`
- Login: `mcp__flow-nexus__user_login` or `npx flow-nexus@latest login`
- Access 70+ specialized MCP tools for advanced orchestration

## 🚀 Agent Execution Flow with Claude Code

### The Correct Pattern:

1. **Optional**: Use MCP tools to set up coordination topology
2. **REQUIRED**: Use Claude Code's Task tool to spawn agents that do actual work
3. **REQUIRED**: Each agent runs hooks for coordination
4. **REQUIRED**: Batch all operations in single messages

### Example Full-Stack Development:

```javascript
// Single message with all agent spawning via Claude Code's Task tool
[Parallel Agent Execution]:
Task("Backend Developer", "Build REST API with Express. Use hooks for coordination.", "backend-dev")
Task("Frontend Developer", "Create React UI. Coordinate with backend via memory.", "coder")
Task("Database Architect", "Design PostgreSQL schema. Store schema in memory.", "code-analyzer")
Task("Test Engineer", "Write Jest tests. Check memory for API contracts.", "tester")
Task("DevOps Engineer", "Set up Docker and CI/CD. Document in memory.", "cicd-engineer")
Task("Security Auditor", "Review authentication. Report findings via hooks.", "reviewer")

// All todos batched together
TodoWrite { todos: [...8-10 todos...] }

// All file operations together
Write "backend/server.js"
Write "frontend/App.jsx"
Write "database/schema.sql"
```

## 📋 Agent Coordination Protocol

### Every Agent Spawned via Task Tool MUST:

**1️⃣ BEFORE Work:**
```bash
npx claude-flow@alpha hooks pre-task --description "[task]"
npx claude-flow@alpha hooks session-restore --session-id "swarm-[id]"
```

**2️⃣ DURING Work:**
```bash
npx claude-flow@alpha hooks post-edit --file "[file]" --memory-key "swarm/[agent]/[step]"
npx claude-flow@alpha hooks notify --message "[what was done]"
```

**3️⃣ AFTER Work:**
```bash
npx claude-flow@alpha hooks post-task --task-id "[task]"
npx claude-flow@alpha hooks session-end --export-metrics true
```

## 🎯 Concurrent Execution Examples

### ✅ CORRECT WORKFLOW: MCP Coordinates, Claude Code Executes

```javascript
// Step 1: MCP tools set up coordination (optional, for complex tasks)
[Single Message - Coordination Setup]:
mcp__claude-flow__swarm_init { topology: "mesh", maxAgents: 6 }
mcp__claude-flow__agent_spawn { type: "researcher" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "tester" }

// Step 2: Claude Code Task tool spawns ACTUAL agents that do the work
[Single Message - Parallel Agent Execution]:
// Claude Code's Task tool spawns real agents concurrently
Task("Research agent", "Analyze API requirements and best practices. Check memory for prior decisions.", "researcher")
Task("Coder agent", "Implement REST endpoints with authentication. Coordinate via hooks.", "coder")
Task("Database agent", "Design and implement database schema. Store decisions in memory.", "code-analyzer")
Task("Tester agent", "Create comprehensive test suite with 90% coverage.", "tester")
Task("Reviewer agent", "Review code quality and security. Document findings.", "reviewer")

// Batch ALL todos in ONE call
TodoWrite { todos: [
  {id: "1", content: "Research API patterns", status: "in_progress", priority: "high"},
  {id: "2", content: "Design database schema", status: "in_progress", priority: "high"},
  {id: "3", content: "Implement authentication", status: "pending", priority: "high"},
  {id: "4", content: "Build REST endpoints", status: "pending", priority: "high"},
  {id: "5", content: "Write unit tests", status: "pending", priority: "medium"},
  {id: "6", content: "Integration tests", status: "pending", priority: "medium"},
  {id: "7", content: "API documentation", status: "pending", priority: "low"},
  {id: "8", content: "Performance optimization", status: "pending", priority: "low"}
]}

// Parallel file operations
Bash "mkdir -p app/{src,tests,docs,config}"
Write "app/package.json"
Write "app/src/server.js"
Write "app/tests/server.test.js"
Write "app/docs/API.md"
```

### ❌ WRONG (Multiple Messages):

```javascript
Message 1: mcp__claude-flow__swarm_init
Message 2: Task("agent 1")
Message 3: TodoWrite { todos: [single todo] }
Message 4: Write "file.js"
// This breaks parallel coordination!
```

## Performance Benefits

- **84.8% SWE-Bench solve rate**
- **32.3% token reduction**
- **2.8-4.4x speed improvement**
- **27+ neural models**

## Hooks Integration

### Pre-Operation
- Auto-assign agents by file type
- Validate commands for safety
- Prepare resources automatically
- Optimize topology by complexity
- Cache searches

### Post-Operation
- Auto-format code
- Train neural patterns
- Update memory
- Analyze performance
- Track token usage

### Session Management
- Generate summaries
- Persist state
- Track metrics
- Restore context
- Export workflows

## Advanced Features (v2.0.0)

- 🚀 Automatic Topology Selection
- ⚡ Parallel Execution (2.8-4.4x speed)
- 🧠 Neural Training
- 📊 Bottleneck Analysis
- 🤖 Smart Auto-Spawning
- 🛡️ Self-Healing Workflows
- 💾 Cross-Session Memory
- 🔗 GitHub Integration

## Integration Tips

1. Start with basic swarm init
2. Scale agents gradually
3. Use memory for context
4. Monitor progress regularly
5. Train patterns from success
6. Enable hooks automation
7. Use GitHub tools first

## Support

- Documentation: https://github.com/ruvnet/claude-flow
- Issues: https://github.com/ruvnet/claude-flow/issues
- Flow-Nexus Platform: https://flow-nexus.ruv.io (registration required for cloud features)

---

Remember: **Claude Flow coordinates, Claude Code creates!**

# important-instruction-reminders

Do what has been asked; nothing more, nothing less.
NEVER create files unless they're absolutely necessary for achieving your goal.
ALWAYS prefer editing an existing file to creating a new one.
NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User.
Never save working files, text/markdown files, or tests to the root folder.

package-lock.json (generated, new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "name": "hackathon",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {}
}

requirements/DataCollector SRS.md (new file, 145 lines)
@@ -0,0 +1,145 @@
# Software Requirement Specification - Collector HTTP Plugin

## Introduction

### Purpose
This document describes the Collector sender plugin responsible for collecting endpoint device diagnostic data via HTTP. It contains a description and all relevant requirements regarding functionality and architecture.

### System Description
The "Collector HTTP Plugin" (CHP) is based on the Collector main product. The main task of the Collector is to connect multiple network zones in a safe and secure way so that data can only be transferred in a unidirectional way. The Collector itself has been fully developed and is ready for use.

This project is about the development of the "Collector HTTP Plugin" (CHP) extension for the Collector main application. The CHP is designed to be an additional software component to the Collector application, running as a separate service.

The CHP is composed of two parts: the "HTTP Sender Plugin" (HSP) and the "HTTP Receiver Plugin" (HRP). As a first stage, only the HSP shall be developed. The HRP will be part of a separate development.

The main task of the HSP is to connect the Collector to a large number of endpoint devices and cyclically collect diagnostic data via HTTP. In a later stage, the HRP will be responsible for forwarding the collected data to the receiver destination.

## Requirements

### HSP Architecture requirements

| Unique ID | Requirement Description |
| --- | --- |
| Prose | The requirements below specify the architectural framework and assumptions. |
| Req-Arch-1 | The HSP shall be developed in OpenJDK 25, featuring Java 25. |
| Req-Arch-2 | HSP shall use only the following external libraries: gRPC Java 1.60+, Protocol Buffers 3.25+, and their transitive dependencies. All other functionality shall use only OpenJDK 25 standard library classes. |
| Req-Arch-3 | HSP shall log all log messages and errors to the file "hsp.log" in a temp directory. |
| Req-Arch-4 | For logging, the Java Logging API with log rotation (max 100MB per file, 5 files) shall be used. |
| Req-Arch-5 | HSP shall always run and not terminate unless an unrecoverable error occurs. |
| Req-Arch-6 | HSP shall use a multi-threaded architecture with separate threads for HTTP polling and gRPC transmission. For the HTTP polling, virtual threads shall be used to lower the resource demand. |
| Req-Arch-7 | HSP shall implement the Producer-Consumer pattern for data flow between IF1 and IF2. |
| Req-Arch-8 | HSP shall use thread-safe collections for data buffering. |

### HSP Functional requirements

| Unique ID | Requirement Description |
| --- | --- |
| Prose | This section describes the initialization of the HSP. |
| Req-FR-1 | HSP shall execute the following startup sequence: |
| Req-FR-2 | Startup step 1: Load and validate configuration as described below. |
| Req-FR-3 | Startup step 2: Initialize logging. |
| Req-FR-4 | Startup step 3: Establish the gRPC connection to the Collector Sender Core as described below. |
| Req-FR-5 | Startup step 4: Begin HTTP polling of diagnostic data from the endpoint devices. |
| Req-FR-6 | If the gRPC connection fails, HSP shall retry every 5 seconds indefinitely and log warnings every 1 minute. |
| Req-FR-7 | HSP shall not begin HTTP polling until the gRPC connection is successfully established. |
| Req-FR-8 | HSP shall log "HSP started successfully" at INFO level when all initialization steps complete. |
| Prose | This section describes the configuration of the HSP. |
| Req-FR-9 | The HSP shall be configurable via a configuration file as described in the HSP Configuration File Specification. |
| Req-FR-10 | At startup, HSP shall read the configuration file located in the application directory and apply the configuration. |
| Prose | The configuration file contains: HTTP endpoint URLs, polling interval, gRPC server address/port, timeout values, and retry policies. |
| Req-FR-11 | HSP shall validate all configuration parameters to be within the given limits. |
| Req-FR-12 | If validation fails, HSP shall terminate with error code 1. |
| Req-FR-13 | If validation fails, HSP shall log the reason for the failed validation. |
| Prose | This section describes the connection and mechanism used by the HSP to connect to the endpoint devices to poll diagnostic data. |
| Req-FR-14 | The HSP shall establish a connection to all configured devices according to interface IF1. |
| Req-FR-15 | HSP shall set a timeout of 30 seconds for each HTTP GET request. |
| Req-FR-16 | HSP shall poll each configured HTTP endpoint device at the intervals specified in the configuration. |
| Req-FR-17 | If an HTTP GET request fails, HSP shall retry up to 3 times with 5-second intervals before marking the connection to the endpoint device as failed. |
| Req-FR-18 | HSP shall implement linear backoff for failed endpoint connections, starting at 5s up to a maximum of 300s, adding 5s on every attempt. |
| Req-FR-19 | HSP shall not have concurrent connections to the same endpoint device. |
| Req-FR-20 | HSP shall continue polling other endpoint devices if one endpoint device fails. |
| Prose | The endpoint devices will answer the polling by transmitting a binary file. |
| Req-FR-21 | HSP shall reject binary files larger than 1MB and shall log a warning. |
| Req-FR-22 | HSP shall wrap the collected data in JSON as the data serialization format. |
| Req-FR-23 | HSP shall encode binary files as Base64 strings within the JSON payload. |
| Req-FR-24 | Each JSON message shall include: "HTTP sender plugin" as the plugin name, timestamp (ISO 8601), source_endpoint (URL), data_size (bytes), and payload (Base64 encoded binary). |
| Req-FR-25 | HSP shall then send the collected and aggregated data to the Collector Sender Core as described below. |
| Req-FR-26 | If gRPC transmission fails, HSP shall buffer collected data in memory (max 300 messages). |
| Req-FR-27 | If the buffer is full and new data is collected, HSP shall discard the oldest data. |
| Prose | This section describes the connection and mechanism used by the HSP to connect to the Collector Sender Core to transmit aggregated collected data. |
| Req-FR-28 | The HSP shall communicate with the Collector Sender Core according to interface IF2. |
| Req-FR-29 | HSP shall automatically establish a single bidirectional gRPC stream to the Collector Sender Core at startup and maintain it for the lifetime of the application. |
| Req-FR-30 | If the gRPC stream fails, HSP shall close the stream, wait 5 seconds, and try to establish a new stream. |
| Req-FR-31 | HSP shall send one TransferRequest message containing as many messages as fit into 4MB (the transfer maximum). |
| Req-FR-32 | HSP shall send a TransferRequest message containing less than 4MB (the transfer maximum) no later than 1 second after the last message was collected. |
| Req-FR-33 | The receiver_id field shall be set to 99 for all requests. |
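
Taken together, the JSON wrapping requirements above describe a small envelope around each collected binary file. A single message might look like the following (the `plugin_name` and `timestamp` keys follow the IF2 serialization example; the remaining key names and all values are illustrative, not fixed by this specification):

```json
{
  "plugin_name": "HTTP sender plugin",
  "timestamp": "2025-11-17T10:52:10.123Z",
  "source_endpoint": "http://device1.local:8080/diagnostics",
  "data_size": 11,
  "payload": "aGVsbG8gd29ybGQ="
}
```

Here `data_size` gives the size of the original binary file in bytes, and `payload` is its Base64 encoding.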

## Non-Functional Requirements

### Performance

| Unique ID | Requirement Description |
|---|---|
| Req-NFR-1 | HSP shall support concurrent polling of up to 1000 HTTP endpoint devices. |
| Req-NFR-2 | HSP shall not exceed 4096MB of RAM usage under normal operation. |

### Security

| Unique ID | Requirement Description |
|---|---|
| Req-NFR-3 | HSP shall not use HTTP authentication. |
| Req-NFR-4 | HSP shall use TCP mode only for the gRPC interface. |

### Usability

| Unique ID | Requirement Description |
|---|---|
| Req-NFR-5 | HSP shall be built using Maven 3.9+ with a provided pom.xml. |
| Req-NFR-6 | HSP shall be packaged as an executable JAR with all dependencies included (fat JAR). |

### Reliability

| Unique ID | Requirement Description |
|---|---|
| Req-NFR-7 | HSP shall expose a health check HTTP endpoint on localhost:8080/health returning JSON status. |
| Req-NFR-8 | The health check shall include: service status, last successful collection timestamp, gRPC connection status, error count of HTTP collection attempts, number of successfully collected HTTP endpoints in the last 30s, and number of failed HTTP endpoints in the last 30s. |
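
A health check response covering the required facts might look like this (the key names are illustrative; the requirement fixes only which facts must be reported, not the JSON layout):

```json
{
  "status": "UP",
  "last_successful_collection": "2025-11-17T10:52:10.123Z",
  "grpc_connection": "CONNECTED",
  "http_error_count": 3,
  "endpoints_ok_last_30s": 997,
  "endpoints_failed_last_30s": 3
}
```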

### Normative Requirements

| Unique ID | Requirement Description |
|---|---|
| Req-Norm-1 | The software shall be developed in accordance with ISO 9001. |
| Req-Norm-2 | The software shall be developed in accordance with CENELEC EN 50716 Basic Integrity. |
| Req-Norm-3 | The software shall implement measures for error detection and handling, including but not limited to invalid sensor data, communication timeouts, and internal faults. |
| Req-Norm-4 | The software shall be subjected to rigorous testing, including unit testing, integration testing, and validation testing, to ensure that it meets the specified safety requirements. |
| Req-Norm-5 | The software development process shall be documented, including requirements, design, implementation, and testing, to provide a clear audit trail. |
| Req-Norm-6 | The software shall be designed to be maintainable, with clear and concise code and a modular architecture. |

## Testing Requirements

| Unique ID | Requirement Description |
|---|---|
| Req-NFR-9 | Integration tests shall verify HTTP collection with a mock HTTP server. |
| Req-NFR-10 | Integration tests shall verify gRPC transmission with a mock gRPC server. |
| Req-NFR-11 | Tests shall use the JUnit 5 and Mockito frameworks. |
| Req-NFR-12 | All tests shall be executable via the 'mvn test' command. |

## User Stories

| Unique ID | Requirement Description |
| --- | --- |
| Req-US-1 | As a system operator, I want HSP to automatically collect diagnostic data from configured HTTP endpoints every second, so that real-time device health can be monitored without manual intervention. |
| Req-US-2 | As a data analyst, I want all collected diagnostic data to be reliably transmitted to the Collector Sender Core via gRPC, so that I can analyze device behavior even if temporary network issues occur. |
| Req-US-3 | As a system administrator, I want to check HSP health status via an HTTP endpoint, so that I can monitor the service without accessing logs. |

### Assumptions and Dependencies

The Collector Core Sender gRPC server is always available.

### Normative Requirements

**TBD**

### List any other requirements not covered above

**TBD**

## Glossary

- **HSP**: HTTP Sender Plugin
- **HRP**: HTTP Receiver Plugin
- **Diagnostic Data**: Binary files containing specific application-dependent content
- **IF1**: HTTP-based interface for data collection
- **IF2**: gRPC-based interface for data transmission
- **Endpoint device**: A device in a network providing diagnostic data via HTTP

requirements/HSP_Configuration_File_Specification.md (new file, 51 lines)
@@ -0,0 +1,51 @@
# HSP Configuration File Specification

## Purpose

This configuration file contains all devices that shall be polled by HSP for diagnostic data. It also contains additional data specifying the connections used.

## Format

The configuration file shall be stored as a JSON file.

## File Location

- Path: `./hsp-config.json` (application directory)

## JSON Schema

```json
{
  "grpc": {
    "server_address": "localhost",
    "server_port": 50051,
    "timeout_seconds": 30
  },
  "http": {
    "endpoints": [
      "http://device1.local:8080/diagnostics",
      "http://device2.local:8080/diagnostics"
    ],
    "polling_interval_seconds": 1,
    "request_timeout_seconds": 30,
    "max_retries": 3,
    "retry_interval_seconds": 5
  },
  "buffer": {
    "max_messages": 300000
  },
  "backoff": {
    "http_start_seconds": 5,
    "http_max_seconds": 300,
    "http_increment_seconds": 5,
    "grpc_interval_seconds": 5
  }
}
```

The polling interval has a minimum and default of 1 second and a maximum of 3600 seconds.

## Field value limits

| Field | Type | Required | Constraints |
|---|---|---|---|
| grpc.server_address | string | Yes | Valid hostname or IP |
| grpc.server_port | integer | Yes | 1-65535 |
| http.endpoints | array | Yes | Min 1, Max 1000 URLs |
| http.polling_interval_seconds | integer | Yes | 1-3600 |
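
The `backoff` fields above drive the linear backoff for failed endpoint connections (start value, fixed increment per failed attempt, upper cap). A minimal sketch in Java, assuming illustrative class and method names that are not part of this specification:

```java
/** Illustrative sketch of the linear HTTP backoff configured above. */
public final class LinearBackoff {
    private final long startSeconds;
    private final long incrementSeconds;
    private final long maxSeconds;

    public LinearBackoff(long startSeconds, long incrementSeconds, long maxSeconds) {
        this.startSeconds = startSeconds;
        this.incrementSeconds = incrementSeconds;
        this.maxSeconds = maxSeconds;
    }

    /** Delay before retry number attempt (1-based): start + (attempt - 1) * increment, capped at max. */
    public long delaySeconds(int attempt) {
        long delay = startSeconds + (long) (attempt - 1) * incrementSeconds;
        return Math.min(delay, maxSeconds);
    }
}
```

With the values shown in the schema (5, 5, 300), attempts 1, 2, and 3 would wait 5s, 10s, and 15s; from attempt 60 onwards the delay stays at the 300s cap.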

requirements/IF_1_HSP_-_End_Point_Device.md (new file, 40 lines)
@@ -0,0 +1,40 @@
# Interface Specification
|
||||||
|
|
||||||
|
## 1. Introduction
|
||||||
|
|
||||||
|
This document describes the Interface 1 (HSP - Endpoint devices)
|
||||||
|
|
||||||
|
### 1.1. Purpose
|
||||||
|
|
||||||
|
### 1.2. Scope
|
||||||
|
|
||||||
|
### 1.3. Definitions, Acronyms, and Abbreviations
|
||||||
|
|
||||||
|
## 2. Interface Overview
|
||||||
|
|
||||||
|
### 2.1. System Context
|
||||||
|
|
||||||
|
### 2.2. Interface Diagram
|
||||||
|
|
||||||
|
## 3. Functional Description
|
||||||
|
|
||||||
|
The interface type is HTTP-Get.
|
||||||
|
A REST endpoint provides a binary file containing the diagnostic data to be fetched.

### 3.1. Major Functions

## 4. Performance Requirements

## 5. Detailed Interface Specification

### 5.1. Data Model

### 5.2. Communication Protocol

### 5.3. Endpoint / Method Definitions

## 6. Error Handling

## 7. Security Considerations

## 8. Versioning
70
requirements/IF_2_HSP_-_Collector_Sender_Core.md
Normal file
@ -0,0 +1,70 @@

# Interface Specification

## 1. Introduction

This document describes Interface 2 (HSP - Sender Core).

### 1.1. Purpose

### 1.2. Scope

### 1.3. Definitions, Acronyms, and Abbreviations

## 2. Interface Overview

### 2.1. System Context

### 2.2. Interface Diagram

## 3. Functional Description

### 3.1 gRPC connection to OWG Core Sender

IF2 shall be implemented according to the proto file definition:

``` proto
syntax = "proto3";

option java_package = "com.siemens.coreshield.owg.shared.grpc";
option java_multiple_files = true;

service TransferService {
  rpc transferStream(stream TransferRequest) returns (TransferResponse);
}

message TransferRequest {
  int32 receiver_id = 1;
  bytes data = 2;
}

message TransferResponse {
  int32 response_code = 1;
}
```
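Because `transferStream` is a client-streaming RPC, the client feeds it an iterator of `TransferRequest` messages. A minimal sketch of the chunking logic (plain dicts stand in for the generated protobuf classes here, and the 64 KiB chunk size is an illustrative assumption, not specified):

```python
def transfer_requests(receiver_id: int, payload: bytes, chunk_size: int = 64 * 1024):
    """Yield TransferRequest-shaped messages (receiver_id, data) that a
    gRPC client could feed to transferStream. Dicts stand in for the
    generated stubs; the chunk size is an assumption."""
    for offset in range(0, len(payload), chunk_size):
        yield {"receiver_id": receiver_id,
               "data": payload[offset:offset + chunk_size]}
```

With generated stubs, the same generator would yield `TransferRequest(receiver_id=..., data=...)` objects and be passed directly to the stub's `transferStream` call.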

### 3.2 JSON serialization format

```json
{
  "plugin_name": "HTTP sender plugin",
  "timestamp": "2025-11-17T10:52:10.123Z",
  "source_endpoint": "http://192.168.1.10/diag",
  "data_size": 1024,
  "payload": "SGVsbG8gV29ybGQh..."
}
```
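Building this envelope is straightforward; the sketch below assumes the `payload` is base64-encoded (the example value decodes as base64 text) and that `data_size` counts the raw bytes before encoding, which the specification does not state explicitly:

```python
import base64
import json
from datetime import datetime, timezone

def build_envelope(source_endpoint: str, raw: bytes,
                   plugin_name: str = "HTTP sender plugin") -> str:
    """Serialize one collected diagnostics blob into the section 3.2
    JSON envelope. Base64 payload and raw-byte data_size are assumptions
    inferred from the example, not stated requirements."""
    return json.dumps({
        "plugin_name": plugin_name,
        "timestamp": datetime.now(timezone.utc)
                     .isoformat(timespec="milliseconds")
                     .replace("+00:00", "Z"),
        "source_endpoint": source_endpoint,
        "data_size": len(raw),
        "payload": base64.b64encode(raw).decode("ascii"),
    })
```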

### 3.3 Major Functions

## 4. Performance Requirements

## 5. Detailed Interface Specification

### 5.1. Data Model

### 5.2. Communication Protocol

### 5.3. Endpoint / Method Definitions

## 6. Error Handling

## 7. Security Considerations

## 8. Versioning
46
requirements/IF_3_HTTP_Health_check.md
Normal file
@ -0,0 +1,46 @@

# Interface Specification

## 1. Introduction

### 1.1. Purpose

### 1.2. Scope

### 1.3. Definitions, Acronyms, and Abbreviations

## 2. Interface Overview

### 2.1. System Context

### 2.2. Interface Diagram

## 3. Functional Description

### 3.1 Health Check JSON Schema

```json
{
  "service_status": "RUNNING | DEGRADED | DOWN",
  "grpc_connection_status": "CONNECTED | DISCONNECTED",
  "last_successful_collection_ts": "2025-11-17T10:52:10Z",
  "http_collection_error_count": 15,
  "endpoints_success_last_30s": 998,
  "endpoints_failed_last_30s": 2
}
```
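The schema does not say how `service_status` is derived from the other fields. One plausible rule, shown purely as an assumption (the specification defines no thresholds):

```python
def service_status(grpc_connected: bool,
                   failed_last_30s: int,
                   success_last_30s: int) -> str:
    """Derive the service_status field from the other health counters.
    The DEGRADED rule (any recent endpoint failures, or gRPC down while
    collection still works) is an illustrative assumption."""
    if not grpc_connected and success_last_30s == 0:
        return "DOWN"
    if not grpc_connected or failed_last_30s > 0:
        return "DEGRADED"
    return "RUNNING"
```

Whatever rule is chosen should be pinned down in section 5 so that monitoring dashboards interpret the three states consistently.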

### 3.2. Major Functions

## 4. Performance Requirements

## 5. Detailed Interface Specification

### 5.1. Data Model

### 5.2. Communication Protocol

### 5.3. Endpoint / Method Definitions

## 6. Error Handling

## 7. Security Considerations

## 8. Versioning