# TDD Compliance Checklist

**Version**: 1.0

**Project**: HTTP Sender Plugin (HSP)

**Last Updated**: 2025-11-20

## Purpose

This checklist ensures that all development follows Test-Driven Development (TDD) methodology as mandated by the project implementation plan. **ALL code MUST be developed using the Red-Green-Refactor cycle**.

---

## 🚨 TDD Non-Negotiable Rules

### Rule 1: Tests First, Code Second

- [ ] **No production code written without a failing test**
- [ ] Test defines the interface and expected behavior
- [ ] Implementation satisfies the test requirements
- [ ] Test is committed to Git BEFORE implementation

### Rule 2: Red-Green-Refactor Cycle Documented

- [ ] **RED**: Failing test committed with message `test: description (RED)`
- [ ] **GREEN**: Minimal implementation committed with message `feat: description (GREEN)`
- [ ] **REFACTOR**: Code improvements committed with message `refactor: description`
- [ ] Git history clearly shows the TDD cycle for each feature

### Rule 3: All Tests Must Pass

- [ ] Never commit with broken tests
- [ ] CI pipeline must be green
- [ ] Fix build breaks immediately (within 15 minutes)
- [ ] All existing tests pass before adding new features

### Rule 4: Coverage Thresholds Mandatory

- [ ] **95% line coverage minimum** (enforced by JaCoCo)
- [ ] **90% branch coverage minimum** (enforced by JaCoCo)
- [ ] CI pipeline fails if coverage drops below threshold
- [ ] Coverage trends tracked in sprint metrics

---

## 📋 Pre-Commit Checklist

Before committing any code, verify:

### Test Verification

- [ ] Tests written BEFORE implementation
- [ ] Tests follow AAA pattern (Arrange-Act-Assert)
- [ ] Test names clearly describe behavior: `shouldDoX_whenY()`
- [ ] Tests are independent (no shared state between tests)
- [ ] Tests are repeatable (same result every time)
- [ ] Tests are fast (unit tests < 100ms each)
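
To make the first few items concrete, here is a minimal sketch of an AAA-structured, behavior-named unit test. It follows the RED-first convention used in the examples later in this checklist, so `BufferManager` and its capacity constructor are illustrative assumptions rather than existing classes; the test builds everything it needs locally, shares no state, and does no I/O, so it stays well under 100 ms.

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class BufferManagerTest {

    @Test
    void shouldReturnTrue_whenOfferedItemFitsWithinCapacity() {
        // Arrange: a fresh buffer per test, so no state leaks between tests
        BufferManager buffer = new BufferManager(10); // capacity is an assumed constructor parameter

        // Act
        boolean accepted = buffer.offer(new byte[]{1, 2, 3});

        // Assert
        assertThat(accepted).isTrue();
    }
}
```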

### Git Commit Verification

- [ ] RED commit exists (failing test)
- [ ] GREEN commit follows RED (passing implementation)
- [ ] REFACTOR commit (if applicable)
- [ ] Commit messages follow TDD pattern
- [ ] Multiple commits per day (frequent integration)

### Code Coverage Verification

- [ ] JaCoCo report generated: `mvn jacoco:report`
- [ ] Line coverage ≥ 95%
- [ ] Branch coverage ≥ 90%
- [ ] No uncovered critical paths
- [ ] Coverage report reviewed in IDE

### Test Quality Verification

- [ ] Tests actually fail when they should (verify RED phase)
- [ ] Tests pass for correct reasons (not false positives)
- [ ] Edge cases covered (boundary conditions)
- [ ] Error scenarios tested (exceptions, failures)
- [ ] Happy path and sad path both tested
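
For the boundary-condition item, a minimal sketch is shown below. It reuses `DataCollectionService.validateData` and `OversizedDataException` from the Core Services example later in this checklist and mocks the two port dependencies; the decimal 1,000,000-byte limit, the `IBufferPort` name, and the assumption that data exactly at the limit is accepted are all illustrative details to confirm against the real requirement.

```java
import static org.assertj.core.api.Assertions.assertThatCode;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.Mockito.mock;

import org.junit.jupiter.api.Test;

class DataCollectionServiceBoundaryTest {

    private static final int ONE_MB = 1_000_000; // assumed decimal megabyte limit

    private final IHttpPollingPort httpPort = mock(IHttpPollingPort.class);
    private final IBufferPort bufferPort = mock(IBufferPort.class); // IBufferPort is an assumed port name

    @Test
    void shouldAcceptData_whenExactlyAtSizeLimit() {
        DataCollectionService service = new DataCollectionService(httpPort, bufferPort);

        // The boundary value itself is assumed to be allowed
        assertThatCode(() -> service.validateData(new byte[ONE_MB], "http://test"))
                .doesNotThrowAnyException();
    }

    @Test
    void shouldRejectData_whenOneByteOverSizeLimit() {
        DataCollectionService service = new DataCollectionService(httpPort, bufferPort);

        assertThatThrownBy(() -> service.validateData(new byte[ONE_MB + 1], "http://test"))
                .isInstanceOf(OversizedDataException.class);
    }
}
```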

---

## 🔍 Pull Request TDD Compliance Review

### Git History Review

**Requirement**: Every PR must show a clear TDD cycle in its commits.

Check the commit history for this pattern:

```
✓ test: add BufferManager offer() test (RED)
✓ feat: implement BufferManager offer() method (GREEN)
✓ refactor: add javadoc to BufferManager
✓ test: add BufferManager overflow test (RED)
✓ feat: implement FIFO overflow handling (GREEN)
✓ refactor: improve naming in overflow logic
```

**Red Flags**:

```
✗ feat: implement entire BufferManager (no tests first)
✗ test: add tests for BufferManager (tests after implementation)
✗ test + feat: implement BufferManager with tests (combined commit)
```

### Test-to-Code Commit Ratio

- [ ] Approximately 1:1 ratio of test commits to implementation commits
- [ ] Tests consistently appear BEFORE implementations in history
- [ ] No large blocks of code without corresponding tests
- [ ] Refactor commits are smaller, incremental improvements

### Coverage Report Review

Reviewer must verify:

- [ ] JaCoCo report attached to PR or accessible in CI
- [ ] Coverage meets 95%/90% thresholds
- [ ] No suspicious untested code paths
- [ ] New code covered by new tests (not just existing tests)

### Test Quality Review

- [ ] Tests follow AAA pattern consistently
- [ ] Test names are descriptive and behavior-focused
- [ ] Tests use appropriate assertions (not just `assertTrue`)
- [ ] Mock objects used appropriately (not over-mocking)
- [ ] Integration tests cover inter-component boundaries
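
The "appropriate assertions" item is easiest to see side by side. The self-contained sketch below contrasts a bare `assertTrue` with AssertJ's fluent assertions (AssertJ is assumed here because the examples in this checklist use `assertThat` and `assertThatThrownBy`); only the second form reports the actual contents when it fails.

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

class AssertionQualityExampleTest {

    @Test
    void weakAssertion_givesPoorFailureMessages() {
        List<String> processedUrls = List.of("http://a", "http://b");

        // If this ever fails, the message is only "expected: <true> but was: <false>" -- no context
        assertTrue(processedUrls.contains("http://a") && processedUrls.size() == 2);
    }

    @Test
    void specificAssertion_explainsTheFailure() {
        List<String> processedUrls = List.of("http://a", "http://b");

        // If this fails, AssertJ reports the actual list contents, the missing element, and the sizes
        assertThat(processedUrls)
                .hasSize(2)
                .containsExactly("http://a", "http://b");
    }
}
```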

---

## 📊 TDD Metrics Dashboard

### Weekly Metrics to Track

| Metric | Target | How to Measure |
|--------|--------|----------------|
| Test-to-Code Commit Ratio | 1:1 | Count test commits vs implementation commits |
| Line Coverage | ≥95% | JaCoCo report |
| Branch Coverage | ≥90% | JaCoCo report |
| Mutation Score | ≥75% | PIT mutation testing |
| Unit Test Execution Time | <5 min | CI pipeline logs |
| Flaky Test Rate | <1% | Track test failures/re-runs |
| TDD Compliance Rate | 100% | Share of PRs with no TDD violations found in review |

### Sprint Retrospective TDD Questions

1. **Did we follow TDD for all code this sprint?**
   - If no, what were the exceptions and why?

2. **Where did we skip tests first?**
   - Identify patterns and root causes

3. **What slowed down our TDD workflow?**
   - Tool issues, environment problems, knowledge gaps?

4. **How can we improve TDD practices next sprint?**
   - Training needs, tooling improvements, pair programming?

---
## 🧪 TDD by Component Type

### Port Interfaces (Test-First Design)

**Checklist**:

- [ ] Test written defining interface contract FIRST
- [ ] Test shows expected method signatures and return types
- [ ] Interface defined to satisfy test
- [ ] Mock implementation created for testing (see the in-memory stub sketch after the example below)
- [ ] Adapter implementation follows with its own TDD cycle

**Example Test (RED)**:

```java
@Test
void shouldPollEndpoint_whenUrlProvided() {
    // Given
    IHttpPollingPort httpPort = new HttpPollingAdapter(config);
    String url = "http://example.com/data";

    // When
    CompletableFuture<byte[]> result = httpPort.pollEndpoint(url);

    // Then
    assertThat(result).isCompletedWithValue(expectedData);
}
```
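
For the "mock implementation" item above, a hand-rolled test double is often enough and keeps service-level unit tests free of real HTTP. The sketch below assumes `IHttpPollingPort` exposes a single `pollEndpoint(String)` method returning `CompletableFuture<byte[]>`, as in the example test; the class name and canned-response behavior are illustrative, not the project's actual test utility.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** In-memory stub for unit tests: records calls and returns canned data without any network I/O. */
class StubHttpPollingPort implements IHttpPollingPort {

    private final List<String> polledUrls = new ArrayList<>();
    private final byte[] cannedResponse;

    StubHttpPollingPort(byte[] cannedResponse) {
        this.cannedResponse = cannedResponse;
    }

    @Override
    public CompletableFuture<byte[]> pollEndpoint(String url) {
        polledUrls.add(url); // record the interaction so tests can assert on it
        return CompletableFuture.completedFuture(cannedResponse);
    }

    List<String> polledUrls() {
        return List.copyOf(polledUrls);
    }
}
```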

### Domain Models (Value Object TDD)

**Checklist**:

- [ ] Test immutability (no setters, final fields)
- [ ] Test equality (equals/hashCode contract)
- [ ] Test serialization (JSON, Base64 encoding)
- [ ] Test validation (constructor throws on invalid data)
- [ ] Test thread safety (concurrent access if applicable)

**Example Test (RED)**:

```java
@Test
void shouldBeImmutable_whenCreated() {
    // Given
    DiagnosticData data1 = new DiagnosticData("url", new byte[]{1, 2, 3});
    DiagnosticData data2 = new DiagnosticData("url", new byte[]{1, 2, 3});

    // Then
    assertThat(data1).isEqualTo(data2);
    assertThat(data1).isNotSameAs(data2);
    // Verify no setters exist (compilation check)
}
```
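
The validation item above deserves its own RED test. The sketch below assumes `DiagnosticData` rejects a null source URL and an empty payload with `IllegalArgumentException`; the exact rules and exception type are assumptions that the failing test itself is meant to pin down.

```java
import static org.assertj.core.api.Assertions.assertThatThrownBy;

import org.junit.jupiter.api.Test;

class DiagnosticDataValidationTest {

    @Test
    void shouldThrow_whenSourceUrlIsNull() {
        // Assumed rule: the source URL is mandatory
        assertThatThrownBy(() -> new DiagnosticData(null, new byte[]{1}))
                .isInstanceOf(IllegalArgumentException.class);
    }

    @Test
    void shouldThrow_whenPayloadIsEmpty() {
        // Assumed rule: an empty payload is invalid
        assertThatThrownBy(() -> new DiagnosticData("http://example.com/data", new byte[0]))
                .isInstanceOf(IllegalArgumentException.class);
    }
}
```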

### Core Services (Business Logic TDD)

**Checklist**:

- [ ] Test business rules and invariants
- [ ] Test orchestration logic (calls to ports)
- [ ] Test error handling and exceptions
- [ ] Test statistics and monitoring
- [ ] Test concurrency if applicable
- [ ] Use mocks for port dependencies

**Example Test (RED)**:

```java
@Test
void shouldRejectOversizedData_whenFileExceeds1MB() {
    // Given
    DataCollectionService service = new DataCollectionService(httpPort, bufferPort);
    byte[] largeData = new byte[2_000_000]; // 2 MB

    // When / Then
    assertThatThrownBy(() -> service.validateData(largeData, "http://test"))
            .isInstanceOf(OversizedDataException.class)
            .hasMessageContaining("1MB");
}
```
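
For the orchestration and "mocks for port dependencies" items, a Mockito-based sketch is shown below. It assumes the service exposes a `collectOnce(String url)` method that polls the HTTP port and hands the result to the buffer port; the method name, the `IBufferPort` interface, and its `offer(byte[])` signature are assumptions for illustration.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.concurrent.CompletableFuture;
import org.junit.jupiter.api.Test;

class DataCollectionServiceOrchestrationTest {

    private final IHttpPollingPort httpPort = mock(IHttpPollingPort.class);
    private final IBufferPort bufferPort = mock(IBufferPort.class);

    @Test
    void shouldStorePolledData_whenCollectionSucceeds() {
        // Given: the HTTP port returns a small payload
        byte[] payload = {1, 2, 3};
        when(httpPort.pollEndpoint("http://example.com/data"))
                .thenReturn(CompletableFuture.completedFuture(payload));
        DataCollectionService service = new DataCollectionService(httpPort, bufferPort);

        // When
        service.collectOnce("http://example.com/data");

        // Then: the service forwarded the payload to the buffer
        verify(bufferPort).offer(payload);
    }
}
```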

### Adapters (Infrastructure TDD)

**Checklist**:

- [ ] Unit tests with mocks for quick feedback
- [ ] Integration tests with real infrastructure (WireMock, gRPC test server)
- [ ] Test retry logic and backoff strategies
- [ ] Test timeouts and error scenarios
- [ ] Test thread safety and concurrency
- [ ] Test resource cleanup (connections, streams)

**Example Test (RED)**:

```java
@Test
void shouldRetryThreeTimes_whenHttpFails() {
    // Given
    stubFor(get("/endpoint").willReturn(aResponse().withStatus(500)));
    HttpPollingAdapter adapter = new HttpPollingAdapter(config);

    // When
    assertThatThrownBy(() -> adapter.pollEndpoint(url).join())
            .hasCauseInstanceOf(PollingFailedException.class);

    // Then (Req-FR-17: 3 retries)
    verify(3, getRequestedFor(urlEqualTo("/endpoint")));
}
```
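
For the timeout item, a hedged sketch using WireMock's fixed response delay follows; it reuses the same stubbing helpers, `config`, and `url` as the retry example above. It assumes `config` sets a client timeout well under the stubbed delay and that the adapter surfaces the timeout by completing the future exceptionally (so `join()` throws `java.util.concurrent.CompletionException`); the exact wrapped exception is adapter-specific and left open.

```java
@Test
void shouldCompleteExceptionally_whenEndpointExceedsConfiguredTimeout() {
    // Given: the stub answers only after 5 s, far beyond the (assumed) 1 s client timeout in config
    stubFor(get("/endpoint").willReturn(aResponse()
            .withStatus(200)
            .withFixedDelay(5_000)));
    HttpPollingAdapter adapter = new HttpPollingAdapter(config);

    // When / Then: the returned future fails instead of hanging
    assertThatThrownBy(() -> adapter.pollEndpoint(url).join())
            .isInstanceOf(CompletionException.class);
}
```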

---

## 🔄 Daily TDD Workflow

### Morning (Start of Day)

1. [ ] Pull latest from main/develop branch
2. [ ] Review overnight CI builds (all green?)
3. [ ] Check JaCoCo coverage report (≥95%/90%?)
4. [ ] Pick next user story from sprint backlog
5. [ ] Create feature branch: `git checkout -b feature/buffer-manager`
6. [ ] Review requirements and acceptance criteria

### During Development (5-10 TDD Cycles per Day)

**Per Feature/Method**:

1. [ ] **Write Test (RED)**: 15-30 minutes
   - Write failing test for next requirement
   - Run the test, verify it fails: `mvn test -Dtest=ClassName#testMethod`
   - Commit: `git commit -m "test: add test for X (RED)"`

2. [ ] **Write Code (GREEN)**: 15-45 minutes
   - Write minimal code to make test pass
   - Run the test, verify it passes: `mvn test -Dtest=ClassName#testMethod`
   - Run all tests, verify no regressions: `mvn test`
   - Commit: `git commit -m "feat: implement X (GREEN)"`

3. [ ] **Refactor**: 10-20 minutes (if needed)
   - Improve code quality, remove duplication
   - Run all tests, verify still passing: `mvn test`
   - Commit: `git commit -m "refactor: improve X naming/structure"`

4. [ ] **Verify Coverage**: 5 minutes
   - Generate coverage: `mvn jacoco:report`
   - Check coverage in IDE or HTML report
   - Ensure new code is covered

5. [ ] **Push Frequently**: Every 2-3 cycles
   - Push to remote: `git push origin feature/buffer-manager`
   - Verify CI pipeline runs and passes

### End of Day

1. [ ] Push feature branch: `git push origin feature/buffer-manager`
2. [ ] Create pull request in Gitea (if feature complete)
3. [ ] Verify CI pipeline passes (green build)
4. [ ] Request code review from team member
5. [ ] Update sprint board (move tasks to "In Review")

---
## ⚠️ TDD Anti-Patterns to Avoid

### 1. Writing Code Before Tests

```
✗ WRONG:
- Implement entire BufferManager class
- Then write tests to cover it

✓ CORRECT:
- Write test for offer() method
- Implement offer() method
- Write test for poll() method
- Implement poll() method
```

### 2. Testing Implementation Details

```
✗ WRONG:
@Test
void shouldUseArrayBlockingQueue_inBufferManager() {
    // Testing internal implementation choice
}

✓ CORRECT:
@Test
void shouldStoreFIFO_whenMultipleOffersAndPolls() {
    // Testing observable behavior
}
```
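
Written out in full, the behavior-focused version might look like the sketch below. It assumes a `BufferManager` with `offer(byte[])` and `poll()` methods (class and signatures illustrative, as above) and asserts only what a caller can observe: insertion order survives, regardless of which queue type backs the buffer.

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class BufferManagerFifoTest {

    @Test
    void shouldStoreFIFO_whenMultipleOffersAndPolls() {
        // Given
        BufferManager buffer = new BufferManager(10); // capacity is an assumed constructor parameter
        byte[] first = {1};
        byte[] second = {2};

        // When
        buffer.offer(first);
        buffer.offer(second);

        // Then: items come back in the order they went in
        assertThat(buffer.poll()).isEqualTo(first);
        assertThat(buffer.poll()).isEqualTo(second);
    }
}
```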

### 3. Over-Mocking

```
✗ WRONG:
@Test
void shouldCalculateSum() {
    Calculator calc = mock(Calculator.class);
    when(calc.add(2, 3)).thenReturn(5);
    assertThat(calc.add(2, 3)).isEqualTo(5); // Circular mocking
}

✓ CORRECT:
@Test
void shouldCalculateSum() {
    Calculator calc = new Calculator(); // Real object
    assertThat(calc.add(2, 3)).isEqualTo(5);
}
```

### 4. Flaky Tests

```
✗ WRONG:
@Test
void shouldComplete_withinReasonableTime() {
    Thread.sleep(100); // Time-dependent test
    assertThat(result).isNotNull();
}

✓ CORRECT:
@Test
void shouldComplete_whenDataAvailable() {
    CountDownLatch latch = new CountDownLatch(1);
    // Use proper synchronization primitives
}
```
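
Fleshed out, the latch-based version might look like the self-contained sketch below: the test waits on an explicit signal with a generous upper bound instead of sleeping, so it passes as soon as the work finishes and fails only if it never does. The executor-based producer here is a stand-in for whatever asynchronous component is actually under test.

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import org.junit.jupiter.api.Test;

class LatchInsteadOfSleepTest {

    @Test
    void shouldComplete_whenDataAvailable() throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            CountDownLatch dataReady = new CountDownLatch(1);
            AtomicReference<byte[]> result = new AtomicReference<>();

            // Stand-in for the asynchronous component under test
            executor.submit(() -> {
                result.set(new byte[]{1, 2, 3});
                dataReady.countDown(); // signal completion explicitly
            });

            // Wait for the signal with an upper bound; no fixed sleep
            assertThat(dataReady.await(5, TimeUnit.SECONDS)).isTrue();
            assertThat(result.get()).isNotNull();
        } finally {
            executor.shutdownNow();
        }
    }
}
```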

### 5. Large, Monolithic Tests

```
✗ WRONG:
@Test
void shouldTestEntireSystem() {
    // 200 lines of test code testing everything
}

✓ CORRECT:
@Test
void shouldPollEndpoint_whenUrlValid() { /* 10 lines */ }

@Test
void shouldRetryOnFailure_whenHttpError() { /* 10 lines */ }

@Test
void shouldStoreInBuffer_whenDataReceived() { /* 10 lines */ }
```

---

## 📚 TDD Resources

### Internal Documentation

- [Project Implementation Plan](../PROJECT_IMPLEMENTATION_PLAN.md) - TDD section (lines 644-880)
- [Test Strategy](../testing/test-strategy.md) - Testing approach
- [Architecture Decisions](../ARCHITECTURE_DECISIONS.md) - Design rationale

### TDD Training Materials

- Kent Beck - "Test-Driven Development by Example"
- Martin Fowler - "Refactoring: Improving the Design of Existing Code"
- Robert C. Martin ("Uncle Bob") - "Clean Code" (chapter on unit tests)

### Tools and IDE Setup

- **IntelliJ IDEA**: Enable "Run tests on save" for instant feedback
- **Eclipse**: Install EclEmma for coverage visualization
- **JaCoCo**: Maven plugin for coverage enforcement
- **PIT**: Mutation testing for test quality validation
- **WireMock**: HTTP mocking for integration tests
- **gRPC Testing**: Mock gRPC servers for integration tests

---

## ✅ Definition of Done (TDD Perspective)

A user story/task is considered DONE when:

- [ ] All tests written BEFORE implementation (verified in Git history)
- [ ] All tests pass (green CI build)
- [ ] Line coverage ≥ 95%, branch coverage ≥ 90%
- [ ] Code review completed with TDD compliance verified
- [ ] No TDD violations found in PR review
- [ ] Git history shows clear RED-GREEN-REFACTOR cycles
- [ ] Integration tests pass (if applicable)
- [ ] Documentation updated (Javadoc, README)
- [ ] Merged to develop branch

---

## 📞 TDD Support and Questions

### Who to Ask

- **TDD Questions**: Tech Lead, Senior Developers
- **Tool Setup**: DevOps Engineer
- **Coverage Issues**: QA Lead
- **Git Workflow**: Tech Lead

### Pair Programming Sessions

- **Daily TDD Pairs**: Rotate pairs daily for knowledge sharing
- **TDD Mob Sessions**: Weekly mob programming on complex TDD scenarios
- **TDD Code Reviews**: All PRs require TDD compliance review

---

## 🎯 Summary: The TDD Mindset

> **"If it's worth building, it's worth testing. If it's not worth testing, why are you wasting your time working on it?"**

**TDD is not optional—it's how we build software at HSP.**

### The TDD Promise

- Tests document behavior
- Refactoring is safe
- Bugs are caught early
- Code is more modular
- Coverage is not a goal—it's a side effect

### The TDD Reality Check

If you find yourself:

- Writing code without tests → STOP
- Committing untested code → STOP
- Skipping tests "just this once" → STOP

**Return to RED-GREEN-REFACTOR. Always.**

---

**Document Control**:

- Version: 1.0
- Created: 2025-11-20
- Status: Active
- Review Cycle: After each sprint