**TestTrackingDiagrams.PlantUml.Ikvm** — version 1.26.0

Install via the .NET CLI:

```shell
dotnet add package TestTrackingDiagrams.PlantUml.Ikvm --version 1.26.0
```

Or via the NuGet Package Manager console:

```powershell
NuGet\Install-Package TestTrackingDiagrams.PlantUml.Ikvm -Version 1.26.0
```

Or as a `PackageReference` in your project file:

```xml
<PackageReference Include="TestTrackingDiagrams.PlantUml.Ikvm" Version="1.26.0" />
```

For scripts and interactive sessions:

```csharp
#r "nuget: TestTrackingDiagrams.PlantUml.Ikvm, 1.26.0"
```
<a name="top"></a>
# Test Tracking Diagrams
Effortlessly autogenerate PlantUML sequence diagrams (or Mermaid sequence diagrams) from your component and acceptance tests every time you run them. Tracks the HTTP requests between your test caller, your Service Under Test (SUT), and your SUT dependencies, then converts them into diagrams embedded in searchable HTML reports and YAML specification files.
## Table of Contents
- [Example Output](#example-output)
- [How It Works](#how-it-works)
- [Use Cases](#use-cases)
- [Deterministic vs AI-Generated Diagrams](#deterministic-vs-ai)
- [Component Diagrams (C4-style)](#component-diagrams)
- [Supported Frameworks & NuGet Packages](#supported-frameworks)
- [Recommended BDD Framework](#recommended-bdd)
- [Documentation](#documentation)
## <a name="example-output"></a>Example Output [↑](#top)
Each test that makes HTTP calls through the tracked pipeline automatically produces a sequence diagram (with matching PlantUML) showing the full request/response flow between services.
> **Tip:** You can visually separate the setup (arrange) phase from the action phase using the `SeparateSetup` flag.
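For illustration, the PlantUML source behind a generated diagram might look roughly like the sketch below. The participant names, calls, and note formatting are placeholders, and the `== Setup ==` divider shows the kind of separation `SeparateSetup` enables, not the library's exact output:

```plantuml
@startuml
participant "Test Code" as caller
participant "OrderService (SUT)" as sut
participant "PaymentService (Fake)" as dep

== Setup ==
caller -> sut: POST /customers
sut --> caller: 201 Created

== Action ==
caller -> sut: POST /orders
sut -> dep: POST /payments
dep --> sut: 200 OK
sut --> caller: 201 Created
@enduml
```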
## <a name="how-it-works"></a>How It Works [↑](#top)
```
┌─────────────┐    HTTP     ┌─────────────┐    HTTP     ┌─────────────┐
│  Test Code  │ ──────────► │     SUT     │ ──────────► │ Dependency  │
│  (Caller)   │ ◄────────── │ (Your API)  │ ◄────────── │  (Fakes)    │
└─────────────┘             └─────────────┘             └─────────────┘
       │                           │                           │
       │                           │   Event / Message         │
       │                           │ ──────────────────► ┌──────┴──────┐
       │                           │                     │Event broker │
       │                           │                     │  (Fakes)    │
       │                           │                     └──────┬──────┘
       │                           │                            │
       └───── All HTTP traffic + events/messages are intercepted ───┘
                                   │
                                   ▼
                        ┌──────────────────────┐
                        │ RequestResponseLogger│
                        │   (in-memory log)    │
                        └──────────┬───────────┘
                                   │
                                   ▼
                        ┌──────────────────────┐
                        │   PlantUmlCreator    │
                        │  or MermaidCreator   │
                        │ (generates diagrams) │
                        └──────────┬───────────┘
                                   │
                                   ▼
                        ┌──────────────────────┐
                        │   ReportGenerator    │
                        │ (HTML + YAML files)  │
                        └──────────────────────┘
```
1. **Intercept** — A `TestTrackingMessageHandler` (a `DelegatingHandler`) is inserted into the HTTP pipeline. It logs every request and response, enriching them with tracking headers (test name, test ID, trace ID, caller name). For non-HTTP interactions (events, messages, commands), `MessageTracker` logs them directly to the same in-memory store. See the Tracking Dependencies wiki page for a detailed guide on how to configure tracking for every common `HttpClient` pattern. **Important:** These two mechanisms produce visually different diagram output. `TestTrackingMessageHandler` produces proper HTTP-style arrows (with method, status code, headers, body), while `MessageTracker` produces event-style arrows (blue notes, no HTTP semantics). Always use `TestTrackingMessageHandler` for HTTP-based dependencies — even if the dependency is faked or stubbed in tests (e.g. via WireMock, JustEat HttpClient Interception, or in-memory fake APIs). Reserve `MessageTracker` for genuinely non-HTTP interactions like Kafka events or message bus traffic. See the Tracking Dependencies — Faking Dependencies section for detailed examples.
2. **Collect** — All logged `RequestResponseLog` entries are held in the static `RequestResponseLogger`. Each entry captures the method, URI, headers, body, status code, service names, and a trace ID to correlate requests across services. Events and messages are stored alongside HTTP logs with a distinct `Event` meta type.
3. **Generate** — At the end of the test run, `PlantUmlCreator` (or `MermaidCreator` when using Mermaid output) groups logs by test ID and converts them into sequence diagram code. PlantUML diagrams are encoded and rendered via a PlantUML server (or locally via IKVM); Mermaid diagrams are embedded directly in HTML as `<pre class="mermaid">` blocks rendered client-side by mermaid.js.
4. **Report** — `ReportGenerator` combines the diagrams with test metadata (features, scenarios, results, BDD steps) to produce three output files: a YAML specification, an HTML specification with diagrams, and an HTML test run report.
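Because `TestTrackingMessageHandler` is a `DelegatingHandler`, it can be chained in front of the real (or faked) inner handler like any other handler. A minimal wiring sketch, assuming the handler exposes the standard `DelegatingHandler` shape — its actual constructor parameters are documented on the Tracking Dependencies wiki page:

```csharp
// Sketch: chain the tracking handler in front of the terminal handler.
// Constructor arguments are illustrative; consult the wiki for the real signature.
var trackedHandler = new TestTrackingMessageHandler
{
    // In tests this could instead be a WireMock or in-memory fake handler.
    InnerHandler = new HttpClientHandler()
};

var client = new HttpClient(trackedHandler);
// Every request/response sent through this client is now logged
// (with tracking headers) to the static RequestResponseLogger.
```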
## <a name="use-cases"></a>Use Cases [↑](#top)
### Debugging failed tests locally and in CI/staging
When a test fails, the sequence diagram shows exactly which HTTP call returned an unexpected response — the status code, headers, and body are all visible in the diagram notes. This eliminates guesswork when diagnosing failures, whether you're debugging locally or triaging a failed CI pipeline run against a staging environment. Instead of adding logging, re-running, and reading through console output, the diagram gives you the full picture in a single image.
### Living documentation for stakeholders, developers, and AI
The generated HTML reports and YAML specifications serve as an always-up-to-date source of truth for how your API behaves. Because they're produced directly from passing tests, they can never drift out of sync with the actual implementation. Stakeholders can browse the HTML reports to understand feature behaviour without reading code. Developers can use them during onboarding or when working in unfamiliar areas of the codebase. AI assistants can consume the YAML specs or PlantUML source to answer questions about service interactions with high accuracy.
### Feeding AI tools for more accurate analysis
The raw PlantUML code behind each diagram is a compact, structured representation of your service's HTTP interactions. You can feed it directly into AI coding assistants, chat interfaces, or documentation generators to give them precise context about how services communicate. This produces significantly better results than asking an AI to infer behaviour from source code alone, because the diagrams capture the actual runtime flow including request/response payloads, status codes, and service names.
### Creating accurate high-level architecture diagrams
The per-test sequence diagrams provide a ground-truth foundation for building higher-level architecture and integration diagrams. Rather than drawing C4 models, system context diagrams, or integration maps from memory (which inevitably drift from reality), you or an AI can derive them from the concrete service interactions captured in the test suite. The PlantUML source is particularly useful here — an AI can aggregate the participants and message flows across multiple test diagrams to produce accurate summary diagrams.
### Reviewing pull requests
When a PR changes HTTP interactions (new downstream calls, modified payloads, changed endpoints), the sequence diagrams in the test reports make the impact immediately visible. Reviewers can compare the before and after diagrams to understand exactly what changed in the service communication, without having to mentally trace through the code.
### Regression detection
If a code change unintentionally alters the HTTP interaction pattern — an extra call to a downstream service, a missing header, a changed payload shape — the updated diagram makes it obvious. The YAML specification files are particularly useful for automated diffing in CI pipelines.
### Onboarding and knowledge transfer
New team members can browse the HTML reports to quickly understand how the system's services interact, what endpoints exist, and what the expected request/response shapes look like — all backed by real, passing tests rather than potentially stale wiki pages.
### CI summary integration
Enable `WriteCiSummary = true` on your `ReportConfigurationOptions` to surface test results and sequence diagrams directly in your GitHub Actions job summary or Azure DevOps build summary. The summary includes a pass/fail table, and when tests fail, the failed scenarios are shown with error messages, stack traces, and their sequence diagrams — giving you immediate visual context without downloading artifacts. When all tests pass, diagrams for the first N scenarios are shown as a quick validation. An optional interactive HTML artifact (`WriteCiSummaryInteractiveHtml = true`) renders diagrams client-side using the PlantUML JS engine with no server dependency. See the CI Summary Integration wiki page for full details.
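Under the property names stated above (other members and surrounding setup are omitted for brevity), enabling the CI summary might look like this sketch:

```csharp
// Sketch: turn on CI summary output. Property names come from the
// package description; verify the full options surface against the wiki.
var options = new ReportConfigurationOptions
{
    WriteCiSummary = true,                // pass/fail table + diagrams in the job summary
    WriteCiSummaryInteractiveHtml = true  // optional interactive HTML artifact (PlantUML JS)
};
```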
### CI artifact upload
Enable `PublishCiArtifacts = true` to automatically publish generated report files as CI artifacts. On Azure DevOps, reports are uploaded directly via `##vso[artifact.upload]` logging commands during test execution — no additional pipeline configuration needed. On GitHub Actions, the library writes the reports directory path and retention days to `$GITHUB_OUTPUT` so you can add a single upload-artifact step to your workflow. Artifact retention defaults to 1 day (`CiArtifactRetentionDays`). See the CI Artifact Upload wiki page for configuration and workflow examples.
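On GitHub Actions, a single upload step is still needed. A hedged sketch follows; the step id and output names below are hypothetical placeholders, since the exact names the library writes to `$GITHUB_OUTPUT` are documented on the CI Artifact Upload wiki page:

```yaml
# Hypothetical workflow fragment — the output names are illustrative,
# not necessarily the ones the library actually emits.
- name: Run tests
  id: tests
  run: dotnet test

- name: Upload test tracking reports
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: test-tracking-reports
    path: ${{ steps.tests.outputs.reports-path }}              # hypothetical output name
    retention-days: ${{ steps.tests.outputs.retention-days }}  # hypothetical output name
```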
## <a name="deterministic-vs-ai"></a>Deterministic vs AI-Generated Diagrams [↑](#top)
A key advantage of these diagrams is that they are deterministic — they are derived directly from actual HTTP traffic captured during test execution, not generated by an AI model. AI-generated diagrams are non-deterministic by nature: they vary between runs, may hallucinate service interactions that don't exist, omit ones that do, or represent payloads inaccurately. The accuracy depends entirely on the model's understanding of your codebase, which is always incomplete.
Because TestTrackingDiagrams captures what actually happened over the wire, the output is a faithful, reproducible record of your system's behaviour. This makes the diagrams and PlantUML source especially valuable as input to AI tools — when you give an AI a deterministic, verified diagram as context, it can produce far more accurate outputs for:
- Debugging — The AI sees the exact request/response chain that led to a failure, rather than guessing from code paths
- Code understanding — The AI can reason about concrete service interactions instead of inferring them from scattered HTTP client registrations and handler code
- Diagram generation — The AI can aggregate verified low-level sequence diagrams into accurate high-level architecture diagrams, C4 models, or integration maps
- Documentation — The AI can write accurate API behaviour descriptions grounded in real data rather than its own interpretation of the source code
In short: use deterministic diagrams as the source of truth, and let AI tools build on top of that truth rather than trying to reconstruct it.
## <a name="component-diagrams"></a>Component Diagrams (C4-style) [↑](#top)
In addition to per-test sequence diagrams, TestTrackingDiagrams can aggregate all tracked interactions across your entire test suite to auto-generate a C4-style component diagram. This diagram shows every discovered participant (services, event brokers, databases) and their relationships — giving you a high-level architecture overview derived directly from real test traffic.
This is an opt-in feature. Enable it by setting `GenerateComponentDiagram = true` on your `ReportConfigurationOptions`:
```csharp
var options = new ReportConfigurationOptions
{
    GenerateComponentDiagram = true,
    ComponentDiagramOptions = new ComponentDiagramOptions
    {
        Title = "My Service Architecture",
        // Optional: filter out participants you don't want in the diagram
        ParticipantFilter = name => name != "InternalHelper"
    }
};
```
### Output
When enabled, two additional files are generated alongside your existing reports:
| File | Description |
|---|---|
| `ComponentDiagram.puml` | Raw PlantUML C4 source — version-controllable, diffable |
| `ComponentDiagram.html` | Standalone HTML page with the PlantUML source for easy viewing |
### Example Output
```plantuml
@startuml
!include <C4/C4_Component>
title My Service Architecture

Person(webApp, "WebApp")
System(orderService, "OrderService")
System(paymentService, "PaymentService")
System(kafka, "Kafka")

Rel(webApp, orderService, "HTTP: GET, POST — 14 calls across 8 tests")
Rel(orderService, paymentService, "HTTP: POST — 6 calls across 4 tests")
Rel(orderService, kafka, "Publish: Publish — 3 calls across 2 tests")
@enduml
```
How participants are classified:

- A participant that only appears as a caller (never called by another service) is rendered as a `Person()` — typically your test client
- All other participants are rendered as `System()` — your SUT, its dependencies, event brokers, databases, etc.
Relationship labels show the protocol, distinct HTTP methods used, total call count, and how many tests exercised the relationship.
For full configuration details (custom titles, themes, label formatters, participant filters), see the Component Diagrams wiki page.
## <a name="supported-frameworks"></a>Supported Frameworks & NuGet Packages [↑](#top)
### Extensions

All packages from 1.23.X onwards target .NET 10.0.
## <a name="recommended-bdd"></a>Recommended BDD Framework [↑](#top)
If you're choosing a BDD framework to pair with TestTrackingDiagrams, we recommend LightBDD.
- **Composite (sub) steps** — LightBDD lets you nest steps inside other steps, creating a hierarchy of abstraction levels. These sub-steps appear in the generated reports, allowing you to read the high-level scenario at a glance and drill down into implementation details only when needed.
- **Pure C#** — Scenarios are plain method calls with refactoring, IntelliSense, and compile-time safety. No `.feature` files to keep in sync.
- **Rich built-in reporting** — LightBDD generates its own HTML reports with step timings, statuses, and categories. TestTrackingDiagrams hooks into this pipeline to embed sequence diagrams directly alongside the scenario results.
- **Parameterised and tabular steps** — First-class support for data-driven steps with inline parameters, verifiable tabular data, and tabular attributes, making it easy to express complex test inputs and expected outputs.
- **DI container support** — Native integration with `Microsoft.Extensions.DependencyInjection` and Autofac, which aligns naturally with ASP.NET Core test setups.
- **Active maintenance** — LightBDD is actively maintained with regular releases and good documentation.
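To illustrate the composite-step style, here is a sketch based on LightBDD's documented `FeatureFixture`/`CompositeStep` API. The scenario and step names are invented for illustration, and the exact API surface should be verified against the LightBDD documentation for your version:

```csharp
// Sketch only — verify FeatureFixture, [Scenario], Runner.RunScenario and
// CompositeStep.DefineNew against the LightBDD docs before relying on this.
public class Order_placement : FeatureFixture
{
    [Scenario]
    public void Placing_an_order()
    {
        Runner.RunScenario(
            Given_a_registered_customer,       // hypothetical step names
            When_the_customer_places_an_order,
            Then_the_order_is_confirmed);
    }

    // A composite step: its sub-steps appear nested in the generated report.
    private CompositeStep When_the_customer_places_an_order()
    {
        return CompositeStep.DefineNew()
            .AddSteps(
                When_the_basket_is_filled,
                When_checkout_is_submitted)
            .Build();
    }

    private void Given_a_registered_customer() { /* arrange */ }
    private void When_the_basket_is_filled() { /* sub-step */ }
    private void When_checkout_is_submitted() { /* sub-step */ }
    private void Then_the_order_is_confirmed() { /* assert */ }
}
```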
That said, all supported frameworks work well with TestTrackingDiagrams — pick whichever fits your team best.
## <a name="documentation"></a>Documentation [↑](#top)
For full documentation including quick start guides, configuration, customisation, and API reference, see the Wiki.
### Compatible frameworks

| Product | Compatible versions |
|---|---|
| .NET | net8.0, net9.0, and net10.0 are compatible; platform-specific target frameworks (e.g. `net8.0-android`, `net9.0-windows`, `net10.0-ios`) are computed |
### Dependencies

For net8.0, net9.0, and net10.0:

- IKVM (>= 8.9.1)
- IKVM.Image (>= 8.9.1)
- IKVM.Image.JDK (>= 8.9.1)
- IKVM.Image.JRE (>= 8.9.1)
- IKVM.MSBuild (>= 8.9.1)
- TestTrackingDiagrams (>= 1.26.0)