Michał Kochaniak

AI Performance Reporting System

Automated pipeline turning raw JMeter output into structured, stakeholder-ready performance reports

Python · pandas · matplotlib · JTL / CSV Parsing · Statistical Analysis · Percentile Computation · python-docx · FPDF

Overview

Performance testing produces large volumes of raw data — JTL files, CSV exports, server-side metrics — that need skilled interpretation before they become actionable. This project replaced a manual analysis workflow with an automated Python pipeline that ingests JMeter results, computes statistical metrics, generates charts, and produces structured DOCX/PDF reports. An LLM layer via Ollama adds narrative summaries, flags anomalies, and provides plain-language interpretation for non-technical readers.

Challenge

Manual analysis of JMeter results consumed hours per test cycle and was inconsistent between analysts.

Reports varied in structure, depth, and quality depending on who wrote them.

Non-technical stakeholders struggled to extract actionable conclusions from raw performance data.

Historical trend comparison required tedious manual data extraction across multiple result files.

Approach

Designed a Python pipeline to parse JTL/CSV result files and compute key performance metrics — response times, throughput, error rates, and percentile distributions.
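
A minimal sketch of this parsing step, assuming a JMeter JTL file saved in the default CSV format (`timeStamp`, `elapsed` in ms, `label`, `responseCode`, `success`); the sample data and the `compute_metrics` helper are illustrative, not the project's actual code:

```python
import io

import pandas as pd

# Illustrative JTL sample in JMeter's default CSV layout.
SAMPLE_JTL = """timeStamp,elapsed,label,responseCode,success
1700000000000,120,Login,200,true
1700000000500,340,Login,200,true
1700000001000,95,Search,200,true
1700000001500,870,Search,500,false
1700000002000,210,Search,200,true
"""

def compute_metrics(jtl_csv: str) -> pd.DataFrame:
    """Aggregate per-label response-time, percentile, and error statistics."""
    df = pd.read_csv(io.StringIO(jtl_csv))
    # JTL stores success as "true"/"false"; normalise to a real boolean.
    df["success"] = df["success"].astype(str).str.lower() == "true"
    elapsed = df.groupby("label")["elapsed"]
    return pd.DataFrame({
        "samples": elapsed.count(),
        "avg_ms": elapsed.mean().round(1),
        "p90_ms": elapsed.quantile(0.90),
        "p95_ms": elapsed.quantile(0.95),
        "error_rate_pct": (100 * (1 - df.groupby("label")["success"].mean())).round(1),
    })

metrics = compute_metrics(SAMPLE_JTL)
print(metrics)
```

Keeping the parsed results in a DataFrame makes the later steps (charting, templating, run-over-run comparison) straightforward joins and plots rather than bespoke file handling.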

Built chart generation — response time distributions, throughput curves, error breakdowns — using matplotlib, embedded directly into report templates.
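
The charting step can be sketched as follows; the function name and styling choices are illustrative, and the `Agg` backend is assumed so charts render headlessly inside the pipeline:

```python
import tempfile
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend: render to files, no display needed
import matplotlib.pyplot as plt

def plot_response_time_distribution(elapsed_ms, out_path):
    """Render a response-time histogram as a PNG for embedding in a report."""
    fig, ax = plt.subplots(figsize=(6, 3.5))
    ax.hist(elapsed_ms, bins=20, color="steelblue", edgecolor="white")
    ax.set_xlabel("Response time (ms)")
    ax.set_ylabel("Requests")
    ax.set_title("Response time distribution")
    fig.tight_layout()
    fig.savefig(out_path, dpi=150)
    plt.close(fig)  # free the figure; pipelines render many charts per run
    return out_path

samples = [95, 110, 120, 135, 150, 180, 210, 340, 870]
chart = plot_response_time_distribution(
    samples, Path(tempfile.gettempdir()) / "response_times.png"
)
```

Writing each chart to a known file path lets the templating step embed it by reference, so chart generation and report assembly stay decoupled.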

Created DOCX/PDF report templates with consistent structure: executive summary, detailed metrics, trend comparison, conclusions, and recommendations.

Integrated an LLM layer (via Ollama) to generate narrative analysis sections — translating statistical data into clear, human-readable conclusions.
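
A sketch of that integration against Ollama's default local HTTP endpoint (`/api/generate` with `stream: false`); the prompt wording and the `llama3` model name are assumptions, and `generate_narrative` requires a running `ollama serve`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_prompt(metrics_summary: str) -> str:
    """Frame the statistics so the model writes for non-technical readers."""
    return (
        "You are a performance analyst. Summarise the following test metrics "
        "in plain language for non-technical stakeholders, and flag any "
        "anomalies (high percentiles, elevated error rates):\n\n"
        + metrics_summary
    )

def generate_narrative(metrics_summary: str, model: str = "llama3") -> str:
    """Ask a local Ollama model for a narrative analysis section."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(metrics_summary),
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running the model locally via Ollama keeps raw test data on the analyst's machine, which matters when performance results describe internal systems.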

Added historical comparison logic to automatically surface regressions and improvements across test runs.
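
The comparison logic can be sketched as a threshold check over per-label percentiles from two runs; the 10% threshold and the p95 metric are illustrative choices:

```python
def compare_runs(baseline: dict, current: dict, threshold_pct: float = 10.0):
    """Flag per-label regressions and improvements in p95 response time.

    baseline/current map transaction label -> p95 in ms. Changes larger
    than `threshold_pct` in either direction are surfaced.
    """
    findings = []
    for label in sorted(set(baseline) & set(current)):
        delta_pct = 100.0 * (current[label] - baseline[label]) / baseline[label]
        if delta_pct >= threshold_pct:
            findings.append((label, round(delta_pct, 1), "regression"))
        elif delta_pct <= -threshold_pct:
            findings.append((label, round(delta_pct, 1), "improvement"))
    return findings

baseline = {"Login": 300, "Search": 500, "Checkout": 800}
current = {"Login": 420, "Search": 430, "Checkout": 810}
print(compare_runs(baseline, current))
# → [('Login', 40.0, 'regression'), ('Search', -14.0, 'improvement')]
```

Surfacing only the labels that crossed the threshold keeps the trend section short enough that regressions are actually read, rather than buried in a full metrics dump.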

Technology Stack

Core

Python · pandas · matplotlib

Data

JTL / CSV Parsing · Statistical Analysis · Percentile Computation

Reporting

python-docx · FPDF · Template Engine

AI

Ollama · LLM Prompt Engineering · Narrative Generation

Outcomes

Reduced per-cycle analysis time from hours of manual work to minutes of automated processing.

Standardized report structure across the team — every report now follows a consistent, professional format.

Enabled non-technical stakeholders to understand performance results without engineer interpretation.

Historical trend detection surfaced regressions that were previously missed in manual reviews.

Summary

The value of performance engineering lies in how efficiently its results reach decision-makers. Automating the analysis-to-report pipeline — with AI-assisted narrative — turned raw data into structured deliverables that directly supported release decisions.