Michał Kochaniak
Open to opportunities


Senior Test Automation Engineer

AI-Driven Quality Systems · Performance Engineering · Automation Architecture

I design and build automation frameworks, performance reporting systems, and AI-assisted quality workflows.

Automation Engineering · Performance Engineering · AI-Driven QA Systems · Mobile & Web Testing · Reporting Pipelines · CI/CD Integrations

About

Quality as systems engineering

I treat test automation as an architecture problem, not a scripting task. My work covers framework design, performance analysis, CI/CD integration, and AI-assisted workflows — with the goal of giving teams reliable quality signals and clear reporting throughout delivery.

Automation Architecture

Designing maintainable web and mobile automation frameworks built for long-term stability, cross-platform coverage, and CI/CD integration.

Java · Selenium · Appium · Maven · TestNG

Performance Engineering

Turning raw JMeter results into structured analysis, visual reporting, and decision-ready performance insights.

JMeter · CSV/JTL Analysis · Python · DOCX/PDF Generation

Applied AI in QA

Using local LLMs and agent workflows to support test analysis, reporting, and engineering decisions without external data exposure.

LLMs · Ollama · Agent Orchestration · Prompt Engineering

Featured Projects

Featured Work in Automation, Performance, and AI

Selected projects across test automation, performance analysis, reporting systems, and AI-assisted engineering workflows.

Performance Engineering · 01

AI Performance Reporting System

Problem

Raw JTL and CSV outputs required hours of manual analysis, while reporting quality varied between cycles and stakeholders lacked self-service visibility.

Solution

Built an automated Python pipeline for performance analysis, chart generation, and structured DOCX/PDF reporting, with a local LLM layer for narrative summaries and anomaly flagging.

Python · JMeter · pandas · Ollama · DOCX/PDF
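The analysis step of such a pipeline can be sketched in a few lines of pandas. This is a minimal illustration, not the actual system: it assumes JMeter's default CSV column names (`timeStamp`, `elapsed`, `label`, `success`) and computes per-label latency percentiles and error rates.

```python
import io
import pandas as pd

def summarize_jtl(csv_text: str) -> pd.DataFrame:
    """Per-label latency percentiles and error rate from a JMeter CSV/JTL export."""
    df = pd.read_csv(io.StringIO(csv_text))
    return df.groupby("label").agg(
        samples=("elapsed", "size"),
        p90_ms=("elapsed", lambda s: s.quantile(0.90)),
        p95_ms=("elapsed", lambda s: s.quantile(0.95)),
        error_rate=("success", lambda s: 1.0 - s.mean()),
    ).reset_index()

# Tiny illustrative sample; real JTL files carry many more columns.
sample = """timeStamp,elapsed,label,success
1,120,login,True
2,180,login,True
3,950,login,False
4,90,search,True
5,110,search,True
"""
print(summarize_jtl(sample))
```

From a table like this, chart generation and narrative summarisation are straightforward downstream steps.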

Outcome

Reduced reporting time from hours to minutes and established a consistent, stakeholder-ready reporting workflow.

View case
Automation Architecture · 02

Mobile Test Automation for Banking App

Problem

A hybrid banking app with native and webview flows required reliable automated regression across Android and iOS under frequent UI change.

Solution

Designed a layered automation framework with cross-platform abstractions, webview context handling, and CI-ready execution patterns.

Java · Appium · Selenium · Maven · TestNG

Outcome

Delivered stable automation coverage for critical user flows with low maintenance overhead across major UI updates.

View case
CI/CD Integration · 03

Jira + Zephyr + CI Quality Pipeline

Problem

Automated test results were disconnected from Jira workflows, making traceability, reporting, and quality visibility inconsistent.

Solution

Implemented a CI-integrated quality pipeline that publishes automated results to Zephyr Scale and maintains requirement-level traceability in Jira.

Java · Maven · Jenkins · GitHub Actions · Zephyr Scale

Outcome

Established real-time quality visibility and removed manual result synchronisation from the delivery workflow.

View case
Applied AI · 04

Agentic QA Assistant

Problem

Senior QA time was repeatedly spent on routine tasks — reviewing logs, comparing baselines, and preparing reports.

Solution

Designed a locally hosted LLM-based assistant to support result interpretation, regression analysis, and QA reporting workflows in privacy-sensitive environments.

Ollama · LLM Orchestration · Python · Prompt Design

Outcome

Reduced routine analysis effort and created a reusable foundation for domain-specific QA support with fully local processing.

View case

Applied AI

AI in Quality Engineering

AI is most useful in QA not for generating tests, but for accelerating analysis, interpreting results, and supporting engineering decisions — locally and privately.

Test Result Analysis

Parsing logs, clustering failures, and surfacing root causes — faster than manual triage.

  • Summarising test failures across suites
  • Grouping similar errors by pattern
  • Identifying likely root causes from stack traces
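The grouping step can be illustrated with a small sketch: normalise volatile details (numbers, hex addresses, element ids) out of failure messages so that repeated failures collapse into a single signature. The messages and patterns below are hypothetical examples, not output from a real suite.

```python
import re
from collections import Counter

def error_signature(message: str) -> str:
    """Replace volatile tokens so similar failures share one signature."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<hex>", message)
    sig = re.sub(r"\d+", "<n>", sig)
    return sig.strip()

def cluster_failures(messages):
    """Count failures per normalised signature."""
    return Counter(error_signature(m) for m in messages)

failures = [
    "TimeoutException: element #btn-42 not found after 30s",
    "TimeoutException: element #btn-17 not found after 30s",
    "NullPointerException at LoginPage.java:88",
]
print(cluster_failures(failures).most_common())
```

Clusters like these become the unit of triage: one signature reviewed once, rather than every failing test individually.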

Performance Report Interpretation

Interpreting JMeter results and performance baselines into actionable observations.

  • Explaining throughput and latency anomalies
  • Comparing runs against historical baselines
  • Generating stakeholder-readable summaries
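The baseline comparison is simple to sketch: flag any endpoint whose current p95 latency exceeds its stored baseline by more than a tolerance. The data shapes and 15% threshold here are illustrative assumptions.

```python
def find_regressions(current: dict, baseline: dict, tolerance: float = 0.15):
    """Return {label: relative regression} for p95 values above baseline * (1 + tolerance)."""
    regressions = {}
    for label, p95 in current.items():
        base = baseline.get(label)
        if base and p95 > base * (1 + tolerance):
            regressions[label] = round((p95 - base) / base, 3)
    return regressions

baseline = {"login": 420.0, "search": 180.0}  # p95 latency in ms, illustrative
current = {"login": 510.0, "search": 185.0}
print(find_regressions(current, baseline))  # login regressed by ~21%
```

The flagged labels, with their relative deltas, are exactly the observations worth a sentence in a stakeholder summary.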

AI-Assisted Reporting

Structured reports from raw test data — consistent format, no manual writing.

  • Narrative summaries from execution data
  • Executive-level conclusions and risk flags
  • Consistent formatting across report cycles

Local AI / On-Prem Systems

Running models locally via Ollama. Sensitive data never leaves the environment.

  • No external API calls for analysis
  • Sensitive data stays inside the network
  • Reproducible and version-controlled workflows
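A local call of this kind can be sketched against Ollama's `/api/generate` endpoint on localhost, using only the standard library, so no data leaves the machine. The model name and prompt are placeholders.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Non-streaming request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def summarize_locally(log_excerpt: str, model: str = "llama3") -> str:
    """Send a failure excerpt to a locally running model and return its summary."""
    body = build_request(model, f"Summarise these test failures:\n{log_excerpt}")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint, model, and prompts live in the repository alongside the tests, the whole analysis workflow stays reproducible and version-controlled.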

I treat AI as an engineering tool — useful when it improves signal quality, reduces manual effort, and keeps decision-making grounded in data.

Built with local LLMs (Ollama), structured prompts, and workflow orchestration patterns.

Capabilities

Stack & practices

Core technologies and methods I work with regularly.

Automation Engineering

  • Java
  • Selenium WebDriver
  • Appium
  • Maven
  • TestNG / JUnit
  • Page Object Model
  • Cross-Platform Test Design
  • Data-Driven Testing

Performance Engineering

  • Apache JMeter
  • CSV / JTL Analysis
  • Performance Reporting
  • Trend Comparison
  • Result Visualization

CI/CD & Tooling

  • Jenkins
  • GitHub Actions
  • Build Pipelines
  • Jira Integration
  • Zephyr Scale
  • Git Workflow

AI & Agent Systems

  • LLM Integration
  • Ollama / Local AI
  • Agent Orchestration
  • Prompt & System Design
  • AI-Assisted Analysis

Impact

Measurable outcomes

01

Delivered maintainable automation for critical banking flows across Android and iOS

02

Built reporting pipelines used by both engineering teams and executive stakeholders

03

Connected automated test execution with Jira and Zephyr for end-to-end traceability

04

Designed privacy-safe local AI workflows for QA analysis and engineering support

Process

Working approach

01

Assess

Understand the system, identify risk areas, and define what quality means before building automation.

02

Architect

Design framework patterns that remain stable across product change, not scripts that break on the next release.

03

Automate

Focus on high-value flows and integration points where automation improves delivery confidence.

04

Report

Turn execution data and performance results into structured signals that support engineering decisions.

Next step

Let's solve a quality problem

I help engineering teams ship faster by building test automation architectures, performance pipelines, and AI-driven quality systems. If your release cycle needs unblocking — let's talk scope.