Michał Kochaniak
Senior Test Automation Engineer
AI-Driven Quality Systems · Performance Engineering · Automation Architecture
I design and build automation frameworks, performance reporting systems, and AI-assisted quality workflows.
About
Quality as systems engineering
I treat test automation as an architecture problem, not a scripting task. My work covers framework design, performance analysis, CI/CD integration, and AI-assisted workflows — with the goal of giving teams reliable quality signals and clear reporting throughout delivery.
Automation Architecture
Designing maintainable web and mobile automation frameworks built for long-term stability, cross-platform coverage, and CI/CD integration.
Performance Engineering
Turning raw JMeter results into structured analysis, visual reporting, and decision-ready performance insights.
Applied AI in QA
Using local LLMs and agent workflows to support test analysis, reporting, and engineering decisions without external data exposure.
Featured Projects
Featured Work in Automation, Performance, and AI
Selected projects across test automation, performance analysis, reporting systems, and AI-assisted engineering workflows.
Applied AI
AI in Quality Engineering
AI is most useful in QA not for generating tests, but for accelerating analysis, interpreting results, and supporting engineering decisions — locally and privately.
Test Result Analysis
Parsing logs, clustering failures, and surfacing root causes — faster than manual triage.
- Summarizing test failures across suites
- Grouping similar errors by pattern
- Identifying likely root causes from stack traces
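The grouping step above can be sketched in a few lines: collapse volatile details (ids, timestamps, counters) out of each failure message so that repeats of the same underlying error share one signature. This is a minimal illustration, not the production workflow; the function names and sample messages are invented for the example.

```python
import re
from collections import defaultdict

def normalize(message: str) -> str:
    """Collapse volatile details (hex ids, numbers) into a stable signature."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    sig = re.sub(r"\d+", "<N>", sig)
    return sig.strip()

def cluster_failures(failures):
    """Group raw failure messages by their normalized signature."""
    clusters = defaultdict(list)
    for msg in failures:
        clusters[normalize(msg)].append(msg)
    return dict(clusters)

failures = [
    "TimeoutException: element #btn-42 not found after 30s",
    "TimeoutException: element #btn-17 not found after 30s",
    "AssertionError: expected 200 but got 503",
]
clusters = cluster_failures(failures)
# Both timeouts share one signature; the assertion failure stands alone.
```

Triage then happens per cluster instead of per failure, which is where the speed-up over manual review comes from.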
Performance Report Interpretation
Interpreting JMeter results and performance baselines into actionable observations.
- Explaining throughput and latency anomalies
- Comparing runs against historical baselines
- Generating stakeholder-readable summaries
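The core of that interpretation step is turning a raw JTL/CSV results file into per-label statistics. A minimal sketch with the standard library, assuming a simplified JTL excerpt (real JMeter result files carry more columns, such as `responseCode` and `Latency`):

```python
import csv
import io
import statistics

# Simplified JTL excerpt; column names match JMeter's CSV output.
jtl = io.StringIO(
    "timeStamp,elapsed,label,success\n"
    "1700000000000,120,login,true\n"
    "1700000000100,340,login,true\n"
    "1700000000200,95,login,false\n"
    "1700000000300,210,search,true\n"
)

def summarize(jtl_file):
    """Per-label sample count, median latency, and error rate from a JTL CSV."""
    by_label = {}
    for row in csv.DictReader(jtl_file):
        stats = by_label.setdefault(row["label"], {"elapsed": [], "errors": 0})
        stats["elapsed"].append(int(row["elapsed"]))
        stats["errors"] += row["success"] != "true"
    return {
        label: {
            "samples": len(s["elapsed"]),
            "median_ms": statistics.median(s["elapsed"]),
            "error_rate": s["errors"] / len(s["elapsed"]),
        }
        for label, s in by_label.items()
    }

report = summarize(jtl)
```

The same aggregation extends naturally to percentiles and per-run comparisons once the per-label buckets exist.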
AI-Assisted Reporting
Structured reports from raw test data — consistent format, no manual writing.
- Narrative summaries from execution data
- Executive-level conclusions and risk flags
- Consistent formatting across report cycles
Local AI / On-Prem Systems
Running models locally via Ollama. Sensitive data never leaves the environment.
- No external API calls for analysis
- Sensitive data stays inside the network
- Reproducible and version-controlled workflows
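The pattern behind these points can be sketched against Ollama's default local HTTP endpoint. The model name and prompt wording below are illustrative, not a fixed configuration; the key property is that the request never leaves `localhost`:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, failures_summary: str) -> dict:
    """Build a non-streaming generate request; prompt wording is illustrative."""
    return {
        "model": model,
        "prompt": (
            "Summarize these test failures for an engineering report, "
            "grouping by likely root cause:\n" + failures_summary
        ),
        "stream": False,
    }

def analyze_locally(model: str, failures_summary: str) -> str:
    """Send the analysis request to the local model; no external API is involved."""
    payload = json.dumps(build_payload(model, failures_summary)).encode()
    with request.urlopen(request.Request(OLLAMA_URL, data=payload)) as resp:
        return json.loads(resp.read())["response"]

payload = build_payload("llama3", "3x TimeoutException on checkout flow")
```

Because the prompt templates live in version control alongside the code, the same analysis is reproducible across report cycles.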
I treat AI as an engineering tool — useful when it improves signal quality, reduces manual effort, and keeps decision-making grounded in data.
Built with local LLMs (Ollama), structured prompts, and workflow orchestration patterns.
Capabilities
Stack & practices
Core technologies and methods I work with regularly.
Automation Engineering
- Java
- Selenium WebDriver
- Appium
- Maven
- TestNG / JUnit
- Page Object Model
- Cross-Platform Test Design
- Data-Driven Testing
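Data-driven testing, the last item above, means the test logic is written once and the cases live as data. In the Java stack listed here that is typically TestNG's `@DataProvider`; the same idea in a short, self-contained Python sketch (the login function and CSV rows are stand-ins for a real system under test):

```python
import csv
import io
import unittest

# Hypothetical test data; in a real suite this would come from a CSV file.
TEST_DATA = io.StringIO(
    "username,password,expect_ok\n"
    "alice,correct-horse,true\n"
    "alice,wrong,false\n"
    ",,false\n"
)

def login(username: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return username == "alice" and password == "correct-horse"

class LoginDataDrivenTest(unittest.TestCase):
    def test_login_from_csv(self):
        for row in csv.DictReader(TEST_DATA):
            with self.subTest(**row):  # one sub-result per data row
                expected = row["expect_ok"] == "true"
                self.assertEqual(login(row["username"], row["password"]), expected)
```

New cases become new rows, not new code, which keeps the framework stable as coverage grows.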
Performance Engineering
- Apache JMeter
- CSV / JTL Analysis
- Performance Reporting
- Trend Comparison
- Result Visualization
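Trend comparison in practice reduces to checking current metrics against a stored baseline and flagging regressions beyond a tolerance. A minimal sketch, with illustrative metric names and thresholds:

```python
def compare_to_baseline(current: dict, baseline: dict, tolerance: float = 0.10):
    """Flag metrics that regressed by more than `tolerance` vs the baseline.

    Metrics are 'lower is better' latencies in ms; the 10% tolerance is
    an illustrative default, not a universal threshold.
    """
    regressions = {}
    for metric, base in baseline.items():
        now = current.get(metric)
        if now is not None and now > base * (1 + tolerance):
            regressions[metric] = {"baseline": base, "current": now}
    return regressions

baseline = {"p50_ms": 110, "p95_ms": 420}
current = {"p50_ms": 115, "p95_ms": 510}
flags = compare_to_baseline(current, baseline)
# Only p95_ms exceeds its tolerance band and is flagged.
```

Wired into CI, this is what turns a results file into a decision-ready signal: a build either stays inside its baseline envelope or names the metric that left it.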
CI/CD & Tooling
- Jenkins
- GitHub Actions
- Build Pipelines
- Jira Integration
- Zephyr Scale
- Git Workflow
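As a concrete example of the pipeline side, a minimal GitHub Actions workflow for the Java/Maven stack above might look like this (job and artifact names are illustrative):

```yaml
# Illustrative workflow: run the Maven test suite on every push
# and keep the Surefire reports as a build artifact.
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "17"
      - run: mvn -B test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-reports
          path: target/surefire-reports/
```

Publishing the reports on every run, pass or fail, is what feeds the downstream reporting and Jira/Zephyr traceability described elsewhere on this page.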
AI & Agent Systems
- LLM Integration
- Ollama / Local AI
- Agent Orchestration
- Prompt & System Design
- AI-Assisted Analysis
Impact
Measurable outcomes
- Delivered maintainable automation for critical banking flows across Android and iOS
- Built reporting pipelines used by both engineering teams and executive stakeholders
- Connected automated test execution with Jira and Zephyr for end-to-end traceability
- Designed privacy-safe local AI workflows for QA analysis and engineering support
Process
Working approach
Assess
Understand the system, identify risk areas, and define what quality means before building automation.
Architect
Design framework patterns that remain stable across product change, not scripts that break on the next release.
Automate
Focus on high-value flows and integration points where automation improves delivery confidence.
Report
Turn execution data and performance results into structured signals that support engineering decisions.
Next step
Let's solve a quality problem
I help engineering teams ship faster by building test automation architectures, performance pipelines, and AI-driven quality systems. If your release cycle needs unblocking — let's talk scope.