AI Spotlight: Anna Kovalova
- Reut Lazo
- Sep 28, 2025
- 6 min read
We're excited to present Anna Kovalova, CEO at Anbosoft, as this week's AI Spotlight.

Let's dive into our interview with Anna and see how she is using AI.
1. Share your AI origin story
I didn't start in a lab; I started on the test bench. After years shipping software, hardware, and firmware, I began using AI to speed up the "thinking work" in QA: generating test ideas and charters, exploring edge cases, drafting risk notes, and synthesizing logs. Very quickly I learned two truths: AI can accelerate analysis, but quality still depends on human context and judgment.
The pivotal moment came when I saw teams trying AI in isolated pockets with little business impact. What they needed wasn't another tool; it was a repeatable way to connect quality decisions to outcomes. That insight led me to design an AI-powered QA Audit: a structured survey plus interviews, a maturity score, a risk map, and a prioritized action plan that shows "where you are, where you could be, and how to get there." I built it to illuminate waste, shorten feedback loops, and translate QA improvements into executive-friendly metrics like cycle time, hotfix frequency, and defect escape rate.
I shared the approach publicly, then iterated on it with real teams across different stacks. Along the way I documented practical, day-to-day AI uses for testers so others could adopt them immediately. That combination of hands-on practice, a measurable framework, and open teaching defines my AI journey. Today my principle is the same as when I started: keep quality human-centered, apply AI where it truly reduces waste and risk, and package the learnings so any team can benefit.
2. What three AI tools have been most game changing for you?
ChatGPT: my "Swiss Army" QA workbench
Why it changed my day-to-day: it collapsed the slowest parts of analysis. I use ChatGPT's data-analysis tooling to ingest logs and CSV dumps, slice defects by signal (owner, env, commit), and spin up quick visualizations without leaving the chat. That lets me prioritize tests with evidence instead of hunches, then roll straight into generating BDD scenarios, edge-case lists, regexes, or crisp repro steps for non-native readers. The net effect is fewer context switches and faster, evidence-backed decisions (a rough code equivalent of the slicing step is sketched below).
Why it fits my workflow: the same workspace can reason over screenshots or UI diffs when I need it (multimodal), so I don't bounce between tools during failure triage.
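To make that slicing step concrete, here is a minimal sketch of the equivalent analysis in plain pandas. The file name and column names (owner, env) are hypothetical placeholders for whatever your tracker or CI pipeline actually exports:

```python
import pandas as pd

# Hypothetical defect export; the file and column names are placeholders
# for whatever your tracker or CI pipeline actually emits.
defects = pd.read_csv("defects_export.csv")

# Slice defects by signal: which owner/env pairs generate the most defects?
hotspots = (
    defects.groupby(["owner", "env"])
    .size()
    .sort_values(ascending=False)
    .head(10)
)
print(hotspots)

# Defect escapes: the share of issues that were found in production.
escape_rate = (defects["env"] == "production").mean()
print(f"Defect escape rate: {escape_rate:.1%}")
```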
Claude: long-context reasoning plus tidy "Artifacts"
Why it changed my day-to-day: spec reviews and multi-file plans stopped feeling like a juggling act. Claude's large context window lets me keep lengthy requirements, prior conversations, and log excerpts "in memory" while I iterate, so I can ask higher-order questions (risk, coverage gaps) without trimming context.
Why it fits my workflow: the Artifacts pane gives me a clean side canvas where test plans, risk tables, or code snippets live as first-class objects I can refine, version, and share without losing the chat thread that produced them. It turns brainstorming into a tangible deliverable in one place.
Postman Postbot: API testing on rails
Why it changed my day-to-day: Postbot automates the glue work in API coverage. I describe what I want in plain language and get runnable tests for a single request or an entire collection, plus inline suggestions while I hand-edit scripts. It also helps troubleshoot failing checks and draft documentation where my team already lives: inside Postman. That shortens the path from "new endpoint" to reliable, reviewed coverage.
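Postbot itself writes Postman's JavaScript test scripts, but the shape of what it scaffolds is easy to show. Below is a rough Python/pytest approximation of the kind of checks a generated suite typically starts with; the endpoint URL and response fields are invented for illustration:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_list_users_contract():
    resp = requests.get(f"{BASE_URL}/users", timeout=10)

    # Status and latency assertions: the first checks a generated suite adds.
    assert resp.status_code == 200
    assert resp.elapsed.total_seconds() < 2.0

    # Light schema checks: required fields exist with sensible types.
    body = resp.json()
    assert isinstance(body, list)
    for user in body:
        assert isinstance(user["id"], int)
        assert isinstance(user["email"], str)
```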
3. If you were just starting your AI journey today, where would you start?
Here's how I'd start today, based on what actually worked for me but simplified so a newcomer can get momentum fast (and without breaking anything important).
Pick one outcome and make it measurable.
I'd choose a single, high-leverage problem (for example: "cut hotfixes," "shorten triage from hours to minutes," or "reduce flaky tests"). I'd write down the baseline numbers on day one, so every AI change has a clear before/after.
Set simple guardrails up front.
I'd document three rules I've learned are non-negotiable:
No sensitive data in prompts.
Every AI output has an owner who reviews it.
We log prompts/outputs for learning and audits.
It takes 20 minutes and saves weeks of cleanup later.
Assemble a tiny toolkit and templates.
I'd start with the trio I lean on daily: ChatGPT for rapid test ideas, data slicing, and quick refactors; Claude for long-context specs and tidy iteration; and Postman Postbot to turn intent into runnable API tests. Then I'd create a few reusable prompt templates: risk-based test ideas, BDD scenarios, log triage, and defect write-ups.
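A template doesn't need special tooling; a plain function you paste from is enough to start. As a minimal sketch (the wording is my illustration, not a canonical prompt), a risk-based test-ideas template might look like this:

```python
def risk_based_test_ideas(feature: str, spec_excerpt: str) -> str:
    """Reusable prompt template for risk-based test ideas.

    Illustrative wording only; tune it on real tickets and keep the
    versions that consistently produce good output.
    """
    return (
        "You are a senior QA engineer.\n"
        f"Task: propose the 10 highest-risk test ideas for: {feature}.\n"
        "Constraints:\n"
        "- Rank by likelihood x impact, with a one-line rationale each.\n"
        "- Flag any idea that needs production-like data.\n"
        "- Output a numbered list, nothing else.\n\n"
        f"Spec excerpt:\n{spec_excerpt}"
    )

print(risk_based_test_ideas("password reset", "Users may reset via email link..."))
```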
Ship one real workflow in a week.
I'd take a single API or feature and do an end-to-end pass:
Use ChatGPT to outline risks, edge cases, and a small data set.
Use Claude to hold the full spec and produce a first draft of the test plan.
Use Postbot to generate tests for a collection, add checks, and fix failures.
The deliverable isn't a "demo"; it's a working slice the team can feel.
Measure, then iterate.
After that first week, I'd compare results to my baseline: What got faster? What errors disappeared? What still hurts? I'd keep a simple table of prompts that worked, prompts that backfired, and examples that consistently produce good results. This becomes my internal playbook.
Build a lightweight evaluation habit.
I'd lock in tiny but real checks: the pass rate of generated API assertions, the groundedness of summaries against source logs, and a weekly "defect escape" snapshot. I've learned that if you don't measure, velocity becomes vanity.
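These checks fit in a script, not a dashboard. Here is a minimal sketch of the pass-rate tracking, assuming you log one record per AI-generated assertion; the data shape is invented for illustration:

```python
# Hypothetical weekly log: one record per AI-generated API assertion.
generated_checks = [
    {"name": "GET /users returns 200", "passed": True, "hand_edited": False},
    {"name": "POST /orders rejects bad payload", "passed": False, "hand_edited": True},
    {"name": "GET /health responds under 500 ms", "passed": True, "hand_edited": False},
]

pass_rate = sum(c["passed"] for c in generated_checks) / len(generated_checks)
edit_rate = sum(c["hand_edited"] for c in generated_checks) / len(generated_checks)

# A rising pass rate with a falling hand-edit rate means the prompts are
# earning their keep; otherwise velocity is vanity.
print(f"Generated-assertion pass rate: {pass_rate:.0%}")
print(f"Hand-edit rate: {edit_rate:.0%}")
```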
Practice the two muscles that compound over time.
Prompt patterns. Role + task + constraints, a couple of crisp examples, and a structured output. I'd practice these on real tickets and logs, not synthetic exercises (see the sketch after this list).
Red-teaming your own work. Before I adopt a new AI step, I stress-test it with missing context, misleading inputs, and ambiguous requirements. That discipline prevents "AI-flavored" bugs.
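Here is what that pattern can look like for log triage, as a hedged sketch: the prompt text and labels are my invention, and the parser is the guardrail that rejects replies which drift from the structured output we asked for.

```python
import json

def triage_prompt(log_excerpt: str) -> str:
    # Role + task + constraints + one crisp example + structured output.
    return (
        "Role: you are a QA analyst triaging CI failures.\n"
        "Task: classify the log excerpt as FLAKY, PRODUCT_BUG, or TEST_BUG.\n"
        'Constraints: reply ONLY with JSON: {"label": "...", "evidence": "..."}.\n'
        "Example log: ConnectionResetError after 3 retries on /health\n"
        'Example reply: {"label": "FLAKY", "evidence": "network reset; passes on retry"}\n\n'
        "Log excerpt:\n" + log_excerpt
    )

def parse_triage(raw_reply: str) -> dict:
    """Reject any model reply that drifts from the structure we demanded."""
    result = json.loads(raw_reply)  # raises on non-JSON replies
    assert result["label"] in {"FLAKY", "PRODUCT_BUG", "TEST_BUG"}
    assert result["evidence"], "evidence must cite the log, not be empty"
    return result
```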
Socialize the win and ask for the next pilot.
I'd package the week-one results in a one-pager: the business outcome, the steps, the guardrails, and the measurable deltas. Then I'd ask for a second pilot in a neighboring area (for example, UI checks or log triage), reusing the same scaffolding.
30-day starter plan (what I'd literally do):
Week 1: Baseline metrics, guardrails doc, tool setup, templates.
Week 2: One feature/API from zero to tested; log what worked.
Week 3: Add minimal evals; harden prompts; remove manual glue.
Week 4: Present results; scale to a second workflow; retire anything that didn't earn its keep.
That's the path I wish I'd had on day one: one outcome, one week, guardrails from the start, and a feedback loop that ties AI to real improvements the team can see.
4. Share the spotlight: Name 3+ women leading in AI we should all follow.
Mira Murati (Founder & CEO, Thinking Machines Lab; former OpenAI CTO): now building a new AI lab and product org; valuable for frontier-research and team-building lessons.
"Silicon Valerie" Bertele (Investor & AI/innovation creator): a prolific voice on AI, product, and venture, with daily, punchy posts on how teams and founders can use AI practically (plus a window into ecosystem trends).
Joy Buolamwini (Founder, Algorithmic Justice League): a champion of algorithmic fairness who tracks and shapes the real-world impacts of AI on communities.
5. As a woman in AI, what do you want our allies to know?
Here's what I want our allies to know: practical, evidence-based, and focused on what actually helps.
Representation is still lopsided, so seats at the table matter. Women remain a minority in core AI pipelines (e.g., only ~18% of authors at leading AI conferences; more than 80% of AI professors are men). If we're not in the rooms where datasets, objectives, and guardrails are set, our needs get missed. Invite us in early, on model, data, and policy decisions.
Access and adoption gaps are real. Women are significantly less likely to use generative AI at work, even as usage soars. Close this with equal tooling access, structured time to learn, and visible sponsorship for women experimenting in production workflows.
Bias isn't abstract; it hurts women of color first. Landmark studies show much higher error rates for darker-skinned women in commercial systems. Allies can push for diverse evaluation sets, bias testing as a release gate, and red-team reviews that include the people most affected.
Inclusion pays off, literally. Diverse leadership teams are more likely to outperform financially. Tie inclusion goals to business metrics and accountability, not just values statements.
Meetings are where inclusion lives or dies. Women are interrupted more and credited less, which quietly shapes who gets stretch work and budget. Use simple norms: no interruptions; "echo and attribute" ideas; rotate facilitation; and capture decisions in writing with owners.
Caregiving bias is a career tax. The motherhood penalty is well documented. Flexible schedules, outcome-based performance reviews, and normalizing career "zig-zags" keep strong contributors in the field.
AI may reshape women's jobs more, not less. Roles with high clerical/admin content, where women are over-represented, are especially exposed. Allies in leadership should pair automation with upskilling budgets and clear pathways into higher-value work.
What great allyship looks like in practice:
Share power (co-own decisions, not just meetings).
Sponsor, don't just mentor (open doors to scope, budget, and visibility).
Codify fairness (structured interviews, skill-based evals, written credit).
Resource inclusion (time, tools, and training, especially for AI).
Measure it (publish adoption, promotion, and pay-equity deltas by gender).
If we do these things consistently (bring women into the design loop, reduce friction to use AI, and enforce everyday inclusion), we get safer systems, better business results, and teams where everyone's work is visible and valued.
Want to be the next AI Spotlight? It's a great opportunity to share your voice with our community! Fill out the WxAI AI Spotlight Nomination Form for your chance to step into the spotlight and reach the Women X AI community.



