Methodology
9:30 — 10:00
"Quick 'n Dirty Test Management"
Pekka Marjamäki, Jani Grönman
10:00 — 10:30
"Becoming a toolsmith - lessons learned from software testing and tool experimentation"
Joshua Gorospe
10:30 — 11:00
"Automation Test code as a First class citizen"
Lewis Prescott
11:00 — 11:30
Q&A
Motivation&Thinking
11:45 — 12:15
"Learning, Cognition, and Becoming a Better Tester"
Jess Ingrassellino
12:15 — 12:45
"How we Think (when testing)"
Paul Gerrard
12:45 — 13:15
"Evolving your career: Questions a tester should ask at interview to find a great job"
Mike Harris
13:15 — 13:45
Q&A
AI
13:50 — 14:20
"Artificial Intelligence vs. Human Stupidity - An Ethical problem"
Olivier Denoo
14:20 — 14:50
"'Self-Healing Tests': The holy grail of test automation ... or just a lot of noise about nothing?"
Matthias Zax
14:50 — 15:20
"Getting Started with AI for Software"
Jenn Bonine
15:20 — 15:50
Q&A
Testing Techniques
15:55 — 16:25
"MobTesting - is that a thing?"
Louise Rasmussen
16:25 — 16:55
"You Need User Acceptance Testing (UAT) as well as User Story Acceptance Tests"
Robin Goldsmith
16:55 — 17:25
"Implementing BDD: How One Team is Making it Work "
Christine Ketterlin Fisher
17:25 — 17:55
Q&A
10:00 — 10:10
Intro
Entry-level track
Gives access to the Junior track only, with no recordings. Focuses on entry-level content around QA.
Automation & Tooling
11:45 — 12:15
"Effective Test Estimation"
Shyam Sunder
12:15 — 12:45
"Why do we do Test Automation Internal Mentorships for our QA Engineers?"
Uladzislau Ramanenka
12:45 — 13:15
"Orchestrating your Testing Process - coordinating your manual and automated testing efforts"
Joel Montvelisky
13:15 — 13:45
Q&A
AI
13:50 — 14:20
"Testing Conversational AI - Strategy to Automation"
Shama Ugale
14:20 — 14:50
"Cognitive Engineering - Test Data AI"
Jonathon Wright
14:50 — 15:20
"Advancing Mobile App Dev and Test with Next Gen Technologies"
Eran Kinsbruner
15:20 — 15:50
Q&A
Break - 15 minutes
Testing Techniques
15:55 — 16:25
"Taking Quality to the Next Level: Metrics for Improvement"
Anton Angelov
16:25 — 16:55
"Test case auto-generation and case studies"
Flint Liu
16:55 — 17:25
"How to Rapidly Introduce Tests When There Is No test coverage"
Dmytro Gordiienko
17:25 — 17:55
Q&A
Break - 5 minutes
Testing Techniques
9:30 — 10:00
"Using Test Designs in Test Automation"
André Verschelling
10:00 — 10:30
"Why Aren't We Using Selenium?"
Curt Tompkins
10:30 — 11:00
"Page Object Model Pitfalls"
Max Saperstone
11:00 — 11:30
Q&A
Break - 5 minutes
AI
10:00 — 10:10
Intro
Full Access
Gives access to both the Junior and Senior tracks; recordings are included. Focuses on deep technical content around QA.
Testing Conversational AI - Strategy to Automation
Last year was dominated by smart devices and voice-based home assistants. Unlike other applications, these are interacted with through conversational interfaces. They are built using advanced algorithms, ranging from pattern- and expression-matching engines to natural language processing and AI/machine learning techniques. These systems are constantly learning by themselves, improving their interactions with the user and raising a new challenge for the testing world: non-deterministic output. Natural language is the input to such interfaces, and we humans really love having alternatives: we love our synonyms and our expressions using emojis, GIFs, and pictures. Testing in this context moves to clouds of probabilities.

In this session I will cover the strategy for testing such interfaces and testing the NLP models, and share experience on how to automate these tests and add them to the CI/CD build pipelines.
Key learnings:
* The how, what, and why of a conversational interface
* How can I build my testing approach for such an interface?
* What from my current toolset can I use for this new context?
* How do I automate these tests and add them to my CI/CD pipeline for instant feedback?
* How do I measure the quality?
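The non-deterministic outputs described above change what a test can assert: instead of matching an exact response string, tests check the detected intent and a confidence threshold. Below is a minimal, self-contained sketch of that testing style; the `classify` function is a toy keyword stand-in for a real NLP model, and all names here are illustrative, not from the talk.

```python
def classify(utterance):
    """Toy intent classifier: returns (intent, confidence).

    A real conversational system would call an NLP/ML model here;
    this keyword version just keeps the example runnable.
    """
    keywords = {
        "weather": ["weather", "rain", "sunny", "forecast"],
        "greeting": ["hi", "hello", "hey"],
    }
    words = utterance.lower().split()
    for intent, hints in keywords.items():
        hits = sum(1 for w in words if w in hints)
        if hits:
            return intent, min(1.0, 0.5 + 0.25 * hits)
    return "fallback", 0.0

def assert_intent(utterance, expected_intent, threshold=0.5):
    """Assert on intent + confidence, not on an exact response string."""
    intent, confidence = classify(utterance)
    assert intent == expected_intent, (utterance, intent)
    assert confidence >= threshold, (utterance, confidence)

# Humans love synonyms: the same intent, phrased several ways,
# must all clear the threshold.
for phrase in ["What's the weather today?", "Will it rain tomorrow?"]:
    assert_intent(phrase, "weather")
```

The same assertion shape carries over to CI/CD: a paraphrase set per intent, with a tunable threshold, gives a deterministic pass/fail signal over a probabilistic system.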
Learning, Cognition, and Becoming a Better Tester
Have you ever wondered how you can tie your life interests into your work? Do you love to learn, and want to understand how your love for learning can joyously infect your entire life, including your software testing? Jess will share pedagogies of learning (Zone of Proximal Development, Scaffolding, Metacognition, and Flow), and then do a live demonstration of these principles on the violin, weaving testing, music, and other disciplines together in a fascinating talk where she lays her skills on the line to show you how you can use learning, cognition, and personal passion to become a better tester.
Test case auto-generation and case studies
Test cases are the main factor in testing activity; however, no perfect algorithm can help us define a test suite that covers all scenarios. In the modern world, software under test is becoming more and more complicated, and manual test case design is not enough for some products, such as machine learning and big data services, or for some testing requirements, such as security testing and performance testing. This talk will cover three different test case generation techniques: search-based testing, combinatorial testing, and metamorphic testing. For each technique, case studies will also be provided with different types of products: AI models, search engines, and backend services.
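As one illustration of the techniques the abstract names, metamorphic testing replaces a missing oracle with a relation that must hold between outputs of related inputs. The sketch below is a generic, hypothetical example (not from the talk): narrowing a search query must never add results.

```python
def search(records, query_terms):
    """Toy search engine: return records containing every query term."""
    return [r for r in records if all(t in r for t in query_terms)]

records = ["red apple", "green apple", "red cherry", "blueberry"]

# Metamorphic relation: the results of a narrower query (extra term)
# are always a subset of the results of the broader query. We can
# check this without knowing the "correct" result set for either.
broad = search(records, ["apple"])
narrow = search(records, ["apple", "red"])
assert set(narrow) <= set(broad)
```

The same relation-based check scales to systems with genuinely unknowable single-output oracles, such as ML models or large backend services.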
Taking Quality to the Next Level: Metrics for Improvement
Are your testing and quality assurance activities adding significant value in the eyes of your stakeholders? Do you have difficulty convincing decision-makers that they need to invest more in improving quality? Selecting metrics that stakeholders understand will get your improvement project or program past the pilot phase and reduce the risk of having it stopped in its tracks.
Anton Angelov will present a story of how his teams managed to adopt a more in-depth bug workflow, from noticing the bug to the triage process and metrics boards. There will be a real-world example of how to collect 15 essential QA metrics from Jira. To calculate them, we will use Azure Functions and visualize them through charts in Azure Power BI. At the end of the presentation, you will have many ideas on how you can optimize your existing defect management through new statuses and practices, and monitor it through quality metrics that can help you improve your software development and testing further.
Cognitive Engineering - Test Data AI
Next-generation Intelligent Automation unlocks the power of analytics and autonomics for continuous delivery. While automation solutions within current testing implementations help to address agility needs, such automation is typically driven by static rules using conventional scripting and orchestration techniques. Such techniques incur a high maintenance overhead to keep them updated relative to changing circumstances. The recent emergence of cognitive engineering technologies (such as autonomics) has introduced the possibility of driving adaptive automation within testing implementations. Such automation can self-heal and self-configure based on changing situations. In this session, the speaker will present how next-generation data engineering and autonomics technologies can be leveraged to power the next generation of cognitive testing implementations, and how they can support the needs of your organisation.
Why do we do Test Automation Internal Mentorships for our QA Engineers?
Currently, in the testing industry, there's an aspiration to learn more about test automation. Drivers may be different: gaining new skills, easing work tasks, etc. But often, what stops you in this learning is the absence of guidance and assistance. Of course, there are tons of materials online, but what should you do with all of them? How do you apply the theoretical knowledge?
At the same time, with over half a billion users around the world, Bumble relies a lot on autotests. We wanted to help our QA Engineers to learn Test Automation.
The best way to accomplish it, from our perspective, was to create Test Automation Internal Mentorships. They allowed our QA Engineers to learn directly from the experts and gain a better understanding of how things work.
With such practice, our QA engineers became more independent and now handle more complex tasks. What’s more, all the learning is shared among the entire department.
Apart from why we did it, I will also cover:
- why we invested in autotests;
- why we started improving QAs' test automation skills;
- why you may need something different from that internal mentorship;
- other ways of learning test automation for you and your team.
Join me in this talk and I’ll explain how you can organise your own Internal Mentorships.
Orchestrating your Testing Process - coordinating your manual and automated testing efforts
Due to many historical reasons, most testing and even development organizations approach their manual and automated testing efforts independently. What's more, when you look closer at these teams, you notice that even within their automation efforts they are using a number of different testing frameworks, running independently and without much thought about coordination, coverage overlaps, or functional dependencies.

This approach needs to change. Teams are releasing products faster than ever, and this means that we need to make every testing effort count: everything from the Unit and Integration Tests run by our development teams, to the Functional and Non-Functional automated tests executed by the testing teams, to every manual testing effort encompassing all the Exploratory and Scripted tests run by every member of our teams.

By coordinating the planning, designing, execution, and reporting of our complete testing process we will be able to achieve better visibility and make more accurate decisions faster.

But the road to achieving this goal is not trivial.
During this session we will:
- Review the objectives of coordinating all your testing efforts
- Understand common issues and hurdles faced by teams embarking on these efforts
- Learn how to build coordinated efforts using a few recommended approaches
- Get ideas on how to get started with your team, as soon as possible.
Using Test Designs in Test Automation
Many Test Automation efforts these days use Domain Specific Languages (DSLs) such as Gherkin to describe the test cases. Although skilled testers are involved in this process, the creation of these Gherkin files is often done out of the blue or based on small snippets of information such as user stories. While the approach enables understanding of the test cases by all stakeholders, it still results in a quite narrow view of the system or function to be tested.
Going back to the basics of our craftsmanship, however, we could easily elaborate the use of Gherkin and BDD into actively applying it as specification by example, just as it was meant to be. The Test Design Techniques of old will guide us in this specification phase, resulting in test cases that cover the system under test both broadly and deeply, up to the level of the applied technique. By combining multiple techniques, it will also comply with approaches such as risk-based testing whenever required.
In this presentation I will explain the design-specify-test approach, using Test Design Techniques to write Gherkin files, and illustrate it with several examples of different techniques.
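As a hedged sketch of the design-specify-test idea (illustrative only, not the speaker's material), a classic test design technique such as boundary-value analysis can mechanically drive Gherkin scenario generation:

```python
def boundary_values(low, high):
    """Boundary-value analysis: values at and just around each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def to_gherkin(field, value, low, high):
    """Emit one Gherkin scenario per derived test value.

    The field name and wording are hypothetical; a real suite would
    match its own step definitions.
    """
    outcome = "accepted" if low <= value <= high else "rejected"
    return (f"Scenario: {field} = {value}\n"
            f"  When the user enters {value} for {field}\n"
            f"  Then the input is {outcome}")

# Example: an age field that must accept 18..65 inclusive yields
# six scenarios covering both sides of each boundary.
for v in boundary_values(18, 65):
    print(to_gherkin("age", v, 18, 65))
```

The point is the direction of flow: the technique determines which scenarios exist, and Gherkin is merely the notation the stakeholders read.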
Advancing Mobile App Dev and Test with Next Gen Technologies
Mobile has come a long way to reach its current state as the main digital channel for all business activities. With that, it is now starting to mark its next wave through AppClips, Android APKs, Progressive Web Apps (PWAs), and Flutter apps. How can teams make the leap towards these new techniques that are about to transform the current phase? In this session, Eran Kinsbruner will give a deep dive into the main emerging mobile technologies, how to get ready to adopt them, how they will impact software delivery cycles, and much more.
Effective Test Estimation
We have executed many projects, large and small. Sometimes we miss our testing deadlines because there is no defined criterion used to build our execution test plan. To help avoid missing our deadlines, we have prepared these Test Estimation guidelines. In this presentation I present the various test estimation techniques that will help us properly execute testing projects. This is a presentation submitted in the Test Management stream.
Why Aren't We Using Selenium?
As managers, we've all seen the same marketing pitch: a new, easy-to-use platform that promises to help you shift left and accelerate customer value. It sounds appealing, as our teams must rapidly address the market demands for new features, faster service, and more integrations while maintaining quality. Too often, as we scale, quality begins to suffer because teams are dependent on scarce QA resources. As managers, we often ask ourselves how we can source dedicated automation experts to help our teams, whether in Selenium, JavaScript, or Java, as I experienced as the leader of a small team at MMIT.

I sought to change the paradigm and adopt a new platform to scale automation while onboarding a new QA team. In this talk, I will share how my team was able to successfully scale from 0 to 70% automation of regression testing within three months of adopting Testim, and how, with GUI-driven test creation and planning, any of my teammates who could use a web browser could become an automation tester. For my more skilled power users of JavaScript, TypeScript, and other languages, I will share how they were able to test faster within codeless automation solutions with backend coding access. As a manager, if you have been asked by your team, "Why aren't we using Selenium?", then this is the talk for you.
Page Object Model Pitfalls
Selenium has been around for over 15 years, and by now organizations have realized that Selenium tests need to be treated the same as any other functional code. This means not just keeping your tests in source control, but also designing them to be maintainable and robust. A common design pattern known as the Page Object Model (POM) has emerged, which greatly assists with the organization and maintenance of tests. But there are scalability, speed, and robustness issues with this pattern. This has caused organizations to move away from Selenium toward other tooling; however, most organizations encounter the same problems, because they are using the same problematic design patterns. Max will outline these issues, how to avoid them, and better patterns to use. He'll discuss how to transform your tests to be more effective, using patterns like Arrange Act Assert, and not relying solely on Selenium to exercise the system.
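As a minimal illustration of the Arrange-Act-Assert pattern mentioned above (an assumption-laden sketch, not the speaker's material), the page object exposes intent-level actions while the test keeps its three phases distinct; a dict-backed fake stands in for a real Selenium WebDriver to keep the example self-contained:

```python
class LoginPage:
    """Page object: exposes intent-level actions, hides locators.

    A real implementation would wrap a WebDriver and element locators;
    the dict-backed fake here keeps the sketch runnable and fast.
    """
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver["user"] = user
        self.driver["password"] = password
        # Toy rule standing in for the application's auth logic.
        self.driver["logged_in"] = password == "s3cret"
        return self

def test_login_succeeds_with_valid_credentials():
    driver = {}                         # Arrange: fresh fake driver
    page = LoginPage(driver)
    page.log_in("alice", "s3cret")      # Act: one intent-level action
    assert driver["logged_in"]          # Assert: one observable outcome

test_login_succeeds_with_valid_credentials()
```

Keeping each phase to a single step is what makes such tests readable and, crucially, maintainable when locators change.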
Getting Started with AI for Software
How do you train an AI bot to do some of the mundane work in your job? Integrating AI into your daily work can be intimidating, but in reality it is pretty easy; it just takes some understanding of where to start. Learn how to directly apply AI to real-world problems without having a Ph.D. in computer science. Jennifer will share a wide survey of all the different ways AI is applied to software today, and provide a basic understanding of AI/ML with a no-coding-skills-required approach. Whether you are just AI-curious or want to reap the benefits of AI-based approaches on your product today, this session is a must to understand where software, and your career, are headed in an AI-first world. Jennifer will also reserve time for Q&A to discuss applying AI-based testing approaches to your individual challenges. This session will get you ready for the AI-based future! Leave this session knowing how to start using AI, where AI can be applied, the benefits of an AI-first approach, and different tool options for AI in a tool-agnostic approach.
Takeaways:
• Learn how to start using AI and where AI can be applied to software
• Benefits of an AI-first approach
• Learn different tool options for AI in a tool-agnostic approach
Artificial Intelligence vs. Human Stupidity - An Ethical problem
AI is becoming more and more a part of our life, whether we like it or not, whether we know it or not.
AI-based systems are already scrutinizing our data to profile us and offer the best or most suitable products and services to us... in theory at least.
Feeding on big data, commercial AI solutions have become a helper in determining the risk we take or the risk we might pose to others; they influence our insurance fees, court decisions, driving, medical checkups, job interviews, employee performance, and school or exam results...
But what if they fail? What role must we play, as IT-ers and QA-ers, from an ethical perspective?
How we Think (when testing)
This talk proposes a model of the (testing) thought processes that every developer and tester uses. In a sentence, what we do is this: "we explore sources of knowledge to build test models that inform our testing". The model identifies two modes of thinking – exploration and testing – and we use judgement to decide when to flip from one to the other.
Separating out these ten thinking activities clarifies what we do when we test. It helps us to understand the challenges of testing and how we choose the methods, approaches and tools we need to test. Understanding how we think helps us to identify how ML/AI can help us as well as identifying the capabilities and skills we need to acquire, to practice and excel in.
Automation Test code as a First class citizen
Over the past year, Cancer Research UK have been migrating from a Java Selenium automation suite to JavaScript Cypress tests (I know, a whole year; we did have a pandemic to work around). I will share our story of aligning the programming language of the automated tests with the developers.

We have overcome many barriers and challenges to making End to End tests a first class citizen inside the continuous integration pipeline.
Some of which I will present in this talk:
- Introducing the term "Acceptance Tests" instead of End to End Tests
- Page Object Model vs App Actions
- BDD vs Vanilla JS
- Horizontal End to End Tests vs Vertical End to End Tests

The Automation Tester's role has changed from automating the manual test cases: with tools like Cypress, automation has now become a team activity shared with developers.

The outcomes we have seen within CRUK are:
- Better team collaboration and knowledge sharing
- More stable tests and test infrastructure
- More conversations about tests and testability

I hope our journey will align with many other people's current, past, or future experiences, potentially encouraging some people to take the leap and make automated testing a team responsibility.
MobTesting - is that a thing?
When we say everyone has the responsibility for the quality, does anyone then really have it?
At ebay DK we have been transitioning throughout the last couple of years.
We do not have embedded QAs in all our squads, and they are not the final point of contact before sending work to our users. Our QAs work as coaches, and their job is to enable the teams to make sure the quality of what they deliver is good enough to send to our users.
One of the tools they use is MobTesting.
I will talk about how we introduced it to the teams, when we use it as a tool, how we ensure quality in general and what we see the benefits are from working this way.
"Self-Healing Tests": The holy grail of test automation ... or just a lot of noise about nothing?
One of the most important and complex tasks in test automation is the maintenance of test scripts. No other test artifact takes up as much time and effort in maintenance as the test cases that have been turned into code.

The question now arises whether there is an approach in which artificial intelligence paired with machine learning can take care of the maintenance of the test scripts. The developers of the test scripts would have more time to take care of the automation of new tests and thus increase the test coverage through test automation. The answer to the question is: "Yes, there is a solution: Self-Healing Tests".

In a nutshell, self-healing is the automation of test automation. Test tools with self-healing properties recognize changes in the graphical user interface and automatically adapt the automated test cases so that the tests remain functional. Commercial tools like TestIM, Mabl & Tricentis Neo-Engine are very promising and jumped on the bandwagon in good time. But there are also promising open-source alternatives such as Healenium.
The talk explains the basics of self-healing tests and shows, using an example, an implementation with the open-source library Healenium.
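The self-healing mechanism can be illustrated with a toy sketch (this is not Healenium's real API, only the concept): when the primary locator no longer matches, previously observed alternatives are tried, and the locator that "healed" the lookup is reported for human review.

```python
def find(dom, locator, history):
    """Look up an element; fall back to historical locators on failure.

    dom:     toy page model mapping locator -> element
    history: previously observed alternative locators per original locator
    Returns (element, locator_actually_used).
    """
    if locator in dom:
        return dom[locator], locator
    for alt in history.get(locator, []):
        if alt in dom:
            return dom[alt], alt     # healed: test keeps running
    raise KeyError(locator)          # nothing matched; genuine failure

# The 'submit-btn' id was renamed between releases; the history lets
# the test survive the change while flagging the moved locator.
dom = {"submit-button": "<button>"}
history = {"submit-btn": ["submit-button"]}
element, used = find(dom, "submit-btn", history)
assert used == "submit-button"
```

Real tools add the learning half (recording which locators resolved to which elements on past runs) and ML-based similarity scoring; the fallback-and-report loop above is the core idea.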
Quick 'n Dirty Test Management
Sometimes there is no need for a cumbersome test management process, or you are in a rush to create a management model on the fly. We have created a test management model that relies on the skill and self-guidance of the testers, session-based testing, and clear, concise test reporting. The model is applicable to most of the planning tools available (for example, MS Teams has a Planner) and is easy to set up. The first step is to do test design and create the session description. Then, after testing those sessions, the reporting is done using a few simple methods. The model relies on communication rather than rigid process. It is meant to be as lean and agile as possible.
You Need User Acceptance Testing (UAT) as well as User Story Acceptance Tests
Agile project participants report great difficulty getting user story acceptance tests right. Those with broader perspectives further recognize Agile’s sprint focus on small pieces of working code tends to fall down when those pieces must work together. This eye-opening session gives guidance on writing more effective user story acceptance tests and explains why and how to do the true User Acceptance Testing (UAT) of larger integrations that Agile projects also need but seldom have.
• The difference between User Story Acceptance Tests and User Acceptance Testing (UAT)
• Keys to writing more effective User Story Acceptance Tests
• Appropriate use and creation of needed User Acceptance Tests (UAT)
Evolving your career: Questions a tester should ask at interview to find a great job
Choosing to work at a new company involves excitement, possibility, and risk. Working with lean and agile teams provides a great framework for testers, and it is useful to explore a company's lean and agile practices before accepting a job offer. My talk will make suggestions about how to reduce risk when accepting a new job offer by using the interview to find out from a prospective employer how they work, including their lean and agile practices.
Becoming a toolsmith - lessons learned from software testing and tool experimentation
Now more than ever, software testers need to adapt quickly and be flexible in tough times. This talk covers lessons learned starting from a junior tester role working on a large aging monolithic system to senior roles dealing with scalable technology platforms. I explain strategies for building up your hidden toolsmith abilities.

Session takeaways...
- Possible test tool development pitfalls you could encounter in your software testing career
- Useful approaches for learning automation and test tools and applying them to projects
- Examples and accessible tool suggestions to experiment on
How to Rapidly Introduce Tests When There Is No test coverage
I’ll be sharing methods and techniques my team and I implemented to drastically improve test coverage and reduce production errors.

Over the course of 5 years, many tech companies have reportedly increased their QA resources by approximately 30%. Why? Because, as consumers become pickier and the competition becomes more aggressive, the quality of the product can be the key to standing out from the crowd.

To deliver a high-quality product, companies need to take a holistic approach to identify the product’s quality baseline, detect the weak points, and ultimately improve them. One of the tactics under this quality strategy is to improve test coverage.

I’d like to talk about test coverage:
how to measure it;
how to improve it;
and how to leverage your test coverage results to prioritize your QA resources.
Implementing BDD: How One Team is Making it Work
Behavior Driven Development, or BDD, has been a buzzworthy term in the testing and development community for several years. At first glance the elements of BDD seem simple. Testing scenarios! Living documentation! Automation! Reports! That sounds great; why isn't everyone doing it? Upon a deeper dive, however, it's obvious that implementing BDD needs a lot of forethought and planning, and that teams must approach it for the right reasons. This talk will follow the evolution one team is currently experiencing in their shift to BDD. BDD was selected to help them modernize the work that the business analysts, manual testers, and automation testers were doing, and to support the larger organization's DevOps transformation. Why is BDD the right methodology for this, and what does the process look like? This talk will answer those questions and share the preparation, major milestones, successes, and failures the team has encountered along the way. Join me to find out what happens when a traditional organization completely turns their old processes upside down and sets out to conquer BDD.