Software Testing
10:20 — 10:50
"Everything you need to know about accessibility testing"
Lena Konstantynova
10:50 — 11:20
"Teaching Accessibility Testing"
Karen Amaro
11:20 — 11:50
"Building Continuous Security Ways of Working: Overcoming the challenge of security testing adoption through Lean Canvas Design"
Jorge Luis Castro Toribio
11:50 — 12:20
Q&A
Technology
12:25 — 12:55
"Introduction to Testing Video Playback at Scale"
Ian Goddard
12:55 — 13:25
"How to Achieve Parallel Test Execution with Robot Framework"
Ran Tzur
13:25 — 13:55
"Enhance Mobile User Experience Through Performance Testing"
Sofia Palamarchuk
13:55 — 14:25
Q&A
Break - 5 minutes
Manual vs Automation
14:30 — 15:00
"When and why use automation"
Alexandre Bauduin
15:00 — 15:30
"Manual Testing is Not Dead: Just the Definition"
Erika Chestnut
15:30 — 16:00
"How to build automation mindset from the ground-up"
Joseph Smalls-Mantey
16:00 — 16:30
Q&A
Break - 5 minutes
10:00 — 10:10
Intro
Entry-level track
Gives access to the Junior track only, with no recordings. Focuses on entry-level content around QA.
Data Science and IoT
Leadership & Management
12:55 — 13:25
"The transition in the QA role"
Jette Pedersen
13:25 — 13:55
"Are You the Best Leader You Can Be?"
Amy Jo Esser
13:55 — 14:25
"Leadership IQ in the age of AI/ML"
Jenn Bonine
14:25 — 14:55
"Tips for managing an evolving QA team"
Sergii Tolbatov
14:55 — 15:25
Q&A
Break - 5 minutes
Web & API
15:30 — 16:00
"5 Levels of API Test Automation"
Shekhar Ramphal
16:00 — 16:30
"How to test the test results: A case study of data analysis on test results"
Alper Mermer
16:30 — 17:00
"An Agile Approach to Web Application Security Testing"
Bhushan Gupta
17:00 — 17:30
Q&A
Break - 5 minutes
10:50 — 11:20
"Testing in the IoT - Challenges and solutions"
Hristo Gergov
11:20 — 11:50
"Testing a Data Science Model"
Laveena Ramchandani
11:50 — 12:20
"Reducing the Scope of Load Test Analysis using Machine Learning"
Julio de Lima
12:20 — 12:50
Q&A
"Tips for managing an evolving QA team"
14:25 — 14:55
Sergii Tolbatov
10:00 — 10:10
Intro
Full Access
Gives access to both Junior and Senior tracks; recordings are included. Focuses on deep tech content around QA.
Leadership IQ in the age of AI/ML
How do you train an AI bot to do some of the mundane work in your job? Integrating AI into your daily work can be intimidating, but in reality it is pretty easy; it just takes some understanding of where to start. Learn how to directly apply AI to real-world problems without having a Ph.D. in computer science. Jennifer will share a wide survey of all the different ways AI is applied to software today. Get a basic understanding of AI/ML with a no-coding-skills-required approach. Whether you are just AI-curious or want to reap the benefits of AI-based approaches on your product today, this session is a must to get an understanding of where software, and your career, are headed in an AI-first world. Jennifer will also reserve time for Q&A to discuss applying AI-based testing approaches to your individual challenges. This session will get you ready for the AI-based future! Leave this session knowing how to start using AI, where AI can be applied, the benefits of an AI-first approach, and different tool options for AI in a tool-agnostic approach.
Takeaways:
• Learn how to start using AI and where AI can be applied to software
• Understand the benefits of an AI-first approach
• Learn different tool options for AI in a tool-agnostic approach
How to build automation mindset from the ground-up
A company with a healthy automation mindset uses testing effectively to speed up its development process and save the company money. When building an automated testing culture at a company, an important goal is for engineers to buy in. Testing is most useful and most worth the investment when engineers are invested in it, meaning they rely on testing in their development process, they take the initiative to expand the testing codebase, and they heed test results.

But engineers are busy, and when they have been working without tests, it can be hard to get them on board with a testing culture. This talk will explore how to build a testing system that can earn the trust of developers, and how to build an automated testing environment from the ground up:
- What are the things one needs to consider when setting up an automation environment (e.g., budget, staffing, commitment from management, platform, tooling, test breadth, coverage)?
- What is the process of building up an automated test environment like, how long does it take, and what pitfalls should one look out for?
5 Levels of API Test Automation
In my context we run a microservice architecture with a number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross-dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting the response code or body for valid data / type definitions. This proved to be more and more challenging, as the push for CI and having common data sources meant dependencies would go up and down per deployment, which meant flaky tests.
I will demonstrate how we leveraged newer technologies and split our API testing into five levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit integration tests):

1. Mocked black box testing - where you start up an API (Docker image) identical in version to the one that would go to PROD, but mock out all its surrounding dependencies. This gives you freedom for any known data permutations, and one can simulate network or failure states of those dependencies.
2. Temp namespaced API in your CI environment - here you start up your API as it would in a normal integrated environment, but in a temp space that can be completely destroyed if tests fail: it never gets to the deploy stage, and there is no need to roll back if errors occur. Use Kubernetes and CI config to orchestrate these tests. These tests focus on checking the 80-20 functionality and confirming that the API will meet all the acceptance criteria.
3. Post deployment tests - usually called smoke testing, to verify that an API is up and critical functionality is working in a fully integrated environment.
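As an illustration of level 3, here is a minimal sketch of what a post-deployment smoke test can look like in Python; the base URL, endpoints and fields are hypothetical placeholders, not taken from the talk:

```python
# Hypothetical post-deployment smoke test (level 3): verify the API is up
# and that one critical endpoint returns a well-formed payload.
import requests

BASE_URL = "https://api.example.com"  # assumed deployment URL


def test_health_endpoint_is_up():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_critical_endpoint_returns_valid_payload():
    response = requests.get(f"{BASE_URL}/orders/123", timeout=5)
    assert response.status_code == 200
    body = response.json()
    # Smoke tests assert shape and types rather than exact values.
    assert isinstance(body["id"], int)
    assert isinstance(body["status"], str)
```

Run with pytest against the freshly deployed environment; the suite should stay small so it stays fast.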

We should be happy by now, right? Fairly happy that the API does what it says on the box… but…

4. Environment stability tests - tests that run every few minutes in an integrated environment and make sure all services are highly available given all the deployments that have occurred. Use GitLab to control the scheduling.
5. Data explorer tests - these are tests that run periodically but use some randomisation to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding the edge cases that are usually missed: often low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data.
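A sketch of the level 5 "data explorer" idea, using plain random sampling instead of the custom database extractor mentioned above; the endpoint and input values are illustrative assumptions:

```python
# Hypothetical data-explorer test (level 5): hit the API with randomised
# inputs, including known troublemakers, and flag unhandled server errors.
import random
import requests

BASE_URL = "https://api.example.com"  # assumed environment URL


def explore_endpoint(runs: int = 50) -> list:
    failures = []
    for _ in range(runs):
        # Mix ordinary values with boundary and unicode edge cases.
        candidate = random.choice([
            random.randint(-2**31, 2**31 - 1),
            0,
            "",
            "'; DROP TABLE users;--",
            "名前",
        ])
        resp = requests.get(f"{BASE_URL}/items",
                            params={"q": candidate}, timeout=5)
        # 2xx/4xx means the API handled the input; 5xx deserves a look.
        if resp.status_code >= 500:
            failures.append((candidate, resp.status_code))
    return failures


if __name__ == "__main__":
    for candidate, status in explore_endpoint():
        print(f"Server error {status} for input {candidate!r}")
```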

I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I will also touch on the tooling we use to achieve this and the pros/cons of using this approach.
Teaching Accessibility Testing
In 2020, the Software Testing Center (CES) in Uruguay gave its first Web Accessibility Testing course, online. This course teaches students topics that align with the criteria and principles of WCAG 2.1. It also proposes a large number of exercises on real sites, to practice the acquired knowledge.

The goal of the course is to familiarize the student with automatic testing, heuristic evaluations, filtering techniques and testing with users. This is very much in agreement with the WCAG-EM methodology promoted by the W3C. Web accessibility is not yet mandated by law in our country, nor in the rest of Latin America, so companies have little or no incentive to build accessible software.

The course has students place themselves in the position of users of the applications and see that small changes (such as ensuring that images have textual descriptions, or that the contrast ratio is acceptable) produce improvements in accessibility. It has awakened in our students the proactivity necessary to ensure that accessibility is increasingly prioritized.
Knowledge, awareness and proactivity are the engine and energy that allow our students to take the first steps on this long journey towards a more accessible digital world.
During this presentation we will share our experiences delivering the course - the benefits we achieved and the improvements we realized.
Everything you need to know about accessibility testing
How do you make a product inclusive? How do you ensure that users will have equal access to your creation? You need accessibility testing! In this session, Lena will present an overview of accessibility testing and demonstrate why everybody needs to be aware of it. We will look at some useful tools for easy manual testing, and also show how to automate these kinds of tests.
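One common way to automate such checks (not necessarily the tooling Lena will demonstrate) is to run the axe-core engine through Selenium; a minimal sketch using the axe-selenium-python package:

```python
# Minimal automated accessibility audit with axe-core driven via Selenium.
# The URL is a placeholder; assumes Chrome and axe-selenium-python installed.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
driver.get("https://example.com")  # page under test

axe = Axe(driver)
axe.inject()         # inject the axe-core JavaScript into the page
results = axe.run()  # run the audit; returns violations, passes, etc.
driver.quit()

# Fail loudly if any accessibility violations were reported.
violations = results["violations"]
assert len(violations) == 0, axe.report(violations)
```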
Testing in the IoT - Challenges and solutions
The emergence of the Internet of Things domain has confronted Quality Assurance engineers with various challenges in the areas of test execution, test strategy definition, test design, automated testing, etc. Being more than just a newly recognized technology, IoT is the interworking of software and hardware in a new, lightweight and extremely distributed way. Not only computers, tablets and watches, but also smoke detectors, hoods, fridges, cameras and lighting equipment now belong to the smart devices category. All those devices have their vendor-specific OS, authentication and authorization mechanisms, communication protocols, user interfaces and many others. The main purpose of every IoT solution is to provide smooth integration with all those gadgets in terms of functionality, security, user experience and performance. The real challenge from a testing perspective in such projects is finding and implementing a scalable strategy for testing and verification of the functionalities related to the smart devices.

When planning and executing a test strategy for an IoT project, we need to pay special attention to more and more components, among which: the hardware, the high number of integration points, the physical environment, the cases of unconventional use and others. The “Quality of Things” presentation puts its main focus on the most common testing types and their application in IoT, in order to outline the unexpected challenges they can bring in the context of IoT. I will share my professional experience as a Test Manager in a large-scale Smarthome project and provoke discussion on the possible approaches for resolving issues in the field.

Key takeaways:
1. Insights into the planning of an IoT project test strategy
2. Practical tips and tricks for test execution on IoT devices
3. How to overcome the main technical challenges when analysing and troubleshooting IoT devices
Reducing the Scope of Load Test Analysis using Machine Learning
Load testing execution produces a huge amount of data. Investigation and analysis are time-consuming, and the numbers tend to hide important information about issues and trends. Using machine learning is a good way to solve data issues by giving meaningful insights about what happened during test execution. Julio Cesar de Lima Costa will show you how to use K-means clustering, a machine learning algorithm, to reduce almost 300,000 records to fewer than 1,000 and still get good insights into load testing results. He will explain K-means clustering, detail what use cases and applications this method can be used in, and give the steps to help you reproduce a K-means clustering experiment in your own projects. You'll learn how to use this machine learning algorithm to reduce the scope of your load testing and get meaningful analysis from your data faster.
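To make the idea concrete, here is a small sketch, not Julio's actual pipeline, of clustering synthetic load-test samples with scikit-learn so that a handful of centroids summarise hundreds of thousands of records; the metrics and cluster count are assumptions:

```python
# Sketch: summarise ~300,000 load-test records with K-means centroids.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic samples of (response_time_ms, throughput_rps, error_rate).
rng = np.random.default_rng(42)
records = rng.normal(loc=[250.0, 120.0, 0.02],
                     scale=[80.0, 30.0, 0.01],
                     size=(300_000, 3))

# Scale features so no single metric dominates the distance calculation.
scaled = StandardScaler().fit_transform(records)

# Replace 300k rows with 8 representative behaviour clusters.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=42).fit(scaled)
print("Records per cluster:", np.bincount(kmeans.labels_))
print("Cluster centres (scaled):")
print(kmeans.cluster_centers_)
```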
Manual Testing is Not Dead: Just the Definition
The rise of test automation has reduced the manual tester's footprint significantly. We have more and more AI and ML tools that are taking on things like visual regression and providing smart test automation. Companies are hiring SDETs at an increasing rate while reducing their manual test roles, with the hope that a high amount of test automation will result in a higher quality product. So where does that leave manual testing? Join Erika as she showcases the value of manual testing as a non-negotiable partner alongside a robust test automation strategy. Reconnect with thought-based testing to infuse quality earlier in the delivery life cycle. Learn how manual testers can bring quality to the forefront of the conversation and help drive not only automation delivery but the overall quality of what is automated. Discover how to change the narrative and redefine manual testing in a way that empowers testers and helps to build quality teams that can deliver a well-rounded test program.
Enhance Mobile User Experience Through Performance Testing
Consumers now expect digital experiences to exceed face-to-face experiences. Recent studies show that 80% of people have deleted a mobile app due to problems with its performance. I've learned that this problem can plague teams of all sizes: the mobile team lead at a major ride-sharing company shared with me that their business was losing $200 million per year due to app crashes. So, how can you avoid this and similar mobile performance problems? In this talk, you will learn how mobile performance impacts the user experience, what you need to look for when evaluating the performance of an app, and how to start testing mobile performance earlier in the dev cycle for better results. I will include lessons learned from real-world examples gathered over many years of working with leading brands to improve their mobile app performance.
Testing a Data Science Model
I have heard from many testers around the world that they know of data science teams, but of no testers testing the models. How do we have enough confidence that what is produced is good enough? A model is a statistical black box; how do we test it so that we understand its behaviours and test it properly? My main aim is to help inspire testers to explore data science models.
I’d like to share how I explored the world of data science when testing a model, and how we can apply that if we find ourselves in this situation. It is an emerging and exciting area for testers.
I’d like to invite you to my talk, where we will go through my journey of discovering data science model testing. You will find the takeaways useful not just for testing a data science model but for day-to-day testing too.
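As a flavour of what such testing can look like (an illustrative sketch, not necessarily Laveena's approach), one basic check is whether the model beats a naive baseline on held-out data:

```python
# Illustrative model test: the model must outperform a majority-class
# baseline on held-out data, or something is wrong with it or the data.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)


def test_model_beats_naive_baseline():
    assert model.score(X_test, y_test) > baseline.score(X_test, y_test)
```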
Building Continuous Security Ways of Working: Overcoming the challenge of security testing adoption through Lean Canvas Design
This talk aims to share practices and real experiences building security testing as a new way of working for development teams facing a DevOps transformation, all of this through Agile, Lean and Lean Canvas practices.
Takeaways:
- Lean Canvas techniques, applied to building new ways of working such as continuous security, allow you to define a powerful strategy and roadmap.
- Gamification applied to security testing is a great way to build new ways of working in your teams.
- Lean allows you to be more efficient in terms of your security testing practices in your DevOps pipeline.
Introduction to Testing Video Playback at Scale
Video playback is a complex animal that is often overlooked, or reduced in QA to simple cursory visibility checks. This talk will give a representative overview of the architecture of live and on-demand OTT (over-the-top) video streaming (as well as a brief look at broadcast for reference) and the testing challenges that come from it, along with some tips on how best to approach the planning of the testing effort to maximise the efficacy of your playback testing.

Pitched at a beginner to intermediate level, participants will leave with a knowledge of some of the moving parts that make up OTT video streaming and some of the approaches that can be utilised whilst testing playback.
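As a small taste of what automated playback checks can look like (an assumed example, not necessarily from the talk), here is a sketch that validates an HLS master playlist:

```python
# Sketch: fetch an HLS master playlist and confirm it advertises at least
# one variant stream. The manifest URL is a hypothetical placeholder.
import requests

MANIFEST_URL = "https://example.com/stream/master.m3u8"


def test_master_playlist_has_variants():
    resp = requests.get(MANIFEST_URL, timeout=10)
    assert resp.status_code == 200
    lines = resp.text.splitlines()
    # Every HLS playlist must begin with the #EXTM3U tag.
    assert lines[0].strip() == "#EXTM3U"
    variants = [l for l in lines if l.startswith("#EXT-X-STREAM-INF")]
    assert variants, "master playlist advertises no variant streams"
```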
When and why use automation
Some projects/products are so complex and large that testing can require weeks. In my world, test requirements are spread across 647 pages issued by the aviation authorities, translating into more or less a total of 3,000 pages of test results and various documentation. Without automation, testing time would be longer and no room would be left for other kinds of testing: exploratory testing, for example. In this talk you will see examples where automation was a solution, and how to decide what to automate and what not.
How to test the test results: A case study of data analysis on test results
In this talk we’re going to explore some experiences and good practices we’ve developed in exploring the testing environments. We’re going to try to answer questions like the following (a small sketch of this kind of analysis appears after the list):
How do you analyze test resource usage in a highly dynamic, hardware-based environment?
What metrics do we look for in our analysis?
What types of data are we looking for?
How to do exploratory testing on the data and then automate the tests?
How to report the findings and create suggestions for improvements?
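A small sketch of the kind of analysis meant here, using pandas; the file and column names are hypothetical:

```python
# Sketch: mine historical test results for slow and flaky tests.
import pandas as pd

# Assumed CSV export with columns: test_name, duration_s, passed (0/1).
df = pd.read_csv("test_results.csv")

summary = (
    df.groupby("test_name")
      .agg(runs=("passed", "size"),
           pass_rate=("passed", "mean"),
           avg_duration_s=("duration_s", "mean"))
      .sort_values("pass_rate")
)

# Tests with a pass rate strictly between 0 and 1 fail intermittently;
# these are the usual flakiness suspects and a good place to start.
flaky = summary[(summary["pass_rate"] > 0) & (summary["pass_rate"] < 1)]
print(flaky.head(10))
```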
The transition in the QA role
How do you navigate the change of the QA role from being a traditional tester to being a Quality coach, and what is the role of a Quality coach in practice?
I would like to take you through the transition that we have gone through over the last couple of years, telling you about the benefits of the transition but also about the ups and downs along the road.
I will tell you about what my role consists of today, how I work with the squads, and which tools I think are good to master as a Quality coach.
Are You the Best Leader You Can Be?
I will share three primary methods (Win the Morning, Embrace Your Fears, and Continuous Learning and Growing) for being the best leader you can be, and explain why it is critical to work on being a better leader every day.

We are all leaders. At a minimum, we must lead ourselves every single day, and many of us have test and quality teams that we lead and serve. Have you ever stopped to analyze yourself to determine if you are the best leader you can be? I have had the joy of learning and continuing to learn from many great test leaders, including the late Jerry Weinberg, and other high-performance leaders outside of the testing arena. Join me as I share ways to be the best leaders we can be by employing approaches from these leaders, including tactical steps on how attendees can “Win the Morning, Win the Day” by incorporating rituals and habits that will make a difference; strategies for how to “Embrace and Face Your Fears”; and how to create a daily “Continuous Learning and Growth Plan” and why it’s a must, as well as tips from other favorite leadership books, blogs, and podcasts.
Tips for managing an evolving QA team
Even though there is a blurred line between managing and evolving teams, and these two terms may seem very similar, it is not rare for Team Leads to forget or omit the second part. And there is a good reason for that: evolving your team is a big investment that requires a lot of personal commitment, dedication, desire and vision.

Tips to cover:
Tip 1 – Don’t treat ‘Managing’ and ‘Evolving’ your team as separate parts
Tip 2 – Know your team
Tip 3 – Nothing great has ever been achieved without an investment
Tip 4 – Respect your Team
Tip 5 – Be open
An Agile Approach to Web Application Security Testing
- Discuss popular methods of testing web application security, with their strengths and weaknesses
- Explain why these methods should be used in a well-defined sequential manner
- Map these methods to the different stages of an agile software development life cycle
- Conclude with how to make security testing effective and efficient
How to Achieve Parallel Test Execution with Robot Framework
Parallel test execution is crucial when we want to scale our testing efforts and reach faster time to market. Testing different web & mobile apps in parallel, rather than one by one, makes the difference between lagging behind your competitors and accelerating your product release, especially in the current Agile age. You'll not only be saving a bunch of time, but you'll also be decreasing your necessary resources, eventually resulting in cost savings. A lot of them!

Let's see how to eliminate time-consuming testing efforts and transform our testing journey into one that is not only fast, but also stable and based on the industry-standard Selenium & Robot Framework.

We will learn how to execute, deploy & analyze fast by leveraging parallel testing capabilities for Robot Framework, using the first free Selenium-powered solution for teams – TestProject, and running on Docker containers.
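For reference, a widely used open-source way to parallelise Robot Framework is the pabot runner; a minimal launch sketch, with the suite path and process count as placeholders (the talk itself centres on TestProject and Docker):

```python
# Minimal sketch: run Robot Framework suites in parallel with pabot
# (pip install robotframework-pabot). Paths and counts are placeholders.
import subprocess

subprocess.run(
    [
        "pabot",
        "--processes", "4",        # number of parallel executors
        "--outputdir", "results",  # merged logs and reports land here
        "tests/",                  # directory containing .robot suites
    ],
    check=True,
)
```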