QA is developing at supersonic speed, and it is more important than ever to stay up to date with QA knowledge and to find the right combination of methods and practices for every project.
Quality, Velocity and Cost are the vertices of the basic QA triangle.
To keep it in balance, QA teams and product managers should treat agility as a practical approach to building the testing process.
That's why we dug deep into QA topics to bring you the newest and most interesting cases to learn from our speakers.
QA gurus from Amazon, eBay, Booking.com and other innovative businesses are ready to share their experience of being agile to deliver the most effective testing solutions.
Software Test Engineer
Software QA Manager
Software Development Engineer in Test
Director of QA
Quality Analyst Consultant
Technical Test Lead
Technical Test Engineer
QA Automation Lead
Head of QA
Manual QA Engineer
Software Test Engineer
Director of QA
2021, April 8, Online
President and Principal Consultant at AmiBug.Com, Inc.
Create workflows to schedule testing tasks dynamically and adapt the testing focus as priorities change. Decide on purpose what not to test, not just because the clock ran out!
Just-In-Time Testing (JIT) approaches are successfully applied to many types of software projects—commercial off-the-shelf applications, agile and iterative development environments, mission-critical business systems, and just about any application type. Real examples demonstrate how JIT testing either replaces or complements more traditional approaches. Examples are drawn from insurance, banking, telecommunications, medical, and other industries. The course is packed with interactive exercises in which students work together in small groups to apply JIT testing concepts.
Just In Time Testing received the EUROSTAR BEST TUTORIAL award in 2010.
Testing Conversational AI - Strategy to Automation
Last year was dominated by smart devices and voice-based home assistants. Unlike other applications, these are interacted with through conversational interfaces. They are built using advanced algorithms, ranging from pattern and expression matching engines to natural language processing and AI/machine learning techniques. These systems constantly learn on their own, improving their interactions with the user and bringing the challenge of non-deterministic output into the testing world. For such interfaces, natural language is the input, and we humans really love having alternatives: our synonyms, and our expressions using emojis, GIFs and pictures. Testing in this context moves to clouds of probabilities.
In this session I will cover the strategy for testing such interfaces and testing the NLP models, and share experience on how to automate these tests and add them to CI/CD build pipelines. Key learnings:
* The how, what and why of a conversational interface
* How can I build my testing approach for such an interface?
* What from my current toolset can I use for this new context?
* How do I automate these tests and add them to my CI/CD pipeline for instant feedback?
* How do I measure the quality?
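The synonym problem described above can be pinned down in an ordinary unit test: many phrasings of one request should collapse to a single intent. The sketch below uses a toy keyword classifier as a hypothetical stand-in for a real NLP model; the function and phrase names are illustrative only.

```python
# Toy intent classifier standing in for a real NLP model (hypothetical).
def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    if any(w in text for w in ("weather", "rain", "sunny", "forecast")):
        return "get_weather"
    if any(w in text for w in ("timer", "alarm", "remind")):
        return "set_timer"
    return "fallback"

# Many natural phrasings of the same request.
WEATHER_VARIANTS = [
    "What's the weather like?",
    "Will it rain today?",
    "Is it sunny outside?",
    "Give me the forecast",
]

def test_intent_is_stable_across_phrasings():
    # All variants must resolve to exactly one intent.
    intents = {classify_intent(u) for u in WEATHER_VARIANTS}
    assert intents == {"get_weather"}, intents

test_intent_is_stable_across_phrasings()
```

With a real, probabilistic model the equality assertion would typically be relaxed to a threshold on the top intent's confidence across the variant set.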
How do you train an AI bot to do some of the mundane work in your job? Integrating AI into your daily work can be intimidating, but in reality it is pretty easy; it just takes some understanding of where to start. Learn how to directly apply AI to real-world problems without having a Ph.D. in computer science. Jennifer will share a wide survey of all the different ways AI is applied to software today. Get a basic understanding of AI/ML with a no-coding-skills-required approach. Whether you are just AI-curious or want to reap the benefits of AI-based approaches on your product today, this session is a must to understand where software, and your career, are headed in an AI-first world. Jennifer will also reserve time for Q&A to discuss applying AI-based testing approaches to your individual challenges. This session will get you ready for the AI-based future! Leave this session knowing how to start using AI, where AI can be applied, the benefits of an AI-first approach, and different tool options in a tool-agnostic approach. Takeaways: • Learn how to start using AI and where AI can be applied to software • Benefits of an AI-first approach • Learn different tool options for AI in a tool-agnostic approach
Test cases are the main factor in testing activity; however, there is no perfect algorithm that can help us define a test suite covering all scenarios. In the modern world, software under test becomes more and more complicated: manual test case design is not enough for some products, such as machine learning and big data services, and for some testing requirements, such as security testing and performance testing. This talk will cover three different test case generation techniques: search-based testing, combinatorial testing and metamorphic testing. For each technique, case studies will be provided with different types of products: AI models, search engines and backend services.
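Of the three techniques, metamorphic testing is the easiest to sketch: instead of an exact oracle, we assert a relation that must hold between outputs. A minimal example (not from the talk), using the identity sin(x) = sin(π − x):

```python
import math
import random

def test_sine_metamorphic():
    # Metamorphic relation: sin(x) == sin(pi - x).
    # No oracle for sin itself is needed, only the relation.
    random.seed(0)
    for _ in range(100):
        x = random.uniform(-10, 10)
        assert math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-9)

test_sine_metamorphic()
```

The same pattern scales to ML systems and search engines, where relations such as "narrowing a query must not grow the result set" replace the trigonometric identity.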
In my context we run a microservice architecture with a number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting on the response code or body for valid data / type definitions. This proved more and more challenging as the push for CI and having common data sources meant dependencies would go up and down per deployment, which meant flaky tests. I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit integration tests):
- Mocked black box testing: start up an API (Docker image) identical in version to the one that would go to PROD, but mock out all its surrounding dependencies. This gives you freedom for any known data permutations, and you can simulate network or failure states of those dependencies.
- Temp namespaced API in your CI environment: start up your API as it would in a normal integrated environment, but in a temp space that can be completely destroyed if tests fail. It never gets to the deploy stage, and there is no need to roll back if errors occur; use Kubernetes and CI config to orchestrate these tests. The tests' focus is to check the 80-20 functionality and confirm that the API will meet all the acceptance criteria.
- Post deployment tests: usually called smoke testing, to verify that an API is up and critical functionality is working in a fully integrated environment.
We should be happy by now, right? Fairly happy that the API does what it says on the box... but...
- Environment stability tests: tests that run every few minutes in an integrated environment and make sure all services are highly available given all the deployments that have occurred. Use GitLab to control the scheduling.
- Data explorer tests: tests that run periodically but use some randomisation to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding those edge cases that are usually missed: often low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data.
I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I will also touch on the tooling we use to achieve this and the pros/cons of this approach.
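As a rough sketch of the first level (mocked black-box testing), the idea is to exercise the API's own logic while a mock simulates both happy and failure states of a downstream dependency. The handler and client names below are hypothetical, not from the talk:

```python
from unittest.mock import Mock

# Hypothetical handler under test: calls a downstream pricing service.
def get_quote(pricing_client, product_id):
    try:
        price = pricing_client.price(product_id)
    except ConnectionError:
        # A simulated dependency outage must degrade gracefully.
        return {"status": 503, "body": "pricing unavailable"}
    return {"status": 200, "body": {"product": product_id, "price": price}}

# Happy path: the mocked dependency returns a known value.
client = Mock()
client.price.return_value = 9.99
assert get_quote(client, "sku-1")["status"] == 200

# Failure state: the mock raises a network error, no real network needed.
client.price.side_effect = ConnectionError
assert get_quote(client, "sku-1")["status"] == 503
```

In the talk's setup the same principle applies at the container level: the real API image runs, while stub services stand in for its neighbours.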
Taking Quality to the Next Level: Metrics for Improvement
Are your testing and quality assurance activities adding significant value in the eyes of your stakeholders? Do you have difficulty convincing decision-makers that they need to invest more in improving quality? Selecting metrics that stakeholders understand will get your improvement project or program past the pilot phase and reduce the risk of having it stopped in its tracks. Anton Angelov will present a story of how his teams managed to adopt a more in-depth bug workflow, from noticing the bug to the triage process and metrics boards. There will be a real-world example of how to collect 15 essential QA metrics from Jira. To calculate them, we will use Azure Functions and visualize them through charts in Power BI. At the end of the presentation, you will have many ideas on how you can optimize your existing defect management through new statuses and practices, and monitor it through quality metrics, which can help you improve your software development and testing further.
Next generation Intelligent Automation unlocks the power of analytics and autonomics for continuous delivery. While automation solutions within current testing implementations help to address agility needs, such automation is typically driven by static rules using conventional scripting and orchestration techniques. Such techniques incur high maintenance overhead to stay updated relative to changing circumstances. Recently emerged cognitive engineering technologies (such as autonomics) introduce the possibility to drive adaptive automation within testing implementations. Such automation can self-heal and self-configure based on changing situations. In this session, the speaker will present how next generation data engineering and autonomics technologies can be leveraged to power the next generation of cognitive testing implementations, and how they can support the needs of your organisation.
Artificial Intelligence vs. Human Stupidity - An Ethical problem
AI is becoming more and more a part of our life, whether we like it or not, whether we know it or not. AI-based systems are already scrutinizing our data to profile us and offer the best or most suitable products and services to us... in theory at least. Feeding on big data, commercial AI solutions become helpers in determining the risk we take or the risk we might pose to others; they interfere with your insurance fees, court decisions, driving, medical checkups, job interviews, employee performance and school or exam results... But what if they fail? What role must we play, as IT-ers and QA-ers, from an ethical perspective?
Why do we do Test Automation Internal Mentorships for our QA Engineers?
Currently, in the testing industry, there's an aspiration to learn more about test automation. The drivers may differ: gaining new skills, easing work tasks, etc. But often what stops you in this learning is the absence of guidance and assistance. Of course, there are tons of materials online, but what to do with all of them? How do you apply the theoretical knowledge? At the same time, with over half a billion users around the world, Bumble relies a lot on autotests. We wanted to help our QA Engineers learn test automation. The best way to accomplish it, from our perspective, was to create Test Automation Internal Mentorships. They allowed our QA Engineers to learn directly from the experts and get a better understanding of how things work. With such practice, our QA Engineers became more independent and now handle more complex tasks. What's more, all the learning is shared among the entire department. Apart from why we did it, I will also cover:
- why we invested in autotests;
- why we started improving QAs' test automation skills;
- why you may need something different from an internal mentorship: the other ways of learning test automation for you and your team.
Join me in this talk and I'll explain how you can organise your own Internal Mentorships.
This talk proposes a model of the (testing) thought processes that every developer and tester uses. In a sentence, what we do is this: "we explore sources of knowledge to build test models that inform our testing". The model identifies two modes of thinking – exploration and testing – and we use judgement to decide when to flip from one to the other. Separating out these ten thinking activities clarifies what we do when we test. It helps us to understand the challenges of testing and how we choose the methods, approaches and tools we need to test. Understanding how we think helps us to identify how ML/AI can help us as well as identifying the capabilities and skills we need to acquire, to practice and excel in.
We have overcome many barriers and challenges to make End to End tests a first-class citizen inside the continuous integration pipeline. Some of these I will present in this talk: - Introducing the term "Acceptance Tests" instead of End to End Tests - Page Object Model vs App Actions - BDD vs Vanilla JS - Horizontal End to End Tests vs Vertical End to End Tests
The Automation Tester's role has changed from just automating the manual test cases: with tools like Cypress, automation has now become a team activity shared with developers.
The outcomes we have seen within CRUK are: - Better team collaboration and knowledge sharing - More stable tests and test infrastructure - More conversations about tests and testability
I hope our journey will align with many other people's current, past or future experiences, potentially encouraging some to take the leap and make automated testing a team responsibility.
When we say everyone has responsibility for quality, does anyone then really have it? At eBay DK we have been transitioning throughout the last couple of years. We do not have embedded QAs in all our squads, and they are not the final point of contact before sending it to our users. Our QAs work as coaches, and their job is to enable the teams to make sure the quality of what they deliver is good enough to send to our users. One of the tools they use is mob testing. I will talk about how we introduced it to the teams, when we use it as a tool, how we ensure quality in general, and what we see as the benefits of working this way.
Implementing BDD: How One Team is Making it Work
Behavior Driven Development, or BDD, has been a buzzworthy term in the testing and development community for several years. At first glance the elements of BDD seem simple. Testing scenarios! Living documentation! Automation! Reports! That sounds great; why isn't everyone doing it? However, on a deeper dive, it's obvious the implementation of BDD needs a lot of forethought and planning, and that teams must approach it for the right reasons. This talk will follow the evolution one team is currently experiencing in their shift to BDD. BDD was selected to help them modernize the work that the business analysts, manual testers, and automation testers were doing and to support the larger organization's DevOps transformation. Why is BDD the right methodology for this, and what does the process look like? This talk will answer those questions and share the preparation, major milestones, successes, and failures the team has encountered along the way. Join me to find out what happens when a traditional organization completely turns their old processes upside down and sets out to conquer BDD.
Video playback is a complex animal that is often overlooked or reduced in QA to simple cursory visibility checks. This talk will give a representative overview of the architecture of live and on-demand OTT (over the top) video streaming (as well as brief coverage of broadcast for reference) and the testing challenges that come from it, plus some tips on how best to approach the planning of the testing effort to maximise the efficacy of your playback testing.
Pitched at a beginner to intermediate level, participants will leave with a knowledge of some of the moving parts that make up OTT video streaming and some of the approaches that can be utilised whilst testing playback.
Everything you need to know about accessibility testing
How do you make a product inclusive? How do you ensure that users will have equal access to your creation? You need accessibility testing! In this session, Lena will present an overview of accessibility testing and demonstrate why everybody needs to be aware of it. We will look at some useful tools for easy manual testing, and also show how to automate these kinds of tests.
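One flavour of automated accessibility check can be sketched with nothing but the standard library: scan the markup for img elements without alt text. This toy checker covers a single rule and is purely illustrative; real suites such as axe-core evaluate hundreds of rules:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags without an alt attribute, one small automatable
    accessibility check (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; missing alt is a violation.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

page = ('<html><body>'
        '<img src="logo.png" alt="Logo">'
        '<img src="chart.png">'
        '</body></html>')
checker = MissingAltChecker()
checker.feed(page)
print(len(checker.violations))  # the second image is missing alt text
```

Checks like this slot naturally into CI, while manual testing with a screen reader remains necessary for judging whether the alt text is actually meaningful.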
Learning, Cognition, and Becoming a Better Tester
Have you ever wondered how you can tie your life interests into your work? Do you love to learn, and want to understand how your love for learning can joyously infect your entire life, including your software testing? Jess will share pedagogies of learning (Zone of Proximal Development, Scaffolding, Metacognition, and Flow), and then do a live demonstration of these principles on the violin, weaving testing, music, and other disciplines together in a fascinating talk where she lays her skills on the line to show you how you can use learning, cognition, and personal passion to become a better tester.
The emergence of the Internet of Things domain has confronted Quality Assurance engineers with various challenges in the areas of test execution, test strategy definition, test design, automated testing, etc. Being more than just a newly recognized technology, IoT is the interworking of software and hardware in a new, lightweight and extremely distributed way. Not only computers, tablets and watches, but also smoke detectors, hoods, fridges, cameras and lighting equipment now belong to the smart devices category. All those devices have their vendor-specific OS, authentication and authorization mechanisms, communication protocols, user interfaces and many others. The main purpose of every IoT solution is to provide smooth integration with all those gadgets in terms of functionality, security, user experience and performance. The real challenge from a testing perspective in such projects is finding and implementing a scalable strategy for testing and verification of the functionalities related to the smart devices.
When planning and executing a test strategy for an IoT project, we need to pay special attention to more and more components, among which: the hardware, the high number of integration points, the physical environment, the cases of unconventional use and others. The "Quality of Things" presentation puts its main focus on the most common testing types and their application in IoT, in order to outline the unexpected challenges they can bring in the context of IoT. I will share my professional experience as a Test Manager in a large-scale Smarthome project and provoke discussion on the possible approaches for resolving issues in the field.
Key takeaways: 1. Insights into the planning of IoT project test strategy 2. Practical tips and tricks for tests execution of IoT devices 3. How to overcome the main technical challenges when analysing and troubleshooting IoT devices
Reducing the Scope of Load Test Analysis using Machine Learning
Load testing execution produces a huge amount of data. Investigation and analysis are time-consuming, and the numbers tend to hide important information about issues and trends. Using machine learning is a good way to solve these data issues by giving meaningful insights into what happened during test execution. Julio Cesar de Lima Costa will show you how to use K-means clustering, a machine learning algorithm, to reduce almost 300,000 records to fewer than 1,000 and still get good insights into load testing results. He will explain K-means clustering, detail the use cases and applications this method suits, and give the steps to help you reproduce a K-means clustering experiment in your own projects. You'll learn how to use this machine learning algorithm to reduce the scope of your load test analysis and get meaningful analysis from your data faster.
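To make the idea concrete, here is a minimal, dependency-free 1-D K-means sketch clustering synthetic response times; it illustrates the technique only and is not the speaker's actual pipeline:

```python
import random

def kmeans_1d(values, k, iters=20, seed=42):
    """Minimal 1-D K-means: cluster load-test response times (ms)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids as cluster means (keep old one if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Synthetic response times: a fast bulk and a slow tail.
rng = random.Random(0)
times = [rng.gauss(120, 10) for _ in range(950)] + \
        [rng.gauss(900, 50) for _ in range(50)]
centroids, clusters = kmeans_1d(times, k=2)

# 1,000 raw records reduce to 2 representative centroids plus sizes.
summary = sorted((round(c), len(cl)) for c, cl in zip(centroids, clusters))
print(summary)
```

Instead of eyeballing every record, the analyst inspects a handful of cluster centroids and sizes, which is exactly the scope reduction the talk describes (at larger scale, with a library such as scikit-learn).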
Orchestrating your Testing Process - coordinating your manual and automated testing efforts
Due to many historical reasons, most testing and even development organizations approach their manual and automated testing efforts independently. What's more, when you look closer at these teams, you notice that even within their automation efforts they are using a number of different testing frameworks, running independently and without much thought about coordination, coverage overlaps or functional dependencies.
This approach needs to change. Teams are releasing products faster than ever, and this means that we need to make every testing effort count, including everything from Unit and Integration Tests run by our development teams, Functional and Non Functional automated tests executed by the testing teams, and every manual testing effort encompassing all the Exploratory and Scripted tests run by every member of our teams.
By coordinating the planning, designing, execution, and reporting of our complete testing process we will be able to achieve better visibility and make more accurate decisions faster.
But the road to achieving this goal is not trivial. During this session we will: - Review the objectives of coordinating all your testing efforts - Understand common issues and hurdles faced by teams embarking on these efforts - Learn how to build coordinated efforts using a few recommended approaches - Get ideas on how to get started with your team, as soon as possible.
Many test automation efforts these days use Domain Specific Languages (DSLs) such as Gherkin to describe the test cases. Although skilled testers are involved in this process, the creation of these Gherkin files is often done out of the blue or based on small snippets of information such as user stories. While the approach enables understanding of the test cases by all stakeholders, it still results in quite a narrow view of the system or function to be tested. Going back to the basics of our craftsmanship, however, we could easily elaborate the use of Gherkin and BDD into actively applying it as specification by example, just as it was meant to be. The test design techniques of old will guide us in this specification phase, resulting in test cases that cover the system under test both broadly and deeply, up to the level of the applied technique. By combining multiple techniques, it will also comply with approaches such as risk-based testing whenever required. In this presentation I will explain the design-specify-test approach, using test design techniques to write Gherkin files, and illustrate this with several examples of different test design techniques.
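As a small illustration of letting a test design technique drive the Gherkin, the sketch below applies boundary value analysis to a hypothetical "age between 18 and 65" rule and emits the rows of an Examples table (names and the rule are invented for illustration):

```python
def boundary_values(low, high):
    """Boundary value analysis: just outside, on, and just inside each edge."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def gherkin_examples(field, low, high):
    """Emit a Gherkin Examples table derived from the technique."""
    rows = [f"  | {field} | valid |"]
    for v in boundary_values(low, high):
        # The expected column falls out of the rule itself.
        rows.append(f"  | {v} | {str(low <= v <= high).lower()} |")
    return "\n".join(rows)

# Hypothetical rule: age must be between 18 and 65 inclusive.
print(gherkin_examples("age", 18, 65))
```

The generated rows plug straight into a Scenario Outline, so the coverage of the feature file is dictated by the technique rather than by whatever examples came to mind.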
Advancing Mobile App Dev and Test with Next Gen Technologies
Mobile has come a long way to reach its current state as the main digital channel for all business activities. With that, it is now starting to mark its next wave through App Clips, Android APKs, Progressive Web Apps (PWAs), and Flutter apps. How can teams make the leap towards these new techniques that are about to transform the current phase? In this session, Eran Kinsbruner will give a deep dive into the main emerging mobile technologies, how to get ready to adopt them, how they will impact software delivery cycles, and much more.
We have executed many projects: large projects, small projects, etc. Sometimes we miss our testing deadlines because there is no defined criterion used to build our execution test plan. To help avoid missing our deadlines, we have prepared these test estimation guidelines. In this presentation I present the various test estimation techniques which will help us in the proper execution of testing projects. This presentation is submitted in the Test Management stream.
"Self-Healing Tests": The holy grail of test automation ... Or just a lot of noise about nothing?
One of the most important and complex tasks in test automation is the maintenance of test scripts. No other test artifact takes up as much time and effort in maintenance as the test cases that were turned into code.
The question now arises whether there is an approach in which artificial intelligence paired with machine learning can take care of the maintenance of the test scripts. The developers of the test scripts would have more time to take care of the automation of new tests and thus increase the test coverage through test automation. The answer to the question is: "Yes, there is a solution: Self-Healing Tests".
In a nutshell, self-healing is the automation of test automation. Test tools with self-healing properties recognize changes in the graphical user interface and automatically adapt the automated test cases so that the tests remain functional. Commercial tools like Testim, Mabl & Tricentis Neo-Engine are very promising and jumped on the bandwagon in good time. But there are also promising open source alternatives such as Healenium. The lecture explains the basics of self-healing tests and shows, using an example, the implementation with the open source library Healenium.
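The core self-healing idea can be sketched in a few lines: when the primary locator fails, score candidate elements against the last-known attributes and pick the best match. This toy model (dict-based "DOM", invented names) is illustrative only and is not how Healenium is implemented internally:

```python
# Toy DOM: elements with their current attributes.
DOM = [
    {"id": "submit-btn-v2", "text": "Submit", "tag": "button"},
    {"id": "cancel-btn", "text": "Cancel", "tag": "button"},
]

# Snapshot stored when the test last passed (before the id changed).
SNAPSHOT = {"id": "submit-btn", "text": "Submit", "tag": "button"}

def find(by_id):
    """Plain locator lookup: returns None when the id no longer exists."""
    return next((e for e in DOM if e["id"] == by_id), None)

def self_healing_find(snapshot):
    """Try the primary locator; on failure, score elements by how many
    attributes they share with the last-known snapshot."""
    element = find(snapshot["id"])
    if element is not None:
        return element
    def score(e):
        return sum(e.get(k) == v for k, v in snapshot.items())
    healed = max(DOM, key=score)
    return healed  # a real tool would also persist the repaired locator

# The id changed from "submit-btn" to "submit-btn-v2", yet the test heals.
assert self_healing_find(SNAPSHOT)["id"] == "submit-btn-v2"
```

Real tools add the hard parts: capturing DOM snapshots on green runs, smarter similarity scoring, and reporting each heal so a human can confirm it was not masking a genuine defect.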
The rise of test automation has reduced the manual tester's footprint significantly. We have more and more AI and ML tools that are taking on things like visual regression and providing smart test automation. Companies are hiring SDETs at an increasing rate, while reducing their manual test roles in the hope that a high amount of test automation will result in a higher quality product. So where does that leave manual testing? Join Erika as she showcases the value of manual testing as a non-negotiable partner alongside a robust test automation strategy. Reconnect with thought-based testing to infuse quality earlier in the delivery life cycle. Learn how manual testers can bring quality to the forefront of the conversation and help drive not only automation delivery but the overall quality of what is automated. Discover how to change the narrative and redefine manual testing in a way that empowers testers and helps build quality teams that can deliver a well-rounded test program.
Sometimes there is no need for a cumbersome test management process, or you are in a rush to create a management model on the fly. We have created a test management model that relies on the skill and self-guidance of the testers, session-based testing, and clear, concise test reporting. The model is applicable in most of the planning tools available (for example, MS Teams has a Planner) and is easy to set up. The first step is to do test design and create the session description. Then, after testing those sessions, the reporting is done using a few simple methods. The process relies on communication rather than rigid process. It is meant to be as lean and agile as possible.
Enhance Mobile User Experience Through Performance Testing
Consumers now expect digital experiences to exceed face-to-face experiences. Recent studies show that 80% of people have deleted a mobile app due to problems with its performance. I've learned that this problem can plague teams of all sizes: the mobile team lead at a major ride-sharing company shared with me that their business was losing $200 million per year due to app crashes. So, how can you avoid this and similar mobile performance problems? In this talk, you will learn how mobile performance impacts the user experience, what you need to look for when evaluating the performance of an app, and how to start testing mobile performance earlier in the dev cycle for better results. I will include lessons learned from real-world examples over many years of working with leading brands to improve their mobile app performance.
I have heard from many testers around the world that they know of data science teams but no testers testing the models. How do we have enough confidence that what is produced is good enough? A model is a statistical black box; how do we test it so we understand its behaviours and test it properly? My main aim is to help inspire testers to explore data science models. I'd like to share how I explored the world of data science when testing a model, and how we can apply that if we find ourselves in this situation. It is an emerging and exciting area for testers. I'd like to invite you to my talk, where we will go through my journey of discovering data science model testing and find takeaways useful not just for testing a data science model but for day-to-day testing too.
Building Continuous Security Ways of Working: Overcoming the challenge of security testing adoption through Lean Canvas Design
This talk aims to share practices and real experiences building security testing as a new way of working for development teams facing a DevOps transformation, all through Lean Canvas, lean, and agile practices. Takeaways:
- Lean Canvas techniques applied to build new ways of working, such as continuous security, allow you to define a powerful strategy and road map.
- Gamification applied to security testing is a great way to build new ways of working in your teams.
- Lean allows you to be more efficient in terms of your security testing practices in your DevOps pipeline.
Selenium has been around for over 15 years, and by now organizations have realized that Selenium tests need to be treated the same as any other functional code. This means not just keeping your tests in source control, but also designing them to be maintainable and robust. A common design pattern known as the Page Object Model (POM) has emerged, which greatly assists with organization and maintenance of tests. But there are scalability, speed, and robustness issues with this pattern. This has caused organizations to move away from Selenium to other tooling; however, most organizations encounter the same problems, because they are using the same problematic design patterns. Max will outline these issues, how to avoid them, and better patterns to use. He'll discuss how to transform your tests to be more effective, using patterns like Arrange-Act-Assert, and not relying solely on Selenium to exercise the system.
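A minimal sketch of the Arrange-Act-Assert shape, with state seeded through an (entirely hypothetical) in-memory API rather than through the UI; class and test names are invented for illustration:

```python
# A stand-in for seeding state via API calls instead of slow UI clicks.
class CartApi:
    def __init__(self):
        self.items = []

    def seed(self, *skus):
        """Arrange through the API, not the UI."""
        self.items.extend(skus)

    def remove(self, sku):
        """The single behaviour under test."""
        self.items.remove(sku)

def test_removing_item_shrinks_cart():
    # Arrange: seed state directly, fast and deterministic.
    api = CartApi()
    api.seed("sku-1", "sku-2")
    # Act: exercise exactly one behaviour.
    api.remove("sku-1")
    # Assert: one logical expectation.
    assert api.items == ["sku-2"]

test_removing_item_shrinks_cart()
```

The point of the pattern is the separation: only the Act step needs the system's front door (e.g. Selenium), so the Arrange and Assert steps can use faster, more robust channels.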
Some projects/products are so complex and large that testing can require weeks. In my world, test requirements are expressed in 647 pages given by the aviation authorities, translating into more or less a total of 3,000 pages of test results and various documentation. Without automation, testing time would be longer and no room would be left for other kinds of testing, exploratory testing for example. In this talk you will see examples where automation was a solution, and how to decide what to automate and what not.
How to test the test results: A case study of data analysis on test results
In this talk we're going to explore some experiences and good practices we've developed in exploring testing environments. We're going to try to answer questions like: How do you analyze test resource usage in a highly dynamic, hardware-based environment? What metrics do we look for in our analysis? What types of data are we looking for? How do we do exploratory testing on the data and then automate the tests? How do we report the findings and create suggestions for improvements?
How do you navigate the change of the QA role from being a traditional tester to being a quality coach, and what is the role of a quality coach in practice? I would like to take you through the transition we have gone through over the last couple of years, telling you about the benefits of the transition but also about the ups and downs along the road. I will describe what my role consists of today, how I work with the squads, and which tools I think are good to master as a quality coach.
I will share three primary methods (Win the Morning, Embrace your Fears, and Continuous Learning and Growing) on how to be the best leader you can be and why it is critical to work on being a better leader every day.
We are all leaders. At a minimum, we must lead ourselves every single day, and many of us have test and quality teams that we lead and serve. Have you ever stopped to analyze yourself to determine if you are the best leader you can be? I have had the joy of learning, and continuing to learn, from many great test leaders, including the late Jerry Weinberg, and from other high-performance leaders outside the testing arena. Join me as I share ways to be the best leaders we can be by employing approaches from these leaders, including tactical steps on how attendees can “Win the Morning, Win the Day” by incorporating rituals and habits that will make a difference; strategies on how to “Embrace and Face Your Fears”; and how to create a daily “Continuous Learning and Growth Plan” and why it's a must, as well as tips from other favorite leadership books, blogs, and podcasts.
Even though there is a blurred line between managing and evolving teams, and the two terms may seem very similar, it's not rare for Team Leads to forget or omit the second part. And there is a good reason for that: evolving your team is a big investment that requires a lot of personal commitment, dedication, desire and vision.
Tips to cover:
Tip 1 – Don't separate the 'Managing' and 'Evolving' your team parts
Tip 2 – Know your team
Tip 3 – Nothing great has ever been achieved without an investment
Tip 4 – Respect your Team
Tip 5 – Be open
How to Rapidly Introduce Tests When There Is No test coverage
I'll be sharing methods and techniques my team and I implemented to drastically improve test coverage and reduce production errors.
Over the course of 5 years, many tech companies have reportedly increased their QA resources by approximately 30%. Why? Because, as consumers become pickier and the competition becomes more aggressive, the quality of the product can be the key to standing out from the crowd.
To deliver a high quality product, companies need to take a holistic approach to identify the product’s quality baseline, detect the weak points and ultimately improve it. One of the tactics under this quality strategy is to improve test coverage.
I'd like to talk about test coverage: how to measure it, how to improve it, and how to leverage your test coverage results to prioritize your QA resources.
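One simple way to measure coverage is against requirements rather than code: the share of requirements exercised by at least one test. The mapping below is invented for illustration; in practice it would come from a traceability tool or test management system:

```python
# Illustrative traceability mapping: which tests cover which requirements.
tests_to_requirements = {
    "test_login_ok": ["REQ-1"],
    "test_login_bad_password": ["REQ-1", "REQ-2"],
    "test_logout": ["REQ-3"],
}
all_requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

# Requirements touched by at least one test.
covered = {r for reqs in tests_to_requirements.values() for r in reqs}
coverage = len(covered & set(all_requirements)) / len(all_requirements)
print(f"requirement coverage: {coverage:.0%}")   # 3 of 4 -> 75%

# The gap list is what drives prioritization of QA resources.
uncovered = sorted(set(all_requirements) - covered)
print("prioritise tests for:", uncovered)        # ['REQ-4']
```

The same ratio-plus-gap-list shape works for code coverage (via tools like coverage.py) or feature coverage; what matters is that the uncovered list, not the percentage alone, tells you where to spend effort next.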
An Agile Approach to Web Application Security Testing
- Discuss popular methods of testing web application security, with their strengths and weaknesses
- Explain why these methods should be used in a well-defined sequential manner
- Map these methods to the different stages of an agile software development life cycle
- Conclude with how to make security testing effective and efficient