Quality management and software testing in web integration

September 16, 2022 in Digital Marketing Blogs Posts

This article describes the basic concepts of quality assurance and software testing, their goals, and the various types of tests.

The rapid development of information technology has resulted in ever greater demands on the functionality of web (software) applications, and in turn on the quality of their software. As a result, quality management must play a vital part in the process of developing web solutions.

What is quality assurance?

Wikipedia defines quality assurance as follows: “Quality assurance (QA) generally concerns everything from design, development, implementation and maintenance through to product development. The aim of this activity is to ensure that the outputs of individual parts meet the necessary quality standards, which have been defined beforehand.”

Quality assurance (known in this context as “Software Quality Assurance”) involves the following activities:

  • Verifying that customer requirements have been met, and checking that they are in line with the assignment and the project specification.
  • Checking developer outputs, focusing on adherence to pre-defined standards and norms.
  • Testing in order to identify possible errors – functional, logical, content, system errors, etc.

Naturally, these steps also apply to web solutions, and especially to the main part, testing. Here we need to take into account that it is web pages that are being tested, and select appropriate tests accordingly: web application security, the ability to handle large numbers of visitors (performance testing), compatibility with different browsers and operating systems, and accessibility both for ordinary users and for users with disabilities.


Testing is a systematic process of observing system operation under specific conditions that imitate a real-life environment. It focuses on identifying errors, flaws and deviations from customer requirements, and on behaviour in borderline situations from the perspective of output data, stress and security. All findings are recorded and evaluated.

The goal of testing is to find errors as quickly as possible and at the lowest level of solution development, and then rectify them.

The concepts of Quality Assurance and testing are often confused, but as we have seen in the previous section, testing is just one part of the quality management process.

Types of tests

Literature and practice highlight a variety of testing methods, which depend on the perspective from which we view them.

Based on the implementation method, we can divide testing into:

  • Manual testing: carried out by the tester (user) by hand. Its disadvantages are slowness and inefficiency. However, some tests can only be performed manually, precisely because they depend on the human factor.
  • Automated testing: carried out with automated testing tools. A large quantity of variable test data can be processed in a short time. In web applications, automated tools make it easy to test page validity, the functionality of individual links, stress resistance and similar aspects.
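As a small illustration of the automated approach, the following Python sketch (standard library only) collects the link targets from an HTML page so that each one can later be checked with an HTTP request; the sample page fragment is invented for this example.

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags so each link can be checked later."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def collect_links(html: str) -> list:
    """Return all link targets found in the given HTML fragment."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


# Hypothetical page fragment used as test data.
page = '<p><a href="/home">Home</a> and <a href="https://example.com">docs</a></p>'
print(collect_links(page))  # ['/home', 'https://example.com']
```

In a real automated run, each collected link would then be requested (for example with `urllib.request`) and any failing responses reported.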

Given the nature and focus of the testing itself, we can differentiate between:

  • Functional testing: verifies that the software's functions meet customer requirements, and checks that the software/solution works on various SW and HW platforms.
  • Non-functional testing: tests qualities of the system that are unrelated to its primary function, e.g. performance, security and stress tests.
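The distinction can be sketched in a few lines of Python; `render_page` and the 0.5 s response-time budget are invented purely for illustration.

```python
import time


def render_page() -> str:
    """Hypothetical stand-in for a page-rendering routine under test."""
    return "<html>" + "x" * 1000 + "</html>"


# Functional check: the output matches what was requested.
assert render_page().startswith("<html>")
assert render_page().endswith("</html>")

# Non-functional check: the same call also stays within a response-time budget.
start = time.perf_counter()
render_page()
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"render took {elapsed:.3f}s, over budget"
```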

From the source code perspective, this involves:

  • Static testing: examining the source code itself, without executing it.
  • Dynamic testing: testing the qualities of the system during program runtime.
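A minimal Python sketch of the difference: the static check inspects the source text without running it (here via the standard `ast` module), while the dynamic check executes the code and observes its behaviour; the `divide` function is an invented example.

```python
import ast

source = "def divide(a, b):\n    return a / b\n"

# Static testing: parse and inspect the source without running it.
tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # ['divide']

# Dynamic testing: execute the code and check its runtime behaviour.
namespace = {}
exec(source, namespace)
print(namespace["divide"](10, 4))  # 2.5
```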

Depending on the approach to information, tests can be described as:

  • Black box: the tester analyses only the external responses of the system, recording input and output parameters. The black box test does not examine the system's internal processes. It can therefore yield distorted results: even when a given input produces the expected output, the system may be handling the process incorrectly internally (for example, saving data incorrectly in its data structures).
  • White box: assumes that the tester has detailed knowledge of the internal structure of the code and therefore “sees into the system”. As part of the test, the tester also audits the source code.
  • Gray box: combines the black and white box approaches. The tester is familiar with the basic structure of the system but does not go into the details of the source code. Knowledge of the structure is used mainly to formulate more appropriate test scenarios.
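The contrast between the approaches can be shown with a toy Python function (invented for this example) that keeps an internal cache:

```python
def cached_square(n, _cache={}):
    """Toy function with an internal cache (mutable default used deliberately)."""
    if n not in _cache:
        _cache[n] = n * n
    return _cache[n]


# Black-box test: only the externally visible input/output relation is checked.
assert cached_square(4) == 16

# White-box test: the tester also inspects internal state -- here, that the
# result really was stored in the cache, which a black-box test cannot see.
internal_cache = cached_square.__defaults__[0]
assert 4 in internal_cache
```

If the cache silently failed to store results, the black-box assertion above would still pass; only the white-box check would catch it.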

Depending on the level of software development at which inspection takes place, we can divide testing into:

  • Developer testing: testing at the lowest level, usually carried out immediately after a part of the program code is written, with an emphasis on functional accuracy and validity.
  • Unit testing: testing how correctly the smallest system units function, checking adherence to prescribed standards and norms. At this level, the tests can be easily fully automated.
  • Integration testing: verifying correct interaction between individual modules in the system. For web systems, linking all parts of the proposed solution is an important phase of testing.
  • System testing: the application is tested as a complete entity. Tests focus on system functionality, reliability, security and efficiency. At this level, we can put systems through several tests, with independently designed methodologies, such as:
    • performance test: measures the speed of responses to input requests. It investigates the efficiency of the web application in multi-user environments and how it reacts to a sudden large volume of input data or an increase in the number of active users.
    • security test: tests the system’s ability to prevent unauthorized access to source code or data.
    • usability test: establishes how easy it is to use web pages, how to find one’s way around them, and how user-friendly they are.
    • accessibility test: ensures that web pages are “barrier-free”, i.e. accessible to users with disabilities (impaired vision, hearing, etc.), including testing color contrast.
    • stress test: tests the system, with a focus on checking failures where there is insufficient HW capacity (CPU, memory, disk, etc.).
    • maintenance test: checks that the system functions correctly after maintenance (changes to HW or SW environments, migration, system upgrades, etc.).
  • Acceptance testing: carried out after overall tuning of the system and removal of all errors detected in previous tests. It checks whether the system is ready for live operation (alpha test, beta test, etc.).
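As a sketch of the unit-testing level described above, the following uses Python's standard `unittest` module; `normalize_email` is a hypothetical unit under test.

```python
import unittest


def normalize_email(address: str) -> str:
    """Hypothetical unit under test: trims and lower-cases an e-mail address."""
    return address.strip().lower()


class NormalizeEmailTest(unittest.TestCase):
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_email("  User@Example.COM "),
                         "user@example.com")

    def test_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_email("user@example.com"),
                         "user@example.com")


if __name__ == "__main__":
    # argv is fixed so the runner ignores any external command-line arguments.
    unittest.main(argv=["unit-tests"], exit=False)
```

Tests at this level run quickly and in isolation, which is why they are the easiest to automate fully.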

It is impossible to say which testing method is the best and most efficient. The choice of testing methodology depends on a range of factors, such as project size, goals and focus, and the capabilities and capacities of the customer and the testing team. However, putting together the right testing plan at the very beginning allows us not only to improve the overall quality of the system, but also enables considerable financial savings.

Experience shows that it is much more effective to prevent errors at the beginning of work than to rectify them at the end of the project. This is all the more important in web integration, where a mistake at one end of a chain of integrated systems can spill over into another area and cause a great deal of damage. It is therefore a very good idea to carry out quality tests at every step of the software life cycle, beginning with the customer’s specific requirements and ending with putting the finished application into production.

It is very useful to carry out quality tests as early as the analysis and design phase of the solution because an error in the architecture design, an incorrect functional specification, or a poorly chosen graphic design can later cause many tests to fail. As a consequence, going live is significantly delayed.


It is a fact that testing cannot prove that a system contains no errors, so no system can be regarded as faultless; the aim is to come as close to faultless as possible. This is doubly true for integrated systems, where the risk of failure in the areas of security, communication and stability is greater.

