Improve certification testsuite, tools and processes for external testing

2012-05-08 11:00..11:55 in G. Ballroom B

External testing, that is, the testing performed outside of Canonical certification labs, is a significant part of all the testing activity around Ubuntu, and it grows every cycle.

It is becoming more common for people outside the certification team to run our certification tools and tests, whether to submit results to Friendly, to test their own systems, or to check whether a system would pass certification.

As part of the continuous improvement practices currently in place, the certification team will focus on enhancing the user experience for external testers for both desktop and server, so that they can run test cases with the maximum coverage (both for hardware and software) and report problems effectively.

The areas identified for work during the next cycle are the following:

== Test case usability improvement ==

Some test cases executed internally in the laboratory assume the presence of devices and configurations needed to run them smoothly. Unfortunately, when these test cases are executed in a different environment, they fail and cannot be used unless all the prerequisites are properly fulfilled.

  • We need to take into account that the external whitelist is going to be run outside the lab, so we cannot make assumptions about the configuration.

  • Automate as much as possible, to remove these assumptions and let the test cases run on any system that meets a minimal set of requirements (see the sketch after this list).

  • Provide guidelines either in the documentation or the test case description to configure the environment properly when full automation isn't attainable.

  • Improve the general usability of the test cases, so that people less familiar with the tools can run the test suite without having to ask questions.
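
As an illustration of making prerequisites explicit, here is a minimal sketch of a job definition in the Checkbox job format that declares the hardware it needs in its requires field, so that the job is skipped cleanly instead of failing when the device is absent. The job name, requirement expression and command are hypothetical:

    plugin: shell
    name: audio/external-playback
    requires: device.category == 'AUDIO'
    command: audio_playback_test headphones
    description:
     Check that audio playback works through the headphone jack.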

== Documentation ==

Documentation is intended to resolve any doubt that could prevent a testing session from finishing properly, that is, with the results submitted and the problems found reported.

  • Create a document that describes the procedure to run test cases using the external whitelist on a desktop system.

  • Update the document for the server that was created in the previous cycle. This includes, for example, adding a section on how to troubleshoot problems: what to do if a test case doesn't work as expected, if the test results cannot be submitted, and so on (see the sketch after this list).

  • Create training material for TAMs and customers, so that they can get some hands-on experience before certifying a device for a given release for the first time.
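
For the troubleshooting section, a small helper like the following could be shipped alongside the document. This is a minimal sketch in Python; the location of the submission file is an assumption and would have to match what the actual tools write:

    #!/usr/bin/env python
    # Minimal troubleshooting sketch: sanity-check the submission file
    # before trying to submit it again. The path below is an assumption,
    # not necessarily where the certification tools write it.
    import os.path
    import sys
    from xml.etree import ElementTree

    SUBMISSION = os.path.expanduser("~/.cache/checkbox/submission.xml")

    def check(path):
        """Return a problem description, or None if the file looks sane."""
        if not os.path.isfile(path):
            return "No submission file found at %s" % path
        try:
            ElementTree.parse(path)
        except ElementTree.ParseError as error:
            return "Submission file is not well-formed XML: %s" % error
        return None

    problem = check(SUBMISSION)
    print(problem or "Submission file looks sane")
    sys.exit(1 if problem else 0)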

== Whitelist coverage ==

Because the internal and the external whitelists contain different sets of test cases, the coverage achieved depends on which whitelist is executed, which shouldn't be the case.

  • Review both whitelists and make sure that they provide the same coverage (see the sketch after this list).

  • When the improvements in test case automation aren't enough to reach the desired coverage, create alternative, easy-to-configure test cases that exercise the same capability/feature in a different environment.
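
As a starting point for that review, a minimal sketch like the following could list the job-name patterns present in one whitelist but not in the other. It assumes the plain-text whitelist format (one pattern per line, '#' starting a comment); the file names are hypothetical:

    #!/usr/bin/env python
    # Minimal sketch: diff two whitelists to spot coverage gaps.
    # Assumes one job-name pattern per line and '#' comments;
    # the file names below are hypothetical.

    def patterns(path):
        """Return the set of non-blank, non-comment patterns."""
        with open(path) as stream:
            return set(line.strip() for line in stream
                       if line.strip() and not line.strip().startswith("#"))

    internal = patterns("internal.whitelist")
    external = patterns("external.whitelist")

    for pattern in sorted(internal - external):
        print("Only in the internal whitelist: %s" % pattern)
    for pattern in sorted(external - internal):
        print("Only in the external whitelist: %s" % pattern)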

== User interface usability ==

The new Qt interface is a step forward in terms of having a modern interface that looks appealing to users. However, we have to make sure that it lives up to the expectations of testers.

  • Review the interactions required to run test cases and fix any issue that prevents testers from having a good experience, one that encourages collaboration and makes reporting problems easy. For example, check that the buttons for marking a test case as passed/failed/skipped behave consistently (see the sketch below).
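
As an illustration of the kind of consistency we are after, here is a minimal PyQt4 sketch, unrelated to the actual Checkbox Qt code, where the passed/failed/skipped buttons are built in one place so that their wording and ordering stay the same on every screen:

    # Minimal PyQt4 sketch of a reusable pass/fail/skip row.
    # This is an illustration only, not the actual Checkbox Qt code.
    import sys
    from PyQt4 import QtGui

    OUTCOMES = ("Passed", "Failed", "Skipped")  # one canonical order and wording

    class OutcomeRow(QtGui.QWidget):
        """A row of outcome buttons that every test screen can reuse."""

        def __init__(self, on_outcome, parent=None):
            super(OutcomeRow, self).__init__(parent)
            layout = QtGui.QHBoxLayout(self)
            for outcome in OUTCOMES:
                button = QtGui.QPushButton(outcome, self)
                button.clicked.connect(
                    lambda checked=False, o=outcome: on_outcome(o))
                layout.addWidget(button)

    if __name__ == "__main__":
        app = QtGui.QApplication(sys.argv)
        row = OutcomeRow(lambda outcome: sys.stdout.write(outcome + "\n"))
        row.show()
        sys.exit(app.exec_())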