Benefits of Model-Based Test Automation

From Evidence on Formal Methods Uses and Impact on Industry
Success Story
Domain: Business Information
Keywords: Event-B, Model-Based Testing
Source: SAP (DEPLOY)



Short description

Model-based testing can be driven from Event-B models and provides high added value on top of the produced model: it can significantly reduce the time needed to produce integration tests and provide assurance that specific coverage criteria are met. Moreover, it is compatible with a traditional development process and allows the adopter to benefit both from a high-quality design (thanks to the model itself) and from quality assurance (thanks to the automatically derived tests).

Related FAQ

process integration, productivity, quality

Of interest to QA practitioners:

  • EM-QAP-1 Is it possible to take advantage of formal models to automate part of the QA tasks?

Benefits

  • MBT reduces the overall test design time for specific test classes.
  • MBT can guarantee specific and complex coverage criteria.
  • MBT allows formal models to be used only for the upper part of the design process, relying on standard (test-based) development techniques for the rest of the tool chain.

Limitations

  • Specific test classes only: generation is limited to what is captured by the model.
  • Validation of the model itself is required.
  • Formal modelling skills are necessary.
  • An adaptation layer is needed to run abstract tests in a concrete environment (see the sketch after this list).
  • Technical limitations of the tools restrict the size of the model or require models to be written with MBT in mind.
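To make the adaptation-layer limitation concrete, the minimal Python sketch below maps abstract test steps (event names taken from a model) onto concrete calls against the system under test. All names here (the client class, its methods, the event names) are hypothetical placeholders, not the actual SAP test infrastructure.

  # Minimal sketch (hypothetical names): an adaptation layer mapping abstract
  # model events onto concrete calls against the system under test.
  from typing import Callable, Dict, List

  class OrderServiceClient:
      """Hypothetical concrete test client for the system under test."""
      def send_request(self) -> None:
          print("POST /orders")          # placeholder for a real API call
      def receive_ack(self) -> None:
          print("GET /orders/ack")       # placeholder for polling/inspection
      def send_payload(self) -> None:
          print("PUT /orders/payload")   # placeholder for a real API call

  def run_abstract_test(steps: List[str], client: OrderServiceClient) -> None:
      """Execute an abstract test (a sequence of model event names) concretely."""
      adapter: Dict[str, Callable[[], None]] = {
          "send_request": client.send_request,
          "receive_ack": client.receive_ack,
          "send_payload": client.send_payload,
      }
      for step in steps:
          adapter[step]()  # run the concrete counterpart of the abstract step

  if __name__ == "__main__":
      run_abstract_test(["send_request", "receive_ack", "send_payload"],
                        OrderServiceClient())

Keeping this mapping in one place means that, when the concrete interfaces change, only the adaptation table has to be updated rather than every generated test.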

Elaboration

Model-Based Testing (MBT) is the most automated level of testing on the usual scale ranging from manual testing through capture/replay, script-based and keyword-driven testing up to model-based testing. MBT automates the detailed design of the test cases and the generation of the traceability matrix from an abstract model of the system under test: a model-based testing tool generates a set of test cases from that model. MBT has previously been applied successfully in a variety of sectors and with several modelling languages, such as B and UML.
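As a small illustration of the automation involved, the sketch below assumes an abstract model whose transitions carry names and a set of generated test cases expressed as transition sequences (the model, tests and names are hypothetical, not the SAP artefacts) and derives the traceability matrix relating each test to the model elements it exercises.

  # Minimal sketch (hypothetical model and tests): deriving a traceability
  # matrix that records which model transitions each generated test exercises.
  from typing import Dict, List

  MODEL_TRANSITIONS = ["create_order", "send_invoice", "receive_payment", "cancel_order"]

  TEST_CASES: Dict[str, List[str]] = {
      "test_1": ["create_order", "send_invoice", "receive_payment"],
      "test_2": ["create_order", "cancel_order"],
  }

  def traceability_matrix(tests: Dict[str, List[str]],
                          transitions: List[str]) -> Dict[str, Dict[str, bool]]:
      """Map each test case to the model transitions it covers."""
      return {name: {t: t in steps for t in transitions}
              for name, steps in tests.items()}

  if __name__ == "__main__":
      for test, row in traceability_matrix(TEST_CASES, MODEL_TRANSITIONS).items():
          covered = [t for t, hit in row.items() if hit]
          print(f"{test} covers: {covered}")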

The initial work was essentially a feasibility study. It focused on producing a test suite covering a global message choreography model under a transition coverage criterion, while keeping the tests as short as possible to ease test maintenance and error localisation.

The algorithm was developed starting from an Event-B equivalent of the global choreography model. The ProB model checker was used to perform a breadth-first search (yielding short solutions first), and the resulting solutions were then mapped onto message send/receive events. An optimisation step then removed redundant paths. Finally, executable test scripts were produced through an adaptation layer.
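The following sketch illustrates this pipeline on a small, hypothetical choreography model: a breadth-first search enumerates event paths shortest-first, paths are kept until every transition is covered, and an optimisation pass drops paths whose transitions are already covered by the remaining ones. It is an illustration of the idea only; it does not use the actual ProB API or the SAP models.

  # Minimal sketch (hypothetical model): breadth-first test-path generation with
  # transition coverage and a simple redundancy-removal pass. The real work used
  # ProB on an Event-B model; this is only an illustration of the idea.
  from collections import deque
  from typing import Dict, Iterator, List, Tuple

  State, Event = str, str
  Step = Tuple[State, Event, State]

  # Hypothetical message-choreography model: state -> [(event, next_state), ...]
  MODEL: Dict[State, List[Tuple[Event, State]]] = {
      "init": [("send_request", "waiting")],
      "waiting": [("receive_ack", "acked"), ("receive_error", "failed")],
      "acked": [("send_payload", "done")],
      "failed": [("send_request", "waiting")],
      "done": [],
  }
  INITIAL = "init"

  def bfs_paths(model, initial: State, max_depth: int = 6) -> Iterator[List[Step]]:
      """Enumerate event paths from the initial state, shortest first."""
      queue = deque([(initial, [])])
      while queue:
          state, path = queue.popleft()
          if path:
              yield path
          if len(path) < max_depth:
              for event, nxt in model[state]:
                  queue.append((nxt, path + [(state, event, nxt)]))

  def build_suite(model, initial: State) -> List[List[Step]]:
      """Collect shortest paths until all transitions are covered, then prune."""
      required = {(s, e, t) for s, succs in model.items() for e, t in succs}
      covered, suite = set(), []
      for path in bfs_paths(model, initial):
          if set(path) - covered:          # the path exercises a new transition
              suite.append(path)
              covered |= set(path)
          if covered >= required:
              break
      # Optimisation step: drop a path if the others already cover its transitions.
      for path in sorted(suite, key=len):
          rest = [p for p in suite if p is not path]
          if rest and set(path) <= set().union(*map(set, rest)):
              suite = rest
      return suite

  if __name__ == "__main__":
      for i, path in enumerate(build_suite(MODEL, INITIAL), 1):
          print(f"test {i}:", " -> ".join(event for _, event, _ in path))

Preferring the shortest covering paths matches the goal stated above of keeping tests short to ease maintenance and error localisation; the sketch stops at the abstract level, before the mapping to executable scripts.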

A simple example of test paths, as computed by ProB before test suite minimisation, is shown in the following figure.

[Figure: Mbt-sap.png]

The results of this feasibility study were evaluated positively in validation experiments with users. The generated test suites could be understood and enriched. There was no direct comparison with manual coding of the tests, but the general feedback was that the approach saved time. The effort to concretise the abstract tests into real test suites was estimated at between four and eight hours for the considered cases. However, when generating a new test suite, only minor adaptations were necessary and the updated test suite could be run in a matter of minutes.

Consolidation

At the tool level, the currently available tools seem to be moving away from strong formal models towards more widely adopted modelling paradigms such as UML (Smartesting is the rebranding of LTG/UML, while LTG/B was dropped), Java-like languages (Conformiq) or C#-flavoured notations (Spec#). Legeard, a major contributor to MBT who is involved in the Smartesting tool, reported that LTG/B was dropped partly because of its redundancy with LTG/UML, but also because of the need to reach a larger market (general information systems rather than only specific embedded systems). Another reason was the low availability of people trained to write B models.

Model-based testing has been reported to be successful in the smart card domain [1][2].

Model-based testing has also been reported to be used in the automotive sector to automatically generate test vectors from a formal model [3].

Specific Contributions

  • Test generator based on ProB

Further work

  • MBT will be supported in the second half of the project by a specific DEPLOY task.

References:

  1. J. Philipps, A. Pretschner, O. Slotosch, E. Aiglstorfer, S. Kriebel, K. Scholl, Model-Based Test Case Generation for Smart Cards, in Proc. Eighth International Workshop on Formal Methods for Industrial Critical Systems (FMICS'03), Electronic Notes in Theoretical Computer Science, Volume 80, August 2003.
  2. Mark Utting and Bruno Legeard, Practical Model-Based Testing: A Tools Approach, Morgan Kaufmann, 2006.
  3. Reference "Bode" (full citation not available in the source).


