We have moved all code to a Git repository, which offers many advantages for the development process. The SVN repository will no longer be maintained. Please check out from Git instead via git://smff.git.sourceforge.net/gitroot/smff/"module name", e.g. git://smff.git.sourceforge.net/gitroot/smff/SMFF-Core
Added a new module which provides an interface to pyCPA. pyCPA is a pragmatic Python implementation of Compositional Performance Analysis (also known as the SymTA/S approach provided by Symtavision), used in research on worst-case timing analysis. The source of pyCPA is available at http://code.google.com/p/pycpa/.
This new interface makes it possible to perform worst-case timing analysis and obtain metrics such as worst-case response times of tasks in SMFF models. This can be used within the generation process, e.g. to generate models that have a certain slack with respect to task deadlines. Another option is to use the pyCPA analysis as a reference for the evaluation of timing-analysis approaches. You could also use SMFF and pyCPA to develop optimization algorithms for real-time systems without having to implement your own timing analysis.
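To illustrate the kind of metric such an analysis yields, here is a minimal sketch of classical worst-case response-time analysis for static-priority preemptive scheduling. This is not the pyCPA API, just the standard fixed-point iteration underlying this class of analyses; the task parameters are made up for illustration.

```python
def wcrt(task, higher_prio):
    """Worst-case response time via the fixed-point iteration
    R = C + sum over higher-priority tasks of ceil(R / T_j) * C_j."""
    c, _ = task
    r = c
    while True:
        # -(-r // t_j) is integer ceil(r / t_j)
        interference = sum(-(-r // t_j) * c_j for c_j, t_j in higher_prio)
        r_new = c + interference
        if r_new == r:          # fixed point reached
            return r
        r = r_new

# Tasks as (WCET, period), listed highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
for i, t in enumerate(tasks):
    print(f"task {i}: WCRT = {wcrt(t, tasks[:i])}")
```

For the lowest-priority task this yields a worst-case response time of 10, which could then be compared against a deadline to compute slack.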
Together, SMFF and pyCPA provide a powerful combination of testcase generation and reference timing analysis.
Evaluation of scheduling, allocation, or performance-verification algorithms requires either analytical performance estimations or a large number of testcases. In many cases, e.g. if heuristics are employed, extensive sets of testcase systems are imperative. Oftentimes, realistic models of such systems are not available to the developer in large numbers.
SMFF (System Models for Free) provides a framework for pseudorandom generation of models of real-time systems. The generated system models can be used for the evaluation of scheduling, allocation, or performance-verification algorithms. As the requirements for the generated systems are domain-specific, the framework is implemented in a modular way, such that the model is extensible and each step of the model generation can be replaced with a custom implementation.
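As a hypothetical sketch of what parameter-driven, pseudorandom testcase generation can look like (this is not SMFF's actual algorithm), the snippet below draws task utilizations with the well-known UUniFast method and derives periods and worst-case execution times from them; the period set and seed are arbitrary choices.

```python
import random

def uunifast(n, total_util, rng):
    """Return n utilizations summing to total_util, drawn without bias
    (Bini/Buttazzo's UUniFast method)."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_rem = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    return utils

def generate_taskset(n, total_util, seed=42):
    rng = random.Random(seed)      # fixed seed -> reproducible testcases
    tasks = []
    for u in uunifast(n, total_util, rng):
        period = rng.choice([10, 20, 50, 100])   # arbitrary period set
        tasks.append({"period": period, "wcet": u * period})
    return tasks

taskset = generate_taskset(5, 0.8)
```

Because generation is driven by a seed and a few parameters (task count, total utilization), large reproducible testcase sets can be produced, which is the same design idea the framework's exchangeable generation steps serve.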
During the development of scheduling, allocation, or performance-verification algorithms, testcase systems are required to evaluate their applicability and performance. If formal proofs of correctness or analytically derived performance estimations can be given, a small set of such systems is sufficient. However, in many cases this is not possible, e.g. if heuristics are employed. In this case the algorithm has to be tested against an extensive set of testcases. For many algorithm developers, especially in academia, system models are not available in large numbers. Manually creating such system models is very time-consuming and might not satisfy requirements on randomness.
Consider the following example. A developer has implemented a heuristic algorithm for optimized priority assignment in distributed real-time systems. As the algorithm is based on a heuristic, no analytical estimation of its performance can be given. Thus the developer has to evaluate the algorithm against a set of testcases. As no extensive set of testcases is available, the system models have to be generated automatically. These models, however, need to resemble real-world systems typical of the targeted domain. Furthermore, they need to be sufficiently random in order not to bias the evaluation.
In this paper we address this issue and present SMFF, a framework for parameter-driven generation of models of distributed real-time systems. The generated models incorporate a description of the platform, of the software applications mapped onto it, and of the associated scheduling and timing parameters, thus covering the entire model specification.
As system models used for algorithm evaluation have to resemble real-world systems, the requirements on testcase systems may be highly domain- and problem-specific. The presented framework provides a high degree of modularity, allowing the user to extend the system model and to replace the algorithms for system-model generation, making the framework a universal tool for testcase generation. The algorithms presented in this paper are example implementations; they were developed for the evaluation of an algorithm that finds execution priorities in static-priority preemptively (SPP) scheduled systems under consideration of timing constraints.
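A minimal sketch of the kind of evaluation step such an algorithm needs: assign priorities to an SPP-scheduled task set (here with the simple deadline-monotonic rule, chosen only for illustration) and check schedulability with response-time analysis against the deadlines. All task parameters are invented.

```python
def wcrt(c, higher_prio, bound):
    """Response-time fixed-point iteration, aborted once it exceeds bound."""
    r = c
    while True:
        # -(-r // t) is integer ceil(r / t)
        r_new = c + sum(-(-r // t) * wc for wc, t in higher_prio)
        if r_new == r or r_new > bound:
            return r_new
        r = r_new

def schedulable_dm(tasks):
    """tasks: list of (wcet, period, deadline) with deadline <= period.
    Deadline-monotonic: shorter deadline -> higher priority."""
    by_prio = sorted(tasks, key=lambda t: t[2])
    for i, (c, _, d) in enumerate(by_prio):
        hp = [(cj, tj) for cj, tj, _ in by_prio[:i]]
        if wcrt(c, hp, d) > d:
            return False
    return True

print(schedulable_dm([(1, 4, 4), (2, 6, 6), (3, 12, 12)]))  # → True
```

A priority-assignment heuristic under evaluation would replace the deadline-monotonic rule, while the schedulability check serves as the acceptance test over a large set of generated models.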
The key features of the SMFF framework are:
The SMFF framework is not a simulation or benchmarking environment; it does not address the issues of simulation or performance monitoring. Rather, it provides models as input for such tools.