On the Performance Overhead of BPMN Modeling Practices

Ana Ivanchikj, Vincenzo Ferme, Cesare Pautasso

15th International Conference on Business Process Management (BPM2017), Barcelona, Spain, pp. 216-232

September 2017

Abstract

Business process models can serve different purposes, from discussion and analysis among stakeholders, to simulation and execution. While work has been done on deriving modeling guidelines to improve understandability, it remains to be determined how different modeling practices impact the execution of the models. In this paper we observe how semantically equivalent, but syntactically different, models behave in order to assess the performance impact of different modeling practices. To do so, we propose a methodology for systematically deriving semantically equivalent models by applying a set of model transformation rules and for precisely measuring their execution performance. We apply the methodology on three scenarios to systematically explore the performance variability of 16 different versions of parallel, exclusive, and inclusive control flows. Our experiments with two open-source business process management systems measure the execution duration of each model's instances. The results reveal statistically different execution performance when applying different modeling practices without total ordering of performance ranks.

Download

DOI: 10.1007/978-3-319-65000-5_13

PDF: benchflow-bpm2017.pdf (395KB)

Citation

Bibtex

@inproceedings{benchflow:2017:bpm,
	author = {Ana Ivanchikj and Vincenzo Ferme and Cesare Pautasso},
	title = {On the Performance Overhead of BPMN Modeling Practices},
	booktitle = {15th International Conference on Business Process Management (BPM2017)},
	year = {2017},
	month = {September},
	pages = {216--232},
	publisher = {Springer},
	doi = {10.1007/978-3-319-65000-5_13},
	address = {Barcelona, Spain},
	abstract = {Business process models can serve different purposes, from discussion and analysis among stakeholders, to simulation and execution. 
While work has been done on deriving modeling guidelines to improve understandability, it remains to be determined how different modeling practices impact the execution of the models. 
In this paper we observe how semantically equivalent, but syntactically different, models behave in order to assess the performance impact of different modeling practices. 
To do so, we propose a methodology for systematically deriving semantically equivalent models by applying a set of model transformation rules and for precisely measuring their execution performance. 
We apply the methodology on three scenarios to systematically explore the performance variability of 16 different versions of parallel, exclusive, and inclusive control flows. 
Our experiments with two open-source business process management systems measure the execution duration of each model's instances. 
The results reveal statistically different execution performance when applying different modeling practices without total ordering of performance ranks.
},
	keywords = {BenchFlow, BPMN, performance}
}