We review the state of the art of process modeling for discrete event simulation, make several observations, and identify issues that must be tackled to promote the use of process modeling in simulation. Process models are of particular interest in model-based simulation engineering approaches, where the executable simulation model (code) is obtained with the help of textual or visual models. We present an illustrative example of model-based simulation development.
This paper documents an evaluation of general-purpose discrete event simulation tools suitable for process design (e.g., in the manufacturing or services industries). Rather than making specific judgments of the tools, the authors measured their intensity of usage or presence in different sources, which they call "popularity". This was measured in several ways, including occurrences on the WWW and in scientific publications mentioning the tool name and vendor name. This work updates the same study issued 5 years earlier (2011), which in turn updated the original study from 10 years earlier (2006). Obviously, greater popularity does not guarantee higher quality or better fitness for the purpose of a simulation tool; however, a positive correlation may exist between them. The result of this work is a shortlist of 19 commercial simulation tools that are probably among the most relevant today.
Queueing systems in many domains often exhibit correlated arrivals that considerably influence system behavior. Unfortunately, the vast majority of simulation modeling applications and programming languages do not provide the means to properly model the corresponding input processes. In order to obtain valid models, there is a substantial need for tools capable of modeling autocorrelated input processes. Accordingly, this paper provides a review of available tools to fit and model these processes. In addition to a brief theoretical discussion of the approaches, we provide a tool evaluation from a practitioner's perspective. The assessment of the tools is based on their ability to model input processes that are either fitted to a trace or defined explicitly by their characteristics, i.e., the marginal distribution and autocorrelation coefficients. In our experiments we found that tools relying on autoregressive models performed the best.
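The autoregressive approach the abstract favors can be illustrated independently of any specific tool. The following is a minimal sketch (not from the paper) of an ARTA-style construction: a Gaussian AR(1) process is pushed through the normal CDF and then the exponential quantile function, yielding exponential variates with positive lag-1 autocorrelation; the achieved autocorrelation is somewhat below the driving `rho` because of the nonlinear transform.

```python
import math
import random

def correlated_exponentials(n, mean=1.0, rho=0.7, seed=0):
    """Generate n exponential(mean) variates with positive lag-1
    autocorrelation by driving a Gaussian AR(1) process and mapping it
    through the N(0,1) CDF to the exponential quantile function
    (a simplified ARTA-style construction; the achieved autocorrelation
    is attenuated relative to rho by the nonlinear transform)."""
    rng = random.Random(seed)
    z = rng.gauss(0, 1)
    innov_sd = math.sqrt(1 - rho * rho)  # keeps z marginally N(0,1)
    out = []
    for _ in range(n):
        u = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # N(0,1) CDF -> uniform
        u = min(max(u, 1e-12), 1 - 1e-12)           # guard log(0)
        out.append(-mean * math.log(1 - u))         # exponential quantile
        z = rho * z + innov_sd * rng.gauss(0, 1)    # AR(1) update
    return out
```

Feeding such a sequence in as interarrival times, instead of i.i.d. exponentials, is what distinguishes a correlated-input model from the default offered by most simulation packages.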
In today’s rapidly changing technological scenario, tech giants revise their strategic alignment every couple of years. As a result, their workforce has to be adapted to the organization’s strategy. Members of the workforce who are neither relevant to the strategic alignment, nor can be made relevant by reskilling, have to be either outplaced (i.e., placed in another job within the organization) or separated from the organization. In geographies like Europe, where the cost of separation is very high, it becomes very important to make the right decision for each employee. In this paper, we describe a simulation-based methodology to find the probability and time of outplacement of an employee. These numbers are inputs to the global problem of making the optimal decision for the entire workforce.
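The core estimation task (probability and time of outplacement) can be sketched as a Monte Carlo simulation. The model below is purely illustrative and not the paper's methodology: it assumes each week the employee matches an internal opening with a fixed probability `match_prob`, and estimates the probability of outplacement within a horizon along with the mean time when it occurs.

```python
import random

def outplacement_mc(match_prob=0.08, horizon_weeks=52, reps=10_000, seed=1):
    """Hypothetical sketch: each week the employee matches an internal
    opening with probability match_prob (a geometric-trials model).
    Returns (probability of outplacement within the horizon,
    mean outplacement time in weeks, conditional on it occurring)."""
    rng = random.Random(seed)
    times = []
    for _ in range(reps):
        for week in range(1, horizon_weeks + 1):
            if rng.random() < match_prob:
                times.append(week)
                break  # outplaced; next replication
    p_out = len(times) / reps
    mean_t = sum(times) / len(times) if times else float("nan")
    return p_out, mean_t
```

In a realistic setting `match_prob` would vary by skill profile and open-position flow, which is exactly what a richer simulation model would supply; these two outputs then feed the workforce-level optimization the abstract mentions.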
Simulation is commonly used for decision-making on the design and operation of manufacturing (Negahban and Smith 2014), healthcare (Mielczarek and Uzialko-Mydlikowska 2012), and military (Naseer, Eldabi, and Jahangirian 2009) systems as well as in supply chain management (Terzi and Cavalieri 2004), marketing (Negahban and Yilmaz 2014), and social sciences (Axelrod 1997). The work-in-progress (WIP) in a production line, the number of patients waiting for treatment at an emergency department (ED), the space utilization of a distribution center, and the future sales/demand for a new technology are examples of typical performance measures estimated or predicted through simulation. Due to the stochastic nature of the different components of such dynamic systems, many of the inputs of a simulation model are random (e.g., stochastic processing times on machines in a production line, patient arrivals into an ED, the number of SKUs in an order received by a warehouse, or consumers’ purchasing behavior and word-of-mouth after the launch of a new product). As a result, the outputs (performance measures) are also random variables. This makes assessing the level of error in the predictions of the simulation model, and the level of uncertainty in the possible values (i.e., distribution) of the measure(s) of interest, critical for effective decision-making.
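The point that random inputs make the output a random variable, requiring an explicit error assessment, can be demonstrated with a toy single-server queue (my example, not from the abstract): independent replications give a distribution over the estimator, and a t-based confidence interval quantifies the estimation error.

```python
import random
import statistics

def lindley_waits(n_customers, arrival_rate=0.5, service_rate=1.0, seed=None):
    """Average waiting time in a single-server queue via the Lindley
    recursion W[k+1] = max(0, W[k] + S[k] - A[k+1]); for these rates the
    theoretical M/M/1 mean wait is rho / (mu - lambda) = 1.0."""
    rng = random.Random(seed)
    w = 0.0
    total = 0.0
    for _ in range(n_customers):
        a = rng.expovariate(arrival_rate)  # random interarrival time
        s = rng.expovariate(service_rate)  # random service time
        w = max(0.0, w + s - a)            # wait of the next customer
        total += w
    return total / n_customers

# Each replication yields one realization of the (random) estimator.
reps = [lindley_waits(5000, seed=i) for i in range(20)]
mean = statistics.mean(reps)
half = 2.093 * statistics.stdev(reps) / 20 ** 0.5  # t(0.975, df=19)
print(f"mean wait: {mean:.2f} +/- {half:.2f}")
```

Reporting the half-width alongside the point estimate, rather than the point estimate alone, is exactly the uncertainty assessment the abstract calls critical for decision-making.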
As high-performance computing resources have become increasingly available, new modes of computational processing and experimentation have become possible. This tutorial presents the Extreme-scale Model Exploration with Swift/T (EMEWS) framework for combining existing capabilities for model exploration approaches (e.g., model calibration, metaheuristics, data assimilation) and simulations (or any “black box” application code) with the Swift/T parallel scripting language to run scientific workflows on a variety of computing resources, from desktops to academic clusters to Top 500-level supercomputers. We will present a number of use cases, starting with a simple agent-based model parameter sweep and ending with a complex adaptive parameter space exploration workflow coordinating ensembles of distributed simulations. The use cases are published on a public repository for interested parties to download and run on their own.
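The simplest of the use cases mentioned, a parameter sweep, follows a pattern that can be sketched without Swift/T at all: build a Cartesian grid of parameter points and farm each simulation run out to a worker. The sketch below uses Python's standard library; the function and parameter names (`run_model`, `infectivity`, `recovery`) are purely illustrative stand-ins for a black-box simulation, and in EMEWS the distribution across cluster nodes is handled by Swift/T instead.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_model(params):
    """Stand-in for a black-box simulation run; returns a toy metric.
    A CPU-bound model would use processes or a cluster scheduler."""
    infectivity, recovery = params
    return round(infectivity / (infectivity + recovery), 3)

# Cartesian parameter grid, farmed out to concurrent workers.
grid = list(product([0.1, 0.2, 0.3], [0.05, 0.1]))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(zip(grid, pool.map(run_model, grid)))
```

The adaptive workflows the tutorial ends with replace the fixed `grid` with points proposed iteratively by a model exploration algorithm (calibration, metaheuristics) based on earlier results.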
Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required by other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes ABMs ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.
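The bottom-up structure described here, self-contained agents with local state and a behavior rule, stepping within a shared environment, can be shown in a minimal sketch. This toy model (my example, not the tutorial's war game) has agents walking on a bounded line and stepping away when a neighbor crowds their cell; all system-level pattern emerges from that local rule.

```python
import random

class Agent:
    """A self-contained agent: local state plus an autonomous behavior rule."""

    def __init__(self, ident, pos, rng):
        self.ident, self.pos, self.rng = ident, pos, rng

    def step(self, world_size, neighbors):
        # Default behavior: drift randomly; if another agent shares this
        # cell, force a move (a simple local crowding rule).
        move = self.rng.choice([-1, 0, 1])
        if any(n.pos == self.pos for n in neighbors):
            move = self.rng.choice([-1, 1])
        self.pos = max(0, min(world_size - 1, self.pos + move))

def run(num_agents=10, world_size=20, steps=50, seed=42):
    """Advance all agents for a fixed number of steps and return them."""
    rng = random.Random(seed)
    agents = [Agent(i, rng.randrange(world_size), rng) for i in range(num_agents)]
    for _ in range(steps):
        for a in agents:
            others = [b for b in agents if b is not a]
            a.step(world_size, others)
    return agents
```

Replacing the crowding rule with pursuit, firing, and attrition rules is essentially how a basic war-gaming ABM of the kind the tutorial illustrates would be built on the same skeleton.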
Alcohol misuse is a complex systemic problem. The aim of this study was to explore the feasibility of using a transparent and participatory agent-based modelling approach to develop a robust decision support tool to test alcohol policy scenarios before they are implemented in the real world. Methods: A consortium of Australia’s leading alcohol experts was engaged to collaboratively develop an agent-based model of alcohol consumption behaviour and related harms. As a case study, four policy scenarios were examined.