Getting started with PowerDEVS

In this guide we are going to learn how to use the basic PowerDEVS features to model a very simple representation of the TDAQ system for Phase II.

It assumes you have already installed and configured PowerDEVS correctly, read the PowerDEVS User Guide, and have a good background in DEVS theory.

We will first create and simulate our first model. Then we will add more models to complete the system. Afterwards, we will run heavier simulations with realistic parameters. For each step the process is the same: 1) model, 2) simulate, 3) analyse.


Before starting - What will we model?

Before starting to model and simulate, we always need to know:

  1. What do we want to model?
  2. Which questions do we want to answer?
1. As said before, we want to model the TDAQ system for Phase II.
2. We will focus on the StorageHandler component. Specifically, we want to answer: how much storage space will be required?

Below you can see a functional view of the TDAQ system, and a simple view of the StorageHandler component.

[Images: Selection_042.png, Selection_043.png, Selection_044.png]

We will focus on the StorageHandler component, which receives events from the DataHandler and stores them. The EventFilter then reads those events to decide which ones should be sent to permanent storage and which should be deleted.

This interaction between the DataHandler, the StorageHandler and the EventFilter is what we need to model in order to answer: how much storage space will be required in the StorageHandler to cope with X hours of running?

If we were interested in an approximate answer using only the average rates of the components, we could use an analytical model which executes in constant time. This is explained in another guide: How to model a simple continuous model in PowerDEVS.
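To give a flavour of that approach, here is a back-of-the-envelope estimate in Scilab (a sketch only; the variable names are ours, and the rates are the ones we will use later in Step 3 of this guide):

    // Analytical storage estimate: if events arrive at rate lambda_in and are
    // filtered at rate lambda_out, the backlog after t seconds is roughly
    // (lambda_in - lambda_out) * jobSize * t.
    lambda_in  = 400e3;            // DataHandler event rate [events/s] (Step 3 value)
    lambda_out = 0.8 * lambda_in;  // EventFilter processing rate [events/s]
    jobSize    = 1;                // event size (arbitrary units)
    t          = 3600;             // one hour of running
    backlog    = (lambda_in - lambda_out) * jobSize * t   // ~2.88e8 units queued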

Step 1 - First Simulation

Model - a job generator

Model setup

In this section we will add a single atomic model and run our first simulation.

  1. For this guide we will use the getting_started branch in the Git repository: git checkout getting_started
  2. Open PowerDEVS simulator (it will also open Scilab)
  3. File -> New, to create a new top model
  4. Add the following models (drag&drop from library):
    1. ScilabCommand (in the Sinks library). Name it LoadScilabParams.
    2. ExperimentTracker (in the Queueing Networks library). Name it ExperimentTracker.

      These models provide the infrastructure to read parameters, perform parameter sweeping, interact with Scilab, etc. One day a bright student will add this functionality to the core of PowerDEVS :-)

      [ADD a screenshot]


  5. File --> Save, to save the model. Save it under /examples/<yourUserName>/MyGettingStarted/GettingStarted.pdm when requested.
    NOTE: all top models should be stored under the /examples folder. It is recommended that each user creates their own folder for their custom models, with a folder for each model. They can be placed differently if desired.

    TODO: if you try to compile now it won't compile. This is because of include problems and has to be fixed soon.

Creating the model
  1. Add a new JobGenerator model (drag&drop from Queueing Networks library). Name it DataHandler.
    This model will be responsible for generating jobs, using a distribution for the rate and the jobSize. In our example, the JobGenerator model represents the DataHandler sending new Events (jobs).

    NOTE:
    All current models are ATOMIC models, which define behaviour (see DEVS theory). You can right-click on the model and click "Edit code", which will show you the path where the behaviour is implemented. If you want to change the behaviour of a library model (for example, if you want the generator to generate different objects), it is recommended to do it in different source files so that the library model behaviour is not modified (see step XXX, where we create new atomic model source files).

    NOTE FOR DEVELOPERS: If you are interested in how an atomic model is implemented, take a look at the source code. The generator atomic model is pretty simple and straightforward to understand (once you have read the DEVS theory). In the .h you will find the 6 methods which must be implemented for every atomic model: init, ta, dint, dext, lambda and exit (which map directly to the DEVS atomic model definition). In the .cpp you can find the implementation of these methods. You might be interested to see, in the init method, how parameters are read (using the readDistributionParameter call) and how variables are logged (using the logger object), and, in the lambda method, how events are generated. This model does not expect to receive input events, so the dext method is empty. It only has internal behaviour (generating events periodically), which you will find implemented in the dint method.

    [ADD a screenshot]

  2. Compile using the "Simulate" button in PowerDEVS [ADD icon]. This is to check that everything is correctly configured. If it is successful you will see the Simulation GUI. If you try to run the simulation now, it will fail (check the next bullet to understand why).
    When you hit the "Simulate" button in PowerDEVS, it first reads the top model structure and generates a model.h and a makefile.include file accordingly. Then it compiles the simulator with this model to generate an executable simulation model, which is placed in /output/model.
    NOTE: this implies that if you change the structure of the model, you should recompile before executing.

    [ADD a screenshot]
  3. Before running the simulation, we must take care of the parameters.
    There are several ways of setting the parameters of PowerDEVS atomic models. The easiest and most straightforward way is to double-click the atomic model and set the values for each of its parameters. Although this seems easy, it does not scale if you have several atomic models (imagine double-clicking 100 models to see the parameters!). So we will see here how to specify parameters in a separate Scilab file (hopefully a good student will make these steps automatic in the future):

    1. In the folder where you saved the top model (e.g. /examples/<yourUserName>/MyGettingStarted/) create a new folder called Scilab
    2. In the new Scilab folder create a new file called model.scilabParams.
    3. Now we need to configure things so that this file is loaded by Scilab. Double-click the LoadScilabParams model, and in the "Run at Init" parameter put the following: exec('../examples/<yourUserName>/MyGettingStarted/Scilab/model.scilabParams', 0). This will make the file you created before be loaded in Scilab when the simulation first starts.
    4. Click the Priority button in the PowerDEVS GUI. This menu allows you to specify the Z function (which decides the priority of execution of each model). Make sure that LoadScilabParams, ExperimentTracker and UpdateScilabParams are the first 3 models. FinalizationCommands should always be the very last model. (For this guide you will probably have to move only the FinalizationCommands model, but whenever you add a new model, remember to check this.)
    5. This file will be loaded by Scilab, so you can put any valid Scilab expression here (constants, variables, math expressions, even complex functions). For this example, let's declare some variables in the model.scilabParams file; the names should give you an idea of what they will be used for (a sketch of the file is shown right after this list).
      NOTE: pay special attention to the DataHandler.period and DataHandler.jobSize parameters. These parameters specify a distribution (Exponential and Constant respectively) for the period at which jobs are generated and for the size of the jobs.
    6. Now we should tell the DataHandler model which variables to read its parameters from. Double-click the DataHandler and complete each parameter as follows: 1) DataHandler.startTime, 2) DataHandler.period, 3) DataHandler.jobSize, 4) DataHandler.stopThreshold

      [ADD a screenshot]
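Here is a minimal sketch of what model.scilabParams could look like. The numeric values match the ones used in the rest of Step 1 (10 Hz exponential generation, constant job weight 5); the stopThreshold semantics are an assumption, and the exact encoding of the distribution-type parameters is left to the library:

    // model.scilabParams -- loaded into Scilab by LoadScilabParams at startup
    DataHandler.startTime = 0;        // simulation time [s] at which generation starts
    DataHandler.period_mu = 1/10;     // mean inter-generation time: one job every 0.1 s
    DataHandler.jobSize_value = 5;    // constant job weight
    DataHandler.stopThreshold = -1;   // assumption: -1 = never stop generating
    // DataHandler.period and DataHandler.jobSize select the distribution types
    // (Exponential and Constant); their exact encoding follows the Queueing
    // Networks library conventions and is not reproduced here.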

Simulate

  1. Now we are ready to simulate. Click the "Simulate" button [ADD icon] and you will see the Simulation GUI.
  2. In Final Time choose how many seconds you want to simulate (this is virtual simulation time, not real wall-clock time). We are only generating jobs, so let's simulate one hour = 3600 seconds.
  3. Click Run Simulation. You will see the bar below showing progress. For this small simulation it will probably finish really fast. When the simulation finishes, the bar below will show "Simulation Completed xxxx ms" with the real wall-clock time the simulation took to finish (mine took 900ms).
CONGRATS, you have just run your first simulation!!... but wait, where are the results?!

Analysing simulation results in Scilab

There are many ways to see and analyse PowerDEVS simulation results. You can use Python, CSV, Scilab, Matlab, etc. Usually we use R as it has several interesting functions and plots, and sometimes we upload plots to plot.ly. When we need to debug or see the behaviour of models in great detail we use the debug log, which PowerDEVS writes to output/pdevs.log.

Here we will see how to use Scilab, which is pretty straightforward as the variables are written there directly. The simulation you just executed is really simple, just a generator model creating jobs. But let's verify that the generator worked as expected: jobs should be generated with an exponential distribution of mean 1/10, and the jobSize should be constant.

  1. Switch to the Scilab GUI. You will notice that in the main console there are some messages about the simulation starting and finishing.
  2. Type who in the main console and all the available variables will be listed. The top right panel (Variable Browser) will also update, showing the variables.
  3. You will see there the parameters you defined in the model.scilabParams file, but you will also see some variables generated by the DataHandler atomic model: DataHandler_jobWeight, DataHandler_intergen, DataHandler_count, DataHandler_t
    You can see in the "Value" column that these variables are arrays (1x36012 in my case). You can also type the following to see the length of any array: length(DataHandler_count)
  4. Each item in the arrays represents the value of the variable (for example count) at a certain point in time. The simulation time at which each value was recorded is in the DataHandler_t variable. So, if we plot 'count' vs 't', we will see how the count variable progressed through time (in the generator model, the count variable counts the number of jobs generated). Type the following in the Scilab console: scf(); plot(DataHandler_t, DataHandler_count, ".")

    Selection_055.png

    This graph shows time on the X axis (DataHandler_t) and the value of DataHandler_count on the Y axis. It created ~36000 jobs in 3600 seconds, which makes sense as we specified in the parameters that we wanted to generate jobs with an average period of 1/10 s (see parameter DataHandler.period_mu).

  5. We also specified that we wanted the time at which jobs are generated to follow an exponential distribution (see parameter DataHandler.period). Did it work as expected?
    The intergen variable stores the time elapsed between generated jobs; let's plot a histogram of that variable to see the distribution of the inter-generation times (a sketch overlaying the theoretical density is shown after this list). In the Scilab console type: scf(); histplot(100, DataHandler_intergen, normalization=%f);

    Selection_056.png

    As you can see, the times between generated jobs follow an exponential distribution.
    We verified before that the mean was ~0.1 (36000 jobs in 3600s), but you can also check it precisely with Scilab: mean(DataHandler_intergen) (in my simulation that returns 0.0999660).

  6. What about the size of the generated jobs? It should be constant.
    The stored variable jobWeight records the weight of each generated job. Because it should be constant, it is enough to check that: max(DataHandler_jobWeight) == min(DataHandler_jobWeight), and max(DataHandler_jobWeight) == 5
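As a quick extra check (a sketch; it assumes the Step 1 parameters), you can overlay the theoretical exponential density on a normalized histogram of the inter-generation times:

    // Normalized histogram of inter-generation times plus the
    // theoretical Exponential density with mean mu = 0.1 s:
    scf(); histplot(100, DataHandler_intergen);   // normalization defaults to %t
    x  = linspace(0, max(DataHandler_intergen), 200);
    mu = 0.1;                                     // mean inter-generation time
    plot(x, exp(-x/mu)/mu, "r");                  // pdf f(x) = (1/mu)*exp(-x/mu)

The histogram bars should closely follow the red curve.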
OK, we have simulated 3600s in less than a second. But we are only generating jobs!! What's next?

Step 2 - Complete model

Here we will complete the model, adding a queue (also called a buffer in networking) to represent the StorageHandler which stores events, and a processor to represent the EventFilter which will process the stored events. We will set the parameters to adjust the model to the DAQ rate expectations.

Before continuing, it is good to stop and reflect on the model we are creating: when you model any system you don't model it completely, you make simplifications. We call these simplifications "abstractions", as you don't put into the model every single aspect of the real system, only those things you want to focus on. Formally in DEVS this is called the Experimental Frame.
For the model we are creating in this guide, the 3 components we are representing are complex systems. But in order to answer our question we don't need all the details. Moreover, to get an approximate answer we need even fewer details (which also keeps this getting started easy ;)). Later, when more accurate predictions are needed or new questions have to be answered, we can fine-tune our models as needed. But it is always important to know what we are losing with these abstractions, so here are some example assumptions we will make:

  • The StorageHandler can absorb all the information that it receives and that is requested from it.
    This is clearly not true: the StorageHandler will probably be implemented with lots of hard drives, each with a limited throughput. How many disks will be required? How will they be connected? Which RAID technology will be used?
    Those are not the questions we want to deal with right now.
  • Transferring data from one component to another is instantaneous.
    This is also not true in the real world: data will probably be transferred through a network which has delay and latency, and which can lose some information and retransmit it. Different technologies could be used to implement the network.
    These things will affect our results, but for this guide it will be good enough to get approximate results.

Model - add storage and processing

So, back to work:

  1. Switch back to PowerDEVS interface, to the model you were creating before.
  2. Add a new Queue model (drag&drop from library). Name it StorageHandler.
    Notice that the Queue model has 2 input ports and 2 output ports. The first input port receives new jobs, which are queued if there is enough space. The second input port is used to signal a request to dequeue the next job (for example, when the server is ready to process). The first output port is used to send the jobs. The second output port is used to signal the status of the queue, generally used by samplers when you want to reduce the amount of logged variables (we won't use it at this moment).
    NOTE: You can find information about atomic models by double-clicking on the model. You will find information about each of the ports and the internal behaviour.


    [Add screenshot]

  3. By default the queue is parametrized to have infinite buffer space. That is OK in our case, but we will change it to get the parameter from the configuration file:
    1. Double-click the StorageHandler model, and set the MaxCapacity parameter to: StorageHandler.maxCapacity
    2. In the model.scilabParams file, add the following line: StorageHandler.maxCapacity = -1; // Max capacity of the StorageHandler (-1 = infinite)
  4. Now let's connect both models. We want jobs generated by the DataHandler to be queued in the StorageHandler.
    Press on the output port of the DataHandler and, without releasing, drag it to the first input port of the StorageHandler.

    NOTE: At this point you could run a simulation to verify that generated jobs are queued. You can do so using the StorageHandler_qsize variable which the Queue model stores in Scilab. If you plot StorageHandler_qsize it should be exactly the same as DataHandler_count, as all generated jobs are queued and remain there. If you want to play around, you can set a maximum capacity on the queue and verify that the queue size stays at that limit; also check StorageHandler_discards to see the discarded jobs (see the sketch at the end of this section).

    [Add screenshot]

  5. Add a new ProcessorSharing model (drag&drop from library). Name it EventFilter.
    The ProcessorSharing model has 1 input port and 2 output ports. The input port receives incoming jobs to be processed. The first output port sends jobs once they have finished processing. The second output port sends rejected jobs (when the maximum capacity is exceeded). The model processes up to a maximum of maxJobs jobs in parallel, sharing the processing capacity (servicePower) equally among all jobs. That is, if capacity=1 and a job with weight=1 arrives, it is processed after 1 second. If 2 jobs with weight=1 arrive simultaneously, they will both finish processing after 2 seconds (see the timing sketch below).

    [Add screenshot]

  6. Let's set the parameters for the ProcessorSharing model:
    1. Double-click the EventFilter model, and set: 1) the ServicePower parameter to EventFilter.servicePower, and 2) MaxJobs to EventFilter.maxJobs
    2. In the model.scilabParams file, add the following lines:
      EventFilter.servicePower = 25; // Processing Power
      EventFilter.maxJobs = 1; // Max amount of jobs
NOTE: Please note that here we are configuring the EventFilter to process events at 5Hz (because all jobs have weight 5 and we are setting the service power to 25, and 25/5 = 5 events per second). The DataHandler was set to generate events at 10Hz.
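To make the processor-sharing timing concrete, here is a tiny Scilab sketch (ours, not a library file) of the completion-time rule: n simultaneous jobs of weight w sharing servicePower C each receive C/n, so all n finish after n*w/C seconds:

    // Completion time under processor sharing:
    C = 1;  w = 1;                 // service power and job weight
    for n = [1 2 4]
        mprintf("n=%d jobs -> all finish after %g s\n", n, n*w/C);
    end

With our parameters (w=5, C=25), a single job finishes after 5/25 = 0.2 s, i.e. 5 jobs per second.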

  1. Let's connect the StorageHandler so that jobs are sent to the EventFilter, and so that the EventFilter requests a new job when it finishes processing.
    Press on the first output port of the StorageHandler and, without releasing, drag it to the input port of the EventFilter. Then, press on the first output port of the EventFilter and, without releasing, drag it to the second input port of the StorageHandler.

    Selection_057.png


  2. Remember to update the priorities now that you have added new models. Press the Priority button and send the FinalizationCommands model to the very end.
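Here is the optional verification sketch referenced in the NOTE earlier in this section (the capacity value is hypothetical):

    // In model.scilabParams, temporarily set a finite buffer:
    // StorageHandler.maxCapacity = 1000;
    // Then, after recompiling and simulating, in the Scilab console:
    scf(); plot(StorageHandler_t, StorageHandler_qsize, ".");  // should flatten at 1000
    max(StorageHandler_qsize)                                  // should equal the capacity

You can also inspect StorageHandler_discards in the Variable Browser to see the discarded jobs.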

Simulating the complete model

  1. Click the "Simulate" button [ADD icon] and you will see the Simulation GUI.
  2. In Final Time let's simulate one hour = 3600 seconds.
  3. Click Run Simulation.

Analysing the complete model

Let's verify our simulation. We have now added the EventFilter, which consumes jobs from the queue at half the speed at which the generator creates them. That means the queue size should increase at 5Hz.
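Before looking at the plots, we can write down what we expect to see (a sketch using the Step 1 and 2 parameters):

    // Jobs arrive at 10 Hz and leave at 5 Hz, so the backlog grows at ~5 jobs/s:
    t_end = 3600;                       // simulated time [s]
    expected_qsize = (10 - 5) * t_end   // ~18000 queued jobs after one hour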

  1. Switch to the Scilab GUI.
  2. Plot the queue size: scf(); plot(StorageHandler_t, StorageHandler_qsize, ".")

    Selection_058.png

    As you can see in this plot, after 3600s there are ~18,000 queued jobs. You can also see that the queue size increases linearly with time.
  3. We can also add the generated jobs to the plot, so that we can compare queued jobs vs generated jobs: plot(DataHandler_t, DataHandler_count, ".red")

    Selection_059.png



    Please note that each Scilab variable stores the value of the simulation variable each time it changes. That means that every time a variable changes, 2 values are logged: the time and the new value. If you look at the variables we are plotting (StorageHandler_t for example), they have ~54000 values. That is OK right now, but if we set the simulation with higher rates, it might become too slow to log everything. We will deal with this in the next section.

Step 3 - Running with realistic parameters

We have our model complete and have verified that it behaves as expected. Let's now set the parameters to represent the TDAQ system. We will also need to be smarter about the data we are logging (currently everything), as TDAQ will generate events at 1MHz, so we will add Samplers to log only what we need.

Model - Update parameters and add samplers

  1. Let's set realistic parameters: generate events at 400KHz and set the EventFilter to process 80% of that rate.
    1. In the model.scilabParams file, change the parameters:
      DataHandler.period_mu = 1/(400 * 1000);
      DataHandler.jobSize_value = 1;
      EventFilter.servicePower = 0.8 * DataHandler.jobSize_value * (1/DataHandler.period_mu); // = 320000: the EventFilter drains 320KHz while 400KHz arrive
  2. You can simulate now, BUT generating events at 400KHz the simulation will log approximately 100MB for every simulated second (you would need 360GB to store the results of one hour of simulation). Besides the storage needed to log the results, it would also make the simulation really slow because of the necessary I/O operations (see the estimate after this list). You can try simulating for 0.2s and see how slow it becomes.
  3. We will reduce the amount of variables we are logging. For this, we will need to change the code (but soon some enthusiast will make the logging configurable):
    1. Before making changes to library models, let's make a copy into our model. Copy the following files into a new folder atomics/<yourUserName>/MyGettingStarted/: 1) atomics/queueingtheory/jobgenerator.cpp (and .h), 2) atomics/queueingtheory/jobqueue.cpp (and .h), 3) atomics/queueingtheory/procesorsharingserver.cpp (and .h)
    2. In the PowerDEVS GUI do the following for the 3 models (DataHandler, StorageHandler and EventFilter): Right-Click --> Edit --> Code (tab). In the left panel (Folders), choose the folder where you copied the files. In the right panel (Files), choose the .h file that has the behaviour of the model (e.g. for the EventFilter, choose ProcesorSharingServer.h)
    3. Open the atomics/<yourUserName>/MyGettingStarted/jobgenerator.cpp file
    4. Find the line where the logger is created (line 36) and change the logging level from SCILAB_LOG_LEVEL_ALWAYS to SCILAB_LOG_LEVEL_NORMAL: this->logger = createDefaultLogger(SCILAB_LOG_LEVEL_NORMAL, this->getName());
    5. Repeat steps 3 and 4 for the other models, i.e. for your copies of jobqueue.cpp and procesorsharingserver.cpp
  4. Let's now add a sampler for the things we do want to log. Add a QueueSampler (from the Queueing Networks library) and name it SHSampler.
    NOTE: there are several samplers depending on what you want to record. For example, if you want to record the amount of processed jobs, you can use a CounterSampler and connect it to the output of the EventFilter; similarly to count the generated jobs.
  5. Double-click the SHSampler model, and set the SamplerPeriod parameter to SHSampler.rate
  6. In the model.scilabParams file, add the following line to sample every half second: SHSampler.rate = 0.5;
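To see why sampling matters, here is a rough estimate of the logging volume without it (an order-of-magnitude sketch; the per-model variable count is an assumption based on what we saw in Scilab in Step 1):

    // Each logged change stores a (time, value) pair of 8-byte doubles.
    rate   = 400e3;   // events per simulated second
    models = 3;       // DataHandler, StorageHandler, EventFilter
    vars   = 4;       // logged variables per model (assumed)
    bytes_per_sim_s = rate * models * vars * 2 * 8   // ~150 MB per simulated second

This is the same order of magnitude as the ~100MB/s mentioned above; with the logging level lowered and a sampler firing every 0.5s, we log only a handful of values per simulated second instead.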

Simulate

We will start by simulating 10s, so that the simulation finishes quickly. The simulation will no longer log as much, but it will still generate all the 400KHz events, so it will be much slower than before.

  1. Click the "Simulate" button [ADD icon] and you will see the Simulation GUI.
  2. In Final Time let's simulate 10 seconds.
  3. Click Run Simulation. On my CPU this simulation took approximately 30 seconds.
You can let it run longer if you want to. You can also run it from the command line (from the <powerdevs>/output directory): ./model -tf 10

Analyse

  1. Switch to Scilab
  2. Plot the average queue length over time: scf(); plot(SHSampler_t, SHSampler_timeAvg, ".-")



    NOTE: as you can see, now there are only a few points (one every 0.5s). That is because we are now plotting the sampled values of the variable. You can also see the maximum (or minimum) value of the queue in each sampled period using SHSampler_max (or SHSampler_min).
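As a sanity check (a sketch; it assumes the Step 3 parameters are still loaded in Scilab), the queue backlog should grow at roughly 20% of the input rate:

    lambda_in = 1/DataHandler.period_mu;   // 400 kHz input rate
    lambda_in * (1 - 0.8) * 10             // ~8e5 queued jobs expected after the 10 s run
    SHSampler_timeAvg($)                   // roughly comparable last sample ($ = last index)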

Next Steps

We have our complete model running with realistic parameters.

What's next?

  • Sweeping parameter values:
    What if we want to know the behaviour for different rates of the EventFilter? We can change the EventFilter.servicePower parameter, simulate, then set another value and simulate again. But if we want to check several values, it is not easy to do manually.
    In the next guides, we will show how to sweep parameter values, store results on disk and analyse the results in RStudio.

  • Programming the behaviour of atomic models in C++:
    Up until now you have modelled things with the pre-built library models. But what if you need a model that is not in the library (the most usual case when modelling complex systems)?
    In the How to program an atomic model in PowerDEVS guide we will show how to change the model we developed here, so that the EventFilter can process multiple events in parallel. We will implement a new DEVS atomic model with this new behaviour.

  • Continuous (fluid) model (here):
    The way we modelled the TDAQ system here is really simple. At this level of abstraction we could get the same results using a continuous model, with the advantage that the simulation finishes in seconds even for higher rates (because it does not generate a DEVS event for every job).
    In the next guide, we will show how to develop this same model using continuous equations.

-- MatiasAlejandroBonaventura - 2016-04-13
