Analyzing Our Company

A model of our team is not very useful unless we can execute operations on it to produce meaningful results. In OpenMETA, Test Benches are used to transform a model into new representations, generate simulation code, and calculate metrics.

For this exercise, we would like to calculate some meaningful metrics about our team.

Creating a Test Bench

Let’s add a Test Bench to our project.

  1. Right-click on the RootFolder in the GME Browser and choose Insert Folder ‣ Testing.
  2. Right-click on the resulting Testing folder and choose Insert Model ‣ Test Bench.
  3. Rename this new Test Bench AnalyzeMyTeam.

The GME Browser should now look like this:

../../_images/analyze_my_team_added.png

Adding Our Team

Let’s specify the MyTeam Component Assembly as the System Under Test.

  1. Double-click the AnalyzeMyTeam Test Bench from the GME Browser to open it in the canvas window.
  2. Right-click and drag the MyTeam Component Assembly from the GME Browser onto the canvas.
  3. Select Create reference.
  4. Select TopLevelSystemUnderTest and click OK.
../../_images/tlsut.png

Adding the Workflow

Workflows define the steps that are carried out when a Test Bench is executed. We will create a workflow that defines what our Test Bench should do, then add the workflow to the Test Bench.

  1. In the GME Browser, right-click on the Testing folder and choose Insert Folder ‣ Workflow Definitions.

  2. Right-click on the resulting WorkflowDefinitions folder and choose Insert Model ‣ Workflow.

  3. Rename the workflow ValueAggregator.

  4. Double-click the workflow to open it in the canvas window.

  5. Drag a Task from the Parts Browser onto the canvas.

  6. Select the CyPhyPython Interpreter.

    ../../_images/cyphypython_interpreter.png
  7. Double-click the resulting Task.

  8. In the dialog that appears, enter bin/ValueAggregator.py in the script_file field. (A sketch of what a script like this can look like follows below.)

    ../../_images/workflow_parameters.png

Your workflow should now look something like this:

../../_images/value_aggregator_workflow.png
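For context, the CyPhyPython interpreter simply runs the Python script named in script_file against the model, and ValueAggregator.py ships with the OpenMETA tools, so there is nothing for us to write here. Still, a minimal sketch of what such a script can look like may help. The invoke() entry point and its arguments below are assumptions modeled on common CyPhyPython conventions, not code this tutorial depends on:

    # Hypothetical CyPhyPython script skeleton. The invoke() entry
    # point and its arguments are assumptions; the bundled
    # bin/ValueAggregator.py implements the real logic.
    def invoke(focusObject, rootObject, componentParameters, **kwargs):
        # focusObject: the element the interpreter was invoked on
        #   (here, the AnalyzeMyTeam Test Bench).
        # componentParameters: the workflow's parameters, such as
        #   output locations.
        # A real script would walk the Test Bench's Properties and
        # Metrics here and record its results.
        pass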

Let’s finish by adding this workflow to our Test Bench.

  1. Open the Test Bench in the canvas by locating and double-clicking it in the GME Browser.
  2. Right-click and drag the new ValueAggregator Workflow onto the canvas and choose Create reference.

Adding Our Queries

The Value Aggregator that we added in the previous steps is a script included with the OpenMETA tools that lets us easily query and extract values from the model we've constructed. We will use it to calculate the team's total salary and its members' average tenure with our company.

The Value Aggregator script visits each Property in the Test Bench and performs the query defined in that Property's Description attribute; the queries we will use take the form attribute,filter,operation, where the filter * matches every component. It then assigns a value to each Metric based on the equation defined in the Metric's Description attribute. We will create two queries and two metrics. (A sketch of this process appears at the end of this section.)

  1. From the Parts Browser drag a Property into the AnalyzeMyTeam canvas.

  2. Rename the Property SalarySum.

  3. In the Object Inspector, enter Salary,*,SUM into the Description attribute.

    ../../_images/salary_sum_description.png
  4. Create a second Property named YearsEmployedAverage and enter YearsEmployed,*,AVERAGE into the Description attribute.

  5. From the Parts Browser drag a Metric into the AnalyzeMyTeam canvas.

  6. Rename the Metric TeamSalary.

  7. In the Object Inspector, enter TeamSalary = SalarySum into the Description attribute.

    ../../_images/team_salary_description.png
  8. Create a second Metric named AverageTeamTenure and enter AverageTeamTenure = YearsEmployedAverage into the Description attribute.

Your finished Test Bench should look something like this:

../../_images/completed_test_bench.png
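To make the mechanics concrete, here is a minimal, self-contained sketch of the kind of aggregation the Value Aggregator performs. This is not the bundled script's actual implementation; the component data, the helper names, and the exact reading of the query format are illustrative assumptions:

    from fnmatch import fnmatch

    # Stand-ins for the components in our Test Bench; the names and
    # numbers are made up for this sketch.
    components = [
        {"name": "Alice", "Salary": 50000, "YearsEmployed": 2},
        {"name": "Bob",   "Salary": 60000, "YearsEmployed": 4},
    ]

    OPERATIONS = {
        "SUM": sum,
        "AVERAGE": lambda values: sum(values) / len(values),
    }

    def run_query(query, components):
        """Evaluate an 'attribute,filter,operation' query string,
        e.g. 'Salary,*,SUM', against a list of components."""
        attribute, pattern, operation = query.split(",")
        values = [c[attribute] for c in components
                  if fnmatch(c["name"], pattern)]
        return OPERATIONS[operation](values)

    # Properties hold queries; Metrics assign the queried values,
    # mirroring the Description attributes entered above.
    properties = {
        "SalarySum": run_query("Salary,*,SUM", components),
        "YearsEmployedAverage": run_query("YearsEmployed,*,AVERAGE",
                                          components),
    }
    metrics = {
        "TeamSalary": properties["SalarySum"],
        "AverageTeamTenure": properties["YearsEmployedAverage"],
    }
    print(metrics)  # {'TeamSalary': 110000, 'AverageTeamTenure': 3.0}

In this reading, the middle field of a query acts as a name filter, so a query could target a subset of the team rather than every component.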

Running Our Analysis

The only thing left to do is to run our newly created Test Bench using the Master Interpreter.

  1. While the AnalyzeMyTeam Test Bench is open in the canvas, click the Master Interpreter button on the toolbar.
  2. Click OK when the dialog appears.

This will send the job to the Results Browser for execution. When the job completes and turns green, navigate to the Test Benches tab to see its results. Your results should look something like this:

../../_images/results_browser.png