Workflow operating principle

To create a first simple encoding workflow, right-click on the job canvas, choose 'Insert processor', pick the 'Monitors' category and select 'Folder'. Repeat with the 'Encoders' category and 'H264', and once more with the 'Delivery' category and 'Folder'.

Link them together: left-click a node's output connector (the blue square on the right side of the node), hold the button, and release it on the next node. The three processor nodes are now connected.

Set your watch path in the Monitor node, configure your H264 encoder processor, and choose a destination folder in the Delivery node. The workflow is now ready. Start it by right-clicking the workflow's name in the Workflow Manager window and choosing 'Start'. Drop a video file into your watch folder and wait for the encoding to complete.

You can also submit a file manually (even if your workflow is disabled) by right-clicking the Monitor node and choosing 'Submit file(s) to folder'.

Advanced Workflow

A variable can be used to fetch a piece of information (a file property, for example), keep it in memory for the whole workflow, and use it wherever it is needed. Many internal variables are populated at the start of each workflow, such as the file's name (%s_original_name) or the video file's properties (duration, frame rate, etc.). You can also create your own variables; use the 'Populate Variables' node to create a new variable.
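As a rough sketch of the idea (the function and values below are illustrative, not FFAStrans internals), the %…% tokens behave like simple string placeholders that are resolved at runtime, and unset tokens stay visible as-is:

```python
# Conceptual sketch of FFAStrans-style variable substitution.
# Variable names and values are illustrative examples only.

def substitute(template: str, variables: dict) -> str:
    """Replace %name% tokens with their values; unknown tokens
    are left untouched, mirroring the behaviour of unset variables."""
    for name, value in variables.items():
        template = template.replace(f"%{name}%", str(value))
    return template

job_vars = {"s_original_name": "clip01"}
print(substitute("Encoding %s_original_name%...", job_vars))
# -> Encoding clip01...
print(substitute("%s_unset_variable%", job_vars))
# -> %s_unset_variable%   (unset tokens remain visible)
```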

Working with variables

Download Link : working_with_user_variables.json
After importing the workflow, select it in your workflow list, then right-click → submit a file at the first node. It will copy the submitted file to c:\temp (if the folder exists).
This workflow shows how to use user variables and how to build a path from two variables: the first processor, 'Populate Variables', sets the values of the two variables %s_test_variable_1% and %s_test_variable_2%.

In the 'Deliver to Folder' processor, the delivery folder is entered as a template built from the two variables (screenshot omitted), which at workflow runtime is translated to the resolved path (screenshot omitted).
Please play with the values; set different drive letters and folder names to experiment.

Detailed Explanation about User Variables:

  • You can create/delete user variables in every processor node by clicking the bold > symbol
  • Just creating a user variable does not influence anything; it must actually be “used” in a processor in order to do something
  • Variable values have to be set by a job. At job start, user variables are always empty (see STATICS for pre-defined/global variables)
  • The same variable can be set/used in multiple parallel jobs; they will not influence each other. A user variable only counts for the currently running job and its children
  • If you use an unset variable in a job, its value is not replaced by “nothing”; it will appear literally as %s_some_variable_name%
  • Variable values are only visible to the current split/branch/job (these are all the same thing) and all child splits/branches/jobs
  • Once you have created a variable, you can select it from the list in every processor of every workflow. But remember, its value is empty by default when a job starts
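The scoping rules above can be sketched as a simple model (conceptual only; the class and names are invented for illustration, not FFAStrans internals): each job holds its own dictionary of variables, and a child split/branch starts with a copy of its parent's current values:

```python
# Conceptual model of per-job user-variable scoping (illustrative only).
import copy

class Job:
    def __init__(self, inherited=None):
        # user variables are always empty at job start, unless inherited
        self.variables = copy.deepcopy(inherited) if inherited else {}

    def split(self):
        # a child split/branch/job sees the parent's current values
        return Job(self.variables)

parent = Job()
parent.variables["s_test_variable_1"] = "set by parent"

child = parent.split()
print(child.variables["s_test_variable_1"])          # -> set by parent

# parallel jobs never influence each other:
other_job = Job()
print(other_job.variables.get("s_test_variable_1"))  # -> None

# changes in a child do not flow back to the parent:
child.variables["s_test_variable_1"] = "changed in child"
print(parent.variables["s_test_variable_1"])         # -> set by parent
```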

Working with Statics vs Variables

Download Link : working_with_statics_vs_variables.json
This workflow shows the difference between a variable and a STATIC.

In contrast to user “variables”, a STATIC, once created and set to a value, always has the same value in every workflow, job, and split. The value can only be changed by an administrator using the FFAStrans UI; a job cannot change the value of a STATIC. In other words, a STATIC is a global (or domain-wide) variable.
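A minimal sketch of the distinction (the STATIC name and value below are invented for illustration; this is not how FFAStrans stores them internally): a STATIC lives in one global table shared by everything, while user variables belong to each job and start out empty:

```python
# Illustrative sketch: a STATIC is one global value (read-only for jobs),
# while user variables belong to each job. Names/values are made up.
STATICS = {"s_archive_root": r"\\server\archive"}  # set once by an administrator

class Job:
    def __init__(self):
        self.variables = {}  # user variables: empty at every job start

    def resolve(self, name):
        # a job can read a STATIC, but can only set its own job-local variables
        return self.variables.get(name, STATICS.get(name))

job = Job()
print(job.resolve("s_archive_root"))     # same value in every workflow and job
print(job.resolve("s_test_variable_1"))  # -> None: user variables start empty
```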

How to create a STATIC:

1) Left-click the header of a processor (it does not have to be 'Populate Variables'; it can be any processor)
2) Left-click one of the > symbols
3) Right-click there and select → New
4) Select the type “static” instead of “variable”
5) Enter a name for this global STATIC variable. Remember, its value is the same in each and every workflow that uses this STATIC.
6) Select the data type (advanced users can use something other than string)
7) Enter a value for the STATIC. Note again that this value cannot be changed by a job

Error handling

Download Link : working_with_errors.json
This workflow demonstrates how to catch and resolve errors.
It first tries to deliver the input file to a share that does not actually exist (wait a while; it will retry 100 times). When it fails, it delivers to a fallback location. After that, a Hold node with the “synchronize” option is used so that the main branch can continue.
This is a little-known but very powerful feature of FFAStrans workflows: right-click the input connector of a processor to select whether the processor is executed on success, on error, or in any case.


workflows/workflow_building.txt · Last modified: 2021/02/11 20:49 by benjamin
