====== Process overview ======

{{gallery>:system:process_model_1.jpg?400x150&1200X1200&lightbox}}

==== General Information ====

  * What is an a3x file?
An a3x file is best compared to a Python .pyc file or a Java .class file. It is basically a "compiled" AutoIt script. The only difference to an .exe file is that it needs an AutoIt environment to run (AutoIt is the current programming language of FFAStrans). Luckily, ffastrans.exe and exe_manager.exe have such an environment included, so there is no need to install AutoIt itself. In the past, all a3x files were actually delivered as .exe files with FFAStrans, but it turned out that various antivirus products did not like this, so FFAStrans was changed to deliver a3x files instead.

  * Farm mode
Note that there is no difference between running on a standalone host or in a farm. The processes are the same on each and every member of the farm. All nodes look for new job tickets in the same folders and compete against each other to get a job assigned (depending on CPU settings, "farm host exceptions", etc.).

==== ffastrans.exe ====

ffastrans.exe can fulfill multiple purposes, depending on whether FFAStrans is installed as a Windows service:
  * User interface for host and farm settings as well as workflow building and management
  * When rest_service is NOT installed, ffastrans.exe fulfills the job of rest_service

{{gallery>:system:process_with_transcode.png?900x150&1200X1200&lightbox}}

In this picture we see ffastrans.exe on a standalone host, not installed as a service. ffastrans.exe took care of starting exe_manager, which in turn started different sub-processes, e.g. def_runner.a3x and processors.a3x.

==== rest_service.exe ====

rest_service.exe has two tasks:
  * Keep exe_manager alive
  * Provide the HTTP API service

{{gallery>:system:process_as_service.png?1200x150&1200X1200&lightbox}}

In this picture we see that rest_service.exe took care of starting exe_manager.

==== exe_manager.exe ====

exe_manager watches for new tickets created by the various job starting methods supported by FFAStrans. Because of the FFAStrans design, all hosts will (compete and) try to pick up new tickets. When a new ticket is validated, exe_manager starts the processor and refers it to the new ticket file, thus creating a (sub)job.

FFAStrans has 3 types of tickets:
  * Pending: New jobs from monitors always start here.
  * Queue: New jobs (submitted via the API or manually) go in here. This is also the place for existing jobs that wait for their next processor execution. Queue has higher priority than pending and will always be executed before any pending tickets.
  * Running: These are the currently running tickets and should correspond to the number of processors started by exe_manager.

So exe_manager has the following tasks (a minimal sketch of the ticket pickup loop follows after this list):
  * Check whether there are new job tickets in queue or pending and, if the settings allow processing the new ticket, move it to running
  * Start the def_runner.a3x process, which takes care of monitoring (in case there are any active workflows with monitor processors)
  * Clean the history logs for all jobs
  * Auto-update youtube-dl
  * Restart orphaned jobs by comparing running tickets with active processes; each host in the farm only takes care of tickets it claimed for processing
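The following is a minimal Python sketch of that pickup loop, showing only the priority between the queue and pending folders and the "move to running" step. The folder layout, the ticket file naming and the ''max_active_jobs'' default are assumptions for illustration only; the real exe_manager is an AutoIt component whose cache paths and spawning mechanism differ.

<code python>
import shutil
from pathlib import Path

# Hypothetical folder layout -- the real FFAStrans ticket locations differ.
TICKET_ROOT = Path(r"C:\FFAStrans\tickets")
QUEUED, PENDING, RUNNING = (TICKET_ROOT / n for n in ("queued", "pending", "running"))

def pick_next_ticket():
    """Queued tickets always win over pending ones; monitors only feed pending."""
    for folder in (QUEUED, PENDING):
        tickets = sorted(folder.glob("*.json"))
        if tickets:
            return tickets[0]
    return None

def claim_ticket(ticket, max_active_jobs=4):
    """Move a ticket to 'running' if this host still has a free job slot."""
    if len(list(RUNNING.glob("*.json"))) >= max_active_jobs:
        return None                        # host is already at its job limit
    target = RUNNING / ticket.name
    # Claiming a ticket means moving the file; in a farm, another host may win this race.
    shutil.move(str(ticket), str(target))
    # At this point exe_manager would start a processors.a3x instance for the running ticket.
    return target
</code>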
==== def_runner.a3x ====

def_runner.a3x is started by exe_manager in case there are any active workflows that need watching for new files.

{{gallery>:system:process_base.png?1200x150&1200X1200&lightbox}}

In this picture we see that exe_manager started a sub-process that is also called exe_manager, but when looking at the "command line" we see that one of the exe_manager sub-processes takes care of def_runner.a3x.

==== processors.a3x ====

Whenever exe_manager finds a new job ticket that needs to be processed on the current host, it actually executes the "processor" described by the job ticket using processors.a3x.

==== Priority flow chart and example ====

Since the 1.1.0.0 FFAStrans release, all jobs are processed according to their priority: \\
FFAStrans Workflow properties\Priority -> 0 (Very Low) - 1 (Low) - 2 (Normal) - 3 (High) - 4 (Very High) \\
//**Each time a new higher-priority branch (or job) is put into the queue, it will start before all lower-priority queued jobs.**// \\
//**Once a branch/job has been started, the priority does not matter anymore.**// \\
\\
Example for 4 max active jobs on 1 host, 2 hosts with 2 max active jobs each, or 4 hosts with 1 max active job: \\
{{gallery>:system:jobs_priority.jpg?700x350&1200X1200&lightbox}}

==== Job Ticket Management ====

{{gallery>:system:processes_tickets.png?700x350&800X800&lightbox}}

This picture shows how the different processes work with job tickets:
  - Various job starting methods create tickets (.json files) in either the pending or the queued folder
  - Queued or pending tickets are read by exe_manager
  - exe_manager spawns a processors.a3x process for each ticket
  - processors.a3x reads the ticket, decides whether the current host and CPU settings allow executing it and, if so, executes the processor from the ticket
  - When the processor has finished, processors.a3x parses from the workflow which processors are to be executed next and places the corresponding tickets into the queued folder; the process starts over at step 3 (this last step is illustrated by the sketch at the very end of this page)

==== The Status directory ====

TODO: processors.a3x checks for status "pause" or "abort" of a job split; exe_manager checks for ".start~%workflow_id%" to see if def_runner needs to be started.

----
[[system:processes|Back to top]]
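As an addendum to the Job Ticket Management steps above, the following minimal Python sketch illustrates the last step: after a processor finishes, one new queued ticket is created per follow-up processor in the workflow. The JSON field names, the workflow lookup table and the cleanup of the spent running ticket are assumptions made purely for illustration; the real ticket schema and the AutoIt implementation inside processors.a3x differ.

<code python>
import json
import uuid
from pathlib import Path

QUEUED = Path(r"C:\FFAStrans\tickets\queued")   # hypothetical location, as in the sketch above

def queue_next_processors(running_ticket: Path, workflow: dict) -> None:
    """Create one queued ticket per follow-up processor once the current one has finished."""
    ticket = json.loads(running_ticket.read_text())
    current = ticket["processor_id"]              # field name is invented for this example
    for next_id in workflow.get(current, []):     # 'workflow' maps a processor to its successors
        new_ticket = dict(ticket, processor_id=next_id)
        out_path = QUEUED / f"{uuid.uuid4().hex}.json"
        out_path.write_text(json.dumps(new_ticket, indent=2))
    running_ticket.unlink()                       # assumption: the spent running ticket is cleaned up here

# With a workflow table like {"encode": ["deliver", "notify"]}, finishing the "encode"
# processor places two new tickets into the queued folder, where exe_manager picks
# them up again and the cycle continues.
</code>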