Hello,
I would like to request a "max parallel running jobs per workflow" setting. This functionality is essential for scenarios where only a limited number of tasks should run concurrently.

For example, when copying files from AirSpeed, it is optimal to copy at most 4 files concurrently; processing 5-10 files at once can overload the system and cause AirSpeed to freeze. Likewise, with GPU-based transcoding, professional graphics cards such as NVIDIA Quadro have no hard limit on the number of simultaneous encodes, but raising the number of concurrent tasks still degrades processing speed.

Being able to control the number of parallel jobs would let users find the optimal balance between processing speed and system stability. The feature would also benefit workflows that use external scripts or additional processors, ensuring efficient resource allocation and better operational control.
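To make the requested behavior concrete: it is essentially a semaphore-style cap on how many jobs from one workflow may run at once. Here is a minimal Python sketch of that idea (not FFAStrans code; all names and the sleep stand-in are hypothetical):

```python
import threading
import time

MAX_PARALLEL = 4  # the requested per-workflow setting

# A bounded semaphore enforces the cap: a 5th job blocks until a slot frees.
slots = threading.BoundedSemaphore(MAX_PARALLEL)
running = 0
peak = 0
lock = threading.Lock()

def run_job(job_id):
    """Simulate one workflow job, e.g. a file copy from AirSpeed."""
    global running, peak
    with slots:  # blocks while MAX_PARALLEL jobs are already active
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)  # stand-in for the actual copy/transcode work
        with lock:
            running -= 1

# Submit 10 jobs; the semaphore ensures no more than 4 overlap.
threads = [threading.Thread(target=run_job, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_PARALLEL
```

The point is just that the cap is a property of the workflow, not of which host the jobs land on.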
Thx
Max Parallel running jobs per workflow setting
Re: Max Parallel running jobs per workflow setting
I can't 100% second this request, because I would like it to be more granular than per workflow. The maximum number of concurrent jobs should be adjustable per processor instead of per workflow, which also implies that host groups must be set per processor instead of per workflow. This would also open up possibilities like downloading on a weak server, transcoding on a strong one, and delivering on the server that has an internet connection...
It's an ongoing topic that we also discuss internally pretty often.
emcodem, wrapping since 2009 you got the rhyme?
Re: Max Parallel running jobs per workflow setting
Thank you, but your suggestion seems to me more like host groups within a farm setup. I am already using that approach, but when a group consists of multiple servers, it doesn't help when I need to limit a specific workflow to just a few simultaneous runs.
Here’s my practical case – since I need to copy files from AirSpeed, I can’t assign this workflow to the entire farm. Instead, I had to restrict one server from 10+ available slots to 4 and assign this and other similar workflows only to that server. This approach is less flexible in terms of overall farm utilization.
Re: Max Parallel running jobs per workflow setting
Hm, I guess from that perspective there is no difference if you set the concurrency on the workflow...
Well, one way or another we rely on our @admin mister steinar to choose and actually implement whatever he likes; no one but him is working on this part of ffastrans.
Re: Max Parallel running jobs per workflow setting
Hi artjuice,
You can probably achieve something with the "Job processing slots" counter, which is configurable separately on each node. When the workflow reaches the AirSpeed part, you can configure that node to use a higher number of slots out of the total available on the host. So if you have configured your host with 12 simultaneous job slots, setting the AirSpeed node to take 3 slots means at most 4 AirSpeed nodes can be active at the same time. However, this is per host, so it's not 100% what you want, but if you configure your FFAStrans farm wisely you might work around the issue. If you, for example, dedicate a separate host to the AirSpeed jobs and limit those nodes to just that single host (using host groups), it's sort of doable.
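The slot arithmetic above can be written out directly; this is just the back-of-envelope calculation with the example numbers from this post, not anything from FFAStrans itself:

```python
# Per-host concurrency via slot weighting (numbers from the example above):
# a host with 12 job slots, where each AirSpeed node reserves 3 of them.
total_slots = 12      # "Job processing slots" configured on the host
slots_per_node = 3    # slots the AirSpeed node is set to consume

# Integer division gives the most such nodes that fit at once.
max_concurrent = total_slots // slots_per_node
print(max_concurrent)  # -> 4
```

Note this caps concurrency per host, not per workflow, which is why the dedicated-host trick is needed.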
-steinar