Relating jobs across workflows

Questions and answers on how to get the most out of FFAStrans
ddehoff
Posts: 8
Joined: Mon Jan 21, 2019 9:22 pm

Relating jobs across workflows

Post by ddehoff »

I have a setup where the same file is taken through multiple workflows (720, 1080, 4k). I'm doing this because I want each workflow to have a different priority. I'd like to show job history in a custom status monitor grouped like this:

MyFile.mp4
- 720 Version: Complete
- 1080 Version: Complete
- 4k Version: Complete

The issue is trying to code the status monitor to look at the history API and determine that this 720 workflow job should be grouped with this 1080 job and this 4k job. Of course they all have the same filename, but a user could submit the same filename again later the same day with an edit so I would end up with something like this:

MyFile.mp4
- 720 Version: Complete
- 1080 Version: Complete
- 4k Version: Complete
- 720 Version: Complete
- 1080 Version: Complete
- 4k Version: Complete

I tried grouping them by filename AND start_time, but depending on load, different workflows pick up the same file at different times.

How can I look at two jobs in the history API and know that both jobs were for the same file? Can I get to the file modified date or the hash for that file?
admin
Site Admin
Posts: 1687
Joined: Sat Feb 08, 2014 10:39 pm

Re: Relating jobs across workflows

Post by admin »

Hi ddehoff,

This is not a trivial thing you're asking. You cannot do it by looking at the history API alone; you need to access the actual log for each job ID:
/api/json/v2/logs/<job_id>

Here you can look for the "cache_record" key, which holds the complete path to the file that was picked up. This path embeds a hash of the complete file path in its value, so it serves as an absolute identifier for that particular file and its location.
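
For illustration, here is a minimal Python sketch of that grouping idea. The host/port and how you collect the job IDs from the history API are assumptions; the log layout follows the description above, so adjust to your installation:

# Minimal sketch, assuming a reachable FFAStrans API host and that each job log
# exposes the "cache_record" key as described above. BASE (host/port) and the
# way you obtain the job IDs from the history API are placeholders.
import requests

BASE = "http://my-ffastrans-host:65445"  # hypothetical host and port

def cache_record_for(job_id):
    # Fetch the full log for one job and return its cache_record, if present.
    log = requests.get(f"{BASE}/api/json/v2/logs/{job_id}").json()
    return log.get("cache_record")

def group_jobs_by_file(job_ids):
    # Jobs whose logs share a cache_record were started for the same file.
    groups = {}
    for job_id in job_ids:
        key = cache_record_for(job_id)
        if key:
            groups.setdefault(key, []).append(job_id)
    return groups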

Hope this helps.

-steinar
emcodem
Posts: 1811
Joined: Wed Sep 19, 2018 8:11 am

Re: Relating jobs across workflows

Post by emcodem »

Hm, the way steinar proposed would probably cause too many API calls per second when opening your custom monitor page (in case you don't have your own database like the webinterface does).

To reduce the number of calls and let you work with the history call only, it might work for you to insert a populate processor in your workflow and add a hash of the original input file's path, as shown below. This way, you'll find your hash in the Status column:

You might want to change %s_source% in my example to the cache record like steinar proposed.

Depending on your requirements, it might be a good idea to switch from using a hash (to avoid hashing the same file multiple times in parallel) to using the creation date of the file via the "File" variables containing "original", e.g. %i_original_day% etc...
[Attachment: hash.png]
[Attachment: hash2.png]
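
If it helps, here is a rough Python sketch of how a monitor could group history entries on such a prefix. The "status" field name and the bracket convention are assumptions about what your history output looks like:

# Rough sketch, assuming each history record carries the populated text in a
# field named "status" and that the hash was prepended as "[<hash>] ...".
import re
from collections import defaultdict

HASH_PREFIX = re.compile(r"^\[(?P<hash>[0-9a-fA-F]+)\]")

def group_history(records):
    # Group history records by the hash found at the start of their status text.
    groups = defaultdict(list)
    for rec in records:
        m = HASH_PREFIX.match(rec.get("status", ""))
        if m:
            groups[m.group("hash")].append(rec)
    return groups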
emcodem, wrapping since 2009 you got the rhyme?
ddehoff
Posts: 8
Joined: Mon Jan 21, 2019 9:22 pm

Re: Relating jobs across workflows

Post by ddehoff »

This is great! I see the hash in the result parameter, which will really help. However, as in your screenshot, putting %s_success% after the hash does not seem to append the original success text, and the original text is lost. Any ideas on how to preserve the original success text? I tried creating a variable called orig_success, setting it to %s_success%, and then setting %s_success% to [%s_hash%] %s_orig_success%, but %orig_success% always just contains the literal string %s_success%.
emcodem
Posts: 1811
Joined: Wed Sep 19, 2018 8:11 am

Re: Relating jobs across workflows

Post by emcodem »

Ohhh sorry, I did not pay attention to that.
As a matter of fact, s_success is a very special thing. Basically it is "unset" until the current branch has ended. After the last node has executed, it will be set to "Success" in case it is not already set and there were no errors.

So basically, if you do not set s_success in your workflow at all, it can only be "Success" on the success path of the workflow. On the error path, %s_error% contains the interesting information. At least as far as I can quickly read from the code.

This means there is really no reason to capture s_success, unless it has been set earlier in the workflow. Are you setting s_success earlier in the workflow to some custom value? If yes, you could just use a different variable for it (because, unlike other variables, the value of s_success simply cannot be retrieved).

Also, in order to prevent the same file being hashed multiple times, I'd recommend hashing the original input file after the transcoding. The xxhash tool has an internal database to prevent hashing the same file multiple times. If you hashed it at the start of the job and all sub-jobs for this file started at the same time, they would all probably hash the input file in parallel.
emcodem, wrapping since 2009 you got the rhyme?
ddehoff
Posts: 8
Joined: Mon Jan 21, 2019 9:22 pm

Re: Relating jobs across workflows

Post by ddehoff »

Ok. Sounds like setting success will not work, as it can hold either a custom string OR the genuine status of the workflow, not both. I need the genuine success message to tell me whether the job was successful, or to key on conditionals such as "this 4k workflow didn't run because the source video was only 1024".

Is there another way I can expose the file hash or another custom variable in the history API?
admin
Site Admin
Posts: 1687
Joined: Sat Feb 08, 2014 10:39 pm

Re: Relating jobs across workflows

Post by admin »

The only way to have a custom string in the history API is currently to populate %s_success% with whatever you want. Now, if I understand you correctly, you need a way to group the same file across independent workflow jobs. In this regard, populating %s_success% with the hash would work, like @emcodem suggested. And because FFAStrans generates no unique success text per job other than "Success", you can skip that and just add the %s_error% variable to your hash: "[%s_hash%]%s_error%". Now, if the job succeeds you will just have "[hash]" as the result without any other text appended, so you'd know it succeeded. But in case of error you would have that too: "[hash]Some error message". If you use this option you must remember to set your populate node to execute on both error and success (yellow input connector).
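
A small Python sketch of how your monitor could read that convention; where the text lives in your history records is up to your setup:

# Small sketch: split "[hash]" / "[hash]Some error message" into its parts.
import re

RESULT = re.compile(r"^\[(?P<hash>[^\]]+)\](?P<error>.*)$")

def parse_result(text):
    m = RESULT.match(text)
    if not m:
        return None, None            # text did not follow the convention
    error = m.group("error").strip()
    return m.group("hash"), error or None  # error is None on success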

Maybe I have misunderstood what you want to accomplish and I'm way off, but would this work for you?

-steinar
emcodem
Posts: 1811
Joined: Wed Sep 19, 2018 8:11 am

Re: Relating jobs across workflows

Post by emcodem »

@ddehoff

Sorry if this all is a little confusing...
The message is that there is no sense in retrieving a value of %s_success%, because it has no value until the job has ended. Only %s_error% will contain interesting information, and only if anything failed earlier in the job (e.g. a conditional).

I created this example WF to show you what we were thinking of.
[Attachment: Relating Jobs Across Workflows.json]
Additionally, in case you do not already know: there is a way to "hide", or rather "delete", jobs (or rather branches) from the status monitor when they hit a failed Conditional (e.g. this file is not UHD, so no reason to go on): just check the "dispel" checkbox in the Conditional node.

Let me know if there are still doubts!
emcodem, wrapping since 2009 you got the rhyme?