Re: Webinterface
Posted: Tue Apr 23, 2024 10:17 am
by emcodem
Thanks a lot for letting me know, @knk. I'll take a close look before the next release...
Re: Webinterface
Posted: Mon May 20, 2024 3:21 pm
by knk
Hi emcodem!
Quick question: is there any way I can get the active user of the browser session and turn it into a variable?
In depth: I'd like to know who exactly started a workflow using the webinterface. I've been looking at logs and active files, but I can't find a usable data input that I could pass on to the workflow side. There might even be a way to post that information into a variable, but I need to know where to get it first.
Thanks (again!) !
Re: Webinterface
Posted: Tue May 21, 2024 8:47 am
by emcodem
Hi @knk,
Good request! Fortunately momocampo asked for it some time ago, so it has already been implemented. Unfortunately I am not 100% certain whether it is part of the current release or whether it was added later (I only work with "latest" versions).
How it works is that when a job is submitted from the Job Submitter page, this variable carries the logged-in username: %s_job_submit_username%
Let me know if it works for you; if not, I'll try to be faster with the next release.
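For anyone reading along: on the workflow side you can consume the variable like any other, e.g. by passing it on a script node's command line. The variable name is the real one; the script below and its output format are just a hypothetical sketch of logging who submitted the job.

```python
# Sketch: a script called from an FFAStrans command/script node, where the
# node's command line passes the submitter along, e.g.:
#   python log_submitter.py "%s_job_submit_username%"
# (the variable name is real; the script and its output format are hypothetical)
import sys
import datetime

def log_submitter(username: str) -> str:
    """Build an audit line recording who started the workflow."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    return f"{stamp} workflow started by {username or '<unknown>'}"

if __name__ == "__main__":
    user = sys.argv[1] if len(sys.argv) > 1 else ""
    print(log_submitter(user))
```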
Re: Webinterface
Posted: Tue May 21, 2024 9:10 am
by knk
Indeed... working like a charm!
Thanks!!!
Re: Webinterface
Posted: Tue May 21, 2024 10:23 pm
by emcodem
@knk
So the reason why this has not yet been documented in the wiki is that I was not certain whether the design of how it works is final.
E.g. steinar does not like the concept of setting "non-existing" variables in the job via the API (webint submits this user variable even if it does not "exist" in the workflow), while I personally like this concept very much: it keeps the webinterface version decoupled from the FFAStrans version, and it is also more backward compatible. E.g. if I add a new fixed variable in the Job Submitter, you can still use the new webint feature with an old FFAStrans.
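To illustrate the two positions with a rough concept sketch (this is not the actual webint or FFAStrans code):

```python
# Concept sketch only (not actual webint/FFAStrans code): the client always
# attaches its fixed variables; the server either rejects unknown ones
# (strict, steinar's preference) or lets them pass through (the behaviour
# described above), which keeps old and new versions interoperable.

def apply_submitted_vars(workflow_vars: dict, submitted: dict,
                         strict: bool = False) -> dict:
    """Merge variables sent by the client into the workflow's variables."""
    if strict:
        unknown = set(submitted) - set(workflow_vars)
        if unknown:
            raise ValueError(f"unknown variables: {sorted(unknown)}")
    merged = dict(workflow_vars)
    merged.update(submitted)
    return merged

# A non-strict server still accepts a submit that includes a variable the
# workflow never defined, e.g.:
# apply_submitted_vars({"s_source": ""}, {"s_job_submit_username": "knk"})
```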
However, it is as it is right now and no plans for changing currently, so i also added it to the wiki documentation.
Re: Webinterface
Posted: Thu Jun 13, 2024 4:14 pm
by Stef
Hello again
When submitting a larger number of files, I sometimes encounter the following error for a few of them:
[ERROR] Error: connect ECONNREFUSED 127.0.0.1:65445
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146:16)
At the same time, STDOUT log contains the following as well:
[[17:20:05.456]] [LOG] Error retrieving finished jobs http://localhost:65445/api/json/v2/history Error: connect ECONNREFUSED 127.0.0.1:65445
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 65445
}
This usually happens when the computer that hosts the Webserver is transcoding or processing files.
It seems like the API refuses connections sometimes, though I don't see why, since the rest of the queue gets submitted fine. Any idea what could be the culprit?
Re: Webinterface
Posted: Thu Jun 13, 2024 10:40 pm
by emcodem
Hi @Stef, good to hear from you!
Yeah, our good @FranceBB also had this kind of trouble because he works with constantly overloaded systems.
We work on performance optimisations all the time, and the next webint version does not really use the FFAStrans API, so the error should be gone in the next version anyway. However, one good tip might be to lower the priority of your heavy processing workflows: this also influences the Windows process priority of the transcode jobs, which in turn leads to a more responsive system and FFAStrans API.
Also, in case you have watchfolders where lots of "old" or "non-processed" files reside, it would be good to upgrade to the latest FFAStrans version, or just make sure your watchfolders don't usually have a large number of unused files lying around.
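Until then, a client-side workaround is to retry transient connection refusals with a short backoff. This is my own sketch, not a built-in webint option; the history URL from your log is only used as an example:

```python
# My own sketch, not a built-in webint option: retry transient connection
# refusals with exponential backoff. The 'opener' parameter is injectable
# so the retry logic can be exercised without a running server.
import time
import urllib.error
import urllib.request

def fetch_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0,
                     opener=urllib.request.urlopen) -> bytes:
    for attempt in range(attempts):
        try:
            with opener(url) as resp:
                return resp.read()
        except (urllib.error.URLError, ConnectionError):
            if attempt == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))

# e.g. fetch_with_retry("http://localhost:65445/api/json/v2/history")
```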
Re: Webinterface
Posted: Fri Jun 14, 2024 9:02 am
by Stef
Thanks for the reply!
Yeah, it happens when the system is taxed, so the errors weren't really surprising. I've already tried adjusting priorities for long tasks, and the watchfolders are cleaned nightly, so I was wondering if it was perhaps some sort of rate-limiting issue. I should probably split webserver and processing duties (makes me wonder if the webserver can run under Wine on a small Synology or something like that; maybe I'll check that out).
I've noticed the activity in the WebUI GitHub, so I'm keeping an eye out
Re: Webinterface
Posted: Fri Jun 14, 2024 11:48 am
by emcodem
@Stef
When we experience timeouts, it will most likely be the FFAStrans application itself (rest_service or ffastrans.exe, depending on whether you run it as a service or not). It is single-core, and it also depends very much on the speed of the storage location where the database resides. You should e.g. avoid having the database files on the same NAS location you transcode on, or at least provide a separate network cable for it, because if you utilize lots of network bandwidth, small-file operations and file-listing operations become extremely slow on that cable. Slow file operations block a lot of time in the FFAStrans main program.
Webint might work with Wine, but in theory it can run natively on Linux; it just doesn't make a lot of sense, because webint will not eat lots of CPU time on the main FFAStrans host. You would most likely still see the timeouts, because it is the FFAStrans API that is blocked for some time.
What I forgot to say is that you can try to just raise the API timeout in the webint admin settings, but I'm not sure if the job submit part really reacts to this setting (maybe only the history and active-job fetching).
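If you want to quantify the small-file point, a quick micro-benchmark like this (my own sketch; the file names and count are arbitrary) run once against the database location and once against a local disk makes the difference obvious:

```python
# My own sketch: time a burst of tiny-file writes, a directory listing and
# deletes, which is roughly the access pattern of the FFAStrans job database.
# Run it against the DB location and against a local disk and compare.
import os
import tempfile
import time

def small_file_benchmark(directory: str, count: int = 200) -> float:
    """Write, list and delete 'count' tiny JSON files; return elapsed seconds."""
    start = time.perf_counter()
    paths = []
    for i in range(count):
        p = os.path.join(directory, f"ticket_{i}.json")
        with open(p, "w") as fh:
            fh.write('{"job": %d}' % i)
        paths.append(p)
    os.listdir(directory)
    for p in paths:
        os.remove(p)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(f"local tmp dir: {small_file_benchmark(d):.3f} s")
```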
Re: Webinterface
Posted: Sat Jun 15, 2024 10:50 pm
by FranceBB
@Stef Yeah, emcodem already highlighted some important points, but I'll add what I've learned from experience:
1) Separate your web interface and your farm
Ideally you should dedicate a host to running the REST service only and being in charge of the web interface.
It should NOT execute any jobs; its only purpose should be to run the REST service used by the web interface, and the web interface itself.
2) Separate your database and your processing cache
Your database is where your FFAStrans installation is. It needs dedicated storage that is snappy to respond and that ideally serves no purpose other than storing the FFAStrans files. Although version 1.4 isn't as resource-hungry as older versions used to be (in particular 1.3.1), it can still put a great deal of pressure on the db as it opens and closes tickets (I mean json files) and writes all the info about detected files, jobs etc. The db will have lots of very small files, and the storage needs to be very fast at serving them: FFAStrans should be able to read and write lots of small files very quickly without generating deadlocks on the storage.
The media processing cache, on the other hand, needs to be a completely different storage, where big beefy files are going to be stored before they're delivered to the destination or simply deleted, as they're temporary. We're talking about files several GB in size (although that varies from workflow to workflow). I personally have the media processing cache on Dell EMC Isilon storage, which is "good enough", and the DB on a RAID6 RHEL7 dedicated server for high availability (i.e. it never needs to go down).
Keep in mind that it's generally preferable to have a native SMB share on Windows (Windows Server 2019 onwards), as two Windows machines generally talk "better" via SMB, but in my case I went the Linux way 'cause I couldn't be bothered with Patch Tuesday (i.e. rebooting the server hosting the db every second Tuesday of the month to install the Windows updates). Before that, I tried (a very long time ago) to put the DB on Isilon, and what we found out was that the poor storage went into deadlock, as it wasn't made to read and write lots of small files. AVID Nexis is also a show-stopper (i.e. a "no-go") for the DB, although it could be used for the temporary processing cache.
3) Careful with networking
More often than not, it's not the CPU being at 100% that makes a server fail to respond to an API call, but rather the fact that it is reading/writing a lot of data via the NIC. Although things should get better with 10 Gbit/s servers connected to 100 Gbit/s (or greater) switches, the overwhelming majority of hardware out there still runs at 1 Gbit/s on 40 Gbit/s switches. With such low bandwidth, more often than not when indexing (i.e. reading the file with an Avisynth indexer, like when you use the A/V Decoder), or even in normal non-Avisynth workflows when FFAStrans is just reading the source media properties so that the filter_builder can create the appropriate command line etc., the NIC can easily be overwhelmed and saturate the whole 1 Gbit/s of bandwidth.
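To put rough numbers on that, here's a back-of-the-envelope helper of mine (the 0.9 efficiency factor is an assumption for protocol overhead, not a measured value):

```python
# My own back-of-the-envelope helper: how long a file transfer monopolises
# a NIC. The 0.9 efficiency factor for protocol overhead is an assumption.

def transfer_seconds(size_gb: float, link_gbit: float,
                     efficiency: float = 0.9) -> float:
    """Seconds needed to move size_gb gigabytes over a link_gbit Gbit/s NIC."""
    gigabits = size_gb * 8.0               # gigabytes -> gigabits
    return gigabits / (link_gbit * efficiency)

if __name__ == "__main__":
    # Indexing a 50 GB source keeps a 1 Gbit/s NIC busy for over 7 minutes,
    # versus well under a minute at 10 Gbit/s:
    print(f"1 Gbit/s:  {transfer_seconds(50, 1.0):.0f} s")
    print(f"10 Gbit/s: {transfer_seconds(50, 10.0):.0f} s")
```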