emcodem wrote: Fri Nov 08, 2024 9:37 pm
Hi @DCCentR
sorry for letting you run into this problem in the first place, and thanks for the report; it's perfect that you delivered the webint log along with it.
So my guess would be that you only see this behaviour when there are jobs running, taking away CPU and especially bandwidth to your NAS09 share.
It is of course important for me to support such setups, but this kind of problem is hard to test. Can you please try raising the "API timeout" in webui->admin->Network settings from 7000 to e.g. 120000 and confirm whether it helps? Meanwhile I'll reactivate some old Raspberry Pi and configure it as a WLAN NAS so I have something slow for testing this kind of situation.
What exactly happens:
So webint reads lots of files from the job history folder (history jobs can currently block the active jobs update). When there is network activity, the latency to the share rises. E.g. if you have 2000 jobs in the ffastrans history and each one usually takes 0.1 ms, the response usually takes around 200 ms, so far so good. But as soon as you have network activity on the same network cable, the time for each file rises and we soon have 10 ms per file or more, leading to a very long update time.
Of course I already have multiple strategies in place to cache stuff and avoid this, but it looks like some of them are not working as expected.
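To illustrate the kind of file caching meant here, a minimal sketch (not the actual webint code; the path, the names and the Node/TypeScript flavour are my assumptions) that keeps parsed job JSONs in memory and only re-reads a file from the share when its mtime has changed:

```typescript
import { promises as fs } from "node:fs";

// Hypothetical location of the job history JSONs on the NAS share.
const HISTORY_DIR = "Z:\\ffastrans_db\\history";

interface CacheEntry { mtimeMs: number; job: unknown; }
const cache = new Map<string, CacheEntry>();

// With 2000 history jobs this turns 2000 slow reads per refresh into
// 2000 comparatively cheap stat calls plus a read for each changed file.
async function readHistoryJobs(): Promise<unknown[]> {
  const names = (await fs.readdir(HISTORY_DIR)).filter((n) => n.endsWith(".json"));
  const jobs: unknown[] = [];
  for (const name of names) {
    const file = `${HISTORY_DIR}\\${name}`;
    const { mtimeMs } = await fs.stat(file);
    const hit = cache.get(file);
    if (hit && hit.mtimeMs === mtimeMs) {
      jobs.push(hit.job);               // unchanged since the last poll, skip the read
      continue;
    }
    const job: unknown = JSON.parse(await fs.readFile(file, "utf8"));
    cache.set(file, { mtimeMs, job });
    jobs.push(job);
  }
  return jobs;
}
```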
Hi @emcodem
Thank you for your reply and explanation. Changing the API timeout to 120000 did not help. It is necessary to restart the webinterface service to apply it, right?
logs afte120000.7z
I tried reorganizing the tasks a bit to take load off NAS09 (the diagram shows the old scheme that caused the problem and the new one):
Диаграмма без названия.png
So far, the problem has not occurred again.
logs_after_reorganize.zip
Unfortunately, reorganizing the modules/tasks didn't help. Running and finished workflows are again not displayed when some of the workflows are executing:
Снимок экрана 2024-11-11 171101.png
Снимок экрана 2024-11-11 171038.png
Oh man, what a misfortune, sorry for causing you trouble.
Meanwhile I raised the topic internally about the missing documentation/recommendations regarding the ideal system setup, especially for NAS-based installations. In fact, steinar serves the ffastrans db folder from a separate VM. In a perfect scenario, the connection between this "Database VM" and the ffastrans hosts runs over a separate control network where no media data is transferred, to keep the latency low. Of course, not everyone can build a separate control network, and we'll always have to keep an eye on this topic.
On the code front, I experimented with a Raspberry Pi NAS over WiFi and I think I was able to solve the most disturbing troubles. You know, WiFi has bad latency by default, and when I put some traffic on it... you know...
Raising the timeouts was also not applied directly as it should be (just as you say, the service must be restarted); this should be fixed too. But in my testing, just raising the timeouts was not really a good solution anyway; only optimizing the file caching really helped.
@emcodem
Hey, no problem
Thank you for your efforts in solving this issue.
Regarding the separate management network, I can connect the "FFAStrans & webinterface dir" and "FFAStrans Web server" servers with 1 Gbit (or even 10 Gbit if needed):
Снимок экрана 2024-11-12 121735.png
Do you think a separate management network for these servers would be enough to solve the web interface issue?
@DCCentR
Well, in the first place you definitely face software shortcomings; e.g. the caching can be optimized a lot, and it must be optimized. Not everyone can build a separate control network.
But yes, I also think that a 1 Gbit control network is more than enough; as long as you keep the data transfers off the control network NIC, the latency to the files should be good. But I didn't test such a setup yet.
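For what it's worth, one quick way to check such a setup (a rough sketch with placeholder UNC paths, not something I've verified) would be to time a few hundred stat calls against the db folder, once over the media network and once over the control network:

```typescript
import { promises as fs } from "node:fs";

// Average the duration of a few hundred stat() calls against the db folder.
async function avgStatLatencyMs(dir: string, samples = 200): Promise<number> {
  const names = (await fs.readdir(dir)).slice(0, samples);
  if (names.length === 0) return 0;
  const start = process.hrtime.bigint();
  for (const name of names) {
    await fs.stat(`${dir}\\${name}`);   // roughly one round trip to the share per file
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return elapsedMs / names.length;
}

// Placeholder paths: the same folder, reached via different NICs/hostnames.
avgStatLatencyMs("\\\\NAS09\\ffastrans_db\\history")
  .then((ms) => console.log(`media network: ${ms.toFixed(2)} ms per file`));
avgStatLatencyMs("\\\\nas09-ctrl\\ffastrans_db\\history")
  .then((ms) => console.log(`control network: ${ms.toFixed(2)} ms per file`));
```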
Not everything is optimized yet; e.g. if you have many "incoming" jobs from watchfolders, it would probably still be laggy, but the problems you face now should be solved, because I now only read the json files of "new" jobs instead of the jsons of 100 jobs every 3 seconds.
Also, I will separate reading history jobs from active jobs, because updating the history is not as important as updating the active jobs.
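As a rough illustration of those two changes (not the actual webint code; the folder paths, interval values and function names are made up for the example): only job JSONs that were not seen before are read, and the large history folder is polled on a much slower cycle than the active jobs, so a slow history scan can no longer stall the active jobs update.

```typescript
import { promises as fs } from "node:fs";

const seen = new Set<string>();

// Read only the job JSONs that appeared since the last poll of this folder.
async function readNewJobs(dir: string): Promise<unknown[]> {
  const names = (await fs.readdir(dir)).filter((n) => n.endsWith(".json"));
  const jobs: unknown[] = [];
  for (const name of names) {
    const file = `${dir}\\${name}`;
    if (seen.has(file)) continue;        // already delivered to the UI earlier
    jobs.push(JSON.parse(await fs.readFile(file, "utf8")));
    seen.add(file);
  }
  return jobs;
}

function pushToClients(jobs: unknown[]): void {
  // Placeholder: hand the freshly read jobs to the web UI.
  if (jobs.length) console.log(`got ${jobs.length} new job(s)`);
}

// Active jobs stay on the fast 3 s cycle, history moves to a slow 30 s cycle.
setInterval(() => readNewJobs("Z:\\ffastrans_db\\active").then(pushToClients), 3_000);
setInterval(() => readNewJobs("Z:\\ffastrans_db\\history").then(pushToClients), 30_000);
```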