Hi,
I have a workflow where, after FFAStrans finds a new file in a watchfolder, it runs a script.
Everything works until 3 parallel instances are running. Because the script transfers a file using SSHFS, it takes some time.
And when the fourth file/job starts, it just skips the script, doesn't even run it, and goes to the next node with success.
Any idea why?
I've reproduced this several times.
:!: Multiple parallel python script RUN instances :!:
Re: :!: Multiple parallel python script RUN instances :!:
Hey veks,
No idea, need a bad log and/or the workflow...
emcodem, wrapping since 2009 you got the rhyme?
Re: :!: Multiple parallel python script RUN instances :!:
I've tried it in a much simpler workflow too.
I'm testing this on FFAStrans v1.3.1 on a Windows server.
A normal workflow with a local watchfolder that picks up some txt file just to trigger the job.
After that it goes to a command executor that runs a Python script.
I created a script for testing purposes using ChatGPT, to be sure that our own script isn't the problem. The script just creates a new log file named after the input file and then waits 5 minutes.
And that's it.
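Roughly like this (a sketch of the test script; the log directory is a placeholder, and it assumes the command executor passes the triggering file path as the first argument):

[code]
import sys
import time
from pathlib import Path

# Assumption: the command executor hands over the watchfolder file
# as the first command line argument.
input_file = Path(sys.argv[1])

# Leave a trace for every run: a log file named after the input file.
log_dir = Path(r"C:\temp\run_logs")  # placeholder directory
log_dir.mkdir(parents=True, exist_ok=True)
(log_dir / (input_file.stem + ".log")).write_text(f"started for {input_file}\n")

# Simulate the long SSHFS transfer.
time.sleep(5 * 60)
[/code]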
Re: :!: Multiple parallel python script RUN instances :!:
OK, please give me the simple workflow and script so we can test with the same stuff.
emcodem, wrapping since 2009 you got the rhyme?
Re: :!: Multiple parallel python script RUN instances :!:
I've run tests on 3 other nodes (limiting the workflow to one of them at a time), and on those there's no such problem. The difference is that the main working node/manager runs as a service under a domain Windows user, while on the other nodes it runs as a local user.
Could that be the problem?
Re: :!: Multiple parallel python script RUN instances :!:
That alone is not your issue; credentials more or less only define filesystem access permissions and what's in the PATH, so if it finds python.exe and all the libraries ONCE as a service with domain credentials, it should find them always.
What would be most interesting is to see a log from a bad job and check which log entries we have from the cmd processor of interest. Do you see the processor being executed in the webinterface log viewer?
If not, and there are zero log entries, i.e. the processor has not been executed at all, we face an FFAStrans issue (I don't think so); otherwise it's likely some environment issue, in which case we need the bad logs to get a feeling for what is or isn't happening.
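One way to tell the two cases apart, not FFAStrans-specific: put an unconditional trace at the very top of the script, before anything else can fail. If a "skipped" job never appends a line, Python was never started at all. The trace file path below is just a placeholder:

[code]
import os
import sys
from datetime import datetime

# Unconditional trace: if this file never gets a new line for a bad job,
# the script was never started at all.
with open(r"C:\temp\env_trace.log", "a") as f:  # placeholder path
    f.write(f"{datetime.now()} pid={os.getpid()} exe={sys.executable}\n")
    f.write(f"PATH={os.environ.get('PATH', '')}\n")
[/code]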
emcodem, wrapping since 2009 you got the rhyme?
Re: :!: Multiple parallel python script RUN instances :!:
It shows up as a success: the job just runs through the workflow, reaches the script node, continues to the next node, and finishes with success.
Can you check PM too?
Tnx!