Uhm, to be honest, reading your description I am not really sure what your current setup actually looks like. This sentence made me struggle to understand it:

"ffastrans and webinterface-statusmonitor on NAS are running as service with the local administrator-account."

This could mean many different things. Do they really run on the server that "is" the NAS? In that case, is your NAS a Windows file server?
However, ffastrans has a masterless design, so there is actually no such concept as "transcode instances" and "non-transcode instances". They all work with the same /db directory, and therefore all of them must be able to read from and write to it.
It does not really matter whether ffastrans is started as a background service or not; the only thing that matters is file access permissions. One thing you must avoid at all costs is the mixed setup you have now, where one instance works with a mapped network drive and another instance with full UNC paths.
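To see why mixing path styles bites: a mapped drive letter and a UNC path can point at the very same share, yet they are different path strings, so two instances configured differently will not agree on where the shared /db directory lives (and on Windows, a drive mapped in a user session is typically not even visible to a service). A minimal sketch, with made-up example paths:

```python
# Hypothetical paths: a drive letter mapped to a NAS share vs. the
# full UNC path to the same share. Both may resolve to the same
# physical location, but as configuration strings they differ, so
# instances using different styles disagree about the /db location.
mapped = r"Z:\ffastrans\db"
unc = r"\\nas\share\ffastrans\db"

# Even a case-insensitive comparison shows they are not the same string:
print(mapped.lower() == unc.lower())  # False
```

In other words, pick one convention (full UNC paths are usually the safer choice, since they work regardless of per-user drive mappings) and configure every instance the same way.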
What happens when you cancel a job is that the webinterface talks to the API of one of your hosts (if you did not change anything, it talks to "localhost"). That ffastrans instance must be able to write a file to the ffastrans db directory. (This is how cancelling works: a file is written to the db folder, and running jobs react to this file.)
In that case it does not even matter under which credentials the webinterface exe is running (service or not); only ffastrans needs file access to the db directory.
Again, the flow is: the webinterface talks to the API (localhost:65445), and this ffastrans instance writes a file to the db directory. Then the ffastrans instance that is executing the running job constantly looks for the "cancel file" and reacts to it.
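The flow above can be sketched as a tiny file-based signal. This is only an illustration of the idea, not ffastrans internals: the marker file name and directory layout here are invented, the real point is simply "one instance writes a file into the shared db folder, the instance running the job polls for it":

```python
import os
import tempfile

# Stand-in for the shared FFAStrans /db directory that every instance
# can read and write (in this sketch, just a temp dir).
DB_DIR = tempfile.mkdtemp()

def request_cancel(db_dir: str, job_id: str) -> None:
    # What the API-facing instance effectively does: drop a marker
    # file for the job into the shared db folder. (File name is a
    # hypothetical example.)
    open(os.path.join(db_dir, f"cancel_{job_id}"), "w").close()

def job_should_stop(db_dir: str, job_id: str) -> bool:
    # What the instance executing the job effectively does: poll the
    # shared db folder for the marker file.
    return os.path.exists(os.path.join(db_dir, f"cancel_{job_id}"))

request_cancel(DB_DIR, "job42")
print(job_should_stop(DB_DIR, "job42"))  # True
```

This is also why the file permissions matter so much: if the instance behind the API cannot write into the db folder, or the executing instance cannot read it (e.g. because they resolve the path differently), the cancel signal simply never arrives.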
You can, for example, try reconfiguring the webinterface's STATIC_API_HOST so it talks to one of the other nodes instead.