Input multiple files into a filter/encoder for video stacking?

Questions and answers on how to get the most out of FFAStrans

Input multiple files into a filter/encoder for video stacking?

Post by PeterHF »

Hi FFAStrans folks,

First time user here, so apologies if this is a silly question. Is there a way to feed multiple files into a custom Avisynth or FFmpeg processor? I'd like to use Avisynth's StackHorizontal/StackVertical or FFmpeg's vstack/hstack to make a 2x2 grid. As we'll have 20 sets of these videos, it would be a bonus to be able to do it based on something in the filename, or at minimum based on a watch folder for each set.

Thanks!

P.S. I did see the 2015 thread about FFAStrans only supporting one file per input monitor, but was hoping there might have been an update since (thread: "separate video and sound sources, frame sequences").

Re: Input multiple files into a filter/encoder for video stacking?

Post by FranceBB »

Hi there, Peter, and welcome to the forum.
First of all, keep in mind that to use StackHorizontal() and StackVertical() the clips need to have the very same resolution, the very same framerate, the very same color space and the very same bit depth, so it can be a bit tricky to use those commands on clips without knowing beforehand what they are.
What I came up with are two functions, StackMeHorizontal() and StackMeVertical(), that take care of those things for you and let you stack two files with different characteristics horizontally or vertically:

Code:

function StackMeHorizontal(clip clp, clip clp2) {

# match the resolution of the reference clip (clp)
x=Width(clp)
y=Height(clp)
pp1=Spline64Resize(clp2, x, y)

# match the framerate of the reference clip
fps=FrameRate(clp)
pp2=ConvertFPS(pp1, fps)

# target bit depth of the reference clip
Bpc = BitsPerComponent(clp)

MyColorSpace = clp.IsY ? "Y" : clp.IsY8 ? "Y8" : clp.IsYV12   ? "YV12"  : clp.IsYUY2  ? "YUY2"
        \ : clp.IsYV16      ? "YV16"  : clp.IsYV24  ? "YV24"
        \ : clp.Is420 ? "YUV420" : clp.IsYV411 ? "YUV411" : clp.Is422 ? "YUV422" : clp.Is444 ? "YUV444"
        \ : ""

    MyColorSpace == "Y8" ? ConverttoY8(pp2)
    \ : MyColorSpace == "YUV411" ? ConverttoYUV411(pp2)
    \ : MyColorSpace == "YV12" ? ConverttoYV12(pp2)
    \ : MyColorSpace == "YV16" ? ConverttoYV16(pp2)
    \ : MyColorSpace == "YUY2" ? ConverttoYUY2(pp2)
    \ : MyColorSpace == "YV24" ? Converttoyv24(pp2)
    \ : MyColorSpace == "Y" && Bpc == 10 ? ConverttoY(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV420" && Bpc == 10 ? ConverttoYUV420(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV411" && Bpc == 10 ? ConverttoYUV411(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV422" && Bpc == 10 ? ConverttoYUV422(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV444" && Bpc == 10 ? ConverttoYUV444(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "Y" && Bpc == 12 ? ConverttoY(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV420" && Bpc == 12 ? ConverttoYUV420(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV411" && Bpc == 12 ? ConverttoYUV411(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV422" && Bpc == 12 ? ConverttoYUV422(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV444" && Bpc == 12 ? ConverttoYUV444(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "Y" && Bpc == 14 ? ConverttoY(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV420" && Bpc == 14 ? ConverttoYUV420(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV411" && Bpc == 14 ? ConverttoYUV411(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV422" && Bpc == 14 ? ConverttoYUV422(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV444" && Bpc == 14 ? ConverttoYUV444(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "Y" && Bpc == 16 ? ConverttoY(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV420" && Bpc == 16 ? ConverttoYUV420(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV411" && Bpc == 16 ? ConverttoYUV411(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV422" && Bpc == 16 ? ConverttoYUV422(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV444" && Bpc == 16 ? ConverttoYUV444(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "Y" && Bpc == 32 ? ConverttoY(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV420" && Bpc == 32 ? ConverttoYUV420(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV411" && Bpc == 32 ? ConverttoYUV411(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV422" && Bpc == 32 ? ConverttoYUV422(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV444" && Bpc == 32 ? ConverttoYUV444(pp2).ConvertBits(bits=32)
    \: ""
    
    my_second_clip=last
    my_first_clip=clp
    

StackHorizontal(my_first_clip, my_second_clip)

}
and

Code:

function StackMeVertical(clip clp, clip clp2) {

# match the resolution of the reference clip (clp)
x=Width(clp)
y=Height(clp)
pp1=Spline64Resize(clp2, x, y)

# match the framerate of the reference clip
fps=FrameRate(clp)
pp2=ConvertFPS(pp1, fps)

# target bit depth of the reference clip
Bpc = BitsPerComponent(clp)

MyColorSpace = clp.IsY ? "Y" : clp.IsY8 ? "Y8" : clp.IsYV12   ? "YV12"  : clp.IsYUY2  ? "YUY2"
        \ : clp.IsYV16      ? "YV16"  : clp.IsYV24  ? "YV24"
        \ : clp.Is420 ? "YUV420" : clp.IsYV411 ? "YUV411" : clp.Is422 ? "YUV422" : clp.Is444 ? "YUV444"
        \ : ""

    MyColorSpace == "Y8" ? ConverttoY8(pp2)
    \ : MyColorSpace == "YUV411" ? ConverttoYUV411(pp2)
    \ : MyColorSpace == "YV12" ? ConverttoYV12(pp2)
    \ : MyColorSpace == "YV16" ? ConverttoYV16(pp2)
    \ : MyColorSpace == "YUY2" ? ConverttoYUY2(pp2)
    \ : MyColorSpace == "YV24" ? Converttoyv24(pp2)
    \ : MyColorSpace == "Y" && Bpc == 10 ? ConverttoY(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV420" && Bpc == 10 ? ConverttoYUV420(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV411" && Bpc == 10 ? ConverttoYUV411(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV422" && Bpc == 10 ? ConverttoYUV422(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "YUV444" && Bpc == 10 ? ConverttoYUV444(pp2).ConvertBits(bits=10)
    \ : MyColorSpace == "Y" && Bpc == 12 ? ConverttoY(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV420" && Bpc == 12 ? ConverttoYUV420(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV411" && Bpc == 12 ? ConverttoYUV411(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV422" && Bpc == 12 ? ConverttoYUV422(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "YUV444" && Bpc == 12 ? ConverttoYUV444(pp2).ConvertBits(bits=12)
    \ : MyColorSpace == "Y" && Bpc == 14 ? ConverttoY(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV420" && Bpc == 14 ? ConverttoYUV420(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV411" && Bpc == 14 ? ConverttoYUV411(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV422" && Bpc == 14 ? ConverttoYUV422(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "YUV444" && Bpc == 14 ? ConverttoYUV444(pp2).ConvertBits(bits=14)
    \ : MyColorSpace == "Y" && Bpc == 16 ? ConverttoY(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV420" && Bpc == 16 ? ConverttoYUV420(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV411" && Bpc == 16 ? ConverttoYUV411(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV422" && Bpc == 16 ? ConverttoYUV422(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "YUV444" && Bpc == 16 ? ConverttoYUV444(pp2).ConvertBits(bits=16)
    \ : MyColorSpace == "Y" && Bpc == 32 ? ConverttoY(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV420" && Bpc == 32 ? ConverttoYUV420(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV411" && Bpc == 32 ? ConverttoYUV411(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV422" && Bpc == 32 ? ConverttoYUV422(pp2).ConvertBits(bits=32)
    \ : MyColorSpace == "YUV444" && Bpc == 32 ? ConverttoYUV444(pp2).ConvertBits(bits=32)
    \: ""
    
    my_second_clip=last
    my_first_clip=clp
    

StackVertical(my_first_clip, my_second_clip)

}

As you can tell from the script, the way they work is to take the properties of the first clip as a reference and apply a quick and dirty conversion to the second clip, so that the properties match and the two clips can then be stacked.

I just tested it by feeding Color Bars to it with different resolutions and color spaces and it worked:

[screenshot: two Color Bars clips with different resolutions and color spaces stacked side by side]
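
For reference, a quick test along those lines can be put together with the built-in test patterns (a minimal sketch: it assumes the two functions above are pasted into the same script, and the resolutions and pixel types are just an example):

Code:

# two test clips with different resolutions and color spaces
a = ColorBars(640, 480, pixel_type="YV12")
b = ColorBars(1920, 1080, pixel_type="YV24")
StackMeHorizontal(a, b)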



As for picking up more than one file via FFAStrans, things can get a bit tricky.
This is because with monitors like the P2 watchfolder we know what we're looking for: we know the structure of the folder and the .xml inside it, so we know when to stop; with a normal watchfolder full of random files we don't.
One thing that can be done is to look for a .txt in which you write the names of the two files you need to stack, and then use it to generate variables that populate two ffms2 calls, each of which is used in the Custom Avisynth Script as a source, namely as video1 and video2, like so:

Code:

video1=FFMpegSource2(source1, atrack=-1)
video2=FFMpegSource2(source2, atrack=-1)

StackMeHorizontal(video1, video2)
where "source1" and "source2" are two variables that contain the path and the name with the extension of the files picked up by the watchfolder using the .txt method.
Still, the downside of this is that you only pick up the first audio track automatically, and you have no control over sources with several different audio channels each encoded in its own stream (i.e. discrete audio channels)...
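
If those discrete channels do matter, one way to work around it in a hand-written script could be to load the audio tracks explicitly with FFMS2's FFAudioSource() and merge them back onto the video (a sketch only; the track numbers and channel layout are assumptions about the source files):

Code:

# hypothetical example for "source1": tracks 1 and 2 are assumed to be
# two discrete mono audio streams sitting next to the video stream
video1 = FFVideoSource(source1)
aud_1  = FFAudioSource(source1, track=1)
aud_2  = FFAudioSource(source1, track=2)
video1 = AudioDub(video1, MergeChannels(aud_1, aud_2))
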
Another thing you have no control over is interlacing.
In the drafts of the functions I wrote, I assumed that both sources were progressive, but that might not be the case.
I should actually put in there something like:

Code:

clp2 = IsFieldBased(clp2) ? Bob(clp2) : clp2
and then do the same check on "clp" and bob it as well, so that both clips end up at the same framerate in the end...
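
As a sketch of that idea, a small wrapper could bob whichever clip is field-based before handing both over to the function above (StackMeHorizontalDI is just a hypothetical name, and it assumes Bob() is an acceptable deinterlacer for your material):

Code:

function StackMeHorizontalDI(clip clp, clip clp2) {
    # bob either clip if it is field-based, so both are progressive
    # before the resize, FPS match and stacking
    clp  = IsFieldBased(clp)  ? Bob(clp)  : clp
    clp2 = IsFieldBased(clp2) ? Bob(clp2) : clp2
    return StackMeHorizontal(clp, clp2)
}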

Anyway, it's 1:40 in the morning and I have an exam coming up on November 16th about coding in Java, so I should really get back to studying xD

As for the .txt method, I think there are better ways to go about it, but that's more related to processors/watchfolders/variables and less to Avisynth, so I'll hand over to emcodem on this ;)

Re: Input multiple files into a filter/encoder for video stacking?

Post by PeterHF »

Wow! Thanks so much for this. Hope your exam went well. Always happy to help procrastinate.

In this case we are lucky enough to know what the clips are: each time they are the same 4 sources, with the recordings started and stopped simultaneously. This is still a great help though, as the sources likely weren't all identical. Since you've highlighted the issues and intricacies of automating the multiple-file aspect, I think I'm going to see if there is something we can do upstream of FFAStrans to remove that need.

Re: Input multiple files into a filter/encoder for video stacking?

Post by emcodem »

Hey PeterHF,

FFAStrans always works with the concept of a single source file. OK, there is also support for P2 and CanonXF, which consist of multiple sources, but what happens there is basically that the input files are magically merged into a single file BEFORE the job is started.

The single source file is available in your workflow in the %s_source% variable. Look here for more information about %s_source%: http://ffastrans.com/wiki/doku.php?id=v ... ]=variable
How it could work for you is that you elect one of your 4 source files to be the "master" file and use it to trigger the job start, e.g. you set your monitor folder processor to watch only for *_1.avi.

Second, you use %s_source% to calculate the filenames of the other sources, e.g. *_2.avi, *_3.avi ... Use a populate processor like this:
[screenshot: populate1.png, populate processor configuration]
In the above example, I replace the "_1" in %s_source% with "_2", "_3" and so on; the last thing I do is set %s_success% to one of the calculated variables in order to see the result in the job monitor "status" field of FFAStrans (handy during development).
Alternatively you can try and use the files find processor to locate all the files and populate their names into variables: http://ffastrans.com/wiki/doku.php?id=c ... processors

The next thing you must do is write your Avisynth script using a "write text file" processor. It needs to load the source files and do all your stacking and whatever else you want to do. In order to be compatible with the stock nodes, at the end of your script you must "return m_clip" containing the resulting video and audio.

This is untested but it might give you an idea about my thinking.

Code:

# load the FFMS2 plugin shipped with FFAStrans (adjust the path to your install)
LoadCPlugin("C:\__PATH_TO__\FFAStrans\processors\AVS_plugins\ffms2\x64\ffms2.dll")

# note: FFVideoSource() loads video only; audio would need to be handled separately
video1 = FFVideoSource("%s_source%", seekmode = 0)
video2 = FFVideoSource("%s_file2%", seekmode = 0)
video3 = FFVideoSource("%s_file3%", seekmode = 0)
video4 = FFVideoSource("%s_file4%", seekmode = 0)

# 2x2 grid: top row = 1|2, bottom row = 3|4
m_clip = StackVertical(StackHorizontal(video1, video2), StackHorizontal(video3, video4))
return m_clip
When you check "set text file as source" in the write text file processor, you should be able to use all stock nodes in the rest of your workflow from here.

In case you want to do all that with an external piece of software, the same concept applies: you could write your resulting .avs script into a watchfolder of FFAStrans or start a job via the API. In the end you should always aim to generate an .avs script that returns m_clip, and set %s_source% to contain the path to this script.

Let me know if you have any doubts!
emcodem, wrapping since 2009 you got the rhyme?

Re: Input multiple files into a filter/encoder for video stacking?

Post by PeterHF »

Thanks so much for the details, emcodem! It is good to know how FFAStrans looks at the world.

In the end these 4 sources were available as NDI signals, so I threw something together using OBS to record them as a single file, which is then processed via the watch folder as you mentioned. FFAStrans seems great and I've been excitedly talking up our early success with it as a solution to a few of our peskiest workflow issues. The study (we are a team of engineers/designers at a big teaching hospital in Toronto) runs for another couple of weeks, so I'll get more feedback from the project team then, but things are looking up.

Re: Input multiple files into a filter/encoder for video stacking?

Post by emcodem »

Hey Peter,

in that case, when your input is live, you could also just use a single ffmpeg to capture from the 4 NDI sources and do the stacking in one go... but I guess it all depends on lots of surrounding requirements.
Let me know if you need more help; you have the "I work for a helpful institution" bonus ;-)
emcodem, wrapping since 2009 you got the rhyme?