Any way to change reading blocksize of input file?
Posted: Fri Mar 15, 2019 12:05 pm
Hi all,
I've noticed that regardless of how many simultaneous threads are running and whatever settings I choose, FFAStrans/ffmpeg always seems to read from the disk in approximately 190 KB blocks (see the attached screenshot; it's the red line).
I was wondering if there's any way for it to read 512KB or 1024KB at a time, to reduce the overall IO on the drive.
The media I'll be transcoding will be on a RAID array, so I'd like to minimise the IO load if possible by making fewer but larger requests.
It seems that ffmpeg's file protocol has a blocksize option (see https://ffmpeg.org/ffmpeg-all.html#file), but I can't get it working in the custom ffmpeg module, as it seems to require modifying the -i parameter, which presumably FFAStrans uses to set the correct source path.
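For reference, this is roughly what I was trying to pass (an untested sketch; the paths are just placeholders, and I'm assuming the protocol option can be given as an input option before -i with an explicit file: prefix):

```shell
# blocksize is a file-protocol option, so it goes before -i as an input option.
# 1048576 = 1 MB read block size. Paths below are placeholders.
ffmpeg -blocksize 1048576 -i "file:input.mov" -c copy "output.mov"
```

The problem is that in the custom ffmpeg module I don't control the -i part of the command line, so I can't inject the option there myself.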
Anyone have any ideas? Or is the blocksize for reads fixed internally in FFA?
Many thanks again,
Dave