Hey Graham, can you try with 1.4.2.10? The build is here:
https://github.com/steipal/FFAStrans-Public
You just have to click on "Code" and select "Download ZIP".
The reason I'm suggesting this is that I cannot reproduce the problem with 1.4.2.10 and, frankly, that's exactly what I would expect: the levels should not be changed in the output file, regardless of the "bad" metadata in the input.
Let's reason through this together.
First things first: the H.264 encoder has the following level-handling options:
- Convert to Limited
- Convert to Full
- Set as Limited
- Set as Full
- Same as source
The difference between the "Convert" and the "Set" options is that the first two read the range info from the file and perform the conversion based on it. For instance, "Convert to Limited" checks whether the file is already limited and leaves it as-is; if it's flagged as full, it compresses the range from 0-255 to 16-235 (this scales with bit depth as well, so if you're working at 10-bit it brings it to 64-940, and so on). "Convert to Full" does the opposite: it expands the range, so a limited TV-range source at 16-235 is stretched out to 0-255. As for "Set as Limited" and "Set as Full", they just set the metadata of the output clip, i.e. they flag it as either limited TV range or full PC range without actually changing the levels or caring about what the input was. In other words, that's the same as using --range tv or --range pc in x264 (it just sets the metadata).
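To make the "Convert" behaviour concrete, here's a rough sketch in plain Python of the underlying math (my own illustration using the standard scaling formulas, not FFAStrans code; the function names are made up):

```python
# Illustrative sketch of the level remapping behind "Convert to Limited" /
# "Convert to Full" (assumed standard scaling; not FFAStrans's actual code).

def full_to_limited(y, bits=8):
    """Compress full PC range (0..2^bits-1) into limited TV range."""
    lo, hi = 16 << (bits - 8), 235 << (bits - 8)    # 16-235 at 8-bit, 64-940 at 10-bit
    full_max = (1 << bits) - 1
    return round(lo + y * (hi - lo) / full_max)

def limited_to_full(y, bits=8):
    """Expand limited TV range back out to full PC range."""
    lo, hi = 16 << (bits - 8), 235 << (bits - 8)
    full_max = (1 << bits) - 1
    y = min(max(y, lo), hi)                          # clamp out-of-range codes first
    return round((y - lo) * full_max / (hi - lo))

print(full_to_limited(0), full_to_limited(255))                      # 16 235
print(full_to_limited(0, bits=10), full_to_limited(1023, bits=10))   # 64 940
print(limited_to_full(16), limited_to_full(235))                     # 0 255
```

"Set as Limited" / "Set as Full" would call neither of these: they only write the flag.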
Here's the interesting part: your H.264 encoder is set to "Set as Limited", which means the filter_builder will do absolutely nothing to the input levels before passing the uncompressed A/V stream living in RAM to the x264 encoder. The only thing that setting does is flag the output file as Limited TV Range.
I downloaded both samples (the source and the encoded one), indexed them, and then used VideoTek() to look at the waveform:

- TEST MXF AVC INTRA 100.mxf000000.png
Top is the original AVC-Intra Class 100 file with the normal limited TV-range levels; bottom is the output H.264 file with the wrongly compressed levels.
Emcodem already found the issue with the source: it carries contradicting information between the stream and the container, where one says limited TV range and the other says full PC range. In fact, Adobe Premiere exported a raw_video.h264 whose metadata says "Full PC Range" and then muxed it into an .mxf container flagged "Limited TV Range". But here's the thing: as long as the H.264 encoding node is on "Set as Limited", the encoder won't touch the levels; it just leaves them alone and sets the metadata, which is what is happening here.
If you look at the VideoTek() waveform of the encoded file above, you can see that the levels have not only been compressed twice despite already being limited TV range, but I'm about 90% sure the conversion happened inside Avisynth itself. The reason I'm saying this is that there's only one function I know for a fact introduces those "lines" (the horizontal stripes you can see in the waveform), and that function is Levels(). If this had happened at the filter_builder level in FFMpeg, we wouldn't see those lines but a different waveform. This is corroborated by the fact that with the encoder on "Set as Limited" it won't mess with the levels.
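As a sanity check on the "compressed twice" reading, here's a small sketch in plain Python (my own illustration, not FFAStrans or Avisynth code) of what a second full-to-limited squeeze does to footage that is already limited TV range:

```python
# A Levels()-style full->limited remap, applied once (correct) and then
# again to already-limited input (the suspected bug). Illustrative only.

def full_to_limited(y):
    return round(16 + y * 219 / 255)

once = {full_to_limited(y) for y in range(256)}       # full 0-255 squeezed correctly
twice = {full_to_limited(y) for y in range(16, 236)}  # limited 16-235 squeezed again

print(min(twice), max(twice))   # 30 218 -> blacks lifted, whites crushed further
print(len(once), len(twice))    # 220 189 -> the double pass uses fewer codes
```

Fewer distinct codes means smooth ramps collapse into flat steps, which is consistent with the kind of horizontal striping Levels() leaves in a waveform.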
Here's the AVS Script created by the A/V Decoder:
Code:
_ffas_video = "Z:\wetransfer_test-mxf-avc-intra-100-mp4_2025-01-31_1441\TEST MXF AVC INTRA 100.mxf"
_ffas_audio = "Z:\wetransfer_test-mxf-avc-intra-100-mp4_2025-01-31_1441\TEST MXF AVC INTRA 100.mxf"
_ffas_width = 1920
_ffas_height = 1080
_ffas_work_fdr = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87"
Import("Z:\FFAStrans-Public-master\processors\AVS_plugins\avsi\mtmodes.avsi")
Import("Z:\FFAStrans-Public-master\processors\AVS_plugins\avsi\_ffas_helpers.avsi")
LoadPlugin("Z:\FFAStrans-Public-master\processors\AVS_plugins\ffms2\x64\ffms2.dll")
LoadPlugin("Z:\FFAStrans-Public-master\processors\AVS_plugins\bas\x64\BestAudioSource.dll")
LoadPlugin("Z:\FFAStrans-Public-master\processors\AVS_plugins\JPSDR\x64\plugins_JPSDR.dll")
video = FFVideoSource("Z:\wetransfer_test-mxf-avc-intra-100-mp4_2025-01-31_1441\TEST MXF AVC INTRA 100.mxf", 0, cachefile = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87\1-0-0~250201211620568~6748~20250131-1434-5666-48c2-c35b785d87ac~dec_avmedia~ffindex.dat", fpsnum=25 , fpsden=1, seekmode = 0)
audio_null = BlankClip(length=103, width=1920, height=1080, color=$000000, channels=1, audio_rate=48000, fps=25)
audio_1 = audio_null
audio_2 = audio_null
audio_3 = audio_null
audio_4 = audio_null
audio_5 = audio_null
audio_6 = audio_null
audio_7 = audio_null
audio_8 = audio_null
audio_9 = audio_null
audio_10 = audio_null
audio_11 = audio_null
audio_12 = audio_null
audio_13 = audio_null
audio_14 = audio_null
audio_15 = audio_null
audio_16 = audio_null
audio_17 = audio_null
audio_18 = audio_null
audio_19 = audio_null
audio_20 = audio_null
audio_21 = audio_null
audio_22 = audio_null
audio_23 = audio_null
audio_24 = audio_null
audio_25 = audio_null
audio_26 = audio_null
audio_27 = audio_null
audio_28 = audio_null
audio_29 = audio_null
audio_30 = audio_null
audio_31 = audio_null
audio_32 = audio_null
audio = FFAudioSource(_ffas_video, 1, cachefile = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87\1-0-0~250201211620568~6748~20250131-1434-5666-48c2-c35b785d87ac~dec_avmedia~ffindex.dat").ResampleAudio(48000).ConvertAudioTo16bit()
audio_1 = GetChannel(audio, 1)
audio = FFAudioSource(_ffas_video, 2, cachefile = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87\1-0-0~250201211620568~6748~20250131-1434-5666-48c2-c35b785d87ac~dec_avmedia~ffindex.dat").ResampleAudio(48000).ConvertAudioTo16bit()
audio_2 = GetChannel(audio, 1)
audio = FFAudioSource(_ffas_video, 3, cachefile = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87\1-0-0~250201211620568~6748~20250131-1434-5666-48c2-c35b785d87ac~dec_avmedia~ffindex.dat").ResampleAudio(48000).ConvertAudioTo16bit()
audio_3 = GetChannel(audio, 1)
audio = FFAudioSource(_ffas_video, 4, cachefile = "c:\.ffastrans_work_root\20250130-1251-4763-4bc2-15ce9978a82d\20250201-2116-1494-3051-1633f2e36a87\1-0-0~250201211620568~6748~20250131-1434-5666-48c2-c35b785d87ac~dec_avmedia~ffindex.dat").ResampleAudio(48000).ConvertAudioTo16bit()
audio_4 = GetChannel(audio, 1)
audio = MergeChannels(audio_1, audio_2, audio_3, audio_4, audio_5, audio_6, audio_7, audio_8, audio_9, audio_10, audio_11, audio_12, audio_13, audio_14, audio_15, audio_16, audio_17, audio_18, audio_19, audio_20, audio_21, audio_22, audio_23, audio_24, audio_25, audio_26, audio_27, audio_28, audio_29, audio_30, audio_31, audio_32)
Global m_clip = AudioDub(video, audio)
m_clip = RemoveAlphaPlane(m_clip)
m_clip = SetChannelMask(m_clip, false, 0)
m_clip = ConvertToYUV422(m_clip, interlaced=true)
m_clip = ConvertBits(m_clip, 8, dither=1)
m_clip = AssumeFrameBased(m_clip)
m_clip = AssumeTFF(m_clip)
Return m_clip
There's absolutely nothing wrong here.
Even ConvertToYUV422() won't screw it up, 'cause it thinks the clip is full PC range, so under the hood it uses matrix="PC.709" instead of matrix="Rec.709", thus leaving the levels alone.
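For context, here's a rough sketch of what the matrix choice changes, using the standard BT.709 luma coefficients (my own illustration in plain Python; Avisynth's internals may differ):

```python
# Luma leg of an RGB->YUV conversion with "PC.709" vs "Rec.709" scaling
# (illustrative; assumes 8-bit and the standard BT.709 coefficients).

KR, KG, KB = 0.2126, 0.7152, 0.0722    # BT.709 luma weights

def luma(r, g, b, pc=True):
    y = KR * r + KG * g + KB * b       # full-swing luma, 0..255
    if pc:                             # PC.709: keep the full 0-255 swing
        return round(y)
    return round(16 + y * 219 / 255)   # Rec.709: squeeze into 16-235

print(luma(255, 255, 255, pc=True))    # 255 -> white stays where it is
print(luma(255, 255, 255, pc=False))   # 235 -> white gets compressed
```

Since the clip is (wrongly) flagged as full, the PC.709 path is taken and the code values pass through unchanged, which is why this conversion isn't the culprit.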
Frame properties after the Avisynth Script:

- Screenshot from 2025-02-01 21-09-19.png
As you can see, it's set to Full PC Range (i.e _ColorRange == 0) instead of _ColorRange == 1, but it's "fine" 'cause the levels are untouched:

- Screenshot from 2025-02-01 21-30-41.png
Here's the comparison of the input source alongside the output produced using your simple workflow:

- TEST MXF AVC INTRA 100.mxf200000.png
Here's the output sample:
https://we.tl/t-NKR9b6HTMl
Please note that everything I said above holds as long as your workflow is A/V Decoder -> H.264 Encoder. It might not hold if you add extra Avisynth-based nodes in between, like A/V Decoder -> Filter X -> H.264 Encoder, as the internal logic might perform different adjustments depending on the metadata. But yeah, it should work.
Anyway, please take 1.4.2.10 for a spin and let me know how it goes.
