
cross-posted from: https://leminal.space/post/6179210

I have a collection of about 110 4K Blu-Ray movies that I've ripped, and I want to take the time to compress and store them for use on a future Jellyfin server.

I know some very basics about ffmpeg and general codec information, but I have a very specific set of goals in mind I'm hoping someone could point me in the right direction with:

  1. Smaller file size (obviously)
  2. Image quality good enough that I cannot spot the difference, even on a high-end TV or projector
  3. Preserved audio
  4. Preserved HDR metadata

In a perfect world I would love to convert the proprietary HDR into an open standard, and the Dolby Atmos audio into an open standard as well, but the above is a good compromise.

Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format, any tips or pointers?

pe1uca@lemmy.pe1uca.dev 5 points 1 month ago

I sort of did this for some movies I had, to lessen the burden of on-the-fly encoding, since I already know what formats my devices support.
Just something to keep in mind: my devices only support HD, so I had a lot of wiggle room on quality.

Here's the command jellyfin was running, which helped me start figuring out what I needed.

/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -canvas_size 1920x1080 -i file:"/mnt/peliculas/Harry-Potter/3.hp.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:0 -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 5605745 -bufsize 11211490 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -sc_threshold:v:0 0 -filter_complex "[0:3]scale=s=1920x1080:flags=fast_bilinear[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p[main];[main][sub]overlay=eof_action=endall:shortest=1:repeatlast=0" -start_at_zero -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693.m3u8"

From there I played around with several options and ended up with this command (it has several -map options since I was actually combining several files into one):

ffmpeg -y -threads 4 \
-init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
-i './Harry Potter/3.hp.mkv' \
-map 0:v:0 -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 \
-map 0:a:0 -map 0:a:1 \
-fps_mode passthrough -f mp4 ./hp-output/3.hp.mix.mp4

If you want to know the other accepted values for each option, you can run ffmpeg -h encoder=h264_nvenc.

I don't have at hand all the sources where I learned what each option does, but here's what to keep in mind, to the best of my memory.
All of these comments are from the point of view of h264 with NVENC.
I assume you know how the video and stream number selectors work in ffmpeg.

  • Using GPU hardware acceleration produces a lower-quality image at the same size/preset; it just takes less time to process.
  • You need to adjust the -preset, -profile and -level options to your quality and processing-time needs.
  • -vf was to convert the pixel format of my original files to a more common one.
  • The combination of the -rc and -cq options is what controls the variable rate (you have to set -b:v to zero, otherwise that value is used as a constant bitrate target); see the sketch after this list.

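For example, here's a rough sketch of the two behaviours (placeholder file names and untested values; tune -cq to your own content):

# Constant quality: with -b:v 0, -cq drives the quality (lower -cq = better quality, bigger file)
ffmpeg -y -i input.mkv -map 0:v:0 -c:v h264_nvenc -preset:v p7 \
-rc:v vbr -cq:v 23 -b:v 0 -map 0:a:0 -c:a copy out.cq23.mkv
# Per the note above: a non-zero -b:v takes over and the encoder targets that bitrate instead
ffmpeg -y -i input.mkv -map 0:v:0 -c:v h264_nvenc -preset:v p7 \
-rc:v vbr -cq:v 23 -b:v 8M -map 0:a:0 -c:a copy out.8mbps.mkv
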
Try different combinations on small chunks of your files.
IIRC the options you need are -ss, -t and/or -to, so you only process a chunk of the file and don't have to wait hours for a full movie.
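Something like this (just a sketch; adjust the timestamps, paths and -cq value to taste) encodes a one-minute sample starting ten minutes in:

# -ss before -i seeks the input, -t 60 limits the read to 60 seconds
ffmpeg -y -ss 00:10:00 -t 60 -i './Harry Potter/3.hp.mkv' \
-map 0:v:0 -c:v h264_nvenc -preset:v p7 -rc:v vbr -cq:v 26 -b:v 0 \
-map 0:a:0 -c:a copy ./hp-output/sample.cq26.mkv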


> Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format

There's no need to have a GPU or a big CPU to run these commands; the only cost is time.
Since we're talking about preprocessing the library, you don't need real-time encoding: your hardware can take one or two hours to process a 30-minute video and you'll still have the result, so you only need patience.
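As a sketch of the CPU-only equivalent (not my exact command; the paths and the -crf value are placeholders):

# Same idea without a GPU: libx264 with a CRF, it just runs slower
ffmpeg -y -i './Harry Potter/3.hp.mkv' \
-map 0:v:0 -c:v libx264 -preset slow -crf 20 -pix_fmt yuv420p \
-map 0:a:0 -c:a copy ./hp-output/3.hp.cpu.mkv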

You can see jellyfin uses -preset veryfast while I use -preset p7, which the documentation marks as slowest (best quality).
This is because jellyfin only processes the video while you're watching it, so it needs to produce frames faster than your devices display them.
My command doesn't have that constraint: I just run it, and whenever it finishes I'll have the files ready for when I want to watch them, without the need for an additional transcode.
