Plex GPU transcoding on Docker on LXC on Proxmox - jocke (2023)

I recently needed to get GPU transcoding working in Plex. The setup involves Plex running in a Docker container, inside an LXC container running on Proxmox. I found some general guidelines online, but none that covered everything (particularly the dual layer of virtualization). I ran into a few challenges getting this working properly, so I'll try to provide a full guide here.

I assume you have Proxmox and an LXC container set up, ready to go, running Debian 11 (Bullseye). In my example I will run the LXC container named docker1 (ID 101) on my Proxmox host. Everything will be headless (i.e. no X involved). The LXC container is privileged, with fuse=1 and nesting=1 enabled as features. As GPU I will use an Nvidia RTX A2000. All commands are executed as root.
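
For reference, a minimal sketch of how those container features can be set (assuming container ID 101 as in this example; adjust to your container and verify the pct syntax against your Proxmox version):

# on the Proxmox host, with the container stopped
pct set 101 --features fuse=1,nesting=1
# which ends up in /etc/pve/lxc/101.conf as
# features: fuse=1,nesting=1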

Proxmox host

The first step is to install the drivers on the host. Nvidia has an official Debian repository that we could use. However, this poses a potential problem: we later need to install the driver in the LXC container without the kernel modules. I couldn't find a way to do that with the packages in the official Debian repository, so I had to install the driver manually in the LXC container. The other aspect is that both the host and the LXC container need to run the same driver version (otherwise it will not work). If we install from the official Debian repository on the host and install the driver manually in the LXC container, we can easily end up with different versions (any time you run an apt upgrade on the host). To keep this as consistent as possible, we will install the driver manually on both the host and the LXC container.

# edit (2022-09-03): we need to disable the nouveau kernel module before we can install the NVIDIA driver
echo -e "blacklist nouveau\noptions nouveau modeset=0" > /etc/modprobe.d/blacklist-nouveau.conf
update-initramfs -u
reboot

# install pve headers matching your current kernel
apt install pve-headers-$(uname -r)

# download + install nvidia driver
# 510.47.03 was the latest at the time of writing
wget -O NVIDIA-Linux-x86_64-510.47.03.run https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
chmod +x NVIDIA-Linux-x86_64-510.47.03.run
./NVIDIA-Linux-x86_64-510.47.03.run --check
# answer "no" when asked if you want to install 32-bit compatibility drivers
# answer "no" when asked if it should update the X config
./NVIDIA-Linux-x86_64-510.47.03.run

With the drivers installed, we need to add some udev rules. This ensures that the correct kernel modules are loaded and that all relevant device files are created at boot.

# add kernel modules
echo -e '\n# load NVIDIA modules\nnvidia-drm\nnvidia-uvm' >> /etc/modules-load.d/modules.conf

# add the following to /etc/udev/rules.d/70-nvidia.rules
# it creates the relevant device files in /dev/ during boot
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
SUBSYSTEM=="module", ACTION=="add", DEVPATH=="/module/nvidia", RUN+="/usr/bin/nvidia-modprobe -m"

To prevent the driver/kernel modules from being unloaded whenever the GPU is not in use, we need to run the Nvidia persistence daemon (nvidia-persistenced). It is shipped with the driver we just installed.


# copy and extract
cp /usr/share/doc/NVIDIA_GLX-1.0/samples/nvidia-persistenced-init.tar.bz2 .
bunzip2 nvidia-persistenced-init.tar.bz2
tar -xf nvidia-persistenced-init.tar

# remove old, if it exists (to avoid a masked service)
rm /etc/systemd/system/nvidia-persistenced.service

# install
chmod +x nvidia-persistenced-init/install.sh
./nvidia-persistenced-init/install.sh

# check that the service is ok
systemctl status nvidia-persistenced.service

# clean up
rm -rf nvidia-persistenced-init*

If you got this far without errors, you can reboot the Proxmox host. After rebooting you should see output like the following (GPU type/information will of course vary depending on your GPU);

root@foobar:~# nvidia-smi
Wed Feb 23 01:34:17 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    On   | 00000000:82:00.0 Off |                  Off |
| 30%   36C    P2     4W /  70W |      1MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

root@foobar:~# systemctl status nvidia-persistenced.service
● nvidia-persistenced.service - NVIDIA Persistence Daemon
     Loaded: loaded (/lib/systemd/system/nvidia-persistenced.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-02-23 00:18:04 CET; 1h 16min ago
    Process: 9300 ExecStart=/usr/bin/nvidia-persistenced --user nvidia-persistenced (code=exited, status=0/SUCCESS)
   Main PID: 9306 (nvidia-persiste)
      Tasks: 1 (limit: 154511)
     Memory: 512.0K
        CPU: 1.309s
     CGroup: /system.slice/nvidia-persistenced.service
             └─9306 /usr/bin/nvidia-persistenced --user nvidia-persistenced

Feb 23 00:18:03 foobar systemd[1]: Starting NVIDIA Persistence Daemon...
Feb 23 00:18:03 foobar nvidia-persistenced[9306]: Started (9306)
Feb 23 00:18:04 foobar systemd[1]: Started NVIDIA Persistence Daemon.

root@foobar:~# ls -alh /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 23 00:17 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 23 00:17 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 23 00:17 /dev/nvidia-modeset
crw-rw-rw- 1 root root 511,   0 Feb 23 00:17 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Feb 23 00:17 /dev/nvidia-uvm-tools

If the correct GPU is shown in nvidia-smi, the persistence service is running fine, and all five device files are present, we are ready to proceed with the LXC container.

LXC container

We need to add the appropriate LXC configuration to our container. Shut down the LXC container and make the following changes to the LXC configuration file;

# edit /etc/pve/lxc/101.conf and add the following
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file

The numbers in the cgroup2 lines come from the fifth column of the device listing above (ls -alh /dev/nvidia*). For me the two nvidia-uvm files randomly switch between 509 and 511, while the other three stay fixed at 195. I'm not sure why they toggle between the two values (if you know how to make them static, please let me know), but LXC doesn't complain if you allow numbers that don't exist, so we can simply add all three to make sure it always works.
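
As an illustration, the major number is the first of the two numbers printed before the date in that listing; for example (taken from the host output above):

crw-rw-rw- 1 root root 195,   0 Feb 23 00:17 /dev/nvidia0       <- major number 195
crw-rw-rw- 1 root root 511,   0 Feb 23 00:17 /dev/nvidia-uvm    <- major number 511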

Now we can start the LXC container again, and we are ready to install the Nvidia driver inside it. This time we install it without the kernel module, so there is no need to install kernel headers.


wget -O NVIDIA-Linux-x86_64-510.47.03.run https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
chmod +x NVIDIA-Linux-x86_64-510.47.03.run
./NVIDIA-Linux-x86_64-510.47.03.run --check
# answer "no" when asked if it should update the X config
./NVIDIA-Linux-x86_64-510.47.03.run --no-kernel-module

At this point you should be able to restart your LXC container. Make sure the files and driver are working as expected before proceeding with Docker setup.

root@docker1:~# ls -alh /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 23 00:17 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 23 00:17 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 23 00:17 /dev/nvidia-modeset
crw-rw-rw- 1 root root 511,   0 Feb 23 00:17 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Feb 23 00:17 /dev/nvidia-uvm-tools

root@docker1:~# nvidia-smi
Wed Feb 23 01:50:15 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    Off  | 00000000:82:00.0 Off |                  Off |
| 30%   34C    P8    10W /  70W |      3MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Docker container

Now we can move on to getting Docker working. We will use docker-compose, and we make sure to run the latest versions by removing the Docker and docker-compose packages provided by Debian. We will also install the Docker runtime provided by Nvidia (nvidia-docker2). Both are relevant for making the GPU available inside Docker.

# remove debian-provided packages
apt remove docker-compose docker docker.io containerd runc

# install docker from the official repository
apt update
apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install docker-ce docker-ce-cli containerd.io

# install docker-compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# install docker-compose bash completion
curl \
  -L https://raw.githubusercontent.com/docker/compose/1.29.2/contrib/completion/bash/docker-compose \
  -o /etc/bash_completion.d/docker-compose

# install nvidia-docker2
# edit (2022-09-03): newer versions of the toolkit require a different repository
apt install -y curl
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
keyring_file="/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg"
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o ${keyring_file}
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sed "s#deb https://#deb [signed-by=${keyring_file}] https://#g" | \
  tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt update
apt install nvidia-docker2

# restart systemd + docker (if you don't reload systemd it might not work)
systemctl daemon-reload
systemctl restart docker

We should now be able to run Docker containers with GPU support. Let's test;

root@docker1:~# docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Tue Feb 22 22:15:14 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    Off  | 00000000:82:00.0 Off |                  Off |
| 30%   29C    P8     4W /  70W |      1MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

root@docker1:~# cat docker-compose.yml
version: '3.7'
services:
  test:
    image: tensorflow/tensorflow:latest-gpu
    command: python -c "import tensorflow as tf;tf.test.gpu_device_name()"
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

root@docker1:~# docker-compose up
Starting test_test_1 ... done
Attaching to test_test_1
test_1  | 2022-02-22 22:49:00.691229: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
test_1  | To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
test_1  | 2022-02-22 22:49:02.119628: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /device:GPU:0 with 4141 MB memory:  -> device: 0, name: NVIDIA RTX A2000, pci bus id: 0000:82:00.0, compute capability: 8.6
test_test_1 exited with code 0

Yay! It works! Let's add the final pieces for a fully functional Plex docker-compose.yml.

Version: "3.7" Services: Plex: Container Name: Plex Host Name: Plex Image: Linux Server/Plex: Last Reboot: Unless Interrupted Deployment: Resources: Reservations: Devices: - Capabilities: [GPU] Environment: TZ: Europe/Paris PUID: 0 PGID: 0 VERSION: latest NVIDIA_VISIBLE_DEVICES: all NVIDIA_DRIVER_CAPABILITIES: compute, video, utility network_mode: host volumes: - /srv/config/plex:/config - /storage/media:/data/media - /storage/temp /plex/transcode :/transcode - /storage/temp/plex/tmp:/tmp

And it works! Wow!



Problems encountered

I had some challenges trying to get everything to work. All the solutions have been included in the guide above, but I'll briefly mention them here for your reference.

1. nvidia-smi does not work in docker container

I got the error message Failed to initialize NVML: Unknown error when running nvidia-smi in the Docker container. It turned out that this was caused by cgroup2 replacing cgroup on the host.

My first solution was to disable cgroup2 and go back to cgroup. This can be done by updating the kernel boot parameters as follows;

# assuming EFI/UEFI boot
# for legacy BIOS the GRUB config needs to be changed instead
echo "$(cat /etc/kernel/cmdline) systemd.unified_cgroup_hierarchy=false" > /etc/kernel/cmdline
proxmox-boot-tool refresh

However, the correct solution is to change the lxc.cgroup.devices.allow lines in the LXC configuration file to lxc.cgroup2.devices.allow, which fixes the problem permanently.
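
In other words, it is a one-word rename of the allow lines (sketch, using the same device major as above);

# old cgroup v1 syntax (breaks once the host uses cgroup2)
lxc.cgroup.devices.allow: c 195:* rwm
# new cgroup v2 syntax
lxc.cgroup2.devices.allow: c 195:* rwm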

2. GPU configuration in docker-compose

The official documentation for docker-compose and Plex indicates that GPU support is added via the runtime parameter. However, running the Docker and docker-compose versions from the Debian stable repository (Debian 11), I could not use the runtime: nvidia parameter.


The newer method of consuming a GPU in docker-compose, the deploy parameter, is only supported in recent docker-compose releases (v1.28.0+), which is newer than what is in the Debian 11 stable repository. We therefore need to run the latest versions for this to work, using the new deploy parameter, as sketched below.
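
To make the difference concrete, here is a minimal sketch of the two styles for a service (the runtime style is what the older documentation describes; only the deploy style is used in this guide);

# old style, via the nvidia runtime
services:
  plex:
    runtime: nvidia

# new style, docker-compose v1.28.0+
services:
  plex:
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]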

3. docker-compose GPU environment variables

GPU transcoding in Plex didn't work with the deploy parameter alone. It also needs two environment variables to work. This wasn't clearly documented, and it caused some frustration while trying to get everything working.

NVIDIA_VISIBLE_DEVICES: all
NVIDIA_DRIVER_CAPABILITIES: compute,video,utility

4. High CPU usage due to fuse-overlayfs

I also noticed high CPU usage from fuse-overlayfs (the storage driver I'm using for Docker), caused by the Plex container. It turned out to be the background task "detect intros", which transcodes the audio (to find the intros). Previously /tmp, which that task uses as its transcode directory, was not mounted and therefore was part of the fuse-overlayfs. This happened even though the transcode path was set to /transcode (Settings -> Transcoder -> Transcoder temporary directory). Normal transcoding does seem to use /transcode, so it appears that only the intro detection task has this problem. Mounting /tmp from the host fixed the problem.
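
For reference, these are the relevant volume mounts from the docker-compose.yml above; the /tmp mount is the one that keeps the intro detection transcodes off fuse-overlayfs;

# under the plex service in docker-compose.yml
volumes:
  - /storage/temp/plex/transcode:/transcode
  - /storage/temp/plex/tmp:/tmp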

Update

Every time you update the kernel, you have to reinstall the driver on the Proxmox host. If you want to keep running the same NVIDIA driver version, the process is simple: just run the original driver installer again. In the LXC container nothing needs to be done (since the version stays the same and no kernel modules are involved there).

# Answer "No" when asked if you want to install 32-bit compatibility drivers. # Answer "No" when asked if you should update your X./NVIDIA-Linux-x86_64-510.47.03.runreboot configuration

If you want to upgrade the NVIDIA driver, a few additional steps are required. If you currently have a working NVIDIA driver (i.e. you haven't just updated the kernel), you should uninstall the old driver first (otherwise the installer will complain that the kernel module is loaded, as it is immediately reloaded when you try to unload it).


# uninstall the old driver to prevent the kernel modules from being loaded
# this step can be skipped if the driver is already broken after a kernel update
./NVIDIA-Linux-x86_64-510.47.03.run --uninstall
reboot

# if you have updated the kernel, we need the new headers
apt install pve-headers-$(uname -r)

# install the new version, 515.65.01 is the latest at the time of writing
# (the installer will prompt you to uninstall the old version, so the manual uninstall above can be skipped)
wget -O NVIDIA-Linux-x86_64-515.65.01.run https://us.download.nvidia.com/XFree86/Linux-x86_64/515.65.01/NVIDIA-Linux-x86_64-515.65.01.run
chmod +x NVIDIA-Linux-x86_64-515.65.01.run
./NVIDIA-Linux-x86_64-515.65.01.run --check
# answer "no" when asked if you want to install 32-bit compatibility drivers
# answer "no" when asked if it should update the X config
./NVIDIA-Linux-x86_64-515.65.01.run
reboot

# the new driver should now be installed and working
root@foobar:~# nvidia-smi
Sat Sep  3 06:04:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    On   | 00000000:82:00.0 Off |                  Off |
| 30%   32C    P8     4W /  70W |      1MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Now we need to update the driver in the LXC container as well, since the versions must match;

# download the new version
wget -O NVIDIA-Linux-x86_64-515.65.01.run https://us.download.nvidia.com/XFree86/Linux-x86_64/515.65.01/NVIDIA-Linux-x86_64-515.65.01.run
chmod +x NVIDIA-Linux-x86_64-515.65.01.run
./NVIDIA-Linux-x86_64-515.65.01.run --check
# answer "no" when asked if you want to install 32-bit compatibility drivers
# answer "no" when asked if it should update the X config
./NVIDIA-Linux-x86_64-515.65.01.run --no-kernel-module

root@docker1:~# nvidia-smi
Sat Sep  3 06:11:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    Off  | 00000000:82:00.0 Off |                  Off |
| 30%   30C    P8     4W /  70W |      1MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Restart the LXC container and things should work with the new driver.

FAQs

Which is better, LXC or Docker?

Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine, and enables developers to easily run applications that reside in their own application environment which is specified by a docker image.

Should you run Docker on Proxmox?

You manage a Docker instance from the host, using the Docker Engine command line interface. It is not recommended to run docker directly on your Proxmox VE host. If you want to run application containers, for example, Docker images, it is best to run them inside a Proxmox Qemu VM.

Is LXC faster than Docker?

The performance difference between LXC and Docker is almost insignificant. Both provide fast boot times.

Can I run Docker inside LXC?

Running Docker inside LXC allows us to reap all the benefits of running it in a separate environment from the host without having to deal with the complexity and overhead associated with running it in a full virtual machine.

Is LXC obsolete?

The LXC project is still going strong and shows no signs of winding down; LXC 5.0 was released in July and comes with a promise of support until 2027.

Can Proxmox run Docker containers?

If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox Qemu VM.

How much RAM should I leave for Proxmox?

Memory, minimum 2 GB for OS and Proxmox VE services. Plus designated memory for guests. For Ceph or ZFS additional memory is required, approximately 1 GB memory for every TB used storage.

Is 16GB RAM enough for Proxmox?

Having 16GB of RAM isn't strictly necessary; it is possible to get away with 4GB. However, if you want to run multiple VMs at once, that would be pretty tough on the system.

What are the disadvantages of using Docker?

Disadvantages of Docker
  • Docker is not a good fit for applications that require a rich GUI.
  • It is difficult to manage a large number of containers.
  • Docker does not provide cross-platform compatibility: an application designed to run in a Docker container on Windows cannot run in a Linux Docker container.

What is better than Proxmox?

However, ESXi, an industry-standard virtualization solution, provides greater RAM and host capacities than Proxmox. While Proxmox offers the same capacities for all users for free, ESXi offers several performance tiers based on licensing, increasing the number of hosts in a cluster and RAM amount per host.

Is LXC faster than a VM?

LXCs start much faster than VMs and use fewer host resources per container than VMs, so they are ideal for packing a lot of isolated processes onto one host and/or starting them up frequently.

When to use LXC over Docker?

When comparing Docker vs. LXC, consider the main difference that containerd is only used for single application containers, while LXC is able to run multiple applications inside system containers.

Should I use LXC or LXD?

Since it is an extension of LXC, LXD will support some of the advanced features such as live migration and snapshots. LXD is not designed to replace LXC, but it is intended to make LXC based containers better, flexible, and easy to use.

Are LXC containers safe?

Unprivileged containers arrived in LXC 1.0 and require kernel version 3.13 or higher. They are considered safe, as the container's uid 0 is mapped to an unprivileged user outside the container, with extra rights only on resources that it owns itself.

Do LXC containers have their own kernel?

LXC containers are often considered as something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.

Is Docker still the best container?

Simply put, Docker is heavy. We get better performance with a lightweight container runtime like containerd or CRI-O. As a recent example, Google benchmarks have shown that containerd consumes less memory and CPU, and that pods start in less time than on Docker.
