
fedora

prime-run / nvidia-prime: the sleep/wake handoff from the Intel GPU to the NVIDIA GPU causes a black screen on wake.

nvidia-suspend.service glitch

fedora origami

  • tlp for power
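On the sleep/wake note: for the black screen on wake, NVIDIA's documented approach is to preserve video memory allocations across suspend and enable the driver's suspend/resume units. A sketch, assuming the proprietary driver ships those systemd units (the file name nvidia-power.conf is my choice, not a fixed name):

```shell
# Modprobe option NVIDIA documents for suspend/resume stability:
MODPROBE_LINE='options nvidia NVreg_PreserveVideoMemoryAllocations=1'
echo "$MODPROBE_LINE"
# As root, write it and enable the driver's suspend/resume hooks:
#   echo "$MODPROBE_LINE" | sudo tee /etc/modprobe.d/nvidia-power.conf
#   sudo systemctl enable nvidia-suspend.service nvidia-resume.service nvidia-hibernate.service
# Then reboot for the module option to take effect.
```

TLP can still fight the driver's runtime power management, so if wake glitches persist after this, check TLP's runtime-pm settings for the dGPU.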

Yeah, the quick enable method from the Nvidia docs is close but not quite right for your Origami setup (Fedora Atomic/immutable). That guide uses curl + sudo tee to add the repo, then dnf install, which works great on regular Fedora Workstation. On Atomic distros like yours, though, /usr is read-only and dnf doesn't manage the deployed image, so the install step either errors out or leaves a partial install that breaks on reboot.
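If you ever want to confirm which camp a machine falls into before picking a command, ostree-based systems create a marker file at boot; a minimal check (the /run/ostree-booted path is the standard ostree marker):

```shell
# Pick the right package tool for this boot: ostree creates
# /run/ostree-booted on Atomic variants (Silverblue/Kinoite/etc.).
if [ -e /run/ostree-booted ]; then
    MODE="rpm-ostree"
else
    MODE="dnf"
fi
echo "Install packages on this system with: $MODE"
```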

Instead, adapt it with rpm-ostree (the Atomic way to layer packages persistently). Here's the quick, working version based on current community practices for Fedora Atomic (Silverblue/Kinoite/Origami equivalents in 2026):

  1. Add the official Nvidia Container Toolkit repo (repo files live under /etc, which stays writable on Atomic):

    curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo |\
    sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

    (This creates the repo file; /etc/yum.repos.d is one of the writable spots on Atomic.)

  2. Layer the toolkit packages (use rpm-ostree instead of dnf):

    sudo rpm-ostree install nvidia-container-toolkit
  3. Reboot to apply the new deployment:

    systemctl reboot
  4. Generate the CDI config (this is what lets Podman see GPUs):

    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    Then verify: nvidia-ctk cdi list (should list your RTX 3050 as nvidia.com/gpu=0, nvidia.com/gpu=all, etc.).

  5. Test passthrough (same as before):

    podman run --rm --device nvidia.com/gpu=all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi

If the test shows your GPU details, CDI is working. Update your Open WebUI run command to use --device nvidia.com/gpu=all instead of --gpus=all (Podman prefers the full CDI syntax on newer Fedora Atomic).
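For example, a hypothetical Open WebUI launch adapted to the CDI syntax; the image tag, port, and volume name here are assumptions, so mirror whatever your current run command uses:

```shell
# Build the run command as an array so the flags stay quoted correctly.
# Image, port, and volume below are illustrative, not your exact setup.
CMD=(podman run -d --name open-webui \
     -p 3000:8080 \
     --device nvidia.com/gpu=all \
     -v open-webui:/app/backend/data \
     ghcr.io/open-webui/open-webui:main)
echo "${CMD[@]}"   # print it first; run "${CMD[@]}" once it looks right
```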

A few watch-outs:

  • If adding the repo fails with permission errors, run it as root or rule out SELinux (temporary test: sudo setenforce 0, then sudo setenforce 1 to re-enable).
  • Some Atomic users report needing sudo nvidia-ctk runtime configure --runtime=podman after install.
  • If conflicts arise (rare), check rpm-ostree status and override if needed.
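To check the watch-outs in one pass, here's a small sanity script; the paths are the ones used in the steps above, and nothing in it needs GPU access or root:

```shell
# Verify the files the earlier steps should have created.
REPORT=""
for f in /etc/yum.repos.d/nvidia-container-toolkit.repo /etc/cdi/nvidia.yaml; do
  if [ -e "$f" ]; then REPORT="$REPORT ok:$f"; else REPORT="$REPORT MISSING:$f"; fi
done
echo "$REPORT"
# On Atomic, the layered package should show up in the booted deployment.
if command -v rpm-ostree >/dev/null 2>&1; then
  rpm-ostree status | grep -i nvidia-container-toolkit || echo "toolkit not layered yet"
fi
```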

This is the "quick enable" equivalent: the same repo curl, but the install goes through rpm-ostree. Run it step-by-step, paste errors if any, and we'll nail it. Once GPU works, the bundled container will fly with voice/offline Ollama.