11

Facts:

  1. Rootless podman works perfectly for uid 1480
  2. Rootless podman fails for uid 2088
  3. CentOS 7
  4. Kernel 3.10.0-1062.1.2.el7.x86_64
  5. podman version 1.4.4
  6. Almost every environmental difference between the two has been eliminated; the test below runs with an emptied environment (env -i)
  7. The filesystem for /tmp is xfs
  8. The capsh output of the two users is identical but for uid / username
  9. Both UIDs have identical entries in /etc/sub{u,g}id files
  10. The $HOME/.config/containers/storage.conf is the default and is identical between the two with the exception of the uids. The storage.conf is below for reference.

I wrote the following shell script to demonstrate just how similar the two users' environments are:

#!/bin/sh
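# For each test uid: chroot to / as that uid (group 10, wheel) with an emptied
# environment. The unquoted EOF means "$i" is expanded by this outer shell
# before the inner /bin/sh runs.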
for i in 1480 2088; do
  sudo chroot --userspec "$i":10 / env -i /bin/sh <<EOF
echo -------------- $i ----------------
/usr/sbin/capsh --print
grep "$i" /etc/subuid /etc/subgid
mkdir /tmp/"$i"
HOME=/tmp/"$i"
export HOME
podman --root=/tmp/"$i" info > /tmp/podman."$i"
podman run --rm --root=/tmp/"$i" docker.io/library/busybox printf "\tCOMPLETE\n"
echo -----------END $i END-------------
EOF
  sudo rm -rf /tmp/"$i"
done

Here's the output of the script:

$ sh /tmp/podman-fail.sh
[sudo] password for functional:
-------------- 1480 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
uid=1480(functional)
gid=10(wheel)
groups=0(root)
/etc/subuid:1480:100000:65536
/etc/subgid:1480:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH
        COMPLETE
-----------END 1480 END-------------
-------------- 2088 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
uid=2088(broken)
gid=10(wheel)
groups=0(root)
/etc/subuid:2088:100000:65536
/etc/subgid:2088:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
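
For reference, lchown failing with "invalid argument" inside a user namespace generally means the requested ID is simply not mapped; 65534 is typically the image's nobody user. I haven't captured it above, but the mapping podman actually builds from /etc/sub{u,g}id can be inspected with something like:

podman unshare cat /proc/self/uid_map /proc/self/gid_map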

Here's the storage.conf for the 1480 uid. It's identical except s/1480/2088/:

[storage]
  driver = "vfs"
  runroot = "/run/user/1480"
  graphroot = "/tmp/1480/.local/share/containers/storage"
  [storage.options]
    size = ""
    remap-uids = ""
    remap-gids = ""
    remap-user = ""
    remap-group = ""
    ostree_repo = ""
    skip_mount_home = ""
    mount_program = ""
    mountopt = ""
    [storage.options.thinpool]
      autoextend_percent = ""
      autoextend_threshold = ""
      basesize = ""
      blocksize = ""
      directlvm_device = ""
      directlvm_device_force = ""
      fs = ""
      log_level = ""
      min_free_space = ""
      mkfsarg = ""
      mountopt = ""
      use_deferred_deletion = ""
      use_deferred_removal = ""
      xfs_nospace_max_retries = ""

You can see there's basically no difference between the two podman info outputs for the users:

$ diff -u /tmp/podman.1480 /tmp/podman.2088
--- /tmp/podman.1480    2019-10-17 22:41:21.991573733 -0400
+++ /tmp/podman.2088    2019-10-17 22:41:26.182584536 -0400
@@ -7,7 +7,7 @@
   Distribution:
     distribution: '"centos"'
     version: "7"
-  MemFree: 45654056960
+  MemFree: 45652697088
   MemTotal: 67306323968
   OCIRuntime:
     package: containerd.io-1.2.6-3.3.el7.x86_64
@@ -24,7 +24,7 @@
   kernel: 3.10.0-1062.1.2.el7.x86_64
   os: linux
   rootless: true
-  uptime: 30h 17m 50.23s (Approximately 1.25 days)
+  uptime: 30h 17m 54.42s (Approximately 1.25 days)
 registries:
   blocked: null
   insecure: null
@@ -35,14 +35,14 @@
   - quay.io
   - registry.centos.org
 store:
-  ConfigFile: /tmp/1480/.config/containers/storage.conf
+  ConfigFile: /tmp/2088/.config/containers/storage.conf
   ContainerStore:
     number: 0
   GraphDriverName: vfs
   GraphOptions: null
-  GraphRoot: /tmp/1480
+  GraphRoot: /tmp/2088
   GraphStatus: {}
   ImageStore:
     number: 0
-  RunRoot: /run/user/1480
-  VolumePath: /tmp/1480/volumes
+  RunRoot: /run/user/2088
+  VolumePath: /tmp/2088/volumes

I refuse to believe there's an if (2088 == uid) { abort(); } or similar nonsense somewhere in podman's source code. What am I missing?

Rob Paisley

3 Answers

10

Does podman system migrate fix the "there might not be enough IDs available in the namespace" error for you?

It did for me and others: https://github.com/containers/libpod/issues/3421
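
A minimal way to try it (run as the affected user; as I understand it, migrate stops the rootless pause process so the user namespace gets rebuilt on the next invocation):

podman system migrate
podman run --rm docker.io/library/busybox printf "\tCOMPLETE\n"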

sander
4

AFAICT, sub-UID and GID ranges should not overlap between users. For reference, here is what the useradd manpage has to say about the matter:

   SUB_GID_MIN (number), SUB_GID_MAX (number), SUB_GID_COUNT
   (number)
       If /etc/subuid exists, the commands useradd and newusers
       (unless the user already have subordinate group IDs)
       allocate SUB_GID_COUNT unused group IDs from the range
       SUB_GID_MIN to SUB_GID_MAX for each new user.

       The default values for SUB_GID_MIN, SUB_GID_MAX,
       SUB_GID_COUNT are respectively 100000, 600100000 and 65536.

   SUB_UID_MIN (number), SUB_UID_MAX (number), SUB_UID_COUNT
   (number)
       If /etc/subuid exists, the commands useradd and newusers
       (unless the user already have subordinate user IDs) allocate
       SUB_UID_COUNT unused user IDs from the range SUB_UID_MIN to
       SUB_UID_MAX for each new user.

       The default values for SUB_UID_MIN, SUB_UID_MAX,
       SUB_UID_COUNT are respectively 100000, 600100000 and 65536.

The key word is unused.
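
A quick way to eyeball this (a sketch, assuming the one-range-per-user layout shown in the question) is to sort each file by range start so overlapping neighbours stand out:

sort -t: -k2,2n /etc/subuid
sort -t: -k2,2n /etc/subgid

In the question both users are assigned 100000:65536, i.e. exactly the same 65536 IDs, which is precisely the kind of overlap this warns about.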

Diab Jerius
  • Always consult the manpage first, then StackOverflow; thanks for the reminder. This Red Hat blog post sheds some light on the same topic: https://www.redhat.com/sysadmin/rootless-podman – Steve K Jan 04 '21 at 23:29
-2

CentOS 7.6 does not support rootless buildah by default - see https://github.com/containers/buildah/pull/1166 and https://www.redhat.com/en/blog/preview-running-containers-without-root-rhel-76

  • It seems the OP is already successfully running rootless podman (and is not asking about buildah)? – larsks Dec 18 '19 at 12:56