
Back when the powers that be didn't squeeze the middle class as much and there was time to waste "fooling around", I used to compile everything from scratch from .tgz tarballs, chase down dependencies by hand, and make install to a local directory.

Sadly, there's no more time for such l(in)uxuries these days, so I need a quick, lazy way to keep my 16GB Linux boot/OS partition as small as possible and keep apps, software, my development environment, and other data on a separate partition.

I can deal with mounting my home dir on another partition, but my remaining issue is with /var, /usr, etc. and all the stuff that gets installed there. Every time I apt-get some package I end up with a trillion dependencies installed, because the author of a 5kB app decided not to include a 3kB parser and wanted me to install another 50MB package to get that 3kB library :) yay!

Of course, when I later uninstall those packages, all the dependencies that came with them and that nothing needs anymore get left behind. But anyway, the point is that I don't want to compile manually, spend hours chasing down dependencies so I can install to my own paths, and then have to tinker with a bunch of configuration files. So after some research, this is the best I could come up with; did I miss some easier solution?
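(For the left-behind dependencies specifically, I know apt can usually clean those up itself, as long as they were marked as automatically installed:

    # removes packages that were pulled in as dependencies and are
    # no longer needed by anything still installed
    sudo apt-get autoremove --purge

But that doesn't solve the bigger problem of keeping all of it off the root partition in the first place.)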

  1. Use OVERLAYFS and overlayroot to overlay my root / partition onto my secondary drive or partition, so that my Linux OS partition is never written to anymore and everything is transparently written to the other partition instead.

I like the idea of this method and want to know who uses it and whether it's working out well. What I like is that I can continue to be lazy and blindly apt-get install a toolchain, and everything should work as normal without any special tinkering with each app's config files to change paths.

It's also nice that dependencies will easily be re-used by the different apps. Any problems I haven't foreseen with this method? Is anyone using this solution?
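For reference, here's roughly what I have in mind, based on the overlayroot package from Ubuntu's cloud-initramfs-tools; /dev/sdb1 is just a placeholder for whatever the second partition actually is:

    # /etc/overlayroot.conf
    # Keep / read-only and send all writes to /dev/sdb1 instead.
    overlayroot="device:dev=/dev/sdb1"

After a reboot, apt-get install and everything else writes transparently to the other partition while the original root filesystem stays untouched.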

  2. DOCKER or other application containers (libvirt/LXC, etc.)?

This might be THE WAY to go? With this method, I assume I should install ALL the apps I want to try out inside ONE Docker container, since otherwise I'd waste storage space by duplicating dependencies in each container? Or does Docker (or other app containers) do DEDUPLICATION of files/libs across containers? And does this work fine for graphical/X11 apps inside containers?

If you know of something easier than overlayfs/overlayroot or Docker/LXC to accomplish what I want, and that's not any more hassle to set up, please tell me. tx


1 Answer


After further research and testing Docker for this, I've decided that containers like Docker are the easy way to go for installing apps you may want to purge later. Docker uses the same overlayfs idea under the hood (its overlay2 storage driver): images are stacks of read-only layers, so containers built from the same base image share those layers, and only the extra dependencies each app needs get installed into its own image. So I basically get the same advantages as the manual overlayroot technique I described above, without having to set any of it up myself.
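A quick way to see the sharing in action (the image names and directories here are just placeholders): build two images from the same base and check how much of their size is deduplicated:

    # app-a/Dockerfile and app-b/Dockerfile both start with:
    #   FROM ubuntu:22.04
    # and then install their own extra packages on top.

    docker build -t app-a ./app-a
    docker build -t app-b ./app-b

    # The "SHARED SIZE" column shows the base layers stored only once:
    docker system df -v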

So yep, I'm a believer in application containers now! It even works for GUI apps.
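For anyone wondering how the GUI part works: the usual trick is to hand the container the host's X11 socket (image and app names below are placeholders):

    # allow local containers to talk to the host's X server
    xhost +local:docker

    docker run --rm \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        some-gui-image some-gui-app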

Now I can keep a very lightweight, small main root partition and simply install anything I want to try out inside app containers, then delete them when done.

This also solves the problem of lingering, no-longer-needed dependencies, which I'd have to clean up myself if I were manually doing an overlayroot over the main /.
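Trying something out and throwing it away is then just a couple of commands (names here are placeholders):

    docker run -it --name scratchpad ubuntu:22.04 bash
    # ...apt-get install and play around inside, then exit...

    docker rm scratchpad    # drop the container and everything it wrote
    docker image prune      # drop any dangling image layers

Everything the experiment pulled in disappears with it; nothing lingers on the root partition.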
