
How to configure a system with one master and multiple slaves for building normal C code with gmake? How can the slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?

http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds exists, but I can't work out from it how workspace sharing is handled.

Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job; and if the build ran on a slave -> rsync the workspace back to the master?

Any proof of concept or real life solutions?

MJo

3 Answers


When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.

dsolimano
    I had a proof of concept group with 5 slaves doing just this. The perforce plugin would create new workspaces for each slave and perform an initial sync (that took forever). After that, each machine would only sync changes since the last build to bring the workspace back into line. My only problem was that every now and then the workspaces would get in some unsyncable state and I'd have to force a full sync on them. Worked out pretty well and I didn't have to build anything to copy files. – dhable Dec 10 '10 at 21:29
  • Interesting @Dan. I am connecting to SVN and Team Server, and I have both set up to do a full checkout every time. It takes an extra couple of minutes, but it hasn't been a problem yet as our code base is relatively small. – dsolimano Dec 11 '10 at 04:03

This is surely a late answer, but it may help others.

I'm currently using the "Copy Artifact plug-in" with great results. http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin

(https://stackoverflow.com/a/4135171/2040743)

niglesias

Just one way of doing things, others exist.

Workspaces are not actually shared when builds are distributed across multiple machines; each machine has its own workspace directory. To coordinate between them, any item that needs to be distributed from one workspace to another is copied into a central repository via SCP.

This means that sometimes I have a task which needs to wait on the items landing in the central repository. To fix this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items, and it errors out if the items aren't available after five minutes.
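That polling loop can be sketched like this (the artifact path is a made-up placeholder, and the check here tests a local path so the script runs stand-alone; the real version would run the check over SSH, as the comment shows, with the five-minute limit from the description above):

```shell
#!/bin/sh
# wait_for ITEM TIMEOUT INTERVAL: poll until ITEM exists or TIMEOUT (s) expires.
wait_for() {
    item=$1; timeout=$2; interval=$3
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # The real version checks the central repository over SSH, e.g.:
        #   ssh repo-host test -e "/repo/$item"
        if [ -e "$item" ]; then
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 1            # errors out: items never appeared
}

# Demo: the artifact is already there, so the wait succeeds immediately.
touch /tmp/needed-artifact
wait_for /tmp/needed-artifact 300 5 && echo "artifact present"
```

In the job, a non-zero return from the script fails the build, which is exactly the "error out after five minutes" behaviour described.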

The only downside to this is that you need to pass a parameter (the build number) around to keep the builds on the same page, preventing one build from picking up an artifact built by a previous build. You also have to set up a lot of SSH keys so the SSH scripts can run without prompting for a password.
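The build-number stamping can be sketched as below. Hudson sets `BUILD_NUMBER` in the job's environment; the artifact name, fallback value, and repository paths are made-up placeholders, and the SCP push is shown as a comment with a local copy standing in for it so the sketch runs anywhere:

```shell
#!/bin/sh
# Stamp the artifact with the build number so a downstream job can never
# pick up a stale copy. BUILD_NUMBER comes from Hudson; 42 is a fallback
# for running this sketch outside a job.
BUILD_NUMBER=${BUILD_NUMBER:-42}
ARTIFACT="myapp-${BUILD_NUMBER}.tar.gz"

mkdir -p /tmp/build-out /tmp/central-repo
echo "binary payload" > "/tmp/build-out/$ARTIFACT"

# Real version pushes to the central repository over SCP, e.g.:
#   scp "/tmp/build-out/$ARTIFACT" repo-host:/repo/
cp "/tmp/build-out/$ARTIFACT" /tmp/central-repo/

# Downstream jobs receive BUILD_NUMBER as a build parameter and fetch
# exactly myapp-$BUILD_NUMBER.tar.gz, never a "latest" symlink.
echo "published $ARTIFACT"
```

With passwordless SSH keys in place (`ssh-keygen` plus installing the public key on the repository host), the commented `scp` line runs unattended inside the job.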

Like I said, not the ideal solution, but I find it more stable than the SSH artifact-grabbing code in my particular release of Hudson (and with my set of SSH servers).

One downside: the SSH servers on most Linux machines seem to really lack performance, and a solution like mine tends to swamp your SSH server with many connections arriving at about the same time. If you find the same happening to you, you can add timer delays (an easy but imperfect solution) or rebuild the SSH server with high-performance patches. One day I hope the high-performance patches make their way into the SSH server base code, provided they don't negatively impact SSH server security.

Edwin Buck