
I would like to know when the placement algorithm of TensorFlow (as described in the white paper) actually gets employed. All examples for distributing TensorFlow that I have seen so far seem to specify manually which device the nodes should be executed on, using tf.device().
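For context, a minimal sketch of the kind of manual placement I mean (the job/task names here are just placeholders):

```python
import tensorflow as tf

# Pin the variable to a parameter server task and the matmul to a
# worker GPU by hand, via explicit tf.device() annotations.
with tf.device("/job:ps/task:0"):
    weights = tf.Variable(tf.zeros([784, 10]), name="weights")

with tf.device("/job:worker/task:0/gpu:0"):
    x = tf.placeholder(tf.float32, shape=[None, 784])
    logits = tf.matmul(x, weights)
```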

Elizafox
Paul

1 Answer


The dynamic placement algorithm described in Section 3.2.1 of the TensorFlow white paper was not included in the open-source release. Instead, the "simple placer" (whose implementation can be found in simple_placer.cc) is used, but it requires some explicit annotations (via tf.device()) to yield an efficient placement. Higher-level constructs like tf.train.replica_device_setter() wrap tf.device() to specify common policies such as "shard the variables across parameter servers, and otherwise put all ops on the worker device," and we use this extensively in distributed training.
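For example, a minimal sketch of that pattern (the cluster addresses are made-up placeholders):

```python
import tensorflow as tf

# Hypothetical cluster: two parameter server tasks and one worker.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222", "ps1.example.com:2222"],
    "worker": ["worker0.example.com:2222"],
})

# replica_device_setter() returns a device function: variable ops are
# assigned to the "ps" job (round-robin over ps tasks by default), and
# all other ops fall back to the given worker device.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    weights = tf.Variable(tf.zeros([784, 10]))  # -> /job:ps/task:0
    biases = tf.Variable(tf.zeros([10]))        # -> /job:ps/task:1
    x = tf.placeholder(tf.float32, shape=[None, 784])
    logits = tf.matmul(x, weights) + biases     # -> /job:worker/task:0
```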

In practice we have found that a small set of annotations usually yields a more efficient placement than the one the dynamic placer would determine, but improving the placement algorithm remains an area of active research.

mrry
  • @mrry Thanks for your answer. Is there some plan to release the dynamic placer in the foreseeable future? – MiniQuark Aug 24 '16 at 13:02
  • @mrry Never mind, I just noticed that this question is discussed in https://github.com/tensorflow/tensorflow/issues/2126 – MiniQuark Aug 24 '16 at 13:20