State in TF graphs

This page assumes basic familiarity with the Go TensorFlow bindings. Be sure to read the introduction first.

Most TF operations are stateless: they take inputs, directly or indirectly from placeholders, and return values. Once a call to Run() completes, no intermediate state persists.

Usually this is desirable. However, there are situations in which carefully controlled state is useful. The most common is updating weights and biases during training, but TF provides various other stateful operations. Here I describe several of them.

Variables

Create a variable (the snippets below assume a scope s created with op.NewScope(), and later a session sess opened on the finalized graph):

variable := op.VarHandleOp(s, tf.Int32, tf.ScalarShape())

In this example, the variable is of type tf.Int32 and has scalar shape; that is to say, it holds a single zero-dimensional number.

This variable is just a handle; it cannot be used directly.

To assign the output of a placeholder:

intPH := op.Placeholder(s, tf.Int32, op.PlaceholderShape(tf.ScalarShape()))
assign := op.AssignVariableOp(s, variable, intPH)

Here we create a placeholder of type tf.Int32, and scalar shape, and pass it to the assign OP.

Now create an OP to read the value:

read := op.ReadVariableOp(s, variable, tf.Int32)

So far we have not actually assigned values to or read values from the variable, just created the graph of operations to do so.

Say that we wish to assign the number 1 to the variable. We must first convert our number to a tensor.

tensor1, err := tf.NewTensor(int32(1))

Now that our number is in tensor form, we can feed it into the graph, while pulling on the assign operation.

_, err = sess.Run(map[tf.Output]*tf.Tensor{intPH: tensor1}, nil, []*tf.Operation{assign})

The variable is now initialized with the value of 1.

To read the value, we simply pull on the read output.

results, err := sess.Run(nil, []tf.Output{read}, nil)

If we unpack results, we should get 1.

fmt.Println(results[0].Value().(int32))

We can assign a new value to the variable.

As before, we create a tensor with the desired value.

tensor2, err := tf.NewTensor(int32(2))

We then pull on the assign operation while feeding tensor2 to the placeholder.

_, err = sess.Run(map[tf.Output]*tf.Tensor{intPH: tensor2}, nil, []*tf.Operation{assign})

And finally, evaluate and print the output of read.

results, err = sess.Run(nil, []tf.Output{read}, nil)
fmt.Println(results[0].Value().(int32))

We have used TensorFlow to construct a graph containing an operation to assign the output of a placeholder to a variable and an output to read the variable, assigned 1 to the variable, read the variable, assigned 2 to it, and read the variable again. Including error handling, this required ~40 lines of code.

This is equivalent to the following Go code.

var variable int32
variable = 1
fmt.Println(variable)
variable = 2
fmt.Println(variable)

Queues

Queues allow one process to produce tuples of tensors and another to consume them.

TF implements various types of queues. Here we will be demonstrating a simple FIFO queue.

This queue will hold pairs of type (tf.Int64, tf.Float).

dataType := []tf.DataType{tf.Int64, tf.Float}

To feed this data we create two placeholders. Notice that while the int placeholder has scalar shape, the float placeholder has shape [2].

intInputPH := op.Placeholder(s.SubScope("int_elem"), tf.Int64, op.PlaceholderShape(tf.ScalarShape()))
floatInputPH := op.Placeholder(s.SubScope("float_elem"), tf.Float, op.PlaceholderShape(tf.MakeShape(2)))

Create the queue, passing the list of two data types we defined earlier.

queue := op.FIFOQueueV2(s, dataType)

Now create operations to enqueue a pair, dequeue a pair, and close the queue:

enqueue := op.QueueEnqueueV2(s, queue, []tf.Output{intInputPH, floatInputPH})
components := op.QueueDequeueV2(s, queue, dataType)
closeQueue := op.QueueCloseV2(s, queue)

Now that we have created the graph, we can start using it.

First let’s make some dummy data.

floatData := [][]float32{{0.3, 0.4}, {0.5, 0.4}, {0.6, 0.4}}

For each datum f at index i in the outer dimension (that is, inside a loop such as for i, f := range floatData), make a tensor for the integer index and one for the [2]-element datum.

  intTensor, err := tf.NewTensor(int64(i))
  floatTensor, err := tf.NewTensor(f)

To enqueue the pair, pull on the enqueue operation while feeding the int and float tensors to their respective placeholders.

  _, err = sess.Run(map[tf.Output]*tf.Tensor{intInputPH: intTensor, floatInputPH: floatTensor}, nil, []*tf.Operation{enqueue})

We have now loaded several pairs of (int64, [2]float32).

Let us now read some data from the queue.

results, err := sess.Run(nil, components, nil)

Recall that components is a list of two outputs. results therefore should contain two tensors. The value of the first can be coerced to a Go int64. The value of the second can be coerced to a Go []float32.

fmt.Println(results[0].Value().(int64), results[1].Value().([]float32))

This should print 0 [0.3 0.4].

We can pull on the dequeue OP again to read another pair of values.

results, err = sess.Run(nil, components, nil)

And print it.

fmt.Println(results[0].Value().(int64), results[1].Value().([]float32))

Once we are done using the queue, we should close it.

_, err = sess.Run(nil, nil, []*tf.Operation{closeQueue})

We have used the Go TensorFlow bindings to construct a graph containing an operation to feed the value of two placeholders to a FIFO queue and a pair of outputs to read the values from the queue, fed a few pairs of data into the queue, and read two pairs out from the queue. Including error handling, this required ~52 lines of code.

This is roughly equivalent to the following Go code. Note that as Go does not support tuples, we must define a type to hold the pair. Unlike TF queues, which provide unlimited buffering by default, Go channels require us to specify a buffer size.

type dataType struct {
  Integer int64
  Floats  []float32
}
queue := make(chan dataType, 100)
for i, f := range floatData {
  queue <- dataType{Integer: int64(i), Floats: f}
}
output := <-queue
fmt.Println(output.Integer, output.Floats)
output = <-queue
fmt.Println(output.Integer, output.Floats)
Last updated on: Mon, Nov 27, 2017