TensorFlow Filesystem - Access Tensors Differently

Tensorflow is great. Really, I mean it. The problem is it's great up to a point. Sometimes you want to do very simple things, but tensorflow gives you a hard time. My motivation for writing TFFS (TensorFlow File System) will be familiar to anyone who has used tensorflow, including you.

All I wanted was to know what the name of a specific tensor is; or what its input tensors are (ignoring operations).

All of these questions can be easily answered using tensorboard. Sure, you just open the graph tab and visually examine the graph. Really convenient, right? Well, only if you want a bird's-eye view of the graph. But if you're focused and have a specific question you want answered, using the keyboard is the way to go.

So why not load the graph inside a python shell and examine it? That’s doable, but writing these lines of code every time I want to do that task? Having to remember how to load a graph, how to look for a tensor, how to get its inputs... Sure, it’s only a couple of lines of code, but once you repeat the same task over and over again, it’s time to write a script!

So why not write an interactive script? You mean a script that given the path to your model loads it for you, and provides utility functions to ease your pain of writing tensorflow code? Well, we could do that, but that’s not gonna be as awesome as what I’m gonna show you!

Disclaimer: if you want a solution that makes sense, stop reading here and just use the interactive script approach. Continue reading only if you want to learn something a bit different ;)

Filesystem to the Rescue!

The names of tensors contain slashes, which bear a striking resemblance to paths in the UNIX filesystem. Imagine a world with a tensorflow filesystem, where directories are analogous to tensorflow scopes, and files to tensors. Given such a filesystem, we could use good old bash to do what we want. For instance, we could list all available scopes and tensors by running find ~/tf (assuming ~/tf is where the tensorflow filesystem is mounted). Want to list only tensors? No problem, just use find ~/tf -type f.

Hey, we've got files, right? What should their content be? Let's run cat ~/tf/.../tensor_name. Wow! We got the tensor's value. Nice... This will work only if the tensor doesn't depend on placeholders, since evaluating such a tensor would require values to feed them with.

What about getting the inputs to a tensor? Well, you can run ~/tf/bin/inputs -d 3 ~/tf/.../tensor_name. This special script will print a tree view of the inputs with a recursion depth of 3. Nice...

OK, I'm in. How do we implement it? That's a fine question. We can use Filesystem in Userspace (FUSE), a technology that allows us to implement a filesystem in user space. It saves us the trouble of going into the low-level kernel realm, which is really hard if you've never done it before (and slow to implement), and involves writing code in C: the horror! We'll use a python binding called fusepy.

In this post I’ll explain only the interesting parts of the implementation. The entire code can be found here. Including documentation, it’s only 345 lines of code.

First we need to load a tensorflow model:

  1. Import the graph structure using tf.train.import_meta_graph.
  2. If the model was trained, load the weights using saver.restore.
In [ ]:
import os

import tensorflow as tf


def _load_model(model_path):
    """
    Load a tensorflow model from the given path.
    It's assumed the path is either a directory containing a .meta file, or the .meta file itself.
    If there's also a file containing the weights with the same name as the .meta file
    (without the .meta extension), it'll be loaded as well.
    """
    if os.path.isdir(model_path):
        meta_filenames = [filename for filename in os.listdir(model_path) if filename.endswith('.meta')]
        assert len(meta_filenames) == 1, 'expecting to get a .meta file or a directory containing a .meta file'
        model_path = os.path.join(model_path, meta_filenames[0])
    assert model_path.endswith('.meta'), 'expecting to get a .meta file or a directory containing a .meta file'
    weights_path = model_path[:-len('.meta')]

    graph = tf.Graph()
    session = None
    with graph.as_default():
        saver = tf.train.import_meta_graph(model_path)
        if os.path.isfile(weights_path):
            session = tf.Session(graph=graph)
            saver.restore(session, weights_path)
    return graph, session

Mapping Tensors to Files

Next, I’ll describe the main class. There are several interesting things in the constructor. First, we call _load_model. Then, we map each tensor in the graph into a path in the filesystem. Say a tensor has the name a/b/c:0 - it will be mapped to the path ~/tf/a/b/c:0, assuming the mount point is ~/tf.

Each directory and file is created using the _create_dir and _create_tensor_file functions. These functions return a simple python dict with metadata about the directory or file.

We also call _populate_bin, which I'll touch on later.

In [ ]:
class TfFs(fuse.Operations):
    def __init__(self, mount_point, model_path):
        self._graph, self._session = _load_model(model_path)
        self._files = {}
        self._bin_scripts = {}
        self._tensor_values = {}
        now = time()
        self._files['/'] = _create_dir(now)
        self._files['/bin'] = _create_dir(now)

        for op in self._graph.get_operations():
            for tensor in op.outputs:
                next_slash_index = 0
                while next_slash_index >= 0:
                    next_slash_index = tensor.name.find('/', next_slash_index + 1)
                    if next_slash_index >= 0:
                        key = '/' + tensor.name[:next_slash_index]
                        if key not in self._files:
                            self._files[key] = _create_dir(now)
                self._files['/' + tensor.name] = _create_tensor_file(tensor)

    # ...

Working with fusepy

fusepy requires you to implement a class that extends fuse.Operations. This class implements the operations the filesystem supports. In our case, we want to support reading files, so we'll implement the read function. When called, this function will evaluate the tensor associated with the given path.

In [ ]:
# ...

def _eval_tensor_if_needed(self, path):
    """Given a path to a tensor file, evaluate the tensor and cache the result in self._tensor_values."""
    if self._session is None:
        return None
    if path not in self._tensor_values:
        self._tensor_values[path] = self._session.run(self._graph.get_tensor_by_name(path[1:]))
    return self._tensor_values[path]

def read(self, path, size, offset, fh):
    if path.startswith('/bin/'):
        return self._bin_scripts[path][offset:offset + size]

    val = self._eval_tensor_if_needed(path)
    with printoptions(suppress=True,
                      formatter={'all': _fixed_val_length}):
        return str(val)[offset:offset + size]

# ...

Worried about the printoptions part? When implementing a filesystem you need to know the size of the files. We could evaluate all tensors in order to know the sizes, but this would take time and memory. Instead, we can examine each tensor’s shape. Given its shape, and given we use a formatter that outputs a fixed amount of characters per entry in the result (this is where _fixed_val_length comes in), we can calculate the size.

Getting Inputs and Outputs

While tensorflow scopes have a structure that resembles a filesystem, tensor inputs and outputs don’t. So instead of using a filesystem to get inputs and outputs, we can write a script that can be executed as follows:

~/tf/bin/outputs --depth 3 ~/tf/a:0

The result will look like this:

├── ~/tf/b:0
└── ~/tf/c:0
    ├── ~/tf/d:0
    │   └── ~/tf/e:0
    └── ~/tf/f:0

Nice! We have the tree of outputs! To implement it, all we have to do is:

  1. Get the data, which is a mapping from each tensor to its outputs.
  2. Implement a recursive function that prints the tree. It's not that complicated (it took me 6 lines of code), and can be a nice exercise.
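The recursive function from step 2 could be sketched like this (the function name and the exact box-drawing characters are my choices, not necessarily TFFS's):

```python
def print_tree(tensor_to_outputs, name, depth, prefix=''):
    # Print `name`'s outputs as a tree, recursing up to `depth` levels.
    children = tensor_to_outputs.get(name, [])
    for i, child in enumerate(children):
        is_last = i == len(children) - 1
        print(prefix + ('└── ' if is_last else '├── ') + child)
        if depth > 1:
            print_tree(tensor_to_outputs, child, depth - 1,
                       prefix + ('    ' if is_last else '│   '))


# Example mapping from each tensor to its outputs:
outputs = {'a:0': ['b:0', 'c:0'], 'c:0': ['d:0', 'f:0'], 'd:0': ['e:0']}
print_tree(outputs, 'a:0', 3)
# ├── b:0
# └── c:0
#     ├── d:0
#     │   └── e:0
#     └── f:0
```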

There's one challenge though... The outputs script is gonna be executed inside a new process - this is how UNIX works. It means it won't have access to the tensorflow graph, which was loaded in the main process.

So how is it going to access all the inputs/outputs of a tensor? I could have implemented interprocess communication between the two processes, which is a lot of work. But I chose a different approach. I created a template python file that contained the following line:

tensor_to_tensors = {{TENSOR_TO_TENSORS_PLACEHOLDER}}
This is illegal python code. The _populate_bin function which we saw earlier reads this python file, and replaces {{TENSOR_TO_TENSORS_PLACEHOLDER}} with the dictionary of tensor to outputs (or inputs). The result file is then mapped to a path in our filesystem - ~/tf/bin/outputs (or ~/tf/bin/inputs). It means that if you run cat ~/tf/bin/outputs, you'll be able to see the (potentially) huge mapping inside the file.

cat ./final_thoughts

We did it! We mapped a tensorflow model to a filesystem. TFFS was a fun small project, and I learned about FUSE while doing so, which is neat.

TFFS is a cute tool, but it's not a replacement for good old python shell. With python, you can easily import a tensorflow model and inspect the tensors manually. I just wish I could remember how to do so, even after doing it hundreds of times...
