Output files and file attributes on PDG work items are assigned tags so that PDG can identify their file types. A tag determines what application is used to open a file from the attribute pane, as well as which cache handler(s) to run when checking for an existing file on disk. You can register custom tags with the PDG pdg.TypeRegistry and use them with built-in nodes. You can also register custom cache handling functions written in Python in order to manually control PDG’s caching mechanism.
Custom tags added through the Type Registry automatically appear in the tag chooser drop-down on any nodes that have a parameter for setting the file tag.
Custom file tags
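A minimal sketch of registering a custom tag from a script might look like the following. The registerTag method name here is an assumption rather than something taken from the examples on this page, and the tag name is just an illustration:

def registerTypes(type_registry):
    # Hypothetical call: assumes a registerTag method on pdg.TypeRegistry
    # Registers a custom tag so it appears in the tag chooser drop-down
    type_registry.registerTag("file/geo/customtag")

As with the other examples on this page, such a script can be saved under $HOME/houdiniX.Y/pdg/types/ to test it in a live Houdini session.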
Custom cache handlers
Many nodes in PDG support disk caching. If the expected output files for a work item already exist on disk, then that work item is able to cook from cache instead of re-running. You can enable caching per node, and you can configure it to either always read, always write, or read from cache only if the files exist. If you set the HOUDINI_PDG_CACHE_DEBUG environment variable before launching Houdini, PDG will print cache file debug information when a graph is cooking.
Internally, PDG verifies cache files by checking whether they exist on disk. This may not be suitable for all applications or all types of files. For example, it would not be suitable if your output files are stored on a cloud storage system, or if you want to do extra file validation as part of the cache check.
As an alternative, you can verify the existence of cache files by registering a custom cache handler in the same way that custom file tags are registered (as described in the previous section). For example, you can create a script that defines your custom cache handlers, and save it to a file under $HOME/houdiniX.Y/pdg/types/ if you want to test it in a live Houdini session:
import pdg
import os

def simple_handler(local_path, raw_file, work_item):
    print(local_path)
    return pdg.cacheResult.Skip

def custom_handler(local_path, raw_file, work_item):
    # Skip work items that don't have the right attribute
    if work_item['usecustomcaching'].value() == 0:
        return pdg.cacheResult.Skip

    # Treat missing or zero-length files as cache misses
    try:
        if os.stat(local_path).st_size == 0:
            return pdg.cacheResult.Miss
        return pdg.cacheResult.Hit
    except:
        return pdg.cacheResult.Miss

def registerTypes(type_registry):
    type_registry.registerCacheHandler("file/geo", custom_handler)
    type_registry.registerCacheHandler("file/geo/usd", simple_handler)
Each cache handler is passed three arguments: the local path to the cache file, the raw pdg.File object, and the pdg.WorkItem that owns the file. The file object contains all of the metadata associated with the file, such as its raw path and file tag.
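For example, a handler could use that metadata when deciding how to check a file. The following is a minimal sketch that reads the path and tag members of the pdg.File object; the handler name and the file/geo check are purely illustrative:

import os
import pdg

def metadata_aware_handler(local_path, raw_file, work_item):
    # raw_file is a pdg.File; it carries the raw (unexpanded) path and the file tag
    print("Checking {} (tag: {})".format(raw_file.path, raw_file.tag))

    # Defer to other handlers for anything that isn't geometry
    if not raw_file.tag.startswith("file/geo"):
        return pdg.cacheResult.Skip

    # Otherwise report a hit only if the file actually exists on disk
    if os.path.isfile(local_path):
        return pdg.cacheResult.Hit
    return pdg.cacheResult.Miss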
Do not modify the work item during the cache handler hook, and do not store the work item in a global variable and then try to access it outside of the handler method. This is invalid.
Cache handlers are registered for a particular file tag. In the above example, a file tagged as file/geo/usd would first be checked using simple_handler. Since that handler returns the pdg.cacheResult.Skip return code, the cache system then moves on to the next possible handler, which is the one registered for file/geo. That handler verifies that the file has a non-zero size, but it only does so if the work item that owns the file has the usecustomcaching attribute set. If both handlers return Skip, then PDG's built-in cache checking mechanism is used instead.
You can register a handler for all file types by adding it with the file tag.
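For example, a minimal catch-all registration might look like this; the handler body is illustrative and simply defers to PDG's built-in check:

def fallthrough_handler(local_path, raw_file, work_item):
    # Returning Skip lets other handlers (or the built-in check) run
    return pdg.cacheResult.Skip

def registerTypes(type_registry):
    # The plain "file" tag matches every output file
    type_registry.registerCacheHandler("file", fallthrough_handler)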
The cache handler will be called with the batch sub item if the file was generated as part of a batch. You can get the batch parent by looking at the work item's batchParent property.
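As a rough sketch of how a handler might take the batch parent into account (assuming the batchParent property on pdg.WorkItem; the rest of the logic is illustrative):

def batch_aware_handler(local_path, raw_file, work_item):
    # For files produced by a batch, work_item is the batch sub item
    parent = work_item.batchParent
    if parent is not None:
        print("{} belongs to batch {}".format(local_path, parent.name))
    # Defer to the built-in cache check in this sketch
    return pdg.cacheResult.Skip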
Custom file transfer handlers
PDG has some default expectations and behaviors when it comes to file transfers. PDG assumes that all of your machines share a file system, and that those machines only work on tasks in a specific graph. For example, scheduler nodes typically try to copy local files like the current .hip file to the working directory (which may be a local file path or a path that is accessible over an NFS mount point) specified on the schedulers.
However, your setup may differ significantly from what PDG expects. For example, your worker machines might only be able to access data through an object storage system or a database, or you might not be able to share a mount point with the submitting machine.
In cases like these, you can create a custom PDG file transfer handler to implement your own file transfer logic.
PDG exposes a file transfer hook that you can override with your own Python logic to perform custom file transfer behaviors. This hook has access to your work items, the scheduler associated with the file transfer, and all of the information about the file being transferred. The file transfer handler can use whatever Python code is necessary to copy the file to its intended destination (for example, to a series of remote machines).
A custom file transfer handler can choose to implement its own caching logic or decide not to handle a particular file. Because of this, you can install multiple custom handlers and control them with attributes on the incoming work items. If none of your custom handlers handles a particular file transfer operation, PDG's default handler automatically tries to copy the file directly into the associated scheduler's working directory.
The file transfer handler system stores a thread-safe cache that maps local file paths to a user-defined cache ID. Each time a handler transfers a file, it is passed the last cache ID that was reported for that file, or 0 if the file does not have a stored ID. The handler can then use that ID to determine whether the file needs to be copied again or whether it should be treated as cached/already copied. The cache checking logic itself is up to the file transfer handler, but PDG stores the cache IDs so that the handler does not need to rely on global variables.
The following code demonstrates an approximate Python implementation of the default transfer handler that is built into PDG. Save it to a file (for example, $HOME/houdini19.0/pdg/types/custom_file_transfer.py) if you want to test it in a live Houdini session.
import pdg
import os
import shutil

def transferCopy(src_path, dst_path, file_object, work_item, scheduler, cache_id):
    dst_exists = os.path.exists(dst_path)

    # If the paths are the same and exist, return early
    if dst_exists and os.path.samefile(src_path, dst_path):
        return pdg.TransferPair.skip()

    # unconditionally copy directories
    if os.path.isdir(src_path):
        try:
            shutil.copytree(src_path, dst_path)
            return pdg.TransferPair.success()
        except:
            return pdg.TransferPair.failure()

    # if the old mod time matches the current one, skip the copy and
    # indicate that the transfer was cached
    src_mtime = int(os.path.getmtime(src_path))
    if dst_exists:
        if os.path.getsize(src_path) == os.path.getsize(dst_path):
            if src_mtime == cache_id:
                return pdg.TransferPair.cached()

    dst_dir = os.path.dirname(dst_path)

    # create intermediate directories as needed
    try:
        if not os.path.exists(dst_dir):
            os.makedirs(dst_dir)
    except OSError:
        pass

    # copy the file
    shutil.copyfile(src_path, dst_path)
    return pdg.TransferPair.success(src_mtime)

# Register the handler for all types of file
def registerTypes(type_registry):
    type_registry.registerTransferHandler("file", transferCopy)
You can also pass a use_mod_time argument to the registration function in order to use the local file's mod time for the default caching behavior. For example:

def registerTypes(type_registry):
    type_registry.registerTransferHandler("file", transferCopy, use_mod_time=True)
PDG will only invoke the handler when the file has not been transferred yet for a particular scheduler, or when the local file has a mod time that is newer than the time recorded at the last transfer.
Do not try to modify the work item passed to the custom handler function, and do not store the work item in a global variable and then try to access it outside of the scope of the function. This is invalid.
Custom file hash functions
Work item output files have a 64-bit integer field that PDG uses to identify if the file is stale. For example, if you recook a File Pattern TOP node after modifying a file on disk, the work items that correspond to the modified files are automatically dirtied as part of the cook. By default, PDG uses the file’s mod time. However, you can register a custom hash function in Python to use a custom scheme.
Like with cache handlers, you register custom hash functions based on the output file tag.
The following code demonstrates a hash function that computes a CRC checksum for text files. Save it to a file (for example, $HOME/houdini18.5/pdg/types/custom_file_hash.py) if you want to test it in a live Houdini session.
import zlib

def crc_handler(local_path, raw_file, work_item):
    try:
        with open(local_path, 'rb') as local_file:
            return zlib.crc32(local_file.read())
    except:
        return 0

def registerTypes(type_registry):
    type_registry.registerHashHandler("file/txt", crc_handler)
The custom hash function is invoked with three arguments: the local path to the file, the raw pdg.File object, and the pdg.WorkItem that owns the file. If the function returns a non-zero value, that value is stored as the file’s hash. If the return value is exactly zero, then PDG checks for other matching hash functions (if any exist). If no other hash functions are found, then it falls back to the built-in implementation that uses the file’s mod time.
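For example, a sketch of an alternative scheme could pack the file's size and mod time into a single 64-bit value instead of hashing the file contents; the packing layout here is just one possible choice:

import os

def size_mtime_hash(local_path, raw_file, work_item):
    try:
        # Pack the size into the high bits and the mod time into the low 32 bits,
        # keeping the result within a signed 64-bit range
        size = os.path.getsize(local_path) & 0x7FFFFFFF
        mtime = int(os.path.getmtime(local_path)) & 0xFFFFFFFF
        return (size << 32) | mtime
    except OSError:
        # Returning zero defers to other hash functions or the built-in mod time check
        return 0

def registerTypes(type_registry):
    type_registry.registerHashHandler("file", size_mtime_hash)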
You can apply the function to specific node types by filtering on the node type in the function implementation. For example, the following hash function applies to all types of files, but only for work items generated by a File Pattern node:
import zlib

def crc_handler(local_path, raw_file, work_item):
    if work_item and work_item.node.type.typeName != 'filepattern':
        return 0

    try:
        with open(local_path, 'rb') as local_file:
            return zlib.crc32(local_file.read())
    except:
        return 0

def registerTypes(type_registry):
    type_registry.registerHashHandler("file", crc_handler)
Do not modify the work item in the custom hash function, and do not store the work item in a global variable and then try to access it outside of the scope of the function. This is invalid.