Error saving USD to network folder

Member
55 posts
Joined: Oct. 2018
Hi, we are having an issue exporting USD files to a network drive; we get this warning: "Insufficient permissions to write to destination directory". It's not a sharing problem, as we are writing other files to that drive without problems. We are using UNC paths, so it's not a mapped-drive problem either. Any ideas?

Attachments:
usd export.PNG (287.8 KB)

Staff
4435 posts
Joined: July 2005
There was a bug reported here (and later moved to the usd-interest google group) about USD having a problem writing files to SMB shared network drives. AFAIK that issue is still unresolved, though some users have been using SMB with USD without issues, so the theory is that there is some configuration of SMB in combination with USD that is problematic. We have not been able to reproduce this problem internally.
Member
26 posts
Joined: Aug. 2018
Is there a solution for this? We're setting up a USD workflow for the first time, and this is the error we get.

This same network drive is used for everything: all of our Houdini projects are on that drive, and we run Redshift renders, simulation caches, and COP renders there. But the USD ROP gives this permission error.
Member
1 posts
Joined: June 2015
Hi!
We are experiencing a similar issue, and it does indeed seem to be due to network shares.
We're trying to write (with the USD ROP or USD Render ROP) to, say, C:\A\B\C (B being an NTFS network drive mounted to a folder), but it tries to write to C:\C, ignoring everything before the network-mounted folder.
Member
4 posts
Joined: June 2017
We have similar issues, even after 18.5.
There are no problems writing directly to disk, and no problems with other parts of Houdini.
Edited by bedynek.timon - Jan. 6, 2021 10:03:43
Staff
4435 posts
Joined: July 2005
As mentioned earlier, this is a USD bug. There was some progress in 18.5.411, where some fixes to the USD library were included for dealing with Windows "Junction Point" mounts, which might be the source of javier gonzalez gabriel's problem. But it is my understanding that SMB mounts on Windows are still problematic for the USD library in some way that we (and Pixar) have never been able to reproduce, and therefore unable to fix.

For anyone out there who is able to reproduce this: it should also be reproducible with a custom USD build, so if you have any development expertise in house, it would be very helpful to many people if you could try to track down the cause of the issue, or describe a consistent way to reproduce the problem even if you are not sure how to fix it.

Sorry we don't have any better news on this.
Member
26 posts
Joined: Aug. 2018
blubbmedia
Is there a solution for this? We're setting up a USD workflow for the first time, and this is the error we get.

This same network drive is used for everything: all of our Houdini projects are on that drive, and we run Redshift renders, simulation caches, and COP renders there. But the USD ROP gives this permission error.
We can reproduce the error in the sense that we can make it happen here every time. But of course we don't know what causes it.

The way we've fixed it for now is to set write permissions for the necessary folders to "everyone" on the samba share.
Member
123 posts
Joined: Jan. 2015
Hello,

We have the same problem on our server.

All the machines we have are Windows 10 (workstations), and the server is Windows Server 2016.

I tried sharing a network folder on another workstation too, just to see what happens. It was still not possible to overwrite files in that folder from the Houdini ROP.

Here are some repro steps.

Let's say it was done on a machine called "Workstation1".

1. Create a folder on a Windows machine with the name "USD_Folder"
2. Right click on that folder and press "Properties"
3. Press the "Sharing" tab, and then press "Advanced Sharing"
4. Click the "Share this folder" button
5. Press "Permissions"
6. Under "Group or user names" it should be listed "Everyone". Let that group get "Full Control" with pressing the "Allow" under the "Permissions" tab.

Now the shared folder is done.

Next, use another workstation, open Houdini, and write something out as a USD file (USD ROP) into the shared folder. The UNC path should be "//Workstation1/USD_Folder/test.usd"

Now you can use a sublayer node inside Lops and load the same USD file.

After that, the USD ROP should not be able to overwrite the file anymore, but if you deactivate the Sublayer node that is reading the USD file, the file gets released and the USD ROP should work again.

The same happens when loading the USD file on another machine too. I have tried loading it with Maya 2020 (maya-usd 0.8.0) and Houdini 18.5.462.


But if I overwrite the USD file from Maya 2020 (maya-usd 0.8.0), it works, even if the file is loaded in Houdini at the same time. I can also rename, move, and delete the USD file in Windows Explorer.


Hope this helps to find the cause. I'm not sure it's possible for our studio to move over to USD with this limitation at the moment :/

I will try to create a USD file with Python next, to see what happens. I just have to find out how to get it running first.
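(For reference, a minimal sketch of such a Python test, using the pxr USD API that ships with Houdini's hython; the UNC path is the example share from the repro steps above and should be replaced with your own:)

# Minimal sketch: write a small USD file straight to the shared folder with the
# USD Python API, bypassing the USD ROP.
from pxr import Usd, UsdGeom

path = "//Workstation1/USD_Folder/python_test.usd"
stage = Usd.Stage.CreateNew(path)            # fails here already if USD cannot write to the share
UsdGeom.Sphere.Define(stage, "/test_sphere")
stage.GetRootLayer().Save()
print("Wrote", path)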
Member
123 posts
Joined: Jan. 2015
Hello again,

I tried writing a USD file to another network share. This time my Windows 10 machine is connected to a folder that is only shared with the local administrator user of the Windows 10 machine sharing the folder.

To get access to the folder, I just entered the login info for the administrator user of the PC sharing the folder from my workstation.

But now I can't write any USD files to that folder with Houdini or the maya-usd plugin from GitHub. The Multiverse USD plugin, however, can do it.

Seems like the Multiverse people have solved this issue?

This is the only related thing I found in the release notes for v6.5.2:

"Writing: Resolved a permission issue which prevented to write over NFS on Windows in network environments not managed by Window server. Now both mapped drives and UNC paths are fully supported.
Writing: Resolved an issue that prevented to write compositions over NFS, or compositions that contained at least one assets stored on a NFS."
Member
120 posts
Joined: Jan. 2012
Same issue here.
H18.5.532
Windows 10.0.19042 Build 19042


Steps to repro:

1. Map network drive in Windows Explorer. For example drive E: to network location Y:

2. Write USD to Y:/test.usd
3. Load USD in Stage via Sublayer
4. Try to write to the same location from step 2 (overwrite)
5. Try to delete file from step 2

Both steps 4 and 5 will fail, saying that the USD file is loaded by Houdini.
If I write and load from drive E:, which is not a network drive, steps 4 and 5 finish successfully.
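(For reference, a rough Python translation of steps 2 to 5, using the same hypothetical drive letter, for anyone who wants to test this outside of a Houdini session:)

# Rough Python version of repro steps 2-5 (hypothetical mapped drive Y:).
import os
from pxr import Usd, UsdGeom

path = "Y:/test.usd"

# Step 2: write a USD file to the mapped network drive.
stage = Usd.Stage.CreateNew(path)
UsdGeom.Sphere.Define(stage, "/sphere")
stage.GetRootLayer().Save()
del stage                                # make sure the write step is no longer holding the layer

# Step 3: load the file from disk (stands in for the Sublayer LOP).
loaded = Usd.Stage.Open(path)

# Step 4: try to overwrite the file while it is still loaded.
other = Usd.Stage.CreateInMemory()
UsdGeom.Cube.Define(other, "/cube")
try:
    ok = other.GetRootLayer().Export(path)
    print("Overwrite while loaded:", "succeeded" if ok else "failed")
except Exception as exc:
    print("Overwrite while loaded raised:", exc)

# Step 5: try to delete the file while it is still loaded.
try:
    os.remove(path)
    print("Delete while loaded: succeeded")
except OSError as exc:
    print("Delete while loaded failed:", exc)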

Hope you can repro this.
Thanks!
Edited by tas3d - April 30, 2021 12:34:49

Attachments:
Screenshot 2021-04-30 123048.jpg (22.1 KB)

Michal Tas Maciejewski @ www.vfxtricks.com
Staff
4435 posts
Joined: July 2005
The only mystery to me here is that steps 4 and 5 work when you're working on a local drive. As long as a binary USD file is loaded by Houdini, the USD library will have that file locked on Windows.

This thread morphed, I believe, to a completely new topic around March. The original post was about "Permission Denied" errors when trying to write _anything_ to certain network folders from USD (even to files that never existed before). The more recent posts have (it seems to me) been about trying to write files that are already open in Houdini. This is a totally separate issue recently discussed on usd-interest. My understanding is that this is just the nature of the Windows OS and how USD keeps file handles open for any loaded layer. Unfortunately there's nothing we can do about that either. As mentioned on the usd-interest thread, the omniverse folks had to do some major surgery to the USD library to work around this limitation.
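(For anyone who wants to see that locking behaviour in isolation, here is a small sketch with a hypothetical path; it assumes the file on disk is binary/crate, since text usda layers are read into memory and released:)

# While a crate layer is loaded, USD keeps a handle to the file, so Windows
# refuses to delete or overwrite it until the layer is released.
import os
from pxr import Sdf

path = "//server/share/locked_test.usd"
layer = Sdf.Layer.FindOrOpen(path)   # the file handle stays open while 'layer' is alive

try:
    os.remove(path)                  # typically fails on Windows with a "file in use" style error
except OSError as exc:
    print("Locked while loaded:", exc)

del layer   # drop the reference; once the layer expires, its file handle should be closed
# After this, deleting or overwriting the file is expected to work again.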
Member
120 posts
Joined: Jan. 2012
I understand this is not a SESI issue.
If someone has a workaround on Windows, please let us know.

I am planning to upgrade my storage to a NAS, but it seems like some of them also run on SMB, so that won't fix my problem.

Thanks everyone!
Michal Tas Maciejewski @ www.vfxtricks.com
Member
789 posts
Joined: April 2020
In response to Mark's reply:

We have a workaround for this. When loading a usda, the whole file is loaded into a string and the file is closed. Later requests are served from this in-memory string.

Currently, when loading a usdc, the file is opened, some information is served to USD, and the file is then kept open for future requests for more information.

The way we solved this is that we created a new "resolver" that reads the whole usdc file into a memory buffer, closes the file, and then serves all the information from this buffer.

So far this works very well. We just completed the Pixar CLA and are looking at how to implement this nicely: we hard-coded this to be our default resolver, but if we want to contribute back, we need a more general solution. Once we have that, we'll try to get it into the dev branch of USD.
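(Not the actual resolver described above, which is a custom C++ ArResolver plugin, but the same general idea can be sketched from Python with an anonymous in-memory layer, for anyone who wants to experiment; the path is hypothetical:)

# Sketch only: copy a layer's contents into an anonymous in-memory layer so
# the handle to the on-disk file can be released.
from pxr import Sdf

src = Sdf.Layer.FindOrOpen("//server/share/asset.usd")
buffered = Sdf.Layer.CreateAnonymous("buffered.usd")
buffered.TransferContent(src)   # pull everything into the anonymous layer
del src                         # drop the last reference so the on-disk handle can be released
# 'buffered' can now be inspected or composed while the original file stays unlocked.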

Cheers,
koen
Staff
4435 posts
Joined: July 2005
Thank you for the details, koen!
Member
2 posts
Joined: Sept. 2019
Hi everyone,

I have found a fix for this issue that allows you to maintain 775 permissions (i.e., it does not require allowing everyone to write to your folders).

As for our network configuration: we have a Windows server running Active Directory, a CentOS Linux file server that hosts our files on the network via Samba, and Windows clients that access the Samba share from their various DCC applications.

For us, the issue was that permissions were not being passed cleanly along the Windows AD -> Linux -> Windows chain. USD uses the Win32 API under the hood to determine whether a Windows user has permission to write to a given directory, and this API requires that permissions are communicated clearly across these different OSes. All we had to change were settings on our CentOS file server so that all of the permissions map correctly.
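(Before touching the server, a quick way to confirm from Python that the Windows account really can write plain files to the destination directory, independent of Houdini and USD; the share path below is just a placeholder:)

# Try to create and delete a temporary file in the destination directory.
import os
import tempfile

dest = r"\\server\share\usd_output"   # replace with your destination directory
try:
    fd, tmp = tempfile.mkstemp(dir=dest)
    os.close(fd)
    os.remove(tmp)
    print("Plain file writes work in", dest)
except OSError as exc:
    print("Cannot write plain files to", dest, "-", exc)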

You shouldn't have to restart your file server for any of these fixes to work, but you may need to leave and rejoin the domain, so expect your file server to be unavailable for a bit.

MAKE SURE TO REPLACE ANYTHING IN <> IN THE BELOW CODE WITH THE VALUES SPECIFIC TO YOUR NETWORK. ALSO IF NOT RUNNING CENTOS 7, YOUR COMMANDS MAY DIFFER SLIGHTLY.

STEPS TO FIX:

You will start by ssh-ing as root (if not connected to the domain) into the Linux machine that you are trying to host a Samba share on.
If you have already connected the machine to the AD, skip ahead to the Samba installation step (yum install samba) below.
If you already have Samba running on the Linux machine but are having issues with permissions, skip ahead to the smb.conf configuration section below.

To connect to the domain, we will use the preinstalled realm package. To check whether you are already connected, use the command realm list. If anything is returned here, you are already connected to the domain; otherwise you need to join the AD.

To join the AD, type realm join -U <your domain username>; you will be prompted to enter your password. This should take a few seconds, and afterwards realm list should show that you are successfully connected to the AD.

You should at this point be able to ssh into the Linux machine as a domain user. Verify that you can do this before continuing.

Once connected to the AD, run yum install samba samba-common. Install all dependencies as needed. This should include packages such as sssd and samba, and will set up the basic Samba config directory (/etc/samba/*).

At this point, you will want to ensure everything installed correctly and nothing is corrupted by running:

systemctl start smb nmb winbind

(There should be a warning about the system not being able to find winbind yet; we will install that later.)

If you got any errors here, check the logs for the respective service and debug. Otherwise, check the status of the services with:

systemctl status smb nmb

If everything is working, you should be able to see your user's Linux home directory from Windows by using Windows Explorer to navigate to the hostname like so:

\\<hostname>

Now we will update the /etc/samba/smb.conf file to fix the permission issues we are having. Open this file with your preferred text editor; I use vi because it's preinstalled and simple to use, but you can use whatever. Update the file to look like this:
[global]

        workgroup = <DOMAIN NAME>
        security = ADS
        realm = <DNS name of the Kerberos Server>

        passdb backend = tdbsam
        kerberos method = secrets and keytab

        idmap config * : backend = tdb
        idmap config * : range = 3000-7999

        idmap config <DOMAIN NAME>:backend = rid
        idmap config <DOMAIN NAME>:range = 10000-999999

        template shell = /bin/sh
        template homedir = /home/%U

        winbind refresh tickets = yes
        vfs objects = acl_xattr
        map acl inherit = yes
        acl_xattr:ignore system acl = yes

        disable spoolss = yes
        printcap name = /dev/null
        load printers = no
        cups options = raw

# Here you will set the share name/comment/path and read only state. Don't set anything else here.
[testshare]

        comment = Test Share
        path = /test
        read only = False
Important things to know about the changes we made:
- Set the kerberos method so auth is secure
- Set the ID mapping to rid so that Winbind can translate the UIDs/GIDs back to Windows.
- Set the template shell/homedir so that each user keeps an individual home directory (and I believe template shell is required).
- Set winbind to refresh tickets because otherwise they expire after a day or so
- The three lines below winbind refresh tickets = yes also appear to be required for translating UIDs/GIDs back to Windows; this still needs to be confirmed.
- At the bottom section of global we disable printing.

At this point you should be able to restart Samba and everything should still work. You should be able to access the share (the folder under the testshare section) from Windows; again, use Windows Explorer to test this via \\<linux machine name>\<share name>. If this doesn't work, something has gone wrong; I would start by checking that the sssd, smb, and nmb services are all running.

At this point, if you run the id command in Linux, you will likely see that your ID is very large (see Figure 2). This means that the IDs are not yet mapping correctly from Windows -> Linux. This is expected behavior at this stage; we have a few more steps to do.

Next we need to install winbind. This is the service that handles mapping Linux UIDs/GIDs back to Windows SIDs, so that Windows apps using the Win32 API can correctly verify your user's permissions. To install it, use the command:

yum install samba-winbind

In order to use a helpful winbind debugging utility called wbinfo (I won't go into how to use this for debugging), you should also run:

yum install samba4-winbind-clients

Now, winbind is ready to use as a service, but is not plugged into anything yet. If we started the winbind service now, it would have no valid configuration and would actually block access to the share. So first we need to tell the Name Service Switch (NSS) to use winbind as a name resolver. To do this, open the /etc/nsswitch.conf file and edit these two lines:

passwd: files sss
group: files sss

to look like this:

passwd: files winbind sss
group: files winbind sss

After making this change, you do not need to restart/reload any services, as nsswitch is just an API for C libraries.

There is one other change we must make in order for the ID mapping to work properly. It is currently unconfirmed whether this actually affects anything, but AFAIK it is necessary. Open the file /etc/sssd/sssd.conf and add the line:

ldap_idmap_autorid_compat = True

Supposedly this line makes it so SSSD and Winbind interact correctly and pass off IDs as intended.

After making this change, make sure to reload the sssd service via:

systemctl restart sssd

Lastly you should make sure smb, nmb, winbind, and sssd are all started up and running with no issues via:

systemctl restart smb nmb winbind
systemctl status smb nmb winbind sssd

After making all of the previous changes, you should be able to exit the Linux machine and ssh back into it (with your domain login) without any issues. If not, you have messed something up.

Upon logging in, you should be given the sh shell as defined in the Samba config above, and running the id command should return both user IDs and group IDs in the 10000-999999 range (from the Samba config). You should also see that the user and group names returned by the id command include the actual domain name, e.g. <DOMAIN NAME>\<DOMAIN USERNAME>. If all of these things look right, you've successfully set up ID mapping for Samba!

Now that the mapping is working, you will need to use the new domain UIDs/GIDs for any existing folders on the share that you want to fix the permissions issues on. To do this you can run the chown command. For example, you can change the ownership of the root share folder (in this case /test) like this:

sudo chown <DOMAIN NAME>\\<DOMAIN USERNAME>:<DOMAIN NAME>\\<DOMAIN GROUP NAME> /test

You must use the syntax above (<DOMAIN NAME>\\<DOMAIN USER/GROUP NAME>) rather than the email-style name (<DOMAIN USERNAME>@<DOMAIN EMAIL>.com) when changing ownership, or it may use the old UIDs/GIDs for the groups (the really high-numbered ones) instead. The really high-numbered ones will not work from Windows.


COMMON ISSUES

On the Linux side, the IDs for groups are not mapping? For instance, when ssh-ing into the Linux machine as a domain user, I see this message:

/usr/bin/id: cannot find name for group ID 10513

To fix this, check your /etc/nsswitch.conf and ensure that you set only these lines to include winbind:

passwd: files winbind sss
group: files winbind sss

Please note that the shadow entry (which is in between passwd and group) does not include winbind.

You may have issues when turning on winbind where, after exiting ssh and reconnecting, your ID still does not fall within the correct range. To fix this, try leaving the realm and rejoining it; this seemed to force winbind to recalculate the IDs correctly.
Edited by brazen_bryce - Sept. 13, 2021 13:06:18

Attachments:
IncorrectPermissions.PNG (57.6 KB)
CorrectPermissions.png (58.3 KB)

Member
23 posts
Joined: Aug. 2013
Hey,

I'm testing out Houdini 19.0.383 (Windows 10), and I'm getting the insufficient privileges error when trying to write a USD file to disk. I use a Synology NAS as my file server.

Any hot tips?

Cheers,

Nick
Member
120 posts
Joined: Jan. 2012
Just an idea: perhaps using a different file-sharing protocol will make a difference?
Try switching from SMB to iSCSI, but really, it's a wild guess.

I am planning to buy a NAS and use it with a USD pipeline, so this is quite concerning.

https://kb.synology.com/en-us/DSM/tutorial/How_to_use_the_iSCSI_Target_service_on_Synology_NAS [kb.synology.com]
Michal Tas Maciejewski @ www.vfxtricks.com
Member
2 posts
Joined: Nov. 2019
I've created a workaround for writing USD to a Synology server from Windows by using another protocol indeed (NFS instead of SMB). We are currently using it for testing purposes only, so I don't know if this is production-stable on the Windows side (it also seems to be a bit slower compared to the default SMB protocol, but still usable; that said, I have not tested the speeds very extensively).

It is still a workaround, and hopefully this issue will be fixed in the future, but at least there is a way to get it working with Synology servers and Windows. I have made a little PDF going through the steps that I had to do, in case it is of any use for somebody (I will upload it as an attachment). You can still keep your old SMB mount as well (as long as you assign it a different drive letter, of course), so this workaround won't break any of that.

Oh, and you might need Windows Pro for this 'extra feature' to be visible. But I'm not sure if that is still the case.

Attachments:
mountUsingNFS_forUsd.pdf (184.1 KB)

Member
1 posts
Joined: May 2020
The way we fixed this was also by switching to NFS (mounting our Synology NAS with NFS instead of SMB).
But this doesn't fix the other issue: when you read a USD file with even just usdview, you won't be able to edit it until that process is closed; Windows throws an error saying "File In Use". So basically you can't have a USD file open in any process while editing/saving over it.
This issue doesn't happen locally.
It also doesn't seem to happen across multiple machines, meaning that if I read a USD file on machine A, another machine B can still make edits to it, as long as machine B doesn't itself open it in some process. So it's more of a local issue, but only in shared directories.
Another interesting finding: because our Synology NAS supports both NFS and SMB, I am able to access the file through both. So if I read a USD file through an NFS path and then try to edit it through an SMB path on the same machine, that works fine too.

Attachments:
Capture d’écran 2021-11-12 084744.png (14.9 KB)

Member
120 posts
Joined: Jan. 2012
Houdini 19.0.436: "Added a new USD ROP parameter that allows Houdini to work around some USD issues with overwriting USD files on network drives (at the cost of a performance penalty during the save)."

Try this. Perhaps mtucker can chime in on whether this feature can help in this case.
Michal Tas Maciejewski @ www.vfxtricks.com