# Docker, NAS Shares on Unraid, and Stale File Handles

*Jordan Hofker*

Unraid's unique software RAID setup is great until you're mounting Docker volumes over NFS from remote hosts.
This was driving me nuts a while back. I'm curious how others have dealt with it, or if they've even seen the issue, but here's what worked for me.
I'm a fairly heavy user of docker compose, with about 105 containers running currently. I've also been using Unraid for quite some time, and only recently migrated all of my containers off of Unraid and onto separate hosts, to allow for easier management of the load particular apps place on my server. To keep my data in one place while spreading out the computational load, I've been using my Unraid shares to back those containers. Inside a compose file, that often looks like:
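The original snippet didn't survive here, but the pattern is a named volume using the `local` driver's NFS support. A sketch, where the server address, share path, and names are placeholders rather than my actual layout:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest   # hypothetical image
    volumes:
      - appdata:/config

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,soft"     # placeholder Unraid server IP
      device: ":/mnt/user/appdata/app"   # placeholder share path
```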
Now, before I was doing this per compose stack, I was manually mounting these shares with NFS on the docker host. I decided I liked having the shares specified per application, since it lets me be specific about what's mounted where and when. Either way, I experienced the same issue: containers would slowly lock up over time, with logs like this as they did:
```
mount.nfs: Stale file handle
```
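For context, the host-level mounting I'd been doing before was just an `/etc/fstab` entry along these lines (the IP and paths are placeholders, not my real ones):

```
# Unraid share mounted on the Docker host via NFS
192.168.1.10:/mnt/user/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0
```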
Usually, a quick docker compose down and up would take care of this for a little while, but that gets old. So old that I aliased the commands just to make life a little bit easier:
```bash
# In ~/.bash_aliases I added these just to reduce my typing:
alias dc="docker compose"
alias dcr="dc pull && dc down -v && dc up -d --remove-orphans"
```
Even that got old, though, and I went off in search of alternatives. Several days (okay, weeks) later, I saw someone suggest that CIFS could work "better" in this scenario. Yes, it's less cool to use CIFS for this, but I'd rather have something working that I don't have to constantly babysit. So after some experimentation, I converted my docker volumes to this:
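This snippet also didn't survive, but the CIFS version is the same `local`-driver pattern with `type: cifs`. A sketch, where the address, share name, and credentials are all placeholders:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: cifs
      # placeholder server, account, and permissions
      o: "addr=192.168.1.10,username=dockeruser,password=changeme,uid=1000,gid=1000,file_mode=0664,dir_mode=0775"
      device: "//192.168.1.10/appdata"
```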
If the shares on your NAS are public read/write, you probably don't need to include the account, but connecting this way fixed my issue with stale file handles. Until it all breaks in some way, or I receive better information on fixing this, that's how things will stay.
## Okay, but have you tried...
Probably! But please let me know if there's something else I should give a shot:
- Forcing different versions of NFS
- Mounting at the host level
- Scripting to remount shares after some amount of time
- Alternative NFS-style applications
- Public shares, private shares
- One single share (`//server-ip/data`) and then mapping all folders off of that
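Some of these are easy to sketch. Forcing an NFS version, for instance, is just an extra mount option in the volume definition; here with a placeholder server address and a version I haven't tried pinning myself:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      # nfsvers pins the protocol version; addr and path are placeholders
      o: "addr=192.168.1.10,nfsvers=4.2,rw,soft"
      device: ":/mnt/user/appdata/app"
```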
I believe this issue stems largely from Unraid's unique storage design: with cache layers and multiple disks backing a single share, files get moved around behind the scenes, and that's what invalidates the NFS file handles. But I really like being able to throw mismatched disks into that machine without thinking about hardware RAID, so I'm not yet willing to swap out my NAS.