In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that; otherwise you're going to need to manually copy data out of the containers. Personally, if all you're talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers pointing at the new NFS locations.
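The stop/copy/re-create workflow might look something like this (container name, image, and paths are all hypothetical examples, assuming the NFS export is already mounted at `/mnt/nfs`):

```shell
# Stop the container so the data is quiescent before copying
docker stop myapp

# Copy the old local data dir to the NFS mount, preserving
# permissions/ownership (-a)
cp -a /var/lib/myapp-data/. /mnt/nfs/myapp-data/

# Remove the old container and re-create it with the data volume
# bind-mounted from the NFS location instead
docker rm myapp
docker run -d --name myapp \
  -v /mnt/nfs/myapp-data:/data \
  myapp-image
```

Verify the application comes up cleanly against the NFS copy before deleting the old local data.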
All this said though, some applications really don't like their data stored on NFS. I know Plex really doesn't function well when its database is on NFS. But the Plex media directories are fine to host from NFS.
Docker's documentation covers which backing filesystems are supported for container storage.
In general, you should treat your container root filesystems as completely ephemeral. But you will generally want them on low-latency, local storage. If you move most of your data to NFS, you can hopefully keep just a minimal local disk for images/containers.
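If you do dedicate a small local disk to Docker's image/container storage, the location is set by the `data-root` key in `/etc/docker/daemon.json` (the mount path below is just an example; restart the Docker daemon after changing it):

```json
{
  "data-root": "/mnt/local-ssd/docker"
}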
As for your data volumes, it's likely going to be very application specific. I've got Postgres databases running off remote NFS that are totally happy. I don't fully understand why Plex struggles to run its database/config dir from NFS (possibly SQLite's file locking, which is notoriously unreliable over NFS). Disappointingly, I generally have to host it on a filesystem and disk local to my Docker host.
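For the volumes that are happy on NFS, Docker's built-in `local` volume driver can mount the NFS export for you, so you don't have to manage the mount on the host. A sketch, with a made-up NFS server address and export path:

```shell
# Create a named volume backed by an NFS export
# (addr, nfsvers, and the export path are example values)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/export/pgdata \
  pgdata

# Use it like any other named volume
docker run -d --name postgres \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
```

The same thing can be declared under the `volumes:` key in a Compose file with `driver: local` and the matching `driver_opts`.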