unRAID


A community for unRAID users to discuss their projects.

1
2
3
 
 

The drive passes all SMART checks and nothing is flagged as a warning or fault, yet when I do a read test, I get all these errors.
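
For reference, SMART checks of this sort can be run from the console (a rough sketch; /dev/sdX is a placeholder for the drive in question):

    # extended (long) self-test, then review the self-test log once it finishes
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX

    # full attribute dump - reallocated, pending and CRC error counts are the usual suspects
    smartctl -a /dev/sdX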

4
 
 

So I've recently had a power cut that caused a few issues. As it happens, I had just bought a new 8TB IronWolf to replace a couple of older 1TB drives from the dawn of time.

Typically though, this drive was DOA and the issues have taken out one of the 1TB drives.

I've bought a new 8TB WD Red that works great. I've used unbalance to migrate the old drives' data to the new drive so I can get rid of them, as they were starting to show SMART errors.

My question is this.

The seemingly dead 1TB looks like it's working through a USB-to-SATA adaptor under Unassigned Devices, and I would like to get the data from it back onto my array, but I'm unsure of the rsync command to use to copy it from the old drive to the new array drive.

Obviously unbalance can't see it as it's an unassigned device.

Any help would be really appreciated.

Thanks
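
In case it helps anyone searching later, the shape of the command is roughly this (a sketch only; Unassigned Devices normally mounts drives under /mnt/disks/, and the source/destination paths here are placeholders to adjust):

    # dry run first to see what would be copied
    rsync -avh --progress --dry-run /mnt/disks/old_1tb/ /mnt/user/your_share/

    # real copy; -a preserves permissions/timestamps, and the trailing slash on the
    # source copies the folder's contents rather than the folder itself
    rsync -avh --progress /mnt/disks/old_1tb/ /mnt/user/your_share/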

5
1
submitted 1 year ago* (last edited 1 year ago) by BlovedMadman@lemmy.world to c/unraid@reddthat.com
 
 

Appdata, isos, system and domains are the only things on the cache drive. Mover has been run and there is nothing on the cache drive that shouldn't be. I only have one docker container installed currently (Plex) and no VMs or ISOs downloaded. I have tried to delete old files that docker apps left behind, but some of the folders won't delete. (I also have a share that's empty, but it contains folders I can't delete - no files in them that I can see - and therefore I can't remove the share.)

I'm at a point where I want to restart my server from scratch, but I don't want to lose the data on the drives in the array. Can this be done? And if so, how?

6
 
 

Has anyone successfully deployed something like Subs AI within unraid?

Basically I'd like to use this to grab all the missing subtitles that Bazarr isn't able to grab.

PS: If anyone knows of a similar app with a scheduler built into the web UI, please let me know

7
 
 

As my bug report explains: the video guide and script from SpaceinvaderOne for ZFS snapshot backups cannot safely back up live databases - it will corrupt them. As long as you use this script, please make sure you either stop all docker containers that use a database before running it, or copy the live database into a db dump beforehand.
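
As a rough illustration of those two options (the container name, script path and env var below are placeholders, not part of the original script):

    # option 1: stop the container so the database is quiescent during the snapshot
    docker stop mariadb
    /boot/config/plugins/user.scripts/scripts/zfs_backup/script   # your snapshot script
    docker start mariadb

    # option 2: dump the live database to a file first and back that up instead
    docker exec mariadb sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' \
        > /mnt/user/backups/mariadb.sql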

8
1
submitted 1 year ago* (last edited 1 year ago) by focusforte@lemmy.world to c/unraid@reddthat.com
 
 

So I'm trying to get Unraid set up for the first time. I'm still running the free trial, and assuming I can get this set up I do plan on purchasing it, but I'm starting to get frustrated and could use some help.

I previously had a Drobo, but that company went under and I decided to switch to an Unraid box because, as far as I can tell, it's the only other NAS solution that will let me upgrade my array over time with mixed drive capacities.

Initially everything was fine: I popped in all my empty drives, set up the array with one 20TB drive as parity and one 20TB plus five 14TB drives as data disks, and started to move stuff over from the backups I had from the previous Drobo NAS using the Unassigned Devices plugin and a USB 3 hard drive dock.

Well, what has happened twice now is that a device will randomly go disabled with no indication of what's wrong. The blank drives are only about a year old and had shown absolutely no signs of failing before this. They are also still passing SMART tests with no errors. The first time it happened it was the brand-new 20TB drive in the parity slot. I was able to resolve it by unassigning that drive, starting the array, and then stopping it to reassign the drive - that's what a forum post I found suggested, and it seemed to work, but it kicked off a whole new parity sync that was estimated to take a week. The thing is, I don't have a week to waste, so I went ahead and started moving files back onto the system again, but now the same thing has happened to disk 3, one of the 14TB drives.

I'm at my wits' end. The first time it happened I couldn't figure it out, so I just wiped the array, reinstalled Unraid and tried again. Are there any common pitfalls anyone could recommend I check, or is anyone just generally willing to help?

My hardware: Ryzen 7 5700G, 64GB of 3200 ECC DDR4 (2x16GB currently installed, plus another 2x16GB that arrived yesterday and still needs to be installed - it shipped late), in a case with 8 NAS bays in the front.

Blank drives: 2x20TB, 5x14TB, 1x8TB. Drives with data on them: 2x20TB, 1x10TB, 1x8TB (totaling around 40TB of data across them).

Once the data is moved off the drives that have data on them, I do intend to add them to the array. My NAS case has eight bays plus two internal SSDs that are separate from the NAS bays: one SATA SSD set up as a cache, the other an NVMe M.2 set up as a space for appdata.

As of last night before I went to bed I had about 3TB of data moved onto the array, but during the overnight copy something happened to disk 3 and the device was marked as offline. I couldn't find any error messages explaining why.

The Parity drive was already invalid because I was copying data in while the parity sync was happening, and now I can't get the array to start at all.

I tried something a forum post recommended, which was to start the array in maintenance mode with the disk unassigned, then stop the array and restart it with disk 3 reassigned to the correct drive, but it refuses to do so. It tells me there are too many wrong or missing disks.

The weirdest part is that I know disk 3 still has the data on it, because if I unassign it and then mount that drive using the Unassigned Devices plugin, all the data is still there and browsable.

I'm starting to feel really dumb because I don't know what I'm doing wrong. I feel like there's got to be something simple I'm missing, and I just can't figure out what it is.

9
 
 

Hello!

I’ve been running unRAID for about two years now, and recently had a thought to use some spare parts and separate my server into two based on use. The server was used for personal photos, videos, documents, general storage, projects, AI art, media, multitude of docker containers, etc. But I was thinking, it’s a bit wasteful to run parts that I use once or twice a week or less 24/7, there is just no need for the power use and wear and tear on the components. So I was thinking to separate this into a server for storage of photos, videos and documents powered on when needed, and then a second server for the media which can be accessed 24/7.

Server 1 (photos, videos, documents, AI experiments): 1 x 16TB parity, 2 x 14TB array. i7-6700K, 16GB RAM

Server 2 (media, docker): 1 x 10TB parity, 1 x 10TB and 2 x 6TB array. Cheap 2-core Skylake CPU from spare parts, 8GB RAM.

With some testing, server 2 only pulls about 10w while streaming media locally, which is a huge drop from the 90+ watts at idle that it was running when I had everything combined.

I was hoping to use an old laptop I have laying around for the second server instead, which has an 8 core CPU, 16GB ram, and runs at 5w idle. I have a little NVMe to SATA adapter that works well but the trouble is powering the drives reliably.

Anyways, pros of separating it out, lower power usage, less wear and tear on HDDs so I will have to replace them less frequently.

Cons, running and managing two servers.

Ideally, I’d like to run server 1 on the cheap 2-core Skylake CPU (it's only serving some files after all), server 2 on the laptop with 8 cores (though that still has the issue of powering the drives), and then use the i7-6700K for a spare gaming PC for the family.

Alternative would be to just combine everything back into one server and manage the shares better, have drives online only when needed, etc. But I had issues with this, and would sometimes log into the web ui to find all drives spun up even though nothing was being accessed.

Anyways, I hope all of that makes sense. Any insight or thoughts would be appreciated!

10
11
1
submitted 1 year ago* (last edited 1 year ago) by p5f20w18k@lemmy.world to c/unraid@reddthat.com
 
 

So my SWAG docker container can't see other containers on the same docker network; all the conf files need the IP and port to work.

The other containers can see each other (sonarr and sab for example) and they are all on the same network.

Anyone know why?

Found the fix:

12
 
 

Anyone know of a USB 3.2 Gen 2 (or lower) 2.5GbE adapter that works well with Unraid? I know it's my old friend Slackware under the hood, but I'm not sure how far that will get me since it seems to be pretty well stripped down. I'm fresh out of PCIe slots in my little box but have plenty of USB 3.2 Gen 2 ports to go around, since the only USB device I use is my UPS.

13
 
 

Hi guys, it looks like the used Dell 2080 Ti I bought off of Reddit died after a couple of months of life.

I have been throwing some AI workloads at it (image generation, speech to text, etc) and it looks like the Nvidia driver randomly stopped seeing it. Tried downgrading the driver version and rebooting but as soon as I started throwing some AI workloads at it, the same thing happened.

Can anyone suggest a good dual slot GPU? Doesn't really need to be one of the consumer cards as I'll only be using this for AI workloads and transcoding via tdarr and Plex.

Thank you!

14
 
 

As the title says, I updated to 6.12 and suddenly a new share called "appdata_cache" appeared. I have my appdata share living on the cache (primary: cache, secondary: array, mover: cache <-- array).

Anyone know what and why that is?

15
 
 

I’m new to the Unraid scene, after putting off doing something other than Windows-based serving and sharing for about.. oh, about 14 years. By “new to the scene”, I mean: “Trial expires in 28 days 22 hours 6 minutes” :-)

Anywho, I ran into an issue with a disabled drive. The solution was to rebuild it, and I solved it thanks to a post by u/Medical_Shame4079 on Reddit.

That made me think about the whole “losing stuff on Reddit” problem we may face in the future. While this post isn’t much, maybe it will be helpful to someone else, sometime else.

The issue? A drive had a status of disabled, with the message “device is disabled, contents emulated”.

The fix:

Stop the array, unassign the disk, start the array in maintenance mode, stop it again, reassign the drive to the same slot. The idea is to start the array temporarily with the drive “missing” so it changes from “disabled” to “emulated” status, then to stop it and “replace” the drive to get it back to “active” status.

Looking forward to more time with Unraid. It’s been easy to pick up so far.

16
1
submitted 1 year ago* (last edited 1 year ago) by Dianoga@lemm.ee to c/unraid@reddthat.com
17
18
 
 

Hey,

I have an SSD pool with two relatively small SSDs.

Now I've started to notice that one of my SSDs is failing. So I thought, why not use this opportunity to upgrade the pool? This is how I expect it to work:

  1. Buy two new bigger SSDs
  2. Restart server in safe mode
  3. Remove failing SSD
  4. Install one of the new SSDs
  5. Add the new SSD to the pool
  6. Start the Array?
  7. The pool should regenerate???
  8. Start with Step 3 again and replace the second small SSD
  9. Profit ???

Is this how it works, or do I really need to first use the mover to move all data back to the hard drives and replace the pool all at once?

19
 
 

We've got some extremely warm weather coming and I'm going to be out of town for a while. I can't trust the in-laws staying here to do anything server related.

Anyone know of a plugin or script to automatically shut the system down if its temperature gets too high?
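
If nothing turns up, a small User Scripts cron job along these lines would do it (a sketch only; the thermal zone path and threshold are examples, and zone numbering varies by board):

    #!/bin/bash
    # shut the server down cleanly if the CPU temperature crosses a threshold;
    # schedule it every few minutes via the User Scripts plugin
    LIMIT_C=85                                    # example threshold in °C
    ZONE=/sys/class/thermal/thermal_zone0/temp    # check /sys/class/thermal/thermal_zone*/type for the CPU zone

    temp_c=$(( $(cat "$ZONE") / 1000 ))           # value is reported in millidegrees

    if [ "$temp_c" -ge "$LIMIT_C" ]; then
        logger "CPU at ${temp_c}C (limit ${LIMIT_C}C) - powering down"
        /usr/local/sbin/powerdown                 # Unraid's clean shutdown script
    fi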

20
 
 

cross-posted from: https://discuss.tchncs.de/post/464987

If you aren't already using the mover tuning plug-in, now is a good time to have a look at it.

The latest update allows per-share settings override for detailed control over how your caches are used.

I use this plug-in to keep files in cache based on their age, so for example in a media server, you can have the last 14 days of TV shows kept in cache, while still running the mover regularly.

It can also do a full move if the disk is above a certain threshold value, so if your cache is getting full, it can dump all files to the array as per normal.

So you always keep the most important recent files on the cache, with a greatly reduced risk of running into a full cache issue and the problems that causes.

Now, with the latest update, you can tune these settings PER SHARE, rather than being across the whole system.

21
 
 

I’m running the binhex Mullvad container, but now that Mullvad is removing port forwarding I have 2 questions.

Is there a setting that can fix this issue in the container?

If not, I assume I would need to do a new container using a different VPN?

22
 
 
23
1
submitted 1 year ago* (last edited 1 year ago) by tko@tkohhh.social to c/unraid@reddthat.com
 
 

My favorite new feature of 6.12 is the "Exclusive Shares" concept. Here's a rundown:

Background

Unraid user shares use FUSE (Filesystem in Userspace), which allows data on multiple drives to be presented as a single file system. This idea is at the heart of Unraid's Array, as well as the concept of "Cache Pools" (now known as Named Pools). Any time you see a path that starts with /mnt/user in Unraid, that's a FUSE mount (prior to 6.12, that is).

FUSE is great for giving us this transparent way to view our files without having to worry about which physical drive those files reside on. However, this comes at a cost... and that cost is reduced performance for applications running on an SSD Named Pool.

This performance penalty wasn't always noticeable, but it would sometimes rear its ugly head in unexpected ways (examples: Graylog and Duplicati). There was a workaround, assuming your appdata user share was stored entirely on one Named Pool: you could update your docker bind mounts to /mnt/[poolname]/appdata instead of /mnt/user/appdata. This bypassed the FUSE layer.

Exclusive Shares

With Unraid 6.12, Limetech introduced "Exclusive Shares" as part of the Share Storage Conceptual Change. This gives us a built-in way to bypass FUSE on an entire user share.

In order for a share to be designated an Exclusive Share, the following must be true:

  • "Primary Storage" must be a Named Pool
  • "Secondary Storage" must be set to none
  • Files for that share must exist entirely on the Primary Storage device

Setup

In order to use Exclusive Shares, you first have to enable them. Go to Settings > Global Share Settings and change the Permit Exclusive Shares setting to Yes. You'll have to stop your array in order to make this change.

Next, make sure that your appdata share is stored entirely on the Named Pool. Go to Shares and click Compute in the Size column for your appdata share. This will tell you how much data for this share is saved on each drive. If the only drive reported is your Named Pool, you're all set. If you've got more than one drive, you'll need to disable docker in settings, and then run the mover.

Once you're sure that your entire appdata share is saved on your Named Pool, you need to change your appdata share settings. On the Shares tab, click on appdata to bring up the settings and change the Secondary Storage option to None.

If you did it correctly, after you Apply the changes you will see the Exclusive Access field on the appdata share change from No to Yes.

Finally, if any of your docker container bind mounts use /mnt/[poolname]/appdata/..., you can change those to /mnt/user/appdata/....
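
For example (plex, the "cache" pool name and the image are placeholders; the only change is the bind mount path):

    # before: bypassing FUSE by pointing straight at the pool
    docker run -d --name plex -v /mnt/cache/appdata/plex:/config lscr.io/linuxserver/plex

    # after: with appdata as an Exclusive Share, /mnt/user/appdata resolves directly to the pool
    docker run -d --name plex -v /mnt/user/appdata/plex:/config lscr.io/linuxserver/plex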

I hope this helps anybody who might have been frustrated with appdata FUSE performance in the past! If you have any questions, let me know!

24
 
 

Changes vs. 6.12.1

This is mainly a bug fix release, including also a minor security update. Other highlights:

  • We reverted docker from v23.0.6, introduced during Unraid OS 6.12 development, to v20.10.24, which is the latest patch release of docker used in Unraid OS 6.11. This is to address increased memory usage and other issues discovered with docker.
  • A small necessary change to invoke our 'update_services' script whenever a WireGuard tunnel starts or stops is automatically applied to all 'config/wireguard/*.conf' files when you update via Update OS page. For manual update or if you downgrade, it is necessary to make a "dummy change" in a setting on the Settings/VPN Manager page and then click Apply.

Bug fixes and improvements

  • email notifications: add line in /etc/php.ini: 'mail.mixed_lf_and_crlf=On' to workaround change in PHP8 CRLF handling
  • emhttpd: Fix regression: emulated ZFS volumes not recognized
  • emhttpd: Fix regression: format fails if diskFsType==auto and defaultFsType specifies encryption
  • emhttpd: Fix regression: mount fails if diskFsType==auto
  • htop: remove predefined /root/.config/htop/htoprc file
  • network: RC services update:
    • NFS - fix service reload
    • RPC - fix service reload
    • NGINX - remove HTTPS port in FQDN redirect when default 443
    • All services - register IPv4 Link local assignment (169.254.xxx.xxx)
    • All services - make lock file programmable
    • WireGuard: delayed service update to avoid race condition
    • rc.library: do not allow duplicates in bind list
  • webgui: Dashboard updates:
    • Re-introduce show/hide tile content
    • Add new icon function to show/hide all content at once
    • Reduce gap between columns
    • description -> model
    • ZFS: fix percentage value to max 100%
    • Use prototype function: This makes it easier for 3rd party developers to automatically hide dynamic content
    • Handle duplicate IP assignments and give warning
    • change header MEMORY to SYSTEM
  • webgui: OS Update: add checkbox to confirm reading release notes before upgrading
  • webgui: diagnostics: include list of duplicate assignments
  • webgui: NFS: for Security/Private increase Rule field from 256 to 512 characters.

Linux kernel

  • version 6.1.36

Base Distro

  • bind: version 9.16.42 (CVE-2023-2911)
  • docker: 20.10.24 (revert from v23.0.6)
25
 
 

Bug fixes

  • emhttpd: remove "unraid" from reserved names list
  • emhttpd: properly handle "ERROR" strings in 'btrfs filesystem show' command output
  • emhttpd: prevent cmdStart if already Started
  • network: Revised service reload functionality: ensures the services are only reloaded once
  • network: rc.library: read IP addresses directly from interfaces instead of file
  • network: NTP: fix listening interfaces
  • network: NTP: exclude WG tunnels and user defined interfaces
  • network: NTP: add interface name in config
  • network: SSH: add interface name in config
  • webgui: fix PHP8 warning in UPS Settings page
  • webgui: Dashboard: show ZFS percentage based on c_max value
  • webgui: Dashboard: suppress coloring of ZFS utilization bar
  • webgui: Dashboard: other misc fixes

Linux kernel

  • version 6.1.34

Base Distro

  • ttyd: version 1.7.3 (fixes issue of invisible underbar characters with certain Firefox versions)

Security updates

  • ca-certificates: version 20230506
  • curl: version 8.1.2 (CVE-2023-28322 CVE-2023-28321 CVE-2023-28320 CVE-2023-28319)
  • git: version 2.35.8 (CVE-2023-25652 CVE-2023-25815 CVE-2023-29007)
  • ntp: version 4.2.8p17 (CVE-2023-26551 CVE-2023-26552 CVE-2023-26553 CVE-2023-26554 CVE-2023-26555)
  • openssl: version 1.1.1u (CVE-2023-2650)
  • openssh: version 9.3p1
  • php: version 8.2.7
  • libX11: version 1.8.6 (CVE-2023-3138)
  • libssh: version 0.10.5 (CVE-2023-1667 CVE-2023-2283)
  • zstd: version 1.5.5