RAID "Degraded"/offline — why it is a high-risk state
The "Degraded" message means the array is operating in emergency mode (redundancy is missing or one of the copies is inconsistent). In this state, one careless step can overwrite metadata and make reconstruction much harder. If your data matters, the priority is stopping all writes and securing read access. In the case of arrays and NAS systems, the safest path is to move to RAID/NAS array reconstruction (with no blind rebuilds).
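On a Linux-based NAS, "stopping all writes" can usually be done without touching the array itself. A minimal sketch, assuming Samba/NFS services and a data volume mounted at /mnt/volume1 (all service and path names are placeholders for your environment):

```shell
# Stop the services that write to the array (service names vary by NAS vendor).
sudo systemctl stop smbd nfs-server 2>/dev/null || true

# Flush any pending writes, then switch the data volume to read-only.
sync
sudo mount -o remount,ro /mnt/volume1 2>/dev/null || true

# From here on, the volume can still be inspected but no longer modified.
echo "writes stopped"
```

If the NAS web interface offers a clean shutdown, prefer that over pulling the power, since an unclean stop adds more inconsistency to an already degraded array.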
What you absolutely must NOT do before handing it over to a laboratory
- Do not start a rebuild/re-sync "just to see what happens".
- Do not re-initialise the array and do not create new volumes.
- Do not update the controller/NAS firmware during the incident.
- Do not change the disk order or swap drives blindly.
- Do not run file-system repair tools on the volume; they can change the structure permanently.
What to do instead (safe checklist)
- Stop services that are writing data (VMs, databases, shares) and perform a controlled shutdown.
- Take photos/screenshots: array status, error messages, bay order, models and serial numbers.
- Label the drives (slot 1/2/3/…) and do not connect or spin them up individually in another computer's operating system.
- If possible, prepare the configuration details (RAID level, stripe size, controller model).
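On Linux software RAID (mdadm), the configuration details from the checklist can be collected with read-only queries. A sketch assuming an md array at /dev/md0 with members such as /dev/sda1 (substitute your own device names); none of these commands modify the array:

```shell
# Array state: a degraded member shows as "_" in the [UU_] pattern.
cat /proc/mdstat 2>/dev/null || true

# RAID level, chunk (stripe) size, and member order.
sudo mdadm --detail /dev/md0 2>/dev/null || true

# Per-disk superblock metadata; repeat for every member.
sudo mdadm --examine /dev/sda1 2>/dev/null || true

# SMART health of the physical drive behind each member.
sudo smartctl -a /dev/sda 2>/dev/null || true
```

Save the output to files alongside your photos; a laboratory can reconstruct the original layout from this metadata even if the controller later refuses to import the array.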
When to report the case
If the array is in "Degraded" state, read errors appear, the volume disappears, or the NAS will not boot, the safest option is to move to RAID/NAS data recovery. In the laboratory, we start with imaging the drives (including cases where the array uses SSDs: SSD/NVMe, or traditional spinning HDD), and only then do we reconstruct the volume on the copy.
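The "image first, reconstruct on the copy" order described above is the same one you would follow with open tools. A hedged sketch using GNU ddrescue, assuming a failing member at /dev/sdb and a larger destination disk mounted at /mnt/dest (both placeholders); never image onto a disk that belongs to the same array:

```shell
# First pass: copy the readable areas quickly, skipping bad sectors (-n).
sudo ddrescue -n /dev/sdb /mnt/dest/sdb.img /mnt/dest/sdb.map || true

# Second pass: retry the remaining bad areas up to three times (-r3);
# the map file lets ddrescue resume exactly where it left off.
sudo ddrescue -r3 /dev/sdb /mnt/dest/sdb.img /mnt/dest/sdb.map || true
```

Any later reconstruction attempt then runs against sdb.img, not the original drive, so a wrong guess about disk order or stripe size costs time rather than data.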
Reporting the case: describe the error messages and the NAS/controller model, and you will receive a safe action plan. Request form.
Most common mistakes that make RAID reconstruction harder
The most damaging errors are the ones that change metadata while you are still trying to understand the incident. A rushed rebuild, importing the array in a different order, updating firmware during the outage, or forcing a new configuration can turn a recoverable case into a much longer one. If your array already shows a warning similar to the case described in "RAID 5 shows degraded but drives are healthy", treat every additional write as a potential complication rather than a harmless test.
How to prepare the array for safe diagnosis
Before handing the case over, document the bay order, model numbers, serial numbers, controller/NAS screenshots and every alert that appeared before the outage. If the incident affected a company environment, it also helps to list which shares, VMs or databases became unavailable first — the same discipline speeds up cases like a RAID failure in a company or a server/NAS outage in the first 24 hours. A clean description often saves more time than another risky reboot.
If you want to assess the case safely
If the files matter and you do not want to keep risking more experiments, use the contact form and describe the device, symptoms and the most important data. You can also review typical ranges on the data recovery cost page and go straight to HDD data recovery if you want the service path that fits this case best.