userbinator 2 days ago

Reading the part about using foam to make these drives quieter, and the link to the author's other article about putting drives on foam, makes me write this obligatory warning: hard drives do not like non-rigid mounting. Yes, the servo can usually still position the heads on the right track (since it's a servo), but power dissipation will be higher, performance will be lower, and you may get more errors with a non-rigid mount. Around 20 years ago it was a short-lived fad in the silent-PC community to suspend drives on rubber bands, and many of those who did that experienced unusually short drive lifetimes and very high seek error rates. Elasticity is the worst, since it causes the actuator arm to oscillate. The ideal mount is as rigid as possible.

  • matt-p a day ago

    Yes. If you need this you are far better off buying SSDs than wasting time on these silly ideas.

    • CaptainOfCoit a day ago

      How much would 18TB of SSDs cost compared to 18TB of HDDs? Probably a big reason why many still go for HDDs today.

      • justsomehnguy a day ago

        SSDs are still roughly 3x the price per Tb. You can get an 8Tb QVO SATA drive for like ~$300 so... ~$40/Tb

        • hyperpl 19 hours ago

          Where are you seeing 8TB (assuming you meant TB, not Tb) for $300?

          • justsomehnguy an hour ago

            At Amazon. Guess I needed to check it better. Also I shouldn't have done it in a bar. sigh

            Anyway, I have a spreadsheet with both the prices and $/Gb/Tb, so I'll just copy the relevant part here and hope the formatting persists:

                $/Tb Item Interface Capacity Price, $
                 $90 SSD 4Tb Samsung 870 QVO (MZ-77Q4T0BW) SATA3 4000 $359
                 $96 SSD 2Tb Samsung 870 QVO (MZ-77Q2T0BW) SATA3 2000 $192
                $102 USB 1Tb SanDisk Ultra Dual Drive Go (SDDDC3-1T00-G46) USB 1000 $102
                $103 SSD Samsung PM1643a MZILT3T8HBLS-00007 3.84T (root) SAS 3840 $394
                $104 SSD 4Tb Samsung 870 EVO (MZ-77E4T0BW) SATA3 4000 $417
                $105 SSD 2Tb Transcend 225S (TS2TSSD225S) SATA3 2000 $210
                $106 SSD 4Tb Transcend 230S (TS4TSSD230S) SATA3 4000 $425
                $107 SDXC 2Tb MicroSD SanDisk Extreme (SDSQXAV-2T00-GN6MN) SDXC 2000 $215
                $109 SSD 2Tb Samsung 870 EVO (MZ-77E2T0BW) SATA3 2000 $217
                $111 SSD 2Tb Kingston KC600 Series (SKC600/2048G) SATA3 2000 $223
                $115 SSD 8Tb Samsung 870 QVO (MZ-77Q8T0BW) SATA3 8000 $923
                $128 SSD 7.68Tb Samsung PM9A3 (MZQL27T6HBLA-00A07) OEM U.2 7680 $985
                $129 SSD 1Tb Samsung 870 EVO (MZ-77E1T0BW) SATA3 1000 $129
                $134 SSD 3.2Tb Intel P4610 Series (SSDPE2KE032T801) U.2 3200 $430
                $140 SSD 1.6Tb Intel P4610 Series (SSDPE2KE016T801) U.2 1600 $224
            
            So 8Tb QVO are $115/TB.
  • mmaunder 2 days ago

    Thanks, very interesting. TIL.

    • ranma42 2 days ago

      I've been mounting my 3.5" hard drives on those "fad" rubber band 5.25" drive bay adapters for decades and have not noticed any increased failure rate at all. Sure, seek time may be worse, but the reduced noise has been worth it for me.

      • userbinator a day ago

        The problem isn't just slower seeks; it's when vibration causes the head to go off-track and write data where it shouldn't, faster than the servo can correct. Track pitch in modern hard drives is only a few dozen nanometers.

      • justsomehnguy a day ago

        I think OP is talking about something quite different.

        Can you give a pic or a link to what you are using?

  • justsomehnguy a day ago

    As someone with a bit of experience on this topic:

    HDDs don't like micromovements. If you put one on a pink foam mat (either a computer one or a yoga one) it wouldn't matter. But if you 'rigid mount' it and your screws come loose, your HDD won't like it, because that would result in microvibrations from the self-induced oscillations.

    Rubber washers are good because they eat those microvibrations. The hard foam talked about in the linked article is not good because it is bad from all aspects - too hard to eat up microvibrations, too soft to be a rigid mount.

    The worst thing you can do is to rigid mount an HDD to a case which is subject to a constant vibration load, e.g. from a heavy-duty fan or some engine.

  • 7bit a day ago

    I'm pretty sure whatever that community experienced is more anecdotal than statistically provable...

    • more_corn 20 hours ago

      I’ve worked at a scale that is statistically relevant. Tens of thousands of drives under my control. I’ve seen a ton of different failure modes. Some of our anecdotes are actually useful. The problem with book and lab theory is that sometimes the theoretical problems don’t manifest (SSD wear out for example) and sometimes the minor seeming things turn out to matter a lot.

hddherman 2 days ago

Hello, author here! It's a nice surprise to notice my own post here, but the timing is unfortunate as I'm shuffling things around on my home server and will accidentally/intentionally take it offline for a bit.

Here's a Wayback Machine copy of the page when that does happen: https://web.archive.org/web/20251006052340/https://ounapuu.e...

  • justinclift a day ago

    Have you considered 2nd hand enterprise SSDs?

    Sometimes larger sized models of those (15TB+) can be found with very good pricing. :)

    • hddherman 17 hours ago

      I have considered going that route, but I'd have to switch to a platform that supports those formats and that will likely be too expensive for me as a hobbyist.

aftbit 2 days ago

I've been considering "de-enterprising" my home storage stack to save power and noise and gain something a bit more modular. Currently I'm running on an old NAS 1U machine that I bought on eBay for about $300, with a raidz2 of 12x 18TB drives. I have yet to find a good way to get nearly that much storage without going enterprise or spending an absolute fortune.

I'm always interested in these DIY NAS builds, but they also feel just an order of magnitude too small to me. How do you store ~100 TB of content with room to grow without a wide NAS? Archiving rarely used stuff out to individual pairs of disks could work, as could running some kind of cluster FS on cheap nodes (tinyminimicro, raspberry pi, framework laptop, etc) with 2 or 4x disks each off USB controllers. So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.

  • scottlamb 2 days ago

    > How do you store ~100 TB of content with room to grow without a wide NAS?

    In the cloud (S3) or offline (unpowered HDDs, tapes, or optical media), I suppose. Most people just don't store that much content.

    > So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.

    What kind of power bill are you talking about? I'd expect the drives to be about 10W each steady state (more when spinning up), so 120W for the 12 of them. I'd expect a lower-power motherboard/CPU running near idle to be another 40W (or less). If you have a 90% efficient PSU, then maybe 180W in total.

    If you're way more than that, you can probably swap out the old enterprisey motherboard/RAM/CPU/PSU for something more modern and do a lot better. Maybe in the same case.

    I'm learning 1U is pretty unpleasant though. E.g. I tried an ASRock B650M-HDV/M.2 in a Supermicro CSE-813M. A standard IO panel is higher than 1U. If I remove the IO panel, the motherboard does fit...but the VRM heatsink was also high enough that the top cover bows a bit when I put it on. I guess you can get smaller third-party VRM heatsinks, but that's another thing to deal with. The CPU cooler options are limited (the Dynatron A42 works, but it's loud when the CPU draws a lot of power). 40mm case fans are also quite loud when moving the required airflow. You can buy Noctuas or whatever, but they won't really keep it cool; the ones that actually do spin very fast and so are very loud. You must have noticed this too, although maybe you have a spot for the machine where you don't hear the noise all the time.

    I'm trying 2U now. I bought and am currently setting up an Innovision AS252-A06 chassis: 8 3.5" hot swap bays, 2U, 520mm depth. (Of course you can have a lot more drives if you go to 2.5" drives, give up hot swap, and/or have room for a deeper chassis.) Less worry about if stuff will fit, more room for airflow without noise.

    • master_crab 2 days ago

      2U is definitely better, but I didn’t notice significant drops in dB till I could stuff a 120mm fan in the case. That requires 3U or more.

      And if you need a good fan that’s quiet enough for the CPU, you’re looking at 4U. Otherwise, you’ll need AIOs hooked up to the aforementioned 120s.

      • scottlamb a day ago

        > And if you need a good fan that’s quiet enough for the CPU, you’re looking at 4U.

        Depends on the CPU, I imagine. I'm using one with a 65W TDP. I'm hopeful that I can cool that quietly with air in 2U, without having to nerf it with lower BIOS settings. Many NASs have even lower power CPUs like the Intel N97 and friends.

        • master_crab a day ago

          Oh yes, you can definitely get away with much less for something like that, or ARM, Ryzen embedded chips, etc. The 4U is more for full-scale desktop CPUs like the i9-12900K I am running (with something like an NH-D15 sink/fan). You may even be able to get away with passive cooling at the 65W range.

          • scottlamb 3 hours ago

            > You may even be able to get away with passive cooling at the 65W range.

            I saw there's a "passive" Dynatron A43, which even claims to handle up to 155W. My understanding is that most/all server motherboards will have the socket oriented so the fins are front-to-back and the RAM is off to the side. And then you have chassis fans blowing air front-to-back, so I think they basically double as the CPU fan. (Which is also what the older motherboard that came in my CSE-813M did.) I air-quoted passive because I think it needs those chassis fans, but there's not one on the CPU anyway. And I'm not sure I completely trust the A43's rating, but with this setup I think it'd be fine for my 65 W TDP CPU at least.

            On the other hand, I'm using a cheap gaming motherboard with fins sideways, RAM blocking the front-to-back airflow. My gut says that Dynatron A43 wouldn't do well. I don't understand why this orientation is desirable for desktops; my conspiracy theorist side says they make the consumer ones this way so they won't eat into the rack-mounted server market share. I am kinda tempted to get a server motherboard for this and IPMI (and/or at least serial port-accessible BIOS), but I started by looking at budget NASes and things have already spiraled a bit from there.

  • dragontamer 2 days ago

    I have to imagine that the best NAS build is simply a 6-core or 8-core standard AMD or Intel with a few HBA controllers and maybe 10Gbit SFP+ fiber or something.

    "Old server hardware" for $300 is a bit of a variation, in that you're just buying something from 5 years ago so that its cheaper. But if you want to improve power-efficiency, buy a CPU from today rather than an old one.

    --------

    IIRC, the "5 year old used market" for servers is particularly good because many datacenters and companies opt for a ~5-year upgrade cycle. That means 5-year-old equipment is always being sold off at incredible rates.

    Any 5-year-old server will obviously have all the features you need for a NAS (likely excellent connectivity, expandability, BMC, physical space, etc. etc.). You just have to put up with the power-efficiency specs of 5 years ago.

    • asmor 2 days ago

      For AMD Zen, there's a power consumption overhead on all chiplet designs: even if the chip only has one core complex, the separate IO die makes it hard to get idle power consumption under 30W.

      Usually the chips with explicitly integrated GPUs (G-suffix, or laptop chips) are monolithic and can hit 10W or lower.

    • hypercube33 2 days ago

      Dell R500 series is very good for dense storage at low cost if you lean toward SATA or NL-SAS

  • _kb a day ago

    There's a bit of a trend of vendors packaging mobile CPUs in desktop form factor which are a good candidate for this. Rather than the prebuilt mini PCs this also includes mini-ITX boards. Personally I use the Minisforum BD795i SE, but there are others too.

    Check for PCIe bifurcation support. If that's there you can pop in a PCIe to quad M.2 adapter. That will split a PCIe x16 slot into 4 x M.2s. Each of those (and the M.2s already on the motherboard) can then be loaded with either an NVMe drive or an M.2 to SATA adapter, with each adapter providing 6 x SATA ports. That setup gives a lot of flexibility to build out a fairly extensive storage array with both NVMe and spinning platters and no USB in sight.

    As a nice side effect of the honestly bonkers amount of compute in those boards, there's also plenty of capacity to run other VM workloads on the same metal, which lets a lot of the storage access happen locally rather than over the network. For me, that means the on-board 2.5GbE NIC is more than fine, but if not you can also load M.2 to 10GbE adapters as needed.

    • aftbit a day ago

      This sounds like a really nice setup. Which M.2 to SATA adapters are you using? I've heard some of those are dodgy and others are alright.

      • _kb 21 hours ago

        I don’t at the moment. This setup is new and my current hot storage needs are pretty minimal, so I’m all in on NVMe. When that changes, though, that's the expansion plan. ASM1166-based boards seem to be an ok choice, but I don't have any personal recs there (yet).

      • toast0 a day ago

        I've not used any of them, but from my shopping some of them are multiport SATA adapters, and some of them are a single port SATA adapter plus a SATA port multiplier. I would expect the port multiplier variants to be dodgier.

  • adrian_b 2 days ago

    If you want instant access to any bit of the 100 TB content, you need a wide NAS.

    Otherwise, you can have a couple of HDD racks in which you can insert HDDs when needed (SATA allows live insertion and extraction, like USB).

    Then you have an unlimited amount of offline storage, which can be accessed in a minute by swapping HDDs. You can keep an index of all files stored offline on the SSD of your PC, for easy searches without access to HDDs. The index should have all relevant metadata, including content hashes, for file integrity verification and for duplicate files identification.
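
    For example, a minimal sketch of such an index with GNU coreutils (the mount point and index paths here are made up):

        # hash every file on an offline disk into a per-disk index
        find /mnt/offline-disk01 -type f -exec sha256sum {} + > ~/indexes/disk01.sha256

        # after re-inserting the disk at the same mount point: verify integrity
        sha256sum --check --quiet ~/indexes/disk01.sha256

        # find duplicate content across all indexed disks
        cut -d' ' -f1 ~/indexes/*.sha256 | sort | uniq -d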

    Having 2 HDD racks instead of just 1 allows direct copies between HDDs and doubles the capacity accessible without swapping HDDs. Adding more than 2 adds little benefit. Moreover, some otherwise suitable MBs have only 2 SATA connectors.

    Or else you can use an LTO drive, which is a very steep initial investment, but its cost is recovered after a few hundred TB by the much cheaper magnetic tapes.

    Tapes have a worse access time, of the order of one minute after tape insertion, but they have much higher sequential transfer speeds than cheap SATA HDDs. Thus for retrieving big archive files or movies they save time. Transfers from magnetic tape must be done either directly to an NVME SSD or to an NVME SSD through Ethernet of 10 Gb/s or faster, otherwise their intrinsic transfer speed will not be reached.
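
    To illustrate, tape is normally fed through a large RAM buffer so the drive can keep streaming instead of stopping and restarting; a sketch with tar and mbuffer (device name and buffer size are assumptions):

        # write: keep the LTO drive streaming by buffering 4G in RAM
        tar -cf - /data/archive | mbuffer -m 4G -P 90 -o /dev/nst0

        # restore: land on fast NVMe storage so the tape sets the pace
        mbuffer -i /dev/nst0 | tar -xf - -C /mnt/nvme/restore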

  • toast0 2 days ago

    If you want 100TB, you need a bigger NAS than most, and that makes most of the DIY NAS not so good. 2-4 drives seems to be where DIY shines. These days motherboards often stop at 4x sata, so you'll need a HBA or USB (eww).

    Personally, I just don't have that much data; 24TB mirrored for important data is probably enough, and I have my old mirror set available for media like recorded TV and maybe DVDs and Blu-rays, if I can figure out a way to play them that I like better than just putting the discs in the machine.

    • asmor 2 days ago

      We run 48TB (after redundancy, 3 striped mirrors) over a USB enclosure (TerraMaster D6-320) and it's honestly not as bad as people say. The only failure this system experienced in the past few years was due to noisy power causing a reset, and the ZFS root (not the data pool) becoming read only due to a write hole caused by a consumer NVMe (Crucial P3 Plus) lying about being synced (who could've expected that).

  • throawayonthe a day ago

    check out the Jonsbo N5 NAS case, you can toss 12 drives and a low power mITX motherboard (see sibling comments) in it for a cheap-ish neat-ish box with a not-proprietary upgrade path

  • willis936 2 days ago

    Uhh could you provide a hook for such a deal? I've been starving for more storage and can now handle a rack mounted system but have been avoiding dropping $1000 on a pair of new hard drives.

    • serf 2 days ago

      I just missed an ebay opportunity to get a dell r730xd with 12x 12tb drives for around 400 dollars.

      if you're willing to wait and bid-snipe you can find deals like that routinely; just wait to find one with the size drives you want.

      if you just need the drives, similar lot sales are available for enterprise drives with high power-on time but zero errors. I bought a lot of 6x 6TB drives two weeks ago for 120 USD and they all worked fine. If you have the bay space and a software solution that lets you swap them in and out as needed without disturbing data, then there is a lot of 'hobby fun' to be had with managing a storage rack.

      • willis936 2 days ago

        I have a case with several 3.5" bays and a truenas server happily running. I've been running an all-flash array because I had a bright-eyed vision of the future. At this point a very cheap pile of unreliable spinning rust is exactly what I need. Thanks for the tips.

  • behringer 2 days ago

    A fortune? I'm getting 14tb SAS drives "recertified" on ebay for 150usd. Substantially less than most other sources of hard drives.

    Depending on your drive enclosure it should also be able to power down drives that aren't actively being used.
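
    If the enclosure passes ATA commands through, you can also ask the drives themselves to spin down when idle; a sketch (device name is a placeholder):

        # spin down after 20 minutes idle; -S values 1-240 mean n*5 seconds
        hdparm -S 240 /dev/sdX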

    Recertified/used enterprise equipment is the only way to affordably host 100s of terabytes at home.

leobg 2 days ago

I was about to buy a NAS. I find the idea of using an old laptop instead interesting. Especially since it comes with UPS built in.

The author is using a ThinkPad T430.

Any experiences?

  • beala 2 days ago

    The official TrueNAS docs recommend against using USB drives [1]. My understanding is that between the USB controller, flaky connectors and cables, and usb-to-sata bridges of varying quality, there are just too many unknowns to guarantee a reliable experience. For example, I’ve heard that some usb-to-sata controllers will drop commands and not report SMART data. That said, there are of course many people on the internet who have thrown caution to the wind and report that it’s working fine for them.

    Personally I’m in the process of building a NAS with an old 9th gen Intel i5. Many mobos support 6 SATA ports and three mirrored 20 TB pairs is enough storage for me. I’m guessing it’ll be a bit more power hungry than a ugreen/synology/etc appliance but there will also be plenty of headroom for running other services.

    [1] https://www.truenas.com/docs/core/13.0/gettingstarted/coreha...

    • bluedino 2 days ago

      I've had the same thing - random disconnects etc. - from various USB hard drives and SSDs over the years.

    • bakugo 2 days ago

      These shucked USB adapters from WD Elements external drives are pretty reliable, from my experience. They kinda have to be, since otherwise it would affect the reputation of WD's external drives as a whole.

      Obviously, direct SATA is still better if possible, but if not, these are probably the next best thing.

      • riobard a day ago

        Those pesky WD bridges usually support only the older USB Bulk-Only Transport protocol but not UASP, resulting in worse performance and higher CPU usage.

        Also HDD power management is often complicated by the bridge chip sometimes intervening.

        Not recommended for long-term use.
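
        One way to check what a given bridge negotiated on Linux (a sketch): UASP-capable bridges bind to the "uas" kernel driver, while BOT-only ones fall back to "usb-storage".

            lsusb -t | grep -E 'uas|usb-storage'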

    • mannyv 2 days ago

      Been using like 7 external usb drives with 40-50tb total for a few years with no issues. Not raid, just backing up drive to drive. No controller or drive issues. Mix of seagate and wd 8/12/16gb.

      I hate blanket recommendations like this by docs. To me, it just sounds like some guy had a problem a few times and now it's canon. It's like saying "avoid Seagate because their 3tb drives sucked." Well they did, but now they seem to be fine.

      • Yokolos 2 days ago

        What may work anecdotally can't necessarily be used for official recommendations for a large range of users across an unknown range of hardware configurations. If it works for you, that's fine. That isn't sufficient to make a general statement that everybody will be fine using external USB drives, particularly for RAID, especially when people will then make you responsible if something goes wrong for not making sufficiently safe recommendations. You understand that, right?

      • zettabomb 2 days ago

        RAID is much different. You can try it over USB, but you won't have a good time. TrueNAS is primarily talking about RAID users.

        • beala 2 days ago

          Yes I should have specified that this advice is specific to RAID configurations in NAS applications.

          If you're occasionally copying data to an external USB drive, that's totally fine. That's what they were designed for.

          The issue is that they were not designed for continuous use, or much more demanding applications like rebuilding/resilvering a drive. It's during these applications that issues occur, which is a double whammy, because it can cause permanent data loss if your USB drive fails during a recovery operation. I did a little more research after posting my last comment and came across this helpful post on the TrueNAS forums going into more depth: https://forums.truenas.com/t/why-you-should-avoid-usb-attach...

          • jcalvinowens 2 days ago

            YMMV. I have a 4-drive 20TB mdraid10 across two different $50 USB3.0 2-drive enclosures, I've read petabytes off this array with years of uptime and absolutely zero problems. And it runs on one of those $300 off brand NUCs. The 2.5G NIC is the bottleneck on reads.
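
            For reference, a sketch of creating a similar 4-drive mdraid10 (device names are placeholders, not necessarily the parent's exact setup):

                mdadm --create /dev/md0 --level=10 --raid-devices=4 \
                    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1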

      • cerved 2 days ago

        Is that with ZFS or something else?

        Mainly I wouldn't do it because if there's space and SATA ports, it seems stupid. Hotter. Worse HW.

        Can't really see much good reason to do it tbh, except that it's in a small hot case which is relatively easy to move around. Maybe if you do occasional backups and you don't care about scrubbing and redundancy? Otherwise why not shuck them and throw them in a case?

      • faust201 2 days ago

        you say

        > 40-50tb total

        > 8/12/16gb

        How many drives are those?

        You are kidding.

  • cerved 2 days ago

    I own this and it's worth its weight in gold https://www.supermicro.com/en/products/motherboard/A2SDi-H-T...

    Yes. It's pricey but it's never been a problem. It can connect like 12 HDDs, takes 256GB of RAM, has 10GbE, and runs at a tiny TDP. Has IPMI. Fits in a tiny case.

    The only issue I had with this motherboard was that it was difficult to find someone who sold it. Love it

    Also I don't see the built-in UPS. The external drives still use external power

    • sedawkgrep 2 days ago

      That's an amazing board. I had no idea something like this existed.

    • smartbit 2 days ago

      How much power does it use?

      • cerved 2 days ago

        Not a lot. Idk 25W TDP for the 16 core?

    • cyberax 2 days ago

      Wow. It even has ECC!

  • reliablereason a day ago

    The laptop batteries tend to go bad (either just stop working, or expand and become a major fire hazard) after a year or two, as they are not built to be fully charged for years on end. I tried doing it twice and that is what happened both times.

    Would not recommend; if you want a UPS just buy one, the small ones are not that expensive, like 70 USD.

    • hddherman a day ago

      On my ThinkPad T430, I have a weekly full discharge cycle set up using "tlp recalibrate BAT0"; it helps avoid that issue and confirms that the battery is still functional.

    • dapperdrake a day ago

      On Thinkpads tlp(8) can set a maximum battery charge threshold of 80%. The embedded controller takes care of it. Never had problems.

      Makes batteries live way longer.
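
      A sketch of the relevant /etc/tlp.conf settings (75/80 is a common choice, and threshold support depends on the model):

          # start charging below 75%, stop charging at 80%
          START_CHARGE_THRESH_BAT0=75
          STOP_CHARGE_THRESH_BAT0=80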

    • muro 7 hours ago

      After a year or two is exaggerating - it's rare to see issues within 5 years.

  • tombert 2 days ago

    I don't use a laptop, but I use something fairly adjacent: the Beelink SER6 (https://www.amazon.com/Beelink-4-75GHz-PCIe4-0-Supports-HDMI...), which is basically a gaming laptop converted into a small desktop. For the most part, it has actually been pretty great. It's quiet, has a CPU that is much better than I expected, and a decent enough GPU to do hardware transcoding for Jellyfin without much issue.

    I use USB chassis of hard drives to work as the "NAS" part, and it works fairly well, and this box is also my router (using a 10 GbE thunderbolt adapter) though my biggest issue comes with large updates in NixOS.

    For reasons that are still not completely clear to me, when I do a very large system update (rebuilding Triton-llvm for Immich seems to really do it), the internal network will consistently cut out until I reboot the machine. I can log in through the external interface with Tailscale and my phone, so the machine itself is fine, but for whatever reason the internal network will die.

    And that's kind of the price you pay for using a non-server to do server work. It will generally work pretty well, but I find that it does require a bit more babysitting than a rack mount server did.

    • dontlaugh a day ago

      There are also variants of these mini PCs with hard drive bays. I recently bought an Aostar WTR Pro and I’d considered the Ugreen competitor.

      • tombert a day ago

        Yeah, though I have 24 drives so I think by definition I couldn't really have a "mini" with enough bays to handle that.

  • tw04 2 days ago

    If you don’t need any performance it’s a great backup strategy. If your only way of connecting the drives to the laptop is USB I would be concerned about data integrity if it’s important data.

    • amelius 2 days ago

      Why is USB so bad at data integrity? Doesn't it have error detection/correction? If not, that sounds like a huge design flaw.

      • beagle3 2 days ago

        Individual writes are safe, in my experience with thousands of USB drives in many configurations, some with 12 2TB drives hanging off multiple USB hubs at the same time.

        However, there are disconnects/reconnects every now and then. If you use a standard raid over these usb drives, almost every disconnect/reconnect will trigger a rebuild — and rebuilds take many hours. If you are unlucky enough to have multiple disconnects during a rebuild, you are in trouble.
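
        For mdraid specifically, one mitigation (a sketch): an internal write-intent bitmap turns the full rebuild after a reconnect into a short resync of only the dirty regions.

            # add a write-intent bitmap to an existing array
            mdadm --grow --bitmap=internal /dev/md0

            # after a transient disconnect, re-add the dropped member
            mdadm /dev/md0 --re-add /dev/sdX1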

        • amelius 2 days ago

          I've had bitflips with USB transfers of 1-10TB. I don't remember the specifics, but my personal confidence in USB is low.

  • phil21 2 days ago

    I ran an old Thinkpad as a home router and small home server/NAS device for quite a long time, usually swapping out my old work upgrades every 3 years or so.

    They all had onboard gige so it worked fine - native vlan for the inbound Comcast connection, tagged vlans out to a switch for the various LAN connections.

    They were from the era of DVD drives so I was able to put an extra HDD in the DVD slot to expand storage with. One model even had an eSATA port.

    They worked great. Built-in UPS and they come with a reliable crash cart built-in!

  • whazor 2 days ago

    When I used a laptop as a server, the battery became a spicy pillow. I think laptops are not designed to run continuously and at warmer temperatures than normal.

    • dapperdrake a day ago

      Thinkpads can use tlp to cap battery charge at 80%. It works.

  • IgorPartola 2 days ago

    For me it was important to have ECC RAM and laptops pretty much never have that. My personal recommendation is an old IBM/Lenovo workstation tower as the base. I bought one for $35 on eBay and added $40 of RAM (32GB). A $10 UPS from Goodwill with a $25 battery from Amazon, and whatever hard drives you want. I run Ubuntu and ZFS on it but next time would probably opt for FreeBSD for a nicer OS.

    • yyhhsj0521 2 days ago

      Why is it important for a NAS to have ECC RAM?

      • anjel 2 days ago

        When bitrot happens, ECC catches it, where non-ECC doesn't

      • bombela a day ago

        So that you don't lose your data to random bit flips from cosmic rays.

  • dheera 2 days ago

    > I was about to buy a NAS.

    The UNAS Pro 8 just came out and I'm thinking about getting it, switching away from my aging Synology setup ... only thing I wish it had was a UPS server as my Synology currently serves that purpose to trigger other machines to shut down ...

    • VTimofeenko 2 days ago

      I believe Synology's UPS monitoring is based on nut-server[1]. In my setup, I am running the server on a separate machine that reads UPS state over USB and Synology is just a client. Maybe UNAS could also just work as a client.
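
      A sketch of the client side in upsmon.conf ("myups", "nut-host", and the credentials are placeholders):

          # poll the NUT server over the network; shut down on low battery
          MONITOR myups@nut-host 1 upsmon_user secret slave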

      [1]: https://networkupstools.org/

    • ericd 2 days ago

      I'm considering doing the same, I guess one would basically just be splitting functions, a dedicated NAS, and a dedicated server for all the functions that Synos tend to perform (generally not very well, but at least with pretty low power usage).

    • Xss3 2 days ago

      I think they just released some new prosumer UPSes.

  • dapperdrake 2 days ago

    Used a Lenovo X220T with a cracked screen and missing keyboard a few years back. Worked like a champ (as a server). Cooling was much better without the keyboard.

  • hypercube33 2 days ago

    I see a lot of people using M710 mini desktops - I think you can pop a PCIe 10GbE card in and an M.2 SATA card and 3D print a disk stand?

  • m2has 2 days ago

    I’ve used a P51 for about a year now with no issues. I initially bought a 6-bay DAS, but I’ve since moved to pure SSD storage inside the laptop.

  • nicman23 a day ago

    i had an m.2 to pci-e adapter for a sata controller. worked fine, but the ups thing is a bit unworkable as the drives are not powered by the laptop

speedgoose 2 days ago

I admire the courage to store data on refurbished Seagate hard drives. I prefer SSD storage with some backups using cloud cold storage, because I’m not the one replacing the failing hard drives.

  • Aurornis 2 days ago

    I would also prefer having a large number of high capacity SSDs so I could replace my spinning hard drives.

    But even the cheapest high-capacity SSD deals are still a lot more expensive than a hard drive array.

    I’ll continue replacing failing hard drives for a few more years. For me that has meant zero replacements over a decade, though I planned for a 5% annual failure rate and have a spare drive in the case ready to go. I could replace a failed drive from the array in the time it takes to shut down, swap a cable to the spare drive, and boot up again.

    SSDs also need to be examined for power loss protection. The results with consumer drives are mixed and it’s hard to find good info about how common drives behave. Getting enterprise-grade drives with guaranteed PLP from large onboard capacitors is ideal, but those are expensive. Spinning hard drives have the benefit of using their rotational inertia to power the drive long enough to finish outstanding writes.

    • oceanplexian 2 days ago

      This is going to be a huge anecdote, but all the consumer SSDs I've had have been dramatically less reliable than HDDs. I've gone through dozens of little SATA and M.2 drives and almost every single one of them has failed when put into any kind of server workload. However, most of the HDDs I have from the last 10 years are still going strong despite sitting in my NAS and spinning that entire time.

      After going deep on the spec sheets and realizing that all but the best consumer drives have miserably low DWPD numbers I switched to enterprise (U.2 style) two years ago. I slam them with logs, metrics data, backups, frequent writes and data transfers, and have had 0 failures.

      • smartbit 2 days ago

        What file system are you using? ZFS is written with rotating rust in mind and presumably will kill non-enterprise SSDs.

    • cm2187 2 days ago

      You can find cheap used enterprise SSDs on ebay. But the problem is that even the most power efficient enterprise SSD (SATA) idle at like 1w. And given the smaller capacities, you need many more to match a hard drive. In the end HDD might actually consume less power than an all flash array + controllers if you need a large capacity.

      • userbinator 2 days ago

        Used SSDs, especially enterprise ones, are a really bad idea unless you get some really old SLC parts. Flash wears out in a very obvious way that HDDs don't, and keep in mind that enterprise-rated SSDs are deliberately rated to sacrifice retention for endurance.

        • cm2187 2 days ago

          Agree on SSDs for cold storage, that's not a good idea. But you would be surprised by how lightly used typical used enterprise SSDs on ebay are. This article matches my experience:

          https://www.servethehome.com/we-bought-1347-used-data-center...

          I bought over 200 over the last year; the average remaining wear level was 96%, and 95% of them were above 88%.

          • userbinator a day ago

            Endurance and retention are inversely correlated, and as I mentioned in my original comment, enterprise DC drives are designed to advertise the former at the expense of the latter. The industry standard used to be 5 years retention for consumer and 3 months for enterprise, after reaching the specified TBW. The wear level SMART counter reflects that; "96% remaining" on an enterprise drive may be 40% or less on a consumer one having written the same amount, since the latter is specified to hold the data for longer once its rating has been reached.

            • cm2187 a day ago

              Retention is offline retention, not online. So I'm not sure what point you are trying to make. If it is that SSDs shouldn't be used for cold storage, yeah, I agree, and enterprise SSDs aren't designed for cold storage. But you seem to be linking retention to TBW, which are largely orthogonal metrics. If you are going to use the SSDs in a NAS, which by definition is running all the time, why would you even care about the retention rating?

    • dleeftink 2 days ago

      Curious, what's the use case for wanting your data backed-up without fail? Is it personal archives or otherwise (business) archive related?

      Not to say you shouldn't back up your data, but personally I wouldn't be too affected if one of my personal drives errored out, especially if they contained unused personal files from 10+ years ago (legal/tax/financials are another matter).

      • EvanAnderson 2 days ago

        Any data I created, paid to license, or put in significant work to gather has to be backed up following the 3-2-1 rule. Stuff I can download or otherwise obtain again is best-effort but not mandatory backup.

        Mainly I don't want to lose anything that took work to make or get. Personal photos, videos, source code, documents, and correspondence are the highest priority.

  • LorenPechtel 2 days ago

    RAID. Preferably RAID 6. Much, much better to build a system to survive failure than to prevent failure.

    • dragontamer 2 days ago

      Don't RAID these days. Software won rather drastically, likely because CPUs are finally powerful enough to run all those calculations without much of a hassle.

      Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc. etc are "just better" than traditional RAID.

      Yes, focus on 2x parity drive solutions, such as ZFS's "raidz2", or other such "equivalent to RAID6" systems. But just focus on software solutions that more easily allow you to move hard drives around without tying them to motherboard-slots or other such hardware issues.
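
      For instance, a minimal raidz2 sketch (pool name and disk IDs are placeholders) that addresses disks by stable IDs rather than slot order:

          zpool create tank raidz2 \
              /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
              /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4 \
              /dev/disk/by-id/ata-disk5 /dev/disk/by-id/ata-disk6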

      • lproven 2 days ago

        > Don't RAID these days. Software won rather drastically

        RAID does not mean or imply hardware RAID controllers, which you seem to incorrectly assume.

        Software RAID is still 100% RAID.

        • dragontamer 2 days ago

          And 'softRAID', like what comes for free on Intel or AMD motherboards, sucks and should be avoided.

          ------

          The best advice I can give is to use a real solution like ZFS, Storage Spaces and the like.

          It's not sufficient to say 'use RAID', because within the Venn diagram of things falling under RAID is a whole bunch of shit solutions and awful experiences.

          • lproven a day ago

            I haven't seen a machine shipped with a hardware RAID controller in decades.

            It's still enabled in the firmware of some vendors' laptops -- ones deep in Microsoft's pockets, like Dell, who personally I would not touch unless the kit were free, but gullible IT managers buy the things.

            My personal suspicion is that it's an anti-Linux measure. It's hard to convert such a machine to AHCI mode without reformatting unless you have more clue than the sort of person who buys Dell kit.

            In real life it's easy: set Windows to start in Safe Mode, reboot, go into the firmware, change RAID mode to AHCI, reboot, exit Safe Mode.

            Result, Windows detects a new disk controller and boots normally, and now, all you need to do is disable Bitlocker and you can dual-boot happily.

            However that's more depth of knowledge than I've met in a Windows techie in a decade, too.

      • f_devd 2 days ago

        FYI XFS is not redundant, also RAID usually refers to software RAID these days.

        I like btrfs for this purpose since it's extremely easy to setup over cli, but any of the other options mentioned will work.

        • zozbot234 2 days ago

          btrfs RAID is quite infamous for eating your data. Has it been fixed recently?

          • lproven a day ago

            To be fair, your statement could be edited as follows to increase its accuracy:

            > btrfs is quite infamous for eating your data.

            This is the reason for the slogan on the bcachefs website:

            "The COW filesystem for Linux that won't eat your data".

            https://bcachefs.org/

            After over a decade of in-kernel development, Btrfs still can't either give an accurate answer to `df -h`, or repair a damaged volume.

            Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

            IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.

            The fact that its RAID is even more unstable merely seals the deal.

            • f_devd a day ago

              > Btrfs still can't either give an accurate answer to `df -h`, or repair a damaged volume.

              > In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

              While I get the frustration, I think you could have probably resolved both of them by reading the manual. Btrfs separates metadata & regular data, meaning if you create a lot of small files your filesystem may be 'full' while still having data space available; `btrfs f df -h <path>` would give you the breakdown. Since everything is journaled & CoW it will disallow most actions to prevent actual damage. If you run into this you can recover by adding an additional disk for metadata (can just be a loopback image), rebalancing, and then taking steps to resolve the root cause, finally removing the additional disk.

              May seem daunting but it's actually only about 6 commands.
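
              Roughly this (a sketch; image size and mount point are made up):

                  truncate -s 8G /tmp/spare.img          # scratch space for metadata
                  losetup /dev/loop9 /tmp/spare.img
                  btrfs device add /dev/loop9 /mnt/vol
                  btrfs balance start -m /mnt/vol        # rebalance just the metadata
                  # ...free up space / fix the root cause, then:
                  btrfs device remove /dev/loop9 /mnt/vol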

              • lproven 11 hours ago

                Hi. My screen name is my real name, and my experience with Btrfs stems from the fact that I worked for SUSE for 4 years in the technical documentation department.

                What that means is I wrote the manual.

                Now, disclaimer, not that manual: I did not work on filesystems or Btrfs, not at all. (I worked on SUSE's now-axed-because-of-Rancher container distro CaaSP, and on SLE's support for persistent memory, and lots of other stuff that I've now forgotten because it was 4 whole years and it was very nearly 4 years ago.)

                I am however one of the many people who have contributed to SUSE's excellent documentation, and while I didn't write the stuff about filesystems, it is an error to assume that I don't know anything about this. I really do. I had meetings with senior SUSE people where I attempted to discuss the critical weaknesses of Btrfs, and my points were pooh-poohed.

                Some of them still stalk me on social media and regularly attack me, my skills, my knowledge, and my reputation. I block them where I can. Part of the price of being online and using one's real name. I get big famous people shouting that I am wrong sometimes. It happens. Rare indeed is the person who can refute me and falsify my claims. (Hell, rare enough is the person who knows the difference between "rebut" and "refute".)

                So, no, while I accept that there may be workarounds that a smart human may be able to do, I strongly suspect that these things are not accessible to software, to tools such as Zypper and Snapper.

                In my repeated direct personal experience, using openSUSE Leap and openSUSE Tumbleweed, routine software upgrades can fill up the root filesystem. I presume this is because the packaging tools can't get accurate values for free space, probably because Btrfs can't accurately account for space used or about to be used by snapshots, and a corrupt Btrfs root filesystem can't be turned back into a valid consistent one using the automated tools provided.

                Which is why both SUSE's and Btrfs's own docs say "do not use the repair tools unless you are instructed to by an expert."

          • cerved 2 days ago

            No. RAID 5/6 is still fundamentally broken and probably won't get fixed

            • f_devd a day ago

              This is incorrect, quoting Linux 6.7 release (Jan 2024):

              "This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation."

              I've not kept up with more recent releases, but there has been progress on the issue

          • f_devd 2 days ago

            I believe RAID5/6 is still experimental (although I believe the main issues were worked out in early 2024), I've seen reports of large arrays being stable since then. It's still recommended to run metadata in raid1/raid1c3.

            RAID0/1/10 has been stable for a while.

      • LorenPechtel 21 hours ago

        Software or hardware, it's still the same basic concept.

        Redundancy rather than individual reliability.

  • mvanbaak 2 days ago

    I have a dozen refurbished Exos disks in my storage machine. Works super! SSDs for bigger storage are simply too expensive

  • stirlo 2 days ago

    And I prefer to have a healthy bank account balance.

    Storing 18TB (let alone with raid) on SSDs is something only those earning Silicon Valley tech wages can afford.

    • arjie 2 days ago

      We bought a few Kioxia 30.72 TiB SSDs for a couple of thousand in a liquidation sale. Sadly, I don't work there any more or I could have looked it up. U.2 drives if I recall, so you do need either a PCIe card or the appropriate stuff on your motherboard but pretty damn nice drives.

    • patrakov 2 days ago

      Not really. I know that my sleep is worth more than the difference between HDD and SSD prices, and I know the difference between the failure rates and the headache caused by the RMA process, so I buy SSDs.

      In essence, what we together are saying is that people with super-sensitive sleep that are also easily upset, and that don't have ultra-high salaries, cannot really afford 18 TB of data (even though they can afford an HDD), and that's true.

      • gambiting 2 days ago

        Well, again, well done on being able to afford it. I have 24TB array on cheap second hand drives from CEX for about £100 each, using DrivePool - and guess what, if one of them dies I'll just buy another £100 second hand drive. But also guess what - in the 6 years I had this setup, all of these are still in good condition. Paying for SSDs upfront would have been a gigantic financial mistake(imho).

  • cm2187 2 days ago

    Might be a bit adventurous for primary storage (though with enough backup and redundancy, why not). But seems perfect for me for backup / cold storage.

  • jabart 2 days ago

    Every drive is "used" the moment you turn it on.

    • malfist 2 days ago

      There's a big difference between used as in I just bought this hard drive and have used it for a week in my home server, and used as in refurbished drive after years of hard labor in someone else's server farm

      • jabart 2 days ago

        Enterprise drives are way different than anything consumer-based. I wouldn't trust a consumer drive used for 2 years, but a true enterprise drive has like millions of hours left of its life.

        Quote from Toshiba's paper on this. [1]

        Hard disk drives for enterprise server and storage usage (Enterprise Performance and Enterprise Capacity Drives) have MTTF of up to 2 million hours, at 5 years warranty, 24/7 operation. Operational temperature range is limited, as the temperature in datacenters is carefully controlled. These drives are rated for a workload of 550TB/year, which translates into a continuous data transfer rate of 17.5 Mbyte/s[3]. In contrast, desktop HDDs are designed for lower workloads and are not rated or qualified for 24/7 continuous operation.

        From Synology

        With support for 550 TB/year workloads and rated for a 2.5 million hours mean time to failure (MTTF), HAS5300 SAS drives are built to deliver consistent and class-leading performance in the most intense environments. Persistent write cache technology further helps ensure data integrity for your mission-critical applications.

        [1] https://toshiba.semicon-storage.com/content/dam/toshiba-ss-v...

        [2] https://www.synology.com/en-us/company/news/article/HAS5300/...

        • malfist 2 days ago

          Take a look at backblaze data stats. Consumer drives are just as durable, if not more so than enterprise drives. The biggest thing you're getting with enterprise drives is a longer warranty.

          If you're buying them from the second-hand market, you likely don't get the warranty (and that is likely why they're on the second-hand market)

        • Spooky23 2 days ago

          There isn’t a significant difference between "enterprise" and "consumer" in terms of fundamental characteristics. They have different firmware and warranties, and the disks are usually tested more methodically.

          Max operating range is ~60C for spinning disks and ~70C for SSDs. Optimal is <40-45C. The larger operators' facilities afaik tend to run as hot as they can.

        • kvemkon 2 days ago

          > drive has like millions of hours left of its life.

          It doesn't apply to a single drive, only to a large number of drives. E.g. if you have 100,000 drives (2.4 million hours MTTF) in a server building with the required environmental conditions and maximum workload, be prepared to replace a drive about once a day on average (100,000 drives / 2,400,000 h MTTF ≈ 0.04 failures per hour, i.e. roughly one per day).

      • deodar 2 days ago

        Drive failure rate versus age is a U-shaped curve. I wouldn't distrust a used drive with healthy performance and SMART parameters.

        And you should use some form of redundancy/backups anyway. It's also a good idea to not use all disks from the same batch to avoid correlated failures.

    • numpad0 2 days ago

      Returns are known bads.

compsciphd a day ago

anecdote: I've had very bad experience with these OS white label drives, even when marked as new. I've had much better luck shucking USB drives.

4+ years ago I bought 20 "new" (can't validate), "Seagate manufactured" (can't validate) "OS" SAS drives, and 2 started throwing errors in TrueNAS quickly (sadly, after I'd lost the ability to return them). I had another 20 WD and Seagate drives I shucked at the same time (they were going into 3 12x SAS/SATA machines and 1 4x SATA NAS). The NAS got sidelined, as I had to use the SATA drives it was meant to get elsewhere once I no longer trusted the SAS drives, and I wanted to keep the 2 extra drives as backup. Which was a good idea, as over the next 4 years another 2 of the SAS drives started throwing similar errors.

So 20% of the white label drives didn't really last, while 100% of the shucked drives have. What was even worse, the firmware on the "OS" drives was crap: while it "technically" had SMART data, it didn't provide any stats, just passed/not passed. (Main lesson learned from this: don't accept drives that only report SMART pass/fail.)
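
In smartctl terms the difference looks like this (a sketch; device name is a placeholder):

    smartctl -H /dev/sdX    # overall health: just PASSED/FAILED
    smartctl -A /dev/sdX    # full vendor attribute table with raw stats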

Another anecdote: for a long time I wasn't sure what to do with the SAS drives, as in the past I'd used spare drives like this for cold offline storage, but SAS docks were very expensive ($200+). Recently they seem to have come down in price to under $50, so I bought one and was able to fill the drives up, albeit very slowly (it seems they did have problems; I was only getting 10-20MB/s), but at least I was able to validate their contents a few times after that, a bit less slowly (80MB/s).

Aside: 3 weeks ago I had multiple power outages that I thought created problems in one of the shucked drives (I was getting uncorrectable reads, though ZFS handled it ok) and a SMART long test showed pending sectors. But after force-writing all the pending sectors with hdparm, none of the sectors were reallocated. I now think it just had bad partial writes when the power outage hit, so the sectors literally had bad data because the error correcting code didn't match up (which also explains why they were all in blocks of 8), and multiple SMART long tests later and "fingers crossed", everything seems fine.
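
For anyone curious, the force-write dance looks roughly like this (a sketch; the LBA is a placeholder, and --write-sector destroys that sector's contents):

    smartctl -t long /dev/sdX                # the self-test log reports the first bad LBA
    hdparm --read-sector 1234567 /dev/sdX    # confirm the sector is unreadable
    hdparm --yes-i-know-what-i-am-doing --write-sector 1234567 /dev/sdX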

hexagonwin 2 days ago

What exactly are these "white label drives"? Aren't these just normal seagate exos drives with SMART information wiped and labels removed? i.e. just a worse used drive.

  • ndiddy 2 days ago

    The "OS" on the drive stands for "off-spec". As far as I understand, here's where they come from:

    1. A large company (think cloud storage provider or something) wanting to build out storage infrastructure buys a large amount of drives from Seagate.

    2. When the company receives the drives from Seagate, they randomly sample from the lot to make sure the drives are fully functional and meet specifications.

    3. The company identifies issues from the sampled drives. These can range from dents/dings in the casing or torn labels to firmware or reliability issues.

    4. The company returns the entire lot to Seagate as defective. Seagate now doesn't want anything to do with these drives, so they relabel them as "OS" with no Seagate branding and sell them as-is at a discount to drive resellers.

    5. The drive resellers may or may not do further testing on the drives (you can probably tell by how much of a warranty a given reseller offers) before selling them onto people wanting cheap storage.

  • userbinator 2 days ago

    Apparently Seagate drives that weren't good enough to have their own name on them... which given the history of even their branded drives, is something I'd only use for temporary caching of data that's easily regenerated.

  • ghostly_s 2 days ago

    Trying to think of reasons why the manufacturer wouldn't want their name on them and none of them are good. And for not even much of a discount.

  • bluedino 2 days ago

    Weren't shucked drives (removed from enclosures) referred to as White label drives at one point?

sigio a day ago

The reduction in warranty from 5 years to 1 when buying these doesn't make up for the quite limited reduction in price. It would only cover failures during the first year of runtime, and while most drive failures happen either at the beginning or after 5+ years, I've seen enough drives die in years 2-5 to prefer some warranty cover, especially on $200 drives.

t312227 a day ago

hello,

thanks for the great article!!

2 remarks from my side:

* some smartctl -a ... output would have been nice ~ i don't care if it is from "when the drives were shipped" or from any later point in time

* prices are somewhat ... aehm ... let's call them "uncompetitive", at least for where i'm at (austria, central europe, eu)

i compared prices, normalized by cost per TB, with new (!) drives from the austrian price-portal "geizhals"

* https://geizhals.at

for example: for 3.5 inch HDDs sorted by "price / TB"

* https://geizhals.at/?cat=hde7s&xf=5704_3.5%22~5717_SATA%203G...

sometimes the prices are slightly higher for the used (!) drives ... sometimes also a bit lower, but imho (!) not enough to justify buying refurbished drives over new (!) ones ...

just my 0.02€

vintagedave a day ago

What a fascinating website (in general - other articles are worth reading too.)

The author is Estonian; the website name (and his name) 'õunapuu' means 'apple tree'. I love Estonian names: often closely tied to nature.

buckle8017 2 days ago

These drives are very likely refurbs that are unofficial.

White labeling avoids lawsuits.

serf 2 days ago

The way the story led with the belief that the drives were likely going to be untrustworthy made me think the author was going to throw them into a system with multiple redundancies or use them as additional parity drives..

god speed!

econ 2 days ago

OT

> Half of tech YouTube has been sponsored by companies like...

It just struck me that the product reviews are a part of the social realm that is barely explored.

Imagine a video website like TikTok or YouTube etc. where all videos are organized under products: priority to those who purchased the product, and a category ranked by how many similar products you've purchased.

The thing sort of exists currently in some hard to find corner of TEMU etc but there are no channels or playlists.

  • Aurornis 2 days ago

    The reason you don’t see videos arranged by product is because everyone knows not to trust unknown creators telling you how great a product is.

    Viewers want to see opinions from specific people they’ve come to trust, not the first video that comes up for a product.

    • aspenmayer 2 days ago

      Coincidentally or not, those folks who have more subscribers usually charge more for their consideration. That’s why I generally trust Steve of Gamers Nexus more than other folks, because they don’t do ads except for promoting their own products, so there’s no conflict of interest. On the one hand, Gamers Nexus doesn’t manufacture their own hard drives, but on the other, they publish their methodology and have a reputation to uphold, so I would trust their judgement regarding testing computer hardware more than folks who do engage in outside advertising.

    • econ 2 days ago

      They don't have to tell you anything. Just unbox and show what they got.

      I just purchased a bicycle chain cleaning device. It was absurdly cheap. The plastic was extruded poorly, it was hard to assemble, it was not entirely obvious how to use it. However! It did the job and it barely got dirty. I expected it to be full of rusty oil both inside and outside but it accumulated just a tiny smudge on the inlet. If anyone made a video it would be a fantastic product.

      • ghostly_s 2 days ago

        God, the flood of absolutely useless "review" videos Amazon has incentivized customers to shit all over their site which are nothing more than unboxings are the worst thing about that ecosystem. No thank you.

        • econ a day ago

          Think of it like a football channel, a place to contain such things.

          Amazon is just not interested in organizing it properly.

          You should have a look at the river of fresh nonsense that gets uploaded to YouTube. The difference is that Amazon has you look at it as if it were something valuable.

      • noAnswer 2 days ago

        1. You could be that anyone.

        2. The world is filled to the brim with videos about "fantastic products".

    • markerz 2 days ago

      Alternatively, unknown creators have less incentive to falsely promote or lie. It’s the reason I tend to trust random strangers on Reddit more than popular YouTubers who have achieved monetization and sponsorship.

      • Aurornis a day ago

        No, that’s the opposite of how it works.

        I’ve seen how PR firms interact with creators. It’s much easier to get the small time creators to take your product and make a positive video because getting some free product is the biggest payout they’re getting from their channel. They will always give positive reviews because they have more to gain from flattering the companies that send them free stuff than from the $1.50 they’re going to earn in ad money.

        The PR firms who worked with the company I was at had a long list of small time video creators who would reliably produce positive videos of something as long as you sent them a free product. The creators know this game.

    • 9dev 2 days ago

      I don’t trust big channels especially, because I assume they have just sold themselves out to the biggest sponsor. Influencers only exist due to campaign deals, where companies try to sneak their ads into your mind by abusing your inclination to trust another human being. All of it is sickening.

      In comparison, I’d rather read a general review magazine with a long history. At least they don’t try to trick me into believing they are working out of the goodness of their hearts, and they usually aren’t married to a single big sponsor.

      Online reviews are broken beyond repair.

      • ghostly_s 2 days ago

        >I’d rather read a general review magazine with a long history.

        Do any of these still exist?

walrus01 2 days ago

I was hoping for a full text dump of the SMART data from the drives.

  • awaymazdacx5 2 days ago

    If CSPRNG encrypts /dev/urandom, encrypting the data using a binary 256 bit AES cmd to update entropy pool would double contain the data, which is writing /dev/random to /dev/nvme0n1p1.

lofaszvanitt 2 days ago

I never understood why they let Seagate et al play this game with hard drives. If they offer a warranty, then replace the drive with a brand new one, and shove the recertified, fixed-whatever bullshit up your wahzoo.