A custom-designed IDE SSD for old PCs (github.com/dosdude1)
194 points by squarefoot on Jan 23, 2023 | 121 comments


dosdude1 is awesome. He replaced the BGA CPU in my iMac G4 with a 1.7 GHz MC7448 (https://mattst88.com/computers/imacg4/) and made a video about the process:

https://www.youtube.com/watch?v=SnpdLt4OIFs

He's got two videos about this custom IDE SSD on his YouTube channel here:

https://www.youtube.com/watch?v=YrBz-6lXbZQ https://www.youtube.com/watch?v=EMCz0VsEbqc


I'm hoping he makes a video detailing the PCB design and layout. Very impressive project using KiCad!


Can I get a tldr? Can't watch until I get home, but I was looking over the github and I'm curious about how this works...

It sounds like we're supposed to scavenge the NAND and controller off an SSD and solder it to their board... but can I carve 200/500MB disks out of it like other drive emulators, or are we still stuck trying to put a 256GB SSD into a machine that can't support disks that large?

Edit: from my POV the issue isn't "I can't plug an NVMe drive into my IDE-only 286" but rather "I can't find a CF card that's small enough, and that the BIOS is happy enough with to let the system boot"


https://tldp.org/HOWTO/Large-Disk-HOWTO-4.html

Since this is a custom application, I suspect you'd have to figure out where on this list your use case falls and use the appropriate NAND set.
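Most of the ceilings on that list fall out of multiplying the maximum CHS geometry fields. A quick sketch (the function name and 512-byte sector size are the usual conventions, not from TFA):

```python
# A rough sketch of where the classic drive-size limits come from:
# multiply the maximum cylinder/head/sector fields by the sector size
# (512 bytes assumed, as usual).
SECTOR_BYTES = 512

def chs_capacity(cylinders, heads, sectors):
    return cylinders * heads * sectors * SECTOR_BYTES

# Combined early BIOS + IDE limit: 1024 cylinders x 16 heads x 63 sectors
print(chs_capacity(1024, 16, 63) // 2**20, "MiB")   # 504 MiB

# INT 13h limit with full BIOS translation: 1024 x 255 x 63
print(chs_capacity(1024, 255, 63) // 10**6, "MB")   # 8422 MB, i.e. ~8.4 GB
```

The Large-Disk-HOWTO linked above walks through each of these boundaries in detail.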


That's not even a complete list, because there are some old BIOSes that won't boot from a drive that's not part of the "type xx" standard...

Also, some BIOSes only allowed a small handful of those types (completely ignoring IBM because they were so picky you couldn't even replace your floppy/disk controller with another IBM branded one - it had to be the model the PC shipped with or screw you)

Retro do be like that sometimes :)

Edit: the tables in the hard drive bible PDFs really come in handy https://archive.org/details/bitsavers_cscCSCHardn1996_266711...


I think there comes a point where you outsource the problem to something like the XT-IDE Universal BIOS, which does more than what the name says. (It's also configurable for use as an option ROM for machines with normal IDE.)

I'm fond of the SD-IDE adapters because these days CF cards are harder to find, and some are not 100% happy with straight CF-IDE adaptors; the SD-IDE adaptor has to actively translate, and so is better at compatibility.

When I threw one in my 386SX, the BIOS didn't like it. It didn't autodetect, and although it claimed to support drives up to 8GB, it would do weird stuff and refuse to boot on a 4GB card, even if I said "just use 100MB of it".

On XUB, it autodetects fine and I just set up a 2GB partition, because that's the largest FAT16 will support, and far more than you need when the original drive was 40MB.
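That 2GB ceiling is easy to verify: FAT16 allows at most about 65,524 usable clusters at a maximum cluster size of 32 KiB (the common DOS/Win9x figures; these numbers are the standard limits, not taken from the thread):

```python
# Why ~2GB is the FAT16 ceiling: maximum usable cluster count times
# the maximum cluster size (common DOS/Win9x limits assumed).
MAX_CLUSTERS = 65524
CLUSTER_BYTES = 32 * 1024   # 32 KiB

max_volume = MAX_CLUSTERS * CLUSTER_BYTES
print(max_volume)   # 2147090432 bytes, just under 2 GiB
```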

Added plus: the easiest way to add it is to buy a $15 NIC and slap the ROM in the boot-ROM socket, so now it has networking.


I've pondered going all SCSI and using a SCSI emulator... Summer project perhaps :)


Yeah he's one of the big online names in classic Mac stuff.


What are the advantages of this versus a PATA disk-on-module, a CompactFlash card, or an SD-to-PATA adapter?


"What are the advantages of this versus a PATA disk-on-module, a CompactFlash card, or an SD-to-PATA adapter?"

I wondered the same thing ...

I have thousands of days of uptime on various CF->IDE configurations - many of which were critical infrastructure.

It's a really elegant solution because CF and IDE are pinout compatible. No logic is required.

I set filesystems to be read-only so as not to burn out the CF. Again, some of these were in service for >10 years, and I have much higher confidence in this configuration than any SSD-based solution I have ever fielded.


Don't use SD/microSD storage for these things. It's terrible for a number of reasons. Get yourself a cheap IDE-SATA adapter instead and plug an old used 2.5" SATA SSD in there.


> Don't use SD/microSD storage for these things. It's terrible for a number of reasons.

Is it, really? My microSDs have already handled more lifetime writes than all IDE HDDs I ever had combined.


I don't buy it, because it contradicts the very nature of flash storage. Unless you used your IDE HDDs as piñatas, that is. But the problems specific to SD/microSD and USB flash drives go beyond limited longevity. The underlying SDIO protocol/interface has several shortcomings that make it awful for general operating-system use with frequent intermixed writes and reads, including bringing the whole device (and with it the OS) to a complete read-blocked freeze during extended or rapid small writes.


> I don't buy it, because it contradicts the very nature of flash storage.

No, it really doesn't. Besides, both capacities _and_ performance have increased a lot since the top-of-the-line IDE HDDs. It's easy enough to see that e.g. a Steam Deck user probably writes a couple orders of magnitude more per day to a microSD than even a heavy Windows 98 user did to their HDD. Even defragmenting your average HDD of the era daily is practically peanuts to a TB microSD, but it was definitely not peanuts to the HDD itself.

> The underlying SDIO protocol/interface of this storage type has several shortcomings making it awful for general operating system use entailing frequent intermixed writes and reads

We are comparing this to IDE. Not NVME on top of PCIe.

TL;DR I have a pile of broken microSD cards, but I also have a larger pile of broken old HDDs.


My old w98 machine had a 60 GB disk and for sure lasted far more than microSD's.


If this is going to be a contest of throwing anecdotes around, I literally have _multiple_ Win98 VMs on a single 256GB microSD (which is like, $30?). And so do a lot of people in the target audience of TFA, who boot their retro hardware from microSDs since it is definitely faster and even more reliable than era-accurate HDDs. And just plain more convenient; you can back up the entire HDD by copying the SD around in a fraction of the time it would take to clone the HDD.

It just makes sense that the microSD would last longer. Even if you assume it was the worst of the worst of flash memory and each cell would only last 100 write cycles, that still amounts to some 20TB of lifetime writes to the microSD. Windows would even use ridiculously large cluster sizes on a 60GiB drive, so write amplification would be of little concern. You could install Windows 98 over and over continuously, as fast as the HDD speed allows (say 1hr per installation), for a decade and you would still have writes to spare. Back-of-the-envelope caveats apply, but normally flash memory will have at least one order of magnitude more cycles.
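The envelope math above is easy to reproduce: capacity times program/erase cycles, ignoring write amplification (both figures are the comment's own worst-case assumptions):

```python
# Worst-case lifetime-writes estimate: a 256GB card whose cells each
# survive only 100 program/erase cycles (very pessimistic; decent
# flash manages far more), ignoring write amplification.
capacity_gb = 256
pe_cycles = 100

lifetime_tb = capacity_gb * pe_cycles / 1000
print(lifetime_tb, "TB")   # 25.6 TB, which the comment rounds down to ~20TB
```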

Let's remember that 90s and early 00s HDDs are not remembered for their longevity, either (search for click of death).


An Amiga 2000 booting from MicroSD is not the same as booting Windows 98.

>Let's remember that 90s and early 00s HDDs are not remembered for their longevity, either (search for click of death).

My ~2002 drive was reliable enough. What broke was a motherboard capacitor.


60GB in a Windows 98 machine is honestly huge, in an era when a 20GB IBM HDD sold for $440 (1998-1999 dollars, about $785 today).


W98 was used until the WXP SP2 era. Not so expensive if you bought it by 2001.


That could be survivor bias.


No, that was the common usage.


Person 1: "Here is my lived experience."

Person 2: "I don't buy it."

Internet in a nutshell.


I think this often comes up when one person is talking about personal experience, and another person is talking about general advice.

One person's anecdote does not necessarily prove any larger point. They might have gotten (un)lucky, live or work in a particular environment/context that doesn't apply generally, etc.

On the other hand, dismissing someone's anecdote because you have some Larger Point axe to grind is also a bit crap (:


This isn't my experience.

You must be using a different Internet


I think part of it has to do with the workloads you're using.

If you're using it as a disc replacement for, say, a 286/386, it comes from an era where disc performance was far worse, so the software is usually not going to hit the disc gratuitously.

Also, dealing with an adapter that's single-threaded and taps out at 25Mbps isn't a big deal when you're dealing with an environment with primitive cooperative multitasking (i.e. DOS/Win3) attached via a bus that tops out at half that.

Now, it could be an issue in later systems: something like a late 486 or beyond, with a local-bus/VL-bus/PCI disc controller and running Linux or Windows NT, might work poorly with those adapters, but then you're probably able to use a commercial PATA or SATA+adaptor SSD.


Tell that to the original Raspberry Pis floating around that are old enough to be enrolled in high school.


That's not been my experience with the original Pis. We ended up mounting cards as read only because they wouldn't last otherwise. Syslog and various other processes, even with light IO would render the cards useless after a month or two.


Were they high-endurance SD cards? In my experience, people usually aren't aware there is a difference between standard and high-endurance SD cards. The former should last years on a light I/O raspberry pi workload.


Okay, so don't use 90% of SD cards.


Also: a lot of OS distributions aimed at SBCs with microSD storage forgot to set noatime/relatime on their file systems, to avoid thousands upon thousands of writes for access-time updates whenever files are read. It's only just recently that OS vendors finally woke up and realized what was going on.
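For anyone wanting to check their own SBC image, the mitigation looks like this (a sketch; the UUID, filesystem, and mount point are placeholders):

```shell
# Sketch: suppress access-time writes on an SD-card root filesystem.
# The UUID and filesystem below are placeholders, not real values.

# /etc/fstab entry:
# UUID=xxxx-xxxx  /  ext4  defaults,noatime  0  1

# Or apply to a running system without rebooting:
mount -o remount,noatime /
```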


relatime has been the default option since forever (certainly since before the rpi was even a thing).

https://elixir.bootlin.com/linux/v2.6.30/source/fs/namespace... (some random old kernel)


That's good info and news to me. "relatime" is unfortunately a silent killer as well, albeit nowhere near as intense as "strictatime".


On their original SD card? Probably not, especially on the earlier raspi models.


"Hey, original Raspberry Pi, you have no bulk flash storage on board so I don't know why this guy is trying to get you into unrelated discussion about durability of bulk flash storage in the first place!"

Here, I told it.

Also they are "just" 11 years old so you've managed to be wrong on everything you said somehow..


I have, on numerous occasions. It's one of the reasons why general desktop use on them can be so infuriating.


I have old raspis, the SD card failed.


> My microSDs have already handled more lifetime writes than all IDE HDDs I ever had combined.

In practice it's hard to tell without real statistics. I've had two Raspberry Pi setups running off microSD cards for many years now, as actively used CUPS hosts for printers. No problems whatsoever. On the other hand I have some other Pi setups where one of them suddenly suffered an extremely hot microSD card and I just had to yank it out (I was logged in at the time and the system froze). Same type microSD as the other two (Sandisk). The card was gone, the Pi survived. That got me a bit nervous and I used an SSD instead afterwards.

So, it's all anecdotal - two microSD cards working flawlessly for many years, another failing catastrophically after a relatively short time. Usage pattern approximately the same.


SD cards don't have wear leveling like real SSDs.


SD cards have wear leveling. Probably comparable in principle to dramless SSDs. I/O interface sucks, though.


Some SD cards do, some don’t. When I use them as HDs in vintage computers I get those that do.


How can you tell which are the ones?


IDE-SATA adapters adhere to ATA3 standard https://www.scs.stanford.edu/10wi-cs140/pintos/specs/ata-3-s...

> Deleted the IOCS16- signal.

That means they only work on newer IDE interfaces. Anything pre-UDMA (up to ~1995: 486s/first Pentiums and things like the RiscPC) will still expect the /IOCS16 pin to be asserted when 'any register other than the data register is accessed'; only the data register is 16-bit. More info https://www.vogons.org/viewtopic.php?p=1122886#p1122886


All of them? Admittedly the oldest hardware I've used such adapters on is a circa 1998 motherboard with UDMA/ATA4 support, but I have at least three different chips on the ones I've picked up the past decade.


The downside to using SD and friends for this is that SD supposedly has less write endurance and worse wear leveling than a proper SSD. I have no idea if this is true. The upside to using CF is that CF is electrically compatible with IDE, so you don't have an IDE/SATA bridge that can cause compatibility issues and is a point of failure.


It would kind of fit their use case. You typically fill up the card with long sequential writes (photo and video files), download them to a computer, erase the files, and repeat.

It'd be a shame if they had a lower bytes written capability, but a non-existent/poor/simple wear leveling algorithm wouldn't hurt them in their typical usage pattern. Might be worth looking into whether eMMC makes different assumptions considering its prevalence as primary storage for mobile devices.


I agree that these things are readily available, but given that we're reading it on github, maybe it's somebody's tinkering project.


That's really the only valid reason to do this, which is of course reason enough :P


Can anyone recommend a specific device they've observed to work on a G4 Mac mini? (https://www.amazon.com/gp/product/B087CL5KHL and https://www.amazon.com/gp/product/B07Z67GX6W together did not work. https://www.amazon.com/gp/product/B07QN9STY7 and a random SATA drive did not work and did not fit)


The eBay listings are long gone, but in 2019 I ordered an adapter and a cheap used Samsung sata m.2 and they work perfectly in my G4 Mac Mini.

The SSD listing descriptions were:

* "2 Pack M.2 NGFF SATA SSD to 2.5 inch IDE 44PIN Converter Adapter with Case"

* "256GB SATA-III Solid State Drive M.2 2280 Samsung PM871 MZ-NLN2560 Lenovo SSD"


I used https://eshop.macsales.com/item/Addonics/ADSAIDE/ to upgrade some Glyph drives to SSD when connected to a G5 audio rig a few years ago


I've had success with this one, with a G4 mini. https://www.amazon.co.uk/gp/product/B00AQT2LRK/ref=ppx_yo_dt...


I use this in my 3400c, probably fine for your mac mini https://www.amazon.com/gp/aw/d/B00AQT2LCU


There used to (maybe still are) PATA SSDs from a brand called King Spec on Amazon that I have used in a number of machines. I have used them in a G3 PowerBook (Lombard), a Sony VAIO laptop, and an IBM ThinkPad. These are all just for funsies machines so I make no claims of those drives' reliability. I don't have any irreplaceable data on any of those drives.


I’ve used an IDE to CF adapter successfully.


Add in there IDE to SATA and IDE to mSATA/M.2. If you just need to plug a non-ancient disk into IDE, there are tons of options.


NAND should have better latency and write endurance


Sure but what are the subjective positives?

Source: Ran Intel's game lab to seek out subjective positives with SIMD when MIPS and AMD and the other CPU company, Transmeta, were a threat...

https://en.wikipedia.org/wiki/Transmeta

Look at what they lost....

I was talking with a DB dev at Intel ~1998, and his biggest problem was figuring out how to expand the money fields in the DB to include enough digits to reflect how much money they had...


Silly question: do we have to worry about older PC OSes not playing well with SSDs, with excessive reads/writes? I'd be concerned about destroying drives in short order with misbehaving swap.


You really have to go nuts with the writes to break a modern SSD. I don't think the bandwidth is even high enough to do so on purpose on an older machine.


Repeated defrag runs could wear some grooves into those bits. Older versions of Windows will not look for (nor care about) the "rotational" flag.


Only Vista has automatically-scheduled background full defragmentation and doesn't care about the rotational flag.


Then we're good, because nobody is going to install that friggin OS


Automatic defrag should only hit a handful of files each time it runs. You're not going to upset an SSD with that.

But even a whole-disk rewrite cycle will have very limited effect on the vast majority of SSDs.


Disable defrag


I don't want to; watching the mishmash of rectangles slowly getting organized by color makes me feel very productive!


I was astonished at the remaining life on a Samsung 960 pro when I checked the other day. I don't think I will have to replace it this decade at the current pace.


I have a pair of Crucial MX500 SSDs that are down to their last 10% of reported life... Awaiting the day they finally decide to stop working, maybe some time during the summer...


Older software tends to do far less 'background' stuff.

Whenever I run old OSes in VirtualBox, I'm surprised that even doing lots of stuff for a week rarely results in more than a few GBs of writes.

So I don't think your concern is an issue for the typical case in practice. Obviously there'll be corner cases, but I haven't hit those yet.


Yes, we do have to worry about excessive writes.

I had a Samsung SSD - chosen because Samsung flash storage is generally better than most - connected to a SATA to IDE adapter, which was connected to an IDE to SCSI adapter, installed in a VAXStation 4000/30 (AKA the VAXStation VLC). It has 24 megs of memory.

https://twitter.com/AnachronistJohn/status/12947258190387527...

In less than a year, the SSD was dead, and I'm sure the reason was that I was compiling constantly on that machine, so writes to swap were constant.

I now have a spinning rust 500 gig SATA drive, and it has been fine for the last three years without problems.


Lack of OS-level TRIM support will slow down writes. TRIM also allows SSDs to maintain wear leveling.


The trick I've heard to use SSDs on devices without TRIM is to just leave like 15% of the drive unprovisioned (i.e. create a single partition smaller than the drive) to give the drive's built-in garbage collection plenty of slack space to work with.

If you're working with vintage machines they all have drive size limits in the 2-8 GB range, so you'll end up doing that regardless, as you can't get SSDs that small.
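The "leave ~15% unpartitioned" rule of thumb is just arithmetic on the drive's sector count. A sketch (the helper name, the 15% figure, and the 1 MiB alignment are illustrative choices, not from the thread):

```python
# Sketch: size a single partition that leaves ~15% of an SSD
# unpartitioned as garbage-collection slack. Aligns the partition
# end down to a 1 MiB boundary (2048 x 512-byte sectors).
def partition_sectors(total_sectors, reserve=0.15, align=2048):
    usable = int(total_sectors * (1 - reserve))
    return usable - (usable % align)

# e.g. a nominal 128 GB drive with 512-byte logical sectors:
total = 128 * 10**9 // 512            # 250,000,000 sectors
print(partition_sectors(total))       # sectors to give to the partition
```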


You can't just leave the space unprovisioned in general. It's a crapshoot whether the drive will automatically over-provision using the unprovisioned space; I've never personally seen one that does. With Samsung, Crucial, and other major brands you need to use their proprietary tools to change the over-provisioning percentage (before doing any formatting or creating a partition table).

So, to anyone else trying this, make sure to do it before you create the partition table.


You get lots of benefit from permanently-trimmed space that is never used, even if it isn't explicitly "over-provisioned".

The underlying storage always has lots of free space, and thus is likely to have A) lots of pre-erased pages, and thus B) doesn't need to relocate as much stuff often to make contiguous chunks to erase, and thus C) write amplification is much lower.


Today the SSD microcontroller will do TRIM by itself.


Do you mean that it will automatically TRIM a block of all zeros? Or that it will itself try to parse filesystem structures and TRIM blocks that it can imply are un-used?


Can you list even one drive that internally parses filesystem data and guesses empty space to trim?


Completely anecdotal observation with n=1, but FWIW I was an early adopter of SSDs and put one in my ThinkPad in college running Win XP (this was before TRIM and all that). It did require me to "align the partitions", but once that was done I did all my project work and would routinely hibernate my laptop. It lasted all through college and years after; I eventually just retired the device. The SSD was fine. At the time SSDs were "unusable" for desktop workloads because of their perceived failure rates.


Whether or not they were rated for it, SSDs of that era were capable of vastly more writes than today’s SSDs


Early SSDs were all SLC with 50-100K erase cycles. Current cheap flash is around 200-500 cycles.


I was a college student so I couldn't afford one of those - I had a 100 GB SATA MLC SSD that read at like 100 MB/s - this was ~2007ish, when SSDs were still new from a consumer perspective.


Nice work, especially reverse-engineering the controller. I use a $5 bidirectional IDE<->SATA adapter.


These initialize kinda slow; using one in the Xbox doubles or triples the boot-up time. It's been a couple years, but IIRC there were basically only two different chips.


The ones I have all "come online" faster than a mechanical HDD spins up and incur no slowdown. Things run at full Ultra ATA speeds. You may have run into one with quirky firmware or so - there are quite a few different chipsets.


I tried around 2017 or so. There were just two brands of adapters back then: one from StarTech, the other I forgot. Both of them made boot-up way slower, and it was common knowledge in the scene that there was no solution to that. Either better adapters have emerged in the meantime, or the Xbox just has some quirk.


Very cool that capable individuals are paying attention to older technology, but it's not like IDE SSDs haven't existed for a while. Here's[1] one of the more recent options, and it's been around a while.

Edit: oops, missed that it was a ZIF interface. OK, well done.

[1] https://eshop.macsales.com/item/OWC/SSDMXLE120/


What I really want: a very fast volatile SSD. Reason: my notebook RAM is maxed out and I don't want to throw it off; a fast volatile SSD would allow me to use it as a swap device.

Is there such a thing?


Not volatile, but Optane? Even in the era of super fast PCIe 4 SSDs, the random read times for Optane are like 6-10x faster. Fast enough that Intel put Optane into RAM modules for persistence.

Plus Intel sold NAND SSDs (model H20?) that had a 32GB Optane module on board. You can just use the Optane for swap and the NAND as your main drive.


Also, consumer Optane is hella cheap at the moment. For example: https://www.newegg.com/intel-optane-ssd-p1600x-118gb/p/1Z4-0...?


Q: I have multiple TBs of SSD currently; how is Optane better at ~118GB of stuff... Why would I want this over that?


Much lower latency. Optane doesn't have any of the "page" / "cell" bullshit that flash imposes, so there's really no need for buffers at all. Writes just go straight to disk basically, at almost zero latency, NVMe itself imposes much more latency than the actual reads/writes.


I think of Optane not so much as a mass storage device and more as fast scratch space: the perfect place for your swap partition, a cache in front of slower storage, the spill space of an algorithm that starts using disk to save memory, or the intermediate output of your compiler.


Optane has insane write endurance. Speed is good as well. It is expensive for sure but I guess it can be justified for some database use.


Orders of magnitude better wear capabilities and much faster at low queue depth operations


If you're doing lots of writes it will not wear out the same way an SSD would. Also it's very fast, but these days not as fast as the best NVMe SSDs you can buy now.


There used to be a product called ZeusRAM which was a small (like 8GB) SSD with a matching amount of DRAM. In normal use all operations were done on the DRAM, and when power was lost it would use charge held in a super-cap to flush the data out to the NAND.

Haven't seen any purely volatile drives, though.


The Gigabyte i-RAM from 2005/2006, featured in an LTT video.


So when power is restored it restores the RAM and turns back on like nothing happened?


Correct.


Oh wow lol, this reminds me of the Windows Vista days and Microsoft's ReadyBoost for using a USB flash drive as extra disk-cache memory, because Vista was hammering people's disks. Good times.


On Linux there is zram which creates a first-priority swap partition that just compresses the pages and keeps them in memory.

It looks like Windows also has memory compression.

I believe that OS X has memory compression turned on by default.


Zram is actually a general-purpose block device; you can format it with ext4 and have a compressed RAM disk.

There is also zswap, which is a transparent compressed RAM cache in front of your swap partition. Most distros have it enabled by default, so make sure you don't use it together with swap in zram.
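A minimal zram-swap setup on a modern Linux box might look like this (a sketch; requires root, and the size, algorithm, and priority are illustrative; zramctl is the util-linux tool):

```shell
# Minimal zram swap sketch (Linux; run as root, sizes illustrative).
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 4G  # compressed in-RAM block device
mkswap /dev/zram0
swapon --priority 100 /dev/zram0   # prefer zram over any disk-backed swap
```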


>I believe that OS X has memory compression turned on by default.

It isn't just turned on by default, there is no way of turning it off.


Optane SSDs. 2242 devices should fit in an ExpressCard bay (which is electrically a PCIe slot) but the only adapters are from ThinkMod and the owner of that company has kinda disappeared.


These used to exist, PCI cards loaded up with DRAM, optionally battery-backed. Some talked over a drive interface like SCSI or SATA. I don't know that anyone bothers with them nowadays, since the target was never laptops and it's easy to cram a lot of RAM in a not-laptop.



Having worked on SSD firmware: a volatile drive would simplify a lot of the code in the firmware, since we wouldn't have to recover things after a power loss. But I don't think it would improve performance a lot.


Why not just a fast SSD, without being volatile?


Because by being volatile, it could be faster.


Have you actually looked for a fast SSD? Your laptop's SATA/PCIe interface is probably the bottleneck at this point.


It couldn't, since it would be limited by bus width anyway.


Solution, meet problem?


A regular SSD will do. But if your notebook is old, it may only support SATA SSDs, which are not considered fast any more.


The 6Gb(it)ps of a maxed-out SATA III link is still not slow; it's just that the multi-GB(yte)ps speeds of modern PCIe drives are insanely fast. My TrueNAS box has 3x2 mirrored vdevs of 7200RPM drives and plenty of bandwidth to handle multiple 4K ProRes streams; the only reason I have PCIe storage (an Optane 800P) in it is that I don't want to waste a precious 3.5" bay on a SLOG device to handle sync writes (VM workloads).
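To put numbers on that gap (the line rates and encodings below are the standard spec figures, not taken from the comment):

```python
# Usable payload bandwidth: SATA III vs. a PCIe 3.0 x4 NVMe link.
# SATA III uses 8b/10b line encoding; PCIe 3.0 uses 128b/130b.
sata3_bps = 6e9 * 8 / 10               # 6 Gb/s line rate -> payload bits/s
pcie3_x4_bps = 4 * 8e9 * (128 / 130)   # 4 lanes at 8 GT/s

print(sata3_bps / 8 / 1e6, "MB/s")     # 600.0 MB/s
print(pcie3_x4_bps / 8 / 1e9, "GB/s")  # ~3.94 GB/s
```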


A single board computer with something like a 10GbE connection to your laptop.


I just saw a dual-M.2 adapter with RAID that fits in a 2.5" laptop slot. Who else wants laptop RAID?


I've been doing it for a long long time with the SD card slot


Wouldn't RAID1 just make both cards fail at roughly the same time? After all, you're writing the same amount of data to both.


No. SD + internal.

And "roughly at the same time" is fine. I can deal with that.

Hasn't happened like that, but even if it did, an extra day is all you need to save it


I guess just getting 2 different brands of cards would be fine for that.

I've seen devices like this for servers: basically two microSD cards with hardware RAID1 mounted inside the server. It wasn't doing much writing, though; it was designed so you could run, say, a hypervisor off it and use all the drive slots for storage.


I also have an rsync backup that looks to see if I'm on a certain network and then runs the backup accordingly.

It's sufficient. I'm not doing things that important


Edit: edited out "Dosdude" which is the name of the author, not the project.



