dosdude1 is awesome. He replaced the BGA CPU in my iMac G4 with a 1.7 GHz MC7448 (https://mattst88.com/computers/imacg4/) and made a video about the process:
Can I get a tldr? Can't watch until I get home, but I was looking over the github and I'm curious about how this works...
It sounds like we're supposed to scavenge the NAND and controller off an SSD and solder it to their board... but can I carve out 200/500 MB disks out of it like other drive emulators, or are we still stuck trying to put a 256 GB SSD into a machine that can't support disks that large?
Edit: from my POV the issue isn't "I can't plug an NVMe drive into my IDE-only 286" but rather "I can't find a CF card small enough, and that the BIOS is happy enough with, to allow the system to boot"
That's not even a complete list, because there are some old BIOSes that won't boot from a drive that's not part of the "type xx" standard...
Also, some BIOSes only allowed a small handful of those types. (That's completely ignoring IBM, because they were so picky you couldn't even replace your floppy/disk controller with another IBM-branded one - it had to be the exact model the PC shipped with, or screw you.)
I think it reaches a point where you outsource the problem to something like the XT-IDE Universal BIOS, which does more than the name suggests. (It's also configurable for use as an option ROM on machines with normal IDE.)
I'm fond of the SD-IDE adapters because CF cards are harder to find these days, and some are not 100% happy with straight CF-IDE adapters; the SD-IDE adapter has to actively translate, and so tends to be better at compatibility.
When I threw one in my 386SX, the BIOS didn't like it. It didn't autodetect, and although it claimed to support drives up to 8 GB, it would do weird stuff and refuse to boot from a 4 GB card, even if I said "just use 100 MB of it".
On XUB, it autodetects fine and I just set up a 2 GB partition, because that's the largest FAT16 will support, and far more than you need when the original drive was 40 MB.
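Both ceilings at work here fall out of simple arithmetic - a quick sketch (the figures are the standard INT 13h CHS geometry and FAT16 cluster limits, not anything specific to this thread):

```python
# Classic INT 13h CHS addressing: 1024 cylinders x 255 heads x 63 sectors,
# 512 bytes per sector -- the ~8 GB BIOS ceiling.
chs_limit = 1024 * 255 * 63 * 512
print(chs_limit / 10**9)   # ~8.4 GB

# FAT16: at most 65,536 clusters (16-bit cluster numbers), with DOS capping
# cluster size at 32 KiB -- hence the 2 GiB partition limit.
fat16_limit = 65536 * 32 * 1024
print(fat16_limit / 2**30)  # 2.0 GiB
```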
Added plus: the easiest way to add it is to buy a $15 NIC and slap the ROM in the boot-ROM socket, so now it has networking.
"What are the advantages of this versus a PATA disk-on-module, a CompactFlash card, or an SD-to-PATA adapter?"
I wondered the same thing ...
I have thousands of days of uptime on various CF->IDE configurations - many of which were critical infrastructure.
It's a really elegant solution because CF and IDE are pinout compatible. No logic is required.
I set filesystems to be read-only so as not to burn out the CF part and, again, some of these were in service for >10 years and I have much higher confidence in this configuration than any SSD based solution I have ever fielded.
Don't use SD/microSD storage for these things. It's terrible for a number of reasons. Get yourself a cheap IDE-SATA adapter instead and plug an old used 2.5" SATA SSD in there.
I don't buy it, because it contradicts the very nature of flash storage. Unless you used your IDE HDDs as piñatas, that is. But the problems specific to SD/microSD and USB flash drives go beyond limited longevity. The underlying SDIO protocol/interface has several shortcomings that make it awful for general operating-system use with frequent intermixed writes and reads, including bringing the whole device (and with it the OS) to a complete read-blocked freeze during sustained bursts of small writes.
> I don't buy it, because it contradicts the very nature of flash storage.
No, it really doesn't. Besides, both capacities _and_ performance have increased a lot since the days of top-of-the-line IDE HDDs. It's easy enough to see that e.g. a Steam Deck user probably writes a couple orders of magnitude more per day to a microSD than even a heavy Windows 98 user did to their HDD. Even the writes caused by defragmenting your average era HDD daily are practically peanuts to a TB-class microSD, but they were definitely not peanuts to the HDD itself.
> The underlying SDIO protocol/interface of this storage type has several shortcomings making it awful for general operating system use entailing frequent intermixed writes and reads
We are comparing this to IDE. Not NVMe on top of PCIe.
TL;DR I have a pile of broken microSD cards, but I also have a larger pile of broken old HDDs.
If this is going to be a contest of throwing anecdotes around, I literally have _multiple_ Win98 VMs on a single 256GB microSD (which is like $30?). And so do a lot of people in the target audience of TFA, who boot their retro hardware from microSDs since it is definitely faster, and even more reliable, than era-accurate HDDs. And just plain more convenient: you can back up the entire HDD by copying the SD around in a fraction of the time it would take you to clone the HDD.
It just makes sense that the microSD would last longer. Even if you assume it was the worst of the worst of flash memory and each cell only lasted 100 write cycles, that still amounts to some 20 TB of lifetime writes to the microSD. Windows would even use ridiculously large cluster sizes on a 60 GiB drive, so write amplification would be of little concern. You could install Windows 98 over and over, continuously, as fast as the HDD speed allows (say 1 hr per installation) for a decade and you would still have writes to spare. Back-of-the-envelope caveats apply, but normally flash memory will have at least one order of magnitude more cycles.
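The envelope math above is easy to reproduce; assuming the 256 GB card from the comment upthread, the deliberately pessimistic 100 cycles per cell, and ideal wear leveling:

```python
card_bytes = 256 * 10**9   # 256 GB microSD (from the comment above)
pe_cycles = 100            # deliberately pessimistic program/erase cycles per cell

# Total raw lifetime writes, assuming the controller spreads wear evenly.
lifetime_tb = card_bytes * pe_cycles / 10**12
print(lifetime_tb)         # 25.6 TB -- same ballpark as the ~20 TB figure
```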
Let's remember that 90s and early 00s HDDs are not remembered for their longevity, either (search for click of death).
60 GB in a Windows 98 machine is honestly huge in an era when a 20 GB IBM HDD sold for $440 in 1998-1999 dollars - equivalent to about $785 today.
I think this often comes up when one person is talking about personal experience, and another person is talking about general advice.
One person's anecdote does not necessarily prove any larger point. They might have gotten (un)lucky, live or work in a particular environment/context that doesn't apply generally, etc.
On the other hand, dismissing someone's anecdote because you have some Larger Point axe to grind is also a bit crap (:
I think part of it has to do with the workloads you're using.
If you're using it as a disc replacement for, say, a 286/386, it comes from an era where disc performance was far worse, so the software is usually not going to hit the disc gratuitously.
Also, dealing with an adapter that's single-threaded and taps out at 25 Mbps isn't a big deal when you're dealing with an environment with primitive cooperative multitasking (i.e. DOS/Win3) attached via a bus that tops out at half that.
Now, it could be an issue in later systems - something like a late 486 or beyond, with a local-bus/VL-bus/PCI disc controller, used with Linux or Windows NT, might work poorly with those adapters - but by then you're probably able to use a commercial PATA SSD, or a SATA SSD plus adapter.
That's not been my experience with the original Pis. We ended up mounting cards as read only because they wouldn't last otherwise. Syslog and various other processes, even with light IO would render the cards useless after a month or two.
Were they high-endurance SD cards? In my experience, people usually aren't aware there is a difference between standard and high-endurance SD cards. The former should last years on a light I/O raspberry pi workload.
Also: a lot of OS distributions aimed at SBCs with microSD storage forgot to set noatime/relatime on their file systems, so reading files triggered thousands upon thousands of writes for access-time updates. It's only just recently that OS vendors finally woke up and realized what was going on.
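For anyone checking their own setup: these are per-mount options set in /etc/fstab (the device name and filesystem below are just placeholder examples):

```
# /etc/fstab -- 'noatime' stops every file read from generating a metadata write
/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1
```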
"Hey, original Raspberry Pi, you have no bulk flash storage on board so I don't know why this guy is trying to get you into unrelated discussion about durability of bulk flash storage in the first place!"
Here, I told it.
Also, they are "just" 11 years old, so you've managed to be wrong on everything you said somehow...
> My microSDs have already handled more lifetime writes than all IDE HDDs I ever had combined.
In practice it's hard to tell without real statistics.
I've had two Raspberry Pi setups running off microSD cards for many years now, as actively used CUPS hosts for printers. No problems whatsoever. On the other hand I have some other Pi setups where one of them suddenly suffered an extremely hot microSD card and I just had to yank it out (I was logged in at the time and the system froze). Same type microSD as the other two (Sandisk). The card was gone, the Pi survived. That got me a bit nervous and I used an SSD instead afterwards.
So, it's all anecdotal - two microSD cards working flawlessly for many years, another failing catastrophically after a relatively short time. Usage pattern approximately the same.
That means they only work on newer IDE interfaces. Anything pre-UDMA (up to ~1995: 486s, first Pentiums, and things like the RiscPC) will still expect the /IOCS16 pin to be asserted when 'any register other than the data register is accessed' - only the data register is 16-bit. More info: https://www.vogons.org/viewtopic.php?p=1122886#p1122886
All of them? Admittedly the oldest hardware I've used such adapters on is a circa 1998 motherboard with UDMA/ATA4 support, but I have at least three different chips on the ones I've picked up the past decade.
The downside to using SD and friends for this is that SD supposedly has less write endurance and worse wear leveling than a proper SSD. I have no idea if this is true. The upside to using CF is that CF is electrically compatible with IDE, so you don't have an IDE/SATA bridge that can cause compatibility issues and is a point of failure.
It would kind of fit their use case. You typically fill up the card with long sequential writes (photo and video files), download them to a computer, erase the files, and repeat.
It'd be a shame if they had lower total-bytes-written endurance, but a non-existent/poor/simple wear-leveling algorithm wouldn't hurt them in their typical usage pattern. Might be worth looking into whether eMMC makes different assumptions, considering its prevalence as primary storage for mobile devices.
There used to (maybe still are) PATA SSDs from a brand called King Spec on Amazon that I have used in a number of machines. I have used them in a G3 PowerBook (Lombard), a Sony VAIO laptop, and an IBM ThinkPad. These are all just for funsies machines so I make no claims of those drives' reliability. I don't have any irreplaceable data on any of those drives.
I was talking with a DB dev at Intel ~1998, and his biggest problem was figuring out how to expand the money fields in the DB to include the number of digits needed to reflect how much money they had...
Silly question, do we have to worry about older PC OSes not playing well with SSDs with excessive read/writes?
I'd be concerned about destroying drives in short order with a misbehaving swap.
You really have to go nuts with the writes to break a modern SSD. I don't think the bandwidth is even high enough to do so on purpose on an older machine.
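For a rough sense of scale, here's a hypothetical worst case (the 600 TBW endurance rating and the transfer rates are illustrative assumptions, not figures from the thread):

```python
tbw = 600 * 10**12        # illustrative endurance rating: 600 TB written
seconds_per_day = 86400

# Time to exhaust the rating if the interface were saturated with writes 24/7.
for label, rate in [("UDMA/133 (133 MB/s)", 133 * 10**6),
                    ("PIO-era (5 MB/s)", 5 * 10**6)]:
    days = tbw / rate / seconds_per_day
    print(f"{label}: ~{days:.0f} days of continuous writes")
```

Even a modern UDMA link needs close to two months of nonstop max-rate writes; a PIO-era machine would need years.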
I was astonished at the remaining life on a Samsung 960 pro when I checked the other day. I don't think I will have to replace it this decade at the current pace.
I have a pair of crucial mx500 ssds that are down to their last 10% of reported life... Awaiting the day they finally decide to stop working, maybe some time during the summer...
I had a Samsung SSD - chosen because Samsung flash storage is generally better than most - connected to a SATA to IDE adapter, which was connected to an IDE to SCSI adapter, installed in a VAXStation 4000/30 (AKA the VAXStation VLC). It has 24 megs of memory.
The trick I've heard to use SSDs on devices without TRIM is to just leave like 15% of the drive unprovisioned (i.e. create a single partition smaller than the drive) to give the drive's built-in garbage collection plenty of slack space to work with.
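A minimal sketch of the sizing arithmetic (the drive size and reserve fraction below are made-up examples, not recommendations):

```python
# Size a partition so ~15% of the drive is never written by the OS,
# leaving the controller slack space for garbage collection.
drive_gib = 238                          # e.g. a "256 GB" SSD is ~238 GiB
reserve = 0.15                           # fraction to leave unpartitioned
partition_gib = int(drive_gib * (1 - reserve))
print(partition_gib)                     # ~202 GiB partition, rest untouched
```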
If you're working with vintage machines, they all have drive size limits in the 2-8 GB range anyway, so you'll end up doing that regardless - you can't get SSDs that small.
You can't just leave the space unprovisioned in general. It's a crapshoot whether the drive will automatically over-provision using the unprovisioned space. I've never personally seen one that does this. With Samsung, Crucial, and other major brands you need to use their proprietary tools to change the over-provisioning percentage (before doing any formatting/creating a partition table).
So, to anyone else trying this, make sure to do it before you create the partition table.
You get lots of benefit from permanently-trimmed space that is never used, even if it isn't explicitly "over-provisioned".
The underlying storage always has lots of free space, and thus is likely to have A) lots of pre-erased pages, and thus B) doesn't need to relocate stuff as often to make contiguous chunks to erase, and thus C) much lower write amplification.
Do you mean that it will automatically TRIM a block of all zeros? Or that it will itself try to parse filesystem structures and TRIM blocks that it can imply are un-used?
Completely anecdotal observation with n=1, but FWIW I was an early adopter of SSDs and put one in my ThinkPad in college running Win XP (this was before TRIM and all that) - it did require me to "align the partitions", but once that was done I did all my project work on it and would routinely hibernate my laptop. It lasted all through college and for years after; I eventually just retired the device. The SSD was fine. At the time SSDs were "unusable" for desktop workloads because of their perceived failure rates.
I was a college student so I couldn't afford one of those - I had a 100 GB SATA MLC SSD that read at like 100 MB/s - this was ~2007ish, when SSDs were still new from a consumer perspective.
These initialize kinda slow, using one in the Xbox doubles to triples the boot up time. It's been a couple years but iirc there basically were only two different chips.
The ones I have all "come online" faster than a mechanical HDD spins up and incur no slowdown. Things run at full Ultra ATA speeds. You may have run into one with quirky firmware or so - there are quite a few different chipsets.
I tried around 2017 or so. It was just two brands of adapters back then, one from startech, the other one I forgot. Both of them made the boot up way slower, and it was common knowledge in the scene that there is no solution to that. Either better adapters emerged in the meantime, or the Xbox just has some quirk.
Very cool that capable individuals are paying attention to older technology, but it's not like IDE SSDs haven't existed for a while. Here's[1] one of the more recent selections, and it's been around a while.
Edit: oops, missed that it was a ZIF interface. Ok, well done.
What I really want: a very fast volatile SSD. Reason: my notebook RAM is maxed out and I don't want to throw it off; a fast volatile SSD would allow me to use it as a swap device.
Not volatile, but Optane? Even in the era of super-fast PCIe 4 SSDs, the random read times for Optane are like 6-10x faster. Fast enough that Intel put Optane onto RAM modules for persistence.
Plus, Intel sold NAND SSDs (model H20?) that had a 32 GB Optane module on board. You can just use the Optane for swap and the NAND as your main drive.
Much lower latency. Optane doesn't have any of the "page"/"cell" bullshit that flash imposes, so there's really no need for buffers at all. Writes just go straight to disk basically, at almost zero latency; NVMe itself imposes more latency than the actual reads/writes.
I think of optane not so much as a mass storage device, and more as a fast scratch space. The perfect place to put your swap partition, to use as cache in front of slower storage, the scratch space of an algorithm that starts using disk space to save memory, or the intermediate output of your compiler.
If you're doing lots of writes it will not wear out the same way an SSD would. Also it's very fast, but these days not as fast as the best NVMe SSDs you can buy now.
There used to be a product called ZeusRAM which was a small (like 8GB) SSD with a matching amount of DRAM. In normal use all operations were done on the DRAM, and when power was lost it would use charge held in a super-cap to flush the data out to the NAND.
Oh wow lol, this reminds me of Windows Vista days and Microsoft’s ReadyBoost for using a USB flash drive as extra memory for disk caching because it was hammering peoples disks. Good times.
Zram is actually a general purpose block device, you can format it with ext4 and have a compressed RAM disk.
There is also zswap, which is a transparent compressed RAM cache in front of your swap partition. Most distros have it enabled by default, so make sure you don't use it together with swap in zram.
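On distros that ship systemd's zram-generator, swap-on-zram is just a drop-in config file (and per the warning above, you'd want zswap disabled alongside it); a minimal example, assuming the zram-generator defaults:

```
# /etc/systemd/zram-generator.conf -- compressed swap device in RAM
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```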
Optane SSDs. 2242 devices should fit in an ExpressCard bay (which is electrically a PCIe slot) but the only adapters are from ThinkMod and the owner of that company has kinda disappeared.
These used to exist, PCI cards loaded up with DRAM, optionally battery-backed. Some talked over a drive interface like SCSI or SATA. I don't know that anyone bothers with them nowadays, since the target was never laptops and it's easy to cram a lot of RAM in a not-laptop.
Having worked on SSD firmware, that would simplify a lot of the code in the firmware if we didn't have to recover things after a power loss. But I don't think it would improve performance a lot.
6Gb(it)ps of a maxed out SATA III link is still not slow, it's just that the multi-GB(yte)ps speeds of modern PCIe drives is just insanely fast. My TrueNAS box has 3x2 mirrored vdevs of 7200RPM drives and has plenty of bandwidth to handle multiple 4K ProRes streams, the only reason I have PCIe storage (an Optane 800P) on it is because I don't want to waste a precious 3.5" bay for a SLOG device to handle sync writes (VM workloads).
I guess just getting 2 different brands of cards would be fine for that.
I've seen devices like this for servers: basically two microSD cards with hardware RAID1, mounted inside the server. It wasn't doing much writing tho - it was designed so you could run, say, a hypervisor off it and use all the drive slots for storage.
https://www.youtube.com/watch?v=SnpdLt4OIFs
He's got two videos about this custom IDE SSD on his YouTube channel here:
https://www.youtube.com/watch?v=YrBz-6lXbZQ https://www.youtube.com/watch?v=EMCz0VsEbqc