
What if "universal storage" was developed in the mid-1980s?

Morphile

For those of you unaware of the idea of universal storage: it's a single medium that is rapid to read and write, high capacity, and non-volatile. DRAM, the common "program memory" architecture today, sacrifices the latter two to maximize read/write speed, while hard drives sacrifice speed, which is a perfectly acceptable loss because RAM covers that side of things. The goal would be a medium fast enough to satisfy the read needs of a 486-class processor under a reasonable workload (future-proofing the technology enough that accumulated overhead keeps it entrenched), while staying close enough in capacity to the magnetic bulk storage formats that the shortfall is an acceptable trade for the improved read/write speed. We currently have technologies that could have filled such a role a decade ago, but our programs have gotten obscenely bloated because the sheer bulk of hard disk drives lets normal users damn near ignore file sizes.
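To put rough numbers on the trade-off, here's a trivial C sketch that just tabulates order-of-magnitude figures for early-90s DRAM and hard drives next to a hypothetical universal-storage part. Every figure in it is an illustrative assumption, not a spec.

[CODE]
/* Rough, order-of-magnitude illustration of the trade-off described above.
 * All latency and capacity numbers are approximate assumptions. */
#include <stdio.h>

struct medium {
    const char *name;
    double latency_us;   /* typical random-access latency, microseconds */
    double capacity_mb;  /* typical consumer capacity, megabytes */
    int    nonvolatile;  /* keeps data with the power off? */
};

int main(void) {
    /* DRAM: fast, small, loses data at power-off.
     * HDD:  slow to seek, big, non-volatile.
     * "Universal storage" as posited: near-DRAM speed, near-HDD bulk. */
    struct medium media[] = {
        { "DRAM (early 90s)",        0.07,    8.0, 0 },
        { "HDD (early 90s)",     15000.0,   200.0, 1 },
        { "Universal storage",       0.5,    100.0, 1 },  /* hypothetical */
    };
    for (int i = 0; i < 3; i++) {
        printf("%-20s latency %10.2f us, capacity %6.0f MB, %s\n",
               media[i].name, media[i].latency_us, media[i].capacity_mb,
               media[i].nonvolatile ? "non-volatile" : "volatile");
    }
    return 0;
}
[/CODE]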

In the scenario where universal storage falls behind the (other) magnetic media on capacity, we would likely see even slower, even larger bulk storage, kept from bloating wildly out of control by the limited read/write cycles of the, by our standards, enormous RAM modules. Variants that sacrifice non-volatility for more read/write cycles and speed could exist, but conventional RAM would be locked out of the market: code would, for a long time, have been designed to be stored entirely in RAM, so any significant loss of capacity would render a huge amount of software unusable for a significant period, and the accumulated hardware and code overhead would block lower-density RAM. That makes the scenario self-reinforcing, and it becomes virtually impossible to move away from the formerly-universal storage. Given a world where RAM is twenty times the size but a fifth the speed, to throw numbers out (more realistically, it completely invalidates SSDs until 3D NAND flash performance arrives, because so small an improvement in density can't justify the loss in read/write speed), how would code best serve such a shift away from speed towards bulk?
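As one sketch of what leaning into bulk over speed could look like in code: the classic space-for-time trade, where you precompute a big table once and answer everything from lookups afterwards. The sine table below is an arbitrary illustration; on the posited hardware the table could simply persist in the huge non-volatile "RAM" between runs instead of being rebuilt.

[CODE]
/* Trading capacity (cheap in this scenario) for per-call computation:
 * precompute results into a large table once, then serve lookups from it.
 * Illustrative sketch; error handling omitted. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define STEPS 1000000   /* 1M doubles = ~8 MB, trivial if "RAM" is huge */

static double *table;

static void build_table(void) {
    table = malloc(STEPS * sizeof *table);
    for (int i = 0; i < STEPS; i++)
        table[i] = sin((2.0 * M_PI * i) / STEPS);   /* pay the cost once */
}

/* Fast path: one array read instead of a libm call. On the posited machine
 * this table would survive power-off and never need rebuilding. */
static double fast_sin(double x) {
    int idx = (int)(x / (2.0 * M_PI) * STEPS) % STEPS;
    if (idx < 0) idx += STEPS;
    return table[idx];
}

int main(void) {
    build_table();
    printf("fast_sin(1.0) = %f, libm sin(1.0) = %f\n", fast_sin(1.0), sin(1.0));
    free(table);
    return 0;
}
[/CODE]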

In the scenario where it, by some miracle, loses on read/write speed to volatile RAM but stays close enough to magnetic media in capacity to effectively give us SSDs two decades early, and the code and hardware overhead permits a shift to smaller RAM modules (perhaps RAM develops quickly enough, and gets big enough, to handle all the legacy code and allow the switch), we'd probably see hardware development shift towards miniaturization and modularity sooner, since the scalability of the dominant storage medium is significantly improved. As with how things are shaping up in real life, it could remain truly universal storage in smaller appliances much longer, while enabling some of them earlier to begin with. The rough, ass-pulled ballpark for code to work with: 20% less storage capacity than HDDs up front, gradually falling behind to a 50% capacity difference, but around three times faster. That puts it well in between SSDs and HDDs, and the considerably longer time it spends on the market immensely narrows the cost gap with HDDs. Given the relatively minor speed advantage, SSDs would likely be limited to power users who demand maximum load times for a much longer time; basically right up until M.2 and 3D NAND flash mature, universal storage sticks around as the mass market's storage of choice.
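For anyone who wants that ballpark in concrete units, a trivial back-of-the-envelope in C; the HDD baseline figures are placeholders chosen for illustration, not historical claims.

[CODE]
/* Back-of-the-envelope for the ratios above against an assumed HDD baseline. */
#include <stdio.h>

int main(void) {
    double hdd_capacity_mb = 540.0;   /* assumed consumer drive of the era */
    double hdd_speed_mbs   = 5.0;     /* assumed sustained throughput      */

    /* At introduction: 20% less capacity, roughly 3x the speed. */
    printf("early: %.0f MB at %.0f MB/s\n",
           hdd_capacity_mb * 0.8, hdd_speed_mbs * 3.0);

    /* Later, after falling behind: half the capacity, still roughly 3x. */
    printf("later: %.0f MB at %.0f MB/s\n",
           hdd_capacity_mb * 0.5, hdd_speed_mbs * 3.0);
    return 0;
}
[/CODE]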

In the scenario where universal storage, essentially impossibly, stays in place, offering more capacity than SSDs along with higher read/write speeds, to the point of crowding out DRAM and similar technologies while relegating HDDs to archival work (you aren't beating HDDs at archival work; the only thing better is high-end, essentially bespoke magnetic tape, which reaches double-digit terabytes, and you flat out cannot run code off tape; that's infrastructure backup), the question of keeping the cartridge system around another few decades becomes entirely a matter of cost. External drives would appear much sooner to facilitate this, and they could displace licensed software packages by tying the cost to the physical drive a program is stored on. Read/write-cycle limits and price determine whether this leads to a general retention of cartridge-based hardware, with cheap storage letting drives be sold per-program to work around those cycle limits. Eventually you'd get either extreme cartridge shrinking, in relative capacity or in physical dimensions, or something like CD-ROMs emerging anyway through economies of scale. With code rewritable fast enough to make self-modifying programs practical in the mid-90s (and the many, many other shenanigans universal storage causes), how would the world of programming change?
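The closest thing we have today to programs keeping live, mutable state directly in their storage is memory-mapping a file, so here's a small POSIX C sketch of that as a stand-in. The filename and struct are made up for the example, and error handling is trimmed; on the posited hardware the explicit file and sync step wouldn't even be needed.

[CODE]
/* A modern stand-in for "state just lives in memory and survives power-off":
 * memory-map a file and treat it as an ordinary in-memory struct. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct counter {
    long runs;   /* persists across executions of the program */
};

int main(void) {
    int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, sizeof(struct counter));      /* make sure backing space exists */

    struct counter *c = mmap(NULL, sizeof *c, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    c->runs += 1;                               /* a plain in-memory update... */
    printf("this program has run %ld times\n", c->runs);

    msync(c, sizeof *c, MS_SYNC);               /* ...made durable here */
    munmap(c, sizeof *c);
    close(fd);
    return 0;
}
[/CODE]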

I'm not interested in wider socioeconomic impacts beyond estimates of technologies arriving sooner or later than they did in real life. This is me wondering how computers, specifically, change under these scenarios going forward. Part of the reason for specifying this is so the automation doomsayers and such don't drag this thread into arguing about broad economic concerns; I get enough of that from the pessimists I subscribe to on YouTube. Feel free to change the hardware outcomes; I've only given one possible outcome for some subset of hardware changes.
 