- cross-posted to:
- memes@lemmy.world
cross-posted from: https://lemmy.world/post/43005954
Isn’t it OpenAI who bought it with Nvidia’s money, which Nvidia said they’ll think about giving them later out of the money they’ll make selling GPUs to data centers that haven’t been built, powered by infrastructure that doesn’t exist yet?
Reading this comment is a great litmus test. If you think that sounds dumb and nothing at all like a brilliant master plan, you are not part of the world’s financial elite.
I wonder what ram prices will look like shortly after the bubble pops.
If RAM is even AVAILABLE to buy, what with their attempts to replace all personal computers with terminals slaved to pay-as-you-go cloud computing “services” 😮💨😬
I’m not sure how much you follow the history of IT, but this has happened at least three times, and it has always swung back to local processing. The force that brings local computing back is compute power getting cheap. RAM and GPU costs are what’s pushing the distributed (cloud) model right now.
Probably still astronomical, because the RAM being produced is specifically designed for use in large data centers, not PCs.
This is a classic guns/butter problem. “We’re using all our industrial resources to produce guns” doesn’t mean the price of butter drops when the gun market falls through.
Server RAM is not that much different.
I was wondering about that. I mean, the sticks are different (consumers preferring faster RAM, enterprise preferring an extra chip for ECC). But at the root it’s all DRAM, which should be the same underlying silicon by and large.
But I won’t say for certain, because I’ve never really looked into RAM production in that level of detail.
This doesn’t make sense, because the reverse would also be true, and it’s not. If they were so different, an explosion in server demand wouldn’t strongly affect consumer prices.