Space roboticist here.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn in the hardware before launch you'd expect low failure rates for a long period, and it's quite likely it'd be cheaper not to replace components at all: ensure enough redundancy for the key systems (power, cooling, networking) that you can shut down and disable any dead servers, then replace the whole unit once enough parts have failed.
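To put rough numbers on that (a minimal sketch; the 2% post-burn-in annualized failure rate, 5-year life, and server count are my own assumptions, not figures from the thread):

    # How much over-provisioning does a "launch it and never service it" policy need?
    # Assumed numbers (mine): 2% annualized failure rate after burn-in,
    # 5-year design life, 1,000 working servers wanted at end of life.
    annual_failure_rate = 0.02
    years = 5
    servers_needed_at_eol = 1000

    survival = (1 - annual_failure_rate) ** years        # fraction still alive
    servers_to_launch = servers_needed_at_eol / survival
    print(f"survival after {years} years: {survival:.1%}")
    print(f"servers to launch: {servers_to_launch:.0f}")
    # ~90% survive, so roughly 10% over-provisioning covers the whole mission,
    # which is why "route around dead hardware" beats in-orbit repair here.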
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware": this is easily solvable by "treating servers as cattle and not pets", whereby one would simply over-provision and then replace faulty servers around once per year.
Side note: thanks for sharing the "bathtub curve"; TIL, and I'm surprised I hadn't heard of it before, especially since it's related to reliability engineering (searching HN (Algolia), no post about the bathtub curve has crossed 9 points).
https://accendoreliability.com/the-bath-tub-curve-explained/ is an interesting breakdown of bathtub curve dynamics for those curious!
Wonder if you could game that in theory by burning in the components on the ground before launch, or if the launch itself would cause a big enough spike in vibration damage that it's not worth it.
Maybe those are different types of failure modes. Solar panel semiconductors hate vibration. And then there is of course radiation trouble. So those two kinds of burn-in would require a launch to space anyway.
I suspect you'd absolutely want to burn in before launch, maybe even including simulating some mechanical stress to "shake out" more issues, but it is a valid question how much burn in is worth doing before and after launch.
Vibration testing is a completely standard part of space payload pre-flight testing. You would absolutely want to vibe-test (no, not that kind) at both a component level and fully integrated before launch.
Ah, the good old beta distribution. Programming and CS people somehow rarely look at it.
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
> The analysis has zero redundancy for either servers or support systems.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
Other than against physical strikes, this would get you significantly less redundancy than building the same amount of redundancy into a single unit and letting you control what feeds what, the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
Many small satellites also increase the surface area available for cooling.
even a swarm of satellites has risk factors. we treat space as if it were empty (it's in the name) but there's debris left over from previous missions. this stuff orbits at a very high velocity, so if an object greater than 10cm is projected to get within a couple kilometers of the ISS, they move the ISS out of the way. they did this in April and it happens about once a year.
the more satellites you put up there, the more it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler Syndrome.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
On-the-ground vibration testing is a standard part of pre-launch spacecraft testing. This would trigger most (not all) vibration/G-force-related failures on the ground rather than at the actual launch.
Electronics can be extremely resilient to vibration and g forces. Self guided artillery shells such as the M982 Excalibur include fairly normal electronics for GPS guidance. https://en.wikipedia.org/wiki/M982_Excalibur
The original article even addresses this directly. Plus hardware turns over fast enough that you'll simply be replacing modules with a smattering of dead servers with entirely new generations anyways.
serious q: how much extra failure rate would you expect from the physical transition to space?
on one hand, I imagine you'd rack things up so the whole rack/etc moves as one into space, OTOH there's still movement and things "shaking loose" plus the vibration, acceleration of the flight and loss of gravity...
I suspect the thermal system would look very different from a terrestrial one. Fans and connectors can shake loose - but do nothing in space.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators - non thermally conductive resins could be used.
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I think it's likely the overall rate would be higher, and you might find you need more aggressive burn-in, but even then you'd need an extremely high failure rate before it's more efficient to replace components than to write them off.
The bathtub curve isn’t the same for all components of a server though. Writing off the entire server because a single ram chip or ssd or network card failed would limit the entire server to the lifetime of the weakest part. I think you would want redundant hot spares of certain components with lower mean time between failures.
We do often write off an entire server because a single component fails because the lifetime of the shortest-lifetime components is usually long enough that even on-earth with easy access it's often not worth the cost to try to repair. In an easy-to-access data centre, the component most likely to get replaced would be hot-swappable drives or power supplies, but it's been about 2 decades since the last time I worked anywhere where anyone bothered to check for failed RAM or failed CPUs to salvage a server. And a lot of servers don't have network devices you can replace without soldering, and haven't for a long time outside of really high end networking.
And at sufficient scale, once you plan for that it means you can massively simplify the servers. The amount of waste a server case suitable for hot-swapping drives adds if you're not actually going to use the capability is massive.
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
That being said, it's a completely bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter would be in the opposite situation: all cooling is radiative, many more high-energy particles, and the weight should be as low as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than .5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
>The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
Oil, like air, doesn't convect well in 0G; you'll need pretty hefty pumps and well-designed layouts to ensure no hot spots form. Heat pipes are at least passive and don't depend on gravity.
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
That isn't going to last for much longer with the way power density projections are looking.
Consider that for quite some time now we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
It's all relative. Is it harder than getting 40MW of (stable!) power? Harder than packaging and launching the thing? Sure it's a bit of a problem, perhaps harder than other satellites if the temperature needs to be lower (assuming commodity server hardware) so the radiator system might need to be large. But large isn't the same as difficult.
Well sure. If you think fully reusable rockets won’t ever happen, then the datacenter in space thing isn’t viable. But THAT’S where the problem is, not innumerate bullcrap about size of radiators.
(And of course, the mostly reusable Falcon 9 launches far more mass to orbit than the rest of the world combined, flying about 150 times per year. No one else has managed to field a similarly highly reusable orbital rocket booster in the roughly 10 years since Falcon 9 was first recovered in 2015.)
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons' home for parted-out supercomputers)
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny, I recall when servers started getting more energy dense it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
Space is very bad for the human body, you wouldn't be able to leave the humans there waiting for something to happen like you do on earth, they'd need to be sent from earth every time.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
Underwater welder, though being replaced by drone operator, is still a trade despite the health risks. Do you think nobody on this whole planet would take a space datacenter job on a 3 month rotation?
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
The problem isn't health “risk”, there are risks but there are also health effects that will come with certainty. For instance, low gravity depletes your muscles pretty fast. Spend three months in space and you're not going to walk out of the reentry vehicle.
This effect can be somewhat overcome by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
Good points. Spin “gravity” is also quite challenging to acclimatize to because it’s not uniform like planetary gravity. Lots of nausea and unintuitive gyroscopic effects when moving. It’s definitely not a “just”
I worked in aerospace for a couple of years in the beginning of my career. While my area of expertise was the mechanical design I shared my office with the guy who did the thermal design and I learned two things:
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising because laypeople usually associate space with cold. In reality you can always heat if you have energy but cooling is hard if all you have is radiation and you are operating at a fixed and relatively low temperature level.
The bottom line is that running a datacenter in space doesn't make much sense from a thermal standpoint, so there must be other compelling reasons for a decision to do so.
The caveat is that this also depends on where you are in the satellite's lifecycle. For example, just after launch you might only have your survival heaters on, which will generally keep you within an industrial temperature range (e.g. >-40C), and you might not reach higher temps until you hit nominal operations. But a lot of the hardware temperature specs are often closer to standard "industrial" specs than to special mil or NASA specs.
Sure it is doable. My point is that at room temperature convection is a so much more efficient heat transfer mechanism that I wonder why someone would even think about doing without it.
Lay people associate space with cold because nearly every scifi movie has people freezing over in seconds when exposed to the vacuum of space (insert Picard face-palm gif).
Even The Expanse, even them! Although they are otherwise so realistic that I have to say I started doubting myself a bit. I wonder what really would happen and how fast...
People even complained that Leia did not freeze over (instead of complaining about her sudden use of the Force where previously she did not show any such talents.)
Well empty space has a temperature of roughly -270c...so that's pretty cold.
But I think what people/movies don't understand is that there's almost no conductive thermal transfer going on, because there's not much matter to do it. It's all radiation, which is why heat is a much bigger problem, because you can only radiate heat away, you can't conduct it. And whatever you use to radiate heat away can also potentially receive radiation from things like the Sun, making your craft even hotter.
> Well empty space has a temperature of roughly -270c...so that's pretty cold.
What is this “empty space” you speak of? Genuinely empty space is empty and does not have a clearly defined temperature. If you are in space in our universe, very far from everything else, then the temperature of the cosmic microwave background is what matters, and that’s a few K. If you’re in our solar system in an orbit near Earth, the radiation field is wildly far from any sort of thermal equilibrium, and the steady state temperature of a passive black body will depend strongly on whether it’s in the Earth’s shadow, and it’s a lot hotter than a few K when exposed to sunlight.
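To put numbers on "a lot hotter": a minimal sketch of the standard black-body equilibrium temperatures near 1 AU (zero albedo and the two idealized geometries are my simplifying assumptions):

    # Steady-state temperature of a passive black body in full sunlight near Earth.
    SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
    S = 1361.0          # solar flux at ~1 AU, W/m^2

    # Sphere: absorbs over its cross-section, radiates over its full surface.
    t_sphere = (S / (4 * SIGMA)) ** 0.25
    # Flat plate facing the sun, radiating from both faces.
    t_plate = (S / (2 * SIGMA)) ** 0.25
    print(f"sphere: {t_sphere:.0f} K, sun-facing plate: {t_plate:.0f} K")
    # ~278 K and ~331 K respectively -- nowhere near the few-kelvin background.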
Wouldn't a body essentially freeze-dry, being a wet thing exposed to vacuum? I.e. the temperature of space is still irrelevant and the cooling comes from vaporization.
Why do they want to put a data center in space in the first place?
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle of putting the panels in space in the first place?
Physical isolation and security?
Against manipulation maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not a particular strength of it.
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use, no thermal stress for rivers on one hand but the huge overhead of a space launch on the other.
I've talked to the founder of Starcloud about this, there is just going to be a lot of data generative stuff in space in the future, and further and further out into space. He thinks now is the right time to learn how to compute up there because people will want to process, and maybe orchestrate processing between many devices, in space. He's fully aware of all of the objections in this hn comments section, he just doesn't believe they are insurmountable and he believes interoperable compute hubs in space will be required over the next 20/30 years. He's in his mid 20s, so it seems like a reasonable mission to be on to me.
Seems far more likely that the "data generative stuff" will get smaller and cheaper to run (like cell phones with on-device models) much faster than "run a giant supercomputer in orbit" will become easy.
My headlights aren't good enough so I'm unsure but generally that maps. To me the interoperability part is what is interesting, your data and my data in real time being consumed by some understanding agent doing automated research? I could imagine putting something like a Stoffel MPC layer in there, then nations states can more easily work together? I presume space data/research will be highly competitive, even friendly nations may want to combine data without knowing the underneath. We're so far out here that it's kinda silly, but I don't think we're out to lunch? Have a great weekend Chris! :)
My initial thought was: ambiguous regulatory environment.
Not being physically located in the US, the EU, or any other sovereign territory, they could plausibly claim exemption from pretty much any national regulations.
Space is terrible for that. There's only a handful of countries with launch vehicles and/or launch sites. You obviously need to be in their good graces for the launch to be approved.
If you want permissive regulatory environment, just spend the money buying a Mercedes for some politician in a corrupt country, you'll get a lot further...
Which is a good analogy; international waters are far from lawless.
You're still subject to the law of your flag state, just as if you were on their territory. In addition to that, you're subject to everyone's jurisdiction if you commit certain crimes - including piracy. https://en.wikipedia.org/wiki/Universal_jurisdiction
Speed of light is actually quite an advantage, in theory at least. Speed of light in optical fiber is quite a bit slower (takes 50% longer) than in vacuum.
Not really. Fiber is more like 2/3 of free-space propagation, and that puts the break-even point of a direct fiber connection vs LEO up- and downlink at a geodesic distance of about 12000 km. So, for most data centers you want to reach, fiber is the better option.
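A small sketch of where a break-even like that comes from (the fiber index and the "extra path" framing are my assumptions; the ~12000 km figure is the parent's):

    # Fiber carries light at ~c/n; a LEO route covers roughly the geodesic
    # plus some extra path (up/down links, constellation routing) at c.
    # Break-even where n * d = d + extra  =>  d = extra / (n - 1).
    n_fiber = 1.47   # typical fiber refractive index, i.e. ~2/3 of c

    def breakeven_km(extra_path_km: float) -> float:
        return extra_path_km / (n_fiber - 1)

    # ~5600 km of assumed extra path implies the parent's ~12000 km break-even:
    print(f"{breakeven_km(5600):.0f} km")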
There was a video from Scott Manley about this several years back. And he was very skeptical that it's even feasible for SpinLaunch to place something useful into orbit. And they haven't yet.
2. People will pay big bucks to keep their data all the way up there!
3. Profit!
It could make sense if the entire DC was designed as a completely modular system. Think ISS without the humans. Every module needs to have a guaranteed lifetime, and then needs to be safely yet destructively deorbited after its replacement (shiny new module) docks and mirrors the data.
> 2. People will pay big bucks to keep their data all the way up there!
Just to make me understand the business plan better: why would people or companies be willing to pay much more to have their data stored (or their computations done) in space?
The only reason that I can imagine is that the satellite which contains the data center also has a lot of sensors mounted (think military spying devices), and either for security, capacity or latency reasons you prefer the sensor data to be processed in space instead of transferring it down to earth, process it there, and sending the results back to space.
In other words: the business model is getting big money defense contracts (somewhat ignoring whether the idea really makes military sense or not).
Except space hasn't been "more secure" for nation states against other nation states in decades. US, Russia, and China all have various capabilities to destroy or steal or manipulate or tamper with satellites. It mostly doesn't happen right now because nobody is at full blown war. Shooting down satellites was expected to be a part of any superpower war since the 80s. Those weapons will be plenty effective against even massive installations in space.
Meanwhile, space gets you zero protection from the infosec threats that plague national security installations.
If you are out of the magnetosphere, wouldn't your data be subject to way more cosmic ray interference, to the point that it's actually a consideration?
In LEO, there is a lot of testing and mitigation you can do with your design to help reduce the chance and impact of radiation single events. For example, redundancy for key components, ECC for RAM, supervisor hardware, RAID or other storage tooling, etc.
Geostationary orbiters operate there during the day but this concept would position systems 100x closer to earth and well inside the protective envelope.
I can't comment on AMD or Intel but Apple Silicon definitely uses ECC for at least the system level cache. On top of that it performs cache healing (swapping out bad lines for spares) on every cache level every time the system boots.
Yes. And most server hardware already has at least ECC RAM. You may still want some light radiation shielding to prevent the worst, maybe some heavier shielding for solar flares. But beyond that, simple error correction can be baked into the software - ECC the bootloader and filesystem and you are mostly good to go.
I wonder if there could be some way to photolithograph compute circuits directly onto a radiator substrate, and accomplish a fully passive thermal solution that way. Consider the heat-conduction problem: from dimensional analysis, the required thickness of a (conduction-only) radiator plate with a regular grid of heat sources on it shrinks superlinearly as you subdivide those heat sources (from a few large sources into many small ones). At fixed areal power density, if the unit heat source is Q, the plate thickness d ∝ Q^{3/2}. (This is intuitive: the asymptotic limit is a uniform, continuous heat source exactly matched to a uniform radiation heat sink; hence lateral heat conduction is zero.) So: could one contemplate an array of very tiny CPU sub-units, gridded evenly over a thin Al foil—say at the milliwatt scale with millimeter-scale separation? It'd be mostly empty space (radiator area) and interconnect. It'd be thermally self-sufficient and weigh practically nothing.
Look at the thermal shield design for the JWST. Could you have a data center that unfolds into a multi-layered plane where the outer solar collector layer faces the sun, an intermediate layer shields infrared emissions from the back side of that, and the final layer that always faces away from the sun holds (or is) a bunch of chips? Park it in an orbit where it can stay oriented this way or an L point. Free compute for the life span of the chips powered by the sun.
A lot of these ideas add up to a supercomputing Dyson swarm.
Also do chips in space need casing or could the wafers be just exposed on that back layer?
I understand that the multi-layer insulation idea doesn't accomplish much unless you're trying to reach deep cryogenic temperatures passively (as infrared telescopes do!) It's a difficult structural design which would only cut your heat budget by a small constant factor. Remember that much of that heat on the solar side is making its way over to the cold side by way of electricity—the compute units are a heat "source" of similar magnitude as the solar input itself.
edit: I think the optimal packing could be a simple rolled-up scroll, that unfurls in space into a ribbon. A very lazy design where the ribbon has no orientation control, randomly furls and knots, and only half of it is (randomly) facing the sun at any given time. And the compute units are designed to work under those conditions—as they are to be robust against peers randomly disappearing to micrometeorites, to space radiation, and so forth.
Because you could make up for everything in quantity. A small 3x5 meter cylinder of rolled-up foil stores, at the mm-thickness scale, 10s of gigawatts of compute; at the micron scale, 10s of terawatts. Of course that end is far-future sci-fi stuff!
> Also do chips in space need casing or could the wafers be just exposed on that back layer?
Even in LEO they benefit tremendously from radiation shielding, even a couple millimeters of aluminum greatly reduces the total ionizing dose. Also LEO has the issue of monatomic oxygen in the thermosphere which tends to react aggressively with the surface of anything it touches. An aluminium spacecraft structure isn't really affected, but I don't think it'd be very good for a semiconductor wafer.
Yes cooling is difficult. Half the "solar panels" on the ISS aren't solar panels but heat radiation panels. That's the only way you can get rid of it and it's very inefficient so you need a huge surface.
This isn’t true. The radiators on ISS are MUCH smaller than the solar panels. I know it’s every single armchair engineer’s idea that heat rejection is this impossible problem in space, but your own example of ISS proves this is untrue. Radiators are no more of a problem than solar panels.
The radiators are significantly smaller than the PV arrays, but not by a massive ratio; looks like about 1:3.6 based on the published area numbers that I could find.
It looks like the ISS active cooling system has a maximum cooling capacity that could handle the equivalent of a single-digit number of racks (down to 1 for an AI-focused rack).
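A quick sanity check on that, using the ~70 kW heat-rejection capacity commonly cited for the ISS external cooling loops, the ~120 kW NVL72 figure quoted later in the thread, and an assumed ~12 kW for a conventional air-cooled rack (treat all of these as approximate):

    # Roughly how many racks could ISS-class external cooling handle?
    # All figures approximate; the conventional-rack number is my assumption.
    iss_heat_rejection_kw = 70.0
    conventional_rack_kw = 12.0     # dense air-cooled rack (assumed)
    nvl72_rack_kw = 120.0           # GB200 NVL72, per the figure cited below

    print(f"conventional racks: {iss_heat_rejection_kw / conventional_rack_kw:.1f}")
    print(f"GB200 NVL72 racks:  {iss_heat_rejection_kw / nvl72_rack_kw:.2f}")
    # Roughly half a dozen ordinary racks, and under one AI-focused rack.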
The heat load of the ISS is a handful of astronauts and some equipment and whatever it absorbs from the sun. Not an entire data center or a nuclear rocket which is where the radiator discussion comes into play.
The heat load is equal to the load from the solar panels, to first order. So actually yeah, you CAN compare the size of solar panels to the size of the radiators.
seems oddly paradoxical. ISS interior at some roughly livable temperature. Exterior is ... freakin' space! The temperature gradient seems as if it should take care of it ...
... and then you realize that because it is space, there's almost nothing out there to absorb the heat ...
There's nothing paradoxical about it. There's no such thing as a temperature gradient in a vacuum, there's nothing to hold or measure temperature against. And thus a vacuum is a really good insulator. Which is why a vacuum flask, which ultimately became one of Thermos' most well known products, is used to control temperature both in and outside the lab.
Except a thermos has a really low emissivity, otherwise (if it had high emissivity), it’d be a poor insulator due to thermal radiation, the same reason why ISS’s radiators are much smaller than its solar panels.
I’d settle for at least a high school physics education. This idea seemed insane when I first heard about it a few weeks back. This analysis just makes it that much more crazy.
If YC is hell bent on lighting piles of money on fire, I can think of some more enjoyable ways.
Radiation is not actually a problem unless you're trying to do super high power nuclear electric propulsion (i.e. in your videogame). Classic armchair engineer mistake, tbh.
Radiators work great in space. Stefan-Boltzmann's law. The ISS's radiators are MUCH smaller than its solar panels. Considering datacenters on Earth have to have massive heat exchangers as well, I really think the bUt wHaT aBoUt rAdiAtOrs is an overblown gotcha, given that every satellite still has to dump heat and works just fine.
The problem is not that radiators don't work. The problem is the need for liquid cooling. The heat produced per area in the GPU/CPU is much bigger than the cooling capacity per area of your radiator.
Even here on earth, contemporary GPU racks for AI have had to move to liquid cooling because it is the only way to extract enough heat. At 120 kW for 18x 1U servers (GB200 NVL72), the power density is waaay beyond what you can do with air even.
The last time Starcloud was doing the rounds on HN, I estimated that they need to be pumping water at a flow rate of 60 000 liters per second, if you use the numbers in their whitepaper. That's a tenth of the Sacramento river, flowing in space through a network with a million junctions and hoping nothing leaks.
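For reference, a number of that order falls out of a one-line heat-balance estimate (the ~5 GW load and ~20 K coolant temperature rise are my assumptions about a whitepaper-scale system, not figures from this thread):

    # Coolant mass flow needed to carry a given heat load: Q = m_dot * c_p * dT.
    # Assumed numbers (mine): 5 GW of heat, water coolant, 20 K temperature rise.
    heat_load_w = 5e9
    cp_water = 4186.0    # J/(kg*K)
    delta_t_k = 20.0

    mass_flow_kg_s = heat_load_w / (cp_water * delta_t_k)
    print(f"{mass_flow_kg_s:,.0f} kg/s (about the same in L/s for water)")
    # ~60,000 kg/s, i.e. the order of magnitude quoted above.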
A great interactive example of this is the game Oxygen Not Included. By the late game, your biggest problem is your base getting too hot from the waste heat of all your industry.
If you read the Starcloud whitepaper[1], it claims that massive batteries aren't needed because the satellites would be placed in a dawn-dusk sun-synchronous orbit. Except for occasional lunar eclipses, the solar panels would be in constant sunlight.
The whitepaper also says that they're targeting use cases that don't require low latency or high availability. In short: AI model training and other big offline tasks.
For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
All satellites launched from the US are required to have a decommissioning plan and a debris assessment report. In other words: the government must be satisfied that they won't create orbital debris or create a hazard on the ground. Since these satellites would be very large, they'll almost certainly need thrusters that allow them to avoid potential collisions and deorbit in a controlled manner.
Whether or not their business is viable depends on the future cost of launches and the future cost of batteries. If batteries get really cheap, it will be economically feasible to have an off-the-grid datacenter on the ground. There's not much point in launching a datacenter into space if you can power it on the ground 24/7 with solar + batteries. If cost to orbit per kg plummets and the price of batteries remains high, they'll have a chance. If not, they're sunk.
I think they'll most likely fail, but their business could be very lucrative if they succeed. I wouldn't invest, but I can see why some people would.
> For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
This is hiding so, so much complexity behind a simple hand wavy “modular”. I have trained large models on thousands of GPUs, hardware failure happen all the time. Last example in date: an infiniband interface flapping which ultimately had to be physically replaced.
What do you do if your DC is in space? Do you just jettison the entire multi-million-dollar DGX pod that contains the faulty $300 interface before sending a new one? Do you have an army of astronauts + Dragons to do this manually? Do we hope we'll have achieved superintelligence by then and have robots that can do this for us?
Waving the “Modular” magic key word doesn’t really cut it for me.
> Whether or not their business is viable depends on the future cost of launches and the future cost of batteries. If batteries get really cheap, it will be economically feasible to have an off-the-grid datacenter on the ground. There's not much point in launching a datacenter into space if you can power it on the ground 24/7 with solar + batteries.
Something tells me that the price of batteries is already cheap enough for terrestrial data centers to make more economic sense than launching a datacenter - which will also need batteries - into space.
International space law (starting with the Outer Space Treaty of 1967) says that nations are responsible for all spacecraft they launch, no matter whether the government or a non-governmental group launches them. So a server farm launched by a Danish company is governed by Danish law just the same as if they were on the ground- and exposed to the same ability to put someone into jail if they don't comply with a legal warrant etc.
This is true even if your company moves the actual launching to, say, a platform in international waters- you (either a corporation or an individual) are still regulated by your home country, and that country is responsible for your actions and has full enforcement rights over you. There is no area beyond legal control, space is not a magic "free from the government" area.
They don't need to do that if they go after your ground station operators.
To escape the law you need to hide or protect something on earth (your ground station(s), downlinks). If you can hide or protect that infrastructure on earth, why bother putting the computers in space?
I'm not sure how you maintain hidden ground stations while providing a commercial service that justifies many $MM in capital and requires state support to get launch permission.
> I'm not sure how you maintain hidden ground stations while providing a commercial service that justifies many $MM in capital and requires state support to get launch permission.
Who said that Starcloud's business model is about commercial services? At https://news.ycombinator.com/item?id=44397026 I rather speculate that Starcloud's business model is about getting big money defense contracts.
What if you’re a stateless person? (Not an easy status to acquire these days, but any US citizen can just renounce their citizenship without getting a new one, for example.)
> nations are responsible for all spacecraft they launch, no matter whether the government or a non-governmental group launches them.
Nations come and go. In my lifetime, the world map has changed dozens of times. Incorporate in a country that doesn't look like it's going to be around very long. More than likely, the people running it will be happy to take your money.
That is not how international law works, you don't get to say "we are a new country and therefore not bound by treaties that earlier forms did."
This principle was established when Nazis were convicted of war crimes at Nuremberg for violating treaties that their predecessor state, the Weimar Republic, had signed, even after the Nazis repudiated those treaties and claimed they were signed by an illegitimate state, and that they were a new Reich, not like the Weimar Republic.
Basically if territory changes hand to an existing state that state will obviously still have obligations, and if a new state is formed, then generally it is assumed to still carry the obligations of the previous state. There is no "one weird trick" to avoid international law. I assure you that the diplomats and lawyers 80 years ago thought of these possibilities. They saw what resulted from the Soviet and Nazi mutual POW slaughters, and set up international law so no one could ignore it.
Those kinds of countries don't tend to be the kinds of countries with active space programs.
And more critically - they have successor states.
The Russian Federation is treated as the successor to the USSR in most cases (much to the chagrin of the rest of the CIS) and Serbia is treated as the successor to Yugoslavia (much to the chagrin of the rest)
:-) I appreciate your snark and the ad campaign reference.
But if international waters aren't enough (and they're much cheaper), then I don't think space will be either. Man's imagination for legal control knows no bounds.
You wait (maybe not, it's a long wait...), if humankind ever does get out to the stars, the legal claims of the major nations on the universe will have preceded them.
The 'Principality of Sealand', anywhere else on the high seas or Antarctica have their issues with practicality too, but considerably less likelihood of background radiation flipping bits...
Unless the company blasts its HQ and all its employees into space, no, they are very much subject to the jurisdiction of the countries they operate in. The physical location of the data center is irrelevant.
I know there's the fantasy of orbital CSAM storage able to beam obscenity to any point on the ground with zero accountability, but that is not going to survive real world politics.
[Mild spoilers for _Critical Mass_ by Daniel Suarez below]
> Servers outside any legal jurisdiction
Others have weighed in on the accuracy of this, with a couple pointing out that the people are still on the ground. There's a thread in _Critical Mass_ by Daniel Suarez that winds up dealing with this issue in a complex set of overlapping ways.
Pretty good stuff, I don't think the book will be as good as the prior book in the series. (I'm only about halfway through.)
>Shooting down a satellite is a major step that creates a mess of space junk, angering everybody.
unless everybody is angry at the satellite, in which case it is a price everybody is even eager to pay.
>Plus you can just have a couple of politicians from each major power park their money on that satellite.
I've long had the idea that there are fashions in corruption and a point at which to be corrupt just becomes too gauche and most politicians go back to being honest.
This explains the highly variant history of extreme corruption in democracies.
At any rate while the idea that the cure for any government interference is to be sufficiently corrupt sounds foolproof in theory I'm not sure it actually works out.
If I was a major politician and you had my competitors park their money on your satellite, it would become interesting for me to get rid of it. Indeed, if you had me and my competitors on the satellite, I might start thinking about how to conceal getting my money out of there and then wait for the best moment to ram through a measure to blow up the satellite.
I'm sorry but what logic is it you're referring to here? Is it the idea that there are fashions in corruption? If so by that logic we are probably in an era of high corruption.
Is it the idea that if I were a corrupt politician and I had equally corrupt enemies, I would use my knowledge of their corruption to dirty-trick them? Because ... dirty-tricking them and getting them to lose all their finances at one time is not quite the same as passing a law making it difficult for everyone to get more financing from here on out.
I'm not following exactly what logic of mine you think you've defeated with observing that there are a lot of corrupt politicians nowadays?
The best argument I've heard for data-centres-in-space startups is that it's an excuse to do engineering work on components other space companies might want to buy (radiators, shielding, rad-hardened chips, data transfer, space batteries) which are too unsexy to attract the same level of FOMO investment...
Yes, and also just because a space data center isn’t useful today doesn’t mean it won’t be required tomorrow. When all the computing is between the ground and some nearby satellites, of course the tradeoffs won’t be worth it.
But what about when we’re making multi-year journeys to Mars and we need a relay network of “space data centers” talking to each other, caching content, etc?
We may as well get ahead of the problems we’ll face and solve them in a low-stakes environment now, rather than waiting to discover some novel failure scenario when we’re nearing Mars…
> what about when we’re making multi-year journeys to Mars and we need a relay network of “space data centers” talking to each other, caching content, etc?
How would this work? Planets orbit at different speeds, so you can't build a chain of relays to another planet. I can imagine these things orbiting planets, but is that worth it compared to ground-based systems?
We'd build it then? The problems of a space data center are extremely generic and only worth solving when you actually need one. Which would never be in low earth orbit.
You need fewer batteries in orbit than on the ground, since you're only in shade for at most like 40 minutes. And it's all far more predictable.
Cooling isn't actually any more difficult than on Earth. You use large radiators and radiate to deep space. The radiators are much smaller than the solar arrays. "Oh but thermos bottles--" thermos bottles use a very low emissivity coating. Space radiators use a high emissivity coating. Literally every satellite manages to deal with heat rejection just fine, and with radiators (if needed) much smaller than the solar arrays.
Latency is potentially an issue if in a high orbit, but in LEO can be very small.
Equipment upgrades and maintenance is impossible? Literally, what is ISS, where this is done all the time?
Radiation shielding isn't free, but it's not necessarily that expensive either.
Orbital maintenance is not a serious problem with low-cost launch.
The upside is effectively unlimited energy. No other place can give you terawatts of power. At that scale, this can be cheaper than terrestrially.
> The radiators are much smaller than the solar arrays.
Modern solar panels are way more efficient than the ancient ones in ISS, at least 10x. The cooling radiators are smaller than solar panels because they are stacked and therefore effectively 5x efficient.
Unless there are at least 2x performance improvements on the cooling system, the cooling system would have to be larger than solar panels in a modern deployment.
This is false. It’s pretty straightforward to prove using Stefan-Boltzmann. Radiating from both sides at 300K, a square radiator that’s 1 meter on a side emits 920W.
Additionally, you wouldn’t use cutting edge 35% triple junction cells for a space datacenter, you’d use silicon cells like Starlink and ISS use. 22% efficient with 90% full factor, given 1350W/m^2 and thus 270W/m^2, to provide enough power for that radiator you’d need a solar panel 3.4 times as big, and that’s if you were in 24/7 sunshine. If you’re in a low orbit that’s obscured almost half the time, it’s 6-7 times as big.
Why do people keep making these obviously wrong claims when a paragraph of arithmetic shows they’re wrong? Do the math.
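For anyone who wants to check it, here's the arithmetic from the figures above in runnable form (300 K radiator emitting from both faces with emissivity taken as 1, 22% cells at 90% fill factor, 1350 W/m^2):

    # Radiator vs. solar panel sizing per square meter, using the figures above.
    SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W/m^2/K^4
    radiated = 2 * SIGMA * 300.0 ** 4     # both faces, emissivity ~1
    panel = 1350.0 * 0.22 * 0.90          # electrical output per m^2 of panel

    print(f"radiator: {radiated:.0f} W/m^2, panel: {panel:.0f} W/m^2")
    print(f"panel area per unit radiator area: {radiated / panel:.1f}x")
    # ~920 W/m^2 vs ~270 W/m^2: the panel is ~3.4x the radiator in full sun,
    # and roughly 6-7x if the orbit is in eclipse almost half the time.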
We’re probably thinking of it the wrong way. Instead of a single datacenter it’s more likely we build constellations and then change the way we write software.
There will probably be a lot more edge computing in the future. 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal. Space infrastructure will probably have some parallels.
That sounds like the Guoxing Aerospace / ADA Space “Three-Body Computing Constellation”, currently at 12 satellites (out of a planned 2,800).
The Chinese project involves a larger number of less powerful inference-only nodes for edge computing, compared to Starcloud's training-capable hyperscale data centers.
> 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal.
Are there many startups actually taking real advantage of edge computing? Smaller B2B places don't really need it, larger ones can just spin up per-region clusters.... and then for 2C stuff you're mainly looking at static asset stuff which is just CDNs?
Who's out there using edge computing to good effect?
Re: reliable energy. Even in low earth orbit, isn't sunlight plentiful? My layman's guess says it's in direct sun 80-95% of the time, with deterministic shade.
It's super reliable, provided you've got the stored energy for the reliable periods of downtime (or a sun synchronous orbit). Energy storage is a solved problem, but you need rather a lot of it for a datacentre and that's all mass which is very expensive to launch and to replace at the end of its usable lifetime. Same goes for most of the other problems brought up
Exactly this. It's not that it's a difficult problem, but it is a high mass-budget problem. Which makes it an expensive problem. Which makes it a difficult problem.
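As a rough illustration of that mass budget, a sketch with my own assumed numbers (a 40 MW load, ~35 minutes of eclipse per orbit, ~250 Wh/kg packs, 80% usable depth of discharge; none of these come from the thread):

    # Battery mass to ride through eclipse in a non-sun-synchronous LEO.
    load_w = 40e6            # assumed IT load
    eclipse_h = 35 / 60      # assumed eclipse duration per orbit, hours
    pack_wh_per_kg = 250.0   # assumed pack-level energy density
    usable_fraction = 0.8    # assumed usable depth of discharge

    energy_wh = load_w * eclipse_h / usable_fraction
    mass_kg = energy_wh / pack_wh_per_kg
    print(f"{energy_wh / 1e6:.0f} MWh of storage, ~{mass_kg / 1000:.0f} tonnes of batteries")
    # ~29 MWh and ~117 tonnes, cycled ~16 times a day -- which is why the
    # dawn-dusk sun-synchronous orbit mentioned elsewhere is so attractive.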
That would make communicating with bits on Earth kind of painful though; I suppose that would work for a server that serves other sun-synchronous objects, but that seems like a rather small market.
If starcloud integrated with something like starlink, using the laser inter satellite links to distribute ground comms across a network of satellites, then the datacenter maintaining a direct link to a base station is probably a non-issue for most purposes.
You're making lots of assumptions. They can put up like 1000 Raspberry Pis, which don't need all that much cooling and have relatively small energy requirements.
For your other concerns, the risks are worth it for customers because of the main reward: No laws or governments in space! Technically, the datacenter company could be found liable but not for traffic, only for take-down refusals. Physical security is the most important security. For a lot of potential clients, simply making sure human access to the device is difficult is worth data-loss,latency and reliability issues.
I wonder what the implication on data protection / privacy laws and the like would be. Would it be as simple as there'd be no laws, or is the location of the users still relevant?
> Article VI of the Outer Space Treaty deals with international responsibility, stating that "the activities of non-governmental entities in outer space, including the Moon and other celestial bodies, shall require authorization and continuing supervision by the appropriate State Party to the Treaty" and that States Party shall bear international responsibility for national space activities whether carried out by governmental or non-governmental entities.
Yes, it was ONLY 1,000 out of 300,000. But that is only hard drives, not other hardware failures/replacements. But it goes to show that things do fail. And the cost of replacement in space is drastically higher. The idea of a DC in space as it stands is a nothing burger.
The point is that past burn-in, the failure rates are low enough for years that they're a rounding error and you can plan for just letting the failed equipment sit there.
Allowing the failed equipment to sit there can in fact cut costs because it allows you to design the space without consideration of humans needing to be able to access and insert/remove servers.
The higher the cost of bringing someone in to do maintenance, the more likely it is you will just design for redundancy of the core systems (cooling, power, networking), and accept failures and just disable failed equipment.
There is 0 reason to put a data center in space. For every single reason beyond "investor vibes" you can accomplish the same thing on earth for a significantly lower cost.
This site is unusable on my mobile android phone, even tried multiple browsers. The body text extends beyond the window and I can't scroll or zoom to fit.
Let's start by acknowledging that there is no Starship and it's likely that the current iteration of that system is not viable. It will need to be redesigned, and no one even knows if it's possible not to mention economically feasible.
My napkin math is with Starcloud (https://news.ycombinator.com/item?id=43190778), i.e. one $10M Starship launch putting a 10,000-GPU datacenter into LEO with energy and cooling. I missed the batteries needed for the half of the time spent in Earth's shadow (originally I calculated this for crypto, where you can be off half the time, which isn't the case for a regular datacenter) and the panels to charge them; that adds 10 kg per 1 kWh, and so it comes down to about 5,000 GPUs for the same weight and launch cost.
Paradoxically, the datacenter in LEO is cheaper than on the ground, and it has a bunch of other benefits, for example physical security.
If you read Starcloud's whitepaper[1], they mention using a dawn-dusk sun-synchronous orbit. This would keep the solar panels in sunlight except for occasional lunar eclipses (which would basically be scheduled downtime, since their plan is to use these data centers for AI training).
The launch costs in the article look quite off from the outset.
A Falcon Heavy launch is already under $100M, and in the $1400/kg range; Starship’s main purpose is to massively reduce launch costs, so $1000/kg is not optimistic at all and would be a failure. Their current target is $250/kg eventually once full reusability is in place.
Still far from the dream of $30/kg but not that far.
The original “white paper” [1] also does acknowledge that a separate launch is needed for the solar panels and radiators at a 1:1 ratio to the server launches, which is ignored here. I think the author leaned in a bit too much on their deep research AI assistant output.
Space roboticist here.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't including any mass, energy, or thermal system numbers for the infrastructure you would need to have to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system toto replace itself. In other words, it would need to be at least two fault tolerant -- which usually means dual wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternately, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn-in the hardware before launch, you'd expect low failure rates for a long period and it's quite likely it'd be cheaper to not replace components and just ensure enough redundancy for key systems (power, cooling, networking) that you could just shut down and disable any dead servers, and then replace the whole unit when enough parts have failed.
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware", this is easily solvable by actually "treating servers as cattle and not pets" whereby one would simply over-provision servers and then simply replace faulty servers around once per year.
Side: Thanks for sharing about the "bathtub curve", as TIL and I'm surprised I haven't heard of this before especially as it's related to reliability engineering (as from searching on HN (Algolia) that no HN post about the bathtub curve crossed 9 points).
https://accendoreliability.com/the-bath-tub-curve-explained/ is an interesting breakdown of bath tub curve dynamics for those curious!
Wonder if you could game that in theory by burning in the components on the surface before launch or if the launch would cause a big enough spike from the vibration damage that it's not worth it.
Maybe they are different types of failure modes. Solar panel semiconductors hate vibration.
And then, there is of course radiation trouble.
So those two kinds of burn-in require a launch ti space anyway.
I suspect you'd absolutely want to burn in before launch, maybe even including simulating some mechanical stress to "shake out" more issues, but it is a valid question how much burn in is worth doing before and after launch.
Vibration testing is a completely standard part of space payload pre-flight testing. You would absolutely want to vibe-test (no, not that kind) at both a component level and fully integrated before launch.
Ah, the good old BETA distribution.
Programming and CS people somehow rarely look at that.
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
> The analysis has zero redundancy for either servers or support systems.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
Other than against physical strikes, this would get you significantly less redundancy than building the same redundancy into a single unit and controlling what feeds what, the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
Many small satellites also increase the surface area available for cooling.
Even a swarm of satellites has risk factors. We treat space as if it were empty (it's in the name), but there's debris left over from previous missions. This stuff orbits at very high velocity, so if an object greater than 10 cm is projected to get within a couple of kilometers of the ISS, they move the ISS out of the way. They did this in April, and it happens about once a year.
The more satellites you put up there, the more often it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler syndrome.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
On the ground vibration testing is a standard part of pre-launch spacecraft testing. This would trigger most (not all) vibration/G-force related failures on the ground rather than at the actual launch.
Electronics can be extremely resilient to vibration and g forces. Self guided artillery shells such as the M982 Excalibur include fairly normal electronics for GPS guidance. https://en.wikipedia.org/wiki/M982_Excalibur
The original article even addresses this directly. Plus, hardware turns over fast enough that you'll simply be replacing modules containing a smattering of dead servers with entirely new generations anyway.
Really? Even radiation hardened hardware? Aren’t there way higher size floors on the transistors?
serious q: how much extra failure rate would you expect from the physical transition to space?
On one hand, I imagine you'd rack things up so the whole rack etc. moves as one into space; OTOH there's still movement and things "shaking loose", plus the vibration and acceleration of the flight and the loss of gravity...
Yes, an orbital launch probably resets the bathtub to some degree.
I suspect the thermal system would look very different from a terrestrial component. Fans and connectors can shake loose - but do nothing in space.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators - non thermally conductive resins could be used.
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I think it's likely the overall rate would be higher, and you might find you need more aggressive burn-in, but even then you'd need an extremely high failure rate before it's more efficient to replace components than writing them off.
The bathtub curve isn’t the same for all components of a server though. Writing off the entire server because a single ram chip or ssd or network card failed would limit the entire server to the lifetime of the weakest part. I think you would want redundant hot spares of certain components with lower mean time between failures.
We do often write off an entire server because a single component fails, because the lifetime of the shortest-lifetime components is usually long enough that even on Earth, with easy access, it's often not worth the cost to try to repair. In an easy-to-access data centre, the components most likely to get replaced would be hot-swappable drives or power supplies, but it's been about 2 decades since the last time I worked anywhere where anyone bothered to check for failed RAM or failed CPUs to salvage a server. And a lot of servers don't have network devices you can replace without soldering, and haven't for a long time outside of really high-end networking.
And at sufficient scale, once you plan for that, it means you can massively simplify the servers. The amount of waste a server case suitable for hot-swapping drives adds, if you're not actually going to use the capability, is massive.
A new meaning to the term "space junk"
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
That being said, it's a complete bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter would be in the opposite situation: all cooling is radiative, many more high-energy particles, and the whole thing should be as light as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
(Also, much easier to cool.)
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
Power is solar and cooling is radiators. They did the math on it; it's feasible and mostly an engineering problem now.
Power!? Isn't that just PV and batteries? LEO has like a 1.5 h orbit.
It's a Datacenter... I guess solar is what they're planning to use, but the array will be so large it'll have its own gravity well
All mass has gravity
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than 0.5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
Did Microsoft do any of that with their submersible tests?
My feeling is that, a bit like starlink, you would just deprecate failed hardware, rather than bother with all the moving parts to replace faulty ram.
Does mean your comms and OOB tools need to be better than the average american colo provider but I would hope that would be a given.
>The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
Oil, like air, doesn't convect well in 0G; you'll need pretty hefty pumps and well designed layouts to ensure no hot spots form. Heat pipes are at least passive and don't depend on gravity.
Mineral oil density is around 900kg / cubic meter.
Not sure this is such a great idea.
Does using oil solve the mass problem? Liquids aren't light.
I would wager that it's lighter than:
Repair robots
Enough air between servers to allow robots to access and replace componentry.
Spare componentry.
An eject/return system.
Heatpipes from every server to the radiators.
I would wager it isn't.
First, oil is much heavier than air.
Second: you still need radiators to dissipate the heat that ends up in the oil.
Why does it need to be robots?
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
Life support can be on the shuttle/transport. Or it can be its own hab… space office ? Space workshop ?
What about food, water and air filtration needs?
Presumably those needs are handled on the habitat where the orbital maintenance team lives when they aren’t visiting satellite data centers.
Treat each maintenance trip like an EVA (extra vehicular activity) and bring your life support with you.
That's life support.
That isn't going to last for much longer with the way power density projections are looking.
Consider that for quite some time now we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C.
You mean like every single kitchen?
You might be thinking of 100F, a toasty summer day. 100C on the other hand (about 212F) is fatal even in zero humidity.
Yeah, just attach a Haven module to the data center.
Bingo.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
It is really hard, and it is something you need to take into careful consideration when designing a satellite.
It is really fucking hard when you have 40MW of heat being generated that you somehow have to get rid of.
It's all relative. Is it harder than getting 40MW of (stable!) power? Harder than packaging and launching the thing? Sure it's a bit of a problem, perhaps harder than other satellites if the temperature needs to be lower (assuming commodity server hardware) so the radiator system might need to be large. But large isn't the same as difficult.
Musk is already in the testing phase for this. His starship rockets should be reusable as soon as 2018!
Well sure. If you think fully reusable rockets won’t ever happen, then the datacenter in space thing isn’t viable. But THAT’S where the problem is, not innumerate bullcrap about size of radiators.
(And of course, the mostly reusable Falcon 9 is launching far more mass to orbit than the rest of the world combined, launching about 150 times per year. No one yet has managed to field a similarly highly reusable orbital rocket booster since Falcon 9 was first recovered about 10 years ago in 2015).
And in the meantime, he has responsibly redistributed and recycled their mass. Avoiding any concern that Earth's mass could be negatively impacted.
How will he overtake all the other reusable rockets at this rate?
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons' home for parted-out supercomputers).
Spreading heavy metals in the upper atmosphere. Fun.
seems to be an industry standard
What, why would you fly out and replace it? It'd be much cheaper just to launch more.
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny, I recall when servers started getting more energy dense it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
It sounds like building it on the moon would be better.
I think what you actually do is let it gradually degrade over time and then launch a new one.
Seems prudent to achieve fully robotic datacenters on earth before doing it in space. I know, I’m a real wet blanket.
If mass is going to be as cheap as is needed for this to work anyway, there's no reason you can't just use people like in a normal datacenter.
Space is very bad for the human body, you wouldn't be able to leave the humans there waiting for something to happen like you do on earth, they'd need to be sent from earth every time.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
Underwater welder, though being replaced by drone operator, is still a trade despite the health risks. Do you think nobody on this whole planet would take a space datacenter job on a 3 month rotation?
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
The problem isn't health “risk”; there are risks, but there are also health effects that will come with certainty. For instance, low gravity depletes your muscles pretty fast. Spend three months in space and you're not going to walk out of the reentry vehicle.
This effect can be somewhat overcome by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
Then just provide spin gravity for the crew habitat.
“just”
It's theoretically possible for sure, but we've never done that in practice and it's far from trivial.
Good points. Spin “gravity” is also quite challenging to acclimatize to because it’s not uniform like planetary gravity. Lots of nausea and unintuitive gyroscopic effects when moving. It’s definitely not a “just”
The economics don't work the same on earth.
What makes the economics better in space?
Are there any unique use-cases waiting to be unleashed?
Regular maintenance methods are cheap on earth and infeasible in space.
Keep in mind economics is all about allocation of scarce resources with alternative uses.
I worked in aerospace for a couple of years in the beginning of my career. While my area of expertise was the mechanical design I shared my office with the guy who did the thermal design and I learned two things:
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising because laypeople usually associate space with cold. In reality you can always heat if you have energy but cooling is hard if all you have is radiation and you are operating at a fixed and relatively low temperature level.
The bottom line is that running a datacenter in space makes not much sense from a thermal standpoint and there must be other compelling reasons for a decision to do so.
The caveat is that this also depends on where in the lifecycle of the satellite you are. For example, right after launch you might just have your survival heaters on, which will keep you generally within an industrial range (e.g. >-40C), and you might not reach higher temps until you hit nominal operations. But a lot of the hardware temperature specs are often closer to standard "industrial" specs rather than special mil or NASA specs.
It's like a thermos flask where the spaceship is the contents and space is the insulating vacuum.
They address that issue in the link; they propose a 63 m^2 radiator for heat dissipation.
Sure it is doable. My point is that at room temperature convection is a so much more efficient heat transfer mechanism that I wonder why someone would even think about doing without it.
Lay people associate space with cold because nearly every scifi movie has people freezing over in seconds when exposed to the vacuum of space (insert Picard face-palm gif).
Even The Expanse, even them! Although they are otherwise so realistic that I have to say I started doubting myself a bit. I wonder what really would happen, and how fast...
People even complained that Leia did not freeze over (instead of complaining about her sudden use of the force where previously she did not show any such talents).
Well empty space has a temperature of roughly -270c...so that's pretty cold.
But I think what people/movies don't understand is that there's almost no conductive thermal transfer going on, because there's not much matter to do it. It's all radiation, which is why heat is a much bigger problem, because you can only radiate heat away, you can't conduct it. And whatever you use to radiate heat away can also potentially receive radiation from things like the Sun, making your craft even hotter.
> Well empty space has a temperature of roughly -270c...so that's pretty cold.
What is this “empty space” you speak of? Genuinely empty space is empty and does not have a clearly defined temperature. If you are in space in our universe, very far from everything else, then the temperature of the cosmic microwave background is what matters, and that’s a few K. If you’re in our solar system in an orbit near Earth, the radiation field is wildly far from any sort of thermal equilibrium, and the steady state temperature of a passive black body will depend strongly on whether it’s in the Earth’s shadow, and it’s a lot hotter than a few K when exposed to sunlight.
Wouldn't a body essentially freeze-dry, being a wet thing exposed to vacuum? I.e. the temperature of space is still irrelevant and the cooling comes from vaporization.
What is room temperature in this context? The temp of the space it's sitting in or a typical room temp on Earth?
Room temperature on earth. In physics room temperature is used as a technical term and actually pretty universally defined as 20°C (293.15 K).
Traditionally in European papers it used to be 18°C, so if Einstein and Schrödinger talk about room temperature it is that.
I've heard in chemistry and stamp collecting they use 25°C but that is heresy.
Why do they want to put a data center in space in the first place?
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle to put the panels in space in the first place?
Physical isolation and security?
Against manipulation maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not a particular strength of it.
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use, no thermal stress for rivers on one hand but the huge overhead of a space launch on the other.
I've talked to the founder of Starcloud about this, there is just going to be a lot of data generative stuff in space in the future, and further and further out into space. He thinks now is the right time to learn how to compute up there because people will want to process, and maybe orchestrate processing between many devices, in space. He's fully aware of all of the objections in this hn comments section, he just doesn't believe they are insurmountable and he believes interoperable compute hubs in space will be required over the next 20/30 years. He's in his mid 20s, so it seems like a reasonable mission to be on to me.
Seems far more likely that the "data generative stuff" will get smaller and cheaper to run (like cell phones with on-device models) much faster than "run a giant supercomputer in orbit" will become easy.
My headlights aren't good enough so I'm unsure but generally that maps. To me the interoperability part is what is interesting, your data and my data in real time being consumed by some understanding agent doing automated research? I could imagine putting something like a Stoffel MPC layer in there, then nations states can more easily work together? I presume space data/research will be highly competitive, even friendly nations may want to combine data without knowing the underneath. We're so far out here that it's kinda silly, but I don't think we're out to lunch? Have a great weekend Chris! :)
My initial thought was: ambiguous regulatory environment.
Not being physically located in the US, the EU, or any other sovereign territory, they could plausibly claim exemption from pretty much any national regulations.
This might be true, but unrealistic.
If you run amiss of US (or EU) regulators, they will never say, "well, it's in space, out of our jurisdiction!".
They will make your life hell on Earth.
Space is terrible for that. There's only a handful of countries with launch vehicles and/or launch sites. You obviously need to be in their good graces for the launch to be approved.
If you want permissive regulatory environment, just spend the money buying a Mercedes for some politician in a corrupt country, you'll get a lot further...
Quick, we need a new Cryptonomicon, in space!
A bit like international waters. I wonder when we'll see the first space pirates.
> A bit like international waters.
Which is a good analogy; international waters are far from lawless.
You're still subject to the law of your flag state, just as if you were on their territory. In addition to that, you're subject to everyone's jurisdiction if you commit certain crimes - including piracy. https://en.wikipedia.org/wiki/Universal_jurisdiction
Ability to raise money from gullible investors.
On the environmental front, when it comes to end of life, the entire data center is incinerated in the Earth's upper atmosphere.
> Why do they want to put a data center in space in the first place?
At https://news.ycombinator.com/item?id=44397026 I speculate that in particular militaries might be interested.
Speed of light is actually quite an advantage, in theory at least. Speed of light in optical fiber is quite a bit slower (takes 50% longer) than in vacuum.
Not really. Fiber is more like 2/3 of free-space propagation speed, and that puts the break-even point of a direct fiber connection vs LEO up- and downlink at a geodesic distance of about 12000 km. So, for most data centers you want to reach, fiber is the better option.
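To put rough numbers on the propagation-speed gap, here is a minimal sketch. The assumptions are mine (an effective fiber index of 1.47 and a 550 km orbit, with the satellite directly overhead); the actual break-even distance depends entirely on where the satellite sits relative to the endpoints:

```python
# Back-of-envelope propagation numbers (assumed values, not Starcloud's):
# delay per 1000 km in silica fiber vs. free space, plus the minimum
# up+down penalty for reaching a satellite at a given LEO altitude.
C = 299_792.458          # km/s, speed of light in vacuum
N_FIBER = 1.47           # assumed effective index of silica fiber (~0.68c)
ALT_KM = 550.0           # assumed LEO altitude

per_1000km_fiber = 1000 / (C / N_FIBER) * 1e3   # ms
per_1000km_vacuum = 1000 / C * 1e3              # ms
updown_penalty = 2 * ALT_KM / C * 1e3           # ms, satellite directly overhead

print(f"fiber : {per_1000km_fiber:.2f} ms per 1000 km")
print(f"vacuum: {per_1000km_vacuum:.2f} ms per 1000 km")
print(f"minimum LEO up+down penalty: {updown_penalty:.2f} ms")
```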
Not every datacenter use case is latency sensitive. Backup storage or GPU compute, for example.
But then why bother with the added expense of launching into space? It's definitely not for environmental reasons.
Starcloud isn't even worth the attention to point out what an infeasible idea it is.
Maybe SpinLaunch can get it up there. And the power for the SpinLaunch motor can come from Solar Roadways.
There was a video from Scott Manley about this several years back. And he was very skeptical that it's even feasible for SpinLaunch to place something useful into orbit. And they haven't yet.
Coming from someone in this space, SpinLaunch has more legs to stand on than Starcloud.
SpinLaunch could work but from the Moon, sending stuff back to Earth.
That's how you get another Theranos.
If I'm reading this correctly, the idea is
1. YOLO. Yeet big data into orbit!
2. People will pay big bucks to keep their data all the way up there!
3. Profit!
It could make sense if the entire DC was designed as a completely modular system. Think ISS without the humans. Every module needs to have a guaranteed lifetime, and then needs to be safely yet destructively deorbited after its replacement (shiny new module) docks and mirrors the data.
> 2. People will pay big bucks to keep their data all the way up there!
Just to make me understand the business plan better: why would people or companies be willing to pay much more to have their data stored (or computations done) in space?
The only reason that I can imagine is that the satellite which contains the data center also has a lot of sensors mounted (think military spying devices), and either for security, capacity or latency reasons you prefer the sensor data to be processed in space instead of transferring it down to earth, process it there, and sending the results back to space.
In other words: the business model is getting big money defense contracts (somewhat ignoring whether the idea really makes military sense or not).
Except space hasn't been "more secure" for nation states against other nation states in decades. US, Russia, and China all have various capabilities to destroy or steal or manipulate or tamper with satellites. It mostly doesn't happen right now because nobody is at full blown war. Shooting down satellites was expected to be a part of any superpower war since the 80s. Those weapons will be plenty effective against even massive installations in space.
Meanwhile, space gets you zero protection from the infosec threats that plague national security installations.
> There, 24/7 solar power is unhindered by day/night cycles, weather, and atmospheric losses (attenuation).
Wouldn’t the earth still get in the way of the sun, or is it too far away?
> Starcloud’s target is to achieve a 5 GW cluster with solar arrays spanning 4 km by 4 km
Doesn't this massive surface area also mean a proportionately large risk of getting damaged by orbital debris?
If you are out of the magnetosphere, wouldn't your data be subject to way more cosmic ray interference, to the point that its actually a consideration?
In LEO, there is a lot of testing and mitigation you can do with your design to help reduce the chance and impact of radiation single events. For example, redundancy for key components, ECC for RAM, supervisor hardware, RAID or other storage tooling, etc.
Geostationary orbiters operate there during the day but this concept would position systems 100x closer to earth and well inside the protective envelope.
ECC everything?
Isn't ECC only applied to RAM? Do any current gen CPUs have ECC cache?
I can't comment on AMD or Intel but Apple Silicon definitely uses ECC for at least the system level cache. On top of that it performs cache healing (swapping out bad lines for spares) on every cache level every time the system boots.
Yes. And most server hardware is already at least ECC ram. You may still want some light radiation shielding to prevent the worst, maybe some heavier shielding for solar flares. But beyond that, simple error correction can be baked into the software - ecc the bootloader and filesystem and you are mostly good to go
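As a toy illustration of the "bake it into software" idea (not anything Starcloud has described), the simplest possible scheme is to keep three copies and majority-vote on read; real systems would use proper ECC or erasure codes instead:

```python
# Minimal sketch of software-level redundancy: store three copies and take a
# bitwise majority vote on read, so any single flipped bit is corrected.
# Purely illustrative; production systems use real ECC / erasure coding.
def write_tmr(data: bytes) -> tuple[bytes, bytes, bytes]:
    return data, data, data

def read_tmr(a: bytes, b: bytes, c: bytes) -> bytes:
    # majority vote per bit: a bit is 1 if it is 1 in at least two copies
    return bytes(((x & y) | (y & z) | (x & z)) for x, y, z in zip(a, b, c))

copies = write_tmr(b"critical boot config")
# simulate a single-event upset flipping one bit in the second copy
corrupted = (copies[0], bytes([copies[1][0] ^ 0x08]) + copies[1][1:], copies[2])
assert read_tmr(*corrupted) == b"critical boot config"
```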
Cooling things in space is insanely difficult, as there’s no conduction or convection.
Cooling is one of the main challenges in designing data centers.
I wonder if there could be some way to photolithograph compute circuits directly onto a radiator substrate, and accomplish a fully-passive thermal solution that way. Consider the heat-conduction problem: from dimensional analysis, the required thickness of a (conduction-only) radiator plate with a regular grid of heat sources on it shrinks superlinearly as you subdivide those heat sources (from few large sources into many small ones). At fixed areal power density, if the unit heat source is Q, the plate thickness d ∝ Q^{3/2}. (This is intuitive: the asymptotic limit is a uniform, continuous heat source exactly matched to a uniform radiation heat sink; hence lateral heat conduction is zero.) So: could one contemplate an array of very tiny CPU sub-units, gridded evenly over a thin Al foil—say at the milliwatt scale with millimeter-scale separation? It'd be mostly empty space (radiator area) and interconnect. It'd be thermally self-sufficient and weigh practically nothing.
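A quick sanity check on whether that areal density is radiatively sustainable, using my own assumptions (1 mW sources at 1 mm pitch, emissivity 0.9, radiating from both faces, absorbed sunlight ignored):

```python
# Sanity check (illustrative): can a thin foil near room temperature radiate
# away milliwatt-scale heat sources at millimetre pitch? Compare the areal
# power density of the sources with the Stefan-Boltzmann limit of the plate.
SIGMA = 5.670e-8        # W/m^2/K^4
EMISSIVITY = 0.9        # assumed coating emissivity
T = 320.0               # K, assumed allowable plate temperature

radiated = 2 * EMISSIVITY * SIGMA * T**4    # W/m^2, both faces, no solar input
source_density = 1e-3 / (1e-3 ** 2)         # 1 mW per 1 mm x 1 mm cell -> W/m^2

print(f"radiating capacity : {radiated:7.0f} W/m^2")
print(f"source density     : {source_density:7.0f} W/m^2")
```

With those numbers the plate can just barely keep up near 320 K, so the concept is at least not ruled out by the radiation budget alone.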
Look at the thermal shield design for the JWST. Could you have a data center that unfolds into a multi-layered plane where the outer solar collector layer faces the sun, an intermediate layer shields infrared emissions from the back side of that, and the final layer that always faces away from the sun holds (or is) a bunch of chips? Park it in an orbit where it can stay oriented this way or an L point. Free compute for the life span of the chips powered by the sun.
A lot of this is basically a supercomputing Dyson swarm.
Also do chips in space need casing or could the wafers be just exposed on that back layer?
I understand that the multi-layer insulation idea doesn't accomplish much unless you're trying to reach deep cryogenic temperatures passively (as infrared telescopes do!) It's a difficult structural design which would only cut your heat budget by a small constant factor. Remember that much of that heat on the solar side is making its way over to the cold side by way of electricity—the compute units are a heat "source" of similar magnitude as the solar input itself.
edit: I think the optimal packing could be a simple rolled-up scroll, that unfurls in space into a ribbon. A very lazy design where the ribbon has no orientation control, randomly furls and knots; and only half of it is (randomly) facing the sun at any given time. And the compute units are designed work under those conditions—as they are to be robust against peers randomly disappearing to micrometeorites, to space radiation, and so forth.
Because, you could make up for everything in quantity. A small 3x5 meter cylinder of rolled-up foil stores—at the mm-thickness scale, 10's of gigawatts of compute; at the micron scale, 10's of terawatts. Of course that end is far-future sci-fi stuff!
> Also do chips in space need casing or could the wafers be just exposed on that back layer?
Even in LEO they benefit tremendously from radiation shielding, even a couple millimeters of aluminum greatly reduces the total ionizing dose. Also LEO has the issue of monatomic oxygen in the thermosphere which tends to react aggressively with the surface of anything it touches. An aluminium spacecraft structure isn't really affected, but I don't think it'd be very good for a semiconductor wafer.
Putting a datacenter in space is one of the worst ideas I've heard in a while.
Reliable energy? Possible, but difficult -- need plenty of batteries
Cooling? Very difficult. Where does the heat transfer to?
Latency? Highly variable.
Equipment upgrades and maintenance? Impossible.
Radiation shielding? Not free.
Decommissioning? Potentially dangerous!
Orbital maintenance? Gotta install engines on your datacenter and keep them fueled.
There's no upside, it's only downsides as far as I can tell.
Yes cooling is difficult. Half the "solar panels" on the ISS aren't solar panels but heat radiation panels. That's the only way you can get rid of it and it's very inefficient so you need a huge surface.
This isn’t true. The radiators on ISS are MUCH smaller than the solar panels. I know it’s every single armchair engineer’s idea that heat rejection is this impossible problem in space, but your own example of ISS proves this is untrue. Radiators are no more of a problem than solar panels.
The radiators are significantly smaller than the PV arrays, but not by a massive ratio; looks like about 1:3.6 based on the published area numbers that I could find.
It looks like the ISS active cooling system has a maximum cooling capacity that could handle the equivalent of a single-digit number of racks (down to 1 for an AI-focused rack).
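For a rough sense of scale, a minimal sketch using the commonly cited ~70 kW figure for the ISS external active thermal control system (treat that number as approximate) and the rack powers mentioned elsewhere in this thread:

```python
# Rough comparison behind the "single-digit number of racks" remark.
# ISS external active thermal control heat rejection is commonly cited at
# roughly 70 kW; rack powers below are the ones discussed in this thread.
ISS_COOLING_KW = 70.0
racks = {
    "conventional air-cooled rack (~12 kW)": 12.0,
    "GB200 NVL72 AI rack (~120 kW)": 120.0,
}
for name, kw in racks.items():
    print(f"{name}: ISS-class cooling supports about {ISS_COOLING_KW / kw:.1f} of them")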
The heat load of the ISS is a handful of astronauts and some equipment and whatever it absorbs from the sun. Not an entire data center or a nuclear rocket which is where the radiator discussion comes into play.
The heat load is equal to the load from the solar panels, to first order. So actually yeah, you CAN compare the size of solar panels to the size of the radiators.
Seems oddly paradoxical. ISS interior at some roughly livable temperature. Exterior is ... freakin' space! The temperature gradient seems as if it should take care of it ...
... and then you realize that because it is space, there's almost nothing out there to absorb the heat ...
There's nothing paradoxical about it. There's no such thing as a temperature gradient in a vacuum, there's nothing to hold or measure temperature against. And thus a vacuum is a really good insulator. Which is why a vacuum flask, which ultimately became one of Thermos' most well known products, is used to control temperature both in and outside the lab.
Except a thermos has a really low emissivity, otherwise (if it had high emissivity), it’d be a poor insulator due to thermal radiation, the same reason why ISS’s radiators are much smaller than its solar panels.
there literally is nothing to absorb the heat. Conduction and convection are out, all you got is radiation.
new vc rule: no investing in space startups unless their founders have 1000 hours in KSP and 500 hours in children of a dead earth
I’d settle for at least a high school physics education. This idea seemed insane when I first heard about it a few weeks back. This analysis just makes it that much more crazy.
If YC is hell bent on lighting piles of money on fire, I can think of some more enjoyable ways.
they got the sun synchronous orbit part right.
Radiation is not actually a problem unless you're trying to do super high power nuclear electric propulsion (i.e. in your videogame). Classic armchair engineer mistake, tbh.
Radiators work great in space. Stefan-Boltzmann's law. The ISS's radiators are MUCH smaller than the solar panels. Considering datacenters on Earth have to have massive heat exchangers as well, I really think the bUt wHaT aBoUt rAdiAtOrs is an overblown gotcha, considering every satellite still has to dump heat and works just fine.
The problem is not that radiators don't work. The problem is the need for liquid cooling. The heat produced per area in the GPU/CPU is much bigger than the cooling capacity per area of your radiator.
Even here on earth, contemporary GPU racks for AI have had to move to liquid cooling because it is the only way to extract enough heat. At 120 kW for 18x 1U servers (GB200 NVL72), the power density is waaay beyond what you can do with air even.
The last time Starcloud was doing the rounds on HN, I estimated that they need to be pumping water at a flow rate of 60 000 liters per second, if you use the numbers in their whitepaper. That's a tenth of the Sacramento river, flowing in space through a network with a million junctions and hoping nothing leaks.
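For reference, that order of magnitude falls out of a one-line energy balance; the assumptions here are mine (the stated 5 GW target, water coolant, and a 20 K temperature rise across the loop):

```python
# Reproducing the order-of-magnitude coolant flow estimate (assumptions mine):
# a 5 GW cluster, water coolant, and a 20 K temperature rise across the loop.
P_WATTS = 5e9            # W, Starcloud's stated 5 GW target
CP_WATER = 4186.0        # J/(kg*K), specific heat of water
DELTA_T = 20.0           # K, assumed coolant temperature rise

mass_flow = P_WATTS / (CP_WATER * DELTA_T)   # kg/s
print(f"required coolant flow: ~{mass_flow:,.0f} kg/s (~{mass_flow / 1000:,.0f} m^3/s of water)")
```

That lands right around the 60,000 litres per second mentioned above.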
There's a difference between a couple of humans (~150 W each) and, say, JUST one H200 DGX (8700 W).
Yes. In general, as a rule of thumb, your radiator size must scale proportionally to your solar panel size, as the parent says:
> The ISS's radiators are MUCH smaller than the solar panels.
A great interactive example of this is the game Oxygen Not Included. By the late game, your biggest problem is your base getting too hot from the waste heat of all your industry.
If you read the Starcloud whitepaper[1], it claims that massive batteries aren't needed because the satellites would be placed in a dawn-dusk sun-synchronous orbit. Except for occasional lunar eclipses, the solar panels would be in constant sunlight.
The whitepaper also says that they're targeting use cases that don't require low latency or high availability. In short: AI model training and other big offline tasks.
For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
All satellites launched from the US are required to have a decommissioning plan and a debris assessment report. In other words: the government must be satisfied that they won't create orbital debris or create a hazard on the ground. Since these satellites would be very large, they'll almost certainly need thrusters that allow them to avoid potential collisions and deorbit in a controlled manner.
Whether or not their business is viable depends on the future cost of launches and the future cost of batteries. If batteries get really cheap, it will be economically feasible to have an off-the-grid datacenter on the ground. There's not much point in launching a datacenter into space if you can power it on the ground 24/7 with solar + batteries. If cost to orbit per kg plummets and the price of batteries remains high, they'll have a chance. If not, they're sunk.
I think they'll most likely fail, but their business could be very lucrative if they succeed. I wouldn't invest, but I can see why some people would.
1. https://starcloudinc.github.io/wp.pdf
You can also drink from a shoe. It's absolutely possible.
And it enjoyed some popularity. [1]
[1] https://en.wikipedia.org/wiki/Beer_boot
And there you've cut to the chase.
I was implying an unspoken obvious "but why would you?"
But of course the answer I missed was you don't, you make money from people who, for whatever reason, want to drink from shoes.
> For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
This is hiding so, so much complexity behind a simple hand-wavy “modular”. I have trained large models on thousands of GPUs; hardware failures happen all the time. Latest example: an InfiniBand interface flapping, which ultimately had to be physically replaced. What do you do if your DC is in space? Do you just jettison the entire multi-million-dollar DGX pod that contains the faulty $300 interface before sending up a new one? Do you have an army of astronauts + Dragons to do this manually? Do we hope we will have achieved superintelligence by then and have robots that can do this for us?
Waving the “Modular” magic key word doesn’t really cut it for me.
> Whether or not their business is viable depends on the future cost of launches and the future cost of batteries. If batteries get really cheap, it will be economically feasible to have an off-the-grid datacenter on the ground. There's not much point in launching a datacenter into space if you can power it on the ground 24/7 with solar + batteries.
Something tells me that the price of batteries is already cheap enough for terrestrial data centers to make more economic sense than launching a datacenter - which will also need batteries - into space.
Same with hydrogen fuel cell vehicles: inventing a detour because it sounds cool, which ultimately doesn't work out, because Occam's razor.
Let me alert all the NIMBY folks, let them know that data centers will be blocking their view of the moon and casting shadows on their backyards.
Just give all those astronomers mockup telescopes with little screens creating the fancy images they want inside them.
They will calm down.
Of course, they're soft targets in a space war too, and they could generate lots of debris.
Man I read “AI training in a high latency self sufficient satellite orbiting earth” as the start of a Sci-Fi novel…
Just another good proof of paper being an ideal medium for fiction
Any purported advantages have to contend with the fact that sending the modules up costs millions of dollars. Tens to hundreds of millions.
Servers outside any legal jurisdiction. Priceless.
International space law (starting with the Outer Space Treaty of 1967) says that nations are responsible for all spacecraft they launch, no matter whether the government or a non-governmental group launches them. So a server farm launched by a Danish company is governed by Danish law just the same as if they were on the ground- and exposed to the same ability to put someone into jail if they don't comply with a legal warrant etc.
This is true even if your company moves the actual launching to, say, a platform in international waters- you (either a corporation or an individual) are still regulated by your home country, and that country is responsible for your actions and has full enforcement rights over you. There is no area beyond legal control, space is not a magic "free from the government" area.
While that's all true, it does hilariously increase the difficulty for the government showing up and seizing your server hardware...
They don't need to do that if they go after your ground station operators.
To escape the law you need to hide or protect something on earth (your ground station(s), downlinks). If you can hide or protect that infrastructure on earth, why bother putting the computers in space?
Because you need an enormous amount of energy to run the servers. You may hide the downlinks but you still need power.
I'm not sure how you maintain hidden ground stations while providing a commercial service that justifies many $MM in capital and requires state support to get launch permission.
> I'm not sure how you maintain hidden ground stations while providing a commercial service that justifies many $MM in capital and requires state support to get launch permission.
Who said that Starcloud's business model is about commercial services? At https://news.ycombinator.com/item?id=44397026 I rather speculate that Starcloud's business model is about getting big money defense contracts.
Yeah exactly. We’re riffing on how implausible that is, right?
ASAT missiles have existed since the 80s and multiple countries have demonstrated the capability to destroy something in space.
Meanwhile, you, the actual human being the government wants to coerce, are still on the earth, where someone can grab you and beat you with a wrench
Maybe not so much... they'll just grab you. Obligatory XKCD.
https://xkcd.com/538/
Unless you go up there with it and a literal lifetime supply? Although I guess if you don't take much it's still a lifetime supply...
What if you’re a stateless person? (Not an easy status to acquire these days, but any US citizen can just renounce their citizenship without getting a new one, for example.)
Being stateless has an end result of "literally anyone can fuck with you" more than "no one can fuck with you".
nations are responsible for all spacecraft they launch, no matter whether the government or a non-governmental group launches them.
Nations come and go. In my lifetime, the world map has changed dozens of times. Incorporate in a country that doesn't look like it's going to be around very long. More than likely, the people running it will be happy to take your money.
Generally though, countries don’t disappear: they have a predecessor and a successor.
A successor may take possession of the land, but that doesn't mean it will also take responsibility for the previous government's liabilities.
That is why international treaties come with implicit or explicit enforcement options
That is not how international law works, you don't get to say "we are a new country and therefore not bound by treaties that earlier forms did."
This principle was established when Nazis were convicted of war crimes at Nuremberg for violating treaties that their predecessor state, the Weimar Republic, had signed, even after the Nazis repudiated those treaties and claimed they were signed by an illegitimate state, and that they were a new Reich, not like the Weimar Republic.
Basically if territory changes hand to an existing state that state will obviously still have obligations, and if a new state is formed, then generally it is assumed to still carry the obligations of the previous state. There is no "one weird trick" to avoid international law. I assure you that the diplomats and lawyers 80 years ago thought of these possibilities. They saw what resulted from the Soviet and Nazi mutual POW slaughters, and set up international law so no one could ignore it.
Those kinds of countries don't tend to be the kinds of countries with active space programs.
And more critically - they have successor states.
The Russian Federation is treated as the successor to the USSR in most cases (much to the chagrin of the rest of the CIS) and Serbia is treated as the successor to Yugoslavia (much to the chagrin of the rest)
:-) I appreciate your snark and the ad campaign reference.
But if international waters isn't enough (and much cheaper) then I don't think space will either. Man's imagination for legal control knows no bounds.
You wait (maybe not, it's a long wait...), if humankind ever does get out to the stars, the legal claims of the major nations on the universe will have preceded them.
The 'Principality of Sealand', anywhere else on the high seas or Antarctica have their issues with practicality too, but considerably less likelihood of background radiation flipping bits...
Unless the company blasts its HQ and all its employees into space, no, they are very much subject to the jurisdiction of the countries they operate in. The physical location of the data center is irrelevant.
Exactly. Government entities have a funny habit of making their own decisions about what (and who) is and is not subject to their jurisdiction.
I know there's the fantasy of orbital CSAM storage able to beam obscenity to any point on the ground with zero accountability, but that is not going to survive real world politics.
[Mild spoilers for _Critical Mass_ by Daniel Suarez below]
> Servers outside any legal jurisdiction
Others have weighed in on the accuracy of this, with a couple pointing out that the people are still on the ground. There's a thread in _Critical Mass_ by Daniel Suarez that winds up dealing with this issue in a complex set of overlapping ways.
Pretty good stuff, I don't think the book will be as good as the prior book in the series. (I'm only about halfway through.)
Given that most of the major powers have satellite shootdown ability this isn't worth all that much if you're causing enough trouble.
Shooting down a satellite is a major step that creates a mess of space junk, angering everybody.
Plus you can just have a couple of politicians from each major power park their money on that satellite.
>Shooting down a satellite is a major step that creates a mess of space junk, angering everybody.
Unless everybody is angry at the satellite, in which case it is a price everybody is even eager to pay.
>Plus you can just have a couple of politicians from each major power park their money on that satellite.
I've long had the idea that there are fashions in corruption and a point at which to be corrupt just becomes too gauche and most politicians go back to being honest.
This explains the highly variant history of extreme corruption in democracies.
At any rate while the idea that the cure for any government interference is to be sufficiently corrupt sounds foolproof in theory I'm not sure it actually works out.
If I was a major politician and you had my competitors park their money on your satellite, it would become interesting for me to get rid of it. Indeed, if you had me and my competitors on the satellite, I might start thinking about how to conceal getting my money out of there, and then wait for the best moment to ram through a measure to blow up the satellite.
By that logic, politicians around the world would make it illegal for themselves to trade stock on their insider knowledge. I'm not holding my breath.
See: https://unusualwhales.com/politics. Some of these politicians on both sides are very good and consistent stock pickers indeed.
huh?..
I'm sorry but what logic is it you're referring to here? Is it the idea that there are fashions in corruption? If so by that logic we are probably in an era of high corruption.
Is it the idea that if I were a corrupt politician and I had equally corrupt enemies I would use my knowledge of their corruption to dirty trick them? Because ... dirty tricking them and getting them to lose all their finance at one time is not quite the same as passing a law making it difficult for everyone to get more finance from hereon out.
I'm not following exactly what logic of mine you think you've defeated with observing that there are a lot of corrupt politicians nowadays?
Who would be willing to provide connectivity to servers that are exploiting being outside legal jurisdiction for some kind of value?
Dozens upon dozens of illicit shady bulletproof hosting providers.
2026, we will get ransomware from space!
The RaaS groups have hundreds of millions of dollars so in theory they actually could get something like that setup if they wanted.
> 2026, we will get ransomware from space!
Ahem, cloud ransomware.
Anyone with a ground station aimed at the datacenter satellite.
Would be cheaper to do in international waters, even if you needed security to protect it.
Pretty worthless unless the execs live in space too.
Why? It's not like we put execs in jail for allowing their companies to do terrible things under their watch.
The best argument I've heard for data-centres-in-space startups is that it's an excuse to do engineering work on components other space companies might want to buy (radiators, shielding, rad-hardened chips, data transfer, space batteries) which are too unsexy to attract the same level of FOMO investment...
Yes, and also just because a space data center isn’t useful today doesn’t mean it won’t be required tomorrow. When all the computing is between the ground and some nearby satellites, of course the tradeoffs won’t be worth it.
But what about when we’re making multi-year journeys to Mars and we need a relay network of “space data centers” talking to each other, caching content, etc?
We may as well get ahead of the problems we’ll face and solve them in a low-stakes environment now, rather than waiting to discover some novel failure scenario when we’re nearing Mars…
> what about when we’re making multi-year journeys to Mars and we need a relay network of “space data centers” talking to each other, caching content, etc?
How would this work? Planets orbit at different speeds, so you can't build a chain of relays to another planet. I can imagine these things orbiting planets, but is that worth it compared to ground-based systems?
We'd build it then? The problems of a space data center are extremely generic and only worth solving when you actually need one. Which would never be in low earth orbit.
You need fewer batteries in orbit than on the ground, since you're only in shade for at most like 40 minutes. And it's all far more predictable.
Cooling isn't actually any more difficult than on Earth. You use large radiators and radiate to deep space. The radiators are much smaller than the solar arrays. "Oh but thermos bottles--" thermos bottles use a very low emissivity coating. Space radiators use a high emissivity coating. Literally every satellite manages to deal with heat rejection just fine, and with radiators (if needed) much smaller than the solar arrays.
Latency is potentially an issue if in a high orbit, but in LEO can be very small.
Equipment upgrades and maintenance are impossible? Literally, what is the ISS, where this is done all the time?
Radiation shielding isn't free, but it's not necessarily that expensive either.
Orbital maintenance is not a serious problem with low-cost launch.
The upside is effectively unlimited energy. No other place can give you terawatts of power. At that scale, this can be cheaper than terrestrially.
> The radiators are much smaller than the solar arrays.
Modern solar panels are way more efficient than the ancient ones in ISS, at least 10x. The cooling radiators are smaller than solar panels because they are stacked and therefore effectively 5x efficient.
Unless there are at least 2x performance improvements on the cooling system, the cooling system would have to be larger than solar panels in a modern deployment.
This is false. It’s pretty straightforward to prove using Stefan-Boltzmann. Radiating from both sides at 300K, a square radiator that’s 1 meter on a side emits 920W.
Additionally, you wouldn’t use cutting-edge 35% triple-junction cells for a space datacenter; you’d use silicon cells like Starlink and the ISS use. At 22% efficiency with a 90% fill factor, given 1350 W/m^2 and thus 270 W/m^2 delivered, to provide enough power for that radiator you’d need a solar panel 3.4 times as big, and that’s if you were in 24/7 sunshine. If you’re in a low orbit that’s obscured almost half the time, it’s 6-7 times as big.
Why do people keep making these obviously wrong claims when a paragraph of arithmetic shows they’re wrong? Do the math.
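Spelling that arithmetic out, under the same assumptions the parent states (two-sided blackbody radiator at 300 K; 22% silicon cells, 90% fill factor, 1350 W/m^2 of sunlight):

```python
# The "paragraph of arithmetic" spelled out, using the parent's assumptions:
# a two-sided blackbody radiator at 300 K vs. silicon cells at 22% efficiency
# and 90% fill factor under 1350 W/m^2 of sunlight.
SIGMA = 5.670e-8                        # W/m^2/K^4
radiator_w_per_m2 = 2 * SIGMA * 300**4  # both faces, emissivity ~1 assumed
solar_w_per_m2 = 1350 * 0.22 * 0.90     # delivered electrical power per panel area

ratio = radiator_w_per_m2 / solar_w_per_m2
print(f"radiator: {radiator_w_per_m2:.0f} W/m^2, solar: {solar_w_per_m2:.0f} W/m^2")
print(f"solar array needs to be ~{ratio:.1f}x the radiator area (all power ends up as heat)")
```

This reproduces the ~920 W per square metre and the ~3.4x area ratio quoted above.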
We’re probably thinking of it the wrong way. Instead of a single datacenter it’s more likely we build constellations and then change the way we write software.
There will probably be a lot more edge computing in the future. 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal. Space infrastructure will probably have some parallels.
That sounds like the Guoxing Aerospace / ADA Space “Three-Body Computing Constellation”, currently at 12 satellites (out of a planned 2,800).
The Chinese project involves a larger number of less powerful inference-only nodes for edge computing, compared to Starcloud's training-capable hyperscale data centers.
[1] Andrew Jones. "China launches first of 2,800 satellites for AI space computing constellation". Spacenews, May 14, 2025. https://spacenews.com/china-launches-first-of-2800-satellite...
[2] Ling Xin. "China launches satellites to start building the world’s first supercomputer in orbit". South China Morning Post, May 15, 2025. https://www.scmp.com/news/china/science/article/3310506/chin...
[3] Ben Turner. "China is building a constellation of AI supercomputers in space — and just launched the first pieces". June 2, 2025. https://www.livescience.com/technology/computing/china-is-bu...
> 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal.
Are there many startups actually taking real advantage of edge computing? Smaller B2B places don't really need it, larger ones can just spin up per-region clusters.... and then for 2C stuff you're mainly looking at static asset stuff which is just CDNs?
Who's out there using edge computing to good effect?
Sounds like a great investment for SoftBank
I think the upside is that it’s VC fodder. I imagine their thinking went about as far as “wow, what if we like….did AI…but in space?!”
Reliable energy is the only (maybe valid) reason. You can get yourself into a sun-synchronous dawn-dusk orbit and avoid shading by the Earth.
Bandwidth - negligible
Re: reliable energy. Even in low earth orbit, isn't sunlight plentiful? My layman's guess says it's in direct sun 80-95% of the time, with deterministic shade.
It's super reliable, provided you've got the stored energy for the predictable periods of downtime (or a sun-synchronous orbit). Energy storage is a solved problem, but you need rather a lot of it for a datacentre, and that's all mass which is very expensive to launch and to replace at the end of its usable lifetime. The same goes for most of the other problems brought up here.
Exactly this. It's not that it's a difficult problem, but it is a high mass-budget problem. Which makes it an expensive problem. Which makes it a difficult problem.
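To put rough numbers on that mass budget (all figures below are my own assumptions, not anything from the article):

    # Battery mass to ride through eclipse in LEO, back-of-envelope.
    # Assumes a 1 MW load, ~35 minutes of eclipse per ~90-minute orbit,
    # ~150 Wh/kg at the pack level, and 80% usable depth of discharge.
    load_kw = 1000.0
    eclipse_hours = 35.0 / 60.0
    pack_wh_per_kg = 150.0
    usable_fraction = 0.8

    energy_kwh = load_kw * eclipse_hours / usable_fraction
    battery_mass_t = energy_kwh * 1000.0 / pack_wh_per_kg / 1000.0
    print(f"storage needed: ~{energy_kwh:.0f} kWh, battery mass: ~{battery_mass_t:.1f} t")

That's on the order of 5 tonnes of batteries per megawatt of load, cycled ~16 times a day, before counting the extra panel area needed to recharge them. Hence the appeal of a dawn-dusk sun-synchronous orbit where the eclipse mostly goes away.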
You answered it yourself: a sun-synchronous orbit negates the need for large battery systems.
That would make communicating with bits on Earth kind of painful though; I suppose that would work for a server that serves other sun-synchronous objects, but that seems like a rather small market.
Maybe.
If Starcloud integrated with something like Starlink, using the laser inter-satellite links to distribute ground comms across a network of satellites, then the datacenter maintaining a direct link to a ground station is probably a non-issue for most purposes.
You can have constant sun without being in Earth orbit at all; Sun-Earth L1 would do nicely.
Depends on your orbit, but you need to be prepared to rotate into Earth's shadow seamlessly.
> There's no upside, it's only downsides as far as I can tell.
It's outside of any jurisdiction, this is a dream come true for a libertarian oligarch.
It's not; all objects in space fall under the jurisdiction of the country that launched the rocket.
You're making a lot of assumptions. They could put up something like 1,000 Raspberry Pis, which don't need much cooling and have relatively modest energy requirements.
For your other concerns, the risks are worth it for customers because of the main reward: no laws or governments in space! Technically, the datacenter company could be found liable, but not for traffic, only for take-down refusals. Physical security is the most important security. For a lot of potential clients, simply making sure that human access to the device is difficult is worth the data-loss, latency, and reliability issues.
I’m mostly puzzled by how this got YC funding. Everything I’ve seen thus far suggests this is nowhere close to feasible.
But but but they released a "whitepaper"! Surely that's worth at least ten million dollars!
BRB, buying a copy of Microsoft Word so I can retire.
This is putting the cart before the horse in the most literal sense, SpaceX can’t even get a Starship into space without it breaking apart.
They can get highly qualified space engineers to do a lot of pre-qualification work for free though! (Cunningham's law)
I wonder what the implication on data protection / privacy laws and the like would be. Would it be as simple as there'd be no laws, or is the location of the users still relevant?
Laws definitely still apply.
https://en.wikipedia.org/wiki/Outer_Space_Treaty
> Article VI of the Outer Space Treaty deals with international responsibility, stating that "the activities of non-governmental entities in outer space, including the Moon and other celestial bodies, shall require authorization and continuing supervision by the appropriate State Party to the Treaty" and that States Party shall bear international responsibility for national space activities whether carried out by governmental or non-governmental entities.
"Terrestrial datacenters have parts fail and get replaced all the time."
This premise is basically false. Most datacenter hardware, once it has completed testing and burn-in, will last for years in constant use.
There are definitely failures, but failure rates are very low unless something is wrong, like bad cooling, vibration, or just a bad batch of hardware.
So, hardware lasts for years except in the cases where it doesn't?
Backblaze is a perfect example of parts failing.
https://www.backblaze.com/cloud-storage/resources/hard-drive...
Yes, it was only about 1,000 out of 300,000. But that covers only hard drives, not other hardware failures and replacements, and it goes to show that things do fail. The cost of replacement in space is drastically higher. The idea of a DC in space, as it stands, is a nothing burger.
The point is that past burn-in, the failure rates are low enough for years that they're a rounding error and you can plan for just letting the failed equipment sit there.
Allowing the failed equipment to sit there can in fact cut costs, because it lets you design the space without having to accommodate humans needing to access and insert or remove servers.
The higher the cost of bringing someone in to do maintenance, the more likely it is you will just design for redundancy of the core systems (cooling, power, networking), and accept failures and just disable failed equipment.
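To put a number on it (the annualized failure rate below is an assumption in the rough range people report for burned-in servers, not a measurement):

    # Over-provisioning needed for a "never repair, just disable" policy.
    # Assumes ~2% annualized failure rate per server after burn-in,
    # independent failures, and a 5-year design life.
    afr = 0.02
    years = 5
    servers = 10_000

    surviving = (1 - afr) ** years
    dead = servers * (1 - surviving)
    print(f"surviving after {years} years: {surviving:.1%}")
    print(f"expected dead out of {servers}: {dead:.0f}")

That's roughly 10% attrition over the design life, so launching ~10% spare capacity looks a lot cheaper than launching a robotic repair system.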
> unless something is wrong like ... vibration
so you might have problems if you were to do something that causes a lot of vibration, like launch the entire data center into space?
There is 0 reason to put a data center in space. For every single reason beyond "investor vibes" you can accomplish the same thing on earth for a significantly lower cost.
Here is a video that I think thoroughly covers the challenges a datacenter in orbit would face.
https://www.youtube.com/watch?v=JAcR7kqOb3o
This is such a gloriously stupid fucking idea.
This site is unusable on my Android phone; I even tried multiple browsers. The body text extends beyond the window and I can't scroll or zoom to fit.
Same for me.
But it does work if I rotate the phone to landscape mode.
should be fixed now.
And all of humanity will be watching these arrays orbit, for the financial benefit of whom? I'm happy to remember the wild night sky.
Who's asking for datacenters in space?
> Who's asking for datacenters in space?
At https://news.ycombinator.com/item?id=44397026 I speculate that militaries in particular might be interested.
Let's start by acknowledging that there is no operational Starship, and it's likely that the current iteration of that system is not viable. It will need to be redesigned, and no one even knows whether that's possible, let alone economically feasible.
Good use case for bitcoin mining?
- lots of cheap power
- deploy 100s of ASICs, let each of them fail as they go
No. Too much heat to dissipate. Plus there's no benefit to mining in space.
My napkin math is with Starcloud (https://news.ycombinator.com/item?id=43190778), i.e. one $10M Starship launch putting a 10,000-GPU datacenter into LEO with energy and cooling. I missed the batteries needed for the roughly half of each orbit spent in Earth's shadow (I originally calculated for crypto, where running only half the time is fine, which isn't the case for a regular datacenter) and the extra panels to charge them. That adds about 10 kg per kWh, so it comes down to about 5,000 GPUs for the same weight and launch cost.
Paradoxically, the datacenter in LEO is cheaper than on the ground, and it has a bunch of other benefits, for example physical security.
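A sketch of that napkin (the per-GPU power and mass figures are my assumptions, not Starcloud's or anyone else's):

    # How batteries and recharge panels eat into the GPU count for a fixed
    # launch mass. Assumes ~100 t of usable payload, ~10 kg and ~1 kW per
    # GPU "slot" (GPU plus its share of server, rack, and cooling),
    # batteries at 10 kg/kWh, ~45 min of shadow per orbit, and a 1.5x
    # factor for the extra panels and electronics to recharge.
    payload_kg = 100_000.0
    gpu_mass_kg, gpu_power_kw = 10.0, 1.0
    eclipse_hours = 0.75
    battery_kg_per_kwh = 10.0
    recharge_overhead = 1.5

    storage_kg = gpu_power_kw * eclipse_hours * battery_kg_per_kwh * recharge_overhead
    per_gpu_kg = gpu_mass_kg + storage_kg
    print(f"mass per GPU incl. storage: {per_gpu_kg:.1f} kg")
    print(f"GPUs per launch: {payload_kg / per_gpu_kg:.0f}")  # ~4,700 vs 10,000 without storage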
If you read Starcloud's whitepaper[1], they mention using a dawn-dusk sun-synchronous orbit. This would keep the solar panels in sunlight except for occasional lunar eclipses (which would basically be scheduled downtime, since their plan is to use these data centers for AI training).
1. https://starcloudinc.github.io/wp.pdf
$10M is at best SpaceX's projected internal cost.
"...dumbest possible idea.."
It's a crowded field, you have to do something to stand out!
Solar roadways!
Recently had a conversation about the pros and cons of space-based solar power screech to a halt when someone said "Well, what about space-based geothermal?"
What's the issue? If we can beam power down, we can beam it up!
More realistically, just drop a thermally conductive cable down to low solar orbit. Absolutely unlimited.
I guess from a certain perspective, all our sources of geothermal energy are already located in space
maybe on Io :)
The launch costs in the article look quite off from the outset.
A Falcon Heavy launch is already under $100M, and in the $1400/kg range; Starship’s main purpose is to massively reduce launch costs, so $1000/kg is not optimistic at all and would be a failure. Their current target is $250/kg eventually once full reusability is in place.
Still far from the dream of $30/kg but not that far.
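The $/kg figures are just price divided by payload. For example (approximate list prices and maximum LEO payloads, which mixes configurations a bit):

    # Rough launch cost per kg to LEO.
    def cost_per_kg(price_usd, payload_kg):
        return price_usd / payload_kg

    print(f"Falcon Heavy:  ~${cost_per_kg(95e6, 63_800):,.0f}/kg")   # ~$1,500/kg
    print(f"Starship goal: ~${cost_per_kg(25e6, 100_000):,.0f}/kg")  # ~$250/kg target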
The original “white paper” [1] also acknowledges that a separate launch is needed for the solar panels and radiators, at a 1:1 ratio to the server launches, which is ignored here. I think the author leaned a bit too heavily on their deep-research AI assistant's output.
[1] https://starcloudinc.github.io/wp.pdf
Please read Table 1.