Orbital computing gains momentum with the largest processing cluster ever launched into space
Kepler Communications operates the largest orbital computing cluster, marking a milestone in space infrastructure for data processing and decentralized AI inference.
The frontier of computing is expanding beyond Earth's atmosphere, consolidating the concept of orbital infrastructure. The Canadian company Kepler Communications has taken a decisive step by activating the largest computing cluster currently in orbit: 40 Nvidia Orin processors distributed across 10 operational satellites. This network, interconnected by laser communication links, marks the beginning of an era in which data processing no longer depends exclusively on terrestrial servers but happens where the information is captured.
The Challenge of Space Computing
For years, space-based data centers were treated more as futuristic speculation than commercial reality. Although giants like SpaceX and Blue Origin harbor ambitious visions, orbital infrastructure still lacks robust processing power. Kepler, however, positions itself not as a traditional data center operator but as a network infrastructure layer. According to CEO Mina Mitry, the focus is on providing connectivity and processing services to other satellites, drones, and aircraft, easing the latency and data-volume bottlenecks that currently limit space operations.
Innovation with Sophia Space
The ecosystem gained new momentum with the recent partnership between Kepler and the startup Sophia Space. The project's central goal is to test an orbital computer equipped with passive cooling, a critical innovation to avoid the prohibitive weight and cost of active cooling systems in microgravity. Sophia intends to run its proprietary operating system on six GPUs spread across two of Kepler's satellites. The exercise is essential for validating whether software can be managed remotely and efficiently in space, a process that is routine on Earth but remains an unprecedented technical challenge in orbit.
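Managing software on a machine no technician can reach usually relies on a safe-update pattern with automatic rollback. The sketch below is a hypothetical illustration of that general pattern, not Sophia Space's actual software; the `SoftwareSlot` type and `apply_update` function are invented for this example.

```python
# A minimal sketch of the A/B-slot safe-update pattern used for remote,
# hard-to-reach systems (illustrative only, not Sophia Space's software).
# Idea: boot the candidate image, run a self-test, and fall back to the
# known-good image automatically if the new version fails.

from dataclasses import dataclass

@dataclass
class SoftwareSlot:
    version: str
    healthy: bool  # outcome of the post-boot self-test

def apply_update(active: SoftwareSlot, candidate: SoftwareSlot) -> SoftwareSlot:
    """Promote the candidate image only if its self-test passes."""
    if candidate.healthy:
        return candidate   # new image becomes the active one
    return active          # automatic rollback to the known-good image

current = SoftwareSlot(version="1.0", healthy=True)
current = apply_update(current, SoftwareSlot("1.1", healthy=False))
print(current.version)  # still 1.0: the bad update was rolled back
current = apply_update(current, SoftwareSlot("1.2", healthy=True))
print(current.version)  # 1.2: the healthy update was promoted
```

On Earth a failed deployment is an inconvenience; in orbit it can brick an asset, which is why validating exactly this kind of unattended rollback behavior is central to the experiment.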
Inference Strategy over Training
Unlike visions that aim to replicate massive data centers in space, Kepler's strategy focuses on efficiency and immediate practical utility. Mitry argues that the priority should be distributed inference rather than training large language models. A massive processor drawing kilowatts of power would be wasteful if only partially utilized; Kepler's model instead relies on GPUs that run at full load continuously. This approach suits synthetic aperture radar (SAR) sensors and defense systems, which require immediate processing for threat tracking, a growing demand that includes United States government agencies.
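The appeal of on-orbit inference becomes concrete with some back-of-the-envelope arithmetic: instead of downlinking a raw SAR scene, the satellite downlinks only the detection records its model produces. Every figure below (scene size, link rate, detection count) is an illustrative assumption, not a Kepler specification.

```python
# Rough comparison: downlinking a raw SAR scene versus downlinking only
# on-orbit inference results. All numbers are illustrative assumptions.

RAW_SCENE_BYTES = 2 * 1024**3   # assume a 2 GiB raw SAR scene
DOWNLINK_BPS = 150 * 10**6      # assume a 150 Mbit/s downlink
DETECTION_BYTES = 256           # assume ~256 bytes per detection record
DETECTIONS_PER_SCENE = 500      # assume 500 detections per scene

def downlink_seconds(payload_bytes: int, link_bps: int) -> float:
    """Time to transmit a payload over a link, ignoring protocol overhead."""
    return payload_bytes * 8 / link_bps

raw_time = downlink_seconds(RAW_SCENE_BYTES, DOWNLINK_BPS)
det_time = downlink_seconds(DETECTION_BYTES * DETECTIONS_PER_SCENE, DOWNLINK_BPS)

print(f"Raw scene downlink:       {raw_time:.1f} s")      # ~114.5 s
print(f"Detections-only downlink: {det_time:.4f} s")       # well under 1 s
print(f"Volume reduction:         {RAW_SCENE_BYTES // (DETECTION_BYTES * DETECTIONS_PER_SCENE)}x")
```

Under these assumptions the raw scene ties up the link for nearly two minutes per pass, while the detections move in a fraction of a second, which is why on-board inference, not training, is the economically sensible workload for threat tracking.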
Market Impact and the Terrestrial Data Center Crisis
The orbital computing market is also being pushed by external factors, such as increasingly strict regulatory limits on building large data centers in urban areas. With cities and legislators capping the energy and land consumption of these complexes, Earth orbit is, ironically, becoming a strategic alternative. Rob DeMillo, CEO of Sophia Space, observes that the shrinking availability of sites for terrestrial data centers may accelerate corporate interest in space, turning orbit into an outlet for processing demand that ground infrastructure can no longer absorb.
Perspectives and the Path to 2030
Although experts predict that large-scale orbital data centers, in the vein of Elon Musk's or Jeff Bezos's visions, will only materialize in the next decade, today's edge-computing efforts are the necessary catalyst. Kepler and Sophia Space are laying the foundations for a future in which satellites not only relay data but make decisions in real time. If cooling technologies and software orchestration are validated in orbit, the sector is expected to move past its initial risks by 2027, allowing space infrastructure to become a fundamental piece of the global artificial intelligence economy.