Orbital Wedge
Where AI Compute Belongs in Space, and Where It Doesn’t

Introduction: When Infrastructure Becomes the Constraint
In recent months, the idea of data centers in orbit has resurfaced in public discussion. At first glance, it reads as speculative.
The more important context is less dramatic. AI scaling is no longer constrained primarily by model design. It is increasingly constrained by physical infrastructure.
Rack densities continue to climb. Campus-scale load requests are becoming routine. Interconnection queues are lengthening. Transformer lead times are stretching. Permitting cycles are slowing deployment.
When the bottleneck shifts from compute procurement to electricity delivery and activation timelines, geography becomes part of system architecture.
That shift does not make orbital data centers inevitable. It makes alternative architectures worth examining.
The question is not whether GPUs can operate in orbit. It is whether constraint alignment makes it economically rational.
This is not a forecast of hyperscale cloud relocating to orbit next year.
It is a conditional infrastructure thesis.
If terrestrial power delivery, permitting friction, and activation timelines continue tightening over the next five to ten years, a narrow segment of latency-tolerant, energy-intensive inference workloads could shift off-planet.
The design space excludes training clusters, robotics, and real-time control systems. It focuses on workloads where activation speed and energy availability outweigh fiber proximity.
The investable exposure is not orbital megastructures. It is the enabling infrastructure that would make such deployment viable.
Launch cadence, communication efficiency, silicon roadmap adaptation, and procurement anchors determine whether that segment expands or remains specialized.
This essay evaluates whether those variables are beginning to align.
Why This Matters for Capital
AI scaling is increasingly constrained by physical infrastructure rather than model design. As rack densities rise and data center loads approach utility-scale demand, interconnection queues, transformer lead times, and permitting cycles have become gating variables. When infrastructure becomes the bottleneck, capital searches for alternatives.
This is neither a near-term migration of hyperscale cloud to orbit nor a distant science-fiction scenario. The relevant horizon is the next five to ten years, when terrestrial grid bottlenecks, rising inference energy demand, and improving launch economics may intersect. Elements of this shift are already visible in adjacent infrastructure investments.
Orbital compute does not need to replace hyperscale to matter. It only needs to capture a subset of energy-dominant, latency-tolerant workloads to justify specialized deployment. A modest reallocation of inference demand toward energy-constrained use cases would constitute a meaningful market.
It will not train frontier models or support millisecond control systems. It does not compete with fiber-bound terrestrial architectures. The viable design space is narrower: latency-tolerant workloads where power density is the binding constraint and bandwidth requirements remain bounded.
The investment exposure is not ownership of monolithic “space data centers.” It sits earlier and lower in the stack:
Radiation-tolerant compute packaging
Vacuum-optimized thermal systems
High-gain communications hardware
Inter-satellite networking layers
Orchestration software
Replacement logistics tied to launch cadence
Many of these layers are investable before orbital compute reaches scale. They also participate in broader space infrastructure expansion regardless of whether orbital compute becomes hyperscale.
A natural objection is that terrestrial innovation may relieve the constraint first. Additional generation capacity, improved cooling, and grid reform will expand supply. Yet energy abundance does not eliminate delivery friction. Transmission buildout, interconnection approvals, and local siting remain political and geographic constraints. Activation timelines may continue to lag compute procurement cycles.
The principal risk is coordination failure. Without sustained launch cadence, communication efficiency, silicon adaptation, and procurement anchors, the wedge remains specialized rather than systemic.
With that frame established, the practical question becomes: what is actually breaking on Earth?
The Ceiling: What Is Actually Breaking on Earth?
For most of the past decade, AI scaling was constrained by silicon. More GPUs, higher-bandwidth memory, advanced packaging, and greater parallelism defined the frontier.
That constraint is shifting. The emerging bottlenecks are physical.
Power Density
AI rack density is rising rapidly.
Hopper-era racks operated in the 40–80 kilowatt range. Blackwell-class systems are approaching 120–140 kilowatts per rack. Roadmaps for Rubin and subsequent architectures project configurations exceeding 200 kilowatts, with some projections approaching 600 kilowatts later this decade.
At these densities, individual campuses request hundreds of megawatts. Utilities are now evaluating load applications comparable to those of small cities.
The constraint is not national generation capacity in aggregate. It is localized delivery. Substations, transformers, and distribution networks were not designed for concentrated demand at this scale.
The U.S. generation interconnection queue now ranges between roughly 2,300 and 2,600 gigawatts. In PJM, the core region of the U.S. data center market, projected load growth through 2040 exceeds 600 terawatt-hours, with a significant share attributed to data center expansion.
Median interconnection timelines approach five years. Transformer supply deficits near 30 percent have extended lead times for large units to two to four years.
AI clusters can be procured in months. Grid upgrades require multi-year planning cycles. That mismatch introduces deployment risk that capital must price.
This is less a shortage of energy than a misalignment between compute procurement cycles and infrastructure activation speed.
Cooling Limits
Higher rack density increases thermal density.
Air cooling approaches practical limits at these power levels. Direct-to-chip liquid cooling and advanced thermal systems are becoming standard in high-performance facilities.
These systems add dependencies. Water supply, treatment capacity, discharge regulation, and environmental review processes become part of the infrastructure stack.
A 100-megawatt facility can consume on the order of two million liters of water per day. At the gigawatt campus scale, usage approaches that of a mid-sized municipality.
In water-constrained regions such as parts of Arizona and Virginia, water rights are politically sensitive and often contested. Thermal management becomes a siting and permitting variable, not just an engineering decision.
Permitting and Political Risk
Large AI data centers are industrial assets measured in hundreds of megawatts.
They compete with residential and commercial customers for grid allocation. They trigger debate over land use, water consumption, noise, and long-term grid stability.
As of early 2026, more than 300 state-level bills referencing data centers have been introduced across over 30 states. Several jurisdictions have imposed temporary moratoriums, revised rate classes, or added grid-impact review requirements. In some regions, large new loads face curtailment during peak stress unless paired with dedicated generation.
These measures do not halt expansion. They increase activation uncertainty and timeline variability.
Ireland’s 2025 decision requiring large data centers to provide 100 percent on-site or proximate generation illustrates how constrained jurisdictions may respond.
Even when approvals are granted, extended review cycles add delay. Delay raises financing costs and reduces deployment predictability.
The issue is not whether AI demand exists. It is whether existing infrastructure can absorb that demand at the pace deployment cycles require.
As the constraint shifts from chip design to electricity delivery and heat removal, location becomes part of system architecture.
That is the ceiling orbital compute attempts to arbitrage.
The Orbital Wedge
If terrestrial infrastructure constraints create pressure, the relevant question is not whether compute can operate in orbit. It is under what conditions that deployment becomes rational.
Orbital compute does not compete on the lowest cost per megawatt. It competes on activation speed under constraint. When the cost of delay in terrestrial deployment exceeds the capital and replacement premium required in orbit, the tradeoff changes.
Most workloads do not meet that threshold. The framework below identifies the narrow subset that might.
Three variables determine plausibility: latency sensitivity, bandwidth required, and power intensity.
The chart maps the first two. Power acts as a gating filter.

Axis A: Latency Sensitivity
Some AI systems operate in feedback loops measured in milliseconds. Autonomous driving, robotics, and industrial control require sub-50 millisecond response times. Interactive applications typically target sub-second responsiveness.
Orbital compute introduces an unavoidable transmission delay. Once routing and processing are included, response times exceed the tolerances required for real-time control.
Millisecond systems belong near the physical processes they govern. Orbit is structurally misaligned with real-time physical AI.
Axis B: Bandwidth Required
Many workloads are data-movement dominant rather than compute-dominant. Continuous high-resolution video, LiDAR pipelines, and sensor-heavy robotics generate large volumes of raw data.
Transmitting terabytes of telemetry between Earth and orbit erodes any advantage.
Orbital bandwidth is constrained by spectrum allocation, ground station density, and link capacity. Workloads requiring sustained high-throughput exchange are poor candidates.
Workloads that process data locally and transmit compressed outputs are more compatible.
Axis C: Power Intensity
Power intensity acts as the first filter.
Certain workloads consume substantial compute energy while moving modest data volumes and tolerating delay. These systems are compute-dominant rather than bandwidth-dominant.
Only when power availability becomes the binding constraint does the latency–bandwidth tradeoff become relevant.
The framework therefore isolates power-intensive workloads first. Within that subset, latency and bandwidth determine plausibility.
Defining the Wedge
Orbital deployment becomes plausible only when three conditions align:
Power demand dominates system cost
Latency tolerance extends into seconds or minutes
Data transfer volumes remain bounded relative to compute intensity
Under those conditions, model weights can be uploaded periodically, inference can execute under continuous solar power, and compact outputs can be transmitted back to Earth.
The economic center of gravity shifts from fiber density and interconnect speed to energy availability and replacement cadence.
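As a rough screen, these three conditions can be written down directly. The sketch below is illustrative only; the thresholds are placeholder assumptions, not empirical cutoffs.

```python
# Illustrative screen for the orbital wedge. Thresholds are hypothetical
# placeholders, not empirical cutoffs.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    power_share_of_cost: float    # fraction of total cost driven by energy delivery
    latency_tolerance_s: float    # acceptable end-to-end response time, seconds
    gb_moved_per_gpu_hour: float  # data exchanged with Earth per GPU-hour of compute

def inside_wedge(w: Workload) -> bool:
    power_dominant    = w.power_share_of_cost >= 0.5     # power demand dominates system cost
    delay_tolerant    = w.latency_tolerance_s >= 60.0    # tolerance in minutes, not milliseconds
    bandwidth_bounded = w.gb_moved_per_gpu_hour <= 10.0  # compute-dominant, not data-movement-dominant
    return power_dominant and delay_tolerant and bandwidth_bounded

candidates = [
    Workload("batch summarization", 0.6, 3600, 2),
    Workload("autonomous driving stack", 0.2, 0.05, 500),
]
for w in candidates:
    print(w.name, "-> inside wedge" if inside_wedge(w) else "-> outside wedge")
```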
Workloads Inside the Wedge
Two illustrative categories:
Orbital data triage, in which imagery and sensor streams collected in space are filtered and analyzed in place, with only high-value detections and summaries downlinked
Background inference, such as delay-tolerant batch workloads for embeddings, summarization, and agent pipelines, where model weights are uploaded periodically and compact outputs are returned
In both cases, raw data is processed near the energy source, and only higher-value outputs are transmitted.
Where It Does Not Work
Orbital compute is poorly suited for:
Autonomous driving
Robotics and physical AI
Industrial closed-loop control
Real-time edge inference
Continuous raw data streaming
These systems require low latency and sustained high bandwidth. Their economics favor proximity.
From Cloud Logic to Triage Logic
Terrestrial cloud assumes abundant bandwidth and persistent connectivity. Orbital systems operate with scheduled transmission windows and constrained downlink capacity.
As a result, orbital architectures must decide what matters before transmission. Selection precedes delivery.
This is not traditional cloud logic. It is triage logic.
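A minimal sketch of what triage logic implies in practice, assuming a fixed downlink budget per transmission window; the outputs, sizes, and value scores below are hypothetical.

```python
# Minimal sketch of triage before transmission: rank candidate outputs by
# value per megabyte and fill a fixed downlink budget. Values are illustrative.

def select_for_downlink(outputs, window_budget_mb):
    """outputs: list of (name, size_mb, value_score). Greedy by value density."""
    ranked = sorted(outputs, key=lambda o: o[2] / o[1], reverse=True)
    selected, used = [], 0.0
    for name, size_mb, value in ranked:
        if used + size_mb <= window_budget_mb:
            selected.append(name)
            used += size_mb
    return selected, used

outputs = [
    ("wildfire detection summary", 5, 9.0),
    ("raw multispectral tile",   800, 3.0),
    ("shipping anomaly report",   12, 7.5),
]
picked, used_mb = select_for_downlink(outputs, window_budget_mb=100)
print(picked, f"-> {used_mb} MB of a 100 MB window")
```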
The segment is narrow by design. Its viability depends on workload discipline, not technological enthusiasm.
Where It Breaks
Orbital compute may relieve certain terrestrial constraints while introducing others. The wedge is narrow for structural reasons.
1. Lifecycle Becomes Infrastructure
In orbit, hardware degradation is structural, not incidental. Radiation exposure, thermal cycling, and material fatigue impose finite lifetimes. Shielding extends survivability but does not remove decay.
Terrestrial facilities rely on on-site maintenance. Orbital systems rely on replacement launches.
Maintenance becomes cadence. Launch availability becomes an operating rhythm rather than a contingency.
The capital model shifts accordingly. Depreciation aligns more closely with silicon cycles. Asset turnover becomes planned. If launch cadence tightens or pricing rises, reliability and margins are directly affected.
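A back-of-envelope view of how cadence enters the capital model, assuming straight-line treatment of hardware and launch cost over on-orbit life; every figure below is an assumption rather than a vendor quote.

```python
# Back-of-envelope annualized cost of an orbital compute node under a
# replacement-cadence model. All inputs are hypothetical assumptions.

def annualized_node_cost(hardware_cost, launch_cost, lifetime_years, annual_ops):
    """Straight-line view: (hardware + launch) spread over on-orbit life, plus ops."""
    return (hardware_cost + launch_cost) / lifetime_years + annual_ops

# A 3-year life (aligned with silicon cycles) versus a 5-year life, same hardware.
for life in (3, 5):
    cost = annualized_node_cost(hardware_cost=40e6, launch_cost=15e6,
                                lifetime_years=life, annual_ops=5e6)
    print(f"{life}-year life: ~${cost / 1e6:.1f}M per year")
```

Shorter hardware lifetimes translate directly into higher annualized cost, which is why launch cadence and pricing sit inside the operating model rather than beside it.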
2. Congestion and Risk Pricing
Low Earth orbit is increasingly dense. Additional infrastructure increases aggregate mass and cross-sectional exposure to debris.
Collision probability, regulatory scrutiny, and insurance pricing scale with orbital congestion. Insurance is no longer marginal for high-value, compute-heavy missions.
These variables do not prohibit deployment. They introduce recurring cost components that must be embedded in operating assumptions.
3. Politics Does Not Disappear
Relocating infrastructure to orbit does not remove governance. It changes jurisdiction.
Terrestrial facilities face land use and water debates. Orbital systems face spectrum allocation, brightness concerns, and national security implications.
Large constellations have already drawn objections from astronomical communities. Additional reflective infrastructure increases scrutiny.
Jurisdictional friction may also emerge. Nations may resist reliance on foreign-controlled orbital processing layers, particularly where defense or sensitive data is involved.
Infrastructure does not escape politics by leaving Earth. It enters a different regulatory regime.
4. Fragmentation and Coordination Risk
Satellite systems today remain largely proprietary. Interoperability across constellations is limited. There is no shared orbital compute layer.
Without common standards for routing, task orchestration, and ground integration, orbital compute fragments into parallel systems.
Fragmentation erodes capital efficiency. Redundant infrastructure multiplies cost. Coordination failure constrains scale even if deployment is technically feasible.
The Structural Risk
The primary risk is not physical impossibility. It is institutional misalignment.
If terrestrial constraints tighten while coordination remains fragmented, orbital compute remains specialized rather than systemic.
Its scale depends less on physics than on sustained alignment across launch providers, silicon vendors, network operators, and regulators.
Why Inference Is Different from Training
AI systems are often discussed as a single category, but training and inference impose different architectural demands.
Training
Training constructs the model. It requires massive datasets, continuous data exchange, tightly synchronized accelerator clusters, high-bandwidth interconnect, and extremely low internal latency.
During training, thousands of GPUs exchange gradients continuously. Communication dominates system behavior. Fiber density, cluster topology, and synchronization speed define the frontier.
This architecture is deeply terrestrial. It depends on dense fiber networks, stable grid infrastructure, and tightly coupled hardware environments.
Orbit is poorly aligned with this workload profile.
Inference
Inference uses the trained model.
Once model weights are produced, they can be deployed across many independent tasks. Communication intensity declines relative to training. Requests are often loosely coupled and can execute asynchronously.
Inference typically involves independent or lightly coupled queries, smaller data exchanges, reduced reliance on synchronized interconnect, and greater tolerance for batch execution.
The dominant constraint shifts. Training scales with communication efficiency. Inference increasingly scales with available compute energy.
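An order-of-magnitude contrast makes the point, assuming a 70-billion-parameter model and illustrative request volumes; none of these figures are measurements.

```python
# Order-of-magnitude contrast between training and inference communication.
# Model size, step counts, and request volumes are illustrative assumptions.

params = 70e9                  # 70B-parameter model (assumption)
bytes_per_param = 2            # fp16 weights/gradients

# Training: every optimizer step, each accelerator participates in a gradient
# exchange on the order of the model size.
steps_per_day = 20_000
train_bytes_per_gpu_day = steps_per_day * params * bytes_per_param
print(f"training sync per GPU-day: ~{train_bytes_per_gpu_day / 1e12:.0f} TB")

# Inference: a request moves kilobytes of input and output while the heavy
# computation stays local to the node.
requests_per_day = 1_000_000
bytes_per_request = 20_000     # prompt plus response, ~20 KB (assumption)
infer_bytes_per_day = requests_per_day * bytes_per_request
print(f"inference I/O per node-day: ~{infer_bytes_per_day / 1e9:.0f} GB")
```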
Why This Matters for Orbit
An orbital node is unlikely to function as a training cluster. Synchronization demands alone make it inefficient relative to fiber-bound terrestrial systems.
Inference, particularly delay-tolerant and compute-intensive workloads, aligns more closely with orbital constraints. Model weights can be uploaded periodically. Queries can execute without continuous inter-node synchronization. Outputs can be compressed and transmitted during scheduled windows.
This does not make inference universally suitable for orbit. It sharpens the wedge defined earlier: workloads that tolerate delay, are energy-dominant, and require bounded bandwidth.
If inference begins to operate in orbit for triage or background processing, a secondary question follows. What portion remains there as deployment scales?
The answer depends on cost structure, launch cadence, and coordination across the stack.
The Physics of Orbital Compute
Once the wedge is defined economically, the next issue is practical: can orbital infrastructure operate reliably at scale, and what does it cost?
An orbital compute node is capital-intensive at deployment. Shielding mass, radiator surface area, power systems, structural reinforcement, and launch logistics all increase upfront cost relative to terrestrial buildout. This is infrastructure construction, not incremental expansion.
Deployment is not inexpensive. The relevant question is whether constraint relief offsets structural overhead.
Power and Orbit Geometry
In low Earth orbit, solar arrays receive the full solar constant of roughly 1,361 W/m². In dawn–dusk sun-synchronous orbits, exposure can exceed 95% annually depending on altitude and inclination.
The distinction is not only irradiance but capacity factor. Terrestrial solar farms cycle through night and weather variability. Orbital arrays, when positioned appropriately, experience more predictable exposure. These dynamics have revived interest in space-based solar power concepts, though those remain experimental and capital-intensive.
Predictability reduces reliance on deep battery cycling and stabilizes power availability for compute-heavy workloads. It does not eliminate mass. Panels, trusses, and power management systems add to launch weight.
Continuity improves. Structural mass increases.
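A simplified sizing calculation makes the tradeoff concrete, assuming a panel efficiency and illumination fraction and ignoring structural and power-management mass.

```python
# Simplified solar array sizing for a 1 MW orbital compute node.
# Efficiency and illumination fraction are assumptions; mass overheads ignored.

solar_constant = 1361.0        # W/m^2 above the atmosphere
panel_efficiency = 0.30        # assumed cell efficiency
illumination_fraction = 0.95   # dawn-dusk sun-synchronous orbit (assumption)

target_power_w = 1_000_000
usable_w_per_m2 = solar_constant * panel_efficiency
array_area_m2 = target_power_w / usable_w_per_m2
avg_delivered_mw = target_power_w * illumination_fraction / 1e6

print(f"array area: ~{array_area_m2:,.0f} m^2")
print(f"average delivered power: ~{avg_delivered_mw:.2f} MW")
```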
Cooling In Vacuum
Power density creates thermal constraints.
Terrestrial facilities rely on convection and liquid cooling systems that ultimately dissipate heat into air or water. In orbit, heat rejection occurs through radiation alone.
Radiative cooling requires surface area. As compute density rises, radiator size scales with it. Megawatt-class systems require large deployable radiators, adding mass and structural complexity.
Cooling in orbit is therefore a geometry and launch-economics problem rather than a fluid logistics problem. Radiator mass directly affects cost per deployed megawatt.
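The geometry point follows from the Stefan–Boltzmann law: radiated power scales with surface area and the fourth power of temperature. The temperatures and emissivity below are assumptions, and Earth and albedo heat loads are ignored.

```python
# Radiator area needed to reject waste heat purely by radiation
# (Stefan-Boltzmann law). Temperatures and emissivity are assumptions.

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.9
t_radiator_k = 320.0      # radiating surface temperature (assumption)
t_sink_k = 4.0            # deep-space background; ignores Earth/albedo loads

heat_load_w = 1_000_000   # 1 MW of waste heat
flux_w_per_m2 = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)
area_m2 = heat_load_w / flux_w_per_m2

print(f"net flux: ~{flux_w_per_m2:.0f} W/m^2")
print(f"radiator area (one-sided): ~{area_m2:,.0f} m^2")
```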
Radiation and Hardware Lifecycle
At altitudes between roughly 1,200 and 1,600 kilometers, trapped proton flux increases radiation exposure relative to Earth’s surface.
Commercial silicon degrades under sustained radiation unless shielded or qualified for tolerance. Shielding reduces dose rates but increases launch mass.
Operational lifetimes are therefore likely shorter than terrestrial facilities. Replacement becomes scheduled.
Depreciation aligns more closely with silicon innovation cycles. Nodes must be designed for turnover.
Modular Deployment
A plausible architecture is modular rather than monolithic. Independent nodes can be launched incrementally, networked, and replaced on cadence.
This reduces single-point failure exposure and allows scaling through repetition rather than singular megastructures.
The tradeoff is operational complexity. Replacement cadence becomes part of baseline operations.
Engineering Does Not Remove Risk
Orbital systems must account for debris exposure, radiation degradation, thermal cycling, launch delays, and insurance pricing. These variables shape operating margins and capital structure.
Physics does not prevent deployment. It imposes mass, cost, and lifecycle discipline.
Viability depends on whether the constraint relief provided justifies that discipline.
The Hidden Constraint: Communications
Even if orbital compute relieves power constraints, its value depends on communication. Processing in orbit has limited economic relevance unless outputs integrate efficiently into terrestrial networks.
Power enables computation. Communication determines utility.
Throughput and Economic Viability
The orbital inference model assumes large data volumes are processed locally, and only compact outputs are transmitted to Earth. That architecture works only if the downlink capacity scales with output demand.
Several variables shape this capacity:
Spectrum allocation limits usable RF bandwidth
Optical links offer higher throughput but require precise alignment and depend on atmospheric conditions and ground infrastructure
Ground station density determines transmission windows and sustained rates
Communication becomes a design constraint, not an afterthought. If output transmission cannot scale relative to compute capacity, the energy advantage in orbit becomes irrelevant.
Aperture, Mass, and Launch Cost
High-bandwidth transmission depends on antenna gain. Gain increases with aperture size. Larger apertures require structural support to maintain surface geometry.
In orbit, structural reinforcement translates directly into mass. Mass translates into launch cost.
As compute capacity increases, communication hardware can become a meaningful share of total system mass. Larger apertures improve throughput but increase deployment cost. Smaller apertures reduce mass but constrain output rate.
Economic viability depends on maintaining a favorable mass-to-throughput ratio.
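The aperture–gain relationship can be sketched with the standard parabolic-antenna approximation; the frequency and efficiency values below are illustrative.

```python
# Antenna gain versus aperture, using the standard parabolic approximation
# G = eta * (pi * D / lambda)^2. Frequency and efficiency are assumptions.
import math

def dish_gain_db(diameter_m, frequency_hz, efficiency=0.6):
    wavelength = 3e8 / frequency_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

freq = 26e9  # Ka-band downlink (illustrative)
for d in (0.5, 1.0, 2.0):
    print(f"D = {d} m -> gain ~{dish_gain_db(d, freq):.1f} dBi")
```

Each doubling of aperture adds roughly 6 dB of gain but quadruples dish area, which is the mass-to-throughput tension described above.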
Lightweight Deployable Structures
Recent research into lightweight deployable reflectors, including kirigami-inspired thin-film geometries, seeks to increase aperture efficiency without proportionally increasing structural mass.

The technical detail matters less than the economic implication. Improving aperture-to-mass efficiency reduces cost per transmitted bit.
Communication efficiency influences total delivered inference cost, not just system design.
Network Architecture and Standards
Hardware does not determine scalability alone. Orbital networks remain largely proprietary. Interoperability across constellations is limited.
Without shared standards for routing, task orchestration, and data formatting, compute nodes risk functioning as isolated assets rather than as an integrated layer.
Terrestrial cloud scaled in part because common abstractions reduced coordination friction. Orbital compute will require similar alignment to achieve capital efficiency.
The communications constraint is both physical and architectural. Power availability is insufficient if integration into terrestrial data flows remains fragmented.
The Convergence Thesis
Orbital compute is technically feasible. Its investability depends on scale. Scale depends on institutional coordination.
Three constraint curves and three coordinating forces determine whether the wedge expands.
Constraint Curve 1: Terrestrial Infrastructure Friction
AI load growth is accelerating while infrastructure activation cycles lengthen.
Interconnection queues stretch into multiple years. Transformer procurement remains constrained. In major data center hubs, large-load approvals face delay.
The issue is not generation capacity in aggregate. It is localized activation friction. When compute procurement cycles shorten while grid upgrades still require multi-year planning, deployment risk rises.
If this friction persists, alternative architectures draw attention.
Constraint Curve 2: Launch Economics
Orbital deployment becomes plausible only if launch cost and cadence evolve together.
Falcon 9 pricing through 2026 implies roughly $74 million for a dedicated launch. Starship’s widely discussed sub-$200/kg economics remain contingent on full reusability and sustained cadence.
At the heavy-lift scale, several variables change:
Multi-megawatt clusters can fit within a single launch envelope
Additional shielding and radiator mass become tolerable
Replacement cadence becomes schedulable
Cost compression does not make orbit inexpensive. It reduces structural infeasibility.
Without predictable cadence and pricing, orbital compute remains experimental.
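A rough per-kilogram comparison, using the pricing cited above and assumed payload and node masses, shows why cadence and reuse matter.

```python
# Rough launch-cost-per-kilogram comparison. Payload and node masses are
# assumptions drawn from public figures, not quotes.

falcon9_price = 74e6          # dedicated launch price cited above
falcon9_leo_kg = 17_500       # approximate reusable-configuration payload to LEO (assumption)
starship_target_per_kg = 200  # widely discussed aspirational figure

falcon9_per_kg = falcon9_price / falcon9_leo_kg
node_mass_kg = 15_000         # hypothetical shielded, radiator-equipped compute node

print(f"Falcon 9: ~${falcon9_per_kg:,.0f}/kg -> node launch ~${falcon9_per_kg * node_mass_kg / 1e6:.0f}M")
print(f"Starship target: ${starship_target_per_kg}/kg -> node launch ~${starship_target_per_kg * node_mass_kg / 1e6:.0f}M")
```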
Constraint Curve 3: Workload Composition
Training remains fiber-bound and communication-intensive. Inference workloads, particularly background and batch processes, continue to expand.
As inference proliferates across agents, embeddings, and summarization pipelines, energy consumption becomes a larger share of total compute cost relative to synchronized interconnect.
This shift is conditional. If AI usage remains dominated by tightly coupled training clusters, orbital compute remains marginal. If delay-tolerant inference expands, the wedge strengthens.
Coordinating Forces
Constraint curves create pressure. Coordination determines whether pressure converts to scale.
Vertical Integration
An actor controlling launch, satellite networking, and internal AI demand can internalize early inefficiencies. Vertical integration allows deployment before broad market validation.
This does not guarantee scale. It creates a proving ground.
Silicon Roadmap and Standards
Orbital constraints are entering semiconductor planning cycles.
At GTC 2026, NVIDIA introduced the Vera Rubin Space-1 module, engineered for size-, weight-, and power-constrained environments, with partnerships including Axiom Space, Starcloud, and Planet.
The engineering challenges are structural. Radiation tolerance, launch survivability, and radiative thermal management require different packaging and system architecture than terrestrial deployments.
This does not signal imminent orbital hyperscale clusters. It indicates that orbital operating conditions are being incorporated into roadmap discussions.
Standardization will determine scalability. Terrestrial AI benefited from shared abstractions that reduced fragmentation and improved capital efficiency. Without comparable coordination across hardware, networking, and orchestration layers, orbital deployments remain isolated.
Vendor attention reflects evaluation, not inevitability.
Procurement Anchors
Large-scale public procurement has historically catalyzed infrastructure markets. Current spending allocations for distributed space systems and resilient architectures suggest sustained public-sector interest in orbital capabilities.
Procurement anchors can absorb early inefficiencies, define interoperability standards, and support replacement cadence economics.
Commercial markets often follow established infrastructure layers.
The Conditional Equation
Orbital compute scales only if:
Terrestrial infrastructure friction persists
Heavy-lift cadence sustains cost compression
AI workload composition broadens toward delay-tolerant inference
Silicon vendors define radiation-aware standards
Procurement anchors support early deployment
Remove one variable and the design space narrows.
Scale depends on alignment rather than novelty.
Strategic Implication
The opportunity is not orbital data centers as a headline concept. It is the enabling infrastructure that accrues value if the architecture expands.
Orbital compute does not replace terrestrial hyperscale. It does not train frontier models or support millisecond control loops. Its relevance is limited to workloads where terrestrial activation friction becomes binding.
The investable exposure sits below the application layer.
Where Capital Enters
Relevant categories include:
Radiation-tolerant compute packaging
Vacuum-optimized thermal systems
High-gain, mass-efficient communications hardware
Inter-satellite networking middleware
Orchestration layers for distributed nodes
Replacement logistics integrated with launch cadence
These resemble telecom and semiconductor supply chains more than venture software platforms. They also participate in broader space infrastructure growth, independent of orbital compute reaching hyperscale.
Diversification across end markets reduces dependence on a single outcome.
The Economic Asymmetry
Orbital compute is unlikely to be cheaper per watt than terrestrial facilities. Launch, shielding, and structural mass impose capital intensity.
The asymmetry emerges when delay carries economic cost.
If interconnection timelines extend, permitting uncertainty increases financing risk, or local siting constraints slow deployment, capacity that operates outside terrestrial delivery bottlenecks gains strategic value.
The premium reflects insulation from activation friction, not inherent efficiency.
Insulation matters only when constraints tighten.
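One way to frame the asymmetry is a break-even comparison between the orbital premium and the cost of waiting for terrestrial activation; the figures below are purely illustrative.

```python
# Break-even framing: orbital premium vs the cost of delayed terrestrial
# activation. All numbers are illustrative assumptions.

orbital_premium_per_mw = 8e6   # extra capex plus replacement cost per MW in orbit (assumption)
value_per_mw_month = 0.5e6     # gross margin foregone per MW-month of delay (assumption)

breakeven_delay_months = orbital_premium_per_mw / value_per_mw_month
print(f"orbital premium pays for itself if terrestrial activation slips by "
      f"~{breakeven_delay_months:.0f} months or more")
```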
The Risk
The primary risk is misalignment rather than engineering failure.
Scale falters if:
No shared abstraction layer emerges
Interoperability remains fragmented
Replacement cadence overwhelms operating economics
Spectrum allocation or regulatory friction delays integration
Any one constraint narrows the segment. In combination, they confine the market to specialized use cases.
Capital Discipline
Orbital compute should be treated as a conditional infrastructure thesis.
If terrestrial constraints ease, the wedge contracts. If launch cadence stalls, margins compress. If workload composition does not shift toward inference-heavy models, the addressable segment remains limited.
Capital allocation belongs in segments with optionality beyond orbital compute alone.
The thesis is alignment under constraint, not inevitability.
Current Signals
Activity across adjacent layers suggests the architectural components described above are moving beyond theory.
On the compute side, startups such as Sophia Space are developing orbital edge processing systems focused on in-space data analysis and AI acceleration. Starcloud’s plan to deploy AWS Outposts hardware in orbit reflects early experimentation with extending terrestrial compute infrastructure beyond Earth.
The communications layer is further along. Companies such as Kepler Communications are deploying optical relay constellations designed to support higher-throughput inter-satellite links.
Modular spacecraft platforms are also attracting capital. Firms like Apex are productizing satellite buses capable of hosting repeatable payloads, enabling incremental deployment and scheduled replacement consistent with distributed architectures.
These developments do not confirm large-scale orbital compute. They indicate that capital and engineering effort are already flowing into enabling layers — compute, communications, modular platforms, and standards — that would be required if deployment scales.
Closing Frame
Orbital compute is a narrow design space within a broader infrastructure transition. It is capital-intensive, technically constrained, and dependent on coordination across launch, silicon, communications, and procurement.
Its relevance grows only if the terrestrial infrastructure lag persists while launch economics and inference workloads evolve in parallel.
If infrastructure constraints ease, the design space contracts. Coordination failure produces fragmentation. Sustained alignment across the stack turns it into infrastructure.
The central issue is not whether GPUs can operate in orbit. It is whether energy delivery, activation timelines, and workload composition shift enough to justify alternative architectures.
Any durable scale would emerge from alignment under constraint, not novelty alone.
Best,
Chris Kong
Founder & GP, PaperJet Ventures
P.S. Thank you to Jen Liao, Bola Adegbulu, and Jaireh Tecarro for the perspective and feedback that sharpened this piece. Thanks also to Jonathan Stock, Ellen Chang, and Igor Bargatin for industry and research insight.




