The future of blockchain technology lies not only in processing traditional value transfers and smart contracts but also in deep integration with emerging computing paradigms such as artificial intelligence, the Internet of Things, and privacy computing. This integration will spawn entirely new application scenarios, driving Web3's evolution from financial infrastructure into a general-purpose computing platform. However, existing mainstream scaling solutions show clear architectural limitations in supporting heterogeneous computing.

Layer 1 scaling solutions attempt to improve performance by directly modifying the blockchain's underlying protocol: increasing block capacity, shortening block times, or transitioning from proof-of-work to more efficient consensus mechanisms such as proof-of-stake. While these changes enhance network performance to some extent, they still optimize within the original single execution environment and do not fundamentally expand the system's ability to accommodate heterogeneous computing tasks. For example, even if Ethereum's block size were increased tenfold, it could still only execute EVM bytecode; it cannot natively support GPU-accelerated machine learning tasks or privacy-computing operations that require special hardware.

Layer 2 scaling solutions adopt a different strategy, building new homogeneous layers on top of the main chain to move the computational burden off-chain. Taking Rollup technology [6] as an example, it packages hundreds or thousands of off-chain transactions into a "batch," generates a succinct proof (a fraud proof in Optimistic Rollup, a validity proof in ZK-Rollup), and then submits this proof and the resulting state root to the main chain.
The main chain confirms the validity of all transactions by verifying this proof, without needing to re-execute each transaction. While this design is ingenious, it rests on a key assumption: the data submitted to the main chain must be sufficient to fully reconstruct Layer 2's state. This data availability requirement essentially limits the extension scope to computing models homogeneous with the main chain; in other words, Layer 2 can only execute computation types that the main chain can understand and verify. When facing AI training tasks that require specialized hardware acceleration, high-performance games that require real-time response, or IoT applications that must process massive volumes of sensor data, the existing Layer 2 architecture proves inadequate. These application scenarios demand not just higher throughput but also support for heterogeneous execution environments, flexible resource scheduling, and specialized hardware acceleration.

This technical limitation has profound implications. It not only restricts blockchain technology's penetration into broader application scenarios but also hinders the integration of diverse computing resources envisioned by Web3. In the ideal Web3 world, various computing resources (whether CPU, GPU, FPGA, or quantum processors) should be able to coordinate and exchange value through a unified blockchain network. The homogeneous limitations of existing architectures, however, make this vision difficult to realize, leaving blockchain confined to relatively narrow application domains.
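The Rollup flow described above, batching transactions off-chain, committing to them, and letting the main chain verify a succinct proof instead of re-executing, can be illustrated with a minimal sketch. All names (`Batch`, `build_batch`, `main_chain_verify`) are hypothetical, the Merkle commitment is deliberately simplified, and the "proof" is a hash placeholder standing in for a real fraud or validity proof:

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    """SHA-256 stand-in for the chain's hash function."""
    return hashlib.sha256(data).digest()

@dataclass
class Batch:
    txs: list            # raw off-chain transactions (bytes)
    state_root: bytes    # L2 state root after applying the batch
    proof: bytes         # placeholder for a fraud/validity proof

def merkle_root(leaves: list) -> bytes:
    """Binary Merkle root over transaction hashes (duplicating the last
    node on odd levels), a common commitment scheme for rollup batches."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def build_batch(txs: list, new_state_root: bytes) -> Batch:
    # The sequencer commits to the ordered transactions and the resulting
    # state root; a real system would attach a SNARK or open a
    # fraud-proof challenge window instead of this hash placeholder.
    proof = h(merkle_root(txs) + new_state_root)
    return Batch(txs, new_state_root, proof)

def main_chain_verify(batch: Batch) -> bool:
    # The L1 contract checks the proof against the posted data without
    # re-executing any individual transaction.
    return batch.proof == h(merkle_root(batch.txs) + batch.state_root)
```

The point of the sketch is the asymmetry: building the batch touches every transaction, while verification on the main chain only checks one commitment against one proof.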
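The data availability assumption can also be made concrete: given only the transaction data posted to the main chain, anyone should be able to replay it and recover the full Layer 2 state. The sketch below assumes a trivial account-balance state model and hypothetical names (`apply_tx`, `reconstruct`); note that replay only works because the main chain's verifiers can interpret the transition rules, which is exactly the homogeneity constraint discussed above:

```python
import hashlib
import json

def apply_tx(state: dict, tx: dict) -> dict:
    """Deliberately simple transfer semantics: only state transitions
    whose rules the base layer can interpret are reconstructable."""
    new = dict(state)
    new[tx["from"]] = new.get(tx["from"], 0) - tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def state_root(state: dict) -> bytes:
    """Commit to the state via a canonical-JSON hash (a stand-in for a
    Merkle-Patricia state root)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).digest()

def reconstruct(genesis: dict, posted_batches: list) -> dict:
    # Anyone holding only the data posted on L1 can replay every batch
    # in order and recover the exact L2 state -- the data availability
    # guarantee that Rollups depend on.
    state = genesis
    for batch in posted_batches:
        for tx in batch:
            state = apply_tx(state, tx)
    return state
```

A GPU-accelerated training job or a hardware-dependent privacy computation has no such replayable, base-layer-interpretable transition function, which is why it falls outside this architecture.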