
Wasm multimemory support for base instructions

4ac3b92

Source/JavaScriptCore/wasm/WasmFunctionParser.h

template<> auto FunctionParser<Context>::load(LoadOpType op) -> PartialResult
{
+ uint32_t memoryIndex;
+ WASM_PARSER_FAIL_IF(!parseVarUInt32(memoryIndex), "can't get memory index");
+ WASM_PARSER_FAIL_IF(memoryIndex >= m_info.memoryCount(), "load: illegal memory index ", memoryIndex);
uint32_t alignment;
WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
uint32_t offset;
WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
Value value;
- WASM_TRY_ADD_TO_CONTEXT(load(op, alignment, offset, value));
+ WASM_TRY_ADD_TO_CONTEXT(load(op, memoryIndex, alignment, offset, value));
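The memory index, like the alignment and offset immediates around it, is a LEB128-encoded varuint32. A minimal standalone sketch of the decoding that `parseVarUInt32` performs (hypothetical signature; not WebKit's actual parser):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>

// Decode a LEB128-encoded varuint32 starting at bytes[offset].
// Advances `offset` past the immediate; returns std::nullopt on
// truncated or overlong (> 5 byte) input.
std::optional<uint32_t> parseVarUInt32(const uint8_t* bytes, size_t length, size_t& offset)
{
    uint32_t result = 0;
    for (unsigned shift = 0; shift < 35; shift += 7) {
        if (offset >= length)
            return std::nullopt; // truncated stream
        uint8_t byte = bytes[offset++];
        result |= static_cast<uint32_t>(byte & 0x7f) << shift;
        if (!(byte & 0x80))
            return result; // high bit clear: last byte of the immediate
    }
    return std::nullopt; // more than 5 bytes: malformed
}
```

For a memory index of 0 this is a single `0x00` byte, which is why the pre-multimemory encoding could treat the slot as a reserved zero byte.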

Source/JavaScriptCore/wasm/WasmBBQJIT64.h

template <typename Func>
auto emitCheckAndPrepareAndMaterializePointerApply(Value pointer, uint32_t memoryIndex, uint32_t uoffset, Func&& func)
{
+ if (memoryIndex) {
+ // Non-zero memory index: fetch base and size from the instance's memory table.
+ loadWebAssemblyGlobalState(memoryIndex);
+ }
// ... bounds check uses the loaded base/size
}

The WebAssembly multimemory proposal (phase 4) allows a module to declare multiple independent linear memory regions. Each load/store and bulk memory instruction encodes a memory index immediate in the bytecode. Prior to this change, WebKit treated that immediate as a reserved zero byte and rejected non-zero values. This commit adds end-to-end multimemory support across all three Wasm execution tiers — BBQ (baseline JIT), OMG (optimizing JIT), and IPInt (interpreter). The function parser now reads the memory index from every load/store and bulk memory instruction, and each tier dispatches to a fast path for memory 0 (preserving existing behavior) and a new slow path for non-zero indices that fetches base and bounds from the instance's memory table.

Instruction stream:  i32.load  memidx=N  offset  ...
                                  │
                          FunctionParser::load()
                                  │
                    ┌─────────────┴─────────────┐
                 memidx==0                   memidx!=0
                 (fast path)               (slow path)
                    │                           │
             use cached                 fetch memories[N].base
             base/size regs            fetch memories[N].boundsSize
                    │                           │
                    └──────────┬────────────────┘
                       emitCheckAndPreparePointer()
                               │
                          bounds check + deref

This is a large surface expansion: every memory access instruction in every Wasm tier now has a new code path controlled by a parsed memory index, and bulk memory operations (memory.fill, memory.copy, memory.init) can now operate across distinct linear memory regions. Previously invalid module bytecode (a non-zero memory index) is now valid, and the error messages shift from "auxiliary byte should be zero" to "illegal memory index", reflecting the semantic change. The cross-memory memory.copy path is entirely new logic in which source and destination can reference different-sized memories.
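A cross-memory copy must bounds-check the source and destination ranges against their own (possibly different) sizes before any byte moves. A minimal sketch of that check, under the assumption that each memory is just a byte buffer (hypothetical helper, not WebKit's implementation):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy `length` bytes from srcMemory[srcOffset] to dstMemory[dstOffset].
// Each range is bounds-checked against its own memory's size; returns
// false (the trap case) if either range is out of bounds. memmove
// handles the self-overlap case when both operands name the same memory.
bool memoryCopy(std::vector<uint8_t>& dstMemory, uint64_t dstOffset,
                std::vector<uint8_t>& srcMemory, uint64_t srcOffset,
                uint64_t length)
{
    if (srcOffset + length > srcMemory.size())
        return false; // source range exceeds source memory
    if (dstOffset + length > dstMemory.size())
        return false; // destination range exceeds destination memory
    std::memmove(dstMemory.data() + dstOffset, srcMemory.data() + srcOffset, length);
    return true;
}
```

The bug class to look for in review is exactly the one this sketch makes explicit: checking one range against the wrong memory's size when the two memories differ.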


New memory index dispatch across every access path in three JIT tiers — several edge cases in bounds checking and bulk operations are worth close investigation.
