This Week in WebKit — March 14–20, 2026

405 commits | 3 security-relevant | 107 contributors

TL;DR

Three security fixes land this week: a high-severity use-after-free in the WebGL EXTDisjointTimerQueryWebGL2 extension (a microtask lambda captured its query parameter by raw reference), a high-severity PC desynchronization in JSC's Wasm in-place interpreter on variable-length LEB128 sub-opcodes (an effective validation bypass), and a medium-severity deflateEnd/inflateEnd cleanup mismatch in Compression Streams. On the development side: JSC implements await using from Explicit Resource Management, B3 gains a SIMD shuffle strength-reduction phase for ARM64, a Sec-Fetch-Site recomputation bug enabling CSRF bypass is fixed, an HTML parser hang at the DOM depth limit is resolved, back/forward navigation stops leaking cross-origin FrameState under Site Isolation, and DocumentFragment insertions become atomic.

Security Fixes

[1] Unsafe argument capture in EXTDisjointTimerQueryWebGL2::queryCounterEXT()

Severity: High | Component: WebGL | 84075f4

The lambda passed to queueMicrotask previously captured the WebGLQuery& query parameter by reference ([&]). The fix changes the capture to use a Ref smart pointer, ensuring the WebGLQuery object is ref-counted and kept alive until the microtask executes.

Source/WebCore/html/canvas/EXTDisjointTimerQueryWebGL2.cpp

     // A query's result must not be made available until control has returned to the user agent's main loop.
-    protect(protect(context->scriptExecutionContext())->eventLoop())->queueMicrotask(protect(context->scriptExecutionContext())->vm(), [&] {
-        query.makeResultAvailable();
+    protect(protect(context->scriptExecutionContext())->eventLoop())->queueMicrotask(protect(context->scriptExecutionContext())->vm(), [query = Ref { query }] {
+        query->makeResultAvailable();
     });

Patch Details

The fix is surgical: the lambda capture list changes from [&] (capture everything by reference) to [query = Ref { query }] (capture a ref-counted smart pointer to the query). The call site updates from query.makeResultAvailable() to query->makeResultAvailable() to dereference through the Ref<T> wrapper. No other logic changes.

Background

EXTDisjointTimerQueryWebGL2 is a WebGL 2.0 extension that provides GPU timer query functionality, allowing web content to measure GPU execution time. The queryCounterEXT() method schedules a microtask to make a query's result available asynchronously — per spec, the result must not be delivered until control returns to the main loop. WebKit's Ref<T> is a reference-counting smart pointer that prevents premature destruction; when a C++ lambda captures a local reference with [&], it captures a raw reference with no ownership semantics.

Analysis

This is a textbook async-capture use-after-free. The queryCounterEXT function receives a WebGLQuery& parameter and schedules a microtask that calls query.makeResultAvailable(). Because the lambda captures by reference, the WebGLQuery object's ref-count is not incremented — if all JavaScript references to the query are dropped and GC runs before the microtask fires, the lambda dereferences freed memory.

  queryCounterEXT(query)
    │
    ├─► queueMicrotask( [&] { query.makeResultAvailable(); } )
    │         ↑ raw reference, no ref-count bump
    │
    ├─► JS drops query reference
    ├─► GC collects WebGLQuery  ← ref-count hits zero, freed
    │
    └─► microtask fires
          └─► query.makeResultAvailable()  ← UAF

The irony is that the surrounding code already demonstrates awareness of the lifetime problem — protect() wrappers are applied to the script execution context, event loop, and VM. The query parameter itself was simply missed. An attacker would create a WebGLQuery, call queryCounterEXT() to schedule the microtask, then drop all JS references and trigger GC before the microtask fires. If the freed memory is reallocated with attacker-controlled content, the makeResultAvailable() call operates on corrupted state. Since WebGLQuery is a ref-counted C++ object, this could yield a controlled virtual method call on attacker-shaped data.
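
The lifetime race can be modeled outside WebKit with a toy ref-count and a manual microtask queue. Everything below is illustrative (ToyQuery, queryCounterBuggy, etc. are not WebKit API); it only demonstrates why the raw-reference capture loses the object while the Ref-style capture keeps it alive:

```javascript
// Toy model of the capture bug: a manually ref-counted object plus a manual
// microtask queue. Nothing here is WebKit API; names are illustrative.
class ToyQuery {
  constructor() { this.refCount = 1; this.freed = false; }
  ref() { this.refCount++; return this; }
  deref() { if (--this.refCount === 0) this.freed = true; }
  makeResultAvailable() {
    if (this.freed) throw new Error("use-after-free");
    return "result";
  }
}

const microtasks = [];

// Before the fix: the lambda holds the query without bumping its ref-count.
function queryCounterBuggy(query) {
  microtasks.push(() => query.makeResultAvailable());
}

// After the fix: the Ref { query } equivalent — ref now, deref after running.
function queryCounterFixed(query) {
  const protectedQuery = query.ref();
  microtasks.push(() => {
    protectedQuery.makeResultAvailable();
    protectedQuery.deref();
  });
}

function drainMicrotasks() {
  while (microtasks.length) microtasks.shift()();
}

// Buggy path: JS drops its reference before the microtask fires -> UAF.
const q1 = new ToyQuery();
queryCounterBuggy(q1);
q1.deref();                      // "GC collects": ref-count hits zero
let crashed = false;
try { drainMicrotasks(); } catch { crashed = true; }

// Fixed path: the captured ref keeps the object alive across the same sequence.
const q2 = new ToyQuery();
queryCounterFixed(q2);
q2.deref();
drainMicrotasks();               // runs safely; object freed only afterwards

console.log(crashed, q2.freed);  // true true
```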

Exploitation runs in the WebContent process, which is sandboxed — a separate sandbox escape would be needed for full system compromise. This pattern is worth auditing across all WebGL extension methods that schedule deferred work via queueMicrotask or queueTask, particularly anywhere function parameters are captured by the scheduling lambda.

Note: Whether WebGLQuery has virtual methods suitable for vtable hijacking, and whether the GC timing window is practically reachable, could not be verified from the commit alone.


[2] IPInt PC desynchronization on variable-length prefixed Wasm opcodes

Severity: High | Component: JSC Wasm IPInt | 0acf64e

IPInt (In-Place Interpreter) used constant PC advancement for WebAssembly prefixed opcodes (GC prefix 0xFB, SIMD prefix 0xFD), assuming the sub-opcode was always 1 byte. The Wasm spec encodes sub-opcodes as variable-length LEB128. The fix tracks the actual decoded sub-opcode length across all affected handlers so PC advances correctly regardless of encoding length.

Source/JavaScriptCore/wasm/WasmIPIntGenerator.cpp

- advancePC(2);  // 1 byte prefix + 1 byte sub-opcode (WRONG for multi-byte LEB128)
+ advancePC(1 + subOpSize);  // 1 byte prefix + variable-length LEB128 sub-opcode

JSTests/wasm/stress/ipint-variable-length-gc-opcodes.js

+ // Helper function to create redundant LEB128 encoding of a value
+ function createRedundantLEB128(value, totalBytes) {
+     if (totalBytes === 1) { return [value]; }
+     let result = [];
+     for (let i = 0; i < totalBytes - 1; i++) {
+         if (i === 0) {
+             result.push(value | 0x80);  // First byte with continuation bit
+         } else {
+             result.push(0x80);  // Middle bytes: just continuation bit
+         }
+     }
+     result.push(0x00);  // Final byte: no continuation bit
+     return result;
+ }
+ // Example: ref.i31 (sub-opcode 0x1C) encoded as 2 bytes [0x9C, 0x00]
+ let extendedOp = createRedundantLEB128(0x1C, 2);
+ let codeBody = [
+     0x00,              // local declaration count: none
+     0x00,              // local declaration count: none
+     0x41, 0x2A,        // i32.const 42
+     0xFB, ...extendedOp, // ref.i31 with redundant encoding
+     0x0B               // end
+ ];

Patch Details

The change touches over 20 handler functions in WasmIPIntGenerator.cpp: addRefI31, addI31GetS, addI31GetU, addArrayLen, addArrayFill, addArrayCopy, addAnyConvertExtern, addExternConvertAny, and the full suite of SIMD handlers (addSIMDSplat, addSIMDShuffle, addSIMDShift, addSIMDExtmul, addSIMDConstant, addSIMDExtractLane, addSIMDReplaceLane, addSIMDI_V, addSIMDV_V, addSIMDBitwiseSelect, addSIMDRelOp, addSIMDV_VV). Each replaces advancePC(2) (or similar constant) with advancePC(1 + subOpSize) where subOpSize is the actual LEB128-decoded length passed from the parser. The parser in WasmFunctionParser.h is also modified to propagate the consumed sub-opcode size to generators. SIMD-related methods were renamed from addConstant/addExtractLane/addReplaceLane to addSIMDConstant/addSIMDExtractLane/addSIMDReplaceLane across BBQ JIT, OMG IR, and const-expr generators.

Background

WebAssembly prefixed opcodes use a two-level encoding: a prefix byte (0xFB for GC, 0xFD for SIMD) followed by a sub-opcode encoded as LEB128. LEB128 uses 7 data bits per byte with a continuation bit, and critically permits redundant representations — value 0x1C can be encoded as [0x1C] (1 byte), [0x9C, 0x00] (2 bytes), [0x9C, 0x80, 0x00] (3 bytes), and so on up to 5 bytes. All representations are valid per the Wasm spec.
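
The redundancy is easy to see with a small decoder. This is a sketch for illustration, not JSC's actual parser:

```javascript
// Minimal unsigned-LEB128 decoder: returns the decoded value and the number
// of bytes consumed. Illustrative sketch, not JSC's parser.
function decodeULEB128(bytes, offset = 0) {
  let value = 0, shift = 0, size = 0;
  for (;;) {
    const byte = bytes[offset + size];
    value |= (byte & 0x7f) << shift;   // 7 data bits per byte
    size++;
    if ((byte & 0x80) === 0) break;    // no continuation bit -> done
    shift += 7;
  }
  return { value, size };
}

// All of these encode the same sub-opcode, 0x1C (ref.i31):
console.log(decodeULEB128([0x1c]));              // { value: 28, size: 1 }
console.log(decodeULEB128([0x9c, 0x00]));        // { value: 28, size: 2 }
console.log(decodeULEB128([0x9c, 0x80, 0x00]));  // { value: 28, size: 3 }
```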

IPInt is JSC's In-Place Interpreter for WebAssembly — the fastest-startup execution tier that directly interprets Wasm bytecode without compilation. Its PC (program counter) must advance by the exact number of bytes consumed by each instruction; any mismatch causes the interpreter to read subsequent bytes at the wrong offset.

  JS call           IPInt (baseline)         BBQ JIT (optimized)
  ───────► Wasm module ───────► [ IPInt interpreter ] ──────► [ BBQ compiler ]
                                      │
                                      └── PC must track exact byte offsets
                                           for every instruction

Analysis

This is a classic parser-interpreter mismatch vulnerability. The WasmFunctionParser correctly decoded variable-length LEB128 sub-opcodes, but the IPInt generator assumed fixed 1-byte sub-opcodes and used advancePC(2) — 1 byte for the prefix plus 1 for the sub-opcode. When a Wasm module uses a redundantly-encoded sub-opcode (say, 2 bytes instead of 1), IPInt advances PC by 2 instead of 3, causing the interpreter to desynchronize from the actual bytecode stream.

  Bytecode:  FB 9C 00 41 2A 0B
             ↑  ↑──────↑
             │  ref.i31 (0x1C) in 2-byte LEB128
             GC prefix

  IPInt reads:
    PC=0: FB → GC prefix, dispatch sub-opcode
    PC=1: 9C 00 → decode LEB128 → 0x1C (ref.i31), execute
    PC+=2 (WRONG): PC=2 → reads 00 41 → misinterprets as different instruction
                          instead of PC=3 → 41 2A (i32.const 42)

The consequences are severe. After desynchronization, the interpreter executes a different instruction sequence than what the parser validated. This is a validation bypass — type checking, operand validation, and memory safety constraints were enforced on the parsed bytecode, but IPInt now executes "phantom" instructions composed from misaligned bytes. By carefully constructing the bytecode stream, an attacker controls which instructions execute after the desync point.

The exact primitive depends on what byte sequences follow the desynchronization point, but the possibilities include type confusion (interpreting a ref type as an integer), out-of-bounds memory access (executing a memory load with bytes misinterpreted as immediates), or control flow hijacking within the Wasm execution context. This is reachable from any web page via the WebAssembly API with no user interaction beyond page load; exploitation runs in the sandboxed WebContent process.
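
A few lines of simulation show how one mis-advanced PC shifts every subsequent decode. This models only the stepping over the example bytecode, not IPInt itself, and handles just the opcodes that appear in it:

```javascript
// Step through the example bytecode with constant vs. correct PC advancement.
// Toy model of the stepping only — not IPInt.
const bytecode = [0xfb, 0x9c, 0x00, 0x41, 0x2a, 0x0b];

// Bytes consumed by an unsigned LEB128 starting at `offset`.
function leb128Size(bytes, offset) {
  let size = 1;
  while (bytes[offset + size - 1] & 0x80) size++;
  return size;
}

function trace(bytes, advanceForPrefixed) {
  const visited = [];
  let pc = 0;
  while (pc < bytes.length) {
    visited.push(pc);
    const op = bytes[pc];
    if (op === 0xfb || op === 0xfd)        // GC / SIMD prefix
      pc += advanceForPrefixed(bytes, pc);
    else if (op === 0x41)                  // i32.const: opcode + LEB128 immediate
      pc += 1 + leb128Size(bytes, pc + 1);
    else                                   // everything else in this toy: 1 byte
      pc += 1;
  }
  return visited;
}

const buggy = trace(bytecode, () => 2);                               // advancePC(2)
const fixed = trace(bytecode, (b, pc) => 1 + leb128Size(b, pc + 1));  // advancePC(1 + subOpSize)

console.log(buggy);  // [0, 2, 3, 5] — lands on the 0x00 tail byte, desynced
console.log(fixed);  // [0, 3, 5]    — prefix, i32.const, end
```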

The broader pattern — one component correctly handles a format's flexibility while another assumes a canonical form — is a recurring source of security bugs in multi-tier systems. Auditing other Wasm interpreter and compiler tiers for similar constant-offset PC advancement on prefixed opcodes, especially for newer proposal extensions (GC, exception handling, threads), would be worthwhile.

Note: The claim that an attacker can precisely control which phantom instructions execute after desynchronization to achieve specific primitives is plausible but was not verified end-to-end from the commit alone.


[3] ZStream: deflateEnd() called after inflateInit2() via DecompressionStream

Severity: Medium | Component: WebCore Compression Streams | 43a1b0b

The ZStream destructor unconditionally called deflateEnd() regardless of whether the stream was initialized for compression or decompression. When a DecompressionStream was destroyed, deflateEnd() operated on a z_stream initialized by inflateInit2() — undefined behavior in zlib that leads to invalid frees.

Source/WebCore/Modules/compression/ZStream.cpp

     if (result != Z_OK)
         return false;

+    m_operation = operation;
     m_isInitialized = true;
     return true;
 }
...
 ZStream::~ZStream()
 {
-    if (m_isInitialized)
+    if (!m_isInitialized)
+        return;
+
+    if (m_operation == Operation::Compression)
         deflateEnd(&m_stream);
+    else
+        inflateEnd(&m_stream);
 }

Source/WebCore/Modules/compression/ZStream.h

     z_stream m_stream;
+    Operation m_operation { Operation::Compression };
     bool m_isInitialized { false };

Patch Details

The fix adds an m_operation member variable of type Operation (an existing enum distinguishing compression from decompression), set during initializeIfNecessary() when the zlib context is created. The destructor now branches on m_operation to call the correct cleanup function: deflateEnd() for compression, inflateEnd() for decompression. The condition if (m_isInitialized) was also inverted to an early-return guard for clarity.

Analysis

This is a straightforward resource cleanup mismatch. zlib maintains separate internal state structures for deflate (deflate_state) and inflate (inflate_state), allocated by deflateInit2() and inflateInit2() respectively. These structures have different layouts and internal pointer arrangements. Calling deflateEnd() on an inflate-initialized z_stream causes zlib to interpret the inflate_state pointer as a deflate_state, freeing memory through wrong internal offsets — an invalid free that could corrupt heap metadata.

  DecompressionStream lifecycle (before fix):
    constructor  →  initializeIfNecessary(Decompression)  →  inflateInit2(&m_stream)
    destructor   →  deflateEnd(&m_stream)  ← WRONG: frees inflate_state as deflate_state

The commit message notes the author was unable to trigger a crash locally, suggesting the corruption may be latent or heap-layout-dependent. Nevertheless, the bug is reachable from any page using the web-exposed DecompressionStream API. Exploitability depends on the specific zlib version and how differently the two state structures are laid out — in principle, the invalid free could be leveraged for heap corruption, but the practical difficulty is high.
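
The invariant the fix restores can be captured in a dozen lines: tag the mode when the context is initialized, branch on it at teardown. This is a generic sketch of the dual-mode-wrapper pattern, not WebKit code:

```javascript
// Generic sketch of the pattern behind the fix: record which init path ran,
// and pick the matching cleanup. Names mirror the patch but are illustrative.
const Operation = { Compression: "compression", Decompression: "decompression" };

class ZStreamModel {
  constructor() { this.initialized = false; this.operation = null; }
  initialize(operation) {
    this.operation = operation;        // the fix: remember the init mode
    this.initialized = true;
  }
  // Returns the name of the zlib teardown that would be called, so the
  // init/cleanup pairing is checkable.
  destroy() {
    if (!this.initialized) return null;
    return this.operation === Operation.Compression ? "deflateEnd" : "inflateEnd";
  }
}

const inflating = new ZStreamModel();
inflating.initialize(Operation.Decompression);
console.log(inflating.destroy());  // "inflateEnd" — pre-fix, this was unconditionally deflateEnd
```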

This is the kind of init/cleanup asymmetry that's easy to introduce when a single class wraps both directions of a symmetric library. The Compression Streams API is relatively new in WebKit, and auditing other dual-mode wrappers for similar mismatched cleanup paths would be worthwhile.


Notable Development

Implement await using syntax from Explicit Resource Management proposal

92e5222

Source/JavaScriptCore/builtins/DisposableStackPrototype.js

-linkTimeConstant getAsyncDisposableMethod(value)
+linkTimeConstant getAsyncDisposeMethod(value, hint)
 {
+    // Try @@asyncDispose first
+    var method = value[Symbol.asyncDispose];
+    if (method !== @undefined)
+        return method;
+    // Fallback: wrap @@dispose in a promise-returning closure
+    var syncMethod = value[Symbol.dispose];
+    if (syncMethod === @undefined)
+        return @undefined;
+    return function() {
+        syncMethod.@call(value);
+        return Promise.@resolve();
+    };
 }

Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp

 // UsingSlot now carries:
 //   reached  — register set to true only after getAsyncDisposeMethod returns (prevents
 //              spurious Await(undefined) if method lookup throws)
 //   isAsync  — compile-time flag; distinguishes null-init (needsAwait path) from
 //              unreached (skip entirely), because spec requires one Await(undefined)
 //              even for `await using x = null`

JSC now supports await using declarations from the TC39 Explicit Resource Management proposal, completing async resource disposal support. The change extends the parser, bytecode compiler, and adds a new @getAsyncDisposeMethod built-in that tries @@asyncDispose first, then falls back to wrapping @@dispose in a promise-returning closure per spec. No new bytecodes are added; the existing Generatorification machinery handles await suspend points.
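
The fallback semantics can be exercised in plain JavaScript without await using syntax. The sketch below follows the spec steps described above; the symbol constants are stand-ins when the host doesn't expose the real well-known symbols:

```javascript
// Sketch of the @@asyncDispose -> @@dispose fallback. Falls back to locally
// created symbols when the host lacks the real well-known symbols.
const asyncDisposeSym = Symbol.asyncDispose ?? Symbol("Symbol.asyncDispose");
const disposeSym = Symbol.dispose ?? Symbol("Symbol.dispose");

function getAsyncDisposeMethod(value) {
  const method = value[asyncDisposeSym];   // try @@asyncDispose first
  if (method !== undefined) return method;
  const syncMethod = value[disposeSym];    // fall back to @@dispose...
  if (syncMethod === undefined) return undefined;
  return function () {                     // ...wrapped to return a promise
    syncMethod.call(value);
    return Promise.resolve(undefined);
  };
}

// A resource exposing only a synchronous dispose method:
const log = [];
const syncOnly = { [disposeSym]() { log.push("sync dispose ran"); } };

const wrapped = getAsyncDisposeMethod(syncOnly);
const result = wrapped();
console.log(log, result instanceof Promise);  // [ 'sync dispose ran' ] true
```

Note that both symbol lookups happen when the method is fetched, not when the wrapper runs, which is the lookup-time vs. call-time distinction that makes the fallback closure interesting to audit.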

The implementation is more subtle than it appears. The disposal finally-block compiler emits a two-register needsAwait/hasAwaited protocol to ensure null-valued await using declarations produce exactly one Await(undefined) per spec, and a per-slot reached boolean distinguishes "initializer never executed" from "initialized to null." Multiple disposal failures are aggregated into a SuppressedError, and mixed using/await using scopes require careful interleaving of sync and async disposal steps in reverse declaration order:

  Disposal sequence for { using a = r1; await using b = r2; }:

    slot[1] b (async, disposed first)
      ├── @getAsyncDisposeMethod(b)
      │     ├── try @@asyncDispose → call, hasAwaited=true, await result
      │     └── fallback @@dispose → wrap in closure, await wrapped
      └── if method==undefined (null value): needsAwait=true

    slot[0] a (sync)
      ├── Step 3.d: if needsAwait && !hasAwaited → Await(undefined)
      └── call a[@@dispose]()

This is a significant new JS language surface with several high-value audit targets: the reached register logic could leave disposal state inconsistent if set at the wrong point; the needsAwait/hasAwaited state machine could produce incorrect await sequencing under mixed sync/async scopes; and the @@asyncDispose → @@dispose fallback closure captures value and syncMethod by scope reference, making it potentially sensitive to prototype pollution at lookup-time vs. call-time. The SuppressedError aggregation across multiple failing async disposals is also worth fuzzing.


SIMD Shuffle strength reduction for ARM64

8eb745f

Source/JavaScriptCore/jit/SIMDShuffle.h

+    static std::optional<std::pair<uint8_t, SIMDLane>> tryMatchCanonicalBinaryImpl(const uint8_t* mask, SIMDLane lane)
+    {
+        // Returns (pattern_type, lane) if the 16-byte mask matches
+        // a uzp1/uzp2/zip1/zip2/trn1/trn2/ext canonical form.
+        ...
+    }
+
+    static std::array<uint8_t, 16> composeShuffle(
+        const std::array<uint8_t, 16>& outer,
+        const std::array<uint8_t, 16>& inner0,
+        const std::array<uint8_t, 16>& inner1)
+    {
+        // Compose two shuffle masks: resolve each output byte of `outer`
+        // through the appropriate inner mask (bytes 0-15 → inner0, 16-31 → inner1).
+        std::array<uint8_t, 16> result;
+        for (unsigned i = 0; i < 16; ++i) {
+            uint8_t idx = outer[i];
+            if (idx < 16)
+                result[i] = inner0[idx];
+            else
+                result[i] = inner1[idx - 16];
+        }
+        return result;
+    }

Source/JavaScriptCore/b3/B3ReduceSIMDShuffle.cpp

+void reduceSIMDShuffle(Procedure& proc)
+{
+    // Walk the procedure looking for VectorSwizzle(VectorSwizzle(...), ...)
+    // chains and collapse them into a single VectorSwizzle with a composed mask.
+    ...
+}

A new B3ReduceSIMDShuffle phase in JSC's B3 JIT analyzes and collapses composed VectorSwizzle operations, plus ARM64-specific lowering of generic shuffle patterns to cheaper native instructions — zip1/zip2, uzp1/uzp2, trn1/trn2, ext, rev16/rev32/rev64, and xar (ARM64 SHA3). This replaces expensive tbl-based generic shuffles with single-cycle specialized instructions.

  WebAssembly i8x16.shuffle
          │
          ▼
    B3 VectorSwizzle (IR node)
          │
          ├─ B3ReduceStrength (early) — simple reductions
          │
          ├─ B3ReduceSIMDShuffle (new) — compose VectorSwizzle chains
          │    └─ VectorSwizzle(VectorSwizzle(a,b), c) → single merged node
          │
          ├─ B3ReduceStrength (late) — canonical pattern → typed opcode
          │    └─ uzp1/zip1/trn1/ext/rev/xar
          │
          └─ B3LowerToAir → emit ARM64 native instructions
               (vs. fallback: TBL — expensive table lookup)

The split into two passes is deliberate: early canonicalization would obscure composition opportunities. The composeShuffle function in SIMDShuffle.h performs algebraic permutation composition — resolving each output byte of the outer mask through the appropriate inner mask. This is the kind of code that is easy to get wrong at boundary indices (bytes ≥16 select from inner1; an off-by-one here silently corrupts vector data). The tryMatchCanonicalBinary/tryMatchCanonicalUnary pattern matchers are also worth auditing: a mask misclassification emits the wrong native instruction with no crash, just incorrect output. The SHA3 xar path is gated on CPU feature detection (isARM64_SHA3) — verify that detection is conservative.
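
Composition correctness has a crisp algebraic check: applying the composed mask to the original sources must equal applying the masks in sequence. A JS transliteration of the composeShuffle logic above, for illustration only:

```javascript
// JS transliteration of composeShuffle: resolve each outer index through the
// matching inner mask (indices 0-15 via inner0, 16-31 via inner1).
function composeShuffle(outer, inner0, inner1) {
  return outer.map(idx => (idx < 16 ? inner0[idx] : inner1[idx - 16]));
}

// A 16-byte shuffle mask selecting from a 32-byte source table.
const applyShuffle = (mask, table) => mask.map(idx => table[idx]);

// Random 32-byte source and random masks (indices 0-31).
const randBytes = n => Array.from({ length: n }, () => Math.floor(Math.random() * 256));
const randMask = () => Array.from({ length: 16 }, () => Math.floor(Math.random() * 32));

const source = randBytes(32);
const inner0 = randMask(), inner1 = randMask(), outer = randMask();

// Two-step: run both inner shuffles, then the outer one over their concatenation.
const twoStep = applyShuffle(outer,
  [...applyShuffle(inner0, source), ...applyShuffle(inner1, source)]);
// One-step: a single shuffle with the composed mask.
const oneStep = applyShuffle(composeShuffle(outer, inner0, inner1), source);

console.log(twoStep.every((b, i) => b === oneStep[i]));  // true, for any masks
```

This equivalence is exactly the property a fuzzer would check against the boundary cases (indices at 15/16) flagged above.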


Sec-Fetch-Site header reset on refresh enables CSRF bypass

81e6513

Source/WebCore/loader/cache/CachedResourceLoader.cpp

+static bool shouldReuseExistingFetchMetadata(const LocalFrame& frame, const ResourceRequest& request, CachedResource::Type type, FetchOptions::Mode mode)
+{
+    if (mode != FetchOptions::Mode::Navigate || type != CachedResource::Type::MainResource)
+        return false;
+
+    RefPtr loader = frame.loader().activeDocumentLoader();
+    if (loader && loader->triggeringAction().type() != NavigationType::FormResubmitted)
+        return false;
+
+    ASSERT_UNUSED(request, request.hasHTTPHeaderField(HTTPHeaderName::SecFetchDest));
+    ASSERT(request.hasHTTPHeaderField(HTTPHeaderName::SecFetchMode));
+    ASSERT(request.hasHTTPHeaderField(HTTPHeaderName::SecFetchSite));
+
+    return true;
+}
+
+static bool shouldUpdateFetchMetadata(const LocalFrame& frame, const ResourceRequest& request, CachedResource::Type type, FetchOptions::Mode mode)
+{
+    return frame.document()
+        && !protect(frame.document())->quirks().shouldDisableFetchMetadata()
+        && !shouldReuseExistingFetchMetadata(frame, request, type, mode);
+}

This fixes an actively exploitable CSRF bypass. When a cross-site form POST triggered a resubmission prompt, Sec-Fetch-Site was recomputed from the destination origin — downgrading the header from cross-site to same-origin and defeating server-side fetch-metadata defenses.

  Form submit (cross-site):                     Form resubmit (BEFORE fix):
    Initiator: attacker.com                        Frame doc: victim.com
    Target:    victim.com/submit                   Target:    victim.com/submit
    computeFetchMetadataSite() → cross-site        computeFetchMetadataSite() → same-origin ← WRONG

  Form resubmit (AFTER fix):
    shouldReuseExistingFetchMetadata() → true
    → skip recomputation, preserve cross-site header ✓

The root cause is that computeFetchMetadataSite derives the site relationship from the current frame's document origin and the request's destination. For form resubmissions, the "current document" is already the destination page (loaded from the first submission), so recomputation produces same-origin instead of the true cross-site from the original POST. The fix adds shouldReuseExistingFetchMetadata() which skips header recomputation for NavigationType::FormResubmitted navigations, preserving the already-correct headers from the initial submission.
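
The downgrade is easiest to see as data flow. The classifier below is deliberately reduced to host equality (the real fetch-metadata computation compares schemes and registrable domains, and also distinguishes same-site from same-origin); it only illustrates how the choice of "initiator" flips the answer:

```javascript
// Simplified Sec-Fetch-Site classifier: same-origin vs. cross-site by host
// only. Real fetch metadata uses schemes and registrable domains; this toy
// exists to show the initiator-selection bug, not the full algorithm.
function computeSite(initiatorURL, targetURL) {
  return new URL(initiatorURL).host === new URL(targetURL).host
    ? "same-origin"
    : "cross-site";
}

const target = "https://victim.com/submit";

// Original submission: the initiator is the attacker's page.
console.log(computeSite("https://attacker.com/csrf", target));  // "cross-site"

// Resubmission, pre-fix: the "current document" is already victim.com/submit,
// so recomputing from it downgrades the header.
console.log(computeSite("https://victim.com/submit", target));  // "same-origin"
```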

The fix only gates on FormResubmitted — other navigation types that preserve a loaded document as the "current frame" may suffer the same flaw (redirects during POST, history navigation to a POST entry, meta-refresh from the destination). The shouldDisableFetchMetadata quirks path is a separate bypass surface worth examining. And the ASSERTs confirm headers must already exist, but there's no enforcement that their values haven't been tampered with between submission and resubmission.


Deeply nested <div> causes hang in parser

aff0077

An infinite loop in the HTML parser triggers when deeply nested elements hit the 512-node DOM tree depth limit. HTMLConstructionSite::attachLater blindly pops table-internal elements (td, th, tr, tbody, table) from the open elements stack to enforce the depth cap, creating a mismatch between the active insertion mode and the actual stack contents.

  Normal table parsing:
    stack: [... table tbody tr td]   mode: InCell
    </table> → closeTheCell() pops td → processes </table> normally

  At depth limit (before fix):
    attachLater pops td to stay at 512, pushes next element
    stack: [... table tbody tr    ]  mode: InCell  ← INCONSISTENT
    </table> → closeTheCell() finds no td/th → fails silently → infinite loop

  At depth limit (after fix):
    attachLater sets m_hasReachedMaxDOMTreeDepth = true
    HTMLTreeBuilder checks flag → resetInsertionModeAppropriately()
    mode re-derived from actual stack → consistent state → normal exit

The fix adds a m_hasReachedMaxDOMTreeDepth flag that HTMLTreeBuilder checks at seven specific call sites (processStartTagForInBody, processStartTagForInTable, processStartTag, processEndTagForInTableBody, processEndTagForInRow, processTrEndTagForInRow, processTableEndTagForInTable) to call resetInsertionModeAppropriately, re-synchronizing the state machine. This is a straightforward DoS — a crafted HTML document with ~512 nested divs followed by a table structure hangs the parser indefinitely.

The underlying pattern — insertion mode assumes element X is on the stack, depth cap ejects X silently — may apply to modes not covered by these seven sites. The flag is a boolean that's set but never cleared; if the parser continues processing (which it does), subsequent depth-cap pops could corrupt different modes. Whether resetInsertionModeAppropriately correctly handles all possible stack states resulting from the blind pop deserves scrutiny.


Fix WebContent jetsam when pinch-zooming reddit.com

c49ecd8

Two changes fix WebContent process jetsam on iOS during pinch-zoom on image-heavy pages. First, requiresTiledLayer() was missing a deviceScaleFactor multiplication — it only checked pageScaleFactor, so on a 3x device at 5x zoom the total 15x scale was invisible to the tiling decision. Second, non-tiled backing store allocations are now capped at 16 MB total byte size.

  Before (3x device, 5x page zoom):
    398px layer
      └─ requiresTiledLayer(pageScale=5.0) → false   ← missing ×deviceScaleFactor
      └─ allocate IOSurface @ 15x = 3612×3612px = 65 MB
      × 40 layers = ~2.6 GB IOSurface → jetsam

  After:
    398px layer
      └─ requiresTiledLayer(pageScale×deviceScale=15.0) → true → tiled
      └─ OR: non-tiled path capped at 16 MB
      × 40 layers = ~1.4 GB IOSurface → within limit

The impact is that a page with 40+ mid-sized images could trivially exhaust IOSurface memory during pinch-zoom, triggering a jetsam kill — an attacker-triggerable DoS on mobile. The requiresTiledLayer() scale factor accounting error is worth searching for in other places in GraphicsLayerCA and TileController where pageScaleFactor and deviceScaleFactor are used independently but should be multiplied. The new 16 MB cap uses a sqrt-based scale reduction — verify the computed contentsScale handles degenerate layer sizes (zero width/height) and extreme scale values that could underflow.
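
The cap's arithmetic is simple: for an RGBA backing store, bytes = width × height × 4 × scale², so the largest scale that fits a byte budget is √(budget / (width × height × 4)). The formula below is inferred from the change description, not copied from the patch, and the degenerate-size guard is exactly the kind of edge case flagged above:

```javascript
// Cap a layer's contentsScale so an RGBA backing store stays under a byte
// budget: bytes = w * h * 4 * scale^2  =>  maxScale = sqrt(budget / (w*h*4)).
// Formula inferred from the change description — not actual WebKit code.
function cappedContentsScale(width, height, requestedScale, budgetBytes = 16 * 1024 * 1024) {
  if (width <= 0 || height <= 0 || requestedScale <= 0)
    return requestedScale;               // degenerate sizes: avoid divide-by-zero
  const maxScale = Math.sqrt(budgetBytes / (width * height * 4));
  return Math.min(requestedScale, maxScale);
}

// 398x398 layer at 3x device scale * 5x pinch zoom = 15x requested:
const scale = cappedContentsScale(398, 398, 15);
const bytes = 398 * 398 * 4 * scale * scale;
console.log(scale < 15, bytes <= 16 * 1024 * 1024 + 1);  // true true
```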


Implement CSS Box Sizing stretch keyword

fdfcbee

LayoutTests/fast/box-sizing/fill-available-expected.txt

-FAIL .wrapper 1 assert_equals: ... height expected 180 but got 140
+PASS .wrapper 1
-FAIL .wrapper 3 assert_equals: ... height expected 180 but got 140
+PASS .wrapper 3

WebKit now implements the standardized stretch CSS sizing keyword from CSS Box Sizing 4 for width, height, min/max-width, min/max-height, and flex-basis. -webkit-fill-available is kept as an alias. The critical new rule is margin-zeroing in block layout: when the parent has no border/padding on a given side and is not an independent formatting context, the child's margin on that side is zeroed for stretch sizing — a context-dependent behavioral mutation that must be suppressed for flex, grid, and out-of-flow children.

  Containing block definite?
   │
   ├─ YES
   │   ├─ Block layout (not flex/grid, not OOF)
   │   │    └─ Apply margin-zeroing rule → fill CB
   │   ├─ Flex cross-axis → applyStretchMinMaxCrossSize
   │   ├─ Flex block-axis → treated like intrinsic
   │   └─ Replaced element → RespectingMinMax + aspect-ratio clamping
   │
   └─ NO (indefinite)
       ├─ width/height → fallback to auto
       ├─ min-* → fallback to 0
       └─ max-* → fallback to none

The margin-zeroing rule's boundary conditions — when is the parent an independent formatting context? when does border/padding exist on "a given side"? — are the kind of predicate that breaks at edge cases. The height routing through computeSizingKeywordLogicalContentHeightUsing() vs. percentageOrCalculated() is worth auditing for table cells and writing-mode changes. The indefinite-fallback discontinuity (stretch → auto/0/none when the containing block transitions from indefinite to definite) could be observable from script, creating a potential layout-based information leak channel.


Consolidate DocumentFragment insertions into a single notification

0bc30db

Source/WebCore/dom/ContainerNode.cpp

+template<typename DOMInsertionWork>
+static ALWAYS_INLINE void executeNodeInsertionWithScriptAssertion(ContainerNode& containerNode, NodeVector& children, Node* beforeChild,
+    ContainerNode::ChildChange::Source source, ReplacedAllChildren replacedAllChildren, NOESCAPE const DOMInsertionWork& doNodeInsertion)
+{
+    if (children.isEmpty())
+        return;
+
+    auto childChange = makeChildChangeForInsertion(containerNode, children, beforeChild, source, replacedAllChildren);
+
+    NodeVector postInsertionNotificationTargets;
+    {
+        WidgetHierarchyUpdatesSuspensionScope suspendWidgetHierarchyUpdates;
+        ScriptDisallowedScope::InMainThread scriptDisallowedScope;
+        Style::ChildChangeInvalidation styleInvalidation(containerNode, childChange);
+
+        for (auto& child : children) {
+            doNodeInsertion(child);
+            ChildListMutationScope(containerNode).childAdded(child);
+            notifyChildNodeInserted(containerNode, child, postInsertionNotificationTargets);
+        }
+    }
+
+    // FIXME: Move childrenChanged into ScriptDisallowedScope block.
+    containerNode.childrenChanged(childChange);
+
+    for (auto& target : postInsertionNotificationTargets)
+        target->didFinishInsertingNode();
+
+    if (source == ContainerNode::ChildChange::Source::API) {
+        for (auto& child : children)
+            dispatchChildInsertionEvents(child);
+    }
+}

DocumentFragment multi-node insertions are now atomic: all children are inserted under ScriptDisallowedScope first, then childrenChanged fires once for the entire batch, and didFinishInsertingNode plus mutation events fire per-node but only after all insertions complete. This matches the DOM/HTML5 spec and Gecko/Blink behavior.

  Before:                                 After:
    appendChild(fragment)                   appendChild(fragment)
      └─ insert child[0]                     └─ [ScriptDisallowedScope]
           └─ childrenChanged               └─ insert child[0]  (no events)
           └─ mutation event (script!)      └─ insert child[1]  (no events)
      └─ insert child[1]                     └─ [end scope]
           └─ childrenChanged               └─ childrenChanged({both})  ← once
           └─ mutation event (script!)      └─ didFinishInsertingNode per child
                                            └─ mutation events per child

The change is high-value for security auditing. The ScriptDisallowedScope invariant must hold across every fragment-insertion code path — any bypass (innerHTML, parser insertion, adoptNode) could allow script to observe a partially-inserted tree. childrenChanged now receives a multi-node ChildChange structure; every consumer (Element, ShadowRoot, HTMLDetailsElement, HTMLSelectElement, SVGAnimateMotionElement, style invalidation) must correctly handle N>1 nodes. didFinishInsertingNode fires per-node but after all siblings are in the tree — handlers that walk siblings or assume a stable tree during their own notification may behave incorrectly.
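
The ordering contract can be stated as a toy event sequence (illustrative names, not WebCore's API): one childrenChanged per batch, fired only after every child is in the tree, with per-node notifications afterwards.

```javascript
// Toy model of the notification order for a multi-child fragment insertion.
function insertFragmentOld(children) {
  const events = [];
  for (const child of children) {
    events.push(`insert ${child}`);
    events.push("childrenChanged");          // once per child
    events.push(`mutation event ${child}`);  // script could run mid-insertion here
  }
  return events;
}

function insertFragmentNew(children) {
  const events = [];
  for (const child of children) events.push(`insert ${child}`);  // script disallowed
  events.push("childrenChanged");                                // once per batch
  for (const child of children) events.push(`didFinishInsertingNode ${child}`);
  for (const child of children) events.push(`mutation event ${child}`);
  return events;
}

console.log(insertFragmentNew(["a", "b"]));
// ["insert a", "insert b", "childrenChanged",
//  "didFinishInsertingNode a", "didFinishInsertingNode b",
//  "mutation event a", "mutation event b"]
```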


Send only target frame's FrameState during back/forward navigation

ea9560f

Source/WebCore/loader/FrameLoader.cpp

 void FrameLoader::setRequestedHistoryItem(HistoryItem& item)
 {
     Ref frame = m_frame.get();
+    ASSERT(!frame->page() || !frame->page()->settings().useUIProcessForBackForwardItemLoading() || item.children().isEmpty());

      item.setFrameID(frame->frameID());
      m_requestedHistoryItem = item;
+
+    if (RefPtr parentFrame = dynamicDowncast<LocalFrame>(frame->tree().parent())) {
+        if (RefPtr parentItem = parentFrame->loader().history().currentItem())
+            parentItem->setChildItem(Ref { item });
+    }
 }

Source/WebKit/Shared/WebBackForwardListFrameItem.cpp

-Ref<FrameState> WebBackForwardListFrameItem::copyFrameStateWithChildren()
+Ref<FrameState> WebBackForwardListFrameItem::copyFrameState()
 {
     Ref frameState = protect(this->frameState())->copy();
     ASSERT(frameState->children.isEmpty());
+    return frameState;
+}
+
+Ref<FrameState> WebBackForwardListFrameItem::copyFrameStateWithChildren()
+{
+    Ref frameState = copyFrameState();
     for (auto& child : m_children)
         frameState->children.append(child->copyFrameStateWithChildren());
     return frameState;
 }

This fixes a Site Isolation violation in back/forward navigation. When useUIProcessForBackForwardItemLoading is enabled, the UI process previously sent the full FrameState tree — including cross-origin child frame data (URLs, scroll positions, form state) — to a single web process via GoToBackForwardItem. A compromised renderer could read this data, violating the Site Isolation boundary.

  Before (violation):                        After (fixed):
    UIProcess                                  UIProcess
      └─► GoToBackForwardItem(fullTree)          ├─► mainFrameOnly → Process A
            → WebProcess A                       ├─► childFrame B  → Process B
                ├── mainFrame (own)              └─► childFrame C  → Process C
                ├── childFrame B (LEAK)
                └── childFrame C (LEAK)

The fix introduces copyFrameState() (without children) alongside the existing copyFrameStateWithChildren(), and the UI process now distributes only the relevant FrameState to each web process. The receiving side reconstructs the parent-child HistoryItem tree locally via setRequestedHistoryItem, which mutates the parent frame's live currentItem() by calling setChildItem().
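
The split is a plain structural distinction. A JS mirror of the two copies (illustrative, with a made-up url field) makes the isolation property checkable:

```javascript
// JS mirror of the copyFrameState / copyFrameStateWithChildren split.
// The `url` field is a stand-in for the per-frame state being protected.
function copyFrameState(item) {
  return { url: item.url, children: [] };    // shallow: never carries children
}

function copyFrameStateWithChildren(item) {
  const state = copyFrameState(item);
  for (const child of item.children)
    state.children.push(copyFrameStateWithChildren(child));
  return state;
}

const tree = {
  url: "https://main.example",
  children: [
    { url: "https://frame-b.example", children: [] },
    { url: "https://frame-c.example", children: [] },
  ],
};

console.log(copyFrameState(tree).children.length);             // 0 — safe for one web process
console.log(copyFrameStateWithChildren(tree).children.length); // 2 — UI-process-only view
```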

Two angles deserve investigation: whether all IPC paths now use the childless variant (a missed caller still leaks cross-origin data), and whether the tree reconstruction in setRequestedHistoryItem is safe when child frames arrive out of order or a navigation commits while the tree is being rebuilt. The assertion item.children().isEmpty() only fires in debug builds.


By the Numbers

Metric             Value
Total commits      405
Security fixes     3 (2 High, 1 Medium)
Contributors       107
Top components     WebCore, Platform, WebKit, JSC, Other