Why file size matters in WebXR experiences

WebXR applications run in browsers without installation. Users click a link and expect immediate access. Every megabyte of 3D model data adds latency between intent and experience. Engagement drops sharply with load time: a 3-second delay can reduce engagement by roughly 40%, and a 10-second delay can lose 60% of users.

Network conditions vary globally. 4G provides 10-30 Mbps in optimal conditions but degrades to 2-5 Mbps indoors or in congested areas. 3G is still common in developing markets at 1-3 Mbps. A 20MB model takes 2 seconds on fiber, 6 seconds on good 4G, and 60 seconds on 3G. Most users abandon before completion.
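These numbers fall out of simple bandwidth arithmetic. A minimal sketch, with assumed representative speeds (real transfers are slower due to TCP slow start, TLS handshakes, and latency, so treat these as lower bounds):

```javascript
// Rough transfer-time estimate: size in megabytes, bandwidth in megabits
// per second. 1 byte = 8 bits.
function transferSeconds(sizeMB, mbps) {
  const megabits = sizeMB * 8;
  return megabits / mbps;
}

// A 20MB model at assumed representative speeds:
console.log(transferSeconds(20, 80));  // 2   — fiber, ~80 Mbps
console.log(transferSeconds(20, 25));  // 6.4 — good 4G, ~25 Mbps
console.log(transferSeconds(20, 2.5)); // 64  — 3G, ~2.5 Mbps
```
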

Browser memory limits constrain total scene size. Mobile browsers allocate 256MB-512MB for WebGL contexts. Desktop browsers allow 1-2GB but throttle inactive tabs. A single uncompressed 4K RGBA texture (4096 × 4096 × 4 bytes) consumes 64MB. Five such textures exceed mobile budgets. Compression is not optional for WebXR.
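The texture math is worth making explicit, since it is what blows mobile budgets. A small sketch (mipmaps add roughly another third on top; ignored here for simplicity):

```javascript
// Uncompressed GPU memory for an RGBA8 texture: width * height * 4 bytes.
function textureBytes(width, height, bytesPerPixel = 4) {
  return width * height * bytesPerPixel;
}

const MB = 1024 * 1024;
console.log(textureBytes(4096, 4096) / MB);       // 64  — one 4K texture
console.log((5 * textureBytes(4096, 4096)) / MB); // 320 — five exceed a 256MB mobile budget
```
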

File size impacts caching behavior. Browsers tend to cache small resources aggressively, while very large files are more likely to be evicted between sessions. Users revisiting a WebXR experience should load instantly from cache. Splitting a 30MB scene into ten 3MB chunks improves cache hit rates and enables progressive loading.
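The chunking arithmetic is simple; a sketch (illustrative only — real splitting happens at asset boundaries such as per mesh or per texture, not at arbitrary byte offsets):

```javascript
// Plan cache-friendly chunks: how many pieces a scene needs so that no
// piece exceeds maxChunkMB, and the resulting even chunk size.
function chunkPlan(totalMB, maxChunkMB) {
  const count = Math.ceil(totalMB / maxChunkMB);
  return { count, sizeMB: totalMB / count };
}

console.log(chunkPlan(30, 3)); // { count: 10, sizeMB: 3 }
console.log(chunkPlan(25, 3)); // 9 chunks of just under 2.8MB each
```
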

Progressive loading improves perceived performance. Display a low-poly proxy immediately while high-detail meshes stream in the background. Users tolerate lower fidelity if the experience is interactive within 2 seconds. A 500KB proxy loads in under 1 second on 4G; the full 10MB model can load during initial interaction.
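The pattern is: start both downloads in parallel, show the proxy as soon as it lands, and swap in the full model when it arrives. A hedged sketch where `fetchAsset` is a hypothetical stub standing in for a real loader (e.g. GLTFLoader in three.js):

```javascript
// Simulated loader: resolves with the asset name after ms milliseconds.
function fetchAsset(name, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(name), ms));
}

async function loadScene(events) {
  // Kick off both downloads in parallel; do not await the big one yet.
  const proxyPromise = fetchAsset('proxy-500KB', 50);
  const fullPromise = fetchAsset('full-10MB', 200);

  events.push(`show ${await proxyPromise}`); // interactive almost immediately
  events.push(`swap ${await fullPromise}`);  // upgrade in the background
  return events;
}

loadScene([]).then((events) => console.log(events));
// → [ 'show proxy-500KB', 'swap full-10MB' ]
```
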

Compression techniques reduce file size with little or no visible quality loss. Draco mesh compression achieves 60-80% reduction by quantizing vertex positions and applying entropy encoding. GPU texture compression (ASTC, ETC2, BC7) reduces texture size by around 75% with minimal visual degradation. The binary GLB container is roughly 30% smaller than glTF JSON with embedded base64 buffers.

Geometry optimization targets redundant data. Models exported from CAD tools often contain duplicate vertices at UV seams. Welding vertices with a small distance threshold reduces file size by 20-40%. Removing unused vertex attributes (tangents, vertex colors) saves additional bytes, and reordering vertices for locality improves Draco's entropy coding.
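Vertex welding can be sketched as grid-snapping plus hashing: snap each coordinate to the threshold grid and merge vertices that land in the same cell. This is a simplified sketch — production tools also compare normals and UVs before merging, which this deliberately ignores:

```javascript
// Weld duplicate vertices within a distance threshold. Returns the
// deduplicated positions plus an index buffer referencing them.
function weldVertices(positions, threshold = 1e-4) {
  const unique = [];
  const indices = [];
  const seen = new Map();
  for (const [x, y, z] of positions) {
    // Snap each coordinate to the grid so near-identical vertices collide.
    const key = [x, y, z].map((c) => Math.round(c / threshold)).join(',');
    if (!seen.has(key)) {
      seen.set(key, unique.length);
      unique.push([x, y, z]);
    }
    indices.push(seen.get(key));
  }
  return { unique, indices };
}

// Two vertices 0.00001 apart weld into one at the default threshold.
const { unique, indices } = weldVertices([
  [0, 0, 0],
  [0.00001, 0, 0], // duplicate within threshold
  [1, 0, 0],
]);
console.log(unique.length, indices); // 2 [ 0, 0, 1 ]
```
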

Texture optimization has the highest impact: textures account for 70-90% of model file size. Resize base color maps to 1024x1024 and normal maps to 512x512. Use JPEG for base color (lossy but small) and PNG for normal maps (lossless but larger). Consider Basis Universal (KTX2) for runtime transcoding to platform-optimal GPU formats.

Material consolidation reduces draw calls and file size. Each material requires separate texture lookups and shader state changes. Merge materials where possible. Use texture atlases to combine multiple materials into a single texture. This reduces both file size and runtime overhead.
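Atlasing means every mesh's UVs must be remapped into that material's sub-rectangle of the combined texture. The remap itself is one multiply-add per axis — a sketch with a hypothetical 2x2 atlas layout:

```javascript
// Remap a UV coordinate into a material's tile of a texture atlas.
// The tile is described by its offset and scale in [0, 1] atlas space.
function remapUV([u, v], { offsetU, offsetV, scaleU, scaleV }) {
  return [offsetU + u * scaleU, offsetV + v * scaleV];
}

// Material packed into the top-right quadrant of a 2x2 atlas:
const tile = { offsetU: 0.5, offsetV: 0.5, scaleU: 0.5, scaleV: 0.5 };
console.log(remapUV([0.5, 0.5], tile)); // [ 0.75, 0.75 ]
console.log(remapUV([0, 0], tile));     // [ 0.5, 0.5 ]
```

One caveat worth knowing: tiled textures (UVs outside [0, 1]) cannot be atlased this way without first baking the tiling down.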

Network transfer is not the only cost. Parsing and decompressing GLB files consumes CPU time. A 10MB compressed GLB may decompress to 40MB in memory. Draco decompression is CPU-intensive and blocks the main thread. Use web workers for decompression to avoid frame drops during load.

CDN and compression headers matter. Serve GLB files with gzip or brotli compression enabled. Brotli achieves 15-20% better compression than gzip for binary data. Use a CDN with edge caching to reduce latency. A 5MB file served from a nearby edge node loads 3x faster than from a distant origin server.

Avoid common mistakes: do not embed high-resolution textures in GLB files without compression; do not ship multiple LOD levels in the same file (load them separately); do not use uncompressed vertex data; do not ignore network conditions during development. Test on throttled connections using browser dev tools.

The performance budget for WebXR: aim for under 3MB for initial load, under 10MB for full scene. Allocate 1-2MB for geometry, 5-8MB for textures, 1MB for materials and metadata. Use OptimiXR compress to achieve these targets. Convert models to GLB, enable Draco compression, resize textures, and test on 4G throttled connections.
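A budget like this is easy to enforce in a build step. A minimal checker, where the numbers mirror the targets above and are guidelines rather than hard limits:

```javascript
// Assumed per-category budget in MB, mirroring the targets in the text.
const BUDGET_MB = { geometry: 2, textures: 8, materials: 1, total: 10 };

// Returns the names of categories over budget; an empty array means the
// scene fits.
function checkBudget(scene) {
  const over = Object.entries(scene)
    .filter(([part, mb]) => mb > (BUDGET_MB[part] ?? Infinity))
    .map(([part]) => part);
  const totalMB = Object.values(scene).reduce((a, b) => a + b, 0);
  if (totalMB > BUDGET_MB.total) over.push('total');
  return over;
}

console.log(checkBudget({ geometry: 1.5, textures: 6, materials: 0.5 }));
// → []
console.log(checkBudget({ geometry: 3, textures: 9, materials: 0.5 }));
// → [ 'geometry', 'textures', 'total' ]
```
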

Progressive enhancement allows graceful degradation. Detect network speed and device capabilities. Load high-resolution assets only on fast connections and capable devices. Provide a low-fidelity fallback for constrained environments. This maximizes reach without sacrificing quality for users with good hardware.
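In the browser, the hints come from `navigator.connection.effectiveType` and `navigator.deviceMemory` (neither is available in all browsers, so defaults matter). A sketch where the values are passed in as parameters to keep the decision logic testable; the thresholds are assumptions, not standards:

```javascript
// Pick an asset tier from network and device hints. effectiveType takes
// the Network Information API values: 'slow-2g', '2g', '3g', '4g'.
function pickTier(effectiveType = '4g', deviceMemoryGB = 4) {
  const slowNet = effectiveType === 'slow-2g' || effectiveType === '2g' || effectiveType === '3g';
  const lowMem = deviceMemoryGB < 4; // assumed cutoff for "capable device"
  if (slowNet || lowMem) return 'low'; // proxy-quality assets
  return 'high';                       // full-resolution assets
}

console.log(pickTier('4g', 8)); // high
console.log(pickTier('3g', 8)); // low  — slow network
console.log(pickTier('4g', 2)); // low  — constrained device
```
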

File size directly correlates with user retention. Every second of load time costs users. Optimize aggressively. Compress models, resize textures, enable Draco, use CDNs, implement progressive loading. The goal is not the smallest possible file but the fastest time to interactive experience.