WishWish

Project Details

WishWish is a high-performance, creator-driven spatial marketplace engineered to modernize the "Pop Mart" and gacha-style collectible economy for the digital age. The architecture embeds a sophisticated 3D engine within native mobile environments, supporting a production-scale GLB ingestion pipeline and a multi-modal generative AI ecosystem. The platform allows global creators to launch unique digital series with probabilistic "blind box" mechanics, while offering users an intelligent post-processing suite that enhances spatial captures into photorealistic media.

Key Systems Architecture

  • Embedded 3D Rendering Core (UaaL): Architected a modular Unity-as-a-Library (UaaL) core for native iOS/Android integration, enabling seamless transitions between standard 2D views, interactive 3D unboxing, and persistent AR environments.
  • Intelligent Generative Pipeline: Designed a cross-platform AI pipeline that uses dynamic prompt orchestration to merge real-world AR environmental data (lighting, scene context) with digital asset metadata.
  • Dynamic Prompt Generation System: Engineered a backend-driven system that automatically analyzes character materials and lighting to generate detailed, AI-ready enhancement "recipes".
    • Step 1: Ingredient Ingestion: The system extracts character proportions, materials (plastic, vinyl, metal), and dominant colors via thumbnail analysis.
    • Step 2: Rulebook Inference: An AI-driven "Rulebook" (JSON) dictates the generation of lighting, shadows, and reflections based on indoor/outdoor context.
    • Step 3: Photorealistic Synthesis: The system outputs a Dynamic Prompt that harmonizes captured AR images with the 3D model for photorealistic blending.
  • Production-Scale GLB Ingestion: Developed a robust ingestion pipeline with automated validation filters, enforcing strict constraints on model size, texture resolution, and vertex counts to ensure device-level stability.
  • Asynchronous Geospatial Mapping: Implemented optimized algorithms for background environment scanning, allowing users to place spatial assets immediately while scene mapping completes asynchronously behind the scenes.
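The three-step "recipe" flow above can be sketched in TypeScript. This is an illustrative model only: the interface shapes, rulebook values, and `buildPrompt` function are hypothetical stand-ins for the backend-driven system, not the production schema.

```typescript
// Step 1 output: "ingredients" observed from thumbnail analysis (names hypothetical).
interface Ingredients {
  proportions: string;                              // e.g. "chibi, oversized head"
  materials: ("plastic" | "vinyl" | "metal")[];
  dominantColors: string[];
}

// Step 2: a JSON "Rulebook" mapping scene context to rendering directives.
interface ContextRules { lighting: string; shadows: string; reflections: string }

const rulebook: Record<"indoor" | "outdoor", ContextRules> = {
  indoor: {
    lighting: "soft ambient interior light",
    shadows: "diffuse contact shadows",
    reflections: "subtle glossy highlights",
  },
  outdoor: {
    lighting: "directional sunlight with sky fill",
    shadows: "hard-edged cast shadows",
    reflections: "environment-mapped reflections",
  },
};

// Step 3: harmonize ingredients and rules into a single dynamic prompt.
function buildPrompt(ing: Ingredients, context: "indoor" | "outdoor"): string {
  const rules = rulebook[context];
  return [
    `A photorealistic collectible figure with ${ing.proportions}`,
    `made of ${ing.materials.join(" and ")}`,
    `in ${ing.dominantColors.join(", ")} tones`,
    rules.lighting,
    rules.shadows,
    rules.reflections,
    "blended seamlessly into the captured AR photo",
  ].join(", ");
}
```

Keeping the rulebook as plain JSON lets the backend tune lighting and shadow directives without shipping a new client build.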
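A minimal sketch of the GLB validation filter described above, assuming a pre-parsed stats object; the numeric limits here are illustrative placeholders, not the production thresholds.

```typescript
// Stats extracted from a parsed GLB file (shape is hypothetical).
interface GlbStats {
  fileSizeBytes: number;
  maxTextureDim: number;  // largest texture side, in pixels
  vertexCount: number;
}

interface Limits {
  maxFileSizeBytes: number;
  maxTextureDim: number;
  maxVertexCount: number;
}

// Example limits; real values would be tuned per device tier.
const DEFAULT_LIMITS: Limits = {
  maxFileSizeBytes: 25 * 1024 * 1024, // 25 MB
  maxTextureDim: 2048,                // 2K textures
  maxVertexCount: 100_000,
};

// Returns human-readable violations; an empty array means the model passes.
function validateGlb(stats: GlbStats, limits: Limits = DEFAULT_LIMITS): string[] {
  const errors: string[] = [];
  if (stats.fileSizeBytes > limits.maxFileSizeBytes)
    errors.push(`file size ${stats.fileSizeBytes} B exceeds ${limits.maxFileSizeBytes} B`);
  if (stats.maxTextureDim > limits.maxTextureDim)
    errors.push(`texture ${stats.maxTextureDim}px exceeds ${limits.maxTextureDim}px`);
  if (stats.vertexCount > limits.maxVertexCount)
    errors.push(`vertex count ${stats.vertexCount} exceeds ${limits.maxVertexCount}`);
  return errors;
}
```

Rejecting oversized assets at ingestion, rather than at render time, is what keeps device-level stability predictable across the installed base.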
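The immediate-placement / deferred-mapping split can be modeled as a simple pending queue. All names below (`SpatialMapper`, `placeAsset`, `onSceneMapped`) are hypothetical; this is a sketch of the pattern, not the shipped implementation.

```typescript
interface Anchor { x: number; y: number; z: number }

class SpatialMapper {
  readonly placed: string[] = [];   // assets shown to the user immediately
  readonly refined: string[] = [];  // assets re-anchored after full mapping
  private pending: { assetId: string; anchor: Anchor }[] = [];

  // Place instantly against a coarse local anchor for immediate feedback,
  // and queue the asset for refinement once the full scene map exists.
  placeAsset(assetId: string, anchor: Anchor): void {
    this.placed.push(assetId);
    this.pending.push({ assetId, anchor });
  }

  // Invoked by the background scanner when environment mapping completes;
  // queued assets are then re-anchored against the refined scene map.
  onSceneMapped(): void {
    for (const { assetId } of this.pending) this.refined.push(assetId);
    this.pending = [];
  }
}
```

The user never waits on the scanner: placement and mapping are decoupled, and refinement happens whenever the background pass finishes.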

Technical Leadership & Ownership
As the Staff Immersive Systems Architect, I directed the technical strategy and cross-functional coordination for a distributed international team:

  • Systemic Optimization & Memory Governance: Conducted systematic A/B testing and memory profiling to resolve critical device crashes, establishing asset-scaling best practices that eliminated multi-platform performance bottlenecks.
  • Cross-Platform Communication Framework: Architected an event-based communication layer between the native (Swift/Kotlin) and embedded (Unity) environments, ensuring isolated, modular system stability.
  • Global Technical Leadership: Facilitated daily technical coordination with engineering teams in China and the US, bridging cultural and linguistic gaps to maintain production velocity and architectural integrity.
  • AI Product Strategy: Spearheaded the integration of Nano Banana, ChatGPT, and Seedream models to transform standard AR photos into high-fidelity, shareable digital media.
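The event-based communication layer can be illustrated with a small typed event bus. The production layer is implemented in Swift/Kotlin and C#; the TypeScript below merely models the shared JSON contract that keeps both sides decoupled, and every event and class name here is a hypothetical example.

```typescript
// Discriminated union defining the JSON envelope both sides agree on.
type BridgeEvent =
  | { type: "UNBOX_STARTED"; payload: { seriesId: string } }
  | { type: "AR_PLACEMENT"; payload: { assetId: string; anchor: [number, number, number] } }
  | { type: "SCENE_READY"; payload: Record<string, never> };

type Handler = (event: BridgeEvent) => void;

class BridgeBus {
  private handlers = new Map<BridgeEvent["type"], Handler[]>();

  // Subscribe a handler to one event type.
  on(type: BridgeEvent["type"], handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // Messages cross the native/Unity boundary as serialized JSON strings,
  // so neither side needs compile-time knowledge of the other's types.
  dispatch(json: string): void {
    const event = JSON.parse(json) as BridgeEvent;
    for (const h of this.handlers.get(event.type) ?? []) h(event);
  }
}
```

Because the boundary only carries serialized envelopes, either runtime can evolve independently as long as the event schema stays stable.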

System Architecture
Engine: Unity (UaaL), Swift, Kotlin
AI Models: Nano Banana, ChatGPT, Seedream
Infrastructure: Node.js (GLB ingestion pipeline), TypeScript, Three.js

Project Lifecycle
Duration: 2025 - Present
Phase: Global Commercial Scale
Target: Creator Marketplace / Gacha / Digital Pop Mart
Platform: Web, iOS, Android

Technical Ownership
Staff Immersive Systems Architect
AI Pipeline Lead
Systems Orchestration Lead