Rebuild NYC - VR Gaming Interface

service : VR/AR with Unity (C#)

timeline : Oct 2025 - Nov 2025

role : XR Engineer

“Rebuild NYC” is a virtual-reality puzzle experience that invites users to reconstruct iconic New York landmarks presented in a ruined state: the Statue of Liberty, the Twin Towers, and the Stonewall Monument.

As players assemble the monuments, AI-generated narratives unfold, revealing layers of cultural symbolism, urban resilience, and architectural detail.

The experience merges education, empathy, and gamified cognition — making history not something users read, but something they rebuild with their own hands.

Problem Space


Traditional museum and heritage experiences rely on passive consumption — visitors read plaques, swipe through carousels, or view static models.


But history, especially urban history, is deeply tactile, spatial, and emotional. I identified an opportunity to bridge cognitive learning with embodied interaction — letting users physically reconstruct history in a way that strengthens spatial reasoning, cultural empathy, and memory retention.

Key Challenge


How might we leverage VR’s spatial and sensory capabilities to create an emotional and educational reconstruction experience that blends play, history, and urban identity?

Design Strategy


1. Core Interaction Loop

Each monument is broken into modular 3D puzzle pieces.
The system tracks assembly accuracy with placement heuristics built on Unity’s Collider and Transform components, triggering real-time narration once a structure is completed.

Loop:
→ Pick up fragment → Inspect engravings → Match geometry → Assemble → Unlock historical layer (voiceover + light reveal)
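The geometry-matching step of this loop can be sketched as a simple pose check in Unity C#. This is a minimal illustration, not the project’s actual code: the field names (`snapTarget`) and tolerance values are assumptions.

```csharp
using UnityEngine;

// Illustrative fragment-placement heuristic: compares a held puzzle
// piece's pose against an authored anchor using position and rotation
// tolerances. Thresholds here are placeholder values.
public class FragmentSnap : MonoBehaviour
{
    public Transform snapTarget;            // authored anchor for this fragment
    public float positionTolerance = 0.05f; // metres
    public float rotationTolerance = 10f;   // degrees

    public bool IsPlacedCorrectly()
    {
        bool closeEnough = Vector3.Distance(transform.position, snapTarget.position) < positionTolerance;
        bool alignedEnough = Quaternion.Angle(transform.rotation, snapTarget.rotation) < rotationTolerance;
        return closeEnough && alignedEnough;
    }

    void Update()
    {
        if (IsPlacedCorrectly())
        {
            // Snap into place and notify listeners (e.g. the narration system).
            transform.SetPositionAndRotation(snapTarget.position, snapTarget.rotation);
            SendMessage("OnFragmentPlaced", SendMessageOptions.DontRequireReceiver);
        }
    }
}
```

In practice the snap check would run only while the fragment is held or released, rather than every frame, but the pose-tolerance comparison is the core of the heuristic.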

2. Human–AI Collaboration

I designed a context-aware narration engine that adapts tone and content based on the user’s pace and accuracy:

  • Fast builders receive architectural insights.

  • Slow or curious builders hear historical or emotional backstories.

This was powered by a reinforcement-learning-based state machine that maps user performance data to narrative tone shifts: a prototype of adaptive storytelling for XR learning systems.
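As a rough illustration of the pace-and-accuracy mapping, a rule-based sketch could look like the snippet below. The reinforcement-learning component is not shown, and the thresholds and names are assumptions for illustration only.

```csharp
using System;

// Simplified, rule-based sketch of mapping build pace and accuracy to a
// narration tone. The real system is described as RL-based; this only
// illustrates the input/output shape of that policy.
public enum NarrationTone { Architectural, Historical, Emotional }

public static class NarrationPolicy
{
    // piecesPerMinute: assembly pace; accuracy: 0..1 placement accuracy.
    public static NarrationTone Select(float piecesPerMinute, float accuracy)
    {
        if (piecesPerMinute > 4f && accuracy > 0.8f)
            return NarrationTone.Architectural; // fast, precise builders
        if (piecesPerMinute < 2f)
            return NarrationTone.Emotional;     // slow, curious builders
        return NarrationTone.Historical;        // everyone else
    }
}
```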


Research & Insights


Iterations & Rookie Mistakes


  • Early builds offered no customisation options.

  • The first interface was a flat 2D screen transplanted into VR.

  • No 3D elements, so the experience lacked an interactive VR feel.

  • No storyline connecting the puzzles.

  • I stuck to basic tools despite a much more diverse toolset being available.

Prototype


Developed in Unity using the XR Interaction Toolkit and tested on Meta Quest 3 and HTC Vive. Physics-based hand interactions were implemented with haptic pulses for tactile reinforcement, following HAIC and AURA principles.
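A minimal sketch of firing a haptic pulse on successful placement, using UnityEngine.XR directly (the XR Interaction Toolkit wraps the same capability); amplitude and duration values are illustrative:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sends a short haptic impulse to the right-hand controller, e.g. as
// tactile reinforcement when a fragment snaps into place.
public class PlacementHaptics : MonoBehaviour
{
    public void PulseRightHand(float amplitude = 0.5f, float seconds = 0.1f)
    {
        InputDevice device = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (device.isValid &&
            device.TryGetHapticCapabilities(out HapticCapabilities caps) &&
            caps.supportsImpulse)
        {
            device.SendHapticImpulse(0, amplitude, seconds); // channel 0
        }
    }
}
```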


Usability Testing Glimpse


I conducted two rounds of formative usability tests with 10 users (from both VR and non-VR backgrounds) across three devices: Meta Quest 3S, Meta Quest 3, and Apple Vision Pro.

Quantitative Outcomes:

  • 82% of users completed the puzzle without external guidance

  • 28% faster task completion after the first 5 minutes

  • 40% increase in recall accuracy of monument facts (post-session quiz)

Qualitative Insights:

“It feels like I’m holding history.”
“The voice that responds to how I build makes it feel alive.”


Technical Implementation


Accessibility & Inclusion


  • Adjustable contrast modes for sensory sensitivity

  • Voice narration for low-vision users

  • Simplified control preset for users with motor limitations

  • Partially tested against WCAG 2.2 and VR accessibility guidelines (work in progress)

Outcome & Impact


Rebuild NYC redefines how users perceive the relationship between urban memory and embodied learning. It transforms the act of rebuilding into a metaphor for resilience — both personal and collective.


Reflection


This project strengthened my conviction that VR UX design is not just about realism — it’s about cognition, emotion, and adaptation.
It challenged me to merge engineering precision with narrative empathy, crafting a system where interaction becomes education.

As a designer-engineer, I learned to think beyond pixels — designing for presence, flow, and memory in 3D space.


Interaction & UI Design


1. Spatial Interface System

In VR, flat menus break immersion. I designed a holographic radial interface anchored to the user’s non-dominant hand:

  • Hover-based selection minimizes controller fatigue.

  • Color-coded feedback (blue = correct placement, red = mismatch).

  • Voice-guided cues for accessibility, integrated using text-to-speech APIs.
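Anchoring a menu to the non-dominant hand while keeping its labels facing the user can be sketched as follows; the rig references (`handAnchor`, `head`) and offset values are assumptions, not the project’s actual scene hierarchy:

```csharp
using UnityEngine;

// Keeps a hand-anchored menu positioned near the tracked hand while
// billboarding it toward the user's head so labels stay readable.
public class HandMenuAnchor : MonoBehaviour
{
    public Transform handAnchor;   // tracked non-dominant hand/controller
    public Transform head;         // main camera transform
    public Vector3 localOffset = new Vector3(0f, 0.08f, 0.05f);

    void LateUpdate()
    {
        // Follow the hand with a small offset above the palm.
        transform.position = handAnchor.TransformPoint(localOffset);
        // Face away from the head so the menu's front is toward the user.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```

Running this in LateUpdate ensures the menu is repositioned after hand tracking has updated for the frame, avoiding one-frame lag behind the hand.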

2. Information Architecture in 3D

Each monument’s “memory core” (a glowing orb at completion) holds layered data visualizations:

  • Layer 1 → Visual reconstruction

  • Layer 2 → Audio narrative

  • Layer 3 → Historical timeline overlay

This hierarchy reflects 3D content architecture principles adapted from Don Norman’s visibility and mapping heuristics for spatial UX.


Main Scenes


1. Main Menu Lobby: Select which monument to rebuild.

2. History Scene: Historical symbolism of the chosen monument.

3. Puzzle Scene: The main VR workspace, with a floating or table-based puzzle assembly area.

4. Progress Display Zone: Real-time progress bar, piece-completion count, and visual facts.

5. Completion Scene: Celebration, sound effects, and monument history reveal.

