We started by trying to build a chatbot to teach migrant workers about savings and budgeting. We ended up learning that they already had sophisticated systems for managing money across borders - they didn't need our financial frameworks. They needed structural support: fair banking access, protection from exploitative employers, legal help when contracts were violated.
The most valuable thing wasn't the prototype. It was learning how to listen - through drawing exercises, spectrum mapping, conversations that went sideways from our research questions. One worker drew his house in blue, then added in green what he wanted to achieve in five years. Our chatbot was suggesting "save $50 this month" to someone building a home across continents.
I wrote about what I learned from this project.
Youth mentors at Over The Rainbow were spending a lot of time trying to interpret digital emotional expressions - emoji usage, message timing, Singlish particles like "sia" that carry emotional weight. I built a tool to help surface patterns across conversations without trying to automate the human work of actually listening.
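To make "surfacing patterns" concrete, here is a minimal sketch of the kind of per-conversation summary such a tool could produce: top emoji, Singlish particle counts, and gaps between messages. The message format, particle list, and emoji heuristic are all assumptions for illustration, not the tool's actual implementation.

```python
# Illustrative sketch only, not the tool's actual code: the message format,
# the particle list, and the emoji heuristic are assumptions for the example.
from collections import Counter
from datetime import datetime

SINGLISH_PARTICLES = {"sia", "lah", "lor", "leh", "meh"}  # assumed subset

def surface_patterns(messages):
    """Summarize one conversation: top emoji, Singlish particle counts,
    and the median gap between messages.

    `messages` is a list of (datetime, text) tuples, oldest first.
    """
    emoji = Counter()
    particles = Counter()
    gaps = []
    prev_ts = None
    for ts, text in messages:
        if prev_ts is not None:
            gaps.append((ts - prev_ts).total_seconds())
        prev_ts = ts
        for ch in text:
            # Rough heuristic: most emoji code points sit at or above U+1F300.
            if ord(ch) >= 0x1F300:
                emoji[ch] += 1
        for tok in text.lower().split():
            if tok.strip(".,!?") in SINGLISH_PARTICLES:
                particles[tok.strip(".,!?")] += 1
    gaps.sort()
    median_gap = gaps[len(gaps) // 2] if gaps else None
    return {
        "top_emoji": emoji.most_common(3),
        "particles": dict(particles),
        "median_gap_seconds": median_gap,
    }

# A long pause followed by a bare "ok sia" is exactly the kind of signal
# mentors were already reading by hand.
msgs = [
    (datetime(2024, 5, 1, 21, 0), "ok sia 😅"),
    (datetime(2024, 5, 1, 21, 45), "nvm lah"),
]
print(surface_patterns(msgs))
```

The design point is summary, not judgment: nothing here labels an emotional state. The tool puts the raw signals side by side and leaves the actual reading to the mentor.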
Three complete redesigns taught me that building for emotionally sensitive contexts is less about technical sophistication and more about understanding what the human work actually is. The hardest part wasn't the NLP - it was figuring out how to support mentors without making everything feel clinical.
A mental health app for Gen Z. We raised $50k in funding, passed 1,000+ downloads, and ran pilots with NUS and Duke-NUS. I learned a lot about shipping a product, managing a team, and eventually winding something down when it wasn't working.
I worked at a Chinese tech accelerator building scrapers and automation tools. The most valuable part was the exposure to how the Chinese tech ecosystem operates differently from what I knew in Singapore.
I kept running into the same question: how do you build systems that recognize emotions without reducing them to data points? This program was my attempt to sit with that question more seriously - combining computing, psychology, and ethics.
The Ethics of Anthropomorphic AI for Emotional Support - What happens when we design AI to feel like a friend? When does that help, and when does it cross into manipulation?