The Story

Why I Built Elara

The Problem I Saw

As a student, I experienced firsthand how frustrating it is to get stuck on a math problem with no one to help. Traditional tutoring is expensive and hard to schedule, and most online resources optimize for answers—not understanding.

I saw the same pattern repeatedly when helping classmates: students didn't need solutions handed to them—they needed guidance through the reasoning process.

When I explored existing AI tools, I noticed a critical gap. Most systems either:

  • Jump straight to the final answer, or
  • Provide generic feedback disconnected from the student's actual work

None of them could look at a student's handwritten steps and respond with context-aware guidance.

The Vision

I set out to build the tool I wished I had:

An AI tutor that watches you think, not just what you type.

Elara is built around the Socratic method—guiding students with the right questions at the right time, instead of giving away answers. The goal isn't correctness—it's building intuition.

To make this possible, the system needs to:

  • Understand messy, real student input (handwritten math, incomplete steps, corrections)
  • Track reasoning across multiple steps
  • Decide when to intervene and how much help to give

The Journey

Building Elara forced me to go beyond simply prompting LLMs and into designing a full agentic system that reasons over a student's work.

Some of the core challenges I tackled:

Handwriting to Structured Reasoning

Converting PencilKit stroke data into meaningful steps. Normalizing bounding boxes and grouping symbols into expressions.
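In spirit, the grouping step works on the bounding boxes of recognized symbols. Here is a simplified, language-agnostic sketch in Python (the `Box` type, the overlap threshold, and the coordinates are illustrative, not Elara's actual code): symbols whose vertical extents overlap enough are merged into the same line, and each line is then read left to right as a candidate expression.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box for one recognized symbol (canvas coordinates)."""
    x: float
    y: float
    w: float
    h: float

def group_into_lines(boxes, overlap_ratio=0.4):
    """Group symbol boxes into horizontal lines (candidate expressions).

    A box joins an existing line when its vertical interval overlaps the
    line's last box by at least `overlap_ratio` of the smaller height.
    """
    lines = []  # each line is a list of boxes
    for box in sorted(boxes, key=lambda b: (b.y, b.x)):
        for line in lines:
            ref = line[-1]
            top = max(box.y, ref.y)
            bottom = min(box.y + box.h, ref.y + ref.h)
            overlap = bottom - top
            if overlap >= overlap_ratio * min(box.h, ref.h):
                line.append(box)
                break
        else:
            lines.append([box])
    # within each line, order symbols left to right
    return [sorted(line, key=lambda b: b.x) for line in lines]
```

The real problem is messier (slanted writing, superscripts, fraction bars), but this captures the core idea: normalize geometry first, then reason about structure.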

Step-Level Understanding

Representing student work as ordered steps (not just raw text). Handling real-world behaviors like cross-outs, rewrites, and multi-line steps.
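A minimal sketch of that representation, assuming each step carries a status and an optional pointer to the step it rewrites (all names here are hypothetical, not Elara's actual schema):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class StepStatus(Enum):
    ACTIVE = "active"
    CROSSED_OUT = "crossed_out"

@dataclass
class Step:
    index: int
    latex: str
    status: StepStatus = StepStatus.ACTIVE
    supersedes: Optional[int] = None  # index of the step this rewrite replaces

def effective_steps(steps):
    """Return the reasoning chain the tutor should analyze:
    crossed-out steps and superseded originals are dropped."""
    superseded = {s.supersedes for s in steps if s.supersedes is not None}
    return [s for s in steps
            if s.status != StepStatus.CROSSED_OUT and s.index not in superseded]
```

Keeping rejected work in the record (rather than deleting it) matters: a crossed-out step is still evidence of how the student was thinking.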

Agentic Architecture

Designing modular components (Perception, Detection, Reasoning, Annotation). Orchestrating them into a pipeline that produces grounded, actionable feedback.
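The orchestration pattern can be sketched as a chain of stages that each enrich a shared context. The stage bodies below are placeholders (the real components do far more), but the shape of the pipeline mirrors the Perception → Detection → Reasoning → Annotation flow:

```python
def perception(ctx):
    # placeholder: would convert raw strokes into structured steps
    ctx["steps"] = ctx.pop("raw_strokes")
    return ctx

def detection(ctx):
    # placeholder: flag steps that look inconsistent or incomplete
    ctx["issues"] = [i for i, s in enumerate(ctx["steps"]) if "?" in s]
    return ctx

def reasoning(ctx):
    # Socratic framing: ask a question, don't hand over the answer
    ctx["hints"] = [f"Revisit step {i}: what operation got you here?"
                    for i in ctx["issues"]]
    return ctx

def annotation(ctx):
    # pair each hint with the step it refers to
    ctx["annotations"] = list(zip(ctx["issues"], ctx["hints"]))
    return ctx

def run_pipeline(raw_strokes,
                 stages=(perception, detection, reasoning, annotation)):
    ctx = {"raw_strokes": raw_strokes}
    for stage in stages:
        ctx = stage(ctx)
    return ctx["annotations"]
```

Keeping the stages modular makes each one independently testable and swappable, which is what makes "grounded, actionable feedback" a pipeline property rather than a prompt-engineering accident.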

Grounded Feedback

Mapping model outputs back to exact regions on the canvas. Ensuring feedback is specific, visual, and tied to the student's work.
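A rough sketch of that grounding step, under the assumption that each recognized step already has a known canvas rectangle (the `Region` and `Annotation` types are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Canvas rectangle, in the same coordinate space as the ink."""
    x: float
    y: float
    w: float
    h: float

@dataclass
class Annotation:
    step_index: int
    region: Region
    message: str

def ground_feedback(model_output, step_regions):
    """Attach each model comment to the on-canvas region of the step it
    refers to. Comments about unknown steps are dropped rather than
    rendered as floating, ungrounded text."""
    annotations = []
    for step_index, message in model_output:
        if step_index in step_regions:
            annotations.append(
                Annotation(step_index, step_regions[step_index], message))
    return annotations
```

Dropping unmappable comments is a deliberate choice: feedback that can't point at something the student wrote is exactly the generic feedback the project set out to avoid.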

Latency vs. Intelligence Tradeoffs

Deciding when to run a full analysis versus a lightweight check. Balancing responsiveness with reasoning depth.
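One way to frame that tradeoff is a small triage heuristic that runs whenever the student pauses. The threshold and trigger conditions below are illustrative assumptions, not Elara's tuned values:

```python
import time

IDLE_THRESHOLD_S = 4.0  # illustrative; real value would be tuned empirically

def choose_analysis(new_step_completed, last_input_ts, now=None):
    """Pick an analysis tier after a pause in writing.

    - A finished step justifies the expensive, full reasoning pass.
    - A long idle gap suggests the student may be stuck, so also go deep.
    - Otherwise, run only fast local checks and stay out of the way.
    """
    now = time.monotonic() if now is None else now
    if new_step_completed:
        return "full"
    if now - last_input_ts >= IDLE_THRESHOLD_S:
        return "full"
    return "lightweight"
```

The point is that "when to intervene" becomes an explicit, testable policy instead of something buried inside a single monolithic model call.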

What I Learned

This project changed how I think about building software.

The hardest problems weren't just technical—they were about:

  • Defining what "good help" actually looks like
  • Deciding when not to intervene
  • Designing systems that adapt to messy, real human behavior

I also learned how to:

  • Break down ambiguous problems into testable components
  • Iterate quickly with real users instead of over-engineering
  • Build systems where multiple AI components collaborate toward a goal

Where It's Going

Elara is still early, but the goal is clear:

To build a system that becomes a long-term learning partner, not just a homework helper.

Let's Connect

I'm looking for opportunities where I can build impactful products and grow as an engineer. If my work resonates with you, I'd love to chat.