Replies: 1 comment
Can't wait to get the new features. Best wishes!
Currently, our interactive scenes are generated through a two-step pipeline: scientific modeling → HTML generation. This produces decent physics simulations (projectile motion, circuits, etc.), but it's quite narrow in scope.
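To make the two-step shape concrete, here is a minimal sketch of what the pipeline could look like. All names (`SceneModel`, `model_concept`, `generate_html`) and the stubbed data are hypothetical; in the real pipeline both steps would be LLM calls.

```python
from dataclasses import dataclass

@dataclass
class SceneModel:
    """Output of the scientific-modeling step (hypothetical schema)."""
    topic: str
    variables: dict  # parameter name -> (min, max, default)
    equations: list  # relations the generated scene must honor

def model_concept(topic: str) -> SceneModel:
    # Step 1: in production this would extract the quantitative
    # structure of the concept via an LLM call; stubbed here.
    if topic == "projectile motion":
        return SceneModel(
            topic=topic,
            variables={"v0": (0, 100, 20), "angle": (0, 90, 45)},
            equations=["x = v0*cos(a)*t", "y = v0*sin(a)*t - g*t^2/2"],
        )
    raise ValueError(f"no model for {topic!r}")

def generate_html(model: SceneModel) -> str:
    # Step 2: in production this would be the HTML-generation prompt;
    # here we just emit one slider per model parameter.
    sliders = "\n".join(
        f'<input type="range" name="{name}" min="{lo}" max="{hi}" value="{d}">'
        for name, (lo, hi, d) in model.variables.items()
    )
    return f"<!-- {model.topic} -->\n{sliders}"

html = generate_html(model_concept("projectile motion"))
```

The narrowness shows up in step 2: a single generic HTML prompt has to serve every concept, whatever interaction would actually suit it.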
From an educational standpoint, we think there's a lot of room to grow here. The interactive scenes tend to be where students actually engage with a concept rather than passively reading through slides. Being able to adjust parameters and see what happens builds intuition in a way that static content can't. But right now that only works for a small slice of topics. Some concepts are better understood by seeing a process unfold step by step, others by writing code and testing it, others by exploring a structure in three dimensions. A single "generate HTML" prompt can't really handle all of these well.
There's also a gap in how our AI agents interact with these scenes. During a roundtable session, the teacher agent can narrate slides, draw on the whiteboard, and guide discussion, but when it comes to an interactive scene, it just... watches. It can't highlight a specific element, set the simulation to a particular state to demonstrate a point, or walk the student through the interface step by step. The agent essentially becomes passive in the most interactive part of the course.
We've been thinking about two directions:
1. Specializing the interactive generation pipeline
Instead of one generic prompt for all interactive content, split into specialized paths based on what kind of interaction best fits the concept. The outline generator would decide the interaction style, and each style gets its own optimized prompt and constraints. This would let us cover a much wider range of subjects while producing higher quality output for each.
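A rough sketch of what that routing could look like, using the interaction styles mentioned above. The style names, the keyword-based `choose_style`, and the placeholder prompt strings are all illustrative assumptions; in the proposal the decision would live in the outline generator, presumably as an LLM classification rather than keyword matching.

```python
from enum import Enum

class InteractionStyle(Enum):
    SIMULATION = "simulation"      # parameter sliders + live model
    STEP_THROUGH = "step_through"  # a process unfolds step by step
    CODE_SANDBOX = "code_sandbox"  # write code and test it
    EXPLORE_3D = "explore_3d"      # explore a structure in 3D

# One specialized prompt (with its own constraints) per style.
# Placeholder text, not real prompts.
STYLE_PROMPTS = {
    InteractionStyle.SIMULATION: "Generate an HTML scene with parameter sliders ...",
    InteractionStyle.STEP_THROUGH: "Generate a scene that reveals one step at a time ...",
    InteractionStyle.CODE_SANDBOX: "Generate an embedded editor with test cases ...",
    InteractionStyle.EXPLORE_3D: "Generate a rotatable 3D scene ...",
}

def choose_style(concept: str) -> InteractionStyle:
    # Stand-in for the outline generator's decision.
    if "algorithm" in concept or "process" in concept:
        return InteractionStyle.STEP_THROUGH
    if "molecule" in concept or "geometry" in concept:
        return InteractionStyle.EXPLORE_3D
    if "programming" in concept:
        return InteractionStyle.CODE_SANDBOX
    return InteractionStyle.SIMULATION

def prompt_for(concept: str) -> str:
    return STYLE_PROMPTS[choose_style(concept)]
```

The point of the split is that each branch can carry constraints that make no sense for the others (e.g. a test harness for the sandbox, camera controls for the 3D path), instead of one prompt trying to cover all of them.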
2. Agent actions for interactive scenes
Giving the teacher agent a set of actions it can perform on the interactive content: highlighting elements, setting widget state, revealing parts progressively, annotating specific areas. This way the agent can actually teach through the interactive scene instead of just talking over it.
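One way to frame the action set is as a small message schema the agent emits and the scene applies. This is a sketch under assumptions: the action kinds mirror the list above, and `SceneController` stands in for whatever actually forwards messages into the scene (e.g. `postMessage` to an iframe); none of these names come from the existing codebase.

```python
from dataclasses import dataclass, field

@dataclass
class SceneAction:
    """One action the teacher agent can perform on a live scene."""
    kind: str           # "highlight" | "set_state" | "reveal" | "annotate"
    target: str         # selector or widget id inside the scene
    payload: dict = field(default_factory=dict)

class SceneController:
    """Validates agent actions and forwards them to the scene.
    Here it only records them; the real version would message the
    scene's runtime."""

    KINDS = {"highlight", "set_state", "reveal", "annotate"}

    def __init__(self):
        self.log = []

    def apply(self, action: SceneAction):
        # Reject malformed actions before they reach the scene.
        if action.kind not in self.KINDS:
            raise ValueError(f"unknown action kind: {action.kind}")
        self.log.append(action)

# The agent demonstrates a point by setting the simulation to a
# specific state, then directing the student's attention to it.
ctrl = SceneController()
ctrl.apply(SceneAction("set_state", "#angle-slider", {"value": 45}))
ctrl.apply(SceneAction("highlight", "#trajectory"))
```

A schema like this also gives the roundtable session something to log and replay, so a demonstration the agent performed can be reviewed later.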
There are still some open questions around both directions. Would love to hear thoughts from anyone who's been using or extending the interactive scene pipeline.