Launch HN: Golpo (YC S25) – AI-generated explainer videos
skar01 · 18 hours ago
Hey HN! We’re Shraman and Shreyas Kar, building Golpo (https://video.golpoai.com), an AI generator for whiteboard-style explainer videos, capable of creating videos from any document or prompt.

We’ve always used video to explain concepts; it felt like the clearest way to communicate. But making good videos was time-consuming and tedious: it required planning, scripting, recording, editing, and syncing voice with visuals. Even a 2-minute video could take hours.

AI video tools are impressive at generating cinematic scenes and flashy content, but struggle to explain a product demo, walk through a complex workflow, or teach a technical topic. People still spend hours making explainer videos manually because existing AI tools aren’t built for learning or clarity.

Our solution is Golpo. Its video generation engine produces time-aligned graphics with spoken narration, well suited to onboarding, training, product walkthroughs, and education. It’s fast, scalable, and built from the ground up to help people understand complex ideas through simple storytelling.

Here’s a demo: https://www.youtube.com/watch?v=C_LGM0dEyDA#t=7.

Golpo is built specifically for use cases involving explaining, learning, and onboarding. In our (obviously biased!) opinion, it feels authentic and engaging in a way no other AI video generator does.

Golpo can generate videos in over 190 languages. After it generates a video, you can fully customize the animations by describing, in natural language, the changes you want to see in each motion graphic.

It was challenging to get this to work! Initially, we used a code-generation approach with Manim, where we fine-tuned a language model to emit Python animation scripts directly from the input text. While promising for small examples, this quickly became brittle, and the generated code usually contained broken imports, unsupported transforms, and poor timing alignment between narration and visuals. Debugging and regenerating these scripts was often slower than creating them manually.

We also explored training a custom diffusion-based video model, but found it impractical for our needs. Diffusion could produce high-fidelity cinematic scenes, but generating coherent sequences beyond about 30 seconds was unreliable without complex stitching, making edits required regenerating large portions of the video, and visuals frequently drifted from the instructional intent, especially for abstract or technical topics. Also, we did not have the compute to scale this.

Existing state-of-the-art systems like Sora and Veo 3 face similar limitations: they are optimized for cinematic storytelling, not step-by-step educational content, and they lack both the deterministic control needed for time-aligned narration and the scalability for 5–10 minute explainers.
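To make "time-aligned narration" concrete: the core scheduling problem is mapping drawing actions onto word-level timestamps from a TTS engine. Here is a minimal sketch of that idea; the function name and data format are hypothetical illustrations, not Golpo's API:

```python
# Schedule drawing actions so each visual appears while its word is spoken.
# Word timestamps would come from a TTS engine; here they are hard-coded.
def schedule_strokes(words, strokes_per_word):
    """Map each word's stroke group onto that word's spoken time span."""
    schedule = []
    for (word, start, end), strokes in zip(words, strokes_per_word):
        # Spread this word's strokes evenly across its spoken duration.
        step = (end - start) / max(len(strokes), 1)
        for i, stroke in enumerate(strokes):
            schedule.append((round(start + i * step, 3), word, stroke))
    return schedule

words = [("gradient", 0.0, 0.6), ("descent", 0.6, 1.2)]
strokes = [["g-glyph", "arrow"], ["d-glyph"]]
for t, word, stroke in schedule_strokes(words, strokes):
    print(f"{t:.3f}s  {word:10s} {stroke}")
```

The point of the determinism the post mentions: given the same timestamps, the same drawing schedule comes out every time, which diffusion-style generation cannot guarantee.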

In the end, we took a different path: training a reinforcement learning agent to “draw” whiteboard strokes step by step, optimized for clear, human-like explanations. This worked well because the action space was simple and the environment was not overly complex, allowing the agent to learn efficient, precise, and consistent drawing behaviors.
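As an illustration of why a small action space helps, here is a toy sketch of what a whiteboard-drawing environment could look like. This is a hypothetical reconstruction for intuition only, not Golpo's actual environment or reward:

```python
# A toy "whiteboard" RL environment: the agent's entire action space is
# pen up/down plus relative pen moves, and the state is the strokes so far.
from dataclasses import dataclass, field

@dataclass
class WhiteboardEnv:
    width: int = 64
    height: int = 64
    pen: tuple = (0, 0)
    pen_down: bool = False
    strokes: list = field(default_factory=list)

    def step(self, action):
        """action is ('pen', bool) or ('move', dx, dy)."""
        if action[0] == 'pen':
            self.pen_down = action[1]
        elif action[0] == 'move':
            _, dx, dy = action
            # Clamp the pen to the canvas.
            x = min(max(self.pen[0] + dx, 0), self.width - 1)
            y = min(max(self.pen[1] + dy, 0), self.height - 1)
            self.pen = (x, y)
            if self.pen_down:
                self.strokes.append(self.pen)
        # A real reward would score legibility / match to a target sketch;
        # that part is the hard, proprietary bit.
        reward = 0.0
        return self.pen, reward

env = WhiteboardEnv()
env.step(('pen', True))
env.step(('move', 3, 4))
print(env.strokes)  # [(3, 4)]
```

With only two action types, the agent's policy search is over a tiny discrete-ish space, which is what makes "efficient, precise, and consistent drawing" learnable at all.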

Here are some sample videos that Golpo generated:

https://www.youtube.com/watch?v=33xNoWHYZGA (Whiteboard Gym - the tech behind Golpo itself)

https://www.youtube.com/watch?v=w_ZwKhptUqI (How do RNNs work?)

https://www.youtube.com/watch?v=RxFKo-2sWCM (function pointers in C)

https://golpo-podcast-inputs.s3.us-east-2.amazonaws.com/file... (basic intro to Gödel's theorem)

You can try Golpo here: https://video.golpoai.com, and we will set you up with 2 credits. We’d love your feedback, especially on what feels off, what you’d want to control, and how you might use it. Comments welcome!

typs · 18 hours ago
If that demo video is how it actually works, this is a pretty amazing technical feat. I’m definitely going to try this out.

Edit: I've used it. It's amazing. I'm going to be using this a lot.

skar01 → typs · 16 hours ago
Thank you!!
mclau157 · 18 hours ago
I have used AI in the past to learn a topic by creating a GUI with input sliders and outputs, so I could see how things change when I adjust parameters. That could work here too: people could ask "what if x happens" and see the result, which also makes them feel in control of the learning.
skar01 → mclau157 · 15 hours ago
Thank you!!
skar01 · 18 hours ago
Hey also, if you want to suggest a video, we could try generating one and reply here with a link! Just tell us what you want the video to be about!!
cube2222 → skar01 · 17 hours ago
Hey, kudos for the product / demo on the website - it managed to keep me engaged to watch it till the end.

I’m mostly curious how it fares with more complex topics and with doing actually informative (rather than just “plain background”) illustrations.

Like a video explaining transformer attention in LLMs, to stay on the AI topic?

skar01 → cube2222 · 17 hours ago
Yeah so it actually does pretty well. Here are some sample videos:

https://www.youtube.com/watch?v=33xNoWHYZGA&t=1s

https://www.youtube.com/watch?v=w_ZwKhptUqI

andhuman → skar01 · 6 hours ago
Could you do a video about latent heat?
metalliqaz · 18 hours ago
So... if I had the enterprise accounts for various LLM services, could I dupe this company with a basic upload page and a nice big prompt?
Wolf_Larsen → metalliqaz · 18 hours ago
It's not that simple, but it would be straightforward to duplicate the outputs of this with a simple LLM + ffmpeg workflow. They did mention a custom model on the landing page, and if they've trained one then you would be spending much more money on each output than they are, because without a fine-tuned model there would be a lot of inference done for QA and refinement of each prompt, clip, and frame.
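For concreteness, the final step of the naive "LLM + ffmpeg" pipeline described above would typically be an ffmpeg invocation muxing rendered frames with TTS narration. A sketch that just builds the command (the file paths are hypothetical):

```python
# Sketch of the ffmpeg step in a naive LLM + ffmpeg pipeline: an LLM emits
# a script plus per-scene frames, and ffmpeg muxes the rendered frames
# with the TTS narration track.
def ffmpeg_mux_cmd(frames_pattern, narration, out, fps=30):
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", frames_pattern,   # e.g. frame_%04d.png from the renderer
        "-i", narration,        # e.g. narration.mp3 from a TTS engine
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # broad player compatibility
        "-shortest",            # stop at the shorter of video/audio
        out,
    ]

print(" ".join(ffmpeg_mux_cmd("frame_%04d.png", "narration.mp3", "out.mp4")))
```

The muxing is the trivial part; as the comment notes, the expensive part is all the inference spent getting frames and narration that are worth muxing.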
MarcelOlsz → Wolf_Larsen · 15 hours ago
"Custom model" usually translates to "deployed an OSS model and tweaked a few things" like 99% of the time.
Lienetic → metalliqaz · 18 hours ago
I'm curious - do you feel differently about some of these coding and coding-adjacent tools out there like Cursor and Lovable?
metalliqaz → Lienetic · 16 hours ago
no, not really. I think they are massively over-valued but in the tech world... what else is new? I view those tools as mostly a convenience. They are integrating things into nice easy packages to use. That's the value.

With this... eh. Most people don't need to make more than one or two explainer videos, so are they going to take on a new monthly fee for that? And then there are power users who do it all the time, but almost surely have their own workflow put together that is customized to exactly what they want.

At any point, one of the big players could introduce this as a feature for their main product.

poly2it · 18 hours ago
The creator tier ($99.99/mo) lists "15 seconds" as a perk. Does this mean the maximum video length is 15 seconds?
bangaladore → poly2it · 18 hours ago
Given that the next tier up is "Create longer/more detailed video (up to 4 min long)", I'd guess you are right.

Seems like this is pretty useless unless you pay $200 per month. That may be a reasonable number for the clearly commercial/enterprise use case, but I'm just not certain what you can do with the lower tiers.

skar02 → poly2it · 17 hours ago
One of the founders here! No, it's not. The max video length is 2 min, which is the case on any non-free tier. We just include a 15-second option for that tier (because people need it for things like FB ads).
poly2it → skar02 · 14 hours ago
Maybe clarify it a bit, e.g. "Short 15-second option".
BugsJustFindMe → skar02 · 14 hours ago
In the post you talk about 5–10 minute explainers.

What does one do if they want to make a 5-10 minute explainer if the maximum length is 2 minutes?

metalliqaz · 18 hours ago
My suggestion would be to rethink the demo videos. I only watched most of the way into the "function pointers in C" example. If I didn't already know C well, I would not be able to follow it. The technical diagrams don't stay on the screen long enough for new learners to process the information. These videos probably look fantastic to the person who wrote the document being summarized, but to a newbie the information is fleeting and hard to follow. The machine doesn't understand that the screen shouldn't be completely wiped all the time as it follows the narrative. Some visuals should stay static for paragraphs, or remain visible while detail is marked up around them. For a true master of the art, see 3blue1brown.
bangaladore → metalliqaz · 18 hours ago
> For a true master of the art, see 3blue1brown.

I agree. Rather than (what I assume is) E2E text -> video/audio output, it seems like training a model on how to utilize the community fork of manim which 3blue1brown uses for videos would produce a better result.

[1] https://github.com/ManimCommunity/manim/

albumen → bangaladore · 16 hours ago
Manim is awesome and I'd love to see that, but it doesn't easily offer the "hand-drawn whiteboard" look they've got currently.
WasimBhai · 18 hours ago
I have 2 credits but it won't let me generate a video. Founders, if you are around, you may want to debug.
skar02 → WasimBhai · 17 hours ago
Huh, that's odd. Could you DM me your email?
skar01 → skar02 · 16 hours ago
Or just email us at founders@golpoai.com
delbronski · 18 hours ago
Wow, I was skeptical at first, but the result was pretty awesome!

Congrats! Cool product.

Feedback: I tried making a product explainer video for a tree planting rover I’m working on. The rover looked different in every scene. I can imagine this kind of consistency may be more difficult to get right. Maybe if I had uploaded a photo of how the rover looks it may have helped. In one scene the rover looks like an actual rover, in the other it looks like a humanoid robot.

But still, super impressed!

skar01 → delbronski · 17 hours ago
Thanks! We are working on the consistency.
KaoruAoiShiho · 17 hours ago
Did NotebookLM just come out with this? Very tough to compete with Google.
empressplay → KaoruAoiShiho · 14 hours ago
Can confirm, it creates slides though, not whiteboard animations. Although the slides are in color and have graphs, clipart, etc. (but they are static and the whiteboard drawing is cooler!)

It created an 8 minute video explaining my Logo-based coding language using 50 sources and it was free.

https://www.youtube.com/watch?v=HZW75burwQc

skar01 → empressplay · 13 hours ago
We have color as well, and support graphs and clipart.
adi4213 · 17 hours ago
This is neat, but I wasn't able to get it to work (the browser app said "server overloaded"). I'd also recommend registering a custom domain in Supabase so the Google SSO shows the golpo domain, which is a small but professional-signaling affordance.
skar01 → adi4213 · 16 hours ago
We will soon! Wanted to get the model working first! Could you try again?
ishita159 · 17 hours ago
Planning to add links as input anytime soon?

I would love to add a link to my product docs, upload some images and have it generate an onboarding video of the platform.

skar02 → ishita159 · 17 hours ago
Yes, very soon. We already support this via API and will add to our platform too!
skar01 → skar02 · 15 hours ago
Our API is currently available to our enterprise customers!
reactordev · 17 hours ago
This is actually pretty amazing. Not only does it work, it’s good. At least from the demo videos. YMMV.

What I always wanted to do was to teach what I know but I lack the time commitment to get it out. This might be a way…

skar01 → reactordev · 17 hours ago
Thank you so much!
CalRobert · 17 hours ago
So it eats concepts and makes videos?

One is reminded of smbc

https://www.seekpng.com/png/detail/213-2132749_gulpo-decal-from-smbc-max-stirner.png

skar02 → CalRobert · 17 hours ago
Haha! The name actually comes from the Bengali word for "story".
ceroxylon · 17 hours ago
The generated graphic in the linked demo for "Training materials that captivate" is a sketch of someone looking forlorn while holding a piece of paper. Is there a way to do in-line edits to the generated result to polish out things like this?
skar01 → ceroxylon · 16 hours ago
We are working on that. There will ultimately be a storyboard feature where you can edit frame by frame!
nextworddev · 17 hours ago
Has anyone tried prompting Veo to create these videos?
skar02 → nextworddev · 17 hours ago
We have! Veo, I believe, can't do more than 8-second videos, and when prompted the results aren't very coherent in our experience.
nextworddev → skar02 · 16 hours ago
oh had no idea. will try your product
OG_BME · 17 hours ago
I created a video on the free tier, the shareable link didn't work (404), I upgraded to be able to download it, and it seems to have disappeared? It says "Still generating" in my Library.

The video UUID starts with "f5fbd6c7", hopefully that's sufficient to identify me!

skar02 → OG_BME · 17 hours ago
Sorry about that! I found your video. Should I link it here or DM it to you (can you DM on Hacker News?)? You could also email me at shreyas2@stanford.edu, and I can send it there.
dang → skar02 · 16 hours ago
(No DMs on HN, at least not yet)
OG_BME → skar02 · 16 hours ago
Just emailed you! Thanks.
Lienetic · 17 hours ago
This is really interesting, definitely going to give it a try! Seems fun but are you seeing people actually needing to make lots of videos like this? What's your vision - how does this become really big?
drawnwren · 17 hours ago
I'm sure someone else has mentioned this but your video on the main page correctly has GRPO the first time it's introduced but then every time you mention it after that -- you've swapped it to GPRO.
tk90 · 17 hours ago
Pretty cool, especially the voice and background music - feels just right.

I asked it about pointers in Rust. The transcript and images were great, very approachable!

"Do not let your computer sleep" -> is this using GPU on my machine or something?

skar01 → tk90 · 16 hours ago
No! We just had that because we had not built the library feature yet, and forgot to remove it. Now you can access your videos through there!
subhro · 17 hours ago
From one Kar to another: "durdanto golpo" (Bengali for "fantastic story"). Congratulations.
skar02 → subhro · 17 hours ago
Thanks!
albumen · 16 hours ago
Love it. The tone is just right. A couple of suggestions:

Have you tried a "filled line" approach, rather than "outlined" strokes? Might feel more like individual marker strokes.

I made a demo video on the free tier and it did a great job explaining acoustic delay lines in an accessible fashion, after feeding it a catalog PDF with an overview of the historical artefact and photography of an example unit. Unfortunately the service invented its own idea of what the artefact looked like. Could you offer a storyboard view and let users erase the incorrect parts and sketch their own shapes? Or split the drawing up into logical elements and the user could redraw them as needed, which would then be reused where that element is used in other frames?

skar01 → albumen · 16 hours ago
Thank you!! We are actually currently working on the storyboarding feature!!
BoorishBears · 15 hours ago
Very cool: what output format is the model producing?

Straight vector paths?

dtran · 14 hours ago
Love this idea! The Whiteboard Gym explainer video seemed really text-heavy (although I did learn enough to guess that that's because text likely beat drawing/adding an image for these abstract concepts for the GRPO agent). I found Shraman's personal story video much more engaging! https://x.com/ShramanKar/status/1955404430943326239

Signed up and waiting on a video :)

Edit: here's a 58s explainer video for the concept of body doubling: https://video.golpoai.com/share/448557cc-cf06-4cad-9fb2-f56bbb21a20b

addandsubtract → dtran · 2 hours ago
The body doubling concept is something I've noticed myself, but never knew there was a term for it. TIL :)
ActVen · 14 hours ago
Popup window with "Load Failed" after it had some progress on the bar past 40% or so. Shows up in the library, but won't play. I just deleted it for now.
skar01 → ActVen · 13 hours ago
Could you try again?
ActVen → skar01 · 13 hours ago
Just tried on Chrome instead of safari and it worked this time. Thanks and congrats on the launch!
skar01 → ActVen · 13 hours ago
Thank you!
meistertigran · 13 hours ago
Can you share the paper mentioned in the demo video?
trenchpilgrim · 13 hours ago
I threw the user docs for my open source project in there and it was... surprisingly not terrible!

Note: Your paywall for downloading the video is easily bypassed by Inspect Element :)

My main concern for you is that y'all will get Sherlocked by OpenAI/Anthropic/Google.

mkagenius → trenchpilgrim · 6 hours ago
Not only the giants. They will face significant threat from open source too[1]. But they just need to carve their own user base and be profitable in that space.

[1] For example, I built http://gitpodcast.com, which can be run for free. It can also be self-hosted using the free tiers of Gemini and Azure Speech.

ayaros · 12 hours ago
In the Khan Academy videos I remember watching, an instructor would actually write on a tablet; you'd see each letter get handwritten one by one, in order. Is there no way to get it to do that? What the AI is doing instead is building up the strokes of every character on the line of text all at once, which looks completely unnatural. The awkwardness is compounded by the fact that the letters are outlined, so it takes even more steps to create them.

In addition, the line-art style of the illustrations looks like that same cartoonish-AI-slop style I see everywhere now. I just can't take it seriously.

If this tool is widely deployed it's just going to get used to spread more misinformation. I'm sure it will be great for bad actors and spammers to have yet another tool in their toolbox to spread whatever weird content or messages they want. But for the rest of us, that means search engines and YouTube and other places will be filled with a million AI-generated half-baked inferior copies of Khan Academy. It's already hard enough to find good educational resources online if you don't know where to look, and this will only make the problem worse.

You'll just have to forgive me if I'm not really excited about this tool.

...also the name is a bit weird. It reminds me of "Gulpo, the fish who eats concepts" from that classic SMBC cartoon. (https://www.smbc-comics.com/comic/2010-12-15)

mandeepj · 11 hours ago
Congrats on the launch!

If I may ask - how do you generate your audio?

raylad · 11 hours ago
Feedback on the text: I find the way the text is generated randomly across the line very distracting, because I (and I think most people) read from left to right. Having letters appear randomly is much more difficult to follow.

Are there options to have the text appear differently?
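One plausible way to address this, sketched here purely as a hypothetical (not how Golpo works internally): keep the stroke animation, but sort text strokes into reading order before playback, bucketing by line and then sweeping left to right:

```python
# Reorder text strokes so letters appear left-to-right, the way a person
# writes, instead of in whatever order the model emitted them.
# Each stroke is (x, y, glyph); the data format is made up for illustration.
def reading_order(strokes, line_height=20):
    # Bucket strokes into lines by y, then sort left to right within a line.
    return sorted(strokes, key=lambda s: (s[1] // line_height, s[0]))

strokes = [(40, 5, "l"), (0, 5, "h"), (20, 5, "e"), (60, 5, "l"), (80, 5, "o")]
print("".join(g for _, _, g in reading_order(strokes)))  # hello
```

Drawings could keep their learned stroke order while text alone gets this post-processing pass.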

dfee → raylad · 10 hours ago
From the video:

> The AI needs to figure out not just what to draw, but precisely when to draw it

;)

sdotdev · 10 hours ago
I'll try the 1 free generation soon, but the way the text appears randomly in that landing-page demo video is really weird. I keep losing track of where I'm reading, too, as the audio sometimes is not perfectly synced. The sync is not that bad, but it could be better.
ks2048 · 9 hours ago
I made it 8 seconds into the "function pointers in C" video and immediately stopped. It went too fast to read the code examples and diagrams (the second "slide" appears for 1 second, and what is that array it's showing?). If you go back and look at the code (a three-line swap function), it's messed up: no opening bracket, and where is the closing bracket? It's "swap first and last", but hard-coded to length-3 arrays?

I'm sure AI could help make good animations like this, but this looks like slop.

personjerry · 9 hours ago
I feel like this is another case of throwing AI in a non-AI-required problem. Khan Academy itself just hired people to make its videos at a very reasonable wage. Why would you need to add AI into the equation? If you wanted to, you could build a platform of basic video / whiteboard content creators at a very reasonable price point.
wordpad → personjerry · 7 hours ago
You can't have arbitrary content with a human in the workflow.
personjerry → wordpad · 7 hours ago
You can absolutely hire a human to make arbitrary content.
atleastoptimal · 8 hours ago
someone needs to do something about the purple darkmode rounded corner tailwind style that has infected all LLMs now.

cool product though!

UltraSane · 7 hours ago
Impressive. Reminds me of Google NotebookLM's AI-generated podcasts of PDFs.
android521 · 6 hours ago
Do you have a developer API that lets developers create explainer videos?
giorgioz · 4 hours ago
I love the concept, but the implementation in the demo doesn't seem good enough to me. I think the black-and-white demo is quite ugly: 1) explainer videos are not in black and white; 2) the images are usually not drawn live; 3) text being drawn on the go is just a fake animation. In reality, most explainer videos show a short, meaningful sentence appearing all at once, so the viewer has more time to read it.

Keep refining the generated demos! Best of luck.

fxwin → giorgioz · 4 hours ago
I'm also not the biggest fan of the white-on-black style, but there is definitely precedent (at least in science-youtube-space) for explainer videos "drawn live" [1-4]

[1] https://www.youtube.com/@Aleph0

[2] https://www.youtube.com/@MinutePhysics

[3] https://www.youtube.com/@12tone

[4] https://www.youtube.com/@SimplilearnOfficial

whitepaint · 3 hours ago
I've tried it and it is really cool. Well done and good luck.
torlok · 3 hours ago
Going by the example videos, this is nothing like what I'd expect a whiteboard video to look like. It fills the slides in erratically, even text; no human does that. It's distracting more than anything. If a human teacher wants to show cause and effect, they'll draw the cause, then an arrow, then the effect, to emphasize what they're saying. Your videos resemble printing more than drawing.
achempion · 2 hours ago
Where can I find what a credit is? It says 150 credits for the Growth plan but doesn't explain how many credits a single video needs.

P.S. The pricing section is unreadable under 840px width.

snowfield · an hour ago
I want to pay 20usd just to troll my friends with explainer videos on why they're shit at video games :D