Show HN: An AI that reliably builds full-stack apps by preventing LLM mistakes

lovable.dev

23 points by antonoo 7 hours ago

Hey HN! Former CERN physicist turned hacker here.

We've developed a way to make AI coding actually work by systematically identifying and fixing places where LLMs typically fail in full-stack development. Today we're launching as Lovable (previously gptengineer.app) since it's such a big change.

The problem? AI that writes code typically makes small mistakes and then gets stuck. Anyone who has tried it knows the frustration. We fixed most of this by mapping out where LLMs fail in full-stack dev and engineering around those pitfalls with prompt chains.
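
To make that concrete: a deliberately simplified sketch of one of these chains in TypeScript (not our production code; the prompts and helper names are placeholders). The point is that each step is small and checkable, so a mistake in one step gets caught and repaired instead of derailing the whole generation.

    // Simplified prompt-chain sketch: plan -> generate -> validate -> targeted fix.
    type LLM = (system: string, user: string) => Promise<string>;

    interface GeneratedFile { path: string; content: string }

    async function generateApp(llm: LLM, request: string): Promise<GeneratedFile[]> {
      // Step 1: plan the file structure.
      const plan = await llm(
        "You are planning a React + TypeScript app. List the files needed, one per line.",
        request,
      );

      // Step 2: generate every file in one shot, given the plan.
      const draft = await llm(
        "Write the full contents of every file in the plan. Use `// FILE: <path>` headers.",
        `Request:\n${request}\n\nPlan:\n${plan}`,
      );
      let files = parseFiles(draft);

      // Step 3: check for known failure modes (missing imports, empty files,
      // broken routes, ...) and run a narrow fix-up pass only if something is off.
      const issues = findIssues(files);
      if (issues.length > 0) {
        const fixes = await llm(
          "Fix ONLY the listed issues. Return the affected files in full, same format.",
          `Issues:\n${issues.join("\n")}\n\nFiles:\n${draft}`,
        );
        files = mergeByPath(files, parseFiles(fixes));
      }
      return files;
    }

    // Minimal stand-ins so the sketch is self-contained.
    function parseFiles(text: string): GeneratedFile[] {
      return text.split(/^\/\/ FILE: /m).slice(1).map((block) => {
        const [path, ...body] = block.split("\n");
        return { path: path.trim(), content: body.join("\n") };
      });
    }

    function findIssues(files: GeneratedFile[]): string[] {
      // A real version would run a type-checker / linter here.
      return files.filter((f) => f.content.trim() === "").map((f) => `${f.path} is empty`);
    }

    function mergeByPath(base: GeneratedFile[], updates: GeneratedFile[]): GeneratedFile[] {
      const byPath = new Map<string, GeneratedFile>();
      for (const f of [...base, ...updates]) byPath.set(f.path, f);
      return [...byPath.values()];
    }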

Thanks to this, in every comparison I've found against v0, Replit, Bolt, etc., we actually come out ahead, often by a wide margin.

What we have been working on since my last post (https://news.ycombinator.com/item?id=41380814):

> Handling larger codebases. We actually found that using small LLMs works much better than traditional RAG for this (rough sketch of the idea after this list).

> Infra work to enable instant preview (it spins up dev environments quickly thanks to microVMs and idle pools of machines; toy sketch below)

> A native integration with Supabase. This enables users to build full-stack apps (complete with auth, db, storage, edge functions) without leaving our editor. Small example below.
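
On the first point: a deliberately simplified sketch of the general idea (not our actual pipeline; every name is a placeholder). Instead of embedding chunks and doing vector search, a small, fast model reads the file tree plus one-line summaries and names the few files worth putting into the big model's context.

    // Sketch: let a small, fast LLM act as the retriever instead of vector search.
    type SmallLLM = (system: string, user: string) => Promise<string>;

    interface RepoFile { path: string; summary: string; content: string }

    async function selectContext(
      smallModel: SmallLLM,
      files: RepoFile[],
      request: string,
      maxFiles = 8,
    ): Promise<RepoFile[]> {
      // A cheap "index": just paths and one-line summaries, small enough to fit
      // comfortably in a small model's context window.
      const index = files.map((f) => `${f.path}: ${f.summary}`).join("\n");

      const answer = await smallModel(
        `You select source files relevant to a change request. ` +
          `Reply with at most ${maxFiles} paths, one per line, nothing else.`,
        `Request:\n${request}\n\nFiles:\n${index}`,
      );

      const chosen = new Set(answer.split("\n").map((l) => l.trim()).filter(Boolean));
      return files.filter((f) => chosen.has(f.path));
    }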
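
On instant preview: the trick is mostly paying the boot cost ahead of time. A toy sketch of an idle pool, where bootMicroVM stands in for whatever the real VM layer exposes:

    interface Sandbox { id: string; url: string }

    // Stand-in for the real VM layer: pretend a cold boot takes a couple of seconds.
    async function bootMicroVM(): Promise<Sandbox> {
      await new Promise((resolve) => setTimeout(resolve, 2000));
      const id = Math.random().toString(36).slice(2);
      return { id, url: `https://${id}.preview.example.dev` };
    }

    class IdlePool {
      private idle: Sandbox[] = [];

      constructor(private targetSize: number) {}

      // Keep targetSize machines booted and waiting.
      async warmUp(): Promise<void> {
        while (this.idle.length < this.targetSize) {
          this.idle.push(await bootMicroVM());
        }
      }

      // Instant when a warm machine is available; cold boot as a fallback.
      async acquire(): Promise<Sandbox> {
        const ready = this.idle.pop();
        void this.warmUp(); // refill in the background for the next user
        return ready ?? bootMicroVM();
      }
    }

The "instant" part is simply that acquire() usually hands back a machine that was booted before anyone asked for it.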
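
And on Supabase: here's a hand-written illustration of the kind of full-stack plumbing this covers, using the standard supabase-js client (the table, bucket and function names are made up, not something the editor requires):

    // Example of the auth / db / storage / edge-function plumbing on Supabase.
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient(
      "https://YOUR-PROJECT.supabase.co", // project URL
      "YOUR-ANON-KEY",                    // public anon key
    );

    export async function signIn(email: string, password: string) {
      const { data, error } = await supabase.auth.signInWithPassword({ email, password });
      if (error) throw error;
      return data.user;
    }

    export async function addTodo(title: string) {
      const { data, error } = await supabase.from("todos").insert({ title }).select();
      if (error) throw error;
      return data;
    }

    export async function uploadAvatar(userId: string, file: Blob) {
      const { error } = await supabase.storage.from("avatars").upload(`${userId}.png`, file);
      if (error) throw error;
    }

    export async function sendWelcomeEmail(email: string) {
      // Invokes a deployed edge function by name.
      const { error } = await supabase.functions.invoke("send-welcome-email", { body: { email } });
      if (error) throw error;
    }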

An interesting example project:

https://likeable.lovable.app – a clone of our product, built with our AI. Looks like a perfect copy and works (click "edit with lovable" to get to a recursive editor...)

Going forward, we're shipping improvements weekly, focusing on making it faster and even more reliable, and adding a visual editing experience similar to Figma.

If you want to try it, there's a completely free tier for now at lovable.dev.

Would love your thoughts on where this could go and what you'd want to build with it. And what it means for the future of software engineering...

GameBuddyCedric 6 hours ago

Already love what it can do! Impressive that nearly every time I get a fully running response in the form of a web app.

Can you share more about your prompt chain approach?

- Do you map prompts to specific developer roles (e.g., a manager prompt that says what each component should do, and then another prompt with less context that focuses on the implementation of that specific file)?

- Or do you generate all files at once?

Asking because I prompted with a lot of long initial messages: have you also explored an iterative refinement process where the AI revisits the initial message to nail down details more accurately?

  • vikeri 6 hours ago

    Great questions! There are several LLM calls behind a single prompt that you submit, and we dynamically assemble the prompt to the AI based on what you're asking for. But we do generate most files at once.
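
    A heavily simplified sketch of what that assembly step can look like (not our exact code; the blocks and conditions are made up for illustration):

        // Illustrative only: pick context blocks based on the request, then build one prompt.
        interface ContextBlock {
          name: string;
          text: string;
          include: (request: string) => boolean;
        }

        const blocks: ContextBlock[] = [
          { name: "ui-guidelines", text: "...design system rules...", include: () => true },
          { name: "auth-notes", text: "...how auth is wired up...", include: (r) => /auth|login|sign/i.test(r) },
          { name: "db-schema", text: "...current tables...", include: (r) => /table|database|save|store/i.test(r) },
        ];

        function assemblePrompt(request: string, relevantFiles: string[]): string {
          const context = blocks
            .filter((b) => b.include(request))
            .map((b) => `## ${b.name}\n${b.text}`);
          return [
            "You are editing a React + TypeScript app. Return every changed file in full.",
            ...context,
            `## Relevant files\n${relevantFiles.join("\n\n")}`,
            `## Request\n${request}`,
          ].join("\n\n");
        }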

antonoo 7 hours ago

For those who always ask... the name (Lovable) refers to the software the AI creates.

Me + the team will be here answering any questions about the product in the coming hours!

sebastiansc 7 hours ago

Founding a company without this and spending weeks building a prototype to test traction is just a waste of time now

  • antonoo 7 hours ago

    Well yes, if it's a web app I agree.

    We don't do native mobile apps, complex data processing, etc. But it's quite straightforward to connect the app to existing backends deployed anywhere.

s-mon 3 hours ago

Man this is so cool!

  • viborcip 2 hours ago

    Yeah it's pretty awesome. What do you like the most about it?