Another experimental project: Appgen lets you create small applications (so-called micro-apps) from text descriptions. It is integrated with the Groq platform (not to be confused with Elon Musk's Grok).
Groq's LPU™ Inference Engine is a hardware and software platform that delivers very high-speed inference with strong quality and energy efficiency. It serves medium-sized open-source models such as llama3, gemma2, deepseek-r1-distill, mixtral, and qwen.
The context windows are typically small, and on the free tier the limit is 6k tokens per minute, so it really can't handle large codebases.
The code is open source: https://github.com/groq/groq-appgen - you can run it on your own machine after generating an API key in the developer console.
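That same key also works against the Groq API directly. A minimal sketch using the official groq Python SDK (the model ID and prompt below are only illustrative placeholders):

```python
import os

from groq import Groq  # pip install groq

# The API key comes from the Groq developer console.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    # Model ID as mentioned in this post; Groq rotates its model list over time.
    model="qwen-2.5-coder-32b",
    messages=[
        {"role": "user", "content": "Generate a single-file HTML to-do list app."},
    ],
    # Keep responses short to stay under the free-tier 6k tokens/minute limit.
    max_tokens=1024,
)
print(completion.choices[0].message.content)
```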
A very interesting feature of Appgen is the pencil-shaped button, which lets you sketch with the mouse how the interface should look.
https://appgen.groqlabs.com/gallery
If you like, you can publish your "creation" to the gallery or browse what others have made.
The default model is currently qwen-2.5-coder-32b, but you can switch to deepseek-r1-distill-llama-70b-specdec. Overall, I don't see any value in this other than the ability to compare the approaches to code generation in the available models.
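If you do want to run that comparison outside the UI, a rough sketch along these lines would do it, assuming the same two model IDs are still exposed through the API (they may be renamed or retired over time):

```python
import os

from groq import Groq  # pip install groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

PROMPT = "Write a JavaScript function that debounces another function."

# The two model IDs mentioned above; check the Groq console for the current list.
for model in ("qwen-2.5-coder-32b", "deepseek-r1-distill-llama-70b-specdec"):
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=1024,
    )
    print(f"--- {model} ---")
    print(completion.choices[0].message.content)
```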
#groq