CodeWithLLM-Updates
-

https://youtu.be/eR855VNPjhk

On Groq's official YouTube channel there is a demo of the iter project running the mixtral-8x7b-32768 model in a terminal under nix-shell. I have not tested it: the author's approach is to print everything and keep minimal control over generation.

At 1:53 there is a cool feature, the reflexion command: when invoked, the model receives 6 requests in a row with an instruction to think the problem over.
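I could not verify how iter implements this internally, but a multi-round self-reflection loop over a chat API can be sketched roughly like this (the function names and prompt wording below are my assumptions, not taken from the video; Groq's endpoint is OpenAI-compatible, so `ask` would wrap a chat-completions call):

```python
# Sketch of a "reflexion"-style loop: query the model several times,
# each round asking it to critique and improve its previous answer.
# Prompts and structure are assumptions, not iter's actual code.

def reflexion_rounds(problem: str, ask, n_rounds: int = 6) -> str:
    """Run n_rounds of self-reflection; `ask(messages)` is any
    chat-completion callable (e.g. a Groq/OpenAI client wrapper)."""
    messages = [
        {"role": "system",
         "content": "Think carefully about the problem before answering."},
        {"role": "user", "content": problem},
    ]
    answer = ask(messages)
    for _ in range(n_rounds - 1):
        # Feed the previous answer back and ask for a critique + revision.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user",
                         "content": "Reflect on your previous answer: "
                                    "find mistakes and give an improved answer."})
        answer = ask(messages)
    return answer

if __name__ == "__main__":
    # With the real API, `ask` would POST to
    # https://api.groq.com/openai/v1/chat/completions
    # with model="mixtral-8x7b-32768". Here a fake backend for illustration:
    calls = []
    def fake_ask(msgs):
        calls.append(len(msgs))
        return f"draft {len(calls)}"
    print(reflexion_rounds("2+2?", fake_ask))  # prints the 6th draft
```

The growing `messages` list means each round sees the full history, which is where the 32k context window of mixtral-8x7b-32768 helps.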

The video is from March 8, and now llama3-70b-8192 is also available, though that means an 8k context window versus mixtral's 32k.

For VSCode, I found Groqopilot v0.0.81 in the extensions catalog, but at the moment it is more broken than working.