Timestamps:
00:00 - Intro
01:30 - Local Settings
03:27 - First HTML Test
06:11 - Python Game Test
08:47 - No Thinking Mode
10:18 - JS Game Test
13:11 - JS Pong Game Test
15:55 - Second HTML Test
18:50 - VC Pitch Test
25:50 - Impressions & Closing Thoughts
In this video, we take an in-depth look at the newly released Qwen3-235B-A22B, the largest model in the Qwen3 lineup. Touted as matching the performance of top-tier models like DeepSeek R1, OpenAI o1/o3-mini, Grok, and Gemini 2.5, this Mixture-of-Experts model delivers strong results, especially considering its size and its accessibility in quantized form.
We go over the recommended sampling parameters for both “thinking” and “non-thinking” modes, and how to switch between them. From there, we test the model across a range of tasks including HTML generation, Python game development, JavaScript-based interactive outputs, and a JS pong game.
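For reference, here is a minimal sketch of that setup using the Transformers library. The enable_thinking flag and the sampling values (temperature 0.6 / top_p 0.95 / top_k 20 for thinking mode, temperature 0.7 / top_p 0.8 for non-thinking mode) reflect the recommendations on the Qwen3 model card as we understand them at recording time; double-check the Hugging Face repo below before relying on them.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap in a smaller Qwen3 checkpoint (or a quantized build) for local testing.
model_name = "Qwen/Qwen3-235B-A22B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a small Pong game in JavaScript."}]

# enable_thinking toggles the model's reasoning mode via the chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for non-thinking mode
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Thinking mode: temperature 0.6, top_p 0.95, top_k 20.
# Non-thinking mode: temperature 0.7, top_p 0.8, top_k 20.
outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```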
We also walk through a VC pitch prompt and share general impressions of how this model feels in terms of reasoning, creativity, and usability when run locally.
Hugging Face Repo: https://huggingface.co/Qwen