Timestamps
00:00 - Intro
02:26 - 0.6B Python Testing
04:49 - 0.6B Second Python Test
05:33 - 0.6B HTML Test
06:58 - 0.6B HTML Improvement Test
08:10 - 8B Python Game Test
10:07 - 8B Improved Game Test
11:05 - 8B HTML Test
12:01 - 8B HTML Improvement Test
13:00 - Thoughts
14:04 - Odd Results From Early Release
14:49 - Closing Thoughts
In this video, we take a first look at two newly released models from the highly anticipated Qwen 3 family: the 0.6B model (tested at Q8 quantization) and the 8B variant (tested at Q4_K_M quantization), exploring their capabilities through local testing.
We start by running the 0.6B model through a few quick tasks, including Python game generation and basic HTML script creation. Following that, we switch over to the 8B model and run a similar set of tests, checking for improvements in output quality and reasoning.
This is a simple, hands-on overview of two promising models in the Qwen 3 lineup, with a focus on small-scale local inference performance and early practical observations from this release.
Hugging Face Model Repo: https://huggingface.co/Qwen
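If you want to try the same kind of test yourself, below is a minimal sketch of prompting Qwen3-0.6B for a small Python game. The video's exact tooling and prompts aren't stated in this description; the Q8/Q4_K_M quant names suggest a GGUF runner (llama.cpp, Ollama, LM Studio), while this example simply loads the standard weights from the Hugging Face repo via transformers.

```python
# Minimal sketch: prompting Qwen3-0.6B with Hugging Face transformers.
# NOTE: the video's exact setup isn't given; this loads the full-precision
# repo weights rather than the Q8/Q4_K_M GGUF quants tested in the video,
# and the prompt below is only an example in the spirit of those tests.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"  # swap in "Qwen/Qwen3-8B" for the larger model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Example prompt similar to the video's Python game generation test.
messages = [
    {"role": "user", "content": "Write a simple Snake game in Python using pygame."}
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)

# Print only the newly generated tokens, skipping the prompt.
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
)
```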