Hugging Face AI Updates: April 23, 2026
1. Gemma 4 VLA Demo Runs Entirely on an 8GB Jetson Orin Nano Super
Hugging Face. A joint Hugging Face and NVIDIA post demonstrates Gemma 4 as a Vision Language Agent running locally on a Jetson Orin Nano Super (8GB). The pipeline is voice-first: Parakeet STT transcribes microphone input, Gemma 4 decides whether to invoke the attached webcam, and Kokoro TTS voices the response, with no keyword triggers or hardcoded branching. The model runs via llama.cpp with GGUF quantization. The demo matters because it pushes a real VLA loop onto a $500-class developer board without cloud offload, which is the price point and hardware tier most edge-robotics teams can actually deploy. Source
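The control flow described above — transcribe speech, let the model itself decide whether to consult the camera, then voice the answer — can be sketched as a minimal turn loop. This is a hypothetical illustration, not the demo's actual code: `transcribe`, `capture_frame`, and `speak` are stand-ins for Parakeet STT, the webcam, and Kokoro TTS, and the `llm` callable stands in for Gemma 4 served through llama.cpp (e.g. via llama-cpp-python with a GGUF checkpoint). The JSON tool-call convention is an assumption chosen to show how the model, rather than a keyword trigger, selects the tool.

```python
import json

TOOL_PROMPT = (
    "You may call one tool. Reply with JSON: "
    '{"tool": "webcam"} to look through the camera, or {"tool": "none"}.'
)

def parse_tool_call(reply: str) -> str:
    """Extract the model's tool choice from its JSON reply; default to 'none'."""
    try:
        return json.loads(reply).get("tool", "none")
    except (json.JSONDecodeError, AttributeError):
        return "none"

def run_turn(audio, llm, transcribe, capture_frame, speak):
    """One voice-first turn: STT -> model-driven tool choice -> answer -> TTS."""
    transcript = transcribe(audio)                        # Parakeet STT stand-in
    reply = llm(f"{TOOL_PROMPT}\nUser: {transcript}")     # model decides, no keywords
    image = capture_frame() if parse_tool_call(reply) == "webcam" else None
    answer = llm(f"User: {transcript}\nImage: {image}")   # second pass, optionally grounded
    speak(answer)                                         # Kokoro TTS stand-in
    return answer
```

The key design point the demo makes is that the branch on `capture_frame` is driven by the model's own output, so the same loop handles "what time is it" and "what am I holding" without hardcoded routing.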
2. Hugging Face Ships ml-intern, an Open-Source Agent That Runs the Post-Training Loop
Hugging Face. Hugging Face released ml-intern, an open-source agent built on its smolagents framework that autonomously runs the research loop for LLM post-training — literature review, dataset discovery, training script execution, and iterative evaluation. In a reported benchmark run, ml-intern raised a Qwen3-1.7B base model’s GPQA scientific-reasoning score from 8.5% to 32% in under 10 hours, outperforming Claude Code’s reported 22.99%. The agent natively uses Hugging Face Jobs for compute and Trackio for experiment tracking, and can generate synthetic data and implement techniques like GRPO without human scripting. Source
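The iterate-train-evaluate cycle ml-intern automates can be sketched as a simple experiment loop. This is a hypothetical skeleton, not ml-intern's actual interface: `run_experiment` stands in for one agent iteration (e.g. a GRPO run with freshly generated synthetic data, launched on Hugging Face Jobs and logged to Trackio), and the loop just keeps the best evaluation score across iterations.

```python
def post_training_loop(base_score, run_experiment, max_iters=5):
    """Iterate experiments, tracking the best eval score seen so far.

    base_score: the untrained baseline (e.g. 8.5 on GPQA in the reported run).
    run_experiment: callable taking the iteration index, returning an eval score.
    """
    best = base_score
    history = [base_score]
    for step in range(max_iters):
        score = run_experiment(step)   # one train-and-evaluate cycle
        history.append(score)
        if score > best:
            best = score               # keep only checkpoints that improve
    return best, history
```

The point of the agentic framing is that choosing what to try at each `step` — which dataset, which technique, which hyperparameters — is delegated to the model rather than scripted by a human.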