Daily News

AI Roundup: April 1, 2026

1. Alibaba Releases Qwen3.5-Omni, a Full Omnimodal LLM

Alibaba’s Qwen team released Qwen3.5-Omni, a natively multimodal model that processes text, images, audio, and video in a single architecture and outputs both text and real-time speech. The model supports speech recognition in 113 languages and dialects and speech generation in 36 languages, handles over 10 hours of audio input, and outperforms Gemini 3.1 Pro on general audio understanding and reasoning benchmarks. Qwen3.5-Omni is open-weight and available on Hugging Face. Source

2. California Governor Signs First-of-Its-Kind AI Procurement Executive Order

Governor Newsom signed Executive Order N-5-26, directing California state agencies to establish governance standards for AI procurement and deployment within 120 days. The order requires companies seeking state contracts to attest to and explain their safeguards against misuse, including preventing distribution of illegal content such as CSAM. It also directs the California Department of Technology to create the nation’s first watermarking recommendations for AI-generated images and manipulated video. The order positions California as a counterweight to the Trump administration’s AI deregulation stance. Source

3. Runway Launches $10M Fund and Builders Program for AI Startups

Runway announced a $10 million venture fund to invest in early-stage companies building across AI, media, and world simulation, alongside a Builders program offering free API credits to startups from seed to Series C. The fund will write checks of up to $500,000 for pre-seed and seed-stage companies, and the Builders program provides 500,000 API credits plus access to Characters, Runway’s real-time video agent API. The founding cohort includes Cartesia, MSCHF, Oasys Health, and Supersonik. Source

4. Cohere and Ensemble Partner to Build First RCM-Native Healthcare LLM

Cohere and Ensemble announced an expanded partnership to build the healthcare industry’s first revenue cycle management (RCM)-native large language model. Unlike competitors that wrap prompts around general-purpose models, the companies are training a fully custom model from the ground up on real RCM tasks and documented procedures, using synthetic datasets in a HIPAA-compliant environment with zero identifiable patient data. The model is planned for release in the second half of 2026. Source

5. Nothing Technology Plans AI Glasses for 2027

Nothing Technology is developing AI-enhanced smart glasses planned for the first half of 2027, Bloomberg reported. The glasses will feature cameras, microphones, and speakers but no built-in display, relying on smartphones and the cloud for AI processing. CEO Carl Pei, initially resistant to launching glasses, has shifted to a multidevice strategy. Nothing is also developing AI-focused earbuds due later this year. Source

6. Apple Escalates Crackdown on Vibe-Coding Apps

Apple removed the vibe-coding app “Anything” from the App Store, citing Guideline 2.5.2, which bars apps from downloading or executing code that modifies their features after review. The developer’s attempted workaround, redirecting app previews to an external browser, was also rejected. Apple began blocking vibe-coding app updates in mid-March, affecting platforms such as Replit and Vibecode, though it maintains its actions target specific guideline violations rather than vibe-coding as a concept. Source

7. Speechify Launches Native Windows App with Local AI Models

Speechify released a native Windows application that runs AI models locally for transcription and dictation across applications. The shift from cloud-dependent processing to on-device AI reduces latency and improves privacy for the text-to-speech platform’s Windows users. Source