Tagged "privacy-preserving-ai"
- Google's Gemma 4 Could Put Powerful AI on Your Phone and Laptop
- Run a Local LLM Server on Raspberry Pi with Remote Access Capabilities
- Google's Gemma 4 Brings Powerful On-Device AI to Phones and Laptops
- Seed3D 2.0
- Local LLM for Private Companies
- Llama 4 Scout on MLX: The Complete Apple Silicon Guide (2026)
- Cursor-Autoresearch: AI Research Automation Port for Local Workflows
- Waterloo's Live AI-Goose Tracker: Real-Time Edge Vision
- Memjar: Uncompromising Local-First Second Brain
- I Connected My Local LLM to My Browser and It Changed How I Automated Tasks
- Building a Voice AI Wearable in a Casio F91W with Whisper and BLE
- Noi Enables Running ChatGPT and Claude Side-by-Side on Your Desktop
- Running Gemma 4 on an iPhone 13 Pro
- DotLLM – Building an LLM Inference Engine in C#
- Ubiquiti UniFi G6 Turret 4K Camera Features On-Device AI Processing at $199 Price Point
- Qwen 3.5 Small – On-Device Multimodal Models Released
- Local LLM Connected to Home Assistant via MCP Now Enables Autonomous Smart Home Management
- Qwen3 Audio and Vision Support Now Available in llama.cpp
- Google's Gemini Nano 4 Offers Faster, Smarter Local Inference Capabilities
- ASUS ExpertBook P1 Integrates On-Device AI for Enterprise Collaboration
- CarryAI's Serverless Vision-Language Models Enable On-Device Multimodal AI
- Running a 1.7B-Parameter LLM on an Apple Watch
- Google AI Edge Gallery Showcases Offline Inference with Gemma 4
- Google Launches Offline AI Dictation App for iOS with Gemma
- Google AI Edge Gallery Tops App Store Charts with On-Device Gemma 4
- Real-time Multimodal AI on Apple Silicon: Gemma E2B Demo Shows Practical Edge Deployment
- Apple Brings Enhanced On-Device AI Features to iPhone
- Show HN: Turn Photos Into Wordle Puzzles with AI That Runs 100% in Your Browser
- Google Previews Gemini Nano 4 for Android AICore with On-Device Capabilities
- Nex Life Logger: Local Activity Tracker with AI Agent Integration
- Google Launches Gemma 4 Open Models for Local On-Device AI
- How to Integrate VS Code with Ollama for Local AI Assistance
- Men Are Ditching TV for YouTube as AI Usage and Social Media Fatigue Grow
- Samsung Galaxy Book6 Series Brings Intel Core Ultra Chips for On-Device LLM Inference
- Qwen3 512k Context via TurboQuant on Mac mini
- Apple Gets Full Gemini Access and Uses Distillation to Build Lightweight On-Device AI
- Samsung Galaxy A37 and A57 5G Launch with On-Device AI Capabilities in India
- RF-DETR Nano and YOLO26 Enable On-Device Object Detection on Smartphones