Tagged "advanced"
-
Velr: Embedded Property-Graph Database for Local LLM Applications
-
Powerful AI Search Engine Built on Single GeForce RTX 5090
-
Building a Production AI Receptionist: Practical Local LLM Deployment Case Study
-
Rust Project Perspectives on AI
-
Developer Builds Fully Local Multi-Agent System Using vLLM and Parallel Inference
-
Llama 8B Matches 70B Performance on Multi-Hop QA Using Structured Prompting
-
ik_llama.cpp Fork Delivers 26x Faster Prompt Processing on Qwen 3.5 27B
-
The Small Gap That Will Determine Whether AI Agents Become Truly Autonomous
-
Self-Hosted AI Code Review with Local LLMs: Secure Automation Guide
-
Running an AI Agent on a 448KB RAM Microcontroller
-
Qwen 3.5 397B emerges as top-performing local coding model
-
Pydantic-Deep: Production Deep Agents for Pydantic AI
-
MacinAI Local brings functional LLM inference to classic Macintosh hardware
-
Apple M5 Max 128GB real-world performance benchmarks for local inference
-
Ultra-Compact 28M Parameter Models Show Promise for Specialized Domain Tasks
-
Community Converges on Optimal KV Cache Quantization Strategies for Qwen 3.5 Models
-
NVIDIA Nemotron 3 Nano 4B Enables On-Device Inference Directly in Web Browsers via WebGPU
-
LMCache Dramatically Accelerates LLM Inference on Oracle Data Science Platform
-
Cybersecurity Skills for AI Agents – agentskills.io Standard Implementation
-
Cursor's Composer 2 Model Analysis – Fine-Tuned Variant of Kimi K2.5
-
Claude Code Permissions Hook – Delegate Permission Approval to LLM
-
AI's Impact on Mathematics Analogous to the Car's Impact on Cities
-
On-Device AI: Tether's QVAC Fabric Enables Local Training
-
Skills Manager – manage AI agent skills across Claude, Cursor, Copilot
-
Mamba 3: State Space Model Architecture Optimized for Inference
-
Custom GPU Multiplexer Achieves 0.3ms Model Switching on Legacy Hardware
-
Show HN: Process Mining for AI Agent Systems
-
A New Magnetic Material for the AI Era
-
Local Qwen Models Master Browser Automation Through Iterative Replanning
-
How I Used Lima for an AI Coding Agent Sandbox
-
Mistral Releases Leanstral: First Open-Source Code Agent for Lean 4 Proof Assistant
-
Researcher Discovers Universal "Danger Zone" in Transformer Model Architecture at 50% Depth
-
Kimi Introduces Attention Residuals: 1.25x Compute Performance at <2% Overhead
-
KAIST Develops World's First Hyper-Personalized On-Device AI Chip
-
The Moment AI Agents Stopped Being a Feature and Started Becoming a System
-
How AI Agents Should Pay for API Calls: X402 and USDC Verification on Base
-
Qwen 3.5 122B Demonstrates Exceptional Reasoning for Local Deployment
-
Nota Added to Three Technology and Growth ETFs in a Row – Market Recognition for AI Efficiency
-
Qwen3.5-397B Achieves 282 tok/s on 4x RTX PRO 6000 Blackwell Through Custom CUTLASS Kernel
-
I made Karpathy's Autoresearch work on CPU
-
P-EAGLE: Faster LLM Inference with Parallel Speculative Decoding in vLLM
-
Memory Should Decay: Implementing Temporal Memory Decay in Local LLM Systems
-
Local Manga Translator: Production LLM Pipeline with YOLO, OCR, and Inpainting
-
Show HN: Intake API – An Inbox for AI Coding Agents
-
Fine-Tuned 14B Model Outperforms Claude Opus 4.6 on Ada Code Generation
-
AgentArmor: Open-Source 8-Layer Security Framework for AI Agents
-
3-Path Agent Memory: 8 KB Recurrent State vs. 156 MB KV Cache at 10K Tokens
-
Linux 7.0 AMDGPU Fixing Idle Power Issue For RDNA4 GPUs After Compute Workloads
-
Show HN: VmExit – An Experiment in AI-Native Computing
-
Quantization Explained: Q4_K_M vs AWQ vs FP16 for Local LLMs
-
Nvidia Releases Nemotron 3 Super: 120B MoE Model for Local Deployment
-
Comprehensive MoE Backend Benchmarks for Qwen3.5-397B: Real Numbers vs Hype
-
Show HN: Detect When an LLM Silently Changes Behavior for the Same Prompt
-
Ex-Manus Backend Lead Shares: Moving Beyond Function Calling in Agent Design
-
Cutile.jl Brings Nvidia CUDA Tile-Based Programming to Julia
-
Simple Layer Duplication Technique Achieves Top Open LLM Leaderboard Performance
-
A Kubernetes Operator That Orchestrates AI Coding Agents
-
Show HN: Aver – a Language Designed for AI to Write and Humans to Review
-
Researchers Gave AI Agents Real Tools. One Deleted Its Own Mail Server
-
SK Hynix Develops 1c LPDDR6 DRAM to Boost On-Device AI Performance in Mobile Devices
-
Qwen 3.5 Small Expands On-Device AI to Phones and IoT with Offline Support
-
When Running Ollama on Your PC for Local AI, One Thing Matters Most
-
Nota AI to Showcase End-to-End On-Device AI Optimization at Embedded World 2026
-
Gyro-Claw – Secure Execution Runtime for AI Agents
-
Engram – Open-Source Persistent Memory for AI Agents
-
Reverse engineering a DOS game with no source code using Codex 5.4
-
OpenSpec: Spec-driven development (SDD) for AI coding assistants
-
Student Researcher Achieves 42x Model Compression Through Novel Architecture
-
Benchmark: Local Open-Source LLMs Competitive in Real-Time Trading Applications
-
Llama.cpp Prompt Processing Optimization: Ubatch Size Configuration Guide
-
ETH Zurich Research Challenges Context-Length Assumptions in LLM Agents
-
Building PyTorch-Native Support for IBM Spyre Accelerator
-
Mojo: Creating a Programming Language for an AI World with Chris Lattner
-
Show HN: TLDR – Free Chrome Extension for AI-Powered Article Summarization
-
The Emerging Role of SRAM-Centric Chips in AI Inference
-
Real-World Qwen 3.5 9B Agent Performance on M1 Pro Validates Edge Deployment
-
llama.cpp Merges Agentic Loop and MCP Client Support
-
Imrobot – Reverse-CAPTCHA for Verifying AI Agents, Not Humans
-
ConsciOS v1.0: A Viable Systems Architecture for Human and AI Alignment
-
Analysis Reveals Claude Code Sends 62,600 Characters of Tool Definitions Per Turn
-
Unity Showcases Manufacturing AI Workflow at Smart Factory Expo
-
SynthesisOS – A Local-First, Agentic Desktop Layer Built in Rust
-
Qwen 3.5-27B Q4 Quantization Comparison and Analysis
-
Qualcomm Snapdragon Wear Elite Brings On-Device AI to Smartwatches
-
ÆTHERYA Core – Deterministic Policy Engine for Governing LLM Actions
-
Framework Choice Critical: llama.cpp and vLLM Outperform Ollama for Qwen 3.5 Testing
-
Claude Opus 4.6 Solves Problem Posed by Don Knuth
-
Building a Dependency-Free GPT on a Custom OS
-
Running Local AI Models on Mac Studio 128GB: 4B, 20B & 120B Tested
-
Critical: Qwen 3.5 Requires BF16 KV Cache, Not FP16 for Accurate Inference
-
Change Intent Records: The Missing Artifact in AI-Assisted Development
-
Apple Neural Engine Reverse-Engineered for Local Model Training on Mac Mini M4
-
Alibaba's Open-Source CoPaw AI Agent Now Compatible with MCP and ClawHub Skills
-
How to Run High-Performance LLMs Locally on the Arduino UNO Q
-
Switch Qwen 3.5 Thinking Mode On/Off Without Model Reload Using setParamsByID
-
Google Research Finds Longer Chain-of-Thought Correlates Negatively With Accuracy
-
Bare-Metal LLM Inference: UEFI Application Boots Directly Into LLM Chat
-
Qwen3.5-35B Unsloth Dynamic GGUFs Achieve SOTA Across Nearly All Quantisation Levels
-
Qwen3.5-35B RTX 5080 Experiments Confirm KV q8_0 as Free Lunch, Q4_K_M Remains Optimal
-
We Audited the Security of 7 Open-Source AI Agents – Here Is What We Found
-
Meta Reveals AI-Packed Smartwatch for 2026 – Why Wearables Are Shifting Now
-
Krasis: Hybrid CPU/GPU MoE Runtime Achieves 3,324 Tokens/Second Prefill on RTX 5080
-
Snapdragon 8 Elite Gen 5 for Galaxy Official: 5 Key Improvements that Push the Boundaries
-
Extracting 100K Concepts from an 8B LLM
-
Show HN: AgentGate – Stake-Gated Action Microservice for AI Agents
-
Qwen 3.5 MoE Delivers 100K Context Window at 40+ TPS on RTX 5060 Ti
-
Qwen 3.5 Underperforms on Hard Coding Tasks – APEX Benchmark Analysis
-
Every agent framework has the same bug – prompt decay. Here's a fix
-
Building a Privacy-Preserving RAG System in the Browser
-
Researchers Develop Persistent Memory System for Local LLMs – No RAG Required
-
DeepSeek Releases DualPath: Addressing Storage Bandwidth Bottlenecks in Agentic Inference
-
Agent System – 7 specialized AI agents that plan, build, verify, and ship code
-
Show HN: MCP-Enabled File Storage for AI Agents, Auth via Ethereum Wallet
-
Advanced Quantization Techniques Show Surprising Performance Gains Over Standard Methods
-
What Breaks When AI Agent Frameworks Are Forced Into <1MB RAM and Sub-ms Startup
-
Show HN: A Ground Up TLS 1.3 Client Written in C
-
Enterprise Infrastructure Guide: Running Local LLMs for 70-150 Developers
-
Show HN: Agora – AI API Pricing Oracle with X402 Micropayments
-
Making Wolfram Technology Available as Foundation Tool for LLM Systems
-
Wave Field LLM Achieves O(n log n) Scaling: 825M Model Trained to 1B Parameters in 13 Hours
-
Breaking the Speed Limit: Strategies for 17k Tokens/Sec Local Inference
-
Qwen3 Demonstrates Advanced Voice Cloning via Embeddings
-
Custom Portable Workstation Optimized for Local AI Inference Builds
-
Open-Source Framework Achieves Gemini 3 Deep Think Level Performance Through Local Model Scaffolding
-
Nvidia Could Launch Its First Laptops With Its Own Processors
-
Massu: Governance Layer for AI Coding Assistants with 51 MCP Tools
-
FORTHought: Self-Hosted AI Stack for Physics Labs Built on OpenWebUI
-
The Complete Stack for Local Autonomous Agents: From GGML to Orchestration
-
AI-Powered Reverse-Engineering of Rosetta 2 for Linux
-
AI Is Stress Testing Processor Architectures and RISC-V Fits the Moment
-
O-TITANS: Orthogonal LoRA Framework for Gemma 3 with Google TITANS Memory Architecture
-
Google Open-Sources NPU IP, Synaptics Implements It for Hardware Acceleration
-
CPU-Trained Language Model Outperforms GPU Baseline After 40 Hours
-
AI PCs Explained: 7 Critical Truths About NPUs and Privacy
-
Taalas Etches AI Models onto Transistors to Rocket Boost Inference
-
Search and Analyze Documents from the DOJ Epstein Files Release with Local LLM
-
Qwen3 Coder Next Remains Effective at Aggressive Quantization Levels
-
[Release] Ouro-2.6B-Thinking: ByteDance's Recurrent Model Now Runnable Locally
-
24 Simultaneous Claude Code Agents on Local Hardware
-
TemplateFlow – Build AI Workflows, Not Prompts
-
Qwen3 Coder Next 8FP Demonstrates Exceptional Long-Context Performance on 128GB System
-
The Path to Ubiquitous AI (17k tokens/sec)
-
Why AI Models Fail at Iterative Reasoning and What Could Fix It
-
Free ASIC-Accelerated Llama 3.1 8B Inference at 16,000 Tokens/Second
-
Show HN: Forked – A Local Time-Travel Debugger for OpenClaw Agents
-
Sarvam Brings AI to Feature Phones, Cars, and Smart Glasses
-
Running Local LLMs and VLMs on Arduino UNO Q with yzma
-
Complete Offline AI System: Voice Control and Smart Home via Local LLM and Radio Without Internet
-
LayerScale Launches Inference Engine Faster Than vLLM, SGLang, and TRT-LLM
-
Hardware Economics Shift: DDR5 RDIMM Pricing Now Comparable to GPUs for Local Inference
-
Aegis.rs: Open Source Rust-Based LLM Security Proxy Released
-
Show HN: Shiro.computer Static Page, Unix/NPM Shimmed to Host Claude Code
-
Alibaba's Qwen3.5-397B Achieves #3 Position in Open Weights Model Rankings
-
Qualcomm Ventures Positions India as Blueprint for Affordable On-Device AI Infrastructure
-
Same INT8 Model Shows 93% to 71% Accuracy Variance Across Snapdragon Chipsets
-
GLM-5 Technical Report: DSA Innovation Reduces Training and Inference Costs
-
Matmul-Free Language Model Trained on CPU in 1.2 Hours
-
Cloudflare Releases Agents SDK v0.5.0 with Rust-Powered Infire Engine for Edge Inference
-
Ask HN: How Do You Debug Multi-Step AI Workflows When the Output Is Wrong?
-
Qwen3-Next 80B MoE Achieves 39 Tokens/Second on RTX 5070/5060 Ti Dual-GPU Setup
-
Qwen 3.5-397B-A17B Now Available for Local Inference with Aggressive Quantisation
-
Show HN: PgCortex – AI enrichment per Postgres row, zero transaction blocking
-
I attacked my own LangGraph agent system. All 6 attacks worked
-
Show HN: Inkog – Pre-flight check for AI agents (governance, loops, injection)
-
High Bandwidth Flash Memory Could Alleviate VRAM Constraints in Local LLM Inference
-
I broke into my own AI system in 10 minutes. I built it
-
Critical vLLM RCE Vulnerability Allows Remote Code Execution via Video Links
-
Scaling llama.cpp On Neoverse N2: Solving Cross-NUMA Performance Issues
-
NVIDIA's Dynamic Memory Sparsification Cuts LLM Inference Costs by 8x
-
MiniMax Releases M2.5 Model with SOTA Coding and Agent Capabilities
-
MiniMax-M2.5 230B MoE Model Released with GGUF Support for Local Deployment
-
LLM APIs Reconceptualized as State Synchronization Challenge
-
LLaDA2.1 Introduces Token Editing for Massive Speed Gains in Local Inference
-
GPT-OSS 120B Uncensored Model Released in Native MXFP4 Precision
-
Context Management Identified as Real Bottleneck in AI-Assisted Coding
-
ByteDance Releases Seed2.0 LLM with Complex Real-World Task Improvements
-
First Vibecoded AI Operating System for Local Deployment
-
Switching From Ollama and LM Studio to llama.cpp: Performance Benefits
-
Ring-1T-2.5 Released with SOTA Deep Thinking Performance
-
MiniMax M2.5: 230B Parameter MoE Model Coming to HuggingFace
-
Ming-flash-omni-2.0: 100B MoE Omni-Modal Model Released
-
Student Releases Dhi-5B: Multimodal Model Trained for Just $1,200
-
The Future of AI Slop Is Constraints – Implications for Local Models
-
Samsung's REAM: Alternative Model Compression Technique
-
Heaps Do Lie: Debugging a Memory Leak in vLLM
-
New Header-Only C++ Benchmark Tool for Predictive Models on Raw Binary Streams
-
GLM-5 Released: 744B Parameter MoE Model Targeting Complex Tasks
-
Use Recursive Language Models to address huge contexts for local LLM
-
Mistral AI Debugs Critical Memory Leak in vLLM Inference Engine
-
175,000 Publicly Exposed Ollama Servers Create Major Security Risk
-
Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts
-
Building a RAG Pipeline on 2M+ Pages: EpsteinFiles-RAG Project
-
Energy-Based Models Compared Against Frontier AI for Sudoku Solving
-
DeepSeek Launches Model Update with 1M Context Window
-
Carmack Proposes Using Long Fiber Lines as L2 Cache for Streaming AI Data
-
Anthropic Releases Claude Opus 4.6 Sabotage Risk Assessment
-
Community Member Builds 144GB VRAM Local LLM Powerhouse