Tagged "fine-tuning"
- Show HN: 100% LLM Accuracy – No Fine-Tuning, JSON Only
- Anthropic Has Never Open-Sourced an LLM: Implications for Local Deployment Strategy
- Comparing Manual vs. AI Requirements Gathering: 2 Sentences vs. 127-Point Spec
- Wave Field LLM Achieves O(n log n) Scaling: 825M Model Trained to 1B Parameters in 13 Hours
- nanollama: Open-Source Framework for Training Llama 3 from Scratch with One-Command GGUF Export
- Local GPT-OSS 20B Model Demonstrates Practical Agentic Capabilities
- O-TITANS: Orthogonal LoRA Framework for Gemma 3 with Google TITANS Memory Architecture
- CPU-Trained Language Model Outperforms GPU Baseline After 40 Hours
- Matmul-Free Language Model Trained on CPU in 1.2 Hours
- Can We Leverage AI/LLMs for Self-Learning?
- Cohere Releases Tiny Aya: Efficient 3.3B Multilingual Model for 70+ Languages
- GPU-Accelerated DataFrame Library for Local Inference Workloads