LLM crawlers continue to DDoS SourceHut

Smaller but Better: Unifying Layout Generation with Smaller LLMs

Writing an LLM from scratch, part 8 – trainable self-attention

Show HN: LLMs Playing Mafia games – See them lie, deceive, and reason

How will LLMs take our jobs?

Sketch-of-Thought: Efficient LLM Reasoning

Big LLMs weights are a piece of history

Hallucinations in code are the least dangerous form of LLM mistakes

Proof that LLMs just mimic logic without understanding the underlying logic

Letta: Letta is a framework for creating LLM services with memory

LLM generated code is like particleboard

People are just as bad as my LLMs

Ladder: Self-improving LLMs through recursive problem decomposition

Using traditional ML and LLMs to analyze Executive Orders (1789 – 2025)

A Practical Guide to Running Local LLMs

Asking LLMs to create my game Shepard's Dog

Entropy is all you need? The quest for best tokens and the new physics of LLMs

Adafruit Successfully Automates Arduino Development Using 'Claude Code' LLM

AutoHete: An Automatic and Efficient Heterogeneous Training System for LLMs

A Guide to In-Browser LLMs

LLMs Don't Know What They Don't Know–and That's a Problem

16-Bit to 1-Bit: Visual KV Cache Quantization for Efficient Multimodal LLMs

How much are LLMs boosting real-world programmer productivity?

Show HN: Can I run this LLM? (locally)

SepLLM: Accelerate LLMs by Compressing One Segment into One Separator

Infinite Retrieval: Attention enhanced LLMs in long-context processing

Part 5: Implementing a Web UI using Vaadin and GitHub Copilot Agent Mode - Why LLMs are not suitable for lesser-known programming languages and frameworks

Show HN: TypeLeap: LLM Powered Reactive Intent UI/UX

GSM8K-Platinum: Revealing Performance Gaps in Frontier LLMs

Sidekick: Local-first native macOS LLM app
