Tag trends are in beta. Feedback? Thoughts? Email me at [email protected]

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

LLMs as the new high level language

Circumstantial Complexity, LLMs and Large Scale Architecture

Experts Have World Models. LLMs Have Word Models

LLMs could be, but shouldn't be compilers

The LLM spectrum and responsible LLM use

Expensively Quadratic: the LLM Agent Cost Curve

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

OpenClaw is basically a cascade of LLMs in prime position to mess stuff up

Who has completely sworn off including LLM generated code in their software?

Sysadmin In The LLM Age

nanochat can now train GPT-2 grade LLM for ~$73 (3 hours on single 8XH100 node)

Yak Power-Shears: LLMs are pretty good at Emacs

Open-source AI tool beats LLMs in literature reviews – and gets citations right

My iPhone 16 Pro Max produces garbage output when running MLX LLMs

Dropstone launches shared multiplayer workspaces, allowing developers to chat and collaborate within the same LLM context window

Vibe: Easy VM sandboxes for LLM agents on macOS

Show HN: I built "AI Wattpad" to eval LLMs on fiction

Rewriting pycparser with the help of an LLM

Offline LLM and note-taking app using RN/Expo

Measuring activation during behavioral activation therapy: a proof-of-concept study using smartphone sensors and LLM-derived ratings in adolescents with anhedonia

Evaluating and mitigating the growing risk of LLM-discovered 0-days

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

wordchipper - my next-gen LLM tokenizer; looking for LTR release help

Can you repurpose diffusion into a brainstorm mode for LLMs?

Port avro-tools' Java-based idl-to-json tool to Rust using an LLM

docrawl - fast documentation site crawler that outputs clean markdown (for RAG pipelines, LLM context, etc)

Cross-model consensus in AI code review: why different LLMs catch different bugs (technical writeup)

Using .NET for AI / LLM work — am I limiting myself?

hotpath-rs - new release adds an MCP interface; LLMs can now inspect Rust perf bottlenecks from the inside
