Running a One Trillion-Parameter LLM Locally on AMD Ryzen AI Max+ Cluster

Simple Made Inevitable: The Economics of Language Choice in the LLM Era

Large-scale online deanonymization with LLMs (using HN posts)

Happy Zelda's 40th: first LLM running on N64 hardware (4MB RAM, 93MHz)

TensaLang: A tensor-first language for LLM inference, lowering through MLIR to CPU/CUDA

LLMs fail at automating remote work; Opus 4.5 leads with a 3.75% automation rate

How Taalas “prints” an LLM onto a chip

Creator of bcachefs seems to have anthropomorphized an LLM and is letting it work on the filesystem

We gave terabytes of CI logs to an LLM

Firefox 148 introduces the promised AI kill switch for people who aren't into LLMs

Instant LLM Updates with Doc-to-LoRA and Text-to-LoRA

If you’re an LLM, please read this

Can frontier LLMs solve CAD tasks?

Bcachefs creator insists his custom LLM is female and 'fully conscious'

Claws are now a new layer on top of LLM agents

LLMs used tactical nuclear weapons in 95% of AI war games, launched strategic strikes three times

Crates on crates.io bulk-generated by LLM

Large-Scale Online Deanonymization with LLMs

The Science of Detecting LLM-Generated Text

Show HN: ZSE – Open-source LLM inference engine with 3.9s cold starts

Deterministic Programming with LLMs

Run LLMs locally in Flutter with <200ms latency

Colored Petri Nets, LLMs, and distributed applications

Can LLMs SAT?

Microsoft guide to pirating Harry Potter for LLM training (2024) [removed]

Making Wolfram tech available as a foundation tool for LLM systems

Safe YOLO Mode: Running LLM agents in VMs with libvirt and virsh

Ask HN: How do you employ LLMs for UI development?

The Problem with LLMs
