Privacy folks – what's your take on using LLMs at work?

Lossless LLM compression for efficient GPU inference via dynamic-length float

LLMs No Longer Require Powerful Servers: Researchers from MIT, KAUST, ISTA, and Yandex Introduce a New AI Approach to Rapidly Compress Large Language Models without a Significant Loss of Quality

LLMs vs Compilers: Why the Rules Don’t Align

Don’t let an LLM make decisions or execute business logic

Show HN: LLM-Based Spark Profiler

LLMs Don't Reward Originality, They Flatten It

llm.pdf: Run LLMs inside a PDF file

How NASA Is Using Graph Technology and LLMs to Build a People Knowledge Graph

LLMs Reduce Development Friction. Is That a Good Thing?

An LLM Query Understanding Service

Running Local LLMs? [Tenstorrent] 32GB Card Might Be Better Than Your RTX 5090

Announcing Codebase Viewer v0.1.0 - A Fast, egui-based Tool to Explore & Document Codebases (Great for LLM Context!)

The Cost of Being Crawled: LLM Bots and Vercel Image API Pricing

I Built a Website with My LLM Recommendations. That’s It. No Hype, No Fuss.

Neural Graffiti – Liquid Memory Layer for LLMs

Building a Text-to-SQL LLM Agent in Python: A Tutorial-Style Deep Dive into the Challenges

Calypso: LLMs as Dungeon Masters' Assistants [pdf]

go-away (another HTTP proxy for LLM scraper defence)

Docker Model Runner Brings Local LLMs to Your Desktop

Show HN: Lunon – Instant model switching across LLMs

Vim is more useful in the age of LLMs

Benchmarking LLM social skills with an elimination game

Dual RTX 5090 Beats $25,000 H100 in Real-World LLM Performance

A study of 9 LLMs found medically unjustified differences in care based on patient identity – with Black, LGBTQIA+, and unhoused patients often receiving worse or unnecessary recommendations.

Show HN: MidiMaker.pro – Generate structured MIDI music from text using LLMs

LLM providers on the cusp of an 'extinction' phase as capex realities bite

Yann LeCun, Pioneer of AI, Thinks Today's LLMs Are Nearly Obsolete

Show HN: LocalScore – Local LLM Benchmark

Long context support in LLM 0.24 using fragments and template plugins