
How I Use LLMs to Write

Agentic Misalignment: How LLMs could be insider threats

Compiling LLMs into a MegaKernel: A path to low-latency inference

The Emperor's New LLM

Salesforce study finds LLM agents flunk CRM and confidentiality tests

Clinical knowledge in LLMs does not translate to human interactions

Libraries are under-used. LLMs make this problem worse

Human-like object concept representations emerge naturally in multimodal LLMs

Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs)

Pitfalls of premature closure with LLM assisted coding

Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons

Show HN: Trieve CLI – Terminal-based LLM agent loop with search tool for PDFs

The last six months in LLMs, illustrated by pelicans on bicycles

Could an LLM create a full Domain-Specific Language?

Tokasaurus: An LLM inference engine for high-throughput workloads

LLMs pose an interesting problem for DSL designers

JavelinGuard: Low-Cost Transformer Architectures for LLM Security

Fine-tuning LLMs is a waste of time

How OpenElections uses LLMs

Reverse Engineering Cursor's LLM Client

A Knockout Blow for LLMs?

The Unreliability of LLMs and What Lies Ahead

Workhorse LLMs: Why Open Source Models Dominate Closed Source for Batch Tasks

Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining

LLMs are cheap

Writing in the Age of LLMs

Differences in link hallucination and source comprehension across different LLM

Focus and Context and LLMs

The Lexiconia Codex: A fantasy story that teaches you LLM buzzwords

LLMs are mirrors of operator skill
