Echo Chamber: A Context-Poisoning Jailbreak That Bypasses LLM Guardrails

How I Use LLMs to Write

Ask HN: What are you actually using LLMs for in production?

LLMs Bring New Nature of Abstraction

Lossless LLM 3x Throughput Increase by LMCache

LLM Hallucinations in Practical Code Generation

LLM code generation may lead to an erosion of trust

LLM's Illusion of Alignment

Agentic Misalignment: How LLMs could be insider threats

Compiling LLMs into a MegaKernel: A path to low-latency inference

Life of an inference request (vLLM V1): How LLMs are served efficiently at scale

What LLMs Know About Their Users

Study Finds LLM Users Have Weaker Understanding After Research

The Emperor's New LLM

SymbolicAI: A neuro-symbolic perspective on LLMs

Salesforce study finds LLM agents flunk CRM and confidentiality tests

Showcase: Project Combiner (combine-files) - A CLI Tool Ideal for Prepping Context for LLMs

Clinical knowledge in LLMs does not translate to human interactions

SUSE Refines, Releases Open-Source LLM to Fuel Community Collaboration

Libraries are under-used. LLMs make this problem worse

Richard Feldman on new language adoption in the LLM age

Human-like object concept representations emerge naturally in multimodal LLMs

Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs)

Pitfalls of premature closure with LLM assisted coding

Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons

The last six months in LLMs, illustrated by pelicans on bicycles

Show HN: Trieve CLI – Terminal-based LLM agent loop with search tool for PDFs

Could an LLM create a full Domain-Specific Language?

Tokasaurus: An LLM inference engine for high-throughput workloads

LLMs pose an interesting problem for DSL designers