Daily Top Stories
1. Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now (dosaygo-studio.github.io)
2. Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
3. Rust in the kernel is no longer experimental (lwn.net)
4. 10 Years of Let's Encrypt (letsencrypt.org)
5. Mistral releases Devstral2 and Mistral Vibe CLI (mistral.ai)
6. Bruno Simon – 3D Portfolio (bruno-simon.com)
7. PeerTube is recognized as a digital public good by Digital Public Goods Alliance (digitalpublicgoods.net)
8. If you're going to vibe code, why not do it in C? (stephenramsay.net)
9. Pebble Index 01 – External memory for your brain (repebble.com)
10. Apple's slow AI pace becomes a strength as market grows weary of spending (finance.yahoo.com)
11. Django: what's new in 6.0 (adamj.eu)
12. Donating the Model Context Protocol and establishing the Agentic AI Foundation (anthropic.com)
13. So you want to speak at software conferences? (dylanbeattie.net)
14. Kaiju – General purpose 3D/2D game engine in Go and Vulkan with built in editor (github.com)
15. Revisiting "Let's Build a Compiler" (eli.thegreenplace.net)
16. NYC congestion pricing cuts air pollution by a fifth in six months (airqualitynews.com)
17. My favourite small hash table (corsix.org)
18. A supersonic engine core makes the perfect power turbine (boomsupersonic.com)
19. Stop Breaking TLS (markround.com)
20. The stack circuitry of the Intel 8087 floating point chip, reverse-engineered (righto.com)
21. 'Source available' is not open source, and that's okay (dri.es)
22. Agentic AI Foundation (block.xyz)
23. How private equity is changing housing (theatlantic.com)
24. "The Matilda Effect": Pioneering Women Scientists Written Out of Science History (openculture.com)
25. We Need to Die (willllliam.com)
26. Qt, Linux and everything: Debugging Qt WebAssembly (qtandeverything.blogspot.com)
27. Linux CVEs, more than you ever wanted to know (kroah.com)
28. The Big Vitamin D Mistake [pdf] (2017) (pmc.ncbi.nlm.nih.gov)
29. Cloudflare error page generator (github.com)
30. Post-transformer inference: 224× compression of Llama-70B with improved accuracy (zenodo.org)