October 2024 Archive
3601. Office 2024 (microsoft.com)
3602. Why trolls, extremists, and others spread conspiracy theories they don't believe (theconversation.com)
3603. Four Ways Ukraine's Drone Innovations Are Changing Warfare (wsj.com)
3604. HTML Whitespace Is Broken (blog.dwac.dev)
3605. How the United States Can Win the Battery Race (foreignpolicy.com)
3606. Linux: Goodbye from a Linux Community Volunteer (lore.kernel.org)
3607. Everything Is a Conspiracy Theory When You Don't Bother to Educate Yourself (techdirt.com)
3608. Life Before the Invention of AutoCAD: Photos from 1950 to 1980 (rarehistoricalphotos.com)
3609. I want to break some laws too (snats.xyz)
3610. Mastodon Announces Fediverse Discovery Providers (wedistribute.org)
3611. We're Entering Uncharted Territory for Math (theatlantic.com)
3612. Rust Resumes Rise in Popularity (infoworld.com)
3613. Google DeepMind scientists win Nobel chemistry prize (theguardian.com)
3614. 'Founder Mode' Explains the Rise of Trump in Silicon Valley (nytimes.com)
3615. Secure CGI Applications in C on BSD (2016) (kristaps.bsd.lv)
3616. Show HN: AndyNote – Chat-style transcripts with real-time speaker detection (andynote.app)
3617. Morsing of the Dead [video] (youtube.com)
3618. Docker lays off 10% of its employees (layoffs.fyi)
3619. A Bicycle for the Mind – Prologue (technicshistory.com)
3620. When Projects Stop (thefoggiest.dev)
3621. Election officials are outmatched by Elon Musk's misinformation machine (cnn.com)
3622. It's Time to Stop Taking Sam Altman at His Word (theatlantic.com)
3623. Lawsuit blames Character.AI in death of 14-year-old boy (techcrunch.com)
3624. Show HN: Auto-generate hard evaluation data for LLMs (talc.ai)
3625. Ratfactor's Sequence Timer (ratfactor.com)
3626. Nuance and Nuisance: On the Village Voice (harpers.org)
3627. Show HN: LLM fine-tuning SDK and would love your feedback (docs.luminolabs.ai)
3628. Who owns your shiny new Pixel 9 phone? You can't say no to Google's surveillance (cybernews.com)
3629. Nuance: Preventing Schema Migrations from Causing Outages (techblog.citystoragesystems.com)
3630. Running LLMs with 3.3M Context Tokens on a Single GPU (arxiv.org)