Posts Tagged With LLMs

🔖

Mario Zechner writes about the current usage of LLM coding agents in software development and how we just need to slow down our release cycles.

You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity. They have seen many bad architectural decisions in their training data and throughout their RL training. You have told them to architect your application. Guess what the result is?

An immense amount of complexity, an amalgam of terrible cargo cult "industry best practices", that you didn't rein in before it was too late. But it's worse than that.

🔖

Ladybird (a new web browser) recently migrated part of its code base to a new programming language with the help of Claude Code and Codex (OpenAI's coding agent). I think this approach of human-directed LLM tasks is really great:

I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.

🔖

Bryan Cantrill shares Oxide's internal guidance on the use of LLMs, published as RFD 576.

Empathy: Be we readers or writers, there are humans on the other end of our language use. As we use LLMs, we must keep in mind our empathy for that human, be they the one who is consuming our writing, or the one who has written what we are reading.

🔖

Marcus Olang' reflects on being told his writing sounds like ChatGPT. As a Kenyan, he reframes the comparison: ChatGPT writes like him and like many others shaped by the same educational system.

I am a writer. A writer who also happens to be Kenyan. And I have come to this thesis statement: I don't write like ChatGPT. ChatGPT, in its strange, disembodied, globally-sourced way, writes like me. Or, more accurately, it writes like the millions of us who were pushed through a very particular educational and societal pipeline, a pipeline deliberately designed to sandpaper away ambiguity, and forge our thoughts into a very specific, very formal, and very impressive shape.

🔖

Geoffrey Litt on how he uses LLMs to code like a surgeon:

A surgeon isn't a manager, they do the actual work! But their skills and time are highly leveraged with a support team that handles prep, secondary tasks, admin. The surgeon focuses on the important stuff they are uniquely good at.

🔖

Anil Dash on how most people in the tech industry who actually build things share the same feelings about AI:

Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

🔖

Alex Martsinovich on why it's rude to show AI output to people:

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.