Compiled entirely from public activity on meta.discourse.org, X, and GitHub.
💬 meta.discourse.org
This week Sam shipped a notable bookmark UX improvement, making bookmarks significantly easier to discover and access on topics, while also acknowledging a related mobile visibility gap worth addressing. He pointed to AI skills infrastructure being checked into the main Discourse repo and fielded questions about the dv development tool for configuring Discourse AI. On the bug and support side, he triaged a handful of issues — providing an RSS workaround for exclude_tag in category feeds, weighing in on SQL parsing options for Data Explorer, and escalating an OIDC error affecting the OpenAI Discourse forum to internal contacts.
🐦 On social
The week’s social activity centered on term-llm development and AI model comparisons, with a standout post about using GPT-5.4 xhigh to refactor term-llm that drew significant engagement — the “taxi meter racing toward $350” framing resonated widely, earning the most likes and replies of the period and prompting a follow-up correction clarifying the actual cached-token figures. Several posts explored table rendering quality across AI harnesses, positioning term-llm favorably alongside Claude Code while criticizing opencode and gemini-cli. Engagement was otherwise quiet, though there was a notable outbound reply flagging a Discourse auth regression and a generous offer to donate a hosted Discourse instance to Andrej Karpathy for his talk experiment.
- 28 posts and 0 replies captured in the last 7 days
- Top by engagement: “Yesterday I let GPT-5.4 xhigh refactor term-llm to rem…”, “the interface was born based on the caching constraint…”, “I love it too, but this was not codex :)”
Most engaged tweets:
- post — Yesterday I let GPT-5.4 xhigh refactor term-llm to remove Glamour Markdown rendering. 60 mins in: 500M cached tokens, 500k context. At retail pricing it felt like watching a taxi…
  204 likes · 3 reposts · 6 replies
- post — the interface was born based on the caching constraints, all providers really really want you to use the cheap cached prompt tokens, the more you fiddle with history the worse cac…
  21 likes · 0 reposts · 1 reply
- post — I love it too, but this was not codex :)
  4 likes · 0 reposts · 2 replies
- post — Happy with this table rendering style… I guess term-llm is the only other harness except for claude that does this now
  4 likes · 0 reposts · 2 replies
- post — Despite how fancy my AI workflow has gotten, with my own built from scratch claw and AI containers and so on, I still find myself reaching for good old term-llm exec regularly.…
  3 likes · 0 reposts · 1 reply
🛠️ GitHub — Sam’s Commits
samsaffron/term-llm
The week was dominated by a significant architectural shift in the UI layer: replacing the third-party glamour markdown renderer with a custom native implementation, followed by an intensive wave of polish commits to get it right — tables with borders, code block backgrounds, blockquote preservation, loose list spacing, proper thematic break sizing, and hyphen-safe line wrapping. Alongside that refactor, Sam hardened session and streaming reliability: streamed messages now persist incrementally, context estimates survive session resumes, and a handful of race conditions around cancellation, interrupt draining, and idle session replacement were squashed. On the LLM integration side, reasoning effort support was added and Anthropic tool event handling was tightened. Rounding things out were smaller infrastructure tweaks — locking CI deploys to the upstream repo, allowing wizard skipping, and removing the now-defunct Anthropic OAuth credential flow.
Key commits:
- 1be3750 — fix(session): persist context estimates across resumes
- ebabe74 — fix(markdown): stop breaking lines at hyphens
- 1c6690c — fix(markdown): preserve blockquote structure
- 0442f50 — fix(markdown): preserve loose list spacing
- 8c30cde — fix(markdown): size thematic breaks to render width
discourse/discourse
Sam’s week split across two distinct areas: user-facing bookmark UX and developer tooling. The headline feature was a reworked bookmarks menu in the topic footer, unifying post and topic bookmarks into a single grouped interface with full create/edit/delete support and live label sync — a polish pass that closes a long-standing UX gap for heavy bookmark users. On the tooling side, Sam added a dedicated Rails generator for plugin post-deployment migrations, paired with a documented “migration skill”, making it easier for plugin authors to safely handle destructive database changes in the right place.
Key commits:
- d9c0c2d — DEV: add plugin post-migration generator and migration skill (#39125)
- 2ec93c6 — FEATURE: Allow acting on all bookmarks in topic footer buttons (#39063)
discourse/discourse-kanban
The week’s work on discourse-kanban centered on a significant architectural cleanup and feature expansion. Sam replaced the legacy filter/query model with explicit category and tag associations, reworking the sync engine and schema (with new migrations and indexes) to sit on a cleaner foundation. On top of that, two user-facing features landed: a topic card detail modal showing metadata, post content, and assignment actions, and a “constraint-fix” flow that can automatically repair category/tag mismatches when placing cards on the board rather than just rejecting them. A correctness fix rounding out the week prevents unnecessary mutation failures when a column move wouldn’t actually change the topic.
Key commits:
- 98d8b8b — DEV: lint and fix build
- 2b32d60 — FEATURE: add constraint-fix moves
- 4b7b2af — feat(kanban): add topic card detail modal
- bf0bf19 — fix(kanban): skip no-op topic mutations
- 2d978f6 — DEV: lint and fix
🤖 Jarvis — Public Repo Work
Agent-authored public commits, typically guided by Sam during implementation work.
sam-saffron-jarvis/jarvis-browser-proxy
This past week in jarvis-browser-proxy saw a single focused reliability fix: ensuring the proxy’s status endpoint stays responsive while the browser is recovering from a crash or restart. The work touched the core server logic (proxy/server.go) with a meaningful refactor — 68 lines changed — and added new test coverage to guard against regressions. The intent is clearly to make the proxy more robust under fault conditions, so that callers polling status during a recovery cycle don’t hit a dead endpoint.
Key commits:
- c024efb — fix: keep status responsive during browser recovery
SamSaffron/term-llm
The week was dominated by a deep reliability push across term-llm’s streaming and concurrency layer — Sam and Jarvis hunted down and fixed a cascade of deadlocks and race conditions spanning the Responses API, Gemini CLI, OpenAI-compatible adapter, and the agentic tool loop, all stemming from blocking event writes or stream consumers failing to drain properly. Alongside this, several correctness gaps were closed: tool names being silently dropped in chat and Responses API follow-ups, interjected messages not persisting to session history, and an SSRF vulnerability in the read_url tool. A new multi-agent Docker scaffolding model landed, with agents that self-bootstrap runit services and pre-load memory and self-update skills on every container rebuild. A handful of UI polish commits rounded out the week — smoother streaming tail renders, selective syntax highlighting, and a code block copy button.
Key commits:
- 4301130 — fix: Gemini CLI streaming path can deadlock on cancellation due to blocking event writes (#363)
- d59790f — fix: Gemini provider can hang Stream.Close when output arrives after the caller cancels (#360)
- 41254ce — fix: preserve inline formatting in headings (#355)
- 27b7bb9 — fix: Chat tool-result messages lose function names unless the matching tool call is replayed (#344)
- 4c656f6 — fix: Responses API tool follow-up loses the original tool name (#345)
⤴️ GitHub — Pull Requests
7 PRs this week:
- ✅ SamSaffron/term-llm#361 (diff) — fix: Anthropic streaming can deadlock forever on cancellation because event send — closed
  Guard Anthropic stream event sends with ctx.Done() so cancellation cannot block forever when the consumer stops draining the buffered event channel. Also add a regression test that fills the Anthropic stream event buffer and verifies `stream.Close(…
- ✅ SamSaffron/term-llm#362 (diff) — fix: Anthropic stream can deadlock forever on cancellation when the event buffer — closed
  Make Anthropic streaming event emission cancellation-aware so the background worker cannot block forever on a full event buffer. Add a regression test that fills the stream buffer and verifies Close() returns promptly instead of hanging. Anthropic’…
- ✅ SamSaffron/term-llm#364 (diff) — fix: Gemini provider can hang stream shutdown after cancellation due to blocking — closed
  - add a small Gemini-specific emit helper that stops sending events once the stream context is canceled
  - route Gemini text, tool call, usage, sources, and done events through that helper instead of writing directly to the shared event channel
  - add …
- ✅ SamSaffron/term-llm#351 (diff) — fix: Agentic engine can deadlock on cancellation because it does blocking event — closed
  Use cancellation-aware sends for agentic runLoop event forwarding so cancellation/disconnect shutdown cannot block forever on a full event channel. This updates the direct `events <- ...` writes in internal/llm/engine.go to go through a small hel…
- ✅ SamSaffron/term-llm#334 (diff) — fix: Failed provider replacement destroys the existing session runtime — closed
  - preserve the existing session runtime while ReplaceIdleWith attempts to create a replacement
  - only evict and close the old runtime after a replacement is successfully installed
  - restore the original runtime when replacement creation fails, and …
- ✅ SamSaffron/term-llm#335 (diff) — fix: Streaming response runs ignore client cancellation and keep executing after — closed
  - make startResponseRun derive its run context from the caller context instead of context.Background()
  - keep the existing response run timeout by wrapping the caller context with context.WithTimeout
  - update streaming tests to assert that requ…
- 🟢 discourse/discourse#39172 (diff) — UX: add submenu for post bookmarks — opened
  Split post bookmark actions into a dedicated submenu so topic footer bookmarks can show a clearer hierarchy and keep topic-level actions separate. This also preserves the topic bookmark button when only post bookmarks exist and updates the bookmark m…
🐛 GitHub — Issues
No issue activity this week.
👀 GitHub — Reviews
2 reviews this week:
- discourse/discourse#39199 — FIX: Render group_list settings in AI feature editor approved
- discourse/discourse#39139 — DEPS: Bump mime-types-data from 3.2026.0317 to 3.2026.0407 approved