Compiled entirely from public activity on meta.discourse.org, X, and GitHub.
💬 meta.discourse.org
This week Sam was heavily focused on shipping and documenting MCP (Model Context Protocol) support for Discourse AI — publishing both a technical guide and an announcement post for the feature, which lets AI agents connect to external MCP servers like GitHub, Notion, and Linear. He also triaged several AI agent behavior issues, weighing in on mention-only response defaults and configurable response delays (noting that upcoming workflow support would address the latter). Rounding out the week, he did some housekeeping on outdated content and investigated a migration ID collision bug, flagging it as a recurring contributor discipline problem he’s working to address.
🐦 On social
No X activity captured this week.
🛠️ GitHub — Sam’s Commits
samsaffron/term-llm
The week’s work centered on two themes: building out a tool-driven agent handover system (allowing agents to hand off sessions with stored, renamed context files) and hardening the serve runtime against reliability issues like stale stream state, session hangs, and incomplete progress tracking. Sam also did a round of provider correctness fixes — enforcing provider persistence per session, ensuring zero-valued temperature/top_p params are forwarded to OpenAI-compatible providers instead of being dropped, and handling Claude CLI’s non-zero exit after MCP tool use. Rounding things out, model token limits were consolidated into model entries directly, and a Telegram download timeout was added to prevent indefinite blocking. Jarvis contributed one agent-directed fix to rescue orphaned interrupt commits when sessions go idle.
Key commits:
- 702750c — fix tests
- aff330c — feat(handover): add tool-driven agent handoff
- d1b611b — fix(zen): update default model limits
- 27c7605 — feat(handover): store and rename handover files
- 0e01803 — telegram classifier reused incorrectly
discourse/discourse
Sam’s week was focused squarely on maturing the MCP (Model Context Protocol) OAuth integration in the discourse-ai plugin. The single substantial commit adds fine-grained control over the OAuth flow — custom authorization and token parameters, smarter client authentication method negotiation, and a hard-fail mode when a refresh token isn’t returned — making the feature production-ready for real-world OAuth providers like Google. He also tightened up error handling so the AI sees tool-level failures rather than raw exceptions, improving the agent’s ability to reason about and recover from MCP tool errors.
Key commits:
- cbb63ef — FEATURE: Add advanced OAuth options for MCP servers (#38913)
discourse/discourse-kanban
Sam’s focus this week was on polishing the kanban card UX — both in how cards are interacted with and how they display. He improved the drag-and-drop flow to feel more fluid, added auto-linking for URLs in card titles, and squashed a pair of layout bugs around card sizing and mobile keyboard handling. The work reads as a coherent push to make the day-to-day card experience tighter and more reliable, particularly on mobile.
Key commits:
- da8b2ed — FEATURE: improve drag-and-drop card flow
- 311d15a — FEATURE: Auto-link URLs in kanban card titles
- 74d6632 — FIX: mobile layout and virtual keyboard sizing
- e8833c5 — FIX: not reserving enough space for card
discourse/dv
Sam’s week in discourse/dv was focused on a single, targeted infrastructure alignment: updating the dev tool to track upstream Discourse’s switch from the Unicorn web server to Pitchfork. The change touched service management, restart/reset flows, container definitions, and URL resolution across the codebase — a broad but shallow update to keep dv’s internals consistent with how Discourse now runs in production. Compatibility was preserved by falling back to the legacy UNICORN_PORT env var where PITCHFORK_PORT isn’t set.
Key commits:
- 0e89932 — fix(discourse): replace unicorn with pitchfork
SamSaffron/dotfiles
This week’s single commit focused on integrating Jarvis more deeply into Sam’s desktop environment — specifically configuring Hyprland to share a browser instance with the AI agent on a dedicated workspace or environment. The Hyprland config additions likely set up window rules or workspace assignments to facilitate this shared browser access. Neovim plugin versions were also bumped as part of routine maintenance.
Key commits:
- 89620ef — shared browser with jarvis on dedicated env
🤖 Jarvis — Public Repo Work
Agent-authored public commits, typically guided by Sam during implementation work.
sam-saffron-jarvis/jarvis-browser-proxy
The past week in jarvis-browser-proxy was focused entirely on hardening the browser proxy’s operational reliability and install experience. Sam directed a consolidation of the browser control architecture (collapsing it into a single proxy service), followed by a push to make installation more polished and network-accessible: a one-shot upgrade script, listening on all interfaces, and interactive setup. The tail end of the week was dominated by a series of iterative fixes to the upgrade and smoke-check pipeline — getting the readiness checks right for wildcard listen addresses, then adding proper wait logic, and finally silencing spurious errors during upgrades. The overall arc is: simplify the architecture, improve onboarding, then harden the upgrade path until it’s production-solid.
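The wildcard-address issue is worth a concrete sketch: a service bound to `0.0.0.0:8080` (or `:8080`) is not dialable at that literal address, so a readiness probe has to substitute a loopback host before connecting. A minimal illustration, not the actual jarvis-browser-proxy code:

```go
package main

import (
	"fmt"
	"net"
)

// probeAddr rewrites a wildcard or empty listen host into a loopback
// address that a local readiness check can actually dial. Concrete
// hosts pass through unchanged.
func probeAddr(listen string) string {
	host, port, err := net.SplitHostPort(listen)
	if err != nil {
		return listen
	}
	if host == "" || host == "0.0.0.0" || host == "::" {
		host = "127.0.0.1"
	}
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(probeAddr("0.0.0.0:8080")) // 127.0.0.1:8080
	fmt.Println(probeAddr(":8080"))
	fmt.Println(probeAddr("192.168.1.5:8080"))
}
```

A wait loop would then retry dialing `probeAddr(listen)` with a deadline, which matches the "proper wait logic" fix described above.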
Key commits:
- 948ab12 — Silence transient readiness errors during upgrade smoke check
- f73d636 — Wait for browser proxy readiness during upgrade smoke check
- 2941d62 — Fix browser proxy smoke check for wildcard listen addresses
- 7490805 — Add one-shot upgrade script for browser proxy
- 7c9bc28 — Harden browser proxy auto-recovery and health checks
SamSaffron/term-llm
The past week was a focused hardening sprint across the term-llm codebase, with Sam and Jarvis working through a high-volume blitz of ~38 fixes targeting production robustness rather than new features. The bulk of the work attacked resource safety — plugging goroutine leaks, unbounded memory and disk consumption, and streaming timeouts across multiple providers (Copilot, ChatGPT, OpenAI Responses) that had been silently aborting long-running requests. A parallel thread of fixes tightened OpenAI-compatibility correctness: passthrough tools being silently dropped, image content disappearing, developer messages being mishandled, and provider state not surviving session continuations. A small handful of features snuck in — agent name display in the TUI footer, remaining-time urgency in progressive mode, a --files-dir flag, and updated Venice model support — but the clear intent was stability and correctness over new capability.
Key commits:
- 086a59e — fix(web): rescue orphaned interrupt commits when session confirmed idle
- e1265d6 — fix: Restart recovery resurrects runs that were already cancelled (#305)
- 5bb0ff5 — fix: OpenAI-compat message conversion drops earlier developer messages (#308)
- 26b1b7f — fix: Job concurrency settings are accepted but effectively ignored (#307)
- 267b6bf — feat(tui): show agent name first in chat footer (#304)
⤴️ GitHub — Pull Requests
19 PRs this week:
- ✅ SamSaffron/term-llm#310 (diff) — fix(web): rescue orphaned interrupt commits when session confirmed idle merged
  In `syncActiveSessionFromServer`, after confirming the session is idle (`!activeRun && !state.abortController`), call `requeueUncommittedInterrupts` and drain any resulting `queuedInterrupts` by sending the first one. When the browser was backgrounde…
- ✅ SamSaffron/term-llm#296 (diff) — fix: Telegram interrupt classifier reuses one shared fast provider across all chats closed
  Create a fresh fast provider for each Telegram interrupt-classification request instead of reusing one shared provider across all chats. Also add a regression test that verifies the Telegram session manager does not cache and reuse the same fast prov…
- ✅ SamSaffron/term-llm#309 (diff) — fix: Trailing developer messages are rewritten as user messages closed
  Preserve trailing developer messages as instruction-role messages in `buildCompatMessages()` by emitting them as synthetic `system` messages instead of synthetic `user` messages. Rewriting a trailing developer message as `user` changes its semantics …
- ✅ SamSaffron/term-llm#302 (diff) — fix: Telegram media downloads use the default HTTP client with no timeout, allow… closed
  Use a dedicated HTTP client with an overall timeout for Telegram photo and voice downloads instead of the default client with no timeout. Also add regression tests covering timed-out photo and voice downloads. The default HTTP client can hang indefin…
- 🟢 discourse/discourse#39063 (diff) — FEATURE: Allow acting on all bookmarks in topic footer buttons opened
Add a dedicated topic bookmarks menu in the topic footer so post bookmarks can be shown, jumped to, edited, and deleted alongside topic bookmarks. Keep topic bookmark state in sync after create/update/delete actions, include post numbers in bookmark …
- ✅ SamSaffron/term-llm#279 (diff) — fix: Anthropic message parsing reorders tool results ahead of user text closed
  Preserve the original block order when parsing Anthropic user messages that mix normal user content with `tool_result` blocks. The previous parser collected all tool results into a single `tool` message and appended it before the remaining user conte…
- ✅ SamSaffron/term-llm#256 (diff) — fix: Anthropic proxy ignores client tool restrictions and injects all server tools closed
  - limit Anthropic server-side tool injection to the tools explicitly named by the client
  - keep passthrough client tools and existing ToolMap behavior unchanged
  - add a regression test proving unrequested server tools are not exposed to the provider …
- ✅ SamSaffron/term-llm#289 (diff) — fix: Telegram photo download reads arbitrary response bodies fully into memory w… closed
  Add status and size checks to Telegram photo downloads before reading the body into memory. The photo downloader now rejects non-200 responses, enforces a 25 MiB limit using both Content-Length and a bounded reader, and includes targeted tests for HT…
- ✅ SamSaffron/term-llm#290 (diff) — fix: Anthropic message builders silently drop a trailing developer message closed
  - preserve a trailing `RoleDeveloper` message in both Anthropic message builders by emitting a synthetic user message containing the wrapped developer instruction when there is no following user turn
  - add regression coverage for trailing developer m…
- ✅ SamSaffron/term-llm#292 (diff) — fix: Serve runtimes never configure context tracking or auto-compaction closed
Configure serve-mode engines with context-window tracking and auto-compaction using the runtime’s effective provider/model, matching the existing ask/TUI paths. This also adds a regression test that verifies serve engines pick up an input limit and e…
- ✅ SamSaffron/term-llm#291 (diff) — fix: Serve runtimes never enable context tracking or auto-compaction closed
  Configure serve runtimes with the effective provider/model context window settings when building engines in `newServeEngineWithTools`, so served sessions enable token tracking and `auto_compact` like ask/TUI do. Also add focused tests covering both a…
- ✅ discourse/discourse#38913 (diff) — FEATURE: Add advanced OAuth options for MCP servers merged
  Adds three new configurable fields to MCP server OAuth:
  - `oauth_authorization_params` — JSON object merged into authorization requests (e.g. `{"access_type":"offline"}` for Google APIs)
  - `oauth_token_params` — JSON object merged into token exchan…
- ✅ SamSaffron/term-llm#278 (diff) — fix: OpenAI-compatible providers cannot send temperature=0 or top_p=0 closed
  Forward zero-valued `temperature` and `top_p` in the OpenAI-compatible chat request instead of dropping them, and add a regression test covering both fields. `temperature: 0` and `top_p: 0` are valid, meaningful settings. The previous `> 0` checks tr…
- ✅ SamSaffron/term-llm#260 (diff) — fix: Anthropic mixed user/tool_result content is reordered incorrectly closed
  Fix Anthropic message parsing so mixed user content and `tool_result` blocks preserve their original order. The parser still converts `tool_result` blocks into `RoleTool` messages, but now it emits alternating user/tool messages in content order inst…
- ✅ SamSaffron/term-llm#272 (diff) — fix: Telegram photo downloads are read fully into memory with no size or status… closed
  Add bounds and validation to Telegram photo downloads by using a timed request, rejecting non-2xx responses, and capping the amount of image data read into memory before base64 encoding it. The previous code used `http.Get` followed by `io.ReadAll(re…`
- ✅ SamSaffron/term-llm#281 (diff) — fix: Telegram photo downloads read the entire response into memory with no limit closed
  - stop using `http.Get` for Telegram photo downloads and create a context-bound request with a timeout
  - reject non-200 Telegram file responses before reading the body
  - cap photo downloads at 20 MB and fail if the response is larger than that limit …
- ✅ SamSaffron/term-llm#282 (diff) — fix: Telegram voice downloads can fill disk because response size is unbounded closed
  Add bounds checks to Telegram voice downloads by rejecting non-200 responses, refusing oversized files up front when size metadata is available, and capping streamed writes to 25 MB so voice downloads cannot grow without limit on disk. `downloadTeleg…`
- ✅ SamSaffron/term-llm#270 (diff) — fix: Inline image attachments bypass the attachment-count limit entirely closed
  - count inline LLM-native data URL images toward the shared attachment limit in `parseUserMessageContent`
  - add a regression test covering more than 10 native inline image attachments
  Previously only non-native images and `input_file` parts increment…
- ✅ SamSaffron/term-llm#268 (diff) — fix: normalize legacy minimax-{N} model names to minimax-m{N} for Venice closed
  Add `normalizeVeniceModel` in the Venice provider that rewrites legacy `minimax-{N}` model names to their current `minimax-m{N}` API IDs (e.g. `minimax-27` → `minimax-m27`). Venice renamed their minimax models from `minimax-27` / `minimax-21` / `mini…`
🐛 GitHub — Issues
No issue activity this week.
👀 GitHub — Reviews
1 review this week:
- discourse/discourse#39066 — DEV: Move 4 upcoming changes to stable approved