OpenAI Developer Community
@community.openai.com.web.brid.gy
Connect with other developers building with the OpenAI API Platform.

[bridged from https://community.openai.com/ on the web: https://fed.brid.gy/web/community.openai.com ]
Persistent, Structured Memory Strategies
Hey, thanks for that Google Sheets idea! My method is clever but not very useful: my laptop is the back-end data store, so whenever it's offline, all the API calls fail. I would _much rather_ read and write Google Sheets, but it hadn't occurred to me until you said this that I could integrate Sheets directly this way. Awesome!

I fiddled with this earlier today and couldn't quite get it to work. I didn't know where make.com fits in, so I set that aside and tried to define a few API endpoints (using the OpenAPI schema on the Custom GPT config screen) pointing at a specific Sheets doc. I got to the point where my GPT was sending a request when no auth was enabled (the response being that auth is needed), but I got no response at all once I enabled OAuth. Everything I tested (real vs. bogus client secret, etc.) suggests it never even reaches the OAuth step and just returns an empty response for [reasons]. I gave up at that point (for now), but it seems entirely possible to do this. Curious how you did it with make.com?

Re: the log hallucinations, I was also getting pristine data for the most part, and my GPT actively tried to convince me that it had the log stored as a table in some special structured memory space that I didn't have access to, for internal use. That sounded totally plausible until it started glitching, so I got suspicious and searched around for documentation on this hidden feature. I think it was trying to dupe me!
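For context, the Action I'm trying to configure is basically a thin wrapper over the Sheets `values.append` endpoint. Here's a rough Python sketch of the equivalent request, just to show the call I expect the GPT to make once OAuth works (spreadsheet ID, range, and token are placeholders, and I haven't actually got this running end-to-end):

```python
# Rough equivalent of what the Custom GPT Action should do once OAuth works:
# append one row to a Google Sheet via the Sheets API v4 values.append endpoint.
# SPREADSHEET_ID, RANGE, and ACCESS_TOKEN are placeholders.
import requests

SPREADSHEET_ID = "your-spreadsheet-id"
RANGE = "Log!A1"                    # table/range to append under
ACCESS_TOKEN = "ya29.placeholder"   # OAuth token with the spreadsheets scope

url = (
    f"https://sheets.googleapis.com/v4/spreadsheets/{SPREADSHEET_ID}"
    f"/values/{RANGE}:append?valueInputOption=USER_ENTERED"
)
body = {"values": [["2025-12-05", "memory entry", "some structured note"]]}

resp = requests.post(
    url,
    json=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # echoes the updated range on success
```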
community.openai.com
December 5, 2025 at 2:30 AM
Agent Builder Agent Process Silently Dies
We see occasional cases where our Agent Builder workflow stops responding to user messages, with no error shown in the thread's logs. It _might_ be related to client tool calls, but I'm not positive about that.

* User sends messages
* Agent responds
* Client tool calls are used successfully; the user continues to go back and forth with the agent.
* Then at some point, another client tool is called successfully, but that is the last response the agent ever gives.

In the logs we see:

* CLIENT TOOL CALL
  * Status: completed
  * Output: { “success”: true, … }
* TASK GROUP
  * Created timestamp is _after_ the client tool call
  * Type: Thought
* USER: User message
  * User says something normal and should get a response from the agent
  * Instead, the agent never responds
* USER: User message
* USER: User message
* … etc; the user never gets another response from the agent
* The status of the thread is still shown as “Active”

Similar issues:

* Users stop receiving agent responses in longer ChatKit threads involving client tool calls · Issue #96 · openai/chatkit-js · GitHub (I'm on this thread; last time it turned out silent rate limit errors were happening under the hood. We couldn't see those errors in the logs. Once our usage tier was increased to avoid rate limits, the issue was resolved. Not sure if the same thing is happening here, since those errors don't show in the Dashboard.)
* Users stop receiving agent responses after refreshing the page during a client tool call · Issue #107 · openai/chatkit-js · GitHub (in my case it's not because of a mid-client-tool refresh, and we don't see _any_ error logged as described on that issue)

Example threads:

* https://platform.openai.com/logs/cthr_6930d7d7bea881948b3d4a1eda43db6000b2783dce918c1e
* https://platform.openai.com/logs/cthr_692f996bd4208195b2345f71b3d66ce70ffae3760f062495

Unfortunately, I don't have a way to deterministically repro.
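If it does turn out to be silent rate limiting again, one thing we may try on our side is surfacing 429s explicitly on any direct API calls we make ourselves. Rough sketch with the official Python SDK below; this is only a partial check, since it can't see whatever the hosted workflow does internally, and the model name is just an example:

```python
# Sketch: make rate limit errors visible on our own direct API calls,
# since they don't show up in the thread logs. Model name is just an example.
import logging

from openai import OpenAI, RateLimitError, APIStatusError

log = logging.getLogger("agent-backend")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str | None:
    try:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    except RateLimitError as e:
        # This is the failure mode that was invisible in the Dashboard last time.
        log.error("Rate limited (429): %s", e)
        return None
    except APIStatusError as e:
        log.error("API error %s: %s", e.status_code, e)
        return None
```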
community.openai.com
December 5, 2025 at 2:30 AM
Request a new feature for ChatGPT
Make ChatGPT a true long-term co-author: project workspaces, continuity and versioning

I use ChatGPT as a long-term co-author on multi-month policy and technical writing projects (multiple reports, annexes, slide decks, letters, etc.). The core limitation is that every chat still behaves like an isolated conversation, rather than part of a single project with stable rules and drafts. A few changes could make ChatGPT much more usable for serious, ongoing work.

**1. Project workspaces with persistent rules**

Problem: I have to keep re-stating the same constraints in every chat: house style, spelling, tone, section spine, paper size, fonts, plus custom domain rules.

Request: Create project workspaces with a structured “project profile” that can include:

* Style settings: spelling (for example, UK vs US), sentence length preferences, tone, “executive summary first”, etc.
* Formatting settings: default paper size, font, heading hierarchy, reference style.
* Custom domain constraints.

With an explicit toggle like: “Apply these rules to all outputs in this workspace.”

**2. Cross-chat project index and retrieval**

Problem: Real projects branch into multiple chats: separate threads for annexes, alternative drafts, letters, slide versions, etc. It is hard to see at a glance which draft of a section is “current”.

Request:

* Allow users to tag chats and files into a named project.
* Provide a project index showing key artefacts and last-modified times (for example, “Section 4 – updated 3 days ago”).
* Let the model reference them explicitly via prompts such as: “Use the latest version of Section 4 from this project as the base and tighten paragraph 3.”

This would avoid the repeated “please scan this chat for the latest version” requests.

**3. Version-aware document round-tripping**

Problem: ChatGPT generates a document, I download and edit it locally, then re-upload and ask for targeted changes. The system often treats this as a brand-new file, without any sense of the previous version, and sometimes rewrites more than requested.

Request:

* When a user uploads a document that likely originated from ChatGPT, offer to “link this to the previous version”.
* Internally, track version lineage so the system can:
  * show a simple “changes since last ChatGPT version” diff, and
  * reason about what has changed since it last touched the file.
* Provide a “surgical edit” mode:
  * I specify the scope (for example, “only update the box in Section 3.2” or “only rewrite bullets under Heading 4.1”),
  * the model guarantees all other sections remain untouched, including styles and references.

**Why this matters**

These changes would move ChatGPT from “very smart single-session assistant” to a credible long-term co-author for users who are actually producing complex, versioned work over months: policy reports, legal documents, academic outputs, technical documentation, etc. Right now a lot of the friction is not “intelligence” but lack of stable project context and version awareness. Fixing that would unlock a huge amount of value for heavy, professional users.
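As an illustration of the kind of “changes since last ChatGPT version” diff I have in mind in point 3, here is a rough local sketch using Python's difflib on two drafts (file names are just placeholders, not part of any existing feature):

```python
# Sketch of the "changes since last ChatGPT version" diff described above,
# run locally on two drafts. File names are placeholders.
import difflib
from pathlib import Path

previous = Path("report_section4_chatgpt.md").read_text(encoding="utf-8").splitlines()
current = Path("report_section4_edited.md").read_text(encoding="utf-8").splitlines()

diff = difflib.unified_diff(
    previous,
    current,
    fromfile="last ChatGPT version",
    tofile="my edited version",
    lineterm="",
)
print("\n".join(diff))  # only the changed lines, with a little context
```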
community.openai.com
December 4, 2025 at 10:29 PM