FORMAL FEEDBACK SUBMISSION
Alexander
Claude.ai User · Pro/Max Subscriber
Submitted via support.claude.ai
April 2026
REF: TRANSPARENCY-001

To: The Anthropic Product & Engineering Team
Anthropic, PBC
San Francisco, California
anthropic.com

SUBJECT: Four Formal Proposals to Strengthen User Trust, Transparency & Platform Inclusivity

Dear Anthropic Team,

I am writing as a committed Claude user and subscriber to formally submit four interconnected proposals that I believe would significantly strengthen public trust in Claude, align closely with Anthropic's stated mission of responsible and transparent AI development, and expand access to underserved platforms.

I want to begin by saying that the reason I choose Claude over other AI assistants is precisely because of Anthropic's ethical foundations and commitment to building AI that benefits humanity. These proposals are offered in that same spirit — not as criticism, but as a contribution from someone who wants to see Anthropic lead the industry in openness and accountability.

I have attached a detailed visual transparency document (Attachment 1) illustrating the complete prompt lifecycle, tokenization process, Constitutional AI filtering, memory system, access context differences, and data retention breakdown. This document was produced as a demonstration of what publicly available transparency documentation could look like. Please consider it a starting point, not a finished product.

THE FOUR PROPOSALS

01 Official Native Linux Desktop Application

Claude's desktop app is currently available on macOS and Windows only. Linux — a platform used extensively by developers, researchers, security professionals, and technically engaged power users — is officially unsupported.
The community has already solved much of this problem: the open source project claude-desktop-debian (github.com/aaddrick/claude-desktop-debian) successfully repackages the official Windows Electron app into native .deb, .rpm, AppImage, AUR, and Nix packages with full MCP server support. Since the core application is already Electron-based and cross-platform by design, an official Linux build would not require a ground-up engineering effort. It would primarily require official packaging, code signing, QA on major distributions, and a supported update mechanism.

I formally request that Anthropic adopt or build upon this community work to produce an officially supported Linux release, giving the Linux community the same first-class experience currently available on other platforms.

02 A Public Product Roadmap

Anthropic already demonstrates commendable transparency through its Responsible Scaling Policy, Constitutional AI research papers, and model cards. I propose extending this philosophy to a public product roadmap — even a high-level one that indicates broad capability areas and platform priorities without exposing proprietary timelines.

This would allow power users to plan workflows around upcoming features, provide meaningful community input before features ship, and create a sense of shared direction between Anthropic and its users. Many of the most trusted developer-oriented companies — including Notion, Linear, and GitHub — maintain public roadmaps as a trust-building exercise. I believe Anthropic is well positioned to lead this practice in the AI industry.

03 Open Source the Desktop Client Code

I propose that Anthropic open source as much of the Claude desktop client as is feasible — particularly the interface layer, MCP integration architecture, and update mechanism.
Open sourcing the client would:
- allow independent security researchers to verify that the application behaves as documented;
- give the community the ability to contribute Linux and ARM support directly;
- build trust by demonstrating that Anthropic has nothing to hide in its client-side implementation; and
- set a new industry standard for AI application transparency.

The model weights and proprietary API infrastructure would naturally remain closed. But an open client would be a powerful signal that Anthropic's commitment to transparency is not merely rhetorical.

04 Official Visual Prompt Transparency Documentation

Most users — including technically engaged ones — have limited understanding of what actually happens to their prompts between pressing Enter and receiving a response. I propose that Anthropic publish official visual documentation clearly explaining:
- the full prompt lifecycle from device to server and back;
- what tokenization means and how it affects Claude's behaviour;
- how Constitutional AI filtering works and what happens when a prompt is blocked;
- how the memory system generates, stores, and applies summaries;
- how data handling differs across free, paid, API, and third-party access paths;
- explicit data retention timelines for each access tier; and
- what "may be used for training" means in practice and how to opt out where available.

The attached document is my attempt to draft what this could look like using only publicly available information. I believe Anthropic's official version would be more accurate, more detailed, and considerably more impactful. Transparency of this kind would genuinely differentiate Claude from every other AI assistant on the market.

These four proposals are interconnected by a common thread: the belief that trust, once earned, must be actively maintained through openness. Anthropic has built extraordinary goodwill through its ethical positioning.
These steps would transform that goodwill from an assumption into a verifiable, documented reality.

I am happy to discuss any of these proposals further, contribute to documentation efforts, or provide user perspective on any aspect of the Claude experience. Thank you sincerely for taking the time to read this submission.

Yours sincerely,
Alexander
Claude.ai Subscriber · April 2026

ATTACHMENT 1: "How Claude Processes Your Prompts — Full Transparency Document v2.0"
A detailed visual document covering the complete 9-step prompt lifecycle, tokenization examples, Constitutional AI filtering diagram, blocked prompt path, memory system explanation, access context comparison table, data retention breakdown by tier, and a comprehensive list of Anthropic's current data commitments. Produced by the submitter using publicly available information. Offered as a starting point for official documentation.

COMMUNITY TRANSPARENCY INITIATIVE · VERSION 2.0

How Claude Processes Your Prompts
A comprehensive visual walkthrough of the complete lifecycle — from your keyboard to Claude's response — including tokenization, Constitutional AI, memory, blocked paths, and access context differences.
Prepared by an Anthropic user · Submitted as a formal transparency suggestion · April 2026

Section 1 — The Complete Prompt Lifecycle

01 You Type a Prompt
Your message is typed into the Claude desktop app, mobile app, or web interface. At this stage your text exists only on your local device. Nothing has been transmitted yet. The interface may also include a system prompt — a hidden set of instructions set by the operator (Anthropic or a third-party developer) that shapes how Claude behaves before your message is even read.

What is a System Prompt?
When you use Claude through a third-party app or service, that developer can inject instructions Claude follows that you never see — such as "always respond formally" or "only discuss topics related to cooking." On claude.ai, Anthropic's own system prompts govern Claude's core behaviour and values. This layer is currently invisible to users and is an area where greater transparency would be beneficial.

02 TLS 1.3 Encryption
Before leaving your device, your prompt is wrapped in TLS 1.3 encryption — the same standard used by online banking. This ensures that no one between your device and Anthropic's servers can intercept or read your message in transit. The encryption is automatic and requires no action from you.

03 Transmitted to Anthropic's API
Your encrypted prompt travels over HTTPS to Anthropic's API servers. The full payload includes your message, the conversation history (context window), any attached files, the system prompt, and your session authentication token. This is the point at which your data leaves your device.

04 Safety & Policy Filtering — Constitutional AI

05 Tokenization — What Claude Actually Sees

This is not a simple blocklist check. Anthropic uses a multi-stage process called Constitutional AI (CAI), where Claude is trained to evaluate its own responses against a set of principles. At inference time, a safety layer evaluates the incoming prompt before it reaches the main model.

Prompt Received → Policy Classifier → PASS → Continue, or BLOCK → Refuse

BLOCKED PATH — What happens when a prompt is refused
If a prompt is flagged as violating policy, Claude does not forward it to the main inference model. Instead, a refusal response is generated and returned to you. No response is generated from the full model.
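The PASS/BLOCK routing just described can be sketched as a tiny dispatcher. This is a toy illustration under loud assumptions: the real policy classifier is a trained model, not a keyword list, and every function name and example string below is invented for the sketch.

```python
# Toy sketch of the inference-time routing described above.
# The real classifier is a learned model; this keyword check is a
# deliberately crude stand-in, and all names here are invented.

BLOCKED_TOPICS = {"build a weapon", "credit card dump"}  # hypothetical examples

def policy_classifier(prompt: str) -> str:
    """Return 'PASS' or 'BLOCK' for an incoming prompt."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "BLOCK"
    return "PASS"

def run_main_model(prompt: str) -> str:
    # Placeholder for the actual call to the full inference model.
    return f"[model response to: {prompt}]"

def handle_prompt(prompt: str) -> str:
    """Route a prompt: blocked prompts never reach the main model."""
    if policy_classifier(prompt) == "BLOCK":
        # Refusal path: generated without invoking the full model,
        # then logged for safety monitoring.
        return "I can't help with that, but here are some alternatives..."
    return run_main_model(prompt)

print(handle_prompt("What is TLS 1.3?"))
# → [model response to: What is TLS 1.3?]
```

The key structural point the sketch preserves is that the refusal branch returns before the main model is ever called.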
Claude explains why it cannot help and, where appropriate, suggests alternatives. The refusal itself is also logged for safety monitoring purposes.

If tools are enabled, Claude may make additional calls at this stage. Each tool type has a different data path and privacy implication:

Web Search (opt-in): Your query is sent to a search provider. Results are returned to Claude and included in context. The search provider may log your query.

MCP Servers (locally installed): Claude calls services running on your own machine — such as filesystem access, GitHub, or genomics tools. Data stays local unless the MCP server itself makes external calls.

Memory System: Anthropic's memory system periodically summarises your past conversations and stores key facts. When relevant, these summaries are injected into your context window at the start of a new conversation. You can view and delete memories in settings. Memory generation happens server-side — Anthropic's infrastructure processes your past conversations to create these summaries.

08 Response Streamed Back to Your Device
Claude's response is streamed back over HTTPS as it is generated — this is why you see text appear word by word in real time. The stream is encrypted with TLS 1.3. Streaming begins as soon as the first tokens are generated, rather than waiting for the full response to complete.

09 Rendered, Stored & Optionally Used for Training

Section 2 — Access Context: How Your Data Differs

This is one of the least understood aspects of Claude. Your data handling depends significantly on how you access Claude. The table below summarises the key differences across the four main access paths.

Factor | claude.ai (Free) | claude.ai (Pro/Max) | API (Developers) | Third-Party Apps
May be used for training | Yes (default) | Limited / opt-out available | No (by default) | Depends on operator policy
System prompt visible to you | No | No | Yes (you write it) | No (operator controls it)
Memory system active | Yes | Yes | No | Depends on operator
Conversation history stored by Anthropic | Yes | Yes | No (unless zero-retention waived) | Depends on operator contract
Data retention period | Not publicly specified | Not publicly specified | Up to 30 days (logs) | Varies — operator's privacy policy applies
Ad-free | Yes | Yes | Yes | Depends on operator

⚠️ Transparency Gap Identified
Data retention periods for claude.ai free and paid users are not clearly specified in Anthropic's current public documentation. This is a significant transparency gap. Users deserve to know exactly how long their conversations are stored, under what conditions they are used for training, and how to exercise full deletion rights. This document formally requests that Anthropic publish explicit retention timelines for each access tier.

Section 3 — Data Retention by Access Tier

claude.ai — Free Users
Conversations may be used to improve Claude's safety and capabilities. The data retention period is not explicitly published. Users can delete conversations manually. Opting out of training is not currently available on the free tier. Memory summaries are stored server-side indefinitely until deleted.

claude.ai — Pro / Max Users
Paying subscribers have stronger data protections. Some training opt-out options may be available. Exact retention timelines are not publicly documented. Conversation deletion is available. Memory can be viewed and deleted in settings.

API Access (Developers)
By default, Anthropic does not use API inputs or outputs for training. Anthropic may retain API request logs for up to 30 days for abuse monitoring. Zero data retention (ZDR) agreements are available for enterprise customers, under which no request data is stored at all.
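As a concrete illustration of the API access path described above, here is roughly what a developer-constructed request looks like. This sketch uses only the Python standard library and makes no network call; the top-level field names and headers follow Anthropic's public Messages API documentation, while the model string and API key are placeholder values.

```python
import json

# Sketch of the payload a developer sends to POST https://api.anthropic.com/v1/messages.
# Field names follow Anthropic's public Messages API; values below are illustrative.
payload = {
    "model": "claude-sonnet-4-20250514",  # placeholder model id; check current docs
    "max_tokens": 1024,
    # On the API path, the system prompt is a field the developer writes and can see.
    "system": "You are a concise assistant for a cooking app.",
    # The API is stateless: conversation history travels with every request,
    # which is why no server-side memory system applies to this tier.
    "messages": [
        {"role": "user", "content": "Suggest a quick weeknight pasta."},
    ],
}

headers = {
    "x-api-key": "sk-ant-PLACEHOLDER",  # your secret key (hypothetical value)
    "anthropic-version": "2023-06-01",  # required API version header
    "content-type": "application/json",
}

body = json.dumps(payload)  # this JSON string is what travels inside TLS 1.3
print(sorted(payload.keys()))
# → ['max_tokens', 'messages', 'model', 'system']
```

Note how the request itself makes several table rows above concrete: the system prompt is developer-visible because the developer authors it, and there is no memory field because each request carries its own history.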
Third-Party Applications
When accessing Claude through a third-party app, that operator's privacy policy governs your data — not Anthropic's. The operator may store, process, or share your conversations independently. Always check the privacy policy of any third-party Claude-powered application.

Section 4 — What Anthropic Does Not Do

✗ Sell your conversation data to advertisers or third parties
✗ Serve targeted ads based on your prompt content
✗ Share your data with governments without legal process
✗ Use your prompts to train competitor models
✗ Transmit prompts unencrypted at any point in transit
✗ Access your local MCP server data without your explicit action
✗ Allow Claude to retain memory between sessions without the memory system
✗ Share conversation data with Anthropic's commercial partners without disclosure

This document was produced by an Anthropic user as a formal transparency proposal. It is based on publicly available information and reasonable inference where official documentation is absent.
Submitted to Anthropic via support.claude.ai · References: anthropic.com/privacy · anthropic.com/usage-policy
Version 2.0 · April 2026 · Community Transparency Initiative