All projects
AI · Tarot · Research · 2024–2026

Card Reader + AI Reading Engine

An interactive tarot/Lenormand/poker spread reader with drag-to-position card slots, reversal logic, session memory, and a generative AI interpretation engine — structured 5-section output, cross-session archive intelligence, anti-drift context control, and a prompt cache strategy that keeps costs sane despite a ~8K-token context.

Interactive · Lab

Tear-aparts and live mocks

Architecture snapshots and product demos. Click cards to flip them, then run the reading. All mocked — nothing is saved.

UX · Product

A complete reading

Click each card slot to flip · then Run reading to stream the 5-section interpretation

Past
Present
Future
Cards Say
Cross-Reference
Synthesis
Shadow Card
Timing

The real engine streams each section via SSE at ~22ms/char. The 5-section structure is enforced in the system prompt — sections cannot be reordered or merged. The Shadow Card (section 4) is auto-revealed from a separate draw; Timing (section 5) is always the last section regardless of question type.
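The enforced ordering can be sketched as a fixed schema check on the streamed output. This is a minimal illustration, not the engine's actual code; `SECTION_ORDER` and `validateReading` are hypothetical names:

```typescript
// Hypothetical sketch: the five sections as a fixed, ordered schema,
// so the client can assert structure as each SSE chunk completes.
const SECTION_ORDER = [
  "Cards Say",
  "Cross-Reference",
  "Synthesis",
  "Shadow Card",
  "Timing",
] as const;

type Section = (typeof SECTION_ORDER)[number];

// Validate a finished interpretation: every section present, in order,
// with Timing always last. Returns the ordered titles or throws.
function validateReading(sections: { title: Section; body: string }[]): Section[] {
  const titles = sections.map((s) => s.title);
  if (titles.length !== SECTION_ORDER.length) {
    throw new Error(`expected ${SECTION_ORDER.length} sections, got ${titles.length}`);
  }
  SECTION_ORDER.forEach((expected, i) => {
    if (titles[i] !== expected) {
      throw new Error(`section ${i + 1} should be "${expected}", got "${titles[i]}"`);
    }
  });
  return titles;
}
```

Because the order is a constant rather than model output, a reordered or merged response fails fast instead of rendering a malformed reading.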

ML · Architecture

3-tier archive retrieval

HOT / WARM / COLD — click each tier to expand

HOT Always included ~2,400 tok
The last 3–5 readings from this session, always prepended. Ensures the AI never loses the immediate context of what you've been exploring. Cached by the prompt cache layer — no re-encoding cost.
WARM Card + name match ~1,200 tok
Past readings that share at least one card with the current spread, or reference the same card by name in user reflections. Retrieved via a DB query joining card_name to reading_id. Token budget: ~1,200.
COLD On-demand ~400 tok
Any past reading, loaded only if the user explicitly references it ("in my reading last month…") or the AI identifies a pattern worth surfacing. Pulled by reading_id on demand. Smallest budget — usually unused.

Total archive token budget: ~4,000 of the ~8,000-token request budget. The HOT tier is prompt-cached; WARM/COLD tiers are assembled dynamically per request. The tiers give the AI meaningful cross-session memory without blowing the context window.
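The per-tier budgets above can be pictured as a greedy packing step at request time. A minimal sketch under stated assumptions: `assembleArchive`, `TIER_BUDGET`, and the ranked-entry packing are illustrative, not the real retrieval code, and `tokens` stands in for a real tokenizer count:

```typescript
// Hypothetical sketch of per-request archive assembly.
// Tier budgets mirror the numbers quoted in the text.
interface ArchiveEntry {
  id: string;     // reading_id
  text: string;   // rendered past reading
  tokens: number; // pre-computed token count (stand-in for a tokenizer)
}

const TIER_BUDGET = { hot: 2400, warm: 1200, cold: 400 } as const;

// Greedily pack each tier's entries (assumed already ranked by relevance)
// until its budget is exhausted; the total stays under the ~4,000-token cap.
function assembleArchive(
  tiers: Record<keyof typeof TIER_BUDGET, ArchiveEntry[]>
): ArchiveEntry[] {
  const picked: ArchiveEntry[] = [];
  (Object.keys(TIER_BUDGET) as (keyof typeof TIER_BUDGET)[]).forEach((tier) => {
    let remaining = TIER_BUDGET[tier];
    for (const entry of tiers[tier]) {
      if (entry.tokens > remaining) break; // stop at the budget boundary
      picked.push(entry);
      remaining -= entry.tokens;
    }
  });
  return picked;
}
```

The fixed per-tier caps mean one verbose tier can never starve the others, which is the property that keeps the archive at ~4,000 of the ~8,000-token request budget.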

ML · Architecture

Anti-drift mechanism

Simulate a 20-question session · watch recenters at Q5, Q10, Q15

0
/ 20 questions
Auto-recenter — context rebased to session origin
Session log will appear here…

At Q5, Q10, and Q15 the engine silently injects a "recenter" directive into the context window — the model re-reads its original question framing and the HOT archive tier. This prevents the conversation from gradually drifting away from the original question into tangentially related territory, a known failure mode in long LLM sessions.
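The schedule above can be sketched as a simple every-fifth-question check. This is an assumed implementation; `buildTurn`, `RECENTER_EVERY`, and the `[recenter]` marker are hypothetical, and the real directive text is whatever the engine's system prompt defines:

```typescript
// Hypothetical sketch of the recenter schedule: every 5th question, the
// original framing is re-injected ahead of the new user turn.
const RECENTER_EVERY = 5;

function buildTurn(
  questionIndex: number,   // 1-based position in the session
  originalFraming: string, // the question the session opened with
  userMessage: string
): string[] {
  const turn: string[] = [];
  // Fires at Q5, Q10, Q15, … (the demo runs 20 questions).
  if (questionIndex % RECENTER_EVERY === 0) {
    turn.push(`[recenter] ${originalFraming}`);
  }
  turn.push(userMessage);
  return turn;
}
```

Making the injection silent (the user never sees the directive) is what keeps the mock's session log readable while the model's context is rebased underneath.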

ML · Architecture

Prompt cache strategy

Toggle cache on/off — see token cost per request

system prompt (8K)
archive HOT tier
user context
this request
Estimated cost: ~$0.003 / request

The ~8K system prompt (tarot card definitions, interpretation guidelines, output format rules) is encoded once and cached. Subsequent requests within the session reuse the cached prefix — only the new user message and dynamic archive context are re-encoded. Without caching, each request re-encodes the full 8K, increasing cost ~6×.
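A back-of-envelope cost model makes the savings concrete. This is an illustrative sketch: `requestCost` is hypothetical, and the 10% rate for cached tokens is an assumed discount; actual provider pricing varies by model and cache window:

```typescript
// Hypothetical cost sketch. Cached prefix tokens are assumed to be billed
// at a deep discount; only dynamic tokens are encoded at full price.
function requestCost(opts: {
  systemTokens: number;   // the ~8K cacheable system prompt
  dynamicTokens: number;  // user message + WARM/COLD archive context
  pricePerToken: number;  // full input price per token
  cached: boolean;
  cacheDiscount?: number; // fraction of full price paid for cached tokens
}): number {
  const discount = opts.cacheDiscount ?? 0.1; // assumed 10% rate
  const systemRate = opts.cached ? discount : 1;
  return (opts.systemTokens * systemRate + opts.dynamicTokens) * opts.pricePerToken;
}
```

With 8,000 system tokens and 1,000 dynamic tokens, the uncached request bills 9,000 full-price tokens versus 1,800 effective tokens cached, a 5× gap under these assumed numbers; the exact multiple (the text cites ~6×) depends on the provider's discount and the size of the dynamic tail.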

UX · Product

Guided but escapable

Ask a question · the AI proposes follow-ups · you can always go your own way

The AI suggests — pick one, or go your own way:
or
Try more in the live tool →

You're never trapped in the AI's framing. The chips are suggestions, not a decision tree.

After each section of interpretation, the engine generates 3 follow-up question chips based on the cards and the user's original question. These are context-aware, not generic "tell me more" prompts. The user can pick one or type anything freely. The design is guided-but-escapable: the AI leads without trapping.
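The chip-generation request might be framed like the sketch below. `chipPrompt` and its wording are hypothetical, intended only to show how the cards and original question ground the follow-ups so the chips stay context-aware:

```typescript
// Hypothetical sketch: after each section, ask the model for exactly three
// follow-up questions grounded in the drawn cards and the original question.
function chipPrompt(cards: string[], originalQuestion: string, section: string): string {
  return [
    `Cards on the table: ${cards.join(", ")}.`,
    `Original question: "${originalQuestion}".`,
    `Section just delivered: ${section}.`,
    "Propose exactly 3 short follow-up questions the querent might ask next.",
    "Each must reference a specific card or the original question; no generic prompts.",
  ].join("\n");
}
```

Keeping the constraint ("exactly 3", "reference a specific card") in the prompt rather than post-filtering is one plausible way to avoid the generic "tell me more" failure mode the text calls out.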

UX · Privacy

Privacy frame

Click each step to see what happens to your data

Your question
Encrypted in transit (HTTPS) · stored encrypted at rest
Questions and readings are stored in your personal archive, encrypted at rest in the DB. Only your account can retrieve them. The system never uses your questions for model training.
AI model call
Sent to the GLM-5.1 API · no persistent storage on the model provider's side
The AI call goes to GLM-5.1. Per their API terms, prompts are not used for training and are not retained after the response. The system prompt never includes PII from other users.
Archive storage
Stored in your account's reading archive · never shared
Readings are stored in an append-only archive tied to your user_id. Account deletion is an anonymize-preserve operation — readings are de-linked from your identity but not cascade-deleted, preserving the statistical patterns the archive intelligence relies on.
No third-party analytics
No Google Analytics · no tracking pixels · no ad networks
The tool has zero third-party analytics: no Google Analytics, no Mixpanel, no Facebook Pixel. Server-side access logs exist for debugging only and are not shared or sold. This is a personal tool, not a data product.

Account deletion uses an anonymize-preserve pattern (migration 030_deletion_hardening.sql) — your readings are de-linked from your identity but not erased, preserving cross-session archive intelligence for the anonymized record.
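The anonymize-preserve step can be pictured in application code. A minimal sketch under stated assumptions: `Reading` and `anonymizeUser` are illustrative, and the actual operation lives in the SQL migration named above:

```typescript
// Hypothetical sketch of the anonymize-preserve deletion pattern:
// readings lose their link to the user but stay in the archive,
// so cross-session statistics survive account deletion.
interface Reading {
  id: string;
  userId: string | null; // null once de-linked
  cards: string[];
}

function anonymizeUser(archive: Reading[], userId: string): Reading[] {
  return archive.map((r) =>
    r.userId === userId ? { ...r, userId: null } : r // de-link, never delete
  );
}
```

The key design choice is that the row count never changes: identity is severed, but the card-pattern data the archive intelligence aggregates over remains intact.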

Ready to try the real thing?

Open the Card Reader

Open the live tool →