How do you quickly burn through a lot of Claude Code tokens?
You can burn Claude Code tokens fast with complex multi-agent planning prompts and philosophical debate.
1. Key information
- Core idea: have Claude run multi-round, multi-agent collaboration that generates piles of documents and discussion (#9 #10).
- #9 provides an extra-long prompt template: create a docs/devPlan/AI_dialog/ directory, have 8 agents discuss over 3 rounds, and output lots of .md files (exploration, analysis, proposals, reviews, etc.); running four or five windows at once works even better.
- #10 adds follow-up instructions: fact check, explain the plan with ASCII diagrams, and create 5_planning/phase_plan.md to flesh out the implementation phases.
- #8 mentions that the /insights command can analyze whether employees are misusing it, but the user asked back "so how do I use it properly?"
- #2 suggests "spinning up several my claws and having them do philosophical debate."
- #12 points out the trend: everyone is churning out MD files, producing 70–80k lines of code a month.
- #13 proposes building a "Three Departments and Six Ministries"-style virtual team to spar with itself, or having Claude read metrics to analyze incidents.
- #6 quips "some die of drought while others drown in floods."
2. Deals/promotions
None
3. Latest updates
- #2 mentions "maybe after 520 there's nothing to worry about," hinting at a possible rule or policy change after May 20.
- #12 comments on the current "Great Leap Forward" climate: AI-generated line counts are being used as a performance metric.
4. Controversies or dissenting views
- #4 warns that the /insights command can analyze whether employees are misusing it, so there is a monitoring risk.
- #5 jokes that the user "is a squid now and should donate this username," implying token-wasting may not be welcome inside the company.
5. Action suggestions
- Use #9's complete prompt directly to set up a multi-agent planning workflow, and run it across several windows at once.
- Combine it with #10's instructions to generate a phased implementation plan, or follow #13 and build a virtual team whose agents debate each other.
- Note: in a corporate environment, watch out for possible /insights monitoring (#4).
Dumbass company runs rankings for everything, even tokens. How do I burn through a lot of tokens, fast?
You're at the squid farm too? Spin up a few more my claws and have them do philosophical debate. Then again, maybe after 520 there'll be nothing to worry about.
https://www.uscardforum.com/t/topic/489174/8 Reciting Shakespeare
Careful: the /insights command can reveal whether employees are misusing it.
Everyone's a squid now; time to donate this username.
Some die of drought while others drown in floods.
So the YouTube teacher is actually from the squid factory.
ccchowww: "Careful: the /insights command can reveal whether employees are misusing it" — so how do I use it properly?
I actually know this one!!! Here it is — use with care, it's a token exploder (you can even run four or five windows at once):

——————————————

"Raise whatever trivial little question you like"

Use a structured multi-agent planning process for the request above. Create a discussion folder under `.docs/devPlan/AI_dialog/` with a clear topic name such as `26_04_03 UI Upgrade`. Keep the full planning history there as `.md`.

## Sub-Agent Context Feeding

Each sub-agent spawned via the Agent tool receives the full content of the folders listed below. No digest or summary files are used — agents read the raw stage outputs directly.

| Current stage | Folders fed to sub-agent |
| --- | --- |
| 1_exploration | 00_index.md, 0_scoping/ |
| 2_analysis | 00_index.md, 0_scoping/, 1_exploration/ |
| 3_discussion round_1 | 00_index.md, 0_scoping/, 1_exploration/, 2_analysis/ |
| 3_discussion round_2 | 00_index.md, 0_scoping/, 1_exploration/, 3_discussion/round_1/ |
| 3_discussion round_3 | 00_index.md, 0_scoping/, 1_exploration/, 3_discussion/round_2/ |
| 4_proposals | 00_index.md, 0_scoping/, 1_exploration/, 3_discussion/round_3/ |

## Core Rules

- Build a shared exploration base before agents form opinions.
- Separate evidence, inference, and recommendation.
- Write artifacts during each stage; do not wait until the end to backfill from memory.
- Treat the discussion folder as external memory and keep it current after each meaningful step.
- Every major claim should point to inspected files, docs, or explicit reasoning.
- Prefer concrete tradeoffs and migration thinking over vague advice.
- Feed sub-agents the prior stage folders as defined in the context feeding table.

## Required Structure (Stages 0–4)

```
.docs/devPlan/AI_dialog/<topic>/
├── 00_index.md
├── 0_scoping/
│   └── user_demand.md
├── 1_exploration/
│   ├── evidence_index.md
│   ├── architecture_map.md
│   ├── constraints_and_invariants.md
│   ├── open_questions.md
│   └── orchestrator_log.md
├── 2_analysis/
│   ├── agent1.md ... agentN.md
│   └── synthesis.md
├── 3_discussion/
│   ├── round_1/
│   │   ├── agent1.md ... agent8.md
│   │   └── round_summary.md
│   ├── round_2/
│   │   ├── agent1.md ... agent8.md
│   │   └── round_summary.md
│   └── round_3/
│       ├── agent1.md ... agent8.md
│       └── round_summary.md
└── 4_proposals/
    ├── proposal1.md ... proposal5.md
    ├── proposal_matrix.md
    ├── review1.md ... review3.md
    ├── comparison_table.md
    └── decision_log.md
```

Before doing anything else, create `0_scoping/user_demand.md` and `00_index.md`.

`user_demand.md` should capture:
- original user request
- explicit constraints
- decision principles
- output expectations
- user-stated assumptions

`00_index.md` should track:
- topic, date, objective
- folder map
- stage status (update after each stage)
- final chosen proposal and why it won

## Stage 0. Scoping

Create `0_scoping/`. This stage is always fast and mandatory. Write `user_demand.md`.

## Stage 1. Primary Exploration

Create `1_exploration/` before spawning agents. This stage is mandatory. Use the files as follows:

- `evidence_index.md`: evidence IDs such as `E01`, source path, concrete finding, why it matters, and whether it is direct or inferred.
- `architecture_map.md`: current structure, module boundaries, data flow, control flow, ownership, and coupling hotspots.
- `constraints_and_invariants.md`: hard constraints, soft preferences, migration constraints, performance or compatibility constraints, and unresolved constraints.
- `open_questions.md`: unresolved issues, why they matter, what was explored, whether they block planning, and any provisional assumptions.
- `orchestrator_log.md`: chronological exploration log, hypotheses, confirmations or rejections, and why the next files or docs were inspected.

Gate: Do not proceed to Stage 2 until:
- the core architecture is understandable
- the evidence index is non-trivial
- major constraints are captured
- open questions are separated from resolved facts

## Stage 2. Agent Analysis

Spawn agents in parallel using the Agent tool for isolated context separation. Each sub-agent receives:
- a persona prompt defining its stance and planning angle
- full content of `00_index.md`, `0_scoping/`, and `1_exploration/` (per context feeding table)
- access to read source files for evidence verification

Spawn 8 agents (5 aggressive, 2 neutral, 1 conservative). Create `2_analysis/`. Each `agentN.md` should be a real planning memo, not a short opinion. It should include:
- identity, stance, and planning angle
- executive thesis
- current system model
- core diagnosis and root cause tree
- evidence used, with Stage 1 evidence IDs
- assumptions
- target direction or end-state architecture
- phase-by-phase plan
- migration seams, rollback points, or containment notes
- expected benefits
- risks and tradeoffs
- rejected alternatives
- open questions
- confidence level

After all analyses are complete, create `2_analysis/synthesis.md` summarizing:
- major agreements
- major disagreements
- strongest aggressive ideas
- strongest neutral ideas
- strongest conservative concerns
- unresolved questions before cross-discussion

## Stage 3. Cross-Discussion

Run 3 rounds of cross-discussion across all 8 agents. In each round, each agent should:
- read the latest peer analyses
- critique weak logic, blind spots, hidden cost, and architectural risk
- explicitly adopt or reject useful points
- update its own position
- explain what changed and why

Create `3_discussion/round_1/`, `3_discussion/round_2/`, and `3_discussion/round_3/`. Each round folder must contain `agent1.md` through `agent8.md` and `round_summary.md`.

Each round `agentN.md` should include:
- prior position summary
- peer critique
- adopted points
- rejected points and why
- weak assumptions or newly identified risks
- updated position
- what changed from the previous round
- unresolved items
- confidence shift

Each `round_summary.md` should record:
- where consensus increased
- what disagreements remain
- which assumptions were challenged or discarded
- which directions gained or lost support
- what the next round must resolve

## Stage 4. Proposals, Review & Decision

Create `4_proposals/`.

### 4a. Proposal Generation

Synthesize 5 implementation candidates (3 aggressive, 1 neutral, 1 conservative). Proposal input, per the context feeding table: `00_index.md`, `0_scoping/`, `1_exploration/`, and `3_discussion/round_3/`.

Each proposal should cover:
- problem framing and target outcome
- why this proposal exists
- target architecture and key structural changes
- phase-by-phase roadmap
- dependency and ownership impact
- migration strategy
- validation and rollback strategy
- pros, cons, and risks
- implementation complexity and migration cost
- long-term maintainability impact
- debt retired and debt intentionally left
- conditions under which the proposal should not be chosen

`proposal_matrix.md` should compare all five proposals on: structural value, delivery risk, migration cost, reversibility, testing burden, maintainability, implementation speed, dependency on uncertain assumptions, likely failure modes, and best-fit use case.

### 4b. Proposal Review

Assign 3 reviewers. Each `reviewN.md` should include:
- evaluation method
- per-proposal critique
- strongest proposal
- weakest proposal
- hidden assumptions
- likely failure modes
- what would change the recommendation
- final recommendation

`comparison_table.md` should use a consistent rubric: structural payoff, rollout safety, migration complexity, reversibility, testability, maintainability, cost of being wrong, time to first useful result, and long-term flexibility.

### 4c. Decision

`decision_log.md` should record:
- the chosen proposal and why it was chosen
- what was borrowed from other proposals
- which attractive ideas were deliberately rejected
- the final tradeoff logic
- remaining risks
- recommended phase breakdown

Update `00_index.md` with the final decision.

## Quality Bar

Prefer:
- evidence-backed claims
- explicit tradeoffs and uncertainty
- root-cause analysis over symptom cleanup
- concrete migration thinking
- phased, reviewable, and reversible rollout

Avoid:
- shallow summaries
- empty architecture buzzwords
- repeating the same point without refinement
- pretending consensus exists when important disagreement remains
- hiding migration cost
- vague phases such as "clean up later"

## Decision Principles

- Prefer fundamental simplification over superficial patching when the structural payoff is real.
- Prefer long-term maintainability and clarity over temporary convenience.
- Do not choose aggressive options just for novelty; they must be logically justified.
- Make the plan specific, actionable, and evidence-backed.
- Prefer phaseable, reviewable, and reversible plans when possible.
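If you want the whole skeleton on disk before Stage 0 even starts, the required structure reduces to a small scaffolding script. A minimal Python sketch — the topic name `26_04_03_demo` and the script itself are illustrative additions, not part of the original prompt:

```python
from pathlib import Path

# Skeleton of the planning tree from "Required Structure (Stages 0-4)".
# The topic name below is a made-up example.
ROOT = Path(".docs/devPlan/AI_dialog/26_04_03_demo")

FILES = [
    "00_index.md",
    "0_scoping/user_demand.md",
    "1_exploration/evidence_index.md",
    "1_exploration/architecture_map.md",
    "1_exploration/constraints_and_invariants.md",
    "1_exploration/open_questions.md",
    "1_exploration/orchestrator_log.md",
    "2_analysis/synthesis.md",
    "4_proposals/proposal_matrix.md",
    "4_proposals/comparison_table.md",
    "4_proposals/decision_log.md",
]
# Agent, round, proposal, and review files are numbered, so generate them.
FILES += [f"2_analysis/agent{i}.md" for i in range(1, 9)]
for r in (1, 2, 3):
    FILES += [f"3_discussion/round_{r}/agent{i}.md" for i in range(1, 9)]
    FILES.append(f"3_discussion/round_{r}/round_summary.md")
FILES += [f"4_proposals/proposal{i}.md" for i in range(1, 6)]
FILES += [f"4_proposals/review{i}.md" for i in range(1, 4)]

# Create every parent directory, then an empty placeholder file.
for rel in FILES:
    path = ROOT / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.touch()
```

Pre-creating the tree just saves the orchestrator a few turns; the prompt as written has the agents create folders stage by stage, which of course burns more tokens — arguably the point.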
Then my next message is:

Read "the directory you just created".
1. fact check;
2. tell me what the main content of the plan is, and use ASCII diagrams to walk me through the plan's main changes;
3. create 5_planning/phase_plan.md under the folder; fine-grind the proposal into a phased implementation plan. For each phase:

## Phase N: <title>
- **Objective:** one sentence
- **Scope:** files / modules / layers touched
- **Changes:** numbered list of concrete changes
- **Validation:** how to verify (build, test, manual check)
- **Exit criteria:** what must be true before next phase
You've really got the eight-legged essay formula down.
Impressive. Now they count lines of AI-generated code — a real Great Leap Forward. Feels like everyone is just generating md files, casually hitting 70–80k lines of code a month.
Go find some "Three Departments and Six Ministries" architecture online, or build your own team, from product manager to coder to tester, and let them spar with each other. When you're on call you can also spin up Claude to read metrics and analyze incidents.
Copilot bills per request anyway, so I'm going to max every one of them out.
Grand-scale alchemy.
For every problem, put up agent teams to investigate and work in parallel. Writing code doesn't use many tokens; if you want to burn them, you have to make it think.
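The "agent teams working in parallel" idea is, mechanically, just a fan-out over personas. A sketch under stated assumptions: `run_agent` below is a hypothetical stand-in (a real version would call the Agent tool or an LLM API), and the 5/2/1 persona split is taken from the planning prompt above.

```python
from concurrent.futures import ThreadPoolExecutor

# The 8-agent split from the planning prompt: 5 aggressive, 2 neutral, 1 conservative.
PERSONAS = ["aggressive"] * 5 + ["neutral"] * 2 + ["conservative"]

def run_agent(persona: str, question: str) -> str:
    # Hypothetical stand-in: a real implementation would invoke the Agent
    # tool or an LLM API here. This just formats a fake memo header.
    return f"[{persona}] analysis of: {question}"

def fan_out(question: str) -> list[str]:
    # Burn tokens the expensive way: every persona "thinks" concurrently,
    # and results come back in persona order.
    with ThreadPoolExecutor(max_workers=len(PERSONAS)) as pool:
        futures = [pool.submit(run_agent, p, question) for p in PERSONAS]
        return [f.result() for f in futures]

memos = fan_out("Should we rename this variable?")
```

Threads are enough here because the (hypothetical) agent calls would be I/O-bound network requests; the cost scales linearly with the persona count, which is exactly the joke.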