TMT BREAKOUT
JAN 07, 2026 ∙ PAID

The AI Vibes are back! QQQs +88bps, led by semis, particularly the sectors at the top of the pecking order, which remains similar to where we last left off in December: DRAM > Optical > HDDs > NAND > Power > ASICs > GPUs > Networking (probably should throw semicap in there right after NAND…). Big moves in these stocks: SNDK +28%; WDC +17%; STX +14%; LITE +11%; MU +10%, with all of these sitting at ATHs; looking at the charts, it's hard to imagine there was any wobbling in AI vibes over the last several months. The conversations relayed to me at CES sounded similar: everyone loves memory (I heard $60-$70+ EPS numbers being thrown around for MU, up from the $50+ I was hearing from uber bulls in December), everyone wants analog now, everyone's new love is semicap with the inflection to come, and everyone is frustrated with NVDA.

Basically, the sectors with tight supply/demand dynamics and rising prices are where investors want to be - both fast money and more medium/longer-term investors (optical + memory + HDDs the favorite ways to play it). Lots of talk today (as has been the case for the last several months) from investors around the historically cyclical memory/HDD names continuing to act like, and deserving the multiples one would give, a secular grower, as that's what they'll look like over the next few years. As us old tech hands remember, memory/storage names used to get low multiples because (1) pricing was violently cyclical, (2) supply was prone to overbuild, and (3) end demand was heavily PC/phone/consumer. This cycle feels and is different: AI/DC spend that doesn't look
like it will peak until late '27 at the earliest, supply structurally harder to add, players more rational (so far), and earnings tied to higher-value products (HBM, high-end enterprise SSD, differentiated HDD tech like HAMR) vs. commodity bits.

FundaAI made a more nuanced, but directionally similar, argument today in his piece, "Why DRAM and SSD Could Become Two Primary Themes in the AI Infrastructure Cycle," arguing the next AI infrastructure constraint won't be "not enough GPUs," it'll be "not enough context." As Jensen put it: "Context is the new bottleneck — storage must be rearchitected." FundaAI and Jensen's point is that as models run longer conversations, bigger context windows, and more "agentic" multi-step workflows, the system chokes not on raw FLOPS but on where it keeps and how it moves the KV cache (the running "memory" of the interaction). For those of us less technically minded, we'll let ChatGPT break it down:

A simple analogy: picture a chef working on a tiny cutting board (HBM). Every time a customer adds requests—"no onions… actually add onions… now make it vegan… also bring a side"—those notes (context/KV cache) pile onto the board. Eventually the board is covered in sticky notes and the chef can't chop; the expensive chef
The “rearchitecture” is just building a proper kitchen flow: keepthe most immediate notes on the cutting board (HBM), move “hot but not instant”notes to the prep table beside it (DRAM), and store the rest in a nearby filingcabinet/pantry (enterprise SSD). Then you add a fast runner and an organizer (DPU+ network) who fetches and places the right notes at the right moment so the chefkeeps cooking at full speed—meaning better throughput, lower$/token, and lesswasted GPU time. ⼀个简单的类⽐:想象⼀位厨师在极⼩的砧板(HBM)上⼯作。每当顾客增加要求⸺“不要洋葱……还是加洋葱吧……现在改成素⻝……再加⼀份配菜”⸺这些便签(上下⽂/KV缓存)就会堆满砧板。最终,砧板被便利贴盖满,厨师⽆法切菜;这位昂贵的厨师只能闲站着。所谓的“架构重构”只是建⽴⼀套合理的厨房流程:将最紧急的便签留在砧板(HBM)上,将“重要但不即时”的便签移到旁边的备菜台(DRAM),并将其余内容存放在附近的档案柜/储藏室(企业级SSD)。然后,你再增加⼀名跑腿员和⼀名组织者(DPU +⽹络),负责在正确的时刻取放正确的便签,让厨师能全速烹饪⸺这意味着更⾼的吞吐量、更低的单位Token成本,以及更少的GPU时间浪费。 That’s why this plugs directly into the “secular multiple” rerate debate formemory/storage. If “context handling” becomes a standard design requirement for AIracks (not an optional BOM choice), then DRAM and enterprise SSD stop being justcyclical components and start l