Video
Building Tensormesh: A Conversation with the CEO (Junchen Jiang)
Jan 08, 2026
Junchen Jiang, co-founder and CEO of @Tensormesh, shares why the KV cache, the internal memory of large language models, is emerging as the next big data layer for AI, and how Tensormesh is being built to tackle large-scale inference.
In this conversation, Junchen explains how Tensormesh approaches LLM inference: why performance and cost are often misunderstood, and why memory, specifically the KV cache, sits at the center of the problem.
He also discusses how Tensormesh is being built at the intersection of research, open source, and industry to shape how inference systems are designed, optimized, and scaled.