New AI-first hardware
By burning the transformer architecture into their chips, Etched is creating the world’s most powerful servers for transformer inference.
Look at these specs and speed improvements 🤯
→ Only one core
→ Fully open-source software stack
→ Scalable to 100T-param models
→ Beam search and MCTS decoding (see the beam-search sketch after this list)
→ 144 GB HBM3E per chip
→ MoE and transformer variants
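Beam search keeps the k highest-scoring partial sequences at each decoding step instead of greedily committing to a single token. Here is a minimal, self-contained sketch of the idea; the toy model, vocabulary, and names (`toy_next_token_logprobs`, `VOCAB`) are hypothetical placeholders and illustrate only the decoding strategy, not Etched's hardware or software stack.

```python
# Minimal beam-search decoding over a hypothetical toy next-token model.
# Everything here is an illustrative stand-in, not Etched's API.
import math

VOCAB = ["<eos>", "the", "chip", "runs", "transformers", "fast"]

def toy_next_token_logprobs(prefix):
    # Hypothetical scoring: mildly penalizes length, repeats, and later tokens.
    scores = [(-len(prefix) * 0.1) - (0.5 if tok in prefix else 0.0) - i * 0.3
              for i, tok in enumerate(VOCAB)]
    norm = math.log(sum(math.exp(s) for s in scores))
    return [s - norm for s in scores]  # normalized log-probabilities

def beam_search(beam_width=3, max_len=6):
    # Each beam entry is (cumulative log-probability, token sequence).
    beams = [(0.0, [])]
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((logp, seq))  # finished sequences carry over
                continue
            for tok, tok_logp in zip(VOCAB, toy_next_token_logprobs(seq)):
                candidates.append((logp + tok_logp, seq + [tok]))
        # Keep only the top-k partial sequences at each step.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return beams

if __name__ == "__main__":
    for logp, seq in beam_search():
        print(f"{logp:7.3f}  {' '.join(seq)}")
```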
etched.ai/