The Nvidia CEO is floating a novel perk to attract talent: tokens.
Speaking Monday at the GPU Technology Conference, Jensen Huang said during his two-hour keynote that he could see a future in which every engineer needs an annual token budget, and that he is willing to provide one.
“They’re going to make a few hundred thousand dollars a year, their base pay,” said Huang of engineers. “I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10X. Of course, we would.”
“It is now one of the recruiting tools in Silicon Valley: How many tokens comes along with my job?” Huang added. “And the reason for that is very clear, because every engineer that has access to tokens will be more productive.”
A token is a tiny piece of text that an AI model reads or writes, usually about the size of part of a word. AI companies use tokens as an economic unit to measure how much computing work a model does: the longer the text, the more tokens it takes to process, so pricing is often quoted as a cost per thousand or per million tokens.
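The arithmetic behind per-token pricing can be sketched in a few lines of Python. The rate below is a made-up illustration, not any vendor's actual price:

```python
def estimate_cost(num_tokens: int, price_per_million: float) -> float:
    """Return the dollar cost of processing num_tokens at a
    per-million-token rate."""
    return num_tokens / 1_000_000 * price_per_million

# A rough rule of thumb: a token is about three-quarters of an
# English word, so a 750-word document is roughly 1,000 tokens.
tokens = 1_000
cost = estimate_cost(tokens, price_per_million=2.50)  # hypothetical $2.50 / 1M tokens
print(f"${cost:.4f}")  # prints $0.0025
```

At hypothetical rates like this, a single document costs a fraction of a cent, which is why token budgets only become meaningful at the scale of an engineer running AI tools continuously.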
Huang became one of the first high-profile CEOs to publicly address the idea of a company token budget. His comments came during a keynote aimed primarily at developers, in which he said that purchase orders for Blackwell and Vera Rubin would reach $1 trillion through 2027 due to their ability to generate more tokens.
Alistair Barr of Business Insider previously reported that Silicon Valley is coming up with new ways to compete for talent beyond the traditional salary, bonus, and equity by turning to AI inference power. Barr wrote that investors are now treating tokens as a “fourth component” of compensation, and some told him they believe a company should clearly list its token budget in its job postings.
Thibault Sottiaux, engineering lead for Codex, OpenAI’s AI coding service, recently wrote on X that AI compute is becoming scarcer and more valuable.
“I am increasingly asked during candidate interviews how much dedicated inference compute they will have to build with Codex,” wrote Sottiaux.
