Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia H100 GPUs to build massive compute infrastructure for AI research and products. By the end of 2024, Meta aims to own 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. The move is part of Meta's push toward artificial general intelligence (AGI), putting it in direct competition with firms like OpenAI and Google DeepMind. AI and computing investments form a key part of Meta's 2024 budget, with AI as its largest single investment area.

  • simple@lemm.ee · 6 months ago

    Who isn’t at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.

    • 31337@sh.itjust.worksOP · 6 months ago

      The equivalent of 600k H100s seems pretty extreme, though. IDK how many OpenAI has access to, but it's estimated they "only" used 25k to train GPT-4. OpenAI has, in the past, claimed that the diminishing returns from simply scaling their model past GPT-4's size probably aren't worth it. So maybe Meta is planning to experiment with new ANN architectures, or planning a mass deployment of models?
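      For scale, here's a quick back-of-envelope on the figures mentioned in the thread. All inputs are rough public estimates, and the $25k per-unit price is an assumption (reported H100 prices vary widely), so treat the outputs as order-of-magnitude only:

      ```python
      # Rough arithmetic on the numbers discussed above.
      # Every input here is an estimate, not a confirmed figure.

      meta_h100_equivalents = 600_000  # Meta's reported total compute, in H100 equivalents
      meta_h100_units = 350_000        # actual H100s Meta reportedly targets by end of 2024
      gpt4_training_gpus = 25_000      # estimated GPU count used to train GPT-4
      unit_price_usd = 25_000          # assumed price per H100 (estimates vary widely)

      fleet_ratio = meta_h100_equivalents / gpt4_training_gpus
      spend_estimate_usd = meta_h100_units * unit_price_usd

      print(f"~{fleet_ratio:.0f}x the estimated GPT-4 training fleet")
      print(f"~${spend_estimate_usd / 1e9:.2f}B for the H100s alone")
      # → ~24x the estimated GPT-4 training fleet
      # → ~$8.75B for the H100s alone
      ```

      That 24x multiple over the estimated GPT-4 training fleet is what makes the "experimenting with new architectures, or mass deployment" question interesting; the spend estimate also lines up with the roughly $9 billion figure in the summary.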

  • Deceptichum@kbin.social · 6 months ago

    I really hope they fail hard and end up dumping these devices onto the consumer second-hand market, because the V100s, while now affordable and flooding the market, are too out of date.