AI development will continue its rapid growth, with compute power expected to increase 1000-fold by 2028, according to Mustafa Suleyman, co-founder of DeepMind. He highlights that training data for AI models has expanded by a trillion times since 2010, driven by advances in hardware and software. Nvidia's GPUs have improved eightfold in raw performance since 2020, and new memory technologies like HBM3 triple data bandwidth, enabling faster processing.

Supercomputers now connect over 100,000 GPUs, vastly accelerating AI training: a benchmark run that took 167 minutes on eight GPUs in 2020 now completes in under four minutes. Software improvements halve the compute needed for fixed performance every eight months, reducing deployment costs dramatically. Global AI compute capacity is growing nearly fourfold annually, with forecasts predicting 100 million H100-equivalent GPUs by 2027.

This exponential growth will affect AECM professionals by enabling more advanced AI-driven design, project management, and construction automation tools. Faster AI training cycles mean quicker iterations on models for structural analysis, site planning, and resource optimization. Reduced AI deployment costs will make these technologies more accessible across firms of all sizes. AECM companies should prepare to integrate AI solutions that leverage this compute surge to enhance productivity and innovation.

In the near term, expect continued hardware releases from Nvidia and other chipmakers, expanded supercomputing facilities, and software breakthroughs that further accelerate AI capabilities. By 2030, AI compute power could be 1000 times greater than today, transforming the AECM landscape with smarter, faster, and more cost-effective tools.
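To see how the two growth rates quoted above compound, here is a minimal sketch. The numbers are the article's figures (roughly 4x annual compute growth, and software that halves the compute needed for a fixed result every eight months); treating them as steady exponential rates and multiplying them together is an illustrative assumption, not a forecast from the source.

```python
# Illustrative assumptions taken from the article's figures:
HARDWARE_GROWTH_PER_YEAR = 4.0   # "growing nearly fourfold annually"
SOFTWARE_DOUBLING_MONTHS = 8     # compute needed halves every 8 months

def effective_multiplier(years: float) -> float:
    """Combined effective compute gain over `years`:
    raw hardware growth times algorithmic-efficiency gains."""
    hardware = HARDWARE_GROWTH_PER_YEAR ** years
    software = 2 ** (years * 12 / SOFTWARE_DOUBLING_MONTHS)
    return hardware * software

for years in (1, 2, 4):
    print(f"{years} yr: ~{effective_multiplier(years):,.0f}x")
```

Under these assumptions the effective gain is roughly 11x after one year and over 100x after two, which shows why even modest-sounding per-year rates produce the dramatic multi-year figures the article cites.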