One of the hottest topics of 2019 has undoubtedly been artificial intelligence, or AI. Worldwide spending on AI systems is forecast to exceed $37 billion by the end of 2019, an increase of 44% over 2018, and to skyrocket to an estimated $97.9 billion in 2023. With consumers already integrating AI into their lives through Apple's Siri, Amazon's Alexa and even Microsoft's Cortana, it's easy to see the practical applications of AI in our lives today. But what about tomorrow? With a market this large, tech companies have been funneling money into R&D projects for bigger and better AI chips to continuously push the limits of what AI can do.
Instead of breaking this down into heavy technical jargon and specs that would put most people to sleep, I'm going to look at three things: the number of cores, the clock speed, and the transistor count. These three specs will give us a good sense of what's going on.
As a benchmark, we'll talk about the chip Apple put in its newest iPhones this year. Apple released the A13 Bionic chip, which contains two high-performance cores that can operate at 2.65 GHz and four energy-efficient cores. The A13 is capable of a vast range of tasks, including facial recognition, speech recognition and predictive thinking, just to name a few. The A13 contains 8.5 billion transistors, and the whole package is about the size of a half dollar coin. For comparison, the latest Intel processor, the i9, only has 1.736 billion transistors. I'm sure 8.5 billion sounds like a lot, and it is compared to past chips; but is 8.5 billion really that many compared to other modern AI chips?
Nvidia, a leading semiconductor company popular for the graphics processors used in modern PCs, has held the title for 'biggest chip' since 2017. Nvidia's flagship GPU, the Tesla V100, contains 640 Tensor Cores alongside 5,120 CUDA cores. It also comes loaded with 21.1 billion transistors, and the chip is roughly 4x6 inches. That is crazy to think about. Imagine what kind of computing power this chip has. It is capable of machine learning, natural language processing, social and general intelligence as well as advanced motion and manipulation. If the A13 is a Corvette, then the Tesla V100 is certainly a Lamborghini; but is 21.1 billion the most transistors we can stuff in a chip?
I know I said that Nvidia's chip was the largest in the world, and it was; that is, until just last week. Cerebras Systems looks at both of these chips and says simply, "hold my beer." The Cerebras WSE (Wafer Scale Engine) is a new breed of chip unlike anything the world has ever seen before. This behemoth comes packed with 400,000 AI-optimized cores. It also comes loaded with 1.2 trillion transistors, and the chip itself is the size of an iPad Pro. For the more tech-savvy people, the system built around it occupies 15 rack units. Suffice it to say the possibilities with this chip are endless. I'm going to get a little nerdy with some extra specs on this chip because of just how mind blowing it is. This one device literally replaces racks of GPUs.
The Cerebras WSE has:

- 400,000 AI-optimized cores
- 1.2 trillion transistors
- 18 GB of on-chip memory
- 46,225 square millimeters of silicon
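To put the scale of these chips in perspective, here's a quick back-of-the-envelope comparison in Python, using only the transistor counts quoted above (the chip names and figures are the ones from this article, nothing more):

```python
# Transistor counts as quoted in this article.
chips = {
    "Apple A13": 8.5e9,          # 8.5 billion
    "Nvidia Tesla V100": 21.1e9, # 21.1 billion
    "Cerebras WSE": 1.2e12,      # 1.2 trillion
}

# Express each chip as a multiple of the A13 baseline.
baseline = chips["Apple A13"]
for name, transistors in chips.items():
    ratio = transistors / baseline
    print(f"{name}: {transistors:,.0f} transistors ({ratio:.0f}x the A13)")
```

Running the numbers shows the WSE carries roughly 141 times the transistors of the A13, and about 57 times the V100, which is why a single WSE can stand in for racks of GPUs.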
I have no idea what the future holds or what kind of AI is coming, but with the untapped potential of chips like the Cerebras WSE already in existence, the future is bright.