Giant Computer Chip

Artificial intelligence (AI) requires a level of computing power that many processors simply aren’t capable of handling. Analyzing vast amounts of data and graphics using complex algorithms typically takes multiple processor chips working together. But moving information between chips can be slow, limiting how fast these systems can operate.

Some hardware manufacturers have been looking at widening the connections between chips in order to speed up data transfer. But a relatively new tech startup, Cerebras Systems, recently unveiled a different approach. In a field where chips generally keep getting smaller, it has built the world's largest, 56 times larger than any other chip available. Known as the Wafer Scale Engine (WSE), it contains 1.2 trillion transistors and 400,000 AI-optimized cores. The idea is to keep all of the processing on a single chip so that data never has to leave it, letting the system operate faster.
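To see why keeping data on one piece of silicon matters, here is a rough back-of-envelope sketch in Python. The bandwidth and data-size figures are illustrative assumptions chosen only to show the scale of the difference; they are not published Cerebras specifications.

```python
# Toy comparison: moving a batch of intermediate results between chips over a
# typical off-chip link versus keeping them on-die. All numbers are assumed
# for illustration, not measured or vendor-published figures.

def transfer_time_seconds(data_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to move `data_bytes` across a link with the given bandwidth."""
    return data_bytes / bandwidth_bytes_per_s

activations = 100e6        # 100 MB of intermediate results (assumed)
inter_chip_bw = 50e9       # ~50 GB/s off-chip link (assumed)
on_chip_bw = 10e12         # ~10 TB/s on-die fabric (assumed)

off_chip = transfer_time_seconds(activations, inter_chip_bw)
on_chip = transfer_time_seconds(activations, on_chip_bw)

print(f"off-chip transfer: {off_chip * 1e3:.2f} ms")   # ~2.00 ms
print(f"on-chip transfer:  {on_chip * 1e3:.4f} ms")    # ~0.0100 ms
```

Even with generous assumptions for the off-chip link, the on-die path is orders of magnitude faster, which is the advantage a single wafer-sized chip is meant to capture.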

From a quality standpoint, it's not as simple as it sounds. One of the reasons chip makers keep dies small is that etching circuits into silicon is an error-prone process; a smaller chip presents less area for defects to land on, which means higher yields. The developers get around this problem by dividing the chip, which spans nearly an entire 12-inch silicon wafer, into many small cores and assuming that some of them will not work. Defective cores can simply be bypassed by routing information around them.
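The bypass idea can be illustrated with a minimal sketch: model the chip as a grid of cores, mark a few as defective, and search for a route that avoids them. The grid size and the breadth-first routing scheme below are illustrative assumptions, not Cerebras's actual interconnect design.

```python
# Minimal sketch of routing around defective cores on a 2D grid of cores.
# The 5x5 grid and BFS routing are illustrative assumptions only.
from collections import deque

def route(grid_w, grid_h, defective, src, dst):
    """Breadth-first search for a path of working cores from src to dst."""
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == dst:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in defective and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), path + [(nx, ny)]))
    return None  # no path of working cores exists

# A 5x5 tile with two dead cores: messages detour around the bad spots.
path = route(5, 5, defective={(2, 1), (2, 2)}, src=(0, 2), dst=(4, 2))
print(path)
```

As long as enough neighboring cores still work, a flaw in one spot costs a slightly longer route rather than the whole chip.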

The result, its developers say, is a chip 100 to 1,000 times faster than existing hardware. The new chips would be installed in large data-processing centers to support AI for a variety of applications, including self-driving vehicles, surveillance systems, and autonomous weapons.

For information: Cerebras Systems, 175 South San Antonio Road, Los Altos, CA 94022; Web site: https://www.cerebras.net/