AI is powerful, but running it is expensive. Researchers at the University of Sydney and the University of California are exploring a cheaper path: networks built from tiny silver wires that could make AI both more affordable and more energy-efficient.
Today’s leading AI systems rely on powerful AI accelerators, which are essentially GPUs with extensive VRAM and no video output. These hardware components come with a hefty price tag, with GPUs like the Nvidia H100 fetching as much as $30,000, not to mention the ongoing power costs.
Leveraging advanced nanotech fabrication techniques, the researchers have designed networks of silver nanowires, each strand being approximately one-thousandth the width of a human hair. These nanowires are arranged randomly, creating a network where they crisscross and interact, mimicking the behavior of synapses in the human brain.
The study, published in Nature Communications, reveals that these silver nanowire networks exhibit brain-like behavior when electrical signals traverse them. The numerous intersections between the wires change in response to electrical impulses in real time, making the networks well suited to online machine learning.
In online learning, data arrives as a continuous stream and the model updates with each incoming sample, eliminating the need to bundle data into the large batches that AI accelerators with extensive RAM rely on. Even at this early stage, the approach has shown promise on core machine learning tasks. The research team converted the MNIST handwritten digit dataset into electrical signals, and the hardware network learned to identify the written numbers. They also tested the network on memory-like tasks, such as recalling sequences of digits, which it performed effectively.
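To make the distinction concrete, here is a minimal software sketch of online learning: a logistic-regression classifier that updates its weights after every single sample from a stream, rather than accumulating a large batch first. This is a generic illustration of the streaming paradigm the article describes, not the nanowire hardware itself; the synthetic two-class data stream is an assumption chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample():
    """Draw one labeled point from a simple two-class stream (hypothetical data)."""
    y = int(rng.integers(0, 2))
    # Each class is a Gaussian blob with a class-dependent mean.
    x = rng.normal(loc=2.0 * y - 1.0, scale=1.0, size=2)
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)   # weights, updated one sample at a time
b = 0.0           # bias
lr = 0.1          # learning rate

# Online learning loop: one sample in, one small weight update out.
# No batch is ever assembled, so memory use stays constant.
for _ in range(2000):
    x, y = make_sample()
    p = sigmoid(w @ x + b)
    grad = p - y          # gradient of the log-loss w.r.t. the logit
    w -= lr * grad * x
    b -= lr * grad

# Evaluate on a fresh stream of held-out samples.
n_test = 500
correct = sum(
    int((sigmoid(w @ x + b) > 0.5) == y)
    for x, y in (make_sample() for _ in range(n_test))
)
print(f"streaming accuracy: {correct / n_test:.2f}")
```

The key property is that each update uses only the current sample, so the memory footprint is independent of how much data has streamed past, which is the advantage the article contrasts against batch-hungry accelerators.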
While it may take some time before nanowire networks can rival high-powered AI accelerators, there are potential applications that do not require the same level of computational intensity.