Algorithms and software seem to dominate today’s AI coverage, but hardware is an integral component, and an increasingly important one, that provides the compute capabilities data-intensive algorithms require. However, there are some hardware obstacles to overcome, according to one DARPA program manager.
The Software-Defined Hardware (SDH) program out of DARPA’s Information Innovation Office (I2O) seeks to find the middle ground between hardware specialization and programmability to optimize algorithmic performance when sifting through the vast amount of data related to modern warfare.
One existing solution to optimize the efficiency of running algorithms on the scale necessary to process massive quantities of data “is to design and fabricate application specific integrated circuits (ASICs),” according to the SDH program webpage. ASICs are customized hardware architectures that maximize the efficiency of one specific algorithm, but they come with drawbacks.
They can cost hundreds of millions of dollars and take a long time to develop, according to the webpage. Furthermore, they can only perform one class of computation.
The high cost of ASICs limits their implementation, but the need to optimize a broader range of data-intensive algorithms still exists. That’s where the SDH program comes into play.
Filling the Gap with SDH
As growth in available compute power slows with the diminishing returns of advances in semiconductor physics and silicon processors, hardware specialization becomes more important for optimizing AI capabilities, I2O Program Manager Wade Shen told attendees at Thursday's DARPA AI Colloquium in Alexandria, Virginia.
But because specializing hardware architecture to individual algorithms, as ASICs do, is costly and narrowly focused, Shen and the SDH program team are seeking a middle ground between hardware specialization and programmability.
“What we want to do in software-defined hardware is to have our cake and eat it too," Shen said. "And the idea is to build processors that allow us to generate things that look specialized at runtime, but allow us to do so in such a way that we can still program them for new algorithms and for a variety of algorithms that we might want to run on those processors."
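As a loose software analogy for the idea Shen describes (illustrative only, not code from the DARPA program), runtime specialization can be pictured as a generic kernel that gets "baked in" for one workload at runtime, then re-specialized for a different workload on the same machinery:

```python
# Illustrative analogy: specialize a generic kernel at runtime for fixed
# parameters, yet remain free to re-specialize it for a new algorithm later.
# This is broadly the specialization/programmability trade-off SDH targets
# in hardware. (Hypothetical example; names are not from the SDH program.)

def make_kernel(weights):
    """Return a dot-product kernel specialized to a fixed weight vector."""
    def kernel(inputs):
        return sum(w * x for w, x in zip(weights, inputs))
    return kernel

# Specialize at "runtime" for one workload...
edge_filter = make_kernel([1, 0, -1])
print(edge_filter([5, 9, 2]))   # 1*5 + 0*9 - 1*2 = 3

# ...then re-specialize the same machinery for a different workload.
smoother = make_kernel([0.25, 0.5, 0.25])
print(smoother([4, 8, 4]))      # 0.25*4 + 0.5*8 + 0.25*4 = 6.0
```

An ASIC would correspond to hard-wiring one of these kernels permanently; the SDH goal, by analogy, is keeping the efficiency of the specialized form while retaining the ability to generate a new one.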
The benefits of the SDH program are many, Shen explained. The cost of implementing SDH is much lower than that of ASICs, while the programmability of SDH processors would make it easier to shift to new or different algorithms rather than being locked into a single algorithm, as with an ASIC.
“If successful, SDH will result in the ability to develop and run data-intensive, data-exploitation algorithms at very low cost and, consequently, enable pervasive use of big-data solutions for a wide range of DOD applications including ISR, predictive logistics, decision support and beyond,” according to the SDH program webpage.
After starting the SDH program about six months ago, Shen and the program team expect to have the first silicon product ready within 18 months.
Shen emphasized the hope that AI developers find SDH technologies useful and “that this kind of hardware improvement will drive new algorithms to be developed in AI that previously couldn’t be developed because of computational limitations."