DARPA’s $1.5 Billion Plan to Revolutionize Chip Development

Rounding up IT and advanced tech-related news impacting government and industry.

Step aside, software: Hardware is having a moment. The Defense Advanced Research Projects Agency doesn’t want the U.S. to fall behind in the semiconductor industry, so it has launched a $1.5 billion, five-year program called the Electronics Resurgence Initiative.

Through ERI, DARPA plans to transform the nation’s chip development, design technology and manufacturing. The effort is a response to Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years. That doubling is now approaching its physical limits, and the military worries the slowdown will stall advances in electronics.
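
To see why the slowdown matters, here is a quick illustration of the doubling described above. The 1971 baseline (roughly 2,300 transistors on Intel’s 4004) is our own reference point, not a figure from the article.

```python
# Project transistor counts under Moore's Law: a doubling every two years.
# The 1971 starting figure (~2,300 transistors, Intel 4004) is illustrative.
def transistors(start_count: float, start_year: int, year: int) -> float:
    """Transistor count after (year - start_year) years of biennial doubling."""
    return start_count * 2 ** ((year - start_year) / 2)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(2300, 1971, year):,.0f}")
# By 2021 the projection reaches tens of billions of transistors, roughly
# where real flagship chips landed before the pace began to falter.
```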

So there’s a need for new designs that keep advancing chip performance while holding down design costs. There’s also concern about growing foreign investment in semiconductor design from countries like China, so it’s no surprise the U.S. wants to stay on top of chip development and invest more in hardware. MIT Technology Review

The Modeling Religion Project Connects AI to Policy

How would a president explore the best ways to integrate thousands of religious refugees into their secular country, while minimizing the risk of violence and working with limited resources? You can’t experiment or test your ideas on actual humans, but you can with virtual people. Enter the Modeling Religion Project: computer models populated with thousands of virtual people, or agents, that interact with one another under the shifting conditions of their artificial environment as their economic circumstances, education and religious beliefs change.

The international team of computer scientists, philosophers and religion scholars is building these models and agents to mimic the beliefs and characteristics of a real country’s population, using that country’s survey data and training the model on a set of social-science rules about how humans interact under pressure.

For example, they can test whether investing in education helps a country absorb 50,000 incoming refugees by watching how the artificial society changes. This could give politicians an empirical AI tool for predicting policy outcomes, letting them assess competing policy options and pick the best one. The Atlantic
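
For a sense of how agent-based models of this kind work, here is a minimal, invented sketch: agents carry education and economic-stress attributes, interact over time, and we compare an aggregate tension measure under two levels of education investment. Every rule and number below is made up for illustration; none of it comes from the project’s actual model.

```python
import random

class Agent:
    """A toy agent with two attributes on a 0..1 scale."""
    def __init__(self):
        self.education = random.random()
        self.economic_stress = random.random()

    def tension(self) -> float:
        # Invented rule: stress raises tension, education dampens it.
        return max(0.0, self.economic_stress - 0.5 * self.education)

def simulate(n_agents: int, education_investment: float, steps: int = 50) -> float:
    """Run the artificial society and return mean tension at the end."""
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            a.education = min(1.0, a.education + 0.01 * education_investment)
            a.economic_stress = min(1.0, max(0.0,
                a.economic_stress + random.uniform(-0.05, 0.05)))
    return sum(a.tension() for a in agents) / n_agents

# Compare two policy options on the same artificial society.
random.seed(0)
print("no investment:  ", round(simulate(1000, 0.0), 3))
random.seed(0)
print("with investment:", round(simulate(1000, 1.0), 3))
```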

Current Tech Won’t Make Mars Earth-Like

The hope that Mars is a plausible “planet B” if the Earth can no longer support us may not be as realistic as we think, at least in terms of terraforming: the concept of changing conditions on Mars to make it habitable for Earth life without life-support systems. According to the paper “Inventory of CO2 available for terraforming Mars,” terraforming Mars is not possible with the technologies we have.

To make Mars as habitable as Earth, we would need to raise temperatures, keep water in liquid form and thicken the atmosphere. According to the paper, the only way to do that with current technology is to release greenhouse gases already present on Mars. The only such gas abundant enough to provide the needed warming is carbon dioxide, and even then, there isn’t enough CO2 on Mars to make the planet Earth-like.

There’s only enough accessible CO2 to triple Mars’ atmospheric pressure, but terraforming requires raising pressure far enough that humans can walk around without spacesuits, and tripling supplies only about one-fiftieth of the CO2 needed. On top of that, the CO2 is difficult to access, and releasing it into the atmosphere would take enormous effort. The capabilities needed to do all this just aren’t here yet. Space.com
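
The gap is easier to grasp with rough numbers. Mars’ current surface pressure (about 6 millibars) and Earth’s (about 1,013 millibars) are standard figures we supply here for context; the “triple” and “one-fiftieth” ratios come from the paper.

```python
# Back-of-the-envelope pressure math for the terraforming claim above.
current_mbar = 6.0                  # Mars' surface pressure today (~6 mbar)
achievable_mbar = 3 * current_mbar  # releasing all accessible CO2: ~18 mbar
needed_mbar = 50 * achievable_mbar  # accessible CO2 is ~1/50 of what's required

print(f"achievable: ~{achievable_mbar:.0f} mbar")
print(f"needed:     ~{needed_mbar:.0f} mbar (Earth sea level: ~1,013 mbar)")
```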

Are These Hands Robot or Human?

Researchers in the world’s top artificial intelligence labs are getting closer to making robotic hands that can do what human hands can. OpenAI, an AI lab founded by Elon Musk and Sam Altman, has a robotic hand called Dactyl with mechanical fingers that bend and straighten like a real hand’s. Given an alphabet block, Dactyl can show you the specific letter you ask for (such as a green N), as well as spin and flip the toy in nimble ways. This may sound simple, but it’s a big feat for autonomous machines, especially considering Dactyl learned the task mostly on its own. And by using the same mathematical methods, researchers think they can train robotic hands to accomplish more complex tasks as well.
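
The “learned mostly on its own” part refers to learning by trial and error: OpenAI trained Dactyl with large-scale reinforcement learning in simulation. The toy sketch below captures only the general flavor of trial-and-error search, with an invented objective; it is nothing like the actual training setup.

```python
import random

# Toy trial-and-error learner: perturb parameters at random and keep changes
# that improve a reward signal. The hidden "target grasp" objective is invented.
def reward(params: list[float]) -> float:
    target = [0.3, -0.7, 0.5]  # pretend ideal finger positions
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def train(steps: int = 2000, noise: float = 0.1) -> list[float]:
    params = [0.0, 0.0, 0.0]
    best = reward(params)
    for _ in range(steps):
        candidate = [p + random.gauss(0, noise) for p in params]
        if (r := reward(candidate)) > best:
            params, best = candidate, r
    return params

random.seed(1)
print(train())  # lands near the hidden target through trial and error alone
```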

For comparison, researchers at the University of California, Berkeley’s Autolab have a “gripper,” a two-fingered machine that can pick up items like a pair of pliers and sort them into bins. It’s easier to control than a five-fingered machine, and building its software is simpler too. But the gripper can’t always handle objects whose shapes are unlike anything it has seen before.

So, over the past few years, other Autolab researchers created the “picker,” which pairs a gripper with a suction cup and can pick up all sorts of random objects. The picker benefits from dramatic advances in machine learning and is learning tasks on its own. The New York Times

Amazon’s Facial Recognition Matched Congress Members to Criminals

Oops! Guess there’s still some work to be done in the facial recognition world. The American Civil Liberties Union tested Amazon’s facial recognition system by scanning the faces of all 535 members of Congress against 25,000 public mugshots using Amazon’s Rekognition application programming interface. The results were not great: the system generated 28 false matches, pairing members of Congress with criminal mugshots. The ACLU says this raises concerns about police using the same system.

An Amazon spokesperson told The Verge the false matches came down to poor calibration: the ACLU ran its tests at Rekognition’s default confidence threshold of 80 percent, whereas Amazon recommends at least 95 percent for law enforcement applications, where a false identification can have far more serious consequences.
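
The threshold in question is a real parameter of the Rekognition API. The sketch below uses the actual CompareFaces call via boto3; the image bytes and variable names are placeholders, and running it would require AWS credentials.

```python
import boto3

# SimilarityThreshold sets the minimum confidence for a reported face match.
# The ACLU's test used the default of 80; Amazon recommends 95+ for policing.
client = boto3.client("rekognition", region_name="us-east-1")

def face_matches(source_bytes: bytes, target_bytes: bytes, threshold: float):
    """Return Rekognition's face matches at the given similarity threshold."""
    resp = client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return resp["FaceMatches"]

# The same pair of photos can "match" at 80 percent but not at 95:
# face_matches(member_photo, mugshot, threshold=80.0)
# face_matches(member_photo, mugshot, threshold=95.0)
```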

But concerns remain: the ACLU’s experiment was designed to mirror Rekognition’s partnership with the Washington County Sheriff’s Department in Oregon, where images are compared against 300,000 mugshots. The results also showed signs of racial bias, consistent with the National Institute of Standards and Technology’s own Facial Recognition Vendor Test, which found consistently higher error rates for women and African-Americans. Perhaps we’re not quite there yet. The Verge