In August, researchers at the Seattle-based Allen Institute for Artificial Intelligence released an English test that asks computers to finish sentences, and the lab's own AI systems managed only about 60 percent accuracy. Shortly afterward, and independently, Google researchers introduced BERT, a system that learns language by guessing missing words in sentences; it answered those same test questions about as well as humans do, at more than 88 percent accuracy. It marked a big step in AI development, and research organizations continue to improve how computer systems learn language and apply it to tasks like analyzing documents and powering digital assistants. These advances are augmenting human labor in industries from law to medicine and making it possible for machines to carry on actual conversations. On social media, though, the same capability lets bots pose as humans. The New York Times
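To see what that fill-in-the-blank capability looks like in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint; neither is mentioned in the Times piece, and the example sentence is purely illustrative. The model ranks candidate words for a masked position in a sentence, which is the kind of prediction BERT was trained to make.

```python
# Illustrative sketch only (not from the Times piece): BERT-style
# "guess the missing word" prediction, using the Hugging Face
# `transformers` library and the public `bert-base-uncased` checkpoint.
from transformers import pipeline

# Build a fill-mask pipeline; BERT was pretrained to predict tokens
# hidden behind the [MASK] placeholder, the capability described above.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to complete a sentence with one word blanked out,
# then print its top guesses with their confidence scores.
for guess in unmasker("The spacecraft will [MASK] on Mars later this month."):
    print(f"{guess['token_str']:>10}  (score: {guess['score']:.3f})")
```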
Elon Musk’s Space Internet Satellites are a Go
The Federal Communications Commission has approved a plan allowing SpaceX founder and CEO Elon Musk to put 7,000 Starlink internet satellites into orbit, part of SpaceX's effort to deliver affordable high-speed internet access from space. Eventually, SpaceX wants to build a network of 12,000 satellites beaming wireless internet access to Earth, which could help connect developing countries and rural areas. The Starlink program tested its first satellites in February, and the FCC approved the first 4,000 satellites the following month. SpaceX plans to deploy an initial 1,600 satellites over the next few years, and it could take six years or more to build out the full network. The Washington Post
NASA’s Mars Lander will Phone Home from the Red Planet
NASA’s Mars InSight lander will reach its destination on Nov. 26, and when it does, it promises to phone home to let NASA know it arrived safely. But NASA engineers aren’t expecting a detailed chat, and they don’t know exactly when the call will come or whether it will come from InSight itself. Still, the lander can produce simple signals of its own. During descent, it will emit steady radio tones whose frequency changes as the spacecraft parachutes and slows down. Once it lands, InSight will send two more beacon signals, seven minutes apart, at different wavelengths. Engineers are particularly interested in the second of those two signals, because it means the lander is in good condition. They also have a backup plan: three alternative ways of hearing from InSight that rely on other spacecraft. The lander is traveling with the two cubesats of the Mars Cube One (MarCO) project, and two veteran NASA spacecraft orbiting Mars, the Mars Reconnaissance Orbiter and 2001 Mars Odyssey, can also report back. Space.com
National Space Council is Toying with a 2028 Moon Landing
The National Space Council’s Users’ Advisory Group, which brainstorms NASA’s long-term plans, is made up of government and NASA officials and is headed by Vice President Mike Pence. Last week, the group was pitched a 2028 moon mission in response to an eager push from President Trump. The proposal was prepared by NASA Administrator Jim Bridenstine and NASA Associate Administrator for Policy and Strategy Tom Cremins, who presented a timeline for reaching and settling on the moon in the late 2020s. A moon mission is still controversial, though: some members of the National Science and Technology Council argue NASA should focus instead on a Mars mission in the 2030s, and budget restrictions could prevent a moon mission altogether. But Ted Cruz and Pence, who head the council, have direct influence over space mission advisory councils and both support Trump’s moon mission, and the council said private companies would be used to help develop the technology needed. Motherboard
Bots are Using Metadata to Be More Human
A recent report by Data & Society looked into how our metadata created by social media likes, retweets, comments and reactions is being used to fake actual human behavior on social platforms. That’s because everything posted on these sites become metadata that help manipulators and their bots learn to appear human to the algorithms that monitor and police social networks, and according to the report, this problem will get worse as bots continue to mimic social media activity. Manipulators range from companies to political meddlers and repressive governments, and with the amount of citizens getting news on social media, this increasing use of disinformation bots is a serious concern. And these bots are getting their metadata from three stages of accessibility, but it's the macro layer, or the level that platforms have exclusive access to, that’s most harmful.
The report refers to manipulators’ use of this as data crafting, “a collection of practices that create, rely on, or even play with the proliferation of data on social media by engaging with new computational and algorithmic mechanisms of organization and classification.” In short, these bots are good at mimicking people and fooling algorithms. The Verge