In this issue of the Tech Digest: algorithms process cough recordings to diagnose COVID-19, San Francisco turns to IoT to go zero waste, and Google’s medical AI tool receives backlash from scientists. All that and more in a 10-minute read.
Deep learning is a behind-the-scenes technology that curates the content you see on social media, surfaces relevant search results on Google, and helps radiologists spot diseases in medical images. Until recently, smart algorithms required powerful hardware and robust infrastructure to function properly. But now there’s evidence deep learning could work on smaller devices, such as thermostats and fitness trackers. At the NeurIPS conference taking place December 6-12, a research team from MIT will unveil MCUNet, a system that creates deep learning networks for low-power, low-memory devices. The solution uses two components to run neural networks on lightweight microcontrollers. The first is TinyEngine, a compact inference engine that manages a microcontroller’s scarce resources much like an operating system does. The second is TinyNAS, a custom neural architecture search method that automatically adjusts the size of a neural network to the computing power and memory of a given microcontroller. And because inference happens on the device itself, MCUNet could also help IoT vendors secure connected devices and reduce the amount of data traversing the network.
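To make the idea concrete, here is a minimal sketch of what a TinyNAS-style, budget-constrained architecture search might look like. Everything below, including the cost models, memory budgets, and scoring function, is an illustrative placeholder rather than anything MIT has published; a real search would evaluate candidates by actually training and profiling them.

```python
import itertools

# Toy memory-constrained architecture search, loosely inspired by the
# TinyNAS idea: prune the search space to what fits a device's budget,
# then pick the most capable candidate that still fits.

FLASH_BUDGET_KB = 1024   # storage available for model weights (assumed)
SRAM_BUDGET_KB = 320     # peak activation memory (assumed)

def estimate_flash_kb(width_mult):
    """Toy cost model: parameter storage grows with width squared."""
    return 2000 * width_mult ** 2

def estimate_sram_kb(width_mult, resolution):
    """Toy cost model: activations grow with width and resolution squared."""
    return width_mult * (resolution / 224) ** 2 * 1500

def candidate_score(width_mult, resolution):
    """Crude proxy for accuracy: wider networks and larger inputs score higher."""
    return width_mult * resolution

widths = [0.25, 0.35, 0.5, 0.75, 1.0]
resolutions = [96, 128, 160, 192, 224]

# Keep only candidates that fit both the flash and SRAM budgets.
feasible = [
    (w, r) for w, r in itertools.product(widths, resolutions)
    if estimate_flash_kb(w) <= FLASH_BUDGET_KB
    and estimate_sram_kb(w, r) <= SRAM_BUDGET_KB
]

best = max(feasible, key=lambda wr: candidate_score(*wr))
print(f"Selected width multiplier {best[0]}, input resolution {best[1]}")
```

The key design point this illustrates: instead of searching one giant space and hoping the winner fits, the hardware budget shapes the search space first, so every candidate considered is deployable on the target microcontroller.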
To control the pandemic, we need faster and more reliable tests, and here’s where technology comes to the rescue. In China, physicians have applied machine learning algorithms to analyze CT scans of patients with suspected pneumonia. Technology companies are developing thermal imaging and face recognition systems that help detect people with elevated body temperature, as well as individuals not wearing face masks. And the University of Oklahoma has shown that AI can differentiate a COVID-19 cough from coughs caused by other infections. The concept was further developed by MIT, which used the largest-ever database of cough recordings to train an AI model that recognized 98.5% of coughs from confirmed COVID-19 patients, including 100% of coughs from asymptomatic carriers. If approved by the FDA and other regulatory agencies, such models could find a home in apps that allow hospitals to screen patients for COVID-19 symptoms at scale, and at no cost.
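For a sense of how such a screening pipeline might be wired together, here is a hedged sketch of a cough classifier built from off-the-shelf parts. The MFCC features, random forest, file names, and labels are all illustrative assumptions; MIT’s actual model is a purpose-built neural network trained on tens of thousands of recordings.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def cough_features(wav_path):
    """Summarize a recording as the mean of its MFCC frames.
    MFCCs are a common baseline feature for audio classification."""
    audio, sample_rate = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dim vector per recording

# Placeholder dataset: label 1 = cough from a confirmed COVID-19 case,
# label 0 = cough from another cause. Real training needs far more data.
train_paths = ["cough_001.wav", "cough_002.wav"]
train_labels = [1, 0]

X = np.stack([cough_features(p) for p in train_paths])
clf = RandomForestClassifier(n_estimators=200).fit(X, train_labels)

# Screening a new recording:
prob = clf.predict_proba([cough_features("new_cough.wav")])[0][1]
print(f"Estimated probability of COVID-19 cough: {prob:.2f}")
```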
Self-driving cars have long been able to “see” the world around them and avoid collisions, thanks to artificial intelligence combined with lidar, radar, and ultrasonic sensors. An Austin-based startup called Strap Technologies has harnessed the same set of technologies to replace the white cane with a chest-worn device for blind individuals. Strap’s gadget, which weighs less than half a pound, senses how close obstacles such as walls, vehicles, and other people are to the wearer, then warns of potential dangers through vibrations. According to Diego Roel, Strap’s CEO and founder, the wearable allows for more precise guidance than white canes and mobile apps like Soundscape.
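Strap hasn’t published its firmware, so the following is purely illustrative, but the core sense-and-vibrate loop is easy to imagine. The sketch assumes hypothetical read_distance_cm() and set_vibration() hardware helpers and maps obstacle distance to haptic intensity.

```python
import time

ALERT_DISTANCE_CM = 200   # start warning within two meters (assumed)
URGENT_DISTANCE_CM = 50   # full-strength vibration inside half a meter (assumed)

def vibration_strength(distance_cm):
    """Map obstacle distance to a 0.0-1.0 vibration intensity."""
    if distance_cm >= ALERT_DISTANCE_CM:
        return 0.0  # nothing nearby, stay silent
    if distance_cm <= URGENT_DISTANCE_CM:
        return 1.0  # obstacle is very close, vibrate at full strength
    # Linearly ramp up intensity as the obstacle gets closer.
    span = ALERT_DISTANCE_CM - URGENT_DISTANCE_CM
    return (ALERT_DISTANCE_CM - distance_cm) / span

def haptic_loop(read_distance_cm, set_vibration, hz=20):
    """Poll the range sensor and drive the vibration motor.
    Both callables are hypothetical stand-ins for device drivers."""
    while True:
        set_vibration(vibration_strength(read_distance_cm()))
        time.sleep(1 / hz)
```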
In the first two months of the COVID-19 pandemic, most companies went through two years’ worth of digital transformation. The Internet of Things has played a pivotal role in making businesses more agile and efficient. In his recent article for Forbes, Bernard Marr, one of the UK’s leading technology advisors, identifies several IoT trends to watch in the coming year. While intelligent edge devices and record investments in medical IoT are no-brainers, I was quite surprised to learn how IoT is transforming remote work. With 53 million Alexa speakers installed in US homes and the growing availability of smart sensors like Azure Kinect, 2021 could give us plenty of intelligent tools for meeting scheduling and immersive video conferencing.
Earlier this year, Google unveiled an AI-powered breast cancer screening tool that reportedly identifies malignant tumors in mammograms better than human radiologists do. The scientific community, however, was unimpressed. Benjamin Haibe-Kains, who studies computational genomics at the University of Toronto, was among the 31 scientists who wrote a damning response to the study. The critics claim Google provided so little information about the tool’s code and trials that the study reads more like a promotional campaign for Google’s next “killer app” than a scientific paper. And it looks like Google is not the only company failing to provide the actual data behind the hype. Read the full story on MIT Technology Review to learn why sharing research details and letting others replicate the work is key to AI’s success.
It’s relatively easy to create an AI application that translates texts from modern English into Latin. What’s really hard is building robots that can be trusted to act on their own. Gary Marcus, an interdisciplinary cognitive scientist who openly criticizes commonplace deep learning methods, sat down with ZDNet to discuss the roadblocks developers keep hitting when creating artificial intelligence solutions. Mr. Marcus argues modern AI tools haven’t made much progress since ELIZA, the mid-1960s chatbot that relied on keyword matching to imitate a psychotherapist talking to patients. Instead of matching keywords to canned categories, artificial intelligence should understand what humans are talking about, and uncover the intent behind their words. Here’s what developers could do to achieve this feat.
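To see what “relying on keywords” means in practice, here is a toy ELIZA-style responder. The rules and templates are invented for illustration; the point is that the program matches surface patterns without any grasp of meaning or intent, which is exactly the limitation Marcus describes.

```python
import re

# A toy ELIZA-style responder: it "understands" nothing, it only matches
# keyword patterns and fills in canned templates.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\b(yes|no)\b", re.IGNORECASE),
     "You seem quite sure."),
]

def respond(utterance):
    """Return the first canned template whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no keyword fires

print(respond("I feel anxious about my project"))
# -> "Why do you feel anxious about my project?"
```

Note how the reply parrots the matched text back verbatim, grammatical oddities and all: the system never builds any representation of what the speaker actually means.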
Americans make up just 4% of the world’s population, yet the nation generates up to 12% of all municipal solid waste, which works out to roughly 700,000 tons of trash produced daily. Even though the country is investing heavily in recycling facilities and technology, 65% of that waste ends up in landfills. Recycling rates, however, differ from city to city: in Chicago, it’s barely 8.8%, while San Francisco is hoping to go “zero waste” by the end of this year. Check out our latest blog post to learn how IoT-enabled smart waste management systems help municipal authorities predict container fill levels, optimize garbage truck routes, and improve sorting operations at recycling plants.
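As a taste of how fill-level prediction might work, here is a minimal sketch that extrapolates ultrasonic sensor readings with a linear fit. The readings, dispatch threshold, and model are illustrative assumptions; production systems would fold in pickup schedules, seasonality, and route constraints.

```python
import numpy as np

# Hypothetical sensor readings from one container:
# (hours since last pickup, reported fill level in %)
hours = np.array([0, 6, 12, 18, 24, 30])
fill_pct = np.array([5, 14, 22, 33, 41, 52])

# Fit fill level as a linear function of time elapsed.
slope, intercept = np.polyfit(hours, fill_pct, deg=1)

# Estimate when the container crosses the dispatch threshold,
# so a truck can be routed there just before it overflows.
THRESHOLD = 85  # assumed dispatch threshold in %
hours_until_full = (THRESHOLD - intercept) / slope
print(f"Dispatch a truck in roughly {hours_until_full:.0f} hours")
```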