In a groundbreaking experiment, scientists have cultivated a miniature brain-like organoid from human stem cells, interfaced it with a computer, and demonstrated its potential as an innovative form of organic machine learning chip. This organoid, known as “Brainoware,” displayed remarkable capabilities in rapid speech recognition and mathematical predictions, showcasing the efficiency of biocomputing compared to traditional silicon chips.

Continue reading… “Living Brain Organoid Hooks Up to Computer, Shows Promise as Organic Machine Learning Chip”
A team of experts from the US Department of Energy’s Argonne National Laboratory has demonstrated the transformative potential of machine learning (ML) in the operation of sodium-cooled fast reactors (SFRs), a cutting-edge class of nuclear reactor. Applying ML to this specialized reactor design aims to improve safety and efficiency, showcasing advancements that could revolutionize power generation and contribute to nuclear waste reduction.
SFRs use liquid sodium as the core coolant, generating electricity efficiently from the splitting of heavy atoms without carbon emissions. While not currently employed commercially in the US, these reactors are seen as a promising avenue for cleaner and more sustainable energy in the future.

Continue reading… “Machine Learning Enhances Nuclear Reactor Operations: Argonne National Laboratory’s Breakthrough”
In Canada, rural and remote communities often rely on satellite connections for internet access. However, these connections are prone to technical glitches that cause repeated service disruptions. Despite advances in technology, bridging the digital divide between rural and urban areas has proven to be a persistent challenge.
A potential solution is on the horizon: a group of researchers from the National Research Council (NRC) and the University of Waterloo in Canada is harnessing the power of machine learning to tackle this long-standing problem.
Identifying Issues Before They Escalate
The team has developed the Multivariate Variance-based Genetic Ensemble Learning Method, which combines various AI-driven techniques to detect abnormalities in satellites and satellite networks before they escalate into significant problems, as detailed in a recent press release.

Continue reading… “Machine Learning Aims to Improve Satellite Internet Connectivity in Canada’s Remote Areas”
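The press release does not spell out the algorithm, but the core idea behind variance-based multivariate anomaly detection can be sketched as follows. This is a hypothetical toy, not the NRC/Waterloo method: fit per-channel baselines from historical telemetry and flag a reading when several channels drift many standard deviations from their means.

```python
# Hypothetical sketch only -- not the actual NRC/Waterloo algorithm.
# Illustrates variance-based anomaly detection on multivariate satellite
# telemetry: flag a reading when enough channels sit far from their
# historical mean, measured in standard deviations.

from statistics import mean, stdev

def fit_baseline(history):
    """history: list of readings, each a list of per-channel values."""
    channels = list(zip(*history))
    return [(mean(c), stdev(c)) for c in channels]

def is_anomalous(reading, baseline, z_thresh=3.0, min_channels=2):
    """True if at least `min_channels` channels exceed z_thresh sigmas."""
    flags = sum(
        abs(value - mu) > z_thresh * sigma
        for value, (mu, sigma) in zip(reading, baseline)
        if sigma > 0
    )
    return flags >= min_channels

# Nominal telemetry: [signal_strength_dB, latency_ms, packet_loss_pct]
history = [[60 + i % 3, 600 + 5 * (i % 4), 0.1 * (i % 2)] for i in range(30)]
baseline = fit_baseline(history)

ok = is_anomalous([61, 605, 0.1], baseline)   # nominal reading
bad = is_anomalous([35, 900, 7.0], baseline)  # degraded link
```

A production system would of course use learned models per channel rather than a fixed z-score, but the goal is the same: catch the drift before it becomes an outage.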
Memories can be elusive for both humans and machines, and understanding why artificial agents experience gaps in their cognitive processes is crucial for the advancement of artificial intelligence (AI). Electrical engineers at The Ohio State University have delved into the impact of “continual learning” on overall performance in AI systems. Continual learning involves training computers to learn a sequence of tasks continuously, using knowledge from earlier tasks to learn new ones more effectively.
However, one major challenge scientists face is overcoming the machine learning equivalent of memory loss, known as “catastrophic forgetting.” As AI agents are trained on successive tasks, they tend to lose information from previous tasks, posing risks as AI becomes more integrated into society. Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at Ohio State, emphasized the importance of preventing these AI systems, such as automated driving applications or robotic systems, from forgetting crucial lessons.

Continue reading… “Bridging the Gap: Ohio State Engineers Tackle Memory Loss in Machine Learning for Lifelong Adaptability”
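Catastrophic forgetting is easy to reproduce in miniature. The toy model below is a hypothetical illustration, not the Ohio State work: a one-parameter linear model is trained by gradient descent on task A, then trained further on task B with no replay of old data, and its fit to task A collapses.

```python
# Toy demonstration of catastrophic forgetting (hypothetical example,
# not the Ohio State method): sequential training on two conflicting
# tasks erases what the model learned first.

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error for y ~ w * x."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (-1.0, 0.5, 1.0)]   # target slope  2
task_b = [(x, -1.0 * x) for x in (-1.0, 0.5, 1.0)]  # target slope -1

w = train(0.0, task_a)
err_a_before = mse(w, task_a)   # near zero: task A is learned
w = train(w, task_b)            # sequential training, no replay
err_a_after = mse(w, task_a)    # large: task A is forgotten
```

Continual-learning methods fight exactly this effect, for example by replaying old examples or penalizing changes to weights that earlier tasks depend on.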
Tiny fragments of space debris pose threats to operating satellites and spacecraft. Researchers are building a platform to track them and predict their movements. Image: NASA
An article published on the ASME (American Society of Mechanical Engineers) website discusses how machine learning can be utilized to track space debris. The author quotes Dr. Moriba Jah, an associate professor of aerospace engineering and engineering mechanics at the University of Texas at Austin, who stresses the significance of tracking space debris. Dr. Jah warns that space debris is becoming an increasingly pressing issue that will compromise our ability to use space in the future.
Dr. Jah further explains that conventional approaches to tracking space debris are insufficient, and machine learning has the potential to significantly enhance our ability to monitor and forecast the movements of objects in space. The author also mentions the work of Dr. Mark Matney, an orbital debris scientist at NASA, who is leading a project to leverage machine learning to track debris in geostationary orbit. Dr. Matney emphasizes the importance of machine learning, stating that “machine learning is going to be essential for helping us stay ahead of the debris problem and protect our valuable space assets.”

Continue reading… “Machine Learning May be Key to Tracking Space Debris”
Classical machine learning (ML) algorithms have proven to be powerful tools for many tasks, including image and speech recognition, natural language processing (NLP), and predictive modeling. However, classical algorithms are limited by the constraints of classical computers and may struggle to handle large and complex data sets or to achieve a high degree of accuracy and precision.
Enter quantum machine learning (QML).
QML combines the power of quantum computation with the predictive power of ML to overcome the limitations of classical algorithms and offer performance improvements. In their article “On the role of entanglement in speeding up quantum computing,” Richard Jozsa and Neil Linden, of the University of Bristol in the UK, write that “QML algorithms promise to provide exponential speedups over their classical counterparts for some of the most common tasks, such as data classification, feature selection, and cluster analysis. In particular, the use of quantum algorithms for supervised and unsupervised learning has the potential to revolutionize machine learning and artificial intelligence.”

Continue reading… “Quantum machine learning (QML) is poised to make the leap in 2023”
The boot-like device uses machine learning to provide support for an individual with mobility problems.
An exoskeleton that uses machine learning to adapt to its wearers’ gait could help make it easier for people with limited mobility to walk.
The exoskeleton, which resembles a motorized boot, is lightweight and allows the wearer to move relatively freely, both increasing their walking speed and reducing the amount of energy they use while moving.
Developed by researchers from Stanford University, it consists of cheap wearable sensors, a motor, and a small Raspberry Pi computer, powered by a rechargeable battery pack worn around the waist. The sensors are embedded into the boot to measure force and motion unobtrusively.

Continue reading… “A robotic exoskeleton adapts to wearers to help them walk faster”
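The general shape of such a sensor-driven assist loop can be sketched as follows. This is a hypothetical toy, not Stanford's actual controller: estimate where the wearer is in the stride (gait phase, 0.0 at heel strike, 1.0 at the next heel strike) from the boot's force and motion sensors, then command a motor torque bump timed to push-off; a learning loop would then tune the bump's timing and size per wearer.

```python
import math

# Hypothetical sketch only -- not Stanford's controller. Shows the typical
# form of an ankle-assist law: zero torque for most of the stride, with a
# smooth half-sine torque bump timed to the push-off phase of gait.

def assist_torque(gait_phase, peak_torque=15.0, onset=0.55, width=0.25):
    """Assist torque (N*m) at a given gait phase in [0, 1]."""
    if not onset <= gait_phase <= onset + width:
        return 0.0
    return peak_torque * math.sin(math.pi * (gait_phase - onset) / width)

# In the real device, machine learning adapts parameters like onset and
# peak_torque to each wearer; here we just sample the fixed profile.
profile = [assist_torque(p / 100) for p in range(101)]
```

The key design point the article describes is the adaptation: rather than one fixed profile, the exoskeleton learns which parameter settings actually speed up and ease each individual wearer's walk.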
The machine-learning tool could help researchers discover entirely new proteins not yet known to science.
UNIVERSITY OF WASHINGTON
A new AI tool could help researchers discover previously unknown proteins and design entirely new ones. Harnessed well, it could unlock the development of more efficient vaccines, accelerate the search for cancer cures, or lead to completely new materials.
Alphabet-owned AI lab DeepMind took the world by surprise in 2020 when it announced AlphaFold, an AI tool that used deep learning to solve one of the “grand challenges” of biology: accurately predicting the shapes of proteins. Proteins are fundamental to life, and understanding their shape is vital to working with them. Earlier this summer DeepMind announced that AlphaFold could now predict the shapes of all proteins known to science.
The new tool, ProteinMPNN, described by a group of researchers from the University of Washington in two papers published in Science today, offers a powerful complement to that technology.
The papers are the latest example of how deep learning is revolutionizing protein design by giving scientists new research tools. Traditionally, researchers engineer proteins by tweaking those that occur in nature, but ProteinMPNN opens up an entirely new universe of possible proteins for researchers to design from scratch.

Continue reading… “An AI that can design new proteins could help unlock new cures and materials”
Fig.1. Overview of the proposed method. An image of the current landscape is acquired by the mobile terminal and sent to the server PC. The server detects the target building and generates a mask. The area to be complemented is set from the mask image, and the input image is automatically altered based on the features around the target area. The output image based on the digital completion is sent to the mobile terminal as a future landscape after demolition to be displayed on the DR display. Credit: Takuya Kikuchi et al.
Scientists at Osaka University have created a machine learning system that can virtually remove buildings from a live view. Using generative adversarial network (GAN) algorithms running on a remote server, the team was able to stream the altered view to a mobile device in real time. This work can help accelerate the process of urban renewal based on community agreement.

Continue reading… “A machine learning system that is capable of virtually removing buildings from a live view”
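The mask-and-fill step of the pipeline (detect the building, mask it, complete the masked area from its surroundings) can be mimicked in miniature. The sketch below is an illustrative stand-in only: the Osaka system uses a GAN for the completion, whereas here masked pixels are just repeatedly replaced by the mean of their neighbours until they blend into the background.

```python
# Illustrative stand-in for GAN-based completion (not the Osaka method):
# a naive neighbour-averaging fill on a grayscale grid. Masked
# ("demolished building") pixels are iteratively replaced by the mean of
# their four neighbours until they match the surroundings.

def fill_masked(image, mask, iterations=50):
    """image: 2D list of floats in [0, 1]; mask: True where pixels are removed."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [img[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        img = nxt
    return img

# A flat 0.8 "sky" with two 0.2 "building" pixels masked out in the middle.
scene = [[0.8] * 5 for _ in range(5)]
scene[2][2] = scene[2][3] = 0.2
mask = [[False] * 5 for _ in range(5)]
mask[2][2] = mask[2][3] = True
result = fill_masked(scene, mask)   # masked pixels converge toward 0.8
```

A GAN earns its place over this kind of smoothing because it can hallucinate plausible texture (sky, trees, neighbouring facades) rather than a uniform blur, which is what makes the live "diminished reality" view convincing.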
The Bow processor has a higher frequency of 1.85 GHz versus 1.35 GHz of its previous version, which came out in 2020.
UK-based AI computer company GraphCore has announced a new chip called Bow, the world’s first Wafer-on-Wafer (WoW) processor. GraphCore claims that the processor will speed up workloads like deep learning by 40 per cent while using 16 per cent less energy than previous-generation processors. GraphCore partnered closely with TSMC to make the Bow IPU.
This is the latest version of GraphCore’s IPU, or Intelligence Processing Unit; the firm had previously released two versions. GraphCore has stated that its superscale Bow Pod 1024 offers up to 350 PetaFLOPS of AI compute, and for users already on GraphCore systems, the new Bow IPU runs the same software without any modifications.

Continue reading… “GraphCore releases new 3D chip that speeds AI by 40%”
DNA damage is constantly occurring in cells, whether from external sources or as a result of internal cellular metabolic reactions and physiological activities. Accurate repair of such damage is critical to avoid the mutations and chromosomal rearrangements linked to diseases including cancer, immunodeficiencies, neurodegeneration, and premature aging.
A team of researchers at Massachusetts General Hospital and the National Cancer Research Centre has identified a way to repair genetic damage and prevent DNA alterations using machine learning techniques.
The researchers state that understanding how DNA lesions originate and are repaired can teach us more about how cancer develops and how to fight it. They therefore hope that their discovery will help create better cancer treatments while also protecting our healthy cells.

Continue reading… “Researchers Use Machine Learning To Repair Genetic Damage”
Gardening is a rewarding activity for both mind and body. Unfortunately, though, most of us can’t find enough time to dedicate to it, as our hectic lifestyles get in the way. That’s where this smart garden robot comes in: it makes sure you don’t have to sow yourself, but just reap the benefits.
Sybil is a small but very capable device with machine learning capabilities. It can autonomously plant, weed, and map your entire garden.

Continue reading… “Machine Learning Bot Can Replace Your Gardener, It Plants and Weeds on Its Own”