Latest

new The goal of ambient AI is to make technology more intuitive and adaptive to human behavior, rather than requiring explicit input or commands from users. This can include smart home devices that adjust lighting and temperature based on user behavior, personalized health and wellness monitoring systems, and intelligent transportation systems that can optimize traffic flow and reduce congestion.techstartups.com, 1d ago
new Noninvasive Sensors for Brain-Machine Interfaces Based on Micropatterned Epitaxial Graphene...Robo Daily, 1d ago
new Drop by this futuristic demo to find out how you can build your very own metaverse – simply by finger scribbling in the air. This bleeding-edge research from the SketchX research lab (University of Surrey) uses the latest AI technologies in the form of deep neural networks, which convert your rough scribbles into realistic 3D objects. Visit Yi-Zhe Song’s (Centre for Vision, Speech and Signal Processing) demo for the chance to wear a VR headset and experience this immersive tech reality first-hand.turing.ac.uk, 8h ago
new ...“All of the quantum computers currently available on Amazon Braket, the quantum computing service of AWS, started in the labs of experimental physicists. Innovation on these complex systems requires constant iteration on device design, fabrication methods, and control techniques. These devices require carefully isolated environments and delicate, complex components to facilitate interactions. The components themselves are often quite expensive, consisting of microwave, laser, and/or refrigeration technologies custom built in the lab or from boutique manufacturers. These factors all contribute to increasing the resources required to build and experiment on quantum devices.HPCwire, 9h ago
new Building artificial systems that see and recognize the world similarly to human visual systems is a key goal of computer vision. Recent advancements in population brain activity measurement, along with improvements in the implementation and design of deep neural network models, have made it possible to directly compare the architectural features of artificial networks to those of biological brains’ latent representations, revealing crucial details about how these systems work. Reconstructing visual images from brain activity, such as that detected by functional magnetic resonance imaging (fMRI), is one of these applications. This is a fascinating but difficult problem because the underlying brain representations are largely unknown, and the sample size typically used for brain data is small.MarkTechPost, 1d ago
new ..., scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen. As this technology develops, researchers say, it could have numerous applications, from exploring how various animal species perceive the world to perhaps one day recording human dreams and aiding communication in people with paralysis.The AI algorithm makes use of information gathered from different regions of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, a systems neuroscientist at Osaka University who worked on the experiment. The system interpreted information from functional magnetic resonance imaging (fMRI) brain scans, which detect changes in blood flow to active regions of the brain. When people look at a photo, the temporal lobes predominantly register information about the contents of the image (people, objects, or scenery), whereas the occipital lobe predominantly registers information about layout and perspective, such as the scale and position of the contents. All of this information is recorded by the fMRI as it captures peaks in brain activity, and these patterns can then be reconverted into an imitation image using AI.slashdot.org, 1d ago
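The decoding step described above — mapping fMRI activity patterns back to image content — is often framed as a regression from voxel responses to image-feature vectors. The sketch below is a hypothetical, minimal illustration of that idea using synthetic data and closed-form ridge regression; dimensions, noise levels, and the linear model are all assumptions, and real pipelines such as Takagi's decode into the latent space of a generative image model rather than raw features.

```python
import numpy as np

# Hypothetical sketch of fMRI-to-image-feature decoding: a linear map from
# voxel activity to image-feature vectors, fit by ridge regression on
# synthetic data. All sizes and values here are made up for illustration.

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 200, 50, 8

W_true = rng.normal(size=(n_voxels, n_features))
X = rng.normal(size=(n_trials, n_voxels))                        # fMRI activity patterns
Y = X @ W_true + 0.1 * rng.normal(size=(n_trials, n_features))   # image features seen

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^-1 X^T Y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ Y)

W = ridge_fit(X, Y)
Y_hat = X @ W  # decoded image-feature estimates

# Decoding quality: correlation between true and predicted feature values
corr = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(round(corr, 2))
```

In an actual system, the decoded feature vector would then condition an image generator to produce the reconstructed picture.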

Latest

new As noted previously, assistants like Siri and Alexa already use speech recognition models similar to Whisper. In the future, there could theoretically be a Whisper-powered virtual assistant that is highly accurate at understanding different languages and accents.Techopedia, 1d ago
new This article, "Deep Deceptiveness," addresses a largely unrecognized class of AI alignment problem: the risk that artificial general intelligence (AGI) will develop deception without explicit intent. The author argues that existing research plans by major AI labs do not sufficiently address this issue. Deceptive behavior can arise from the combination of individually non-deceptive and useful cognitive patterns, making it difficult to train AI against deception without hindering its general intelligence. The challenge lies in understanding the AGI's mind and cognitive patterns to prevent unintended deception. The article suggests that AI alignment researchers should either build an AI whose local goals genuinely do not benefit from deception or develop an AI that never combines its cognitive patterns towards noticing and exploiting the usefulness of deception.lesswrong.com, 1d ago
new Ultrananocrystalline Diamond (UNCD) films were demonstrated as a hermetic, highly biocompatible, bioinert coating for encapsulating the Si microchip of an “Artificial Retina,” implantable on the human retina to restore partial vision to people blinded by retinitis pigmentosa. Ten years of R&D (2000-2010) by a group of scientists, engineers, biologists, medical doctors, and surgeons (four universities, five national laboratories, and a US company, Second Sight) resulted in the Argus II device (currently without the UNCD-coated Si chip, which still needs FDA approval), implanted in hundreds of blind people in the USA and Europe, returning partial vision.Open Access Government, 1d ago

Top

Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products ranging from implantable, wearable medical and imaging devices to bioinformatics and pharmaceutical products for a variety of multinational medical companies. He has led projects and data science teams, and developed algorithms for closed-loop active medical implants (e.g. pacemakers, cochlear and retinal implants) as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression and other illnesses. His experience in healthcare also extends to image processing for Computed Tomography (CT) and iX-Ray (Interventional X-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG and ACC. Recently, his team has been working on cutting-edge natural language processing, developing state-of-the-art models to address healthcare challenges involving textual data.aihwedgesummit.com, 19d ago
...limiting the intelligent systems' area and energy efficiencies. As a possible solution to these problems, memristors, whose resistance can be dynamically reconfigured, not only enable analog in-memory computing to improve the power consumption, latency, and area of neuromorphic chips but also model biological synapses and neurons in a single device for spike-encoded neural networks. Both nonvolatile memristors, whose conductance is retained after removing electrical bias, and volatile memristors, whose conductance relaxes back to the OFF state upon removing the bias after ON switching, have been demonstrated in hardware implementations of artificial neural networks (ANNs) and SNNs. Nonvolatile memristors are mostly used as in-memory computing components in crossbar arrays to accelerate the vector-matrix multiplications (VMMs) in ANNs, while volatile memristors are mainly used to emulate the dynamic behaviors of synapses and neurons in SNNs, which mimic the physics of the human brain and neural system. These breakthroughs in memristor devices lay a solid foundation for neuromorphic systems in analog in-memory computing and brain-dynamics modeling.AIP Publishing, 9d ago
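The crossbar VMM mentioned above can be sketched in a few lines: each memristor's conductance stores one weight, a voltage vector is applied along the rows, and Ohm's law plus Kirchhoff's current law produce the output currents along the columns in a single analog step. The values below are illustrative assumptions, not any specific chip's parameters.

```python
import numpy as np

# Illustrative sketch of a memristor crossbar performing a vector-matrix
# multiply: conductance G[i, j] (siemens) stores a weight, row voltages V
# (volts) are the input, and each column current is I[j] = sum_i V[i]*G[i, j].
# All numbers here are made up for illustration.

G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],   # rows = input lines
              [3.0e-6, 1.0e-6, 2.5e-6]])
V = np.array([0.2, 0.1])                  # input voltages

I = V @ G   # analog VMM: output currents, e.g. I[0] = 0.2*1e-6 + 0.1*3e-6 = 5e-7 A
print(I)
```

The same multiply-accumulate that costs many digital operations in an ANN layer is here a single physical readout, which is why crossbars are attractive as VMM accelerators.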
Co-founded by Musk in 2016, the San Francisco-based Neuralink is working towards improving the brain-machine implant process until the procedure becomes as seamless as Lasik. The Neuralink device can make anyone superhuman by connecting their brains to a computer.techstartups.com, 19d ago
In public comments over the years, Musk has detailed a bold vision for Neuralink: Both disabled and healthy people will pop into neighborhood facilities for speedy surgical insertions of devices with functions ranging from curing obesity, autism, depression or schizophrenia to web-surfing and telepathy. Eventually, Musk has said, such chips will turn humans into cyborgs who can fend off the threat from sentient machines powered by artificial intelligence.GreatGameIndia, 17d ago
It’s called organoid intelligence, or OI, and it uses actual human brain cells to make computing “more brain-like.” OI revolves around using organoids, or clusters of living tissue grown from stem cells that behave similarly to organs, as biological hardware that powers algorithmic systems. The hope—over at Johns Hopkins, at least—is that it’ll facilitate more advanced learning than a conventional computer can, resulting in richer feedback and better decision-making than AI can provide.insidehighered.com, 7d ago
One of the more remarkable advances in neurosurgery is Neuralink’s brain-computer interface device, developed by Elon Musk’s startup Neuralink. This implantable device links our brains to computers and helps paralyzed individuals move again after suffering a spinal cord injury.IoT Worlds, 18d ago

Latest

new The whole biosensor apparatus, which relies on a customized Microsoft HoloLens kit for reading brain waves and issuing commands, is able to process and execute as many as nine commands in just two seconds. Developed with assistance from the country's defense experts, the tech is said to work "outside laboratory settings, anytime, anywhere," ending the role of conventional input devices like keyboards, touch screens, or machine vision gesture recognition. The dry wearable biosensor is a combination of graphene and silicon, which makes it conductive and durable as well as corrosion-resistant. The team behind the innovation also mentions that the biosensor can be deployed in extreme weather conditions.SlashGear, 1d ago
new MechSense could enable engineers to rapidly prototype devices with rotating parts, like turbines or motors, while incorporating sensing directly into the designs. It could be especially useful in creating tangible user interfaces for augmented reality environments, where sensing is critical for tracking a user’s movements and interaction with objects...eeNews Europe, 1d ago
new Understanding the source and network of signals as the brain functions is a central goal of brain research. Now, Carnegie Mellon engineers have created a system for high-density EEG imaging of the origin and path of normal and abnormal brain signals.sciencenewsnet.in, 1d ago
new A microcontroller "controls": it interfaces with things to sense and manipulate something in the real world. A microprocessor "processes": it does computation - math, logic, flow.slashdot.org, 1d ago
new The challenge for the Institute is two-fold. Firstly, it must develop and miniaturise new quantum sensing technologies. “In order to do this, we have to take the lasers and photonics and modulators and control electronics that make up 90% of the atom experiments here on Earth, and work really hard to get all that precision onto small, low-power chips that can be deployed in space,” said Daniel Blumenthal, a UC Santa Barbara professor of electrical and computer engineering, whose expertise lies in quantum photonic integration, optical and communications technologies. He will be working on developing the PICs for the compact chips designed to measure small variations in Earth’s gravity from space. This will involve moving a shaken lattice interferometer structure developed at the University of Colorado down to the chip scale. This type of atomic interferometer sensor uses many lasers and optics to cool and trap the atoms to measure gravity gradients with extremely high sensitivity.electrooptics.com, 2d ago
new The Akida platform has been developed by BrainChip, a company which develops edge AI on-chip processing and learning. Akida uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, and processing data with efficiency, precision, and economy of energy. The 2nd generation of Akida can enhance vision, perception, and predictive applications in markets such as industrial and consumer IoT and personalized healthcare, etc.Electronics For You, 1d ago

Top

...Neuralink in 2016 as a brain-computer interface company. The firm plans to implant chips into human brains, which would allow people to perform tasks using only their mind. The billionaire has said in the past that Neuralink's chips — which are coin-sized devices designed to be implanted in the brain via a surgical robot — could one day do anything from cure paralysis to give people telepathic powers,...Business Insider, 18d ago
...ur brainwave activity can be monitored and modified by neurotechnology. Devices with electrodes placed on the head can record neural signals from the brain and apply low electric current to modulate them. These “wearables” are finding traction not only with consumers who want to track and improve their mental wellness but with companies, governments and militaries for all sorts of other uses. Meanwhile, firms such as Elon Musk’s...the Guardian, 17d ago
Since our brains communicate through electric signals, Neuralink will implant electrodes near the neurons to detect action potentials. Action potentials cause synapses to release neurotransmitters, and the implant may record this activity and decode what the brain intends to do.iTech Post, 18d ago

Latest

new The Agilis Robotics system is designed to make endoscopic and endoluminal surgery easier, faster, and more precise. The ultra-thin instruments are as small as 2.5 mm in diameter and provide clinicians with unprecedented levels of dexterity within natural orifices of the body such as the gastrointestinal and urinary tracts. The robot is controlled by a clinician who uses a pen-like controller to manipulate the robot's movements, which, when combined with artificial intelligence (AI)-enhanced image guidance, can greatly reduce the learning curve for doctors performing endoscopic submucosal dissection (ESD) procedures. Ultimately, this will increase the procedure's effectiveness and improve patient outcomes.prnewswire.co.uk, 1d ago
new Now, researchers at Google Quantum AI have taken an important step forward by creating a surface code scheme that should scale to the required error rate. Their quantum processor consists of superconducting qubits that make up either data qubits for operation or measurement qubits that are adjacent to the data qubits and can either measure a bit flip or a phase flip – two types of error that affect qubits.CoinGenius, 1d ago
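The parity-check idea behind the measurement qubits described above can be illustrated with a much simpler 1D analogue, the repetition code: each "measurement" position reports the parity of its two neighboring data bits, so a single bit flip shows up as a localized change in that parity pattern (the syndrome). This toy sketch is an assumption-laden simplification — a real surface code arranges the checks in 2D and also detects phase flips, which a classical bit model cannot capture.

```python
# Toy 1D analogue of surface-code error detection: measurement positions
# report the parity (XOR) of adjacent data bits; a flipped data bit changes
# exactly the two parities that touch it, localizing the error.

def syndrome(data_bits):
    """Parities of adjacent data-bit pairs (what measurement qubits report)."""
    return [data_bits[i] ^ data_bits[i + 1] for i in range(len(data_bits) - 1)]

logical_zero = [0, 0, 0, 0, 0]
print(syndrome(logical_zero))   # [0, 0, 0, 0]: no error detected

flipped = logical_zero.copy()
flipped[2] ^= 1                 # bit flip on data bit 2
print(syndrome(flipped))        # [0, 1, 1, 0]: the two raised parities bracket the flip
```

A decoder then infers the most likely error from the syndrome and corrects it without ever reading the data qubits directly, which is what lets the check preserve the encoded quantum state.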
new Dr. Stephen G. Gray is a serial entrepreneur with 10+ years of experience in deep technology from use of bioengineering & robotic arm bioelectro-fabrication for regenerative medicine & longevity to cultivated meat & cell-based biomanufacturing. He is a founder & co-director (COO) of NouBio Inc: Market ready products that drop cultivated meat production costs by 95% and internal robotic DeepTech to unlock all of cell-based biomanufacturing. He is a co-founder of Ourobionics BV: DeepTech that enables human tissue to become high throughput (with 4D enabling bioelectro-fabrication) & cyborganic (with self-healing bioelectronics/biosensors) to speed up drug discovery with a vision of human-machine interfaces to enhance human longevity. He has co-founded & advised multiple DeepTech start-ups: Cartallum Blockchain, Cybosense, GutsBV, OuroFoods, & Nukaryo. The first start-up he co-founded, Ourobotics Ltd, was voted amongst the Top 10 3D bioprinters in 2015 & they open-sourced basic extrusion 3D bioprinting to the masses in 2016. He lives in Utrecht, the Netherlands and his writing style "Ouro Journalism" has a focus on insights from the sector of deep technology.VoxelMatters - The heart of additive manufacturing, 1d ago
new Moreover, its content generators facilitate a more human-like output, resulting in robustly intelligent content. The content so generated is plagiarism-free original text and images. The voice-to-text input system of Avorak AI is powered by a natural language processing mechanism, which involves computers understanding and analyzing input to generate customized output according to individual user preferences. This technology allows the AI system to comprehend natural language inputs from users, thus improving the overall user experience.CryptoNewsZ, 1d ago
new ...)-- Southwestern Hearing Centers is proud to offer Audibel’s latest hearing advancement, Intrigue AI. It is hearing reimagined from the inside out, featuring an all-new processor, all-new sound, all-new industrial design, all-new fitting software, and an all-new patient experience.
“Southwestern Hearing Centers understands the significant role hearing plays in our emotional well-being and physical health. Intrigue AI is the best sounding, best performing hearing aid available, offering infinite benefits to patients,” said Cindy Marino, COO, Southwestern Hearing Centers. “We’re here to help our patients every step of the way. Through our partnership with Audibel, we aim to better serve our patients, helping them stay connected to the world around them so they can hear better and live better.”
All-New Processor
The all-new Neuro Processor technology mimics the function of the central auditory system through a Deep Neural Network (DNN) on-chip accelerator and automatic functions. All-new Intrigue AI hearing aids mimic the cerebral cortex of the brain to more quickly and accurately “fill in” the gaps when patients’ hearing is impaired. It makes over 80 million personalized adjustments every hour, all designed to help wearers:
· Distinguish words and speech more intuitively and naturally
· Hear soft sounds without distracting noise
· Reduce the effort it takes to listen and hear
The AI inside delivers more true-to-life sound quality than ever before.
All-New Sound
Audibel’s new Neuro Sound Technology provides the best hearing experience for patients in all situations. The additive compression system synthesizes the signals from slow and fast compression systems for optimized perceptual outcomes, like the neural fibers that code different information for the brain.
“By spending countless hours with hearing professionals and patients, researching and analyzing every element of the hearing journey, we relentlessly pursued how to bring the best hearing innovation to professionals and patients in a simple and intuitive way,” said Achin Bhowmik, Ph.D., Chief Technology Officer and Executive Vice President of Engineering at Audibel. “Our all-new, powerful processor was designed to work like the human brain, leveraging the neuroscience of the ear-brain connection and information processing to create better sound quality, pushing artificial intelligence to its limits.”
All-New Design
Intrigue AI features an all-new discreet and stylish aesthetic product design that’s durable and comfortable for all-day wear, and which helps break barriers and reduce the stigma of what hearing care technology is today.
· Intrigue AI includes RIC RT, the industry’s longest-lasting RIC rechargeable hearing aid on the market. The battery holds up to 50 hours on a single charge.
· The new mRIC R has the second longest-lasting RIC rechargeable battery life, with up to 40 hours on a single charge.
· An industry-first custom rechargeable product has the highest custom battery life in the industry, with up to 36 hours on a single charge.
All-New Patient Experience
The new My Audibel App gives patients full control over their hearing aids, plus the ability to get helpful tips, track their health, and access intelligent features designed to simplify their lives. Audibel leads the hearing industry in incorporating health and wellness features into hearing aids: it was the first to integrate 3D sensors, the first to enable counting steps, the first to track and encourage social engagement, and the first to provide benefits that went beyond just better hearing. Audibel was also the first hearing manufacturer - and still the only one - to make hearing aids that can detect falls and send alerts. Intrigue AI’s improved streaming capabilities utilize binaural phone streaming, sharing information to both ears directly and simultaneously. This supports two-way, hands-free calling through compatible Apple and Android devices and makes it easier for patients to enjoy their favorite music with more natural results.
About Southwestern Hearing Centers
Southwestern Hearing is a family-owned business with more than 75 years and 3 generations of experience in the hearing industry. Southwestern Hearing experts know a patient’s quality of life directly relates to their level of hearing loss. Experts focus on a high level of patient care and support. This is evident in the thousands of 5-star reviews given directly by the patients they have served.PR.com, 2d ago
new The Discovery device enables “seamless integration of the digital and real-world”, uses distributed computing to offer “a retina-level adaptive display”, supports “micro gesture interaction” and can be paired with compatible devices using NFC.NFCW, 1d ago

Top

The technique could create flexible soft robots with embedded sensors that understand their own posture and movements, or wearable devices that deliver feedback on how a person is moving or interacting with the environment, according to Lillian Chin, a graduate student involved in the project at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).Digital Engineering, 11d ago
Musk said that the company was developing brain chip interfaces that could allow disabled patients to move and communicate again, as well as restore vision.Metro, 18d ago
SwRI engineers determined that newer microprocessors built on "instruction set architectures" could outperform conventional spaceflight technology under certain configurations in a laboratory. The research opens the door to embedding space systems with the same microprocessors used in cell phones and other Earth-based electronics.Space Daily, 21d ago
...reported, citing seven unnamed current and former employees. The FDA questioned how Neuralink’s brain chip technology can be implanted or removed without damaging the organ. The regulator also considered the potential for the wires that Neuralink uses to pass electrical signals from the brain to the computer to have unintended effects on movement or feeling.Fortune, 18d ago
Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to...schneier.com, 20d ago

Latest

new Brain implants that translate paralyzed patients’ thoughts into speech…STAT, 2d ago
new Generative AI is already enabling products like Jenny to respond to open-ended and emotionally charged input. For example, if you get angry at Jenny, she’ll respond accordingly based on the words you used. Looking forward, we see this technology evolving to enable responses based on visual prompts like facial expressions and auditory cues like tone and volume of voice. We have only scratched the surface of what generative AI can do for sales organizations and the salespeople within them. And the transition from building seller proficiency on sales pitch to building seller proficiency on sales discovery is just the first step into the brave new world of smarter, more effective AI-powered sales solutions.Sales & Marketing Management, 1d ago
new ..."Noninvasive Sensors for Brain–Machine Interfaces Based on Micropatterned Epitaxial Graphene"...nanowerk.com, 2d ago

Latest

new ...for the role in Fremont, California, says. “You will lead and help build the team responsible for enabling Neuralink’s clinical research activities and developing the regulatory interactions that come with a fast-paced and ever-evolving environment.” Musk, the world’s richest person with an estimated $256bn fortune, said last month he was cautiously optimistic that the implants could allow tetraplegic people to walk. “We hope to have this in our first humans, which will be people that have severe spinal cord injuries like tetraplegics, quadriplegics, next year, pending FDA [Food and Drug Administration] approval,” he told the Wall Street Journal’s CEO Council summit. “I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury. Neuralink’s working well in monkeys, and we’re actually doing just a lot of testing and just confirming that it’s very safe and reliable and the Neuralink device can be removed safely.” However, Musk has a history of overpromising about the speed of the company’s development. In 2019 he predicted that the device would be implanted into a human skull by 2020. Musk said the device would be “implanted flush with skull & charges wirelessly, so you look and feel totally normal”. He said people should think of the technology as similar to “replacing faulty/missing neurons with circuits”. “Progress will accelerate when we have devices in humans (hard to have nuanced conversations with monkeys) next year,” he...Inferse.com, 2d ago
new ...that details a communication network that would link rovers, lake landers, and even submersible vehicles through a so-called mesh topology network, allowing the machines to work together as a team, independently from human input. According to Fink and his co-authors, the approach could help address one of...TodayHeadline, 1d ago
For these technologies to materialize, it is necessary to individually control millions of qubits at high speeds to avoid decoherence, i.e., the loss of quantum information due to interaction with the environment. There is, however, a trade-off between scalability and programmability, i.e., qubits that can scale are difficult to control and vice versa. In this talk, I will discuss our efforts towards scalable control over optically addressable qubits, e.g., Rydberg atoms, color centers, and trapped ions. Spatial Light Modulators (SLMs) are optoelectronic devices that can provide programmable control over millions of spatial optical modes, and thus they are suitable for scalable optical control over qubits. However, the modulation speeds of existing SLMs are less than 1 kHz, which is incompatible with quantum control, as decoherence can occur in the millisecond to microsecond range. I will present our efforts to realize nanophotonic-based SLMs that can operate at GHz speeds and scale to millions of pixels. I will also present a new approach to realize nanophotonic devices in existing CMOS foundry processes, e.g., TSMC or Intel, with minimal post-processing for MHz-speed SLMs. In addition to their use in quantum control, high-speed SLMs will find applications in imaging, video holography, optical accelerators and neural networks, in-vivo imaging through scattering media, and cancer therapy, to name a few.SciTech Institute, 3d ago
As AI continues to advance, the possibility of singularity becomes increasingly probable. Neuralink, a project created by Elon Musk’s team of scientists and engineers, is an example of how artificial intelligence can enhance human capabilities. The brain chips developed by Neuralink can help disabled individuals move, communicate, and restore vision. However, regulatory bodies have thus far prevented human trials of the technology.Wonderful Engineering, 4d ago
new Currently, the students working on projects with Lincoln Lab include PhD students Maya Flores and Adriyel Nieves, and master’s students Donovan Tames and Mitch Jacobs. Each is working on research related to securing future 5G/6G networks against attacks from adversaries attempting to gain unauthorized access, by innovating new custom machine learning algorithms and testing these algorithms using programmable wireless prototyping equipment such as software-defined radios.wpi.edu, 2d ago
new The technology behind the Apollo wearable was originally developed by a team of neuroscientists and physicians at the University of Pittsburgh. It uses touch therapy to send safety signals to the brain.Mic, 2d ago

Latest

new ...the new version of Digit, which includes a head with LED animated eyes. The eyes were designed to help create better human robot interactions. The eyes, using simple expressions, can convey information and intent, Agility Robotics claimed.Robotics 24/7, 2d ago
new Generative AI has been evolving since its introduction at a great pace. The development of Large Language Models (LLMs) can be termed as one of the major reasons for the sudden growth in the amount of recognition and popularity generative AI is receiving. LLMs are AI models that are designed to process natural language and generate human-like responses. OpenAI’s GPT-4 and Google’s BERT are great examples that have made significant advances in recent years, from the development of chatbots and virtual assistants to content creation. Some of the domains in which Generative AI is being used are – content creation, development of virtual assistants, human imitating chatbots, gaming, and so on. Generative AI is also used in the healthcare industry to generate personalized treatment plans for patients, improve the accuracy of medical diagnoses, etc.MarkTechPost, 2d ago
The idea of connecting the human brain with artificial systems appeared fifty years ago. Long-term potentiation was first seen in studies of learning and memory conducted on simple animals like the lamprey long before human cell cultures and brain organoids became available. Hence, research into brain-machine interaction was undertaken to create two-way connections between the brain and mechanical equipment.MarkTechPost, 5d ago
...was awarded to Professor Lu for his discoveries that could lead to advances ranging from 3D cameras for smartphones to more efficient satellite electronics and space missions. Professor Lu's team at ANU has developed new types of atomically thin 2D materials and devices with peculiar optical and electronic properties, enabling new applications in electronics, photonics and space. These novel materials facilitate devices that are significantly smaller, less massive, and require much lower power to operate.ANU, 4d ago
The overlapping layers of coding and programs that process this data can be called a neural network, similar to how the human brain consists of billions upon billions of neurons to create a biological computer system, in a sense. Deep learning simply takes that human brain function and applies it to computer science: billions and billions of connecting neurons via code instead of electrical impulses.dzone.com, 5d ago
Resistive memories are essentially tunable resistors, which are devices that resist the passage of electrical current. These resistor-based memory solutions have proved to be very promising for running artificial neural networks (ANNs). This is because individual resistive memory cells can both store data and perform computations, addressing the limitations of the so-called von Neumann bottleneck.HPCwire, 5d ago

Latest

...-- The New York General Group has the technology "World System." The idea is to have artificial intelligence (natural language processing models, image processing models, etc.) learn astrophysics, neuroscience, etc., integrate them with quantum mechanical processing models, create a universe on a quantum computer, create the human brain as a subset of the universe on top of it, and have superintelligence (artificial general intelligence)...PRLog, 3d ago
Deep Learning is based on Neural Networks, and Neural Networks try to mimic how our brains work: neurons receive inputs and send output signals to other neurons. Under the hood of Deep Learning, that’s what “circuits” are doing in a multi-billion-dimensional space. Our understanding of the human brain is still limited, so AI models also suffer from that limited understanding.ON24, 5d ago
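A minimal sketch of such a "circuit": each artificial neuron takes a weighted sum of its inputs and passes it through a nonlinearity before signaling the next neuron. The weights here are illustrative, not learned:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))   # output signal sent on to other neurons

# Two "upstream" neurons feed one downstream neuron.
h1 = neuron([0.5, 0.9], [1.0, -0.5], 0.1)
h2 = neuron([0.5, 0.9], [0.3, 0.8], -0.2)
out = neuron([h1, h2], [0.7, 0.7], 0.0)
print(round(out, 3))
```

Real deep networks stack millions of such units in layers; training adjusts the weights, which is where the "multi-billion-dimensional space" comes from.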
Researchers at Linköping University in Sweden have developed a method whereby the body can ‘grow its own’ electrodes. The minimally invasive technique involves injecting a hydrogel that is laden with enzymes into target tissues. The enzymes interact with molecules that are present in the tissue to change the structure of the gel and allow it to become electrically conductive. The approach could facilitate a variety of advanced medical systems, from pacemakers to brain-computer interfaces. Excitingly, the technology avoids the need for invasive surgery or conventional stiff electrical components that are not well matched to native tissues and can provoke unwanted immune reactions. So far, the researchers have shown that they can grow electrodes in the brain, heart, and tail fins of zebrafish and near the neural system in leeches.Medgadget, 4d ago

Top

..., lab-grown tissue resembling fully grown organs, to experiment on kidneys, lungs, and other organs without resorting to human or animal testing. More recently Hartung and colleagues have been working with brain organoids, orbs the size of a pen dot with neurons and other features that promise to sustain basic functions like learning and remembering.Nextgov.com, 20d ago
...to connect to a computer and transmit electrical impulses, often via a brain implant, that can be translated into language. Brain-computer interfaces, or BCIs, offer the promise of improving life for people with injuries or neurological disorders that prevent them from speaking or typing, as this November 2022...HowStuffWorks, 7d ago
Kosmos-1 is the latest development in Microsoft’s efforts to create artificial intelligence (AI) systems that are capable of understanding the complexities of human language and conversation. By incorporating vision processing and natural language understanding, the model is able to comprehend the visual world and its relationship with language. Microsoft believes that this technology can enable machines to better understand human conversation and make more complex decisions. These advances could eventually lead to more advanced AI applications such as autonomous robots, medical diagnosis, and natural language processing.OODA Loop, 19d ago
...that within four years, his new, under-the-radar startup called Neuralink would have a device on the market that would not only translate thoughts into actions on computers, but also help cure “certain severe brain injuries.” Then, on July 16, 2019, Musk proudly revealed Neuralink’s tech to the public in a San Francisco launch...Fortune, 19d ago
...“Eventually, we can use neural interfaces for many different disorders, and we need algorithmic ideas and advances in chip design to make this happen. This work is very interdisciplinary, and so it also requires collaborating with labs like the Laboratory for Soft Bioelectronic Interfaces, which can develop state-of-the-art neural electrodes or labs with access to high-quality patient data.”...newsbeezer.com, 20d ago
Current strategies to communicate with voice-disabled patients rely on microphones attached to the body, but that is often unwieldy and considered a “non-friendly” user setup. Graphene sensors are ideally suited for detection of tiny vibrations on skin surfaces. The device “can sense muscle motions and audio vibrations transmitted to the surface of the skin” and “convert recognizable mechanical information into speech,” said research team leader Professor Ren Tianling. The device also works without interference from noisy, hostile environments such as highways, fire disasters and airplane cockpits.Electronics For You, 12d ago

Latest

Driver assistance: ADAS domain controller applications use a diversity of sensor inputs to build a picture of the environment around the vehicle. These systems may use as many as 11 cameras of differing resolutions and fields of view, combined with five radar sensors as inputs to a perception system in an L2 or L3 driving system. This demonstration shows how TI’s AI-enabled processor, the TDA4VH-Q1, easily handles 12 cameras (each at a 2-MP resolution) and performs image pre-processing on all inputs simultaneously in real-time, using only the image accelerators on the chip. This pre-processing allows the application and signal-processing cores to run the computer vision and AI algorithms required for accurate perception.electronicspecifier.com, 6d ago
Siri, Google Assistant, Cortana, and Alexa are successive technologies rolled out in the 21st century. They are readily accessible via our handy devices and serve as intelligent personal assistants rather than simple question-answering over internet information. Natural Language Processing (NLP) and deep neural networks are the core building blocks of the technology, which allows our machines, appliances, and IoT devices to understand human language with ease. Command execution via voice recognition is the new norm, where a simple instruction like “Hey Google, play me some country music!” will easily fire up your Spotify app to your liking.web3newshubb.com, 4d ago
The development of electronic skin with multiple senses is essential for various fields, including rehabilitation, health care, prosthetic limbs, and robotics. A key component of this technology is the stretchable pressure sensor, which can detect various types of touch and pressure. A joint team of researchers from POSTECH and the University of Ulsan in Korea has recently made a significant breakthrough by successfully creating omnidirectionally stretchable pressure sensors inspired by crocodile skin.techxplore.com, 4d ago
Animals excel at a wide range of behaviors, many of which are essential for survival. For example, dragonflies are aerial predators, known for both their speed and high success rate, that must perform fast, accurate, and efficient calculations to survive. I will present a neural network model, inspired by the dragonfly nervous system, that calculates turning for successful prey interception. The model relies upon a coordinate transformation from eye-coordinates to body-coordinates, an operation that must be performed by almost any animal nervous system relying upon sensory information to interact with the external world. I will discuss how I and collaborators are combining neuroscience experiments, modeling studies, and exploration of neuromorphic architectures to understand how the biological dragonfly nervous system performs coordinate transformations and to develop novel approaches for efficient neural-inspired computation.SciTech Institute, 3d ago
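The eye-to-body coordinate transformation the talk describes is, at its core, a rotation. A minimal 2D sketch (the angle and geometry are illustrative, not the dragonfly model itself):

```python
import math

def eye_to_body(target_eye, eye_yaw_rad):
    """Rotate a target position from eye-centered into body-centered coordinates."""
    x, y = target_eye
    c, s = math.cos(eye_yaw_rad), math.sin(eye_yaw_rad)
    return (c * x - s * y, s * x + c * y)

# Prey dead ahead in eye coordinates, with the eye yawed 90 degrees
# relative to the body axis:
bx, by = eye_to_body((1.0, 0.0), math.pi / 2)
print(round(bx, 6), round(by, 6))   # -> 0.0 1.0: turn left to intercept
```

The biological question is how networks of neurons implement this rotation; the arithmetic itself is just a rotation matrix applied to the sensed target position.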
AI is a branch of computer science that allows machines to mimic, and possibly improve upon, processes that the human mind controls. Early AI results have been promising, with such technologies as Alexa, Siri, self-driving cars and ChatGPT becoming increasingly accepted.Bitcoin Press UK, 4d ago
Dr John Hardy, one of the lead authors of the study, said this approach could potentially transform the manufacture of complex 3D electronics for technical and medical applications — including structures for communication, displays, and sensors. Such approaches could also be used to fix broken implanted electronics through a process similar to laser dental/eye surgery. In a two-stage study, the researchers used a Nanoscribe (a high-resolution laser 3D printer) to 3D print an electrical circuit directly within a silicone matrix (using an additive process). They demonstrated that these electronics can stimulate mouse neurones in vitro (similar to how neural electrodes are used for deep brain stimulation in vivo).electronicsonline.net.au, 6d ago

Top

Researchers have recently been pursuing technologies for universal speech recognition and interaction that can work well with subtle sounds or noisy environments. Multichannel acoustic sensors can improve the accuracy of recognition of sound but lead to large devices that cannot be worn. To solve this problem, we propose a graphene-based intelligent, wearable artificial throat (AT) that is sensitive to human speech and vocalization-related motions. Its perception of the mixed modalities of acoustic signals and mechanical motions enables the AT to acquire signals with a low fundamental frequency while remaining noise resistant. The experimental results showed that the mixed-modality AT can detect basic speech elements (phonemes, tones and words) with an average accuracy of 99.05%. We further demonstrated its interactive applications for speech recognition and voice reproduction for the vocally disabled. It was able to recognize everyday words vaguely spoken by a patient with laryngectomy with an accuracy of over 90% through an ensemble AI model. The recognized content was synthesized into speech and played on the AT to rehabilitate the capability of the patient for vocalization. Its feasible fabrication process, stable performance, resistance to noise and integrated vocalization make the AT a promising tool for next-generation speech recognition and interaction systems.interestingengineering.com, 13d ago
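The ensemble AI model used for word recognition is not detailed here; a generic sketch of the ensemble idea is majority voting across several models' outputs (the model predictions below are stand-ins):

```python
from collections import Counter

def ensemble_predict(predictions):
    """Majority vote across the word predictions of several models."""
    return Counter(predictions).most_common(1)[0][0]

# Stand-in outputs from three models decoding the same vague utterance:
votes = ["water", "water", "walker"]
print(ensemble_predict(votes))   # -> water
```

Combining several imperfect recognizers this way typically outperforms any single one when their errors are uncorrelated, which is the usual motivation for ensembles in noisy-speech settings.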
In this talk we demonstrate two device concepts based on novel organic mixed-conducting materials and show how we can use these devices as artificial neurons and synapses in smart autonomous robotics, trainable biosensors and sensory coding.tue.nl, 17d ago
The organ-on-chip is a multichannel 3D micro-fluidic cell culture device that simulates the activities, mechanics, and physiological responses of human organs. The chip forms narrow channels for blood and air flow in organs including the lung, gut, liver, and heart. The device is built on a microchip containing continuously perfused chambers colonized by living cells, arranged to simulate tissue- and organ-level physiology. It is used to nurture internal organs...openPR.com, 13d ago

Latest

Nuance’s roots are in voice commands, dictation and speech recognition: it owns the Dragon speech recognition suite and provided the original back-end NLP algorithm for Apple’s Siri. And voice has of course been closely watched (or listened to) with the rise of consumer voice assistants, but Lorentzen says voice still has obvious limitations, such as being understood in noisy environments or handling complex instructions: decoding a long alphanumeric string on a shipping order, for example. Lorentzen doesn’t expect consumer voice assistants to change that soon.The Stack, 5d ago
ASE has combined SiP with embedded AI design for TWS gears, allowing the devices to monitor heart rates, step counts and health conditions and even to conduct intelligent translation and detect head motions, the sources added.MacRumors, 4d ago
The PPG sensors used in wearable medical devices contain LED light sources coupled with photodetectors and an electronic reading system. The photodetectors capture the light reflected from the LEDs and feed this information into algorithms, which then calculate the wearer’s vital signs. Senbiosys’s PPG sensors measure just four cubic millimeters, which is four times smaller than the ones found in competing devices on the market. When it comes to this kind of miniaturization, the biggest obstacle is usually the power needed to run the LEDs. But extensive research at EPFL’s Integrated Circuits Laboratory in Neuchâtel has produced tiny photodetectors that can pick up signals just as clearly as existing ones – from a light source that’s much less intense. This discovery has made waves across the industry. “Our breakthrough has given rise to around 60 journal articles in the fields of microelectronics and optical sensors,” says Antonino Caizzone, a Senbiosys cofounder who received the 2021 Gilbert Hausmann Award for his thesis in this area. “It’s also led to 11 patents, including some obtained at EPFL, and our work has been cited around 1,000 times.”...Sciena, 3d ago
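The algorithmic step the paragraph mentions — turning reflected-light readings into vital signs — can be sketched as peak counting on the PPG waveform. The synthetic signal below stands in for real photodetector data:

```python
import math

FS = 100                      # sample rate in Hz (assumed)
BPM = 72                      # synthetic heart rate to recover
signal = [math.sin(2 * math.pi * (BPM / 60) * n / FS) for n in range(FS * 10)]

# Count local maxima (systolic peaks), then convert to beats per minute.
peaks = [i for i in range(1, len(signal) - 1)
         if signal[i - 1] < signal[i] >= signal[i + 1]]
duration_s = len(signal) / FS
print(round(60 * len(peaks) / duration_s))   # -> 72
```

Production algorithms add baseline removal, motion-artifact rejection and adaptive thresholds, but the core estimate is still beats counted per unit time.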
System architecture networks the autopilot, avionics and Intel workstation with an onboard Nvidia Xavier AI board to give supercomputer-level processing opportunities in flight and in real time. A natural evolution of the architecture is autonomous operations based on collected sensor data. As the system generates and records vast amounts of data, it can be used in real time for navigation, avoidance and on-the-fly tasking.Unmanned Systems Technology, 4d ago
Tests have shown that certain compounds from the mushrooms could potentially improve memory in both rats and humans, although scientists have not yet pinpointed the exact compound or combination of compounds. The lion’s mane mushroom contains, in its edible parts, both hericenones and erinacines, which the research team says are linked to brain cell growth and memory improvement. Lead author of the paper, Frédéric Meunier, Clem Jones Centre for Ageing Dementia Research, Queensland Brain Institute, Australia, said, “Using super-resolution microscopy, we found the mushroom extract and its active components largely increase the size of growth cones, which are particularly important for brain cells to sense their environment and establish new connections with other neurons in the brain.”...The Medicine Maker, 3d ago
Meta's research arm, Meta AI, used the new AI-based computer program known as ESMFold to create a public database of 617 million predicted proteins. Proteins are the building blocks of life and of many medicines, required for the function of tissues, organs and cells. Drugs based on proteins are used to treat heart disease, certain cancers and HIV, among other illnesses, and many pharmaceutical companies have begun to pursue new drugs with artificial intelligence. Using AI to predict protein structures is expected to not only boost the effectiveness of existing drugs and drug candidates but also help discover molecules that could treat diseases whose cures have remained elusive.slashdot.org, 4d ago

Top

Graphene, with its properties of extreme thinness and strength, flexibility, biocompatibility, and conductivity, is uniquely well-placed to be developed into chips that can be implanted into the brain. These chips have the potential to compile information about brain activity and correct deviations in neural circuits in order to correct symptoms of neurological illnesses including Parkinson’s Disease, Epilepsy and Aphasia, providing more peripheral nerve-related indications in more systemic disease areas where selective recording and stimulation are needed.Printed Electronics Now, 16d ago
The other area of that is significant investment into brain-computer interface or brain-wearables, neural interface devices, both to enhance militaries and soldiers and their capabilities, but also to try to interfere with others. Imagine a world in which you have widespread neural interface, which is the world which I have described that I think is coming, and people are wearing their ear buds, their headphones, and all of that is brain activity that is being tracked and also used and interfered with by other countries.carnegiecouncil.org, 8d ago
The Advanced Optics and Photonics Laboratory leverages APL’s experimental optics and remote sensing expertise to develop optical sensors for noninvasive brain–computer interface technology. Facilities house several world-class optical neural imaging systems that aim to improve the current state of the art in temporal and spatial resolution of these tools to create breakthroughs in both health and human–machine interactions.Johns Hopkins University Applied Physics Laboratory, 13d ago
How tiny wireless robots can be used for non-invasive, precise, and safe medical diagnosis and treatment Robot technology is already abundant in the medical domain, ranging from assistance in surgery to complex prosthetics. However, using tiny robots inside the body is still a fairly novel thing to do. Inspired by…...Falling Walls, 18d ago
.... It works by using EMG (electromyography) to read the neural signals passing through your arm from your brain to your fingers. Such a device could sense even incredibly subtle finger movements not clearly perceptible to people nearby. Himel reportedly said it will let the wearer “control the glasses through hand movements, such as swiping fingers on an imaginary D-pad”.UploadVR, 18d ago
There is genome, proteome and in general omics information that is taken from humans to interpret their future health states. And there are all kinds of prediction algorithms for different diseases in neurology, oncology and whatnot that are used to predict future health events and the top interventions that you could do with a certain patient. But in our own line of research, we are doing things that are centered on taking human bio signals. So, these might emerge from digital health devices. So, for example, wearables that are worn on body, but also cameras and night vision cameras and other ambient sensing devices that give us information about the status of the human body in various dimensions, like, for example, regarding heartbeat information, electrocardiogram information, breathing information, and other similar things.infineon.com, 13d ago

Latest

Nanotechnologies enable the development of innovative nano-devices with unprecedented capabilities. By means of communications, networks of nano-sensors and nano-actuators can perform complex tasks in a distributed manner. Similarly, the wireless interconnection of nano-processors in massive multi-core architectures can enable innovative parallel computing architectures. Moreover, the interconnection of networks of nanomachines or nanonetworks with macroscale communication networks – the so-called Internet of Nano-Things – opens the door to transformative applications not only across scales (from nano to macro) but also across domains (biological and non-biological) and realms (classical and quantum).comsoc.org, 5d ago
Elliptic Labs is a global enterprise targeting the smartphone, laptop, IoT, and automotive markets. Founded in 2006 as a research spin-off from Norway's Oslo University, the company's patented software uses AI, ultrasound and sensor fusion to deliver intuitive 3D gesture, proximity-, presence-, breathing- and heartbeat-detection experiences. Its scalable AI Virtual Smart Sensor Platform creates software-only sensors that are sustainable, human-friendly and already deployed in hundreds of millions of devices around the world. Elliptic Labs is the only software company that has delivered detection capabilities using AI software, ultrasound, and sensor fusion deployed at scale. The company joined the Oslo Børs main listing in March 2022.tmcnet.com, 6d ago
Ultimately, the algorithm could allow researchers to control snake robots and other hyper-redundant robots (e.g., robots inspired by octopus tentacles) with greater precision, while also better replicating snake- or tentacle-like movements. This could in turn facilitate the deployment of these robots in medical settings, particularly to perform minimally invasive surgical procedures inside the human body.TodayHeadline, 5d ago

Latest

Take Sontro’s OTC hearing aids and accompanying otoTune app, which allow users to customize their hearing aids in just a few minutes on their mobile phone. The app uses machine learning to intelligently identify where a user’s hearing can be improved the most, while directional mics and advanced signal processing in the device help manage noise coming from different directions. The Sontro hearing aids process sound using Wide Dynamic Range Compression (WDRC) to help expand the user’s hearing range across 16 channels. This makes soft sounds louder and loud sounds more pleasant, helping those hard of hearing to live more vibrant, more comfortable lives.Dealerscope - CE RETAIL's #1 source for products & strategy news, 5d ago
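Wide Dynamic Range Compression can be sketched as a level-dependent gain rule: full gain below a knee point, progressively less above it, so soft sounds get the biggest boost. The threshold, ratio and gain values below are illustrative, not Sontro's fitting parameters:

```python
def wdrc_gain_db(input_db, threshold_db=45.0, ratio=2.0, max_gain_db=30.0):
    """Wide Dynamic Range Compression: apply full gain below the knee
    (threshold), then reduce gain as the input level rises, so soft
    sounds are amplified most and loud sounds stay comfortable."""
    if input_db <= threshold_db:
        return max_gain_db
    reduction = (input_db - threshold_db) * (1 - 1 / ratio)
    return max(0.0, max_gain_db - reduction)

for level in (30, 50, 70, 90):
    print(level, "dB SPL ->", wdrc_gain_db(level), "dB gain")
```

A 16-channel aid runs a rule like this independently per frequency band, with per-band parameters fitted to the wearer's audiogram.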
Computer Science involves the study of computers using computation, mathematics, algorithms and more. This involves understanding how computers actually work (hardware) and learning the programming languages that run these computers (software), all while using computing technology that follows algorithms and protocols to process information.IDP Education, 5d ago
Generative AI could have an impact on how autonomous (IoT) devices are controlled, e.g., robots. By capturing motion data from animals or humans, generative AI can be used to generate control logic and commands for robots. Instead of deterministically programming movements for each leg of a robot dog, for example, generative AI models can be utilized to generate the movements of individual parts and make the robot dog walk complicated, interconnected steps. Moreover, generative AI models can help robots make sense of their surroundings and connect so-called horizon goals with more intermediate steps to reach the goals (e.g.,...IoT Analytics, 7d ago
Eyeris proprietary technology accurately generates in-cabin depth information for key regions of interest (ROIs), such as occupants' face, body, hands and objects, using a single automotive-grade 2D image sensor, like the latest RGB-IR sensors. Eyeris achieves this through a rigorous collection of naturalistic in-cabin 3D data used to train compute-efficient depth inference models that run on AI-enabled processors. This enables enhanced depth-aware understanding of the location, size, and position of occupants and other objects to customize - for example - the airbag deployment accordingly and reduce the risk of occupants' injury when deployed.EDACafe, 6d ago
A spin-out from ETHZ, Nanoflex Robotics specializes in developing soft robotic systems for medical interventions. The startup has developed magnetic navigation tech that enables precise insertion of specially made guidewires and catheters deep into the brain, and they also provide associated surgical devices. By developing these precision tools, the young startup is able to enhance surgical procedures for better outcomes, specifically focused on the treatment of Ischemic and Hemorrhagic stroke. So far, the MedTech startup has secured €18.85 million.EU-Startups, 7d ago
Neuroscientist Professor Jack Gallant at the University of California, Berkeley, can tell what movie a person is watching by decoding their brain waves on an fMRI machine. Dr Joseph Makin at the University of California, San Francisco, inserted electrodes into the brain of epileptic patients undergoing diagnostic tests and converted their thoughts to...Cosmos, 6d ago

Latest

...field. MRAM can be developed to provide multiple states as opposed to the traditional binary 1 and 0 of conventional memory. This can be architected in arrays that provide new abilities in AI learning and inference to modify weights in tables very rapidly and maintain the states until they change. In collaboration with Prof. Joseph Friedman at the University of Texas at Dallas, Everspin has done an experimental demonstration of a neuromorphic network with STT magnetic tunnel junction (MTJ) synapses, which performs image recognition via vector-matrix...Bisinfotech, 5d ago
Scientists believed that the main way a neuron communicates with other neurons is by using its cable (axon) to send neurotransmitters (chemical messengers) to the dendrite of another neuron. Dendrites are appendages designed to receive signals from other cells. However, this was true for only half of the connections in the fruit fly larva brain. Sometimes, fruit fly neurons send messages axon-to-axon, dendrite-to-dendrite, or dendrite-to-axon.medicalxpress.com, 4d ago
According to Li, machines follow a control pattern similar to the human nervous system. Humans provide missions and behavioral features to machines, which must then run a complex behavior cycle regulated by a reward and punishment function to improve their abilities of perception, cognition, behavior, interaction, learning and growth. Through iteration and interaction, the short-term memory, working memory and long-term memory of the machines change, embodying intelligence through automatic control. “In essence, control is the use of negative feedback to reduce entropy and ensure the stability of the embodied behavioral intelligence of a machine,” Li concluded.newswise.com, 4d ago
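Li's point that "control is the use of negative feedback" can be illustrated with the simplest possible loop: a proportional controller driving a state toward a setpoint. The gain and setpoint here are arbitrary:

```python
def feedback_loop(setpoint, state, gain=0.5, steps=20):
    """Negative feedback: repeatedly act against the error
    (setpoint - state) until the state settles at the setpoint."""
    for _ in range(steps):
        error = setpoint - state
        state += gain * error   # correction opposes the deviation
    return state

print(round(feedback_loop(setpoint=1.0, state=0.0), 4))   # -> 1.0
```

Each iteration shrinks the error, so the state converges instead of wandering, which is the "reducing entropy" behavior described in the quote.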
A proprietary haptic system was developed to give surgeons the sense of touch in a virtual reality simulation of knee arthroscopy surgery. During real diagnostic knee arthroscopy, the surgeon operates inside the knee with long tools (an endoscope and a hooked probe) that pass through keyholes in the skin. The surgeon uses visual and haptic (touch) feedback to diagnose problems in the knee.Imperial College London, 6d ago
Despite our tremendous progress in artificial intelligence (AI), current AI systems still cannot adequately understand humans and flexibly interact with humans in real-world settings. The goal of my research is to build AI systems that can understand and cooperatively interact with humans in the real world. My hypothesis is that to achieve this goal, we need human-level machine social intelligence and that we can take inspiration from the studies of social cognition to engineer such social intelligence. To transfer insights from social cognition to real-world systems, I develop a research program for cognitively inspired machine social intelligence, in which I first i) build computational models to formalize the ideas and theories from social cognition, ii) develop new computational tools and AI methods to implement those models, and finally iii) apply those models to real-world systems such as assistive robots.The Hub, 5d ago

Latest

In this study, researchers from Columbia University introduce ViperGPT, a framework that circumvents these constraints by utilizing large language models that generate code (like GPT-3 Codex) to nimbly build vision models for any textual query that specifies the task. For each query, it creates a specialized program that accepts images or videos as arguments and returns the answer for that image or video. They demonstrate that creating these programs only requires giving Codex an API exposing various visual functions (such as find and compute_depth), just as one would provide for an engineer. The model can reason about how to use these functions and construct the necessary logic thanks to its prior training on code.MarkTechPost, 4d ago
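A rough illustration of the pattern: the function names below echo the API exposed to the code model, but these implementations, the data, and the query logic are all mocked up for this sketch.

```python
# Mock visual API of the kind exposed to the code-generating model.
# These stand-in implementations return canned values so the
# generated-program pattern can run end to end.
def find(image, name):
    """Return objects in the image matching a category name (mocked)."""
    return [obj for obj in image["objects"] if obj["name"] == name]

def compute_depth(image, obj):
    """Return the object's distance from the camera (mocked)."""
    return obj["depth_m"]

# A program of the kind the model might generate for the query
# "Which mug is closer?" (query and logic are illustrative):
def execute_query(image):
    mugs = find(image, "mug")
    return min(mugs, key=lambda m: compute_depth(image, m))["id"]

image = {"objects": [{"name": "mug", "id": "left", "depth_m": 1.2},
                     {"name": "mug", "id": "right", "depth_m": 0.8}]}
print(execute_query(image))   # -> right
```

The point of the framework is that this glue logic is written by the language model per query, while the vision heavy lifting stays inside the API functions.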
.... Ambarella is designing specific chips that are involved in computer vision, specifically as it relates to autonomous driving, and Blake went into a discussion about image segmentation and how these chips are getting better and better at performing these rather involved calculations directly, in almost real time.IBKR Campus, 4d ago
AI is a branch of computer science that allows machines to mimic, and possibly improve upon, processes that the human mind controls. Early AI results have been promising with such technologies as Alexa, Siri, self-driving cars and ChatGPT becoming increasingly accepted.coindesk.com, 6d ago

Top

Hartung defines OI as the ability to reproduce cognitive functions such as learning and sensory processing in a lab-grown human-brain model. This will enable researchers to manipulate the system in ways that are not ethically possible with human brains. “The first application is about understanding, learning, and memory, and this is actually the most important for development of neurosciences.”...Analytics India Magazine, 12d ago
Sensor-enabled devices make homes safer with person detection and recognition. Simplify appliance interaction with keyword and phrase recognition, improve electronic device usability with gesture and even facial expression recognition, and improve the safety and security of home medical devices with on-chip continuous learning. Combine sensor modalities to create unparalleled user interaction.BrainChip, 15d ago
Wireless visual cortical stimulator devices are a type of visual prosthetic used to restore functional vision to blind individuals. They work by providing electrical stimulation that bypasses damaged retinal cells and stimulates healthy cells to induce visual perception. Second Sight announced the first implantation of its Orion I Visual Cortical Prosthesis (Orion I), a wireless visual cortical stimulator, in a 30-year-old blind patient on October 26, 2016. The goal of this implantation was to provide human proof of concept for the ongoing development of the device. Second Sight’s Argus II System, a visual prosthesis that received FDA approval in 2013, also provides electrical stimulation to bypass defunct retinal cells and stimulate viable cells. The key difference is that Orion I uses wireless technology to restore vision to patients completely blind due to conditions such as glaucoma, cancer, diabetic retinopathy, or trauma. In the first implantation of Orion I, the patient successfully perceived and localized individual light spots without any side effects. While not all technological developments offer significant opportunities, the first human test of Orion I showed that it can treat visually impaired patients with no side effects. The company plans to submit an application to the FDA in early 2017 to begin clinical trials for the complete system.Medgadget, 21d ago
Cybernet was an early developer of user interfaces to robotic and computerized devices based on detection of motion in video and inertial tracker data streams, recognition of intentional human motion in these motion streams, and conversion of the human intention into machine control or computer control commands. This technology was popularized in computer gaming by Microsoft through its Kinect product.Cybernet Systems Corporation, 22d ago
Essentially, OI is a potential hybrid technology which will combine future biological computers with a brain-to-machine interface, allowing for directed tasks and learning to occur through use of external sensors/stimuli.securities.io, 12d ago
Helius Medical Technologies is a leading neurotech company in the medical device field focused on neurologic deficits, using non-implantable platform technologies that amplify the brain’s ability to compensate and promote neuroplasticity, aiming to improve the lives of people dealing with neurologic diseases. The Company’s first commercial product is the Portable Neuromodulation Stimulator (PoNS). For more information, visit...Zephyrnet, 19d ago

Latest

Abstract: Human beings and other biological creatures navigate unpredictable and dynamic environments by combining compliant mechanical actuators (skeletal muscle) with neural control and sensory feedback. Abiotic actuators, by contrast, have yet to match their biological counterparts in their ability to autonomously sense and adapt their form and function to changing environments. We have shown that engineered skeletal muscle actuators, controlled by neuronal networks, can generate force and power functional behaviors such as walking and pumping in a range of untethered robots. These muscle-powered robots are dynamically responsive to mechanical stimuli and are capable of complex functional behaviors like exercise-mediated strengthening and healing in response to damage. Our lab uses engineered bioactuators as a platform to understand neuromuscular architecture and function in physiological and pathological states, restore mobility after disease and damage, and power soft robots. This talk will cover the advantages, challenges, and future directions of understanding and manipulating the mechanics of biological motor control.wpi.edu, 6d ago
...recorded audio and video of nearly two dozen bats for two and a half months. His team adapted a voice recognition program to analyze 15,000 of the sounds, and then the algorithm correlated specific sounds to certain social interactions in the videos, like fighting over food or jockeying for sleeping positions.Scientific American, 5d ago
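The correlate-sounds-to-interactions step can be made concrete with a toy classifier. This is not the adapted voice-recognition program from the study; it is a minimal nearest-centroid sketch over synthetic 2-D "call features", with invented labels such as `food_dispute`.

```python
import numpy as np

def train_centroids(features, labels):
    """One centroid per interaction label (nearest-centroid classifier)."""
    return {lab: features[labels == lab].mean(axis=0) for lab in set(labels)}

def classify(centroids, x):
    """Assign a new call's feature vector to the nearest centroid's label."""
    return min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))

# Synthetic 2-D "call features" for two interaction types.
rng = np.random.default_rng(0)
food = rng.normal([0.0, 0.0], 0.1, size=(20, 2))
sleep = rng.normal([1.0, 1.0], 0.1, size=(20, 2))
X = np.vstack([food, sleep])
y = np.array(["food_dispute"] * 20 + ["sleep_dispute"] * 20)

centroids = train_centroids(X, y)
pred = classify(centroids, np.array([0.05, -0.02]))
```

Real bioacoustic pipelines extract spectral features (e.g., mel spectrograms) and use far richer models, but the train-on-labeled-interactions, classify-new-calls loop is the same.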
Years of research in Miller’s lab, largely led by lead author Mikael Lundqvist, who now works at Karolinska, has shown that working memory tasks are driven by an interplay of brain rhythms at different frequencies. Slower beta waves carry information about task rules, selectively giving way to faster gamma waves when it comes time to perform operations such as storing information from the senses or reading out information when retrieval is required.newsbeezer.com, 4d ago
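The beta/gamma distinction can be illustrated with a simple spectral measurement. A minimal sketch, assuming a synthetic local-field-potential trace rather than real recordings: power is summed over the conventional beta (13–30 Hz) and gamma (30–100 Hz) bands of the FFT power spectrum.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total power in the [lo, hi] Hz band of the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

fs = 1000  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Strong 25 Hz beta component plus a weaker 60 Hz gamma component.
lfp = 2.0 * np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

beta = band_power(lfp, fs, 13, 30)
gamma = band_power(lfp, fs, 30, 100)
```

In studies like Lundqvist and Miller's, the interesting quantity is how this beta/gamma balance shifts over the course of a working-memory trial, not the static ratio shown here.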
A new type of sensor could lead to artificial skin that someday helps burn victims 'feel' and safeguards the rest of us. Researchers wanted to create a sensor that can mimic the sensing properties ...ScienceDaily, 4d ago
Future computing technologies will repeatedly challenge Moore’s Law, with quantum computing as the ultimate disruption point. By 2025, the industry will have powerful processors in mobile devices capable of running trained deep networks for cognitive functions such as vision, speech, and security. Industries of the future will see humans and robots working collaboratively through sensor-equipped human-machine interfaces...Frost & Sullivan, 6d ago
Neurosurgery carries potentially fatal risks. To improve surgical safety, the Mechatronics in Medicine (MIM) lab at Imperial College London has developed a new minimally invasive neurosurgery robot: a soft, compliant needle capable of steering through the soft tissue of the brain. Its design is inspired by ovipositing wasps, which can steer their ovipositors through different tissues toward a target point. Beyond the robot itself, the MIM lab is also developing new technologies for surgical planning and execution. In the future, the robot and the accompanying planning and control techniques should allow neurosurgery to produce less trauma, avoid critical functional areas of the brain, and reach targets deep in the brain despite the vibrating and deforming motion of a living organ.Imperial College London, 6d ago
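Needle steering of this kind is often modeled with nonholonomic kinematics. The sketch below is a generic 2-D unicycle-like pursuit model, not the MIM lab's controller; the step size, turning limit, and stopping threshold are arbitrary illustrative values.

```python
import numpy as np

def steer_step(pos, heading, target, speed=1.0, max_turn=0.1):
    """One insertion step of a nonholonomic (unicycle-like) needle model:
    turn toward the target by at most `max_turn` rad, then advance."""
    desired = np.arctan2(target[1] - pos[1], target[0] - pos[0])
    # Wrap the heading error into (-pi, pi] before clipping the turn rate.
    err = (desired - heading + np.pi) % (2 * np.pi) - np.pi
    heading = heading + np.clip(err, -max_turn, max_turn)
    pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading

pos, heading = np.array([0.0, 0.0]), 0.0
target = np.array([30.0, 10.0])
for _ in range(60):
    pos, heading = steer_step(pos, heading, target)
    if np.linalg.norm(target - pos) < 1.0:  # close enough to the target
        break
```

A real steerable needle must additionally account for tissue deformation and curvature limits imposed by the needle's mechanics, which is exactly where the planning and control work described above comes in.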

Top

In robotics and virtual reality research, haptics is defined broadly as real and simulated touch interactions among robots, humans, and real, remote, or simulated environments in various combinations. The chief aim of haptic technology in robot-assisted minimally invasive surgery is to provide “transparency”: the physician should not feel that he or she is operating a remote mechanism, but rather that his or her own hands are in contact with the patient. This requires artificial haptic sensors on the patient-side robot to acquire haptic information, together with haptic displays that convey that information to the physician.WritingUniverse, 19d ago
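The patient-to-physician force path can be sketched as a single scaling-and-clamping step. This is an illustrative fragment, not a real teleoperation controller; `scale` and `max_display_n` are assumed device limits, not parameters of any actual surgical system.

```python
def reflect_force(sensed_force_n, scale=0.5, max_display_n=4.0):
    """Scale the patient-side sensed force (in newtons) and clamp it to the
    range the physician-side haptic display can safely render."""
    f = scale * sensed_force_n
    return max(-max_display_n, min(max_display_n, f))

# A patient-side sensor reading of 3 N is rendered at half scale: 1.5 N.
rendered = reflect_force(3.0)
```

A real system runs this reflection at kilohertz rates with filtering and stability safeguards; the clamp matters because an unbounded display force is both a safety hazard and a source of instability in the teleoperation loop.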
Liu's team, which specializes in engineering nanoelectronics to bridge the gap between living tissue and electronics, has developed several mesh-like, minimally invasive flexible nanoelectronic sensors designed to be embedded within natural tissues without disturbing normal cellular growth or function.phys.org, 13d ago