The goal of ambient AI is to make technology more intuitive and adaptive to human behavior, rather than requiring explicit input or commands from users. This can include smart home devices that adjust lighting and temperature based on user behavior, personalized health and wellness monitoring systems, and intelligent transportation systems that can optimize traffic flow and reduce congestion.techstartups.com, 1d ago
Another study, published in the journal JAMA Network Open, showed that an AI system developed by scientists at the University of British Columbia and BC Cancer was able to predict cancer patients' survival rates using doctors' notes. The model uses natural language processing (NLP), a branch of AI that can interpret complex human language. It analyzes doctors' notes after an initial consultation visit and identifies characteristics specific to each patient.bbntimes.com, 8h ago
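As a rough illustration of that pipeline, the sketch below trains a bag-of-words text classifier to predict a survival label from toy consultation notes. The notes, labels, and model choice are all invented for illustration; the published system is far more sophisticated.

```python
# Hypothetical sketch: survival prediction from free-text notes via a
# TF-IDF bag-of-words baseline. All data here is synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "stage I tumour, good performance status, asymptomatic",
    "widespread metastases, significant weight loss, poor appetite",
    "small localized lesion, no nodal involvement",
    "advanced disease, severe fatigue, declining function",
]
survived = [1, 0, 1, 0]  # toy six-month survival labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, survived)
print(model.predict_proba(["localized tumour, patient asymptomatic"])[0, 1])
```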
Drop by this futuristic demo to find out how you can build your very own metaverse – simply by finger scribbling in the air. This bleeding-edge research from the SketchX research lab (University of Surrey) uses the latest AI technologies in the form of deep neural networks, which convert your rough scribbles into realistic 3D objects. Visit Yi-Zhe Song’s (Centre for Vision, Speech and Signal Processing) demo for the chance to wear a VR headset and experience this immersive tech reality first-hand.turing.ac.uk, 8h ago
Building artificial systems that see and recognize the world similarly to human visual systems is a key goal of computer vision. Recent advancements in population brain activity measurement, along with improvements in the implementation and design of deep neural network models, have made it possible to directly compare the architectural features of artificial networks to those of biological brains’ latent representations, revealing crucial details about how these systems work. Reconstructing visual images from brain activity, such as that detected by functional magnetic resonance imaging (fMRI), is one of these applications. This is a fascinating but difficult problem because the underlying brain representations are largely unknown, and the sample size typically used for brain data is small.MarkTechPost, 1d ago
Noninvasive Sensors for Brain-Machine Interfaces Based on Micropatterned Epitaxial Graphene...Robo Daily, 1d ago
..., scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen. As this technology develops, researchers say, it could have numerous applications, from exploring how various animal species perceive the world to perhaps one day recording human dreams and aiding communication in people with paralysis. The AI algorithm makes use of information gathered from different regions of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, a systems neuroscientist at Osaka University who worked on the experiment. The system interpreted information from functional magnetic resonance imaging (fMRI) brain scans, which detect changes in blood flow to active regions of the brain. When people look at a photo, the temporal lobes predominantly register information about the contents of the image (people, objects, or scenery), whereas the occipital lobe predominantly registers information about layout and perspective, such as the scale and position of the contents. All of this information is recorded by the fMRI as it captures peaks in brain activity, and these patterns can then be reconverted into an imitation image using AI.slashdot.org, 1d ago
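A conceptual sketch of that two-stream decoding idea follows: two linear decoders map (here synthetic) voxel patterns from the temporal and occipital regions onto separate semantic and layout feature spaces, which a generative image model would then turn into a picture. The array shapes and ridge penalty are arbitrary placeholders, not the Osaka group's actual setup.

```python
# Conceptual sketch only: linear decoding of fMRI voxels into two feature
# streams (semantic content vs. layout), using synthetic data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 1000
temporal_voxels = rng.standard_normal((n_trials, n_voxels))   # content-coding region
occipital_voxels = rng.standard_normal((n_trials, n_voxels))  # layout-coding region
semantic_features = rng.standard_normal((n_trials, 77))  # placeholder text-embedding dims
layout_features = rng.standard_normal((n_trials, 64))    # placeholder latent-image dims

semantic_decoder = Ridge(alpha=100.0).fit(temporal_voxels, semantic_features)
layout_decoder = Ridge(alpha=100.0).fit(occipital_voxels, layout_features)

# At test time, the two decoded feature vectors would jointly condition a
# generative model that renders the reconstructed image.
print(semantic_decoder.predict(temporal_voxels[:1]).shape,
      layout_decoder.predict(occipital_voxels[:1]).shape)
```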

...“All of the quantum computers currently available on Amazon Braket, the quantum computing service of AWS, started in the labs of experimental physicists. Innovation on these complex systems requires constant iteration on device design, fabrication methods, and control techniques. These devices require carefully isolated environments and delicate, complex components to facilitate interactions. The components themselves are often quite expensive, consisting of microwave, laser, and/or refrigeration technologies custom built in the lab or from boutique manufacturers. These factors all contribute to increase the resources required to build and experiment on quantum devices.HPCwire, 9h ago
Similarly, machine learning researchers might develop new algorithms that are more efficient, accurate, or can handle larger datasets. For instance, deep learning architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have revolutionized the field of image and speech recognition, natural language processing, and autonomous driving.bbntimes.com, 8h ago
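For readers unfamiliar with the architecture being referenced, here is a minimal CNN classifier sketch in PyTorch; the layer sizes are arbitrary and chosen only to show the convolution-pool-classify pattern credited here with advances in image recognition.

```python
# Minimal sketch of a convolutional neural network (CNN) in PyTorch.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # sized for 32x32 inputs

    def forward(self, x):  # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

print(TinyCNN()(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10])
```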

...limiting the intelligent systems' area and energy efficiencies. As possible solutions to these problems, memristors, whose resistance can be dynamically reconfigured, not only enable analog in-memory computing to improve the power consumption, latency, and area of neuromorphic chips but also model biological synapses and neurons in a single device for spike-encoded neural networks (SNNs). Both nonvolatile memristors, whose conductance is retained after the electrical bias is removed, and volatile memristors, whose conductance relaxes back to OFF states upon removing the bias after ON switching, have been demonstrated in hardware implementations of artificial neural networks (ANNs) and SNNs. Nonvolatile memristors are mostly used as in-memory computing components in crossbar arrays to accelerate the vector-matrix multiplications (VMMs) in ANNs, while volatile memristors are mainly used to emulate the dynamic behaviors of synapses and neurons in SNNs, which mimic the physics of the human brain and neural system. The breakthroughs in memristor devices laid a solid foundation for neuromorphic systems in analog in-memory computing and brain dynamics modeling.AIP Publishing, 9d ago
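To make the crossbar-VMM point concrete, the toy calculation below mimics what a nonvolatile crossbar computes physically: programmed conductances act as the weight matrix, applied row voltages as the input vector, and Kirchhoff's current law sums the per-column currents in a single step. The numbers are arbitrary.

```python
# Toy model of the analog vector-matrix multiply (VMM) a memristor crossbar
# performs: conductances G encode the weights, row voltages V encode the
# input, and per-column currents I = G^T V appear in one step.
import numpy as np

G = np.array([[1.0, 0.2],        # siemens-scale conductances, 3 rows x 2 columns
              [0.5, 0.8],
              [0.1, 0.9]]) * 1e-6
V = np.array([0.3, 0.1, 0.2])    # volts applied to the 3 rows

I = G.T @ V                      # column currents = the VMM result, in amperes
print(I)
```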
Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products ranging from implantable, wearable medical and imaging devices to bioinformatics and pharmaceutical products for a variety of multinational medical companies. He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g. pacemakers, cochlear and retinal implants) as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression and other illnesses. His experience in healthcare also extends to image processing for Computed Tomography (CT) and iX-Ray (Interventional X-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG and ACC. Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.aihwedgesummit.com, 19d ago
Co-founded by Musk in 2016, the San Francisco-based Neuralink is working towards improving the brain-machine implant process until the procedure becomes as seamless as Lasik. The Neuralink device can make anyone superhuman by connecting their brains to a computer.techstartups.com, 19d ago
Graphene, with its properties of extreme thinness and strength, flexibility, biocompatibility, and conductivity, is uniquely well-placed to be developed into chips that can be implanted into the brain. These chips have the potential to compile information about brain activity and correct deviations in neural circuits, relieving symptoms of neurological illnesses including Parkinson’s Disease, Epilepsy and Aphasia, and providing more peripheral nerve-related indications in more systemic disease areas where selective recording and stimulation are needed.Printed Electronics Now, 16d ago
Researchers have recently been pursuing technologies for universal speech recognition and interaction that can work well with subtle sounds or noisy environments. Multichannel acoustic sensors can improve the accuracy of recognition of sound but lead to large devices that cannot be worn. To solve this problem, we propose a graphene-based intelligent, wearable artificial throat (AT) that is sensitive to human speech and vocalization-related motions. Its perception of the mixed modalities of acoustic signals and mechanical motions enables the AT to acquire signals with a low fundamental frequency while remaining noise resistant. The experimental results showed that the mixed-modality AT can detect basic speech elements (phonemes, tones and words) with an average accuracy of 99.05%. We further demonstrated its interactive applications for speech recognition and voice reproduction for the vocally disabled. It was able to recognize everyday words vaguely spoken by a patient with laryngectomy with an accuracy of over 90% through an ensemble AI model. The recognized content was synthesized into speech and played on the AT to rehabilitate the capability of the patient for vocalization. Its feasible fabrication process, stable performance, resistance to noise and integrated vocalization make the AT a promising tool for next-generation speech recognition and interaction systems.interestingengineering.com, 13d ago
Musk co-founded Neuralink in 2016 as a brain-computer interface company. The firm plans to implant chips into human brains, which would allow people to perform tasks using only their mind. The billionaire has said in the past that Neuralink's chips — which are coin-sized devices designed to be implanted in the brain via a surgical robot — could one day do anything from cure paralysis to give people telepathic powers,...Business Insider, 18d ago

The challenge for the Institute is two-fold. Firstly, it must develop and miniaturise new quantum sensing technologies. “In order to do this, we have to take the lasers and photonics and modulators and control electronics that make up 90% of the atom experiments here on Earth, and work really hard to get all that precision onto small, low-power chips that can be deployed in space,” said Daniel Blumenthal, a UC Santa Barbara professor of electrical and computer engineering, whose expertise lies in quantum photonic integration, optical and communications technologies. He will be working on developing the PICs for the compact chips designed to measure small variations in Earth’s gravity from space. This will involve moving a shaken lattice interferometer structure developed at the University of Colorado down to the chip scale. This type of atomic interferometer sensor uses many lasers and optics to cool and trap the atoms to measure gravity gradients with extremely high sensitivity.electrooptics.com, 2d ago
In the team's research, they manipulated incoherent light by using artificially structured materials called metasurfaces, made from tiny building blocks of semiconductors called meta-atoms that can be designed to reflect light very efficiently. Although metasurfaces had previously shown promise for creating devices that could steer light rays to arbitrary angles, they also presented a challenge because they had only been designed for coherent light sources. Ideally, one would want a semiconductor device that can emit light like an LED, steer the light emission to a set angle by applying a control voltage and shift the steering angle at the fastest speed possible.Space Daily, 1d ago
As noted previously, assistants like Siri and Alexa already use speech recognition models similar to Whisper. In the future, there could theoretically be a Whisper-powered virtual assistant that is highly accurate at understanding different languages and accents.Techopedia, 1d ago
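For context, the open-source Whisper package already exposes exactly this kind of multilingual speech-to-text step; a minimal sketch (assuming `pip install openai-whisper`, ffmpeg on the PATH, and a placeholder audio file name) looks like this:

```python
# Minimal use of the open-source Whisper model as a speech-to-text front end.
# "command.mp3" is a placeholder file name, not a real asset.
import whisper

model = whisper.load_model("base")        # small multilingual checkpoint
result = model.transcribe("command.mp3")  # language is auto-detected
print(result["text"])
```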
The whole biosensor apparatus, which relies on a customized Microsoft HoloLens kit for reading brain waves and issuing commands, is able to process and execute as many as nine commands in just two seconds. Developed with assistance from the country's defense experts, the tech is said to work "outside laboratory settings, anytime, anywhere," ending the role of conventional input devices like keyboards, touch screens, or machine vision gesture recognition. The dry wearable biosensor is a combination of graphene and silicon, which makes it conductive and durable as well as corrosion-resistant. The team behind the innovation also mentions that the biosensor can be deployed in extreme weather conditions.SlashGear, 1d ago
MechSense could enable engineers to rapidly prototype devices with rotating parts, like turbines or motors, while incorporating sensing directly into the designs. It could be especially useful in creating tangible user interfaces for augmented reality environments, where sensing is critical for tracking a user’s movements and interaction with objects...eeNews Europe, 1d ago
Working with applied statistician Haiyan Huang, a UC Berkeley professor, the researchers developed deep learning methods to match natural protein properties with plastic polymer properties in order to design an artificial polymer that functions similarly, but not identically, to the natural protein.phys.org, 1d ago

The technique could create flexible soft robots with embedded sensors that can understand their own posture and movements, or wearable devices that deliver feedback on how a person is moving or interacting with the environment, according to Lillian Chin, a graduate student involved in the project at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).Digital Engineering, 11d ago
It’s called organoid intelligence, or OI, and it uses actual human brain cells to make computing “more brain-like.” OI revolves around using organoids, or clusters of living tissue grown from stem cells that behave similarly to organs, as biological hardware that powers algorithmic systems. The hope—over at Johns Hopkins, at least—is that it’ll facilitate more advanced learning than a conventional computer can, resulting in richer feedback and better decision-making than AI can provide.insidehighered.com, 7d ago
Since our brains communicate through electric signals, Neuralink will implant electrodes near the neurons to detect action potentials. Action potentials cause synapses to release neurotransmitters, and the implant may record them and decode what the brain intends to do.iTech Post, 18d ago

Ultrananocrystalline diamond (UNCD) films were demonstrated as a hermetic, biocompatible, bioinert coating for encapsulating a Si microchip (“Artificial Retina”), implantable on the human eye’s retina, to restore partial vision to people blinded by retinitis pigmentosa. Ten years of R&D (2000-2010) by a group of scientists, engineers, biologists, medical doctors and surgeons (four universities, five national laboratories, and a USA company, Second Sight) resulted in the Argus II device (currently without the UNCD-coated Si chip, which still needs FDA approval), implanted in hundreds of blind people in the USA and Europe, returning partial vision.Open Access Government, 1d ago
During robotic surgery, every snip, clamp and stitch generates massive amounts of video data and kinematic data tracing the surgeon’s movements. AI can analyze this data to give surgeons feedback on instrument movement speed, distances traveled and wrist angulation during the robotic surgery. Using data and expertise from Hung and his group, Liu and her team have developed algorithms that teach the computer to learn as it is fed thousands of these data points.USC News, 1d ago
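The two feedback metrics named above fall out of simple geometry on the kinematic trace; a minimal sketch with a synthetic instrument-tip trajectory (not USC's actual algorithms) follows:

```python
# Illustrative only: path length and instantaneous speed computed from a
# synthetic, time-stamped instrument-tip trajectory.
import numpy as np

t = np.linspace(0.0, 2.0, 201)                          # seconds, sampled at 100 Hz
tip = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])  # x, y, z positions in cm

steps = np.linalg.norm(np.diff(tip, axis=0), axis=1)    # distance per sample
path_length = steps.sum()
speed = steps / np.diff(t)                              # cm/s per sample
print(f"path {path_length:.2f} cm, mean speed {speed.mean():.2f} cm/s")
```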
Mur's team used magnetoencephalography (MEG), a non-invasive medical test that measures the magnetic fields produced by the brain's electrical currents. By using MEG data acquired from human observers during object viewing, the team detected a key point of failure in deep learning: readily nameable parts of objects, such as “eye,” “wheel,” and “face,” can account for variance in human neural dynamics over and above what deep learning can deliver.aimagazine.com, 1d ago
The Discovery device enables “seamless integration of the digital and real-world”, uses distributed computing to offer “a retina-level adaptive display”, supports “micro gesture interaction” and can be paired with compatible devices using NFC.NFCW, 1d ago
The intelligent cameras are now able to detect anomalies independently and thereby optimise quality assurance processes. For this purpose, users train a neural network that is then executed on the programmable cameras. To achieve this, IDS offers the AI Vision Studio IDS NXT lighthouse, which is characterised by easy-to-use workflows and seamless integration into the IDS ecosystem. Customers can even use only ‘GOOD’ images for training. This means that relatively little training data is required compared to the other AI methods of object detection and classification. This simplifies the development of an AI vision application and is well-suited for evaluating the potential of AI-based image processing for projects in the company.electronicspecifier.com, 1d ago
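IDS has not published how the 'GOOD'-images-only training works internally, but the standard pattern behind that idea is reconstruction-based anomaly detection: fit an autoencoder on defect-free images and flag anything it reconstructs poorly. A toy sketch under that assumption:

```python
# Reconstruction-based anomaly detection: train on 'GOOD' images only, then
# flag inputs whose reconstruction error exceeds a threshold calibrated on
# the good set. All data is random stand-ins.
import torch
import torch.nn as nn

ae = nn.Sequential(                      # toy autoencoder for 64x64 grayscale images
    nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
good = torch.rand(32, 1, 64, 64)         # stand-in for defect-free training images

for _ in range(200):                     # learn to reconstruct good images
    loss = ((ae(good) - good) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    errors = ((ae(good) - good) ** 2).mean(dim=(1, 2, 3))
    threshold = errors.mean() + 3 * errors.std()   # calibrated on 'GOOD' only
    test = torch.rand(1, 1, 64, 64)
    print(bool(((ae(test) - test) ** 2).mean() > threshold))
```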
For these technologies to materialize, it is necessary to individually control millions of qubits at high speeds to avoid decoherence, i.e., the loss of quantum information due to interaction with the environment. There is, however, a trade-off between scalability and programmability, i.e., qubits that can scale are difficult to control and vice versa. In this talk, I will discuss our efforts towards scalable control over optically addressable qubits, e.g., Rydberg atoms, color centers, and trapped ions. Spatial Light Modulators (SLMs) are optoelectronic devices that can provide programmable control over millions of spatial optical modes, and thus they are suitable for scalable optical control over qubits. However, the modulation speeds of existing SLMs are less than 1 kHz, which is incompatible with quantum control, as decoherence can occur on millisecond to microsecond timescales. I will present our efforts to realize nanophotonic-based SLMs that can operate at GHz speeds and scale to millions of pixels. I will also present a new approach to realize nanophotonic devices in existing CMOS foundry processes, e.g., TSMC or Intel, with minimal post-processing for MHz-speed SLMs. In addition to their use in quantum control, high-speed SLMs will find applications in imaging, video holography, optical accelerators and neural networks, in-vivo imaging through scattering media, and cancer therapy, to name a few.SciTech Institute, 3d ago

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to...schneier.com, 20d ago
In public comments over the years, Musk has detailed a bold vision for Neuralink: Both disabled and healthy people will pop into neighborhood facilities for speedy surgical insertions of devices with functions ranging from curing obesity, autism, depression or schizophrenia to web-surfing and telepathy. Eventually, Musk has said, such chips will turn humans into cyborgs who can fend off the threat from sentient machines powered by artificial intelligence.GreatGameIndia, 17d ago
How tiny wireless robots can be used for non-invasive, precise, and safe medical diagnosis and treatment Robot technology is already abundant in the medical domain, ranging from assistance in surgery to complex prosthetics. However, using tiny robots inside the body is still a fairly novel thing to do. Inspired by…...Falling Walls, 18d ago
...reported, citing seven unnamed current and former employees. The FDA questioned how Neuralink’s brain chip technology can be implanted or removed without damaging the organ. The regulator also considered the potential for the wires that Neuralink uses to pass electrical signals from the brain to the computer to have unintended effects on movement or feeling.Fortune, 18d ago
Liu's team, which specializes in engineering nanoelectronics to bridge the gap between living tissue and electronics, has developed several mesh-like, minimally invasive flexible nanoelectronic sensors designed to be embedded with natural tissues without disturbing normal cellular growth or function.phys.org, 13d ago
EPFL researchers have focused on the areas of low-power chip design, machine learning algorithms, and soft implantable electrodes to create a neural interface capable of identifying and mitigating symptoms of various neurological disorders.newsbeezer.com, 20d ago

...for the role in Fremont, California, says. “You will lead and help build the team responsible for enabling Neuralink’s clinical research activities and developing the regulatory interactions that come with a fast-paced and ever-evolving environment.” Musk, the world’s richest person with an estimated $256bn fortune, said last month he was cautiously optimistic that the implants could allow tetraplegic people to walk. “We hope to have this in our first humans, which will be people that have severe spinal cord injuries like tetraplegics, quadriplegics, next year, pending FDA [Food and Drug Administration] approval,” he told the Wall Street Journal’s CEO Council summit. “I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury. Neuralink’s working well in monkeys, and we’re actually doing just a lot of testing and just confirming that it’s very safe and reliable and the Neuralink device can be removed safely.” However, Musk has a history of overpromising about the speed of the company’s development. In 2019 he predicted that the device would be implanted into a human skull by 2020. Musk said the device would be “implanted flush with skull & charges wirelessly, so you look and feel totally normal”. He said people should think of the technology as similar to “replacing faulty/missing neurons with circuits”. “Progress will accelerate when we have devices in humans (hard to have nuanced conversations with monkeys) next year,” he...Inferse.com, 2d ago
The Agilis Robotics system is designed to make endoscopic and endoluminal surgery easier, faster, and more precise. The ultra-thin instruments are as small as 2.5 mm in diameter and provide clinicians with unprecedented levels of dexterity within natural orifices of the body such as the gastrointestinal and urinary tract. The robot is controlled by a clinician who uses a pen-like controller to manipulate the robot's movements, which, when combined with artificial intelligence (AI)-enhanced image guidance, can greatly reduce the learning curve for doctors performing endoscopic submucosal dissection (ESD) procedures. Ultimately, this will increase the procedure's effectiveness and improve patient outcomes.prnewswire.co.uk, 1d ago

Now, researchers at Google Quantum AI have taken an important step forward by creating a surface code scheme that should scale to the required error rate. Their quantum processor consists of superconducting qubits that make up either data qubits for operation or measurement qubits that are adjacent to the data qubits and can either measure a bit flip or a phase flip – two types of error that affect qubits.CoinGenius, 1d ago
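As a greatly simplified illustration of what those measurement qubits do, the sketch below simulates a 3-qubit repetition code, the one-dimensional ancestor of the surface code: parity checks on adjacent data qubits localize a bit flip without reading the data itself. (A real surface code adds a second stabilizer type for phase flips and runs on a 2-D lattice of superconducting qubits.)

```python
# 3-qubit repetition code toy: parity ('stabilizer') measurements on
# adjacent data qubits reveal and localize a bit-flip error.
import numpy as np

data = np.array([0, 0, 0])   # logical |0> spread across three data qubits
data[1] ^= 1                 # inject a bit-flip error on the middle qubit

syndrome = [data[0] ^ data[1], data[1] ^ data[2]]  # what measurement qubits report
print(syndrome)              # [1, 1] -> both checks fire: the middle qubit flipped
```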
Moreover, its content generators facilitate a more human-like output, resulting in robustly intelligent content. The content so generated is plagiarism-free original text and images. The voice-to-text input system of Avorak AI is powered by a natural language processing mechanism, which involves computers understanding and analyzing input to generate customized output according to individual user preferences. This technology allows the AI system to comprehend natural language inputs from users, thus improving the overall user experience.CryptoNewsZ, 1d ago
...that details a communication network that would link rovers, lake landers, and even submersible vehicles through a so-called mesh topology network, allowing the machines to work together as a team, independently from human input. According to Fink and his co-authors, the approach could help address one of...TodayHeadline, 1d ago
The Akida platform has been developed by BrainChip, a company which develops edge AI on-chip processing and learning. Akida uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with efficiency, precision, and economy of energy. The 2nd generation of Akida can enhance vision, perception, and predictive applications in markets such as industrial and consumer IoT and personalized healthcare.Electronics For You, 1d ago
...)-- Southwestern Hearing Centers is proud to offer Audibel’s latest hearing advancement, Intrigue AI. It is hearing reimagined from the inside out, featuring an all-new processor, all-new sound, all-new industrial design, all-new fitting software, and an all-new patient experience.
“Southwestern Hearing Centers understands the significant role hearing plays in our emotional well-being and physical health. Intrigue AI is the best sounding, best performing hearing aid available, offering infinite benefits to patients,” said Cindy Marino, COO, Southwestern Hearing Centers. “We’re here to help our patients every step of the way. Through our partnership with Audibel, we aim to better serve our patients, helping them stay connected to the world around them so they can hear better and live better.”
All-New Processor: The all-new Neuro Processor technology mimics the function of the central auditory system through a Deep Neural Network (DNN) on-chip accelerator and automatic functions. All-new Intrigue AI hearing aids mimic the cerebral cortex of the brain to more quickly and accurately “fill in” the gaps when patients’ hearing is impaired. The processor makes over 80 million personalized adjustments every hour, all designed to help wearers:
· Distinguish words and speech more intuitively and naturally
· Hear soft sounds without distracting noise
· Reduce the effort it takes to listen and hear
The AI inside delivers more true-to-life sound quality than ever before.
All-New Sound: Audibel’s new Neuro Sound Technology provides the best hearing experience for patients in all situations. The additive compression system synthesizes the signals from slow and fast compression systems for optimized perceptual outcomes, like the neural fibers that code different information for the brain.
“By spending countless hours with hearing professionals and patients, researching and analyzing every element of the hearing journey, we relentlessly pursued how to bring the best hearing innovation to professionals and patients in a simple and intuitive way,” said Achin Bhowmik, Ph.D., Chief Technology Officer and Executive Vice President of Engineering at Audibel. “Our all-new, powerful processor was designed to work like the human brain, leveraging the neuroscience of the ear-brain connection and information processing to create better sound quality, pushing artificial intelligence to its limits.”
All-New Design: Intrigue AI features an all-new discreet and stylish product design that is durable and comfortable for all-day wear, and which helps break barriers and reduce the stigma around today’s hearing care technology.
· Intrigue AI includes RIC RT, the industry’s longest-lasting RIC rechargeable hearing aid on the market. The battery holds up to 50 hours on a single charge.
· The new mRIC R has the second longest-lasting RIC rechargeable battery life, with up to 40 hours on a single charge.
· The industry-first custom rechargeable product has the highest custom battery life in the industry, with up to 36 hours on a single charge.
All-New Patient Experience: The new My Audibel App gives patients full control over their hearing aids, plus the ability to get helpful tips, track their health, and access intelligent features designed to simplify their lives.
Audibel leads the hearing industry in incorporating health and wellness features into hearing aids: it was the first to integrate 3D sensors; the first to enable counting steps; the first to track and encourage social engagement; and the first to provide benefits that went beyond just better hearing. Audibel was also the first hearing manufacturer, and still the only one, to make hearing aids that can detect falls and send alerts.
Intrigue AI’s improved streaming capabilities utilize binaural phone streaming, sharing information with both ears directly and simultaneously. This supports two-way, hands-free calling through compatible Apple and Android devices and makes it easier for patients to enjoy their favorite music with more natural results.
About Southwestern Hearing Centers: Southwestern Hearing is a family-owned business with more than 75 years and 3 generations of experience in the hearing industry. Southwestern Hearing experts know a patient’s quality of life directly relates to their level of hearing loss. Experts focus on a high level of patient care and support. This is evident in the thousands of 5-star reviews given directly by the patients they have served.PR.com, 2d ago
Brain implants that translate paralyzed patients’ thoughts into speech…...STAT, 2d ago

As AI continues to advance, the possibility of singularity becomes increasingly probable. Neuralink, a project created by Elon Musk’s team of scientists and engineers, is an example of how artificial intelligence can enhance human capabilities. The brain chips developed by Neuralink can help disabled individuals move, communicate, and restore vision. However, regulatory bodies have thus far prevented human trials of the technology.Wonderful Engineering, 4d ago
Resistive memories are essentially tunable resistors, which are devices that resist the passage of electrical current. These resistor-based memory solutions have proved to be very promising for running artificial neural networks (ANNs). This is because individual resistive memory cells can both store data and perform computations, addressing the limitations of the so-called von Neumann bottleneck.HPCwire, 5d ago
The Data Storage System for Automated Driving (DSSAD), which acts as a black box when used solely for R&D purposes in the vehicle, shall be utilised to capture vehicle data covering user inputs and autonomous driving (AD) function behavior. This data shall be used offline during research to understand, enhance and fine-tune AI-based learning from a known user's driving behavior and interaction with the AD driving task within the operational design domain (ODD).Autocar Professional, 2d ago
Another area in which AI and machine learning can be very helpful for precision medicine involves embedding machine learning into wearable and implantable devices. Very quickly we could train algorithms to the baseline of an individual patient and also to other patients. We could then use the devices to predict problems or detect dangerous conditions.pasadenanow.com, 2d ago
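A minimal sketch of that train-to-an-individual-baseline idea follows: model a patient's resting heart rate, then flag readings that deviate sharply. All numbers and thresholds here are invented for illustration, not clinical guidance.

```python
# Illustrative per-patient baseline anomaly detector on synthetic heart-rate data.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(62, 3, size=7 * 24 * 60)  # a week of per-minute HR samples
mu, sigma = baseline.mean(), baseline.std()

def is_alarming(hr_reading, z_threshold=4.0):
    """Flag a reading more than z_threshold standard deviations from baseline."""
    return abs(hr_reading - mu) / sigma > z_threshold

print(is_alarming(64), is_alarming(120))  # False, True
```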
Dr John Hardy, one of the lead authors of the study, said this approach could potentially transform the manufacture of complex 3D electronics for technical and medical applications — including structures for communication, displays, and sensors. Such approaches could also be used to fix broken implanted electronics through a process similar to laser dental/eye surgery. In a two-stage study, the researchers used a Nanoscribe (a high-resolution laser 3D printer) to 3D print an electrical circuit directly within a silicone matrix (using an additive process). They demonstrated that these electronics can stimulate mouse neurones in vitro (similar to how neural electrodes are used for deep brain stimulation in vivo).electronicsonline.net.au, 6d ago

Inspired by the hardiness of bumblebees, researchers have developed techniques that enable a bug-sized aerial robot to sustain severe damage to the artificial muscles (actuators) that power its wings, and still fly effectively.Cosmos, 2d ago
Generative AI has been evolving at a rapid pace since its introduction. The development of Large Language Models (LLMs) is one of the major reasons for the sudden growth in the recognition and popularity generative AI is receiving. LLMs are AI models that are designed to process natural language and generate human-like responses. OpenAI’s GPT-4 and Google’s BERT are great examples that have made significant advances in recent years, from the development of chatbots and virtual assistants to content creation. Some of the domains in which generative AI is being used are content creation, development of virtual assistants, human-imitating chatbots, gaming, and so on. Generative AI is also used in the healthcare industry to generate personalized treatment plans for patients, improve the accuracy of medical diagnoses, etc.MarkTechPost, 2d ago
...was awarded to Professor Lu for his discoveries that could lead to advances ranging from 3D cameras for smartphones to more efficient satellite electronics and space missions. Professor Lu's team at ANU has developed new types of atomically thin 2D materials and devices with peculiar optical and electronic properties, enabling new applications in electronics, photonics and space. These novel materials facilitate devices that are significantly smaller, less massive, and require much lower power to operate.ANU, 4d ago

Current strategies to communicate with voice-disabled patients rely on microphones attached to the body, but that is often unwieldy and considered a “non-friendly” user setup. Graphene sensors are ideally suited for detection of tiny vibrations on skin surfaces. The device “can sense muscle motions and audio vibrations transmitted to the surface of the skin” and “convert recognizable mechanical information into speech,” said research team leader Professor Ren Tianling. The device also works without interference from noisy, hostile environments such as highways, fire disasters and airplane cockpits.Electronics For You, 12d ago
...Our brainwave activity can be monitored and modified by neurotechnology. Devices with electrodes placed on the head can record neural signals from the brain and apply low electric current to modulate them. These “wearables” are finding traction not only with consumers who want to track and improve their mental wellness but with companies, governments and militaries for all sorts of other uses. Meanwhile, firms such as Elon Musk’s...the Guardian, 17d ago
A spin-out from ETHZ, Nanoflex Robotics specializes in developing soft robotic systems for medical interventions. The startup has developed magnetic navigation tech that enables precise insertion of specially made guidewires and catheters deep into the brain, and they also provide associated surgical devices. By developing these precision tools, the young startup is able to enhance surgical procedures for better outcomes, specifically focused on the treatment of Ischemic and Hemorrhagic stroke. So far, the MedTech startup has secured €18.85 million.EU-Startups, 7d ago
...for telepresence, teleoperation, and virtual reality has long been focused on creating devices that enable users to interact with virtual three-dimensional (3D) items or surroundings floating in midair without any physical objects. By concentrating ultrasound output from phased arrays of transducers, emerging 3D holographic haptic displays can provide such tactile feedback in midair. The skin is deformed by nonlinear acoustic...Tech Explorist, 18d ago
Kosmos-1 is the latest development in Microsoft’s efforts to create artificial intelligence (AI) systems that are capable of understanding the complexities of human language and conversation. By incorporating vision processing and natural language understanding, the model is able to comprehend the visual world and its relationship with language. Microsoft believes that this technology can enable machines to better understand human conversation and make more complex decisions. These advances could eventually lead to more advanced AI applications such as autonomous robots, medical diagnosis, and natural language processing.OODA Loop, 19d ago
The organ-on-chip market covers multichannel 3D microfluidic cell culture devices, which simulate the activities, mechanisms, and physiological responses of human organs. These chips form narrow channels for blood and air flow in organs including the lung, gut, liver, and heart. The devices are developed on a microchip that contains continuously perfused chambers colonized by living cells, arranged to simulate tissue- and organ-level physiology. It is used to nurture internal organs...openPR.com, 13d ago

Sony has teamed up with Retissa, a company that has designed technology to aid people with low vision, to come up with a camera kit specifically for those who have visual impairments. The Sony DSC-HX99 RNV is made up of the HX99 compact camera and something called a QD Laser Retissa Neoviewer. The latter is a viewfinder that is able to project a digital image directly onto the retina of the user through laser retinal projection. A “crystal-clear image” is projected through a low-power, full-color laser beam.Yanko Design - Modern Industrial Design News, 2d ago
The technology behind the Apollo wearable was originally developed by a team of neuroscientists and physicians at the University of Pittsburgh. It uses touch therapy to send safety signals to the brain.Mic, 2d ago
Researchers at Linköping University in Sweden have developed a method whereby the body can ‘grow its own’ electrodes. The minimally invasive technique involves injecting a hydrogel that is laden with enzymes into target tissues. The enzymes interact with molecules that are present in the tissue to change the structure of the gel and allow it to become electrically conductive. The approach could facilitate a variety of advanced medical systems, from pacemakers to brain-computer interfaces. Excitingly, the technology avoids the need for invasive surgery or conventional stiff electrical components that are not well matched to native tissues and can provoke unwanted immune reactions. So far, the researchers have shown that they can grow electrodes in the brain, heart, and tail fins of zebrafish and near the neural system in leeches.Medgadget, 4d ago
The overlapping layers of coding and programs that process this data can be called a neural network, similar to how the human brain consists of billions upon billions of neurons to create a biological computer system, in a sense. Deep learning simply takes that human brain function and applies it to computer science: billions and billions of connecting neurons via code instead of electrical impulses.dzone.com, 5d ago
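A bare-bones numerical version of that picture follows, with 'neurons' as rows of weight matrices and each layer computed as a matrix multiply followed by a nonlinearity. The weights are random placeholders rather than trained values.

```python
# Toy forward pass through one hidden layer, mirroring the neurons-passing-
# signals description above.
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                         # 4 input 'neurons'
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # hidden layer of 8 neurons
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)  # 2 output neurons

hidden = relu(W1 @ x + b1)  # each hidden neuron sums its weighted inputs, then fires
output = W2 @ hidden + b2
print(output)
```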
...field. MRAM can be developed to provide multiple states as opposed to the traditional binary 1 and 0 of conventional memory. This can be architected in arrays that provide new abilities in AI learning and inference to modify weights in tables very rapidly and maintain the states until they change. In collaboration with Prof. Joseph Friedman at the University of Texas at Dallas, Everspin has done an experimental demonstration of a neuromorphic network with STT magnetic tunnel junction (MTJ) synapses, which performs image recognition via vector-matrix...Bisinfotech, 5d ago
Medical imaging is a technique used to create visual representations of internal organs in the body for medical diagnosis and subsequent medical therapy. 3D imaging technology has helped healthcare workers capture images at different angles and view tissues at various depths. It gives improved resolution and more intricate details to understand the human body better. Unlike historically used imaging technologies with the possibility for inaccurate results, 3D imaging technology provides accurate information on any medical condition during diagnosis.reportsanddata.com, 3d ago

...details, some BCIs are built into wearable devices, but others are surgically implanted directly to brain tissue. Subjects who receive BCIs often undergo a training process, in which they learn to produce signals that the BCI will recognize. The BCI, in turn, uses...HowStuffWorks, 7d ago
Micera is known for being the first to provide sensory feedback – in real-time – to an amputee, with a bionic hand, during clinical trials that took place in 2013 with results that were published in 2014. This bionic technology relied on providing sensory feedback via transversal electrodes that were surgically implanted into major nerves in the amputee’s arm. Since then, he and colleagues have been building on that technology, providing improved touch resolution of textures with a bionic fingertip, improved embodiment of the prosthetic limb, and working towards a permanent, wearable prosthetic hand. This technology will be soon used to restore other motor and sensory function in other cases such as spinal cord injury or stroke.SCIENMAG: Latest Science and Health News, 18d ago
...that within four years, his new, under-the-radar startup called Neuralink would have a device on the market that would not only translate thoughts into actions on computers, but also help cure “certain severe brain injuries.” Then, on July 16, 2019, Musk proudly revealed Neuralink’s tech to the public in a San Francisco launch...Fortune, 19d ago

Siri, Google Assistant, Cortana, and Alexa are the successive technologies rolled out in the 21st century. They are readily accessible via our handy devices and serve as intelligent personal assistants rather than simple question-answering based on internet information. Natural language processing (NLP) and deep neural networks are the core building blocks of the technology, which allow our machines, appliances, and IoT devices to understand human language with ease. Command execution via voice recognition is the new norm, where a simple instruction like “Hey Google, play me some country music!” will easily fire up your Spotify app to your liking.web3newshubb.com, 4d ago
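The last step of that pipeline, turning a recognized utterance into an action, can be caricatured with a rule-based intent parser; real assistants use trained natural-language-understanding models, and the grammar below is purely illustrative.

```python
# Toy intent parser for the command-execution step; real assistants do not
# use regexes like this.
import re

def parse(utterance):
    m = re.search(r"play (?:me )?(?:some )?(?P<genre>\w+) music", utterance.lower())
    if m:
        return {"intent": "play_music", "genre": m.group("genre")}
    return {"intent": "unknown"}

print(parse("Hey Google, play me some country music!"))
# {'intent': 'play_music', 'genre': 'country'}
```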
Soft (bio)hybrid robotics aims at interfacing living beings with artificial technology. It was recently demonstrated that plant leaves coupled with artificial leaves of selected materials and tailored mechanics can convert wind-driven leaf fluttering into electricity. Here, we significantly advance this technology by establishing the additional opportunity to convert kinetic energy from raindrops hitting the upper surface of the artificial leaf into electricity. To achieve this, we integrated an extra electrification layer and exposed electrodes on the free upper surface of the wind energy harvesting leaf that allow to produce a significant current when droplets land and spread on the device. Single water drops create voltage and current peaks of over 40V and 15µA and can directly power 11 LEDs. The same structure has the additional capability to harvest wind energy using leaf oscillations. This shows that environment-responsive biohybrid technologies can be tailored to produce electricity in challenging settings, such as on plants under motion and exposed to rain. The devices have the potential for multisource energy harvesting and as self-powered sensors for environmental monitoring, pointing at applications in wireless sensor networks (WSNs), the Internet of Things (IoT), smart agriculture, and smart forestry.interestingengineering.com, 3d ago
...-- The New York General Group has the technology "World System." The idea is to have artificial intelligence (natural language processing models, image processing models, etc.) learn astrophysics, neuroscience, etc., integrate them with quantum mechanical processing models, create a universe on a quantum computer, create the human brain as a subset of the universe on top of it, and have superintelligence (artificial general intelligence)...PRLog, 3d ago
Animals excel at a wide range of behaviors, many of which are essential for survival. For example, dragonflies are aerial predators, known for both their speed and high success rate, that must perform fast, accurate, and efficient calculations to survive. I will present a neural network model, inspired by the dragonfly nervous system, that calculates turning for successful prey interception. The model relies upon a coordinate transformation from eye coordinates to body coordinates, an operation that must be performed by almost any animal nervous system relying upon sensory information to interact with the external world. I will discuss how I and collaborators are combining neuroscience experiments, modeling studies, and exploration of neuromorphic architectures to understand how the biological dragonfly nervous system performs coordinate transformations and to develop novel approaches for efficient neural-inspired computation.SciTech Institute, 3d ago
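In its simplest planar form, such an eye-to-body coordinate transformation is just a rotation by the head's orientation; the sketch below shows that reduced case (a real model would use full 3-D rotations and learned network weights).

```python
# Planar eye-to-body transform: rotate a prey direction vector from
# head/eye coordinates into body coordinates given the head's yaw.
import numpy as np

def eye_to_body(vec_eye, head_yaw_rad):
    c, s = np.cos(head_yaw_rad), np.sin(head_yaw_rad)
    R = np.array([[c, -s],
                  [s,  c]])         # 2-D rotation by the head's yaw angle
    return R @ vec_eye

prey_in_eye = np.array([1.0, 0.0])  # prey dead ahead of the eye
print(eye_to_body(prey_in_eye, np.deg2rad(30)))
```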
System architecture networks the autopilot, avionics and Intel workstation with an onboard Nvidia Xavier AI board to give supercomputer-level processing opportunities in flight and in real time. A natural evolution of the architecture is autonomous operations based on collected sensor data. As the system generates and records vast amounts of data, these can be used in real time for navigation, avoidance and on-the-fly tasking.Unmanned Systems Technology, 4d ago
The development of electronic skin with multiple senses is essential for various fields, including rehabilitation, health care, prosthetic limbs, and robotics. One of the key components of this technology is stretchable pressure sensors, which can detect various types of touch and pressure. A joint team of researchers from POSTECH and the University of Ulsan in Korea has recently made a significant breakthrough by successfully creating omnidirectionally stretchable pressure sensors inspired by crocodile skin.techxplore.com, 4d ago

The Brain-Computer Interface (BCI) utilizes electrodes that read signals produced by neurons. These electrodes then send these signals to receivers located on either a prosthetic limb or device.IoT Worlds, 18d ago
In this talk we demonstrate two device concepts based on novel organic mixed-conducting materials and show how we can use these devices as artificial neurons and synapses in smart autonomous robotics, trainable biosensors and sensory coding.tue.nl, 17d ago
Neuralink is working to build a brain chip interface that can be implanted within the skull, which it says could eventually help the disabled move & communicate again and restore vision.Benzinga, 19d ago
SwRI engineers determined that newer microprocessors built on "instruction set architectures" could outperform conventional spaceflight technology under certain configurations in a laboratory. The research opens the door to embedding space systems with the same microprocessors used in cell phones and other Earth-based electronics.Space Daily, 21d ago
The need to react fast and make quick decisions imposes strict runtime requirements on the neural networks that run on embedded hardware in an autonomous vehicle. By compressing the neural networks, for example using fewer parameters and bits, the algorithms can be executed faster and use less energy. For this task, the CERN–Zenseact team chose field-programmable gate arrays (FPGAs) as the hardware benchmark. Used at CERN for many years, especially for trigger readout electronics in the large LHC experiments, FPGAs are configurable integrated circuits that can execute complex decision-making algorithms in periods of microseconds. The main result of the FPGA experiment, says Petersson, was a practical demonstration that computer-vision tasks for automotive applications can be performed with high accuracy and short latency, even on a processing unit with limited computational resources. “The project clearly opens up for future directions of research. The developed workflows could be applied to many industries.”...CERN Courier, 20d ago
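One of the compression levers mentioned, using fewer bits, can be illustrated with uniform 8-bit post-training quantization of a weight matrix. The snippet shows only the arithmetic; the team's actual FPGA flow (e.g., the hls4ml tooling used in high-energy physics) covers far more.

```python
# Uniform int8 post-training quantization of one weight tensor.
import numpy as np

w = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)

scale = np.abs(w).max() / 127.0                # map the float range onto int8
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale  # what the hardware effectively computes

print("max abs quantization error:", np.abs(w - w_dequant).max())
```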

..., uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today's workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like Tensorflow/Keras. In enabling effective edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers' products, as well as the planet. Explore the benefits of Essential AI at...tmcnet.com, 3d ago
Today, it has become easy to predict disease using AI-based IoT systems, and this technology is developing further. For instance, the latest invention based on neural networks can detect the risk of heart attack with up to 94.8% accuracy. DNNs are also helpful in disease detection: the spectrogram of a person's voice, received using IoT devices, can identify voice pathologies after DNN processing. In general, the accuracy of ANN-based IoT health monitoring systems is estimated to be above 85%.IoT Central, 5d ago
Tests have shown that certain compounds from the mushrooms could potentially improve memory in both rats and humans, although scientists have not yet pinpointed the exact compound or combination of compounds. The lion’s mane mushroom contains, in its edible parts, both hericenones and erinacines, which the research team says are linked to brain cell growth and memory improvement. Lead author of the paper, Frédéric Meunier, Clem Jones Centre for Ageing Dementia Research, Queensland Brain Institute, Australia, said, “Using super-resolution microscopy, we found the mushroom extract and its active components largely increase the size of growth cones, which are particularly important for brain cells to sense their environment and establish new connections with other neurons in the brain.”...The Medicine Maker, 3d ago

Nanotechnologies enable the development of innovative nano-devices with unprecedented capabilities. By means of communications, networks of nano-sensors and nano-actuators can perform complex tasks in a distributed manner. Similarly, the wireless interconnection of nano-processors in massive multi-core architectures can enable innovative parallel computing architectures. Moreover, the interconnection of networks of nanomachines or nanonetworks with macroscale communication networks – the so-called Internet of Nano-Things – opens the door to transformative applications not only across scales (from nano to macro) but also across domains (biological and non-biological) and realms (classical and quantum).comsoc.org, 5d ago
Elliptic Labs is a global enterprise targeting the smartphone, laptop, IoT, and automotive markets. Founded in 2006 as a research spin-off from Norway's Oslo University, the company's patented software uses AI, ultrasound and sensor fusion to deliver intuitive 3D gesture, proximity-, presence-, breathing- and heartbeat-detection experiences. Its scalable AI Virtual Smart Sensor Platform creates software-only sensors that are sustainable, human-friendly and already deployed in hundreds of millions of devices around the world. Elliptic Labs is the only software company that has delivered detection capabilities using AI software, ultrasound, and sensor fusion deployed at scale. The company joined the Oslo Børs main listing in March 2022.tmcnet.com, 6d ago
Leveraging the latest in natural language processing, this is an AI that can understand human speech, multi-pronged questions and hard accents, and then respond accurately, eliminating the need for an IVR (interactive voice response) system and hours of live agent time devoted to answering basic guest inquiries.Hospitality Technology, 3d ago
The team used a non-invasive medical test called magnetoencephalography (MEG) that measures the magnetic fields produced by a brain's electrical currents. Using MEG data acquired from human observers during object viewing, Mur and her international collaborators detected one key point of failure. They found that readily nameable parts of objects, such as "eye," "wheel," and "face," can account for variance in human neural dynamics over and above what deep learning can deliver.techxplore.com, 4d ago
Deep Learning is based on Neural Networks, and Neural Networks try to mimic how our brains work. Neurons receive inputs and send output signals to other neurons. Under the hood of Deep Learning, that’s what “circuits” are doing in a multi-billion-dimensional space. Our understanding of the human brain is still limited, so AI models also suffer from that limited understanding.ON24, 5d ago

A biomimetic flexible steerable probe is currently being developed at Imperial College London: the aim is to access deep brain areas with minimal damage in order to accurately place minimally invasive instrumentation (catheters, electrodes for deep brain stimulation), perform clinical analysis and diagnosis (biopsy, sampling), deliver drugs locally and carry out micro neurosurgery.Imperial College London, 6d ago
A proprietary haptic system was developed to give surgeons the sense of touch in a virtual reality simulation of knee arthroscopy surgery. During real diagnostic knee arthroscopy, the surgeon operates inside the knee with long tools (an endoscope and a hooked probe) that pass through keyholes in the skin. The surgeon uses visual and haptic (touch) feedback to diagnose problems in the knee.Imperial College London, 6d ago
A new type of sensor could lead to artificial skin that someday helps burn victims 'feel' and safeguards the rest of us. Researchers wanted to create a sensor that can mimic the sensing properties ...ScienceDaily, 4d ago
Ultimately, the algorithm could allow researchers to control snake robots and other hyper-redundant robots (e.g., robots inspired by octopus tentacles) with greater precision, while also better replicating snake- or tentacle-like movements. This could in turn facilitate the deployment of these robots in medical settings, particularly to perform minimally invasive surgical procedures inside the human body.TodayHeadline, 5d ago
Generative AI could have an impact on how autonomous (IoT) devices are controlled, e.g., robots. By capturing motion data from animals or humans, generative AI can be used to generate control logic and commands for robots. Instead of deterministically programming movements for each leg of a robot dog, for example, generative AI models can be utilized to generate the movements of individual parts and make the robot dog walk complicated, interconnected steps. Moreover, generative AI models can help robots make sense of their surroundings and connect so-called horizon goals with more intermediate steps to reach the goals (e.g.,...IoT Analytics, 7d ago
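A toy version of the motion-generation idea, with everything synthetic: fit an autoregressive model to a captured joint-angle trajectory, then roll it forward to generate new motion. Real systems use far richer generative models than this linear sketch.

```python
# Toy motion generation: fit an autoregressive model to a joint-angle
# trajectory (here, a synthetic sinusoidal "gait"), then sample it forward.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 20 * np.pi, 2000)
gait = np.sin(t) + 0.05 * rng.normal(size=t.size)   # one joint's "mocap" angle

# Fit a linear AR(k) model: predict the next angle from the previous k.
k = 10
X = np.stack([gait[i:i + k] for i in range(len(gait) - k)])
y = gait[k:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Generate a new trajectory by rolling the model forward from a seed window.
window = list(gait[:k])
generated = []
for _ in range(200):
    nxt = float(np.dot(coef, window[-k:]))
    generated.append(nxt)
    window.append(nxt)

print(f"generated {len(generated)} steps, range "
      f"[{min(generated):.2f}, {max(generated):.2f}]")
```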
Despite our tremendous progress in artificial intelligence (AI), current AI systems still cannot adequately understand humans and flexibly interact with humans in real-world settings. The goal of my research is to build AI systems that can understand and cooperatively interact with humans in the real world. My hypothesis is that to achieve this goal, we need human-level machine social intelligence and that we can take inspiration from the studies of social cognition to engineer such social intelligence. To transfer insights from social cognition to real-world systems, I develop a research program for cognitively inspired machine social intelligence, in which I first i) build computational models to formalize the ideas and theories from social cognition, ii) develop new computational tools and AI methods to implement those models, and finally iii) apply those models to real-world systems such as assistive robots.The Hub, 5d ago

Latest

Neurosurgery carries potentially fatal risks. To improve surgical safety, the Mechatronics in Medicine (MIM) lab at Imperial College London has developed a new minimally invasive neurosurgery robot: a soft, compliant needle capable of steering through the soft tissue of the brain. Its design is inspired by ovipositing wasps, which can steer their ovipositors through different tissues towards a target point. Alongside the robot, the MIM lab is also developing new technologies for surgical planning and execution. In the future, the robot and these planning and control technologies should make neurosurgery less traumatic, avoid critical functional areas of the brain, and reach targets deep in the brain despite the pulsating and deforming motion of a living brain.Imperial College London, 6d ago
Abstract: Human beings and other biological creatures navigate unpredictable and dynamic environments by combining compliant mechanical actuators (skeletal muscle) with neural control and sensory feedback. Abiotic actuators, by contrast, have yet to match their biological counterparts in their ability to autonomously sense and adapt their form and function to changing environments. We have shown that engineered skeletal muscle actuators, controlled by neuronal networks, can generate force and power functional behaviors such as walking and pumping in a range of untethered robots. These muscle-powered robots are dynamically responsive to mechanical stimuli and are capable of complex functional behaviors like exercise-mediated strengthening and healing in response to damage. Our lab uses engineered bioactuators as a platform to understand neuromuscular architecture and function in physiological and pathological states, restore mobility after disease and damage, and power soft robots. This talk will cover the advantages, challenges, and future directions of understanding and manipulating the mechanics of biological motor control.wpi.edu, 6d ago
My LISP (Learning, Intelligence + Signal Processing) lab is focused on asking fundamental questions such as “Can intelligence be learned?” at the intersection of signal processing, machine learning, game theory, extremal graph theory, and computational neuroscience. My students and I are developing geometric and topological methods to learn and understand information in general—signals (neural, images, videos, hyper-spectral, audio, language, RF), graphs (social networks, communication networks), and human interactions via game theory.dartmouth.edu, 5d ago

Top

We also showed that the 3D structures of nanomaterials can be programmed efficiently via nucleic acid sequence, and that it is possible to direct the formation of percolating networks with DNA self-assembly. In addition, using inspiration from biological neural networks that display extraordinary signal dynamics and processing abilities, we aimed to mimic some aspects of the morphology of natural neural networks using DNA self-assembly to fabricate nanoelectronic devices with measurable function. Non-linear electrical properties of nanocomposites that integrate DNA-modified CNTs are reported. Our eventual goal is to harness molecular recognition to precisely control the configuration and connection of nanomaterials to self-assemble into controllable nanostructures and, thus, to engineer, fabricate, and characterize DNA-based hydrogels for desired applications. Future DNA hydrogel composites may find impactful application as building blocks in artificial computer hardware, with architectures inspired by natural neural systems for memory and information-processing applications.Truth11.com, 12d ago
..." presented a new method that uses broadband diffractive networks to directly classify unknown objects through unknown, random diffusers using a single-pixel spectral detector. This broadband diffractive network architecture uses 20 discrete wavelengths to map a diffuser-distorted object into a spectral signature detected through a single pixel. During the training, many randomly generated phase diffusers were used to help the generalization performance of the diffractive optical network. After the deep learning-based training process, which is a one-time effort, the resulting diffractive layers can be physically fabricated to form a single-pixel network that classifies objects completely hidden by new, unknown random diffusers never seen during the training.This network was demonstrated to successfully recognize handwritten digits through randomly selected unknown phase diffusers with a blind testing accuracy of 87.74%. Furthermore, the researchers experimentally demonstrated the feasibility of this single-pixel broadband classier using a 3D-printed diffractive network and a terahertz time-domain spectroscopy system. This optical computing framework can be scaled with respect to the illumination wavelength to operate at any part of the electromagnetic spectrum without redesigning or retraining its layers.The research was led by Dr. Aydogan Ozcan, Chancellor's Professor and Volgenau Chair for Engineering Innovation at UCLA and HHMI Professor with the Howard Hughes Medical Institute. The other authors of this work include Bijie Bai, Yuhang Li, Yi Luo, Xurong Li, Ege Çetintaş, and Prof. Mona Jarrahi, all from the Electrical and Computer Engineering department at UCLA. Prof. Ozcan also has UCLA faculty appointments in the bioengineering and surgery departments and is an associate director of the California NanoSystems Institute (CNSI).PRLog, 12d ago
Wireless visual cortical stimulator devices are a type of visual prosthetic used to restore functional vision to blind individuals. They work by providing electrical stimulation that bypasses damaged cells and stimulates healthy neurons to induce visual perception. Second Sight announced the first implantation of its Orion I Visual Cortical Prosthesis (Orion I), a wireless visual cortical stimulator, in a 30-year-old blind patient on October 26, 2016. The goal of this implantation was to provide proof in humans for the ongoing development of the device. Second Sight’s Argus II System, a retinal prosthesis that received FDA approval in 2013, similarly provides electrical stimulation to bypass defunct retinal cells and stimulate viable ones. The key difference is that Orion I bypasses the eye and optic nerve entirely, stimulating the visual cortex directly, so it can restore vision to patients left completely blind by conditions such as glaucoma, cancer, diabetic retinopathy, or trauma. In the first implantation, the patient successfully perceived and localized individual spots of light without any side effects. While not every technological development offers significant opportunities, the first human test of Orion I showed that the device can stimulate visual perception in a blind patient with no side effects. The company plans to submit an application to the FDA in early 2017 to begin clinical trials of the complete system.Medgadget, 21d ago
In order to replicate this human behavior in neural networks, researchers in the past have developed many Audio-Visual Speech Recognition (AVSR) techniques that translate spoken words utilizing both audio and visual inputs. Some examples of such systems include Meta AI’s publicly available AV-HuBERT and RAVen models, which integrate visual data to enhance performance for English speech recognition tasks. These deep learning-based methods have been proven to be incredibly successful at improving the robustness of speech recognition. Adding on to this wave of research in speech translation, Meta AI has now unveiled MuAViC (Multilingual Audio-Visual Corpus), the first-ever benchmark that enables the application of audio-visual learning for extremely accurate speech translation. MuAViC is a multilingual audio-visual corpus that works well for tasks requiring accurate speech recognition and speech-to-text translation. The researchers at Meta claim that it is the first open benchmark for audio-visual speech-to-text translation and the largest known benchmark for multilingual audio-visual speech recognition.MarkTechPost, 11d ago
Machines that can interpret what’s going on in people’s heads have been a mainstay of science fiction for decades. For years now, scientists from around the world have shown that computers and algorithms can indeed understand brain waves and make visual sense out of them through functional magnetic resonance imaging (fMRI) machines, the same devices doctors use to map neural activity during a brain scan. As early as 2008, researchers were already using machine learning to capture and decode brain activity.Bitcoin With Money, 12d ago
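The common recipe behind such decoders, sketched on synthetic data standing in for real scans: learn a (here linear) map from voxel activity to stimulus features, then push new scans through it. None of the cited studies' data or models are used.

```python
# Minimal fMRI decoding sketch: ridge regression from voxel patterns to
# stimulus features, on synthetic data standing in for real scans.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_trials, n_voxels, n_features = 300, 1000, 20

stim_features = rng.normal(size=(n_trials, n_features))   # e.g. image embeddings
encoding = rng.normal(size=(n_features, n_voxels))
voxels = stim_features @ encoding + rng.normal(scale=5.0, size=(n_trials, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(voxels, stim_features, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Decoded features for unseen "scans"; a generative model would then
# turn these back into images.
print(f"held-out decoding R^2: {decoder.score(X_te, y_te):.3f}")
```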
Wearable Devices Ltd. (the “Company”), a growth company developing a non-invasive neural input interface technology in the form of a wrist wearable band for controlling digital devices using subtle finger movements. Our company’s vision is to create a world in which the user’s hand becomes a universal input device for touchlessly interacting with technology, and we believe that our technology is setting the standard input interface for the Metaverse. Since our technology was introduced to the market, we have been working with both Business-to-Business and Business-to-Consumer customers as part of our push-pull strategy. Combining our own proprietary sensors and Artificial Intelligence, or AI, algorithms into a stylish wristband, our Mudra platform enables users to control digital devices through subtle finger movements and hand gestures, without physical touch or contact. For more information, visit...EEJournal, 22d ago
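A hypothetical sketch of the decoding problem such a wristband solves, with synthetic signals in place of Mudra's proprietary sensor data: classify short windows of wrist-sensor output into finger gestures using hand-crafted features and an off-the-shelf classifier.

```python
# Hypothetical wristband gesture decoding: classify 0.5 s windows of
# wrist-sensor signals into finger gestures. Synthetic signals only; this
# is not Mudra's proprietary sensor pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
fs, window = 250, 125            # 250 Hz sensors, 0.5 s windows

def synth_window(gesture):
    t = np.arange(window) / fs
    freq = {"tap": 8.0, "pinch": 15.0, "swipe": 25.0}[gesture]
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=window)

gestures = ["tap", "pinch", "swipe"] * 100
X = np.stack([synth_window(g) for g in gestures])

# Simple hand-crafted features; real systems learn these end to end.
feats = np.column_stack([X.std(axis=1), np.abs(np.diff(X, axis=1)).mean(axis=1)])

scores = cross_val_score(RandomForestClassifier(random_state=0), feats, gestures, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```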

Latest

Think microscopes that peer into the brain of a living animal to record signals from individual brain cells or neurons. Add the ability to switch particular brain circuits on or off with pinpoint precision using a pulse of light, a technique called optogenetics. Mix in brain atlases with the dynamic resolution of Google maps – from full relief technicolour maps of the wrinkled brain surface down to cellular...Cosmos, 6d ago
A research team at City University of Hong Kong (CityU) invented a groundbreaking tunable terahertz (THz) meta-device that can control the radiation direction and coverage area of THz beams. By rotating its metasurface, the device can promptly direct the 6G signal only to a designated recipient, minimizing power leakage and enhancing privacy. It is expected to provide a highly adjustable, directional and secure means for future 6G communications systems.Electronics For You, 4d ago
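The steering principle fits in one formula, the generalized Snell's law: a metasurface imposing a linear phase gradient dφ/dx deflects a beam of wavelength λ by θ = arcsin(λ/(2π) · dφ/dx). A quick numeric check at an illustrative THz frequency (these values are not the CityU device's specifications):

```python
# Beam deflection from a metasurface phase gradient (generalized Snell's law):
# theta = arcsin( lambda / (2*pi) * d(phi)/dx ). Illustrative numbers only.
import numpy as np

c = 3e8
f = 0.3e12                       # 0.3 THz carrier, a candidate 6G band
lam = c / f                      # wavelength: 1 mm

dphi_dx = 2 * np.pi / 4e-3       # full 2*pi phase ramp across 4 mm of surface
theta = np.degrees(np.arcsin(lam / (2 * np.pi) * dphi_dx))
print(f"deflection angle: {theta:.1f} degrees")   # ~14.5 degrees
```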
Driver assistance: ADAS domain controller applications use a diversity of sensor inputs to build a picture of the environment around the vehicle. These systems may use as many as 11 cameras of differing resolutions and fields of view, combined with five radar sensors as inputs to a perception system in an L2 or L3 driving system. This demonstration shows how TI’s AI-enabled processor, the TDA4VH-Q1, easily handles 12 cameras (each at a 2-MP resolution) and performs image pre-processing on all inputs simultaneously in real-time, using only the image accelerators on the chip. This pre-processing allows the application and signal-processing cores to run the computer vision and AI algorithms required for accurate perception.electronicspecifier.com, 6d ago
Eyeris proprietary technology accurately generates in-cabin depth information for key regions of interest (ROIs), such as occupants' face, body, hands and objects, using a single automotive-grade 2D image sensor, like the latest RGB-IR sensors. Eyeris achieves this through a rigorous collection of naturalistic in-cabin 3D data used to train compute-efficient depth inference models that run on AI-enabled processors. This enables enhanced depth-aware understanding of the location, size, and position of occupants and other objects to customize - for example - the airbag deployment accordingly and reduce the risk of occupants' injury when deployed.EDACafe, 6d ago
Nuance’s roots are in voice commands, dictation and speech recognition: it owns the Dragon speech recognition suite and provided the original back-end NLP algorithm for Apple’s Siri. And voice has of course been closely watched (or listened to) with the rise of consumer voice assistants, but Lorentzen says voice still has obvious limitations, such as being understood in noisy environments or with complex instructions: decoding a long alphanumeric string on a shipping order, for example. Lorentzen doesn’t expect consumer voice assistants to change that soon.The Stack, 5d ago
The other area of that is significant investment into brain-computer interface or brain-wearables, neural interface devices, both to enhance militaries and soldiers and their capabilities, but also to try to interfere with others. Imagine a world in which you have widespread neural interface, which is the world which I have described that I think is coming, and people are wearing their ear buds, their headphones, and all of that is brain activity that is being tracked and also used and interfered with by other countries.carnegiecouncil.org, 8d ago

Top

Musk said that the company was developing brain chip interfaces that could allow disabled patients to move and communicate again, as well as restore vision.Metro, 18d ago
There is genome and proteome and in general omics information that is used off of humans to interpret their future health states. And there are all kinds of prediction algorithms for different diseases, in neurology and oncology and whatnot, that are used to predict future health events and the top interventions that you could do with a certain patient. But in our own line of research, we are doing things that are centered on taking human bio signals. So, these might emerge from digital health devices. So, for example, wearables that are worn on body, but also cameras and night vision cameras and other ambient sensing devices that give us information about the status of the human body in various dimensions, like, for example, regarding heartbeat information, electrocardiogram information, breathing information, and other similar things.infineon.com, 13d ago
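A generic example of the kind of biosignal processing described, on a synthetic pulse waveform rather than any real device's data: estimate heart rate by detecting beats with simple peak-picking.

```python
# Generic biosignal sketch: estimate heart rate from a wearable's pulse
# waveform by peak detection. Synthetic signal; not any specific device's
# algorithm.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                       # 100 Hz sampling
t = np.arange(0, 30, 1 / fs)                   # 30 s recording
bpm_true = 72
pulse = np.sin(2 * np.pi * bpm_true / 60 * t) ** 21   # sharp periodic beats
pulse += 0.05 * np.random.default_rng(6).normal(size=t.size)

# One positive peak per beat; enforce a refractory gap between detections.
peaks, _ = find_peaks(pulse, height=0.5, distance=int(fs * 0.4))
bpm_est = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"estimated heart rate: {bpm_est:.1f} bpm")
```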
SynSense has been developing brain-like technology for more than 20 years. Currently, the SynSense brain-like vision chip Speck is the world's first dynamic-vision brain-like chip in commercial mass production. In addition, Xylo, an ultra-low-power brain-like processor, will be mass-produced in the second half of the year. The company's intelligent perception and processing technologies for sound, pressure, IMU, and smell signals have achieved breakthroughs in applications across consumer electronics, smart wearables, and industrial testing.EqualOcean, 10w ago