Friday 26 October 2018

Sugar-powered sensor developed to detect and prevent disease

Researchers at Washington State University have developed an implantable, biofuel-powered sensor that runs on sugar and can monitor a body's biological signals to detect, prevent and diagnose diseases.
A cross-disciplinary research team led by Subhanshu Gupta, assistant professor in WSU's School of Electrical Engineering and Computer Science, developed the unique sensor, which runs on glucose that the biofuel cell harvests from the body's fluids.
The research team has demonstrated a unique integration of the biofuel cell with electronics to process physiological and biochemical signals with high sensitivity.
Their work was recently published in the journal IEEE Transactions on Circuits and Systems.
Professors Su Ha and Alla Kostyukova from the Gene and Linda Voiland School of Chemical Engineering and Bioengineering led the design of the biofuel cell.
Many popular sensors for disease detection are either watches, which need to be recharged, or patches worn on the skin's surface, which can't be embedded. The sensor developed by the WSU team could also remove the need for finger-prick tests for certain conditions, such as diabetes.
"The human body carries a lot of fuel in its bodily fluids through blood glucose or lactate around the skin and mouth," said Gupta. "Using a biofuel cell opens the door to using the body as potential fuel."
The electronics in the sensor use state-of-the-art design and fabrication to consume only a few microwatts of power while being highly sensitive. Coupling these electronics with the biofuel cell makes it more efficient than traditional battery-powered devices, said Gupta. Since it relies on body glucose, the sensor's electronics can be powered indefinitely. So, for instance, the sensor could run on sugar produced just under the skin.
Unlike commonly used lithium-ion batteries, the biofuel cell is also completely non-toxic, making it more promising as an implant for people, he said. It is also more stable and sensitive than conventional biofuel cells.
The researchers say their sensor could be manufactured cheaply by leveraging the economies of scale of mass production.
While the sensors have been tested in the lab, the researchers are hoping to test and demonstrate them in blood capillaries, which will require regulatory approval. The researchers are also working on further improving and increasing the power output of their biofuel cell.
"This brings together the technology for making a biofuel cell with our sophisticated electronics," said Gupta. "It's a very good marriage that could work for many future applications."
Source: https://www.sciencedaily.com/releases/2018/09/180927145339.htm

Tuesday 11 September 2018

A new theory for phantom limb pain points the way to more effective treatment

Dr Max Ortiz Catalan of Chalmers University of Technology, Sweden, has developed a new theory for the origin of the mysterious condition 'phantom limb pain'. Published in the journal Frontiers in Neurology, his hypothesis builds upon his previous work on a revolutionary treatment for the condition, which uses machine learning and augmented reality.
Phantom limb pain is a poorly understood phenomenon, in which people who have lost a limb can experience severe pain, seemingly located in that missing part of the body. The condition can be seriously debilitating and can drastically reduce the sufferer's quality of life. But current ideas on its origins cannot explain clinical findings, nor provide a comprehensive theoretical framework for its study and treatment.
Now, Max Ortiz Catalan, Associate Professor at Chalmers University of Technology, has published a paper that offers up a promising new theory -- one that he terms 'stochastic entanglement'.
He proposes that after an amputation, neural circuitry related to the missing limb loses its role and becomes susceptible to entanglement with other neural networks -- in this case, the network responsible for pain perception.
"Imagine you lose your hand. That leaves a big chunk of 'real estate' in your brain, and in your nervous system as a whole, without a job. It stops processing any sensory input, it stops producing any motor output to move the hand. It goes idle -- but not silent," explains Max Ortiz Catalan.
Neurons are never completely silent. When not engaged in a particular task, they may fire at random. This can result in neurons in that part of the sensorimotor network firing coincidentally with neurons in the pain-perception network, and when they fire together, the experience of pain is created in that part of the body.
"Normally, sporadic synchronised firing wouldn't be a big deal, because it's just part of the background noise, and it won't stand out," continues Max Ortiz Catalan. "But in patients with a missing limb, such event could stand out when little else is going on at the same time. This can result in a surprising, emotionally charged experience -- to feel pain in a part of the body you don't have. Such a remarkable sensation could reinforce a neural connection, make it stick out, and help establish an undesirable link."
Through a principle known as 'Hebb's Law' -- 'neurons that fire together, wire together' -- neurons in the sensorimotor and pain perception networks become entangled, resulting in phantom limb pain. The new theory also explains why not all amputees suffer from the condition -- the randomness, or stochasticity, means that simultaneous firing may not occur, and become linked, in all patients.
In the new paper, Max Ortiz Catalan goes on to examine how this theory can explain the effectiveness of Phantom Motor Execution (PME), the novel treatment method he previously developed. During PME treatment, electrodes attached to the patient's residual limb pick up electrical signals intended for the missing limb, which are then translated, through AI algorithms, into movements of a virtual limb in real time. The patients see themselves on a screen, with a digitally rendered limb in place of their missing one, and can then control it just as if it were their own biological limb. This allows the patient to stimulate and reactivate those dormant areas of the brain.
"The patients can start reusing those areas of brain that had gone idle. Making use of that circuitry helps to weaken and disconnect the entanglement to the pain network. It's a kind of 'inverse Hebb's law' -- the more those neurons fire apart, the weaker their connection. Or, it can be used preventatively, to protect against the formation of those links in the first place," he says.
The PME treatment method has been previously shown to help patients for whom other therapies have failed. Understanding exactly how and why it can help is crucial to ensuring it is administered correctly and in the most effective manner. Max Ortiz Catalan's new theory could help unravel some of the mysteries surrounding phantom limb pain, and offer relief for some of the most affected sufferers.
Source: https://www.sciencedaily.com/releases/2018/09/180906082022.htm

New Blood Pressure App

Researchers at Michigan State University have invented a proof-of-concept blood pressure app that can give accurate readings using an iPhone -- with no special equipment.
The discovery, featured in the current issue of Scientific Reports, was made by a team of scientists led by Ramakrishna Mukkamala, MSU electrical and computer engineering professor.
"By leveraging optical and force sensors already in smartphones for taking 'selfies' and employing 'peek and pop,' we've invented a practical tool to keep tabs on blood pressure," he said. "Such ubiquitous blood pressure monitoring may improve hypertension awareness and control rates, and thereby help reduce the incidence of cardiovascular disease and mortality."
In a publication in Science Translational Medicine earlier this year, Mukkamala's team had proposed the concept with the invention of a blood pressure app and hardware. With the combination of a smartphone and add-on optical and force sensors, the team produced a device that rivaled arm-cuff readings, the standard in most medical settings.
With advances in smartphones, the add-on optical and force sensors may no longer be needed. Peek and pop, which lets users open functions and apps with a firmer press of a finger, is now standard on many iPhones and included in some Android models.
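The release doesn't detail the calculation, but Mukkamala's published finger-pressing approach is oscillometric: as the fingertip presses steadily harder on the force sensor, the optical channel records blood-volume pulse amplitudes, which peak when the applied pressure matches mean arterial pressure. Below is a minimal sketch of that principle on synthetic data, using the textbook fixed-ratio rule; the signals, ratios (0.55 and 0.85), and numbers are illustrative assumptions, not the app's actual algorithm:

    import numpy as np

    # Synthetic recording: the user presses steadily harder on the phone
    # while the optical sensor logs the blood-volume pulse amplitude.
    pressure = np.linspace(40, 180, 300)      # applied finger pressure, mmHg
    true_map = 100.0
    # The oscillometric principle: pulse amplitude peaks when the applied
    # pressure equals mean arterial pressure (a Gaussian is a stand-in).
    amplitude = np.exp(-((pressure - true_map) / 30.0) ** 2)

    map_est = pressure[np.argmax(amplitude)]  # mean arterial pressure estimate

    # Fixed-ratio rule: read systolic/diastolic where the envelope falls to
    # a fixed fraction of its peak on either side of the maximum.
    peak = amplitude.max()
    above, below = pressure > map_est, pressure < map_est
    systolic = pressure[above][np.argmin(np.abs(amplitude[above] - 0.55 * peak))]
    diastolic = pressure[below][np.argmin(np.abs(amplitude[below] - 0.85 * peak))]
    print(f"MAP ~{map_est:.0f}, systolic ~{systolic:.0f}, diastolic ~{diastolic:.0f} mmHg")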
If things keep moving along at the current pace, an app could be available in late 2019, Mukkamala added.
"Like our original device, the application still needs to be validated in a standard regulatory test," he said. "But because no additional hardware is needed, we believe that the app could reach society faster."
Internationally, this app could be a game-changer. While high blood pressure is treatable with lifestyle changes and medication, only around 20 percent of people with hypertension have their condition under control. This invention gives patients a convenient option, and keeping a log of daily measurements would produce an accurate average, Mukkamala added.
Anand Chandrasekhar, Keerthana Natarajan, Mohammad Yavarimanesh -- all electrical and computer engineering doctoral candidates -- contributed to this research.
This research was funded in part by the National Institutes of Health.
Source: https://www.sciencedaily.com/releases/2018/09/180907135920.htm

Monday 20 August 2018

Artificial intelligence platform screens for acute neurological illnesses

An artificial intelligence platform designed to identify a broad range of acute neurological illnesses, such as stroke, hemorrhage, and hydrocephalus, was shown to identify disease in CT scans in 1.2 seconds, faster than human diagnosis, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published in the journal Nature Medicine.
"With a total processing and interpretation time of 1.2 seconds, such a triage system can alert physicians to a critical finding that may otherwise remain in a queue for minutes to hours," says senior author Eric Oermann, MD, Instructor in the Department of Neurosurgery at the Icahn School of Medicine at Mount Sinai. "We're executing on the vision to develop artificial intelligence in medicine that will solve clinical problems and improve patient care."
This is the first study to utilize artificial intelligence for detecting a wide range of acute neurologic events and to demonstrate a direct clinical application. Researchers used 37,236 head CT scans to train a deep neural network to identify whether an image contained critical or non-critical findings. The platform was then tested in a blinded, randomized controlled trial in a simulated clinical environment where it triaged head CT scans based on severity. The software was timed on how quickly it could recognize a critical finding and raise a notification, versus the time it took a radiologist to notice the disease. The average time for the computer algorithm to preprocess an image, run its inference method, and, if necessary, raise an alarm was 150 times shorter than for physicians to read the image.
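The Mount Sinai code isn't included in the release, but the triage step it describes reduces to re-ordering a reading queue by a model's predicted criticality and alarming above a threshold. A minimal sketch of that queue logic, with a placeholder standing in for the trained network and an illustrative alarm threshold:

    import heapq

    def criticality(scan):
        # Placeholder for the trained network's inference step: return the
        # predicted probability that the scan holds a critical finding.
        return scan["score"]

    worklist, order = [], 0
    for scan in ({"id": "A", "score": 0.03},
                 {"id": "B", "score": 0.97},   # e.g., suspected hemorrhage
                 {"id": "C", "score": 0.41}):
        p = criticality(scan)
        order += 1
        heapq.heappush(worklist, (-p, order, scan["id"]))  # max-heap by score
        if p > 0.90:                           # illustrative alarm threshold
            print(f"ALERT: scan {scan['id']} flagged critical (p={p:.2f})")

    print("reading order:", [sid for _, _, sid in sorted(worklist)])

The most critical scan jumps to the front of the queue instead of waiting its turn, which is exactly the "minutes to hours" saving Oermann describes.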
This study used "weakly supervised learning approaches," which built on the research team's expertise in natural language processing and the Mount Sinai Health System's large clinical datasets. Dr. Oermann says the next phase of this research will entail enhanced computer labeling of CT scans and a shift to "strongly supervised learning approaches" and novel techniques for increasing data efficiency. Researchers estimate the goal of re-engineering the system with these changes will be accomplished within the next two years.
"The expression 'time is brain' signifies that rapid response is critical in the treatment of acute neurological illnesses, so any tools that decrease time to diagnosis may lead to improved patient outcomes," says study co-author Joshua Bederson, MD, Professor and System Chair for the Department of Neurosurgery at Mount Sinai Health System and Clinical Director of the Neurosurgery Simulation Core.
"The application of deep learning and computer vision techniques to radiological imaging is a clear imperative for 21st century medical care," says study author Burton Drayer, MD, the Charles M. and Marilyn Newman Professor and System Chair of the Department of Radiology for the Mount Sinai Health System, CEO of the Mount Sinai Doctors Faculty Practice, and Dean for Clinical Affairs of the Icahn School of Medicine.
This study was performed by the Mount Sinai AI Consortium, known as "AISINAI" -- a group of scientists, physicians, and researchers dedicated to developing artificial intelligence in medicine that will improve patient care and help doctors accurately diagnose disease.
Source: https://www.sciencedaily.com/releases/2018/08/180813113315.htm

Tuesday 12 June 2018

Fungi-produced pigment shows promise as semiconductor material

Researchers at Oregon State University are looking at a highly durable organic pigment, used by humans in artwork for hundreds of years, as a promising possibility as a semiconductor material.

Findings suggest it could become a sustainable, low-cost, easily fabricated alternative to silicon in electronic or optoelectronic applications where the high-performance capabilities of silicon aren't required.
Optoelectronics is technology that combines light and electronics, as in solar cells; the pigment being studied is xylindein.
"Xylindein is pretty, but can it also be useful? How much can we squeeze out of it?" said Oregon State University physicist Oksana Ostroverkhova. "It functions as an electronic material but not a great one, but there's optimism we can make it better."
Xylindein is secreted by two wood-eating fungi in the Chlorociboria genus. Any wood that's infected by the fungi is stained a blue-green color, and artisans have prized xylindein-affected wood for centuries.
The pigment is so stable that decorative products made half a millennium ago still exhibit its distinctive hue. It holds up against prolonged exposure to heat, ultraviolet light and electrical stress.
"If we can learn the secret for why those fungi-produced pigments are so stable, we could solve a problem that exists with organic electronics," Ostroverkhova said. "Also, many organic electronic materials are too expensive to produce, so we're looking to do something inexpensively in an ecologically friendly way that's good for the economy."
With current fabrication techniques, xylindein tends to form non-uniform films with a porous, irregular, "rocky" structure.
"There's a lot of performance variation," she said. "You can tinker with it in the lab, but you can't really make a technologically relevant device out of it on a large scale. But we found a way to make it more easily processed and to get a decent film quality."
Ostroverkhova and collaborators in OSU's colleges of Science and Forestry blended xylindein with a transparent, non-conductive polymer, poly(methyl methacrylate), abbreviated to PMMA and sometimes known as acrylic glass. They drop-cast solutions of both pristine xylindein and a xylindein-PMMA blend onto electrodes on a glass substrate for testing.
They found the non-conducting polymer greatly improved the film structure without a detrimental effect on xylindein's electrical properties. And the blended films actually showed better photosensitivity.
"Exactly why that happened, and its potential value in solar cells, is something we'll be investigating in future research," Ostroverkhova said. "We'll also look into replacing the polymer with a natural product -- something sustainable made from cellulose. We could grow the pigment from the cellulose and be able to make a device that's all ready to go.
"Xylindein will never beat silicon, but for many applications, it doesn't need to beat silicon," she said. "It could work well for depositing onto large, flexible substrates, like for making wearable electronics."
This research, whose findings were recently published in MRS Advances, represents the first use of a fungus-produced material in a thin-film electrical device.
"And there are a lot more of the materials," Ostroverkhova said. "This is just first one we've explored. It could be the beginning of a whole new class of organic electronic materials."
The National Science Foundation supported this research.
Source: sciencedaily.com

Researchers reverse cognitive impairments in mice with dementia

Reversing memory deficits and impairments in spatial learning is a major goal in the field of dementia research. A lack of knowledge about cellular pathways critical to the development of dementia, however, has stood in the way of significant clinical advance. But now, researchers at the Lewis Katz School of Medicine at Temple University (LKSOM) are breaking through that barrier. They show, for the first time in an animal model, that tau pathology -- the second-most important lesion in the brain in patients with Alzheimer's disease -- can be reversed by a drug.


"We show that we can intervene after disease is established and pharmacologically rescue mice that have tau-induced memory deficits," explained senior investigator Domenico Praticò, MD, Scott Richards North Star Foundation Chair for Alzheimer's Research, Professor in the Departments of Pharmacology and Microbiology, and Director of the Alzheimer's Center at Temple at LKSOM. The study, published online in the journal Molecular Neurobiology, raises new hope for human patients affected by dementia.
The researchers landed on their breakthrough after discovering that inflammatory molecules known as leukotrienes are deregulated in Alzheimer's disease and related dementias. In experiments in animals, they found that the leukotriene pathway plays an especially important role in the later stages of disease.
"At the onset of dementia, leukotrienes attempt to protect nerve cells, but over the long term, they cause damage," Dr. Praticò said. "Having discovered this, we wanted to know whether blocking leukotrienes could reverse the damage, whether we could do something to fix memory and learning impairments in mice having already abundant tau pathology."
To recapitulate the clinical situation of dementia in humans, in which patients are already symptomatic by the time they are diagnosed, Dr. Praticò and colleagues used specially engineered tau transgenic mice, which develop tau pathology -- characterized by neurofibrillary tangles, disrupted synapses (the junctions between neurons that allow them to communicate with one another), and declines in memory and learning ability -- as they age. When the animals were 12 months old, the equivalent of age 60 in humans, they were treated with zileuton, a drug that inhibits leukotriene formation by blocking the 5-lipoxygenase enzyme.
After 16 weeks of treatment, animals were administered maze tests to assess their working memory and their spatial learning memory. Compared with untreated animals, tau mice that had received zileuton performed significantly better on the tests. Their superior performance suggested a successful reversal of memory deficiency.
To determine why this happened, the researchers first analyzed leukotriene levels. They found that treated tau mice experienced a 90-percent reduction in leukotrienes compared with untreated mice. In addition, levels of phosphorylated and insoluble tau, the form of the protein that is known to directly damage synapses, were 50 percent lower in treated animals. Microscopic examination revealed vast differences in synaptic integrity between the groups of mice. Whereas untreated animals had severe synaptic deterioration, the synapses of treated tau animals were indistinguishable from those of ordinary mice without the disease.
"Inflammation was completely gone from tau mice treated with the drug," Dr. Praticò said. "The therapy shut down inflammatory processes in the brain, allowing the tau damage to be reversed."
The study is especially exciting because zileuton is already approved by the Food and Drug Administration for the treatment of asthma. "Leukotrienes are in the lungs and the brain, but we now know that in addition to their functional role in asthma, they also have a functional role in dementia," Dr. Praticò explained.
"This is an old drug for a new disease," he added. "The research could soon be translated to the clinic, to human patients with Alzheimer's disease."
Source: sciencedaily.com

Tuesday 5 June 2018

Could robots be counselors? Early research shows positive user experience

New research has shown for the first time that a social robot can deliver a 'helpful' and 'enjoyable' motivational interview (MI) -- a counselling technique designed to support behaviour change.

Many participants in the University of Plymouth study praised the 'non-judgemental' nature of the humanoid NAO robot as it delivered its session -- with one even saying they preferred it to a human.
Led by the School of Psychology, the study also showed that the robot achieved a fundamental objective of MI as it encouraged participants, who wanted to increase their physical activity, to articulate their goals and dilemmas aloud.
MI is a technique that involves the counsellor supporting and encouraging someone to talk about their need for change, and their reasons for wanting to change.
The role of the interviewer in MI is mainly to evoke a conversation about change and commitment, and the robot was programmed with a set script designed to elicit ideas and conversation on how someone could increase their physical activity.
After answering each question, the participant tapped the top of NAO's head to continue, with some sessions lasting up to an hour.
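The interaction is simple to picture in code: a fixed script of open-ended questions, delivered one at a time, advancing only on a head tap. A toy sketch of that loop follows; the question wording and the say/wait_for_head_tap helpers are hypothetical stand-ins for the robot's speech and tactile-sensor calls, not the study's actual NAO program:

    SCRIPT = [
        "What would you like to change about your physical activity?",
        "Why does that change matter to you?",
        "What might get in the way, and how could you work around it?",
        "What would be a good first step this week?",
    ]

    def say(text):
        print("NAO:", text)              # stand-in for the robot's text-to-speech

    def wait_for_head_tap():
        # Stand-in for the head tactile-sensor event: here the user presses
        # Enter once they have finished talking through their answer aloud.
        input("(answer aloud, then tap the top of the robot's head) ")

    def run_interview():
        say("I will ask a few questions. There are no right or wrong answers.")
        for question in SCRIPT:
            say(question)                # the robot only asks and listens,
            wait_for_head_tap()          # never commenting, so it cannot judge
        say("Thank you. You may want to reflect on what you said today.")

    run_interview()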
Lead academic Professor Jackie Andrade explained that, because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change.
"We were pleasantly surprised by how easily the participants adapted to the unusual experience of discussing their lifestyle with a robot," she said. "As we have shown for the first time that a motivational interview delivered by a social robot can elicit out-loud discussion from participants.
"In addition, the participants perceived the interaction as enjoyable, interesting and helpful. Participants found it especially useful to hear themselves talking about their behaviour aloud, and liked the fact that the robot didn't interrupt, which suggests that this new intervention has a potential advantage over other technology-delivered adaptations of MI.
"Concern about being judged by a human interviewer came across strongly in praise for the non-judgemental nature of the robot, suggesting that robots may be particularly helpful for eliciting talk about sensitive issues.
"The next stage is to undertake a quantitative study, where we can measure whether participants felt that the intervention actually increased their activity levels."
Source: sciencedaily.com

Activity simulator could eventually teach robots tasks like making coffee or setting the table

Recently, computer scientists have been working on teaching machines to do a wider range of tasks around the house.

For many people, household chores are a dreaded, inescapable part of life that we often put off or do with little care -- but what if a robot maid could help lighten the load?

In a new paper spearheaded by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto, researchers demonstrate "VirtualHome," a system that can simulate detailed household tasks and then have artificial "agents" execute them, opening up the possibility of one day teaching robots to do such tasks.
The team trained the system using nearly 3,000 programs of various activities, which are further broken down into subtasks for the computer to understand. A simple task like "making coffee," for example, would also include the step "grabbing a cup." The researchers demonstrated VirtualHome in a 3-D world inspired by the Sims video game.
The team's AI agent can execute 1,000 of these interactions in the Sims-style world, with eight different scenes including a living room, kitchen, dining room, bedroom, and home office.
"Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task," says PhD student Xavier Puig, who was lead author on the paper. "These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions."
The project was co-developed by CSAIL and the University of Toronto alongside researchers from McGill University and the University of Ljubljana. It will be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which takes place this month in Salt Lake City.
How it works
Unlike humans, robots need more explicit instructions to complete easy tasks -- they can't just infer and reason with ease.
For example, one might tell a human to "switch on the TV and watch it from the sofa." Here, actions like "grab the remote control" and "sit/lie on sofa" have been omitted, since they're part of the commonsense knowledge that humans have.
To better demonstrate these kinds of tasks to robots, the descriptions for actions needed to be much more detailed. To do so, the team first collected verbal descriptions of household activities, and then translated them into simple code. A program like this might include steps like: walk to the television, switch on the television, walk to the sofa, sit on the sofa, and watch television.
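A minimal sketch of that representation, with "watch TV" written out as an explicit program and a toy executor; the Step class and simplified syntax here are illustrative, not the paper's actual program format:

    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str    # atomic action, e.g. "walk", "switch_on", "sit"
        target: str    # the scene object the action applies to

    # "Watch TV," spelled out with the commonsense steps a human would omit.
    watch_tv = [
        Step("walk", "television"),
        Step("switch_on", "television"),
        Step("walk", "sofa"),
        Step("sit", "sofa"),
        Step("watch", "television"),
    ]

    def execute(program, scene):
        # Toy executor: verify each step's target exists, then 'perform' it.
        for step in program:
            if step.target not in scene:
                raise ValueError(f"cannot {step.action}: no {step.target} in scene")
            print(f"agent: {step.action} -> {step.target}")

    execute(watch_tv, scene={"television", "sofa", "remote_control"})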
Once the programs were created, the team fed them to the VirtualHome 3-D simulator to be turned into videos. Then, a virtual agent would execute the tasks defined by the programs, whether it was watching television, placing a pot on the stove, or turning a toaster on and off.
The end result is not just a system for training robots to do chores, but also a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could eventually use data like this to train their models to do more complex tasks.
The team's model successfully demonstrated that their agents could learn to reconstruct a program, and therefore perform a task, given either a description ("pour milk into glass") or a video demonstration of the activity.
"This line of work could facilitate true robotic personal assistants in the future," says Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. "Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process."
In the future, the team hopes to train the robots using actual videos instead of Sims-style simulation videos, which would enable a robot to learn simply by watching a YouTube video. The team is also working on implementing a reward-learning system in which the agent gets positive feedback when it does tasks correctly.
"You can imagine a setting where robots are assisting with chores at home and can eventually anticipate personalized wants and needs, or impending action," says Puig. "This could be especially helpful as an assistive technology for the elderly, or those who may have limited mobility."

Source: sciencedaily.com

Tuesday 29 May 2018

The first wireless flying robotic insect takes off

Engineers have created RoboFly, the first wireless flying robotic insect. RoboFly is slightly heavier than a toothpick and is powered by a laser beam.

Insect-sized flying robots could help with time-consuming tasks like surveying crop growth on large farms or sniffing out gas leaks. These robots soar by fluttering tiny wings because they are too small to use propellers, like those seen on their larger drone cousins. Small size is advantageous: These robots are cheap to make and can easily slip into tight places that are inaccessible to big drones.

But current flying robo-insects are still tethered to the ground. The electronics they need to power and control their wings are too heavy for these miniature robots to carry.
Now, engineers at the University of Washington have for the first time cut the cord and added a brain, allowing their RoboFly to take its first independent flaps. This might be one small flap for a robot, but it's one giant leap for robot-kind. The team will present its findings May 23 at the International Conference on Robotics and Automation in Brisbane, Australia.
RoboFly uses a tiny onboard circuit that converts the laser energy into enough electricity to operate its wings.
"Before now, the concept of wireless insect-sized flying robots was science fiction. Would we ever be able to make them work without needing a wire?" said co-author Sawyer Fuller, an assistant professor in the UW Department of Mechanical Engineering. "Our new wireless RoboFly shows they're much closer to real life."
The engineering challenge is the flapping. Wing flapping is a power-hungry process, and both the power source and the controller that directs the wings are too big and bulky to ride aboard a tiny robot. So Fuller's previous robo-insect, the RoboBee, had a leash -- it received power and control through wires from the ground.
But a flying robot should be able to operate on its own. Fuller and team decided to use a narrow invisible laser beam to power their robot. They pointed the laser beam at a photovoltaic cell, which is attached above RoboFly and converts the laser light into electricity.
"It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight," said co-author Shyam Gollakota, an associate professor in the UW's Paul G. Allen School of Computer Science & Engineering.
Still, the laser alone does not provide enough voltage to move the wings. That's why the team designed a circuit that boosted the seven volts coming out of the photovoltaic cell up to the 240 volts needed for flight.
To give RoboFly control over its own wings, the engineers provided a brain: They added a microcontroller to the same circuit.
"The microcontroller acts like a real fly's brain telling wing muscles when to fire," said co-author Vikram Iyer, a doctoral student in the UW Department of Electrical Engineering. "On RoboFly, it tells the wings things like 'flap hard now' or 'don't flap.'"
Specifically, the controller sends voltage in waves to mimic the fluttering of a real insect's wings.
"It uses pulses to shape the wave," said Johannes James, the lead author and a mechanical engineering doctoral student. "To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction."
For now, RoboFly can only take off and land. Once its photovoltaic cell is out of the direct line of sight of the laser, the robot runs out of power and lands. But the team hopes to soon be able to steer the laser so that RoboFly can hover and fly around.
While RoboFly is currently powered by a laser beam, future versions could use tiny batteries or harvest energy from radio frequency signals, Gollakota said. That way, their power source can be modified for specific tasks.
Future RoboFlies can also look forward to more advanced brains and sensor systems that help the robots navigate and complete tasks on their own, Fuller said.
"I'd really like to make one that finds methane leaks," he said. "You could buy a suitcase full of them, open it up, and they would fly around your building looking for plumes of gas coming out of leaky pipes. If these robots can make it easy to find leaks, they will be much more likely to be patched up, which will reduce greenhouse emissions. This is inspired by real flies, which are really good at flying around looking for smelly things. So we think this is a good application for our RoboFly."
Source: sciencedaily.com

Hotter bodies fight infections and tumors better -- researchers show how

The hotter our body temperature, the more our bodies speed up a key defense system that fights against tumors, wounds or infections, new research by a multidisciplinary team of mathematicians and biologists from the Universities of Warwick and Manchester has found.

The researchers have demonstrated that small rises in temperature (such as during a fever) speed up a cellular 'clock' that controls the response to infections -- and this new understanding could lead to more effective and fast-acting drugs that target a key protein involved in this process.
Biologists found that inflammatory signals activate 'Nuclear Factor kappa B' (NF-κB) proteins to start a 'clock' ticking, in which NF-κB proteins move backwards and forwards into and out of the cell nucleus, where they switch genes on and off.
This allows cells to respond to a tumour, wound or infection. When NF-κB is uncontrolled, it is associated with inflammatory diseases, such as Crohn's disease, psoriasis and rheumatoid arthritis.
At a body temperature of 34 degrees Celsius, the NF-κB clock slows down. At temperatures higher than the normal 37 degrees (such as the 40 degrees of a fever), the NF-κB clock speeds up.
Mathematicians at the University of Warwick's Systems Biology Centre calculated how temperature increases make the cycle speed up.
They predicted that a protein called A20 -- which is essential to avoid inflammatory disease -- might be critically involved in this process. The experimentalists then removed A20 from cells and found that the NF-kB clock lost its sensitivity to increases in temperature.
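The paper's model of NF-κB shuttling is far more detailed, but the qualitative behaviour can be sketched with a toy delayed-negative-feedback oscillator whose rates scale with temperature. The Q10 scaling, the delay, and all constants below are illustrative assumptions, chosen only to show the period lengthening at 34 degrees and shortening at 40:

    def clock_period(temp_c, q10=2.0, n=12, tau37=3.0, dt=0.002, t_end=80.0):
        # Delayed negative feedback: x represses its own production after a
        # delay (a stand-in for the NF-kB/A20 loop); all rates carry a Q10
        # temperature factor.
        k = q10 ** ((temp_c - 37.0) / 10.0)   # rate multiplier vs. 37 C
        tau = tau37 / k                        # the feedback delay shortens too
        dsteps = int(tau / dt)
        hist = [0.2] * dsteps                  # circular buffer for x(t - tau)
        x, prev, rising, peaks = 0.2, 0.2, False, []
        for i in range(int(t_end / dt)):
            xd = hist[i % dsteps]              # delayed value x(t - tau)
            hist[i % dsteps] = x
            x += k * (1.0 / (1.0 + xd ** n) - x) * dt
            if rising and x < prev and i * dt > 20:   # turning point = a peak
                peaks.append(i * dt)
            rising, prev = x > prev, x
        gaps = [b - a for a, b in zip(peaks, peaks[1:])]
        return sum(gaps) / len(gaps)

    for t in (34, 37, 40):
        print(f"{t} C: period ~{clock_period(t):.2f} (arbitrary time units)")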
Lead mathematician Professor David Rand, Professor of Mathematics and a member of the University of Warwick's Zeeman Institute for Systems Biology and Infectious Disease Epidemiology (SBIDER), explained that in normal life the 24 hour body clock controls small (1.5 degree) changes in body temperature.
He commented: "The lower body temperature during sleep might provide a fascinating explanation of how shift work, jet lag or sleep disorders cause increased inflammatory disease."
Mathematician Dan Woodcock from the University of Warwick said: "This is a good example of how mathematical modelling of cells can lead to useful new biological understanding."
While the activities of many NF-kB controlled genes were not affected by temperature, a key group of genes showed altered profiles at the different temperatures. These temperature sensitive genes included key inflammatory regulators and controllers of cell communication that can alter cell responses.
This study shows that temperature changes inflammation in cells and tissues in a biologically organised way and suggests that new drugs might more precisely change the inflammatory response by targeting the A20 protein.
Professor Mike White, lead biologist from the University of Manchester, said the study provides a possible explanation of how both environmental and body temperature affects our health:
"We have known for some time that influenza and cold epidemics tend to be worse in the winter when temperatures are cooler. Also, mice living at higher temperatures suffer less from inflammation and cancer. These changes may now be explained by altered immune responses at different temperatures."

Source: sciencedaily.com

Thursday 26 April 2018

New breath and urine tests detect early breast cancer more accurately

A new method for early and accurate breast cancer screening has been developed by researchers at Ben-Gurion University of the Negev and Soroka University Medical Center, using commercially available technology.
The researchers were able to isolate relevant data to more accurately identify breast cancer biomarkers using two different electronic nose gas sensors for breath, along with gas-chromatography mass spectrometry (GC-MS) to quantify substances found in urine.
In their study published in Computers in Biology and Medicine, researchers detected breast cancer with more than 95 percent average accuracy using an inexpensive commercial electronic nose (e-nose) that identifies unique breath patterns in women with breast cancer. In addition, their revamped statistical analyses of urine samples submitted both by healthy patients and those diagnosed with breast cancer yielded 85 percent average accuracy.
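The team's revamped statistical analysis isn't published in the release, but the overall workflow, a sensor feature vector per sample fed into a cross-validated classifier, can be sketched generically. Synthetic data stands in for the real breath samples here; the feature counts, class shift, and SVM choice are illustrative assumptions:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic stand-in for e-nose data: 120 breath samples x 32 sensor
    # channels, with 'cancer' samples shifted on a few marker channels.
    X = rng.normal(size=(120, 32))
    y = np.array([0] * 60 + [1] * 60)        # 0 = healthy, 1 = breast cancer
    X[y == 1, :5] += 1.0                     # illustrative class separation

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.2f}")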
"Breast cancer survival is strongly tied to the sensitivity of tumor detection; accurate methods for detecting smaller, earlier tumors remains a priority," says Prof. Yehuda Zeiri, a member of Ben-Gurion University's Department of Biomedical Engineering. "Our new approach utilizing urine and exhaled breath samples, analyzed with inexpensive, commercially available processes, is non-invasive, accessible and may be easily implemented in a variety of settings."
The study reports that breast cancer is the most commonly diagnosed malignancy among females and the leading cause of cancer death among women worldwide. In 2016, breast cancer accounted for 29 percent of all new cancers identified in the United States and was responsible for 14 percent of all cancer-related deaths.
Mammography screenings, which are proven to significantly reduce breast cancer mortality, are not always able to detect small tumors in dense breast tissue. In fact, typical mammography sensitivity of 75 to 85 percent decreases to 30 to 50 percent in dense tissue.
Current diagnostic imaging detection for smaller tumors has significant drawbacks: dual-energy digital mammography, while effective, increases radiation exposure, and magnetic resonance imaging (MRI) is expensive. Biopsies and serum biomarker identification processes are invasive, equipment-intensive and require significant expertise.
"We've now shown that inexpensive, commercial electronic noses are sufficient for classifying cancer patients at early stages," says Prof. Zeiri. "With further study, it may also be possible to analyze exhaled breath and urine samples to identify other cancer types, as well."
Source: www.sciencedaily.com

3D printing electronics and cells directly on skin

In a groundbreaking new study, researchers at the University of Minnesota used a customized, low-cost 3D printer to print electronics on a real hand for the first time. The technology could be used by soldiers on the battlefield to print temporary sensors on their bodies to detect chemical or biological agents or solar cells to charge essential electronics.
Researchers also successfully printed biological cells on the skin wound of a mouse. The technique could lead to new medical treatments for wound healing and direct printing of grafts for skin disorders.
The research study was published today on the inside back cover of the academic journal Advanced Materials.
"We are excited about the potential of this new 3D-printing technology using a portable, lightweight printer costing less than $400," said Michael McAlpine, the study's lead author and the University of Minnesota Benjamin Mayhugh Associate Professor of Mechanical Engineering. "We imagine that a soldier could pull this printer out of a backpack and print a chemical sensor or other electronics they need, directly on the skin. It would be like a 'Swiss Army knife' of the future with everything they need all in one portable 3D printing tool."
One of the key innovations of the new 3D-printing technique is that this printer can adjust to small movements of the body during printing. Temporary markers are placed on the skin and the skin is scanned. The printer uses computer vision to adjust to movements in real-time.
"No matter how hard anyone would try to stay still when using the printer on the skin, a person moves slightly and every hand is different," McAlpine said. "This printer can track the hand using the markers and adjust in real-time to the movements and contours of the hand, so printing of the electronics keeps its circuit shape."
Another unique feature of this 3D-printing technique is that it uses a specialized ink made of silver flakes that can cure and conduct at room temperature. This is different from other 3D-printing inks that need to cure at high temperatures (up to 100 degrees Celsius or 212 degrees Fahrenheit) and would burn the hand.
To remove the electronics, the person can simply peel off the electronic device with tweezers or wash it off with water.
In addition to electronics, the new 3D-printing technique paves the way for many other applications, including printing cells to help those with skin diseases. McAlpine's team partnered with University of Minnesota Department of Pediatrics doctor and medical school Dean Jakub Tolar, an expert on treating rare skin disease. The team successfully used a bioink to print cells on a mouse skin wound, which could lead to advanced medical treatments for those with skin diseases.
"I'm fascinated by the idea of printing electronics or cells directly on the skin," McAlpine said. "It is such a simple idea and has unlimited potential for important applications in the future."
Source: https://www.sciencedaily.com

Friday 23 February 2018

This Nigerian Startup will pitch at the 2018 Harvard Business School New Venture Competition

Publiseer, a digital publishing platform for independent Nigerian authors and artistes, will pitch its business in front of approximately 700 attendees and receive feedback from a panel of experienced judges as one of the 14 finalists of the 2018 Harvard Business School's New Venture Competition, taking place in Boston, MA, United States, on March 2, 2018.
The Africa Business Club at Harvard Business School will be hosting the competition to showcase the diversity of entrepreneurs making a difference on the continent today. The competition will be held along with the 20th Africa Business Conference under the theme “Values And Value-Chains: Africa In A New Global Era”.
The judges include Samuel Alemayehu of Cambridge Industries, Steven Koltai of Koltai & Company, and Josh Sandler of Lori Systems. The winner and the runner-up of the competition will be awarded cash prizes of $10,000 and $5,000, respectively.
Publiseer and other finalists will also have the opportunity to participate in the Startup Lab, a workshop for early-stage entrepreneurs to solicit advice from conference participants. Also, conference participants and Harvard Business School faculty and students with experience in strategy, operations, finance, and other relevant business fields will be recruited to engage with the entrepreneurs and help them ideate and solve their problems.
With over 130 writers and musicians on its platform, Publiseer distributes and monetizes their creative works worldwide.
Source: Techloy.com