Meet the spotted lanternfly, the bug health officials are begging you to kill on sight
Whether you choose to kill insects or not, there is one bug across the northeastern United States that health officials want you to take care of immediately: the spotted lanternfly. Though it may look like a colorful moth worthy of an Instagram post, it is actually an invasive species that can wreak havoc on trees, plants and other landscapes, resulting in millions of dollars in damages.

The spotted lanternfly originates from China, and George Hamilton, department chair of entomology at Rutgers University, believes the insects landed in the U.S. via a crate shipped from that country. The invasive insects—which rarely fly and are actually planthoppers—were first spotted in Pennsylvania less than 10 years ago. Now they can be seen throughout the northeast and mid-Atlantic, from the five boroughs of New York City to parts of Indiana.

They may have spread so easily because they are hard to notice. Hitching rides on cars and packages, they have become such a problem that New Jersey and nearby areas have issued quarantine orders asking people to inspect their vehicles before traveling. In Pennsylvania, 34 counties are currently under quarantine. "They're very good hitchhikers," Hamilton told USA TODAY. "Most people don't even know they've got them until the adult form comes out."

The good news about the insects is that they can't harm humans or pets. However, they cause massive damage to plants and are known to feed on over 70 different types of trees and plants. And the damage doesn't end there. As Amy Korman, a horticulture educator for Penn State Extension, says, "What goes in must come out." The spotted lanternflies secrete a sticky material known as honeydew, which is very high in sugar. It is a substrate for mold, and when it gets on plants it prevents them from photosynthesizing, which eventually kills them. The mold the lanternflies leave behind can also end up on backyards and decks, where it attracts numerous other bugs.

"It seems like it's such a fragile insect. And yet it's been so successful in taking over our landscapes," Korman said. "It's sort of like the Pandora's box of problems."

They have destroyed vineyards throughout Pennsylvania, according to the Philadelphia Inquirer. A January 2020 study by Penn State's College of Agricultural Sciences found that if the species isn't contained, it could cause at least a $324 million hit to the state's economy and the loss of around 2,800 jobs. A worst-case scenario estimates a $554 million economic loss and almost 5,000 jobs lost. The study also estimated current spotted lanternfly-related damage at $50.1 million per year, with a loss of 484 jobs.

"This insect has the potential to be such a significant economic burden," Korman said. "We're still working on ways to manage this insect. We haven't cracked the nut of how to really manage populations of this insect very well."

The states impacted by the spotted lanternfly have a variety of ways to handle the population, but they all have the same goal. "First thing you should do is kill it," Hamilton said. If you don't feel up to killing a spotted lanternfly, Hamilton added, the next best thing is to take a picture of it and report it to your state's department of agriculture; the state of Ohio, for example, has a form residents can fill out. Scraping off and destroying the eggs also helps control the population. "The only good ones are dead ones," Korman said.
There are numerous ways to kill them, including the use of pesticides or simply crushing them; extreme heat or cold also does the trick. Korman added that she has heard of many different ways people have handled the insects, ranging from detergents and alcohol to kerosene. "Sometimes you have to laugh. It's like, you really came up with that concoction and you thought it was gonna work?" she said. "I'm always scratching my head over what the next great home remedy will be."
A 1,000-year drought is hitting the West: Could desalination be a solution?
The United States and many other parts of the world are reeling under the impacts of severe drought. One possible solution is the desalination of seawater, but is it a silver bullet? The Western United States is currently experiencing what one paleoclimatologist called "potentially the worst drought in 1,200 years." The region has had many droughts in the past, including "megadroughts" that last decades, but climate change is making dry years drier and wet years wetter. Higher temperatures heat the ground and air faster, and the increased evaporation dries the soil and decreases the amount of precipitation that reaches reservoirs. Warming also leads to less of the snow-pack needed to replenish rivers, streams, reservoirs and moisten soil in spring and summer. About 44 percent of the U.S. is experiencing some level of drought with almost 10 percent in "exceptional drought." Wildfires currently rage in 13 states, exacerbated by the hot and dry conditions. There have been unprecedented water cuts to the Colorado River—which provides water to seven states—and shutdowns of hydroelectric power plants. The aquifers of towns that depend on well water are depleted, forcing residents to truck in water. Normally, agriculture consumes over 90 percent of the water in many western states, but the drought has caused yields to plummet; some farmers have reduced their acreage or changed crops to less water-intensive ones, while others will likely go bankrupt. Ranchers are having to sell off parts of their herds. But even as the locals contend with these difficulties, more people are moving to the area. Between 1950 and 2010, the Southwest's growth rate was twice that of the rest of the country. The U.S. population is expected to continue growing through 2040, with more than half of that growth in areas that have experienced severe drought in the last ten years. Many people continue to move to an area expected to get even drier in years to come, just as the latest IPCC report predicts that climate change will intensify droughts in these regions. A hydropower plant on Lake Oroville was shut down when lake levels hit historic lows. Credit: Photo: Frank Schulenberg Every other continent in the world is also experiencing serious drought, except for Antarctica. And the U.N. has warned that 130 more countries could face droughts by 2100 if we do nothing to curb climate change. But as soon as 2025, two-thirds of the global population could face water shortages, according to the World Wildlife Fund. This could result in conflicts, political instability, and the displacement of millions of people. The scarcity of fresh water may also make it harder to decarbonize society—something we must do to prevent catastrophic climate change—because some strategies to do this could further stress water resources. Green hydrogen, seen as key to eliminating emissions from aviation, shipping, trucking, and heavy industry, is produced by electrolysis, which splits water into hydrogen and oxygen. However, the process requires large amounts of purified water. One estimate is that nine tons of it are needed to produce one ton of hydrogen, but actually the treatment process used to purify the water requires twice as much impure water. In other words, 18 tons of water are really needed to produce one ton of green hydrogen. 
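To make the water arithmetic above concrete, here is a minimal Python sketch of the budget implied by those figures. The 9-to-1 purified-water ratio and the two-fold purification overhead are the estimates quoted in this article, not universal constants, and the function name is purely illustrative.

```python
# Illustrative back-of-the-envelope sketch of the water demand figures quoted above.
# The 9:1 purified-water ratio and the 2x purification overhead are the article's
# estimates, not measured constants.

PURIFIED_WATER_PER_TON_H2 = 9.0   # tons of purified water per ton of hydrogen (quoted estimate)
RAW_WATER_PER_PURIFIED_TON = 2.0  # tons of impure feed water per ton of purified water (quoted estimate)

def raw_water_needed(tons_hydrogen: float) -> float:
    """Return the tons of untreated water implied by the article's figures."""
    purified = tons_hydrogen * PURIFIED_WATER_PER_TON_H2
    return purified * RAW_WATER_PER_PURIFIED_TON

if __name__ == "__main__":
    # One ton of green hydrogen -> 9 tons purified water -> 18 tons of raw water.
    print(raw_water_needed(1.0))  # 18.0
```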
Nuclear energy, seen by the IPCC as an important tool for achieving our climate goals, also depends on fresh water for cooling, but as water shortages increase, nuclear plants may be forced to reduce their capacity or shut down. Where there's water While most of our planet is covered by water, only three percent of it is fresh water and only a third of that is available to humans since the rest is frozen in glaciers or is inaccessible deep underground. Meanwhile, global warming continues to melt more glaciers each year and increase evaporation, diminishing our fresh water resources. As a result of water scarcity, some parts of the world have turned to desalination for drinking water. Desalination (desal) involves removing salt and minerals from salty water, usually seawater. This process occurs naturally as the sun heats the ocean—fresh water evaporates off the surface and then falls as rain. Arid regions like the Middle East and North Africa have long depended on desal technology for their fresh water. Today over 120 countries have desal plants with Saudi Arabia producing more fresh water through desal than any other nation. The United States also has a number of desal plants with the largest in the western hemisphere located in Carlsbad, CA. A new $1.4 billion desal plant in Huntington Beach, CA is likely to be approved soon. Desalination approaches Desal is usually done one of two ways. Thermal distillation involves boiling seawater, which produces steam that leaves the salt and minerals behind. The steam is then collected and condensed through cooling to produce pure water. The second method is membrane filtration which pushes seawater through membranes that trap the salt and minerals on one side and let pure water through. Before the 1980s, 84 percent of desal used the thermal distillation method. Today, about 70 percent of the world's desal is done with a membrane filtration method called reverse osmosis because it is the cheapest and most efficient method. In natural osmosis, molecules spontaneously move through a membrane from a solution with less dissolved substances to a more concentrated solution, equalizing the two sides. But in reverse osmosis, saltier water is moving through a membrane to a less salty solution. Because this is working against natural osmosis, reverse osmosis requires high pressure to push water through the semi-permeable membranes. The resulting fresh water is then sterilized, usually with ultraviolet light. Concerns about desalination Though desal may be the only solution for some regions, it is expensive, consumes a great deal of energy and has detrimental environmental impacts. "Desalination of seawater is one of the most expensive ways to get water," said Ngai Yin Yip, assistant professor of earth and environmental engineering at Columbia University. "This has just got to do with the fact that getting salt out of water is not an easy thing to do. But we have to have water—there's just no substitute for water. So it can be costly. But the fact that we cannot survive without water means that it is a necessary cost." Large-scale desal facilities are very expensive to build and the plants consume a great deal of energy. Thermal distillation plants require energy to boil water into steam and electricity to drive pumps. Reverse osmosis does not require energy to generate heat but relies on energy for the electricity to drive its high-pressure pumps. 
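A rough calculation shows why those pumps must work so hard: the applied pressure has to exceed the osmotic pressure of seawater before any fresh water flows. The sketch below uses the idealized van't Hoff relation with an assumed salinity of 35 g/L of sodium chloride; the numbers are illustrative assumptions for a ballpark estimate, not figures from the article or from any particular plant.

```python
# Rough estimate of why reverse osmosis needs high-pressure pumps: the applied
# pressure must exceed seawater's osmotic pressure. Uses the idealized van't Hoff
# relation pi = i * M * R * T; all inputs are illustrative assumptions.

R = 0.083145             # L*bar / (mol*K), gas constant
T = 298.15               # K, about 25 degrees C
SALT_G_PER_L = 35.0      # g/L, typical seawater salinity (assumed)
MOLAR_MASS_NACL = 58.44  # g/mol
VANT_HOFF_I = 2          # NaCl dissociates into two ions

molarity = SALT_G_PER_L / MOLAR_MASS_NACL            # ~0.60 mol/L
osmotic_pressure_bar = VANT_HOFF_I * molarity * R * T

print(f"Estimated osmotic pressure: {osmotic_pressure_bar:.0f} bar")
# ~30 bar; real plants typically pump at well above this to keep water flowing.
```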
In addition, the fouling of membranes by less soluble salts, chemicals, and microorganisms can impact their permeability and reduce productivity, adding to maintenance and operational costs. According to Yip, the most economical way to go about doing desalination is to target sources of water that contain less salt, such as groundwater. "The less salt there is, the less work you need to do to take it out," he said. "So from a purely economic perspective, groundwater would be more economical than seawater." Desalinating groundwater can be done sustainably in places where it is abundant. But where it is decreasing, drawing up groundwater can lead to land subsidence, or in coastal areas, to saltwater intrusion of the aquifer. If there is no groundwater available, Yip feels reverse osmosis of seawater is the best technology to use. Many Middle Eastern plants, however, use older thermal plants that run on fossil fuels. As a result, desal plants are currently responsible for emitting 76 million tons of CO2 each year. As demand for desal is expected to increase, global emissions related to desal could reach 400 million tons of CO2 per year by 2050. Desal also has impacts on the marine environment because of the amount of brine it produces. For every one unit of pure water that's produced, about 1.5 units of concentrated brine—twice as salty as seawater and polluted with copper and chlorine used to pretreat the water to prevent it from fouling the membranes—results. Globally, each day over 155 million tons of brine are discharged back into the ocean. If brine is released in a calm area of the ocean, it sinks to the bottom where it can threaten marine life. A 2019 study of the Carlsbad desal plant near San Diego that dilutes its brine before releasing it, found that there were no direct impacts on marine life, however, salt levels exceeded permissible limits and the brine plume extended further offshore than permitted. Improving desalination Researchers around the world are attempting to solve desal's challenges. Here are a few examples of some of their solutions. Renewable energy NEOM is a $500 billion futuristic smart city-state being built in northwest Saudi Arabia along the shores of the Red Sea. To provide water for the estimated one million future residents, it will construct an innovative solar desal system comprising a dome of glass and steel 25 meters high over a cauldron of water. Seawater is piped through a glass enclosed aqueduct and heated by the sun as it travels into the dome. There, parabolic mirrors concentrate solar radiation onto the dome, superheating the seawater. As it evaporates, highly pressurized steam is released and condenses as fresh water, which is piped to reservoirs and irrigation systems. The system is completely carbon neutral and theoretically reduces the amount of brine waste produced. NEOM, expected to be completed in 2025, claims it will produce 30,000 cubic meters of fresh water per hour at 34 cents per cubic meter. The U.S. Army and the University of Rochester researchers have developed a simple and efficient method of desalinating water also dependent on the sun's energy. Using a laser treatment, they created a "super-wicking" aluminum panel with a grooved black surface that makes it super absorbent, enabling it to pull water up the panel from a water source. The black material, heated by the sun, evaporates the water, a process made more efficient because of its super-wicking nature. 
The water is then collected, leaving contaminants behind on the panel, which is easy to clean. It can be reconfigured and also be angled to face the sun, absorbing maximum sunlight, and because it is moveable, could easily be used by military troops in the field. Larger panels would potentially enable the process to be scaled up. European companies are developing the Floating WINDdesal in the Middle East, a seawater desal plant powered almost entirely by wind energy. The floating semi-submersible plant is being built in three sizes, with the largest expected to be able to produce enough water for 500,000 people. The plants can be moved by sea, making them easy to mobilize for emergencies and can be deployed in deeper water where brine disposal would have less impact on marine life. Because they float, they will not be affected by rising sea levels. Membranes Membrane research is focused on increasing membrane permeability which would reduce the amount of pressure needed, reducing the fouling that occurs, and making membranes more resilient to high pressure. A discovery by scientists at the University of Texas, Penn State and DuPont could improve the flow of water through membranes and increase their efficiency, which would mean that reverse osmosis would not require as much pressure. Using an electron microscope technique, the researchers discovered that the densely packed polymers that make up even the thinnest membranes could slow the water flow. The most permeable membranes are those that are more uniformly dense at the nanoscale, and not necessarily the thinnest. The discovery could help makers of membranes improve their performance. Reverse osmosis desal is hindered when microorganisms grow on the membrane surface, slowing the flow of water. Some coatings that have been used to prevent this "biofouling" of membranes are hard to remove, so they result in more energy use as well as more chemicals released into the sea. King Abdullah University of Science and Technology (KAUST) researchers created a nontoxic coating that adheres to the membrane and can be removed with a flush of high-saline solution. Desal without membranes Columbia University engineers led by Yip, developed a method called temperature swing solvent extraction (TSSE) that doesn't use membranes at all to desalinate. The efficient, scalable, and low-cost technique uses a solvent whose water solubility—the amount of a chemical substance that can dissolve in water—changes according to temperature. At low temperatures, the solvent mixed with salt water draws in water molecules but not salt. After all the water is sucked into the solvent, the salts form crystals that can easily be removed. The solvent and its absorbed water are then heated to a moderate temperature, enabling the solvent to release the water, which forms a separate layer below. The water can then be collected. Yip explained that the process is designed to deal with very salty water, which reverse osmosis cannot handle. For example, the water that comes up during oil and gas extraction can be five to seven times saltier than regular seawater. The textile industry also produces very salty water because of the solutions it uses to dye cloth. According to Yip, TSSE is not the best way to obtain drinking water, but it could help replenish our water resources for other needs. Brine Brine impacts can be lessened by how much brine is discharged and how the desal process is carried out. 
Stanford University researchers have developed a device that can turn brine into useful chemicals.Through an electrochemical process, it splits the brine into positively charged sodium and negatively charged chlorine ions. These can then be combined with other elements to form sodium hydroxide, hydrogen, and hydrochloric acid. Sodium hydroxide can be used to pretreat seawater going into the desal plant to minimize fouling of the membranes. It is also involved in the manufacture of soap, paper, detergents, explosives and aluminum. Hydrochloric acid is useful for cleaning desal plants, producing batteries, and processing leather; it is also used as a food additive and is a source of hydrogen. Turning brine components into chemicals that have other purposes would decrease brine waste and its environmental damage, as well as improve the economic viability of desalination. Diluting brine can also lessen its impacts. "You take more seawater, and you premix it in an engineered reactor," said Yip. "Now the salinity of that mix is not two times saltier than seawater. It's still saltier than seawater, but it's lower. And instead of discharging it at one point, you discharge it at several points with diffusers. These are engineering approaches to try to minimize the impacts of brine," he explained. Other solutions for the drought Despite improvements in desal's environmental and economic profile, however, it is still an expensive solution to water scarcity. This is especially so given that most water in the U.S. is used for agriculture, taking showers, and flushing toilets. Newsha Ajami, the director of urban water policy at Stanford, said "I disagree with using tons of resources to clean the water up just to flush it down the toilet." Water recycling Paulina Concha Laurrari, a senior staff associate at the Columbia Water Center, said "Water reuse definitely has to be an important part of the solution. Our wastewater can get treated, either to potable standards, like it's been done in other parts of the world and even in California, or to a different standard that can be used for agriculture or other things." Recycling the approximately 50 million tons of municipal wastewater that is discharged daily around the U.S. into the ocean or an estuary could supply 6 percent of the nation's total water use. Recycled water can be used for irrigation, watering lawns, parks and golf courses, for industrial use and for replenishing aquifers. The House of Representatives is considering a bill that would direct the Secretary of the Interior to establish a program to fund water recycling projects and build water recycling facilities in 17 western states through 2027. The technology to recycle water has been around for 50 years. Wastewater treatment facilities add microbes to wastewater to consume the organic matter. Membranes then are used to filter out bacteria and viruses, and the filtered water is treated with ultraviolet light to kill any remaining microbes. The water can be used for agriculture or industry, or it can be pumped into an aquifer for storage. When it is needed for drinking water, it can be pumped out and repurified. If the water is for human consumption, some minerals are added back in to make it more drinkable. Waste not Every year in the U.S., approximately 9 billion tons of drinking water are lost due to leaking faucets, pipes and water mains, and defective meters. 
President Biden's $1.2 trillion infrastructure plan includes substantial sums for upgrading clean drinking water and wastewater infrastructure. In the U.S., 42 billion tons of untreated stormwater enter the sewage system and waterways, and ultimately the ocean, each year. This means that rainwater that could soak into the ground to replenish groundwater supplies is lost. Green infrastructure, such as green roofs, rain gardens, trees, and rain barrels, would reduce some of this water waste.

Sensible water use

It's also important to figure out how to put the water that's available to the best use in a particular area. "For example, having a better planning strategy of what is the best use for water, like what to plant where," said Laurrari. "Instead of using it, say, for alfalfa, how do we use it for higher value crops? Or even tell farmers, 'I will pay you not to use this water' and the state can have it to replenish our aquifers or to source cities or something else." Determining the most reasonable and economical uses for water would help everyone understand and appreciate its true value.

"In some of these places where they're having droughts, there are still people who are watering their lawns, and happily paying the fine," said Yip. "So really, there's a mismatch between what is happening and what the reality is. We need to adjust our activities such that we are not putting that kind of a human-imposed strain on the water supply. We need to be thinking about how we make drastic wholesale changes to the way we organize our activities that actually make sense."

Israel's example

Israel is located in one of the driest regions of the world and has few natural water resources; however, it is considered "the best in the world in water efficiency," according to Global Water Intelligence, an international water industry publisher. Israeli children are taught about water conservation beginning in preschool, and adults are reminded not to waste water in television ads. Low-flow showerheads and faucets are mandatory, and Israeli toilets usually have two different flushing options for urine and bowel movements. The country adopted drip irrigation, which uses half as much water as traditional irrigation while producing higher yields. Israel also resolutely attends to small leaks in pipes before they become large. In addition, 75 percent of its wastewater is recycled, more than in any other country. And because Israelis pay for their water themselves, they are careful about how much they use and readily adopt water-saving technology. As a result, it's estimated that the average Israeli consumes half as much water each day as the average American. Israel began desalination in the 1960s.
Ancient woman's DNA provides first evidence for the origin of a mysterious lost culture: The Toaleans
In 2015, archaeologists from the University of Hasanuddin in Makassar, on the Indonesian island of Sulawesi, uncovered the skeleton of a woman buried in a limestone cave. Studies revealed the person from Leang Panninge, or "Bat Cave," was 17 or 18 years old when she died some 7,200 years ago. Her discoverers dubbed her Bessé' (pronounced "bur-sek")—a nickname bestowed on newborn princesses among the Bugis people who now live in southern Sulawesi. The name denotes the great esteem local archaeologists have for this ancient woman.

She represents the only known skeleton of one of the Toalean people. These enigmatic hunter-gatherers inhabited the island before Neolithic farmers from mainland Asia ("Austronesians") spread into Indonesia around 3,500 years ago. Our team found ancient DNA that survived inside the inner ear bone of Bessé', furnishing us with the first direct genetic evidence of the Toaleans. This is also the first time ancient human DNA has been reported from Wallacea, the vast group of islands between Borneo and New Guinea, of which Sulawesi is the largest.

Genomic analysis shows Bessé' belonged to a population with a previously unknown ancestral composition. She shares about half of her genetic makeup with present-day Indigenous Australians and people in New Guinea and the Western Pacific. This includes DNA inherited from the now-extinct Denisovans, who were distant cousins of Neanderthals.

Image: Stone arrowheads (Maros points) and other flaked stone implements from the Toalean culture of South Sulawesi. Credit: Shahna Britton/Andrew Thomson, Author provided

Image: Burial of a Toalean hunter-gatherer woman dated to 7,200 years ago. Bessé' was 17-18 years old at time of death. She was buried in a flexed position and several large cobbles were placed on and around her body. Although the skeleton is fragmented, ancient DNA was found preserved in the dense inner ear bone (petrous). Credit: University of Hasanuddin

In fact, relative to other ancient and present-day groups in the region, the proportion of Denisovan DNA in Bessé' could indicate the main meeting point between our species and Denisovans was in Sulawesi itself (or perhaps a nearby Wallacean island). The ancestry of this pre-Neolithic woman provides fascinating insight into the little-known population history and genetic diversity of early modern humans in the Wallacean islands—the gateway to the continent of Australia.

Toalean culture

The archaeological story of the Toaleans began more than a century ago. In 1902, the Swiss naturalists Paul and Fritz Sarasin excavated several caves in the highlands of southern Sulawesi. Their digs unearthed small, finely crafted stone arrowheads known as Maros points. They also found other distinctive stone implements and tools fashioned from bone, which they attributed to the original inhabitants of Sulawesi—the prehistoric "Toalien" people (now spelled Toalean).

Image: Sulawesi is the largest island in Wallacea, the zone of oceanic islands between the continental regions of Asia and Australia. White shaded areas represent landmasses exposed during periods of lower sea level in the Late Pleistocene. The Wallace Line is a major biogeographical boundary that marks the eastern extent of the distinctive plant and animal worlds of Asia. The Toalean cave site Leang Panninge (where Bessé' was found) is located in Sulawesi's southwestern peninsula. Toalean archaeological sites have only been found in a roughly 10,000 km² area of this peninsula, south of Lake Tempe. Credit: Kim Newman

Some Toalean cave sites have since been excavated to a higher scientific standard, yet our understanding of this culture is at an early stage. The oldest known Maros points and other Toalean artifacts date to about 8,000 years ago. Excavated findings from caves suggest the Toaleans were hunter-gatherers who preyed heavily on wild endemic warty pigs and harvested edible shellfish from creeks and estuaries. So far, evidence for the group has only been found in one part of southern Sulawesi. Toalean artifacts disappear from the archaeological record by the fifth century AD—a few thousand years after the first Neolithic settlements emerged on the island.

Prehistorians have long sought to determine who the Toaleans were, but efforts have been impeded by a lack of securely dated human remains. This all changed with the discovery of Bessé' and the ancient DNA in her bones.

Image: A Toalean stone arrowhead, known as a Maros point. Classic Maros points are small (roughly 2.5 cm in maximum dimension) and were fashioned with rows of fine tooth-like serrations along the sides and tip, and wing-like projections at the base. Although this particular stone technology seems to have been unique to the Toalean culture, similar projectile points were produced in northern Australia, Java and Japan. Credit: Shahna Britton/Andrew Thomson

The ancestral story of Bessé'

Our results mean we can now confirm existing presumptions that the Toaleans were related to the first modern humans to enter Wallacea some 65,000 years ago or more. These seafaring hunter-gatherers were the ancestors of Aboriginal Australians and Papuans. They were also the earliest inhabitants of Sahul, the supercontinent that emerged during the Pleistocene (ice age) when global sea levels fell, exposing a land bridge between Australia and New Guinea. To reach Sahul, these pioneering humans made ocean crossings through Wallacea, but little about their journeys is known. It is conceivable the ancestors of Bessé' were among the first people to reach Wallacea. Instead of island-hopping to Sahul, however, they remained in Sulawesi.

But our analyses also revealed a deep ancestral signature from an early modern human population that originated somewhere in continental Asia. These ancestors of Bessé' did not intermix with the forebears of Aboriginal Australians and Papuans, suggesting they may have entered the region after the initial peopling of Sahul—but long before the Austronesian expansion. Who were these people? When did they arrive in the region and how widespread were they? It's unlikely we will have answers to these questions until we have more ancient human DNA samples and pre-Neolithic fossils from Wallacea. This unexpected finding shows us how little we know about the early human story in our region.

Image: Toalean stone arrowheads (Maros points), backed microliths (small stone implements that may have been hafted as barbs) and bone projectile points. These artefacts are from Indonesian collections curated in Makassar and mostly comprise undated specimens collected from the ground surface at archaeological sites. Credit: Basran Burhan

A new look at the Toaleans

With funds awarded by the Australian Research Council's Discovery program, we are initiating a new project that will explore the Toalean world in greater detail. Through archaeological excavations at Leang Panninge we hope to learn more about the development of this unique hunter-gatherer culture.
We also wish to address longstanding questions about Toalean social organization and ways of life. For example, some scholars have inferred the Toaleans became so populous that these hitherto small and scattered groups of foragers began to settle down in large sedentary communities, and possibly even domesticated wild pigs. It has also recently been speculated Toaleans were the mysterious Asian seafarers who visited Australia in ancient times, introducing the dingo (or more accurately, the domesticated ancestor of this now-wild canid). There is clearly much left to uncover about the long island story of Bessé' and her kin. Read the full article
Secrets of COVID-19 transmission revealed in turbulent puffs
Researchers have developed a new model that explains how turbulent puffs, like coughs, behave under different environmental conditions. They found that at environmental temperatures 15°C or lower, the puffs behaved with newly observed dynamics, showing more buoyancy and traveling further. Their findings could help scientists better predict how turbulence and the environment affect airborne transmission of viruses like SARS-CoV-2. Turbulence is everywhere—in the movement of the wind, the ocean waves and even magnetic fields in space. It can also be seen in more transient phenomena, like smoke billowing from a chimney, or a cough. Understanding this latter type of turbulence—called puff turbulence—is important not only for the advancement of fundamental science, but also for practical health and environmental measures, like calculating how far cough droplets will travel, or how pollutants released from a chimney or cigarette might disperse into the surroundings. But creating a complete model of how turbulent puffs of gasses and liquids behave has so far proven elusive. "The very nature of turbulence is chaotic, so it's hard to predict," said Professor Marco Edoardo Rosti, who leads the Complex Fluids and Flows Unit at Okinawa Institute of Science and Technology Graduate University (OIST). "Puff turbulence, which occurs when the ejection of a gas or liquid into the environment is disrupted, rather than continuous, has more complicated characteristics, so it's even more challenging to study. But it's of vital importance—especially right now for understanding airborne transmission of viruses like SARS-CoV-2." Large scale and small scale dynamics of a turbulent puff. Credit: Okinawa Institute of Science and Technology Until now, the most recent theory was developed in the 1970s, and focused on the dynamics of a puff only at the scale of the puff itself, like how fast it moved and how wide it spread. The new model, developed in a collaboration between Prof. Rosti from OIST, Japan and Prof. Andrea Mazzino from the University of Genova in Italy, builds on this theory to include how minute fluctuations within the puff behave, and how both large-scale and small-scale dynamics are impacted by changes in temperature and humidity. Their findings were published in Physical Review Letters on August 25th 2021. Interestingly, the scientists found that at cooler temperatures (15°C or lower), their model deviated from the classical model for turbulence. In the classical model, turbulence reigns supreme—determining how all the little swirls and eddies within the flow behave. But once temperatures dipped, buoyancy started to have a greater impact. "The effect of buoyancy was initially very unexpected. It's a completely new addition to the theory of turbulent puffs," said Prof. Rosti. Buoyancy exerts an effect when the gas or liquid puff is much warmer than the temperature of the immediate surroundings it is released into. Warm gas or fluid is much less dense than the cold gas or fluid of the environment, and therefore the puff rises, allowing it to travel further. "Buoyancy generates a very different kind of turbulence—not only do you see changes in the large-scale movement of the puff, but also changes in the minute movements within the puff," said Prof. Rosti. The scientists used a powerful supercomputer, capable of resolving behavior of the puff at the large-scale and the small-scale, to run simulations of turbulent puffs, which confirmed their new theory. 
The new model could now allow scientists to better predict the movement of droplets in the air that are released when someone coughs or speaks unmasked. While larger droplets fall quickly to the ground, reaching distances of around one meter, smaller droplets can remain airborne for much longer and travel further. "How fast the droplets evaporate—and therefore how small they are—depends on turbulence, which in turn is affected by the humidity and temperature of the surroundings," explained Prof. Rosti. "We can now start to take these differences in environmental conditions, and how they affect turbulence, into consideration when studying airborne viral transmission." Next, the researchers plan to study how puffs behave when made of more complicated non-Newtonian fluids, where how easily the fluid flows can change depending on the forces it is under. "For COVID, this could be useful for studying sneezes, where non-Newtonian fluids like saliva and mucus are forcefully expelled," said Dr. Rosti. Read the full article
New class of habitable exoplanets represent a big step forward in the search for life
A new class of exoplanet very different to our own, but which could support life, has been identified by astronomers, which could greatly accelerate the search for life outside our Solar System. In the search for life elsewhere, astronomers have mostly looked for planets of a similar size, mass, temperature and atmospheric composition to Earth. However, astronomers from the University of Cambridge believe there are more promising possibilities out there. The researchers have identified a new class of habitable planets, dubbed 'Hycean' planets—hot, ocean-covered planets with hydrogen-rich atmospheres—which are more numerous and observable than Earth-like planets. The researchers say the results, reported in The Astrophysical Journal, could mean that finding biosignatures of life outside our Solar System within the next two or three years is a real possibility. "Hycean planets open a whole new avenue in our search for life elsewhere," said Dr. Nikku Madhusudhan from Cambridge's Institute of Astronomy, who led the research. Many of the prime Hycean candidates identified by the researchers are bigger and hotter than Earth, but still have the characteristics to host large oceans that could support microbial life similar to that found in some of Earth's most extreme aquatic environments. These planets also allow for a far wider habitable zone, or 'Goldilocks zone', compared to Earth-like planets. This means that they could still support life even though they lie outside the range where a planet similar to Earth would need to be in order to be habitable. Thousands of planets outside our Solar System have been discovered since the first exoplanet was identified nearly 30 years ago. The vast majority are planets between the sizes of Earth and Neptune and are often referred to as 'super-Earths' or 'mini-Neptunes': they can be predominantly rocky or ice giants with hydrogen-rich atmospheres, or something in between. Most mini-Neptunes are over 1.6 times the size of Earth: smaller than Neptune but too big to have rocky interiors like Earth. Earlier studies of such planets have found that the pressure and temperature beneath their hydrogen-rich atmospheres would be too high to support life. However, a recent study on the mini-Neptune K2-18b by Madhusudhan's team found that in certain conditions these planets could support life. The result led to a detailed investigation into the full range of planetary and stellar properties for which these conditions are possible, which known exoplanets may satisfy those conditions, and whether their biosignatures may be observable. Astronomers have identified a new class of habitable planets, dubbed ‘Hycean’ planets – hot, ocean-covered planets with hydrogen-rich atmospheres – which could represent a big step forward in the search for life elsewhere. Credit: Amanda Smith, University of Cambridge The investigation led the researchers to identify a new class of planets, Hycean planets, with massive planet-wide oceans beneath hydrogen-rich atmospheres. Hycean planets can be up to 2.6 times larger than Earth and have atmospheric temperatures up to nearly 200 degrees Celsius, but their oceanic conditions could be similar to those conducive for microbial life in Earth's oceans. Such planets also include tidally locked 'dark' Hycean worlds that may have habitable conditions only on their permanent night sides, and 'cold' Hycean worlds that receive little radiation from their stars. 
Planets of this size dominate the known exoplanet population, although they have not been studied in nearly as much detail as super-Earths. Hycean worlds are likely quite common, meaning that the most promising places to look for life elsewhere in the Galaxy may have been hiding in plain sight. However, size alone is not enough to confirm whether a planet is Hycean: other aspects such as mass, temperature and atmospheric properties are required for confirmation. When trying to determine what the conditions are like on a planet many light years away, astronomers first need to determine whether the planet lies in the habitable zone of its star, and then look for molecular signatures to infer the planet's atmospheric and internal structure, which govern the surface conditions, presence of oceans and potential for life. Astronomers also look for certain biosignatures which could indicate the possibility of life. Most often, these are oxygen, ozone, methane and nitrous oxide, which are all present on Earth. There are also a number of other biomarkers, such as methyl chloride and dimethyl sulphide, that are less abundant on Earth but can be promising indicators of life on planets with hydrogen-rich atmospheres where oxygen or ozone may not be as abundant. "Essentially, when we've been looking for these various molecular signatures, we have been focusing on planets similar to Earth, which is a reasonable place to start," said Madhusudhan. "But we think Hycean planets offer a better chance of finding several trace biosignatures." "It's exciting that habitable conditions could exist on planets so different from Earth," said co-author Anjali Piette, also from Cambridge. Madhusudhan and his team found that a number of trace terrestrial biomarkers expected to be present in Hycean atmospheres would be readily detectable with spectroscopic observations in the near future. The larger sizes, higher temperatures and hydrogen-rich atmospheres of Hycean planets make their atmospheric signatures much more detectable than Earth-like planets. The Cambridge team identified a sizeable sample of potential Hycean worlds which are prime candidates for detailed study with next-generation telescopes, such as the James Webb Space Telescope (JWST), which is due to be launched later this year. These planets all orbit red dwarf stars between 35-150 light years away: close by astronomical standards. Planned JWST observations of the most promising candidate, K2-18b, could lead to the detection of one or more biosignature molecules. "A biosignature detection would transform our understanding of life in the universe," said Madhusudhan. "We need to be open about where we expect to find life and what form that life could take, as nature continues to surprise us in often unimaginable ways." Read the full article
LED streetlights contribute to insect population declines: study
Street lighting has detrimental impacts on local insect populations.

Streetlights—particularly those that use white light-emitting diodes (LEDs)—not only disrupt insect behavior but are also a culprit behind their declining numbers, a new study carried out in southern England showed Wednesday. Artificial lights at night had been identified as a possible factor behind falling insect populations around the world, but the topic had been under-researched.

To address the question, scientists compared 26 roadside sites consisting of either hedgerows or grass verges that were lit by streetlights against an equal number of nearly identical sites that were unlit. They also examined a site with one unlit and two lit sections, all of which were similar in their vegetation. The team chose moth caterpillars as a proxy for nocturnal insects more broadly, because they remain within a few meters of where they hatched during the larval stage of their lives, before they acquire the ability to fly. The team either struck the hedges with sticks so that the caterpillars fell out, or swept the grass with nets to pick them up.

The results were eye-opening, with a 47 percent reduction in insect population at the hedgerow sites and a 37 percent reduction at the roadside grassy areas. "We were really quite taken aback by just how stark it was," lead author Douglas Boyes, of the UK Centre for Ecology and Hydrology, told AFP, adding the team had expected a more modest decline of around 10 percent.

Image: A selection of moth caterpillars caught by sweep netting during fieldwork. Credit: Douglas Boyes

"We consider it most likely that it's due to females, mums, not laying eggs in these areas," he said. The lighting also disturbed the caterpillars' feeding behavior: when the team weighed them, they found that those in the lighted areas were heavier. Boyes said the team interpreted that as the caterpillars not knowing how to respond to an unfamiliar situation that runs counter to the conditions they evolved in over millions of years, and feeding more as a result to rush through their development.

The team found that the disruption was most pronounced in areas lit by LED lights as opposed to high-pressure sodium (HPS) lamps or older low-pressure sodium (LPS) lamps, both of which produce a yellow-orange glow that is less like sunlight. LED lamps have grown more popular in recent years because of their superior energy efficiency.

The paper acknowledged the effect of street lighting is localized and a "minor contributor" to declining insect numbers, with other important factors including urbanization and destruction of habitats, intensive agriculture, pollution and climate change. But even localized reductions can have cascading consequences for the wider ecosystem, resulting in less food for the birds and bats that prey upon insects. Moreover, "there are really quite accessible solutions," said Boyes—like applying filters to change the lamps' color, or adding shields so that the light shines only on the road, not insect habitats. The study is published in Science Advances.
World-first detector designed by dark matter researchers records rare events
A ground-breaking detector that aims to use quartz to capture high frequency gravitational waves has been built by researchers at the ARC Centre of Excellence for Dark Matter Particle Physics (CDM) and the University of Western Australia. In its first 153 days of operation, two events were detected that could, in principle, be high frequency gravitational waves, which have not been recorded by scientists before. Such high frequency gravitational waves may have been created by a primordial black hole or a cloud of dark matter particles. The results were published this month in Physical Review Letters in an article titled "Rare Events Detected with a Bulk Acoustic Wave High Frequency Gravitational Wave Antenna." Gravitational waves were originally predicted by Albert Einstein, who theorized that the movement of astronomical objects could cause waves of spacetime curvature to be sent rippling through the universe, almost like the waves caused by stones dropped into a flat pond. This prediction was proven in 2015 by the first detection of a gravitational wave signal. Scientists believe that low frequency gravitational waves are caused by two black holes spinning and merging into each other or a star disappearing into a black hole. A quartz crystal bulk acoustic wave resonator. Since then, a new era of gravitational wave research has begun but the current generation of active detectors feature strong sensitivity to only low frequency signals; the detection of high frequency gravitational waves has remained an unexplored and extremely challenging front in astronomy. Despite most attention devoted to low frequency gravitational waves, there is a significant number of theoretical proposals for high frequency GW sources as well, for example, primordial blackholes. The new detector designed by the research team at the CDM to pick up high frequency gravitational waves is built around a quartz crystal bulk acoustic wave resonator (BAW). At the heart of this device is a quartz crystal disk that can vibrate at high frequencies due to acoustic waves traveling through its thickness. These waves then induce electric charge across the device, which can be detected by placing conducting plates on the outer surfaces of the quartz disk. The BAW device was connected to a superconducting quantum interference device, known as SQUID, which acts as an extremely sensitive amplifier for the low voltage signal from the quartz BAW. This assembly was placed in multiple radiation shields to protect it from stray electromagnetic fields and cooled to a low temperature to allow low energy acoustic vibrations of the quartz crystal to be detected as large voltages with the help of the SQUID amplifier. The team, which included Dr. Maxim Goryachev, Professor Michael Tobar, William Campbell, Ik Siong Heng, Serge Galliou and Professor Eugene Ivanov will now work to determine the nature of the signal, potentially confirming the detection of high frequency gravitational waves. Mr Campbell said a gravitational wave is just one possible candidate that was detected, but other explanations for the result could be the presence of charge particles or mechanical stress build up, a meteor event or an internal atomic process. It might also be due to a very high mass dark matter candidates interacting with the detector. "It's exciting that this event has shown that the new detector is sensitive and giving us results, but now we have to determine exactly what those results mean," Mr Campbell said. 
"With this work, we have demonstrated for the first time that these devices can be used as highly sensitive gravitational wave detectors. This experiment is one of only two currently active in the world searching for high frequency gravitational waves at these frequencies and we have plans to extend our reach to even higher frequencies, where no other experiments have looked before. The development of this technology could potentially provide the first detection of gravitational waves at these high frequencies, giving us new insight into this area of gravitational wave astronomy. "The next generation of the experiment will involve building a clone of the detector and a muon detector sensitive to cosmic particles. If two detectors find the presence of gravitational waves, that will be really exciting," he said. Read the full article
Possible new antivirals against COVID-19, herpes
Clinically approved antiviral drugs are currently available for only 10 of the more than 220 viruses known to infect humans. The SARS-CoV-2 outbreak has exposed the critical need for compounds that can be rapidly mobilised for the treatment of re-emerging or emerging viral diseases.

In addition to antibodies and white blood cells, the immune system deploys peptides to fight viruses and other pathogens. Synthetic peptides could reinforce this defense but don't last long in the body, so researchers are developing stable peptide mimics. Today, scientists report success in using mimics known as peptoids to treat animals with herpes virus infections. These small synthetic molecules could one day cure or prevent many kinds of infections, including COVID-19. The researchers will present their results at the fall meeting of the American Chemical Society (ACS).

"In the body, antimicrobial peptides such as LL-37 help keep viruses, bacteria, fungi, cancer cells and even parasites under control," says Annelise Barron, Ph.D., one of the project's principal investigators. But peptides are quickly cleared by enzymes, so they're not ideal drug candidates. Instead, she and her colleagues emulated the key biophysical attributes of LL-37 in smaller, more stable molecules called peptoids. "Peptoids are easy to make," says Barron, who is at Stanford University. "And unlike peptides, they're not rapidly degraded by enzymes, so they could be used at a much lower dose."

Image: Peptoids (blue, left) pierce the protective coat of a virus, causing its disintegration and inactivation (right). Credit: Maxwell Biosciences

Peptides consist of short sequences of amino acids, with side chains bonded to carbon atoms in the molecules' backbone. This structure is easily broken apart by enzymes. In peptoids, the side chains are instead linked to nitrogens in the molecular backbone, forming a structure that resists enzymes. They were first created in 1992 by Chiron Corp.'s Ronald Zuckermann, Ph.D., later Barron's postdoctoral adviser. Unlike other types of peptide mimics that require laborious, multi-step organic chemistry to produce, peptoids are simple and inexpensive to make with an automated synthesizer and readily available chemicals, she says. "You can make them almost as easily as you make bread in a bread machine."

Barron, Zuckermann, Gill Diamond, Ph.D., of the University of Louisville and others founded Maxwell Biosciences to develop peptoids as clinical candidates to prevent or treat viral infections. They recently reported results with their newest peptoid sequences, which were designed to be less toxic to people than previous versions. In lab dishes, the compounds inactivated SARS-CoV-2, which causes COVID-19, and herpes simplex virus-1 (HSV-1), which causes oral cold sores, making the viruses incapable of infecting cultured human cells. Now, the researchers are reporting in vivo results showing that the peptoids safely prevented herpes infections in mice when dabbed on their lips.

Diamond's team is conducting additional experiments to confirm the mouse findings. In addition, they will investigate the peptoids' effectiveness against HSV-1 strains that are resistant to acyclovir, the best current U.S. Food and Drug Administration-approved antiviral treatment for this condition, Barron says. The researchers are also getting ready to test peptoids for activity against SARS-CoV-2 in mice.
"COVID-19 infection involves the whole body, once somebody gets really sick with it, so we will do this test intravenously, as well as looking at delivery to the lungs," Barron says. But these antimicrobial molecules could have many more applications. Work is ongoing at Stanford to explore their impact on ear and lung infections. And Barron has sent peptoid samples to experts in other labs to test against a range of viruses, with promising results in lab dish studies against influenza, the cold virus, and hepatitis B and C. "In their in vitro studies, a team found that two of the peptoids were the most potent antivirals ever identified against MERS and older SARS coronaviruses," Barron says. Other labs are testing the peptoids as anti-fungals for airways and the gut and as anti-infective coatings for contact lenses, catheters and implanted hip and knee joints. Diamond and Barron are studying how these broad-spectrum compounds work. They seem to pierce and break up the viral envelope and also bind to the virus' RNA or DNA. That multipronged mechanism has the advantage of inactivating the virus, unlike standard antivirals, which slow viral replication but still allow viruses to infect cells, Barron says. It also makes it less likely that pathogens could develop resistance. Barron expects clinical trials to begin within the year. If successful, she says, peptoids could be given as a preventative—for instance, before air travel to protect a passenger from COVID-19—or after an infection takes hold, such as when a person feels the telltale tingle of an oncoming cold sore. Read the full article
Interstellar comets like Borisov may not be all that rare
Astronomers calculate that the Oort Cloud may be home to more visiting objects than objects that belong to our solar system. In 2019, astronomers spotted something incredible in our backyard: a rogue comet from another star system. Named Borisov, the icy snowball traveled 110,000 miles per hour and marked the first and only interstellar comet ever detected by humans. But what if these interstellar visitors—comets, meteors, asteroids and other debris from beyond our solar system—are more common than we think? In a new study published Monday in the Monthly Notices of the Royal Astronomical Society, astronomers Amir Siraj and Avi Loeb at the Center for Astrophysics | Harvard & Smithsonian (CfA) present new calculations showing that in the Oort Cloud—a shell of debris in the farthest reaches of our solar system—interstellar objects outnumber objects belonging to our solar system. "Before the detection of the first interstellar comet, we had no idea how many interstellar objects there were in our solar system, but theory on the formation of planetary systems suggests that there should be fewer visitors than permanent residents," says Siraj, a concurrent undergraduate and graduate student in Harvard's Department of Astronomy and lead author of the study. "Now we're finding that there could be substantially more visitors." Detected in 2019, the Borisov comet was the first interstellar comet known to have passed through our solar system. Credit: NASA, ESA and D. Jewitt (UCLA) The calculations, made using conclusions drawn from Borisov, include significant uncertainties, Siraj points out. But even after taking these into consideration, interstellar visitors prevail over objects that are native to the solar system. "Let's say I watch a mile-long stretch of railroad for a day and observe one car cross it. I can say that, on that day, the observed rate of cars crossing the section of railroad was one per day per mile," Siraj explains. "But if I have a reason to believe that the observation was not a one-off event—say, by noticing a pair of crossing gates built for cars—then I can take it a step further and begin to make statistical conclusions about the overall rate of cars crossing that stretch of railroad." But if there are so many interstellar visitors, why have we only ever seen one? We just don't have the technology to see them yet, Siraj says. Consider, he says, that the Oort Cloud spans a region some 200 billion to 10 trillion miles away from our Sun—and unlike stars, objects in the Oort Cloud don't produce their own light. Those two factors make debris in the outer solar system incredibly hard to see. Senior astrophysicist Matthew Holman, who was not involved in the research, says the study results are exciting because they have implications for objects even closer than the Oort Cloud. "These results suggest that the abundances of interstellar and Oort cloud objects are comparable closer to the Sun than Saturn. This can be tested with current and future solar system surveys," says Holman, who is the former director of the CfA's Minor Planet Center, which tracks comets, asteroids and other debris in the solar system. "When looking at the asteroid data in that region, the question is: are there asteroids that really are interstellar that we just didn't recognize before?" he asks. Holman explains that there are some asteroids that get detected but aren't observed or followed up on year after year. "We think they are asteroids, then we lose them without doing a detailed look." 
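Siraj's railroad analogy is, at bottom, a rate-estimation argument: even a single detection in a known observing window constrains how common such events can be. The toy Poisson calculation below illustrates that logic in a generic way; the five-year survey window is a made-up number, and this is not the calculation performed in the actual study.

```python
# Toy illustration of the "one car on the railroad" reasoning: from a single
# detection in a known survey window you can still bound the underlying rate.
# Generic Poisson sketch only, not the paper's calculation.

from scipy import stats

observed_events = 1   # e.g. one interstellar comet (Borisov)
survey_years = 5.0    # assumed effective survey duration (hypothetical)

# Exact central 90% confidence interval on the Poisson mean, given 1 observed event.
lower = stats.chi2.ppf(0.05, 2 * observed_events) / 2
upper = stats.chi2.ppf(0.95, 2 * (observed_events + 1)) / 2

print(f"Rate estimate: {observed_events / survey_years:.2f} per year "
      f"(90% CI {lower / survey_years:.2f} to {upper / survey_years:.2f})")
```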
Loeb, study co-author and Harvard astronomy professor, adds that "interstellar objects in the planetary region of the solar system would be rare, but our results clearly show they are more common than solar system material in the dark reaches of the Oort cloud." Observations with next-generation technology may help confirm the team's results. The launch of the Vera C. Rubin Observatory, slated for 2022, will "blow previous searches for interstellar objects out of the water," Siraj says, and hopefully help detect many more visitors like Borisov. The Transneptunian Automated Occultation Survey (TAOS II), which is specifically designed to detect comets in the far reaches of our solar system, may also be able to detect one of these passersby. TAOS II may come online as early as this year. The abundance of interstellar objects in the Oort Cloud suggests that much more debris is left over from the formation of planetary systems than previously thought, Siraj says. "Our findings show that interstellar objects can place interesting constraints on planetary system formation processes, since their implied abundance requires a significant mass of material to be ejected in the form of planetesimals," Siraj says. "Together with observational studies of protoplanetary disks and computational approaches to planet formation, the study of interstellar objects could help us unlock the secrets of how our planetary system—and others—formed." Read the full article
Mathematical model predicts best way to build muscle
Researchers have developed a mathematical model that can predict the optimum exercise regime for building muscle. The researchers, from the University of Cambridge, used methods of theoretical biophysics to construct the model, which can tell how much a specific amount of exertion will cause a muscle to grow and how long it will take. The model could form the basis of a software product, where users could optimise their exercise regimes by entering a few details of their individual physiology. The model is based on earlier work by the same team, which found that a component of muscle called titin is responsible for generating the chemical signals which affect muscle growth. The results, reported in the Biophysical Journal, suggest that there is an optimal weight at which to do resistance training for each person and each muscle growth target. Muscles can only be near their maximal load for a very short time, and it is the load integrated over time which activates the cell signalling pathway that leads to synthesis of new muscle proteins. But below a certain value, the load is insufficient to cause much signalling, and exercise time would have to increase exponentially to compensate. The value of this critical load is likely to depend on the particular physiology of the individual. We all know that exercise builds muscle. Or do we? "Surprisingly, not very much is known about why or how exercise builds muscles: there's a lot of anecdotal knowledge and acquired wisdom, but very little in the way of hard or proven data," said Professor Eugene Terentjev from Cambridge's Cavendish Laboratory, one of the paper's authors. Figure 1. The “textbook” hierarchy in the anatomy of skeletal muscle. The overall muscle is characterized by its cross-sectional area (CSA), which contains a certain number (Nc) of muscle fibers (the muscle cells with multiple nuclei or multinucleate myocytes). A given muscle has a nearly fixed number of myocytes: between Nc ≈ 1000 for the tensor tympani and Nc > 1,000,000 for large muscles (gastrocnemius, temporalis, etc. Credit: DOI: 10.1016/j.bpj.2021.07.023 When exercising, the higher the load, the more repetitions or the greater the frequency, then the greater the increase in muscle size. However, even when looking at the whole muscle, why or how much this happens isn't known. The answers to both questions get even trickier as the focus goes down to a single muscle or its individual fibres. Muscles are made up of individual filaments, which are only 2 micrometres long and less than a micrometre across, smaller than the size of the muscle cell. "Because of this, part of the explanation for muscle growth must be at the molecular scale," said co-author Neil Ibata. "The interactions between the main structural molecules in muscle were only pieced together around 50 years ago. How the smaller, accessory proteins fit into the picture is still not fully clear." This is because the data is very difficult to obtain: people differ greatly in their physiology and behaviour, making it almost impossible to conduct a controlled experiment on muscle size changes in a real person. "You can extract muscle cells and look at those individually, but that then ignores other problems like oxygen and glucose levels during exercise," said Terentjev. "It's very hard to look at it all together." Terentjev and his colleagues started looking at the mechanisms of mechanosensing -- the ability of cells to sense mechanical cues in their environment -- several years ago. 
The research was noticed by the English Institute of Sport, who were interested in whether it might relate to their observations in muscle rehabilitation. Together, they found that muscle hyper/atrophy was directly linked to the Cambridge work.

In 2018, the Cambridge researchers started a project on how the proteins in muscle filaments change under force. They found that the main muscle constituents, actin and myosin, lack binding sites for signalling molecules, so it had to be the third-most abundant muscle component -- titin -- that was responsible for signalling the changes in applied force. Whenever part of a molecule is under tension for a sufficiently long time, it toggles into a different state, exposing a previously hidden region. If this region can then bind to a small molecule involved in cell signalling, it activates that molecule, generating a chemical signal chain.

Titin is a giant protein, a large part of which is extended when a muscle is stretched, but a small part of the molecule is also under tension during muscle contraction. This part of titin contains the so-called titin kinase domain, which is the one that generates the chemical signal that affects muscle growth. The molecule will be more likely to open if it is under more force, or when kept under the same force for longer. Both conditions will increase the number of activated signalling molecules. These molecules then induce the synthesis of more messenger RNA, leading to production of new muscle proteins, and the cross-section of the muscle cell increases.

This realisation led to the current work, started by Ibata, himself a keen athlete. "I was excited to gain a better understanding of both the why and how of muscle growth," he said. "So much time and resources could be saved in avoiding low-productivity exercise regimes, and maximising athletes' potential with regular higher value sessions, given a specific volume that the athlete is capable of achieving."

Terentjev and Ibata set out to construct a mathematical model that could give quantitative predictions on muscle growth. They started with a simple model that kept track of titin molecules opening under force and starting the signalling cascade. They used microscopy data to determine the force-dependent probability that a titin kinase unit would open or close under force and activate a signalling molecule. They then made the model more complex by including additional information, such as metabolic energy exchange, as well as repetition length and recovery. The model was validated using past long-term studies on muscle hypertrophy.

"Our model offers a physiological basis for the idea that muscle growth mainly occurs at 70% of the maximum load, which is the idea behind resistance training," said Terentjev. "Below that, the opening rate of titin kinase drops precipitously and precludes mechanosensitive signalling from taking place. Above that, rapid exhaustion prevents a good outcome, which our model has quantitatively predicted."

"One of the challenges in preparing elite athletes is the common requirement for maximising adaptations while balancing associated trade-offs like energy costs," said Fionn MacPartlin, Senior Strength & Conditioning Coach at the English Institute of Sport. "This work gives us more insight into the potential mechanisms of how muscles sense and respond to load, which can help us more specifically design interventions to meet these goals."
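The threshold behaviour Terentjev describes can be illustrated with a minimal two-state, Bell-type rate model for the titin kinase domain. This is a sketch only, not the published Cambridge model: the zero-force rate, transition-state distance, load-to-force mapping and time under tension are invented numbers chosen purely to reproduce the qualitative picture of signalling falling away steeply below a critical load.

```python
# Minimal sketch (not the published model): a Bell-type, force-accelerated
# opening rate for the titin kinase domain, integrated over the time a set
# keeps the muscle under tension. All constants are illustrative assumptions.
import numpy as np

kBT = 4.1e-21   # thermal energy at ~300 K (joules)
k0 = 1e-4       # assumed opening rate at zero force (1/s)
dx = 1e-9       # assumed distance to the transition state (m)

def opening_rate(force_pN):
    """Bell-model opening rate (1/s) at a given force in piconewtons."""
    return k0 * np.exp((force_pN * 1e-12) * dx / kBT)

def expected_openings(load_fraction, time_under_tension_s=60.0, max_force_pN=30.0):
    """Expected signalling (opening) events per titin kinase during one set."""
    return opening_rate(load_fraction * max_force_pN) * time_under_tension_s

for load in (0.3, 0.5, 0.7, 0.9):
    print(f"load {load:.0%}: ~{expected_openings(load):.2f} expected openings per domain")
```

With these assumed numbers, the expected number of opening events per domain climbs from well below one at 30% of maximum load to several at 90%, mirroring the steep, load-dependent onset of mechanosensitive signalling described above.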
The model also addresses the problem of muscle atrophy, which occurs during long periods of bed rest or for astronauts in microgravity, showing both how long a muscle can afford to remain inactive before starting to deteriorate, and what the optimal recovery regime could be. Eventually, the researchers hope to produce a user-friendly software-based application that could give individualised exercise regimes for specific goals. The researchers also hope to improve their model by extending their analysis with detailed data for both men and women, as many exercise studies are heavily biased towards male athletes. Read the full article
Wandering black holes
Scientists have long theorized that supermassive black holes can wander through space—but catching them in the act has proven difficult.

Every massive galaxy is believed to host a supermassive black hole (SMBH) at its center. Its mass is correlated with the mass of the inner regions of its host (and also with some other properties), probably because the SMBH grows and evolves as the galaxy itself grows, through mergers with other galaxies and the infall of material from the intergalactic medium. When material makes its way to the galactic center and accretes onto the SMBH, it produces an active galactic nucleus (AGN); outflows or other feedback from the AGN then act disruptively to quench star formation in the galaxy. Modern cosmological simulations now self-consistently trace star formation and SMBH growth in galaxies from the early universe to the present day, confirming these ideas.

An image from the ROMULUS computer simulation showing an intermediate mass galaxy, its bright central region with its supermassive black hole, and the locations (and velocities) of "wandering" supermassive black holes (those not confined to the nucleus; the 10 kpc marker corresponds to a distance of about 31 thousand light-years). Simulations have studied the evolution and abundances of wandering supermassive black holes; in the early universe they contain most of the mass that is in black holes. Credit: Ricarte et al, 2021

The merger process naturally results in some SMBHs that are slightly offset from the center of the enlarged galaxy. The path to a single, combined SMBH is complex. Sometimes a binary SMBH is first formed, which then gradually merges into one. Detectable gravitational wave emission can be produced in this process. However, the merger can sometimes stall or be disrupted—understanding why is one of the key puzzles in SMBH evolution.

New cosmological simulations with the ROMULUS code predict that even after billions of years of evolution, some SMBHs do not join the nucleus but instead end up wandering through the galaxy. CfA astronomer Angelo Ricarte led a team of colleagues characterizing such wandering black holes. Using the ROMULUS simulations, the team finds that in today's universe (that is, about 13.7 billion years after the big bang) about ten percent of the mass in black holes might be in wanderers. At earlier times in the universe, two billion years after the big bang or younger, these wanderers appear to be even more significant and contain most of the mass in black holes. Indeed, the scientists find that in these early epochs the wanderers also produce most of the emission coming from the SMBH population. In a related paper, the astronomers explore the observational signatures of the wandering SMBH population.

The research was published in Monthly Notices of the Royal Astronomical Society. Read the full article
Researchers open a path toward quantum computing in real-world conditions
The quantum computing market is projected to reach $65 billion by 2030, a hot topic for investors and scientists alike because of its potential to solve incomprehensibly complex problems. Drug discovery is one example. To understand drug interactions, a pharmaceutical company might want to simulate the interaction of two molecules. The challenge is that each molecule is composed of a few hundred atoms, and scientists must model all the ways in which these atoms might array themselves when their respective molecules are introduced. The number of possible configurations is infinite—more than the number of atoms in the entire universe. Only a quantum computer can represent, much less solve, such an expansive, dynamic data problem. Mainstream use of quantum computing remains decades away, while research teams in universities and private industry across the globe work on different dimensions of the technology. Credit: CC0 Public Domain A research team led by Xu Yi, assistant professor of electrical and computer engineering at the University of Virginia School of Engineering and Applied Science, has carved a niche in the physics and applications of photonic devices, which detect and shape light for a wide range of uses including communications and computing. His research group has created a scalable quantum computing platform, which drastically reduces the number of devices needed to achieve quantum speed, on a photonic chip the size of a penny. Olivier Pfister, professor of quantum optics and quantum information at UVA, and Hansuek Lee, assistant professor at the Korean Advanced Institute of Science and Technology, contributed to this success. Nature Communications recently published the team's experimental results, A Squeezed Quantum Microcomb on a Chip. Two of Yi's group members, Zijiao Yang, a Ph.D. student in physics, and Mandana Jahanbozorgi, a Ph.D. student of electrical and computer engineering, are the paper's co-first authors. A grant from the National Science Foundation's Engineering Quantum Integrated Platforms for Quantum Communication program supports this research. Quantum computing promises an entirely new way of processing information. Your desktop or laptop computer processes information in long strings of bits. A bit can hold only one of two values: zero or one. Quantum computers process information in parallel, which means they don't have to wait for one sequence of information to be processed before they can compute more. Their unit of information is called a qubit, a hybrid that can be one and zero at the same time. A quantum mode, or qumode, spans the full spectrum of variables between one and zero—the values to the right of the decimal point. Researchers are working on different approaches to efficiently produce the enormous number of qumodes needed to achieve quantum speeds. Yi's photonics-based approach is attractive because a field of light is also full spectrum; each light wave in the spectrum has the potential to become a quantum unit. Yi hypothesized that by entangling fields of light, the light would achieve a quantum state. You are likely familiar with the optical fibers that deliver information through the internet. Within each optical fiber, lasers of many different colors are used in parallel, a phenomenon called multiplexing. Yi carried the multiplexing concept into the quantum realm. Micro is key to his team's success. UVA is a pioneer and a leader in the use of optical multiplexing to create a scalable quantum computing platform. 
In 2014, Pfister's group succeeded in generating more than 3,000 quantum modes in a bulk optical system. However, using this many quantum modes requires a large footprint to contain the thousands of mirrors, lenses and other components that would be needed to run an algorithm and perform other operations.

"The future of the field is integrated quantum optics," Pfister said. "Only by transferring quantum optics experiments from protected optics labs to field-compatible photonic chips will bona fide quantum technology be able to see the light of day. We are extremely fortunate to have been able to attract to UVA a world expert in quantum photonics such as Xu Yi, and I'm very excited by the perspectives these new results open to us."

Yi's group created a quantum source in an optical microresonator, a ring-shaped, millimeter-sized structure that envelops the photons and generates a microcomb, a device that efficiently converts photons from single to multiple wavelengths. Light circulates around the ring to build up optical power. This power buildup enhances chances for photons to interact, which produces quantum entanglement between fields of light in the microcomb.

Through multiplexing, Yi's team verified the generation of 40 qumodes from a single microresonator on a chip, proving that multiplexing of quantum modes can work in integrated photonic platforms. This is just the number they are able to measure. "We estimate that when we optimize the system, we can generate thousands of qumodes from a single device," Yi said.

Yi's multiplexing technique opens a path toward quantum computing for real-world conditions, where errors are inevitable. This is true even in classical computers. But quantum states are much more fragile than classical states. The number of qubits needed to compensate for errors could exceed one million, with a proportionate increase in the number of devices. Multiplexing reduces the number of devices needed by two or three orders of magnitude.

Yi's photonics-based system offers two additional advantages in the quantum computing quest. Quantum computing platforms that use superconducting electronic circuits require cooling to cryogenic temperatures. Because the photon has no mass, quantum computers with photonic integrated chips can run or sleep at room temperature. Additionally, Lee fabricated the microresonator on a silicon chip using standard lithography techniques. This is important because it implies the resonator or quantum source can be mass-produced.

"We are proud to push the frontiers of engineering in quantum computing and accelerate the transition from bulk optics to integrated photonics," Yi said. "We will continue to explore ways to integrate devices and circuits in a photonics-based quantum computing platform and optimize its performance." Read the full article
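The device-count claim above can be made concrete with some simple arithmetic. The figure of one million error-corrected quantum units comes from the article; the qumodes-per-device values are assumptions for illustration (the paper reports 40 measured qumodes per microresonator and an estimate of thousands after optimization).

```python
# Rough arithmetic on the hardware savings from frequency multiplexing.
# 1,000,000 is the order-of-magnitude error-correction figure quoted in the
# article; the qumodes-per-device values are illustrative assumptions.
required_modes = 1_000_000

for qumodes_per_device in (1, 40, 1000):
    devices = -(-required_modes // qumodes_per_device)   # ceiling division
    print(f"{qumodes_per_device:>5} qumodes per device -> ~{devices:,} devices")
```

Packing 40 to a few thousand qumodes into each on-chip resonator is what shrinks the hardware count by the "two or three orders of magnitude" mentioned above.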
Astronomers find a 'break' in one of the Milky Way's spiral arms
The newly discovered feature offers insight into the large-scale structure of our galaxy, which is difficult to study from Earth's position inside it.

Scientists have spotted a previously unrecognized feature of our Milky Way galaxy: A contingent of young stars and star-forming gas clouds is sticking out of one of the Milky Way's spiral arms like a splinter poking out from a plank of wood. Stretching some 3,000 light-years, this is the first major structure identified with an orientation so dramatically different than the arm's.

Astronomers have a rough idea of the size and shape of the Milky Way's arms, but much remains unknown: They can't see the full structure of our home galaxy because Earth is inside it. It's akin to standing in the middle of Times Square and trying to draw a map of the island of Manhattan. Could you measure distances precisely enough to know if two buildings were on the same block or a few streets apart? And how could you hope to see all the way to the tip of the island with so many things in your way?

Shown here (from left) are the Eagle, Omega, Trifid, and Lagoon Nebulae, imaged by NASA's infrared Spitzer Space Telescope. These nebulae are part of a structure within the Milky Way's Sagittarius Arm that is poking out from the arm at a dramatic angle. Credit: NASA/JPL-Caltech

To learn more, the authors of the new study focused on a nearby portion of one of the galaxy's arms, called the Sagittarius Arm. Using NASA's Spitzer Space Telescope prior to its retirement in January 2020, they sought out newborn stars, nestled in the gas and dust clouds (called nebulae) where they form. Spitzer detects infrared light that can penetrate those clouds, while visible light (the kind human eyes can see) is blocked. Young stars and nebulae are thought to align closely with the shape of the arms they reside in.

To get a 3D view of the arm segment, the scientists used the latest data release from the ESA (European Space Agency) Gaia mission to measure the precise distances to the stars. The combined data revealed that the long, thin structure associated with the Sagittarius Arm is made of young stars moving at nearly the same velocity and in the same direction through space.

"A key property of spiral arms is how tightly they wind around a galaxy," said Michael Kuhn, an astrophysicist at Caltech and lead author of the new paper. This characteristic is measured by the arm's pitch angle. A circle has a pitch angle of 0 degrees, and as the spiral becomes more open, the pitch angle increases. "Most models of the Milky Way suggest that the Sagittarius Arm forms a spiral that has a pitch angle of about 12 degrees, but the structure we examined really stands out at an angle of nearly 60 degrees."

Similar structures – sometimes called spurs or feathers – are commonly found jutting off the arms of other spiral galaxies. For decades scientists have wondered whether our Milky Way's spiral arms are also dotted with these structures or if they are relatively smooth.

A contingent of stars and star-forming clouds was found jutting out from the Milky Way's Sagittarius Arm. The inset shows the size of the structure and distance from the Sun. Credit: NASA

Measuring the Milky Way

The newly discovered feature contains four nebulae known for their breathtaking beauty: the Eagle Nebula (which contains the Pillars of Creation), the Omega Nebula, the Trifid Nebula, and the Lagoon Nebula.
In the 1950s, a team of astronomers made rough distance measurements to some of the stars in these nebulae and were able to infer the existence of the Sagittarius Arm. Their work provided some of the first evidence of our galaxy's spiral structure.

"Distances are among the most difficult things to measure in astronomy," said co-author Alberto Krone-Martins, an astrophysicist and lecturer in informatics at the University of California, Irvine and a member of the Gaia Data Processing and Analysis Consortium (DPAC). "It is only the recent, direct distance measurements from Gaia that make the geometry of this new structure so apparent."

In the new study, researchers also relied on a catalog of more than a hundred thousand newborn stars discovered by Spitzer in a survey of the galaxy called the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE). "When we put the Gaia and Spitzer data together and finally see this detailed, three-dimensional map, we can see that there's quite a bit of complexity in this region that just hasn't been apparent before," said Kuhn.

Astronomers don't yet fully understand what causes spiral arms to form in galaxies like ours. Even though we can't see the Milky Way's full structure, the ability to measure the motion of individual stars is useful for understanding this phenomenon: The stars in the newly discovered structure likely formed around the same time, in the same general area, and were uniquely influenced by the forces acting within the galaxy, including gravity and shear due to the galaxy's rotation.

"Ultimately, this is a reminder that there are many uncertainties about the large-scale structure of the Milky Way, and we need to look at the details if we want to understand that bigger picture," said one of the paper's co-authors, Robert Benjamin, an astrophysicist at the University of Wisconsin-Whitewater and a principal investigator on the GLIMPSE survey. "This structure is a small piece of the Milky Way, but it could tell us something significant about the Galaxy as a whole."

The study was published in Astronomy & Astrophysics. Read the full article
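The pitch angle Kuhn describes has a simple geometric reading: for a logarithmic spiral with constant pitch angle ψ, the radius grows as r(θ) = r0 · exp(θ · tan ψ). The 12-degree and 60-degree values come from the article; the starting radius and the quarter-turn step in the sketch below are arbitrary choices made only to show how much more "open" the newly found structure is than the arm that hosts it.

```python
# Illustrative only: radius growth of a logarithmic spiral for different
# pitch angles. r0 and the quarter-turn step are arbitrary; 12 and 60
# degrees are the values quoted in the article.
import math

def radius_after_turn(pitch_deg, turn_fraction=0.25, r0_kpc=5.0):
    """Radius (kpc) after sweeping a given fraction of a full turn."""
    theta = 2 * math.pi * turn_fraction
    return r0_kpc * math.exp(theta * math.tan(math.radians(pitch_deg)))

for pitch in (0, 12, 60):
    print(f"pitch {pitch:>2} deg: radius grows from 5.0 to "
          f"{radius_after_turn(pitch):.1f} kpc over a quarter turn")
```

A 0-degree pitch stays on a circle, a 12-degree arm opens gently, and a 60-degree feature shoots outward almost immediately, which is why the splinter stands out so clearly against the arm.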
The secret of the Stradivari violin confirmed
New research co-authored by a Texas A&M University scientist has confirmed that renowned violin maker Antonio Stradivari and others treated their instruments with chemicals that produced their unique sound, and several of these chemicals have been identified for the first time.

Joseph Nagyvary, professor emeritus of biochemistry at Texas A&M, first proposed the theory that chemicals used in making the violins—not so much the skill of making the instrument itself—were the reason Stradivari and others, such as Guarneri del Gesu, made instruments whose sound has not been equaled in over 200 years. An international team led by Hwan-Ching Tai, professor of chemistry at National Taiwan University, has had their findings published in Angewandte Chemie International Edition.

About 40 years ago at Texas A&M, Nagyvary was the first to prove a theory that he had spent years researching: that a primary reason for the pristine sound, beyond the fine craftsmanship, was the chemicals Stradivari and others used to treat their instruments due to a worm infestation at the time. "All of my research over many years has been based on the assumption that the wood of the great masters underwent an aggressive chemical treatment, and this had a direct role in creating the great sound of the Stradivari and the Guarneri," Nagyvary said.

Joseph Nagyvary holds a violin (left) and a viola with poplar wood fingerboards. Credit: Joseph Nagyvary

His findings were verified in a review by the American Chemical Society, the world's largest scientific organization. The current findings of the research team show that borax, zinc, copper and alum—along with lime water—were used to treat the wood used in the instruments.

"Borax has a long history as a preservative, going back to the ancient Egyptians, who used it in mummification and later as an insecticide," Nagyvary said. "The presence of these chemicals all points to collaboration between the violin makers and the local drugstore and druggist at the time. Both Stradivari and Guarneri would have wanted to treat their violins to prevent worms from eating away the wood because worm infestations were very widespread at that time."

He said that each violin maker probably used his own home-grown methods when treating the wood. "This new study reveals that Stradivari and Guarneri had their own individual proprietary method of wood processing, to which they could have attributed a considerable significance," he said. "They could have come to realize that the special salts they used for impregnation of the wood also imparted to it some beneficial mechanical strength and acoustical advantages. These methods were kept secret. There were no patents in those times. How the wood was manipulated with chemicals was impossible to guess by the visual inspection of the finished product."

He said that the varnish recipes were not secret because the varnish itself is not a critical determinant of tone quality. In contrast, the process of how the fresh spruce planks were treated and processed with a variety of water-based chemical treatments is critical for the sound of the finished violin. Such knowledge was needed to gain a "competitive advantage" over other instrument makers, he said. Nagyvary added that the team found the chemicals all over and inside the wood, not just on its surface, and that this directly affected the sound quality of the instruments.
Antonio Stradivari (1644–1737) made about 1,200 violins in his lifetime and sold them only to the very rich, including royalty. Today, there are about 600 Stradivari violins remaining. A lesser-known contemporary of Stradivari, Guarneri del Gesu, had trouble selling his work, but his instruments are now considered equal in quality and price to Stradivari violins. "Their violins have been unmatched in sound and quality for 220 years," Nagyvary said, noting that a Stradivari violin today can be valued at $10 million, and a Guarneri can be worth even more.

He said that further research is needed to clarify other details of how the chemicals and wood produced pristine tonal quality. "First, one needs several dozens of samples from not only Stradivari and Guarneri, but also from other makers of the Golden Period (1660-1750) of Cremona, Italy," he said. "There will have to be better cooperation between the master restorers of antique musical instruments, the best makers of our time, and the scientists who are performing the experiments often pro bono in their free time."

Nagyvary has been involved with violin research much of his 87 years. He first learned to play in Switzerland on an instrument that once belonged to Albert Einstein. Read the full article
The Arctic Ocean's deep past provides clues to its imminent future
As the North Pole, the Arctic Ocean, and the surrounding Arctic land warm rapidly, scientists are racing to understand the warming's effects on Arctic ecosystems. With shrinking sea ice, more light reaches the surface of the Arctic Ocean. Some have predicted that this will lead to more plankton, which in turn would support fish and other animals. Not so fast, says a team of scientists led by Princeton University and the Max Planck Institute for Chemistry. They point to nitrogen, a vital nutrient. The researchers used fossilized plankton to study the history of sources and supply rates of nitrogen to the western and central open Arctic Ocean. Their work, detailed in a paper in the current issue of the journal Nature Geoscience, suggests that under a global warming regime, these open Arctic waters will experience more intense nitrogen limitation, likely preventing a rise in productivity. "Looking at the Arctic Ocean from space, it's difficult to see water at all, as much of the Arctic Ocean is covered by a layer of sea ice," said lead author Jesse Farmer, a postdoctoral research associate in the Department of Geosciences at Princeton University who is also a visiting postdoctoral fellow at the Max Planck Institute for Chemistry in Mainz, Germany. This sea ice naturally expands during winters and contracts during summers. In recent decades, however, global warming has caused a rapid decline in summer sea ice coverage, with summer ice cover now roughly half that of 1979. Global climate change is warming the Arctic Ocean and shrinking sea ice. Here, the blue-white ice cap shows the coverage of sea ice at its smallest extent in summer 2020, and the yellow line shows the typical Arctic sea ice minimum extent between 1981 and 2010. Some have proposed that the newly exposed sea surface will lead to a plankton population boom and a burgeoning ecosystem in the open Arctic Ocean, but a team of Princeton and Max Planck Institute for Chemistry scientists say that’s not likely. They have examined the history and supply rate of nitrogen, a key nutrient. Their recent work finds that stratification of the open Arctic waters, especially in the areas fed by the Pacific Ocean via the Bering Strait, will prevent surface plankton from receiving enough nitrogen to grow abundantly. Credit: Jesse Farmer, Princeton University; modified from Rebecca Lindsey and Michon Scott, “Climate change: Arctic sea ice,” NOAA Climate.gov As sea ice melts, photosynthesizing plankton that form the base of Arctic food webs should benefit from the greater light availability. "But there's a catch," said contributing author Julie Granger, an associate professor of marine sciences at the University of Connecticut. "These plankton also need nutrients to grow, and nutrients are only abundant deeper in the Arctic Ocean, just beyond the reach of the plankton." Whether plankton can acquire these nutrients depends on how strictly the upper ocean is "stratified," or separated into layers. The upper 200 meters (660 feet) of the ocean consists of distinct layers of water with different densities, determined by their temperature and saltiness. These white lumps are fossilized foraminifera from an Arctic Ocean sediment core, magnified 30 times. The researchers used organic material inside these “forams” — plankton that grew in surface waters, then died and sank to the sea floor — to measure the isotopic composition of nitrogen. 
Credit: Jesse Farmer, Princeton University

"When the upper ocean is strongly stratified, with very light water floating on top of dense deep water, the supply of nutrients to the sunlit surface is slow," said Farmer.

New research led by scientists from Princeton University shows how the supply of nitrogen to the Arctic has changed since the last ice age, which reveals the history of Arctic Ocean stratification. Using sediment cores from the western and central Arctic Ocean, the researchers measured the isotopic composition of organic nitrogen trapped in the limestone fossils of foraminifera (plankton that grew in surface waters, then died and sank to the sea floor). Their measurements reveal how the proportions of Atlantic- and Pacific-derived nitrogen changed over time, while also tracking changes in the degree of nitrogen limitation of plankton at the surface. Ona Underwood of the Class of 2021 was a key member of the research team, analyzing western Arctic Ocean sediment cores for her junior project.

Where the oceans meet: Pacific waters float above saltier, denser Atlantic waters

The Arctic Ocean is the meeting place of two great oceans: the Pacific and the Atlantic. In the western Arctic, Pacific Ocean waters flow northward across the shallow Bering Strait that separates Alaska from Siberia. Arriving in the Arctic Ocean, the relatively fresh Pacific water flows over saltier water from the Atlantic. As a result, the upper water column of the western Arctic is dominated by Pacific-sourced nitrogen and is strongly stratified.

However, this was not always the case. "During the last ice age, when the growth of ice sheets lowered global sea level, the Bering Strait didn't exist," said Daniel Sigman, Princeton's Dusenbury Professor of Geological and Geophysical Sciences and one of Farmer's research mentors. At that time, the Bering Strait was replaced by the Bering Land Bridge, a land connection between Asia and North America that allowed for the migration of humans into the Americas. Without the Bering Strait, the Arctic would only have Atlantic water, and the nitrogen data confirm this.

Study co-author Julie Granger sampled water from the Arctic Ocean aboard the US Coast Guard icebreaker Healy. Credit: Julie Granger, University of Connecticut

When the ice age ended 11,500 years ago, as ice sheets melted and sea level rose, the data show the sudden appearance of Pacific nitrogen in the open western Arctic basin, dramatic evidence of the opening of the Bering Strait. "We had expected to see this signal in the data, but not so clearly!" Sigman said.

This was just the first of the surprises. Analyzing the data, Farmer also realized that, prior to the opening of the Bering Strait, the Arctic had not been strongly stratified as it is today. Only with the opening of the Bering Strait did the western Arctic become strongly stratified, as reflected by the onset of nitrogen limitation of plankton in the surface waters.

Heading eastward away from the Bering Strait, the Pacific-sourced water is diluted away, so that the modern central and eastern Arctic are dominated by Atlantic water and relatively weak stratification. Here, the researchers found that nitrogen limitation and density stratification varied with climate. As in the western Arctic, stratification was weak during the last ice age, when climate was colder.
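The source attribution described above is, in essence, a two-endmember mixing problem: if Atlantic-derived and Pacific-derived nitrogen carry different isotopic signatures, a measured foraminifera-bound value implies a mixing fraction. The endmember values in the sketch below are placeholders, not the calibrated values used in the study.

```python
# Two-endmember mixing sketch for nitrogen source attribution.
# The delta-15N endmember values are placeholder assumptions, not the
# values calibrated by Farmer and colleagues.
D15N_ATLANTIC = 5.0   # assumed Atlantic-derived endmember (per mil)
D15N_PACIFIC = 7.5    # assumed Pacific-derived endmember (per mil)

def pacific_fraction(measured_d15n):
    """Fraction of Pacific-derived nitrogen implied by linear mixing."""
    f = (measured_d15n - D15N_ATLANTIC) / (D15N_PACIFIC - D15N_ATLANTIC)
    return min(1.0, max(0.0, f))   # clamp to the physically meaningful range

for sample in (5.2, 6.3, 7.4):
    print(f"measured {sample:.1f} per mil -> ~{pacific_fraction(sample):.0%} Pacific-derived nitrogen")
```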
After the ice age, central Arctic stratification strengthened, reaching a peak between about 10,000 and 6,000 years ago, a period of naturally warmer Arctic summer temperatures called the "Holocene Thermal Maximum." Since that time, central Arctic stratification has weakened, allowing enough deep nitrogen to reach surface waters to exceed the requirements of plankton. Global warming is quickly returning the Arctic to the climate of the Holocene Thermal Maximum. As this warming continues, some scientists have predicted that reduced ice cover would enhance the productivity of Arctic plankton by increasing the amount of sunlight reaching the ocean. The new historical information acquired by Farmer and his colleagues suggests that such a change is unlikely for the open basin waters of the western and central Arctic. The western Arctic will remain strongly stratified due to persistent inflow of Pacific water through the Bering Strait, while the warming will strengthen stratification in the central Arctic. In both of these open ocean regions, slow nitrogen supply is likely to limit plankton productivity, the researchers concluded. "A rise in the productivity of the open Arctic basin would likely have been seen as a benefit, for example, increasing fisheries," said Farmer. "But given our data, a rise in open Arctic productivity seems unlikely. The best hope for a future rise in Arctic productivity is probably in the Arctic's coastal waters." Read the full article
Recordings of the magnetic field from 9,000 years ago teach us about the magnetic field today
New research has uncovered findings regarding the magnetic field that prevailed in the Middle East between approximately 10,000 and 8,000 years ago. Researchers examined pottery and burnt flints from archaeological sites in Jordan, on which the magnetic field during that time period was recorded. International research by Tel Aviv University, the Istituto Nazionale di Geofisica e Vulcanologia, Rome, and the University of California San Diego uncovered findings regarding the magnetic field that prevailed in the Middle East between approximately 10,000 and 8,000 years ago. Researchers examined pottery and burnt flints from archaeological sites in Jordan, on which the magnetic field during that time period was recorded. Information about the magnetic field during prehistoric times can affect our understanding of the magnetic field today, which has been showing a weakening trend that has been cause for concern among climate and environmental researchers. The research was conducted under the leadership of Prof. Erez Ben-Yosef of the Jacob M. Alkow Department of Archaeology and Ancient Near Eastern Cultures at Tel Aviv University and Prof. Lisa Tauxe, head of the Paleomagnetic Laboratory at the Scripps Institution of Oceanography, in collaboration with other researchers from the University of California at San Diego, Rome and Jordan. The article was published in the journal PNAS. Excavations—Tel Tifdan/Wadi Fidan. Credit: Thomas E. Levy Prof. Ben-Yosef explains, "Albert Einstein characterized the planet's magnetic field as one of the five greatest mysteries of modern physics. As of now, we know a number of basic facts about it: The magnetic field is generated by processes that take place below a depth of approximately 3,000 km beneath the surface of the planet (for the sake of comparison, the deepest human drilling has reached a depth of only 20 km); it protects the planet from the continued bombardment by cosmic radiation and thus allows life as we know it to exist; it is volatile and its strength and direction are constantly shifting, and it is connected to various phenomena in the atmosphere and the planet's ecological system, including -- possibly -- having a certain impact on climate. Nevertheless, the magnetic field's essence and origins have remained largely unresolved. In our research, we sought to open a peephole into this great riddle." The researchers explain that instruments for measuring the strength of the Earth's magnetic field were first invented only approximately 200 years ago. In order to examine the history of the field during earlier periods, science is helped by archaeological and geological materials that recorded the properties of the field when they were heated to high temperatures. The magnetic information remains "frozen" (forever or until another heating event) within tiny crystals of ferromagnetic minerals, from which it can be extracted using a series of experiments in the magnetics laboratory. Basalt from volcanic eruptions or ceramics fired in a kiln are frequent materials used for these types of experiments. The great advantage in using archaeological materials as opposed to geological is the time resolution: While in geology dating is on the scale of thousands years at best, in archaeology the artifacts and the magnetic field that they have recorded can be dated at a resolution of hundreds and sometimes even tens of years (and in specific cases, such as a known destruction event, even give an exact date). 
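The laboratory step of "extracting" the recorded field can be illustrated with the classic Thellier-style relation used in archaeointensity work: the ancient field strength is estimated from the ratio of natural remanent magnetization (NRM) lost to laboratory thermoremanent magnetization (TRM) gained in a known applied field. The sketch below shows that arithmetic with invented measurement values; it is a schematic of the general approach, not the specific protocol used in this study.

```python
# Schematic Thellier-style archaeointensity estimate (illustrative values).
# Ancient field = slope of NRM-lost vs TRM-gained ("Arai plot") x lab field.
import numpy as np

lab_field_uT = 40.0   # known field applied in the laboratory oven (microtesla)

# Paired demagnetization/remagnetization measurements at successive
# temperature steps (arbitrary units, invented for illustration)
nrm_lost   = np.array([0.0, 1.1, 2.3, 3.4, 4.6])
trm_gained = np.array([0.0, 0.8, 1.7, 2.5, 3.4])

# Best-fit slope through the origin of the Arai plot
slope = np.sum(nrm_lost * trm_gained) / np.sum(trm_gained ** 2)
ancient_field_uT = slope * lab_field_uT

print(f"estimated ancient field strength ~ {ancient_field_uT:.1f} microtesla")
```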
The obvious disadvantage of archaeology is the young age of the relevant artifacts: Ceramics, which have been used for this purpose up until now, were only invented 8,500 years ago. The current study is based on materials from four archaeological sites in Wadi Feinan (Jordan), which have been dated (using carbon-14) to the Neolithic period -- approximately 10,000 to 8,000 years ago -- some of which predate the invention of ceramics. Researchers examined the magnetic field that was recorded in 129 items found in these excavations, and this time, burnt flint tools were added to the ceramic shards. Prof. Ben-Yosef: "This is the first time that burnt flints from prehistoric sites are being used to reconstruct the magnetic field from their time period. About a year ago, groundbreaking research at the Hebrew University was published, showing the feasibility of working with such materials, and we took that one step forward, extracting geomagnetic information from tightly dated burned flint. Working with this material extends the research possibilities tens of thousands of years back, as humans used flint tools for a very long period of time prior to the invention of ceramics. Additionally, after enough information is collected about the changes in the geomagnetic field over the course of time, we will be able to use it in order to date archaeological remains." An additional and important finding of this study is the strength of the magnetic field during the time period that was examined. The archaeological artifacts demonstrated that at a certain stage during the Neolithic period, the field became very weak (among the weakest values ever recorded for the last 10,000 years), but recovered and strengthened within a relatively short amount of time. According to Prof. Tauxe, this finding is significant for us today: "In our time, since measurements began less than 200 years ago, we have seen a continuous decrease in the field's strength. This fact gives rise to a concern that we could completely lose the magnetic field that protects us against cosmic radiation and therefore, is essential to the existence of life on Earth. The findings of our study can be reassuring: This has already happened in the past. Approximately 7,600 years ago, the strength of the magnetic field was even lower than today, but within approximately 600 years, it gained strength and again rose to high levels." The research was carried out with the support of the US-Israel Binational Science Foundation, which encourages academic collaborations between universities in Israel and in the US. The researchers note that in this case, the collaboration was particularly essential to the success of the study because it is based on a tight integration of methods from the fields of archaeology and geophysics, and the insights that were obtained are notably relevant to both of these disciplines. Read the full article
New salts raise the bar for lithium ion battery technology
At the Monash University School of Chemistry, scientists under the leadership of Professor Doug MacFarlane and Dr. Mega Kar working with local company Calix Ltd have come up with alternative solutions to this challenge with new chemistry. Lithium ion batteries are set to take a dominant role in electric vehicles and other applications in the near future—but the battery materials, currently in use, fall short in terms of safety and performance and are holding back the next generation of high-performance batteries. In particular, the development of the electrolyte poses a key challenge for higher power batteries suitable for energy storage and vehicle applications. "The lithium salt currently being used in lithium ion batteries is lithium hexafluorophosphate, which poses a fire and safety hazard as well as toxicity," said Professor MacFarlane. "In smaller portable devices, this risk can be partially mitigated. However, in a large battery pack, such as electric vehicle and outdoor grid scale energy storage systems, the potential hazard is much intensified. Higher voltage and power batteries are also on the drawing board but cannot use the hexafluorophosphate salt. " In research published in Advanced Energy Materials, the chemists describe a novel lithium salt which might overcome the challenges of electrolyte design and replace the hexafluorophosphate salt. "Our aim has been to develop safe fluoroborate salts, which are not affected even if we expose them to air," said lead study author Dr. Binayak Roy, also from Monash University School of Chemistry. Scientists hope to turn these new anions into thermally stable, non-flammable liquid salts, making them beneficial for batteries operating at high temperatures. "The main challenge with the new fluoroborate salt was to synthesize it with battery grade purity which we have been able to do by a recrystallisation process," he said. "When put in a lithium battery with lithium manganese oxide cathodes, the cell cycled for more than 1000 cycles, even after atmospheric exposure, an unimaginable feat compared to the hyper-sensitive hexafluorophosphate salt." According to Dr. Roy, when combined with a novel cathode material in a high voltage lithium battery, this electrolyte far outperformed the conventional salt. Moreover, the salt was found to be very stable on aluminum current collectors at higher voltages, as required for next generation batteries. The research is a result of a collaborative effort within the Australian Research Council (ARC) Training Centre for Future Energy Storage Technologies (www.storenergy.com.au). StorEnergy is a federally funded Industry Transformation Training Centre which aims to train and skill the next generation of workers within the Australia energy industry and promote industry-university collaboration. StorEnergy Director Professor Maria Forsyth from Deakin University, said: "This is a wonderful example of how industry—university collaborations supported through government research funding can support Australia's leadership in next generation safe battery technologies." The research was conducted in collaboration with Calix Ltd., a Victoria/NSW-based company that is producing high-quality manganese-based battery materials from Australian sourced minerals. The research will assist Calix to achieve its goal of large-scale fabrication of Australian-based Li-ion batteries, aiming for grid scale energy storage systems for roll out in Australia. Dr. 
Matt Boot-Handford, General Manager for R&D at Calix said: "Calix is developing a platform technology to produce high-performance, cost-competitive battery materials in Australia. We are working closely with our research partners at Monash and Deakin through StorEnergy to support the development of electrolyte systems that are compatible with Calix's electrode materials. The superior electrochemical performance and stability demonstrated by the Monash team's new electrolyte system paired with Calix's lithium manganese oxide electrode material is an exciting and important milestone that brings us one step closer to making batteries featuring Calix next-generation electrode materials a commercial reality. "In the near future we hope to turn these new anions into thermally stable, non-flammable liquid salts, making them beneficial for batteries operating at high temperatures," said Dr. Kar. "With the current climate conditions, designing such battery technologies with safety and stability will be important in implementing a sustainable grid-scale energy solution in Australia." Read the full article