
How high altitude changes the body’s metabolism


When mice are exposed to chronically low levels of oxygen, their metabolism is altered.

Compared to those who live at sea level, the 2 million people worldwide who live above an elevation of 4,500 meters (14,764 feet) — about the height of Mount Rainier, Mount Whitney, and many Colorado and Alaska peaks — have lower rates of metabolic diseases, such as diabetes, coronary artery disease, hypercholesterolemia, and obesity.

A team of scientists in Isha Jain’s lab showed how chronically low oxygen levels, such as those experienced at 4,500 meters of elevation, rewire how mice burn sugars and fats.

The work of U.S. National Science Foundation-supported researchers at Gladstone Institutes has shed new light on this phenomenon. 

The scientists showed that exposure to chronically low oxygen levels, such as those experienced at high elevations, rewired how mice burn sugars and fats. The results, published in the journal Cell Metabolism, help explain the metabolic differences of people who live at high altitudes and could lead to new treatments for metabolic disease.

“When an organism is exposed to chronically low levels of oxygen, different organs reshuffle their fuel sources and their energy-producing pathways,” says Isha Jain, senior author of the new study. “We hope these findings will help us identify metabolic switches that might benefit metabolism even outside of low-oxygen environments.”

At sea level, where a third of the world’s population lives, oxygen makes up about 21% of our air. But people who live above 4,500 meters, where oxygen makes up just 11% of the air, can adapt to the shortage of oxygen — known as hypoxia — and thrive.
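
As a rough check on the 11% figure quoted above, the lower air pressure at 4,500 meters can be folded into an "effective" sea-level oxygen fraction. The short sketch below uses the standard barometric formula with a textbook scale height of about 8,500 meters; both are approximations chosen for illustration, not values from the study.

```python
import math

# Back-of-the-envelope check: what sea-level oxygen fraction would deliver the
# same oxygen per breath as 21% oxygen at 4,500 m? Assumes an isothermal
# atmosphere with a ~8,500 m scale height (a textbook approximation).
SCALE_HEIGHT_M = 8500.0
O2_FRACTION_SEA_LEVEL = 0.21

def effective_o2_fraction(altitude_m: float) -> float:
    """Sea-level-equivalent oxygen fraction at the given altitude."""
    pressure_ratio = math.exp(-altitude_m / SCALE_HEIGHT_M)
    return O2_FRACTION_SEA_LEVEL * pressure_ratio

print(f"{effective_o2_fraction(4500):.1%}")  # ~12%, close to the 11% figure quoted above
```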

Researchers studying the impact of hypoxia have usually carried out their research in isolated cells or in cancerous tumors, which often lack oxygen. Jain’s group wanted a better look at how long-term hypoxia impacts organs throughout the body.

Jain and colleagues at Gladstone and the University of California, San Francisco, housed adult mice in pressure chambers containing 21%, 11% or 8% oxygen — all levels humans and mice can survive. Over three weeks, the researchers observed the animals’ behavior, monitored their temperature, carbon dioxide levels and blood glucose, and used positron emission tomography (PET) scans to study how different organs consumed nutrients and how metabolically active they were.

In the first days of hypoxia, the mice living in 11% or 8% oxygen moved around less, spending hours completely still. However, by the end of the third week, their movement patterns had returned to normal.

Similarly, carbon dioxide levels in their blood — which usually decrease when mice or humans breathe faster to try to get more oxygen — initially decreased but returned to normal levels by the end of the three weeks.

The animals’ metabolism, however, seemed more permanently altered by the hypoxia. For animals housed in the hypoxic cages, blood glucose levels and body weight both dropped, and neither returned to pre-hypoxic levels.

These metabolic changes mirror those in humans who live at high altitudes and are associated with a lower risk of diseases, including cardiovascular disease. Understanding how hypoxia contributes to them could lead to new drugs that mimic these beneficial effects.

Source: NSF



Laboratory Solar Flares Reveal Clues to Mechanism Behind Bursts of High-Energy Particles


Simulating solar flares on a scale the size of a banana, researchers at Caltech have parsed out the process by which these massive explosions blast potentially harmful energetic particles and X-rays into the cosmos.

NASA’s Solar Dynamics Observatory captured this image of a solar flare on Oct. 2, 2014. The solar flare is the bright flash of light at top. A burst of solar material erupting out into space can be seen just to the right of it. Image Credit: NASA/SDO

Corona loops are arches of plasma that protrude from the surface of the sun, aligned along magnetic field lines. The magnetic field lines act like highways for charged particles, guiding the motion of the electrons and ions that comprise plasma.

The loops, which may project 100,000 kilometers above the sun’s surface, can persist for minutes to hours. The loops usually grow and evolve slowly but sometimes can abruptly blast a tremendous amount of energy—billions of times stronger than the most powerful nuclear explosion on Earth—into space. This sudden blast of energy is called a solar flare.

A simulated corona loop in the Bellan Lab.

Some of the energy in the flare takes the form of charged particles and “hard X-rays,” which are high-energy electromagnetic waves like those used to image bones in a doctor’s office.

The Earth’s own magnetic field and atmosphere act as a shield that protects life on the surface from getting cooked by these torrents of energy, but the flares have been known to disrupt communications and power grids. They also pose an ongoing threat to spacecraft and astronauts in space.

While the fact that solar flares generate energetic particles and X-ray bursts has long been known, scientists are only starting to piece together the mechanism by which they do so.

Structural similarities between an actual solar flare (top) and one simulated in the Bellan lab (below). Credit: Bellan Lab

Researchers have two options for deciphering how and why the loops form and change. The first is to observe the sun and hope to capture the phenomenon in sufficiently fine detail to yield relevant information. The second is to simulate the loops in a lab. Caltech’s Paul Bellan, professor of applied physics, chose the latter.

In a lab on the first floor of the Thomas J. Watson, Sr., Laboratories of Applied Physics on Caltech’s campus, Bellan built a vacuum chamber with twin electrodes inside. To simulate the phenomenon, he charged a capacitor with enough energy to run the City of Pasadena for a few microseconds, then discharged it through the electrodes to create a miniature solar corona loop.

Each loop lasts about 10 microseconds and has a length of about 20 centimeters (cm) and a diameter of about 1 cm. But structurally, Bellan’s loops are identical to the real thing, offering him and his colleagues the opportunity to simulate and study them at will.

“Each experiment consumes about as much energy as it takes to run a 100-watt lightbulb for about a minute, and it takes just a couple minutes to charge the capacitor up,” says Bellan, the senior author of a new paper on solar flares published in Nature Astronomy.

Bellan captures each loop with a camera capable of taking 10 million frames per second, and he then studies the resulting images.
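
The figures quoted above can be sanity-checked with simple arithmetic: a 100-watt bulb running for a minute consumes about 6 kilojoules, and at 10 million frames per second a 10-microsecond loop yields on the order of 100 frames. The citywide power draw used below is an assumed order of magnitude for illustration only, not a number from the paper.

```python
# Rough sanity check of the numbers quoted above.
bulb_energy_j = 100 * 60              # 100 W for 60 s -> about 6 kJ per experiment
city_power_w = 5e8                    # assumed ~500 MW citywide demand (illustrative only)
run_time_us = bulb_energy_j / city_power_w * 1e6
print(f"Capacitor energy: {bulb_energy_j / 1000:.0f} kJ, enough to power a city "
      f"drawing {city_power_w / 1e6:.0f} MW for ~{run_time_us:.0f} microseconds")

frames_per_loop = 10e6 * 10e-6        # 10 Mfps camera x 10-microsecond loop lifetime
print(f"Frames captured per loop: ~{frames_per_loop:.0f}")
```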

Among the recent discoveries is that solar corona loops do not appear to be a single structure, but rather are composed of fractally braided strands akin to a large rope.

“If you dissect a piece of rope, you see that it’s made up of braids of individual strands,” says Yang Zhang, graduate student and lead author of the Nature Astronomy paper.

“Pull those individual strands apart, and you’ll see that they’re braids of even smaller strands, and so on. Plasma loops appear to work the same way.”

That structure, it turns out, is important to the generation of energetic particles and X-ray bursts associated with solar flares. Plasma is a strong electrical conductor—think of neon signs, which are filled with plasma and light up when electricity passes through.

However, when too much current tries to pass through a solar corona loop, the structure is compromised. The loop develops a kink—a corkscrew-shaped instability—and individual strands start to break. Each new broken strand then dumps strain onto the remaining ones.

“Like an elastic band stretched too tight, the loop gets longer and skinnier until the strands just snap,” says Seth Pree, postdoctoral scholar research associate in applied physics and materials science, and co-author of the Nature Astronomy paper.

Studying the process microsecond by microsecond, the team noted a negative voltage spike associated with an X-ray burst at the exact instant a strand broke. This voltage spike is akin to the pressure drop that develops at a constriction in a water pipe. The electric field from this voltage spike accelerates charged particles to extreme energies, and X-rays are then emitted when the energetic particles decelerate.
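
The acceleration step can be summarized with a single relation: a particle of charge q crossing a potential difference ΔV gains kinetic energy qΔV. The kilovolt-scale spike used below is an assumed, purely illustrative value, not a measurement reported by the team.

```latex
% Kinetic energy gained by a charge crossing the voltage spike.
% For an electron (charge e), an illustrative spike of tens of kilovolts
% (assumed here, not measured) gives energies of tens of keV:
E_{\mathrm{kin}} = e\,\Delta V, \qquad
\Delta V \sim 10^{4}\,\mathrm{V} \;\Rightarrow\; E_{\mathrm{kin}} \sim 10\,\mathrm{keV}
```

Electrons carrying energies in that range radiate hard X-rays as they decelerate, which is consistent with the bursts described above.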

In addition, Zhang combed through pictures of solar flares and was able to document a kink instability similar to the one created in the lab that was associated with a subsequent X-ray burst.

Next, the team plans to explore how separate plasma loops can merge and reorganize into different configurations. They are interested to learn if there are also energy burst events during this type of interaction.

Written by Robert Perkins



Immunotherapy substantially increases survival of people with lymphomatoid granulomatosis


Results from a clinical trial conducted by researchers at the National Institutes of Health (NIH) show that people with low-grade lymphomatoid granulomatosis treated with interferon alfa-2b, a type of immunotherapy, can live for decades after diagnosis.

Doctor performing a surgery – illustrative photo. Image credit: Jafar Ahmed via Unsplash, free license

Lymphomatoid granulomatosis is a rare precancerous condition triggered by Epstein-Barr virus infection. Left untreated, the disease can progress to a high-grade form, which has a poorer prognosis and can quickly turn into an aggressive and fatal B-cell lymphoma.

Treatment with interferon alfa-2b led to the disappearance of a large lesion (top image) in the lungs of a patient with lymphomatoid granulomatosis, as shown in this computed tomography chest scan. Image credit: Center for Cancer Research

In the phase 2 trial, led by researchers in the Center for Cancer Research at the National Cancer Institute (NCI), part of NIH, patients treated with interferon alfa-2b lived for a median of about 20 years.

By contrast, past studies reported a median survival of less than two years for people with lymphomatoid granulomatosis.

The findings suggest that immunotherapy can prevent the progression of low-grade disease to high-grade disease. The results were published in Lancet Haematology.

“We have shown in this rare disorder that using a novel immunotherapy-based approach for low-grade disease is effective and improves survival compared with historical treatments such as chemotherapy and corticosteroids,” said Christopher J. Melani, M.D., of NCI’s Center for Cancer Research, who co-led the study.

“I think the results of this study represent a significant contribution to determining the standard-of-care treatment for this rare disease.”

Lymphomatoid granulomatosis causes an overproduction of white blood cells known as B lymphocytes. Patients typically have lesions in the lungs, central nervous system, skin, liver, and kidneys.

Symptoms can include cough, shortness of breath, fever, weight loss, and fatigue. Chemotherapy is currently the standard treatment for people with high-grade disease, but there is no standard treatment for low-grade disease.

“Although lymphomatoid granulomatosis is uncommon, the effects of high-grade disease can be debilitating,” said Jeffrey Cohen, M.D., chief of the Laboratory of Infectious Diseases at the National Institute of Allergy and Infectious Diseases and a co-leader of the study.

“We need better ways to prevent the disease from progressing to this more severe state, such as interferon alfa-2b.”

NIH researchers have been studying lymphomatoid granulomatosis since the 1980s. In the early 1990s, Wyndham Wilson, M.D., Ph.D., also of NCI’s Center for Cancer Research, hypothesized that low-grade disease results from a defective immune response to the Epstein-Barr virus and could therefore be treated with immunotherapy, whereas high-grade disease requires chemotherapy to curb uncontrolled cell growth.

He and his colleagues treated four people with low-grade lymphomatoid granulomatosis with interferon alfa-2b over a 5-year period, and the treatment eradicated all signs of the disease in three of those patients, a result known as a complete remission.

That study laid the foundation for the phase 2 trial of interferon alfa-2b in lymphomatoid granulomatosis, which has taken 30 years to complete because of the rarity of the disease and the challenges of recruiting enough patients for the study.

“This really illustrates the unique ability of NIH to do a study like this that nobody else could do and no one else ever has done for this particular disease,” said Dr. Wilson, who co-led the study.

The trial included 67 people with lymphomatoid granulomatosis, 37 with low-grade disease and 30 with high-grade disease. In all cases, the participants had not yet been treated for the disease or their disease had not responded to or had returned after other treatments.

During their initial treatment, most of the patients with low-grade disease received subcutaneous injections of interferon alfa-2b three times a week in increasing doses for about one year. Most of the patients with high-grade disease were given six cycles of intravenous chemotherapy every three weeks.

Both groups improved, with the disease disappearing in 27 of 44 patients (61%) treated with interferon alfa-2b, and 8 of 17 patients (47%) treated with chemotherapy.

After their initial treatment, some patients subsequently received the other therapy, called crossover treatment.

Patients with low-grade disease that worsened after immunotherapy were given chemotherapy, whereas patients with high-grade disease that came back after chemotherapy were given interferon alfa-2b. Previous work showed that after high-grade disease is eliminated with chemotherapy, low-grade disease can re-emerge.

The crossover treatments were also effective, with the disease disappearing in 4 of 8 patients (50%) treated with interferon alfa-2b after chemotherapy and 7 of 15 patients (47%) treated with chemotherapy after interferon alfa-2b.  
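
The response rates quoted above follow directly from the raw counts reported in the article; a quick calculation reproduces them.

```python
# Complete-response rates recomputed from the counts reported above.
cohorts = {
    "interferon alfa-2b, initial":   (27, 44),
    "chemotherapy, initial":         (8, 17),
    "interferon alfa-2b, crossover": (4, 8),
    "chemotherapy, crossover":       (7, 15),
}
for name, (responders, total) in cohorts.items():
    print(f"{name}: {responders}/{total} = {responders / total:.0%}")
```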

Median overall survival was 20.6 years for patients treated initially with interferon alfa-2b and 19.8 years for patients who crossed over to receive interferon alfa-2b. Median overall survival was 12.1 years for patients treated initially with chemotherapy and not reached for those who crossed over to receive chemotherapy. 

The most common side effect of interferon alfa-2b treatment was low white blood cell count, and the most common side effects with chemotherapy were low white blood cell count and infection. Serious side effects occurred in only a quarter of the patients treated with interferon alfa-2b, compared with nearly two-thirds of patients treated with chemotherapy.  

Many newer immunotherapies, such as nivolumab, could potentially be used to treat low-grade lymphomatoid granulomatosis and other Epstein-Barr virus–associated disorders and may have fewer side effects.

“The trial of interferon alfa-2b established that immunotherapy improves survival in patients with low-grade lymphomatoid granulomatosis,” Dr. Melani said. “Now we can look into more novel immunotherapies that are easier to tolerate to see if they can improve on the efficacy of our current treatment.” 

Source: NIH



Study sheds light on the dark side of AI


To understand how to get artificial intelligence right, we need to know how it can go wrong, says a University of Alberta researcher.

Artificial intelligence – artistic concept. Image credit: Icons8 Team via Unsplash, free license

Artificial intelligence is touted as a panacea for almost every computational problem these days, from medical diagnostics to driverless cars to fraud prevention.

But when AI fails, it does so “quite spectacularly,” says Vern Glaser of the Alberta School of Business. In his recent study, “When Algorithms Rule, Values Can Wither,” Glaser explains how AI’s efficiency imperative often subsumes human values, and why the costs can be high.

“If you don’t actively try to think through the value implications, it’s going to end up creating bad outcomes,” he says.

When bots go bad

Glaser cites Microsoft’s Tay as one example of bad outcomes. When the chatbot was introduced on Twitter in 2016, it was taken offline within 24 hours after trolls taught it to spew racist language.

Then there was the “robodebt” scandal of 2015, when the Australian government used AI to identify overpayments of unemployment and disability benefits. But the algorithm presumed every discrepancy reflected an overpayment and automatically sent notification letters demanding repayment. If a recipient didn’t respond, the case was forwarded to a debt collector.

By 2019, the program identified over 734,000 overpayments worth two billion Australian dollars (C$1.8 billion).
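
The failure mode Glaser describes is easy to see when the logic is written down: a rule that treats any discrepancy as a confirmed overpayment and escalates automatically, with no human checkpoint. The sketch below is a hypothetical simplification of the process described above, not the actual government system.

```python
# Hypothetical simplification of the automated logic described above, NOT the
# real system: any discrepancy is presumed to be an overpayment, and the case
# escalates automatically with no human review anywhere in the loop.
def assess_case(benefit_paid: float, entitlement_estimate: float,
                recipient_responded: bool) -> str:
    discrepancy = benefit_paid - entitlement_estimate
    if discrepancy <= 0:
        return "no action"
    if not recipient_responded:
        return "refer to debt collector"   # automatic escalation
    return "send repayment demand"         # presumed overpayment, never reviewed

print(assess_case(benefit_paid=9000, entitlement_estimate=8500, recipient_responded=False))
```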

“The idea was that by eliminating human judgment, which is shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at much lower cost,” says Glaser.

But the human consequences were dire, he says. Parliamentary reviews found “a fundamental lack of procedural fairness” and called the program “incredibly disempowering to those people who had been affected, causing significant emotional trauma, stress and shame,” including at least two suicides.

While AI promises to bring enormous benefits to society, we are now also beginning to see its dark underbelly, says Glaser. In a recent Globe and Mail column, Lawrence Martin points out AI’s dystopian possibilities, including autonomous weapons that can fire without human supervision, cyberattacks, deepfakes and disinformation campaigns. Former Google CEO Eric Schmidt has warned that AI could quite easily be used to construct killer biological weapons.

Glaser roots his analysis in French philosopher Jacques Ellul’s notion of “technique,” offered in his 1954 book The Technological Society, by which the imperatives of efficiency and productivity determine every field of human activity.

“Ellul was very prescient,” says Glaser. “His argument is that when you’re going through this process of technique, you are inherently stripping away values and creating this mechanistic world where your values essentially get reduced to efficiency. 

“It doesn’t matter whether it’s AI or not. AI in many ways is perhaps only the ultimate example of it.”

A principled approach to AI

Glaser suggests adherence to three principles to guard against the “tyranny of technique” in AI. First, recognize that because algorithms are mathematical, they rely on “proxies,” or digital representations of real phenomena.

One way Facebook gauges friendship, for example, is by how many friends a user has, or by the number of likes received on posts from friends.

“Is that really a measure of friendship? It’s a measure of something, but whether it’s actually friendship is another matter,” says Glaser, adding that the intensity, nature, nuance and complexity of human relationships can easily be overlooked.

“When you’re digitizing phenomena, you’re essentially representing something as a number. And when you get this kind of operationalization, it’s easy to forget it’s a stripped-down version of whatever the broader concept is.”
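
Glaser's point about proxies can be made concrete with a toy example: a "friendship score" computed from counts is trivial to calculate, and everything it leaves out is exactly the intensity and nuance he mentions. The function below is a hypothetical illustration, not Facebook's actual metric.

```python
# Hypothetical "friendship" proxy: easy to compute, but it reduces a rich
# relationship to two counts, stripping away intensity, nature and nuance.
def friendship_score(mutual_friends: int, likes_on_posts: int) -> float:
    return 0.5 * mutual_friends + 0.5 * likes_on_posts

# Two very different relationships can score identically:
print(friendship_score(mutual_friends=40, likes_on_posts=0))   # distant acquaintances
print(friendship_score(mutual_friends=0, likes_on_posts=40))   # close friend who rarely clicks "like"
```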

For AI designers, Glaser recommends strategically inserting human interventions into algorithmic decision-making, and creating evaluative systems that account for multiple values.

“There’s a tendency when people implement algorithmic decision-making to do it once and then let it go,” he says, but AI that embodies human values requires vigilant and continuous oversight to prevent its ugly potential from emerging.

In other words, AI simply reflects who we are — at our best and worst. The latter could take over without a good, hard look in the mirror.

“We want to make sure we understand what’s going on, so the AI doesn’t manage us,” he says. “It’s important to keep the dark side in mind. If we can do that, it can be a force for social good.”

Source: University of Alberta



Twinkling stars fuel interstellar dust


Of the many kinds of stars, asymptotic giant branch (AGB) stars, usually slightly larger and older than our own sun, are known to produce interstellar dust. Dusty AGBs are particularly prominent dust producers, and the light they emit varies widely.

For the first time, a long-period survey has found that the variable intensity of dusty AGBs coincides with variations in the amount of dust these stars produce. As this dust can lead to the creation of planets, its study can shed light on our own origins.

Artist’s impression of an asymptotic giant branch (AGB) star. Image credit: Miyata, Tachibana, et al. CC-BY

You’ve probably heard of the James Webb Space Telescope (JWST), which has been in the news lately. It’s famous for being the largest and most sensitive space telescope designed to observe infrared (IR) light.

But long before the JWST took to the skies, two other IR space telescopes, AKARI and WISE, were surveying the cosmos. Both have since ended their initial missions, but they produced so much valuable data that astronomers are still making new discoveries with it.

The latest finding from that data, by doctoral student Kengo Tachibana from the University of Tokyo’s Institute of Astronomy and his team, could have implications for the study of the origins of life itself.

“We study stars, and IR light from them is a key source of information that helps us unlock their secrets,” said Tachibana.

“Until recently, most IR data was from very short-period surveys due to the lack of advanced dedicated platforms. But missions like AKARI and WISE have allowed us to take longer-period surveys of things. This means we can see how things might change over greater time periods, and what these changes might imply. Lately, we turned our attention to a certain class of star known as asymptotic giant branch stars, which are interesting because they are the main producers of interstellar dust.”

This interstellar dust is not the same stuff that accumulates on your floor when you forget to vacuum for a few days; it’s a name given to heavy elements that disperse from stars and lead to the formation of solid objects including planets.

Although it’s long been known that AGBs, especially dusty AGBs, are the main producers of dust, it’s unknown what the main drivers of dust production are and where we should be looking to find this out.

“Our latest study has pointed us in the right direction,” said Tachibana.

“Thanks to long-period IR observations, we have found that the light from dusty AGBs varies with periods longer than several hundred days. We also found that the spherical shells of dust produced by and then ejected by these stars have concentrations of dust that vary in step with the stars’ changes in luminosity. Of the 169 dusty AGBs surveyed, no matter their variability period, the dust concentrations coincided with these brightness changes. So, we’re certain these are connected.”
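
As an illustration of the kind of analysis a long-period survey enables, the sketch below generates a synthetic light curve with a several-hundred-day period and a dust signal that varies in step with it, then measures their correlation. The data are simulated for illustration only; the study's sample of 169 stars is not reproduced here.

```python
import numpy as np

# Illustrative only: synthetic data standing in for a long-period IR survey.
rng = np.random.default_rng(0)
t = np.arange(0, 3000, 30)                   # observation epochs in days
period = 500.0                               # "longer than several hundred days"
brightness = 1.0 + 0.3 * np.sin(2 * np.pi * t / period) + 0.05 * rng.normal(size=t.size)
dust = 1.0 + 0.2 * np.sin(2 * np.pi * t / period) + 0.05 * rng.normal(size=t.size)

# A strong positive correlation is what "varying in step" looks like numerically.
r = np.corrcoef(brightness, dust)[0, 1]
print(f"Correlation between brightness and dust concentration: {r:.2f}")
```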

Artist’s impression of how asymptotic giant branch stars exert pressure on solid matter. Image credit: Miyata, Tachibana, et al. CC-BY

Finding a connection between the concentration of dust and the variability of stars’ brightness is just the first step in this investigation, however.

Now the team wishes to explore the possible physical mechanisms behind dust production. For this, they intend to continuously monitor various AGB stars for many years. The University of Tokyo is nearing completion of a large ground-based telescope project, the University of Tokyo Atacama Observatory, in Chile, dedicated to making infrared observations.

Source: University of Tokyo



New breakthrough: 3D printed ferroelectric materials eliminate harmful bacteria, including E. coli


New fabrication technique gives materials antimicrobial properties, with scope to improve safety of implants including heart valves and stents.

A new way of using 3D printing to create infection-fighting materials for use as medical implants has been revealed in a new research paper.

An impression of bacteria being destroyed by the new ferroelectric composite material. Image credit: University of Bath

Engineers at the University of Bath, working with colleagues at the University of Ulster, have for the first time successfully created a new kind of ferroelectric composite material with antimicrobial properties using a novel multi-material 3D printing process.

They say the use of electrically responsive ferroelectric materials gives the implants infection-fighting properties, making them ideal for biomedical applications such as heart valves, stents, and bone implants, reducing the risk of infection for patients.

While commonplace, all biomedical implants pose some level of risk as materials can carry surface bio-contaminants that can lead to infection. Reducing this risk could be beneficial both to patients in the form of improved outcomes, and to healthcare providers thanks to reduced costs incurred by ongoing treatment.

The team has previously used this 3D printing technique for the fabrication of three-dimensional scaffolds for bone tissue engineering.

Dr Hamideh Khanbareh, a lecturer in materials and structures in Bath’s Department of Mechanical Engineering, is lead author of the research. She says the development has scope for wide-ranging applications.

She said: “Biomedical implants that can fight infection or dangerous bacteria such as E. coli could present significant benefits to patients and to healthcare providers.

“Our research indicates that the ferroelectric composite materials we have created have a great potential as antimicrobial materials and surfaces. This is a potentially game-changing development that we would be keen to develop further through collaboration with medical researchers or healthcare providers.”

The innovation comes thanks to ferroelectricity, a characteristic of certain polar materials that generate electrical surface charge in response to a change in mechanical energy or temperature. In ferroelectric films and implants, this electrical charge leads to the formation of free radicals known as reactive oxygen species (ROS), which selectively eradicate bacteria.

This comes about through the micro-electrolysis of water molecules on a surface of polarised ferroelectric composite material.

The composite material used to harness this phenomenon is made by embedding ferroelectric barium calcium zirconate titanate (BCZT) micro-particles in polycaprolactone (PCL), a biodegradable polymer widely used in biomedical applications. The mixture of the ferroelectric particles and polymer is then fed into a 3D bioprinter to create a specific porous ‘scaffold’ shape designed to have a high surface area to promote ROS formation.

Testing showed that even when contaminated with high concentrations of aggressive E. coli bacteria, the composite can completely eradicate the bacterial cells without external intervention, killing 70% within just 15 minutes.
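
Taking the 70%-in-15-minutes figure at face value, one can estimate the implied kill rate. The calculation below assumes simple first-order (exponential) kill kinetics, which is an assumption made here for illustration; the study reports only the headline figures.

```python
import math

# Illustrative estimate only: assumes first-order kill kinetics, which the
# study does not state; only the 70% killed in 15 minutes figure is reported.
surviving_fraction = 0.30            # 70% of E. coli killed
elapsed_min = 15.0
k = -math.log(surviving_fraction) / elapsed_min
print(f"Implied kill rate: {k:.3f} per minute")
print(f"Time to 99.9% kill under this assumption: {math.log(1000) / k:.0f} minutes")
```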

Source: University of Bath



Missing crew – one more reason why the only Russian aircraft carrier will not sail anytime soon


The official date is set: the Russian aircraft carrier “Admiral Kuznetsov” should be fully repaired by 2024. But even if the technical work stays on schedule, assembling and training a new crew for this ship will certainly not happen quickly.

Russian aircraft carrier Admiral Kuznetsov – photo from 12 December 2011. Image credit: UK MOD via Flickr, CC BY-NC 2.0

“Admiral Kuznetsov” was designed to have 1,900 technical personnel onboard. The new crew should consist of 1,500 people, and such a reduction is explained by the fact that many of the internal systems will be automated and therefore will require less maintenance.

Still, a 1,500-strong crew is not a simple thing to prepare, because most of these servicemen will need to undergo very specialized and lengthy training before the vessel is ready to perform even basic tasks. Also, due to the ongoing Russian invasion of Ukraine, finding the necessary crew members will not be a simple task.

The repair and overhaul of “Admiral Kuznetsov” have been underway for 6 years. After the ship was stationed at the repair facility, the former crew was disbanded.

Of course, some operations onboard an aircraft carrier are broadly similar to those on other types of military ships. Nevertheless, every aircraft carrier has its own specifics, including the fact that the crew must be 100% capable of maintaining aircraft and aviation equipment.

“Even if there are 1500 suitable sailors, their training and familiarization [with the aircraft carrier] will take months, since the Kuznetsov is the largest surface ship in Russia. Even the old team would find it difficult to learn how to properly operate with newly installed equipment. An improperly trained team can lead to major accidents,” commented Russian Navy expert Matus Smutny.

In a similar way, pilots of naval fighter jets Su-33 and MiG-29KR will need to complete additional training after 7 years spent without regular practice at sea.

Previously, Ukrainian military intelligence reported that the ship suffered from significant hull corrosion which is extremely difficult to repair.



Cheap but efficient drones going mainstream in the Ukrainian Army


Ukrainian engineers are learning to make maximum use of simple and cheap drones. They not only construct the flying machines themselves, but also develop specialized munitions and adapt novel attack methods.

Drone and its projectile release mechanisms constructed by the “Steel Hornets”. Image credit: Steel Hornets

The tactics used by the Armed Forces of Ukraine are usually based on drones that drop munitions while flying over the positions of the enemy. Such unmanned flyers are often made by volunteer communities using non-military grade parts.

There are only two main technical requirements: the drone must be controllable remotely with visual feedback, and it must be capable of carrying an explosive charge.

One such community, named “Steel Hornets”, manufactures drones for Ukrainian soldiers, constructs ammunition release systems, and has even optimized the design of munitions for this purpose. It recently demonstrated the testing of its new development – shrapnel ammunition.

Shrapnel ammunition for drones made by the Ukrainian volunteer organization “Steel Hornets”

Each shrapnel charge weighs 800 grams (around 1.8 lb) and has a diameter of 63 mm. It uses a mechanical fuse.

Generally, shrapnel explosives are packed with metal balls with a diameter of several millimeters that are scattered in the proximity of the detonation. Modern versions may contain sharp pieces of metal sometimes shaped as small arrows to increase their flight distance.

But it is not only ammunition that plays an important role on the battlefield. Ukrainian drone pilots are using a ‘diving’ technique to increase the efficiency of their strikes.

In this method, the drone dives along the charge’s release trajectory in order to increase the initial projectile velocity right after it leaves the drop mechanism. This also allows strikes to be performed from a lower altitude, thereby increasing accuracy and simplifying the aiming process.
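
The benefit of the diving release can be seen with basic projectile kinematics: giving the munition an initial downward velocity shortens its fall time, which reduces how far wind and aiming errors can displace it. The release altitude and dive speed below are hypothetical numbers for illustration only, and air drag is ignored.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float, initial_down_speed: float) -> float:
    """Time for a released charge to reach the ground, ignoring drag."""
    # Solve 0.5*g*t^2 + v0*t - h = 0 for the positive root.
    v0 = initial_down_speed
    return (-v0 + math.sqrt(v0**2 + 2 * G * height_m)) / G

# Hypothetical numbers: 100 m release altitude, 15 m/s dive speed.
print(f"Drop from hover:   {fall_time(100, 0):.1f} s in the air")
print(f"Drop while diving: {fall_time(100, 15):.1f} s in the air")  # shorter fall, less drift
```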

Is it possible to recover the drone?

Yes, the drone does not hit the ground but is able to change its trajectory right after the drop. As Defense Express notes, the drone essentially turns into a miniature combat aircraft that is able to perform maneuvers similar to those used by larger aircraft.

The process could be simplified even further by programming an automated drone return function to optimize its recovery after it performed its mission.



Ukraine is using the ancient anti-aircraft gun KS-19. But does it work best against ground targets?


The Soviet anti-aircraft weapon KS-19 entered production in 1947. It was developed to replace old WW2-era 85 mm anti-aircraft guns. It is now very old, but the defenders of Ukraine are still using these weapons because they need to defend their country. But where did they get such old weapons from? And can they still be effective at all?

KS-19 is a 100 mm anti-aircraft gun, which can be pressed into service against ground targets. Image credit: Lvova Anastasiya via Wikimedia (CC BY-SA 3.0)

The Armed Forces of Ukraine use KS-19 anti-aircraft guns, which entered service with the Army of the USSR in 1947 – 76 years ago. All the people involved in the development of this anti-aircraft gun are long dead. The KS-19 is a 100 mm gun designed to replace the 85 mm guns of World War II.

A video appeared on the Internet showing as many as 4 KS-19 guns in the hands of the defenders of Ukraine. The Ukraine Weapons Tracker does not exclude the possibility that these anti-aircraft guns are used against ground targets – the KS-19 is definitely suitable for that. In fact, the KS-19 has been used for direct and indirect fire against lightly armoured vehicles or personnel positions for the entirety of its service life.

Defense Express notes that these KS-19s may have appeared in the Armed Forces of Ukraine as trophies, seized during the counterattack campaign in the Kharkiv region. The Ukrainian army then captured at least 4 KS-19 guns. However, it is possible that these anti-aircraft guns were dragged out from Ukraine’s own weapon storage.

Ukraine stored multiple KS-19 guns in Balakliia, a city in Kharkiv Oblast, at the beginning of the Russian invasion. This city was then captured by the advancing Russian forces. The invaders did use these KS-19s, sometimes even as decoys to divert the attention of Ukrainian reconnaissance.

However, as Russian positions in the Kharkiv region weakened, Russia was swiftly kicked out of Balakliia and Ukraine retook its KS-19s and other weapons.

Journalists note that, despite its age, the KS-19 can be useful for the Armed Forces of Ukraine. The characteristics of this weapon allow shooting at ground targets at a distance of up to 20 km, and air targets can be reached at an altitude of up to 15 km.

The rate of fire of the KS-19 is up to 15 rounds per minute. The entire weapon weighs just under 10 tonnes and needs a crew of 15 people, but towing it is hardly an issue for modern trucks. Attacking large air targets with the KS-19 now might be tricky, but smaller drones can feel the wrath of this ancient weapon.

The main problem with such old weapons is the availability of ammunition. It is possible that Bulgaria, which also operates the KS-19, helped with the ammunition. Ukraine probably had some suitable 100 mm rounds stashed away as well, since this system has been in use for decades.

 

Sources: Focus.ua, Wikipedia



What Is Robotic Process Automation in the Manufacturing Industry?


Robotic Process Automation (RPA) is a technology based on software robots used to automate rule-based processes, especially repetitive ones. RPA is beneficial to companies in a variety of industries, and manufacturing is no exception. In this article, you can learn more about RPA in the manufacturing industry, including its benefits and potential challenges.

Manufacturing – illustrative photo.

How Can a Manufacturing Company Use RPA?

RPA can be useful for automating a wide range of processes in manufacturing businesses, including tasks performed by any company regardless of its specialization, such as invoicing, as well as operations that are more specific to manufacturing.

When it comes to invoicing and accounting, RPA can be applied to a variety of processes, including invoice creation and verification, payment scheduling, information collection and processing, and system updates.

A great example of RPA applications for tasks specifically related to manufacturing is inventory management. Companies can automate the process of tracking inventory to determine stock levels and issue purchase orders to manage supply levels with minimal human supervision. This can help businesses to optimize inventory levels and avoid downtime caused by insufficient supply.
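
A minimal sketch of what such an inventory bot might look like is shown below. The item names, reorder points and purchase-order step are hypothetical and would map onto whatever ERP system a company actually uses.

```python
# Hypothetical rule-based reorder bot: items, thresholds and the purchase-order
# step are illustrative, not tied to any specific ERP system.
from dataclasses import dataclass

@dataclass
class StockItem:
    name: str
    on_hand: int
    reorder_point: int
    reorder_quantity: int

def run_reorder_check(inventory):
    """Scan stock levels and draft purchase orders for anything below its reorder point."""
    return [{"item": item.name, "quantity": item.reorder_quantity}
            for item in inventory if item.on_hand < item.reorder_point]

inventory = [StockItem("steel sheet", 40, 100, 500), StockItem("fasteners", 8000, 5000, 10000)]
print(run_reorder_check(inventory))   # -> [{'item': 'steel sheet', 'quantity': 500}]
```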

RPA can also support order processing by automating order entries, validation, and confirmation.

Manufacturing companies can automate production line monitoring, such as tracking machine performance and notifying of potential or ongoing problems, which is very helpful for increasing production efficiency.
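
In the same spirit, a production-line monitoring bot can be as simple as a routine that compares machine telemetry against thresholds and raises a notification. The metric names and limits below are hypothetical values used only to illustrate the idea.

```python
# Hypothetical production-line monitor: metric names and limits are illustrative.
THRESHOLDS = {"spindle_temp_c": 85.0, "vibration_mm_s": 4.5, "cycle_time_s": 42.0}

def check_machine(machine_id, telemetry):
    """Return human-readable alerts for any metric exceeding its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = telemetry.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{machine_id}: {metric} = {value} exceeds limit {limit}")
    return alerts

print(check_machine("press-03", {"spindle_temp_c": 91.2, "vibration_mm_s": 3.1, "cycle_time_s": 40.5}))
```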

Quality control is another part of manufacturing companies’ workflows that can be automated. RPA can handle data collection and analysis, as well as report generation.

Read more about robotic process automation in the manufacturing industry at: https://xplusglobal.com/resources/blog/robotic-process-automation-rpa-in-manufacturing-industry/

Why Should a Manufacturing Company Implement RPA?

Manufacturing companies that want to stay ahead of their competitors can effectively optimize their production, reduce costs, and increase profits by using RPA.

Automating repetitive, rule-based tasks saves significant resources and creates opportunities for employees to engage in tasks that can contribute to the development of their skills and expertise. It also reduces human error and increases accuracy. Whether the automated tasks are accounting processes or production quality control checks, these gains can have a significant impact on the company’s performance.

Automation also speeds up processes and allows companies to run them outside of employees’ working hours if needed. While this can further increase business productivity, speed is especially important for some processes, such as those related to customer service and shipping.

Finally, the reduction in the need for manual labor makes companies more scalable, as they do not necessarily need to hire more employees to handle an increasing number of tasks that can be automated.

Potential Challenges of RPA Implementation

Despite the considerable benefits, companies may encounter certain issues when implementing RPA. The biggest problem is the upfront financial investment, which not all companies can afford. The cost depends on the RPA solution an organization is looking for. For instance, the Microsoft Dynamics 365 ERP and CRM suite comes with built-in automation tools, while other products may require separate solutions to be implemented.

These expenses may include software costs as well as investments in employee training.

Although in many cases companies enjoy significant savings after deploying RPA software, smaller businesses may not see a return on investment high enough to justify the implementation.

Another potential risk associated with RPA concerns security. Automated execution of processes creates more opportunities to overlook vulnerabilities that can lead to data breaches or other types of losses. At the same time, depending on the exact nature of the information being processed, companies must ensure that their RPA systems comply with data security regulations. Complying with these regulations, in turn, requires additional resources and can further increase the cost of implementation.

RPA may also require changes in work routines. To successfully deploy automation, employees must adapt to the changes. While this requires the training mentioned above, there is still the possibility that some may not be ready to adapt to new workflows. This can slow down adoption of the system and disrupt some business operations.
