
Graphene discovery could help generate hydrogen cheaply and sustainably



Researchers from The University of Warwick and the University of Manchester have finally solved the long-standing puzzle of why graphene is so much more permeable to protons than expected by theory.

Graphene – illustrative photo. Image credit: Pixabay (Free Pixabay license)

A decade ago, scientists at The University of Manchester demonstrated that graphene is permeable to protons, the nuclei of hydrogen atoms.

The unexpected result started a debate in the community because theory predicted that it would take billions of years for a proton to permeate graphene’s dense crystalline structure. This led to suggestions that protons permeate not through the crystal lattice itself, but through pinholes in its structure.

Now, writing in Nature, a collaboration between the University of Warwick, led by Prof. Patrick Unwin, and The University of Manchester, led by Dr. Marcelo Lozada-Hidalgo and Prof. Andre Geim, reports ultra-high-spatial-resolution measurements of proton transport through graphene and proves that perfect graphene crystals are permeable to protons. Unexpectedly, protons are strongly accelerated around nanoscale wrinkles and ripples in the crystal.

The discovery has the potential to accelerate the hydrogen economy. Expensive catalysts and membranes, sometimes with a significant environmental footprint, currently used to generate and utilise hydrogen could be replaced with more sustainable 2D crystals, reducing carbon emissions and contributing to Net Zero through the generation of green hydrogen.

The team used a technique known as scanning electrochemical cell microscopy (SECCM) to measure minute proton currents collected from nanometre-sized areas. This allowed the researchers to visualise the spatial distribution of proton currents through graphene membranes.

If proton transport took place through holes as some scientists speculated, the currents would be concentrated in a few isolated spots. No such isolated spots were found, which ruled out the presence of holes in the graphene membranes.

Drs. Segun Wahab and Enrico Daviddi, leading authors of the paper, commented: “We were surprised to see absolutely no defects in the graphene crystals. Our results provide microscopic proof that graphene is intrinsically permeable to protons.”

Unexpectedly, the proton currents were found to be accelerated around nanometre-sized wrinkles in the crystals. The scientists found that this arises because the wrinkles effectively ‘stretch’ the graphene lattice, thus providing a larger space for protons to permeate through the pristine crystal lattice. This observation now reconciles the experiment and theory.

Dr. Lozada-Hidalgo said: “We are effectively stretching an atomic scale mesh and observing a higher current through the stretched interatomic spaces in this mesh – this is truly mind-boggling.”

Prof. Unwin commented: “These results showcase SECCM, developed in our lab, as a powerful technique to obtain microscopic insights into electrochemical interfaces, which opens up exciting possibilities for the design of next-generation membranes and separators involving protons.”

The authors are excited about the potential of this discovery to enable new hydrogen-based technologies. Dr. Lozada-Hidalgo said, “Exploiting the catalytic activity of ripples and wrinkles in 2D crystals is a fundamentally new way to accelerate ion transport and chemical reactions. This could lead to the development of low-cost catalysts for hydrogen-related technologies.”

Read the full paper here https://www.nature.com/articles/s41586-023-06247-6

Source: University of Warwick




AI Chatting: Free AI Chatbot at Your Service



In the past few years, artificial intelligence (AI) has taken great strides in the realm of online communication. Searches for keywords such as “chat AI ask anything” have grown steadily. Partly for this reason, chatbots have emerged as the digital companions of the future, designed to streamline and simplify our online experiences.

Among these, AI Chatting stands out as a free AI chatbot that promises to revolutionize the way we interact with digital services. This AI-powered chatbot is designed to assist you with any questions, problems, or tasks you may have. Whether you need general information, a recommendation, or simply some casual conversation, AI Chatting is at your service, providing accurate and relevant responses.

Without further ado, let’s get into the world of AI Chatting! It’s up to you to judge whether it truly lives up to the lofty expectations it sets.

Introduction to AI Chatting

Among the many AI chatbots on the market, AI Chatting is one of the earlier launches. It first appeared in 2020, built on OpenAI’s technology, and claims to improve and update constantly; according to its own replies, it was last updated in 2021 and runs on the GPT-3 architecture.

AI Chatting is built from code and artificial intelligence. Its fundamental feature is its ability to learn and improve with use: it relies on machine learning to continually enhance its understanding and responses. For instance, if you ask a question that AI Chatting can’t answer, it will take note and work to respond better to similar questions in the future.

One of the first things you’ll notice when using AI Chatting is its user-friendliness. Whether you’re tech-savvy or a newcomer to AI character chat, its intuitive interface ensures that anyone can interact with it effortlessly.


The Power of AI Chatting

For sure, this free AI chatbot brings a multitude of advantages to the table, making it a valuable tool for users seeking assistance in the digital realm. Some key advantages of using it include:

●      Versatility

Whether you’re looking for information, insights, guidance, text translation, inspiration, or simply entertainment, AI Chatting can respond promptly to your queries, saving you precious time. It can compile information, analyze your inputs, and deliver an appropriate answer in seconds.

●      Communication Channels

AI Chatting also serves as a communication channel in several ways. One is customer support: it can engage with visitors and provide instant assistance, handling initial inquiries, directing users to appropriate resources, and escalating to a human agent if required.

Apart from that, translating a document can be done in seconds. This AI chatbot is trained in many languages, including English, Mandarin, Korean, Italian, and German. It breaks down language barriers and lets you communicate freely with people from any part of the world.

●      Privacy Protection

In an age of increasing digital concerns, AI Chatting is committed to taking user privacy seriously. Besides ensuring that generated content is appropriate and abides by community guidelines, it encrypts all conversations to safeguard your data and does not store your chats. Respect for users’ privacy and confidentiality is its number one priority.

●      Cost Saving

Hiring a professional or training a team of human agents costs a significant amount of money and still cannot guarantee results. AI chatbots are developed and deployed to your requirements, so they cost relatively little yet can handle a high volume of inquiries without added expense. You invest once at the start, and the chatbot keeps serving you.

Summary

In conclusion, AI Chatting represents a step forward in the field of AI chatbots and can be a valuable resource for those seeking online assistance. This AI chatbot provides up to 20 free credits per day, offers a wide range of useful functionalities, and is easy to use. Furthermore, its machine learning capability makes it increasingly effective with continuous use, and its attention to user privacy is commendable.

So, if you’re looking for a free AI chatbot that’s easy to use and offers a broad range of features, AI Chatting might be the right choice for you!




The big business of mental illness

A hospital bed – illustrative photo.

Psychologist Lisa Cosgrove, a professor at the University of Massachusetts, explained that more than 5% of young schoolchildren take psychotropic drugs daily. Although this was stated on the basis of a study of medical drug consumption in the United States, it can be extrapolated to any country where psychiatry and the pharmaceutical industry have never stopped generating new mental illnesses.

In 1980, 30 million boxes of antidepressants were prescribed in the United States; by 2012 that figure had reached 264 million prescriptions. What was the reason for this surge? What has happened from 2012 to today? Perhaps the answer is as simple as it is dangerous: mental illness has become a business that generates billions of dollars in profits.

In 2014 a book was published that I have mentioned in previous reports, but it now takes on special relevance because similar exposés are currently in preparation at various publishers: Are We All Mentally Ill?, by Allen Frances, distinguished professor emeritus in the Department of Psychiatry and Behavioral Sciences at Duke University in Durham, North Carolina. Why is this book especially relevant? Simply because its author chaired the DSM-IV task force and served on the DSM-III leadership team.

He himself, having participated in those projects, confessed years later that after the publication of the DSM-5 in May 2013 there is almost no human behavior that cannot at some point be classified as a “mental disorder” and is therefore susceptible to being “solved” with drugs whose intake entails numerous side effects.

Behind the acronym DSM hides the misnamed Diagnostic and Statistical Manual of Mental Disorders. The manual has already been discredited ad nauseam by doctors and psychiatrists around the world, among them the aforementioned Allen Frances, who actively participated in several of its editions. Very soon, in the style of Empire of Pain by the American journalist Patrick Radden Keefe, another journalist, Robert Whitaker, together with the psychologist Lisa Cosgrove, will see their book Psychiatry Under the Influence translated into Spanish, and quite possibly into other languages around the world, despite various attempts to silence its publication. In it they tell the story of how an allegedly corrupt conspiracy catalogued mental illnesses and triggered the massive use of psychotropic drugs worldwide. The person who writes the above is Daniel Arjona, a journalist at the newspaper El Mundo, who on Friday, September 1, 2023, published, among other things, two important points.

The first is the striking statement Dr. Cosgrove sent him by email, putting her finger on an indisputable point: “(…) Over the past 35 years, psychiatry has transformed American culture. It has changed our view of childhood and what is expected of ‘normal’ children, to the point that more than 5% of school-age young people now take a psychotropic drug daily. It has changed our behavior as adults and, in particular, the way we seek to cope with emotional distress and difficulties in our lives.” And that is why millions of people around the world have fallen into the hands of psychotropic drugs with psychiatric endorsement. A real imprudence, a nonsense.

The second is the question Whitaker and Cosgrove try to answer in their book, as reflected in Arjona’s article: what is the thesis of this wholesale indictment? Since the publication in 1980 of the third and decisive version of the DSM (today there are five, all of them under discussion), psychiatry has succumbed to institutional corruption on two fronts: that of the big pharmaceutical companies, and that of the “guild influences” represented by an American Psychiatric Association voracious in defending and expanding its business. Having said that, I encourage you to read some of the articles published under my signature on antidepressants and the illegal commission business in China, for example, to get an idea of the magnitude of the tragedy facing humanity.

Is the DSM to blame? Categorically not. The blame lies with a system that lets large pharmaceutical companies easily advertise “happiness” pills for all kinds of problems. Something similar happened with ADHD (Attention Deficit Hyperactivity Disorder). In the 1990s, this “disease” barely occupied a small corner of the enormous pharmaceutical industry’s profits; the income it generated scarcely reached 70 million dollars. Some years later, when the DSM-IV was published, an enormous business opportunity was seen. Psychiatrists had opened a door with their diagnostic assumptions; patents were created, and a huge advertising campaign was aimed at patients (the general public) and doctors. Everyone saw the heavens open when it was accepted that, with a pill, “hyperactive” children would settle down and teachers and families would finally have moments of respite.

Society “bought” that benefit, and with the slogan “Consult your doctor”, the market tripled in just a few years and keeps growing, as society at large has accepted that it is acceptable to medicate children from an early age. It has been accepted that many university students talk about mental health and take medication, and also, by teachers, parents, and doctors, that a quiet classroom benefits children’s emotional health.

In some countries, the consumption of such products, antidepressants and anxiolytics, is creating increasingly sick societies, where access to these drugs is much simpler than it may seem. That is why lists of the countries with the highest consumption of these products are periodically compiled, with nuances; without giving percentages, the following ten stand out: United States, Iceland, Australia, Portugal, United Kingdom, Canada, Sweden, Belgium, Denmark, and Spain. As a nearby point of reference, a Spanish headline from 2022 read: “The data after a decade of ‘medicine culture’ in Spain: the consumption of antidepressants has grown by 40%.” It gave two keys to this increase: improvements in several drugs, combined with industry strategies and the use of prescriptions as a way to end a consultation quickly.

Could the prescription of antidepressants or anxiolytics have become an absurd excuse to get rid of patients in a medical consultation? I imagine that we will have to look for an answer for this in the future, although I am afraid of what we are going to find.

Perhaps, as a preview of future research, I will stick with one of the answers Allen Frances gave in one of his many interviews, to the following question:

-Isn’t the increase in the number of alleged “mental illnesses” then due to both psychiatrists and the pharmaceutical industry?

-Certainly. Look, pharmaceutical multinationals, especially those grouped under the expression Big Pharma, have become dangerous, and not only in the field of psychiatry. In the United States, for example, there are now more deaths each year from drug overdoses than from traffic accidents, most of them caused by prescription narcotics, not illegal drugs. Of course, pharmaceutical multinationals are experts at inventing diseases to sell drugs; in fact, they invest billions of dollars in spreading misleading messages.

As I finished transcribing Frances’s response, a dystopia came to mind in which I imagined drug cartels advertising their product in media of every kind, without any control and with the approval of many members of a dystopian society (authorities, media, teachers, fathers, mothers, etc.) who obtained a profit, whether emotional or financial, from the widespread consumption of said product.

Information sources:
Graphic: Which countries consume the most antidepressants? | Statista
Medication data: consumption of antidepressants grows by 40% (rtve.es)
DSALUD (magazine) no. 177, December 2014
El Mundo Newspaper. Friday, September 1, 2023
Book: Are we all mentally ill? Author: Allen Frances. Ariel Editorial – 2014

Originally published at LaDamadeElche.com

On the Road to Spotting Alien Life



The focal plane mask for the Coronagraph Instrument on NASA’s Nancy Grace Roman Space Telescope. Each circular section contains multiple “masks” – carefully engineered, opaque obstructions designed to block starlight. Image credit: NASA/JPL-Caltech

In early August, scientists and engineers gathered in a small auditorium at Caltech to discuss how to build the first space telescope capable of detecting alien life on planets like Earth.

The proposed mission concept, the Habitable Worlds Observatory (HWO), would be the next powerful astrophysics observatory after NASA’s James Webb Space Telescope (JWST). It would be able to study stars, galaxies, and a host of other cosmic objects, including planets outside our solar system, known as exoplanets, and potentially even signs of alien life.

Though finding alien life on exoplanets may be a long shot, the Caltech workshop aimed to assess the state of technology HWO needs to search for life elsewhere.

“Before we can design the mission, we need to develop the key technologies as much as possible,” says Dimitri Mawet, a member of the Technical Assessment Group (TAG) for HWO, the David Morrisroe Professor of Astronomy, and a senior research scientist at the Jet Propulsion Laboratory (JPL), which Caltech manages for NASA.

“We are in a phase of technology maturation. The idea is to further advance the technologies that will enable the Habitable Worlds Observatory to deliver its revolutionary science while minimizing the risks of cost overruns down the line.”

First proposed as part of the National Academy of Sciences’ Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020), a 10-year roadmap that outlines goals for the astronomy community, HWO would launch in the late 2030s or early 2040s. The mission’s observing time would be divided between general astrophysics and exoplanet studies.


Sara Seager of MIT gave a talk at the Caltech workshop titled “Towards Starlight Suppression for the Habitable Worlds Observatory.” Image credit: Caltech

“The Decadal Survey recommended this mission as its top priority because of the transformational capabilities it would have for astrophysics, together with its ability to understand entire solar systems outside of our own,” says Fiona Harrison, one of two chairs of the Astro2020 decadal report and the Harold A. Rosen Professor of Physics at Caltech, as well as the Kent and Joyce Kresa Leadership Chair of the Division of Physics, Mathematics and Astronomy.

The space telescope’s ability to characterize the atmospheres of exoplanets, and therefore look for signatures that could indicate alien life, depends on technologies that block the glare from a distant star.

There are two main ways of blocking the star’s light: a small mask internal to the telescope, known as a coronagraph, and a large mask external to the telescope, known as a starshade. In space, starshades would unfurl into a giant sunflower-shaped structure, as seen in this animation.


Artist’s concept of an Earth-like planet in the habitable zone of its star. New observatory will search for alien life. Image credit: NASA Ames/JPL-Caltech/T. Pyle

In both cases, the light of stars is blocked so that faint starlight reflecting off a nearby planet is revealed. The process is similar to holding your hand up to block the sun while snapping a picture of your smiling friends.

By directly capturing the light of a planet, researchers can then use other instruments, called spectrometers, to scrutinize that light in search of chemical signatures. If any life is present on a planet orbiting a distant star, the collective inhales and exhales of that life might be detectable in the form of biosignatures.

“We estimate there are as many as several billion Earth-size planets in the habitable zone in our galaxy alone,” says Nick Siegler, the chief technologist of NASA’s Exoplanet Exploration Program at JPL. The habitable zone is the region around a star where temperatures are suitable for liquid water.

“We want to probe the atmospheres of these exoplanets to look for oxygen, methane, water vapor, and other chemicals that could signal the presence of life. We aren’t going to see little green [alien] men but rather spectral signatures of these key chemicals, or what we call biosignatures.”

According to Siegler, NASA has decided to focus on the coronagraph route for the HWO concept, building on recent investments in NASA’s Nancy Grace Roman Space Telescope, which will utilize an advanced coronagraph for imaging gas-giant exoplanets. (Caltech’s IPAC is home to the Roman Science Support Center).

Today, coronagraphs are in use on several other telescopes, including the orbiting JWST, Hubble, and ground-based observatories.

Mawet has developed coronagraphs for use in instruments at the W. M. Keck Observatory atop Maunakea, a mountain on the Big Island of Hawai’i.

The most recent version, known as a vortex coronagraph, was invented by Mawet and resides inside the Keck Planet Imager and Characterizer (KPIC), an instrument that allows researchers to directly image and study the thermal emissions of young and warm gas-giant exoplanets.

The coronagraph cancels out a star’s light to the point where the instrument can take pictures of planets that are about a million times fainter than their stars. That allows researchers to characterize the atmospheres, orbits, and spins of young gas-giant exoplanets in detail, helping to answer questions about the formation and evolution of other solar systems.

But directly imaging a twin Earth planet—where life as we know it is most likely to flourish—will take a massive refinement of current technologies. Planets like Earth that orbit sun-like stars in the habitable zone are easily lost in the glare of their stars.

Our own sun, for example, outshines the light of Earth by 10 billion times. For a coronagraph to achieve this level of starlight suppression, researchers will have to push their technologies to the limit.

“As we get closer and closer to this required level of starlight suppression, the challenges become exponentially harder,” Mawet says.
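As a rough illustration of where that scaling starts, the contrast figures quoted in this article can be turned into a simple calculation. This is a sketch using rounded, order-of-magnitude numbers, not mission requirements:

```python
def suppression_needed(planet_star_flux_ratio: float) -> float:
    # The starlight must be dimmed by the inverse of the planet/star flux ratio
    return 1.0 / planet_star_flux_ratio

# Earth is ~10 billion times fainter than the Sun
earth_twin = suppression_needed(1e-10)
# Roman's coronagraph targets planets up to ~1 billion times fainter
roman_goal = suppression_needed(1e-9)

print(f"Earth twin: ~{earth_twin:.0e}, Roman: ~{roman_goal:.0e}, "
      f"raw gap: ~{earth_twin / roman_goal:.0f}x")
```

The raw contrast gap in this idealized ratio is a factor of ten; the “factor of roughly 100” Mawet cites later presumably also covers the margin needed to hold that suppression stably in practice.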

The Caltech workshop participants discussed a coronagraph technique that involves controlling light waves with an ultraprecise deformable mirror inside the instrument.

While coronagraphs can block out much of a star’s light, stray light can still make its way into the final image, appearing as speckles. By using thousands of actuators that push and pull on the reflective surface of the deformable mirror, researchers can cancel the blobs of residual starlight.

The upcoming Nancy Grace Roman Space Telescope will be the first to utilize this type of coronagraph, which is referred to as “active” because its mirror will be actively deformed. After more tests at JPL, the Roman coronagraph will ultimately be integrated into the final telescope at NASA’s Goddard Space Flight Center and launched into space no later than 2027.

The Roman Coronagraph Instrument will enable astronomers to image exoplanets possibly up to a billion times fainter than their stars. This includes both mature and young gas giants as well as disks of debris left over from the planet-formation process.

“The Roman Coronagraph Instrument is NASA’s next step along the path to finding life outside our solar system,” says Vanessa Bailey, the instrument technologist for Roman’s coronagraph at JPL.

“The performance gap between today’s telescopes and the Habitable Worlds Observatory is too large to bridge all at once. The purpose of the Roman Coronagraph Instrument is to be that intermediate steppingstone. It will demonstrate several of the necessary technologies, including coronagraph masks and deformable mirrors, at levels of performance never before achieved outside the lab.”

The quest to directly image an Earth twin around a sun-like star will mean pushing the technology behind Roman’s coronagraph even further.

“We need to be able to deform the mirrors to a picometer-level of precision,” Mawet explains.

“We will need to suppress the starlight by another factor of roughly 100 compared to Roman’s coronagraph. The workshop helped guide us in figuring out where the gaps are in our technology, and where we need to do more development in the coming decade.”

Other topics of conversation at the workshop included the best kind of primary mirror for use with the coronagraph, mirror coatings, dealing with damage to the mirrors from micrometeoroids, deformable mirror technologies, as well as detectors and advanced tools for integrated modeling and design.

Engineers also provided a status update on the starshade and its technological readiness.

Meanwhile, as technology drives ahead, other scientists have their eyes on the stars in search of Earth-like planets and possibly alien life that the HWO would image.

More than 5,500 exoplanets have been discovered so far, but none of them are truly Earth-like. Planet-hunting tools, such as the new Caltech-led Keck Planet Finder (KPF) at the Keck Observatory, have become better equipped to find planets by looking for the tugs they exert on their stars as they orbit around.

Heavier planets exert more of a tug, as do planets that orbit closer to their stars. KPF was designed to find Earth-size planets in the habitable zones of small red stars (the habitable zones for red stars are closer in). With additional refinements over the next several years, KPF may be able to detect Earth twins.
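The strength of that stellar “tug” follows from the standard radial-velocity semi-amplitude formula. The sketch below uses textbook constants and an Earth-Sun example; the inputs are illustrative, not KPF specifications:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
M_EARTH = 5.972e24  # Earth mass, kg
YEAR = 3.156e7      # one Earth year, s

def rv_semi_amplitude(m_planet, m_star, period_s, sin_i=1.0, ecc=0.0):
    """Reflex velocity of the star (m/s) induced by an orbiting planet."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc ** 2)))

# An Earth twin orbiting a Sun-like star tugs it by only ~9 cm/s, which is
# why Earth-size planets around lighter red stars, with shorter periods
# and hence larger signals, are within closer reach.
k = rv_semi_amplitude(M_EARTH, M_SUN, YEAR)
print(f"{k * 100:.1f} cm/s")
```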

By the time HWO would launch in the late 2030s or early 2040s, scientists hope to have a catalog of at least 25 Earth-like planets to explore.

Despite the long road ahead, the scientists at the workshop eagerly discussed these challenges with their colleagues who had traveled to Pasadena from around the country. JPL director Laurie Leshin (MS ’89, PhD ’95) gave a pep talk at the start of the meeting.

“It’s an exciting and daunting challenge,” she said. “But that’s what we all live for. We don’t do it alone. We do it in collaboration.”

Written by Whitney Clavin

Source: Caltech




Farm Dams Can Be Converted Into Renewable Energy Storage Systems



New research suggests Australia’s agricultural water reservoirs could be an innovative energy storage solution for variable renewables.

Over 30,000 micro-pumped hydro energy storage systems could potentially be made leveraging existing agricultural dams. Image credit: Pixabay, free license

Tens of thousands of small-scale hydroenergy storage sites could be built from Australia’s farm dams, supporting the uptake of reliable, low-carbon power systems in rural communities, new UNSW-Sydney-led research suggests.

The study, published in Applied Energy, finds agricultural reservoirs, like those used for solar-power irrigation, could be connected to form micro-pumped hydroenergy storage systems – household-size versions of the Snowy Hydro hydroelectric dam project. It’s the first study in the world to assess the potential of these small-scale systems as an innovative renewable energy storage solution.


Farm irrigation system. Image credit: deraugustodesign via Pixabay, CC0 Public Domain

With the increasing shift towards variable energy sources like wind and solar photovoltaics, storing surplus energy is essential for ensuring a stable and reliable power supply. In other words, when the sun isn’t up or the wind isn’t blowing, stored energy can help balance energy supply and demand in real time and overcome the risk of shortages and overloads. 

In a micro-pumped hydro energy storage system, excess solar energy from high-production periods is stored by pumping water to a high-lying reservoir, which is released back to a low-lying reservoir when more power is needed, flowing through a turbine-connected generator to create electricity.
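The energy involved follows from the gravitational potential of the stored water. The sketch below uses assumed illustrative values for head, volume, and efficiency, not figures from the study:

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def usable_energy_kwh(volume_m3: float, head_m: float, efficiency: float) -> float:
    """Recoverable energy when volume_m3 of water drops head_m metres."""
    energy_joules = RHO * G * volume_m3 * head_m * efficiency
    return energy_joules / 3.6e6  # joules -> kWh

# e.g. a 700 m^3 upper dam sitting 20 m above the lower dam,
# with an assumed ~75% round-trip efficiency
print(f"{usable_energy_kwh(700, 20, 0.75):.0f} kWh")
```

With these assumed numbers the system stores roughly 29 kWh, the same order of magnitude as the ~30 kWh average site the study reports.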

However, constructing new water reservoirs for micro-pumped hydro energy storage can be expensive. 

“The transition to low-carbon power systems like wind and solar photovoltaics needs cost-effective energy storage solutions at all scales,” says Dr Nicholas Gilmore, lead author of the study and lecturer at the School of Mechanical and Manufacturing Engineering at UNSW Engineering.

“We thought – if you’re geographically fortunate to have two significant water volumes separated with sufficient elevation, you might have the potential to have your own hydro energy storage system.”


Micro-pumped hydro energy storage systems store excess solar energy from high-production periods by pumping water to a high-lying reservoir, which is released back to a low-lying reservoir when more power is needed. Image credit: UNSW

Unlocking the untapped potential of farm dams

For the study, the team, which also included researchers from Deakin University and the University of Technology Sydney, used satellite imagery to create unique agricultural reservoir pairings across Australia from a 2021 dataset of farm dams.

They then used graph theory algorithms – a branch of mathematics that models how nodes can be organised and interconnected – to filter commercially promising sites based on minimum capacity and slope. 

“If you have a lot of dams in close proximity, it’s not viable to link them up in every combination,” says Dr Thomas Britz, co-author of the study and senior lecturer at UNSW Science’s School of Mathematics and Statistics. “So, we use these graph theory algorithms to connect the best dam configurations with a reasonable energy capacity.”
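The filtering step can be sketched as a toy graph problem: dams are nodes, and a pair is kept only when it clears minimum-head and maximum-distance thresholds. All dam coordinates and thresholds below are invented for illustration; they are not the study’s actual criteria:

```python
from itertools import combinations
from math import dist

# (name, x_m, y_m, elevation_m) -- invented example dams
dams = [
    ("A", 0, 0, 210),
    ("B", 400, 300, 250),
    ("C", 2500, 100, 255),
    ("D", 600, 200, 212),
]

MIN_HEAD_M = 20        # need enough elevation difference to be worthwhile
MAX_DISTANCE_M = 1000  # long pipe runs are too costly

def viable_pairs(dams):
    """Keep dam pairs with sufficient head and short enough separation."""
    pairs = []
    for (n1, x1, y1, z1), (n2, x2, y2, z2) in combinations(dams, 2):
        head = abs(z1 - z2)
        separation = dist((x1, y1), (x2, y2))
        if head >= MIN_HEAD_M and separation <= MAX_DISTANCE_M:
            pairs.append((n1, n2, head))
    return pairs

print(viable_pairs(dams))  # [('A', 'B', 40), ('B', 'D', 38)]
```

Dam C is too far from the others and the A-D pairing has too little head, so only two candidate links survive; the study’s algorithms then rank such links by energy capacity.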

From nearly 1.7 million farm dams, the researchers identified over 30,000 sites across Australia as promising for micro-pumped hydro energy storage. The average site could provide up to 2 kW of power and 30 kWh of usable energy – enough to back up a South Australian home for 40 hours.

“We identified tens of thousands of these potential sites where micro-pumped hydro energy storage systems could be installed without undertaking costly reservoir construction,” Dr Gilmore says. “That’s thousands of households that could potentially increase their solar usage, saving money on their energy bills, and reducing their carbon footprint.”

The research team also benchmarked a micro-pumped hydro site to a commercially available lithium-ion battery in solar-powered irrigation systems. Despite a low discharge efficiency, they found the pumped hydro storage was 30 per cent cheaper for a large single cycle load due to its high storage capacity.

“While the initial outlay for a micro-pumped hydro energy storage system is higher than a battery, the advantages are larger storage capacity and potential durability for decades,” Dr Gilmore says. “But that cost is significantly reduced anyway by capitalising on existing reservoirs, which also has the added benefit of less environmental impact.”

Building micro-pumped hydro energy storage systems from existing farm dams could also assist rural areas susceptible to power outages that need a secure and reliable backup power source. Battery backup power is generally limited to less than half a day, while generators, though powerful, depend on an affordable fuel supply and produce harmful emissions.

“People on the fringes of the electricity network can be more exposed to power outages, and the supply can be less reliable,” Dr Gilmore says. “If there’s a power outage during a bushfire, for example, a pumped hydro system will give you enough energy to last a day, whereas a battery typically lasts around eight hours.”

Although encouraging, the researchers say some limitations of the study require further analysis, including fluctuations in water availability, pump scheduling and discharge efficiency.

“Our findings are encouraging for further development of this emerging technology, and there is plenty of scope for future technological improvements that will make these systems increasingly cheaper over time,” Dr Gilmore says. 

“The next step would be setting up a pilot site, testing the performance of a system in action and modelling it in detail to get real-world validation – we have 30,000 potential candidates!”

Source: UNSW




Security of Smart Grids with Interacting Digital Systems



New methods to analyze cyber security risk in cyber-physical electric power systems.

The increased electrification of society and the need to manage new resources (such as renewable energy sources and flexible resources) and new loads (such as electric vehicles) are changing the electric power system.

A digital system, printed-circuit board – illustrative photo. Image credit: Bermix Studio via Unsplash, free license

The extent of sensors, communication, and automation is increasing, and monitoring and control of the electric power grid is becoming more active and digitalised. The result is a cyber-physical electric power system where the operation of the physical power system increasingly depends on data transmitted through digital networks.

This development increases the number of potential entry points for an attacker and makes the systems more difficult to protect. Also, society is more dependent on electric power than ever before, and the consequences of a successful cyber-attack on interacting digital systems may become catastrophic.

Therefore, we need appropriate methods to assess and reduce cyber security risks in cyber-physical electric power systems. In the InterSecure project, SINTEF Energi, SINTEF Digital, NTNU and Proactima have developed such methods in collaboration with Norwegian grid companies and authorities.

What is a cyber-physical electric power system?

We understand a cyber-physical system as a system of physical components controlled via digital networks.

Commonly, cyber-physical electric power grids are called smart grids. This name emphasises the enhanced possibilities for intelligence, i.e., control, monitoring, and automation, brought to electric grids when they are increasingly connected to digital networks.

What worries the grid operators today?

The emerging smart grid, with its increasing interconnection and exchange of data, increases the number of actors and stakeholders in the operation of power systems. This can introduce several new or changed threats and vulnerabilities.

Discussions in the project have revealed some key sources of threats and vulnerabilities that the grid operators worry about today, and that are expected to become even more relevant in the future:

  • Extended digital networks that increase the number of possible entry points for cyber attackers,
  • new technology, components and systems that are rapidly introduced,
  • new connections between administrative IT systems and control systems that increase data flow across systems,
  • increased system complexity,
  • more interfaces between interdependent applications or systems, and
  • dependence on digital services from external suppliers.

The grid companies must be able to understand and handle new risks due to these system developments.

What kind of methods do the grid operators need to address their concerns?

The grid operators in the project secure their systems and manage risks according to current regulations. The main relevant regulations are Energiloven (the Norwegian Energy Act), Kraftberedskapsforskriften (the Power Contingency Regulation) and Sikkerhetsloven (the Security Act).

Furthermore, the grid operators collect and use updated threat information from organisations providing notification services, such as KraftCERT, PST (Norwegian Police Security Service) and NSM (Norwegian National Security Authority).

Although the power supply is reliable today, and current regulations and risk management practices are well established, the grid operators are not well equipped to handle the new sources of threats and vulnerabilities described in the previous section.

Traditional power system risk management does not capture the intentional nature of cyber security incidents, the widespread entry points created by far-reaching digital networks, or the vulnerabilities that cyber attackers can exploit through those entry points.

Also, cyber security risk analysis and traditional risk analysis are carried out separately. This approach is not optimal, as it does not enable the assessment of potential vulnerabilities arising from system interconnections, interdependencies and complexity.

In the following, risk assessment methods developed in the InterSecure project are briefly described.

Framework for risk assessment of cyber-physical electric power systems

The framework is based on the ISO 31000 and NS 5814 standards. It emphasises not only the physical system but the entire system of systems that is included in the operation of smart grids.

In fact, as smart grids develop and the system becomes more complex, it becomes fruitless to try to understand the entire system and how all its elements relate and interact. The sheer size and complexity of the system make this impossible.

Therefore, the risk management of the system needs to start from a high-level perspective before focusing in on different sections or areas of the system.

As part of the InterSecure project, a risk management framework has been proposed that enables a more iterative approach to manage the risk of complex socio-technical systems, such as smart grids.

The framework follows a “plan, do, check, act” structure that is common in risk management frameworks. It consists of three main phases (plan, assess and manage) as well as three continuous activities: communication and consultation; recording and reporting; and monitoring and review.

Figure 1 Proposed risk management framework for interacting digital systems in smart grids

The overall structure of the risk management framework is that of an iterative process. It takes the complexity of the system into account: rather than trying to understand and model the entire system, it follows an incremental, top-down approach.

This allows the system to be addressed first from a high-level perspective; the analyst then becomes familiar with the different areas and risks of the system, finding the right level at which to manage each risk.

Threat modelling

Threat modelling for interacting digital systems is the exercise of analysing how software or a system can be attacked, with the aim of protecting against such attacks. While several methods exist, one of the best known is STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege).

STRIDE starts by creating a model of the system to visualise how and what type of data is being transmitted between the different parts of the system. As an example, a part of the model used in InterSecure is shown in Figure 2. Based on this model, threats (i.e., potential attacks) are identified for the different parts of the system.

To aid the STRIDE threat modelling process, Microsoft has developed the Microsoft Threat Modeling tool. This tool provides a graphical user interface to build the model of the system and a structured way of identifying and evaluating threats.

The tool was originally aimed at threat modelling of software, but since it allows users to create their own templates, we have adapted it to identify threats against the smart grid. The template developed in the project is available online.
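At its core, STRIDE-style enumeration is a cross product of data flows and threat categories, which a tool like the one above then helps triage. The sketch below uses hypothetical components and flows, not the actual InterSecure model:

```python
# Minimal STRIDE-style enumeration over data flows in a toy substation model.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# (source, destination, protocol) data flows -- invented for illustration
flows = [
    ("RTU", "gateway", "IEC 104"),
    ("gateway", "control centre", "IEC 104 over WAN"),
]

def enumerate_threats(flows):
    """Yield one candidate threat per STRIDE category per data flow."""
    for src, dst, proto in flows:
        for name in STRIDE.values():
            yield f"{name} threat on {proto} flow {src} -> {dst}"

threats = list(enumerate_threats(flows))  # 2 flows x 6 categories = 12 entries
```

Each generated entry is only a candidate; the analyst's job, as in the substation study described next, is to judge which of them are realistic and critical.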

Figure 2: Model used in STRIDE threat modeling

In this project, we performed threat modelling of a digital secondary substation to test and demonstrate the use of the tool in a smart grid context.

Guided by the threat categories making up the STRIDE mnemonic, threats towards the substation from each category were identified. Information disclosure and denial-of-service threats were judged the most critical, mainly because such attacks are simple to perform: they were evaluated to have potentially serious consequences while requiring no specific knowledge or specialised tools to execute.

Communication impact simulations

Figure 3 Impact simulation model in Mininet network emulator

We have developed two simulation models to verify the most critical threats (sniffing and availability attacks) identified by threat modelling. Both models have a topology comprising two digital secondary substations and a control centre.

The first model was created within the Mininet network emulator and was selected as the primary model because it is easy to use and to transport: the entire model runs in a single virtual machine. The schema of the first model is shown in Figure 3.

The second model was created using separate virtual machines for each component (RTUs, gateways, routers and the monitoring device).

This model was used only for performance testing during denial-of-service attacks, as its results corresponded more closely to reality than the Mininet model’s.

Its performance evaluation is described in the article “Threat Modeling of a Smart Grid Secondary Substation”. The model was not considered further due to its complexity and the lack of an easy way to export it. The model schema is shown in Figure 4.

Figure 4 Impact simulation model using virtual machines

Both impact simulation models used emulated IEC 104 communication corresponding to data from the National Smart Grid Lab in Trondheim.
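For a flavour of what such emulated traffic looks like on the wire, here is a minimal encoder for the fixed 6-byte APCI header of an IEC 60870-5-104 I-format frame; it is a sketch only, and omits the ASDU payload that carries the actual measurement data:

```python
import struct

def iec104_i_frame_header(send_seq: int, recv_seq: int) -> bytes:
    """Build the 6-byte APCI of an IEC 60870-5-104 I-format frame:
    start byte 0x68, length of the remaining APDU, then the send and
    receive sequence numbers (each shifted left by one, little-endian)."""
    apdu_len = 4  # control fields only; a real frame adds the ASDU length
    return struct.pack("<BBHH", 0x68, apdu_len, send_seq << 1, recv_seq << 1)

hdr = iec104_i_frame_header(3, 5)
# hdr[0] is the 0x68 start byte; hdr[2:4] encodes send sequence 3 as 6
```

Because the protocol runs over plain TCP with no built-in encryption or authentication, frames like this are trivially readable by a sniffer, which is why information disclosure ranked among the most critical threats above.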

The results gained from testing the simulation models can be used by grid operators to improve grid security, for example by tuning security devices such as firewalls. The first model was provided to all members of the InterSecure project and was also demonstrated.

In this demonstration, all the participants could install the model on their own devices and learn its basic controls in a provided scenario. A demonstration is also available on YouTube.

Assessment of vulnerabilities and failure consequences

Smart grids are complicated systems, so no single model or framework can uncover all vulnerabilities. Hence, a selection of models and frameworks is needed to help grid operators view the problem at hand from different angles.

To complement the other methods in the InterSecure project, an approach for assessing vulnerabilities and failure consequences for cyber-physical power grids based on the bow-tie model has been developed.

The approach is illustrated in Figure 5. The first part of the analysis is to perform a bow-tie analysis for a selected scenario for a specific critical asset, i.e., an asset that can directly impact the distribution of electricity.

Next, assumptions on the operation state of the power system are made, and the coping capacity and consequences at the system level are assessed.

Figure 5 A bow-tie model with the critical asset event at the center. The left side illustrates the four zones in the Purdue model. On the right side, the zone closest to the center is related to the event tree from the asset perspective, while the rest of the right side is related to the power system consequence assessment. The vertical orange bars represent barriers. Adapted from Sperstad et al., 2020.

The proposed approach has been tested on a case related to conditional connection agreements at a Norwegian DSO. One advantage of the approach is that the bow-tie model is well known in the industry, so little time was needed to explain the method to the participants.

The bow-tie model was also found to be flexible enough to incorporate both traditional threats, such as technical failures, and cyber threats from malicious actors in the same diagram.

Further, the approach aided in building a common understanding among participants from the different departments of the grid operator by visualising threats, vulnerabilities, barriers, and consequences in the same diagram.

The bow-tie analyses are, however, time-consuming to perform. Considerable time is also needed to process the results before they can be used further in the risk management process.

Another consequence of the flexibility of the bow-tie method is that successful use is dependent on the ability of the facilitator to guide the discussion in the group so that relevant threats and vulnerabilities are discussed.

Because of this, there is a need for a structured overall approach to ensure that this type of analysis is used on the relevant assets and threats.
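A bow-tie can also be captured as plain data, which makes gaps in the analysis (paths without barriers) easy to query after a workshop. The scenario content below is invented purely to show the structure:

```python
# A bow-tie as plain data: threats and preventive barriers on the left,
# consequences and mitigating barriers on the right of the top event.
bowtie = {
    "top_event": "Loss of control of a digital secondary substation",
    "threats": [
        {"cause": "Phishing of operator credentials",
         "barriers": ["multi-factor authentication", "awareness training"]},
        {"cause": "Exploited firmware vulnerability in an RTU",
         "barriers": ["patch management", "network segmentation"]},
    ],
    "consequences": [
        {"outcome": "Local outage in one grid area",
         "barriers": ["manual switching", "backup feeders"]},
        {"outcome": "Loss of monitoring data", "barriers": []},
    ],
}

def unprotected_paths(bowtie: dict) -> list[str]:
    """Return causes or outcomes that have no barrier at all."""
    sides = bowtie["threats"] + bowtie["consequences"]
    return [s.get("cause") or s["outcome"] for s in sides if not s["barriers"]]
```

Querying for unbarriered paths highlights where the assessment should propose new barriers, mirroring how the diagram is read during a facilitated session.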

To summarize, the methods tested in InterSecure are applicable in different situations, depending on the level of detail needed. The suggested framework can be used at a high level, while threat modelling can identify information flows and threats and sort out the most important ones for more detailed analysis and follow-up.

The simulation models are useful for detailed testing of concrete attacks with realistic communication and network topology, while the assessment of vulnerabilities is useful for in-depth analysis of both physical and cyber threats, vulnerabilities and barriers. The DSO should test the methods and plan which method to use when.

Source: Sintef




Webb Reveals New Structures Within Iconic Supernova



NASA’s James Webb Space Telescope has begun the study of one of the most renowned supernovae, SN 1987A (Supernova 1987A).

Located 168,000 light-years away in the Large Magellanic Cloud, SN 1987A has been a target of intense observations at wavelengths ranging from gamma rays to radio for nearly 40 years, since its discovery in February 1987.

Recent observations by Webb’s NIRCam (Near-Infrared Camera) provide a crucial clue to our understanding of how a supernova develops to shape its remnant.

Webb’s NIRCam (Near-Infrared Camera) captured this detailed image of SN 1987A (Supernova 1987A). At the center, material ejected from the supernova forms a keyhole shape. Just to its left and right are faint crescents newly discovered by Webb. Beyond them an equatorial ring, formed from material ejected tens of thousands of years before the supernova explosion, contains bright hot spots. Exterior to that is diffuse emission and two faint outer rings. In this image blue represents light at 1.5 microns (F150W), cyan 1.64 and 2.0 microns (F164N, F200W), yellow 3.23 microns (F323N), orange 4.05 microns (F405N), and red 4.44 microns (F444W). Image Credit: NASA, ESA, CSA, M. Matsuura (Cardiff University), R. Arendt (NASA’s Goddard Spaceflight Center & University of Maryland, Baltimore County), C. Fransson (Stockholm University), and J. Larsson (KTH Royal Institute of Technology)

This image reveals a central structure like a keyhole. This center is packed with clumpy gas and dust ejected by the supernova explosion. The dust is so dense that even near-infrared light that Webb detects can’t penetrate it, shaping the dark “hole” in the keyhole.

A bright, equatorial ring surrounds the inner keyhole, forming a band around the waist that connects two faint arms of hourglass-shaped outer rings. The equatorial ring, formed from material ejected tens of thousands of years before the supernova explosion, contains bright hot spots, which appeared as the supernova’s shock wave hit the ring.

Hot spots are now found even exterior to the ring, with diffuse emission surrounding it. These mark the locations of supernova shocks hitting more exterior material.

While these structures have been observed to varying degrees by NASA’s Hubble and Spitzer Space Telescopes and Chandra X-ray Observatory, the unparalleled sensitivity and spatial resolution of Webb revealed a new feature in this supernova remnant – small crescent-like structures.

These crescents are thought to be a part of the outer layers of gas shot out from the supernova explosion. Their brightness may be an indication of limb brightening, an optical phenomenon that results from viewing the expanding material in three dimensions.

In other words, our viewing angle makes it appear that there is more material in these two crescents than there actually may be.

The high resolution of these images is also noteworthy. Before Webb, the now-retired Spitzer telescope observed this supernova in infrared throughout its entire lifespan, yielding key data about how its emissions evolved over time. However, it was never able to observe the supernova with such clarity and detail.

Webb’s NIRCam (Near-Infrared Camera) captured this detailed image of SN 1987A (Supernova 1987A), which has been annotated to highlight key structures. At the center, material ejected from the supernova forms a keyhole shape. Just to its left and right are faint crescents newly discovered by Webb. Beyond them an equatorial ring, formed from material ejected tens of thousands of years before the supernova explosion, contains bright hot spots. Exterior to that is diffuse emission and two faint outer rings. In this image blue represents light at 1.5 microns (F150W), cyan 1.64 and 2.0 microns (F164N, F200W), yellow 3.23 microns (F323N), orange 4.05 microns (F405N), and red 4.44 microns (F444W). Image credits: NASA, ESA, CSA, M. Matsuura (Cardiff University), R. Arendt (NASA’s Goddard Spaceflight Center & University of Maryland, Baltimore County), C. Fransson (Stockholm University), and J. Larsson (KTH Royal Institute of Technology). Image credit: A. Pagan

Despite the decades of study since the supernova’s initial discovery, there are several mysteries that remain, particularly surrounding the neutron star that should have been formed in the aftermath of the supernova explosion.

Like Spitzer, Webb will continue to observe the supernova over time. Its NIRSpec (Near-Infrared Spectrograph) and MIRI (Mid-Infrared Instrument) instruments will offer astronomers the ability to capture new, high-fidelity infrared data over time and gain new insights into the newly identified crescent structures.

Further, Webb will continue to collaborate with Hubble, Chandra, and other observatories to provide new insights into the past and future of this legendary supernova.

The James Webb Space Telescope is the world’s premier space science observatory. Webb is solving mysteries in our solar system, looking beyond to distant worlds around other stars, and probing the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.

Source: NASA




Beyond the Visual: The Intersection of Art and Sound


Art has long been celebrated as a visual medium, capturing the imagination and stimulating the senses through brushstrokes, colors, and compositions. However, the power of art extends beyond what meets the eye. Sound, with its ability to evoke emotions and engage our auditory senses, has found an intriguing intersection with visual art. This fusion of art and sound has given rise to a new dimension of artistic expression that transcends the boundaries of traditional visuals. In this article, we will explore the profound merging of these two forms of artistic communication.

Painting with Sound: The Auditory Canvas

Visual art often breathes life into the static canvas through the dynamic use of color, line, and shape. Similarly, sound can be used as a tool to paint a vivid and immersive auditory canvas. Artists now explore the creation of soundscapes, where the composition becomes an intricate expression of emotions, atmospheres, and stories. Just as an artist might use brushstrokes to layer and blend colors, musicians and sound artists utilize various tones, textures, and rhythms to build complex auditory narratives.

The concept of painting with sound has been employed by composers and musicians to enhance the immersive experience of visual arts exhibitions and installations. By orchestrating soundscapes that resonate with the underlying themes or visual elements of an artwork, they create an entirely new dimension for the audience to explore. Through the harmonious coexistence of art and sound, viewers engage with a multi-sensory experience that amplifies the impact and emotional resonance of the artwork.

Synesthesia: When Art and Sound Collide

Beyond sound complementing visual art, a phenomenon known as synesthesia takes the fusion between art and sound to another level. Synesthesia refers to a neurological condition in which one sensory experience involuntarily triggers another. This means that an individual with synesthesia might see colors and shapes when they hear specific sounds or musical notes.

For artists and musicians who experience synesthesia, the relationship between sound and visual art becomes deeply intertwined. They can tap into this multisensory experience in their artistic creations, creating visual art that directly translates into sound, or vice versa. This unique ability allows synesthetic artists to present the world in a way that combines the auditory and visual dimensions. They provide audiences with an extraordinary glimpse into their sensorial experiences and invite them to perceive art in an entirely novel way.

This cross-pollination between art and sound opens up a world of possibilities for both artists and audiences. It encourages exploration, collaboration, and a deeper understanding of how different sensory stimuli can intertwine to create a rich and authentic artistic experience. By pushing the boundaries of traditional art forms, the intersection of art and sound challenges us to see, feel, and hear the world in new and captivating ways.

Capturing Life’s Essence: The Storytelling Nature of Portraiture

Photo by Matthew Moloney on Unsplash

Portraiture has been an essential part of art for centuries. From the intricate details in classical oil paintings to today’s avant-garde photographic portraits, each work tells a unique story about the subject. Portraits not only capture the physical likeness of individuals but also encapsulate their emotions, personality, and experiences. They serve as a powerful medium for expressing the essence of life. This article explores the storytelling nature of portraiture and its ability to convey the depth and complexity of human existence.

1. The Emotional Narrative: Portraits as windows into the human soul

One of the most remarkable aspects of portraiture is its ability to convey emotions and capture the essence of the subjects’ inner world. A skilled portrait artist can use various techniques to reveal the emotions and thoughts of the individual being portrayed. The subject’s eyes, for example, can directly engage the viewer, evoking empathy and inviting them to connect with the depicted person on a deeper level.

The posture, gestures, and facial expressions portrayed in a portrait also contribute to the emotional narrative. A slight smile can communicate joy, while a furrowed brow might hint at worry or contemplation. By capturing these subtle nuances, the artist can create a powerful narrative that reflects the subject’s emotional state, experiences, and even their journey through life. A portrait, in this sense, becomes a door that allows us to explore the complexities of human existence.

2. Contextualizing Identity: Portraits as portraits of society

Every portrait is not only a representation of an individual but also an encapsulation of the time and society in which they exist. Portraits serve as historical documents, often reflecting the cultural, social, and political influences that shape the subject’s identity. By examining a portrait, we can gain insights into the fashion, values, and cultural norms prevalent during that period.

For example, portraits from the Renaissance period not only reveal the physical appearance of the subjects but also offer glimpses into the political and social power structures of the time. Similarly, contemporary portraiture can reflect the diversity and inclusivity movements of today’s world, capturing individuals from different ethnicities, genders, and backgrounds.

In this way, portraiture becomes a means of contextualizing identity within the larger fabric of society. It invites us to explore both the individual and the collective, providing a broader understanding of the human experience throughout different eras.

Conclusion

Portraiture’s storytelling nature goes beyond capturing a simple likeness or physical appearance. Through a combination of artistic skill and psychological insight, portraiture encapsulates the essence of life, conveying emotions, experiences, and societal influences. Whether through expressive brushstrokes or skillful photography, portraits offer unique narratives that engage and connect with viewers, showcasing the multifaceted nature of human existence. By exploring these narratives, we deepen our understanding of ourselves, society, and the relentless beauty of the human spirit.

Finding Harmony in Chaos: The Art of Collage


In today’s fast-paced world, chaos seems to be a constant companion. We are bombarded with information, images, and ideas from all directions, leaving us feeling overwhelmed and disconnected. However, amidst the chaos, there is beauty to be found – and one artistic medium that captures this essence is collage. The art of collage offers a unique way to create harmony by assembling various elements and bringing them together in a cohesive and visually appealing way. Let’s explore the world of collage and discover how it enables us to find harmony in chaos.

1. The Magic of Assembling Disparate Elements

Collage is the technique of creating a new whole by assembling different elements, such as photographs, papers, fabrics, and other objects. It allows artists to break away from traditional constraints and explore new possibilities by combining disparate elements that may seem unrelated at first glance.

In the chaos of everyday life, collage offers a way to bring order and unity. Artists carefully select and arrange these diverse elements, finding connections and meanings that might not have been apparent individually. The act of piecing together these fragments gives rise to a new creation that harmonizes with the chaos from which it was constructed. The resulting collage becomes a visual representation of the artist’s unique perspective on the world, bringing harmony to what initially seemed chaotic.

2. Storytelling through Layers and Texture

One of the intriguing aspects of collage is its ability to tell stories through the layers and textures created by the assembled elements. The juxtaposition of different materials and images adds depth and complexity, inviting viewers to explore multiple layers of meaning and interpretation.

In this way, collage allows artists to navigate the chaos of their experiences and emotions by using symbols and visual metaphors. It offers a platform to convey personal narratives, social commentaries, or abstract concepts that may otherwise be challenging to express. The different elements within a collage work together to create a harmonious whole, illustrating that even in chaos, there is coherence and meaning.

Furthermore, the physical texture within a collage adds another dimension to the artwork. By combining different materials like torn paper, textured fabrics, or found objects, artists create tactile compositions that engage the viewer’s senses. The tactile experience further enhances the connection between chaos and harmony, as one can physically feel the textures intermingling, reinforcing the idea that harmony can be found in even the most chaotic of circumstances.

In conclusion, collage is an art form that allows us to find harmony in the chaos that surrounds us. By assembling disparate elements and creating order from the disorder, collage artists showcase the beauty that can emerge from chaos. Through storytelling and the incorporation of texture, collage brings a sense of unity and wholeness to what might initially seem fragmented and chaotic. So, the next time you find yourself overwhelmed by the chaos of the world, perhaps it is a good time to embrace the art of collage and discover the harmony awaiting within it.