
How do we know that our current climate is abnormal?

Okay, so it has been a while since I uploaded. Time has been ticking, life has been…life-ing, but I’m back!

The next set of blog posts will be segments from the EPQ that I wrote for my A Levels. For those of you who don’t know, an EPQ, or Extended Project Qualification, is an independent research-based project that involves writing a dissertation on a topic or subject of your choice. I decided to combine my interest in engineering and the environment, and to allow myself to delve deeper into the past, present and future changes of the climate. The essay has been broken down into more bearably sized posts, but I hope you enjoy them as much as I enjoyed writing it.

What have the Ice Age and Medieval Warming Period warned and taught us about our current environmental situation? (part 1)

In recent years, we have been using the term ‘climate change’ to refer to changes in the Earth’s temperature with negative connotations, more specifically, the unprecedented rate of increase in global temperatures caused by the uncontrolled rate of emissions. Many people are unaware of the cycles that the Earth has undergone and will continue to undergo, and so do not realise that the term climate change can also refer to the Earth’s more natural phases. Without the knowledge that climate change refers to long-term changes, an everyday person will not feel confident that they can make a difference through the actions they take, such as reducing their carbon footprint or moving to more sustainable energy providers. In this essay, I will be looking into whether we have been taught anything by two specific periods in time which exemplify the extremes that the climate can reach: the Medieval Warming Period and the Ice Age.

National Geographic defines climate change as ‘the long-term alteration of temperature and typical weather patterns in a place. Climate change could refer to a particular location or the planet as a whole’. Fluctuations in the Earth’s temperature can cause, and have already caused, several disastrous environmental catastrophes, including a worldwide rise in sea levels, more frequent droughts in sub-Saharan Africa, forest fires in Australia, and floods which take thousands of lives and carry serious socio-economic consequences. These fluctuations in the climate have been happening for millions of years, but it is only now that they are starting to spiral out of control due to the impact of mankind and the rapid growth of industrial processes that use fossil fuels as the main source of energy and of materials for manufacturing. In the 1980s, the issue became more public as countries grew more aware of climate change due to unexpected shifts in weather patterns.

In 2013, the Intergovernmental Panel on Climate Change (IPCC) projected in its Fifth Assessment Report, drawing on the scientific evidence, that to prevent a climate calamity we needed to stop the global temperature increasing by 2°C. Then in 2015, the UN brought about the ‘Paris Agreement’, which 186 world leaders have now signed up to and which allows serious steps to be taken in tackling the climate crisis. However, with the initial target on course to be surpassed, we are now looking at trying to prevent a rise of 4°C. An increase of 4°C could lead to the mass extinction of hundreds, and possibly thousands, of animal species, widespread coral mortality and millions of people worldwide being adversely impacted by more frequent droughts, decreasing food availability and the loss of land and homes. However, have previous environmental events warned us of this climate catastrophe, making our actions a little too late?

There have been many events of varying significance in history that have caused a shift in the Earth’s climate cycle. These events have had many contributing causes, such as increased volcanic activity and changes in the Earth’s orbit around the Sun, which can amplify the warming or cooling caused by slight changes within the climate’s different mechanisms.

How do we reconstruct the changes in the Earth’s temperatures?

To find out about different periods of climate history, paleoclimatologists, those who examine climate data to understand how local ecology and the global climate looked in the past, turn to core boring as the primary means of collecting data about climate cycles. Core boring is a sampling technique in which cylindrical wells are drilled into rock or ice so the ground can be analysed, and it is commonly used in the civil engineering industry. From these cores, large amounts of data on past climate variations can be obtained. Many remains, such as sediment, fossils and gas bubbles, are trapped in the material, and discoveries can be made by analysing them; these remains can clearly reconstruct the climatic history of the Earth. This technique has been used widely to investigate the history of the current climate crisis and to help researchers understand how long it has been occurring, and whether the crisis is just a phase in the Earth’s natural climate cycle or really a catastrophic event. Research has been done on the Medieval Warming Period, the period between 900 and 1300 AD, and the most recent Ice Age, which began 2.6 million years ago. Comparing different periods in time can help scientists identify the causes of fluctuations in global temperatures and use the cyclical patterns to predict where we could be heading if we continue to burn fossil fuels at or above the current rate. I will be comparing these two periods by looking at their causes, comparing them to the causes of today’s change in climate, and asking whether they have warned or taught scientists and engineers about our current environmental situation.
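
To give a feel for the kind of analysis core data enables, here is a minimal sketch in Python (the CO2 values below are invented for illustration, not real core measurements) of smoothing noisy readings so that a longer-term trend stands out:

```python
# Illustrative sketch only: these CO2 values are invented for
# demonstration, not real ice-core measurements.
co2_ppm = [278, 281, 275, 284, 279, 290, 301, 315, 340, 370, 400, 412]

def moving_average(values, window=3):
    """Smooth short-term noise so longer-term trends stand out."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average(co2_ppm))  # the upward trend is clearer once smoothed
```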

The type of boring method used when analysing historical data is very important, and most commonly, auger boring is used. Auger boring is the process of forming a bore by forcing a steel casing through the earth from a main shaft to a reception shaft. The rotating auger (i.e. the drilling device) carries the soil back through the casing pipe to the main shaft for removal. It is an efficient and economical method that uses simple and inexpensive equipment to create holes of a variety of sizes. It is appropriate for soft to firmer soils and can also be used to determine the groundwater table. This method is seen as better than alternatives such as wash boring, a system in which material is loosened and forced down through the pipe using water, or rotary drilling, a method used to drill deep boreholes in rock formations. However, it is not suitable for very hard or cohesionless soils, as the soil could flow back into the hole and give false data about each layer of soil. Before any boring can be undertaken, samples of the soil are taken to determine the type of soil in the area, which is then used to determine the boring method to be applied.

So that’s part 1 done! Stay tuned for the next one!


Should we all be STEMinists?

On February 11th 2021, the world celebrated the 6th International Day of Women and Girls in Science. Currently, less than 30% of researchers worldwide are women and, according to UNESCO, only about 30% of all female students select STEM-related fields in higher education. Longstanding biases and gender stereotypes are what steer girls and young women away from STEM-related fields. To increase access to and participation in science for women and girls, UNESCO declared the 11th of February the International Day of Women and Girls in Science. In 2015, the United Nations came up with 17 Sustainable Development Goals for people across the planet, with a deadline of 2030. The fifth goal is gender equality, which includes women in STEM.

More than ever, it is important that we encourage girls in the early stages of their education that they can be whatever they aspire to be and show them that they will not be limited to stereotypes or the ‘norm’ of today’s society.

This year, the coronavirus has shown us that it will take as many of us as possible, including women, to come up with ways to fight COVID-19, whether in the research field, in developing techniques for testing, in making the vaccine, in administering the vaccine, in taking care of COVID patients, or in so many other ways.

Below are a few women in STEM careers who have improved our knowledge in their fields and empowered many other young people to do the same.

When the Ebola epidemic began in West Africa, Dr Pardis Sabeti led a team that sequenced virus samples from infected patients almost as soon as the outbreak began. This marked the first in-depth use of real-time DNA sequencing during such a deadly epidemic. Pardis and her team were able to work out clearly that the virus was spreading human to human, not from mosquito bites or some pig vector or something else. There were so many theories out there, but her work proved that there is nothing like real data to get rid of myths and guesses and get down to the facts. Many of her scientific collaborators died during this outbreak. This is high-risk research, but it ended up saving a lot of lives too.

Mae C. Jemison is an American astronaut and physician who, in June 1987, became the first African American woman to be admitted into NASA’s astronaut training program and, on September 12, 1992, became the first African American woman in space. In recognition of her accomplishments, Jemison has received many awards, including the 1988 Essence Science and Technology Award and the Ebony Black Achievement Award in 1992. She received a Bachelor of Science degree in chemical engineering in 1977 and, after graduating, found time to expand her horizons by studying in Cuba and Kenya and working at a Cambodian refugee camp in Thailand.

When she returned to the United States in 1985, Jemison made a career change and decided to follow a dream of applying to NASA’s astronaut training program. After more than a year of training, she received the title of ‘Science Mission Specialist’, the role that made her responsible for conducting crew-related scientific experiments on the space shuttle.

Jemison flew into space on September 12, 1992, with six other astronauts aboard the Endeavour on mission STS-47. During the eight days she was in space, Jemison conducted experiments on weightlessness and motion sickness on herself and the rest of the crew. In total, she spent more than 190 hours in space before returning to Earth on September 20, 1992. After the historic flight, Jemison said that society should recognise how much both women and members of other minority groups can contribute if given the opportunity.

The radiochemist Irène Joliot-Curie was a battlefield radiologist, activist and winner of the 1935 Nobel Prize in Chemistry. She was born in Paris to Marie and Pierre Curie, two of the most famous scientists in the world. Along with her husband, Frédéric, she discovered the first artificially created radioactive atoms, contributing to countless medical developments, especially in the fight against cancer.

After starting her studies at the Faculty of Science in Paris, she served as a nurse radiographer and worked together with her mother to provide mobile X-ray units during World War I before returning to her studies at university. She later worked at the institute that her parents had founded, where she did important work on natural and artificial radioactivity, the transmutation of elements, and nuclear physics. It was there that she and her husband Frédéric Joliot, whom she married in 1926, conducted the work that would win them the Nobel Prize for their synthesis of new radioactive elements.

Aprille Ericsson-Jackson is a native of Brooklyn, New York. She attended the Massachusetts Institute of Technology before attending graduate school at Howard University. She was the first African-American woman to earn a PhD in Mechanical Engineering from Howard University and the first African-American woman to receive a PhD in Engineering at NASA’s Goddard Space Flight Center. As she continues her career at NASA, Dr Ericsson-Jackson is also committed to educating and inspiring more African-American students to pursue careers in STEM.

These are only a small handful of the known and unknown women who have made a difference in their fields and broken boundaries in the world. It is our job to continue inspiring young girls to be whoever they want to be, whether that be a policewoman, an entrepreneur, an engineer or a pilot, by following our own dreams and encouraging others to do the same. These women all followed their passions, which not only gave them a sense of fulfilment and purpose but led the way for so many more young women to come.

Here’s to many more International Women and Girls days in STEM.

Photo by Christina @ wocintechchat.com on Unsplash


The pandemic that rocked the world…Where are we now?

Well… 2020… what a year it was. A new virus that had never been detected before spread across the whole world, closing the borders of many countries and forcing governments to make never-seen-before decisions with massive financial implications. It is said to have originated in Wuhan, China at the end of 2019 and has now reached most countries around the world. But now that the virus has been around for just over a year, scientists, engineers, and biologists have been able to develop vaccines that could mark the beginning of the end of this pandemic and teach us invaluable lessons about how we live and work with others.

The United Kingdom has recently ordered 40 million doses of the Pfizer vaccine, said to be enough to vaccinate 20 million people, and 100 million doses (according to GOV.UK) of the Oxford-AstraZeneca vaccine; both have been approved for use and have started to be administered by doctors in the NHS. This total is said to be enough to vaccinate everyone over 16 in the UK population. The vaccine consists of two doses, with the second dose administered 12 weeks after the first to allow as many people as possible to be vaccinated in a short period. This interval was increased from the original 3 weeks due to concerns about the new, faster-spreading variant of the virus in the UK. Priority for these vaccines will be given to people in care homes and those who care for them.

It is no secret that, alongside the joy of developing a vaccine in such a very (very) short space of time, some people, as with any vaccine, will be hesitant about taking it or completely against it. Some people’s reasons for not taking the potentially life-saving vaccine remain unknown, but for the majority, the speed at which the vaccine has been made is worrying. The years of rigorous research, testing and approvals have been sped up five-fold, which makes many concerned about the validity of the vaccine and whether it is safe to distribute. However, what some people fail to realise is that, although we do not know much about this new virus, the way a vaccine is tested and approved is very much the same as ever, and here is how.

Most vaccines developed in the past have undergone extensive and thorough testing to ensure that they are safe before they can be distributed and administered. Each vaccine goes through phases which ensure that it produces an immune response. This testing is initially carried out on animals rather than humans to evaluate the vaccine’s safety and catch potential side effects. If the vaccine triggers an immune response in animals, it is then tested in human clinical trials. This process contains 3 phases.

Phase 1: The vaccine is given to a small number of volunteers to assess its safety, confirm that it generates an immune response, and establish the correct quantity for each dose.

Phase 2: The vaccine is then given to several hundred volunteers to further assess its safety and ability to generate an immune response. There are usually multiple trials in this phase in people of various age groups.

Phase 3: The vaccine is next given to thousands of volunteers and compared against a similar group of people who did not get the vaccine. This is done to determine whether the vaccine is effective against the disease and protects those who receive it.

During phase two and phase three of the trials, the volunteers and the scientists conducting the study are shielded from knowing which volunteers received the vaccine being tested. This is known as ‘blinding’ and is done to ensure that neither the volunteers nor the scientists are biased when assessing the vaccine’s effectiveness at the end of the trial. Once the results of the clinical trials are available, officials in each country review the data meticulously to decide whether to authorise the vaccine. Once the vaccine is deemed to be safe, it is then distributed to the public. However, as the vaccine is being distributed, it is crucial that it continues to be monitored for effectiveness and safety, allowing scientists to keep track of its impact. So how does the vaccine work once it has been approved?
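
As a rough illustration of the phase 3 comparison (the numbers here are invented for illustration, not taken from any real trial), vaccine efficacy is commonly estimated by comparing the infection rate in the vaccinated group with the rate in the unvaccinated group:

```python
# Hypothetical phase 3 trial numbers, invented purely for illustration.
vaccinated_infected, vaccinated_total = 8, 20_000
placebo_infected, placebo_total = 160, 20_000

attack_rate_vaccinated = vaccinated_infected / vaccinated_total
attack_rate_placebo = placebo_infected / placebo_total

# Efficacy: the relative reduction in infection risk for the vaccinated.
efficacy = 1 - attack_rate_vaccinated / attack_rate_placebo
print(f"Estimated efficacy: {efficacy:.0%}")  # 95% with these made-up numbers
```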

The vaccine enters the body through an injection, usually in the upper arm. The Pfizer vaccine is an mRNA vaccine: it contains a snippet of genetic code that instructs the body’s cells to produce a harmless piece of the virus, which triggers the body’s immune system to build immunity to Covid. (The Oxford-AstraZeneca vaccine works slightly differently, using a harmless virus altered to carry similar genetic instructions.) This means that if the person is infected by the virus later, the immune system recognises it and is ready to attack, protecting the person from Covid.

The worrying news of a new variant has meant that the effectiveness of this vaccine is now being questioned by scientists as well as the public. This new variant, a mutation of SARS-CoV-2, is said to be spreading faster and infecting more people than the first, causing cases to rise exponentially over the last few months in most of South East England. However, multiple doctors and government advisors in the UK have said that the current vaccines produce a “broad immune response”, so the chance of the vaccines being ineffective is very small.

So far, 2021 has seen the rollout of the new vaccines developed by both Oxford-AstraZeneca and Pfizer, giving us hope that sometime soon we may be able to return to some sort of normality. However, this new variant has taught us that the plans we make and the precautions we take may not always lead to the results we expect. All we can do now is continue to follow the guidelines set by the government and have patience, because as history has shown us, these things do not last forever.

“While the most vulnerable are immunised, I urge everybody to continue following the restrictions so we can keep cases down and protect our loved ones.”

– Matt Hancock, England Health Secretary, January 2021


Electric Power Generation and Consumption

Replacing fossil fuels with renewable and sustainable forms of energy may be the greatest challenge of the 21st century and possibly of world history.

Many engineers and scientists are working on developing our current technology to harness natural sources of energy and mitigate climate change. We are all aware of the severe consequences of climate change: unpredictable weather patterns and rising sea levels, which in turn bring more frequent flooding and destruction to people’s homes and livelihoods. However, the research, production and development of equipment to utilise renewable sources will need to be drastically enhanced by engineers if we are to have any chance of mitigating climate change, and this essay will focus on the ways in which engineers are doing just that.

Renewable Sources

The most common definition of a ‘renewable source’ is an energy resource that is replaced rapidly by natural processes. Renewable sources of energy include solar, wind, tidal, hydroelectric, geothermal and biomass. These sources do not emit carbon dioxide or other polluting gases, such as sulphur dioxide, that contribute to climate change. Renewable sources are more sustainable, but they are often seen as less financially reliable because of the long period of time it takes to start making an economic gain from them after installation.

How are some sources of renewable energy generated?

Solar

Solar power is harnessed using solar photovoltaic (PV) panels, which convert the sun’s energy into electricity. The PV panels absorb the sun’s rays and convert them directly into a D.C. current, which is then passed through an inverter that converts it to the A.C. electricity used in homes and on the grid. Panels can be installed on various scales: in small hand-held appliances such as calculators, on slightly bigger scales on the roofs of houses, and on industrial scales using large areas of land as a ‘solar field’.

Although solar power is more common due to its ability to be used domestically on a smaller scale, the cost of research, development and manufacture is very high, which makes the final product quite expensive for people and industries to purchase. With most forms of renewable energy, including solar, there is also the issue of how the energy can be stored: currently, the electricity being generated has to be used as it is produced due to the lack of storage. On the other hand, with over 486 GW of installed capacity, solar power is the 3rd largest source of renewable energy. Its annual growth rate of around 25% over the last 5 years makes it the fastest growing source of renewable energy.
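
As a back-of-the-envelope sketch (the panel area, efficiency and sunshine figures below are assumptions for illustration, not data from any real installation), the electricity a rooftop array generates can be estimated from the incoming solar energy and the panel efficiency:

```python
# Assumed figures, for illustration only.
panel_area_m2 = 20         # total panel area on the roof
panel_efficiency = 0.18    # ~18% of sunlight converted to electricity
performance_ratio = 0.8    # losses in the inverter, wiring, dirt and heat
annual_irradiance = 1000   # kWh per m2 per year reaching the roof

annual_output_kwh = (panel_area_m2 * annual_irradiance
                     * panel_efficiency * performance_ratio)
print(f"Estimated output: {annual_output_kwh:.0f} kWh per year")  # ~2880 kWh
```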

Therefore, to mitigate climate change, engineers will have to bring down the cost of solar PV panels to make them accessible to everyone, which will then begin to reduce our carbon footprint.

Wind

Wind power is the power obtained by harnessing the energy of the wind. When the wind blows over the blades, the air pressure on one side of a blade is significantly greater than on the other. The difference in air pressure creates both a drag force and a lifting force; because the lifting force is greater than the drag, the rotor is made to spin. The rotor is connected, through a gearbox that speeds up the rotation, to a generator that converts the mechanical power of the turning blades into electricity. Windmills were traditionally used in agricultural work such as milling and pumping.

In 2018, wind power made up 24% of the world’s overall renewable energy generation and at the end of 2019, the US was home to 103GW of wind capacity with 77% of this being installed in the last 10 years. This shows the rapid growth in the production and use of wind power.

Due to the large push towards renewables, wind power is one of the best ways to produce electricity. It does not release greenhouse gases, has a small carbon footprint and can potentially benefit the economy. Wind power is also one of the few renewable energy forms that can be harnessed both offshore and onshore. However, there are some negatives in the production of wind power. Firstly, although the wind is sustainable and will never run out, the speed of the wind varies on a daily basis, so there can be some uncertainty in the output of wind turbines. For wind turbines to be efficient, there needs to be a sufficient quantity of wind energy, which explains why wind turbines tend to be placed in areas of high land, such as on hills, and out at sea, so that there are no objects obstructing the wind from reaching the turbines.
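
That sensitivity to wind speed comes straight from the physics: the power available in the wind grows with the cube of the wind speed, so a small drop in speed costs a great deal of power. A minimal sketch (the rotor size and efficiency are assumptions for illustration):

```python
import math

# Assumed figures, for illustration only.
AIR_DENSITY = 1.225   # kg/m3 at sea level
BLADE_LENGTH = 40.0   # metres; swept area = pi * r^2
EFFICIENCY = 0.40     # fraction of wind power captured (the Betz limit is ~0.59)

def turbine_power_mw(wind_speed_ms):
    """Power in megawatts: P = 1/2 * rho * A * v^3 * efficiency."""
    swept_area = math.pi * BLADE_LENGTH ** 2
    watts = 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * EFFICIENCY
    return watts / 1e6

for v in (6, 8, 12):  # wind speeds in m/s
    print(f"{v} m/s -> {turbine_power_mw(v):.2f} MW")
# Doubling the wind speed from 6 to 12 m/s gives eight times the power.
```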

The initial cost of installing a turbine is very high, although it is gradually becoming cheaper as engineers design turbines to be more energy efficient. Firstly, an engineer conducts a site survey, measuring wind speeds over an adequate period of time. If the area is suitable, the turbines need to be bought, transported and installed. The cost becomes much greater for offshore wind farms due to the more complex investigation needed to find out whether the bedrock is suitable for the wind turbines, which are then transported by ship before being installed out at sea.

Despite its high initial installation cost and some negative impacts on wildlife, wind power is more sustainable than fossil fuels and has a low maintenance cost. As time moves on, engineers will have to find ways to make the production of wind power more efficient, especially with changing wind speeds, as the demand for energy will inevitably continue to rise.

Non-Renewable Sources

Nuclear Power

When an atom splits into two as part of natural decay or by being split in a laboratory, it releases energy; this is the process of nuclear fission. The nuclear reaction in which the nucleus of an atom of low atomic mass fuses with another atomic nucleus, forming a heavier nucleus and releasing energy, is known as nuclear fusion. Both forms of nuclear power can produce approximately one million times more energy per atom than the chemical energy per unit of typical fossil fuels, which means the waste produced from nuclear power is much less than from the fossil fuels being used now. Scientists have seen nuclear fission and fusion as a possible new and more sustainable way of producing energy. It has the potential to become the next major source of energy, but it does come with many safety, environmental and political concerns, including the potential for unsafe disposal of radioactive substances, which can cause the mutation of cells in humans.

Uranium-235 decays naturally by alpha radiation, releasing an alpha particle, but fission is usually induced: a free neutron is fired at the U-235 nucleus, which takes in the neutron, becomes unstable and splits almost instantly.

The fission of 1 g of U-235 can release around one megawatt-day of energy, which is the same amount of energy as burning roughly 3 tonnes of coal. However, conventional reactors use only a small fraction of the uranium in their fuel. To combat this, ‘fast breeder reactors’ can be used, because they make the uranium approximately 60 times more efficient by using nearly all of it, which would be the equivalent of providing each person with 33 kWh per day.
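
To see where that coal comparison comes from, here is a minimal sketch of the arithmetic, assuming the standard textbook figure of roughly 200 MeV released per fission:

```python
AVOGADRO = 6.022e23                        # atoms per mole
MOLAR_MASS_U235 = 235.0                    # grams per mole
ENERGY_PER_FISSION_J = 200e6 * 1.602e-19   # ~200 MeV expressed in joules
COAL_ENERGY_J_PER_KG = 24e6                # typical energy content of coal

atoms_in_one_gram = AVOGADRO / MOLAR_MASS_U235
energy_joules = atoms_in_one_gram * ENERGY_PER_FISSION_J

print(f"1 g of U-235: {energy_joules:.2e} J")  # ~8.2e10 J, about 1 MW-day
print(f"Coal equivalent: {energy_joules / COAL_ENERGY_J_PER_KG / 1000:.1f} tonnes")
```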

Research into the extraction of uranium for fuel is being undertaken all around the world, with researchers extracting uranium from both the ground and seawater. Japanese researchers have developed a technique for extracting uranium from seawater at a cost of $100-300 per kilogram of uranium, much higher than the current cost of $20 per kilogram from uranium ore. The high cost is not the only downside of the seawater extraction method: some researchers have questioned whether the Japanese technique can be scaled up, due to the large volume of seawater needed to make it a viable process, and the immense cost of materials and stations would need to be minimised by the engineers who design them.

Although the future of nuclear power is uncertain, the low volume of waste and the significant decrease in carbon dioxide emissions could see scientists using uranium as an increasing source of energy in the near future. This, however, will mean that engineers and scientists need to find ways to mine and safely extract the uranium from its ore without causing negative impacts to society and the environment through contamination. Another current issue for the development of nuclear power is the lifetime of each plant and its infrastructure, from the reactor core itself to the mechanical equipment. Presently, engineers are researching the safe expansion of existing nuclear power plants in order to produce more energy to meet the world’s expected increase in demand, which means we may start seeing a greater proportion of energy being generated from nuclear power stations in the future.

So what can engineers do to mitigate climate change in relation to energy use and consumption, both domestically and industrially? The diversity of methods used to harness energy from renewable sources is projected to increase in the next few years. As discussed, technology is the main driver of how fast we move to renewable energies. How engineers design and construct machines that are energy efficient and have a small carbon footprint will determine how quickly we can reduce our use of fossil fuels without compromising the safety and quality of the final products. So, to mitigate climate change, engineers must focus on improving efficiency, reducing cost and improving the technologies that harness the renewable sources we have, which will in turn reduce carbon emissions.

Demi Bako


The holes in the sky

We now know for sure that we are in the midst of a climate crisis. So many different terms get thrown around, but do we know what any of them mean?

Let us take the ozone layer. We know that the ozone layer is somewhere up there and that a hole has slowly been emerging in it, but does it have any real importance, and how does it affect us? The simple answer is yes, it does affect us. However, this one-word answer does not do a great deal to help us understand the significance of the issue.

The ozone layer, which sits around 20 km from the Earth’s surface, is an area with a high concentration of ozone molecules (which has the chemical formula of O3) in the stratosphere – the second major layer of Earth’s atmosphere between 12 to 50 km above the Earth’s surface. Although ozone is toxic to humans, in the stratosphere, it absorbs harmful ultraviolet radiation (UV-C and most UV-B radiation) from the Sun which protects us from genetic damage and possible cell mutations which can cause skin cancer. It also has the task of stabilising the Earth’s temperature.

In the early 1970s, scientists using satellites and ground stations to measure ozone levels found evidence that human activities were disrupting the ozone layer’s natural cycle of being decomposed by ultraviolet radiation and then re-formed by natural processes. The production of chemicals such as chlorofluorocarbons (CFCs), organic compounds containing chlorine, carbon and fluorine atoms, has added a factor that destroys ozone. CFCs are made by humans, not derived from nature. In the past, CFCs were used widely because they are inert (they do not react easily with other chemicals) and stable molecules, which made them very useful in fire-safety equipment and as refrigerants.

CFC molecules can, however, be broken down by ultraviolet radiation. In the troposphere, the lowest level of the Earth’s atmosphere, CFCs are protected from UV radiation by the ozone layer above them and so travel as whole molecules. This is where the problem starts to occur. Once CFCs rise into the stratosphere, they are no longer protected from UV radiation by the ozone layer; the energy of the Sun’s UV radiation breaks them down, releasing chlorine atoms. These chlorine atoms then react with ozone molecules (O3), taking one oxygen atom to form chlorine monoxide (ClO) and an oxygen molecule (O2).

However, when a chlorine monoxide molecule meets a free oxygen atom, the oxygen atom breaks up the chlorine monoxide, bonding with its oxygen to form O2 and releasing the chlorine atom back into the stratosphere to react with, and destroy, more ozone.

This reaction happens repeatedly, allowing a single atom of chlorine to eliminate many thousands of ozone molecules and so form ‘holes’. (These are not literal holes but areas where there has been significant depletion in the levels of ozone.)
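
Written out as equations, the catalytic cycle described above looks like this (note that the chlorine atom comes out the other side unchanged, ready to go again):

Cl + O3 → ClO + O2
ClO + O → Cl + O2
overall: O3 + O → 2O2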

Fortunately, chlorine atoms do not remain in the stratosphere forever. When a chlorine atom reacts with a gas such as methane (CH4), it forms hydrogen chloride (HCl), which can be carried down from the stratosphere into the troposphere, where it can be removed from the air by dissolving in rain (contributing to acid rain).

According to World Atlas, scientists now have images showing the ozone is recovering with a 20% decrease in ozone depletion between 2005 and 2016. In 2019, the smallest ozone hole since 1982 was recorded over Antarctica. However, in 2020 the largest hole ever observed was found over the Arctic and took longer to close due to atmospheric conditions.

So, the ozone is destroyed as part of its role of protecting us from harmful radiation from the Sun but damaging emissions are speeding up this natural process meaning that the ozone is being destroyed faster than it can repair itself. If humans stop putting CFCs and other ozone-destroying chemicals into the stratosphere, the ozone layer could eventually repair itself and therefore potentially bring the Earth’s temperature back to normal and eliminate the climate crisis.

Demi Bako

What sparked the rapid increase of carbon dioxide?

Today, scientists attribute the current climate crisis to the rapid increase in the use of fossil fuels that started around the time of the Industrial Revolution in Europe and the United States of America, between the 18th and 19th centuries. This climate trend is particularly significant as it is the first major change in the climate which is extremely likely to be the result of human activity, unlike the Ice Age and the Medieval Warming Period, and it is proceeding at an unprecedented rate compared with previous centuries and millennia. Prior to the Industrial Revolution, carbon dioxide levels varied between 180 and 280 parts per million across the glacial and interglacial cycles. Today, however, the level has climbed to over 407 parts per million, more than double the glacial minimum.

Certain gases, such as nitrous oxide and carbon dioxide, block heat from escaping the atmosphere, causing the ‘greenhouse effect’. This can be due to human activity, and it also occurs after the release of large volumes of gases during volcanic eruptions. Some gases remain in the atmosphere because they are unresponsive to physical and chemical changes. These are described as ‘forcing’ climate change: they remain in the atmosphere and can accumulate to form a ‘blanket’ of gases that causes the temperature of the planet to gradually increase. Other gases, such as water vapour, which do react physically and chemically with other molecules under the right conditions, are ‘feedbacks’. Water vapour is the most abundant of the greenhouse gases, and it is one of the most important ‘feedback’ mechanisms because of its ability to condense into water droplets and fall back to earth as precipitation. Unlike ‘feedbacks’, the molecules that force climate change come in a greater variety. These include carbon dioxide and methane, gases produced from natural sources and human activity; nitrous oxide, a powerful greenhouse gas produced by soil cultivation; and chlorofluorocarbons (CFCs), synthetic compounds originating from industrial processes.

To tackle climate change, even as we aim to bring net global emissions down to near zero in the near future, more needs to be invested in methods to remove some of the greenhouse gases, especially the synthetic ones, that are already in the atmosphere. Engineers and scientists have done much research into carbon capture and storage methods that could potentially lead to large reductions in the volume of greenhouse gases in the atmosphere. Carbon Capture and Storage (CCS) is one of the newest ways of reducing carbon emissions and could be a key to tackling global warming.

The Intergovernmental Panel on Climate Change (IPCC) emphasised that, if we are to meet the objectives of the Paris Agreement and curb future temperature increases to 1.5 degrees, we must do more than increase efforts to reduce emissions; we also need to develop technologies to remove existing greenhouse gases from the atmosphere. CCS is one of those technologies and can play an important role in resolving our climate crisis. CCS is a three-step process involving: capturing the carbon dioxide produced by power generation or industrial activity; transporting it by ship or pipeline; and then storing it kilometres underground. The aim of CCS is to permanently store carbon dioxide by injecting it into the rocks below the seabed. However, more must also be done to support the renewable energy companies that will, in the future, be the main producers of energy. The ‘Hothouse Earth’ study, “Trajectories of the Earth System in the Anthropocene”, in which a team of interdisciplinary Earth systems scientists warned that the problem of climate change may be even worse than we thought, cautions that with heating of 3 or 4°C, Earth’s “self-reinforcing feedbacks” (wildfires, methane release, and so forth) could drive the temperature even higher, toward runaway heating, a “nonlinear process” that no amount of human intervention could control.

So, what were the Ice Age and the Medieval Warming Period?

And so we move onto part 2! In this section of my EPQ I went on to talk about what the Ice Age and Medieval warming period actually were and why the changing climate of the time didn’t raise as many concerns as our current changing climate.

The Medieval Warming Period, between 900 AD and 1300 AD, was warmer than the Little Ice Age that followed between 1303 and 1860. Temperatures rose between 1.0 and 1.4 degrees above normal, and various environmental and biological changes occurred: the warming caused prolonged droughts in the southwest of the United States, and Alaska began to get warmer. Historical accounts confirm, however, that the Medieval Warming Period was much cooler than current conditions. Paleoenvironmental records are used to reconstruct the climate during this warming period, and an ice core sample retrieved in the Antarctic Peninsula shows that temperatures during this period were somewhat higher.

In the last 800,000 years, there have been 8 ice ages, each lasting approximately 100,000 years and separated by interglacial periods of between 10,000 and 35,000 years. A typical ice age lasting 100,000 years can be characterised by phases of advancing and retreating ice: the ice grows for around 80,000 years, but it takes only 20,000 years for that ice to melt. Unlike the Medieval Warming Period, an ice age is a period of colder global temperatures featuring glacial expansion across the surface of the Earth. An ice age, also known as a glacial age, is any geological period during which thick ice sheets cover vast areas of land and drastically reshape the surface features of entire continents. It consists of pulses of cold glacial phases interspersed with warmer interglacial phases, with a distinct regularity driven by the Milankovitch cycles. Reduced levels of carbon dioxide (CO2) in the atmosphere create a suitable environment for glaciation.

During these times, there were no human causes of such changes in the climate, as humans had no means of significantly impacting it. Natural processes, however, such as changes in the Milankovitch cycles, have affected the planet’s climate.

The Milankovitch cycles describe how reasonably small changes in Earth’s movement can have significant impacts on the climate. These cycles are named after Milutin Milankovitch, a Serbian astrophysicist who investigated the cause of Earth’s ancient ice ages in the early 1900s. Over a century ago, Milankovitch hypothesised that long-term changes in Earth’s position relative to the Sun are a key driver of Earth’s long-term climate and are responsible for initiating the beginning and end of ice ages. He investigated how variations in the Earth’s orbital movements affect how much solar radiation reaches the outermost layer of our atmosphere and, most importantly, where that radiation lands. These cyclical orbital movements became known as the Milankovitch cycles and cause variations of up to 25 percent in the amount of incoming radiation around the mid-latitudes, the areas between 30 and 60 degrees north and south of the equator. Three factors vary the incoming radiation:

  • The shape of the Earth’s orbit – the eccentricity
  • The angle at which the Earth’s axis is tilted relative to the Earth’s orbital plane – the obliquity
  • The direction of the Earth’s axis of rotation – the precession

These three factors change the amount of solar radiation that reaches the Earth and operate together to influence the Earth’s climate over long periods, leading to larger changes in our climate over tens of thousands to hundreds of thousands of years. Milankovitch combined the cycles to create an all-inclusive mathematical model for calculating differences in solar radiation at several Earth latitudes, along with corresponding surface temperatures.

In 1965, the British climatologist Hubert Horace Lamb examined historical records of harvests and precipitation, along with early ice-core and tree-ring data, and concluded that the Medieval Warming Period was a time of unusually high temperatures. It was later attributed to higher-than-average levels of solar radiation and lower volcanic activity, together with a change in the ocean circulation pattern that played a very important role in bringing warmer seawater to the North Atlantic.

The exact causes of ice ages, and of the glacial cycles within them, have not been proven; they are most likely the result of complex interactions between factors such as solar output, the distance of the Earth from the sun and ocean circulation. Milankovitch calculated that ice ages occur approximately every 41,000 years, which was confirmed by much of the research that followed, including a study published in the Proceedings of the National Academy of Sciences. However, about 800,000 years ago, the length of the ice age cycle increased to 100,000 years, matching Earth’s eccentricity cycle. Although several theories have been offered to explain this shift, scientists currently do not have a clear answer. Other studies have also confirmed Milankovitch’s work, including research using data from ice cores in Greenland and the Arctic that has provided strong evidence of Milankovitch cycles stretching back many hundreds of thousands of years, as discussed in ‘Evidence-Based Climate Science’ by D.J. Easterbrook.
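
As a toy illustration only (this is not Milankovitch’s actual model, and the periods are rounded), summing three sine waves with the approximate periods of the eccentricity, obliquity and precession cycles shows how they drift in and out of phase to produce an irregular-looking combined signal:

```python
import math

# Approximate cycle lengths in years (rounded textbook values).
ECCENTRICITY_PERIOD = 100_000
OBLIQUITY_PERIOD = 41_000
PRECESSION_PERIOD = 23_000

def combined_forcing(year):
    """Toy model: an equal-weight sum of the three orbital cycles."""
    return sum(math.sin(2 * math.pi * year / period)
               for period in (ECCENTRICITY_PERIOD,
                              OBLIQUITY_PERIOD,
                              PRECESSION_PERIOD))

# Sample every 10,000 years across 200,000 years of (toy) history.
for year in range(0, 200_001, 10_000):
    print(f"{year:>7} yr: {combined_forcing(year):+.2f}")
```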

Is there a future for the oil and gas industry?

Oil and gas. The real cause of globalisation, the current fuel for most means of transportation and the reason that your food does not get stuck to the pot when you are cooking. Oil and gas have been driving economic growth at the expense of the planet’s climate for the last century; however, with the world turning its head towards renewable energy, the oil and gas industry is left in an existential crisis. In the short term, demand for energy produced from oil and gas is down due to the pandemic. In the long term, well, there may be a more complicated end to this form of energy. Whichever way it ends, its overall use will certainly decline, and so will its value.

So where did it all begin, and how did we become so reliant on these now less efficient products? Firstly, most people fail to understand how all 3 forms of non-renewable energy, coal, oil (or petroleum) and natural gas, are formed, which explains why they are non-renewable and finite.

Millions of years ago, algae, plants and other microscopic organisms lived in warm, shallow oceans. After they died, they sank to the seafloor and the organic material mixed with other sediments and was buried. Over millions of years under high pressure and high temperature, the remains of these organisms transformed into what we know today as fossil fuels.

Today, petroleum is found in vast underground reservoirs where ancient seas were once located. These reservoirs lie beneath land or the ocean floor, and the crude oil is extracted with giant drilling machines. Oil and gas engineers are involved in the process of extracting oil and natural gas from reservoirs. They may be drilling engineers, who design and supervise the drilling process unique to each petroleum deposit, or production engineers, who develop new mining and drilling equipment and design new extraction processes to optimise each oil field and gas deposit. Petroleum is used to make many things that we come across in our everyday lives: it is processed into thousands of different items, including plastics and tires, and can be burnt for energy, releasing toxic gases and high amounts of carbon dioxide into the atmosphere.

Some forecasts expect demand for oil and gas to increase over approximately the next 5 years, but then to start declining as renewable energies become cheaper and demand for fossil fuels falls. However, it will not be simple to phase out fossil fuels without a decent plan of action, which is why many companies are now looking at different options to keep the industry from collapse while reducing its effect on the environment. One of these options is to implement initiatives that offset emissions by tapping into natural carbon sinks such as plants, forests and oceans. This reduces the concentration of greenhouse gases in the atmosphere while fossil fuels are still being used.

The oil and gas industry was once an area in which many different people could get jobs with flexibility, permanency and good pay, but now that we know how fossil fuels negatively impact our environment, it is a sector likely to undergo a slow decline in both dependence and popularity. Over the short term, the industry will be looking for ways to create more stability, but in the long term, it may be looking at a slow decline in the number of people employed in the sector and in demand for its product, and potentially a change in the way we live our lives as we know them.

The body’s secret weapon – the blood buffer

In our day-to-day activities, we come across different forms of acids and bases. We generally know to avoid the strong ones, but we are not always aware of the acidic things that we ingest, from acidic foods to alkaline dressings. Of course, it is important to limit how much of these foods we eat daily, but we were never told exactly why (except that they can cause tooth decay). This article looks at why we should avoid taking in excessive amounts of acidic and basic foods, and at how the body reacts to the change in its conditions when we do take them in.


So to begin, you must be aware of the terms ‘acidic’ and ‘basic’ and understand their basic meanings (no pun intended). An acid is a proton donor: in simple terms, a solution that can donate a hydrogen ion, which is a proton. A base is a proton acceptor, meaning that it can accept the hydrogen ions donated by an acid. Acids have a pH ranging from about 0 to 6, bases range from 8 to 14, and pure water has a pH of 7. The higher the pH value, the more alkaline, or less acidic, the substance is. A more acidic substance has a high concentration of hydrogen ions, whereas a more alkaline or basic substance has a high concentration of hydroxide ions. So if we look at something like lemon juice, which has a pH of around 2, we know that it has quite a high concentration of hydrogen ions. So what happens when you eat the lemon? Does it cause a change in our bodies and, if so, what is that change? This all comes down to something called buffer action, a mechanism that keeps the body in check.


A buffer is a solution (or a substance) that can maintain pH, bringing it back to its optimal value after a small change. It does this through the addition or removal of hydrogen ions, and it is made from a weak acid and a salt of that weak acid. Buffers working in the body’s fluids adjust the pH of the blood if it rises above or falls below 7.4, the human body’s optimal level. If the pH of the blood falls below 7.4 and becomes more acidic, the buffers act to use up hydrogen ions and decrease the acidity of the blood. We have a natural buffer pair in our bodies, carbonic acid and bicarbonate, which plays a vital role in regulating the blood’s pH. When the blood becomes too acidic, the body draws on bicarbonate to neutralise the excess acidity. When the blood becomes too alkaline, the kidneys introduce carbonic acid into the blood to bring down the excess alkalinity.
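
Chemists often summarise this buffer with the Henderson-Hasselbalch equation, pH = pKa + log10([bicarbonate] / [carbonic acid]). A minimal sketch, assuming the textbook pKa of about 6.1 for this buffer pair and typical blood concentrations:

```python
import math

def blood_ph(bicarbonate_mmol, carbonic_acid_mmol, pka=6.1):
    """Henderson-Hasselbalch: pH = pKa + log10([HCO3-] / [H2CO3])."""
    return pka + math.log10(bicarbonate_mmol / carbonic_acid_mmol)

# Typical values: ~24 mmol/L bicarbonate to ~1.2 mmol/L carbonic acid,
# a 20:1 ratio that lands right on the body's target pH.
print(f"Normal blood pH: {blood_ph(24, 1.2):.2f}")    # ~7.40

# An acid load uses up bicarbonate and produces carbonic acid.
print(f"After an acid load: {blood_ph(22, 1.5):.2f}")  # the pH dips
```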


During exercise, the muscles produce acid and carbon dioxide, which the circulatory system cleans up by taking them into the blood. Left unchecked, this could result in a dangerous condition called acidosis, caused by increased acidity in the blood, but the bicarbonate buffer system maintains the blood pH at 7.4. When the blood becomes more acidic due to exercise, the additional protons from those acids are absorbed by the bicarbonate in the blood to form carbonic acid. The increase in carbonic acid in the blood stimulates the lungs to expel more carbon dioxide, which eventually brings the acidity of the blood back down to the normal range.

So small changes in the pH of our blood are okay, as the body’s buffer system is able to shift the values back within the correct range. Larger changes, however, can result in quite severe health issues. So let’s appreciate that we have this superpower that keeps us healthy and well. Where would we be without it?!

Our Expanding Universe: The Mystery Explained

For those of you who are sci-fi movie fans, have ever watched or heard of the Star Trek series (I, for one, am currently on season 3 of Star Trek Discovery on Netflix…. would strongly recommend) or are generally interested in astronomy, you may have come across the terms ‘dark matter’ and ‘dark energy’. At first, this sounds like a way of describing evil or enigmatic atmospheres, however, these concepts have been floating around for the last few decades and have left many scientists wondering what the makeup of our universe actually is.


The universe is full of matter, and the attractive force of gravity pulls all matter together. Billions of years ago, the universe was expanding more slowly than it is today. The expansion itself was demonstrated by Edwin Hubble, an American astronomer who played a crucial role in the field of observational cosmology and whose law states that the further away a galaxy is from the Earth, the faster its recessional velocity.
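
Hubble’s law is usually written v = H0 × d. A minimal sketch, assuming a Hubble constant of roughly 70 km/s per megaparsec (a commonly quoted present-day estimate):

```python
HUBBLE_CONSTANT = 70.0  # km/s per megaparsec; a commonly quoted estimate

def recessional_velocity_kms(distance_mpc):
    """Hubble's law: v = H0 * d."""
    return HUBBLE_CONSTANT * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"A galaxy {d} Mpc away recedes at ~{recessional_velocity_kms(d):,.0f} km/s")
```
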
Dark energy is a type of energy that permeates the whole universe and opposes the attractive force of gravitation between galaxies by exerting negative pressure. It is not detected directly, but we know it exists because we now know the universe is accelerating as it expands. In the 1990s, astronomers observing the expansion of the universe set out to determine the rate at which the expansion was slowing down, and instead found evidence that it is speeding up.

Dark matter is matter that cannot be seen and does not emit or absorb electromagnetic radiation. It is detected indirectly through its gravitational effects, either on the rotation of galaxies or through the gravitational lensing of starlight. Like dark energy, scientists believe that dark matter exists, but they do not know what it is or where it has come from. The amounts of hydrogen and helium predicted to have been produced after the Big Bang agree with the present levels of hydrogen and helium in the universe, which suggests that the missing mass of the universe is not likely to be made up of normal matter, that is, matter made from protons and neutrons, described as baryonic matter. It has been suggested that dark matter may be made up of a new particle created in the Big Bang that has not yet been detected.


According to NASA, roughly 68% of the universe is dark energy and 27% is dark matter, which means that the remaining 5% is everything we can actually observe, referred to as ‘ordinary’ or ‘normal’ matter. At present, astronomers do not know what dark matter and dark energy are, and this remains one of the key outstanding areas of physics awaiting further evidence, understanding and clarification.


There are currently three different explanations for dark energy. The first explanation is that dark energy is a property of space, meaning that it would be unchanged as the universe expands; as more space comes into existence, more of this dark energy would come about, causing the universe to expand faster. The second explanation is that space has energy arising from the quantum theory of matter: the theory suggests that space in the universe is full of temporary particles that form and disappear in short spaces of time. The third explanation is that dark energy is a new form of energy whose effect on the expansion of the universe is the complete opposite of that of the matter and normal energy that we know of. For all three explanations, scientists are struggling to come up with evidence to back up their theories.


According to CERN, the European Organization for Nuclear Research, dark matter seems to outweigh visible matter roughly six to one, making up about 27% of the universe. Most scientists think that dark matter is composed of non-baryonic matter. But the explanations we have, for both dark matter and dark energy, still leave scientists clueless as to why the strange force exists in the first place and makes us realise that there is still so much that we do not know about our vast universe.

The Power of our Senses

Taste buds are just one reason why we love some foods and hate others.
Hot Chocolate. Doughnuts. Haribo’s. McDonald’s. The world is full of polarizing flavours and foods, beloved by many, despised by just as many. Why is that? Scientists have untangled some — but not nearly all — of the mysteries behind our love and hatred of certain foods. While we might say, “That tastes like strawberries”, food scientists would disagree. Our tongues perceive only five basic tastes: sweet, sour, bitter, salty and “umami,” the Japanese word for savoury. To go from merely sweet to “Mm, strawberry!” the nose needs to get involved. The taste and olfactory senses, along with any chemical irritation a food creates in the throat (think mint, hot pepper or olive oil), all send the brain the information it needs to distinguish flavours.

“We as primates are born liking sweet and disliking bitter,” said Marcia Pelchat, who studies food preferences at the Monell Chemical Senses Center in Philadelphia. The theory is that we are hard-wired to like and dislike certain basic tastes so that the mouth can act as the body’s gatekeeper.
Sweet means energy; sour means not ripe yet. Savoury means food may contain protein. Bitter means caution as many poisons are bitter. Salty means sodium, a necessary ingredient for several functions in our bodies. (By the way, those tongue maps that show taste buds clumped into zones that detect sweet, bitter, etc.? Very misleading. Taste receptors of all types blanket our tongues — except for the centre line — and some reside elsewhere in our mouths and throats.)


Researchers have found only one major human gene that detects sweet tastes, but we all have it. By contrast, 25 or more bitter receptor genes might exist, but combinations vary by person. Some genetic connections are so strong that scientists can accurately predict how people will react to certain bitter tastes by looking at their DNA. Research has also shown that we are predisposed to like flavours of foods our mothers ate while pregnant. These flavours are passed through amniotic fluids and later through breast milk, possibly signalling to the baby that if Mom ate it, it must be readily available and safe.
You simply cannot teach a rat or dog to like spicy food; scientists have tried. But in humans, it is easy. Culture often overrides our genes and takes over the mouth’s role as the body’s gatekeeper. Few people immediately like bitter beverages or extreme spices, but many learn to love them through repeated exposure. We often learn to like what people around us like. Some people can’t stand slimy, gritty or creamy foods, regardless of the flavour. Science cannot fully explain where texture issues come from, but a study released last fall by the Monell Chemical Senses Center offers a clue: People with more of a certain enzyme in their bodies tolerated the feel of thick, starchy foods better. Also, texture can affect flavour by altering the release of aroma molecules in the mouth. Manufacturers pay special attention to this when trying to make a low-fat substitute taste and feel like a high-fat food.


Have you got leftover jellybeans? Take this test to see if you can tell the difference between taste and flavour. (You might need to get Jelly Belly’s Bean Boozled pack of jellybeans to get the less conventional flavours).


• Get one jellybean in each of these very different flavours: banana, black liquorice and cappuccino.
• Now hold your nose. Without looking, pop one of the beans into your mouth and chew it, keeping your nostrils closed.
• Try to identify the flavour.
Very few people will be correct more than a third of the time just by guessing. Why? Because all three taste sweet and a little bitter, so our tongues cannot tell the difference.
• Let go of your nose. Suddenly you can easily distinguish the flavour. (This explains why food does not taste right when you have a stuffy nose.)

How interesting… We are all born with a love of sweet foods and a dislike of bitter flavours, but beyond that, the foods we grow up to love can vary wildly.

How can we move to a sustainable future if we keep using energy-intensive heating and cooling?

According to UK Power, a ‘typical’ home uses between 8,000 kWh and 17,000 kWh of energy a year for its heating, and according to Worldwatch Institute data, buildings are responsible for 40% of the world’s annual energy consumption.

The design of many buildings for living and working makes them inefficient in terms of their ability to produce, use, conserve and recycle the energy that is generated. Businesses are paying around £60 million in superfluous energy bills annually because of energy wasted by office buildings in cities across the UK, which is enough energy to power 65,000 homes. Clearly, change is needed if we want to reduce our reliance on energy-intensive heating and cooling and move towards an energy-sustainable future.

The basis of any energy-efficient and sustainable design is smart design, which includes the installation of insulation in walls and throughout homes to reduce heat loss. Highly insulated windows and doors can also be an efficient form of heat conservation, with up to 30% of a building’s heat escaping through the crevices around windows and doors. Another important factor is the heat lost through uninsulated roofs or attics, as heat rises due to convection currents; this accounts for around 25% of all the heat loss in homes. To reduce the need for energy-intensive heating and cooling systems, there must be a significant decrease in the percentages of heat loss mentioned above. This will not only reduce the need for burning fossil fuels but will also have wider positive impacts on the environment.

Houses and buildings could be designed to harness the solar energy that enters the home through the windows in summer months, using it to heat water or storing it to supply energy in the winter. In more recent times, homes are being designed with larger windows which allow more of the sun’s rays to be absorbed. So, if you are a fan of ‘getting that lighting’, this would be a good move. This will not only increase the amount of light coming in, reducing the need for energy-consuming light bulbs, but will also increase the amount of natural heating, therefore diminishing the need for energy-intensive heating.

There is also a need to move from double to triple glazed windows to radically reduce the heat lost through insufficiently insulated windows. Poor insulation allows heat to leak through the gaps around the windows, to be lost by radiation through the window glazing, and to be conducted away through the window frames; in total, windows can typically account for 20% of heat loss in a home. Triple glazed windows can be up to 50% more insulating than double glazed windows, meaning the switch to triple glazing could save between £20 and £40 a year on heating bills depending on the size of a house. By increasing the sizes of windows to allow light to enter a house and investing in triple-glazed windows, we could see a substantial decrease in the reliance on energy-intensive heating systems in the winter months and a decrease in the energy needed for lighting in the summer.
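
To put rough numbers on this, here is a quick back-of-the-envelope sketch in Python using the standard steady-state heat loss formula Q = U × A × ΔT. The U-values, window area, heating hours and energy price are illustrative assumptions of mine, not figures from the sources above:

```python
# A rough, illustrative estimate of heat lost through glazing using the
# steady-state formula Q = U * A * deltaT. All numbers below are my own
# assumptions for a small UK house, not figures from the sources cited.

U_VALUES = {
    "double glazing": 2.8,  # W/m²K (assumed typical value)
    "triple glazing": 1.0,  # W/m²K (assumed typical value)
}

area = 15.0            # m² of total window area (assumed)
delta_t = 15.0         # °C indoor-outdoor temperature difference (assumed)
heating_hours = 1440   # 8 h/day over a 6-month heating season (assumed)
price_per_kwh = 0.07   # £/kWh for gas heating (assumed)

for label, u in U_VALUES.items():
    watts = u * area * delta_t          # instantaneous heat loss through the glass
    kwh = watts * heating_hours / 1000  # energy lost over the season
    print(f"{label}: {watts:.0f} W, ~{kwh:.0f} kWh/season, ~£{kwh * price_per_kwh:.0f}/year")
```

With these assumed figures, the gap between double and triple glazing comes out at around £40 a year, in the same ballpark as the savings quoted above.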

Around 10% of heat generated by heating systems is lost through uninsulated flooring. Insulating under the floorboards on the ground floor could save up to £60 a year in the UK, and sealing the gaps between floors and skirting boards reduces draughts. According to the Energy Saving Trust, a semi-detached house can save 160kg of CO₂ per year, equivalent to £40 of savings per year, by insulating the house floors. Designing flooring that prevents heat from escaping through the floor, or that captures and stores the escaping heat to warm the home or generate electricity, would significantly reduce the need to burn excess fossil fuels and would drastically reduce energy bills.

Condensation of water vapour is a major threat to buildings due to its ability to weaken the structure. In cold climates, the difference in air pressure can force warm, moist air into the exterior walls, where most of the building’s insulation is placed. The air condenses as it cools, and the liquid weakens the barrier between the interior and the external climate. In warmer climates, humid air enters the walls and meets the cooler wall cavities, again causing condensation that weakens the insulation. To resolve this issue, buildings will have to be designed with a vapour retarder, a material or structural element that hinders the movement of water vapour. An air retarder, a material that reduces air flow through a building’s envelope, can also be used. Both would be installed so that the quality of insulation remains high, in turn reducing heat loss in both the short and long term and reducing the need for energy-intensive heating.

In much warmer countries, where air conditioning is considered an essential household item, the aim is to reduce the amount of heat allowed into the building while also taking into account that temperatures can plummet in the evening, when it gets dark. Creating an efficient form of air circulation in a building will reduce the need for air conditioning, which consumes a lot of electricity. To strike a balance between these two conditions, buildings will have to be designed so that cooling is efficient enough to lower the temperature, but not so cold that heating systems then have to be used. Currently, homes and offices in many African and South American countries are cooled by external horizontal overhangs that create shade, or by electric fans.

The need to burn excess fossil fuels to power energy-intensive heating and cooling comes down to a building’s inability to preserve the heat that is produced. To reduce the need for intensive heating, we need to find the most efficient way to trap the heat produced in areas of colder climate, and to give buildings in hotter climates the capability to cool with natural ventilation, reducing the reliance on air conditioning. Buildings designed in the future should have high-quality insulation in walls, roofs and floors and the capability to minimise air leakage. The higher upfront cost of insulation will be cheaper in the long term than the need to buy more energy, further decreasing the amount of energy, whether from fossil fuels or renewables, that goes to waste. Building smart homes is not only about the installation of innovative technology but about the building’s ability to retain and recycle the energy produced.

Is this the most underrated form of energy generation?


The purpose of switching from fossil fuels to low-emission alternative fuels is to mitigate climate change by reducing a country’s total carbon dioxide emissions, which, in turn, will contribute to reducing total global emissions. There is a lot of talk about switching to cleaner forms of energy, and solar and wind energy are naturally the ones we gravitate towards, as we are made aware of these forms more than others. However, the development of hydrogen fuel cells may just revolutionise the sustainable energy sector.

A fuel cell is a device that generates electricity through an electrochemical reaction, not combustion. Due to their chemistry, fuel cells are very clean. Fuel cells that use pure hydrogen fuel are completely carbon-free, with their only by-products being electricity, heat, and water. Fuel cell systems are a clean, efficient, reliable, and quiet source of power. Fuel cells do not need to be periodically recharged like batteries; instead, they continue to produce electricity as long as a fuel source is provided. In the fuel cell, hydrogen and oxygen are combined to generate electricity, heat, and water. A fuel cell is composed of an anode (the negative electrode), a cathode (the positive electrode) and an electrolyte membrane which keeps the hydrogen and oxygen separate. A typical fuel cell works by passing hydrogen through the anode and oxygen through the cathode. At the anode, a catalyst splits the hydrogen molecules into electrons and protons. The protons pass through the porous electrolyte membrane, while the electrons are forced through a circuit, generating an electric current and excess heat. At the cathode, the protons, electrons, and oxygen combine to produce water molecules. As there are no moving parts, fuel cells operate silently and with extremely high reliability.
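
To see where the electricity comes from, the ideal voltage of a hydrogen fuel cell can be estimated from the energy released by the overall reaction. This is a minimal sketch using standard textbook values that I have assumed rather than taken from this essay’s sources:

```python
# Ideal (thermodynamic) voltage of a hydrogen fuel cell.
# Overall reaction: H2 + 1/2 O2 -> H2O (liquid)
#   anode:   H2 -> 2H+ + 2e-
#   cathode: 1/2 O2 + 2H+ + 2e- -> H2O
# Values below are standard textbook figures (assumed, not from the essay).

FARADAY = 96485     # C per mole of electrons
delta_g = -237.1e3  # J/mol, Gibbs free energy change of the reaction
n = 2               # moles of electrons transferred per mole of H2

voltage = -delta_g / (n * FARADAY)
print(f"Ideal cell voltage: {voltage:.2f} V")  # ~1.23 V; real cells deliver ~0.6-0.8 V under load
```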


Fuel cells are used today in a range of applications, from providing power to homes and businesses, to keeping critical facilities like hospitals, shops, and data centres up and running, to moving a variety of vehicles including cars and buses. Interestingly, NASA has used liquid hydrogen since the 1970s to propel space shuttles and other rockets into orbit, and hydrogen fuel cells powered the shuttle’s electrical systems, producing a clean by-product: pure water, which the crew drank. In September 2020, Airbus, a designer and manufacturer of industry-leading commercial aircraft, revealed three different concepts for the world’s first zero-emission commercial aircraft, which could become available by 2035. These concepts each represent a different approach to achieving zero-emission flight, exploring various technology pathways to support the company’s ambition of leading the way in the eradication of fossil fuels across the aviation industry. All of these concepts rely on hydrogen as a primary power source, an option which Airbus believes holds exceptional promise as a clean aviation fuel and is likely to be a solution for aerospace and many other industries to meet their climate-neutral targets.

However, when we consider the benefits of hydrogen fuel cells, aren’t there any negatives to producing energy this way? Unfortunately, the answer is yes. Even though using the hydrogen itself does not harm the environment, the process of producing it does. Hydrogen is the simplest and most abundant element in the universe but does not occur naturally as a gas on the Earth: it is always combined with another element. For example, it combines with carbon to form methane (CH₄). So, to obtain the hydrogen, a lot of energy must be used to split the molecules it is bound up in. Steam methane reforming, or SMR, is the most common method for producing hydrogen on a large industrial scale. It involves reacting methane, found in natural gas, with steam (H₂O) to produce hydrogen and carbon monoxide (the carbon monoxide, being toxic, must undergo a further reaction, which also produces more hydrogen). Although methane is being used, machinery driven by SMR-sourced hydrogen is still cleaner than its fossil fuel equivalents.
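
To get a sense of scale, combining the reforming reaction (CH₄ + H₂O → CO + 3H₂) with the follow-up water-gas shift reaction (CO + H₂O → CO₂ + H₂) gives an overall CH₄ + 2H₂O → CO₂ + 4H₂, from which the minimum CO₂ released per kilogram of hydrogen can be worked out. This is plain textbook stoichiometry, not a figure from this essay’s sources, and real plants emit more once the energy needed to raise the steam is counted:

```python
# Stoichiometric minimum CO2 from steam methane reforming.
# Overall reaction (SMR + water-gas shift): CH4 + 2H2O -> CO2 + 4H2
# Plain textbook stoichiometry (assumed), not a figure from the essay's sources.

M_H2 = 2.016    # g/mol, molar mass of hydrogen
M_CO2 = 44.01   # g/mol, molar mass of carbon dioxide

mol_h2 = 1000 / M_H2             # moles of H2 in one kilogram
mol_co2 = mol_h2 / 4             # one CO2 is made for every four H2
kg_co2 = mol_co2 * M_CO2 / 1000  # convert grams of CO2 to kilograms

print(f"~{kg_co2:.1f} kg of CO2 per kg of H2")  # ~5.5 kg; real plants emit more
```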


So, is there a future for greater use of hydrogen fuel cells in our everyday lives? The transportation sector has been dominated by fossil fuel-based fuels for much of the past century, due to their combined advantages in cost and energy density, and this is what has made us so reliant on this form of energy. Although hydrogen is a very light gas, it is very difficult to transport. Oil can be transported through pipelines and coal in the back of trucks, but moving even small amounts of hydrogen is very expensive. For that reason alone, the transport and storage of such a substance is currently considered impractical. If technology develops and a way can be found to reduce the volume of hydrogen for much cheaper transportation and storage, we may be able to move from petroleum-based energy to hybrid and hydrogen power.
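
Some approximate energy densities make the problem concrete. The figures below are rough values I have assumed for illustration; they are not from this essay’s sources:

```python
# Hydrogen's transport problem in numbers: excellent energy per kilogram,
# poor energy per litre. Approximate figures, assumed for illustration.

fuels = {
    # name: (energy content in MJ/kg, density in kg/L)
    "petrol":              (44.0, 0.745),
    "hydrogen at 700 bar": (120.0, 0.042),  # compressed gas, roughly 42 kg/m³
}

for name, (mj_per_kg, kg_per_litre) in fuels.items():
    print(f"{name}: {mj_per_kg:.0f} MJ/kg, {mj_per_kg * kg_per_litre:.1f} MJ/L")
```

Kilogram for kilogram, hydrogen carries almost three times the energy of petrol, but litre for litre it carries roughly a sixth, which is why the tanks and pipelines needed to move it are so bulky and expensive.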

How was the Earth’s best friend created?

It must be said that the Moon is one of the most mysterious things that exists. We can see it clearly from Earth, but we only ever see one side of it. It can be seen both in the day and at night as it reflects light from the sun. It affects the tides of the ocean and the Earth’s obliquity or ‘wobble’, and was used for centuries to work out the time of day. However, how did this strange, colossal mass form in our sky, and what would happen if we did not have a moon at all?
The concept of The Big Bang is not new to most people. It is rejected by some but strongly supported by others. To those who support the theory, The Big Bang was the event at the beginning of the universe that formed matter, stars and planets; it was the moment that formed the basis of the materials that we have today. Yet, although this concept is widely known, the formation of the planets and their moons has somehow been lost in these explanations.
One of the most interesting concepts is how smaller masses, such as moons, form and orbit larger planets. Thanks to the astronauts who have visited the Moon, along with the many probes and satellites that have been into space, we now know a lot about the Moon’s composition. But for all the exploration done on the Moon and the knowledge gained, scientists are still struggling with a seemingly straightforward question: where did the Moon come from?
One suggestion is that the Moon was created alongside the Earth as it was being formed. This idea goes back to the Italian astronomer, physicist and philosopher Galileo Galilei, who had succeeded in making a telescope powerful enough to show the Moon in greater detail than had been seen before. In the early 1600s, Galileo showed that the Moon had a very similar landscape to the Earth, and this is where the suggestion that the Earth and Moon formed simultaneously came from. Later, in the 1800s, Sir George Darwin, the son of Charles Darwin, suggested that when the Earth was young and still a rapidly rotating ball of molten rock, it had been spinning so fast that some material broke away and began to orbit the Earth. (It is said the Pacific Ocean is the ‘scar’ from this separation.)
The most widely accepted theory today is that the Moon was formed during a collision between the Earth and another small planet, Theia (a roughly Mars-sized planet). This collision turned the newly formed Earth into a ball of molten rock again and ejected debris which collected in an orbit around Earth to form the Moon as we now know it. Scientists have experimented with modelling the impact, changing the size of Theia and the angle of impact to find the nearest possible match. This theory was first proposed in 1946 by Reginald Aldworth Daly of Harvard University, who questioned Darwin’s theory, calculating that a piece of Earth simply breaking off could not have reached the Moon’s current position.
So why is there so much research being done on the Moon? Well, to start with, the Moon is an important part of the Earth’s system: it not only provides light during the night, but it drives the ocean tides through its gravitational pull. Without that pull, the variety of sea animal species would decrease, as many rely on the tides, for example to lay their eggs at low tide. From a human point of view, this would mean fewer fish moving to areas close to land, which would have significant economic impacts, especially on developing countries. And for those who are interested in sport, the loss of the Moon would mean the disappearance of surfing!
So, even though we do not know exactly how the Moon was formed, the giant impact theory holds the most promise, and scientists are on an ongoing mission to find clues that will tell us more.

How high is too high?

Many of the world’s tallest buildings have become tourist attractions, New Year’s Eve countdown locations and even spots for wedding proposals. However, it is not the events that occur there or the words used to describe them, but the numbers, and the complicated engineering calculations behind them, that make these structures so inspiring. It can be shocking to think about everything required to construct these immense buildings, and the mathematics behind them has become more complicated as they get taller. This leaves the question: will there eventually come a point where we can no longer build a taller building, and if so, what is that point?

Consider the Singer Building, located in Manhattan, New York, which briefly held the record for the tallest building in the world at 186.5 metres (612 ft) in 1908. The 35-storey building was designed by the architect Ernest Flagg and took 20 months to build after the foundations had been set. The chief engineer of the project, Otto F. Semsch, tackled the problem of wind bracing (the act or process of strengthening a frame against winds) for the tower and the construction of the costly foundations. Although it was significantly taller than previous skyscrapers, the Singer Tower held the title for only a year, when it was surpassed by the Metropolitan Life Tower in New York. Sixty years later, it obtained another record, as the world’s largest skyscraper ever to be peacefully demolished.

Now, just over 100 years on from the construction of the Singer Building, the title of tallest completed building in the world, at 828 metres (2,716.5 ft), belongs to the Burj Khalifa in Dubai, which was completed in 2010 and comprises residential apartments, a hotel and offices. This building is nearly as tall as the Empire State Building, the Shard and the Statue of Liberty combined!

In the next few years, the Burj Khalifa will have to give up its title as the world’s tallest building to the Jeddah Tower, or Kingdom Tower, which is currently under construction in Saudi Arabia. Designed by the same studio as the Burj Khalifa, Adrian Smith + Gordon Gill Architecture, the Jeddah Tower is said to be ‘a masterpiece of structural engineering’, featuring avant-garde glass to keep the interiors cool in the hot climate and carbon-fibre elevator ropes, allowing the lifts to reach record-breaking heights. With 200 floors, the Jeddah Tower is set to reach 1,008 metres (3,307.1 ft), making it the first building to exceed one kilometre. There have been some issues with regards to the completion of this breath-taking project, but it is hoped that it will be completed soon.

It will not be surprising if, in the next few decades, there is another contender for the world’s tallest building, which suggests that, theoretically, we can go much taller. But how far up can we go?

In theory, there is no maximum height that can be achieved. One of the important technological advancements enabling engineers to design ‘supertall’ structures is the use of 3D computer modelling to investigate the structural integrity of a building. These models take information about a building’s profile and the forces it will exert and have applied to it, and provide engineers with details about the minimum amounts of material they should use to guarantee a strong, secure structure. However, for such a towering structure to be stable, we would need to keep expanding the width of the base to support its weight. Logically, this could not continue forever, because the Earth is a sphere. Taller buildings with larger bases could become artificial mountains! These structures could house thousands of people, provide thousands of jobs through the construction of potential offices or be the shopping centre of your dreams, but unfortunately, they could also come with a heavy price tag of anything up to 1.4 trillion USD.
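
As a sanity check on ‘no maximum height in theory’, one can ask when a simple column of constant cross-section would crush under its own weight: the stress at the base is ρgh, so the limiting height is the material’s compressive strength divided by ρg. The material figures below are my own illustrative assumptions:

```python
# Height at which a constant-cross-section column crushes under its own
# weight: h_max = strength / (density * g). Material values are
# illustrative assumptions, not figures from the article.

G = 9.81  # m/s², gravitational acceleration

materials = {
    # name: (compressive strength in Pa, density in kg/m³)
    "concrete": (40e6, 2400),
    "steel":    (250e6, 7850),
}

for name, (strength, density) in materials.items():
    h_max = strength / (density * G)
    print(f"{name}: self-crushing height ~{h_max / 1000:.1f} km")
```

Even plain concrete only crushes under its own weight at around 1.7 km on these assumptions, roughly twice the Burj Khalifa, so in practice wind loading, cost and the width of the base become the limit long before the material does.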

So, with skyscrapers we really could reach for the stars, but that comes at a cost. At some point, there may be a limit to how wide the base of a building can be, or to the amount of money that governments and private investors are willing to pay for it. It may be possible that at some point in the future, the development of materials and the advancement of technology could make these projects cheaper and allow skyscrapers to reach new heights!

Demi Bako

Is death the end or can we win the battle against mortality?

GUEST ARTICLE

Death is inevitable. There are many definitions and meanings of it. Some people believe death is the end of life, whilst others believe that death is not the end and is just the beginning of another life that we do not fully understand. The Oxford definition of death is ‘the action or fact of dying or being killed; the end of the life of a person or an organism.’ The medical definition describes an individual who has sustained either the irreversible cessation of circulatory and respiratory functions, or the irreversible cessation of all functions of the brain. But how do doctors classify death?

Originally, death was defined as the moment when the heart stops beating and there is no blood circulating around the body, causing a person to immediately stop breathing and their brain to shut down and become non-functional. However, researchers believe that if they can restart the heart after a person has gone through this first stage of death, then they can bring them back without any brain damage. Death is not that sudden, because when a person dies, their cells start to undergo a process of dying as well, and this can take hours. So, can doctors potentially bring someone back once they have died? With the increase in medical innovation and medical science, the line between living and death is becoming gradually more unclear. Even though this is a complicated process, it may be possible.

It is becoming more difficult to distinguish between those living with the support of technology and those who are fully independent, especially with machines such as the mechanical ventilator. Mechanical ventilation is a life-saving therapy that has significantly advanced the development of modern intensive care. Young people, especially those in developing countries, are vulnerable to acute diseases and respiratory failure, and large numbers of patients unfortunately die due to a lack of resources and funding. Ventilation also helps tackle respiratory problems for a short period in patients suffering from infectious disease, trauma, or peripartum maternal or neonatal complications. It can prolong lives too, giving someone anywhere from a few extra minutes to years of life that would otherwise have been lost. When ventilators first came into use in modern medicine, they helped save the lives of many polio patients who were dying from respiratory failure, reducing the mortality rate from 87% to 25% (Journal of Global Health).

However, mechanical ventilation is a ‘life-sustaining treatment’ which does have its risks: patients on a ventilator are, for example, more likely to get pneumonia. And even though mechanical ventilation can prolong someone’s life by assisting them in breathing, it does not mean the patient will ever be able to breathe unaided. There have been countless cases where the illness that led to the need for a ventilator did not improve despite intensive treatment. A recent example is the case of Charlie Gard, an infant boy from London, United Kingdom, born with mitochondrial DNA depletion syndrome (MDDS), a rare genetic disorder that causes progressive brain damage and muscle failure. Charlie had suffered severe brain damage and was unable to breathe unaided, and so needed the ventilator. His parents disagreed with the doctors’ decision to withdraw his life support and appealed to the court, which decided that it was in Charlie’s best interests for the life support to be turned off. This case is an excellent example of how mechanical ventilation can prolong someone’s life while the person’s condition shows no signs of improvement. Should we prevent them from suffering even more, or take the chance that they could recover? Medical decisions are never easy and are not as straightforward as they seem.

Another way in which one can prolong their life is through organ transplantation. In December 1954, the first successful kidney transplant was performed by Dr Joseph Murray and Dr David Hume in Boston, United States. Even though there had been many failed attempts previously, this was the first successful operation. Transplantation has become increasingly common, with more than 95% of kidney transplant patients surviving beyond a year. Organ transplantation gives another opportunity to patients who suffer from a damaged organ, and the NHS reported that 2018 saw the highest number of organ donors in 28 years. Even though organ transplant provides a second chance for many patients, a study conducted by the Stanford School of Medicine showed that almost 25% of kidney recipients and 50% of heart recipients experienced acute rejection within the first year of their organ transplant. This suggests that this clinical intervention does not always guarantee the best outcome for the transplant recipients.

However, clinical intervention has given many patients another opportunity at life. Death is defined as the cessation of the circulatory and respiratory systems, but what if we could prolong life and improve its quality? One is usually classified as dead when their heart stops beating, but what if we can extend their life? The first heart transplant was performed on December 3rd, 1967, by the South African surgeon Christiaan Barnard in Cape Town. The patient survived for 18 days, and Dr Barnard’s second patient, Philip Blaiberg, lived for nearly two years. A year later, on May 3rd, 1968, the first heart transplant in the UK took place. Although heart transplantation is a successful therapy for patients suffering from irreversible heart failure, the sad truth is that there is a lack of organs. The unavailability of healthy organs for transplantation has resulted in a significant organ shortage crisis. In the United States of America, the number of patients on the waiting list in 2006 was greater than 95,000, whilst the number of patient deaths was just over 6,300. The organ shortage has deprived many of a better quality of life and has instead forced them to resort to alternatives such as dialysis. Nevertheless, there is still hope, as the ‘artificial organ market may achieve around 9% CAGR (compound annual growth rate) up to the year 2025’, potentially helping more of the patients in need of an organ transplant and improving their way of living. Even though it is expensive, it provides another alternative and can potentially help lengthen life, temporarily winning the battle against death.

Finally, a much newer example of extending life is being cryogenically frozen. This is the freezing and storage of a human corpse, in the hope that the person may be brought back to life in the future. Even if there is currently no way to combat certain diseases and medical issues, by being cryogenically frozen there is a chance of being revived if a cure is found in the future. The first person to be cryonically preserved was James Hiram Bedford, who died of cancer in 1967. Although we can cryonically preserve a person, we still need to figure out how to bring them back again. If this succeeds, we will have to rethink the definition of death, as this is beyond anything we have seen before now. In February, a scientist in California successfully cryonically preserved the brain of a rabbit. When it was thawed, Dr Kenneth Hayworth said, ‘Every neurone and synapse looks beautifully preserved across the entire brain.’ This gives hope for this research, which could potentially rewrite the future.

It is evident that as technology has advanced and our knowledge of human science has increased, the line between death and life is beginning to blur. Someone who is brain dead may still have an otherwise functioning body, and someone whose organ failure could have caused their death is now alive thanks to organ transplantation. It is thrilling and exciting to see what the future holds for this area of medicine, and how many medical discoveries will change what we view as normal. Maybe, in the future, death is something we will be able to overcome.

Piriyanka Jeyapahan

If X-rays are so dangerous, why do we still use them?

In today’s world, doctors conduct X-rays to diagnose many types of medical issues: broken bones, heart failure, pneumonia and more.

The fact that ordering X-rays to find the cause of a medical issue has become normal makes us oblivious to how they were discovered and the dangers that come with them. Shockingly, not so long ago, something such as a tumour or a broken bone could not be found without cutting a person open.


X-rays are a powerful type of electromagnetic radiation that was discovered in November 1895 by German physicist Wilhelm Roentgen. They are very useful because they can go through substances that light cannot. X-rays can show images of the inside of an object, such as a suitcase or the human body.


Roentgen was a professor of physics in Wurzburg, Germany, and accidentally discovered X-rays in 1895 while testing whether cathode rays could pass through glass. (A cathode ray is a beam of electrons emitted from the cathode of a high-vacuum tube.) His cathode tube was covered in black paper, so he was surprised when a fluorescent green light passed through the paper and projected onto a nearby fluorescent screen. Through numerous experiments, he found that the ‘mysterious light’ would pass through most substances but leave shadows of solid objects. Because he did not know what these rays were, he called them ‘X’, indicating ‘unknown’ rays.


Roentgen quickly found that X-rays would pass through human tissue too, making the bones beneath the tissue visible. Word of his discovery spread rapidly worldwide, with doctors in Europe and the United States using X-rays to locate bone fractures, gunshot wounds and kidney stones. He began to receive numerous awards for his discovery, including the first Nobel Prize in Physics in 1901.


Today, X-rays help doctors see inside our bodies. An X-ray image shows shades of grey, which correspond to how much of the X-ray beam manages to get through your body. If an area of the body is very dense (like bone), it will come up white; if it is less dense, like organ tissue, it will come up as a darker shade of grey. Radiographers, those who operate the X-ray machine, can control the amount and strength of the X-ray beam so that the body parts they want to see come up on the images, helping to guide the surgeons in the operating theatre.
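
Those shades of grey follow a simple rule: the beam’s intensity falls off exponentially with the thickness of material it crosses, I = I₀e^(-μx), known as the Beer-Lambert law. Here is a small sketch; the attenuation coefficients are illustrative values I have assumed for X-rays of roughly diagnostic energy, not clinical data:

```python
# How much of an X-ray beam survives a slab of material, using the
# Beer-Lambert law I = I0 * exp(-mu * x). The attenuation coefficients
# below are illustrative assumptions, not clinical values.

import math

MU_PER_CM = {
    "soft tissue": 0.20,  # 1/cm (assumed)
    "bone":        0.57,  # 1/cm (assumed)
}

def transmitted_fraction(material: str, thickness_cm: float) -> float:
    """Fraction of the incoming beam that passes straight through."""
    return math.exp(-MU_PER_CM[material] * thickness_cm)

for material in MU_PER_CM:
    fraction = transmitted_fraction(material, 3.0)  # a 3 cm slab of each
    print(f"3 cm of {material}: {fraction:.0%} of the beam gets through")
```

On these assumed numbers, roughly three times more of the beam survives soft tissue than bone, which is exactly why bone casts the pale shadow we see on the image.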


And here is something that may surprise you: there are many more uses for X-rays outside of the medical industry. X-rays have been allowing archaeologists to read long-lost documents and writing found on historic artefacts. For many years, researchers have sought out the writings found on objects such as coffins from many centuries ago, which could contain previously undiscovered knowledge about ancient civilisations, but there was no way of getting inside them without destroying their contents. Now, thanks to advancements in X-ray technologies and imaging techniques, that is all changing.


However, having too many X-ray scans or being exposed to a lot of X-ray radiation can be dangerous, as it can damage the cells in your body (which is why the radiographer leaves the room while you get your X-ray done). Surprisingly, this cell damage is sometimes a good thing: a treatment called radiotherapy uses X-rays to kill harmful cells such as cancer cells. However, if you do need an X-ray, you will only be exposed to a very small dose, so there is no need to worry!
