
Technologies and Applications

This file presents technology topics that cannot be assigned to a particular mission. The following chapters contain only short descriptions, presented in reverse chronological order. The topics should be of interest to the reader community.

Slow light to speed up LiDAR sensors development
First plant-powered IoT sensor sends signal to space
Skin-like sensors - wearable tech
Water drop antenna lens
Particle accelerator that fits on a chip
ESA helps industry for 5G innovation
Glowing solar cell
Quantum light sources pave the way for optical circuits
Driverless shuttle
New Method Can Spot Failing Infrastructure from Space
Atomic motion captured in 4-D for the first time
SUN-to-LIQUID
Melting satellites
The mysterious crystal that melts at two different temperatures
Mission Control 'Saves Science'
Testing satellite marker designs
Mirror array for LSS
Cold plasma tested on ISS
3D printing and milling Athena optic bench
SmartSat architecture in spacecraft
Radiation tolerance of 2D material-based devices
Better Solar Cells
Converting Wi-Fi Signals to Electricity
Neonatal Intensive Care Units
Introduction of 5G communication connectivity
Unique 3D printed sensor technology
New Geodesy Application for Emerging Atom-Optics Technology
Wireless transmission at 100 Gbit/s
3D printing one of the strongest materials on Earth
Prototype nuclear battery packs
The Kilopower Project of NASA
Top Tomatoes - Mars Missions
NEXT-C ion propulsion engine
New dimension in design
Lasers Probing the nano-scale

Slow light to speed up LiDAR sensors development

• January 21, 2020: Quicker is not always better, especially when it comes to a 3D sensor in advanced technology. With applications in autonomous vehicles, robots and drones, security systems and more, researchers are striving for a 3D sensor that is compact and easy to use. 1)

A team from Yokohama National University in Japan believes they have developed a method to obtain such a sensor by taking advantage of slow light, an unexpected move in a field where speed is often valued above other variables.


Figure 1: A small-sized silicon photonics chip that can be used for non-mechanical beam steering and scanning (image credit: Yokohama National University)

They published their results on 14 January 2020 in Optica, a journal published by The Optical Society. 2)

LiDAR (Light Detection and Ranging) sensors can map the distance between distant objects and more using laser light. In modern LiDAR sensors, many of the systems are composed of a laser source; a photodetector, which converts light into current; and an optical beam steering device, which directs the light into the proper location.
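The ranging principle behind any such system can be illustrated with the basic time-of-flight relation (generic LiDAR math, not specific to the Yokohama device): distance is half the round-trip travel time of a laser pulse multiplied by the speed of light.

```python
# Time-of-flight ranging: a generic LiDAR sketch, not specific to any device.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(round_trip_seconds: float) -> float:
    """Distance to the target from the round-trip time of a laser pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 ns corresponds to a target ~10 m away.
print(range_from_tof(66.7e-9))
```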

"Currently existing optical beam steering devices all use some kind of mechanics, such as rotary mirrors," said Toshihiko Baba, paper author and professor in the Department of Electrical and Computer Engineering at Yokohama National University. "This makes the device large and heavy, with limited overall speed and a high cost. It all becomes unstable, particularly in mobile devices, hampering the wide range of applications."

In recent years, according to Baba, more engineers have turned toward optical phased arrays, which direct the optical beam without mechanical parts. But, Baba warned, such an approach can become complicated due to the sheer number of optical antennae required, as well as the time and precision needed to calibrate each piece.

"In our study, we employed another approach - what we call 'slow light,'" Baba said. Baba and his team used a special waveguide "photonic crystal," aimed through a silicon-etched medium. Light is slowed down and emitted to the free space when forced to interact with the photonic crystal. The researchers engaged a prism lens to then direct the beam in the desired direction.
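A rough feel for non-mechanical steering can be had from the standard grating phase-matching relation, sin(θ) = n_eff − λ/Λ: sweeping the waveguide's effective index (as slow light does near a band edge) swings the emitted beam. The index values and grating period below are purely illustrative, not the Yokohama device's parameters.

```python
import math

# Hypothetical sketch of beam steering from a slow-light grating waveguide.
# sin(theta) = n_eff - lam/period is the textbook grating-coupler equation;
# all numerical values here are illustrative assumptions.
def steering_angle_deg(wavelength_um: float, n_eff: float, period_um: float) -> float:
    s = n_eff - wavelength_um / period_um
    return math.degrees(math.asin(s))

# Raising the effective index steers the beam further, with no moving parts.
for n_eff in (2.25, 2.30, 2.35, 2.40):
    print(n_eff, round(steering_angle_deg(1.55, n_eff, 0.70), 2))
```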

"The non-mechanical steering is thought to be crucial for LiDAR sensors," Baba said. The resulting method and device are small-sized, free of moving mechanics, setting the stage for a solid-state LiDAR. Such a device is considered smaller, cheaper to make and more resilient, especially in mobile applications such as autonomous vehicles.

Next, Baba and his team plan to more fully demonstrate the potential of a solid-state LiDAR, as well as work on improving its performance with the ultimate goal of commercializing the device.

First plant-powered IoT sensor sends signal to space

• 14 January 2020: The first-ever plant-powered sensor has successfully transmitted to a satellite in space. The pilot service, using plants as the energy source, has been developed by the Dutch company Plant-e and Lacuna Space, which is based in the Netherlands and the UK, under ESA’s ARTES (Advanced Research in Telecommunications Systems) program. Because the sensor stores energy internally and needs no batteries, it reduces cost, maintenance requirements and environmental impact. As long as plants continue to grow, electricity will be produced. 3) 4)


Figure 2: Plant-powered sensors wetlands research site (image credit: Plant-e BV)

Such sensors could be used to connect everyday objects in remote locations, enabling them to send and receive data as part of the IoT (Internet of Things).

The device can inform farmers about the conditions of their crops to help increase yield, and enable retailers to gain detailed information about potential harvests.

It transmits data on air humidity, soil moisture and temperature, enabling field-by-field reporting from agricultural land, rice fields or other aquatic environments. The extremely low power device sends signals at radio frequencies that are picked up by satellites in LEO (Low Earth Orbit).

Plants produce organic matter through photosynthesis, but only part of this matter is used for plant growth. The rest is excreted into the soil through the plant’s roots. In the soil, bacteria around the roots break down this organic matter, releasing electrons as a waste product. The technology developed by Plant-e harvests these electrons to power small electrical devices.

The IoT prototype device, developed by the two companies, uses the electricity generated by living plants to transmit LoRa® [Long Range, LoRa is a low-power wide area network (LPWAN)] messages about air humidity, soil moisture, temperature, cell voltage and electrode potential straight to Lacuna's satellite.
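A back-of-the-envelope budget shows why internal energy storage makes such a system work. All numbers below are illustrative assumptions, not Plant-e or Lacuna figures: plant microbial fuel cells typically yield on the order of microwatts to milliwatts, while a long-range LoRa uplink burst costs a small fraction of a joule.

```python
# Illustrative harvesting budget for a plant-powered LoRa sensor.
# Every constant here is an assumption for the sketch, not a measured value.
HARVEST_POWER_W = 100e-6   # assumed continuous plant-cell output
TX_CURRENT_A = 0.025       # assumed radio current during transmit
TX_VOLTAGE_V = 3.3
TX_TIME_S = 1.5            # assumed long-range LoRa airtime

burst_energy_j = TX_CURRENT_A * TX_VOLTAGE_V * TX_TIME_S
harvest_seconds = burst_energy_j / HARVEST_POWER_W
print(f"energy per uplink: {burst_energy_j:.3f} J")
print(f"harvest time between uplinks: {harvest_seconds / 60:.1f} minutes")
```

The sensor trickle-charges its store for tens of minutes, then spends the accumulated energy in one short burst, which is why no battery is needed.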

Plant-e, a start-up from Wageningen, the Netherlands, has developed a technology to harvest electrical energy from living plants and bacteria to generate carbon-negative electricity. The output generates enough energy to power LEDs and sensors in small-scale products. 5)

“This collaboration shows how effective plant-electricity already is at its current state of development,” said Plant-e CEO Marjolein Helder. “We hope this inspires others to consider plant-electricity as a serious option for powering sensors.”

Lacuna, based in the UK and the Netherlands, is launching a LEO (Low Earth Orbit) satellite system that will provide a global Internet-of-Things service. The service allows collecting data from sensors even in remote areas with little or no connectivity. At the moment Lacuna Space is offering a pilot service with one satellite in orbit, and three more satellites are awaiting launch during the next few months.

“This opens up a new era in sustainable satellite communications,” says Rob Spurrett, chief executive and co-founder of Lacuna Space. “There are many regions in the world that are difficult to reach, which makes regular maintenance expensive and the use of solar power impossible. Through this technology, we can help people, communities and companies in those regions to improve their lives and businesses.”


Figure 3: Plant-powered sensor schematic (image credit: Plant-e BV)

Frank Zeppenfeldt, who works on future satellite communication systems at ESA, says: “We are very enthusiastic about this demonstration that combines biotechnology and space technology. It will help to collect small data points in agricultural, logistic, maritime and transportation applications—where terrestrial connectivity is not always available.”

Skin-like sensors bring a human touch to wearable tech

• 13 January 2020: University of Toronto Engineering researchers have developed a super-stretchy, transparent and self-powering sensor that records the complex sensations of human skin. 6)

- Dubbed AISkin (Artificial Ionic Skin), the researchers believe the innovative properties of AISkin could lead to future advancements in wearable electronics, personal health care and robotics.


Figure 4: Super stretchy, transparent and self-powering, researchers Xinyu Liu (MIE) and Binbin Ying (MIE, pictured) believe their AISkin will lead to meaningful advancements in wearable electronics, personal health care, and robotics (image credit: Daria Perevezentsev)

- “Since it’s hydrogel, it’s inexpensive and biocompatible — you can put it on the skin without any toxic effects. It’s also very adhesive, and it doesn’t fall off, so there are so many avenues for this material,” says Professor Xinyu Liu (MIE), whose lab focuses on the emerging areas of ionic skin and soft robotics.

- The adhesive AISkin is made of two oppositely charged sheets of stretchable substances known as hydrogels. By overlaying negative and positive ions, the researchers create what they call a “sensing junction” on the gel’s surface.

- When the AISkin is subjected to strain, humidity or changes in temperature, it generates controlled ion movements across the sensing junction, which can be measured as electrical signals such as voltage or current.

- “If you look at human skin, how we sense heat or pressure, our neural cells transmit information through ions — it’s really not so different from our artificial skin,” says Liu.

- AISkin is also uniquely tough and stretchable. “Our human skin can stretch about 50 per cent, but our AISkin can stretch up to 400 per cent of its length without breaking,” says Binbin Ying (MIE), a visiting PhD candidate from McGill University who’s leading the project in Liu’s lab. The researchers recently published their findings in Materials Horizons. 7)
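The strain figures above can be sketched with the textbook linear model for a resistive strain sensor, ΔR/R = GF · ε. The gauge factor and resistance values below are hypothetical illustrations, not measured AISkin parameters.

```python
# Sketch of a resistive strain readout; the linear gauge-factor model is the
# standard textbook approximation and the numbers are illustrative only.
def strain_percent(rest_length: float, stretched_length: float) -> float:
    return (stretched_length - rest_length) / rest_length * 100.0

def resistance_under_strain(r0_ohm: float, strain: float, gauge_factor: float = 2.0) -> float:
    return r0_ohm * (1.0 + gauge_factor * strain)

# Stretching to five times the rest length is 400% strain, the figure quoted above.
print(strain_percent(1.0, 5.0))                # → 400.0
print(resistance_under_strain(1000.0, 0.5))    # 1 kΩ sensor at 50% strain → 2000.0 Ω
```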

Figure 5: Human skin can stretch about 50%, but our AISkin can stretch up to 400% of its length without breaking (image credit: Daria Perevezentsev)

- The new AISkin could open doors to skin-like Fitbits that measure multiple body parameters, or an adhesive touchpad you can stick onto the surface of your hand, adds Liu. “It could work for athletes looking to measure the rigor of their training, or it could be a wearable touchpad to play games.”

- It could also measure the progress of muscle rehabilitation. “If you were to put this material on a glove of a patient rehabilitating their hand for example, the health care workers would be able to monitor their finger-bending movements,” says Liu.

Figure 6: Binbin Ying demonstrates how AISkin could be used to measure the progress of muscle rehabilitation (image credit: Binbin Ying)

- Another application is in soft robotics — flexible bots made completely out of polymers. An example is soft robotic grippers used in factories to handle delicate objects such as light bulbs or food.

- The researchers envision AISkin being integrated onto soft robots to measure data, whether it’s the temperature of food or the pressure necessary to handle brittle objects.

- Over the next year, Liu’s lab will be focused on further enhancing their AISkin, aiming to shrink the size of AISkin sensors through microfabrication. They’ll also add bio-sensing capabilities to the material, allowing it to measure biomolecules in body fluids such as sweat.

- “If we further advance this research, this could be something we put on like a ‘smart bandage,’” says Liu. “Wound healing requires breathability, moisture balance – ionic skin feels like the natural next step.”

Water drop antenna lens

• 08 January 2020: This novel ‘water drop’ antenna lens design for directing radio wave signals was developed by a pair of antenna engineers from ESA and Sweden’s Royal Institute of Technology, KTH. 8)


Figure 7: The inventors of this new lens design, which received an ESA Technical Improvement award in February 2017, like to call it the ‘water drop’ lens because its shape resembles the ripples produced by a water drop at the surface of a fluid (image credit: ESA–SJM Photography)

In the same way that optical lenses focus light, waveguide lenses serve to direct electromagnetic radio wave energy in a given direction – for instance to send out a radar or a communication signal – and minimize energy loss in the process.

Traditional waveguide lenses rely on complex, electrically sensitive ‘dielectric’ materials to shape electromagnetic signals as desired, but this water drop waveguide lens – once its top plate has been added on – comes down purely to its curved shape directing signals through it.

The lack of dielectrics in this shape-based design is an advantage, especially for space – where they would risk giving off unwanted fumes in orbital vacuum.


“The lens’s extremely simple structure should make it easy and cheap to manufacture, opening up avenues to a wide variety of potential materials such as metalized plastics,” explains ESA antenna engineer Nelson Fonseca.

“This prototype has been designed for the 30 GHz microwave range but the simplicity of its shape-based design also means it should be applicable to a broad frequency range – the higher the frequency, the smaller the structure, facilitating its integration”.
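The scaling Fonseca describes follows directly from λ = c/f: as frequency rises, the wavelength, and hence the lens structure, shrinks proportionally.

```python
# Free-space wavelength versus frequency, illustrating why higher-frequency
# versions of the lens would be smaller.
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(freq_ghz: float) -> float:
    return C / (freq_ghz * 1e9) * 1e3

for f in (30, 60, 100):  # the 30 GHz prototype band and two higher bands
    print(f"{f} GHz -> {wavelength_mm(f):.2f} mm")
```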

The idea came out of a brainstorming session during a conference, explains KTH antenna engineer Oscar Quevedo-Teruel: “We took the ‘Rinehart-Luneburg lens’, also called the geodesic lens, as our starting point. This is a cylindrical waveguide lens developed in the late 1940s, mostly for radar applications.

“We wanted the same performance, while reducing its size and height. So the idea we had was to retain the functional curvature of the original design by folding it in on itself, reducing its profile by a factor of four in the specific case of the manufactured prototype.”

This first prototype of a water drop lens was tested at KTH facilities, Oscar adds, to measure its radiation patterns, efficiency and gain: “While a conventional Luneburg lens might suffer from elevated dielectric losses, especially when used at higher frequencies, this design shows marginal signal loss thanks to its fully metallic design.”

Besides space applications, such as Earth observation and satellite communications on small satellites, this antenna has also attracted the attention of non-space companies. Ericsson is looking into using the compact design for fifth-generation mobile phone networks. The concept could also be used for guidance radars in the next generation of self-driving cars.

Researchers build a particle accelerator that fits on a chip

• 02 January 2020: On a hillside above Stanford University, the SLAC National Accelerator Laboratory operates a scientific instrument nearly 2 miles long. In this giant accelerator, a stream of electrons flows through a vacuum pipe, as bursts of microwave radiation nudge the particles ever-faster forward until their velocity approaches the speed of light, creating a powerful beam that scientists from around the world use to probe the atomic and molecular structures of inorganic and biological materials. 9) 10)

Now, for the first time, scientists have created a silicon chip that can accelerate electrons — albeit at a fraction of the velocity of the most massive accelerators — using an infrared laser to deliver, in less than a hair's width, the sort of energy boost that takes microwaves many feet.


Figure 8: This image, magnified 25,000 times, shows a section of a prototype accelerator-on-a-chip. The segment shown here is one-tenth the width of a human hair. The oddly shaped gray structures are nanometer-sized features carved into silicon that focus bursts of infrared laser light, shown in yellow and purple, on a flow of electrons through the center channel. As the electrons travel from left to right, the light focused in the channel is carefully synchronized with passing particles to move them forward at greater and greater velocities. By packing 1,000 of these acceleration channels onto an inch-sized chip, Stanford researchers hope to create an electron beam that moves at 94 percent of the speed of light, and to use this energized particle flow for research and medical applications (image credit: Neil Sapra)

Writing in the Jan. 3 issue of Science, a team led by electrical engineer Jelena Vuckovic explained how they carved a nanoscale channel out of silicon, sealed it in a vacuum and sent electrons through this cavity while pulses of infrared light—to which silicon is as transparent as glass is to visible light—were transmitted by the channel walls to speed the electrons along. 11)

The accelerator-on-a-chip demonstrated in Science is just a prototype, but Vuckovic said its design and fabrication techniques can be scaled up to deliver particle beams accelerated enough to perform cutting-edge experiments in chemistry, materials science and biological discovery that don't require the power of a massive accelerator.

“The largest accelerators are like powerful telescopes. There are only a few in the world and scientists must come to places like SLAC to use them,” Vuckovic said. “We want to miniaturize accelerator technology in a way that makes it a more accessible research tool.”

Team members liken their approach to the way that computing evolved from the mainframe to the smaller but still useful PC. Accelerator-on-a-chip technology could also lead to new cancer radiation therapies, said physicist Robert Byer, a co-author of the Science paper. Again, it’s a matter of size. Today, medical X-ray machines fill a room and deliver a beam of radiation that’s tough to focus on tumors, requiring patients to wear lead shields to minimize collateral damage.

“In this paper we begin to show how it might be possible to deliver electron beam radiation directly to a tumor, leaving healthy tissue unaffected,” said Byer, who leads the ACHIP (Accelerator on a Chip International Program), a broader effort of which this current research is a part.

Inverse design

In their paper, Vuckovic and graduate student Neil Sapra, the first author, explain how the team built a chip that fires pulses of infrared light through silicon to hit electrons at just the right moment, and just the right angle, to move them forward just a bit faster than before.

To accomplish this, they turned the design process upside down. In a traditional accelerator, like the one at SLAC, engineers generally draft a basic design, then run simulations to physically arrange the microwave bursts to deliver the greatest possible acceleration. But microwaves measure 4 inches from peak to trough, while infrared light has a wavelength one-tenth the width of a human hair. That difference explains why infrared light can accelerate electrons in such short distances compared to microwaves. But this also means that the chip's physical features must be 100,000 times smaller than the copper structures in a traditional accelerator. This demands a new approach to engineering based on silicon integrated photonics and lithography.
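The "100,000 times smaller" figure falls out of a rough wavelength comparison. Reading "4 inches from peak to trough" as half a period gives a microwave wavelength of about 8 inches; the infrared drive wavelength of roughly 2 µm (about one-tenth of a hair's width) is an assumption for this sketch.

```python
# Rough scale comparison behind the '100,000 times smaller' figure.
INCH_M = 0.0254
microwave_wavelength_m = 2 * 4 * INCH_M  # peak-to-trough is half a period -> ~0.203 m
infrared_wavelength_m = 2e-6             # assumed drive-laser wavelength

ratio = microwave_wavelength_m / infrared_wavelength_m
print(f"feature-size ratio: {ratio:,.0f}")  # on the order of 100,000
```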

Vuckovic's team solved the problem using inverse design algorithms that her lab has developed. These algorithms allowed the researchers to work backward, by specifying how much light energy they wanted the chip to deliver, and tasking the software with suggesting how to build the right nanoscale structures required to bring the photons into proper contact with the flow of electrons.


The design algorithm came up with a chip layout that seems almost otherworldly. Imagine nanoscale mesas, separated by a channel, etched out of silicon. Electrons flowing through the channel run a gantlet of silicon wires, poking through the canyon wall at strategic locations. Each time the laser pulses—which it does 100,000 times a second—a burst of photons hits a bunch of electrons, accelerating them forward. All of this occurs in less than a hair's width, on the surface of a vacuum-sealed silicon chip, made by team members at Stanford.

The researchers want to accelerate electrons to 94 percent of the speed of light, or 1 million electron volts (1MeV), to create a particle flow powerful enough for research or medical purposes. This prototype chip provides only a single stage of acceleration, and the electron flow would have to pass through around 1,000 of these stages to achieve 1MeV. But that's not as daunting as it may seem, said Vuckovic, because this prototype accelerator-on-a-chip is a fully integrated circuit. That means all of the critical functions needed to create acceleration are built right into the chip, and increasing its capabilities should be reasonably straightforward.
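Basic relativity checks the numbers in the text: an electron with 1 MeV of kinetic energy (rest energy 0.511 MeV) moves at about 94% of the speed of light, and reaching 1 MeV in roughly 1 keV steps, as the 1,000-stage figure implies, takes on the order of a thousand stages.

```python
import math

# Relativistic speed of an electron from its kinetic energy.
ELECTRON_REST_MEV = 0.511

def beta_from_kinetic_mev(t_mev: float) -> float:
    gamma = 1.0 + t_mev / ELECTRON_REST_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(f"v/c at 1 MeV: {beta_from_kinetic_mev(1.0):.3f}")   # ≈ 0.941
print(f"stages at ~1 keV per stage: {int(1.0 / 0.001)}")   # 1000
```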

The researchers plan to pack a thousand stages of acceleration into roughly an inch of chip space by the end of 2020 to reach their 1MeV target. Although that would be an important milestone, such a device would still pale in power alongside the capabilities of the SLAC research accelerator, which can generate energy levels 30,000 times greater than 1MeV. But Byer believes that, just as transistors eventually replaced vacuum tubes in electronics, light-based devices will one day challenge the capabilities of microwave-driven accelerators.

Meanwhile, in anticipation of developing a 1MeV accelerator on a chip, electrical engineer Olav Solgaard, a co-author on the paper, has already begun work on a possible cancer-fighting application. Today, highly energized electrons aren't used for radiation therapy because they would burn the skin. Solgaard is working on a way to channel high-energy electrons from a chip-sized accelerator through a catheter-like vacuum tube that could be inserted below the skin, right alongside a tumor, using the particle beam to administer radiation therapy surgically.

"We can derive medical benefits from the miniaturization of accelerator technology in addition to the research applications," Solgaard said.

Some background on SLAC: SLAC National Accelerator Laboratory operates in Menlo Park, California, and is a United States Department of Energy laboratory under the programmatic direction of the Department of Energy's Office of Science. Originally named the Stanford Linear Accelerator Center, and now referred to as the SLAC National Accelerator Laboratory, SLAC was founded in 1962 just west of the university's campus, covering 426 acres. The SLAC research program centers on experimental and theoretical research in elementary particle physics using electron beams, along with a broad program of research in atomic and solid-state physics. In March 2009 it was announced that SLAC would receive $68.3 million in Recovery Act funding, disbursed by the Department of Energy's Office of Science. As of 2005, SLAC employs over 1,000 people, some 150 of whom are physicists with doctoral degrees. SLAC also serves over 3,000 visiting researchers yearly, operating particle accelerators for high-energy physics as well as the Stanford Synchrotron Radiation Laboratory (SSRL) for synchrotron light research, which aided the work of Stanford Professor Roger D. Kornberg, winner of the 2006 Nobel Prize in Chemistry. 12)


Figure 9: Aerial photo showing the 2-mile length of SLAC, the largest linear accelerator in the world (image credit: Stanford University)

ESA helps industry for 5G innovation

25 September 2019: Connecting people and machines to everything, everywhere and at all times through 5G networks promises to transform society. People will be able to access information and services developed to meet their immediate needs but, for this to happen seamlessly, satellite networks are needed alongside terrestrial ones. 13)

Figure 10: Space's part in the 5G revolution. Everybody is talking about 5G, the new generation of wireless communication. We are at the start of a revolution in connectivity for everything, everywhere, at all times. Space plays an important role in this revolution. We need satellites to ensure businesses and citizens can benefit smoothly from 5G (video credit: ESA)

The European Space Agency is working with companies keen to develop and use space-enabled seamless 5G connectivity to develop ubiquitous services. At the UK Space Conference, held from 24 to 26 September in Newport, South Wales, UK, ESA is showcasing its work with several British-based companies, supported by the UK Space Agency.

The companies are working on applications that range from autonomous ships to connected cars and drone delivery, from cargo logistics to emergency services, from media and broadcast to financial services.

Spire is a satellite-powered data company that provides predictive analysis for global maritime, aviation and weather forecasting. It uses automatic identification systems aboard ships to track their whereabouts on the oceans.

Spire’s network of 80 nanosatellites picks up the identity, position, course and speed of each vessel. Thanks to intelligent machine-learning algorithms, it can predict vessel locations and the ship’s estimated time of arrival at port, enabling port authorities to manage busy docks and market traders to price the goods carried aboard.
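As a point of contrast with Spire's machine-learning models, the naive baseline they improve on is simply great-circle distance to port divided by current speed. The haversine formula below is standard navigation math; the positions and speed are made-up examples.

```python
import math

# Naive great-circle ETA baseline (NOT Spire's algorithm): haversine distance
# to the destination divided by the vessel's current speed over ground.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def eta_hours(lat, lon, port_lat, port_lon, speed_knots):
    return haversine_km(lat, lon, port_lat, port_lon) / (speed_knots * 1.852)

# A ship one degree of longitude from port on the equator, making 12 knots:
print(f"{eta_hours(0.0, 0.0, 0.0, 1.0, 12.0):.1f} h")  # ≈ 5.0 h
```

Real predictions must account for routing around land, weather and port congestion, which is where the learned models earn their keep.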

Peter Platzer, chief executive of Spire, said: “ESA recognized the value of smaller, more nimble satellites and was looking for a provider that could bring satellites more rapidly and cheaper to orbit. That really was the start of our collaboration. ESA was instrumental in the fact that Spire’s largest office today is in the UK and most of its workforce is in Europe.”

Integrating the ubiquity and unprecedented performance of satellites with terrestrial 5G networks is fundamental to the future success of 'Project Darwin', a project to develop connected cars in a partnership between ESA, the mobile operator Telefonica O2, a satellite operator, the universities of Oxford and Glasgow and several UK-based start-up companies.

Connected cars need to switch seamlessly between terrestrial and satellite networks, so that people and goods can move across the country without any glitches.

Darwin relies on a terminal that will allow seamless switching between the networks.

Daniela Petrovic of Telefonica O2, who founded Darwin, said: “There is a really nice ecosystem of players delivering innovation. ESA provided the opportunities to start discussions with satellite operators and helped us create this partnership.

“There is a good body of knowledge within ESA on innovation and science hubs and this gave us the opportunity to see what other start-ups are doing. Through ESA, we are getting exposure to 22 member state countries which can see the opportunity and maybe get involved.”

Magali Vaissiere, Director of Telecommunications and Integrated Applications at ESA, said: “We are very excited to see the response of industry to our Space for 5G initiative, which aims to bring together the cellular and satellite telecommunications world and provide the connectivity fabric to enable the digital transformation of industry and society.

“The showcase of flagship 5G projects today confirms the strategic importance of our Space for 5G initiative, which will be a significant strategic part of the upcoming ESA Conference of Ministers to be held in November.”

Other companies that formed part of the showcase include: Cranfield University, which as part of its Digital Aviation Research and Technology Centre is set to spearhead the UK’s research into digital aviation technology; HiSky, a satellite virtual network operator that offers global low-cost voice, data and internet of things communications using existing telecommunications satellites; Inmarsat, a global satellite operator that is showcasing a range of new maritime services enabled by the seamless integration of 5G cellular and satellite connectivity; Open Cosmos, a small satellite manufacturer based at Harwell in Oxfordshire, which is investigating how to deliver 5G by satellite; and Sky and Space Global based in London that plans a constellation of 200 nanosatellites in equatorial low Earth orbit for narrowband communications.

Glowing solar cell

25 September 2019: A solar cell is being turned into a light source by running electric current through it. Such ‘luminescence’ testing is performed routinely in ESA’s Solar Generator Laboratory, employed to detect cell defects – such as the cracks highlighted here. 14)
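The screening idea can be sketched in a few lines: cracked or disconnected regions emit less light under forward current, so pixels far dimmer than the typical brightness are flagged. Real laboratory analysis is far more sophisticated; the threshold and the synthetic image below are arbitrary illustrations.

```python
# Toy luminescence-image defect screening: flag pixels much dimmer than the
# median brightness. Threshold and data are illustrative only.
def flag_dark_pixels(image, fraction=0.5):
    """Return (row, col) of pixels dimmer than `fraction` of the median."""
    flat = sorted(v for row in image for v in row)
    median = flat[len(flat) // 2]
    cutoff = fraction * median
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v < cutoff]

# Synthetic 4x6 luminescence map with a dark streak in column 2:
el_image = [[200, 210, 40, 205, 198, 202],
            [195, 205, 35, 210, 201, 199],
            [205, 200, 38, 208, 197, 203],
            [198, 207, 42, 206, 200, 204]]
print(flag_dark_pixels(el_image))  # the four pixels in column 2
```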

By happy accident the solar (or ‘photovoltaic’) cell was invented in 1954, just before the start of the Space Age, allowing satellites to run off the abundant sunshine found in Earth orbit and beyond.


Figure 11: Made from the same kind of semiconductor materials as computer circuits, solar cells are designed so that incoming sunlight generates an electric current. But the process can be reversed for test purposes: apply an electric charge and a solar cell will glow (image credit: ESA–SJM Photography)

Solar cells, carefully assembled together into arrays, are an essential part of space missions, together with specially-designed batteries for times when a satellite needs more power, passes into darkness or faces a power emergency – plus the power conditioning and distribution electronics keeping all parts of a mission supplied with the power they require.

“Space power technologies are second only to launchers in ensuring European competitiveness and non-dependence,” comments Véronique Ferlet-Cavrois, Head of ESA’s Power Systems, EMC & Space Environment Division.

“Without the research and development ESA performs with European industry to ensure the continued availability of high-performance space power systems and components we would be left utterly reliant on foreign suppliers, or missions wouldn’t fly at all. We will be taking a look back at the important work done during the last three decades during this month’s European Space Power Conference.”

The 12th European Space Power Conference (ESPC) is taking place in Juan-les-Pins, Côte d'Azur, France, from 30 September to 4 October, with almost 400 participants. Véronique is chairing the event.

“It will begin 30 years to the week from the very first conference in the series,” adds ESA power conditioning engineer Mariel Triggianese, ESPC’s technical coordinator.

“So we’ll be commemorating our past but also looking forward. Our theme is ‘Space Power, Achievements and Challenges’. The chief technology officers from Airbus, Thales, Ariane Group and OHB will be joined by ESA’s Director of Technology Engineering and Quality, Franco Ongaro, to discuss the space power needs of their markets into the future.”

Quantum light sources pave the way for optical circuits

05 August 2019: An international team headed up by Alexander Holleitner and Jonathan Finley, physicists at the Technical University of Munich (TUM), has succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies, from quantum sensors and transistors in smartphones through to new encryption technologies for data transmission. 15) 16)

Circuits on today's chips rely on electrons as the information carriers. In the future, photons, which transmit information at the speed of light, will be able to take on this task in optical circuits. Quantum light sources, which are then connected with quantum fiber optic cables and detectors, are needed as the basic building blocks for such new chips.


Figure 12: By bombarding thin molybdenum sulfide layers with helium ions, physicists at TUM succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies (image credit: TUM)

First step towards optical quantum computers: "This constitutes a first key step towards optical quantum computers," says Julian Klein, lead author of the study. "Because for future applications the light sources must be coupled with photon circuits, waveguides for example, in order to make light-based quantum calculations possible."

The critical point here is the exact and precisely controllable placement of the light sources. It is possible to create quantum light sources in conventional three-dimensional materials such as diamond or silicon, but they cannot be precisely placed in these materials.

Deterministic defects: The physicists then used a layer of the semiconductor molybdenum disulfide (MoS2) as the starting material, just three atoms thick. They irradiated this with a helium ion beam which they focused on a surface area of less than one nanometer.

In order to generate optically active defects, the desired quantum light sources, molybdenum or sulfur atoms are precisely hammered out of the layer. The imperfections are traps for so-called excitons, electron-hole pairs, which then emit the desired photons.

The new helium ion microscope at the Walter Schottky Institute's Center for Nanotechnology and Nanomaterials, which can irradiate such material with unparalleled lateral resolution, was of central importance for this work.

On the road to new light sources: Together with theorists at TUM, the Max Planck Society, and the University of Bremen, the team developed a model which also describes the energy states observed at the imperfections in theory.

In the future, the researchers also want to create more complex light source patterns, in lateral two-dimensional lattice structures for example, in order to thus also research multi-exciton phenomena or exotic material properties.

This is the experimental gateway to a world which has long only been described in theory within the context of the so-called Bose-Hubbard model which seeks to account for complex processes in solids.

Quantum sensors, transistors and secure encryption: And there may be progress not only in theory, but also with regard to possible technological developments. Since the light sources always have the same underlying defect in the material, they are theoretically indistinguishable. This allows for applications which are based on the quantum-mechanical principle of entanglement.

"It is possible to integrate our quantum light sources very elegantly into photon circuits," says Klein. "Owing to the high sensitivity, for example, it is possible to build quantum sensors for smartphones and develop extremely secure encryption technologies for data transmission."

Driverless shuttle

10 July 2019: ESA’s technical heart will be serving as a testbed for this driverless shuttle in the coming months. 17)

The Agency’s ESTEC establishment in Noordwijk, the Netherlands, is working with vehicle owner Dutch Automated Mobility, provincial and municipal governments and the bus company Arriva to assess its viability as a ‘last mile’ solution for public transport.

The fully autonomous vehicle calculates its position using a fusion of satellite navigation, lidar ‘laser radar’, visible cameras and motion sensors. Once it enters service in October it will be used to transport employees from one side of the ESTEC complex to the other.

The fully-electric, zero-emission shuttle will respect the on-site speed limit of 15 km/h, and for its first six months of service will carry a steward to observe its operation along its preprogrammed 10-minute-long roundtrip.
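The "fusion" of an absolute satnav fix with high-rate relative sensing can be sketched, in a deliberately simplified 1-D form, as a complementary filter. This is only an illustration of the core idea; the shuttle's actual localization stack (combining satnav, lidar, cameras and motion sensors) is far more sophisticated, and the function and gain below are hypothetical.

```python
# Highly simplified 1-D illustration of fusing an absolute position fix
# (e.g. satnav) with a drift-prone relative estimate (e.g. wheel/inertial
# odometry). Real automated-vehicle localization uses richer estimators
# (Kalman filters over GNSS, lidar, cameras and IMU); this shows only the
# core blending idea.
def complementary_fuse(satnav_pos, odometry_pos, gain=0.1):
    """Blend a noisy absolute fix with a smooth but drifting estimate.

    A gain near 0 trusts the odometry (smooth, but drifts over time);
    a gain near 1 trusts the satnav fix (absolute, but noisy)."""
    return odometry_pos + gain * (satnav_pos - odometry_pos)

# Odometry has drifted to 10.8 m while the satnav fix reads 10.0 m:
est = complementary_fuse(10.0, 10.8)
print(est)  # pulled part-way back toward the absolute fix
```

Run repeatedly at each sensor update, this keeps the smooth relative estimate anchored to the absolute reference without passing the satnav noise straight through.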


Figure 13: This driverless shuttle will soon be tested at ESA/ESTEC in the Netherlands (image credit: ESA, B. Smith)

New Method Can Spot Failing Infrastructure from Space

09 July 2019: We rely on bridges to connect us to other places, and we trust that they're safe. While many governments invest heavily in inspection and maintenance programs, the number of bridges that are coming to the end of their design lives or that have significant structural damage can outpace the resources available to repair them. But infrastructure managers may soon have a new way to identify the structures most at risk of failure. 18)


Figure 14: A satellite view of the Morandi Bridge in Genoa, Italy, prior to its August 2018 collapse. The numbers identify key bridge components. Numbers 4 through 8 correspond to the bridge's V-shaped piers (from West to East). Numbers 9 through 11 correspond to three independent balance systems on the bridge. In the annotated version, the black arrows identify areas of change based on data from the Cosmo-SkyMed satellite constellation (image credit: NASA/JPL-Caltech/Google)

Scientists, led by Pietro Milillo of NASA's Jet Propulsion Laboratory in Pasadena, California, have developed a new technique for analyzing satellite data that can reveal subtle structural changes that may indicate a bridge is deteriorating - changes so subtle that they are not visible to the naked eye.

In August 2018, the Morandi Bridge, near Genoa, Italy, collapsed, killing dozens of people. A team of scientists from NASA, the University of Bath in England and the Italian Space Agency used synthetic aperture radar (SAR) measurements from several different satellites and reference points to map relative displacement - or structural changes to the bridge - from 2003 to the time of its collapse. Using a new process, they were able to detect millimeter-size changes to the bridge over time that would not have been detected by the standard processing approaches applied to spaceborne synthetic aperture radar observations.
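The basic relation behind SAR displacement mapping converts interferometric phase to line-of-sight motion. The sketch below shows that standard conversion only; the study's actual multi-satellite processing chain is far more involved, and the X-band wavelength used here is an approximate value.

```python
import math

# Line-of-sight displacement from unwrapped interferometric phase: the
# standard InSAR relation. The sign convention and exact wavelength are
# illustrative assumptions.
WAVELENGTH_X_BAND = 0.031  # m, approximate X-band (COSMO-SkyMed) wavelength

def los_displacement(unwrapped_phase_rad, wavelength=WAVELENGTH_X_BAND):
    """Convert unwrapped phase (radians) to line-of-sight displacement (m).

    The two-way radar path changes by twice the displacement, so one
    full phase cycle (2*pi) corresponds to half a wavelength of motion."""
    return -wavelength / (4.0 * math.pi) * unwrapped_phase_rad

# One fringe at X-band is only ~15 mm of motion toward or away from the
# satellite; the millimeter-scale changes reported are fractions of a fringe.
print(f"{los_displacement(2 * math.pi) * 1000:.1f} mm")
```

This is why millimeter-level structural change is detectable at all: it is a measurable fraction of a phase cycle at centimeter wavelengths.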

They found that the deck next to the bridge's collapsed pier showed subtle signs of change as early as 2015; they also noted that several parts of the bridge showed a more significant increase in structural changes between March 2017 and August 2018 - a hidden indication that at least part of the bridge may have become structurally unsound.

"This is about developing a new technique that can assist in the characterization of the health of bridges and other infrastructure," Milillo said. "We couldn't have forecasted this particular collapse because standard assessment techniques available at the time couldn't detect what we can see now. But going forward, this technique, combined with techniques already in use, has the potential to do a lot of good."

The technique is limited to areas that have consistent synthetic aperture radar-equipped satellite coverage. In early 2022, NASA and the Indian Space Research Organization (ISRO) plan to launch the NASA-ISRO Synthetic Aperture Radar (NISAR), which will greatly expand that coverage. Designed to enable scientists to observe and measure global environmental changes and hazards, NISAR will collect imagery that will enable engineers and scientists to investigate the stability of structures like bridges nearly anywhere in the world about every week.

"We can't solve the entire problem of structural safety, but we can add a new tool to the standard procedures to better support maintenance considerations," said Milillo.

The majority of the SAR data for this study was acquired by the Italian Space Agency's COSMO-Skymed constellation and the European Space Agency's (ESA's) Sentinel-1a and -1b satellites. The research team also used historical data sets from ESA's Envisat satellite. The study was recently published in the journal Remote Sensing. 19)

Atomic motion captured in 4-D for the first time

27 June 2019: Everyday transitions from one state of matter to another—such as freezing, melting or evaporation—start with a process called "nucleation," in which tiny clusters of atoms or molecules (called "nuclei") begin to coalesce. Nucleation plays a critical role in circumstances as diverse as the formation of clouds and the onset of neurodegenerative disease. 20)

A UCLA-led team has gained a never-before-seen view of nucleation—capturing how the atoms rearrange at 4-D atomic resolution (that is, in three dimensions of space and across time). The findings, published in the journal Nature, differ from predictions based on the classical theory of nucleation that has long appeared in textbooks. 21)

"This is truly a groundbreaking experiment—we not only locate and identify individual atoms with high precision, but also monitor their motion in 4-D for the first time," said senior author Jianwei "John" Miao, a UCLA professor of physics and astronomy, who is the deputy director of the STROBE National Science Foundation Science and Technology Center and a member of the California NanoSystems Institute at UCLA.

Research by the team, which includes collaborators from Lawrence Berkeley National Laboratory, University of Colorado at Boulder, University of Buffalo and the University of Nevada, Reno, builds upon a powerful imaging technique previously developed by Miao's research group. That method, called "atomic electron tomography," uses a state-of-the-art electron microscope located at Berkeley Lab's Molecular Foundry, which images a sample using electrons. The sample is rotated, and in much the same way a CAT scan generates a three-dimensional X-ray of the human body, atomic electron tomography creates stunning 3D images of atoms within a material.

Miao and his colleagues examined an iron-platinum alloy formed into nanoparticles so small that it would take more than 10,000 of them laid side by side to span the width of a human hair. To investigate nucleation, the scientists heated the nanoparticles to 520°C (968°F) and took images after 9 minutes, 16 minutes and 26 minutes. At that temperature, the alloy undergoes a transition between two different solid phases.


Figure 15: The image shows 4D atomic motion captured in an iron-platinum nanoparticle at three different annealing times. The experimental observations are inconsistent with classical nucleation theory, showing the need for a model beyond the classical theory

SUN-to-LIQUID (Fuels from concentrated sunlight)

June 2019: The EU (European Union) energy roadmap for 2050 aims at a 75% share of renewables in the gross energy consumption. Achieving this target requires a significant share of alternative transportation fuels, including a 40% target share of low carbon fuels in aviation. 22) Therefore the European Commission calls for the development of sustainable fuels from non-biomass non-fossil sources.

In contrast to biofuels, solar energy is undisputedly scalable to any future demand and is already utilized at large scale to produce heat and electricity. Solar energy may also be used to produce hydrogen, but the transportation sector cannot easily replace hydrocarbon fuels, with aviation being the most notable example. Due to the long design and service lifetimes of aircraft, the aviation sector will critically depend on the availability of liquid hydrocarbons for decades to come. 23) Heavy-duty trucks and maritime and road transportation are also expected to rely strongly on liquid hydrocarbon fuels. 24) Thus, the large-volume availability of 'drop-in' capable renewable fuels is of great importance for decarbonizing the transport sector.

This challenge is addressed by the four-year solar fuels project SUN-to-LIQUID, kicked off in January 2016.

The European H2020 project aims to develop solar thermochemical technology as a highly promising fuel path at large scale and competitive cost.

Solar radiation is concentrated by a heliostat field and efficiently absorbed in a solar reactor that thermochemically converts H2O and CO2 to syngas, which is subsequently processed to Fischer-Tropsch hydrocarbon fuels. Solar-to-syngas energy conversion efficiencies exceeding 30% can potentially be realized 25) thanks to favorable thermodynamics at high temperature and utilization of the full solar spectrum. 26)
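The solar-to-syngas efficiency figure is simply the ratio of chemical energy stored in the fuel to the concentrated solar input. The bookkeeping can be sketched as below; the rate and heating value are assumed round numbers for illustration, not SUN-to-LIQUID measurements.

```python
# Illustrative solar-to-fuel efficiency bookkeeping. The numbers below are
# assumed round figures chosen only to show the calculation.
def solar_to_fuel_efficiency(fuel_rate_mol_s, fuel_hhv_j_mol, solar_power_w):
    """eta = rate of chemical energy stored in the fuel / solar power input."""
    return fuel_rate_mol_s * fuel_hhv_j_mol / solar_power_w

# Example: syngas produced at 0.05 mol/s with a heating value of ~285 kJ/mol
# (roughly that of H2 or CO), under a 50 kW concentrated solar input:
eta = solar_to_fuel_efficiency(0.05, 285e3, 50e3)
print(f"eta ~ {eta:.1%}")
```

With these assumed figures the plant would be in the vicinity of the 30% target, which makes clear how demanding that target is: the fuel output must carry away nearly a third of all the solar power entering the reactor.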

Expected Innovations

The following key innovations are expected from the SUN-to-LIQUID project:

• Advanced modular solar concentration technology for high-flux/high-temperature applications.

• Modular solar reactor technology for the thermochemical production of syngas from H2O and CO2 at field scale and with record-high solar energy conversion efficiency.

• Optimization of high-performance redox materials and reticulated porous ceramic (RPC) structures with favorable thermodynamics, rapid kinetics, stable cyclic operation, and efficient heat and mass transfer.

• Pre-commercial integration of all subsystems of the process chain to solar liquid fuels, namely: the high-flux solar concentrator, the solar thermochemical reactor, and the gas-to-liquid conversion unit.


SUN-to-LIQUID will design, fabricate, and experimentally validate a large-scale, complete solar fuel production plant.

The preceding EU-project SOLAR-JET has recently demonstrated the first-ever solar thermochemical kerosene production from H2O and CO2 in a laboratory environment. 27) A total of 291 stable redox cycles were performed, yielding 700 standard liters of high-quality syngas, which was compressed and further processed via Fischer-Tropsch synthesis to a mixture of naphtha, gasoil, and kerosene. 28)

As a follow-up project, SUN-to-LIQUID will design, fabricate, and experimentally validate a more than 12-fold scale-up of the complete solar fuel production plant and will establish a new milestone in reactor efficiency. The field validation will integrate for the first time the whole production chain from sunlight, H2O and CO2 to liquid hydrocarbon fuels.


Figure 16: SUN-to-LIQUID will realize three subsystems (image credit: EC)

1) A high-flux solar concentrating subsystem — Consisting of a sun-tracking heliostat field that delivers radiative power to a solar reactor positioned at the top of a small tower.

2) A 50 kW solar thermochemical reactor subsystem — For syngas production from H2O and CO2 via the ceria-based thermochemical redox cycle, with optimized heat transfer, fluid mechanics, material structure, and redox chemistry.

3) A gas-to-liquid conversion subsystem — Comprising compression and storage units for syngas and a dedicated micro FT unit for the synthesis of liquid hydrocarbon fuels.

SUN-to-LIQUID will run a long-term operation campaign, parametrically optimizing the solar thermochemical fuel plant on a daily basis over a period of months under realistic steady-state and transient conditions relevant to large-scale industrial implementation.

Concept and Approach

The SUN-to-LIQUID approach uses concentrated solar energy to synthesize liquid hydrocarbon fuels from H2O and CO2. This reversal of combustion is accomplished via a high-temperature thermochemical cycle based on metal oxide redox reactions which convert H2O and CO2 into energy-rich synthesis gas (syngas), a mixture of mainly H2 and CO.29) This two-step cycle for splitting H2O and CO2 is schematically represented by:

The thermochemical process, written schematically for a generic metal oxide MO:

1) Solar reduction (endothermic): MO(oxidized) → MO(reduced) + 1/2 O2

2) Oxidation with H2O and CO2 (fuel-producing): MO(reduced) + H2O → MO(oxidized) + H2, and MO(reduced) + CO2 → MO(oxidized) + CO

Since H2/CO and O2 are formed in different steps, the problematic high-temperature fuel/O2 separation is eliminated. The net product is high-quality synthesis gas (syngas), which is further processed to liquid hydrocarbons via Fischer-Tropsch (FT) synthesis. FT synthetic paraffinic kerosene derived from syngas is already certified for aviation.
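The chain-length distribution of Fischer-Tropsch products is commonly described by the idealized Anderson-Schulz-Flory (ASF) model, in which a single chain-growth probability governs the product slate. The sketch below uses an assumed, typical growth probability for illustration; it is not SUN-to-LIQUID data.

```python
# Anderson-Schulz-Flory (ASF) distribution: a standard idealized model of
# Fischer-Tropsch product chain lengths. The alpha value below is an
# assumed, typical figure for illustration only.
def asf_mass_fraction(n, alpha):
    """Mass fraction of hydrocarbons with carbon chain length n, given a
    chain-growth probability alpha (0 < alpha < 1)."""
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85  # assumed chain-growth probability
# Kerosene-range products, roughly C10-C16:
kerosene_cut = sum(asf_mass_fraction(n, alpha) for n in range(10, 17))
print(f"kerosene-range mass fraction: {kerosene_cut:.2f}")
```

The model makes the practical point directly: a single FT pass yields a spread of products (naphtha, gasoil, kerosene), so the kerosene fraction is only part of the slate and depends strongly on the chain-growth probability.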

SUN-to-LIQUID uses concentrated solar radiation as the source of high-temperature process heat to drive endothermic chemical reactions for solar fuel production. 30) A variety of redox active materials have been explored by different research groups. 31) Among them, non-stoichiometric cerium oxide (ceria) has emerged as an attractive redox active material because of its high oxygen ion conductivity and cyclability, while maintaining its fluorite-type structure and phase.

Reactor configuration

The laboratory-scale solar reactor for a radiative power input of 4 kW has been designed, fabricated, and experimentally demonstrated at ETH Zurich. The reactor configuration, which was used in the FP7-project SOLAR-JET, is schematically shown in Figure 17.

It consists of a cavity receiver containing a reticulated porous ceramic (RPC) foam-type structure made of pure CeO2 that was directly exposed to concentrated solar radiation. The production of H2 from H2O, CO from CO2, and high quality syngas suitable for FT synthesis by simultaneously splitting a mixture of H2O and CO2 has been demonstrated (Ref. 28).

The main objective of SUN-to-LIQUID is the scale-up and experimental demonstration of the complete process chain to solar liquid fuels from H2O and CO2 at a pre-commercial size, i.e. moving from a 4 kW setup in the laboratory to a 50 kW pre-commercial plant in the field. SUN-to-LIQUID will demonstrate an enhanced solar-to-fuel energy conversion efficiency and validate the field suitability.


Figure 17: Schematic of the reactor configuration in the FP7-project SOLAR-JET (image credit: EC)


The high-flux solar concentrating subsystem consists of an ultra-modular solar heliostat central receiver that provides intense solar radiation for high-temperature applications beyond the capabilities of current commercial CSP (Concentrating Solar Power) installations. The subsystem was constructed at IMDEA Energía at Móstoles Technology Park, Madrid, in 2016. The customized heliostat field makes use of the most recent developments in small heliostats and a tower of reduced height (15 m) to minimize visual impact. The field consists of 169 small heliostats (1.9 m x 1.6 m each). When all heliostats are aligned, the specified flux above 2500 kW/m2 can be delivered over a 16 cm aperture, for at least 50 kW of radiative power, with a peak flux of 3000 kW/m2. A reliable road map for competitive drop-in fuel production from H2O, CO2, and solar energy will be established.
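The quoted concentrator figures are self-consistent, which a quick back-of-envelope check confirms: the specified flux integrated over the 16 cm aperture reproduces the stated 50 kW.

```python
import math

# Quick consistency check of the quoted concentrator figures:
# a flux of 2500 kW/m2 through a 16 cm diameter aperture.
flux = 2500e3       # W/m2, specified minimum flux at the aperture
aperture_d = 0.16   # m, aperture diameter
area = math.pi * (aperture_d / 2) ** 2
power = flux * area
print(f"power through aperture: {power / 1e3:.1f} kW")  # ~50 kW, as stated

# Equivalent concentration factor relative to ~1 kW/m2 direct sunlight:
print(f"concentration: ~{flux / 1e3:.0f} suns")
```

The second line also shows where the "concentrates sunlight by a factor of 2500" figure quoted later in this section comes from: 2500 kW/m2 is roughly 2500 times the ~1 kW/m2 of unconcentrated direct sunlight.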

Figure 18: The SUN-to-LIQUID project develops an alternative fuel technology that promises unlimited renewable transportation fuel supply from water, CO2 and concentrated sunlight. The project, which is funded by the EU and Switzerland, can have important implications for the transportation sectors, especially for the long-haul aviation and shipping sectors, which are strongly dependent on hydrocarbon fuels (video credit: ARTTIC, Published on 12 June 2019)

SUN-to-LIQUID Field Test Project

The SUN-to-LIQUID four-year project, which finishes at the end of this year, is supported by the EU’s Horizon 2020 research and innovation program and the Swiss State Secretariat for Education, Research and Innovation. It involves leading European research organizations and companies in the field of solar thermochemical fuel research. In addition to ETH Zurich, IMDEA Energy and HyGear Technology & Services, other partners include the German Aerospace Center (DLR) and Abengoa Energía. Project coordinator Bauhaus Luftfahrt is also responsible for technology and system analyses and ARTTIC International Management Services is supporting the consortium with project management and communication. 32)

The preceding EU-project SOLAR-JET developed the technology and achieved the first-ever production of solar jet fuel in a laboratory environment. The SUN-to-LIQUID project scaled up this technology for on-sun testing at a solar tower. For that purpose, a unique solar concentrating plant was built at the IMDEA Energy Institute in Móstoles, Spain. “A sun-tracking field of heliostats concentrates sunlight by a factor of 2500 – three times greater than current solar tower plants used for electricity generation,” explains Manuel Romero of IMDEA Energy. This intense solar flux, verified by the flux measurement system developed by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) makes it possible to reach reaction temperatures of more than 1500 ºC within the solar reactor positioned at the top of the tower. 33)


Figure 19: The sun-tracking heliostat field delivers radiative power to a solar reactor positioned at the top of the tower (image credit: Christophe Ramage ©ARTTIC 2019)

The solar reactor, developed by project partner ETH Zurich, produces synthesis gas, a mixture of hydrogen and carbon monoxide, from water and carbon dioxide via a thermochemical redox cycle. An on-site gas-to-liquid plant that was developed by the project partner HyGear processes this gas to kerosene.

DLR has many years of experience in the development of solar-thermal chemical processes and their components. In the SUN-to-LIQUID project, DLR was responsible for measuring the solar field and concentrated solar radiation, for developing concepts for optimized heat recovery and – as in the previous SOLAR-JET project – for computer simulations of the reactor and the entire plant. Researchers from the DLR Institute of Solar Research and the DLR Institute of Combustion Technology used virtual models to scale up the solar production of kerosene from the laboratory to a megawatt-scale plant and to optimize the design and operation of the plant. For SUN-to-LIQUID, DLR solar researchers developed a flux density measurement system that makes it possible to measure the intensity of highly concentrated solar radiation directly in front of the reactor with minimal interruption of its operation. This data is necessary to operate the plant safely and to determine the efficiency of the reactor.

Unlimited supply of sustainable fuel: Compared to conventional fossil-derived jet fuel, the net carbon dioxide emissions to the atmosphere can be reduced by more than 90 percent. Furthermore, since the solar energy-driven process relies on abundant feedstock and does not compete with food production, it can meet the future fuel demand at a global scale without the need to replace the existing worldwide infrastructure for fuel distribution, storage, and utilization.

Melting a satellite, a piece at a time

17 June 2019: Researchers took one of the densest parts of an Earth-orbiting satellite, placed it in a plasma wind tunnel then proceeded to melt it into vapor. Their goal was to better understand how satellites burn up during reentry, to minimize the risk of endangering anyone on the ground. 34)


Figure 20: A rod-shaped magnetorquer – made of an external carbon fiber reinforced polymer composite, with copper coils and an internal iron-cobalt core – being melted at thousands of degrees C inside a DLR plasma wind tunnel. This atmospheric reentry simulation was performed as part of ESA's 'Design for Demise' efforts to reduce the risk of reentering satellites reaching the ground (image credit: ESA/DLR)

Taking place as part of ESA’s Clean Space initiative, the fiery testing occurred inside a plasma wind tunnel, reproducing reentry conditions, at the DLR German Aerospace Center’s site in Cologne.

The test subject was a magnetorquer, designed to interact magnetically with Earth’s magnetic field to shift satellite orientation.

Figure 21: Melting a piece of a satellite. Researchers took one of the heaviest, bulkiest parts of an Earth-orbiting satellite, placed it in a plasma wind tunnel, then proceeded to melt it into vapor. Their goal was to better understand how satellites burn up during reentry, to minimize the risk of endangering anyone on the ground (video credit: ESA/DLR/Belstead Research)

The mysterious crystal that melts at two different temperatures

06 June 2019: In a little-known paper published in 1896, Emil Fischer—the German chemist who would go on to win the 1902 Nobel Prize in Chemistry for synthesizing sugars and caffeine—said his laboratory had produced a crystal that seemed to break the laws of thermodynamics. To his puzzlement, the solid form of acetaldehyde phenylhydrazone (APH) kept melting at two very different temperatures. A batch he produced on Monday might melt at 65 °C, while a batch on Thursday would melt at 100 °C. 35)

Colleagues and rivals at the time told him he must have made a mistake. Fischer didn’t think so. As far as he could tell, the crystals that melted at such different points were identical. A few groups in Britain and France repeated his work and got the same baffling results. But as those scientists died off, the mystery was forgotten, stranded in obscure academic journals published in German and French more than a century ago.

There it would probably have remained but for Terry Threlfall, an 84-year-old chemist at the University of Southampton, UK. Stumbling across Fischer’s 1896 paper in a library about a decade ago, Threlfall was intrigued enough to kick-start an international investigation of the mysterious crystal. Earlier this year in the journal Crystal Growth and Design, Threlfall and his colleagues published the solution: APH is the first recorded example of a solid that, when it melts, forms two structurally distinct liquids. Which liquid emerges comes down to contamination so subtle that it’s virtually undetectable. 36)


Figure 22: Crystals of acetaldehyde phenylhydrazone appear colorful when exposed to polarized light under a microscope (image credit: Terry Threlfall)

A forgotten mystery

The quest began in 2008 when Threlfall, a fluent speaker of German and a keen student of the history of science, was searching the pages of the 140-year-old Berichte der deutschen chemischen Gesellschaft for interesting solid-state work relevant to his research on second-order phase transitions. After learning of the long-lost puzzle from Fischer's paper, Threlfall followed the reported recipe and found that his own samples of APH melted according to the same peculiar pattern. One batch melted at around 60 °C, the other at 90–95 °C.

As Fischer knew 125 years ago, the laws of thermodynamics do not allow such a molecule. If a pair of solids have different melting points, then they must be structurally distinct. Yet all the modern structural analysis techniques that Threlfall and some colleagues tried on Fischer’s compound confirmed the 19th-century claim. X-ray diffraction, nuclear magnetic resonance, IR spectroscopy: All showed the crystals that behaved so differently were identical.

“For two years we wondered whether to believe the evidence of our own eyes and think that we needed to rewrite the laws of the universe, or to believe thermodynamics and think that we were simply incompetent experimentalists,” Threlfall says.


Figure 23: Nobel laureate Emil Fischer works in his lab in 1904, eight years after describing a mysterious solid with multiple melting points (image credit: Nicola Perscheid)

Piecing together the puzzle

The first clue for solving the mystery came from the way APH crystals are prepared. The molecule (C8H10N2) is made up of a benzene ring attached to a pair of nitrogen atoms, one of which is attached to a hydrogen atom and a methyl group that can point either up or down. Chemists make APH by dissolving solid acetaldehyde (a precursor for many useful chemical reactions and a compound found naturally in fruit) into aqueous ethanol and adding drops of liquid phenylhydrazine (also first made and characterized by Fischer, who used it in his seminal studies of sugars). If the mixture is chilled and stirred, jagged flakes and then thicker chunks of APH crystals start to appear.


Figure 24: Terry Threlfall and his colleagues confirmed that there are low-melting-point and high-melting-point forms of APH. The y axis represents the heat absorbed in melting; the measured absorption is the area under the curve (image credit: Terry Threlfall)

According to reports from Fischer’s time, there were hints that impurities could play a role in the puzzling behavior of APH. Adding drops of an acid could steer the crystallization process toward the low-melting-point version of the molecule; with added alkali, the high-melting-point crystal would emerge. Threlfall confirmed that claim and found that he could convert between the two forms. The low-melting version could be made to melt at the higher temperature by exposing it to ammonia vapor. And the high-melting crystal just needed a whiff of acid to bring its melting point down.

That behavior seemed to suggest that the acid worked like rock salt does in lowering the melting point of water ice. But for salt to make a difference, a significant amount must be added—certainly enough to show up in a close examination of the ice’s structure. At as little as a thousandth of a molar equivalent, the quantities of acid or alkali needed to make the switch in APH were vanishingly small. Whatever contamination occurred did so with no detectable physical change to the crystal structure.

Threlfall got some important help from Hugo Meekes, a solid-state physicist at Radboud University in Nijmegen, the Netherlands. After hearing of a 2012 lecture that Threlfall had given about the conundrum, Meekes wondered if the solution might relate to a different, but equally curious, phenomenon called the disappearing polymorph problem. A scourge of drug companies, the problem manifests as the production of a solid that’s slightly but consequentially different from the desired product. The polymorphs are identical except for varying crystalline structures, which can give them different properties. In the late 1990s, for example, Abbott Laboratories learned that it had produced a less-soluble polymorph of its antiviral crystalline compound ritonavir.

The cause of disappearing polymorphs is disputed, but Meekes says it seems to come down to imperceptible contamination—perhaps a single molecule in the air can disrupt the process by seeding crystallization of the problematic form. “It sounds rather unbelievable, but it’s the only explanation,” he says. “We thought the situation with the APH must be something like this.”

But the APH case didn’t fit the pattern. The crystals of APH that melted at different temperatures weren’t polymorphs; they were identical. The researchers failed to find any other structural discrepancies either. For example, some molecules show different physical properties when their same atoms are arranged in different patterns, which is called isomerization. But both solid forms of APH contained the Z isomer, in which the methyl group points down.

Meekes too was stumped.

Enter Manuel Minas da Piedade, a solid-state physicist and thermodynamics researcher at the University of Lisbon, whom Threlfall met at a conference in 2011. After initially offering a hunch that led to another dead end, the Portuguese physicist did what many scientists do when faced with something that doesn’t add up: He went back to first principles. Because it is impossible for the same material to melt at different temperatures if the initial and final states are the same, he says, “either we don’t have the same crystal state, or the final state cannot be the same.”

Until then, all the tests performed by Threlfall and a growing number of interested colleagues had focused on solid APH, since differences in melting point typically stem from differences in the solid form. But, out of options on the solid front, in 2015 the researchers took a look at the liquids that emerged.

Back in the Netherlands, Meekes spun tiny tubes of the hot, molten APH in a solid-state NMR machine, once with the low-melting-point sample and once with the high-melting-point one. Occasional forays to temperatures higher than the delicate equipment’s 100 °C limit led to “frowning technicians,” Meekes says, but the risk was worth it. He discovered that the spectra of the two liquids were different. The same solid crystal was melting to form two liquids with distinct compositions—an unprecedented finding. “We think we have a clue as to what’s going on,” Meekes recalls telling Threlfall at a conference.


Figure 25: Study coauthors Simon Coles (left) and Terry Threlfall performed some of their APH detective work at the UK National Crystallography Service at the University of Southampton (image credit: Simon Coles)

Tricky liquid

The difference, Meekes, Threlfall, and colleagues soon found as they probed further, comes down to isomerization, but only in the liquid phase. Although solid APH consists of solely the Z isomer, liquid APH also contains E isomer, in which the methyl group points up. In the liquid state, with the molecules spaced farther apart and therefore with more room to maneuver, APH can flit between the two forms, and it does so until it finds the most stable mix. That turns out to be a blend of about one-third of the Z isomer and two-thirds of the E form.

The relative amounts of each isomer at equilibrium are determined by the molecules’ Gibbs free energies, a measure of their thermodynamic potential. As the difference in Gibbs energy increases, so does the ratio of one isomer to the other. What makes APH so unusual, Threlfall says, is that the optimal isomer combination for liquid APH doesn’t match that of the solid form. “That the [solid] crystal is composed entirely of Z molecules shows that these must have a more favorable packing,” he says.
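
The relationship between the Gibbs-energy difference and the equilibrium isomer ratio can be sketched numerically. The script below is illustrative only: it implements the standard Boltzmann relation between an energy gap and a population ratio, and both the temperature and the roughly 1:2 Z:E mix are assumptions chosen for the example, not values reported by the study.

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def isomer_fractions(delta_g, temperature):
    """Equilibrium fractions of two interconverting isomers.

    delta_g = G_E - G_Z in J/mol (negative favors the E form).
    Uses the Boltzmann relation K = [E]/[Z] = exp(-delta_g / (R*T)).
    Returns (fraction_Z, fraction_E).
    """
    k = math.exp(-delta_g / (R * temperature))
    fraction_e = k / (1.0 + k)
    return 1.0 - fraction_e, fraction_e

# Illustrative numbers only: a ~1:2 Z:E liquid mix implies
# delta_g = -R*T*ln(2) at the (here assumed) temperature of 400 K.
T = 400.0
dg = -R * T * math.log(2.0)
fz, fe = isomer_fractions(dg, T)
```

Run in reverse, the same relation lets one estimate the Gibbs-energy gap from a measured isomer ratio.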


Figure 26: An NMR analysis of liquid APH revealed structural differences between the low-melting-point (black line) and high-melting-point (red) forms (image credit: Terry Threlfall)

Tests showed that the high-melting solid crystal melted to a liquid that was also all Z. Then the Z-type molecules started to flip to E-type and continued until they hit that stable mix. But when the low-melting solid APH melted, it did so almost immediately to the stable mix of two-thirds E. The two liquids are different—and so the melting points are different—only because one represents an intermediate stage.

It was a melting-point suppression effect, just like salt and ice, but it was much larger than anyone on the team had thought possible. So what was behind it? As with the salt, they reasoned, the cause must be an impurity. And like the disappearing polymorphs that plague the pharmaceutical industry, that impurity is too small to see or measure. Threlfall says hydrogen ions must be clinging to the surface of the solid crystal and catalyzing the shift from the Z form to the E form. To do so, those protons shift the electron density of the nitrogen atoms, which loosens the connection between nitrogen and carbon atoms in the APH molecules from a strong double bond to a weaker single one. The bond is therefore free to rotate, allowing a much more rapid switch between the Z and E forms.


Figure 27: Two isomers of APH. As a solid, molecules of APH take the Z form (left), in which the methyl group points down. But liquid APH also contains the E isomer, in which the methyl group points up (image credit: Leyla-Cann Söğütoğlu and Hugo Meekes)

With no acid present, the Z-form solid melts to Z-form liquid, and then this Z-form liquid starts the transition to E-form liquid until it reaches the stable 1:2 ratio. But when acid is there, the catalysis effect speeds the switch from Z form to E form, so much so that it happens as the solid melts.

Overall, the starting solid is the same, the finishing liquid is the same, and the amount of energy used is the same. The laws of the universe are safe. Gérard Coquerel, who works on thermodynamics and solid-state physics at the University of Rouen, France, and was not involved in the project, says it’s an important discovery that organic chemists and others who rely on melting points to help characterize compounds should take into account. “It shows that sometimes there is a need to be careful about what we consider as the melting point,” he says.

Fischer would have been delighted to see the answer, Threlfall says, and the 19th-century chemist would probably have understood it. Although the team’s work breaks genuinely new ground, Meekes cheerfully admits that the circumstances under which the melting-point suppression occurs are so specific that the research is unlikely to have useful applications. The team hasn’t even coined a name for the physical process by which identical solids can melt into distinct liquids. “If someone else wants to name it, then they can,” Threlfall says. “But if you ask me, the scientific literature is already cluttered with too many needless terms.”

Mission Control 'Saves Science'

17 May 2019: Every minute, ESA’s Earth observation satellites gather dozens of gigabytes of data about our planet – enough information to fill the pages on a 100-meter-long bookshelf. Flying in low-Earth orbits, these spacecraft are continuously taking the pulse of our planet, but it's teams on the ground at ESA’s Operations Center in Darmstadt, Germany, that keep our explorers afloat. 37)


Figure 28: ESA has been dedicated to observing Earth from space ever since the launch of its first Meteosat weather satellite back in 1977. With the launch of a range of different types of satellites over the last 40 years, we are better placed to understand the complexities of our planet, particularly with respect to global change. Today’s satellites are used to forecast the weather, answer important Earth-science questions, provide essential information to improve agricultural practices and maritime safety, help when disaster strikes, and support all manner of everyday applications (image credit: ESA)

From flying groups of spacecraft in complex formations to dodging space debris and navigating the ever-changing conditions in space known as space weather, ESA’s spacecraft operators ensure we continue to receive beautiful images and vital data on our changing planet.

Get in formation

Many Earth observation satellites travel in formation. For example, the Copernicus Sentinel-5P satellite follows behind the Suomi-NPP satellite (from the National Oceanic and Atmospheric Administration). Flying in a loose trailing formation, they observe parts of our planet in quick succession and monitor rapidly evolving situations. Together they can also cross-validate instruments on board as well as the data acquired.

ESA’s Earth Explorer Swarm mission is another example of complex formation flying. On a mission to provide the best ever survey of Earth’s geomagnetic field, it comprises three identical satellites flying in what is called a constellation formation.

Swarm’s individual satellites operate together under shared control in a synchronized manner, accomplishing the same objective as one giant – and more expensive – satellite.

“Formation flying has all the challenges of flying many single spacecraft, except with the added complexity that we need to maintain a regular distance between all of these high-speed and high-tech eyes on Earth,” explains Jose Morales Santiago, ESA’s Head of the Earth Observation Mission Operations Division. “Every decision we make, every command we send, has to be the right one for each spacecraft – particularly when it comes to maneuvers. These must be planned properly so that they do not endanger companion satellites, while keeping a consistent configuration across the formation.”


Figure 29: Swarm is ESA's first Earth observation constellation of satellites. The three identical satellites are launched together on one rocket. Two satellites orbit almost side-by-side at the same altitude – initially at about 460 km, descending to around 300 km over the lifetime of the mission. The third satellite is in a higher orbit of 530 km and at a slightly different inclination. The satellites’ orbits drift, resulting in the upper satellite crossing the path of the lower two at an angle of 90° in the third year of operations. The different orbits along with satellites’ various instruments optimize the sampling in space and time, distinguishing between the effects of different sources and strengths of magnetism (image credit: ESA/AOES Medialab)

Saving science

Last year, ESA’s Earth observation missions performed a total of 28 ‘collision avoidance maneuvers’. In each of these, operators sent orders to a spacecraft to move it out of the way of an oncoming piece of space debris.

An impact with a fast-moving piece of space junk has the potential to destroy an entire satellite and in the process create even more debris. As a spacecraft ‘swerves’ to avoid collision, science instruments may need to be turned off to ensure their safety and avoid being contaminated by the thrusting engine.

Teams at mission control consider how to keep Europe’s fleet of Earth observers safe while maximizing the vital work they are able to do. Recently, they came up with an ingenious concept to ‘save science’ during such maneuvers of the Sentinel-5P satellite.

The Sentinel team quickly realized that during a collision avoidance maneuver they would have to suspend science collection for almost a day, because of the emergency firing of the thrusters.

“That’s a lot of data to miss out on. As the amount of space debris is currently increasing, this would be something we would need to do more and more often,” explains Pierre Choukroun, Sentinel-5P Spacecraft Operations Engineer, who came up with the fix. “So we designed and validated a new on-board function to enhance the spacecraft’s autonomy, such that the science data loss is reduced to a bare minimum. We are very much looking forward to securing more data for the science community in the near future!”

With this new strategy, the science instruments on Sentinel-5P would be shut off for only around one hour rather than an entire day.

Sun protection

As if dodging bits of space debris weren’t enough for Europe’s Earth explorers, they also have to navigate the turbulent weather conditions in space.

Space weather refers to the environmental conditions around Earth due to the dynamic nature of our Sun. The constant mood swings of our star influence the functioning and reliability of our satellites in space, as well as infrastructure on the ground.

Figure 30: SOHO's view of the September 2017 solar flares. The Sun unleashed powerful solar flares on 6 September, one of which was the strongest in over a decade. An M-class flare was also observed two days earlier on 4 September. The flares were launched from a group of sunspots classified as active region 2673. The shaded disc at the center of the image is a mask in SOHO’s LASCO instrument that blocks out direct sunlight to allow study of the faint details in the Sun's corona. The white circle added within the disc shows the size and position of the visible Sun (video credit: SOHO (ESA & NASA))

When the Sun is particularly active, it adds extra energy to Earth’s atmosphere, changing the density of the air at low-Earth orbits. Increased energy in the atmosphere means that satellites in this region experience more ‘drag’ – a force that acts in the opposite direction to the motion of the spacecraft, causing it to decrease in altitude.
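
The drag effect described above can be sketched with the standard drag-acceleration formula. All satellite numbers below (air density, drag coefficient, cross-sectional area, mass, altitude) are assumed, representative values for a small satellite in low-Earth orbit, not parameters of any particular ESA mission.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def drag_deceleration(rho, velocity, cd, area, mass):
    """Drag deceleration a = 0.5 * rho * v^2 * Cd * A / m (m/s^2)."""
    return 0.5 * rho * velocity**2 * cd * area / mass

# Representative (assumed) values for a small satellite at ~500 km:
altitude = 500e3
v = math.sqrt(MU_EARTH / (R_EARTH + altitude))  # circular orbital speed
rho = 1e-12  # kg/m^3; thermospheric density rises sharply with solar activity
a = drag_deceleration(rho, v, cd=2.2, area=1.0, mass=500.0)
```

The deceleration is tiny, but it acts continuously; when solar activity inflates the atmosphere, rho can grow by an order of magnitude or more, which is why operators must fold space-weather data into their orbit predictions.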

Operators need this information to know when to perform maneuvers to “boost” the satellite’s speed in order to counter drag and keep it in its proper orbit.

This drag effect also changes the speed and position of space debris around Earth, meaning our understanding of the debris environment needs to be constantly updated in light of changing space weather.

“While Earth observation satellites monitor the weather on Earth, we have to stay aware of the changing weather in space,” says Thomas Ormston, Spacecraft Operations Engineer at ESA. “This is vital because understanding atmospheric drag is fundamental to predicting when we will be threatened by space debris and determining when and how big our spacecraft maneuvers need to be to keep delivering great science to our users.”

Space weather also impacts communication between ground stations and satellites due to changes in the upper atmosphere, the ionosphere, during solar events. Because of this, satellite operators avoid critical operations such as maneuvers or updates of the on-board software during periods of high solar activity.


Figure 31: It’s difficult to comprehend the size and sheer power of our Sun. A churning ball of hot gas with a volume some 1.3 million times that of Earth, it dominates our Solar System. Unpredictable and temperamental, it blasts intense radiation and colossal amounts of energetic material in every direction, creating the ever-changing conditions in space known as 'space weather'. The solar wind is a constant stream of electrons, protons and stripped-down atoms emitted by the Sun, while coronal mass ejections are the Sun’s periodic outbursts of colossal clouds of solar plasma. The most extreme of these events disturb Earth’s protective magnetic field, creating geomagnetic storms at our planet. — These storms can cause serious problems for modern technological systems, disrupting or damaging satellites in space and the multitude of services – like navigation and telecoms – that rely on them, and blacking out power grids and radio communication. They can even serve potentially harmful doses of radiation to astronauts on future missions to the Moon or Mars (image credit: ESA)