Jahnavi Shah

  • Sounds of Climate Change

    Happy 2019 everyone! 

    I would like to start off by setting some goals for my blog for the new year. Firstly, I will write more often than last year, hopefully an update every 1-2 weeks. And more importantly, I would like to use blog posts as an opportunity to learn and present something new every time (in addition to providing research updates, of course). So let's get started!

    Quick Research Update

    Things in the research world are going well. I have started combining different data layers (optical, radar, and topography) for craters in ArcMap and am seeing some neat products. I have also been exploring different settings in ArcMap to find the most effective way to present each data layer. I am definitely feeling more comfortable working with the data and am now working towards putting the little bits together (so I'm reading and re-reading the fundamentals). I have intentionally omitted image examples because I want to compile some samples and do a mini mock crowd-sourcing run in the lab (hopefully in the next few weeks).

    Sounds of Climate Change

    "The underwater soundscape can be as noisy as any jungle or rainforest." - Kate Stafford, Oceanographer 

    Fish and marine mammals use sound to study their habitat, communicate with each other, and navigate. Unlike humans, who tend to be very visual animals, marine mammals (such as dolphins and bowhead whales) rely on sound to "see". Light transmits poorly underwater, whereas sound transmits very well, so signals can be heard over large distances. Let's focus on the Arctic as an example and listen to the underwater sounds of Arctic marine life (skip to 03:49):

    Although the Arctic underwater world is a rich soundscape, it has some of the lowest ambient noise levels of the world's oceans when the ice is frozen solid. However, this is changing, mainly due to a decrease in seasonal sea ice, which is a result of increased greenhouse gas emissions. A decrease in seasonal sea ice means an increase in the open-water season. This is causing a loss of habitat for animals such as ice seals, walrus, and polar bears. It is also changing prey availability for these marine mammals and for birds.


    In addition to physical habitat loss, the decrease in sea ice is causing a loss of acoustic habitat. There are three ways in which we are able to hear the 'sounds of climate change' using hydrophones (underwater microphones that record ambient noise).

    Air: Wind creates waves, which contribute a noise like a hiss or static in the background. Previously, the wind didn't make it into the water column because the ice acted as a buffer between the wind and the water. However, due to climate change, there are not only more waves in the Arctic, but also an increasing number of intense storms, which significantly raise noise levels.

    Water: With less seasonal sea ice, subarctic species are moving north into the Arctic, which offers a new habitat opportunity for these mammals. For example, oceanographers are hearing the sounds of fin, humpback, and killer whales farther north and later in the season. This invasion of the Arctic brings increased competition for food, a risk of new diseases, and new sounds.

    Land: Due to the longer open-water season, there is an increase in human activities in the Arctic, including oil and gas exploration and extraction, commercial shipping, and tourism. Ship noise increases levels of stress hormones in whales and can disrupt feeding behaviour. As another example, dolphins' reproductive rates have declined as a result of noise from dolphin-watching boats in western Australia. This increase in human activity is shrinking the acoustic space over which Arctic marine mammals can communicate.

    The contribution from land (or people) is the most significant because it is the only one humans can control. We can't control the winds or the migration of subarctic species to the north. Arctic marine mammals have evolved with sounds coming primarily from sea ice and other sea animals. These sounds are essential for their survival, but the sounds from ships are loud and alien, and they are disrupting the animals' habitat. Some solutions have been put into play to minimize the disruption, such as slowing ships down (slower ships are quieter ships). Additionally, we can restrict access to the Arctic in seasons and regions that are important for mating, feeding, or migrating. I think it's really important for us to recognize that our actions have consequences in different realms, not limited to disruptions of the physical space but extending to the acoustic space as well (in the case of Arctic marine mammals). I would recommend checking out the TED talks linked below. The second one has some interesting figures which I was not able to include in this post.

     

    TED talks: Kate Stafford - How human noise affects ocean habitats, Peter Tyack - The intriguing sound of marine mammals 

     


  • CRISPR and Bioethics

    I'm sure many of you have heard about the CRISPR story in the news. A Chinese scientist, He Jiankui of the Southern University of Science and Technology, claims to have created the first gene-edited babies. It was recently disclosed that the genomes of twin girls, conceived using IVF, had been modified to make them resistant to HIV. However, there is no official proof yet and the case is being investigated. I want to use this space to share what I've learned about CRISPR in general and the ethical implications related to human genome editing.

    What is CRISPR?

    Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) are a bacterial defense system that forms the basis for CRISPR-Cas9 genome editing technology. The system can be programmed to target specific stretches of genetic code and to edit DNA at precise locations; it can also be used to build new diagnostic tools. With these systems, genes in living cells and organisms can be permanently modified. In the future, it may even be possible to correct mutations at precise locations in the human genome in order to treat genetic causes of disease.

    CRISPRs were first discovered in archaea and were thought to serve as part of the bacterial immune system, defending against invading viruses. They consist of repeating sequences of genetic code interrupted by "spacer" sequences (remnants of genetic code from past invaders, so the system serves as a genetic memory). These spacers help the cell detect and destroy a bacteriophage when it returns.

    In the video below, Feng Zhang, a pioneer in developing genome-editing tools from natural microbial CRISPR-Cas9 systems, provides a simple explanation of CRISPR:

    In January 2013, the Zhang lab published the first method to engineer CRISPR to edit the genome in mouse and human cells. At the International Summit on Human Gene Editing in 2015, there was a consensus to hold off on human genome editing until the implications were fully considered. So, given the recent news, I looked into some of the ethical issues related to genome editing and came across a paper by Rodriguez (2016), which I will briefly present here.

    Ethical Issues

    • Balance of risks and benefits
      • The CRISPR/Cas9 technique carries a risk of off-target mutations, which can be harmful. Additionally, large genomes may contain multiple DNA sequences identical to the intended target sequence. These unintended sequences could be cut, causing cell death or transformation.
    • Ecological disequilibrium
      • The possibility of off-target mutations may increase with each generation. If there is a risk of transferring genes to other species, then there is a risk of passing modified sequences (and the negative trait) to related organisms, which could lead to the disappearance of a whole population. This would have drastic consequences for ecosystem equilibrium; for example, other plagues may develop.
    • Informed consent
      • For human germline therapy, it would be impossible to obtain informed consent because the patients affected by the edits are the embryo and future generations.
    • Justice and equity
      • There is concern that genome editing might only be accessible to the wealthy, which would widen the existing gap in access to health care. Furthermore, this could create classes of individuals defined by the quality of their engineered genome.
    • Genome editing for enhancement
      • Even though the goal is to use genome editing to improve patients' health, there is a possibility of non-therapeutic interventions. For example, it could be used to enhance athletic performance, prevent violent behaviour, or diminish addictions. Socially, there would be a problem if some individuals were genetically enhanced, giving them an upper hand over others, for example in intellectual capacity. The latter ties in with the justice and equity issue above.

    There needs to be a discussion about the social, ethical, and legal implications of using genome-editing techniques in the human germline and in other organisms. There are many factors and risks involved that could spiral out of control without definitive boundaries and regulations.


  • Image Processing 

    It's been a while and I have a few research updates, so let's dive right into it! 

    Data collection

    When I first started the project, I was gathering Sentinel-1 and ALOS data for impact craters in North America using Vertex (ASF's data portal). As I got to the processing stage, I realized that many of the frames I had selected didn't actually cover the area I needed. This was partly because Vertex doesn't have a map scale, so I didn't know how much area was covered when I outlined the search box. However, I've learned to use Google Maps simultaneously and estimate the scale well enough to make sure the radar data has good coverage of the crater (and maybe even some peculiar surrounding features). Now I will have to go back and look for new data for some of the craters. Many of the North American craters are larger in diameter, which requires a few frames to be mosaicked, and this was a little overwhelming. So I decided to focus on impact craters in South America, because it is a smaller dataset and the craters are relatively small (and most of them are exposed). I've collected radar data for all of these craters and am currently processing them.

    Colonia crater, in Brazil (ALOS processed data)

     

    Colonia crater, in Brazil (Sentinel-1 processed data)

    I want to quickly mention a tool I came across two weeks ago when I was gathering data. One morning, the Vertex site was down when I really wanted to find relevant frames and download the data. I could search the portal, but the system would not let me log in or download anything. So I started adding the files to the queue and decided I would bulk download them (although I had not looked into how to do that). It turned out to be super easy! The site provides a pre-written Python script which you execute in the command terminal to download all the files in the queue. This has been super useful because I can spend my time during the day finding all the files and leave the downloading for overnight. I just have to be cautious about the size of the files and the space on my computer, because if the disk fills up the downloading stops midway. Then I have to go through all the files, manually figure out which ones did not get downloaded, and restart the process. Other than that, I find the bulk download option to be super helpful.
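    That restart annoyance could probably be scripted away. Here is a rough sketch of my own (not ASF's actual script) of how a resumable bulk download could work, assuming the queue is exported as a plain list of URLs in a file like urls.txt and that basic authentication with my Earthdata credentials is accepted; both of those are assumptions for illustration:

```python
# Minimal, hypothetical sketch of a resumable bulk download (not the ASF script).
# Assumes urls.txt holds one download URL per line and that the server accepts
# basic auth with Earthdata credentials -- check the official ASF script for the
# real authentication flow.
import os
import requests

EARTHDATA_AUTH = ("my_username", "my_password")  # placeholder credentials

def remote_size(url):
    """Ask the server how large the file is, following redirects."""
    r = requests.head(url, allow_redirects=True, auth=EARTHDATA_AUTH)
    return int(r.headers.get("Content-Length", -1))

def fetch(url, out_dir="downloads"):
    os.makedirs(out_dir, exist_ok=True)
    name = url.rsplit("/", 1)[-1]
    path = os.path.join(out_dir, name)
    # Skip files that already finished downloading on a previous run.
    if os.path.exists(path) and os.path.getsize(path) == remote_size(url):
        print(f"already have {name}")
        return
    with requests.get(url, stream=True, auth=EARTHDATA_AUTH) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MB chunks
                f.write(chunk)
    print(f"downloaded {name}")

if __name__ == "__main__":
    with open("urls.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            fetch(url)
```

    Re-running a script like this after a failed overnight session would skip the finished files and pick up the missing ones, instead of me checking each file by hand.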

    Issues with space and memory

    I started off doing the image processing on my own computer, which didn't work because there wasn't enough memory. So I moved to Mike's computer, which allowed me to process everything, but each step takes 15-30 minutes to run for the Sentinel-1 data (ALOS data takes about 5 minutes). Using this computer was definitely a good solution compared to having no processing power on my workstation; however, it's still a significant amount of processing time. Thanks to Hun, we were able to test the image processing in Oz's lab and found it takes about 2-4 minutes per step for Sentinel-1 processing. Based on that, Catherine thankfully got me access to the lab to process my S1 data. One challenge with using the computers in that lab is that they keep shutting down/restarting randomly; last week, when I was trying to process some data, it did not work out well. Another challenge is that there is not much space on those computers, so I might have to transfer each file to an external drive as soon as it's processed. I think this space/memory problem is a big one because eventually I'm going to run out of space on the drive and the server just with the unprocessed data. For now, the 1 terabyte will do, but I am also brainstorming solutions for the near future. I wonder if getting a 5-10 TB drive might do the trick. I am open to ideas/comments/concerns.

     

    Image processing

    Image processing is the part I really want to focus on. I sat in on a Digital Image Processing lecture last week (it's a course offered by the ECE department). I am interested in the subject and am considering taking the course next year, so I thought it would be good to give it a test run. The lecture I attended happened to focus on techniques that might be useful for my project, and that are good to know in general. A few different filters were discussed in class:

    1) Median filters: reduce salt-and-pepper noise with less blurring than spatial averaging. This filter is interesting because I wonder if it is what the Speckle Filtering function in the Sentinel Application Platform (SNAP) uses. I need to dig through the software's documentation, and perhaps the source code, to figure it out.

    2) Sharpening filters: highlight fine detail (e.g. edges). The instructor mentioned this filter is very useful for radar images. He talked about his experience working with military radar data and using these filters to help identify things such as missiles. 

    3) Gradient filters: good for edge detection but also magnify noise. 

        3.1) Laplacian filter: highlights discontinuities (more than first-order derivatives).

    Next steps: I would like to apply these filters to some of the radar images and see what results we get. I'm not sure how effective they will be for radar images, but I'll test it out.
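    As a starting point, here is a rough sketch of how I might compare these filters in Python with SciPy. This is not SNAP's implementation; the speckled test image is synthetic and just stands in for a calibrated backscatter band loaded as a NumPy array:

```python
# Quick comparison of the filters from the lecture on a radar-like test image.
# The "speckled" image is synthetic (multiplicative gamma noise) and stands in
# for a real backscatter band.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
clean = np.zeros((256, 256))
clean[96:160, 96:160] = 1.0                      # a bright block, a stand-in "crater rim"
speckled = clean * rng.gamma(shape=4, scale=0.25, size=clean.shape) + 0.05

# 1) Median filter: knocks down salt-and-pepper/speckle noise with less blurring
#    than a plain moving average.
median = ndimage.median_filter(speckled, size=5)

# 3)/3.1) Laplacian: second-derivative operator that highlights discontinuities.
laplacian = ndimage.laplace(median)

# 2) Sharpening: subtract a fraction of the Laplacian to boost edges.
sharpened = median - 0.7 * laplacian

print(speckled.std(), median.std(), sharpened.std())
```

    Swapping in a real image and looking at the outputs side by side should make it clearer which of these, if any, helps with crater identification.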

    Image processing vs. signal processing

    Sharpening filters are commonly used in signal processing and are usually very effective (see the gravitational-wave example below). However, sharpening filters in image processing require a bit more work on the user's side. For instance, if I apply a sharpening filter to a radar image, a lot of fine detail might get highlighted; in that case, visual analysis doesn't necessarily become easier. But would these filters be more effective if the images were analyzed digitally, so that subtle details/changes could be recognized easily? Is it the numerical analysis of signals that makes these filters more effective? I'll definitely have to read into this more, but I just want to ponder here a little bit.

    Here is an example of signal filtering that I did in a computational physics course. We analyzed LIGO data from the first gravitational wave detection event, called 'GW150914'. We applied a few filters in order to suppress the excess noise and highlight the event signal. Lastly, we converted the data to a sound file so that we could try to hear it (a frequency shift was applied to make the chirp easier to hear; it's the audio equivalent of applying false colour to telescope images). Links to Hanford and Livingston signal sounds.
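    For flavour, here is a sketch of the kind of band-pass filtering involved, written from memory rather than taken from the actual course code, with a synthetic chirp standing in for the real LIGO strain data (which is available from the LIGO open data releases):

```python
# Rough sketch of band-pass filtering a chirp buried in noise (not the course code).
# A synthetic chirp stands in for the GW150914 strain data.
import numpy as np
from scipy.signal import butter, filtfilt, chirp
from scipy.io import wavfile

fs = 4096                                     # sample rate (Hz), typical of LIGO open data
t = np.arange(0, 4, 1 / fs)
signal = 0.5 * chirp(t, f0=35, t1=4, f1=300, method="quadratic")
noisy = signal + np.random.normal(0, 1.0, t.size)

# Band-pass filter: keep roughly 35-350 Hz where the chirp lives, suppressing
# low-frequency rumble and high-frequency hiss.
b, a = butter(4, [35, 350], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)

# Convert to audio. A simple way to get the "frequency shift" is to play the
# data back at a higher sample rate, which shifts everything up in pitch.
audio = np.int16(filtered / np.abs(filtered).max() * 32767)
wavfile.write("chirp_shifted.wav", int(fs * 1.5), audio)
```

    The real analysis also involves whitening the data against the detector's noise spectrum, but the band-pass step alone already makes the chirp stand out.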

     


  • Sound of Space

    "There is geometry in the humming of the strings, there is music in the spacing of the spheres." -Pythagoras

    I recently came across One Sky after hearing some friends talk about their experience at Nuit Blanche Toronto. It was an outdoor musical planetarium exhibit, where the volume and pitch were controlled by the brightness and colour of the stars.

    I thought it was really neat, so I decided to look into its origin and found SYSTEM Sounds. It's a collection of music and animations generated from numerical simulations and real data, created by Matt Russo (astrophysicist/musician), Dan Tamayo (astrophysicist), and Andrew Santaguida (musician). They were inspired by the musical TRAPPIST-1 planetary system and decided to explore what happens when the rhythms and harmonies of astronomical systems are translated into sound that humans can hear.

    For those who don't know, TRAPPIST-1 is an exo-planetary system consisting of 7 Earth-sized planets orbiting a red dwarf; at least two of the planets should have the right temperature to host liquid water. What makes the system even more unusual is that the planets are locked in a resonant chain, meaning the time it takes each planet to go around the star forms a simple integer ratio with those of its neighbours. For every 2 orbits of the outermost planet (h), each body (moving inward) executes 3, 4, 6, 9, 15, and 24 orbits.

    Actual orbital periods and corresponding frequencies and notes for each planet, after scaling orbital frequencies into the human hearing range.

    Translation

    REBOUND, an orbital integrator, was used to simulate the TRAPPIST-1 system and record the times when each planet passes in front of the star (a transit) from the Earth's point of view. Time was then scaled so that TRAPPIST-1h completes its orbit once every 2 seconds, corresponding to a tempo of 30 bpm. Next, the orbital frequencies were scaled into the human hearing range in order to calculate pitch: time was sped up by about 212 million times so that TRAPPIST-1h completes its orbit 130.81 times each second (130.81 Hz), corresponding to the note C3. The frequencies of the interior planets were calculated from their simple integer ratios to this frequency. The TRAPPIST beat was created by assigning a drum to the conjunctions of each adjacent pair of planets; the gravitational tug between planets is greatest when a faster inner planet overtakes its outer neighbour (mutual conjunction). Lastly, data from NASA's K2 mission monitoring the brightness of TRAPPIST-1 were used to capture the sound of the star! The star's 3.3-day rotation period corresponds to a frequency of 745 Hz (after speeding up time by the same factor used to assign pitches to the planets). Many higher frequencies are also present due to the star's variability and occasional flares. In addition, the star's brightness variations were used to modulate the volume of this noise so that it is louder when the star is brighter. The brightness/volume peaks occur almost 6 times for every orbit of TRAPPIST-1h (just as in real life). So now let's actually listen to TRAPPIST Sounds:
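    Just to convince myself the numbers work, here is a back-of-the-envelope version of that scaling. The 18.77-day period for TRAPPIST-1h is the approximate published value (an assumption on my part); everything else follows from the resonance ratios quoted above:

```python
# Back-of-the-envelope check of the pitch scaling described above.
# Assumption: TRAPPIST-1h has an orbital period of roughly 18.77 days.
import numpy as np

DAY = 86400.0                        # seconds
period_h = 18.77 * DAY               # outermost planet, TRAPPIST-1h
target_pitch = 130.81                # Hz, the note C3

# Speed-up factor needed for planet h to orbit 130.81 times per second.
speedup = target_pitch * period_h
print(f"time sped up by ~{speedup:.3g}x")         # ~2.12e8, i.e. ~212 million

# Orbits completed by each planet (h inward to b) for every 2 orbits of planet h.
orbits_per_2h = np.array([2, 3, 4, 6, 9, 15, 24])
pitches = target_pitch * orbits_per_2h / 2.0
for name, f in zip("hgfedcb", pitches):
    print(f"TRAPPIST-1{name}: {f:7.2f} Hz")

# The star's 3.3-day rotation, sped up by the same factor, lands near 745 Hz.
print(f"star: {speedup / (3.3 * DAY):.0f} Hz")
```

    The numbers line up nicely: planet f comes out at about 261.6 Hz, which is C4, an octave above planet h.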


    Because each TRAPPIST-1 planet is in resonance with its neighbours, the system forms a harmonic resonant chain. Most planetary systems don't have this, so when their motion is converted into sound, it's not as pleasing.

    The last thing I want to share is the Saturn Harp, which was created by converting 2 million pixels of Cassini's highest-resolution colour image of the intricate patterns within the central B ring into musical notes (brighter rings produce higher pitches).
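    The brightness-to-pitch idea is easy to sketch. This is just a toy mapping of my own (not SYSTEM Sounds' actual code), with random numbers standing in for a scan across the rings:

```python
# Toy version of a brightness-to-pitch mapping (not SYSTEM Sounds' code).
# Assumes a 1-D brightness profile across the rings; here it is just random numbers.
import numpy as np

rng = np.random.default_rng(1)
brightness = rng.uniform(0.0, 1.0, size=16)       # stand-in for a ring scan

# Map brightness linearly onto two octaves above C3, so brighter pixels
# produce higher pitches.
f_low, f_high = 130.81, 523.25                    # C3 to C5, in Hz
frequencies = f_low + brightness * (f_high - f_low)

for b, f in zip(brightness, frequencies):
    print(f"brightness {b:.2f} -> {f:6.1f} Hz")
```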

    I found this project very exciting and I think the purpose is to help us experience our musical universe. 

    Learn more: Our Musical Universe - TEDxUofT; Convergent Migration Renders TRAPPIST-1 Long-lived


  • My Research Project

    Hi all,

    I started my MSc in Geophysics/Planetary Science in September as a member of the Radar Remote Sensing Research Group supervised by Dr. Catherine Neish. This entry is going to be a quick introduction to my research project and how I am getting started. 

    The goal of my project is to determine the percentage of known impact craters on Earth that can be recognized with synthetic aperture radar (SAR) data. To date, there are 190 confirmed impact structures listed in the Earth Impact Database.

    Figure 1: Map of confirmed impact craters on Earth

    The aim is then to use this information to infer the number of impact craters on Titan that may be missing. Given that impact cratering is a common process in our Solar System, the surface of Titan is expected to have thousands of craters, which we do not observe.

    For today, I am going to focus on the first part: mapping craters on Earth using radar data. I am working with data from two satellites, Sentinel-1 and ALOS PALSAR:

    Sentinel-1 is part of an Earth observation programme facilitated by the European Space Agency (ESA). The mission is composed of two satellites, Sentinel-1A (launched in April 2014) and Sentinel-1B (launched in April 2016), each carrying a C-band (5.6 cm) SAR. It has four operational modes, but the main mode over land is the Interferometric Wide swath (IW) mode. This mode features 5 x 20 m spatial resolution and a 250 km swath, and offers products in single and dual polarization.

    The Advanced Land Observing Satellite (ALOS) is part of the Japanese Earth observing satellite program. The Phased Array type L-band SAR (PALSAR), onboard ALOS, is an L-band (24 cm) sensor with single, dual, and full polarization capabilities. I am mainly trying to gather data from its Fine Resolution Mode (9 x 10 m for single polarization and 19 x 10 m for dual polarization).

    I have been mostly reading papers and other texts to understand radar basics (and now re-reading some of them), but I did get a chance to play around with some data this week. I am using the Sentinel Application Platform (SNAP) to process the radar data. Here are some results from working with Sentinel-1 data over Barringer Crater (Arizona).

    I start with radiometric calibration of the intensity data (Fig. 2). The calibration corrects the image so that the pixel values represent radar backscatter. I am looking at the VH intensity band, but would like to compare with the VV band results; however, I ran into a memory issue when trying to process the VV band, so I will have to look into that. Next, I apply the deburst operation, which combines the burst data into one single image (Fig. 3). Then, multilooking averages over range and/or azimuth pixels, resulting in less noise and approximately square pixel spacing (Fig. 4). I applied 1-by-4 (1 range, 4 azimuth) multilooking and will investigate different specifications further. The image still appears quite noisy, so I applied speckle filtering (Fig. 5), which reduces the amount of speckle. The drawback of this function is that it blurs features and reduces resolution. I used the default (Refined Lee) setting in this image, and would like to compare it to the other speckle reduction settings in order to determine which gives the best result. The last step is terrain correction, which corrects the SAR geometric distortions using a digital elevation model (DEM) and produces a map-projected product (Fig. 6). Barringer Crater is visible in the zoomed-in image (Fig. 7). My goal for the rest of the week is to explore the different settings for the operations I'm using in SNAP to see which produce the best results. Then, I can apply them to the rest of the crater data. I have also started to play around with ALOS PALSAR data but ran into trouble at the multilooking step; I will have some results next time!
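    Eventually I would like to script this chain rather than click through the GUI. Below is a sketch using SNAP's Python interface (snappy). The operator names are the ones I believe SNAP uses, every operator is left at its defaults, and the input file name is just a placeholder, so treat it as a starting point rather than a working recipe:

```python
# Sketch of the same processing chain through SNAP's Python interface (snappy).
# Operator names are my best understanding of SNAP's operators; all parameters
# are left at their defaults here, which is an assumption for this sketch --
# in practice the looks, speckle filter type, DEM, etc. need to be set.
from snappy import ProductIO, GPF, jpy

HashMap = jpy.get_type('java.util.HashMap')

def apply(operator, product):
    """Run a SNAP operator with default parameters (assumption for this sketch)."""
    return GPF.createProduct(operator, HashMap(), product)

product = ProductIO.readProduct('S1A_IW_SLC_barringer.zip')  # placeholder file name

steps = ['Calibration',         # pixel values -> radar backscatter
         'TOPSAR-Deburst',      # stitch the bursts into one image
         'Multilook',           # average pixels to cut noise, square the pixels
         'Speckle-Filter',      # e.g. Refined Lee, reduces speckle
         'Terrain-Correction']  # remove geometric distortion using a DEM

for op in steps:
    product = apply(op, product)

ProductIO.writeProduct(product, 'barringer_processed', 'GeoTIFF')
```

    Scripting the chain would also make it much easier to batch through the rest of the crater dataset once I settle on the best settings.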

    Figure 2: VH band radiometrically calibrated. 

    Figure 3: Calibrated and deburst.

    Figure 4: Calibrated, deburst, and multilooked. 

    Figure 5: Calibrated, deburst, multilooked, and speckle filtered. 

    Figure 6: Calibrated, deburst, multilooked, speckle filtered, and terrain corrected. 

    Figure 7: Zoom-in on Barringer Crater, Arizona.


    In other news...

    Researchers have discovered a dwarf planet called 2015 TG387, or "The Goblin", at about 80 AU from the Sun (almost as close to the Sun as it ever gets), far beyond the orbit of Pluto. Its strange orbit extends as far out as about 2300 AU, which supports the idea of a roughly 7x Earth-size planet at the outer edge of the Solar System that has yet to be detected.

    Read more: CBC article, AJ paper



