Geospatial Intelligence

Transforming Defence Intelligence: Five Ways AI Enhances the Power of Remotely-Sensed Data

Artificial Intelligence is reshaping our world every single day, and professionals in diverse domains keep finding innovative ways to bring AI into their workflows. Intelligence, Surveillance, and Reconnaissance (ISR) is no exception. Here are five ways Deep Learning is transforming this space.

Aayush Malik


In the current global order, one of the most important pillars of a nation’s clout in international affairs is the sophistication of its military capabilities. Military capability is determined by the structure, modernisation, readiness, and sustainability of a nation’s forces, and modernisation depends not just on weapons and equipment but also on technical sophistication. With the advances in Deep Learning, militaries across the world can gain a competitive edge if they exploit these techniques for situational awareness and reconnaissance.

Artificial Intelligence and Deep Learning Overview

Artificial Intelligence (AI) aims to create intelligent machines capable of mimicking human cognitive processes. This is achieved by developing algorithms and systems that can perceive, reason, learn, and make decisions. Deep Learning, a subset of AI, focuses on training neural networks with multiple layers to automatically learn and extract complex patterns and representations from vast amounts of data. These neural networks are inspired by the structure and function of the human brain. Deep Learning is the driving force behind groundbreaking advancements in areas such as autonomous vehicles, medical diagnostics, and now, even Intelligence, Surveillance, and Reconnaissance.

Remote Sensing 101

Remote sensing is the science of acquiring information/data about objects from a distance using sensors that are not in physical contact with the target. These sensors are mostly aboard satellites, aircraft, drones, or ground-based instruments. By capturing electromagnetic radiation reflected or emitted by objects, remote sensing enables the creation of maps, images, and models. With the availability of high-resolution imagery, remote sensing has become an indispensable tool in disciplines like geology, forestry, climate science, and, notably, intelligence and defence, where it plays a crucial role in gathering geospatial data for military purposes, strategic analysis, and situational awareness.

Deep Learning in Remote Sensing

With more than 5,400 satellites currently in orbit, satellite imagery is readily available, but translating it into decisions remains underdeveloped: we collect far more data than human analysts can process at scale. Deep learning lets us do this in an efficient and cost-effective manner. According to Ma et al. (2019), deep learning has been applied to tasks such as object detection, scene classification, and land use classification, typically by learning from the spatial and spectral signatures in the imagery. More recent research covers applications like multi-sensor data fusion, semantic segmentation, and registration of satellite imagery. I am going to cover these five below.

Multi-sensor Data Fusion

Multi-sensor data fusion refers to the process of combining information obtained from multiple sensors or data sources to generate a more comprehensive and accurate understanding of a given phenomenon or situation. It involves integrating data collected from different sensors, such as optical sensors, radar, lidar, infrared, and others, each providing unique and complementary information about the same target or environment. The goal of multi-sensor data fusion is to overcome the limitations of individual sensors and leverage their collective strengths to improve overall data quality, reduce uncertainties, enhance detection and recognition capabilities, and make more informed decisions.

There are currently two main methods of deep-learning-based data fusion.

  1. Pan-sharpening — Fusing a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral (pan-sharpened) image.
  2. Hyper-spectral and Multispectral Fusion — Fusing low-resolution hyper-spectral data with high-resolution multispectral data to generate high-resolution hyper-spectral data.
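To make the idea concrete, below is a minimal sketch of classical Brovey-style pan-sharpening in NumPy. It is illustrative only: the band layout, the assumption that the multispectral bands are already resampled to the panchromatic grid, and the array names are my own choices, and deep-learning fusion models replace this hand-crafted ratio rule with a mapping learned from data.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Classical Brovey-transform pan-sharpening (not a deep model).

    ms  : float array of shape (H, W, 3), low-resolution multispectral bands
          already resampled to the panchromatic grid (assumed RGB order).
    pan : float array of shape (H, W), high-resolution panchromatic band.

    Each multispectral band is rescaled so that the per-pixel band sum
    matches the panchromatic intensity, injecting spatial detail.
    """
    intensity = ms.sum(axis=2) + eps            # per-pixel band sum
    ratio = pan / intensity                     # detail-injection ratio
    return ms * ratio[..., np.newaxis]          # sharpened multispectral bands

# Synthetic arrays standing in for real imagery.
ms = np.random.rand(256, 256, 3).astype(np.float32)
pan = np.random.rand(256, 256).astype(np.float32)
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (256, 256, 3)
```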

Semantic Segmentation

Semantic Segmentation assigns land cover labels to each pixel of an image. Using deep end-to-end Convolutional Neural Networks (CNNs), automated segmentation can be performed on high-resolution remotely-sensed images to predict a class for every pixel. State-of-the-art semantic segmentation frameworks for remote sensing images are sequentially composed of encoder and decoder subnetworks. However, there are challenges pertaining to blurry class boundaries and the loss of object details. To overcome these, numerous researchers have developed methods, the details of which can be found here.
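The sketch below illustrates the encoder-decoder pattern with a deliberately tiny PyTorch model. The channel counts, the six classes, and the four-band input are assumptions for illustration; production models (U-Net or DeepLab variants) are far deeper and are trained on labelled imagery.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel land-cover prediction."""
    def __init__(self, in_channels=4, num_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(                  # downsample, learn context
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(                  # upsample back to full resolution
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),             # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A 4-band (e.g. RGB + near-infrared) tile; random values stand in for real data.
tile = torch.randn(1, 4, 128, 128)
logits = TinySegNet()(tile)        # shape: (1, 6, 128, 128), one score map per class
pred = logits.argmax(dim=1)        # per-pixel class labels
```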

Object Detection

Deep learning based object detection is a fundamental activity in imagery analysis and has numerous applications in surveillance systems. Deep learning-based object detection models leverage convolutional neural networks (CNNs) to learn and recognise objects within an image. For object detection from remote-sensing images, in addition to the limitation of training samples, the biggest challenge is to effectively deal with the problem of object rotation variations, as reported by Cheng et al. (2016). Additionally, current technology extracts specific types of objects, such as airplanes or cars, from a high-resolution image through a fixed window size. In practice, however, publicly available imagery contains far more object types and far greater variation, so more research is needed in this area.
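As an illustration of the detection workflow, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision on a placeholder image tensor. This is not how an operational system would work: those are trained on overhead datasets such as DOTA and often predict rotated boxes to handle the orientation problem noted above. The score threshold here is an arbitrary choice.

```python
import torch
import torchvision

# COCO-pretrained detector used purely for illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Placeholder tensor standing in for an aerial image chip, values in [0, 1].
image = torch.rand(3, 512, 512)

with torch.no_grad():
    predictions = model([image])[0]        # list of images in, list of dicts out

# Keep only confident detections.
keep = predictions["scores"] > 0.5
boxes = predictions["boxes"][keep]         # axis-aligned (x1, y1, x2, y2) boxes
labels = predictions["labels"][keep]       # COCO class indices
print(boxes.shape, labels.shape)
```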

Image Registration

Image registration is a fundamental step for many remote sensing tasks such as image fusion, change detection, and image mosaicking. The method involves aligning two or more images captured by different sensors, at different times, or from different viewpoints. There are four steps involved in image registration: feature extraction, feature matching, transformation model estimation, and image resampling. Feature extraction plays a critical role because it determines what type of feature is used for image matching. Learning features from data is the bread and butter of deep learning, so automated, data-driven schemes can learn these features directly from the images. More detailed information on how this is done using deep learning can be found here.
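To ground the four steps, here is a compact sketch using classical ORB features in OpenCV. It is not a deep-learning method, but it shows where learned features would slot in (steps 1 and 2); the function name, feature count, and RANSAC threshold are illustrative choices.

```python
import cv2
import numpy as np

def register(reference, moving):
    """Align `moving` to `reference` using the four classical steps.

    Both inputs are single-band uint8 images. Hand-crafted ORB features are
    used for brevity; deep approaches replace steps 1-2 with learned
    features and learned matchers.
    """
    # 1. Feature extraction
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_mov, des_mov = orb.detectAndCompute(moving, None)

    # 2. Feature matching
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)

    # 3. Transformation model estimation (homography with RANSAC)
    src = np.float32([kp_mov[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Image resampling into the reference frame
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```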

Land Use Classification

Deep Learning can be used with high-resolution satellite imagery to automate land use classification, which can reveal military infrastructure in areas of interest thanks to the fine structural information (i.e. spatial detail) of land use/land cover (LULC) objects in such images. According to Ma et al. (2019), although many medium-resolution (10 m-30 m) satellite images are freely available for LULC mapping, including Landsat and Sentinel-2 data, it is difficult to apply conventional DL algorithms directly to them because they lack such fine structures. Most of the current state of the art in automated land use classification relies on Convolutional Neural Networks, which have achieved high accuracy on high-resolution imagery.
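A common practical recipe is transfer learning: start from an ImageNet-pretrained CNN and swap its classification head for one sized to the land-use classes of interest. The sketch below shows one training step under that assumption; the class count, patch size, learning rate, and random batch are all placeholders rather than a prescribed setup.

```python
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 10                                   # placeholder, e.g. EuroSAT-style scene classes
model = torchvision.models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for
# labelled 64x64 RGB image patches.
patches = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```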

Five More Applications of Remote Sensing for Geo-Intelligence

In addition to the applications above, remote sensing is widely used for other purposes as well. Below are five major applications where research is active and the technology is maturing rapidly.

Battlefield Mapping and Terrain Analysis

Terrain mapping is the process of capturing, analysing, and representing the physical features and characteristics of the Earth’s surface. It involves creating detailed maps or digital elevation models (DEMs) that depict the shape, elevation, and other attributes of the land surface. This data aids in identifying suitable areas for troop deployment, determining optimal routes for manoeuvring, and assessing the impact of terrain on tactical movements. Having a correct representation of the battlefield terrain is important so that militaries can strategically locate their assets relative to the enemy. Yang et al. (2023) adopted a semantic segmentation model from computer vision to classify elementary landform types using AW3D30 digital elevation model (DEM) data from the Japan Aerospace Exploration Agency.
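Deep models aside, much terrain analysis starts from simple DEM derivatives such as slope. The sketch below computes a slope map with NumPy finite differences; the 30 m cell size matches AW3D30’s nominal resolution, and the synthetic DEM is a placeholder for a real tile.

```python
import numpy as np

def slope_degrees(dem, cell_size=30.0):
    """Slope map from a DEM using finite differences.

    dem       : 2-D array of elevations in metres.
    cell_size : ground distance between adjacent cells in metres.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)            # elevation change per metre
    slope_rad = np.arctan(np.sqrt(dz_dx**2 + dz_dy**2))   # steepest-descent angle
    return np.degrees(slope_rad)

# Synthetic DEM standing in for a real AW3D30 tile.
dem = np.random.rand(100, 100) * 500.0
slope = slope_degrees(dem)
print(slope.min(), slope.max())
```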

Missile Defence and Early Warning Systems

Using the heat signatures of missile launches, satellites equipped with infrared sensors can provide critical signals for missile defence systems. For example, the U.S. Air Force’s Space Based Infrared System, or SBIRS, is an orbiting network of satellites in Geosynchronous Earth Orbit (GEO), payloads in Highly Elliptical Orbit (HEO), and flexible ground processing and control systems that provide a continuous view of the Earth’s surface. Using scanning sensors for wide-area surveillance and staring sensors to focus on smaller regions of interest, SBIRS collects and transmits infrared (IR) data that is vital for early missile warning and defence. Read more information here.

Spectral Anomaly Detection

According to Hu et al. (2022), hyperspectral image anomaly detection (HSI-AD) refers to the identification of pixels whose spectral characteristics differ significantly from those of adjacent or global background pixels. Because unsupervised methods do not require any prior spectral information about the target or the background, they are well suited to this task. However, the dimensionality of the hyperspectral data needs to be reduced before applying them. Read more here.
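For intuition, the classical (non-deep) baseline here is the global Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the global background statistics. The sketch below implements that baseline in NumPy on synthetic data; the band count and the 99th-percentile threshold are arbitrary, and the unsupervised deep methods discussed above aim to improve on this baseline.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global Reed-Xiaoli (RX) anomaly detector.

    cube : hyperspectral image of shape (H, W, B).
    Returns an (H, W) map of Mahalanobis distances from the global
    background statistics; large values flag spectrally anomalous pixels.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)

    mean = pixels.mean(axis=0)                         # global background mean
    cov = np.cov(pixels, rowvar=False)                 # background covariance (B x B)
    cov_inv = np.linalg.pinv(cov)                      # pseudo-inverse for stability

    centered = pixels - mean
    # Mahalanobis distance of every pixel spectrum from the background.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(h, w)

# Synthetic 50-band cube standing in for real hyperspectral data.
cube = np.random.rand(64, 64, 50)
anomaly_map = rx_anomaly_scores(cube)
threshold = np.percentile(anomaly_map, 99)             # flag the top 1% as anomalies
detections = anomaly_map > threshold
```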

Smuggling Detection

In recent years, deep-learning-based visual systems have been used for autonomous ship navigation as well as for maritime surveillance. Qiao et al. (2021) developed methods for ship identification and tracking, and Moniruzzaman et al. (2017) described the use of deep learning for underwater imagery analysis. For optical remote sensing images in the maritime domain, Li et al. (2020) surveyed ship detection and classification, comparing traditional hand-crafted feature methods with deep convolutional neural networks (CNNs). Chen (2020), meanwhile, addressed the problem of small-object and multi-object ship detection in complex scenarios.

Nuclear Experiments Monitoring

Remotely-sensed data can also help detect underground nuclear tests. For example, Sentinel-1A/B radar remote sensing data were applied for the first time to locate the underground explosion and the affected zone of North Korea’s sixth nuclear test, conducted on September 3, 2017. Read here for more information.

Conclusion

In this article, I discussed five major and five minor applications of deep learning that can enhance military capabilities for better situational awareness and intelligence gathering. Coupled with traditional ways of gathering intelligence, this has the potential to increase military sophistication and help a nation protect itself in case of a war.


Aayush Malik

Satellite Imagery | Causal Inference | Machine Learning | Productivity and Communication | https://www.linkedin.com/in/aayushmalik/