Figure (Closed Access)

Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox

Rasti, Behnood; Hong, Danfeng; Hang, Renlong; Ghamisi, Pedram; Kang, Xudong; Chanussot, Jocelyn; Benediktsson, Jon Atli


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nkm##2200000uu#4500</leader>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">image</subfield>
    <subfield code="b">figure</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Hyperspectral images (HSIs) provide detailed spectral information through hundreds of (narrow) spectral channels (also known as dimensionality or bands), which can be used to accurately classify diverse materials of interest. The increased dimensionality of such data makes it possible to significantly improve data information content but provides a challenge to conventional techniques (the so-called curse of dimensionality) for accurate analysis of HSIs. Feature extraction (FE), a vibrant field of research in the hyperspectral community, evolved through decades of research to address this issue and extract informative features suitable for data representation and classification. The advances in FE were inspired by two fields of research&amp;mdash;the popularization of image and signal processing along with machine (deep) learning&amp;mdash;leading to two types of FE approaches: the shallow and deep techniques. This article outlines the advances in these approaches for HSI by providing a technical overview of state-of-the-art techniques, offering useful entry points for researchers at different levels (including students, researchers, and senior researchers) willing to explore novel investigations on this challenging topic. In more detail, this article provides a bird&amp;rsquo;s eye view of shallow [both supervised FE (SFE) and unsupervised FE (UFE)] and deep FE approaches, with a specific focus on hyperspectral FE and its application to HSI classification. Additionally, this article compares 15 advanced techniques with an emphasis on their methodological foundations and classification accuracies. Furthermore, to push this vibrant field of research forward, an impressive amount of code and libraries are shared on GitHub, which can be found in [131].&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.14278/rodare.679</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Rasti, Behnood</subfield>
    <subfield code="u">Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Germany.</subfield>
    <subfield code="0">(orcid)0000-0002-1091-9841</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-rodare</subfield>
  </datafield>
  <controlfield tag="005">20211129143502.0</controlfield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="o">oai:rodare.hzdr.de:679</subfield>
    <subfield code="p">user-rodare</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Hong, Danfeng</subfield>
    <subfield code="u">Remote Sensing Technology Institute, German Aerospace Center, Weßling, Germany.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Hang, Renlong</subfield>
    <subfield code="u">School of Automation, Nanjing University of Information Science and Technology, China.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Ghamisi, Pedram</subfield>
    <subfield code="u">Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Germany.</subfield>
    <subfield code="0">(orcid)0000-0003-1203-741X</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Kang, Xudong</subfield>
    <subfield code="u">Hunan University, Changsha, China, and Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, Changsha, China.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Chanussot, Jocelyn</subfield>
    <subfield code="u">Université Grenoble Alpes, INRIA, Centre National de la Recherche Scientifique, Grenoble Institute of Technology, Laboratoire Jean Kuntzmann, France.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Benediktsson, Jon Atli</subfield>
    <subfield code="u">Faculty of Electrical and Computer Engineering, University of Iceland, Reykjavik, Iceland.</subfield>
  </datafield>
  <controlfield tag="001">679</controlfield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">closed</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2020-12-16</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="a">https://www.hzdr.de/publications/Publ-31906</subfield>
    <subfield code="i">isIdenticalTo</subfield>
    <subfield code="n">url</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="a">https://www.hzdr.de/publications/Publ-32303</subfield>
    <subfield code="i">isReferencedBy</subfield>
    <subfield code="n">url</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="a">10.14278/rodare.678</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="n">doi</subfield>
  </datafield>
</record>
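The record above is standard MARC21/slim XML, so its fields can be read programmatically. The sketch below, using only Python's standard-library `xml.etree.ElementTree`, pulls the title (tag 245), DOI (tag 024), and author names (tags 100/700) from a trimmed copy of this export; the `SAMPLE` string and the `subfields` helper are illustrative, not part of the record itself.

```python
# Minimal sketch: reading fields from a MARC21/slim XML export like the one
# above, using only the Python standard library. SAMPLE is a trimmed copy of
# this record; tag meanings (245 = title, 024 = identifier, 100/700 = main and
# added author entries) follow the MARC 21 bibliographic format.
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

SAMPLE = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.14278/rodare.679</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Rasti, Behnood</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Hong, Danfeng</subfield>
  </datafield>
</record>"""

def subfields(record, tag, code="a"):
    """Return the values of subfield `code` for every datafield with `tag`."""
    path = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    return [sf.text for sf in record.findall(path, NS)]

record = ET.fromstring(SAMPLE)
title = subfields(record, "245")[0]
doi = subfields(record, "024")[0]
authors = subfields(record, "100") + subfields(record, "700")

print(title)
print(doi)
print(authors)
```

Note that the namespace `http://www.loc.gov/MARC21/slim` must be passed to `findall`, since every element in the export is namespaced; the same pattern extends to the abstract (tag 520) or related identifiers (tag 773).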