
Article

A broadband hyperspectral image sensor with high spatio-temporal resolution

Liheng Bian1,2 ✉, Zhen Wang1,2, Yuzhe Zhang1,2, Lianjie Li1, Yinuo Zhang1, Chen Yang1, Wen Fang1, Jiajun Zhao1, Chunli Zhu1, Qinghao Meng1, Xuan Peng1 & Jun Zhang1 ✉

https://doi.org/10.1038/s41586-024-08109-1
Received: 17 July 2023
Accepted: 24 September 2024
Published online: 6 November 2024
Open access

Hyperspectral imaging provides high-dimensional spatial–temporal–spectral information showing intrinsic matter characteristics1–5. Here we report an on-chip computational hyperspectral imaging framework with high spatial and temporal resolution. By integrating different broadband modulation materials on the image sensor chip, the target spectral information is non-uniformly and intrinsically coupled to each pixel with high light throughput. Using intelligent reconstruction algorithms, multi-channel images can be recovered from each frame, realizing real-time hyperspectral imaging. Following this framework, we fabricated a broadband visible–near-infrared (400–1,700 nm) hyperspectral image sensor using photolithography, with an average light throughput of 74.8% and 96 wavelength channels. The demonstrated resolution is 1,024 × 1,024 pixels at 124 fps. We demonstrated its wide applications, including chlorophyll and sugar quantification for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. The integrated hyperspectral image sensor weighs only tens of grams and can be assembled on various resource-limited platforms or equipped with off-the-shelf optical systems. The technique transforms the challenge of high-dimensional imaging from a high-cost manufacturing and cumbersome system to one that is solvable through on-chip compression and agile computation.

Hyperspectral imaging captures spatial, temporal and spectral information of the physical world, characterizing the intrinsic optical properties of each location1. Compared with multispectral imaging, hyperspectral imaging acquires a substantially larger number of wavelength channels, ranging from tens to hundreds, and maintains a superior spatial mapping ability compared with spectrometry6. This high-dimensional information enables precise distinction of different materials with similar colours, empowering more intelligent inspection than human vision with higher spectral resolution and a wider spectral range. With these advantages, hyperspectral imaging has been widely applied in various fields such as remote sensing, machine vision, agricultural analysis, medical diagnostics and scientific monitoring2–5.

The most important challenge to realizing hyperspectral imaging is acquiring the dense spatial–spectral data cubes efficiently. Most of the existing hyperspectral imaging systems use individual optical elements (such as prisms, gratings or spectral filters) and mechanical components to scan hyperspectral cubes in the spatial or spectral dimension7. However, these systems typically suffer from drawbacks such as large size, heavy weight, high cost and time-consuming operation, which limit their widespread application. Owing to developments in compressive sensing theory and computational photography techniques8, various computational snapshot hyperspectral imaging techniques, such as the computed-tomography imaging system (CTIS)9 and coded aperture snapshot spectral imaging (CASSI)10, have been developed, which encode multidimensional hyperspectral information into single-shot measurements and decode the data cube using compressive sensing or deep learning algorithms. Although these systems effectively improve temporal resolution, they still require individual optical elements for explicit light modulation, which imposes a heavy load for lightweight integration11.

Numerous on-chip acquisition trials have been conducted to achieve integrated hyperspectral imaging. The most logical approach is to extend the classic Bayer pattern of red, green and blue (RGB) colour cameras by introducing more narrow-band filters, which has led to the development of commercial multispectral imaging sensors12. However, besides a substantial tradeoff between spatial and spectral resolution, this technique also wastes most of the light throughput owing to narrow-band filtering. Benefiting from finely tunable spectral filtering ability, nano-fabricated metasurfaces13–16, photonic crystal slab arrays17 and Fabry–Pérot filters18 have also been used for spectral modulation in a certain spectral range. Experimentally, most of these existing prototypes cover about 200 nm in the visible range18, with only around 20 channels. Recently, scattering media have been used for compact lensless hyperspectral imaging systems, building on their spatial multiplexing and point spread function properties19–22. Despite these on-chip techniques, most of them suffer from narrow spectral range, low light throughput and the intrinsic tradeoff between spatial and spectral resolution. A comparison of the comprehensive performance of different techniques is provided in Extended Data Table 1.

1State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China. 2These authors contributed equally: Liheng Bian, Zhen Wang, Yuzhe Zhang. ✉e-mail: bian@bit.edu.cn; zhjun@bit.edu.cn


In this work, we report an on-chip computational hyperspectral image sensor, termed the HyperspecI sensor, and its comprehensive framework of hardware fabrication, optical calibration and computational reconstruction. First, to acquire both spatial and spectral information effectively, we developed a broadband multispectral filter array (BMSFA) fabrication technique using photolithography. The BMSFA is composed of different broadband spectral modulation materials at different spatial locations. In contrast to conventional narrow-band filters, the BMSFA can modulate incident light across the entire wide spectral range, resulting in much higher light throughput that benefits low-light and long-distance imaging applications. The modulated information is then intrinsically compressed and acquired by the underlying broadband monochrome sensor chip, enabling spatial–spectral compression with full temporal resolution. Second, to efficiently restore hyperspectral data cubes from the BMSFA-compressed measurements, we derived a lightweight and high-performance neural network (spectral reconstruction network (SRNet)), which has stronger feature extraction and prior modelling ability. Consequently, we can reconstruct hyperspectral images (HSIs) with high spatial and spectral resolution from each frame, realizing high-throughput real-time hyperspectral imaging.

Following the above framework, we fabricated two visible–near-infrared (VIS–NIR) hyperspectral image sensors (HyperspecI-V1 and HyperspecI-V2). The spectral response ranges of HyperspecI-V1 and HyperspecI-V2 are 400–1,000 nm and 400–1,700 nm, and the average light throughputs are 71.8% and 74.8%, respectively. In low-light conditions, the HyperspecI sensors perform substantially better than mosaic multispectral cameras and scanning hyperspectral systems, as shown in Fig. 3. The average spectral resolution is 2.65 nm for HyperspecI-V1 and 8.53 nm for HyperspecI-V2. For hyperspectral imaging, the HyperspecI-V1 sensor produces 61 channels in the 400–1,000 nm range, each with 2,048 × 2,048 pixels at 47 fps. The HyperspecI-V2 sensor produces 96 wavelength channels with a 10-nm interval in the range of 400–1,000 nm and a 20-nm interval in the range of 1,000–1,700 nm. Each channel consists of 1,024 × 1,024 pixels at 124 fps. For more details of performance, please refer to Extended Data Table 1.

To demonstrate the practical abilities and wide application potential of the HyperspecI sensors, we conducted soil plant analysis development (SPAD) and soluble solid content (SSC) evaluation for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. These applications demonstrate the high signal-to-noise ratio (SNR), high-resolution, ultra-broadband and dynamic hyperspectral imaging abilities of our HyperspecI technique, providing unique benefits in low-light conditions, targeting dynamic scenes and detecting small or remote targets that are unattainable using other techniques. Furthermore, the compact size, light weight and high integration level of HyperspecI make it suitable for use on platforms with limited payload capacity. We anticipate that this scheme may provide opportunities for the development of next-generation image sensors of higher information dimension, higher imaging resolution and higher degree of intelligence.

Principle of HyperspecI

The HyperspecI sensor consists of two main components: a BMSFA mask and a broadband monochrome image sensor chip (Fig. 1a). The BMSFA encodes the high-dimensional hyperspectral information of the target scene in the spectral domain, and the underlying image sensor chip acquires the coupled two-dimensional measurements (Fig. 1d). Using a hybrid neural network, SRNet, multi-channel HSIs can be reconstructed from each frame with high fidelity and efficiency (Extended Data Fig. 6a,b and Supplementary Information section 5).

We developed a photolithography technique to fabricate the BMSFA. First, we prepared broadband materials based on organic materials with different spectral responses. Then, by coupling the broadband materials with the negative photoresist, we fabricated broadband spectral modulation materials suitable for lithography (Fig. 1c). The materials were selected for optimal broadband spectral modulation characteristics (Extended Data Fig. 9 and Supplementary Information section 2). Then, using an improved photolithography process, we solidified the spectral modulation materials on a high-transmission quartz substrate following the pre-designed photomask, forming the BMSFA (Fig. 1b and Extended Data Fig. 7). The photolithography process includes a series of steps, including photomask design, substrate preparation, photoresist coating, soft baking, UV exposure, post-exposure baking, development and hard baking (Supplementary Information section 3). To meet the demands of different spectral ranges, we designed and prepared BMSFAs with different material systems and spatial arrangements. We integrated the fabricated BMSFAs with CMOS (complementary metal oxide semiconductor) and InGaAs (indium gallium arsenide) image sensor chips, respectively (Fig. 1a and Extended Data Fig. 7b,c).

Figure 1e shows exemplar hyperspectral imaging results, demonstrating that the HyperspecI sensors can acquire rich spatial details and maintain high spectral accuracy across a wide spectral range. The comprehensive imaging results of more channels and scenes are provided in Extended Data Fig. 3. Furthermore, to demonstrate the high accuracy and efficiency of the HyperspecI sensors for hyperspectral image reconstruction, we compared the reported SRNet with existing state-of-the-art model-based and deep-learning-based algorithms, which indicates that our SRNet model outperforms others in terms of both accuracy and efficiency (Supplementary Information section 5.3). Figure 1f shows the structure of the collected image and video dataset using the HyperspecI sensors, which might be useful for further hyperspectral imaging and sensing studies.

Performance of HyperspecI

We conducted a series of experiments to validate the quantitative and qualitative performance of the HyperspecI sensors. First, we examined the spectral and spatial resolution of the HyperspecI sensors. Figure 2a and Extended Data Fig. 3 show the reconstructed HSIs in the synthesized RGB format. We also compared the reconstructed spectra with the corresponding ground truth collected by commercial spectrometers (Ocean Optics USB 2000+ and NIR-Quest 512) at the locations indicated by yellow markers in the synthesized RGB images. We presented the reconstruction results of monochromatic light with an interval of 0.2 nm and compared the reconstruction results of our HyperspecI sensors with a commercial spectrometer under single-peak monochromatic light (Fig. 2b). The full width at half maximum (FWHM) of the monochromatic light is 2 nm. The average spectral resolutions of the HyperspecI-V1 and HyperspecI-V2 sensors are 2.65 nm and 8.53 nm, respectively (Fig. 2b (iii) and (iv)). Moreover, we also used double-peak monochromatic light (FWHM 2 nm) to calibrate the spectral resolving ability of our sensors based on the Rayleigh criterion. The results demonstrate that the average resolvable double-peak distances of HyperspecI-V1 and HyperspecI-V2 reach 3.23 nm and 9.76 nm, respectively. Second, to evaluate the spatial resolution, we acquired images of the USAF 1951 spatial resolution test chart using our HyperspecI sensors and corresponding monochrome cameras (with the same sensor chips and lens configuration). We presented the HyperspecI-V1 results in Fig. 2c as a demonstration. The results show that the HyperspecI sensor can distinguish the fourth element of the third group, in which the width of the three lines is about 0.26 mm on the chart and occupies 9 pixels of the image, resulting in a spatial resolution of 11.31 lines per mm, which is comparable to the commercial monochrome camera. Furthermore, a comparison of the light throughput of several representative hyperspectral imaging techniques is shown in Fig. 2d.



Fig. 1 | Working principle of the HyperspecI technique. a, The HyperspecI sensor consists of a BMSFA mask and a broadband monochrome image sensor chip. The BMSFA consists of a cyclic arrangement of 4 × 4 broadband materials for broadband spectral modulation, with each modulation unit 10 μm in size. The BMSFA is cured onto the bare photodiode array surface using SU-8 photoresist. b, The manufacturing process of BMSFA. We developed a low-cost fabrication strategy to produce BMSFA using photolithography. c, The transmission spectra of the 16 modulation materials and the coefficient correlation matrix. d, The imaging principle of the HyperspecI sensor. The light emitted from the target scene is modulated after passing through the BMSFA and then captured by the underlying broadband image sensor chip. The collected compressed data are then given as input to a reconstruction algorithm to decouple and output HSIs. e, The exemplar hyperspectral imaging results of the HyperspecI sensor. f, Illustration of the collected large-scale HSI image and video dataset using the HyperspecI sensor.
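The encoding in Fig. 1d can be summarized as a per-pixel inner product between the scene spectrum and the transmittance of the modulation material above that pixel. The short NumPy sketch below simulates this forward model; the transmittance curves, scene cube and 4 × 4 tiling are placeholder assumptions for illustration, not calibrated data or the authors' implementation.

```python
# Minimal simulation of the BMSFA spectral encoding in Fig. 1d (illustrative only).
import numpy as np

n_bands = 96                       # 96 reconstructed wavelength channels (400-1,700 nm)
H = W = 128                        # small patch; the real HyperspecI-V2 frame is 1,024 x 1,024
rng = np.random.default_rng(0)

T16 = rng.uniform(0.4, 1.0, size=(16, n_bands)).astype(np.float32)   # placeholder transmittances
scene = rng.random((H, W, n_bands), dtype=np.float32)                # placeholder hyperspectral cube

# Tile the 16 modulation materials as a cyclic 4 x 4 superpixel pattern over the sensor.
pattern = np.arange(16).reshape(4, 4)
material_idx = np.tile(pattern, (H // 4, W // 4))       # (H, W) material index per pixel
T_map = T16[material_idx]                               # (H, W, n_bands) per-pixel transmittance

# Each raw pixel integrates the spectrally modulated light over wavelength.
measurement = (T_map * scene).sum(axis=-1)              # (H, W) compressed measurement

# The reconstruction network's task is to invert this many-to-one mapping
# back to the (H, W, n_bands) hyperspectral cube.
print(measurement.shape)
```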

The comparison shows that the average light throughput is 71.8% for HyperspecI-V1 and 74.8% for HyperspecI-V2, which is much higher than that of common RGB colour cameras (<30%), mosaic multispectral cameras (<10%) and CASSI systems (<50%) (Supplementary Information section 6.5).

We conducted an imaging experiment on small point targets of varying sizes (Fig. 3a). The results indicate that the HyperspecI sensor can achieve stable and accurate spectral reconstruction even when the radius of the targets is smaller than a superpixel. In Fig. 3b, we also compared the hyperspectral imaging performance of our HyperspecI sensor, a commercial mosaic multispectral camera (Silios, CMS-C) and a scanning hyperspectral camera (FigSpec, FS-23) in low-light conditions. The light source is a Thorlabs SLS302 with an illuminance level of 290 lux. These experiments demonstrate that our sensor exhibits superior hyperspectral imaging quality in low-light environments, attributed to its higher light throughput and SNR. The superiority is further illustrated by the remote lunar detection experiment presented in Extended Data Fig. 1. We further demonstrated the real-time imaging performance of our HyperspecI-V1 sensor at a frame rate of 47 fps (Fig. 3c). As a comparison, we presented the imaging results using the scanning hyperspectral imaging camera. The result comparison validates that our HyperspecI sensor achieves the full temporal resolution of the underlying image sensor chip for dynamic imaging at a high frame rate, whereas traditional scanning hyperspectral cameras are unable to capture dynamic scenes (Supplementary Information section 6.2).

The above experiments demonstrate the broad spectral range, high spatial resolution, high spectral accuracy, high light throughput and real-time frame rate of our HyperspecI sensors. Furthermore, we studied the SNR (Supplementary Information section 6.1), noise resistance (Supplementary Information section 6.3), dynamic range (Supplementary Information section 6.4) and thermal stability (Extended Data Fig. 4 and Supplementary Information section 6.7) of our HyperspecI sensors.

Application for intelligent agriculture

Effective detection of target components is imperative for improving crop management strategies23. The SPAD index, highly correlated with chlorophyll content24, is important for assessing plant physiology.



Fig. 2 | Hyperspectral imaging performance of the HyperspecI sensors. a, Exemplar hyperspectral imaging results. The reconstructed hyperspectral images are shown in the synthesized RGB format on the left. The spectral comparisons between the reconstructed spectra (RS) and ground truth (GT), acquired by the commercial spectrometers, are shown on the right (denoted by solid and dashed lines, respectively). b, Spectral resolution calibration. (i), (ii), Spectral comparison between the HyperspecI sensors (green solid lines) and a commercial spectrometer (black dashed lines). The monochromatic light (FWHM 2 nm) was produced by the commercial Omno151 monochromator. (iii), (iv), Single-peak and double-peak monochromatic light were used to analyse the spectral resolving ability of our sensors. The average FWHM of the reconstructed spectra under single-peak monochromatic light for HyperspecI-V1 and HyperspecI-V2 are 2.65 nm and 8.53 nm, respectively. The average resolvable peak distances of reconstructed spectra based on the Rayleigh criterion for HyperspecI-V1 and HyperspecI-V2 under double-peak monochromatic light are 3.23 nm and 9.76 nm, respectively. c, Spatial resolution calibration using the USAF 1951 resolution test chart. The curves of a monochrome camera (red line) and our HyperspecI-V1 sensor (blue line) for elements 1–6 of group 3 are presented. d, Light throughput calibration and comparison: RGB colour camera, <30.0%; MSFA camera, <10.0%; CASSI, <50.0%; HyperspecI-V1, 71.8%; HyperspecI-V2, 74.8%. a.u., arbitrary units.
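The resolution figures quoted in Fig. 2b come from measuring the FWHM of reconstructed single-peak spectra and the smallest peak separation that still shows a clear dip under a Rayleigh-style criterion. A generic way to compute both quantities from a sampled spectrum is sketched below; the wavelength grid and the Gaussian test spectra are made up for illustration and the dip fraction is an assumption, not the paper's exact criterion.

```python
# FWHM and double-peak resolvability checks on a sampled spectrum (illustrative).
import numpy as np

def fwhm(wavelengths, spectrum):
    """Full width at half maximum of a single interior peak, via linear interpolation."""
    s = spectrum - spectrum.min()
    half = 0.5 * s.max()
    above = np.where(s >= half)[0]
    lo, hi = above[0], above[-1]
    left = np.interp(half, [s[lo - 1], s[lo]], [wavelengths[lo - 1], wavelengths[lo]])
    right = np.interp(half, [s[hi + 1], s[hi]], [wavelengths[hi + 1], wavelengths[hi]])
    return right - left

def peaks_resolved(spectrum, frac=0.8):
    """Rayleigh-style test: do the two largest local maxima have a valley below frac of the lower one?"""
    maxima = [i for i in range(1, len(spectrum) - 1)
              if spectrum[i] >= spectrum[i - 1] and spectrum[i] >= spectrum[i + 1]]
    maxima = sorted(maxima, key=lambda i: spectrum[i], reverse=True)[:2]
    if len(maxima) < 2:
        return False
    i, j = sorted(maxima)
    return spectrum[i:j + 1].min() < frac * min(spectrum[i], spectrum[j])

wl = np.arange(400.0, 1000.0, 0.2)                         # 0.2 nm sampling, as in Fig. 2b
sigma = 2.0 / 2.355                                        # Gaussian sigma for a 2 nm FWHM line
single = np.exp(-0.5 * ((wl - 650.0) / sigma) ** 2)
double = (np.exp(-0.5 * ((wl - 650.0) / sigma) ** 2)
          + np.exp(-0.5 * ((wl - 653.2) / sigma) ** 2))     # 3.2 nm separation
print(round(fwhm(wl, single), 2), peaks_resolved(double))   # ~2.0, True
```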

Similarly, the SSC is an important indicator for fruit quality assessment and determination of harvest time25. However, conventional SPAD and SSC measurements involve destructive sampling, which is complicated and time-consuming. Advancements in molecular spectroscopy, coupled with chemometric techniques, have popularized VIS–NIR spectroscopy as a non-destructive alternative for internal quality assessment26. To demonstrate the applicability of the HyperspecI sensor in intelligent agriculture, we developed a prototype for non-destructive SPAD and SSC measurements (Fig. 4a).

Figure 4b shows the SPAD detection principle based on the Lambert–Beer law, using the HyperspecI sensor to acquire transmission spectra of 200 leaves. The values at the characteristic peaks (660 nm and 720 nm) were used to establish the regression model. Validation with an additional 20 leaves resulted in high precision, with a root mean square error of 1.0532 and a relative error of 3.73%. Figure 4c outlines the non-destructive SSC detection procedure in apples. Spectral curves show peaks and troughs indicative of various apple characteristics.
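The SPAD model of Fig. 4b is a regression on transmission values at the two characteristic bands, following the Lambert–Beer law, in which absorbance A = −log10 T scales with pigment concentration and path length. The fragment below sketches one such fit; the transmittance data and SPAD labels are synthetic placeholders, and the linear form is an assumption rather than the authors' exact regression.

```python
# Lambert-Beer style SPAD regression sketch (synthetic placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Placeholder leaf transmittances at the two characteristic bands (660 nm, 720 nm).
T660 = rng.uniform(0.05, 0.60, 200)
T720 = rng.uniform(0.20, 0.80, 200)

# Lambert-Beer: absorbance A = -log10(T) is proportional to chlorophyll content.
X = np.column_stack([-np.log10(T660), -np.log10(T720)])

# Fake ground-truth SPAD values so the sketch runs end to end.
spad = 25.0 * X[:, 0] - 8.0 * X[:, 1] + 12.0 + rng.normal(0.0, 1.0, 200)

model = LinearRegression().fit(X[:180], spad[:180])     # calibration leaves
pred = model.predict(X[180:])                           # 20 held-out validation leaves
rmse = np.sqrt(np.mean((pred - spad[180:]) ** 2))
rel_err = np.mean(np.abs(pred - spad[180:]) / spad[180:])
print(f"RMSE = {rmse:.3f}, relative error = {100 * rel_err:.2f}%")
```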



Fig. 3 | Hyperspectral imaging performance demonstration on high-resolution, high light throughput and real-time ability. a, Hyperspectral imaging results of small targets. The raw measurements, synthesized RGB images and hyperspectral images of several exemplar bands are shown on the left. A comparison of the background spectrum, ground truth and reconstructed spectra of the small targets, which are marked in the synthesized RGB image with a blue rectangle, is shown on the right. b, Hyperspectral imaging comparison in low-light conditions. The imaging results of our HyperspecI sensor, a commercial mosaic multispectral camera and a commercial scanning hyperspectral imaging camera are compared at a fixed exposure time of 1 ms. The synthesized RGB images, exemplar spectral images at 580 nm and 690 nm and the corresponding normalized data are shown. c, Hyperspectral imaging results at video frame rate. The results at three different time points (0 s, 1 s and 2 s) while the object was undergoing translational motion at a speed of about 0.5 m s−1 are shown on the left. The results at three different time points (0 s, 0.02 s and 0.04 s) while the object was undergoing rotational motion at a speed of around 6 rad s−1 are shown on the right. The comparison of the result is demonstrated using synthesized RGB images and spectral images at 550 nm, 700 nm and 850 nm. a.u., arbitrary units.

Our partial least squares regression model accurately predicts SSC, with a correlation coefficient of 0.8264 and a root mean square error of 0.6132% for the training set, and 0.6162 and 0.7877% for the test set, respectively. The relative error of the prediction set is 5.30%. Figure 4d shows the RGB and reconstructed hyperspectral images of leaves and apples, highlighting the potential of the sensor for agricultural applications. These results emphasize the promise of the HyperspecI sensor for non-destructive analysis in intelligent agriculture. For more details, refer to Supplementary Information section 7.
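The SSC prediction above uses partial least squares (PLS) regression on VIS–NIR reflectance spectra. A hedged sketch with scikit-learn's PLSRegression is shown below; the spectra, SSC labels and the number of latent components are placeholders, not the authors' calibration data or settings.

```python
# PLS regression sketch for soluble solid content (SSC); placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_apples, n_bands = 224, 61                      # reflectance sampled at 61 VIS-NIR bands

reflectance = rng.uniform(0.1, 0.9, size=(n_apples, n_bands))
ssc = 9.0 + 7.0 * reflectance[:, 30] + rng.normal(0.0, 0.4, n_apples)   # fake Brix labels

X_tr, X_te, y_tr, y_te = train_test_split(reflectance, ssc, test_size=0.2, random_state=0)

pls = PLSRegression(n_components=8)              # latent components would be chosen by validation
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_hat - y_te) ** 2))    # root mean square error of prediction
r_p = np.corrcoef(y_hat, y_te)[0, 1]             # prediction correlation coefficient
print(f"Rp = {r_p:.3f}, RMSEP = {rmsep:.3f} %")
```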



Fig. 4 | Application of the HyperspecI sensor for intelligent agriculture. a, The prototype using the HyperspecI sensor for agriculture spectra acquisition. It includes two distinct modes: the mode of leaf transmission spectra acquisition and the mode of apple reflectance spectra acquisition. b, The working principle of measuring the SPAD index, which is used to evaluate the chlorophyll content of leaves, is shown on the left. SPAD evaluation results using the HyperspecI sensor are shown on the right. c, The working principle of measuring SSC, used to evaluate apple quality, is shown on the left. The comparison between the measured SSC using a commercial product and the predicted SSC using our HyperspecI sensor and PLS regression model is shown on the right. d, The comparison between RGB images and the synthesized RGB images using the reconstructed hyperspectral images. The figure in the middle shows a comparison between the spectra acquired by a commercial spectrometer and the reconstructed spectra at exemplar randomly selected locations. a.u., arbitrary units.

Application for human health

The rising attention to health concerns has led to a proliferation of health monitoring equipment, yet its progress is hampered by limitations in resolution, real-time ability and portability. To demonstrate the advantages of our HyperspecI in dynamic, high-resolution imaging, we conducted experiments on blood oxygen detection and water quality assessment, illustrating its potential for real-time health monitoring as an alternative to traditional bulky and complex equipment. For blood oxygen saturation monitoring, we developed a prototype device that detects changes in arterial blood absorption at specific wavelengths due to pulsation (Fig. 5a). When the finger under measurement is placed into the device, the transmission spectra are acquired using a broad-spectrum light source and the HyperspecI sensor. By reducing the effective number of pixels in the HyperspecI sensor, we can achieve a collection frame rate of up to 100 Hz. Subsequently, the acquired data are processed to obtain a series of spectral profiles at a certain area on the finger. Finally, the pulsatile component (AC) is extracted from the photoplethysmography signal at two characteristic bands (780 nm and 830 nm), which produces the blood oxygen saturation (Supplementary Information section 8.1). Figure 5b shows a comparison of measurements between the HyperspecI sensor and a commercial oximeter.
monitoring as an alternative to traditional bulky and complex equip- Furthermore, we conducted an effluent diffusion monitoring experi-
ment. For blood oxygen saturation monitoring, we developed a proto- ment to explore the ability of the HyperspecI sensor for water quality
type device to detect changes in arterial blood absorption at specific detection. During the experiment, two solutions with similar colours
wavelengths due to pulsation (Fig. 5a). When the finger under measure- but different compositions were rapidly injected into distilled water;
ment is placed into the device, the transmission spectra are acquired the diffusion process was simultaneously recorded using the Hyper-
using a broad-spectrum light source and the HyperspecI sensor. By specI sensor and an RGB camera (Fig. 5c). Distinguishing between
reducing the effective number of pixels in the HyperspecI sensor, we these two solutions using RGB images is challenging. However, their
can achieve a collection frame rate of up to 100 Hz. Subsequently, the differentiation becomes straightforward through the disparities in
acquired data are processed to obtain a series of spectral profiles at their spectral curves and spectral images at the NIR range (780 nm).
a certain area on the finger. Finally, the pulsatile component (AC) is Furthermore, the segmentation results of RGB images and recon-
extracted from the photoplethysmography signal at two characteristic structed hyperspectral images show the superiority and potential



Fig. 5 | Application of the HyperspecI sensor for blood oxygen and water quality monitoring. a, The prototype of the HyperspecI sensor for blood oxygen saturation (SpO2) monitoring. b, i, The transmission spectra through the finger were obtained at a collection rate of 100 Hz. ii, Two photoplethysmography (PPG) signals at 780 nm and 830 nm, corresponding to two bands with different intensities of HbO2 and Hb absorption. The blood oxygen saturation can be accurately determined by analysing and calibrating the PPG signals at these two characteristic bands. iii, Comparative analysis with a commercial oximeter product demonstrates a high level of consistency in the obtained results. c, Three exemplar frames of HyperspecI measurements demonstrating the solution diffusion process, accompanied by the corresponding images captured using an RGB camera. In the petri dish, solution 1 was positioned at the top left corner, and solution 2 was placed at the bottom left corner. These two solutions were added to distilled water at the top right corner. Hyperspectral images acquired by the HyperspecI sensor are presented in the synthesized RGB format. d, Comparison of segmentation maps between an RGB camera (left) and the HyperspecI sensor (right). a.u., arbitrary units.

Application for industrial automation

To demonstrate the near-infrared hyperspectral imaging ability and accuracy of our sensors, we applied them in textile classification and apple bruise detection. For textile classification, reflectance spectra of textiles were acquired using the HyperspecI sensor (Fig. 6a). Previous research27 has shown that characteristic spectral bands of cotton fabrics (at 1,220 nm, 1,320 nm and 1,480 nm) and polyester fabrics (at 1,320 nm, 1,420 nm and 1,600 nm) are distinct, facilitating their classification (Fig. 6b–d). In our experiment, we prepared 204 samples, including various cotton and polyester fabrics, divided into training (75 cotton and 75 polyester) and testing datasets (27 cotton and 27 polyester). Given the diverse appearance of these samples, their classification by visual inspection is challenging (Fig. 6b). Subsequently, we used the support vector machine (SVM) algorithm for automatic fabric category classification (Fig. 6c). For the testing phase, the overall classification accuracy reached 98.15% (Supplementary Information section 9.1).
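The cotton-versus-polyester classifier described above is an SVM on per-sample reflectance spectra. A minimal version with scikit-learn is sketched below; the spectra are random placeholders, and the kernel and regularization choices are assumptions rather than the settings used in the paper (Supplementary Information section 9.1).

```python
# SVM fabric classification sketch on NIR reflectance spectra (placeholder data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_bands = 96                                     # 400-1,700 nm reconstructed channels

def fake_spectra(n, dip):
    """Placeholder spectra; the dip loosely mimics class-specific absorption bands."""
    base = rng.uniform(0.2, 0.8, size=(n, n_bands))
    base[:, 60:70] -= dip                        # pretend absorption feature
    return base

cotton = fake_spectra(102, 0.15)                 # 75 training + 27 testing samples in the paper
polyester = fake_spectra(102, 0.05)
X = np.vstack([cotton, polyester])
y = np.array([0] * 102 + [1] * 102)              # 0 = cotton, 1 = polyester

idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[train], y[train])
print(f"test accuracy = {clf.score(X[test], y[test]):.4f}")
```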
Apple bruises, often located beneath the skin, are challenging to detect visually, leading to low identification accuracy and efficiency. Benefiting from the wide spectral range of our HyperspecI, bruised areas exhibit spectral characteristics near wavelengths of 1,060 nm, 1,260 nm and 1,440 nm because of water absorption of NIR light, which is crucial for invisible bruise detection. In our experiment, we prepared 224 samples of Qixia Fuji apples and used a 30-cm steel pipe to systematically create bruises at random locations on each apple (Fig. 6d). We used the HyperspecI sensor and an RGB camera to acquire hyperspectral and colour images of these apples, constructing two separate image datasets (Fig. 6e). Each dataset, comprising 224 images, was applied to train a YOLOv5-based detection network, and the remaining 40 samples were used for testing (Supplementary Information section 9.2). Spectral images were processed to create synthesized colour representations, distinctly marking bruised regions for enhanced visualization (Fig. 6f). The detection precision and recall scores on the near-infrared spectral images are markedly higher than those on the RGB images (Fig. 6g). The higher mAP50 and mAP50-95 scores also indicate the effectiveness of using infrared spectral information for apple bruise detection, and further demonstrate that our HyperspecI sensor can capture crucial spectral features of subtle changes in the NIR range.
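The precision, recall and mAP50 numbers in Fig. 6g follow the usual object-detection convention: a predicted bruise box counts as a true positive when its intersection over union (IoU) with a labelled box exceeds a threshold (0.5 for mAP50). The helper below sketches that matching step only; boxes are given as (x1, y1, x2, y2) and the example boxes are made up, so it is an illustration of the metric rather than the YOLOv5 evaluation code used in the paper.

```python
# IoU-based matching for bruise detection evaluation (illustrative helper).
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(predictions, ground_truth, thr=0.5):
    """Greedy one-to-one matching of predicted boxes to labels at an IoU threshold."""
    matched, tp = set(), 0
    for p in predictions:
        best, best_iou = None, thr
        for k, g in enumerate(ground_truth):
            if k not in matched and iou(p, g) >= best_iou:
                best, best_iou = k, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / max(len(predictions), 1)
    recall = tp / max(len(ground_truth), 1)
    return precision, recall

# Made-up example: two labelled bruises, two predictions (one accurate, one off-target).
gt = [(10, 10, 50, 50), (80, 80, 120, 130)]
pred = [(12, 11, 52, 49), (200, 200, 230, 240)]
print(precision_recall(pred, gt))   # (0.5, 0.5)
```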



Fig. 6 | Application of the HyperspecI sensor for textile classification and apple bruise detection. a, The experiment configuration for the acquisition of fabric spectra. b, Measurements and reconstructed hyperspectral images of textile samples, together with synthesized RGB (sRGB) representations and exemplar hyperspectral images (1,220 nm, 1,320 nm and 1,480 nm for cotton fabrics and 1,320 nm, 1,420 nm and 1,600 nm for polyester fabrics). c, An SVM model for fabric classification based on spectral characteristics, achieving a high accuracy of 98.15% on the prediction set. d, The apple samples and experiment configuration. Apple samples with random bruises were constructed using the device shown on the right. e, The acquired measurement of apples and the corresponding spectral curves of bruised and normal portions. The characteristic wavelengths of apple bruises are distributed at 1,060 nm, 1,260 nm and 1,440 nm. f, Comparison of apple bruise detection between manual labelling (green bounding boxes) and model prediction (red bounding boxes). We used the pre-trained YOLOv5 network to detect bruised portions of apples. g, Quantitative results of apple bruise detection based on NIR and RGB images, respectively. NIR images: precision 0.958, recall 0.969, mAP50 0.98, mAP50-95 0.568. RGB images: precision 0.457, recall 0.387, mAP50 0.313, mAP50-95 0.0865. a.u., arbitrary units.

Conclusion and discussion

This work introduces an on-chip hyperspectral image sensor technique, termed HyperspecI, which follows the computational imaging principle to realize integrated and high-throughput hyperspectral imaging. The HyperspecI sensor first acquires encoded hyperspectral information by integrating a BMSFA and a broadband monochrome sensor chip, and then reconstructs hyperspectral images using deep learning. Compared with the classic scanning scheme, the HyperspecI sensor maintains the full temporal resolution of the underlying sensor chip. Compared with existing snapshot systems, the reported technique demonstrates enhanced integration with light weight and compact size. Extensive experiments demonstrate the superiority of the HyperspecI sensor in high spatial–spectral–temporal resolution, wide spectral response range and high light throughput. These advantages provide great benefits in hyperspectral imaging applications such as detecting under low light, targeting dynamic scenes and detecting small or remote targets that are unattainable using existing methods. We demonstrated the wide application potential of the HyperspecI sensor, such as in intelligent agricultural monitoring and real-time human health monitoring. The different applications validated the versatility, flexibility and robustness of the HyperspecI technique.

The HyperspecI technique can be further extended. First, by using advanced fabrication techniques such as electron beam lithography28, nanoimprinting and two-photon polymerization, higher degrees of freedom and precision for BMSFA design and HyperspecI integration can be achieved. Moreover, considering the excellent compatibility with other materials, the derived BMSFA strategy can be paired with high-performance 2D materials29, enabling more precise optical control and enhanced optical performance. Second, the generalization ability of hyperspectral reconstruction can be further enhanced by training data augmentation, transfer learning and illumination decomposition, which can help in tackling common challenges such as outlier input, metamerism and varying illumination30. Third, the real-time hyperspectral imaging ability of HyperspecI can be combined with heterogeneous detection devices, such as LIDAR and SAR, to achieve multi-source fusion detection31. This is important for realizing high-precision sensing and making high-reliability decisions in complex environments. Fourth, the highly compatible architecture of HyperspecI provides off-the-shelf solutions for easy integration with various imaging platforms, thus directly upgrading their sensing dimension and enabling multifunctional applications.



For instance, the integration of vibration-coded microlens arrays into the BMSFA can enable high-resolution hyperspectral 3D photography32. The combination with ultrafast imaging systems can realize hyperspectral transient observation33. By further designing BMSFAs with multidimensional multiplexing abilities (such as polarization and phase encoding), large-scale multidimensional imaging can be achieved34. When incorporated with fluorescence imaging systems, the fluorescence signals of different dyes can be effectively separated based on spectral characteristics in a snapshot manner, thus improving detection sensitivity and efficiency in biomedical science35,36. Overall, we believe this work may provide opportunities for the development of next-generation image sensors of higher information dimension, higher imaging resolution and higher degree of intelligence.

Online content

Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-024-08109-1.

1. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Proc. Mag. 19, 17–28 (2002).
2. Li, S. et al. Deep learning for hyperspectral image classification: an overview. IEEE Trans. Geosci. Remote 57, 6690–6709 (2019).
3. Backman, V. et al. Detection of preinvasive cancer cells. Nature 406, 35–36 (2000).
4. Hadoux, X. et al. Non-invasive in vivo hyperspectral imaging of the retina for potential biomarker use in Alzheimer's disease. Nat. Commun. 10, 4227 (2019).
5. Mehl, P. M., Chen, Y.-R., Kim, M. S. & Chan, D. E. Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. J. Food Eng. 61, 67–81 (2004).
6. Yang, Z. et al. Single-nanowire spectrometers. Science 365, 1017–1020 (2019).
7. Green, R. O. et al. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 65, 227–248 (1998).
8. Pian, Q., Yao, R., Sinsuebphon, N. & Intes, X. Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging. Nat. Photon. 11, 411–414 (2017).
9. Descour, M. & Dereniak, E. Computed-tomography imaging spectrometer: experimental calibration and reconstruction results. Appl. Opt. 34, 4817–4826 (1995).
10. Wagadarikar, A., John, R., Willett, R. & Brady, D. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 47, 44–51 (2008).
11. Arguello, H. & Arce, G. R. Colored coded aperture design by concentration of measure in compressive spectral imaging. IEEE Trans. Image Process. 23, 1896–1908 (2014).
12. Geelen, B., Tack, N. & Lambrechts, A. A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic. In Advanced Fabrication Technologies for Micro/nano Optics and Photonics VII, Vol. 8974, pp. 80–87 (SPIE, 2014).
13. Yesilkoy, F. et al. Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces. Nat. Photon. 13, 390–396 (2019).
14. Faraji-Dana, M. et al. Hyperspectral imager with folded metasurface optics. ACS Photon. 6, 2161–2167 (2019).
15. Xiong, J. et al. Dynamic brain spectrum acquired by a real-time ultraspectral imaging chip with reconfigurable metasurfaces. Optica 9, 461–468 (2022).
16. He, H. et al. Meta-attention network based spectral reconstruction with snapshot near-infrared metasurface. Adv. Mater. 2313357 (2024).
17. Wang, Z. et al. Single-shot on-chip spectral sensors based on photonic crystal slabs. Nat. Commun. 10, 1020 (2019).
18. Yako, M. et al. Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters. Nat. Photon. 17, 218–223 (2023).
19. Kim, T., Lee, K. C., Baek, N., Chae, H. & Lee, S. A. Aperture-encoded snapshot hyperspectral imaging with a lensless camera. APL Photon. 8, 066109 (2023).
20. Redding, B., Liew, S. F., Sarma, R. & Cao, H. Compact spectrometer based on a disordered photonic chip. Nat. Photon. 7, 746–751 (2013).
21. Monakhova, K., Yanny, K., Aggarwal, N. & Waller, L. Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array. Optica 7, 1298–1307 (2020).
22. Jeon, D. S. et al. Compact snapshot hyperspectral imaging with diffracted rotation. ACM Trans. Graph. 38, 117 (2019).
23. Cortés, V., Blasco, J., Aleixos, N., Cubero, S. & Talens, P. Monitoring strategies for quality control of agricultural products using visible and near-infrared spectroscopy: a review. Trends Food Sci. Technol. 85, 138–148 (2019).
24. Limantara, L. et al. Analysis on the chlorophyll content of commercial green leafy vegetables. Procedia Chem. 14, 225–231 (2015).
25. Li, L. et al. Calibration transfer between developed portable Vis/NIR devices for detection of soluble solids contents in apple. Postharvest Biol. Technol. 183, 111720 (2022).
26. Ma, T., Xia, Y., Inagaki, T. & Tsuchikawa, S. Rapid and nondestructive evaluation of soluble solids content (SSC) and firmness in apple using Vis–NIR spatially resolved spectroscopy. Postharvest Biol. Technol. 173, 111417 (2021).
27. Liu, Z., Li, W. & Wei, Z. Qualitative classification of waste textiles based on near infrared spectroscopy and the convolutional network. Text. Res. J. 90, 1057–1066 (2020).
28. Kim, S. et al. All-water-based electron-beam lithography using silk as a resist. Nat. Nanotechnol. 9, 306–310 (2014).
29. Yu, S., Wu, X., Wang, Y., Guo, X. & Tong, L. 2D materials for optical modulation: challenges and opportunities. Adv. Mater. 29, 1606128 (2017).
30. Zheng, Y., Sato, I. & Sato, Y. Illumination and reflectance spectra separation of a hyperspectral image meets low-rank matrix factorization. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1779–1787 (IEEE, 2015).
31. Abdar, M. et al. A review of uncertainty quantification in deep learning: techniques, applications and challenges. Inf. Fusion 76, 243–297 (2021).
32. Wu, J. et al. An integrated imaging sensor for aberration-corrected 3D photography. Nature 612, 62–71 (2022).
33. Gao, L., Liang, J., Li, C. & Wang, L. V. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014).
34. Altaqui, A. et al. Mantis shrimp-inspired organic photodetector for simultaneous hyperspectral and polarimetric imaging. Sci. Adv. 7, 3196 (2021).
35. Shi, W. et al. Pre-processing visualization of hyperspectral fluorescent data with spectrally encoded enhanced representations. Nat. Commun. 11, 726 (2020).
36. Wu, J. et al. Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale. Cell 184, 3318–3332 (2021).

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

© The Author(s) 2024



Methods

Spectral modulation material preparation

We used 16 types of organic dyes covering 400–1,000 nm as spectral modulation materials for HyperspecI-V1 sensor fabrication. For the HyperspecI-V2 sensor, we used 10 types of organic dyes and 6 types of nano-metal oxides to cover 400–1,700 nm. To prepare the organic dyes for photolithography processes, we mixed 0.2 g of each organic dye with 20 ml of photoresist (SU-8 2010) and used an ultrasonic liquid processor (NingHuai NH-1000D) at room temperature. To ensure complete dissolution and remove impurities, the mixed solution was filtered using 3 μm pore size filters. To prepare the nano-metal oxides for photolithography processes, we used a dispersion solution (PGMEA), photoresist (SU-8 2025) and nano-metal oxide powder. We mixed 20 g of each material powder with 80 g of PGMEA. Following a dispersion process of 48 h using the ultrasonic liquid processor at room temperature, we obtained material dispersion fluids with a mass fraction of 20%. To address the issue of inappropriate concentration, we mixed 10 ml of each material dispersant with 20 ml of photoresist at a concentration ratio of 1:2. These mixtures were stirred for 15 min using the ultrasonic liquid processor. Then the filters with 3 μm pore size were used to remove the impurities. Subsequently, we applied the spectral modulation photoresist onto the quartz substrates (JGS3), and the test smears were formed at 4,000 rpm on the spin coater (Helicoater, HC220PE). We validated the spectral properties of these modulation photoresists using a spectrophotometer (PerkinElmer Lambda 950). The details of the experimental equipment, operating procedures and analysis of the results are presented in Extended Data Figs. 8 and 9 and Supplementary Information section 2.
BMSFA fabrication and sensor integration

The fabrication of the BMSFA includes a series of processes: photoresist dissolution, photomask design, substrate preparation, photoresist coating, soft baking, UV exposure, post-exposure baking, development and hard baking (Fig. 1b and Extended Data Fig. 7). First, we prepared the photoresists with different spectral modulation properties. Then, we dropped a solvent of one kind of photoresist onto the quartz substrate (JGS3, 4 inches), ensuring uniform distribution of the photoresist containing spectral modulation materials at 4,000 rpm on a spin coater. After soft baking at 95 °C for 5 min, we used UV photolithography (SUSS MA6 Mask Aligner, SUSS MicroTec AG) to cure the photoresist at the designed position of the quartz substrate. The exposure dose of the UV lithography machine is 1,000 mJ cm−2. This process was conducted for different modulation materials using a designed photomask. After post-exposure baking at 95 °C for 10 min, development removes the unexposed areas, leaving the photoresist only at the specific locations. Then, hard baking of the wafers was done on a hotplate at 150 °C for 5 min. We repeated the above steps to pattern all 16 types of spectral modulation photoresists onto the quartz substrate. Eventually, we poured pure SU-8 photoresist onto the finished substrate, completing the photolithography process through photoresist coating, soft baking, UV exposure, post-exposure baking and hard baking. By following these steps, the BMSFA was prepared for our HyperspecI sensors.

For sensor integration, the HyperspecI-V1 sensor was prepared by combining the BMSFA with the Sony IMX264 chip, which covers the 400–1,000 nm spectral range. The HyperspecI-V2 sensor was prepared by combining the BMSFA with the Sony IMX990 chip, covering the 400–1,700 nm spectral range. We used a laser engraving machine to remove the packaging glass from the monochrome sensor. Then, we cured the BMSFA onto the sensor surface using a photoresist under ultraviolet lighting, ensuring optimal sensor integration. The details of BMSFA fabrication and sensor integration are presented in Extended Data Fig. 7 and Supplementary Information section 3.
Spectral calibration

We used a monochromator (Omno151, spectral range 200–2,000 nm) to generate monochromatic light with a FWHM of 10 nm. The monochromatic light was uniformly irradiated onto a power meter probe (Thorlabs S130VC, S132C) and the HyperspecI sensors after passing through a collimated optical path. In the automated calibration process, we developed a program to control the wavelength of the monochromatic light, acquire the power meter value and save the corresponding measurements of the HyperspecI sensors. This process was repeated for each wavelength to automatically collect the compressive sensing matrix. For more details, refer to Extended Data Fig. 7f,g and Supplementary Information section 4.
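The calibration loop builds the sensing matrix one wavelength at a time: set the monochromator, read the power meter for normalization, grab a frame and store the normalized per-pixel response as one column. The sketch below shows that control flow only; monochromator, power_meter and sensor are hypothetical driver objects standing in for the instrument interfaces, not a real API.

```python
# Control-flow sketch of the automated spectral calibration (hypothetical drivers).
import numpy as np

def calibrate_sensing_matrix(monochromator, power_meter, sensor,
                             wavelengths_nm, dark_frame):
    """Return a (n_pixels, n_wavelengths) response matrix, power-normalized per band."""
    columns = []
    for wl in wavelengths_nm:
        monochromator.set_wavelength(wl)          # hypothetical driver call
        power = power_meter.read()                # optical power used for normalization
        frame = sensor.capture().astype(np.float64)
        response = (frame - dark_frame) / max(power, 1e-12)
        columns.append(response.ravel())
    # Column k holds every pixel's calibrated response at wavelengths_nm[k].
    return np.stack(columns, axis=1)

# Example wavelength grid matching the reconstruction channels:
# 10 nm steps over 400-1,000 nm plus 20 nm steps over 1,020-1,700 nm (96 bands).
wavelengths = np.concatenate([np.arange(400, 1001, 10), np.arange(1020, 1701, 20)])
print(len(wavelengths))   # 96
```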
HSI reconstruction

We used a data-driven method to reconstruct HSIs from measurements (Extended Data Fig. 6a,b). The SRNet is a hybrid neural network that combines the core features of Transformer and convolutional neural network architectures for efficient, high-precision reconstruction. It uses a U-Net-shaped architecture as the baseline, the basic component of which is the spectral attention module (SAM) that focuses on extracting the spectral features of HSIs. SAM applies the attention mechanism in the spectral dimension rather than in the spatial dimension to reduce running time and memory cost. Moreover, this strategy enables us to compute cross-covariance across spectral channels and create attention feature maps with implicit knowledge of spectral information and global context37,38. For more details, refer to Supplementary Information section 5.2.
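Attention along the channel (spectral) dimension means the attention map is a C × C cross-covariance between spectral feature channels instead of an HW × HW map over spatial positions. A condensed PyTorch sketch of that idea, in the spirit of the transposed attention of refs. 37,38, is given below; layer sizes and the surrounding U-Net are omitted, and this is not the authors' exact SRNet implementation.

```python
# Channel-wise ("spectral") attention sketch, after refs 37,38; not the exact SRNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralAttention(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.temperature = nn.Parameter(torch.ones(heads, 1, 1))

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        # Flatten space; the heads split the channel dimension.
        split = lambda t: t.reshape(b, self.heads, c // self.heads, h * w)
        q, k, v = map(split, (q, k, v))
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature    # (B, heads, C/heads, C/heads)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out)

x = torch.randn(1, 32, 64, 64)
print(SpectralAttention(32)(x).shape)   # torch.Size([1, 32, 64, 64])
```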
Our training dataset was collected using the commercial FigSpec-23 and GaiaField Pro-N17E-HR hyperspectral cameras, both integrated under a push-broom scanning mechanism, as shown in Extended Data Fig. 5. The training data consisted of 96 spectral channels, with 61 channels at intervals of 10 nm in the 400–1,000 nm range and 35 channels at intervals of 20 nm in the 1,000–1,700 nm range (see Supplementary Information section 5.1 for more details). Considering the high spatial resolution of the measurements, we randomly divided the calibrated pattern into several sub-patterns for training, which can also avoid overfitting to a particular BMSFA encoding pattern. During each iteration, we randomly selected a 512 × 512 sub-pattern from the original full-resolution BMSFA pattern (2,048 × 2,048 for HyperspecI-V1 and 1,024 × 1,024 for HyperspecI-V2). The model was trained using the Adam optimizer (β1 = 0.9, β2 = 0.999) for 1 × 10^6 iterations. The learning rate was initialized to 4 × 10^−4, and a cosine annealing scheme was adopted. We chose the root mean square error, mean relative absolute error and total variation (TV) as the hybrid loss function. We trained the model on the PyTorch platform with a single NVIDIA RTX 4090 GPU. The measurements and reconstructed HSIs synthesized as RGB images are shown in Extended Data Fig. 6c,d. Exemplar reconstructed spectra are shown in Fig. 2a and Extended Data Fig. 3.
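The training objective combines root mean square error, mean relative absolute error (MRAE) and a total-variation term. A compact PyTorch version of such a hybrid loss is sketched below; the relative weights are placeholders, as they are not stated here.

```python
# Hybrid reconstruction loss sketch: RMSE + MRAE + total variation (weights assumed).
import torch

def hybrid_loss(pred, target, w_rmse=1.0, w_mrae=1.0, w_tv=1e-4, eps=1e-6):
    """pred, target: (B, C, H, W) hyperspectral cubes scaled to [0, 1]."""
    rmse = torch.sqrt(torch.mean((pred - target) ** 2) + eps)
    mrae = torch.mean(torch.abs(pred - target) / (target.abs() + eps))
    tv = (torch.mean(torch.abs(pred[..., :, 1:] - pred[..., :, :-1]))
          + torch.mean(torch.abs(pred[..., 1:, :] - pred[..., :-1, :])))
    return w_rmse * rmse + w_mrae * mrae + w_tv * tv

pred = torch.rand(2, 96, 64, 64, requires_grad=True)
target = torch.rand(2, 96, 64, 64)
loss = hybrid_loss(pred, target)
loss.backward()
print(float(loss))
```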
Remote detection experiment

As shown in Extended Data Fig. 1a, we used a telescope (CELESTRON NEXSTAR 127SLT, 1,500 mm focal length, 127 mm aperture Maksutov–Cassegrain) to image the moon and compared the imaging results of our HyperspecI sensor with those of a line-scanning hyperspectral camera (FigSpec-23) and a mosaic multispectral camera (Silios CMS-C). The target scenes include the Mare Crisium (Extended Data Fig. 1c, region 2) and Mare Fecunditatis (Extended Data Fig. 1c, regions 1 and 4) regions of the moon during the crescent phase. The acquisition frame rate of our HyperspecI was set to 47 fps, with an exposure time of 21 ms. The mosaic multispectral camera has an acquisition frame rate of 30 fps, with an exposure time of 33 ms. The line-scanning hyperspectral camera requires approximately 100 s to capture an HSI frame. For more details, refer to Supplementary Information section 10.

Metamerism experiment
Metamerism denotes that different spectra project the same colour in the visible spectral range. To validate the ability of HyperspecI to distinguish materials with identical RGB values, we conducted two experiments (Extended Data Fig. 2). First, we tested real and fake potted plants.
Points with the same colour are marked in Extended Data Fig. 2a (ii), in which the red points on the real plant and the yellow points on the fake plant have the same RGB values. Extended Data Fig. 2a (iii) shows the original measurement from our HyperspecI sensor, and the reconstructed hyperspectral image is shown in Extended Data Fig. 2a (v). We plotted the spectra of points P1 and P2 on both the real and fake plants (Extended Data Fig. 2a (iv)). The spectra show that the leaves of the real plant exhibit distinct spectral features because of variations in chlorophyll and water content (highlighted in the blue block of Extended Data Fig. 2a (iv)), whereas the fake plant shows completely different spectra. Second, we tested real and fake strawberries, which present nearly identical appearances, textures and colours (Extended Data Fig. 2b). By extracting the spectra of points P1 and P2 on both the real and fake strawberries, we observed distinct absorption peaks at 670 nm and 750 nm in the real strawberries, whereas the spectra of the fake strawberries appeared smoother.
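The distinction exploited in this experiment can be quantified directly from the reconstructed cube: two points that share an RGB value are compared through their full spectra, for example with the spectral angle and the Pearson correlation. The sketch below is illustrative only; the cube shape, pixel coordinates and synthetic data are assumptions rather than values from the experiment.

import numpy as np

def spectral_angle(s1, s2, eps=1e-12):
    # Spectral angle (radians) between two spectra; 0 means identical shape.
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def compare_points(hsi, p1, p2):
    # hsi: reconstructed cube of shape (H, W, bands); p1, p2: (row, col) pixels.
    s1 = hsi[p1[0], p1[1], :].astype(np.float64)
    s2 = hsi[p2[0], p2[1], :].astype(np.float64)
    return spectral_angle(s1, s2), float(np.corrcoef(s1, s2)[0, 1])

# Illustrative call on a synthetic 96-band cube: a large angle or a low correlation
# separates materials that look identical in RGB.
cube = np.random.default_rng(0).random((64, 64, 96))
angle, corr = compare_points(cube, (10, 10), (40, 40))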

Thermal stability experiment
The experimental setup for studying the thermal stability of the BMSFA modulation mask is shown in Extended Data Fig. 4a. This setup consists of a light source (Thorlabs SLS302 with a stabilized quartz tungsten halogen lamp of 10 W output optical power), an illumination module (consisting of an optical lens, an aperture stop, a field stop, a Thorlabs VDFW5/M beam splitter and an Olympus microscope objective A 10 PL 10 × 0.25), a heating stage (JF-956, 30–400 °C), several support components (Thorlabs CEA1400) and a fine-tune module (GCM-VC 13M). The fabricated BMSFA modulation mask was placed on the heating stage, and the temperature was sequentially increased in steps of 10 °C from 20 °C (room temperature) to 200 °C.
After the temperature stabilized at each step (waiting 10 min after the actual temperature reached the set temperature), we collected an image of the mask, as shown in Extended Data Fig. 4b. For more details, refer to Supplementary Information section 6.7 and the Supplementary Information Video. Next, we placed the HyperspecI sensor on the heating stage and used it to acquire hyperspectral images of the same scene at different operation temperatures, with subsequent comparisons regarding image similarity and spectral consistency. According to the manual provided by Sony, the operational temperature range of the sensor chip is from 0 °C to 50 °C, with the common operating surface temperature being around 37 °C at room temperature (20 °C). To assess the reconstruction performance of the sensor at varying temperatures, the heating stage was incrementally increased from 40 °C to 70 °C at 10 °C intervals. At each temperature, the sensor was powered on for 1 h to achieve temperature stabilization. Extended Data Fig. 4d–f shows the hyperspectral imaging performance of the same scene at different temperatures.

Data availability
All data generated or analysed during this study are included in this published article and the public repository at GitHub (https://github.com/bianlab/Hyperspectral-imaging-dataset).

Code availability
The demo code of this work is available from the public repository at GitHub (https://github.com/bianlab/HyperspecI).

37. Wang, Z. et al. Uformer: a general U-shaped transformer for image restoration. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 17683–17693 (IEEE, 2022).
38. Zamir, S. W. et al. Restormer: efficient transformer for high-resolution image restoration. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 5728–5739 (IEEE, 2022).
39. Gehm, M. E., John, R., Brady, D. J., Willett, R. M. & Schulz, T. J. Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 15, 14013–14027 (2007).
40. Cao, X., Du, H., Tong, X., Dai, Q. & Lin, S. A prism-mask system for multispectral video acquisition. IEEE Trans. Pattern Anal. 33, 2423–2435 (2011).
41. Kim, M. H. et al. 3D imaging spectroscopy for measuring hyperspectral patterns on solid objects. ACM Trans. Graph. 31, 38 (2012).
42. Lin, X., Liu, Y., Wu, J. & Dai, Q. Spatial-spectral encoded compressive hyperspectral imaging. ACM Trans. Graph. 33, 233 (2014).
43. Ma, C., Cao, X., Tong, X., Dai, Q. & Lin, S. Acquisition of high spatial and spectral resolution video with a hybrid camera system. Int. J. Comput. Vision 110, 141–155 (2014).
44. Lin, X., Wetzstein, G., Liu, Y. & Dai, Q. Dual-coded compressive hyperspectral imaging. Opt. Lett. 39, 2044–2047 (2014).
45. Golub, M. A. et al. Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser. Appl. Opt. 55, 432–443 (2016).
46. Wang, P. & Menon, R. Computational multispectral video imaging. J. Opt. Soc. Am. 35, 189–199 (2018).
47. Mu, T., Han, F., Bao, D., Zhang, C. & Liang, R. Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter. Opt. Lett. 44, 1281–1284 (2019).
48. McClung, A., Samudrala, S., Torfeh, M., Mansouree, M. & Arbabi, A. Snapshot spectral imaging with parallel metasystems. Sci. Adv. 6, eabc7646 (2020).
49. Williams, C., Gordon, G. S., Wilkinson, T. D. & Bohndiek, S. E. Grayscale-to-color: scalable fabrication of custom multispectral filter arrays. ACS Photon. 6, 3132–3141 (2019).
50. Zhang, W. et al. Handheld snapshot multi-spectral camera at tens-of-megapixel resolution. Nat. Commun. 14, 5043 (2023).
51. Yuan, L., Song, Q., Liu, H., Heggarty, K. & Cai, W. Super-resolution computed tomography imaging spectrometry. Photonics Res. 11, 212–224 (2023).

Acknowledgements This work was supported by the National Natural Science Foundation of China (62322502, 61827901, 62088101 and 61971045).

Author contributions L.B., Z.W. and Yuzhe Zhang conceived the idea. Z.W., Yuzhe Zhang, C.Y. and W.F. conducted the material optical performance tests and photoresist preparation. Z.W. and Yuzhe Zhang designed and fabricated the optical filter arrays. Yuzhe Zhang and Yinuo Zhang designed and implemented sensor integration. Z.W. and Yuzhe Zhang developed the reconstruction algorithms and conducted the model training. Z.W. and Yuzhe Zhang calibrated the sensors and tested their imaging performance. L.L., X.P., Yinuo Zhang and J. Zhao designed and implemented the experiments of chlorophyll detection, SSC detection, textile classification and apple bruise detection. Yuzhe Zhang, Q.M. and Yinuo Zhang conducted blood oxygen and water quality monitoring experiments. L.B., Z.W., Yuzhe Zhang, Yinuo Zhang, C.Y., L.L., C.Z. and J. Zhang prepared the figures and wrote the paper with input from all the authors. L.B. and J. Zhang supervised the project.

Competing interests L.B., Z.W., Yuzhe Zhang and J. Zhang hold patents on technologies related to the devices developed in this work (China patent nos. ZL 2022 1 0764166.5, ZL 2022 1 0764143.4, ZL 2022 1 0764141.5, ZL 2019 1 0441784.4, ZL 2019 1 0482098.1 and ZL 2019 1 1234638.0) and submitted the related patent applications.

Additional information
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41586-024-08109-1.
Correspondence and requests for materials should be addressed to Liheng Bian or Jun Zhang.
Peer review information Nature thanks Yidong Huang, Yunfeng Nie and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Reprints and permissions information is available at http://www.nature.com/reprints.

Extended Data Fig. 1 | Dynamic remote detection experiment in low-light environment (a cloudy and foggy night) with imaging comparison among our HyperspecI sensor, line-scanning hyperspectral camera (FigSpec-23), and mosaic multispectral camera (Silios CMS-C). a, The experiment configuration. A telescope (CELESTRON NEXSTAR 127SLT, 1,500 mm focal length, 127 mm aperture Maksutov-Cassegrain) was employed to image the moon combined with different cameras. b, The lunar spectrum comparison. c, The HSI results by different cameras. The results of the HyperspecI sensor present fine details of lunar topography, and the reconstructed spectrum corresponds well with the ground truth. In contrast, the results of the mosaic multispectral camera contain serious measurement noise due to limited light throughput, and the topography details are buried. The results of the line-scanning hyperspectral camera suffer from a similar degradation, and severe scanning overlapping exists since the moon was moving during the line-scanning process. The above experiment demonstrates the unique high-light throughput advantage of our HyperspecI sensor, which leads to high imaging SNR that enables the acquisition of dynamic, remote, and fine details in low-light conditions. d, The dynamic imaging results by different cameras.
Extended Data Fig. 2 | Metamerism experiment. a, Hyperspectral imaging results of our HyperspecI sensor on real and fake potted plants with the same colour but different spectra. (i) RGB images of real and fake potted plants. (ii) Locations of real and fake plants of the same colour are marked with red points and yellow points, respectively. (iii) The raw measurement of the HyperspecI sensor. (iv) Reconstructed spectra of metamerism locations. (v) Synthesized RGB image of the reconstructed HSI. b, Hyperspectral imaging results of our HyperspecI sensor on real and fake strawberries with the same colour but different spectra. (i) RGB images of real and fake strawberries. (ii) Locations of real and fake strawberries of the same colour are marked with red points and yellow points, respectively. (iii) The raw measurement of the HyperspecI sensor. (iv) Reconstructed spectra of metamerism locations. (v) Synthesized RGB image of the reconstructed HSI.

Extended Data Fig. 3 | Exemplar HSI results by our HyperspecI sensors. a–d, HSI results of four different indoor and outdoor scenes. The measurements were acquired by our HyperspecI sensors. The hyperspectral images were reconstructed via SRNet. Synthesized RGB images and several spectral images are presented. The spectral comparison between reconstructed spectra (RS) and ground truth (GT, acquired by the commercial spectrometers of Ocean Optics USB 2000+ and NIR-Quest 512) is also presented (denoted by solid and dashed lines, respectively).
Extended Data Fig. 4 | Thermal stability test of the BMSFA modulation mask and HyperspecI sensor. a, The experimental configuration for the BMSFA thermal stability test, comprising an optical system (including components of light source, illuminating system, camera, camera tube, beam splitter, objective lens, etc.) for uniform light illumination on the target, mechanical elements (featuring a manual focusing module, heating stage, translation stage, main support, etc.) for precise control of the target's observation position and target heating, and the modulation mask. b, The visual representations of the modulation mask at different temperatures. These observations reveal that the modulation mask is stable under different temperature conditions, maintaining its structural integrity and properties. c, The experiment configuration for the sensor thermal stability test. The HyperspecI sensor was fixed on a heating stage with controllable temperatures ranging from 40 °C to 70 °C. Measurements were acquired after each thermal step reached stability, with the sensor operating for 1 hour at each temperature. d, Similarity evaluation results of raw data. The SSIM and PSNR measurements consistently indicate that the camera's performance remains unaffected across different operating temperatures. e, The acquired raw data and corresponding HSI reconstruction results at different temperatures. f, Reconstructed spectral comparison of different regions. We calculated the Pearson correlation coefficients of the spectra in the same region at different temperatures. The minimum correlation coefficient for each region is 0.99, indicating that the sensor's spectral reconstruction performance is robust to temperature variations.
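The similarity and consistency checks summarized in panels d and f reduce to standard metrics. The sketch below, using scikit-image and NumPy, is a minimal illustration; the choice of the first temperature as reference, the function names and the data layout are assumptions rather than the authors' evaluation code.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def raw_similarity(ref_frame, test_frame):
    # PSNR and SSIM between raw measurements captured at two temperatures.
    drange = float(ref_frame.max() - ref_frame.min())
    psnr = peak_signal_noise_ratio(ref_frame, test_frame, data_range=drange)
    ssim = structural_similarity(ref_frame, test_frame, data_range=drange)
    return psnr, ssim

def spectral_consistency(spectra_by_temp):
    # spectra_by_temp: (n_temperatures, n_bands) spectra of one region.
    # Returns the minimum Pearson correlation against the first temperature.
    ref = spectra_by_temp[0]
    corrs = [np.corrcoef(ref, s)[0, 1] for s in spectra_by_temp[1:]]
    return float(min(corrs))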

Extended Data Fig. 5 | Hyperspectral image dataset construction. a, The system to collect the hyperspectral image dataset. Our dataset was mainly captured using the commercial FigSpec-23 (400–1,000 nm @ 960 × 1,230 pixels, 2.5 nm interval) and GaiaField Pro-N17E-HR (900–1,700 nm @ 640 × 666 pixels, 5 nm interval) hyperspectral cameras, both integrated under a push-broom scanning mechanism. Measurements were acquired using our HyperspecI (V1 for 400–1,000 nm and V2 for 400–1,700 nm). b, Image registration between the two commercial hyperspectral cameras. The scale-invariant feature transform (SIFT) technique was employed to align the field of view. c, The visualization of our constructed hyperspectral image dataset. After data registration, this yields a hyperspectral image dataset comprising 1,000 scenes (500 outdoor scenes and 500 indoor scenes), covering the entire spectral range of 400–1,700 nm, with a spatial resolution of 640 × 666 pixels and a total of 131 spectral bands at 10 nm intervals.
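The SIFT-based alignment mentioned in panel b can be sketched with OpenCV as follows. The exact registration pipeline of the dataset is not specified here, so the ratio-test threshold, the RANSAC tolerance and the use of a single homography are assumptions; the estimated homography would then be applied to every band of the moving hyperspectral cube.

import cv2
import numpy as np

def register_sift(ref_gray, mov_gray):
    # Align mov_gray to ref_gray using SIFT keypoints and a RANSAC homography.
    # Inputs are single-band (for example, band-averaged) 8-bit images.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_gray, None)
    k2, d2 = sift.detectAndCompute(mov_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d2, d1, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = ref_gray.shape
    return cv2.warpPerspective(mov_gray, H, (w, h)), H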
Extended Data Fig. 6 | The spectral reconstruction network (SRNet) structure and exemplar reconstructed results. a, The overall framework of SRNet. SRNet is a hybrid neural network that combines the core features of Transformer and CNN architectures for efficient, high-precision reconstruction. b, The framework of the Spectral Attention Module (SAM). SAM is the basic component of SRNet, which calculates the attention across spectral channel dimensions, extracting the spectral features of HSIs. c,d, Measurements and the corresponding spectral reconstruction results (presented in synthesized RGB form). The measurements were acquired using the HyperspecI-V1 sensor. Close-ups are provided, marked in the measurements with rectangular outlines.
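As a rough illustration of what attention across spectral channels means, the toy PyTorch module below treats each channel as a token and computes a channel-by-channel attention map. It is not the authors' SAM implementation (which is available in the released code); the layer sizes, normalization and residual connection are illustrative choices.

import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    # Toy spectral (channel-wise) self-attention: attention is computed across
    # the channel dimension rather than across spatial positions.
    def __init__(self, channels):
        super().__init__()
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)           # each (b, c, h, w)
        q = nn.functional.normalize(q.reshape(b, c, h * w), dim=-1)
        k = nn.functional.normalize(k.reshape(b, c, h * w), dim=-1)
        v = v.reshape(b, c, h * w)
        attn = (q @ k.transpose(-2, -1)) * self.scale      # (b, c, c) channel map
        out = attn.softmax(dim=-1) @ v                     # (b, c, h*w)
        return self.proj(out.reshape(b, c, h, w)) + x      # residual connection

# Example: a 96-band feature map of size 64 x 64.
y = SpectralAttention(96)(torch.randn(2, 96, 64, 64))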

Extended Data Fig. 7 | The preparation, integration, and calibration demonstration of HyperspecI sensors. a, The demonstration of BMSFA photolithography fabrication. b, Display of the integrated HyperspecI-V1 sensor. c, Display of the integrated and packaged HyperspecI-V2 sensor. d, Photolithography mask used for BMSFA fabrication. Multiple lithography operations can be achieved using this single mask. e, BMSFA fabrication and its microstructure, including microscopic images during the fabrication process. f, Display of the HyperspecI-V1 sensor's sensing matrix calibrated with monochromatic light in several spectral bands (550 nm, 650 nm, 750 nm). g, Display of the HyperspecI-V2 sensor's sensing matrix calibrated with monochromatic light in several spectral bands (600 nm, 800 nm, 1,300 nm).
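Conceptually, the sensing matrices shown in panels f and g are assembled by sweeping a monochromatic source across the calibration wavelengths and recording the response of every pixel. The NumPy sketch below assumes dark-frame subtraction and normalization by the relative source power; the array sizes and variable names are illustrative, not the actual calibration parameters.

import numpy as np

def calibrate_sensing_matrix(frames, dark, source_power):
    # frames:       (n_wavelengths, H, W) images, one per calibration wavelength
    # dark:         (H, W) dark frame
    # source_power: (n_wavelengths,) relative power of the monochromatic source
    # Returns an (H, W, n_wavelengths) array; the last axis is each pixel's
    # spectral response curve, that is, its row of the sensing matrix.
    corrected = (frames - dark[None, :, :]) / source_power[:, None, None]
    return np.transpose(corrected, (1, 2, 0))

# Illustrative use with 96 calibration wavelengths on a small 256 x 256 crop.
frames = np.random.rand(96, 256, 256).astype(np.float32)
A = calibrate_sensing_matrix(frames,
                             np.zeros((256, 256), np.float32),
                             np.ones(96, np.float32))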
Extended Data Fig. 8 | The material selection and BMSFA design study. a, The evolutionary-optimization-based material selection method for BMSFA design. This method starts with an initially selected subset of materials and iterates through the operations, including survival of the fittest, crossover, mutation, and random replacement. The iterative process ends when it converges to the optimal accuracy performance on the hyperspectral image dataset. b, The preprocess and analysis of the massive hyperspectral image data through the dimensionality reduction technique. We analysed the distribution of the hyperspectral images using the PCA method. We calculated the information loss (reconstruction error) in different latent dimensions and compressed ratios to determine the potential compressive dimension. We can see that the reconstruction error is low with the dimension number being ten at the 400–1,000 nm range and six at the 1,000–1,700 nm range, which demonstrates the sparsity of HSI in the spectral dimension. c, The spectral fidelity under different numbers of modulation filters selected by the material selection method. The signal-to-noise ratio of input measurements was set as 20 dB. It further validates the reasonability of the number and selection of broadband filters, and shows that the current choice of our HyperspecI sensor prototypes is optimal considering the tradeoff between spectral and spatial resolution. d, The organic dyes and nano-metal oxides prepared for BMSFA design. e, The correlation coefficient map of the prepared 35 materials.
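The evolutionary selection loop in panel a can be paraphrased as a genetic search over subsets of candidate filters. In the sketch below the fitness is a simple least-squares spectral-reconstruction error on a library of spectra, which stands in for the network-based reconstruction accuracy on the hyperspectral image dataset used in the paper; the population size, mutation rate and generation count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def fitness(subset, T, S):
    # Negative least-squares reconstruction error of a filter subset.
    # T: (n_materials, n_bands) candidate transmission spectra
    # S: (n_samples, n_bands) library of target spectra
    A = T[sorted(subset)]                     # sensing matrix of the chosen filters
    y = S @ A.T                               # simulated broadband measurements
    S_hat = y @ np.linalg.pinv(A).T           # least-squares spectral estimate
    return -np.mean((S_hat - S) ** 2)

def random_subset(n, k):
    return set(rng.choice(n, size=k, replace=False).tolist())

def evolve(T, S, k=16, pop_size=40, generations=100, p_mut=0.3):
    n = T.shape[0]
    population = [random_subset(n, k) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda ind: fitness(ind, T, S), reverse=True)
        parents = ranked[: pop_size // 2]                    # survival of the fittest
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), size=2, replace=False)
            pool = list(parents[a] | parents[b])             # crossover: mix two parents
            child = set(rng.choice(pool, size=k, replace=False).tolist())
            if rng.random() < p_mut:                         # mutation / random replacement
                child.discard(int(rng.choice(list(child))))
                while len(child) < k:
                    child.add(int(rng.integers(n)))
            children.append(child)
        population = parents + children
    return sorted(max(population, key=lambda ind: fitness(ind, T, S)))

With 35 prepared materials and 16 filter types per BMSFA, the subset space is far too large to enumerate, which is why an evolutionary search of this kind is attractive.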
Extended Data Fig. 9 | Modulation material preparation and transmission spectra measurements. a, Schematic diagram depicting the production of experimental smears using spectral modulation materials. This process follows the steps of weighing, mixing, filtering, and spin coating. b, Schematic diagram of the optical path for transmission spectra measurements of spectral modulation materials. c, The smears of organic dyes, employing photoresist as a carrier, are obtained through spin coating. d, The transmission spectra of organic dyes. e, The smears of nano-metal oxides, utilizing photoresist and dispersant as carriers, are obtained through spin coating. f, The transmission spectra of nano-metal oxides at the optimum concentration.
Extended Data Table 1 | Comparison of different snapshot hyperspectral imaging techniques (refs. 9–11, 14, 15, 17–19, 22, 39–51)
