Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse
Abstract
Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact in biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.
INTRODUCTION
Imaging scattering dynamics can broadly aid biomedicine. For example, time-dependent acoustic speckles convey useful physiological information, such as blood flow velocity (1) and tissue elasticity (2), and time-reversal scattering theory has contributed to the development of microwave imaging instruments (3, 4). Studies in light scattering have also been increasingly featured in recent progress in biomedicine (5). First, light scattering has been leveraged to develop novel biomedical optical instruments (6–8). For instance, techniques to spatiotemporally invert light scattering have provided the means to focus light into deep tissue for high-resolution imaging and control (9, 10). In addition, analysis of temporal fluctuation in the scattered light signal reveals many optical properties of biological tissues (5, 11). This characterization has enabled a diverse range of applications, such as assessments of food and pharmaceutical products (12) and studies of protein aggregation diseases (13, 14).
Light-scattering dynamics have been extensively investigated from both theoretical and experimental perspectives. Among many simulation paradigms (15–17), the Monte Carlo method offers a rigorous and flexible approach (18) and is often regarded as the gold standard for modeling light transport in a scattering medium (19). The Monte Carlo simulation is equivalent to modeling photon transport analytically by solving the radiative transfer equation (20). As a statistical approach, a typical Monte Carlo simulation provides an ensemble-averaged result of light propagation [that is, it ignores coherent effects (21)] and requires launching a large number of photons to ensure the desired accuracy (22). The Monte Carlo method is capable of simulating light propagation sequences with a short (for example, subnanosecond) time interval. This time-resolved Monte Carlo simulation has been widely used to model time-dependent light distribution, dynamic optical properties, and frequency domain light transportation in scattering media (23, 24).
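To illustrate the flavor of such a simulation (this is not the code used in this work), the short Python sketch below launches photons into a homogeneous medium with isotropic scattering and histograms their weighted arrival times at a detector shell; all optical coefficients and geometric parameters are arbitrary example values.

```python
# Minimal time-resolved Monte Carlo sketch (illustrative only): photons
# random-walk through a homogeneous medium with isotropic scattering, and
# their weighted arrival times at a spherical detector shell are histogrammed.
import numpy as np

rng = np.random.default_rng(0)
n_medium = 1.4
c_medium = 3.0e8 / n_medium        # speed of light in the medium, m/s
mu_s, mu_a = 1.0e3, 10.0           # scattering / absorption coefficients, 1/m
n_photons = 20_000
detector_radius = 5.0e-3           # score photons crossing r = 5 mm
time_bins = np.linspace(0.0, 500e-12, 101)   # 0-500 ps histogram
histogram = np.zeros(len(time_bins) - 1)

for _ in range(n_photons):
    pos = np.zeros(3)                          # launch at the origin
    direction = np.array([1.0, 0.0, 0.0])      # initial direction: +x
    weight, path = 1.0, 0.0
    for _ in range(2000):                      # cap the number of steps
        step = -np.log(1.0 - rng.random()) / mu_s   # free path ~ Exp(mu_s)
        pos = pos + step * direction
        path += step
        weight *= np.exp(-mu_a * step)              # absorption along the step
        if np.linalg.norm(pos) > detector_radius:   # photon reaches the shell
            idx = np.searchsorted(time_bins, path / c_medium) - 1
            if 0 <= idx < histogram.size:
                histogram[idx] += weight
            break
        cos_t = 2.0 * rng.random() - 1.0            # isotropic re-emission
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

print(histogram / n_photons)       # ensemble-averaged, time-resolved signal
```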
On the other hand, experimental visualization of light propagation in scattering media in real time (defined as the actual time during which an event occurs) has been a long-standing challenge (25). Freezing light’s motion in a tabletop scene requires a picosecond-level exposure time per frame (that is, a frame rate on the order of 100 billion frames per second) (26). Despite continuous improvements in state-of-the-art electronic sensors, current complementary metal-oxide semiconductor and charge-coupled device (CCD) technologies are incapable of reaching this speed (27) because they are fundamentally impeded by their on-chip storage capacity and electronic readout speeds (28). Nevertheless, various optical gating mechanisms, such as ultrashort pulse interference (29) and the Kerr electro-optic effect (30), have achieved picosecond exposure times. However, each gated measurement captures only one image with two spatial dimensions at a specific time point, and temporal scanning (that is, repeated measurements with a varied delay between the pump and probe) is required to resolve the scattering event over its full duration (31, 32). Light-scattering events can also be measured by a streak camera—an ultrafast imager that converts light’s temporal profile into a spatial profile by deflecting photoelectrons with a sweep voltage along the axis perpendicular to both the device’s entrance slit and the optical axis (33–35). Capable of recording a time course with picosecond temporal resolution, the streak camera removes the need for optical gating–aided temporal scanning in ultrafast measurements. The pioneering studies by Alfano introduced the streak camera for light-scattering measurements (36, 37). Later seminal studies in this direction have led to many breakthroughs, including the observation of ballistic and diffusive components in scattered light (38–42), the imaging of hidden objects behind scattering walls or around corners (43–45), and the measurement of the fluorescence lifetime of dye molecules in turbid media (46–49). However, the conventional operation of the streak camera sacrifices an imaging dimension—the narrow entrance slit (10 to 50 μm wide) confines the field of view to a line. To achieve two-dimensional (2D) ultrafast imaging, this mode requires scanning the orthogonal spatial dimension and synthesizing the movie from a large number of measurements (45). In general, existing multiple-shot ultrafast imaging technologies based on temporal or spatial scanning do not have real-time imaging capability. They require the scattering events to be precisely repeatable, which is inherently challenging for events in dynamic scattering media, such as soft biological tissues and flowing blood.
Here, we present single-shot, real-time video recording of spatiotemporal light patterns in a scattering medium to overcome these limitations. Of particular interest is the long-sought-after transient phenomenon—photonic Mach cones (50, 51). Although their propagation has been previously observed using pump-probe methods (52, 53), a single-shot, real-time observation of traveling photonic Mach cones has not yet been achieved. To tackle this challenge, we generated a photonic Mach cone by scattering a picosecond laser pulse that travels superluminally relative to the surrounding medium. We modeled the evolution of this transient scattering phenomenon using a time-resolved Monte Carlo simulation. A newly developed lossless-encoding compressed ultrafast photography (LLE-CUP) system, whose more efficient hardware design and reconstruction paradigm surpass the performance of previous CUP systems (54–56), macroscopically imaged light-scattering dynamics at 100 billion frames per second with a single camera exposure. The recorded instantaneous scattering pattern—the photonic Mach cone—is in excellent agreement with theoretical predictions.
RESULTS
Modeling light-scattering dynamics in a thin scattering plate assembly
We assembled materials of different refractive indices and scattering coefficients (Fig. 1). Specifically, a “source tunnel” with a refractive index of ns scatters a collimated laser beam into two “display panels” with a refractive index of nd. A short laser pulse propagates in the source tunnel. The elastic scattering events in the source tunnel emit secondary wavelets of the same wavelength as the incident laser pulse. These wavelets form a wavefront in the display panels by superposition. When ns < nd, light propagates faster in the source tunnel than in the display panels. Under this circumstance, the scattering events generate secondary sources of light that advance superluminally relative to the light propagating in the display panels. At a certain time point, the instantaneous scattered light distribution has a Mach cone structure. The cone boundary is delineated by the common tangents of the secondary wavelets, where the wavelets overlap most to produce the greatest intensity. The semivertex angle of the photonic Mach cone, which is denoted by θ in Fig. 1, is determined by
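\theta = \sin^{-1}\left(\frac{c_d}{c_s}\right) = \sin^{-1}\left(\frac{n_s}{n_d}\right) \qquad (1)

where cs = c/ns and cd = c/nd are the light speeds in the source tunnel and in the display panels, respectively; this is the standard common-tangent (Mach-cone) relation, and for ns = 1.0 and nd = 1.4 it gives θ ≈ 46°.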
Fig. 1
Because the scatterers in the source tunnel are randomly distributed within the cylindrical volume illuminated by the laser beam whose diameter is much greater than the optical wavelength, the scattered light forms a laser speckle pattern in the display panels (57), with speckle grains of a few micrometers in size. For macroscopic observation, because each effective pixel of the detector at the object plane usually has a size on the order of millimeters, the observed photonic Mach cone is an intensity pattern averaged over many speckle grains. The net effect is equivalent to averaging over many speckle realizations, as if the sources were spatially incoherent (57). Our theory is based on ensemble-averaged wavelet intensity addition due to the above spatial averaging effect. To obtain the analytical formula describing the intensity distribution of the cone, we first derive the impulse response from a spatiotemporal Dirac delta excitation, traveling at a superluminal speed cs in the +x direction (fig. S1)
Here, cd denotes the speed of light in the display panels (cd < cs), t denotes time, r denotes position in a Cartesian coordinate system, and q = cst − x; the remaining auxiliary variables are detailed in section S1. For a spatiotemporally arbitrarily shaped pulse, the spatiotemporal distribution of light intensity of the resultant cone can be found using a three-dimensional (3D) convolution
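Writing the impulse response of Eq. 2 as Iδ(r, t) (a shorthand introduced here for readability, not the notation of section S1), the convolution takes the schematic form

I(\mathbf{r}, t) = I_{\delta}(\mathbf{r}, t) \otimes U(\mathbf{r}) \qquad (3)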
where U(r) denotes the 3D snapshot intensity distribution of the excitation pulse and “⊗” represents convolution in 3D (detailed in section S1). Extending the concept of the Mach number from fluid dynamics, we define the photonic Mach number as
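M_p = \frac{c_s}{c_d} = \frac{n_d}{n_s} \qquad (4)

so that Mp > 1 corresponds to the excitation pulse outrunning the light it scatters into the display panels.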
As an example, the light intensity distribution corresponding to a superluminal impulse excitation at Mp = 1.4 was calculated according to Eq. 2. The central x-y cross section of the cone is shown in fig. S2A. The cone edge is defined by setting Y = 0, where the intensity approaches infinity (58). For a spatiotemporal Gaussian pulse excitation, the intensity distribution of the photonic Mach cone computed by Eq. 3 is shown in fig. S2B.
We also numerically evaluated the photonic Mach cone using the time-resolved Monte Carlo method. Both superluminal (Mp = 1.4) and subluminal (Mp = 0.8) light propagations were simulated (detailed in section S1). Briefly, an infinitely narrow source beam propagated through a thin scattering sheet at a speed of cs along the +x direction. During the propagation, 10^5 scattering events were randomly triggered with a uniform probability distribution. Each scattering event emitted an outgoing secondary wavelet, which contributed to the total light intensity distribution. Then, the resultant light intensity distribution was convolved with a normalized spatiotemporal Gaussian function representing the finite extent of the laser pulse. Figure 2 shows contour plots of the scattered light intensity distributions on the sheet. Under superluminal conditions (Fig. 2A), the contours depict a nearly triangular region dragged behind the excitation pulse, representing a photonic Mach cone. However, under subluminal conditions (Fig. 2B), no such cone is formed: the expanding secondary wavelets always enclose the excitation pulse, preventing the formation of a photonic Mach cone.
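To make the procedure concrete, the following simplified Python sketch (our illustration, not the code behind Fig. 2) draws scattering events uniformly along the beam path, lets each emit a circular wavelet expanding at cd while the excitation advances at cs, and sums the wavelet intensities on a 2D grid. The ring width, the grid, and the subset of events used in the summation loop are arbitrary numerical choices, and the final convolution with a Gaussian pulse profile is omitted.

```python
# Simplified wavelet-superposition Monte Carlo sketch of the photonic Mach cone
# (illustrative only).  Units are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
Mp = 1.4                         # photonic Mach number, c_s / c_d
c_d = 1.0                        # wavelet speed in the display panels
c_s = Mp * c_d                   # excitation speed in the source tunnel
T = 10.0                         # observation time
ring_width = 0.05                # finite wavelet thickness (numerical smoothing)

n_events = 100_000
x_events = rng.uniform(0.0, c_s * T, n_events)   # scattering positions
radii = c_d * (T - x_events / c_s)               # wavelet radii at time T

x = np.linspace(-2.0, c_s * T + 2.0, 300)
y = np.linspace(-6.0, 6.0, 200)
X, Y = np.meshgrid(x, y)
intensity = np.zeros_like(X)
for x0, r in zip(x_events[:1500], radii[:1500]): # a subset keeps the loop fast
    dist = np.hypot(X - x0, Y)
    # thin ring of radius r; the 1/r factor conserves energy per wavelet in 2D
    intensity += np.exp(-((dist - r) ** 2) / (2.0 * ring_width**2)) / max(r, ring_width)

# The ridge of `intensity` traces a wedge whose semivertex angle is arcsin(1/Mp).
print(np.degrees(np.arcsin(1.0 / Mp)))           # ~45.6 degrees for Mp = 1.4
```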
Fig. 2
For both cases, the excitation light pulses are spatiotemporally Gaussian and propagate along the +x direction. (A) Contour plot of the light intensity distribution when a laser beam propagates superluminally in the medium with a photonic Mach number of 1.4. (B) Same as (A), but showing a laser beam propagating subluminally in the medium with a photonic Mach number of 0.8. The temporal processes of both transient events (A and B) are shown in movies S1 and S2.
Implementing LLE-CUP
We developed LLE-CUP to capture 2D light-speed scattering dynamic scenes in real time with a single exposure. As a computational imaging approach, LLE-CUP operates in two steps: data acquisition and image reconstruction (both further described in Materials and Methods). In data acquisition, LLE-CUP acquires three different views of the dynamic scene (detailed in section S2 and figs. S3 and S4). One view, akin to a view in traditional photography, records a direct image of the scene temporally integrated over the exposure time. In contrast, the other two views record the temporal information of the dynamic scene by using a compressed sensing paradigm (54, 55, 59). The image reconstruction in LLE-CUP recovers the dynamic scene from the three-view data by exploiting the spatiotemporal sparsity of the event, which holds in most, if not all, experimental conditions. A compressed sensing reconstruction algorithm, developed from the two-step iterative shrinkage/thresholding (TwIST) algorithm (60), is currently used (detailed in section S3).
The LLE-CUP system is shown schematically in Fig. 3 (with an animated illustration in movie S3 and further description in Materials and Methods). The dynamic scene is first imaged by a camera lens. A beam splitter equally divides the incident light into two components. The reflected component is imaged by an external CCD camera to form the time-unsheared view. The transmitted component passes through a 4f imaging system, consisting of a tube lens, a mirror, and a stereoscope objective, to a digital micromirror device (DMD). To spatially encode the scene, a pseudorandom binary pattern is displayed on the DMD. Each encoding pixel is turned to either +12° (on) or −12° (off) from the DMD’s surface normal and reflects the incident light in one of the two directions. Both reflected light beams, masked with complementary patterns, are collected by the same stereoscope objective. The collected beams are sent through tube lenses, folded by a planar mirror, and again folded by a right-angle prism mirror (see the upper right inset in Fig. 3) to form two images in separate horizontal areas on the entrance port of a streak camera. Unconventionally, this entrance port is fully opened (~5 mm width) to capture 2D spatial information. Inside the streak camera, a sweep voltage shears the encoded light distribution along the y′ axis according to the time of arrival. Therefore, these temporally sheared frames land at different spatial positions along the y′ axis and are temporally integrated, pixel by pixel, by an internal CCD camera in the streak camera, forming two time-sheared views.
Fig. 3
Lower left inset: Illustration of complementary spatial encoding for two time-sheared views. The on pixels are depicted in red for View 1 and depicted in crimson for View 2. The off pixels are depicted in black for both views. The combined mask shows that the two spatial encodings are complementary. Upper right inset: Close-up of the configuration before the streak camera’s entrance port (dashed black box). Light beams in both views are folded by a planar mirror and a right-angle prism mirror before entering the fully opened entrance port of the streak camera.
LLE-CUP’s unique paradigm of data acquisition and image reconstruction brings several prominent advantages. First, facilitated by the streak camera, the LLE-CUP system can image a nonrepetitive dynamic scene at 100 billion frames per second with a single-snapshot measurement, circumventing the need for the repetitive measurements of pump-probe techniques (26, 45, 52, 53). Second, LLE-CUP does not need the specialized active illumination required by other single-shot ultrafast imagers (61–63), enabling passive imaging of dynamic light-scattering scenes. Third, compared with other streak camera–based single-shot ultrafast imaging methods (64, 65), the LLE-CUP system has a nominal light throughput of 100% (excluding losses from imperfect optical elements).
In previously reported CUP systems (54, 55), only the on pixels of the DMD were used in the spatial encoding operation. As a result, information that landed on the off pixels of the DMD was lost, compromising reconstruction quality. In addition, the time-integrated CCD image was simply overlaid with the reconstructed datacube as a postprocessing step (55), without adding new information to assist in image reconstruction. In contrast, LLE-CUP harvests light reflected from both on and off pixels of the DMD to form two complementary time-sheared views. This design prevents any loss of information from spatial encoding, which is advantageous for compressed sensing–based reconstruction. In addition, the time-unsheared view recorded by the external CCD camera enriches the observation by adding another perspective, which is used with the two time-sheared views in the new reconstruction paradigm to yield a much improved image quality (as further explained in Materials and Methods and illustrated in fig. S5). Thus, the dual complementary masking, the triple-view recording of the scene, and the three-view joint reconstruction are three major enhancements of LLE-CUP over the previous CUP systems.
Single-shot real-time video recording of light-scattering dynamics in a scattering plate assembly
To video-record light-scattering dynamics, we built a thin scattering plate assembly containing a central source tunnel sandwiched between two display panels, as described above. The preparation of this plate assembly is described in detail in section S4 and fig. S6. Air mixed with dry ice fog scatterers was the medium for the source tunnel (ns = 1.0), and silicone rubber (nd = 1.4) mixed with scattering aluminum oxide powder was the medium for the display panels. A collimated visible laser pulse (wavelength, 532 nm; pulse duration, 7 ps; pulse energy, 4 μJ) propagated through the source tunnel. The scattering in the plate assembly extracted photons for the LLE-CUP system to image this dynamic scene. No averaging or gating over multiple laser pulses was required.
Figure 4A shows a time-integrated image of this dynamic event acquired by a conventional CCD camera, and Fig. 4B shows four representative time-lapse frames of the scattered light distribution of the same dynamic event imaged by the LLE-CUP system. Although the time-integrated image presents only a small intensity variation in the scattered light distribution across both the source tunnel and the display panels, the time-lapse frames reveal a propagating photonic Mach cone. The central x-y cross section of the photonic Mach cone is displayed by the scattering plate assembly. The cone edge is seen as two light tails extending from the tip of the propagating laser pulse in the source tunnel, forming a V-shaped wedge. The semivertex angle, directly measured in these temporal frames, is ~45°, which agrees with the theoretical value (Eq. 1). As a subluminal control experiment, we used liquid oil with a high refractive index (ns = 1.8) as the medium for the source tunnel. The time-integrated image is shown in Fig. 4C. Because this source tunnel medium has a higher refractive index than the display panel medium (Mp = 0.8), no photonic Mach cone was produced, as shown in Fig. 4D. It is worth noting that the video-recorded light pattern shows an asymmetric spatial intensity distribution, which is probably attributable to unequal coupling of the nonuniform scattering in the source tunnel into the upper and lower display panels. We also noticed that the primary laser pulse in the recorded videos was wider than the theoretical prediction, mainly because of the limited temporal resolution of our LLE-CUP camera.
Fig. 4
(A) Time-integrated image of a laser beam propagating faster in the source tunnel (ns = 1.0) than scattered light does in the display panels (nd = 1.4). (B) Representative snapshots of the same dynamic scene as in (A), acquired by LLE-CUP. A photonic Mach cone is observed. (C) Same as (A), but showing a laser beam propagating slower in the source tunnel (ns = 1.8) than scattered light does in the display panels (nd = 1.4). (D) Representative snapshots of the same dynamic scene as in (C), acquired by LLE-CUP. In (D), no photonic Mach cone is observed. The temporal processes of both transient events (B and D) are shown in movies S4 and S5. (E) Spectra of the incident laser pulse and the photonic Mach cone. (F) Normalized average intensity of the photonic Mach cone at incident laser pulse energies from 1 to 9 μJ, with steps of 1 μJ. Three photonic Mach cones were imaged at each pulse energy level. Scale bar, 10 mm. Error bars represent SE.
We further investigated the photonic Mach cone by measuring its spectrum at three locations in the scattering plate assembly (Fig. 4E). These spectra are identical to that of the incident laser pulse, indicating that elastic scattering dominates the light-scattering process. In addition, we imaged photonic Mach cones while varying the incident laser pulse energy from 1 to 9 μJ in steps of 1 μJ. For each pulse energy, we calculated the average cone intensity in the reconstructed image. The reconstructed intensity shows a clear linear relation with the laser pulse energy (Fig. 4F). This linear scaling arises from two facts. First, the random distribution of the scatterers causes the scattered waves arriving at a given observation point to have random phases, resulting in a speckle pattern. Second, multiple speckle grains are averaged within each resolution pixel of the LLE-CUP system in our current macroscopic observation setting. Our results show excellent agreement with the theoretical model of ensemble-averaged intensity addition in the “Modeling light-scattering dynamics in a thin scattering plate assembly” section.
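This intensity-addition picture can be checked numerically. The short sketch below (an illustration with arbitrary amplitudes, not experimental data) averages the intensity of a sum of fixed-amplitude, random-phase wavelets over many speckle realizations and recovers the incoherent sum of the individual intensities, which is why the pixel-averaged signal scales linearly with the incident pulse energy.

```python
# Ensemble-averaged speckle intensity versus incoherent intensity addition
# (illustrative only; amplitudes are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
N = 50                                     # wavelets contributing to one pixel
amplitudes = rng.uniform(0.5, 1.5, N)      # arbitrary wavelet amplitudes
incoherent_sum = np.sum(amplitudes**2)     # prediction: sum of intensities

realizations = 100_000
phases = rng.uniform(0.0, 2.0 * np.pi, (realizations, N))
fields = np.sum(amplitudes * np.exp(1j * phases), axis=1)
ensemble_mean_intensity = np.mean(np.abs(fields)**2)

print(ensemble_mean_intensity / incoherent_sum)   # -> close to 1.0
```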
DISCUSSION
We have demonstrated ultrafast video recording of light-scattering dynamics using LLE-CUP and visualized the propagation of a scattering-induced photonic Mach cone as an instantaneous light-scattering pattern with a single camera exposure. The spectrum-compliant and intensity-linear features of the photonic Mach cone are consistent with the scattering theory simulated by the time-resolved Monte Carlo method. Single-shot real-time video recording of light-scattering dynamics contributes to the next generation of imaging modalities. For example, by leveraging the time-of-flight light signal, the technology can resolve depth information without any motion blurring (55). Coupling the current system with a femtosecond streak camera (66) may achieve an axial resolution comparable to that of optical coherence tomography (67), allowing single-shot full-field imaging of 3D microstructures in biological systems in vivo.
MATERIALS AND METHODS
LLE-CUP operation principle
As a computational imaging approach, LLE-CUP involves physical data acquisition and computational image reconstruction. In data acquisition, the scene was imaged in three views (fig. S3). One view was directly recorded as the time-unsheared view by an external CCD camera, and the measured optical energy distribution was denoted as E(0). For two time-sheared views, the image was first spatially encoded by a pair of complementary pseudorandom binary patterns, temporally sheared along the vertical spatial axis using a streak camera, and finally imaged to the internal CCD camera of the streak camera as optical energy distributions E(1) and E(2). Mathematically, the three views can be related to the intensity distribution of the dynamic scene I(x, y, t) as follows
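E^{(0)} = T F_0\, I(x, y, t), \quad E^{(1)} = T S D_1 F_1 C_1\, I(x, y, t), \quad E^{(2)} = T S D_2 F_2 C_2\, I(x, y, t) \qquad (5)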
where the linear operator T represents spatiotemporal integration, Fj (j = 0, 1, 2) represents spatial low-pass filtering due to optics, S represents temporal shearing, Di (i = 1, 2) represents image distortion primarily due to the encoding arm, and Ci (i = 1, 2) represents complementary spatial encoding with C1 + C2 = 1. The lossless complementary spatial encoding preserved more details in the dynamic scene than the lossy encoding in our first-generation CUP (54). Equation 5 can be concatenated as
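E = O\, I(x, y, t) \qquad (6)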
where E = [E(0), αE(1), αE(2)]T and O = [TF0, αTSD1F1C1, αTSD2F2C2]T. The scalar factor α is related to the energy calibration of the streak camera against the external CCD camera. Given the known operator O and the spatiotemporal sparsity of the dynamic scene, a compressed sensing reconstruction algorithm built upon the TwIST algorithm (60) recovers I(x, y, t) by solving the inverse problem of Eq. 6.
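For concreteness, the following Python sketch simulates this three-view forward model on a random datacube (an illustration only: the image distortions Di and the low-pass filters Fj are taken as identity operators, and the calibration factor α is set to 1). It also verifies the lossless property of the complementary encoding, C1 + C2 = 1.

```python
# Sketch of the LLE-CUP three-view forward model (illustrative simplification:
# D_i and F_j are identity operators and the calibration factor alpha is 1).
import numpy as np

rng = np.random.default_rng(3)
Ny, Nx, Nt = 64, 64, 30
I = rng.random((Ny, Nx, Nt))                      # dynamic scene I(x, y, t)

C1 = rng.integers(0, 2, (Ny, Nx)).astype(float)   # pseudorandom binary mask
C2 = 1.0 - C1                                     # complementary mask, C1 + C2 = 1

def shear_and_integrate(cube):
    """Temporal shearing along y followed by time integration (operators T and S)."""
    ny, nx, nt = cube.shape
    out = np.zeros((ny + nt - 1, nx))
    for k in range(nt):
        out[k:k + ny, :] += cube[:, :, k]         # frame k lands k rows lower
    return out

E0 = I.sum(axis=2)                                # time-unsheared view (T F0)
E1 = shear_and_integrate(C1[:, :, None] * I)      # encoded + sheared view 1
E2 = shear_and_integrate(C2[:, :, None] * I)      # encoded + sheared view 2

# Complementary encoding is lossless: the two views together retain all energy.
print(np.allclose(E1 + E2, shear_and_integrate(I)))   # -> True
```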
LLE-CUP provided three views to recover more details of the dynamic scene. The time-unsheared view recorded only spatial information, without either spatial encoding or temporal shearing. The projection angle, represented by the direction of temporal integration, was parallel to the t axis (fig. S4A). In comparison, the two time-sheared views recorded spatiotemporal information by both spatial encoding and temporal shearing before spatiotemporal integration along the x′, y′, and t′ axes, as illustrated in fig. S4 (B to E). Here, t′ = t + τsc, where τsc is the transit time in the streak camera. In the y′-t′ coordinates (fig. S4, B and D), the temporal integration is obviously along the t′ axis. Equivalently, in the y-t coordinates (fig. S4, C and E), the integration is along the tilted direction, as shown. Thus, the two time-sheared views recorded the dynamic scene with another projection view. In each time-sheared view, lossy spatial encoding blocked part of the scene, reducing information transmission (54). Because lossless encoding is desirable in compressed sensing, we combined two lossy time-sheared views with complementary spatial encodings to synthesize an equivalent lossless single time-sheared view. Overall, LLE-CUP combined three views (fig. S4F) to provide two distinct lossless projection views of the dynamic scene, improving information transmission.
To test the lossless-encoding implementation, we imaged the dynamic scene shown in fig. S5A. A collimated laser pulse (wavelength, 532 nm; pulse duration, 7 ps) was shone onto a car-model target at an oblique angle of ~30° with respect to the normal of the target surface, giving rise to a light wavefront sweeping across the target at a superluminal speed. The imaging system faced the pattern surface and collected the scattered photons from the scene. The dynamic scene was imaged by LLE-CUP at 100 billion frames per second. We reconstructed the datacube of the dynamic scene using both the first-generation (lossy-encoding) CUP and the LLE-CUP reconstruction algorithms (fig. S5, B and C, and movie S6). For presentation, we summed over the datacube voxels along the t axis; the resultant images are shown in fig. S5C. LLE-CUP provided far superior image quality to the first-generation CUP.
LLE-CUP system details
In the LLE-CUP system, the dynamic scene was first imaged by a camera lens (CF75HA-1, Fujinon) with a focal length of 75 mm. Following the intermediate imaging plane, a beam splitter (BS013, Thorlabs) reflected half of the light to an external CCD camera (GS3-U3-28S4M-C, Point Grey). The other half of the light passed through the beam splitter and was imaged to a DMD (DLP LightCrafter 3000, Texas Instruments) through a 4f system consisting of a tube lens (AC508-100-A, Thorlabs) and a stereoscope objective (MV PLAPO 2XC, Olympus; numerical aperture, 0.50). The spatially encoded images were projected to the entrance port of a streak camera (C7700, Hamamatsu) through two 4f systems containing the same stereoscope objective, tube lenses (AC254-75-A, Thorlabs), planar mirrors, and the right-angle prism mirror. The shearing velocity of the streak camera was set to v = 1.32 mm/ns. The spatially encoded, temporally sheared images were acquired by an internal CCD camera (ORCA-R2, Hamamatsu) with a sensor size of 672 × 512 binned pixels (2 × 2 binning; binned pixel size d = 12.9 μm). The reconstructed frame rate, r, was determined by r = v/d to be 100 billion frames per second.
In practice, the reconstructed datacube size, Nx × Ny × Nt, was limited by the size of the internal CCD camera, NR × NC, where NR and NC are the numbers of rows and columns. In LLE-CUP, to simultaneously acquire the two complementarily encoded views, the internal CCD camera in the streak camera was split horizontally into two equal regions. As a result, the number of reconstructed voxels along the horizontal axis must satisfy Nx ≤ NC/2. In addition, because the temporal shearing occurs along the vertical axis, the numbers of reconstructed voxels on the vertical and time axes must satisfy Ny + Nt − 1 ≤ NR. With a fully opened entrance port (17 mm × 5 mm in the horizontal and vertical axes), each temporal frame has an approximate size of Nx × Ny = 330 × 200, which provides an approximate sequence depth of Nt = 300. Thus, the reconstructed datacube size in LLE-CUP is Nx × Ny × Nt = 330 × 200 × 300.
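The quoted numbers can be checked directly, as in the short sketch below (values copied from above).

```python
# Consistency check of the frame rate and datacube-size constraints quoted above.
v = 1.32e-3 / 1e-9          # shearing velocity: 1.32 mm/ns, in m/s
d = 12.9e-6                 # binned pixel size, in m
print(v / d)                # ~1.02e11 frames per second, i.e., ~100 billion

NC, NR = 672, 512           # internal CCD columns and rows (binned)
Nx, Ny, Nt = 330, 200, 300  # reconstructed datacube dimensions
assert Nx <= NC // 2        # 330 <= 336: two complementary views share the width
assert Ny + Nt - 1 <= NR    # 499 <= 512: temporal shearing fits along the height
```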
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/3/1/e1601814/DC1
section S1. Simulation of a photonic Mach cone
section S2. Forward model of LLE-CUP
section S3. Image reconstruction of LLE-CUP
section S4. Sample preparation
fig. S1. Cartesian coordinates used for analyzing the light-scattering dynamics.
fig. S2. Numerical simulation of light-scattering dynamics in a thin scattering sheet.
fig. S3. Schematic of the LLE-CUP data acquisition.
fig. S4. Illustration of multiview projections in LLE-CUP’s data acquisition.
fig. S5. Characterization of image reconstruction in LLE-CUP.
fig. S6. Preparation of the thin scattering plate assembly containing a source tunnel flanked by two display panels.
movie S1. Simulated instantaneous light-scattering pattern in a thin scattering sheet under superluminal conditions (Mp = 1.4).
movie S2. Simulated instantaneous light-scattering pattern in a thin scattering sheet under subluminal conditions (Mp = 0.8).
movie S3. Animated illustration of the LLE-CUP system.
movie S4. Experimentally imaged laser pulse propagation under superluminal conditions (Mp = 1.4) through a thin scattering plate assembly that contains air (refractive index ns = 1.0) as the source tunnel medium and silicone rubber (refractive index nd = 1.4) mixed with scattering aluminum oxide powder as the medium of the display panels.
movie S5. Experimentally imaged laser pulse propagation under subluminal conditions (Mp = 0.8) through a thin scattering plate assembly that contains liquid oil with a high refractive index (refractive index ns = 1.8) as the source tunnel medium and silicone rubber (refractive index nd = 1.4) mixed with scattering aluminum oxide powder as the medium of the display panels.
movie S6. Comparative image reconstructions of a superluminal light wavefront sweeping across a car-model target using first-generation CUP and LLE-CUP.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.