A few weeks ago I gave a short seminar on how to do very simple Ghost Imaging simulations. So simple that you can run them on your laptop in a few seconds (or minutes), and you can use them as building blocks to develop larger projects. I created a GitHub repo with all the code needed, and I will explain how to use it a little bit here. This is just a quick text covering some of the slides of the seminar, which were aimed at people who are already familiar with Ghost Imaging, but not so much with coding its implementation or simulating it. I wrote all the code in Python, using Spyder as my IDE. For visualization, I use Matplotlib. I also use NumPy (of course), and there is also a bit of scikit-image, Pillow, h5py, SciPy, and a couple of libraries for Compressive Sensing recovery (PyProx and PyLops). However, I guess it is possible to do the same simulations quite easily in other languages (Matlab, Julia, etc.).
Anyway, let’s start with a quick introduction to the topic, just in case.
Ghost Imaging fundamentals
The basic idea of Ghost Imaging is to illuminate a scene with random illumination patterns (speckle), and to recover the image of the scene by using a single-pixel detector (sometimes also called bucket detector). This can be a simple photodiode, a photomultiplier tube, etc. The interesting thing here is that, instead of using a pixelated detector (such as a CCD or CMOS sensor) to recover the light coming from your scene, you only need a single detector, which can be made to suit your needs exactly (for example, it can be extremely sensitive, or it can work in a very exotic spectral range where pixelated detectors simply do not exist). The system works as follows:
- You generate speckle patterns (for example, you can use a laser and a rotating diffuser).
- You divide your optical beam into two paths: one goes to a pixelated detector (a camera), which records the speckles. The second path is projected onto the scene you want to measure.
- You collect the light that interacted with the scene, and measure its total intensity with the single-pixel detector.
- You measure the correlations between your single-pixel signals and the recorded patterns, and you combine all of them to recover your image. The hand-waving explanation for this is that projected patterns that resemble your object well will produce high signals on the single-pixel detector, while patterns that are very different will give low signal levels. Once you have measured all the intensity/pattern pairs, you recover your image as a linear superposition of patterns, weighted by the measured intensities (so similar patterns will have big weights, and very different patterns will barely contribute; see the short expression right after this list).
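Written down explicitly (this is just the standard correlation-based estimate, using $S_i$ for the $i$-th speckle pattern and $y_i$ for the corresponding single-pixel value), the recovery is usually expressed as

$$O_{rec}(x,y) \;\propto\; \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \langle y \rangle\right) S_i(x,y),$$

where $\langle y \rangle$ is the mean of all the single-pixel measurements; subtracting it just removes the constant background that would otherwise dominate the superposition.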
You can see the recap slide I used for this (with some relevant references that you can check if you want extra information):

The main problem with this approach is that speckles are very random, so basically none of them will ever be similar to your object, and it takes LOTS of speckles to recover a good quality image (and that's why we will talk later about Compressive Sensing).
Anyway, so… how do you simulate this? In fact, it's quite easy. Let's start with creating speckles.
Generating Speckle Patterns
In order to generate speckle patterns, I use a very simple approach that resembles the physical phenomenon at hand: speckle emerges from the random interference of many contributions of coherent light. This is actually very easy to find when working with lasers, as just a little bit of dust on a mirror will cause this effect, and your beam will go from looking very nice to looking terrible (in fact, historically, people have been fighting against speckle for decades, as it typically hinders their experiments/applications). Anyway, in simulation, what I do is to create a random phase mask, illuminate it with a plane wave, and then propagate it to the far field (using Fourier Transforms). This creates speckles in a very efficient manner (thanks to FFTs), and you can even control some properties of the speckles by choosing your aperture size and tweaking the properties of the random phase mask.
The file I use to create speckles is called speckle_generation.py in the repo. It defines some physical properties first (wavelength, aperture size, number of pixels for the simulation, etc.). Then you can define the correlation width for the phase mask (which you can think of as a kind of grain size for a rough surface: a small grain size will look very random, while a bigger grain size will look smoother). This changes a lot how the speckles look: for very small grain sizes you get fully developed speckles that cover large fields of view, while bigger correlation widths look closer to low-order aberrations, and provide patterns that resemble that behavior. You can also play with the total number of 2pi jumps that the phase mask covers (again, if you think of the phase mask as a rough surface, the correlation width will be like the lateral extent of the mountains/valleys, and the scattering strength will be the height/depth of those mountains/valleys). A minimal sketch of this procedure is shown right below.
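This is not the exact code in speckle_generation.py (the parameter names and values here are placeholders I made up for illustration), but a minimal sketch of the idea, with a random phase mask smoothed to a given correlation width and propagated to the far field with an FFT, could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 512                  # simulation grid size (pixels)
aperture_radius = 64     # aperture radius (pixels)
corr_width = 4           # correlation width of the phase mask (pixels)
scatter_strength = 6     # roughly the number of 2*pi jumps across the mask

# Random surface: white noise smoothed (in Fourier space) to the desired grain size
noise = rng.standard_normal((N, N))
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
lowpass = np.exp(-2 * (np.pi * corr_width) ** 2 * (FX**2 + FY**2))
surface = np.real(np.fft.ifft2(np.fft.fft2(noise) * lowpass))
surface = (surface - surface.min()) / (surface.max() - surface.min())

# Phase mask: the smoothed surface scaled to 'scatter_strength' full 2*pi jumps
phase = 2 * np.pi * scatter_strength * surface

# Plane wave hitting a circular aperture covered by the phase mask
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2) <= aperture_radius**2
field = aperture * np.exp(1j * phase)

# Far field (Fraunhofer propagation): an FFT, and the intensity is the square modulus
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
speckle /= speckle.max()
```

Changing corr_width and scatter_strength is what makes the masks go from fully developed speckle to the smoother, aberration-like patterns discussed below.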
In this figure you can take a look at the code, and an example of four different phase masks (where the colors indicate the phase values between 0 and 2pi):

Once you have the phase masks, you just need to propagate to the far field to get the speckles. Here I show the patterns that can be generated with these four masks after propagation. The images in the bottom row represent the intensity of the optical field (after propagation, you take the square modulus). As I wrote before, you can see that the masks with lower correlation width provide fully developed speckles, while the last image, which looks like a very low-order aberration, almost provides a focal spot.

So, with this, you can generate infinite amounts of speckles, which you can use to simulate a Ghost Imaging experiment (the code stores the speckle images in a .h5 file). Let’s do that.
Simulating a Ghost Imaging experiment
So, once we have the speckle patterns, to simulate the experiment on the computer we just need to project them onto our object (so, we just multiply the two images pixelwise), and measure the total energy of that projection (i.e., we add the values of all the pixels, which is what the single-pixel detector would do in the lab). Then, we recover the object by doing the linear superposition, and that's all! This can be done with the file ghost_demo.py in the repo; a minimal sketch of the idea is shown right below.
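This is not literally ghost_demo.py (the function name and shapes are just my assumptions for illustration), but a minimal sketch of that measurement/recovery loop, assuming you already have a stack of speckle patterns (for example loaded from the .h5 file generated before) and an object of the same size, could be:

```python
import numpy as np

def ghost_imaging(speckles, obj):
    """Simulate the single-pixel measurements and the correlation-based recovery.

    speckles: array of shape (n_patterns, N, N), e.g. loaded from the .h5 file
    obj:      the (N, N) object you want to image
    """
    # Single-pixel signal: total intensity of each projected pattern
    y = np.sum(speckles * obj[None, :, :], axis=(1, 2))
    # Recovery: superposition of patterns weighted by the (mean-subtracted) signals
    weights = y - y.mean()
    recovery = np.tensordot(weights, speckles, axes=1) / len(y)
    return y, recovery

# Quick shape check with random "speckles" and a simple square object:
# rng = np.random.default_rng(0)
# speckles = rng.random((1000, 64, 64))
# obj = np.zeros((64, 64)); obj[20:40, 20:40] = 1.0
# y, recovery = ghost_imaging(speckles, obj)
```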
You can take a look at how the image is recovered while increasing the number of speckle patterns here:
It is easy to see, however, that the final result is not super nice. As I was saying, speckle patterns are random, and using a different random pattern at each projection does not give you a lot of new information (they are kind of redundant). You can see a few reconstructions while changing the total number of speckles here:

You start getting good results when the total number of illuminations goes well beyond the total number of pixels of your object (4096). However, if you were going to do this in the lab, each measurement would take some time, and the total experiment would last for hours! Luckily, this process can be done faster if we use a cool trick: compressive sensing.
Different recovery strategies
The measurement process in Ghost Imaging can be expressed in matrix form very easily. If you arrange your object (O) as a column vector, then each speckle pattern can be arranged as a row vector in the matrix S, and your single-pixel signals can be written as a column vector (y):
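In equation form, the whole measurement process is just

$$\mathbf{y} = S\,\mathbf{O},$$

where $S$ has one flattened speckle pattern per row (so its size is number of measurements × number of pixels), $\mathbf{O}$ is the flattened object, and $\mathbf{y}$ stacks all the single-pixel values.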

Again, the recovery is just a linear superposition of the speckle patterns, weighted by the single-pixel intensities. However, this is not the only way of recovering your object. You could try to invert your sensing matrix (S), and recover your object by multiplying that inverse by the measurement vector. This tends to be hard, because you typically do not have a square matrix (indeed, you don't want it to be square; ideally you want to have as few rows as possible, so the measurement goes fast). However, there are many possibilities. If you don't know anything about the object you want to recover, you can just try to minimize the L2-norm between your recovery and your measurements (i.e., solve the least-squares problem). You can also write down an optimization problem with some additional knowledge about your image (for example, it might be sparse in some domain, or it might look cartoony, with flat regions and very few edges). These three cases are labelled as a), b), and c) on the following slide:

The nice thing is that some of these are actually quite easy to solve. The least-squares problem can be solved by calculating the pseudoinverse of your measurement matrix. Sparsity assumptions are the core of Compressive Sensing theory, and there are many libraries to tackle them. I used PyProx and PyLops (I had never used them before and wanted to try them on an easy example) and got very nice results. For my tests, I assumed the object was sparse in the Hadamard domain.
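I will not reproduce the PyProx/PyLops calls here, but just to make cases b) and c) more concrete, here is a small NumPy/SciPy sketch of both ideas: the least-squares solution via the pseudoinverse, and a deliberately naive, hand-rolled ISTA iteration that enforces sparsity in the Hadamard domain (the repo relies on the dedicated libraries, which are far more robust; the function names and parameter values below are just mine):

```python
import numpy as np
from scipy.linalg import hadamard

# S: (M, P) sensing matrix (flattened speckles), y: (M,) single-pixel signals
# P must be a power of 2 for the Hadamard transform used below.

def least_squares_recovery(S, y):
    """Case b): minimum-L2-norm solution via the pseudoinverse."""
    return np.linalg.pinv(S) @ y

def ista_hadamard_recovery(S, y, n_iter=200, lam=1e-3):
    """Case c): very naive ISTA assuming the object is sparse in the Hadamard domain."""
    P = S.shape[1]
    H = hadamard(P) / np.sqrt(P)        # orthonormal Hadamard transform
    A = S @ H.T                         # sensing matrix acting on Hadamard coefficients
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(P)
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)        # gradient of the data-fidelity term
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)  # soft threshold (L1)
    return H.T @ c                      # back to the pixel domain
```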

You can also recover a better image by denoising the result of the least-squares problem. I used Total Variation minimization (which, roughly speaking, favors flat regions and penalizes sharp edges in the image):
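I won't paste my exact code, but with scikit-image (already in the dependency list) the TV denoising step is essentially a one-liner; recovery_ls and the weight value below are just placeholders:

```python
from skimage.restoration import denoise_tv_chambolle

# recovery_ls: the 2D image obtained from the least-squares (pseudoinverse) recovery
recovery_tv = denoise_tv_chambolle(recovery_ls, weight=0.1)
```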

You can take a look at all of these approaches in compressive_demo.py, and some of the results are shown here:


Now, even when using a low number of illuminations, it is possible to recover better quality images than the ones you get with the classical approach (correlations / linear superposition). I'd say that with just 512 illuminations you already get a nice recovery when using Compressive Sensing (L2/L1 minimization), and all the approaches work very well as you increase that number.
And… that’s basically it. I hope this was useful for someone!