Facecrap, Bezosism, and more: THE WEEKLY RECAP (2021#37)

Really packed week, so let’s start right away.


Enhance!

Another year, another amazing contest of scientific photography. I particularly liked the third-place image, though all the entries are really impressive.

2021 PHOTOMICROGRAPHY COMPETITION, on nikonsmallworld


Nintendo being Nintendo

Honestly, after 4 years I thought they were never going to add Bluetooth support for headphones, even though everyone knew it was a trivial thing to do. Anyway, better late than never, I guess.

Nintendo finally adds Bluetooth audio to the Switch in new software update, on the verge

La Liga goes crypto

And jumps aboard the NFT hype train. Apparently, you will soon be able to waste energy buying digital stickers of Hazard.

La Liga Becomes First Top Soccer League to Offer NFTs of All Players, on coindesk

Kratos is back!

And the game really looks amazing. Can’t wait to get a PS5 (maybe in 2022? Fingers crossed for production ramping up…).


The Facebook files

This week, the Wall Street Journal published a series of articles with a lot of insight into how Facebook operates, and why it has become a problem. Manipulating elections, spreading fake news, and harming the body image of minors are just a few examples of what happens behind the curtain at Zuckerberg’s company. Personally, I haven’t used Facebook for around 5 years (I left after several data breaches and privacy scandals), I’ve never been a fan of Instagram, and I left WhatsApp around one year ago (again, because I did not want Facebook having any data on me).

Something I read on TechCrunch that really resonated with me was the argument that maybe Facebook (or big tech companies in general) is the tobacco company of our era. Will we see its negative effects for generations to come? It is clear that social networks have shaped the world we live in, and while they have brought cool stuff, I am not at all sure that these technologies are worth the negative effects we experience every day. Is it really necessary to watch other people drink a beer on Instagram? Are we really better informed about the world while browsing Twitter? What do you really learn watching 30-second videos on TikTok?

The Facebook Files, on the wall street journal
Facebook knows Instagram harms teens. Now, its plan to open the app to kids looks worse than ever, on techcrunch

Bezosism my ass

It seems like there is no way to get through a week without reading negative stuff about these billionaires. There is a lot of good information in the Wall Street Journal piece about how Bezos has established a new standard for managing workers, achieving incredible performance (and, of course, profits). While I had read plenty of news about algorithms measuring worker performance and making hiring/firing decisions, I had never realized that your performance is compared against the average of your peers. If you do better than average, you are fine. If your performance decreases (maybe you just had a child and are not sleeping well, maybe you have injuries because your job is shit and doing the same task for 9 hours straight should be illegal), you might lose your job in a couple of weeks. It was shocking to read that some workers really needed to dope themselves to save their jobs, which drags everyone into trying to improve their performance in an impossible efficiency loop.

I could not stop thinking about professional cycling, where doping among the elite seemed so widespread that many regular cyclists needed to resort to illegal methods just to stay competitive.

[…] The overall rate at which workers must complete a task in an Amazon warehouse, whether it’s putting items on shelves, taking them off, or putting them in boxes, is calculated based on the aggregate performance of everyone doing that task in a given facility, says an Amazon spokeswoman. This floating rate, Amazon argues, shows that none of its employees is being pushed beyond what’s reasonable, because that rate is something like an average of what everyone in a warehouse is already doing.[…]

[…] “If there are people who cut corners, if there are people who take tons of coffee and tons of energy drinks to go faster, that raises the cumulative rate,” says Mr. Hamilton. “Meaning, if you want to keep up with the average, then you have to cut corners and drink coffee and energy drinks at every break.”[…]

[…] A worker using the Kiva system in its early incarnations would typically triple their output, say from an average of 100 picks an hour to 300, says Mr. Mountz. But it wasn’t as if the Kiva-using companies then reduced all their warehouse employees’ hours to a third of what they once were while paying them the same wage. Instead, Staples and Walgreens, both early customers of Kiva, used their workers’ increased productivity to increase the output capacity of their warehouses; store and ship a wider range of products; shorten the amount of time required to fulfill an order, and ultimately either lower the cost of their services, increase their profits, or both. All reasons Amazon, a customer of Kiva, decided to acquire it[…]

Anyway, another mark in the legacy of Bezos, I guess.

The Way Amazon Uses Tech to Squeeze Performance Out of Workers Deserves Its Own Name: Bezosism, on the wall street journal

And that’s it for the week. Stay safe!

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

This just got posted on the arXiv, and has some interesting ideas inside. A ground glass diffuser is placed before a pixelated detector and, after a calibration procedure where you measure the speckle pattern associated with each position of the sample plane, a single shot of the fluorescence speckle pattern can be used to retrieve high-spatial-resolution images of a sample. The authors also claim that the approach should work on STORM setups, achieving really fast and sharp fluorescence images. Nice single-shot example of Compressive Sensing and Ghost Imaging!
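
To make the recovery step concrete, here is a minimal sketch (my own, not the authors’ code) of a sparsity-constrained reconstruction via ISTA, assuming the calibration speckle patterns have been flattened into the columns of a measurement matrix A:

    import numpy as np

    def soft_threshold(x, t):
        # Element-wise soft thresholding: the proximal operator of the L1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def gisc_recover(y, A, lam=0.05, n_iter=200):
        # y: flattened single-shot speckle measurement, shape (m,)
        # A: calibration matrix, shape (m, n); column j is the flattened
        #    speckle pattern recorded while illuminating sample position j
        # Solves min_x 0.5*||A @ x - y||^2 + lam*||x||_1 with ISTA.
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

The recovered x is the (sparse) emitter distribution over the scanned sample positions; reshaping it to the scan grid gives the image.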

Single frame wide-field Nanoscopy based on Ghost Imaging via Sparsity Constraints (GISC Nanoscopy)

by Wenwen Li, Zhishen Tong, Kang Xiao, Zhentao Liu, Qi Gao, Jing Sun, Shupeng Liu, Shensheng Han, and Zhongyang Wang, at arXiv.org

Abstract:

The applications of present nanoscopy techniques for live cell imaging are limited by the long sampling time and low emitter density. Here we developed a new single frame wide-field nanoscopy based on ghost imaging via sparsity constraints (GISC Nanoscopy), in which a spatial random phase modulator is applied in a wide-field microscopy to achieve random measurement for fluorescence signals. This new method can effectively utilize the sparsity of fluorescence emitters to dramatically enhance the imaging resolution to 80 nm by compressive sensing (CS) reconstruction for one raw image. The ultra-high emitter density of 143 μm⁻² has been achieved while the precision of single-molecule localization below 25 nm has been maintained. Thereby working with high-density of photo-switchable fluorophores GISC nanoscopy can reduce orders of magnitude sampling frames compared with previous single-molecule localization based super-resolution imaging methods.

Experimental setup and fundamentals of the calibration and recovery process. Extracted from Fig.1 of the manuscript.

Simultaneous multiplane imaging with reverberation multiphoton microscopy

Really nice pre-print by the people at Boston University, led by J. Mertz.

Love the idea of generating ~infinite focal spots (until you run out of photons) inside a sample, and using an extremely fast single-pixel detector to recover the signal. Very original way to tackle volumetric imaging in bio-imaging!
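
For intuition, here is a toy sketch of the demultiplexing step, under my own simplifying assumption (not necessarily the paper’s timing scheme) that each laser pulse excites the focal planes at fixed, uniform delays, so the fast detector trace can simply be binned per plane:

    import numpy as np

    def demultiplex_planes(trace, dt, pulse_period, plane_delay, n_planes):
        # trace: 1D photodetector signal sampled every dt seconds
        # pulse_period: time between consecutive laser pulses
        # plane_delay: assumed delay between successive focal planes
        # Returns an (n_planes, n_pulses) array of per-plane signals.
        assert n_planes * plane_delay <= pulse_period
        n_pulses = int(len(trace) * dt // pulse_period)
        signals = np.zeros((n_planes, n_pulses))
        for p in range(n_pulses):
            t0 = p * pulse_period
            for k in range(n_planes):
                a = int((t0 + k * plane_delay) / dt)
                b = int((t0 + (k + 1) * plane_delay) / dt)
                signals[k, p] = trace[a:b].sum()  # photons in slot k -> plane k
        return signals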

Fundamental workflow of the technique. Extracted from Fig. 1 in the manuscript.

Simultaneous multiplane imaging with reverberation multiphoton microscopy

by Devin R. Beaulieu, Ian G. Davison, Thomas G. Bifano, and Jerome Mertz, at arXiv.org

Abstract:

Multiphoton microscopy (MPM) has gained enormous popularity over the years for its capacity to provide high resolution images from deep within scattering samples. However, MPM is generally based on single-point laser-focus scanning, which is intrinsically slow. While imaging speeds as fast as video rate have become routine for 2D planar imaging, such speeds have so far been unattainable for 3D volumetric imaging without severely compromising microscope performance. We demonstrate here 3D volumetric (multiplane) imaging at the same speed as 2D planar (single plane) imaging, with minimal compromise in performance. Specifically, multiple planes are acquired by near-instantaneous axial scanning while maintaining 3D micron-scale resolution. Our technique, called reverberation MPM, is well adapted for large-scale imaging in scattering media with low repetition-rate lasers, and can be implemented with conventional MPM as a simple add-on.

De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

Very interesting stuff from the people at MIT regarding imaging through scattering media. Recently, multiple approaches have been published that exploit the increased efficiency of temporal focusing (TF) inside scattering media for two-photon microscopy, and this work goes a step further.

Here, the authors use wide-field structured illumination, in combination with TF, to obtain images with a large field of view from a small number of camera acquisitions. To do so, they sequentially project a set of random structured patterns using a digital micromirror device (DMD). Using the pictures acquired for each illumination pattern, in combination with the point spread function (PSF) of the imaging system, they recover images of different biological samples without the typical scattering blur.
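
As a rough illustration (my sketch under strong simplifying assumptions, not the authors’ reconstruction code), the recovery can be viewed as a deconvolution of each frame by the system PSF, followed by a demixing over the known patterns:

    import numpy as np

    def deep_reconstruct(images, patterns, psf, eps=1e-3):
        # images:   (k, H, W) camera frames, one per DMD pattern
        # patterns: (k, H, W) known structured excitation patterns
        # psf:      (H, W) point spread function of the imaging system
        # Model: images[i] ~ psf convolved with (patterns[i] * obj), so first
        # undo the blur with a Wiener-style deconvolution, then demix patterns.
        otf = np.fft.fft2(np.fft.ifftshift(psf))
        deconv = np.real(np.fft.ifft2(
            np.fft.fft2(images) * np.conj(otf) / (np.abs(otf) ** 2 + eps)))
        num = (deconv * patterns).sum(axis=0)      # pattern-weighted sum
        den = (patterns ** 2).sum(axis=0) + eps    # per-pixel normalization
        return num / den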

Optical design and working principle of the system. Figure extracted from “De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media,” Dushan N. Wadduwage et al., at https://arxiv.org/abs/1902.10737

De-scattering with Excitation Patterning (DEEP) Enables Rapid Wide-field Imaging Through Scattering Media

by Dushan N. Wadduwage et al., at arXiv.

Abstract:

From multi-photon imaging penetrating millimeters deep through scattering biological tissue, to super-resolution imaging conquering the diffraction limit, optical imaging techniques have greatly advanced in recent years. Notwithstanding, a key unmet challenge in all these imaging techniques is to perform rapid wide-field imaging through a turbid medium. Strategies such as active wave-front correction and multi-photon excitation, both used for deep tissue imaging; or wide-field total-internal-reflection illumination, used for super-resolution imaging; can generate arbitrary excitation patterns over a large field-of-view through or under turbid media. In these cases, throughput advantage gained by wide-field excitation is lost due to the use of point detection. To address this challenge, here we introduce a novel technique called De-scattering with Excitation Patterning, or ‘DEEP’, which uses patterned excitation followed by wide-field detection with computational imaging. We use two-photon temporal focusing (TFM) to demonstrate our approach at multiple scattering lengths deep in tissue. Our results suggest that millions of point-scanning measurements could be substituted with tens to hundreds of DEEP measurements with no compromise in image quality.

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

The group led by P. Artal at Murcia University has recently published an interesting paper on adaptive optics using an adaptive lens. In a real scenario, imperfections in the optical elements you use, or in the very objects you want to image, introduce optical aberrations into the pictures you obtain. Usually these aberrations reduce the quality of your images just a bit (introducing some defocus or astigmatism), but in the worst case they may render the results completely useless.

To overcome this problem, liquid crystal spatial light modulators or deformable mirrors are usually placed in optical systems to introduce phase corrections to the light going through the system, countering the phase of these aberrations and thus restoring image quality. However, these systems present several problems. Even though both spatial light modulators and deformable mirrors can correct the aberrations I mentioned earlier, they work in a reflection configuration, which adds complexity to the optical system. Also, liquid crystal spatial light modulators are sensitive to polarization, usually have low reflectance values, and tend to be slow.

As a way to tackle those obstacles, the authors used an adaptive lens in a two-photon microscope to perform the adaptive optics procedure. Adaptive lenses are increasingly being used for aberration correction. In contrast to both spatial light modulators and deformable mirrors, they work in transmission and present very low losses. Moreover, they can introduce low- and mid-order aberrations at refresh rates of almost 1 kHz. The working principle can be seen in this figure:

Schematic of the working principle of an adaptive lens. The lens is formed by two thin glass layers with a liquid in between. Each actuator is triggered by an electrical signal, which deforms the glass windows, generating different shapes and changing the phase of the wavefront passing through the lens. Figure extracted from Stefano Bonora et al., “Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens,” Opt. Express 23, 21931-21941 (2015)

In the paper, they show that, in a multi-photon microscope, this device can obtain results comparable to the traditional spatial light modulator approach, with the benefits mentioned before.
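
The abstract below mentions a hill-climbing procedure; a minimal sketch of that kind of metric-driven loop over Zernike coefficients could look like this (apply_zernikes and image_metric are hypothetical placeholders for the lens driver and the image-quality metric, not the authors’ actual interface):

    import numpy as np

    def hill_climb_correction(apply_zernikes, image_metric, n_modes=10,
                              step=0.1, n_passes=3):
        # apply_zernikes(coeffs): shapes the adaptive lens and returns an image
        # image_metric(image): scores image quality (e.g. total intensity)
        coeffs = np.zeros(n_modes)
        best = image_metric(apply_zernikes(coeffs))
        for _ in range(n_passes):
            for m in range(n_modes):
                for delta in (+step, -step):
                    trial = coeffs.copy()
                    trial[m] += delta
                    score = image_metric(apply_zernikes(trial))
                    if score > best:  # keep the change only if it helps
                        coeffs, best = trial, score
        return coeffs, best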

Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens

by Juan M. Bueno et al., at Optics Express

Abstract:

A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.

Experimental results showing the improvement in the image obtained with the adaptive lens system. Figure 3 from the paper: Juan M. Bueno et al., “Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens,” Opt. Express 26, 14278-14287 (2018)

 

Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive Optics + Light Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images overcoming the scattering caused by tissue. You can read more about the paper on Nature and/or the Howard Hughes Medical Institute.

 


The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first volume was published online, but now you can also find volumes 2 and 3. Time to reread the classics…


Imaging Without Lenses

An interesting text appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to overcome classical resolution limits, obtain phase information, or even image without using traditional optical elements, such as lenses. What’s coming next?


All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier transforms had been discovered before lenses, and then one day someone came up with just a piece of glass and said, “this can compute FTs at the speed of light”. Very cool read.
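
As a toy numerical check of that idea: in the Fraunhofer approximation, the field at the focal plane of a thin lens is, up to scale factors, the 2D Fourier transform of the field at its pupil, which a few lines of NumPy can reproduce:

    import numpy as np

    # A square aperture at the pupil should give the classic sinc^2
    # diffraction pattern at the focal plane.
    N = 512
    x = np.linspace(-1, 1, N)
    X, Y = np.meshgrid(x, x)
    aperture = ((np.abs(X) < 0.1) & (np.abs(Y) < 0.1)).astype(float)

    # A centered 2D FFT stands in for the lens in the Fraunhofer regime
    focal_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    focal_intensity = np.abs(focal_field) ** 2  # ~ sinc^2 along both axes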


OPEN SPIN MICROSCOPY

I just stumbled upon this project while reading Lab on the Cheap. Seems like a very good resource if you plan to build a light-sheet microscope and do not wanna spend $$$$ on Thorlabs.


Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.