More Bezosism, Nixon deepfakes, and more: THE WEEKLY RECAP (2021#40)

Quite a lot going on this week. Massive leaks, Nobel prizes, deepfakes, Facebook being shamed everywhere… let’s start.


Can you tell the difference? Does it really matter?

Cool technology being shown by the people at MIT. Of course, deepfakes are not something new, and this one in particular is not over the top in quality (at some point, people will start training the algorithms to move the forehead and the eyebrows in a natural way). Anyway, beyond the technology itself, I liked the discussion about the uses people are finding for these tools. On the one hand, you have organizations building systems to give people who have lost their voice the ability to speak again (which is amazing). On the other, you have people putting celebrity faces on porn videos and making famous people tell lies on YouTube.

It really makes me think about how, for many, many years, when a new regime wanted to control people, it changed the history books. Nowadays, people consume most of their information in video format, through the internet. I guess we are not far from governments spamming deepfakes of famous people spreading fake news everywhere, with a quality that is extremely difficult for the human eye to detect. If it is hard to fight fake news from random people on Facebook, what will happen when front-line politicians and scientists appear to be the ones spreading misinformation?

A Nixon Deepfake, a ‘Moon Disaster’ Speech and an Information Ecosystem at Risk, on scientific american


Facebook vs the world

This week the Senate held a hearing about Facebook, and a whistleblower threw a lot of shit at the fan about how the company's algorithms work. The underlying idea, as everyone should know already, is that the only thing Facebook wants is for you to spend as much time as possible on the platform, sharing as many posts as possible, even if you spread fake news and hurt people, because that is what makes them huge amounts of money. I recommend the piece the people at the MIT tech review wrote:

The Facebook whistleblower says its algorithms are dangerous. Here’s why., on the MIT technology review


CO2 removal, the shell game?

Quite an interesting piece in Nature about Microsoft's plans to go net-zero before 2030. With all these projects, I always wonder whether 'undoing' your emissions is the right call, or whether developing zero-emissions technology should be the priority. Of course, at some point you have to undo all the emissions of the last centuries. However, I cannot help but think that planting trees to remove CO2 over the following decades will do nothing if those same forests disappear before your emissions are balanced. It is also a very naïve way of solving the problem: you remove CO2 from the atmosphere and store it in the biosphere, creating a problem for future generations (who will need to find a way to clean up the biosphere). Anyway, at least they are doing something, I guess.

Microsoft’s million-tonne CO2-removal purchase — lessons for net zero, on nature


Science images of the month

Seahorse with mask

I’ll keep posting these as long as they keep doing them.

Space jellyfish and subterranean robots — September’s best science images, on nature


See you space cowboy

https://cdn-s-www.vosgesmatin.fr/images/2375D722-F317-4040-B216-AF8F2EFAB469/NW_listE/jeff-bezos-apres-son-vol-reussi-dans-l-espace-photo-joe-raedle-getty-images-afp-1626806281.jpg

Another week, another story about how it is impossible to make huge amounts of money without being a total prick who does not care about the wellbeing of others. People working 24/7 so I can ride through space? Why not.

Blue Origin’s ideas to mimic SpaceX sound pretty brutal for employees, on the verge


Twitch being pwned by 4chan

Besides a lot of code and internal information about the company (which apparently was not a big deal, as it was quite old), the leak included the numbers for how much money people have been earning on the platform. I guess every day it gets clearer why the Amazon Prime subs will stop working on Twitch sooner rather than later.

Will YouTube become a real competitor at any point? What's clear to me is that all this fuss is paving the way for multiple services to stand up and create a blooming field for streamers, which I'd say is a good thing.

Twitch source code and creator payouts part of massive leak, on the verge

Twitch confirms hack after source code and creator payout data leaks online, on techcrunch


And that’s it for the week. Stay safe!

Netflix and the chocolate factory, AI controlling funding, and more: THE WEEKLY RECAP (2021#38)

So, this week we have quite a lot of different stuff. Let’s get to it.


Could you please stop doing that

Is Netflix going to destroy some of the best books ever written? My bet is a big yes. At least I hope Dahl’s family will enjoy the money…

Netflix Acquires Roald Dahl Story Company, Plans Extensive Universe, on Variety

Sorry [#researcher_ID], funds not found

Really cool article on the MIT technology review about using AI to guide research. The case study is the Decadal Survey, where many scientists decide every ten years which are the most interesting areas for future research. This steers lots of funding in that direction, so it is a big deal for some people (the researchers getting the funds), but also relevant for the general public (in the end, all research provides advances for everyone, no matter the subject).

The news here is that some researchers are suggesting we use AI algorithms to go through all the proposals (there are more than 500 for the next survey), because there is no way the experts working on the survey have enough knowledge to decide over so many different topics. While this seems like a good point to me, I still think the AI technology we have nowadays is far from ready for such a consequential task.

Another thought that came to mind was that, when you pick a reduced number of areas and pour tons of funding into research on them, you attract many scientists, which in the end generates lots of papers on those topics. These papers will cross-reference other papers on the same topic, thus generating a lot of impact (as we usually measure the impact of publications by how many citations they get). In this scenario, you can always say that funding this research was the right call (it generated a lot of impact). But was it relevant in the first place, or did it generate publications simply because there was a lot of money behind it?

Also, we have seen countless times that serendipity in science is a force to be reckoned with. You never know what findings you will get when doing research, and many times you will find extremely relevant applications in distant fields when you fund basic or untrendy research. Will AI ever be able to grasp these ideas? Should we really focus on specific research topics, or just fund everything?

This AI could predict 10 years of scientific priorities—if we let it, on MIT technology review

Was “Despacito” a virus?

It is actually nice to know that, while I was infected many years ago by electronic music, it was something bound to happen at some point. Cool study trying to link the way music spreads between people with the way infectious diseases unfold. I really liked the ideas about the similarities and differences between the dynamics of viruses and music. Sometimes you just hear something while walking down the street (similar to catching influenza at your workplace or from your family), but many times you just see a tweet from a friend who is miles away and you get drawn to a song or genre.

Mathematicians discover music really can be infectious – like a virus, on the guardian
Modelling song popularity as a contagious process, on Proceedings of the Royal Society A
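The paper models song adoption with the machinery of epidemiology. As a rough illustration of the idea (not the paper's actual fitted model; the transmission and recovery rates here are made-up values), a minimal discrete-time SIR simulation looks like this:

```python
# Minimal SIR sketch: treating a song like an infectious disease.
# S = people who haven't heard it, I = people actively sharing it,
# R = people who got tired of it. beta/gamma are illustrative, not fitted.

def simulate_sir(beta=0.3, gamma=0.1, population=1000, infected0=1, steps=200):
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / population  # catching the song from a "carrier"
        new_recoveries = gamma * i                  # losing interest in the song
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_listeners = max(step[1] for step in history)
```

With a "transmission rate" above the "recovery rate", you get the familiar epidemic curve: a sharp rise, a peak, and a die-off once most susceptible listeners have been exposed — which is roughly the hit-song trajectory the study describes.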

Modern architecture was a mistake

Really nice post on openculture with a video essay on modern architecture, and why so many people (including myself) kinda hate it. Anyway, at least it is not brutalism/postmodernism (I am thinking about you, Centre Pompidou).

Why Do People Hate Modern Architecture?: A Video Essay, on openculture

Keep going, nothing to see here…

Everything is fine. No monopolistic practices. We are cool. Privacy is our motto. All we do is for the benefit of our customers. We review the apps on our store. We work with developers.

THE BITTER LAWSUIT HANGING OVER THE APPLE WATCH’S NEW SWIPE KEYBOARD, on theverge
Fortnite likely isn’t coming back to the App Store anytime soon, on techcrunch
Apple Lies About Epic Again, on the Michael Tsai blog

The weekly recap (2021#18)

Hectic week… sometimes unexpected tasks jeopardize your schedule. Coming back to normal now, I hope.

Anyway, really interesting stuff has been happening for the past few days. Let’s delve into it:

Apple vs the world: episode #1

Recently Apple changed the way its users are notified about how some apps track their information. This perturbation in the (advertisement) force has generated a lot of ripples, which might grow bigger than a tsunami. There have been a lot of interesting articles on the topic, most of them painting a good-vs-bad scenario (where Apple is almost a white knight fighting for our privacy and Facebook is a devil). While obviously some companies behave much worse than others, I see the scenario more as evil vs the lesser evil. If you are interested, I found these sources particularly informative:

Apple And Tracking: A Story Of Good Guys And Bad Guys, on forbes
I checked Apple’s new privacy ‘nutrition labels.’ Many were false. on The Washington Post

Anyone in cherno?

One of the reasons we never jumped fully on the nuclear bandwagon was the Chernobyl accident. While some might think everything had been solved by now, recent news has sparked some concerns: nuclear reactions are starting to ramp up in activity again.

‘It’s like the embers in a barbecue pit.’ Nuclear reactions are smoldering again at Chernobyl, on Science

Was Wolfram right?

I remember some coding lessons at university that used Mathematica and its notebooks. I kinda hated those and always thought it was a terrible way of coding, only nice for sharing stuff. Nowadays Jupyter notebooks are used by millions of people, and I still see terrible code in notebooks that should have been just a .py file (it has to be said that I also see amazing dissemination notebooks).

Anyway, if you want to read the opinion of really clever people, you can take a look at this article:

Reactive, reproducible, collaborative: computational notebooks evolve, on Nature

Apple vs the world: episode #2

This week the Apple vs Epic trial started, and oh boy, it's been fun. Two tech giants spreading shit all over the place. Let's see how far greed can get.

I share some articles and a couple of threads on Twitter. While the articles are nice, the threads are kind of a live stream of the trial, and I loved reading them. Also, every email exposed during this week is pure gold, and a perfect insight into how companies operate… An interesting topic which will have a lot of repercussions on the way we interact with our devices…

WHY EPIC IS BURNING ITS OWN CASH TO COOK APPLE, on theverge
Even If Epic Loses Against Apple, Developers Could Still Win, on bloomberg
Apple antitrust trial kicks off with Tim Sweeney’s metaverse dreams, on theverge

Has AI gone too far?

Wild news on how some people are using AI-fuelled narrative games to create disturbing narratives involving sex and children. It really makes you think about biases in training, the uses we put any tool to, the responsibility companies bear for how people use their tech, and privacy. What a read:

It Began as an AI-Fueled Dungeon Game. It Got Much Darker, on wired

And that’s it for the week. Stay safe!

the Weekly recap

Let’s see how long I can keep this section going this time… Lots of really cool stuff happening right now, to be honest. I hope this short recap is interesting!


Another one bites the dust

This week we found out that Yahoo! Answers is shutting down. Another cool site from an old internet era goes away… Have fun with Delicious, Google Reader, Grooveshark, etc.


Don’t worry, we are not getting out of work soon

This week Fermilab made one of those announcements the media loves (new physics?). I’ve collected a couple of links about it that I liked. The first is a nice strip by @PHDcomics that was published here. The Physics Girl also made a nice video on the topic, if you prefer that medium:


Out-nerd me now, Randall!

The week started with a super cool strip on the mRNA vaccines from xkcd. SMBC, however, stepped up the game with one on quantum computing.


Neuralink keeps pushing forward

A new batch of results from one of the coolest companies I know was published this week. Brain-to-machine interfaces are getting closer and closer, and that’s a good thing. There is a super cool blog post with more info on the experiments on the Neuralink blog.


Humour in science articles

A nice piece of text on Nature Review Physics on funny article titles.

Fantastic titles and where to find them, on Nat Rev Phys 3, 225 (2021).

Interesting insights on problem solving

A very cool News and Views piece in Nature about how people try to solve problems. It seems the mantra “less is more” is not hardwired into our brains at all:

Adding is favoured over subtracting in problem solving, on Nature 592, 189-190 (2021).

Take these extra fps buddy

A very interesting text on hackaday about a technology I had never heard of: using machine learning tools to upscale videogames either spatially or temporally (and thus gaining resolution or frames per second). Really nice concept: it should be way more efficient to do the training for each videogame on a supercomputer, and then millions of players can run the model while consuming much less energy. The same can apply to streaming services, etc. It really shows how compression techniques leak into every aspect of our world today.

AI UPSCALING AND THE FUTURE OF CONTENT DELIVERY, on hackaday.com
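To make the idea concrete, here is the trivial classical baseline that DLSS-style learned upscalers improve on: nearest-neighbour upscaling, which just repeats pixels. A trained model would instead hallucinate plausible missing detail learned offline; this sketch (my own toy example, not from the article) only shows the cheap starting point the client would otherwise use.

```python
import numpy as np

def upscale_nearest(frame, factor=2):
    """Repeat each pixel `factor` times along both axes (nearest-neighbour)."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

# A tiny 2x2 "frame" upscaled to 4x4: each pixel becomes a 2x2 block.
low_res = np.arange(4, dtype=float).reshape(2, 2)
high_res = upscale_nearest(low_res)
```

The energy argument in the article is essentially that the expensive part (training) happens once in a datacenter, while each player's GPU only runs cheap inference on top of a low-resolution render like this one.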

And that’s it for the week. See you soon!

Weekly recap (29/04/2018)

This week we have a lot of interesting stuff:

Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms

Adaptive Optics + Light Sheet Microscopy to see living cells inside the body of a zebrafish (the favorite fish of biologists!). Really impressive images overcoming the scattering caused by tissue. You can read more about the paper on Nature and/or the Howard Hughes Medical Institute.



The Feynman Lectures on Physics online

I just read on OpenCulture that The Feynman Lectures on Physics have been made available online. Until now, only the first part was published, but now you can also find volumes 2 and 3. Time to reread the classics…


Imaging Without Lenses

An interesting text appeared this week in American Scientist covering some aspects of the coming symbiosis between optics, computation, and electronics. We are already able to overcome optical resolution limits, obtain phase information, and even image without traditional optical elements such as lenses. What’s coming next?


All-Optical Machine Learning Using Diffractive Deep Neural Networks

A very nice paper appeared on arXiv this week.

Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial-light-modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.

Imagine if Fourier transforms were discovered before lenses, and then one day someone comes up with just a piece of glass and says, “this computes an FT at the speed of light”. Very cool read.
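The lens-as-FT idea can be played with numerically. Under the Fraunhofer approximation, the field at the focal plane of a lens is (up to scaling) the Fourier transform of the field at the aperture, so a simple FFT already mimics what the glass does. A minimal sketch (my own toy, not from the paper): a 1D slit aperture produces the classic sinc-shaped diffraction pattern.

```python
import numpy as np

# "Lens computes a Fourier transform": the focal-plane field of a lens is,
# up to scaling, the FT of the aperture field (Fraunhofer approximation).
# Here a 1D open slit -> sinc-shaped diffraction pattern at the focal plane.

n = 1024
aperture = np.zeros(n)
aperture[n // 2 - 16 : n // 2 + 16] = 1.0   # open slit, 32 samples wide

focal_plane = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(focal_plane) ** 2        # what a camera would record

central_peak = intensity[n // 2]            # DC term: |sum of aperture|^2
```

A D2NN generalizes this: instead of one element that implements exactly an FT, you stack diffractive layers whose phase profiles are *learned*, so light propagating through them implements an arbitrary trained function.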


OPEN SPIN MICROSCOPY

I just stumbled upon this project while reading Lab on the Cheap. It seems like a very good resource if you plan to build a light-sheet microscope and don’t wanna spend $$$$ on Thorlabs.


Artificial Intelligence kits from Google, updated edition

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We’re seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven’t yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We’re taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

Looking forward to seeing the price tag and the date they become available.

The week in papers (22/04/18)

As a way to keep posts going, I am starting a short recap of interesting papers being published (or being discovered) every now and then. I will probably write longer posts about some of them in the future.

Let’s get this thing going:

Two papers using ‘centroid estimation’ to retrieve interesting information:

Extract voice information using high-speed camera

Mariko Akutsu, Yasuhiro Oikawa, and Yoshio Yamasaki, at The Journal of the Acoustical Society of America
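For context, centroid estimation is just the intensity-weighted center of mass of a spot on the camera, which localizes the spot to sub-pixel precision. A minimal sketch of the idea (the synthetic Gaussian spot and its position are made up for illustration; this is not the papers' pipeline):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (row, col), with sub-pixel precision."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# Synthetic Gaussian spot centered at (20.5, 31.25) — between pixel centers.
y, x = np.indices((64, 64))
spot = np.exp(-((y - 20.5) ** 2 + (x - 31.25) ** 2) / (2 * 2.0 ** 2))

cy, cx = centroid(spot)   # recovers the sub-pixel position
```

Tracking how such centroids wiggle frame to frame in a high-speed video is what lets you read out tiny vibrations, like a surface shaking in response to a voice.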