The first photo of a black hole is the most historically significant “first photo of x” to happen in my lifetime, and one whose historical significance I actually understood when it came out. So I’d say that’s probably my favourite.
Not a photo.
It’s the output of an AI model trained on simulations of black holes being asked to fill in the gaps from sparse observations.
Someone: takes a selfie with their phone under low lighting conditions
You: "not a photo, it’s the output of an algorithm taking the luminosity from an array of light detectors, giving information of the colour and modifying it according to lighting conditions, and then using specific software to sharpen the original capture*
It’s not hard to find legitimate academic criticism of this ‘photo’. For example here. The comparison you made is not correct; it’s more like giving a blurry photo to an AI trained on paintings of Donald Trump and asking it to make an image of him. Even if the original image was not of Trump, the chances are the output will be, because that’s all the model was trained on.
This is the trouble with using this as ‘proof’ that the theory and the simulations are correct: while that is still likely, there is a feedback loop causing confirmation bias here, especially when people refer to this image as a ‘photo’.
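The worry is easy to demonstrate with a toy example. The sketch below is pure illustration, not the EHT’s actual reconstruction pipeline: a made-up 1-D ‘scene’, a random sparse measurement matrix, and a simple ridge-style penalty standing in for a learned prior. It solves min_x ||Ax - y||^2 + lambda * ||x - x_prior||^2, and with only a handful of measurements the output is dominated by whatever prior you regularise toward.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100                                   # pixels in a 1-D "image"
truth = np.zeros(n); truth[40:60] = 1.0   # the real scene: a flat bump

# Sparse observations: far fewer measurements than pixels, loosely like
# an interferometer sampling only a few spatial frequencies.
m = 8
A = rng.normal(size=(m, n))
y = A @ truth

# A "prior" standing in for what the model was trained on, deliberately
# different from the truth (two bumps instead of one).
prior = np.zeros(n); prior[20:30] = 1.0; prior[70:80] = 1.0

def reconstruct(lam):
    # Closed-form minimiser of ||Ax - y||^2 + lam * ||x - prior||^2
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * prior
    return np.linalg.solve(lhs, rhs)

weak = reconstruct(0.01)    # prior barely used: underdetermined, noisy
strong = reconstruct(100.0) # prior dominates the sparse data

print(np.corrcoef(strong, prior)[0, 1])  # high: result resembles the prior
print(np.corrcoef(strong, truth)[0, 1])  # lower: the truth is largely lost
```

The point is not that anything this crude was done; it is that when the data are this sparse, the regulariser’s fingerprints are all over the result, which is exactly the confirmation-bias loop described above.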