Space

SpaceX landing is even more impressive when you see the booster next to humans.

I don't know why I thought it wouldn't be this big, but I never realized its size until now

Edit: my most upvoted comment; u/ElonMuskOfficial, can I trade my points in for a SpaceX model rocket?

The Big Dipper as it was. The stars are in motion, albeit very slowly; this is how the Dipper would have appeared to our ancestors 50,000 years ago

I hereby revoke the title "Big Dipper" and dub thee: "The Frying Pan"

Eugene Cernan, the last man on the moon, died today at the age of 82.

"Another hundred years may pass before we understand the true significance of Apollo. Lunar exploration was not the equivalent of an American pyramid, some idle monument to technology, but more of a Rosetta stone, a key to unlocking dreams as yet undreamed."

Eugene Cernan. 

Here is a picture of him on the surface of the moon

Here is a link to a PDF copy of Wernher von Braun's The Mars Project, his proposal for how a crewed Mars mission should be conducted, which was at one point the goal following Apollo. As far as I am aware, this is the only copy on the internet.

"Another hundred years may pass before we understand the true significance of Apollo. Lunar exploration was not the equivalent of an American pyramid, some idle monument to technology, but more of a Rosetta stone, a key to unlocking dreams as yet undreamed."

Eugene Cernan.

Here is a link to a PDF copy of Wernher von Braun's The Mars Project, which was his proposal for how a crewed Mars mission should be conducted, which at one point was the goal following Apollo. As far as I am aware, this is the only copy on the internet.

Moon view from the ISS

Looks like the scene in Rogue One with the Death Star coming over the horizon

The Orion Nebula

This old comment of mine is relevant:

Let me try to give some clarity about how photos of space are created. This explanation will necessarily be simplified.

It helps to understand how a digital camera works. In a consumer digital camera, the light from the scene passes through the lens and strikes the sensor, which is made up of millions of little light-detecting points called photosites. Each photosite records only the intensity of the light striking it (the luminance) and sends a numerical value to the camera’s computer, which assembles the values from each photosite and puts the image together.

The photosites cannot detect the color of the light, so in order to create a color image, a Bayer filter is placed over the photosites. The filter is made up of red, green, and blue transparent material, with each photosite on the sensor covered by only one color of the filter. Any light that strikes a photosite under a blue filter, for example, must be blue light, so the data at each photosite is both color (R, G, or B) and luminance.

A color image is made by combining the data from all the red photosites into the red channel, and so on for each color. The three channels are combined into a single image using this RGB data. You may have seen projectors that use three separate lenses, each with a colored filter, one per channel; the three lenses project the three channels to form a single image. This image illustrates a simplification of the process of assembling the image.

You can see how, through this process, approximately 67% of the light coming toward the sensor is filtered out by the Bayer filter. Each site can only detect one of three colors, which means all the light from the other two colors is filtered out. This also means that a single color pixel can only be created from 3 or more photosites (because a color pixel needs 3 color channels). So a sensor with 300 photosites can only produce an image of 100 color pixels. This is a compromise. Speed and ease of use are more important than density of data for most consumer applications, so some density is sacrificed in the name of convenience.
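
Here's a rough sketch, in Python, of that assembly step for a toy 4x4 sensor with an RGGB Bayer pattern. The numbers and names are made up for illustration, and real cameras use more sophisticated demosaicing, but the idea is the same.

    import numpy as np

    # Toy luminance readout from a 4x4 sensor: one number per photosite (made-up values).
    raw = np.array([
        [120,  60, 118,  62],
        [ 55,  30,  57,  31],
        [119,  61, 121,  60],
        [ 54,  29,  56,  30],
    ], dtype=float)

    # RGGB Bayer pattern: each 2x2 block of photosites sits under
    #   R G
    #   G B
    red   = raw[0::2, 0::2]                           # photosites under red filters
    green = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # two greens per block, averaged
    blue  = raw[1::2, 1::2]                           # photosites under blue filters

    # Stack the channels: 16 photosites become a 2x2 color image.
    color_image = np.dstack([red, green, blue])
    print(color_image.shape)   # (2, 2, 3)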

Because the filters are fixed and cannot change in response to the relative lighting situation, images from consumer cameras need to be white balanced. You may have seen pictures with really bad white balance in which the whole picture is yellow, blue, green, or purple. Under different lighting conditions, our brains adjust our perception of color dynamically. You might have noticed how white LEDs can look blue when viewed indoors. That’s because a lot of indoor lighting is yellow in color. Instead of realizing that all the light sources are yellow, our brain assumes “well, they are probably mostly white” and adjusts our perception accordingly by sliding our own perception of “white” along the color temperature scale. By shifting everything, we perceive the white LED to be blue.

The reason I bring this up is to illustrate and emphasize that there is no objective, true appearance of things. Visual perception is subjective. It exists only in our brains, not in the objects themselves. Color is our brain’s method of displaying information about the wavelength of electromagnetic radiation. But there’s no reason why light with a wavelength of 700 nm is red. Red is just our brain’s method of displaying the fact that an object is giving off electromagnetic radiation at a wavelength of 700 nm. Red isn’t the “true” color of that wavelength. It’s just a wavelength.
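
In code, white balancing boils down to scaling each channel so that something assumed to be white comes out neutral. A minimal sketch in Python; the white_balance function and the reference_white value are made up for illustration, not any camera's actual pipeline:

    import numpy as np

    def white_balance(image, reference_white):
        # Scale each channel so that `reference_white` maps to a neutral gray.
        # image: H x W x 3 array of RGB values; reference_white: the RGB value
        # of something in the scene assumed to be white.
        reference_white = np.asarray(reference_white, dtype=float)
        gains = reference_white.mean() / reference_white   # per-channel correction
        return np.clip(image * gains, 0, 255)

    # A warm indoor scene: a "white" wall reads high in red and green, low in blue.
    scene = np.full((2, 2, 3), [230, 210, 150], dtype=float)
    balanced = white_balance(scene, reference_white=[230, 210, 150])
    print(balanced[0, 0])   # roughly [197, 197, 197] - the wall now reads neutral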

So what does this have to do with space?

Space cameras (by that I mean the general class of cameras used by professionals on probes, rovers, telescopes, etc.) do not have Bayer filters, because density of information is more important than convenience and ease of use. Instead of an integrated Bayer filter, space cameras have a series of movable filters which can be selected and placed in front of the sensor based on the requirements of the particular situation.

In order to make a color image, a space camera has to take three images - one with a red filter, one with a green, and one with a blue - to use as the RGB channels. The camera computer knows which filter is placed in front of the sensor, so when it gets the luminance data back after taking a picture, it can add the RGB data automatically to each photosite. It’s as simple as “Each photosite in this set of data is a green photosite,” for instance. You get the RGB + luminance data out of each photosite, just the same as with a consumer camera. So you take one image for each color channel, and each image is used as a single channel and combined into a single image. If there is movement in the scene, this can cause artefacts. You can see in this image of the moon passing in front of Earth taken by DSCOVR that the leading and trailing edges of the moon have a weird hue to them. That’s because it takes time to switch filters, and in that time the moon had moved.

Going back a bit - this is one reason why we don’t use this method in consumer cameras; things just move too quickly. Also, note that both types of cameras (consumer and space) use the same general method to make a single color photograph - light of various colors is filtered before it hits the sensor, then the data gathered from the sensor is assembled first into color channels, and then into the final image.

There are benefits to using the space camera method, which is called narrowband imaging. A sensor with 300 photosites can produce an image of a full 300 pixels. Each photosite can be used to capture light for each channel, so the density of the data collected by the sensor is much higher. In addition, space cameras aren’t limited to just three colors in the visible spectrum. There are filters which can filter out all light except that produced by hot hydrogen, for example. You can imagine how useful this would be for a space camera. There are filters for all kinds of things. The sensor data from behind various filters can be assembled into an image. If you put the hydrogen-filtered image in as the red channel, and the oxygen-filtered image in as the blue channel, you can see exactly where the hydrogen and the oxygen are in a distant nebula. In the famous Pillars of Creation image, sulfur is displayed in the red channel, hydrogen is displayed in the green channel, and oxygen is displayed in the blue channel. The image uses color to give us information. Just like every photograph.
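
The channel assignment itself is trivial once you have the three filtered exposures. A rough Python sketch of that Pillars-style mapping, with random arrays standing in for the real filtered exposures:

    import numpy as np

    # Three exposures of the same field, one per narrowband filter (made-up data).
    sulfur   = np.random.rand(512, 512)   # S II filter
    hydrogen = np.random.rand(512, 512)   # H-alpha filter
    oxygen   = np.random.rand(512, 512)   # O III filter

    # Assign each filtered exposure to a display channel:
    # sulfur -> red, hydrogen -> green, oxygen -> blue.
    rgb = np.dstack([sulfur, hydrogen, oxygen])
    print(rgb.shape)   # (512, 512, 3)

    # The same stacking works for ordinary red/green/blue filtered exposures;
    # only the choice of filter (and so the meaning of each channel) changes.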

One might object that the wavelengths displayed in the image are not the same wavelengths as those collected by the camera sensor, but that is true of almost all images from every camera, as they go through a white balancing process (done automatically in consumer cameras, and manually in space cameras).

It's true that the pillars won't look like the image to someone viewing them in person, but that is a result of the limitations of the human eyeball. Our eyes have a very hard time detecting color in dim light. The color is there, we just can't see it. Under dark skies through a telescope the Orion nebula will look something like this. But a photograph of the Orion nebula using a standard consumer camera can look like this. Why? Because our eyes are bad. The light is there, it just takes a camera to collect it. It's no more “what the eye would see” than the Pillars, even though it is in the visible wavelengths.

The eye is poorly equipped. 

Some space photos, like this from MSL Curiosity and this from Cassini, are approximately “what the eye would see” because their subjects are bright enough that our eyes would be able to make a decent image (Curiosity even has a color calibration target to help with the process). Even then it's not really what our eyes would see, because what we see is a dynamic interpretation of the scene. Remember the white balance discussion?

In short, the answer to the question “is this what it would look like if I was there?” is almost always no, but that is true of every photograph. The photos taken from space cameras are no more fake or false than the photos taken from any camera. Like all photos they are a visual interpretation using color to display data. Most space photos have information online about how they were created, what filters were used, and all kinds of interesting details about processing.

The discussion about whether a space photo is real or fake is meaningless. There's no distinction between photoshopped and not. It's a nuanced view but the nature of the situation demands it.

Found this newspaper while going through some of my grandma's old things. Thinking of framing it.

Seal it in an airtight container and frame it; that way it won't deteriorate and you'll have it for as long as you live :)

" Dangling my feet in space " Thomas Pesquet

" Dangling my feet in space " Thomas Pesquet

Looking at this makes me want to throw up in fear. Still badass though

My first attempt at a long exposure shot. Not the best, but I really like it.

"Not the best", man don't say that, that shot is gorgeous. Well done.
