When a site with “little planet” images was first shown to me, I knew it was something I had to try. So during my recent visit to Sydney, I took the opportunity to capture the full sphere around me. I had bought a 10.5mm (DX) fish-eye lens a couple of years ago and this was the perfect application. Covering 180° corner-to-corner, or about 100° in one dimension and 75° in the other, it lets you capture the entire scene in as few as 12 images, including a little overlap for stitching.
You can, of course, use any type of lens but since a full sphere has an awful lot of angular area to cover, you really want to capture as much per image as possible. And since you’re going to be warping the images in weird & wonderful ways, there’s no advantage in starting with a rectilinear (i.e. “normal”) lens.
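As a rough sanity check, you can estimate how many frames a full sphere needs from the lens’s field of view. Here’s a minimal back-of-the-envelope sketch; the 55% “effective coverage” factor for overlap is my own assumption, not a measured value:

```python
import math

def solid_angle_sr(h_fov_deg, v_fov_deg):
    """Solid angle of a rectangular field of view, in steradians.
    Standard pyramid formula: omega = 4*asin(sin(a/2)*sin(b/2))."""
    a = math.radians(h_fov_deg)
    b = math.radians(v_fov_deg)
    return 4.0 * math.asin(math.sin(a / 2) * math.sin(b / 2))

SPHERE_SR = 4.0 * math.pi          # a full sphere is 4*pi steradians

omega = solid_angle_sr(100, 75)    # roughly one 10.5mm fish-eye frame
n_ideal = SPHERE_SR / omega        # zero-overlap lower bound (~6.5 frames)

# In practice each frame only contributes part of its area once overlap
# is accounted for; 55% effective coverage is an assumed figure.
n_practical = math.ceil(n_ideal / 0.55)

print(f"{n_ideal:.1f} frames minimum, ~{n_practical} with overlap")
```

The zero-overlap minimum comes out around six and a half frames, and doubling that for stitching overlap lands right at the dozen shots mentioned above.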
If you haven’t heard, there’s a very common problem with capturing multiple images for stitching. It’s called “parallax” and it’s what makes nearby things appear to move faster than far-away ones when you look out the window of a moving car. This affects your images when you rotate the camera to capture an adjacent area. If your camera and lens are not pivoting on the lens’s no-parallax point (the entrance pupil, the point at which light appears to enter the lens) then close things will move faster than distant ones as you turn, and the final stitched image will have odd discontinuities at image boundaries. A standard tripod pivots the camera around the focal plane (the film or sensor), which is not the same point.
You can spend hundreds of dollars on a nice, heavy “panoramic mount” that will make this problem go away… or you can cheat. The trade-off, of course, is time and accuracy. The trick to reducing parallax errors is to push them to places where they won’t be noticeable after blending. The sky is usually the best choice, but anything without long, straight lines will usually work.
Before starting, set everything on the camera to “manual” so nothing changes between shots. The result is supposed to look like a single exposure when all is said and done. Take some quick shots in all directions and adjust the exposure so that nothing important is blown out or lost in shadow.
Start by capturing the entire scene. A level shot every 45° (vertical mount) or 90° (horizontal mount) plus a few of the sky and the ground will ensure that you have everything. This is important because it’s easy to miss sections in the next step but having this will allow you to fill in any gaps.
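The pattern above can be written out as an explicit shot list. Here’s a minimal sketch for a vertical (portrait) mount; the sky/ground pitch angles and ring spacing are my own illustrative choices, not a prescription:

```python
def shot_list(yaw_step_deg=45):
    """One possible capture pattern: a level ring plus sky and ground cover.
    Returns (yaw, pitch) pairs in degrees; pitch 0 = level, +90 = straight up."""
    shots = [(yaw, 0) for yaw in range(0, 360, yaw_step_deg)]  # level ring
    shots += [(yaw, 60) for yaw in range(0, 360, 120)]         # sky ring
    shots += [(yaw, -60) for yaw in range(0, 360, 120)]        # ground ring
    shots.append((0, 90))                                      # zenith
    return shots

plan = shot_list()
print(len(plan), "frames:", plan)
```

Even a rough list like this, taped to the tripod, makes it much harder to leave a hole in the sphere that you only discover at the stitching stage.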
Once the images are captured, they have to be warped and stitched. You can pay an obscene amount of money for PTgui or you can use the free Hugin software. There’s no question that PTgui is more robust and has a nicer user interface but there’s no difference in the final output quality. In fact, PTgui uses many of the same (free) back-end programs such as “nona” and “enblend” when creating the final image.
The most time-consuming part of the stitching process is defining the control points. Both programs mentioned above have built-in automated tools to create control points, but don’t use them! You want to do this by hand, adding control points only along or on either side of the intended seam. Placing control points elsewhere in the image is detrimental: the program will sacrifice some accuracy in the important points to try to satisfy these unimportant ones. You want the boundary regions to align after warping because that is where the seams will go, and the closer the areas match between warped images, the less noticeable the seam. What happens elsewhere, where there are no seams, isn’t important since we’re distorting the final image so much that any error won’t be noticeable.
To get the “tiny planet” effect, use a “stereographic” or “stereographic down” output with about 300° on both the horizontal and vertical. When you preview the output, you’ll likely find that it looks “wrong” but it’s just a matter of setting the center to where the tripod can be seen. Then you can push, pull, rotate, etc. the preview until you get the general result you want.
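Under the hood, the stereographic output is just a projection formula. Here’s a minimal sketch of the inverse mapping (output pixel to panorama direction), assuming the nadir, where the tripod sits, is at the center of the output; the function name and the scale parameter `f` are my own, not anything from Hugin or PTgui:

```python
import math

def planet_pixel_to_direction(x, y, f=1.0):
    """Inverse stereographic mapping for a 'little planet':
    (x, y) is a point on the output plane with the nadir at the origin,
    f is an arbitrary scale factor (like a focal length).
    Returns (yaw_deg, pitch_deg) into the source panorama."""
    r = math.hypot(x, y)
    theta = 2.0 * math.atan2(r, 2.0 * f)   # angle away from straight down
    pitch = math.degrees(theta) - 90.0     # -90 = nadir, 0 = horizon
    yaw = math.degrees(math.atan2(y, x))
    return yaw, pitch

# The center of the planet is the ground directly under the tripod...
print(planet_pixel_to_direction(0.0, 0.0))   # pitch -90
# ...and the horizon forms a circle of radius 2f around it.
print(planet_pixel_to_direction(2.0, 0.0))   # pitch 0
```

A 300° field of view then means theta runs out to 150°, so the output extends to a radius of 2f·tan(75°), which is why the sky gets stretched so dramatically around the rim.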
Now render the final result. If you’re supremely lucky, the blending program will automatically place the seams in the correct locations and you’ll be done.
However, if you’re not that person, it’s not going to be perfect and you’re going to have to adjust it by hand. To do this, render the output again but this time have it output all the individual images separately.
After loading the blended image into Gimp, Photoshop, or whatever, find the blending mistakes and load the appropriate source image into a higher layer. You can then erase (make transparent) everything in this new layer from the boundary areas outwards, effectively pushing the seam out to those areas. Rinse & repeat until all the seams have been fixed.
If the ground has lines, you’re going to find that the seams are noticeable. It’s going to take some work with Gimp or Photoshop to transform/warp various parts of the images to create something that doesn’t have noticeable errors. In the end, though… It’s all worth it!