Planet Sydney

When a site with “little planet” images was first shown to me, I knew it was something I had to try. So during my recent visit to Sydney, I took the opportunity to capture the full sphere around me. I bought a 10.5mm (DX) fish-eye lens a couple of years ago and this was the perfect application. The lens covers 180° corner-to-corner, or about 100° in one dimension and 75° in the other, so you can capture the entire scene in as few as 12 images, including a little overlap for stitching.
Opera House fisheye
You can, of course, use any type of lens but since a full sphere has an awful lot of angular area to cover, you really want to capture as much per image as possible. And since you’re going to be warping the images in weird & wonderful ways, there’s no advantage in starting with a rectilinear (i.e. “normal”) lens.

If you haven’t heard, there’s a very common problem with capturing multiple images for stitching. It’s called “parallax” and it’s what makes close-by things appear to move faster than far-away ones when you look out the window of a moving car. This affects your images when you rotate the camera to capture an adjacent area. If your camera and lens are not pivoting on the optical center of the lens (the point at which light appears to enter it), then close things will move faster than distant ones as you turn, and the final stitched image will have odd discontinuities at image boundaries. A standard tripod pivots the camera around the focal plane (the film or CCD sensor), which is not the same point.

You can spend hundreds of dollars on a nice, heavy “panoramic mount” that will make this problem go away… or you can cheat. The trade-off, of course, is time and accuracy. The trick to reducing parallax errors is to push them to places where they won’t be noticeable after blending. The sky is usually the best choice, but anything without long, straight lines will usually work.

Before starting, set everything on the camera to “manual” so nothing changes between shots. This is supposed to look like a single exposure when all is said and done. Take some quick shots in all directions and adjust the exposure so that you’ll capture everything.

Start by capturing the entire scene. A level shot every 45° (vertical mount) or 90° (horizontal mount) plus a few of the sky and the ground will ensure that you have everything: that’s eight shots around with a vertical mount or four with a horizontal one, roughly the dozen images mentioned above. This is important because it’s easy to miss sections in the next step, but having this set will allow you to fill in any gaps.

Originals
Parallax error is generally only noticeable when there are long, straight lines, because they tend to become “broken” when stitched. Since there is never any parallax error within a single image, you want to capture anything with continuous lines in a single shot, along with some soft boundary area at the edges and at least some of what lies on the other side of that boundary for joining. Later you can force the stitching seams into this boundary region, where they will not be (overly) noticeable. You can make life easier for yourself by not having wide or tall things close by. For example, don’t set up next to a guard railing: it will almost certainly not fit in a single frame and yet has the biggest parallax errors.

Once the images are captured, they have to be warped and stitched. You can pay an obscene amount of money for PTgui or you can use the free Hugin software. There’s no question that PTgui is more robust and has a nicer user interface but there’s no difference in the final output quality. In fact, PTgui uses many of the same (free) back-end programs such as “nona” and “enblend” when creating the final image.

The most time-consuming part of the stitching process is defining the control points. Both programs mentioned above have built-in automated tools to create control points, but don’t use them! You want to do this by hand, adding control points only along or on either side of the intended seam. Placing control points elsewhere in the image is detrimental, since the program will sacrifice some accuracy in the important points to try to satisfy these unimportant ones. You want the boundary regions to align after warping because that is where the seams will go, and the closer the areas match between warped images, the less noticeable the seam. What happens elsewhere, where there are no seams, isn’t important, since we’re distorting the final image so much that any error won’t be noticeable.

To get the “tiny planet” effect, use a “stereographic” or “stereographic down” output with about 300° on both the horizontal and vertical. When you preview the output, you’ll likely find that it looks “wrong” but it’s just a matter of setting the center to where the tripod can be seen. Then you can push, pull, rotate, etc. the preview until you get the general result you want.
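
If you’d rather script this step, recent Hugin releases include a small command-line helper called pano_modify. A minimal sketch, with a hypothetical project name (the numeric code for stereographic may differ between versions, so verify with pano_modify --help):

    # set stereographic output at roughly 300°x300°
    # (projection 4 should be stereographic in the panotools numbering)
    pano_modify --projection=4 --fov=300x300 --output=planet.pto project.pto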

Now render the final result. If you’re supremely lucky, the blending program will automatically place the seams in the correct locations and you’ll be done.

However, if you’re not that person, it’s not going to be perfect and you’re going to have to adjust it by hand. To do this, render the output again, but this time have it write out all the individual warped images separately.
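
With Hugin’s tool chain, this means asking nona (the warper) for multiple-file TIFF output. A sketch, with hypothetical file names:

    # TIFF_m writes one remapped-but-unblended image per source photo
    # (warped0000.tif, warped0001.tif, ...) instead of a blended result
    nona -m TIFF_m -o warped planet.pto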

After loading the blended image into Gimp, Photoshop, or whatever, find the blending mistakes and load the appropriate warped image into a higher layer. You can then make this new layer transparent from the boundary areas outward, effectively pushing the seam out to those areas. Rinse and repeat until all seams have been fixed.

If the ground has lines, you’re going to find that the seams are noticeable. It’s going to take some work with Gimp or Photoshop to transform/warp various parts of the images to create something that doesn’t have noticeable errors. In the end, though… It’s all worth it!

Planet Sydney


About Face

I’ve been using the “K2” theme for WordPress from the beginning but decided I wanted to try something that uses a bit more of the available horizontal space in a typical browser. I’m quite impressed with the “Atahualpa” theme. It’s amazingly customizable even without knowing any HTML/CSS.

I’d be happy to hear what you think of the new layout compared to the old.

Old K2 Theme


The Biggest Picture

Ever since I found GigaPan and my friend Andre Gunther wrote his guest article on stitching large panoramas, I’ve wanted to do one. While in Australia, I found a good subject: the Sydney night skyline, complete with Opera House and Harbour Bridge. I spent two hours standing in one spot, working out the best exposure and taking 220 distinct images. Each image was an 8-second exposure at ISO 200 and overlaps the previous one by about 25%.

215 of these images form 3 rows of 50-some vertical photos at full zoom (200mm), plus another two rows capturing the tops of the buildings and the top of the bridge. There was no point snapping a photo of empty space — there is nothing to anchor it to when stitching.

Source Images

The remaining 5 images were taken horizontally with a field of view wide enough to capture the full vertical extent. These provide background imagery of the night sky and harbor that could not be photographed at full zoom and still be anchored to a landmark. Since neither the night sky nor an 8-second exposure of rippling water has any detail, nothing is lost. Note when using this technique: capture full-zoom images that just touch the foreground imagery so that the final stitch has plenty of “quiet space” around the main content for blending smoothly into the background.

And so ended the easy part.

All of these images were shot in RAW mode because my camera has a 12-bit analog-to-digital converter and every bit is important given the huge dynamic range of night photography.  I usually do these with HDR but if you’ve read my previous HDR adventure (and the follow-up) using only 11×3 images, you’ll understand why I was not about to do it with 220×3 images.

Using Photoshop’s RAW converter, I optimized the exposure as best I could to reduce blown-out areas to the bare minimum and then converted the images to 48-bit TIFFs (16 bits per channel) that the stitching software can read.

When it came to the stitching, I tried both PTgui (v7, not the latest v8) and Hugin.  Both behaved pretty much the same and both died horribly when trying to produce the final output.

Creating the control points took a number of hours. The automated placement did a good job, covering about 80% of the pairs, but some pairs had no points at all. I started the process with PTgui since it’s what I’ve used in the past; for many of the pairs with no control points, asking it to auto-generate points for just that pair usually produced good results. For the rest, I added a bunch by hand. Later, when using Hugin (which will read PTgui project files), I also added “vertical line” control points. These marvelous items tell the optimizer how to make perfectly straight panoramas without having to fiddle with it by hand in the preview pane.

Hugin can also handle shots taken with different lenses or the same lens at different focal lengths.  This was essential in making sure that the 5 “background” images were reasonably well aligned with the 215 “foreground” ones.  PTgui (v7) had no such capacity.

If I had known Hugin had these last two features (vertical-line control points and support for different lenses), I never would have tried PTgui; as it was, I didn’t switch to Hugin until PTgui proved useless at producing a final image.

The problem, quite simply, was size.  Once everything was assembled, I learned that the final image was to be over 3 GIGApixels in size!!!  Multiply that by 3 color channels and 16 bits (2 bytes) per channel and you get an image that is over 18GB in size.  Ouch!  This is so big that PTgui couldn’t allocate a big enough memory chunk to even warp the images at full resolution.

Hugin (or specifically, “nona”, the warping engine) has a nifty feature of keeping each warped output nearly the same size as the original and encoding its position within the larger image in the output file.  PTgui wanted every warped output to be the full 3Gpx in size.

Unfortunately, when it came to combining all the individual warped images, Enblend (the back-end “blender” program of Hugin) died for the very same reason as PTgui: it couldn’t manage an 18GB image buffer, even held on disk, when compiled for a 32-bit operating system.  I’m actually running Vista-64, but these programs were built in 32-bit mode and nobody had built 64-bit versions that I could find.

I should mention that PTgui v8 is available as a 64-bit binary that may work… if you want to shell out USD$216 or so.

Here’s another reason Hugin beats PTgui: it’s “free software”.  The source code is readily available, so I could have downloaded it and compiled it as I needed.  I chose a different path, though.  I transferred the warped images to my laptop and then to my Linux workstation at the office, which runs a 64-bit distribution of Linux and has Hugin/Enblend available.  A little investigation into the Enblend command line and a day later (including a lot of disk thrashing), I had a 6GB (compressed) TIFF file with the final image.
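
The invocation itself was nothing exotic; something along these lines (file names hypothetical):

    # blend all warped layers into one compressed TIFF; nona's output
    # carries position offsets, so no placement arguments are needed
    enblend --compression=LZW -o sydney.tif warped*.tif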

Using my laptop as a go-between again, I transferred the result back home for final exposure adjustment in Photoshop.  No luck.  Photoshop (CS2) won’t read a 6GB TIFF file.  In fact, TIFF doesn’t support files larger than 4GB — that darned 32-bit size thing again.

Back at the office…  This time I had Enblend stitch together one row of images at a time, producing 5 files of only 1.5GB each.  These load into Photoshop just fine, but I didn’t want to hand-blend the rows.

Then I asked Enblend to combine adjacent rows (1&2, 2&3, 3&4, and 4&5).  Since rows 1&3 (and likewise 2&4) do not overlap, there must be a stripe of pure row 2 in which both the 1&2 and 2&3 composites are identical, meaning blending them is just a matter of cutting along a straight line through that area.  The output size of these row-pair images is about 2.7GB each.  But…

Photoshop still barfed on them.  It seems that it cannot load TIFF files larger than even 2GB.  What now?  I’d already split the image down to row-pairs.  To make the files smaller, I’d have to start splitting it vertically as well.  I didn’t want to do that much hand-blending of the pieces, though, even if it would just be a matter of finding the overlapping regions.

It occurred to me that perhaps I could split these large, blended files into smaller tiles and then just butt them up against each other in Photoshop.  A Google search later, I found a program to split images into tiles.  It’s $20 but has a trial version.  I tried it.  It doesn’t support TIFF.  Ah well, it probably wouldn’t have handled huge images either.  Another search later, I found a TIFF splitter.  Same pay/trial deal, but it turns out it only splits whole images out of a multi-image file — completely useless to me.

Okay, back to my roots.  Way way back many centuries ago, not long after the Internet began, there existed a collection of image tools called PBM and eventually NetPBM.  This collection of command-line programs consists of binaries to convert pretty much every conceivable non-proprietary image format into a simple, uncompressed format and back, plus a large collection of programs that do various operations on images in this uncompressed format.

For example, if you wanted to scale a PNG up 50% and convert it into a JPEG, you could do something like:

    pngtopam myimage.png \
        | pamscale 1.5 \
        | ppmtojpeg -quality=85 \
        > mybigimage.jpg

The “ppm”, “pbm”, and “pam” names all mean different styles of the same basic format and almost all tools will read all formats.
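
These formats really are simple.  For the curious, here is a complete 2×2 red-and-white checkerboard in the ASCII PPM (P3) flavor:

    P3
    # width height, maximum channel value, then R G B per pixel
    2 2
    255
    255   0   0   255 255 255
    255 255 255   255   0   0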

In this case, the tool I was looking for was “pamdice”, which takes an input image and dices it into a number of rows and columns.  By applying this to each row-pair, I could create files of manageable (read: loadable by Photoshop) size and put the whole thing together.
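
A sketch of the round trip for one row-pair (file names and tile sizes are made up, and you should check the pamdice manual for the exact output naming, which I believe is outstem_row_col):

    # TIFF -> PNM, then dice into tiles small enough for Photoshop
    tifftopnm rowpair12.tif \
        | pamdice -outstem=rp12 -width=20000 -height=20000
    # convert one of the resulting tiles back to TIFF
    pnmtotiff rp12_0_0.ppm > rp12_0_0.tif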

Now that I had files that Photoshop could load, I created a canvas of the desired final size and pasted in the bottom row-pair.  First the left half and then the right half.  So far, so good.

Adding the second row-pair involved matching up the overlap.  This is easily done coarsely in a full view.  To align it precisely, zoom in to 100% (Ctrl-Alt-0) and set the layer blending mode to “difference”.  Then drag the new piece around until swaths of the screen turn black.  Draw a line through this black area and “clear” (make transparent) the lower part of the upper row.  Then change the blending back to “normal” and “merge down”.  Repeat this with all row-pairs.  Forget keeping independent layers and using layer masks — the image is simply too large.  There was enough waiting for swapping to disk as it was.

Once all the rows were done, it was time to fix the things that Enblend did wrong.  On a perfectly static scene, I might not have had to do anything.  However, I had a scene with boats going by and some of the choices Enblend made were not correct.

But no problem…  Where a wrong choice was made, I simply cut & pasted the raw warped image and positioned it using the same technique mentioned above.  Then I selected the area I wanted, feathered it, cleared the rest, and merged it down.  After every major step, I saved as PSB, since PSD won’t handle files this large.

Finally, it was time to combine the foreground and background images.  I loaded the latter, scaled it up 4× to match the resolution of the foreground, waited for Photoshop, cut & pasted the foreground, waited, waited, and waited some more while Photoshop wrote 30+GB to its swap file, performed coarse placement of the foreground, waited, switched to “difference” blending and did the fine placement, waited some more, etc.

For the final blending of the two, I used the lasso to draw a boundary about 2/3 of the way between the landscape and the edge of the image.  I then feathered this selection so the transition would run from 1/3 of the way out to the edge, and cleared the outside.  This turned out to be more difficult than I anticipated because, though I had used the same f-stop and shutter speed, the background images were brighter than the foreground ones.  This required some compensation and very careful blending, but the result is a nice, gentle transition that is invisible in the night sky and indistinguishable from depth-of-field or the natural long-exposure blurring of water.

Since I had deliberately underexposed all of the images to avoid blow-out in the highlights, as the last step I used some “curves” to brighten up the dark areas.  With the extra bits from the raw capture, though, there’s no notable noise or posterization.

A quick crop (well, okay…  no operations are quick on a file this size) to the horizontal boundaries of the high-resolution imagery and it’s all finished.


Final Version (click for full-size viewing on GigaPan)

Bonus:  I’ll email the final full-size image to the first person who can identify the Andrew Lloyd Webber musical quoted above.  🙂


Moving Picasa Albums To A New Computer

Picasa is a pretty nice little program for organizing your photos and can’t be beat if you compare the price:performance ratio. However, it’s not perfect, and one place it falls short is in moving your photos to a new location, either on the same computer or on a new machine.

The easiest way is to ask Picasa to do a full backup of your photos and then restore that backup in the new location. However, this is not feasible when the photos are mixed in with your general user data, or when you’ve already transferred everything and just want Picasa to reference it as it did before.

The problem is that Picasa stores its database hidden under “Application Data” for the current user (in accordance with Windows style guides) and keeps only “starred” status and edits in the “picasa.ini” file alongside the photos themselves. Thus, just moving your files won’t move this database. Instead, you need to do the following:

1) Go to C:\Documents and Settings\MyLoginName\Local Settings\Application Data\Google\Picasa2Albums and copy everything here to a scratch location on the new machine.

2) In the oddly named directory (an apparently random bunch of letters and numbers), edit all the .pal files and replace everything between <DBID>…</DBID> with the string “null” (a scripted version is sketched after this list).

3) If the pathname to your photos has changed, do a global search & replace to fix the pathnames in these files.

4) Close Picasa on the new machine.

5) Copy the modified .pal files to the oddly named directory at the same path on the new machine (note that the oddly named directory will be a different apparently random bunch of letters and numbers).

6) Start Picasa.

7) All your albums should now appear.
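
If you have a Unix-style shell available (Cygwin, for example), steps 2 and 3 can be scripted instead of done by hand in an editor. A sketch, with example paths you would replace with your own:

    # step 2: blank the database ID so Picasa re-links the albums
    sed -i 's|<DBID>[^<]*</DBID>|<DBID>null</DBID>|' *.pal
    # step 3: rewrite the old photo location to the new one
    # (escape backslashes if your paths contain them)
    sed -i 's|C:/Old/Photos|D:/New/Photos|g' *.pal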

If they don’t, or some don’t, then one or more of the photos referenced in the copied .pal file(s) was not found among the new photos on the new computer. Picasa rejects (and deletes) the entire album if even a single entry cannot be found. This can happen because the files exist at a different pathname, don’t exist at all, or simply have not yet been scanned by Picasa. In the last case, it may just be a matter of marking all relevant folders as “scan always” and then restarting at step #4.

Since I work at Google, I sent an email to the Picasa development team and we talked about some ways of fixing this problem. Hopefully it’ll get better soon.


More Punishment

Merry Christmas!!!

Google Earth now has entries for GigaPan. This is an amazing way to experience some places within Google Earth, but you’ll need v4.2 or later to see it.

In short, by placing a panorama at the correct coordinates and specifying the field of view, elevation, tilt, etc., it becomes possible to fly into the image and look around it in detail. I decided to upload my night (HDR) panorama of Zurich but it wouldn’t let me — it wasn’t big enough! The image has to be at least 50 megapixels to be accepted. My original met this requirement but, as you may recall, I’d had to cut the image in half both horizontally and vertically in order to load it into the HDR processing program. I could, of course, simply scale the image up, but that would be cheating. So I went back to the originals to try some new techniques.

There are two paths to follow… (1) Merge the images into an HDR with Photoshop and then do the tone-mapping with EasyHDR. Since I’d be loading just one 32-bit TIFF image, hopefully it would stay within EasyHDR’s memory limitations. (2) Do the HDR processing on each stack of images first and stitch the results together into a panorama. This requires that the same transformation be applied to every stack in exactly the same manner, or else there will be seams in the final image.

1) Merge into HDR with Photoshop

By restoring all the saved, aligned images I had made during my previous attempt, I had a good starting point. I recreated the three panoramas using PTgui and then loaded them all into Photoshop using “File → Automate → Merge to HDR”. The tone-mapping in Photoshop CS2 is poor compared to the alternatives, so at this point I saved the result as a 32-bit TIFF.

Unfortunately, in the end I was unable to load even a half-resolution image into EasyHDR. Windows programs that don’t do some sort of tiling of data (like Photoshop does) are generally limited to 2GB of memory. On to the next method…

2) Generate Multiple HDR Images and Stitch Them

The latest version of EasyHDR has some nice new features over what I used just 8 months ago. The trick here seemed to be to avoid anything dependent on local image content. To this end, I turned off the local mapping “mask” (which I don’t like anyway because it produces halos) and left the general tone-mapping parameters at 1.0. I also never adjusted the black/white clip points, leaving them at the far ends of the spectrum. This would hopefully result in identical mapping for all image stacks, and by saving in 16-bit mode there would be sufficient detail for me to adjust the total range in post-processing.

Before stitching, apply any filters that operate on a single image stack — noise reduction, for example. It’s also likely that a lot of third-party software will not be able to handle gigapixel-size images, so you’ll have to run that processing on each part before stitching.

Other Things

Along with these changes, I switched the panorama projection to “cylindrical” instead of “rectilinear”, since Google Earth will do all the perspective alterations necessary. If you’re not familiar with the terms, the latter is the standard image that non-fisheye lenses give you. It’s what the eye would see looking through a frame held in front of you. A cylindrical mapping, on the other hand, is what you would see if you looked through a vertical slit held at arm’s length and then rotated your whole body, combining all that is seen into a single image.

Here’s the final image… Click on it to browse it at full detail!  It should appear in Google Earth sometime in the future.  Look for it at 47.37593N, 8.54651E.

Zurich Night Panorama
