The Biggest Picture

Ever since I found GigaPan and my friend Andre Gunther wrote his guest article on stitching large panoramas, I’ve wanted to do one.  While in Australia, I found a good subject: the Sydney night skyline, complete with Opera House and Harbor Bridge.  I spent two hours standing in one spot, working out the best exposure and taking 220 distinct images.  Each image is an 8-second exposure at ISO 200 and overlaps its neighbor by about 25%.

215 of these images are 3 rows of 50-some vertical photos at full zoom (200mm) plus another two rows capturing the tops of buildings and the top of the bridge.  There was no point snapping a photo of empty space — there is nothing to anchor it to when stitching.

Source Images

The remaining 5 images were taken horizontally, zoomed out far enough to capture the full vertical extent of the scene.  These provide background imagery of the night sky and harbor that could not be photographed at full zoom and still be anchored to a landmark.  Since neither the night sky nor an 8-second exposure of rippling water has any detail, nothing is lost.  Note when using this technique: capture full-zoom images that just touch the foreground imagery so that the final stitch has plenty of “quiet space” around the main content for blending smoothly into the background.

And so ended the easy part.

All of these images were shot in RAW mode because my camera has a 12-bit analog-to-digital converter and every bit is important given the huge dynamic range of night photography.  I usually do these with HDR but if you’ve read my previous HDR adventure (and the follow-up) using only 11×3 images, you’ll understand why I was not about to do it with 220×3 images.

Using Photoshop’s RAW converter, I optimized the exposure as best I could to keep blown-out areas to a bare minimum and then converted everything to 48-bit TIFFs (16 bits per channel) that the stitching software can read.
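(For anyone who would rather script that step, something like dcraw should produce equivalent 16-bit TIFFs.  A minimal sketch, assuming dcraw is installed and the RAW files are .NEF; substitute your camera’s extension:

    # Convert every RAW file to a 16-bit TIFF using the camera's
    # white balance; dcraw writes a .tiff next to each input file.
    for f in *.NEF; do
        dcraw -T -6 -w "$f"
    done

I did it in Photoshop so I could eyeball the exposure while converting.)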

When it came to the stitching, I tried both PTgui (v7, not the latest v8) and Hugin.  Both behaved pretty much the same and both died horribly when trying to produce the final output.

Creating the control points took a number of hours.  The automated placement did a good job, covering about 80% of the pairs, but some pairs ended up with no points at all.  I started the process with PTgui since it’s what I’ve used in the past; for many of the pairs with no control points, asking it to auto-generate points for just that pair usually produced good results.  For the rest, I added a bunch by hand.  Later, when using Hugin (which will read PTgui project files), I also added “vertical line” control points.  These marvelous items tell the optimizer how to make a perfectly straight panorama without having to fiddle with it by hand in the preview pane.
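For reference, newer Hugin releases expose the same machinery as command-line tools.  I did all of this through the GUI, but a rough sketch of the automated part would look something like this (tool names and flags are from a current Hugin install, not the version I was using):

    # Auto-generate control points for overlapping pairs, then optimise
    # image positions and straighten the panorama.
    cpfind --multirow -o project.pto project.pto
    autooptimiser -a -l -s -o project_opt.pto project.pto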

Hugin can also handle shots taken with different lenses or the same lens at different focal lengths.  This was essential in making sure that the 5 “background” images were reasonably well aligned with the 215 “foreground” ones.  PTgui (v7) had no such capacity.

If I had known Hugin had these last two features (vertical-line control points and support for different lenses), I never would have tried PTgui; as it was, I didn’t switch to Hugin until PTgui proved unable to produce a final image.

The problem, quite simply, was size.  Once everything was assembled, I learned that the final image was to be over 3 GIGApixels in size!!!  Multiply that by 3 colors and 16 bits (2 bytes) per pixel and you get an image over 18GB in size.  Ouch!  This is so big that PTgui couldn’t allocate a large enough memory chunk to even warp the images at full resolution.

Hugin (or specifically “nona”, its warping engine) has a nifty feature: it keeps each warped output nearly the same size as the original and encodes its position within the larger panorama in the output file.  PTgui wanted every warped output to be the full 3Gpx in size.
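The remapping step itself boils down to a single nona invocation, roughly like this (the output prefix is illustrative; as far as I can tell, the cropped-output behaviour comes from the output settings stored in the project file rather than from a flag):

    # Warp each source image into panorama space, writing one small
    # positioned TIFF per input instead of one enormous canvas per image.
    nona -m TIFF_m -o warped_ project.pto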

Unfortunately, when it came to combining all the individual warped images, Enblend (the backend “blender” program of Hugin) died for the very same reason as PTgui: it couldn’t manage an 18GB image buffer, even held on disk, when compiled for a 32-bit operating system.  I’m actually running Vista-64, but these programs were built in 32-bit mode and nobody had built 64-bit versions that I could find.

I should mention that PTgui v8 is available as a 64-bit binary that may work…  If you want to shell out USD$216 or so.

Here’s another reason Hugin beats PTgui: it’s “free software”.  The source code is readily available, and I could easily have downloaded and compiled it myself.  I chose a different path, though.  I transferred the warped images to my laptop and then to my Linux workstation at the office, which runs a 64-bit distribution of Linux and has Hugin/Enblend available.  A little investigation into the Enblend command line and a day later (including a lot of disk thrashing), I had a 6GB (compressed) TIFF file with the final image.
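The Enblend run itself is nothing exotic; roughly (file names are illustrative):

    # Blend all of the warped tiles into one seamless panorama,
    # LZW-compressing the output so it stays (barely) under control.
    enblend --compression=LZW -o sydney_full.tif warped_*.tif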

Using my laptop as a go-between again, I transferred the result back home for final exposure adjustment in Photoshop.  No luck.  Photoshop (CS2) won’t read a 6GB TIFF file.  In fact, TIFF doesn’t support files larger than 4GB at all; that darned 32-bit size limit again.

Back at the office…  This time I had Enblend stitch together one row of images at a time, producing 5 files of only about 1.5GB each.  These load into Photoshop just fine, but I didn’t want to hand-blend the rows together.
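Those per-row runs were just a matter of feeding Enblend one row’s worth of warped images at a time; something like this, assuming the warped files have been grouped or renamed by row:

    # Blend each row of warped images into its own manageable TIFF.
    for row in 1 2 3 4 5; do
        enblend --compression=LZW -o row${row}.tif warped_row${row}_*.tif
    done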

Then I asked Enblend to combine adjacent rows (1&2, 2&3, 3&4, and 4&5).  Since rows 1 and 3 (and likewise 2 and 4) don’t overlap each other, there must be a stripe within the shared row where the 1&2 and 2&3 composites are identical, which means blending them is just a matter of cutting one off with a straight line through that area.  Each of these row-pair images is about 2.7GB.  But…

Photoshop still barfed on them.  It seems it cannot even load TIFF files larger than 2GB.  What now?  I’d already split the image down to row-pairs; to make the files smaller, I’d have to start splitting it vertically as well.  I didn’t want to do that much hand blending of the pieces, though, even if it would just be a matter of finding the overlapping regions.

It occurred to me that perhaps I could split these large, blended files into smaller tiles and then just butt them up against each other in Photoshop.  A Google search later, I found a program to split images into tiles.  It’s $20 but has a trial version.  I tried it; it doesn’t support TIFF.  Ah well, it probably wouldn’t have handled huge images either.  Another search later, I found a TIFF splitter.  Same pay/trial deal, but it turned out that it only splits whole images out of a multi-image file, which was completely useless to me.

Okay, back to my roots.  Way way back many centuries ago, not long after the Internet began, there existed a collection of image tools called PBM and eventually NetPBM.  This collection of command-line programs consists of binaries to convert pretty much every conceivable non-proprietary image format into a simple, uncompressed format and back, plus a large collection of programs that do various operations on images in this uncompressed format.

For example, if you wanted to scale a PNG up 50% and convert it into a JPEG, you could do something like:

    pngtopam myimage.png \
        | pamscale 1.5 \
        | ppmtojpeg -quality=85 \
        > mybigimage.jpg

The “ppm”, “pbm”, and “pam” names all mean different styles of the same basic format and almost all tools will read all formats.

In this case, the tool I was looking for was “pamdice”, which takes an input image and dices it into a number of rows and columns.  By applying it to each row-pair, I could create files of manageable (read: loadable by Photoshop) size and put the whole thing together.
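Applied to one of the row-pair files, it goes something like this (tile dimensions and file names are only an example; the tiles come out in the PNM family and can be pushed back through pnmtotiff if Photoshop balks at them):

    # Unpack the TIFF into NetPBM's format and dice it into tiles,
    # producing files named after their grid row and column.
    tifftopnm rows_1_2.tif \
        | pamdice -outstem=rows_1_2 -width=40000 -height=60000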

Now that I had files that Photoshop could load, I created a canvas of the desired final size and pasted in the bottom row-pair.  First the left half and then the right half.  So far, so good.

Adding the second row pair involved matching up the overlap.  This is easily done coarsely with a full view.  To align it precisely, zoom in to 100% (ctrl-alt-0) and set the blending option to “difference”.  Then drag the new piece around until swaths of the screen turn black.  Draw a line through this black area and “clear” (make transparent) the lower part of the upper row.  Then change the blending back to “normal” and “merge down”.  Repeat this with all row pairs.  Forget keeping independent layers and using layer masks — the image is simply too large.  There was enough waiting for swapping to disk as it was.

Once all the rows were done, it was time to fix the things that Enblend did wrong.  On a perfectly static scene, I might not have had to do anything.  However, I had a scene with boats going by and some of the choices Enblend made were not correct.

But no problem…  Where a wrong choice was made, I simply cut & pasted the raw warped image and positioned it using the same technique mentioned above.  Then I would select the area I wanted, feather it, clear the rest, and merge it down.  After every major step, I’d save as PSB, since PSD won’t handle files this large.

Finally, it was time to combine the foreground and background images.  I loaded the latter, scaled it up 4× to match the resolution of the foreground, waited for Photoshop, cut & pasted the foreground, waited, waited, and waited some more while Photoshop wrote 30+GB to its swap file, performed coarse placement of the foreground, waited, switched to “difference” blending and did the fine placement, waited some more, etc.
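In hindsight, that 4× upscale of the background could probably have been done outside Photoshop with the same NetPBM tools and saved one of the long waits; a sketch, with illustrative file names:

    # Scale the blended background panorama up 4x before it ever
    # reaches Photoshop.
    tifftopnm background.tif \
        | pamscale 4 \
        | pnmtotiff -lzw \
        > background_4x.tif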

For final blending of the two, I used the lasso to draw a boundary about 2/3 of the way between the landscape and the edge of the image.  I then feathered this selection to make the transition go from 1/3 of the way out to the edge and cleared the outside.  This turned out to be more difficult than I anticipated because, though I had used the same f-stop and shutter speed, the background images were brighter than the foreground ones.  That required some compensation and very careful blending, but the result is a nice, gentle transition that is invisible in the night sky and indistinguishable from depth-of-field or natural long-exposure blurring of the water.

Since I had deliberately underexposed all of the images to avoid blow-out in the highlights, as the last step I used some “curves” to brighten up the dark areas.  With the extra bits from the raw capture, though, there’s no notable noise or posterization.

A quick crop (well, okay…  no operations are quick on a file this size) to the horizontal boundaries of the high-resolution imagery and it’s all finished.


Final Version (click for full-size viewing on GigaPan)

Bonus:  I’ll email the final full-size image to the first person who can identify the Andrew Lloyd Webber musical quoted above.  :-)

2 comments to The Biggest Picture

  • I had similar trouble with Photoshop. I used tiffcp, a command-line tool, to apply better ZIP compression. Apparently Photoshop doesn’t really care about the actual size of the image (number of pixels multiplied by bit depth) and only cares about file size.
    Photoshop also supports its very-large-file format (PSB). I wouldn’t be surprised if you could find a converter to create a PSB from a TIFF. You should be able to load those in Photoshop without a hitch.

  • Paul Heckbert of GigaPan suggested this link: http://www.bridgeclimb.com/ I wish I’d known about it during my visit — maybe next time I’m there.
