Archive for July, 2006

How experts become experts

July 30, 2006

Ze Frank says,

“People started telling me that if I wanted to do something, first I had to learn about how complicated it was…”

In his smart, sharp-witted way, he goes on to explain that a lot of people will tell you that you must understand something in its entirety before you can even start to use it or think about it, when in fact picking something up and doing something with it quickly starts to fill in all that background reference material that ultimately lets future experts become experts.

Possible Bottom Line: Don’t let people telling you how complicated something is dissuade you from breaking through that complexity and understanding new things.

PS – This mirrors a lot of what Dave Winer was trying to get across in his spirited discussion about exposing RSS to users during the recent Bloggercon session “Standards For Users”, headed by Niall Kennedy. The 2006 Bloggercon session MP3s can be found here.

Improving the space…

July 27, 2006

Kent, Earl and OmegaMom’s posts regarding ProBlogger’s series “If I Had to Start My Blog Again” are wonderful. In fact, I read Kent’s last night and pondered it quite a bit today, sketching out some notes in the car during lunch. I was mildly shocked to read OmegaMom’s post on the subject tonight, because it mirrored many of my own thoughts. But anyway, here are some of the things I’d do differently – (sorry for any duplication, OmegaMom):

  1. I’d pick a blogging engine with categories. Aaargh. My varied interests just beg to be categorized.
  2. I’d try to limit my posting to my most enthusiastic topics. This is quite difficult given that my interests seem to wax and wane with the tide at times.
  3. I would try to better highlight new voices not yet heard.
  4. I’m too hung-up about not being repetitive – not with my own content, but with regard to other bloggers. I’m sometimes hesitant to post about what someone else just posted about. I sometimes feel as if I’m riding coat-tails when I do that.
  5. I’d re-read Strunk & White and stop being so wishy-washy in my writing. This might possibly, could-a-would-a-should-a, just maybe (but correct me if I’m wrong) be a good idea. 😉
  6. I’d try to exercise a little more of my creativity in my content. I’ve tried to do this by writing some more informative/instructional posts about photography, linux and other topics lately. This is conscious. I want not only to comment on other people’s content but provide some of my own so to speak.
  7. Something else I’ve noticed in Kent’s, OmegaMom’s and Earl’s posts about this topic is that there is a theme of building successful readership. This is not a bad thing. One of my other hang-ups has been my reluctance to overtly try to build readership. Growing readership is a good thing, but not necessarily the primary measure of success for me (and I’m not implying that it’s the primary measure for Kent, Earl or OmegaMom). I’ve said before that I love the fact that anybody at all reads what I write, but I don’t want it to shape what I write about, or how I write it. That said, I’d like to do more to encourage readership – I’ve been all too happy to wallow in mediocrity.

As a commenter on OmegaMom’s post so nicely put it:

“But there’s no reason you can’t do those things NOW if you really want to!”

So that being said… watch this space. Hopefully you’ll see some improvement.

Diggin’ without monetary motivation

July 27, 2006

As usual, Kent Newsome has some thoughtful analysis of the Kevin Rose/Jason Calacanis conflict. In fact, he’s writing from some very insightful personal experience. However, I have to think it’s quite possible that Kevin’s wording is a little misleading (although of course he’s free to correct me if I’m wrong):

Kevin states in his post:

“Ya see users like Digg, Del.icio.us, Reddit and Flickr because they are contributing to true, free, democratic social platforms devoid of monetary motivations.”

If he means that the contributing users are devoid of monetary motivations, then he is correct. But I think that more often than not, people will construe that he’s claiming Flickr, Digg, etc. to be devoid of monetary motivations – which of course is preposterous.

Now, are the contributing users actually devoid of monetary motivation? Well, there’s currently no monetary reward on Digg, so perhaps they’re not devoid of the motivation, but it’s effectively a motivation that’s all dressed up with no place to go. I think the point is that they are not contributing great links to make money. If they are, they should be doing it for Netscape. That might leave Digg with contributors who are simply doing it for the love of it. This is not a bad thing.

Part of Dave Winer’s post just begs comment. He writes:

“No doubt Kevin is going to make something like $20 or $30 million when he sells Digg, which seems a pretty likely outcome. What will the users get?”

What should the users get? Where did it state that contributors get anything? If people start worrying about who’s going to benefit from their content or effort, they should start worrying more about the policies of the places they publish it. If you are worried that Flickr/Delicious/YouTube are going to make profits on the back of your work without compensating you, check their policies. If you’re not satisfied that they won’t (and you’re likely correct), then move on. Find and support services that pledge not to do that. OurMedia comes to mind.

Now I’m all for the ‘user’ – I’m a card-carrying member – but I realize quite simply that I’m publishing these words courtesy of Google (who owns Blogger). And if I really did care about who’s profiting from my content (now that’s a joke) I’d check Blogger’s policies and go somewhere else if I wasn’t happy with them (or host it myself).

While it would be nice if Digg paid all its contributors if and when it gets bought, I think that’s pie-in-the-sky unreasonable. Do you expect Google to pay you for all the search data you’re giving them day after day, which they use to build profitable services and products? These people have every right to make money with a novel idea. Cripes, I’m not even American and I support that! 🙂

Maybe once users start realizing how important their content is, there will be a shift to organizations which protect that content. For now, I’m happily giving away my content. If a day comes when that happiness turns to anger, I will have no one to blame but myself.

Linux and The Mainstream

July 26, 2006

Toby Richards postulates on Newsforge about “Why Linux isn’t mainstream”. And in the huge whack of resulting comments, the discussion rages on as to what Linux has to do to make it mainstream.

I’m not entirely sure it will ever be. And I’m not entirely sure that it needs to be, at least in the way it’s typically discussed.

I’ve been running Linux (Ubuntu Dapper) for the past month or so on my home system and I haven’t booted into XP once in that time. Has it been a challenge? For sure, yes it has. But for me that is not a problem. In fact, I enjoy it. Is Windows ever a challenge? Maybe not for me, but ask my father, father-in-law, co-workers, and various friends why they’ve needed my computing help in the past – it wasn’t because they were running Linux.

Toby is right in that you likely won’t get Linux into organizations without full MS Exchange support. But is that any different from other application segments? It is going to take products (and projects) that do things better than Windows, without any penalty for switching. Is that impossible? Don’t tell the Firefox folks that. Sure it’s tough, but with the shift to web-based applications and the ever-improving nature of open-source software, maybe that rocky road is being paved smooth piece by piece.

Let me be clear, running Linux (Ubuntu Dapper anyway) has been more of a challenge than running XP. If I had bought a machine pre-installed with Linux maybe I wouldn’t be saying that. (I’ve never actually installed any version of Windows). But in any case, I’ve had to do a fair amount of learning in the past month or so. To me this is never a bad thing, but for the typical mainstream PC user it might be distasteful. (I’m always awed by Joe Public’s utter disdain for learning anything new – and not just in terms of computing).

So why do I like running Linux? Here are some of the reasons both practical and principled:

– The whole system is built and supported by people who are overwhelmingly doing things because they love doing them, and want to share them with others. Don’t dismiss this. It’s happening in lots of places on the web. Not only in open-source.

– Right now there are some 18000 software packages listed in my Synaptic Package Manager. I can choose to install any one of them right here, right now, for no cost.

– The support mechanism falls on the developers and the users. And for the vast majority of cases, it works and works well. There are a great number of sources for good, flame-free support. I’m blown away by how quickly passionate users will support other users. Of course this is not the exclusive domain of Linux, but I think being a smaller group has some advantages in this respect. There is a definite sense of community.

– XGL/Compiz is just plain cool.

– System stability. While my XP system was never too bad, it was significantly more sensitive to a multitude of applications running concurrently. I’ve not had any kind of system crash running Linux yet.

– Installing software very rarely requires a reboot (kernel patches being one of the few exceptions). I just install software from the repositories and keep on trucking.

– The command line is a powerful thing. It’s nice to be reminded that it is not always better to point and click. In fact many times it’s not.

– Linux people generally don’t whine and complain about their computers. We are all here by choice, so complaining about it is pointless. (It’s pretty rare to hear about someone forced to use Linux)

– Choice. Now some will regard this as a major detriment to Linux, but I love it. I get to choose things like my desktop environment, word processor, text editor, spreadsheet, mail client, web browser, podcasting client, graphics program, etc. Sure, for a new user it’s nice to provide good, safe defaults, but for me, I love the ability to choose and experiment.

– Very few worries about someone end-of-lifing my favourite piece of software.

– I don’t need to run three anti-spyware apps and an antivirus program to scan for malicious things on my system.

– Did I mention it’s free?

Now of course the question remains ‘Will it ever be mainstream?’. I think it might be more likely that open-source cross-platform software projects (running on Windows) will continue to claim more and more of the mainstream market. And after a time, it won’t be such a big deal to switch XP out for Linux. There may come a time when it won’t make a difference because many of the apps they use on Windows will have come out of a Linux-based development system anyway. But will that alone be enough reason to switch? Probably not. Maybe there will also need to be a compelling Linux-only ‘killer’ app that will pull the market across at that point. There are plenty of smart people in the open-source world. Don’t think they’re not working on it.

This Week In Tech – no rss/xml icons??

July 24, 2006

If you’re a regular listener of This Week In Tech, you’ll probably know that they’ve recently done a redesign of their site. It looks much much better, but there is one problem.

Now maybe it’s too early on a Monday morning and I’ve just got to give my head a shake, but it seems to me that subscribing to the Twit series of shows on this site is a pain for someone like me who uses a plain-jane podcasting client like iPodder. I’ve recently re-installed it on my linux box (btw interestingly it’s still called ‘iPodder’ in linux and NOT ‘Juice’) and had to resubscribe to a few shows like Floss Weekly.

If you visit the site, you will NOT see the ubiquitous ‘xml’ or ‘rss’ chicklet for the show’s feed (or any of the show’s feeds). I usually right-click this to copy the address to the feed and paste it into my aggregator.

No. Instead they have a ‘Subscribe…’ drop-down box which lists all the subscription mechanisms (iTunes, Odeo, etc.) and at the bottom you will see ‘rss’. That’s great. But I can’t right-click that entry in the drop-down list to copy the address (at least I couldn’t), and when I select ‘rss’ I get the XML document tree representation that everyone calls ‘scary’.

The only way I can copy the feed address is to click ‘rss’ from the dropdown list, ignore the document tree page and copy the feed link in the address bar of my browser when the document tree is showing.

Is this a good solution??? C’mon guys. We shouldn’t have to do this to subscribe. NOT everyone uses iTunes. Give me some direct link to the feed address somewhere on your page that I can copy and paste. You should know better. For shame, Leo.

Linux Digital Photography Workflow Redux – Is Low Tech better?

July 22, 2006

Almost a month ago, I posted about how I was attempting to deal with my digital photo workflow under Linux. Since then, a couple of things have happened which have made me re-think my workflow… and probably in a way you might find surprising.

Here are the two things that have happened:

A) I’ve become enamoured with the linux command line and the linux philosophy. While no guru, I am becoming reacquainted with the simplicity and power of the linux file system, the command line and the Unix philosophy of ‘writing small tools that do one thing and do it well’ and utilizing those tools together to create a powerful solution.

B) Adobe recently released the Beta version of its Lightroom program for Windows (previously it was only available for the Mac). I downloaded it and gave it a quick test spin.

In case you didn’t already guess, B) is at odds with A). In fact, after running the Lightroom Beta, I immediately came to the realization that it was NOT what I needed.

In fact, I soon came to the realization that most photo management applications I’ve tried – and I’ve tried A LOT – are more frustrating than useful to me. I’m not so sure an all-in-one solution is what I’m after at all. In the past I’ve tried: Paintshop Pro Album, ACDSee, Photoshop Album, Picasa, and now F-Spot on linux among several others.

Here are some of the main reasons they frustrate me:

1. References to my photos and metadata (including any tagging or rating systems etc.) are kept in a hard-to-access database. I realize that this is probably the fastest/most stable/best way to do it, but I don’t like it. I can’t see the data, move the data or copy the data without really getting my hands dirty.

2. I find that some of these programs have trouble when I move images around on my hard drive. I’ve sometimes got multiple copies of the same image in the database and it’s hard for me to distinguish one copy from another. I don’t feel safe moving image files around or reorganizing my folder structure without wiping things out or screwing things up.

3. Many times these apps rely on locking you into their way of doing things. They work much better (in terms of built-in versioning schemes etc) if you do all your editing within them. This is not what I’m looking for. I want to use the best tool for the job without worrying about how it affects another program’s functionality.

4. What happens to all that metadata (ratings, tagging, etc.) that I spent hours and hours entering when I find a super-duper new application from a completely different company 5 years from now? Maybe F-Spot will do an honest job of exporting this info for others, but could you say the same about the proprietary solutions?

So what am I thinking about in terms of a digital photography workflow?

What about a relatively low-tech DIY photomanagement approach?

“Huh… Whuh?” you might ask. Read on and I’ll explain what I’m thinking about. It’s by no means a complete solution (hence I said ‘approach’) but it may provide some food for thought. And if at the end of it you say ‘screw off… you’re nuts! – I’m using Lightroom/Aperture/iPhoto’, then go right ahead…

And also bear in mind that this was based on *my* workflow needs. Yours of course might be totally different.

The first thing I asked myself was ‘what do I really do with the photos on my CF card?’. In its most basic form, here’s what I do:

1. Bring them into the computer.
2. Back them up to other media (usually DVD).
3. Make adjustments to them (crop, resize, colour correct, etc..).
4. Tag and/or rate them to make future searches easier – I’ve been hesitant about this.
5. Publish them to Flickr, and/or
6. Publish them to my blog directly, and/or
7. Print them

And what more specifically are the problems I have with the systems I’ve used in the past?

1. Many apps take a data-silo/lock-in approach to my photos and metadata (as I outlined at the start).

2. I find that my system gets ‘photo-bloat’. I end up with 10 copies of the same image, all of different sizes or different cropping for different purposes and of course, none of those 10 is the one I want.

3. I have a hell of a time distinguishing one version of an image from another without examining each one in some kind of image browser.

I intend to try and let the OS do part of the work. I want to leverage the power of the ubiquitous ‘filename’. A large amount of command line work can be done (nevermind scripting) when the file naming system is made somewhat useful and more importantly, consistent.
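
As a small taste of what consistent, rootname-anchored filenames buy you, here’s a minimal sketch. The filenames are hypothetical stand-ins, created empty in a throwaway directory purely for demonstration:

```shell
# Work in a throwaway directory with a few hypothetical image files.
dir=$(mktemp -d)
cd "$dir"
touch img_2345.cr2 img_2345.jpg img_2345.bp.cc.rz.500w.jpg img_2346.jpg

# Plain globbing finds every version of one image...
ls img_2345*

# ...and counting those versions is just a pipe away.
ls img_2345* | wc -l
```

Nothing fancier than the shell itself is needed, which is rather the point.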

Some ground rules:

1. Make sure your camera is set to keep image numbering continuous. So if you empty and format your card or swap in a new card, it doesn’t reset the file name to IMG001.jpg again. That rootname (or even the number portion alone) is key to my proposed filenaming system. This name will anchor all the versions of a given image.

2. Subjective or verbose filenames are Satan’s work:

eg. UncleBills_birthday_Cousin_Huey_drunk_05.jpg

Don’t even go there. This type of thing does nothing to help searching 6 months from now. Avoid it at all costs.

3. Exif data is your friend.
Make sure any editing you do (and any editing software you use) keeps the Exif data intact. Try not to duplicate any of the Exif data in your filename unless it’s really necessary.

Folder structure:

On my linux system, I use F-Spot to import my images. I shoot mostly in RAW, so a typical image filename would be ‘img_2345.cr2’. F-Spot creates a nice directory structure where the /photos folder contains a /2006 folder which itself contains monthly folders. Those monthly folders contain date folders which are created based on the Exif information in the images. So if the img_2345.cr2 file was shot on July 21, 2006, the path to the file would be:

/photos/2006/7/21/img_2345.cr2

Many systems do a similar thing. The Canon software on my XP system would create something like:

\MyPhotos\2006_7_21\img_2345.cr2

In any case, make sure you set up a workable date-based folder structure. Unless you’re a professional photographer and have to separate things by project, a simple date-based folder structure is a good way to go.
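
If you ever need to build that structure by hand, it’s essentially a one-liner. This sketch hard-codes a date and uses a temporary directory in place of /photos, purely for illustration (a real importer like F-Spot reads the date from the Exif data instead):

```shell
# Stand-in for the /photos root; a real setup would use a fixed path.
base=$(mktemp -d)
year=2006; month=7; day=21

# mkdir -p creates the whole year/month/day chain in one go.
mkdir -p "$base/$year/$month/$day"

# Stand-in for copying the RAW file off the card.
touch "$base/$year/$month/$day/img_2345.cr2"
ls "$base/$year/$month/$day"
```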

I would NOT create separate subfolders for different photo tasks unless it makes sense. So I want my cr2 file, my jpeg master and any subsequent versions of that jpeg all in the same folder. The naming system I came up with makes better sense having all the versions of a file in one folder.

A Necessary Next Step (for RAW shooters):

Next, create a duplicate set of jpegs from your RAW files. These would be colour-corrected and tuned with a raw conversion program. They would NOT be scaled, resized, or otherwise modified in any way. You’re trying to create a master copy of jpegs here. If of course you’re shooting jpeg anyway, then this step is unnecessary (other than doing some general colour correction if required).

Now the fun part.

Here’s where you come up with a file naming system that will perform a kind of ‘version tracking’ with the filename itself. Here are a few simple principles I tried to follow when coming up with it:

– All naming to be lowercase (in windows this doesn’t matter, but linux is case sensitive – so it’s important)

– All modifications to the filename to be made using suffixes and not prefixes. I want the original image name (‘img_2345’) to remain intact in all versions. This way when you list these files they are all grouped together.

– Demarcation between suffixes to be done with a period. I think it looks a lot cleaner than using underscores, but the choice was arbitrary.

– An indicator of image size in the filename would be helpful but wouldn’t be a requirement. So I considered it optional.

– The order of suffixes should match the actual order of processing if possible. This just gives you some clues as to how the photo was processed just by looking at the filename.

So what would my system look like:

Based on what I do with my photos, here’s my file naming system. Yours of course might be different to suit your own needs:

rootname.purpose.modifications.sizeindicator.jpg

rootname – This is the original filename minus extension (eg. img_2345)

purpose – This is a short (2 or 3 character) suffix indicating the purpose of the image file. For me this would be either ‘bp’-blog post, ‘fl’-flickr post, or ‘pr’-for printing

modifications – I do 3 general things to my images: crop, resize, and correct (‘correct’ meaning colour correction, levels, enhancement, noise-reduction, etc.). So I will assign a short suffix indicating what type of basic correction was made to that specific image: ‘cr’ indicates cropping, ‘rz’ indicates resizing, ‘cc’ indicates colour-corrected.

sizeindicator – An optional suffix, but when it is used I will likely use a ‘700w’ OR ‘600h’ suffix to indicate 700px wide or 600px high respectively. I would describe the width or the height but likely not both unless absolutely required. This gives a quick indication of the size of the image without having to open it up in an image browser.

So looking at a directory listing of an image with several versions might yield (path omitted for clarity):

img_2345.bp.cc.rz.500w.jpg
img_2345.fl.jpg
img_2345.pr.cc.rz.1500w.jpg
img_2345.cr2
img_2345.jpg
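
Deriving child names like these from the master is easy with shell parameter expansion; a quick sketch using the same hypothetical image:

```shell
# Start from the jpeg master, strip the extension to recover the
# rootname, then append purpose/modification/size suffixes.
master="img_2345.jpg"
root="${master%.jpg}"                  # rootname: img_2345

blogcopy="$root.bp.cc.rz.500w.jpg"     # blog post: corrected, resized, 500px wide
flickrcopy="$root.fl.jpg"              # flickr post, straight from the master
printcopy="$root.pr.cc.rz.1500w.jpg"   # print: corrected, resized, 1500px wide

printf '%s\n' "$blogcopy" "$flickrcopy" "$printcopy"
```

A two-line shell function following this pattern could stamp out correctly named copies for you, which keeps the system consistent without any memorizing.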

A couple of key things to remember:

Keep It Simple Stupid: or you won’t remember the system or use it!

Document it. If you come up with a system, jot it down and stick it to your monitor. Don’t give yourself excuses to let it fall apart.

Some discussion points:

– A system like this allows for a quick search for all versions of an image. So simply searching for ‘img_2345*’ will yield all the versions of the image.

– You don’t have to create endless subfolders for each type or use of an image. Subfolders are good… to a point, after which I find they become a nuisance.

– You won’t always need a thumbnailing program to tell versions of an image apart.

– A simple file naming scheme like this doesn’t do anything about metadata. And seeing as that was one of my key gripes (way up at the start of this post), that is a big problem. I think metadata is useful. I’m not saying give up on F-Spot, Picasa, etc. – I intend to still use them for metadata and visual searches on images. However, they don’t give me a flexible versioning or naming system. I will still use them, but for more limited purposes.

– Maybe a system wide file tagging system is in order. Not just image files, but all files.

– Applying a system like this retroactively to existing images is of course near impossible. All I can say is that if you think it’s a good idea, then start doing it now.

– Overwriting of a parent file with a child file is always a risk (for instance say you resize a file and save it out without changing the filename, overwriting your jpeg master copy). I have two ideas about this:

1. Perhaps using file permissions might be an answer, at least to protect your master files. Maybe after importing the RAW images and creating the jpeg master files, you set all of those files to be ‘read-only’ in the file permissions. This would prevent overwriting of the master files later on.

2. Getting into the habit of creating a file with the ‘purpose’ suffix prior to doing any editing. So if you know that you’re going to publish a given file to Flickr and print it, you could create the img_2345.fl.jpg and img_2345.pr.jpg files PRIOR to doing any editing.
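
Both ideas amount to a couple of commands each. A sketch in a throwaway directory, with empty files standing in for the RAW original and jpeg master:

```shell
dir=$(mktemp -d)
cd "$dir"
touch img_2345.cr2 img_2345.jpg   # stand-ins for the RAW file and jpeg master

# Idea 1: strip write permission from the masters so no editor can
# silently overwrite them.
chmod a-w img_2345.cr2 img_2345.jpg

# Idea 2: create the purpose copies up front, before any editing.
# cp only needs read access to the source; the copies inherit the
# read-only permission bits, so give yourself write access back.
cp img_2345.jpg img_2345.fl.jpg
cp img_2345.jpg img_2345.pr.jpg
chmod u+w img_2345.fl.jpg img_2345.pr.jpg

ls -l
```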

So that’s it for now. I really didn’t think it would take so long to explain a system so simple, but I think the reasons behind wanting to do it are valid ones. I’ll probably continue to post about my workflow as I fine-tune it further.

Of course, if you disagree, or have something to add, correct, or improve, by all means do so in the comments below.

Mood News – I Love It!

July 21, 2006

Found via a signature on a wxPython mailing list post of all places, this site has got to be one of the most interesting applications of news aggregation I’ve ever seen. It’s marked as a proof of concept and it tries to rank the headlines from good news to bad news. Thankfully it starts with the good news stories and finishes with the worst ones.

Think of it as mainstream network news ranking in reverse 😉 I love it!

Photography Concept 4

July 19, 2006

In this fourth post on basic photography concepts I’m going to explain metering. As always, I’m not going to attempt an exhaustive discussion of the topic, but rather I’ll try to provide the basics in plain English, with hopefully just enough meat to build some understanding without reducing anybody to tears (of boredom or fear – take your pick).

If you’ve not been awaiting this post with bated breath (hah!), you can also check out the previous installments where we discussed the concepts of: exposure, focal length and lenses, and depth of field.

Back in that first post I described the concept of exposure. Exposure in photographic terms quite literally means how much light you let into the camera and for how long. I described that there are three ways to control the exposure: shutter speed, aperture and ISO speed. The first mechanism controls the duration of the exposure to light, the second controls the amount of light let into the camera and the third controls the light sensitivity of the camera sensor.

So why are we trying to control the exposure? In the most simplistic terms, we want the resulting photograph to accurately represent what we are seeing with our eyes (of course there are exceptions to this and quite wonderful ones at that!). If you underexpose, the photo will look dark. If you overexpose, the photo will be too bright and washed out.

So how do we know when we’ve got it right? The camera will tell us… hopefully. Modern cameras in fact make it quite difficult for us to get it completely wrong. But how does the camera know when it has it right (or at least mostly right)? Metering baby. Metering.

Measuring the amount of light in your scene (or more accurately the amount of light reflected off of it) is called metering. Now of course there are loads of different ways that metering can be accomplished. I’m not going to discuss all of them. Having you fall asleep at your keyboard is not my goal here. I’ll briefly describe an older system of metering that I’m familiar with and we’ll take a look at modern systems.

My first SLR camera was a 1977 Canon AT-1, given to me by my father about 5 years ago. That camera had TTL metering, ‘TTL’ meaning “through the lens”. Older cameras had the photocell (the thing sensing the amount of light) on the camera body, outside of the lens. On a TTL camera, the photocell sat within the camera and measured the intensity of light coming through the lens – hence the name.

The AT-1 had what was called ‘match-needle’ metering. The photocell controlled a needle that you could see in the viewfinder: as the light intensity increased, the needle rose, and as the light levels dropped, the needle fell. A second needle was controlled by the aperture and shutter speed adjustments on the camera. To get a decent exposure, you adjusted the aperture and shutter speed to bring that second needle in line with the needle representing the light meter reading – you ‘matched’ the needles.

This camera didn’t have ‘auto exposure’. The camera made no effort to pick exposure settings to match the amount of light reflected off the scene; that was left up to the photographer. In more modern systems with auto-exposure, the camera adjusts the shutter speed or aperture (or some combination of both) based on the incoming light, in order to get a good exposure.

In a typical photographic scene, there are light areas and dark areas. So in order to judge what a proper exposure might be, the most basic system would come up with an average of all the brightness levels in the scene and then set the exposure settings (aperture and/or shutter speed) so that this average level was rendered an ‘average’ tone on the resulting photograph. An ‘average tone’ being a mid-grey (think of worn, sunlit asphalt). The key concept is that it would average all the brightness levels in the scene. So if there was one area that was very bright relative to the rest of the scene, it would throw the average off – the average tone would be brighter. So it would close down the aperture and/or increase the shutter speed to render this average tone a middle-grey. Imagine you were taking a picture of a person against a backdrop of snow. The white background would dominate the frame, providing a very high average brightness value. So the camera would try to render this high average value as a middle-grey, resulting in your white snowy background looking grey. Not a good solution.
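
The snow example can be made concrete with a little arithmetic. Brightness values here are invented for illustration, on a 0–255 scale where middle grey sits at roughly 128:

```shell
# Toy average-metering calculation: a mostly-snow scene where four of
# five sample readings are bright snow and one is the darker subject.
avg=$(printf '%s\n' 230 240 235 225 90 | awk '{ s += $1 } END { printf "%d", s/NR }')
echo "scene average brightness: $avg"

# A naive meter picks exposure so this average lands near middle grey,
# i.e. it darkens the whole frame -- which is why the snow comes out grey.
echo "shift toward middle grey: $((128 - avg))"
```

The large negative shift is the camera underexposing your snow into greyness.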

So just averaging the brightness values of a scene is not a very good method – unless your scene has an average tonal value somewhere close to middle grey. There had to be a better way. And there was… actually, there were several.

Most modern SLR (and DSLR) cameras have a variety of metering ‘modes’. These modes take a much more complex look at the light coming into the camera from your scene. The most common metering modes are: centre-weighted, spot, and multi-segment.

It’s important to remember that these modes were created to better analyze a scene so that the resulting exposure is more accurate. But don’t forget that the resulting exposure still comes down to only 3 things: choosing a shutter speed, aperture and ISO speed. These fancy metering systems don’t necessarily get you better pictures, they only make it easier (and more likely) for you to get correct exposure settings.

Centre-weighted metering is a system where the tonal values of a scene are averaged together, but the central area of the frame gets more weighting in that calculation. So if your subject is in the centre of the frame, the brightness of that subject will have a more significant effect on the resulting exposure settings than if you just averaged the whole scene. This is still a viable method of metering, but it has been rendered somewhat redundant by multi-segment metering, which we’ll discuss in a moment.

Spot metering is a system by which the exposure is based on the tonal values read from a tight central area of the frame (say 5%) while any metering information from outside of that central area is ignored in the exposure calculation. This mode is good when you’ve got a dark subject against a bright background or a light subject against a dark background. Think of a bird against a bright sky, or an actor under a spotlight on a dark stage. This is a very useful mode. Not all cameras have a true spot-metering mode. My Canon Rebel XT for instance has what Canon calls a ‘partial’ metering mode. In this case, the camera takes about the central 9 or 10% of the frame and not the 4 or 5% present in true spot-metering systems.

Multi-segment metering goes by different names depending on what brand of camera you have. Canon calls it ‘Evaluative metering’, Nikon calls it ‘Matrix metering’, and Olympus has its ‘Digital ESP’. Each manufacturer’s technology differs of course, but the basis of each is the same. In a multi-segment system, the scene is divided up into many segments. Each segment is metered and the camera then tries to interpret this information into some logical exposure value. For instance, given a darker subject on a light background but where the subject is NOT centred within the frame, a multi-segment system could identify this and still expose based heavily on the darker subject, making the assumption that this is the primary subject of your photograph. (Don’t forget that a typical spot metering system would not work here unless the subject was in that central 4 or 5% of the frame.) These systems can make getting a good exposure a piece of cake, however as with most technologies, they can be fooled.
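
The real evaluative/matrix algorithms are proprietary and far more sophisticated, but the “divide, meter each segment, guess the subject” idea from the paragraph above can be sketched like this. Everything here — the grid size, the subject-detection heuristic, the weighting — is invented for illustration:

```python
# Toy multi-segment sketch (nothing like any maker's real
# Evaluative/Matrix logic): meter a grid of segments, guess that the
# segment most unlike the overall average is the subject, and bias
# the final reading toward it -- even if it is off-centre.

def multi_segment_meter(pixels, grid=4, subject_weight=3.0):
    rows, cols = len(pixels), len(pixels[0])
    seg_r, seg_c = rows // grid, cols // grid
    means = []
    for gr in range(grid):
        for gc in range(grid):
            seg = [pixels[r][c]
                   for r in range(gr * seg_r, (gr + 1) * seg_r)
                   for c in range(gc * seg_c, (gc + 1) * seg_c)]
            means.append(sum(seg) / len(seg))
    overall = sum(means) / len(means)
    # crude subject guess: the segment that stands out the most
    subject = max(means, key=lambda m: abs(m - overall))
    return (subject_weight * subject + overall) / (subject_weight + 1)

# Dark subject (40) in the top-LEFT corner of a bright scene (220) --
# a spot meter aimed at the centre would miss it entirely:
frame = [[220] * 16 for _ in range(16)]
for r in range(4):
    for c in range(4):
        frame[r][c] = 40
print(multi_segment_meter(frame))  # reading pulled toward the subject
```

Note how the reading lands well below the plain scene average of about 209, because the off-centre dark segment was identified and weighted up.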

With all this technical talk about whiz-bang metering systems, it’s important to keep in mind that all you’re trying to do is get the right shutter speed, aperture and ISO settings. A very capable photographer could get the same great exposures with only a match-needle metering system and a keen eye for tonal values. The high-tech systems just make it a little more foolproof for the rest of us.
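
Those three settings relate through some simple arithmetic: each “stop” doubles or halves the light, and the standard exposure value (EV) formula ties aperture and shutter speed together. A quick sketch:

```python
import math

# Standard exposure value formula: EV = log2(N^2 / t), where N is the
# f-number and t is the shutter time in seconds. Combinations with the
# same EV admit (roughly) the same amount of light. ISO then shifts
# which EV you need: doubling ISO is worth one stop.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

# Opening the aperture one stop (f/8 -> f/5.6) while halving the
# shutter time (1/125 s -> 1/250 s) gives an equivalent exposure:
print(exposure_value(8, 1 / 125))    # about EV 13
print(exposure_value(5.6, 1 / 250))  # nearly identical
```

(The two values differ by a few hundredths of a stop only because marked f-numbers like 5.6 are rounded; the “true” value is 8/√2 ≈ 5.657.)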

A few quickie tips and comments:

– Try to look at your scene in terms of tonal values. That is, visualize your scene in greyscale. If you can do this, you can get a much better handle on when you’re looking at a potentially tricky exposure.

– Experiment with the exposure compensation controls on your DSLR or digicam (not all have them). These let you override the in-camera metering. So if you think (or find) that the camera’s high-tech system is not getting things exactly right, you can make quick adjustments to get the exposure brighter or darker. Don’t be surprised if, with a little practice, your eyes and brain turn out to be more than a match for the best metering system.

– Make use of the histogram display on your digital camera. This can indicate quickly if you’ve got overexposed or underexposed portions in your photo. You won’t always spot it on the LCD view of your photo, but a histogram will illustrate it much more clearly.

– Shooting in RAW format can give you significantly more leeway in terms of exposure. With my Rebel XT for instance I can adjust the overall exposure by +/- two stops. This is like having access to my exposure compensation controls after I’ve taken the photo. RAW format files allow you to make many significant adjustments to a photo that you would normally only be able to make on the camera. While you might not want to shoot RAW format for all your photos, you should experiment with it for shots that you feel might be tricky exposure-wise.

– I’ve only discussed TTL in-camera metering systems. There is a photography camp out there that would chastise me for not mentioning standalone light meters. So I’ll do it here (I just have). These devices are a perfectly viable way of metering a scene – and in some ways are even better than relying on the in-camera systems. But this is a series of posts aimed at the layman, and I didn’t feel it was appropriate to explore that whole avenue. For those interested, see here.
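
The histogram tip above is easy to demonstrate in code: a histogram is just a count of pixels at each brightness level, and pixels piled up against either end suggest clipping. A toy sketch — the thresholds here are arbitrary choices for illustration:

```python
# Toy clipping check in the spirit of a camera histogram: count how
# many pixels sit at (or very near) the brightness extremes.

def clipping_report(pixels, low=5, high=250):
    """Return the percentage of pixels near pure black and pure white.
    Big numbers at either end suggest blocked shadows / blown highlights."""
    flat = [v for row in pixels for v in row]
    n = len(flat)
    shadows = sum(1 for v in flat if v <= low)
    highlights = sum(1 for v in flat if v >= high)
    return {"shadow_pct": 100.0 * shadows / n,
            "highlight_pct": 100.0 * highlights / n}

# One 100-pixel row: 10 blown pixels, 80 midtones, 10 blocked pixels.
frame = [[255] * 10 + [128] * 80 + [0] * 10]
print(clipping_report(frame))  # 10% clipped at each end
```

Looking at numbers like these is the same judgment you make at a glance when the histogram on the camera shows tall spikes jammed against the left or right edge.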

The Return Of Super…

July 18, 2006

The Man with the Bones has a great review of Superman Returns. This part just made me chuckle:

…In other instances, he weedily floats, bobs and slides around on his stomach as if he’s the asthmatic kid learning to swim on a tummy-board. Un-super in my opinion…

All in all a very well written and balanced review. I’m not a huge movie theatre guy (although I have recently enjoyed a few more courtesy of my daughter – think Wallace & Gromit, Chicken Little etc..), but Superman may be enough for me to lay down the obscene casheesh that it requires to see a movie these days. At least by the sounds of it.

Bursting my bubble….

July 17, 2006

Dave Winer’s criticism of Podshow during the recent RSS feed hijacking fiasco seems to have earned him a blank paragraph in a recent Steve Gillmor post. Now Dave writes about why he was critical of them and why he was right. He ends the post with:

If Steve Gillmor thinks I’m fucked up for calling them on it, Steve can think that, no one is perfect, not even Steve. ;->

Now why did Dave have to go and burst my bubble like that?!

πŸ˜‰