Wednesday, December 31, 2008

Syncing..., Part 3

In a couple of prior posts, I described some of my requirements for syncing my iPhone with my laptop computer. A couple of months ago, I solved the problem (and a whole bunch of other ones, too). I'm very pleased with the solution I chose, which I'll share with you here.

I bought a new MacBook Pro. Not one of the new solid-brick-of-aluminum ones, but one of the slightly older aluminum ones held together with screws. It has a beautiful 15-inch matte-finish monitor, and it's faster than any portable I've ever used. It runs Windows XP applications in my Fusion VM about 33% faster than the native XP laptop I replaced.

It was a big investment, but I love it. Every time I open up my MacBook, I feel good. Thirty seconds later, when it has booted completely from scratch, I love it even more.

VMware Fusion, by the way, is a brilliant penetrant to the barriers I had perceived about moving from Microsoft Windows to Mac OS X. With Fusion, you can make your whole Windows PC run on your Mac if you want to. As you gain comfort in OS X, you can migrate applications from Windows to the native Mac operating system. I digress...

So, now I use iCal, which gives me the ability to subscribe to iCal feeds like the one from TripIt. And ones like webcal://ical.mac.com/ical/, which automagically populate your calendar with US holidays, so you won't accidentally book a course on Labor Day. (Visit iCal World for more.)

My new system completely solves the travel problems I was talking about. Now, I do this before I travel:
  1. Book travel.
  2. Forward the confirmation emails to plans@tripit.com.
  3. Print my TripIt unified itinerary for my briefcase. (Or not.)
  4. Sync my iPhone with my laptop.
And that's it. No data double-entry. None.

One more requirement I had, though, was syncing my iCal calendar with Google Calendar. I found a solution to that one, too. I tried Google's Calaboration tool, but I really didn't like the way it forced me to deal with my separate calendars. The tool I chose is called Spanning Sync. I used their 15-day trial, liked it a lot, and bought a subscription. I love the way I can map from my list of Google calendars to my list of iCal calendars. However, I don't like the way it syncs my contacts, so I just turn that feature off.

I'm intrigued by the Spanning Sync business model as well. You can save $5 on Spanning Sync by clicking here. It works like this (I'm quoting a Spanning Sync web page here)...
After a 15-day free trial period, Spanning Sync usually costs $25/year, but you can save $5 by using my discount code if you decide to buy it:

39PKXV

Also, if you use my code I'll get a $5 referral fee from Spanning Sync. Once you're a subscriber you'll get a code of your own so you can make money every time one of your other friends subscribes to Spanning Sync. Pretty cool!
Anyway, I'm very pleased with the new system, and I'm happy to share the news.

Monday, December 29, 2008

Performance as a Service, Part 2

Over the holiday weekend, Dallas left a comment on my July 7 post that begins with this:
One of the biggest issues I run into is that most of my customers have no SLAs outside of availability.
It's an idea that resonates with a lot of people that I talk to.

I see the following progressive hierarchy when it comes to measuring performance...
  1. Don't measure response times at all.
  2. Measure response times. Don't alert at all.
  3. Measure response times. Alert against thresholds.
  4. Measure response times. Alert upon variances.
Most people don't measure response times at all (category 1), at least not until there's trouble. Most people don't measure response times even then, but some do. Not many people fit into what I've called category 2, because once you have a way to collect response time data, it's too tempting to do some kind of alerting with it.

Category 3 is a world in which people measure response times, and they compare those response times against some pre-specified list of tolerances for those response times. Here's where the big problem that Dallas is talking about hits you: Where does that list of tolerances come from? It takes work to make that list, and preceding that work is the motivation to make that list. Many companies just don't have that motivation.

I think it's the specter of the difficulty in getting to category 3 that prevents a lot of people from moving into category 2. I think that is Dallas's situation.

A few years ago, I would have listed category 3 at the top of my hierarchy, but at CMG'07, in a paper called "Death to Dashboards...," Peg McMahon and Justin Martin made me aware of another level: this notion of alerting based on variance.

The plan of creating a tolerance for every business task you execute on your system works fine for a few interesting tasks, but the idea doesn't scale to systems with hundreds or thousands of instrumented tasks. The task of negotiating, setting, and maintaining hundreds of different tolerances is just too labor-intensive.

Peg and Justin's paper described the notion that not bothering with individual tolerances works just as well—and with very low setup cost—because what you really ought to look out for are changes in response times. (It's an idea similar to what Robyn Sands described at Hotsos Symposium 2008.) You can look for variances without defining tolerances, but of course you cannot do it without measuring response times.
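To make the idea concrete, here is a minimal sketch of what variance-based alerting can look like. It's my illustration, not anything from Peg and Justin's paper, and the numbers are made up: the baseline comes from the task's own recent history, so nobody has to negotiate a tolerance.

use strict;
use warnings;
use List::Util qw(sum);

# Recent response times (seconds) for one instrumented task.
my @history = (0.41, 0.39, 0.44, 0.40, 0.42, 0.38, 0.43);
my $latest  = 0.97;

# The baseline comes from the data itself; no negotiated tolerance.
my $mean = sum(@history) / @history;
my $sd   = sqrt(sum(map { ($_ - $mean) ** 2 } @history) / (@history - 1));

# Alert only when the newest measurement strays far from its own history.
printf("ALERT: %.2fs is %.1f standard deviations above the %.2fs mean\n",
    $latest, ($latest - $mean) / $sd, $mean)
    if ($latest - $mean) > 3 * $sd;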

Dallas ends with:
I think one of the things you might offer as part of the "Performance as a Service" might be assisting customers in developing those performance SLAs, especially since your team is very experienced in knowing what is possible.
I of course love that he made that point, because this is exactly the kind of thing we're in business to do for people. Just contact us through http://method-r.com. We are ready, willing, and able, and now is a great time to schedule something.

There is a lot of value in doing the response time instrumentation exercise, no matter how you do alerting. The value comes in two main ways. First, just the act of measuring often reveals inefficiencies that are easy and inexpensive to fix. We find little mistakes all the time that make systems faster and nicer to use and that allow companies to use their IT budgets more efficiently. Second, response time information is just fascinating. It's valuable for people on both sides of the information supply-and-demand relationship to see how fast things really are, and how often you really run them. Seeing real performance data brings ideas together. It happens even if you don't choose to do alerting at all.

The biggest hurdle is in moving from category 1 to category 2. Once you're at category 2, your hardest work is behind you, and you and your business will have exactly the information you'll need for deciding whether to move on to category 3 or 4.

Monday, December 22, 2008

Cuts, Thinks About Cutting, Part 2

Thank you all for your comments on my prior post.

To Robyn's point in particular, I believe the processes of rehearsing, visualizing, and planning have extraordinarily high value. Visualization and rehearsal are vital, in my opinion, to doing anything well. Ask pilots, athletes, actors, public speakers, programmers, .... Check out "Blue Angels: A Year in the Life" from the Discovery Channel for a really good example.

But there's a time when the incremental value of planning drops beneath the value of actually doing something. I think the location of that "time" and the concept of obsessiveness are related. I'm learning that it's earlier than I had believed when I was younger.

The one sentence in my original blog post that captures the idea I want to emphasize is, "I'm not advocating that design shouldn't happen; I'm advocating that you not pretend your design is build-worthy before you can possibly know."

Thinking of all this reminds me of a "tree house" I wanted to build when I was about 10 years old. I use quotation marks, because it wasn't meant to have anything to do with a tree at all. Here was the general idea of the design:
My dad was a pilot, so when I was 10, I was too. The front of this thing was going to have an instrument panel and some windows, and I was going to fly this thing everywhere I could think of.

I drew detailed design drawings, and I dug a hole. (If you have children, by the way, make sure they dig a hole sometime before they grow up. Trust me.) My plan called for a 2x4 frame with plywood walls screwed to that frame. It's when I started gathering the 2x4 lumber that my plan fell apart. See, I had naively assumed that 2x4 lumber is 2 inches thick by 4 inches wide. It's not. It's 1-1/2 inches thick by 3-1/2 inches wide. That meant I'd have to redraw my whole plan. Using fractions.

I don't remember exactly what happened next, but I do remember exactly what didn't happen. I didn't redraw my whole plan, and I didn't ever build the "tree house." Eventually, I did finally fill in that hole in the yard.

Here are some lessons to take away from that experience:
  • Not understanding other people's specs well enough is one way to screw up your project.
  • Actually getting started is a sure way to learn what you need to know about those other people's specs.
  • You learn things in reality that you just don't learn when you're working on paper.
  • Not building the "tree house" at all meant that I'd have to wait until I was older to learn that putting wood on (or into) the ground is an impermanent solution.
  • The problems that you believe, during the design phase, will be your big problems may not be your actual big problems.

Friday, December 19, 2008

Cuts, Thinks About Cutting



This xkcd comic I saw today reminded me of a conversation a former colleague at Oracle and I had some years ago. During a break in one of the dozens of unmemorable meetings we attended together, he and I were talking about our wood shops. He had built several things lately, which he had told me about. I was in the planning phase of some next project I was going to do. He observed that this seemed to be the conversation we were always having. He argued that it would be better for me to build something, have the fun of building it, actually use the thing I had built, and then learn from my mistakes than to spend so much of my time hatching plans for what I was going to do next.

I of course argued that I didn't want to waste my very precious whatever-it-was that I would mess up if I didn't do it exactly right on the first try. We went back and forth over the idea, until we had to go back to our unmemorable meeting. I remember him shaking his head as we walked back in.

Sometime after we sat down, he passed a piece of paper over to me that looked like this:
It said "cuts" with an arrow pointing to him, and "thinks about cutting" with an arrow pointing to me.

Thus was I irretrievably labeled.

He was right. In recent years, I've grown significantly in the direction of acquiring actual experience, not just imagined experience. I'm still inherently a "math guy" who really enjoys modeling and planning and imagining. But most of the things I've created that are really useful, I've created by building something very basic that just works. Most of the time when I've done that, I've gotten really good use out of that very basic thing. Most of the time, such a thing has enabled me to do stuff that I couldn't have done without it.

In the cases where the very basic thing hasn't been good enough, I've either enhanced it, or I've scrapped it in favor of a new version 2. In either case, the end result was better (and it happened earlier) than if I had tried to imagine every possible feature I was going to need before I built anything. Usually the enhancements have been in directions that I never imagined prior to actually putting my hands on the thing after I built it. I've written about some of those experiences in a paper called "Measure once, cut twice (no, really)".

Incremental design is an interesting subject, especially in the context of our Oracle ecosystem, where there is a grave shortage of competent design. But I do believe that a sensible application of the "measure-once, cut-twice" principle improves software quality, even in projects where you need a well-designed relational database model. I'm not advocating that design shouldn't happen; I'm advocating that you not pretend your design is build-worthy before you can possibly know.

Some good places where you can read more about this philosophy include...

Tuesday, December 16, 2008

A Small Adventure in Profiling

Tonight I'm finishing up some code I'm writing. It's a program that reports on directories full of trace files. I can tell you more about that later. Anyway, tonight, I got my code doing pretty much what I wanted it to be doing, and I decided to profile it. This way, I can see where my code is spending its time.

My program is called lstrc. It's written in Perl. Here's how I profiled it:
23:31:09 $ perl -d:DProf /usr/local/bin/lstrc
The output of my program appeared when I ran it. Then I ran dprofpp, and here's what I got:
23:31:23 $ dprofpp
Total Elapsed Time = 0.411082 Seconds
User+System Time = 0.407182 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
64.1 0.261 0.261 18 0.0145 0.0145 TFK::Util::total_cpu
18.1 0.074 0.076 176 0.0004 0.0004 TFK::Util::timdat
5.65 0.023 0.052 9 0.0026 0.0058 main::BEGIN
1.72 0.007 0.007 1348 0.0000 0.0000 File::ReadBackwards::readline
0.98 0.004 0.022 6 0.0007 0.0036 TFK::Util::BEGIN
0.74 0.003 0.011 18 0.0002 0.0006 TFK::Util::tim1
0.74 0.003 0.004 6 0.0005 0.0006 ActiveState::Path::BEGIN
0.74 0.003 0.014 7 0.0004 0.0019 Date::Parse::BEGIN
0.74 0.003 0.359 2 0.0015 0.1797 main::process_files
0.49 0.002 0.002 4 0.0005 0.0006 Config::BEGIN
0.49 0.002 0.002 177 0.0000 0.0000 File::Basename::fileparse
0.49 0.002 0.002 176 0.0000 0.0000 File::Basename::_strip_trailing_sep
0.49 0.002 0.002 3 0.0005 0.0005 Exporter::as_heavy
0.49 0.002 0.002 6 0.0003 0.0004 File::ReadBackwards::BEGIN
0.25 0.001 0.002 24 0.0001 0.0001 Getopt::Long::BEGIN
What this says is that the function called TFK::Util::total_cpu accounts for 64.1% of the program's total execution time. The thing you couldn't have known (except I'm going to tell you) is that this program is not supposed to execute the function TFK::Util::total_cpu. At all. That's because I didn't specify the --cpu command line argument. (I told you that you couldn't have known.)

Given this knowledge that my code was spending 64.1% of my time executing a function that I didn't even want to run, I was able to add the appropriate branch around the call of TFK::Util::total_cpu. Then, when I ran my code again, it produced exactly the same output, but its profile looked like this:
23:33:07 $ dprofpp
Total Elapsed Time = 0.150279 Seconds
User+System Time = 0.147957 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
50.0 0.074 0.076 176 0.0004 0.0004 TFK::Util::timdat
15.5 0.023 0.053 9 0.0026 0.0058 main::BEGIN
4.73 0.007 0.007 1348 0.0000 0.0000 File::ReadBackwards::readline
2.70 0.004 0.022 6 0.0007 0.0036 TFK::Util::BEGIN
2.70 0.004 0.013 18 0.0002 0.0007 TFK::Util::tim1
2.03 0.003 0.004 6 0.0005 0.0006 ActiveState::Path::BEGIN
2.03 0.003 0.013 7 0.0004 0.0019 Date::Parse::BEGIN
2.03 0.003 0.100 2 0.0015 0.0499 main::process_files
1.35 0.002 0.002 4 0.0005 0.0005 Config::BEGIN
1.35 0.002 0.002 177 0.0000 0.0000 File::Basename::fileparse
1.35 0.002 0.002 176 0.0000 0.0000 File::Basename::_strip_trailing_sep
1.35 0.002 0.002 6 0.0003 0.0004 File::ReadBackwards::BEGIN
1.35 0.002 0.002 3 0.0005 0.0005 Exporter::as_heavy
0.68 0.001 0.002 24 0.0001 0.0001 Getopt::Long::BEGIN
0.68 0.001 0.005 176 0.0000 0.0000 File::Basename::basename
Yay.
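For the record, the fix was nothing exotic. It amounted to a guard like the following (a hypothetical sketch; TFK::Util::total_cpu is real, but %report and $trace_file are stand-ins for lstrc's actual internals):

# Parse the --cpu flag (Getopt::Long appears in the profiles above),
# and call the expensive function only when the user asked for it.
use Getopt::Long;
my $opt_cpu = 0;
GetOptions('cpu' => \$opt_cpu);

$report{cpu} = TFK::Util::total_cpu($trace_file) if $opt_cpu;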

Let me summarize:
Total Elapsed Time = 0.411082 Seconds — before the fix
Total Elapsed Time = 0.150279 Seconds — after the fix
That's about a 64% improvement in response time, in return for about 30 extra seconds of development work.

Profiling—seeing how your code has spent your time—rocks.

Friday, November 21, 2008

Messed-Up App of the Day

A couple of weeks ago, I returned from the Miracle Oracle Open World 2009 event held in Rødby, Denmark. I go just about every year to this event. This time, I had accepted an invitation from Chris Antognini to speak at his company's TrivadisOPEN event in Zürich. So my travel planning was a little more complicated than it usually is. Instead of finding a simple trip from DFW to Copenhagen and back, this time I had to book a triangle: DFW to Zürich to Copenhagen, and then back to DFW.

I was surprised at how difficult it was to find a good schedule at a good price; it was a lot harder than my normal booking. I ended up finding a suitable enough itinerary at Orbitz. I usually fly American Airlines, but for this trip, the AA flights cost much more than I wanted to pay. But I found an itinerary that used British Airways and Air Berlin that I liked. So I booked it.

The trip went just fine. On the morning of the final day of my trip, Dan Norris and I were standing together in Kastrup at the BA ticket counter when the BA agent noticed that I hadn't provided my American Airlines AAdvantage number for the itinerary. (BA and AA are oneworld partners.) So I gave him the number, which he attached to my itinerary, and he informed me that the number would map only to the segments of my flight that I hadn't consumed yet (I had CPH-LHR and then LHR-DFW left to go). I'd need to request credit for my flights from earlier in the week separately on the web once I got home.

Fine.

So on Monday after I got home, I went to AA.com to register my BA flights for mileage credit. There was a form to fill out. It took a while to type everything into it, because I had to find my ticket numbers, figure out how to isolate the carrier id from the remainder of the ticket number, and then enter date and flight number information for each segment of my trip for which I was requesting credit. It was probably a 10-minute time investment to get the form filled out.

So I reviewed what I had entered, and then I hit the Submit button. Instantly, I got feedback from the page saying that I couldn't request mileage credit right now, that I would have to wait 15 days before requesting mileage credit. So I recoiled a little bit—filling out that form was a bunch of work—and I opened my calendar application to log myself a reminder to go through the process again in 15 days.

There are a couple of problems here:
  1. Why did the form force me to enter a bunch of details before telling me that I shouldn't be using this form?
  2. Why was it necessary in the first place for me to type in the flight date, flight number, and origin and destination information for each segment of the itinerary? The ticket number should have been enough.
The simple way to fix both these problems is to do what ba.com does: it asks only who I am and what my ticket number is, and it figures out everything else.

Anyway...

The fifteen days passed, and my calendar reminded me to submit my mileage credit form. So I went to work again, gathering my ticket number information, my flight numbers, my dates of travel, and my origin/destination airport codes. I spent another few minutes typing it all in again. Then I clicked Submit. This time: joy. The form said that the submission was complete, and I'd hear back from AA.com via email.

A minute or so later, I got an email in my inbox thanking me for visiting AA.com and confirming that I had issued a mileage credit request. Good: request confirmed.

Then, just a second or two later, I got a second email from AA.com saying this:
Thanks for using AA.com to request mileage credit for your travel on British Airways.

I'm sorry to disappoint you but your transatlantic travel on British Airways (or a BA-ticketed flight) is not eligible for AAdvantage mileage credit. British Airways transatlantic flights to/from the United States are specifically excluded from mileage accrual or redemption in the AAdvantage program.
Grrrr.... I had to work this hard—twice!—just to find out that I wasn't even going to get to do what I wanted to do? Why didn't they tell me this fifteen days ago?!

The first experience was annoying. The second experience took the annoyance to a whole new level.

Here are some application design lessons that this story reinforces:
  1. Don't ask for information you can derive.
  2. Don't separate feedback from its prerequisite input by any more "distance" (measured in either time or user investment) than absolutely necessary. Especially when there's a chance you're going to deny someone his service, don't make him work any harder or wait any longer than absolutely necessary to let him know.
Of these lessons, number one is far and away the most important. If the AA.com site had asked just for the ticket number like ba.com does, the pain of the other problems wouldn't have been that big of a deal.

Sunday, November 9, 2008

My Friend Oliver

Oliver Weers has been my friend for several years now. I met him at one of Mogens Nørgaard's fantastic events in Denmark, and it's at these events that I see Oliver once or twice a year.

Oliver is a DBA who works for CSC. Like a lot of people I've met in Denmark, he's a very sharp guy. Like a lot of people I've met who are DBAs, he has hobbies outside of work. I have a lot of fun talking to him, and I enjoy when I get to cross his path.

Oliver is particularly special, because his outside hobby is that he's a rock star.

...Not the "finger quotes" kind of rock star like Don Knuth or Chris Date. I mean Oliver is an actual rock star, like Joe Elliott or David Coverdale. Well, he'd probably be embarrassed by that characterization, but that's where I hope his ship is headed. A lot of people in Europe already know Oliver because of his performance on the TV show "X Factor," which is a Danish show that's similar to "American Idol."

I'm excited for Oliver lately because he has just released his first album called Get Ready. If you like your Whitesnake, I think you'll like Oliver. (He'll be warming up for Whitesnake on December 19th in front of 7,000 people at K. B. Hallen in Copenhagen.)

Here's a fun sample from the X Factor show. Try to hold it together for the first couple of minutes. Remember, it's like American Idol where half the fun is to see how bad it can get. Oliver kicks in at 2:22 to straighten things out.

So if you're interested in the Rock Music, have a look at him. Hit Oliver's MySpace page for a good idea of what he's got. This Calling Out For You video is good, too (including a nice interview in Danish, with a little operatic demo thrown in there for good measure). Good song, and it's kind of fun to remember that this guy can rock in sqlplus, too.

So, please join me in wishing Oliver the very best of luck. He's worked hard at this for many years—the whole time holding down a pretty tricky job. I hope he'll become a huge overnight success worldwide sometime real soon.

Thursday, November 6, 2008

C. J. Date at Symposium 2009

I'm excited to announce that we have just arranged for Chris Date to speak at the upcoming Hotsos Symposium (hosted by our friends at Hotsos, 9–12 March 2009 near DFW Airport). Karen Morton just closed the deal with Chris a few minutes ago: he will deliver a keynote and then two one-hour technical sessions.

Here is a chance to meet one of the men who invented the whole field in which so many of us earn our livings today. This is an incredible opportunity.

I'll hope to see you there.

Friday, September 26, 2008

A Lesson in Writing from 1944

I watched the Presidential debate tonight. One of the candidates mentioned a pair of letters that General Dwight David Eisenhower wrote in 1944. He wrote one letter that he would use in the event of a victorious Normandy invasion, and he wrote another one that he would use in the event of a defeat.

I was curious about those letters, so I googled for them. I found something interesting in a way that I didn't expect. Here's the text of the letter that General Eisenhower wrote in case the invasion force at Normandy had been defeated:
Our landings in the Cherbourg-Havre area have failed to gain a satisfactory foothold and I have withdrawn the troops. My decision to attack at this time and place was based on the best information available. The troops, the air and the Navy did all that Bravery and devotion to duty could do. If any blame or fault attaches to the attempt it is mine alone. —July 5
Here is a picture of the handwritten note, which I found at archives.gov:



The handwritten note contains some important information that isn't present in the transcribed text alone. Observe that General Eisenhower edited his message. He actually edited himself three times; I'll refer here only to the top one. Here's the original version:
Our landings in the Cherbourg-Havre area have failed to gain a satisfactory foothold and the troops have been withdrawn.
Here's the modified version:
Our landings in the Cherbourg-Havre area have failed to gain a satisfactory foothold and I have withdrawn the troops.
The difference is subtle but important. In grammatical terms, General Eisenhower made the choice to discard passive voice and adopt the direct, subject-verb-object style of active voice. One Wikipedia article that I particularly admire identifies passive voice as a tactic of weasel wording: "Weasel words are usually expressed with deliberate imprecision with the intention to mislead the listeners or readers into believing statements for which sources are not readily available."

In Eisenhower's original version, he had stated that "the troops have been withdrawn." From this statement, we would have learned some information about the troops, but we would not have learned directly about who had withdrawn them. This passive-voice language, "the troops have been withdrawn," would have subtly conveyed the notion that the author wished to conceal the identity of the decision-maker about the withdrawal.

In the modified version, General Eisenhower made it abundantly clear who had made the decision: he did. The revised wording is more informative, it is more efficient, and it is more courageous.

Active-voice writing holds several advantages over passive-voice writing. I've learned this in my work, especially in consulting engagement reports, where I've found it's essential to write with active voice. Advantages of active-voice writing include:
  • Active voice transmits more information to the reader.
  • Active voice is plainer and simpler; it is easier to read.
  • Active voice is often more economical; it conveys as much or more information in fewer words.
  • Active voice is often more courageous.
The value of courage is obvious in the Eisenhower case. Even if the Allies had been defeated at Normandy, Eisenhower was courageous enough to accept the responsibility for the plan, its execution, and even its remediation.

Courage is also important in our writing about technology. Writing with active voice can be much more difficult than writing with passive voice. ...Because, you see, active voice gives you noplace to hide. When you know something, you say it. When you don't, active voice writing pretty much forces you to say that. It can be quite unsettling to admit to your audience that you don't know everything you wish you knew. It takes courage.

If you find yourself ashamed that your writing is too vague or that it asks more questions than it answers, then I think you have only four choices. (1) You can decide not to write anymore because it's too hard; (2) You can try to conceal your deficiencies with weasel wording; (3) You can admit the gaps in your work; or (4) You can improve the quality of your own knowledge.

Of course, I don't believe that giving up is the right answer. Option two—concealing your deficiencies with weasel wording—is, I think, by far the worst option of the four. Choice three frightens a lot of people, but actually it's not so bad. I believe that one of the great successes of the modern wiki- and forum-enabled Internet is the ease with which an author can voice unfinished ideas without feeling out of place. The fourth option is a fantastic solution if you have the time, the inclination, and the talent for it.

Back to General Eisenhower's note... I find his edit inspiring. By making it, he reveals something about his thought process. He wrote his original text in the common, politically safe "tasks have been executed" kind of way. But his edit reveals that it was especially important to him to be direct and forthcoming about who was making the decisions here, and who was at fault in case those decisions went wrong.

Knowing that General Eisenhower edited his note in the particular way that he did actually makes me respect him even more than if he had written it in active voice in the first place.

* * *

Here's where I thought I was finished for the evening. But I want to show you what it looks like to execute faithfully upon my own bitter advice. Eisenhower's letter piqued my interest in the D-Day invasion of Normandy. One thing I noticed is that the invasion was initiated on June 6, 1944. Eisenhower's memo is dated "July 5." Uh, that's a month after the invasion, not the night before. It was another hour or so of writing lots more stuff (which I've long since deleted) before I googled "eisenhower message june july" and found this, which states simply that, "The handwritten message by General Eisenhower, the In Case of Failure message, is mistakenly dated 'July' 5 instead of 'June' 5."

Ok. I can accept this as authoritative for my own purposes, for one, because it doesn't matter too much to me tonight if it's not true. It's a plausible mistake to imagine a man making when he's under as much pressure as Eisenhower was on June 5, 1944. For comparison, I could barely remember my own phone number on the night of the Loma Prieta earthquake, which I rode out in the Foster City Holiday Inn in 1989. But of course, such an anecdote about me is no proof of this particular proposition about Dwight D. Eisenhower.

So, do you see what I mean when I say that writing is HARD!? The act of writing itself—if you try to do it well—forces you to do work that you never intended to do when you set out to write your piece.

That's one of the good things about the software industry. When someone makes a statement about computer software, I can confirm or refute the statement myself using strace, DTrace, 10046, block dumps, or some other research tool that I can actually get my hands on. That doesn't make it easy, but it usually does make it at least possible.

Thursday, September 4, 2008

Business of Software 2008, day 2

Greetings from the second and final day of "Business of Software 2008, the first ever Joel on Software conference."

Yesterday was a hard act to follow, but today met the challenge. Today's roster:
Some of today's highlight ideas for me (again, with apologies to the speakers for the crude summarization):
  • Nothing is difficult to someone who doesn't know what he's talking about. (Johnson)
  • Creating more artifacts and meetings is no answer. (Johnson)
  • Entrepreneurs are better entrepreneurs when they're not worried about their personal balance sheet. (Jennings)
  • "In the software field, we don't have to deal with the perversions of matter." (Stallman)
  • VCs say 65% of failed new ventures are the result of people problems with founding or management teams. (Wasserman)
  • Websites are successful to the extent that they're as self-evident as possible. (Krug)
  • Sensible usability testing is absolutely necessary and, better yet, possible and even inexpensive. You can even download a script at Steve's site. (Krug)
  • The huge chasm between #1 and #2 is all about elements of happiness, aesthetics, and culture. (Spolsky)
Steve Johnson and Steve Krug gave truly superb presentations. Steve Krug I knew about beforehand, from his book. Steve Johnson I did not know, but I do now. These are people I'll take courses from someday. And of course, Joel Spolsky... I had seen him speak before, so I knew what to expect. He's one of the best speakers I've ever watched. I've asked him to keynote at Hotsos Symposium 2009. We'll see what he says.

Wednesday, September 3, 2008

Business of Software 2008, day 1

Greetings from Boston, where I'm attending "Business of Software 2008, the first ever Joel on Software conference."

It has been fantastic so far. Here's a featured presenters roll call for the day:
That's not to mention the eight Pecha Kucha presentations, although I will mention two that I particularly enjoyed by Jason Cohen of SmartBear Software ("Agile marketing") and Alexis Ohanian, founder of Reddit ("How to start, run, and sell a web 2.0 startup"). Alexis won the contest, which netted him a new MacBook Air. Not bad for 6 minutes 40 seconds of work. ;-)

Here are some of the highlight ideas of the day for me (with apologies to the speakers for, in some cases, crudely over-simplifying their ideas):
  • Ideas that spread win. (Godin)
  • The leader of a tribe begins as a heretic. (Godin, Livingston)
  • Premature optimization is bad. In business too. Not just code. (Fried, Shah)
  • Interruptions are bad. Meetings are worse. (Fried, Sink, Livingston)
  • "Only two things grow forever: businesses and tumors." Unless you take inelligent action. (Fried)
  • Pricing is hard. Really, really hard. (Shah)
  • Business plans are usually stupid. (Fried, Shah, Livingston)
  • Software specs are usually stupid. (Fried)
  • An important opportunity cost of raising VC money is the time you're not spending working on the business of your actual business. (Shah)
  • The most common cause of startup failure isn't competition, it's fear. (Livingston)
  • Your first idea probably sucks. (Fried, Sink, Shah, Livingston)
  • Radical mood swings are part of the territory for founding a company. (Livingston)
An overarching belief that I think bonds almost all of the 300 people here at the event is this: If you're not working on your passion, then you're wasting yourself. It is inspiring to meet so many people at one time who are living courageously without compromising this belief. Re-SPECT.

I think a good conference should provide three main intellectual benefits for people:
  1. You can expose yourself to new ideas, which can make you wiser.
  2. You can fortify some of the beliefs you already had, which can make you more confident.
  3. You can learn better ways to explain your beliefs to others, which can make you more effective.
And then of course there's networking, fun, and all that stuff—that's easy. So far, this event is ringing the bell on every dimension that I needed. Absolutely A+.

Tuesday, August 26, 2008

Hotsos Symposium 2009 Call for Papers

The Call for Papers for Hotsos Symposium 2009 is now open. To submit an abstract proposal for the event, please visit the Call for Papers page. The call will remain open until 24 October. This is your chance to get your name on the agenda and earn a complimentary pass to the event.

I love the Symposium for the people who show up, both the speakers and the attendees. If you've been there, you know: it is the best event of the year for professionals interested in Oracle performance. It's one of the rare places that I can just sit down with a pencil and fill my notebook with answers to long-standing questions and good new ideas to pursue for the coming year.

We've already booked Jonathan Lewis for two technical sessions and the Training Day event, and Tanel Põder has confirmed his participation on the agenda as well. That makes two of my favorites, with lots more on the way.

You're probably aware that earlier this year, I left Hotsos with a few former employees to create Method R Corporation (see our press release for more info). Method R and Hotsos are pleased to continue the tradition of the Hotsos Symposium as a joint venture between our two companies. I hope you'll join us.

Friday, August 22, 2008

Messed-Up App of the Day

My family has a cat. I don't like to talk about it, because I really don't like cats that much.

One thing our cat does that's kind of interesting to me is that she brings "gifts" into our garage near her food bowl. Here's a picture of one. She climbs to the tops of the trees in our yard to catch these things.



Jeff Holt does something similar on occasion. Sometimes when I return from a trip, there'll be something on my desk from Jeff for me. Today, it was this:



It's a piece of wire taped to a sheet of paper. And that's my Messed-Up App of the Day.

Handwritten on the paper is this message from Jeff:
This was a grounding cable for the network.

LMAO
The first few seconds I looked at the cable, it didn't seem that funny to me. It was about the same reaction as when I see a dead cicada next to the food bowl at home. Amused. But not "LMAO."

Sometimes, what Jeff thinks is funny is funny because of something I haven't learned yet. So I got thinking, maybe his point is that it's against code to use stranded copper wire as grounding cable. Most of the ground wire you'll find in houses is solid copper. But then again, no, stranded cable is fine for ground wire. If I remember correctly, you use stranded instead of solid wire in applications where the wire is going to be required to flex in normal operational circumstances.

Then I noticed there was red tape on one end of the wire, and black tape on the other. Ah, ok, that's what was funny: ground wires are supposed to be labeled green. I figured the guys who wired our network probably just used some scrap wire instead of properly marked "ground wire." Sometimes that kind of thing bugs Jeff, and he knows it bugs me, too. Mystery solved, then.

But not really "LMAO," though, when you think about it.

Then I picked up the wire and looked at it. Look at this:



Oh... It's not tape. That's two wires. One red, one black.

Ok, that's legitimately "LMAO."

P.S.: If you have trouble explaining to your friends why this is funny, the following handy diagram may help you.

Friday, July 11, 2008

So how do you FIX the problems that "Performance as a Service" helps you find?

I want to respond carefully to Reuben's comment on my Performance as a Service post from July 7. Reuben asked:
i can see how you actually determine for a customer where the pain points are and you can validate user remarks about poor performance. But i don't see from your post how you are going to attack the problem of fixing the performance issue.

i would be most interested in hearing your thoughts on that. I wonder if you guys are going to touch the actual code behind the "order button" you described.
Under the service model I described in the Performance as a Service post, a client using our performance service would have several choices about how to respond to a problem. They could contract our team as consultants; they could use someone else; they could do it themselves; or of course, they could choose to defer the solution.

Attacking the problem of fixing the performance issue is actually the "easy" part. Well, it's not necessarily always easy, but it's the part that my team has been doing over and over again since the early 1990s. We use Method R. Once we've measured the response time in detail with Oracle's extended SQL trace function, we know exactly where the task is spending the end user's time. From there, I think it's fair to say we (Jeff, Karen, etc.) are pretty skilled at figuring out what to do next.
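If you've never seen it, switching on extended SQL trace for a session looks roughly like this. It's a minimal sketch using Perl DBI; the connection details are placeholders, and level 12 asks for both wait and bind data:

use strict;
use warnings;
use DBI;

# Placeholder connection details.
my $dbh = DBI->connect('dbi:Oracle:orcl', 'scott', 'tiger',
    { RaiseError => 1, AutoCommit => 0 });

# Make sure timing data gets written, and don't truncate the trace file.
$dbh->do(q{alter session set timed_statistics = true});
$dbh->do(q{alter session set max_dump_file_size = unlimited});

# Level 12 = level 8 (waits) + level 4 (binds).
$dbh->do(q{alter session set events '10046 trace name context forever, level 12'});

# ... run the business task whose response time you care about ...

$dbh->do(q{alter session set events '10046 trace name context off'});
$dbh->disconnect;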

Sometimes, the root cause of a performance problem requires manipulation of the application source code, and sometimes it doesn't. If you do diagnosis right, you should never have to guess which one it is. A lot of people wonder what happens if it's the code that needs modification, but the application is prepackaged, and therefore the source code is out of your control. In my experience, most vendors are very responsive to improving the performance of their products when they're shown unambiguously how to improve them.

If your application is slow, you should be eager to know exactly why it's slow. You should be equally eager to know regardless of whether you wrote the code yourself or someone else wrote it for you. To avoid collecting the right performance diagnostic data for an application because you're afraid of what you might find out is like taking your hands off the wheel and covering your eyes when a child rides his bike out in front of the car that you're driving. There's significant time-value in information about performance problems. Even if someone else's code is the reason for your performance problem (or whatever truth you might be afraid of learning), you need to know it as early as possible.

The SLA Manager service I talked about is so important because the most difficult part of using Method R is usually the data collection step. The difficulty is almost always more political than technical. It's overcoming the question, "Why should we change the way we collect our data?" I believe the business value of knowing how long your computer system takes to execute tasks for your business is important enough that it will get people into the habit of measuring response time. ...Which is a vital step in solving the data collection problem that's at the heart of every persistent performance problem I've ever seen. I believe the data collection service that I described will help remove the most important remaining barrier to highly manageable application software performance in our market.

Thursday, July 10, 2008

Christian Antognini's new book: Troubleshooting Oracle Performance

I learned from a friend yesterday that Chris Antognini's new book, Troubleshooting Oracle Performance, is available now. I just checked at Amazon, and the product is listed as temporarily out of stock. That's good: it means people are buying them up.

If you're an Oracle application developer, get one. If you're an Oracle database administrator, get one for yourself and a couple more for your developer friends.

I hope he sells a million of them.



Jonathan Lewis and I both wrote a foreword for Chris after seeing the work he had put into this project. Here's mine...

My Foreword for Chris's Book

I think the best thing that has happened to Oracle performance in the past ten years is the radical improvement in the quality of the information you can buy now at the bookstore.

In the old days, the books you bought about Oracle performance all looked pretty much the same. They insinuated that your Oracle system inevitably suffered from too much I/O (which is, in fact, not inevitable) or not enough memory (which they claimed was the same thing as too much I/O, which also isn’t true). They’d show you loads and loads of SQL scripts that you might run, and they’d tell you to tune your SQL. And that, they said, would fix everything.

It was an age of darkness.

Chris’s book is a member of the family tree that has brought to us, …light. The difference between the darkness and the light boils down to one simple concept. It’s a concept that your mathematics teachers made you execute from the time when you were about ten years old: show your work.

I don’t mean “show and tell,” where someone claims he has improved performance at hundreds of customer sites by hundreds of percentage points [sic], so therefore he’s an expert. I mean show your work, which means documenting a relevant baseline measurement, conducting a controlled experiment, documenting a second relevant measurement, and then showing your results openly and transparently so that your reader can follow along and even reproduce your test if he wants to.

That’s a big deal. When authors started doing that, Oracle audiences started getting a lot smarter. Since the year 2000, there has been a dramatic increase in the number of people in the Oracle community who ask intelligent questions and demand intelligent answers about performance. And there’s been an acceleration in the drowning-out of some really bad ideas that lots of people used to believe.

In this book, Chris follows the pattern that works. He tells you useful things. But he doesn’t stop there. He shows you how he knows, which is to say he shows you how you can find out for yourself. He shows his work.

That brings you two big benefits. First, showing his work helps you understand more deeply what he’s showing you, which makes his lessons easier for you to remember and apply. Second, by understanding his examples, you can understand not just the things that Chris is showing you, but you’ll also be able to answer additional good questions that Chris hasn’t covered. …Like what will happen in the next release of Oracle after this book has gone to print.

This book, for me, is both a technical and a “persuasional” reference. It contains tremendous amounts of fully documented homework that I can reuse. It also contains eloquent new arguments on several points about which I share Chris’s views and his passion. The arguments that Chris uses in this book will help me convince more people to do the Right Things.

Chris is a smart, energetic guy who stands on the shoulders of Dave Ensor, Lex de Haan, Anjo Kolk, Steve Adams, Jonathan Lewis, Tom Kyte, and a handful of other people I regard as heroes for bringing rigor to our field. Now we have Chris’s shoulders to stand on as well.

―Cary Millsap
10 April 2008

Monday, July 7, 2008

Performance as a Service

I've mentioned already that, for the second time in ten years, I'm starting a business. It's a lot easier nowadays than it was back in 1999. I know; it's supposed to be easier the second time you do something, but what I mean is different from that. It's just a lot easier to start a business now than it used to be.

Take email for example. I remember the trauma of having to buy and build a server, install Linux on it, find a location for it, install Sendmail, figure out how to manage that, eventually hire someone to manage it, buy email client software for everyone (in our case, Microsoft Outlook), eventually decide that we wanted to use Microsoft Exchange instead of Sendmail, and then keep on top of hardware and software maintenance for everything we had bought, all in an environment where prices and technology and requirements were continuously variable. It took nearly a whole full-time person just to figure out which options we should be thinking about.

Jeff Holt did most of this work for us in my first start-up almost ten years ago. Now, when you think of how many people in the world there are who can set up email, and compare that to how few people in the world there are who can do what Jeff can do with an Oracle database, you realize that the opportunity cost of having Jeff fiddle with email is ludicrously high. But in 1999, the only other option I knew about was to spend a bunch of cash to hire a separate person to do it instead of Jeff.

Today, you pay $50 to Google for a whole year's worth of Gmail service for each employee you have, and that's it. Ok, there's a half hour or so of configuration work you have to do to get your own domain name in your email addresses. But for way less than one month's rent, you've got email for your company for a whole year that works every time, all the time, from anywhere. All you need is a browser to access it, and even that is free these days.

I can tell you the same kind of story for web hosting, bug tracking, backup and recovery, HR and payroll, accounting, even for sales. The common thread here is that there are a lot of things you have to do as a business that have nothing whatsoever to do with what your business really does, which is that content that your people are really passionate about providing to the market. Today, it's economically efficient to let specialty firms do things for you that ten years ago, you wouldn't have considered letting someone else do.

...Which brings me to what we do. My company, Method R Corporation, does performance for a living. Specifically, Oracle software performance. We know how to make just about any Oracle-based software go faster, and we can do it quicker than you probably think. And we can teach people how to do it just like we do. We even sell the tools we use, which make it a lot easier to do what we do. It works. Read the testimonials at our Profiler page for some evidence of what I mean.

So here's a really important question for our company: Why would a telco or a manufacturer or a transportation company or a financial services company—or even a computer software manufacturer—want to learn as much about Oracle performance as the people in Method R have invested into learning? The answer is that a lot of companies just don't.

I love the field of software performance. I love it; it's my life's work. But most people don't. There are a lot of business owners and even software developers out there who just don't love thinking about software performance. I get that. Hey, I happen not to love thinking about software security. I know it's necessary, and I want it; I just don't want to have to think about it. I think most people regard software performance the same way: want it, need it even, don't want to think about it.

What if software performance were something, like Gmail, that just worked, and the only time you had to think about it was when you wrote a little check to make sure you could continue not having to think about it? I think there's a real business model there.

So here's what we're doing.

The people here at Method R have created a software package that we call our SLA Manager. "SLA" stands for "Service Level Agreement." It is software that tracks the response times (the durations that your end-users care about) of the business tasks that you mark as the most important things you want to watch. For example, if your application's "Book Order" function is something that's important to you, we can measure all 10,436 of your "Book Order" button clicks that happened yesterday. Our SLA Manager could tell you how long every single one of those clicks took. We can report information like, "Only 92.7% of those clicks were fulfilled in 3 seconds or less (not 99% like you wanted)." Of course, we can see trends in the data (that is, we can see your performance problems before your users can), and so on.
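The arithmetic behind a report like that is simple. Here's a minimal sketch in Perl with hypothetical data (this isn't our SLA Manager code, just the flavor of the calculation):

use strict;
use warnings;

my $sla = 3;   # seconds: the response time promise for "Book Order"
my @response_times = (1.2, 2.8, 3.4, 0.9, 2.1);   # one entry per click

# Count the executions that met the promise.
my $met = grep { $_ <= $sla } @response_times;

printf "%.1f%% of %d executions completed in %ds or less\n",
    100 * $met / @response_times, scalar(@response_times), $sla;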

So, our value proposition is this: We'll install some data collection software at your site. We'll instrument some of the business tasks that you want to make sure never have performance problems. We'll show you exactly what we're doing so there's no need to fear whether we're messing anything up for you. For example, we'll show you how to turn all our stuff off with the flick of a switch in case you ever get into a debate with one of your software vendors over the impact our measurements might have upon your system.

We'll periodically transfer data from your site to ours, where we'll look at your performance data. We'll charge a small fee for that. The people looking at your data will be Cary Millsap, Jeff Holt, Karen Morton, ...people like that.
Remember: we're not looking at your actual transactions; all we're going to see is how many you do and how long they take.
We'll report regularly to you on what we see, and we'll make recommendations when we see opportunities for improvement. How much or how little help you want will be your decision. If you ever do want us to help you fix a performance problem with one of the tasks that we've helped you instrument, we'll be able to provide quick answers because we have the tools that work with the instrumentation we installed.

Another part of our service will be regularly scheduled knowledge transfer sessions, where the same people I've mentioned already will be available to you. Whether the events are public or private, remote or on-site, ...that will depend on the level of service you want to purchase. We'll tailor these sessions to your needs. We'll be in tune with those needs because of the data we'll be collecting.

If this business model sounds attractive to you, then I hope you'll drop us a note at info at method-r dot com. If it doesn't sound attractive, then we're eager to know how we could make the idea more appealing.

Tuesday, July 1, 2008

Multitasking: Productivity Killer

A couple of years ago, I read Joel Spolsky's article "Human Task Switches Considered Harmful," and it resonated mightily. The key take-away from that article is this: Never let people work on more than one thing at once. Amen. The nice thing about Joel's article is that it explains why in a very compelling way.

Last week, a good friend emailed me a link to an article by Christine Rosen called "The Myth of Multitasking," which goes even further. It quotes one group of researchers at the University of California at Irvine, who found that workers took an average of twenty-five minutes to recover from interruptions such as phone calls or answering e-mail and return to their original task.

So it's not just me.

The "benefits" of human multitasking is an illusion. Looking or feeling busy is no substitute for accomplishment.

Here's a passage from the Rosen article that might get your attention, if I haven't already:
...Research has also found that multitasking contributes to the release of stress hormones and adrenaline, which can cause long-term health problems if not controlled, and contributes to the loss of short-term memory.
Translation: Trying too hard to do the information overload thing makes you sick, and it makes you stupid.

For as long as I can remember, I've hated the times I've been "forced" to multitask, and I've loved those segments of my life when I've been free to lock down on a train of thought for hours at a time. I believe deep down that multitasking is bad—at least for me—and literature like the two articles I've discussed here supports that feeling in a compelling way.

Here's a checklist of decisions that I resolve to implement myself:
  • When you need to sit down and write, whether it's code or text, close your door, and turn off your phone and your email. (Or just work the 10pm-to-4am shift like I did with Optimizing Oracle Performance.)
  • When you're in a classroom, if you're really trying to learn something, turn off your email and your browser.
  • When you're managing someone, make sure he's working on one thing at a time. It's obviously important that this one thing should be the right thing to be working on. But it's actually worse to be working on two things than working on just one wrong thing. Read Spolsky. You'll see.

Monday, June 9, 2008

Why Guess? When You Can Know

The comment from plαdys on my post about flash drives and databases is a great entry into exactly the right conversation to be having. He wrote:

What about Data Warehouse type databases? Lots of full table scans, less use of cache (not sure if that's true)...

The traditional approach to having this conversation is for a few enthusiastic participants to argue over what happens "most of the time" or what "can happen." I used to participate in conversations like that at conferences, on panels, and in email. Nowadays, people have conversations like this in newsgroups and blogs.

I'll submit to you that the conversation about what "can happen" and subsequent arguments about what happens "most of the time" are irrelevant. Here's why. Imagine that a conversation like that converged to a state where one person was able to argue successfully that "in precisely 99% of cases, some proposition P is true." That never happens, but go with me for a second; imagine it did.

So, now, is P true for you? Most people seem to assume that the things that happen to them are like the typical things that happen to most people. Most people would think, "If P is that common, then surely P is true for me." Maybe it is. But then again, maybe it's not. The most likely situation is that P is true for some tasks running within your system, but it's not true for others. Whether P is true for the most important tasks on your system is simply a game of chance if you're like most people who have this conversation.

My point is: Why guess? When you can know. When it comes to Amdahl's Law, you should be able to know exactly how much of an individual business task's response time is being consumed on the component of your system that you're thinking about upgrading. If you want to see an example of what it looks like, look at our Profiler page.
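In symbols (my notation, not anything from the comment thread): if a component accounts for fraction $p$ of a task's response time $T_{\text{old}}$, and you make that component $s$ times faster, then

$$T_{\text{new}} = T_{\text{old}}\left((1 - p) + \frac{p}{s}\right)$$

The only way to know $p$ for your task is to measure that task's response time in detail. For example, with $p = 0.05$ and $s = 100$, you get $T_{\text{new}} = 0.9505\,T_{\text{old}}$: a 4.95% improvement, nothing like 100x.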

With Oracle systems, though, the traditional approach is not to think that way. The traditional approach is to look at measurements upon the resources that comprise a system (its CPUs, its memory, its disks, its network), not measurements upon the business tasks that the company who owns the machine is spending its money to perform.

My whole point is that if you look at the response time of your business's most important tasks (that's what Method R is all about), then you don't have to care about conversations about other people's systems, or whether your system is typical enough to follow other people's advice. You won't have to guess about stuff like that, because you'll know the specific nature and needs of your system, regardless of whether your system happens to be like anyone else's.

Stop guessing. You can know. But you have to be willing to look at performance from the outside in, from the perspective of the task being processed, not from the traditional inside-out perspective of the resources doing the work.

Wednesday, June 4, 2008

Flash Drives and Databases

I learned today about "Sun to embed flash storage in nearly all its servers." This is supposed to be good news for database professionals all over, because flash storage "...consumes one-fifth the power and is a hundred times faster [than rotating disk drives]."

Hey-hey!

Of course, flash storage is going to cost a little more. Well, I'm not sure, maybe a lot more. But, according to the article:
“The fact that it’s not the same dollars per gigabyte is perfectly okay,” said John Fowler, the head of Sun’s servers and storage division, at a press conference in Boston Tuesday.
Alright, I understand that. Get more, pay more. I'm still on board.

But I predict that a lot of people who buy flash storage are going to be disappointed. Here's why.

We all know now that flash storage is a hundred times faster than rotating disk drives. (Says so right in the article. And consumes one-fifth the power.) We all also "know" that databases are I/O intensive applications. (The article says that, too. But everybody already "knew" that anyway.)

The problem that's going to happen is the people (1) who have a slow database application, (2) who assume that their application is slow because of the I/O it is doing, (3) whose application doesn't really spend much time doing I/O at all (whether it does a "lot" of I/O is irrelevant), and (4) who buy flash storage specifically in the hope that after the installation, their database application will "be 100x faster" (because, of course, the flash storage is 100x faster than the storage it is replacing).

See the problem?

Think about Amdahl's Law: improving the speed of a component will help a user's performance only in proportion to the duration for which that user used that component in the first place. Here's an example. Imagine this response time profile:
Total response time: 100 minutes (100%)
Time spent executing OS read calls: 5 minutes (5%) (e.g., db file sequential read)
Time spent doing other stuff: 95 minutes (95%)
Now, how much time will you save if you upgrade your disk drives to a technology that's 100x faster? The answer is that the new "Time spent executing OS read calls" will be 0.05 minutes, right? Well, maybe. Let's go with that for a moment. If that were true, then how much time would you save? You'd save 4.95 minutes, which is 4.95% of your original response time. Your application won't be 100x faster (or, equivalently, 99% faster); it'll be 4.95% faster.
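If it helps to see the arithmetic spelled out, here it is as a few lines of Python. The numbers are just the ones from the example above; nothing here comes from a real system.

    # Amdahl's Law arithmetic for the example response time profile above.
    total_time = 100.0   # total response time, minutes
    read_time = 5.0      # minutes spent executing OS read calls
    speedup = 100.0      # the "100x faster" storage claim

    new_read_time = read_time / speedup   # 0.05 minutes
    saved = read_time - new_read_time     # 4.95 minutes

    print(f"time saved: {saved:.2f} min, i.e., {saved / total_time:.2%} "
          f"of the original response time")
    print(f"new response time: {total_time - saved:.2f} min")
    # time saved: 4.95 min, i.e., 4.95% of the original response time
    # new response time: 95.05 min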

The users in this story aren't going to be happy with this if they're thinking that the result of an expensive upgrade is going to be 100x faster performance. If they're expecting 1-minute performance and get 95.05-minute performance instead, they're going to be, um, disappointed.

Now, reality is probably not even going to be that good. Imagine that those 5 minutes our user spent in the 100-minute original application response time were consumed executing 150,000 distinct Oracle db file sequential read calls (which map to 150,000 OS read calls). That makes your single-call I/O latency 0.002 seconds per call (300 seconds divided by 150,000 calls).

That's pretty good, but it's a normal enough latency on today's high-powered SAN devices. If you think about rotating disk drives, then 0.002 seconds per call is mind-blowingly excellent. But I/O latencies of 0.002 seconds or better don't come from disk drives; they come from the cache that's sitting in those SANs. The read calls that result in physical disk access take much longer, 0.005 seconds or more. An average latency of 0.002 seconds is possible only because so many of those read calls are being fulfilled from cache.

And the flash drive upgrades aren't going to improve the latency of those calls being fulfilled from cache.
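To make that concrete, here's a back-of-the-envelope sketch. The average latency and read time come from the example above; the cache, disk, and flash latencies are assumptions of mine, not measurements.

    # Back-of-the-envelope: a 0.002 s average read latency on a SAN implies
    # a high cache hit ratio, and flash improves only the physical reads.
    avg_latency = 0.002     # measured average per read call (example above)
    disk_latency = 0.005    # assumed physical rotating-disk read latency
    cache_latency = 0.0005  # assumed SAN cache-hit latency (my guess)

    # avg = h*cache + (1 - h)*disk, so solve for the cache hit ratio h:
    h = (disk_latency - avg_latency) / (disk_latency - cache_latency)
    print(f"implied cache hit ratio: {h:.0%}")  # about 67%

    flash_latency = 0.0003  # assumed flash read latency (my guess)
    new_avg = h * cache_latency + (1 - h) * flash_latency

    read_minutes = 5.0      # read time from the example profile
    new_read_minutes = read_minutes * new_avg / avg_latency
    print(f"read time: {read_minutes:.2f} -> {new_read_minutes:.2f} minutes")
    # Only about 3.9 of the ideal 4.95 minutes get saved, because the
    # cache-hit portion of the latency doesn't improve at all.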

So, to recap, the best improvement you'll ever get by upgrading to flash drives is a percentage improvement that's equivalent to the percentage of time you spent before the upgrade actually making I/O calls. If a lot of your I/O calls are satisfied by reads from cache to begin with, then upgrading to flash drives will help you less than that.

The biggest performance problem most people have is that they don't know where their users' time is going. They know where their system's time is going, but that doesn't matter. What people need to see is the response time profiles of the tasks that the business regards as the most important things it needs done. That's the cornerstone of what Method R (both the method and the company) is all about.

Flash drives might help you. Maybe a lot. And maybe they'll help you a little, or maybe not at all. If you can't see individual user response times, then you'll have to actually try them to find out whether they'll be good for you or not (imagine cash register sound here).

We built our Profiler software so that when we manage Oracle systems, we can see the users' response times and not have to guess about stuff like this. When you can see your response times, you don't have to guess whether a proposed upgrade is going to help you. You'll know exactly who will be helped, and you'll know by how much.

The Magic of VMs

Something that Faan said in a comment to one of my posts stimulated a memory I’d like to share. In that post, I mentioned that I’m kind of interested in trying Microsoft Outlook 2007, but I’m too chicken to do it, because I don’t have enough faith that if I didn’t end up wanting to buy it, I’d be able to uninstall it without gorping up my Outlook 2003 installation, which I still rely upon.

He mentioned that a good way to evaluate a product without letting that product run amok through your production data is to use virtual machine software, like VMware. In my estimation, this is brilliant.

And that’s where the memory comes in. On my most recent trip to Europe, I had some time with my good friend Carel Jan Engel. Among the many stories we traded, Carel Jan gave me an excellent solution to the age-old problem of the awful transition period you have to go through when you replace your laptop computer.

In the Old Days, when you got a new computer, you had to install all the stuff that used to be on your old computer onto your new computer. This typically required me to spend weeks with both laptops sitting in front of me, so I could have access to all the license keys and so forth that I needed to install everything onto my new machine. Then there was the issue of re-customizing all your toolbars and everything that makes your apps yours.

Carel Jan excitedly told me the story of how he had just bought himself a new laptop, and all he had to do was bundle up the old Windows VM from his old machine, and copy it to his new machine. Presto! No more laptop upgrade purgatory. Brilliant.

Looks like I’ll have one more purgatory to survive, and, if I do things right, that will be the end of it for this lifetime.

Syncing..., Part 2

I've learned a lot about syncing my iPhone from the comments I received on my prior post about syncing. Here's a summary:
  • Plaxo is cool, but it just doesn't do what I need. It doesn't put appointments into my iPhone stand-alone Calendar application. ...Which means that when I'm in Europe and don't want to pay roaming charges for data, I'm not going to get an alert on my iPhone when a Google Calendar-entered appointment comes due.
  • GooSync looks interesting (e.g., "Sync multiple Google calendars"), but I'd have to pay for the Premium Account option to see if it would work. With so many things I've tried not working, and with plenty of other things occupying my time, my internal barrier to entry is too high to try this one.
  • It looks like Synthesis AG has interesting plans for an iPhone SyncML client, but it looks like that would give me less of a "solution" per se, and more of a basis for a new programming project that I could do myself with the GData APIs that Tony mentioned. I'm not interested in doing this myself as a project.
  • Dominic sent me an interesting article that does really get to the point of what I want, but it requires jail-breaking (i.e., voiding the warranty) on my iPhone. It didn't take me much introspection to figure out my position on this: I'm not a jail-breaking kind of guy.
The best solution appears to be to wait a few weeks and see what happens with the scheduled-for-June release of the iPhone G3 (or whatever they'll call it), which is also supposed to benefit from the mass of developers out there writing new apps for the new iPhone SDK. So, I'll wait and see.

Tuesday, May 27, 2008

Syncing iCal feeds with my iPhone: Not

Here's something I need to do, but I don't know how: sync an iCal feed with the calendar application on my iPhone, without a Mac, and without upgrading Outlook 2003 to Outlook 2007. Here's the whole story.

First, I travel. Sometimes, a lot. And I have a lot of appointment managing to do. I feel very disoriented whenever I don't have my itinerary available to me, in my pocket. I also need my schedule on my laptop, where I can see a whole month in one view. Of course, the schedule in my pocket and the schedule on my laptop need to be synchronized.

I own an iPhone. It's the first cellphone I've truly loved since the first Nokia I bought back in the 1990s. I love it. My iPhone syncs with Microsoft Outlook 2003 on my Dell laptop. I don't own Outlook 2007. I don't do email in Outlook anymore. I use Gmail, both at home and at work. And on my iPhone.

I do still use Outlook, though, for calendars and contacts. That's because I don't know a better way. I need read/write access to my calendar and contacts lists on my PC, and I certainly don't want the last copy of my calendar and contacts to be stored on a device that goes with me everywhere I go and that could easily be lost or stolen.

A better place to store calendar data is on the web. That way, I can share it with people (without other people, we wouldn't need calendars at all!), and I can access it from whatever computer happens to be available. Enter Google Calendar.

I want to love Google Calendar, but I can't. I love the idea of it, but I don't love it because I can't sync it with my iPhone. There's no direct hookup between Google Calendar and my iPhone. Yes, I know I can see my Google Calendar from my iPhone, and I remember being able to do some limited form of calendar editing from my iPhone, but that's not good enough. I need alerts when I'm not connected. I need my information stored locally within the calendar application on my iPhone.

I think the solution is supposed to be Google Calendar Sync. It syncs information from Google Calendar to Outlook 2003, where I can sync to my iPhone. But Google Calendar Sync doesn't work on my laptop. I keep getting error code 1008. I looked it up, and Google says they're working on it, but there's no relief today. Additionally, even if I could get Google Calendar Sync to work (I had it working a couple of months ago), it still doesn't do what I need it to do. That's because Google Calendar Sync syncs only my primary Google Calendar to Outlook. More on that in a minute.

Now, let me describe one of the world's very coolest web applications ever: TripIt.com. TripIt is wholly excellent. Imagine this. Book a flight at American Airlines, a hotel room at Hilton, and a rental car at Hertz. You get three confirmation email messages as a result. In the old days, you might have spent some of your time transcribing the information from those messages into Outlook or whatever. Or maybe you paid someone good money to transcribe them for you.

With TripIt, all you do is forward your three confirmation email messages to plans@tripit.com. And then all your itinerary information gets structured automatically into a complete, single itinerary that you can access on the web. You can print that itinerary on a page or two, stuff it into your briefcase, and have everything you need: flight times, rental car and hotel confirmation numbers, weather forecasts, pertinent local maps, ..., everything. 

That's not even the best part. The best part is that TripIt creates an iCal feed that Google Calendar can pick up automatically.
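If you're curious what such a feed contains under the covers, an iCal feed is just an .ics text file served over the web. Here's a minimal hand-rolled sketch in Python; the flight details are invented, and a real TripIt feed carries many more fields per event.

    # Minimal sketch of an iCal (.ics) feed body. Flight details invented;
    # a real feed would include many more properties per event.
    ics = "\r\n".join([  # the iCalendar spec requires CRLF line endings
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//fake-itinerary//EN",
        "BEGIN:VEVENT",
        "UID:fake-flight-001@example.com",
        "DTSTART:20080618T130000Z",    # departure time, UTC
        "DTEND:20080618T150000Z",      # arrival time, UTC
        "SUMMARY:AA 1234 DFW to MSY",  # made-up flight
        "LOCATION:DFW",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
    print(ics)  # serve this text over HTTP, and calendar apps can subscribe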

So let me recap. You book travel however you like. You forward the confirmation mails to plans@tripit.com. (It just occurred to me that you could even do this automatically with Gmail filters.) And then Google Calendar picks up your whole itinerary automatically.
 
But for me, that's where the joy ends. Because even when Google Calendar Sync does work (remember, the 1008 error currently prevents it), it syncs only your primary calendar. It doesn't sync a secondary calendar obtained through an iCal feed, so it doesn't sync my TripIt calendar.

So, here's what my process looks like now:
  1. Book travel.
  2. Forward the confirmation emails to plans@tripit.com.
  3. Print my TripIt unified itinerary for my briefcase.
  4. Type my itinerary into Outlook (or onto my iPhone). If the travel spans more than just a couple of time zones, then I enter the itinerary at mytimetraveler.com (which does my time zone arithmetic for me), and then I download the Outlook itinerary record from the web page. Back when I could get Google Calendar Sync to work, copying my TripIt calendar records to my primary calendar was an option, but not a good one.
  5. Sync my iPhone with my laptop.
Here's what I wish my process looked like:
  1. Book travel.
  2. Forward the confirmation emails to plans@tripit.com.
  3. Print my TripIt unified itinerary for my briefcase. (Or not.)
  4. Sync my iPhone with my laptop.
I've actually considered upgrading to Microsoft Outlook 2007, which, I understand, knows about iCal feeds. It might be able to sync my TripIt data with my iPhone. But I think the price tag is too high to pay for that one feature. And I'm not even assured that it will work. I know Microsoft has a 60-day free trial, but I'm worried that Outlook 2003 won't ever work right again if I try 2007 and don't like it.

Another option I've considered is replacing my laptop with a MacBook Pro. As tempted as I am by that idea, I'm not going to do that right now, and I'm not sure whether it would actually solve my problem anyway. Would it?

I hope there's a solution that I can implement with minimal expense, and with the hardware I own today. If there is, I sure haven't found it yet. I'd love to hear from you if you have a helpful opinion.

Thursday, May 22, 2008

Karen Morton

Today I’ve added Karen Morton’s blog to my Blog list. I met her a few years back at a course I helped teach in Tennessee. She generously says that the course changed her life, and she has since changed mine.

Recently, Karen helped me found Method R Corporation. She’s our director of education and consulting. Many of you have met Karen already in a classroom.

Karen is an excellent teacher (that means more than “excellent instructor”), and she’s just one of those rare people whose word, once given, is as good as a COMMIT. She is also one of the best SQL optimizers I know, on top of being a pioneer and first-rate practitioner of the techniques Jeff and I talk about in Optimizing Oracle Performance.

She has already taught me many things, and I’m eager to see what she’ll have to say online.

Friday, May 16, 2008

May 28 seminar, Minneapolis

Today I'm making preparations for another public event: this one is a one-day Performance Seminar I'll conduct in the Minneapolis area for Speak-Tech on May 28. In the morning, I'll do a "Why you can't see your real performance problems" session, and in the afternoon, I'll do "Measure once, cut twice (no, really)," which I discussed briefly here yesterday.

I'm looking forward to a lot of audience interaction on this one. We should have plenty of time on the 9:00am-4:30pm agenda for discussion.

Thursday, May 15, 2008

Dilbert on "Measure Twice, Cut Once"

Speaking of "measure once, cut twice," here is a good Dilbert strip to get you in the mood:

Getting Ready for ODTUG

It's been too long since I've blogged. Over the past few weeks, I've been busy doing all the little things you have to do when you start a business. You know, web pages, contracts, business cards, email, insurance, health care, payroll, bills, furniture, vacuuming the floor, more contracts, and so on.

Today I received a timely message from Mike Riley of ODTUG asking the speakers at the event to please blog about our upcoming participation in ODTUG Kaleidoscope 2008 next month in New Orleans. Excellent idea. I like ODTUG a lot, because it's a rare event that I attend where a lot of software developers get together. These are the people who have the most leverage over software performance, which is my life's work.

On Wednesday, June 18, I'll be presenting a paper called "Measure once, cut twice (no, really)." I had to put the "no, really" in there to make people understand that it wasn't a typo. I presented this topic for the first time at the Hotsos Symposium in March, and I was reasonably happy with it, as first presentations of a topic go. Here's the abstract, in case you don't want to click away from here just now:
“Measure Twice, Cut Once” is a reminder that careful planning yields better gratification than going too quickly into operations that can’t be undone. Sometimes, however, it’s better to measure once, cut twice. It’s one of the secrets behind how carpenters hang square cabinets in not-so-square kitchens. And it’s one of the secrets behind how developers write applications that are easy to fix when they cause performance problems in production use. The key is to know which details you can plan for directly, and which details you simply can’t know and therefore have to defend yourself against. In this session Cary will discuss some aspects of software development where flexible design is more important than detailed planning, using woodworking analogies for inspiration. Cary will describe some particular flexibilities that your software needs, and he’ll describe how to create them.
It's essentially an exploration of why I think agile development methods work so well (for some personality types), with examples both from work and from the home wood shop.

I hope to see you there.

Tuesday, April 22, 2008

Messed-Up App of the Day

I hate to complain so much, but having spent my third 8- to 10-hour stretch using a really bad application this month, I'm compelled to say something. Today's Messed-Up App: the economy-class seat in an American Airlines Boeing 777 aircraft.

Alright, I get that in economy class, you're not going to get five feet of legroom, or 30-inch wide seats that lie flat or spin to face each other. That's okay. What I do get is transportation to Europe with a cash savings, relative to an upgraded fare, sufficient to purchase—if I wanted—this Rolex. And, actually, it's really nice how each seat in economy class has its own in-seat audio/video unit. You can watch whatever you want back there. Or nothing at all. That's very nice.

But the way someone designed the A/V remote control into the armrest is just wrong. Each remote snaps into a compartment designed into the top of the armrest. Here's what it looks like.

And so here's your decision tree: Either you'll rest your arms on the armrests, or you will not. Not much of a decision there. Even if you decide not to rest your arms on the armrest, you'll probably do it accidentally if you have the good fortune to fall asleep (which you better do, because you have to work tomorrow).

The remaining decision is influenced by the following observation: If you leave the A/V remote in its cradle, your arms inadvertently push the "TV On/Off" and "Channel Up/Down" buttons. Fortunately, the "Call Attendant" button is difficult to press accidentally. The other choice is to take the remote out of its cradle and put it in your lap so that you won't accidentally press the buttons while you're watching 30 Rock. The problem with taking the thing out of its cradle is that resting your arm on the resulting pointy-thinged chasm becomes really uncomfortable after you've sat there for a while.

Oh, and you don't get to make a choice for your other armrest, which looks like this:

That's your neighbor's A/V remote.

...Whose buttons you're pushing accidentally with your other elbow, if you happen to be wider than your neighbor.

The only decent workaround I've found is to put your pillow underneath your arm on one side, and your blanket on the other. This brings other troublesome compromises into play, which I won't go into. One of them requires communicating with your neighbor.

The whole point, and what I believe makes this story relevant for software developers, even if you don't travel economy class to Europe very often, is this. People who design things need to actually use the things they design. If the person in charge of designing seats for American Airlines' Boeing 777 aircraft had been required to sit in a prototype of this particular seat for the 8-hour flight from DFW to LHR (or even better: the 10-hour ride back), this seat would never have made it into production.

People should test their designs by actually using them, under circumstances as similar as possible to the actual circumstances that the users of the design will endure.