Thursday, October 28, 2010

Steve Perry is loving this World Series!

On-and-off this summer, Steve Perry, who is/was the lead singer of Journey, would get interviewed by the guys on KNBR (the local sports radio).

They talked about how he enjoyed baseball, and how happy he was that the Giants were doing well, and how much fun he had just going to the games and relaxing.

The radio hosts kept trying to convince him to sing the national anthem at one of the games, but he didn't want any part of that.

So, well, to cut a long story short, read this, this, or this, or, better, just watch this.

A short trip to Victoria

My day job has offices around the world, including an office in Victoria, British Columbia.

Recently I was offered a chance to take a short trip to Victoria, to meet our Canadian employees and do some technical exchange work. Although most of the time was spent in airplanes, hotels, and conference rooms, I still had a chance to look around Victoria and get a quick taste of what seems to be a fascinating part of the world.

Our office in Canada is small, but quite successful. It is about evenly split between Technical Support and Product Development. I tremendously enjoyed meeting both teams, and we found we had plenty to discuss in the short time available.

When not in the office, I had a few chances to walk around downtown Victoria. Our hotel was the Magnolia; I recommend it very highly. The location is superb, the facilities were excellent, and the service was top-notch.

If you get to travel to Victoria, look for a direct flight. Although there are many connections via Seattle, Vancouver, etc., it is vastly superior to fly directly to Victoria.

Everyone in Victoria was extraordinarily kind and welcoming. My favorite story: on the stretch of road in front of the Fairmont Empress, there is a lane available adjacent to the curb. This lane is marked for parking, with signs:

Tourist Parking Only, 6:00 AM to 9:00 PM


The signs are observed!

I took a few pictures on my walks. Unfortunately, the picture of the statue of Emily Carr with her monkey Woo and her poodle Billy came out too blurry to post, but here's a nice website with the details.

Here's a picture I took of the memorial documenting the 1825 treaty that established the boundary between Russian America and British territory, in what are now British Columbia and the Yukon:
From BryanVictoria2010


And here's a picture of a nice statue of Captain Cook, who travelled to the island in 1778 I believe.
From BryanVictoria2010

Near the end of the plaque, it reads:

Also on the voyage was Midshipman George Vancouver.


Here are a couple of nice views of our office:
From BryanVictoria2010

and
From BryanVictoria2010


And here's the view from the hotel (I told you it was a nice location):
From BryanVictoria2010

That's the Fairmont Empress in the foreground, and the Parliament building in the background. In between them is the Royal BC Museum, though you can't really see it in this picture.

Yes, it was rainy, and cold, and grey, but I enjoyed it very much.

Hopefully I'll get a chance to return, perhaps in the spring, when they say the flowers are in bloom and it's one of the prettiest spots on the planet.

A micro-review of Stieg Larsson's Salander trilogy

Here's a feeble attempt to condense all three books into a single short review (if you want longer reviews, there are plenty of places to find them!):

  • The Girl With The Dragon Tattoo is an almost-perfect mix: one part action thriller, one part character study, one part modern Swedish history. Delicious!

  • The Girl Who Played With Fire is almost pure action thriller. It's a roller-coaster adventure ride, and will leave you completely breathless. If you like Thomas Harris, Jeffrey Deaver, authors like that, you'll be enthralled.

  • The Girl Who Kicked The Hornet's Nest swings the needle back the other way: it's about three-quarters modern Swedish history (politics, journalism, public policy, etc.) and about one-quarter action thriller. Moreover, the thriller portion involves much less action and much more high-tech computer espionage, psychological maneuvering, and the like. Thankfully, the dry parts occupy the first half of Hornet's Nest, and the second half moves along quite nicely; I suspect most readers, having experienced books one and two, will give Larsson the courtesy of being patient while he addresses things that must have felt important to him (I know I did).



The nice thing is that you'll probably know within the first 50 pages of Dragon Tattoo whether these books are your thing or not.

Saturday, October 23, 2010

Giants are in the World Series!

Great 6th game! Can't wait for the series to start!

Learning from the active debates on Open Source

Several interesting debates regarding the future of "Open Source" are currently underway. I'm more than just an interested observer, since I'm a committer on Derby, an Apache open source project, but I think these debates matter to anyone who cares about software and how it is developed.

Firstly, there is the discussion over the future of Java, where there have been multiple developments during 2010, including:

  • Oracle's purchase of Sun

  • Oracle's suit against Google over Android

  • The upcoming JCP elections and the changes underway to the JCP

  • Apple pulling back from Java


I've covered a lot of these events on my blog earlier; here are a couple of follow-ups with some recent news:

  • Regarding Apple's position on Java, James Gosling posted a short essay last night. In it, he points to the recent article on MacRumors.com, as well as to the on-going SoyLatte project to provide an OpenJDK port to Mac OS X. According to the MacRumors.com article, Steve Jobs defended Apple's decision by saying:

    Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.

    and that seems like a very valid and reasonable perspective, from Apple's point of view. But it's still not clear to me what alternative there will be: will Oracle be providing Java for Mac OS X? Is Apple hoping/expecting that the community will provide Java for Mac OS X? As Gosling notes, "in the early days, they were insistent on doing the port themselves", but Apple seems to be saying very little in this area right now, leaving a big void where people are trying to guess what will happen.


  • Regarding the JCP elections, Doug Lea, one of the most active and most important researchers in the Java world, has placed a short letter online describing his perspective on the current Java situation, and why he has decided not to run for re-election to the JCP, but rather to devote his efforts to the OpenJDK process:

    I believe that the JCP is no longer a credible specification and standards body, and there is no remaining useful role for an independent advocate for the academic and research community on the EC.

    ...

    I cannot recommend to anyone that they use the JCP JSR process, as opposed to some other group/organization/body, to gain consensus for proposed specifications. So I expect to see fewer submissions as people begin to realize that other venues provide better opportunities.

    When I saw the JCP ballot, I was surprised to see that Professor Lea was not running; he has been an extremely visible and popular member of that community for many years, so it is quite interesting to see how things look from his point of view.

  • Stephen Colebourne posts another short essay about the JCP elections here.



Another big discussion in recent weeks has been the Android-versus-the-world debate. First we had Oracle's lawsuit against Google regarding the development of Android. Then, this week, we had the big argument about whether Android itself is open or not, and what that might mean, fueled by Steve Jobs's five-minute remarks during the Apple quarterly earnings announcement, and some rather unusual responses from the Android developer team (tweeting your build instructions!).

For a fairly independent perspective, Joe Hewitt has a very well-written essay on his web log about what it means to be open, informed by his years of experience working with the Mozilla Foundation, which runs one of the most open development processes around. Hewitt says:

I cut my teeth in the software industry working on the Mozilla open source project, so when I hear others talk about openness, but see them omitting important facets like a public source tree and outsider commit privileges, my bullshit radar goes off. Mozilla's commitment to openness is about as genuine as you can possibly get, but then, the world of desktop browsers is hard to compare with the world of mobile operating systems. If Firefox had required subsidies and advertising to reach 20% market share, Mozilla may have had to make compromises too.


From my point of view, I agree with Hewitt that what makes "open source" open is all about the process, not about the code. My experience has been primarily with the Apache Software Foundation, and specifically with the Derby project, but I think that most Apache projects are broadly similar. The best description of the Apache process, I believe, is found on their web site: How the ASF works. I'd particularly highlight this aspect:

The Apache projects are managed using a collaborative, consensus-based process. We do not have a hierarchical structure. Rather, different groups of contributors have different rights and responsibilities in the organization.

This, fundamentally, is what distinguishes a process such as Linux, Mozilla, or Apache from efforts such as the JCP or Android. Tweeting your build instructions is definitely not an open source process, and Andy Rubin should know better than to claim it is; nor does your choice of license, by itself, define an open source process: although Android uses the Apache license, Android is not an Apache project.

I had other reasons for getting involved in Derby (I'm fascinated by DBMS internals, and I needed to use the software for several internal projects), but one of my motivations was to learn about open source development and about the Apache process. It's been a very interesting experience, and I recommend it to every developer. There is a wealth of open source projects out there: find one that interests you, and get involved. Learn about the process, learn about how open source development works. You won't regret the investment of time; it will make you a better software developer.

Anyway, as I said to start, they are interesting discussions. Hopefully the links and pointers give you something worth reading for your weekend.

On an unrelated note, I'm off to Canada for a 4-day business trip.

Posting will be light; I'm hoping the rain will be light as well :)

Friday, October 22, 2010

System sizes at the high end.

Here's a very impressive writeup of the recent Hadoop World conference in New York City, from David Menninger of Ventana Research.

Menninger notes that Hadoop installations are much larger than you might think:

How big is “big data”? In his opening remarks, Mike shared some statistics from a survey of attendees. The average Hadoop cluster among respondents was 66 nodes and 114 terabytes of data. However there is quite a range. The largest in the survey responses was a cluster of 1,300 nodes and more than 2 petabytes of data. (Presenters from eBay blew this away, describing their production cluster of 8,500 nodes and 16 petabytes of storage.) Over 60 percent of respondents had 10 terabytes or less, and half were running 10 nodes or less.

(in the above quote, "Mike" is Mike Olson of Cloudera.)

Curt Monash has been keeping track of some of these stupendous database installations, and shares some of what he's learned in this note.

At my day job, we had an internal presentation the other day from one of our larger customers, who reported that they've constructed a single node with 3 terabytes of RAM-SAN as a cache ahead of their main database disk array.

Our customer didn't think that was particularly large. They were just noting that it was plenty large enough for their use, at the moment.

Thursday, October 21, 2010

Apple is changing their support for Java on Mac OS X?

I'm a bit confused about whether the Apple announcement regarding Java on Mac OS X is significant or not.

Here's The Register's view.

Here's CNET's view.

In general, building and maintaining a JVM for a platform is very expensive, and it's not clear that we need multiple JVMs on a platform. So if Apple no longer builds their own Java environment for the Mac, will there be a Sun/Oracle version? Will there be an OpenJDK version?

Just so long as there is something...

Dropbox looks pretty useful

I don't know why I hadn't been paying attention to Dropbox before. It looks pretty useful. I'm not quite sure how long it's been around, but it seems to be fairly mature, and from what I've heard it's pretty reliable.

Other than on the Dropbox site, are there good usage stories out there about people who use Dropbox, what they use it for, and what their experiences have been? I found this nice writeup; are there other similar articles I should read?

JCP Elections

The 2010 Java Community Process Executive Committee Elections are underway.

You can find some interesting discussions about the candidates here and here.

I think they are all good candidates; it is a healthy thing to see such a long list of strong candidates. For the time being, at least, it shows that interest in making the Java community work remains alive.

By the way, I'm not a voting member, don't know anybody who's a voting member, and don't have any particular candidates to recommend.

Tuesday, October 19, 2010

Having trouble upgrading to Maverick Meerkat from behind a proxy

I had no trouble upgrading my home machine to Ubuntu 10.10 (Maverick Meerkat), but at work I'm stalled.

I think the problem is due to some sort of interaction with my corporate proxy server.

I can apply normal system updates via the proxy, but when I try to perform the upgrade from 10.04 (Lucid Lynx) to 10.10, the upgrade tool fails with the error message:

ERROR: No 'ubuntu-minimal' available/downloadable after sources.list rewrite+update.


Some searching has found others with similar problems, and lots of fingers pointed at the proxy server interactions, but no obvious fix that I've seen yet.

Anybody done an upgrade from 10.04 to 10.10 via a proxy server? Did you have any problems? Is there some configuration step I may have missed? Some log file that may have more clues?

I've read through the logs in /var/log/dist-upgrade, and I don't see anything obvious, beyond the error message above.

Monday, October 18, 2010

I obviously am not a lawyer

Not only am I not a lawyer, I'm not even a very legally-aware layperson. I'm a systems software engineer. But I truly don't understand how an apple can be intellectual property. What branch of intellectual property law covers apples? Are they registered trademarks? Are they copyrighted? Are they patented?

It must be patent law. This article in AgWeek says:

As the university’s lucrative patent on the Honeycrisp was about to expire, the school launched the SweeTango — a cross between the Honeycrisp and its Zestar! — to keep revenue flowing to support its cold-climate fruit research.


According to About.com, plants were not considered to be patentable until 1930, after Luther Burbank had already died.

But somehow he received a number of patents on his plants after he was dead.

Another article on About.com says that plant patents last for 20 years, and give the inventor:

the right to exclude others from asexually reproducing, selling, or using the plant so reproduced.


I guess I should stick to writing systems software :)

Focusing his efforts in the broader area

I guess this had been a pretty well-known rumor for some time, but now it's official: Ray Ozzie is retiring from Microsoft.

You have to love the carefully-chosen words:

Following the natural transition time with his teams but before he retires from Microsoft, Ray will be focusing his efforts in the broader area of entertainment where Microsoft has many ongoing investments.


Perhaps the most interesting part of the short Microsoft announcement was this: "the CSA role was unique and I won’t refill the role after Ray’s departure". The role may well have been "unique", at least if "unique" means "only two people ever held it"; as everybody knows, prior to Ray Ozzie the Chief Software Architect role was filled by some guy named Bill.

Well, hmmm, what do we make of all this? Where is Microsoft going, and how do they intend to be part of the ongoing computer industry? I'm sure the next few weeks will be filled with lots of speculation by lots of people, but if you see any informative assessments, drop me a line and let me know!

VLDB 2010 proceedings are online

The conference proceedings from last month's VLDB 2010 conference in Singapore are now online. In particular, you can find the published research papers here.

This conference is the most important annual conference in the database field, a field which has become so sophisticated and mature that the research work was sub-divided into three dozen separate sub-areas.

Although there was much of interest to me in these proceedings, and hopefully I'll find more time to dig through them in greater detail, from my long-standing background in storage systems I immediately latched on to several of the topics discussed in Session 20, Session 29, and Session 37:


Session 20: Databases on Modern Hardware

p.660: Complex Event Detection at Wire Speed with FPGAs
Louis Woods (Eidgenössische Technische Hochschule Zürich, Switzerland), Jens Teubner (Eidgenössische Technische Hochschule Zürich, Switzerland), Gustavo Alonso (Eidgenössische Technische Hochschule Zürich, Switzerland)

p.670: Database Compression on Graphics Processors
Wenbin Fang (The Hong Kong University of Science and Technology, People’s Republic of China), Bingsheng He (Nanyang Technological University, Republic of Singapore), Qiong Luo (The Hong Kong University of Science and Technology, People’s Republic of China)

p.681: Aether: A Scalable Approach to Logging
Ryan Johnson (Carnegie Mellon University, United States of America), Ippokratis Pandis (Carnegie Mellon University, United States of America), Radu Stoica (Ecole Polytechnique Fédérale de Lausanne, Switzerland), Manos Athanassoulis (Ecole Polytechnique Fédérale de Lausanne, Switzerland), Anastasia Ailamaki (Ecole Polytechnique Fédérale de Lausanne, Switzerland)

Session 29: Workflows, Transactions and Business Processes

p.928: Data-Oriented Transaction Execution
Ippokratis Pandis (Carnegie Mellon University, United States of America), Ryan Johnson (Carnegie Mellon University, United States of America), Nikos Hardavellas (Northwestern University, United States of America), Anastasia Ailamaki (Ecole Polytechnique Fédérale de Lausanne, Switzerland)

Session 37: Indexing Techniques
p.1195: Tree Indexing on Solid State Drives
Yinan Li (University of Wisconsin-Madison, United States of America), Bingsheng He (The Hong Kong University of Science and Technology, People’s Republic of China), Robin Jun Yang (The Hong Kong University of Science and Technology, People’s Republic of China), Qiong Luo (The Hong Kong University of Science and Technology, People’s Republic of China), Ke Yi (The Hong Kong University of Science and Technology, People’s Republic of China)

p.1207: Efficient B-tree Based Indexing for Cloud Data Processing
Sai Wu (National University of Singapore, Republic of Singapore), Dawei Jiang (National University of Singapore, Republic of Singapore), Beng Chin Ooi (National University of Singapore, Republic of Singapore), Kun-Lung Wu (IBM Thomas J. Watson Research Center, United States of America)



If, like me, you're endlessly fascinated by database system internals, you'll undoubtedly find much to read in these proceedings. Enjoy!

Sunday, October 17, 2010

try-with-resources and JDBC 4.1 auto-closable objects

One of the major annoyances with JDBC programming involves closing your objects when you're done with them. Connections, Statements, and ResultSets all need to be closed. Some JDBC drivers include finalizers for these objects, but depending on the garbage collector to clean up your JDBC objects can lead to lots of unwanted consequences.

So it's best to close the objects as soon as you're done, as in:

Statement s = conn.createStatement();
ResultSet rs = s.executeQuery(...);
while (rs.next())
{
    // ... use the data ...
}
rs.close();
s.close();


Unfortunately, once you start involving exceptions in the code, you want to try to ensure that you close your objects in all exit paths from your function, so you start writing try blocks and finally blocks, and putting your close calls in your finally blocks.

And then it becomes more of a mess because the close calls themselves can throw exceptions, so you have to put the close calls in try blocks, and pretty soon your finally-try-close-catch logic seems like it's overwhelming your actual database execution logic in your program.

It's the bane of all JDBC programmers, and it's been with us for a decade or more, ever since JDBC was invented.

But hope springs eternal! One of the features that is still alive in the ever-delayed, volatile, unpredictable release that JDK 1.7 has become is known as "try-with-resources", and it promises to greatly reduce this JDBC PITA.

If you want to learn more about try-with-resources, you'll need to know that it's described under Project Coin: Automatic Resource Management, or you can go read the original spec by Joshua Bloch.

Or, if you just want to see how much better this makes the life of the average JDBC programmer, here's a nice short essay by Arul Dheslaseelan with code blocks to illustrate.
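
To make the improvement concrete, here's my own minimal sketch of the pattern (not code from Arul's essay): it assumes a JDK 7 compiler with try-with-resources and a JDBC 4.1 driver whose Connection, Statement, and ResultSet implement AutoCloseable, and the URL and query are just placeholder parameters.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TryWithResourcesSketch
{
    public static void dumpFirstColumn(String url, String query) throws SQLException
    {
        // Each resource declared in the try header is closed automatically,
        // in reverse order, whether the block exits normally or via an exception.
        try (Connection conn = DriverManager.getConnection(url);
             Statement s = conn.createStatement();
             ResultSet rs = s.executeQuery(query))
        {
            while (rs.next())
            {
                System.out.println(rs.getString(1));
            }
        }
        // No finally block, and no nested try/catch around each close() call.
    }
}

A nice bonus: if a close() fails after the body has already thrown, the close() exception no longer clobbers the original one; it gets attached to it as a "suppressed" exception.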

Friday, October 15, 2010

Original Scraper Bike Team

It's barely 10 miles away from my house, but it's another world entirely. If you've got 7.5 minutes to spare, sit down in front of your computer and watch this documentary about the Original Scraper Bike Team, narrated by the Bike King himself.

Unfortunately, the web site is astoundingly slow; be prepared to have to wait 15 minutes for the video to download. Does anybody know why this video is so slow to load? Are all Vimeo videos like this?

Update: Try the YouTube link; it's much faster. Why is Vimeo so slow?

Grant Thornton and the vanishing IPO market

I've been in the software industry for nearly 30 years at this point, most of the time in fairly small software companies. I've worked at multiple venture-funded companies, and been through an IPO. I mention this merely to substantiate the fact that I've been paying close attention to the venture-funded software industry, and to the role of the IPO in how the industry operates.

One of the massive changes in this industry over the last decade has been that the IPO as a vehicle for software company funding has essentially vanished. In the 1980's and 1990's there were hundreds of software company IPO's, but since the turn of the century those events basically don't occur anymore. Instead, what happens, so far as I can tell, is that software companies are sold to large, established firms (Google, IBM, Microsoft, HP, Oracle, etc.).

I've wondered, for quite some time, about why this change occurred, but had never found very good information that explained it. People would sort-of wave their hands and blame "the economy", or "the dot-com bubble", or such similar nebulous explanations.

So I was fascinated, a few weeks ago, to stumble upon this analysis by the accounting and advisory firm Grant Thornton. In considerable detail, with analysis and explanation, the Grant Thornton paper shows why the IPO market vanished, and describes the effect that this has had on American industry.


It's no mystery to people who work in the venture capital industry that in order to drive returns for investors in their funds, they've monetized returns by seeking "liquidity events" away from the public markets. While there is an array of liquidity options -- including alternative listing venues, such as the NASDAQ Portal, the AIM (London) or the TSX (Canada) -- most of these options have their own limitations and satisfy only a small fraction of liquidity needs. As a result, most companies today never make it public. Instead, the exit workhorse of venture capital is now the sale of a portfolio company to mostly strategic (large corporate) acquirers.


The paper describes a "Perfect Storm" of changes:

The Great Depression in Listings was caused by a confluence of technological, legislative and regulatory events — termed The Great Delisting Machine — that started in 1996, before the 1997 peak year for U.S. listings.

These changes, including:

  • Sarbanes-Oxley Act

  • Gramm-Leach-Bliley Act

  • online brokerages

  • decimalization

  • Order Handling Rules

  • Regulation FD

  • Regulation NMS


among others, together changed the conditions so that the IPO market that had existed in the 1980's and 1990's rapidly evaporated:

The United States enjoyed an ecosystem replete with institutional investors that were focused on the IPO market — active individual investors supported by stockbrokers and a cadre of renowned investment banks, including L.F. Rothschild & Company, Alex. Brown & Sons, Hambrecht & Quist, Robertson Stephens and Montgomery Securities, that supported the growth company markets for many years. None of these firms survives today. Firms have attempted to fill the void and have found that the economic model supported by equity research, sales and trading no longer works.


They make a persuasive argument that

this is not just an IPO problem. It is a severe dysfunction that affects the macroeconomy of the U.S. and that has grave consequences for current and future generations.


Of course, the authors have their own axe to grind; every analysis has its biases, and that's unavoidable.

Furthermore, we couldn't simply turn the clock back, and return to the economic and policy structures of the 1990's; the world has moved on.

Still, it's a very interesting argument, and if you're at all intrigued by these ideas, it's worth a read.

Wednesday, October 13, 2010

Interesting git presentation

If you're still trying to get your head around git, here's an interesting short presentation I came across. It won't do more than whet your appetite, but if you're interested in git you may find it worth a read.

Tuesday, October 12, 2010

The next Java shoe falls

IBM have made a big announcement regarding Java:

"IBM, Oracle and other members of the Java community working collaboratively in OpenJDK will accelerate the innovation in the Java platform," said Rod Smith, vice president, emerging technologies, IBM. "Oracle and IBM's collaboration also signals to enterprise customers that they can continue to rely on the Java community to deliver more open, flexible and innovative new technologies to help grow their business."


IBM's Bob Sutor has a short essay with some more details:

It became clear to us that first Sun and then Oracle were never planning to make the important test and certification tests for Java, the Java SE TCK, available to Apache. We disagreed with this choice, but it was not ours to make. So rather than continue to drive Harmony as an unofficial and uncertified Java effort, we decided to shift direction and put our efforts into OpenJDK.


Mark Reinhold has a short post:

I’m very pleased that IBM and Oracle are going to work more closely together, and that we’re going to do so in the OpenJDK Community. IBM engineers will soon be working directly alongside Oracle engineers, as well as many other contributors, on the Java SE Platform reference implementation.


Henrik Stahl at Oracle also has a short post, though it is clear nowadays that Oracle keep a very tight lid on their employee bloggers, and there isn't much to read beyond this:

IBM joining the OpenJDK community is a great win for Java, as it will enable IBM, Oracle and all other contributors to pool resources and accelerate innovation while ensuring strict compatibility across different implementations.


So, who are these "all other contributors"? Well, there are probably several groups of substantial interest who aren't really discussed in the notes above: Apache and other open source groups; SAP, Apple, and other industry players; and, of course, Google.

Stephen Colebourne has a fascinating look at things from the Apache perspective:

Pretending that Sun behaved with the slightest element of decency in this matter is being utterly blind to the facts. They made an executive level choice to shaft the Apache Software Foundation with the explicit knowledge that they would not be sued by a Not-For-Profit.


Simon Phipps also rounds up several other viewpoints at his website.

Tim Ellison, one of the Harmony committers, has an extremely short post:

I believe that compatibility is vital, and rather than risk divergence the right thing is to bring the key platform development groups together on a common codebase.

Since Harmony was basically an IBM effort, it seems like almost a certainty that the IBM Harmony team will move over to the OpenJDK codebase, and that will be that.

What is the outcome for Java? Will it become the Oracle corporate language, with Apple, Microsoft, Google, and others going their own separate ways, and IBM walking a careful line to preserve their immense Java investment? Will there be a general-purpose Java programming environment for systems like Linux, Mac OS X, Android, or even Windows? Which cell phone platform will become dominant, and what will its programming environment be? Where will the next great programming language emerge from, and how will an industry/community form around it? Is that language JavaScript, Go, Scala, or something else yet to be invented?

My own experience with Java is changing. I've enjoyed being a part of the open source Derby project, but I also realize that there are large corporate interests at work here, and it's not an easy thing to understand what the future may hold (insert Yogi Berra joke here).

It's a fascinating process to watch; I'll be continuing to try to understand what is occurring. If you have ideas about what this means, let me know.

Monday, October 11, 2010

This post was brought to you by Maverick Meerkat

Maverick Meerkat, of course, is the alternate name for Ubuntu 10.10, which came out Sunday (10/10/10, heh).

I pressed the big button tonight and upgraded, and it seems to have gone successfully. It's late, though, so a more complete report will have to wait until another time.

The one thing worth noting, though, is that I once again made the same mistake I've made before: I forgot about the setting buried inside "Software Sources" that tells the Update Manager to show only Long Term Support releases. Until I changed it to show normal releases, no upgrade was offered at all.

Once I did that, things seemed to go fine, though it took 3+ hours...

60 minutes on HFT

The lead story on last night's 60 Minutes show involved HFT and the complexities of the modern stock exchanges. I thought it was a pretty good story; it's definitely hard to cover information this complex in a 15-minute segment, but they did a pretty careful and thorough job.

You can find detailed information about their story on the 60 Minutes web site.

When it comes to GCC warnings, all does not mean all

The GNU Compiler Collection, or GCC, is a stable and mature C/C++ compiler, with perhaps the most extensive set of command-line options ever provided by any program.

Lately, I've been studying the options for generating warnings, as I've been trying to learn more about my code by getting the compiler to tell me things that it notices.

At first, it seemed simple enough: just specify -Wall, which ought to mean "turn on ALL the warnings."

However:

-Wall

All of the above `-W' options combined. This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid (or modify to prevent the warning), even in conjunction with macros. This also enables some language-specific warnings described in C++ Dialect Options and Objective-C and Objective-C++ Dialect Options.


And then, you get to:

The following -W... options are not implied by -Wall. Some of them warn about constructions that users generally do not consider questionable, but which occasionally you might wish to check for; others warn about constructions that are necessary or hard to avoid in some cases, and there is no simple way to modify the code to suppress the warning.


And it's a long list; I counted about 50 warnings in the list of available-warnings-which-aren't-implied-by-Wall.

So, the bottom line is: if you're using GCC, and you're trying to investigate compiler warnings, and you think that -Wall is giving you the warnings that you should study, well, you're definitely not seeing all the warnings and you may not even be seeing the most interesting warnings!

It's worth spending time reading through all the warnings and looking for ones that match your organization's style and coding techniques, and picking out as many as you can to enable. Let your compiler help you!

Saturday, October 9, 2010

Google's robot cars

Somehow, I have a feeling that this will be somewhat controversial.


So we have developed technology for cars that can drive themselves. Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard. They’ve driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe.


It's quite impressive work; Google apparently hired a number of people who have been working on the DARPA Challenges, a multi-year DoD-sponsored effort to build autonomous vehicles.

And Google claim that they have been careful:

Safety has been our first priority in this project. Our cars are never unmanned. We always have a trained safety driver behind the wheel who can take over as easily as one disengages cruise control.


Still, people will talk. And, by keeping it a secret until now, aren't Google in some ways rather inviting such criticism?

Meanwhile, where do I go to learn more about the software aspects? (Yes, that's me: always the software geek.)

Friday, October 8, 2010

Many questions still swirl regarding the "Flash Crash"

It's another Friday afternoon, a week has passed, and I'm still baffled by the economy, the stock market, financial engineering, etc. etc. etc. :)

Last week's SEC report on the Flash Crash offered lots of fascinating information, but it seems that there is still much that we don't know.

Most analysis of the crash has concluded that the single E-mini futures trade performed by Waddell & Reed's Ivy Asset Strategy mutual fund is pretty much the entire story. For example, the widely-published rant by the CEO of Tradebot apparently treats this as "historic incompetence".

However, the guys over at Nanex are trying to understand why so many people still think that the Waddell & Reed trade is the be-all and end-all of the analysis. As they say,

The bulk of the W&R trades occurred after the market bottomed and was rocketing higher -- a point in time that the SEC report tells us the market was out of liquidity.


I'm not sure if this is exactly what the Nanex team is referring to, but here's part of the SEC report regarding liquidity:

If this example is typical of the price patterns at that time, and given that the internalizer would have routed to the exchange with the best available price, it seems that the general withdrawal of liquidity that led to broken trades was at least as prevalent on NYSE Arca as it was on Nasdaq and BATS. This suggests that if Nasdaq or BATS had re-routed orders to NYSE Arca, then these orders would have also been executed at unrealistically-low prices on NYSE Arca and subsequently broken. From this example it does not seem that self-help led to orders “routing around” liquidity at NYSE Arca, but rather that liquidity had been withdrawn across all exchanges, including NYSE Arca.


The Themis Trading team have some strong words regarding HFT and whether or not volume is the same thing as liquidity:

Yet we have recently heard the head of electronic trading at a major bulge bracket firm claim that the culprit in the flash crash was the market order. I’m not kidding. He said it in an editorial in Traders Magazine.
If you can’t handle market orders in what’s supposedly a very liquid market, it goes to show you that volume is not the same thing as liquidity. If the HFT crowd is providing liquidity for investors and lowering costs, then why can’t we handle a simple 100-year-old order type in a market whose volume has increased 300%? What does it say when one of the guys who is playing the game is telling the world: “Do not trust our market because we can’t handle a market order”?

...

Our view is that HFTs provide only low-quality liquidity. In the old days, when NYSE specialists or Nasdaq market makers added liquidity, they were required to maintain a fair and orderly market, and to post a quote that was part of the National Best Bid and Offer a minimum percentage of time. HFTs have no such requirements. They have no minimum shares to provide nor do they have a minimum quote time. They can turn off their liquidity at any time — as we saw quite clearly on May 6. What’s more, HFT volume can generate false trading signals, causing other investors to buy at higher prices, or sell at lower ones, than they otherwise would.


Now, over at the St. Petersburg Times, Robert Trigaux is wondering why people aren't studying the "mini-flash-crash" that occurred in Progress Energy stock on Sep 27th:

In minutes, shares dropped from $44.60 to $4.57 — an 88 percent decline — only to bounce back within seconds to just under $44.


I liked this computer-science perspective over at Ars Technica, which, with a rather strained argument, applies systems analysis to the stock market and treats it as a complex system: The stock market as a single, very big piece of multithreaded software. It makes the same point that many are making: volume is not the same thing as liquidity. And it proposes a view of the stock market as a distributed message-passing system:

The price of, say, AAPL at any given moment is a numerical value that represents the output of one set of concurrently running processes, and it also acts as the input for another set of processes. AAPL, then, is one of many hundreds of thousands of global variables that the market-as-software uses for message-passing among its billions of simultaneously running threads.


It's a pretty interesting idea, I think.
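
Just to make the analogy concrete, here's a toy Java sketch of my own (nothing from the Ars Technica piece, and certainly not a market model): one thread publishes a symbol's last price into a shared variable, while another thread polls that same variable and reacts to it.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy rendering of the "price as a shared, message-passing variable" analogy.
public class MarketAsSoftware
{
    static final ConcurrentMap<String, Double> lastPrice =
            new ConcurrentHashMap<String, Double>();

    public static void main(String[] args) throws InterruptedException
    {
        lastPrice.put("AAPL", 300.0);

        Thread publisher = new Thread() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    // the output of one set of concurrently running processes...
                    lastPrice.put("AAPL", 300.0 + Math.sin(i) * 10.0);
                }
            }
        };

        Thread subscriber = new Thread() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    // ...and the input for another set of processes
                    if (lastPrice.get("AAPL") < 295.0) {
                        System.out.println("signal: AAPL looks cheap");
                    }
                }
            }
        };

        publisher.start();
        subscriber.start();
        publisher.join();
        subscriber.join();
    }
}

There's no central coordinator anywhere in that picture; each thread sees only the shared variables, which is exactly the point the Ars Technica piece is making.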

I also found this related work from Grant Thornton interesting; they claim that Regulation NMS and the HFT industry, together with the switch to for-profit stock exchanges, are largely to blame for the collapse of the IPO market as a tool for capital creation in our economy:

The IPO Crisis is primarily a market-structure-caused crisis, the roots of which date back at least to 1997. The erosion in the U.S. IPO market can be seen as the perfect storm of unintended consequences from the cumulative effects of uncoordinated regulatory changes and inevitable technology advances — all of which stripped away the economic model that once supported investors and small cap companies with capital commitment, sales support and high-quality research.


Grant Thornton say that the liquidity loss is particularly steep for small cap stocks, and that this is one of the big reasons why no new small-cap stocks are appearing in the market (i.e., why there aren't any IPOs anymore).


Lastly, from the folks at Themis Trading, I found this quite interesting link: Equity Trading in the 21st Century. From the abstract:

“Make or take” pricing, the charging of access fees to market orders that “take” liquidity and paying rebates to limit orders that “make” liquidity, causes distortions that should be corrected. Such charges are not reflected in the quotations used for the measurement of best execution. Direct access by non-brokers to trading platforms requires appropriate risk management. Front running orders in correlated securities should be banned.


I'll try to dig into the paper as I find time, and I'll continue trying to plow through the massive SEC report. If you see any good resources in this area, please let me know.

Thursday, October 7, 2010

Modern storage subsystems are quite complex

This week, IBM announced a new mid-range storage subsystem, the "Storwize V7000". Storwize is a company that IBM purchased recently, although this particular product doesn't use any of the new technology, just the brand name. I'm not exactly sure what it means to be "mid-range", but I imagine it means something like "costs less than a new house".

Anyway, I was reading about the product on The Register, and I was particularly struck by this:

The EasyTier feature watches the data I/O pattern and moves the most active sub-LUN-sized pieces of data, called extents, up to the SSD tier for the fastest response.

"Having 6 per cent of the capacity being solid state can deliver around 200 per cent performance improvement," said Doug Balog, an IBM VP and the disk storage business line executive. "EasyTier was developed by IBM research and monitors sub-LUN pieces (extents) and puts hot extents on SSD with the rest on SATA. The extent size 16MB to 8GB and is settable, with the default being 256MB. The system learns over time as it watches the data patterns; it's autonomic."


As they say, "this is not your father's disk drive".

A LUN, by the way, is a Logical Unit Number, and has to do with sub-dividing these modern gigantic pools of storage into smaller units.
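
Just to give a feel for what sub-LUN tiering boils down to, here's a toy sketch of my own (emphatically not IBM's EasyTier algorithm): count the I/Os landing in each fixed-size extent, and periodically nominate the hottest extents for promotion to the SSD tier.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of sub-LUN tiering: count accesses per fixed-size extent,
// then report which extents would be promoted to the SSD tier.
public class ToyTieringMonitor
{
    private final long extentSizeBytes;
    private final Map<Long, Long> accessCounts = new HashMap<Long, Long>();

    public ToyTieringMonitor(long extentSizeBytes)
    {
        this.extentSizeBytes = extentSizeBytes;   // e.g. 256 MB, the default mentioned above
    }

    // Called on every I/O issued against the LUN.
    public void recordIo(long byteOffset)
    {
        long extent = byteOffset / extentSizeBytes;
        Long count = accessCounts.get(extent);
        accessCounts.put(extent, count == null ? 1L : count + 1L);
    }

    // The n busiest extents are the candidates for the SSD tier.
    public List<Long> hottestExtents(int n)
    {
        List<Map.Entry<Long, Long>> entries =
                new ArrayList<Map.Entry<Long, Long>>(accessCounts.entrySet());
        Collections.sort(entries, new Comparator<Map.Entry<Long, Long>>() {
            public int compare(Map.Entry<Long, Long> a, Map.Entry<Long, Long> b) {
                return b.getValue().compareTo(a.getValue());
            }
        });
        List<Long> result = new ArrayList<Long>();
        for (int i = 0; i < n && i < entries.size(); i++)
        {
            result.add(entries.get(i).getKey());
        }
        return result;
    }
}

The real product presumably worries about decay over time, migration costs, and so on; "watch the extents, move the hot ones" is just the core idea the article describes.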

Wednesday, October 6, 2010

Colony Collapse Disorder breakthrough?

The New York Times is reporting a possible breakthrough in the Colony Collapse Disorder mystery:

A fungus tag-teaming with a virus have apparently interacted to cause the problem, according to a paper by Army scientists in Maryland and bee experts in Montana in the online science journal PLoS One.


It's intriguing to read about some of the reasons that this disease, or syndrome, or whatever you call it, has been so hard to diagnose:

One perverse twist of colony collapse that has compounded the difficulty of solving it is that the bees do not just die — they fly off in every direction from the hive, then die alone and dispersed. That makes large numbers of bee autopsies — and yes, entomologists actually do those — problematic.


One of the techniques used in the investigation involved the deployment of a new software tool:

The Army software system — an advance itself in the growing field of protein research, or proteomics — is designed to test and identify biological agents in circumstances where commanders might have no idea what sort of threat they face. The system searches out the unique proteins in a sample, then identifies a virus or other microscopic life form based on the proteins it is known to contain.


Mysteries solved, knowledge gained, software developed: lots of good news there.

Now we just have to use the information to help the bees!

A dilettante's guide to HFT networking

HFT, of course, is High Frequency Trading, and it's all over the news nowadays, with topics like the Flash Crash, etc. being of great interest.

Now, please understand: I'm not a network engineer, and I'm not in the financial industry, and in general I have no idea what I'm talking about.

But I found myself quite interested in modern HFT systems, particularly when I read things like this:

They are to trading what Lamborghinis are to cars: smart, sleek, powerful and fast. Their modus operandi is to use the fastest trading tools on the Street, and to spray the market with millions of orders, with cancels immediately behind them. Besides the gobs of short-term liquidity they provide today in equities, their other contribution is the flickering quote.

And things like this:

Computer-assisted trading models need to react to fleeting opportunity windows that may last mere microseconds, opportunities that would never materialize if data-generated signals couldn’t set the appropriate pace.

The new paradigm is about direct-from-venue ultra-low latency data and per symbol/security subscription capabilities delivered in a normalized format.


Well, this certainly sounds good and technical and right up my alley! So I've been trying to learn more, and I thought I would share what I've learned so far, and maybe people will reply with more information and better sources and then I'll learn even more!

There seem to be two levels at which HFT Network Architecture must be considered:

  • Firstly, there is the hardware level, where we are primarily concerned with things like latency, bandwidth, and buffering.

  • Secondly, there is the network protocol level, where we are primarily concerned with protocol selection and system implementation.



Hardware: Switches, co-location, buffering, etc.

A lot of people are very focused on this layer, and it does seem important; a number of companies specialize in this market because it's such a big deal. For a good starting point, try the Cisco web site at http://www.cisco.com/go/hft, where you'll find all sorts of great resources, such as this white paper on High Performance Automated Trading Network Architectures.

As the Cisco white paper observes, the high-level issues of latency, bandwidth, and buffering can all be broken down into lower-level details. For example, consider latency. The paper notes that there are five latency contributors:

  1. Serialization Delay

  2. Propagation Delay

  3. Nominal Switch Latency

  4. Queueing Latency

  5. Retransmission Delay



Of course, if you're interested in this area, you probably already know that "Light takes about 3 usec to traverse 1 km in fiber", but if you're like me, you'll find the Cisco papers quite readable and informative.
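
As a back-of-the-envelope exercise, here's a tiny program that adds up those latency contributors for a hypothetical one-way trip. Every number in it, other than the 3 usec/km figure above, is an assumption I made up for illustration; none of it comes from Cisco's paper.

// Back-of-the-envelope one-way latency budget for a small order message.
// All inputs are illustrative assumptions, not vendor figures.
public class LatencyBudget
{
    public static void main(String[] args)
    {
        double messageBits     = 512 * 8;    // a ~512-byte order message
        double linkBitsPerSec  = 10e9;       // 10 GbE
        double fiberKm         = 2.0;        // co-located, short fiber run
        double switchHops      = 3;
        double perSwitchMicros = 0.7;        // assumed sub-microsecond cut-through switch
        double queueingMicros  = 5.0;        // assumed congestion during a microburst

        double serializationMicros = messageBits / linkBitsPerSec * 1e6;
        double propagationMicros   = fiberKm * 3.0;   // ~3 usec per km in fiber
        double switchingMicros     = switchHops * perSwitchMicros;

        double total = serializationMicros + propagationMicros
                     + switchingMicros + queueingMicros;

        System.out.printf("serialization %.2f us, propagation %.2f us, "
                + "switching %.2f us, queueing %.2f us => total %.2f us%n",
                serializationMicros, propagationMicros, switchingMicros,
                queueingMicros, total);
    }
}

Even in this made-up budget, the queueing guess dwarfs the serialization time, which is one reason the vendors talk so much about buffering and microbursts.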

For another take on this, a good resource is the Blade Network Technologies web site (Blade was just bought by IBM last week). For example, you can read about the monster RackSwitch G8124 in the product brief:

The RackSwitch G8124 provides line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying data, and large data-center grade buffers to keep traffic moving.

All ports non-blocking 10 GbE with deterministic latency of fewer than 700 nanoseconds.

480 Gbps non-blocking switching throughput (full duplex).


Wow! My head just spins! This is incredibly high-end stuff.

Network protocols: routing, messaging, etc.

Once you have the hardware in place, properly co-located and terminated into the data center of choice (e.g., the SAVVIS facility in Weehawken, or the BT Radianz facility in Nutley), you'll want to start understanding your choices in network protocols and messaging infrastructure.

I'm not really a hardware guy, so I found myself considerably more interested in the protocol and messaging aspects of the HFT technologies.

For an introduction to network protocols and system implementation, the best resource I've found, so far, is the BATS US Equity/Options Connectivity Manual, although there is a lot of other information online at the BATS website.

As the BATS manual points out, there are two basic messaging systems that you probably need to be thinking about:

  1. Market Data Feeds

  2. Order Entry



For Market Data Feeds, the manual states:

BATS offers five different types of market data feeds:



These data feeds are not for the faint-of-heart:

BATS requires that members allocate a minimum of 1 Gb/s per Multicast PITCH Gig-Shaped feed


Moreover, it is here that the importance of buffering is really brought home. As the BATS manual states:

During spikes in quote updates, members using less than sufficient bandwidth will experience queuing of their market data. Members using the same bandwidth to both receive quotes and transmit orders may expect their orders to be slightly delayed if they have less than sufficient bandwidth. Many members will find these delays unacceptable and should provision their bandwidth to reduce these delays.


Cisco make a similar point:

A microburst is a traffic pattern that causes short-lived congestion in the network. This is often due to a period of high activity that causes network endpoints to send bursts of traffic into the network, for example during high-volatility periods in the market. Benchmarking infrastructure performance during these high-volatility periods is especially important because this is when the automated trading businesses make the most profits.


For Order Entry protocols, it seems that it's all about FIX. So get thee to the FIX website and learn all about it:

FIX is the industry-driven messaging standard that is changing the face of the global financial services sector, as firms use the protocol to transact in an electronic, transparent, cost efficient and timely manner.


A particularly intriguing pointer in the FIX website is the link to the Plain English Business Practices repository at www.sifma.net. For example, here's the Agency Securities Market Business Practices. This is an interesting approach to trying to make this incredibly complex protocol intelligible to people like me.
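
To give a flavor of what FIX actually looks like on the wire, here's a toy sketch that assembles a New Order Single. The tag numbers (8, 9, 35, 49, 56, and so on) are standard FIX tags, but the comp IDs, symbol, and quantities are invented, and a real order-entry session also needs logon, heartbeats, sequencing, and recovery, none of which appears here.

// Toy construction of a FIX 4.2 "New Order Single" (MsgType 35=D).
// Tag numbers come from the public FIX spec; all the values are made up.
public class ToyFixMessage
{
    private static final char SOH = '\u0001';   // FIX field delimiter

    public static void main(String[] args)
    {
        String body =
                "35=D"  + SOH +                  // MsgType: New Order Single
                "49=ME" + SOH +                  // SenderCompID (invented)
                "56=EXCH" + SOH +                // TargetCompID (invented)
                "34=1"  + SOH +                  // MsgSeqNum
                "52=20101006-14:30:00" + SOH +   // SendingTime (UTC)
                "11=ORDER-1" + SOH +             // ClOrdID
                "55=ZVZZT" + SOH +               // Symbol (a test symbol)
                "54=1"  + SOH +                  // Side: 1 = Buy
                "38=100" + SOH +                 // OrderQty
                "40=1"  + SOH;                   // OrdType: 1 = Market

        // BodyLength (tag 9) counts everything between itself and the CheckSum field.
        String header = "8=FIX.4.2" + SOH + "9=" + body.length() + SOH;

        // CheckSum (tag 10) is the byte sum of everything before it, modulo 256,
        // rendered as exactly three digits.
        String beforeChecksum = header + body;
        int sum = 0;
        for (int i = 0; i < beforeChecksum.length(); i++)
        {
            sum += beforeChecksum.charAt(i);
        }
        String message = beforeChecksum + String.format("10=%03d", sum % 256) + SOH;

        // Print with '|' in place of the unprintable SOH delimiter.
        System.out.println(message.replace(SOH, '|'));
    }
}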


There's a lot more to learn about HFT networking, but hopefully these pointers and excerpts get you started in the right direction.

Nice Tilt-Shift video of San Francisco

I liked the Lombard Street section, and the boats floating in the Pier 39 Marina, and the boat sailing past Alcatraz Island. Enjoy! Here's the video.

Tuesday, October 5, 2010

Urban Airship and the C500K problem

If you've been around web server implementations for 15 years or so (as I have -- wow I'm getting old!), then you're undoubtedly familiar with the C10K problem. The C10K problem was an attempt, about 7-8 years ago, to structure the discussion of server implementation strategies, and, in particular, to expose some of the basic choices that could be made, and what their impacts were. The C10K paper describes this quite clearly:

Designers of networking software have many options. Here are a few:

  • Whether and how to issue multiple I/O calls from a single thread

    • Don't; use blocking/synchronous calls throughout, and possibly use multiple threads or processes to achieve concurrency

    • Use nonblocking calls (e.g. write() on a socket set to O_NONBLOCK) to start I/O, and readiness notification (e.g. poll() or /dev/poll) to know when it's OK to start the next I/O on that channel. Generally only usable with network I/O, not disk I/O.

    • Use asynchronous calls (e.g. aio_write()) to start I/O, and completion notification (e.g. signals or completion ports) to know when the I/O finishes. Good for both network and disk I/O.

  • How to control the code servicing each client

    • one process for each client (classic Unix approach, used since 1980 or so)

    • one OS-level thread handles many clients; each client is controlled by:

      • a user-level thread (e.g. GNU state threads, classic Java with green threads)

      • a state machine (a bit esoteric, but popular in some circles; my favorite)

      • a continuation (a bit esoteric, but popular in some circles)

    • one OS-level thread for each client (e.g. classic Java with native threads)

    • one OS-level thread for each active client (e.g. Tomcat with apache front end; NT completion ports; thread pools)

  • Whether to use standard O/S services, or put some code into the kernel (e.g. in a custom driver, kernel module, or VxD)


The following five combinations seem to be popular:

  1. Serve many clients with each thread, and use nonblocking I/O and level-triggered readiness notification

  2. Serve many clients with each thread, and use nonblocking I/O and readiness change notification

  3. Serve many clients with each server thread, and use asynchronous I/O

  4. serve one client with each server thread, and use blocking I/O

  5. Build the server code into the kernel
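
The first combination on that list (one thread serving many clients, with nonblocking I/O and level-triggered readiness notification) is the easiest one to picture. Here's a bare-bones Java NIO sketch of it, a toy echo server; it's purely illustrative and has nothing to do with Urban Airship's actual code.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread, many clients: nonblocking I/O plus readiness notification.
public class TinyEchoServer
{
    public static void main(String[] args) throws IOException
    {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true)
        {
            selector.select();      // block until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext())
            {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable())
                {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                }
                else if (key.isReadable())
                {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n < 0)
                    {
                        client.close();     // peer went away
                        continue;
                    }
                    buf.flip();
                    client.write(buf);      // echo it back (ignoring partial writes)
                }
            }
        }
    }
}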




Well, time has passed, and, frankly, 10,000 simultaneous connections just doesn't seem all that scary any more. At my day job, we have a number of customers who approach these levels routinely, and a few who are solidly pushing beyond them.

So, what's the next step? The folks at Urban Airship have recently published a fascinating pair of blog posts describing their own internal efforts to prototype, benchmark, and study a C500K server.

Yes, that's right: they are trying to support 500,000 simultaneous TCP/IP connections to a single server!

Moreover, they're trying to do this in Java (actually, I suspect, in Scala)!

Still moreover, they're trying to do this in the Amazon EC2 cloud!

As should probably not be surprising, the biggest issue is memory.
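
A little arithmetic shows why. Here's a trivial calculation using purely hypothetical per-connection costs; the real numbers depend entirely on the kernel, the JVM, and the application.

// Rough arithmetic on why memory dominates at C500K.
// The per-connection figures are assumptions for illustration only.
public class C500kMemoryBudget
{
    public static void main(String[] args)
    {
        long connections       = 500000L;
        long kernelBufferBytes = 8 * 1024;   // assumed socket send + receive buffers
        long appStateBytes     = 4 * 1024;   // assumed per-connection objects and buffers

        long totalBytes = connections * (kernelBufferBytes + appStateBytes);
        System.out.println("~" + totalBytes / (1024 * 1024 * 1024)
                + " GB just for connection state");
    }
}

Half a million connections times even a modest per-connection footprint puts you into multi-gigabyte territory before the application has done anything interesting.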

At any rate, if you're still reading at this point, you'll definitely want to head over to Urban Airship's site and read through their report:


It's quite interesting; many thanks to Urban Airship for sharing their findings.

Computer industry patent law legal news of the day


  1. Google files their response to the Oracle Android lawsuit

    Over at AllThingsD, John Paczkowski has a short article discussing Google's legal response to the Oracle Android suit; the web page also contains the complete legal filing at the bottom of the article.


  2. Apple found guilty of Willful Infringement on Dr. Gelernter's Mirror Worlds patents.

    The New York Times is reporting on yesterday's verdict in the Mirror Worlds patent infringement suit against Apple.


Monday, October 4, 2010

Tuning Canabalt

If you have any interest in how the implementation of a game can make a big difference in the playability of the game, don't miss this very well-written and completely fascinating look inside the details of making the side-scrolling Flash game Canabalt be both entertaining and responsive: Tuning Canabalt.

Isolation Memory Buffers

According to SemiAccurate, the folks at Inphi can now implement 8 terabytes of DRAM on a single system! They do this using something called Isolation Memory Buffers, which I wasn't familiar with, but will now go and read about.

Regardless of the particular details (and according to the comments on the SemiAccurate page, some quibbling is possible), the trend toward greater memory density and greater memory capacity just seems to keep plugging along!

Sunday, October 3, 2010

The Inspiron 1010 Mini is a disappointment

Early this summer, my wife happened to be buying several new computers for her office, and Dell was running an offer: for $75 additional, they would throw in an Inspiron 1010 Mini. Well, the so-called "netbooks" have been all the rage, and at that price point it seemed like a pretty low-risk way to evaluate the netbook experience, so we bought one and I've been using it, on-and-off, for a few months now.

We're not novices with Dell laptops, or with laptops in general: we're currently running a Dell Latitude 610 (Ubuntu) and a Dell Studio 17 (Windows 7) as our primary compute platforms, and we've used a variety of other Dell and non-Dell laptops over the last 15 years.

But, unfortunately, I'm quite disappointed with the Inspiron Mini, and can't find myself willing to recommend this computer to anyone else. Here's why:

  1. The machine is slow. It takes a long time to boot up, it takes a long time to draw web pages in the browser, it takes a long time to start and run even simple applications.

  2. The touchpad is horrendous. I've used a lot of touchpads over the last decade, and this must be the worst one I've ever used. The cursor jumps all over the place, sometimes it doesn't track at all, sometimes it generates mouse clicks completely randomly, sometimes it won't generate a mouse click at all. It's abysmal. If you've never experienced the agony of having your machine "randomly" injecting mouse clicks into the input stream while you're trying to work, you probably don't know what I'm talking about, but trust me: it's a disaster. Text is suddenly selected or de-selected, you're suddenly typing into the wrong area on the screen, etc. The touchpad by itself would make me unwilling to recommend this machine.

  3. The machine is small. This is no surprise, of course, but if you've never used a netbook, you should try it and see how you feel about it before you commit to it. In particular, the keyboard, although relatively full-sized with a decent feel, is just small enough to throw me off: my hands just don't fit. I'm an extremely rapid touch typist, and this keyboard probably cuts my typing speed by 1/3 or more.

  4. The machine has no optical drive. I guess this is pretty common with ultra-portable devices, but it can be a real annoyance to have no optical drive, because it means that you either (a) have to do all software installation and updating over the net, or (b) have to attach an external optical drive quite frequently. It so happens that we have a variety of external optical drives available, and the machine's USB ports seemed fast and reliable, but still this one was surprisingly annoying.

  5. The lid open/close feel is too tight. I'm not sure if there is a magnet of some sort which holds the lid closed, or if the friction control was just designed too tightly, but I simply can't open the machine single-handedly. When I attempt to lift the lid with one hand, the base of the unit remains stuck to the lid and the entire laptop just pivots uselessly. To open it, I have to use two hands, holding the keyboard down firmly with one hand while I lift the lid with my other hand.



In addition (and really this is not Dell's fault), the battery on this machine appears to be trash. The battery never took a charge, even when the machine was brand new a few months ago, and the machine has remained usable only on AC power. Possibly this is just a manufacturing defect, and possibly Dell's support staff would have sent us a new battery if we'd asked, but frankly the machine is so unpleasant that I didn't even try.

On the positive side:

  1. The screen is also small, but it's quite nice. Of course, a small screen is not unexpected when you're using a netbook, but I was guilty of over-worrying about this problem, and in fact the screen is very reasonable. The screen runs at 1366x768 pixels, which is plenty of real estate for browsing the web, editing text, etc. And the screen is crisp and clear.

  2. The machine is light. Compared to even our lower-end laptops, the Inspiron Mini is a featherweight, and it's comfortable to pick up and carry around.

  3. The machine doesn't get too hot. All too often lately, I've found that laptops can be surprisingly unpleasant to actually hold on your lap! This machine heats up somewhat, but it never becomes distracting or uncomfortable.



So, there you have it: my sadly disappointing experience with the first netbook I tried, the Dell Inspiron 1010 Mini. If you've had a different experience, or if you have suggestions about fixing some of these issues (particularly the touchpad), please do let me know. But I suspect I'm back to using my other Dell machines.

Book Review: How We Test Software At Microsoft

I've been intending to write a book review of How We Test Software At Microsoft, by Page, Johnston, and Rollison, but for whatever reason I just never found the time, until now.

In general, I like this book a lot. It's a nice blend of the tactical and the strategic, of the pragmatic and the theoretical, and it covers a lot of ground in a very readable fashion. It's hard to imagine anyone seriously interested in software testing who wouldn't find something of interest in the book.

To give you a very high-level overview of the book, here's the "Contents at a Glance":

  1. About Microsoft


    1. Software Engineering at Microsoft

    2. Software Test Engineers at Microsoft

    3. Engineering Life Cycles


  2. About Testing


    1. A Practical Approach to Test Case Design

    2. Functional Testing Techniques

    3. Structural Testing Techniques

    4. Analyzing Risk with Code Complexity

    5. Model-Based Testing


  3. Test Tools and Systems


    1. Managing Bugs and Test Cases

    2. Test Automation

    3. Non-Functional Testing

    4. Other Tools

    5. Customer Feedback Systems

    6. Testing Software Plus Services


  4. About the Future


    1. Solving Tomorrow's Problems Today

    2. Building the Future




Now let's take a deeper look at a few of the areas the book covers.

Not "those squeegee guys that wash your windows"

The section Software Test Engineers at Microsoft describes the organizational approach that Microsoft takes to software testing. I think that Microsoft doesn't get enough credit in areas such as these. Although there are other high-tech companies that are much larger than Microsoft (e.g., IBM, HP, Cisco, Intel), Microsoft is different because it is purely a software company (well, OK, they have a very small hardware organization, but it's nothing like the others in that list). I think Microsoft has, far and away, the most sophisticated understanding of how to do very-large-scale software engineering, and one of the most sophisticated approaches to software testing. At a previous job, some of my co-workers did contract software engineering in Microsoft's test organization, and it was very interesting to get a peek behind the curtain at how Microsoft works.

The book discusses some of the major tasks and activities that a Microsoft SDET (Software Development Engineer in Test) gets involved with:

  • Develop test harness for test execution

  • Develop specialty tools for security or performance testing

  • Automate API or protocol tests

  • Participate in bug bashes

  • Find, debug, file, and regress bugs

  • Participate in design reviews

  • Participate in code reviews


This is a challenging role, and it's pleasing to see Microsoft giving it the respect it deserves.

"The happy path should always pass"

The section A Practical Approach to Test Case Design is one of the strongest in the book, and is just jam-packed with useful, hard-won advice for the practical tester. It contains information on:

  • Testing patterns

  • Test estimation

  • Incorporating testing earlier in the development cycle

  • Testing strategies

  • Testability

  • Test specifications

  • Positive and negative testing

  • Test case design

  • Exploratory testing

  • Pair testing


It's not an exaggeration to suggest that a professional tester might find it worth getting this book for this section alone, and might well find themselves returning to re-read it every year or two just to re-focus and re-center around this level-headed, thorough approach. I particularly enjoy the section's pragmatic assessment:

There isn't a right way or a wrong way to test, and there are certainly no silver bullet techniques that will guarantee great testing. It is critical to take time to understand the component, feature, or application, and design tests based on that understanding drawn from a wide variety of techniques. A strategy of using a variety of test design efforts is much more likely to succeed than is an approach that favors only a few techniques.
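
Just to make the positive-and-negative-testing idea from the list above concrete, here's a minimal sketch of my own (not from the book), written as a JUnit test around a hypothetical parseAge helper: the happy path should always pass, and the malformed and out-of-range cases should fail loudly and predictably.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AgeParserTest {

    // Hypothetical helper, used only for illustration: parses a bounded age.
    static int parseAge(String s) {
        int age = Integer.parseInt(s.trim());
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range: " + age);
        }
        return age;
    }

    // Positive test: the happy path should always pass.
    @Test
    public void parsesTypicalInput() {
        assertEquals(42, parseAge(" 42 "));
    }

    // Negative test: malformed input must fail in a specific, expected way.
    @Test(expected = NumberFormatException.class)
    public void rejectsNonNumericInput() {
        parseAge("forty-two");
    }

    // Negative test: out-of-range input must also fail predictably.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsOutOfRangeInput() {
        parseAge("-5");
    }
}
```

Nothing fancy, but it captures the balance the section argues for: don't stop at the happy path, and make each negative case assert a specific, expected failure rather than just "something blew up."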


"The USB cart of death"

The two sections Non-Functional Testing and Other Tools are also, in my opinion, particularly strong, perhaps surprisingly so, since at first glance they don't look like they should be as informative as they actually are.

Non-Functional Testing talks about a collection of "ilities" that are challenging to test, and that are often under-tested, particularly since it is hard to test them until relatively late in the process:

Areas defined as non-functional include performance, load, security, reliability, and many others. Non-functional tests are sometimes referred to as behavioral tests or quality tests. A characteristic of non-functional attributes is that direct measurement is generally not possible. Instead, these attributes are gauged by indirect measures such as failure rates to measure reliability or cyclomatic complexity and design review metrics to assess testability.
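
As a tiny illustration of what "indirect measurement" can look like in practice (my own sketch, not the book's), here's a throwaway harness that gauges a performance attribute by sampling the latency of some operation under test and reporting a high percentile, rather than asserting on any single call:

```java
import java.util.Arrays;

// A throwaway latency sampler: run an operation many times and report the
// 95th-percentile latency, a typical indirect, statistical performance measure.
public class LatencySampler {
    public static void main(String[] args) {
        final int samples = 1000;
        long[] nanos = new long[samples];

        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            operationUnderTest();                 // stand-in for the real work
            nanos[i] = System.nanoTime() - start;
        }

        Arrays.sort(nanos);
        long p95 = nanos[(int) (samples * 0.95) - 1];
        System.out.printf("p95 latency: %.3f ms%n", p95 / 1000000.0);
    }

    // Hypothetical operation; replace with whatever behavior you actually care about.
    private static void operationUnderTest() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            sb.append(i);
        }
    }
}
```

Real performance testing controls for warm-up, JIT, and environment far more carefully than this, but even a toy like this shows why non-functional attributes end up being gauged statistically rather than with a single pass/fail assertion.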

Here we find a variety of very useful sub-sections, including "How Do You Measure Performance?", "Distributed Stress Architecture", "Eating Our Dogfood", "Testing for Accessibility", and "Security Testing". Sensibly, many of these sections give some critical principles, philosophies, and techniques, and then are filled with references to additional resources for further exploration. For example, the "Security Testing" section mentions four other entire books specifically on the subject of software security:

  • Hunting Security Bugs

  • The How to Break... series

  • Writing Secure Code

  • Threat Modeling


These are quite reasonable suggestions, though I wish they'd included suggestions to read Ross Anderson's Security Engineering or some of Bruce Schneier's work.

Other Tools is unexpectedly valuable, given such a worthless section title. This section makes three basic points:

  • Use your build lab and your continuous integration systems

  • Use your source code management (SCM) system

  • Use your available dynamic and static analysis tools


Of course, at my day job we're proud to provide what we think is the best SCM system on the planet, so the topics in this section are close to my heart; but even before I worked for an SCM provider, I found techniques like the ones in this section incredibly useful. I've spent years building and using tools to mine information from build tools and CI systems, and I've found great value in static analysis tools like FindBugs for Java; at my day job we're big fans of Valgrind. Lastly, I like the fact that this section observes that "Test Code is Product Code": you need to pay as much attention to the design, implementation, and maintenance of your tests as you do to your product code.
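
For a feel for what those analyzers catch, here's a tiny contrived example (mine, not the book's) of the kind of defect FindBugs is good at flagging in Java: ignoring the return value of a method on an immutable String, so the "cleaned-up" value is never actually stored.

```java
// Contrived example of a defect class that static analyzers flag:
// String is immutable, so the result of trim() is silently discarded
// and the untrimmed value is stored instead.
public class UserName {
    private final String name;

    public UserName(String rawName) {
        rawName.trim();        // BUG: return value ignored; this does nothing
        this.name = rawName;   // still holds the untrimmed input

        // Correct version:
        // this.name = rawName.trim();
    }

    public String getName() {
        return name;
    }
}
```

The fix is mechanical once a tool points at it, which is exactly why it pays to run these analyzers as part of the regular build and CI cycle rather than as an occasional audit.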

"Two Faces of Review"

Oddly buried in the About the Future section at the end of the book is an all-too-short section on code reviews. The section contains some suggestions about how to organize and structure your code reviews, how to divide and allocate reviewing responsibilities among a team, and how to track and observe the results of your code review efforts so that you can continue to improve them. And I particularly liked this observation:

For most people, the primary benefit of review is detecting bugs early. Reviews are, in fact, quite good at this, but they provide another benefit to any team that takes them seriously. Reviews are a fantastic teaching tool for everyone on the team. Developers and testers alike can use the review process to learn about techniques for improving code quality, better design skills, and writing more maintainable code. Conducting code reviews on a regular basis provides an opportunity for everyone involved to learn about diverse and potentially superior methods of coding.


Here, I have to take a brief aside, to relate a short story from work. Recently, my office held a two-day internal technical conference. The software development staff all went offsite, and we got together and discussed a variety of topics: agile programming, domain-specific languages, cloud computing, and other trendy stuff. But one presentation in particular was perhaps unexpected: our President, Christopher Seiwald, reprised a presentation he's given many a time before: The Seven Pillars of Pretty Code. If you've never seen the presentation, give it a try: it's quite interesting. But the important part, I think, is that the company cared enough about code, and about code review, to get everybody, including the company president, together to spend an hour discussing and debating what makes great code, how to write great code, and so forth.

Is How We Test Software At Microsoft a great book? No, it's not. It's too long, the presentation style bounces around a lot (not surprising for a book with many different authors), and the book is a bit too encyclopedic. I wish the authors had covered fewer topics, perhaps only one-half to two-thirds of them, but in more detail. And the book's unapologetically-Microsoft-only approach can be frustrating for people outside of Microsoft who want to apply these techniques in other situations: what if you're trying to test low-memory handling on a Linux system rather than Windows? What should you be thinking about when looking at character set issues in Java? And so on. I also think the book gives rather too much credit to some approaches I'm not particularly fond of, such as code-complexity measurement tools, model-based test generators, and code coverage tools (with which I have a love-hate relationship).

But those are fairly minor complaints. If you are a professional tester, or a professional software developer, or even an amateur software developer, you'll find that this book has a lot of ideas, a lot of resources, and a lot of material that you can return to over and over. Don't make it the first software engineering book that you buy, but consider putting it on your personal study list, somewhere.

Friday, October 1, 2010

Following up: Stuxnet, HFT

Update: I thought I'd posted this last Friday, but when I returned to my computer today I found the Blogger 'edit post' window hiding behind a bunch of other windows. Oops. Anyhow, here's the post I meant to make last Friday :)

Just a very short post this afternoon to note that there is significant follow-up today on (at least) two topics I've been following closely of late:



Regarding the Flash Crash, I also want to note that I recently stumbled across an absolutely fascinating web site: Themis Trading LLC. If you don't have time to read anything else on their site, you absolutely must read this: Toxic Equity Trading Order Flow on Wall Street. This paper does a wonderful job of explaining, with simple, clear language and examples, just how the various ultra-high-speed trading algorithms manipulate each other in subtle and surprising ways.