How Google Tests Software - Part Three
Wednesday, February 16, 2011 2:47 AM
By James Whittaker
Lots of questions in the comments to the last two posts. I am not ignoring them. Hopefully many of them will be answered here and in following posts. I am just getting started on this topic.
At Google, quality is not equal to test. Yes, I am sure that is true elsewhere too. “Quality cannot be tested in” is so cliché it has to be true. From automobiles to software, if it isn’t built right in the first place, it is never going to be right. Ask any car company that has ever had to do a mass recall how expensive it is to bolt on quality after the fact.
However, this is neither as simple nor as accurate as it sounds. While it is true that quality cannot be tested in, it is equally evident that without testing it is impossible to develop anything of quality. How do you decide whether what you built is high quality without testing it?
The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code, or even before. Test isn’t a separate practice; it’s part and parcel of the development process itself. Quality is not equal to test; it is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
At Google this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Since the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. In fact, teams that insist on having a large testing presence are generally assumed to be doing something wrong. Having too large a test team is a very strong sign that the code/test mix is out of balance. Adding more testers is not going to solve anything.
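To make the “build a little, test a little” loop concrete, here is a minimal sketch of what developer-owned testing can look like. The file, function, and test names are hypothetical, invented purely for illustration; the point is that the behavior and the unit test that guards it ship in the same change, written by the same developer.

```python
# version_check.py: a hypothetical example of developer-owned testing.
# The behavior and the test that guards it ship in the same change.
import unittest


def parse_version(tag: str) -> tuple:
    """Parse a release tag like '9.0.597.84' into a numerically comparable tuple."""
    return tuple(int(part) for part in tag.split("."))


class ParseVersionTest(unittest.TestCase):
    def test_orders_versions_numerically(self):
        # '10' must sort after '9': the classic bug when versions compare as strings.
        self.assertGreater(parse_version("10.0.0.0"), parse_version("9.0.597.84"))

    def test_rejects_non_numeric_tags(self):
        with self.assertRaises(ValueError):
            parse_version("beta")


if __name__ == "__main__":
    unittest.main()
```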
This means that quality is more an act of prevention than it is detection. Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper-incremental, where mistakes can be rolled back if any one increment turns out to be too buggy. We’ve not only prevented a lot of customer issues, we have greatly reduced the number of testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method is working. TEs are constantly on the lookout for evidence that the SWE-SET combination of bug writers/preventers is skewed toward the latter, and TEs raise alarms when that process seems out of whack.
Manifestations of this blending of development and testing are all over the place, from code review notes asking ‘where are your tests?’ to posters in the bathrooms reminding developers about testing best practices, our infamous Testing On The Toilet guides. Testing must be an unavoidable aspect of development, and the marriage of development and testing is where quality is achieved. SWEs are testers, SETs are testers, and TEs are testers.
If your organization is also doing this blending, please share your successes and challenges with the rest of us. If not, then here is a change you can help your organization make: get developers fully vested in the quality equation. You know the old saying that the chicken is happy to contribute to a bacon-and-egg breakfast but the pig is fully committed? Well, it's true... go oink at one of your developers and see if they oink back. If they start clucking, you have a problem.
How Google Tests Software - Part Four
Wednesday, March 02, 2011 10:11 AM
By James Whittaker
Crawl, walk, run.
One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core of a product and release it the moment it is useful to as large a crowd as feasible, then get their feedback and iterate. This is what we did with Gmail, a product that kept its beta tag for four years. That tag was our warning to users that it was still being perfected. We removed the beta tag only when we reached our goal of 99.99% uptime for a real user’s email data. Obviously, quality is a work in progress!
It’s not as cowboy a process as I make it out to be. In fact, in order to make it to what we call the beta channel release, a product must go through a number of other channels and prove its worth. For Chrome, a product I spent my first two years at Google working on, multiple channels were used depending on our confidence in the product’s quality and the extent of feedback we were looking for. The sequence looked something like this:
Canary Channel is used for code we suspect isn’t fit for release. Like a canary in a coal mine, if it fails to survive then we have work to do. Canary channel builds are only for ultra-tolerant users running experiments, not anyone depending on the application to get real work done.
Dev Channel is what developers use in their day-to-day work. All engineers on a product are expected to pick up this build and use it for real work.
Test Channel is the build used for internal dogfooding and represents a candidate beta channel build, given sustained good performance.
The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.
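The posts name the channels but do not publish the exact promotion criteria, so the sketch below is only a rough model of the ladder: the channel names come from the text, while the thresholds (days in a channel, crash-free session rate, open release blockers) are invented stand-ins.

```python
# channels.py: an illustrative model of the canary -> dev -> test -> beta ladder.
# Channel names come from the post; the promotion criteria are invented examples.
from dataclasses import dataclass

CHANNELS = ["canary", "dev", "test", "beta"]


@dataclass
class BuildHealth:
    days_in_channel: int        # how long the build has soaked in its channel
    crash_free_rate: float      # fraction of sessions that ended without a crash
    open_release_blockers: int  # unresolved bugs marked as blocking release


def ready_to_promote(health: BuildHealth) -> bool:
    """A build earns the next channel only through sustained good behavior."""
    return (health.days_in_channel >= 7
            and health.crash_free_rate >= 0.999
            and health.open_release_blockers == 0)


def next_channel(current: str) -> str:
    """Return the next rung of the ladder; beta is the last rung in this model."""
    index = CHANNELS.index(current)
    return CHANNELS[min(index + 1, len(CHANNELS) - 1)]


# Example: a dev-channel build with a clean week of real usage can move to test.
build = BuildHealth(days_in_channel=8, crash_free_rate=0.9995, open_release_blockers=0)
if ready_to_promote(build):
    print("promote to", next_channel("dev"))  # prints: promote to test
```

The design point the sketch tries to capture is that promotion is earned by sustained behavior under real usage, not granted by a calendar date.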
This crawl, walk, run approach gives us the chance to run tests and experiment on our applications early and obtain feedback from real human beings in addition to all the automation we run in each of these channels every day.
There are analytical benefits to this process as well. If a bug is found in the field, a tester can create a test that reproduces it and run it against the builds in each channel to determine if a fix has already been implemented.
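As a sketch of that last idea, a tester might script the repro check against every channel's build; the binary paths and the repro_test.py harness below are hypothetical stand-ins, not a real tool.

```python
# which_channel_has_the_fix.py: a hypothetical sketch of running a field-bug
# repro test against the build in each channel to see where a fix has landed.
# The binary paths and the repro_test.py harness are invented for illustration.
import subprocess

BUILDS = {
    "canary": "/opt/builds/canary/app",
    "dev":    "/opt/builds/dev/app",
    "test":   "/opt/builds/test/app",
    "beta":   "/opt/builds/beta/app",
}


def repro_fails(binary: str) -> bool:
    """Run the repro test against one build; a nonzero exit means the bug bites."""
    result = subprocess.run(["python", "repro_test.py", "--binary", binary])
    return result.returncode != 0


for channel, binary in BUILDS.items():
    status = "still reproduces" if repro_fails(binary) else "already fixed"
    print(f"{channel}: {status}")
```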
--------------------------------------------------------------------------------
How Google Tests Software - A Brief Interlude
Tuesday, February 22, 2011 1:28 PM
By James Whittaker
These posts have garnered a number of interesting comments. I want to address two of the negative ones in this post. Both are of the same general opinion: that I am abandoning testers and that Google is not a nice place to ply this trade. I am puzzled by these comments because nothing could be further from the truth. One such negative comment I could take as a one-off, but two smart people (hey, they are reading this blog, right?) having this impression requires a rebuttal. Here are the comments:
"A sad day for testers around the world. Our own spokesman has turned his back on us. What happened to 'devs can't test'?" by Gengodo
"I am a test engineer and Google has been one of my dream companies. Reading your blog I feel that Testers are so unimportant at Google and can be easily laid off. It's sad." by Maggi
First of all, I don't know of any tester, or developer for that matter, being laid off from Google. We're hiring at a rapid pace right now. However, we do change projects a lot, so perhaps you read 'taken off a project' to mean something far worse than the reality of just moving to another project. A tester here may move every couple of years or so, and it is a badge of honor to get to the point where you've worked yourself out of a job, either by building robust test frameworks for others to contribute tests to or by passing off what you've done to a junior tester and moving on to a bigger challenge. Maggi, please keep the dream alive. If Google were a hostile place for testers, I would be working somewhere else.
Second, I am going to dodge the negative undertones of the developer-versus-tester debate. The whole question of whether developers can test or testers can code seems downright combative. Both types of engineers share the common goal of shipping a product that will be successful. There is enough negativity in this world, and testers hating developers seems so 2001.
In fact, I feel a confession coming on. I have had sharp words with developers in the past. I have publicly decried the lack of testing rigor in commercial products. If you've seen me present, you've probably witnessed me showing colorful bugs, pointing to the screen and shouting "you missed a spot!" I will admit, that was fun.
Here are some other quotes I have directed at developers:
"You must be smarter than me because I couldn't write this bug if I was trying to."
"What happened, did the compiler get your eye?"
"What do you say to a developer with two black eyes? Nothing, he's already been told twice."
"Did you hear about the developer who locked himself in his car?"
Ah, those were the good old days! But it's 2011 now, and I am objective enough to give developers credit when they step up to the plate and do their job. At Google, many have, and they are helping to shame the rest into following suit. And this is making bugs harder to find. I waste so little time on low-hanging fruit that I get to dig deeper to find the really subtle, really critical bugs. The signal-to-noise ratio is just a whole lot stronger now. Yes, there are fewer developer jokes, but this is progress. I have to make myself feel good by knowing how many bugs have been prevented, instead of counting how many laughs I can get on stage demonstrating their miserable failures.
This is progress.
And, incidentally, developers can test. In some cases far better than testers. Modern testing is about optimizing the places where developers test and where testers test. Getting that mix right means a great product. Getting it wrong puts us back in 2001, when my presentations were a heck of a lot funnier.
In what cases are developers better testers than we are? In what cases are they such poor testers that we're better off not having them touch the product at all? Well, that's the subject of my next couple of posts. In the meantime...
...Peace.