51Testing Software Testing Forum


Views: 3163 | Replies: 9

[Repost] How Google Tests Software [Complete Edition]



1#
Posted on 2012-5-6 09:40:22
Last edited by omicron15 on 2012-5-6 09:47

I thought this was pretty good and a lot of it is worth borrowing. Hope everyone enjoys it; it spares you the pain of getting over the wall to read the English original.

--------------
How Google Tests Software - Part One
Tuesday, January 25, 2011 9:08 AM

By James Whittaker

This is the first in a series of posts on this topic.

The one question I get more than any other is "How does Google test?" It's been explained in bits and pieces on this blog, but the explanation is due an update. The Google testing strategy has never changed, but the tactical ways we execute it have evolved as the company has evolved. We're now a search, apps, ads, mobile, operating system, and so on and so forth company. Each of these Focus Areas (as we call them) has to do things that make sense for its problem domain. As we add new FAs and grow the existing ones, our testing has to expand and improve. What I am documenting in this series of posts is a combination of what we are doing today and the direction we are trending toward in the foreseeable future.

Let's begin with organizational structure, and it's one that might surprise you. There isn't an actual testing organization at Google. Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest. In a nutshell, Eng Prod is made of:

1. A product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection.

2. A services team that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise.

3. Embedded engineers that are effectively loaned out to Google product teams on an as-needed basis. Some of these engineers might sit with the same product teams for years; others cycle through teams wherever they are needed most. Google encourages all its engineers to change product teams often to stay fresh, engaged and objective. Testers are no different, but the cadence of changing teams is left to the individual. I have testers on Chrome that have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to.

So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod, giving all testers, no matter their product ties, access to the best technology within the company.

This separation of project and reporting structures has its challenges. By far the biggest is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self-reliance. Testers enable developers to test.

What I like about this strategy is that it puts developers and testers on equal footing. It makes us true partners in quality and puts the biggest quality burden where it belongs: on the developers who are responsible for getting the product right. Another side effect is that it allows us a many-to-one dev-to-test ratio. Developers outnumber testers. The better they are at testing, the more they outnumber us. Product teams should be proud of a high ratio!

Ok, now we're all friends here, right? You see the hole in this strategy, I am sure. It's big enough to drive a bug through. Developers can't test! Well, who am I to deny that? No amount of corporate kool-aid could get me to deny it, especially coming off my GTAC talk last year where I pretty much made a game of developer vs. tester (spoiler alert: the tester wins).

Google's answer is to split the role. We solve this problem by having two types of testing roles at Google to solve two very different testing problems. In my next post, I'll talk about these roles and how we split the testing problem into two parts.





How Google Tests Software - Part Two
By James Whittaker

In order for the "you build it, you break it" motto to be real, there are roles beyond the traditional developer that are necessary. Specifically, engineering roles that enable developers to do testing efficiently and effectively have to exist. At Google we have created roles in which some engineers are responsible for making others more productive. These engineers often identify themselves as testers, but their actual mission is one of productivity. They exist to make developers more productive, and quality is a large part of that productivity. Here's a summary of those roles:

The SWE or Software Engineer is the traditional developer role. SWEs write functional code that ships to users. They create design documentation, design data structures and overall architecture, and spend the vast majority of their time writing and reviewing code. SWEs write a lot of test code, including test-driven design and unit tests, and, as we explain in future posts, participate in the construction of small, medium and large tests. SWEs own quality for everything they touch, whether they wrote it, fixed it or modified it.

The SET or Software Engineer in Test is also a developer role, except their focus is on testability. They review designs and look closely at code quality and risk. They refactor code to make it more testable. SETs write unit testing frameworks and automation. They are a partner in the SWE code base but are more concerned with increasing quality and test coverage than adding new features or increasing performance.

The TE or Test Engineer is the exact reverse of the SET. It is a role that puts testing first and development second. Many Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics a user. They also organize the testing work of SWEs and SETs, interpret test results and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers and analyzers of risk.

From a quality standpoint, SWEs own features and the quality of those features in isolation. They are responsible for fault-tolerant designs, failure recovery, TDD, unit tests, and for working with the SET to write tests that exercise the code for their feature.

SETs are developers who provide testing features: a framework that can isolate newly developed code by simulating its dependencies with stubs, mocks and fakes, and submit queues for managing code check-ins. In other words, SETs write code that allows SWEs to test their features. Much of the actual testing is performed by the SWEs; SETs are there to ensure that features are testable and that the SWEs are actively involved in writing test cases.
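
As a rough illustration of that isolation idea, here is a minimal Python sketch using unittest.mock; the SpellChecker/DictionaryService example is invented for illustration and is not Google code:

```python
import unittest
from unittest import mock

class DictionaryService:
    """Real dependency: imagine this makes a slow network call."""
    def lookup(self, word: str) -> bool:
        raise NotImplementedError("talks to a remote service")

class SpellChecker:
    """Feature under test: flags words the dictionary does not know."""
    def __init__(self, dictionary: DictionaryService):
        self._dictionary = dictionary

    def misspelled(self, text: str) -> list[str]:
        return [w for w in text.split() if not self._dictionary.lookup(w)]

class SpellCheckerTest(unittest.TestCase):
    def test_flags_unknown_words(self):
        # The mock stands in for the remote dictionary, so the test is
        # fast, hermetic and deterministic.
        fake_dictionary = mock.Mock(spec=DictionaryService)
        fake_dictionary.lookup.side_effect = lambda w: w != "teh"
        checker = SpellChecker(fake_dictionary)
        self.assertEqual(checker.misspelled("fix teh typo"), ["teh"])

if __name__ == "__main__":
    unittest.main()
```

The point of a framework like this is that the SWE who owns SpellChecker can test it without standing up the real service.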

Clearly the SET's primary focus is on the developer. Individual feature quality is the target, and enabling developers to easily test the code they write is the primary focus of the SET. This development focus leaves one large hole which I am sure is already evident to the reader: what about the user?

User-focused testing is the job of the Google TE. Assuming that the SWEs and SETs performed module- and feature-level testing adequately, the next task is to understand how well this collection of executable code and data works together to satisfy the needs of the user. TEs act as a double-check on the diligence of the developers. Any obvious bugs are an indication that early-cycle developer testing was inadequate or sloppy. When such bugs are rare, TEs can turn to their primary task of ensuring that the software runs common user scenarios, is performant and secure, is internationalized and so forth. TEs perform a lot of testing and coordinate testing among TEs, contract testers, crowd-sourced testers, dogfooders, beta users and early adopters. They communicate to all parties the risks inherent in the basic design, feature complexity and failure avoidance methods. Once TEs get engaged, there is no end to their mission.

Ok, now that the roles are better understood, I'll dig into more details on how we choreograph the work items among them. Until next time... thanks for your interest.

2#
OP | Posted on 2012-5-6 09:40:55
How Google Tests Software - Part Three
By James Whittaker

Lots of questions in the comments to the last two posts. I am not ignoring them. Hopefully many of them will be answered here and in following posts. I am just getting started on this topic.

At Google, quality is not equal to test. Yes, I am sure that is true elsewhere too. "Quality cannot be tested in" is so cliché it has to be true. From automobiles to software, if it isn't built right in the first place then it is never going to be right. Ask any car company that has ever had to do a mass recall how expensive it is to bolt on quality after the fact.

However, this is neither as simple nor as accurate as it sounds. While it is true that quality cannot be tested in, it is equally evident that without testing it is impossible to develop anything of quality. How does one decide if what you built is high quality without testing it?

The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code or even before. Test isn't a separate practice; it's part and parcel of the development process itself. Quality is not equal to test; it is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
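
To make the "code a little, test a little" loop concrete, here is a minimal test-first sketch in Python; the slugify example is invented for illustration. Write the test for the next small increment, write just enough code to pass it, repeat:

```python
import unittest

# Step 1: write a test for the next small increment first.
class SlugTest(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write just enough code to make the test pass, then repeat.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()
```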

At Google this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Since the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. In fact, teams that insist on having a large testing presence are generally assumed to be doing something wrong. Having too large a test team is a very strong sign that the code/test mix is out of balance. Adding more testers is not going to solve anything.

This means that quality is more an act of prevention than it is detection. Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper-incremental, where mistakes can be rolled back if any one increment turns out to be too buggy. We've not only prevented a lot of customer issues, we have greatly reduced the number of testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method is working. TEs are constantly on the lookout for evidence that the SWE-SET combination of bug writers/preventers is skewed toward the former, and TEs raise alarms when that process seems out of whack.

Manifestations of this blending of development and testing are all over the place, from code review notes asking "where are your tests?" to posters in the bathrooms reminding developers about best testing practices (our infamous Testing On The Toilet guides). Testing must be an unavoidable aspect of development, and the marriage of development and testing is where quality is achieved. SWEs are testers, SETs are testers and TEs are testers.

If your organization is also doing this blending, please share your successes and challenges with the rest of us. If not, then here is a change you can help your organization make: get developers fully vested in the quality equation. You know the old saying that chickens are happy to contribute to a bacon-and-egg breakfast but the pig is fully committed? Well, it's true... go oink at one of your developers and see if they oink back. If they start clucking, you have a problem.

How Google Tests Software - Part Four
By James Whittaker

Crawl, walk, run.

One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core of a product and release it the moment it is useful to as large a crowd as feasible, then get their feedback and iterate. This is what we did with Gmail, a product that kept its beta tag for four years. That tag was our warning to users that it was still being perfected. We removed the beta tag only when we reached our goal of 99.99% uptime for a real user's email data. Obviously, quality is a work in progress!

It's not as cowboy a process as I make it out to be. In fact, in order to make it to what we call the beta channel release, a product must go through a number of other channels and prove its worth. For Chrome, a product I spent my first two years at Google working on, multiple channels were used depending on our confidence in the product's quality and the extent of feedback we were looking for. The sequence looked something like this:

Canary Channel is used for code we suspect isn't fit for release. Like a canary in a coal mine, if it failed to survive then we had work to do. Canary channel builds are only for the ultra-tolerant user running experiments and not depending on the application to get real work done.

Dev Channel is what developers use in their day-to-day work. All engineers on a product are expected to pick this build and use it for real work.

Test Channel is the build used for internal dogfood and represents a candidate beta channel build given good sustained performance.

The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.

This crawl, walk, run approach gives us the chance to run tests and experiment on our applications early and obtain feedback from real human beings, in addition to all the automation we run in each of these channels every day.

There are analytical benefits to this process as well. If a bug is found in the field, a tester can create a test that reproduces it and run it against builds in each channel to determine if a fix has already been implemented.
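
As a rough sketch of that workflow: channel names aside, everything below is hypothetical; the build paths, the --run-test flag and the run_repro helper are stand-ins for internal tooling, not a real CLI:

```python
import subprocess

# Hypothetical paths to the builds currently in each channel.
CHANNEL_BUILDS = {
    "canary": "/builds/canary/app",
    "dev": "/builds/dev/app",
    "test": "/builds/test/app",
    "beta": "/builds/beta/app",
}

def run_repro(binary: str, repro_script: str) -> bool:
    """Run the reproduction test against one build; True means the bug is fixed."""
    result = subprocess.run([binary, "--run-test", repro_script], timeout=300)
    return result.returncode == 0

def find_fix_frontier(repro_script: str) -> None:
    # Newest code (canary) to oldest (beta): where "fixed" turns into
    # "still broken" shows how far the fix has propagated.
    for channel, binary in CHANNEL_BUILDS.items():
        status = "fixed" if run_repro(binary, repro_script) else "still broken"
        print(f"{channel}: {status}")

if __name__ == "__main__":
    find_fix_frontier("repro_issue_1234.py")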
3#
OP | Posted on 2012-5-6 09:42:55



How Google Tests Software - Part Five
By James Whittaker

Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests, emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests, and they may be performed as automated or manual tests.

Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
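
A minimal sketch of a small test in Python's unittest, aimed at exactly the error-condition and off-by-one territory described above; the paginate function is invented for illustration:

```python
import unittest

def paginate(items: list, page_size: int) -> list:
    """Split items into pages of at most page_size elements."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

class PaginateSmallTest(unittest.TestCase):
    def test_exact_multiple_has_no_empty_trailing_page(self):
        # Classic off-by-one territory: 4 items, pages of 2 -> 2 pages, not 3.
        self.assertEqual(paginate([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_rejects_non_positive_page_size(self):
        # Error-condition check.
        with self.assertRaises(ValueError):
            paginate([1], 0)

if __name__ == "__main__":
    unittest.main()
```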

Medium Tests can be automated or manual and involve two or more features, specifically covering the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed, and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is: does a set of near-neighbor functions interoperate with each other the way they are supposed to?
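
In the same spirit, a sketch of a medium test: two invented "near neighbor" features, a cart and a coupon validator, exercised together rather than in isolation:

```python
import unittest

class CouponValidator:
    """Feature A: decides whether a coupon code is acceptable."""
    def __init__(self, known_codes: set[str]):
        self._known_codes = known_codes

    def is_valid(self, code: str) -> bool:
        return code in self._known_codes

class Cart:
    """Feature B: totals a purchase, consulting Feature A for discounts."""
    def __init__(self, validator: CouponValidator):
        self._validator = validator
        self._items: list[float] = []

    def add(self, price: float) -> None:
        self._items.append(price)

    def total(self, coupon: str | None = None) -> float:
        subtotal = sum(self._items)
        if coupon and self._validator.is_valid(coupon):
            subtotal *= 0.9  # known coupons take 10% off
        return round(subtotal, 2)

class CartCouponMediumTest(unittest.TestCase):
    """Medium test: the interaction between Cart and CouponValidator."""
    def test_valid_coupon_discounts_total(self):
        cart = Cart(CouponValidator({"SAVE10"}))
        cart.add(10.0)
        cart.add(20.0)
        self.assertEqual(cart.total("SAVE10"), 27.0)

    def test_unknown_coupon_is_ignored(self):
        cart = Cart(CouponValidator({"SAVE10"}))
        cart.add(10.0)
        self.assertEqual(cart.total("BOGUS"), 10.0)

if __name__ == "__main__":
    unittest.main()
```

Note that neither feature is mocked here: the whole point of the medium test is the real interaction between the neighbors.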

Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results-driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests, and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is: does the product operate the way a user would expect?

The actual language of small, medium and large isn't important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed enormous, every other tester in the company could imagine a system-wide test covering nearly every feature and running for a very long time. No additional explanation was necessary.

The primary driver of what gets tested and how much is a very dynamic process and varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product, we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.

Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn't require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.

Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry-leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the "last inch of the human mind" is currently the design spec for the next generation of test engineering tools Google is building.
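
As a heavily hedged sketch of that culprit-finding idea (the change log, email and bug-filing steps below are hypothetical stand-ins for internal Google systems, not a description of them):

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A code check-in: author plus the files it touched."""
    change_id: str
    author: str
    files: set[str]

def likely_culprit(broken_test_deps: set[str], recent_changes: list[Change]) -> Change | None:
    """Pick the most recent change that touched files the broken test depends on."""
    for change in reversed(recent_changes):  # newest last -> scan backwards
        if change.files & broken_test_deps:
            return change
    return None

def handle_test_break(test_name: str, deps: set[str], changes: list[Change]) -> None:
    culprit = likely_culprit(deps, changes)
    if culprit is None:
        print(f"{test_name}: no candidate change found; route to a human.")
        return
    # Stand-ins for "send email to its authors and file a bug".
    print(f"email {culprit.author}: {test_name} broke after {culprit.change_id}")
    print(f"file bug: {test_name} / suspected change {culprit.change_id}")

if __name__ == "__main__":
    changes = [
        Change("c1", "alice", {"search/index.py"}),
        Change("c2", "bob", {"mail/compose.py"}),
    ]
    handle_test_break("compose_test", {"mail/compose.py"}, changes)
```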

Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.


5#
OP | Posted on 2012-5-6 09:46:40
How Google Tests Software - Part Six
By James Whittaker

The Life of an SET

SETs are Software Engineers in Test. They are software engineers who happen to write testing functionality. First and foremost, SETs are developers, and the role is touted as a 100% coding role in our recruiting literature and internal job promotion ladders. When SET candidates are interviewed, the "coding bar" is nearly identical to the SWE role, with more emphasis on whether SETs know how to test the code they create. In other words, both SWEs and SETs answer coding questions. SETs are expected to nail a set of testing questions as well.

As you might imagine, it is a difficult role to fill, and it is entirely possible that the low number of SETs isn't because Google has created a magic formula for productivity but more a result of adapting our engineering practice around the reality that the SET skill set is really hard to find. We optimize on this very important task and build processes around the people who do it.

It is usually the case that SETs are not involved early in the design phase. Their exclusion is not so much purposeful as it is a by-product of how a lot of Google projects are born. A common scenario for new project creation is that some informal 20% effort takes on a life of its own as an actual Google-branded product. Gmail and Chrome OS are both projects that started out as ideas that were not formally mandated by Google but over time grew into shipping products with teams of developers and testers working on them. In such cases early development is not about quality; it is about proving out a concept and working on things like scale and performance that must be right before quality could even be an issue. If you can't build a web service that scales, testing is not your biggest problem!

Once it is clear that a product can and will be built and shipped, that's when the development team seeks out test involvement.

You can imagine a process like this: someone has an idea, they think about it, write experimental code, seek out opinions of others, write some more code, get others involved, write even more code, realize they are onto something important, write more code to mold the idea into something that they can present to others to get feedback... somewhere in all this an actual project is created in Google's project database and the project becomes real. Testers don't get involved until it becomes real.

Do all real projects get testers? Not by default. Smaller projects and those meant for limited users often get tested exclusively by the people who build them. Others that are riskier to our users or the enterprise (much more about risk later) get testing attention.

The onus is on the development teams to solicit help from testers and convince them that their project is exciting and full of potential. Dev Directors explain their project, progress and ship schedule to Test Directors, who then discuss how the testing burden is to be shared and agree on things like SWE involvement in testing, expected unit testing levels and how the duties of the release process are going to be shared. SETs may not be involved at project inception, but once the project becomes real we have vast influence over how it is to be executed.

And when I say "testing" I don't just mean exercising code paths. Testers might not be involved from the beginning... but testing is. In fact, an SET's impact is felt even before a developer manages to check code into the build. Stay tuned to understand what I am talking about.
6#
OP | Posted on 2012-5-6 09:47:02
How Google Tests Software - A Break for Q&A
By James Whittaker

New material for this series is coming more slowly. I am beginning to get into areas where I want to start posting screen shots of internal Google tools and describe how our infrastructure works. This is material that takes longer to develop and also requires some scrutiny before being published externally. So in the meantime, I am pausing to answer some of the questions you've posted in the comments.

I am going to start with Lilia (because she likes Neil Young mainly, but also because she can run further than me, and those two things combine to impress me to no small end), who asks about SET-SWE conversions and vice versa and which I have seen the most. There is also the broader question of whether there is a ceiling on the SET career path.

SETs and SWEs are on the same pay scale and virtually the same job ladder. Both roles are essentially 100% coding roles, with the former writing test code and the latter doing feature development. From a coding perspective the skill set is a dead match. From a testing perspective we expect a lot more from SETs. But the overlap on coding makes SETs a great fit for SWE positions and vice versa. Personally I think it is a very healthy situation to have conversions. Since I have both roles reporting to me, I can speak from first-hand experience that many of my best coders are former SETs and some of my best testers are former SWEs. Each is excellent training ground for the other. On my specific team the conversions from one role to the other are about even. But I suspect that Google-wide there are more SETs who become SWEs.

Why convert in the first place? Well, at Google it isn't for the money. It also isn't for the prestige, as we have a lot more SWEs than SETs and it is a lot harder to stand out. The scarcity of our SETs creates somewhat of a mystique about these folk. Who are these rare creatures who keep our code bases healthy and make our development process run so smoothly? Actually, most SWEs care more about making the SETs happy so they continue doing what they do. Why would any dev team force a conversion of a great developer from SET to SWE when finding a suitable SET replacement is so much harder than adding another feature developer? SWEs ain't that stupid.

Now, pausing before I take another hit of the corp kool-aid, let me be honest and say that there are far more senior SWEs than SETs. Percentage-wise we test folk are more outnumbered at the top of the org than at the middle and bottom. But keep in mind that developers have had a large head start on us. We have developers who have been at Google since our founding and testers... well... less time than that.

Where do TEs fit into this mix? TE is an even newer role than SET, but already we have a number climbing to the Staff ranks and pushing on the senior-most positions in the company. There is no ceiling, but the journey to the top takes some time.

Raghev among others has asked about the career path and whether remaining an IC (individual contributor) is an option over becoming a manager. I have mixed feelings about answering this. As a manager myself, I see the role as one with much honor, and yet I hear in your collective voices a hint of why do I have to become a manager? Ok, I admit, Dilbert is funny.

For me, being a manager is a chance to impart some of my experience and old-guy judgement on less experienced but more technically gifted ICs. The combination of an experienced manager's vision and an IC's technical skill can be a fighting force of incredible power. And yet, why should someone who does not want to manage be forced to do so in order to continue their career advancement?

Well, fortunately, Google does not make us choose. Our managers are expected to have IC tasks they perform. They are expected to be engaged technically and lead as opposed to just manage. And our ICs are expected to have influence beyond their personal work area. When you get to the senior/staff positions here you are a leader, period. Some leaders lead more than they manage and some leaders manage more than they lead.

But either way, the view from the top means that a lot of people are looking to you for direction... whether you manage them or not.

7#
OP | Posted on 2012-5-6 09:47:21
How Google Tests Software - Part Seven
By James Whittaker

The Life of a TE

The Test Engineer is a newer role within Google than either SWEs or SETs. As such, it is a role still in the process of being defined. The current generation of Google TEs are blazing a trail that will guide the next generation of new hires for this role. It is the process that is emerging as the best within Google that we present here.

Not all products require the services of a TE. Experimental efforts and early-stage products without a well-defined mission or user story are certainly projects that won't get a lot of TE attention. If the product stands a good chance of being cancelled (in the sense that as a proof of concept it fails to pass muster), or has yet to engage users or have a well-defined set of features, testing it is largely something that should be done by the people developing it.

Even if it is clear that a product is going to get shipped, Test Engineers have little to do early in the development cycle when features are still in flux and the final feature list and scope are undetermined. Overinvesting in testing too early can mean a lot of things get thrown away. Likewise, early test planning requires fewer test engineers than later-cycle exploratory testing, when the product is close to final form and the hunt for missed bugs has a greater urgency.

The trick in staffing a project with Test Engineers has to do with risk and return on investment. Risk to the customer and to the enterprise means more testing effort and requires more TEs. But that effort needs to be in proportion to the potential return. We need the right number of TEs, and we need them to engage at the right time and with the right impact.

Once engaged, TEs do not have to start from scratch. There is a great deal of test engineering and quality-oriented work performed by SWEs and SETs which is the starting point for additional TE work. The initial engagement of the TE is to decide things such as:

· Where are the weak points in the software?

· What are the security, privacy, performance and reliability concerns?

· Do all the primary user scenarios work as expected? For all international audiences?

· Does the product interoperate with other products (hardware and software)?

· In the event of a problem, how good are the diagnostics?

All of this combines to speak to the risk profile of releasing the software in question. TEs don't necessarily do all of this work, but they ensure that it gets done and they leverage the work of others in assessing where additional work is required. Ultimately, test engineers are paid to protect users and the business from bad design, confusing UX, functional bugs, security and privacy issues and so forth. At Google, TEs are the only people on a team whose full-time job is to look at the product or service holistically for weak points. As such, the life of a Test Engineer is much less prescriptive and formalized than that of an SET. TEs are asked to help on projects in all stages of readiness: everything from the idea stage to version 8, or even watching over a deprecated or "mothballed" project. Often, a single TE will even span multiple projects, particularly TEs with specialty skills like security.

Obviously, the work of a TE varies greatly depending on the project. Some TEs spend much of their time programming, much like an SET, but with more of a focus on end-to-end user scenarios. Other TEs take existing code and designs, determine failure modes, and look for errors that will cause those failures. In such a role a TE might modify code but not create it from scratch. TEs must be more systematic and thorough in their test planning and completeness, with a focus on the actual usage and system experience. TEs excel at dealing with ambiguity in requirements and at reasoning and communicating about fuzzy problems.

Successful TEs accomplish all this while navigating the sensitivities and sometimes strong personalities of the development and product team members. When weak points are found, test engineers happily break the software and drive to get these issues resolved with the SWEs, PMs and SETs.

Such a job description is a frightening prospect given the mix of technical skill, leadership and deep product understanding it requires, and without proper guidance it is a role in which many would expect to fail. But at Google a strong community of test engineers has emerged to counter this. Of all job functions, the TE role is perhaps the best peer-supported role in the company, and the insight and leadership required to perform it successfully mean that many of the top test managers in the company come from the TE ranks.

There is a fluidity to the work of a Google Test Engineer that belies any prescriptive process for engagement. TEs can enter a project at any point and must assess the state of the project, code, design and users quickly and decide what to focus on first. If the project is just getting started, test planning is often the first order of business. Sometimes TEs are pulled in late in the cycle to evaluate whether a project is ready to ship, or whether there are any major issues before an early 'beta' goes out. If they are brought into a newly acquired application or one in which they have little prior experience, they will often start doing some exploratory testing with little to no planning. Sometimes projects haven't been released for quite a while and just need some touch-ups, security fixes, or UX updates, which calls for yet another approach. One size rarely fits all for TEs at Google.

8#
Posted on 2012-5-6 12:43:19
The author has already jumped ship to another company.

9#
Posted on 2012-5-7 15:36:10
mark

10#
Posted on 2015-4-20 10:35:47
Thanks! OP, do you have a link to the original source?