Heh, found some stuff here – it might be useful.
As you can see, there's a lot of useful information in this summary. The most useful numbers are the throughput figures, which tell you the total number of hits and how many requests the Web server processed per second. In this example, 1.4 million links were served at an average of almost 49 per second. Impressive for the notebook computer this sample was run on. Understand that this value is not the number of requests hitting your backend application, but all links, including images and other static pages that the Web server serves.
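As a quick sanity check, those two figures are consistent with each other (a back-of-the-envelope calculation; the 8-hour duration is taken from the bandwidth discussion below):

```python
# Back-of-the-envelope check on the WAS summary figures quoted above.
total_hits = 1_400_000      # total links served during the run
duration_s = 8 * 60 * 60    # length of the 8-hour test period, in seconds
print(total_hits / duration_s)  # ~48.6 requests/second, matching the report
```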
Also notice the bandwidth information, which tells you the average Kbytes received and sent per second. I was a little surprised by how low these numbers are for the amount of traffic generated: 300kb a second on average for 1.4 million hits on the Web server in the 8-hour period. That's impressively low (less than a quarter of a T1 connection), but then again the Web Store application is very light on images – more image-heavy applications will see much higher bandwidth usage.
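The T1 comparison checks out if you read that figure as kilobits per second (an assumption on my part, since the report itself lists Kbytes; a T1 line carries 1.544 Mbps):

```python
# Rough line-utilization check on the bandwidth figure above.
# Assumes "300kb a second" means kilobits per second; a T1 line is 1.544 Mbps.
avg_kbps = 300
t1_kbps = 1544
print(avg_kbps / t1_kbps)   # ~0.19 -- a bit under a quarter of a T1
```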
Looking at the page detail we can see more information about specific requests. For example, it's easy to see which pages are static and which are dynamic based on the request times. TTFB (time to first byte) and TTLB (time to last byte) give you a glimpse of how long (in milliseconds) the client waits for pages. You can easily see the dynamic requests (the .wws pages) taking a couple of seconds, as opposed to static links, which return almost instantly. In this case that can be attributed to the backlog of ISAPI requests. You'll want to watch these numbers carefully in your tests – if they go over 5 seconds you're probably keeping your Web clients waiting too long for each page.
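If you want to spot-check TTFB and TTLB for a single URL outside of WAS, here's a minimal sketch in Python (the URL is a placeholder; this is a single rough sample that includes connection setup, not a WAS-grade measurement):

```python
import time
import urllib.request

def measure(url):
    """Rough single-sample TTFB/TTLB measurement for one request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)                        # first byte of the body arrives
        ttfb = time.perf_counter() - start
        while resp.read(64 * 1024):         # drain the rest of the response
            pass
        ttlb = time.perf_counter() - start
    return ttfb * 1000, ttlb * 1000         # milliseconds, like the WAS report

ttfb_ms, ttlb_ms = measure("http://localhost/wwstore/orderprofile.wws")  # placeholder URL
print(f"TTFB: {ttfb_ms:.0f} ms, TTLB: {ttlb_ms:.0f} ms")
```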
There appears to be a bug in the way the TT values are recorded in the examples above – notice that some of the dynamic pages (.wws) that hit the backend come back almost instantly (orderprofile.wws) while others (removeitem.wws) take 3 seconds, even though all backend request times are in the 50-millisecond range and all requests are uniformly fast. For some reason it looks like POST requests get priority processing… in these cases the TTLB value is probably the one to go by. Microsoft is aware of this issue and is working on a fix for future releases. As a workaround, you can set up another WAS client running the same script with only a single client – that single client will produce more accurate request-retrieval times since there's no interference from multiple clients running simultaneously.
Notice also that WAS does not cache pages the way a browser does, so WAS clients realistically generate more traffic on your Web server than typical browsers would. For example, wwstore.css is the Web Store's default Cascading Style Sheet, used on every page of the store. A typical browser will cache this static page after the first load; WAS, however, reloads wwstore.css for every client page that requests it. Note also that the page count is not summarized across all the wwstore.css requests – each client request is listed separately in the link result list. This behavior may change in the future with caching options for WAS clients.
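The difference is easy to picture with a toy client (a hypothetical sketch, not WAS code – the URL is a placeholder):

```python
import urllib.request

cache = {}

def get(url, use_cache):
    """Fetch a URL, either browser-style (cache static files after the
    first load) or WAS-style (hit the server on every single request)."""
    if use_cache and url in cache:
        return cache[url]              # served from the local cache, no server hit
    body = urllib.request.urlopen(url).read()
    cache[url] = body
    return body

css = "http://localhost/wwstore/wwstore.css"   # placeholder URL
for _ in range(10):
    get(css, use_cache=False)   # WAS-style: 10 hits on the Web server
for _ in range(10):
    get(css, use_cache=True)    # browser-style: only the first iteration hits the server
```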
All of this information is very useful, as it lets you see how your application performs under a given load. Remember that I ran this test without setting up delays between requests, which means even 50 clients can easily saturate the backend application: WAS simply has those 50 clients push requests at the server as fast as it can process them! In other words, without delays the number of clients is largely irrelevant – in fact, in my tests 50 and 150 clients performed almost identically. Lower numbers didn't saturate the backend application, so the throughput values went down, as did CPU usage. Higher numbers (175 and up, actually) started over-saturating the application, resulting in slowdowns and TTFB values climbing above 5 seconds. Adding more clients could rapidly bring the entire application to an unusable state. However, my goal in this test was to find out how much traffic I can throw at the site, and these tests gave me exactly that. I'll look at some additional information the backend application provided in the next section.
If you want to gauge load for actual user connections you will need to add delays between requests that match your users' browsing patterns. With delays in place you'll find that you can run many more users than without, because the WAS client application is throttled: regardless of how fast a request finishes, the client has to wait out the specified interval before sending the next one. When I re-ran my tests with a 5-second delay I was able to run in excess of 2,000 clients before the backend application started bogging down. Note that I had to add users under the Users item of the main script view – the default sets up 200 users. If you run anything more than 200 clients (threads * sockets per thread), make sure you add the appropriate number of users, or you'll get an error message stating that the script cannot be run due to too many users for this setup.
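The effect of the delay is easy to reproduce with a toy load loop (a minimal sketch with a placeholder URL – set THINK_TIME to 0 and you get the hammer-the-server behavior of the no-delay runs described above):

```python
import threading
import time
import urllib.request

URL = "http://localhost/wwstore/default.wws"  # placeholder URL
CLIENTS = 50            # concurrent simulated users (threads * sockets per thread, in WAS terms)
REQUESTS_PER_CLIENT = 10
THINK_TIME = 5.0        # seconds between requests; 0 saturates the server like the no-delay runs

def client():
    # Each simulated user fetches the page, then waits THINK_TIME before
    # the next request, no matter how quickly the previous one finished.
    for _ in range(REQUESTS_PER_CLIENT):
        urllib.request.urlopen(URL).read()
        time.sleep(THINK_TIME)

threads = [threading.Thread(target=client) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```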
Heh
Pity it's in English
If I had the time I'd translate it
Wouldn't that be even better
Time......