[Original] Web Testing Metrics

1#
Posted on 2004-11-4 22:51:13
Could the experts here tell me what each of QALoad's test metrics means? Could you share some examples or documentation? Thanks!

2#
Posted on 2004-11-5 11:44:35

I need this too.

Right – without a standard or a set of metrics to go by, a pile of raw numbers is useless; the data only means something once it's analyzed. So, shouting it out: whoever has something, please speak up! Heh.

3#
Posted on 2004-11-5 19:39:15

UP AGAIN

Heh, still no replies, so I'll keep searching on my own...

4#
Posted on 2004-11-8 16:22:28

Heh, I found something that may be useful:

As you can see there's lots of useful information in this summary. The most useful numbers are the throughput numbers that tell you the total number of hits and how many requests the Web server processed per second. In this example, 1.4 million links were served with an average of almost 49 a second. Impressive for a notebook computer that this sample was run on. Understand that this value is not the number of requests on your backend application, but all links including images and other static pages that the Web server provides.



Also notice the bandwidth information that tells you the average Kbytes received and sent per second. I was a little surprised by how low these numbers are for the amount of traffic generated: 300kb a second average for 1.4 million hits on the Web server in the 8 hour period. That's impressively low (less than a quarter of a T1 connection), but then again the Web Store application is very light on use of images – more image heavy applications will see much higher bandwidth usage.
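A quick back-of-the-envelope check of those throughput numbers in Python; the 1.4 million total hits and the 8-hour run length are taken from the summary described above, everything else is just division.

[code]
# Sanity-check the throughput figure quoted in the WAS summary above.
total_hits = 1_400_000          # total links served during the run
duration_s = 8 * 60 * 60        # 8-hour test run, in seconds

hits_per_second = total_hits / duration_s
print(f"Average throughput: {hits_per_second:.1f} hits/second")
# -> about 48.6 hits/second, matching the "almost 49 a second" above
[/code]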



Looking at the page detail we can see more information about specific requests. For example, it's easy to see which pages are static and which are dynamic based on the request times. TTFB (time to first byte) and TTLB (time to last byte) let you get a glimpse of how long (in milliseconds) the client waits for pages. You can easily see the dynamic requests (the .wws pages) taking a couple of seconds, as opposed to static links which appear to be next to instant. This can be attributed to the backlog of ISAPI requests in this case. You'll want to watch these numbers carefully in your tests – if the numbers go over 5 seconds you're probably keeping your Web clients waiting too long for each page.
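For reference, TTFB/TTLB can be approximated outside of WAS with nothing but Python's standard library. This is only a sketch – the URL is a hypothetical placeholder, and TTFB is measured as the time until the first body byte arrives.

[code]
# Rough TTFB/TTLB measurement for a single request (placeholder URL).
import time
from urllib.request import urlopen

URL = "http://localhost/wwstore/default.wws"   # hypothetical endpoint

start = time.perf_counter()
resp = urlopen(URL)                  # returns once the response headers are in
resp.read(1)                         # block until the first body byte arrives
ttfb_ms = (time.perf_counter() - start) * 1000
resp.read()                          # drain the rest of the body
ttlb_ms = (time.perf_counter() - start) * 1000

print(f"TTFB: {ttfb_ms:.0f} ms, TTLB: {ttlb_ms:.0f} ms")
[/code]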



There appears to be a bug with the way the TTFB/TTLB values are recorded in the examples above – notice that some of the dynamic pages (.wws) which hit the backend are coming back next to instant (orderprofile.wws) while others (removeitem.wws) are taking 3 seconds. All backend request times are in the 50 millisecond range and all requests are evenly fast. For some reason it looks like POST requests are getting priority processing… in these cases the TTLB value is probably the one to go by. Microsoft is aware of this issue and is working on a fix for future releases. As a workaround, you can set up another WAS client running the same script with only a single client – that single client will provide more accurate request retrieval times as there's no interference from multiple clients running simultaneously.


Notice also that WAS does not cache pages like a browser does, so realistically WAS clients are generating more traffic on your Web server than a typical browser would. For example, wwstore.css is the Web store's default Cascading Style Sheet that's used on every page of the store. Typical browsers will cache this static page after the first load. WAS however reloads wwstore.css on every client page that requests it. Note also that the page count is not summarized for all the wwstore.css pages, but rather each client request is separately listed in the link result list. This behavior may change in the future with options for caching provided for WAS clients.



All of this information is very useful as it lets you see how your application performs under a given load. Remember I ran this test without setting up delays between requests, which means even 50 clients could easily saturate the backend application, since those 50 clients simply push requests at the server as fast as it can process them! In other words, without delays the number of clients is largely irrelevant – in fact in my tests 50 or 150 performed almost identically. Lower numbers weren't saturating the backend application, so the values went down, as did the CPU usage. Higher numbers (175 and up, actually) started over-saturating the application, resulting in slowdowns and TTFB values going up above 5 seconds. Adding more clients could rapidly bring the entire application to an unusable state. However, my goal in this test was to identify how much traffic I can throw at the site, and I was able to get this information through these tests. I'll look at some additional information that the backend application provided in the next section.



If you want to gauge load for actual user connections you will need to add delays between requests that match the browsing patterns of your users. With delays in place you'll find that you can run many more users than without delays, because the WAS client application is throttled: regardless of how fast a request finishes, the client has to wait for the specified interval. When I re-ran my tests inserting a 5 second delay I was able to run in excess of 2000 clients before the backend application started bogging down. Note that I had to add users on the Users item of the main script view – the default sets up 200 users. If you run anything more than 200 clients (Threads * Sockets per thread) you have to make sure you add the appropriate number of users, or else you'll get an error message stating that the script cannot be run due to too many users for this setup.
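To illustrate why the delay throttles each client, here is a minimal multi-client sketch, assuming a hypothetical URL and the 5-second think time from the re-run above. Each simulated client can issue at most one request per (response time + think time), so the offered load grows with the number of clients rather than with how fast the server answers.

[code]
# Minimal load sketch: N clients, each pausing 5 seconds between requests.
import threading
import time
from urllib.request import urlopen

URL = "http://localhost/wwstore/default.wws"   # hypothetical endpoint
CLIENTS = 50                                   # simulated concurrent users
THINK_TIME_S = 5.0                             # delay between requests
DURATION_S = 60.0                              # length of the test run

request_count = 0
count_lock = threading.Lock()

def client() -> None:
    global request_count
    deadline = time.monotonic() + DURATION_S
    while time.monotonic() < deadline:
        try:
            urlopen(URL).read()                # one request, body drained
            with count_lock:
                request_count += 1
        except OSError:
            pass                               # skip failed requests
        time.sleep(THINK_TIME_S)               # the think time throttles the client

threads = [threading.Thread(target=client) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{request_count} requests in {DURATION_S:.0f}s "
      f"({request_count / DURATION_S:.1f} req/s)")
[/code]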

Heh, too bad it's in English. If I had the time I'd translate it – wouldn't that be better? Time, though...

5#
Posted on 2004-11-8 16:26:09

6#
Posted on 2004-11-8 17:21:38
All in English, heh. Well, at least it's the original.

7#
Posted on 2004-11-9 15:25:47
I hate reading English.

8#
Posted on 2005-11-10 17:08:44

The metrics are simply what you measure – for example the counters below (a collection sketch follows the list):

* % Processor Time: server CPU utilization; once the average reaches around 70%, the service is close to saturation;
* Memory Available MBytes: available memory; watch for changes during the test – a memory leak is especially serious;
* PhysicalDisk % Disk Time: time spent on physical disk reads and writes;
        Web server metrics:
* Avg RPS: average responses per second = total requests / elapsed seconds;
* Avg time to last byte per iteration (msec): average number of business-script iterations per second – people sometimes confuse these two;
* Successful Rounds: rounds (script iterations) that succeeded;
* Failed Rounds: rounds that failed;
* Successful Hits: successful hits;
* Failed Hits: failed hits;
* Hits Per Second: hits per second;
* Successful Hits Per Second: successful hits per second;
* Failed Hits Per Second: failed hits per second;
* Attempted Connections: number of attempted connections;
        Database server metrics:
* User Connections: user connections, i.e. the number of database connections;
* Number of Deadlocks: database deadlocks;
* Buffer Cache Hit Ratio: database buffer cache hit rate;
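Here is the collection sketch mentioned above: it drives the built-in Windows typeperf tool from Python to sample these counters into a CSV during a run. Only a sketch – the SQL Server counter paths assume the default instance (named instances expose MSSQL$<name> objects instead), and the sampling interval and count are arbitrary.

[code]
# Sample the counters listed above into a CSV using Windows' typeperf tool.
import subprocess

COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(_Total)\% Disk Time",
    r"\SQLServer:General Statistics\User Connections",   # default SQL Server instance assumed
    r"\SQLServer:Locks(_Total)\Number of Deadlocks/sec",
    r"\SQLServer:Buffer Manager\Buffer cache hit ratio",
]

# One sample every 5 seconds, 720 samples (one hour), written to perf.csv.
subprocess.run(
    ["typeperf", *COUNTERS, "-si", "5", "-sc", "720", "-o", "perf.csv", "-y"],
    check=True,
)
[/code]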