Monday 25 November 2013

Hadoop Basics

What is Hadoop?

Hadoop is a paradigm-shifting technology that lets you do things you could not do before – namely compile and analyze vast stores of data that your business has collected. “What would you want to analyze?” you may ask. How about customer click and/or buying patterns? How about buying recommendations? How about personalized ad targeting, or more efficient use of marketing dollars?

From a business perspective, Hadoop is often used to build deeper relationships with external customers, providing them with valuable features like recommendations, fraud detection, and social graph analysis. In-house, Hadoop is used for log analysis, data mining, image processing, extract-transform-load (ETL), network monitoring, and anywhere else you'd want to process gigabytes, terabytes, or petabytes of data.


Pillars of Hadoop

HDFS exists to split, distribute, and manage chunks of the overall data set, which could be a single file or a directory full of files. These chunks of data are pre-loaded onto the worker nodes, which later process them in the MapReduce phase. By having the data local at process time, HDFS saves all of the headache and inefficiency of shuffling data back and forth across the network.
In the MapReduce phase, each worker node spins up one or more tasks (which can be either Map or Reduce). Map tasks are assigned based on data locality, if at all possible: a Map task will be assigned to the worker node where its data resides. Reduce tasks (which are optional) then typically aggregate the output of all of the dozens, hundreds, or thousands of Map tasks and produce the final output.

The Map and Reduce programs are where your specific logic lies, and seasoned programmers will immediately recognize map as a common built-in function in many languages, for example map(function, iterable) in Python, or array_map(callback, array) in PHP. All map does is run a user-defined function (your logic) on every element of a given sequence. For example, we could define a function squareMe, which does nothing but return the square of a number. We could then pass an array of numbers to a map call, telling it to run squareMe on each. So an input array of (2,3,4,5) would return (4,9,16,25), and our call (in Python) would look like map(squareMe, array('i', [2, 3, 4, 5])).
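
Here is a minimal, runnable version of that example in Python (the function name squareMe comes from the text above; everything else is standard library):

from array import array

def squareMe(n):
    return n * n

numbers = array('i', [2, 3, 4, 5])
squares = list(map(squareMe, numbers))   # map is lazy in Python 3, so wrap it in list()
print(squares)                           # prints [4, 9, 16, 25]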

Hadoop will parse the data in HDFS into user-defined keys and values, and each key and value will then be passed to your Mapper code. In the case of image processing, each value may be the binary contents of an image file, and your Mapper may simply run a user-defined convertToPdf function against each file. In this case you wouldn't even need a Reducer, as the Mappers would simply write out the PDF files to some datastore (like HDFS or S3). This is what the New York Times did when converting their archives.

Consider, however, if you wished to count the occurrences of a list of "good/bad" keywords in all customer chat sessions, Twitter feeds, public Facebook posts, and/or e-mails in order to gauge customer satisfaction. Your good list may look like happy, appreciate, "great job", awesome, etc., while your bad list may look like unhappy, angry, mad, horrible, etc., and your total data set of all chat sessions and e-mails may be hundreds of GB. In this case, each Mapper would work only on a subset of that overall data, and the Reducer would be used to compile the final count, summing up the outputs of all the Map tasks.
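
To make that concrete, here is a minimal Hadoop Streaming style sketch in Python. It is illustrative only: the keyword lists, file name, and command line are assumptions, not a prescribed implementation. The Mapper emits a count for every keyword it finds, and the Reducer sums the counts per keyword.

#!/usr/bin/env python
# keyword_count.py -- illustrative sketch; run as mapper with "map" and reducer with "reduce",
# e.g. hadoop jar hadoop-streaming.jar -mapper "keyword_count.py map" -reducer "keyword_count.py reduce" ...
import sys

GOOD = {"happy", "appreciate", "great job", "awesome"}   # assumed keyword lists
BAD = {"unhappy", "angry", "mad", "horrible"}

def mapper():
    # Each input line is one chat message, tweet, post, or e-mail fragment.
    for line in sys.stdin:
        text = line.lower()
        for word in GOOD | BAD:
            count = text.count(word)
            if count:
                label = "good" if word in GOOD else "bad"
                print("%s:%s\t%d" % (label, word, count))

def reducer():
    # Streaming delivers mapper output sorted by key; summing into a dict also works unsorted.
    totals = {}
    for line in sys.stdin:
        key, count = line.rsplit("\t", 1)
        totals[key] = totals.get(key, 0) + int(count)
    for key in sorted(totals):
        print("%s\t%d" % (key, totals[key]))

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    mapper() if role == "map" else reducer()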

At its core, Hadoop is really that simple. It takes care of all the underlying complexity, making sure that each record is processed, that the overall job runs quickly, and that failure of any individual task (or hardware/network failure) is handled gracefully. You simply bring your Map (and optionally Reduce) logic, and Hadoop processes every record in your dataset with that logic.


Why Hadoop?
The fact that Hadoop can do all of the above is not the compelling argument for its use. Other technologies
have been around for a long while which can and do address everything we've listed so far. What makes Hadoop shine, however, is that it performs these tasks in minutes or hours, for little or no cost, versus the days or weeks and substantial costs (licensing, product, specialized hardware) of previous solutions.

Hadoop does this by abstracting out all of the difficult work in analyzing large data sets, performing its work on commodity hardware, and scaling linearly: add twice as many worker nodes, and your processing will generally complete about twice as fast. With datasets growing larger and larger, Hadoop has become the go-to solution businesses turn to when they need fast, reliable processing of large, growing data sets at little cost.

Where to Start Learning?

Here are five steps to start learning Hadoop:

  1.     Download and install Ubuntu Linux Server (32-bit)
  2.     Read about Hadoop (what Hadoop is, the Hadoop architecture, MapReduce, and HDFS)
  3.     Start by installing Hadoop on a single node
  4.     Run some examples (like wordcount) to test how it works
  5.     Move on to a multi-node cluster


References: wiki.apache.org/hadoop/

Monday 18 November 2013

Performance Engineering

Performance engineering, within systems engineering, encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle to ensure that a solution will be designed, implemented, and operationally supported to meet the non-functional performance requirements defined for it. As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventative and perfective role within the software development life cycle. As such, the term is typically used to describe the processes, people, and technologies required to effectively test non-functional requirements, ensure adherence to service levels, and optimize application performance prior to deployment. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production system.
Objectives
  • Increase business revenue by ensuring the system can process transactions within the requisite timeframe
  • Eliminate system failure requiring scrapping and writing off the system development effort due to performance objective failure
  • Eliminate late system deployment due to performance issues
  • Eliminate avoidable system rework due to performance issues
  • Eliminate avoidable system tuning efforts
  • Avoid additional and unnecessary hardware acquisition costs
  • Reduce increased software maintenance costs due to performance problems in production
  • Reduce increased software maintenance costs due to software impacted by ad hoc performance fixes
  • Reduce additional operational overhead for handling system issues due to performance problems.
Approach

Tuesday 10 September 2013

Evolving Customer Performance Requirements

Often the customer's performance requirements are ambiguous and unrealistic. This became clear from my recent experience with one of our customers. The key challenges were:

1. The performance requirements are very high level, with the accepted metrics and workload not defined.
2. The tools and procedures for performance evaluation are not defined.

Let's take each of these and see how we can engage the customer continuously and avoid the last-minute rush!

The requirements were defined with a desired number of users and an acceptable response time. The initial gap we found was around the distribution of users and the associated workload. Was this a realistic workload, or were we stressing the system in the wrong way? Were we committing to customer needs we could not achieve? We raised all of these issues with the customer, and although we did not get complete details, we did get two things: the workload scenarios and the user distribution. On seeing the scenarios we immediately knew they were not realistic, and that achieving the response-time criteria with them was not possible based on our experience, but we went ahead so we could gather additional data such as hits/sec and throughput and compare them against market standards for similar tools. After multiple rounds of testing we did various performance tunings but could not achieve the desired expectation; however, we had enough data to show the customer that their requirements were not realistic.

The next challenge was the procedure used for evaluation. The think times between transactions were 7-10 seconds, user ramp-up was not clear, peak load duration was not defined, content checks were done at each step, and run-time settings were left completely at their defaults; all of these were making the test fail at much lower concurrency. After some discussion, one of the key differences turned out to be the content checks, which were adding to the LoadRunner response time because the check statement was executed for every vuser.

I suggest adding these learnings to your own projects to better manage your customer requirements!

Wednesday 28 August 2013

Evaluating Performance the Customer Way!

We recently ran performance testing for one of our prestigious customers. The tool we used for the test was LoadRunner. The test was to be run by two teams: our team and the customer's team. We started the test on the customer's database with 2500 concurrent users (CC) and it failed. When the same test was run on our internal performance database, we found that it scaled easily to 2500 CC, and out of that came our first learning:

1. Evaluate the volume of the database against which the test is being done.
2. Evaluate the scenarios against which UAT sign-off is required.

The initial test runs failed, and after multiple fixes we were able to achieve 2500 CC successfully. By the time we finished this test, the customer came up with their own script and think times; we modified our script according to their think times and the test failed again. Here came my second learning:

1. Never assume the think time, even if you are an SME.
2. Hits/sec at a given concurrency is what the customer looks for.

Apart from the think time, the customer did URL-mode scripting while we were doing HTML-based scripting; currently we are running the test in context-sensitive mode while our customer is running in context-insensitive mode.

One can read more about context-insensitive vs. context-sensitive recording on other blogs.

Sunday 30 June 2013

Basic Web Performance Part 5

As we progress on our journey to performance test our web application, I think it would be appropriate to introduce a few key terms we use in performance testing. It is important to know what the expectations from our performance testing are, along with certain dos and don'ts. So let me get started; I am not going to explain these terms here, you might have to figure them out yourself :)
Key Terminology
  • Ramp Up/Ramp Down
  • Users/Threads
  • Iterations
  • Throughput
  • Workload/Business Flow
  • Request/Response
  • Load/Stress
Why Do Performance Testing
  1. Assessing release readiness
  2. Assessing infrastructure adequacy
  3. Assessing software performance adequacy
  4. Collecting metrics for performance tuning
Do and Don't

There are certain dos and don'ts before you perform a performance test. These are guidelines, and all of them might not be applicable.

Do
  • Pick a suitable time to load test, when all the parameters like network, user access, and application access are under control.
  • Define the performance metrics, accepted levels or SLAs, and goals.
  • Know the objective of the test.
  • Know the architecture of the application under test and details like the protocols supported, how cookies are managed, etc.
  • Know the workload scenarios and the peak and normal usage times before the test.
  • Use meaningful, real-life scenarios to construct the test plan.
  • Run the tool on a machine other than the one running the application.
  • Make sure the machine running the tool has sufficient network bandwidth and resources (memory, CPU).
  • Run the test for a long duration to minimize deviations.
  • Ensure the application is stable, with no errors logged in the log files, by manually browsing the application.
  • Incorporate think time to emulate real-life scenarios.
  • Keep a close watch on processor, memory, disk, and network (see the monitoring sketch after the Don't list).
Don't
  • Never run the test against servers that are not assigned for testing, else you may be accused of a DoS attack.
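
As promised above, here is a minimal monitoring sketch for the last Do item. It assumes the third-party psutil package is installed on the machine being watched; it is not part of any load-testing tool, just one way to sample the four counters during a run:

import time
import psutil

def sample():
    cpu = psutil.cpu_percent(interval=None)      # % CPU since the previous call
    mem = psutil.virtual_memory().percent        # % physical memory in use
    disk = psutil.disk_io_counters()             # cumulative disk I/O counters
    net = psutil.net_io_counters()               # cumulative network I/O counters
    return cpu, mem, disk.read_bytes + disk.write_bytes, net.bytes_sent + net.bytes_recv

if __name__ == "__main__":
    while True:                                  # stop with Ctrl+C when the test ends
        cpu, mem, disk_bytes, net_bytes = sample()
        print("cpu=%5.1f%%  mem=%5.1f%%  disk_io=%d B  net_io=%d B" % (cpu, mem, disk_bytes, net_bytes))
        time.sleep(1)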
You can see more on performance testing on the blog http://prashantbansode.blogspot.in/

In the next article we will see how we can follow these guidelines and start on performance testing with JMeter!!

Saturday 29 June 2013

Basic Web Performance Part 4

In previous articles we looked at the website request/response mostly in a single-user scenario. This is rarely the case in practice, as most company websites and web products are used by thousands or millions of users. Some transactions might be used by a few thousand users while others might be used by millions. It is advisable to test your website properly with a realistic user distribution; this cannot be done manually, so we need to automate the process. We need tool support, and we can broadly categorize the tools into two classes:
1. Open Source
2. Commercial
There are more subtle differences to consider before you choose a tool, like protocol support, vuser support, learning curve, etc. Since our discussion is limited to web applications, we will focus on the HTTP protocol here. The tools in the open source category include OpenSTA and JMeter, while on the commercial side there are tools like LoadRunner, WebLOAD, and the Rational tools. We chose JMeter for our testing for the reasons below:
  1. JMeter can be used for testing static and dynamic resources
  2. It can support heavy load, i.e. many concurrent users
  3. It provides good analysis of the performance results/metrics
  4. It is highly extensible, with support for JavaScript, Groovy, etc. to enhance scripts
  5. And lastly, no license cost!!

How to Install
Since it is open source, you can do a quick Google or Bing search and get the download. More precisely, you can go to
http://jmeter.apache.org/index.html; under Download on the left pane there is Download Releases, the latest being version 2.9, with Java 6 as a prerequisite. Under the binaries you can get the .zip or .tgz depending on the operating system you are planning to install on. If for some reason you want an older version, you can go to the Archives and download the version you are interested in.

JMeter Launch
Once you have extracted the files, you will see bin, docs, extras, lib, printable_docs, and a few auxiliary files. Go to the bin folder and launch jmeter.bat; this will launch JMeter and you will see something like the screen below. The left pane contains the JMeter elements, and the right pane is for configuring the settings of those elements.
We will talk more about the elements in JMeter in our next post and also touch on some key terminology in performance testing.

Wednesday 26 June 2013

ETL Performance - Data loading and Snapshot

Of late we had a requirement to test the performance of our ETL. ETL is a process which consists of the following:

E - Extraction
T - Transformation
L - Loading

The data is acquired from various source systems (aka upstream), which can be relational databases, flat files, etc. The data is then transformed by various business logic and loaded into marts. We had to acquire data from a SQL Server database; the acquisition was done with a tool developed by Microsoft, Replication. Replication performance was good enough with the default settings. We calculated the latency from publisher to distributor and from distributor to subscriber, and it was less than 3-5 seconds. With acquisition performing well, we focused on the performance of the transformation and loading parts; we had an SSIS framework and were using control flow and data flow tasks to load and transform the data.

The loading and transformation performance can be divided into the following three parts:

1. Full/initial run
2. No data change in upstream, then perform the run
3. Populate delta records in upstream, then perform the run

The initial run took less than 2 hours, with 25 GB of data populated successfully into the various facts and dimensions. We then re-ran the job without any modification to the source data, and the ETL took 1 hour and 15 minutes; since there was no data change, I think this time went into checking the records against the last checkpoint where data was processed successfully and/or re-populating the data after truncating existing records. The last part was to populate data into the source systems, run again, and capture the performance. The data loading had to be done in key tables based on the business requirements and the metrics collected. The loading was done with a separate utility in the source system, and we found that to load all the data successfully into the source we might have to wait for 3 days. We could not wait that long, so we verified the data that had already been loaded, took a snapshot of the source DB, ran the ETL job, and captured the benchmark. A snapshot in SQL Server captures the state of a database at a point in time, so later we can compare the performance benchmark against the equivalent data present in the snapshot DB. Later we will talk about the ETL performance reports. Thanks all, and happy ETL perf testing :-)
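
For reference, here is a hedged sketch of creating such a snapshot from Python before an ETL run. The server name, database name, logical file name, and snapshot path are assumptions; the CREATE DATABASE ... AS SNAPSHOT OF statement is standard SQL Server syntax:

import pyodbc

# Connect to the SQL Server instance hosting the source DB (names below are assumptions).
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=PERF-SQL01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,   # CREATE DATABASE cannot run inside a user transaction
)
conn.cursor().execute("""
    CREATE DATABASE SourceDB_PreEtl_Snap ON
        ( NAME = SourceDB_Data,                               -- logical data file of SourceDB
          FILENAME = 'D:\\Snapshots\\SourceDB_PreEtl_Snap.ss' )
    AS SNAPSHOT OF SourceDB;
""")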

Friday 21 June 2013

Data Distribution in the Performance DB - Internal Testing


We had a performance issue which we found in our internal performance lab; the issue was logged as a blocker and the release was pushed out! The whole team was on fire! The release went into red! Round-the-clock focus went onto the bug, after the development team's initial rounds of analysis and push-back for the reasons below:
1. It is an environment issue; this task completed very quickly in my environment.
2. We are not seeing this issue in unit testing (mind you, unit testing in a DW involves heavy transactions and can sometimes bring down your system).

We finally nailed the issue down to one ETL package taking more time, and it was because of data in one of our transaction systems which we had increased threefold. We were short on time to fix the bug within the cycle, so we decided to review our data against existing customers in production. On analysis, the largest current customer's data in the said table was less than the data we originally had in the performance database. We then started analyzing potential customers, and of the 10+ biggest customer databases we analyzed, only one customer DB had more data than was originally present in the performance DB.

Sometimes you might be investing effort in a one-off requirement whose frequency of occurrence is less than 1%. We changed our test-data strategy, moved back to the original volume in this table, and plan to include the extra data only when we onboard this customer.
For general testing, the data populated in the database should be closer to the mean than to the extremes, to avoid such situations. We can plot the distribution of customer data volumes and populate the performance DB with data closer to the median.
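
A minimal sketch of that idea in Python (the row counts below are made-up, illustrative numbers): take the row counts of the same table across customer databases, and size the performance DB around the median rather than around the one outlier.

import numpy as np

# Hypothetical row counts for the same table across customer production DBs.
customer_row_counts = np.array([
    1.2e6, 0.8e6, 2.5e6, 1.0e6, 0.9e6, 3.1e6, 1.4e6, 0.7e6, 1.1e6, 18.0e6, 1.3e6
])

median = np.median(customer_row_counts)        # a "typical" customer, robust to the outlier
p90 = np.percentile(customer_row_counts, 90)   # a large-but-plausible customer

print("median rows: {:,.0f}".format(median))   # target volume for general performance testing
print("90th pct   : {:,.0f}".format(p90))      # optional stretch target, not the 1% outlier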


Thursday 13 June 2013

Basic Web Performance Part 3

In the last articles we spoke about measuring performance; in this one we will continue the discussion and talk about measuring performance by analyzing web logs. We will see how to configure and capture data in the IIS web server logs. This raw data can then be analyzed with another free tool from Microsoft called Log Parser. So let's get started.

Let me first show the steps to configure the logs. Go to your web server in inetmgr, select the website on the left pane, then go to Logging and click on it. Below is a pictorial representation of how to configure it. Mostly we will select the following fields in the logs:
1. Time-Taken
2. Bytes Received
3. Bytes Sent

Time-taken is the total time taken for a request; it includes the time the request was queued, the execution time, and the time to send the response to the client. An .aspx request will include all three components, while a static resource will only include the time to the client. The data transferred between client and web server can be calculated from bytes sent/received.



Once you have the raw data, we can use Log Parser to analyze it. Log Parser is a free utility from Microsoft and can be downloaded from the Microsoft site. You can run SQL-like queries against the raw data through Log Parser to get status codes, time taken, etc. Here are a few examples:

200 status codes
logparser -rtp:-1 "SELECT cs-uri-stem, cs-uri-query, date, sc-status, cs(Referer) INTO 200sReport.txt FROM ex0902*.log WHERE (sc-status >= 200 AND sc-status < 300) ORDER BY sc-status, date, cs-uri-stem, cs-uri-query"
400 status codes
logparser -rtp:-1 "SELECT cs-uri-stem, cs-uri-query, date, sc-status, cs(Referer) INTO 400sReport.txt FROM ex0811*.log WHERE (sc-status >= 400 AND sc-status < 500) ORDER BY sc-status, date, cs-uri-stem, cs-uri-query"
Bandwidth usage: returns bytes (as well as conversions to KB and MB) received and sent, per date, for a web site.
logparser -rtp:-1 "SELECT date, SUM(cs-bytes) AS [Bytes received], DIV(SUM(cs-bytes), 1024) AS [KBytes received], DIV(DIV(SUM(cs-bytes), 1024), 1024) AS [MBytes received], SUM(sc-bytes) AS [Bytes sent], DIV(SUM(sc-bytes), 1024) AS [KBytes sent], DIV(DIV(SUM(sc-bytes), 1024), 1024) AS [MBytes sent], COUNT(*) AS Requests INTO Bandwidth.txt FROM ex0811*.log GROUP BY date ORDER BY date"
Bandwidth usage by request: returns pages sorted by the total number of bytes transferred, as well as the total number of requests and average bytes.
logparser -i:iisw3c -rtp:-1 "SELECT DISTINCT TO_LOWERCASE(cs-uri-stem) AS [Url], COUNT(*) AS [Requests], AVG(sc-bytes) AS [AvgBytes], SUM(sc-bytes) AS [Bytes sent] INTO Bandwidth.txt FROM ex0909*.log GROUP BY [Url] HAVING [Requests] >= 20 ORDER BY [Bytes sent] DESC"

You can refer to http://logparserplus.com/Examples for more examples. You can also output the IIS log data to CSV and do further analysis there.
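
For example, here is a minimal sketch of that CSV-based follow-up analysis in Python with pandas. The file name and the exact export query are assumptions; the field names are the standard IIS W3C ones used in the queries above:

# Export first, e.g.:
#   logparser -i:iisw3c -o:csv "SELECT cs-uri-stem, time-taken, sc-status INTO requests.csv FROM ex0902*.log"
import pandas as pd

df = pd.read_csv("requests.csv")

# Summarize response times per URL and list the slowest pages by 95th percentile.
summary = (
    df.groupby("cs-uri-stem")["time-taken"]
      .agg(requests="count", avg_ms="mean", p95_ms=lambda s: s.quantile(0.95))
      .sort_values("p95_ms", ascending=False)
)
print(summary.head(10))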
In the next post we will talk about performance tools and dig more into the open source tool JMeter and its capabilities.

Thursday 6 June 2013

Tester Planning (MPP vs Excel)

Last week I discussed the plan with my manager. He wanted me to come up with a plan for my testing in two days. It was not tough, since I was aware of the testing involved: mostly regression, plus a few additional scenarios based on the changes. The initial step was to go through the requirement document and understand the changes; once I understood them and knew which scenarios to include, the next step was to put all the analysis into a plan document and come up with dates. I needed a tool, so I approached my IT team requesting MPP; that is quite a tool for your planning needs, taking care of business days, holidays, and resources in your plan, though I was mostly focused on coming up with effort and dates. The process of getting a license approved for Microsoft Project was difficult, as it costs $300-600, and I was sure I would not get approval. So I decided to explore the obvious Excel, and there I came to know a few functions that are handy for coming up with a basic plan.

Here are a few functions in Excel which come with the Analysis ToolPak add-in:
1. WORKDAY
2. NETWORKDAYS
3. DATE

I used WORKDAY and DATE, and the Data -> Group feature was also handy for building a task and sub-task structure as in MPP, counting only business days. Below is a screenshot of my plan. There are many more functions which I will explore later to come up with a complete tool for my test planning.
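
For readers who prefer scripting, here is a minimal sketch of the same business-day arithmetic in Python. The task names, efforts, and start date are made-up examples; np.busday_offset plays the role of Excel's WORKDAY:

import numpy as np

# (task name, effort in business days) -- hypothetical plan items
tasks = [("Review requirements", 2), ("Update regression suite", 5),
         ("New scenarios", 3), ("Execution and reporting", 4)]

start = np.datetime64("2013-06-10")                             # assumed plan start (a Monday)
for name, effort in tasks:
    end = np.busday_offset(start, effort - 1, roll="forward")   # like WORKDAY(start, effort-1)
    print("%-25s %s -> %s" % (name, start, end))
    start = np.busday_offset(end, 1, roll="forward")            # next task starts the next business day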


Friday 31 May 2013

Basic Web Performance Part 2


In the last article we saw the importance of a better-performing web site. So we need to make sure that our sites perform well, but before we work on improving performance we need tools to measure it. In this article and the next few, we will talk about a few tools which help measure web site performance. Here is the list of tools we will look at:

1. Fiddler
2. Microsoft Network Analyzer
3. IIS Log files
4. 3rd Party Services

While we will have a look at the top three, we will not talk about the fourth one, which is basically services that benchmark your web site against other web sites. So let's start with Fiddler.
We touched upon some of the core features of Fiddler earlier; let's get practical. I will hit Microsoft.com, Fiddler will capture the traffic, and we will analyze the data through the various features Fiddler provides. Below is the screenshot of the requests generated upon hitting microsoft.com.


The Timeline provides a graphical representation of the request/response times. The data captured through Fiddler can be saved for later reference. In the image above, the right panel shows the timeline; as you can see, the .aspx page loaded in 2 seconds but the other content on the site took 14+ seconds, which shows that most of the time is spent downloading static resources.

There is another feature called Statistics which, apart from other details, shows the response bytes by content type; this helps determine which content consumes the most bandwidth and is therefore a good candidate for optimization.
In the figure above you can see that the most bandwidth is used by the JPEG files, and those are good targets for optimization.

Thursday 30 May 2013

Basic Web Performance Part 1


We need to know why we care about performance. While fast web sites make customers happy, there are other benefits of better performance; they save you money by:

1. Consuming less bandwidth
2. Requiring fewer servers

Better-performing websites also help generate more money.

According to studies of two web sites, Google and Amazon: at Amazon it was observed that every 100 ms increase in the load time of Amazon.com decreased sales by 1%; in another case, when the Google Maps home page was reduced from 100 KB to 70-80 KB, traffic went up by 10% in the first week and an additional 25% in the following three weeks.

Your web site's performance also impacts its ranking on Google. Now that we know the importance of performance, let's have a look at what makes web sites slow.

The general assumption is that website performance can be improved by tuning the code on the server side. But research has found that only 10-20% of the total response time is spent processing the request, generating the HTML, and downloading it, while the remaining 80-90% is spent fetching auxiliary files like images, JavaScript, CSS, etc. To demonstrate this we need a tool to see the conversation that happens between the browser and the server. That conversation happens over HTTP, which stands for Hypertext Transfer Protocol; it is a text-based protocol, so we need a tool that can help profile HTTP traces, and one such tool is Fiddler. This tool can watch the conversation between browser and server.

This tool is free and can be downloaded after a quick Google search; it has various features:

1. You can watch the web traffic
2. It shows the complete request and response
3. It can save the session, which can be used to compare performance before and after
4. Transfer Timeline: a graphical view of all the requests made
5. Statistics: aggregate details about the selected requests

We will talk about Fiddler and how we can see the response times in the next post...

Monday 20 May 2013

Menopause Testing

In every profession, with experience, tasks get more repetitive, mundane, less challenging, boring. There comes a day/week/phase which is like a menopause: the focus changes from core professional tasks with no creativity left in them to people management, other skills, changing streams, etc. The IT profession is no different in this respect. Software development and testing all go through this phase, and a person has to be prepared for it.

This phase also has challenges for the organization: people start to leave to find something new in another organization, and less motivated people become complacent, which results in more mistakes and lower productivity. People have spare time beyond their planned tasks and invest it in office politics and other non-core areas. How the organization makes use of this phase is the challenge.

While each organization takes a different route, specific to its type (services vs. product, large vs. small, etc.), I will talk more at the individual level about how you can take charge of your career in this phase. I would recommend investing time in training, connecting through forums, participating in test competitions, writing testing blogs, etc.

Training would give you an edge over others in your subject. Since our tasks mainly focus on specific features, we tend to use only 10-20% of a tool's features while the rest are never used; this is much like Pareto, where 20% of the features solve 80% of your problems. Enrolling on bug-finder sites gives an adrenaline boost, and it can involve some extra money too: depending on which phase the application is in, you can get paid for each bug you find.

Test competitions are another way to get your skills toned and to learn about techniques and tools with little risk involved.

Friday 10 May 2013

My First International Test Competition Experience

A month or more back, when I logged on to my LinkedIn profile, I saw something happening around a test competition. Initially I thought it was a performance test competition; working in performance, I was excited to learn about different perspectives and tools of testing, and hopefully have a chance to win!! I decided to participate; I enrolled and started looking for team members, though we could only confirm the team, spread across different locations in India, two weeks before the competition.

The test competition information was on the NRG Global blogs; we started going through the blogs to learn the when/what/how, and having read all the details we still had queries and confusion. We then started our first discussion with Matt with a set of queries; this spawned further discussions about the reporting tools, AppLoader, performance queries, architecture questions, etc. with different people. I am not sure if they really loved answering all the queries we posted, but thanks to Matt, Dan, Ben, Smita, and the other associates who patiently responded to all of them.

Through our initial discussions we became aware of the competition process, and we knew we had 3 hours to get the requirements, test, file bugs, create reports, suggest recommendations, etc.!! Hmm, too much to achieve in 3 hours, and with a team of two people we had a tough task! In hindsight, a team of at least 3 people would have helped.

The competition day and time arrived. Matt posted a blog just before the competition; we went through it, asked a few specific requirement questions, finalized our test scope and strategy, and started finding bugs!! One of the most confusing parts was the different communication media like Skype, Hangouts, Twitter, YouTube, etc. where people were interacting with Matt; most questions were repetitive, and people were even discussing bugs although the channel was only meant for requirement discussion!!

Out of the four websites, we decided on QuickEasySurvey.com and CorkBoard.me for functional testing, and now I think we made a good decision. We focused on bugs and found a good number of them, but somehow we logged them as one-liners in TeamPulse. I think we did not do a good job of reporting (at least on QES; my teammate did a better job on CorkBoard.me). The functional testing report on quality and recommendations was put together. We were running late, compiled the report, and pushed it out as fast as possible!! Hmm, still late by a few minutes; not sure how many points we lost!!

The next day was planned for performance testing; we had finalized the website and time slot, and we had a lengthy discussion with Dan about the application architecture and the areas we wanted to focus on. After the discussion we realized that JMeter would not give the right performance metrics, as there were a lot of client-side scripts and JMeter does not behave like a browser, so it had limitations there! Having weighed the pros and cons of the performance tools, we decided to go ahead with AppLoader!!

There was a lengthy process of setting up AppLoader, generating scripts, and setting up Azure. Though everything was supposed to be free, as usual not everything is, and we ended up paying $1 to Azure for international card usage :-)

After all the hurdles, and with support from Ben, we could run a 25 CC test, gather the application's performance data, and give recommendations for performance. This all took a week after the competition, and we were very tired from all the night-outs due to the different time zones :-) Many times we thought of quitting, but we persisted as support from Ben was always there. All thanks to my team member Anindita for taking this to closure.

After the competition, with all reports submitted, we awaited the results!! Hoping to win and learn. The results arrived and we were not the best, but we still got good feedback on certain areas where we did better than others.


http://www.nrgglobal.com/general/test-competition-results


We would like to congratulate all the teams who have won!

There are a lot of takeaways from the competition, and we hope to utilize these learnings in the future!




Thursday 9 May 2013

My First Blogging Experience

When did you first blog? During college, after a few years in a job, or never?! I remember I wanted to start blogging a few years back, though I am not sure which year.

But finally it happened today, and I am glad that I have done it for good. What does it take to start a new blog? A 6-minute video, and that's all :-) I am not sure why I waited so long for this!!

My blog address, testerkpi.blogpost.in, got its name from my affiliation with an analytics and business intelligence application, and also from a term coined during a team meeting we had this week! Well done, the video helped to create the blog and a post... next, how to add more posts? Got it, and that is how this post happened. I added details about me and my location, and also got confused by all the integrated applications that Google provides (Google+, Blogger, Gmail) and the data shared across them. Many tools still need to be added to the blog; I need to watch a few more videos and enhance it for a better user experience.

Tester KPI

Confused over how to rate yourself as a tester? What are the key indicators for a functional or non-functional tester?

Good coding skills, documenting skills, interfacing skills, attention to detail, a greater number of bugs found, analyzing bugs and proposing resolutions, generating good scenarios, and finding bugs earlier in the quality cycle through reviews are some of the things one should look at before defining the KPIs.

Or are there a bunch of other things, like the number of test cases, the effective number of test cases, the number of requirements worked against, and effective reporting? Quantifiable metrics may or may not be the best approach to defining the KPIs.

There are many qualifiers applied to different tasks, and the question is how to evaluate them. Take the number of test cases versus the effective number of test cases: how do you say which metric is a good KPI for the tester, and how do you evaluate it? One approach would be to review them, but then who reviews all the test cases? Another way would be reverse engineering through bug leakage, but then again that takes time and you are probably too late.

Though I am not against quantitative metrics for KPIs, I think the same metrics can rate your good tester as bad and vice versa. So just exercise caution before you proceed with metrics.