Tuesday, 4 June 2013

From Continuous Delivery to Continuous Quality Delivery



Everybody is talking about Continuous Delivery (CD) these days. At most of the conferences I attended, CD was a topic discussed amongst developers and DevOps. I worked on a couple of projects recently and made some interesting observations. I think CD is not just about DevOps being involved, and QA is not just about testing. It is, in fact, a confluence of both those roles and many more.

Continuous Quality Delivery means making the application available to the end user in a stable state as often as possible, while delivering business value at the same time.

As a QA I can contribute to the CD process in many ways, thus transforming it into Continuous Quality Delivery. The objective is to avoid defects in the first place, or at least to find and fix them as early in the development cycle as possible.

When I joined the team, I noticed that we were trying to achieve CD but the release process itself was too long: it took about two days to get something out into production.
I took the initiative of getting people together to refine the release process. After a few facilitated sessions over a couple of weeks of continuous improvement, we got to a place where we could actually release a working piece of software in a couple of hours.

In a typical delivery team it is not always possible to make every check-in committed by a developer production ready, so it is a good idea to use feature toggles. (I will leave it to a separate discussion as to why feature toggles are better than release branches.) Feature toggles are generally used for different reasons: to hide incomplete features, or to observe user behaviour by toggling a completed feature on or off.
This is a powerful technique for controlling the state of the application, but it needs continuous maintenance: once a feature is complete, the toggle has to be removed along with its code and associated tests. As a QA I monitored from the beginning which stories would need toggles and which toggles needed to be removed, thus keeping the quality of the application and the code base high.
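To make the idea concrete, here is a rough sketch of what a simple toggle could look like; the properties-file approach, the toggle name and the call site below are just one possible, hypothetical implementation rather than what we actually used.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class FeatureToggles {

        private final Properties toggles = new Properties();

        public FeatureToggles(String togglesFile) throws IOException {
            try (FileInputStream in = new FileInputStream(togglesFile)) {
                toggles.load(in);                                // e.g. new.checkout.flow=false
            }
        }

        public boolean isOn(String featureName) {
            return Boolean.parseBoolean(toggles.getProperty(featureName, "false"));
        }
    }

    // Call site: the incomplete feature stays hidden until the toggle is flipped,
    // and once the feature is done the toggle, this branch and its tests are removed.
    //
    //     if (featureToggles.isOn("new.checkout.flow")) {
    //         showNewCheckout();
    //     } else {
    //         showOldCheckout();
    //     }
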
I also had conversations with the BAs to see whether we could write and line up stories in such a way that we did not need toggles in the first place, keeping toggle maintenance to a minimum. We could do this by playing stories so that the backend functionality was developed first, before moving on to the front-end visible stories.

Continuous Quality Delivery or otherwise, it is good practice for a QA to pair with the BAs in story reviews to bridge gaps in requirements, identify test data and identify additional test cases.
Similarly, it is beneficial to interact constantly with the developers, keeping an eye on the level of automated test coverage and finding defects while they are developing. I found it immensely useful to attend tech huddles, dev analysis and code reviews to understand the code and implementation better. This enabled me to think of testing scenarios beyond just black-box testing. I also found it necessary to write as many automated tests as we could to minimise manual testing and regression time.

Build time plays a significant role in how quickly a developer can commit code and how soon it can go through the different environments to production. There was a time when the team started feeling the pain of the increased build time. I started tracking the build time and making it visible to the team. I got together with the operations people to add more build agents so that we could run the tests in parallel. This surfaced a new challenge when we found out that some of the tests were dependent on each other, so I paired with the developers to make the tests autonomous. This helped us reduce the build time.
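To give a flavour of what making the tests autonomous meant, here is a minimal sketch (all names invented) where each test creates and cleans up its own data instead of relying on records left behind by another test, so the tests can be spread across build agents and run in any order:

    import static org.junit.Assert.assertEquals;

    import java.util.UUID;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class CustomerAccountTest {

        private String accountId;

        @Before
        public void createOwnTestData() {
            // Each test creates a unique account rather than reusing one left behind by another test.
            accountId = "test-account-" + UUID.randomUUID();
            TestAccounts.create(accountId, 100);
        }

        @After
        public void cleanUpOwnTestData() {
            TestAccounts.delete(accountId);
        }

        @Test
        public void depositIncreasesTheBalance() {
            TestAccounts.deposit(accountId, 50);
            assertEquals(150, TestAccounts.balanceOf(accountId));
        }

        // Simple in-memory stand-in for whatever the real system under test is.
        static class TestAccounts {
            private static final java.util.Map<String, Integer> balances =
                    new java.util.concurrent.ConcurrentHashMap<>();
            static void create(String id, int balance) { balances.put(id, balance); }
            static void delete(String id) { balances.remove(id); }
            static void deposit(String id, int amount) { balances.merge(id, amount, Integer::sum); }
            static int balanceOf(String id) { return balances.get(id); }
        }
    }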

The QAs were constantly trying to make sure that the manual regression test suite was kept to a minimum to reduce manual testing time. While we were adding newer tests to the suite, older tests were being discarded at the same time. But scenarios covering core business functionality would always remain.

The QAs were using a Blue-Green deployment strategy to release the application into production. This ensured that we had almost no downtime, and it also gave us the confidence of testing on a "production" environment before it was switched to "live".

The QAs were also holding retrospectives to make sure that we were always improving in our role of contributing to Continuous Quality Delivery.

The project was a grand success, and the road map in this area was to go from weekly releases to daily releases.





Wednesday, 29 May 2013

Performance does matter

Assumptions -- This is based on my experience with a web-based Java application. There will be a separate post around stress and load testing.

When you start off on a project, it is very easy to forget about performance testing and its significance.
I tried my best to sell the idea of performance testing to the business but failed initially. The lesson I learned was to talk in the language of the stakeholders. Performance testing is not something we do at the end. By monitoring application performance at regular intervals, it is easier to analyse and fix the code. It also teaches us to avoid the same mistakes in the future. Thus we can avoid big-bang performance testing and fixing right before the release. The stakeholders need to be informed about the advantages of doing continuous performance testing.

There are two types of performance testing, namely "front end" and "back end".
Front-end performance deals with how different browsers respond to the scripts. It analyses the page load / wait times of different components of the page. This translates into user behaviour: statistics suggest that on average a user is happy to wait about four seconds for a page to load, and anything over ten seconds and you start losing potential customers from the site.
Back-end performance deals with how different services and servers cope with the load. Database performance is also covered under this area.

We can run performance tests at different levels, namely application, service and unit level. I treat these tests similarly to how I would treat my functional tests, and as with automated functional tests, the higher the level of the test, the more difficult it gets to analyse the problem.

Different tools allow you to run these tests, and the tools may also vary based on the programming language you are using.
For example, JProfiler is used to monitor JVMs and the application at a lower level (classes and methods), or dotTrace in the .Net world.
Above that, we can test the application at a service / API level (perhaps using tools like JMeter to generate the load).
At the top level we hit the application through the web interface, or even headless for starters. I have used Browsermob (Neustar), or VSTS if you are a .Net addict.
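For illustration only, and not a substitute for a proper tool like JMeter or Browsermob, the sketch below shows the essence of a service-level test: fire concurrent requests at an endpoint and record response times and throughput. The endpoint, thread count and request count are made-up values.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.*;
    import java.util.concurrent.*;

    public class SimpleLoadTest {
        // Hypothetical endpoint and load profile - adjust for your own service.
        private static final String ENDPOINT = "http://localhost:8080/api/quotes";
        private static final int THREADS = 20;
        private static final int REQUESTS_PER_THREAD = 50;

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            List<Future<List<Long>>> results = new ArrayList<>();

            long start = System.nanoTime();
            for (int t = 0; t < THREADS; t++) {
                results.add(pool.submit(() -> {
                    List<Long> timings = new ArrayList<>();
                    for (int i = 0; i < REQUESTS_PER_THREAD; i++) {
                        long begin = System.nanoTime();
                        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
                        conn.getResponseCode();                               // fire the request and wait for the response
                        conn.disconnect();
                        timings.add((System.nanoTime() - begin) / 1_000_000); // response time in ms
                    }
                    return timings;
                }));
            }

            List<Long> all = new ArrayList<>();
            for (Future<List<Long>> f : results) all.addAll(f.get());
            pool.shutdown();

            long elapsedSec = Math.max(1, (System.nanoTime() - start) / 1_000_000_000);
            Collections.sort(all);
            System.out.println("Requests: " + all.size());
            System.out.println("Throughput (req/s): " + all.size() / elapsedSec);
            System.out.println("Median response time (ms): " + all.get(all.size() / 2));
            System.out.println("95th percentile (ms): " + all.get((int) (all.size() * 0.95)));
        }
    }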

Running the performance tests from your local machine limits you to the performance of that machine itself, so it is recommended to run them in the cloud. You can fire off the test scripts to run from different build agents or virtual machines.

We need to make sure the performance test environments are configured and set up to be production-like. It is a common belief that having a production-like environment is very expensive; the alternative solution is to fire up virtual machines. This worked for me, as in some cases the production environments were also hosted on virtual machines.
It is highly beneficial to have the performance test environments on independent infrastructure. If they are in a shared environment, then we need to analyse how much and how often the other systems are going to impact the performance of the application.
Because our application was hosted on virtual machines, we had the flexibility to scale the machines and services up to observe performance improvements.

We had to make sure that the performance test scripts we wrote mimicked actual user behaviour. For application-level tests we compared statistics between the beta and the legacy systems, which helped us configure our tests accordingly. We ran these tests from Browsermob (Neustar), but monitoring was done through internally built tools using Graphite and Gdash.

What and how you monitor plays a significant role.
From the front-end perspective, the things you typically need to monitor are response time (how long it takes to serve a request) and throughput (how many requests can be served per second). If you have a legacy system, then you can compare these statistics generated from production with the stats on the performance environment.
Ideally the monitoring tools should sit as close to the application servers as possible. If the monitoring tools are too far away in space and time, then you would not have accurate real-time statistics. If the tools are in the same environment as the system under test, then that too may affect the performance of the environment.
Common issues that one faces are memory leaks and CPUs reaching their limits. So from the back-end perspective, the things you would find helpful to monitor are CPU usage and memory for the various machines and services.
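As a small, hedged example of back-end monitoring, the JVM itself exposes CPU and memory figures through its management beans. A lightweight sampler along these lines (the five-second interval is an arbitrary choice, and the com.sun.management bean is HotSpot-specific) can run alongside the application while the tests execute; tools like Graphite then give you the history over time.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import com.sun.management.OperatingSystemMXBean;

    public class ResourceSampler {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os =
                    (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

            while (true) {
                double cpu = os.getSystemCpuLoad() * 100;                        // whole-machine CPU %
                long heapUsedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
                System.out.printf("cpu=%.1f%% heapUsed=%dMB%n", cpu, heapUsedMb);
                Thread.sleep(5000);                                              // sample every 5 seconds
            }
        }
    }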

Ideally you should not take network lag into account as part of the metrics, as there is only so much you can do about it, and it would also hide the actual problems in the system under test. Hence, run the tests from cloud servers located closest to your environments.

Once you have run the performance tests and captured the results, it is time to tackle the problems. Finding and fixing one bottleneck may result in another bottleneck surfacing.

While performance testing, change one thing at a time. For example, try not to change the script and the application version at the same time; otherwise, if you see any discrepancies, you do not know whether it was the application version or the script that caused them. Hence it is a good idea to create a baseline for every change you make and take baby steps towards the end result.

A good practice is to run automated performance test scripts as part of your Continuous Integration pipeline. Taking this approach, we set a performance threshold beyond which the build would fail. This gives us fast feedback, just like any other functional tests.
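A minimal sketch of what such a threshold check might look like as a JUnit test in the pipeline; the endpoint, sample size and the 800 ms budget are invented values that each team would tune for itself.

    import static org.junit.Assert.assertTrue;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.junit.Test;

    public class ResponseTimeThresholdTest {

        // Hypothetical values - tune these to your own application and agreed budget.
        private static final String ENDPOINT = "http://perf-env.example.com/api/search";
        private static final int SAMPLES = 100;
        private static final long P95_BUDGET_MS = 800;

        @Test
        public void ninetyFifthPercentileStaysWithinBudget() throws Exception {
            List<Long> timings = new ArrayList<>();
            for (int i = 0; i < SAMPLES; i++) {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
                conn.getResponseCode();                                  // wait for the response
                conn.disconnect();
                timings.add((System.nanoTime() - start) / 1_000_000);    // milliseconds
            }
            Collections.sort(timings);
            long p95 = timings.get((int) (SAMPLES * 0.95) - 1);
            assertTrue("95th percentile " + p95 + "ms exceeded budget of " + P95_BUDGET_MS + "ms",
                    p95 <= P95_BUDGET_MS);
        }
    }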


Some lessons learned:

1) Something which helped increase the performance of the system is caching. We should have some level of intelligent caching; too much caching will not help either, as users would start seeing stale information. Caching can be considered at three different levels: the Content Delivery Network (CDN) level, the application level, or even at the service / database levels.
While coding we should make sure that static content is cached at some level, because in our case the static content was being served from one of the services, which was eating up all its memory.
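As one small illustration of application-level caching of static content, a servlet filter can tell browsers and CDNs to hold on to static assets; the URL pattern and the one-day max-age below are purely hypothetical.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.annotation.WebFilter;
    import javax.servlet.http.HttpServletResponse;

    // Adds cache headers to static assets so browsers and CDNs stop re-requesting them.
    @WebFilter(urlPatterns = {"/static/*"})   // hypothetical path for images, CSS and scripts
    public class StaticContentCacheFilter implements Filter {

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse http = (HttpServletResponse) response;
            http.setHeader("Cache-Control", "public, max-age=86400");   // cacheable for one day
            chain.doFilter(request, response);
        }

        @Override
        public void init(FilterConfig filterConfig) { }

        @Override
        public void destroy() { }
    }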

2) Services and databases should not be doing background tasks while their primary purpose is to serve the customer. Having separate instances of these services do the background tasks decreases the risk of performance problems.

3) Sometimes database queries too can be expensive. Monitoring these queries and trying to minimise the calls to the database is an effective way of increasing system performance.
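A common illustration of minimising database calls is collapsing a query-per-item loop into a single batched query. The JDBC sketch below uses an invented products table purely to show the shape of the idea.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.StringJoiner;

    public class ProductLookup {

        // Instead of one SELECT per product id (N round trips), fetch them all in one IN query.
        public Map<Long, String> namesFor(Connection connection, List<Long> productIds) throws SQLException {
            StringJoiner placeholders = new StringJoiner(",", "(", ")");
            for (int i = 0; i < productIds.size(); i++) {
                placeholders.add("?");
            }
            String sql = "SELECT id, name FROM products WHERE id IN " + placeholders;

            Map<Long, String> names = new HashMap<>();
            try (PreparedStatement statement = connection.prepareStatement(sql)) {
                for (int i = 0; i < productIds.size(); i++) {
                    statement.setLong(i + 1, productIds.get(i));
                }
                try (ResultSet rs = statement.executeQuery()) {
                    while (rs.next()) {
                        names.put(rs.getLong("id"), rs.getString("name"));
                    }
                }
            }
            return names;
        }
    }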

4) There may be instances where some sections of the pages make extra calls to the backend systems to retrieve additional information, even though not all end users may require it. So it is better to analyse these conditions and wrap these sections such that the additional calls are made only when a user genuinely wants that information.
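One way to picture this on the server side is to expose the expensive section as its own endpoint, so the backend call happens only when the user explicitly asks for it. The servlet below is a hypothetical sketch; the URL and the review section are invented stand-ins.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The main product page renders a placeholder plus a "show reviews" link pointing here,
    // so the expensive backend call only happens for users who actually ask for the reviews.
    @WebServlet("/product/reviews")   // hypothetical URL for the on-demand section
    public class ProductReviewsServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String productId = request.getParameter("productId");
            response.setContentType("text/html");
            // The expensive backend call is deferred to this endpoint - it is never made
            // during the initial page load, only when the user clicks "show reviews".
            response.getWriter().write(fetchReviewsHtml(productId));
        }

        // Stand-in for the real call to the downstream review system.
        private String fetchReviewsHtml(String productId) {
            return "<div>Reviews for product " + productId + "</div>";
        }
    }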



Note: Browsermob has its own API through which we can write the perf test scripts. We can write two types of scripts: one which runs the test through the browser, and the other which is headless. The costs differ based on which one you run.

P.S. It is tricky to write performance tests with sessions and logins in them.


Wednesday, 12 January 2011

Diamonds are forever

I never really appreciated the value of a diamond until I worked on this project with a client who grades and inscribes diamonds.
I was pulled into this project to modify their existing reporting system around invoicing. They also wanted to change their printers to a better brand; these printers were specially designed to print only on chip-based cards.
This project was really exciting because it was my first non-web-based project. It was a man-machine-interface based application.
The grading and inscription process of a diamond is a long and tedious one with several steps involved. At each step there is an instrument (machine) which plays its role. Each of these instruments is connected to a computer which runs the client application. It was fun to interact with the instruments through this application and observe their behaviour. The client stakeholders were more than helpful and patient in making us understand the domain.
There were a different set of problems that I had to tackle in this project.
The application consisted of standalone software on each of the computers, a small web-based admin page and reports. It was challenging to write automation tests for the standalone application running on different computers.
So tests were written around the grading and inscription process using a mock-up application. At the time I walked into the project, these tests were broken. I suggested to the team that we fix those tests and run them on a separate build because of the length of time they took to run.
There were also massive amounts of reports which were being manually tested each time a release was made. These reports were generated in two steps: first, a stored procedure was called and temporary tables were generated; second, the report generator used these tables and made the necessary calculations to show the correct figures on the report. I identified that there were no tests around any of these areas. I also showed them that the existing reports had several defects in them which they were unaware of, and which could have been avoided if they had some test coverage around them. I suggested to the team that we should, at a bare minimum, write tests around the stored procedures so that we at least had a first level of safety net for the reports. I convinced the business that this would take some additional time in delivering their functionality, but that it would be good for the future of the project.
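Our code base was .NET, but sketched here with JDBC and JUnit for illustration, a first-level safety net around the stored procedures could look roughly like this; the connection details, procedure, table and expected figure are all invented.

    import static org.junit.Assert.assertEquals;

    import java.math.BigDecimal;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.junit.Test;

    public class InvoiceReportProcedureTest {

        // Hypothetical connection details, procedure and table names.
        private static final String JDBC_URL = "jdbc:sqlserver://reports-test;databaseName=invoicing";

        @Test
        public void storedProcedurePopulatesExpectedInvoiceTotal() throws Exception {
            try (Connection connection = DriverManager.getConnection(JDBC_URL, "test", "test")) {
                // Step 1 of report generation: the stored procedure fills a temporary table.
                try (CallableStatement call = connection.prepareCall("{call generate_invoice_report(?)}")) {
                    call.setInt(1, 2011);                      // report year used as the input parameter
                    call.execute();
                }
                // Assert on the figures the report generator will later read from that table.
                try (Statement statement = connection.createStatement();
                     ResultSet rs = statement.executeQuery(
                             "SELECT SUM(total) AS grand_total FROM tmp_invoice_report")) {
                    rs.next();
                    assertEquals(new BigDecimal("1250.00"), rs.getBigDecimal("grand_total"));
                }
            }
        }
    }
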
The existing continuous integration build was taking more than forty-five minutes with very few tests running on it. We found that we needed a virtual machine with a better configuration to make the build run quicker.
The web-based admin pages did not have any kind of automation tests either. I initially suggested we use WatiN as the code was in .Net, but the business came back saying that they would be happier with Selenium because of their familiarity with it. So we researched and finally wrote the web-based tests in Selenium using .Net.

The other side of the story was the printer. The client got in a new brand of printer for printing their certificate cards for the diamonds. A certificate card, which also has a chip full of information about the diamond in it, is given to the customer on purchase of a diamond to confirm its validity. So the look and feel of the card was one of the most important factors: the particulars printed on the card with the new printer had to look the same way as before. The first issue that we faced was that an image was being smudged. I tried playing around with the printer's heat settings, writing speed settings, different images etc., but none of that worked. After a couple of weeks of trials and trying to contact the printer vendor, they sent a file which would change the firmware settings of the printer, and finally the image started looking better. But that had a side effect, which meant I was unable to scan the barcode on the card any more. It was a bit frustrating, but I enjoyed the different kinds of challenges that I faced in this project.

Friday, 26 November 2010

Cross functional teams - Future shape of a team structure ?

A typical traditional team structure consisted of a Business Analyst, Quality Analyst, Developer, Database specialist and Operations / IT support. These project roles were very well defined and the people associated with them used to stick strictly to their tasks. This leads to situations where a developer very often worries less about the quality of the system because he/she thinks that it is the task of the tester. There are also tensions between developers and operations around deployments to various environments. Database specialists take pride in owning their scripts and stored procedures, so without them around, developers feel a bit crippled. Similarly, Quality Analysts are brought in at a later stage, thus missing out on all the crucial initial discussions and business value.

The future team structures look more promising, where these vertical barriers between roles are being broken down and people are willing to step on each other's toes a bit.
The Quality Analysts are being involved in business analysis along with the developers. You will notice that the different sets of questions coming up from different roles help in understanding the system and closing any gaps beforehand.
Quality Analysts are also pairing up with Business Analysts to write the acceptance criteria for stories and then involving the business stakeholders to review them. This way the team is assured that they are on the right track for that particular story.

From my observation, developers more often than not implement the acceptance criteria, but the tests do not do what they are intended to do, or sometimes there are not enough checks written. To be mutually beneficial, Quality Analysts can pair up with the developers in writing the implementations for the acceptance criteria. Thus we can ensure that the tests are doing what they are supposed to do and also have clean, refactored code.
Pairing the developers with the database experts helps the developers understand the database structures and schema. By sharing the knowledge we avoid the risk of depending on a single individual or team. I have noticed quite a few times that the IT operations team likes to take "ownership" of the environments, and it becomes a difficult task to approach them for every single deployment to every single environment. By maintaining a good relationship with the IT operations team, the developers can gain mutual trust, so that the team as a whole can deploy more often and therefore gain a faster feedback cycle.

Thus I feel this approach, which I have been following over the last few projects, has not only helped the team deliver better quality software but also deliver it at a quicker rate.

Monday, 20 September 2010

The wonderful world of Deutschland

My client was situated in a beautiful town called Heidelberg in Germany. Once again I was lucky to have met such wonderful talented people and made some great friends.

The client was basically a consumer price comparison website for household gas, electricity and telecommunications. It took off as a startup 10 years ago and has grown into a medium-sized family business and one of Germany's most popular price comparison websites.

As part of improving their software delivery, they hired us to help them implement a pilot agile process. We started off with two simultaneous mini projects which would help them increase the number of people signing up for products in their system and also enhance the user experience in general.

Apart from coaching them on new technology and better ways to write code, we also had to impart our knowledge on process improvement.
There were mixed emotions within the team in the beginning. Some of the members were very eager to learn and pick up new things, some approached it cautiously and some were reluctant. This is very common at most clients and a very natural human response to changes in their environment.
Over time, coming across different hurdles and learning lots of lessons, they eventually started to appreciate the value of these practices. We also conducted a session called "Why we do What we do" to help them reflect on all the practices we had thrown at them.

One thing I learnt was that if different consultants have different opinions on a certain situation, then instead of confusing and overwhelming the client with all the ideas, the consultants should reach a common understanding first and place that idea before the client, whilst presenting the other ideas just as suggestions.
The last thing you would ever want is internal conflicts within the consultants.

This was a client who had no QAs in their software development teams and, not surprisingly, had little knowledge about the QA process itself. I introduced the concept of automated testing, and the developers initially suggested using Ruby with Cucumber to write the tests in the BDD style. But after a while they started facing difficulties learning a new language, and it was also not a good practice to have the code base in .NET and the tests written in Ruby. So I suggested they use WatiN along with YatFram. It was good fun pairing with the developers to help them write the tests.

One of the challenges in writing automation tests was that, because it was a legacy code base, it was not in a very testable state. To write unit tests the developers had to refactor the code first, but at the same time they had to draw a line as to how much to refactor. Thus we had to rely on high-level automated browser-based user journey tests. The downside is that these take a long time to run on the build. To reduce the build time I recommended splitting the tests into two builds: one running the quick journeys and a second running the detailed tests (a sketch of the idea follows below). This gave the developers a better opportunity to check in frequently, while compromising a bit on constant feedback.
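The tests themselves were WatiN / .NET, but to illustrate the split, here is a sketch using JUnit categories; the same idea works with NUnit categories. The test names and suites are invented.

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    public class SplitSuites {

        // Marker interfaces used purely as category labels.
        public interface QuickJourney { }
        public interface DetailedTest { }

        public static class SignUpTests {
            @Test @Category(QuickJourney.class)
            public void userCanReachTheSignUpConfirmationPage() { /* thin happy-path journey */ }

            @Test @Category(DetailedTest.class)
            public void signUpFormShowsValidationMessagesForEveryField() { /* slower, exhaustive checks */ }
        }

        // The fast build runs only the quick journeys on every commit...
        @RunWith(Categories.class)
        @IncludeCategory(QuickJourney.class)
        @SuiteClasses(SignUpTests.class)
        public static class QuickJourneyBuild { }

        // ...while a second build runs the detailed tests less frequently.
        @RunWith(Categories.class)
        @IncludeCategory(DetailedTest.class)
        @SuiteClasses(SignUpTests.class)
        public static class DetailedBuild { }
    }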

As time progressed, when the tests started doing their job of finding defects whenever someone changed some code, the developers started appreciating the value of the tests and became more proactive in writing them.

I had initially suggested that a round of performance testing be done before the first deployment to production, based on the changes we had made to the code base. But it was de-prioritised until they realised, after go-live, that their servers were running out of memory. So we had to quickly diagnose and fix the problem, after which we ran a series of performance tests on the staging environment. I introduced JMeter for this purpose, which worked out well. ANTS was used to monitor the server performance.

As a non-German speaker, I did find it a bit difficult to actually test the website, though I managed it eventually.

The website was very heavily dependent on the database. Interestingly, they had 28 web servers and just one database server. Moreover, all the validations and static information were stored in the database in the form of stored procedures.

We tried setting up different environments as part of the build process. Though we did not achieve a great amount of success automating the deployment process by the end of the project, we managed to improve it over a period of time. The developers fixed the gaps and defects I came across in the process as it grew. It was a bit complex, as it involved deploying the code base, database, CSS, third-party tools and a CMS backend all separately.

During my time on the project, I got an opportunity to hire a person internally for a QA role and mentored him over several weeks. The team was very happy that he picked up the role well enough to carry on independently once we left.

We successfully deployed three times over the three-month period and delivered as per expectations. We also gave the client feedback on what they could improve in the future. Overall it was a good project, a great client, an amazing country and wonderful people.

Saturday, 29 May 2010

One of my smallest yet a challenging project using Citrix machines

Recently I worked on a project for a car insurance company. It was very exciting in terms of the domain, as I had not worked in the insurance domain before, so there was some learning on that front. It was one of the best clients I have worked with; right from the management on down, everybody was very co-operative in helping us out with all the infrastructure and with the domain / requirements themselves, along with a good sense of humour.

The core of the application consisted of calculating the insurance quotes which the advisors in the company's branches would use when talking to their customers.
The business driver came along when the management found out that the advisors in their branches were giving the most expensive deal (highest cost to the company) to the customers, thus greatly reducing the revenues of the company. They decided to bring in ThoughtWorks to come up with a pilot application with a set of business-defined formulae to bring the revenues back on track. One of the challenges we had was to deliver the application within a month's time. From the initial estimates we figured out that it would take nearly twice that to build a quality application with a clean code base. Further discussions revealed that the business was in a hurry to showcase the application to their teams at a national conference during that month. We came up with a strategy of building a mocked-up application with stubbed data and all the front-end designs ready for the conference, and then continuing to develop the application further in due course. The business bought into this idea well, and from there our journey began.

The team consisted of just seven of us, the smallest team I have ever been on. Interestingly, it was an exceptionally diverse team, with each one of us from a different country. Being the lone QA I thought I could impress my ideas on the team, but the developers had strong opinions of their own. So I set up a quality expectations meeting along with the business so that we were all on the same page and not stepping on each other's toes.

I joined the project at the initiation phase, which was right after the inception. I collaborated closely with the BA to analyse requirements and start writing the acceptance criteria for the stories so that the developers could start off once they were done setting up the infrastructure.

Liaising with the product owner from the finance department was very helpful in terms of defining the formulae. I then went off and started creating the test data for the complex array of calculations. This greatly assisted me in my manual testing as well as with the automation tests (data-driven testing). It was all a cakewalk until the product owner came back two weeks before the go-live date asking us to change some calculations that had been implemented right at the beginning. We had to go ahead and implement this change knowing that it was a huge risk and might have a ripple effect on the other calculations. But our automation tests around the calculations were robust enough to catch any regression defects.

As part of the automation testing, I also started writing BDD tests in WatiN using C#. The developers later implemented them as part of their story development. This technique proves to be mutually beneficial, because as a QA I get to define the tests along with the test data, and it gives the developers a head start in implementing the tests. For this project, keeping in mind that the business was not too specific about the quality requirements and that the application itself was small, we mostly stuck to the happy paths while defining the automation tests. There is always a compromise between developer build times and quality in terms of automation tests, so there should be a mutual understanding as to how much to automate and what to automate. I decided to run the rest of the tests manually. I also find it good practice to keep updating a manual regression test suite as the project evolves, which acts as a reference point at the time of deployments.
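Our tests were written in C# with WatiN, but as a rough illustration of the data-driven approach, a parameterised test in JUnit could look like this; the premium formula and figures below are entirely made up.

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class PremiumCalculationTest {

        // Each row is one prepared test case: driver age, no-claims years, expected premium.
        @Parameters(name = "age {0}, noClaims {1} -> {2}")
        public static Collection<Object[]> testData() {
            return Arrays.asList(new Object[][] {
                    { 25, 0, 620.00 },
                    { 25, 3, 527.00 },
                    { 40, 5, 375.00 },
            });
        }

        private final int age;
        private final int noClaimsYears;
        private final double expectedPremium;

        public PremiumCalculationTest(int age, int noClaimsYears, double expectedPremium) {
            this.age = age;
            this.noClaimsYears = noClaimsYears;
            this.expectedPremium = expectedPremium;
        }

        @Test
        public void premiumMatchesTheBusinessFormula() {
            assertEquals(expectedPremium, calculatePremium(age, noClaimsYears), 0.01);
        }

        // Stand-in for the real calculation; the real one came from the business-defined formulae.
        private double calculatePremium(int age, int noClaimsYears) {
            double base = age < 30 ? 620.00 : 500.00;
            return base * (1 - 0.05 * noClaimsYears);   // 5% discount per no-claims year
        }
    }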

Now comes the most interesting and challenging part of the whole project. The application was going to be used by the branch officials through Citrix machines (dumb terminals / thin clients). These terminals were connected to the central server on a 512 Kbps connection. The icing on the cake was that the phone lines which the branch advisors use also share the same bandwidth. Thus it was very critical for us to keep performance, load and stress testing in mind from the beginning of the project and to develop the application in such a way that it used minimum bandwidth at all times.
The client had a "Model office" which they were using to simulate a production environment. But the drawback was that they had just four user terminals, so I wanted to use a testing tool to gain full confidence in terms of load. The client was interested in an economical tool for this pilot project to test the performance of the application, so JMeter was recommended as part of the testing approach. I simulated thousands of users hitting the application server several times and doing a typical user journey, thus creating enough initial load on the test server. At the same time I manually started using the terminals in the "Model office" to test the performance of the application, observing the refresh rate and response time. I was also observing statistics on the server in terms of memory leaks, CPU utilisation and throughput. This gave us a fair idea and also a quick heads-up as to whether we were taking the right path. But we also needed to consider the difference in configuration between the test server and the production server, so the percentage difference in load had to be accounted for to get a realistic view. We started deploying the application on to production to monitor how it behaved. We gently started hitting it with JMeter and saw that we were maxing out the server CPUs. The developers started optimising the code, and JavaScript tweaks for IE6 improved the performance of the application.
Understanding how critical it was to go live, the client offered us dedicated servers. The client had an infrastructure in place with huge numbers of servers, CPUs and memory, and all they had to do was virtualise whatever was required for our application without altering their hardware, which I found to be a very cost-effective approach.
We took advantage of this and beefed up to 4 servers with 4 CPUs each, which gave us tremendous performance results, but we found that the CPUs were being under-utilised. So we scaled down to 4 servers with 2 CPUs each, which still gave us the required performance, and CPU utilisation was optimal. We got the statistics from the client and found that the maximum throughput they would hit is about 12 requests/second, and up to about 70 requests/second per server our application gave an observed response time of about 2 seconds, which was agreeable to the customer.
The load balancers were sending requests to a single server based on the IP address the request came from. So, as JMeter was running from a single machine, all the requests were being sent to a single server instead of being load balanced. JMeter itself was also not able to handle huge amounts of load and was breaking at around 200 threads. The solution we came up with was running JMeter over several computers to achieve the required load on the servers. We later found out that JMeter needs a "super computer" to run hundreds of threads.
Not having too many Listeners writing out test results helped a bit, but it is even better to ask JMeter to write the results directly to a file. Even though I had the root URL configured in the HTTP Request settings, after recording a script I had to go back and change all the URLs to accommodate dynamic multi-user record generation: we had the user ids in the URL, so the random user ids generated by JMeter had to be substituted into the URL as a parameter. I used the Gaussian random timer and ramp-up periods to simulate conditions closer to reality.
I also realised the importance of having a dedicated performance environment, because sharing environments caused our daily work to slow down: we were not able to deploy builds on time, the application itself got badly hit under heavy load, and so on.

The servers which rendered the application on to the terminals were running the IE6 browser, so we also had to overcome some design challenges. Our UX expert suggested and came up with loads of changes to the designs, which had initially been developed by a third party.

The final build was deployed on time to production and the client was very pleased by the application.

Thursday, 1 April 2010

My experiences on an Open Source Distributed Agile project

This project, called RapidFTR, was interesting because we all worked towards a good cause: supporting children who get separated from their families during disasters. People at ThoughtWorks who were in between projects volunteered to help build the application as part of the company's Corporate Social Responsibility.

It was also exciting in terms of me being the lone QA for the team. This was a truly open source distributed agile project, and we had to adapt to a lot of process changes. Our aim was to get the source code into the cloud so that anybody and everybody could contribute to it.
This was a Ruby-based project and we used all open source tools: GitHub for the source code repository and defect tracking, TeamCity for CI, and CouchDB as the database.

A huge amount of process tailoring was required for the project. This is because, though the development initially happened only in London, it gradually became distributed across the East and West coasts of the US. Thus we could not do pair programming any more.

Stand-ups, instead of happening every day, used to happen biweekly late in the evening GMT, keeping the time zones in mind.

We did not get the chance to do story huddles either, again due to the differences in time and space.
The team came up with the idea that the code be peer reviewed once it was written, to overcome the lack of pair programming.

Similarly, I suggested that we could also do some peer reviewing of the acceptance tests written in Mingle before the developers started playing a story. This enabled knowledge sharing amongst the team and also provided a second pair of eyes on the tests.

As a QA I used Cucumber and Webrat. It is a great combination to use, and works similarly to FitNesse and Twist if people have prior experience with those tools. One of the advantages is that it has some pre-implemented steps which can be used while writing the BDD tests. The tests run quite fast too.

The stakeholder, who came from a developer background, started getting too involved in the development / implementation process of the project. This led to a great amount of discussion. I stepped in and suggested that if he could just explain what he wanted in plain, simple English and let the developers worry about how it needed to be done, life would be easier for everybody. The stakeholder took this in a very positive way, and the situation indeed changed for the better.

Even two months into the project I was testing the application on my localhost, which was quite a pain. The developers kept forking off the trunk to work on their feature stories and did not merge often. As a QA, I found it very difficult to keep track of the current status of a story the developers were working on, as it was a pain to keep shifting between forks. I suggested they merge as often as possible so that I could test the application on trunk, which might also help catch integration issues.

In an open source distributed agile project, one of the challenges for the QA is to review the code-level tests (acceptance tests, RSpec tests etc.) along with the automation tests (scenario level) to make sure that the tests are actually testing what they are supposed to.

There were several people involved, working solo in San Francisco, New York and London.

We kept communication to a maximum by opening up a Google Group and responding to emails through that medium. Almost all of us were on Skype and Gtalk to catch up for a quick chat, and we also used Google Wave for discussions. As explained earlier, the core team used to meet up biweekly on a Skype conference call to catch up on updates.

We made the mistake of not having the stakeholder on site from the beginning of the project. The initial 2-3 weeks of the project went into Skype calls with the stakeholder. But only when he came down to London and we all started talking across the table did we realise that we had been miles apart in understanding each other until then. After some heavy discussions and gaining a better idea, we had to go back and change some basic logic and architecture of the code. This cost us a bit of time.

I was trying to put emphasis on the performance of the application from the beginning, as that was one of the important lessons learnt from previous projects. But the developers were not concerned about it and, as ever, wanted to leave the performance aspect for later.